
Search Results (749)

Search Parameters:
Keywords = autonomous vehicles and robots

25 pages, 12600 KB  
Article
Underwater Object Recovery Using a Hybrid-Controlled ROV with Deep Learning-Based Perception
by Inés Pérez-Edo, Salvador López-Barajas, Raúl Marín-Prades and Pedro J. Sanz
J. Mar. Sci. Eng. 2026, 14(2), 198; https://doi.org/10.3390/jmse14020198 - 18 Jan 2026
Viewed by 171
Abstract
The deployment of large remotely operated vehicles (ROVs) or autonomous underwater vehicles (AUVs) typically requires support vessels, crane systems, and specialized personnel, resulting in increased logistical complexity and operational costs. In this context, lightweight and modular underwater robots have emerged as a cost-effective alternative, capable of reaching significant depths and performing tasks traditionally associated with larger platforms. This article presents a system architecture for recovering a known object using a hybrid-controlled ROV, integrating autonomous perception, high-level interaction, and low-level control. The proposed architecture includes a perception module that estimates the object pose using a Perspective-n-Point (PnP) algorithm, combining object segmentation from a YOLOv11-seg network with 2D keypoints obtained from a YOLOv11-pose model. In addition, a Natural Language ROS Agent is incorporated to enable high-level command interaction between the operator and the robot. These modules interact with low-level controllers that regulate the vehicle's degrees of freedom and with autonomous behaviors such as target approach and grasping. The proposed system is evaluated through simulation and experimental tank trials, including object recovery experiments conducted in a 12 × 8 × 5 m test tank at CIRTESU, as well as perception validation in simulated, tank, and harbor scenarios. The results demonstrate successful recovery of a black box using a BlueROV2 platform, showing that architectures of this type can effectively support operators in underwater intervention tasks, reducing operational risk, deployment complexity, and mission costs.
(This article belongs to the Section Ocean Engineering)
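The PnP stage above recovers the object's full 6-DoF pose from matched 2D keypoints and known 3D model points; production solvers (e.g. OpenCV's solvePnP) handle the general case iteratively. As a much-simplified intuition for the underlying pinhole model, depth alone can be recovered from the apparent pixel size of an object of known width. The focal length and dimensions below are illustrative, not values from the paper:

```python
def pinhole_distance(focal_px: float, real_width_m: float, pixel_width: float) -> float:
    """Depth to an object of known width under a pinhole camera model.

    From the projection relation w_px = f * W / Z, solve for depth Z.
    This is a one-dimensional stand-in for full PnP, which solves for
    rotation and translation jointly from >= 4 correspondences.
    """
    return focal_px * real_width_m / pixel_width

# Hypothetical: a 0.5 m wide black box imaged 100 px wide by a camera
# with an 800 px focal length lies 4 m away.
distance_m = pinhole_distance(800.0, 0.5, 100.0)
```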

5 pages, 1197 KB  
Proceeding Paper
Experimental Assessment of Autonomous Fleet Operations for Precision Viticulture Under Real Vineyard Conditions
by Gavriela Asiminari, Vasileios Moysiadis, Dimitrios Kateris, Aristotelis C. Tagarakis, Athanasios Balafoutis and Dionysis Bochtis
Proceedings 2026, 134(1), 47; https://doi.org/10.3390/proceedings2026134047 - 14 Jan 2026
Viewed by 77
Abstract
The increase in global population and climatic instability places unprecedented demands on agricultural productivity. Autonomous robotic systems, specifically unmanned aerial vehicles (UAVs) and unmanned ground vehicles (UGVs), provide potential solutions by enhancing precision viticulture operations. This work presents the experimental evaluation of a heterogeneous robotic fleet composed of UGVs and UAVs, operating autonomously under real-world vineyard conditions. Over the course of a full growing season, the fleet demonstrated effective autonomous navigation, environment sensing, and data acquisition. More than 4 UGV missions and 10 UAV flights were successfully completed, achieving a 95% data acquisition rate and a mapping resolution of 2.5 cm/pixel. Vegetation indices and thermal imagery enabled accurate detection of water stress and crop vigor. These capabilities enabled high-resolution mapping and agricultural task execution, contributing significantly to operational efficiency and sustainability in viticulture.

24 pages, 39327 KB  
Article
Forest Surveying with Robotics and AI: SLAM-Based Mapping, Terrain-Aware Navigation, and Tree Parameter Estimation
by Lorenzo Scalera, Eleonora Maset, Diego Tiozzo Fasiolo, Khalid Bourr, Simone Cottiga, Andrea De Lorenzo, Giovanni Carabin, Giorgio Alberti, Alessandro Gasparetto, Fabrizio Mazzetto and Stefano Seriani
Machines 2026, 14(1), 99; https://doi.org/10.3390/machines14010099 - 14 Jan 2026
Viewed by 136
Abstract
Forest surveying and inspection face significant challenges due to unstructured environments, variable terrain conditions, and the high costs of manual data collection. Although mobile robotics and artificial intelligence offer promising solutions, reliable autonomous navigation in forests, terrain-aware path planning, and tree parameter estimation remain open challenges. In this paper, we present the results of the AI4FOREST project, which addresses these issues through three main contributions. First, we develop an autonomous mobile robot, integrating SLAM-based navigation, 3D point cloud reconstruction, and a vision-based deep learning architecture to enable tree detection and diameter estimation. This system demonstrates the feasibility of generating a digital twin of the forest while operating autonomously. Second, to overcome the limitations of classical navigation approaches in heterogeneous natural terrains, we introduce a machine learning-based surrogate model of wheel–soil interaction, trained on a large synthetic dataset derived from classical terramechanics. Compared to purely geometric planners, the proposed model enables realistic dynamics simulation and improves navigation robustness by accounting for terrain–vehicle interactions. Finally, we investigate the impact of point cloud density on the accuracy of forest parameter estimation, identifying the minimum sampling requirements needed to extract tree diameters and heights. This analysis provides support for balancing sensor performance, robot speed, and operational costs. Overall, the AI4FOREST project advances the state of the art in autonomous forest monitoring by jointly addressing SLAM-based mapping, terrain-aware navigation, and tree parameter estimation.

28 pages, 9411 KB  
Article
A Real-Time Mobile Robotic System for Crack Detection in Construction Using Two-Stage Deep Learning
by Emmanuella Ogun, Yong Ann Voeurn and Doyun Lee
Sensors 2026, 26(2), 530; https://doi.org/10.3390/s26020530 - 13 Jan 2026
Viewed by 181
Abstract
The deterioration of civil infrastructure poses a significant threat to public safety, yet conventional manual inspections remain subjective, labor-intensive, and constrained by accessibility. To address these challenges, this paper presents a real-time robotic inspection system that integrates deep learning perception and autonomous navigation. The proposed framework employs a two-stage neural network: a U-Net for initial segmentation followed by a Pix2Pix conditional generative adversarial network (GAN) that utilizes adversarial residual learning to refine boundary accuracy and suppress false positives. When deployed on an Unmanned Ground Vehicle (UGV) equipped with an RGB-D camera and LiDAR, this framework enables simultaneous automated crack detection and collision-free autonomous navigation. Evaluated on the CrackSeg9k dataset, the two-stage model achieved a mean Intersection over Union (mIoU) of 73.9 ± 0.6% and an F1-score of 76.4 ± 0.3%. Beyond benchmark testing, the robotic system was further validated through simulation, laboratory experiments, and real-world campus hallway tests, successfully detecting micro-cracks as narrow as 0.3 mm. Collectively, these results demonstrate the system’s potential for robust, autonomous, and field-deployable infrastructure inspection.
(This article belongs to the Special Issue Sensing and Control Technology of Intelligent Robots)
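The mIoU figure reported above averages per-sample Intersection over Union between predicted and ground-truth crack masks. A minimal sketch, treating masks as sets of pixel indices for clarity (real evaluation pipelines operate on label arrays, but the arithmetic is the same):

```python
def iou(pred: set, truth: set) -> float:
    """Intersection over Union of two pixel sets.

    By convention, two empty masks (no crack predicted, none present)
    count as a perfect match.
    """
    if not pred and not truth:
        return 1.0
    return len(pred & truth) / len(pred | truth)

def mean_iou(pairs) -> float:
    """Average IoU over (prediction, ground-truth) mask pairs."""
    return sum(iou(p, t) for p, t in pairs) / len(pairs)

# Toy masks: 2 shared pixels out of 4 total -> IoU = 0.5.
score = iou({1, 2, 3}, {2, 3, 4})
```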

20 pages, 2119 KB  
Article
Intelligent Logistics Sorting Technology Based on PaddleOCR and SMITE Parameter Tuning
by Zhaokun Yang, Yue Li, Lizhi Sun, Yufeng Qiu, Licun Fang, Zibin Hu and Shouna Guo
Appl. Sci. 2026, 16(2), 767; https://doi.org/10.3390/app16020767 - 12 Jan 2026
Viewed by 146
Abstract
To address the current reliance on manual labor in traditional logistics sorting operations, which leads to low sorting efficiency and high operational costs, this study presents the design of an unmanned logistics vehicle based on the Robot Operating System (ROS). To overcome bounding-box loss issues commonly encountered by mainstream video-stream image segmentation algorithms under complex conditions, the novel SMITE video image segmentation algorithm is employed to accurately extract key regions of mail items while eliminating interference. Extracted logistics information is mapped to corresponding grid points within a map constructed using Simultaneous Localization and Mapping (SLAM). The system performs global path planning with the A* heuristic graph search algorithm to determine the optimal route, autonomously navigates to the target location, and completes the sorting task via a robotic arm, while local path planning is managed using the Dijkstra algorithm. Experimental results demonstrate that the SMITE video image segmentation algorithm maintains stable and accurate segmentation under complex conditions, including object appearance variations, illumination changes, and viewpoint shifts. The PaddleOCR text recognition algorithm achieves an average recognition accuracy exceeding 98.5%, significantly outperforming traditional methods. Through the analysis of existing technologies and the design of a novel parcel-grasping control system, the feasibility of the proposed system is validated in real-world environments.
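The A* global planner named above is a standard heuristic graph search. A minimal sketch on a 4-connected occupancy grid with a Manhattan-distance heuristic (the paper's map representation, costs, and tie-breaking are not specified, so this is illustrative only):

```python
import heapq

def astar(grid, start, goal):
    """A* on a 4-connected occupancy grid (0 = free, 1 = blocked).

    Manhattan distance is admissible on a unit-cost 4-connected grid,
    so the returned path is cost-optimal. Returns None if unreachable.
    """
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    open_heap = [(h(start), 0, start)]   # (f = g + h, g, node)
    came_from, g = {}, {start: 0}
    while open_heap:
        _, cost, cur = heapq.heappop(open_heap)
        if cur == goal:
            path = [cur]
            while cur in came_from:      # walk parents back to start
                cur = came_from[cur]
                path.append(cur)
            return path[::-1]
        if cost > g[cur]:
            continue                     # stale heap entry, skip
        r, c = cur
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                ng = g[cur] + 1
                if ng < g.get((nr, nc), float("inf")):
                    g[(nr, nc)] = ng
                    came_from[(nr, nc)] = cur
                    heapq.heappush(open_heap, (ng + h((nr, nc)), ng, (nr, nc)))
    return None
```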

18 pages, 3491 KB  
Article
Stationary State Recognition of a Mobile Platform Based on 6DoF MEMS Inertial Measurement Unit
by Marcin Bogucki, Waldemar Samociuk, Paweł Stączek, Mirosław Rucki, Arturas Kilikevicius and Radosław Cechowicz
Appl. Sci. 2026, 16(2), 729; https://doi.org/10.3390/app16020729 - 10 Jan 2026
Viewed by 159
Abstract
The article presents an analytic method for real-time detection of the stationary state of a vehicle based on information retrieved from a 6-DoF IMU sensor. Reliable detection of stillness is essential for resetting the inertial sensor's output bias, a procedure known as the Zero Velocity Update method. The signal from a strapdown inertial sensor differs depending on whether the vehicle is stationary or moving, so the goal was a computational method that automatically discriminates between the two states with minimal impact on the vehicle's embedded controller. An algorithmic step-by-step method for building, optimizing, and implementing a diagnostic system that detects the vehicle's stationary state was developed. The proposed method adopts the Mahalanobis distance, a quantity widely used in industrial quality assurance systems. It transforms (fuses) information from multiple diagnostic variables (including linear accelerations and angular velocities) into one scalar variable expressing the degree of deviation of the robot's current state from the stationary state. The method was then implemented and tested in the dead reckoning navigation system of an autonomous wheeled mobile robot, where it correctly classified nearly 93% of all stationary states and misclassified fewer than 0.3%.
(This article belongs to the Special Issue Recent Advances and Future Challenges in Manufacturing Metrology)
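The core of the method is a Mahalanobis distance between the current 6-DoF IMU sample and the statistics of the calibrated at-rest state. A minimal sketch, assuming a diagonal covariance for brevity and an illustrative threshold (the paper fuses the variables with the full covariance and a tuned decision boundary):

```python
import math

def mahalanobis_diag(x, mean, var):
    """Mahalanobis distance under a diagonal covariance:
    sqrt(sum((x_i - m_i)^2 / v_i)). With a full covariance this would
    involve the inverse covariance matrix instead."""
    return math.sqrt(sum((xi - mi) ** 2 / vi
                         for xi, mi, vi in zip(x, mean, var)))

def is_stationary(sample, rest_mean, rest_var, threshold=3.0):
    """Flag a 6-DoF IMU sample (3 accelerations + 3 angular rates) as
    stationary when its distance from the at-rest statistics is small.
    The threshold of 3.0 is an illustrative assumption."""
    return mahalanobis_diag(sample, rest_mean, rest_var) < threshold

# Hypothetical at-rest calibration: zero mean, unit variance per axis.
rest_mean, rest_var = [0.0] * 6, [1.0] * 6
```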

29 pages, 4853 KB  
Article
ROS 2-Based Architecture for Autonomous Driving Systems: Design and Implementation
by Andrea Bonci, Federico Brunella, Matteo Colletta, Alessandro Di Biase, Aldo Franco Dragoni and Angjelo Libofsha
Sensors 2026, 26(2), 463; https://doi.org/10.3390/s26020463 - 10 Jan 2026
Viewed by 360
Abstract
Interest in the adoption of autonomous vehicles (AVs) continues to grow, making it essential to design new software architectures that meet stringent real-time, safety, and scalability requirements while integrating heterogeneous hardware and software solutions from different vendors and developers. This paper presents a lightweight, modular, and scalable architecture grounded in Service-Oriented Architecture (SOA) principles and implemented in ROS 2 (Robot Operating System 2). The proposed design leverages ROS 2's Data Distribution Service (DDS)-based Quality-of-Service model to provide reliable communication, structured lifecycle management, and fault containment across distributed compute nodes. The architecture is organized into Perception, Planning, and Control layers with decoupled sensor access paths to satisfy heterogeneous frequency and hardware constraints. The decision-making core follows an event-driven policy that prioritizes fresh updates without enforcing global synchronization, applying zero-order hold where inputs are not refreshed. The architecture was validated on a 1:10-scale autonomous vehicle operating on a city-like track. The test environment covered canonical urban scenarios (lane-keeping, obstacle avoidance, traffic-sign recognition, intersections, overtaking, parking, and pedestrian interaction), with absolute positioning provided by an indoor GPS (Global Positioning System) localization setup. This work shows that the end-to-end Perception–Planning pipeline consistently met worst-case deadlines, yielding deterministic behaviour even under stress. The proposed architecture can be deemed compliant with real-time application standards for our use case on the 1:10 test vehicle, providing a robust foundation for deployment and further refinement.
(This article belongs to the Special Issue Sensors and Sensor Fusion for Decision Making for Autonomous Driving)
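The event-driven policy with zero-order hold amounts to caching the most recent message per input and reusing it when no fresher sample has arrived, so no pipeline stage blocks on global synchronization. A minimal, ROS-free sketch of that behavior (topic names and timestamps are illustrative):

```python
class ZeroOrderHold:
    """Per-topic latest-value cache.

    update() keeps only the freshest message per topic (stale timestamps
    are dropped); read() returns the held value, which consumers reuse
    until a newer sample arrives -- the zero-order-hold policy.
    """
    def __init__(self):
        self._latest = {}  # topic -> (value, stamp)

    def update(self, topic, value, stamp):
        cur = self._latest.get(topic)
        if cur is None or stamp >= cur[1]:
            self._latest[topic] = (value, stamp)

    def read(self, topic):
        entry = self._latest.get(topic)
        return entry[0] if entry else None

# Hypothetical usage: a planner reads the last lidar scan even if the
# camera topic has not refreshed this cycle.
zoh = ZeroOrderHold()
zoh.update("lidar", "scan_a", stamp=0.0)
zoh.update("lidar", "scan_b", stamp=1.0)
zoh.update("lidar", "late_scan", stamp=0.5)  # stale, ignored
```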

17 pages, 9683 KB  
Article
Combined Infinity Laplacian and Non-Local Means Models Applied to Depth Map Restoration
by Vanel Lazcano, Mabel Vega-Rojas and Felipe Calderero
Signals 2026, 7(1), 2; https://doi.org/10.3390/signals7010002 - 7 Jan 2026
Viewed by 148
Abstract
Scene depth information is a key component of any mobile robotic application. Range sensors, such as LiDAR, sonar, or radar, capture depth data of a scene. However, the data captured by these sensors frequently contain missing regions or information with a low confidence level. These missing regions can be large areas without information, complicating decision-making, for instance, for an autonomous vehicle. Recovering depth data has therefore become a primary activity for computer vision applications. This work proposes and evaluates an interpolation model to infer dense depth maps from a Lab color space reference picture and an incomplete depth image embedded in a completion pipeline. The complete pipeline comprises convolutional layers and a convex combination of the infinity Laplacian and non-local means models. The proposed model infers dense depth maps by considering depth data and utilizing clues from a color picture of the scene, along with a metric for computing differences between two pixels. The work contributes (i) the convex combination of the two models to interpolate the data, and (ii) a class of functions suitable for balancing between the different models. The obtained results show that the model outperforms similar models on the KITTI dataset and outperforms our previous implementation on the NYU_v2 dataset, dropping the MSE by 34.86%, 3.35%, and 34.42% for 4×, 8×, 16× upsampling tasks, respectively.
(This article belongs to the Special Issue Recent Development of Signal Detection and Processing)
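The convex combination at the heart of the model is a pixel-wise weighted average of the two interpolants, with the balancing function choosing the weight. A minimal sketch with a fixed weight λ over flat pixel lists (the paper's balancing function varies the weight per context, and real depth maps are 2D arrays):

```python
def convex_blend(u_laplacian, u_nlmeans, lam):
    """Pixel-wise convex combination u = lam * a + (1 - lam) * b.

    lam in [0, 1] plays the role of the balancing function: lam = 1
    keeps only the infinity-Laplacian interpolant, lam = 0 only the
    non-local-means one.
    """
    assert 0.0 <= lam <= 1.0
    return [lam * a + (1.0 - lam) * b
            for a, b in zip(u_laplacian, u_nlmeans)]

# Hypothetical 2-pixel depth rows from each interpolant.
blended = convex_blend([0.0, 2.0], [4.0, 2.0], lam=0.25)
```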

19 pages, 2688 KB  
Article
Framework for the Development of a Process Digital Twin in Shipbuilding: A Case Study in a Robotized Minor Pre-Assembly Workstation
by Ángel Sánchez-Fernández, Elena-Denisa Vlad-Voinea, Javier Pernas-Álvarez, Diego Crespo-Pereira, Belén Sañudo-Costoya and Adolfo Lamas-Rodríguez
J. Mar. Sci. Eng. 2026, 14(1), 106; https://doi.org/10.3390/jmse14010106 - 5 Jan 2026
Viewed by 435
Abstract
This article proposes a framework for the development of process digital twins (DTs) in the shipbuilding sector, based on the ISO 23247 standard and structured around the achievement of three levels of digital maturity. The framework is demonstrated through a real pilot cell developed at the Innovation and Robotics Center of the NAVANTIA—Ferrol shipyard, incorporating cutting-edge technologies such as robotics, artificial intelligence, automated welding, computer vision, visual inspection, and autonomous vehicles for the manufacturing of minor pre-assembly components. Additionally, the study highlights the crucial role of discrete event simulation (DES) in adapting traditional methodologies to meet the requirements of process DTs. By addressing these challenges, the research contributes to bridging the gap in the current state of the art regarding the development and implementation of process DTs in the naval sector.

23 pages, 3312 KB  
Article
Service Mode Switching for Autonomous Robots and Small Intelligent Vehicles Using Pedestrian Personality Categorization and Flow Series Fluctuation
by Peimin Zhang, Wanwan Hu, Lusheng Wang, Hai Lin, Weiping Li and Min Peng
Information 2026, 17(1), 43; https://doi.org/10.3390/info17010043 - 4 Jan 2026
Viewed by 188
Abstract
Autonomous robots and small intelligent vehicles with diverse service functions have been extensively researched and are expected to be deployed in scenarios such as sci-tech parks, museums, and transportation hubs. Although designed as AI-driven assistants, they may not always provide optimal customer service. A key challenge is achieving service intelligence, where adaptive mode switching plays a critical role. Our experimental research demonstrates that the composition of pedestrian types can be inferred from microscopic flow fluctuations, a finding that enables the development of effective service mode switching strategies. This article therefore proposes a method that classifies pedestrians by their temperament-based behaviors, simulates their movement, and extracts microscopic features from flow data using moving standard deviation (MSTD) and moving root mean square (MRMS) indicators. Analysis of these features enables inference of the approximate composition ratio of different pedestrian types, consequently enabling a targeted switching mechanism between active and passive service modes. Simulations confirm that each pedestrian type exhibits distinct flow patterns, and the employed indicators can effectively estimate pedestrian ratios through microscopic flow data analysis, thereby facilitating efficient service mode switching. Furthermore, validation using pedestrian flow data extracted from real-world video footage confirms the method's applicability and effectiveness.
(This article belongs to the Special Issue Emerging Research in Computational Creativity and Creative Robotics)
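The MSTD and MRMS indicators are plain sliding-window statistics over the flow series. A minimal sketch (the window size and any preprocessing of the flow data are assumptions, not values from the paper):

```python
import math

def moving_std(xs, w):
    """Moving (population) standard deviation over windows of size w."""
    out = []
    for i in range(len(xs) - w + 1):
        win = xs[i:i + w]
        m = sum(win) / w
        out.append(math.sqrt(sum((x - m) ** 2 for x in win) / w))
    return out

def moving_rms(xs, w):
    """Moving root mean square over windows of size w."""
    return [math.sqrt(sum(x * x for x in xs[i:i + w]) / w)
            for i in range(len(xs) - w + 1)]

# Hypothetical pedestrian-flow series (counts per time step).
flow = [3, 4, 0, 4, 3]
mstd = moving_std(flow, 2)
mrms = moving_rms(flow, 2)
```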

36 pages, 1402 KB  
Review
A Comprehensive Review of Bio-Inspired Approaches to Coordination, Communication, and System Architecture in Underwater Swarm Robotics
by Shyalan Ramesh, Scott Mann and Alex Stumpf
J. Mar. Sci. Eng. 2026, 14(1), 59; https://doi.org/10.3390/jmse14010059 - 29 Dec 2025
Viewed by 449
Abstract
The increasing complexity of marine operations has intensified the need for intelligent robotic systems to support ocean observation, exploration, and resource management. Underwater swarm robotics offers a promising framework that extends the capabilities of individual autonomous platforms through collective coordination. Inspired by natural systems, such as fish schools and insect colonies, bio-inspired swarm approaches enable distributed decision-making, adaptability, and resilience under challenging marine conditions. Yet research in this field remains fragmented, with limited integration across algorithmic, communication, and hardware design perspectives. This review synthesises bio-inspired coordination mechanisms, communication strategies, and system design considerations for underwater swarm robotics. It examines key marine-specific algorithms, including the Artificial Fish Swarm Algorithm, Whale Optimisation Algorithm, Coral Reef Optimisation, and Marine Predators Algorithm, highlighting their applications in formation control, task allocation, and environmental interaction. The review also analyses communication constraints unique to the underwater domain and emerging acoustic, optical, and hybrid solutions that support cooperative operation. Additionally, it examines hardware and system design advances that enhance system efficiency and scalability. A multi-dimensional classification framework evaluates existing approaches across communication dependency, environmental adaptability, energy efficiency, and swarm scalability. Through this integrated analysis, the review unifies bio-inspired coordination algorithms, communication modalities, and system design approaches. It also identifies converging trends, key challenges, and future research directions for real-world deployment of underwater swarm systems.
(This article belongs to the Special Issue Wide Application of Marine Robotic Systems)

19 pages, 2276 KB  
Article
Towards Intelligent Water Safety: Robobuoy, a Deep Learning-Based Drowning Detection and Autonomous Surface Vehicle Rescue System
by Krittakom Srijiranon, Nanmanat Varisthanist, Thanapat Tardtong, Chatchadaporn Pumthurean and Tanatorn Tanantong
Appl. Syst. Innov. 2026, 9(1), 12; https://doi.org/10.3390/asi9010012 - 28 Dec 2025
Viewed by 428
Abstract
Drowning remains the third leading cause of accidental injury-related deaths worldwide, disproportionately affecting low- and middle-income countries where lifeguard coverage is limited or absent. To address this critical gap, we present Robobuoy, an intelligent real-time rescue system that integrates deep learning-based object detection with an unmanned surface vehicle (USV) for autonomous intervention. The system employs a monitoring station equipped with two specialized object detection models: YOLO12m for recognizing drowning individuals and YOLOv5m for tracking the USV. These models were selected for their balance of accuracy, efficiency, and compatibility with resource-constrained edge devices. A geometric navigation algorithm calculates heading directions from visual detections and guides the USV toward the victim. Experimental evaluations on a combined open-source and custom dataset demonstrated strong performance, with YOLO12m achieving an mAP@0.5 of 0.9284 for drowning detection and YOLOv5m achieving an mAP@0.5 of 0.9848 for USV detection. Hardware validation in a controlled water pool confirmed successful target-reaching behavior in all nine trials, achieving a positioning error within 1 m, with traversal times ranging from 11 to 23 s. By combining state-of-the-art computer vision and low-cost autonomous robotics, Robobuoy offers an affordable and low-latency prototype to enhance water safety in unsupervised aquatic environments, particularly in regions where conventional lifeguard surveillance is impractical.
(This article belongs to the Special Issue Recent Developments in Data Science and Knowledge Discovery)
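The geometric navigation step reduces to computing a bearing from the detected USV position toward the detected victim. A minimal sketch in a planar coordinate frame (the axis convention is an assumption; the paper works from image-space detections):

```python
import math

def heading_deg(usv_xy, victim_xy):
    """Bearing from the USV to the victim, in degrees counter-clockwise
    from the +x axis. The controller would steer to reduce the gap
    between this bearing and the USV's current heading."""
    dx = victim_xy[0] - usv_xy[0]
    dy = victim_xy[1] - usv_xy[1]
    return math.degrees(math.atan2(dy, dx))

# Hypothetical detections: USV at the origin, victim up and to the right.
bearing = heading_deg((0.0, 0.0), (1.0, 1.0))
```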

29 pages, 46239 KB  
Article
Radar and OpenStreetMap-Aided Consistent Trajectory Estimation in Canopy-Occluded Environments
by Youchen Tang, Bijun Li, Haoran Zhong, Maosheng Yan, Shuiyun Jiang and Jian Zhou
Remote Sens. 2026, 18(1), 70; https://doi.org/10.3390/rs18010070 - 25 Dec 2025
Viewed by 343
Abstract
Accurate localization in canopy-occluded, GNSS-challenged environments is critical for autonomous robots and intelligent vehicles. This paper presents a coarse-to-fine trajectory estimation framework using millimeter-wave radar as the primary sensor, leveraging its foliage penetration and robustness to low visibility. The framework integrates short- and long-term temporal feature enhancement to improve descriptor distinctiveness and suppress false loop closures, together with adaptive OpenStreetMap-derived priors that provide complementary global corrections in scenarios with sparse revisits. All constraints are jointly optimized within an outlier-robust backend to ensure global trajectory consistency under severe GNSS signal degradation. Evaluations conducted on the MulRan dataset, the OORD forest canopy dataset, and real-world campus experiments with partial and dense canopy coverage demonstrate up to a 55.23% reduction in Absolute Trajectory Error (ATE) and a minimum error of 1.83 m compared with baseline radar- and LiDAR-based SLAM systems. The results indicate that the integration of temporally enhanced radar features with adaptive map constraints substantially improves large-scale localization robustness.
(This article belongs to the Special Issue State of the Art in Positioning Under Forest Canopies)
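Absolute Trajectory Error, the headline metric above, is commonly computed as the RMSE of per-pose position error between the estimated and ground-truth trajectories. A minimal 2D sketch that assumes the two trajectories are already time-aligned and registered (full evaluations first solve for the aligning rigid transform):

```python
import math

def ate_rmse(est, gt):
    """Absolute Trajectory Error as RMSE over per-pose position errors.

    est, gt: equal-length lists of (x, y) positions, assumed already
    time-associated and expressed in a common frame.
    """
    assert len(est) == len(gt) and est
    se = sum((ex - gx) ** 2 + (ey - gy) ** 2
             for (ex, ey), (gx, gy) in zip(est, gt))
    return math.sqrt(se / len(est))

# Hypothetical two-pose trajectories with errors of 3 m and 4 m.
err = ate_rmse([(0, 0), (1, 0)], [(0, 3), (1, 4)])
```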

15 pages, 1613 KB  
Article
Exploring the Cognitive Capabilities of Large Language Models in Autonomous and Swarm Navigation Systems
by Dawid Ewald, Filip Rogowski, Marek Suśniak, Patryk Bartkowiak and Patryk Blumensztajn
Electronics 2026, 15(1), 35; https://doi.org/10.3390/electronics15010035 - 22 Dec 2025
Viewed by 404
Abstract
The rapid evolution of autonomous vehicles necessitates increasingly sophisticated cognitive capabilities to handle complex, unstructured environments. This study explores the cognitive potential of Large Language Models (LLMs) in autonomous navigation and swarm control systems, addressing the limitations of traditional rule-based approaches. The research investigates whether multimodal LLMs, specifically a customized version of LLaVA 7B (Large Language and Vision Assistant), can serve as a central decision-making unit for autonomous vehicles equipped with cameras and distance sensors. The developed prototype integrates a Raspberry Pi module for data acquisition and motor control with a main computational unit running the LLM via the Ollama platform. Communication between modules combines REST API for sensory data transfer and TCP sockets for real-time command exchange. Without fine-tuning, the system relies on advanced prompt engineering and context management to ensure consistent reasoning and structured JSON-based control outputs. Experimental results demonstrate that the model can interpret real-time visual and distance data to generate reliable driving commands and descriptive situational reasoning. These findings suggest that LLMs possess emerging cognitive abilities applicable to real-world robotic navigation and lay the groundwork for future swarm systems capable of cooperative exploration and decision-making in dynamic environments. These insights are particularly valuable for researchers in swarm robotics and developers of edge-AI systems seeking efficient, multimodal navigation solutions.
(This article belongs to the Special Issue Data-Centric Artificial Intelligence: New Methods for Data Processing)
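Structured JSON-based control output is only safe if the LLM's reply is validated before it drives motors. A minimal sketch of such a guard; the field names and action set below are illustrative, not the paper's actual schema:

```python
import json

# Hypothetical command vocabulary for a small wheeled platform.
ALLOWED_ACTIONS = {"forward", "backward", "left", "right", "stop"}

def parse_command(raw: str):
    """Validate a raw LLM reply as a structured control command.

    Expects JSON like {"action": "forward", "speed": 0.4}; rejects
    unknown actions and out-of-range speeds instead of trusting
    free-form model output. Raises ValueError on any violation.
    """
    msg = json.loads(raw)
    if msg.get("action") not in ALLOWED_ACTIONS:
        raise ValueError("unknown action")
    speed = float(msg.get("speed", 0.0))
    if not 0.0 <= speed <= 1.0:
        raise ValueError("speed out of range")
    return msg["action"], speed
```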

24 pages, 2000 KB  
Review
Remotely Operated and Autonomous Underwater Vehicles in Offshore Wind Farms: A Review on Applications, Challenges, and Sustainability Perspectives
by Rodolfo Augusto Kanashiro, Juliani Chico Piai Paiva, Willian Ricardo Bispo Murbak Nunes and Leonimer Flávio de Melo
Sustainability 2026, 18(1), 2; https://doi.org/10.3390/su18010002 - 19 Dec 2025
Viewed by 568
Abstract
The use of underwater vehicles, either remotely operated vehicles (ROVs) or autonomous underwater vehicles (AUVs), has become increasingly relevant in the operation and maintenance (O&M) routines of offshore wind farms. This article provides a critical review of how these platforms are being integrated into inspection and maintenance tasks, contributing not only to safer and more precise operations but also to greater autonomy in challenging marine environments. Beyond the technical and operational aspects, this review highlights their growing connection with artificial intelligence, digital twins, and multi-robot collaboration. The studies analyzed indicate a progressive shift away from conventional methods, traditionally dependent on crewed vessels and manual inspections, toward more automated, sustainable, and integrated approaches that align with the environmental and social commitments of the offshore wind sector. Finally, emerging trends and persisting obstacles, notably energy autonomy, are discussed, outlining the requirements for consolidating a robust, connected, and sustainability-oriented model for offshore maintenance.
