
Search Results (344)

Search Parameters:
Keywords = visual robot navigation

20 pages, 2638 KB  
Article
Design and Implementation of Underwater Robotic Systems for Visual–Inertial Trajectory Estimation and Robust Motion Control
by Yangyang Wang, Tianzhu Gao, Yongqiang Zhao, Ziyu Liu, Hang Yu and Xijun Du
Symmetry 2026, 18(4), 621; https://doi.org/10.3390/sym18040621 - 6 Apr 2026
Abstract
Reliable trajectory estimation and precise motion control are the prerequisites for underwater robotic systems to perform complex autonomous tasks, which are essential for enhancing the operational efficiency of intelligent underwater facilities. However, the inherent asymmetry of underwater hydrodynamics, featureless images caused by complex environments, and the lack of high-frequency state feedback significantly hinder stable trajectory tracking and robust autonomous navigation. To address these challenges, this paper proposes an integrated autonomous navigation and robust control scheme for underwater robotic systems. Specifically, we first propose a visual–inertial trajectory estimation method for underwater robotic systems, which effectively overcomes the challenges of featureless images and provides consistent, real-time pose feedback for motion execution. Furthermore, we develop a hierarchical robust motion control strategy for autonomous underwater robots, which integrates model predictive control with incremental nonlinear dynamic inversion to achieve precise positioning performance and reliable operation under environmental disturbances. Finally, we design and implement a customized, highly integrated underwater robotic platform that integrates the proposed trajectory estimation and robust control modules, with its performance validated through extensive field experiments in underwater scenarios. The experimental results demonstrate that the proposed system can effectively achieve high-precision trajectory tracking and maintain operational stability, providing a comprehensive engineering solution for the autonomous navigation of underwater robots in complex environments.
(This article belongs to the Special Issue Symmetry in Next-Generation Intelligent Information Technologies)
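The abstract names the control combination (a model predictive outer loop with an incremental nonlinear dynamic inversion inner loop) without giving equations. As a hedged illustration only, the generic INDI increment can be sketched as below; the variable names, the control-effectiveness matrix `B`, and the use of NumPy are assumptions for this sketch, not details taken from the paper.

```python
import numpy as np

def indi_increment(u_prev, accel_cmd, accel_meas, B):
    """Generic INDI inner-loop update (illustrative sketch).

    u_prev     : previous actuator command vector
    accel_cmd  : commanded acceleration (e.g. from an outer MPC loop)
    accel_meas : measured acceleration (e.g. filtered IMU output)
    B          : control-effectiveness matrix (d accel / d u)

    Rather than inverting the full dynamics, INDI inverts only the
    *increment*, so unmodelled forces (currents, asymmetric drag)
    that appear in accel_meas are cancelled implicitly.
    """
    du = np.linalg.solve(B, accel_cmd - accel_meas)
    return u_prev + du
```

The appeal for underwater platforms is that the hydrodynamic model enters only through `B`, not through a full force model.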

18 pages, 2029 KB  
Review
Artificial Intelligence in Head and Neck Surgical Oncology: A State-of-the-Art Review
by Steven X. Chen, Maria Feucht, Aditya Bhatt and Janice L. Farlow
J. Clin. Med. 2026, 15(7), 2767; https://doi.org/10.3390/jcm15072767 - 6 Apr 2026
Abstract
Artificial intelligence (AI) is rapidly reshaping head and neck surgical oncology by augmenting decision-making across the full perioperative continuum. This state-of-the-art review aims to provide head and neck surgical oncologists with a conceptual framework for understanding and critically appraising AI tools entering clinical practice, summarizing how machine learning, deep learning, and generative AI are being integrated into contemporary surgical workflows. Preoperative applications include detection of occult nodal metastasis and extranodal extension. Intraoperative innovations include augmented reality-assisted navigation, real-time margin assessment, and improving visual clarity and tissue handling for robotic platforms. Postoperatively, AI can predict complications like free flap failure and oncologic outcomes. Large language models are being operationalized for clinician-facing applications such as documentation and inbox support, as well as patient-facing education. Despite promising results, broad clinical deployment remains limited by concerns about privacy, validation, reliability, safety, and ethics. Widespread adoption will require prospective clinical trials, robust governance, and human-centered workflows that ensure AI remains a safe, assistive copilot.
(This article belongs to the Special Issue Clinical Advances in Head and Neck Cancer Diagnostics and Treatment)

25 pages, 4371 KB  
Article
GTS-SLAM: A Tightly-Coupled GICP and 3D Gaussian Splatting Framework for Robust Dense SLAM in Underground Mines
by Yi Liu, Changxin Li and Meng Jiang
Vehicles 2026, 8(4), 79; https://doi.org/10.3390/vehicles8040079 - 3 Apr 2026
Abstract
To address unstable localization and sparse mapping for autonomous vehicles operating in GPS-denied and low-visibility environments, this paper proposes GTS-SLAM, a tightly coupled dense visual SLAM framework integrating Generalized Iterative Closest Point (GICP) and 3D Gaussian Splatting (3DGS). The system is designed for intelligent driving platforms such as underground mining vehicles, inspection robots, and tunnel autonomous navigation systems. The front-end performs covariance-aware point-cloud registration using GICP to achieve robust pose estimation under low texture, dust interference, and dynamic disturbances. The back-end employs probabilistic dense mapping based on 3DGS, combined with scale regularization, scale alignment, and keyframe factor-graph optimization, enabling synchronized optimization of localization and mapping. A Compact-3DGS compression strategy further reduces memory usage while maintaining real-time performance. Experiments on public datasets and real underground-like scenarios demonstrate centimeter-level trajectory accuracy, high-quality dense reconstruction, and real-time rendering. The system provides reliable perception capability for vehicle autonomous navigation, obstacle avoidance, and path planning in confined and weak-light environments. Overall, the proposed framework offers a deployable solution for autonomous driving and mobile robots requiring accurate localization and dense environmental understanding in challenging conditions.
(This article belongs to the Special Issue AI-Empowered Assisted and Autonomous Driving)
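The "covariance-aware" registration the abstract refers to is the defining trait of Generalized ICP: each correspondence is weighted by local surface covariances rather than treated as a bare point pair. A minimal sketch of the per-correspondence cost, with all variable names assumed for illustration (not taken from the paper):

```python
import numpy as np

def gicp_residual(p_src, p_tgt, C_src, C_tgt, R, t):
    """Squared Mahalanobis residual of one GICP correspondence.

    p_src, p_tgt : matched 3-D points in source and target clouds
    C_src, C_tgt : local surface covariances estimated from each
                   point's neighbourhood
    R, t         : candidate rotation and translation

    The pair is weighted by the combined covariance C_tgt + R C_src R^T,
    so well-constrained planar structure dominates the pose estimate
    while noisy, poorly observed points (dust, low texture) are
    down-weighted.
    """
    d = p_tgt - (R @ p_src + t)              # transformed pair error
    M = C_tgt + R @ C_src @ R.T              # combined covariance
    return float(d @ np.linalg.solve(M, d))  # Mahalanobis norm squared
```

The front-end pose is then the transform minimizing the sum of these residuals over all correspondences.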

16 pages, 1529 KB  
Article
Image Segmentation-Guided Visual Tracking on a Bio-Inspired Quadruped Robot
by Hewen Xiao, Guangfu Ma and Weiren Wu
Biomimetics 2026, 11(4), 234; https://doi.org/10.3390/biomimetics11040234 - 2 Apr 2026
Abstract
Bio-inspired quadrupedal robots exhibit superior adaptability and mobility in unstructured environments, making them suitable for complex task scenarios such as navigation, obstacle avoidance, and tracking in a variety of environments. Visual perception plays a critical role in enabling autonomous behavior, offering a cost-effective alternative to multi-sensor systems. This paper proposes an image segmentation-guided visual tracking framework to enhance both perception and motion control in quadruped robots. On the perception side, a cascaded convolutional neural network is introduced, integrating a global information guidance module to fuse low-level textures and high-level semantic features. This architecture effectively addresses limitations in single-scale feature extraction and improves segmentation accuracy under visually degraded conditions. On the control side, segmentation outputs are embedded into a biologically inspired central pattern generator (CPG), enabling coordinated generation of limb and spinal trajectories. This integration facilitates a closed-loop visual-motor system that adapts dynamically to environmental changes. Experimental evaluations on benchmark image segmentation datasets and robotic locomotion tasks demonstrate that the proposed framework achieves enhanced segmentation precision and motion flexibility, outperforming existing methods. The results highlight the effectiveness of vision-guided control strategies and their potential for deployment in real-time robotic navigation.

33 pages, 16801 KB  
Article
A GNSS–Vision Integrated Autonomous Navigation System for Trellis Orchard Transportation Robots
by Huaiyang Liu, Haiyang Gu, Yong Wang, Tianjiao Zhong, Tong Tian and Changxing Geng
AI 2026, 7(4), 125; https://doi.org/10.3390/ai7040125 - 1 Apr 2026
Abstract
Autonomous navigation is essential for orchard transportation robots to support automated operations and precision orchard management. However, in trellis orchards, dense vegetation and complex canopy structures often degrade the stability of GNSS-based navigation in in-row environments. To address this issue, this study proposes a GNSS–vision integrated navigation framework for orchard transportation robots. The performance of GNSS-based navigation in out-of-row environments and vision-based navigation in in-row environments was experimentally evaluated under representative orchard operating conditions. In out-of-row areas, the robot employs GNSS-based path planning and trajectory tracking to achieve reliable navigation in relatively open, lightly occluded environments. During in-row navigation, a deep learning-based real-time object detection approach is used to detect tree trunks and trellis supporting structures. By integrating corner-point selection with temporal RANSAC-based line fitting, a stable orchard row structure is constructed to generate robust navigation references. The visual perception module serves as the front-end sensing component of the navigation system and is designed to be independent of specific object detection architectures, allowing flexible integration with different real-time detection models. Field experiments were conducted under various orchard layouts and growth stages. The average lateral deviation of GNSS-based navigation in out-of-row scenarios ranged from 0.093 to 0.221 m, while the average heading deviation of in-row visual navigation was approximately 5.23° at a robot speed of 0.6 m/s. These results indicate that the proposed perception and navigation methods can maintain stable navigation performance within their respective applicable scenarios in trellis orchard environments. The experimental findings provide a practical and engineering-oriented basis for future research on automatic navigation mode switching and system-level integration of orchard transportation robots.
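The in-row reference line described above is built by RANSAC line fitting over trunk detections. A single-frame version of that step can be sketched as follows; the function name, tolerances, and iteration count are illustrative choices, not parameters reported in the paper.

```python
import numpy as np

def ransac_line(points, n_iter=200, inlier_tol=0.05, rng=None):
    """Fit a 2-D line to detected trunk positions with RANSAC.

    points : (N, 2) array of trunk base positions (e.g. metres)
    Returns (point_on_line, unit_direction, inlier_mask).
    """
    rng = np.random.default_rng(rng)
    best_inliers = np.zeros(len(points), dtype=bool)
    for _ in range(n_iter):
        i, j = rng.choice(len(points), size=2, replace=False)
        d = points[j] - points[i]
        norm = np.linalg.norm(d)
        if norm < 1e-9:
            continue
        d = d / norm
        # perpendicular distance of every point to the candidate line
        offsets = points - points[i]
        dist = np.abs(offsets[:, 0] * d[1] - offsets[:, 1] * d[0])
        inliers = dist < inlier_tol
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    # least-squares refit on the winning inlier set
    centroid = points[best_inliers].mean(axis=0)
    _, _, vt = np.linalg.svd(points[best_inliers] - centroid)
    return centroid, vt[0], best_inliers
```

A "temporal" variant, as the abstract suggests, would additionally carry inlier sets across frames so that a spurious detection in one frame cannot flip the row line.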

18 pages, 5105 KB  
Article
Lightweight Visual Localization of Steel Surface Defects for Autonomous Inspection Robots Based on Improved YOLOv10n
by Jinwu Tong, Xin Zhang, Xinyun Lu, Han Cao, Lengtao Yao and Bingbing Gao
Sensors 2026, 26(7), 2132; https://doi.org/10.3390/s26072132 - 30 Mar 2026
Abstract
To address the challenges of steel surface defect detection—characterized by fine-grained textures, substantial scale variations, and complex background interference—conventional lightweight detectors often struggle to balance real-time navigation requirements with high-precision spatial localization on mobile inspection platforms. In this work, we propose KDM-YOLO, a lightweight visual localization and detection method built upon YOLOv10n, designed to provide an efficient perception engine for autonomous inspection robots. The proposed approach enhances the baseline through three key perspectives: feature extraction, context modeling, and multi-scale fusion. Specifically, KWConv is introduced to strengthen the representation of fine-grained texture and edge cues; C2f-DRB is employed to enlarge the effective receptive field and improve long-range dependency perception to reduce missed detections; and a multi-scale attention fusion (MSAF) module is inserted before the detection head to adaptively integrate spatial details with semantic context while suppressing redundant background responses. Ablation studies confirm that each module contributes to performance gains, and their combination yields the best overall results. Comparative experiments further demonstrate that KDM-YOLO significantly improves detection performance while retaining a compact model size and high inference speed. Compared with the YOLOv10n baseline, Precision, Recall and mAP@50 are increased to 91.0%, 93.9%, and 95.4%, respectively, with a parameter count of 3.29 M and an inference speed of 155.6 f/s. These results indicate that KDM-YOLO achieves an ideal balance between the accuracy and computational efficiency required for embedded navigation platforms, providing an effective solution for online autonomous inspection and real-time localization of steel surface defects.
(This article belongs to the Special Issue Deep Learning Based Intelligent Fault Diagnosis)
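The reported Precision, Recall, and mAP@50 all rest on IoU-based matching of predicted boxes to ground truth. A stripped-down sketch of that matching core is below; it omits the confidence ranking and AP integration over recall that full mAP computation requires, and the function names are ours, not the paper's.

```python
def box_iou(a, b):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def precision_recall(preds, gts, iou_thr=0.5):
    """Greedy one-to-one matching of predictions to ground truth at an
    IoU threshold (0.5 corresponds to the '@50' in mAP@50)."""
    matched = set()
    tp = 0
    for p in preds:
        best, best_iou = None, iou_thr
        for k, g in enumerate(gts):
            if k in matched:
                continue
            iou = box_iou(p, g)
            if iou >= best_iou:
                best, best_iou = k, iou
        if best is not None:
            matched.add(best)
            tp += 1
    fp = len(preds) - tp
    fn = len(gts) - tp
    return tp / max(tp + fp, 1), tp / max(tp + fn, 1)
```

Each unmatched prediction counts as a false positive and each unmatched ground-truth box as a false negative, which is how a detector trades precision against recall as its confidence threshold moves.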

22 pages, 891 KB  
Systematic Review
The Use of Augmented Reality for Navigation in Minimally Invasive Abdominal and Thoracic Soft-Tissue Surgery: A Systematic Review
by Inga Steinberga, Victor Gabriel El-Hajj, Laura Cercenelli, Mario Romero, Kenny A. Rodriguez-Wallberg, Erik Edström and Adrian Elmi-Terander
Sensors 2026, 26(6), 1962; https://doi.org/10.3390/s26061962 - 20 Mar 2026
Abstract
Surgical navigation and augmented reality (AR) are widely used in neurosurgery, spinal surgery, and orthopedics. However, their use in minimally invasive abdominal and thoracic soft-tissue surgery is limited, as tracking deformable, mobile organs is challenging. Recent advances in AR may address these challenges to improve intraoperative navigation. This systematic review, registered in PROSPERO (2024) and based on PRISMA guidelines, analyzes literature from 2014 to 2024 about AR in minimally invasive abdominal and thoracic soft-tissue surgery. It identifies target organs, describes AR hardware and software, and evaluates accuracy levels, usability outcomes, clinical benefits, technical limitations, and research needs. Searches of PubMed, Web of Science, and Embase for English-language studies found 1297 records, of which only 28 (2%) met the inclusion criteria. Nearly half (n = 12; 42%) focused on liver surgery; none on gynecologic surgery. The AR devices varied in tracking methods, image processing, visualization, and display. Overall, AR improved anatomical guidance and procedural planning, especially in complex surgeries. Integration with robotic systems may further boost visualization, precision, and workflow, though challenges remain in standardization, large-cohort validation, and workflow integration.
(This article belongs to the Special Issue Virtual, Augmented, and Mixed Reality in Biomedical Engineering)

58 pages, 7331 KB  
Review
Human–Robot Interaction in Indoor Mobile Robotics: Current State, Interaction Modalities, Applications, and Future Challenges
by Arman Ahmed Khan and Kerstin Thurow
Sensors 2026, 26(6), 1840; https://doi.org/10.3390/s26061840 - 14 Mar 2026
Abstract
This paper provides a comprehensive survey of Human–Robot Interaction (HRI) for indoor mobile robots operating in human-centered environments such as hospitals, laboratories, offices, and homes. We review interaction modalities—including speech, gesture, touch, visual, and multimodal interfaces—and examine key user experience factors such as usability, trust, and social acceptance. Implementation challenges are discussed, encompassing safety, privacy, and regulatory considerations. Representative case studies, including healthcare and domestic platforms, highlight design trade-offs and integration lessons. We identify critical technical challenges, including robust perception, reliable multimodal fusion, navigation in dynamic spaces, and constraints on computation and power. Finally, we outline future directions, including embodied AI, adaptive context-aware interactions, and standards for safety and data protection. This survey aims to guide the development of indoor mobile robots capable of collaborating with humans naturally, safely, and effectively.

13 pages, 246 KB  
Review
Innovations in Robotic-Assisted Bronchoscopy: Current Trends and Future Prospects
by Joshua M. Boster, S. Michael Goertzen, Brian D. Tran and Robert F. Browning
Diagnostics 2026, 16(6), 832; https://doi.org/10.3390/diagnostics16060832 - 11 Mar 2026
Abstract
Robotic-assisted bronchoscopy (RAB) represents a significant technological advance, providing superior precision, enhanced visualization, and increased maneuverability relative to conventional bronchoscopic methods. This review provides an overview of current research evaluating RAB’s diagnostic performance and exploring future prospects. Recent literature demonstrates advantages in navigating difficult-to-reach lung lesions with improved safety profiles compared to transthoracic approaches. Incorporating advanced imaging technologies has enhanced real-time decision-making during procedures, and artificial intelligence applications are emerging. RAB has been rapidly adopted at many high-volume centers based on favorable navigational success and safety data. As the field matures, ongoing prospective studies will further define its role in improving patient outcomes, cost-effectiveness, and optimal integration with lung cancer screening programs. RAB faces ongoing challenges including substantial capital costs, training requirements, and need for standardized protocols. Therapeutic applications show promise and are under active investigation.
(This article belongs to the Special Issue Advances in Interventional Pulmonology)
22 pages, 588 KB  
Review
Evolution of Percutaneous Nephrolithotomy (PCNL) from Standard to Miniaturized and Ultra-Mini Techniques: A Narrative Review
by Mladen Doykov, Jasmin Gurung, Usman Khalid, Gancho Kostov, Bozhidar Hristov, Petar Uchikov, Krasimir Kraev, Lyubomir Chervenkov and Elizabet Karen Dzhambazova
Medicina 2026, 62(3), 484; https://doi.org/10.3390/medicina62030484 - 4 Mar 2026
Abstract
Background and Objectives: Because of its consistently high stone-free rates (SFRs), percutaneous nephrolithotomy (PCNL) continues to be the first-line treatment for renal stones larger than 20 mm. Standard 24 to 30 Fr access tracts, however, are linked to access-related morbidity, such as bleeding, pain, and extended hospital stays. These restrictions have led to progressive tract miniaturization and the development of mini-PCNL, ultra-mini PCNL, and micro-PCNL techniques. Materials and Methods: We performed a narrative review of studies published through January 2026 using PubMed and Google Scholar. Search terms included percutaneous nephrolithotomy, mini-PCNL, ultra-mini PCNL, micro-PCNL, and vacuum-assisted PCNL. Original studies, systematic reviews, and meta-analyses reporting clinical outcomes, complications, and advancements were selected, whereas conference abstracts, non-English papers, and articles without accessible full text were excluded. Results: Across randomized trials and meta-analyses, miniaturized PCNL achieved stone-free rates comparable to standard PCNL in appropriately selected patients (typically ~80–90% for stones ≤20 mm, with similar rates in selected stones >20 mm), while demonstrating a smaller hemoglobin decrease (mean difference approximately −0.6 to −1.0 g/dL), reduced transfusion rates, and shorter hospital stays, at the cost of longer operative time (mean difference ~8–12 min). On the other hand, smaller working channels can make visualization and fragment evacuation more demanding as stone burden rises. Raised intrarenal pressure is a recurring safety issue because it may increase infectious risk unless drainage is actively managed. Recent innovations aim to address these limitations, including vacuum-assisted access sheaths, pressure-controlled irrigation, improved laser and lithotripsy platforms, image-fusion guidance, navigation systems, and robotic assistance. Conclusions: PCNL now spans a spectrum of tract sizes rather than a single standard approach. When chosen appropriately and performed with attention to pressure control and fragment evacuation, miniaturized PCNL can reduce morbidity without sacrificing stone clearance. Future advancements in percutaneous stone surgery are more likely to rely on integrated technological solutions that improve accuracy, safety, and repeatability than on additional tract size reduction.
(This article belongs to the Section Urology & Nephrology)

30 pages, 8087 KB  
Article
A Novel SLAM Approach for Trajectory Generation of a Dual-Arm Mobile Robot (DAMR) Using Sensor Fusion
by Narendra Kumar Kolla and Pandu Ranga Vundavilli
Automation 2026, 7(2), 42; https://doi.org/10.3390/automation7020042 - 3 Mar 2026
Abstract
Simultaneous Localization and Mapping (SLAM) is essential for autonomous movement in intelligent robotic systems. Traditional SLAM using a single sensor, such as an Inertial Measurement Unit (IMU), faces challenges including noise and drift. This paper introduces a novel Cartographer-based SLAM approach for DAMR trajectory generation in indoor environments to reduce drift errors and improve localization accuracy. The approach fuses multi-sensor data from wheel odometry, an RGB-D camera (RTAB-Map), and an IMU with an extended Kalman filter (EKF) for precise mapping and DAMR trajectory generation, and the result is compared with the reference trajectory generated by robot pose estimation and frame transformation. The system is implemented in the Robot Operating System (ROS 2) for coordinated data acquisition, processing, and visualization. Experimental verification shows that the generated DAMR trajectories closely follow the reference trajectory and that drift errors are substantially reduced. The experimental results revealed that the DAMR trajectory with multi-sensor data integration using the EKF effectively improved the positioning accuracy and robustness of the system. The proposed approach shows improved alignment with the reference trajectory, yielding a mean displacement error of 0.352% and an absolute trajectory error of 0.007 m, highlighting the effectiveness of the fusion approach for accurate indoor robot navigation.
(This article belongs to the Section Robotics and Autonomous Systems)
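The EKF fusion of odometry, visual, and inertial data mentioned in the abstract follows the standard predict/update cycle; the paper gives no equations, so the sketch below is the generic form (it reduces to the linear Kalman filter, with `F` and `H` standing in for the Jacobians of the nonlinear motion and measurement models). All names are ours.

```python
import numpy as np

def kf_step(x, P, z, F, Q, H, R):
    """One predict/update cycle of a (extended) Kalman filter.

    x, P : state estimate and covariance
    z    : incoming measurement (e.g. wheel-odometry position)
    F, Q : motion model (Jacobian) and process noise
    H, R : measurement model (Jacobian) and measurement noise
    """
    # predict: propagate state and grow uncertainty
    x = F @ x
    P = F @ P @ F.T + Q
    # update: correct with the measurement
    y = z - H @ x                       # innovation
    S = H @ P @ H.T + R                 # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)      # Kalman gain
    x = x + K @ y
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P
```

In a fusion stack like the one described, the same update is simply applied per sensor, each with its own `H` and `R`, which is what lets a drifting IMU be corrected by slower but absolute odometry or visual fixes.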

9 pages, 3625 KB  
Proceeding Paper
A Framework for Integrity Monitoring for Positioning Through Graph-Based SLAM Optimization
by Sam Bekkers and Heiko Engwerda
Eng. Proc. 2026, 126(1), 25; https://doi.org/10.3390/engproc2026126025 - 25 Feb 2026
Abstract
As satellite navigation systems show vulnerabilities in specific circumstances such as urban canyons or jamming and spoofing situations, additional sensors such as cameras may be incorporated on the platform. Despite advancements in the robotics and computer vision community, which have led to increasingly accurate Simultaneous Localization and Mapping (SLAM) positioning solutions, visual navigation has its own vulnerabilities. It therefore remains of critical importance for many applications to study the integrity of fused navigation algorithms and their components, which is done less for SLAM than for satellite navigation. In this paper, a framework for integrity monitoring (IM) of a visual SLAM algorithm is proposed. A sensor-level IM scheme analyses feature reprojection errors. It is demonstrated that, in dynamic environments, multiple hypotheses can be generated from different subsets of extracted features. Additionally, the factor graph-based framework employs a fusion-level IM scheme which deals with these multiple hypotheses and selects the most probable one by calculating the sum of weighted measurement residuals. These concepts are applied to scenarios from real and simulated experiments in order to demonstrate applicability.
(This article belongs to the Proceedings of European Navigation Conference 2025)
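The sensor-level scheme above gates features on their reprojection errors. A minimal version of that residual computation and gate is sketched below, under an assumed pinhole model; the function names, gate threshold, and frame conventions are illustrative, not the paper's.

```python
import numpy as np

def reprojection_errors(points_3d, observations, K, R, t):
    """Pixel reprojection error of each tracked feature.

    points_3d    : (N, 3) landmark positions in the world frame
    observations : (N, 2) measured pixel coordinates
    K            : 3x3 camera intrinsic matrix
    R, t         : world-to-camera rotation and translation
    """
    cam = points_3d @ R.T + t            # world -> camera frame
    proj = cam @ K.T                     # pinhole projection
    px = proj[:, :2] / proj[:, 2:3]      # perspective divide
    return np.linalg.norm(px - observations, axis=1)

def flag_outliers(errors, thresh_px=2.0):
    """Sensor-level fault gate: features exceeding the threshold are
    excluded, and their subset can seed an alternative hypothesis
    (e.g. features on a moving object in a dynamic scene)."""
    return errors > thresh_px
```

Partitioning features by this gate is what yields the multiple hypotheses that the fusion-level scheme then arbitrates between.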

29 pages, 31856 KB  
Article
A Vision–Locomotion Framework Toward Obstacle Avoidance for a Bio-Inspired Gecko Robot
by Wenrui Xiang, Barmak Honarvar Shakibaei Asli and Aihong Ji
Electronics 2026, 15(4), 882; https://doi.org/10.3390/electronics15040882 - 20 Feb 2026
Abstract
This paper presents the design and experimental evaluation of a bio-inspired gecko robot, focusing on mechanical design, vision-based obstacle perception, and rhythmic locomotion control as enabling technologies for future obstacle avoidance in complex environments. The robot features a 17-degrees-of-freedom mechanical structure with a flexible spine and multi-jointed limbs, providing a physical basis for adaptive locomotion. For perception, a custom obstacle detection dataset was constructed from the robot’s onboard camera view and used to train a YOLOv5-based detection model. Experimental results show that the trained model achieves a mean average precision (mAP) of 0.979 and a maximum F1-score of 0.97 at an optimal confidence threshold, demonstrating reliable real-time obstacle perception under diverse indoor conditions. For motion control, a central pattern generator (CPG) based on Hopf oscillators is implemented to generate rhythmic locomotion. Experimental evaluations confirm stable diagonal gait generation, with coordinated joint trajectories oscillating at 1 Hz. The flexible spine exhibits periodic lateral deflection with peak amplitudes of ±15°, ±10°, and ±8° across spinal joints, enhancing locomotion continuity and turning capability. Physical robot experiments further demonstrate smooth straight-line crawling enabled by the coupled limb–spine motion. While visual perception and CPG-based locomotion are experimentally validated as independent subsystems, their real-time closed-loop integration is not implemented in this study. Instead, this work establishes a system-level framework and experimental baseline for future perception–motion coupling, providing a foundation for closed-loop obstacle avoidance and autonomous navigation in bio-inspired gecko robots.
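The Hopf oscillator at the heart of such a CPG has a simple closed form: trajectories converge to a limit cycle of radius sqrt(mu) at frequency omega, so joint setpoints inherit a stable rhythm regardless of the initial state. A minimal Euler-integrated sketch (parameter values chosen to match the 1 Hz gait reported, everything else assumed):

```python
import numpy as np

def hopf_step(x, y, dt=0.001, mu=1.0, omega=2 * np.pi * 1.0):
    """One Euler step of a Hopf oscillator.

    The radial dynamics (mu - r^2) pull the state onto a limit cycle
    of radius sqrt(mu); omega sets the oscillation frequency (here
    2*pi rad/s, i.e. 1 Hz). In a CPG, one oscillator per joint is
    phase-coupled to its neighbours to encode the gait pattern.
    """
    r2 = x * x + y * y
    dx = (mu - r2) * x - omega * y
    dy = (mu - r2) * y + omega * x
    return x + dt * dx, y + dt * dy
```

Starting from a small perturbation such as (0.1, 0) and iterating, the state spirals out to the unit circle within a few seconds of simulated time; scaling and offsetting `x` then gives a joint angle command.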

27 pages, 5554 KB  
Article
Hierarchical Autonomous Navigation for Differential-Drive Mobile Robots Using Deep Learning, Reinforcement Learning, and Lyapunov-Based Trajectory Control
by Ramón Jaramillo-Martínez, Ernesto Chavero-Navarrete and Teodoro Ibarra-Pérez
Technologies 2026, 14(2), 125; https://doi.org/10.3390/technologies14020125 - 17 Feb 2026
Abstract
Autonomous navigation in mobile robots operating in dynamic and partially known environments demands the coordinated integration of perception, decision-making, and control while ensuring stability, safety, and energy efficiency. This paper presents an integrated navigation framework for differential-drive mobile robots that combines deep learning-based visual perception, reinforcement learning (RL) for high-level decision-making, and a Lyapunov-based trajectory reference generator for low-level motion execution. A convolutional neural network processes RGB-D images to classify obstacle configurations in real time, enabling navigation without prior map information. Based on this perception layer, an RL policy generates adaptive navigation subgoals in response to environmental changes. To ensure stable motion execution, a Lyapunov-based control strategy is formulated at the kinematic level to generate smooth velocity references, which are subsequently tracked by embedded PID controllers, explicitly decoupling learning-based decision-making from stability-critical control tasks. The local stability of the trajectory-tracking error is analyzed using a quadratic Lyapunov candidate function, ensuring asymptotic convergence under ideal kinematic assumptions. Experimental results demonstrate that while higher control gains provide faster convergence in simulation, an intermediate gain value (K = 0.5I) achieves a favorable trade-off between responsiveness and robustness in real-world conditions, mitigating oscillations caused by actuator dynamics, delays, and sensor noise. Validation across multiple navigation scenarios shows average tracking errors below 1.2 cm, obstacle detection accuracies above 95% for human obstacles, and a significant reduction in energy consumption compared to classical A* planners, highlighting the effectiveness of integrating learning-based navigation with analytically grounded control. Full article
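Lyapunov-based kinematic tracking for a differential-drive robot is commonly written in the Kanayama form, which is one plausible reading of the control layer described above; the exact law, gain structure, and the mapping of the paper's K = 0.5I onto per-axis gains below are assumptions for illustration.

```python
import numpy as np

def tracking_control(e, v_ref, w_ref, K=(0.5, 0.5, 0.5)):
    """Kanayama-style kinematic tracking law for a differential drive.

    e      : (e_x, e_y, e_th) pose error expressed in the robot frame
    v_ref  : reference linear velocity along the trajectory
    w_ref  : reference angular velocity
    K      : (k_x, k_y, k_th) positive gains (0.5 each, echoing the
             intermediate K = 0.5I gain the abstract reports)

    Stability follows from the quadratic-type Lyapunov candidate
    V = (e_x^2 + e_y^2)/2 + (1 - cos(e_th))/k_y, whose derivative is
    negative semidefinite under this law.
    """
    ex, ey, eth = e
    kx, ky, kth = K
    v = v_ref * np.cos(eth) + kx * ex
    w = w_ref + v_ref * (ky * ey + kth * np.sin(eth))
    return v, w
```

The outputs `(v, w)` are exactly the "smooth velocity references" such an architecture hands to the low-level PID wheel controllers, keeping the learned planner out of the stability argument.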

25 pages, 15267 KB  
Article
3D Semantic Map Reconstruction for Orchard Environments Using Multi-Sensor Fusion
by Quanchao Wang, Yiheng Chen, Jiaxiang Li, Yongxing Chen and Hongjun Wang
Agriculture 2026, 16(4), 455; https://doi.org/10.3390/agriculture16040455 - 15 Feb 2026
Viewed by 698
Abstract
Semantic point cloud maps play a pivotal role in smart agriculture. They provide not only core three-dimensional data for orchard management but also empower robots with environmental perception, enabling safer and more efficient navigation and planning. However, traditional point cloud maps model surrounding obstacles from a purely geometric perspective, failing to capture the distinctions and characteristics of individual obstacles. In contrast, semantic maps encompass semantic information and even topological relationships among objects in the environment. Furthermore, existing semantic map construction methods are predominantly vision-based, making them ill-suited to the rapid lighting changes in agricultural settings, which can cause positioning failures. Therefore, this paper proposes a positioning and semantic map reconstruction method tailored to orchards. It fuses visual, LiDAR, and inertial sensors to obtain high-precision poses and point cloud maps. By combining open-vocabulary detection and semantic segmentation models, it projects two-dimensional semantic detections onto the three-dimensional point cloud, ultimately generating a point cloud map enriched with semantic information. The resulting 2D occupancy grid map is used for robotic motion planning. Experimental results demonstrate that, on a custom dataset, the proposed method achieves 74.33% mIoU in semantic segmentation, a 12.4% relative error in fruit recall, and a mean translation error of 0.038803 m in localization. The deployed semantic segmentation network Fast-SAM processes each frame in 13.36 ms. These results demonstrate that the proposed method combines high accuracy with real-time performance in semantic map reconstruction. This exploratory work provides theoretical and technical references for future research on more precise localization and more complete semantic mapping, offering broad application prospects and key technological support for intelligent agriculture. Full article
(This article belongs to the Special Issue Advances in Robotic Systems for Precision Orchard Operations)
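The 2D-to-3D projection step this pipeline describes, assigning each point the class of the pixel it falls on, can be sketched as follows. The function name, pinhole intrinsics, and label conventions are illustrative, not taken from the paper, and points are assumed to be already transformed into the camera frame:

```python
import numpy as np

def label_points(points_cam, label_img, K):
    """Project 3D points (camera frame, metres) into a 2D semantic label image
    and copy each pixel's class id onto the corresponding point.

    K is a 3x3 pinhole intrinsic matrix [[fx, 0, cx], [0, fy, cy], [0, 0, 1]].
    Points behind the camera or projecting outside the image keep label -1.
    """
    h, w = label_img.shape
    labels = np.full(len(points_cam), -1, dtype=np.int32)
    in_front = points_cam[:, 2] > 0              # drop points behind the image plane
    uvw = points_cam[in_front] @ K.T             # homogeneous pixel coordinates
    u = np.round(uvw[:, 0] / uvw[:, 2]).astype(int)
    v = np.round(uvw[:, 1] / uvw[:, 2]).astype(int)
    inside = (u >= 0) & (u < w) & (v >= 0) & (v < h)
    labels[np.flatnonzero(in_front)[inside]] = label_img[v[inside], u[inside]]
    return labels
```

In a full system, the LiDAR points would first be moved into the camera frame using the fused visual-LiDAR-inertial pose estimate, and per-point labels accumulated over many frames (e.g. by majority vote) to build the final semantic point cloud map.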
