Search Results (919)

Search Parameters:
Keywords = autonomous mobile robot

21 pages, 6210 KB  
Article
Robust Path Planning via Deep Reinforcement Learning
by Daeyeol Kang, Jongyoon Park and Pileun Kim
Sensors 2026, 26(9), 2658; https://doi.org/10.3390/s26092658 - 24 Apr 2026
Viewed by 578
Abstract
Deep reinforcement learning (DRL) for autonomous mobile robot navigation faces several inherent limitations. The stochastic nature of actions generated by DRL policies can undermine performance consistency, while inefficient exploration frequently delays the learning process or prevents the discovery of optimal solutions. This research aims to enhance the robustness of path planning by addressing these challenges. To achieve this goal, we propose a hybrid approach that integrates the flexible decision-making capabilities of deep reinforcement learning with the stability of traditional path planning. The proposed model adopts the Twin Delayed Deep Deterministic Policy Gradient (TD3) network as its base. Notably, we pre-process LiDAR point cloud data to extract only essential features for the state representation, thereby preventing performance degradation from high-dimensional inputs and improving computational efficiency. Our model optimizes the learning process through two core strategies. First, it prioritizes experience data generated during training based on negative rewards, guiding the model to learn more frequently from critical failures rather than redundant successes. Second, it dynamically compares the action proposed by the TD3 network with a goal-oriented action from a classical path-planning algorithm in real time. By selecting the action with the higher estimated value, the model guides the policy toward a stable and effective trajectory from the earliest stages of training. To validate the efficacy of our approach, we conducted simulation-based experiments comparing the performance of the proposed model with existing reinforcement learning networks. To ensure statistical significance and mitigate the impact of random initialization, all reported results are averaged over 10 independent runs with different random seeds. 
The results quantitatively demonstrate that our model achieves significantly higher and more stable reward values, confirming a robust improvement in the path-planning process. Full article
(This article belongs to the Special Issue Advancements in Autonomous Navigation Systems for UAVs)
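The action-arbitration step described in the abstract above (compare the TD3 proposal with a goal-oriented classical-planner action and keep the one the critic values more) can be sketched as follows. This is an illustrative sketch, not the authors' code; `arbitrate_action` and the toy quadratic critic are hypothetical names.

```python
def arbitrate_action(critic, state, td3_action, planner_action):
    """Return whichever candidate action the critic values more highly.

    critic(state, action) is assumed to return an estimated Q-value; this
    mirrors the idea of comparing the TD3 proposal against a goal-oriented
    classical-planner action at every step.
    """
    if critic(state, td3_action) >= critic(state, planner_action):
        return td3_action
    return planner_action

# Toy critic: prefers actions close to the goal direction (1.0, 0.0).
critic = lambda s, a: -((a[0] - 1.0) ** 2 + (a[1] - 0.0) ** 2)
chosen = arbitrate_action(critic, None, (0.2, 0.9), (0.9, 0.1))
```

Here the planner's action (0.9, 0.1) wins because the toy critic scores it higher; early in training this mechanism steers the policy toward the planner's stable behavior.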
8 pages, 1931 KB  
Proceeding Paper
Maze Navigating Robot Using Lucas–Kanade Optical Flow with Coarse-to-Fine Method
by Hannah Mae Antaran and Cyrel O. Manlises
Eng. Proc. 2026, 134(1), 81; https://doi.org/10.3390/engproc2026134081 - 23 Apr 2026
Viewed by 141
Abstract
We applied the Lucas–Kanade optical flow method combined with a coarse-to-fine approach for robot navigation. While Lucas–Kanade is widely used for flow estimation and tracking, its utilization in robot navigation remains limited. Using a Raspberry Pi 5 (8 gigabytes) and a Logitech webcam, a mobile robot was developed that processes optical flow vectors to guide navigation decisions aimed at exiting a maze. While most maze navigation research relies on sensor fusion, we adopted computer vision to achieve collision-free navigation. The coarse-to-fine method effectively addresses the challenge of processing large motions inherent in Lucas–Kanade, resulting in an 80% success rate and 67% recovery rate. Simple linear regression analysis results revealed a negative correlation between optical flow magnitude and the robot’s distance to the nearest obstacle, indicating that closer obstacles correspond to higher flow magnitudes. The results highlight the potential of low-cost, vision-based autonomous navigation systems that eliminate the need for complex sensor arrays, making them suitable for cost-sensitive applications. The demonstrated effectiveness of the coarse-to-fine Lucas–Kanade method in handling large motion suggests its broader applicability in real-time robotic navigation, including autonomous vehicles and service robots operating in challenging or resource-limited environments. Full article
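The coarse-to-fine idea above (estimate flow on a downsampled image, then refine at full resolution using the upscaled estimate as a pre-warp) can be sketched in NumPy. The single global window, two-level pyramid, and integer pre-warp are simplifications for illustration, not the authors' implementation.

```python
import numpy as np

def lk_flow(I0, I1, guess=(0.0, 0.0)):
    """One Lucas-Kanade step: pre-warp I1 by the rounded guess, then solve a
    single global least-squares system for the residual sub-pixel motion."""
    du, dv = int(round(guess[0])), int(round(guess[1]))
    I1w = np.roll(np.roll(I1, -dv, axis=0), -du, axis=1)  # undo the guessed shift
    Ix = np.gradient(I0, axis=1)   # spatial gradients of the template image
    Iy = np.gradient(I0, axis=0)
    It = I1w - I0                  # temporal difference after warping
    A = np.stack([Ix.ravel(), Iy.ravel()], axis=1)
    sol, *_ = np.linalg.lstsq(A, -It.ravel(), rcond=None)
    return du + sol[0], dv + sol[1]

def coarse_to_fine(I0, I1, levels=2):
    """Estimate flow on downsampled images first, so large motions shrink
    toward the small-displacement range where the LK linearization holds."""
    flow = (0.0, 0.0)
    for lvl in reversed(range(levels)):
        s = 2 ** lvl
        u, v = lk_flow(I0[::s, ::s], I1[::s, ::s], (flow[0] / s, flow[1] / s))
        flow = (u * s, v * s)
    return flow
```

For example, on a 64×64 Gaussian blob shifted two pixels to the right, `coarse_to_fine` returns a flow close to (2, 0).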

22 pages, 12161 KB  
Article
SV-LIO: A Probabilistic Adaptive Semantic Voxel Map for LiDAR–Inertial Odometry
by Lixiao Yang and Youbing Feng
Electronics 2026, 15(8), 1744; https://doi.org/10.3390/electronics15081744 - 20 Apr 2026
Viewed by 160
Abstract
Accurate and real-time localization is a fundamental prerequisite for the autonomous navigation of mobile robots. LiDAR–Inertial Odometry (LIO) achieves high-precision state estimation and scene reconstruction in unknown environments by effectively fusing data from LiDAR and Inertial Measurement Units (IMUs). However, conventional LIO methods typically rely solely on geometric features during point cloud registration. In complex scenarios, such as outdoor unstructured or dynamic environments, these methods are often susceptible to reduced localization accuracy due to geometric degeneration or mismatches. To address these challenges, we propose SV-LIO, a probabilistic adaptive semantic voxel map for LiDAR–Inertial Odometry, which leverages point-wise semantic information from semantic segmentation to enhance registration accuracy and system robustness. Specifically, we construct a probabilistic adaptive semantic voxel map that extracts multi-scale spatial planes annotated with semantic information. Building on this representation, we employ a semantic-guided strategy for nearest-neighbor plane association between LiDAR scans and the local map, and construct semantic-weighted point-to-plane residuals to constrain pose estimation. By jointly optimizing the IMU-propagated pose prior and semantic-guided LiDAR observation constraints, SV-LIO realizes high-precision real-time state estimation and semantic scene reconstruction. Extensive experiments on the KITTI dataset demonstrate that SV-LIO achieves significant improvements in localization accuracy compared to state-of-the-art (SOTA) LIO methods, while also constructing semantic maps capable of providing rich environmental information. Full article
(This article belongs to the Section Electrical and Autonomous Vehicles)
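The semantic-weighted point-to-plane residual mentioned above can be illustrated in a few lines. The weighting scheme here (full weight when the point's label agrees with the plane's label, down-weighted otherwise) is an illustrative choice; the paper's exact formula may differ.

```python
def point_to_plane_residual(p, q, n, sem_point, sem_plane,
                            w_match=1.0, w_mismatch=0.1):
    """Semantic-weighted signed distance from point p to the plane (q, n).

    Down-weighting label mismatches rather than rejecting them outright is an
    illustrative assumption, not necessarily SV-LIO's exact weighting.
    """
    norm = sum(c * c for c in n) ** 0.5
    n = [c / norm for c in n]                      # unit plane normal
    d = sum(nc * (pc - qc) for nc, pc, qc in zip(n, p, q))
    w = w_match if sem_point == sem_plane else w_mismatch
    return w * d

# A point 1.5 m above a ground plane: full residual if labels agree,
# strongly attenuated if the point was segmented as a different class.
r_same = point_to_plane_residual([0, 0, 1.5], [0, 0, 0], [0, 0, 1], "road", "road")
r_diff = point_to_plane_residual([0, 0, 1.5], [0, 0, 0], [0, 0, 1], "car", "road")
```

Stacking such residuals over all scan-to-map plane associations yields the semantic-guided observation constraint that is jointly optimized with the IMU-propagated pose prior.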

20 pages, 1480 KB  
Article
DAGH-Net: A Density-Adaptive Gated Hybrid Knowledge Graph Network for Pedestrian Trajectory Prediction
by Feiyang Xu, Bin Zhang and Yaqing Liu
Electronics 2026, 15(8), 1738; https://doi.org/10.3390/electronics15081738 - 20 Apr 2026
Viewed by 221
Abstract
Pedestrian trajectory prediction is a fundamental task in autonomous driving and mobile robotics, where accurate forecasting requires modeling of both social interactions and scene-related constraints. However, existing methods typically rely on a fixed interaction modeling strategy, which may be insufficient under heterogeneous crowd densities. To address this limitation, we propose DAGH-Net, a density-adaptive gated hybrid network for pedestrian trajectory prediction. Built upon an SR-LSTM (State Refinement for LSTM) backbone, the proposed framework integrates two complementary reasoning pathways: a data-driven social interaction branch and a hybrid knowledge graph branch that encodes structured relational priors among pedestrians, obstacles, and walkable regions. A local-density-conditioned gating mechanism is further introduced to adaptively fuse these features according to the surrounding crowd condition of each pedestrian. This design helps suppress redundant interaction cues in sparse settings while strengthening socially compliant and scene-consistent reasoning in dense or conflict-prone environments. Experimental results on the ETH (Eidgenössische Technische Hochschule Zürich) and UCY (University of Cyprus) benchmarks, evaluated using Mean Average Displacement (MAD) and Final Average Displacement (FAD), show that DAGH-Net improves the average MAD and FAD by 1.6% and 4.2%, respectively, compared with SR-LSTM. Ablation studies further support the complementary contributions of the hybrid knowledge graph and the density-adaptive gating mechanism. We also discuss the limitations of the current density formulation and benchmark scale, which suggest several directions for future improvement. Full article
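The local-density-conditioned gating described above can be sketched as a logistic blend of the two branches' features. The gate form and the constants `k` and `d0` are illustrative assumptions, not DAGH-Net's actual parameters.

```python
import math

def fuse_branches(social_feat, graph_feat, local_density, k=1.0, d0=2.0):
    """Blend a data-driven social branch with a knowledge-graph branch using a
    local-density gate: gate -> 1 in dense crowds (lean on structured graph
    priors), gate -> 0 in sparse scenes (lean on the social branch).

    The logistic gate and the constants k, d0 are illustrative, not the
    values used by DAGH-Net.
    """
    gate = 1.0 / (1.0 + math.exp(-k * (local_density - d0)))
    return [gate * g + (1.0 - gate) * s
            for s, g in zip(social_feat, graph_feat)]

sparse = fuse_branches([1.0, 0.0], [0.0, 1.0], local_density=0.0)
dense = fuse_branches([1.0, 0.0], [0.0, 1.0], local_density=4.0)
```

In the sparse case the fused feature stays close to the social branch, suppressing redundant interaction cues; in the dense case it shifts toward the structured relational priors, which is the adaptive behavior the abstract describes.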

44 pages, 24044 KB  
Review
Ground Mobile Robots for High-Throughput Plant Phenotyping: A Review from the Closed-Loop Perspective of Perception, Decision, and Action
by Heng-Wei Zhang, Yi-Ming Qin, An-Qi Wu, Xi Xi, Pingfan Hu and Rui-Feng Wang
Plants 2026, 15(8), 1218; https://doi.org/10.3390/plants15081218 - 16 Apr 2026
Viewed by 655
Abstract
High-throughput plant phenotyping (HTPP) is increasingly limited by the mismatch between the need for field-relevant, fine-grained phenotypic information and the restricted capability of conventional observation platforms under complex agricultural conditions. Ground mobile robots are emerging as the key carrier for resolving this gap because they combine close-range sensing, autonomous mobility, and physical interaction within real field environments. In this paper, a structured scoping review is presented using a closed-loop perception–decision–action pipeline as the organizing principle. Within this framework, recent advances are synthesized from the perspectives of multimodal fusion, localization-aware sensing, motion planning, deep-learning-based phenotypic analysis, active observation, robotic intervention, and edge deployment. The review further clarifies the complementary roles of Unmanned Aerial Vehicles (UAVs), Unmanned Ground Vehicles (UGVs), and air–ground collaboration in multiscale phenotyping workflows. Beyond summarizing technologies, the article provides three concrete deliverables: a structured taxonomy of mobile phenotyping systems; comparative tables covering sensing modalities, localization/navigation methods, and AI models; and a research agenda linking technical progress to field deployability. The synthesis highlights four persistent bottlenecks, namely environmental generalization, annotation scarcity, limited standardization and reproducibility, and the gap between advanced models and agricultural edge hardware. Overall, ground robots are identified not merely as sensing platforms, but as the central system architecture for advancing mobile phenotyping toward autonomous, fine-grained, and field-deployable operation. Full article
(This article belongs to the Special Issue Advanced Remote Sensing and AI Techniques in Agriculture and Forestry)

36 pages, 1727 KB  
Article
Smart Cities in the Agentic AI Era: Three Vectors of Urban Transformation
by Esteve Almirall
Appl. Sci. 2026, 16(8), 3847; https://doi.org/10.3390/app16083847 - 15 Apr 2026
Viewed by 462
Abstract
Agentic artificial intelligence—systems that reason, plan, and act autonomously within governed workflows—is converging with autonomous electric mobility and urban robotics to reshape how cities govern, move, and manage physical space. We argue that the simultaneous arrival of these three vectors is triggering a transformation comparable in scope to the Industrial Revolution. Cities that deploy across all three domains are becoming the new hubs of innovation: they concentrate talent, accelerate knowledge circulation, enable cross-fertilisation, and generate hybrid proposals that no single vector could produce alone. Just as Manchester, Birmingham, and the Ruhr became the defining centres of industrialisation because steam, textiles, iron, and coal recombined through the proximity of the engineers and entrepreneurs who moved between them, a small number of cities today are pulling ahead because they host the shared talent pool around which agentic governance, autonomous mobility, and urban robotics co-evolve. Conceptually, we extend the mirroring hypothesis in two directions: dynamically, arguing that organisations and urban ecosystems converge toward the configurations new technologies make possible; and ontologically, arguing that agentic AI introduces non-human agents into organisational architectures, requiring hybrid human–AI coordination. We formalise this dynamic as five propositions (P1–P5) of cumulative recursive hybridisation (CRH), operating through four reinforcing feedback loops—data, regulation, infrastructure, and talent. Together, these loops explain why the emerging urban order is path-dependent: early movers accumulate compounding advantages, while latecomers face exponentially rising costs of entry. 
We demarcate CRH from adjacent frameworks—general-purpose technologies, organisational complementarities, and complex adaptive systems—and test it against counterfactual evidence from failed, stalled, and Global South trajectories (Sidewalk Toronto, the Cruise rollback, Songdo, Bengaluru). We also examine its political-economy, equity, and surveillance limits. Drawing on comparative evidence from public-sector chatbot deployments, autonomous mobility ecosystems in the United States and China, and emerging urban robotics cases, we conclude that what is at stake is not incremental modernisation but the construction of a new urban order. The cities that act as innovation hubs for the agentic AI era will shape global standards, attract global talent, and define the institutional templates that others eventually adopt—much as the industrial cities of the eighteenth and nineteenth centuries did. Full article

27 pages, 26831 KB  
Article
KA-IHO: A Kinematic-Aware Improved Hippo Optimization Algorithm for Collision-Free Mobile Robot Path Planning in Complex Grid Environments
by Chunhong Yuan, Yule Cai, Haohua Que, Yuting Pei, Xiang Zhang, Jiayue Xie, Qian Zhang, Lei Mu and Fei Qiao
Sensors 2026, 26(8), 2416; https://doi.org/10.3390/s26082416 - 15 Apr 2026
Viewed by 219
Abstract
Autonomous path planning in obstacle-dense environments remains challenging for swarm intelligence methods due to infeasible initialization, insufficient exploration–exploitation balance, and poor trajectory smoothness for real-robot execution. To address these issues, this paper proposes a Kinematic-Aware Improved Hippo Optimization algorithm (KA-IHO) for mobile robot path planning. The proposed method integrates four components: an elite safety pool initialization strategy to improve feasible solution generation in dense maps, a hierarchical elite-scout update mechanism to better balance global exploration and local exploitation, anti-stagnation mechanisms including a Population Stagnation Restart strategy and a 10-Direction Radial Micro-Search to guarantee high feasibility rates across all map complexities, and a late-stage Laplacian Line-of-Sight Ironing Operator to reduce path redundancy and improve trajectory smoothness. Comparative experiments are conducted on five reproducible grid maps with different complexity levels (40×40 and 80×80), where KA-IHO is evaluated against six representative algorithms, including HO, SBOA, PSO, GWO, ARO, and INFO, over 20 independent runs. The results show that KA-IHO consistently achieves collision-free planning and obtains lower mean fitness values with smaller standard deviations than the compared methods, indicating improved robustness and solution quality. In addition, hardware closed-loop experiments on a differential-drive mobile robot demonstrate that the planned paths can be executed reliably in real environments, with trajectory tracking errors controlled within ±4 cm. Full article
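The late-stage smoothing step above can be illustrated with a generic line-of-sight shortcut pass on a grid path: skip intermediate waypoints whenever the straight segment between two waypoints is obstacle-free. This is a standard technique shown for illustration, not the paper's Laplacian Line-of-Sight Ironing Operator.

```python
def has_line_of_sight(grid, a, b):
    """True if no sampled cell on the segment a-b is an obstacle (grid[r][c] != 0)."""
    steps = max(abs(b[0] - a[0]), abs(b[1] - a[1]), 1)
    for i in range(steps + 1):
        t = i / steps
        r = round(a[0] + t * (b[0] - a[0]))
        c = round(a[1] + t * (b[1] - a[1]))
        if grid[r][c]:
            return False
    return True

def shortcut(path, grid):
    """From each waypoint, jump straight to the farthest waypoint still visible."""
    out, i = [path[0]], 0
    while i < len(path) - 1:
        j = len(path) - 1
        while j > i + 1 and not has_line_of_sight(grid, path[i], path[j]):
            j -= 1
        out.append(path[j])
        i = j
    return out

# On an obstacle-free 5x5 grid, a 5-waypoint staircase collapses to its endpoints.
free = [[0] * 5 for _ in range(5)]
ironed = shortcut([(0, 0), (0, 1), (0, 2), (0, 3), (0, 4)], free)
```

Removing redundant intermediate waypoints in this way shortens the path and reduces the heading changes a differential-drive robot must execute, which is the effect the abstract reports as improved trajectory smoothness.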

22 pages, 11000 KB  
Article
Cooperative Joint Mission Between Seismic Recording and Surveying UAVs for Autonomous Near-Surface Characterization
by Jory Alqahtani, Ahmad Ihsan Ramdani, Pavel Golikov, Artem Timoshenko, Grigoriy Yashin, Ilya Mashkov, Van Do and Ezzedeen Alfataierge
Drones 2026, 10(4), 281; https://doi.org/10.3390/drones10040281 - 14 Apr 2026
Viewed by 511
Abstract
Land seismic data acquisition in arid areas is generally a labor-intensive, costly, and challenging process, often hindered by rough terrain and safety risks. To overcome these limitations, we propose the integration of autonomous Unmanned Aerial Vehicles (UAVs) into land seismic data acquisition, enabling efficient data collection in difficult, inaccessible terrain. The proposed system is a cooperative mission workflow: a Scouting UAV performs high-resolution aerial scouting, followed by the swarm deployment of an Autonomous Seismic Acquisition Device (ASAD) for seismic data recording. The cooperative system allows for precise landing and subsequent deployment of seismic sensors in optimal locations. Previously, we demonstrated the applicability of passive seismic data recorded with ASAD drones to near-surface characterization. This study covers the results of a field trial in which both the ASAD and Scouting UAV systems successfully acquired high-resolution seismic data with an active source, comparable to that of a conventional seismic data acquisition system. The results show that the ASAD seismic data exhibit a slightly higher noise level due to coupling variance and the fact that the conventional geophones were hardwired into 9-sensor arrays. However, due to its single-point sensing nature, the ASAD yields a superior frequency bandwidth, making it suitable for imaging shallow anomalies. The acquired data were used for P-wave refraction tomography modeling, which accurately detected a shallow subsurface cavity, showcasing the system's potential for near-surface characterization and shallow geohazard identification. This heterogeneous robotic system can support seismic data acquisition by enhancing safety, improving efficiency, and streamlining equipment mobilization, while minimizing environmental footprint. Full article
(This article belongs to the Special Issue Unmanned Aerial Systems for Geophysical Mapping and Monitoring)

36 pages, 7620 KB  
Article
Unified Modulation Matrix-Based Shared Control for Teleoperated Multi-Robot Formation and Obstacle Avoidance
by Ruidong Chen, Zhuoyue Zhang, Zhiyao Zhang, Jinyan Li and Haochen Zhang
Sensors 2026, 26(8), 2387; https://doi.org/10.3390/s26082387 - 13 Apr 2026
Viewed by 518
Abstract
Multi-omnidirectional mobile robot formations offer significant advantages for applications in unstructured environments. However, under constraints such as limited field of view and high operator cognitive load, existing teleoperation frameworks struggle to guarantee formation safety and stability. In this study, a bilateral shared control framework for multi-robot formation that integrates intent perception and vortex-field modulation is proposed. First, an Intent-Mediated Asymmetric Vortex Modulation (IM-AVM) strategy is developed, where the operator’s micro-intentions are mapped to determine the topological orientation of a vortex field. By constructing a dynamic asymmetric modulation matrix, saddle points in the potential field are geometrically eliminated, enabling deadlock-free obstacle avoidance while maintaining a rigid formation. Second, a multi-dimensional perception-based dynamic authority arbitration and topological deadlock escape mechanism is constructed, facilitating a seamless transition from assisted deadlock to autonomous escape. Finally, a formation coordination system based on anisotropic flow field modulation and adaptive sliding mode control is designed. Rigid formation constraints are transformed into a tangential safe flow field, and robust tracking is subsequently achieved through an Adaptive Nonsingular Fast Terminal Sliding Mode Controller (ANFTSMC). Theoretical analysis and experimental results demonstrate that the proposed framework achieves collision-free navigation for the formation in simulated environments. Full article
(This article belongs to the Section Sensors and Robotics)

28 pages, 3527 KB  
Article
Autonomous Tomato Harvesting System Integrating AI-Controlled Robotics in Greenhouses
by Mihai Gabriel Matache, Florin Bogdan Marin, Catalin Ioan Persu, Robert Dorin Cristea, Florin Nenciu and Atanas Z. Atanasov
Agriculture 2026, 16(8), 847; https://doi.org/10.3390/agriculture16080847 - 11 Apr 2026
Viewed by 1009
Abstract
Labor shortages and the need for increased productivity have accelerated the development of robotic harvesting systems for greenhouse crops; however, reliable operation under fruit occlusion and clustered arrangements remains a major challenge, particularly due to the limited integration between perception and motion planning modules. The paper presents the design and experimental validation of an autonomous robotic system for greenhouse tomato harvesting. The proposed platform integrates a rail-guided mobile base, a six-degrees-of-freedom robotic manipulator, and an adaptive end effector with a hybrid vision framework that combines convolutional neural networks and watershed-based segmentation to enable robust fruit detection and localization under occluded conditions. The proposed approach enables improved separation of overlapping fruits and provides accurate spatial localization through stereo vision combined with IMU-assisted camera-to-robot coordinate transformation. An occlusion-aware trajectory planning strategy was developed to generate collision-free manipulation paths in the presence of leaves and stems, enhancing harvesting safety and reliability. The system was trained and evaluated using a dataset of real greenhouse images supplemented with synthetic data augmentation. Experimental trials conducted under practical greenhouse conditions demonstrated a fruit detection precision of 96.9%, recall of 93.5%, and mean Intersection-over-Union of 79.2%. The robotic platform achieved an overall harvesting success rate of 78.5%, reaching 85% for unobstructed fruits, with an average cycle time of 15 s per fruit in direct harvesting scenarios. The rail-guided mobility significantly improved positioning stability and repeatability during manipulation compared with fully mobile platforms. 
The results confirm that integrating hybrid perception with occlusion-aware motion planning can substantially improve the functionality of robotic harvesting systems in protected cultivation environments. The proposed solution contributes to the advancement of automation technologies for greenhouse vegetable production and supports the transition toward more sustainable and labor-efficient agricultural practices. Full article

19 pages, 4757 KB  
Article
SCSANet: Split Convolution Selective Attention Network of Drivable Area Detection for Mobile Robots
by Maozhang Ye, Xiaoli Li, Jidong Dai, Hongyi Li, Zhouyi Xu and Chentao Zhang
Eng 2026, 7(4), 176; https://doi.org/10.3390/eng7040176 - 11 Apr 2026
Viewed by 213
Abstract
Detecting drivable areas is a fundamental task in autonomous driving systems. Although semantic segmentation networks have demonstrated strong performance in segmenting drivable regions, two key challenges persist. First, acquiring sufficient contextual information in complex road scenarios remains difficult, often leading to segmentation errors. Second, the coarseness of extracted features may degrade accuracy even when texture information is available in RGB images. To address these issues, we propose an enhanced DeepLabv3+ algorithm called Split Convolution Selective Attention Network (SCSANet), which incorporates the Adaptive Kernel (AK) and Split Convolution Attention (SCA) modules. AK adaptively adjusts the receptive field to accommodate varying road scenarios, while SCA improves boundary clarity by enhancing channel interaction. In addition, we employ surface normals to provide complementary geometric information, thereby strengthening the ability of the network to recognize drivable areas. To compensate for the lack of publicly available datasets for closed or semi-closed scenarios, we introduce XMUROAD, a new dataset of binocular disparity images. Experiments on the XMUROAD dataset demonstrate that the proposed architectural improvements yield an mIoU gain of 1.63% under the same RGB input, and the full pipeline with surface normal input achieves improvements of 1.55% to 2.59% in mF1 and 2.94% to 4.83% in mIoU over state-of-the-art methods. Experiments on the KITTI dataset further verify the generalization capability of SCSANet, with improvements of 1.58% in mF1 and 2.88% in mIoU over state-of-the-art methods. The proposed method provides a practical approach for accurate drivable area detection in closed and semi-closed mobile-robot scenarios. Full article
(This article belongs to the Special Issue Artificial Intelligence for Engineering Applications, 2nd Edition)

25 pages, 7380 KB  
Article
Integrated Air–Ground Robotic System for Autonomous Post-Blast Operations in GNSS-Denied Tunnels
by Goretti Arias-Ferreiro, Marco A. Montes-Grova, Francisco J. Pérez-Grau, Sergio Noriega-del-Rivero, Rafael Herguedas, María T. Lázaro, Amaia Castelruiz-Aguirre, José Carlos Jimenez Fernandez, Mustafa Karahan and Antonio Alonso-Cepeda
Remote Sens. 2026, 18(8), 1133; https://doi.org/10.3390/rs18081133 - 10 Apr 2026
Viewed by 571
Abstract
Post-blast operations in tunnel construction represent a critical bottleneck due to mandatory downtime and hazardous environmental conditions. This study addresses these challenges by developing and validating an integrated cyber–physical architecture that coordinates an autonomous Unmanned Aerial Vehicle (UAV) and an Autonomous Wheel Loader (AWL) under the supervision of a Digital Twin acting as the central operational digital interface. Specifically, this technology was designed to access the tunnel, evaluate post-blasting conditions, and initiate operations during mandatory exclusion periods for personnel. The system was validated in a realistic, Global Navigation Satellite System (GNSS)-denied tunnel environment emulating post-detonation visibility constraints. The results demonstrate that the aerial agent successfully navigated and mapped the excavation front in less than 8 min, establishing a shared coordinate system for the ground machinery. Through this collaborative workflow, the autonomous deployment enabled operations to commence 50% to 80% earlier than conventional manual procedures. Furthermore, the system reduced daily operational time by approximately 8%, with an estimated return on investment between one and seven months. Overall, the proposed framework eliminates human exposure during high-risk inspections and transforms the fragmented excavation cycle into a continuous, data-driven process. Full article
(This article belongs to the Special Issue Mobile Laser Scanning Systems for Underground Applications)

24 pages, 7253 KB  
Article
On the Design of Smooth Curvature Tunable Paths for Safe Motion of Autonomous Vehicles
by Gianfranco Parlangeli
Designs 2026, 10(2), 42; https://doi.org/10.3390/designs10020042 - 7 Apr 2026
Viewed by 231
Abstract
Navigation is an essential ability for autonomous systems, and efficient motion planning for mobile robots is a central topic in autonomous vehicle design and service robotics. Most path-planning algorithms produce reference paths with sharp or discontinuous turns, which cause several drawbacks during mission execution, such as unexpected inertial stress and strain on the mechanical structure, passenger discomfort, and unsafe, unpredictable deviation of the actual trajectory from the planned reference. In contrast, smooth and feasible trajectories are often desired in real-time navigation for nonholonomic mobile robots, where the surrounding environment can have a dynamic and complex shape with obstacles. In this paper, we propose a novel technique for generating smooth, collision-free, and near time-optimal paths for nonholonomic mobile robots. The proposed method exploits a set of tunable bump functions to produce smooth reference curves with tunable features (such as curvature or jerk) while keeping path length close to minimal, thus combining the advantages of the two most widely adopted techniques, namely Bézier interpolation and Dubins curves. After a thorough description of the analytical methods, the paper focuses on the design and tuning of the path-planning algorithm. A graphical method, numerical investigations, and examples are presented to fully explore the algorithm's potential and to show the efficiency of the proposed strategy. Full article
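As a rough illustration of the bump-function idea (not the paper's actual construction), a classic C-infinity bump can shape a curvature profile that is exactly zero at both ends with all derivatives vanishing there, so the turn joins straight segments smoothly; the sharpness parameter `k` below is a hypothetical tuning knob standing in for the paper's tunable features.

```python
import numpy as np

def bump(t, k=1.0):
    """C-infinity bump: positive on (-1, 1), exactly zero outside, with
    all derivatives vanishing at t = +/-1; k tunes the peak sharpness."""
    out = np.zeros_like(t, dtype=float)
    inside = np.abs(t) < 1.0
    out[inside] = np.exp(-k / (1.0 - t[inside] ** 2))
    return out

def smooth_turn(total_heading_change, n=400, k=1.0):
    """Curvature profile shaped like the bump, scaled so the heading
    integrates to the requested total change (trapezoid rule)."""
    s = np.linspace(-1.0, 1.0, n)
    kappa = bump(s, k)
    ds = s[1] - s[0]
    area = np.sum(0.5 * (kappa[1:] + kappa[:-1])) * ds
    kappa = kappa * (total_heading_change / area)
    heading = np.concatenate(([0.0],
                              np.cumsum(0.5 * (kappa[1:] + kappa[:-1]) * ds)))
    return s, kappa, heading

s, kappa, heading = smooth_turn(np.pi / 2)  # a smooth 90-degree turn
```

Because the curvature and all its derivatives vanish at the endpoints, the turn can be concatenated with straight segments without any jerk discontinuity; increasing `k` concentrates the curvature near the middle of the turn.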

25 pages, 4371 KB  
Article
GTS-SLAM: A Tightly-Coupled GICP and 3D Gaussian Splatting Framework for Robust Dense SLAM in Underground Mines
by Yi Liu, Changxin Li and Meng Jiang
Vehicles 2026, 8(4), 79; https://doi.org/10.3390/vehicles8040079 - 3 Apr 2026
Viewed by 529
Abstract
To address unstable localization and sparse mapping for autonomous vehicles operating in GPS-denied and low-visibility environments, this paper proposes GTS-SLAM, a tightly coupled dense visual SLAM framework integrating Generalized Iterative Closest Point (GICP) and 3D Gaussian Splatting (3DGS). The system is designed for intelligent driving platforms such as underground mining vehicles, inspection robots, and tunnel autonomous navigation systems. The front-end performs covariance-aware point-cloud registration using GICP to achieve robust pose estimation under low texture, dust interference, and dynamic disturbances. The back-end employs probabilistic dense mapping based on 3DGS, combined with scale regularization, scale alignment, and keyframe factor-graph optimization, enabling synchronized optimization of localization and mapping. A Compact-3DGS compression strategy further reduces memory usage while maintaining real-time performance. Experiments on public datasets and real underground-like scenarios demonstrate centimeter-level trajectory accuracy, high-quality dense reconstruction, and real-time rendering. The system provides reliable perception capability for autonomous vehicle navigation, obstacle avoidance, and path planning in confined, low-light environments. Overall, the proposed framework offers a deployable solution for autonomous driving and mobile robots requiring accurate localization and dense environmental understanding in challenging conditions. Full article
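The covariance-aware registration in a GICP front-end minimizes, per correspondence, the Mahalanobis residual d^T (C_b + R C_a R^T)^(-1) d. The toy sketch below (illustrative only, using the standard plane-to-plane GICP covariance model, not the paper's code) shows how a planar covariance down-weights residuals that slide along a surface while strongly penalizing those normal to it.

```python
import numpy as np

def gicp_residual(d, C_a, C_b, R=None):
    """Plane-to-plane GICP cost for one correspondence: the squared
    Mahalanobis norm of residual d under the combined covariance."""
    if R is None:
        R = np.eye(3)  # current rotation estimate; identity for the toy case
    M = C_b + R @ C_a @ R.T
    return float(d @ np.linalg.solve(M, d))

# Standard GICP planar covariance (1, 1, epsilon) in the local frame:
# nearly flat along the surface normal (z here), wide in the tangent plane.
eps = 1e-3
C_plane = np.diag([1.0, 1.0, eps])

cost_tangent = gicp_residual(np.array([0.1, 0.0, 0.0]), C_plane, C_plane)
cost_normal = gicp_residual(np.array([0.0, 0.0, 0.1]), C_plane, C_plane)
```

The same 10 cm residual costs orders of magnitude more when it violates the surface constraint than when it slides within the plane, which is what makes GICP robust in low-texture tunnel walls.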
(This article belongs to the Special Issue AI-Empowered Assisted and Autonomous Driving)

16 pages, 1529 KB  
Article
Image Segmentation-Guided Visual Tracking on a Bio-Inspired Quadruped Robot
by Hewen Xiao, Guangfu Ma and Weiren Wu
Biomimetics 2026, 11(4), 234; https://doi.org/10.3390/biomimetics11040234 - 2 Apr 2026
Viewed by 434
Abstract
Bio-inspired quadrupedal robots exhibit superior adaptability and mobility in unstructured environments, making them suitable for complex task scenarios such as navigation, obstacle avoidance, and tracking. Visual perception plays a critical role in enabling autonomous behavior, offering a cost-effective alternative to multi-sensor systems. This paper proposes an image segmentation-guided visual tracking framework to enhance both perception and motion control in quadruped robots. On the perception side, a cascaded convolutional neural network is introduced, integrating a global information guidance module to fuse low-level textures and high-level semantic features. This architecture effectively addresses limitations in single-scale feature extraction and improves segmentation accuracy under visually degraded conditions. On the control side, segmentation outputs are embedded into a biologically inspired central pattern generator (CPG), enabling coordinated generation of limb and spinal trajectories. This integration facilitates a closed-loop visual-motor system that adapts dynamically to environmental changes. Experimental evaluations on benchmark image segmentation datasets and robotic locomotion tasks demonstrate that the proposed framework achieves enhanced segmentation precision and motion flexibility, outperforming existing methods. The results highlight the effectiveness of vision-guided control strategies and their potential for deployment in real-time robotic navigation. Full article
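A minimal sketch of the CPG idea (generic coupled Hopf oscillators with trot phase offsets, an assumption for illustration, not the paper's vision-coupled formulation): four oscillators converge to a common amplitude with the diagonal leg pairs locked in phase and the two pairs half a cycle apart.

```python
import numpy as np

def trot_cpg(n_steps=20000, dt=1e-3, mu=1.0, omega=2 * np.pi, lam=1.0):
    """Four coupled Hopf oscillators with trot phase offsets.

    Each oscillator settles on a circular limit cycle of radius sqrt(mu);
    diffusive coupling in a rotated frame locks the legs into the trot
    pattern (LF, RF, LH, RH) = (0, pi, pi, 0)."""
    phase = np.array([0.0, np.pi, np.pi, 0.0])  # desired leg phases
    rng = np.random.default_rng(1)
    z = rng.normal(scale=0.3, size=(4, 2))      # (x, y) state per leg
    for _ in range(n_steps):
        dz = np.zeros_like(z)
        r2 = (z ** 2).sum(axis=1)
        # Hopf dynamics: attract to radius sqrt(mu), rotate at omega
        dz[:, 0] = (mu - r2) * z[:, 0] - omega * z[:, 1]
        dz[:, 1] = (mu - r2) * z[:, 1] + omega * z[:, 0]
        # Diffusive coupling: pull leg i toward leg j's state rotated
        # by the desired offset theta_ij = phase[i] - phase[j]
        for i in range(4):
            for j in range(4):
                if i == j:
                    continue
                th = phase[i] - phase[j]
                tx = np.cos(th) * z[j, 0] - np.sin(th) * z[j, 1]
                ty = np.sin(th) * z[j, 0] + np.cos(th) * z[j, 1]
                dz[i, 0] += lam * (tx - z[i, 0])
                dz[i, 1] += lam * (ty - z[i, 1])
        z = z + dt * dz
    return z

z = trot_cpg()
```

Here `z[:, 0]` would serve as the per-leg joint drive signal; in a vision-guided variant like the paper describes, the segmentation output would modulate parameters such as `mu` and `omega` online rather than leaving them fixed.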
