Search Results (377)

Search Parameters:
Keywords = robot operating system (ROS)

23 pages, 5270 KB  
Article
Decoupled Detection and Category-Level 6D Pose Estimation for Robot Grasping
by Chia-Tse Lai, Chen-Chien Hsu, Shao-Kang Huang and Yin-Tien Wang
Electronics 2026, 15(8), 1706; https://doi.org/10.3390/electronics15081706 - 17 Apr 2026
Viewed by 157
Abstract
6D object pose estimation is an essential component for robotic grasping. Most existing deep learning-based approaches focus on instance-level pose estimation, which requires prior object models and consequently limits their applicability on unseen objects in real-world scenarios. In contrast, category-level 6D pose estimation adopts Normalized Object Coordinate Space (NOCS) maps to represent intra-class object geometry, enabling pose prediction without relying on predefined object models and thus improving generalization to unseen instances. However, the original NOCS-based category-level framework typically trains NOCS prediction and object classification in a joint manner, which introduces NOCS regression error among inter-class instances with similar appearances, thereby degrading pose estimation accuracy. To address this issue, we integrate the YOLOv8 object detection with SegFormer and propose a novel Category-Level SegFormer for 6D Object Pose Estimation (CLSF-6DPE). By decoupling object classification from NOCS regression through independent learning branches, the proposed framework significantly improves pose estimation performance. Furthermore, we validate the practical feasibility of CLSF-6DPE by integrating it with a robotic gripper via the Robot Operating System (ROS) in a Real-World grasping setup. Experimental results on the CAMERA and Real-World datasets demonstrate that the proposed method achieves mAP scores of 93.8% and 81.1%, respectively. Overall, the proposed method provides a modular and effective solution for category-level pose estimation in real-world robotic grasping applications. Full article
(This article belongs to the Special Issue Robotics: From Technologies to Applications)
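Category-level NOCS pipelines like the one summarized above typically recover the 6D pose and scale by aligning predicted NOCS coordinates with depth-backprojected object points; the standard closed-form solver for that alignment is the Umeyama similarity transform. A minimal sketch (not the paper's exact solver):

```python
import numpy as np

def umeyama(src, dst):
    """Closed-form similarity transform (scale s, rotation R, translation t)
    minimizing ||dst - (s * R @ src + t)||^2, for src, dst of shape (N, 3).
    In a NOCS pipeline, src = predicted NOCS coords, dst = 3D points
    backprojected from the depth image."""
    mu_src, mu_dst = src.mean(0), dst.mean(0)
    src_c, dst_c = src - mu_src, dst - mu_dst
    cov = dst_c.T @ src_c / len(src)           # cross-covariance matrix
    U, D, Vt = np.linalg.svd(cov)
    S = np.eye(3)
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:
        S[2, 2] = -1                           # guard against reflections
    R = U @ S @ Vt
    var_src = (src_c ** 2).sum() / len(src)
    s = np.trace(np.diag(D) @ S) / var_src
    t = mu_dst - s * R @ mu_src
    return s, R, t
```

With noiseless correspondences the transform is recovered exactly; in practice the solver is usually wrapped in RANSAC to reject bad NOCS predictions.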

32 pages, 35109 KB  
Article
Semi-Automated Programming of Industrial Robotic Systems Using Large Language Models and Standardized Data Model
by Daniel Syniawa, Levin Droste and Bernd Kuhlenkötter
Robotics 2026, 15(4), 79; https://doi.org/10.3390/robotics15040079 - 15 Apr 2026
Viewed by 168
Abstract
The increasing application of industrial robots in modern production systems contrasts with a persistently high programming complexity that requires specialized know-how and creates substantial entry barriers. This work addresses this problem by introducing a systematic approach to robot programming based on Large Language Models (LLMs) that automatically translates natural language task descriptions into executable robot programs. The solution follows a two-stage pipeline: in Stage 1, the LLM structures the input into coherent process steps, and in Stage 2 these process steps are transformed into C++ code using a high-level function library. The performance is evaluated in simulation for the automated electrical cabinet assembly use case with terminal blocks, which is a significant element of various production processes. The architecture, based on the Robot Operating System 2 (ROS2) and MoveIt2, further integrates a standardized AutomationML-based configuration management for dynamic parameter handling and persistent state storage. A graphical user interface visualizes intermediate results, enables manual interventions and enables a simple operation for potential users without programming experience. The evaluation of the presented approach shows a success rate of up to 95% for interpreting natural language instructions and generating code in the application scenario focused. The system reliably recognizes object attributes and correctly executes complex assembly instructions. In general, this work demonstrates how modern LLMs can bridge the semantic gap between human intent and robotic code for industrial applications. The developed high-level abstraction makes the system usable for non-programmers, highlights the potential for intuitive robot programming, and simultaneously identifies concrete technical challenges. Full article
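The two-stage pipeline described above (structure the instruction into process steps, then map the steps onto a high-level function library) can be sketched offline; here a trivial keyword parser stands in for the LLM, and the library functions `pick_part`/`place_part` are hypothetical names, not the paper's actual C++ API:

```python
# Stage 1 would normally be an LLM call; a keyword parser stands in so the
# example runs offline. Library names below are illustrative assumptions.

def stage1_structure(task: str) -> list[dict]:
    """Stage 1: turn a free-form instruction into ordered process steps."""
    steps = []
    for clause in task.lower().split(" and "):
        if "pick" in clause:
            steps.append({"action": "pick", "object": clause.split()[-1]})
        elif "place" in clause:
            steps.append({"action": "place", "target": clause.split()[-1]})
    return steps

LIBRARY = {  # stage-2 mapping of step types onto the function library
    "pick":  lambda step: f'pick_part("{step["object"]}");',
    "place": lambda step: f'place_part("{step["target"]}");',
}

def stage2_codegen(steps: list[dict]) -> str:
    """Stage 2: translate structured steps into calls against the library."""
    return "\n".join(LIBRARY[s["action"]](s) for s in steps)
```

Keeping the two stages separate is what lets the paper's GUI show intermediate results and accept manual corrections between structuring and code generation.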

13 pages, 4062 KB  
Article
Robotic Harvesting of Apples Using ROS2
by Connor Ruybalid, Christian Salisbury and Duke M. Bulanon
Machines 2026, 14(4), 433; https://doi.org/10.3390/machines14040433 - 14 Apr 2026
Viewed by 360
Abstract
Rising global food demand, increasing labor costs, and farm labor shortages have created significant challenges for specialty crop production, particularly in labor-intensive tasks such as fruit harvesting. Robotic harvesting offers a promising long-term solution, yet its adoption in orchard environments remains limited due to unstructured conditions, variable lighting, and difficulties in fruit recognition and manipulation. This study presents an improved robotic fruit harvesting system, Orchard roBot (OrBot), developed by the Robotics Vision Lab at Northwest Nazarene University, with the goal of advancing autonomous apple harvesting applications. The updated OrBot platform integrates a dual-camera vision system consisting of an eye-to-hand stereo camera with a wide field of view for fruit detection and an eye-in-hand RGB-D camera for precise manipulation. The control architecture was redesigned using Robot Operating System 2 (ROS2) and Python, enabling modular subsystem development and coordination. Fruit detection was performed using a YOLOv5 deep learning model, and visual servoing was employed to guide the robotic manipulator toward the target fruit. System performance was evaluated through laboratory experiments using artificial trees and field tests conducted in a commercial apple orchard in Idaho. OrBot achieved a 100% harvesting success rate in indoor tests and a 75–80% success rate in outdoor orchard conditions. Experimental results demonstrate that the dual-camera approach significantly enhances fruit search efficiency and harvesting efficiency. Identified limitations include sensitivity to lighting conditions, end effector performance with varying fruit sizes, and depth estimation errors. Overall, the results indicate a positive potential toward effective robotic fruit harvesting and highlight key areas for future improvement in vision, manipulation, and system robustness. Full article
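Visual servoing of the kind used to guide OrBot's manipulator can be illustrated with a minimal proportional law that maps the detected fruit's pixel offset and depth to a camera-frame velocity command; the intrinsics, gain, and standoff below are illustrative assumptions, not OrBot's parameters:

```python
import numpy as np

def ibvs_velocity(target_px, center_px, depth_m, fx=615.0, fy=615.0, gain=0.5):
    """Minimal proportional visual-servo law: drive the camera so the
    detected fruit moves toward the image center while approaching it.
    fx, fy are assumed pinhole intrinsics; gain is a hand-picked constant."""
    ex = (target_px[0] - center_px[0]) / fx   # normalized image-plane error
    ey = (target_px[1] - center_px[1]) / fy
    # scale by depth so the metric correction is roughly distance-invariant
    vx = gain * ex * depth_m
    vy = gain * ey * depth_m
    vz = gain * max(depth_m - 0.05, 0.0)      # approach until ~5 cm standoff
    return np.array([vx, vy, vz])
```

A centered fruit yields pure approach motion; an off-center fruit adds a lateral correction proportional to its normalized pixel offset.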

33 pages, 2787 KB  
Article
Energy-Aware Adaptive Communication Topology with Edge-AI Navigation for UAV Swarms in GNSS-Denied Environments
by Alizhan Tulembayev, Alexandr Dolya, Ainur Kuttybayeva, Timur Jussupbekov and Kalmukhamed Tazhen
Drones 2026, 10(4), 273; https://doi.org/10.3390/drones10040273 - 9 Apr 2026
Viewed by 316
Abstract
Energy-efficient and resilient decentralized unmanned aerial vehicles (UAV) swarm operation in global navigation satellite system (GNSS) denied environments remains challenging because propulsion demand, communication load, and onboard inference are tightly coupled at the mission level. Although prior studies have examined some of these components separately, their joint evaluation within adaptive decentralized swarms remains limited under degraded navigation conditions. This study proposes an energy-aware adaptive communication-topology framework integrated with lightweight edge artificial intelligence (AI)-assisted navigation for decentralized UAV swarms operating without reliable GNSS support. The approach combines a unified mission-level energy-accounting structure for propulsion, communication, and onboard inference, a residual-energy-aware topology adaptation mechanism for preserving swarm connectivity, and a convolutional neural network-long short-term memory (CNN–LSTM) based edge-AI navigation module for improving localization robustness. The framework was evaluated in 1200 s Robot Operating System 2 (ROS2)–Gazebo–PX4 simulation scenarios against fixed topology and extended Kalman filter (EKF)-based baselines. Under the adopted simulation assumptions, the proposed configuration achieved a 22.7% reduction in total energy consumption, with the largest decrease observed in the communication-energy component, while preserving positive algebraic connectivity across all evaluated runs. The edge-AI module yielded a 4.8% root mean square error (RMSE) reduction relative to the EKF baseline, indicating a modest but meaningful improvement in localization performance. 
These results support the feasibility of integrated energy-aware swarm coordination in GNSS-denied environments; however, they should be interpreted as simulation-based evidence under the adopted modeling assumptions, and further high-fidelity propagation modeling, broader learning validation, and hardware-in-the-loop studies remain necessary. Full article
(This article belongs to the Section Artificial Intelligence in Drones (AID))
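The "positive algebraic connectivity" criterion reported above is the second-smallest eigenvalue (the Fiedler value) of the communication graph's Laplacian, which a topology-adaptation layer can monitor directly from the swarm's adjacency matrix:

```python
import numpy as np

def algebraic_connectivity(adj):
    """Second-smallest eigenvalue of the graph Laplacian L = D - A.
    It is positive iff the communication graph is connected, so a swarm
    can check it before dropping a high-energy link."""
    adj = np.asarray(adj, dtype=float)
    lap = np.diag(adj.sum(axis=1)) - adj
    eigvals = np.sort(np.linalg.eigvalsh(lap))
    return eigvals[1]
```

For a 3-node path graph the Laplacian spectrum is {0, 1, 3}, so the Fiedler value is 1; splitting off a node drives it to 0.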

19 pages, 712 KB  
Article
Federated Learning-Driven Protection Against Adversarial Agents in a ROS2 Powered Edge-Device Swarm Environment
by Brenden Preiss and George Pappas
AI 2026, 7(4), 127; https://doi.org/10.3390/ai7040127 - 1 Apr 2026
Viewed by 666
Abstract
Federated learning (FL) enables collaborative model training across distributed devices and robotic systems while preserving data privacy, making it well-suited for swarm robotics and edge-device-powered intelligence. However, FL remains vulnerable to adversarial behaviors such as data and model poisoning, particularly in real-world deployments where detection methods must operate under strict computational and communication constraints. This paper presents a practical, real-world federated learning framework that enhances robustness to adversarial agents in a ROS2-based edge-device swarm environment. The proposed system integrates the Federated Averaging (FedAvg) algorithm with a lightweight average cosine similarity-based filtering method to detect and suppress harmful model updates during aggregation. Unlike prior work that primarily evaluates poisoning defenses in simulated environments, this framework is implemented and evaluated on physical hardware, consisting of a laptop-based aggregator and multiple Raspberry Pi worker nodes. A convolutional neural network (CNN) based on the MobileNetV3-Small architecture is trained on the MNIST dataset, with one worker executing a sign-flipping model poisoning attack. Experimental results show that FedAvg alone fails to maintain meaningful model accuracy under adversarial conditions, resulting in near-random classification performance with a final global model accuracy of 11% and a loss of 2.3. In contrast, the integration of cosine similarity filtering effectively detects sign-flipping model poisoning in the evaluated ROS2 swarm experiment, allowing the global model to maintain an accuracy of around 90% and a loss of around 0.37 despite the presence of an attacker, close to the 93% baseline accuracy and only marginally higher loss of FedAvg alone under no attack.
In the presence of an attacker, the proposed method also maintains a false positive rate (FPR) of around 0.01 and a false negative rate (FNR) of around 0.10, a minimal difference from the baseline FedAvg-only results of around 0.008 and 0.07, respectively. Additionally, FedAvg with cosine similarity filtering maintains computational statistics similar to baseline FedAvg with no attacker: the average runtime rises only from about 34 min to about 35 min, and the average size of the global model shared among workers remains consistent at around 7.15 megabytes, showing little to no increase in message payload size. These results demonstrate that computationally lightweight cosine similarity-based detection methods can be effectively deployed in real-world, resource-constrained robotic swarm environments, providing a practical path toward improving robustness in federated learning deployments beyond simulation-based evaluation. Full article
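The aggregation defense described above can be sketched as FedAvg over flattened update vectors with a mean-cosine-similarity filter; the zero threshold and the exact filtering rule here are illustrative choices, not necessarily the paper's settings:

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two flattened update vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def robust_fedavg(global_w, updates, threshold=0.0):
    """FedAvg with average cosine-similarity filtering (a sketch of the
    idea): an update whose mean cosine similarity to the other workers'
    updates falls below `threshold` is excluded from the average.
    A sign-flipped update is anti-parallel to honest ones, so its mean
    similarity is strongly negative and it gets dropped."""
    keep = []
    for i, u in enumerate(updates):
        sims = [cosine(u, v) for j, v in enumerate(updates) if j != i]
        if np.mean(sims) >= threshold:
            keep.append(u)
    return global_w + np.mean(keep, axis=0), len(keep)
```

With three honest workers and one sign-flipping attacker, the attacker's mean similarity is about -1 and it is excluded, leaving the honest average intact.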

21 pages, 3855 KB  
Article
Digital Twin Framework for Robot Path Planning and Real-Time Execution Using Unity-ROS Integration: Systems Architecture and Experimental Validation
by Dhananjaya Kawshan and Qingjin Peng
Machines 2026, 14(4), 387; https://doi.org/10.3390/machines14040387 - 1 Apr 2026
Viewed by 681
Abstract
Digital Twin (DT) systems combining physics-based simulation with hardware execution are critical for Industry 4.0 manufacturing, yet proprietary software solutions remain expensive and platform-dependent. This work addresses three technical challenges: maintaining geometric and kinematic fidelity across CAD-to-simulation conversion pipelines, synchronizing dual physics engines (Unity and ROS middleware) under hardware latency constraints, and optimizing motion planning while preserving trajectory quality and interactive responsiveness. We developed an integrated framework for a 7-Degree-of-Freedom manipulator using CAD modeling, URDF/SRDF semantic representation, and bidirectional Unity-ROS (Robot Operating System) communication via WebSocket connectors. Motion planning uses RRTConnect from OMPL with collision-aware optimization through the Flexible Collision Library. Validation across 12 manipulation trials demonstrated positional synchronization accuracy of ±2.0 degrees, motion planning performance of 0.064 ± 0.020 s. Latency analysis reveals that hardware execution is the dominant system bottleneck, significantly exceeding network communication delays. The system achieves performance metrics comparable to proprietary industrial solutions. This work establishes a replicable, cost-effective Industry 4.0 framework, demonstrating that modern game engine technology combined with open-source robotics middleware can deliver DT systems matching proprietary solutions. The architecture and validated implementation enable adaptation to alternative robotic platforms and support broader adoption of simulation-validated automation in manufacturing contexts. Full article
(This article belongs to the Special Issue Intelligent Applications in Mechanical Engineering)
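Unity-ROS bridges of this kind commonly exchange JSON frames over WebSocket in the style of the rosbridge protocol; whether this paper's connector uses exactly this message type is an assumption, but a publish frame for joint states generally looks like:

```python
import json

def publish_frame(topic: str, positions: list) -> str:
    """Build a rosbridge-style 'publish' frame carrying a JointState-like
    payload. The op/topic/msg layout follows the rosbridge v2 protocol;
    the joint naming scheme here is an illustrative assumption."""
    frame = {
        "op": "publish",
        "topic": topic,
        "msg": {
            "name": [f"joint_{i}" for i in range(len(positions))],
            "position": positions,
        },
    }
    return json.dumps(frame)
```

The Unity side deserializes the same JSON to drive its articulation bodies, which is why serialization cost shows up in the latency budget at all.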

31 pages, 7864 KB  
Article
Development of a General-Purpose AI-Powered Robotic Platform for Strawberry Harvesting
by Muhammad Tufail, Jamshed Iqbal and Rafiq Ahmad
Agriculture 2026, 16(7), 769; https://doi.org/10.3390/agriculture16070769 - 31 Mar 2026
Viewed by 555
Abstract
The integration of emerging technologies such as robotics and artificial intelligence (AI) has the potential to transform agricultural harvesting by improving efficiency, reducing waste, lowering labor dependency, and enhancing produce quality. This paper presents the development of an intelligent robotic berry harvesting system that combines deep learning–based perception with autonomous robotic manipulation for real-time strawberry harvesting. A computer vision pipeline based on the YOLOv11 segmentation model was developed and integrated into a Smart Mobile Manipulator (SMM) equipped with autonomous navigation, a 6-degree-of-freedom (6-DoF) xArm 6 robotic arm, and ROS middleware to enable real-time operation. Using a publicly available strawberry dataset comprising 2,800 images collected under ridge-planted cultivation conditions, the proposed YOLOv11-small segmentation model achieved 84.41% mAP@0.5, outperforming YOLOv11 object detection, Faster R-CNN, and RT-DETR in segmentation quality while maintaining real-time performance at 10 FPS on an NVIDIA Jetson Orin Nano edge GPU. A PCA-based fruit orientation and geometric analysis method achieved 86.5% localization accuracy on 200 test images. Controlled indoor harvesting experiments using synthetic strawberries demonstrated an overall harvesting success rate of 72% across 50 trials. The proposed system provides a general-purpose platform for berry harvesting in controlled environments, offering a scalable and efficient solution for autonomous harvesting. Full article
(This article belongs to the Special Issue Advances in Robotic Systems for Precision Orchard Operations)
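The PCA-based fruit orientation step can be sketched directly: the eigenvector of the mask pixels' covariance matrix with the largest eigenvalue gives the berry's principal axis (a generic sketch, not the paper's full geometric analysis):

```python
import numpy as np

def principal_angle(mask):
    """Orientation of a binary fruit mask via PCA: the eigenvector of the
    pixel-coordinate covariance with the largest eigenvalue is the
    principal axis. Returns its angle in radians w.r.t. the x-axis
    (ambiguous by pi, since an axis has no preferred direction)."""
    ys, xs = np.nonzero(mask)
    pts = np.stack([xs, ys], axis=1).astype(float)
    pts -= pts.mean(axis=0)
    cov = pts.T @ pts / len(pts)
    eigvals, eigvecs = np.linalg.eigh(cov)    # eigenvalues ascending
    major = eigvecs[:, -1]                    # largest-eigenvalue direction
    return float(np.arctan2(major[1], major[0]))
```

The recovered axis tells the gripper how the stem-to-tip direction is tilted, which drives the approach orientation during grasping.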

28 pages, 9658 KB  
Article
Design and Implementation of a Real-Time Visual Tracking System for UAVs Based on PSDK
by Ranjun Yang, Ningbo Xie, Qinlin Li, Kefei Liao, Jie Lang and Kamarul Hawari Bin Ghazali
Sensors 2026, 26(7), 2145; https://doi.org/10.3390/s26072145 - 31 Mar 2026
Viewed by 450
Abstract
This paper presents the design and implementation of a real-time visual tracking system for unmanned aerial vehicles (UAVs), based on the DJI Payload Software Development Kit (PSDK), addressing the challenge of balancing high precision with low latency on resource-constrained edge platforms. By utilizing DJI PSDK to abandon the Robot Operating System (ROS) layer and its associated serialization overhead, the proposed Middleware-Free Architecture reduces end-to-end latency by over 60% to approximately 30 ms. To address computational constraints, a Lightweight Asymmetric De-coupled Visual Servoing (ADVS) strategy is proposed. It adopts orthogonal kinematic de-coupling to bypass Jacobian matrix inversion and integrates a non-linear dead-zone mechanism with dynamics-aware gain scheduling to compensate for sensing anisotropy and gravitational nonlinearity. Simultaneously, a Geometry-Aware Fusion strategy is employed to reject visual outliers, while a Finite State Machine (FSM) strictly enforces temporal consistency. Field experiments in various scenarios verify the system’s stability and tracking capability. Specifically, the platform maintains a robust lock on targets at speeds up to 23 m/s across dynamic maneuvers. The successful implementation of this system confirms that high-performance edge tracking does not rely solely on the scaling of visual model complexity but can also be effectively achieved through the architectural minimization of latency combined with the optimization of theoretically grounded robust control strategies. Full article
(This article belongs to the Section Sensors and Robotics)
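The non-linear dead-zone with gain scheduling mentioned above can be illustrated with a scalar control law; the dead-zone width, gains, and switching point below are made-up illustrative values, not the paper's tuning:

```python
def dead_zone_gain(error, dz=0.02, k_near=0.8, k_far=2.0, switch=0.3):
    """Non-linear dead zone with simple gain scheduling (illustrative
    values): errors inside the dead zone produce no command, small errors
    use a gentle gain, and large errors a stronger one, echoing the ADVS
    idea of shaping the response instead of inverting a full Jacobian."""
    mag = abs(error)
    if mag <= dz:
        return 0.0                       # suppress jitter around the setpoint
    k = k_near if mag < switch else k_far
    sign = 1.0 if error > 0 else -1.0
    return k * sign * (mag - dz)         # continuous at the dead-zone edge
```

Subtracting the dead-zone width before scaling keeps the command continuous where the dead zone ends, avoiding a step in the control output.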

19 pages, 87001 KB  
Article
DEM-Based Traversability Map Generation for 2.5D Autonomous Multirobot Navigation
by David Orbea, Juan Mateos Budiño, Christyan Cruz Ulloa, Jaime del Cerro and Antonio Barrientos
Appl. Sci. 2026, 16(7), 3351; https://doi.org/10.3390/app16073351 - 30 Mar 2026
Viewed by 463
Abstract
Autonomous mobile robots operating in outdoor environments must have an understanding of the surrounding terrain geometry to ensure efficient and safe navigation. This article presents a DEM-based intelligent traversability mapping framework to transform open-source geospatial data into slope-aware cost maps for multirobot autonomous navigation within the ROS2 framework. The proposed cv_gdal algorithm automatically processes GeoTIFF elevation data using adaptive slope thresholding based on each robot’s physical capabilities, generating ROS-compatible cell occupancy maps. Six regions of Spain were used to evaluate terrain representation accuracy and navigation performance in kilometer-scale DEMs. This framework enables autonomous perception-to-planning pipelines and supports the deployment of multirobot systems for search and rescue (SAR) tasks. By bridging geospatial analytics with robotic perception and adaptive decision-making, this work contributes to the development of intelligent, self-configuring robotic systems capable of operating safely in complex outdoor environments. Full article
(This article belongs to the Special Issue Robotics and Intelligent Systems: Technologies and Applications)
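Slope-aware cost map generation from a DEM, as described above, reduces to computing the terrain gradient and thresholding it against each robot's climbing limit; a generic sketch (not the cv_gdal implementation, and without the GeoTIFF I/O):

```python
import numpy as np

def slope_costmap(dem, cell_size_m, max_slope_deg):
    """Turn a DEM grid into an occupancy-style costmap: cells whose local
    slope exceeds the robot's climbing limit become lethal (100), the
    rest scale linearly with slope, mirroring the adaptive per-robot
    thresholding idea."""
    dz_dy, dz_dx = np.gradient(dem, cell_size_m)          # finite differences
    slope_deg = np.degrees(np.arctan(np.hypot(dz_dx, dz_dy)))
    cost = np.clip(slope_deg / max_slope_deg * 99.0, 0, 99).astype(int)
    cost[slope_deg > max_slope_deg] = 100                 # untraversable
    return cost
```

Because the threshold is a parameter, the same DEM yields a different costmap per robot, which is what makes the maps "adaptive" to each platform's capabilities.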

18 pages, 12077 KB  
Article
ROS 2-Driven Navigation and Sensor Platform for Quadruped Robots
by Vegard Brekke, Erlend Odd Berge, Eirik Dybdahl, Jayant Singh and Ilya Tyapin
Robotics 2026, 15(4), 70; https://doi.org/10.3390/robotics15040070 - 26 Mar 2026
Viewed by 1048
Abstract
This paper presents an open-source ROS 2 navigation and sensor platform for quadruped robots, demonstrated on Boston Dynamics Spot in a laboratory environment. The platform integrates SLAM Toolbox for mapping and localisation, Navigation2 with MPPI and Smac Hybrid-A* for global path planning, and a frontier-based autonomous exploration module with practical handling of unreachable frontiers. The paper validates and verifies current, open-source algorithms deployed on off-the-shelf hardware. A greedy wavefront-based frontier selection method is presented that prioritizes Time-to-Closest-Viable-Frontier (TCVF) by terminating the search as soon as a feasible frontier is identified. On a real robot dataset replayed across five goal scenarios, the method reduces median selection latency from 94.31 ms to 51.08 ms (95th percentile: 109.54 ms to 56.99 ms), corresponding to a 1.85-times improvement in compute time compared to a standard implementation. The system also employs Zenoh middleware and Foxglove for remote monitoring and control, enabling flexible, high-bandwidth operation. The platform, including configuration files and launch scripts, is released openly to support future research and deployment on quadruped robots. Full article
(This article belongs to the Section Sensors and Control in Robotics)
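The greedy wavefront idea above, terminating at the first feasible frontier instead of ranking all of them, can be sketched as a BFS over an occupancy grid; the cell values loosely follow the ROS occupancy convention and the details are illustrative, not the paper's implementation:

```python
from collections import deque

FREE, UNKNOWN, OBSTACLE = 0, -1, 100   # loosely the ROS occupancy convention

def first_frontier(grid, start):
    """Greedy wavefront frontier search: BFS over free cells from the
    robot's cell, returning the first free cell adjacent to unknown space.
    Stopping at the first hit is what trades frontier optimality for low
    selection latency (the TCVF idea)."""
    rows, cols = len(grid), len(grid[0])
    seen, queue = {start}, deque([start])
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if not (0 <= nr < rows and 0 <= nc < cols):
                continue
            if grid[nr][nc] == UNKNOWN:
                return (r, c)            # frontier: free cell touching unknown
            if grid[nr][nc] == FREE and (nr, nc) not in seen:
                seen.add((nr, nc))
                queue.append((nr, nc))
    return None                          # no reachable frontier: map complete
```

Because BFS expands in order of grid distance, the first frontier found is also (approximately) the closest viable one, so no global ranking pass is needed.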

23 pages, 27743 KB  
Review
A Framework for Safe Mobile Manipulation in Human-Centered Applications
by Pangcheng David Cen Cheng, Cesare Luigi Blengini, Rosario Francesco Cavelli, Angela Ripi and Marina Indri
Robotics 2026, 15(4), 68; https://doi.org/10.3390/robotics15040068 - 25 Mar 2026
Viewed by 570
Abstract
In recent years, applications with robots collaborating actively with humans have been increasing. The transition from Industry 4.0 to 5.0 rearranges the focus of fully automated processes to a human-centered system that allows more customization and flexibility. In human-centered systems, the robot is expected to safely assist or provide support to the human operator, avoiding any unintentional harm, while the latter is focused on tasks that require human reasoning, since current decision-making systems still have some limitations. This survey reviews all the main functionalities required to make a robot (collaborative or not) act as an assistant for human operators, analyzing and comparing solutions proposed by the authors (based on previous works) and/or the ones available in the literature. In this way, it is possible to combine those functionalities and build a complete framework enabling safe mobile manipulation while interacting with humans. In particular, a mobile manipulator is used to receive requests from a user, navigate in a human-shared environment, identify the requested object, and grasp and safely deliver such an object to the user. The framework, which is completed by a user interface designed using Android Studio, is developed in ROS1, tested, and validated on a real mobile manipulator in real-world conditions. Full article
(This article belongs to the Special Issue Human–Robot Collaboration in Industry 5.0)

30 pages, 22493 KB  
Article
H-CoRE: A Cooperative Framework for Heterogeneous Multi-Robot Exploration and Inspection
by Simone D’Angelo, Francesca Pagano, Riccardo Caccavale, Vincenzo Scognamiglio, Alessandro De Crescenzo, Pasquale Merone, Stefano Ciaravino, Alberto Finzi and Vincenzo Lippiello
Drones 2026, 10(4), 232; https://doi.org/10.3390/drones10040232 - 25 Mar 2026
Viewed by 639
Abstract
This paper presents the H-CoRE (Heterogeneous Cooperative Multi-Robot Execution) framework designed to enable autonomous multi-robot operations in GNSS-denied environments. Built on an ROS 2-based architecture, H-CoRE enables collaborative, structured task execution through standardized software stacks. Each robot’s stack combines a high-level executive system with an agent-specific motion layer and leverages multi-sensor fusion for localization and mapping. The framework is inherently reconfigurable, allowing individual agents to operate autonomously or as part of a multi-robot team for collaborative missions. In the considered scenario, the system integrates aerial and ground vehicles, a fixed pan–tilt–zoom camera, and a human supervisory interface within a unified, modular infrastructure. The proposed system has been deployed in indoor, GNSS-denied environments, demonstrating autonomous navigation, cooperative area coverage, and real-time information sharing across multiple agents. Experimental results confirm the effectiveness of H-CoRE in maintaining general awareness and mission continuity, paving the way for future applications in search-and-rescue, inspection, and exploration tasks. Full article

30 pages, 8087 KB  
Article
A Novel SLAM Approach for Trajectory Generation of a Dual-Arm Mobile Robot (DAMR) Using Sensor Fusion
by Narendra Kumar Kolla and Pandu Ranga Vundavilli
Automation 2026, 7(2), 42; https://doi.org/10.3390/automation7020042 - 3 Mar 2026
Viewed by 619
Abstract
Simultaneous Localization and Mapping (SLAM) is essential for autonomous movement in intelligent robotic systems. Traditional SLAM using a single sensor, such as an Inertial Measurement Unit (IMU), faces challenges including noise and drift. This paper introduces a novel Cartographer-based SLAM approach for DAMR trajectory generation in indoor environments to reduce drift errors and improve localization accuracy. This SLAM approach integrates multi-sensor data with extended Kalman filter (EKF) fusion from wheel odometry, an RGB-D camera (RTAB-Map), and an IMU for precise mapping with DAMR trajectory generation and is compared with the heading reference trajectory generated by robot pose estimation and frame transformation. This system is implemented in the Robot Operating System (ROS 2) for coordinated data acquisition, processing, and visualization. After experimental verification, the DAMR trajectories generated are closer to the reference trajectory and drift errors are reduced. The experimental results revealed that the DAMR trajectory with multi-sensor data integration using the EKF effectively improved the positioning accuracy and robustness of the system. The proposed approach shows improved alignment with the reference trajectory, yielding a mean displacement error of 0.352% and an absolute trajectory error of 0.007 m, highlighting the effectiveness of the fusion approach for accurate indoor robot navigation. Full article
(This article belongs to the Section Robotics and Autonomous Systems)
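EKF-style fusion of odometry and an exteroceptive position fix can be illustrated with a single predict/update cycle on a toy 1-D constant-velocity state; this linear sketch omits the full 2D/3D state and the RTAB-Map specifics of the actual system:

```python
import numpy as np

def kf_step(x, P, u, z, dt, q=0.01, r=0.05):
    """One predict/update cycle of a (linear) Kalman filter on a 1-D
    constant-velocity state [position, velocity]: a toy stand-in for the
    EKF fusion of wheel odometry (control u, an assumed acceleration) with
    an exteroceptive position fix z (e.g. from the RGB-D pipeline)."""
    F = np.array([[1.0, dt], [0.0, 1.0]])    # constant-velocity motion model
    B = np.array([0.5 * dt**2, dt])          # control-input mapping
    H = np.array([[1.0, 0.0]])               # we measure position only
    # predict with the motion model
    x = F @ x + B * u
    P = F @ P @ F.T + q * np.eye(2)
    # update with the position measurement
    y = z - H @ x                            # innovation
    S = H @ P @ H.T + r                      # innovation covariance
    K = P @ H.T / S                          # Kalman gain
    x = x + (K * y).ravel()
    P = (np.eye(2) - K @ H) @ P
    return x, P
```

After the update, the position estimate moves most of the way toward the measurement and its covariance shrinks, which is the mechanism by which the fusion suppresses odometry drift.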

22 pages, 6376 KB  
Article
Simulator-Based Digital Twin of a Robotics Laboratory
by Lluís Ribas-Xirgo
Machines 2026, 14(3), 273; https://doi.org/10.3390/machines14030273 - 1 Mar 2026
Viewed by 726
Abstract
Simulator-based digital twins are widely used in robotics education and industrial development to accelerate prototyping and enable safe experimentation. However, they often hide implementation details that are essential for understanding, diagnosing, and correcting system failures. This paper introduces a technology-independent model-based design framework [...] Read more.
Simulator-based digital twins are widely used in robotics education and industrial development to accelerate prototyping and enable safe experimentation. However, they often hide implementation details that are essential for understanding, diagnosing, and correcting system failures. This paper introduces a technology-independent model-based design framework that provides students with full visibility of the computational mechanisms underlying robotic controllers while remaining feasible within a 150-h undergraduate course. The approach relies on representing controller behavior using networks of Extended Finite State Machines (EFSMs) and their stacked extension (EFS2M), which unify all abstraction levels of the control architecture—from low-level reactive behaviors to high-level deliberation—under a single formal model. A structured programming template ensures traceable, optimization-free software synthesis, facilitating debugging and enabling self-diagnosis of design flaws. The framework includes real-time synchronized simulation, transparent switching between virtual and physical robots, and a smart data logger that captures meaningful events for model updating and error detection. Integrated into the Intelligent Robots course, the system supports topics such as kinematics, control, perception, and simultaneous localization and mapping (SLAM) while avoiding dependency on specific middleware such as Robot Operating System (ROS) 2. Over three academic years, students reported positive hands-on experiences, strong adaptability to diverse modeling approaches, and consistently high survey ratings reflecting the course’s overall quality. The proposed environment thus offers an effective methodology for teaching end-to-end robot controller design through transparent, simulation-driven digital twins. Full article
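The EFSM representation described in this abstract can be sketched as a set of named states plus guarded transitions that read inputs and update context variables. The class, the go-to-goal behavior, and all names below are illustrative assumptions, not the paper's actual programming template:

```python
class EFSM:
    """Toy extended finite state machine: named states plus guarded
    transitions that read inputs and update context variables."""

    def __init__(self, initial, ctx):
        self.state = initial
        self.ctx = dict(ctx)
        self.transitions = []        # list of (src, guard, action, dst)

    def add(self, src, guard, action, dst):
        self.transitions.append((src, guard, action, dst))

    def step(self, inputs):
        """Fire the first enabled transition out of the current state."""
        for src, guard, action, dst in self.transitions:
            if src == self.state and guard(self.ctx, inputs):
                action(self.ctx, inputs)
                self.state = dst
                return


# Illustrative go-to-goal behavior: IDLE -> DRIVE when a goal arrives,
# DRIVE loops while distance remains, DRIVE -> IDLE on arrival.
m = EFSM("IDLE", {"dist": 0.0})
m.add("IDLE",
      lambda c, i: i.get("goal_dist", 0.0) > 0.0,
      lambda c, i: c.update(dist=i["goal_dist"]),
      "DRIVE")
m.add("DRIVE",
      lambda c, i: c["dist"] > 0.1,
      lambda c, i: c.update(dist=c["dist"] - i["speed"] * i["dt"]),
      "DRIVE")
m.add("DRIVE",
      lambda c, i: c["dist"] <= 0.1,
      lambda c, i: None,
      "IDLE")
```

Because every transition, guard, and variable update is explicit, a student can trace exactly why the machine fired a given transition, which is the transparency argument the abstract makes against black-box simulators.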
(This article belongs to the Section Automation and Control Systems)
21 pages, 3979 KB  
Article
A Docker-Enabled Real-Time Framework for Robotic Applications in Heterogeneous ROS 2 Environments
by Ji Min Lim, Keon Woo Kim, Byoung Wook Choi and Raimarius Delgado
Processes 2026, 14(5), 804; https://doi.org/10.3390/pr14050804 - 28 Feb 2026
Viewed by 707
Abstract
Real-time performance remains a core requirement for safety-critical robotic applications. ROS 2 has become a de facto middleware standard, while Docker is increasingly adopted for modular and portable deployment. However, embedded hardware updates often constrain Linux distributions and real-time kernel versions, while existing [...] Read more.
Real-time performance remains a core requirement for safety-critical robotic applications. ROS 2 has become a de facto middleware standard, while Docker is increasingly adopted for modular and portable deployment. However, embedded hardware updates often constrain Linux distributions and real-time kernel versions, while existing software stacks depend on older ROS 2 releases and legacy libraries. This mismatch forces costly porting and revalidation, motivating heterogeneous deployments that mix ROS 2 versions across host and Docker container runtimes. Yet the overheads introduced by Docker and cross-version ROS 2 communication are not well quantified in terms of real-time guarantees. This paper presents a Docker-enabled real-time framework for evaluating robotic applications in heterogeneous ROS 2 deployments. The framework integrates an RT-PREEMPT–patched Linux kernel, Dockerized ROS 2 distributions, and configurable cross-version communication pathways to enable controlled, repeatable experiments without full-stack migration. We empirically quantify Docker-induced effects on real-time execution using task periodicity, jitter, and response time, and assess ROS 2 communication using end-to-end latency under host-only, container-only, and hybrid configurations. To demonstrate practical viability, we apply the framework to an operational mobile-robot use case that integrates legacy control code with new modules, including a reinforcement-learning decision layer, within a mixed host–container ROS 2 stack. The resulting analyses provide reusable tooling and actionable guidelines for deploying deterministic ROS 2 systems under containerized heterogeneous constraints. Full article
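The task-periodicity and jitter metric this framework evaluates can be illustrated with a minimal periodic loop that tracks how far each wake-up deviates from its absolute release time. This Python sketch only illustrates the metric; the paper's measurements run on an RT-PREEMPT-patched kernel, not in Python:

```python
import time

def measure_jitter(period_s, cycles, work=lambda: None):
    """Run a periodic loop with absolute release times and record, for
    each cycle, how far the actual wake-up deviates from its nominal
    release time (wake-up jitter, in nanoseconds)."""
    jitter_ns = []
    period_ns = int(period_s * 1e9)
    next_release = time.monotonic_ns() + period_ns
    for _ in range(cycles):
        # Sleep until the absolute release time; computing the remaining
        # time each cycle prevents drift from accumulating across cycles.
        remaining = next_release - time.monotonic_ns()
        if remaining > 0:
            time.sleep(remaining / 1e9)
        jitter_ns.append(time.monotonic_ns() - next_release)
        work()
        next_release += period_ns
    return jitter_ns
```

On a stock kernel the recorded jitter varies widely with system load; the point of an RT-PREEMPT kernel, and of the paper's host-versus-container comparisons, is to bound exactly this quantity.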
(This article belongs to the Special Issue Advances in the Control of Complex Dynamic Systems)