
Search Results (672)

Search Parameters:
Keywords = interaction robotized tasks

22 pages, 2411 KB  
Review
Granular Jamming in Soft Robotics: Simulation Frameworks and Emerging Possibilities—Review
by Stella Hrehova, Alexander Hošovský, Jozef Husár and Tibor Krenický
Biomimetics 2026, 11(3), 193; https://doi.org/10.3390/biomimetics11030193 (registering DOI) - 6 Mar 2026
Abstract
Soft robotics has become a dynamic field that emphasizes adaptability and safe interaction with complex environments. These structures utilize deformable materials and continuum mechanics to adapt their shape, absorb shocks, and perform tasks in unstructured environments. However, the design and optimization of these systems are challenging, primarily due to the nonlinear and discontinuous behavior of granular materials. In this paper, we address the role of simulation frameworks as an important tool for understanding, designing, and extending the functionality of soft robotic devices that utilize granular jamming. The analysis suggests that DEM is essential for capturing particle-level mechanisms, while FEM is more effective for system-level optimization but tends to smooth out the jamming transition. Hybrid FEM–DEM approaches provide the highest physical accuracy, albeit at an increased computational cost. Overall, the findings emphasize that the choice of framework must be application-oriented and that multiphysics coupling represents the main direction of future development. The paper gives an up-to-date review of the simulation tools and approaches for granular-jamming-based systems, with a specific focus on continuum arms built around a granular-jamming central backbone. Such methods can be used to optimize the backbone geometry and its filling material (shape, porosity, granule size), with possible use in the real-time control of such arms.
(This article belongs to the Section Locomotion and Bioinspired Robotics)

53 pages, 5533 KB  
Systematic Review
Embodied AI with Foundation Models for Mobile Service Robots: A Systematic Review
by Matthew Lisondra, Beno Benhabib and Goldie Nejat
Robotics 2026, 15(3), 55; https://doi.org/10.3390/robotics15030055 - 4 Mar 2026
Abstract
Rapid advancements in foundation models, including Large Language Models, Vision-Language Models, Multimodal Large Language Models, and Vision-Language-Action models, have opened new avenues for embodied AI in mobile service robotics. By combining foundation models with the principles of embodied AI, where intelligent systems perceive, reason, and act through physical interaction, mobile service robots can achieve more flexible understanding, adaptive behavior, and robust task execution in dynamic real-world environments. Despite this progress, embodied AI for mobile service robots continues to face fundamental challenges related to the translation of natural language instructions into executable robot actions, multimodal perception in human-centered environments, uncertainty estimation for safe decision-making, and computational constraints for real-time onboard deployment. In this paper, we present the first systematic review of foundation models in mobile service robotics, following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. Using an OpenAlex literature search, we considered 7506 papers spanning the years 1968–2025. Our detailed analysis identified four main challenges and examined how recent advances in foundation models have addressed them: the translation of natural language instructions into executable robot actions, multimodal perception in human-centered environments, uncertainty estimation for safe decision-making, and computational constraints for real-time onboard deployment. We further examine real-world applications in domestic assistance, healthcare, and service automation, highlighting how foundation models enable context-aware, socially responsive, and generalizable robot behaviors. Beyond technical considerations, we discuss the ethical, societal, human-interaction, and physical design and ergonomic implications of deploying foundation-model-enabled service robots in human environments. Finally, we outline future research directions emphasizing reliability and lifelong adaptation, privacy-aware and resource-constrained deployment, and the governance and human-in-the-loop frameworks required for safe, scalable, and trustworthy mobile service robotics.
(This article belongs to the Special Issue Embodied Intelligence: Physical Human–Robot Interaction)

19 pages, 4128 KB  
Review
When Robots Learn: A Bibliometric Review of Artificial Intelligence in Engineering Applications of Robotics
by Eduardo García-Sardón, Pablo Fernández-Arias, Antonio del Bosque and Diego Vergara
Appl. Sci. 2026, 16(5), 2466; https://doi.org/10.3390/app16052466 - 4 Mar 2026
Abstract
The convergence of robotics and artificial intelligence (AI) has transformed engineering by enabling the design of intelligent systems capable of learning, adapting, and performing complex tasks. These synergies are driving innovation across multiple engineering disciplines, including mechanical, materials, electrical, industrial, civil, and aerospace engineering. This review provides a comprehensive overview of the knowledge structure and emerging research directions of robotics and AI in engineering, with the aim of identifying research trends, influential authors, leading institutions, and emerging thematic areas. Data were collected from the Web of Science and Scopus databases, covering the period from 2020 to 2025, and analyzed using bibliometric mapping techniques and performance indicators. The results reveal sustained growth in research on autonomous systems, collaborative robots, and human–robot interaction within engineering contexts, with a strong emphasis on AI-driven optimization. Bibliometric analyses show that deep learning, reinforcement learning, and computer vision constitute the core enabling technologies structuring the field. In addition, the results highlight a high degree of international collaboration and a concentration of scientific output and impact in a limited number of leading countries, institutions, and journals.
(This article belongs to the Special Issue Advanced Technologies Applied in Digital Media Era)

22 pages, 5149 KB  
Article
Proof of Concept of an Occupational Machine for Biomechanical Load Reduction: Interpreting the User’s Intent
by Francesco Durante
Robotics 2026, 15(3), 53; https://doi.org/10.3390/robotics15030053 - 28 Feb 2026
Abstract
This paper presents a bench-top occupational power-assist robot aimed at reducing biomechanical effort during repetitive material handling. The prototype adopts a SCARA-like structure with three degrees of freedom and provides assistance on the vertical (z) axis through a three-phase brushless DC (BLDC) motor driven in field-oriented control with inner-loop current regulation. The user interacts with the robot through a single handle-mounted load cell. The measured interaction force is converted, via a calibration-based mapping, into a motor current reference that enforces a prescribed force-sharing ratio. In this way, the drive’s embedded current loop acts as the low-level torque regulator, and the system can share gravitational and inertial loads without additional environment force sensing or explicit high-level impedance/admittance dynamics. A coupled electro-mechanical model is derived and used to select the assistance gain and to verify feasibility in simulation. A pilot experimental campaign with eight participants and two payloads (0.5 kg and 1.5 kg) was carried out on sinusoidal and random tracking tasks. With assistance enabled, the operator contribution was reduced to about 15% of the total load, and the mean biceps brachii EMG amplitude decreased by about 60%, while tracking accuracy was generally preserved and often improved.
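The force-sharing scheme this abstract describes can be pictured as a direct mapping from measured handle force to a motor current reference. A minimal sketch follows; apart from the roughly 15% operator share the abstract reports, every number here (torque constant, pulley radius) is an illustrative assumption, not the paper's calibration:

```python
# Illustrative force-sharing sketch (NOT the paper's calibration): the operator
# carries `share_ratio` of the load, so the measured handle force is scaled into
# an assistance force, then into a current reference for the z-axis drive.

def assist_current(f_handle_n: float, share_ratio: float = 0.15,
                   kt_nm_per_a: float = 0.08, pulley_radius_m: float = 0.02) -> float:
    # If the handle force is the operator's share, the motor must supply the
    # remaining (1 - r) / r multiple of that force.
    assist_force_n = f_handle_n * (1.0 - share_ratio) / share_ratio
    torque_nm = assist_force_n * pulley_radius_m   # linear force -> shaft torque
    return torque_nm / kt_nm_per_a                 # torque -> current reference
```

With the default 15% share, a 1.5 N handle force commands the motor to supply 8.5 N of assistance; the drive's inner current loop then acts as the low-level torque regulator.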

34 pages, 463 KB  
Article
Data-Driven Ergonomic Load Dynamics for Human–Autonomy Teams
by Nikitas Gerolimos, Vasileios Alevizos and Georgios Priniotakis
Big Data Cogn. Comput. 2026, 10(3), 74; https://doi.org/10.3390/bdcc10030074 - 28 Feb 2026
Abstract
Ergonomic load in human–autonomy teams is commonly treated as a static score or a post-hoc audit, even though modern sensing and communication enable real-time regulation of operator effort. We model ergonomic load as a dissipative dynamical state inferred online from multimodal effort proxies and task context, and couple it to autonomy through load-dependent gain moderation and compliance shaping. The method is evaluated on public human–swarm and human–robot interaction traces together with effort-proximal wearable and myographic datasets, using a unified, windowed pipeline and controlled stress tests that emulate latency, downsampling, packet loss, and channel dropouts. On a large human–swarm benchmark, the estimator achieves strong discrimination and calibration for rare high-load events (up to AUROC 0.87, AUPRC 0.41, ECE 0.031 at q = 0.90) and degrades predictably under delay, with a knee around 300–400 ms (AUROC 0.87 → 0.80, ECE 0.031 → 0.061 at 500 ms). Embedding the estimate in the adaptation schedule reduces overload incidence and oscillatory redistribution while preserving coordination proxies in surrogate closed-loop simulation: overload time drops from 7.8% to 4.1% (a relative reduction of about 47%) with throughput maintained near baseline (1.00 → 0.97) and oscillation power reduced (0.26 → 0.14) under nominal timing. These results provide a reproducible pathway for making ergonomics a control-relevant feedback signal, together with explicit operational constraints on estimator calibration (target ECE ≤ 0.05) and end-to-end latency (effective τ ≤ 300 ms) required to avoid regime switching and maintain stable, interpretable adaptation.
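Expected calibration error (ECE), the calibration metric this abstract reports, is commonly computed by binning predicted probabilities and averaging the gap between confidence and observed frequency. The following is a generic sketch of that standard binned formulation, not necessarily the paper's exact estimator:

```python
# Generic binned expected calibration error (ECE): weighted mean absolute gap
# between predicted confidence and observed event frequency over equal-width
# bins. Standard formulation; not necessarily the paper's exact estimator.

def expected_calibration_error(probs, labels, n_bins=10):
    n = len(probs)
    sums = [[0, 0.0, 0.0] for _ in range(n_bins)]   # count, conf sum, label sum
    for p, y in zip(probs, labels):
        b = min(int(p * n_bins), n_bins - 1)        # clamp p == 1.0 into last bin
        sums[b][0] += 1
        sums[b][1] += p
        sums[b][2] += y
    # (count/n) * |mean_label - mean_conf| simplifies to |sum_label - sum_conf| / n
    return sum(abs(s[2] - s[1]) / n for s in sums if s[0])
```

A perfectly calibrated predictor (e.g. confidence 0.5 on events that occur half the time) scores 0; larger values mean the stated confidences drift from observed frequencies.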

32 pages, 13390 KB  
Article
Robotic Arm Control Using a Q-Learning Reinforcement Algorithm
by Afonso M. Timóteo, Ramiro S. Barbosa and Isabel S. Jesus
Robotics 2026, 15(3), 50; https://doi.org/10.3390/robotics15030050 - 27 Feb 2026
Abstract
This paper presents the design and implementation of an integrated robotic system capable of detecting objects through computer vision and making decisions based on logic strategies to perform physical tasks. To this end, the system uses a robotic arm to play the Tic-Tac-Toe game, utilizing a Q-learning algorithm to determine optimal moves. The system can be controlled through a graphical interface that enables real-time monitoring, facilitating seamless interaction between the user and the robotic arm. Three algorithms with different decision strategies were developed: a random decision algorithm, the MiniMax algorithm, and Q-learning, a reinforcement-learning algorithm. The results obtained highlight the control of the robotic arm using kinematic equations, the training of a robust YOLOv5 model, and the effective learning capability of the Q-learning algorithm. The proposed system provides a practical implementation that can serve as a basis for further projects and for teaching robotics.
(This article belongs to the Section Sensors and Control in Robotics)
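The tabular Q-learning at the heart of the move-selection strategy follows the standard Bellman update. A minimal sketch is below; the state encoding, action representation, and hyperparameters are illustrative assumptions, not taken from the paper:

```python
import random
from collections import defaultdict

# Standard tabular Q-learning with epsilon-greedy action selection, of the kind
# usable for Tic-Tac-Toe move choice. State/action encodings and the
# hyperparameters are illustrative, not the paper's.

Q = defaultdict(float)                  # Q[(state, action)] -> estimated return
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2   # learning rate, discount, exploration

def choose_action(state, actions):
    if random.random() < EPSILON:                       # explore
        return random.choice(actions)
    return max(actions, key=lambda a: Q[(state, a)])    # exploit best known move

def update(state, action, reward, next_state, next_actions):
    # Bellman target: immediate reward plus discounted best next-state value
    # (zero when next_state is terminal, i.e. next_actions is empty).
    best_next = max((Q[(next_state, a)] for a in next_actions), default=0.0)
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
```

For Tic-Tac-Toe the state can be the board string and the action a cell index; repeated self-play with terminal rewards (win +1, loss -1) drives the table toward optimal moves.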

17 pages, 14849 KB  
Article
A Collaborative Robotic System for Autonomous Object Handling with Natural User Interaction
by Federico Neri, Gaetano Lettera, Giacomo Palmieri and Massimo Callegari
Robotics 2026, 15(3), 49; https://doi.org/10.3390/robotics15030049 - 27 Feb 2026
Abstract
In Industry 5.0, the transition from fixed traditional automation to flexible human–robot collaboration (HRC) requires interfaces that are both intuitive and efficient. This paper introduces a novel, multimodal control system for autonomous object handling, specifically designed to enhance natural user interaction in dynamic work environments. The system integrates a 6-Degrees-of-Freedom (DoF) collaborative robot (UR5e) with a hand-eye RGB-D vision system to achieve robust autonomy. The core technical contribution lies in a vision pipeline utilizing deep learning for object detection and point cloud processing for accurate 6D pose estimation, enabling advanced tasks such as human-aware object handover directly onto the operator’s hand. Crucially, an Automatic Speech Recognition (ASR) module is incorporated, providing a Natural Language Understanding (NLU) layer that allows operators to issue real-time commands for task modification, error correction, and object selection. Experimental results demonstrate that this multimodal approach offers a streamlined workflow aimed at improving operational flexibility compared to traditional HMIs, while enhancing the perceived naturalness of the collaborative task. The system establishes a framework for highly responsive and intuitive human–robot workspaces, advancing the state of the art in natural interaction for collaborative object manipulation.
(This article belongs to the Special Issue Human–Robot Collaboration in Industry 5.0)

44 pages, 4964 KB  
Review
Digital Twin-Enabled Human–Robot Collaborative Assembly: A Review of Technical Systems, Application Evolution, and Future Outlook
by Qingwei Nie, Jingtao Chen, Changchun Liu, Zhen Zhao and Haoxuan Xu
Machines 2026, 14(3), 255; https://doi.org/10.3390/machines14030255 - 24 Feb 2026
Abstract
With the transition from Industry 4.0 to Industry 5.0, human–robot collaborative assembly (HRCA) has progressed from physical copresence to cognitive integration and knowledge sharing. Digital twins (DTs) serve as enabling technologies that connect physical and virtual spaces, supporting dynamic, safe, and human-centered collaboration. This study presents a systematic review of the research progress and practical applications of DT-enabled HRCA. First, conceptual boundaries between HRCA and general human–robot collaboration (HRC) in manufacturing are defined, and the core elements of DT-driven state perception, task planning, and constraint modeling are described. Second, four task-allocation paradigms are classified and summarized, including optimization-based, constraint satisfaction-based, data-driven intelligent, and large language model (LLM)-assisted approaches, and their applicable scenarios are identified. Third, the effects of collaboration modes and interaction modalities on planning logic are analyzed: collaboration modes are categorized as parallel, sequential, and tightly coupled, while interaction modalities are grouped into AR-based explicit interaction, implicit intention perception, and multimodal fusion. Fourth, cross-domain application characteristics and engineering bottlenecks are summarized for target domains including precision assembly, disassembly and remanufacturing, and construction on-site operations. Finally, four core challenges are distilled, including dynamic uncertainty, multi-objective conflicts, human factor adaptation, and system integration, and four future directions are outlined: LLM-enabled adaptive planning, safety–efficiency co-optimization, personalized collaboration, and standardized integration. The proposed technology–application–challenge–outlook framework is intended to provide a theoretical reference and practical guidance for transitioning HRCA from laboratory prototypes to large-scale industrial deployment.
(This article belongs to the Section Industrial Systems)

26 pages, 10726 KB  
Article
PI-VLA: Adaptive Symmetry-Aware Decision-Making for Long-Horizon Vision–Language–Action Manipulation
by Yina Jian, Di Tian, Xuan-Jing Chen, Zhen-Yuan Wei, Chen-Wei Liang and Mu-Jiang-Shan Wang
Symmetry 2026, 18(3), 394; https://doi.org/10.3390/sym18030394 - 24 Feb 2026
Abstract
Vision–language–action (VLA) models often suffer from limited robustness in long-horizon manipulation tasks—where robots must execute extended sequences of actions over multiple time steps to achieve complex goals—due to their inability to explicitly exploit structural symmetries and to react adaptively when such symmetries are violated by environmental uncertainty. To address this limitation, this paper proposes PI-VLA, a symmetry-aware predictive and interactive VLA framework for robust robotic manipulation. PI-VLA is built upon three key symmetry-driven principles. First, a Cognitive–Motor Synergy (CMS) module jointly generates discrete and continuous action chunks together with predictive world-model features in a single forward pass, enforcing cross-modal action consistency as an implicit symmetry constraint across heterogeneous action representations. Second, a unified training objective integrates imitation learning, reinforcement learning, and state prediction, encouraging invariance to task-relevant transformations while enabling adaptive symmetry breaking when long-horizon deviations emerge. Third, an Active Uncertainty-Resolving Decider (AURD) explicitly monitors action consensus discrepancies and state prediction errors as symmetry-breaking signals, dynamically adjusting the execution horizon through closed-loop replanning. Extensive experiments on long-horizon benchmarks demonstrate that PI-VLA achieves state-of-the-art performance, attaining a 73.2% average success rate on the LIBERO benchmark (with particularly strong gains on the Long-Horizon suite) and an 88.3% success rate in real-world manipulation tasks under visual distractions and unseen conditions. Ablation studies confirm that symmetry-aware action consensus and uncertainty-triggered replanning are critical to robust execution. 
These results establish PI-VLA as a principled framework that leverages symmetry preservation and controlled symmetry breaking to enable reliable and interactive robotic manipulation.
(This article belongs to the Section Computer)

17 pages, 3016 KB  
Article
Force Sensing Control for Physical Human–Robot Interaction: A Transformer-Based Action Chunking Approach
by Zhenyu Pan and Weiming Wang
Machines 2026, 14(2), 249; https://doi.org/10.3390/machines14020249 - 23 Feb 2026
Abstract
In human–robot collaboration (HRC) scenarios with direct physical contact, accurately estimating human intentions and adjusting robot behaviors based on multimodal information are the core factors that restrict the efficiency and precision of current HRC tasks. To enhance the performance of human–robot collaboration under physical contact conditions, we propose a joint network model named ACT_force_cooperative (AFC). This model leverages force sensing information as a representation of human intent to predict human intentions during physical interaction, while simultaneously capturing visual information and robot state data, thereby enabling more efficient execution of human–robot collaborative tasks with multimodal information processing. Existing HRC methods often ignore humans’ collaborative experience in the environment and fail to recognize the uniqueness of interactive force in expressing human intentions. Focusing on the special role of interactive force among the various types of data in physical interaction environments, the proposed model predicts humans’ future behavioral intentions and adopts a Transformer model to realize the fusion and representation of multimodal information, thus accomplishing more accurate and effective HRC tasks. Experimental results demonstrate that, through the processing of force sensing information and the fusion of multimodal data, the proposed model reduces motion error by 44.9% and increases the effective collaborative time ratio by 20.2% compared with the baseline Action Chunk Transformer (ACT) model. This not only improves the motion accuracy of the robot in collaborative tasks but also enhances the collaborative experience of human operators.
(This article belongs to the Special Issue Robots with Intelligence: Developments and Applications)

34 pages, 8501 KB  
Article
A Multimodal Adaptive Framework for Social Interaction with the MiRo-E Robot
by Yufeng Yang, Pei Shan Yap, Sobanawartiny Wijeakumar, Aly Magassouba and Nikhil Deshpande
Sensors 2026, 26(4), 1209; https://doi.org/10.3390/s26041209 - 12 Feb 2026
Abstract
Adaptivity is a key component of social human–robot interaction (HRI) towards achieving more natural and human-like interactions. Current interactive systems tend to rely on preset and repetitive verbal communication and isolated nonverbal interactions, which results in unappealing engagement. This study proposes an integrated framework that combines a coordinated nonverbal interaction system based on real-time emotion expression with a fine-tuned large language model-based verbal communication system, resulting in more engaging and context-aware interaction. The design utilises the MiRo-E as the zoomorphic social interaction platform, with the aim of enhancing the consistency across verbal and nonverbal modalities and improving user engagement through adaptive and emotionally aligned responses. To evaluate the effectiveness of the approach, a user study was conducted with tasks designed to assess user engagement, task performance, and the perceived naturalness of interaction. Task performance metrics and subjective questionnaire responses indicate that the framework significantly enhances user experience, improving task completion rates, engagement, and perceived naturalness.
(This article belongs to the Special Issue Smart Sensing System for Intelligent Human–Computer Interaction)

23 pages, 16353 KB  
Article
RepACNet: A Lightweight Reparameterized Asymmetric Convolution Network for Monocular Depth Estimation
by Wanting Jiang, Jun Li, Yaoqian Niu, Hao Chen and Shuang Peng
Sensors 2026, 26(4), 1199; https://doi.org/10.3390/s26041199 - 12 Feb 2026
Abstract
Monocular depth estimation (MDE) is a cornerstone task in 2D/3D scene reconstruction and recognition, with widespread applications in autonomous driving, robotics, and augmented reality. However, existing state-of-the-art methods face a fundamental trade-off between computational efficiency and estimation accuracy, limiting their deployment in resource-constrained real-world scenarios and motivating lightweight yet effective models for mobile devices. To address this problem, we present RepACNet, a novel lightweight network built on reparameterized asymmetric convolution designs and a CNN-based architecture that integrates MLP-Mixer components. First, we propose the Reparameterized Token Mixer with Asymmetric Convolution (RepTMAC), an efficient block that captures long-range dependencies while maintaining linear computational complexity. Unlike Transformer-based methods, our approach achieves global feature interaction with tiny overhead. Second, we introduce Squeeze-and-Excitation Consecutive Dilated Convolutions (SECDCs), which integrate adaptive channel attention with dilated convolutions to capture depth-specific features across multiple scales. We validate the effectiveness of our approach through extensive experiments on two widely recognized benchmarks, NYU Depth v2 and KITTI Eigen. The experimental results demonstrate that our model achieves competitive performance while maintaining significantly fewer parameters compared to state-of-the-art models.
(This article belongs to the Section Sensing and Imaging)
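Reparameterization of asymmetric convolution branches rests on the linearity of convolution: parallel 3x3, 1x3, and 3x1 branches trained separately can be fused into a single equivalent 3x3 kernel at inference time. A minimal single-channel sketch of that fusion step (per-branch batch-norm folding, which a full implementation such as RepACNet's would need, is omitted):

```python
# Minimal sketch of asymmetric-convolution reparameterization for one channel:
# because convolution is linear, summing a 3x3 kernel with a 1x3 kernel (placed
# on the middle row) and a 3x1 kernel (placed on the middle column) yields one
# equivalent 3x3 kernel. Batch-norm folding is intentionally omitted.

def fuse_asymmetric_kernels(k33, k13, k31):
    fused = [row[:] for row in k33]         # copy the square kernel
    for j in range(3):
        fused[1][j] += k13[0][j]            # 1x3 branch sits on the middle row
    for i in range(3):
        fused[i][1] += k31[i][0]            # 3x1 branch sits on the middle column
    return fused
```

After fusion the three-branch training-time block collapses to one ordinary 3x3 convolution, which is where the inference-time parameter and latency savings come from.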

11 pages, 973 KB  
Opinion
Neurorehabilitation as Network Perturbation: Shaping Neuroplasticity with Robotics, Virtual Reality, and Neuromodulation
by Rocco Salvatore Calabrò and Angelo Quartarone
Biomedicines 2026, 14(2), 411; https://doi.org/10.3390/biomedicines14020411 - 11 Feb 2026
Abstract
Neurological injury induces widespread neuroplastic changes that extend well beyond focal structural damage, altering synaptic function, circuit dynamics, and large-scale network organization. While these processes provide the biological substrate for recovery, they can also drive the stabilization of maladaptive network states that constrain long-term functional improvement. Traditional neurorehabilitation has largely emphasized compensation and task practice, often without explicitly targeting the neural dynamics that underlie persistent disability. In this Opinion, we propose that contemporary rehabilitation technologies, including robotics, virtual reality, and neuromodulation, should be conceptualized as mechanistically grounded interventions that actively perturb neural networks and interact with the pathobiology of post-injury reorganization. Drawing on advances in systems and network neuroscience, we examine key molecular, synaptic, and network-level mechanisms that govern adaptive and maladaptive plasticity, and discuss how these technologies modulate error processing, sensory context, and excitability landscapes to reshape recovery trajectories. We argue that when interventions are appropriately structured, timed, and combined within adaptive and closed-loop frameworks, technology-assisted rehabilitation can move beyond compensation and toward principled modulation of neuroplasticity, aligning therapeutic innovation with the biological rules that govern recovery. This perspective highlights the need for network-informed biomarkers and longitudinal approaches to translate technological advances into durable functional gains.
(This article belongs to the Section Neurobiology and Clinical Neuroscience)

23 pages, 5683 KB  
Article
Optimizing RTAB-Map Viewability to Reduce Cognitive Workload in VR Teleoperation: A User-Centric Approach
by Hojin Yoon, Haegyeom Choi, Jaehoon Jeong and Donghun Lee
Mathematics 2026, 14(3), 579; https://doi.org/10.3390/math14030579 - 6 Feb 2026
Abstract
In industrial environments, providing intuitive spatial information via 3D maps is essential for maximizing the efficiency of teleoperation. However, existing SLAM algorithms generating 3D maps predominantly focus on improving robot localization accuracy, often neglecting the optimization of viewability required for human operators to clearly perceive object depth and structure in virtual environments. To address this gap, this study proposes a methodology to optimize the viewability of RTAB-Map-based 3D maps using the Taguchi method, aiming to enhance VR teleoperation efficiency and reduce cognitive workload. We identified eight key parameters that critically affect visual quality and utilized an L18 orthogonal array to derive an optimal combination that controls point cloud density and noise levels. Experimental results from a target object picking task demonstrated that the optimized 3D map reduced task completion time by approximately 9 s compared to the RGB image condition, achieving efficiency levels approaching those of the physical-world baseline. Furthermore, evaluations using NASA-TLX confirmed that intuitive visual feedback minimized situational awareness errors and substantially alleviated cognitive workload. This study suggests a new direction for constructing high-efficiency teleoperation interfaces from a human–robot interaction perspective by expanding SLAM optimization criteria from geometric precision to user-centric visual quality.
(This article belongs to the Special Issue Advances in Machine Learning and Intelligent Systems)

14 pages, 1049 KB  
Article
Fractional Fuzzy Force-Position Control of Constrained Robots
by Aldo Jonathan Muñoz-Vázquez, Mohamed Gharib, Juan Diego Sánchez-Torres and Anh-Tu Nguyen
Mathematics 2026, 14(3), 565; https://doi.org/10.3390/math14030565 - 4 Feb 2026
Abstract
Modern robotic tasks often require interaction with the surrounding elements in the workspace. In some high-precision tasks, it is essential to stabilize the contact force on a smooth yet rigid surface, which can be modeled as a unilateral constraint. This challenge becomes increasingly complex in the presence of disturbances. This study addresses these issues using a robust fuzzy force-position controller that combines the approximation capabilities of fuzzy inference systems with the nonlocal properties of fractional operators. The proposed approach extends the error integration to include proportional-integral-derivative (PID) components of the position error, along with the integral of the contact force error. This formulation leverages the orthogonality between force and velocity subspaces to achieve accurate force-position stabilization. Additionally, an adaptive mechanism enhances closed-loop performance and robustness. The effectiveness of the proposed controller is validated through analytical derivations and simulations, thereby demonstrating its reliability in constrained environments.
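Fractional operators in controllers of this kind are commonly discretized with the Grünwald–Letnikov approximation, where the fractional derivative of order alpha becomes a weighted sum over the signal history. The sketch below is a generic illustration of that standard discretization, not the paper's specific fuzzy controller:

```python
# Generic Grünwald-Letnikov discretization of a fractional derivative D^alpha,
# as commonly used to implement fractional PID terms. Illustrative only; not
# the paper's controller.

def gl_weights(alpha, n):
    # Binomial weights w_k = (-1)^k * C(alpha, k), generated by the recurrence
    # w_k = w_{k-1} * (1 - (alpha + 1) / k), starting from w_0 = 1.
    w = [1.0]
    for k in range(1, n):
        w.append(w[-1] * (1.0 - (alpha + 1.0) / k))
    return w

def frac_derivative(samples, alpha, h):
    # samples: signal history, most recent sample first; h: sampling period.
    w = gl_weights(alpha, len(samples))
    return sum(wk * xk for wk, xk in zip(w, samples)) / h ** alpha
```

For alpha = 1 the weights reduce to [1, -1, 0, ...] and the formula collapses to the ordinary backward difference (x_t - x_{t-1}) / h; non-integer orders blend contributions from the whole history, which is the nonlocal property the abstract refers to.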
