Search Results (219)

Search Parameters:
Keywords = human-centered robotics

21 pages, 2794 KB  
Article
Enhancing Trust in Collaborative Assembly Through Resilient Adversarial Reinforcement Learning
by Dario Antonelli, Khurshid Aliev and Bo Yang
Appl. Sci. 2026, 16(7), 3244; https://doi.org/10.3390/app16073244 - 27 Mar 2026
Abstract
Collaborative robots, or cobots, are designed to improve productivity and safety in industrial settings. However, effective Human–Robot Collaboration (HRC) relies heavily on the human operator’s trust in the robotic partner. This study posits that trust is significantly enhanced by the robot’s ability to adapt to unpredictable human behavior. To achieve this adaptability, we propose applying an Adversarial Reinforcement Learning (ARL) framework to the robot’s activity planning. We model the assembly process as a Markov Decision Process (MDP) on a Directed Acyclic Graph (DAG). The robot learns an assembly policy using an on-policy algorithm while a simulated human agent, trained with the same algorithm, acts as an adversary that introduces disturbances and delays. We applied the proposed approach to a simple industrial case study and evaluated it on complex assembly sequences generated synthetically. Although the ARL-trained robot did not outperform conventional assembly optimization algorithms in terms of task completion time, it guaranteed robustness against human variability. This ensured task completion within a bounded timeframe regardless of human actions. By demonstrating consistent performance and adaptability in the face of uncertainty, the robot exhibits the Ability and Benevolence components of the ABI model of trust. This fosters a more resilient and trustworthy collaborative environment. Full article
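The abstract above models the assembly process as a Markov Decision Process on a Directed Acyclic Graph, where the state is the set of completed tasks and an action is any task whose predecessors are done. A minimal sketch of that state/action structure (the DAG, task names, and trivial policy are invented for illustration; the paper's actual on-policy adversarial training is not reproduced here):

```python
# Hypothetical sketch (not the paper's code): an assembly sequence as a
# decision process over a Directed Acyclic Graph. The DAG and the toy
# policy below are invented for illustration.

# task -> set of prerequisite tasks
dag = {"A": set(), "B": {"A"}, "C": {"A"}, "D": {"B", "C"}}

def feasible_actions(done: frozenset) -> list:
    """Tasks not yet executed whose prerequisites are all completed."""
    return [t for t, pre in dag.items() if t not in done and pre <= done]

def rollout(policy) -> list:
    """Follow a policy until every task in the DAG is completed."""
    done, order = frozenset(), []
    while len(done) < len(dag):
        action = policy(done)      # a learned assembly policy would act here
        done = done | {action}
        order.append(action)
    return order

# Stand-in policy: pick the first feasible task alphabetically.
print(rollout(lambda state: sorted(feasible_actions(state))[0]))  # ['A', 'B', 'C', 'D']
```

In the paper's setting, the robot's policy and an adversarial simulated human would each choose actions in such a loop; here a fixed rule stands in to show the precedence-constrained action space.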

9 pages, 2562 KB  
Article
Manual Insertion of Cochlear Implant Electrodes Versus Robot-Assisted Insertion and Analysis by Micro-CT: A Temporal Bone Study
by Alexandre Karkas, Clément Arnold, Yann Lelonge, Norbert Laroche, Fabien Tinquaut, Florian Bergandi, Hubert Marotte and Kelly Daouda
Audiol. Res. 2026, 16(2), 51; https://doi.org/10.3390/audiolres16020051 - 26 Mar 2026
Abstract
Background/Objectives: Atraumatic electrode array insertion should be targeted in cochlear implantation. Robotic insertion is used in many centers worldwide. Our objective was to evaluate manual electrode placement and robot-assisted placement using RobOtol® on human temporal bones (TBs), in terms of endocochlear trauma and completion of insertion. Methods: Sixteen TBs originating from eight bodies were implanted with Medel-FLEX24 electrodes through the round window. The right TB was implanted manually, while the left TB of the same body was implanted using RobOtol® for electrode insertion. Results were analyzed through micro-computed tomography imaging. No statistical analysis was used, given the small sample size; a descriptive interpretation of micro-CT scans was rather preferred. Results: In the “manual group”, there were two cases (25%) of insertion trauma: elevation of basilar membrane at basal turn (Eshraghi-stage-1). In the “robotic group”, there were two cases (25%) of insertion trauma: one case of elevation of basilar membrane at the middle turn (Eshraghi-stage-1) and one case of dislocation of all electrodes in scala vestibuli (Eshraghi-stage-3). There were six cases (75%) of incomplete insertion in the “manual group” and four cases (50%) of incomplete insertion in the “robotic group”. Conclusions: Both techniques of electrode placement yielded fairly similar results, in terms of endocochlear trauma and completion of insertion. New larger-scale cadaveric and clinical studies are needed to determine the possible benefit of robot-assisted electrode insertion in cochlear implantation. Full article
(This article belongs to the Special Issue Innovations in Cochlear Implant Surgery)

23 pages, 27743 KB  
Review
A Framework for Safe Mobile Manipulation in Human-Centered Applications
by Pangcheng David Cen Cheng, Cesare Luigi Blengini, Rosario Francesco Cavelli, Angela Ripi and Marina Indri
Robotics 2026, 15(4), 68; https://doi.org/10.3390/robotics15040068 - 25 Mar 2026
Abstract
In recent years, applications with robots collaborating actively with humans have been increasing. The transition from Industry 4.0 to 5.0 rearranges the focus of fully automated processes to a human-centered system that allows more customization and flexibility. In human-centered systems, the robot is expected to safely assist or provide support to the human operator, avoiding any unintentional harm, while the latter is focused on tasks that require human reasoning, since current decision-making systems still have some limitations. This survey reviews all the main functionalities required to make a robot (collaborative or not) act as an assistant for human operators, analyzing and comparing solutions proposed by the authors (based on previous works) and/or the ones available in the literature. In this way, it is possible to combine those functionalities and build a complete framework enabling safe mobile manipulation while interacting with humans. In particular, a mobile manipulator is used to receive requests from a user, navigate in a human-shared environment, identify the requested object, and grasp and safely deliver such an object to the user. The framework, which is completed by a user interface designed using Android Studio, is developed in ROS1, tested, and validated on a real mobile manipulator in real-world conditions. Full article
(This article belongs to the Special Issue Human–Robot Collaboration in Industry 5.0)

17 pages, 912 KB  
Review
Beyond Incremental: Embracing Transformative Innovation in Women’s Health
by Mark I. Evans, Lawrence D. Devoe, Gregory F. Ryan, David W. Britt and Christian R. Macedonia
Reprod. Med. 2026, 7(1), 16; https://doi.org/10.3390/reprodmed7010016 - 23 Mar 2026
Abstract
Background/Objectives: Women’s health has historically lagged behind other medical specialties in transformative innovation, despite significant technological advances in adjacent fields. In this collection of papers, we examine the current state of innovation in women’s health and maternal–fetal medicine, identify barriers to transformation, and propose strategies for accelerating breakthrough developments. This paper presents an overview of multiple forces and their often-competing relationships that influence the environment in which advances in multiple areas of healthcare have had to navigate to enter mainstream practice. An understanding of these forces is essential to explain why some new technologies are readily deployed into clinical practice while others take many years to be adopted. Understanding the entire “echo-system” around any specific technology provides a much fuller understanding of how any individual advance can make its way into actual utilization. Methods: We synthesized current literature on innovation in women’s health, analyzing technological advances in artificial intelligence, precision medicine, non-invasive diagnostics, and surgical robotics. We examined patterns of innovation adoption and barriers to implementation across multiple domains. Results: Several key areas presented in this paper and the following show promise for transformative change: artificial intelligence (AI)-driven diagnostics achieving expert-level performance in prenatal screening, precision medicine approaches transforming genetic disease management, and non-invasive monitoring technologies revolutionizing maternal–fetal care. However, systemic barriers including regulatory complexity, liability concerns, and institutional inertia continue to limit widespread adoption of numerous breakthrough technologies. 
Conclusions: The convergence of multiple technological advances, particularly artificial intelligence and precision medicine, positions women’s health for unprecedented transformation. Success requires fostering innovation-ready environments, embracing systems-awareness approaches, and maintaining focus on human-centered care while leveraging technological capabilities with continual feedback and course corrections. Full article
(This article belongs to the Special Issue Game-Changing Concepts in Reproductive Health)

58 pages, 7331 KB  
Review
Human–Robot Interaction in Indoor Mobile Robotics: Current State, Interaction Modalities, Applications, and Future Challenges
by Arman Ahmed Khan and Kerstin Thurow
Sensors 2026, 26(6), 1840; https://doi.org/10.3390/s26061840 - 14 Mar 2026
Abstract
This paper provides a comprehensive survey of Human–Robot Interaction (HRI) for indoor mobile robots operating in human-centered environments such as hospitals, laboratories, offices, and homes. We review interaction modalities—including speech, gesture, touch, visual, and multimodal interfaces—and examine key user experience factors such as usability, trust, and social acceptance. Implementation challenges are discussed, encompassing safety, privacy, and regulatory considerations. Representative case studies, including healthcare and domestic platforms, highlight design trade-offs and integration lessons. We identify critical technical challenges, including robust perception, reliable multimodal fusion, navigation in dynamic spaces, and constraints on computation and power. Finally, we outline future directions, including embodied AI, adaptive context-aware interactions, and standards for safety and data protection. This survey aims to guide the development of indoor mobile robots capable of collaborating with humans naturally, safely, and effectively. Full article

17 pages, 602 KB  
Review
Artificial Intelligence Applications in Gastric Cancer Surgery: Bridging Early Diagnosis and Responsible Precision Medicine
by Silvia Malerba, Miljana Vladimirov, Aman Goyal, Audrius Dulskas, Augustinas Baušys, Tomasz Cwalinski, Sergii Girnyi, Jaroslaw Skokowski, Ruslan Duka, Robert Molchanov, Bojan Jovanovic, Francesco Antonio Ciarleglio, Alberto Brolese, Kebebe Bekele Gonfa, Abdi Tesemma Demmo, Zilvinas Dambrauskas, Adolfo Pérez Bonet, Mario Testini, Francesco Paolo Prete, Valentin Calu, Natale Calomino, Vikas Jain, Aleksandar Karamarkovic, Karol Polom, Adel Abou-Mrad, Rodolfo J. Oviedo, Yogesh Vashist and Luigi Marano
J. Clin. Med. 2026, 15(6), 2208; https://doi.org/10.3390/jcm15062208 - 13 Mar 2026
Abstract
Background: Artificial intelligence is emerging as a promising tool in surgical oncology, with growing evidence suggesting potential applications in diagnostic support, intraoperative guidance, and perioperative risk assessment. In gastric cancer surgery, emerging applications range from AI-assisted endoscopic detection to data-driven perioperative risk prediction, while some technological developments, particularly in robotic autonomy, derive from broader surgical or experimental models that may inform future gastric procedures. Methods: A narrative review was conducted following established methodological standards, including the Scale for the Assessment of Narrative Review Articles (SANRA) and the Search–Appraisal–Synthesis–Analysis (SALSA) framework. English-language studies indexed in PubMed, Scopus, Embase, and Web of Science up to October 2025 were included. Evidence was synthesized thematically across five domains: AI-assisted anatomical recognition and lymphadenectomy support, autonomous robotic systems, early cancer detection, perioperative predictive and frailty models, and ethical and regulatory considerations. Results: AI-based computer vision and deep learning algorithms have demonstrated promising capabilities for real-time anatomical recognition, surgical phase classification, and intraoperative guidance, although evidence of direct patient-level benefit remains limited. In diagnostic settings, AI-assisted endoscopy and Raman spectroscopy have been shown to improve early lesion detection and reduce dependence on operator experience. Predictive models, including MySurgeryRisk and AI-driven frailty assessments, may support individualized prehabilitation planning and perioperative risk stratification. Persistent limitations include small and heterogeneous datasets, insufficient external validation, and unresolved concerns related to data privacy, algorithmic interpretability, and medico-legal responsibility. 
Conclusions: Artificial intelligence is progressively emerging as a promising tool in gastric cancer surgery, integrating automation, advanced analytics, and human clinical reasoning. Its safe and ethical adoption requires robust validation, transparent governance, and continuous surgeon oversight. When developed within human-centered and ethically grounded frameworks, AI can augment, rather than replace, surgical expertise, potentially advancing precision, safety, and equity in oncologic care. Full article

29 pages, 645 KB  
Article
BCI-Inspired Adaptive Agents in Human–Robot Interaction: A Structural Framework for Coordinated Interaction Design
by Ionica Oncioiu, Iustin Priescu, Daniela Joița, Geanina Silviana Banu and Cătălina-Mihaela Priescu
Electronics 2026, 15(6), 1206; https://doi.org/10.3390/electronics15061206 - 13 Mar 2026
Abstract
The accelerated integration of intelligent agents in user-centered digital environments has intensified research in the field of Human–Robot Interaction, especially regarding mechanisms for adaptive, intuitive, and cognitively aligned communication. The present study develops and empirically examines a structural model of BCI-inspired adaptive agents designed to support coordinated interaction in HRI contexts. The study analyzes users’ perceptions of standardized hypothetical interaction scenarios involving BCI-inspired adaptive digital agents, where BCI inspiration is conceptual and refers to adaptive architectures interpreting behavioral cues rather than direct neural signal acquisition. The proposed model integrates four main constructs—perceived technological innovation, user involvement, agent adaptivity, and digital synergy—and examines their associations with user satisfaction in digital collaborative environments. Data were collected through an anonymous questionnaire (N = 268) and analyzed using structural equation modeling with the PLS-SEM method. The structural model demonstrates substantial explanatory power, accounting for 66.8% of the variance in user satisfaction (R2 = 0.668). The study contributes by empirically supporting a scenario-based structural evaluation framework suitable for early-stage adaptive HRI system design. The results highlight the role of digital synergy in aligning innovation, engagement, and adaptive behavior in BCI-inspired adaptive HRI systems, providing directions for the design of adaptive robotic agents oriented toward coordinated interaction, user-centered integration, and responsible use in collaborative digital ecosystems. Full article
(This article belongs to the Special Issue Human Robot Interaction: Techniques, Applications, and Future Trends)

13 pages, 1037 KB  
Systematic Review
Artificial Intelligence in Esophagectomy: A Systematic Review
by Vladimir Aleksiev, Daniel Markov, Kristian Bechev, Desislav Stanchev, Filip Shterev and Galabin Markov
J. Clin. Med. 2026, 15(6), 2169; https://doi.org/10.3390/jcm15062169 - 12 Mar 2026
Abstract
Background: Esophagectomy remains a technically demanding oncologic procedure with substantial morbidity, despite ongoing advances in minimally invasive and robotic techniques. Limitations in intraoperative visualization and anatomical recognition contribute to complications such as nerve injury and bleeding. Artificial intelligence (AI)-based intraoperative video analysis has emerged as a potential adjunct to enhance surgical perception and safety, but its application in esophagectomy has not been comprehensively reviewed. Methods: A systematic review was conducted in accordance with PRISMA guidelines. PubMed, Scopus, and Web of Science were searched without a lower date limit to identify eligible studies published up to January 2026, capturing early and contemporary applications of intraoperative AI in esophagectomy. Human studies involving any surgical approach were included. Data on the AI task, methodology, validation strategy, performance metrics, and reported clinical outcomes was extracted. Risk of bias was assessed using the ROBINS-I tool. Results: Six studies met the inclusion criteria, predominantly evaluating AI-driven analysis of intraoperative video during minimally invasive or robotic esophagectomy. Reported applications included real-time anatomical structure recognition, recurrent laryngeal nerve segmentation, detection of excessive nerve traction, instrument and event recognition, and surgical phase identification. Across studies, AI systems demonstrated performance comparable to expert surgeons for selected tasks and achieved real-time or near–real-time inference. One study reported earlier detection of excessive recurrent laryngeal nerve traction compared to conventional nerve integrity monitoring. However, most studies were retrospective, single-center, and feasibility-focused, with limited external validation and minimal assessment of patient-centered clinical outcomes. 
Conclusions: Artificial intelligence-based intraoperative analysis in esophagectomy is increasingly achievable and may enhance anatomical recognition, intraoperative risk detection, and procedural awareness. Nevertheless, current evidence remains preliminary, heterogeneous, and largely exploratory. Prospective, multicenter studies with standardized reporting and clinically meaningful outcome evaluation are required before routine implementation. Until such data is available, AI should be regarded as a complementary intraoperative tool rather than a standalone clinical decision-making system. Full article
(This article belongs to the Special Issue Recent Clinical Advances in Esophageal Surgery)

32 pages, 2223 KB  
Article
From Large Language Models to Agentic AI in Industry 5.0 and the Post-ChatGPT Era: A Socio-Technical Framework and Review on Human–Robot Collaboration
by Enrique Coronado
Robotics 2026, 15(3), 58; https://doi.org/10.3390/robotics15030058 - 12 Mar 2026
Abstract
Generative Artificial Intelligence (GenAI), particularly Foundation Models (FMs), has recently become a key component of Industry 5.0. Despite growing interest in integrating these technologies into industrial environments, comprehensive analyses of the socio-technical opportunities and challenges of deploying these emerging AI systems in real-world settings remain limited. This article proposes a socio-technical conceptual perspective, termed Responsible Agentic Robotics (RAR), which structures the lifecycle deployment of agentic AI-enabled robotic systems around three core layers: context, design, and value. Additionally, this article presents a brief review of 21 peer-reviewed studies published between 2023 and 2025 (post-ChatGPT era) on FMs and agentic AI-enabled Human–Robot Collaboration (HRC) in industrial assembly/disassembly environments. The results indicate that existing research remains predominantly technology-centric, with a strong emphasis on enhancing robot autonomy, while comparatively limited attention is devoted to human-centered and responsible practices. Moreover, empirical evaluations of human, social, and sustainability dimensions, such as worker empowerment, human factors, well-being, inclusivity, resource utilization, and environmental impact, are rarely conducted and poorly discussed. This article concludes by identifying key socio-technical gaps and outlining future research directions. Full article
(This article belongs to the Special Issue Human-Centered Robotics: The Transition to Industry 5.0)

53 pages, 5533 KB  
Systematic Review
Embodied AI with Foundation Models for Mobile Service Robots: A Systematic Review
by Matthew Lisondra, Beno Benhabib and Goldie Nejat
Robotics 2026, 15(3), 55; https://doi.org/10.3390/robotics15030055 - 4 Mar 2026
Abstract
Rapid advancements in foundation models, including Large Language Models, Vision-Language Models, Multimodal Large Language Models, and Vision-Language-Action models, have opened new avenues for embodied AI in mobile service robotics. By combining foundation models with the principles of embodied AI, where intelligent systems perceive, reason, and act through physical interaction, mobile service robots can achieve more flexible understanding, adaptive behavior, and robust task execution in dynamic real-world environments. Despite this progress, embodied AI for mobile service robots continues to face fundamental challenges related to the translation of natural language instructions into executable robot actions, multimodal perception in human-centered environments, uncertainty estimation for safe decision-making, and computational constraints for real-time onboard deployment. In this paper, we present the first systematic review of foundation models in mobile service robotics, following the preferred reporting items for systematic reviews and meta-analysis (PRISMA) guidelines. Using an OpenAlex literature search, we considered 7506 papers for the years spanning 1968–2025. Our detailed analysis identified four main challenges and how recent advances in foundation models, related to the translation of natural language instructions into executable robot actions, multimodal perception in human-centered environments, uncertainty estimation for safe decision-making, and computational constraints for real-time onboard deployment, have addressed these challenges. We further examine real-world applications in domestic assistance, healthcare, and service automation, highlighting how foundation models enable context-aware, socially responsive, and generalizable robot behaviors. 
Beyond technical considerations, we discuss ethical, societal, human-interaction, and physical design and ergonomic implications associated with deploying foundation-model-enabled service robots in human environments. Finally, we outline future research directions emphasizing reliability and lifelong adaptation, privacy-aware and resource-constrained deployment, as well as the governance and human-in-the-loop frameworks required for safe, scalable, and trustworthy mobile service robotics. Full article
(This article belongs to the Special Issue Embodied Intelligence: Physical Human–Robot Interaction)

35 pages, 1070 KB  
Article
Adaptive Deep Learning Framework for Emotion Recognition in Social Robots: Toward Inclusive Human–Robot Interaction for Users with Special Needs
by Eryka Probierz and Adam Gałuszka
Electronics 2026, 15(5), 924; https://doi.org/10.3390/electronics15050924 - 25 Feb 2026
Abstract
Emotion recognition is a key capability of social robots operating in real-world human-centered environments, especially when interacting with users with special needs. Such users may express emotions in atypical, subtle, or strongly context-dependent ways. These characteristics pose significant challenges for conventional emotion recognition systems. This paper proposes an adaptive deep learning framework for emotion recognition in social robots. The framework is designed to support inclusive and accessible human–robot interaction. It combines region-based convolutional neural networks with adaptive learning mechanisms. These mechanisms explicitly model individual variability, contextual information, and interaction dynamics. Multiple deep architectures are evaluated to assess robustness across diverse emotional expressions, including those influenced by cognitive, sensory, or developmental differences. Rather than relying on fixed emotion models, the proposed approach emphasizes adaptability. The system dynamically adjusts its perception strategies to user-specific expressive patterns. Experimental validation is conducted using context-aware emotion datasets. Performance is evaluated in terms of detection accuracy, robustness to variability, and generalization across emotion categories. The results show that adaptive mechanisms improve recognition performance in scenarios characterized by non-standard or low-intensity expressions, compared to static baseline models. This study highlights the importance of flexible, context-sensitive perception for inclusive social robotics. It also discusses design implications for deploying emotion-aware robots in assistive, educational, and therapeutic settings. Overall, the proposed framework represents a step toward socially intelligent robots capable of engaging more effectively with users with special needs. Full article
(This article belongs to the Special Issue Research on Deep Learning and Human-Robot Collaboration)

44 pages, 4964 KB  
Review
Digital Twin-Enabled Human–Robot Collaborative Assembly: A Review of Technical Systems, Application Evolution, and Future Outlook
by Qingwei Nie, Jingtao Chen, Changchun Liu, Zhen Zhao and Haoxuan Xu
Machines 2026, 14(3), 255; https://doi.org/10.3390/machines14030255 - 24 Feb 2026
Abstract
With the transition from Industry 4.0 to Industry 5.0, human–robot collaborative assembly (HRCA) has progressed from physical copresence to cognitive integration and knowledge sharing. Digital twins (DTs) serve as enabling technologies that connect physical and virtual spaces. Support is provided for dynamic, safe, and human-centered collaboration. This study presents a systematic review of the research progress and practical applications of DT-enabled HRCA. First, conceptual boundaries between HRCA and general human–robot collaboration (HRC) in manufacturing are defined. Core elements of DT-driven state perception, task planning, and constraint modeling are described. Second, four task-allocation paradigms are classified and summarized, including optimization-based, constraint satisfaction-based, data-driven intelligent, and large language model (LLM)-assisted approaches. Applicable scenarios are identified. Third, the effects of collaboration modes and interaction modalities on planning logic are analyzed. Collaboration modes are categorized as parallel, sequential, and tightly coupled. Interaction modalities are grouped into AR-based explicit interaction, implicit intention perception, and multimodal fusion. Fourth, cross-domain application characteristics and engineering bottlenecks are summarized. Target domains include precision assembly, disassembly and remanufacturing, and construction on-site operations. Finally, four core challenges are distilled, including dynamic uncertainty, multi-objective conflicts, human factor adaptation, and system integration. Four future directions are outlined: LLM-enabled adaptive planning, safety–efficiency co-optimization, personalized collaboration, and standardized integration. The proposed technology–application–challenge–outlook framework is intended to provide a theoretical reference and practical guidance for transitioning HRCA from laboratory prototypes to large-scale industrial deployment. Full article
(This article belongs to the Section Industrial Systems)

24 pages, 10860 KB  
Article
PostureSense: A Low-Cost Solution for Postural Monitoring
by Nicoletta Cinardi, Giuseppe Sutera, Dario Calogero Guastella and Giovanni Muscato
Actuators 2026, 15(2), 125; https://doi.org/10.3390/act15020125 - 16 Feb 2026
Abstract
Assistive devices in recent years have transitioned from a passive mode of operation to the integration of smart solutions that enable humans to interact with active and robotic platforms. The main problems in the evolution of this kind of device are accessibility in terms of price and the functional limitations of the smart integrated solutions. This project proposes an armrest prototype for integration into smart walkers or wheelchairs that can detect the user's intentions at a low development cost. The smart principle of operation is based on Hall-effect sensors, strategically positioned to measure the Center of Pressure (CoP) of the user's forearm, and on machine learning for motor-intention classification, using a Random Forest classifier evaluated with a Leave-One-Subject-Out (LOSO) cross-validation scheme. Reliable detection and classification of the user's intention provides a control signal that can be integrated into both motorized and passive assistive devices.
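The LOSO evaluation described in the abstract can be sketched as follows, under assumed toy data: one CoP (x, y) feature vector per trial, labelled by intended movement direction. A simple nearest-centroid rule stands in for the paper's Random Forest so the sketch stays dependency-free; all variable names and the data layout are illustrative, not the authors' implementation.

```python
import numpy as np

def loso_accuracy(X, y, subjects):
    """Leave-One-Subject-Out: train on all subjects but one, test on the held-out one."""
    accs = []
    for s in np.unique(subjects):
        train, test = subjects != s, subjects == s
        # Nearest-centroid classifier: one mean feature vector per class.
        classes = np.unique(y[train])
        centroids = np.array([X[train][y[train] == c].mean(axis=0) for c in classes])
        d = np.linalg.norm(X[test][:, None, :] - centroids[None, :, :], axis=2)
        pred = classes[d.argmin(axis=1)]
        accs.append((pred == y[test]).mean())
    return float(np.mean(accs))  # average held-out accuracy across subjects

# Toy data: 3 subjects, CoP shifted backward (label 0) or forward (label 1).
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(loc=(0.0, sign), scale=0.2, size=(10, 2))
               for _ in range(3) for sign in (-1.0, 1.0)])
y = np.tile(np.repeat([0, 1], 10), 3)
subjects = np.repeat([0, 1, 2], 20)
print(loso_accuracy(X, y, subjects))
```

The point of the LOSO split is that the classifier is never tested on data from a subject it was trained on, which is the relevant generalization question for an assistive device handed to a new user.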
(This article belongs to the Special Issue Rehabilitation Robotics and Intelligent Assistive Devices)

15 pages, 5971 KB  
Article
A Resource-Efficient Method for Real-Time Flexion–Extension Angle Estimation with an Under-Sensorized Finger Exoskeleton
by Alessia Di Natale, Matilde Gelli, Gherardo Liverani, Alessandro Ridolfi, Benedetto Allotta and Nicola Secciani
Appl. Sci. 2026, 16(3), 1575; https://doi.org/10.3390/app16031575 - 4 Feb 2026
Abstract
Hand exoskeletons are used in rehabilitation together with serious games to enhance patient experience and, possibly, therapy outcomes. To achieve good engagement, a realistic virtual representation of hand motion is needed; however, the relationship between exoskeleton joint motion and anatomical finger kinematics is rarely obtained using low-cost procedures. This work introduces a mechanical redesign and modeling pipeline that utilizes temporary sensors to identify the exoskeleton–finger mapping, enabling qualitatively realistic virtual hand motion driven solely by the existing on-board sensor. A recently developed hand exoskeleton prototype was redesigned to host two temporary rotary encoders aligned with the MetaCarpoPhalangeal (MCP) and Proximal InterPhalangeal (PIP) joints, in addition to the actuation encoder. Healthy subjects wore the modified device and performed full flexion–extension cycles. Encoder trajectories were processed; then each cycle was approximated by a third-order polynomial in the normalized actuation angle, and a group-level model was obtained by averaging coefficients across valid cycles. Finally, the encoder-based reconstructions of MCP and PIP motion were evaluated against measurements from a gold-standard optical motion capture system. Results indicate that the proposed polynomial model enables joint-angle estimation with sufficient accuracy for interactive rehabilitation scenarios, supporting its use to drive smooth virtual hand motion from the on-board exoskeleton encoder alone.
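The modeling step above (a cubic polynomial per cycle in the normalized actuation angle, with coefficients averaged into a group-level model) can be sketched as below. The nominal joint profile, noise level, and names such as `theta_act` are assumptions for illustration, not the paper's data.

```python
import numpy as np

def fit_group_model(cycles, degree=3):
    """cycles: list of (theta_act, joint_angle) pairs, theta_act normalized to [0, 1]."""
    coeffs = [np.polyfit(t, q, degree) for t, q in cycles]  # one cubic per cycle
    return np.mean(coeffs, axis=0)                          # group-level coefficients

def estimate_joint_angle(group_coeffs, theta_act):
    """Reconstruct the joint angle from the on-board actuation encoder alone."""
    return np.polyval(group_coeffs, theta_act)

# Toy cycles: a nominal cubic MCP profile (degrees) plus small per-cycle noise.
rng = np.random.default_rng(1)
t = np.linspace(0.0, 1.0, 50)
nominal = lambda x: 80 * x**3 - 120 * x**2 + 130 * x  # illustrative profile only
cycles = [(t, nominal(t) + rng.normal(0.0, 0.5, t.size)) for _ in range(5)]
model = fit_group_model(cycles)
print(estimate_joint_angle(model, 0.5))  # close to nominal(0.5) = 45 degrees
```

Averaging polynomial coefficients (rather than refitting pooled data) is one plausible reading of "averaging coefficients across valid cycles"; for least-squares cubics on a shared abscissa the two coincide.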
(This article belongs to the Special Issue Latest Advances and Prospects of Human-Robot Interaction (HRI))

22 pages, 4580 KB  
Article
Experimental Evaluation of Kinematic Compatibility in Three Upper Limb Exoskeleton Configurations Using Interface Force and Torque
by Hui Zeng, Hao Liu, Longfei Fu and Qiang Cao
Biomimetics 2026, 11(2), 97; https://doi.org/10.3390/biomimetics11020097 - 1 Feb 2026
Abstract
Upper limb rehabilitation exoskeletons form a spatial closed kinematic chain with the human arm, where inevitable joint-center and axis misalignment can generate hyperstatic interaction forces and torques. Passive degrees of freedom (DOF) are widely introduced to improve kinematic compatibility, yet different compatible configurations may exhibit distinct wearable performance. This study experimentally compares three compatible four-degree-of-freedom exoskeleton configurations derived from the synthesis of Li et al. using a single reconfigurable rehabilitation robot. The platform is assembled into each configuration through modular passive units and instrumented with two six-axis force–torque sensors at the upper-arm and forearm interfaces. Interaction forces and torques are measured in passive training mode during eating and combing trajectories. For each configuration, tests are performed with passive joints released and with passive joints locked to quantify the effect of passive motion accommodation. Directional and resultant metrics are computed using mean and peak values over movement cycles. Results show that releasing passive joints consistently reduces interaction loading, and Category 2 achieves the lowest forces and torques with the strongest peak suppression, indicating the best practical compatibility.
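The resultant mean/peak metrics described above can be sketched as follows, assuming raw six-axis samples (Fx, Fy, Fz, Tx, Ty, Tz) per movement cycle. The array shapes, noise model, and the released-vs-locked comparison ratio are illustrative assumptions, not the study's data pipeline.

```python
import numpy as np

def cycle_metrics(wrench):
    """wrench: (N, 6) array of force-torque samples over one movement cycle."""
    f_res = np.linalg.norm(wrench[:, :3], axis=1)  # resultant force magnitude [N]
    t_res = np.linalg.norm(wrench[:, 3:], axis=1)  # resultant torque magnitude [Nm]
    return {"f_mean": f_res.mean(), "f_peak": f_res.max(),
            "t_mean": t_res.mean(), "t_peak": t_res.max()}

def peak_suppression_ratio(cycles_released, cycles_locked):
    """Mean peak resultant force, released vs locked passive joints (<1 = suppression)."""
    mean_peak = lambda cycles: np.mean([cycle_metrics(c)["f_peak"] for c in cycles])
    return mean_peak(cycles_released) / mean_peak(cycles_locked)

# Toy data: locking the passive joints roughly doubles interaction loading.
rng = np.random.default_rng(2)
released = [rng.normal(0.0, 1.0, (200, 6)) for _ in range(5)]
locked = [rng.normal(0.0, 2.0, (200, 6)) for _ in range(5)]
ratio = peak_suppression_ratio(released, locked)
print(ratio)  # below 1: released passive joints reduce peak loading
```

Separating directional components (per-axis means and peaks) from resultant magnitudes, as the abstract does, matters because misalignment loads can cancel per axis while the resultant stays large.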
(This article belongs to the Special Issue Bioinspired Engineered Systems)
