
Search Results (1,829)

Search Parameters:
Keywords = human-robot interaction

19 pages, 1725 KB  
Review
A Comprehensive Narrative Review of Abrupt Movements in Human–Robot Interaction
by Greta Di Vincenzo, Elisa Digo, Valerio Cornagliotto, Laura Gastaldi and Stefano Pastorelli
Appl. Sci. 2026, 16(7), 3350; https://doi.org/10.3390/app16073350 - 30 Mar 2026
Abstract
Human–robot interaction (HRI) takes place in dynamic environments where both humans and robots act as active agents, making the system inherently unpredictable. Abrupt movements can originate from either side and include human reflexes, fatigue, or unexpected reactions, as well as robot malfunctions, control errors, or task changes. These unpredictable events generate significant risks for both interaction fluency and safety, affecting not only the physical domain (e.g., collisions, excessive forces) but also cognitive aspects such as trust and predictability. Although different application areas present domain-specific challenges, a comprehensive overview of abrupt movements in HRI is still lacking, especially in the industrial scenario. This review aims to consolidate current knowledge regarding how abrupt phenomena are analyzed, prevented, and mitigated across various contexts and to offer new insights for researchers. In detail, after describing the literature search and the screening process, the review categorizes abrupt events, highlights key methodological approaches, and identifies gaps and future directions. By providing a structured synthesis of existing strategies, this work guides researchers in developing safer and more adaptive HRI frameworks capable of handling unpredictability. Full article
(This article belongs to the Special Issue Latest Advances and Prospects of Human-Robot Interaction (HRI))

34 pages, 1413 KB  
Systematic Review
A Systematic Review of Safety-Driven Approaches in Human–Robot Collaborative Systems
by Akhtar Khan, Maaz Akhtar, Sheheryar Mohsin Qureshi, Muzzamil Mustafa, Naser A. Alsaleh and Imran Ahmad
Sensors 2026, 26(7), 2079; https://doi.org/10.3390/s26072079 - 27 Mar 2026
Viewed by 351
Abstract
Collaboration between humans and robots (HRC) is advancing rapidly due to the intersection of robotics and generative artificial intelligence (GenAI). The current paper includes a systematic review of 103 studies on the role of generative models, including Generative Adversarial Networks (GANs), Variational Autoencoders (VAEs), diffusion models, and Large Language Models (LLMs) in improving the safety, trust, and adaptability of collaborative robotics using a PRISMA-based systematic approach. The review recognizes four major themed areas of GenAI-based safety frameworks—namely, data-driven simulation to synthesize hazards, predictive reasoning to forecast human motion, adaptive control to reduce risks dynamically, and trust-aware cognition to explain human–robot interaction. Findings indicate that generative models transform robotic safety from a reactive mechanism to proactive, contextual and interpretable systems. Nevertheless, real-time performance, interpretability, standard benchmarking, and ethical assurance are still some of the challenges to be overcome. The paper proposes a taxonomy linking generative modeling layers and physical, cognitive and ethical aspects of HRC safety, and gives a roadmap of certifiable hybrid systems with generative foresight and deterministic control. This synthesis provides a foundation for developing transparent, adaptive, and trustworthy collaborative robotic systems. Full article
(This article belongs to the Special Issue Feature Review Papers in Sensors and Robotics)

32 pages, 1329 KB  
Review
Deep Learning-Based Gaze Estimation: A Review
by Ahmed A. Abdelrahman, Basheer Al-Tawil and Ayoub Al-Hamadi
Robotics 2026, 15(4), 69; https://doi.org/10.3390/robotics15040069 - 25 Mar 2026
Viewed by 349
Abstract
Gaze estimation, a critical facet of understanding user intent and enhancing human–computer interaction, has seen substantial advancements with the integration of deep learning technologies. Despite the progress, the application of deep learning in gaze estimation presents unique challenges, notably in the adaptation and optimization of these models for precise gaze tracking. This paper conducts a thorough review of recent developments in deep learning-based gaze estimation, with a particular focus on the evolution from traditional methods to sophisticated appearance-based techniques. We examine the key components of successful gaze estimation systems, including input feature processing, neural network architectures, and the importance of data preprocessing in achieving high accuracy. Our analysis extends to a comprehensive comparison of existing methods, shedding light on their effectiveness and limitations within various implementation contexts. Through this systematic review, we aim to consolidate existing knowledge in the field, identify gaps in current research, and suggest directions for future investigation. By providing a clear overview of the state-of-the-art in gaze estimation and discussing ongoing challenges and potential solutions, our work seeks to inspire further innovation and progress in developing more accurate and efficient gaze estimation systems. Full article

17 pages, 2659 KB  
Article
Estimation of Fingertip Contact Angle from Tactile Pressure Contours
by Qianqian Tian, Jixiao Liu, Funing Hou and Shijie Guo
Appl. Sci. 2026, 16(7), 3172; https://doi.org/10.3390/app16073172 - 25 Mar 2026
Viewed by 188
Abstract
Tactile sensing is an important perceptual modality that enables robots to understand human contact behaviors. Estimating the fingertip contact angle based on tactile pressure distribution provides a simplified representation of the finger’s contact configuration and supports tactile-based perception in human–robot interaction. However, the relationship between tactile pressure distributions and fingertip contact configuration remains insufficiently understood. In this study, a simplified contact mechanics model was employed to investigate the relationship between tactile pressure characteristics and fingertip contact conditions. Theoretical analysis indicates that both the contact area and the contour dimensions of the pressure distribution are influenced by the contact angle and contact force, with varying sensitivities in different directions to these factors. Based on this theory, simplified finite element modeling of the fingertip and multi-subject experiments were conducted. The deformation behavior of the contact region under different contact angles and contact forces was analyzed. The experimental results were generally consistent with the theoretical analysis. Furthermore, contour descriptors were extracted from the tactile pressure distribution to establish a relationship model for estimating the fingertip contact angle, and the model’s accuracy was analyzed. The experimental results indicate that the extracted contour features exhibit systematic variations with contact angle, and the proposed method achieves a mean absolute error (MAE) of 2.73° and a root mean square error (RMSE) of 7.25°. These results demonstrate that tactile pressure contours provide an effective and computationally efficient cue for estimating fingertip contact configuration. This approach may help robots understand human behavior and has potential applications in human–robot interaction and robotic grasping. Full article
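As an aside on the error metrics quoted in the fingertip contact-angle abstract (MAE 2.73°, RMSE 7.25°), a minimal sketch of how MAE and RMSE are computed from predicted versus ground-truth angles. The angle values below are made-up illustrative data, not the paper's.

```python
import math

def mae(true, pred):
    # Mean absolute error: average magnitude of the estimation error
    return sum(abs(t - p) for t, p in zip(true, pred)) / len(true)

def rmse(true, pred):
    # Root mean square error: penalizes large outlier errors more than MAE
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(true, pred)) / len(true))

true_deg = [10.0, 20.0, 30.0]   # hypothetical ground-truth contact angles (deg)
pred_deg = [12.0, 18.0, 33.0]   # hypothetical estimates from pressure contours (deg)

print(f"MAE  = {mae(true_deg, pred_deg):.2f} deg")   # (2+2+3)/3
print(f"RMSE = {rmse(true_deg, pred_deg):.2f} deg")
```

An RMSE well above the MAE, as reported in the abstract (7.25° vs. 2.73°), typically indicates a small number of large outlier errors rather than uniformly poor estimates.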

23 pages, 27743 KB  
Review
A Framework for Safe Mobile Manipulation in Human-Centered Applications
by Pangcheng David Cen Cheng, Cesare Luigi Blengini, Rosario Francesco Cavelli, Angela Ripi and Marina Indri
Robotics 2026, 15(4), 68; https://doi.org/10.3390/robotics15040068 - 25 Mar 2026
Viewed by 258
Abstract
In recent years, applications with robots collaborating actively with humans have been increasing. The transition from Industry 4.0 to 5.0 rearranges the focus of fully automated processes to a human-centered system that allows more customization and flexibility. In human-centered systems, the robot is expected to safely assist or provide support to the human operator, avoiding any unintentional harm, while the latter is focused on tasks that require human reasoning, since current decision-making systems still have some limitations. This survey reviews all the main functionalities required to make a robot (collaborative or not) act as an assistant for human operators, analyzing and comparing solutions proposed by the authors (based on previous works) and/or the ones available in the literature. In this way, it is possible to combine those functionalities and build a complete framework enabling safe mobile manipulation while interacting with humans. In particular, a mobile manipulator is used to receive requests from a user, navigate in a human-shared environment, identify the requested object, and grasp and safely deliver such an object to the user. The framework, which is completed by a user interface designed using Android Studio, is developed in ROS1, tested, and validated on a real mobile manipulator in real-world conditions. Full article
(This article belongs to the Special Issue Human–Robot Collaboration in Industry 5.0)

17 pages, 15683 KB  
Article
A Rigid–Flexible Coupled Six-Dimensional Force Sensor and Its PINN-Based Decoupling Algorithm
by Yinlong Zhu, Zhengyu Xie, Chuanwei Lu, Shuang Xi and Xu Wang
Sensors 2026, 26(7), 2038; https://doi.org/10.3390/s26072038 - 25 Mar 2026
Viewed by 141
Abstract
Six-dimensional force sensors are widely used in compliant robotic control and safe human–machine interactions due to their mature sensing mechanisms and high accuracy. However, conventional six-dimensional force sensors often suffer from complex structures, bulky size, and high manufacturing costs. To address these limitations, this paper proposes a compact and low-cost six-axis force sensor based on capacitive sensing. By employing a tailored arrangement of flexible sensing units, partial structural decoupling of force and torque in specific directions is achieved. A Physically Informed Neural Network (PINN) is further introduced to decouple the residual coupled signals. Experimental results demonstrate that the proposed method significantly improves decoupling accuracy, achieving force decoupling errors of 1.75%, 1.20%, and 1.31% for Fx, Fy, and Fz, respectively, and torque decoupling errors of 0.95%, 0.93%, and 0.97% for Mx, My, and Mz. The proposed sensor offers low-cost fabrication, compact integration, and high sensitivity, making it well suited for lightweight and high-precision sensing applications. Full article
(This article belongs to the Section Physical Sensors)
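On the per-axis decoupling errors quoted in the six-dimensional force sensor abstract (e.g. 1.75% for Fx): a hedged sketch of one common way such a figure is defined, namely the worst residual cross-coupling output on an axis expressed as a percentage of that axis's full-scale reading. The function name and the numbers below are illustrative assumptions, not the paper's definition or data.

```python
def decoupling_error_pct(residual_outputs, full_scale):
    """Worst-case residual coupling on an axis, as % of its full-scale output.

    residual_outputs: readings on this axis while only *other* axes are loaded
    full_scale: the axis's rated full-scale output, in the same units
    """
    return max(abs(r) for r in residual_outputs) / full_scale * 100.0

# Hypothetical Fx channel: worst residual of 0.35 N observed while loading
# the other five axes, against an assumed 20 N full-scale range.
err_fx = decoupling_error_pct([0.10, -0.35, 0.22], full_scale=20.0)
print(f"Fx decoupling error: {err_fx:.2f}%")
```

Under this definition, lowering the reported percentages (here via a PINN decoupler) means the neural network has absorbed most of the cross-axis interference left by the partial structural decoupling.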

22 pages, 5469 KB  
Article
Reinforcement-Based Person-Specific Training for Children with Autism Using a Humanoid Robot NAO
by Masud Karim, Md. Solaiman Mia, Saifuddin Md. Tareeq and Md. Hasanuzzaman
Robotics 2026, 15(4), 66; https://doi.org/10.3390/robotics15040066 - 25 Mar 2026
Viewed by 554
Abstract
Autism Spectrum Disorder (ASD) is defined by ongoing difficulties in social communication, flexibility in behavior, and adaptive learning skills. Interventions that utilize robots have demonstrated potential in providing organized training for children with ASD; however, there is a lack of controlled studies that specifically examine the effects of reinforcement strategies. This research introduces a systematic interaction policy based on reinforcement, founded on the principles of Applied Behavior Analysis (ABA), and assesses its effectiveness through a randomized controlled experimental design with observation. The humanoid robot NAO was used in two different interaction scenarios, one involving a reinforcement condition (RC) and the other a non-reinforcement condition (NRC), ensuring that the instructional material and environment were maintained, while only the availability of contingent positive feedback was altered. A total of 50 participants diagnosed with ASD Level 2 engaged in structured word-learning sessions. Learning outcomes were assessed using institutional performance criteria, average response time, and emotion analysis derived from a CNN-based facial expression model. Independent samples t-tests revealed statistically significant improvements in both performance scores (t(48) = 3.779, p < 0.05) and response times (t(48) = 3.758, p < 0.05) in the reinforcement condition compared to the non-reinforcement condition. The findings demonstrate that structured ABA-based reinforcement within robotic interaction significantly enhances learning efficiency and task engagement, contributing methodologically rigorous evidence to robot-assisted ASD intervention research. Full article
(This article belongs to the Section AI in Robotics)
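On the statistics reported in the NAO study abstract (t(48) = 3.779, p < 0.05): a minimal sketch of the pooled-variance independent-samples t statistic, using only the standard library. The score lists are made-up illustrative data with 5 participants per group; the actual study had 25 per group, which is what yields 48 degrees of freedom.

```python
import math

def independent_t(a, b):
    """Pooled-variance two-sample t statistic and its degrees of freedom."""
    n1, n2 = len(a), len(b)
    m1, m2 = sum(a) / n1, sum(b) / n2
    ss1 = sum((x - m1) ** 2 for x in a)          # sum of squared deviations
    ss2 = sum((x - m2) ** 2 for x in b)
    sp2 = (ss1 + ss2) / (n1 + n2 - 2)            # pooled variance
    t = (m1 - m2) / math.sqrt(sp2 * (1 / n1 + 1 / n2))
    return t, n1 + n2 - 2

rc_scores = [8, 9, 7, 8, 10]    # reinforcement condition (illustrative)
nrc_scores = [6, 5, 7, 6, 5]    # non-reinforcement condition (illustrative)

t_stat, df = independent_t(rc_scores, nrc_scores)
print(f"t({df}) = {t_stat:.3f}")
```

With the study's 25 participants per group, df = 25 + 25 − 2 = 48, matching the t(48) notation in the abstract.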

20 pages, 15973 KB  
Article
Streamlining Human–Robot Interaction: Integrating LLM-Based Planning into Modular Robotic Frameworks
by MinHyuk Kim, JooHee Park, Kwanyong Park, Yong-Ju Lee and Sanghun Jeon
Sensors 2026, 26(6), 1978; https://doi.org/10.3390/s26061978 - 21 Mar 2026
Viewed by 419
Abstract
Embodied artificial intelligence (AI), which integrates AI and robotics, has made significant progress, particularly in human–robot interaction, task-assisting robots, and the integration of multimodal AI models. Experimental studies have demonstrated strong performance in complex tasks, such as providing human assistance, performing household chores, and object manipulation through pick-and-place operations. However, despite these impressive capabilities, real-world applicability remains limited. While tasks such as household chores and object manipulation offer significant practical utility, users often struggle to provide effective instructions, and execution remains prohibitively slow for real-world deployment. This study introduces an approach to enhance usability through spoken human instructions and reduce operation time by streamlining intermediate steps through our Module Handler. The proposed approach leverages a large language model to extract information from spoken human instructions accurately. Through experiments, we validated the accuracy of our approach and confirmed speed improvements compared with related studies. Our experiments evaluated system accuracy in extracting relevant information from spoken human instruction, achieving an object identification accuracy rate of approximately 92.47%. In addition, our method reduced task completion times by an average of 33 s across four different experimental environments compared with existing modular robotics systems. This time reduction is significant for enhancing robotic task execution efficiency. Full article
(This article belongs to the Section Sensors and Robotics)

16 pages, 3213 KB  
Article
Novel Design of a Soft–Rigid Hybrid Pneumatic Actuator Incorporating a Spine-like Internal Structure
by Yuanzhong Li and Hiroyuki Ishii
Robotics 2026, 15(3), 64; https://doi.org/10.3390/robotics15030064 - 20 Mar 2026
Viewed by 199
Abstract
Soft pneumatic actuators (SPAs) are widely used in robotic systems due to their inherent compliance and safety during human–robot interaction. However, their intrinsic softness often leads to insufficient stiffness and a low load-bearing capacity, which limit their applicability. In this work, a novel soft–rigid hybrid pneumatic actuator incorporating a spine-like internal structure is proposed to enhance the effective stiffness while preserving bending flexibility. Inspired by the biomechanical structure of the human spine, the embedded spine-like structure consists of interconnected rigid vertebrae integrated along the central axis of a soft pneumatic actuator. Static bending experiments under different base orientations and external loads are conducted to evaluate the actuator’s performance. The experimental results demonstrate that the proposed actuator exhibits improved posture retention, enhanced load-bearing capacity, and higher robustness against gravitational loading compared to a soft pneumatic actuator without a spine-like structure. These results confirm that the spine-like internal structure effectively increases the actuator’s effective stiffness, enabling stable bending behavior under various working conditions. Full article
(This article belongs to the Special Issue Soft Robotic Actuation and Locomotion: The State of the Art)

18 pages, 2996 KB  
Article
A Multimodal Agentic AI Framework for Intuitive Human–Robot Collaboration
by Xiaoyun Liang and Jiannan Cai
Sensors 2026, 26(6), 1958; https://doi.org/10.3390/s26061958 - 20 Mar 2026
Viewed by 405
Abstract
Widespread acceptance of collaborative robots in human-involved scenarios requires accessible and intuitive interfaces for lay workers and non-expert users. Existing interfaces often rely on users to plan and issue low-level commands, necessitating extensive knowledge of robot control. This study proposes a multimodal agentic AI framework integrating natural user interfaces (NUIs) to foster effortless human-like partnerships in human–robot collaboration (HRC), which enhance intuitiveness and operational efficiency. First, it allows users to instruct robots using plain language verbally, coupled with gaze, revealing objects precisely. Second, it offloads users’ workload for robot motion planning by understanding context and reasoning task decomposition. Third, coordinating with AI agents built on large language models (LLMs), the system interprets users’ requests effectively and provides feedback to establish transparent communication. This proof-of-concept study included experiments to demonstrate a practical implementation of the agentic AI framework on a mobile manipulation robot in the collaborative task of human–robot wood assembly. Seven participants were recruited to interact with this AI-integrated agentic robotic system. Task performance and user experience metrics were measured in terms of completion time, intervention rate, NASA TLX survey for workload, and valuable insights of practical applications were summarized through a qualitative analysis. This study highlights the potential of NUIs and agentic AI-embodied robots to overcome existing HRC barriers and contributes to improving HRC intuitiveness and efficiency. Full article
(This article belongs to the Special Issue Advanced Sensors and AI Integration for Human–Robot Teaming)

29 pages, 3356 KB  
Review
Comparative Analysis of Actuation Methods in Flexible Upper-Limb Exoskeleton Robots
by Cuizhi Fei, Zheng Deng, Chongyu Wang, Shuai Wang and Hui Li
Actuators 2026, 15(3), 171; https://doi.org/10.3390/act15030171 - 18 Mar 2026
Viewed by 229
Abstract
The flexible upper-limb exoskeleton robot (exosuit) is composed of fabrics, soft actuators and compliant force-transmitting structures, which provides assistance or rehabilitation training for the shoulders, elbows, wrists and hands. By realizing human–robot collaboration, this kind of system has the advantages of comfort, light weight and portability, thus promoting motor function recovery and neural plasticity. This review establishes a classification and comparison framework for flexible upper-limb exoskeletons based on the actuation modalities and systematically summarizes the research progress under different actuation modalities. The relevant literature published from 2015 to 2025 was retrieved from the EI, IEEE Xplore, PubMed and Web of Science databases. After screening according to the preset inclusion and exclusion criteria, a total of 64 original research papers meeting the criteria were finally included for analysis. According to the actuation modalities, the flexible upper-limb exoskeleton robot is classified, and all kinds of systems are summarized and compared. Motor–cable/tendon actuation and pneumatic/hydraulic actuation have advanced substantially and are approaching technical maturity for flexible upper-limb exoskeletons. Meanwhile, designs based on passive/hybrid mechanisms (e.g., elastic energy storage elements and clutches) and new intelligent material actuations are showing a diversified development trend. In the future, the development is expected to further focus on lightweight and compliance, and by integrating multimodal sensing and feedback control, motion intention recognition and human–robot interaction theories, actuation systems will be developed towards modularization, intelligence and high-power density, in order to achieve more comfortable, lighter and more effective flexible upper-limb exoskeleton systems. Full article
(This article belongs to the Section Actuators for Robotics)

23 pages, 376 KB  
Article
INTELLECTUM: A Hybrid AR-VR Metaverse Framework for Smart Cities
by Andrey Nechesov and Janne Ruponen
Appl. Syst. Innov. 2026, 9(3), 61; https://doi.org/10.3390/asi9030061 - 17 Mar 2026
Viewed by 363
Abstract
This work presents INTELLECTUM as a reference architecture and design-time evaluation framework for multi-entity XR–AI–digital twin systems. Rather than optimizing a specific implementation, the paper formalizes architectural invariants, event semantics, and coordination mechanisms that precede and inform system realization. INTELLECTUM provides a conceptual framework for structuring interactions across physical and virtual environments, emphasizing human-centered design, immersive digital twins, and collaborative extended-reality workspaces. The technical specification defines core architectural components, human integration modalities via WebXR and heterogeneous sensor networks, and representative usage scenarios within smart city ecosystems. By enabling AI-assisted urban planning, interactive simulation, and multi-actor coordination, INTELLECTUM positions itself as an XR-based architectural foundation for next-generation smart city platforms. Full article
(This article belongs to the Special Issue Information Industry and Intelligence Innovation)

20 pages, 1971 KB  
Article
Human–Robot Interaction Strategy of Service Robot with Insufficient Capability in Self-Service Shop
by Wa Gao, Tao He, Yang Ji, Yue Kan and Fusheng Zha
Biomimetics 2026, 11(3), 213; https://doi.org/10.3390/biomimetics11030213 - 16 Mar 2026
Viewed by 431
Abstract
This paper explores the interaction strategies of service robots in self-service shops from a user experience perspective in the case of robots with insufficient capabilities. A Yanshee robot and a self-developed localization-rotation system are employed as the experimental platform. A sales return in a self-service shop is employed as the experimental scenario. Two types of robot’s insufficient capabilities, three strategies of robots’ apology and a social interaction cue imitated from a human salesperson are considered in the design of interaction strategy between human and robot in this scenario. The results show that robots’ social insufficiency leads to more negative influence on customer experiences of fluency, comprehensibility, impression, intelligence, willingness for future interaction than robots’ performance insufficiency. An empathetic apology when the robot has insufficient performance is an effective interaction strategy. The interaction cue that the robot turns to face customers is not beneficial to customer experiences but does influence the internal relationship between customer experiences during HRI and after HRI. In the case of robots with social insufficiency in a self-service shop, impression, intelligence and interaction capability have positive impacts on the willingness for future interaction, while they are also positively affected by fluency or comprehensibility. In the case of robots with performance insufficiency, impression has a positive impact on willingness, while it is not directly related to fluency. The findings are valuable for informing the interaction design of service robots deployed in shopping, especially in real environments where performance and cost must be balanced. Full article
(This article belongs to the Section Biomimetic Design, Constructions and Devices)

25 pages, 2220 KB  
Article
HRC Metrology: Assessment Criteria, Metrics and Methods for Human–Robot Co-Manipulation Tasks
by S. M. Mizanoor Rahman
Machines 2026, 14(3), 336; https://doi.org/10.3390/machines14030336 - 16 Mar 2026
Viewed by 315
Abstract
We developed a human–robot collaborative manipulation system (co-manipulation system) in the form of a power assist robotic system (PARS) where a human and a robot collaborated to perform the co-manipulation of an object with power assistance. We conducted an experiment (the first experiment), where in each trial of the experiment, a human subject performed the co-manipulation of the object with the PARS, and an expert human–robot co-manipulation researcher observed the co-manipulation task. We collected the co-manipulation and observation data, analyzed the data, and conducted reviews of the related literature, and developed the HRC (human–robot collaboration) metrology, which consisted of necessary criteria, metrics and methods to assess human–robot collaborative manipulation tasks. The proposed HRC metrology consisted of both human–robot collaborative performance and human–robot interactions (HRI) related assessment criteria. Then, we developed another human–robot co-manipulation system using a robot manipulator. In this system, the human–robot co-manipulation task was performed in conjunction with a collaborative assembly task between the robot and human co-workers. In another experiment (the second experiment), we assessed the co-manipulation task for each robotic system separately based on the developed HRC metrology (set of assessment criteria, metrics and methods) to verify and validate the practicality, usability and effectiveness of the criteria, metrics and methods. The results showed that the HRC metrology was effective and practical in assessing the co-manipulation tasks. We then discussed the strengths and limitations of the assessment criteria, metrics and methods. 
The proposed HRC metrology can be used to assess human–robot collaborative performance and human–robot interactions in human–robot co-manipulation tasks with potential real-world applications in industrial manipulation and manufacturing, transport, logistics, civil construction, rescue and disaster management, timber processing, etc. Full article
(This article belongs to the Special Issue Design and Control of Assistive Robots)

58 pages, 7331 KB  
Review
Human–Robot Interaction in Indoor Mobile Robotics: Current State, Interaction Modalities, Applications, and Future Challenges
by Arman Ahmed Khan and Kerstin Thurow
Sensors 2026, 26(6), 1840; https://doi.org/10.3390/s26061840 - 14 Mar 2026
Viewed by 329
Abstract
This paper provides a comprehensive survey of Human–Robot Interaction (HRI) for indoor mobile robots operating in human-centered environments such as hospitals, laboratories, offices, and homes. We review interaction modalities—including speech, gesture, touch, visual, and multimodal interfaces—and examine key user experience factors such as usability, trust, and social acceptance. Implementation challenges are discussed, encompassing safety, privacy, and regulatory considerations. Representative case studies, including healthcare and domestic platforms, highlight design trade-offs and integration lessons. We identify critical technical challenges, including robust perception, reliable multimodal fusion, navigation in dynamic spaces, and constraints on computation and power. Finally, we outline future directions, including embodied AI, adaptive context-aware interactions, and standards for safety and data protection. This survey aims to guide the development of indoor mobile robots capable of collaborating with humans naturally, safely, and effectively. Full article
