Search Results (302)

Search Parameters:
Keywords = physical human-robot interaction

20 pages, 31486 KB  
Article
Design and Implementation of a Companion Robot with LLM-Based Hierarchical Emotion Motion Generation
by Yoongu Lim, Jaeuk Cho, Duk-Yeon Lee, Dongwoon Choi and Dong-Wook Lee
Appl. Sci. 2025, 15(23), 12759; https://doi.org/10.3390/app152312759 - 2 Dec 2025
Abstract
Recently, human–robot interaction (HRI) with social robots has attracted significant attention. Among them, companion robots, which exhibit pet-like behaviors and interact with people primarily through non-verbal means, particularly require the generation of appropriate gestures. This paper presents the design and implementation of a companion cat robot, named PEPE, with a large language model (LLM)-based hierarchical emotional motion generation method. To design the cat-like companion robot, an analysis of feline emotional behaviors was conducted to identify the body parts and motions essential for effective emotional expression. Based on this analysis, the required degrees of freedom (DoFs) and structural configuration for PEPE were derived. To generate expressive gestures efficiently and reliably, a hierarchical LLM-based emotional motion generation method was proposed. The process defines the robot’s structural features, establishes a gesture generation code format, and incorporates emotion-based guidelines grounded in feline behavioral analysis to mitigate LLM hallucination and ensure physical feasibility. The proposed method was implemented on the physical robot, and eight emotional gestures were generated—Happy, Angry, Sad, Fearful, Joyful, Excited, Positive Feedback, and Negative Feedback. A user study with 15 participants was conducted to validate the system. The high-arousal gestures—Angry, Joyful, and Excited—were rated significantly above the neutral clarity threshold (p < 0.05), demonstrating clear user recognition. Meanwhile, low-arousal gestures exhibited neutral-level perceptions consistent with their subtle motion profiles. These results confirm that the proposed LLM-based framework effectively generates expressive, physically executable gestures for a companion robot.
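The abstract's pipeline — an emotion label selects behavior guidelines, and generated motion is validated against the robot's structure — can be sketched as below. All joint names, limits, and guideline entries are hypothetical placeholders, not PEPE's actual specification.

```python
# Illustrative sketch of guideline-constrained emotional gesture generation:
# an emotion label selects behavior guidelines, and LLM-proposed keyframes
# are filtered and clamped to joint limits to ensure physical feasibility.
# Joint names, limits, and guidelines are invented for this example.

JOINT_LIMITS = {            # degrees, per joint (illustrative)
    "neck_pitch": (-30, 30),
    "tail_yaw": (-60, 60),
    "ear_left": (0, 45),
}

GUIDELINES = {              # emotion -> allowed joints and amplitude scale
    "Happy": {"joints": ["tail_yaw", "ear_left"], "amplitude": 1.0},
    "Sad":   {"joints": ["neck_pitch"],           "amplitude": 0.3},
}

def clamp(value, lo, hi):
    return max(lo, min(hi, value))

def validate_gesture(emotion, keyframes):
    """Filter proposed keyframes down to physically feasible ones.

    keyframes: list of {joint_name: angle_deg} dicts (one per time step).
    Joints outside the emotion's guideline are dropped; angles are scaled
    by the guideline amplitude and clamped to the joint's limits.
    """
    guide = GUIDELINES[emotion]
    feasible = []
    for frame in keyframes:
        checked = {}
        for joint, angle in frame.items():
            if joint not in guide["joints"]:
                continue                  # guideline: joint not allowed
            lo, hi = JOINT_LIMITS[joint]
            checked[joint] = clamp(angle * guide["amplitude"], lo, hi)
        feasible.append(checked)
    return feasible
```

A post-hoc validator like this is one simple way to mitigate hallucinated, infeasible motion commands from a generative model.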

28 pages, 3050 KB  
Review
Safety Engineering for Humanoid Robots in Everyday Life—Scoping Review
by Dávid Kóczi and József Sárosi
Electronics 2025, 14(23), 4734; https://doi.org/10.3390/electronics14234734 - 1 Dec 2025
Abstract
As humanoid robots move from controlled industrial environments into everyday human life, their safe integration is essential for societal acceptance and effective human–robot interaction (HRI). This scoping review examines engineering safety frameworks for humanoid robots across four core domains: (1) physical safety in HRI, (2) cybersecurity and software robustness, (3) safety standards and regulatory frameworks, and (4) ethical and societal implications. In the area of physical safety, recent research trends emphasize proactive, multimodal perception-based collision avoidance, the use of compliance mechanisms, and fault-tolerant control to handle hardware failures and falls. In cybersecurity and software robustness, studies increasingly address the full threat landscape, secure real-time communication, and reliability of artificial intelligence (AI)-based control. The analysis of standards and regulations reveals a lag between technological advances and the adaptation of key safety standards in current research. Ethical and societal studies show that safety is also shaped by user trust, perceived safety, and data protection. Within the corpus of 121 peer-reviewed studies published between 2021 and 2025 and included in this review, most work concentrates on physical safety, while cybersecurity, standardization, and socio-ethical aspects are addressed less frequently. These gaps point to the need for more integrated, cross-domain approaches to safety engineering for humanoid robots.

36 pages, 895 KB  
Review
Robotic Motion Techniques for Socially Aware Navigation: A Scoping Review
by Jesus Eduardo Hermosilla-Diaz, Ericka Janet Rechy-Ramirez and Antonio Marin-Hernandez
Future Internet 2025, 17(12), 552; https://doi.org/10.3390/fi17120552 - 1 Dec 2025
Abstract
The increasing inclusion of robots in social areas requires continuous improvement of the behavioral strategies that robots must follow. Although behavioral strategies mainly focus on operational efficiency, other aspects should be considered to provide reliable interaction in terms of sociability (e.g., methods for detecting and interpreting human behaviors, how and where human–robot interaction is performed, and participant evaluation of robot behavior). This scoping review aims to answer seven research questions related to robotic motion in socially aware navigation, considering aspects such as the types of robots used, the characteristics and types of sensors used to detect human behavioral cues, and the types of environments and situations. Articles were collected from the ACM Digital Library, Emerald Insight, IEEE Xplore, ScienceDirect, MDPI, and SpringerLink databases. The PRISMA-ScR protocol was used to conduct the searches. Selected articles met the following inclusion criteria: they (1) were published between January 2018 and August 2025, (2) were written in English, (3) were published in journals or conference proceedings, (4) focused on social robots, (5) addressed Socially Aware Navigation (SAN), and (6) involved the participation of volunteers in experiments. As a result, 22 studies were included; 77.27% of them employed mobile wheeled robots. Platforms using differential and omnidirectional drive systems were each used in 36.36% of the articles. Half of the studies used a functional robot appearance, in contrast to bio-inspired appearances used in 31.80% of the cases. Among the sensors used to collect data from participants, vision-based technologies were the most common (monocular cameras and 3D-vision systems were each reported in seven articles). Processing was mainly performed on board the robot (50%). A total of 59.1% of the studies were performed in real-world environments rather than simulations (36.36%), and a few were performed in hybrid environments (4.54%). Robot interactive behaviors were identified across the experiments: physical behaviors were present in all of them, while only two studies employed visual behaviors. In just over half of the studies (13), participants were asked to provide post-experiment feedback.
(This article belongs to the Special Issue Mobile Robotics and Autonomous System)

16 pages, 2588 KB  
Article
Modeling Human–Robot Proxemics Based on Human Communication Theory: A Behavior–Interaction–Object-Dependent Approach
by Syadza Atika Rahmah, Muhammad Ramadhan Hadi Setyawan, Takenori Obo, Naoyuki Takesue and Naoyuki Kubota
Appl. Sci. 2025, 15(23), 12516; https://doi.org/10.3390/app152312516 - 25 Nov 2025
Abstract
Understanding human comfort when in the presence of robots is vital to constructing socially adaptive robotic systems. This study introduces the Human–Robot Proxemic Index (HRPI). This quantitative model estimates user comfort based on three contextual dimensions: human activity (behavior-dependent, BD), interaction type (interaction-dependent, ID), and object characteristics (object-dependent, OD). Unlike previous proxemic models that focused solely on physical distance, HRPI integrates multidimensional contextual factors and applies sigmoid-based personalization to account for individual sensitivity. A ceiling-mounted service robot and nine participants took part in experiments. Pre- and post-interaction questionnaires were used to find out how comfortable the participants felt and what distance they preferred. The collected data were normalized and incorporated into HRPI through weighted assessment, and validation with ideal dummy data in trials showed that HRPI-based control dynamically adjusted the robot’s approach distance and speed according to user preferences. These findings highlight the strengths of HRPI as a multidimensional, context-aware framework for guiding socially appropriate robot movements and suggest that its integration with topological spatial mapping could further enhance human–robot collaboration in real-world environments.
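The weighted, sigmoid-personalized index described in the abstract can be sketched as follows. The weights, sigmoid parameters, and distance mapping are illustrative assumptions, not the paper's calibrated values.

```python
import math

def hrpi(bd, id_, od, weights=(0.4, 0.3, 0.3), k=4.0, x0=0.5):
    """Hypothetical sketch of a Human-Robot Proxemic Index.

    bd, id_, od: normalized (0..1) behavior-, interaction-, and
    object-dependent comfort scores. A weighted sum is passed through
    a sigmoid to model individual sensitivity (steepness k, midpoint x0).
    """
    s = weights[0] * bd + weights[1] * id_ + weights[2] * od
    return 1.0 / (1.0 + math.exp(-k * (s - x0)))

def approach_distance(index, d_min=0.5, d_max=2.0):
    """Map comfort to a standoff distance in meters: higher comfort
    lets the robot approach closer (illustrative linear mapping)."""
    return d_max - index * (d_max - d_min)
```

For example, a fully comfortable context (all scores 1.0) yields a small standoff near `d_min`, while an uncomfortable one keeps the robot near `d_max`.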

38 pages, 2219 KB  
Review
A Review of Human Intention Recognition Frameworks in Industrial Collaborative Robotics
by Mokone Kekana, Shengzhi Du, Nico Steyn, Abderraouf Benali and Halim Djerroud
Robotics 2025, 14(12), 174; https://doi.org/10.3390/robotics14120174 - 24 Nov 2025
Abstract
The integration of intention recognition systems in industrial collaborative robotics is crucial for improving safety and efficiency in modern manufacturing environments. This review paper examines frameworks that enable collaborative robots to understand human intentions. This ability is essential for providing effective robotic assistance and promoting seamless human–robot collaboration, particularly in enhancing safety, improving operational efficiency, and enabling natural interactions. The paper discusses learning techniques such as rule-based, probabilistic, machine learning, and deep learning models. These technologies empower robots with human-like adaptability and decision-making skills. It also explores cues for intention recognition, categorising them into physical, physiological, and contextual cues. It highlights how implementing these various sensory inputs sharpens the interpretation of human intentions. Additionally, the discussion assesses the limitations of current research, including the need for usability, robustness, industrial readiness, real-time processing, and generalisability across various industrial applications. This evaluation identifies future research gaps that could improve the effectiveness of these systems in industrial settings. This work contributes to the ongoing conversation about the future of collaborative robotics, laying the foundation for advancements that can bridge the gap between human and robotic interactions. The key findings point out the significance of predictive understanding in promoting safer and more efficient human–robot interactions in industrial environments and provide recommendations for its use.
(This article belongs to the Section Industrial Robots and Automation)

15 pages, 7465 KB  
Article
Sensorless Payload Estimation of Serial Robots Using an Improved Disturbance Kalman Filter with a Variable-Parameter Noise Model
by Ruiqing Luo, Jianjun Yuan, Yimin He, Sheng Bao, Liang Du and Zhengtao Hu
Actuators 2025, 14(12), 568; https://doi.org/10.3390/act14120568 - 23 Nov 2025
Abstract
The accurate estimation of the end-effector load force is essential in dynamic robotic scenarios, especially when the end-effector payload varies, to ensure safe and stable physical interaction among humans, robots, and environments. Currently, most applications still rely on payload calibration schemes, but existing calibration techniques often struggle to balance efficiency and accuracy. Moreover, current-based payload estimation methods, which are a commonly used and low-cost technique, face practical challenges such as non-negligible noise. To handle these issues, we propose a sensorless scheme based on a modified disturbance Kalman filter for accurately estimating the load force exerted on robots. Specifically, we introduce the dynamic model of robots that incorporates the nonlinear friction related to velocity and load. Subsequently, a generalized disturbance observer for the robot dynamics is adopted to avoid the measurement noise of joint acceleration. Considering the influence of friction and velocity on the noise parameters in the Kalman filter, a variable-parameter noise model is established. Finally, experimental results demonstrate that the proposed method achieves better performance in terms of accuracy, response, and overshoot suppression compared to the existing methods.
(This article belongs to the Section Actuators for Robotics)
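The core idea — a disturbance Kalman filter whose measurement-noise parameter varies with operating conditions — can be sketched in scalar form. The random-walk disturbance model and the velocity-dependent noise law below are illustrative stand-ins for the paper's variable-parameter noise model, not its actual equations.

```python
def disturbance_kf_step(x, P, z, u, velocity, q=1e-4, r0=0.05, r_gain=0.2):
    """One step of a scalar disturbance Kalman filter (illustrative sketch).

    x, P : disturbance-torque estimate and its variance
    z    : current-based torque measurement
    u    : known control contribution (zero for a pure random walk)
    velocity : joint velocity, used to inflate measurement noise,
               standing in for a variable-parameter noise model.
    """
    # Predict: disturbance modeled as a random walk with process noise q.
    x_pred = x + u
    P_pred = P + q
    # Velocity-dependent measurement noise: friction-related noise grows
    # with speed, so measurements are trusted less at high velocity.
    r = r0 + r_gain * abs(velocity)
    # Update.
    K = P_pred / (P_pred + r)
    x_new = x_pred + K * (z - x_pred)
    P_new = (1.0 - K) * P_pred
    return x_new, P_new
```

Run repeatedly, the estimate converges to the measured disturbance, and the adaptive `r` makes convergence deliberately slower (more smoothing) when the joint is moving fast.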

26 pages, 2301 KB  
Review
Fault Detection and Diagnosis for Human-Centric Robotic Actuation in Healthcare: Methods, Failure Modes, and a Validation Framework
by Camelia Adela Maican, Cristina Floriana Pană, Nicolae Răzvan Vrăjitoru, Daniela Maria Pătrașcu-Pană and Virginia Maria Rădulescu
Actuators 2025, 14(12), 566; https://doi.org/10.3390/act14120566 - 21 Nov 2025
Abstract
This review synthesises fault detection and diagnosis (FDD) methods for robotic actuation in healthcare, where precise, compliant, and safe physical human–robot interaction (pHRI) is essential. Actuator families—harmonic-drive electric transmissions, series-elastic designs, cable/Bowden mechanisms, permanent-magnet synchronous motors (PMSM), and force–torque-sensed architectures—are mapped to characteristic fault classes and to sensing, residual-generation, and decision pipelines. Four methodological families are examined: model-based observers/parity relations, parameter-estimation strategies, signal-processing with change detection, and data-driven pipelines. Suitability for pHRI is assessed by attention to latency, robustness to movement artefacts, user comfort, and fail-safe behaviour. Aligned with ISO 14971 and the IEC 60601/80601 series, a validation framework is introduced, with reportable metrics—time-to-detect (TTD), minimal detectable fault amplitude (MDFA), and false-alarm rate (FAR)—at clinically relevant thresholds, accompanied by a concise reporting checklist. Across 127 studies (2016–2025), a pronounced technology-dependent structure emerges in the actuator-by-fault relationship; accuracy (ACC/F1) is commonly reported, whereas MDFA, TTD, and FAR are rarely documented. These findings support actuation-aware observers and decision rules and motivate standardised reporting beyond classifier accuracy to enable clinically meaningful, reproducible evaluation in contact-rich pHRI.
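The reportable metrics named in the abstract are straightforward to compute from a residual signal and a detection threshold. A minimal sketch, assuming a simple fixed-threshold detector and a known fault-onset sample (the paper's own detector designs and threshold choices are not specified here):

```python
def detect(residuals, threshold):
    """Return sample indices where |residual| exceeds the threshold."""
    return [i for i, r in enumerate(residuals) if abs(r) > threshold]

def time_to_detect(residuals, threshold, fault_onset, dt=0.01):
    """TTD: seconds from fault onset to the first alarm at or after onset
    (None if the fault is never detected)."""
    alarms = [i for i in detect(residuals, threshold) if i >= fault_onset]
    return (alarms[0] - fault_onset) * dt if alarms else None

def false_alarm_rate(residuals, threshold, fault_onset):
    """FAR: fraction of pre-fault samples that trigger an alarm."""
    pre = residuals[:fault_onset]
    alarms = detect(pre, threshold)
    return len(alarms) / len(pre) if pre else 0.0
```

MDFA would follow the same pattern: sweep injected fault amplitudes downward until `time_to_detect` returns None, and report the smallest amplitude still detected.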

10 pages, 2134 KB  
Proceeding Paper
Bilateral Teleoperation of a Formation of Mobile Robots Using Proportional Control and Obstacle Avoidance: Experimental Results
by Juan Cabrera, Gabriela M. Andaluz, Paulo Leica and Oscar Camacho
Eng. Proc. 2025, 115(1), 10; https://doi.org/10.3390/engproc2025115010 - 15 Nov 2025
Abstract
This article proposes a distributed formation control strategy for mobile robots (using TurtleBot3 Burger platforms) based on teleoperation using artificial forces and mechanical impedance modeling. The proposed control law is structured in cascade, consisting of an external loop responsible for maintaining the formation and an internal loop dedicated to obstacle avoidance. Bilateral teleoperation is enabled by integrating the Novint Falcon haptic device, which allows the human operator to issue velocity commands to the formation and receive force feedback based on the robots’ physical interactions with congested environments. This strategy improves remote perception of the environment and promotes safe and collaborative navigation, validated through experiments in real-world environments.
(This article belongs to the Proceedings of The XXXIII Conference on Electrical and Electronic Engineering)
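The "artificial forces" rendered back to the operator can be sketched with a potential-field style repulsion: zero beyond an influence radius, growing as an obstacle nears. The gains, radii, and damping coefficient below are hypothetical, not the paper's tuned values.

```python
def repulsive_force(distance, d_influence=1.0, gain=0.8, d_min=0.05):
    """Potential-field repulsion magnitude: zero beyond the influence
    radius, growing as the robot nears an obstacle (distance floored
    at d_min to avoid division blow-up)."""
    d = max(distance, d_min)
    if d >= d_influence:
        return 0.0
    return gain * (1.0 / d - 1.0 / d_influence)

def haptic_feedback(obstacle_distances, b=0.5):
    """Sum obstacle repulsions sensed by the formation and scale them
    into a force rendered on the operator's haptic device
    (impedance-style, coefficients illustrative)."""
    return b * sum(repulsive_force(d) for d in obstacle_distances)
```

The operator thus literally feels congestion: the closer the formation gets to obstacles, the stiffer the resistance on the haptic handle.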

17 pages, 4118 KB  
Article
Research on the Design and Control Method of Robotic Flexible Magneto-Rheological Actuator
by Ran Shi, Sheng Jian, Guangzeng Chen and Pengpeng Yao
Sensors 2025, 25(22), 6921; https://doi.org/10.3390/s25226921 - 12 Nov 2025
Abstract
To meet the safety and compliance requirements pertaining to robots when interacting physically with humans or the environment in unstructured settings such as households and factories, in this study, we focus on methods for the design and control of a flexible robotic magneto-rheological actuator (MRA). Firstly, for the magneto-rheological fluid clutch (MRC), which is the core component of the MRA, an equivalent magnetic circuit model was established to accurately calculate the magnetic field inside the clutch, and a thermal circuit model was constructed to analytically determine the operating temperature of each component. Considering practical engineering constraints, including mechanical structure, magnetic saturation, maximum current, and maximum temperature, a genetic algorithm was used to optimize parameters of the MRC. Secondly, based on the dynamic characteristics of the MRA, a dynamic model incorporating the motor, reducer, MRC, and load link was established. Given scenarios where torque sensors cannot be installed due to cost and structural space limitations, a model reference PID feedforward control strategy was designed. Torque was estimated using input current. Finally, an experimental platform was built, and static and dynamic torque output experiments were conducted. These experiments verified the excellent torque tracking performance of the designed MRA. Through multi-physics modeling, parameter optimization, and control strategy design, this paper provides a solution for flexible robotic joints that integrates high torque, high compliance, and safety.
(This article belongs to the Section Sensors and Robotics)
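The sensorless control idea — estimate output torque from clutch current, then track a torque reference with feedforward plus PID correction — can be sketched as below. The gains, the linear torque constant, and the class structure are illustrative assumptions, not the paper's identified model.

```python
class TorqueController:
    """Feedforward + PID torque-tracking sketch (gains illustrative).

    Output torque is estimated from clutch current via a hypothetical
    linear torque constant, standing in for a torque sensor.
    """
    def __init__(self, kp=2.0, ki=0.5, kd=0.05, k_torque=1.2, dt=0.001):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.k_torque = k_torque      # N*m per ampere (illustrative)
        self.dt = dt
        self.integral = 0.0
        self.prev_err = 0.0

    def estimate_torque(self, current):
        """Sensorless torque estimate from measured coil current."""
        return self.k_torque * current

    def step(self, torque_ref, current):
        """Return the current command: feedforward from the reference
        plus PID correction on the current-based torque error."""
        err = torque_ref - self.estimate_torque(current)
        self.integral += err * self.dt
        deriv = (err - self.prev_err) / self.dt
        self.prev_err = err
        feedforward = torque_ref / self.k_torque  # nominal current command
        return (feedforward + self.kp * err
                + self.ki * self.integral + self.kd * deriv)
```

When the estimated torque already matches the reference, the command reduces to the pure feedforward term; any mismatch adds a PID correction on top.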

17 pages, 16406 KB  
Article
Loong: An Open-Source Platform for Full-Size Universal Humanoid Robot Toward Better Practicality
by Lei Jiang, Heng Zhang, Boyang Xing, Zhenjie Liang, Zeyuan Sun, Jingran Cheng, Song Zhou, Xu Song, Xinyue Li, Hai Zhou, Yongyao Li and Yufei Liu
Biomimetics 2025, 10(11), 745; https://doi.org/10.3390/biomimetics10110745 - 5 Nov 2025
Abstract
In recent years, humanoid robots have made substantial advances in motion control and multimodal interaction. However, full-size humanoid robots face significant technical challenges due to their inherent geometric and physical properties, which lead to large inertia and substantial driving-force requirements. These characteristics result in issues such as limited biomimetic capability, low control efficiency, and complex system integration, thereby restricting the practical application of full-size humanoid robots in real-world settings. To address these limitations, this paper adopts a biomimetic design approach that draws inspiration from biological structures and movement mechanisms to enhance the robot’s human-like movements and overall efficiency. The platform introduced in this paper, Loong, is designed to overcome these challenges, offering a practically viable solution for full-size humanoid robots. The research team has innovatively used highly biomimetic joint designs to enhance Loong’s capacity for human-like movements and developed a multi-level control architecture along with a multi-master high-speed real-time communication mechanism that significantly improves its control efficiency. In addition, Loong incorporates a modular system integration strategy, which offers substantial advantages in mass production and maintenance and improves its adaptability and practical utility in diverse operational environments. The biomimetic approach not only enhances Loong’s functionality but also enables it to perform better in complex and dynamic environments. To validate Loong’s design performance, extensive experimental tests were performed, demonstrating the robot’s ability to traverse complex terrains such as 13 cm steps and 20° slopes and its competence in object manipulation and transportation. These innovations provide a new design paradigm for the development of full-size humanoid robots while laying a more compatible foundation for the development of hardware platforms for medium- and small-sized humanoid robots. This work makes a significant contribution to the practical deployment of humanoid robots.
(This article belongs to the Special Issue Bionic Engineering Materials and Structural Design)

23 pages, 18423 KB  
Article
Deployable and Habitable Architectural Robot Customized to Individual Behavioral Habits
by Ye Zhang, Penghua Ren, Haoyi Wang, Yu Cui and Zhen Xu
Appl. Syst. Innov. 2025, 8(6), 169; https://doi.org/10.3390/asi8060169 - 5 Nov 2025
Abstract
Architectural robotics enables physical spaces and their components to act, think, and grow with their inhabitants. However, this is still a relatively new field that requires further improvements in portability, customizability, and flexibility. This study integrates spatial embedding knowledge, small-space design principles based on human scales and behaviors, and robotic kinematics to propose a prototype robot capable of efficient batch storage, habitability, and autonomous mobility. Based on the spatial distribution of its user’s dynamic skeletal points, determined using a human–computer interaction design system, this prototype robot can automatically adjust parameters to generate a customized solution aligned with the user’s behavioral habits. This study highlights how considering the inhabitant’s personality can create new possibilities for architectural robots and offers insights for future works that expand architecture into intelligent machines.
(This article belongs to the Special Issue Autonomous Robotics and Hybrid Intelligent Systems)

19 pages, 6901 KB  
Article
Assessing User Experience with Piezoresistive Force Sensors: Interpreting Button Press Impulse and Duration
by Carlos Gilberto Gomez-Monroy, Vicente Borja, Alejandro Ramirez-Reivich and Maria del Pilar Corona-Lira
Sensors 2025, 25(21), 6685; https://doi.org/10.3390/s25216685 - 1 Nov 2025
Abstract
As robotic systems become increasingly integrated into daily life, the need for user experience (UX) assessment methods that are both privacy-conscious and suitable for embedded hardware platforms has grown. Traditional UX evaluations relying on vision, audio, or lengthy questionnaires are often intrusive, computationally demanding, or impractical for low-power devices. In this study, we introduce a novel sensor-based method for assessing UX through direct physical interaction. We designed a robot lamp with a force-sensing button interface and conducted a user study involving controlled robot errors. Participants interacted with the lamp during a reading task and rated their UX on a 7-point Likert scale. Using force and time data from button presses, we correlated these measurements with user experience ratings and demographic information. Our results demonstrate the potential of bodily interaction metrics as a viable alternative for UX assessment in human–robot interaction, enabling real-time, embedded, and privacy-aware evaluation of user satisfaction in robotic systems.
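The two press metrics named in the title have standard definitions: impulse is the time integral of force, and duration is how long the force stays above a contact threshold. A minimal sketch over uniformly sampled force readings (sampling rate and threshold are assumptions):

```python
def press_impulse(forces, dt):
    """Impulse of a button press: the integral of force over time,
    approximated with the trapezoidal rule over uniform samples."""
    if len(forces) < 2:
        return 0.0
    return sum((forces[i] + forces[i + 1]) * dt / 2.0
               for i in range(len(forces) - 1))

def press_duration(forces, dt, threshold=0.1):
    """Duration the sensed force stays above a contact threshold
    (threshold in the sensor's force units, illustrative value)."""
    return sum(dt for f in forces if f > threshold)
```

For a piezoresistive sensor, `forces` would come from calibrated ADC readings; both metrics are cheap enough to compute on a low-power microcontroller, which is the point of the approach.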

23 pages, 1945 KB  
Article
A Symmetry-Informed Multimodal LLM-Driven Approach to Robotic Object Manipulation: Lowering Entry Barriers in Mechatronics Education
by Jorge Gudiño-Lau, Miguel Durán-Fonseca, Luis E. Anido-Rifón and Pedro C. Santana-Mancilla
Symmetry 2025, 17(10), 1756; https://doi.org/10.3390/sym17101756 - 17 Oct 2025
Abstract
The integration of Large Language Models (LLMs), particularly Visual Language Models (VLMs), into robotics promises more intuitive human–robot interactions; however, challenges remain in efficiently translating high-level commands into precise physical actions. This paper presents a novel architecture for vision-based object manipulation that leverages a VLM’s reasoning capabilities while incorporating symmetry principles to enhance operational efficiency. Implemented on a Yahboom DOFBOT educational robot with a Jetson Nano platform, our system introduces a prompt-based framework that uniquely embeds symmetry-related cues to streamline feature extraction and object detection from visual data. This methodology, which utilizes few-shot learning, enables the VLM to generate more accurate and contextually relevant commands for manipulation tasks by efficiently interpreting the symmetric and asymmetric features of objects. The experimental results in controlled scenarios demonstrate that our symmetry-informed approach significantly improves the robot’s interaction efficiency and decision-making accuracy compared to generic prompting strategies. This work contributes a robust method for integrating fundamental vision principles into modern generative AI workflows for robotics.
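A symmetry-informed, few-shot prompt of the kind the abstract describes might be assembled as below. The template wording, example format, and output schema are entirely hypothetical; the paper's actual prompts are not reproduced here.

```python
def build_symmetry_prompt(task, few_shot_examples):
    """Assemble a few-shot prompt that asks a VLM to report an object's
    symmetry before proposing a grasp (template wording hypothetical)."""
    header = (
        "You control a small robot arm. For the object in the image, "
        "first state whether it is symmetric (axis or plane of symmetry), "
        "then output a grasp pose as JSON: "
        '{"x": ..., "y": ..., "angle_deg": ...}.'
    )
    shots = "\n".join(
        f"Example: {ex['description']} -> symmetry={ex['symmetry']}, "
        f"grasp={ex['grasp']}"
        for ex in few_shot_examples
    )
    return f"{header}\n\n{shots}\n\nTask: {task}"
```

Surfacing symmetry explicitly is what lets the model exploit it: an axially symmetric object, for instance, makes the grasp angle a free parameter.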

19 pages, 9302 KB  
Article
Real-Time Face Gesture-Based Robot Control Using GhostNet in a Unity Simulation Environment
by Yaseen
Sensors 2025, 25(19), 6090; https://doi.org/10.3390/s25196090 - 2 Oct 2025
Abstract
Unlike traditional control systems that rely on physical input devices, facial gesture-based interaction offers a contactless and intuitive method for operating autonomous systems. Recent advances in computer vision and deep learning have enabled the use of facial expressions and movements for command recognition in human–robot interaction. In this work, we propose a lightweight, real-time facial gesture recognition method, GhostNet-BiLSTM-Attention (GBA), which integrates GhostNet and BiLSTM with an attention mechanism, is trained on the FaceGest dataset, and is integrated with a 3D robot simulation in Unity. The system is designed to recognize predefined facial gestures such as head tilts, eye blinks, and mouth movements with high accuracy and low inference latency. Recognized gestures are mapped to specific robot commands and transmitted to a Unity-based simulation environment via socket communication across machines. This framework enables smooth and immersive robot control without the need for conventional controllers or sensors. Real-time evaluation demonstrates the system’s robustness and responsiveness under varied user and lighting conditions, achieving a classification accuracy of 99.13% on the FaceGest dataset. The GBA holds strong potential for applications in assistive robotics, contactless teleoperation, and immersive human–robot interfaces.
(This article belongs to the Special Issue Smart Sensing and Control for Autonomous Intelligent Unmanned Systems)
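The gesture-to-command bridge over a socket can be sketched as follows. The gesture table, JSON message format, confidence gate, host, and port are all illustrative assumptions; the paper does not specify its wire protocol in the abstract.

```python
import json
import socket

# Hypothetical gesture-to-command table (not the paper's actual mapping).
GESTURE_COMMANDS = {
    "head_tilt_left": "turn_left",
    "head_tilt_right": "turn_right",
    "blink_double": "stop",
    "mouth_open": "forward",
}

def encode_command(gesture, confidence, min_confidence=0.9):
    """Map a recognized gesture to a JSON robot command, or None if the
    classifier confidence is too low to act on (gate value illustrative)."""
    if confidence < min_confidence or gesture not in GESTURE_COMMANDS:
        return None
    return json.dumps({"cmd": GESTURE_COMMANDS[gesture], "conf": confidence})

def send_command(message, host="127.0.0.1", port=5005):
    """Ship an encoded command to the simulation over TCP; the Unity side
    would listen on the matching port and decode the JSON."""
    with socket.create_connection((host, port), timeout=1.0) as s:
        s.sendall(message.encode("utf-8"))
```

Gating on classifier confidence keeps low-certainty predictions from moving the robot, which matters once the same pipeline drives physical hardware instead of a simulation.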

20 pages, 1951 KB  
Article
Virtual Prototyping of the Human–Robot Ecosystem for Multiphysics Simulation of Upper Limb Motion Assistance
by Rocco Adduci, Francesca Alvaro, Michele Perrelli and Domenico Mundo
Machines 2025, 13(10), 895; https://doi.org/10.3390/machines13100895 - 1 Oct 2025
Abstract
As stroke becomes more frequent, cutting-edge rehabilitation approaches are required to recover upper limb functionality and to support patients during daily activities. Recently, focus has moved to robotic rehabilitation; however, therapeutic devices are still highly expensive, making rehabilitation not easily affordable. Moreover, devices are not easily accepted by patients, who may refuse to use them because they do not feel comfortable. The presented work proposes the exploitation of a virtual prototype of the human–robot ecosystem for the study and analysis of patient–robot interactions, enabling their simulation-based investigation in multiple scenarios. To accomplish this task, the Dynamics of Multi-physical Systems platform, previously presented by the authors, is further developed to enable the integration of biomechanical models of the human body with mechatronic models of robotic devices for motion assistance, as well as with PID-based control strategies. The work begins with (1) a description of the background, covering the current state of the art and the purpose of the study; (2) the platform is then presented and the system formalized, first in general terms and then (3) in the application-specific scenario; (4) the use case is described, presenting a controlled gym weightlifting exercise supported by an exoskeleton; and (5) the results are analyzed in a final section.
