Search Results (356)

Search Parameters:
Keywords = human–robot communication

29 pages, 7197 KiB  
Review
Recent Advances in Electrospun Nanofiber-Based Self-Powered Triboelectric Sensors for Contact and Non-Contact Sensing
by Jinyue Tian, Jiaxun Zhang, Yujie Zhang, Jing Liu, Yun Hu, Chang Liu, Pengcheng Zhu, Lijun Lu and Yanchao Mao
Nanomaterials 2025, 15(14), 1080; https://doi.org/10.3390/nano15141080 - 11 Jul 2025
Viewed by 317
Abstract
Electrospun nanofiber-based triboelectric nanogenerators (TENGs) have emerged as a highly promising class of self-powered sensors for a broad range of applications, particularly in intelligent sensing technologies. By combining the advantages of electrospinning and triboelectric nanogenerators, these sensors offer superior characteristics such as high sensitivity, mechanical flexibility, lightweight structure, and biocompatibility, enabling their integration into wearable electronics and biomedical interfaces. This review presents a comprehensive overview of recent progress in electrospun nanofiber-based TENGs, covering their working principles, operating modes, and material composition. Both pure polymer and composite nanofibers are discussed, along with various electrospinning techniques that enable control over morphology and performance at the nanoscale. We explore their practical implementations in both contact-type and non-contact-type sensing, such as human–machine interaction, physiological signal monitoring, gesture recognition, and voice detection. These applications demonstrate the potential of TENGs to enable intelligent, low-power, and real-time sensing systems. Furthermore, this paper points out critical challenges and future directions, including durability under long-term operation, scalable and cost-effective fabrication, and seamless integration with wireless communication and artificial intelligence technologies. With ongoing advancements in nanomaterials, fabrication techniques, and system-level integration, electrospun nanofiber-based TENGs are expected to play a pivotal role in shaping the next generation of self-powered, intelligent sensing platforms across diverse fields such as healthcare, environmental monitoring, robotics, and smart wearable systems.
(This article belongs to the Special Issue Self-Powered Flexible Sensors Based on Triboelectric Nanogenerators)

16 pages, 4481 KiB  
Article
Construction and Validation of a Digital Twin-Driven Virtual-Reality Fusion Control Platform for Industrial Robots
by Wenxuan Chang, Wenlei Sun, Pinghui Chen and Huangshuai Xu
Sensors 2025, 25(13), 4153; https://doi.org/10.3390/s25134153 - 3 Jul 2025
Viewed by 351
Abstract
Traditional industrial robot programming methods often pose high usage thresholds due to their inherent complexity and lack of standardization. Manufacturers typically employ proprietary programming languages or user interfaces, resulting in steep learning curves and limited interoperability. Moreover, conventional systems generally lack capabilities for remote control and real-time status monitoring. In this study, a novel approach is proposed that integrates digital twin technology with traditional robot control methodologies to establish a virtual–real mapping architecture. A high-precision and efficient digital twin-based control platform for industrial robots is developed using the Unity3D (2022.3.53f1c1) engine, offering enhanced visualization, interaction, and system adaptability. The high-precision twin environment is constructed from three dimensions: the physical layer, the digital layer, and the information fusion layer. The system adopts a socket communication mechanism based on the TCP/IP protocol to realize real-time acquisition of robot state information and synchronous issuance of control commands, and it constructs a virtual–real bidirectional mapping mechanism. A visual human–computer interaction interface is developed on the Unity3D platform; its user-oriented graphical interface and modular command system effectively lower the barrier to using the robot. A welding experiment on a spatially curved part verifies the adaptability and control accuracy of the system in complex trajectory tracking and flexible welding tasks, and the experimental results show that the system achieves high accuracy as well as good interactivity and stability.
(This article belongs to the Section Sensors and Robotics)

26 pages, 2296 KiB  
Article
Novel Design of Three-Channel Bilateral Teleoperation with Communication Delay Using Wave Variable Compensators
by Bo Yang, Chao Liu, Lei Zhang, Long Teng, Jiawei Tian, Siyuan Xu and Wenfeng Zheng
Electronics 2025, 14(13), 2595; https://doi.org/10.3390/electronics14132595 - 27 Jun 2025
Viewed by 276
Abstract
Bilateral teleoperation systems have been widely used in many fields of robotics, such as industrial manipulation, medical treatment, space exploration, and deep-sea operation. Communication delay, an inevitable issue in practical implementation, especially for long-distance operations and challenging communication situations, can destroy system passivity and potentially lead to system failure. In this work, we address the time-delayed three-channel teleoperation design problem to guarantee system passivity and achieve high transparency simultaneously. To realize this, the three-channel teleoperation structure is first reformulated to form a two-channel-like architecture. Then, the wave variable technique is used to handle the communication delay and guarantee system passivity. Two novel wave variable compensators are proposed to achieve delay-minimized system transparency, and energy reservoirs are employed to monitor and regulate the energy introduced via these compensators to preserve overall system passivity. Numerical studies confirm that the proposed method significantly improves both kinematic and force tracking performance, achieving near-perfect correspondence with only a single-trip delay. Quantitative analyses using Root Mean Square Error (RMSE), Mean Absolute Error (MAE), and Dynamic Time Warping (DTW) metrics show substantial error reductions compared to conventional wave variable and direct transmission-based three-channel teleoperation approaches. Moreover, statistical validation via the Mann–Whitney U test further confirms the significance of these improvements in system performance. The proposed design guarantees passivity with any passive human operator and environment without requiring restrictive assumptions, offering a robust and generalizable solution for teleoperation tasks with communication time delay.
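The wave variable technique this abstract builds on is a standard passivity-preserving transformation; a minimal sketch of the classic encoding/decoding pair follows (the wave impedance `b` and the test signals are illustrative, not values from the paper):

```python
import math

def encode_wave(velocity, force, b=1.0):
    """Standard wave variable encoding: maps the power variables
    (velocity, force) into wave variables (u, v) whose transmission
    remains passive under arbitrary constant delay."""
    u = (b * velocity + force) / math.sqrt(2 * b)  # forward-traveling wave
    v = (b * velocity - force) / math.sqrt(2 * b)  # backward-traveling wave
    return u, v

def decode_wave(u, v, b=1.0):
    """Inverse transformation: recovers velocity and force from waves."""
    velocity = (u + v) / math.sqrt(2 * b)
    force = (u - v) * math.sqrt(b / 2)
    return velocity, force

# Round-trip check: encoding then decoding recovers the original signals.
u, v = encode_wave(0.5, 2.0, b=4.0)
vel, f = decode_wave(u, v, b=4.0)
```

The paper's contribution lies in the compensators layered on top of this transformation, not in the transformation itself.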
(This article belongs to the Special Issue Intelligent Perception and Control for Robotics)

28 pages, 1791 KiB  
Article
Speech Recognition-Based Wireless Control System for Mobile Robotics: Design, Implementation, and Analysis
by Sandeep Gupta, Udit Mamodiya and Ahmed J. A. Al-Gburi
Automation 2025, 6(3), 25; https://doi.org/10.3390/automation6030025 - 24 Jun 2025
Viewed by 512
Abstract
This paper describes an innovative wireless mobile robotics control system based on speech recognition, in which an ESP32 microcontroller controls the motors and handles Bluetooth communication, while an Android application performs the real-time speech recognition. With speech processed on the Android device and motor commands handled on the ESP32, the study achieves significant performance gains through a distributed architecture while maintaining low latency for feedback control. In experimental tests over a range of 1–10 m, stable command latencies of 110–140 ms with low variation (±15 ms) were observed. The system’s voice and manual button modes both yield over 92% accuracy with the aid of natural language processing, requiring little training and performing strongly in high-noise environments. The novelty of this work lies in an adaptive keyword spotting algorithm for improved recognition in high-noise environments and a gradual latency management system that optimizes processing parameters in the presence of noise. By providing a user-friendly, real-time speech interface, this work enhances human–robot interaction for future assistive devices, educational platforms, and advanced automated navigation research.
(This article belongs to the Section Robotics and Autonomous Systems)

35 pages, 14963 KiB  
Article
Research on the Digital Twin System of Welding Robots Driven by Data
by Saishuang Wang, Yufeng Jiao, Lijun Wang, Wenjie Wang, Xiao Ma, Qiang Xu and Zhongyu Lu
Sensors 2025, 25(13), 3889; https://doi.org/10.3390/s25133889 - 22 Jun 2025
Viewed by 428
Abstract
With the rise of digital twin technology, its application to industrial automation provides a new direction for the digital transformation of the global smart manufacturing industry. To further improve production efficiency and realize enterprise digital empowerment, this paper takes a welding robot arm as the research object and constructs a digital twin system for it. Using three-dimensional modeling technology and model rendering, a digital twin simulation environment for the welding robot arm was built. Parent–child hierarchies and particle effects were used to faithfully reproduce the movement characteristics of the robot arm and the welding effect, with TCP communication and Bluetooth communication realizing data transmission between the virtual end and the physical end. A variety of UI components were used to design the human–machine interaction interface of the digital twin system, ultimately realizing a data-driven digital twin system. Finally, according to the digital twin maturity model constructed by Prof. Tao Fei’s team, the system was scored across five dimensions and 19 evaluation factors. After testing the system, we found that the combination of digital twin technology and automation is feasible and achieves the expected results.
(This article belongs to the Section Intelligent Sensors)

16 pages, 467 KiB  
Article
A Socially Assistive Robot as Orchestrator of an AAL Environment for Seniors
by Carlos E. Sanchez-Torres, Ernesto A. Lozano, Irvin H. López-Nava, J. Antonio Garcia-Macias and Jesus Favela
Technologies 2025, 13(6), 260; https://doi.org/10.3390/technologies13060260 - 19 Jun 2025
Viewed by 289
Abstract
Social robots in Ambient Assisted Living (AAL) environments offer a promising alternative for enhancing senior care by providing companionship and functional support. These robots can serve as intuitive interfaces to complex smart home systems, allowing seniors and caregivers to easily control their environment and access various assistance services through natural interactions. By combining the emotional engagement capabilities of social robots with the comprehensive monitoring and support features of AAL, this integrated approach can potentially improve the quality of life and independence of elderly individuals while alleviating the burden on human caregivers. This paper explores the integration of social robotics with AAL technologies to enhance elderly care. We propose a novel framework where a social robot is the central orchestrator of an AAL environment, coordinating various smart devices and systems to provide comprehensive support for seniors. Our approach leverages the social robot’s ability to engage in natural interactions while managing the complex network of environmental and wearable sensors and actuators. In this paper, we focus on the technical aspects of our framework. A computational P2P notebook is used to customize the environment and run reactive services. Machine learning models can be included for real-time recognition of gestures, poses, and moods to support non-verbal communication. We describe scenarios to illustrate the utility and functionality of the framework and how the robot is used to orchestrate the AAL environment to contribute to the well-being and independence of elderly individuals. We also address the technical challenges and future directions for this integrated approach to elderly care.

16 pages, 23492 KiB  
Article
CAGNet: A Network Combining Multiscale Feature Aggregation and Attention Mechanisms for Intelligent Facial Expression Recognition in Human-Robot Interaction
by Dengpan Zhang, Wenwen Ma, Zhihao Shen and Qingping Ma
Sensors 2025, 25(12), 3653; https://doi.org/10.3390/s25123653 - 11 Jun 2025
Viewed by 455
Abstract
The development of Facial Expression Recognition (FER) technology has significantly enhanced the naturalness and intuitiveness of human-robot interaction. In the field of service robots, particularly in applications such as production assistance, caregiving, and daily service communication, efficient FER capabilities are crucial. However, existing Convolutional Neural Network (CNN) models still have limitations in terms of feature representation and recognition accuracy for facial expressions. To address these challenges, we propose CAGNet, a novel network that combines multiscale feature aggregation and attention mechanisms. CAGNet employs a deep learning-based hierarchical convolutional architecture, enhancing the extraction of features at multiple scales through stacked convolutional layers. The network integrates the Convolutional Block Attention Module (CBAM) and Global Average Pooling (GAP) modules to optimize the capture of both local and global features. Additionally, Batch Normalization (BN) layers and Dropout techniques are incorporated to improve model stability and generalization. CAGNet was evaluated on two standard datasets, FER2013 and CK+, and the experimental results demonstrate that the network achieves accuracies of 71.52% and 97.97%, respectively, in FER. These results not only validate the effectiveness and superiority of our approach but also provide a new technical solution for FER. Furthermore, CAGNet offers robust support for the intelligent upgrade of service robots.
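The channel-attention idea behind CBAM and GAP that the abstract mentions can be sketched in a few lines of NumPy: squeeze each channel with global average pooling, pass the result through a small bottleneck, and use a sigmoid gate to reweight channels. This is a toy illustration; the shapes, random weights, and reduction ratio are assumptions, not CAGNet's actual configuration:

```python
import numpy as np

def channel_attention(feature_map, reduction=2, rng=None):
    """Toy channel attention: global-average-pool each channel (the
    "squeeze"), run a two-layer bottleneck with illustrative random
    weights, and reweight channels with a sigmoid gate.
    feature_map: array of shape (channels, height, width)."""
    if rng is None:
        rng = np.random.default_rng(0)
    c = feature_map.shape[0]
    squeezed = feature_map.mean(axis=(1, 2))             # GAP -> (c,)
    w1 = rng.standard_normal((c // reduction, c)) * 0.1  # bottleneck weights
    w2 = rng.standard_normal((c, c // reduction)) * 0.1
    hidden = np.maximum(0.0, w1 @ squeezed)              # ReLU
    scale = 1.0 / (1.0 + np.exp(-(w2 @ hidden)))         # per-channel gate in (0, 1)
    return feature_map * scale[:, None, None]            # broadcast reweighting

x = np.ones((8, 4, 4))
out = channel_attention(x)  # same shape, channels scaled by their gates
```

In a trained network the bottleneck weights are learned, and CBAM additionally applies a spatial attention stage after the channel stage.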

18 pages, 4185 KiB  
Article
An Empirical Study on Pointing Gestures Used in Communication in Household Settings
by Tymon Kukier, Alicja Wróbel, Barbara Sienkiewicz, Julia Klimecka, Antonio Galiza Cerdeira Gonzalez, Paweł Gajewski and Bipin Indurkhya
Electronics 2025, 14(12), 2346; https://doi.org/10.3390/electronics14122346 - 8 Jun 2025
Viewed by 404
Abstract
Gestures play an integral role in human communication. Our research aims to develop a gesture understanding system that allows for better interpretation of human instructions in household robotics settings. We conducted an experiment with 34 participants who used pointing gestures to teach concepts to an assistant. Gesture data were analyzed using manual annotations (MAXQDA) and the computational methods of pose estimation and k-means clustering. The study revealed that participants tend to maintain consistent pointing styles, with one-handed pointing and index finger gestures being the most common. Gaze and pointing often co-occur, as do leaning forward and pointing. Using our gesture categorization algorithm, we analyzed gesture information values. As the experiment progressed, the information value of gestures remained stable, although the trends varied between participants and were associated with factors such as age and gender. These findings underscore the need for gesture recognition systems to balance generalization with personalization for more effective human–robot interaction.
(This article belongs to the Special Issue Applications of Computer Vision, 3rd Edition)

24 pages, 3481 KiB  
Article
Exploring the Potential of Wi-Fi in Industrial Environments: A Comparative Performance Analysis of IEEE 802.11 Standards
by Luis M. Bartolín-Arnau, Federico Orozco-Santos, Víctor Sempere-Payá, Javier Silvestre-Blanes, Teresa Albero-Albero and David Llacer-Garcia
Telecom 2025, 6(2), 40; https://doi.org/10.3390/telecom6020040 - 5 Jun 2025
Viewed by 578
Abstract
The advent of Industry 4.0 brought about digitalisation and the integration of advanced technologies into industrial processes, with wireless networks emerging as a key enabler in the interconnection of smart devices, cyber–physical systems, and data analytics platforms. With the development of Industry 5.0 and its emphasis on human–machine collaboration, Wi-Fi has positioned itself as a viable alternative for industrial wireless connectivity, supporting seamless communication between robots, automation systems, and human operators. However, its adoption in critical applications remains limited due to persistent concerns over latency, reliability, and interference in shared-spectrum environments. This study evaluates the practical performance of Wi-Fi standards from 802.11n (Wi-Fi 4) to 802.11be (Wi-Fi 7) across three representative environments: residential, laboratory, and industrial. Six configurations were tested under consistent conditions, covering various frequency bands, channel widths, and traffic types. Results prove that Wi-Fi 6/6E delivers the best overall performance, particularly in low-interference 6 GHz scenarios. Wi-Fi 5 performs well in medium-range settings but is more sensitive to congestion, while Wi-Fi 4 consistently underperforms. Early Wi-Fi 7 hardware does not yet surpass Wi-Fi 6/6E consistently, reflecting its ongoing development. Despite these variations, the progression observed across generations clearly demonstrates incremental gains in throughput stability and latency control. While these improvements already provide tangible benefits for many industrial communication scenarios, the most significant leap in industrial applicability is expected to come from the effective implementation of high-efficiency mechanisms. These include OFDMA, TWT, scheduled uplink access, and enhanced QoS features. These capabilities, already embedded in the Wi-Fi 6 and 7 standards, represent the necessary foundation to move beyond conventional best-effort connectivity and toward supporting critical, latency-sensitive industrial applications.

43 pages, 128295 KiB  
Article
A Knowledge-Driven Framework for AI-Augmented Business Process Management Systems: Bridging Explainability and Agile Knowledge Sharing
by Danilo Martino, Cosimo Perlangeli, Barbara Grottoli, Luisa La Rosa and Massimo Pacella
AI 2025, 6(6), 110; https://doi.org/10.3390/ai6060110 - 28 May 2025
Viewed by 1402
Abstract
Background: The integration of Artificial Intelligence (AI) into Business Process Management Systems (BPMSs) has led to the emergence of AI-Augmented Business Process Management Systems (ABPMSs). These systems offer dynamic adaptation, real-time process optimization, and enhanced knowledge management capabilities. However, key challenges remain, particularly regarding explainability, user engagement, and behavioral integration. Methods: This study presents a novel framework that synergistically integrates the Socialization, Externalization, Combination, and Internalization knowledge model (SECI), Agile methods (specifically Scrum), and cutting-edge AI technologies, including explainable AI (XAI), process mining, and Robotic Process Automation (RPA). The framework enables the formalization, verification, and sharing of knowledge via a well-organized, user-friendly software platform and collaborative practices, especially Communities of Practice (CoPs). Results: The framework emphasizes situation-aware explainability, modular adoption, and continuous improvement to ensure effective human–AI collaboration. It provides theoretical and practical mechanisms for aligning AI capabilities with organizational knowledge management. Conclusions: The proposed framework facilitates the transition from traditional BPMSs to more sophisticated ABPMSs by leveraging structured methodologies and technologies. The approach enhances knowledge exchange and process evolution, supported by detailed modeling using BPMN 2.0.

17 pages, 18945 KiB  
Article
Collaborative Robot Control Based on Human Gaze Tracking
by Francesco Di Stefano, Alice Giambertone, Laura Salamina, Matteo Melchiorre and Stefano Mauro
Sensors 2025, 25(10), 3103; https://doi.org/10.3390/s25103103 - 14 May 2025
Viewed by 505
Abstract
Gaze tracking is gaining relevance in collaborative robotics as a means to enhance human–machine interaction by enabling intuitive and non-verbal communication. This study explores the integration of human gaze into collaborative robotics by demonstrating the possibility of controlling a robotic manipulator with a practical and non-intrusive setup made up of a vision system and gaze-tracking software. After presenting a comparison between the major available systems on the market, OpenFace 2.0 was selected as the primary gaze-tracking software and integrated with a UR5 collaborative robot through a MATLAB-based control framework. Validation was conducted through real-world experiments, analyzing the effects of raw and filtered gaze data on system accuracy and responsiveness. The results indicate that gaze tracking can effectively guide robot motion, though signal processing significantly impacts responsiveness and control precision. This work establishes a foundation for future research on gaze-assisted robotic control, highlighting its potential benefits and challenges in enhancing human–robot collaboration.
(This article belongs to the Special Issue Advanced Robotic Manipulators and Control Applications)

20 pages, 5632 KiB  
Article
Filtering Unintentional Hand Gestures to Enhance the Understanding of Multimodal Navigational Commands in an Intelligent Wheelchair
by Kodikarage Sahan Priyanayana, A. G. Buddhika P. Jayasekara and R. A. R. C. Gopura
Electronics 2025, 14(10), 1909; https://doi.org/10.3390/electronics14101909 - 8 May 2025
Viewed by 392
Abstract
Natural human–human communication consists of multiple modalities interacting together. When an intelligent robot or wheelchair is being developed, it is important to consider this aspect. One of the most common modality pairs in multimodal human–human communication is speech–hand gesture interaction. However, not all the hand gestures that can be identified in this type of interaction are useful. Some hand movements can be misinterpreted as useful or intentional hand gestures. Failing to filter out these unintentional gestures could lead to serious misidentifications of important hand gestures. When speech–hand gesture multimodal systems are designed for disabled or elderly users, this issue could have grave safety consequences. Gesture identification systems developed for speech–hand gesture systems commonly use hand features and other gesture parameters; hence, similar gesture features could result in the misidentification of an unintentional gesture as a known gesture. Therefore, in this paper, we propose an intelligent system to filter out unintentional gestures before the gesture identification process in multimodal navigational commands. Timeline parameters such as time lag, gesture range, and gesture speed are used in this filtering system; they are calculated by comparing the vocal command timeline and the gesture timeline. For the filtering algorithm, a combination of the Locally Weighted Naive Bayes (LWNB) and K-Nearest Neighbor Distance Weighting (KNNDW) classifiers is proposed. The filtering system performed with an overall accuracy of 94%, sensitivity of 97%, and specificity of 90%, and it had a Cohen’s Kappa value of 88%.
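One half of the proposed classifier pair, distance-weighted k-NN, is generic enough to sketch; here each neighbour votes with weight inversely proportional to its distance, so closer examples count more. The feature values and labels below are invented for illustration and are not the paper's data:

```python
import math
from collections import defaultdict

def knn_distance_weighted(train, query, k=3):
    """Distance-weighted k-NN: the k nearest neighbours vote for their
    labels with weight 1/(distance + eps), so nearer examples dominate.
    train: list of (feature_vector, label); query: feature_vector."""
    eps = 1e-9
    nearest = sorted((math.dist(x, query), label) for x, label in train)[:k]
    votes = defaultdict(float)
    for d, label in nearest:
        votes[label] += 1.0 / (d + eps)
    return max(votes, key=votes.get)

# Hypothetical timeline features: (time lag in s, normalized gesture speed)
train = [((0.1, 0.9), "intentional"), ((0.2, 0.8), "intentional"),
         ((1.5, 0.1), "unintentional"), ((1.4, 0.2), "unintentional")]
label = knn_distance_weighted(train, (0.15, 0.85), k=3)
```

With `k=3` the query above picks up two close "intentional" neighbours and one distant "unintentional" one, so the distance weighting settles the vote decisively.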

27 pages, 5560 KiB  
Article
A Stackelberg Trust-Based Human–Robot Collaboration Framework for Warehouse Picking
by Yang Liu, Fuqiang Guo and Yan Ma
Systems 2025, 13(5), 348; https://doi.org/10.3390/systems13050348 - 3 May 2025
Viewed by 507
Abstract
The warehouse picking process is one of the most critical components of logistics operations. Human–robot collaboration (HRC) is seen as an important trend in warehouse picking, as it combines the strengths of both humans and robots in the picking process. However, in current human–robot collaboration frameworks, there is a lack of effective communication between humans and robots, which results in inefficient task execution during the picking process. To address this, this paper considers trust as a communication bridge between humans and robots and proposes the Stackelberg trust-based human–robot collaboration framework for warehouse picking, aiming to achieve efficient and effective human–robot collaborative picking. In this framework, HRC with trust for warehouse picking is defined as the Partially Observable Stochastic Game (POSG) model. We model human fatigue with the logistic function and incorporate its impact on the efficiency reward function of the POSG. Based on the POSG model, belief space is used to assess human trust, and human strategies are formed. An iterative Stackelberg trust strategy generation (ISTSG) algorithm is designed to achieve the optimal long-term collaboration benefits between humans and robots, which is solved by the Bellman equation. The generated human–robot decision profile is formalized as a Partially Observable Markov Decision Process (POMDP), and the properties of human–robot collaboration are specified as PCTL (probabilistic computation tree logic) with rewards, such as efficiency, accuracy, trust, and human fatigue. The probabilistic model checker PRISM is exploited to verify and analyze the corresponding properties of the POMDP. We take the popular human–robot collaboration robot TORU as a case study. The experimental results show that our framework improves the efficiency of human–robot collaboration for warehouse picking and reduces worker fatigue while ensuring the required accuracy of human–robot collaboration.
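The logistic fatigue model mentioned in the abstract has a simple closed form; a minimal sketch follows, with the steepness, midpoint, and reward-coupling parameters chosen purely for illustration (the paper's actual parameterization is not given here):

```python
import math

def logistic_fatigue(t, rate=0.5, midpoint=10.0):
    """Logistic fatigue curve: fatigue rises from near 0 toward 1 as working
    time t grows. `rate` controls steepness; `midpoint` is the time of
    half-maximal fatigue. Both parameters are illustrative assumptions."""
    return 1.0 / (1.0 + math.exp(-rate * (t - midpoint)))

def efficiency_reward(base_reward, t):
    """Example coupling: the efficiency reward is discounted by the
    worker's current fatigue level."""
    return base_reward * (1.0 - logistic_fatigue(t))

early = efficiency_reward(10.0, t=0.0)   # near-full reward at low fatigue
late = efficiency_reward(10.0, t=20.0)   # heavily discounted when fatigued
```

In the paper this fatigue term feeds the efficiency reward of the POSG, so the solver trades off picking speed against accumulated worker fatigue.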

24 pages, 1798 KiB  
Article
HEalthcare Robotics’ ONtology (HERON): An Upper Ontology for Communication, Collaboration and Safety in Healthcare Robotics
by Penelope Ioannidou, Ioannis Vezakis, Maria Haritou, Rania Petropoulou, Stavros T. Miloulis, Ioannis Kouris, Konstantinos Bromis, George K. Matsopoulos and Dimitrios D. Koutsouris
Healthcare 2025, 13(9), 1031; https://doi.org/10.3390/healthcare13091031 - 30 Apr 2025
Viewed by 418
Abstract
Background: Healthcare robotics needs context-aware policy-compliant reasoning to achieve safe human–agent collaboration. The current ontologies fail to provide healthcare-relevant information and flexible semantic enforcement systems. Methods: HERON represents a modular upper ontology which enables healthcare robotic systems to communicate and collaborate while ensuring [...] Read more.
Background: Healthcare robotics needs context-aware, policy-compliant reasoning to achieve safe human–agent collaboration. Current ontologies fail to provide healthcare-relevant information and flexible semantic enforcement mechanisms. Methods: HERON is a modular upper ontology that enables healthcare robotic systems to communicate and collaborate while ensuring safety during operations. It supports domain-specific instantiations through SPARQL queries and SHACL-based constraint validation for context-driven logic. Robotic task interactions are modeled through simulated eldercare, diagnostic, and surgical support scenarios that follow ethical and regulatory standards. Results: Validation tests demonstrated HERON's capacity to enable safe and explainable autonomous operation in changing environments. The enforced semantic constraints ensured proper role eligibility, privacy conditions, and policy override functionality during agent task execution. HERON proved compatible with healthcare IT systems and adaptable to the GDPR and other policy frameworks. Conclusions: HERON's semantically rich framework establishes an interoperable foundation for healthcare robotics. Its open architecture enables HL7/FHIR standard integration and robotic middleware compatibility. In an evaluation against SUMO, HL7, and MIMO, HERON demonstrated superior healthcare-specific capabilities. Future research will focus on optimizing HERON for low-resource clinical environments and extending its applications to remote care, emergency triage, and adaptive human–robot collaboration.
(This article belongs to the Section TeleHealth and Digital Healthcare)
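The kind of constraint HERON enforces (role eligibility plus a privacy condition) can be illustrated with a minimal sketch. The `Agent` and `Task` classes and the `eligible` check below are hypothetical simplifications of what the ontology would express as OWL classes validated by SHACL shapes; none of these names come from the paper.

```python
from dataclasses import dataclass

# Hypothetical, flattened stand-ins for HERON's agent/task concepts.
@dataclass
class Agent:
    role: str            # e.g. "nurse_robot", "surgical_assistant"
    certified: bool      # holds the required certification for its role

@dataclass
class Task:
    required_role: str
    handles_patient_data: bool

def eligible(agent, task, consent_given):
    """Context-driven constraint check: the role must match, the agent
    must be certified, and tasks touching patient data additionally
    require a GDPR-style consent condition to hold."""
    if agent.role != task.required_role or not agent.certified:
        return False
    if task.handles_patient_data and not consent_given:
        return False
    return True
```

In the ontology itself, such a rule would be a declarative SHACL shape checked over the knowledge graph rather than imperative code; the sketch only shows the logic being enforced.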
26 pages, 5529 KiB  
Article
Statistically Informed Multimodal (Domain Adaptation by Transfer) Learning Framework: A Domain Adaptation Use-Case for Industrial Human–Robot Communication
by Debasmita Mukherjee and Homayoun Najjaran
Electronics 2025, 14(7), 1419; https://doi.org/10.3390/electronics14071419 - 31 Mar 2025
Viewed by 413
Abstract
Cohesive human–robot collaboration can be achieved through seamless communication between human and robot partners. We posit that the design of human–robot communication (HRCom) can take inspiration from human communication to create more intuitive systems. A key component of HRCom systems is perception models developed using machine learning. Being data-driven, these models suffer from the dearth of comprehensive, labelled datasets, while models trained on standard, publicly available datasets do not generalize well to application-specific scenarios. Complex interactions and real-world variability lead to shifts in data that require domain adaptation by the models. Existing domain adaptation techniques do not account for incommensurable modes of communication between humans and robot perception systems. Taking these challenges into account, a novel framework is presented that leverages existing off-the-shelf domain adaptation techniques and uses statistical measures to start and stop the training of models when they encounter domain-shifted data. Statistically informed multimodal (domain adaptation by transfer) learning (SIMLea) takes inspiration from human communication, using human feedback to auto-label data for iterative domain adaptation. The framework can handle incommensurable multimodal inputs, is mode and model agnostic, and allows statistically informed extension of datasets, leading to more intuitive and naturalistic HRCom systems.
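The statistically informed start/stop logic can be sketched as a simple gate over incoming batches. The z-score test on batch means and the `adaptation_gate` helper below are illustrative assumptions, since the abstract does not commit to a specific statistical measure; any drift statistic could play the same role.

```python
import statistics

def shift_detected(reference, batch, z_threshold=3.0):
    """Flag a domain shift when the new batch's mean feature value lies
    more than z_threshold reference standard deviations away from the
    reference (in-domain) mean. A deliberately simple drift statistic."""
    mu = statistics.fmean(reference)
    sigma = statistics.stdev(reference)
    z = abs(statistics.fmean(batch) - mu) / sigma
    return z > z_threshold

def adaptation_gate(reference, batches):
    """Start adaptation only on batches that look domain-shifted;
    skip (i.e. stop training) on batches that match the reference."""
    return ["adapt" if shift_detected(reference, b) else "skip"
            for b in batches]
```

For example, with a reference distribution centred near 2, a batch of values around 11 triggers `"adapt"` while an in-domain batch yields `"skip"`, so model updates run only when the data has actually drifted.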