Article

A Novel Training and Collaboration Integrated Framework for Human–Agent Teleoperation

1 Department of Bioengineering, Imperial College London, London SW7 2BX, UK
2 Department of Computing, Imperial College London, London SW7 2BX, UK
3 School of Education, Communication & Society, King's College London, London SE5 9RJ, UK
4 Department of Electrical and Electronic Engineering, Imperial College London, London SW7 2BX, UK
* Author to whom correspondence should be addressed.
These authors contributed equally to this work.
Academic Editors: John Oyekan, Christopher Turner, Yuchun Xu and Ming Zhang
Sensors 2021, 21(24), 8341; https://doi.org/10.3390/s21248341
Received: 11 November 2021 / Revised: 3 December 2021 / Accepted: 11 December 2021 / Published: 14 December 2021
Human operators tend to experience increased physical and mental workloads when performing teleoperation tasks in uncertain and dynamic environments. In addition, their performance is influenced by subjective factors, potentially leading to operational errors or task failure. Although agent-based methods offer a promising solution to these problems, human experience and intelligence remain necessary in teleoperation scenarios. In this paper, an integrated framework based on truncated quantile critics reinforcement learning is proposed for human–agent teleoperation, encompassing training, assessment and agent-based arbitration. The proposed framework allows an expert to train the agent through a bilateral training and cooperation process, realizing co-optimization of agent and human, and provides efficient, quantifiable training feedback. Experiments were conducted to train subjects with the developed algorithm, and the human–human and human–agent cooperation modes were compared. The results show that, with the assistance of an agent, subjects completed reaching and pick-and-place tasks in a shorter operational time, with a higher success rate and a lower workload than with human–human cooperation.
Keywords: human–agent interaction; teleoperation; reinforcement learning
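The abstract builds on truncated quantile critics (TQC), a distributional reinforcement-learning method whose key idea is to pool the quantile estimates of several critic networks, sort them, and drop the largest atoms before forming the Bellman target, which curbs the overestimation bias of standard Q-learning. The sketch below is not the authors' implementation; it is a minimal NumPy illustration of that target-truncation step, with the function name, shapes, and defaults chosen for illustration only.

```python
import numpy as np

def tqc_target(z_quantiles, reward, gamma=0.99, drop_per_net=2):
    """Sketch of the truncated-quantile-critics target computation.

    z_quantiles : array of shape (n_nets, n_quantiles), each critic's
        quantile estimates of the return at the next state-action pair.
    Pools all atoms, sorts them, discards the top ``drop_per_net`` atoms
    per critic (the most optimistic estimates), and applies the Bellman
    backup to the remaining atoms.
    """
    n_nets, n_quantiles = z_quantiles.shape
    pooled = np.sort(z_quantiles.reshape(-1))             # pool and sort all atoms
    keep = n_nets * n_quantiles - drop_per_net * n_nets   # atoms retained
    truncated = pooled[:keep]                             # drop the largest atoms
    return reward + gamma * truncated                     # per-atom Bellman target
```

For example, with two critics each outputting three quantiles and one atom dropped per critic, six pooled atoms are sorted and the two largest are discarded, leaving four target atoms for the quantile-regression loss.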
MDPI and ACS Style

Huang, Z.; Wang, Z.; Bai, W.; Huang, Y.; Sun, L.; Xiao, B.; Yeatman, E.M. A Novel Training and Collaboration Integrated Framework for Human–Agent Teleoperation. Sensors 2021, 21, 8341. https://doi.org/10.3390/s21248341

AMA Style

Huang Z, Wang Z, Bai W, Huang Y, Sun L, Xiao B, Yeatman EM. A Novel Training and Collaboration Integrated Framework for Human–Agent Teleoperation. Sensors. 2021; 21(24):8341. https://doi.org/10.3390/s21248341

Chicago/Turabian Style

Huang, Zebin, Ziwei Wang, Weibang Bai, Yanpei Huang, Lichao Sun, Bo Xiao, and Eric M. Yeatman. 2021. "A Novel Training and Collaboration Integrated Framework for Human–Agent Teleoperation" Sensors 21, no. 24: 8341. https://doi.org/10.3390/s21248341
