Action Recognition for Human–Robot Teaming: Exploring Mutual Performance Monitoring Possibilities
Abstract
1. Introduction
1.1. Human–Robot Teaming: A System Modeling Perspective
1.2. Trust in Human–Robot Teaming
1.3. Translating Human–Human Teaming Characteristics into a Human–Robot Teaming Setting
1.4. The Need for Mutual Performance Monitoring in Human–Robot Teaming
- RQ1: Can action recognition be used in human–robot teaming to pave the way for an important element of human teamwork, i.e., MPM?
- RQ2: What types of sensor data and ML algorithms can be deployed for its practical implementation?
2. Related Works
Research Gap
3. Materials and Methods
3.1. Conceptual Foundation
3.2. Dataset Description and Experimental Design
3.3. Action Recognition Model Configurations Based on the InHARD Dataset
4. Results and Discussion
Findings and Limitations
5. Conclusions and Future Directions
- Proposal 1: Advanced ML techniques for developing visual recognition-based MPM—The integration of advanced ML models into MPM presents several unexplored areas. Specifically, we suggest exploring deep learning architectures, representation learning [53], evolutionary computation techniques for adapting to environmental cues [54], and fusion techniques to overcome obstacles in MPM action recognition such as limited context information, model performance, and data-related issues. Deep learning architectures, adept at processing complex data, can accurately interpret context, enhancing MPM’s effectiveness.
- Proposal 2: Empirical validation—While this study has laid the groundwork for introducing the HhT element into HrT, validation and evaluation studies remain essential: they are instrumental in generating empirical evidence on the effectiveness and implications of incorporating HhT components within real-world HrT settings.
- Proposal 3: Task-oriented action recognition for improving security in collaborative applications—Action recognition can be regarded as a practical method to improve security measures in HRC. It can function as a useful tool for detecting anomalies in behavior or performance that might signify potential security problems. For example, if a robot or human team member indicates activities or performance patterns that differ from established norms, this could be an early sign of a security breach. Repetition of such anomalies can be used as an indicator of a system being potentially compromised. Such deviations, once detected by the action recognition system, would prompt further investigation and appropriate response steps. This proactive strategy for security risks within HRC can manage immediate risks and contribute to the development of more resilient and secure collaborative systems.
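To make the anomaly-detection idea in Proposal 3 concrete, the sketch below scores how far the action-label frequencies in an observed window deviate from an established baseline profile. This is a minimal illustration, not the paper's method: the baseline distribution, window contents, and scoring rule are all assumptions.

```python
from collections import Counter

def action_anomaly_score(observed, baseline_freq):
    """Total absolute deviation between the action-label frequencies in an
    observed window and a baseline profile; higher score = larger deviation."""
    counts = Counter(observed)
    total = len(observed)
    score = 0.0
    for action, base_p in baseline_freq.items():
        obs_p = counts.get(action, 0) / total
        score += abs(obs_p - base_p)
    # Labels never seen in the baseline count fully as deviation.
    for action in counts:
        if action not in baseline_freq:
            score += counts[action] / total
    return score

# Illustrative baseline: the operator normally alternates two actions.
baseline = {"Take Component": 0.5, "Assemble System": 0.5}
normal = ["Take Component", "Assemble System"] * 5
odd = ["Consult Sheets"] * 10
assert action_anomaly_score(normal, baseline) < action_anomaly_score(odd, baseline)
```

In practice a window whose score repeatedly exceeds a calibrated threshold would trigger the further investigation described above, rather than an automatic response.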
- Proposal 1: Further exploring big-five teaming elements in HrT—The intersection of established big-five HhT characteristics with the evolving landscape of HrT invites further empirical analysis. This study suggests exploring the potential alignment between the key characteristics of HhT, such as mutual performance monitoring, team orientation, backup behaviors, team leadership, and adaptability, and the distinctive skills exhibited by robots. This exploration has the potential to generate novel approaches for enhancing team performance.
- Proposal 2: Improving levels of safety and security in HrT—Considering the complexity of HrT, especially with ML for performance monitoring, future research should focus on enhancing safety and security levels. The action recognition example explored in this paper shows the potential to redefine safety proximity criteria while also providing an additional tool for identifying possible security breaches, pointing toward a more context-aware robotic intelligent system. As stated by Schaefer et al. [55], incorporating context-driven AI is important for advancing future robotic capabilities, thereby promoting situational awareness, calibrating trust, and enhancing team performance in collaborative HrT. Furthermore, MPM based on human action recognition is likely to be highly advantageous in safety-critical settings such as shared manufacturing cells. By using human data, the system can foresee possible threats and take proactive steps to prevent hazardous situations; by accurately predicting human behaviors, it can improve the performance, safety, and efficiency of the production cell, make more informed decisions about its actions, and better understand the context of the manufacturing process. However, this requires improving the reliability and robustness of the underlying AI algorithms beyond the values achieved by the confusion matrix discussed in this paper.
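As an illustration of the context-aware safety adaptation discussed above, the sketch below caps robot speed based on the recognized human action and the human–robot separation distance. The action labels come from the InHARD classes used in this paper, but the zone thresholds and speed values are purely hypothetical and are not drawn from any safety standard.

```python
def speed_limit(action, distance_m):
    """Return an illustrative robot speed cap (m/s) given the recognized
    human action and the measured human-robot separation distance."""
    # Actions where the human reaches into the shared workspace.
    reach_actions = {"Picking in Front", "Take Component", "Take Subsystem"}
    if distance_m < 0.5:
        return 0.0    # stop inside the close zone, regardless of action
    if action in reach_actions and distance_m < 1.5:
        return 0.25   # slow down while the human reaches toward the cell
    return 1.0        # nominal collaborative speed

assert speed_limit("Picking in Front", 1.0) == 0.25
assert speed_limit("No Action", 1.0) == 1.0
assert speed_limit("Assemble System", 0.3) == 0.0
```

The point of the sketch is that the same separation distance yields different speed caps depending on the recognized action, which is what distinguishes action-aware monitoring from purely proximity-based safety rules.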
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
References
- Devlin, S.P.; Flynn, J.R.; Riggs, S.L. Connecting the big five taxonomies: Understanding how individual traits contribute to team adaptability under workload transitions. In Proceedings of the Human Factors and Ergonomics Society Annual Meeting, Baltimore, MD, USA, 3–8 October 2021; SAGE Publications: Los Angeles, CA, USA, 2018; Volume 62, pp. 119–123. [Google Scholar]
- Wolf, F.D.; Stock-Homburg, R. Human-robot teams: A review. In Proceedings of the International Conference on Social Robotics, Golden, CO, USA, 14–18 November 2020; pp. 246–258. [Google Scholar]
- Martinetti, A.; Chemweno, P.K.; Nizamis, K.; Fosch-Villaronga, E. Redefining safety in light of human-robot interaction: A critical review of current standards and regulations. Front. Chem. Eng. 2021, 3, 32. [Google Scholar] [CrossRef]
- Tuncer, S.; Licoppe, C.; Luff, P.; Heath, C. Recipient design in human–robot interaction: The emergent assessment of a robot’s competence. AI Soc. 2023, 1–16. [Google Scholar] [CrossRef]
- Mutlu, B.; Forlizzi, J. Robots in organizations: The role of workflow, social, and environmental factors in human-robot interaction. In Proceedings of the 3rd ACM/IEEE International Conference on Human Robot Interaction, Amsterdam, The Netherlands, 12–15 March 2008; pp. 287–294. [Google Scholar]
- Harper, C.; Virk, G. Towards the development of international safety standards for human robot interaction. Int. J. Soc. Robot. 2010, 2, 229–234. [Google Scholar] [CrossRef]
- Hoffman, G.; Breazeal, C. Collaboration in human-robot teams. In Proceedings of the AIAA 1st Intelligent Systems Technical Conference, Chicago, IL, USA, 20–22 September 2004; p. 6434. [Google Scholar]
- De Visser, E.; Parasuraman, R. Adaptive aiding of human-robot teaming: Effects of imperfect automation on performance, trust, and workload. J. Cogn. Eng. Decis. Mak. 2011, 5, 209–231. [Google Scholar] [CrossRef]
- Gombolay, M.C.; Huang, C.; Shah, J. Coordination of human-robot teaming with human task preferences. In Proceedings of the 2015 AAAI Fall Symposium Series, Arlington, VA, USA, 12–14 November 2015. [Google Scholar]
- Tabrez, A.; Luebbers, M.B.; Hayes, B. A survey of mental modeling techniques in human–robot teaming. Curr. Robot. Rep. 2020, 1, 259–267. [Google Scholar] [CrossRef]
- Zhang, Q.; Lee, M.L.; Carter, S. You complete me: Human-ai teams and complementary expertise. In Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, New Orleans, LA, USA, 30 April–5 May 2022; pp. 1–28. [Google Scholar]
- Webber, S.S.; Detjen, J.; MacLean, T.L.; Thomas, D. Team challenges: Is artificial intelligence the solution? Bus. Horizons 2019, 62, 741–750. [Google Scholar] [CrossRef]
- Lewis, M.; Sycara, K.; Walker, P. The role of trust in human-robot interaction. Found. Trust. Auton. 2018, 117, 135–159. [Google Scholar]
- Guo, Y.; Yang, X.J. Modeling and predicting trust dynamics in human–robot teaming: A Bayesian inference approach. Int. J. Soc. Robot. 2021, 13, 1899–1909. [Google Scholar] [CrossRef]
- Hancock, P.A.; Billings, D.R.; Schaefer, K.E.; Chen, J.Y.; De Visser, E.J.; Parasuraman, R. A meta-analysis of factors affecting trust in human-robot interaction. Hum. Factors 2011, 53, 517–527. [Google Scholar] [CrossRef]
- Onnasch, L.; Roesler, E. A taxonomy to structure and analyze human–robot interaction. Int. J. Soc. Robot. 2021, 13, 833–849. [Google Scholar] [CrossRef]
- Albon, R.; Jewels, T. Mutual performance monitoring: Elaborating the development of a team learning theory. Group Decis. Negot. 2014, 23, 149–164. [Google Scholar] [CrossRef]
- Salas, E.; Sims, D.E.; Burke, C.S. Is there a “big five” in teamwork? Small Group Res. 2005, 36, 555–599. [Google Scholar] [CrossRef]
- Ma, L.M.; IJtsma, M.; Feigh, K.M.; Pritchett, A.R. Metrics for human-robot team design: A teamwork perspective on evaluation of human-robot teams. ACM Trans. Hum.-Robot Interact. (THRI) 2022, 11, 1–36. [Google Scholar] [CrossRef]
- You, S.; Robert, L. Teaming up with robots: An IMOI (inputs-mediators-outputs-inputs) framework of human-robot teamwork. Int. J. Robot. Eng. 2018, 2. [Google Scholar] [CrossRef]
- Guznov, S.; Lyons, J.; Pfahler, M.; Heironimus, A.; Woolley, M.; Friedman, J.; Neimeier, A. Robot transparency and team orientation effects on human–robot teaming. Int. J. Hum.-Comput. Interact. 2020, 36, 650–660. [Google Scholar] [CrossRef]
- Yasar, M.S.; Iqbal, T. Robots That Can Anticipate and Learn in Human-Robot Teams. In Proceedings of the 2022 17th ACM/IEEE International Conference on Human-Robot Interaction (HRI), Sapporo, Hokkaido, Japan, 7–10 March 2022; pp. 1185–1187. [Google Scholar]
- De Visser, E.J.; Peeters, M.M.; Jung, M.F.; Kohn, S.; Shaw, T.H.; Pak, R.; Neerincx, M.A. Towards a theory of longitudinal trust calibration in human–robot teams. Int. J. Soc. Robot. 2020, 12, 459–478. [Google Scholar] [CrossRef]
- Shah, J.; Breazeal, C. An empirical analysis of team coordination behaviors and action planning with application to human–robot teaming. Hum. Factors 2010, 52, 234–245. [Google Scholar] [CrossRef] [PubMed]
- Gervasi, R.; Mastrogiacomo, L.; Maisano, D.A.; Antonelli, D.; Franceschini, F. A structured methodology to support human–robot collaboration configuration choice. Prod. Eng. 2022, 2022, 1–17. [Google Scholar] [CrossRef]
- Dahiya, A.; Aroyo, A.M.; Dautenhahn, K.; Smith, S.L. A survey of multi-agent Human–Robot Interaction systems. Robot. Auton. Syst. 2023, 161, 104335. [Google Scholar] [CrossRef]
- Lemaignan, S.; Cooper, S.; Ros, R.; Ferrini, L.; Andriella, A.; Irisarri, A. Open-source Natural Language Processing on the PAL Robotics ARI Social Robot. In Proceedings of the Companion of the 2023 ACM/IEEE International Conference on Human-Robot Interaction, Stockholm, Sweden, 13–16 March 2023; pp. 907–908. [Google Scholar]
- Hari, S.K.K.; Nayak, A.; Rathinam, S. An approximation algorithm for a task allocation, sequencing and scheduling problem involving a human-robot team. IEEE Robot. Autom. Lett. 2020, 5, 2146–2153. [Google Scholar] [CrossRef]
- Singh, S.; Heard, J. Human-aware reinforcement learning for adaptive human robot teaming. In Proceedings of the 2022 17th ACM/IEEE International Conference on Human-Robot Interaction (HRI), Sapporo, Hokkaido, Japan, 7–10 March 2022; pp. 1049–1052. [Google Scholar]
- Tian, C.; Xu, Z.; Wang, L.; Liu, Y. Arc fault detection using artificial intelligence: Challenges and benefits. Math. Biosci. Eng. 2023, 20, 12404–12432. [Google Scholar] [CrossRef] [PubMed]
- Naser, M.; Alavi, A. Insights into performance fitness and error metrics for machine learning. arXiv 2020, arXiv:2006.00887. [Google Scholar]
- Chakraborti, T.; Kambhampati, S.; Scheutz, M.; Zhang, Y. Ai challenges in human-robot cognitive teaming. arXiv 2017, arXiv:1707.04775. [Google Scholar]
- Huang, X.; Cai, Z. A review of video action recognition based on 3D convolution. Comput. Electr. Eng. 2023, 108, 108713. [Google Scholar] [CrossRef]
- Rodomagoulakis, I.; Kardaris, N.; Pitsikalis, V.; Mavroudi, E.; Katsamanis, A.; Tsiami, A.; Maragos, P. Multimodal human action recognition in assistive human-robot interaction. In Proceedings of the 2016 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Shanghai, China, 20–25 March 2016; pp. 2702–2706. [Google Scholar]
- Kong, Y.; Fu, Y. Human action recognition and prediction: A survey. Int. J. Comput. Vis. 2022, 130, 1366–1401. [Google Scholar] [CrossRef]
- Dallel, M.; Havard, V.; Baudry, D.; Savatier, X. Inhard-industrial human action recognition dataset in the context of industrial collaborative robotics. In Proceedings of the 2020 IEEE International Conference on Human-Machine Systems (ICHMS), Rome, Italy, 7–9 September 2020; pp. 1–6. [Google Scholar]
- Seraj, E. Embodied Team Intelligence in Multi-Robot Systems. In Proceedings of the AAMAS, Auckland, New Zealand, 9–13 May 2022; pp. 1869–1871. [Google Scholar]
- Perzanowski, D.; Schultz, A.C.; Adams, W.; Marsh, E.; Bugajska, M. Building a multimodal human-robot interface. IEEE Intell. Syst. 2001, 16, 16–21. [Google Scholar] [CrossRef]
- Chiou, E.K.; Demir, M.; Buchanan, V.; Corral, C.C.; Endsley, M.R.; Lematta, G.J.; Cooke, N.J.; McNeese, N.J. Towards human–robot teaming: Tradeoffs of explanation-based communication strategies in a virtual search and rescue task. Int. J. Soc. Robot. 2021, 14, 1117–1136. [Google Scholar] [CrossRef]
- Mayer, R.C.; Davis, J.H.; Schoorman, F.D. An integrative model of organizational trust. Acad. Manag. Rev. 1995, 20, 709–734. [Google Scholar] [CrossRef]
- Guo, Y.; Yang, X.J.; Shi, C. TIP: A Trust Inference and Propagation Model in Multi-Human Multi-Robot Teams. arXiv 2023, arXiv:2301.10928. [Google Scholar]
- Natarajan, M.; Seraj, E.; Altundas, B.; Paleja, R.; Ye, S.; Chen, L.; Jensen, R.; Chang, K.C.; Gombolay, M. Human-Robot Teaming: Grand Challenges. Curr. Robot. Rep. 2023, 4, 1–20. [Google Scholar] [CrossRef]
- He, Z.; Song, Y.; Zhou, S.; Cai, Z. Interaction of Thoughts: Towards Mediating Task Assignment in Human-AI Cooperation with a Capability-Aware Shared Mental Model. In Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems, Hamburg, Germany, 23–29 April 2023; pp. 1–18. [Google Scholar]
- Demir, M.; Cohen, M.; Johnson, C.J.; Chiou, E.K.; Cooke, N.J. Exploration of the impact of interpersonal communication and coordination dynamics on team effectiveness in human-machine teams. Int. J. Hum.-Comput. Interact. 2023, 39, 1841–1855. [Google Scholar] [CrossRef]
- Zhang, Y.; Williams, B. Adaptation and Communication in Human-Robot Teaming to Handle Discrepancies in Agents’ Beliefs about Plans. In Proceedings of the International Conference on Automated Planning and Scheduling, Prague, Czech Republic, 8–12 July 2023; Volume 33, pp. 462–471. [Google Scholar]
- Schmidbauer, C.; Zafari, S.; Hader, B.; Schlund, S. An Empirical Study on Workers’ Preferences in Human–Robot Task Assignment in Industrial Assembly Systems. IEEE Trans. Hum.-Mach. Syst. 2023, 53, 293–302. [Google Scholar] [CrossRef]
- Wang, L.; Ge, L.; Li, R.; Fang, Y. Three-stream CNNs for action recognition. Pattern Recognit. Lett. 2017, 92, 33–40. [Google Scholar] [CrossRef]
- Hossin, M.; Sulaiman, M.N. A review on evaluation metrics for data classification evaluations. Int. J. Data Min. Knowl. Manag. Process 2015, 5, 1. [Google Scholar]
- Gholamrezaii, M.; Almodarresi, S.M.T. Human activity recognition using 2D convolutional neural networks. In Proceedings of the 2019 27th Iranian Conference on Electrical Engineering (ICEE), Yazd, Iran, 30 April–2 May 2019; pp. 1682–1686. [Google Scholar]
- Stamoulakatos, A.; Cardona, J.; Michie, C.; Andonovic, I.; Lazaridis, P.; Bellekens, X.; Atkinson, R.; Hossain, M.M.; Tachtatzis, C. A comparison of the performance of 2D and 3D convolutional neural networks for subsea survey video classification. In Proceedings of the OCEANS 2021: San Diego–Porto, San Diego, CA, USA, 20–23 September 2021; pp. 1–10. [Google Scholar]
- Taye, M.M. Theoretical understanding of convolutional neural network: Concepts, architectures, applications, future directions. Computation 2023, 11, 52. [Google Scholar] [CrossRef]
- Shi, Y.; Li, L.; Yang, J.; Wang, Y.; Hao, S. Center-based transfer feature learning with classifier adaptation for surface defect recognition. Mech. Syst. Signal Process. 2023, 188, 110001. [Google Scholar] [CrossRef]
- Wang, Y.; Liu, Z.; Xu, J.; Yan, W. Heterogeneous network representation learning approach for ethereum identity identification. IEEE Trans. Comput. Soc. Syst. 2022, 10, 890–899. [Google Scholar] [CrossRef]
- Liu, Z.; Yang, D.; Wang, Y.; Lu, M.; Li, R. EGNN: Graph structure learning based on evolutionary computation helps more in graph neural networks. Appl. Soft Comput. 2023, 135, 110040. [Google Scholar] [CrossRef]
- Schaefer, K.E.; Oh, J.; Aksaray, D.; Barber, D. Integrating context into artificial intelligence: Research from the robotics collaborative technology alliance. AI Mag. 2019, 40, 28–40. [Google Scholar] [CrossRef]
| Meta-Action Class Label | No. of Samples |
|---|---|
| Assemble System | 1378 |
| Consult Sheets | 132 |
| No Action | 500 |
| Picking in Front | 456 |
| Picking Left | 641 |
| Put Down Component | 385 |
| Put Down Measuring Rod | 74 |
| Put Down Screwdriver | 416 |
| Put Down Subsystem | 77 |
| Take Component | 485 |
| Take Measuring Rod | 76 |
| Take Screwdriver | 420 |
| Take Subsystem | 39 |
| Turn Sheets | 224 |
| Meta-Action Class Label | Accuracy (%) |
|---|---|
| Assemble System | ∼64 |
| Consult Sheets | ∼58 |
| No Action | ∼35 |
| Picking in Front | ∼52 |
| Picking Left | ∼95 |
| Put Down Component | ∼78 |
| Put Down Measuring Rod | ∼47 |
| Put Down Screwdriver | ∼68 |
| Put Down Subsystem | ∼90 |
| Take Component | ∼69 |
| Take Measuring Rod | ∼60 |
| Take Screwdriver | ∼65 |
| Take Subsystem | ∼88 |
| Turn Sheets | ∼32 |
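From the two tables above, an indicative overall figure can be derived by weighting each class's approximate accuracy by its sample count. Since the per-class values are rounded ("∼"), the result below is only a rough estimate, not a metric reported by the paper.

```python
# Per-class sample counts and approximate accuracies from the tables above.
samples = {
    "Assemble System": 1378, "Consult Sheets": 132, "No Action": 500,
    "Picking in Front": 456, "Picking Left": 641, "Put Down Component": 385,
    "Put Down Measuring Rod": 74, "Put Down Screwdriver": 416,
    "Put Down Subsystem": 77, "Take Component": 485,
    "Take Measuring Rod": 76, "Take Screwdriver": 420,
    "Take Subsystem": 39, "Turn Sheets": 224,
}
accuracy = {
    "Assemble System": 64, "Consult Sheets": 58, "No Action": 35,
    "Picking in Front": 52, "Picking Left": 95, "Put Down Component": 78,
    "Put Down Measuring Rod": 47, "Put Down Screwdriver": 68,
    "Put Down Subsystem": 90, "Take Component": 69,
    "Take Measuring Rod": 60, "Take Screwdriver": 65,
    "Take Subsystem": 88, "Turn Sheets": 32,
}
total = sum(samples.values())
# Sample-weighted mean: dominated by large classes such as Assemble System.
weighted = sum(samples[c] * accuracy[c] for c in samples) / total
# Macro average: every class counts equally, regardless of sample count.
macro = sum(accuracy.values()) / len(accuracy)
print(f"sample-weighted accuracy ~{weighted:.1f}%, macro average ~{macro:.1f}%")
```

Comparing the two averages hints at whether the model's errors concentrate in rare classes (macro below weighted) or in frequent ones; here the two happen to land close together.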
© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Mehak, S.; Kelleher, J.D.; Guilfoyle, M.; Leva, M.C. Action Recognition for Human–Robot Teaming: Exploring Mutual Performance Monitoring Possibilities. Machines 2024, 12, 45. https://doi.org/10.3390/machines12010045