From Human–Machine Interaction to Human–Machine Cooperation: Status and Progress
1. Introduction
2. An Overview of Published Articles
3. Conclusions
Author Contributions
Funding
Acknowledgments
Conflicts of Interest
References
- Ren, M.; Chen, N.; Qiu, H. Human–machine collaborative decision-making: An evolutionary roadmap based on cognitive intelligence. Int. J. Soc. Robot. 2023, 15, 1101–1114.
- Semeraro, F.; Griffiths, A.; Cangelosi, A. Human–robot collaboration and machine learning: A systematic review of recent research. Rob. Comput.-Integr. Manuf. 2023, 79, 102432.
- Lodhi, S.K.; Zeb, S. AI-driven robotics and automation: The evolution of human–machine collaboration. J. World Sci. 2025, 4, 422–437.
- Vijay, R.; Kumar, A. The collaboration between humans and machines. In Advanced Digital Technologies in Financial and Business Management; Apple Academic Press: Waretown, NJ, USA, 2025; pp. 275–290.
- Pizoń, J.; Gola, A. Human–machine relationship—Perspective and future roadmap for Industry 5.0 solutions. Machines 2023, 11, 203.
- Zhang, Z.; Wang, H.; Geng, J.; Jiang, W.; Deng, X.; Miao, W. An information fusion method based on deep learning and fuzzy discount-weighting for target intention recognition. Eng. Appl. Artif. Intell. 2022, 109, 104610.
- Krumm, J. (Ed.) Ubiquitous Computing Fundamentals; CRC Press: Boca Raton, FL, USA, 2018.
- Gomez Cubero, C.; Rehm, M. Intention recognition in human–robot interaction based on eye tracking. In Proceedings of the IFIP Conference on Human–Computer Interaction (INTERACT 2021), Bari, Italy, 30 August–3 September 2021; Springer: Cham, Switzerland, 2021; pp. 428–437.
- Lindblom, J.; Alenljung, B. The ANEMONE: Theoretical foundations for UX evaluation of action and intention recognition in human–robot interaction. Sensors 2020, 20, 4284.
- Awais, M.; Saeed, M.Y.; Malik, M.S.A.; Younas, M.; Asif, S.R.I. Intention-based comparative analysis of human–robot interaction. IEEE Access 2020, 8, 205821–205835.
- Fan, J.; Zheng, P.; Li, S. Vision-based holistic scene understanding towards proactive human–robot collaboration. Rob. Comput.-Integr. Manuf. 2022, 75, 102304.
- Conte, D.; Furukawa, T. Autonomous robotic escort incorporating motion prediction and human intention. In Proceedings of the 2021 IEEE International Conference on Robotics and Automation (ICRA), Xi’an, China, 30 May–5 June 2021; pp. 3480–3486.
- Liu, H.; Wang, L. Collision-free human–robot collaboration based on context awareness. Rob. Comput.-Integr. Manuf. 2021, 67, 102024.
- Schmitz, A. Human–robot collaboration in industrial automation: Sensors and algorithms. Sensors 2022, 22, 5848.
- Bi, Z.M.; Luo, C.; Miao, Z.; Zhang, B.; Zhang, W.J.; Wang, L. Safety assurance mechanisms of collaborative robotic systems in manufacturing. Rob. Comput.-Integr. Manuf. 2021, 67, 102022.
- Goyal, R.; Kahou, S.E.; Michalski, V.; Materzynska, J.; Westphal, S.; Kim, H.; Haenel, V.; Fruend, I.; Yianilos, P.; Mueller-Freitag, M.; et al. The “Something Something” video database for learning and evaluating visual common sense. In Proceedings of the 2017 IEEE International Conference on Computer Vision (ICCV), Venice, Italy, 22–29 October 2017; pp. 5843–5851.
- Damen, D.; Doughty, H.; Farinella, G.M.; Fidler, S.; Furnari, A.; Kazakos, E.; Moltisanti, D.; Munro, J.; Perrett, T.; Price, W.; et al. Scaling egocentric vision: The EPIC-KITCHENS dataset. In Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018; Springer: Cham, Switzerland, 2018; pp. 720–736.
- Dallel, M.; Havard, V.; Baudry, D.; Savatier, X. InHARD—Industrial human action recognition dataset in the context of industrial collaborative robotics. In Proceedings of the 2020 IEEE International Conference on Human–Machine Systems (ICHMS), Rome, Italy, 7–9 September 2020; pp. 1–6.
- Yan, S.; Xiong, Y.; Lin, D. Spatial temporal graph convolutional networks for skeleton-based action recognition. In Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence (AAAI 2018), New Orleans, LA, USA, 2–7 February 2018.
- Ullah, A.; Muhammad, K.; Del Ser, J.; Baik, S.W.; de Albuquerque, V.H.C. Activity recognition using temporal optical flow convolutional features and multilayer LSTM. IEEE Trans. Ind. Electron. 2019, 66, 9692–9702.
- Li, S.; Fan, J.; Zheng, P.; Wang, L. Transfer learning-enabled action recognition for human–robot collaborative assembly. Procedia CIRP 2021, 104, 1795–1800.
- Fazli, M.; Kowsari, K.; Gharavi, E.; Barnes, L.; Doryab, A. HHAR-net: Hierarchical human activity recognition using neural networks. In Proceedings of the International Conference on Intelligent Human Computer Interaction (IHCI 2020), 24–26 November 2020; Springer: Cham, Switzerland, 2020; pp. 48–58.
- Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A.N.; Kaiser, Ł.; Polosukhin, I. Attention is all you need. arXiv 2017, arXiv:1706.03762.
- Mazzia, V.; Angarano, S.; Salvetti, F.; Angelini, F.; Chiaberge, M. Action Transformer: A self-attention model for short-time pose-based human action recognition. Pattern Recognit. 2022, 124, 108487.
- Ul-Haq, A.; Akhtar, N.; Pogrebna, G.; Mian, A. Vision transformers for action recognition: A survey. arXiv 2022, arXiv:2209.05700.
- Kaseris, M.; Kostavelis, I.; Malassiotis, S. A comprehensive survey on deep learning methods in human activity recognition. Mach. Learn. Knowl. Extr. 2024, 6, 842–876.
- Babiarz, A.; Bugaj, M. Application of deep neural networks in recognition of selected types of objects in digital images. Appl. Sci. 2025, 15, 7931.
- Liu, L.; Li, W.; Moxley, B. AI-based classification of pediatric breath sounds: Toward a tool for early respiratory screening. Appl. Sci. 2025, 15, 7145.
- Sankaran, G.; Palomino, M.A.; Knahl, M.; Siestrup, G. Towards a system dynamics framework for human–machine learning decisions: A case study of New York Citi Bike. Appl. Sci. 2024, 14, 10647.
- Álvarez-Jiménez, M.; Calle-Jimenez, T.; Hernández-Álvarez, M. A comprehensive evaluation of features and simple machine learning algorithms for electroencephalographic-based emotion recognition. Appl. Sci. 2024, 14, 2228.
- Trstenjak, M.; Gregurić, P.; Janić, Ž.; Salaj, D. Integrated multilevel production planning solution according to Industry 5.0 principles. Appl. Sci. 2024, 14, 160.
- Ertem-Eray, T.; Cheng, Y. A review of artificial intelligence research in peer-reviewed communication journals. Appl. Sci. 2025, 15, 1058.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Stipancic, T.; Rosenberg, D. From Human–Machine Interaction to Human–Machine Cooperation: Status and Progress. Appl. Sci. 2025, 15, 9475. https://doi.org/10.3390/app15179475