Human–AI–Robot Teaming (HART)

A special issue of Robotics (ISSN 2218-6581). This special issue belongs to the section "AI in Robotics".

Deadline for manuscript submissions: 30 June 2025

Special Issue Editors


Dr. Stefania Costantini
Guest Editor
DISIM Department, University of L’Aquila, L’Aquila, Italy
Interests: artificial intelligence; knowledge representation and (neuro-symbolic) reasoning; computational logic; software agents; cognitive robotics; AI trustworthiness

Dr. Pierangelo Dell’Acqua
Guest Editor
ITN Department of Science and Technology, Linköping University, Linköping, Sweden
Interests: computational logic; agents and multi-agent systems; game AI; cognitive robotics

Dr. Giovanni De Gasperis
Guest Editor
DISIM Department, University of L’Aquila, L’Aquila, Italy
Interests: cognitive robotics; neuro-symbolic reasoning for robotics; computer vision; natural language processing

Dr. Francesco Gullo
Guest Editor
DISIM Department, University of L’Aquila, L’Aquila, Italy
Interests: artificial intelligence; data science; graph machine learning; graph data management; natural language processing; AI trustworthiness

Special Issue Information

Dear Colleagues,

A recent focus in the fields of Artificial Intelligence (AI) and Robotics is the development of intelligent systems in which humans, AI systems, and possibly robots work together as teams. Robots, in fact, are physically embodied AI agents. As AI systems and robots become more autonomous, their roles are shifting from being operated and controlled by humans to actively interacting with them.

Humans in the loop may team up with AI systems, robots, or both. The aim is to leverage the potentially beneficial relationships between humans and automation, resulting in “hybrid” systems in which cooperation is necessary to tackle complex tasks. When working together, humans, AI, and robots can produce results that exceed what each could achieve alone, since they can oversee and improve one another.

Effective team interaction, team dynamics, and shared cognition are central to human–automation teaming. Agents and robots can thus be endowed with emotion recognition, empathy, and the ability to model aspects of the Theory of Mind (ToM), i.e., to reconstruct what humans are thinking or feeling.

Human–automation interaction is a central theme in human-centered AI, encompassing considerations such as respect for human autonomy, harm prevention, fairness, and explainability. This topic is also pertinent to trustworthy AI, which emphasizes the need for AI systems to be reliable and ethical, and responsible AI, which advocates for the safe and ethical deployment of AI technology.

Topics of interest include but are not limited to the following:

  • Knowledge representation and reasoning for human–AI interaction;
  • Decision-making in human–AI teaming;
  • Architectures and frameworks for human–AI teaming;
  • Empathic agents/robots;
  • Affective computing in human–AI interaction and teaming;
  • Emotional intelligence and Theory of Mind in human–AI teaming;
  • Managing human variability in human–AI teaming;
  • Human–agent interaction with neurorobotics;
  • Behavior trees in human–AI interaction and teaming;
  • Prosociality in human–AI teaming;
  • Evaluation of human–AI teams;
  • Ethical aspects in human–AI–robot teaming;
  • Human factors and environment awareness for cognitive robots;
  • Relationship between humans, AI-enabled technologies, and the environment;
  • Visualization-empowered human–AI teaming.

Dr. Stefania Costantini
Dr. Pierangelo Dell’Acqua
Dr. Giovanni De Gasperis
Dr. Francesco Gullo
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Robotics is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1800 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • human–AI interaction
  • human–AI teaming
  • cognitive robots
  • neurorobotics
  • human–agent interaction
  • visualization-empowered human–AI interaction

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found on the MDPI website.

Published Papers (2 papers)


Research

27 pages, 32676 KiB  
Article
Action Recognition via Multi-View Perception Feature Tracking for Human–Robot Interaction
by Chaitanya Bandi and Ulrike Thomas
Robotics 2025, 14(4), 53; https://doi.org/10.3390/robotics14040053 - 19 Apr 2025
Abstract
Human–Robot Interaction (HRI) depends on robust perception systems that enable intuitive and seamless interaction between humans and robots. This work introduces a multi-view perception framework designed for HRI, incorporating object detection and tracking, human body and hand pose estimation, unified hand–object pose estimation, and action recognition. We use the state-of-the-art object detection architecture to understand the scene for object detection and segmentation, ensuring high accuracy and real-time performance. In interaction environments, 3D whole-body pose estimation is necessary, and we integrate an existing work with high inference speed. We propose a novel architecture for 3D unified hand–object pose estimation and tracking, capturing real-time spatial relationships between hands and objects. Furthermore, we incorporate action recognition by leveraging whole-body pose, unified hand–object pose estimation, and object tracking to determine the handover interaction state. The proposed architecture is evaluated on large-scale, open-source datasets, demonstrating competitive accuracy and faster inference times, making it well-suited for real-time HRI applications.
(This article belongs to the Special Issue Human–AI–Robot Teaming (HART))
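The abstract above outlines a multi-stage perception pipeline: per-view detection, 3D body and hand–object pose estimation, multi-view fusion, and action/handover recognition. As a purely illustrative aid, the minimal Python sketch below shows how such stages could be wired together; the `ViewObservation` structure, the naive averaging fusion, and the distance-based handover rule are hypothetical placeholders, not the authors' models or implementation.

```python
# Illustrative sketch only: a skeleton of a multi-view perception pipeline
# feeding a handover-state decision. All components are hypothetical stubs.
from dataclasses import dataclass
from typing import Dict, List, Tuple

Joint3D = Tuple[float, float, float]

@dataclass
class ViewObservation:
    """Per-camera 3D estimates assumed to come from upstream detection/pose models."""
    objects: Dict[str, Joint3D]      # object id -> estimated 3D position
    body_joints: Dict[str, Joint3D]  # body joint name -> 3D position
    hand_joints: Dict[str, Joint3D]  # hand joint name -> 3D position

def fuse_views(views: List[ViewObservation]) -> ViewObservation:
    """Naively average estimates seen in several views (illustrative fusion only)."""
    def average(dicts: List[Dict[str, Joint3D]]) -> Dict[str, Joint3D]:
        collected: Dict[str, List[Joint3D]] = {}
        for d in dicts:
            for key, pos in d.items():
                collected.setdefault(key, []).append(pos)
        return {k: tuple(sum(c) / len(v) for c in zip(*v)) for k, v in collected.items()}
    return ViewObservation(
        objects=average([v.objects for v in views]),
        body_joints=average([v.body_joints for v in views]),
        hand_joints=average([v.hand_joints for v in views]),
    )

def handover_state(scene: ViewObservation, grasp_radius: float = 0.10) -> str:
    """Toy rule: a hand joint close to a tracked object is read as an ongoing handover."""
    for obj_pos in scene.objects.values():
        for hand_pos in scene.hand_joints.values():
            dist = sum((a - b) ** 2 for a, b in zip(hand_pos, obj_pos)) ** 0.5
            if dist < grasp_radius:
                return "handover_in_progress"
    return "idle"

if __name__ == "__main__":
    view = ViewObservation(
        objects={"cup": (0.52, 0.10, 0.80)},
        body_joints={"right_wrist": (0.50, 0.12, 0.78)},
        hand_joints={"right_index_tip": (0.53, 0.11, 0.79)},
    )
    print(handover_state(fuse_views([view])))  # -> handover_in_progress
```

In a real system, calibrated multi-view triangulation and a learned action-recognition model would take the place of the naive averaging and the hand-crafted distance rule.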

18 pages, 2430 KiB  
Article
The Art of Replication: Lifelike Avatars with Personalized Conversational Style
by Michele Nasser, Giuseppe Fulvio Gaglio, Valeria Seidita and Antonio Chella
Robotics 2025, 14(3), 33; https://doi.org/10.3390/robotics14030033 - 13 Mar 2025
Abstract
This study presents an approach for developing digital avatars replicating individuals’ physical characteristics and communicative style, contributing to research on virtual interactions in the metaverse. The proposed method integrates large language models (LLMs) with 3D avatar creation techniques, using what we call the Tree of Style (ToS) methodology to generate stylistically consistent and contextually appropriate responses. Linguistic analysis and personalized voice synthesis enhance conversational and auditory realism. The results suggest that ToS offers a practical alternative to fine-tuning for creating stylistically accurate responses while maintaining efficiency. This study outlines potential applications and acknowledges the need for further work on adaptability and ethical considerations.
(This article belongs to the Special Issue Human–AI–Robot Teaming (HART))
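The abstract above describes conditioning an LLM on a structured representation of a person's conversational style rather than fine-tuning it. The short Python sketch below illustrates that general idea only: a tree of stylistic traits is flattened into a style-conditioning prompt for a stubbed LLM call. The node contents, traversal, and `call_llm` stub are hypothetical and do not reproduce the authors' Tree of Style (ToS) methodology.

```python
# Illustrative sketch only: style-conditioned response generation from a
# tree of stylistic traits, with a stubbed (hypothetical) LLM call.
from dataclasses import dataclass, field
from typing import List

@dataclass
class StyleNode:
    """A node in a hypothetical tree of stylistic traits."""
    trait: str
    children: List["StyleNode"] = field(default_factory=list)

def collect_traits(node: StyleNode) -> List[str]:
    """Depth-first traversal gathering every trait in the tree."""
    traits = [node.trait]
    for child in node.children:
        traits.extend(collect_traits(child))
    return traits

def build_prompt(style_root: StyleNode, user_message: str) -> str:
    """Compose a prompt instructing an LLM to answer in the collected style."""
    trait_list = "\n".join(f"- {t}" for t in collect_traits(style_root))
    return (
        "Reply in the conversational style described by these traits:\n"
        f"{trait_list}\n\nUser: {user_message}\nReply:"
    )

def call_llm(prompt: str) -> str:
    """Placeholder for an actual LLM API call (hypothetical stub)."""
    return "<style-conditioned model response>"

if __name__ == "__main__":
    style = StyleNode(
        "overall tone: warm and informal",
        [StyleNode("lexicon: favours regional expressions"),
         StyleNode("rhythm: short sentences, frequent rhetorical questions")],
    )
    print(call_llm(build_prompt(style, "How was your day?")))
```

The design choice mirrors the trade-off noted in the abstract: prompt-time style conditioning avoids per-person fine-tuning while keeping generation efficient.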
