The Future of Intelligent Human-Robot Collaboration

A special issue of Multimodal Technologies and Interaction (ISSN 2414-4088).

Deadline for manuscript submissions: closed (6 January 2020)

Special Issue Editors


Guest Editor
School of Information, University of Michigan, Ann Arbor, MI, USA
Interests: human robot teams; virtual, augmented and mixed reality for human robot interaction; social robots; cobots; autonomous systems; automotive vehicles

Guest Editor
Department of Information Systems and Operations Management, HEC Paris, Jouy-en-Josas, France
Interests: human robot teams; physical embodiment; human robot interaction modality; robot emotions

Guest Editor
College of Engineering, University of Michigan, Ann Arbor, MI, USA
Interests: collaborative robotics; future of work; mixed-reality human robot interfaces; cognitive workload in human robot teams

Guest Editor
Civil and Environmental Engineering, University of Michigan, Ann Arbor, MI 48109, USA
Interests: construction engineering and management; simulation and sensing; sustainability and energy; behavior and social influence

Special Issue Information

Dear Colleagues,

Robots have been defined as physically embodied technologies that can intelligently perceive, think, and respond to their environment. Interaction with robots is distinct from interaction with other artificial intelligence (AI)-enabled technologies. Robots have a physical body that allows them to manifest their presence and actions in the physical environment. Humans can not only talk to robots but also touch and be touched by them. This distinguishes interactions with robots from interactions with disembodied AI agents, such as voice assistants like Apple's Siri and Amazon's Alexa. Human‒robot interactions therefore pose new and interesting questions for scholars in fields such as information and computer science, robotics, psychology, sociology, and engineering.

As robots become increasingly intelligent, they are rapidly being deployed in both private and public spaces. This proliferation is leading to human‒robot interactions across settings as diverse as the home, the workplace, and public spaces. In the home, robots perform household chores, babysit children, and act as companions for the elderly. At work, robots are taking on traditionally human jobs in logistics, transportation, and manufacturing. In public spaces such as museums and airports, robots are being employed as tour guides, janitors, and security officers.

This special issue aims to enhance our understanding of intelligent human‒robot interactions so that the emerging challenges and opportunities can be dealt with effectively. It seeks submissions that draw upon and contribute to the existing knowledge on human‒robot interactions while acknowledging the disruptive potential of advances in robotics, and it encourages submissions that either significantly build on or challenge existing research on human‒robot interactions.

This special issue seeks submissions on a diverse range of HRI topics. It welcomes papers that explore human‒robot interactions at any level (i.e., individual, team, organizational, or societal). This includes submissions that examine the many facets of interaction in any context (e.g., homes, workplaces, and public services) and role (e.g., companion, co-worker, boss, or adversary). Submissions may include empirical studies as well as conceptual frameworks that seek to advance our knowledge of the topic.

Topics of interest include, but are not limited to, the following:

  • Theoretical frameworks for human‒robot interaction
  • Empirical studies examining the social aspects of human‒robot interactions
  • Case studies of human‒robot interaction
  • Design implications for robot interactions in home, work, and public spaces
  • New methodological approaches to studying human‒robot interactions
  • Promoting cooperative and collaborative interaction with robots
  • Examining uncooperative and adversarial human interactions with robots
  • The impact of haptic feedback and touch on human‒robot interaction
  • The ethics of human‒robot interactions
  • Application of immersive virtual environments in the study of human‒robot interaction

Dr. Lionel P. Robert Jr.
Dr. Sangseok You
Dr. Vineet Kamat
Dr. SangHyun Lee
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website. Once registered, authors can proceed to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as they are accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Multimodal Technologies and Interaction is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • autonomous systems
  • automotive vehicles
  • brain robot interface
  • cobots
  • cognitive workload in human robot teams
  • collaborative robotics
  • healthcare robots
  • human robot interaction
  • human robot collaboration
  • human computer interaction
  • human robot interaction modality
  • human robot teams
  • future of work
  • mixed-reality human robot interfaces
  • personal robots
  • physical embodiment
  • robot emotions
  • social robots
  • virtual, augmented and mixed reality for human robot interaction

Published Papers (3 papers)


Research

22 pages, 1160 KiB  
Article
Behavior‒Output Control Theory, Trust and Social Loafing in Virtual Teams
by Lionel P. Robert, Jr.
Multimodal Technol. Interact. 2020, 4(3), 39; https://doi.org/10.3390/mti4030039 - 08 Jul 2020
Cited by 16 | Viewed by 8920
Abstract
Social loafing, the act of withholding effort in teams, has been identified as an important problem in virtual teams. A lack of social control and the inability to observe or trust that others are fulfilling their commitments are often cited as major causes of social loafing in virtual teams where there is geographic dispersion and a reliance on electronic communications. Yet, more research is needed to better understand such claims. The goal of this study was to examine the impact of control and trust on social loafing in virtual teams. To accomplish this, we proposed and empirically tested a multi-level research model that explains the relationships among team controls, trust, social loafing, and team performance. We tested the model with 272 information technology employees in 39 virtual teams. Results indicate that control and trust reduce social loafing separately and also jointly.
(This article belongs to the Special Issue The Future of Intelligent Human-Robot Collaboration)

17 pages, 5569 KiB  
Article
F-Formations for Social Interaction in Simulation Using Virtual Agents and Mobile Robotic Telepresence Systems
by Sai Krishna Pathi, Annica Kristoffersson, Andrey Kiselev and Amy Loutfi
Multimodal Technol. Interact. 2019, 3(4), 69; https://doi.org/10.3390/mti3040069 - 17 Oct 2019
Cited by 5 | Viewed by 5234
Abstract
F-formations are a set of possible patterns in which groups of people tend to spatially organize themselves while engaging in social interactions. In this paper, we study the behavior of teleoperators of mobile robotic telepresence systems to determine whether they adhere to spatial formations when navigating to groups. This work uses a simulated environment in which teleoperators are requested to navigate to different groups of virtual agents. The simulated environment represents a conference lobby scenario where multiple groups of Virtual Agents with varying group sizes are placed in different spatial formations. The task requires teleoperators to navigate a robot to join each group using an egocentric-perspective camera. In a second phase, teleoperators are allowed to evaluate their own performance by reviewing how they navigated the robot from an exocentric perspective. The two important outcomes from this study are, firstly, teleoperators inherently respect F-formations even when operating a mobile robotic telepresence system. Secondly, teleoperators prefer additional support in order to correctly navigate the robot into a preferred position that adheres to F-formations.
(This article belongs to the Special Issue The Future of Intelligent Human-Robot Collaboration)

17 pages, 2680 KiB  
Article
The Influence of Feedback Type in Robot-Assisted Training
by Neziha Akalin, Annica Kristoffersson and Amy Loutfi
Multimodal Technol. Interact. 2019, 3(4), 67; https://doi.org/10.3390/mti3040067 - 09 Oct 2019
Cited by 21 | Viewed by 4732
Abstract
Robot-assisted training, where social robots can be used as motivational coaches, provides an interesting application area. This paper examines how feedback given by a robot agent influences the various facets of participant experience in robot-assisted training. Specifically, we investigated the effects of feedback type on robot acceptance, sense of safety and security, attitude towards robots and task performance. In the experiment, 23 older participants performed basic arm exercises with a social robot as a guide and received feedback. Different feedback conditions were administered, such as flattering, positive and negative feedback. Our results suggest that the robot with flattering and positive feedback was appreciated by older people in general, even if the feedback did not necessarily correspond to objective measures such as performance. Participants in these groups felt better about the interaction and the robot.
(This article belongs to the Special Issue The Future of Intelligent Human-Robot Collaboration)
