Special Issue "Human-Robot Interaction and Sensors for Social Robotics"

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Intelligent Sensors".

Deadline for manuscript submissions: closed (1 October 2020).

Special Issue Editor

Dr. Carina Soledad González González
Guest Editor
Department of Computer Engineering and Systems, University of La Laguna, 08018 Barcelona, Spain
Interests: human–computer interaction; intelligent tutoring systems; intelligent interfaces; human-centered design; UX; serious games; gamification; digital culture

Special Issue Information

Dear Colleagues,

In recent years, there has been increasing interest in the human factors of robots within the area of human–robot interaction (HRI). Researchers in HRI study topics such as robot emotions and personalities; intelligent electronic sex hardware; affective, gender, psychological, sociological, and philosophical approaches; and morality and ethics, among others. The use of robots for interaction and communication with humans has, in recent decades, motivated a range of therapies and applications in healthcare and education. For example, there are several initiatives to introduce robotics into hospitals for socioemotional and educational attention. Among the studies on the use of social robots to support children in hospital contexts, those using the robots NAO, IROMEC, Pleo, Robovie, MOnarCH, and Paro stand out. These studies show the positive benefits of their use and good acceptance by children, their families, health professionals, and educators.

This Special Issue will cover all aspects of social robotics, including user studies, design frameworks, techniques and strategies, methodologies, tools and applications, analysis and assessment, and systems integrations and architectures, as well as any work in progress.

We welcome submissions on all topics of social robotics applied to health and education, including but not limited to the following:

  • User studies
  • Design frameworks
  • Interaction with social robots
  • Sensors, tools, and applications
  • Therapies
  • Affective computing
  • Intelligent social approaches
  • Systems integrations
  • Architectures
  • Ethical and moral approaches
  • Cognition and sensing
  • Human–robot interaction

Dr. Carina Soledad González-González
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All papers will be peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2200 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • social robots
  • human–robot interaction
  • affective computing
  • intelligent social robotics

Published Papers (17 papers)


Research

Jump to: Review

Article
The AMIRO Social Robotics Framework: Deployment and Evaluation on the Pepper Robot
Sensors 2020, 20(24), 7271; https://doi.org/10.3390/s20247271 - 18 Dec 2020
Cited by 2 | Viewed by 693
Abstract
Recent studies in social robotics show that it can provide economic efficiency and growth in domains such as retail, entertainment, and active and assisted living (AAL). Recent work also highlights that users expect affordable social robotics platforms that provide focused and specific assistance in a robust manner. In this paper, we present the AMIRO social robotics framework, designed in a modular and robust way for assistive care scenarios. The framework includes robotic services for navigation, person detection and recognition, multi-lingual natural language interaction and dialogue management, as well as activity recognition and general behavior composition. We present the platform-independent implementation of AMIRO, based on the Robot Operating System (ROS). We focus on quantitative evaluations of each functionality module, discussing their performance in different settings and possible improvements. We showcase the deployment of the AMIRO framework on the Pepper robot, a popular social robotics platform, and present the experience of developing a complex user interaction scenario employing all available functionality modules within AMIRO. Full article
(This article belongs to the Special Issue Human-Robot Interaction and Sensors for Social Robotics)

Article
Using a Social Robot to Evaluate Facial Expressions in the Wild
Sensors 2020, 20(23), 6716; https://doi.org/10.3390/s20236716 - 24 Nov 2020
Cited by 2 | Viewed by 755
Abstract
In this work, an affective computing approach is used to study human–robot interaction, using a social robot to validate facial expressions in the wild. Our overall goal is to evaluate whether a social robot can interact in a convincing manner with human users to recognize their potential emotions through facial expressions, contextual cues, and bio-signals. In particular, this work focuses on analyzing facial expressions. A social robot is used to validate a pre-trained convolutional neural network (CNN) that recognizes facial expressions. Facial expression recognition plays an important role in robots recognizing and understanding human emotion. Robots equipped with expression recognition capabilities can also be a useful tool for getting feedback from users. The designed experiment allows a neural network trained on facial expressions to be evaluated using a social robot in a real environment. In this paper, the accuracy of the CNN is compared with that of human experts, and the interaction, attention, and difficulty of performing a particular expression are analyzed for 29 non-expert users. In the experiment, the robot leads the users to perform different facial expressions in a motivating and entertaining way. At the end of the experiment, the users are quizzed about their experience with the robot. Finally, a set of experts and the CNN classify the expressions. The obtained results allow us to affirm that a social robot is an adequate interaction paradigm for the evaluation of facial expressions. Full article
(This article belongs to the Special Issue Human-Robot Interaction and Sensors for Social Robotics)
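The abstract above compares CNN predictions against human expert judgments. A minimal sketch of that kind of evaluation, with invented labels and data (the expression names, vote counts, and figures below are illustrative, not taken from the study):

```python
# Hedged sketch: comparing classifier predictions against expert labels.
# Expression names and data are illustrative, not from the paper.

from collections import Counter

def majority_vote(expert_labels):
    """Collapse several expert labels for one image into a single label."""
    return Counter(expert_labels).most_common(1)[0][0]

def accuracy(predictions, ground_truth):
    """Fraction of items where the classifier agrees with the ground truth."""
    hits = sum(p == g for p, g in zip(predictions, ground_truth))
    return hits / len(ground_truth)

# Illustrative data: 4 images, 3 expert votes each, plus CNN predictions.
expert_votes = [
    ["happy", "happy", "neutral"],
    ["sad", "sad", "sad"],
    ["angry", "surprised", "angry"],
    ["neutral", "neutral", "happy"],
]
cnn_predictions = ["happy", "sad", "surprised", "neutral"]

truth = [majority_vote(v) for v in expert_votes]
print(accuracy(cnn_predictions, truth))  # 3 of 4 agree -> 0.75
```

A real comparison would also report per-expression confusion and inter-expert agreement, but the majority-vote-then-accuracy core is the same.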

Article
Face Memorization Using AIM Model for Mobile Robot and Its Application to Name Calling Function
Sensors 2020, 20(22), 6629; https://doi.org/10.3390/s20226629 - 19 Nov 2020
Viewed by 496
Abstract
We are developing a social mobile robot with a name-calling function based on a face memorization system. Calling a person by her/his name is considered an important function for a social robot, as it can make a friendly impression of the robot on that person. Our face memorization system has the following features: (1) When the robot detects a stranger, it stores her/his face images and name after getting her/his permission. (2) The robot can call a person whose face it has memorized by her/his name. (3) The robot system has a sleep–wake function: the face classifier is re-trained in a REM-sleep state, or the execution frequencies of information processes are reduced, when the robot has nothing to do, for example, when there is no person around it. In this paper, we confirmed the performance of these functions and conducted an experiment to evaluate the impression of the name-calling function with research participants. The experimental results revealed the validity and effectiveness of the proposed face memorization system. Full article
(This article belongs to the Special Issue Human-Robot Interaction and Sensors for Social Robotics)

Article
An Automated Planning Model for HRI: Use Cases on Social Assistive Robotics
Sensors 2020, 20(22), 6520; https://doi.org/10.3390/s20226520 - 14 Nov 2020
Viewed by 790
Abstract
Using automated planning for the high-level control of robotic architectures is becoming very popular, thanks mainly to its capability to define the tasks to perform in a declarative way. However, classical planning tasks, even in the basic standard Planning Domain Definition Language (PDDL) format, are still very hard for non-expert engineers to formalize when the use case to model is complex. Human–robot interaction (HRI) is one of those complex environments. This manuscript describes the rationale followed to design a planning model able to control social autonomous robots interacting with humans. It is the result of the authors' experience in modeling use cases for Social Assistive Robotics (SAR) in two areas related to healthcare: Comprehensive Geriatric Assessment (CGA) and non-contact rehabilitation therapies for patients with physical impairments. In this work, a general definition of these two use cases in a single planning domain is proposed, which favors management and integration with the robotic software architecture, as well as the addition of new use cases. Results show that the model captures all the relevant aspects of the human–robot interaction in those scenarios, allowing the robot to perform the tasks autonomously using a standard planning–execution architecture. Full article
(This article belongs to the Special Issue Human-Robot Interaction and Sensors for Social Robotics)
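The planning approach the abstract describes rests on the classical idea behind PDDL: states are sets of facts, actions have preconditions and effects, and a plan is a sequence of actions reaching a goal. A minimal sketch of that idea in Python, with a tiny invented "assessment" domain (the action names are illustrative, not the authors' CGA/rehabilitation model):

```python
# Hedged sketch of PDDL-style planning: breadth-first forward search over
# fact sets. The three-action domain below is invented for illustration.

from collections import deque

# action name -> (preconditions, add effects, delete effects)
ACTIONS = {
    "greet_patient":  ({"patient_present"}, {"greeted"}, set()),
    "ask_questions":  ({"greeted"}, {"answers_collected"}, set()),
    "report_results": ({"answers_collected"}, {"report_done"}, set()),
}

def plan(initial, goal):
    """Return a list of action names reaching the goal, or None."""
    frontier = deque([(frozenset(initial), [])])
    seen = {frozenset(initial)}
    while frontier:
        state, steps = frontier.popleft()
        if goal <= state:                     # all goal facts hold
            return steps
        for name, (pre, add, dele) in ACTIONS.items():
            if pre <= state:                  # preconditions satisfied
                nxt = frozenset((state - dele) | add)
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, steps + [name]))
    return None

print(plan({"patient_present"}, {"report_done"}))
# -> ['greet_patient', 'ask_questions', 'report_results']
```

Real PDDL planners add typed objects, parameterized actions, and heuristics, but the declarative precondition/effect structure is the same.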

Article
Integration of a Social Robot in a Pedagogical and Logopedic Intervention with Children: A Case Study
Sensors 2020, 20(22), 6483; https://doi.org/10.3390/s20226483 - 13 Nov 2020
Cited by 5 | Viewed by 820
Abstract
The effectiveness of social robots such as NAO in pedagogical therapies presents a challenge. There is abundant literature on therapies using robots with children with autism, but there is a gap to be filled for other special educational needs. This paper describes an experience of using a NAO robot as an assistant in logopedic and pedagogical therapy with children with different needs. Although the initial robot architecture was based on generic behaviors, the loading and execution time for each specific requirement, and the needs of each child in therapy, made it necessary to develop "Adaptive Behaviors". These evolved into an adaptive architecture, applied to the engineer–therapist–child interaction, which required the engineer-programmer to always be present during the sessions. Benefits from the point of view of the therapist and the children, and the acceptance of NAO in therapy, are shown. A robot in speech-therapy sessions can play a positive role in several logopedic aspects, serving as a motivating factor for the children. Future work should be oriented toward developing intelligent algorithms that eliminate the need for the engineer-programmer in the sessions. Additional work should consider deepening the psychological aspects of using humanoid robots in educational therapy. Full article
(This article belongs to the Special Issue Human-Robot Interaction and Sensors for Social Robotics)

Article
Dual Arm Co-Manipulation Architecture with Enhanced Human–Robot Communication for Large Part Manipulation
Sensors 2020, 20(21), 6151; https://doi.org/10.3390/s20216151 - 29 Oct 2020
Cited by 1 | Viewed by 938
Abstract
The emergence of collaborative robotics has had a great impact on the development of robotic solutions for cooperative tasks nowadays carried out by humans, especially in industrial environments where robots can act as assistants to operators. Even so, the coordinated manipulation of large parts between robots and humans gives rise to many technical challenges, ranging from the coordination of both robotic arms to the human–robot information exchange. This paper presents a novel architecture for the execution of trajectory-driven collaborative tasks, combining impedance control and trajectory coordination in the control loop, as well as adding mechanisms to provide effective robot-to-human feedback for a successful and satisfactory task completion. The obtained results demonstrate the validity of the proposed architecture as well as its suitability for the implementation of collaborative robotic systems. Full article
(This article belongs to the Special Issue Human-Robot Interaction and Sensors for Social Robotics)

Article
Human–Robot Interface for Embedding Sliding Adjustable Autonomy Methods
Sensors 2020, 20(20), 5960; https://doi.org/10.3390/s20205960 - 21 Oct 2020
Cited by 1 | Viewed by 673
Abstract
This work discusses a novel human–robot interface for a climbing robot that inspects weld beads in storage tanks in the petrochemical industry. The approach aims to adapt the robot's autonomy to the operator's experience, using a remote industrial joystick in conjunction with an electromyographic armband as inputs. The armband is worn on the forearm and can detect gestures from the operator and rotation angles of the arm. Information from the industrial joystick and the armband is used to control the robot via a fuzzy controller. The controller implements sliding autonomy (using as inputs the angular velocity from the industrial controller, electromyography readings, the weld bead position in the storage tank, and the rotation angles executed by the operator's arm) to produce a system capable of recognizing the operator's skill and correcting the operator's mistakes at operation time. The output of the fuzzy controller is the level of autonomy to be used by the robot. The implemented levels are Manual (the operator controls the angular and linear velocities of the robot); Shared (speeds are shared between the operator and the autonomous system); Supervisory (the robot controls the angular velocity to stay on the weld bead, and the operator controls the linear velocity); and Autonomous (the operator defines the endpoint, and the robot controls both linear and angular velocities). These autonomy levels, along with the proposed sliding autonomy, are analyzed through robot experiments in a simulated environment, showing the purpose of each mode. The proposed approach is evaluated in virtual industrial scenarios with distinct real operators. Full article
(This article belongs to the Special Issue Human-Robot Interaction and Sensors for Social Robotics)
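The level-selection logic the abstract describes can be sketched in a few lines. The paper uses a fuzzy controller over joystick, EMG, and weld-bead inputs; the crisp thresholds below are a simplified stand-in with invented numbers, not the authors' rule base:

```python
# Hedged sketch of sliding-autonomy level selection. Thresholds and the
# skill/deviation inputs are illustrative simplifications of the paper's
# fuzzy controller.

LEVELS = ["Manual", "Shared", "Supervisory", "Autonomous"]

def autonomy_level(skill_score, deviation_from_bead):
    """
    skill_score: 0.0 (novice) .. 1.0 (expert), e.g. derived from how smooth
                 the joystick and EMG inputs are.
    deviation_from_bead: lateral error from the weld bead, in metres.
    Returns one of LEVELS: more robot control for less skilled or less
    accurate operators.
    """
    if skill_score > 0.8 and deviation_from_bead < 0.02:
        return "Manual"        # operator controls both velocities
    if skill_score > 0.5:
        return "Shared"        # speeds blended between operator and robot
    if deviation_from_bead < 0.10:
        return "Supervisory"   # robot holds the bead, operator sets speed
    return "Autonomous"        # robot controls both velocities

print(autonomy_level(0.9, 0.01))  # -> Manual
print(autonomy_level(0.3, 0.25))  # -> Autonomous
```

A fuzzy version would replace the hard thresholds with membership functions and blend the rule outputs, which avoids abrupt switches between levels.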

Article
Sequential Localizing and Mapping: A Navigation Strategy via Enhanced Subsumption Architecture
Sensors 2020, 20(17), 4815; https://doi.org/10.3390/s20174815 - 26 Aug 2020
Viewed by 805
Abstract
In this paper, we present a navigation strategy designed specifically for social robots with limited sensors for home applications. The overall system integrates a reactive design based on subsumption architecture with a knowledge system that has learning capabilities. The system comprises several modules, such as doorway detection and room localization via a convolutional neural network (CNN), obstacle avoidance via reinforcement learning, doorway passing via Canny edge detection, the building of an abstract map called a Directional Semantic Topological Map (DST-Map) within the knowledge system, and other predefined layers within the subsumption architecture. The individual modules and the overall system are evaluated in a virtual environment using the Webots simulator. Full article
(This article belongs to the Special Issue Human-Robot Interaction and Sensors for Social Robotics)
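The subsumption idea the strategy builds on is that higher-priority behavior layers subsume (override) lower ones each control cycle. A minimal sketch, with invented layer names rather than the paper's exact module list:

```python
# Hedged sketch of a subsumption-architecture control step: the first
# (highest-priority) layer whose trigger fires wins. Layer names and
# commands are illustrative.

def subsumption_step(percepts, layers):
    """Return (layer_name, command) for the highest-priority firing layer."""
    for name, triggers, command in layers:   # ordered: highest priority first
        if triggers(percepts):
            return name, command
    return "idle", "stop"

LAYERS = [
    ("avoid_obstacle", lambda p: p.get("obstacle_close", False), "turn_away"),
    ("pass_doorway",   lambda p: p.get("doorway_detected", False), "align_and_enter"),
    ("wander",         lambda p: True, "move_forward"),
]

print(subsumption_step({"obstacle_close": True, "doorway_detected": True}, LAYERS))
# obstacle avoidance subsumes doorway passing -> ('avoid_obstacle', 'turn_away')
```

In the paper's design, the learned modules (CNN localization, RL obstacle avoidance) would sit behind triggers like these, with the knowledge system feeding the percepts.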

Article
The ANEMONE: Theoretical Foundations for UX Evaluation of Action and Intention Recognition in Human-Robot Interaction
Sensors 2020, 20(15), 4284; https://doi.org/10.3390/s20154284 - 31 Jul 2020
Cited by 6 | Viewed by 1644
Abstract
The coexistence of robots and humans in shared physical and social spaces is expected to increase. A key enabler of high-quality interaction is a mutual understanding of each other's actions and intentions. In this paper, we motivate and present a systematic user experience (UX) evaluation framework for action and intention recognition between humans and robots, because there is an identified lack of this kind of evaluation methodology. The evaluation framework is packaged into a methodological approach called ANEMONE (action and intention recognition in human–robot interaction). ANEMONE has its foundation in cultural-historical activity theory (AT) as the theoretical lens, the seven stages of action model, and UX evaluation methodology, which together are useful in motivating and framing the work presented in this paper. The proposed methodological approach provides investigators with guidance on how to measure, assess, and evaluate the mutual recognition of actions and intentions between humans and robots. The paper ends with a discussion, addresses future work, and offers some concluding remarks. Full article
(This article belongs to the Special Issue Human-Robot Interaction and Sensors for Social Robotics)

Article
Using a Rotating 3D LiDAR on a Mobile Robot for Estimation of Person’s Body Angle and Gender
Sensors 2020, 20(14), 3964; https://doi.org/10.3390/s20143964 - 16 Jul 2020
Cited by 1 | Viewed by 992
Abstract
We studied the use of a rotating multi-layer 3D Light Detection And Ranging (LiDAR) sensor (specifically the Velodyne HDL-32E) mounted on a social robot for the estimation of features of people around the robot. While LiDARs are often used for robot self-localization and people tracking, we were interested in the possibility of using them to estimate the people's features (states or attributes), which are important in human–robot interaction. In particular, we tested the estimation of the person's body orientation and their gender. As collecting data in the real world and labeling them is laborious and time consuming, we also looked into other ways for obtaining data for training the estimators: using simulations, or using LiDAR data collected in the lab. We trained convolutional neural network-based estimators and tested their performance on actual LiDAR measurements of people in a public space. The results show that with a rotating 3D LiDAR a usable estimate of the body angle can indeed be achieved (mean absolute error 33.5°), and that using simulated data for training the estimators is effective. For estimating gender, the results are satisfactory (accuracy above 80%) when the person is close enough; however, simulated data do not work well and training needs to be done on actual people measurements. Full article
(This article belongs to the Special Issue Human-Robot Interaction and Sensors for Social Robotics)
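The mean absolute error the abstract reports for body angle has to respect the 360° wrap: an estimate of 359° against a ground truth of 1° is off by 2°, not 358°. A small sketch of that metric, with illustrative data values:

```python
# Hedged sketch of mean absolute angular error with 360-degree wraparound,
# the kind of metric used for body-angle estimates. Data are illustrative.

def angular_error(pred_deg, true_deg):
    """Smallest absolute difference between two angles, in degrees."""
    diff = abs(pred_deg - true_deg) % 360.0
    return min(diff, 360.0 - diff)

def mean_absolute_angular_error(preds, truths):
    errors = [angular_error(p, t) for p, t in zip(preds, truths)]
    return sum(errors) / len(errors)

print(angular_error(359.0, 1.0))                         # -> 2.0
print(mean_absolute_angular_error([10, 350], [20, 10]))  # (10 + 20) / 2 -> 15.0
```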

Article
An Acceptance Test for Assistive Robots
Sensors 2020, 20(14), 3912; https://doi.org/10.3390/s20143912 - 14 Jul 2020
Cited by 3 | Viewed by 981
Abstract
Socially assistive robots have been used in the care of elderly or dependent people, particularly with patients suffering from neurological conditions such as autism and dementia. There are some proposals, but no standardized mechanisms, for assessing a particular robot's suitability for a specific therapy. This paper reports the evaluation of an acceptance test for assistive robots applied to people with dementia. The proposed test focuses on evaluating the suitability of a robot during therapy sessions. The test measures the patient's rejection of the robot based on observational data, and its results can recommend what kind of robot and what functionalities can be used in therapy. The novelty of this approach is the formalization of a specific validation process that considers only the reaction of the person with whom the robot is used, and it may be more effective than existing tests, which may not be adequate for evaluating assistive robots. The test's feasibility was assessed by applying it to a set of dementia patients in a specialized care facility. Full article
(This article belongs to the Special Issue Human-Robot Interaction and Sensors for Social Robotics)

Article
Age-Related Differences in Fixation Pattern on a Companion Robot
Sensors 2020, 20(13), 3807; https://doi.org/10.3390/s20133807 - 07 Jul 2020
Cited by 2 | Viewed by 1177
Abstract
Recent studies have addressed the various benefits of companion robots and expanded the research scope to their design. However, the viewpoints of older adults have not been deeply investigated. Therefore, this study aimed to examine the distinctive viewpoints of older adults by comparing them with those of younger adults. Thirty-one older and thirty-one younger adults participated in an eye-tracking experiment to investigate their impressions of a bear-like robot mockup. They also completed interviews and surveys to help us understand their viewpoints on the robot design. The gaze behaviors and the impressions of the two groups were significantly different. Older adults focused significantly more on the robot’s face and paid little attention to the rest of the body. In contrast, the younger adults gazed at more body parts and viewed the robot in more detail than the older adults. Furthermore, the older adults rated physical attractiveness and social likeability of the robot significantly higher than the younger adults. The specific gaze behavior of the younger adults was linked to considerable negative feedback on the robot design. Based on these empirical findings, we recommend that impressions of older adults be considered when designing companion robots. Full article
(This article belongs to the Special Issue Human-Robot Interaction and Sensors for Social Robotics)

Article
Effects of the Level of Interactivity of a Social Robot and the Response of the Augmented Reality Display in Contextual Interactions of People with Dementia
Sensors 2020, 20(13), 3771; https://doi.org/10.3390/s20133771 - 05 Jul 2020
Cited by 2 | Viewed by 1274
Abstract
The well-being of people with dementia (PWD) living in long-term care facilities is hindered by disengagement and social isolation. Animal-like social robots are increasingly used in dementia care, as they can provide companionship and engage PWD in meaningful activities. While most previous human–robot interaction (HRI) research studied engagement independently of the context, recent findings indicate that the context of HRI sessions has an impact on user engagement. This study aims to explore the effects of contextual interactions between PWD and a social robot embedded in an augmented responsive environment. Three experimental conditions were compared: reactive context-enhanced robot interaction; dynamic context-enhanced interaction with a static robot; and a control condition with only the dynamic context presented. Effectiveness evaluations were performed with 16 participants using four observational rating scales on observed engagement, affective states, and apathy-related behaviors. Findings suggested that the higher level of interactivity of the social robot and the interactive contextualized feedback helped capture and maintain users' attention during engagement; however, they did not significantly improve users' positive affective states. Additionally, the presence of either a static or a proactive robot reduced apathy-related behaviors by facilitating purposeful activities, thus motivating behavioral engagement. Full article
(This article belongs to the Special Issue Human-Robot Interaction and Sensors for Social Robotics)

Article
Social STEAM Learning at an Early Age with Robotic Platforms: A Case Study in Four Schools in Spain
Sensors 2020, 20(13), 3698; https://doi.org/10.3390/s20133698 - 01 Jul 2020
Cited by 4 | Viewed by 1606
Abstract
Robotics is one of the key learnings in a world where learners will interact with multiple robotic technologies and operating systems throughout their lives. However, school teachers, especially in the elementary and primary education stages, often have difficulties incorporating these tools in the classroom. Four elementary teachers in three schools in Catalonia were trained to introduce robotics in the classroom to seventy-five students. The main actions consisted of classroom accompaniment by a university-trained support teacher, the development of curricular materials, and assessment of the students' and teachers' learning. The designed contents and evaluation criteria took into account the potential of educational robotics to improve soft skills and to promote Science, Technology, Engineering, Arts, and Mathematics (STEAM) interdisciplinary learning. Teachers perceived the training to be supportive and useful and ended the school year feeling confident with the robotic platform used (KIBO). The assessment of the students' learning showed an average mark of 7.1–7.7 out of 10 on the final evaluation criteria. Moreover, students' learning was higher in the classes whose teachers had higher initial interest in the training. We present and analyse the actions carried out, with a critical and constructive look at extending the experience to other educational centers. Full article
(This article belongs to the Special Issue Human-Robot Interaction and Sensors for Social Robotics)

Article
Modelling Multimodal Dialogues for Social Robots Using Communicative Acts
Sensors 2020, 20(12), 3440; https://doi.org/10.3390/s20123440 - 18 Jun 2020
Cited by 3 | Viewed by 811
Abstract
Social robots need to communicate in a way that feels natural to humans if they are to bond effectively with users and provide an engaging interaction. In line with this natural, effective communication, robots need to perceive and manage multimodal information, both as input and output, and respond accordingly. Consequently, dialogue design is a key factor in creating an engaging multimodal interaction. These dialogues need to be flexible enough to adapt to unforeseen circumstances that arise during the conversation, but should also be easy to create, so that the development of new applications becomes simpler. In this work, we present our approach to dialogue modelling based on basic atomic interaction units called Communicative Acts. They manage basic interactions considering who has the initiative (the robot or the user) and what his/her intention is. The two possible intentions are to ask for information or to give information, and, because we focus on one-to-one interactions, the initiative can only be taken by the robot or the user. Communicative Acts can be parametrised and combined in a hierarchical manner to fulfil the needs of the robot's applications, and they have been equipped with built-in functionalities that are in charge of low-level communication tasks, such as communication error handling, turn-taking, and user disengagement. This system has been integrated into Mini, a social robot created to assist older adults with cognitive impairment. In a use case, we demonstrate the operation of our system as well as its performance in real human–robot interactions. Full article
(This article belongs to the Special Issue Human-Robot Interaction and Sensors for Social Robotics)
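The structure the abstract describes (atomic acts defined by initiative and intention, composed hierarchically into dialogues) can be sketched briefly. The class and method names below are illustrative, not the authors' API:

```python
# Hedged sketch of dialogue composition from atomic Communicative Acts,
# each defined by who holds the initiative (robot or user) and whether the
# intent is to ask for or give information. Names are illustrative.

class CommunicativeAct:
    def __init__(self, initiative, intention, content):
        assert initiative in ("robot", "user")
        assert intention in ("ask", "give")
        self.initiative, self.intention, self.content = initiative, intention, content

    def run(self):
        # A real act would drive speech, gesture, and error handling here.
        return f"{self.initiative}:{self.intention}:{self.content}"

class Sequence(CommunicativeAct):
    """Hierarchical composition: run child acts in order."""
    def __init__(self, *children):
        self.children = children

    def run(self):
        return [c.run() for c in self.children]

dialogue = Sequence(
    CommunicativeAct("robot", "give", "greeting"),
    CommunicativeAct("robot", "ask", "user_name"),
    CommunicativeAct("user", "give", "name"),
)
print(dialogue.run())
# -> ['robot:give:greeting', 'robot:ask:user_name', 'user:give:name']
```

The two-axis definition (initiative × intention) yields four atomic act types; built-in turn-taking and error handling would live inside `run`, so dialogue authors only compose acts.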

Review

Jump to: Research

Review
Human–Robot Interaction and Sexbots: A Systematic Literature Review
Sensors 2021, 21(1), 216; https://doi.org/10.3390/s21010216 - 31 Dec 2020
Cited by 5 | Viewed by 1580
Abstract
At present, sexual robots have become a new paradigm of social robots. In this paper, we present a systematic literature review about sexual robots (sexbots). To do this, we used the Scopus and WoS databases to answer different research questions regarding design, interaction, and gender and ethical approaches from 1980 until 2020. In our review, we found a male bias in this discipline, and recent articles show that user opinion has become more relevant. Some insights and recommendations on gender and ethics in designing sexual robots are also made. Full article
(This article belongs to the Special Issue Human-Robot Interaction and Sensors for Social Robotics)

Review
Trust in AI Agent: A Systematic Review of Facial Anthropomorphic Trustworthiness for Social Robot Design
Sensors 2020, 20(18), 5087; https://doi.org/10.3390/s20185087 - 07 Sep 2020
Cited by 5 | Viewed by 1472
Abstract
As emerging artificial intelligence systems, social robots can communicate and interact socially with human beings. Although this area is attracting more and more attention, limited research has tried to systematically summarize the potential features that could improve facial anthropomorphic trustworthiness for social robots. Based on the literature on human facial perception and on product and robot face evaluation, this paper systematically reviews, evaluates, and summarizes static facial features, dynamic features, their combinations, and related emotional expressions, shedding light on further exploration of facial anthropomorphic trustworthiness for social robot design. Full article
(This article belongs to the Special Issue Human-Robot Interaction and Sensors for Social Robotics)
