
Special Issue "Social Robots and Sensors"

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Sensors and Robotics".

Deadline for manuscript submissions: 30 September 2021.

Special Issue Editors

Dr. Juan Pedro Bandera
Guest Editor
Department of Electronic Technology, University of Málaga, Campus de Teatinos, 29071 Málaga, Spain
Interests: social robotics; artificial vision; autonomous robots; human-robot interaction; social navigation; pose analysis; gesture recognition
Dr. Rebeca Marfil
Guest Editor
Department of Electronic Technology, University of Málaga, Campus de Teatinos, 29071 Málaga, Spain
Interests: social robotics; cognitive architectures; human-robot interaction; visual attention; artificial vision; image processing
Dr. Fernando Fernandez Rebollo
Guest Editor
Computer Science Department, Universidad Carlos III de Madrid, Av. Universidad 30, Leganés 28911, Spain
Interests: automated planning; machine learning; social autonomous robotics

Special Issue Information

Dear Colleagues,

The inclusion of robots in daily life environments, where they interact and cooperate with people to solve everyday tasks, has become a reality in recent years. Stepping from controlled environments and predefined tasks into these open scenarios poses many challenges, but the huge potential of these new application domains has driven a significant effort to overcome them. Hence, robots are becoming more aware, cooperative, autonomous, cognitive, interactive, adaptive, and/or proactive. They can adapt to new manufacturing processes by learning from demonstration; cooperate with human workers through stigmergy or by directly sharing attention and actions; engage people in social interactions to propose activities or share information; drive exercises in rehabilitation therapies; become natural and intuitive interfaces between a smart environment and its users; or act as social facilitators in public settings such as retirement homes or museums. Moreover, these new robots are flexible and adaptable enough to deliver not one but many of these functionalities on a single platform with minimal (if any) hardware changes.

Many of the new market domains identified for robots (e.g., in the EU report ‘Robotics 2020 Multi-Annual Roadmap for Robotics in Europe’) require robots to exhibit social abilities. A social robot is an agent embedded in a heterogeneous environment, in which it can perceive, interact with, and learn from people, other robots, and the environment itself. Such an agent needs to be endowed with, among other features, a carefully co-designed appearance and functionality that ease acceptability and utility, a versatile and powerful cognitive architecture, a set of actuators that guarantee safe operation, and a set of sensors that provide the robot with all the data required by its perceptive and cognitive systems.

New sensors are being developed to match the requirements of social robots in terms of dimensions, energy consumption, functionality, and adaptability. Examples include embedded small-size vision systems with one or more cameras, and flexible, distributed, low-cost haptic sensors. On the other hand, many sensors for social robots are adopted from other research or market fields. RGB-D devices are probably the best example of this process, but the same applies to haptic sensors, voice detection and recognition systems, and LIDAR devices for navigation. The interest lies in adapting these devices to the context of social robotics, as well as in using sensor fusion techniques to merge data streams from different sensors into a common representation. The fusion of multiple sensory inputs is not only desirable for social robots in terms of increasing their perceptual capabilities and robustness; it is also a key requirement for robots that must maximize acceptability and utility for sometimes untrained users. For example, a socially assistive robot working with elderly people will benefit from offering multimodal interaction channels (e.g., audio, gestures, tactile screens), which ease accessibility for a wider range of users despite their possible impairments or interaction preferences.
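The multimodal fusion described above can be illustrated with a minimal late-fusion sketch: per-channel confidence scores are combined into one estimate, and the robot degrades gracefully when a channel is unavailable. The channel names and weights below are illustrative assumptions, not a prescription from any particular system.

```python
# Minimal sketch of late fusion over multimodal interaction channels.
# Channel names and weights are hypothetical.

def fuse_modalities(scores, weights):
    """Weighted average of per-channel confidence scores in [0, 1].

    Channels missing from `scores` (e.g. a sensor that timed out)
    are skipped, so fusion still works on the remaining channels."""
    num = 0.0
    den = 0.0
    for channel, weight in weights.items():
        if channel in scores:
            num += weight * scores[channel]
            den += weight
    return num / den if den > 0 else 0.0

weights = {"audio": 0.5, "gesture": 0.3, "touchscreen": 0.2}
# Readings from all three channels:
print(fuse_modalities({"audio": 0.9, "gesture": 0.6, "touchscreen": 1.0}, weights))
# Gesture sensor unavailable; the estimate renormalizes over the rest:
print(fuse_modalities({"audio": 0.9, "touchscreen": 1.0}, weights))
```

Renormalizing the weights over available channels is what gives the graceful degradation mentioned in the text: losing one modality changes the estimate but never invalidates it.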

The Special Issue “Social Robots and Sensors” aims to offer a detailed view of the state of the art of research and technology on sensors for social robots. Special attention will also be given to the sensor fusion approaches and co-design procedures devoted to making these agents useful, friendly, and accessible devices in the daily life environments of the near future. The Special Issue is therefore open to studies on the integration of sensor devices and perceived data with the software architecture of the social robot, including data management and fusion. Reasoning and decision-making capabilities based on sensory inputs, and their relationship with actuators through the robotic cognitive architecture, are also key topics of this Special Issue.

Dr. Juan Pedro Bandera
Dr. Rebeca Marfil
Dr. Fernando Fernandez Rebollo
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All papers will be peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2200 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • social robots 
  • artificial vision 
  • embedded perceptual systems 
  • conversational systems 
  • human activity recognition 
  • human–robot joint attention 
  • biologically inspired sensors
  • sensor fusion 
  • haptic sensors
  • social navigation 
  • social robots and IoT 
  • integration of sensing, reasoning and action 
  • sensing, cognition and decision making

Published Papers (7 papers)


Research

Article
Positive Emotion Amplification by Representing Excitement Scene with TV Chat Agents
Sensors 2020, 20(24), 7330; https://doi.org/10.3390/s20247330 - 21 Dec 2020
Cited by 1 | Viewed by 704
Abstract
This paper proposes emotion amplification for TV chat agents, allowing users to get more excited during TV sports programs, together with a model that estimates the excitement level of TV programs based on the number of social comment posts. The proposed model extracts the exciting intervals of the program from social comments on its scenes. By synchronizing recorded video streams with these intervals, the agents can talk with the user while dynamically changing the frequency and volume of upbeat utterances, increasing the user’s excitement. To test these agents, participants watched TV content under three conditions: without an agent, with four agents uttering in a flat voice, and with four agents with emotion amplification. Results from 24 young adult Japanese participants showed that the agents boosted both subjective and physiological arousal responses. This empirical evidence demonstrates that these agents can amplify the positive emotions of TV watchers, enhancing their motivation to interact with the agent in the future.
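The comment-based excitement estimation described in the abstract can be sketched minimally: mark a time bin as exciting when its comment count clearly exceeds the program's average rate. The bin size, threshold factor, and function name are hypothetical, not taken from the paper.

```python
# Hypothetical sketch: detect "exciting" intervals of a TV program from
# the per-bin count of social comment posts.

def exciting_intervals(comment_counts, factor=2.0):
    """Return indices of time bins whose comment count exceeds
    `factor` times the mean count over the whole program."""
    mean = sum(comment_counts) / len(comment_counts)
    return [i for i, c in enumerate(comment_counts) if c > factor * mean]

# Comment counts per (assumed) 30-second bin of a match:
counts = [3, 2, 4, 20, 3, 2, 18, 2]
print(exciting_intervals(counts))  # [3, 6]: the two bursts stand out
```

The detected bins would then be synchronized with the recorded video so the agents raise the frequency and volume of their upbeat utterances during those intervals.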
(This article belongs to the Special Issue Social Robots and Sensors)

Article
Measuring Smoothness as a Factor for Efficient and Socially Accepted Robot Motion
Sensors 2020, 20(23), 6822; https://doi.org/10.3390/s20236822 - 29 Nov 2020
Cited by 1 | Viewed by 937
Abstract
Social robots, designed to interact with and assist people in daily life social scenarios, require adequate path planning algorithms to navigate autonomously through these environments. These algorithms must not only find feasible paths but also consider other requirements, such as optimizing energy consumption or making the robot behave in a socially accepted way. Path planning can be tuned according to a set of factors, the most common being path length, safety, and smoothness. This last factor may have a strong relation with the energy consumption and social acceptability of the produced motion, but this possible relation has never been deeply studied. The current paper performs a double analysis through two experiments. One analyzes energy consumption in a real robot for trajectories that use different smoothness factors. The other analyzes social acceptance for different smoothness factors by presenting simulated situations to different people and collecting their impressions. The results of these experiments show that, in general terms, smoother paths decrease energy consumption and increase acceptability, as long as other key factors, such as distance to people, are respected.
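One common way to quantify path smoothness (not necessarily the exact metric used in the paper) is the sum of squared heading changes along the path; a minimal sketch:

```python
# Sketch of a simple smoothness measure for a 2D waypoint path:
# sum of squared turn angles between consecutive segments.
# Lower values mean a smoother path.
import math

def heading_change_smoothness(path):
    """path: list of (x, y) waypoints. Returns the sum of squared turn angles."""
    total = 0.0
    for i in range(1, len(path) - 1):
        a1 = math.atan2(path[i][1] - path[i-1][1], path[i][0] - path[i-1][0])
        a2 = math.atan2(path[i+1][1] - path[i][1], path[i+1][0] - path[i][0])
        # Wrap the turn angle into (-pi, pi] before squaring.
        turn = (a2 - a1 + math.pi) % (2 * math.pi) - math.pi
        total += turn ** 2
    return total

straight = [(0, 0), (1, 0), (2, 0), (3, 0)]   # no turns at all
zigzag   = [(0, 0), (1, 1), (2, 0), (3, 1)]   # sharp alternating 90-degree turns
print(heading_change_smoothness(straight))     # 0.0
print(heading_change_smoothness(zigzag) > 0)   # True
```

A planner could penalize this quantity in its cost function to trade path length against smoothness, which is the kind of tuning factor the abstract refers to.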

Article
A Framework for User Adaptation and Profiling for Social Robotics in Rehabilitation
Sensors 2020, 20(17), 4792; https://doi.org/10.3390/s20174792 - 25 Aug 2020
Cited by 5 | Viewed by 979
Abstract
Physical rehabilitation therapies for children present a challenge, and their success—the improvement of the patient’s condition—depends on many factors, such as the patient’s attitude and motivation, the correct execution of the exercises prescribed by the specialist, and the patient’s progressive recovery during the therapy. To increase the benefits of these therapies, social humanoid robots with a friendly aspect represent a promising tool, not only to boost interaction with the pediatric patient but also to assist physicians in their work. To achieve both goals, it is essential to monitor the patient’s condition in detail, generating user profile models that enhance the feedback to both the system and the specialist. This paper describes how the NAOTherapist project—a robotic architecture for rehabilitation with social robots—has been upgraded to include a monitoring system able to generate user profile models through interaction with the patient, performing user-adapted therapies. Furthermore, the system has been improved by integrating a machine learning algorithm that recognizes the pose adopted by the patient and by adding a clinical report generation system based on the QUEST metric.

Article
Designing a Cyber-Physical System for Ambient Assisted Living: A Use-Case Analysis for Social Robot Navigation in Caregiving Centers
Sensors 2020, 20(14), 4005; https://doi.org/10.3390/s20144005 - 18 Jul 2020
Cited by 3 | Viewed by 947
Abstract
The advances in the Internet of Things, robotics, and Artificial Intelligence, to give just a few examples, allow us to imagine promising results in the development of smart buildings in the near future. In the particular case of elderly care, there are new solutions that integrate systems monitoring variables associated with the health of each user or facilitating physical or cognitive rehabilitation. In all these solutions, these new environments, usually called Ambient Assisted Living (AAL), configure a Cyber-Physical System (CPS) that connects information from the physical world to the cyber-world with the primary objective of adding more intelligence to these environments. This article presents a CPS-AAL for caregiving centers, with the main novelty that it includes a Socially Assistive Robot (SAR). The CPS-AAL presented in this work uses a digital twin world with the information acquired by all devices. The basis of this digital twin world is the CORTEX cognitive architecture, a set of software agents interacting through a Deep State Representation (DSR) that stores the information shared between them. The proposal is evaluated in a simulated environment with two use cases requiring interaction between the sensors and the SAR in a simulated caregiving center.

Article
Human 3D Pose Estimation with a Tilting Camera for Social Mobile Robot Interaction
Sensors 2019, 19(22), 4943; https://doi.org/10.3390/s19224943 - 13 Nov 2019
Cited by 2 | Viewed by 1297
Abstract
Human–robot interaction represents a cornerstone of mobile robotics, especially within the field of social robots. In this context, user localization becomes of crucial importance for the interaction. This work investigates the capabilities of wide field-of-view RGB cameras to estimate the 3D position and orientation (i.e., the pose) of a user in the environment. For that, we employ a social robot endowed with a fish-eye camera hosted in a tilting head and develop two complementary approaches: (1) a fast method relying on a single image that estimates the user’s pose from the detection of their feet and does not require either the robot or the user to remain static during the reconstruction; and (2) a method that takes several views of the scene while the camera is being tilted and does not need the feet to be visible. Due to the particular setup of the tilting camera, special equations for 3D reconstruction have been developed. In both approaches, a CNN-based skeleton detector (OpenPose) is employed to identify humans within the image. A set of experiments with real data validates both proposed methods, yielding results similar to those of commercial RGB-D cameras while surpassing them in terms of scene coverage (wider FoV and longer range) and robustness to lighting conditions.
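The paper's special equations for the tilting fish-eye setup are not reproduced here, but the core geometric idea behind approach (1) can be sketched under a much simpler pinhole, flat-floor model: the ray through the feet's image point intersects the ground plane at a distance fixed by the camera height and the ray's angle below horizontal. All parameter names and values are illustrative assumptions.

```python
# Simplified pinhole, flat-floor sketch of feet-based distance estimation.
# The real method uses fish-eye and tilting-camera equations not shown here.
import math

def distance_to_feet(cam_height, cam_tilt_rad, v_px, v0_px, focal_px):
    """Horizontal distance to a point on the floor seen at image row v_px.

    cam_height:   camera height above the floor (meters)
    cam_tilt_rad: downward tilt of the optical axis (radians)
    v_px, v0_px:  image row of the feet and of the principal point (pixels)
    focal_px:     focal length in pixels (assumed undistorted image)
    """
    # Angle of the ray below horizontal = tilt + offset from the optical axis.
    ray_angle = cam_tilt_rad + math.atan2(v_px - v0_px, focal_px)
    return cam_height / math.tan(ray_angle)

# Camera 1.2 m above the floor, tilted 30 degrees down, feet imaged
# exactly on the optical axis (v = v0):
print(round(distance_to_feet(1.2, math.radians(30), 240, 240, 500), 3))  # 2.078
```

Combining this range with the image column of the feet would then give the user's 2D position relative to the robot, which is the input the interaction modules need.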

Article
A Novel Method for Estimating Distances from a Robot to Humans Using Egocentric RGB Camera
Sensors 2019, 19(14), 3142; https://doi.org/10.3390/s19143142 - 17 Jul 2019
Cited by 4 | Viewed by 1299
Abstract
Estimating distances between people and robots plays a crucial role in understanding social Human–Robot Interaction (HRI) from an egocentric view. It is a key step if robots are to engage in social interactions and collaborate with people as part of human–robot teams. For distance estimation between a person and a robot, different sensors can be employed, and the number of challenges to be addressed by the distance estimation methods rises with the simplicity of the sensor’s technology. When estimating distances from individual images of a single camera in an egocentric position, it is often required that individuals in the scene face the camera, do not occlude each other, and are visible enough that specific facial or body features can be identified. In this paper, we propose a novel method for estimating distances between a robot and people using single images from a single egocentric camera. The method is based on previously proven 2D pose estimation, which tolerates partial occlusions, cluttered backgrounds, and relatively low resolution. The method estimates the distance with respect to the camera from the Euclidean distance between the ear and torso of each person in the image plane. The ear and torso characteristic points have been selected for their relatively high visibility regardless of a person’s orientation and for their relative uniformity across age and gender. Experimental validation demonstrates the effectiveness of the proposed method.
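The ear-torso relation described above can be sketched under a pinhole model: the pixel distance between two body landmarks shrinks roughly inversely with the person's distance from the camera. The calibration constant and function name below are illustrative assumptions, not values from the paper.

```python
# Hypothetical sketch: distance from camera to person via the inverse
# relation between keypoint pixel distance and real distance.
import math

def estimate_distance(ear_xy, torso_xy, k=350.0):
    """Distance (m) from the camera to a person, from 2D pose keypoints (pixels).

    k is a per-camera calibration constant, roughly focal length in pixels
    times the average real-world ear-to-torso length, fitted on labeled data."""
    pixel_dist = math.dist(ear_xy, torso_xy)
    if pixel_dist == 0:
        raise ValueError("coincident keypoints; cannot estimate distance")
    return k / pixel_dist

# Under this illustrative calibration, a person whose ear-torso span
# covers 175 px is about 2 m away:
print(round(estimate_distance((320, 100), (320, 275)), 1))  # 2.0
```

In practice the keypoints would come from a 2D pose estimator such as OpenPose, and k would be fitted per camera against ground-truth distances.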

Article
Alternating Electric Field-Based Static Gesture-Recognition Technology
Sensors 2019, 19(10), 2375; https://doi.org/10.3390/s19102375 - 23 May 2019
Viewed by 1487
Abstract
Gesture recognition based on electric-field detection technology has recently received extensive attention; it is mostly used to recognize the position and movement of the hand, and rarely to identify specific gestures. A non-contact gesture-recognition technology based on an alternating electric-field detection scheme is proposed, which can recognize static gestures in different states as well as dynamic gestures. The influence of the hand on the detection system is analyzed from the principles of electric-field detection. A simulation model of the system is established to investigate the charge density on the hand surface and the potential change of the sensing electrodes. According to the simulation results, the system structure is improved, and a signal-processing circuit is designed to collect the signals from the sensing electrodes. By collecting a large amount of data from different operators, a tree-model recognition algorithm is designed and a gesture-recognition experiment is implemented. The results show that the gesture-recognition accuracy is over 90%. With the advantages of high response speed, low cost, small volume, and immunity to the surrounding environment, the system could be mounted on a robot that communicates with operators.
