Special Issue "Social Robots and Sensors"

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Physical Sensors".

Deadline for manuscript submissions: 1 April 2020.

Special Issue Editors

Dr. Juan Pedro Bandera
Guest Editor
Department of Electronic Technology, University of Málaga, Campus de Teatinos, Málaga 29071, Spain
Interests: social robotics; artificial vision; autonomous robots; human-robot interaction; social navigation; pose analysis; gesture recognition
Dr. Rebeca Marfil
Guest Editor
Department of Electronic Technology, University of Málaga, Campus de Teatinos, Málaga 29071, Spain
Interests: social robotics; cognitive architectures; human-robot interaction; visual attention; artificial vision; image processing
Dr. Fernando Fernandez Rebollo
Guest Editor
Computer Science Department, Universidad Carlos III de Madrid, Av. Universidad 30, Leganés 28911, Spain
Interests: automated planning; machine learning; social autonomous robotics

Special Issue Information

Dear Colleagues,

The inclusion of robots in daily life environments, where they interact and cooperate with people in solving everyday tasks, has become a reality in recent years. Stepping from controlled environments and predefined tasks into these open scenarios poses many challenges, but the huge potential of these new application domains has driven an important effort towards overcoming them. Hence, robots are becoming significantly more aware, cooperative, autonomous, cognitive, interactive, adaptive, and/or proactive. They can adapt to new manufacturing processes using learning from demonstration; cooperate with human workers through stigmergy or directly shared attention and actions; engage people in social interactions to propose activities or share information; drive exercises in rehabilitation therapies; become natural and intuitive interfaces between a smart environment and its users; or act as social facilitators in public scenarios such as retirement homes or museums. Moreover, these new robots are flexible and adaptable enough to achieve not one but many of these functionalities on a single platform with minimal (if any) hardware changes.

Many of the new market domains identified for robots (e.g., in the EU report ‘Robotics 2020 Multi-Annual Roadmap for Robotics in Europe’) require robots to exhibit social abilities. A social robot is an agent embedded in a heterogeneous environment in which it can perceive, interact with, and learn from people, other robots, and the environment itself. Such an agent needs to be endowed, among other features, with a carefully co-designed appearance and functionality that ease acceptability and utility, a versatile and powerful cognitive architecture, a set of actuators that guarantee safe operation, and a set of sensors that provide the robot with all the data required by its perceptive and cognitive systems.

New sensors are being developed to match the requirements of social robots in terms of dimensions, energy consumption, functionality, and adaptability. Some examples are embedded small-size vision systems including one or more cameras, or flexible, distributed, low-cost haptic sensors. On the other hand, many sensors for social robots are adopted from other research or market fields. RGB-D devices are probably the best example of this process, but the same applies to haptic sensors, voice detection and recognition systems, and LIDAR devices for navigation. The interest lies in the adaptation of these devices to the context of social robotics, as well as in the use of sensor fusion techniques to merge data streams coming from different sensors into a common representation. The fusion of multiple sensory inputs is not only desirable for social robots in terms of increasing their perceptual capabilities and robustness; it is also a key requirement for robots that must maximize acceptability and utility for sometimes untrained users. For example, a socially assistive robot working with elderly people will benefit from offering multimodal interaction channels (e.g., audio, gestures, tactile screens), easing accessibility for the widest range of users despite their possible impairments or interaction preferences.
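As a minimal illustration of merging data streams into a common representation, the sketch below applies inverse-variance weighting, one common fusion scheme, to scalar position estimates from two sensors. The sensor names, readings, and variances are hypothetical, not taken from any specific platform discussed here.

```python
# Inverse-variance fusion of user-position estimates from several sensors.
# More certain sensors (smaller variance) dominate the fused estimate.

def fuse_estimates(estimates):
    """Fuse (value, variance) pairs into a single (value, variance).

    Each sensor reports a scalar (e.g., distance to the user in metres)
    and its variance; readings are weighted by inverse variance.
    """
    weights = [1.0 / var for _, var in estimates]
    total = sum(weights)
    value = sum(w * v for w, (v, _) in zip(weights, estimates)) / total
    return value, 1.0 / total  # fused value, fused variance

# Example: an RGB-D camera reports 1.9 m (var 0.04) and a LIDAR reports
# 2.1 m (var 0.01); the fused estimate sits closer to the LIDAR reading.
fused, fused_var = fuse_estimates([(1.9, 0.04), (2.1, 0.01)])
```

The same weighting generalizes to vector states (e.g., a Kalman filter update), which is the form such fusion usually takes on a real robot.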

The Special Issue “Social Robots and Sensors” aims to offer a detailed view of the state of the art of research and technology on sensors for social robots. Special attention will also be given to the sensor fusion approaches and co-design procedures that are devoted to making these agents useful, friendly, and accessible devices in the daily life environments of the near future. Therefore, the Special Issue is open to studies on the integration of sensor devices and perceived data with the software architecture of the social robot, data management, and fusion. Reasoning and decision-making capabilities based on sensory inputs, and their relationship with actuators through the robotic cognitive architecture, are also key topics of this Special Issue.

Dr. Juan Pedro Bandera
Dr. Rebeca Marfil
Dr. Fernando Fernandez Rebollo
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All papers will be peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1800 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • social robots 
  • artificial vision 
  • embedded perceptual systems 
  • conversational systems 
  • human activity recognition 
  • human–robot joint attention 
  • biologically inspired sensors
  • sensor fusion 
  • haptic sensors
  • social navigation 
  • social robots and IoT 
  • integration of sensing, reasoning and action 
  • sensing, cognition and decision making

Published Papers (3 papers)


Research

Open Access Article
Human 3D Pose Estimation with a Tilting Camera for Social Mobile Robot Interaction
Sensors 2019, 19(22), 4943; https://doi.org/10.3390/s19224943 - 13 Nov 2019
Abstract
Human–Robot interaction represents a cornerstone of mobile robotics, especially within the field of social robots. In this context, user localization becomes of crucial importance for the interaction. This work investigates the capabilities of wide field-of-view RGB cameras to estimate the 3D position and orientation (i.e., the pose) of a user in the environment. For that, we employ a social robot endowed with a fish-eye camera hosted in a tilting head and develop two complementary approaches: (1) a fast method relying on a single image that estimates the user pose from the detection of their feet and does not require either the robot or the user to remain static during the reconstruction; and (2) a method that takes some views of the scene while the camera is being tilted and does not need the feet to be visible. Due to the particular setup of the tilting camera, special equations for 3D reconstruction have been developed. In both approaches, a CNN-based skeleton detector (OpenPose) is employed to identify humans within the image. A set of experiments with real data validates our two proposed methods, yielding results similar to commercial RGB-D cameras while surpassing them in terms of scene coverage (wider FoV and longer range) and robustness to lighting conditions.
(This article belongs to the Special Issue Social Robots and Sensors)
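The feet-based range idea in the first approach can be illustrated with a simplified pinhole model: a camera at a known height, tilted down, sees the feet some pixels below the image centre, and the ray-ground intersection gives the distance. The article derives special equations for its fish-eye, tilting-head setup; the sketch below only shows the underlying geometry, and all numbers are hypothetical.

```python
import math

def ground_distance(h, tilt, f, dy):
    """Horizontal distance from the camera's foot-point to detected feet.

    h    -- camera height above the ground plane (m)
    tilt -- downward tilt of the optical axis below horizontal (rad)
    f    -- focal length in pixels (pinhole model)
    dy   -- pixels the feet appear below the image centre
    """
    angle = tilt + math.atan2(dy, f)  # ray angle below the horizontal
    if angle <= 0:
        raise ValueError("ray does not intersect the ground plane")
    return h / math.tan(angle)

# Camera 1.2 m high, tilted 20 degrees down, f = 600 px, feet detected
# 100 px below the image centre:
d = ground_distance(1.2, math.radians(20.0), 600.0, 100.0)
```

Because the geometry only needs the feet pixel and the (known) head tilt, neither the robot nor the user has to stay still, which matches the single-image character of the first approach.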

Open Access Article
A Novel Method for Estimating Distances from a Robot to Humans Using Egocentric RGB Camera
Sensors 2019, 19(14), 3142; https://doi.org/10.3390/s19143142 - 17 Jul 2019
Abstract
Estimating distances between people and robots plays a crucial role in understanding social Human–Robot Interaction (HRI) from an egocentric view. It is a key step if robots are to engage in social interactions and to collaborate with people as part of human–robot teams. For distance estimation between a person and a robot, different sensors can be employed, and the number of challenges to be addressed by the distance estimation methods rises with the simplicity of the sensor's technology. When estimating distances from individual images taken by a single camera in an egocentric position, it is often required that individuals in the scene face the camera, do not occlude each other, and are sufficiently visible for specific facial or body features to be identified. In this paper, we propose a novel method for estimating distances between a robot and people using single images from a single egocentric camera. The method is based on previously proven 2D pose estimation, which tolerates partial occlusions, cluttered backgrounds, and relatively low resolution. The method estimates distance with respect to the camera based on the Euclidean distance between the ear and torso of people in the image plane. The ear and torso characteristic points have been selected for their relatively high visibility regardless of a person's orientation and their certain degree of uniformity with regard to age and gender. Experimental validation demonstrates the effectiveness of the proposed method.
(This article belongs to the Special Issue Social Robots and Sensors)
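The core cue here is that, under a pinhole model, the pixel separation between two body keypoints of roughly fixed real-world spacing scales inversely with range, so distance ≈ k / pixels after a one-point calibration. The article fits its own model on 2D pose keypoints; the calibration values below are invented for illustration, not taken from the paper.

```python
def calibrate(known_distance_m, observed_pixels):
    """Derive the constant k from one reference observation: k = d * pixels."""
    return known_distance_m * observed_pixels

def estimate_distance(k, ear_torso_pixels):
    """Estimate camera-to-person distance from the ear-torso pixel separation."""
    if ear_torso_pixels <= 0:
        raise ValueError("need a positive pixel separation")
    return k / ear_torso_pixels

# Hypothetical calibration: a person standing at 2 m showed a 120 px
# ear-torso separation; half the separation then means twice the range.
k = calibrate(2.0, 120.0)
d = estimate_distance(k, 60.0)
```

In practice the keypoint pair is chosen (as the abstract notes) for its visibility across orientations, since the inverse-proportionality cue fails if either keypoint is missing.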

Open Access Article
Alternating Electric Field-Based Static Gesture-Recognition Technology
Sensors 2019, 19(10), 2375; https://doi.org/10.3390/s19102375 - 23 May 2019
Abstract
Gesture recognition based on electric-field detection technology has recently received extensive attention, but it is mostly used to recognize the position and movement of the hand, and rarely to identify specific gestures. A non-contact gesture-recognition technology based on an alternating electric-field detection scheme is proposed, which can recognize static gestures in different states as well as dynamic gestures. The influence of the hand on the detection system is analyzed from the principles of electric-field detection. A simulation model of the system is established to investigate the charge density on the hand surface and the potential change of the sensing electrodes. According to the simulation results, the system structure is improved, and a signal-processing circuit is designed to collect the signals of the sensing electrodes. By collecting a large amount of data from different operators, a tree-model recognition algorithm is designed and a gesture-recognition experiment is implemented. The results show that the gesture-recognition accuracy is over 90%. With the advantages of high response speed, low cost, small volume, and robustness to the surrounding environment, the system could be mounted on a robot that communicates with operators.
(This article belongs to the Special Issue Social Robots and Sensors)
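A tree-model classifier over electrode signals can be pictured as a cascade of threshold tests: each gesture perturbs the alternating field differently, so per-electrode amplitudes separate the classes. The toy tree below is hand-written with invented electrode names, thresholds, and gesture labels; the article learns its tree from data recorded over many operators.

```python
def classify_gesture(left_v, right_v):
    """Classify a static hand pose from two sensing-electrode amplitudes (V).

    A tiny hand-written decision tree: real systems learn the splits
    from recorded operator data and use more electrodes and features.
    """
    if left_v < 0.2 and right_v < 0.2:
        return "no hand"          # field barely perturbed
    if left_v > right_v * 1.5:
        return "hand left"        # left electrode dominates
    if right_v > left_v * 1.5:
        return "hand right"       # right electrode dominates
    return "hand centre"          # comparable perturbation on both

g = classify_gesture(0.8, 0.3)
```

Threshold trees like this are cheap to evaluate, which fits the abstract's emphasis on high response speed and low cost.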
