Special Issue "Cognitive Robotics"

A special issue of Applied Sciences (ISSN 2076-3417). This special issue belongs to the section "Electrical, Electronics and Communications Engineering".

Deadline for manuscript submissions: closed (15 December 2020).

Special Issue Editors

Prof. Antonio Bandera
Guest Editor
Department of Electronic Technology, University of Málaga, Málaga, Spain
Interests: assistive robotics; embedded vision
Dr. Luis Manso Fernández-Argüéllez
Guest Editor
Aston University, UK
Interests: social robotics; human–robot interaction; deep learning
Dr. Zoe Falomir
Guest Editor
Bremen Spatial Cognition Center, University of Bremen, Germany
Interests: spatial reasoning; cognitive systems

Special Issue Information

Dear Colleagues,

There is a growing desire to develop robots that are capable of helping humans with daily tasks. Cognitive robots need to explore and understand their environment, choose a safe and human-aware course of action, and learn not only from experience but also through interaction. In particular, cognitive robotics aims to endow robots with the capacity to plan solutions for complex goals and to enact those plans while remaining reactive to unexpected changes in their environments. Among the factors limiting their application in real-life scenarios are clear ethical, technological, and economic challenges.

Cognitive robotics includes studies on advanced mechatronics, artificial intelligence, and machine learning, as well as cognitive psychology and brain science in the frame of cognitive science. The aim of this Special Issue is to gather scientific papers addressing any of the challenges of cognitive robotics. The topics of this Special Issue include, but are not limited to, the following:

  • Active perception;
  • Architectures and frameworks for cognition;
  • Cognitive human–robot interaction;
  • Cognitive modeling and development;
  • Knowledge discovery and representation in robots;
  • Learning for action and interaction;
  • Cognitive architectures for interactive robots;
  • Neurorobotics;
  • Social and assistive robots.

Prof. Antonio Bandera
Dr. Luis J. Manso
Dr. Zoe Falomir
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All papers will be peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Applied Sciences is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2000 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Published Papers (9 papers)


Research


Article
Intrinsically Motivated Open-Ended Multi-Task Learning Using Transfer Learning to Discover Task Hierarchy
Appl. Sci. 2021, 11(3), 975; https://doi.org/10.3390/app11030975 - 21 Jan 2021
Cited by 1 | Viewed by 467
Abstract
In open-ended continuous environments, robots need to learn multiple parameterised control tasks in hierarchical reinforcement learning. We hypothesise that the most complex tasks can be learned more easily by transferring knowledge from simpler tasks, and faster by adapting the complexity of the actions to the task. We propose a task-oriented representation of complex actions, called procedures, to learn online task relationships and unbounded sequences of action primitives that control the different observables of the environment. Combining goal-babbling with imitation learning, and active learning with transfer of knowledge based on intrinsic motivation, our algorithm self-organises its learning process. It chooses at any given time a task to focus on, and what, how, when, and from whom to transfer knowledge. We show with a simulation and a real industrial robot arm, in cross-task and cross-learner transfer settings, that task composition is key to tackling highly complex tasks. Task decomposition is also efficiently transferred across different embodied learners and by active imitation, where the robot requests just a small number of demonstrations and the adequate type of information. The robot learns and exploits task dependencies so as to learn tasks of every complexity.
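The intrinsic motivation signal that drives task selection in work of this kind is commonly formalised as learning progress: the recent drop in prediction or control error for each task. The sketch below illustrates that general idea only; the function names, window size, and synthetic error histories are assumptions for illustration, not the paper's actual algorithm.

```python
# Sketch of intrinsic-motivation task selection via learning progress:
# the learner focuses on the task whose error is currently dropping fastest.
# Synthetic error histories; illustrative only, not the paper's algorithm.

def learning_progress(errors, window=3):
    """Drop in mean error between the two most recent windows (clamped at 0)."""
    if len(errors) < 2 * window:
        return 0.0
    older = sum(errors[-2 * window:-window]) / window
    recent = sum(errors[-window:]) / window
    return max(older - recent, 0.0)

def choose_task(error_histories):
    """Pick the task with the highest learning progress."""
    return max(error_histories, key=lambda t: learning_progress(error_histories[t]))

histories = {
    "reach": [0.9, 0.8, 0.7, 0.5, 0.3, 0.2],     # improving quickly
    "stack": [0.9, 0.9, 0.9, 0.88, 0.87, 0.86],  # barely improving yet
}
print(choose_task(histories))  # -> reach
```

Clamping progress at zero keeps the learner from being attracted to tasks whose error is rising, e.g. due to noise.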
(This article belongs to the Special Issue Cognitive Robotics)

Article
Semantic Mapping with Low-Density Point-Clouds for Service Robots in Indoor Environments
Appl. Sci. 2020, 10(20), 7154; https://doi.org/10.3390/app10207154 - 14 Oct 2020
Viewed by 518
Abstract
The advancements in the robotic field have made it possible for service robots to increasingly become part of everyday indoor scenarios. Their ability to operate and reach defined goals depends on the perception and understanding of their surrounding environment. Detecting and positioning objects as well as people in an accurate semantic map are, therefore, essential tasks that a robot needs to carry out. In this work, we walk an alternative path to build semantic maps of indoor scenarios. Instead of relying on high-density sensory input, like that provided by an RGB-D camera, and resource-intensive processing algorithms, like those based on deep learning, we investigate the use of low-density point-clouds provided by 3D LiDARs together with a set of practical segmentation methods for the detection of objects. By focusing on the physical structure of the objects of interest, it is possible to remove complex training phases and exploit sensors with lower resolution but a wider field of view (FoV). Our evaluation shows that our approach can achieve comparable (if not better) performance in object labeling and positioning, with a significant decrease in processing time, compared with established approaches based on deep learning. As a side effect of using low-density point-clouds, we also better protect people's privacy, as the lower resolution inherently prevents the use of techniques like face recognition.

Article
Measuring Quality of Service in a Robotized Comprehensive Geriatric Assessment Scenario
Appl. Sci. 2020, 10(18), 6618; https://doi.org/10.3390/app10186618 - 22 Sep 2020
Cited by 1 | Viewed by 656
Abstract
Comprehensive Geriatric Assessment (CGA) is an integrated clinical process to evaluate frail elderly people in order to create therapy plans that improve their quality and quantity of life. The whole process includes the completion of standardized questionnaires or specific movements, which are performed by the patient and do not necessarily require the presence of a medical expert. With the aim of automating these parts of the CGA, we have designed and developed CLARC (smart CLinic Assistant Robot for CGA), a mobile robot able to help the physician to capture and manage data during the CGA procedures, mainly by autonomously conducting a set of predefined evaluation tests. Using CLARC to conduct geriatric tests will reduce the time medical professionals have to spend on purely mechanical tasks, giving them more time to develop individualised care plans for their patients. In fact, ideally, CLARC will perform these tests on its own. In parallel with the effort to correctly address the functional aspects, i.e., the development of the robot tasks, the design of CLARC must also deal with non-functional properties such as the degree of interaction or the performance. We argue that satisfying user preferences can be a good way to improve the acceptance of the robot by the patients. This paper describes the integration of the modules that allow these properties to be monitored at run-time into the software architecture of the CLARC robot, providing information on the quality of its service. Experimental evaluation illustrates that the defined quality-of-service metrics correctly capture how the considered non-functional properties evolve in the robot's activity and its interaction with the patient.

Article
Interface Transparency Issues in Teleoperation
Appl. Sci. 2020, 10(18), 6232; https://doi.org/10.3390/app10186232 - 08 Sep 2020
Viewed by 681
Abstract
Transferring skills and expertise to remote places, without being present, is a new challenge for our digitally interconnected society. People can experience and perform actions in distant places through a robotic agent, wearing immersive interfaces to feel physically there. However, technological contingencies can affect human perception, compromising skill-based performance. Considering the results from studies on human factors, a set of recommendations for the construction of immersive teleoperation systems is provided, followed by an example of the evaluation methodology. We developed a testbed to study the perceptual issues that affect task performance while users manipulated the environment through either traditional or immersive interfaces. The analysis of their effect on perception, navigation, and manipulation relies on performance measures and subjective answers. The goal is to mitigate the effect of factors such as system latency, field of view, frame of reference, or frame rate to achieve the sense of telepresence. By decoupling the flows of an immersive teleoperation system, we aim to understand how vision and interaction fidelity affect spatial cognition. Results show that misalignments between the frame of reference for vision and motor action, or the use of tools affecting the sense of body position or movement, have a higher effect on mental workload and spatial cognition.

Article
Evolution of a Cognitive Architecture for Social Robots: Integrating Behaviors and Symbolic Knowledge
Appl. Sci. 2020, 10(17), 6067; https://doi.org/10.3390/app10176067 - 01 Sep 2020
Cited by 1 | Viewed by 779
Abstract
This paper presents the evolution of a robotic architecture intended for controlling autonomous social robots. The first instance of this architecture was originally designed according to behavior-based principles. The building blocks of this architecture were behaviors designed as finite state machines and organized in an ethologically inspired way. However, the need to manage explicit symbolic knowledge in human–robot interaction required the integration of planning capabilities into the architecture and a symbolic representation of the environment and the internal state of the robot. A major contribution of this paper is the description of the working memory that integrates these two approaches. This working memory has been implemented as a distributed graph. Another contribution is the use of behavior trees instead of state machines for implementing the behavior-based part of the architecture. This latest version of the architecture has been tested in robotic competitions (RoboCup and the European Robotics League, among others), and its performance there is also discussed in this paper.
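The move from finite state machines to behavior trees mentioned in the abstract can be illustrated with a minimal sketch. The node classes and the example tasks below are illustrative assumptions, not the architecture's actual API: a Sequence ticks its children in order and fails fast, a Fallback tries alternatives until one succeeds, which makes behaviors composable without the transition-wiring a flat state machine requires.

```python
# Minimal behavior-tree sketch: Sequence and Fallback composite nodes.
# Node names and tasks are illustrative, not the paper's implementation.

SUCCESS, FAILURE = "success", "failure"

class Action:
    """Leaf node wrapping a condition or action callable."""
    def __init__(self, name, fn):
        self.name, self.fn = name, fn
    def tick(self):
        return SUCCESS if self.fn() else FAILURE

class Sequence:
    """Ticks children in order; fails as soon as one child fails."""
    def __init__(self, children):
        self.children = children
    def tick(self):
        for child in self.children:
            if child.tick() == FAILURE:
                return FAILURE
        return SUCCESS

class Fallback:
    """Ticks children in order; succeeds as soon as one child succeeds."""
    def __init__(self, children):
        self.children = children
    def tick(self):
        for child in self.children:
            if child.tick() == SUCCESS:
                return SUCCESS
        return FAILURE

# Example: greet a person if one is detected, otherwise wander.
person_detected = True
tree = Fallback([
    Sequence([Action("person?", lambda: person_detected),
              Action("greet", lambda: True)]),
    Action("wander", lambda: True),
])
print(tree.tick())  # -> success
```

Because subtrees expose the same `tick` interface as leaves, a behavior like the greet sequence can be reused under any parent, which is the modularity argument usually made for behavior trees over state machines.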
Article
MERLIN a Cognitive Architecture for Service Robots
Appl. Sci. 2020, 10(17), 5989; https://doi.org/10.3390/app10175989 - 29 Aug 2020
Viewed by 705
Abstract
Many social robots deployed in public spaces hide hybrid cognitive architectures for dealing with daily tasks. Mostly, two main blocks sustain these hybrid architectures for robot behavior generation: deliberative and behavior-based mechanisms. The Robot Operating System offers different solutions for implementing these blocks; however, some issues arise when both are deployed on the robot. This paper presents a software engineering approach for normalizing the process of integrating them, presenting the result as a full cognitive architecture named MERLIN. Providing implementation details and diagrams for establishing the architecture, this research empirically tests the proposed solution using a variation of the challenge defined in the SciRoc @home competition. The results validate the usability of our approach and show MERLIN to be a hybrid architecture ready for short- and long-term tasks, yielding better results than a default approach, particularly when deployed in highly interactive scenarios.

Article
A Novel Grid and Place Neuron’s Computational Modeling to Learn Spatial Semantics of an Environment
Appl. Sci. 2020, 10(15), 5147; https://doi.org/10.3390/app10155147 - 27 Jul 2020
Cited by 2 | Viewed by 615
Abstract
Health-related limitations prevent humans from working in hazardous environments, so cognitive robots are needed to work there. A robot, however, cannot learn the spatial semantics of the environment or of objects, which hinders it from interacting with the working environment. To overcome this problem, in this work, an agent is computationally devised that mimics grid and place neuron functionality to learn cognitive maps from the input spatial data of an environment or an object. A novel quadrant-based approach is proposed to model the behavior of the grid neuron, which, like the real grid neuron, is capable of generating periodic hexagonal grid-like output patterns from the input body movement. Furthermore, a cognitive map formation and learning mechanism is proposed using the place–grid neuron interaction system, which is meant for making predictions of environmental sensations from the body movement. A place sequence learning system is also introduced, which acts like an episodic memory of a trip, is forgettable based on usage frequency, and helps to reduce the accumulation of error during visits to distant places. The model has been deployed and validated in two different spatial data learning applications: 2D object detection by touch, and navigation in an environment. The result analysis shows that the proposed model is significantly associated with the expected outcomes.
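The hexagonal grid-like firing pattern mentioned in the abstract is classically modeled as a sum of three plane waves whose directions are 60 degrees apart. The sketch below shows that standard textbook formulation only, as background for the abstract; it is not the paper's quadrant-based model, and the spacing and phase parameters are illustrative assumptions.

```python
# Classic grid-cell firing model: sum of three cosines whose wave vectors
# are 60 degrees apart, yielding a hexagonal pattern of firing fields in 2D.
# Standard textbook formulation, not the paper's quadrant-based approach.
import math

def grid_activity(x, y, spacing=0.5, phase=(0.0, 0.0)):
    """Normalised firing rate in [0, 1] at position (x, y)."""
    k = 4 * math.pi / (math.sqrt(3) * spacing)  # wave number for this spacing
    total = 0.0
    for i in range(3):
        theta = i * math.pi / 3  # 0, 60, 120 degrees
        kx, ky = k * math.cos(theta), k * math.sin(theta)
        total += math.cos(kx * (x - phase[0]) + ky * (y - phase[1]))
    # The sum of the three cosines ranges over [-1.5, 3]; rescale to [0, 1].
    return (total + 1.5) / 4.5

# The firing rate peaks at the grid's vertices, e.g. at the phase offset itself.
print(round(grid_activity(0.0, 0.0), 3))  # -> 1.0
```

Sampling `grid_activity` over a plane reproduces the periodic hexagonal pattern that the paper's grid-neuron model also aims to generate from body movement.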

Article
On Cognitive Assistant Robots for Reducing Variability in Industrial Human-Robot Activities
Appl. Sci. 2020, 10(15), 5137; https://doi.org/10.3390/app10155137 - 26 Jul 2020
Cited by 1 | Viewed by 812
Abstract
In the industrial domain, one important research activity in cognitive robotics is the development of assistant robots. In this work, we show how the use of a cognitive assistant robot can contribute to (i) improving task effectiveness and productivity, (ii) providing autonomy for the human supervisor to make decisions and providing or improving human operators' skills, and (iii) giving feedback to the human operator in the loop. Our approach is evaluated on variability reduction in a manual assembly system. The overall study and analysis are performed on a model of the assembly system obtained using the Functional Resonance Analysis Method (FRAM) and tested in a simulated robotic scenario. Results show that a cognitive assistant robot is a useful partner for improving the task effectiveness of human operators and supervisors.

Other


Brief Report
A Robot Has a Mind of Its Own Because We Intuitively Share It
Appl. Sci. 2020, 10(18), 6531; https://doi.org/10.3390/app10186531 - 18 Sep 2020
Viewed by 548
Abstract
People perceive the mind in two dimensions: intellectual and affective. Advances in artificial intelligence enable people to perceive the intellectual mind of a robot through their semantic interactions. Conversely, it is still controversial whether a robot has an affective mind of its own without any intellectual actions or semantic interactions. We investigated pain experiences when observing three different facial expressions of a virtual agent modeling affective minds (i.e., painful, unhappy, and neutral). The cold pain detection threshold of 19 healthy subjects was measured as they watched a black screen, and then changes in their cold pain detection thresholds were evaluated as they watched the facial expressions. Subjects were asked to rate the pain intensity of the respective facial expressions. Changes in cold pain detection thresholds were compared and adjusted by the respective pain intensities. Only when watching the painful expression of the virtual agent did the cold pain detection threshold increase significantly. By directly evaluating intuitive pain responses when observing the facial expressions of a virtual agent, we found that we 'share' empathic neural responses, which can intuitively emerge according to the observed pain intensity, with a robot (a virtual agent).
