Special Issue "Representations and Reasoning for Robotics"

A special issue of Robotics (ISSN 2218-6581).

Deadline for manuscript submissions: closed (28 February 2015)

Special Issue Editors

Guest Editor
Dr. Nicola Bellotto (Website)

School of Computer Science, University of Lincoln, Brayford Pool, Lincoln, LN6 7TS, United Kingdom
Interests: mobile robotics; machine perception; active vision; sensor fusion; qualitative spatial representation; cybernetics
Guest Editor
Dr. Nick Hawes (Website)

School of Computer Science, University of Birmingham, Edgbaston, Birmingham, B15 2TT, United Kingdom
Interests: autonomy; robotics; AI; planning; spatio-temporal reasoning
Guest Editor
Dr. Mohan Sridharan (Website)

Department of Electrical and Computer Engineering, The University of Auckland, Private Bag 92019, Auckland Mail Centre, Auckland 1142, New Zealand
Fax: +64 9 373 7461
Interests: autonomous robots; knowledge representation and reasoning; machine learning; computational vision; applied cognitive science
Guest Editor
Prof. Dr. Daniele Nardi (Website)

Dipartimento di Ingegneria Informatica, Automatica e Gestionale "A. Ruberti", "Sapienza" Università di Roma, Via Ariosto 25, 00185 Roma, Italy
Phone: +393478507199
Fax: +39-06-77274106
Interests: cognitive robotics; knowledge representation and reasoning; semantic mapping; human-robot interaction; robot soccer

Special Issue Information

Dear Colleagues,

As the field of robotics matures, the development of ever more intelligent robots becomes possible. However, robots deployed in homes, offices and other complex domains are faced with the formidable challenge of representing, revising and reasoning with incomplete domain knowledge about their capabilities, their environments, and how the former interacts with the latter.

Many algorithms have been developed for qualitatively and quantitatively representing and reasoning with knowledge and uncertainty. Unfortunately, research contributions in this area are fragmented, making it difficult for researchers with different expertise to share advances in their respective fields. The objective of this special issue is therefore to promote a deeper understanding of recent breakthroughs and challenges in knowledge representation and reasoning for robots. We are interested in efforts that integrate, or motivate an integration of, algorithms for knowledge representation and/or commonsense reasoning on one or more robots, in different application domains.

Topics of interest include (but are not limited to):

  • Knowledge acquisition and representation
  • Symbolic and probabilistic representations
  • Reasoning with incomplete knowledge
  • Interactive and cooperative decision-making
  • Learning and symbol grounding
  • Qualitative representations and reasoning

We particularly encourage the submission of papers that ground these topics in research areas such as robot perception, human–robot (and multirobot) collaboration, and robot planning.

Dr. Nicola Bellotto
Dr. Nick Hawes
Dr. Mohan Sridharan
Prof. Dr. Daniele Nardi
Guest Editors

Submission

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. Papers will be published continuously (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are refereed through a peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Robotics is an international peer-reviewed Open Access quarterly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 300 CHF (Swiss Francs). English correction and/or formatting fees of 250 CHF (Swiss Francs) will be charged in certain cases for those articles accepted for publication that require extensive additional formatting and/or English corrections.


Published Papers (6 papers)

Research

Open Access Article: Intent Understanding Using an Activation Spreading Architecture
Robotics 2015, 4(3), 284-315; doi:10.3390/robotics4030284
Received: 1 March 2015 / Revised: 11 July 2015 / Accepted: 15 July 2015 / Published: 30 July 2015
Abstract
In this paper, we propose a new approach for recognizing intentions of humans by observing their activities with a color plus depth (RGB-D) camera. Activities and goals are modeled as a distributed network of inter-connected nodes in an Activation Spreading Network (ASN). Inspired by a formalism in hierarchical task networks, the structure of the network captures the hierarchical relationship between high-level goals and the low-level activities that realize these goals. Our approach can detect intentions before they are realized, and it works in real-time. We also extend the formalism of ASNs to incorporate contextual information into intent recognition. We further augment the ASN formalism with special nodes and synaptic connections to model ordering constraints between actions, in order to represent and handle partial-order plans in our ASN. A fully functioning system is developed for experimental evaluation. We implemented a robotic system that uses our intent recognition to naturally interact with the user. Our ASN-based intent recognizer is tested against three different scenarios involving everyday activities performed by a subject, and our results show that the proposed approach is able to detect low-level activities and recognize high-level intentions effectively in real-time. Further analysis shows that contextual and partial-order ASNs are able to discriminate between otherwise ambiguous goals.
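The core mechanism described in the abstract can be illustrated with a toy sketch: observed low-level activities inject activation that propagates along weighted edges to higher-level goal nodes, and a goal is "recognized" once its accumulated activation crosses a threshold. The node names, weights, and threshold below are invented for illustration and are not taken from the paper.

```python
# Toy activation-spreading sketch: activities spread activation to goals.
EDGES = {  # activity -> [(goal, weight), ...]
    'pick_cup':    [('make_coffee', 0.6), ('clean_table', 0.3)],
    'open_fridge': [('make_coffee', 0.5)],
    'grab_sponge': [('clean_table', 0.8)],
}

def spread(observations, threshold=0.9):
    """Accumulate activation in goal nodes from a sequence of observed activities."""
    activation = {}
    for activity in observations:
        for goal, weight in EDGES.get(activity, []):
            activation[goal] = activation.get(goal, 0.0) + weight
    recognized = [g for g, a in activation.items() if a >= threshold]
    return activation, recognized

activation, recognized = spread(['pick_cup', 'open_fridge'])
print(recognized)  # ['make_coffee']  (0.6 + 0.5 = 1.1 >= 0.9)
```

Note that the intermediate hypothesis `clean_table` also receives some activation (0.3) but stays below threshold, which is how this style of model can rank ambiguous goals before committing to one.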
(This article belongs to the Special Issue Representations and Reasoning for Robotics)
Open Access Article: Leveraging Qualitative Reasoning to Learning Manipulation Tasks
Robotics 2015, 4(3), 253-283; doi:10.3390/robotics4030253
Received: 1 March 2015 / Revised: 26 June 2015 / Accepted: 7 July 2015 / Published: 13 July 2015
Abstract
Learning and planning are powerful AI methods that exhibit complementary strengths. While planning allows goal-directed actions to be computed when a reliable forward model is known, learning allows such models to be obtained autonomously. In this paper we describe how both methods can be combined using an expressive qualitative knowledge representation. We argue that the crucial step in this integration is to employ a representation based on a well-defined semantics. This article proposes the qualitative spatial logic QSL, a representation that combines qualitative abstraction with linear temporal logic, allowing us to represent relevant information about the learning task, possible actions, and their consequences. Doing so, we empower reasoning processes to enhance learning performance beyond the positive effects of learning in abstract state spaces. Proof-of-concept experiments in two simulation environments show that this approach can help improve learning-based robotics through quicker convergence and more reliable action planning.
Open Access Article: Learning Task Knowledge from Dialog and Web Access
Robotics 2015, 4(2), 223-252; doi:10.3390/robotics4020223
Received: 20 March 2015 / Revised: 29 May 2015 / Accepted: 5 June 2015 / Published: 17 June 2015
Abstract
We present KnoWDiaL, an approach for learning and using task-relevant Knowledge from human-robot Dialog and access to the Web. KnoWDiaL assumes that there is an autonomous agent that performs tasks, as requested by humans through speech. The agent needs to "understand" the request (i.e., to fully ground the task until it can proceed to plan for and execute it). KnoWDiaL contributes such understanding by using and updating a Knowledge Base, by dialoguing with the user, and by accessing the web. We believe that KnoWDiaL, as we present it, can be applied to general autonomous agents. However, we focus on our work with our autonomous collaborative robot, CoBot, which executes service tasks in a building, moving around and transporting objects between locations. Hence, the knowledge acquired and accessed consists of groundings of language to robot actions, building locations, persons, and objects. KnoWDiaL handles the interpretation of voice commands, is robust to speech recognition errors, and is able to learn commands involving referring expressions in an open domain (i.e., without requiring a lexicon). We present in detail the multiple components of KnoWDiaL, namely a frame-semantic parser, a probabilistic grounding model, a web-based predicate evaluator, a dialog manager, and the weighted predicate-based Knowledge Base. We illustrate the knowledge access and updates from dialog and Web access through detailed and complete examples. We further evaluate the correctness of the predicate instances learned into the Knowledge Base, and show the increase in dialog efficiency as a function of the number of interactions. We have extensively and successfully used KnoWDiaL in CoBot dialoguing and accessing the Web, and extract a few corresponding example sequences from captured videos.
Open Access Article: How? Why? What? Where? When? Who? Grounding Ontology in the Actions of a Situated Social Agent
Robotics 2015, 4(2), 169-193; doi:10.3390/robotics4020169
Received: 28 February 2015 / Accepted: 8 May 2015 / Published: 10 June 2015
Abstract
Robotic agents are spreading, incarnated as embodied entities exploring the tangible world and interacting with us, or as virtual agents crawling the web, parsing and generating data. In both cases, they require: (i) processes to acquire information; (ii) structures to model and store information as usable knowledge; (iii) reasoning systems to interpret the information; and (iv) ways to express their interpretations. The H5W (How, Why, What, Where, When, Who) framework is a conceptualization of the problems faced by any agent situated in a social environment, and it has shaped several robotic studies. We introduce the H5W framework through a description of its underlying neuroscience and the psychological considerations it embodies; we then demonstrate a specific implementation of the framework, focusing on the motivation and implications of the pragmatic decisions we have taken. We report the numerous studies that have relied upon this technical implementation as evidence of its robustness and versatility; moreover, we conduct an additional validation of its applicability to the natural language domain by designing an information exchange task as a benchmark.
Open Access Article: DOF Decoupling Task Graph Model: Reducing the Complexity of Touch-Based Active Sensing
Robotics 2015, 4(2), 141-168; doi:10.3390/robotics4020141
Received: 28 February 2015 / Revised: 28 April 2015 / Accepted: 6 May 2015 / Published: 19 May 2015
Abstract
This article presents: (i) a formal, generic model for active sensing tasks; (ii) the insight that active sensing actions can very often be searched on less than six-dimensional configuration spaces (bringing an exponential reduction in the computational costs involved in the search); (iii) an algorithm for selecting actions that explicitly trades off information gain, execution time and computational cost; and (iv) experimental results of touch-based localization in an industrial setting. Generalizing from prior work, the formal model represents an active sensing task by six primitives: configuration space, information space, object model, action space, inference scheme and action-selection scheme; prior work applications conform to the model as illustrated by four concrete examples. On top of the mentioned primitives, the task graph is then introduced as the relationship to represent an active sensing task as a sequence of low-complexity actions defined over different configuration spaces of the object. The presented act-reason algorithm is an action-selection scheme that maximizes the expected information gain of each action, explicitly constraining the time allocated to compute and execute the actions. The experimental contributions include localization of objects with: (1) a force-controlled robot equipped with a spherical touch probe; (2) a geometric complexity of the to-be-localized objects up to industrial relevance; (3) an initial uncertainty of (0.4 m, 0.4 m, 2π); and (4) a configuration of act-reason that constrains the allocated time to compute and execute the next action as a function of the current uncertainty. Localization is accomplished when the probability mass within a 5-mm tolerance reaches a specified threshold of 80%. Four objects are localized with final {mean; standard-deviation} error spanning from {0.0043 m; 0.0034 m} to {0.0073 m; 0.0048 m}.
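The action-selection idea summarized above, choosing the sensing action with the best expected information gain under a time budget, can be sketched as follows. This is not the paper's act-reason implementation; the action names, costs, and the two-hypothesis belief model are invented for illustration.

```python
# Illustrative information-gain-based action selection under a time budget.
import math

def entropy(belief):
    """Shannon entropy (bits) of a discrete belief distribution."""
    return -sum(p * math.log2(p) for p in belief if p > 0)

def expected_info_gain(belief, outcome_model):
    """Expected entropy reduction; outcome_model is a list of
    (outcome_probability, posterior_belief) pairs."""
    h_post = sum(p * entropy(post) for p, post in outcome_model)
    return entropy(belief) - h_post

def select_action(belief, actions, time_budget):
    """Greedy choice: best gain/cost ratio among actions fitting the budget."""
    feasible = [a for a in actions if a['time'] <= time_budget]
    if not feasible:
        return None
    return max(feasible,
               key=lambda a: expected_info_gain(belief, a['outcomes']) / a['time'])

belief = [0.5, 0.5]  # two object-pose hypotheses, maximally uncertain
actions = [
    {'name': 'probe_top',  'time': 2.0,   # discriminates the hypotheses
     'outcomes': [(0.5, [0.9, 0.1]), (0.5, [0.1, 0.9])]},
    {'name': 'probe_side', 'time': 1.0,   # cheap but uninformative
     'outcomes': [(1.0, [0.5, 0.5])]},
]
print(select_action(belief, actions, time_budget=3.0)['name'])  # probe_top
```

The cheap probe is rejected because it leaves the belief unchanged (zero gain), while the slower probe roughly halves the entropy, giving it the better gain-per-second ratio.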
Open Access Article: A Computational Model of Human-Robot Spatial Interactions Based on a Qualitative Trajectory Calculus
Robotics 2015, 4(1), 63-102; doi:10.3390/robotics4010063
Received: 31 December 2014 / Revised: 9 March 2015 / Accepted: 17 March 2015 / Published: 23 March 2015
Abstract
In this paper we propose a probabilistic sequential model of Human-Robot Spatial Interaction (HRSI) using a well-established Qualitative Trajectory Calculus (QTC) to encode HRSI between a human and a mobile robot in a meaningful, tractable, and systematic manner. Our key contribution is to utilise QTC as a state descriptor and model HRSI as a probabilistic sequence of such states. Beyond the sole direction of movement of human and robot modelled by QTC, attributes of HRSI such as proxemics and velocity profiles play vital roles in the modelling and generation of HRSI behaviour. In this paper, we present in particular how the concept of proxemics can be embedded in QTC to facilitate richer models. To facilitate reasoning on HRSI with qualitative representations, we show how we can combine the representational power of QTC with the concept of proxemics in a concise framework, enriching our probabilistic representation by implicitly modelling distances. We show the appropriateness of our sequential model of QTC by encoding different HRSI behaviours observed in two spatial interaction experiments. We classify these encounters, creating a comparative measurement, showing the representational capabilities of the model.
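The basic QTC encoding used as the state descriptor above can be sketched for two synchronised 2D trajectories: at each timestep, each agent is assigned '-' if it moves towards the other agent's previous position, '+' if away, and '0' if the distance is unchanged. This is a minimal sketch of the basic variant only (no velocity or proxemics extensions), with illustrative trajectories.

```python
# Minimal sketch: encode two synchronised trajectories as basic QTC states.
import math

def qtc_symbol(pos_prev, pos_curr, other_prev):
    """Qualitative motion of one agent relative to the other's previous position."""
    d_prev = math.dist(pos_prev, other_prev)
    d_curr = math.dist(pos_curr, other_prev)
    if d_curr < d_prev:
        return '-'  # moving towards the other agent
    if d_curr > d_prev:
        return '+'  # moving away from the other agent
    return '0'      # distance unchanged

def encode_qtc(human_traj, robot_traj):
    """Encode two synchronised 2D trajectories as a sequence of (q_h, q_r) states."""
    states = []
    for t in range(1, len(human_traj)):
        q_h = qtc_symbol(human_traj[t - 1], human_traj[t], robot_traj[t - 1])
        q_r = qtc_symbol(robot_traj[t - 1], robot_traj[t], human_traj[t - 1])
        states.append((q_h, q_r))
    return states

# A human approaches a stationary robot, then both retreat.
human = [(0, 0), (1, 0), (2, 0), (1.5, 0)]
robot = [(4, 0), (4, 0), (4, 0), (4.5, 0)]
print(encode_qtc(human, robot))  # [('-', '0'), ('-', '0'), ('+', '+')]
```

Sequences of such symbolic states are what a probabilistic sequential model (e.g. a Markov chain over QTC states) can then be trained on and used to classify encounters.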
Journal Contact

MDPI AG
Robotics Editorial Office
St. Alban-Anlage 66, 4052 Basel, Switzerland
robotics@mdpi.com
Tel.: +41 61 683 77 34
Fax: +41 61 302 89 18