Representations and Reasoning for Robotics

A special issue of Robotics (ISSN 2218-6581).

Deadline for manuscript submissions: closed (28 February 2015) | Viewed by 54035

Special Issue Editors


Dr. Nicola Bellotto
Guest Editor
Department of Information Engineering, University of Padua, Padova, Italy
Interests: mobile robotics; machine perception; active vision; sensor fusion; qualitative spatial representation; causal inference

Dr. Nick Hawes
Guest Editor
School of Computer Science, University of Birmingham, Birmingham B15 2TT, UK
Interests: autonomy; robotics; AI; planning; spatio-temporal reasoning

Dr. Mohan Sridharan
Guest Editor
Department of Electrical and Computer Engineering, The University of Auckland, Private Bag 92019, Auckland Mail Centre, Auckland 1142, New Zealand
Interests: autonomous robots; knowledge representation and reasoning; machine learning; computational vision; applied cognitive science

Prof. Dr. Daniele Nardi
Guest Editor
Dipartimento di Ingegneria Informatica, Automatica e Gestionale "A. Ruberti", Sapienza Università di Roma, Via Ariosto 25, 00185 Roma, Italy
Interests: cognitive robotics; knowledge representation and reasoning; semantic mapping; human robot interaction; robot soccer

Special Issue Information

Dear Colleagues,

As the field of robotics matures, the development of ever more intelligent robots becomes possible. However, robots deployed in homes, offices, and other complex domains face the formidable challenge of representing, revising, and reasoning with incomplete domain knowledge about their capabilities, their environments, and how the former interact with the latter.

Many algorithms have been developed for qualitatively and quantitatively representing and reasoning with knowledge and uncertainty. Unfortunately, research contributions in this area are fragmented, making it difficult for researchers with different expertise to share advances in their respective fields. The objective of this special issue is therefore to promote a deeper understanding of recent breakthroughs and challenges in knowledge representation and reasoning for robots. We are interested in efforts that integrate, or motivate the integration of, algorithms for knowledge representation and/or commonsense reasoning on one or more robots, in different application domains.

Topics of interest include (but are not limited to):

  • Knowledge acquisition and representation
  • Symbolic and probabilistic representations
  • Reasoning with incomplete knowledge
  • Interactive and cooperative decision-making
  • Learning and symbol grounding
  • Qualitative representations and reasoning

We particularly encourage the submission of papers that ground these topics in research areas such as robot perception, human–robot (and multirobot) collaboration, and robot planning.

Dr. Nicola Bellotto
Dr. Nick Hawes
Dr. Mohan Sridharan
Prof. Dr. Daniele Nardi
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Robotics is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1800 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.


Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (6 papers)


Research

Article
Intent Understanding Using an Activation Spreading Architecture
by Mohammad Taghi Saffar, Mircea Nicolescu, Monica Nicolescu and Banafsheh Rekabdar
Robotics 2015, 4(3), 284-315; https://doi.org/10.3390/robotics4030284 - 30 Jul 2015
Cited by 2 | Viewed by 6186
Abstract
In this paper, we propose a new approach for recognizing the intentions of humans by observing their activities with a color-plus-depth (RGB-D) camera. Activities and goals are modeled as a distributed network of interconnected nodes in an Activation Spreading Network (ASN). Inspired by a formalism in hierarchical task networks, the structure of the network captures the hierarchical relationship between high-level goals and the low-level activities that realize them. Our approach can detect intentions before they are realized, and it works in real time. We also extend the formalism of ASNs to incorporate contextual information into intent recognition. We further augment the ASN formalism with special nodes and synaptic connections to model ordering constraints between actions, in order to represent and handle partial-order plans in our ASN. A fully functioning system was developed for experimental evaluation. We implemented a robotic system that uses our intent recognizer to interact naturally with the user. Our ASN-based intent recognizer is tested against three different scenarios involving everyday activities performed by a subject, and our results show that the proposed approach is able to detect low-level activities and recognize high-level intentions effectively in real time. Further analysis shows that contextual and partial-order ASNs are able to discriminate between otherwise ambiguous goals.
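
As a rough, hypothetical illustration of the activation-spreading idea (not the authors' implementation; the node names, edge weights, and recognition threshold below are invented), observed low-level activities inject activation that propagates to connected goal nodes until one crosses a threshold:

```python
# Hypothetical sketch of activation spreading for intent recognition.
# Node names, weights, and the threshold are illustrative only.
from collections import defaultdict

class ActivationNet:
    def __init__(self):
        self.act = defaultdict(float)    # node -> current activation
        self.edges = defaultdict(list)   # node -> [(target, weight)]

    def connect(self, src, dst, weight):
        self.edges[src].append((dst, weight))

    def observe(self, activity, evidence=1.0):
        self.act[activity] += evidence   # evidence from the RGB-D pipeline

    def step(self, decay=0.8):
        nxt = defaultdict(float)
        for node, a in self.act.items():
            nxt[node] += a * decay                 # activation decays over time
            for dst, w in self.edges[node]:
                nxt[dst] += a * w                  # and spreads to successors
        self.act = nxt

    def recognized(self, goals, threshold=1.5):
        return [g for g in goals if self.act[g] >= threshold]

net = ActivationNet()
net.connect("grasp-cup", "make-coffee", 0.9)       # activity -> goal links
net.connect("open-fridge", "make-coffee", 0.4)
net.connect("open-fridge", "prepare-meal", 0.8)
for _ in range(3):
    net.observe("grasp-cup")
    net.step()
print(net.recognized(["make-coffee", "prepare-meal"]))   # ['make-coffee']
```

Ordering constraints and contextual nodes, as described in the paper, would add further node types and gating on top of this basic propagation loop.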

Article
Leveraging Qualitative Reasoning to Learning Manipulation Tasks
by Diedrich Wolter and Alexandra Kirsch
Robotics 2015, 4(3), 253-283; https://doi.org/10.3390/robotics4030253 - 13 Jul 2015
Cited by 5 | Viewed by 8987
Abstract
Learning and planning are powerful AI methods that exhibit complementary strengths. While planning allows goal-directed actions to be computed when a reliable forward model is known, learning allows such models to be obtained autonomously. In this paper, we describe how both methods can be combined using an expressive qualitative knowledge representation. We argue that the crucial step in this integration is to employ a representation based on well-defined semantics. This article proposes the qualitative spatial logic QSL, a representation that combines qualitative abstraction with linear temporal logic, allowing us to represent relevant information about the learning task, possible actions, and their consequences. In doing so, we empower reasoning processes to enhance learning performance beyond the positive effects of learning in abstract state spaces. Proof-of-concept experiments in two simulation environments show that this approach can help to improve learning-based robotics through quicker convergence and more reliable action planning.
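
As a loose illustration of what combining qualitative abstraction with temporal logic can look like (a sketch under invented thresholds and predicate names; this is not the QSL calculus itself), continuous sensor values can be abstracted into qualitative symbols whose traces are then checked against temporal properties:

```python
# Illustrative sketch: qualitative abstraction plus simple temporal operators.
# Thresholds and predicates are invented; this is not the QSL formalism.

def qualitative_distance(d):
    """Abstract a metric distance (metres) into a qualitative symbol."""
    if d < 0.05:
        return "touching"
    if d < 0.30:
        return "near"
    return "far"

def eventually(pred, trace):
    """Temporal 'F pred': pred holds at some point in the trace."""
    return any(pred(state) for state in trace)

def holds_until(inv, goal, trace):
    """Temporal 'inv U goal': inv holds at every step before goal first holds."""
    for state in trace:
        if goal(state):
            return True
        if not inv(state):
            return False
    return False

# Gripper-to-object distances logged during a made-up reaching episode.
trace = [qualitative_distance(d) for d in (0.80, 0.50, 0.20, 0.10, 0.04)]
print(eventually(lambda s: s == "touching", trace))             # True
print(holds_until(lambda s: s != "touching",
                  lambda s: s == "touching", trace))            # True
```

A learner operating on such abstract traces searches a far smaller state space than one working on raw coordinates, which is the effect the paper exploits.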

Article
Learning Task Knowledge from Dialog and Web Access
by Vittorio Perera, Robin Soetens, Thomas Kollar, Mehdi Samadi, Yichao Sun, Daniele Nardi, René Van de Molengraft and Manuela Veloso
Robotics 2015, 4(2), 223-252; https://doi.org/10.3390/robotics4020223 - 17 Jun 2015
Cited by 13 | Viewed by 8908
Abstract
We present KnoWDiaL, an approach for Learning and using task-relevant Knowledge from human-robot Dialog and access to the Web. KnoWDiaL assumes that there is an autonomous agent that performs tasks, as requested by humans through speech. The agent needs to "understand" the request (i.e., to fully ground the task until it can proceed to plan for and execute it). KnoWDiaL contributes such understanding by using and updating a Knowledge Base, by dialoguing with the user, and by accessing the web. We believe that KnoWDiaL, as we present it, can be applied to general autonomous agents. However, we focus on our work with our autonomous collaborative robot, CoBot, which executes service tasks in a building, moving around and transporting objects between locations. Hence, the knowledge acquired and accessed consists of groundings of language to robot actions, building locations, persons, and objects. KnoWDiaL handles the interpretation of voice commands, is robust to speech recognition errors, and is able to learn commands involving referring expressions in an open domain (i.e., without requiring a lexicon). We present in detail the multiple components of KnoWDiaL, namely a frame-semantic parser, a probabilistic grounding model, a web-based predicate evaluator, a dialog manager, and the weighted predicate-based Knowledge Base. We illustrate the knowledge access and updates from dialog and Web access through detailed and complete examples. We further evaluate the correctness of the predicate instances learned into the Knowledge Base, and show the increase in dialog efficiency as a function of the number of interactions. We have extensively and successfully used KnoWDiaL in CoBot dialoguing and accessing the Web, and extract a few corresponding example sequences from captured videos.
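
To give a flavour of what a weighted predicate-based Knowledge Base can look like (a hypothetical sketch; the predicates, weights, and update rule below are invented for illustration and are not KnoWDiaL's), groundings can be stored with confidence weights that dialog confirmations reinforce:

```python
# Hypothetical sketch of a weighted predicate knowledge base.
# Predicates, weights, and the update rule are illustrative only.
from collections import defaultdict

class WeightedKB:
    def __init__(self):
        self.facts = defaultdict(float)   # (predicate, args) -> weight

    def assert_fact(self, predicate, *args, weight=0.5):
        key = (predicate, args)
        self.facts[key] = max(self.facts[key], weight)

    def reinforce(self, predicate, *args, delta=0.2):
        """Bump a fact's weight after the user confirms it in dialog."""
        key = (predicate, args)
        self.facts[key] = min(1.0, self.facts[key] + delta)

    def best(self, predicate, subject):
        """Return the most strongly weighted grounding of predicate(subject, ?)."""
        candidates = [(args[1], w) for (p, args), w in self.facts.items()
                      if p == predicate and args[0] == subject]
        return max(candidates, key=lambda c: c[1], default=None)

kb = WeightedKB()
kb.assert_fact("locationOf", "coffee machine", "kitchenette", weight=0.5)
kb.assert_fact("locationOf", "coffee machine", "office 7004", weight=0.3)
kb.reinforce("locationOf", "coffee machine", "kitchenette")   # user answered "yes"
print(kb.best("locationOf", "coffee machine"))                # ('kitchenette', 0.7)
```

In the paper, such weights are fed by the probabilistic grounding model and the web-based predicate evaluator as well as by dialog, so the robot asks fewer questions as the Knowledge Base matures.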

Article
How? Why? What? Where? When? Who? Grounding Ontology in the Actions of a Situated Social Agent
by Stéphane Lallée and Paul F.M.J. Verschure
Robotics 2015, 4(2), 169-193; https://doi.org/10.3390/robotics4020169 - 10 Jun 2015
Cited by 15 | Viewed by 11367
Abstract
Robotic agents are spreading, incarnated as embodied entities exploring the tangible world and interacting with us, or as virtual agents crawling the web, parsing and generating data. In both cases, they require: (i) processes to acquire information; (ii) structures to model and store information as usable knowledge; (iii) reasoning systems to interpret the information; and (iv) ways to express their interpretations. The H5W (How, Why, What, Where, When, Who) framework is a conceptualization of the problems faced by any agent situated in a social environment, and it has defined several robotic studies. We introduce the H5W framework through a description of its underlying neuroscience and the psychological considerations it embodies; we then demonstrate a specific implementation of the framework, focusing on the motivation for and implications of the pragmatic decisions we have taken. We report on the numerous studies that have relied upon this technical implementation as proof of its robustness and versatility; moreover, we conduct an additional validation of its applicability to the natural language domain by designing an information exchange task as a benchmark.
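
As a bare-bones illustration of the H5W decomposition (a sketch of our own; the record type and field values are invented, not the authors' implementation), a situated event can be summarised by its answers to the six questions:

```python
# Illustrative H5W event record; not the authors' implementation.
from dataclasses import dataclass
from typing import Optional

@dataclass
class H5WEvent:
    who: str                     # the agent involved
    what: str                    # the action or object of the event
    where: str                   # spatial grounding
    when: float                  # temporal grounding, e.g. a timestamp
    why: Optional[str] = None    # inferred motive, if any
    how: Optional[str] = None    # manner or means, if observed

event = H5WEvent(who="user", what="hand over the red cube",
                 where="table-left", when=12.4, why="asked by the robot")
print(event)
```

Answering each field requires its own perception or reasoning process, which is what the framework organises.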

Article
DOF Decoupling Task Graph Model: Reducing the Complexity of Touch-Based Active Sensing
by Niccolò Tosi, Olivier David and Herman Bruyninckx
Robotics 2015, 4(2), 141-168; https://doi.org/10.3390/robotics4020141 - 19 May 2015
Cited by 1 | Viewed by 6435
Abstract
This article presents: (i) a formal, generic model for active sensing tasks; (ii) the insight that active sensing actions can very often be searched on less than six-dimensional configuration spaces (bringing an exponential reduction in the computational costs involved in the search); (iii) an algorithm for selecting actions that explicitly trades off information gain, execution time, and computational cost; and (iv) experimental results of touch-based localization in an industrial setting. Generalizing from prior work, the formal model represents an active sensing task by six primitives: configuration space, information space, object model, action space, inference scheme, and action-selection scheme; prior-work applications conform to the model, as illustrated by four concrete examples. On top of these primitives, the task graph is then introduced as the relation that represents an active sensing task as a sequence of low-complexity actions defined over different configuration spaces of the object. The presented act-reason algorithm is an action-selection scheme that maximizes the expected information gain of each action while explicitly constraining the time allocated to compute and execute the actions. The experimental contributions include localization of objects with: (1) a force-controlled robot equipped with a spherical touch probe; (2) a geometric complexity of the to-be-localized objects up to industrial relevance; (3) an initial uncertainty of (0.4 m, 0.4 m, 2π); and (4) a configuration of act-reason that constrains the time allocated to compute and execute the next action as a function of the current uncertainty. Localization is accomplished when the probability mass within a 5-mm tolerance reaches a specified threshold of 80%. Four objects are localized with final {mean; standard deviation} errors ranging from {0.0043 m; 0.0034 m} to {0.0073 m; 0.0048 m}.
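
The action-selection trade-off described above can be loosely sketched as follows (a hypothetical illustration; the gain estimates, costs, and budget rule are invented and far simpler than the paper's act-reason algorithm):

```python
# Hypothetical sketch of time-bounded, information-driven action selection.
# Gain estimates, costs, and the budget rule are invented for illustration.

def select_action(candidates, uncertainty, budget_per_bit=2.0):
    """Pick the feasible action with the highest expected information gain.

    The compute + execution time budget grows with the current uncertainty,
    so early (uncertain) steps may afford expensive, informative actions."""
    time_budget = budget_per_bit * uncertainty
    feasible = [a for a in candidates
                if a["compute_s"] + a["execute_s"] <= time_budget]
    if not feasible:  # nothing fits: fall back to the cheapest action
        return min(candidates, key=lambda a: a["compute_s"] + a["execute_s"])
    return max(feasible, key=lambda a: a["expected_gain_bits"])

actions = [
    {"name": "probe-top-face",  "expected_gain_bits": 1.8,
     "compute_s": 2.0, "execute_s": 5.0},
    {"name": "probe-side-edge", "expected_gain_bits": 1.1,
     "compute_s": 0.5, "execute_s": 2.5},
]
print(select_action(actions, uncertainty=4.0)["name"])  # probe-top-face
print(select_action(actions, uncertainty=1.0)["name"])  # probe-side-edge (fallback)
```

Scaling the budget with uncertainty mirrors, in spirit, the paper's idea of constraining deliberation time as localization progresses.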

Article
A Computational Model of Human-Robot Spatial Interactions Based on a Qualitative Trajectory Calculus
by Christian Dondrup, Nicola Bellotto, Marc Hanheide, Kerstin Eder and Ute Leonards
Robotics 2015, 4(1), 63-102; https://doi.org/10.3390/robotics4010063 - 23 Mar 2015
Cited by 21 | Viewed by 10576
Abstract
In this paper, we propose a probabilistic sequential model of Human-Robot Spatial Interaction (HRSI) using a well-established Qualitative Trajectory Calculus (QTC) to encode HRSI between a human and a mobile robot in a meaningful, tractable, and systematic manner. Our key contribution is to utilise QTC as a state descriptor and to model HRSI as a probabilistic sequence of such states. Apart from the directions of movement of human and robot modelled by QTC, attributes of HRSI such as proxemics and velocity profiles play vital roles in the modelling and generation of HRSI behaviour. In this paper, we present in particular how the concept of proxemics can be embedded in QTC to facilitate richer models. To facilitate reasoning on HRSI with qualitative representations, we show how we can combine the representational power of QTC with the concept of proxemics in a concise framework, enriching our probabilistic representation by implicitly modelling distances. We show the appropriateness of our sequential model of QTC by encoding different HRSI behaviours observed in two spatial interaction experiments. We classify these encounters to create a comparative measure, demonstrating the representational capabilities of the model.
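
For readers unfamiliar with QTC, its most basic component can be sketched as follows (a simplified illustration of the "moving towards/away" distinction only, with invented poses; the paper's calculus is richer):

```python
# Simplified sketch of the QTC 'moving towards/away' encoding.
# Poses are invented; the full calculus has further components.
import math

def qtc_symbol(p_now, p_next, other, eps=1e-3):
    """'-' if the agent moves towards the other, '+' if away, '0' if stable."""
    d_now = math.dist(p_now, other)
    d_next = math.dist(p_next, other)
    if d_next < d_now - eps:
        return "-"
    if d_next > d_now + eps:
        return "+"
    return "0"

def qtc_state(human_t, human_t1, robot_t, robot_t1):
    """Joint qualitative state of the human-robot pair over one time step."""
    return (qtc_symbol(human_t, human_t1, robot_t),
            qtc_symbol(robot_t, robot_t1, human_t))

# The human approaches while the robot holds position: state ('-', '0').
print(qtc_state((0.0, 0.0), (0.5, 0.0), (2.0, 0.0), (2.0, 0.0)))
```

Sequences of such states form the probabilistic model's alphabet; the paper's proxemics extension additionally conditions on qualitative distance bands.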
