Article

RTPO: A Domain Knowledge Base for Robot Task Planning

College of Intelligence Science and Technology, National University of Defense Technology, Changsha 410073, China
* Author to whom correspondence should be addressed.
Electronics 2019, 8(10), 1105; https://doi.org/10.3390/electronics8101105
Submission received: 21 August 2019 / Revised: 18 September 2019 / Accepted: 26 September 2019 / Published: 1 October 2019
(This article belongs to the Special Issue Cognitive Robotics & Control)

Abstract

Knowledge can enhance the intelligence of robots' high-level decision-making. However, there has been no domain knowledge base dedicated to robot task planning. To represent the knowledge involved in robot task planning, this work designs and implements the Robot Task Planning Ontology (RTPO), so that robots know how to carry out task planning to reach the goal state. RTPO is divided into three parts: task ontology, environment ontology, and robot ontology, each of which is described in detail. OWL (Web Ontology Language) is adopted to represent the knowledge in robot task planning. The paper then proposes a method to evaluate the scalability and responsiveness of RTPO. Finally, a task planning algorithm is designed on top of RTPO, and experiments on the real robot TurtleBot3 verify its usability. The experimental results demonstrate that RTPO has good scalability and responsiveness, and that the robot can achieve given high-level tasks based on RTPO.

1. Introduction

Nowadays, artificial intelligence (AI) is a rapidly growing field in which both theoretical studies and practical applications are booming. Deep learning, a typical AI approach, has propelled the field into a new stage of development [1]. In robotics, AI technology represented by deep learning has also become extensive and vital in several applications [2,3]. However, deep learning models are end-to-end learners whose results are opaque and hard to interpret, which limits their use in areas that require knowledge reasoning. For example, in robot combat task planning, the plans need to be interpretable so that the operational commander can weigh their advantages and disadvantages.
Since the 1970s, AI researchers have gradually realized that symbolic knowledge representation plays a key role in more powerful AI systems, holding that knowledge and reasoning are the core of AI. Since then, ontology has developed robustly as a form of knowledge base that can represent and help understand the sophisticated world. It has been widely employed in various fields such as AI [4,5], the Semantic Web [6,7], and information science [8]. Ontology-based task planning is essentially a series of queries and reasoning steps over ontology knowledge [9]. Because an ontology is made up of a large number of individuals, concepts, and their semantic relations, it can make the elements of a query and the reasoning path of the task planning process interpretable [10]. Similar to a human's way of thinking, a robot can also utilize knowledge and knowledge reasoning to realize smart high-level decision-making.
The challenge in building RTPO is how to represent intricate task knowledge efficiently and reasonably. Temporal and spatial information, as well as continuous and discrete information, must be considered thoroughly. Furthermore, RTPO needs good scalability and responsiveness so that it remains usable within the task planning algorithm.
Previous research on robot task planning has shown that world knowledge, which is categorized as static knowledge, can be represented easily and feasibly; as a result, the problem file of a task planner can be obtained efficiently from world knowledge [11,12]. In contrast, causal knowledge [13] is more often represented in a formal language, such as PDDL (Planning Domain Definition Language) [14] or HTN (Hierarchical Task Network) [15]. Correspondingly, as shown in Figure 1, the domain file of a task planner tends to be written manually from causal knowledge, which makes it comparatively complex and limits its universality in large-scale applications. Consequently, this paper proposes a task planning algorithm based on the built RTPO, avoiding the complex process of manually generating the domain file.
The representations of high-level tasks and atomic actions in RTPO are independent of each other. Task planning is then realized by matching the execution preconditions of atomic actions and their effects on the environment, from the initial state to the goal state. In this way, as the input tasks change, the task planning module can still operate with the currently available atomic action resources. The plans generated by the task planning algorithm are added back to RTPO, which improves the efficiency of task planning when the same task needs to be planned again.
The research work mainly involves two aspects. Firstly, an ontology knowledge base is built for robot task planning and a method to evaluate it is proposed. Secondly, an experiment with an indoor study case is carried out to verify the usability of RTPO and the flexibility of the proposed task planning algorithm. For the real robot experiments, we used ROS (Robot Operating System) [16] under Ubuntu as the software system and TurtleBot3 as the hardware platform. The experimental results demonstrate that RTPO has good usability and that the task planning algorithm is flexible enough to handle unexpected events.
There are several notable studies and applications of robot knowledge bases. KnowRob [17,18,19], an integrated knowledge management system for autonomous robots, targets the construction of an indoor service robot knowledge base. It is composed of an ontology, entities, and an extensible reasoning engine in the OWL language. However, KnowRob is limited by its relatively sparse knowledge. RoboEarth [20,21,22], built on KnowRob, defines sub-actions of specific tasks along with the temporal and spatial constraints among them. That is, when a high-level task needs to be planned, the user must request that specific high-level task to generate its corresponding plans. Because the task planning method relies on specific high-level task instructions, planning may fail when other, different task instructions are given.
Besides this, ORO (OpenRobots Common-Sense Ontology) [23,24] is built in OWL and stored in the OpenJena ontology management library, with knowledge reasoning performed by the Pellet reasoner. However, the ORO knowledge management system focuses on the interaction between robots and humans. The SWARMs (Smart and Networking Underwater Robots in Cooperation Meshes) ontology [25] is built to represent and understand the knowledge of unmanned underwater robots and to facilitate cooperation between them. The SWARMs ontology in Reference [25] is divided into four domain-specific ontologies (environmental model, vehicle model, communication model, and mission model) and a core ontology that connects them. The Semantic Web Rule Language (SWRL) is adopted in the SWARMs ontology to compensate for the inability of OWL to represent complex rules and relations. Reference [26] evaluates the SWARMs ontology and verifies its applicability through experiments in which multiple underwater robots cooperate. However, the SWARMs ontology is constructed solely for unmanned underwater robots, so it is hard to extend to other robot application types.
Ontology theory has also been applied to other fields, such as industrial production collaboration [27] and navigation in indoor scenarios [28,29]. The work in References [27,30] describes the domain knowledge for robot task planning in the logic language BC [31,32]. However, this approach is disadvantageous for sharing knowledge and for updating newly inferred knowledge. In contrast, the ontology representation language OWL [33,34] has sharing as a natural advantage, since OWL ontologies can easily be shared on the web.
The existing robot ontology knowledge bases aim at complete representations covering every aspect of robot task planning. However, the actual application of these ontologies still lacks intelligence: the knowledge is simply stored in libraries and queried for output when needed. That is, different pieces of knowledge and reasoning are not associated with each other to derive new knowledge, as in a human's way of thinking.
The main contributions of this paper can be summarized as follows:
  • A domain knowledge base, RTPO, is contributed so that robots can better understand task planning knowledge.
  • An evaluation method for the knowledge base is proposed and implemented to test its scalability and responsiveness.
  • A task planning algorithm based on RTPO is proposed; it has good flexibility and avoids the shortcoming of manually editing domain knowledge in traditional task planners.
  • The proposed approach is implemented and validated experimentally on a real robot.
The rest of the paper is arranged as follows. Section 2 introduces the purpose of and requirements for building RTPO. Section 3 presents the method used to build RTPO. Section 4 describes the knowledge representation in RTPO. Section 5 evaluates the scalability and responsiveness of RTPO and applies a task planning algorithm based on RTPO on a real robot. Finally, Section 6 summarizes the work and outlines future work.

2. Building Considerations for Robot Task Planning Ontology (RTPO)

This section presents the considerations behind building RTPO. The first part explains the purpose of building RTPO, and the second part sets out the requirements.

2.1. Purpose of Robot Task Planning Ontology (RTPO)

It is essential to clarify the purpose of building RTPO. With a clear understanding of this purpose, it becomes possible to decide which contents should be included in RTPO, how they should be partitioned, and which tools should be adopted. RTPO mainly covers knowledge connected with robot task planning, including concepts related to the robot itself, the environment, and the task. Our RTPO design and implementation aim to provide a comprehensive and usable knowledge base for robot task planning. Various heterogeneous robots can query and reason over the knowledge base to obtain useful information, which helps to improve the efficiency of task planning and to increase intelligence by reasoning over knowledge to plan automatically.

2.2. Requirements of Robot Task Planning Ontology (RTPO)

There are many approaches and formats for building an ontology, but no unified standard pattern. Nevertheless, a good ontology should possess standard features that do not depend on the chosen approach or format. Therefore, RTPO should meet a set of requirements to ensure an appropriate outcome. In general, this paper considers that a good ontology in robotics should have the following characteristics [35]:
  • Unambiguous knowledge representation, so that it is easily understood by both humans and robots.
  • Strong editability, so that it is easily operated and utilized by developers.
  • Consistent knowledge representation, free from contradictory knowledge or definitions.

3. The Building of Robot Task Planning Ontology (RTPO)

This section gives a detailed description of the method used to build RTPO, covering the model, the building approaches, and knowledge reasoning.

3.1. The Model of Robot Task Planning Ontology (RTPO)

This section starts with the formal definitions of the models employed later for building RTPO.
Definition 1.
A high-level task model is defined as a 5-tuple:
$T = (T_{name}, T_{attr}, T_{entity}, T_{tasker}, T_{methods})$
consisting of a set of task names $T_{name}$, a set of task attributes $T_{attr}$ (e.g., the start time and initial state), a set of entities carrying out the tasks $T_{entity}$, a set of taskers giving the high-level tasks $T_{tasker}$, and a set of methods to decompose the high-level task $T_{methods}$.
Definition 2.
An atomic action model is defined as a 5-tuple:
$A = (A_{name}, A_{attr}, A_{entity}, A_{pre}, A_{effect})$
consisting of a set of atomic action names $A_{name}$, a set of atomic action attributes $A_{attr}$, a set of entities carrying out the atomic actions $A_{entity}$, a set of execution preconditions of atomic actions $A_{pre}$, and a set of action effects $A_{effect}$.
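To make the two definitions concrete, the following Python sketch encodes them as plain data structures; the field names mirror the tuples above, the effect set is split into add and delete parts as in Figure 8, and the example instance uses hypothetical values rather than entries from the actual RTPO.

from dataclasses import dataclass
from typing import Dict, List, Set


@dataclass
class HighLevelTask:
    """The 5-tuple T = (T_name, T_attr, T_entity, T_tasker, T_methods)."""
    name: str                   # T_name
    attributes: Dict[str, str]  # T_attr, e.g., start time and initial state
    entities: List[str]         # T_entity: robots that carry out the task
    taskers: List[str]          # T_tasker: who issued the task
    methods: List[str]          # T_methods: decompositions into sub-actions


@dataclass
class AtomicAction:
    """The 5-tuple A = (A_name, A_attr, A_entity, A_pre, A_effect),
    with A_effect split into add and delete sets as in Figure 8."""
    name: str                   # A_name
    attributes: Dict[str, str]  # A_attr
    entities: List[str]         # A_entity
    preconditions: Set[str]     # A_pre: state facts that must hold before execution
    adds: Set[str]              # facts added to the environment state
    deletes: Set[str]           # facts deleted from the environment state


# Purely illustrative instance; the values are not taken from the actual RTPO.
get_handbook = AtomicAction(
    name="GetHandbook",
    attributes={},
    entities=["tb1"],
    preconditions={"tb1 at Handbook", "Jack Has Handbook"},
    adds={"tb1 Has Handbook"},
    deletes={"Jack Has Handbook"},
)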

3.2. The Approaches to Build Robot Task Planning Ontology (RTPO)

Several software editors are available for editing ontologies, such as Protégé [36], the NeOn Toolkit, OntoWiki, and so on [37]. The editor employed for building RTPO is Protégé, developed by Stanford University, which works with RDF (Resource Description Framework) triples. Protégé is implemented in Java and, with its many embedded plugins, has evolved into one of the essential ontology editors [36]. Protégé also supports many concept constraints, which helps to add and update the corresponding inferred knowledge. In addition, the knowledge for robot task planning can be derived from the Internet, from books, or from manual editing by humans, as shown in Figure 2.
In Figure 2, the querying of relevant knowledge from RTPO is realized in SWI-Prolog [38]. This logic programming language integrates well with ROS and can easily and quickly query the relevant knowledge from RTPO. Rules and knowledge can also be edited and stored in SWI-Prolog. The matching queries used later in the task planning algorithm are conducted through such rules.
In the concrete implementation, the knowledge is stored in .owl and .pl files: the .owl file stores the ontology knowledge as RDF triples, and the .pl file stores the rule knowledge in Prolog. For the application of RTPO, the paper adopts ROS as the robot middleware, which is characterized by ROS nodes communicating through the topic mechanism.
OWL, whose files can be generated by Protégé, is further applied as the logic language in our research work. An OWL file is generated after the ontology is built; it has excellent portability and can be reused by other approaches. It also breaks down the knowledge interaction barrier among different knowledge systems. An OWL file can be published on the World Wide Web and may refer to, or be referred to from, other OWL files [34].
ROS provides the rosprolog package, through which users can conveniently explore and debug the knowledge in a terminal window. However, to use the knowledge in a robot's control program, a way to send queries from that program is needed. This functionality is provided by the json_prolog package (http://wiki.ros.org/json_prolog), which offers a service that exposes a Prolog shell via a ROS node. ROS node programs can be written in multiple languages, such as Python, C++, and Java, and ROS provides a number of packages for developers. A snapshot of the hierarchy of RTPO is shown in Figure 3.
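As a rough illustration of this query path, the sketch below follows the Python client shown in the json_prolog tutorial; the exact import path and client API are assumptions that may differ between json_prolog versions, and the queried predicate owl_individual_of/2 is the one used later in Section 5.1.

#!/usr/bin/env python
# Minimal sketch: send a Prolog query to the json_prolog service from a ROS node.
# The import path and client API follow the json_prolog tutorial and may differ
# in other versions of the package.
import rospy
from json_prolog import json_prolog  # assumed module layout

if __name__ == '__main__':
    rospy.init_node('rtpo_query_example')
    prolog = json_prolog.Prolog()

    # Ask RTPO for all individuals of the class 'Obstacles'.
    query = prolog.query("owl_individual_of(A, 'Obstacles')")
    for solution in query.solutions():
        rospy.loginfo('Found obstacle individual: %s', solution['A'])
    query.finish()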

4. Knowledge Representation in Robot Task Planning Ontology (RTPO)

This section introduces the knowledge representation in the RTPO we have built, including the knowledge types, the knowledge structure, and knowledge reasoning.

4.1. The Knowledge in Robot Task Planning Ontology (RTPO)

During the robot's decision process, the task planning module receives a high-level task and generates a sequence of atomic actions. Executing this atomic action sequence affects the environment, changing its state until the goal state is achieved. The process of task planning evidently requires knowledge, which can be divided into two parts. One is the knowledge describing the initial state of the world, which corresponds to the problem file of a task planner. The other is the knowledge describing how a given high-level task can be planned from the initial state to the goal state, which corresponds to the domain file of a task planner. The first kind exists as world knowledge, including information about the environment and the robots themselves. The second kind is causal knowledge, including the preconditions of atomic actions and their effects on the environment state. Taking the high-level task "DeliveryHandbooktoLeo" as an example, the task planner requires the locations and capabilities of the robots, the locations of humans, the environmental map, and so on; this is world knowledge. Besides this, the robot needs to know how to search for the optimal sequence of atomic actions from the initial state to the goal state. For example, the decomposition of "DeliveryHandbooktoLeo" includes the following steps: move to Jack, get the handbook, move to Leo, and give the book to Leo. That is the causal knowledge [13].
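For concreteness, the split between the two kinds of knowledge can be illustrated in Python: the world knowledge for this example is just a set of state facts forming the initial and goal states, while the causal knowledge lives in the preconditions and effects of atomic actions. The fact strings below follow the wording of the experiment in Section 5.2.3; the representation itself is an illustrative sketch, not the internal format of RTPO.

# World knowledge: the initial state of the "DeliveryHandbooktoLeo" scenario,
# written as plain state facts (wording follows Section 5.2.3).
initial_state = {
    "Leo in Room#2",
    "Jack in Room#1",
    "Jack Has Handbook",
    "tb1 in Room#3",
    "tb1 has_ability DeliveryTask",
}

# Goal state the planner must reach; the causal knowledge (preconditions and
# effects of atomic actions) describes how to get from one to the other.
goal_state = {
    "tb1 in Room#2",
    "Leo in Room#2",
    "Leo has Handbook",
}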
Similar to a human, when a robot wants to complete a task, it first has to know who it is. This is the robot-itself knowledge, including hardware and software, location information, dynamics information, and so forth. Secondly, in order to complete the task, it also has to be familiar with the surrounding environment, namely environmental knowledge, which consists of the locations and identities of humans and objects, the environment map, information about other robots, and so on. Finally, given an assignment, it needs to understand how to decompose the given high-level task into atomic actions and how to re-plan when the environment changes; this is the task knowledge.
Therefore, these three kinds of knowledge are central to our research work. Accordingly, the paper divides the RTPO into three parts: robot ontology, environment ontology, and task ontology. Next, the structure of RTPO is described in detail.

4.2. The Structure of Robot Task Planning Ontology (RTPO)

RTPO contains three parts: robot ontology, environment ontology, and task ontology. The task ontology describes the knowledge connected with robot tasks, such as task decomposition, task allocation, and task execution. The robot ontology portrays the knowledge and concepts corresponding to the robot itself in a hierarchical structure. Finally, the environment ontology describes the knowledge relevant to the environment, such as the environmental map and environmental objects. The overall structure of RTPO is shown in Figure 4.

4.2.1. Robot Ontology

As shown in Figure 5, the robot ontology contains three parts: robots, hardware, and software, which together describe the capabilities and characteristics of robots. Robots include various types, such as ground robots, underwater robots, and aerial robots; robot concepts can be instantiated as individuals in Protégé. The hardware part covers the components and devices the various types of robots may have, and is further divided into perception devices, navigation devices, and base devices. Navigation devices are the hardware a robot needs to navigate and localize, such as an IMU (Inertial Measurement Unit). Perception devices are the hardware a robot needs to perceive and understand the environment, such as lidar and cameras. Base devices are the hardware related to the robot's low-level control, such as motors and batteries. The software part consists of functional ROS nodes, each of which can publish its own topics and subscribe to others, enabling communication among control nodes. A variety of relationships can be defined by developers to describe how the robot ontology relates to the other ontologies.

4.2.2. Environment Ontology

The environment ontology is mainly designed to describe in detail the indoor environment where the TurtleBot3 moves. According to the experimental environment, as seen in Figure 6, the environment ontology includes the map, obstacles, doors, and other objects in the indoor environment. Besides this, parts of the environment ontology, such as the doors and rooms, are instantiated. RTPO is capable of adding and updating environment knowledge after the robots' task planning.

4.2.3. Task Ontology

As displayed in Figure 7, the paper builds the task ontology around four typical tasks: the monitor task, the mapping task, the delivery task, and the charge task. The domain files that most planners use for decomposition are written manually, which is inefficient and not very portable. The task description structure is similar to HTN, so the given high-level tasks can be decomposed through the ontology for task planning. To make better use of the hierarchical structure, the representation method of the task ontology is designed to meet the experimental requirements. Furthermore, the preconditions of each task and sub-action are defined, including the robot's capabilities and the environment state, and the atomic actions are defined to have effects (delete and add) on the environment state.
Figure 8 illustrates the representation method of atomic actions, which are defined as the smallest-granularity actions that a robot can execute directly. Atomic actions are made up of execution preconditions and action effects; the latter change the environment state, for example by deleting or adding state facts, as shown in Figure 8.
Generally speaking, for a given high-level task, for example DeliveryHandbooktoLeo in Figure 9, its representation in the ontology is insufficient in the initial state and lacks the knowledge updates obtained by reasoning, such as the constraints among different atomic actions. At the initial state, the task knowledge is represented in the following incomplete form.
Class: DeliveryHandbooktoLeo
  SubClassOf:
   DeliveryTask
(subAction some GetHandbook)
and (subAction some GiveHandbookToLeo)
and (subAction some MovetoHandbook)
and (subAction some MovetoLeo)
The robot can run a specific task planning algorithm on the basis of the initial environment state. The task planning algorithm matches the preconditions and effects of atomic actions so that execution order constraints can be generated and added to the task ontology; the system can then obtain the atomic action sequence of the specific task. At this point, new knowledge is obtained and updated by reasoning over existing knowledge.
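One simple way to picture how such execution order constraints can be derived is sketched below: an ordering pair (a, b) is recorded whenever some add-effect of action a is required as a precondition of action b. This is an illustrative reconstruction of the matching idea, not the authors' exact procedure.

from itertools import permutations


def ordering_constraints(actions):
    """Return pairs (a_name, b_name) meaning "a occurs before b", derived by
    matching the add-effects of a against the preconditions of b.

    `actions` is a list of AtomicAction objects as sketched in Section 3.1."""
    constraints = []
    for a, b in permutations(actions, 2):
        if a.adds & b.preconditions:
            constraints.append((a.name, b.name))
    return constraints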
After the task planning algorithm runs, the complete representation of the high-level task decomposition shown below is obtained, composed of the parent class, the sub-actions, and the execution order constraints among the sub-actions. When the same task needs planning again, users can query the ontology to obtain its atomic action sequence directly, which improves the efficiency of task planning.
Class: DeliveryHandbooktoLeo
  SubClassOf:
   DeliveryTask
(subAction some GetHandbook)
and (subAction some GiveHandbookToLeo)
and (subAction some MovetoHandbook)
and (subAction some MovetoLeo)
and (orderingConstraints value DeliveryActions12)
and (orderingConstraints value DeliveryActions13)
and (orderingConstraints value DeliveryActions14)
and (orderingConstraints value DeliveryActions23)
and (orderingConstraints value DeliveryActions24)
and (orderingConstraints value DeliveryActions34)
Relying on individuals in Protégé, the order constraints among sub-actions are defined as follows. Assuming that a specific task has $n$ sub-actions, the total number of constraints needed to completely define the execution order of all sub-actions is $C_n^2 = n(n-1)/2$; for the four sub-actions of DeliveryHandbooktoLeo above, this gives the six ordering individuals DeliveryActions12 through DeliveryActions34.
Individuals: DeliveryActions12
  Types:
    PartialOrdering-Strict
  Annotations:
    occursAfterInOrdering GetHandbook
    occursBeforeInOrdering MovetoHandbook

4.2.4. Communications Among the Three Parts

Apart from the contents above, corresponding relationships also exist among the three parts, connecting one piece of knowledge with another. These relationships can be defined according to developers' own needs. Taking the indoor service monitor task as an example (Figure 10), the monitor task is carried out by the mobile wheeled robot TurtleBot3 in the environmental map of Room2, and the latter is built by the lidar of TurtleBot1. In this way, the three ontology modules can be linked and constrained so that they jointly make up the whole ontology.

4.3. Knowledge Reasoning

Figure 11 gives an example of reasoning in the RTPO knowledge base. On the left, the knowledge and the relationships between pieces of knowledge are shown, but the position of the book is not indicated; that is, the position of the handbook cannot be obtained from the existing ontology knowledge alone. On the right, after adding the following rule, we can infer that the exact location of the handbook is room #1.
in(Book, Room) :-
  has(Human, Book),
  in(Human, Room).
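The same derivation can be pictured procedurally. The short Python sketch below applies the rule to a two-fact base; it only illustrates the inference step and is not how the ontology reasoning is actually implemented.

# Tiny procedural illustration of the rule in(Book, Room) :- has(Human, Book), in(Human, Room).
has_facts = {("Jack", "Handbook")}   # has(Human, Book)
in_facts = {("Jack", "Room#1")}      # in(Human, Room)

inferred = {(book, room)
            for (human, book) in has_facts
            for (h, room) in in_facts
            if h == human}

print(inferred)  # {('Handbook', 'Room#1')}: the handbook is inferred to be in room #1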

5. Evaluation and Experiments

Evaluating a robot knowledge base is complicated work. So far, there is neither an established benchmark nor agreed evaluation methods. In addition, each robot knowledge system has a different coverage and application field, which makes evaluating robot knowledge bases challenging: it is difficult to assess them with only one or a few indicators. Therefore, considering the knowledge system and its application field, this section evaluates RTPO with a combined evaluation method. We adopt a quantitative evaluation of the scalability and responsiveness of RTPO, and then conduct an experimental case study on TurtleBot3 to verify the usability of RTPO and the flexibility of the task planning algorithm based on it.

5.1. The Evaluation of Robot Task Planning Ontology (RTPO)

The scalability of a robot knowledge system lies in the efficiency of knowledge updating and storage. The ability to add new knowledge is the prerequisite for expanding the applications of the knowledge base, and good scalability is the basis of its sustainable development. To test the scalability of RTPO, we designed a test procedure around knowledge base instantiation, i.e., the update of individual knowledge, such as adding a cabinet or a chair to the indoor environment. Specifically, we wrote a predicate for writing new knowledge to RTPO, which can automatically generate a large number of individuals through a simple recursive loop:
% Generate Num individuals in RTPO by calling new_individuals/1 Num times.
g_individuals(0).
g_individuals(Num) :-
  Num > 0,
  new_individuals(Num),
  Next is Num - 1,
  g_individuals(Next).
The g_individuals(Num) predicate simply calls new_individuals/1 Num times, generating Num individuals in RTPO. We used Prolog's time/1 meta-predicate, i.e., ?- time(g_individuals(N)) for a given N, to measure the performance of g_individuals/1. The scalability of RTPO is tested by varying the number of automatically generated individuals and measuring the time consumed for each number. Figure 12 shows how the consumed time changes with the number of generated individuals (blue square markers). The consumed time increases linearly with the number of generated individuals; it takes about 2.34 s to generate 55,000 individuals, giving a maximum generation rate of about 23,500 individuals per second. For comparison, KnowRob [18] has a maximum generation rate of 22,000 individuals per second and ORO [24] of 7245 individuals per second.
The responsiveness of a knowledge system is mainly reflected in its knowledge query speed: the faster the query, the more responsive the system and the better its performance. We use the Prolog query ?- time(findall(A, owl_individual_of(A, 'Obstacles'), As)) to measure the speed of querying individual knowledge. As shown in Figure 12 (yellow circle markers), the response time increases linearly with the number of individuals queried. Querying 52,000 individuals completes within 10 s, which ensures the real-time performance of RTPO in the application. Compared with KnowRob [18], both knowledge systems have similar responsiveness. In short, RTPO has good scalability and responsiveness.

5.2. Verification Using a Case Study

In this section, we take an indoor delivery task as a case study of robot task planning based on RTPO, carried out on TurtleBot3.

5.2.1. Hardware and Software

As shown in Figure 13, the hardware system is built on TurtleBot3, a new generation of mobile robot platform based on ROS (Robot Operating System). The TurtleBot series, from TurtleBot1 to TurtleBot3, has become increasingly powerful together with ROS, making it an ideal platform for research work. Table 1 lists the configuration of the TurtleBot3 Burger.
Accordingly, the software system is based on ROS, the most popular and vital middleware for robot system development. Figure 14 shows our software system framework, in which the design and development of ROS nodes forms the central part. Based on ROS, the framework is divided into three control levels: control of actions, control of navigation, and control of velocity, corresponding to the task planning module, the navigation module, and the base control module, respectively.
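To make the three-level structure concrete, a bare-bones sketch of the top level is given below: the task planning module publishes each planned atomic action on a topic to which the navigation level can subscribe. The topic name and message type are assumptions made for this sketch, not the interfaces used in the paper.

#!/usr/bin/env python
# Illustrative sketch of the action-control level: publish each planned atomic
# action on a topic for the navigation level to consume. The topic name
# '/atomic_action' and the std_msgs/String message type are assumptions.
import rospy
from std_msgs.msg import String


def publish_plan(plan):
    pub = rospy.Publisher('/atomic_action', String, queue_size=10)
    rate = rospy.Rate(1)  # dispatch one action per second, purely illustrative
    for action in plan:
        pub.publish(String(data=action))
        rospy.loginfo('Dispatched atomic action: %s', action)
        rate.sleep()


if __name__ == '__main__':
    rospy.init_node('task_planning_module')
    publish_plan(['MovetoHandbook', 'GetHandbook', 'MovetoLeo', 'GiveHandbookToLeo'])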

5.2.2. The Experimental Scenario

An assumption is made in the case study that the robot is able to grasp objects. The TurtleBot3 used in the experiment only provides mobility and obstacle perception; however, our research focuses on top-level decision-making for robot task planning rather than on basic-level issues such as movement control and navigation. Therefore, assuming that the robot can grasp objects is reasonable.
In the real robot experiment, the robot needs to get the handbook and then give it to Leo. The experimental scenario we built is shown in Figure 15a. The object elements in the environment, for example persons, books, and bookshelves, are indicated by labelled objects on the ground. The environment map built with the LIDAR LDS-01 is shown in Figure 15b.
During the verification experiment, the devices communicate over a LAN, sharing a single master and ROS topics to meet the communication requirements between devices. As shown in Figure 16, the ontology knowledge is stored on computer PC_2, while computer PC_1 acts as the master computer running the master node. TurtleBot tb_1 stores the programs for map building, navigation, and path planning. The map building algorithm is Gmapping, the local path planning algorithm is the DWA (Dynamic Window Approach), the global path planning algorithm is D*, and the localization algorithm is amcl.

5.2.3. The Experiments and Results

The proposed task planning algorithm based on RTPO is shown as pseudo-code in Algorithm 1 below (a minimal Python sketch of the same procedure follows the listing). Its inputs are the initial state s, the given high-level task t, and the ontology knowledge O. Its output is the sequence of atomic actions, i.e., the plan for accomplishing t from the initial state.
Algorithm 1 Task Planning Algorithm Based on Robot Task Planning Ontology (RTPO)
Input: s: the initial state; t: the given high-level task; O: the ontology knowledge
Output: P: a plan for accomplishing t from the initial state
1: procedure generate a plan for accomplishing t
2:    P = the empty plan
3:   function task_planning (t)
4:     if t is a primitive task then
5:      modify s by deleting del(t) and adding add(t)
6:      append t to P
7:     else
8:      for all subtask in subtasks(t) do
9:        if preconditions(subtask) match s then
10:         task_planning (subtask)
11:   return P
12: end procedure
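For readers who prefer an executable form, a minimal Python rendering of Algorithm 1 follows. The helper queries (is_primitive, subtasks, preconditions, and the add and delete effect lists) are assumed to be answered from RTPO and are left abstract here; this is a sketch, not the authors' implementation.

def task_planning(task, state, ontology, plan):
    """Recursive decomposition following Algorithm 1.

    `task` is a high-level task or atomic action name, `state` a set of state
    facts, `ontology` an object wrapping RTPO queries (assumed interface), and
    `plan` the list of atomic actions accumulated so far."""
    if ontology.is_primitive(task):
        # Apply the action: delete del(t), add add(t), then record it in the plan.
        state -= ontology.delete_effects(task)
        state |= ontology.add_effects(task)
        plan.append(task)
    else:
        for subtask in ontology.subtasks(task):
            if ontology.preconditions(subtask) <= state:
                task_planning(subtask, state, ontology, plan)
    return plan

Called as task_planning('DeliveryHandbooktoLeo', set(initial_state), rtpo, []), with rtpo answering those queries against RTPO, such a sketch would return the atomic action sequence reported in Figure 18.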
The specific implementation process of the case study on TurtleBot3 is shown in Figure 17. The atomic action sequence is obtained by the task planning algorithm based on RTPO, as shown in Figure 18. The given high-level task "DeliveryHandbooktoLeo", whose initial environment state is "Leo in Room#2; Jack in Room#1; Jack Has Handbook; tb1 in Room#3; tb1 has_ability DeliveryTask", is decomposed into the atomic action sequence MovetoHandbook, GetHandbook, MovetoLeo, and GiveHandbookToLeo. The corresponding action attributes, such as the target point and the time constraints of some actions, are obtained by analyzing the atomic action list and querying RTPO. The robot then subscribes to the corresponding messages and performs the actions through the navigation and path planning algorithms. Finally, the given high-level task is completed, reaching the goal environment state "tb1 in Room#2; Leo in Room#2; Leo has Handbook".
Figure 19 shows a sequence of snapshots of the real execution of the atomic action sequence generated by decomposing the given high-level task "DeliveryHandbooktoLeo" in the real-world experimental scenario. The atomic action sequence obtained from RTPO is executed by navigating across the target points in the proper order according to each atomic action's type; for example, a Moveto action requires moving to a specific target point. Figure 19a shows the TurtleBot tb_1 at its initial position. Figure 19b shows that tb_1 has subscribed to the topic of the atomic action MovetoHandbook and is executing it. Figure 19c shows tb_1 executing the atomic action MovetoLeo, and Figure 19d shows that tb_1 has arrived at Leo's position. The experimental video is available as supplementary material Video S1. The experimental result demonstrates that the robot knowledge base RTPO has good usability and can be used effectively in robot task planning with the proposed task planning algorithm.
Another test scenario was designed and implemented to further validate the flexibility of the task planning algorithm based on RTPO. In this scenario, we suppose that Leo asks the TurtleBot3 to put the handbook on the bookshelf in room #2 after the robot has completed the delivery task "DeliveryHandbooktoLeo". However, according to the battery topic published by the TurtleBot3, its battery becomes insufficient at this time. The robot therefore makes a flexible plan: it first recharges its battery in Room #3 and then puts the handbook on the bookshelf in Room #2, as shown in Figure 20. The given high-level task "PutHandbookonBookshelf" from Leo is decomposed into the atomic action sequence GetHandbook, BatteryCharge, MovetoBookshelf, and PutonHandbook. Figure 20e shows the TurtleBot tb_1 moving to the charge point to recharge, and Figure 20f shows tb_1 moving to the target position of the bookshelf while carrying the handbook. The experimental video is available as supplementary material Video S2. The experimental results show that the task planning algorithm based on RTPO has good flexibility in addressing unexpected events, such as an insufficient battery.

6. Conclusions

In conclusion, the paper builds a robot ontology called RTPO for robot task planning. RTPO is designed and implemented, and an evaluation method is proposed to test its scalability and responsiveness. The test results show that RTPO performs well in scalability and responsiveness compared with existing knowledge bases. Finally, we propose a task planning algorithm based on RTPO and conduct real robot experiments to verify the usability of RTPO and the flexibility of the proposed algorithm. The experimental results show that the robot can complete the given high-level task smoothly and can also address unexpected events with good flexibility.
Future research will mainly focus on the following aspects. Firstly, a large number of robot ontologies are already available, such as KnowRob, ORO, and SWARMs; future research should give more consideration to the fusion of different ontologies. Secondly, multiple heterogeneous robots can cooperate to accomplish complex tasks that a single robot cannot, so it is of great significance to study cooperative task planning and applications based on multi-robot task planning. Additionally, the real world is complex and changeable due to inaccuracy, randomness, and incompleteness, so it is necessary to study robot task planning under uncertain environments. Finally, the application of cloud-based knowledge will reduce robots' dependence on specific hardware, which is conducive to research on multi-agent systems and swarm intelligence.

Supplementary Materials

The following are available online at https://susy.mdpi.com/user/manuscripts/displayFile/8ec60ffb629e5ad518f79653d69d980d/supplementary, Video S1: DeliveryHandbooktoLeo, Video S2: PutHandbookonBookshelf.

Author Contributions

Conceptualization, X.S. and Y.Z.; Formal analysis, X.S.; Funding acquisition, J.C.; Investigation, X.S.; Methodology, X.S. and Y.Z.; Project administration, Y.Z. and J.C.; Resources, J.C.; Supervision, J.C.; Validation, X.S.; Writing – original draft, X.S.; Writing – review & editing, Y.Z. and J.C.

Funding

This research was funded by the National Natural Science Foundation of China (Grant No. 61806212, No. 61603403, No. U1734208, and No. 61702528).

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Goodfellow, I.; Bengio, Y.; Courville, A. Deep Learning; MIT Press: Cambridge, MA, USA, 2016. [Google Scholar]
  2. Yang, P.-C.; Suzuki, K.; Kase, K.; Sasaki, K.; Sugano, S.; Ogata, T. Repeatable folding task by humanoid robot worker using deep learning. IEEE Robot. Autom. Lett. 2016, 2, 397–403. [Google Scholar] [CrossRef]
  3. Levine, S.; Pastor, P.; Krizhevsky, A.; Ibarz, J.; Quillen, D. Learning hand-eye coordination for robotic grasping with deep learning and large-scale data collection. Int. J. Robot. Res. 2018, 37, 421–436. [Google Scholar] [CrossRef]
  4. Shiang, C.W.; Tee, F.S.; Halin, A.A.; Yap, N.K.; Hong, P.C. Ontology reuse for multiagent system development through pattern classification. Softw. Pract. Exp. 2018, 48, 1923–1939. [Google Scholar] [CrossRef]
  5. El-Sappagh, S.; Alonso, J.M.; Ali, F.; Ali, A.; Jang, J.-H.; Kwak, K.-S. An ontology-based interpretable fuzzy decision support system for diabetes diagnosis. IEEE Access 2018, 6, 37371–37394. [Google Scholar] [CrossRef]
  6. Liu, J.; Li, Y.; Tian, X.; Sangaiah, A.K.; Wang, J. Towards Semantic Sensor Data: An Ontology Approach. Sensors 2019, 19, 1193. [Google Scholar] [CrossRef]
  7. Wen, Y.; Zhang, Y.; Huang, L.; Zhou, C.; Xiao, C.; Zhang, F.; Peng, X.; Zhan, W.; Sui, Z. Semantic Modelling of Ship Behavior in Harbor Based on Ontology and Dynamic Bayesian Network. ISPRS Int. J. Geo-Inf. 2019, 8, 107. [Google Scholar] [CrossRef]
  8. Ibrahim, M.E.; Yang, Y.; Ndzi, D.L.; Yang, G.; Al-Maliki, M. Ontology-based personalized course recommendation framework. IEEE Access 2019, 7, 5180–5199. [Google Scholar] [CrossRef]
  9. Jeon, H.; Yang, K.-M.; Park, S.; Choi, J.; Lim, Y. An Ontology-Based Home Care Service Robot for Persons with Dementia; IEEE: Piscataway, NJ, USA, 2018; pp. 540–545. [Google Scholar]
  10. Xu, G.; Cao, Y.; Ren, Y.; Li, X.; Feng, Z. Network security situation awareness based on semantic ontology and user-defined rules for Internet of Things. IEEE Access 2017, 5, 21046–21056. [Google Scholar] [CrossRef]
  11. Stock, S.; Mansouri, M.; Pecora, F.; Hertzberg, J. Hierarchical Hybrid Planning in a Mobile Service Robot; Springer: Berlin/Heidelberg, Germany, 2015; pp. 309–315. [Google Scholar]
  12. Wang, Y.; Sun, H.; Chen, G.; Jia, Q.; Yu, B. Hierarchical task planning for multiarm robot with multiconstraint. Math. Probl. Eng. 2016, 2016, 2508304. [Google Scholar] [CrossRef]
  13. Galindo, C.; Fernández-Madrigal, J.-A.; González, J.; Saffiotti, A. Robot task planning using semantic maps. Robot. Auton. Syst. 2008, 56, 955–966. [Google Scholar] [CrossRef] [Green Version]
  14. Cashmore, M.; Fox, M.; Long, D.; Magazzeni, D.; Ridder, B.; Carrera, A.; Palomeras, N.; Hurtos, N.; Carreras, M. Rosplan: Planning in the robot operating system. In Proceedings of the Twenty-Fifth International Conference on Automated Planning and Scheduling, Jerusalem, Israel, 7–11 June 2015. [Google Scholar]
  15. Lu, F.; Tian, G.; Li, Q. Autonomous cognition and planning of robot service based on ontology. Jiqiren/Robot 2017, 39, 423–430. [Google Scholar]
  16. ROS Wiki. Available online: http://wiki.ros.org/ROS/ (accessed on 23 April 2013).
  17. Tenorth, M. Knowledge Processing for Autonomous Robots. Ph.D. Thesis, Technische Universität München, Munich, Germany, 2011. [Google Scholar]
  18. Tenorth, M.; Beetz, M. KNOWROB—Knowledge processing for autonomous personal robots. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, St. Louis, MO, USA, 11–15 October 2009; pp. 4261–4266. [Google Scholar]
  19. Tenorth, M.; Beetz, M. Representations for robot knowledge in the KnowRob framework. Artif. Intell. 2015, 247, 151–169. [Google Scholar] [CrossRef]
  20. Tenorth, M.; Perzylo, A.C.; Lafrenz, R.; Beetz, M. Representation and Exchange of Knowledge about Actions, Objects, and Environments in the RoboEarth Framework. IEEE Trans. Autom. Sci. Eng. 2013, 10, 643–651. [Google Scholar] [CrossRef]
  21. Waibel, M.; Beetz, M.; Civera, J.; d’Andrea, R.; Elfring, J.; Galvez-Lopez, D.; Häussermann, K.; Janssen, R.; Montiel, J.M.M.; Perzylo, A.; et al. Roboearth—A world wide web for robots. IEEE Robot. Autom. Mag. (RAM) 2011, 18, 69–82. [Google Scholar] [CrossRef]
  22. Riazuelo, L.; Civera, J.; Montiel, J.; Montiel, J.M.M. C2tam: A cloud framework for cooperative tracking and mapping. Robot. Auton. Syst. 2014, 62, 401–413. [Google Scholar] [CrossRef]
  23. Lemaignan, S. Grounding the Interaction: Knowledge Management for Interactive Robots. KI-Künstliche Intell. 2013, 27, 183–185. [Google Scholar] [CrossRef] [Green Version]
  24. Lemaignan, S.; Ros, R.; Mösenlechner, L.; Alami, R.; Beetz, M. ORO, a knowledge management platform for cognitive architectures in robotics. In Proceedings of the 2010 IEEE/RSJ International Conference on Intelligent Robots and Systems, Taipei, Taiwan, 18–22 October 2010; pp. 3548–3553. [Google Scholar]
  25. Li, X.; Bilbao, S.; Martín-Wanton, T.; Bastos, J.; Rodriguez, J. SWARMs ontology: A common information model for the cooperation of underwater robots. Sensors 2017, 17, 569. [Google Scholar] [CrossRef]
  26. Landa-Torres, I.; Manjarres, D.; Bilbao, S.; Del Ser, J. Underwater robot task planning using multi-objective meta-heuristics. Sensors 2017, 17, 762. [Google Scholar] [CrossRef]
  27. Sadik, A.R.; Urban, B. An Ontology-Based Approach to Enable Knowledge Representation and Reasoning in Worker–Cobot Agile Manufacturing. Future Internet 2017, 9, 90. [Google Scholar] [CrossRef]
  28. Diab, M.; Akbari, A.; Din, M.U.; Rosell, J. PMK—A Knowledge Processing Framework for Autonomous Robotics Perception and Manipulation. Sensors 2019, 19, 1166. [Google Scholar] [CrossRef]
  29. Schlenoff, C.; Prestes, E.; Madhavan, R.; Goncalves, P.; Li, H.; Balakirsky, S.; Kramer, T.; Miguelanez, E. An IEEE standard ontology for robotics and automation. In Proceedings of the 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems, Algarve, Portugal, 7–12 October 2012; IEEE: Piscataway, NJ, USA, 2012; pp. 1337–1342. [Google Scholar]
  30. Khandelwal, P.; Zhang, S.; Sinapov, J.; Leonetti, M.; Thomason, J.; Yang, F.; Gori, I.; Svetlik, M.; Khante, P.; Lifschitz, V.; et al. Bwibots: A platform for bridging the gap between ai and human–robot interaction research. Int. J. Robot. Res. 2017, 36, 635–659. [Google Scholar] [CrossRef]
  31. Khandelwal, P.; Yang, F.; Leonetti, M.; Lifschitz, V.; Stone, P. Planning in Action Language BC while Learning Action Costs for Mobile Robots. In Proceedings of the Twenty-Fourth International Conference on Automated Planning and Scheduling, Portsmouth, NH, USA, 21–26 June 2014. [Google Scholar]
  32. Lee, J.; Lifschitz, V.; Yang, F. Action Language BC: Preliminary Report. In Proceedings of the Twenty-Third International Joint Conference on Artificial Intelligence, Beijing, China, 3–9 August 2013; pp. 983–989. [Google Scholar]
  33. McGuinness, D.L.; Van Harmelen, F. OWL web ontology language overview. W3C Recomm. 2004, 10, 2004. [Google Scholar]
  34. OWL Working Group. OWL—Semantic Web Standard. Available online: https://www.w3.org/2001/sw/wiki/OWL (accessed on 21 August 2019).
  35. Zhai, Z.; Ortega, J.-F.M.; Martínez, N.L.; Castillejo, P. A Rule-Based Reasoner for Underwater Robots Using OWL and SWRL. Sensors 2018, 18, 3481. [Google Scholar] [CrossRef] [PubMed]
  36. Tudorache, T. Protégé Wiki. Available online: https://protegewiki.stanford.edu/wiki/Main_Page (accessed on 23 May 2016).
  37. Maedche, A.; Staab, S. Ontology learning for the Semantic Web. Intell. Syst. IEEE 2001, 16, 72–79. [Google Scholar] [CrossRef]
  38. Clocksin, W.F.; Mellish, C.S. Programming in Prolog; Springer: Berlin/Heidelberg, Germany, 1981. [Google Scholar]
Figure 1. Knowledge representation in robot task planning.
Figure 2. The implementation process of RTPO building.
Figure 3. A snapshot of the hierarchy of the Robot Task Planning Ontology (RTPO).
Figure 4. The overall structure of the Robot Task Planning Ontology (RTPO).
Figure 5. The robot ontology shown in Protégé.
Figure 6. The environment ontology shown in Protégé.
Figure 7. The task ontology shown in Protégé.
Figure 8. The representation method of atomic actions.
Figure 9. The environment state changes in a simple indoor service task DeliveryHandbooktoLeo: the TurtleBot3 needs to get the handbook from Jack and then give it to Leo.
Figure 10. The communications among the three parts in Protégé.
Figure 11. Example of knowledge reasoning. (a) Left: the handbook is asserted to be just a book, and Jack has it. (b) Right: if some rule knowledge is added, the knowledge system can infer the location of the handbook.
Figure 12. The consumed time versus the number of generated individuals (blue square markers) and the response time versus the number of individuals queried (yellow circle markers).
Figure 13. The TurtleBot3 Burger.
Figure 14. The framework of the software system.
Figure 15. The experimental environment: (a) the real experimental scenario; (b) the corresponding environment map built by TurtleBot3 with the Gmapping algorithm.
Figure 16. The communication among different devices.
Figure 17. The specific implementation process of the study case on TurtleBot3.
Figure 18. A snapshot of the query sentence from the ROS terminal.
Figure 19. The sequence of snapshots for the execution of the task "DeliveryHandbooktoLeo": (a) the initial position of TurtleBot3; (b) the atomic action MovetoHandbook; (c) the atomic action MovetoLeo; (d) the arrival at the position of Leo.
Figure 20. The sequence of snapshots for the execution of the task "PutHandbookonBookshelf": (a) the initial position of TurtleBot3; (b) the atomic action MovetoHandbook; (c) the atomic action MovetoLeo; (d) the arrival at the position of Leo; (e) the atomic action BatteryCharge; (f) the action MovetoBookshelf.
Table 1. The configuration table of the TurtleBot3 Burger (item: configuration).
LIDAR: 360-degree laser LIDAR LDS-01 (HLS-LFCD2)
SBC: Raspberry Pi 3 and Intel Joule 570x
Battery: Lithium polymer 11.1 V 1800 mAh
IMU: Gyroscope 3 Axis, Accelerometer 3 Axis, Magnetometer 3 Axis
MCU: OpenCR (32-bit ARM Cortex-M7)
Motor: DYNAMIXEL (XL430)
