Open Access
*Machines* **2018**, *6*(4), 46; https://doi.org/10.3390/machines6040046

Article

Robot Coverage Path Planning under Uncertainty Using Knowledge Inference and Hedge Algebras

^{1} School of Information Technology and Communication, Hanoi University of Science and Technology, Hanoi, Vietnam

^{2} School of Information Science and Engineering, Lanzhou University, Lanzhou 730030, China

^{*} Author to whom correspondence should be addressed.

Received: 4 August 2018 / Accepted: 28 September 2018 / Published: 3 October 2018

## Abstract


Human behaviour demonstrates environmental awareness and self-awareness, which are used to arrive at decisions and actions or reach conclusions based on reasoning and inference. Environmental awareness and self-awareness are traits which autonomous robotic systems must have to effectively plan an optimal route and operate in dynamic operating environments. This paper proposes a novel approach that combines adaptation with knowledge reasoning techniques and hedge algebras to enable autonomous robotic systems to achieve efficient coverage path planning with multiple decision-making in dynamic operating environments. To evaluate the proposed approach, we have implemented it in a mobile cleaning robot. The results demonstrate the ability to avoid static and dynamic (moving) obstacles while achieving efficient coverage path planning with low repetition rates. While current alternative coverage path planning algorithms have achieved acceptable results, our reported results demonstrate a significant performance improvement over these alternatives.

Keywords:

autonomous robotic systems; coverage path planning; knowledge reasoning and inference; hedge algebras; decision-support systems; machine cognition; machine consciousness; self-awareness; environmental awareness

## 1. Introduction

Robotic systems are now ubiquitous in the manufacturing industry. Robots are capable of reliably manipulating objects using artificial intelligence techniques, which allows a machine to determine how a task can be completed successfully [1]. However, when employed in the manufacturing process, robots are pre-programmed with limited or no decision-making capability. This paper considers autonomous robotic systems with intelligent decision-support under dynamic uncertainty. An autonomous mobile robot (hereafter termed a “robot”) must traverse a defined dynamic operational environment (hereafter termed an “environment”) completely while minimising repetition in coverage; this is termed coverage path planning (CPP). Galceran and Carreras have defined the CPP problem as: “the task of determining a path that passes over all points of an area or volume of interest while avoiding obstacles ... this task is integral to many robotic applications” [2].

Robots perform complex tasks in a diverse range of environments, which include hostile environments characterised by: (a) uncertainty due to environmental dynamics; (b) imprecision in control; (c) imperfect sensing and localisation; and (d) environmental unpredictability. To operate effectively under these conditions, robots need a decision-support system (DSS). The primary objective of a DSS is the provision of an experience set to accommodate the environment, which relates to all the knowledge, skills, and attitudes for the control, application, and operation of robots. Robots are an innovative technology, with research providing information and instructions to formalise problems and develop algorithmic solutions that will “generalise in a wide class of computing problems and enable automation in robotic systems” [3].

To enable robots to implement optimal decisions we have investigated the CPP problem and have considered two fundamental questions related to decision-support:

- How can a high-level probabilistic representation (a model) of the environment be created?
- How can understanding and reasoning about the environment be achieved to enable CPP with completion of the required task(s) while a robot is in motion?

The first question directly addresses issues relating to the identification of obstacles (feature extraction), the second question relates to modelling the environment. When considered in unison, the two questions address the creation of a representative model. However, such a model must address uncertainty and unpredictability in an appropriate way. Humans create and update a cognitive model of their “world” (the environment) based on a “unique perceptual filter” [4,5], the model relies on environmental awareness and self-awareness. Humans interact with their “world” based on a constantly evolving conceptual model with actions and conclusions based on reasoning and inference with preferences learned over time derived from experience and observation [4,5].

We recognise the potential for cognitive dissonance in human behaviour [6,7,8,9]. While cognitive dissonance has not been considered for robotic systems, the semantic and linguistic-based DSSs proposed in this paper offer the potential to mirror (albeit on a primitive level) human reactions to external and internal stimuli [5]. We consider cognition as it relates to both human and machine cognition in Section 7.

Robot DSS designed for use in diverse environments must be context-aware [10]. Context-awareness (CA) provides a basis upon which entities (which include humans and robots) [5,10,11] can interact with the environment and perform the required tasks and behaviours; this demands adaptability. Adaptability requires environmental awareness and self-awareness, which provide the basis for optimal decisions, choices, and actions. For example, a mobile robot will generally be battery powered; environmental awareness will identify the location, and self-awareness will include knowledge of its current state (or context), which will include the robot’s location and battery condition. The robot’s knowledge of these parameters will enable it to monitor the changing location and battery condition in “real time” and return automatically to a charging point as required.

Humans view their world in semantic terms using linguistic descriptors (or words) to conceptualise and describe the environment (e.g., “very hot”, “hot”, “warm”, “cold”, “very cold”, etc.). Similarly, humans use words to conceptualise and describe emotional reactions to a range of stimuli [5] (e.g., “hate”, “dislike”, “like”, “really like”, “love”, etc.). The use of such words provides finely grained descriptors which, while fuzzy [12], provide a domain-specific interpretation of a current context or environment [13]. Context is highly domain specific [10], and fuzzy linguistic descriptors share this trait; for example, the word “hot” will describe a different condition when discussing the weather as opposed to cooking. A detailed discussion of fuzzy systems and variables is beyond the scope of this paper; see [12]. For a discussion on fuzzy rule-based systems and defuzzification, see [14]. Fuzzy systems as they relate to context-aware systems (CAS) are addressed in [10].

The use of semantics and linguistics in computing has received significant research effort; for example, see [13,15,16,17,18,19,20]. In this paper we argue that a similar approach based on semantics and linguistics is required in robots which must adapt to the environment. Such an approach, when applied to robots and decision-support, is novel and applies a different methodology to that adopted in conventional artificial intelligence (AI) approaches.

Galceran and Carreras, in a survey of CPP approaches, have reviewed the “most successful CPP methods” [2]. The conclusion drawn is that: “using classical CPP methods it is not always possible to satisfy all the required criteria in complex dynamic environments”. This conclusion identifies limitations in the conventional CPP methods and we propose a novel approach to enable CPP. The proposed approach combines: (a) adaptation; (b) knowledge reasoning; (c) inference techniques; and (d) hedge algebras implemented in a rule-based approach using available knowledge with rules in a knowledge base. Our proposed approach is posited as an effective approach to improve the granularity and effectiveness of CPP.

The proposed CPP algorithm is a development of the heuristic spanning tree coverage (STC) algorithm [21] which “subdivides the work-area into disjoint cells corresponding to the square-shaped tool, then follows a spanning tree of the graph induced by the cells, while covering every point precisely once” [22].

Our contribution lies in an extension to the STC algorithm which combines reasoning techniques with hedge algebras in CPP to find the optimal coverage path. Our proposed CPP algorithm can apply the available knowledge using rules from the knowledge base to find the optimal path while minimising path repetition and overlapping. The methodology and the CPP algorithm are set out in Section 4.

To evaluate our proposed approach, we have implemented a cleaning robot which operates in a defined environment. The experimental results (see Section 6) demonstrate the capability to avoid both static and moving obstacles while achieving effective CPP. In a comparative analysis with alternative CPP methods, our proposed approach has achieved high coverage and low repetition rates with improved performance over the alternative CPP methods. The approach proposed in this paper can be readily applied to both static and dynamic environments.

The remainder of this paper is structured as follows: in Section 2 we consider CPP related research. The CPP problem formulation is addressed in Section 3, with the methodology and proposed CPP algorithm presented in Section 4. Section 5 presents an evaluation, with the experimental results and a comparative analysis of the proposed approach and the alternative CPP methods provided in Section 6. Section 7 presents a discussion which considers machine cognition. Potential future directions for research are considered in Section 7.3, with the paper closing with concluding observations in Section 8.

## 2. Related Research

There are two general algorithmic approaches to the CPP problem: classical and heuristic [2,3,23,24,25]. Classical algorithms may be classified under several headings: cell decomposition, potential field, sampling-based methods, and sub-goal networks. Heuristic algorithms may also be classified under several headings: artificial neural networks, fuzzy systems, nature inspired systems, and hybrid systems. In our study we focus on the heuristic approach.

CPP research has identified a large number of problems where path planning is required. The CPP problem shares with the generalized travelling salesman problem (GTSP) [26] a common objective: to find the shortest path to traverse an environment completely without repetition in the shortest time. It has been shown that a wide variety of combinatorial optimization problems can be modelled as GTSPs. These problems include location-routing problems, material flow system design, post-box collection, stochastic vehicle routing and arc routing [26].

CPP algorithms have addressed many “real-world” problems including: cleaning and monitoring, automatic lawn mowers, inspection, painting, and industrial robots [2,3,23,24,27,28,29,30,31,32,33,34]. A robotic needle-punching path planning method has been applied to adjust different “preforms” with variable shapes, dimensions, and needling distributions for complex surfaces [23]. Investigations into mobile robots have shown that the CPP problem can be solved in narrow spaces and with complex mappings of environments containing multiple obstacles [27,28].

Related studies have proposed manipulator control and theoretical ideas on using artificial neural networks with reinforcement learning for multiple robotic tasks [1,35]. Other related research using artificial neural networks [36] has utilised decentralized reinforcement learning [37].

For the identification of multiple objects, an optimal hierarchical global path planning approach has been applied in a “cluttered environment” [38] using particle swarm optimisation. Patle et al. [39] propose an approach based on matrix-binary codes with a genetic algorithm to implement CPP including manipulator control and theoretical ideas to solve the CPP problem. Munoz et al. have proposed a unified framework for path and task planning for autonomous robots [40].

In “real-world” environments robots are dependent on the ability to understand, interpret, and generate representations of the environment, ideally in both a human and machine-readable formalism [36,37]. Representation of an entity’s “world” [4] (for a discussion on the nature of entities see [5,10]) through perception and action is a long-standing feature of artificial intelligence research where: “the notions of central and peripheral systems evaporate ... everything is both central and peripheral” [41]; such concepts apply to a robot’s interaction with the environment. Decision-support for robots is required for “obstacle avoidance to achieve the required tasks in dynamic environments while identifying optimal CPP” [38,42].

The value iteration-based algorithm has been effectively applied to multiple robotic tasks [37]. An interactive robot using psychological phenomena during communication has shown that this approach may provide a basis for suitable decision-support in robot systems [43]; this opens a new way for robot adaptation in making decisions to accommodate environmental dynamics.

This brief overview of the related research has identified the potential efficacy of CPP. However, as discussed in Section 6 and Section 7, the current approaches fall short in achieving optimal CPP while minimising overlap within the defined operating environment. In this paper we present our novel approach to the CPP problem to address the perceived issues in the related research considered.

## 3. Problem Formulation

The CPP problem has multiple goals: complete coverage with no overlapping or repetition, with tasks completed using the shortest path. In this section we set out the problem formulation and introduce hedge algebras with decision-support using quantitative semantic mapping. Hedge algebras were initially proposed by Ho and Wechler in 1990 [18]; subsequent research has produced many interesting developments of the concept along with successful applications [15,16,17,19].

#### 3.1. Awareness in CPP with Decision-Support

We have developed our proposed approach to enable the simulation of awareness with inference and reasoning to provide decision-support in CPP. Our aim is an effective basis upon which robots can monitor the environment and reach optimal CPP decisions. In this section we present the concepts in our fuzzy approach and introduce the structural calculations which implement the simulation of human reasoning (albeit at a primitive level, as discussed in [5,11]).

Consider the term set $T(X)$ of the linguistic variable $X$. According to [19,28], $T(X)$ may be represented as an algebraic structure $AH = (T(X), G, H, \le)$ where:

- $G$ is a collection of birth elements of the linguistic variable;
- $H$ is a set of hedges; and
- $\le$ is a semantic ordering relation on $T(X)$.

If $G$ contains the elements $(0, 1, X)$, then $0$ is the smallest element, $1$ is the largest element, and $X$ is the neutral element. An algebraic structure $AH = (T(X), G, H, \le)$ where $H = (H^{+} \cup H^{-})$ is called an $HA$ if it satisfies the following axioms [29,37]:

- Each hedge is either positive or negative with respect to any other hedge in $HA$, including itself;
- Two elements $u$ and $v$ are independent if $u \notin H(v)$ and $v \notin H(u)$; in this case, $\forall x \in H(u)$, $x \notin H(v)$. If $u$ and $v$ are not comparable, then every $x \in H(u)$ and every $y \in H(v)$ are not comparable;
- If $x \ne hx$ then $x \notin H(hx)$, and if $h \ne k$ and $hx \le kx$, then $h'hx \le k'kx$, for all $h', k', h, k \in H$;
- If $u \in H(v)$ and $u \le v$ (or $u \ge v$), then $u \le hv$ (or $u \ge hv$, respectively), for all $h \in H$.

The set $H$ includes positive hedges $H^{+}$ and negative hedges $H^{-}$. Positive hedges increase the semantic intensity of a word, while negative hedges reduce it. Without loss of generality, we always assume that $H^{-} = \{h_1 > h_2 > \cdots > h_p\}$ and $H^{+} = \{h_{p+1} < h_{p+2} < \cdots < h_{p+q}\}$.

#### Fuzzy Linguistic Representation

Consider the linguistic values of the truth variable $TRUTH$: dom($TRUTH$) = {true, false, very true, very false, more true, more false, little true, little false, ...}. The domain dom($TRUTH$) may be expressed as an algebraic structure $AT = (T(X), G, H, \le)$ where:

- $T(X)$: a set of linguistic values for dom($TRUTH$);
- $G$: a set of primitive words, the birth elements (true, false);
- $H$: a set of linguistic hedges (very, more, little);
- $\le$: the semantic relation on “words” (a fuzzy concept). The semantic relations are the ordered relations derived from the natural-language meaning, i.e., $false \le true$, $more\ true \le very\ true$, $very\ false \le more\ true$, $possibly\ true \le true$, $false \le possibly\ false$, ….

The set of linguistic values $T(X)$ is the result derived from $G$ by applying the hedges in $H$. Thus, each element $x \in T(X)$ can be represented as follows:

- $x = h_n h_{n-1} \dots h_1 g$, $g \in G$;
- $H(x)$ is the set of elements resulting from $x$;
- Considering $V \in H^{+}$ ($V$ = very) and $L \in H^{-}$ ($L$ = little): $g \in G$ is positive if $g \le Vg$ and negative if $g \ge Vg$; equivalently, $g \in G$ is positive if $g \ge Lg$ and negative if $g \le Lg$;
- If $G$ has exactly two fuzzy primitive elements $g^{+}$ and $g^{-}$, then $g^{+}$ is called the positive birth element, $g^{-}$ the negative birth element, and $g^{-} < g^{+}$.

In this example, $true$ is positive and $false$ is negative.
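
As an illustrative sketch (not part of the paper's implementation), the term set $T(X)$ generated from birth elements and hedges such as those above can be enumerated up to a fixed hedge depth; the function name and representation are hypothetical:

```python
# Hypothetical sketch: enumerate the term set T(X) of a hedge algebra
# AH = (T(X), G, H, <=) up to a fixed number of applied hedges.
def term_domain(generators, hedges, max_depth=2):
    """Build all terms h_n ... h_1 g with at most max_depth hedges."""
    terms = set(generators)
    frontier = set(generators)
    for _ in range(max_depth):
        # prepend one more hedge to every term found so far
        frontier = {f"{h} {t}" for h in hedges for t in frontier}
        terms |= frontier
    return sorted(terms)

G = ["false", "true"]           # birth elements
H = ["little", "more", "very"]  # hedges H = H- ∪ H+
print(term_domain(G, H, max_depth=1))
# ['false', 'little false', 'little true', 'more false', 'more true',
#  'true', 'very false', 'very true']
```

With depth 1 this yields the eight values used in the example; increasing `max_depth` produces compound terms such as "very little true".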

#### 3.2. Quantitative Semantic Mapping

In terms of quantitative semantic mapping consider the following:

Given the hedge algebra $AT = (T(X), G, H, \le)$, a mapping $f: T(X) \to [0, 1]$ is a quantitative semantic function on $AT$ if, $\forall h, k \in H^{+}$ or $\forall h, k \in H^{-}$ and $\forall x, y \in T(X)$, we have Equation (1):

$$\left[\frac{f\left(hx\right)-f\left(x\right)}{f\left(kx\right)-f\left(x\right)}\right]=\left[\frac{f\left(hy\right)-f\left(y\right)}{f\left(ky\right)-f\left(y\right)}\right]$$

Using hedge algebras with quantitative semantic functions we can define abstract concepts which are difficult to define “satisfactorily” with the required granularity in traditional fuzzy set theory and conventional artificial intelligence techniques. In traditional fuzzy set theory [12,14], the semantic representation of fuzzy variables is generally limited to a small number of linguistic descriptors, thus the “fuzziness” of a fuzzy concept [or fuzzy set] fails to form an effective representation with sufficient granularity for our purposes.

Consider the following values: true, false, more true, and more false. The problem is how to define the “fuzziness” of a linguistic value when we consider the linguistic values in terms of a spectrum around truth and/or falsity. Based on the use of hedge algebras, we can define a “fuzziness” value based on the size of $H(x)$. Given a quantitative semantic function $f$ on $T(X)$, the “fuzziness” of $x \in X$ is measured by the diameter of the interval $f(H(x)) \subseteq [0, 1]$. Figure 1 models the fuzziness of linguistic values.
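
The recursive character of this fuzziness measure can be sketched as follows, assuming the standard hedge-algebra property $fm(h_n \dots h_1 g) = \mu(h_n) \cdots \mu(h_1) \cdot fm(g)$; the concrete numbers and names below are illustrative only, not taken from the paper's implementation:

```python
# Illustrative sketch: the fuzziness of a term x, i.e. the diameter of
# f(H(x)) ⊆ [0,1], under the recursive hedge-algebra fuzziness measure
# fm(h_n ... h_1 g) = μ(h_n)···μ(h_1)·fm(g).  Values are hypothetical.
FM_G = {"false": 0.5, "true": 0.5}   # fuzziness of the birth elements
MU_H = {"little": 0.5, "very": 0.5}  # fuzziness of the hedges (α = β = 0.5)

def fuzziness(term):
    """term is a list such as ["very", "true"] (hedges first, generator last)."""
    *hs, g = term
    fm = FM_G[g]
    for h in hs:
        fm *= MU_H[h]  # each hedge shrinks the interval by its measure
    return fm

print(fuzziness(["true"]))                    # 0.5
print(fuzziness(["very", "true"]))            # 0.25
print(fuzziness(["little", "very", "true"]))  # 0.125
```

Each applied hedge narrows the interval, matching the intuition that "very true" is a more specific (less fuzzy) value than "true".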

## 4. The Methodology

As discussed in Section 1 the proposed CPP algorithm is a development of the heuristic STC algorithm [21,22] in a rule-based approach using reasoning techniques with hedge algebras to find the optimal CPP. Figure 2 models the proposed approach to enable effective CPP.

In the proposed CPP model the rules from the knowledge base are programmed into the robot before it is released into the environment. The rules are created and updated to identify the optimal CPP and the approach to traversing the environment and performing the instructed tasks. The rules define the following requirements for the robot:

- Using the optimal path, traverse the complete environment and visit all nodes without repeating or overlapping paths;
- Identify if the nodes (cells) are: (a) clear; (b) occupied by an obstacle (static or moving); or (c) are bounded by walls;
- Avoid all static and moving obstacles;
- Find the optimal CPP and traverse the operating environment with multiple decision-making.

#### 4.1. The Coverage Path Planning Problem

The CPP problem is formulated as follows:

- Hedge_DSS_Robot: the objective optimisation function to maximise the operational efficiency of the robot in CPP and enable multiple robot decision-making objectives;
- $w_j$: the weight representing the importance of objective $j$ for task $S_i$. The weight is a value of the linguistic variable that can be recognised as a value in the range: important, very important, more important, little important, very little important, possibly important, ...;
- Based on quantitative semantic mapping of $HA$, linguistic values for $w_j$ fall in the range $[0, 1]$ and are used in multiple decision-making objectives for the robot tasks $S_i$ where $S_i \cap S_j = \theta$, $\forall i, j \in \{1, 2, \cdots, r\}$;
- $\left({Q}_{j}\left(X\right)\right)$: the objective function for the multiple decision-making objective. $\left({Q}_{j}\left(X\right)\right)$ recognises the linguistic value of the linguistic variable used in the quantitative semantics mapping of $\left(HA\right)$ and transfers the linguistic value in the range $\left[0,\phantom{\rule{0.277778em}{0ex}}1\right]$;
- The decision variable $X_{ij}$ is binary and defines the tasks for the multiple decision-making objectives;
- Calculate the objective function value for $\left({Q}_{j}\left(X\right)\right)$.

#### 4.2. The Proposed Coverage Path Planning Algorithm

The proposed CPP algorithm is implemented in five steps as follows:

**Step 1**: Identify the objectives for the moving robot in the operating environment;

The CPP model for Hedge_DSS_Robot (HDR) when identifying multiple objectives is as shown in Equation (2):

$$HDR=\underset{X}{\max}\sum_{j}{w}_{j}\cdot{Q}_{j}\left(X\right)$$

The constraints include:

$$\begin{array}{l}\sum_{k}X_{ik}=1,\quad \forall i\in N\\ \sum_{i\in S_{j}\cup S_{i}}x_{i1}\ge 1\\ x_{ik}=\begin{cases}1, & \text{if option } i \text{ assigns the robot to mission } k\\ 0, & \text{otherwise}\end{cases}\end{array}$$
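
A minimal sketch of evaluating the HDR objective in Equation (2): each candidate assignment yields objective scores $Q_j(X)$ in $[0, 1]$ (already mapped from linguistic values by the hedge algebra), and the assignment maximising the weighted sum is selected. The weights, scores, and plan names below are hypothetical:

```python
# Hypothetical sketch of the HDR objective: pick the candidate assignment X
# that maximises Σ w_j · Q_j(X), with w_j and Q_j(X) in [0, 1] obtained
# from the quantitative semantic mapping (values here are illustrative).
def hdr_score(weights, q_values):
    return sum(w * q for w, q in zip(weights, q_values))

weights = [0.85, 0.65, 0.35]        # e.g. very high, high, low importance
candidates = {
    "plan_A": [0.9, 0.4, 0.7],      # Q_j(X) per objective for each plan
    "plan_B": [0.6, 0.8, 0.9],
}
best = max(candidates, key=lambda x: hdr_score(weights, candidates[x]))
print(best, round(hdr_score(weights, candidates[best]), 3))
```

The binary decision variable $x_{ik}$ of the constraints corresponds here to simply selecting one candidate; a full solver would enumerate or search the feasible assignments.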

**Step 2**: Apply the STC algorithm for robot travel in a graph representation:

Create recursive STC $(w, x)$: where $x$ is the current mega-cell containing the start point and $w$ is the cell of the previous point.

**Initialization**: Call STC2 $(Null, S)$ where $S$ is the starting cell. During spanning tree construction, the robot subdivides every cell it encounters into four identical sub-cells.

**Procedure STC2 $(w, x)$**:

(1) Mark the current cell $x$ as an old cell;

(2) **While** $x$ has a new free or partially occupied neighbouring cell $Y$ where $(x \ne \theta)$:

(3) Scan for the first new neighbour of $x$ in counter-clockwise order, starting with the parent cell $w$; call this neighbour $y_j \in Y$;

(4) Calculate the time from the current sub-cell of $x$ to the destination sub-cell of $y_j$ based on hedge algebras in the time series $T_j = \sum_{u} t_u$ and construct a spanning tree edge from $x$ to $y$;

(5) Estimate the time for the obstacle nearest to $y$ to move to the destination sub-cell in the time series $P_j = (n * t)$;

(6) Check whether mega-cell $y$ satisfies $MAX(P_j - T_j)$;

(7) Move to a sub-cell of $y$ along the spanning tree edges using a path determined by the type of edge from $x$ to $y$ as described in the following steps;

(8) Execute STC2 $(x, y)$;

End of **while** loop.

(9) $IF$ $(x \ne S)$, move back from $x$ to a sub-cell of $w$ along a path determined by the type of edge from $x$ to $w$;

(10) Return. End of STC2 $(w, x)$.
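
The recursive skeleton of the procedure above can be sketched, in heavily simplified form, as a depth-first traversal over grid cells; this omits the mega-cell/sub-cell geometry and the hedge-algebra timing test of steps (4)-(6), and all names are illustrative:

```python
# Simplified, illustrative sketch of the spanning-tree coverage recursion:
# visit every free cell of a grid exactly once by depth-first search over
# 4-neighbours, backtracking along spanning-tree edges.
def stc(grid, w, x, visited, path):
    # w (the parent cell) is kept only to mirror the STC2(w, x) signature;
    # this simplified sketch does not use it.
    visited.add(x)           # step (1): mark x as an old cell
    path.append(x)
    r, c = x
    # fixed counter-clockwise neighbour order (steps (2)-(3))
    for dr, dc in ((0, 1), (-1, 0), (0, -1), (1, 0)):
        y = (r + dr, c + dc)
        if y in grid and y not in visited:
            stc(grid, x, y, visited, path)   # step (8): recurse into y
            path.append(x)   # step (9): move back along the tree edge
    return path

# 3x3 grid with one static obstacle at (1, 1)
free = {(r, c) for r in range(3) for c in range(3)} - {(1, 1)}
route = stc(free, None, (0, 0), set(), [])
print(len(set(route)))  # number of distinct cells covered -> 8
```

Every free cell is entered exactly once; repeated entries in `route` are only the backtracking moves along already-built spanning-tree edges.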

**Step 3**: Apply reasoning techniques using rules in the knowledge base

Consider the rules used in robot reasoning. The rule form is as shown in Equation (3):

$$Rule\;i:\left(\left(x_{1}=a_{1}\right)\wedge \left(x_{2}=a_{2}\right)\wedge \left(x_{3}=a_{3}\right)\wedge \left(x_{4}=a_{4}\right)\wedge \cdots \wedge \left(x_{i}=a_{i}\right)\right)\to \left(c,p\right)$$

where $(x_i = a_i)$ are conditions with operations $(<, >, \le, \ge)$, $p$ is the weight $w_i$ of $Rule\;i$, and $c$ is the certainty factor weighting. In typical rules we can consider reasoning techniques combined with events, as shown in Equation (4):

$$IF\;\left(\left(c_{1},p_{1}\right)\wedge \left(c_{2},p_{2}\right)\wedge \left(c_{3},p_{3}\right)\wedge \cdots \wedge \left(c_{i},p_{i}\right)\right)\;THEN\;\left(r,c\right)$$

where $(c_i, p_i)$ represents an event and its weight $w_i$, and $c$ is the certainty weight of the considered $Rule\;i$.

**Step 4**: Process rules with reasoning forward chaining

Apply the robot's reasoning techniques in forward chaining, combining the results of previously fired rules with the events resulting from those rules.

**Step 5**: Find the appropriate rules to apply in the Knowledge Base (KB)

$IF$ a matching rule exists in the KB, $THEN$ apply the automated robot rule; $ELSE$ a new rule from an expert is added to the KB.
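
A hedged sketch of Steps 3-5 combined: rules of the Equation (3)/(4) form are fired by forward chaining, with each fired rule adding its conclusion as a new event until a fixed point is reached. The rule weights and certainty factors of Equations (3)-(4) are omitted for brevity, and the facts and rules shown are hypothetical:

```python
# Illustrative forward-chaining sketch (weights/certainty factors omitted):
# a rule (conditions, conclusion) fires when all its conditions are
# established facts; firing adds the conclusion as a new event.
def forward_chain(facts, rules):
    facts = set(facts)
    changed = True
    while changed:               # iterate to a fixed point
        changed = False
        for conds, concl in rules:
            if concl not in facts and set(conds) <= facts:
                facts.add(concl)  # the rule fires; conclusion is a new event
                changed = True
    return facts

rules = [
    ({"cell_occupied", "obstacle_moving"}, "replan_path"),
    ({"replan_path", "battery_low"}, "return_to_charger"),
]
print(forward_chain({"cell_occupied", "obstacle_moving", "battery_low"}, rules))
```

Note how the second rule can only fire after the first has produced `replan_path`, illustrating the chaining of rule results described in Step 4.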

All of the steps can be repeated when a robot completes its action(s) in the multiple decision-making objectives.

## 5. Evaluation

In this section we set out the evaluation of the proposed approach using a robot case study, the results are set out in Section 6.

#### 5.1. The Robot Case Study

As identified, in robot decision problems the operating robot identifies targets and tasks involving multiple potential objectives. $Q_j(X)$ is the objective function updated in the robot KB. The number of potential objectives for the robot is limited to five. The robot must:

- $\left(Q\left(1\right)\right)$: Traverse the operating environment visiting all the nodes in the operating environment;
- $\left(Q\left(2\right)\right)$: Identify if the nodes (cells) are: (a) clear, (b) occupied by an obstacle(s), or (c) bounded by wall(s);
- $\left(Q\left(3\right)\right)$: Complete its traverse over the operating environment without repeating or overlapping paths;
- $\left(Q\left(4\right)\right)$: Avoid all static and dynamic (independently moving) obstacles;
- $\left(Q\left(5\right)\right)$: Apply CPP to find the “optimal path” to traverse the operating environment with multiple decision-making;

A human operator (the decision maker) can instruct the robot on-line to follow a required strategy. The range of strategic objectives (tasks) is as follows:

- ${S}_{1}$ Simple cleaning function;
- ${S}_{2}$ Cleaning and picking up rubbish;
- ${S}_{3}$ Cleaning while avoiding objects;
- ${S}_{4}$ Intelligent cleaning with decision-support in selecting tasks;
- ${S}_{5}$ Heavy cleaning.

For example, the value $Q_j(X)$ is determined using the linguistic variables: high, low, very high, very low, little high, little low, possibly high, ..., and the quantitative semantic values could be: $\mu(high) = 0.65$, $\mu(low) = 0.35$, $\mu(very\ low) = 0.12$, $\mu(very\ high) = 0.85$. The values for the opinions of experts related to the value of the objective function are shown in Table 1. The relative degree of importance of the objective function (in semantic terms using linguistic descriptors) corresponding to the required action is shown in Table 2.

In considering hedge algebras the following semantic properties apply:

$AH_{MucDoDapUng} = \left(T(X),\ G,\ H,\ \le\right)$

$G\phantom{\rule{0.277778em}{0ex}}=\phantom{\rule{0.277778em}{0ex}}\left\{low,\phantom{\rule{0.166667em}{0ex}}high\right\},\phantom{\rule{0.277778em}{0ex}}fm\left(low\right)\phantom{\rule{0.277778em}{0ex}}=\phantom{\rule{0.277778em}{0ex}}0.5,\phantom{\rule{0.277778em}{0ex}}fm\left(high\right)\phantom{\rule{0.277778em}{0ex}}=\phantom{\rule{0.277778em}{0ex}}0.75$

${H}^{+}\phantom{\rule{0.277778em}{0ex}}=\phantom{\rule{0.277778em}{0ex}}\left\{very\right\}\phantom{\rule{0.277778em}{0ex}}=\phantom{\rule{0.277778em}{0ex}}\left\{{h}_{2}\right\},\phantom{\rule{0.277778em}{0ex}}q\phantom{\rule{0.277778em}{0ex}}=\phantom{\rule{0.277778em}{0ex}}1$

${H}^{-}\phantom{\rule{0.277778em}{0ex}}=\phantom{\rule{0.277778em}{0ex}}\left(little\right)\phantom{\rule{0.277778em}{0ex}}=\phantom{\rule{0.277778em}{0ex}}\left\{{h}_{1}\right\},\phantom{\rule{0.277778em}{0ex}}p\phantom{\rule{0.277778em}{0ex}}=\phantom{\rule{0.277778em}{0ex}}1$

$\theta \phantom{\rule{0.277778em}{0ex}}=\phantom{\rule{0.277778em}{0ex}}0.5\phantom{\rule{0.277778em}{0ex}}and\phantom{\rule{0.277778em}{0ex}}\alpha \phantom{\rule{0.277778em}{0ex}}=\phantom{\rule{0.277778em}{0ex}}0.5$

$\alpha = \sum_{i=1}^{p}\mu\left({h}_{i}\right) = \mu\left(little\right) = 0.5$

$\beta = \sum_{i=p+1}^{p+q}\mu\left({h}_{i}\right) = \mu\left(very\right) = \left(1 - \alpha\right) = \left(1 - 0.5\right) = 0.5$

$\alpha = \beta \ \Rightarrow \ w\left({h}_{j}x\right) = \frac{1}{2}\left[1 + Sign\left({h}_{j}x\right)\, Sign\left({h}_{p}{h}_{j}x\right)\left(\beta - \alpha \right)\right] = 0.5, \quad \left(\forall {h}_{j}\in H\right)$

$fm\left(low\right) = \theta = 0.5$

$fm\left(very\ low\right) = \mu\left(very\right) \times fm\left(low\right) = 0.5 \times 0.5 = 0.25$

$fm\left(little\ low\right) = \mu\left(little\right) \times fm\left(low\right) = 0.5 \times 0.5 = 0.25$

$fm\left(high\right) = 1 - fm\left(low\right) = 1 - 0.5 = 0.5$

$fm\left(very\ high\right) = \mu\left(very\right) \times fm\left(high\right) = 0.5 \times 0.5 = 0.25$

$fm\left(little\ high\right) = \mu\left(little\right) \times fm\left(high\right) = 0.5 \times 0.5 = 0.25$

$\nu\left(W\right) = \theta = 0.5$

$\nu\left(low\right) = \theta - \alpha\, fm\left(low\right) = 0.5 - \left(0.5 \times 0.5\right) = 0.25$

$\nu\left(high\right) = \theta + \alpha\, fm\left(high\right) = 0.5 + \left(0.5 \times 0.5\right) = 0.75$

$\nu\left(very\ low\right) = \nu\left(low\right) + Sign\left(very\ low\right) \times \left[fm\left(very\ low\right) - 0.5\, fm\left(very\ low\right)\right] = 0.25 + \left(-1\right) \times \left(0.25 - 0.125\right) = 0.125$

$\nu\left(little\ low\right) = \nu\left(low\right) + Sign\left(little\ low\right) \times \left[fm\left(little\ low\right) - 0.5\, fm\left(little\ low\right)\right] = 0.25 + \left(+1\right) \times \left(0.25 - 0.125\right) = 0.375$

$\nu\left(very\ high\right) = \nu\left(high\right) + Sign\left(very\ high\right) \times \left[fm\left(very\ high\right) - 0.5\, fm\left(very\ high\right)\right] = 0.75 + \left(+1\right) \times \left(0.25 - 0.125\right) = 0.875$

$\nu\left(little\ high\right) = \nu\left(high\right) + Sign\left(little\ high\right) \times \left[fm\left(little\ high\right) - 0.5\, fm\left(little\ high\right)\right] = 0.75 + \left(-1\right) \times \left(0.25 - 0.125\right) = 0.625$

$\nu\left(very\ very\ low\right) = \nu\left(very\ low\right) + Sign\left(very\ very\ low\right) \times \left[fm\left(very\ very\ low\right) - 0.5\, fm\left(very\ very\ low\right)\right] = 0.125 + \left(-1\right) \times 0.0625 = 0.0625$

$\nu\left(little\ very\ low\right) = \nu\left(very\ low\right) + Sign\left(little\ very\ low\right) \times \left[fm\left(little\ very\ low\right) - 0.5\, fm\left(little\ very\ low\right)\right] = 0.125 + \left(+1\right) \times 0.0625 = 0.1875$

$\nu\left(very\ little\ low\right) = \nu\left(little\ low\right) + Sign\left(very\ little\ low\right) \times \left[fm\left(very\ little\ low\right) - 0.5\, fm\left(very\ little\ low\right)\right] = 0.375 + \left(-1\right) \times 0.0625 = 0.3125$

$\nu\left(little\ little\ low\right) = \nu\left(little\ low\right) + Sign\left(little\ little\ low\right) \times \left[fm\left(little\ little\ low\right) - 0.5\, fm\left(little\ little\ low\right)\right] = 0.375 + \left(+1\right) \times 0.0625 = 0.4375$

$\nu\left(little\ little\ high\right) = \nu\left(little\ high\right) + Sign\left(little\ little\ high\right) \times \left[fm\left(little\ little\ high\right) - 0.5\, fm\left(little\ little\ high\right)\right] = 0.625 + \left(-1\right) \times 0.0625 = 0.5625$

$\nu\left(very\ little\ high\right) = \nu\left(little\ high\right) + Sign\left(very\ little\ high\right) \times \left[fm\left(very\ little\ high\right) - 0.5\, fm\left(very\ little\ high\right)\right] = 0.625 + \left(+1\right) \times 0.0625 = 0.6875$

$\nu\left(little\ very\ high\right) = \nu\left(very\ high\right) + Sign\left(little\ very\ high\right) \times \left[fm\left(little\ very\ high\right) - 0.5\, fm\left(little\ very\ high\right)\right] = 0.875 + \left(-1\right) \times 0.0625 = 0.8125$

$\nu\left(very\ very\ high\right) = \nu\left(very\ high\right) + Sign\left(very\ very\ high\right) \times \left[fm\left(very\ very\ high\right) - 0.5\, fm\left(very\ very\ high\right)\right] = 0.875 + \left(+1\right) \times 0.0625 = 0.9375$
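The recursive quantification worked through above can be sketched in code. The following is a minimal illustration (not the authors' implementation), hard-coding the parameters defined above ($\mu(very) = \mu(little) = 0.5$, $fm(low) = fm(high) = 0.5$, $\theta = \alpha = 0.5$); the sign-relation table `REL` is an assumed encoding of the standard hedge algebra convention that *very* is positive with respect to *very* and negative with respect to *little*, and conversely for *little*.

```python
# Minimal sketch of the hedge algebra quantification above (illustrative only).
# Terms are tuples, outermost hedge first, e.g. ("very", "little", "low").

MU = {"very": 0.5, "little": 0.5}        # hedge fuzziness measures
FM_GEN = {"low": 0.5, "high": 0.5}       # fm of the generators
THETA, ALPHA = 0.5, 0.5
SIGN_GEN = {"low": -1, "high": +1}
# Sign relation between an outer hedge and the hedge it modifies (assumption):
# very is positive w.r.t. very, negative w.r.t. little; conversely for little.
REL = {("very", "very"): +1, ("very", "little"): -1,
       ("little", "very"): -1, ("little", "little"): +1}

def fm(term):
    """fm(h_n ... h_1 c) = mu(h_n) * ... * mu(h_1) * fm(c)."""
    *hedges, gen = term
    m = FM_GEN[gen]
    for h in hedges:
        m *= MU[h]
    return m

def sign(term):
    """Sign of a term, applying hedges innermost-first."""
    *hedges, gen = term
    s, prev = SIGN_GEN[gen], None
    for h in reversed(hedges):
        if prev is None:                  # first hedge acts on the generator:
            s = s if h == "very" else -s  # H+ keeps the sign, H- flips it
        else:                             # later hedges: use the relation table
            s = REL[(h, prev)] * s
        prev = h
    return s

def nu(term):
    """Quantitative semantic value, following the recursion in the text."""
    *hedges, gen = term
    if not hedges:                        # nu(c) = theta + Sign(c)*alpha*fm(c)
        return THETA + SIGN_GEN[gen] * ALPHA * FM_GEN[gen]
    f = fm(term)
    return nu(term[1:]) + sign(term) * (f - 0.5 * f)

print(nu(("very", "low")))          # 0.125
print(nu(("little", "high")))       # 0.625
print(nu(("very", "very", "high"))) # 0.9375
```

This reproduces all of the $\nu$ values derived above, which provides a quick consistency check on the sign convention.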

The expert assessments of the target objective functions are shown in Table 3. The hedge algebra relationships may be observed as follows.

$AH_{SuQuanTrong} = \left(T\left(X\right),\ G,\ H,\ \le \right)$

$G = \left\{unImp,\ imp\right\}, \quad fm\left(unImp\right) = 0.4, \quad fm\left(imp\right) = 0.6$

${H}^{+} = \left\{very\right\} = \left\{{h}_{2}\right\}, \quad q = 1, \quad \mu\left(very\right) = 0.65$

${H}^{-} = \left\{little\right\} = \left\{{h}_{1}\right\}, \quad p = 1, \quad \mu\left(little\right) = 0.35$

$\theta = W = 0.4 \quad \text{and} \quad \alpha = 0.35$

Similarly, we obtain the quantitative semantic values for the linguistic variables as shown in Table 4. The quantitative degree of importance of the objective function corresponding to specific actions is shown in Table 5.

Hedge_DSS_Robot calculates which option is selected and the priority order of the schemes as follows:

$Hedge\_DSS\_Robot = \underset{X}{Min} \sum_{j}{w}_{j}\cdot {Q}_{j}\left(X\right)$

$\sum_{j}{w}_{j} \times {Q}_{1} = \left(0.875 \times 0.61\right) + \left(0.125 \times 0.61\right) + \left(0.1875 \times 0.26\right) + \left(0.25 \times 0.835225\right) = 0.86756$

$\sum_{j}{w}_{j} \times {Q}_{2} = \left(0.25 \times 0.26\right) + \left(0.125 \times 0.26\right) + \left(0.75 \times 0.61\right) + \left(0.375 \times 0.26\right) = 0.6525$

$\sum_{j}{w}_{j} \times {Q}_{3} = \left(0.125 \times 0.26\right) + \left(0.1875 \times 0.26\right) + \left(0.625 \times 0.26\right) + \left(0.25 \times 0.26\right) = 0.30875$

$\sum_{j}{w}_{j} \times {Q}_{4} = \left(0.625 \times 0.5365\right) + \left(0.6875 \times 0.5365\right) + \left(0.375 \times 0.169\right) + \left(0.375 \times 0.5365\right) = 0.96872$

$\sum_{j}{w}_{j} \times {Q}_{5} = \left(0.75 \times 0.5365\right) + \left(0.1375 \times 0.5365\right) + \left(0.1125 \times 0.7465\right) + \left(0.875 \times 0.15225\right) = 0.6876$

Applying the proposed model, we obtain the following result:

$$\underset{X}{Min} \sum _{j}{w}_{j}\cdot {Q}_{j}\left(X\right) = Min\left[0.86756,\ 0.6525,\ 0.3087,\ 0.9687,\ 0.6876\right] = 0.3087$$

Following the application of the proposed model, decisions are transmitted to the robot in priority order. According to the calculation results for the above example, the order is: ${S}_{4} > {S}_{1} > {S}_{5} > {S}_{2} > {S}_{3}$.
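As a worked check, the selection step can be sketched as follows. This is an illustrative snippet (not the authors' code): the per-strategy costs are taken verbatim from the weighted sums above, the ${S}_{1}$ cost is re-derived from its $({w}_{j}, {Q}_{j})$ pairs, and the priority ordering stated in the text corresponds to sorting the costs in descending order.

```python
# Illustrative sketch of the Hedge_DSS_Robot selection step.
# Per-strategy weighted costs, taken from the sums computed above.
costs = {"S1": 0.86756, "S2": 0.6525, "S3": 0.30875,
         "S4": 0.96872, "S5": 0.6876}

def weighted_cost(pairs):
    """sum_j w_j * Q_j(X) for one strategy."""
    return sum(w * q for w, q in pairs)

# Re-deriving the S1 cost from its (w_j, Q_j) pairs:
s1 = weighted_cost([(0.875, 0.61), (0.125, 0.61),
                    (0.1875, 0.26), (0.25, 0.835225)])
assert abs(s1 - costs["S1"]) < 1e-4

best = min(costs, key=costs.get)                       # minimum-cost strategy
priority = sorted(costs, key=costs.get, reverse=True)  # descending priority order
print(best)      # S3
print(priority)  # ['S4', 'S1', 'S5', 'S2', 'S3']
```

The minimisation selects the lowest-cost strategy, while the descending sort reproduces the priority order stated above.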

## 6. Experimental Testing and Comparative Analysis

In this section we present the experimental results with a comparative analysis which compares the performance of our proposed CPP approach with the following alternative CPP algorithms: (a) Breadth First Search (BFS); (b) Internal Spiral Search (ISS); and (c) U-turn ${A}^{*}$ Path Planning (UAPP) [44].

In further testing, we implemented the ISS algorithm as an inner spiral algorithm in which the robot traverses the environment in a fixed direction, as introduced in [44]. The UAPP algorithm is a complete coverage path planning approach which uses the ${A}^{*}$ algorithm as a heuristic in the U-turn search. Our proposed model is evaluated under the same conditions and mapping with respect to the range of cell numbering (80–250 cells).

The cleaning robot is shown in Figure 3, with the defined operating environment and the results for the robot traversing it shown in Figure 4, Figure 5 and Figure 6.

#### 6.1. The Experimental Results

To evaluate our proposed CPP approach we measured the repetition rate, i.e., the ability to avoid repeated coverage when traversing the defined operating environment. Table 6 summarises the repetition rate for: regular obstacles, irregular obstacles, multiple regular obstacles, and multiple irregular obstacles.

In Table 6 the percentage (%) repetition rates for regular, irregular, multiple regular, and multiple irregular obstacles are shown together with the average percentage (%) repetition rate. The proposed CPP approach demonstrates a significant performance improvement in terms of repetition rates over the BFS, ISS and UAPP algorithms with respect to simple regular and irregular obstacles. For complex regular and irregular obstacles, the proposed CPP approach again demonstrates path repetition rates lower than those achieved by the BFS, ISS and UAPP algorithms. From Table 6, the average percentage (%) repetition rate is also superior to those of the BFS, ISS and UAPP CPP algorithms. In summary, the proposed CPP model achieves improved performance compared to the alternative CPP algorithms.

#### 6.2. A Comparative Analysis for Moving Obstacles

In the evaluation we have compared a number of performance parameters: (a) duration; (b) awareness; (c) the ability of the robot to avoid obstacles in both static (fixed obstacles) and dynamic (independently moving obstacles) environments; and (d) the repetition rate. The testing evaluated the ability to avoid: regular, irregular, multiple regular, and multiple irregular obstacles. The experimental testing is based on an operating environment defined with cell numbers in the range [80–250]; the results are summarised in Figure 4, Figure 5 and Figure 6 and Table 6, Table 7 and Table 8.

From Table 6, Table 7 and Table 8 it may be seen that the repetition rate for our CPP approach outperforms the alternative CPP algorithms. The improvement in performance is evidenced by the ability of the proposed CPP approach to achieve complete coverage [of the environment] with low repetition rates in the range 3% to 11.3%, and an average repetition rate in multiple decision-making operation of 13.2%. In the general case, our proposed CPP method provides the shortest duration with the lowest repetition rate. The experimental results demonstrate the effectiveness of the proposed CPP algorithm in terms of repetition and time travelled in uncertain environments where the robot will encounter both static and moving obstacles. The results for the CPP model are clearly superior to the results for the BFS, ISS, and UAPP methods.

## 7. Discussion

In this paper we have introduced our novel CPP approach implemented using a linguistic approach with hedge algebras, for a discussion on hedge algebras see [13,15,16,17,18,19]. The proposed method has been evaluated in a cleaning robot case study; the results support the conclusion that the proposed approach enables effective CPP in dynamic environments. Moreover, in a comparative analysis, the proposed approach shows improved performance over alternative CPP algorithms.

In the developing field of robotics, autonomous robotic systems which implement models of human senses (an important component in machine cognition and awareness) using touch (computational haptics [45,46]) and auditory responses [47] are gaining significant traction. Vision is also being applied, as in the highly acclaimed ground-breaking research by Prof. Sheila Nirenberg and her colleagues at the Nirenberg lab in the Department of Physiology and Biophysics at the Weill Medical College of Cornell University, USA.

Nirenberg originally focussed on addressing blindness in human patients [48] to develop retinal implants (prosthetics) to restore blind patients’ sight [49,50]. However, the technology developed for human vision has been applied to realise robot vision as demonstrated at the SMC 2018 conference in October 2018 [51]. Nirenberg achieved the conflation of medical and computer science research with remarkable results for both humans and robotic systems [52,53].

#### 7.1. The Concept of “Self”

In discussing the philosophy of mind and cognitive science, Gallagher introduces the concept of self, which includes self-awareness [54]. A detailed discussion of this topic is beyond the scope of this paper; for such a discussion, see [54]. In summary, there are two concepts of self: the minimal self (“a self devoid of temporal extension”) and the narrative self (which “involves personal identity and continuity across time”). The concept of self may be viewed as an entity’s internalized view of the “world”, developed over time based on the interactive experience of its externalized world [5,11].

The twin concepts of self illustrate how the philosophical approach can inform cognitive science, and suggest that a two-way collaboration between neuroscience and computer science may lead to a more fully developed account of self and awareness in entities (including robotic systems, as discussed in Section 7.2), with potential application in machine cognition, machine consciousness, and machine self-awareness.

Going further, we must consider the stimuli that prompt a reactive response. There are two types of stimuli: (a) external stimuli (reactive situations that confront an entity in its interaction with its environment) and (b) internal stimuli (internally generated actions that the entity initiates). In practice, entities (which include both humans and intelligent robotic systems) interact with their “world”; the learned experience gained from the external environment “feeds” into the internalized self, which in turn influences the way entities interact with their world. This process can be viewed as a continuous cognitive information-processing feedback loop [5]. The aim of machine cognition research is to implement in computerised systems (at least at a primitive level) a representation of the human dynamic cognitive model. For humans, internal cognitive models form a significant component of the response to stimuli [4,55]; the same applies to intelligent robotic systems implementing internal cognitive models.

The aim of intelligent systems (including intelligent robotic systems) is to achieve set goals, which may be reflected in CPP and in meeting defined goals for entities in dynamic environments, as introduced in this paper. Intelligent systems must be context-aware and must be adaptive to dynamic environments [10] and, hence, must adopt different forms when the environments are correspondingly different [56]. Such adaptive systems may be described as “artificial” [56] for, as environments change, systems must change to match the dynamic states and thus mirror the new situation (or context) [10,57]. Machine cognition is designed to incorporate cognitive functions and processes (on a primitive level as compared to human cognition) in highly autonomous machines (such as robotic systems) and intelligent entities.

#### 7.2. Machine Cognition

Human cognition creates a psychophysiological process triggered spontaneously by the conscious and subconscious sense of an object [58,59]. Given the traction in intelligent systems (including robotic systems), there exists the potential to model cognitive response in intelligent machines and computer systems. Due to the complexity of human cognitive processing when viewed from the perspective of physiology and psychophysiology, machine cognition represents a significant challenge, and realising machine cognition remains an open research question.

Prakash Mondal in [60] considers machine cognition and poses the question: “Does computation reveal machine cognition?”. He argues that the nature of machine cognition has been shrouded in “incomprehensibility” and that “human cognition is still faintly understood”. Mondal goes further in arguing that machine cognition is far less understood than human cognition, despite the current knowledge relating to computer architectures and systems. Human interpretation [of computation] is required, where it becomes a type of “semiotic causation” (SC), which “gives meaning to computation” [60]. The research documented in this paper has a correlation with semiotics and SC in that humans recognise and communicate using language, which is central to semiotics [20,61].

Intelligent systems (such as robotic systems) may leverage semantics and semiotics to realise machine cognition that replicates (at least on a primitive level) human cognitive processes. The work of the Nirenberg lab represents an important contribution in the field of machine cognition and the generation of “awareness” in robots and their capability to operate in uncertain environments. Moreover, work by Prof Nirenberg and her colleagues [48,49,50,51,52,53] feeds into visual sensing (and therefore SC) which is an important element in a robot’s view of its “world” and the ability to operate in dynamic uncertain environments.

Computational entities will include intelligent agents and autonomous robotic systems (which include both physical and computational entities). Intelligent robotic systems may be viewed as entities which will embody: (i) the concept of self; (ii) self-awareness; and (iii) awareness of their “world”. Such systems and agents must adapt dynamically to the changing environment based on learning as modelled in the “feedback-loop” [5]. Achieving these research goals remains an open research question.

#### 7.3. Future Directions for Research

In considering the research documented in this paper, while many issues relating to semantics and the use of linguistic methods [as they relate to CPP] have been explored and resolved, there remain open research questions relating to the operational capabilities of intelligent entities in dynamic operating environments.

The approach presented in this paper has been shown to provide an effective basis for CA in autonomous robot control. However, there may be use-cases where it is desirable for a robot to move directly from one specified point to another [within a defined operational environment] where an object is located (for any number of reasons). In such a use-case, the proposed approach may be extended to identify the most direct route while retaining the capability to avoid dynamic (independently mobile) obstacles. We propose a number of interesting directions for research:

- Extending the proposed rule-based linguistic approach using semantics with kansei engineering in combination with hedge algebras forms an interesting research direction;
- A further potentially profitable direction for research (in computing terms) lies in the use of semiotics [20] and SC [60] to recognise the type and nature of obstacles, or of other robots operating in the environment. Semiotics employs both linguistics and images to create a representative model; their combined use in context-aware intelligent robotic systems merits investigation;
- There are potential use-cases where multiple mobile robots may operate collaboratively using, for example, “forward chaining” [62,63,64]; in such a use-case, awareness of the environment and of other robots operating in the same environment is required. For example, in a large search area, multiple robots may be deployed to investigate an environment where efficient search requires CPP for each robot while avoiding duplication of search activity.

These research directions are recognised as challenges and are open research questions which form the basis for ongoing future research.
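As a concrete illustration of the forward-chaining style of inference mentioned above, the following minimal Python sketch (with a hypothetical rule set; it is not drawn from [62,63,64]) propagates facts about a shared environment until a fixed point is reached, so that each robot can infer which zones are already claimed:

```python
def forward_chain(facts, rules):
    """Repeatedly apply rules (premises -> conclusion) until no rule fires."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            # Fire a rule when all its premises hold and it adds a new fact.
            if set(premises) <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

# Hypothetical rule set for two robots sharing a search area.
rules = [
    (("robot_A_in_zone1",), "zone1_claimed"),
    (("zone1_claimed", "robot_B_idle"), "assign_robot_B_zone2"),
]
derived = forward_chain({"robot_A_in_zone1", "robot_B_idle"}, rules)
# derived now additionally contains "zone1_claimed" and "assign_robot_B_zone2"
```

The fixed-point loop guarantees that chained conclusions (here, zone 2 being assigned only after zone 1 is claimed) are derived regardless of rule ordering.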

## 8. Concluding Observations

In this paper, we have considered the CPP problem and we have presented our novel approach to realise effective CPP. To evaluate our proposed approach we have presented an implementation based on a cleaning robot traversing a dynamic environment with both static (non-moving) and dynamic (independently moving) obstacles. In operation, the robot has shown the capability to map the operating space thereby remembering static objects and also capturing the location of dynamic objects. Our novel approach achieves improved performance as compared to alternative CPP algorithms and has demonstrated the capability to traverse an operating environment efficiently with very limited repetition or overlapping of coverage paths. The proposed approach forms an interesting direction for research into intelligent systems and autonomous robot control.

## Author Contributions

The original fundamental research relating to fuzzy systems, semantic systems, and linguistic representation has been carried out in collaborative studies by P.M. and H.V.P. The research documented in this paper has been conceptualised and administered by H.V.P. The preparation and writing of the original draft manuscript with the review and editing has been carried out by P.M.

## Funding

This research is funded by Vietnam Ministry of Education and Training under grant number B2015-01-91.

## Conflicts of Interest

The authors declare no conflict of interest.

## References

- Martínez-Tenor, A.; Fernández-Madrigal, J.A.; Cruz-Martín, A.; González-Jiménez, J. Towards a common implementation of reinforcement learning for multiple robotic tasks. Expert Syst. Appl.
**2018**, 100, 246–259. [Google Scholar] [CrossRef][Green Version] - Galceran, E.; Carreras, M. A survey on coverage path planning for robotics. Robot. Auton. Syst.
**2013**, 61, 1258–1276. [Google Scholar] [CrossRef][Green Version] - Bayat, F.; Najafinia, S.; Aliyari, M. Mobile robots path planning: Electrostatic potential field approach. Expert Syst. Appl.
**2018**, 100, 68–78. [Google Scholar] [CrossRef] - Moore, P.T.; Pham, H.V. On Context and the Open World Assumption. In Proceedings of the 29th IEEE International Conference on Advanced Information Networking and Applications (AINA-2015), Gwangju, Korea, 25–27 March 2015; pp. 387–392. [Google Scholar]
- Moore, P. Do We Understand the Relationship between Affective Computing, Emotion and Context- Awareness? Machines
**2017**, 5, 16. [Google Scholar] [CrossRef] - Bem, D.J. Self-perception: An alternative interpretation of cognitive dissonance phenomena. Psychol. Rev.
**1967**, 74, 183. [Google Scholar] [CrossRef] [PubMed] - Brehm, J.W.; Cohen, A.R. Explorations in Cognitive Dissonance; John Wiley & Sons Inc: New Jersey, NJ, USA, 1962. [Google Scholar]
- Festinger, L. A Theory of Cognitive Dissonance; Stanford University Press: Palo Alto, CA, USA, 1957. [Google Scholar]
- Festinger, L. Cognitive dissonance. Sci. Am.
**1962**, 207, 93–106. [Google Scholar] [CrossRef] [PubMed] - Moore, P.; Pham, H.V. Personalization and rule strategies in human-centric data intensive intelligent context-aware systems. Knowl. Eng. Rev.
**2015**, 30, 140–156. [Google Scholar] [CrossRef] - Moore, P.; Pham, H.V. On Wisdom and Rational Decision-Support in Context-Aware Systems. In Proceedings of the 2017 IEEE International Conference on Systems, Man, and Cybernetics (SMC2017), Banff, AB, Canada, 5–8 October 2017; pp. 1982–1987. [Google Scholar]
- Klir, G.; Yuan, B. Fuzzy Sets and Fuzzy Logic: Theory and Applications; Prentice Hall: Upper Saddle River, NJ, USA, 1995; Volume 4. [Google Scholar]
- Pham, H.V.; Moore, P.; Tran, K.D. Context Matching with Reasoning and Decision Support using Hedge Algebra with Kansei Evaluation. In Proceedings of the Fifth Symposium on Information and Communication Technology (SoICT 2014), Hanoi, Vietnam, 4–5 December 2014; ACM: New York, NY, USA, 2014; pp. 202–210. [Google Scholar] [CrossRef]
- Berkan, R.C.; Trubatch, S. Fuzzy System Design Principles; Wiley-IEEE Press: New Jersey, NJ, USA, 1997. [Google Scholar]
- Nguyen, C.H.; Tran, T.S.; Pham, D.P. Modeling of a semantics core of linguistic terms based on an extension of hedge algebra semantics and its application. Knowl.-Based Syst.
**2014**, 67, 244–262. [Google Scholar] [CrossRef] - Cetisli, B. Development of an adaptive neuro-fuzzy classifier using linguistic hedges: Part 1. Expert Syst. Appl.
**2010**, 37, 6093–6101. [Google Scholar] [CrossRef] - Chatterjee, A.; Siarry, P. A PSO-aided neuro-fuzzy classifier employing linguistic hedge concepts. Expert Syst. Appl.
**2007**, 33, 1097–1109. [Google Scholar] [CrossRef] - Ho, N.C.; Wechler, W. Hedge algebras: An algebraic approach to structure of sets of linguistic truth values. Fuzzy Sets Syst.
**1990**, 35, 281–293. [Google Scholar] - Ho, N.C.; Long, N.V. Fuzziness measure on complete hedge algebras and quantifying semantics of terms in linear hedge algebras. Fuzzy Sets Syst.
**2007**, 158, 452–471. [Google Scholar] [CrossRef] - Chandler, D. Semiotics: The Basics; Psychology Press: London, UK, 2002. [Google Scholar]
- Gabriely, Y.; Rimon, E. Spanning-tree based coverage of continuous areas by a mobile robot. In Proceedings of the 2001 IEEE International Conference on Robotics and Automation (ICRA) (Cat. No. 01CH37164), Seoul, Korea, 21–26 May 2001; Volume 2, pp. 1927–1933. [Google Scholar]
- Gabriely, Y.; Rimon, E. Spanning-tree based coverage of continuous areas by a mobile robot. Ann. Math. Artif. Intell.
**2001**, 31, 77–98. [Google Scholar] [CrossRef] - Chen, X.; Zhang, Y.; Xie, J.; Du, P.; Chen, L. Robot needle-punching path planning for complex surface preforms. Robot. Comput.-Integr. Manuf.
**2018**, 52, 24–34. [Google Scholar] [CrossRef] - Mac, T.T.; Copot, C.; Tran, D.T.; Keyser, R.D. Heuristic approaches in robot path planning: A survey. Robot. Auton. Syst.
**2016**, 86, 13–28. [Google Scholar] [CrossRef] - Mohanan, M.; Salgoankar, A. A survey of robotic motion planning in dynamic environments. Robot. Auton. Syst.
**2018**, 100, 171–185. [Google Scholar] [CrossRef] - Laporte, G.; Asef-Vaziri, A.; Sriskandarajah, C. Some Applications of the Generalized Travelling Salesman Problem. J. Oper. Res. Soc.
**1996**, 47, 1461–1467. [Google Scholar] [CrossRef] - Acar, E.U.; Choset, H.; Zhang, Y.; Schervish, M. Path planning for robotic demining: Robust sensor-based coverage of unstructured environments and probabilistic methods. Int. J. Robot. Res.
**2003**, 22, 441–466. [Google Scholar] [CrossRef] - Han, J.; Seo, Y. Mobile robot path planning with surrounding point set and path improvement. Appl. Soft Comput.
**2017**, 57, 35–47. [Google Scholar] [CrossRef] - Mac, T.T.; Copot, C.; Tran, D.T.; Keyser, R.D. A hierarchical global path planning approach for mobile robots based on multi-objective particle swarm optimization. Appl. Soft Comput.
**2017**, 59, 68–76. [Google Scholar] [CrossRef] - Palacios-Gasós, J.M.; Talebpour, Z.; Montijano, E.; Sagüés, C.; Martinoli, A. Optimal path planning and coverage control for multi-robot persistent coverage in environments with obstacles. In Proceedings of the 2017 IEEE International Conference on Robotics and Automation (ICRA), Singapore, 29 May–3 June 2017; pp. 1321–1327. [Google Scholar]
- Bouzid, Y.; Bestaoui, Y.; Siguerdidjane, H. Quadrotor-UAV optimal coverage path planning in cluttered environment with a limited onboard energy. In Proceedings of the 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Vancouver, BC, Canada, 24–28 September 2017; pp. 979–984. [Google Scholar]
- Li, D.; Wang, X.; Sun, T. Energy-optimal coverage path planning on topographic map for environment survey with unmanned aerial vehicles. Electron. Lett.
**2016**, 52, 699–701. [Google Scholar] [CrossRef] - Wang, J.; Chen, J.; Cheng, S.; Xie, Y. Double Heuristic Optimization Based on Hierarchical Partitioning for Coverage Path Planning of Robot Mowers. In Proceedings of the 2016 12th International Conference on Computational Intelligence and Security (CIS), Wuxi, China, 16–19 December 2016; pp. 186–189. [Google Scholar]
- Chen, K.; Liu, Y. Optimal complete coverage planning of wall-climbing robot using improved biologically inspired neural network. In Proceedings of the 2017 IEEE International Conference on Real-time Computing and Robotics (RCAR), Okinawa, Japan, 14–18 July 2017; pp. 587–592. [Google Scholar]
- Jin, L.; Li, S.; Yu, J.; He, J. Robot manipulator control using neural networks: A survey. Neurocomputing
**2018**, 285, 23–34. [Google Scholar] [CrossRef] - Leottau, D.L.; del Solar, J.R.; Babuška, R. Decentralized Reinforcement Learning of Robot Behaviors. Artif. Intell.
**2018**, 256, 130–159. [Google Scholar] [CrossRef] - Karapetyan, N.; Benson, K.; McKinney, C.; Taslakian, P.; Rekleitis, I. Efficient multi-robot coverage of a known environment. In Proceedings of the 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Vancouver, BC, Canada, 24–28 September 2017; pp. 1846–1852. [Google Scholar]
- Agostini, A.; Torras, C.; Wörgötter, F. Efficient interactive decision-making framework for robotic applications. Artif. Intell.
**2017**, 247, 187–212. [Google Scholar] [CrossRef][Green Version] - Patle, B.; Parhi, D.; Jagadeesh, A.; Kashyap, S.K. Matrix-Binary Codes based Genetic Algorithm for path planning of mobile robot. Comput. Electr. Eng.
**2018**, 67, 708–728. [Google Scholar] [CrossRef] - Muñoz, P.; R-Moreno, M.D.; Barrero, D.F. Unified framework for path-planning and task-planning for autonomous robots. Robot. Auton. Syst.
**2016**, 82, 1–14. [Google Scholar] [CrossRef] - Brooks, R.A. Intelligence without representation. Artif. Intell.
**1991**, 47, 139–159. [Google Scholar] [CrossRef][Green Version] - Fakoor, M.; Kosari, A.; Jafarzadeh, M. Humanoid robot path planning with fuzzy Markov decision processes. J. Appl. Res. Technol.
**2016**, 14, 300–310. [Google Scholar] [CrossRef] - Lin, Y.Y.; Ni, C.C.; Lei, N.; Gu, X.D.; Gao, J. Robot Coverage Path planning for general surfaces using quadratic differentials. In Proceedings of the 2017 IEEE International Conference on Robotics and Automation (ICRA), Singapore, 29 May–3 June 2017; pp. 5005–5011. [Google Scholar]
- Cai, Z.; Li, S.; Gan, Y.; Zhang, R.; Zhang, Q. Research on complete coverage path planning algorithms based on a* algorithms. Open Cybern. Syst. J.
**2014**, 8, 418–426. [Google Scholar] - Minsky, M.D.R. Computational Haptics: The Sandpaper System for Synthesizing Texture for a Force-Feedback Display. Ph.D. Thesis, Massachusetts Institute of Technology, Cambridge, MA, USA, 1995. [Google Scholar]
- Salisbury, K.; Conti, F.; Barbagli, F. Haptic rendering: introductory concepts. IEEE Comput. Graph. Appl.
**2004**, 24, 24–32. [Google Scholar] [CrossRef] [PubMed] - Reeve, R.; Webb, B.; Horchler, A.; Indiveri, G.; Quinn, R. New technologies for testing a model of cricket phonotaxis on an outdoor robot. Robot. Auton. Syst.
**2005**, 51, 41–54. [Google Scholar] [CrossRef] - Copenhagen, D. Retinal Neurobiology and Visual Processing; Technical Report; Federation of American Societies for Experimental Biology: Bethesda, MD, USA, 1996. [Google Scholar]
- Grifantini, K. To See Anew: New Technologies Are Moving Rapidly Toward Restoring or Enabling Vision in the Blind. IEEE Pulse
**2017**, 8, 35–38. [Google Scholar] [CrossRef] [PubMed] - Talan, J. BEHIND THE BENCH: What MacArthur Awardee Sheila Nirenberg Is Doing to Help Blind People See. Neurol. Today
**2013**, 13, 24–27. [Google Scholar] [CrossRef] - Basu, A. IEEE SMC 2017 in Banff, Alberta, Canada [Conference Reports]. IEEE Syst. Man Cybern. Mag.
**2018**, 4, 36–39. [Google Scholar] [CrossRef] - Nirenberg, S.; Carcieri, S.M.; Jacobs, A.L.; Latham, P.E. Retinal ganglion cells act largely as independent encoders. Nature
**2001**, 411, 698. [Google Scholar] [CrossRef] [PubMed] - Nirenberg, S.; Pandarinath, C. Retinal prosthetic strategy with the capacity to restore normal vision. Proc. Natl. Acad. Sci. USA
**2012**, 109, 15012–15017. [Google Scholar] [CrossRef] [PubMed][Green Version] - Gallagher, S. Philosophical conceptions of the self: Implications for cognitive science. Trends Cogn. Sci.
**2000**, 4, 14–21. [Google Scholar] [CrossRef] - Moore, P.; Xhafa, F.; Barolli, L. Semantic valence modeling: Emotion recognition and affective states in context-aware systems. In Proceedings of the 28th International Conference on Advanced Information Networking and Applications Workshops (WAINA 2014), Victoria, BC, Canada, 13–16 May 2014; pp. 536–541. [Google Scholar] [CrossRef]
- Simon, H.A. Cognitive science: The newest science of the artificial. Cogn. Sci.
**1980**, 4, 33–46. [Google Scholar] [CrossRef] - Simon, H.A. The sciences of the Artificial; Massachusetts Institute of Technology: Cambridge, MA, USA, 1969. [Google Scholar]
- Koelstra, S.; Muhl, C.; Soleymani, M.; Lee, J.S.; Yazdani, A.; Ebrahimi, T.; Pun, T.; Nijholt, A.; Patras, I. Deap: A database for emotion analysis; using physiological signals. IEEE Trans. Affect. Comput.
**2012**, 3, 18–31. [Google Scholar] [CrossRef] - Chen, J.; Hu, B.; Moore, P.; Zhang, X. Ontology-Based Model for Mining User’s Emotions on the Wisdom Web. In Wisdom Web of Things; Zhong, N., Ma, J., Liu, J., Huang, R., Tao, X., Eds.; Springer International Publishing: Cham, Switzerland, 2016; Chapter 6; pp. 121–153. [Google Scholar] [CrossRef]
- Mondal, P. Does Computation Reveal Machine Cognition? Biosemiotics
**2014**, 7, 97–110. [Google Scholar] [CrossRef] - Tanaka-Ishii, K. Semiotics of Programming; Cambridge University Press: Cambridge, UK, 2010. [Google Scholar]
- Bell, G.; Weir, M. Forward Chaining for Robot and Agent Navigation Using Potential Fields. In Proceedings of the 27th Australasian Conference on Computer Science (ACSC ’04); Australian Computer Society, Inc.: Darlinghurst, Australia, 2004; Volume 26, pp. 265–274. [Google Scholar]
- Chen, C.L.; Lin, I.H. Location-Aware Dynamic Session-Key Management for Grid-Based Wireless Sensor Networks. Sensors
**2010**, 10, 7347–7370. [Google Scholar] [CrossRef] [PubMed] - Apoorva, G.R.; Kala, R. Motion Planning for a Chain of Mobile Robots Using A* and Potential Field. Robotics
**2018**, 7, 20. [Google Scholar] [CrossRef]

| | $Q_1$ | $Q_2$ | $Q_3$ | $Q_4$ |
|---|---|---|---|---|
| $S_1$ | very high | very low | little very low | low |
| $S_2$ | low | very low | high | little low |
| $S_3$ | very low | little very low | little high | low |
| $S_4$ | little high | little little high | little low | little low |
| $S_5$ | high | very high | little very high | very high |

| | $W_1$ | $W_2$ | $W_3$ | $W_4$ |
|---|---|---|---|---|
| $S_1$ | imp | imp | unimp | very very imp |
| $S_2$ | unimp | unimp | imp | unimp |
| $S_3$ | unimp | unimp | unimp | unimp |
| $S_4$ | little imp | little imp | very unimp | little imp |
| $S_5$ | little imp | little imp | very imp | very very imp |

| | $Q_1$ | $Q_2$ | $Q_3$ | $Q_4$ |
|---|---|---|---|---|
| $S_1$ | 0.875 | 0.125 | 0.1875 | 0.25 |
| $S_2$ | 0.25 | 0.125 | 0.75 | 0.375 |
| $S_3$ | 0.125 | 0.1875 | 0.625 | 0.25 |
| $S_4$ | 0.625 | 0.6875 | 0.375 | 0.375 |
| $S_5$ | 0.75 | 0.1375 | 0.1125 | 0.875 |

| Term | $\mu$ | Term | $\mu$ |
|---|---|---|---|
| very very unimportant | 0.10985 | very unimportant | 0.169 |
| little very unimportant | 0.20085 | unimportant | 0.26 |
| little little unimportant | 0.29185 | little unimportant | 0.309 |
| very little unimportant | 0.34085 | very little important | 0.488725 |
| little important | 0.5365 | little little important | 0.562225 |
| important | 0.61 | little very important | 0.698725 |
| very important | 0.7465 | very very important | 0.835225 |

| | $W_1$ | $W_2$ | $W_3$ | $W_4$ |
|---|---|---|---|---|
| $S_1$ | 0.61 | 0.61 | 0.26 | 0.835225 |
| $S_2$ | 0.26 | 0.26 | 0.61 | 0.26 |
| $S_3$ | 0.26 | 0.26 | 0.26 | 0.26 |
| $S_4$ | 0.5365 | 0.5365 | 0.169 | 0.5365 |
| $S_5$ | 0.5365 | 0.5365 | 0.7465 | 0.15225 |

| Methods | Regular (%) | Irregular (%) | Multiple Regular (%) | Multiple Irregular (%) | Average (%) |
|---|---|---|---|---|---|
| BFS | 4.00 | 3.10 | 36.50 | 32.50 | 50 |
| ISS | 7.00 | 20.50 | 19.50 | 26.10 | 53 |
| UAPP | 5.00 | 5.40 | 8.85 | 14.40 | 67 |
| CPP | 0.00 | 2.20 | 2.00 | 7.30 | 96 |

| Methods | Regular (s) | Irregular (s) | Multiple Regular (s) | Multiple Irregular (s) | Average (s) |
|---|---|---|---|---|---|
| BFS | 134 | 154 | 150 | 144 | 140 |
| ISS | 115 | 135 | 130 | 125 | 120 |
| UAPP | 95 | 115 | 95 | 110 | 100 |
| CPP | 66 | 78 | 79 | 74 | 82 |

| Methods | Regular (%) | Irregular (%) | Multiple Regular (%) | Multiple Irregular (%) | Average (%) |
|---|---|---|---|---|---|
| BFS | 14 | 29 | 38.5 | 38 | 40 |
| ISS | 16 | 25 | 29.5 | 32 | 35 |
| UAPP | 8 | 12 | 15 | 25 | 29 |
| CPP | 3 | 4 | 3 | 11.3 | 13.2 |

© 2018 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).