
*Sensors* **2012**, *12*(10), 13947-13963; doi:10.3390/s121013947

## Abstract

This paper demonstrates a following robot with omni-directional wheels that is able to take action to avoid obstacles. The robot design is based on both fuzzy and extension theory. Fuzzy theory was applied to tune the PWM signal of the motor revolution and to correct path deviation issues encountered when the robot is moving. Extension theory was used to build a robot obstacle-avoidance model. Various mobile models were developed to handle different types of obstacles. The ultrasonic distance sensors mounted on the robot were used to estimate the distance to obstacles. If an obstacle is encountered, the correlation function is evaluated and the robot avoids the obstacle autonomously using the most appropriate mode. The effectiveness of the proposed approach was verified through several tracking experiments, which demonstrate the feasibility of the fuzzy path tracker as well as the extensible collision-avoidance system.

## 1. Introduction

The development of omni-directional wheel systems has made it possible to build robots that can move laterally without needing to rotate. Several researchers have employed omni-directional wheels in their robot designs. A dynamic model and nonlinear motion-control problems are explored in [1–3], where an omni-directional vehicle is equipped with up to three motor sets. Using a Field Programmable Gate Array (FPGA) as the control core, a multi-function robot [4] was developed by integrating a robot arm into an omni-directional mobile robot, enabling better interaction between the robot and users.

Many factors can cause path deviation in a robot, such as variation in motor mechanical tolerances, power output, the weight borne by the wheels, and even the ground surface; thus, path deviation is unavoidable. This study employs a compensation approach based on a motor encoder using fuzzy logic to resolve the problem of straight path deviation [5,6]. Motor revolutions are evaluated based on the feedback pulses dispatched from the motor encoder at specific intervals in order to tune the value of the Pulse Width Modulation (PWM) signal, and thus the specified motor revolutions are maintained.

Several approaches have been employed to avoid obstacles. These include lasers and infrared [7,8], vision systems [9,10], and wall following using ultrasonic sensors [11–13]. Extension theory was proposed in 1983 to solve contradictions and incompatibility problems. It consists of two parts—the matter-element model and extended set theory—and can be applied to classification or fault diagnosis [14]. Fuzzy theory, proposed by L. A. Zadeh in 1965, uses fuzzy rules and fuzzy inference to replace complicated equations. It is widely used in robot control [15]. In this paper, the obstacle-avoidance system is modeled as multi-dimensional obstacle-avoidance matter-elements, where the names of the extension matter-elements correspond to the obstacle-avoidance modes. The proposed approach utilizes ultrasound to complete the task and to implement the matter-element extension model. In this way, obstacles can be modeled and an optimal tracking approach is implemented.

This paper is organized as follows: Section 2 describes the hardware configuration of the robots. Section 3 introduces the omni-directional wheeled mobile robot designed using fuzzy logic theory. Section 4 presents an extension theory based on an obstacle avoidance system. Section 5 gives a discussion of experimental results and Section 6 presents our conclusions.

## 2. Hardware Design

The omni-directional mobile autonomous following robot control system is composed of an industrial motherboard (SBC86850) along with a Peripheral Interface Controller (PIC) microcontroller (DSPIC30F6010A) that serves as the control core and commands the peripheral hardware. The peripheral hardware itself consists of a motor (3863A024C), a motor driver (MD03), a motor encoder (HEDS-5500 A12), an ultrasonic distance sensor (the PING™ Ultrasonic Range Finder), and a Bluetooth controller. Figure 1 shows a diagram of the omni-directional mobile robot. Figure 2 shows the three parts of the robot tracking system, namely the user interface, a workstation, and the robots.

There are two user modes available: manual and autonomous tracking. In manual mode, we use a Wii controller to control the robot's direction of movement and speed. In the autonomous tracking mode, the operator is provided with an infrared emitter module, as shown in Figure 3, which emits infrared signals to enable the robot to track and follow the operator autonomously.

The camera is connected to the workstation and has an infrared filter so that an infrared image is displayed on the screen. The camera is a Logitech Quick Cam Pro 5000, which captures an infrared light source carried by the operator. The industrial motherboard comprises the follower-tracking component of the system, and analyzes the infrared data, which is converted to physical coordinates giving the relative position of the user and the robot. This data is then used to control the direction and speed of the robot.

The robot employs two PIC microcontroller boards, with one serving as a signal capture board and the other as a motor control board. This decentralized setup improves the processing efficiency of the PIC microcontroller. The signal capture board receives the commands issued by the workstation and the Wii controller signal. The motor control board handles the motor control function, and receives information from the ultrasonic distance sensor.

## 3. An Omni-Motion Control System Based Upon Fuzzy Theory

The robot implemented in this research is capable of directed translation along the x and y axes, and rotational movement about the z axis. All signals dispatched from the motor encoders are translated into the PWM format and are used to calculate the motor revolution. These are then compensated by applying fuzzy logic theory to correct the path deviation. The flowchart of the proposed omni-motion control system is shown in Figure 4.

#### 3.1. Building a Kinematic Equation

Figure 5 shows the three motors positioned at a distance R from the origin (i.e., the base center) and placed at equal angles to each other, where the angle between the wheel axes is 120 degrees. The angles θ_{1}, θ_{2}, and θ_{3} are the angles of the wheels measured relative to the x-y plane. φ is the rotational angle of the robot; V_{1}, V_{2}, and V_{3} represent the three wheel speeds, and V_{m} is the target movement direction. The experimental setup of the robot is shown in Figure 6.

V_{m} is the intended movement direction of the robot and can be represented by its x and y components as V_{m} = (V_{mx}, V_{my}). φ̇ is the robot's rotational angular velocity. The wheel speeds V_{1}, V_{2}, V_{3} are expressed as:

A movement rule base can be designed from the kinematic equation.
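The kinematic equation itself does not reproduce in this copy. As a minimal sketch, the standard inverse-kinematic transform for a three-wheel omni platform with wheel axes 120° apart (the configuration of Figure 5) can be written as follows; the mount angles and base radius `R` here are illustrative assumptions, not values taken from the paper:

```python
import math

def wheel_speeds(vmx, vmy, phi_dot, R=0.18, thetas_deg=(90.0, 210.0, 330.0)):
    """Map a desired body velocity (vmx, vmy) and angular velocity phi_dot
    to the three wheel speeds V1, V2, V3 of an omni-directional platform.

    thetas_deg: assumed wheel placement angles, 120 degrees apart;
    R: assumed distance from the base center to each wheel (meters).
    """
    speeds = []
    for t in thetas_deg:
        rad = math.radians(t)
        # each wheel rolls along the tangent direction at its mount angle,
        # plus the contribution of the platform rotation R * phi_dot
        speeds.append(-math.sin(rad) * vmx + math.cos(rad) * vmy + R * phi_dot)
    return speeds
```

Pure rotation (V_{mx} = V_{my} = 0) commands the same speed R·φ̇ to all three wheels, while for pure translation the three wheel speeds sum to zero because the mount angles are 120° apart.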

#### 3.2. Fuzzy Controller Design from Motor Encoders

Robots in real environments are easily affected by several factors. For example, the output power of the three groups of motors may be uneven, ground friction may vary, or the weight balance may be uneven. All these factors can cause the path of the robot to deviate.

As shown in Figure 7, the output u_{k} is the sum of u_{k-1} and Δu, where Δu, inferred from the motor revolution error and the error difference, represents the amount of PWM adjustment needed. The inputs to the fuzzy controller are e and de, where e is the motor speed error and de is the error variation. The straight path deviation is therefore compensated by holding the motor revolutions at the specified value.
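A minimal sketch of this incremental update, assuming the 0–127 PWM range described later in this section (the clamping and the function name are illustrative):

```python
def update_pwm(u_prev, delta_u, lo=0, hi=127):
    """Incremental controller output: u_k = u_{k-1} + delta_u,
    clamped to the platform's 7-bit PWM range (0-127)."""
    return max(lo, min(hi, u_prev + delta_u))
```

For example, `update_pwm(80, 5)` yields 85; near the range limits the adjustment saturates, e.g. `update_pwm(125, 10)` yields 127.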

Figure 8 shows the membership functions of the input variables e and de and the output Δu, where triangular functions are used for every membership function. The fuzzy set is composed of the following values: negative big (NB), negative medium (NM), negative small (NS), zero (ZO), positive small (PS), positive medium (PM), and positive big (PB).

By developing rules of thumb based upon several measurements, the premises and consequences are deduced and the tuned ranges of the membership functions are determined accordingly. The rule base of 21 rules for motor encoder compensation is given in Table 1, and an example is illustrated by:

For example, if the motor revolution error e is NM and the revolution error difference de is ZO, then the PWM adjustment Δu is PM. This rule corresponds to a measured motor revolution that is less than the expected value, with an error within the tolerance range; therefore, the motor PWM is increased by PM.

We perform fuzzy inference using a minimum inference engine, and the controller output is determined by center-of-gravity defuzzification. The input-output relationship curve of the fuzzy controller is shown in Figure 9.
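As an illustration, the sketch below implements minimum inference over triangular memberships with a center-average defuzzifier (a common simplification of center-of-gravity when the output sets are reduced to their centers). The [-3, 3] universe, the set centers, and the three rules taken from the de = ZO row of Table 1 are assumptions for the example, not the paper's exact scaling:

```python
def tri(x, a, b, c):
    """Triangular membership with feet a, c and peak b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# linguistic terms mapped to triangle parameters on an assumed [-3, 3] universe
TERMS = {
    "NB": (-4, -3, -2), "NM": (-3, -2, -1), "NS": (-2, -1, 0),
    "ZO": (-1, 0, 1),   "PS": (0, 1, 2),    "PM": (1, 2, 3),
    "PB": (2, 3, 4),
}
CENTERS = {name: p[1] for name, p in TERMS.items()}  # output set centers

# three rules from the de = ZO row of Table 1: (e term, de term) -> delta_u term
RULES = [
    ("NM", "ZO", "PM"),
    ("ZO", "ZO", "ZO"),
    ("PM", "ZO", "NM"),
]

def fuzzy_delta_u(e, de):
    """Minimum inference engine + center-average defuzzification."""
    num = den = 0.0
    for e_t, de_t, out_t in RULES:
        w = min(tri(e, *TERMS[e_t]), tri(de, *TERMS[de_t]))  # firing strength
        num += w * CENTERS[out_t]
        den += w
    return num / den if den else 0.0
```

With e = -2 and de = 0 only the first rule fires fully, so the output is the PM center (+2): the PWM is raised when the motor runs slower than commanded, exactly as in the rule example above.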

The motor PWM range is divided into 128 equal steps, i.e., 0–127. As shown in Figure 10(a), the initial PWM values driving the three motors are set to 80. A straight path deviation is observed because of the non-uniform feedback signals. Figure 10(b) shows the activation of the fuzzy controller at 15 ms to equalize the three motor revolutions. However, the three motor sets settle at different PWM levels due to variation in the load carried by the robot, motor efficiency, and other factors.

## 4. An Extensible Obstacle-Avoidance System Design

Extension theory is used to describe the inference process of obstacle-avoidance, which allows us to transform a complex problem in the real world into one expressed through matter-elements. As shown in Figure 11 and in Equation (3), nine sets of ultrasonic distance sensors are installed on the left, front-left, front-right, and right sides. An appropriate movement path can be selected by applying extension theory after converting the analogue distance signal into a digital value:

#### 4.1. Matter-Element Extension Set

In this section we quantify the extension set characteristics mathematically. A matter-element is described by three basic elements: a name (N), a characteristic (C), and a characteristic value (V).

In this work, the obstacle-avoidance system is modeled as multi-dimensional obstacle-avoidance matter-elements, where the names of the extension matter-elements correspond to the obstacle-avoidance modes. The ultrasonic distance sensors are defined by four characteristics. Various motion approaches, based on the principle of the multi-dimensional extension matter-element model, are expressed in Equation (4). These are associated with various motion strategies:

R_{1,i1}, R_{2,i2}, R_{3,i3} represent the various matter-element models, N_{1,i1}, N_{2,i2}, N_{3,i3} are the names of the various obstacle modes, and C_{1,i1}, C_{2,i2}, C_{3,i3} are the distances to the obstacles in each aspect. The terms <a_{1,i1,j}, b_{1,i1,j}>, <a_{2,i2,j}, b_{2,i2,j}>, <a_{3,i3,j}, b_{3,i3,j}> are the scopes of the classical domains defined in the various aspects. This work comprises three sets of models, representing various motion approaches. Strategies are built to avoid obstacles in various aspect directions. There are up to seven motion strategies specified for forward motion, and six each for the left-forward and the right-forward motions, as shown in Tables 2–4.
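To make the matter-element idea concrete, the sketch below encodes two rows of Table 2 as named characteristic ranges and checks which model a set of sensor readings falls into. The characteristic ordering (left, front-left, front-right, right), the centimeter units, and the dictionary layout are assumptions for illustration:

```python
# two forward-motion matter-elements from Table 2, as <a, b> ranges (assumed cm)
FORWARD_MODELS = {
    "R11": {"ranges": [(15, 200)] * 4,                       # no obstacle
            "approach": "move forward"},
    "R12": {"ranges": [(15, 200), (0, 15), (15, 200), (15, 200)],
            "approach": "move right"},                        # obstacle left-forward
}

def classify(readings):
    """Return (name, approach) of the first model whose classical domains
    contain every sensor reading, or None if no model matches."""
    for name, model in FORWARD_MODELS.items():
        if all(a <= r <= b for r, (a, b) in zip(readings, model["ranges"])):
            return name, model["approach"]
    return None
```

Here `classify([100, 10, 120, 90])` matches R_{12} and returns the "move right" approach; in the full system this crisp containment test is replaced by the correlation function of Section 4.2, which grades how well the readings fit each model.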

#### 4.2. Correlation Function

In Figure 12, a classical domain and a neighborhood domain are defined on the intervals X_{0} = <a, b> and X = <c, d>, respectively. The neighborhood domain is defined as X = <0, 200>, with X_{0} ⊂ X and without any common end points. The primary correlation function can be defined as:

D(x, X_{0}, X) is a point position value, and the relationship between a point and the two different ranges is defined as:

The relationship between x and X_{0} is defined as:
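The displayed equations do not reproduce in this copy; as a sketch, one widely used form of the elementary correlation function takes the extension distance ρ(x, X) = |x − (a+b)/2| − (b−a)/2 and sets D(x, X₀, X) to −1 inside the classical domain. The paper's exact piecewise form may differ:

```python
def rho(x, interval):
    """Extension distance between point x and interval <a, b>."""
    a, b = interval
    return abs(x - (a + b) / 2) - (b - a) / 2

def correlation(x, classical, neighborhood):
    """Elementary correlation degree K(x): positive when x lies inside the
    classical domain, negative outside the neighborhood domain."""
    r0 = rho(x, classical)
    if r0 <= 0:                       # x inside the classical domain
        return -r0                    # D taken as -1 by convention
    d = rho(x, neighborhood) - r0
    return r0 / d if d != 0 else -r0  # guard the degenerate boundary case
```

With the domains used in Tables 2–4, `correlation(100, (15, 200), (0, 200))` is positive (the reading is well clear of the obstacle threshold), while `correlation(5, (15, 200), (0, 200))` is negative.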

#### 4.3. Evaluation Method and the Best Strategy of Obstacle Avoidance

Figure 13 describes the evaluation procedure used to find the best obstacle-avoidance strategy. The optimal strategy corresponds to the maximum correlation degree, obtained by means of the following procedure:

We first define the evaluation conditions. The correlation sets K_{1,i1}, K_{2,i2}, K_{3,i3} (i.e., the obstacle-avoidance strategies), formed by the three mobile modes and the four sets of distance sensors mounted at various angles, are expressed as:

$$\{\begin{array}{ccc}{K}_{1,i1}=\left\{{K}_{1,i1,j}\right\}& i1=1,2,\cdots ,7& ,j=1,2,3,4\\ {K}_{2,i2}=\left\{{K}_{2,i2,j}\right\}& i2=1,2,\cdots ,6& ,j=1,2,3,4\\ {K}_{3,i3}=\left\{{K}_{3,i3,j}\right\}& i3=1,2,\cdots ,6& ,j=1,2,3,4\end{array}$$

The weightings W_{1,i1,j}, W_{2,i2,j}, W_{3,i3,j} of the four sets of distance sensors are all assigned the same value of 1/4. The weighted correlation over the sensed distances is expressed as:

$$\{\begin{array}{ll}{\overline{K}}_{1,i1}=\sum _{j=1}^{4}{W}_{1,i1,j}{K}_{1,i1,j}\hfill & ,\text{i}1=1,2,\dots ,7\hfill \\ {\overline{K}}_{2,i2}=\sum _{j=1}^{4}{W}_{2,i2,j}{K}_{2,i2,j}\hfill & ,\text{i}2=1,2,\dots ,6\hfill \\ {\overline{K}}_{3,i3}=\sum _{j=1}^{4}{W}_{3,i3,j}{K}_{3,i3,j}\hfill & ,\text{i}3=1,2,\dots ,6\hfill \end{array}$$

The maximum degree of correlation within each mobile mode is extracted as the optimal obstacle-avoidance strategy, found by comparing the weighted correlations $\overline{K}_{1,i1}$, $\overline{K}_{2,i2}$ and $\overline{K}_{3,i3}$, as:

$$\{\begin{array}{c}{\overline{K}}_{1max}=\underset{i1=1,2,\cdots 7}{max}{\overline{K}}_{1,i1}\\ {\overline{K}}_{2max}=\underset{i2=1,2,\cdots 6}{max}{\overline{K}}_{2,i2}\\ {\overline{K}}_{3max}=\underset{i3=1,2,\cdots 6}{max}{\overline{K}}_{3,i3}\end{array}$$

The obstacle-avoidance mode of the robot is chosen by evaluating the degree of correlation within the set of multi-dimensional obstacle-avoidance matter-element modes, which is then translated into the optimal strategy to avoid obstacles.
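The selection procedure above can be sketched as follows, with equal sensor weights of 1/4 as in the paper; the data layout (a dict of per-mode strategy lists) is an assumption for illustration:

```python
def best_strategy(mode_correlations):
    """mode_correlations: {mode: [[K per sensor (4 values)] per strategy]}.
    Computes the weighted correlation of each strategy (weights W = 1/4),
    then returns the overall best (mode, strategy number, correlation)."""
    best = None
    for mode, strategies in mode_correlations.items():
        for i, ks in enumerate(strategies):
            kbar = sum(0.25 * k for k in ks)  # equal weights W = 1/4
            if best is None or kbar > best[2]:
                best = (mode, i + 1, kbar)    # strategies numbered from 1
    return best
```

For example, if the forward mode's second strategy has sensor correlations (0.9, 0.8, 0.9, 0.8) and all other strategies score lower, `best_strategy` returns that mode and strategy with a weighted correlation of 0.85.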

In Equation (12) the directions representing the mobile modes are given. If the direction is equal to 1, it denotes forward motion; 2 represents the forward-left mode; and any other value represents the forward-right mode. The obstacle direction representing the optimal obstacle-avoidance strategy is sent to the robot, where $\overline{K}_{1max}$ represents the optimal strategy for forward motion, $\overline{K}_{2max}$ for forward-left motion, and $\overline{K}_{3max}$ for forward-right motion:

## 5. Experimental Results and Analysis

The interface design of the autonomous mobile robot controller is shown in Figure 14. A data link to the robot through RS232 serial transmission carries the control signals sent from the workstation to the microcontroller; this link is used primarily in manual mode. An operational test of obstacle-avoidance by the omni-directional robot was conducted, as shown in Figures 15 and 16.

The omni-directional mobile robot exhibited high mobility in a complex environment. We also tested the obstacle avoidance capability. In order to correct path deviation, the aspect angle of movement and the target speed are transmitted to the three motor sets, which are driven by a fuzzy logic controller providing encoder-based compensation. The robot was able to avoid all obstacles, validating the feasibility and effectiveness of the proposed system.

## 6. Conclusions

In this study, omni-directional wheels were used to develop a robot capable of omni-directional movement. The robot offers improved mobility as it utilizes lateral movement over rotational movement by utilizing the omni-directional wheel design. The robot was tested in various mobile modes in a complex environment, and was able to compensate for path deviations through motor encoder compensation based on fuzzy logic theory. The robot was also able to avoid all obstacles in its path autonomously by employing ultrasonic distance sensors with an obstacle-avoidance algorithm. The aim of implementing omni-directional motion control for a three-wheeled autonomous robot was achieved. The robot offers high mobility, motion path correction, and an obstacle avoidance capability. This robot system is suitable for libraries, supermarkets, airports, hospitals and similar scenarios.

## Acknowledgments

This work was funded by The National Science Council (grant number NSC-100-2221-E-167-004).

## References

1. Hung, N.; Kim, D.H.; Kim, H.K.; Kim, S.B. Tracking Controller Design of Omnidirectional Mobile Manipulator System. Proceedings of the 2009 ICROS-SICE International Joint Conference, Fukuoka, Japan, 18–21 August 2009; pp. 539–544.
2. Zhao, D.; Deng, X.; Yi, J. Motion and internal force control for omnidirectional wheeled mobile robots. IEEE/ASME Trans. Mechatron. **2009**, 14, 382–387.
3. Ye, C.; Ma, S. Development of an Omnidirectional Mobile Platform. Proceedings of the IEEE 2009 International Conference on Mechatronics and Automation (ICMA 2009), Changchun, China, 9–12 August 2009; pp. 1111–1115.
4. Huang, H.C.; Tsai, C.C. FPGA implementation of an embedded robust adaptive controller for autonomous omnidirectional mobile platform. IEEE Trans. Ind. Electron. **2009**, 56, 1604–1616.
5. Meng, J.E.; Chang, D. Online tuning of fuzzy inference systems using dynamic fuzzy Q-learning. IEEE Trans. Syst. Man Cybern. B **2004**, 34, 1478–1489.
6. Chiu, C.S.; Lian, K.Y. Hybrid fuzzy model-based control of nonholonomic systems: A unified viewpoint. IEEE Trans. Fuzzy Syst. **2008**, 16, 85–96.
7. Chi, K.H.; Lee, M.R. Obstacle Avoidance in Mobile Robot Using Neural Network. Proceedings of the International Conference on Consumer Electronics, Communications and Networks (CECNet), Xianning, China, 11–13 March 2011; pp. 5082–5085.
8. You, B.; Qiu, J.; Li, D. A Novel Obstacle Avoidance Method for Low-Cost Household Mobile Robot. Proceedings of the 2008 IEEE International Conference on Automation and Logistics (ICAL), Qingdao, China, 1–3 September 2008; pp. 111–116.
9. Green, W.E.; Oh, P.Y. Optic-flow-based collision avoidance. IEEE Robot. Autom. Mag. **2008**, 15, 96–103.
10. Wei, Z.; Lee, D.J.; Nelson, B.E.; Archibald, J.K. Hardware-friendly vision algorithms for embedded obstacle detection applications. IEEE Trans. Circuits Syst. Video Technol. **2010**, 20, 1577–1589.
11. Kim, P.G.; Park, C.G.; Jong, Y.H.; Yun, J.H.; Mo, E.J.; Kim, C.S.; Jie, M.S.; Hwang, S.C.; Lee, K.W. Obstacle avoidance of a mobile robot using vision system and ultrasonic sensor. Lect. Notes Comput. Sci. **2007**, 4681, 545–553.
12. Juang, C.F.; Hsu, C.H. Reinforcement ant optimized fuzzy controller for mobile robot wall-following control. IEEE Trans. Ind. Electron. **2009**, 56, 3931–3940.
13. Chen, C.Y.; Shih, B.Y.; Chou, W.C.; Li, Y.J.; Chen, Y.H. Obstacle avoidance design for a humanoid intelligent robot with ultrasonic sensors. J. Vib. Control **2011**, 17, 1798–1804.
14. Wang, M.-H. Application of extension theory to vibration fault diagnosis of generator sets. IEE Proc. Gener. Transm. Distrib. **2004**, 151, 503–508.
15. David, H.; Humberto, M. Fuzzy mobile-robot positioning in intelligent spaces using wireless sensor networks. Sensors **2011**, 11, 10820–10839.

**Figure 10.** Experimental results of the motor revolution using fuzzy theory. (**a**) PWM tuning curves and (**b**) encoder feedback curves for the three motor sets.

**Figure 15.** A set of omni-directional mobile experiments. (**a**) Forward, (**b**) turn right, (**c**) turn left, and (**d**) rotate back.

**Figure 16.** A set of obstacle-avoidance experiments. (**a**–**d**) The robot passing the first obstacle and (**e**–**h**) the robot passing the second obstacle.

**Table 1.** The rule base for motor encoder compensation.

| de \ e | NB | NM | NS | ZO | PS | PM | PB |
|---|---|---|---|---|---|---|---|
| NB | PB | PB | PM | ZO | NM | NM | NB |
| ZO | PB | PM | PS | ZO | NS | NM | NB |
| PB | PM | PS | ZO | ZO | ZO | NS | NM |

**Table 2.** Obstacle-avoidance strategies for the forward motion mode.

| Number | Obstacle Location | Extension Element Model | Approach |
|---|---|---|---|
| 1 | No obstacle | ${\text{R}}_{11}=\left[\begin{array}{lll}{N}_{11}\hfill & ,{C}_{111}\hfill & ,<15,200>\hfill \\ & ,{C}_{112}\hfill & ,<15,200>\hfill \\ & ,{C}_{113}\hfill & ,<15,200>\hfill \\ & ,{C}_{114}\hfill & ,<15,200>\hfill \end{array}\right]$ | Move forward |
| 2 | Left forward | ${\text{R}}_{12}=\left[\begin{array}{lll}{N}_{12}\hfill & ,{C}_{121}\hfill & ,<15,200>\hfill \\ & ,{C}_{122}\hfill & ,<0,15>\hfill \\ & ,{C}_{123}\hfill & ,<15,200>\hfill \\ & ,{C}_{124}\hfill & ,<15,200>\hfill \end{array}\right]$ | Move right |
| 3 | Right forward | ${\text{R}}_{13}=\left[\begin{array}{lll}{N}_{13}\hfill & ,{C}_{131}\hfill & ,<15,200>\hfill \\ & ,{C}_{132}\hfill & ,<15,200>\hfill \\ & ,{C}_{133}\hfill & ,<0,15>\hfill \\ & ,{C}_{134}\hfill & ,<15,200>\hfill \end{array}\right]$ | Move left |
| 4 | Left forward, Right forward | ${\text{R}}_{14}=\left[\begin{array}{ccc}{N}_{14}\hfill & ,{C}_{141}\hfill & ,<15,200>\hfill \\ & ,{C}_{142}\hfill & ,<0,15>\hfill \\ & ,{C}_{143}\hfill & ,<0,15>\hfill \\ & ,{C}_{144}\hfill & ,<15,200>\hfill \end{array}\right]$ | Move left or right |
| 5 | Left forward, Right forward, Right | ${\text{R}}_{15}=\left[\begin{array}{lll}{N}_{15}\hfill & ,{C}_{151}\hfill & ,<15,200>\hfill \\ & ,{C}_{152}\hfill & ,<0,15>\hfill \\ & ,{C}_{153}\hfill & ,<0,15>\hfill \\ & ,{C}_{154}\hfill & ,<0,15>\hfill \end{array}\right]$ | Move left |
| 6 | Left forward, Right forward, Left | ${\text{R}}_{16}=\left[\begin{array}{lll}{N}_{16}\hfill & ,{C}_{161}\hfill & ,<0,15>\hfill \\ & ,{C}_{162}\hfill & ,<0,15>\hfill \\ & ,{C}_{163}\hfill & ,<0,15>\hfill \\ & ,{C}_{164}\hfill & ,<15,200>\hfill \end{array}\right]$ | Move right |
| 7 | All | ${\text{R}}_{17}=\left[\begin{array}{lll}{N}_{17}\hfill & ,{C}_{171}\hfill & ,<0,15>\hfill \\ & ,{C}_{172}\hfill & ,<0,15>\hfill \\ & ,{C}_{173}\hfill & ,<0,15>\hfill \\ & ,{C}_{174}\hfill & ,<0,15>\hfill \end{array}\right]$ | Move backward |

**Table 3.** Obstacle-avoidance strategies for the forward-left motion mode.

| Number | Obstacle Location | Extension Element Model | Approach |
|---|---|---|---|
| 1 | No obstacle | ${\text{R}}_{21}=\left[\begin{array}{lll}{N}_{21}\hfill & ,{C}_{211}\hfill & ,<15,200>\hfill \\ & ,{C}_{212}\hfill & ,<15,200>\hfill \\ & ,{C}_{213}\hfill & ,<15,200>\hfill \\ & ,{C}_{214}\hfill & ,<15,200>\hfill \end{array}\right]$ | Move left forward |
| 2 | Left | ${\text{R}}_{22}=\left[\begin{array}{lll}{N}_{22}\hfill & ,{C}_{221}\hfill & ,<0,15>\hfill \\ & ,{C}_{222}\hfill & ,<15,200>\hfill \\ & ,{C}_{223}\hfill & ,<15,200>\hfill \\ & ,{C}_{224}\hfill & ,<15,200>\hfill \end{array}\right]$ | Move forward |
| 3 | Left, Left forward | ${\text{R}}_{23}=\left[\begin{array}{lll}{N}_{23}\hfill & ,{C}_{231}\hfill & ,<0,15>\hfill \\ & ,{C}_{232}\hfill & ,<0,15>\hfill \\ & ,{C}_{233}\hfill & ,<15,200>\hfill \\ & ,{C}_{234}\hfill & ,<15,200>\hfill \end{array}\right]$ | Move right |
| 4 | Left, Right forward | ${\text{R}}_{24}=\left[\begin{array}{lll}{N}_{24}\hfill & ,{C}_{241}\hfill & ,<0,15>\hfill \\ & ,{C}_{242}\hfill & ,<15,200>\hfill \\ & ,{C}_{243}\hfill & ,<0,15>\hfill \\ & ,{C}_{244}\hfill & ,<15,200>\hfill \end{array}\right]$ | Move right |
| 5 | Left, Left forward, Right forward | ${\text{R}}_{25}=\left[\begin{array}{lll}{N}_{25}\hfill & ,{C}_{251}\hfill & ,<0,15>\hfill \\ & ,{C}_{252}\hfill & ,<0,15>\hfill \\ & ,{C}_{253}\hfill & ,<0,15>\hfill \\ & ,{C}_{254}\hfill & ,<15,200>\hfill \end{array}\right]$ | Move right |
| 6 | Upper left corner | ${\text{R}}_{26}=\left[\begin{array}{lll}{N}_{26}\hfill & ,{C}_{261}\hfill & ,<15,200>\hfill \\ & ,{C}_{262}\hfill & ,<0,15>\hfill \\ & ,{C}_{263}\hfill & ,<15,200>\hfill \\ & ,{C}_{264}\hfill & ,<15,200>\hfill \end{array}\right]$ | Move right forward |

**Table 4.** Obstacle-avoidance strategies for the forward-right motion mode.

| Number | Obstacle Location | Extension Element Model | Approach |
|---|---|---|---|
| 1 | No obstacle | ${\text{R}}_{31}=\left[\begin{array}{lll}{N}_{31}\hfill & ,{C}_{311}\hfill & ,<15,200>\hfill \\ & ,{C}_{312}\hfill & ,<15,200>\hfill \\ & ,{C}_{313}\hfill & ,<15,200>\hfill \\ & ,{C}_{314}\hfill & ,<15,200>\hfill \end{array}\right]$ | Move right forward |
| 2 | Right | ${\text{R}}_{32}=\left[\begin{array}{lll}{N}_{32}\hfill & ,{C}_{321}\hfill & ,<15,200>\hfill \\ & ,{C}_{322}\hfill & ,<15,200>\hfill \\ & ,{C}_{323}\hfill & ,<15,200>\hfill \\ & ,{C}_{324}\hfill & ,<0,15>\hfill \end{array}\right]$ | Move forward |
| 3 | Left forward, Right | ${\text{R}}_{33}=\left[\begin{array}{lll}{N}_{33}\hfill & ,{C}_{331}\hfill & ,<15,200>\hfill \\ & ,{C}_{332}\hfill & ,<0,15>\hfill \\ & ,{C}_{333}\hfill & ,<15,200>\hfill \\ & ,{C}_{334}\hfill & ,<0,15>\hfill \end{array}\right]$ | Move left |
| 4 | Right forward, Right | ${\text{R}}_{34}=\left[\begin{array}{lll}{N}_{34}\hfill & ,{C}_{341}\hfill & ,<15,200>\hfill \\ & ,{C}_{342}\hfill & ,<15,200>\hfill \\ & ,{C}_{343}\hfill & ,<0,15>\hfill \\ & ,{C}_{344}\hfill & ,<0,15>\hfill \end{array}\right]$ | Move left |
| 5 | Left forward, Right forward, Right | ${\text{R}}_{35}=\left[\begin{array}{lll}{N}_{35}\hfill & ,{C}_{351}\hfill & ,<15,200>\hfill \\ & ,{C}_{352}\hfill & ,<0,15>\hfill \\ & ,{C}_{353}\hfill & ,<0,15>\hfill \\ & ,{C}_{354}\hfill & ,<0,15>\hfill \end{array}\right]$ | Move left |
| 6 | Upper right corner | ${\text{R}}_{36}=\left[\begin{array}{lll}{N}_{36}\hfill & ,{C}_{361}\hfill & ,<15,200>\hfill \\ & ,{C}_{362}\hfill & ,<15,200>\hfill \\ & ,{C}_{363}\hfill & ,<0,15>\hfill \\ & ,{C}_{364}\hfill & ,<15,200>\hfill \end{array}\right]$ | Move left forward |

© 2012 by the authors; licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution license (http://creativecommons.org/licenses/by/3.0/).