# Adaptive Neuro-Fuzzy Technique for Autonomous Ground Vehicle Navigation


## Abstract


## 1. Introduction

## 2. The Kinematic Model of an Autonomous Ground Vehicle

The start point coordinates are denoted as $x_o$ on the x-axis and $y_o$ on the y-axis. Similarly, the target point coordinates are denoted as $x_t$ and $y_t$ respectively. The vehicle's current position is the coordinate $P_c$. The vehicle has four fixed standard wheels and is differentially driven by skid-steer motion. The wheels have the same radius, r, and the driving wheels are separated by a distance L. In general, the posture of the vehicle in the two-dimensional plane at any instant is defined by the system's Cartesian coordinates and heading angle, $\theta$, with respect to a global reference frame, $P_c = (x_c, y_c, \theta)$. In this paper, the wheel radius is r = 0.1 m and the distance between the driving wheels is L = 0.3 m. The autonomous ground vehicle is assumed to be subject to kinematic constraints: the contact between the wheels and the ground is pure rolling with no slipping [17]. These constraints make the motion planning more challenging.

- No slip constraint:
$$\dot{x}_c(t)\cos\theta(t) - \dot{y}_c(t)\sin\theta(t) = a\,\dot{\theta}(t)$$
- Pure rolling constraint:
$$\dot{x}_c(t)\cos\theta(t) + \dot{y}_c(t)\sin\theta(t) + L\,\dot{\theta}(t) = r\,\dot{\phi}_r(t)$$
$$\dot{x}_c(t)\cos\theta(t) + \dot{y}_c(t)\sin\theta(t) - L\,\dot{\theta}(t) = r\,\dot{\phi}_l(t)$$

The wheel angular velocities and the vehicle's body velocities are related by

$$\begin{bmatrix} v_x(t) \\ v_y(t) \\ \dot{\theta}(t) \end{bmatrix} = \begin{bmatrix} \frac{r}{2} & \frac{r}{2} \\ 0 & 0 \\ -\frac{r}{L} & \frac{r}{L} \end{bmatrix} \begin{bmatrix} w_l(t) \\ w_r(t) \end{bmatrix}$$

so that

$$v(t) = r\,\frac{w_r(t) + w_l(t)}{2}, \qquad \dot{\theta}(t) = r\,\frac{w_r(t) - w_l(t)}{L}$$
$$\dot{x}(t) = v(t)\cos\theta(t), \qquad \dot{y}(t) = v(t)\sin\theta(t)$$
$$w(t) = \dot{\theta}(t), \qquad w_r(t) = \dot{\phi}_r(t), \qquad w_l(t) = \dot{\phi}_l(t)$$

or, in matrix form,

$$\begin{bmatrix} \dot{x}(t) \\ \dot{y}(t) \\ \dot{\theta}(t) \end{bmatrix} = \begin{bmatrix} \cos\theta & 0 \\ \sin\theta & 0 \\ 0 & 1 \end{bmatrix} \begin{bmatrix} v \\ w \end{bmatrix}$$

where:
- ${\mathrm{v}}_{\mathrm{x}}\left(\mathrm{t}\right)$ = longitudinal velocity of the moving vehicle [m/s]
- ${\mathrm{v}}_{\mathrm{y}}\left(\mathrm{t}\right)$ = lateral velocity of the moving vehicle [m/s]
- $\theta$ = moving vehicle orientation [degree]
- ${\mathrm{w}}_{\mathrm{r}}\left(\mathrm{t}\right)$ = angular velocity of right wheel [rad/s]
- ${\mathrm{w}}_{\mathrm{l}}\left(\mathrm{t}\right)$ = angular velocity of left wheel [rad/s]
- $\mathrm{w}\left(\mathrm{t}\right)$ = angular velocity of vehicle [rad/s]
- r = wheel radius [m]
- a = the distance between the centre of mass and driving wheels axis in x-direction [m]
- L = the distance between each driving wheel and the vehicle axis of symmetry in y-direction [m].
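As an illustration (not the paper's implementation), the kinematic relations above can be sketched in Python using the paper's parameter values r = 0.1 m and L = 0.3 m; the function and variable names are our own.

```python
import math

# Differential-drive kinematics with the paper's parameters.
R_WHEEL = 0.1   # wheel radius r [m]
L_TRACK = 0.3   # distance between the driving wheels L [m]

def body_velocities(w_l, w_r, r=R_WHEEL, L=L_TRACK):
    """Map wheel angular velocities [rad/s] to the vehicle's (v, w)."""
    v = r * (w_r + w_l) / 2.0   # linear velocity [m/s]
    w = r * (w_r - w_l) / L     # angular velocity [rad/s]
    return v, w

def step_pose(x, y, theta, w_l, w_r, dt=0.01):
    """Euler-integrate the unicycle model:
    x' = v cos(theta), y' = v sin(theta), theta' = w."""
    v, w = body_velocities(w_l, w_r)
    return (x + v * math.cos(theta) * dt,
            y + v * math.sin(theta) * dt,
            theta + w * dt)
```

With equal wheel speeds the angular velocity is zero and the vehicle moves straight along its heading, as the matrix form predicts.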

## 3. The Architecture of Adaptive Neuro-Fuzzy Inference System (ANFIS)

For simplicity, assume the fuzzy inference system under consideration has two inputs, $k_1$ and $k_2$, and one output f. For a first-order Sugeno fuzzy model, a common rule set with two fuzzy if-then rules is:

Rule 1: If $k_1$ is $A_1$ and $k_2$ is $B_1$, then $f_1 = p_1 k_1 + q_1 k_2 + r_1$.

Rule 2: If $k_1$ is $A_2$ and $k_2$ is $B_2$, then $f_2 = p_2 k_1 + q_2 k_2 + r_2$.

**Layer 1:** Every node i in this layer is an adaptive node with a node function

$$O_{1,i} = \mu_{A_i}(k_1), \quad i = 1, 2, \qquad O_{1,i} = \mu_{B_{i-2}}(k_2), \quad i = 3, 4,$$

where $A_i$ (or $B_{i-2}$) is a linguistic label (such as "small" or "large") associated with this node. In other words, $O_{1,i}$ is the membership grade of a fuzzy set $A$ ($= A_1$, $A_2$, $B_1$ or $B_2$), and it specifies the degree to which the given input $k_1$ (or $k_2$) satisfies the quantifier $A$. Here the membership function for $A$ is assumed to be triangular:

$$\mu_{A_i}(k) = \max\!\left(\min\!\left(\frac{k - a_i}{b_i - a_i},\; \frac{c_i - k}{c_i - b_i}\right),\, 0\right),$$

where $\{a_i, b_i, c_i\}$ is the parameter set. As the values of these parameters change, the triangular function varies accordingly, thus exhibiting various forms of membership function for the fuzzy set $A$. Parameters in this layer are referred to as premise parameters.
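A minimal sketch of the triangular membership function defined above (illustrative; the function name is ours):

```python
def tri_mf(k, a, b, c):
    """Triangular membership function with parameter set {a, b, c}:
    rises from 0 at k = a to 1 at k = b, then falls back to 0 at k = c."""
    if not a < b < c:
        raise ValueError("parameters must satisfy a < b < c")
    return max(min((k - a) / (b - a), (c - k) / (c - b)), 0.0)
```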

**Layer 2:** Every node in this layer is a fixed node labelled Π, whose output is the product of all the incoming signals:

$$O_{2,i} = w_i = \mu_{A_i}(k_1)\,\mu_{B_i}(k_2), \quad i = 1, 2.$$

**Layer 3:** Every node in this layer is a fixed node labelled N. The i-th node calculates the ratio of the i-th rule's firing strength to the sum of all the rules' firing strengths:

$$O_{3,i} = \bar{w}_i = \frac{w_i}{w_1 + w_2}, \quad i = 1, 2.$$

**Layer 4:** Every node i in this layer is an adaptive node with a node function

$$O_{4,i} = \bar{w}_i f_i = \bar{w}_i (p_i k_1 + q_i k_2 + r_i),$$

where $\{p_i, q_i, r_i\}$ is the parameter set of this node. Parameters in this layer are referred to as consequent parameters.

**Layer 5:** The single node in this layer is a fixed node labelled ∑, which computes the overall output, f, as the summation of all incoming signals:

$$O_{5,1} = f = \sum_i \bar{w}_i f_i = \frac{\sum_i w_i f_i}{\sum_i w_i}.$$

During training, the premise parameters $\{a_i, b_i, c_i\}$ and the consequent parameters $\{p_i, q_i, r_i\}$ are adjusted to make the ANFIS output match the training data. The adopted hybrid learning technique, as implemented in the ANFIS Toolbox, combines the least-squares method with gradient descent. The algorithm consists of a forward pass and a backward pass. In the forward pass, the least-squares method optimises the consequent parameters while the premise parameters are held fixed. Once the optimal consequent parameters are found, the backward pass starts immediately: gradient descent adjusts the premise parameters corresponding to the fuzzy sets in the input domain. The output of the ANFIS is calculated using the consequent parameters found in the forward pass, and the output error is used to adapt the premise parameters by means of a standard back-propagation algorithm. It has been proven that this hybrid algorithm is highly efficient in training the ANFIS.
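The five layers can be illustrated with a minimal forward pass for a two-rule first-order Sugeno model (a sketch of the standard ANFIS computation, not the paper's toolbox code; all names and the example parameters below are assumptions):

```python
def tri_mf(k, a, b, c):
    # Layer 1: triangular membership grade, clipped to [0, 1]
    return max(min((k - a) / (b - a), (c - k) / (c - b)), 0.0)

def anfis_forward(k1, k2, premise, consequent):
    """Layers 1-5 of a two-rule first-order Sugeno ANFIS.
    premise: ((A1, A2), (B1, B2)) triangular parameter triples (a, b, c)
    consequent: ((p1, q1, r1), (p2, q2, r2))"""
    (A1, A2), (B1, B2) = premise
    # Layer 2: firing strengths w_i = mu_Ai(k1) * mu_Bi(k2)
    w1 = tri_mf(k1, *A1) * tri_mf(k2, *B1)
    w2 = tri_mf(k1, *A2) * tri_mf(k2, *B2)
    total = w1 + w2
    if total == 0.0:
        return 0.0  # no rule fires
    # Layer 3: normalised strengths; Layer 4: weighted rule outputs; Layer 5: sum
    f = 0.0
    for w, (p, q, r) in zip((w1, w2), consequent):
        f += (w / total) * (p * k1 + q * k2 + r)
    return f
```

With only rule 1 firing ($\bar{w}_1 = 1$), the output reduces to $f_1 = p_1 k_1 + q_1 k_2 + r_1$, matching the Layer 4/5 formulas above.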

## 4. Design of the Autonomous Ground Vehicle’s ANFIS Controller

#### 4.1. Autonomous Ground Vehicle Platform

The platform provides the vehicle's posture ($x_c$, $y_c$, $\theta$) in the global coordinate system. An S-function, a MATLAB Simulink block, was used to create the platform. As illustrated in Figure 4, the Simulink block receives six inputs and produces five outputs. The platform parameters are explained below:

**Remark 1.** The front, right and left distances (FD, RD and LD) represent the shortest distances between the vehicle and the obstacles in the front, right and left directions, respectively. The sensory information is modelled by assuming three sensors are placed on the vehicle's platform, each covering one of the three directions. The sensor outputs change with the distance between the vehicle's instantaneous position and the surrounding obstacles.

**Remark 2.** The angle difference (AD) represents the difference between the vehicle's heading and the direction to the target point.

**Remark 3.** The obstacle sensing (OS) signal is generated according to the distances (front, right, left) measured by the sensors. The OS parameter indicates '0' if there is no obstacle in the vehicle's path and '1' if the vehicle senses an obstacle near its platform.

**Remark 4.** The clock timer measures the simulation time that the vehicle takes to reach the destination in the platform.
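Remark 3's logic can be sketched as follows (a hypothetical illustration; the 0.4 m detection threshold is an assumed value, not taken from the paper):

```python
SENSE_RANGE = 0.4  # [m], assumed detection threshold (not from the paper)

def obstacle_sensing(fd, rd, ld, threshold=SENSE_RANGE):
    """Return 1 if any of the front/right/left distances falls below the
    threshold (obstacle near the platform), else 0."""
    return int(min(fd, rd, ld) < threshold)
```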

#### 4.2. Target Reaching ANFIS Controller

| Item Number | Angle Difference [degree] | Right Angular Velocity [rad/s] | Left Angular Velocity [rad/s] |
|---|---|---|---|
| 1 | 0 | 80 | 80 |
| 2 | −44.007 | 40.882 | −37.352 |
| 3 | −29.214 | 54.031 | 2.093 |
| 4 | −14.226 | 67.353 | 42.061 |
| 5 | −4.319 | 76.160 | 68.480 |
| 6 | 0.505 | 78.652 | 79.550 |
| 7 | 1.975 | 74.731 | 78.243 |
| 8 | 1.809 | 75.173 | 78.391 |
| 9 | 0.530 | 78.584 | 79.528 |
| 10 | −0.009 | 79.991 | 79.974 |
| 11 | 4.4693 | 68.081 | 76.027 |
| 12 | 13.668 | 43.550 | 67.850 |
| 13 | 25.192 | 12.819 | 57.606 |
| 14 | 26.956 | 8.116 | 56.038 |
| 15 | 27.114 | 7.694 | 55.898 |
| 16 | 18.5223 | 30.607 | 63.535 |
| 17 | 9.047 | 55.873 | 71.957 |
| 18 | 2.774 | 72.602 | 77.534 |
| 19 | −0.295 | 79.737 | 79.211 |
| 20 | 6.759 | 61.975 | 73.991 |
| 21 | 15.705 | 38.117 | 66.039 |
| 22 | 29.555 | 1.186 | 53.728 |
| 23 | 43.460 | −35.895 | 41.368 |
| 25 | 55.161 | −40 | 40 |
| 26 | 62.435 | −40 | 40 |
| 27 | 14.562 | 41.167 | 67.055 |
| 28 | 4.477 | 68.058 | 76.019 |
| 29 | −5.788 | 74.854 | 64.563 |
| 30 | −13.107 | 68.348 | 45.046 |
| 31 | −20.435 | 61.835 | 25.505 |
| 32 | −27.755 | 55.328 | 5.985 |
| 33 | −85.460 | 40 | −40 |
| 34 | −37.315 | 46.830 | −19.507 |
| 35 | −22.376 | 60.110 | 20.330 |
| 36 | −22.843 | 59.694 | 19.082 |
| 37 | −15.563 | 66.165 | 38.496 |
| 38 | −7.999 | 72.888 | 58.66 |
| 39 | −2.812 | 77.500 | 72.500 |
| 40 | 0.2973 | 79.207 | 79.735 |
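As a rough stand-in for the trained controller (illustrative only; this is plain piecewise-linear interpolation, not the paper's ANFIS), the mapping from angle difference to wheel speeds can be approximated from a few training pairs in the table above:

```python
from bisect import bisect_left

# Selected (angle difference -> wheel speed) training pairs, sorted by angle.
AD    = [-85.460, -44.007, -14.226, 0.0, 13.668, 29.555, 62.435]
RIGHT = [ 40.0,    40.882,  67.353, 80.0, 43.550,  1.186, -40.0]
LEFT  = [-40.0,   -37.352,  42.061, 80.0, 67.850, 53.728,  40.0]

def _interp(x, xs, ys):
    # clamp outside the table, linearly interpolate inside it
    if x <= xs[0]:
        return ys[0]
    if x >= xs[-1]:
        return ys[-1]
    i = bisect_left(xs, x)
    t = (x - xs[i - 1]) / (xs[i] - xs[i - 1])
    return ys[i - 1] + t * (ys[i] - ys[i - 1])

def target_reaching(angle_diff):
    """Approximate (right, left) wheel angular velocities for a heading error."""
    return _interp(angle_diff, AD, RIGHT), _interp(angle_diff, AD, LEFT)
```

A zero heading error yields equal wheel speeds (straight-line motion), while large negative errors saturate at opposite wheel speeds (turning in place), mirroring the pattern in the training data.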

**Figure 6.** Relationship between the training error and the number of epochs for the target-reaching ANFIS controller.

**Figure 7.** The output data for the target-reaching ANFIS controller. (**a**) The initial data before the training. (**b**) The matched data after the ANFIS is trained.

**Figure 8.** The surface view for the target-reaching ANFIS controller. (**a**) The first ANFIS controller. (**b**) The second ANFIS controller.

#### 4.3. Obstacle Avoidance ANFIS Controller

| Item Number | Front Distance [m] | Right Distance [m] | Left Distance [m] | Right Angular Velocity [rad/s] | Left Angular Velocity [rad/s] |
|---|---|---|---|---|---|
| 1 | 0.100 | 0.100 | 0.100 | −40 | −40 |
| 2 | 0.800 | 0.800 | 0.800 | 80 | 80 |
| 3 | 0.800 | 0.399 | 0.800 | 79.99 | 79.999 |
| 4 | 0.800 | 0.339 | 0.800 | 67.999 | 43.999 |
| 5 | 0.800 | 0.279 | 0.800 | 55.999 | 7.999 |
| 6 | 0.800 | 0.249 | 0.800 | 49.99 | −10.006 |
| 7 | 0.547 | 0.800 | 0.800 | 80 | 80 |
| 8 | 0.4873 | 0.309 | 0.800 | 61.997 | 25.992 |
| 9 | 0.397 | 0.279 | 0.800 | 54.922 | 8.351 |
| 10 | 0.4269 | 0.219 | 0.800 | 43.997 | −28.008 |
| 11 | 0.515 | 0.219 | 0.800 | 43.967 | −28.098 |
| 12 | 0.800 | 0.249 | 0.800 | 49.853 | −10.440 |
| 13 | 0.800 | 0.308 | 0.800 | 61.644 | 24.932 |
| 14 | 0.800 | 0.800 | 0.392 | 75.580 | 78.526 |
| 15 | 0.800 | 0.800 | 0.333 | 40.013 | 66.671 |
| 16 | 0.786 | 0.800 | 0.303 | 22.230 | 60.743 |
| 17 | 0.800 | 0.800 | 0.304 | 22.418 | 60.806 |
| 18 | 0.800 | 0.800 | 0.304 | 22.803 | 60.934 |
| 19 | 0.800 | 0.800 | 0.335 | 41.084 | 67.028 |
| 20 | 0.800 | 0.800 | 0.395 | 77.330 | 79.110 |
| 21 | 0.800 | 0.800 | 0.455 | 80 | 80 |

**Figure 11.** The output data for the obstacle avoidance ANFIS. (**a**) The initial data before the training. (**b**) The matched data after the ANFIS is trained.

**Figure 12.** The surface view for the obstacle avoidance ANFIS controller. (**a**) The surface view between input-1, input-2 and the output. (**b**) The surface view between input-1, input-3 and the output. (**c**) The surface view between input-2, input-3 and the output.

**Figure 13.** The surface view for the obstacle avoidance ANFIS controller. (**a**) The surface view between input-1, input-2 and the output. (**b**) The surface view between input-1, input-3 and the output. (**c**) The surface view between input-2, input-3 and the output.

## 5. Simulation Results

The start and target points are located at $P_s$ (0, 0) and $P_t$ (15, 15) respectively.

| Obstacle No. | Peripheral Vertices Coordinates |
|---|---|
| 1 | (2, 2), (2, 3.5), (3.5, 3.5), (3.5, 2) |
| 2 | (8, 4), (8, 5.5), (9.5, 5.5), (9.5, 4) |
| 3 | (6, 7), (6, 8.5), (7.5, 8.5), (7.5, 7) |
| 4 | (10, 8), (10, 9.5), (11.5, 9.5), (11.5, 8) |
| 5 | (12, 12), (12, 13.5), (13.5, 13.5), (13.5, 12) |
| 6 | (3.5, 12.5), (3.5, 14), (5, 14), (5, 12.5) |
| 7 | (8, 14), (8, 15.5), (9.5, 15.5), (9.5, 14) |
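The shortest-distance readings used by the sensor model could be computed from the first scenario's axis-aligned rectangular obstacles as follows (an assumed geometry helper; the paper does not give its sensor implementation):

```python
import math

# (xmin, ymin, xmax, ymax) for each rectangle in the first scenario's table.
OBSTACLES = [
    (2, 2, 3.5, 3.5), (8, 4, 9.5, 5.5), (6, 7, 7.5, 8.5),
    (10, 8, 11.5, 9.5), (12, 12, 13.5, 13.5), (3.5, 12.5, 5, 14),
    (8, 14, 9.5, 15.5),
]

def distance_to_rect(px, py, rect):
    """Euclidean distance from point (px, py) to an axis-aligned rectangle
    (zero when the point lies inside it)."""
    xmin, ymin, xmax, ymax = rect
    dx = max(xmin - px, 0.0, px - xmax)
    dy = max(ymin - py, 0.0, py - ymax)
    return math.hypot(dx, dy)

def nearest_obstacle(px, py):
    """Shortest distance from the vehicle position to any obstacle."""
    return min(distance_to_rect(px, py, r) for r in OBSTACLES)
```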

| Obstacle No. | Peripheral Vertices Coordinates | Shape Type |
|---|---|---|
| 1 | (3.5, 3.5), (5, 5), (2.5, 4), (4, 2.5) | Square |
| 2 | (8, 0), (9, 1.5), (10, 0) | Triangle |
| 3 | Centre (9, 4) and Radius = 0.75 m | Circle |
| 4 | (12, 1), (12, 4), (14, 4), (14, 1) | Rectangle |
| 5 | (0, 6), (0, 8), (2, 8), (2, 6) | Square |
| 6 | (3.5, 3.5), (5, 5), (15.5, 4) | Triangle |
| 7 | (14.5, 8), (16, 8), (15.5, 6), (14, 6) | Parallelogram |
| 8 | Centre (10, 10) and Radius = 1 m | Circle |
| 9 | Centre (3, 13) and Radius = 0.75 m | Circle |
| 10 | (12, 12), (12.5, 5), (13.5, 13), (14, 12) | Trapezoid |
| 11 | (8, 14), (8, 15.5), (9.5, 15.5), (9.5, 14) | Square |

**Figure 22.** The error rate between the target point and the vehicle's actual point. (**a**) First scenario. (**b**) Second scenario.

The error rate between the target point ($x_t$ = 15, $y_t$ = 15) and the vehicle's actual position during the navigation was obtained. These signals of the autonomous ground vehicle for the first and second scenarios are shown in Figure 22a,b respectively. The results show the behaviour of the vehicle as it reached its destination: before the vehicle started moving the error rate was at its maximum, and when the vehicle arrived at its target the error rate became zero.
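The plotted error rate can be reproduced as the Euclidean distance to the target (a sketch under the assumption that the error rate is the distance-to-target signal; the names are ours):

```python
import math

X_T, Y_T = 15.0, 15.0  # target point from the simulation scenarios

def error_rate(x_c, y_c, x_t=X_T, y_t=Y_T):
    """Distance to the target: maximal at the start point, zero on arrival."""
    return math.hypot(x_t - x_c, y_t - y_c)
```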

## 6. Conclusion

## Acknowledgements

## Author Contributions

## Conflicts of Interest

## References

- Nonami, K.; Kartidjo, M.; Yoon, K.-J.; Budiyono, A. Autonomous Control Systems and Vehicles: Intelligent Unmanned Systems; Springer: Tokyo, Japan, 2013; p. 306.
- Sedighi, K.H.; Ashenayi, K.; Manikas, T.W.; Wainwright, R.L. Autonomous Local Path Planning for a Mobile Robot Using a Genetic Algorithm. In Proceedings of the IEEE Congress on Evolutionary Computation, Portland, OR, USA, 19–23 June 2004; pp. 1338–1345.
- Wang, M.; Liu, J.N.K. Fuzzy Logic Based Robot Path Planning in Unknown Environment. In Proceedings of the IEEE International Conference on Machine Learning and Cybernetics, Guangzhou, China, 18–21 August 2005; Volume 1, pp. 813–818.
- de Berg, M.; van Kreveld, M.; Overmars, M.; Schwarzkopf, O. Computational Geometry: Algorithms and Applications; Springer-Verlag: Berlin, Germany, 2000.
- Garrido, S.; Moreno, L.; Blanco, D.; Jurewicz, P. Path planning for mobile robot navigation using Voronoi diagram and fast marching. Int. J. Robot. Autom. **2011**, 2, 42–64.
- Al-Taharwa, I.; Sheta, A.; Al-Weshah, M. A mobile robot path planning using genetic algorithm in static environment. J. Comput. Sci. **2008**, 4, 341–344.
- Samadi, M.; Othman, M.F. Global Path Planning for Autonomous Mobile Robot Using Genetic Algorithm. In Proceedings of the IEEE International Conference on Signal-Image Technology & Internet-Based Systems, Kyoto, Japan, 2–5 December 2013; pp. 726–730.
- Cui, S.; Su, X.; Zhao, L.; Bing, Z.; Yang, G. Study on Ultrasonic Obstacle Avoidance of Mobile Robot Based on Fuzzy Controller. In Proceedings of the IEEE International Conference on Computer Application and System Modeling, TaiYuan, China, 22–24 October 2010; pp. 233–237.
- Pradhan, S.K.; Parhi, D.R.; Panda, A.K. Fuzzy logic techniques for navigation of several mobile robots. Appl. Soft Comput. **2009**, 9, 290–304.
- Li, X.; Choi, B. Design of Obstacle Avoidance System for Mobile Robot Using Fuzzy Logic Systems. Int. J. Smart Home **2013**, 7, 321–328.
- Hajar, S.; Mohammad, A.; Jeffril, M.A.; Sariff, N. Mobile Robot Obstacle Avoidance by Using Fuzzy Logic Technique. In Proceedings of the 3rd IEEE International Conference on System Engineering and Technology, Shah Alam, Malaysia, 19–20 August 2013; pp. 331–335.
- Chi, K.-H.; Lee, M.-F.R. Obstacle Avoidance in Mobile Robot Using Neural Network. In Proceedings of the IEEE International Conference on Consumer Electronics, Communications and Networks, XianNing, China, 16–18 April 2011; pp. 5082–5085.
- Deshpande, S.U.; Bhosale, S.S. Adaptive neuro-fuzzy inference system based robotic navigation. In Proceedings of the IEEE International Conference on Computational Intelligence and Computing Research, Enathi, India, 26–28 December 2013; pp. 1–4.
- Joshi, M.M.; Zaveri, M.A. Neuro-Fuzzy Based Autonomous Mobile Robot Navigation System. In Proceedings of the 11th International Conference on Control, Automation, Robotics and Vision, Singapore, 7–10 December 2010; pp. 384–389.
- Singh, M.K.; Parhi, D.R.; Pothal, J.K. ANFIS Approach for Navigation of Mobile Robots. In Proceedings of the IEEE International Conference on Advances in Recent Technologies in Communication and Computing, Kottayam, India, 27–28 October 2009; pp. 727–731.
- Khelchandra, T.; Huang, J.; Debnath, S. Path planning of mobile robot with neuro-genetic-fuzzy technique in static environment. Int. J. Hybrid Intell. Syst. **2014**, 11, 71–80.
- Fierro, R.; Lewis, F.L. Control of a nonholonomic mobile robot using neural networks. IEEE Trans. Neural Netw. **1998**, 9, 589–600.
- Jang, J.-S.R. ANFIS: Adaptive-network-based fuzzy inference system. IEEE Trans. Syst. Man Cybern. **1993**, 23, 665–685.

© 2014 by the authors; licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution license (http://creativecommons.org/licenses/by/4.0/).

## Share and Cite

**MDPI and ACS Style**

Al-Mayyahi, A.; Wang, W.; Birch, P. Adaptive Neuro-Fuzzy Technique for Autonomous Ground Vehicle Navigation. *Robotics* **2014**, *3*, 349-370.
https://doi.org/10.3390/robotics3040349
