Sensor Fusion Based Model for Collision Free Mobile Robot Navigation

Autonomous mobile robots have become a popular and interesting research topic in the last decade. They are equipped with various types of sensors, such as GPS, cameras, and infrared and ultrasonic sensors, to observe the surrounding environment. However, these sensors can fail or produce inaccurate readings. Integrating sensor fusion helps to overcome this problem and enhances overall performance. This paper presents collision-free mobile robot navigation based on a fuzzy logic fusion model. Eight distance sensors and a range finder camera are used for the collision avoidance approach, while three ground sensors are used for the line (path) following approach. The fuzzy system is composed of nine inputs (the eight distance sensors and the camera), two outputs (the left and right velocities of the mobile robot's wheels), and 24 fuzzy rules for the robot's movement. The Webots Pro simulator is used for modeling the environment and the robot. The proposed methodology, which combines fuzzy-logic-fusion-based collision avoidance with line following, has been implemented and tested through simulation and real-time experiments. Various scenarios are presented with static and dynamic obstacles, using one robot and two robots, while avoiding obstacles of different shapes and sizes.


Introduction
The area of autonomous mobile robots has gained increasing interest in the last decade. Autonomous mobile robots are robots that can navigate freely without human involvement. Due to the increased demand for this type of robot, various techniques and algorithms have been developed. Most of them focus on navigating the robot along collision-free trajectories by controlling the robot's speed and direction. The robot can be equipped with different kinds of sensors in order to observe the surrounding environment and steer the robot accordingly. However, many factors affect the reliability and efficiency of these sensors. The integration of multi-sensor fusion systems can overcome this problem by combining inputs coming from different types of sensors, yielding more reliable and complete outputs. This plays a key role in building a more efficient autonomous mobile robotic system.
There are many sensor fusion techniques that have been proven effective and beneficial, especially for detecting and avoiding obstacles and for path planning of the mobile robot. Fuzzy logic, neural networks, neuro-fuzzy systems, and genetic algorithms are examples of well-known fusion techniques that help move the robot from the starting point to the target without colliding with any obstacles along its path.
Obstacles detected can be moving or static objects in known or unknown environments. In addition, the path planning behavior can be categorized as global path planning, where the environment is fully known in advance, or local path planning, where the robot plans reactively from its sensor readings in an unknown or changing environment.

Related Work
Many obstacle detection, obstacle avoidance, and path planning techniques have been proposed in the field of autonomous robotic systems. This section presents some of these techniques that incorporate sensor fusion to obtain better results.
Chen and Richardson proposed a collision avoidance mechanism for a mobile robot navigating in unknown environments based on a dynamic recurrent neuro-fuzzy system (DRNFS). In this technique, a short-term memory is used that is capable of memorizing past and current information for more reliable behavior. The ordered derivative algorithm is implemented for updating the DRNFS parameters [6]. Another collision avoidance approach for mobile robots was proposed by [7], based on multi-sensor fusion technology. With the use of ultrasonic sensors and infrared distance sensors, a 180° rolling window was established in front of the robot. The robot's design is organized into four main layers: the energy layer, the driver layer, the sensor layer, and the master layer [7].
In addition, a collision avoidance algorithm for a network-based autonomous robot was discussed in [8]. The algorithm is based on the Vector Field Histogram (VFH) algorithm with the consideration of the network's delay. The system consists of sensors, actuators, and the VFH controller. Kalman filter fusion is applied for the robot's localization in order to compensate for the delay between the odometry and environmental sensor readings [8].
The Kalman filtering fusion technique for multiple sensors was applied in [9], where the Kalman filter is used for predicting the position and the distance to an obstacle or wall using three infrared range finder sensors. The authors claimed that this technique is mostly helpful in robot localization, automatic parking, and collision avoidance [9]. Furthermore, in [10], path control for mobile robots based on sensor fusion is presented, where a deliberative/reactive hybrid architecture is used for handling the mobile robot's motion and path control; the sensor fusion technology helps the robot reach the target point successfully [10]. Another multi-sensor fusion system was designed in [11], mainly for navigating coal mine rescue robots. It used various types of sensors, such as infrared and ultrasonic sensors, with digital signal processing. The multi-sensor data fusion system helped in decreasing errors caused by the blind zone of the ultrasonic sensors [11].
Moreover, a transferable belief model (TBM) was applied in a mobile robot for collision-free path planning navigation in a dynamic environment containing both static and moving objects. The TBM was used for building the fusion system, and a new path planning mechanism was proposed based on it. The main benefit of such a mechanism is recognizing whether an obstacle is dynamic or static without the need for any previous information [12].
In [13], the authors developed a switching path-planning control scheme that helped in avoiding obstacles for a mobile robot while reaching its target. In this scheme, a motion tracking mode, obstacle avoidance mode, and self-rotation mode were designed without the need of any previous environmental information [13].
Another multi-sensor particle filter fusion based algorithm for mobile robot localization was proposed by [14]. The algorithm is able to fuse data coming from various types of sensors, and the authors also proposed an easy and fast deployment mechanism for the system. A laser range finder, a WiFi system, several external cameras, and a magnetic compass, along with a probabilistic mapping strategy, were used to validate the proposed work [14].
In [15], a novel multi-sensor data fusion methodology for autonomous mobile robots in unknown environments was designed. Flood fill and fuzzy algorithms were used for the robot's path planning, whereas Principal Component Analysis (PCA) was used for object detection. Data from an infrared sensor, an ultrasonic sensor, a camera, and an accelerometer were fused using the Kalman filter fusion technique. The proposed technique successfully reduced time and energy consumption.

Proposed Methodology
This section presents the proposed methodology for mobile robot collision-free navigation with the integration of the fuzzy logic fusion technique. The mobile robot is equipped with distance sensors, ground sensors, a camera, and GPS. The distance sensors, which are infrared sensors, and the camera are used for the collision avoidance behavior, while the ground sensors are used for the path following behavior. GPS is used to obtain the robot's position. The goals of the proposed technique are as follows:
- the capability of the mobile robot to avoid obstacles along its path;
- the integration of sensor fusion using fuzzy logic rules based on sensor inputs and defined membership functions;
- the capability of the mobile robot to follow a predetermined path;
- the performance of the mobile robot when programmed with the fuzzy logic sets and rules.

Robot and Environment Modeling
The Webots Pro simulator is used to model the robot and the environment. Webots Pro provides a Graphical User Interface (GUI) for creating environments suitable for mobile robot simulation, including obstacles of different shapes and sizes. The mobile robot used in the Webots Pro simulator is the E-puck robot, which is equipped with a large choice of sensors and actuators such as a camera, infrared sensors, GPS, and LEDs [16].
The environment in this paper is modeled with a white floor that has a black line for the robot to follow. It also contains solid obstacles that the robot should avoid. The environment in Webots Pro is called a "world"; a world file can be built using a new project directory. Each project is composed of four main windows: the Scene tree, which represents a hierarchical view of the world; the 3D window, which demonstrates the 3D simulation; the Text editor, which contains the source code (controller); and the Console, which shows outputs and compilation messages [16].
The differential-wheeled robot (E-puck) used in this paper is equipped with eight infrared distance sensors, a camera, and three ground sensors, which are also infrared sensors. The eight distance sensors are used to detect obstacles. Each distance sensor has a range of 0 to 2000, where 0 is the initial value, meaning no obstacle is detected; as the mobile robot approaches an obstacle, the value increases accordingly. When an obstacle is detected, the distance sensor value is 1000 or more, depending on the distance between the sensor and the obstacle. The camera sensor used in this work is a range finder camera, which provides the distance in meters between the camera and the obstacle from the camera's OpenGL context. Finally, the three ground sensors are located at the front of the e-puck, all pointing directly at the ground; they are used to follow the black line drawn on the floor. Figure 1 shows a top view of the E-puck robot with its different types of sensors.
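As an illustration of the detection rule above, consider the following minimal sketch. It is not the paper's controller code: the function name `detect_obstacles` and the dictionary-of-readings interface are illustrative assumptions; only the 0-2000 range and the 1000 threshold come from the text.

```python
# Thresholding sketch: readings grow from 0 toward 2000 as an obstacle
# approaches, and a value of 1000 or more counts as a detection.
# Function and argument names are illustrative assumptions.
OBSTACLE_THRESHOLD = 1000.0

def detect_obstacles(readings, threshold=OBSTACLE_THRESHOLD):
    """Return the names of distance sensors whose reading crosses the threshold."""
    return [name for name, value in readings.items() if value >= threshold]
```

For example, `detect_obstacles({"SF1": 1159.18, "SF2": 420.0})` returns `["SF1"]`, flagging only the front sensor that crossed the threshold.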

Design of the Fusion Model
A multi-sensor fusion model is designed for better obstacle detection and avoidance by fusing the eight distance sensors and the range finder camera. The fusion model is based on the fuzzy logic fusion technique, implemented using MATLAB. A fuzzy logic system (FLS) is composed of four main parts: the fuzzifier, the rules, the inference engine, and the defuzzifier. The block diagram of the FLS is shown in Figure 2.
The fuzzification stage is the process of converting a set of inputs to fuzzy sets based on defined fuzzy variables and membership functions. According to a set of rules, the inference is made. Finally, at the defuzzification stage, membership functions are used to map every fuzzy output to a crisp output.

Fuzzy Sets of the Input and Output
There are nine inputs to the fuzzy logic system and two outputs. The inputs are the values of the eight distance sensors, denoted as SF1, SF2, SR1, SR2, SL1, SL2, SB1, and SB2. These sensors measure the amount of reflected infrared light in a range of 0 to 2000, where the threshold for a detected obstacle is set to 1000. The ninth input is the range finder camera value, which measures the distance to an obstacle. Two outputs are generated: the left velocity (LV) and the right velocity (RV). Figure 3 shows the Mamdani Fuzzy Inference System (FIS) with nine inputs and two outputs.

Membership Functions of the Input and Output
The input variables of the distance sensor readings are divided into two membership functions: Obstacle Not Found (OBSNF) and Obstacle Found (OBSF). Both are trapezoidal-shaped membership functions.
The range for the distance sensor values is [0, 2000], and the threshold is set to 1000, where a value of 1000 or more means an obstacle is found and the robot should avoid it. The input variables of the range finder camera are divided into two trapezoidal-shaped membership functions, "Near" and "Far". The range finder camera measures the distance from the camera to an obstacle in meters. The overall range of the camera input is [0, 1], where 0.1 m is considered a "Near" distance at which the collision avoidance behavior should be applied. The input membership functions for the distance sensors and the camera are displayed in Figures 4 and 5, respectively.
Let us assume that x is the sensor value and R is the range of all sensor values, where x ∈ R. The trapezoidal-shaped membership function, based on four scalar parameters i, j, k, and l, can be expressed as in Equation (1).
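For concreteness, the standard trapezoidal membership function referenced in Equation (1) can be sketched as follows. This is the generic textbook form (rising edge on [i, j], plateau on [j, k], falling edge on [k, l]); the specific parameter values used by the paper are not reproduced here.

```python
def trapmf(x, i, j, k, l):
    """Trapezoidal membership: rises on [i, j], flat (1.0) on [j, k], falls on [k, l].

    Assumes i < j <= k < l; parameter names follow Equation (1) in the text.
    """
    if x <= i or x >= l:
        return 0.0          # outside the support of the fuzzy set
    if j <= x <= k:
        return 1.0          # on the plateau: full membership
    if x < j:
        return (x - i) / (j - i)   # rising edge
    return (l - x) / (l - k)       # falling edge
```

For instance, with illustrative parameters (800, 1000, 2000, 2200) for an "Obstacle Found" set, a reading of 1500 gives full membership 1.0, while 900 on the rising edge gives 0.5.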
The output variables of the left and right velocities of the mobile robot (LV and RV) are divided into two membership functions: negative velocity "NEG_V" and positive velocity "POS_V". The effect of these two memberships on the differential wheels of the robot is summarized as follows:
- if both LV and RV are set to POS_V, then the robot moves forward;
- if LV is set to POS_V and RV is set to NEG_V, then the robot turns right;
- if LV is set to NEG_V and RV is set to POS_V, then the robot turns left.
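The differential-drive interpretation above can be sketched as a small helper; the function name and string labels are illustrative, and only the three sign combinations listed in the text are taken from the paper.

```python
def motion_from_velocities(lv, rv):
    """Interpret the signs of the left/right wheel velocities as a motion.

    Mirrors the three cases listed in the text; any other combination
    (e.g. both negative, or zero) is grouped as "other".
    """
    if lv > 0 and rv > 0:
        return "forward"      # both wheels positive: move forward
    if lv > 0 and rv < 0:
        return "turn_right"   # left forward, right backward: turn right
    if lv < 0 and rv > 0:
        return "turn_left"    # left backward, right forward: turn left
    return "other"
```

With the wheel velocities reported later for robot 1 (−168, 454), this yields "turn_left", consistent with the avoidance maneuver described in the second scenario.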
The "NEG_V" membership is a Z-shaped function, represented in Equation (2), where u and q are the parameters of the leftmost and rightmost points of the slope.
In addition, the "POS_V" membership is an S-shaped function, where y1 and y2 are the parameters of the leftmost and rightmost points of the slope. The S-shaped membership function can be expressed as in Equation (3). Figure 6 shows the output membership functions.
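The standard Z-shaped and S-shaped membership functions (as implemented, for example, by MATLAB's `zmf` and `smf`) can be sketched as below. It is an assumption that these match the paper's Equations (2) and (3) exactly; the parameter names follow the text.

```python
def zmf(x, u, q):
    """Z-shaped membership ("NEG_V"): 1 to the left of u, 0 to the right of q,
    with a smooth spline transition between them."""
    if x <= u:
        return 1.0
    if x >= q:
        return 0.0
    mid = (u + q) / 2.0
    if x <= mid:
        return 1.0 - 2.0 * ((x - u) / (q - u)) ** 2
    return 2.0 * ((x - q) / (q - u)) ** 2

def smf(x, y1, y2):
    """S-shaped membership ("POS_V"): 0 to the left of y1, 1 to the right of y2.
    The S-shape is the mirror image (complement) of the Z-shape."""
    return 1.0 - zmf(x, y1, y2)
```

The complement relation `smf(x, a, b) == 1 - zmf(x, a, b)` makes the two output memberships sum to one across the velocity range, which matches the mirrored curves typically shown for such outputs.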

Designing Fuzzy Rules
Based on the membership functions of the fuzzy sets and the inputs and outputs, rules are defined. There are 24 rules for collision avoidance of the mobile robot. AND or OR operations can be used for connecting membership values, where the fuzzy AND is the minimum of two or more membership values and the fuzzy OR is their maximum. Let µγ and µδ be two membership values; then the fuzzy AND and fuzzy OR are described as in Equations (4) and (5), respectively. In addition, Table 1 lists all the rules with the fuzzy AND operator that express the movement behavior of the mobile robot.
The last step of designing the fuzzy logic fusion system is the defuzzification process, where crisp outputs are generated based on the fuzzy rules, membership values, and the set of inputs. The method used for defuzzification is the centroid method.
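A discrete version of the centroid method can be sketched as follows: the aggregated output membership is sampled at points `xs` with degrees `mus`, and the centroid is their membership-weighted average. This numeric approximation is an assumption for illustration, not the paper's exact implementation.

```python
def centroid_defuzzify(xs, mus):
    """Centroid defuzzification over sampled points.

    xs:  sample points across the output range (e.g. wheel velocities).
    mus: aggregated membership degree at each sample point.
    Returns the crisp output sum(x*mu)/sum(mu), or 0.0 if no rule fired.
    """
    total = sum(mus)
    if total == 0.0:
        return 0.0  # degenerate case: empty aggregated set
    return sum(x * m for x, m in zip(xs, mus)) / total
```

For instance, an aggregated set with full membership only over the upper half of [0, 2] yields a centroid of 1.5, pulling the crisp velocity toward the active region.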
Moreover, the fuzzy logic fusion model was designed to prevent the mobile robot from colliding with any obstacles while following the line. The fusion model is composed of nine inputs, two outputs, and 24 rules. Figure 7 demonstrates the proposed methodology. As shown in Figure 7, the initialization of the robot and its sensors is the first step. After that, the distance sensor and camera values are fed into the fuzzy logic fusion system for obstacle detection and distance measurement. If an obstacle is found, the mobile robot adjusts its speed to turn left or right based on the position of the obstacle; the decision is made based on the defined fuzzy rules. After avoiding the obstacle, the mobile robot continues following the line by obtaining the ground sensor values and adjusting its speed accordingly. On the other hand, if no obstacle is detected, the mobile robot follows the line while checking for obstacles at each time step.
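The behavior-selection step of Figure 7 can be sketched as a small decision function. The 1000 threshold and the 0.1 m "Near" distance come from the text; the function name and the list/float interface are illustrative assumptions.

```python
OBSTACLE_THRESHOLD = 1000.0  # distance sensor threshold from the text
NEAR_DISTANCE_M = 0.1        # camera "Near" distance from the text

def choose_behavior(distance_readings, camera_distance_m):
    """Select between the two behaviors at each time step (Figure 7 logic).

    distance_readings: the eight IR sensor values (0..2000).
    camera_distance_m: range finder distance in meters, or None when no
                       obstacle is within the camera's range.
    """
    obstacle_seen = (
        max(distance_readings) >= OBSTACLE_THRESHOLD
        or (camera_distance_m is not None and camera_distance_m <= NEAR_DISTANCE_M)
    )
    return "avoid_obstacle" if obstacle_seen else "follow_line"
```

In the full loop, "avoid_obstacle" would hand the readings to the fuzzy fusion system to compute LV and RV, while "follow_line" would steer from the ground sensors.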

Simulation and Real Time Implementation for Mobile Robot Navigation
The environment and the robot are modeled using the Webots Pro simulator for mobile robot collision-free navigation. The e-puck used has eight distance sensors (infrared sensors), a camera, three ground sensors, and GPS. The e-puck first senses the environment for possible collisions using the distance sensor and range finder camera readings. If no obstacle is detected, the e-puck follows a black line drawn on a white surface. Snapshots of the simulation and the real-time experiment for one robot detecting and avoiding an obstacle while following the line are depicted in Figures 8 and 9, respectively. Both figures show the environment with one mobile robot moving forward until it detects an obstacle. After the detection of the obstacle, all readings are fed into the proposed fuzzy logic fusion model and, based on the defined fuzzy rules, the e-puck turns accordingly by adjusting the left and right wheel velocities. After that, the e-puck continues moving forward and follows the line.
In addition, a more complex environment with various obstacles of different shapes and sizes has been modeled and tested through simulation and real-time experiments. Figures 10 and 11 present snapshots of the simulation and real-time experiments for two robots following a black line and avoiding different types of obstacles. As shown in these figures, both robots face and detect each other successfully. Each robot avoids the other by adjusting its speed and then returns to following the line; the robots act as dynamic obstacles to each other.

Data Collection and Analysis
This section presents the sensor values obtained from the simulation at different time steps. Three different scenarios are presented. The first is a simple environment containing one obstacle and one robot, whereas the second is a more complex environment with more static and dynamic obstacles and two robots. The third scenario has more cluttered obstacles, which makes the environment more challenging for mobile robot navigation. Tables 2 and 3 show the sensor values before and after applying the fuzzy logic fusion model in the simple environment at three different simulation times. At T1, the robot is far away from the obstacle; at T2, the robot is very close to the obstacle; and at T3, the robot has passed the obstacle successfully. When a distance sensor value is below the threshold (1000), no obstacle is detected; when it rises above the threshold, there is an obstacle and the robot needs to adjust its movement. As shown in Table 2, the front distance sensor SF1 has a value of 1159.18, which is higher than the threshold, at T2 before applying the fuzzy logic fusion method.

First Scenario
In addition, both front distance sensors SF1 and SF2 have values higher than the threshold, 1127.19 and 1077.76, respectively, at T2, where the fuzzy logic technique is applied. Again, this means that an obstacle is detected. Table 3 shows the distance to obstacles measured in meters by the range finder camera at the simulation times T1, T2, and T3. As represented in Table 3, as the robot approaches the obstacle, the distance between the robot and the obstacle decreases. At T2, when the obstacle is first detected by the camera, the distance between the robot and the obstacle is 0.097 m before applying the fuzzy logic method, whereas it is only 0.040 m after applying the fusion model. The camera can measure the distance up to one meter ahead. At T3, the camera could not measure the distance to an obstacle because no obstacle was within its range.
In addition, Table 4 shows the three ground sensor values used for the line following approach at different simulation times. It also shows the delta values, which are the difference between the left and right ground sensors. The delta values are used to adjust the robot's left and right speeds to follow the line.
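A proportional steering sketch of the delta-based adjustment is shown below. The base speed, gain, and steering direction are assumptions for illustration; only the use of the left-right difference (delta) comes from the text.

```python
def line_follow_velocities(gs_left, gs_right, base_speed=400.0, gain=0.5):
    """Adjust wheel speeds from the ground-sensor delta (Table 4's quantity).

    delta > 0 (line under the left sensor) steers one way, delta < 0 the
    other; base_speed and gain are hypothetical tuning values, and the
    sign convention is an assumption about the robot's wiring.
    """
    delta = gs_left - gs_right
    left_velocity = base_speed - gain * delta
    right_velocity = base_speed + gain * delta
    return left_velocity, right_velocity
```

With equal readings the robot drives straight at the base speed; a large imbalance slows one wheel and speeds up the other, turning the robot back over the line.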
Finally, Table 5 shows the robot's position, orientation, and velocities. The position of the robot was obtained through the GPS sensor; the position and orientation are given according to the Webots global coordinate system. As shown in Table 5, when the robot detects an obstacle at 22 s of simulation time, its left and right velocities are adjusted. The negative left velocity and positive right velocity mean that the robot is turning left to avoid the obstacle. At 38 s, the robot has avoided the obstacle and turns right to continue following the line.

Second Scenario
This section demonstrates the proposed model in a more complex environment composed of a number of obstacles of different sizes and shapes. Two robots run in this scenario, each avoiding the obstacles as well as the other robot; each robot considers the other a dynamic obstacle. As presented in Figure 10, two robots (1 and 2) moving in opposite directions follow the line and overtake obstacles. Figures 12-14 show all distance sensor readings, distances to obstacles in meters, and left and right velocities through the entire loop for the two robots, respectively. In Figure 13, the distances to obstacles are obtained by the range finder camera; sometimes the obstacle is either at a distance greater than one meter or outside the camera's field of view.
In these cases, the camera cannot measure the distances between the robots and the obstacles, which explains the gaps in Figure 13a,b. At the beginning of the simulation, the distance between robot 1 and the obstacle is 0.15 m, as shown in Figure 13a. At eight seconds, robot 1 detects an obstacle: both front distance sensors (SF1 and SF2) have values greater than the threshold, as presented in Figure 12a. The distance between the robot and the obstacle at eight seconds has decreased to 0.047 m, as shown in Figure 13a. As a result, the robot adjusts its speed accordingly to avoid colliding with the obstacle. Figure 14a shows the left and right velocities for robot 1. At eight seconds, the left wheel velocity is negative (−168) and the right wheel velocity is positive (454), which indicates that robot 1 is turning left to avoid a collision. At 16 s, robot 1 is turning right around the obstacle, where the left wheel velocity is 395 and the right wheel velocity is −199. After that, the robot continues following the line until another obstacle is detected. The ground sensor values and the distance traveled by the left and right wheels for both robots during the entire loop are demonstrated in Figures 15 and 16, respectively. Furthermore, at 115 s, robots 1 and 2 face each other after successfully avoiding a couple of obstacles. At that time, the distance between the robots is approximately 0.096 m; at 118 s, it reaches 0.044 m. Both robots turn in opposite directions to avoid a collision. To illustrate, at this time, the speed of robot 2 has been adjusted as shown in Figure 14b. Both robots get around each other and return to following the line. The positions and orientations of both robots according to the Webots global coordinate system are presented in Table 6.

Third Scenario
In this scenario, there are more cluttered obstacles in the environment, making it more challenging for the robot to avoid them. The robot needs to adjust its speed and orientation according to the obstacle positions. Figures 17 and 18 represent the simulation of the mobile robot with many cluttered obstacles around it. Figure 19 demonstrates the distance sensor values, and Figure 20 shows the distances to obstacles obtained by the range finder camera at various simulation times. At the beginning of the simulation, the robot starts sensing the environment for possible obstacles. It also follows the predefined line using the ground sensors. As shown in Figure 18b, the robot turned left due to the presence of an obstacle. At 32 s of simulation time, SF1 (a front distance sensor) reached a value of 1261, which indicates that an obstacle is detected (Figure 19). In addition, the range finder camera measured the distance to that obstacle as 0.043 m, as in Figure 20. At 37 s, the robot detects another obstacle on its right side and moves forward, as in Figure 18c. The robot then finds itself between two obstacles with another obstacle in front of it; as shown in Figure 18d, it tries to travel between the two obstacles. At 50 s, the distance between the robot and the front obstacle is 0.21 m, as in Figure 20. After that, the robot turns right to catch the line again, as in Figure 18f. Figure 21 depicts the ground sensor values at different times, and Figure 22 shows the left and right wheel velocities of the robot. At 97 s, the robot detects another obstacle on its path and turns left, as indicated in Figures 18g and 22. At 112 s, the robot detects an obstacle on its right side, very close to the first one, and another obstacle at the front, as in Figure 18h. Again, the robot moves between the two obstacles to recover its path, as in Figure 18i.
Table 7 summarizes the robot's position and rotation angle in degrees at various simulation times.

Results and Discussion
The e-puck successfully detected different types of obstacles (static and dynamic) with various shapes and sizes and avoided them while following the line. Different scenarios were presented with simple, complex, and challenging environments. The distance sensors and camera are used for obstacle detection and distance measurement. A distance sensor can only detect an obstacle when the robot is very close to it, while the camera can detect an obstacle up to one meter ahead of the robot. Before applying the proposed fusion model, the distance sensors detected the obstacle at a distance of 0.076 m from the robot, while the camera detected the obstacle at a distance of 0.097 m. On the other hand, after implementing the proposed fuzzy logic fusion methodology for the collision avoidance behavior, the robot detected the obstacle at a distance of 0.040 m. Detecting obstacles at a short range is efficient and beneficial, especially in a dynamic environment where the robot must quickly react to obstacles that have just entered its path. Figure 23 demonstrates the distance-to-obstacle measurements using the distance sensor, the camera, and both with the integration of the fusion model. As shown in Figure 23, fusing both sensors outperforms using each sensor separately. Furthermore, the distance traveled by the robot's left and right differential wheels is observed. As shown in Figure 24, the proposed fusion model helped in reducing the distance traveled by the robot as opposed to each sensor separately, especially at the beginning of the simulation, which saves energy, time, and computational load. In addition, an example of the proposed model using the MATLAB rule viewer is presented in Figure 25. In this figure, the values of SF1, SF2, SR1, and SR2, which are the front and right distance sensors, are higher than the set threshold.
As a result, there are obstacles detected at the front and right sides of the robot's position. As shown in Figure 25, LV has a negative value and RV has a positive value, which means that the robot turns left due to the presence of obstacles at the front and right sides.
In addition, our approach aims at guiding the robot along a predefined path (a black line on a white surface) while avoiding multiple obstacles on its way, including static, dynamic, and cluttered obstacles. Applying the fuzzy logic fusion successfully reduced the distance traveled by the robot's wheels and minimized the distance between the robot and the detected obstacle as compared to a non-fuzzy-logic approach, which is beneficial in a dynamic environment. Unlike optimal planners such as Dijkstra's algorithm, our approach does not focus on the shortest route and time to a specific target, the terrain characteristics, or the energy of control actions.

Conclusions
In this article, a multi-sensor fusion based model was proposed for collision avoidance and path following of a mobile robot. Eight distance sensors and a range finder camera were used for the collision avoidance behavior, while three ground sensors were used for the line following approach. In addition, a GPS was used to obtain the robot's position. The designed fusion model is based on a fuzzy logic inference system composed of nine inputs, two outputs, and 24 fuzzy rules. Multiple membership functions for the inputs and outputs were developed. The proposed methodology was successfully tested in the Webots Pro simulator and through real-time experiments. Different scenarios were presented with simple, complex, and challenging environments. The robot detected static and dynamic obstacles of different shapes and sizes at a short range, which is very efficient in dynamic environments. The distance traveled by the robot was reduced using the fusion model, which reduces energy consumption, computation, and time.