Obstacle Avoidance System for Unmanned Ground Vehicles by Using Ultrasonic Sensors

Abstract: Artificial intelligence is the ability of a computer to perform functions and reasoning typical of the human mind. In its purely computational aspect, it comprises the theory and techniques for developing algorithms that allow machines to show intelligent ability and/or perform intelligent activity, at least in specific domains. In particular, there are machine learning algorithms based on the same mechanisms that are thought to underlie the cognitive processes of the human brain. Such a powerful tool has already started to produce a new class of self-driving vehicles. With projections indicating that the world population will grow to 11.2 billion by the year 2100, research on innovative agricultural techniques must continue. To improve the efficiency of precision agriculture, the use of autonomous agricultural machines must become an important issue. For this reason, it was decided to test the "Neural Network Toolbox" available in MATLAB to design an artificial neural network with supervised learning, suitable for classification and pattern recognition, using data collected by an ultrasonic sensor. The idea is to use such a protocol in retrofit kits for agricultural machines already on the market.


1. Introduction
For an Unmanned Ground Vehicle (UGV) working in dynamic environments, path planning remains one of the most problematic issues to solve [1]. The practical demands of UGVs call for new obstacle avoidance algorithms. In Khan et al. (2017), the authors highlighted the importance of an autonomous navigation scheme for UGVs operating under complex operational scenarios that require obstacle detection. During the detection phase, the relationship between encountered obstacles and the robot's path was inferred [2]. Ji et al. (2017) focused their work on path planning together with one of the main related issues, tracking. Both topics are fundamental in evaluating a collision-free path for self-driving vehicles. The authors formulated the tracking controller as a Multiconstrained Model Predictive Control (MMPC) problem in order to follow the planned path, evaluating the proper steering angle to avoid collisions [3]. Wang et al. (2017), instead, underlined the safety issues regarding obstacle avoidance, an important feature for any kind of vehicle. A LiDAR sensor was used to detect the obstacles along the route and to optimize the path automatically by using information about the vehicle position, the location of the obstacle, the operational capabilities of the vehicle and environmental restrictions [4]. Lee et al. (2017) presented a method for identifying objects in a dynamic environment by using a 3D light detection and ranging sensor for high-speed object detection [5]. Al-Mayyahi et al. (2014) used a Fuzzy-based Inference System (FIS) for navigation based on sensor information fusion. Such a system is made of two controllers: the first one uses sensors positioned at the front of the vehicle to detect obstacles, while the second one evaluates the difference between the heading and the target angle [6]. Furthermore, in Al-Mayyahi et al. (2014), the authors used an adaptive neuro-fuzzy inference system for navigation purposes by fusing sensor information. Such a system was made of four controllers: two regulate the angular velocity for reaching the target position, and the other two are used for obstacle avoidance [7][8][9][10]. Rajashekaraiah et al. (2017) proposed the MATLAB/Simulink simulation environment as a powerful tool for implementing the Probabilistic Threat Exposure Map (PTEM) algorithm to improve obstacle avoidance capability for moving and stationary obstacles [11]. Zhang and Jasiobedzki (2017) investigated safety issues for manned vehicles. The authors developed a system capable of evaluating the commands of an operator and, in the case of obstacle detection, automatically correcting unsafe operations [12]. Furthermore, Giesbrecht et al. (2017) focused on driving-assistance algorithms that reduce low-level tasks for the driver in cluttered and difficult areas. Such a system shares the burden between the autonomous algorithms and the driver, managing proximity warnings, trajectory control in narrow passages, wall following, etc. [13]. In Mohammadi and Khaloozadeh (2016), a nonlinear sub-optimal regulator is proposed for trajectory planning and obstacle avoidance. The State-Dependent Riccati Equation (SDRE) is used to design a sub-optimal nonlinear controller. Such an approach allows one to create an efficient and well-organized control design method for a non-linear system [14]. Tee Kit et al.
(2018) used Microsoft Robotics Developer Studio 4 (MRDS) to create an autonomous navigation system. The authors implemented an indoor robot navigation system using multi-sensor fusion, obtaining information from a depth camera, proximity sensors and an IR marker tracking system. The navigation system transformed the data of the three sensors into tendency arrays and fused them to decide on object-avoiding manoeuvres. The algorithm established the appropriate maneuvers according to the short, medium or long distance from the obstacle to be avoided [15]. In Gonzales et al. (2018), the use of low-cost devices for obstacle avoidance in automotive engineering was reported. The authors, in particular, used an Arduino microcontroller to acquire data for vehicle dynamic analysis, demonstrating that such low-cost technology is applicable to the automotive industry, reducing project costs [16][17][18]. Negrete et al. (2018) reported how the automation of small- and large-scale agriculture is of colossal importance, because applying mechatronic technologies to agriculture would help to stimulate productivity in the Mexican agricultural industry [19]. Furthermore, the use of low-cost electronic components does not affect the possibility of obtaining good models by means of identification techniques [20][21][22][23][24]. Identified models and very detailed multi-body models, created by using 3D CAD models [25][26][27][28][29][30][31], are the starting point for designing optimal control laws [32][33][34][35][36][37][38][39][40][41][42]. Most multi-body simulation software considers systems made up of rigid bodies, but it is possible to study the behavior of flexible multi-body systems by using ANCF techniques [43][44][45][46]. In this paper, we decided to use low-cost sensors together with neural network algorithms for object recognition for unmanned vehicles. To test the use of neural networks for object recognition during autonomous navigation, we carried out an experimental investigation using five SRF05 ultrasonic sensors and a small UGV used for identification and control applications [47][48][49]. The results of this study will be used to develop a small tracked machine for agricultural applications in viticulture. This activity has been developed at our Laboratory of Applied Mechanics, where we build small autonomous vehicles and robots for practical civil and agricultural applications [50][51][52][53][54][55][56][57][58] and test control systems in the presence of friction [59][60][61][62][63][64][65][66][67][68]. This work is organized as follows. In Section 2, we report the neural network algorithms that allow us to create, train, visualize and simulate both shallow and deep neural networks. In Section 3, we describe the preliminary analysis conducted on a three-dimensional test-rig for testing such algorithms and the ultrasonic sensor for object recognition. Section 4 shows the training and testing of the neural network designed to make the unmanned vehicle capable of making decisions based on the objects recognized during navigation. The exploration activity of the rover is also reported, with the actual arrangement of the objects recognized by the vehicle during the navigation activity. The final Section 5 presents our conclusions.

2. Neural Network Algorithm
The goals of neural network research are to better understand the human brain and to develop algorithms that can deal with abstract problems. Borrowing from the description of the biological neuron, a node in a neural network is a computational unit that takes a series of inputs through incoming connections, performs calculations in the "cellular" body and sends outputs to other nodes through an outgoing connection.
The neural network architecture reported in Figure 1 is the most commonly used structure. Generally, a network is made of three layers: an input layer, hidden layers and an output layer. In the scheme, the input nodes are reported in green, with the input values indicated as x_1, x_2 and x_3. The hidden layer is reported in blue, with a_1, a_2 and a_3 representing the cellular bodies. Finally, the output layer, which produces the final hypothesis, is in red. Input layer nodes are also called passive nodes because they do not change the input data but only receive and duplicate them for their multiple outputs, while the hidden-layer and output-layer nodes are both defined as active nodes.
In (1), the hypothesis function is reported, which returns the output y starting from the input x, parametrized by the weight matrix Θ. The cost function for an artificial neural network, reported in (2), is nothing more than a generalized form of the logistic regression cost function, in which, rather than having only one output unit, there are three.
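Equations (1) and (2) are not reproduced in this extracted text. Under the standard formulation for a two-layer sigmoid network with K output units (our reconstruction in our own notation, not necessarily the authors' exact typesetting), they take the form:

```latex
% Sigmoid activation
g(z) = \frac{1}{1 + e^{-z}}

% (1) Hypothesis of a two-layer feed-forward network with weight matrices
% \Theta^{(1)} (input -> hidden) and \Theta^{(2)} (hidden -> output)
h_\Theta(x) = g\!\left(\Theta^{(2)}\, g\!\left(\Theta^{(1)} x\right)\right)

% (2) Cost over m training examples and K output units (K = 3 here),
% the multi-output generalization of the logistic regression cost
J(\Theta) = -\frac{1}{m} \sum_{i=1}^{m} \sum_{k=1}^{K}
  \left[ y_k^{(i)} \log\!\left(h_\Theta(x^{(i)})\right)_k
       + \left(1 - y_k^{(i)}\right) \log\!\left(1 - \left(h_\Theta(x^{(i)})\right)_k\right) \right]
```

A regularization term over the squared weights is often added to (2); whether the authors used one is not stated here.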
Machine learning typically works with three large, randomly sampled datasets: a training set, a validation set and a test set. The largest one should be the training set. Such data teach the net how to weigh different features, assigning coefficients in order to minimize the training error. Such coefficients, the network weights, are contained in vectors. The network that we used for the object detection problem is a two-layer feed-forward network, with a sigmoid transfer function in both the hidden layer and the output layer. The number of neurons in the hidden layer is set to 500. The number of neurons in the output layer is set to three, since this is the number of elements of the target vector.
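The forward pass of the architecture just described (5 sensor inputs, 500 sigmoid hidden units, 3 sigmoid outputs) can be sketched as follows. This is a minimal NumPy illustration with random placeholder weights, not the trained MATLAB network; the class ordering in the comment is an assumption for illustration.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
# Placeholder weight matrices; the +1 column holds the bias term.
Theta1 = rng.normal(scale=0.1, size=(500, 5 + 1))   # input -> hidden
Theta2 = rng.normal(scale=0.1, size=(3, 500 + 1))   # hidden -> output

def forward(x):
    """Two-layer feed-forward pass for one 5-element sensor reading."""
    a1 = np.concatenate(([1.0], x))      # add bias unit to the input
    a2 = sigmoid(Theta1 @ a1)            # hidden-layer activations
    a2 = np.concatenate(([1.0], a2))     # add bias unit to the hidden layer
    return sigmoid(Theta2 @ a2)          # one score in (0, 1) per object class

# Example: distances (in metres) reported by the five SRF05 sensors.
scores = forward(np.array([0.5, 1.2, 0.8, 2.0, 1.1]))
predicted_class = int(np.argmax(scores))  # e.g. 0 = cylinder, 1 = cone, 2 = parallelepiped
```

With trained weights, the class with the largest output score is taken as the recognized object.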

3. Preliminary Analysis
To test such a method for object recognition, we decided to conduct a preliminary study on a test-rig using the SRF05 ultrasonic sensors. The test-rig used for this analysis is made of 60 cm-long Bosch aluminum profiles. On the structure, five ultrasonic sensors are installed, as reported in Figure 2. For this application, we decided to use three objects: a cylinder, with a regular shape, reported in Figure 2a, and a cone and a parallelepiped, with irregular shapes, reported in Figure 2b. For the acquisition campaign, we decided to carry out 3300 data acquisitions, moving, for each campaign, the position of the objects with respect to the five sensors. The ultrasonic sensor used in our application is the SRF05 ultrasonic ranger, reported in Figure 3a. The low-cost SRF05 is capable of evaluating distances from 1 cm up to 4 m. It is an evolution of the SRF04, designed to improve flexibility and the range of action and to further reduce costs compared to the previous model. It consists essentially of two sonar transducers installed on an electronic board: one that emits a sound signal and one that receives it. To perform a measurement, a short pulse of 10 µs must be supplied to the trigger input to start the sensor. The emitter then sends a burst of eight ultrasound cycles at 40 kHz and raises the echo line. The SRF05 waits for the echo and, as soon as the signal is detected, the echo line is lowered again. The echo line is therefore an impulse whose width is proportional to the distance of the object, and the measurement can be calculated from the pulse time. In Figure 3b, the beam pattern diagram is reported. By listening for the returning wave-fronts, the sonar detects the echo signal. Such a signal has an attack/decay envelope, meaning that it builds up to a peak value and then fades away. Depending on whether the first, second or even third wave-front is the one detected, the result may differ. Another problem that can affect accuracy is the phasing effect, which occurs when the echo appears to come from another source, as in the case of long surfaces. It is therefore important to make sure to measure only the first wave-fronts. In the presence of several ultrasonic sensors, as in our case, it is also necessary to stagger the moment of activation of each sensor, thus avoiding false readings from waves sent by other sensors. In particular, the manufacturer suggests firing them sequentially, 65 ms apart.
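The pulse-width-to-distance conversion described above can be sketched as follows. The 343 m/s speed of sound and the round-trip halving are standard physics; the constant names are our illustrative choices.

```python
SPEED_OF_SOUND_M_S = 343.0  # at roughly 20 degrees C; varies with temperature
FIRE_INTERVAL_S = 0.065     # manufacturer-suggested spacing between firing each sensor

def echo_to_distance_m(pulse_width_s):
    """Convert the SRF05 echo pulse width to a distance in metres.

    The echo line stays high for the round-trip time of the 40 kHz burst,
    so the one-way distance is half of (pulse time * speed of sound).
    """
    return pulse_width_s * SPEED_OF_SOUND_M_S / 2.0

# An echo pulse of about 5.83 ms corresponds to roughly 1 m.
d = echo_to_distance_m(5.83e-3)
```

With five sensors, each trigger would be issued `FIRE_INTERVAL_S` after the previous one, so that no sensor hears a burst emitted by its neighbor.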
In Figure 4, the neural network scheme for the experimental activities is reported. As shown, the input data are provided by the five sensors, while the outputs are the three chosen objects. Therefore, for the cylinder, we will have the target vector h_Θ(x) ≈ [1, 0, 0]^T; for the cone, the vector h_Θ(x) ≈ [0, 1, 0]^T; and finally, for the parallelepiped, the vector h_Θ(x) ≈ [0, 0, 1]^T. Given the training set {(x^(1), y^(1)), ..., (x^(n), y^(n))}, we used the MATLAB neural network tool to design, train and display the neural network to be loaded later on the on-board controller.
In Table 1, the usage of the data collected for the training, validation and testing of the network is reported. The training samples are presented to the network during training, and the network is adjusted according to its error; the validation data, instead, are used to measure network generalization and to halt training when the generalization stops improving. The testing samples have no effect on training and so provide an independent measure of network performance. In Figure 5, the confusion matrices of the trained neural network for the preliminary activity are reported. The rows of such matrices correspond to the predicted classes, and the columns report the true classes. The diagonal cells show how many observations, and what percentage, are correctly classified by the trained network. By analyzing the training confusion matrix, reported in Figure 5a, we can observe that the cylinder, having a regular geometry, is correctly recognized 919 times, corresponding to 100% of the cases. Instead, in the case of irregular geometries such as the cone and the parallelepiped, the network is not always able to correctly identify the object. In the case of the cone, only 501 cases correctly identify the object as a cone, equal to 59.2%, while in 66.8% of the cases, the parallelepiped is correctly identified. In total, we can state that 77.2% of the time, the network correctly identifies the geometry. In Figure 5b, the test confusion matrix is reported, which, in a predictable way, confirms the previously mentioned results. The relevant error in the identification of non-regular geometries is due, in our opinion, to the interference of the signals of the various sensors.
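A confusion matrix of the kind discussed above is a simple cross-tabulation of predicted versus true labels. The following minimal sketch uses made-up labels purely for illustration, not the paper's data; the row/column convention matches the description of Figure 5.

```python
import numpy as np

CLASSES = ["cylinder", "cone", "parallelepiped"]

def confusion_matrix(y_true, y_pred, n_classes=3):
    """Rows = predicted class, columns = true class (as in Figure 5)."""
    cm = np.zeros((n_classes, n_classes), dtype=int)
    for t, p in zip(y_true, y_pred):
        cm[p, t] += 1
    return cm

# Illustrative labels only (0 = cylinder, 1 = cone, 2 = parallelepiped).
y_true = [0, 0, 1, 1, 2, 2]
y_pred = [0, 0, 1, 2, 2, 2]
cm = confusion_matrix(y_true, y_pred)

# Overall accuracy is the trace (correct classifications) over the total count.
overall_accuracy = np.trace(cm) / cm.sum()
```

Off-diagonal entries, such as `cm[2, 1]` here, count the misclassifications (a true cone predicted as a parallelepiped), which is exactly the error mode observed in the preliminary analysis.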

4. Experimental Activity
For the experimental activity, we used a small unmanned vehicle built in our Laboratory of Applied Mechanics. The autonomous robot is a small three-wheeled system with two fixed-axis wheels driven by two EMG30 DC electric gear-motors with digital incremental encoders. The chassis of the vehicle is made of methyl-methacrylate. The onboard controller for this activity is an Arduino Galileo, a board based on the Intel Quark SoC X1000 application processor. Sensors, actuators and micro-controller are powered by a 12-Volt battery. The basic transducer configuration of the vehicle is made of ultrasonic sensors for detecting the presence of obstacles along the trajectory and a triple-axis gyroscope, a triple-axis magnetometer and a triple-axis accelerometer for inertial navigation. To allow the robot to recognize the objects from any angle, measurements were obtained by randomly shifting the position of the three objects with respect to the five ultrasonic sensors, as shown in Figure 6. The unmanned vehicle is reported in Figure 7. As previously done, it was necessary to carry out an acquisition campaign with the sensors mounted on the rover, in order to train and then test the new neural network. The collected data used for this second application are reported in Table 2. In Figure 7, the acquisition phase carried out on the cone is reported. For this phase, we created a sub-routine in MATLAB capable of firing the ultrasonic sensors through serial communication with the Arduino controller. For each of the 27 sessions, 100 measurements were taken, every time changing the relative position of the object with respect to the sensors. In Figure 8, the confusion matrices of the trained neural network for the unmanned ground vehicle are reported. The rows of such matrices correspond to the predicted classes, and the columns report the true classes, as before. By analyzing the training matrix, reported in Figure 8a, we find a perfect identification of the three geometries, confirming that the errors observed in the preliminary analysis were due to interference between the sensors, as warned by the SRF05 manufacturer. Instead, analyzing the testing matrix shown in Figure 8b, we can observe that only in two cases does the neural network incorrectly identify the parallelepiped in place of the cone, corresponding to 1.2% of the cases. In Figure 9, the list of operations carried out by the vehicle during the patrol of the entire area, in search of objects to be recognized, is reported. Finally, Figure 10 shows the map created by the rover during its exploration of the perimeter. During navigation, the rover constantly uses the front and side ultrasonic sensors to detect the presence of objects and walls. The figure also shows the actual trajectory followed by the unmanned vehicle during the patrol.
Furthermore, the real positions of the cylinder, the cone and the parallelepiped are reported together with the enclosed area in which the robot can navigate. The robot is initially placed at position (0, 0) inside the fence. At this stage, the autonomous vehicle knows neither the position of the objects distributed within the work area, nor the geometry of the work area, nor its own location within the area. However, the robot knows its footprint, information needed to perform maneuvers in the proximity of obstacles.
For navigation, the rover will try to advance until it finds an obstacle. In such a case, it will tend to overcome the obstacle by turning to its right, if its path there is free, or otherwise to its left. Moving to the right or to the left depends on the information collected by the side ultrasonic sensors. In the case that the robot cannot advance and there are obstructions on both sides, it will reverse its motion, as reported in Figure 9.
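The obstacle-avoidance policy just described can be sketched as a simple decision function over the front and side ultrasonic distances. The threshold value and all names here are our illustrative choices, not taken from the paper's controller code.

```python
CLEAR_DISTANCE_M = 0.30  # illustrative threshold for considering a direction free

def choose_maneuver(front_m, right_m, left_m):
    """Pick the next maneuver from the front/side ultrasonic distances (metres).

    Mirrors the rule in the text: advance while the front is clear,
    otherwise turn right if the right side is free, else turn left,
    and reverse when obstructed on all sides.
    """
    if front_m > CLEAR_DISTANCE_M:
        return "advance"
    if right_m > CLEAR_DISTANCE_M:
        return "turn_right"
    if left_m > CLEAR_DISTANCE_M:
        return "turn_left"
    return "reverse"
```

In the vehicle, this decision would be re-evaluated at every sensor-firing cycle, with the side sensors also feeding the wall-following behavior along the fence.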

5. Conclusions
In this paper, the use of neural networks and ultrasonic sensors was tested for object identification. The idea is to apply such a methodology to small autonomous agricultural machines by using retrofitting techniques. To do so, a small robot was used to test the potential of the method. The UGV is a small three-wheeled system with two fixed-axis wheels driven by DC electric gear-motors with digital incremental encoders, used for testing identification techniques and designing optimal control laws by the Applied Mechanics research team at the University of Salerno. Data collected by the five SRF05 sensors were used to train the neural network so that it could recognize a cylinder, a cone and a parallelepiped. Once the network had been trained, the algorithm was tested on the autonomously navigating vehicle in an enclosure in which the three objects to be recognized were present. During navigation, the vehicle correctly reported the type of object encountered and its position inside the enclosure. The results of this work will also be used for testing localization techniques based on the fusion of multiple sensors [69] for a tracked machine for agricultural applications.

Figure 1 .
Figure 1. Neural network scheme with the input layer, hidden layer and output layer.
(a) Measurement of a regular geometry; (b) measurement of an irregular geometry.

Figure 2 .
Figure 2. Experimental test-rig used for the preliminary analysis, made of Bosch aluminum profiles on which the SRF05 sensors were installed.

Figure 3 .
Figure 3. SRF05 ultrasonic sensor and conical beam pattern of the sensor.

Figure 4 .
Figure 4. Structure of the neural network used for the experimental application.

Figure 5 .
Figure 5. Confusion matrices for the training and test activity for the preliminary analysis.

Figure 6 .
Figure 6. Scheme of the network training activity by moving the position of the object relative to the rover.

Figure 7 .
Figure 7. Data acquisition process for the cone geometry.

Figure 8 .
Figure 8. Confusion matrices for the training and test activity for the experimental application.

Figure 9 .
Figure 9. Program flowchart showing the overview of the robot navigation operations.

Figure 10 .
Figure 10. Map of the enclosure with the arrangement of objects within it.

Table 1 .
Dataset distribution for the preliminary analysis.

Table 2 .
Acquired dataset distribution for neural network training.