Field Navigation Using Fuzzy Elevation Maps Built with Local 3D Laser Scans

The paper describes a case study of the mobile robot Andabata navigating natural terrain at low speeds with fuzzy elevation maps (FEMs). To this end, leveled three-dimensional (3D) point clouds of the surroundings are obtained by synchronizing ranges from a 360° field-of-view 3D laser scanner with odometric and inertial measurements of the vehicle. Filtered point clouds are then employed to produce FEMs and their corresponding fuzzy reliability masks (FRMs). Finally, each local FEM and its FRM are processed to choose the best motion direction to reach distant goal points through traversable areas. All these tasks have been implemented as ROS (Robot Operating System) nodes distributed among the cores of the onboard processor. Experimental results of Andabata during non-stop navigation in an urban park are presented.


Introduction
Autonomous navigation of unmanned ground vehicles (UGVs) requires detailed and updated information on the environment obtained from onboard sensors [1]. Laser rangefinders are sensors commonly employed to obtain three-dimensional (3D) point clouds of the area where the mobile robot moves. Thus, these kinds of sensors are used by UGVs for off-road navigation [2], for planetary exploration [3], urban search and rescue [4] or for agricultural applications [5].
The processing of outdoor scenes acquired with a 3D laser scanner is very different from object scanning [6], since in any direction and at any distance relevant data can be found. In addition, there may be more occlusions, erroneous ranges or numerous points from unconnected areas. In urban areas and indoors, processing of laser information can benefit from the recognition of common geometric features, such as planes [7]. Natural terrain classification can be performed by computing saliency features that capture the local spatial distribution of 3D points [8,9].
In past years, 3D data acquisition from UGVs was usually performed in a stop-and-go manner [10,11]; nowadays, it can be performed in motion by using some form of simultaneous localization and mapping (SLAM) that combines laser data with odometry [12] or with an inertial unit [13].
A leveled 3D point cloud is a representation of the surroundings that can be used for local navigation [14]. Elevation maps offer a compact way of representing 3D point clouds with a 2.5-dimensional data structure [15]. Moreover, fuzzy elevation maps (FEMs) can operate with measurement uncertainty of the sensors [16,17]. However, fuzzy interpolation can produce unreliable solutions in areas with sparse measurements, and it is necessary to use a fuzzy reliability mask (FRM) for each FEM [18,19].
Navigation on natural terrains can be performed by following distant goal points given by their geographic coordinates [20,21]. These waypoints can be planned based on 2D terrestrial maps or aerial images [22], and represent the trajectory that, broadly, the vehicle should follow.
The paper is organized as follows. The next section presents the mobile robot Andabata, including its 3D laser sensor. Section 3 explains how leveled 3D point clouds are obtained while the robot is moving. Then, Section 4 details the filter applied to the leveled 3D point clouds. The construction of the FEM and FRM from the filtered 3D point cloud is described in Section 5. The multithreaded computation of FEMs and FRMs under ROS is discussed in Section 6. Section 7 presents the field navigation strategy of Andabata and some experimental results. The paper ends with conclusions, acknowledgements and references.

The Mobile Robot Andabata
Andabata is an outdoor mobile robot that weighs 41 kg and is powered by batteries. The maximum dimensions of the robot are 0.67 m long, 0.54 m wide and 0.81 m high (see Figure 1).
Andabata is a skid-steered vehicle with four wheels of 20 cm diameter. Each wheel has a passive suspension system (with two springs and a linear guide with a stroke of 6.5 cm) and is actuated by its own direct-current motor with an encoder and a gearbox. The motors are driven by two dual Sabertooth power stages connected to two dual Kangaroo controllers. These controllers receive speed commands for the left and right treads and send motor encoder measurements to the computer via a USB port (see Figure 2). The maximum speed of the vehicle is v_max = 0.68 m/s during straight-line motion, and it decreases to zero as the turning radius reduces.
The chassis of the robot has three levels: the bottom for the motors and the battery, the middle for the electronics and the computer, and the top for the sensors and a WiFi router (see Figure 1). The computer has an Intel Core i7 4771 processor (four cores, 3.5 GHz, 8 MB cache) and 16 GB of RAM. A tablet is used for remote supervision and teleoperation of the robot via the onboard WiFi router (see Figure 2). Andabata also carries a smartphone to obtain data from its GPS (with a horizontal resolution of 10 m), inclinometers, gyroscopes, magnetometers and camera.
In addition, the robot has a 3D laser rangefinder (see Figure 1), built from a two-dimensional (2D) Hokuyo UTM-30LX-EW scanner [28] mounted on a rotating head. Power and data transmission between the base and the head are carried out by a slip ring that allows unconstrained rotation. The measurement range varies from 0.1 m to 30 m (reduced to 15 m under direct sunlight). The 3D sensor has the same vertical resolution as the 2D scanner (i.e., 0.25°). The horizontal resolution can be selected by modifying the rotation speed of the head. The 90° blind zone of the 2D sensor is oriented downwards to avoid interference with the robot. Thus, the blind zone of the 3D laser rangefinder is a cone whose base radius equals the height h = 0.723 m of its optical centre above the floor.
Andabata can be teleoperated by a person through an Android app on the tablet, called Andabata, which is an adaptation of the Mover-bot app [30] and runs both on the tablet and the onboard smartphone. The application on the tablet has two sliders, one for each hand, to indicate the speeds of the left and right treads, which are sent through the local WiFi network to the Andabata app on the smartphone (see Figure 3). Video feedback, the GPS position and the robot inclination are provided by the smartphone to the tablet via WiFi.
Another app on the smartphone, called Android_sensors_driver, publishes all the sensory data of the smartphone as ROS topics on the computer via WiFi [31]. Figure 4 shows a general scheme of the ROS nodes employed by Andabata. The Navigation_mode node selects tread commands from teleoperation or from autonomous navigation, which are sent to the motor drivers through the Motor_control node. During teleoperation, the Andabata app on the smartphone retransmits the speed commands from the tablet to the Teleoperation node via USB. The rest of the ROS nodes are explained in the following sections.

Obtaining Leveled 3D Point Clouds
This section describes the acquisition of leveled 3D point clouds on Andabata while moving on irregular terrain. For this purpose, it is necessary to synchronize the motion of the vehicle with 2D scan acquisition. All in all, it represents a local SLAM scheme implemented at low speeds without loop closures.
Let XYZ be the reference system associated with the robot, located at the geometric center of the rectangle defined by the contact points of the wheels with the ground. The X-axis coincides with the longitudinal axis of the vehicle pointing forwards, the Z-axis is normal to the support plane and points upwards, and the Y-axis completes this orthogonal coordinate system pointing to the left (see Figure 5). The Cartesian coordinates of a leveled 3D point cloud are expressed in an X_p Y_p Z_p reference system whose origin coincides with that of the vehicle at the beginning of each scan. The X_p axis coincides with the projection of X on the horizontal plane, Y_p = Y, and Z_p is aligned with gravity but points in the opposite direction, i.e., upwards (see Figure 5).
The Kinematic_model node is in charge of calculating the actual longitudinal and angular velocities of the vehicle (see Figure 4). For this purpose, it applies the encoder readings of the motors (obtained by the Motor_control node from the motor drivers) to an approximate 2D kinematic model of skid steering [32].
The Local_positioning node employs an unscented Kalman filter [33] to localize the vehicle with respect to X p Y p Z p during the acquisition of the whole 3D scan. To this end, the node merges the roll and pitch angles, the yaw speed of the vehicle (provided by the smartphone and published by the Android_sensors_driver app), and the linear and angular velocities from the Kinematic_model node.
For 3D laser scan acquisition, the Spinning_head node commands a constant rotation speed for the head and periodically receives (every 25 ms) the relative angle between the head and the base at the start of each 2D scan via the USB connection. In this way, this node can publish the transformation between the 2D laser sensor reference system and XYZ. In addition, the 2D_laser_scanner node allows for configuring the Hokuyo sensor and publishes the 2D scans that the computer receives through the Ethernet connection.
Finally, the Cloud_maker node publishes the 3D leveled point cloud by combining the successive 2D vertical scans (published by the 2D_laser_scanner node) with the relative transformations between the reference system of the 2D laser scanner and XYZ (provided by the Spinning_head node), and between XYZ and X p Y p Z p (published by the Local_positioning node).
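As an illustration of the transformation chain described above, the following sketch levels a single laser return using the head angle and the vehicle attitude estimated by the localization filter. It assumes a simplified mounting (no lateral offsets between sensor, head and vehicle frames); the function names are hypothetical, not those of the actual nodes.

```python
import numpy as np

H_SENSOR = 0.723  # height of the scanner's optical centre above the ground (m)

def rot_x(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def rot_y(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

def rot_z(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

def scan_point_to_leveled(rng, beam, head, roll, pitch, yaw, pos):
    """Express one laser return in the leveled frame X_p Y_p Z_p (angles in rad)."""
    # point measured in the vertical 2D scan plane of the sensor
    p = rng * np.array([np.cos(beam), 0.0, np.sin(beam)])
    # rotate the scan plane by the head angle and lift it to the sensor height
    p = rot_z(head) @ p + np.array([0.0, 0.0, H_SENSOR])
    # apply the vehicle pose estimated by the localization filter
    return rot_z(yaw) @ rot_y(pitch) @ rot_x(roll) @ p + np.asarray(pos)
```

With all angles and the position at zero, a horizontal beam of range 1 m ends up at height H_SENSOR in the leveled frame, as expected.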
With Andabata stopped, a 180° horizontal turn of the head of the 3D scanner is enough to capture all the surroundings of the robot, but, with the vehicle in motion, it is necessary to perform a full 360° turn. In this way, non-sampled areas around the robot are avoided and the blind zone above the robot is reduced (see Figure 6).

Filtering 3D Point Clouds
To build useful FEMs and FRMs, it is necessary to filter the leveled 3D point clouds. Filtering is composed of three consecutive steps: far points elimination, overhangs removal and maximum height normalization. These steps are performed by the Filter node, and its output is processed by the Map node (see Figure 4).
The first step is to eliminate far points, specifically those whose projection on the horizontal plane lies farther than d_max = 10 m from the origin of the 3D scan. The reason for this is the low visibility of the ground from the point of view of the 3D laser scanner, whose height above the ground is limited to h on Andabata (see Section 2), and the fact that distant zones are sampled with a lower density than closer ones. In addition, the number of points to be processed is reduced, and the resulting 3D point cloud focuses on the area where the next movements of the vehicle will take place.
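The far-point elimination step can be sketched in a few lines of NumPy; the function name and the N x 3 array layout are assumptions for illustration.

```python
import numpy as np

def remove_far_points(cloud, d_max=10.0):
    """Keep only points whose horizontal projection lies within d_max
    of the origin of the 3D scan (cloud is an N x 3 array in X_p Y_p Z_p)."""
    horizontal = np.hypot(cloud[:, 0], cloud[:, 1])
    return cloud[horizontal <= d_max]
```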
Overhangs that do not constitute an obstacle for Andabata, such as the ceiling of a tunnel or a tree canopy, need to be removed. For this purpose, the second step of the filter employs the collapsible cubes method with coarse binary cubes with an edge of E = 0.5 m [34]. Basically, occupied 3D cubes above a minimum vertical gap are omitted, so points inside unsupported cubes are eliminated from the point cloud. A gap of two empty cubes is enough for Andabata to navigate safely below overhangs.
In the third step, the maximum height of obstacles is limited to 3 m above the ground. The objective of this step is to make the heights of tall and short obstacles comparable. For this purpose, the z-coordinates of 3D points inside coarse cubes with five or more cubes above their vertical ground cube are lowered. Figure 7 shows a leveled 3D scan obtained while Andabata was moving in an urban park (see Figure 8). It can be observed in Figure 7 that far points (blue) and overhangs from the canopy of the trees (green) are discarded from the filtered 3D scan (red). It is also noticeable that the maximum height of the trees has been limited.
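A simplified sketch of the last two filtering steps is given below, using a toy binary cube grid keyed by vertical column. It is only an approximation of the collapsible cubes method [34] (which the paper applies per column of coarse cubes); all names are hypothetical, and the height cap is applied by clamping rather than by the paper's exact lowering rule.

```python
import numpy as np
from collections import defaultdict

E, GAP, H_MAX = 0.5, 2, 3.0  # cube edge (m), gap (in cubes) marking an overhang, height cap (m)

def filter_cloud(cloud):
    """Drop points in cubes hanging above a vertical gap, then cap heights."""
    idx = np.floor(cloud / E).astype(int)
    columns = defaultdict(set)
    for ix, iy, iz in idx:
        columns[(ix, iy)].add(iz)
    supported = {}
    for col, levels in columns.items():
        kept, prev = set(), None
        for lv in sorted(levels):
            if prev is not None and lv - prev - 1 >= GAP:
                break                          # cubes above the gap form an overhang
            kept.add(lv)
            prev = lv
        supported[col] = (min(levels), kept)
    out = []
    for p, (ix, iy, iz) in zip(cloud, idx):
        ground, kept = supported[(ix, iy)]
        if iz not in kept:
            continue                           # overhang point removed
        out.append([p[0], p[1], min(p[2], ground * E + H_MAX)])
    return np.array(out)
```

A canopy point 2 m above a ground point in the same column is separated by three empty cubes and is therefore discarded, while ground points pass through unchanged.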

Building FEMs and FRMs
This section describes the computation of an FEM and its FRM from a filtered 3D point cloud. Instead of using the adaptive neuro-fuzzy inference system (ANFIS) for identification [16,18,19], least squares fuzzy modeling [29] has been employed to improve computation time. Additionally, cylindrical coordinates (α, r, z) have been employed instead of Cartesian coordinates (x, y, z).
It is assumed that the ground surface around the robot can be represented as a continuous function z = H(α, r), where the angle α = arctan(y/x) and the distance r = √(x² + y²) describe the projection of a 3D point on the X_p Y_p plane. Similarly, an FRM is defined as a continuous function v = V(α, r), where values of v close to one and zero represent the maximum and minimum reliability of the FEM, respectively. The FEM and FRM are defined in the universes U_α = [0°, 360°] for the variable α and U_r = [0 m, d_max] for r, which correspond to a circle centered at the origin of X_p Y_p.
A total of sixteen fuzzy sets have been defined for α, evenly distributed over U_α (see Figure 9). In this case, triangular membership functions µ_i for i = 1, 2, …, 16 peak at α_i = (i − 1) · 22.5°, i.e., µ_i(α_i) = 1, and decrease linearly to zero at the peaks of the adjacent sets (with wrap-around at 360°):

µ_i(α) = max(0, 1 − |α − α_i| / 22.5°).

For the distance r, ten unevenly distributed fuzzy sets have been defined over U_r to provide higher detail for the regions closest to the robot (see Figure 9). Concretely, the peak parameters r_j for j = 1, 2, …, 10 grow with an expansion ratio τ = 1.0624, chosen so that r_1 = h/2, and the membership functions µ_j for r are triangular with these peaks.

The combined membership µ_{i,j} of both variables for a given 3D point is calculated with the product operator:

µ_{i,j}(α, r) = µ_i(α) · µ_j(r),

where the following condition is satisfied:

Σ_i Σ_j µ_{i,j}(α, r) = 1.

Let m be a row vector that contains all the possible memberships:

m(α, r) = (µ_{1,1}(α, r), µ_{1,2}(α, r), …, µ_{16,10}(α, r)),

where most of its 160 elements are null. The non-null elements correspond to the region where the projection of the 3D point falls on the X_p Y_p plane and its nearby regions (see Figure 10).

Using zero-order Sugeno consequents, the rule that relates fuzzy set i for α with fuzzy set j for r requires just one parameter z_{i,j} for the FEM and another v_{i,j} for the FRM. Let θ_z and θ_v be the column vectors that contain all the unknown parameters:

θ_z = (z_{1,1}, z_{1,2}, …, z_{16,10})ᵀ, θ_v = (v_{1,1}, v_{1,2}, …, v_{16,10})ᵀ.

Given the algebraic equations z = m(α, r) θ_z for the FEM and 1 = m(α, r) θ_v for the FRM, the least-squares method finds the parameters θ_z and θ_v that minimize the sum of the squared residuals, z − m(α, r) θ_z and 1 − m(α, r) θ_v, respectively, over every 3D point in cylindrical coordinates (α, r, z).
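The membership vector m(α, r) can be sketched as follows. The radial peak positions R_PEAKS are illustrative placeholders (the paper spaces them unevenly with ratio τ, a rule not reproduced here), and all function names are assumptions; only the triangular shape, the product combination, and the partition-of-unity property follow the text.

```python
import numpy as np

ALPHA_PEAKS = np.arange(16) * 22.5        # degrees, evenly spaced over [0, 360)
R_PEAKS = np.linspace(0.36, 10.0, 10)     # illustrative peaks; the paper spaces them unevenly

def memberships_alpha(a):
    """Triangular memberships for the periodic variable alpha (degrees)."""
    d = np.abs((a - ALPHA_PEAKS + 180.0) % 360.0 - 180.0)   # wrapped angular distance
    return np.maximum(0.0, 1.0 - d / 22.5)

def memberships_r(r, peaks=R_PEAKS):
    """Triangular memberships for r; at most two adjacent sets are active."""
    mu = np.zeros(len(peaks))
    if r <= peaks[0]:
        mu[0] = 1.0
    elif r >= peaks[-1]:
        mu[-1] = 1.0
    else:
        j = np.searchsorted(peaks, r)     # peaks[j-1] < r <= peaks[j]
        t = (r - peaks[j - 1]) / (peaks[j] - peaks[j - 1])
        mu[j - 1], mu[j] = 1.0 - t, t
    return mu

def m_vector(a, r):
    """Row vector m(alpha, r) with the 160 rule memberships (sums to 1)."""
    return np.outer(memberships_alpha(a), memberships_r(r)).ravel()
```

For any (α, r), at most 2 × 2 = 4 of the 160 elements are non-zero and they sum to one, which is what makes the zero-order Sugeno output a local interpolation of the consequent parameters.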
For model training, the recursive least-squares technique has been employed in the Map node. At first, θ_z and θ_v are initialized with zeroes, with the exception of the parameters associated with the first fuzzy set for r, i.e., those with j = 1. These zones may not contain any 3D point because of the blind zone of the 3D laser scanner. Thus, the parameters z_{i,1} are filled with the heights deduced from the pitch and roll angles of the mobile robot in X_p Y_p Z_p. Additionally, v_{i,1} = 1 for all i, to indicate that these zones of the FEM are reliable because Andabata has already traversed them.
To obtain θ_z, the recursive least-squares equations for each new 3D point (α, r, z), with m = m(α, r), are:

L_1 = P_1 mᵀ / (1 + m P_1 mᵀ),
θ_z ← θ_z + L_1 (z − m θ_z),    (8)
P_1 ← P_1 − L_1 m P_1,

where L_1 is a gain vector and P_1 is the covariance matrix, which is chosen initially as the identity matrix multiplied by a big positive number to indicate that there is no confidence in the initial z_{i,j} parameters. Similarly, to calculate θ_v, the recursive equations are:

L_2 = P_2 mᵀ / (1 + m P_2 mᵀ),
θ_v ← θ_v + L_2 (1 − m θ_v),    (9)
P_2 ← P_2 − L_2 m P_2,

where L_2 and P_2 are the gain vector and the covariance matrix, respectively. Initially, P_2 is chosen as the identity matrix multiplied by a small positive number to indicate confidence in the initial v_{i,j} parameters (which means no reliability on the initial FEM).
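A minimal sketch of one recursive least-squares update, in the standard form without forgetting factor (a generic illustration, not the authors' exact implementation):

```python
import numpy as np

def rls_update(theta, P, m, y):
    """One recursive least-squares step for the linear model y ~ m . theta."""
    m = m.reshape(-1, 1)
    L = P @ m / (1.0 + float(m.T @ P @ m))            # gain vector
    theta = theta + L.ravel() * (y - float(m.T @ theta))
    P = P - L @ (m.T @ P)                             # covariance update
    return theta, P
```

Initializing P as a large multiple of the identity, as the paper does for P_1, makes the first observations dominate the initial parameter guesses, so a few updates already recover an exactly determined model.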
Once the fuzzy parameters in θ_z and θ_v have been identified, it is possible to estimate, for every (α, r), the height on the FEM, ẑ = m(α, r) θ_z, and the reliability value on the FRM, v̂ = m(α, r) θ_v. Figure 11 shows the FEM and the FRM obtained from the filtered 3D point cloud represented in Figure 7.

Multithreaded Computation of FEMs and FRMs
The onboard computer of Andabata has a processor with four physical cores, but it employs simultaneous multithreading to divide each core into two virtual units, resulting in eight independent virtual cores. This section discusses how to employ the multithreading capability of ROS in the virtual cores to enhance the output rate and steadiness of FEMs and FRMs.
With a horizontal resolution of 2.4° for the 3D laser scanner, a filtered 3D point cloud is obtained every 3.75 s, with approximately 60,000 laser points (this number depends heavily on sky visibility, which reduces the number of valid ranges). However, the calculation of the FEM and its FRM (performed by the Map node) takes much more time, which depends on the total number of 3D points and on the processor load.
To increase the output rate, the calculation of the FEM and the FRM has been parallelized into two threads inside the Map node by separating the calculation of Equation (8) from that of Equation (9). For further improvement, the Map node can be launched two or more times, working on different 3D point clouds.
To decide which 3D point clouds will be processed and which will be discarded, the Cloud_selector node has been developed (see Figure 4). When a Map node becomes available, it is necessary to decide whether to employ the previously received 3D point cloud or to wait for the reception of the current one. Moreover, when using multiple Map nodes, it can also be of interest to wait for future 3D scans in order to avoid processing consecutive 3D laser scans (introducing one or more scans of spacing).
Figures 12 and 13 show timetables with the output rates of the 3D point clouds, FEMs and FRMs when using the previous and the current 3D scans, respectively. In these figures, different selection strategies have been employed by the Cloud_selector node. Processed 3D point clouds are colored, and the same color is employed to indicate when the corresponding FEM and FRM have been obtained. Discarded 3D point clouds are not colored at all.
Table 1 compares the map age (i.e., the time employed to produce an FEM and its FRM from a filtered 3D point cloud), the interval between consecutive maps, and their respective standard deviations for different selection strategies. When using the current 3D point cloud, the map age is smaller, but the interval between maps is greater than when using the previous 3D point cloud. The best map age is obtained by processing the current 3D point cloud with only one Map node. On the other hand, the map interval can be improved by using more Map nodes. The ideal strategy would minimize both the map age and the map interval. A compromise between both objectives is achieved by using two Map nodes and one forced spacing with the previous point cloud. Consequently, this is the selection strategy adopted for the autonomous navigation of Andabata.
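The interplay between the number of Map nodes and the forced spacing can be illustrated with a toy discrete-time model; the tick-based timings and the function name are hypothetical and do not correspond to the measured values in Table 1.

```python
def select_scans(n_scans, n_workers=2, spacing=1, map_time=3):
    """Toy model: one scan arrives per tick; a Map worker stays busy for
    map_time ticks; after dispatching a scan, the next `spacing` scans are
    skipped so that consecutive scans are never processed."""
    free_at = [0] * n_workers          # tick at which each worker becomes free
    processed, skip_until = [], -1
    for t in range(n_scans):           # scan t arrives at tick t
        if t <= skip_until:
            continue                   # forced spacing: discard this scan
        for w in range(n_workers):
            if free_at[w] <= t:        # a free worker takes the scan
                free_at[w] = t + map_time
                processed.append(t)
                skip_until = t + spacing
                break
    return processed
```

With two workers, one forced spacing and a map time of three ticks, every other scan is processed at a steady cadence, which mirrors the compromise between map age and map interval discussed above.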

Field Navigation
A navigation mission for Andabata is defined by an ordered list of distant goal points, each determined by its latitude and longitude coordinates. The navigation strategy consists of approaching the goal points while avoiding local obstacles reactively. The mission ends either when the last goal point is reached (mission completion) or when the roll or pitch angle of the vehicle surpasses 20° (mission failure due to excessive inclination).
The ROS package actionlib [35] has been used to implement this navigation strategy, since the mission takes a long time to be completed and it should be able to be canceled or interrupted at any time.

Global Localization
The relative position and orientation of the mobile robot with respect to the current goal point are obtained by the Global_positioning node. This node processes the data from the GPS and magnetometers of the smartphone published by the Android_sensors_driver app (see Figure 4).
Before starting navigation, it is necessary to calibrate the magnetometers of the smartphone, since their measurements can be affected by the environment and by the mobile robot itself. Calibration consists of turning the robot on the spot through one full turn on a horizontal plane, with tread speeds of the same magnitude but opposite sign. Then, the magnetic measurements on the X and Y axes, B_x and B_y, respectively, are adjusted so that they resemble a circumference (see Figure 14).
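A minimal hard-iron-only sketch of this calibration is shown below: constant offsets are estimated from the extremes of one full turn, so that the re-centred samples lie on a circumference around the origin. Real magnetometer calibration may also require soft-iron scaling, and the names and sign conventions here are assumptions.

```python
import numpy as np

def hard_iron_offsets(bx, by):
    """Offsets that re-centre the magnetic readings of one full turn in place,
    so the (bx, by) samples resemble a circumference around the origin."""
    return (bx.max() + bx.min()) / 2.0, (by.max() + by.min()) / 2.0

def yaw_deg(bx, by, ox, oy):
    """Heading (degrees) from the calibrated field components."""
    return np.degrees(np.arctan2(by - oy, bx - ox))
```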
Afterwards, the orientation of the vehicle with respect to the north, i.e., the yaw angle ψ, can be obtained during navigation from the calibrated magnetic measurements as:

ψ = arctan2(B_y, B_x),

from which the relative orientation of the vehicle with respect to the current goal point can be deduced. When the distance between the current position of Andabata (given by its GPS coordinates) and the goal point is smaller than 10 m (which corresponds to the GPS horizontal resolution), the goal point is considered visited and the next goal point from the mission list becomes the current one.

Direction Selection
Every time an FEM and its FRM are delivered by a Map node, the Dir_selection node computes a new motion direction (see Figure 4). A total of 32 possible directions d_i from the projection of the current XYZ frame onto the X_p Y_p plane are checked (see Figure 15). The chosen direction d_j is the one that minimizes a cost function that grows with ∆_i, the difference between d_i and the goal point direction, and with δ_i, the angular difference between d_i and the projection of the X-axis (weighted by a factor g = 0.02 that penalizes changes of direction), and that decreases with D_i, the maximum distance that the robot could travel over the FEM following d_i (see Figure 15).
To compute D_i, several points (x, y) are evaluated on the FEM and FRM along d_i, with steps of 0.1 m starting from the current position of the robot. Whenever the checked point has a reliability value on the FRM lower than 0.75, has a gradient magnitude on the FEM greater than 0.35, or reaches the border of the FEM, the distance D_i, which is initially zero, is not increased further.
The gradient of the FEM, ∇H, is employed to detect non-traversable areas [18]. Its magnitude at a point with cylindrical coordinates (α, r) on the FEM can be calculated as:

|∇H(α, r)| = √( (∂H/∂r)² + ((1/r) ∂H/∂α)² ),

where the partial derivatives ∂H/∂r and ∂H/∂α are obtained by differentiating the fuzzy model ẑ = m(α, r) θ_z with respect to r and α, respectively.

Computing Tread Speeds
Once the motion direction d_j has been determined, the commanded angular velocity ω for Andabata is computed as ω = G δ_j, where G = 1 is a proportional gain that regulates heading changes of the vehicle [21]. The longitudinal velocity v during navigation is a constant value that depends on the difficulty of the terrain.
The Dir_selection node sends the desired longitudinal velocity v and angular velocity ω to the Kinematic_model node, where they are converted into left and right tread speeds by using a symmetric inverse kinematic model:

v_l = v − y_ICR ω,  v_r = v + y_ICR ω,    (17)

where y_ICR = 0.45 m is the mean value of the tread instantaneous centers of rotation (ICRs) of Andabata on a horizontal plane [32]. When either of the tread speeds exceeds the maximum speed v_max, the commanded speeds (Equation (17)) are divided by:

κ = max(|v_l|, |v_r|) / v_max.

In this way, the commanded curvature of the vehicle, γ = ω/v (and hence its turning radius), is kept unaltered.
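The tread-speed computation with curvature-preserving saturation can be sketched as follows (function name assumed):

```python
def tread_speeds(v, omega, y_icr=0.45, v_max=0.68):
    """Symmetric inverse kinematics for a skid-steered vehicle; when one tread
    saturates, both speeds are scaled together so the curvature omega/v holds."""
    v_left = v - y_icr * omega
    v_right = v + y_icr * omega
    peak = max(abs(v_left), abs(v_right))
    if peak > v_max:
        scale = v_max / peak               # divide both speeds by kappa
        v_left, v_right = v_left * scale, v_right * scale
    return v_left, v_right
```

Scaling both treads by the same factor keeps their ratio, and therefore the turning radius, unchanged, at the cost of a lower actual longitudinal speed.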

Experimental Results
The performance of the navigation system of Andabata has been tested in an urban park with several lampposts, stone benches and trees (see Figure 8). The park contains soft slopes and different terrains, such as grass and sand.
The experiment shown in Figure 16 consists of an outdoor trip from a start point (white cross) to two GPS goals (blue dots), which are considered reached when the robot comes closer than 10 m (red circles). The path followed by Andabata is shown with black dots representing the GPS positions obtained from the onboard smartphone. The longitudinal speed of the robot was set to v = 0.34 m/s, and the trip took approximately 540 s. Figure 16 has been generated with the help of Google Earth [36].
Changes in the yaw angle during this experiment (obtained with the magnetic compass) can be observed in Figure 17. Remarkable changes occur when the robot aims at the first and second goal points, at the beginning and the middle of the trajectory, respectively. Once the goal direction is attained, some corrections are made in order to cross through traversable areas.
Figure 18 shows a local FEM and its FRM generated during autonomous navigation. Small black arrows have been superimposed to represent the successive positions of the mobile robot in local coordinates (provided by the Local_positioning node). It can be observed how obstacle avoidance is performed while moving towards the goal point.

Conclusions
This paper has described the field navigation system of the mobile robot Andabata, which moves at low speeds with fuzzy elevation maps (FEMs). For this purpose, the robot obtains leveled three-dimensional (3D) laser scans in motion by integrating ranges from a 360° field-of-view 3D laser scanner with odometry, inclinometers and gyroscopes. Each leveled 3D scan is employed to produce an FEM and a fuzzy reliability mask (FRM) of the surroundings, which are processed to choose the best motion direction to reach a distant goal point while crossing traversable areas. This has been implemented with different ROS (Robot Operating System) nodes distributed among the cores of the onboard processor.
To the best of the authors' knowledge, this is the first time that a mobile robot has navigated outdoors with FEMs obtained from 3D laser scans. To do so, a complex navigation scheme has been developed by integrating numerous tasks, where it has been necessary to employ 3D point cloud filtering and multithreaded execution to increase the building frequency of FEMs and FRMs. Experimental results of Andabata navigating autonomously in an urban park have been presented.
Work in progress includes navigation in more complicated natural environments and quantitative comparison with state-of-the-art outdoor navigation methods.