1. Introduction
In a modern robot, the localization sub-module is based on a combination of multiple sensors. Each sensor has its strengths and weaknesses, and only when properly combined can they yield robust localization in every situation.
One of the most widely used fusion techniques is the Kalman filter [1,2] and its variants. A Kalman filter is a statistical method that fuses information from sensors according to their accuracy. Very accurate sensors with small covariances can be fused with less accurate ones. All the sensors contribute to the localization; even relatively imprecise sensors can be used to improve the final estimated position of the wheelchair. The key step in the Kalman filter is to characterize each sensor’s covariance, so the fusion can be done properly to obtain an accurate final localization at every point.
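For intuition, consider the fusion of two scalar measurements $z_1$ and $z_2$ of the same quantity, with variances $\sigma_1^2$ and $\sigma_2^2$ (a textbook identity, not specific to this paper):

\begin{equation}
\hat{x} = \frac{\sigma_2^2\,z_1 + \sigma_1^2\,z_2}{\sigma_1^2 + \sigma_2^2},
\qquad
\sigma_{\hat{x}}^2 = \frac{\sigma_1^2\,\sigma_2^2}{\sigma_1^2 + \sigma_2^2} \le \min\left(\sigma_1^2, \sigma_2^2\right). \nonumber
\end{equation}

The fused variance is smaller than either individual variance, which is why even an imprecise sensor can still improve the final estimate, and why a correctly specified covariance is essential.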
This paper presents a localization system for an intelligent wheelchair; the prototype is shown in Figure 1. The wheelchair is designed to transport people with severe disabilities, helping users move with less effort and improving their quality of life. This intelligent wheelchair is controlled by the user, who indicates the final desired pose. A set of sensors detects obstacles and reconstructs the environment [3] in order to navigate safely and autonomously [4,5]. The wheelchair makes the low-level decisions, avoiding obstacles and selecting the best path to arrive at the desired location [3]. To achieve this task, the localization subsystem plays a key role in the movement process. This paper presents the localization subsystem for the wheelchair, which is based on a sensor fusion scheme.
The wheelchair localization subsystem consists of the following sensors:
Odometry. Wheel odometry uses incremental encoders coupled to the motors. The system captures the movement of the wheels in real time. It is one of the main sensors for a robotic prototype and the basis for any localization system [6]. The position and orientation are obtained by integrating wheel displacements (a minimal sketch of this integration is given after this list). Odometry is accurate over short distances, but its error grows without bound over long distances.
IMU. An Inertial Measurement Unit (IMU) uses a set of sensors to estimate inertial magnitudes. In this case, an electronic gyroscope, which measures angular speed, is integrated to estimate orientation. The IMU also has an electronic compass to determine the north direction, but the behavior of this sensor can be erratic indoors due to magnetic interference, so it is not used in the localization module.
Lidar. Light Detection and Ranging (lidar), with the scan matching technique [7], calculates the difference between two consecutive laser scans to estimate the movement between them. This difference is converted into an incremental displacement of the wheelchair, similar to conventional odometry.
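The following sketch illustrates the dead-reckoning integration mentioned in the odometry item above (Python, with illustrative parameter values; not the prototype’s actual code):

```python
import math

# Illustrative parameters (hypothetical values, not the prototype's).
WHEEL_RADIUS = 0.17    # wheel radius in meters
WHEEL_BASE = 0.56      # distance between the wheels in meters
TICKS_PER_TURN = 2048  # encoder resolution (ticks per wheel turn)

def integrate_odometry(x, y, theta, ticks_left, ticks_right):
    """Advance the planar pose (x, y, theta) by one encoder sample.

    Valid only while per-sample displacements stay small, since the
    arc traveled is approximated by a straight segment; the error of
    this dead reckoning accumulates without bound over time.
    """
    d_left = 2.0 * math.pi * WHEEL_RADIUS * ticks_left / TICKS_PER_TURN
    d_right = 2.0 * math.pi * WHEEL_RADIUS * ticks_right / TICKS_PER_TURN
    d_center = 0.5 * (d_left + d_right)        # mean displacement
    d_theta = (d_right - d_left) / WHEEL_BASE  # heading increment
    x += d_center * math.cos(theta + 0.5 * d_theta)
    y += d_center * math.sin(theta + 0.5 * d_theta)
    return x, y, theta + d_theta
```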
The set of sensors used in the wheelchair is shown in Figure 1. Circled in orange are the lidars, a pair of Sick TiM 551 units, valid for indoor and outdoor use, with a maximum range of 8 m, an angular resolution of 1 degree, a field of view of 270 degrees, and a measurement frequency of 15 Hz. In red are the HB100 Doppler effect sensors, described in Section 4. In green is the IMU, located under the seat; it is an MPU9250, which includes an accelerometer, a gyroscope and a magnetometer in a single chip.
This paper describes the fusion process for the sensors presented above. Each sensor’s accuracy is characterized by a variable covariance, so the fusion algorithm can weight each sensor’s contribution depending on the environment. The method for obtaining the variable covariance of each sensor, the fusion algorithm, and the tests comparing variable and static covariance localization, performed with a real prototype, are presented in the sections below.
This paper is organized as follows. Section 2 contains a study of previous research using Doppler-based localization, which is compared to the topic of this paper. In Section 3, the standard fusion algorithm for static covariance sensors is described. In Section 4, the method for estimating variable covariance based on an odometric sensor and a Doppler speed radar is introduced. Section 5 describes the experiments with the presented algorithm and provides a comparison to a standard algorithm in a real environment with a working prototype. Section 6 presents the conclusions of our work.
2. Previous Work
Modern robot localization is based on multiple-sensor fusion algorithms. One of the most widely used is the Kalman filter, where each sensor is characterized by a covariance and the filter is applied in order to obtain a robust localization [8,9]. The covariance depends on the sensor’s physical characteristics and is usually calculated from data or tuned by hand [10]. Other algorithms, such as Monte Carlo localization, use a set of particles to sample the probability distribution of the robot’s position [11,12,13]. In this case, the particles evolve based on sensor data and covariance, so covariance is a key parameter in any localization algorithm.
Covariance is one of the main ways to characterize sensor accuracy, so variable covariance is necessary in order to obtain a robust algorithm, as in [14], where multiple sensors are integrated using a Kalman filter and a variable covariance based on the error of each sensor, or in [15], where a fusion scheme for multiple sensors in a driver assistance system is presented, with each sensor characterized by a variable covariance.
Intelligent wheelchairs are a specific case of autonomous robots, and an extensive bibliography is available on the topic. The main difference with respect to a standard mobile robot is safety: the robot is designed to transport disabled people, so each movement must be comfortable and safe. Ref. [16] provides a survey of classical intelligent wheelchair projects, including a description of technologies and interfaces. In [17], a 2017 review of wheelchair technology is presented. The main obstacle detection and localization sensors used nowadays in wheelchairs are odometers, lidars, IMUs and cameras, whose information is fused to yield a final position. The main area of research in intelligent wheelchairs is the human-machine interface, which aims to simplify user control. In [18], the sensors of an intelligent wheelchair are used to compensate for uncertainty in the commands given to the wheelchair by an eye-tracking device. In [19], a specific wheelchair prototype with a sensory system based on odometry, ultrasound, and a camera is presented.
Previous works have used different techniques to fuse the information coming from the sensors and obtain a robust and accurate real-time localization. In the intelligent wheelchair case, an accurate localization is the first step toward navigating safely, so the system must be robust. In this paper, an innovative method based on sensor fusion and variable covariance calculation is used to improve localization accuracy. Odometry is a key part of the localization subsystem of any robot, though the accuracy of this kind of sensor is usually very hard to measure. Odometry is affected by systematic and non-systematic errors: systematic errors can be captured by a static covariance, whereas non-systematic errors cannot, and they can produce significant errors in the final position. The problem with non-systematic errors is that they may appear unexpectedly (for example, when the robot passes over an unexpected object on the ground, or when the robot’s wheels slip), and they can cause large position errors, so a robot that relies on odometry alone is not reliable. In this paper, other low-cost sensors, such as Doppler radar speed sensors, are used to measure the accuracy of the odometry information and adjust its covariance to improve the final localization accuracy.
The application of radar to localization is not widespread; only a few works appear in the literature. In [20], a continuous-wave rotating radar and a scan matching algorithm are used to estimate the position change between two consecutive scans. In [21], a scan matching technique based on Iterative Closest Point (ICP) over radar data and a Kalman filter are used to integrate the data for vehicle localization. The most similar work is [22], where the Doppler data from all angles is used to obtain the ego vehicle’s speed, and the directional information of each angle in the radar scan is used to detect landmarks and estimate the change in the vehicle’s position with a scan matching algorithm. In all these works, the radar is used like a lidar, and a scan matching algorithm is used to detect the robot’s movement.
In this work, the data produced by a low-cost Doppler radar is used to measure the accuracy of the wheelchair’s odometric sensor. A Doppler radar can measure the displacement speed based on the Doppler frequency shift; however, the sensor is very noisy, so it is difficult to integrate as an independent odometric sensor. Instead, it can be used as an accuracy sensor, measuring the variable localization quality of a standard odometric system in the form of a variable covariance matrix. To the best of the authors’ knowledge, this fusion strategy has not been used before. The experiments presented in Section 3 and Section 5 show that this fusion can significantly reduce the accumulated localization errors when some of the sensors fail.
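For reference, the conversion from Doppler shift to radial speed that underlies this strategy is straightforward. A minimal sketch in Python, assuming the HB100’s nominal 10.525 GHz transmit frequency (the exact value should be calibrated on the actual hardware):

```python
# Speed from the Doppler shift of a continuous-wave radar such as the
# HB100. The transmit frequency is the module's nominal value and is
# treated here as an assumption.
C = 299_792_458.0  # speed of light, m/s
F_TX = 10.525e9    # nominal transmit frequency, Hz

def doppler_speed(f_doppler_hz: float) -> float:
    """Radial speed (m/s) for a measured Doppler shift (Hz).

    For a monostatic CW radar, f_d = 2 * v * f_tx / c, so
    v = f_d * c / (2 * f_tx). At 10.525 GHz, a shift of about
    70 Hz corresponds to roughly 1 m/s.
    """
    return f_doppler_hz * C / (2.0 * F_TX)
```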
3. Localization with Odometric Static Covariance
In this section, we present the basic localization algorithm based on fusing static covariance sensors. This algorithm is usually the first step in the localization system of any robot, and for simple mobile robots it may be enough. However, for an intelligent wheelchair, the algorithm has to be robust, meaning some improvements must be made.
Covariance is a key parameter of the information coming from a sensor and must be specified for each sensor in a localization system. In the localization system of the wheelchair, the lidars are used as a localization source through a scan matching algorithm [7], which measures the difference between two scans to obtain the change in the robot’s position. The lidar scan matching covariance can be calculated from the similarity between two consecutive scans [23]. If the current and previous scans are very similar, the accuracy of the computed position change will be high. The change in the robot’s position is obtained from the movement of matching representative points between the two scans; the displacement between the points in the first and second scans gives the final incremental movement of the robot. If all the points are consistent with the expected movement, the covariance will be small, indicating an accurate incremental movement. If the environment is not well defined or has fuzzy points, if other obstacles are moving in the lidar’s field of view, or if not enough references are available within the lidar’s range, the number of representative points will be small and the result will not be accurate. The incremental movement will thus have a large covariance, resulting in a bad localization. It is also possible that two consecutive scans cannot be matched at all, so the movement between them cannot be estimated; this generates an error and a very high covariance, indicating a bad incremental localization. The accuracy of the laser scan matching algorithm is thus characterized by a variable covariance, which allows the fusion algorithm to weight its contribution to the final position at every measurement.
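A minimal sketch of this idea (illustrative only, not the wheelchair’s actual matcher): align already-associated points from two scans with an SVD-based rigid fit and derive a covariance proxy from the residuals, so poor matches yield a large covariance.

```python
import numpy as np

def match_scans(prev_pts: np.ndarray, curr_pts: np.ndarray):
    """Estimate the 2D rigid motion aligning curr_pts onto prev_pts.

    prev_pts, curr_pts: (N, 2) arrays of already-associated points.
    Returns (dx, dy, dtheta, sigma2); sigma2 grows when few points
    match or the post-alignment residuals are large.
    """
    p_mean = prev_pts.mean(axis=0)
    c_mean = curr_pts.mean(axis=0)
    # Cross-covariance of the centered point sets (Kabsch algorithm).
    H = (curr_pts - c_mean).T @ (prev_pts - p_mean)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:  # guard against a reflection solution
        Vt[-1, :] *= -1.0
        R = Vt.T @ U.T
    t = p_mean - R @ c_mean
    dtheta = float(np.arctan2(R[1, 0], R[0, 0]))
    # Residuals after alignment drive the covariance estimate.
    residuals = prev_pts - (curr_pts @ R.T + t)
    sigma2 = float((residuals ** 2).sum()) / max(len(prev_pts) - 3, 1)
    return float(t[0]), float(t[1]), dtheta, sigma2
```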
The equations that govern an encoder odometric-based sensor are shown in Equation (1). These equations are valid when the wheel displacements $\Delta d_L$ and $\Delta d_R$ are small, so the increment angle $\Delta\theta$ and the displacement $\Delta d$ are also small. In Equation (1), $\Delta d_L$ and $\Delta d_R$ are the displacements of the left and right wheels in a certain time period based on the odometric sensor, $r_L$ and $r_R$ are the left and right wheel radii, $N_L$ and $N_R$ are the numbers of encoder ticks that the electronics receive in the time period $T$, and $N_{res}$ is the encoder resolution in one turn. $\Delta\theta$ is the variation in the wheelchair angle in the time period, and depends on $b$, which is the distance between the robot’s wheels. $\Delta d$ is the change in distance traveled based on wheel displacement, and $x$ and $y$ give the robot’s position in Cartesian coordinates.
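Under these definitions, Equation (1) takes the standard differential-drive form (a plausible reconstruction; the paper’s exact layout may differ):

\begin{align}
\Delta d_L &= \frac{2\pi r_L N_L}{N_{res}}, &
\Delta d_R &= \frac{2\pi r_R N_R}{N_{res}}, \nonumber\\
\Delta\theta &= \frac{\Delta d_R - \Delta d_L}{b}, &
\Delta d &= \frac{\Delta d_R + \Delta d_L}{2}, \nonumber\\
x_{k+1} &= x_k + \Delta d\,\cos\!\left(\theta_k + \frac{\Delta\theta}{2}\right), &
y_{k+1} &= y_k + \Delta d\,\sin\!\left(\theta_k + \frac{\Delta\theta}{2}\right), \nonumber
\end{align}

with $\theta_{k+1} = \theta_k + \Delta\theta$.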
Wheel odometry and IMU covariances are frequently more difficult to adjust. The sensor information does not change between good and bad localization scenarios, so a static average covariance is usually assigned to these sensors [9]. This covariance underestimates the odometry localization when the wheels travel in a perfect environment, but this does not usually introduce localization errors. The main disadvantage appears when a non-systematic error occurs. This can happen when a wheel slips in any area of the robot’s path. The odometry system cannot detect this error and its covariance remains constant, so the sensor fusion algorithm considers the odometric movement as valid data with a small covariance. This can lead to a bad localization of the robot’s final position. The IMU measures the rotational speed, but as with any other analog sensor, white noise is included in the measurement, so the orientation angle obtained by integrating this noisy signal has a very low accuracy. A high covariance is assigned to the IMU due to its low accuracy, so this sensor is only used when the other sensors are failing. In a standard localization implementation based on laser scan matching, encoder odometry and an IMU, the laser scan matching has a variable covariance, while the wheel odometry and the IMU have static covariances. The traditional way to optimize the fusion filter is to tune the weight of the odometry covariance to obtain the best position, but non-systematic errors can still lead to robot delocalization.
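As a preview of the alternative developed later, the following hedged sketch shows one way to turn an independent Doppler speed reading into a variable odometry covariance (hypothetical scaling law and values; the actual formulation is given in Section 4):

```python
import numpy as np

# Hypothetical baseline; a real system would calibrate these values.
BASE_ODOM_COV = np.diag([0.01, 0.01, 0.005])  # static covariance (x, y, theta)

def odometry_covariance(v_encoder: float, v_doppler: float,
                        sigma_doppler: float = 0.05) -> np.ndarray:
    """Scale the odometry covariance by the encoder/Doppler mismatch.

    When the wheels slip, v_encoder diverges from v_doppler and the
    returned covariance grows, so the fusion filter trusts the
    odometry less. With no mismatch, the static covariance is kept.
    """
    mismatch = abs(v_encoder - v_doppler)
    # Mismatch measured in units of the Doppler noise level.
    scale = 1.0 + (mismatch / sigma_doppler) ** 2
    return BASE_ODOM_COV * scale
```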
Figure 2 shows the basic localization diagram based on a Kalman filter [1,2]. The localization module receives sensors and covariances as inputs, and gives a localization estimate as the output. The Kalman filter combines sensor information and a system model in order to estimate the current state. Equation (2) presents a model of the localization system [24], where $x_k$ is the state we want to estimate, $z_k$ are the actual measurements of the system (in our case, sensor information), $f$ is the model state transition function, which depends on the previous state $x_{k-1}$, the model input $u_k$ and the process noise $w_k$, and $h$ is the observation model, which converts internal states to measurement outputs and depends on the measurement noise $v_k$.
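Written out in a conventional form consistent with these definitions (a reconstruction; the paper’s exact notation may differ):

\begin{align}
x_k &= f(x_{k-1}, u_k) + w_k, & w_k &\sim \mathcal{N}(0, Q), \nonumber\\
z_k &= h(x_k) + v_k, & v_k &\sim \mathcal{N}(0, R_k). \nonumber
\end{align}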
The robot movement model is based on a 2D omni-directional model, Equation (3). The model relies on the assumption that the velocity does not change over a period of time, i.e., it is a constant speed model. The model’s inputs are unknown; the only information available comes from the wheel encoders, which are used for the odometry and enter the Kalman filter as a measurement. A theoretical model could be constructed from the motor transfer function, but it would provide the same information as the odometry, because the same sensors are used to calculate the motor speed. So a standard model without inputs is used, based on a kinematic model derived from Newtonian mechanics. The model is shown in Equation (4), where $T$ is the time period and $\dot{x}$, $\dot{y}$, $\dot{\theta}$ are the speeds of the corresponding variables.
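A plausible reconstruction of the constant-speed model in Equation (4):

\begin{align}
x_k &= x_{k-1} + T\,\dot{x}_{k-1}, & \dot{x}_k &= \dot{x}_{k-1}, \nonumber\\
y_k &= y_{k-1} + T\,\dot{y}_{k-1}, & \dot{y}_k &= \dot{y}_{k-1}, \nonumber\\
\theta_k &= \theta_{k-1} + T\,\dot{\theta}_{k-1}, & \dot{\theta}_k &= \dot{\theta}_{k-1}. \nonumber
\end{align}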
The state of the system is shown in Equation (3); it comprises the Cartesian position $(x, y)$, the orientation $\theta$ and their speeds. A Kalman filter is used to estimate the current state $\hat{x}_k$. This gives good results for smooth, non-linear functions, such as those described by the robot, if the sample period is small.
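Consistent with the constant-speed model above, the state of Equation (3) can be reconstructed as

\begin{equation}
x = \begin{bmatrix} x & y & \theta & \dot{x} & \dot{y} & \dot{\theta} \end{bmatrix}^{T}. \nonumber
\end{equation}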
Equation (5) shows the Kalman filter algorithm, where $Q$ is the covariance of the model noise $w_k$, which is set to a high value to give more importance to the measurements than to the model evolution. $R_k$ is the covariance of each measurement $z_k$ (lidar, encoder odometry and IMU). This covariance can be variable, as in the case of the lidar, or static, as for the odometry and the IMU. $P_k$ is the covariance of the estimated state $\hat{x}_k$, starting from the initial position covariance $P_0$. $H$ is the mapping matrix between the sensor output and the model; it represents each sensor’s measurement in the state space vector $x$. The Kalman filter provides an estimate of the state $\hat{x}_k$ and an accuracy estimate $P_k$ for every sensor measurement received. The final accuracy is better than that of the individual sensors.
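For reference, the filter can be written in its standard linear form using the symbols above (a reconstruction; since the robot model is non-linear, the implemented filter is presumably an extended variant, but the structure is the same):

\begin{align}
\hat{x}_{k|k-1} &= F\,\hat{x}_{k-1|k-1}, \nonumber\\
P_{k|k-1} &= F P_{k-1|k-1} F^{T} + Q, \nonumber\\
K_k &= P_{k|k-1} H^{T}\left(H P_{k|k-1} H^{T} + R_k\right)^{-1}, \nonumber\\
\hat{x}_{k|k} &= \hat{x}_{k|k-1} + K_k\left(z_k - H\,\hat{x}_{k|k-1}\right), \nonumber\\
P_{k|k} &= \left(I - K_k H\right) P_{k|k-1}, \nonumber
\end{align}

where $F$ is the transition matrix of the constant-speed model and $K_k$ is the Kalman gain.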
The tests are conducted in a long corridor with a turnaround area at the end (Figure 3). Along the route, different speeds and different kinds of floor are used to ensure the experiments cover many circumstances. The ground truth is obtained using a Velodyne HDL-32 lidar. This sensor consists of 32 laser rangefinders at different angles that scan the transverse area, each with an accuracy of 2 cm. The lidar information is used to obtain the route traveled and the map of the environment using the LOAM SLAM algorithm [25]. This algorithm has exhibited very high localization accuracy when used with a Velodyne lidar, enough to be considered ground truth.
Figure 3a shows a picture of the wheelchair with the Velodyne installed; the Velodyne is only used to obtain the ground truth. Figure 3b shows the map generated with the LOAM algorithm, with the route taken during the experiment shown as a green line.
Figure 4 shows an example of wheelchair localization in this environment using static covariance for the IMU and odometry and variable covariance for laser scan matching. The setting is a long corridor with a turn at the end. The corridor is a very complicated test for the laser scan matcher because the laser can only see two walls and has very few reference points, which makes it very difficult to calculate the robot’s displacement. Moreover, the floor where the robot turns is slippery, so the wheels move but the wheelchair does not: odometry indicates that the robot is turning while the wheels are actually slipping. Since the covariance is static, the sensor fusion algorithm considers the odometry information as valid. This situation generates a non-systematic error which affects the final localization.
Figure 4 shows the evolution of this experiment with static IMU and odometric covariance. The route begins with good localization reported by all the sensors, but when the wheelchair arrives at the turnaround area, the wheels slip and generate an orientation error of more than 90 degrees, as shown in the green plot. This slippage also introduces an erroneous displacement into the odometric localization output. However, in the slippage area, laser localization is quite accurate, as shown in the purple plot. Upon returning to the initial point, the laser scan matching algorithm cannot obtain an accurate localization in the corridor due to the absence of representative points. The position obtained from laser scan matching indicates less displacement than the actual one, thus generating a position error.
Figure 4 shows the behavior depending on the odometry covariance weight. In the overweighted-odometry scenario, Figure 4a, the distance traveled is correct, but the orientation estimate includes the wheel slip and the odometry error, giving a final error of 34 m. Figure 4b shows the underweighted-odometry scenario: the turn is made correctly based on the laser, but in the corridor the distance traveled is less than the actual one. The final point is marked in the plot with a red circle; this localization is about 18 m away from the actual end position.
This result shows that, in this experiment, it is not possible to adjust the weights so as to solve the localization problem with static covariance values. The weights can prioritize odometry or laser scan matching, but the final position always combines the errors introduced by each sensor, as shown in Figure 4. The only solution is a variable covariance, where the accuracy of the odometric data is assessed and its covariance changes according to the results.
The IMU data is also used in the combination, but it is less accurate than the data from the other sensors, so its effect is residual compared to laser scan matching or odometry. This sensor mainly sustains the orientation estimate when the other two sensors fail. As with any other sensor, it improves the final results, and the system usually works well with a manually tuned static covariance for it.