3.1. IEKF Formulation
This work uses an iterated extended Kalman filter to estimate the pose (position and orientation) of a mobile robot.
Figure 1 provides a graphical overview of the algorithm, which is explained in detail in this section.
The standard formulation of the Kalman filter estimates the state of a dynamical system, provided that the state transition and measurement models are linear. The extended Kalman filter (EKF) is able to perform state estimation for non-linear models by linearizing around the current estimate. The IEKF can improve performance in the case of a highly non-linear measurement model by making local iterations of the update step [29]. The local iterations are stopped when subsequent estimates are close enough in state space, or when a maximum number of iterations is reached.
The Kalman filter and its derivatives all work in two steps. The first step or “prediction” step makes a prediction of the next state based on the previous state and a state transition model (also called the “motion model”):
\hat{\mathbf{x}}_n = f(\tilde{\mathbf{x}}_{n-1}, \mathbf{u}_n)
\hat{\mathbf{P}}_n = \mathbf{F}_n \tilde{\mathbf{P}}_{n-1} \mathbf{F}_n^T + \mathbf{Q}_n    (1)

where \mathbf{x} is the state vector, \mathbf{u} is the control input vector (odometry measurements in this work), \mathbf{P} is the state covariance matrix, \mathbf{F} is the state transition matrix, and \mathbf{Q} is the process noise covariance matrix. \mathbf{F} is obtained by taking the Jacobian matrix of the state transition model f with respect to the state vector \mathbf{x}. In this paper, boldface symbols are used to represent matrix quantities (e.g., \mathbf{F}). A tilde symbol is used for estimated quantities (e.g., \tilde{\mathbf{x}}), and a hat symbol is used for predicted quantities (e.g., \hat{\mathbf{x}}). Subscripts n-1 and n are used to denote time steps. In the case of a differential drive mobile robot, the motion model can be defined as [30]:

\hat{x}_n = \tilde{x}_{n-1} + \Delta d \cos(\tilde{\theta}_{n-1} + \Delta\theta / 2)
\hat{y}_n = \tilde{y}_{n-1} + \Delta d \sin(\tilde{\theta}_{n-1} + \Delta\theta / 2)
\hat{\theta}_n = \tilde{\theta}_{n-1} + \Delta\theta    (2)

where \Delta d is the translation relative to the previous position and \Delta\theta is the rotation relative to the previous position.
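As an illustration, the prediction step for this differential drive motion model can be sketched in Python as follows. This is a minimal sketch, assuming a state (x, y, theta) and a midpoint motion model as in Equation (2); the function and variable names are illustrative and not part of the original formulation.

```python
import numpy as np

def predict(x_est, P_est, u, Q):
    """Prediction step for a differential drive robot.

    x_est : estimated state [x, y, theta] from the previous time step
    P_est : state covariance matrix (3x3)
    u     : odometry input [delta_d, delta_theta]
    Q     : process noise covariance matrix (3x3)
    """
    x, y, theta = x_est
    dd, dth = u
    # Motion model: translate along the heading at half the rotation increment.
    a = theta + dth / 2.0
    x_pred = np.array([x + dd * np.cos(a),
                       y + dd * np.sin(a),
                       theta + dth])
    # F: Jacobian of the motion model with respect to the state vector.
    F = np.array([[1.0, 0.0, -dd * np.sin(a)],
                  [0.0, 1.0,  dd * np.cos(a)],
                  [0.0, 0.0,  1.0]])
    # Covariance propagation of Equation (1).
    P_pred = F @ P_est @ F.T + Q
    return x_pred, P_pred
```

Note that the covariance is propagated through the Jacobian, so the heading uncertainty couples into the position uncertainty whenever the robot translates.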
In principle, one could continue to use the prediction step indefinitely. However, each prediction has a slight error; therefore, the total estimation error grows without bounds and the uncertainty can only increase (all terms in the covariance equation of Equation (1) are positive semi-definite). The Kalman filter therefore also employs a second step, known as the “update” step. Based on the predicted state, a measurement is predicted using a so-called “measurement model”. This predicted measurement is then compared to sensor data. The difference between the real and the predicted measurement is new information (also known as the “innovation”), which is weighed with the predicted state in order to obtain a new estimate. The weighing factor is called the “Kalman gain” and is calculated based on the relative uncertainties of the motion and measurement models. Mathematically, the update step is formulated as follows:
\boldsymbol{\nu}_n = \mathbf{z}_n - \hat{\mathbf{z}}_n
\mathbf{K}_n = \hat{\mathbf{P}}_n \mathbf{H}_n^T (\mathbf{H}_n \hat{\mathbf{P}}_n \mathbf{H}_n^T + \mathbf{R}_n)^{-1}
\tilde{\mathbf{x}}_n = \hat{\mathbf{x}}_n + \mathbf{K}_n \boldsymbol{\nu}_n
\tilde{\mathbf{P}}_n = (\mathbf{I} - \mathbf{K}_n \mathbf{H}_n) \hat{\mathbf{P}}_n    (3)

where \boldsymbol{\nu} is the innovation, \mathbf{z} is the measurement, \hat{\mathbf{z}} is the predicted measurement (which in this work is the sampled voltage of the receiver), \mathbf{K} is the Kalman gain, \mathbf{H} is the state observation matrix, \mathbf{R} is the measurement noise covariance matrix, and \mathbf{I} is the identity matrix. \mathbf{H} is obtained by taking the Jacobian matrix of the measurement model with respect to the state vector \mathbf{x}.
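The locally iterated update step of the IEKF can be sketched as follows. This is an illustrative Python sketch of the standard iterated EKF update, in which the measurement model is relinearized at each iterate; the callables `h` and `jac_h`, the tolerance, and the iteration cap are placeholders supplied by the caller, not values from this work.

```python
import numpy as np

def iekf_update(x_pred, P_pred, z, h, jac_h, R, max_iter=10, tol=1e-6):
    """Iterated EKF update: relinearize the measurement model locally.

    h      : measurement model, h(x) -> predicted measurement
    jac_h  : Jacobian of h with respect to the state, evaluated at x
    """
    x_i = x_pred.copy()
    for _ in range(max_iter):
        H = jac_h(x_i)                        # relinearize at current iterate
        S = H @ P_pred @ H.T + R              # innovation covariance
        K = P_pred @ H.T @ np.linalg.inv(S)   # Kalman gain
        # Innovation, corrected for the shifted linearization point.
        nu = z - h(x_i) - H @ (x_pred - x_i)
        x_next = x_pred + K @ nu
        done = np.linalg.norm(x_next - x_i) < tol
        x_i = x_next
        if done:                              # iterates close in state space
            break
    P_est = (np.eye(len(x_pred)) - K @ H) @ P_pred
    return x_i, P_est
```

For a linear measurement model the loop converges after one correction, and the result coincides with the ordinary Kalman filter update.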
The measurement model is based on a Lambertian radiation model for the transmitter and a photodiode with a trans-impedance amplifier (TIA) as receiver [31]. Only the line-of-sight components are considered in the measurement model. If a single receiver is used, the receiver coordinates should coincide with the robot origin, thus removing the need for coordinate frame transformations. If multiple receivers are used, it is desirable to express the receiver positions relative to the robot frame in polar coordinates. With this formulation, the robot orientation \theta becomes a variable in the measurement model, and it can be estimated by the Kalman filter (instead of only the x- and y-coordinates, as with the previous formulation). The receiver position is thus expressed as [28]:

x_j = x_R + r_j \cos(\theta_R + \alpha_j)
y_j = y_R + r_j \sin(\theta_R + \alpha_j)    (4)

where (x_j, y_j) is the position of receiver j, \alpha_j is the polar angle of receiver j, and r_j is the radial coordinate of receiver j (see Figure 2). The subscript R is used to denote the coordinates of the robot.
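This robot-frame to world-frame transformation can be sketched as a short helper. A minimal sketch, assuming a planar pose (x_R, y_R, theta) and a receiver mounted at polar coordinates (r_j, alpha_j) in the robot frame; names are illustrative.

```python
import numpy as np

def receiver_position(x_r, y_r, theta, r_j, alpha_j):
    """World coordinates of receiver j mounted at polar coordinates
    (r_j, alpha_j) in the robot frame, given the robot pose (x_r, y_r, theta).
    """
    return np.array([x_r + r_j * np.cos(theta + alpha_j),
                     y_r + r_j * np.sin(theta + alpha_j)])
```

Because theta appears inside the cosine and sine terms, the predicted measurement at each receiver depends on the robot orientation, which is what makes the orientation observable to the filter.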
The voltage sampled at the output of the receiver is used as the measurement for the Kalman filter (\mathbf{z} in Equation (3)) and is given by [28,31]:

V_j = G \left( \sum_{i=1}^{N_T} \frac{C_i \cos^m(\phi_i) \cos(\psi_i)}{d_i^2} + i_{sh} + i_{bg} + i_{th} \right), \quad \psi_i \leq \Psi_c    (5)

where V_j is the output voltage of the receiver; N_T is the number of transmitters; m is the Lambertian emission order of the LED; \phi_i and \psi_i are the irradiance and incidence angles with respect to light source i, respectively (see Figure 3); d_i is the Euclidean distance between transmitter i and the receiver; G is the transimpedance of the receiver; i_{sh} is the shot noise current on the received power; i_{bg} is the shot noise current due to background illumination; i_{th} is the thermal noise current [31]; and \Psi_c is the field of view of the photodetector. Finally, C_i in Equation (5) is a constant that contains all non-position-dependent parameters [28]:

C_i = \frac{(m+1) A R_p P_{t,i}}{2\pi} \int_{\lambda_{min}}^{\lambda_{max}} S(\lambda) \, d\lambda    (6)
where A is the active area of the photodiode; R_p is the peak responsivity of the photodiode; \lambda_{min} and \lambda_{max} are the minimum and maximum wavelengths of light for which the photodiode produces a current, respectively; P_{t,i} is the transmitted optical power of transmitter i; and S(\lambda) is the normalized spectral response of the photodiode.
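The noise-free part of this line-of-sight measurement model can be sketched as follows. This sketch assumes ceiling-mounted transmitters pointing straight down and a receiver pointing straight up on a horizontal plane, so that both the irradiance and incidence angles satisfy cos(angle) = h / d; this geometric simplification and all names are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def receiver_voltage(p_rx, transmitters, m, C, G, fov, h):
    """Noise-free line-of-sight receiver voltage (illustrative sketch).

    p_rx         : receiver (x, y) position on the floor plane
    transmitters : list of transmitter (x, y) positions on the ceiling
    m            : Lambertian emission order of the LEDs
    C            : constant lumping all non-position-dependent parameters
    G            : transimpedance of the receiver
    fov          : field of view of the photodetector (rad)
    h            : vertical distance between transmitter and receiver planes
    """
    i_total = 0.0
    for p_tx in transmitters:
        # Euclidean distance between transmitter and receiver.
        d = np.sqrt((p_tx[0] - p_rx[0])**2 + (p_tx[1] - p_rx[1])**2 + h**2)
        cos_angle = h / d              # cos of irradiance and incidence angle
        psi = np.arccos(cos_angle)     # incidence angle
        if psi <= fov:                 # outside the FOV: no contribution
            i_total += C * cos_angle**m * cos_angle / d**2
    return G * i_total
```

In the filter, the Jacobian of this model with respect to the robot pose supplies the observation matrix of Equation (3).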
3.3. Experimental Setup
To verify the proposed approach, experiments were performed in a lab environment (see
Figure 5). The robot was equipped with a custom sensor platform, and data from the various sensors were collected. Position estimation was performed in post-processing; the experimental results are therefore not real-time positioning results. However, the proposed algorithm is computationally inexpensive (see
Section 5). The choice to perform the estimation in post-processing was motivated mainly by development speed: data can be recorded once and then reused throughout the development of the filter. To ensure proper separation between calibration data and validation data, some datasets were only used for determining filter parameters and were thus not included in the overall performance metrics (see
Section 5). All processing was performed on a Dell Latitude E5550 laptop, equipped with an Intel i5-5300U CPU and 8 gigabytes of RAM.
A Kobuki robot [
32] was selected as the mobile platform (see
Figure 6). Several photodiodes were placed on the topmost platform and were connected via a TIA to the sampling pins of an Arduino UNO. This microcontroller relays the light intensity values to a Raspberry Pi (RPi), which is also connected to the robot. The different sensors were synchronized offline: only measurements whose timestamps are close enough together are considered. Besides receiving sensor data, the RPi also controls the robot by sending velocity commands. During each experiment, a constant forward velocity was applied as a reference to the open-loop speed controller of the robot, which enabled it to approximately follow the same trajectory in subsequent experiments. These trajectories were the same as those used in simulations (see
Figure 4). The forward speed of the robot was approximately 0.2 m/s.
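The offline synchronization by timestamp proximity can be sketched as a simple two-pointer pairing over sorted timestamp streams. This is an illustrative sketch; the tolerance value and function name are assumptions, not taken from the paper.

```python
def match_measurements(ts_a, ts_b, tol=0.02):
    """Pair measurements from two sensors whose timestamps differ by at most
    `tol` seconds; both timestamp lists are assumed sorted ascending.
    Returns a list of (index_a, index_b) pairs; unmatched samples are dropped.
    """
    pairs, i, j = [], 0, 0
    while i < len(ts_a) and j < len(ts_b):
        dt = ts_a[i] - ts_b[j]
        if abs(dt) <= tol:
            pairs.append((i, j))
            i += 1
            j += 1
        elif dt > 0:       # sensor B lags: advance its pointer
            j += 1
        else:              # sensor A lags: advance its pointer
            i += 1
    return pairs
```

Samples without a partner within the tolerance are simply discarded, which matches the stated policy of only considering measurements whose timestamps lie close enough together.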
Four LED lights were mounted to the ceiling as transmitters; the models and manufacturers of the LEDs and other components are listed in
Table 1. The transmitters were powered by a lab bench power supply. As a ground truth reference (that is, the real location of the robot), the ultrasound positioning system by Marvelmind Robotics was used. We expected the accuracy of our approach to be an order of magnitude lower (that is, a higher error value) than the Marvelmind accuracy of 2 cm [
33]. The system was therefore a suitable ground truth reference. The beacons of the Marvelmind system were placed next to the LEDs at a known distance. Through the self-calibration function of the Marvelmind system, we could therefore obtain both beacon and LED coordinates. Finally, a separate computer was also present for visualization, data recording, and sending teleoperation commands via the keyboard. A schematic representation of the system can be seen in
Figure 7.