Pedestrian Navigation Using Foot-Mounted Inertial Sensor and LIDAR

Foot-mounted inertial sensors can be used for indoor pedestrian navigation. In this paper, to improve the accuracy of pedestrian localization, we propose a method that uses a distance sensor (LIDAR) in addition to an inertial measurement unit (IMU). The distance sensor is a time-of-flight range finder with a 30 m measurement range, sampled at 33.33 Hz. Using the distance sensor, walls along corridors are detected automatically, and the detected walls are used to correct the heading of the pedestrian path. Experiments show that the proposed algorithm significantly improves heading accuracy and that the system works robustly in indoor environments with many doors and passing people.


Introduction
Although the global positioning system (GPS) is normally used in pedestrian navigation systems [1,2], alternative techniques are needed for GPS-denied environments such as indoors [3][4][5][6][7][8][9], urban canyons [10,11], underground [12][13][14], and mountainous regions, where GPS signals are weak or unavailable. Two main approaches are used for indoor position estimation [15][16][17][18][19][20]. The first is based on an existing network of receivers or transmitters placed at known locations. This approach is known as beacon-based navigation, in which position is estimated by triangulation (or trilateration) from measured angles (or ranges). It typically uses technologies such as vision, ultrasound, or short-range radio, which are collectively called local positioning systems (LPSs); a survey of LPSs can be found in [15]. The second approach is based on dead reckoning algorithms using sensors installed on a person or an object, and is known as beacon-free navigation. Since it requires no installation in the environment, this approach is preferred in some applications. Several dead reckoning approaches using inertial measurement units (IMUs) have been proposed. In [16,21], the position of a person is estimated using an inertial navigation algorithm (INA). Since its accuracy degrades over time, additional sensors are often used with IMUs, such as time of arrival in LPSs [17], received signal strength (RSS) in LPSs [16], or distance sensors [18][19][20].
In [16], persons are accurately located by combining active RFID technology with an inertial navigation system, in which the received signal strengths obtained from several active RFID tags are used to aid foot-mounted IMU based position estimation. This method requires that RFID tags be installed at known locations. In [17], an RF 3D location system is applied to improve the accuracy: RF receivers are preinstalled around the outside of the building, an RF transmitter is attached to a foot, and the position of the foot is computed from the times of arrival from the transmitter to each receiver. In [18], the position error from dead reckoning is corrected by deploying ultrasound beacons as landmarks. Other methods use radar [19] or distance sensors [20] to improve the accuracy. Since these papers use the floor as the reference plane, the distance sensor gives information on the foot height. Although the methods in [19,20] have no installation requirement, unlike those in [16][17][18], they only improve the accuracy of the foot height estimation, not that of the horizontal position estimation. Since the foot height is important information in gait analysis, the methods in [19,20] can be used effectively for that purpose. However, the horizontal position (that is, a person's position) is the key information in pedestrian navigation, so the methods in [19,20] are not suitable for it.
Our method improves the estimation accuracy using a distance sensor in addition to a foot-mounted IMU. We recognize that an important problem in pedestrian navigation is heading correction, and we use vertical planes (such as walls) as reference planes to update heading and position. The proposed system requires neither any installation in the environment nor any prior knowledge of the environment (such as a map).

System Overview
An IMU (Xsens MTi) is attached to the shoe as in Figure 1. The IMU contains three-axis accelerometers and gyroscopes with a 100 Hz sampling frequency. A LIDAR (light detection and ranging, model LL-905-PIN-01 [22]) is also attached to the shoe and used as a distance sensor. The distance is obtained by measuring the flight time of infrared light, with a 30 m measurement range and a 33.33 Hz sampling frequency. A body coordinate system (BCS) and a world coordinate system (WCS) are used in this paper. The BCS coincides with the IMU coordinate system. The z axis of the WCS points upward while the x and y axes are chosen arbitrarily. The origin of the WCS is assumed to be on the floor. The notation [a]_w ([a]_b) is used to denote that a vector a is represented in the world (body) coordinate system. To compute a LIDAR point's position, the distance sensor parameters, namely its position and pointing direction with respect to the BCS, are required. In Figure 1, the vector [r_l]_b ∈ R^3 denotes the position of the distance sensor while the unit vector [n_l]_b ∈ R^3 represents its pointing direction.
The distance sensor parameters ([r_l]_b and [n_l]_b in Figure 1) can be determined using a ruler and a protractor. However, it is difficult to obtain highly accurate parameters this way, so the parameters are calibrated in the next section.
The main idea of this paper is to use vertical planes (such as walls) along a walking path to update position and heading. For example, if a person is walking along a long straight corridor, we can use the wall as a reference to update position and heading. The proposed algorithm automatically detects the existence of vertical planes (such as walls) and uses them as references. Thus, no prior map information is required.
The detection of vertical planes during walking is explained using the example indoor environment in Figure 2. Suppose a person is walking along the black pedestrian path. While walking, the foot touches the floor almost periodically. When the foot is on the floor, its velocity is zero, and this time interval is called a zero velocity interval (ZVI). At each ZVI i, we compute the LIDAR point P_i based on the position and heading estimated by the INA. At ZVI 2, vertical plane 1 is defined using all LIDAR points between P_1 and P_2 (including P_1 and P_2). Now suppose the person is at ZVI 3 and LIDAR point P_3 is computed. If P_3 is close to vertical plane 1, it is assumed that P_3 lies on vertical plane 1. Since we know the equation of vertical plane 1 and the distance from it, we can update position and heading. At ZVI 4, LIDAR point P_4 is not near vertical plane 1, so the plane is not used to update position and heading at this time. At ZVI 5, a new vertical plane is formed. This process is repeated during walking.

Distance Sensor Calibration
In this section, the sensor unit is handheld with the LIDAR pointing at the floor for the calibration. The coordinate of a LIDAR point with respect to the WCS is computed as follows:

[p]_w = r + C^w_b ([r_l]_b + d [n_l]_b),

where r is the position of the IMU with respect to the WCS, C^w_b is the rotation matrix from the BCS to the WCS, [r_l]_b + d [n_l]_b is the position of the LIDAR point with respect to the BCS, and d is the distance from the distance sensor to the LIDAR point. r and C^w_b can be computed from the INA. We set up the distance sensor and IMU so that [r_l]_b = k [n_l]_b (see Figure 1). Since [n_l]_b is a unit vector, k is the distance between the IMU and the distance sensor. Since k (≈ 3 cm) is very small compared with d (1∼5 m), k is measured with a ruler.
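Under the mounting assumption [r_l]_b = k [n_l]_b, the LIDAR point in the WCS reduces to r + (k + d) C^w_b [n_l]_b. A minimal pure-Python sketch of this computation (the function and variable names are ours, not from the paper):

```python
def matvec(M, v):
    # Plain 3x3 matrix-vector product.
    return [sum(M[i][j] * v[j] for j in range(3)) for i in range(3)]

def lidar_point_w(r_w, C_wb, n_l_b, k, d):
    # LIDAR point in the WCS: r + (k + d) * C^w_b [n_l]_b,
    # using the collinear mounting assumption [r_l]_b = k [n_l]_b.
    direction_w = matvec(C_wb, n_l_b)
    return [r_w[i] + (k + d) * direction_w[i] for i in range(3)]

# Example: IMU 1 m above the floor, LIDAR pointing straight down,
# measured range d = 0.97 m, sensor offset k = 0.03 m.
I3 = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
p = lidar_point_w([0.0, 0.0, 1.0], I3, [0.0, 0.0, -1.0], 0.03, 0.97)
# p lands on the floor plane (z = 0)
```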
Since the floor is flat and horizontal, the z coordinate of the LIDAR point is zero, so the height of the IMU satisfies

h = −(k + d) e_3^T C^w_b [n_l]_b,   e_3 = [0 0 1]^T,

where h is the height of the IMU, which is measured by a ruler. If we repeat the measurement n times with different poses, we have

h = −(k + d_i) e_3^T C_i [n_l]_b,   i = 1, …, n,

where C_i is the rotation matrix C^w_b at the i-th pose and d_i is the i-th distance measurement. [n_l]_b is estimated by minimizing the following least squares cost:

J([n_l]_b) = ‖A [n_l]_b − b‖^2,

where the i-th row of A ∈ R^{n×3} is (k + d_i) e_3^T C_i and b = −h [1 ⋯ 1]^T ∈ R^n. The analytic solution to the least squares problem is given by

[n_l]_b = (A^T A)^{−1} A^T b,

followed by normalization to unit length.
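The least squares estimate of [n_l]_b can be sketched in pure Python via the normal equations. This sketch assumes `rows` already holds the stacked coefficient rows ((k + d_i) times the third row of C_i) and `rhs` the corresponding right-hand sides (−h); both names are ours:

```python
import math

def det3(M):
    # Determinant of a 3x3 matrix.
    return (M[0][0] * (M[1][1] * M[2][2] - M[1][2] * M[2][1])
          - M[0][1] * (M[1][0] * M[2][2] - M[1][2] * M[2][0])
          + M[0][2] * (M[1][0] * M[2][1] - M[1][1] * M[2][0]))

def solve3(A, b):
    # Solve a 3x3 linear system by Cramer's rule.
    D = det3(A)
    x = []
    for c in range(3):
        Ac = [row[:] for row in A]
        for r in range(3):
            Ac[r][c] = b[r]
        x.append(det3(Ac) / D)
    return x

def calibrate_direction(rows, rhs):
    # Least squares: minimize ||A n - b||^2 via the normal equations
    # (A^T A) n = A^T b, then normalize n to unit length.
    AtA = [[sum(r[i] * r[j] for r in rows) for j in range(3)] for i in range(3)]
    Atb = [sum(rows[m][i] * rhs[m] for m in range(len(rows))) for i in range(3)]
    n = solve3(AtA, Atb)
    norm = math.sqrt(sum(c * c for c in n))
    return [c / norm for c in n]
```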

Basic INA
In this subsection, a basic INA is given. This basic algorithm is not a new result and is from [23,24].
Let v ∈ R 3 and r ∈ R 3 denote the velocity and position of IMU with respect to the WCS. Let C(q) ∈ R 3×3 be the direction cosine matrix corresponding to the quaternion [25] q ∈ R 4 which represents the rotation relationship between the BCS and WCS.
The quaternion, velocity and position satisfy the following kinematic equations [23]:

q̇ = (1/2) Ω(ω) q,   v̇ = C(q)^T [a]_b,   ṙ = v,   (6)

where Ω(ω) is the 4 × 4 matrix formed from ω [25], ω is the angular velocity of the BCS with respect to the WCS, and [a]_b ∈ R^3 is the acceleration in the BCS. The gyroscope output y_g ∈ R^3 and accelerometer output y_a ∈ R^3 are modeled by

y_g = ω + b_g + w_g,   y_a = [a]_b − C(q)[g]_w + b_a + w_a,

where [g]_w ∈ R^3 is the local gravitational vector in the WCS, b_g ∈ R^3 and b_a ∈ R^3 are the biases of the gyroscope and accelerometer, respectively, and w_g and w_a are sensor noises. The numerical integration algorithm to integrate Equation (6) (replacing [a]_b by y_a + C(q)[g]_w and replacing ω by y_g) is given in [26]. Let q̂, r̂ and v̂ be the integrated values.
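One integration step of the attitude kinematics can be sketched as below. This assumes the Hamilton quaternion convention with the body angular rate applied by right multiplication, which may differ from the convention used in [26]:

```python
import math

def quat_mult(p, q):
    # Hamilton product p ⊗ q, quaternions stored as (w, x, y, z).
    pw, px, py, pz = p
    qw, qx, qy, qz = q
    return (pw * qw - px * qx - py * qy - pz * qz,
            pw * qx + px * qw + py * qz - pz * qy,
            pw * qy - px * qz + py * qw + pz * qx,
            pw * qz + px * qy - py * qx + pz * qw)

def integrate_quat(q, omega, dt):
    # First-order quaternion integration: q <- q ⊗ exp((0, omega * dt) / 2),
    # followed by renormalization to suppress numerical drift.
    wx, wy, wz = (w * dt for w in omega)
    angle = math.sqrt(wx * wx + wy * wy + wz * wz)
    if angle < 1e-12:
        dq = (1.0, 0.5 * wx, 0.5 * wy, 0.5 * wz)
    else:
        s = math.sin(angle / 2) / angle
        dq = (math.cos(angle / 2), wx * s, wy * s, wz * s)
    qn = quat_mult(q, dq)
    norm = math.sqrt(sum(c * c for c in qn))
    return tuple(c / norm for c in qn)

# Example: rotate from identity at pi/2 rad/s about z for 1 s.
q90 = integrate_quat((1.0, 0.0, 0.0, 0.0), (0.0, 0.0, math.pi / 2), 1.0)
```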
There are errors in q̂, r̂ and v̂ due to sensor noise, represented by q̃ ∈ R^3, r̃ ∈ R^3 and ṽ ∈ R^3:

q̂* ⊗ q ≈ [1, q̃^T/2]^T,   ṽ = v − v̂,   r̃ = r − r̂,   (8)

where ⊗ denotes the quaternion multiplication and q̂* is the conjugate quaternion of q̂. The three dimensional (instead of four dimensional) error description of the quaternion in Equation (8) is from [27]. The state of the Kalman filter is defined by

x = [q̃^T  ṽ^T  r̃^T  b_g^T  b_a^T]^T ∈ R^15.

The system equation for the Kalman filter is given by [28]:

(d/dt) q̃ = −[y_g ×] q̃ − b_g − w_g,
(d/dt) ṽ = −C(q̂)^T [y_a ×] q̃ − C(q̂)^T (b_a + w_a),
(d/dt) r̃ = ṽ,
(d/dt) b_g = w_bg,   (d/dt) b_a = w_ba,

where [a ×] ∈ R^{3×3} is the skew symmetric matrix corresponding to a vector a ∈ R^{3×1}. The noises w_bg and w_ba represent small variations of the biases. Two measurement equations are used in this paper: the first is based on the ZVIs, and the other uses the distance sensor (given in Section 4.2).
When a foot touches the ground during walking, its velocity must be zero. This allows the velocity error in the INA to be reset. In [29], ZVIs are detected directly using a Doppler velocity sensor; however, ZVIs can also be detected indirectly using zero velocity detection algorithms [30,31].
A simple zero velocity detection algorithm is used in this paper. The discrete time index k is assumed to belong to a ZVI if the following conditions are satisfied:

‖y_g(j)‖ ≤ γ_g for all j ∈ [k − N_g, k + N_g],
| ‖y_a(j)‖ − g | ≤ γ_a for all j ∈ [k − N_a, k + N_a],

where N_g and N_a are integer window half-lengths and γ_g and γ_a are thresholds. During ZVIs, we have the following zero velocity updating equation:

z_v = v̂ = ṽ + v_v,

where v_v is the measurement noise; that is, the measurement matrix selects the velocity error ṽ from the state.
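The zero velocity conditions amount to windowed threshold tests on the gyroscope and accelerometer norms. A sketch, with hypothetical window and threshold values (the paper's actual parameter values are not given here):

```python
def detect_zvi(gyro, acc, k, N_g, N_a, gamma_g, gamma_a, g=9.81):
    # Sample k is a zero-velocity candidate if the gyro norm stays below
    # gamma_g over a window of 2*N_g+1 samples and the accelerometer norm
    # stays within gamma_a of gravity over 2*N_a+1 samples.
    def norm(v):
        return (v[0] ** 2 + v[1] ** 2 + v[2] ** 2) ** 0.5
    lo_g, hi_g = max(0, k - N_g), min(len(gyro), k + N_g + 1)
    lo_a, hi_a = max(0, k - N_a), min(len(acc), k + N_a + 1)
    gyro_ok = all(norm(w) <= gamma_g for w in gyro[lo_g:hi_g])
    acc_ok = all(abs(norm(a) - g) <= gamma_a for a in acc[lo_a:hi_a])
    return gyro_ok and acc_ok
```

A stationary stretch of data (small gyro norms, accelerometer near gravity) passes both tests, while the swing phase fails the gyro test.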

Proposed Position and Heading Updating Algorithm Using the Distance Sensor
The proposed position and heading updating algorithm is shown in Figure 4. As illustrated in Figure 2, the basic idea is to use vertical planes as references to update position and heading. The key issue is when to form a vertical plane and when to abandon an existing one; this issue is discussed first. At ZVI i, LIDAR point P_i is given by

P_i = r_i + C^w_{i,b} [b_i]_b,

where r_i is the estimated IMU position at ZVI i, C^w_{i,b} ∈ R^{3×3} is the rotation matrix from the BCS to the WCS at ZVI i, and [b_i]_b ∈ R^3 is the LIDAR point with respect to the BCS at ZVI i.
If there is no defined vertical plane, all LIDAR points in one walking step, which is delimited by ZVIs i − 1 and i (i > 1), are used to define the first vertical plane. All of these points could lie on a vertical plane (such as a wall). On the other hand, it is also possible that they do not: if there is a passing person, some LIDAR points could come from that person, and some points could come from a door, which is not on the wall. We first assume that all LIDAR points are on the same plane and derive the plane equation; afterwards, we verify this assumption using Equation (18). Let P_{i,j} ∈ R^{3×1} (1 ≤ j ≤ N) be all LIDAR points during the i-th step (N is the number of LIDAR points in the step), and let (n_i, d_i) be the vertical plane equation parameters satisfying

n_i^T P_{i,j} = d_i,   1 ≤ j ≤ N.   (13)

The plane equation parameters (n_i, d_i) are computed by minimizing

J(n_i, d_i) = Σ_{j=1}^{N} (n_i^T P_{i,j} − d_i)^2   subject to ‖n_i‖ = 1,

where the constraint holds since n_i is a normal vector. The minimizing solution is the standard total least squares plane fit:

d_i = n_i^T P̄,   n_i = the unit eigenvector of M corresponding to its smallest eigenvalue,   (17)

where P̄ = (1/N) Σ_{j=1}^{N} P_{i,j} and M = Σ_{j=1}^{N} (P_{i,j} − P̄)(P_{i,j} − P̄)^T. Now we check whether the LIDAR points come from the same vertical plane: if the LIDAR points are near the plane (n_i, d_i) (first and second conditions in Equation (18)) and the plane is vertical (third condition in Equation (18)), we assume that the LIDAR points are from the same vertical plane.
(1/N) Σ_{j=1}^{N} |n_i^T P_{i,j} − d_i| ≤ α_mean,   max_{1≤j≤N} |n_i^T P_{i,j} − d_i| ≤ α_max,   |cos^{−1}(n_i(3)) − 90°| ≤ α_angle,   (18)

where α_mean, α_max and α_angle are threshold parameters. Note that the inclination angle of the plane (n_i, d_i) is given by cos^{−1}(n_i(3)). If Equation (18) is satisfied, the estimated plane (n_i, d_i) becomes the first defined vertical plane; if not, the first vertical plane is searched for in the next step. Now suppose we have the vertical plane equation (n_k, d_k) and the next LIDAR point P_i is obtained. If this point belongs to the vertical plane, we use the plane equation to update the position and heading of the pedestrian. The LIDAR point P_i is determined to belong to the vertical plane (n_k, d_k) if the following is satisfied:

|n_k^T P_i − d_k| ≤ α_plane,   (19)

where α_plane is a threshold. If LIDAR point P_i does not satisfy Equation (19), the point could come from an obstacle (such as a passing person or a door) or from a new wall. For example, LIDAR point P_i in Figure 5a is from an obstacle (the next LIDAR point P_{i+1} satisfies Equation (19)) while the point P_i in Figure 5b,c is from a new wall (P_{i+1} does not satisfy Equation (19)). In the proposed algorithm, a new vertical plane is only formed using Equation (17) when Equation (18) is satisfied (Figure 5c).
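The plane construction and membership tests can be sketched as follows. For brevity, this sketch assumes verticality up front and fits a 2D line in the horizontal plane, which is equivalent to the 3D total least squares fit when the wall is exactly vertical; the verticality condition of Equation (18) is therefore built in by construction rather than checked afterwards as in the paper. All names are ours:

```python
import math

def fit_vertical_plane(points):
    # Total least-squares fit of a vertical plane n.P = d to 3D points,
    # reduced to a 2D line fit in the horizontal (x, y) plane because the
    # plane is assumed vertical (normal n has zero z component).
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    xm, ym = sum(xs) / len(xs), sum(ys) / len(ys)
    sxx = sum((x - xm) ** 2 for x in xs)
    syy = sum((y - ym) ** 2 for y in ys)
    sxy = sum((x - xm) * (y - ym) for x, y in zip(xs, ys))
    # Direction of maximum spread = wall direction; normal is 90 deg away.
    phi = 0.5 * math.atan2(2 * sxy, sxx - syy)
    n = (-math.sin(phi), math.cos(phi), 0.0)
    d = n[0] * xm + n[1] * ym
    return n, d

def residual(n, d, p):
    # Distance of point p from the plane (n, d).
    return abs(n[0] * p[0] + n[1] * p[1] + n[2] * p[2] - d)

def same_plane(points, n, d, a_mean, a_max):
    # Mean and max residual checks (first two conditions of Equation (18)).
    res = [residual(n, d, p) for p in points]
    return sum(res) / len(res) <= a_mean and max(res) <= a_max
```

Membership of a new LIDAR point in the plane (Equation (19)) is then simply `residual(n, d, P_i) <= alpha_plane`.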
We now derive the position and heading updating algorithm using LIDAR point P_i when the point belongs to the vertical plane (n_k, d_k). Since P_i is on the plane (n_k, d_k), we have (see Equation (13))

n_k^T ( r_i + C^T [b_i]_b ) = d_k,   (20)

where C denotes C(q_i). From [32], we have the linearization

C^T ≈ Ĉ^T (I + [q̃_i ×]),   (21)

where Ĉ denotes C(q̂_i). Substituting Equation (21) and r_i = r̂_i + r̃_i into Equation (20), we have

n_k^T ( r̂_i + Ĉ^T [b_i]_b ) − d_k ≈ − n_k^T r̃_i − n_k^T Ĉ^T [q̃_i ×] [b_i]_b.   (22)

Using [q̃_i ×][b_i]_b = −[[b_i]_b ×] q̃_i, Equation (22) can be rewritten as follows:

n_k^T P̂_i − d_k ≈ n_k^T Ĉ^T [[b_i]_b ×] q̃_i − n_k^T r̃_i,   (23)

where P̂_i = r̂_i + Ĉ^T [b_i]_b is the estimated LIDAR point. This equation is used as a measurement equation in the Kalman filter when P_i belongs to the vertical plane:

z_p = n_k^T P̂_i − d_k = H_p x + v_p,   (24)

where v_p is a measurement noise and H_p is the row vector with n_k^T Ĉ^T [[b_i]_b ×] in the q̃ block, −n_k^T in the r̃ block, and zeros elsewhere.
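Since the plane constraint provides one scalar measurement per LIDAR point, the corresponding Kalman update has a particularly simple form. Below is a generic scalar-measurement update sketch, with plain Python lists standing in for vectors and matrices; it is not the paper's exact filter, and the row `H` would be filled with the blocks derived above:

```python
def scalar_kf_update(x, P, H, z, R):
    # Standard Kalman update for one scalar measurement z = H x + v,
    # with measurement noise variance R.
    #   x: state (list of n floats), P: covariance (n x n), H: row (n floats)
    n = len(x)
    PHt = [sum(P[i][j] * H[j] for j in range(n)) for i in range(n)]  # P H^T
    S = sum(H[i] * PHt[i] for i in range(n)) + R                     # H P H^T + R
    K = [phi / S for phi in PHt]                                     # Kalman gain
    innov = z - sum(H[i] * x[i] for i in range(n))                   # innovation
    x_new = [x[i] + K[i] * innov for i in range(n)]
    P_new = [[P[i][j] - K[i] * PHt[j] for j in range(n)] for i in range(n)]
    return x_new, P_new
```

With equal prior and measurement variances, a single update moves the estimate halfway toward the measurement and halves the variance, which is a quick sanity check of the gain computation.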

Experiments and Results
Three experiments are performed to verify the proposed algorithm. In the experiments, an IMU (Xsens MTi sensor unit) and a distance sensor (LIDAR) are attached to the shoe as shown in Figure 6. Since algorithm verification is the main purpose of the experiments, the measurement data are collected and then processed offline in MATLAB.
The first experiment is walking along an 84 m corridor with many doors and passing persons, as in Figure 7. The results of the first experiment are shown in Figure 8 (pure INA) and Figure 9 (the proposed algorithm). In the figures, the pedestrian path is drawn in green and the vertical wall planes are drawn as blue lines. If LIDAR points satisfy Equation (19), they are used for the measurement update in Equation (24); these "updated LIDAR points" are represented by red "*" symbols, while points not used for the measurement update are represented by blue "o" symbols. In Figure 8, the estimated pedestrian path drifts over time since there is no heading update. In Figure 9, the estimated walking path is more accurate since the heading is corrected using the wall information. As can be seen in the zoomed area in Figure 9, the proposed algorithm works robustly even with many doors and passing persons. Walking is a complex process that depends heavily on each person's walking style. To see whether the proposed method is affected by different walking styles, it is tested with five subjects. Each subject is asked to walk 30 m along a corridor three times (15 walks in total), and the result is shown in Table 2. As can be seen from Table 2, the proposed algorithm works well regardless of subject, and the mean position error is significantly reduced from 2.03 m (pure INA) to 0.42 m (proposed method).
The second experiment is walking around a U shaped (39 × 26.5 m) corridor with starting position (0, 0) and final position (0, 22.26 m). The final position is marked by the red star in Figures 10 and 11. There are four doors, which are colored black. The estimated pedestrian paths are shown in Figure 10 (pure INA) and Figure 11 (the proposed method). In Figure 10, the heading drifts over time since it is not corrected, as in the Figure 8 case. In Figure 11, we can see that the heading is effectively corrected using the wall information.
The RMS position error of the pure INA is 3.88 m, while the proposed method gives a better RMS error of 0.58 m using the wall information (see Table 3).
The third experiment shows that our method works well in a complex environment. In Figure 12, a person started from room 1 and arrived at room 2 after walking along a 60 m corridor; finally, the person went around room 2 and came back to the corridor. The red square in Figure 12 is the starting position, the estimated pedestrian path is shown in green, and the red star is the final position. We can see that the vertical plane update is not used inside rooms 1 and 2 (there are no updated LIDAR points inside the rooms) due to obstacles.

Discussion
The experiment in Table 3 demonstrates that the proposed algorithm can work well with a complex walking path. In Figure 12, there are some updated LIDAR points (red "*" symbols) along the corridor, but none in rooms 1 and 2. This means a pure INA is applied in rooms 1 and 2, where there are many obstacles and short walls. The vertical wall plane is automatically detected when the pedestrian enters the corridor. Thus, the estimated walking path is more accurate while walking along corridors.
Although Table 2 shows that the proposed method works well with different walking styles, the algorithm could still be affected by walking style since the sensor is mounted on a shoe. The walking style mainly affects the zero velocity detection used in the INA. This paper uses a simple zero velocity algorithm (see Equation (11)) whose parameters (the window lengths N_g and N_a and the corresponding thresholds) are chosen so that the algorithm can detect ZVIs for most normal walking styles; these fixed parameters are used in all experiments. To show the effect of this algorithm, another experiment is done in which a subject was asked to walk 3.6 m in a straight line at different speeds. An optical marker and an IMU were attached to his shoe. The IMU was used to detect zero velocity points using the zero velocity algorithm, and the marker was used to track the trajectory of the foot with a ground truth system (OptiTrack Flex 13 system).
In Figures 13 and 14, the detected zero velocity points are represented by red "*" symbols and the trajectory of the foot (from the optical tracker) is shown as the blue line. The algorithm can detect ZVIs at three different speeds. Although the algorithm detects all occurrences of ZVIs, the detected length of each ZVI is shorter than the true length due to the fixed parameters for all walking speeds. This is the drawback of the simple zero velocity algorithm.
The experiment data in Table 2 are used to verify the robustness of the zero velocity detection, and the result is shown in Table 4. As can be seen in Table 4, there are no missed ZVI detections at walking speeds up to 6 km/h. Thus, our algorithm works well for different walking styles with the fixed parameters. However, the last seven data sets, which were obtained by asking subjects to walk as fast as possible, show the drawback of this algorithm: with fixed parameters it cannot detect ZVIs in very high speed walking. For example, there are 22 ZVIs in the last data set, but the zero velocity algorithm successfully detects only four of them.
This drawback can be addressed by using adjustable parameters for zero velocity detection; for example, the parameters could be adjusted based on the current velocity of the foot. It can also be addressed by directly measuring ground touching intervals (for example, using force sensors).
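A speed-adaptive threshold schedule of the kind suggested above could be sketched as follows; the constants are purely illustrative and not from the paper:

```python
def adaptive_gamma(speed, gamma_slow=0.6, gamma_fast=3.0, v_max=2.0):
    # Hypothetical speed-dependent gyro threshold: widen the zero-velocity
    # detection threshold linearly with the estimated foot speed (m/s),
    # capped at v_max, so fast walking is not rejected outright.
    ratio = min(max(speed, 0.0), v_max) / v_max
    return gamma_slow + ratio * (gamma_fast - gamma_slow)
```

The same schedule could be applied to the accelerometer threshold and, with rounding, to the window lengths.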

Conclusions
The proposed method uses vertical planes (such as walls) to improve heading and position estimation accuracy. The vertical planes are constructed using a distance sensor.
The proposed method is verified through three experiments in a straight corridor, a U shaped corridor and a complex walking path. Experiment results in Tables 1-3 show that the proposed method gives a better RMS error compared with a pure INA. Furthermore, the zoomed area in Figure 9 indicates that the proposed algorithm is working robustly even with many doors and passing persons.
The paper also shows the drawbacks of using a simple zero velocity algorithm, and possible solutions, through the experiment in the discussion section.
Since there are many vertical walls indoors, the proposed algorithm can be used in almost any indoor environment. Furthermore, the proposed system requires neither any installation in the environment nor any prior knowledge of the environment.
As a future research topic, this work could be improved by using two distance sensors pointing to different sides (left and right). With two distance sensors, vertical planes (such as walls) on both sides of a corridor can be recognized. This could improve the algorithm, especially when tracking of the vertical plane on one side is lost due to obstacles or the absence of a wall.