A Human Gait Tracking System Using Dual Foot-Mounted IMU and Multiple 2D LiDARs

This paper proposes a human gait tracking system using a dual foot-mounted IMU and multiple 2D LiDARs. The combined system aims to overcome the disadvantages of each single-sensor system: the short tracking range of a single 2D LiDAR and the drift errors of the IMU system. The LiDARs act as anchors to mitigate the errors of an inertial navigation algorithm. In our system, two 2D LiDARs are used. LiDAR 1 is placed around the starting point, and LiDAR 2 is placed at the ending point (in straight walking) or at the turning point (in rectangular path walking). Using LiDAR 1, we can estimate the initial heading and position of each IMU without any calibration process. We also propose a method to calibrate two LiDARs that are placed far apart. The measurements from the two LiDARs can then be combined in a Kalman filter and a smoother algorithm to correct the two estimated feet trajectories. If straight walking is detected, we update the current stride heading and the foot position using the previous stride headings, and this is used as a measurement update in the Kalman filter. In the smoother algorithm, a step width constraint is used as a measurement update. We evaluate the stride length estimation through a straight walking experiment along a corridor. The root mean square errors compared with an optical tracking system are less than 3 cm. The performance of the proposed method is also verified with a rectangular path walking experiment.


Introduction
Human gait monitoring and analysis are important in clinical rehabilitation [1,2], functional mobility diagnostics [3,4], and sports training [5][6][7]. With some technological tools, the gait phase and human gait parameters can be determined through gait analysis. The Vicon optical tracking system [8] and the GAITRite system [9] are the gold standards of gait analysis since they provide accurate measurements. The optical system captures the positions of passive reflective markers attached to the human body using multiple cameras, while the GAITRite system measures gait parameters via an electronic force platform. However, these standard systems are bulky, expensive, and require a long time for setup and post-processing of the data. It may be difficult to deploy these evaluation systems in rural areas or remote locations.
Wearable sensors, such as inertial sensors or inertial measurement units (IMUs), are used as low-cost alternative instruments in gait analysis [10]. They are small and lightweight, have low power consumption, and are integrated into most smart wearable devices. They can be attached to various parts of the human body, including the foot, waist, chest, and head. The gait parameters are then estimated using the recorded movement signal. For example, the walking step length can be estimated with a waist-mounted IMU using a linear regression model [11]. This method requires a calibration process to construct the step length model. A common method for a foot-mounted IMU is the numerical integration of inertial data to estimate foot velocity and position. However, the drawback of a low-cost IMU is its sensor noise, which results in accumulated integration errors. The most widely used technique to reduce these errors is the zero-velocity update (ZVU) [12,13], which assumes that the foot velocity is zero when it contacts the ground. Once the stance phase is detected, a ZVU is used as a pseudo measurement in the Kalman filter to reset the errors. However, as the position and heading are still unobservable, the heading drift grows over time. Thus, the positioning accuracy decreases.
Some researchers use a dual foot-mounted IMU system to reduce the symmetrical heading errors [14,15]. There are two problems in the dual foot-mounted IMU system. The first is the initial heading and position of each IMU, which is usually solved using a calibration process. The other is the heading drift over time. Using mapping information, a heuristic drift elimination (HDE) algorithm has been proposed to update the heading with the dominant direction of the building [16,17]. This method cannot be applied to large spaces, where the building direction is not available. Based on the relative position of the two feet, some distance constraints have been proposed as measurements in the Kalman filter framework [18][19][20][21][22]. These methods update the stride heading and foot position if straight walking is detected, or use the maximum range of the stride length or the two-feet step length to constrain the divergence of the moving foot. However, the heading cannot be effectively corrected, especially in the turning phase.
In human gait analysis, a 2D LiDAR is also used to track the lower limb. The LiDAR does not require external markers and has a simple installation. It can be mounted on a moving platform such as a mobile robot [23,24] or a smart walker [25,26], or it can be fixed on the ground at the height of the human shin [27][28][29][30]. Using a LiDAR on a moving platform requires additional sensors as well as a control system; thus, the system is complicated and bulky. The system in which the LiDAR is fixed on the ground is simpler. It scans the human legs and estimates the leg center points during walking. The LiDAR-based human leg estimation is quite accurate and contains the walking direction with respect to the local environment. However, the effective human leg tracking range of a single 2D LiDAR is limited to a few meters. In [31], a fusion system of multiple 2D LiDARs has been proposed to extend the human leg tracking range. However, the LiDARs cannot be placed far apart, in order to guarantee the continuity of the leg scan data. Therefore, the tracking range is still limited. For example, for 20 m of straight walking, three LiDARs need to be used.
The aim of this paper is to combine multiple 2D LiDARs with a dual foot-mounted IMU system to overcome the disadvantages of each single system. The proposed system increases the human gait tracking range and reduces the heading drift of the inertial sensor system. Two 2D LiDARs are used in this paper. The first one is placed around the starting point. The other one is placed according to the walking route: at the ending point in straight walking or at the turning point in rectangular path walking. The LiDAR-based human leg center point estimation is then used as a measurement update of the inertial sensor system. The main contributions of this paper are as follows:
• Multiple 2D LiDARs are combined with the dual foot-mounted IMU system for heading and position correction;
• The initial heading and position of each IMU are automatically estimated using the first LiDAR without any calibration process;
• A calibration algorithm for two 2D LiDARs placed far apart is proposed, and the calibration parameters are then used to transform the human leg center points into the same coordinate system;
• In contrast to the HDE algorithm, which uses the building heading information, we propose a simple walking stride heading update algorithm applied when straight walking is detected. The foot position computed from the updated heading is then used as a measurement update of the Kalman filter;
• The estimated errors of the Kalman filter are compensated using a quadratic optimization-based smoother algorithm [32]. A step width constraint is also proposed in the smoother algorithm when straight walking is detected.

System Overview
The proposed system is configured as shown in Figure 1. Two foot-mounted IMU systems are synchronized, and each contains a microSD card to store the inertial data. Two 2D LiDARs are horizontally placed on the ground to scan the human legs. The data from the IMUs and LiDARs are then post-processed using Matlab. As we can see in Figure 1, there are four sensor coordinate systems: two IMUs and two LiDARs. To track the human gait, we define the global coordinate system and two foot body coordinate systems. The left/right foot body coordinate systems are set to coincide with the corresponding left/right IMU coordinate systems. Subscripts L and R denote the left and right side, respectively. The three axes of the global coordinate system are chosen as follows: the x and y axes coincide with those of the LiDAR 1 coordinate system, the z axis points upwards, and the origin is the projection of the LiDAR 1 origin onto the horizontal plane of the two IMUs. The height of the global coordinate system is shifted so that the initial z-positions of the two feet are zero. For a given vector, we sometimes use subscripts b (body) and w (global) to emphasize that the vector is expressed in a particular coordinate system.
The inertial sensor outputs are the acceleration y_a ∈ R^3 and the angular velocity y_g ∈ R^3, modeled as follows:

y_a = a_b + C_w^b g + v_a,
y_g = ω_b + v_g,

where a_b ∈ R^3 is the acceleration produced by forces other than the gravitational field; g = [0 0 9.8]^T is the gravitational vector in the global coordinate system; C_w^b ∈ R^{3×3} is the rotation matrix from the global coordinate system to the body coordinate system; ω_b ∈ R^3 is the body angular velocity; and v_a ∈ R^3 and v_g ∈ R^3 are the accelerometer and gyroscope measurement noises, respectively. We assume that v_a and v_g are white Gaussian noises, with covariances given by R_a = E{v_a v_a^T} and R_g = E{v_g v_g^T}.

The block diagram of our proposed system is shown in Figure 2. Firstly, the data from the sensors are preprocessed. The single LiDAR processing module estimates the human leg trajectories using the LiDAR scan data. The leg positions are expressed in each LiDAR coordinate system. The first calibration algorithm calibrates the two LiDAR coordinate systems to transform the leg positions from LiDAR 2 into the LiDAR 1 coordinate system. The second calibration algorithm uses the IMU and LiDAR 1 data to output the initial rotation angles and position of each IMU in the LiDAR 1 coordinate system. Then, the Kalman filter and the smoother algorithm are used to estimate the two feet trajectories. In the Kalman filter, the zero-velocity update (ZVU) and the LiDAR-based stance foot positions are used as measurement updates. A simple straight walking detection algorithm is also proposed to correct the heading estimation and the foot position. A smoother algorithm is then used to reduce the estimation error and smooth the trajectories of the two feet. In this smoother algorithm, the straight-walking-based heading and position update is replaced by the step width constraint measurement update.
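As a concrete illustration, the sensor model above can be simulated in a few lines of Python. This is a minimal sketch; the function name and noise interface are ours, not the paper's:

```python
import numpy as np

def imu_measurement(a_b, omega_b, C_bw, g=np.array([0.0, 0.0, 9.8]),
                    sigma_a=0.0, sigma_g=0.0, rng=None):
    """Simulate the accelerometer/gyroscope model
    y_a = a_b + C_bw @ g + v_a,  y_g = omega_b + v_g,
    where C_bw rotates vectors from the global frame into the body frame
    and v_a, v_g are white Gaussian noises."""
    if rng is None:
        rng = np.random.default_rng(0)
    y_a = a_b + C_bw @ g + sigma_a * rng.standard_normal(3)
    y_g = omega_b + sigma_g * rng.standard_normal(3)
    return y_a, y_g

# A level, stationary IMU (C_bw = I, a_b = 0, omega_b = 0) with zero noise
# should read exactly the gravity vector on the accelerometer.
y_a, y_g = imu_measurement(np.zeros(3), np.zeros(3), np.eye(3))
```

This sanity check (a stationary sensor reads +9.8 m/s^2 on the z axis) follows directly from the sign convention of g in the model.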

Data Preprocessing
The aim of this part is to estimate the human leg center point from the LiDAR data and to estimate the initial heading and position of each IMU in the global system. In this paper, the symbols |.| and ||.|| represent the absolute value and the Euclidean norm, respectively. We also denote the estimated value, the true value, and the error of a quantity x by x̂, x, and x_e, respectively, where x̂ = x + x_e.

Single LiDAR-Based Human Leg Center Point Estimation
The LiDAR scan data are used to estimate the human leg center point trajectories using the algorithm in [27,31]. The outputs of this module are the human leg positions in the stance phases, represented by diamond shapes in Figure 3. The subscript j = L, R denotes the left or right leg, and k_j = 0, 1, 2, ... denotes the sequence of stance phases of leg j. For example, k_L = 0 means the left foot is in the initial standing still period. Since these leg positions are expressed in different LiDAR coordinate systems, the pose between the two LiDARs is computed in the next section.

Calibration 1: Two LiDARs Calibration
The calibration parameters of the two LiDARs are the rotation angle and the translation vector. In our previous research on human gait tracking using multiple LiDARs, the working space is shared by the LiDARs; therefore, a cylinder is placed arbitrarily in the common scanning space to calibrate the system. In this research, the two LiDARs are placed far apart and there is no common scan data. Therefore, instead of an arbitrary choice, we place the cylinder at fixed points whose geometric relationship is already known.
The calibration configuration is shown in Figure 4. A cylinder with measured radius r̂_c is placed at four points A_1, B_1 (in the LiDAR 1 scanning range), A_2, and B_2 (in the LiDAR 2 scanning range). These four points are manually calibrated to identify the following parameters: the intersection O of A_1B_1 and A_2B_2, the angle α̂ between A_1B_1 and A_2B_2, and the corresponding distance d̂_{m_i} from O to each point, where m = A, B and i = 1, 2. We denote the measurement errors of r̂_c, α̂, and d̂_{m_i} by r_{c,e}, α_e, and d_e, respectively.
The coordinate systems of the two LiDARs are represented by O_1 − x_1y_1 and O_2 − x_2y_2. The calibration parameters are the translation vector T_12 from O_1 to O_2 and the rotation angle θ_12 from the LiDAR 2 coordinate system to the LiDAR 1 coordinate system. To account for the uncertainties of the LiDAR data measurement and the measured configuration parameters, we propose an iterative algorithm to compensate the calibration parameter errors. The state vector x_c consists of the calibration parameters and the estimated cylinder center points at A_1 and B_1. Let s^{L_i}_{m_i,k} represent a cylinder scan data sample of LiDAR i at position m_i. At each position, the total number of scan data points is N_{m_i}.

Initialization
We first initialize the cylinder's center points using a least-squares circle fitting algorithm. The initial cylinder center points ĉ^{L_i}_{m_i,0} are then used to estimate the initial calibration parameters as follows:
• The unit vector v̂^{L_1} along A_1B_1 is estimated from the initial center points at A_1 and B_1;
• The initial rotation angle θ̂_{12,0} is estimated using the measured angle α̂;
• The initial translation vector T̂_{12,0} is estimated using the measured distances d̂_{m_i}.
In the above equations, R(ϕ) is the rotation matrix of an angle ϕ with respect to the z-axis:

R(ϕ) = [cos ϕ  −sin ϕ; sin ϕ  cos ϕ].
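The least-squares circle fitting step can be sketched as follows. This is an algebraic (Kåsa-style) fit, a common choice for this task; the paper does not specify which circle-fitting variant it uses, so the method and names here are our assumptions:

```python
import numpy as np

def fit_circle(points):
    """Algebraic least-squares circle fit.
    Writing (x - cx)^2 + (y - cy)^2 = r^2 as the linear system
    2*cx*x + 2*cy*y + (r^2 - cx^2 - cy^2) = x^2 + y^2
    lets us solve for (cx, cy) and r by linear least squares.
    points: (N, 2) array of 2D LiDAR samples on the cylinder surface.
    Returns (center, radius)."""
    x, y = points[:, 0], points[:, 1]
    A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
    b = x**2 + y**2
    (cx, cy, c), *_ = np.linalg.lstsq(A, b, rcond=None)
    r = np.sqrt(c + cx**2 + cy**2)
    return np.array([cx, cy]), r

# A LiDAR typically sees only an arc of the cylinder; the fit still works.
theta = np.linspace(0.0, np.pi, 50)
pts = np.array([1.0, 2.0]) + 0.15 * np.column_stack([np.cos(theta), np.sin(theta)])
center, radius = fit_circle(pts)
```

For noise-free samples on a true circle, the fit recovers the center and radius exactly; with real scan noise it gives the least-squares estimate used to seed the iteration.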

Iterative Algorithm
The error-state vector consists of the errors of the calibration parameters and of the cylinder center points. The measurement equations are then derived from the LiDAR scan data. For LiDAR 1, each scan sample on the cylinder surface satisfies the circle fitting equation ||s^{L_1}_{m_1,k} − c^{L_1}_{m_1}|| = r_c. By inserting the measurement errors, we can derive the measurement equation for each scan data point of LiDAR 1, where v_{s_{m_1},k} is the measurement error of sample s^{L_1}_{m_1,k}.
For LiDAR 2, we fit the circle after transforming the scan data into the LiDAR 1 coordinate system. The small error of the unit vector v̂^{L_1} can be represented by a small angle β_e, and the corresponding rotation matrix is approximated to first order. By inserting the measurement errors, we obtain the measurement equation at each point m_2, m = A, B. By combining (5) and (7), the measurement equation for all four cylinder positions is obtained, where N_a = Σ_{m=A,B; i=1,2} N_{m_i} is the total number of sample scan data points at the four positions. We assume that the measurement noises are zero-mean white Gaussian noises; the covariance of the LiDAR measurements can be found in [31]. We can then compute the covariance matrix of the measurement noises of (8) and estimate the error-state vector x_{c,e}. The state vector (3) is then updated using x_{c,e}. Using the updated cylinder center points at A_1 and B_1, the unit vector v̂^{L_1} is recalculated at each iteration. This process is iterated until a stop condition is satisfied. Finally, using the calibration parameters, we can transform the stance leg positions from the LiDAR 2 coordinate system into the LiDAR 1 coordinate system.

Calibration 2: Initial Heading and Position of Each IMU Estimation
The initial rotation angle and position of each IMU in the LiDAR 1 coordinate system are automatically estimated using the first walking stride data from the IMUs and LiDAR 1. To estimate the initial rotation angle, we use the TRIAD algorithm [33]. This algorithm requires two pairs of reference and corresponding observation unit vectors. The two reference vectors in the LiDAR 1 coordinate system are the gravitational vector g and the first walking stride direction estimated from LiDAR 1. The two corresponding observation vectors are the average accelerometer output ȳ_{a,j,0} during one second of standing still and the first walking stride direction estimated from each IMU. To estimate the IMU-based first walking stride direction, a simple inertial navigation algorithm with ZVU is applied to the first walking stride data of each IMU. From these two vector pairs, the initial rotation matrix C^b_{w,0} can be estimated, and the initial rotation angles are then extracted from C^b_{w,0}. The initial positions of each IMU (r_{j,0} = [x_{j,0} y_{j,0} 0]^T, j = L, R) are estimated using the LiDAR-based human leg center point estimation (Figure 6). We assume that the human leg is vertically straight during the initial standing still phase, as well as during the mid-stance phase of walking. Therefore, the ankle joint K_j can be approximated as the projection of the leg center point onto the x−y plane. By assuming that the coordinate r^{I_j}_{K_j} of K_j in the IMU j coordinate system is pre-calibrated, the initial position of each IMU in the LiDAR 1 coordinate system can be computed.
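The TRIAD step above can be sketched as follows. This is the textbook TRIAD construction under our own naming; for a clean illustration we use an exact z-axis rotation, whereas the paper pairs noisy gravity and stride-direction vectors:

```python
import numpy as np

def triad(r1, r2, b1, b2):
    """TRIAD attitude determination from two vector pairs.
    r1, r2: non-parallel reference unit vectors (e.g., gravity and the
            first stride direction in the LiDAR 1 frame);
    b1, b2: the corresponding observed unit vectors (e.g., averaged
            accelerometer output and IMU-estimated stride direction).
    Returns the rotation matrix C with C @ r ~= b."""
    def frame(v1, v2):
        t1 = v1 / np.linalg.norm(v1)
        t2 = np.cross(v1, v2)
        t2 = t2 / np.linalg.norm(t2)
        t3 = np.cross(t1, t2)
        return np.column_stack([t1, t2, t3])
    return frame(b1, b2) @ frame(r1, r2).T

# Recover a known 30-degree heading rotation from the two vector pairs.
ang = np.deg2rad(30.0)
c, s = np.cos(ang), np.sin(ang)
R_true = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
r1 = np.array([0.0, 0.0, 1.0])   # gravity direction (reference)
r2 = np.array([1.0, 0.0, 0.0])   # stride direction (reference)
C = triad(r1, r2, R_true @ r1, R_true @ r2)
```

With noise-free, non-parallel vector pairs TRIAD recovers the rotation exactly; with noisy observations it gives the usual approximate solution.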

Proposed Algorithm
The proposed human gait tracking algorithm is built on an inertial navigation system (INS).

Basic Inertial Navigation System Mechanization
The basic equations [34] for inertial navigation are given as follows:

q̇ = (1/2) Ω(ω_b) q,  v̇ = (C_w^b)^T a_b,  ṙ = v,

where q ∈ R^4 is the quaternion representing the attitude of the foot, and v ∈ R^3 and r ∈ R^3 are the velocity and position of the foot in the global coordinate system, respectively. For ω = [ω_1 ω_2 ω_3]^T, the symbol Ω(ω) is defined by (scalar-first convention):

Ω(ω) = [ 0    −ω_1  −ω_2  −ω_3
         ω_1   0     ω_3  −ω_2
         ω_2  −ω_3   0     ω_1
         ω_3   ω_2  −ω_1   0 ].

The inertial navigation algorithm estimates q̂, v̂, and r̂ by integrating (16). Due to the sensor noise of the low-cost inertial sensor, the errors increase quickly if there is no reference measurement. To overcome this problem, the ZVU method is used to reset the accumulated errors. The method assumes that the velocity of the foot is zero when it touches the ground. Thus, once the stance phase is detected, the ZVU can be fused with the INS algorithm through an error-state Kalman filter.
To detect the stance phase during walking, a simple zero-velocity detection algorithm is used [35,36]. A discrete time k is determined to belong to a zero-velocity interval if the gyroscope norm and the deviation of the accelerometer magnitude from gravity stay below the threshold values B_g and B_a over windows of N_g and N_a samples, respectively, where N_g and N_a are even integers. The errors in the numerically integrated values of (16) are estimated using a Kalman filter. Let q_e ∈ R^4, v_e ∈ R^3, and r_e ∈ R^3 be the estimation errors in q̂, v̂, and r̂, defined as follows:

q_e = q̂ ⊗ q*,  v_e = v̂ − v,  r_e = r̂ − r,

where ⊗ is the quaternion multiplication and q* denotes the quaternion conjugate of q. It is assumed that the quaternion error is small, so that q_e can be approximated by its three-dimensional vector part q̃_e. With this assumption, the attitude error is represented by the three-dimensional vector q̃_e. We define the error-state vector of each foot as x_{e,j} = [q̃_e^T r_e^T v_e^T]^T ∈ R^9, whose state transition equation, with transition matrix A_j and noise w_j, follows from linearizing (16).
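The zero-velocity detection conditions can be sketched as follows. The window handling and threshold values here are illustrative assumptions; the paper's thresholds B_g, B_a, N_g, N_a are design parameters from [35,36]:

```python
import numpy as np

def zero_velocity_intervals(y_g, y_a, B_g=0.6, B_a=1.0, N=6, g=9.8):
    """Flag sample k as zero-velocity when, over a centred window of N
    samples, the gyroscope norm stays below B_g (rad/s) and the
    accelerometer magnitude stays within B_a (m/s^2) of gravity.
    y_g, y_a: (K, 3) arrays of gyroscope and accelerometer samples.
    Returns a boolean array of length K."""
    ok = (np.linalg.norm(y_g, axis=1) < B_g) & \
         (np.abs(np.linalg.norm(y_a, axis=1) - g) < B_a)
    half = N // 2
    return np.array([ok[max(0, k - half):k + half + 1].all()
                     for k in range(len(ok))])

# Stationary samples with a simulated swing (large angular rate) in the middle.
y_a = np.tile([0.0, 0.0, 9.8], (20, 1))
y_g = np.zeros((20, 3))
y_g[10:15] = [3.0, 0.0, 0.0]
zv = zero_velocity_intervals(y_g, y_a)
```

Samples inside the high-rate swing interval are rejected, while samples whose whole window is quiet are flagged as stance; the flagged intervals then trigger the ZVU measurement in the filter.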

The Proposed Dual Foot-Mounted IMU Algorithm
The dual foot-mounted IMU error-state vector is defined by combining the left and right foot error states: X_e = [x_{e,L}^T x_{e,R}^T]^T ∈ R^18. The dynamic equation is expressed as follows:

Ẋ_e = F X_e + W,  where F = [A_L  0_9; 0_9  A_R] and W = [w_L^T w_R^T]^T.
Let τ be the sampling period of the inertial sensor output. The system equation is discretized as follows: X_{e,k+1} = Φ_k X_{e,k} + W_k, where Φ_k = e^{Fτ} ≈ I_18 + Fτ + 0.5(Fτ)^2 and Q_k = E{W_k W_k^T}. In the proposed system, the model and observation equations are linear except for the quaternion model, which is approximately linearized; any divergence would be a direct consequence of these modeling errors. Although the initial heading of the INS system is unknown and has large uncertainty, the initial heading estimation of the previous section ensures that this does not affect our proposed system. Three measurement updates are used in this Kalman filter: the zero-velocity update, the LiDAR-based stance foot position update, and the straight-walking-based heading and foot position update.
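The second-order discretization of the transition matrix can be written directly; this is a minimal sketch with our own function name:

```python
import numpy as np

def transition_matrix(F, tau):
    """Second-order approximation of the matrix exponential,
    Phi = exp(F * tau) ~= I + F*tau + 0.5*(F*tau)^2,
    used to discretize the continuous error dynamics X_dot = F X + W."""
    Ft = F * tau
    return np.eye(F.shape[0]) + Ft + 0.5 * Ft @ Ft

# For a nilpotent F (here F @ F = 0), the second-order expansion is exact:
# exp(F*tau) for F = [[0, 1], [0, 0]] is [[1, tau], [0, 1]].
F = np.array([[0.0, 1.0], [0.0, 0.0]])
Phi = transition_matrix(F, 0.01)
```

At a 100 Hz sampling rate (τ = 0.01 s), the truncated series is an accurate and cheap substitute for the full matrix exponential.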

Zero-Velocity Update
If the left/right foot is in the stance phase, the corresponding measurement equations are given by the velocity error observation with

H_{v,L} = [0_3  0_3  I_3  0_3  0_3  0_3],  H_{v,R} = [0_3  0_3  0_3  0_3  0_3  I_3],

where n_{v,L} and n_{v,R} are the white Gaussian measurement noises and R_v = E{n_{v,j} n_{v,j}^T}.

Lidar-Based Stance Foot Position Update
This update is used in the stance phase of each foot if the LiDAR stance leg position estimation p_{i,j} (i = 1, 2 and j = L, R) is available. Here, the LiDAR-based stance leg positions are already transformed into the LiDAR 1 coordinate system, so we omit the superscript L_1 in p_{i,j}.
As mentioned in Section 3.3, we assume that the human leg is vertically straight during the mid-stance phase; therefore, we can transform the LiDAR-based stance leg center point position into the x−y coordinates of the stance foot position f_{i,j}, where q̂_j is the currently estimated quaternion of IMU j and r^{I_j}_{K_j} is the pre-calibrated coordinate of the ankle joint in the IMU j coordinate system, as described in Section 3.3. We can then update the x−y coordinates of the stance foot using the following measurement equations:

z_{lidar,L} = f_{i,L} − [r̂_{L,x} r̂_{L,y}]^T = H_{r,L} X_{e,k} + n_{lidar,L},
z_{lidar,R} = f_{i,R} − [r̂_{R,x} r̂_{R,y}]^T = H_{r,R} X_{e,k} + n_{lidar,R},

where

H_{r,L} = [0_{2×3}  (1 0 0; 0 1 0)  0_{2×3}  0_{2×3}  0_{2×3}  0_{2×3}],
H_{r,R} = [0_{2×3}  0_{2×3}  0_{2×3}  0_{2×3}  (1 0 0; 0 1 0)  0_{2×3}],

and n_{lidar,L} and n_{lidar,R} are the white Gaussian measurement noises with covariance R_{lidar}.

Heading and Position Update during Straight Walking
Instead of using a pre-defined building heading, we update the current walking stride heading using the previous stride headings when straight walking is detected. The new position of the current stance foot is then calculated based on this heading and used as a measurement update in the Kalman filter. Since we use the headings of previous strides, the heading errors still accumulate until the measurement data of LiDAR 2 become available. These errors are compensated later in the smoother algorithm. Figure 7 shows an example of the proposed algorithm for the current stance left foot. Let r̂_{L,s} denote the current estimated left foot position, where s is the discrete index of the current stance phase. The current walking stride on the right side and the adjacent walking stride on the left side are used. Their stance foot positions are represented by (r̂_{R,s}, r̂_{R,s−1}) and (r̂_{L,s−1}, r̂_{L,s−2}), respectively. The straight walking detection and heading correction procedures are as follows:
• Condition 1: The current step length Ŝ_LR is calculated, and only walking steps with Ŝ_LR > δ_s are considered. The threshold is empirically chosen to check whether a normal walking step has indeed been taken.
• Condition 2: If the current walking step is normal, we compute the angles between the current stride vector Ŝ_{L,c} = r̂_{L,s} − r̂_{L,s−1} and both the previous left stride vector S_L = r̂_{L,s−1} − r̂_{L,s−2} and the right stride vector S_R = r̂_{R,s} − r̂_{R,s−1}. If both angles are smaller than a threshold δ_α, straight walking is detected. We use the same threshold values as in [22], i.e., δ_s = 0.5 m.
Using the new current left/right foot positions, the measurement equations are given by:

z_{p,L} = r̂_{L,s,new} − [r̂_{L,x} r̂_{L,y}]^T = H_{r,L} X_{e,k} + n_{p,L},

and similarly for the right foot, where H_{r,L} and H_{r,R} are given in (26), and n_{p,L} and n_{p,R} are the white Gaussian measurement noises with covariance R_p. Note that we only update the x−y positions in this part.
In the Kalman filter, X_{e,k} is estimated and used to compensate the errors in q̂, r̂, and v̂ using (18). After updating, the errors are set to zero.

Smoother Algorithm
Since we use the estimated previous headings to update the stride heading in straight walking, the heading errors still accumulate. To achieve the proposed correction performance, the errors of the Kalman filter estimated values (denoted by q̂_{KF,k}, r̂_{KF,k}, and v̂_{KF,k}) are compensated using the smoother algorithm [32]. For that, we define the estimation error-state vector as in (28). The system equation for the smoother estimated values is given by (29), where Φ_k is defined in (23) and the attitude terms are

q_{SM,L,k} = [0_{3×1} I_3](f*_{1,k} ⊗ q̂_{KF,L,k+1}),  q_{SM,R,k} = [0_{3×1} I_3](f*_{4,k} ⊗ q̂_{KF,R,k+1}).

The function f_k in (31) is the numerical integration of the quaternion, position, and velocity from kτ to (k+1)τ. Using (28), (30), and (31), we can expand the left side of (29) as the errors of the numerical integration of the Kalman filter estimated values. Thus, (29) represents how the estimation errors evolve after the integration of (31).
The smoothing problem can be formulated as a quadratic optimization problem using the method in [32]. Let the optimization variable X̄ collect the error states over all time steps. We can estimate X̄ by minimizing the cost function (33), where P_init is composed of the initial error covariances of the left and right foot. The initial attitude error covariance P_{q_j,0} is given by the algorithm in [33]. The initial position and velocity error covariances P_{r_j,0} and P_{v_j,0} can be considered design parameters.
In this smoother algorithm, we use three measurements: the zero-velocity update, the LiDAR-based stance foot positions update, and the step width constraint update. The first two measurements are explained in the previous section of the Kalman filter. The third measurement is proposed using an assumption that the width of walking step is almost constant during straight walking.
As an example, during a current stance phase of the left foot, the step width is estimated as follows:
• The walking step vector is computed: S_LR = r̂_{KF,L,k} − r̂_{KF,R,k};
• The unit vector n_L orthogonal to the current walking direction vector Ŝ_{L,c,new} and pointing toward the opposite foot side is computed, so that n_L^T S_LR > 0;
• The step width d is then calculated as d = n_L^T S_LR = n_L^T (r̂_{KF,L,k} + r̃_{L,k} − r̂_{KF,R,k} − r̃_{R,k}).
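The step width computation above can be sketched as follows. The function name and the explicit 90-degree rotation used to build the orthogonal vector are our choices; the constraint itself follows the construction in the text:

```python
import numpy as np

def step_width(r_L, r_R, walk_dir):
    """Step width during straight walking (x-y plane).
    S_LR = r_L - r_R is the walking step vector; n is the unit vector
    orthogonal to the walking direction, flipped so that n @ S_LR > 0
    (i.e., it points toward the opposite foot side). The width is
    d = n @ S_LR. Returns (d, n)."""
    S_LR = r_L - r_R
    u = walk_dir / np.linalg.norm(walk_dir)
    n = np.array([-u[1], u[0]])      # 90-degree rotation of u in the plane
    if n @ S_LR < 0:
        n = -n                       # point toward the opposite foot
    return n @ S_LR, n

# Feet straddling the x axis while walking along +x: the lateral offset
# between the feet is the step width.
d, n = step_width(np.array([2.0, 0.1]), np.array([1.8, -0.1]),
                  np.array([1.0, 0.0]))
```

During straight walking this width is treated as approximately constant, which is what makes it usable as a pseudo-measurement in the smoother.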
The step width measurement equation for the left stance foot is given by (34), where z_{d,L,k} = d − n_L^T (r̂_{KF,L,k} − r̂_{KF,R,k}). Similarly, the step width measurement equation for the right stance foot is given by (35), where z_{d,R,k} = d − n_R^T (r̂_{KF,R,k} − r̂_{KF,L,k}). In (34) and (35), n_{d,L} and n_{d,R} are the white Gaussian measurement noises with covariance R_d.
Consequently, the "constraint terms" in (33) are constructed as follows, where Z_{v,i}, Z_{r,i}, and Z_{d,i} denote the sets of all discrete time indices belonging to zero-velocity intervals, LiDAR-based stance foot position measurement intervals, and step width measurement intervals, respectively. The cost function (33) can be rewritten as a quadratic function of X̄ as in (37), where M_3 is irrelevant to the optimization, and M_1 and M_2 can be easily computed using (33) and (36); the details can be found in [32]. This is the main advantage of the quadratic-optimization-based smoother algorithm: the measurement constraints can be easily included in the optimization problem. The minimizing solution X̄* of (37) is computed by solving a linear equation. Once X̄* is computed, the smoother estimated values q̂_{SM,L,k}, r̂_{SM,L,k}, v̂_{SM,L,k}, q̂_{SM,R,k}, r̂_{SM,R,k}, and v̂_{SM,R,k} can be found using (28).
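The final solve can be sketched in one line, assuming the standard quadratic form J(X̄) = X̄^T M_1 X̄ + M_2^T X̄ + M_3 with M_1 symmetric positive definite (the exact block structure of M_1 and M_2 follows [32] and is not reproduced here):

```python
import numpy as np

def minimize_quadratic(M1, M2):
    """Minimizer of J(X) = X^T M1 X + M2^T X + M3, with M1 symmetric
    positive definite and M3 a constant. Setting the gradient to zero,
    2 M1 X* + M2 = 0, gives the linear system solved below."""
    return np.linalg.solve(2.0 * M1, -M2)

# Small worked example: J = 2x^2 + y^2 - 4x + 2y has its minimum where
# dJ/dx = 4x - 4 = 0 and dJ/dy = 2y + 2 = 0, i.e., at (1, -1).
M1 = np.array([[2.0, 0.0], [0.0, 1.0]])
M2 = np.array([-4.0, 2.0])
x_star = minimize_quadratic(M1, M2)
```

In the smoother, M_1 is large but sparse and banded, so the linear solve remains tractable even over a full walking trial.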

Hardware Description
An Xsens MTi-1 IMU module with a microSD card is mounted on each foot. The sampling rate of the two IMUs is 100 Hz. The two LiDARs (RPLIDAR A3) have a scan rate of 10 Hz. The total cost of the system is around USD 1500 (USD 150 for an IMU and USD 600 for a LiDAR). The sensor specifications are shown in Table 1. An OptiTrack optical motion capture system, composed of six Flex 13 cameras with a resolution of 1280 × 1024 at 120 Hz, is used to provide the ground truth for the walking step length estimation.
The parameters used in this paper are given in Table 2.

Stride Length Estimation Evaluation
To evaluate the stride length estimation, we perform a straight walking experiment with the optical camera system. Two healthy male volunteers, who have no problems with gait, balance, or coordination, are recruited. Their information is given in Table 3. They are asked to walk along the corridor with travel distances of 20 m and 50 m. Each person repeats the walk 10 times in the 20 m experiment and 5 times in the 50 m experiment.

Table 2. Parameters used in this paper.

Parameters | Related Equations
P_{r_j,0} = 0.000001 I_3, P_{v_j,0} = 0.000001 I_3 | (33)
R_d = 0.01 | (9) and (36)
R_lidar = 0.01 I_2 | (26)

The configuration of the experiment is given in Figure 8. The optical system cannot cover the whole walking path; thus, it is placed so that a part of the walking path outside the range of the two LiDARs can be captured. This part of the path has the fewest measurement updates; therefore, the walking stride length errors may be largest there. With this setup, our OptiTrack system can capture a walking range of 8-13 m measured from the LiDAR 2 origin. By comparing the walking stride lengths captured in this path, the performance of our proposed system is evaluated. The LiDAR 2 and OptiTrack system positions are fixed during both experiments. The LiDAR 1 position is changed with the walking distance.
The subject may have difficulties in following the straight path during walking. We model a simple case of abnormal gait by removing the availability of straight walking for 5 s (20 m walking) and 10 s (50 m walking). The straight-walking-based foot position update is therefore not applied in this interval. The step width constraint is still used, with larger uncertainty. The modeled measurement update is then fed to the proposed method. An example of the measurement availability of the two feet is given in Figure 9. The top plot is the normal availability, while the bottom one is the modeled availability of the measurement update. The non-zero values represent the presence of measurements. We can see the periodicity of the zero-velocity measurements during walking. The LiDAR 1-based stance foot position measurements are available for the first three and four stance phases of the left and right foot, respectively. LiDAR 2 provides the stance foot position update for the last four and three stance phases of the left and right foot, respectively. In the stance phases where the LiDAR data are not available, the straight-walking-based heading and position update (in the Kalman filter) or the step width constraint update (in the smoother algorithm) is applied. As shown in the bottom plot, the straight-walking-based measurement update is not available during the interval from 15 to 25 s. The two estimated feet trajectories of all 20 m and 50 m walking experiments using the normal and modeled updates are given in Figures 10 and 11. Figure 12 shows an example of a single 50 m walking estimation result for clearer visualization. The LiDAR-based stance foot position estimation is denoted by a circle. We can see that the Kalman filter, with the initial heading estimated as in Section 3.3 and the foot position update during straight walking, reduces the heading errors significantly.
However, heading errors can still be observed, as the estimated trajectories do not match the foot position update from LiDAR 2. This error is compensated using the smoother algorithm, where the step width constraint is used instead of the heading and foot position update. If the heading and foot position update were used, the Kalman-filter-based heading estimation would be maintained in the smoother algorithm, and the heading errors would not be compensated. Based on the heading estimation from the Kalman filter, the advantage of the dual foot-mounted IMU is expressed in the step width constraint. In all estimated walking trajectory results, we can see a small bulge in the middle of the walk; the longer the walking distance, the bigger the bulge. This is because there is no actual heading observation outside the LiDARs' range. However, with the measurement update from LiDAR 2, the heading estimation errors can be corrected. In the 20 m and 50 m walking estimations, the results using the normal and modeled updates are almost identical, which shows the robustness of the proposed method.

The walking stride length estimation is then evaluated. For each walk, the OptiTrack optical system can capture three to four walking strides for each foot. The total counts of captured walking strides are 98 and 101 for the left and right foot, respectively. We compare the estimated walking stride length from the proposed smoother algorithm (SL_IMU) with the optical estimation (SL_optical) by computing the errors e = SL_optical − SL_IMU. Figure 13 shows the histogram of the stride length estimation errors. Table 4 shows the mean and variance of the stride length estimation of each subject with the normal measurement update. The estimation errors compared with the optical system estimation are also given. For all subjects, the mean and root mean square errors (RMSE) of the stride length estimation are less than 3 cm, while the maximum errors are less than 8 cm. The performances of the stride length estimation with the normal measurement update and the modeled measurement update are almost identical.

Evaluation of Rectangular Path Walking Estimation
To verify the effectiveness of the proposed method in heading correction, a rectangular path walking experiment is carried out. In this experiment, a person starts from the starting point, walks counter-clockwise twice along a rectangular path, and stops at the ending point, as in Figure 14. The total walking distance is around 124 m, with six turning phases. The LiDARs are placed so that they can capture the human leg data during the turning phases. Figure 15 shows the estimated left and right foot trajectories of the proposed algorithm. We plot the corridor wall scan data as a reference for the walking direction, as well as the result of calibrating the two LiDARs. The shaded circular area represents the human leg tracking range of each LiDAR. As we can see, the proposed method can correct the heading during the turning phases using the LiDAR measurements, and there is no intersection between the two feet trajectories. For comparison, Figure 16 shows the result of the dual foot-mounted IMU algorithm proposed in [22]. That algorithm uses only the dual foot-mounted IMUs with a foot position update during straight walking, based on the maximum range of the one-side foot stride length and the both-sides feet step length. Since there is no measurement of the heading, the errors can be seen clearly after the second round.

Conclusions
This paper proposes a human gait tracking system using a dual foot-mounted IMU and multiple 2D LiDARs. Two LiDARs are placed far apart: LiDAR 1 is at the starting point and LiDAR 2 is at the destination in straight walking or at the turning point in rectangular path walking. A calibration algorithm is proposed to estimate the rotation angle and translation vector between the two LiDARs. The calibration uses a cylinder placed at four positions whose geometrical configuration is known. An iterative algorithm is then used to estimate the calibration parameters while accounting for LiDAR measurement errors and configuration errors. The LiDARs can then be used as anchors to correct the inertial sensor system estimation. The LiDAR-based stance leg position estimates are transformed to stance foot positions and combined as the measurement update of the Kalman filter and the smoother algorithm.
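The core of the two-LiDAR calibration, recovering a rotation angle and translation vector from cylinder landmarks seen in both frames, can be sketched with the closed-form planar least-squares (Kabsch/Procrustes) solution. This is a simplified, non-iterative stand-in for the paper's iterative, error-aware algorithm; the function name and inputs are illustrative.

```python
import numpy as np

def calibrate_lidar_pair(p1, p2):
    """Closed-form least-squares fit of a 2-D rigid transform p1 ~ R @ p2 + t.

    p1, p2: (N, 2) arrays of the same cylinder landmarks observed in the
    LiDAR 1 and LiDAR 2 frames (N >= 2). Returns the rotation angle (rad),
    rotation matrix, and translation vector mapping LiDAR 2 coordinates
    into the LiDAR 1 (global) frame. Noise-free closed form; the paper
    refines this iteratively with measurement and configuration errors.
    """
    p1, p2 = np.asarray(p1, float), np.asarray(p2, float)
    c1, c2 = p1.mean(axis=0), p2.mean(axis=0)   # landmark centroids
    q1, q2 = p1 - c1, p2 - c2                   # centered landmarks
    # Planar Kabsch: angle from summed cross and dot products of pairs
    num = np.sum(q2[:, 0]*q1[:, 1] - q2[:, 1]*q1[:, 0])
    den = np.sum(q2[:, 0]*q1[:, 0] + q2[:, 1]*q1[:, 1])
    theta = np.arctan2(num, den)
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    t = c1 - R @ c2
    return theta, R, t
```

With four cylinder positions, as in the paper, the system is overdetermined, so the least-squares fit averages out part of the per-landmark measurement noise.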
Since the LiDAR can provide environment information, the LiDAR 1 coordinate system is chosen as the global coordinate system. The initial heading and position of each IMU can be estimated using the IMU data and the leg scan data from LiDAR 1. For further improvement, we update the heading and foot position when straight walking is detected, and this is used as a measurement update in the Kalman filter. In the smoother algorithm, the straight-walking-based foot position update is replaced by a relative step width constraint update, under the assumption that the step width is almost constant in normal walking. We verify the stride length estimation of the proposed system with straight walks of 20 m and 50 m. We also model abnormal walking by assuming that the subject cannot follow the straight path during a time interval, so the straight-walking measurement update is not available during this interval. The root mean square errors of the walking stride length estimation for both the normal and the modeled update are less than 3 cm compared with an optical tracking system. The rectangular path walking experimental results show that the LiDAR measurements can correct the heading effectively. The proposed system has the potential for use in many fields, such as human gait monitoring and human gait navigation systems.