Article

Pose Prediction of Autonomous Full Tracked Vehicle Based on 3D Sensor

Tao Ni, Wenhang Li, Hongyan Zhang, Haojie Yang and Zhifei Kong
1 School of Mechanical and Aerospace Engineering, Jilin University, Changchun 130022, China
2 School of Mechanical Engineering, Yanshan University, Qinhuangdao 066004, China
* Author to whom correspondence should be addressed.
Sensors 2019, 19(23), 5120; https://doi.org/10.3390/s19235120
Submission received: 19 October 2019 / Revised: 18 November 2019 / Accepted: 19 November 2019 / Published: 22 November 2019
(This article belongs to the Section Intelligent Sensors)

Abstract

Autonomous vehicles can obtain real-time road information using 3D sensors and use it to avoid obstacles through real-time path planning, improving their safety and stability. However, most research on driverless vehicles has been carried out on even urban roads, with little consideration of uneven terrain. For an autonomous full tracked vehicle (FTV), uneven terrain has a great impact on stability and safety. In this paper, we propose a method to predict the pose of the FTV based on accurate road elevation information obtained by 3D sensors. With a prediction of the pose of the FTV traveling on uneven terrain, we can not only control the active suspension system but also change the driving trajectory to improve safety and stability. First, 3D laser scanners were used to acquire real-time point cloud data of the terrain, from which the elevation information was extracted; inertial measurement units (IMUs) and GPS provided accurate attitude angles and position information. Then, the dynamic model of the FTV was established to calculate the vehicle's pose. Finally, a Kalman filter was used to improve the accuracy of the predicted pose. Compared with the traditional methods used for driverless vehicles, the proposed approach is more suitable for an autonomous FTV. Real-world experimental results demonstrate the accuracy and effectiveness of our approach.

1. Introduction

Thanks to the efforts of many researchers, driverless technology has developed rapidly in the 21st century. Several research teams developed advanced autonomous vehicles to traverse complex terrain in the 2004 and 2005 DARPA grand challenges [1], and then urban roads in the 2007 DARPA urban challenge (DUC) [2]. Research related to self-driving has continued at a fast pace not only in academia but also in industry: driverless taxis from Google and driverless trucks from TuSimple have entered commercial operation. Localization and perception are two significant issues in driverless technology. Accurate localization ensures that autonomous vehicles travel on the road without colliding with surrounding objects. Multi-sensor fusion is a stable and effective method for locating autonomous vehicles on the road [3,4,5,6], achieving centimeter-level localization accuracy by fusing data from GNSS, LiDAR, and an inertial measurement unit (IMU). Perception is mainly accomplished with cameras mounted on the vehicle: onboard computers run algorithms to detect, classify, and identify the information captured by the camera, and deep neural networks are an effective method for recognizing this information [7,8,9]. Although a great deal of research has been devoted to driverless cars, most of it has been carried out on even urban roads, rarely considering uneven terrain. However, uneven terrain can seriously affect the stability of an autonomous full tracked vehicle (FTV), and especially that of vehicle-mounted equipment such as the lidar and the inertial navigation system (INS). How to maintain the stability and safety of the vehicle on uneven terrain is therefore an important issue. One way to solve the problem is to apply an active suspension system that calculates the road inputs to the vehicle in advance from preview information [10,11,12,13,14,15,16].
In the last two decades, 3D laser scanners have been widely used in autonomous vehicles because they provide high-precision information about the surrounding environment [17,18,19,20]. Several methods combining 3D sensors with preview control have been proposed. Youn et al. [21] investigated the stochastic estimation of a look-ahead sensor scheme using optimal preview control for the active suspension system of a full tracked vehicle (FTV); the estimated road disturbance input at the front wheels was utilized as preview information for the control of the following wheels. Göhrle et al. [22] built a model predictive controller incorporating the nonlinear constraints of the damper characteristic, where approximate linear constraints were obtained by predicting passive vehicle behavior over the preview horizon using a linearized model; simulation results showed improved ride comfort.
Accurate pose estimation can be used not only as an input to the active suspension system but also as a reference for path planning. Ma [23] presented a method for pose estimation of off-road vehicles moving over uneven terrain: with the point cloud data of the terrain, accurate pose estimates can be calculated and used for motion planning and stability analyses. Jordan [24] described a pose estimation method for four-wheeled vehicles that utilized the fixed resolution of digital elevation maps to generate a detailed vehicle model; the results showed that the method was fast enough for real-time operation. Jun [25] proposed a novel path-planning algorithm for a tracked mobile robot traversing uneven terrain, which could efficiently search for stability-suboptimal paths; experiments demonstrated that the algorithm was advantageous over its counterparts in several aspects of planning performance.
Many filtering techniques have been widely used in engineering practice. The Kalman filter is one such technique, mainly used to correct errors caused by model inaccuracies. Zhao [26] proposed an approach using a 3D sensor, an IMU, and GPS to obtain accurate point cloud data of the road; GPS/INS loosely coupled integrated navigation and a Kalman filter (KF) were used to obtain accurate attitude angles and position information, and the results demonstrated that the KF could effectively improve the performance of the loosely coupled INS/GPS integration. Cho [27] presented an autonomous guided vehicle (AGV) performing simultaneous localization and mapping (SLAM) based on a matching method and extended Kalman filter SLAM; the proposed method was more efficient than the typical methods used in the comparison.
In these studies of pose prediction, the vehicle was generally regarded as a rigid body, which does not reflect the compliance of the vehicle's wheels. Moreover, most of the research on preview control did not give a detailed description of adjusting the suspension system using terrain information, and there are few studies on autonomous FTVs. In this paper, we propose an approach to predict the pose of an autonomous FTV using GPS, an IMU, and 3D laser scanners. In Section 2, we introduce the configuration of the autonomous FTV and the overall flow of our approach. In Section 3, the dynamic model of the autonomous FTV is established; we then present the process of combining a Kalman filter with the dynamic model and the control method of the active suspension system. Finally, we demonstrate the effectiveness and accuracy of our approach with real-world experiments.

2. System Structure

In the driverless field, how to ensure the safety and stability of vehicles on uneven roads is a significant issue. Changing the control strategy by acquiring road information and calculating its impact on the vehicle in advance is an effective way to address it. Three-dimensional laser scanners, an IMU, and GPS are necessary to implement this method. Our vehicle was equipped with an IMU, two GPS receivers, two 3D laser scanners, and an active suspension system, as shown in Figure 1. With the 3D laser scanners, we could obtain real-time point cloud data around the vehicle for extracting elevation information of the road. The IMU and GPS were used to obtain accurate attitude angles and position information of the vehicle. Then, the contact height between the wheels and the ground could be obtained by integrating the 3D laser scanner, IMU, and GPS data. Ultimately, the vehicle's pose could be calculated by importing the wheels' height information into the dynamic model. However, the predicted pose was inaccurate due to sensor errors. To improve its precision, a Kalman filter was used to compensate for these errors. Figure 2 shows the main steps.

2.1. Mapping and Location

In the last two decades, sensor technologies that can capture information about the surrounding environment, such as depth cameras, lidar, and vision sensors, have developed rapidly. With real-time environmental information, autonomous vehicles can achieve safe autonomous navigation in complex environments. In this paper, we selected 3D laser scanners to obtain road information because of their advantages, such as high measurement precision and good stability in complex environments.
Two Velodyne lidars were chosen as the 3D sensors of our autonomous vehicle, as shown in Figure 1. Each sensor has a sweep angle of 360° and a horizontal angular resolution of 0.1°–0.4°, which ensures high-density point cloud data. The lidar supports several scanning frequencies; a scanning frequency of 20 Hz meets the need for extracting the height information of the vehicle's wheels. Furthermore, one lidar was installed at the bottom of the vehicle in our study, which required a larger measuring range; the Velodyne lidar could meet this need. Table 1 shows the main performance indexes of the lidar.
Localization is significant because it is a prerequisite for extracting the height information of the vehicle's wheels. Inertial measurement units measure the physical state of the vehicle; for example, we could calculate the position, velocity, and attitude angles of the vehicle from the outputs of the IMU. However, the IMU accumulates errors because it obtains the vehicle's location by mathematical integration. The Global Positioning System is widely used in the field of autonomous vehicles because it provides long-term stable localization in all weather conditions. Although GPS has higher positioning accuracy and better stability than other positioning methods, it ceases to be effective in some scenarios; for example, GPS signals are disturbed by high buildings in cities and blocked in tunnels, leaving the vehicle unable to locate itself. Many studies have therefore integrated IMU and GPS data for positioning. Our GPS/INS system was based on differential positioning and real-time kinematic (RTK) technology, ensuring centimeter-level positioning accuracy. Table 2 shows the performance indexes of the system.

3. Method

3.1. Vehicle Dynamics

Our autonomous FTV adopted a multi-axle steering method to improve its flexibility, giving it more control strategies for avoiding obstacles on the terrain. The front and rear axles were steering axles used to change the vehicle's direction, and the middle shaft was the driven shaft. Figure 3 shows the simplified model of the vehicle.
The steering angle of each wheel could be calculated as follows.

When turning right ($\theta > 0$):

$$\begin{cases} \theta_1 = \theta = \tan^{-1}\dfrac{L_I}{N}, & \theta_2 = \tan^{-1}\dfrac{L_I}{B+N} \\ \theta_3 = 0, & \theta_4 = 0 \\ \theta_5 = \tan^{-1}\dfrac{L_{II}}{N}, & \theta_6 = \tan^{-1}\dfrac{L_{II}}{B+N} \end{cases} \tag{1}$$

When turning left ($\theta < 0$):

$$\begin{cases} \theta_1 = \tan^{-1}\dfrac{L_I}{N+B}, & \theta_2 = \theta = \tan^{-1}\dfrac{L_I}{N} \\ \theta_3 = 0, & \theta_4 = 0 \\ \theta_5 = \tan^{-1}\dfrac{L_{II}}{N+B}, & \theta_6 = \tan^{-1}\dfrac{L_{II}}{N} \end{cases} \tag{2}$$

where $\theta$ is the steering angle of the vehicle, positive when turning right and negative when turning left, and $L_I$ and $L_{II}$ represent the distances from the front axle and the rear axle to the middle shaft, respectively.
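As an illustration, the following Python sketch evaluates Equations (1) and (2) for all six wheels. It assumes that $N$ can be recovered from the commanded steering angle via $\theta = \tan^{-1}(L_I/N)$ and that $B$ is the lateral wheel spacing; the sign handling and function names are ours, not the paper's.

```python
import math

def steering_angles(theta, L_I, L_II, B):
    """Per-wheel steering angles of the three-axle FTV, Equations (1)-(2).

    theta : commanded steering angle in radians (>0 right, <0 left).
    L_I, L_II : distances from the front/rear axle to the middle shaft.
    B : lateral wheel spacing (assumed meaning).
    Wheels 3 and 4 sit on the non-steering middle shaft.
    """
    if theta == 0.0:
        return [0.0] * 6
    N = L_I / math.tan(abs(theta))  # assumption: N recovered from theta
    if theta > 0:   # turning right
        t1, t2 = math.atan(L_I / N), math.atan(L_I / (B + N))
        t5, t6 = math.atan(L_II / N), math.atan(L_II / (B + N))
    else:           # turning left
        t1, t2 = math.atan(L_I / (N + B)), math.atan(L_I / N)
        t5, t6 = math.atan(L_II / (N + B)), math.atan(L_II / N)
    sign = math.copysign(1.0, theta)
    return [sign * t1, sign * t2, 0.0, 0.0, sign * t5, sign * t6]
```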
Some existing approaches simplify the vehicle into a rigid body, lowering the accuracy of the vehicle model. Moreover, research on predictive control has generally simplified the car body as a spring–damper structure acting only in the vertical direction, which is not suitable for three-dimensional uneven roads. Considering the characteristics of the suspension and tires, we constructed a segmented spring–damper system to make the model more realistic, as shown in Figure 3b.
The Euler–Lagrange equation was chosen as the dynamics equation of our vehicle through comprehensive consideration of the vehicle’s characteristics.
$$\frac{d}{dt}\left(\frac{\partial (K-P)}{\partial \dot q_i}\right) - \frac{\partial (K-P)}{\partial q_i} + \frac{\partial F}{\partial \dot q_i} = \frac{\partial E}{\partial \dot q_i} \tag{3}$$
Assume that the coordinates of the vehicle's mass center are $M(x_m, y_m, z_m)$, and that $(m, l, n)$ are the mass-center coordinates with respect to the vehicle center $O(x, y, z)$. From the geometry, the following equation can be deduced:
$$\begin{cases} x_m = x + m + n\beta - l\gamma \\ y_m = y + l - n\alpha + m\gamma \\ z_m = z + n + l\alpha - m\beta \end{cases} \tag{4}$$
The velocity of the mass center could be deduced by taking the derivative of Equation (4):
$$\begin{cases} \dot x_m = \dot x + n\dot\beta - l\dot\gamma \\ \dot y_m = \dot y - n\dot\alpha + m\dot\gamma \\ \dot z_m = \dot z + l\dot\alpha - m\dot\beta \end{cases} \tag{5}$$
The kinetic energy of the system K was as follows:
$$K = K_l + K_r = \frac{1}{2}M\left(\dot x_m^2 + \dot y_m^2 + \dot z_m^2\right) + \frac{1}{2}\left(J_{XX}\dot\alpha^2 + J_{YY}\dot\beta^2 + J_{ZZ}\dot\gamma^2\right) - \left(J_{XY}\dot\alpha\dot\beta + J_{YZ}\dot\beta\dot\gamma + J_{XZ}\dot\alpha\dot\gamma\right) \tag{6}$$
where $J_{XX}$, $J_{YY}$, $J_{ZZ}$ are the vehicle's moments of inertia, and $J_{XY}$, $J_{YZ}$, $J_{XZ}$ are the vehicle's products of inertia.
According to the geometric relations in Figure 3a, the tangential, lateral, and radial displacement of each wheel could be calculated:
$$\begin{cases} u_i = (x - r\beta - L_i\gamma)\cos\theta_i - (y + r\alpha + b_i\gamma)\sin\theta_i \\ v_i = (x - r\beta - L_i\gamma)\sin\theta_i + (y + r\alpha + b_i\gamma)\cos\theta_i \\ w_i = z - b_i\beta + L_i\alpha \end{cases} \tag{7}$$

$$b = \frac{B}{2}, \qquad b_i = (-1)^{i+1} b$$
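For concreteness, here is a small Python helper evaluating Equation (7) for one wheel; the argument names and the reading of $r$ as the wheel radius are our assumptions.

```python
import math

def wheel_displacements(pose, theta_i, L_i, b_i, r):
    """Tangential (u), lateral (v), and radial (w) displacement of wheel i,
    Equation (7). pose = (x, y, z, alpha, beta, gamma); b_i = (-1)**(i+1)*B/2.
    """
    x, y, z, alpha, beta, gamma = pose
    s, c = math.sin(theta_i), math.cos(theta_i)
    a1 = x - r * beta - L_i * gamma   # longitudinal body term
    a2 = y + r * alpha + b_i * gamma  # lateral body term
    u = a1 * c - a2 * s
    v = a1 * s + a2 * c
    w = z - b_i * beta + L_i * alpha
    return u, v, w
```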
The potential energy of the system could be expressed as the combination of gravitational potential energy and elastic potential energy of the vehicle. The formula was as follows:
$$P = \frac{1}{2}\left(\sum_{i=1}^{6} K_{ix}(u_i - u_i')^2 + \sum_{i=1}^{6} K_{iy}(v_i - v_i')^2 + \sum_{i=1}^{6} K_{iz}(w_i - \delta_0 - w_i')^2\right) + Mg\left(-x_m\sin\lambda\cos\varphi - y_m\sin\lambda\sin\varphi + z_m\cos\lambda\right) \tag{8}$$
The energy dissipation of the system was as follows:
$$F = \frac{1}{2}\left(\sum_{i=1}^{6} C_{ix}(\dot u_i - \dot u_i')^2 + \sum_{i=1}^{6} C_{iy}(\dot v_i - \dot v_i')^2 + \sum_{i=1}^{6} C_{iz}(\dot w_i - \dot w_i')^2\right) \tag{9}$$
where $K_{ix}$, $K_{iy}$, $K_{iz}$ and $C_{ix}$, $C_{iy}$, $C_{iz}$ $(i = 1, 2, \ldots, 6)$ denote the stiffness and damping coefficients of each wheel in the three directions, respectively. Further, $u_i'$, $v_i'$, $w_i'$ $(i = 1, 2, \ldots, 6)$ are the tangential, lateral, and radial displacements imposed by the terrain, respectively.
Then, the work done by the wheels on the possible displacement and the force of friction on each wheel were as follows:
$$E = \sum_{i=1}^{6}\left[(P_i - F_i)v_i - S_i u_i\right] \tag{10}$$

$$F_i = \mu\left[K_{iz}(w_i - w_i') + C_{iz}(\dot w_i - \dot w_i')\right] \tag{11}$$
where $F_i$ and $S_i$ represent the lateral and tangential forces on the tire, respectively, and $P_i$ denotes the traction of each wheel, which can be read from the controller area network (CAN) bus of the FTV.
Combining Equations (1)–(11), the dynamic equations of the autonomous FTV could be deduced.

X-direction:

$$M(\ddot x + n\ddot\beta - l\ddot\gamma) - Mg\sin\lambda\cos\varphi + \sum_{i=1}^{6}\cos\theta_i K_{ix}(u_i - u_i') + \sum_{i=1}^{6}\sin\theta_i K_{iy}(v_i - v_i') + \sum_{i=1}^{6}\cos\theta_i C_{ix}(\dot u_i - \dot u_i') + \sum_{i=1}^{6}\sin\theta_i C_{iy}(\dot v_i - \dot v_i') = 0 \tag{12}$$

Y-direction:

$$M(\ddot y - n\ddot\alpha + m\ddot\gamma) - Mg\sin\lambda\sin\varphi - \sum_{i=1}^{6}\sin\theta_i K_{ix}(u_i - u_i') + \sum_{i=1}^{6}\cos\theta_i K_{iy}(v_i - v_i') - \sum_{i=1}^{6}\sin\theta_i C_{ix}(\dot u_i - \dot u_i') + \sum_{i=1}^{6}\cos\theta_i C_{iy}(\dot v_i - \dot v_i') = 0 \tag{13}$$

Z-direction:

$$M(\ddot z + l\ddot\alpha - m\ddot\beta) + Mg\cos\lambda + \sum_{i=1}^{6} K_{iz}(w_i - \delta_0 - w_i') + \sum_{i=1}^{6} C_{iz}(\dot w_i - \dot w_i') = 0 \tag{14}$$

$\alpha$-direction:

$$-Mn\ddot y + Ml\ddot z + \left[J_{XX} + M(n^2+l^2)\right]\ddot\alpha - (J_{XY} + Mml)\ddot\beta - (J_{XZ} + Mmn)\ddot\gamma + Mg(n\sin\lambda\sin\varphi + l\cos\lambda) + \sum_{i=1}^{6}\left[-r\sin\theta_i K_{ix}(u_i - u_i') + r\cos\theta_i K_{iy}(v_i - v_i') + L_i K_{iz}(w_i - \delta_0 - w_i')\right] + \sum_{i=1}^{6}\left[-r\sin\theta_i C_{ix}(\dot u_i - \dot u_i') + r\cos\theta_i C_{iy}(\dot v_i - \dot v_i') + L_i C_{iz}(\dot w_i - \dot w_i')\right] = 0 \tag{15}$$

$\beta$-direction:

$$Mn\ddot x - Mm\ddot z - (J_{XY} + Mml)\ddot\alpha + \left[J_{YY} + M(m^2+n^2)\right]\ddot\beta - (J_{YZ} + Mnl)\ddot\gamma - Mg(n\sin\lambda\cos\varphi + m\cos\lambda) - \sum_{i=1}^{6}\left[r\cos\theta_i K_{ix}(u_i - u_i') + r\sin\theta_i K_{iy}(v_i - v_i') + b_i K_{iz}(w_i - \delta_0 - w_i')\right] - \sum_{i=1}^{6}\left[r\cos\theta_i C_{ix}(\dot u_i - \dot u_i') + r\sin\theta_i C_{iy}(\dot v_i - \dot v_i') + b_i C_{iz}(\dot w_i - \dot w_i')\right] = 0 \tag{16}$$

$\gamma$-direction:

$$-Ml\ddot x + Mm\ddot y - (J_{XZ} + Mmn)\ddot\alpha - (J_{YZ} + Mnl)\ddot\beta + \left[J_{ZZ} + M(l^2+m^2)\right]\ddot\gamma + Mg(l\sin\lambda\cos\varphi - m\sin\lambda\sin\varphi) - \sum_{i=1}^{6}(L_i\cos\theta_i + b_i\sin\theta_i)\left[K_{ix}(u_i - u_i') + C_{ix}(\dot u_i - \dot u_i')\right] + \sum_{i=1}^{6}(b_i\cos\theta_i - L_i\sin\theta_i)\left[K_{iy}(v_i - v_i') + C_{iy}(\dot v_i - \dot v_i')\right] = 0 \tag{17}$$

$u_i$-direction:

$$K_{ix}(u_i - u_i') + C_{ix}(\dot u_i - \dot u_i') = -S_i \quad (i = 1, 2, \ldots, 6) \tag{18}$$

$v_i$-direction:

$$K_{iy}(v_i - v_i') + C_{iy}(\dot v_i - \dot v_i') = P_i - F_i \quad (i = 1, 2, \ldots, 6) \tag{19}$$
The matrix form of the above equation was expressed as follows:
$$\begin{bmatrix} [M_{6\times6}] & [0]_{6\times12} \\ [0]_{12\times6} & [0]_{12\times12} \end{bmatrix}\begin{Bmatrix} \ddot q_6 \\ \ddot q_{12} \end{Bmatrix} + \begin{bmatrix} [C_{6\times6}] & [C_{6\times12}] \\ [C_{12\times6}] & [C_{12\times12}] \end{bmatrix}\begin{Bmatrix} \dot q_6 \\ \dot q_{12} \end{Bmatrix} + \begin{bmatrix} [K_{6\times6}] & [K_{6\times12}] \\ [K_{12\times6}] & [K_{12\times12}] \end{bmatrix}\begin{Bmatrix} q_6 \\ q_{12} \end{Bmatrix} = \begin{Bmatrix} F_6 \\ F_{12} \end{Bmatrix} \tag{20}$$

$$[M_{6\times6}] = \begin{bmatrix} M & 0 & 0 & 0 & Mn & -Ml \\ 0 & M & 0 & -Mn & 0 & Mm \\ 0 & 0 & M & Ml & -Mm & 0 \\ 0 & -Mn & Ml & J_{XX}+M(n^2+l^2) & -(J_{XY}+Mml) & -(J_{ZX}+Mmn) \\ Mn & 0 & -Mm & -(J_{XY}+Mml) & J_{YY}+M(n^2+m^2) & -(J_{YZ}+Mln) \\ -Ml & Mm & 0 & -(J_{ZX}+Mmn) & -(J_{YZ}+Mln) & J_{ZZ}+M(l^2+m^2) \end{bmatrix}$$

$$\{q_6\} = \{x\ \ y\ \ z\ \ \alpha\ \ \beta\ \ \gamma\}^T, \qquad \{q_{12}\} = \{u_1 \ldots u_6\ \ v_1 \ldots v_6\}^T$$
where $[C_{6\times6}]$, $[C_{6\times12}]$, $[C_{12\times6}]$, $[C_{12\times12}]$ consist of the coefficients of $\{\dot q_6\}$ and $\{\dot q_{12}\}$ in Equations (12)–(19); $[K_{6\times6}]$, $[K_{6\times12}]$, $[K_{12\times6}]$, $[K_{12\times12}]$ consist of the coefficients of $\{q_6\}$ and $\{q_{12}\}$ in Equations (12)–(19); and $F_6$ and $F_{12}$ consist of the generalized forces in Equations (12)–(19). All of the above parameters are known.
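To make the block structure concrete, here is a minimal NumPy sketch that assembles Equation (20); the block arguments are placeholders for the coefficient matrices collected from Equations (12)–(19).

```python
import numpy as np

def assemble_system(M6, C, K, F6, F12):
    """Assemble the 18-DOF form M q'' + C q' + K q = F of Equation (20).

    M6 is the 6x6 inertia matrix; C and K are dicts holding the '66', '612',
    '126', and '1212' damping/stiffness blocks; F6, F12 are force vectors.
    """
    M18 = np.zeros((18, 18))
    M18[:6, :6] = M6                      # only the 6 body DOFs carry inertia
    C18 = np.block([[C['66'], C['612']], [C['126'], C['1212']]])
    K18 = np.block([[K['66'], K['612']], [K['126'], K['1212']]])
    F18 = np.concatenate([F6, F12])
    return M18, C18, K18, F18
```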

3.2. Kalman Filter Algorithm

The dynamic model derived in the previous section could be used to calculate the pose of the vehicle on an uneven road. However, the accuracy of the predicted pose was low, owing to errors from the IMU and the limited accuracy of the dynamic model. Several methods have been proposed to compensate for such errors. An effective one is the Kalman filter, proposed by Kalman in 1960 [28]. In this paper, the KF algorithm was used to compensate for the errors caused by the dynamic model and the IMU, improving the accuracy of the predicted pose.
The KF algorithm mainly includes two steps: predict and update. In the predict step, the state of the system is predicted with the following two equations:
$$\hat X_k = A_k \hat X_{k-1} + B_k u_k + w_k \tag{21}$$

$$\hat P_k = A_k \hat P_{k-1} A_k^T + Q_{k-1} \tag{22}$$
where $\hat X_k$ and $\hat P_k$ represent the predicted state vector and predicted covariance matrix at time $t_k$, respectively; $\hat X_{k-1}$ and $\hat P_{k-1}$ represent the system state vector and covariance matrix at time $t_{k-1}$, respectively; and $w_k$ and $Q_{k-1}$ represent the system error and the corresponding covariance matrix at time $t_{k-1}$, respectively.
The above equations describe a linear system, which the dynamic model of Equation (20) is not. Thus, we needed to convert Equation (20) into a linear state-space form to match the Kalman filter algorithm.
Firstly, Equation (20) could be rewritten as follows:
$$\{\ddot q_{18}\} = [M_{18\times18}]^{-1}\{F_{18}\} - [M_{18\times18}]^{-1}[C_{18\times18}]\{\dot q_{18}\} - [M_{18\times18}]^{-1}[K_{18\times18}]\{q_{18}\} \tag{23}$$
Then, a vector of 24 state variables was used to convert Equation (23) into a first-order differential equation, and Equation (23) could be rewritten as follows:
$$\{\dot X\} = [E]\{X\} + \{F^*\} \tag{24}$$

$$\{X\} = \begin{Bmatrix} q_{18} \\ \dot q_6 \end{Bmatrix} = \{x\ \ y\ \ z\ \ \alpha\ \ \beta\ \ \gamma\ \ u_1 \ldots u_6\ \ v_1 \ldots v_6\ \ \dot x\ \ \dot y\ \ \dot z\ \ \dot\alpha\ \ \dot\beta\ \ \dot\gamma\}^T$$

$$[E] = \begin{bmatrix} [0]_{6\times6} & [0]_{6\times12} & [I]_{6\times6} \\ -[C_{12\times12}]^{-1}[K_{12\times6}] & -[C_{12\times12}]^{-1}[K_{12\times12}] & -[C_{12\times12}]^{-1}[C_{12\times6}] \\ [T_{6\times12}][K_{12\times6}] - [M_{6\times6}]^{-1}[K_{6\times6}] & [T_{6\times12}][K_{12\times12}] - [M_{6\times6}]^{-1}[K_{6\times12}] & [T_{6\times12}][C_{12\times6}] - [M_{6\times6}]^{-1}[C_{6\times6}] \end{bmatrix}$$

$$\{F^*\} = \begin{Bmatrix} \{0\}_6 \\ [C_{12\times12}]^{-1}\{F_{12}\} \\ [M_{6\times6}]^{-1}\{F_6\} - [T_{6\times12}]\{F_{12}\} \end{Bmatrix}, \qquad [T_{6\times12}] = [M_{6\times6}]^{-1}[C_{6\times12}][C_{12\times12}]^{-1}$$
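The construction of $[E]$ and $\{F^*\}$ from the blocks of Equation (20) can be sketched in NumPy as follows; the argument names follow the block dimensions and are ours.

```python
import numpy as np

def state_space(M6, C, K, F6, F12):
    """Build [E] and {F*} of Equation (24) from the blocks of Equation (20).

    C and K are dicts with the '66', '612', '126', '1212' blocks; the state
    ordering is (q_6, q_12, dq_6), i.e. 6 + 12 + 6 = 24 variables.
    """
    M6i = np.linalg.inv(M6)
    C12i = np.linalg.inv(C['1212'])
    T = M6i @ C['612'] @ C12i            # T_{6x12} of the text
    E = np.block([
        [np.zeros((6, 6)),             np.zeros((6, 12)),              np.eye(6)],
        [-C12i @ K['126'],             -C12i @ K['1212'],              -C12i @ C['126']],
        [T @ K['126'] - M6i @ K['66'], T @ K['1212'] - M6i @ K['612'], T @ C['126'] - M6i @ C['66']],
    ])
    F_star = np.concatenate([np.zeros(6), C12i @ F12, M6i @ F6 - T @ F12])
    return E, F_star
```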
Secondly, we used a fourth-order Runge–Kutta method to solve Equation (24) in combination with the Kalman filter algorithm:
$$\begin{cases} \{X\}_k = \{X\}_{k-1} + \frac{1}{6}\left(K_1 + 2K_2 + 2K_3 + K_4\right) \\ K_1 = \Delta T\,\Phi(\{X\}_{k-1}) \\ K_2 = \Delta T\,\Phi\left(\{X\}_{k-1} + \frac{1}{2}K_1\right) \\ K_3 = \Delta T\,\Phi\left(\{X\}_{k-1} + \frac{1}{2}K_2\right) \\ K_4 = \Delta T\,\Phi\left(\{X\}_{k-1} + K_3\right) \end{cases} \tag{25}$$
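One integration step of Equation (25), with $\Phi(X) = [E]X + \{F^*\}$ as the right-hand side, looks like this in Python (a sketch; `Phi` is assumed to accept and return NumPy arrays):

```python
def rk4_step(Phi, X, dT):
    """One fourth-order Runge-Kutta step for X' = Phi(X), Equation (25)."""
    K1 = dT * Phi(X)
    K2 = dT * Phi(X + 0.5 * K1)
    K3 = dT * Phi(X + 0.5 * K2)
    K4 = dT * Phi(X + K3)
    return X + (K1 + 2.0 * K2 + 2.0 * K3 + K4) / 6.0

# e.g., with E and F_star from state_space():
# Phi = lambda X: E @ X + F_star
```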
Finally, Equations (21) and (22) were rewritten as follows:
$$X_k = A_k X_{k-1} + F_k + w_k \tag{26}$$

$$P_k = A_k P_{k-1} A_k^T + Q_{k-1} \tag{27}$$

$$A_k = \phi([E]_{k-1}) \tag{28}$$

$$B_k = \psi(\{F^*\}_{k-1}) \tag{29}$$
where $X_{k-1}$ and $P_{k-1}$ represent the state vector of the vehicle, consisting of 24 variables, and the corresponding covariance matrix at time $t_{k-1}$, respectively; $X_k$ and $P_k$ represent the predicted state vector and predicted covariance matrix at time $t_k$, respectively; $A_k$ is a matrix composed of the mass, stiffness, and damping coefficients of the vehicle at time $t_{k-1}$; $F_k$ is a matrix composed of the vehicle's generalized forces at time $t_{k-1}$; $A_k$ and $B_k$ can be calculated through Equation (25); and $Q_{k-1}$ is the system noise covariance at time $t_{k-1}$.
The state vector of the vehicle measured by sensors could be calculated with the following two equations:
$$Z_k = H_k \bar X_k + v_k \tag{30}$$

$$S_k = H_k \hat P_{k-1} H_k^T + R_k \tag{31}$$
where $\bar X_k$ is the state vector measured by the sensors, and $v_k$ and $R_k$ represent the observation noise and the corresponding covariance matrix of the sensors, respectively.
In the update step, the state and covariance estimates of the vehicle were corrected by the following equations:
$$K_k = \hat P_k H_k^T \left[H_k \hat P_k H_k^T + R_k\right]^{-1} \tag{32}$$

$$P_k = \left[I - K_k H_k\right]\hat P_k \tag{33}$$

$$X_k = \hat X_k + K_k\left[Z_k - H_k \hat X_k\right] \tag{34}$$
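A minimal NumPy sketch of the predict and update steps (Equations (26)–(27) and (32)–(34)); the observation model `H` and the noise covariances `Q`, `R` stand in for the GPS/INS measurement setup and are assumptions here.

```python
import numpy as np

def kf_predict(x, P, A, F, Q):
    """Predict step: propagate the state (Eq. (26)) and covariance (Eq. (27))."""
    x_pred = A @ x + F               # A, F from the discretized dynamic model
    P_pred = A @ P @ A.T + Q
    return x_pred, P_pred

def kf_update(x_pred, P_pred, z, H, R):
    """Update step: fuse the sensor measurement z (Eqs. (32)-(34))."""
    S = H @ P_pred @ H.T + R                 # innovation covariance, Eq. (31)
    K = P_pred @ H.T @ np.linalg.inv(S)      # Kalman gain, Eq. (32)
    x = x_pred + K @ (z - H @ x_pred)        # corrected state, Eq. (34)
    P = (np.eye(P_pred.shape[0]) - K @ H) @ P_pred  # corrected covariance
    return x, P
```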
The flow chart of the Kalman filter is shown in Figure 4:

3.3. Active Suspension System Control

The active suspension system can control vehicle vibration and pose by changing the height, shape, and damping of the suspension, thereby improving the vehicle's handling stability and ride comfort. Active suspension systems can be divided into three categories according to the control type: hydraulic, air, and electromagnetic induction suspension systems. Our vehicle adopts a hydraulic suspension system to support the heavy body of the FTV. Most suspension control methods use the IMU or other onboard sensors to measure the current state of the vehicle and then adjust the suspension accordingly to maintain stability. However, this approach has two defects. On the one hand, the IMU does not output data continuously in time, which makes the suspension control inaccurate. On the other hand, the INS only measures the current state of the vehicle, which delays the suspension adjustment. Our proposed pose prediction addresses both the discontinuity and the delay.
Figure 5 shows the active suspension system of our FTV. According to the geometric relationship, the kinematic equation of FTV could be obtained:
$$\begin{cases} Z_1 = Z - L_f\sin\theta + \frac{1}{2}a\cos\theta\sin\varphi \\ Z_2 = Z - L_f\sin\theta - \frac{1}{2}a\cos\theta\sin\varphi \\ Z_3 = Z + L_r\sin\theta + \frac{1}{2}a\cos\theta\sin\varphi \\ Z_4 = Z + L_r\sin\theta - \frac{1}{2}a\cos\theta\sin\varphi \\ Z_5 = Z + \frac{1}{2}a\sin\varphi \\ Z_6 = Z - \frac{1}{2}a\sin\varphi \end{cases} \tag{35}$$
where $\theta$, $\varphi$, and $Z$ could be predicted through our dynamic model. We could then obtain the z-direction displacement of the body at each connection with a hydraulic cylinder and, finally, control the hydraulic cylinders to maintain the stability and safety of the FTV.
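A minimal sketch of Equation (35) follows; reading $L_f$ and $L_r$ as the longitudinal distances to the front/rear cylinder mounts and $a$ as the lateral mount spacing, as well as the left/right labels, are our assumptions.

```python
import math

def cylinder_heights(Z, theta, phi, L_f, L_r, a):
    """Body z-displacement at the six hydraulic cylinders, Equation (35)."""
    pitch_f = L_f * math.sin(theta)
    pitch_r = L_r * math.sin(theta)
    roll = 0.5 * a * math.cos(theta) * math.sin(phi)
    return [Z - pitch_f + roll,           # Z1
            Z - pitch_f - roll,           # Z2
            Z + pitch_r + roll,           # Z3
            Z + pitch_r - roll,           # Z4
            Z + 0.5 * a * math.sin(phi),  # Z5
            Z - 0.5 * a * math.sin(phi)]  # Z6
```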

4. Discussion

We used four experiments to verify the effectiveness and feasibility of our method. The first experiment confirmed that pose prediction using a Kalman filter was more accurate and stable than using the dynamic model alone. The second experiment verified the accuracy of the proposed method over a period of time against the real vehicle pose. The third experiment verified the effectiveness of our active suspension control method. The last experiment verified the stability of our method in more situations over a long period.
The experiments were first carried out on a test site with two kinds of obstacles, as shown in Figure 6a. We used the nearest neighbor interpolation (NNI) algorithm to process the point cloud data obtained from the 3D sensors, as shown in Figure 6b.
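As an illustration of this preprocessing step, the following sketch rasterizes terrain points into an elevation grid with nearest-neighbor interpolation; the grid resolution and extent are illustrative values, not the paper's.

```python
import numpy as np
from scipy.interpolate import NearestNDInterpolator

def elevation_grid(points, cell=0.1, x_range=(-10.0, 10.0), y_range=(0.0, 20.0)):
    """NNI elevation map from an (N, 3) array of x, y, z terrain points."""
    interp = NearestNDInterpolator(points[:, :2], points[:, 2])
    xs = np.arange(*x_range, cell)
    ys = np.arange(*y_range, cell)
    gx, gy = np.meshgrid(xs, ys)
    return gx, gy, interp(gx, gy)   # the z-grid supplies wheel-contact heights
```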
The vehicle's pose at time $t_{k+1}$ was predicted by feeding the vehicle and road information at time $t_k$ into the dynamic model and was compared with the GPS/INS data at time $t_{k+1}$. The position errors relative to the GPS/INS data are shown in Figure 7. The blue dotted line represents the position error predicted by the dynamic model alone; as the curve shows, using the dynamic model by itself produced a large error, due to its iterative algorithm and the accuracy of the sensors mounted on the vehicle. The red solid line represents the position error predicted by the dynamic model with the Kalman filter algorithm; the curve demonstrates that combining the dynamic model with the Kalman filter effectively eliminated the error and ensured accurate positioning. The attitude angle errors are shown in Figure 8. For the same reason, using the Kalman filter algorithm yielded more accurate position and attitude angle information.
For our autonomous FTV, active suspension control and advance risk analysis were implemented to maintain stability and safety on uneven terrain. To achieve this, we needed to accurately predict the attitude of the vehicle over the next period of time. The second experiment was carried out to verify this capability. We drove the vehicle straight over an obstacle at a constant slow speed while recording the vehicle's pose and input information. We then imported the same input information into the dynamic model to obtain the predicted pose. The result is shown in Figure 9: the purple line and the red line show the measured and predicted attitude angles, respectively. The predicted pose had high accuracy, which means we could calculate in advance how much speed and traction were needed to pass an obstacle, since passing obstacles at certain speeds carries a risk of rollover.
The data after adjusting the suspension of the vehicle are shown in Figure 10: the purple line shows the attitude angle when passing over an obstacle, and the red line shows the attitude angle when the vehicle continuously adjusted its suspension while passing over the same obstacle at the same speed. The result showed that the attitude was maintained within 5 degrees by continuous suspension adjustment, which means that the stability of the vehicle could be well guaranteed by our proposed method.
The last experiment was carried out to further verify the feasibility of our method and its stability in complex environments. It took place in a field around our university with various obstacles on the road. As shown in Figure 11b, the black line shows the real trajectory of the FTV, and the blue and red lines represent the trajectories predicted by the dynamic model with and without the Kalman filter, respectively. The result showed that combining the Kalman filter with the dynamic model gave higher accuracy and stability over a long period. The three points where the vehicle passed over obstacles are marked A, B, and C. Figure 12 shows the change in attitude angle during driving: the attitude angle remained within 5 degrees throughout, showing that the vehicle could maintain stability over a long time through suspension control.

5. Conclusions

This paper presented a method for pose prediction of an autonomous FTV on uneven roads, which can be used as a reference for adjusting the active suspension and planning the path. First, real-time road elevation information was extracted using GPS/INS and 3D laser scanners. Second, the vehicle's dynamic model was established, and the elevation information extracted in the previous step was imported into it for pose estimation. Finally, the dynamic model was combined with a KF to obtain a more accurate pose prediction. Real-world experimental results demonstrated that our method can ensure the safety and stability of vehicles driving across complex uneven terrain. The study also has some limitations; for example, the accuracy of the vehicle's dynamic model decreases while the active suspension system is being adjusted. In the future, we will consider how to improve the accuracy of the dynamic model and adjust the suspension in more ways to maintain the vehicle's stability. Furthermore, we need to conduct in-depth research on path planning across complex uneven terrain to make the vehicle truly unmanned.

Author Contributions

Conceptualization, W.L.; Data curation, T.N.; Formal analysis, W.L.; Funding acquisition, H.Z.; Methodology, T.N.; Project administration, H.Z.; Software, W.L., H.Y. and Z.K.

Funding

This work was supported by the National Key Research and Development Program of China (Grant No. 2016YFC0802900).

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Thrun, S.; Montemerlo, M.; Dahlkamp, H.; Stavens, D.; Aron, A.; Diebel, J.; Fong, P.; Gale, J.; Halpenny, M.; Hoffmann, G.; et al. Stanley: The robot that won the DARPA Grand Challenge. J. Field Robot. 2006, 23, 661–692.
  2. Urmson, C.; Anhalt, J.; Bagnell, D.; Baker, C.; Bittner, R.; Clark, M.N.; Dolan, J.; Duggins, D.; Galatali, T.; Geyer, C.; et al. Autonomous driving in urban environments: Boss and the Urban Challenge. J. Field Robot. 2008, 25, 425–466.
  3. Hemann, G.; Singh, S.; Kaess, M. Long-range GPS-denied aerial inertial navigation with LIDAR localization. In Proceedings of the 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Daejeon, Korea, 9–14 October 2016; pp. 1659–1666.
  4. Tang, J.; Chen, Y.; Niu, X.; Wang, L.; Chen, L. LiDAR scan matching aided inertial navigation system in GNSS-denied environments. Sensors 2015, 15, 16710–16728.
  5. Gao, Y.; Liu, S.; Atia, M.M.; Noureldin, A. INS/GPS/LiDAR integrated navigation system for urban and indoor environments using hybrid scan matching algorithm. Sensors 2015, 15, 23286–23302.
  6. Guowei, W.; Xiaolong, Y.; Renlan, C. Robust and Precise Vehicle Localization based on Multi-sensor Fusion in Diverse City Scenes. In Proceedings of the 2018 IEEE International Conference on Robotics and Automation (ICRA), Brisbane, Australia, 21–25 May 2018; pp. 4670–4677.
  7. Erhan, D.; Szegedy, C.; Toshev, A.; Anguelov, D. Scalable object detection using deep neural networks. In Proceedings of the 2014 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Columbus, OH, USA, 24–27 June 2014; pp. 2155–2162.
  8. Dong, J.; Chen, Q.; Yan, S.; Yuille, A. Towards unified object detection and semantic segmentation. In Computer Vision–ECCV 2014; Springer: Berlin, Germany, 2014; Volume 7, pp. 299–314.
  9. Sermanet, P.; Eigen, D.; Zhang, X.; Mathieu, M.; Fergus, R.; LeCun, Y. Overfeat: Integrated Recognition, Localization and Detection Using Convolutional Networks. In Proceedings of the 2014 International Conference on Learning Representations (ICLR), Banff, AB, Canada, 14–16 April 2014.
  10. Ahmed, M.M.; Svaricek, F. Preview optimal control of vehicle semi-active suspension based on partitioning of chassis acceleration and tire load spectra. In Proceedings of the 2014 European Control Conference (ECC), Strasbourg, France, 24–27 June 2014; pp. 1669–1674.
  11. Ryu, S.; Park, Y.; Suh, M. Ride quality analysis of a tracked vehicle suspension with a preview control. J. Terramech. 2011, 48, 409–417.
  12. Marzbanrad, J.; Ahmadi, G.; Zohoor, H.; Hojjat, Y. Stochastic optimal preview control of a vehicle suspension. J. Sound Vib. 2004, 275, 973–990.
  13. Youn, I.; Tchamna, R.; Lee, S.H.; Uddin, N.; Lyu, S.K.; Tomizuka, M. Preview suspension control for a full tracked vehicle. Int. J. Automot. Technol. 2014, 15, 399–410.
  14. Li, P.; Lam, J.; Cheung, K.C. Multi-objective control for active vehicle suspension with wheelbase preview. J. Sound Vib. 2014, 333, 5269–5282.
  15. Mehra, R.; Amin, J.; Hedrick, K.; Osorio, C.; Gopalasamy, S. Active suspension using preview information and model predictive control. In Proceedings of the 1997 IEEE International Conference on Control Applications, Hartford, CT, USA, 5–7 October 1997; pp. 860–865.
  16. Marzbanrad, J.; Ahmadi, G.; Hojjat, Y.; Zohoor, H. Optimal active control of vehicle suspension systems including time delay and preview for rough roads. J. Vib. Control 2002, 8, 967–991.
  17. Droeschel, D.; Schwarz, M.; Behnke, S. Continuous mapping and localization for autonomous navigation in rough terrain using a 3D laser scanner. Robot. Auton. Syst. 2017, 88, 104–115.
  18. Akgul, M.; Yurtseven, H.; Akburak, S.; Demir, M.; Cigizoglu, H.K. Short term monitoring of forest road pavement degradation using terrestrial laser scanning. Measurement 2017, 103, 283–293.
  19. Cole, D.M.; Newman, P.M. Using laser range data for 3D SLAM in outdoor environments. In Proceedings of the IEEE International Conference on Robotics and Automation, Orlando, FL, USA, 15–19 May 2006; pp. 1556–1563.
  20. Grivon, D.; Vezzetti, E.; Violante, M.G. Development of an innovative low-cost MARG sensors alignment and distortion compensation methodology for 3D scanning applications. Robot. Auton. Syst. 2013, 61, 1710–1716.
  21. Youn, I.; Khan, M.A.; Uddin, N. Road disturbance estimation for the optimal preview control of an active suspension systems based on tracked vehicle model. Int. J. Automot. Technol. 2017, 18, 307–316.
  22. Göhrle, C.; Schindler, A.; Wagner, A. Model Predictive Control of semi-active and active suspension systems with available road preview. In Proceedings of the 2013 European Control Conference (ECC), Zürich, Switzerland, 17–19 July 2013; pp. 1499–1504.
  23. Yingchong, M.; Shiller, Z. Pose Estimation of Vehicles over Uneven Terrain, Paslin Laboratory for Robotics and Autonomous Vehicles; Department of Mechanical Engineering and Mechatronics, Ariel University: Ariel, Israel, 2019.
  24. Jordan, J.; Zell, A. Real-time Pose Estimation on Elevation Maps for Wheeled Vehicles. In Proceedings of the International Conference on Intelligent Robots and Systems (IROS), Vancouver, BC, Canada, 24–28 September 2017; pp. 1337–1342.
  25. Jun, J.Y.; Saut, J.-P. Pose estimation-based path planning for a tracked mobile robot traversing uneven terrains. Robot. Auton. Syst. 2016, 75, 325–339.
  26. Dingxuan, Z.; Lili, W.; Yilei, L.; Miaomiao, D. Extraction of preview elevation of road based on 3D sensor. Measurement 2018, 127, 104–114.
  27. Cho, H.; Kim, E.K.; Kim, S. Indoor SLAM application using geometric and ICP matching methods based on line features. Robot. Auton. Syst. 2018, 100, 206–224.
  28. Kalman, R.E. A new approach to linear filtering and prediction problems. J. Basic Eng. 1960, 82, 35–45.
Figure 1. Vehicle configuration.
Figure 2. System structure. IMU, inertial measurement unit; R–K, Runge–Kutta.
Figure 3. Autonomous heavy-duty vehicle models. (a) Top view of the vehicle. (b) A simplified model of the vehicle on a slope.
Figure 4. Kalman filter algorithm.
Figure 5. Model of the active suspension system.
Figure 6. Two kinds of obstacles in the test site. (a) Real obstacles in the test site. (b) Point cloud data of an obstacle processed by the nearest neighbor interpolation (NNI) algorithm.
Figure 7. Positioning error with and without the Kalman filter algorithm. (a) X position error. (b) Y position error. (c) Z position error.
Figure 8. Angle errors with and without the Kalman filter algorithm. (a) Alpha angle error. (b) Beta angle error. (c) Gamma angle error.
Figure 9. Prediction data of the vehicle's attitude angle. (a) Alpha angle. (b) Beta angle. (c) Gamma angle.
Figure 10. Attitude angle information after adjusting the active suspension system. (a) Alpha angle. (b) Beta angle. (c) Gamma angle.
Figure 11. Test sites around our university and vehicle trails. (a) Top view of the test sites. (b) The trajectory curves of the full tracked vehicle (FTV).
Figure 12. Data on the vehicle's attitude angle over the whole process of vehicle movement. (a) Alpha angle. (b) Beta angle.
Table 1. Performance index of the 3D sensor.

Performance Index                  | Parameter
Scanning frequency                 | 5–20 Hz
Measuring range                    | 0–100 m
Horizontal angular resolution      | 0.1°–0.4°
Vertical angular resolution        | 2°
Precision of distance measurement  | ±3 cm
Number of laser lines              | 16
Table 2. Performance index of GPS/INS.

Performance Index                  | Parameter
Data output frequency              | 100 Hz
Precision of heading               | ±0.3°
Precision of horizontal attitude   | ±0.3°
Precision of horizontal position   | ±2 cm
Precision of altitude position     | ±3 cm
