Article

A Pure Electric Driverless Crawler Construction Machinery Walking Method Based on the Fusion SLAM and Improved Pure Pursuit Algorithms

1 College of Mechanical Engineering and Automation, Huaqiao University, Xiamen 361021, China
2 Fujian Key Laboratory of Green Intelligent Drive and Transmission for Mobile Machinery, Xiamen 361021, China
3 School of Mechatronic Engineering, Beihang University, Beijing 102206, China
* Author to whom correspondence should be addressed.
Sensors 2023, 23(18), 7784; https://doi.org/10.3390/s23187784
Submission received: 29 July 2023 / Revised: 7 September 2023 / Accepted: 8 September 2023 / Published: 10 September 2023
(This article belongs to the Section Vehicular Sensing)

Abstract

Driverless technology enables a vehicle to drive itself under the control of a driverless system, without human intervention. Construction machinery operates in harsh environments and under complex working conditions, so driverless technology can greatly reduce the risks faced by operators, reduce labor costs and improve economic benefits. To address the weak GPS positioning signal in typical construction machinery working environments, which prevents accurate positioning, this paper uses a fusion SLAM algorithm based on improved NDT to localize the whole vehicle in real time through reconstruction of the scene. Because the motion characteristics of crawler construction machinery differ from those of ordinary passenger cars, this paper also improves the existing pure pursuit algorithm. Simulations and real vehicle tests show that the improved algorithm, combined with the fusion SLAM algorithm, controls the motion of driverless crawler construction machinery well, tracks the set trajectory accurately and is highly robust. Given that there is no mature walking method for driverless crawler construction machinery to use as a reference, this research lays a foundation for the development of such machinery.

1. Introduction

With the aggravation of the “greenhouse effect” and the global energy crisis, traditional fuel-powered construction machinery can no longer meet the development needs of the times, and energy conservation and emission reduction have become the consensus of the construction machinery industry [1]. Compared with traditional construction machinery, pure electric construction machinery offers high efficiency, low noise and zero tailpipe emissions [2], and it has become one of the mainstream directions in the development of the construction machinery industry. Compared with fuel-powered machinery, the electrical components of pure electric construction machinery are also easier to interface with driverless systems, enabling more accurate control.
Intelligentization of construction machinery covers the intelligentization of products and construction, of manufacturing, of service and of management; driverless construction machinery belongs to the products-and-construction module [3]. Driverless construction machinery refers to construction machinery that can complete its work and plan its work paths independently, without human intervention [4]. Driverless cars, by contrast, are developing rapidly: Tesla, Waymo, Uber, Toyota, Baidu Apollo, Huawei Arcfox, Pony AI, Xpeng Inc and other well-known car companies have developed their own driverless cars and completed road tests on structured urban roads in a variety of adverse weather conditions [5]. Compared with the rapid development of driverless vehicles, industry research on driverless construction machinery started late. Construction machinery differs from ordinary cars: it runs on unstructured roads, where road features are difficult to extract, and its control system is also quite different from that of ordinary cars. For these reasons, an automobile driverless system is not suitable for construction machinery, and developing a set of driverless systems tailored to construction machinery is highly important for its intelligentization. Caterpillar, Komatsu and other world-famous construction machinery companies have been developing driverless construction machinery; the driverless mining dump truck developed by Komatsu has even eliminated the cab, and it has been put into use in mining areas in Australia and Chile. By comparison, research on driverless construction machinery in China is still in its infancy, and most of it is based on remote control; there are few self-driving construction machinery products based on multi-sensor and deep learning methods [6].
Intelligent construction machinery generally uses GPS for vehicle positioning, but GPS satellite signals are easily obstructed and shielded by obstacles; under tunnel and building working conditions, GPS cannot provide accurate positioning information for the whole vehicle. To address the weak GPS positioning signal in some working environments of construction machinery [7], a fusion SLAM algorithm [8] based on improved NDT [9] is proposed in this paper. The algorithm collects environmental information through LiDAR and constructs a point cloud map, and it localizes the whole vehicle in real time by matching the point cloud perceived in real time against the constructed point cloud map. At the same time, the algorithm combines an IMU and odometer to compensate for the slow mapping speed and cumulative error of LiDAR-only SLAM. This positioning algorithm relies on LiDAR to construct geometric feature constraints, supplemented by an IMU and odometer that provide initial pose estimates, and it can effectively solve the vehicle positioning problem for construction machinery where satellite positioning signals are obstructed, such as in tunnels and near buildings. Because crawler construction machinery is differential-steering machinery, while the existing pure pursuit algorithm applies only to machinery based on Ackerman steering, this paper improves the existing pure pursuit algorithm and modifies its motion model to make it suitable for the motion control of crawler construction machinery. Based on ROS, this paper builds a driverless simulation platform for crawler construction machinery and uses rviz to visualize the simulation results. To verify the effect of the algorithm on real crawler construction machinery, a pure electric crawler excavator driverless platform is built, and a set-path tracking experiment based on the fusion SLAM and improved pure pursuit algorithms is carried out.

2. Fusion SLAM System Based on Improved NDT

The driverless system includes five modules: perception, localization, planning, decision making and motion control [10]. The driverless system receives the environmental perception and positioning information through the multi-sensor platform and uses the computing platform to analyze the received perception and positioning information and perform corresponding planning and decision making. After the planning and decision making are completed, the computing platform sends control signals to the vehicle control system to complete the motion control of the vehicle. To sum up, in the driverless system, positioning information is very important, being the basis of planning, decision making and motion control.
At present, the commonly used multi-sensor fusion methods include the weighted average method, the Kalman filtering method [11], the multi-Bayesian estimation method and the Dempster–Shafer (D–S) evidence inference method. Gao et al. [12] proposed a distributed state fusion method based on sparse grid orthogonal filtering and established a matrix-weighted distributed state fusion method in the mean-squared-error sense, which improves the fusion performance of multi-sensor nonlinear systems. Gao et al. [13] proposed a new multi-sensor data fusion method for INS-, GPS- and SAR-integrated navigation systems. This method combines local scattered fusion with global optimal fusion, improving the accuracy and reliability of the integrated navigation system. Lai et al. [14] proposed a multi-sensor tight fusion method that combines inertial navigation systems (INSs), odometers, barometric altimeters and GNSS technology; in the absence of satellite signals, its accuracy can approach that achieved in outdoor environments. Gao et al. [15] proposed a new random weighted estimation method for the optimal fusion of multidimensional position data. This method can effectively fuse multi-sensor multidimensional position data, and its fusion accuracy is much higher than that of the Kalman fusion method. Zong et al. [16] proposed a randomly weighted CKF (RWCKF) to handle the limitations of the standard CKF, improving the accuracy of system state estimation. Yu et al. [17] proposed a distributed finite-time optimal fusion filtering method based on dynamic communication weights for distributed state estimation in sensor networks, which achieves fusion error convergence and optimal estimation with minimal variance within a limited number of iterations.
In summary, this article combines RTK and INS data based on extended Kalman filtering to output the initial odometer information. Then, the initial odometer information is weighted and fused with the pose information obtained from IMU integration, and the initial pose information used for point cloud registration is finally output to improve the speed and accuracy of point cloud registration.
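For illustration, the weighted fusion step described above can be sketched as an inverse-variance (information-weighted) combination of two pose estimates. The following Python sketch is illustrative only; the function and variable names (fuse_pose, p_odom, etc.) are our own, not the authors' implementation, and attitude angles are fused linearly for simplicity, which is only valid for small attitude differences.

```python
import numpy as np

def fuse_pose(p_odom, P_odom, p_imu, P_imu):
    """Inverse-variance weighted fusion of two pose estimates.

    p_* are 6D pose vectors (x, y, z, roll, pitch, yaw); P_* are
    their covariance matrices.
    """
    W_odom = np.linalg.inv(P_odom)           # information (weight) matrices
    W_imu = np.linalg.inv(P_imu)
    P_fused = np.linalg.inv(W_odom + W_imu)  # fused covariance
    p_fused = P_fused @ (W_odom @ p_odom + W_imu @ p_imu)
    return p_fused, P_fused
```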
SLAM systems are mostly used for positioning driverless passenger cars. To date, five families of SLAM positioning systems have formed: visual SLAM, laser SLAM, visual-inertial SLAM, laser-inertial SLAM and visual-laser-inertial SLAM. Koestler et al. [18] proposed a dense map construction method based on monocular cameras (TANDEM). The method rests on a new front-end odometry estimate and a bundle adjustment (BA) optimization algorithm and can build dense maps with rich texture information using only a low-cost monocular camera; however, the volume of front-end matching data is too large for deployment on real vehicles. Chen et al. [19] proposed a LiDAR positioning method based on Monte Carlo particle filtering (Range MCL), which can accurately estimate the pose of mobile robots or autonomous vehicles and achieve global positioning of the whole vehicle, but it has no loop closure mechanism and cannot control the global error well. Xu et al. [20] proposed a fast, robust and universal laser-inertial odometry framework (FAST-LIO2), which uses an iterated extended Kalman filter with tight multi-sensor coupling. It registers the raw point cloud of the current frame directly against the local map to solve the pose of the current frame, avoiding the computational waste of feature extraction and preserving the detailed features of the raw point cloud, which effectively improves the real-time accuracy of point cloud registration. However, it lacks anti-degradation capability and is prone to drift in localization and mapping in scenes such as tunnels, bridges and corridors.
In this paper, the fusion SLAM method based on improved NDT is used to obtain the real-time positioning information of the whole vehicle, which can still provide accurate positioning information of the whole vehicle for the computing platform without a GPS positioning signal [21]. Its advantage lies in the use of LiDAR to directly construct geometric feature constraints for point cloud registration without the need for feature extraction, greatly improving algorithm efficiency and ensuring real-time vehicle positioning. At the same time, the IMU and odometer are used to provide the initial values for point cloud registration, greatly accelerating the speed and accuracy of solving the vehicle’s position. It can still achieve high-precision localization and mapping without loop closure and has a certain degree of anti-degradation ability.

2.1. Point Cloud Registration Algorithm Based on NDT

2.1.1. Point Cloud Registration

In the process of building a LiDAR point cloud map, the scanning distance of the LiDAR is limited, so the point cloud obtained from one scan cannot contain all the environmental information, and the point cloud becomes sparser as distance increases. Therefore, during mapping, the vehicle carrying the LiDAR must move continuously through the environment to be mapped, collect enough environmental information with the LiDAR, splice the collected point cloud data frame by frame and finally produce a point cloud map. Point cloud registration is the process of converting scans of different frames of the same environment into the same coordinate system through a coordinate transformation matrix; the key is to find the optimal coordinate transformation matrix between the frames. The commonly used methods are the iterative closest point (ICP) method and the normal distributions transform (NDT) method. The coordinate transformation matrix solved by ICP has high accuracy, but its computational load is too large to meet real-time requirements. NDT is based on statistical principles: it divides the point cloud of the previous frame into grid cells, assumes the points in each cell obey a normal distribution, computes the probability of each transformed point within the corresponding cell, takes the product of all the probabilities as the target likelihood function and uses Newton iteration to optimize the coordinate transformation matrix until the likelihood reaches its maximum; the matrix at that point is the optimal coordinate transformation matrix [22]. However, the traditional NDT algorithm iterates with the Newton method, and the Hessian matrix of the objective function in its iterative equation is usually difficult to solve, which seriously affects the real-time performance of localization and mapping. Therefore, this paper improves the traditional NDT algorithm with the Levenberg–Marquardt iteration method [23]. Its advantage is that the Jacobian matrix of the objective function is used to approximate the Hessian matrix, which greatly improves computational efficiency; at the same time, the added diagonal operator resolves the singularity of the approximate Hessian matrix.

2.1.2. NDT Algorithm

NDT is a model based on a statistical principle. If a group of random vectors obeys a normal distribution, then its probability density function is [24]
$p(x) = \frac{1}{(2\pi)^{D/2}\sqrt{|\Sigma|}} \exp\left(-\frac{(x - q)^{T}\Sigma^{-1}(x - q)}{2}\right)$ (1)
where D is the dimension of the random vector, q is its mean vector and Σ is its covariance matrix. NDT uses probabilities to describe the distribution of point clouds, which greatly reduces the registration time.
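To make Equation (1) concrete, the following minimal NumPy sketch (the function name ndt_cell_probability is ours, not part of any library) evaluates the density of a 3D point against one grid cell:

```python
import numpy as np

def ndt_cell_probability(x, q, sigma):
    """Density of Equation (1) for a point x in a cell with mean q and
    covariance sigma; D is inferred from the vector length (D = 3 for
    LiDAR point clouds)."""
    d = x - q
    D = x.shape[0]
    norm = 1.0 / ((2.0 * np.pi) ** (D / 2.0) * np.sqrt(np.linalg.det(sigma)))
    return norm * np.exp(-0.5 * d @ np.linalg.solve(sigma, d))
```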

2.1.3. NDT Algorithm Flow

Point cloud registration generally refers to the splicing of the point clouds of two consecutive frames. The point cloud of the previous frame is generally the target point cloud, and the point cloud of the next frame is the source point cloud. During registration, the target point cloud is fixed, and the source point cloud is transferred into the coordinate system of the target point cloud through the coordinate transformation matrix. In this paper, the initial value of the coordinate transformation matrix is provided jointly by the Odom (odometer) and IMU, which improves the speed and accuracy of mapping. The source point cloud, after transformation by the initial coordinate transformation matrix, is compared with the target point cloud, and a total score is calculated; the total score is then maximized through the Levenberg–Marquardt iteration method, and the corresponding coordinate transformation matrix is the optimal coordinate transformation matrix. This paper compares the target point cloud with the coordinate-transformed source point cloud based on the improved NDT algorithm. The first step of the NDT algorithm is to mesh the target point cloud (i.e., divide it with grid cells of the same size), as shown in Figure 1 [25].
The second step of the NDT algorithm is to load the target point cloud data. We put the target point cloud into the grid divided in the previous step and calculate the mean value of the point cloud data in each grid with Equation (2):
$q = \frac{1}{m}\sum_{k=1}^{m} x_k$ (2)
where $x_k$, $k = 1, 2, 3, \ldots, m$, represents the coordinates of the points of the point cloud within a single grid cell.
We use Equation (3) to solve the covariance of the point cloud data in each grid and finally use Equation (4) to solve the probability density function of the point cloud distribution in each grid:
$\Sigma = \frac{1}{m}\sum_{k=1}^{m}(x_k - q)(x_k - q)^{T}$ (3)
$p(x) = \frac{1}{(2\pi)^{3/2}\sqrt{|\Sigma|}} \exp\left(-\frac{(x - q)^{T}\Sigma^{-1}(x - q)}{2}\right)$ (4)
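As a worked illustration of Equations (2) and (3), the per-cell statistics can be computed as follows (a minimal NumPy sketch; grid_statistics is an illustrative name, not the authors' code):

```python
import numpy as np

def grid_statistics(points):
    """Mean (Equation (2)) and covariance (Equation (3)) of the target
    points falling in one NDT grid cell; points has shape (m, 3)."""
    q = points.mean(axis=0)              # Equation (2)
    d = points - q
    sigma = (d.T @ d) / points.shape[0]  # Equation (3)
    return q, sigma
```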
The third step of the NDT algorithm is to solve the coordinate transformation matrix of the source point cloud relative to the target point cloud. The coordinate transformation matrix is generally composed of a translation vector and a rotation matrix. The translation vector describes the translation of the origin of the source point cloud coordinate system relative to the origin of the target point cloud coordinate system along the x, y and z directions. The rotation matrix describes the rotation of the source point cloud coordinate system relative to the target point cloud coordinate system about the x, y and z axes, which the industry usually expresses as roll, pitch and yaw, respectively. Here, $\gamma$, $\beta$ and $\alpha$ correspond to the roll, pitch and yaw, respectively. Therefore, the expression of the rotation matrix $R_{3\times3}$ is
$R_{3\times3} = [\,\delta \;\; \varepsilon \;\; \eta\,]$ (5)
The expressions of δ , ε and η are as follows:
$\delta = \begin{bmatrix} \cos\beta\cos\alpha \\ \sin\alpha\cos\beta \\ -\sin\beta \end{bmatrix}$ (6)
$\varepsilon = \begin{bmatrix} \cos\alpha\sin\beta\sin\gamma - \sin\alpha\cos\gamma \\ \cos\alpha\cos\gamma + \sin\alpha\sin\beta\sin\gamma \\ \cos\beta\sin\gamma \end{bmatrix}$ (7)
$\eta = \begin{bmatrix} \cos\alpha\sin\beta\cos\gamma + \sin\alpha\sin\gamma \\ \sin\alpha\sin\beta\cos\gamma - \cos\alpha\sin\gamma \\ \cos\gamma\cos\beta \end{bmatrix}$ (8)
The translation vector t is shown in Equation (9):
$t = [\,t_x, t_y, t_z\,]^{T}$ (9)
Assuming that the positions of a laser point in the target point cloud coordinate system and the source point cloud coordinate system are as illustrated in Equation (10) and Equation (11), respectively, the coordinate transformation relationship in Equation (13) between the same laser point in the target point cloud coordinate system and the source point cloud coordinate system can be obtained from Equation (12), where T is the coordinate transformation matrix of the source point cloud coordinate system relative to the target point cloud coordinate system:
$X = [\,x_i, y_i, z_i\,]^{T}$ (10)
$X' = [\,x'_i, y'_i, z'_i\,]^{T}$ (11)
$X = R_{3\times3}X' + t_{3\times1} = TX'$ (12)
$X = \begin{bmatrix} x_i \\ y_i \\ z_i \end{bmatrix} = [\,\delta \;\; \varepsilon \;\; \eta\,]\begin{bmatrix} x'_i \\ y'_i \\ z'_i \end{bmatrix} + \begin{bmatrix} t_x \\ t_y \\ t_z \end{bmatrix}$ (13)
In the operation process, it is necessary to provide a good initial value for the above formulas (i.e., an initial coordinate transformation matrix close to the optimal one). This accelerates NDT inter-frame matching, reduces the number of Levenberg–Marquardt iterations, lowers the computational load on the computing platform and achieves faster mapping. In the actual mapping process, the position and attitude of the real vehicle are generally collected through the GNSS, IMU, Odom and other sensors and converted into the initial coordinate transformation matrix relative to the target point cloud coordinate system. Using the IMU or Odom alone to provide initial values produces cumulative errors, which affect the final localization and mapping. Therefore, this paper uses both an IMU and an Odom to provide the initial coordinate transformation matrix for inter-frame matching, in which the IMU provides attitude information and the Odom provides position information.
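The composition of the initial coordinate transformation matrix from the IMU attitude and Odom position can be sketched as follows, following Equations (5)–(9). This is an illustrative NumPy implementation under the ZYX (yaw-pitch-roll) convention used above, not the authors' exact code:

```python
import numpy as np

def initial_transform(roll, pitch, yaw, t_xyz):
    """Initial 4x4 coordinate transformation matrix for inter-frame
    matching: attitude (gamma, beta, alpha) from the IMU, translation
    from the Odom."""
    g, b, a = roll, pitch, yaw
    # Rotation matrix R = [delta epsilon eta] of Equations (5)-(8).
    R = np.array([
        [np.cos(a)*np.cos(b),
         np.cos(a)*np.sin(b)*np.sin(g) - np.sin(a)*np.cos(g),
         np.cos(a)*np.sin(b)*np.cos(g) + np.sin(a)*np.sin(g)],
        [np.sin(a)*np.cos(b),
         np.cos(a)*np.cos(g) + np.sin(a)*np.sin(b)*np.sin(g),
         np.sin(a)*np.sin(b)*np.cos(g) - np.cos(a)*np.sin(g)],
        [-np.sin(b), np.cos(b)*np.sin(g), np.cos(g)*np.cos(b)],
    ])
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t_xyz  # translation vector t of Equation (9)
    return T
```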
The fourth step of the NDT algorithm is to solve the probability of the source point cloud, after coordinate transformation, within the target point cloud grid. After the initial coordinate transformation, the source point cloud is rotated and translated into the target point cloud coordinate system and falls into the grid cells defined by the target point cloud. X represents the coordinates of a laser point in the source point cloud after conversion to the target point cloud coordinate system through the coordinate transformation matrix. Using the point cloud distribution probability density function of the grid cell in which the laser point X is located, the probability that the transformed laser point has coordinates X is computed; the probabilities of the other laser points are computed in the same way.
The fifth step of the NDT algorithm is to maximize the objective likelihood function, which is obtained by multiplying the probability values of the laser points solved in the fourth step. When the objective likelihood function reaches its maximum value (i.e., when the probability value of each laser point is maximized), the coordinate transformation matrix at that point is taken to be the optimal solution. The expression of the objective likelihood function is
$\psi = \prod_{k=1}^{n} p(X_k)$ (14)
Since the objective likelihood function is a product, it is difficult to differentiate, and the subsequent optimization requires its derivatives. Therefore, we take the logarithm of both sides to simplify the function:
$\log\psi = \sum_{k=1}^{n} \log(p(X_k))$ (15)
The last step of the NDT algorithm is to solve for the optimal coordinate transformation matrix iteratively. In this paper, the Levenberg–Marquardt iteration method is used in place of the Newton iteration method, which improves both the real-time performance and the robustness of the localization and mapping system. The core of maximizing Equation (15) with the Levenberg–Marquardt method is to solve Equation (16), where J is the Jacobian matrix of the objective function, μ is the dynamic update factor, I is the identity matrix and g is the gradient vector of the objective function. The dynamic update factor is adjusted according to the value of the objective function: when the objective function increases, the current step is heading toward the optimal solution, so the dynamic update factor should be reduced and the step size increased to speed up the solution; otherwise, the dynamic update factor should be increased to reduce the step size and prevent divergence. The iterative updating equation is shown in Equation (17), where T is the coordinate transformation matrix (i.e., the iterated parameter):
$(J^{T}J + \mu I)\,\Delta T = -g$ (16)
$T = T + \Delta T$ (17)
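A single damped iteration in the spirit of Equations (16) and (17) might look like the following sketch. The μ update rule shown (halving or doubling) is one common heuristic, stated here as an assumption rather than the paper's exact schedule:

```python
import numpy as np

def lm_step(J, g, score, prev_score, mu):
    """One Levenberg-Marquardt update. J is the Jacobian and g the
    gradient of the negative log-likelihood being minimized; score is
    the log-likelihood of Equation (15), where larger is better."""
    # A larger objective (likelihood) means the step direction is good:
    # shrink mu for longer, Gauss-Newton-like steps; otherwise grow it
    # for shorter, gradient-descent-like steps.
    mu = mu * 0.5 if score > prev_score else mu * 2.0
    H_approx = J.T @ J  # Jacobian-based Hessian approximation
    delta = np.linalg.solve(H_approx + mu * np.eye(J.shape[1]), -g)
    return delta, mu    # applied as T <- T + delta (Equation (17))
```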

2.2. NDT Mapping Process and Effect Display

The NDT mapping method is a scan-to-map method [26], where scan refers to the current frame point cloud scanned by LiDAR while map refers to the target point cloud map formed by splicing historical frames frame by frame. Due to the limited scanning range of LiDAR, one scan cannot include all environmental information. Therefore, it is necessary to use LiDAR to continuously scan and collect environmental information, fix the target point cloud map and splice the collected laser point cloud to the target point cloud map frame by frame through the coordinate transformation matrix. The whole mapping process is shown in Figure 2.
In this paper, a Velodyne 128-line LiDAR is used to collect the raw point cloud information. After voxel filtering, the source point cloud is obtained and input into the NDT mapping module. The point cloud of the first frame is taken as the target point cloud map (i.e., the initial global map); the global map is then fixed and input into the NDT mapping module as the target point cloud. The initial coordinate transformation matrix is provided to the NDT mapping module by the IMU and Odom. Given the source and target point clouds, the NDT mapping module performs Levenberg–Marquardt iteration starting from the initial coordinate transformation matrix and outputs the optimal coordinate transformation matrix. Through the optimal transformation matrix, the laser point cloud of the current frame is spliced into the global map, the global map is updated, and the position of the whole vehicle relative to the coordinate origin of the target point cloud map is obtained. By repeating these operations, the laser point cloud of each new frame is continuously spliced into the global map, and finally the global point cloud map is built.
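The scan-to-map splicing loop described above can be summarized in the following sketch. The registration step is abstracted as a register(source, target, guess) callable standing in for the improved-NDT matcher; voxel_filter here keeps one point per voxel rather than the centroid, a simplification for brevity:

```python
import numpy as np

def voxel_filter(points, voxel):
    """Downsample an (n, 3) array by keeping one point per voxel."""
    keys = np.floor(points / voxel).astype(np.int64)
    _, idx = np.unique(keys, axis=0, return_index=True)
    return points[idx]

def transform(points, T):
    """Apply a 4x4 homogeneous transform to an (n, 3) point array."""
    return points @ T[:3, :3].T + T[:3, 3]

def build_map(scans, initial_guesses, register, voxel=0.5):
    """Scan-to-map splicing: the first frame seeds the global map, and
    each later scan is registered against the fixed map and appended."""
    global_map = voxel_filter(scans[0], voxel)
    poses = [np.eye(4)]
    for scan, guess in zip(scans[1:], initial_guesses):
        source = voxel_filter(scan, voxel)
        T = register(source, global_map, guess)  # improved-NDT matching
        global_map = np.vstack([global_map, transform(source, T)])
        poses.append(T)  # vehicle pose relative to the map origin
    return global_map, poses
```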
This paper selects an open field on the campus, uses the electric crawler excavator equipped with the Velodyne 128-line LiDAR to collect environmental information and builds a map of the scene. The acquisition equipment is shown in Figure 3.
To compare the error between the mapping track and the actual track, RTK is mounted on the electric crawler excavator, the RTK solution is used as the actual track (i.e., the reference track), and it is compared with the mapping track solved by NDT. The comparison results are shown in Figure 4, Figure 5 and Figure 6. As can be seen in Figure 4, the mapping track solved by NDT is basically consistent with the reference track. Figure 5 and Figure 6 show the forward (x axis direction) and lateral (y axis direction) errors between the mapping track and the reference track, respectively. According to Figure 5 and Figure 6, the maximum error between the mapping track and the reference track in the forward direction was 0.05 m, the average error was 0.025 m, and the root mean squared error was 0.029 m. Meanwhile, the maximum error in the lateral direction was 0.06 m, the average error was 0.032 m, and the root mean squared error was 0.037 m. The experimental results show that the fusion SLAM method proposed in this paper can achieve high mapping accuracy in small- and medium-sized scenes. Figure 7 shows the mapping scene, and Figure 8 shows the point cloud map established for the environment through NDT.
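For reference, the error statistics reported here (maximum, average and root mean squared error per axis) can be computed from time-aligned tracks with a few lines of NumPy; the sketch below assumes both tracks are sampled at matching timestamps:

```python
import numpy as np

def track_errors(track, reference):
    """Per-axis maximum, average and root-mean-squared error between a
    computed track and the RTK reference track; both are (n, 2) arrays
    of x/y positions sampled at matching times."""
    err = np.abs(track - reference)
    return {
        "max": err.max(axis=0),    # forward (x) and lateral (y)
        "mean": err.mean(axis=0),
        "rmse": np.sqrt((err ** 2).mean(axis=0)),
    }
```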

2.3. NDT Positioning Process and Effect Display

The NDT positioning process is similar to the NDT mapping process, except that the global map is not updated. The current frame point cloud scanned by the LiDAR is downsampled into the source point cloud through voxel filtering and matched against the global map. The initial coordinate transformation matrix is still provided by the IMU and Odom, and the optimal coordinate transformation matrix is obtained through Levenberg–Marquardt iteration, yielding the current position and attitude and thus the positioning of the whole vehicle. Figure 9 shows the overall flow chart of NDT positioning.
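Because localization reuses the mapping machinery without the map update, a single localization step reduces to one registration call. The sketch below reuses the hypothetical voxel_filter and register helpers from the mapping sketch above:

```python
def localize(scan, global_map, guess, register, voxel=0.5):
    """One localization step: match the filtered current scan against
    the fixed global map. Unlike mapping, the map is never updated; the
    returned transform is the current vehicle pose in the map frame."""
    return register(voxel_filter(scan, voxel), global_map, guess)
```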
To compare the error between the fusion SLAM positioning track and the real track, the positioning track calculated by RTK is regarded as the real track (i.e., the reference track) and compared with the positioning track calculated by the fusion SLAM method. The results are shown in Figure 10, Figure 11 and Figure 12. As can be seen from Figure 10, the positioning track obtained by the fusion SLAM method was basically consistent with the reference track. Figure 11 and Figure 12 show the forward (x axis direction) and lateral (y axis direction) errors between the positioning track and the reference track, respectively. According to Figure 11 and Figure 12, the maximum error between the positioning track and the reference track in the forward direction was 0.065 m, the average error was 0.034 m, and the root mean squared error was 0.04 m. In addition, the maximum error in the lateral direction was 0.06 m, the average error was 0.032 m, and the root mean squared error was 0.032 m. The experimental results show that the fusion SLAM method proposed in this paper can meet the real-time positioning needs of driverless construction machinery in small- and medium-sized scenes with high robustness.

3. Motion Control System of Crawler Construction Machinery Based on the Improved Pure Pursuit Algorithm

At present, the commonly used motion control algorithms in the field of unmanned driving include the pure pursuit algorithm [27], the Stanley algorithm [28], the linear quadratic regulator (LQR) algorithm [29] and the model predictive control (MPC) algorithm [30]. Among them, the pure pursuit and Stanley algorithms are based on vehicle kinematics and are applicable to low-speed motion control, while the LQR and MPC algorithms are based on vehicle dynamics and are suitable for high-speed motion control. Tracked construction machinery is low-speed mobile machinery, its dynamic model is relatively complex and difficult to establish, and its driving paths have curvature discontinuities, so the conditions for using the Stanley algorithm are not met. Therefore, the pure pursuit algorithm, which has the advantages of a simple principle, high robustness and no strict requirements on the driving path, was adopted as the motion control algorithm for tracked construction machinery in this paper. The pure pursuit algorithm is mainly used for trajectory tracking of low-speed vehicles [31]. However, the existing pure pursuit algorithm was designed based on the Ackerman steering model and cannot meet the control requirements of crawler construction machinery, and at present there is no suitable unmanned motion control scheme for crawler construction machinery to use as a reference. Therefore, this paper proposes an improved pure pursuit algorithm based on the motion characteristics of crawler construction machinery so that it can be applied to the tracking control of a given trajectory.
The existing pure pursuit algorithm simplifies Ackerman steering to a bicycle model [32], as shown in Figure 13. With the center of the rear axle of the bicycle as the tangent point and the longitudinal body as the tangent line, the rear wheel can always move forward along the arc of the preview point by controlling the angle δ of the front wheel [33].
The front wheel angle of the bicycle can be obtained from the geometric relationship:
$\tan(\delta) = \frac{L}{R}$ (18)
where L is the wheelbase of the front and rear wheels and R is the radius of the motion path.
According to Equation (18), since the wheelbase L of the front and rear wheels is known, when the radius R of the motion path is calculated, the corresponding rotation angle δ can also be calculated. Pure pursuit is a method based on a geometric relationship, as shown in Figure 14.
Here, x and y are the coordinate axes in the vehicle coordinate system, ( x 0 , y 0 ) represents the coordinates of the preview point in the vehicle coordinate system, L 0 is the distance from the origin of the vehicle coordinate system to the preview point, R is the radius of the motion path, and K is the curvature of the motion path. Based on the above geometric model relationship, the following derivation is made:
$D + x_0 = R$ (19)
$D^{2} + y_0^{2} = R^{2}$ (20)
$x_0^{2} + y_0^{2} = L_0^{2}$ (21)
$R = \frac{L_0^{2}}{2x_0}$ (22)
$K = \frac{1}{R}$ (23)
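The geometry in Equations (18)–(23) translates directly into code. The sketch below follows the paper's convention that x0 is the preview-point coordinate used in Equation (22); the guard for x0 = 0 (preview point straight ahead) is our addition:

```python
import numpy as np

def pure_pursuit_steering(x0, y0, L):
    """Classic pure pursuit for Ackerman steering: path radius R from
    Equation (22), curvature K from Equation (23), front-wheel angle
    delta from Equation (18). (x0, y0) is the preview point in the
    vehicle frame; L is the wheelbase."""
    if abs(x0) < 1e-9:                # preview point straight ahead
        return float("inf"), 0.0, 0.0
    L0_sq = x0 ** 2 + y0 ** 2         # Equation (21)
    R = L0_sq / (2.0 * x0)            # Equation (22)
    K = 1.0 / R                       # Equation (23)
    delta = np.arctan(L * K)          # Equation (18)
    return R, K, delta
```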
Based on the above analysis, the angle δ of the front wheel of the vehicle can be obtained by finding the radius R of the path between the current vehicle position and the preview point. Given the vehicle speed and planned waypoints, motion control of the vehicle based on Ackerman steering can be realized. The crawler construction machinery is different from ordinary Ackerman steering vehicles. Its motion control quantity is the travel speed of the left and right tracks. When the travel speed of the left track is greater than that of the right track, the crawler construction machinery turns right; otherwise, it turns left. When the left and right tracks travel at the same speed, the crawler construction machinery goes straight [34]. The kinematic model of crawler construction machinery is shown in Figure 15 [35]. This article takes the geometric center point of tracked construction machinery as the tangent point of the travel path, and by controlling the speed of the left and right tracks, the geometric center point of tracked construction machinery always travels along the planned path.
Here, R is the radius of the crawler construction machinery's movement path, which can be solved with Equations (19)–(22), and b is the distance between the central axis of the left (or right) track and the central axis of the whole vehicle. Given the forward speed $v_c$ of the crawler construction machinery, the following derivation can be made based on its kinematic model [36]. As the process of solving the path radius R is the same as in traditional pure pursuit, that derivation is not repeated here.
An excavator turning left is calculated as follows:
$w = \frac{v_c}{R} = \frac{2x_0 v_c}{L_0^{2}}$ (24)
$v_L = w(R - b) = \frac{w(L_0^{2} - 2x_0 b)}{2x_0}$ (25)
$v_R = w(R + b) = \frac{w(L_0^{2} + 2x_0 b)}{2x_0}$ (26)
An excavator turning right is calculated as follows:
$w = \frac{v_c}{R} = \frac{2x_0 v_c}{L_0^{2}}$ (27)
$v_L = w(R + b) = \frac{w(L_0^{2} + 2x_0 b)}{2x_0}$ (28)
$v_R = w(R - b) = \frac{w(L_0^{2} - 2x_0 b)}{2x_0}$ (29)
An excavator moving straight is calculated as follows:
$v_L = v_R$ (30)
where w is the angular speed of the crawler construction machinery rotating around the center O of the motion path, v L is the running speed of the left track and v R is the running speed of the right track. Thus far, the formula derivation of the improved pure pursuit algorithm for crawler construction machinery has been completed.
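The improved pure pursuit rules in Equations (24)–(30) can be combined into one routine. In the sketch below, the sign convention (x0 > 0 interpreted as a left turn, so the left track is the inner, slower one) is an assumption about the vehicle frame; the equations themselves follow the derivation above:

```python
def track_speeds(x0, y0, v_c, b):
    """Left/right track speeds for a differential (crawler) chassis per
    Equations (24)-(30). (x0, y0) is the preview point in the vehicle
    frame, v_c the set forward speed and b the distance from each track
    centerline to the vehicle's central axis."""
    if abs(x0) < 1e-6:                # preview point dead ahead
        return v_c, v_c               # Equation (30): v_L = v_R
    R = (x0 ** 2 + y0 ** 2) / (2.0 * abs(x0))  # path radius, Equation (22)
    w = v_c / R                       # angular speed, Equations (24)/(27)
    inner, outer = w * (R - b), w * (R + b)
    # Assumed convention: x0 > 0 is a left turn, so the left track is
    # the inner (slower) one; a right turn reverses the roles.
    return (inner, outer) if x0 > 0 else (outer, inner)
```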

4. Simulation Platform Construction and Real Vehicle Test

To verify the feasibility of the walking scheme based on the fusion SLAM and improved pure pursuit algorithms, the electric crawler excavator simulation platform and driverless platform were built as shown in Figure 16 and Figure 17. The verification approach of this paper was as follows: build a target path from a waypoint file in the constructed map's coordinate system; input the waypoint file and the real-time vehicle position obtained through the fusion SLAM method into the improved pure pursuit algorithm; and smooth the output control signal before sending it to the whole vehicle in the form of CAN messages, thereby completing the tracking of the given trajectory. The overall process is shown in Figure 18.
Considering the safety of the test, this paper first verifies the walking scheme through the simulation platform. The simulation platform visualizes the simulation results with the help of the integration module rviz in ROS, and the whole vehicle motion model was built by using the Xacro macro language and SolidWorks. See Figure 19 for details.
The green path in Figure 19 is the target path, the white point cloud is the point cloud map constructed in advance, and the colored point cloud is the historical observation point cloud collected through LiDAR. The blue crawler excavator is the motion model of the crawler excavator established in this paper. The white path is the driving track (i.e., the path between the current position of the whole vehicle and the preview point); it constantly changes as the preview point changes, so as to complete the tracking of the whole target path.
In this paper, constant speed tracking was adopted (i.e., the speed was constant in the tracking process). The vehicle speed set in the simulation process was 2 km/h (i.e., 0.56 m/s). The real-time positioning of the whole vehicle could be obtained by matching the observation point cloud with the point cloud map. The speeds of the left and right tracks could be obtained by inputting the position and attitude of the whole vehicle and the waypoint information of the target path into the improved pure pursuit algorithm and then smoothing it, as shown in Figure 20.
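The paper does not specify the smoothing filter applied to the commanded track speeds; a simple moving average, shown below as one plausible choice, is often used for this purpose:

```python
import numpy as np

def smooth(speeds, window=5):
    """Moving-average smoothing of a commanded track-speed sequence
    before it is sent over the CAN bus."""
    kernel = np.ones(window) / window
    return np.convolve(speeds, kernel, mode="same")
```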
Since RTK could not be accessed in the simulation environment, to test the simulation tracking effect, the trajectory calculated by the fusion SLAM method was taken as the whole vehicle driving trajectory and compared with the target path, as shown in Figure 21, Figure 22 and Figure 23. As can be seen in Figure 21, the driving track was basically consistent with the target path in the simulation process. Figure 22 and Figure 23 show the forward (x axis direction) error and the lateral (y axis direction) error between the driving track and the target path, respectively. According to Figure 22 and Figure 23, the maximum error between the driving track and the target path in the forward direction was 0.082 m, the average error was 0.017 m, and the root mean squared error was 0.025 m. In addition, the maximum error in the lateral direction was 0.162 m, the average error was 0.038 m, and the root mean squared error was 0.055 m. The simulation results basically prove the feasibility of the driverless walking method for crawler construction machinery proposed in this paper.
To further verify the feasibility of the driverless walking method of crawler construction machinery proposed in this paper, a real vehicle tracking walking test was carried out based on the driverless platform of the pure electric crawler excavator built in this paper. Through the integration module rviz in ROS, the information collected by the sensors of the driverless platform and the real-time running state of the vehicle test were visualized, and the visualized information was transmitted from the computing unit in the real vehicle test platform to the display screen in the operation room through wireless image transmission equipment. On the premise of ensuring the safety of the testers, the whole test process was monitored. See Figure 24, Figure 25 and Figure 26 for more details. The main equipment parameters of the actual vehicle test platform are shown in Table 1. The onboard computing platform we used was TW-T609, developed by Tuwei Technology Company, with a computing power of up to 32 TOPS, which can effectively meet the real-time operation and verification of various unmanned driving algorithms in real vehicles.
The green path in Figure 25 is the path that needs to be tracked in the real vehicle test in this paper. The white point cloud is the point cloud map constructed in advance. Due to the limited computing power of the on-board computing unit, the observation point cloud collected by LiDAR during the real vehicle test was not displayed on the rviz.
The real vehicle test still adopted constant speed tracking, and the set vehicle speed was the same as in the simulation test. The solution process for the left and right track speeds was also the same as in the simulation test. The only difference was that the observation point cloud of the simulation test was recorded in advance by LiDAR, while the observation point cloud of the real vehicle test was obtained from LiDAR in real time. The calculated speeds of the left and right tracks are shown in Figure 27.
To test the actual tracking effect, RTK was connected to the whole vehicle. The trajectory calculated by RTK was regarded as the actual driving track and compared with the target path, as shown in Figure 28, Figure 29 and Figure 30. As can be seen in Figure 28, the driving track was basically consistent with the target path in the real vehicle test. Figure 29 and Figure 30 show the forward (x axis direction) and lateral (y axis direction) errors between the driving track of the real vehicle test and the target path, respectively. According to Figure 29 and Figure 30, the maximum error between the driving track of the real vehicle test and the target path in the forward direction was 0.083 m, the average error was 0.023 m, and the root mean squared error was 0.031 m. Additionally, the maximum error in the lateral direction was 0.152 m, the average error was 0.044 m, and the root mean squared error was 0.052 m. The real vehicle test results again prove the feasibility of the crawler construction machinery driverless walking method based on the NDT fusion SLAM and improved pure pursuit algorithms.

5. Conclusions

Aiming at the problem of weak GPS positioning signals in some working scenes of construction machinery, a fusion SLAM positioning system based on NDT was designed in this paper. Through multi-sensor fusion, the speed and quality of mapping were effectively improved. The tests show that the positioning system can still achieve accurate vehicle positioning without GPS signal access. According to the walking characteristics of crawler construction machinery, this paper improved the existing pure pursuit algorithm to make it suitable for the motion control of crawler construction machinery. Based on the NDT fusion SLAM and improved pure pursuit algorithms, this paper proposed a walking method suitable for driverless crawler construction machinery and built a crawler construction machinery driverless simulation platform and real vehicle test platform to carry out the corresponding simulation tests and real vehicle tests. The simulation and real vehicle test results show that the crawler construction machinery can track the set target path well through the walking method proposed in this paper. The driverless walking scheme of crawler construction machinery proposed in this paper fills the gap in the field of driverless construction machinery walking schemes to a certain extent and lays a foundation for the development of related technologies for driverless construction machinery.
Because the fusion SLAM method in this paper adopts the most basic NDT point cloud registration method, it can only achieve accurate positioning in small-scale scenes; in the future, we will consider improving the existing NDT point cloud registration method to achieve accurate positioning in large-scale scenes. The improved pure pursuit algorithm proposed in this paper is currently only applicable to low-speed mobile construction machinery; we will consider the dynamics of construction machinery and develop new motion control algorithms suitable for high-speed mobile construction machinery. At the same time, this paper only realized trajectory tracking of a given path. Later, we will consider introducing perception and cost maps, combined with the hybrid A* algorithm, to realize autonomous planning and tracking of the construction machinery's operation path. Hybrid A* [37] is a path planning algorithm that takes a vehicle's kinematic constraints into account. Unlike the Dijkstra algorithm [38] and the A* algorithm [39], it can generate a smooth path that conforms to the vehicle's kinematic constraints, which helps to improve the smoothness of subsequent motion control. At the same time, we will use the PointPillars algorithm [40] and CenterPoint algorithm [41], commonly used in the industry, as a perception scheme and create corresponding datasets for training based on common construction machinery working conditions, so as to achieve a perception scheme suitable for those conditions and provide more reference and basis for subsequent path planning, decision making and obstacle avoidance for construction machinery.

Author Contributions

Conceptualization, J.W.; methodology, J.W. and Z.F.; software, J.W. and C.L.; validation, J.W. and C.L.; formal analysis, J.W. and Z.F.; investigation, Y.Y.; resources, T.L. and H.R.; data curation, J.W. and Z.F.; writing—original draft preparation, J.W.; writing—review and editing, H.R.; visualization, J.W. and H.R.; supervision, T.L. and H.R.; project administration, J.W., T.L. and H.R.; funding acquisition, T.L. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the National Natural Science Foundation of China (Grant Nos. 52275055 and 52175051), key projects of the Natural Science Foundation of Fujian Province (Grant No. 2021J02013), the Fujian University industry university research joint innovation project plan (Grant No. 2022H6007) and the Natural Science Foundation of Fujian Province (Grant No. 2022J01300).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Lin, T. Energy-Saving Technology and Application of Construction Machinery; China Machine Press: Beijing, China, 2018. [Google Scholar]
  2. Ge, L.; Quan, L.; Zhang, X.; Dong, Z.; Yang, J. Power matching and energy efficiency improvement of hydraulic excavator driven with speed and displacement variable power source. Chin. J. Mech. Eng. Engl. Ed. 2019, 32, 1–12. [Google Scholar] [CrossRef]
  3. Lin, T.; Yao, Y.; Xu, W.; Fu, S.; Ren, H.; Chen, Q. Driverless walking method of electric construction machinery based on environment recognition. Jixie Gongcheng Xuebao 2021, 57, 42–49. [Google Scholar]
  4. Zhang, X. Application analysis of unmanned technology in construction machinery. IOP Conf. Ser. Earth Environ. Sci. 2019, 310, 022005. [Google Scholar] [CrossRef]
  5. Ma, Y.; Li, Z.; Sotelo, M.A. Testing and evaluating driverless vehicles’ intelligence: The Tsinghua lion case study. IEEE Intell. Transp. Syst. Mag. 2020, 12, 10–22. [Google Scholar] [CrossRef]
  6. Oussama, A.; Mohamed, T. End-to-End Deep Learning for Autonomous Vehicles Lateral Control Using CNN. In Advanced Intelligent Systems for Sustainable Development; Springer: Cham, Switzerland, 2022; pp. 705–712. [Google Scholar]
  7. Milford, M.; Anthony, S.; Scheirer, W. Self-driving vehicles: Key technical challenges and progress off the road. IEEE Potentials 2019, 39, 37–45. [Google Scholar] [CrossRef]
  8. Hess, W.; Kohler, D.; Rapp, H.; Andor, D. Real-time loop closure in 2D LIDAR SLAM. In Proceedings of the 2016 IEEE International Conference on Robotics and Automation (ICRA), Stockholm, Sweden, 16–21 May 2016; pp. 1271–1278. [Google Scholar]
  9. Biber, P.; Straßer, W. The normal distributions transform: A new approach to laser scan matching. In Proceedings of the 2003 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2003), Las Vegas, NV, USA, 27–31 October 2003; pp. 2743–2748. [Google Scholar]
  10. Yu, X.; Huai, Y.; Yao, Z.; Sun, Z.; Yu, A. Key technologies in autonomous vehicle for engineering. Jilin Daxue Xuebao 2021, 51, 1153–1168. [Google Scholar]
  11. Yeong, D.J.; Velasco-Hernandez, G.; Barry, J.; Walsh, J. Sensor and sensor fusion technology in autonomous vehicles: A review. Sensors 2021, 21, 2140. [Google Scholar] [CrossRef]
  12. Gao, B.; Hu, G.; Zhong, Y.; Zhu, X. Distributed state fusion using sparse-grid quadrature filter with application to INS/CNS/GNSS integration. IEEE Sens. J. 2021, 22, 3430–3441. [Google Scholar] [CrossRef]
  13. Gao, S.; Zhong, Y.; Zhang, X.; Shirinzadeh, B. Multi-sensor optimal data fusion for INS/GPS/SAR integrated navigation system. Aerosp. Sci. Technol. 2009, 13, 232–237. [Google Scholar] [CrossRef]
  14. Lai, Q.; Yuan, H.; Wei, D.; Wang, N.; Li, Z.; Ji, X. A multi-sensor tight fusion method designed for vehicle navigation. Sensors 2020, 20, 2551. [Google Scholar] [CrossRef]
  15. Gao, S.; Zhong, Y.; Shirinzadeh, B. Random weighting estimation for fusion of multi-dimensional position data. Inf. Sci. 2010, 180, 4999–5007. [Google Scholar] [CrossRef]
  16. Zong, H.; Gao, Z.; Wei, W.; Zhong, Y.; Gu, C. Randomly weighted CKF for multisensor integrated systems. J. Sens. 2019, 2019, 1–19. [Google Scholar] [CrossRef]
  17. Yu, H.; Dai, K.; Li, Q.; Li, H.; Zhang, H. Optimal Distributed Finite-Time Fusion Method for Multi-Sensor Networks under Dynamic Communication Weight. Sensors 2023, 23, 7397. [Google Scholar] [CrossRef]
  18. Koestler, L.; Yang, N.; Zeller, N.; Cremers, D. Tandem: Tracking and dense mapping in real-time using deep multi-view stereo. In Proceedings of the Conference on Robot Learning, PMLR, Auckland, New Zealand, 14–18 December 2022; pp. 34–45. [Google Scholar]
  19. Chen, X.; Vizzo, I.; Läbe, T.; Behley, J.; Stachniss, C. Range image-based LiDAR localization for autonomous vehicles. In Proceedings of the 2021 IEEE International Conference on Robotics and Automation (ICRA), Xi’an, China, 30 May–5 June 2021; IEEE: Piscataway, NJ, USA, 2021; pp. 5802–5808. [Google Scholar]
  20. Xu, W.; Cai, Y.; He, D.; Lin, J.; Zhang, F. FAST-LIO2: Fast direct LiDAR-inertial odometry. IEEE Trans. Robot. 2022, 38, 2053–2073. [Google Scholar] [CrossRef]
  21. Cadena, C.; Carlone, L.; Carrillo, H.; Latif, Y.; Scaramuzza, D.; Neira, J.; Reid, I.; Leonard, J.J. Past, present, and future of simultaneous localization and mapping: Toward the robust-perception age. IEEE Trans. Robot. 2016, 32, 1309–1332. [Google Scholar] [CrossRef]
  22. Magnusson, M.; Lilienthal, A.; Duckett, T. Scan registration for autonomous mining vehicles using 3D NDT. J. Field. Rob. 2007, 24, 803–827. [Google Scholar] [CrossRef]
  23. Moré, J.J. The Levenberg-Marquardt algorithm: Implementation and theory. In Numerical Analysis, Proceedings of the Biennial Conference, Dundee, Scotland, 28 June–1 July 1977; Springer: Berlin/Heidelberg, Germany, 2006; pp. 105–116. [Google Scholar]
  24. Magnusson, M. The Three-Dimensional Normal-Distributions Transform: An Efficient Representation for Registration, Surface Analysis, and Loop Detection. Ph.D. Thesis, Örebro University, Örebro, Sweden, 2013. [Google Scholar]
  25. Chen, S.; Ma, H.; Jiang, C.; Zhou, B.; Xue, W.; Xiao, Z.; Li, Q. NDT-LOAM: A real-time lidar odometry and mapping with weighted NDT and LFA. IEEE Sens. J. 2021, 22, 3660–3671. [Google Scholar] [CrossRef]
  26. Deschaud, J. IMLS-SLAM: Scan-to-model matching based on 3D data. In Proceedings of the 2018 IEEE International Conference on Robotics and Automation (ICRA), Brisbane, QLD, Australia, 21–25 May 2018; pp. 2480–2485. [Google Scholar]
  27. Samuel, M.; Hussein, M.; Mohamad, M.B. A review of some pure-pursuit based path tracking techniques for control of autonomous vehicle. Int. J. Comput. Appl. 2016, 135, 35–38. [Google Scholar] [CrossRef]
  28. Thrun, S.; Montemerlo, M.; Dahlkamp, H.; Stavens, D.; Aron, A.; Diebel, J.; Fong, P.; Gale, J.; Halpenny, M.; Hoffmann, G.; et al. Stanley: The robot that won the DARPA Grand Challenge. J. Field Robot. 2006, 23, 661–692. [Google Scholar] [CrossRef]
  29. Bemporad, A.; Morari, M.; Dua, V.; Pistikopoulos, E.N. The explicit linear quadratic regulator for constrained systems. Automatica 2002, 38, 3–20. [Google Scholar] [CrossRef]
  30. Camacho, E.F.; Alba, C.B. Model Predictive Control; Springer: Berlin/Heidelberg, Germany, 2013. [Google Scholar]
  31. Liu, J.; Yang, Z.; Huang, Z.; Li, W.; Dang, S.; Li, H. Simulation Performance Evaluation of Pure Pursuit, Stanley, LQR, MPC Controller for Autonomous Vehicles. In Proceedings of the 2021 IEEE International Conference on Real-time Computing and Robotics (RCAR), Xining, China, 15–19 July 2021; IEEE: Piscataway, NJ, USA, 2021; pp. 1444–1449. [Google Scholar]
  32. Park, M.; Lee, S.; Han, W. Development of steering control system for autonomous vehicle using geometry-based path tracking algorithm. ETRI J. 2015, 37, 617–625. [Google Scholar] [CrossRef]
  33. Snider, J.M. Automatic steering methods for autonomous automobile path tracking. Master’s Thesis, Carnegie Mellon University, Pittsburgh, PA, USA, 2009. [Google Scholar]
  34. Zhang, C.; Dong, W.; Xiong, Z.; Hu, Z.; Wang, D.; Ding, Y. Design and Experiment of Fuzzy Adaptive Pure Pursuit Control of Crawler-type Rape Seeder. Nongye Jixie Xuebao 2021, 52. [Google Scholar]
  35. Lee, K.; Kim, B.; Choi, H.; Moon, H. Development of Path Tracking Algorithm and Variable Look Ahead Distance Algorithm to Improve the Path-Following Performance of Autonomous Tracked Platform for Agriculture. J. Korea Robot Soc. 2022, 17, 142–151. [Google Scholar] [CrossRef]
  36. Wang, S.; Fu, S.; Li, B.; Wang, S. Path tracking control of tracked paver based on improved pure pursuit algorithm. In Proceedings of the 2021 40th Chinese Control Conference (CCC), Shanghai, China, 26–28 July 2021; IEEE: Piscataway, NJ, USA, 2021; pp. 4187–4192. [Google Scholar]
  37. Dolgov, D.; Thrun, S.; Montemerlo, M.; Diebel, J. Path planning for autonomous vehicles in unknown semi-structured environments. Int. J. Robot. Res. 2010, 29, 485–501. [Google Scholar] [CrossRef]
  38. Dijkstra, E.W. A note on two problems in connexion with graphs. In Edsger Wybe Dijkstra: His Life, Work, and Legacy; ACM Books: New York, NY, USA, 2022; pp. 287–290. [Google Scholar]
  39. Likhachev, M.; Ferguson, D. Planning long dynamically feasible maneuvers for autonomous vehicles. Int. J. Robot. Res. 2009, 28, 933–945. [Google Scholar] [CrossRef]
  40. Lang, A.H.; Vora, S.; Caesar, H.; Zhou, L.; Yang, J.; Beijbom, O. PointPillars: Fast encoders for object detection from point clouds. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019; pp. 12697–12705. [Google Scholar]
  41. Yin, T.; Zhou, X.; Krahenbuhl, P. Center-based 3d object detection and tracking. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA, 20–25 June 2021; pp. 11784–11793. [Google Scholar]
Figure 1. 3D original point cloud and its meshed effect.
Figure 2. NDT mapping flow chart.
Figure 3. Point cloud collection equipment.
Figure 4. Comparison between the mapping track and the reference track.
Figure 5. Forward mapping error (x axis direction).
Figure 6. Lateral mapping error (y axis direction).
Figure 7. Mapping scene.
Figure 8. Point cloud map established for the mapping scene.
Figure 9. NDT positioning flow chart.
Figure 10. Comparison between the positioning track and the reference track.
Figure 11. Forward positioning error (x axis direction).
Figure 12. Lateral positioning error (y axis direction).
Figure 13. Kinematic model of the bicycle.
Figure 14. Analysis of the pure pursuit geometric model.
Figure 15. Kinematic model of the crawler construction machinery.
Figure 16. Simulation platform of the electric crawler excavator.
Figure 17. Driverless platform of the electric crawler excavator.
Figure 18. Flow chart of established trajectory tracking technology.
Figure 19. Visualization of simulation results.
Figure 20. Left and right track speeds in the simulation process.
Figure 21. Comparison between the simulated driving track and the target path.
Figure 22. Forward tracking error (x axis direction) in the simulation process.
Figure 23. Lateral tracking error (y axis direction) in the simulation process.
Figure 24. Real-time monitoring platform for the real vehicle test.
Figure 25. Real vehicle test target path planning.
Figure 26. Overall process of the real vehicle test.
Figure 27. Left and right track speeds in the real vehicle test.
Figure 28. Comparison between the driving track of the real vehicle test and the target path.
Figure 29. Forward tracking error (x axis direction) in the real vehicle test.
Figure 30. Lateral tracking error (y axis direction) in the real vehicle test.
Table 1. Main equipment parameters of the actual vehicle test platform.

Device Name       | Equipment Model | Equipment Parameters
IMU               | N100            | Output frequency: 400 Hz; serial port baud rate: 921,600; pitch/roll accuracy: 0.05°; number of axes: 9
RTK/INS           | CHCNAV CGI-610  | Attitude accuracy: 0.1°; positioning accuracy: 1 cm; output frequency: 100 Hz; initialization time: 1 min
LiDAR             | Velodyne 16     | Measurement range: 300 m; measurement accuracy: ±3 cm; vertical measurement angle: 40°; horizontal measurement angle: 360°; measurement frequency: 5–20 Hz
On-board computer | TW-T609         | CPU: 8-core Arm architecture; GPU: 512-core Volta architecture + 64 Tensor cores; computing power: 32 TOPS