Article

LiDAR Odometry and Mapping Based on Semantic Information for Maize Field

College of Engineering, China Agricultural University, Beijing 100091, China
*
Author to whom correspondence should be addressed.
Agronomy 2022, 12(12), 3107; https://doi.org/10.3390/agronomy12123107
Submission received: 18 September 2022 / Revised: 24 November 2022 / Accepted: 29 November 2022 / Published: 7 December 2022

Abstract

Agricultural environment mapping is a prerequisite for the autonomous navigation of agricultural robots. Due to the undulating terrain and cluttered environment, it is challenging to accurately map a maize field using existing LOAM (LiDAR odometry and mapping) methods. This paper proposes a LOAM method based on maize stalk semantic features for 6-DOF (degrees of freedom) pose estimation and field mapping with agricultural robots operating in a dynamic environment. A piecewise plane fitting method filters the ground points of the complex farmland terrain. To eliminate unstable factors in the environment, we introduce the semantic information of maize plants into the feature extraction: a regional growth method segments the maize stalk instances, each instance is parameterized as a line model, and an optimization method calculates the pose transformation. Finally, the mapping method corrects the drift error of the odometry and outputs the maize field map. We compare our method with the GICP and LOAM methods. The trajectory relative errors of our method are 0.88%, 0.96%, and 2.12% on the three datasets, better than those of the other methods. At the same time, the map drawn by our method has less ghosting and clearer plant edges. The results show that our method is more robust and accurate than the other methods owing to the introduction of semantic information of the environment. The resulting maize field maps can be further used in precision agriculture.

1. Introduction

With the increasing cost of agricultural labor, accelerating agricultural modernization and developing smart agriculture has become the development direction of China’s agriculture [1,2,3]. The intelligent agricultural robot is a critical component of intelligent agriculture [4,5,6]. Deploying intelligent robots is appealing for a variety of applications, including yield and health estimation [7,8], weed detection [9], pest detection [10], and disease detection [11]. Among the capabilities of an intelligent robot, map building and state estimation are among the most fundamental prerequisites. The resulting map can be used to estimate yield and health, to calculate planting density, row spacing, and plant spacing at the plot level, and to measure plant height and monitor growth at the level of individual plants.
To realize long-term autonomous navigation of agricultural robots with higher accuracy and lower cost, this work builds on simultaneous localization and mapping (SLAM) technology and uses a LiDAR (Light Detection and Ranging) sensor to map the environment while detecting the crop plants.
Mapping has been applied to orchards [12,13], forests [14], and agricultural scenes. Underwood et al. [15] used a hidden semi-Markov model to segment individual trees, introduced a descriptor to represent the appearance of each tree, and then associated new observations with the orchard map. Guo et al. [16] used a target ball to reconstruct a single fruit tree. For large-scale agricultural scene reconstruction, Su et al. [17] and Qiu et al. [18] used tools such as target balls to construct salient features for point cloud registration. Vázquez-Arellano et al. [19,20] used the positioning information of the acquisition platform for point cloud registration. Dong et al. [21] and Sun et al. [22] extracted features from images for scene reconstruction.
Before SLAM methods, the most commonly used methods for registering two LiDAR scans were ICP (Iterative Closest Point) [23] and NDT (Normal Distributions Transform) [24]. Many ICP variants have been proposed to improve efficiency and accuracy, including point-to-plane ICP [25], GICP (Generalized ICP) [26], and point-to-line ICP [27]. However, the computational cost of ICP grows with the number of points, and the method has low robustness in dynamic environments.
Therefore, feature-based matching methods have attracted more attention as they require fewer computational resources by extracting representative features from the environment. Many feature detectors have been proposed [28,29], such as FPFH (Fast Point Feature Histogram) [30]. Serafin et al. [31] proposed a method to extract plane and line features from sparse point clouds quickly and verified the accuracy of the feature extraction method combined with the SLAM method.
Because feature extraction is simple and effective, more and more SLAM methods use it to build the LiDAR odometry. Ji et al. [32] classified points into plane points and edge points by computing a smoothness measure and then matched them with the plane and line segments of the next frame to calculate the transformation; this method achieved the highest LiDAR odometry accuracy on the KITTI benchmark. Shan et al. [33] proposed LeGO-LOAM to improve LOAM: the points in the environment are first segmented, plane and edge points are detected according to smoothness, and matching of segment labels is added on top of the LOAM matching. Jiang et al. [34] divided the points into voxels, calculated geometric features per voxel, matched the features to feature labels by distance, and integrated a Partition of Unity (POU)-based feature extraction method into the SLAM system. The SegMatch method [35] first segments the point cloud, then extracts eigenvalue and shape-histogram descriptors, and uses a random forest classifier to match segments between two scans. Chen et al. [36] proposed a LiDAR odometry and mapping framework for forestry environments in which the ground and trunk instances are first segmented and plane and cylinder parameters are extracted as features for data association; this method shows a degree of generality.
The above methods have achieved high accuracy on public datasets or in specific application scenarios, but they are not suitable for the farmland environment. In a maize planting environment, adjacent maize plants usually have a similar appearance, making it difficult to distinguish them. On the other hand, maize deforms easily under wind, so the appearance of the same plant may differ even between two adjacent frames. Therefore, in an intensive planting environment with dynamically changing maize, the above LOAM methods cannot extract representative features or perform accurate feature matching.
Our work focuses on solving the LOAM problem of the autonomous system working in the maize field. Previous methods rely on texture features and are fragile in the farmland environment. Therefore, this paper presents a semantic-based LOAM method. The key idea is to extract robust maize stalk features using semantic information and extract maize stalk instances as parameterized landmarks to obtain a robust LOAM solution.
The method includes (1) ground filtering with a piecewise plane fitting method; (2) maize stalk detection with a regional growth method; (3) landmark model parameter extraction; (4) data association and motion estimation; and (5) LiDAR mapping. The primary contributions of this paper are as follows:
  • A maize stalk detection pipeline is developed, and each maize stalk instance is parameterized into a landmark model.
  • A LOAM method that performs robust data association and ego-motion estimation in densely planted maize fields is developed.
  • The proposed method is verified on maize field data at different growth stages.

2. Materials and Methods

2.1. Hardware and Sensors

The 3D point cloud data used in this study were obtained with a joystick-controlled mobile robot platform that navigated in the experimental field. The LiDAR scanner is an RS-LiDAR-32 (Robosense, Shenzhen, China) mounted at a height of 1.2 m; its installation angle can be adjusted in the range of 0° to −15°. The specifications of the LiDAR are listed in Table 1. To compensate for the influence of rough terrain and mechanical vibration on the LiDAR, the robot platform is equipped with an Inertial Measurement Unit (IMU), and a GNSS receiver is mounted on the platform to obtain the ground truth. The mobile robot platform and its composition are shown in Figure 1.

2.2. Software System Overview

Figure 2 shows the data processing flow chart of our LiDAR odometry and mapping method. Because the LOAM framework offers good real-time performance and scalability, our method adopts a similar framework.
The problem is to perform ego-motion estimation with the point cloud perceived by the LiDAR in a maize field and to build a map of the agricultural environment. A sweep refers to the point cloud obtained by the LiDAR completing one scan coverage, and $P_k$ denotes the point cloud perceived during sweep $k$. First, the ground points are filtered from the current sweep $P_{k+1}$ and the previous sweep $P_k$ by the piecewise ground filtering method, and the maize stalk instances are detected by the maize stalk segmentation method. Taking $P_k$ as an example, the $N_k$ maize stalk instances are put into the set $G_k = \{G_k^i\}_{i=1}^{N_k}$, and the $N_j$ feature points contained in one stalk instance are put into the set $G_k^i = \{p_j\}_{j=1}^{N_j}$. Next, for each $G_k^i$, the landmark parameters are extracted into $F_k = \{f_k^i\}_{i=1}^{N_k}$; the number of landmark parameters equals the number of stalk instances. Finally, the pose transform $T_{k+1}^L$ is estimated by data association and motion estimation, where $T_{k+1}^L$ is the LiDAR pose transform between the points in the LiDAR coordinate system {L}.
The LiDAR mapping algorithm then processes the odometry output further and calculates $T_{i+1}^W$, the LiDAR pose transform in the world coordinate system {W}. Finally, the map $Q_i$ is updated to $Q_{i+1}$.
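To make the notation concrete, the sketch below (not the authors' code) shows one way the per-sweep quantities $P_k$, $G_k$, $F_k$, and $T_{k+1}^L$ could be held in memory; the class and field names are illustrative assumptions.

```python
from dataclasses import dataclass, field
from typing import List
import numpy as np

@dataclass
class StalkLine:
    point: np.ndarray      # p_m = (x_m, y_m, z_m), a point on the fitted line
    direction: np.ndarray  # v = (m, n, k), direction vector of the line

@dataclass
class Sweep:
    points: np.ndarray                                          # P_k, shape (N_p, 3)
    instances: List[np.ndarray] = field(default_factory=list)   # G_k: one (N_j, 3) array per stalk
    landmarks: List[StalkLine] = field(default_factory=list)    # F_k: one line model per stalk
    T_L: np.ndarray = field(default_factory=lambda: np.eye(4))  # pose transform in LiDAR frame {L}
```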

2.3. Piecewise Ground Filtering

Many methods assume that the ground is flat and that everything standing up from it is an obstacle. However, the ground is not always planar in the agricultural scenes encountered by autonomous robots: undulating and sloped terrain, curved uphill/downhill surfaces, and large roll/pitch angles of the robot remain challenging. We modified the method of Asvadi et al. [37] to make it suitable for agricultural ground filtering; the method can fit the ground automatically and accurately without manual adjustment of parameters. It includes four steps: (1) data slicing; (2) gating; (3) RANSAC plane fitting; (4) validation.

2.3.1. Data Slicing

This process divides the point cloud into different slices with the mobile robot platform position as the origin. First, a pass-through filter removes the points outside the region of interest (ROI); the x-axis range of the ROI was set to 0~10 m and the y-axis range to −5~5 m. The LiDAR scanning harness model is established as shown in Figure 3.
Since there is a blind area around the acquisition platform whose radius depends on the installation angle, the distance from the front end of the initial slice to the center of the mobile robot platform, $R_0$, follows from the geometric relationship:
$$R_0 = h \cdot \tan\!\left(\frac{\pi}{2} - \alpha_L - \alpha_b\right)$$  (1)
The angle between the front end of the initial area and the vertical line of the LiDAR installation position is $\alpha_0 = \arctan(R_0 / h)$. Then the distance from the front end of each slice to the mobile robot platform, $R_i$, is:
$$R_i = h \cdot \tan\!\left(\alpha_0 + i \cdot \eta \cdot \Delta\alpha - \alpha_L\right)$$  (2)
where $h$ is the height of the LiDAR sensor above the ground; $\alpha_L$ is the LiDAR mount angle; $\alpha_b$ is the minimum vertical angle of the LiDAR laser transmitter; $\eta$ is the number of scanning harnesses for the LiDAR data in each region; and $\Delta\alpha$ is the vertical resolution of the LiDAR (°). Considering $P = \{p_j = (x_j, y_j, z_j)\}_{j=1}^{N_p}$ of size $N_p$, each region is represented as $S = \{s_i : R_{i-1} < p_j < R_i\}_{i=1}^{N}$ of size $N$.
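Assuming the slice boundaries $R_0, R_1, \dots, R_N$ from Equations (1) and (2) have been computed, the following sketch shows how points could be assigned to the slices $s_i$; the function name and the use of the forward x-coordinate as the range are assumptions, not the authors' implementation.

```python
import numpy as np

def slice_points(points, boundaries):
    """Assign each point to a range slice s_i with R_{i-1} < r <= R_i.
    `points` is an (N_p, 3) array in the LiDAR frame (ROI filtering already applied);
    `boundaries` is the increasing sequence [R_0, R_1, ..., R_N].
    Returns a list of index arrays, one per slice."""
    r = points[:, 0]  # forward x-distance used as the range; a radial range would also work
    slices = []
    for lo, hi in zip(boundaries[:-1], boundaries[1:]):
        mask = (r > lo) & (r <= hi)
        slices.append(np.nonzero(mask)[0])
    return slices
```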

2.3.2. Gating

P includes both plant points and ground points. Because their height (z-value) and distribution differ greatly from those of ground points, plant points act as outliers in the plane fitting, so selecting as many ground points as possible improves the fitting accuracy. The IQR (interquartile range) method automatically selects the z-value range. The IQR is:
$$IQR = Q_3 - Q_1$$  (3)
where $Q_1$ is the first quartile of the z-value and $Q_3$ is the third quartile of the z-value. The upper and lower gate limits of the z-value are then calculated as:
$$Q_{\min} = Q_1 - 0.5 \times IQR, \qquad Q_{\max} = Q_3 + 0.5 \times IQR$$  (4)
After many tests, the ground filtering accuracy is highest when the IQR coefficient is 0.5. After obtaining the upper and lower gate limits of the z-value, the points within the gate are selected for plane fitting. The points within the gate in each region $s_i$ are denoted $\dot{s}_i$. The gating operation is shown in Figure 4.
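A minimal sketch of the gating step of Equations (3) and (4), using NumPy percentiles; the coefficient 0.5 follows the text, and the function name is illustrative.

```python
import numpy as np

def iqr_gate(points, coef=0.5):
    """Keep points whose z-value lies within [Q1 - coef*IQR, Q3 + coef*IQR].
    `points` is an (N, 3) array of one slice."""
    z = points[:, 2]
    q1, q3 = np.percentile(z, [25, 75])
    iqr = q3 - q1
    z_min, z_max = q1 - coef * iqr, q3 + coef * iqr
    keep = (z >= z_min) & (z <= z_max)
    return points[keep]
```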

2.3.3. RANSAC Plane Fitting

The RANSAC (Random Sample Consensus) method is used to fit the ground plane to the inlier points. RANSAC is a random parameter estimation method that iteratively estimates the best model parameters from a set of observations containing outliers. For the gated points of the kth slice $\dot{s}_k$, the RANSAC method randomly selects three points in $\dot{s}_k$ to fit a plane model.
The distance from each point to the plane is then calculated; when the distance is below a given threshold, the point is counted as an inlier. The score is the number of inliers, and the plane with the largest number of inliers is chosen as the best fit for the ground. The fitted plane model is defined as $a_i x + b_i y + c_i z + d_i = 0$, and the set of plane models is denoted by $L = \{l_i = (a_i, b_i, c_i, d_i)\}_{i=1}^{N}$.
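The following is a compact RANSAC plane fit of the kind described above, written from the textbook definition of the algorithm rather than the authors' code; the distance threshold and iteration count are placeholders.

```python
import numpy as np

def ransac_plane(points, dist_thresh=0.03, n_iter=200, rng=np.random.default_rng(0)):
    """Fit a plane a*x + b*y + c*z + d = 0 to one slice with RANSAC.
    Returns the plane model (a, b, c, d) with unit normal and the inlier indices."""
    best_inliers, best_model = np.array([], dtype=int), None
    for _ in range(n_iter):
        sample = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(normal)
        if norm < 1e-9:            # degenerate (collinear) sample, try again
            continue
        normal /= norm
        d = -normal.dot(sample[0])
        dist = np.abs(points @ normal + d)           # point-to-plane distances
        inliers = np.nonzero(dist < dist_thresh)[0]
        if len(inliers) > len(best_inliers):
            best_inliers, best_model = inliers, (*normal, d)
    return best_model, best_inliers
```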

2.3.4. Validation

Based on the assumption that the nearest slice $s_1$ gives the most confident plane fit, the plane parameters are verified from the nearest slice to the farthest. For two adjacent planes $G_{i-1}$ and $G_i$, the angle $\delta\psi_i$ between their normals and the distance $\delta z_i$ between the two planes are calculated. The angle threshold is set to $\psi_{threshold}$ = 15° and the distance threshold to $Z_{threshold}$ = 0.05 m. When $\delta\psi_i$ and $\delta z_i$ meet these thresholds, the plane $G_i$ is considered valid; otherwise, the parameters of $G_{i-1}$ are passed on to $G_i$.
After these steps, the ground surface points are filtered according to the parameters of each slice: the distance from each point to the plane is calculated, a distance threshold is preset, and all points whose distance is less than the threshold, as well as points below the plane, are filtered out.
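A sketch of the validation and ground-removal steps, assuming unit plane normals and using the difference of the plane offsets as a proxy for the inter-plane distance; the thresholds follow the values given in the text, and the function names are illustrative.

```python
import numpy as np

def validate_planes(models, angle_thresh_deg=15.0, z_thresh=0.05):
    """Propagate plane parameters outward from the nearest slice: if the normal
    angle or the offset difference between slice i-1 and slice i exceeds the
    thresholds, slice i reuses the parameters of slice i-1."""
    validated = [models[0]]
    for m in models[1:]:
        a0, b0, c0, d0 = validated[-1]
        a1, b1, c1, d1 = m
        n0, n1 = np.array([a0, b0, c0]), np.array([a1, b1, c1])
        cosang = np.clip(abs(n0 @ n1) / (np.linalg.norm(n0) * np.linalg.norm(n1)), 0.0, 1.0)
        d_psi = np.degrees(np.arccos(cosang))
        d_z = abs(d1 - d0)                     # offset difference as a proxy for plane distance
        validated.append(m if (d_psi <= angle_thresh_deg and d_z <= z_thresh) else validated[-1])
    return validated

def remove_ground(points, model, dist_thresh=0.05):
    """Drop points closer than dist_thresh to the validated plane and points below it."""
    a, b, c, d = model
    n = np.array([a, b, c])
    signed = (points @ n + d) / np.linalg.norm(n)
    signed = signed if c >= 0 else -signed     # orient the plane normal upward
    return points[signed > dist_thresh]
```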
The ground filtering effects of different growth stages are shown in Figure 5. The first row is the plane detected by each region and the ground points, in which different planes and points are represented by different colors, respectively, and the second row is the segmented plant points and ground points, which are represented by two colors, respectively. It can be seen intuitively that our method filters out the uneven farmland surface point cloud, and the accuracy is quantitatively analyzed in Section 3.1.

2.4. Maize Stalk Segmentation

According to the comparison of the elastic modulus of the maize stalk and maize leaf, the maize stalk is less likely to deform under the influence of wind force than the maize leaf. Therefore, the primary purpose of this section is to divide each maize stalk point in the maize field. The overall workflow of the segmentation method is shown in Figure 6.
The regional growth method is used to segment the maize stalk point cloud. First, the initial seed points are detected by slicing: according to the fitted plane parameters L, a slice with a thickness of 0.05 m is extracted at z = 0.27 m above the ground plane (this value can be adjusted according to the height of the maize plants). The DBSCAN (Density-Based Spatial Clustering of Applications with Noise) method is then applied to cluster the points in the slice. DBSCAN has two important parameters: the neighborhood radius $R_{eps}$, used to compute the density of a point and its adjacent points, and the minimum preset number of points $R_{MinPts}$ in a single point's neighborhood, used to classify each point as a core point, boundary point, or noise point. In this paper, $R_{eps}$ and $R_{MinPts}$ are set to 0.1 and 5, respectively. In maize planting, a fixed row spacing can generally be guaranteed; the clustering results are therefore further verified by fitting the planting lines to exclude clusters of non-stalk points, yielding the seed point set $P_{seed}$. An example of the seed point extraction is shown in Figure 7.
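A possible implementation of the seed-point detection using scikit-learn's DBSCAN is sketched below; the planting-line verification is omitted for brevity, and the parameter values mirror those stated in the text.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def extract_seed_points(plant_points, plane, z_slice=0.27, thickness=0.05,
                        eps=0.1, min_pts=5):
    """Take a thin slice above the fitted ground plane, cluster it with DBSCAN,
    and use each cluster centroid as a candidate stalk seed."""
    a, b, c, d = plane
    height = (plant_points @ np.array([a, b, c]) + d) / np.linalg.norm([a, b, c])
    in_slice = plant_points[(height > z_slice) & (height < z_slice + thickness)]
    if len(in_slice) == 0:
        return np.empty((0, 3))
    labels = DBSCAN(eps=eps, min_samples=min_pts).fit_predict(in_slice)
    seeds = [in_slice[labels == k].mean(axis=0) for k in set(labels) if k != -1]
    return np.array(seeds)
```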
Second, two methods are adopted for regional growth and selection of the next seed point; the choice between them is a trade-off between computational efficiency and segmentation accuracy. In the first method, a slice with a size of 0.10 m × 0.10 m × 0.05 m is extracted below the current seed point, DBSCAN clusters the data in the slice, and the mean of the cluster is taken as the next seed point. The regional growth algorithm stops when no seed point can be found or the lowest limit is reached. This first method is simple and effective and can quickly segment the stalk point cloud.
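The slice-based growth strategy could look like the following sketch; the stopping height `z_floor` and the clustering thresholds are illustrative placeholders.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def grow_stalk_by_slices(points, seed, box=(0.10, 0.10, 0.05), z_floor=0.05,
                         eps=0.05, min_pts=3):
    """Repeatedly take a 0.10 m x 0.10 m x 0.05 m box just below the current
    seed, cluster it, and move the seed to the cluster mean until the ground
    level (z_floor) is reached or no seed can be found."""
    stalk, current = [], np.asarray(seed, dtype=float)
    while current[2] > z_floor:
        dx, dy, dz = box
        mask = ((np.abs(points[:, 0] - current[0]) < dx / 2) &
                (np.abs(points[:, 1] - current[1]) < dy / 2) &
                (points[:, 2] < current[2]) & (points[:, 2] > current[2] - dz))
        candidates = points[mask]
        if len(candidates) < min_pts:
            break                               # no next seed: stop growing
        labels = DBSCAN(eps=eps, min_samples=min_pts).fit_predict(candidates)
        core = candidates[labels != -1]
        if len(core) == 0:
            break
        current = core.mean(axis=0)             # next seed point
        stalk.append(core)
    return np.vstack(stalk) if stalk else np.empty((0, 3))
```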
The second method follows Jin et al. [38]. The points $D = \{p_d : \|p_d - p_s\|_2 \le R_d\}_{d=1}^{N_d}$ of size $N_d$ are extracted in the spherical area with radius $R_d$ = 0.05 m around the seed point $p_s$, and the growth direction $m$ is calculated with the median normalized vector method:
$$m = \operatorname{median}\!\left(\frac{p_d - p_s}{\|p_d - p_s\|_2}\right), \quad p_d \in D$$  (5)
where $\operatorname{median}(\cdot)$ stands for the median normalized value of the unit vectors and $\|\cdot\|_2$ stands for the L2 norm. The next seed point is then calculated as:
$$p_s' = p_s + R_d \times m$$  (6)
The regional growth algorithm stops when the seed point cannot be found or reaches the lowest limit. In the second method, the growth direction is determined by the distribution of the nearest neighbor points of the seed points. The second method can obtain more accurate segmentation results but requires more computing resources. The two different growth methods are shown in Figure 8.
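A sketch of one growth step of the second strategy (Equations (5) and (6)); the returned direction is renormalized here so that the step length equals the radius, which is an implementation choice rather than something stated in the paper.

```python
import numpy as np

def next_seed_median_direction(points, seed, radius=0.05):
    """Estimate the growth direction as the component-wise median of the unit
    vectors from the seed to its spherical neighbours, then step by the radius."""
    seed = np.asarray(seed, dtype=float)
    diff = points - seed
    dist = np.linalg.norm(diff, axis=1)
    neigh = diff[(dist > 1e-6) & (dist <= radius)]
    if len(neigh) == 0:
        return None                       # no neighbours: growth stops
    unit = neigh / np.linalg.norm(neigh, axis=1, keepdims=True)
    m = np.median(unit, axis=0)           # median normalized vector, Eq. (5)
    m /= np.linalg.norm(m)                # renormalized so the step length is the radius
    return seed + radius * m              # next seed point, Eq. (6)
```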
Finally, through the regional growth method, all stalk points in the point cloud are segmented, and the feature points of each segmented stalk instance are put into $G_k^i = \{p_j\}_{j=1}^{N_j}$, where $N_j$ is the number of feature points.

2.5. LiDAR Odometry

2.5.1. Maize Stalk Parameterization

For each stalk instance $G_k^i = \{p_j\}_{j=1}^{N_j}$, the stalk is approximated as a straight line, and the line parameters are extracted as the characteristic parameters of the instance. Let $f = (p_m, v)$ be the parameters of a line model, where $p_m = (x_m, y_m, z_m)$ is a point on the line and $v = (m, n, k)$ is the direction vector of the line. Considering that some maize leaf points may remain in $\{p_j\}$, the line parameters $f$ are solved from the feature points with the RANSAC method, and the line parameters of each instance are put into $F = \{f^i\}_{i=1}^{N_k}$, where $N_k$ is the number of stalk instances.
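A minimal RANSAC line fit for the stalk parameterization $f = (p_m, v)$, with an SVD refinement on the inliers; the distance threshold and iteration count are placeholders, not values from the paper.

```python
import numpy as np

def ransac_line(points, dist_thresh=0.02, n_iter=100, rng=np.random.default_rng(0)):
    """Fit a 3D line (p_m, v) to one stalk instance with RANSAC so that stray
    leaf points are treated as outliers."""
    best_inliers = np.array([], dtype=int)
    for _ in range(n_iter):
        p1, p2 = points[rng.choice(len(points), 2, replace=False)]
        v = p2 - p1
        if np.linalg.norm(v) < 1e-9:
            continue
        v = v / np.linalg.norm(v)
        # point-to-line distance: ||(p - p1) x v|| for unit v
        dist = np.linalg.norm(np.cross(points - p1, v), axis=1)
        inliers = np.nonzero(dist < dist_thresh)[0]
        if len(inliers) > len(best_inliers):
            best_inliers = inliers
    # refine p_m and v on the inliers: mean point and principal direction (SVD)
    inl = points[best_inliers]
    p_m = inl.mean(axis=0)
    v = np.linalg.svd(inl - p_m)[2][0]
    return p_m, v
```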

2.5.2. Data Association and Motion Estimation

For a stalk feature point $p_{k+1}^j \in G_{k+1}^i$, we need to find the corresponding stalk instance in the previous sweep. We calculate the distance from each feature point of $G_{k+1}^i$ to every line parameter $f_k^i$, and the instance $G_{k+1}^i$ is associated with the previous instance $\bar{G}_k^j$ whose line parameter is closest to the majority of its feature points. To reduce false matches, a distance threshold is set: two instances are not associated when the distance exceeds the threshold. An instance that has never been successfully matched with any other instance is regarded as a suspected new landmark; if it is matched successfully in a subsequent sweep, it is added as a new landmark.
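The voting-style association described above might be sketched as follows; the use of the median point-to-line distance as the acceptance test is an assumption, since the paper only states that a distance threshold is applied.

```python
import numpy as np

def associate_instances(instances_new, lines_prev, dist_thresh=0.10):
    """Each feature point of a new stalk instance votes for the previous-sweep
    line it is closest to; the instance is matched to the line with the most
    votes unless the median distance exceeds dist_thresh (candidate new landmark)."""
    if not lines_prev:
        return {i: None for i in range(len(instances_new))}
    matches = {}
    for idx, pts in enumerate(instances_new):
        # point-to-line distances, shape (n_points, n_lines)
        dists = np.stack([np.linalg.norm(np.cross(pts - p_m, v / np.linalg.norm(v)), axis=1)
                          for p_m, v in lines_prev], axis=1)
        votes = np.argmin(dists, axis=1)
        best = np.bincount(votes).argmax()
        if np.median(dists[:, best]) < dist_thresh:
            matches[idx] = best            # associated with an existing landmark
        else:
            matches[idx] = None            # suspected new landmark
    return matches
```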
We estimate the pose transform $T_{k+1}^L$ by solving a nonlinear least squares problem over the associated feature points and line parameters of the two sweeps, with the point-to-line distance as the error function. According to Huang et al. [39], compared with a point-to-point metric, a point-to-line metric gives quadratic rather than linear convergence, and the point-to-line formulation achieves higher accuracy on noisy data. The error function is therefore:
$$f\!\left(T_{k+1}^{L}\right) = \arg\min \frac{1}{2} \sum_{i=1}^{N_k} \sum_{j=1}^{N_j} \left\| v \times \left( R \, p_j + t - p_m \right) \right\|^2$$  (7)
where $T_{k+1}^L = (t, R)$ is the LiDAR pose transform between the points of $P_k$ and $P_{k+1}$ in the LiDAR coordinate system {L}. The nonlinear least squares problem (7) is solved by L-M (Levenberg–Marquardt) optimization.
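Equation (7) can be solved, for example, with SciPy's Levenberg-Marquardt solver; the rotation-vector parameterization of $R$ and the function names below are implementation choices, not the authors' code.

```python
import numpy as np
from scipy.spatial.transform import Rotation
from scipy.optimize import least_squares

def estimate_motion(pairs, x0=np.zeros(6)):
    """Minimise the point-to-line distances of all associated (points, line) pairs
    over the 6-DOF transform, as in Eq. (7).
    `pairs` is a list of (points, p_m, v): points (N_j, 3) from sweep k+1 and
    (p_m, v) the matched line from sweep k; x = (rx, ry, rz, tx, ty, tz)."""
    def residuals(x):
        R = Rotation.from_rotvec(x[:3]).as_matrix()
        t = x[3:]
        res = []
        for pts, p_m, v in pairs:
            v = v / np.linalg.norm(v)
            transformed = pts @ R.T + t
            # distance of each transformed point to the line (p_m, v)
            res.append(np.linalg.norm(np.cross(transformed - p_m, v), axis=1))
        return np.concatenate(res)

    sol = least_squares(residuals, x0, method="lm")   # Levenberg-Marquardt
    return Rotation.from_rotvec(sol.x[:3]).as_matrix(), sol.x[3:]
```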
Algorithm 1 shows the pseudo-code of our LiDAR odometry. The input of the algorithm is the projected $\bar{P}_k$, $\bar{G}_k$, $\bar{F}_k$ and the current sweep $P_{k+1}$. Piecewise ground filtering is run on $P_{k+1}$, and the resulting point cloud is recorded as $\tilde{P}_{k+1}$. The seed point set $P_{seed}$ is found in $\tilde{P}_{k+1}$, and the feature point sets $G_{k+1}$ are obtained with the region growing algorithm. The stalk features $F_{k+1}$ are extracted from $G_{k+1}$, and the pose transform $T_{k+1}^L$ is obtained by solving Equation (7). Finally, according to $T_{k+1}^L$, $G_{k+1}$, $F_{k+1}$, and $P_{k+1}$ are projected to $\bar{G}_{k+1}$, $\bar{F}_{k+1}$, and $\bar{P}_{k+1}$, respectively. The algorithm outputs $\bar{G}_{k+1}$, $\bar{F}_{k+1}$, and $T_{k+1}^L$, and it cycles until data acquisition is completed.
Algorithm 1 LiDAR Odometry algorithm
1: Input: $\bar{P}_k$, $\bar{G}_k$, $\bar{F}_k$, and $P_{k+1}$
2: Output: $\bar{G}_{k+1}$, $\bar{F}_{k+1}$, and the new pose transform $T_{k+1}^L$
3: Filter the ground surface points of $P_{k+1}$ by piecewise ground surface filtering to obtain $\tilde{P}_{k+1}$
4: Find the seed point set $P_{seed}$ by slicing, DBSCAN clustering, and planting line fitting
5: for each seed point in $P_{seed}$ do
6:   Find the feature point set $G_{k+1}^i$ by the regional growth method
7:   Find $f_{k+1}^i$ by extracting the stalk line parameters
8: end for
9: for each pair of $G_{k+1}$ and $F_k$ do
10:   Update $T_{k+1}^L$ by solving (7)
11: end for
12: Project $G_{k+1}$, $F_{k+1}$, and $P_{k+1}$ to $\bar{G}_{k+1}$, $\bar{F}_{k+1}$, and $\bar{P}_{k+1}$
13: return $\bar{G}_{k+1}$, $\bar{F}_{k+1}$, $\bar{P}_{k+1}$, $T_{k+1}^L$

2.6. LiDAR Mapping

The LiDAR odometry provides the pose transformation of the LiDAR, but building the map directly from the odometry output accumulates drift error. Therefore, the odometry output is further optimized with the LiDAR mapping algorithm, and the newly obtained point cloud is registered into the global map. The mapping algorithm is executed once for every four runs of the LiDAR odometry.
The mapping algorithm has inputs and outputs similar to those of the LiDAR odometry. The difference is that the LiDAR odometry calculates the pose transformation $T_{k+1}^L$ of the sensor in the LiDAR (vehicle) coordinate system {L}, whereas the mapping algorithm calculates the pose transformation $T_{i+1}^W$ of the sensor in the global coordinate system {W}.
The point cloud map is represented as $Q_i$, and the local point cloud map built from four consecutive LiDAR odometry outputs is represented as $Q_{i+1}$. According to the pose transformation $T_{k+1}^L$ and the previous global pose transformation $T_i^W$, the same feature extraction as in the LiDAR odometry is performed, and $T_{i+1}^W$ is finally obtained with the L-M algorithm.

3. Results

At the Shangzhuang experimental station of China Agricultural University (Beijing), data from maize fields at 20, 48, 60, and 95 days of growth were collected using the mobile robot platform. These data were used to verify the accuracy of the ground surface filtering and LiDAR odometry methods proposed in this paper. For each data acquisition, a different path was selected according to the passable conditions, and the mobile robot platform traveled at a speed of 0.5 m/s in the maize field. The ground in each maize field was not flat, and the installation angle of the LiDAR was adjusted according to the maize height during collection. The details of each experiment are listed in Table 2.

3.1. Ground Filtering Experiment

Ten frames of point cloud data were selected from each of the 20-day, 48-day, and 60-day datasets, for a total of 30 frames. The point cloud data were manually divided into maize points and ground surface points as ground truth. The ground filtering method based on piecewise plane fitting proposed in this paper is compared with the RANSAC method, the RANSAC method combined with the IQR method, and the LS (least squares) method combined with the IQR method. For the methods requiring manually set parameters, the parameter values yielding the highest precision and recall were used. The precision, recall, and average precision of the ground surface filtering are calculated as follows:
$$R = \frac{TP}{TP + FN}$$  (8)
$$P = \frac{TP}{TP + FP}$$  (9)
$$AP = \frac{TP + TN}{TP + FP + TN + FN}$$  (10)
where R denotes the recall; P denotes the precision; AP denotes the average precision; TP is the true positives, which are the ground points commonly found in the reference and extracted ground points; FP is the false positives, which are the extracted ground points that are not present in the reference ground points; TN is the true negatives, which are the maize points commonly found in the reference and extracted maize points; and FN is the false negatives, which are the reference ground points that are not present in the extracted ground points.
Table 3 shows the P, R, and AP of each ground surface filtering method. The ground surface filtering method based on piecewise plane fitting achieves the highest P, R, and AP on the maize point clouds of different growth days: the P is 94.32%, 92.27%, and 91.34%, the R is 88.17%, 88.29%, and 87.34%, and the AP is 86.49%, 85.13%, and 85.71%, respectively. The accuracy decreases slightly with increasing growth days; this may be because, as the maize plants grow taller, the occlusion they produce reduces the number of ground surface points and thus the accuracy of the plane parameter estimation. Apart from our method, the best-performing method is the RANSAC method combined with the IQR method, and in general the P, R, and AP of the methods whose parameters are set by the IQR method are higher than those of the methods with manually set parameters.

3.2. LiDAR Odometry and Mapping Method Experiment

We benchmarked our method against the LOAM and ICP methods: A-LOAM, an open-source implementation of the LOAM algorithm [32], and GICP, an open-source implementation of the generalized ICP algorithm [26]. Both are commonly used. The LOAM method extracts line-segment and plane features; through fast feature extraction and matching it can map in real time and has achieved high accuracy in indoor and outdoor experiments. The ICP method achieves the highest accuracy for small pose transformations because of its point-by-point registration, but it cannot build maps in real time due to its large computational cost.
The data from the maize fields collected at 48 days, 60 days, and 95 days were used as the method input. We then evaluated the accumulated point cloud by checking whether each maize plant shows ghosting or duplication. Quantitatively, we calculated the drift of the motion estimation in each experiment and counted, for each sweep, the number of feature point sets participating in registration and the number correctly registered.
It can be seen from Figure 9 that the maps generated by LOAM contain ghosting. Although the map generated by GICP has no ghosting, the edges of the plants are not clear enough. This is because the maize deforms under wind, which introduces errors into the pose transform calculation. Our method calculates the pose transform only from the stalk point clouds, which deform little or not at all, and therefore obtains the best point cloud map.
It can be seen from Figure 10 that the trajectories calculated by our method and the GICP method are the closest to the ground truth, whereas the A-LOAM method deviates greatly from the ground truth and its relative error is large when handling large rotations. According to Table 4, the accuracy of each LiDAR odometry method decreases with increasing maize growth days, which may be due to two reasons. On the one hand, the taller the maize plants, the more severe the occlusion between plants and the fewer effective stalk instances can be extracted, which lowers the accuracy of the LOAM method. On the other hand, most of the acquired points belong to maize leaves, which are more affected by wind; in particular, when leaves touch the stalks of adjacent plants, the accuracy of segmentation and data association decreases significantly.
Compared with LOAM and GICP, our maize stalk feature is robust to the dynamic deformation of maize plants. Compared with the LOAM method, which extracts features based on lines and planes, our method adds semantic information about the environment, making the data association more robust. Compared with the GICP method, our method ignores dynamically changing points and selects the stalk points, which are less prone to deformation, thereby reducing the error in the pose transformation calculation. Therefore, our method obtains the best accuracy of the three methods.
Table 5 shows that for maize of different growth days, 23.2, 25.2, and 15.1 successfully associated stalk instances can be extracted per sweep, respectively. The association success rate is calculated as the percentage of correctly associated instances among all associated instances; the association success rates were 89%, 94%, and 95%, respectively. This indicates that the maize stalk feature extraction and matching method proposed in this paper works with high precision.
Based on the above experiments, our SLAM method based on maize stalk feature points and parameter extraction can obtain high-precision pose transform results in maize fields at different growth stages. The map obtained by our method can be used for single-plant segmentation and maize growth parameter detection.

4. Discussion

This paper proposed a LOAM pipeline based on maize stalk instance segmentation for the maize field. First, the uneven ground points are filtered by piecewise plane fitting. Second, the stalk instances are segmented by a regional growth method. Third, the pose transform is calculated through model parameter extraction, data association, and motion estimation. Finally, the LiDAR odometry output is further refined by the LiDAR mapping method. Because stalk semantic information is introduced into the LiDAR odometry and mapping, our method can locate and map accurately and robustly in outdoor, densely planted maize environments.
Comparison with the other two LiDAR state estimation methods shows that our method performs with high precision in a dense maize planting environment. The relative errors of our method on maize field data at different growth stages are 0.88%, 0.96%, and 2.12%, respectively. Our algorithm is more accurate for maize at earlier growth stages, when the plants are relatively short and the stalk instances can be extracted accurately. When the maize plants are tall, the stalk points are far fewer than the leaf points and the leaves largely occlude the stalks; even so, our algorithm achieves the highest accuracy of the compared methods. This is because we introduced the semantic information of maize stalks into the LOAM method, making feature extraction and matching robust in the muddy maize fields and thus yielding the highest accuracy.
It is essential for robots to locate themselves in a dense maize planting environment. The map obtained by our method has no ghosting or blur and has the highest quality of the compared methods.
Although our method can deal with the deformation of maize during pose estimation, it does not resolve the deformation of maize plants during mapping. For example, if a leaf of a maize plant is displaced between two sweeps, our method can still obtain an accurate pose transformation; however, ghosting of that leaf will appear in the mapping process, which degrades the quality of the map drawn by our method.
Finally, fusing multi-sensor data to obtain more accurate positioning, further optimizing the execution speed of the algorithm, and applying deep learning methods to segment the stalk instances will be the focus of our future research.

Author Contributions

Methodology, N.D.; software, N.D.; validation, N.D. and W.Z.; writing—review and editing, N.D.; writing—review and editing, R.C.; supervision, R.C.; resources, R.C. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China, grant number 52172396.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The nomenclature of variables.
$P$: the point cloud perceived during a sweep
$G$: the set of maize stalk instances
$F$: the set of landmark parameters
$T^L$: the LiDAR pose transform in the LiDAR coordinate system
$T^W$: the LiDAR pose transform in the world coordinate system
$h$: the height of the LiDAR sensor (m)
$\alpha_L$: the LiDAR mount angle (°)
$\alpha_b$: the minimum vertical angle of the LiDAR laser transmitter (°)
$\eta$: the number of scanning harnesses for LiDAR data in each region
$\Delta\alpha$: the vertical resolution of the LiDAR (°)
$R_0$: the distance from the front end of the initial slice to the center of the mobile robot platform (m)
$R_i$: the distance from the front end of each slice to the mobile robot platform (m)
$S$: the set of slicing regions
$\dot{s}_i$: the points within the gate in each region
IQR: interquartile range
$Q_1$: the first quartile of the z-value
$Q_3$: the third quartile of the z-value
$Q_{\min}$: the lower gate limit of the z-value
$Q_{\max}$: the upper gate limit of the z-value
$L$: the set of fitted plane models
$\delta\psi_i$: the angle between the plane normals (°)
$\delta z_i$: the distance between two planes (m)
$\psi_{threshold}$: the angle threshold (°)
$Z_{threshold}$: the distance threshold (m)
$R_{eps}$: the neighborhood radius of the DBSCAN method (m)
$R_{MinPts}$: the minimum preset number of points of the DBSCAN method
$P_{seed}$: the set of seed points
$p_s$: the seed point
$D$: the set of points in the spherical area with radius $R_d$
$R_d$: the neighborhood radius (m)
$m$: the growth direction in method 2
$p_s'$: the next seed point
$f$: the parameters of a line model
$p_m$: a point on the line
$v$: the direction vector of the line
$p_j$: a feature point in $G^i$
$\bar{P}$: the point cloud after projection according to the pose transform
$\bar{G}$: the set of maize stalk instances after projection according to the pose transform
$\bar{F}$: the line parameters after projection according to the pose transform
$t_x$, $t_y$, $t_z$: the translations along the x, y, and z axes (m)
$\theta_x$, $\theta_y$, $\theta_z$: the rotation angles around the x, y, and z axes (°)
$\theta$: the Euler angle
$R$: the rotation matrix
$I$: the identity matrix
$K$: the cross-product matrix
$Q$: the point cloud map
$R$: the recall
$P$: the precision
$AP$: the average precision
TP, FP, TN, FN: the true positives, false positives, true negatives, and false negatives

References

  1. King, A. Technology: The future of agriculture. Nature 2017, 544, S21–S23. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  2. Pedersen, S.M.; Fountas, S.; Have, H.; Blackmore, B.S. Agricultural robots—System analysis and economic feasibility. Precis. Agric. 2006, 7, 295–308. [Google Scholar] [CrossRef]
  3. Lowenberg-Deboer, J.; Huang, I.Y.; Grigoriadis, V.; Blackmore, S. Economics of robots and automation in field crop production. Precis. Agric. 2020, 21, 278–299. [Google Scholar] [CrossRef] [Green Version]
  4. Sparrow, R.; Howard, M. Robots in agriculture: Prospects, impacts, ethics, and policy. Precis. Agric. 2021, 22, 818–833. [Google Scholar] [CrossRef]
  5. Gharajeh, M.S.; Jond, H.B. An intelligent approach for autonomous mobile robots path planning based on adaptive neuro-fuzzy inference system. Ain Shams Eng. J. 2022, 13, 101491. [Google Scholar] [CrossRef]
  6. Gharajeh, M.S.; Jond, H.B. Hybrid Global Positioning System-Adaptive Neuro-Fuzzy Inference System Based Autonomous Mobile Robot Navigation. Robot. Auton. Syst. 2020, 134, 103669. [Google Scholar] [CrossRef]
  7. Xu, J.X.; Ma, J.; Tang, Y.N.; Wu, W.X.; Shao, J.H.; Wu, W.B.; Wei, S.Y.; Liu, Y.F.; Guo, H.Q. Estimation of sugarcane yield using a machine learning approach based on uav-lidar data. Remote Sens. 2020, 12, 2823. [Google Scholar] [CrossRef]
  8. Velumani, K.; Elberink, S.O.; Yang, M.Y.; Baret, F. Wheat ear detection in plots by segmenting mobile laser scanner data. ISPRS Ann. Photogramm. Remote Sens. Spat. Inf. Sci. 2017, IV-2/W4, 149–156. [Google Scholar] [CrossRef] [Green Version]
  9. Gonzalez-De-Soto, M.; Emmi, L.; Perez-Ruiz, M.; Aguera, J.; Gonzalez-De-Santos, P. Autonomous systems for precise spraying—Evaluation of a robotised patch sprayer. Biosyst. Eng. 2016, 146, 165–182. [Google Scholar] [CrossRef]
  10. Jiao, L.; Dong, S.; Zhang, S.; Xie, C.; Wang, H. AF-RCNN: An anchor-free convolutional neural network for multi-categories agricultural pest detection. Comput. Electron. Agric. 2020, 174, 105522. [Google Scholar] [CrossRef]
  11. Zhou, L.; Gu, X.; Cheng, S.; Yang, G.; Shu, M.; Sun, Q. Analysis of plant height changes of lodged maize using UAV-LiDAR data. Agriculture 2020, 10, 146. [Google Scholar] [CrossRef]
  12. Jaeger-Hansen, C.L.; Griepentrog, H.W.; Andersen, J.C. Navigation and Tree Mapping in Orchards. In Proceedings of the International Conference of Agricultural Engineering, Valencia, Spain, 8–12 July 2012. [Google Scholar]
  13. Ji, Z.; Maeta, S.; Bergerman, M.; Singh, S. Mapping Orchards for Autonomous Navigation. In Proceedings of the ASABE and CSBE/SCGAB Annual International Meeting, Montreal, QC, Canada, 13–16 July 2014. [Google Scholar]
  14. Gao, F.; Wu, W.; Gao, W.; Shen, S. Flying on point clouds: Online trajectory generation and autonomous navigation for quadrotors in cluttered environments. J. Field Robot. 2019, 36, 710–733. [Google Scholar] [CrossRef]
  15. Underwood, J.P.; Jagbrant, G.; Nieto, J.; Sukkarieh, S. Lidar-based tree recognition and platform localization in orchards. J. Field Robot. 2015, 32, 1056–1074. [Google Scholar] [CrossRef]
  16. Guo, C.; Liu, G. Reconstruction Method of Apple Tree Canopy Point Leaf Model Based on 3D Point Clouds. Trans. Chin. Soc. Agric. Mach. 2020, 51, 173–180. (In Chinese) [Google Scholar] [CrossRef]
  17. Su, W.; Jiang, K.; Guo, H.; Liu, Z.; Zhu, D.; Zhang, X. Extraction of phenotypic information of maize plants in field by terrestrial laser scanning. Trans. Chin. Soc. Agric. Eng. 2019, 35, 125–130. (In Chinese) [Google Scholar] [CrossRef]
  18. Qiu, Q.; Sun, N.; Bai, H.; Wang, N.; Fan, Z.; Wang, Y.; Meng, Z.; Li, B.; Cong, Y. Field-based high-throughput phenotyping for maize plant using 3d lidar point cloud generated with a “phenomobile”. Front. Plant Sci. 2019, 10, 554. [Google Scholar] [CrossRef] [Green Version]
  19. Vázquez-Arellano, M.; Reiser, D.; Paraforos, D.S.; Garrido-Izard, M.; Burce, M.E.C.; Griepentrog, H.W. 3-D reconstruction of maize plants using a time-of-flight camera. Comput. Electron. Agric. 2018, 145, 235–247. [Google Scholar] [CrossRef]
  20. Vázquez-Arellano, M.; Paraforos, D.S.; Reiser, D.; Garrido-Izard, M.; Griepentrog, H.W. Determination of stem position and height of reconstructed maize plants using a time-of-flight camera. Comput. Electron. Agric. 2018, 154, 276–288. [Google Scholar] [CrossRef]
  21. Dong, J.; John, G.B.; Byron, B.; Glen, R.; Frank, D. 4D Crop Monitoring: Spatio-Temporal Reconstruction for Agriculture. In Proceedings of the 2017 IEEE International Conference on Robotics and Automation (ICRA), Singapore, 29 May–3 June 2017. [Google Scholar] [CrossRef] [Green Version]
  22. Sun, S.P.; Li, C.Y.; Peng, W.C.; Andrew, H.P.; Jiang, Y.; Xu, R.; Jon, S.R.; Jeevan, A.; Tariq, S. Three-dimensional photogrammetric mapping of cotton bolls in situ based on point cloud segmentation and clustering. ISPRS J. Photogramm. Remote Sens. 2020, 160, 195–207. [Google Scholar] [CrossRef]
  23. Besl, P.J.; McKay, N.D. A method for registration of 3-d shapes. IEEE Trans. Pattern Anal. Mach. Intell. 1992, 14, 239–256. [Google Scholar] [CrossRef] [Green Version]
  24. Biber, P. The normal distributions transform: A new approach to laser scan matching. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, Las Vegas, NV, USA, 27–31 October 2003. [Google Scholar] [CrossRef]
  25. Rusinkiewicz, S.; Levoy, M. Efficient Variants of the ICP Algorithm. 3-D Digital Imaging and Modeling. In Proceedings of the Third International Conference on 3-D Digital Imaging and Modeling, Quebec City, QC, Canada, 28 May–1 June 2001. [Google Scholar] [CrossRef] [Green Version]
  26. Segal, A.; Hähnel, D.; Thrun, S. Generalized-ICP. In Robotics: Science and Systems V; University of Washington: Seattle, WA, USA, 2009. [Google Scholar] [CrossRef]
  27. Censi, A. An ICP variant using a point-to-line metric. In Proceedings of the IEEE International Conference on Robotics & Automation, Pasadena, CA, USA, 19–23 May 2008. [Google Scholar] [CrossRef] [Green Version]
  28. Johnson, A.E.; Hebert, M. Using spin images for efficient object recognition in cluttered 3d scenes. IEEE Trans. Pattern Anal. Mach. Intell. 1999, 21, 433–449. [Google Scholar] [CrossRef] [Green Version]
  29. Salti, S.; Tombari, F.; Stefano, L.D. Shot: Unique signatures of histograms for surface and texture description. Comput. Vis. Image Underst. 2014, 125, 251–264. [Google Scholar] [CrossRef]
  30. Rusu, R.B.; Blodow, N.; Beetz, M. Fast Point Feature Histograms (FPFH) for 3D registration. In Proceedings of the IEEE International Conference on Robotics & Automation, Kobe, Japan, 12–17 May 2009. [Google Scholar] [CrossRef]
  31. Serafin, J.; Olson, E.; Grisetti, G. Fast and robust 3D feature extraction from sparse point clouds. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots & Systems, Daejeon, Republic of Korea, 9–14 October 2016; pp. 4105–4112. [Google Scholar] [CrossRef] [Green Version]
  32. Ji, Z.; Singh, S. LOAM: Lidar Odometry and Mapping in Real-time. In Proceedings of the Robotics: Science and Systems Conference, Rome, Italy, 13–15 July 2014. [Google Scholar]
  33. Shan, T.; Englot, B. LeGO-LOAM: Lightweight and Ground-Optimized Lidar Odometry and Mapping on Variable Terrain. In Proceedings of the 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Madrid, Spain, 1–5 October 2018. [Google Scholar] [CrossRef]
  34. Jiang, J.; Wang, J.; Wang, P.; Chen, Z. POU-SLAM: Scan-to-Model Matching Based on 3D Voxels. Appl. Sci. 2019, 9, 4147. [Google Scholar] [CrossRef] [Green Version]
  35. Dube, R.; Dugas, D.; Stumm, E.; Nieto, J.; Cadena, C. SegMatch: Segment based place recognition in 3D point clouds. In Proceedings of the 2017 IEEE International Conference on Robotics and Automation (ICRA), Singapore, 29 May–3 June 2017. [Google Scholar] [CrossRef] [Green Version]
  36. Chen, S.W.; Nardari, G.V.; Lee, E.S.; Qu, C.; Liu, X.; Romero, R. SLOAM: Semantic lidar odometry and mapping for forest inventory. IEEE Robot. Autom. Lett. 2020, 5, 612–619. [Google Scholar] [CrossRef] [Green Version]
  37. Asvadi, A.; Premebida, C.; Peixoto, P.; Nunes, U. 3d lidar-based static and moving obstacle detection in driving environments. Robot. Auton. Syst. 2016, 83, 299–311. [Google Scholar] [CrossRef]
  38. Jin, S.; Yan, J.; Fang, F.; Pang, S. Stem–leaf segmentation and phenotypic trait extraction of individual maize using terrestrial lidar data. IEEE Trans. Geosci. Remote Sens. 2019, 57, 1336–1346. [Google Scholar] [CrossRef]
  39. Huang, S.S.; Ma, Z.Y.; Mu, T.J.; Fu, H.; Hu, S.M. Lidar-Monocular Visual Odometry using Point and Line Features. In Proceedings of the 2020 IEEE International Conference on Robotics and Automation (ICRA), Paris, France, 31 May–31 August 2020. [Google Scholar] [CrossRef]
Figure 1. Mobile robot platform and test field.
Figure 2. Flow chart of LiDAR data processing.
Figure 3. Illustration of the LiDAR scanning harness.
Figure 4. An example of the application of gating. The black bounding boxes indicate the gates. In the figure, “Outlier Points” is simply represented as OP, “Inlier Points of Slice n” is simply represented as IPS n. Inlier points in different regions are shown in different colors.
Figure 5. Effect of surface filtering. (a) Maize field data of 48 growth days; (b) maize field data of 60 growth days. In the figure, “Ground Point of Slice n” is simply represented as GPS n, “Fitting Plane of Slice n” is simply represented as FPS n.
Figure 6. Workflow of the maize stalk segmentation.
Figure 7. An example of a seed point extraction method. (a) Points after surface filtering. (b) Slicing and clustering results. (c) Planting line fitting and outlier points filtering. (d) Superposition of fitting results on origin data. In the figure, “Cluster n” is simply represented as Cn, “Fitted Line n” as FL n, “Seed points of line n” as SPL n, “Removed seed points” as RSP, “Seed points that do not belong to any line” as SPNL.
Figure 8. Regional growth method.
Figure 9. Maize maps generated by our method and other methods. (a) is the complete map obtained by our method; (b) is the details of the map obtained by our method; (c) is the same local detail obtained by the GICP method; (d) is the same local detail obtained by the LOAM.
Figure 10. Trajectories of benchmark methods in 48 days maize field data. In the figure, “Ground Truth” is simply represented as GT, “Starting Point” as SP, “The end of our method” as OE, “The end of GICP” as GE, and “The end of LOAM” as LE.
Table 1. Key parameters of RS-LiDAR-32.
Key Parameter | Value
LiDAR class | 32 laser lines
Field of view/scanning angle | 360 degrees
Vertical field of view | 40 degrees
Azimuth angular resolution | 0.1 degrees
Vertical angular resolution | 0.33 degrees
Accuracy | <3 cm distance accuracy
Operation range (distance) | 0.4 m to 200 m
Table 2. The details of each point cloud data.
Experiment | Average Height of Maize (m) | Average Scan Points | Trajectory Length (m)
20 days | 0.40 | 35,634 | -
48 days | 0.73 | 32,570 | 34.44
60 days | 0.82 | 32,764 | 14.83
95 days | 2.12 | 27,504 | 53.10
Table 3. Comparison of evaluation indexes of different ground surface filtering methods.
Experiment | Ours (P/R/AP, %) | RANSAC (P/R/AP, %) | IQR + RANSAC (P/R/AP, %) | IQR + LS (P/R/AP, %)
20 days | 94.32 / 88.17 / 86.49 | 81.61 / 80.39 / 66.64 | 89.23 / 86.35 / 80.62 | 80.41 / 72.13 / 65.73
48 days | 92.27 / 88.29 / 85.13 | 78.35 / 83.44 / 66.57 | 86.19 / 91.96 / 84.45 | 79.84 / 73.85 / 68.42
60 days | 91.34 / 87.34 / 85.71 | 74.26 / 80.24 / 63.28 | 84.64 / 87.90 / 83.17 | 65.51 / 68.93 / 58.40
Table 4. Distance and relative error of trajectory.
Experiment | Trajectory Distance (m) | Distance from the Goal (m): Ours / GICP / LOAM | Error: Ours / GICP / LOAM
48 days | 34.44 | 0.30 / 0.35 / 1.06 | 0.88% / 1.01% / 3.08%
60 days | 14.83 | 0.14 / 0.17 / 0.67 | 0.96% / 1.13% / 4.52%
95 days | 53.10 | 1.13 / 1.82 / 2.93 | 2.12% / 3.43% / 5.52%
Table 5. The number of features matched successfully per sweep for different growth days.
Experiment | Number of Successfully Matched Features | Association Success Rate
48 days | 23.2 | 89%
60 days | 25.2 | 94%
95 days | 15.1 | 95%
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
