1. Introduction
As an essential strategic resource, natural rubber and its products are widely used in industry, transportation, national defense, and medicine. At present, the excellent properties of natural rubber cannot be matched by synthetic materials [1,2,3,4]. The demand for natural rubber products keeps increasing with the development of economies and industries [5]. Currently, rubber tapping is still dominated by manual work both at home and abroad [6]. Because the work is labor-intensive and technically demanding, the working environment is poor, and the workforce is shrinking and aging, there is a sore need for intelligent rubber-tapping devices [7,8]. Autonomous navigation technology is of great importance for making rubber-tapping devices intelligent, especially for night-time operation.
In addition, the diameter, plant spacing, and row spacing of trees are important parameters for evaluating orchards or forests, and the same is true of rubber plantations. These parameters reflect the growth status, water consumption, and biomass of rubber trees. Accurate and up-to-date mapping and monitoring of a rubber plantation are challenging [9,10].
To date, many technologies, such as sensors, RTK-GPS (RTK: Real-Time Kinematic; GPS: Global Positioning System), machine vision, LiDAR, and ultrasonic and geomagnetic positioning, have been developed for the autonomous navigation of robots [11,12]. However, machine vision is strongly affected by the working environment and lighting conditions, and the technologies it involves, such as image processing, image analysis, camera calibration, and the extraction of navigation parameters, make it rather difficult to apply in agriculture [13,14,15,16]. The application of GPS is affected by interruptions in the satellite signal [11,17]. For example, when a vehicle travels through a dense forest, the canopy often blocks the signals sent by satellites to GPS receivers, while RTK-GPS, with higher precision, is too expensive to be widely used [9,18]. Moreover, rubber-tapping techniques require the operation to be carried out at night. LiDAR can provide a large amount of accurate information at high frequency and can meet the accuracy and speed requirements. With a high performance-to-cost ratio, it also works around the clock regardless of lighting conditions [19]. Therefore, it has been widely applied to robot navigation, positioning, and map construction in non-agricultural fields [20,21,22,23], and is becoming increasingly popular in the navigation of agricultural vehicles [24,25,26,27].
Benet et al. [18] employed LiDAR, an Inertial Measurement Unit (IMU), and a camera sensor in their study. The geometric data on the environment collected by LiDAR and the IMU are combined with the color camera's data to improve the efficiency of object recognition. Then, by color classification, objects in front of the vehicle, such as crops, grass, leaves, and soil, are detected and recognized, so that agricultural vehicles can detect obstacles and navigate along crop rows autonomously. Similarly, in [24,25], by fusing data from LiDAR, a camera, and an IMU, the trunks of trees were detected and a partial map of an orchard was constructed. The Kalman Filter algorithm was also used to estimate the robot's position in the orchard, allowing it to navigate autonomously among tree rows. Shalal et al. [26] proposed a new method for local-scale orchard mapping based on tree trunk detection, utilizing modest-cost color vision and laser-scanning technologies. Image segmentation and data fusion methods were used for feature extraction, tree detection, and orchard map construction. The map is composed of the coordinates of the individual trees in each row as well as the coordinates of other non-tree objects, and it can serve as an a priori map for robot positioning, path planning, and navigation in the orchard. However, the camera increases the complexity of the system and its computational load. In [27,28], taking LiDAR as the navigation sensor, Barawid et al. made it possible for a robot to navigate autonomously along a particular tree row of an orchard. Due to the lack of attitude information, navigation errors increase when the robot travels on uneven ground, and the turning performance of the robot was not studied. Therefore, the key to increasing the precision and robustness of a navigation system lies in combining LiDAR with Inertial Measurement Units (IMUs) [19,29]. Bayar et al. [30] made full use of the data from LiDAR, wheel encoders, and the steering encoder. A motion model including wheel side slip was constructed to calculate the vehicle's speed and its steering commands. The vehicle can not only navigate autonomously among rows in the orchard, but also generate the turning path and turn at the end of a row. Compared with the expensive RTK-GPS, this low-cost sensor suite is of much greater commercial value. However, turning at the end of the rows is probabilistic, so the success of each turn cannot be guaranteed. Moreover, the automatic rubber-tapping operation requires the robot platform to travel safely along one row of a rubber forest, to turn from one row into another, and to stop within a fixed range for tapping. Freitas et al. [31] applied LiDAR, an IMU, a steering wheel sensor, and wheel encoders to assess the positions of obstacles based on the classification and clustering of three-dimensional (3D) point clouds, successfully detecting obstacles such as pedestrians and boxes on the vehicle's route. To predict the biomass, growth, yield, water consumption, and health condition of trees and thus provide a basis for tree management, Lee et al. [32] successfully adopted LiDAR to measure the geometric characteristics of tree crowns. The results show that LiDAR can measure the geometric characteristics of trees well.
In summary, owing to the excellent overall performance of LiDAR, this paper proposes a method of forest navigation and information collection based on a low-cost LiDAR and a gyroscope that does not rely on GPS. The sparse point cloud data of tree trunks are extracted from the LiDAR and gyroscope data using a clustering method. The point cloud of each trunk is fitted to a circle by the Gauss–Newton method to obtain the center point of each tree. A straight line is fitted through these center points by the Least Squares method and regarded as the navigation path of the robot in the forest. The Extended Kalman Filter (EKF) algorithm is adopted to estimate the robot's position. A Fuzzy Controller is applied for the following activities: walking along one row at a fixed lateral distance, stopping at fixed points, turning from one row into another, and collecting information on plant spacing, row spacing, and tree diameters. Then, according to the collected information, each tree's position is calculated and a geometric feature map is constructed. The aim of the proposed method is to provide autonomous navigation for intelligent tapping devices and to collect information on a rubber plantation so as to benefit its informatization and precision management.
2. Materials and Methods
2.1. System Composition
This study takes a tracked robot as the basic platform; its system is shown in Figure 1. The program runs on Visual Studio 2013 under Windows 7. The robot platform is 76 cm long and 62 cm wide. The LiDAR is installed on the robot platform 95 cm above the ground. The LiDAR is an A2M6 made by SLAMTEC, with a scanning frequency of 10 Hz, a scanning angle of 360 degrees, an angular resolution adjustable from 0.45 to 1.35 degrees, a maximum scanning distance of 18 m, and a relative scanning accuracy of 1%. The gyroscope is a JY901 produced by Shenzhen Witt Intelligent Technology Co., Ltd.; its frequency is 10 Hz and its angular resolution is 0.1 degrees. Two caterpillar driving wheels are driven by two HLM480E36LN brushless direct current (DC) motors made by MOTEC. A tracing device filled with white wheat flour marks the ground as the robot walks, so the white line left on the ground records the robot's actual path. As shown in Figure 2, the motors are directly controlled by the industrial computer (IC) through their drivers; the industrial computer is the control core of the robot.
2.2. Navigation Strategy
To meet the specific requirements of rubber tapping, the mobile robot must be able to stop in front of trees and work there. The robot platform should therefore achieve the following goals in the forest:
Autonomously navigate along one row at a fixed lateral distance;
Stop at the designated spot in front of each tree;
Turn from one row into another;
Collect information on the forest.
Following the trend of the tree rows, the spot on the line that passes through the tree's center perpendicular to the tree row is regarded as dead ahead of the tree. As required by the rubber-tapping operation of the mechanical arm, the robot's ideal stopping spot is 125 cm dead ahead of the rubber tree. Therefore, the ideal navigation path of the robot in a rubber forest is the polyline connecting the dead-ahead spots of the trees. To fit the tree rows better, a segmented fitting method is proposed. At the same time, the LiDAR scanning distance is set to twice the maximum plant spacing to make full use of the data from the trees closest to the robot. If no data are scanned within 0–90° of the LiDAR, the robot is assumed to have reached the boundary of the rubber forest, and a turn is performed immediately. As shown in Figure 3, the green circles represent trees, the black points stand for the ideal stopping spots, and the red dotted line is the robot's ideal navigation path.
2.3. Navigation Phases
According to the data applied to the robot’s navigation and the corresponding movements, the navigation is divided into three phases: “straight phase”, “turning phase”, and “row-changing phase”.
As shown in Figure 3, at the A-B stage of the straight phase, the sparse point cloud data of tree trunks are extracted from the data offered by the LiDAR and the gyroscope. Then, the point cloud is fitted through the Gauss–Newton method to obtain the coordinates of each tree's center. The straight line connecting the tree centers is fitted by the Least Squares method and regarded as the robot's navigation path. Following this path, the robot walks along one row at a fixed lateral distance and stops at the designated spot in front of each tree.
At the B-C stage of the turning phase, after finishing the rubber tapping at point B, the robot will walk straight to point C (the distance between B and C is controlled by setting the straight-walking time). Then, the data of the gyroscope’s Z-axis and differential driving are employed to make a 90-degree turn.
At the C-D and E-F stages of the row-changing phase, the navigation of C-D is similar to that of the “straight phase”. Their difference lies in that C-D has no fixed-point stopping, and it will use the gyroscope to turn right again when it goes through a complete row space. The E-F stage is similar to the B-C stage: when finishing the tapping, the robot will go straight toward point E, make a 90-degree left turn with the help of the gyroscope, walk straight to point F (the distance of E-F is controlled by setting the straight-walking time), make another 90-degree left turn using the gyroscope, and enter the next cycle.
The above three phases constitute the whole cycle of the robot's navigation. When the robot encounters situations beyond these three phases, it stays put. On the one hand, this ensures the safety of the robot; on the other hand, it lets the robot stop automatically after finishing all of the operations in the rubber forest.
2.4. Calibration of the Installed Location of LiDAR
It is important to calibrate the installed location of the vehicle-mounted LiDAR [27]. Whether the axis of the LiDAR, i.e., the 0–180° axis, is parallel to the center line of the robot platform directly influences the navigation performance of the robot and the accuracy of information collection. During installation, a vertical wall is used to calibrate the center line of the LiDAR so as to make it parallel to the center line of the robot. As shown in Figure 4, the robot is placed 100 cm away from the wall with the center line of the robot parallel to the wall. Then, the two points on the wall at the 45° and 135° scanning angles are extracted from the LiDAR's scanned data, and the distances between these two points and the LiDAR's center are denoted l45 and l135, respectively. Finally, the LiDAR's installation position is adjusted until l45 = l135.
2.5. Navigation Path Generation
Firstly, all scanned data are sorted by angle, and the data within 0–180°, i.e., the robot's right-side data, are extracted. In order to make full use of the data from the trees closest to the robot and to fit the tree row better, the LiDAR scanning distance is set to twice the maximum plant spacing. All points scanned on a tree are extracted using the clustering method [31,33]. Two points fall into the same category if they meet the following requirements:
- (1) The difference between the two points' distances to the LiDAR is less than the threshold δ1, which is determined by the trunk diameter;
- (2) The difference between the two points' scanning angles is less than the threshold δ2, which is determined by the trunk diameter and the distance between the robot and the tree row;
- (3) The difference between the two points' distances to the center line of the robot is less than the threshold δ3, which is determined by the row spacing.
After clustering, a cluster represents a tree if the number of its elements is larger than the given threshold N, which is determined by the number of points scanned on the tree with the smallest diameter.
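The clustering step can be sketched as follows. This is a minimal illustration, not the paper's implementation: the lateral-offset condition (δ3) is omitted for brevity, and the argument names are assumptions.

```python
def cluster_scan(points, delta1, delta2, min_points):
    """Group LiDAR returns (angle_deg, range_cm) into trunk candidates.
    Two angle-adjacent points join the same cluster when their range
    difference is below delta1 and their angular gap is below delta2;
    clusters smaller than min_points are discarded as noise (this is
    how small trees or stumps can be ignored via the threshold N)."""
    points = sorted(points)                  # sort by scanning angle
    clusters, current = [], []
    for p in points:
        if current and (abs(p[1] - current[-1][1]) < delta1
                        and abs(p[0] - current[-1][0]) < delta2):
            current.append(p)
        else:
            if len(current) >= min_points:
                clusters.append(current)
            current = [p]
    if len(current) >= min_points:
        clusters.append(current)
    return clusters
```

For example, seven returns at ~200 cm around 30–33°, a stray return at 45°, and five returns at ~210 cm around 60–62° yield two trunk clusters; the stray point is dropped.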
In the same way, the trunk information of the other trees in the row nearest the robot can be obtained. If there are small trees or stumps that need no operation, the robot automatically ignores them by setting the threshold N. All data used in navigation come from the tree row closest to the robot, so different row spacings and plant spacings in the forest are permitted.
Due to the uneven terrain in the forest, the robot may tilt slightly, which increases the distance error. Therefore, the gyroscope acquires the attitude of the robot in real time so that the LiDAR data can be corrected with the roll and pitch angles.
As shown in Figure 5, plane 1 is horizontal, and the attitude information of the robot is acquired in real time by the gyroscope, including the left–right tilt angle ε1 and the front–rear tilt angle ε2 relative to plane 1, as well as the rotation angle around the Z-axis. The tilt angles are used to compensate for the scanning distance error of the LiDAR when the robot walks on uneven ground. When the robot tilts to the left or right, l and l' represent the scanning distances of the LiDAR on plane 1 and plane 2, respectively, while the cylinder represents the tree. The actual distance between the robot and the tree is:
Similarly, when the robot tilts to the front or rear, the actual distance between the robot and the tree is:
When the robot tilts both left–right and front–rear, the total tilt angle of the robot is the combination of the two:
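The compensation formulas themselves are not reproduced above. From the geometry described (the scanning plane tilted by the measured angle), they plausibly take the following form; the combined-angle relation is an assumption of this sketch, not taken from the paper:

```latex
l = l' \cos \varepsilon_1, \qquad
l = l' \cos \varepsilon_2, \qquad
\cos \varepsilon = \cos \varepsilon_1 \cos \varepsilon_2
\;\Rightarrow\; l = l' \cos \varepsilon
```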
After the clustering is completed, all point cloud data on the trunks used for navigation are extracted and then fitted to circles through the Gauss–Newton method to obtain the center coordinates C1, C2, C3, and C4, as shown in Figure 6. Then, the Least Squares method is adopted to fit the straight line through the centers, which serves as the navigation path.
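The circle- and line-fitting steps can be sketched as follows. This is a minimal NumPy version under common conventions (Gauss–Newton on the geometric residual; a first-degree polynomial for the row line), not the paper's exact implementation.

```python
import numpy as np

def fit_circle_gauss_newton(pts, iters=20):
    """Fit a circle (center a, b; radius R) to 2-D points by Gauss-Newton
    on the geometric residual d_i - R, where d_i is the distance from
    point i to the current center estimate. Assumes no point coincides
    with the center estimate."""
    pts = np.asarray(pts, dtype=float)
    a, b = pts.mean(axis=0)                    # initial center guess
    R = np.mean(np.hypot(pts[:, 0] - a, pts[:, 1] - b))
    for _ in range(iters):
        dx, dy = pts[:, 0] - a, pts[:, 1] - b
        d = np.hypot(dx, dy)
        r = d - R                              # residuals
        J = np.column_stack([-dx / d, -dy / d, -np.ones(len(d))])
        step, *_ = np.linalg.lstsq(J, -r, rcond=None)
        a, b, R = a + step[0], b + step[1], R + step[2]
    return a, b, R

def fit_row_line(centers):
    """Least-squares line y = k*x + m through the fitted tree centers,
    used as the navigation path."""
    xs, ys = zip(*centers)
    k, m = np.polyfit(xs, ys, 1)
    return k, m
```

On points sampled from a known circle the fit recovers the center and radius; collinear centers give back the exact row line.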
2.6. Design of the Fuzzy Controller
The advantage of Fuzzy Control [34] is that it does not require an accurate control model; only effective input and output control variables and appropriate Fuzzy Control rules need to be defined [35]. In this paper, the lateral error E and the heading error θ are used as the inputs of the Fuzzy Controller, with the forward direction of the robot taken as the reference. A positive E means that the robot deviates to the left of the ideal navigation path, and a negative E means that it deviates to the right. Likewise, a positive heading angle error indicates counterclockwise deflection, and a negative one indicates clockwise deflection.
The PWM (Pulse-Width Modulation) control difference U between the DC motors on the left and right sides is selected as the output variable, so as to control the differential steering of the robot. The heading angle error θ, lateral error E, and output quantity U are each divided into seven levels: negative big (NB), negative medium (NM), negative small (NS), zero (Z0), positive small (PS), positive medium (PM), and positive big (PB). The basic domains of θ, E, and U are [−6°, 6°], [−250 cm, 250 cm], and [−120, 120], respectively, and their corresponding fuzzy domains are all {−3, −2, −1, 0, 1, 2, 3}. Triangular membership functions are adopted; their distribution is shown in Figure 7.
The MIN-MAX-gravity method is selected for defuzzification. The table of fuzzy control rules, drawn up from experience and the test conditions, is shown in Table 1. Figure 8 shows the three-dimensional surface of the fuzzy control rules.
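The inference pipeline described above (seven triangular sets, MIN inference, MAX aggregation, centroid defuzzification) can be sketched as below. The rule base here is an illustrative placeholder (output index = −(θ index + E index), clipped), NOT the paper's Table 1, which is not reproduced; the scaling factors come from the stated basic domains.

```python
import numpy as np

CENTRES = np.arange(-3, 4)        # NB NM NS Z0 PS PM PB on the fuzzy field

def tri(x, c):
    """Triangular membership centred at c with half-width 1."""
    return max(0.0, 1.0 - abs(x - c))

def fuzzy_output(theta_deg, e_cm):
    """Mamdani controller sketch: MIN inference, MAX aggregation,
    centroid (MIN-MAX-gravity) defuzzification. Returns the PWM
    difference U rescaled to [-120, 120]."""
    th = np.clip(theta_deg * 3.0 / 6.0, -3, 3)   # theta in [-6, 6] deg
    e = np.clip(e_cm * 3.0 / 250.0, -3, 3)       # E in [-250, 250] cm
    u_axis = np.linspace(-3, 3, 121)
    agg = np.zeros_like(u_axis)
    for ct in CENTRES:
        for ce in CENTRES:
            w = min(tri(th, ct), tri(e, ce))     # rule firing strength (MIN)
            if w == 0.0:
                continue
            cu = int(np.clip(-(ct + ce), -3, 3))  # placeholder rule table
            agg = np.maximum(agg, np.minimum(w, [tri(u, cu) for u in u_axis]))
    if agg.sum() == 0:
        return 0.0
    u_star = (u_axis * agg).sum() / agg.sum()    # centre of gravity
    return u_star * 120.0 / 3.0                  # back to PWM difference
```

With zero errors the output is zero; large positive θ and E drive a large negative U under this placeholder table, i.e., a corrective differential steer.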
2.7. Location of the Robot
The feature-based Extended Kalman Filter (EKF) [36] provides an effective method for mobile robot pose estimation, so it was adopted in this study for robot localization in the forest. The exact spot where the robot stops in front of each tree is regarded as the robot's initial position (x'0, y'0). The pose of the robot in the forest can be expressed by its coordinates (x', y') and its attitude angle relative to the trunk. Assuming that the robot moves at a constant speed between two trees, the pose of the robot is:
In the above formulas, T is the sampling time, u(k) is the random perturbation in motion, and vy' is the speed of the robot in the y'-axis direction.
For the sake of convenience, the state equation of the robot system is written as:
Then, the error covariance matrix should be:
In the above formula, Ak stands for the Jacobian matrix of the system state, Wk for the Jacobian matrix of the process noise, and Qk for the covariance matrix of the process noise.
The Kalman Gain can be calculated as:
In the above formula, Hk is the Jacobian matrix of the measurement model, Vk the Jacobian matrix of the measurement noise, and Rk the covariance matrix of the measurement noise.
Since the robot's initial position (x'0, y'0) is already known, its position at time k is (x'(k), y'(k)). Distances between the trees and the LiDAR are measured by the LiDAR, while the heading angle of the robot is measured by the gyroscope. Therefore, the measurement model is:
The actually measured data are then used to correct the robot's pose estimate:
In the above formula, Zk stands for the actually measured data.
Finally, the error covariance matrix is updated as follows:
The Extended Kalman Filter is a recursive estimation process: the updated pose and error covariance matrix are used to predict the new estimates in the next time step.
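One predict/update cycle of this localization scheme can be sketched as follows. The model is simplified (state = position in the row frame plus heading; constant speed along the y'-axis; measurement = range to one known tree plus the gyroscope heading), and the Jacobians are written out for this simplified model, not copied from the paper.

```python
import numpy as np

def ekf_step(state, P, v, T, tree, z, Q, R):
    """EKF predict/update sketch. state = (x', y', psi); P = covariance;
    v = speed along y'; T = sampling time; tree = known trunk centre;
    z = (measured range, measured heading); Q, R = process/measurement
    noise covariances."""
    x, y, psi = state
    # --- predict: constant-velocity motion between two trees ---
    x_p = np.array([x, y + v * T, psi])
    F = np.eye(3)                         # motion-model Jacobian
    P_p = F @ P @ F.T + Q
    # --- update: LiDAR range to the tree + gyroscope heading ---
    dx, dy = x_p[0] - tree[0], x_p[1] - tree[1]
    rng = np.hypot(dx, dy)
    h = np.array([rng, x_p[2]])           # predicted measurement
    H = np.array([[dx / rng, dy / rng, 0.0],
                  [0.0,      0.0,      1.0]])
    S = H @ P_p @ H.T + R                 # innovation covariance
    K = P_p @ H.T @ np.linalg.inv(S)      # Kalman gain
    state_new = x_p + K @ (z - h)
    P_new = (np.eye(3) - K @ H) @ P_p
    return state_new, P_new
```

When the measurement agrees with the prediction, the innovation is zero and the state passes through unchanged while the covariance still shrinks, which is the expected EKF behavior.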
Figure 9 is the flow chart of the system algorithm.
2.8. Information Collection
The information collected on the forest includes tree diameters, plant spacing, row spacing, and position information, among which tree position derives from plant spacing and row spacing recursively. The specific methods can be seen as follows.
2.8.1. Calculation of Tree Position
As shown in Figure 10, the absolute coordinate system XOY is established according to the position of tree P1 and the trend of the entire forest. The absolute coordinate of tree P1 is (x1, y1). When the robot stops in front of tree P1 to carry out the operation, the center of its LiDAR is taken as the origin O1. The dead-ahead direction of the robot is taken as the N-axis, and the line connecting O1 and the tree is taken as the M-axis, establishing the robot's coordinate system MO1N. The distance between P1 and O1 is L1; the distance between P2 and O1 is L2; and the angle between P2 and the N-axis is γ. The coordinate system MO1N is rotated by β degrees (counterclockwise is positive; the β angle is measured by the gyroscope), so that the resulting coordinate system M1O1N1 is parallel to the absolute coordinate system XOY. Thus, the coordinate of tree P2 in the M1O1N1 coordinate system is:
The coordinate of P2 in the absolute coordinate system XOY is:
Therefore, the absolute coordinate of tree P3 can be deduced from that of tree P2. In the same way, the coordinates of all trees in the same row can be calculated, and coordinate formulas for different rows can be inferred. The tree rows in the actual experiment are straight enough that the X-coordinate of each row is treated as constant. The coordinate formulas for tree positions in the forest are obtained as follows:
The coordinate formula of odd tree rows should be:
The coordinate formula when the robot turns:
The coordinate formula of even tree rows should be:
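The row formulas themselves are not reproduced above. As an illustration of the single recursion step, the transform from one tree to the next can be sketched as follows; the frame conventions (P1 on the M-axis at distance L1, P2 at range L2 and angle γ from the N-axis, rotation by β aligning the robot frame with XOY, M1 parallel to X) are assumptions of this sketch.

```python
import math

def rot(theta, p):
    """Rotate the 2-D point p by theta radians (counterclockwise positive)."""
    c, s = math.cos(theta), math.sin(theta)
    return (c * p[0] - s * p[1], s * p[0] + c * p[1])

def next_tree_abs(p1_abs, L1, L2, gamma, beta):
    """Absolute coordinates of tree P2 from measurements taken at O1,
    given the known absolute coordinates of tree P1."""
    x1, y1 = p1_abs
    p1_robot = (L1, 0.0)                               # P1 on the M-axis
    p2_robot = (L2 * math.sin(gamma), L2 * math.cos(gamma))
    # Express both trees in the rotated frame M1 O1 N1 (parallel to XOY),
    # then shift so that P1 lands on its known absolute position.
    p1_rot, p2_rot = rot(-beta, p1_robot), rot(-beta, p2_robot)
    o1_abs = (x1 - p1_rot[0], y1 - p1_rot[1])
    return (o1_abs[0] + p2_rot[0], o1_abs[1] + p2_rot[1])
```

Applying this step tree by tree (and adapting the offsets at row changes) yields the recursive coordinates described in the text.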
2.8.2. Calculation of Tree Radius
To measure a tree's diameter as accurately as possible, the measurement is taken while the robot pauses in front of the tree for the tapping operation, when the LiDAR is closest to the tree. At this moment, the scanned point set of the tree, together with the relative position of the next tree, is collected and fitted to a circle using the Gauss–Newton method [37]. The diameter of the fitted circle is regarded as the diameter of the tree.
The equation of the circle is assumed to be:
C, D, E, G, and H are defined as follows:
Then a, b, and c are:
Thus, the tree's radius R is:
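The displayed formulas are not reproduced above. For the algebraic circle form that matches the symbols a, b, c, and R used in the text, the standard relations are as follows (the intermediate quantities C, D, E, G, and H are the sums arising in the least-squares solution and are omitted here):

```latex
x^2 + y^2 + a x + b y + c = 0, \qquad
\left(x + \tfrac{a}{2}\right)^2 + \left(y + \tfrac{b}{2}\right)^2
  = \tfrac{a^2 + b^2 - 4c}{4}, \qquad
R = \tfrac{1}{2}\sqrt{a^2 + b^2 - 4c}
```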
After obtaining each tree's position and diameter by the above method, the position of each tree is represented by the center of its fitted circle and its diameter by the circle's diameter, to help users better understand the collected information. The geometric feature map of the forest is thus drawn.