Article

LiDAR-Based Negative Obstacle Detection for Unmanned Ground Vehicles in Orchards

by Peng Xie 1, Hongcheng Wang 1,*, Yexian Huang 1, Qiang Gao 1, Zihao Bai 1, Linan Zhang 1 and Yunxiang Ye 2,3
1 School of Mechanical Engineering, Hangzhou Dianzi University, Hangzhou 310018, China
2 Institute of Agricultural Equipment, Zhejiang Academy of Agricultural Sciences, Hangzhou 310012, China
3 Key Laboratory of Agricultural Equipment for Hilly and Mountainous Areas in Southeastern China (Co-Construction by Ministry and Province), Ministry of Agriculture and Rural Affairs, Hangzhou 310021, China
* Author to whom correspondence should be addressed.
Sensors 2024, 24(24), 7929; https://doi.org/10.3390/s24247929
Submission received: 8 October 2024 / Revised: 7 November 2024 / Accepted: 10 December 2024 / Published: 11 December 2024
(This article belongs to the Section Smart Agriculture)

Abstract

In orchard environments, negative obstacles such as ditches and potholes pose significant safety risks to robots working within them. This paper proposes a negative obstacle detection method based on LiDAR tilt mounting. With the LiDAR tilted at 40°, the blind spot is reduced from 3 m to 0.21 m, and the ground point cloud density is increased by an order of magnitude. Based on geometric features of laser point clouds (such as rear wall height and density, and spacing jump between points), a method for detecting negative obstacles is presented. This method establishes a mathematical model by analyzing changes in point cloud height, density, and point spacing, integrating features captured from multiple frames to enhance detection accuracy. Experiments demonstrate that this approach effectively detects negative obstacles in orchard environments, achieving a success rate of 92.7% in obstacle detection. The maximum detection distance reaches approximately 8.0 m, significantly mitigating threats posed to robots by negative obstacles in orchards. This research contributes valuable technological advancements for future orchard automation.

1. Introduction

In recent years, the use of mechanization in agricultural production has become increasingly popular in developing countries [1]. Agricultural robots need to perceive and understand the surrounding environment, and obstacle detection is crucial to this. Obstacles can be categorized by their height relative to the ground into two types: positive obstacles, which rise above the ground, and negative obstacles, which lie below it [2]. Positive obstacles are easily detected by sensors because they stand above the ground, whereas orchard environments contain numerous negative obstacles created by planting requirements. Negative obstacles blend into orchard roads, so their features are not distinct, and they pose a great risk to orchard mobile robots. Being below the ground, negative obstacles present a greater detection challenge because only a small portion of their features is visible. It has been shown that the angle subtended at a distance sensor by a positive obstacle at range $R$ is proportional to $1/R$, while the angle subtended by a negative obstacle is proportional to $1/R^2$ [3], which significantly increases the difficulty of detection. Additionally, the complex orchard environment with vegetation cover further adds to the difficulty [4]. Therefore, negative obstacle detection has always been a major challenge in the field of autonomous driving [3,5,6,7]. Negative obstacles in orchards change with planting needs, making their real-time detection an urgent issue [8].
Over the past few decades, scholars have conducted extensive research on negative obstacle detection [9,10]. To ensure driving safety, a robot should detect both positive and negative obstacles as far as possible [11]. There are currently three classes of methods for detecting negative obstacles: thermal imaging, machine vision based on deep learning, and LiDAR-based detection. Matthies et al. [3] proposed a negative obstacle detection method based on thermal infrared imaging, which relies on the principle that negative obstacles are cooler than their surroundings at night. By detecting areas with lower temperatures, the locations of negative obstacles can be identified. However, this method only yields good results at night and is significantly affected by climate conditions.
Machine vision solutions use depth cameras as the primary detection sensors and are mainly represented by deep learning algorithms [12,13,14,15]. Feng et al. [16] designed a segmentation network with dual semantic-feature complementary fusion to segment different negative obstacles on roads. Liu et al. [17] proposed the YOLO-SCG model, which enables faster and more accurate target detection. Koch et al. [18] compared the geometric features of potholes in asphalt pavement with flawless road surfaces to detect defective spots. Dhiman et al. [19] developed two stereo vision analysis methods and two deep learning methods for pothole detection. Machine vision-based methods are susceptible to environmental lighting conditions and may not function properly at night. Silvister et al. [20] and Thiruppathiraj et al. [21] used convolutional neural networks on smartphones to detect pavement damage; although the accuracy is high, the detection speed is relatively slow. Kang et al. [22] proposed an environmental perception algorithm that integrates a single-line LiDAR with vision to address the challenge of identifying small targets; however, its detection performance is poor and cannot adapt to the complex environment of orchards.
LiDAR technology shows promising results, as it is less affected by adverse environmental conditions such as bad weather [23] or smoke [24]; detection solutions based on LiDAR are therefore currently the mainstream. LiDAR can accurately measure the distance between obstacles and the sensor, with minimal impact from weather and lighting changes. Xiao et al. [25] proposed a point cloud enhancement technique. Larson et al. [26] identified potential negative obstacles by detecting gaps and an absence of data. Shang et al. [27] proposed a novel LiDAR setup and introduced a feature-fusion algorithm that employs Bayesian estimation for each weight, achieving promising experimental results; their setup inspired the installation proposed in this paper, and the differences between the two installation methods are elaborated below. Park et al. [28] proposed a 3D object detection method called temporal motion-aware 3D object detection (TM3DOD), which utilizes temporal LiDAR data and achieves performance comparable to state-of-the-art methods, although it is not very effective for small objects. Chen et al. [29] proposed a negative obstacle feature detection method based on radial distance and local dense features with multiple LiDARs and synthetic features, which achieves good negative obstacle detection but poor detection of small targets. Ravi et al. [30,31] proposed LiDAR-based detection of cracks and potholes on highways and airport runways, together with estimation of negative obstacle area and volume; however, their study uses a 2D LiDAR that is prone to occlusion by different types of road anomalies. Song et al. [32] proposed a rule-based method for extracting target points from LiDAR ring data with edge triggers, which improves detection speed and performance. The traditional method of installing LiDAR has the issue of large blind spots [27], which is highly detrimental to the detection of negative obstacles; this paper proposes a novel LiDAR installation method to address this problem.
Zhou et al. [11] proposed a terrain perception technology that combines visual and LiDAR sensors for forest environments, estimating ground positions and extracting passable areas using visual sensors. Ruan et al. [33] developed a real-time negative obstacle detection model based on the YOLOv4 network for open-pit mine environments. Many scholars have also introduced technologies for detecting ground negative obstacles using unmanned aerial vehicle (UAV) platforms [34,35,36,37,38,39]. However, in orchard environments where vegetation cover obstructs visibility, drones struggle to detect terrain, rendering such methods ineffective. In addition, Jiang et al. [40] investigated the detection of tree trunks in an orchard environment for path planning and navigation.
The first section of this paper discusses relevant research on negative obstacle detection. The second section elaborates on the selection of LiDAR and proposes a novel installation method; on this basis, it presents a detection algorithm based on the spatial geometric features of negative obstacles and briefly introduces the experimental setup. The third section discusses the experimental results, analyzing the data from the installation-method and negative-obstacle-detection experiments, and concludes that the proposed method can meet the needs of real-time detection of negative obstacles in orchards. The fourth section summarizes the entire paper.

2. Methods

This paper proposes a detection algorithm that uses LiDAR to detect negative obstacles. To reduce the blind spot and increase the ground point cloud density, the solid-state LiDAR is tilted during installation. Solid-state LiDAR is a new type of LiDAR that achieves high point cloud density at a lower cost, which is highly beneficial for negative obstacle detection. On this basis, our paper detects negative obstacles using geometric features derived from point clouds. Since a LiDAR-based detection method is used, weather, lighting, and temperature have little impact on the detection results, which improves the robustness of the algorithm.
This section provides a detailed introduction to the negative obstacle recognition method, mainly covering the installation of the LiDAR and the specific process of the recognition algorithm. Solid-state LiDAR is a type of three-dimensional LiDAR that has emerged in recent years, with advantages such as high reliability, compact structure, and lower cost, which has attracted the attention of scholars; this type of LiDAR is usually used for blind spot compensation. In this paper, a solid-state LiDAR is tilted to scan the ground and obtain ground point cloud data, thereby achieving the detection of negative obstacles. On this basis, we establish a mathematical model from the spatial geometric characteristics of the negative obstacle point cloud collected by the LiDAR and match the characteristics of negative obstacles against this model to detect them.

2.1. Tilted Installation of LiDAR

2.1.1. The Disadvantages of Traditional Upright Setup

The traditional upright setup of LiDAR for negative obstacle detection in orchards presents two significant disadvantages: (1) a large blind spot and (2) a relatively low point cloud density. The traditional upright setup is typically designed for structured environments such as urban roads and indoor settings, where negative obstacles are rare, so an upright setup lets the LiDAR scan farther. Within the blind spot, the risk to vehicles is minimal since negative obstacles are absent. However, this is not the case in orchards, where the presence of negative obstacles requires the robot to continuously check its vicinity (especially ahead) to avoid safety risks. Figure 1 (from the RoboSense official website) illustrates the beam distribution of a multi-line LiDAR (RoboSense, Shenzhen, China); its vertical field of view (FOV) ranges from −15° to 15°.
Figure 2 is a schematic diagram of the blind spot for an upright-mounted LiDAR. Here, $H$ is the height from which the LiDAR emits the laser to the ground; $W$ is the extent of the blind spot, and $\theta$ is the angle between the lowest LiDAR beam and the vertical direction. It can be observed that
$W = H \times \tan \theta$ (1)
Therefore, the extent of the blind spot for the traditional upright setup depends on the LiDAR height $H$ and the angle $\theta$ between the lowest LiDAR beam and the vertical direction. Assuming the installation height $H$ is 0.8 m and $\theta$ is 75°, we obtain $W = 0.8 \times \tan 75° \approx 3$ m. If the LiDAR is installed at the center of the vehicle rather than at the front, the blind spot may be even larger (part of the beams may be obstructed by the vehicle's edges). In complex environments like orchards, a negative obstacle inside the blind spot that is not detected in real time can be extremely dangerous.
The traditional upright setup also results in too low a beam density on the ground. Figure 3 is a schematic diagram of the beam density for the traditional upright setup (for ease of viewing, the angles depicted differ from the actual angles). Here, $D$ is the distance between the two laser beams; the angles between the two beams and the vertical direction are $\theta_1$ and $\theta_2$, respectively; $W_1$ and $W_2$ are the distances from the two laser beams to the front end of the vehicle, and $H$ is the height from which the LiDAR emits the laser to the ground. From this, we can derive
$D = W_1 - W_2 = H \times (\tan \theta_1 - \tan \theta_2)$ (2)
Assuming $H$ is 0.8 m and $\theta_1$ and $\theta_2$ are 75° and 73°, respectively, we calculate $D \approx 0.37$ m. As $\theta_1$ and $\theta_2$ increase, the spacing between the beams grows further, leading to a low ground point cloud density that makes it difficult to detect negative obstacles (as shown in Figure 3). Figure 4 illustrates the two issues caused by the traditional upright setup: (1) a blind spot at close range and (2) a low ground point cloud density at long range.
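As a quick check of these geometric relations, the following minimal Python sketch evaluates Equations (1) and (2) for the values used above; the function names are illustrative, not part of the original implementation.

```python
import math

def blind_spot(h, theta_deg):
    """Blind-spot extent W = H * tan(theta) for an upright LiDAR, Equation (1)."""
    return h * math.tan(math.radians(theta_deg))

def beam_spacing(h, theta1_deg, theta2_deg):
    """Ground spacing D = H * (tan(theta1) - tan(theta2)) between two adjacent
    beams, with angles measured from the vertical, Equation (2)."""
    return h * (math.tan(math.radians(theta1_deg)) - math.tan(math.radians(theta2_deg)))

print(blind_spot(0.8, 75))        # ~2.99 m: the ~3 m blind spot quoted in the text
print(beam_spacing(0.8, 75, 73))  # ~0.37 m between adjacent beams on the ground
```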

2.1.2. The Use of MID-70 LiDAR

In the negative obstacle detection method proposed in this paper, we use a MID-70 LiDAR (DJI, Shenzhen, China) instead of a mechanical rotating LiDAR. The MID-70 is a solid-state LiDAR, a type that has emerged in recent years. Compared to mechanical rotating LiDARs, solid-state LiDARs lack rotating mechanical parts, which reduces the failure rate and enhances durability; they are also more affordable and more compact, making them suitable for orchard environments. A traditional mechanical rotating LiDAR has a 360° horizontal field of view, scanning the surrounding environment with each rotation of the internal probe. However, this type of LiDAR is not suitable for negative obstacle detection in orchards for two main reasons: (1) much of the rotating LiDAR's 360° field of view is wasted (more than half of the beams never reach the ground) and (2) its point cloud density is low (the points are spread across every corner of the space).
Addressing the above two issues, this paper utilizes the LIVOX MID-70 LiDAR for ground scanning, as shown in Figure 5. This LiDAR has a circular field of view of 70.4° in both the horizontal and vertical directions, with the near blind spot reduced significantly to 21 cm. The larger field of view and smaller blind spot help the unmanned ground vehicle perceive the orchard environment more comprehensively and achieve real-time detection of negative obstacles. The inclined installation allows most of the laser light to illuminate the ground (as shown in Figure 6). The MID-70 adopts non-repetitive scanning technology, yielding a higher point cloud density that enables accurate detection of ground details. Based on these advantages, this paper adopts the MID-70 LiDAR as the sensor for detecting negative obstacles in orchards.

2.1.3. The Tilted Installation of LiDAR

Considering the shortcomings of the traditional upright setup, this paper proposes a tilted LiDAR setup; compared with the upright setup, the tilted setup directs more of the laser onto the ground, as shown in Figure 6a. In the setup schematic, the solid-state LiDAR is mounted at the center of the front of the vehicle and tilted at an angle $\theta_3$ so that it scans the ground. As shown in Figure 6b, the red area represents the LiDAR's scanning area on the ground; $\theta_3$ can be adjusted to modify the scanning range.
When the LiDAR is installed at a height of 0.8 m, the extent of the blind spot is $0.8 \times \tan(54.8° - \theta_3)$ and the farthest detection distance is $0.8 \times \tan(125.2° - \theta_3)$; the two functions are plotted in MATLAB as shown in Figure 7. The vertical field of view of the LiDAR is 70.4°, split evenly about the horizontal line, so theoretically, when $\theta_3$ is less than 35.2°, the farthest detection distance is infinite, and when $\theta_3$ is greater than 54.8°, the blind spot shrinks to 0; Figure 7 therefore plots the curves from 36° to 54°. After comprehensive consideration, this paper chooses a 40° tilt angle; at an installation height of 0.8 m, this gives a farthest detection distance of about 9.5 m and a blind spot of about 0.21 m. We experimented at 40° and found detection to be most effective at this angle. When $\theta_3$ is smaller than 40°, a longer detection distance can be obtained, but the distant point cloud is too sparse (excessive spacing between points makes the features of distant obstacles inconspicuous), so the detection effect is not ideal; when $\theta_3$ is larger than 40°, the blind spot can be reduced, but the detection distance drops sharply, as can be seen from the detection-distance expression above. If the unmanned ground vehicle cannot detect distant negative obstacles in time when traveling at higher speed, this may lead to danger. In summary, a blind spot of about 0.21 m and a farthest detection distance of about 9.5 m meet the demands of negative obstacle detection in the orchard.
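The trade-off between blind spot and detection range can be reproduced with a short sketch; the 54.8° and 125.2° offsets follow from the 70.4° vertical FOV, and the loop bounds match the 36–54° range plotted in Figure 7.

```python
import math

H = 0.8  # mounting height in metres, as used in the paper

def tilted_geometry(tilt_deg):
    """Blind-spot extent and farthest ground hit for tilt angle theta_3 (deg).
    54.8 = 90 - 35.2 and 125.2 = 90 + 35.2, where 35.2 is half the 70.4 deg FOV."""
    blind = H * math.tan(math.radians(54.8 - tilt_deg))
    far = H * math.tan(math.radians(125.2 - tilt_deg))
    return blind, far

for t in range(36, 55, 2):
    blind, far = tilted_geometry(t)
    print(f"theta3 = {t:2d} deg: blind spot {blind:.2f} m, max range {far:5.2f} m")
# At theta3 = 40 deg this gives ~0.21 m and ~9.5 m, the values chosen in the text.
```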
As the blind spot curve in Figure 7 shows, under the tilted setup the LiDAR blind spot is significantly reduced, and almost all of the laser light from the MID-70 can be directed onto the forward ground area, greatly improving the density of the forward ground point cloud; this addresses the shortcomings of the traditional upright setup. Under the tilted setup, the geometric features of the negative obstacle point cloud are clearer than with the traditional method, which enhances the robustness of the negative obstacle recognition system.

2.1.4. Experimental Arrangements

We designed experiments to verify the advantages of the proposed mounting method by placing four obstacles in front of the unmanned vehicle, scanning them with three different mounting methods, analyzing the number of points on the obstacles and the extent of the blind spots, and comparing the advantages and disadvantages of the three mounting methods.

2.2. Negative Obstacle Detection Method Based on Point Cloud Spatial Geometric Features

Based on the tilted installation, we obtain ground point cloud features, and by analyzing the spatial features of the negative obstacle point cloud, we can detect negative obstacles.

2.2.1. Spatial Geometric Characterization of Negative Obstacle Point Cloud

Negative obstacle point clouds are characterized by three main features: (1) a jump in the spacing of neighboring points; (2) a "falling" of the point cloud along the rear wall of the obstacle; and (3) an increase in point cloud density along the rear wall of the obstacle.
The geometric features of a negative obstacle are shown in Figure 8; the angles between the three laser beams are all $\theta_4$, where laser $l_1$ strikes the horizontal ground, laser $l_2$ strikes the junction of the ground and the negative obstacle, and laser $l_3$ strikes the inside of the negative obstacle. When a negative obstacle is present, the irradiation point of $l_3$ moves from $P$ to $P'$, and since $D_2' > D_2$, the jump in point spacing is a major geometric feature of negative obstacles.
Since the negative obstacle is lower than the horizontal ground, the point cloud inside it is also lower than that of the surrounding environment. Considering that the ground around a negative obstacle is not necessarily horizontal, this paper fits a straight line to the point cloud around the obstacle to represent the ground trend. The height of the point cloud at the negative obstacle is significantly lower than the fitted line, illustrating the "falling" of the point cloud along the rear wall of the obstacle. Figure 9 shows the actual point cloud of a negative obstacle, with the green solid line representing the obstacle boundary.
Negative obstacles also exhibit a dense point cloud along the rear wall. As shown in Figure 10a, when there is no negative obstacle, the point spacing $d$ increases with the angle $\theta$, corresponding to a gradual decrease in point cloud density. When a negative obstacle is present, this gradually decreasing trend is interrupted: as illustrated in Figure 10b, the point spacing within the negative obstacle region is significantly reduced, leading to a marked increase in point density in this area.

2.2.2. Negative Obstacle Detection: An Algorithm-Specific Process

Through the above analysis, we know that the negative obstacle point cloud has three distinct spatial distribution characteristics, based on which negative obstacles can be detected. Each frame of the point cloud can be regarded as the signal of the LiDAR scanning the current ground environment, so as long as each frame is processed, obstacle information can be extracted. Figure 11 shows the general flow of the negative obstacle detection algorithm in this paper.
Based on the spatial characteristics of the point cloud, negative obstacles are characterized by jumps in the spacing of nearby points. Let the coordinates of a point be $P_i = [x_i, y_i, z_i]$, and let $P_{i-1} = [x_{i-1}, y_{i-1}, z_{i-1}]$ be the point adjacent to $P_i$ and closer to the LiDAR. The spacing of neighboring points can then be calculated as
$d_{n_i} = \sqrt{(x_i - x_{i-1})^2 + (y_i - y_{i-1})^2 + (z_i - z_{i-1})^2}$ (3)
We calculate the distance between neighboring points according to Equation (3) and compare it with a threshold $Th_j$. When $d_{n_i}$ is greater than $Th_j$, the points involved in its calculation are added to the initial candidate point set. In our experiments, $Th_j$ is set to 35 cm; changing its value in different scenarios allows the method to adapt to negative obstacles of different sizes. From the analysis presented earlier, the abrupt change in the spacing of adjacent points marks the rear edge of a negative obstacle, while the laser point directly in front of the spacing jump corresponds to its front edge. The spacing jump thus roughly localizes the negative obstacle, reducing the subsequent computational load.
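A minimal sketch of this step is given below, assuming the points of one scan line are ordered from near to far; the function name and array layout are illustrative, not the paper's implementation.

```python
import numpy as np

def spacing_jump_candidates(points, th_j=0.35):
    """Return indices of points whose spacing to the previous point on the
    scan line exceeds th_j (Equation (3)); th_j = 0.35 m as in the text.
    `points` is an (N, 3) array ordered from near to far."""
    diffs = np.diff(points, axis=0)        # P_i - P_{i-1}
    d = np.linalg.norm(diffs, axis=1)      # neighboring point spacing d_{n_i}
    return np.where(d > th_j)[0] + 1       # index of the far-side point of each jump
```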
We determine whether a point has planar or linear features based on its smoothness: planar features indicate that the local point cloud is relatively smooth, while linear features suggest that it is relatively steep. Point cloud smoothness can be calculated using Equation (4).
$S_i = \frac{1}{2n} \left\| \sum_{j \in [i-n,\, i+n],\, j \neq i} \frac{P_i - P_j}{\left\| P_i - P_j \right\|} \right\|$ (4)
where $n$ denotes the number of laser points selected on each side of $P_i$, and $S_i$ represents the magnitude of the sum of the unit vectors formed by point $P_i$ and its $2n$ neighboring laser points. As illustrated in Figure 12, when $P_j$ lies on a surface, the individual unit vectors cancel upon addition, resulting in a small $S_i$; conversely, when $P_j$ lies on a line, the unit vectors do not cancel, leading to a large $S_i$. Taking a planar feature threshold $Th_p$ and a linear feature threshold $Th_l$: when $S_i < Th_p$, the point is a planar feature point, and when $S_i > Th_l$, the point is a linear feature point.
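The smoothness measure of Equation (4) can be sketched as follows; the neighborhood size $n$ is an assumed illustrative value, and boundary handling is simplified.

```python
import numpy as np

def smoothness(points, i, n=5):
    """S_i from Equation (4): (1/2n) times the norm of the sum of unit vectors
    from P_i toward its 2n neighbors. Small S_i -> planar point; large S_i -> linear."""
    idx = [j for j in range(i - n, i + n + 1) if j != i and 0 <= j < len(points)]
    vecs = points[i] - points[idx]                               # P_i - P_j
    units = vecs / np.linalg.norm(vecs, axis=1, keepdims=True)   # unit vectors
    return np.linalg.norm(units.sum(axis=0)) / (2 * n)
```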
After obtaining the initial candidate point set, we calculate the point cloud density. By discarding the Z-axis information, the point cloud is projected onto a plane, which is divided into many small grid cells; the density is measured by the number of points in each cell. We set a threshold $Th_s$ and compare the point count of the cell at the m-th row and n-th column with the counts of the surrounding cells, as shown in Equation (5). If the condition is met, the corresponding three-dimensional points are added to the secondary candidate point set; otherwise, they are eliminated.
$N_{p_{m,n}} - N_{p_{i,j}} > Th_s, \quad i \in [m - N_g,\, m + N_g], \; j \in [n - N_g,\, n + N_g]$ (5)
where $N_{p_{m,n}}$ represents the number of points in the cell at the m-th row and n-th column, and $N_g$ denotes the extent of the cell neighborhood.
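Below is a minimal sketch of this density test under stated assumptions: the cell size, $Th_s$, and $N_g$ values are illustrative, not the paper's tuned parameters.

```python
import numpy as np

def dense_cells(points_xy, cell=0.1, th_s=5, n_g=1):
    """Bin XY-projected points into square cells and flag cells whose count
    exceeds every neighbor within n_g cells by more than th_s, per Equation (5)."""
    ij = np.floor(points_xy / cell).astype(int)
    ij -= ij.min(axis=0)                          # shift indices to start at 0
    grid = np.zeros(tuple(ij.max(axis=0) + 1), dtype=int)
    np.add.at(grid, (ij[:, 0], ij[:, 1]), 1)      # per-cell point counts
    hits = []
    for m in range(n_g, grid.shape[0] - n_g):
        for n in range(n_g, grid.shape[1] - n_g):
            nbhd = grid[m - n_g:m + n_g + 1, n - n_g:n + n_g + 1].ravel()
            others = np.delete(nbhd, nbhd.size // 2)   # drop the center cell
            if np.all(grid[m, n] - others > th_s):
                hits.append((m, n))
    return hits
```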
As mentioned above, the negative obstacle point cloud sinks along the rear wall. We therefore fit a straight line to the point cloud around the obstacle to represent the ground trend and compare point heights against this line to obtain the set of points lying below the ground. We describe the line by a point on it and a direction vector: let the line pass through $P_0 = [x_0, y_0, z_0]$, a specific point that fixes the line's position in space, with direction vector $n = [a, b, c]$, whose components $a$, $b$, $c$ are its projections along the X, Y and Z axes, respectively. The equation of the line is then expressed as follows:
$\frac{x - x_0}{a} = \frac{y - y_0}{b} = \frac{z - z_0}{c}$ (6)
The distance between a rear-edge point $P_b$ and the straight line must satisfy
$\left\| (P_b - P_0) \times n \right\| \geq Th_d$ (7)
where $Th_d$ denotes the distance threshold. As shown in Figure 13, $P_b - P_0$ is the vector from $P_0$ to $P_b$; its cross product with $n$ yields the distance between $P_b$ and the fitted line. Equation (7) requires this distance to be at least $Th_d$. However, since the point could also lie above the line, the following constraint is added:
$\left( P_0 + \frac{P_b - P_0}{\left\| P_b - P_0 \right\|} \right) \cdot z < \left( P_0 + \frac{n}{\left\| n \right\|} \right) \cdot z$ (8)
In Equation (8), $z$ is the unit vector along the Z-axis of the world coordinate system, as shown in Figure 13; $\frac{P_b - P_0}{\left\| P_b - P_0 \right\|}$ is the unit vector pointing from $P_0$ to $P_b$, and $\frac{n}{\left\| n \right\|}$ is the unit vector along the fitted line. Adding $P_0$ to both vectors translates their starting points to $P_0$; taking the dot product with $z$ then compares the Z-components of the two vectors. This constraint requires the point to lie below the fitted line, ensuring that negative rather than positive obstacles are recognized. Under the constraints of Equations (7) and (8), points from the secondary candidate set are added to the final candidate set.
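A compact sketch of the checks in Equations (7) and (8), assuming the direction vector has already been normalized by the line fit:

```python
import numpy as np

Z = np.array([0.0, 0.0, 1.0])  # world Z-axis unit vector

def is_negative_candidate(p_b, p0, n_dir, th_d):
    """Equations (7) and (8): P_b must lie at least th_d from the fitted line
    through P_0 with unit direction n_dir, and below it along the Z-axis."""
    v = p_b - p0
    far_enough = np.linalg.norm(np.cross(v, n_dir)) >= th_d      # Eq. (7)
    below = (p0 + v / np.linalg.norm(v)) @ Z < (p0 + n_dir) @ Z  # Eq. (8)
    return far_enough and below
```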
So far, we have obtained the final candidate point set for negative obstacles, but the obstacle size cannot be determined from this set alone; the potential obstacle size is obtained by clustering the point cloud. Our principle is that a potential obstacle may be slightly larger than the actual obstacle, but never smaller, as underestimating its size could pose a safety hazard to the unmanned ground vehicle. In this paper, we use the mean-shift algorithm to cluster the candidate points and compute the polygon describing the area of each negative obstacle. Finally, negative obstacles are filtered by area to remove those that are too small, and the negative obstacle data are output.
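For illustration, the clustering and area filtering could be sketched with scikit-learn's mean-shift implementation; the bandwidth, the minimum area, and the use of a bounding box rather than a polygon are assumptions, not the paper's tuned choices.

```python
import numpy as np
from sklearn.cluster import MeanShift

def cluster_obstacles(points_xy, bandwidth=0.5, min_area=0.05):
    """Cluster final candidate points with mean-shift and drop clusters whose
    bounding-box area falls below min_area (m^2)."""
    labels = MeanShift(bandwidth=bandwidth).fit_predict(points_xy)
    obstacles = []
    for k in np.unique(labels):
        pts = points_xy[labels == k]
        extent = pts.max(axis=0) - pts.min(axis=0)   # axis-aligned width/height
        if extent[0] * extent[1] >= min_area:
            obstacles.append(pts)                    # keep sufficiently large regions
    return obstacles
```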
In practice, the characteristics of negative obstacle point clouds vary with distance. Figure 14 illustrates the point spacing and height features of obstacles at distances of 1 m, 2 m, 4 m, and 8 m. When the negative obstacle is close to the vehicle, the spacing jump and height depression of the point cloud are more pronounced; at greater distances, the density feature becomes more noticeable. Based on these observations, this paper sets different thresholds every 0.5 m to accommodate the features of negative obstacles at varying distances; a simple banding scheme is sketched below.
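One simple way to realize such distance-banded thresholds is a lookup keyed by 0.5 m bands; the base value and per-band increment below are purely illustrative assumptions, since the paper tunes its per-band thresholds empirically.

```python
def jump_threshold(distance_m, base=0.35, step=0.02):
    """Illustrative distance-banded spacing threshold: one value per 0.5 m band,
    loosening with range as the natural point spacing grows."""
    band = int(distance_m / 0.5)   # 0.5 m bands, as described in the text
    return base + step * band
```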

2.2.3. Experimental Arrangement

In this paper, experiments are designed to verify the feasibility of the algorithm in a real orchard: three types of negative obstacles in the orchard are detected using the algorithm, and the detection results of each frame of the point cloud are counted.

2.3. Experimental System and Evaluation Indicators

Figure 15 shows the experimental system used in this paper, including the main equipment: a 16-line mechanical rotating LiDAR, a MID-70 solid-state LiDAR, and a cart platform (Yunle New Energy, Xuancheng, China).
We count the detection results for each negative obstacle separately, per frame, and use these counts as the performance index. The vehicle travels toward the negative obstacle at a constant speed; when the distance between the vehicle and the obstacle equals $D_{max}$, frame counting starts, and the total number of frames is $N_0$. A detection can have four possible outcomes: (1) a negative obstacle is correctly identified; (2) a negative obstacle is identified but a non-negative obstacle is also marked as negative; (3) only non-negative obstacles are identified; and (4) nothing is detected. The sum of outcomes (1) and (2) is $N_1$; the sum of (2) and (3) is $N_2$, and $N_0$ is the sum of all four. Based on these counts, the detection success rate is defined as
$P_{success} = \frac{N_1}{N_0} \times 100\%$ (9)
the false detection rate as
$P_{false} = \frac{N_2}{N_0} \times 100\%$ (10)
and the miss detection rate as
$P_{miss} = \left( 1 - \frac{N_1}{N_0} \right) \times 100\%$ (11)
We also measure the time consumed per frame for negative obstacle detection. When negative obstacles are far from the unmanned ground vehicle, their geometric features are not obvious, which makes detection difficult, and very small negative obstacles are often filtered out as noise. In practical applications, the unmanned ground vehicle moves through the orchard at 5–20 km/h. According to Matthies et al. [41], the vehicle stopping distance can be determined using Equation (12).
$R = \frac{v^2}{2 \mu g} + v T_r + B$ (12)
where $\mu$ is the static friction coefficient between the ground and the wheels, approximately 0.65 for the vehicle used in this paper on orchard ground; $g$ is the gravitational acceleration, usually $g = 9.8\ \text{m/s}^2$; $T_r$ is the total reaction time, usually 0.25 s; and $B$ is the safety buffer distance, 2 m in our experiments. When $v > 3.2\ \text{m/s}$ ($v > 11.52\ \text{km/h}$), the speed term becomes dominant. At 20 km/h, the stopping distance calculated from Equation (12) is about 5.9 m, so the detection distance of this algorithm meets the demands of the orchard scenario. The hardware used in this paper is a computer with an Intel Core i5-1135G7 processor and 16 GB of RAM; the average computing time per frame is 17 ms, meeting the requirements for real-time detection.
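The evaluation metrics and the stopping-distance check translate directly into code; a minimal sketch follows (exact evaluation of Equation (12) gives about 5.8 m at 20 km/h, which the text rounds to 5.9 m).

```python
def detection_rates(n0, n1, n2):
    """Success, false-detection and miss rates, Equations (9)-(11)."""
    return 100 * n1 / n0, 100 * n2 / n0, 100 * (1 - n1 / n0)

def stopping_distance(v, mu=0.65, g=9.8, t_r=0.25, b=2.0):
    """Stopping distance R = v^2 / (2*mu*g) + v*T_r + B, Equation (12)."""
    return v**2 / (2 * mu * g) + v * t_r + b

print(stopping_distance(20 / 3.6))  # ~5.8 m at 20 km/h, within the ~8 m detection range
```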

3. Results

We designed three experiments to compare the advantages and disadvantages of the tilt-mounted LiDAR approach versus the traditional upright-mounted approach, and to validate the accuracy of the negative obstacle detection algorithm and its detection success rate in an orchard field scenario.

3.1. Experimental Result and Analysis of Tilt-Mounting Method

To compare the three installation methods, we designed three experiments: (1) tilted installation of the solid-state LiDAR; (2) traditional upright installation; and (3) tilted installation of a mechanical LiDAR. In the first experiment, the MID-70 LiDAR was installed at a 40° tilt on a platform 0.8 m above the ground; in the second, the LiDAR was installed upright on a platform 0.8 m above the ground; in the third, the mechanical LiDAR was installed at a 25° tilt on a platform 0.8 m above the ground. As shown in Figure 16, four obstacles, each 21 cm high and 30 cm wide, were placed in front of the vehicle at 2 m intervals to verify the visible range of the LiDAR.
Figure 17a shows the field of view of the MID-70 in the tilted mounting mode; Figure 17b shows the field of view of the Helios-16 LiDAR in the conventional mounting mode; and Figure 17c shows the field of view of the Helios-16 LiDAR in the tilted mounting mode. The experiments show that, under the tilted mounting method, the LiDAR scans more obstacles with a smaller blind spot (see Table 1), whereas under the traditional upright setup, the large blind spot prevents the detection of nearby obstacles, which poses a safety hazard in a complex environment such as an orchard.
Another advantage of the proposed mounting method is the higher point cloud density, which improves the LiDAR's resolution and allows better analysis of the spatial geometric features of negative obstacles. As shown in Figure 17, we designed another experiment to verify this enhancement by placing identical obstacles at 2 m, 4 m, 6 m, and 8 m from the vehicle, labeled S1, S2, S3 and S4, respectively. The obstacles were detected separately with the different LiDAR setup methods, and the number of points on each obstacle was counted; since all obstacles are the same size, the point count directly reflects the point cloud density. The statistical results are shown in Table 1.
As can be seen from Table 1, in the traditional upright setup, the number of points on S1 is zero because S1 lies within the blind spot. On the other three obstacles, the number of points was an order of magnitude higher with the tilt-mounted approach than with the traditional upright approach. Tilting the mechanical LiDAR also increases the point cloud density, but it still does not match our proposed solution: due to the mechanical LiDAR's smaller vertical field of view, even at a tilt of only 25°, its maximum detection range is only 4.5 m, with a blind zone of 0.56 m. It can be concluded that, with the tilted installation of the solid-state LiDAR proposed in this paper, more points fall on each obstacle and the point cloud is denser, which is favorable for obstacle detection. Therefore, the tilted setup method proposed in this paper is advantageous for detecting negative obstacles in orchards.

3.2. Experimental Result and Analysis of Negative Obstacles Detection Algorithms

In order to evaluate the algorithm proposed in this paper, experiments were carried out in an orchard scenario. Our experiment consists of the following three parts: (1) introduction of the experimental site and the type and number of negative obstacles; (2) experimental evaluation metrics, mainly the detection success rate and the maximum detection distance; and (3) analysis of the reasons for false detections.
To verify the algorithm's effect on negative obstacle detection, we set up negative obstacles in the orchard environment, as shown in Figure 18. The orchard scene includes gullies and potholes; most of the gullies were manually excavated in the orchard mud, and the potholes include ones dug downward in the mud and ones obtained by removing a sewer cover. The gullies are about 0.5 m wide and 0.5 m deep; the potholes are 0.5–1 m in size and more than 0.5 m deep. To show the generalization of the algorithm, we also added a negative obstacle scene on campus, 0.6 m in size and about 0.3 m deep. Potholes on semi-structured roads are shown in Figure 18a and noted as negative obstacle O1; sewer openings alongside semi-structured roads are shown in Figure 18b and noted as O2; gullies alongside unstructured roads are shown in Figure 18c and noted as O3; and the campus negative obstacle is shown in Figure 18d and noted as O4.
In our experiments, we installed the MID-70 LiDAR tilted at 40° on a 0.8 m high vehicle platform and installed a camera below the LiDAR to capture images of the road ahead of the vehicle. Figure 19 shows the detection results for the different types of negative obstacles, with the camera capturing images on the left and the detection results on the right. The experiments show that the algorithm can effectively detect negative obstacles in the orchard.
We first defined the maximum detection distance $D_{max}$: the distance between the negative obstacle and the unmanned ground vehicle when the obstacle is detected for the first time. It is one of the indexes used to measure the performance of the detection algorithm. The orchard environment is complex, consisting mostly of semi-structured and unstructured roads, and road condition has a large impact on the maximum detection distance: when the road is smooth and lightly shaded, the detection distance is longer; when the road is rugged and overgrown, conditions are unfavorable for negative obstacle detection. Therefore, for each negative obstacle, the experiment is repeated and the average value is used for the final statistics. In addition, the size of the negative obstacle also affects the maximum detection distance. The statistical results are listed in Table 2.
In this experiment, false alarms ($P_{false}$) for negative obstacles occurred at certain moments. In the orchard, factors such as rugged roads and overgrown weeds make the point cloud shape irregular, so the likelihood of false alarms is much higher; since the proposed algorithm relies on the spatial geometric features of negative obstacles, the probability of false alarms in unstructured environments is correspondingly higher. As shown in Table 2, the negative obstacle O3 in the unstructured scenario produces a higher false alarm rate than the other three scenarios.

4. Discussion

In this paper, we have installed the LiDAR at a height of 0.8 m on a platform with a 40° inclination angle, which we believe is an appropriate installation method. Based on the theoretical analysis presented earlier, it is known that increasing the height of the radar will increase the blind area and detection range, while increasing the inclination angle will decrease the blind area and detection range. In practical applications, the installation height and inclination angle can be adjusted appropriately to adapt to obstacle detection in different scenarios.
This paper proposes a method of tilt-mounting a solid-state LiDAR to detect negative obstacles. Tilt-mounting LiDAR is also favored by some scholars: the article [27] proposed the inclined installation of two 32-line mechanical LiDARs at the front of the vehicle, which increases the point cloud density and narrows the blind spot. Our proposed inclined installation achieves the same goal, but with better results at a lower cost. First, we achieve a higher point cloud density. Table 1 shows that, at a distance of 8 m from the vehicle, an obstacle with a surface area of 0.063 m² carries 161 points. In the article by Shang et al. [27], a human body about 8 m from the vehicle carries 740 points. Converting both to the same area (1 m²), our scheme yields approximately 161 × (1/0.063) ≈ 2556 points, while theirs yields 740 × (1/0.935) ≈ 791 points; the figure 0.935 m² is half the human body's surface area, calculated using the formula provided in [42]. The number of points in our scheme is therefore significantly higher. Furthermore, although the blind spot range is not explicitly stated in [27], it can be inferred from the description that the blind spot in front of the vehicle is approximately 0.5 m, which is larger than the 0.21 m blind spot in this paper. Finally, the cost of the LiDAR equipment we selected is much lower: as far as the authors know, the HDL-32 LiDAR used in [27] costs about 52,000 US dollars and the HDL-64 about 100,000 US dollars; with two HDL-32s and one HDL-64, the cost exceeds 200,000 US dollars, whereas the MID-70 LiDAR used in this paper costs around 1000 US dollars.
From the above experiments, it is easy to see that the farthest detection distances are greater for O2 and O4, which lie on relatively flat roads. This is because the structured environment is free from the influence of weeds and rough roads, which benefits negative obstacle detection. Unstructured scenes also increase the number of false alarms: their irregular point cloud shapes interfere with extracting the spatial features of the ground point cloud, raising the false alarm rate and lowering the detection success rate.
This paper uses three thresholds to filter the three characteristics of negative obstacles. In our experimental process, we found that the stricter the threshold selection, the lower the success rate of negative obstacle detection, while the false alarm rate also decreases accordingly. Conversely, the more lenient the threshold selection, the higher the success rate, but the false alarm rate also increases.
Despite these difficulties, we compared our method with some of the more advanced negative obstacle detection algorithms currently available. Dhiman et al. [19] proposed a stereo vision-based detection scheme that can detect potholes whether dry, water-filled, or snow-covered; the overall accuracy on the 50 datasets presented in that paper is 88%, slightly lower than the method proposed here. That paper does not report a maximum detection distance, so no comparison can be made; however, to the authors' knowledge, the distance accuracy of vision-based detection schemes is usually lower than that of LiDAR-based detection.
Unfortunately, to the best of the authors’ knowledge, there is currently no public dataset available for negative obstacle detection. Due to the varying sizes of negative obstacles and their prevalence in non-structured environments, it is challenging to define a negative obstacle scenario. Moreover, the detection scenario can also affect the success rate and detection distance. Orchard environments are particularly complex, as it is difficult to receive GNSS signals due to tree cover, and the obstruction of light by trees can also affect the operation of machine vision. Furthermore, due to the presence of ground weeds, it is almost impossible for LiDAR to detect the ground beyond 10 m. Therefore, it is difficult to compare negative obstacle detection in orchard environments with other environments. In the real world, the number of negative obstacles is much less than that of positive obstacles, making the collection of datasets relatively difficult and resulting in insufficient data volume. The next step should be to further collect more negative obstacles to improve the robustness of the algorithm. The experiments conducted in our paper were carried out in a real orchard environment. Due to the actual planting requirements, the spacing between negative obstacles is relatively large, which makes it difficult to find two negative obstacles within a limited detection range. Therefore, there is indeed a challenge in conducting experiments for the simultaneous detection of multiple negative obstacles. In future research, we plan to conduct experiments on multi-obstacle detection and discuss possible implementation strategies.

5. Conclusions

In this paper, a detection algorithm based on LiDAR for detecting negative obstacles in orchards is introduced. The paper proposes both a method for tilt-mounted LiDAR and a detection technique based on the geometric features of negative obstacle point clouds. As a purely laser-based detection method, it ensures reliability and robustness in the complex environment of orchards. The work presented in this paper focuses on two main aspects: (1) proposing the tilt-mounted LiDAR method and (2) developing a detection algorithm based on the spatial geometric features of negative obstacle point clouds using the tilt-mounted LiDAR.
The MID-70 solid-state LiDAR chosen in this paper is an economical choice: solid-state LiDAR has a lower cost while offering a wider field of view, which is very favorable for negative obstacle detection. Under the tilt-mounting method, the density of the ground point cloud in front of the unmanned ground vehicle increases by more than an order of magnitude, greatly improving the resolution of the ground point cloud; compared with the traditional upright mounting, the tilt-mounted LiDAR reduces the blind spot from 3 m to 0.21 m, which ensures the safety of the vehicle. In conclusion, the tilt-mounted MID-70 solid-state LiDAR ensures that the majority of the laser beams illuminate the ground, which improves the robustness of the negative obstacle detection system.
On this basis, the geometric characteristics of the negative obstacle point cloud are used to distinguish negative obstacles from the horizontal ground: the three spatial features of negative obstacles are analyzed to detect the corresponding point pairs on each laser scan line, and the candidate point sets are clustered and filtered to obtain the final detection results. The detection success rate in the orchard field scene reaches 92.7%, and the average time consumed per frame is about 17.3 ms, which meets the real-time detection and obstacle avoidance requirements of the orchard.
The experimental results show that this method can realize the demand for real-time detection of negative obstacles in the orchard and improve the obstacle avoidance ability of orchard robots. The purpose of this paper is to improve the robot’s ability to detect negative obstacles in the orchard and to address the rollover and tipping issues caused by negative obstacles in the orchard.

Author Contributions

H.W. conceived the idea and wrote the paper. P.X. designed the experiments and analyzed the experiment data. L.Z. and Y.Y. revised the paper. Y.H., Q.G. and Z.B. conducted the experiment. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by the Key R&D Program of Zhejiang (2022C02055), the Zhejiang Province “Triple Agriculture Nine Aspects Cooperation” Science and Technology Cooperation Program (2023SNJF046), the Fundamental Research Funds for the Provincial Universities of Zhejiang (GK239909299001-403), the Scientific Research Fund of Zhejiang Provincial Education Department (Y202352236) and the Zhejiang Province College Students’ Science and Technology Innovation Activity Plan (New Talent Plan) (2023R407067).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data are contained within the article.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Ma, W.; Zhou, X.; Boansi, D.; Horlu, G.S.A.; Owusu, V. Adoption and Intensity of Agricultural Mechanization and Their Impact on Non-Farm Employment of Rural Women. World Dev. 2024, 173, 106434. [Google Scholar] [CrossRef]
  2. Manduchi, R.; Castano, A.; Talukder, A.; Matthies, L. Obstacle Detection and Terrain Classification for Autonomous Off-Road Navigation. Auton. Robot. 2005, 18, 81–102. [Google Scholar] [CrossRef]
  3. Matthies, L.; Rankin, A. Negative Obstacle Detection by Thermal Signature. In Proceedings of the 2003 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2003) (Cat. No. 03CH37453), Las Vegas, NV, USA, 27–31 October 2003; IEEE: Las Vegas, NV, USA, 2003; Volume 1, pp. 906–913. [Google Scholar]
  4. Li, N.; Yu, X.; Liu, X.; Lu, C.; Liang, Z.; Su, B. 3D-Lidar Based Negative Obstacle Detection in Unstructured Environment. In Proceedings of the 2020 4th International Conference on Vision, Image and Signal Processing, Bangkok, Thailand, 9–11 December 2020; ACM: Bangkok, Thailand, 2020; pp. 1–6. [Google Scholar]
  5. Matthies, L.; Kelly, A.; Litwin, T.; Tharp, G. Obstacle Detection for Unmanned Ground Vehicles: A Progress Report. In Proceedings of the Intelligent Vehicles’ 95. Symposium, Detroit, MI, USA, 25–26 September 1995; IEEE: Detroit, MI, USA, 1995; pp. 66–71. [Google Scholar]
  6. Bellutta, P.; Manduchi, R.; Matthies, L.; Owens, K.; Rankin, A. Terrain Perception for DEMO III. In Proceedings of the IEEE Intelligent Vehicles Symposium 2000 (Cat. No. 00TH8511), Dearborn, MI, USA, 5 October 2000; IEEE: Dearborn, MI, USA, 2000; pp. 326–331. [Google Scholar]
  7. Heckman, N.; Lalonde, J.-F.; Vandapel, N.; Hebert, M. Potential Negative Obstacle Detection by Occlusion Labeling. In Proceedings of the 2007 IEEE/RSJ International Conference on Intelligent Robots and Systems, San Diego, CA, USA, 29 October–2 November 2007; IEEE: San Diego, CA, USA, 2007; pp. 2168–2173. [Google Scholar]
  8. Moore, J.G. Managing the Orchard; Agricultural Experiment Station of the University of Wisconsin: Madison, WI, USA, 1916. [Google Scholar]
  9. Shang, E.; An, X.; Li, J.; Ye, L.; He, H. Robust Unstructured Road Detection: The Importance of Contextual Information. Int. J. Adv. Robot. Syst. 2013, 10, 179. [Google Scholar] [CrossRef]
  10. Chen, T.; Dai, B.; Wang, R.; Liu, D. Gaussian-Process-Based Real-Time Ground Segmentation for Autonomous Land Vehicles. J. Intell. Robot Syst. 2014, 76, 563–582. [Google Scholar] [CrossRef]
  11. Zhou, S.; Xi, J.; McDaniel, M.W.; Nishihata, T.; Salesses, P.; Iagnemma, K. Self-supervised Learning to Visually Detect Terrain Surfaces for Autonomous Robots Operating in Forested Terrain. J. Field Robot. 2012, 29, 277–297. [Google Scholar] [CrossRef]
  12. Dodge, D.; Yilmaz, M. Convex Vision-Based Negative Obstacle Detection Framework for Autonomous Vehicles. IEEE Trans. Intell. Veh. 2023, 8, 778–789. [Google Scholar] [CrossRef]
  13. Ahmed, A.; Ashfaque, M.; Ulhaq, M.U.; Mathavan, S.; Kamal, K.; Rahman, M. Pothole 3D Reconstruction With a Novel Imaging System and Structure From Motion Techniques. IEEE Trans. Intell. Transport. Syst. 2022, 23, 4685–4694. [Google Scholar] [CrossRef]
  14. Sun, T.; Pan, W.; Wang, Y.; Liu, Y. Region of Interest Constrained Negative Obstacle Detection and Tracking With a Stereo Camera. IEEE Sens. J. 2022, 22, 3616–3625. [Google Scholar] [CrossRef]
  15. Ghani, M.F.A.; Sahari, K.S.M. Detecting Negative Obstacle Using Kinect Sensor. Int. J. Adv. Robot. Syst. 2017, 14, 172988141771097. [Google Scholar] [CrossRef]
  16. Feng, Z.; Guo, Y.; Sun, Y. Segmentation of Road Negative Obstacles Based on Dual Semantic-Feature Complementary Fusion for Autonomous Driving. IEEE Trans. Intell. Veh. 2024, 9, 4687–4697. [Google Scholar] [CrossRef]
  17. Liu, C.; Zhao, Z.; Zhou, Y.; Ma, L.; Sui, X.; Huang, Y.; Yang, X.; Ma, X. Object Detection and Information Perception by Fusing YOLO-SCG and Point Cloud Clustering. Sensors 2024, 24, 5357. [Google Scholar] [CrossRef] [PubMed]
  18. Koch, C.; Brilakis, I. Pothole Detection in Asphalt Pavement Images. Adv. Eng. Inform. 2011, 25, 507–515. [Google Scholar] [CrossRef]
  19. Dhiman, A.; Klette, R. Pothole Detection Using Computer Vision and Learning. IEEE Trans. Intell. Transport. Syst. 2020, 21, 3536–3550. [Google Scholar] [CrossRef]
  20. Silvister, S.; Komandur, D.; Kokate, S.; Khochare, A.; More, U.; Musale, V.; Joshi, A. Deep Learning Approach to Detect Potholes in Real-Time Using Smartphone. In Proceedings of the 2019 IEEE Pune Section International Conference (PuneCon), Pune, India, 18–20 December 2019; IEEE: Pune, India, 2019; pp. 1–4. [Google Scholar]
  21. Thiruppathiraj, S.; Kumar, U.; Buchke, S. Automatic Pothole Classification and Segmentation Using Android Smartphone Sensors and Camera Images with Machine Learning Techniques. In Proceedings of the 2020 IEEE Region 10 Conference (Tencon), Osaka, Japan, 16–19 November 2020; IEEE: Osaka, Japan, 2020; pp. 1386–1391. [Google Scholar]
  22. Kang, B.; Choi, S. Pothole Detection System Using 2D LiDAR and Camera. In Proceedings of the 2017 Ninth International Conference on Ubiquitous and Future Networks (ICUFN), Milan, Italy, 4–7 July 2017; IEEE: Milan, Italy, 2017; pp. 744–746. [Google Scholar]
  23. Wu, J.; Xu, H.; Tian, Y.; Pi, R.; Yue, R. Vehicle Detection under Adverse Weather from Roadside LiDAR Data. Sensors 2020, 20, 3433. [Google Scholar] [CrossRef] [PubMed]
  24. Sakai, M.; Aoki, Y.; Takagi, M. The Support System of Firefighters by Detecting Objects in Smoke Space. In Proceedings of the Optomechatronic Machine Vision, Sapporo, Japan, 5–7 December 2005; SPIE: Sapporo, Japan, 2005; Volume 6051, pp. 191–201. [Google Scholar]
  25. Xiao, A.; Huang, J.; Guan, D.; Cui, K.; Lu, S.; Shao, L. PolarMix: A General Data Augmentation Technique for LiDAR Point Clouds. Adv. Neural Inf. Process. Syst. 2022, 35, 11035–11048. [Google Scholar]
  26. Larson, J.; Trivedi, M. Lidar Based Off-Road Negative Obstacle Detection and Analysis. In Proceedings of the 2011 14th International IEEE Conference on Intelligent Transportation Systems (ITSC), Washington, DC, USA, 5–7 October 2011; IEEE: Washington, DC, USA, 2011; pp. 192–197. [Google Scholar]
  27. Shang, E.; An, X.; Wu, T.; Hu, T.; Yuan, Q.; He, H. LiDAR Based Negative Obstacle Detection for Field Autonomous Land Vehicles: LiDAR Based Negative Obstacle Detection for Field Autonomous Land Vehicles. J. Field Robot. 2016, 33, 591–617. [Google Scholar] [CrossRef]
  28. Park, G.; Koh, J.; Kim, J.; Moon, J.; Choi, J.W. LiDAR-Based 3D Temporal Object Detection via Motion-Aware LiDAR Feature Fusion. Sensors 2024, 24, 4667. [Google Scholar] [CrossRef]
29. Chen, L.; Yang, J.; Kong, H. Lidar-Histogram for Fast Road and Obstacle Detection. In Proceedings of the 2017 IEEE International Conference on Robotics and Automation (ICRA), Singapore, 29 May–3 June 2017; IEEE: Singapore, 2017; pp. 1343–1348. [Google Scholar]
  30. Ravi, R.; Bullock, D.; Habib, A. Pavement Distress and Debris Detection Using a Mobile Mapping System with 2D Profiler LiDAR. Transp. Res. Rec. 2021, 2675, 428–438. [Google Scholar] [CrossRef]
  31. Ravi, R.; Habib, A.; Bullock, D. Pothole Mapping and Patching Quantity Estimates Using LiDAR-Based Mobile Mapping Systems. Transp. Res. Rec. 2020, 2674, 124–134. [Google Scholar] [CrossRef]
  32. Song, E.; Jeong, S.; Hwang, S.-H. Edge-Triggered Three-Dimensional Object Detection Using a LiDAR Ring. Sensors 2024, 24, 2005. [Google Scholar] [CrossRef]
33. Ruan, S.; Li, S.; Lu, C.; Gu, Q. A Real-Time Negative Obstacle Detection Method for Autonomous Trucks in Open-Pit Mines. Sustainability 2023, 15, 120. [Google Scholar] [CrossRef]
  34. Atencio, E.; Plaza-Muñoz, F.; Muñoz-La Rivera, F.; Lozano-Galant, J.A. Calibration of UAV Flight Parameters for Pavement Pothole Detection Using Orthogonal Arrays. Autom. Constr. 2022, 143, 104545. [Google Scholar] [CrossRef]
  35. Pan, Y.; Zhang, X.; Cervone, G.; Yang, L. Detection of Asphalt Pavement Potholes and Cracks Based on the Unmanned Aerial Vehicle Multispectral Imagery. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2018, 11, 3701–3712. [Google Scholar] [CrossRef]
  36. Goodin, C.; Carrillo, J.; Monroe, J.G.; Carruth, D.W.; Hudson, C.R. An Analytic Model for Negative Obstacle Detection with Lidar and Numerical Validation Using Physics-Based Simulation. Sensors 2021, 21, 3211. [Google Scholar] [CrossRef] [PubMed]
  37. Vandapel, N.; Donamukkala, R.R.; Hebert, M. Unmanned Ground Vehicle Navigation Using Aerial Ladar Data. Int. J. Robot. Res. 2006, 25, 31–51. [Google Scholar] [CrossRef]
  38. Li, J.; Deng, G.; Luo, C.; Lin, Q.; Yan, Q.; Ming, Z. A Hybrid Path Planning Method in Unmanned Air/Ground Vehicle (UAV/UGV) Cooperative Systems. IEEE Trans. Veh. Technol. 2016, 65, 9585–9596. [Google Scholar] [CrossRef]
  39. Deng, S.; Xu, Q.; Yue, Y.; Jing, S.; Wang, Y. Individual Tree Detection and Segmentation from Unmanned Aerial Vehicle-LiDAR Data Based on a Trunk Point Distribution Indicator. Comput. Electron. Agric. 2024, 218, 108717. [Google Scholar] [CrossRef]
  40. Jiang, A.; Ahamed, T. Navigation of an Autonomous Spraying Robot for Orchard Operations Using LiDAR for Tree Trunk Detection. Sensors 2023, 23, 4808. [Google Scholar] [CrossRef]
  41. Matthies, L.; Grandjean, P. Stochastic Performance Modeling and Evaluation of Obstacle Detectability with Imaging Range Sensors. IEEE Trans. Robot. Autom. 1994, 10, 783–792. [Google Scholar] [CrossRef]
  42. Verbraecken, J.; Van de Heyning, P.; De Backer, W.; Van Gaal, L. Body Surface Area in Normal-Weight, Overweight, and Obese Adults. A Comparison Study. Metabolism 2006, 55, 515–524. [Google Scholar] [CrossRef]
Figure 1. RoboSense LiDAR field of view (FOV) angle. Image from RoboSense official website: (a) Helios-32 LiDAR field of view angle and (b) Helios-16 LiDAR field of view angle.
Figure 2. Diagram of the blind spot in the traditional upright LiDAR setup method.
Figure 3. Diagram of laser beam density in the traditional upright LiDAR setup method.
Figure 4. The disadvantages of the traditional upright setup method in real environments.
Figure 5. (a) Range of the LIVOX MID-70 LiDAR’s field of view (FOV); (b) horizontal field of view of the LIVOX MID-70 LiDAR; (c) vertical field of view of the LIVOX MID-70 LiDAR.
Figure 6. LiDAR setup method: (a) side view and (b) top view.
Figure 7. Effect of LiDAR tilt angle on blind spot and furthest detection range.
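To make the geometry behind Figures 2 and 7 concrete, the short sketch below computes the near blind spot and the furthest ground intersection of a downward-tilted LiDAR from its mounting height, tilt angle, and vertical field of view. The mounting height of 0.8 m is an assumed value for illustration; the 70.4° circular FOV is that of the LIVOX MID-70. With these inputs, a 40° tilt yields a blind spot of roughly 0.21 m, consistent with the value reported for this setup.

```python
import math

def ground_intersections(h, tilt_deg, fov_deg):
    """Near and far ground intersections of a downward-tilted LiDAR.

    h        : mounting height above ground (m) -- assumed value
    tilt_deg : downward tilt of the optical axis from horizontal (deg)
    fov_deg  : vertical field of view of the sensor (deg)

    The lowest ray points (tilt + fov/2) below horizontal and sets the
    blind spot; the highest ray points (tilt - fov/2) below horizontal
    and sets the furthest ground return (unbounded if it rises above
    the horizon).
    """
    lo = math.radians(tilt_deg + fov_deg / 2.0)
    hi = math.radians(tilt_deg - fov_deg / 2.0)
    near = h / math.tan(lo)
    far = h / math.tan(hi) if hi > 0 else float("inf")
    return near, far

# Illustrative values: h = 0.8 m (assumed), MID-70 circular FOV = 70.4 deg.
print(ground_intersections(0.8, 40.0, 70.4))   # -> (~0.21 m, ~9.5 m)
```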
Figure 8. Laser point cloud distance jump at negative obstacles.
Figure 9. Point cloud of negative obstacles.
Figure 10. Relationship between point spacing and the angle θ: (a) without negative obstacles and (b) with negative obstacles.
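Figures 8–10 rest on the observation that laser rays passing over the front edge of a ditch skip the occluded region, so the spacing between consecutive returns along a scan line jumps abruptly. Below is a minimal sketch of this cue with an assumed ratio threshold; the paper’s actual thresholds are not reproduced here.

```python
import numpy as np

def spacing_jump_indices(scan_line, ratio=3.0):
    """Flag candidate negative-obstacle gaps along one ordered scan line.

    scan_line: (N, 3) array of points ordered by scan angle.
    A spacing much larger than the typical (median) spacing suggests
    rays jumping from the front edge of a ditch to its rear wall.
    `ratio` is an illustrative threshold, not a value from the paper.
    """
    spacing = np.linalg.norm(np.diff(scan_line, axis=0), axis=1)
    typical = np.median(spacing)
    # index i marks a jump between point i and point i + 1
    return np.where(spacing > ratio * typical)[0]
```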
Figure 11. Flowchart of negative obstacle detection algorithm.
Figure 12. Local smoothness illustration: (a) Pj is located on a surface and (b) Pj is located on a line.
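Figure 12 distinguishes points lying on smooth surfaces from points sitting on lines or edges. A common way to quantify this is the LOAM-style local-smoothness (curvature) score; the sketch below shows that standard formulation as an illustration, not necessarily the authors’ exact definition.

```python
import numpy as np

def local_smoothness(scan_line, j, half_window=5):
    """LOAM-style curvature of point j along one ordered scan line.

    Small values: P_j lies on a smooth surface (Figure 12a).
    Large values: P_j sits on a line/edge, e.g. a ditch lip (Figure 12b).
    """
    lo, hi = max(0, j - half_window), j + half_window + 1
    window = scan_line[lo:hi]             # neighbourhood S (includes P_j)
    pj = scan_line[j]
    diff_sum = (pj - window).sum(axis=0)  # sum over k of (P_j - P_k)
    return np.linalg.norm(diff_sum) / (len(window) * np.linalg.norm(pj))
```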
Figure 13. Constraint relationship when fitting a straight line to points on the rear wall behind a negative obstacle.
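For the constraint in Figure 13, one simple realization is an ordinary least-squares line fit to the candidate rear-wall points in the vertical plane: a near-vertical, low-residual fit supports the negative-obstacle hypothesis. The sketch below is illustrative; the axis convention and any acceptance thresholds are assumptions on our part.

```python
import numpy as np

def rear_wall_line_fit(wall_pts):
    """Least-squares line through candidate rear-wall points.

    wall_pts: (N, 3) points with columns (x: horizontal range, y, z: height).
    Fitting x as a function of z keeps the fit numerically stable for
    near-vertical walls: |slope| close to 0 means a near-vertical wall.
    Returns (slope, intercept, mean absolute residual).
    """
    x, z = wall_pts[:, 0], wall_pts[:, 2]
    slope, intercept = np.polyfit(z, x, 1)
    residual = np.abs(x - (slope * z + intercept)).mean()
    return slope, intercept, residual
```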
Figure 14. Point spacing and height of negative obstacles at different distances: (a) 1 m distance; (b) 2 m distance; (c) 4 m distance; and (d) 8 m distance.
Figure 15. Experimental platform: (a) front view and (b) side view.
Figure 16. Visual range comparison experiment of the proposed setup: (a) schematic diagram of the vehicle and obstacles and (b) point clouds of the obstacles.
Figure 17. Ground point cloud under different mounting methods: (a) point cloud with the inclined setup method; (b) point cloud with the traditional upright setup method; and (c) point cloud with the inclined mechanical LiDAR.
Figure 18. Four different negative obstacles: (a) negative obstacle on a semi-structured orchard path; (b) roadside sewer; (c) gully in the orchard; and (d) negative obstacle on the campus.
Figure 19. Detection results of the algorithm in the four scenarios: (a) negative obstacle O1 test result; (b) negative obstacle O2 test result; (c) negative obstacle O3 test result; and (d) negative obstacle O4 test result.
Table 1. Point distribution on obstacles under the tilted setups and the traditional setup.

Setup Method | S1 | S2 | S3 | S4
Traditional upright mount | 0 | 36 | 22 | 18
Tilted mechanical LiDAR | 105 | 82 | 27 | 25
Tilted solid-state mount | 835 | 551 | 264 | 161
Table 2. Statistics of negative obstacle detection results.

Negative Obstacle Scenario | N0 | N1 | N2 | Psuccess | Pfalse | Pmiss | Dmax (m) | Average Time Spent (ms)
O1 | 525 | 490 | 30 | 93.3% | 5.7% | 6.7% | 8.0 | 16.2
O2 | 684 | 635 | 36 | 92.8% | 5.3% | 7.2% | 8.5 | 17.1
O3 | 396 | 352 | 32 | 88.9% | 8.1% | 11.1% | 6.9 | 18.9
O4 | 384 | 368 | 10 | 95.8% | 2.6% | 4.2% | 8.4 | 16.8
Average | 497 | 461 | 27 | 92.7% | 5.4% | 7.3% | 8.0 | 17.3
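The percentages in Table 2 are consistent with reading N0 as the total number of test frames per scenario, N1 as the frames with a correct detection, and N2 as the frames with a false detection, so that Psuccess = N1/N0, Pfalse = N2/N0, and Pmiss = (N0 − N1)/N0. This column reading is our inference from the numbers, not a definition quoted from the paper; the quick check below reproduces every row.

```python
def detection_rates(n0, n1, n2):
    """Psuccess, Pfalse, Pmiss under the column reading described above."""
    return n1 / n0, n2 / n0, (n0 - n1) / n0

rows = {"O1": (525, 490, 30), "O2": (684, 635, 36),
        "O3": (396, 352, 32), "O4": (384, 368, 10)}
for name, row in rows.items():
    print(name, ["%.1f%%" % (100 * p) for p in detection_rates(*row)])
# O1 -> 93.3%, 5.7%, 6.7%  (and likewise for O2-O4), matching Table 2.
```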