Article

Field Evaluation of an Autonomous Mobile Robot for Navigation and Mapping in Forest

by Diego Tiozzo Fasiolo, Lorenzo Scalera, Eleonora Maset and Alessandro Gasparetto *

Polytechnic Department of Engineering and Architecture, University of Udine, Via delle Scienze 206, 33100 Udine, Italy

* Author to whom correspondence should be addressed.
Robotics 2025, 14(7), 89; https://doi.org/10.3390/robotics14070089
Submission received: 20 May 2025 / Revised: 23 June 2025 / Accepted: 25 June 2025 / Published: 27 June 2025
(This article belongs to the Special Issue Autonomous Robotics for Exploration)

Abstract

This paper presents a mobile robotic system designed for autonomous navigation and forest and tree trait estimation, with a focus on the location of individual trees and the diameter of their trunks. The system integrates light detection and ranging data and images using a framework based on simultaneous localization and mapping (SLAM) and a deep learning model for trunk segmentation and tree keypoint detection. Field experiments conducted in a wooded area in Udine, Italy, using a skid-steered mobile robot demonstrate the effectiveness of the system in navigating while avoiding obstacles, even in cases where the Global Navigation Satellite System signal is not reliable. The results highlight that the proposed robotic system is capable of autonomously generating maps of forests as point clouds with minimal drift, thanks to the loop closure strategy integrated into the SLAM algorithm, and of estimating tree traits automatically.

1. Introduction

Accurate mapping and monitoring of forest ecosystems are essential for sustainable forest management, carbon stock estimation, and biodiversity conservation [1,2]. Traditional forest inventory methods are mainly based on manual measurements of tree parameters, such as tree height, diameter at breast height (DBH), and volume of the canopy [3]. These approaches provide valid data, but they are time-consuming, labor-intensive, and subject to human error [4]. In contrast, integrating robotics, sensing technologies, and artificial perception enables more efficient data collection pipelines [5]. For example, recent studies have demonstrated the potential of terrestrial laser scanning (TLS) for automating forest data collection. TLS is capable of generating high-resolution 3D point clouds that capture detailed structural information about trees [6,7]. Despite its advantages, TLS has several drawbacks, including the need to acquire data from multiple static positions to minimize occlusions and constrained mobility in complex environments, which could limit its application in extensive forest monitoring [8].
Portable laser scanners (PLSs) capture 3D data using laser scanning sensors carried by an operator who moves within the area of interest, facilitating the mapping of the environment. PLSs address several drawbacks of TLS by enabling continuous data acquisition while in motion, reducing occlusions through multiple perspectives, and increasing scalability for large-scale forest monitoring [9]. For example, to generate 3D point clouds of forest trees, handheld laser scanners were employed in [10,11], whereas a backpack-mounted device was used in [12]. These studies demonstrate that point clouds acquired by PLSs can be exploited to automatically identify the location of individual trees and their DBH from the collected data. For these reasons, PLSs offer a flexible and efficient solution for forest monitoring. Similarly, handheld cameras, combined with photogrammetry and structure-from-motion (SfM) techniques, can be used to reconstruct forests from a sequence of images. This approach provides a cost-effective alternative to laser scanning technologies, as shown in [13], but it also poses several challenges, such as the difficulty in ensuring complete coverage and reconstruction of the area. However, PLSs and handheld cameras require a human operator to carry the sensor, which can introduce subjectivity in data collection [14].
In contrast, robotic systems equipped with sensors can repeatedly survey outdoor areas without human intervention, ensuring consistent data acquisition [15]. For this reason, robotic platforms offer a promising alternative for forest inventory analysis [5]. Table 1 provides an overview of the state-of-the-art mobile robots developed for forest inventory, highlighting the type of mobile platform, sensor configuration, processing approach, and output data. A notable example is the all-terrain vehicle presented in [16], which takes advantage of data from a Global Navigation Satellite System (GNSS) receiver, an inertial measurement unit (IMU), and a light detection and ranging (LiDAR) sensor, to create a 2D map with tree locations in the forest. Moreover, the skid-steered mobile robots presented in [17,18,19] generate 3D maps of forests as point clouds, which are then used for DBH estimation, relying on data from a 3D LiDAR sensor. Similar approaches have also been tested in forests using legged mobile robots, as demonstrated in [20]. However, the robots mentioned in these papers are operated via remote control and lack complete capability for autonomous navigation in a forestry environment.
Differently from previous examples, the tracked mobile robot in [21] calculates an optimal sequence of waypoints for navigation in a forest-like environment. However, the approach presented in [21] assumes that trees are arranged in rows, a condition that is rarely representative of natural forest environments. Identifying navigable terrain is crucial for autonomous robots, as it allows them to determine safe routes, plan efficient paths, and avoid obstacles [22,23]. The work in [24] integrates a framework into a wheeled mobile robot to navigate and explore unknown forest environments, using a Gaussian process model to efficiently identify free space. A self-supervised learning approach for predicting traversable paths for wheeled mobile robots in forests is presented in [25]. Similarly, the mobile robot in [26] leverages visual features from RGB images to enhance global path planning in wooded environments by predicting traversability costs up to 100 m ahead. Furthermore, the work in [27] introduces a dataset gathered through a quadrupedal robot during autonomous navigation. Although these mobile robots have autonomous navigation capabilities in forest-like environments, they are not specifically designed to generate a detailed digital model that includes tree locations, DBH, or other tree parameters. In contrast, the quadrupedal robot in [28] is capable of estimating tree traits from point clouds generated during autonomous missions.
Table 1. Overview of cutting-edge mobile robots for forest inventory.

| First Author | Ref. | Year | Robotic System | Autonomy | Onboard Sensors | Approach for Tree Trait Estimation | Output |
|---|---|---|---|---|---|---|---|
| Tang | [16] | 2015 | Skid-steered robot | | LiDAR (FARO Focus3D 120S) | Improved maximum likelihood estimation [29] | 2D tree locations |
| Pierzchala | [18] | 2018 | Skid-steered robot (Superdroid 4WD) | | LiDAR (Velodyne VLP-16) | Point cloud-based (RANSAC plane-fitting, DBSCAN clustering, circle fitting) | 3D map, DBH |
| Tremblay | [17] | 2020 | Skid-steered robot (Clearpath Husky) | | LiDAR (Velodyne HDL32E) | Point cloud-based (manual segmentation, cylinder fitting) | 3D map, DBH |
| Da Silva | [30] | 2021 | Skid-steered robot | | Camera (GoPro Hero6, FLIR M232, ZED Stereo, Allied Mako G-125) | Image-based (deep learning) | Tree bounding boxes |
| Da Silva | [31] | 2022 | Skid-steered robot | | Camera (OAK-D) | Image and depth-based (deep learning) | 2D tree locations |
| Freißmuth | [28] | 2024 | Quadruped robot (ANYmal) | | LiDAR (Velodyne VLP-16, Hesai QT64) | Point cloud-based (CSF, Voronoi-based clustering, cylinder fitting) | 3D map, DBH, height |
| Malladi | [20] | 2024 | Quadruped robot (ANYmal) | | LiDAR (Velodyne VLP-16) | Point cloud-based (CSF, density-based clustering, cylinder fitting) | 3D map, DBH |
| Sheng | [19] | 2024 | Skid-steered robot (AgileX Scout Mini) | | LiDAR (Velodyne VLP-16) | Point cloud-based (CSF, Euclidean clustering, circle fitting) | 3D map, DBH |
| Proposed approach | | 2025 | Skid-steered robot (AgileX Scout 2.0) | | LiDAR (Velodyne VLP-16), camera (Intel RealSense D435) | Based on images and LiDAR scans (deep learning, LiDAR data projection, DBSCAN clustering) | 3D map, 3D tree locations, DBH |
A variety of approaches can be found in the literature to optimize tree detection and the automatic extraction of tree parameters from point clouds. Commonly, the first step involves separating the points that belong to the ground from those associated with trees, for example by means of the Cloth Simulation Filter (CSF), exploited in [10,13,19,32]. The CSF is a ground segmentation technique that models a virtual cloth falling onto the point cloud, allowing it to distinguish ground points from above-ground objects such as trees and vegetation. Then, the segmentation of individual trees from point cloud data is generally performed by means of clustering approaches. Euclidean clustering, which groups points based on their spatial proximity, is applied in [33]. Furthermore, density-based clustering, which identifies clusters as regions with high point density separated by lower-density areas, is used in [10]. These two clustering methods are effective for segmenting well-separated tree trunks, but may struggle to distinguish adjacent tree trunks if they are closely packed. In [34], the authors employ the Density-Based Spatial Clustering of Applications with Noise (DBSCAN) method, which extends density-based clustering with the capability of identifying noisy points and outliers.
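As an illustration of this kind of processing chain, the following Python sketch removes an approximate ground level and clusters a breast-height slice of the point cloud with DBSCAN. The height-percentile ground filter is a simplified stand-in for the CSF, and all parameter values are illustrative assumptions rather than settings taken from the cited works.

```python
# Minimal sketch of ground removal and trunk clustering on a forest point cloud.
# The height-based ground filter is a simplified stand-in for the Cloth
# Simulation Filter (CSF); parameter values are illustrative assumptions.
import numpy as np
from sklearn.cluster import DBSCAN

def cluster_trunks(points, ground_percentile=5.0, slice_min=1.2, slice_max=1.4,
                   eps=0.3, min_samples=10):
    """points: (N, 3) array of x, y, z coordinates in meters."""
    # Crude ground estimate: take a low height percentile as the ground level.
    ground_z = np.percentile(points[:, 2], ground_percentile)

    # Keep a horizontal slice around breast height (about 1.3 m above the ground).
    z_rel = points[:, 2] - ground_z
    slice_mask = (z_rel > slice_min) & (z_rel < slice_max)
    slice_pts = points[slice_mask]

    # Cluster the slice in the XY plane; each cluster is a candidate trunk.
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(slice_pts[:, :2])
    return slice_pts, labels  # label -1 marks noise/outlier points

# Example: centroids of the candidate trunks.
# slice_pts, labels = cluster_trunks(cloud)
# centers = [slice_pts[labels == k, :2].mean(axis=0) for k in set(labels) if k != -1]
```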
In recent years, pattern recognition and deep learning approaches have facilitated the automatic segmentation of trees from dense point clouds, addressing challenges such as occlusions and variations in tree morphology [35]. For example, the PointNet++ [36] deep learning architecture, specifically designed for processing 3D point clouds, extracts features from local neighborhoods and provides semantic labels for each point. PointNet++ is used in [12] to segment tree trunks and branches, and in [7] to classify forest point clouds into four specific classes: stems, vegetation, terrain, and coarse woody debris. Similarly, the encoder–decoder deep learning architecture U-Net, extended to work with 3D point clouds by leveraging sparse 3D convolutions, is applied for forest tree segmentation in [11,37].
Once tree trunks are identified, the next step for retrieving the DBH from 3D point cloud data usually involves circle fitting. For this purpose, the least-squares circle fitting method is adopted in [13,32]. Moreover, in [19] the DBH estimation performance is improved by using LiDAR intensity as a weight for the least-squares fitting. In addition, the Random Sample Consensus (RANSAC) scheme is introduced for circle fitting in [7,10,33] to handle noisy points and remove outliers. The DBH can also be estimated by fitting different primitives, such as cylinders, as shown in [6].
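The circle-fitting step can be sketched as follows: an algebraic least-squares (Kåsa) fit provides the circle parameters, and an optional RANSAC-style loop, in the spirit of the works cited above, rejects outlying points. Thresholds and iteration counts are illustrative assumptions.

```python
# Sketch of least-squares circle fitting (Kåsa method) for DBH estimation from a
# 2D cross-section of a trunk, with a simple RANSAC-style loop to reject outliers.
import numpy as np

def fit_circle_lsq(xy):
    """Algebraic least-squares circle fit: returns (cx, cy, r) for (N, 2) points."""
    x, y = xy[:, 0], xy[:, 1]
    A = np.column_stack([2 * x, 2 * y, np.ones(len(x))])
    b = x**2 + y**2
    cx, cy, c = np.linalg.lstsq(A, b, rcond=None)[0]
    r = np.sqrt(c + cx**2 + cy**2)
    return cx, cy, r

def fit_circle_ransac(xy, n_iter=200, inlier_tol=0.02, seed=0):
    """Fit circles on random minimal samples and keep the largest inlier set."""
    rng = np.random.default_rng(seed)
    best_inliers = None
    for _ in range(n_iter):
        sample = xy[rng.choice(len(xy), size=3, replace=False)]
        cx, cy, r = fit_circle_lsq(sample)
        resid = np.abs(np.hypot(xy[:, 0] - cx, xy[:, 1] - cy) - r)
        inliers = resid < inlier_tol
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    cx, cy, r = fit_circle_lsq(xy[best_inliers])
    return cx, cy, 2.0 * r  # center and diameter (DBH) in meters
```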
Alternatively, deep learning techniques applied on visual data acquired by mobile robots can detect trunk locations in each image frame. For instance, an image-based trunk detection system that exploits a combination of visible and thermal images for forestry mobile robotics is presented in [30]. Another example is given by the robot presented in [31], which leverages YOLOv7 [38] to provide a 2D map with tree locations. Moreover, the Cascade Mask R-CNN architecture [39], pre-trained on synthetic data and fine-tuned with a small dataset of real forest images, is employed to segment tree trunks in [40]. Additionally, the artificial neural network in [40] predicts a set of tree keypoints as feature representations extracted from the images captured by a handheld ZED stereo camera, which can be exploited to estimate tree locations and diameters. The method in [40] incorporates geometric data from a depth sensor to retrieve DBH in meters, but it is not integrated into an autonomous mobile robot. Alternatively, geometric information can also be retrieved by fusing data from a LiDAR sensor and an RGB camera, as shown in [41]. Nonetheless, the robot in [41] is only capable of detecting different types of surfaces in a forest and therefore is not suitable for a complete forest inventory.
In this paper, we present a skid-steered mobile robotic system capable of autonomous navigation, 3D reconstruction of the forestry environment, and tree parameter estimation. The system is based on a simultaneous localization and mapping (SLAM) algorithm for navigation and mapping. Multiple odometry sources are fused together to overcome the limitations of the individual sensors: LiDAR, IMU, GNSS, and wheel odometry. The robot integrates global and local path planning for forest navigation and obstacle avoidance while navigating on rough terrain. Furthermore, an approach based on combined image and LiDAR data is implemented for estimating tree locations and DBH at the metric scale. This tree identification approach extends the framework proposed in [40], which works on single image frames, by also utilizing LiDAR scans and the pose of the robot provided by the SLAM algorithm. This extension enables the determination of tree locations in the map reference frame and the estimation of DBH. A video showing the capabilities of the robotic system is provided in the Supplementary Material (Video S1).
The proposed system is validated with a Scout 2.0 mobile platform by AgileX (Shenzhen, China). The experimental results show that the robot is capable of autonomously navigating in the forest while adjusting its trajectory to avoid obstacles such as tree trunks. Moreover, the proposed approach effectively detects tree locations and estimates their diameters. To summarize, the main contributions of this work are the following:
  • An autonomous navigation approach for a wheeled mobile robot operating in a forestry environment;
  • A method for detecting tree locations and estimating trunk diameters by combining point cloud data with an image-based artificial neural network;
  • The experimental validation of the proposed robotic system and the tree parameter estimation approach in a wooded area.
The paper is structured as follows: Section 2 describes the autonomous navigation and 3D mapping approach (Section 2.1), the data processing pipeline to identify tree locations and DBH (Section 2.2), and the mobile robot used for the experiments (Section 2.3). Section 3 presents the experimental results, and Section 4 discusses the advantages and disadvantages of the proposed system. Finally, Section 5 concludes the paper and provides future developments and research directions.

2. Materials and Methods

2.1. Autonomous Navigation and 3D Mapping

The proposed navigation strategy requires the user to provide the coordinates of the waypoints that the robot must reach to explore the forest. The approach integrates data from multiple odometry sources into a SLAM algorithm to localize the robot within the environment: a LiDAR sensor, an IMU, wheel encoders, and a GNSS receiver. Moreover, global and local path planners guarantee safe navigation among obstacles. A simplified overview of the navigation and mapping approach is shown in Figure 1.
In more detail, the LiDAR SLAM algorithm Tightly-coupled Lidar Inertial Odometry via Smoothing and Mapping (LIO-SAM) [42] estimates the location of the mobile robot while building a map of the environment in the form of a point cloud. The LIO-SAM SLAM algorithm used in this paper is specifically designed for autonomous navigation and provides odometry online at the same rate as the IMU sensor. A fixed east–north–up (ENU) reference frame is initialized by the SLAM algorithm using IMU readings. Then, LIO-SAM updates the 3D map by using new 3D LiDAR scans while the robot explores the environment. Data from the IMU sensor and wheel encoders are fused with an extended Kalman filter to estimate the pose of the robot over short time intervals between consecutive LiDAR scan matches. Additionally, GNSS data and a loop closure strategy based on the Euclidean distance between points are used to correct the accumulated drift. This SLAM algorithm has been chosen to fuse data from multiple odometry sources, overcoming the limitations of individual sensors [43]. LIO-SAM can also provide a map of the forest that is more suitable for 3D reconstruction compared to alternative approaches [43].
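A minimal sketch of this short-horizon fusion is given below, assuming a planar pose state with wheel odometry as the prediction input and the IMU yaw as the correction. The noise values are illustrative assumptions, not the parameters used on the robot.

```python
# Minimal planar EKF sketch: predict the pose from wheel odometry and correct the
# heading with the IMU yaw between consecutive LiDAR scan matches.
import numpy as np

class PlanarEKF:
    def __init__(self):
        self.x = np.zeros(3)                  # state: [x, y, yaw]
        self.P = np.eye(3) * 0.01             # state covariance
        self.Q = np.diag([0.02, 0.02, 0.01])  # process noise (assumed)
        self.R_yaw = np.array([[0.005]])      # IMU yaw measurement noise (assumed)

    def predict(self, v, w, dt):
        """Propagate the pose with wheel odometry (linear and angular velocity)."""
        x, y, yaw = self.x
        self.x = np.array([x + v * np.cos(yaw) * dt,
                           y + v * np.sin(yaw) * dt,
                           yaw + w * dt])
        F = np.array([[1.0, 0.0, -v * np.sin(yaw) * dt],
                      [0.0, 1.0,  v * np.cos(yaw) * dt],
                      [0.0, 0.0, 1.0]])
        self.P = F @ self.P @ F.T + self.Q * dt

    def update_yaw(self, yaw_meas):
        """Correct the heading with the IMU orientation (angle wrapped to [-pi, pi])."""
        H = np.array([[0.0, 0.0, 1.0]])
        innov = np.array([np.arctan2(np.sin(yaw_meas - self.x[2]),
                                     np.cos(yaw_meas - self.x[2]))])
        S = H @ self.P @ H.T + self.R_yaw
        K = self.P @ H.T @ np.linalg.inv(S)
        self.x = self.x + (K @ innov).ravel()
        self.P = (np.eye(3) - K @ H) @ self.P
```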
The waypoints can be expressed as latitude and longitude in the World Geodetic System 1984 (WGS 84) if the GNSS signal is available. As a first step, the waypoints are converted from geodetic decimal degree coordinates to Cartesian coordinates in the ENU reference frame. The heading of this frame, derived from the IMU magnetometer, is influenced by the presence of ferromagnetic objects, electric currents, and the magnetic declination of the Earth. For this reason, a calibration procedure is introduced to estimate the difference between magnetic north and true north (i.e., the direction to the geographic North Pole). This calibration begins when the SLAM algorithm, and thus the fixed ENU reference frame, are initialized. A set of raw GNSS measurements is collected along with the positions of the robot provided by the SLAM algorithm. These GNSS measurements are then expressed in ENU coordinates. Ideally, the two sets of points should be identical, but they differ primarily due to the orientation bias in the IMU readings caused by external magnetic fields. Since the correspondence between the point sets is known, the rotation matrix for the rigid transformation between them is determined by singular-value decomposition [44]. The computed rotation is then applied to the waypoints. At the end of the survey, the point cloud built by the SLAM algorithm is converted to XYZ coordinates expressed in the Earth-centered, Earth-fixed (ECEF) coordinate system. Alternatively, if the GNSS signal is unavailable in the forest, the waypoints can be provided to the robot directly as Cartesian coordinates (Figure 1).
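The conversion and alignment steps can be sketched as follows, assuming the pymap3d package for the geodetic-to-ENU conversion and an SVD-based (Kabsch) estimate of the rotation between the GNSS track and the SLAM track; function and variable names are illustrative.

```python
# Sketch of the waypoint alignment step: WGS 84 waypoints are converted to ENU,
# and the rotation between the GNSS track and the SLAM track is recovered with an
# SVD-based rigid alignment, following the least-squares formulation of [44].
import numpy as np
import pymap3d

def heading_correction(gnss_enu, slam_xy):
    """Estimate the 2D rotation aligning GNSS ENU points to SLAM points.
    Both inputs are (N, 2) arrays with known one-to-one correspondence."""
    a = gnss_enu - gnss_enu.mean(axis=0)
    b = slam_xy - slam_xy.mean(axis=0)
    U, _, Vt = np.linalg.svd(a.T @ b)
    D = np.diag([1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflections
    return Vt.T @ D @ U.T  # 2x2 rotation matrix

def waypoint_to_map(lat, lon, lat0, lon0, h0, R):
    """Convert a geodetic waypoint (degrees) to ENU, then rotate it into the map frame.
    lat0, lon0, h0 are the coordinates of the ENU origin fixed at initialization."""
    e, n, _ = pymap3d.geodetic2enu(lat, lon, h0, lat0, lon0, h0)
    return R @ np.array([e, n])
```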
The proposed navigation approach does not require a prebuilt map of the environment. Initially, the path between each pair of waypoints is computed as a straight line. This is based on the assumption that trees are obstacles small enough to be managed by the collision avoidance system integrated into the robotic platform. The Timed Elastic Band (TEB) algorithm is used for local planning and obstacle avoidance [45]. The TEB approach optimizes the initial trajectory provided by the global planner online, aiming to reduce the execution time, maintain a safe distance from obstacles, and adhere to velocity and acceleration constraints. This collision avoidance system is based on 2D maps, built online from the most recent LiDAR scans acquired by the mobile robot, that indicate the risk of traversing a specific area. Although forests often present uneven terrain, the use of 2D maps for obstacle avoidance is a reasonable assumption in many practical scenarios where the robot primarily traverses gentle slopes.
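A minimal sketch of the straight-line global path generation is shown below; the 0.25 m sampling step is an illustrative assumption, and the resulting pose sequence would then be refined online by the TEB local planner.

```python
# Sketch of the straight-line global planner: each pair of consecutive waypoints
# is joined by a segment sampled at a fixed step and expressed as (x, y, yaw) poses.
import numpy as np

def straight_line_path(waypoints, step=0.25):
    """waypoints: (M, 2) array in the map frame; returns (K, 3) poses (x, y, yaw)."""
    poses = []
    for p0, p1 in zip(waypoints[:-1], waypoints[1:]):
        seg = p1 - p0
        length = np.linalg.norm(seg)
        yaw = np.arctan2(seg[1], seg[0])
        n = max(int(np.ceil(length / step)), 1)
        for t in np.linspace(0.0, 1.0, n, endpoint=False):
            x, y = p0 + t * seg
            poses.append((x, y, yaw))
    # Close the path with the final waypoint, keeping the last heading.
    poses.append((*waypoints[-1], poses[-1][2] if poses else 0.0))
    return np.array(poses)
```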

2.2. Vision-Based Tree Trait Identification

This section describes the approach integrated into the autonomous mobile robot for tree localization and DBH computation. The approach, summarized in Figure 2, uses image frames of the camera, 3D LiDAR scans, and the pose of the robot provided by the SLAM algorithm. A pseudocode of the tree detection and DBH estimation pipeline is reported in Algorithm 1.
Algorithm 1 Tree detection and DBH estimation pipeline.
1: Input: camera stream, LiDAR data, robot pose, LiDAR-camera calibration parameters
2: Output: tree IDs, tree locations, DBH values
3: Synchronize camera stream and LiDAR data
4:      → Get synchronized camera frame and LiDAR scan
5: Filter out points farther than 40 m from the LiDAR scan
6: Run Mask R-CNN (PercepTreeV1) on camera data
7:      → Generate segmentation mask and 2D keypoints
8: Project the LiDAR scan onto the image plane
9: Filter the projected LiDAR scan using the segmentation mask
10:      → Get projected 2D points
11: Perform nearest-neighbor search between projected 2D points and 2D keypoints
12:      → Get 3D keypoints
13: Filter out 3D points farther than 10 m from the robot pose
14: Compute DBH using filtered 3D keypoints
15: Compute optimal DBSCAN parameters
16: Perform DBSCAN clustering on the set of locations and DBH measurements
17:      → Get tree IDs, tree locations, and DBH values
At first, the object detector proposed in [40] is used for tree instance segmentation in each image frame and to identify the keypoints needed to estimate the DBH and the trunk center locations. The considered deep learning architecture is named PercepTreeV1 and is based on Mask R-CNN [39,46]. We chose this framework since it appears to be an established solution to this problem. The PercepTreeV1 artificial neural network is trained on a synthetic dataset generated by employing the Unity game engine to render realistic virtual forests and to simulate different meteorological and illumination conditions. At training time, common data augmentation techniques are employed (i.e., image resizing, horizontal flipping, shearing, and camera distortions) to improve model generalization. This architecture has three heads: the first head outputs the class and bounding box, the second head produces the segmentation mask, and the third head identifies the keypoints. In more detail, five keypoints are predicted for each tree. Among them, three are used to estimate the tree parameters: the left and right edges of the trunk, and the trunk center. Only instances with a confidence score higher than 98% are considered in this work, in order to avoid using segmentation masks that do not fully overlap with the tree trunks and could lead to incorrect parameter estimations.
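The detection step can be sketched as follows, assuming the PercepTreeV1 weights are loaded into a Detectron2-based Mask R-CNN predictor (the framework used by the work it builds on). The configuration and weight file names are placeholders, and the keypoint ordering is an assumption to be checked against the released model.

```python
# Hedged sketch of the trunk detection step with a Detectron2 predictor.
from detectron2.config import get_cfg
from detectron2.engine import DefaultPredictor

cfg = get_cfg()
cfg.merge_from_file("perceptree_v1_config.yaml")  # placeholder model config
cfg.MODEL.WEIGHTS = "perceptree_v1.pth"           # placeholder weight file
cfg.MODEL.ROI_HEADS.SCORE_THRESH_TEST = 0.98      # keep only high-confidence trees
predictor = DefaultPredictor(cfg)

def detect_trunks(image_bgr):
    """Return per-tree masks, 2D keypoints, and scores for detections above 98%."""
    inst = predictor(image_bgr)["instances"].to("cpu")
    masks = inst.pred_masks.numpy()          # (N, H, W) boolean masks
    keypoints = inst.pred_keypoints.numpy()  # (N, K, 3): x, y, score per keypoint
    scores = inst.scores.numpy()
    return masks, keypoints, scores
```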
As a contribution of this work, LiDAR scans are then utilized to retrieve 3D information, complementing the 2D keypoints, which only provide locations in the image frame. In principle, 3D information could also be obtained by using the depth sensor of an RGB-D camera. However, these data are not always reliable outdoors, because active infrared stereo systems are sensitive to sunlight, which can interfere with depth measurement, reducing accuracy and increasing noise. In this work, by exploiting the LiDAR–camera extrinsic calibration, LiDAR points within the camera’s field of view are projected onto the image, as is performed in [47]. Since the two sensors operate at different frame rates, time synchronization is conducted before performing the projection, retaining only the data acquired most closely in time. Only LiDAR data within a range of 40 m are used to reduce errors caused by noisy points. The segmentation masks predicted by the deep learning architecture filter out the projected points that do not fall on the tree trunks.
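A sketch of the projection and mask filtering is given below, assuming a pinhole camera model with intrinsic matrix K and a 4x4 LiDAR-to-camera extrinsic transform obtained from the calibration; variable names are illustrative.

```python
# Sketch of the LiDAR-to-image projection used to associate 3D points with the
# predicted segmentation masks.
import numpy as np

def project_lidar_to_image(points_lidar, K, T_cam_lidar, image_shape, max_range=40.0):
    """points_lidar: (N, 3). Returns pixel coords (M, 2) and the matching 3D points."""
    # Range filter: points farther than 40 m are discarded, as in the pipeline.
    rng = np.linalg.norm(points_lidar, axis=1)
    pts = points_lidar[rng < max_range]

    # Transform to the camera frame and keep points in front of the camera.
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])
    pts_cam = (T_cam_lidar @ pts_h.T).T[:, :3]
    front = pts_cam[:, 2] > 0.1
    pts, pts_cam = pts[front], pts_cam[front]

    # Pinhole projection and image-bounds check.
    uv = (K @ pts_cam.T).T
    uv = uv[:, :2] / uv[:, 2:3]
    h, w = image_shape[:2]
    inside = (uv[:, 0] >= 0) & (uv[:, 0] < w) & (uv[:, 1] >= 0) & (uv[:, 1] < h)
    return uv[inside], pts[inside]

def filter_by_mask(uv, pts, mask):
    """Keep only projected points falling inside one predicted trunk mask."""
    idx = mask[uv[:, 1].astype(int), uv[:, 0].astype(int)]
    return uv[idx], pts[idx]
```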
Then, a nearest-neighbor search is executed to find the projected LiDAR points that are the closest to the trunk center keypoints in the images. For each trunk center keypoint, the five nearest projected points are selected to estimate the tree location. Specifically, the median of the corresponding 3D points in the LiDAR scan determines the tree position in the robot reference frame. Only tree locations within 10 m of the robot are considered, as trees located farther away typically yield too few LiDAR points within their corresponding masks, and the quality of the masks predicted by the PercepTreeV1 neural network tends to degrade with distance. Similarly, the 3D points associated with the left- and right-trunk-edge keypoints are retrieved to estimate the DBH. In this case, only the 3D points within a 1-meter radius of the estimated tree location are considered. The DBH is then computed in meters using the 3D points retrieved from the nearest-neighbor search associated with the left and right keypoints. So far, the coordinates of the tree locations are expressed in the robot reference frame. To obtain tree locations in the map reference frame, the most recent pose of the robot, estimated by the SLAM algorithm, is combined with the trunk location expressed in the robot reference frame.
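The association step can be sketched as follows, using a k-d tree for the nearest-neighbor search; the transform T_map_robot stands for the current SLAM pose, and the variable names are assumptions.

```python
# Sketch of the keypoint-to-LiDAR association: for each 2D keypoint, the nearest
# projected LiDAR points are found with a k-d tree, and the median of their 3D
# counterparts gives the metric estimate; DBH is the left-right edge distance.
import numpy as np
from scipy.spatial import cKDTree

def keypoint_to_3d(keypoint_uv, uv, pts_3d, k=5):
    """Median 3D point of the k projected LiDAR points nearest to a 2D keypoint."""
    _, idx = cKDTree(uv).query(keypoint_uv, k=k)
    return np.median(pts_3d[idx], axis=0)

def estimate_tree(center_uv, left_uv, right_uv, uv, pts_3d, T_map_robot,
                  max_range=10.0, edge_radius=1.0):
    center = keypoint_to_3d(center_uv, uv, pts_3d)
    if np.linalg.norm(center[:2]) > max_range:
        return None  # too far from the robot for a reliable estimate
    left = keypoint_to_3d(left_uv, uv, pts_3d)
    right = keypoint_to_3d(right_uv, uv, pts_3d)
    # Discard edge points that are not close to the estimated trunk location.
    if (np.linalg.norm(left - center) > edge_radius or
            np.linalg.norm(right - center) > edge_radius):
        return None
    dbh = np.linalg.norm(left - right)                       # diameter in meters
    center_map = (T_map_robot @ np.append(center, 1.0))[:3]  # robot -> map frame
    return center_map, dbh
```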
At the end of the survey, multiple predictions are obtained for each tree (i.e., each prediction is made independently for each image, resulting in multiple separate predictions). To determine which predictions correspond to the same tree, the DBSCAN clustering algorithm, known for its robustness to outliers, is used. This density-based clustering method links regions containing sufficiently high point densities, while labeling as outliers those points that reside in low-density areas (specifically, points whose nearest neighbors are too distant). Notably, the quality of the DBSCAN clustering results depends on the choice of two parameters. The first parameter is the number of samples in the neighborhood of a point, including the point itself, required for the point to be considered as part of a dense region. Setting a higher value for the minimum number of samples makes DBSCAN detect only tightly packed clusters, while a lower value allows it to identify sparser clusters. In this work, the minimum number of samples is set to 4 on the basis of preliminary tests. The other main parameter of the DBSCAN algorithm is ε, which defines the maximum distance between two points for them to be considered neighbors. Once the minimum number of samples is set, it is possible to compute an optimal value for ε [48]. In more detail, the Kneedle algorithm, described in [49], is used to find the optimal ε. The algorithm finds this value by looking for a “knee”, i.e., the point where further changes to the parameter yield diminishing returns in terms of performance improvement relative to the cost [49]. After selecting these parameters and executing the DBSCAN algorithm, the tree locations and DBH are computed using the median of the measurements within each cluster. The whole tree location and trait estimation pipeline is performed in a postprocessing stage.
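A sketch of this clustering stage is given below, assuming the scikit-learn and kneed packages: the k-distance curve is built with min_samples = 4, Kneedle selects ε at its knee, and the per-tree estimates are the medians within each cluster.

```python
# Sketch of the final clustering step: epsilon is taken from the knee of the
# sorted k-distance curve (Kneedle), with min_samples fixed to 4 as in the text.
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.neighbors import NearestNeighbors
from kneed import KneeLocator

def cluster_detections(tree_xy, min_samples=4):
    """tree_xy: (N, 2) per-frame tree location estimates in the map frame."""
    # k-distance curve: distance of each point to its (min_samples)-th neighbor,
    # where the point itself counts as the first neighbor.
    dists, _ = NearestNeighbors(n_neighbors=min_samples).fit(tree_xy).kneighbors(tree_xy)
    k_dist = np.sort(dists[:, -1])

    # Kneedle picks the point of maximum curvature of the k-distance curve.
    knee = KneeLocator(np.arange(len(k_dist)), k_dist,
                       curve="convex", direction="increasing").knee
    eps = k_dist[knee] if knee is not None else np.median(k_dist)

    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(tree_xy)
    return labels, eps

# Per-tree estimates: the median of the measurements within each cluster.
# labels, eps = cluster_detections(xy)
# trees = {k: np.median(xy[labels == k], axis=0) for k in set(labels) if k != -1}
```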

2.3. Experimental Setup

The proposed approach for autonomous navigation and forest mapping is tested on a skid-steered Scout 2.0 (AgileX, Shenzhen, China) mobile robot (Figure 3a). This robot is equipped with an NVIDIA Xavier computer (NVIDIA, Santa Clara, CA, USA) running Ubuntu 18.04 and ROS Melodic and a custom sensor suite. The sensor suite includes a Velodyne VLP-16 LiDAR sensor (Velodyne, San Jose, CA, USA), an Xsens MTi-630 9-axis IMU (Xsens, Enschede, The Netherlands), an Ardusimple simpleRTK2B Budget kit (Ardusimple, Andorra la Vella, Andorra) as the GNSS system, and an Intel RealSense D435 RGB-D camera (Intel, Santa Clara, CA, USA). The LiDAR sensor is mounted horizontally on the chassis of the robot, with the camera positioned nearby, facing forward. A 3D-printed mount secures the GNSS antenna and the IMU on top of the LiDAR sensor (Figure 3b). The Velodyne LiDAR sensor operates at 10 Hz, the RealSense camera at 30 fps, the GNSS receiver outputs solutions at 3 Hz, and the IMU provides linear accelerations, angular velocities, and sensor orientation relative to magnetic north at 200 Hz. An LTE router is integrated into the robotic system to provide Internet connectivity to the NVIDIA Xavier. When the GNSS signal and the Internet connection are available, the robot receives the differential corrections needed to achieve an RTK-FIX position solution. These corrections are obtained via the Networked Transport of RTCM via Internet Protocol (NTRIP) service of the HxGN SmartNet GNSS permanent network. The postprocessing of the data is conducted on a workstation equipped with an Intel Core i9-14900K processor, 128 GB of RAM, and an NVIDIA GeForce RTX 4070 GPU with 12 GB of VRAM, running Ubuntu 20.04.
The extrinsic calibration between the LiDAR sensor and the camera is performed using the targetless method proposed in [50]. Since a single LiDAR scan provides limited geometric and texture information, the point cloud used for the calibration is obtained by accumulating multiple LiDAR scans while compensating for changes in viewpoint and distortion in the point cloud. The initial LiDAR–camera transformation is estimated by applying RANSAC and minimizing the reprojection error, based on manually selected 2D–3D correspondences between the input images and the point cloud. Finally, the calibration is refined by applying the NID-based LiDAR camera registration method, which relies on histograms of LiDAR points and pixel intensities [50].

3. Results

The approach proposed in this work for autonomous navigation and monitoring in a forest is tested in a wooded area of the Cormor park in Udine, Friuli-Venezia Giulia, Italy (lat: 46°05′04.9″, lon: 13°11′23.9″), under full sunlight conditions. In test (1) and test (5), the robot follows a path with a zigzag pattern generated using nine waypoints. Moreover, three distinct tests, test (2), test (3), and test (4), are conducted with the robot, each involving the execution of a square path defined by four user-specified waypoints. To better appreciate the experimental results, a video of the robot during autonomous navigation in the forest is provided as Supplementary Material to this work and is available online: https://doi.org/10.5281/zenodo.15425343 (accessed on 15 May 2025).
Test (1) is devoted to assessing the repeatability of the autonomous navigation approach, so it is repeated three times by the robot without reinitializing the SLAM algorithm, ensuring that the paths are estimated relative to the same reference frame (Figure 4a). The directed Hausdorff distance (DHD), calculated as the farthest distance from each point in one path to the closest point in the other path, is used as a measure of repeatability. The distance between the paths of test (1.1) and test (1.2) is 0.43 m, between the paths of test (1.1) and test (1.3) is 0.48 m, and between the paths of test (1.2) and test (1.3) is 0.43 m.
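For reference, the DHD can be computed directly with SciPy on the estimated 2D robot positions, as in the following sketch.

```python
# Sketch of the repeatability metric: directed Hausdorff distance between two paths.
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def path_dhd(path_a, path_b):
    """Directed Hausdorff distance from path_a to path_b, both (N, 2) arrays in meters."""
    d, _, _ = directed_hausdorff(path_a, path_b)
    return d

# Example with two straight paths offset by 0.05 m:
# a = np.column_stack([np.linspace(0, 10, 100), np.zeros(100)])
# b = a + np.array([0.0, 0.05])
# print(path_dhd(a, b))  # approximately 0.05
```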
In test (2) the waypoints generate a square path of side 10 m, in test (3) of side 15 m, and in test (4) of side 20 m. A summary of the results is reported in Table 2. In test (2), the robot travels a total distance of 43.2 m in 112 s; in test (3), 66.4 m in 187 s; in test (4), 83.1 m in 211 s, and in test (5), 89.2 m in 317 s. During its autonomous navigation through the user-defined waypoints, the mobile robot demonstrates its ability to modify its planned path online to avoid obstacles, thanks to the TEB algorithm for local path planning. Figure 4b shows the obstacle avoidance behavior of the mobile robot during the autonomous navigation. The robot, depicted in the center of the image, follows the reference path while adjusting its trajectory to avoid the detected obstacles that are represented by black pixels. The global path is depicted with a red line, whereas the local path is shown by using the poses of the robot along the trajectory. The robot successfully detects and avoids large obstacles while maintaining a smooth transition back towards the global path, demonstrating the effectiveness of the local obstacle avoidance strategy.
The DHD is also chosen as a metric to measure the degree of mismatch between the planned global path and the path actually followed by the robot, since it represents the worst-case deviation from the reference path. The DHD from the modified path to the global path is 0.68 m for test (2), 1.08 m for test (3), 1.70 m for test (4), and 1.37 m for test (5). The positions of the robot during test (4) are reported in Figure 5a,b. Figure 5c illustrates the elevation of the robot relative to its initial position. The maximum elevation difference observed is 0.68 m, indicating that the robot operated on sloped terrain. Although the path planners assume planar motion, the robot successfully navigated the environment despite the presence of elevation changes. Figure 5d depicts the heading of the mobile platform, whereas Figure 5e shows its speed profile during navigation. Notably, the speed remains approximately constant during the entire test, even when the robot is actively avoiding obstacles. This indicates that the robot does not stop when encountering obstacles, but rather adjusts its trajectory to continue the survey at nearly the same speed as before the deviation. Finally, the distance from the robot position to the next waypoint is highlighted in Figure 5f.
The values of the uncertainties for the robot positions estimated by the GNSS receiver remain above 1 m throughout test (2), test (3), and test (5), whereas in test (4) the uncertainty is equal to 0.08 m. This indicates that the GNSS alone does not always provide sufficient accuracy or reliability for autonomous navigation in this context. For this reason, the robot cannot rely exclusively on GNSS for localization. Nevertheless, the system successfully completes the navigation tasks by leveraging the SLAM algorithm that integrates LiDAR, IMU, and wheel odometry data. This sensor fusion approach enables accurate pose estimation and robust navigation, effectively compensating for possible limitations of the GNSS signal.
The experimental results show that the artificial neural network can reliably detect tree trunks, even in challenging forest environments. Most of the individual trees in a range of 10 m from the robot are identified with high confidence by the PercepTreeV1 deep learning model. Figure 4c illustrates examples of tree detection, using the camera images as input. Detected trees are segmented and highlighted with colored masks. Each detected tree is assigned a confidence score, and its center and edge keypoints, used to compute trunk parameters, are marked in blue. The LiDAR scans, converted to image plane coordinates (shown in red in Figure 4c), form patterns that consistently align with the detected tree trunks. This alignment indicates that the extrinsic calibration between the camera and LiDAR sensor is sufficiently accurate for the intended application.
The results of the LIO-SAM SLAM algorithm used to localize the robot and autonomously map the surrounding environment are shown in Figure 6a, Figure 6b, Figure 6c, and Figure 6g for test (2), test (3), test (4), and test (5), respectively. The path of the robot, estimated by the SLAM algorithm, is shown with white markers. The 3D point cloud generated with the LiDAR data collected by the robot is visualized, colored by the LiDAR intensity information, without considering the points belonging to the ground, which are removed with the CSF approach. The individual trees are clearly distinguishable on the basis of the spatial distribution and density of points. The definition of the tree trunks and branches within the point cloud highlights the quality of the data acquisition procedure and the success of the SLAM algorithm, and supports its use for applications like mapping of wooded areas.
Figure 6d–f,h show the final results for test (2), test (3), test (4), and test (5) in terms of global path, local path, tree locations, and DBH values. The waypoints, which define the desired route for the mobile robot, are also marked on the map. The global path is represented by a blue dashed line, whereas the actual trajectory executed by the robot is plotted alongside it with a solid line. Green circles indicate the detected tree trunks, each assigned a unique identification number after the clustering performed by DBSCAN. The DBSCAN clustering algorithm is used with the automatically computed optimal values ε₂ = 1.11, ε₃ = 1.17, ε₄ = 2.40, and ε₅ = 1.06 for test (2), test (3), test (4), and test (5), respectively. The diameter of each green circle corresponds to the DBH estimated by the proposed approach. Moreover, the numerical values of the tree locations and DBH are reported in Table 3, Table 4, Table 5, and Table 6 for test (2), test (3), test (4), and test (5), respectively. The close alignment between the global path and the actual trajectory of the robot, as well as the distribution of detected trees, demonstrates the effectiveness of the navigation and perception systems in the accurate localization and measurement of trees.

4. Discussion

The proposed method for tree localization and DBH estimation effectively combines deep learning-based image analysis with 3D LiDAR data, achieving accurate detection and measurement of tree trunks in natural environments. One of the main strengths of the approach lies in the integration of complementary sensor modalities: the visual information captured by the camera enables reliable tree segmentation and keypoint prediction, while LiDAR data provide accurate 3D spatial measurements necessary for estimating the trunk position and diameter. The use of a deep learning model specifically trained for tree trunk detection (PercepTreeV1) ensures high-quality instance segmentation, enabling precise identification of the trunks even in cluttered scenes. Furthermore, the decision to filter detections based on a high confidence threshold (98%) minimizes the risk of incorporating false positives, contributing to the reliability of the localization and DBH estimation results.
Moreover, the accuracy of tree localization is inherently influenced by the performance of the SLAM algorithm. Any drift or pose estimation errors can propagate into the final tree position estimates, representing a potential source of systematic error despite the ability of the approach to achieve consistent relative measurements. Thanks to the use of loop closure in the SLAM algorithm, no significant drift is observed in the localization of the robot, as evidenced by the consistency of the reconstructed point cloud and the accuracy of the estimated trajectory shown in the results. Another strength is the application of the DBSCAN clustering algorithm for consolidating multiple detections collected across the survey. By automatically identifying clusters without requiring a predefined number of trees, DBSCAN offers robustness to varying tree densities and partial occlusions, which are common in real-world outdoor environments. The automatic dynamic tuning of the clustering radius using the Kneedle algorithm further enhances the generalization of the method across different field conditions without extensive manual parameter tuning.
Alongside its strengths, the method also has a number of limitations. First, the reliance on high-confidence detections may reduce recall, potentially leading to missed trees, especially in cases where lighting conditions or partial occlusions affect segmentation confidence. Some missed detections are attributed to the square-shaped path followed by the robot, which can cause trees located closer to the center of the surveyed area to fall outside the field of view of the camera or beyond the acceptable LiDAR range for reliable detection, e.g., in test (4). This limitation could be mitigated by introducing a coverage path-planning strategy, where the robot path is designed to systematically cover the entire area of interest, ensuring that all regions are observed and reducing the likelihood of missing trees. Additionally, the nearest-neighbor association between keypoints and LiDAR points assumes a sufficiently dense and accurate LiDAR scan; thus, the method may degrade in performance when operating at longer ranges, where LiDAR data become sparser and noisier. Although limiting the operational range to 10 m mitigates this effect, it inherently constrains the maximum detection range of the system. Moreover, the conversion of 3D LiDAR points into image coordinates introduces an additional dependency on accurate extrinsic calibration between the camera and the LiDAR. While the results suggest that the calibration quality is sufficient for the application, small calibration errors could still propagate into the tree localization and DBH estimation results, particularly when working with small trunks.
Another factor contributing to missed tree detections is the presence of vegetation or other plants growing over or around the lower portion of the trunk. In such cases, the trunk surface is partially or entirely covered by leaves or branches, making it difficult for the deep learning model to identify the characteristic visual patterns it was trained to detect, as shown in Figure 7b,c. Since the model has been trained predominantly on images in which the wooden trunk is clearly visible and unobstructed, its performance diminishes when the visual appearance deviates significantly from the training data. This highlights a limitation in the generalization capability of the network when encountering heavily occluded or overgrown trunks and suggests that future improvements could involve training with more diverse examples to better handle such variability. Finally, the method currently processes detections independently in each frame, relying on clustering only after data collection, which could be less efficient or prone to association errors in highly dynamic environments or when surveying large areas with significant robot movement.
Furthermore, an online registration approach could be more efficient for detecting trees in real time and limiting the amount of redundant data stored. However, in this work, we prefer to adopt an offline detection approach (using visual frames from the camera and single scans from the LiDAR sensor), since we also need an entire 3D reconstruction of the forest in the form of a point cloud for the estimation of further tree traits (e.g., height, inclination of the trunk, branches, canopy volume); we plan to obtain this in future developments of this project.

5. Conclusions

This work presented an approach for autonomous forest navigation and tree trait estimation using a skid-steered mobile robot equipped with a LiDAR sensor and a camera. The proposed system was demonstrated to be able to autonomously follow predefined waypoints, avoid obstacles, and collect data in a forest, without requiring human intervention. The fusion of camera images and 3D LiDAR scans by means of SLAM-based localization and deep learning for instance segmentation and keypoint prediction enabled the estimation of tree trunk positions and their DBH. The experimental evaluations conducted in a real wooded area confirmed the system’s effectiveness in both navigation and tree trait estimation. The use of loop closure within the SLAM algorithm ensured drift-free localization, which is essential for maintaining a consistent mapping accuracy over extended trajectories. The DBSCAN clustering allowed the aggregation of multiple detections to estimate the trunk parameters. Despite some limitations, such as missed detections due to the dense foliage and trunk occlusions and the reduced visibility of trees located far from the path of the robot, the results validate the feasibility of using an autonomous mobile platform for the task of forest inventory.
Future work will explore the use of coverage path planning to improve the detection rates, as well as the introduction of an AI-based path planner capable of estimating terrain traversability. Furthermore, we will test different sensors, e.g., LiDAR sensors with higher point resolution, and conduct experiments in more challenging forest conditions. Introducing uncertainty estimation for keypoint predictions and LiDAR associations would help quantify the confidence of DBH and position estimates, improving robustness in ambiguous or noisy cases. Moreover, extending the training dataset with images representing a wider variety of forest types, seasonal changes, and lighting conditions would improve the generalization of the deep learning model and increase performance in unseen environments. Beyond DBH, future work will target the estimation of additional traits such as tree height, canopy volume, and species classification, using either geometric analysis or multimodal learning models. Finally, connecting the generated digital inventories with geographic information systems or forest management platforms could streamline the transition from field data to insights for ecologists, foresters, and land managers.

Supplementary Materials

The following supporting information can be downloaded at: https://doi.org/10.5281/zenodo.15425343 (accessed on 15 May 2025), Video S1: supplementary video of the paper “Field evaluation of an autonomous mobile robot for navigation and mapping in forest”.

Author Contributions

Conceptualization, resources, project administration, L.S.; methodology, validation, formal analysis, investigation, data curation, visualization, D.T.F., L.S. and E.M.; software, writing—original draft preparation, D.T.F.; supervision, writing—review and editing, L.S., E.M. and A.G.; funding acquisition, L.S. and A.G. All authors have read and agreed to the published version of the manuscript.

Funding

This study was partially carried out within the PRIN 2022 project “An Artificial Intelligence approach for Forestry Robotics in Environment Survey and Inspection (AI4FOREST)” funded by the European Union Next-Generation EU (National Recovery and Resilience Plan (PNRR), Mission 4, Component 2, Investment 1.1, PRIN 2022, Code 2022LP4ASR, CUP G53D23002880001). This research was partially carried out within the Agritech National Research Center and received funding from the European Union Next-Generation EU (National Recovery and Resilience Plan (PNRR), Mission 4, Component 2, Investment 1.4, D.D. 1032 17/06/2022, CN00000022, CUP G23C22001100007). This research was partially carried out within the research activities of the consortium iNEST (Interconnected Nord-Est Innovation Ecosystem) funded by the European Union Next-Generation EU (National Recovery and Resilience Plan (PNRR), Mission 4, Component 2, Investment 1.5, D.D. 1058 23/06/2022, ECS_00000043, CUP G23C22001130006). This research was developed within the Laboratory for Big Data, IoT, Cyber Security (LABIC) funded by the Friuli Venezia Giulia region, and the Laboratory for Artificial Intelligence for Human-Robot Collaboration (AI4HRC) funded by Fondazione Friuli. This manuscript reflects only the authors’ views and opinions; neither the European Union nor the European Commission can be considered responsible for them.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data will be made available on request.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
CSF    Cloth Simulation Filter
DBH    Diameter at breast height
DBSCAN    Density-Based Spatial Clustering of Applications with Noise
DHD    Directed Hausdorff distance
ECEF    Earth-centered, Earth-fixed coordinate system
ENU    East–north–up
GNSS    Global Navigation Satellite System
IMU    Inertial measurement unit
LiDAR    Light detection and ranging
LIO-SAM    Tightly-coupled Lidar Inertial Odometry via Smoothing and Mapping
PDOP    Position Dilution of Precision
PLS    Portable laser scanner
RANSAC    Random Sample Consensus
RTK    Real-time kinematic
SLAM    Simultaneous localization and mapping
SfM    Structure from motion
TLS    Terrestrial laser scanning
WGS 84    World Geodetic System 1984

References

  1. De Frenne, P.; Lenoir, J.; Luoto, M.; Scheffers, B.R.; Zellweger, F.; Aalto, J.; Ashcroft, M.B.; Christiansen, D.M.; Decocq, G.; De Pauw, K.; et al. Forest microclimates and climate change: Importance, drivers and future research agenda. Glob. Change Biol. 2021, 27, 2279–2297.
  2. Liang, J.; Crowther, T.W.; Picard, N.; Wiser, S.; Zhou, M.; Alberti, G.; Schulze, E.D.; McGuire, A.D.; Bozzato, F.; Pretzsch, H.; et al. Positive biodiversity-productivity relationship predominant in global forests. Science 2016, 354, aaf8957.
  3. Xu, Z.; Shen, X.; Cao, L. Extraction of forest structural parameters by the comparison of Structure from Motion (SfM) and Backpack Laser Scanning (BLS) point clouds. Remote Sens. 2023, 15, 2144.
  4. Liu, T.; Sun, Y.; Wang, C.; Zhang, Y.; Qiu, Z.; Gong, W.; Lei, S.; Tong, X.; Duan, X. Unmanned Aerial Vehicle and artificial intelligence revolutionizing efficient and precision sustainable forest management. J. Clean. Prod. 2021, 311, 127546.
  5. Ferreira, J.F.; Portugal, D.; Andrada, M.E.; Machado, P.; Rocha, R.P.; Peixoto, P. Sensing and artificial perception for robots in precision forestry: A survey. Robotics 2023, 12, 139.
  6. Wilkes, P.; Disney, M.; Armston, J.; Bartholomeus, H.; Bentley, L.; Brede, B.; Burt, A.; Calders, K.; Chavana-Bryant, C.; Clewley, D.; et al. TLS2trees: A scalable tree segmentation pipeline for TLS data. Methods Ecol. Evol. 2023, 14, 3083–3099.
  7. Krisanski, S.; Taskhiri, M.S.; Gonzalez Aracil, S.; Herries, D.; Muneri, A.; Gurung, M.B.; Montgomery, J.; Turner, P. Forest structural complexity tool—An open source, fully-automated tool for measuring forest point clouds. Remote Sens. 2021, 13, 4677.
  8. Åkerblom, M.; Kaitaniemi, P. Terrestrial laser scanning: A new standard of forest measuring and modelling? Ann. Bot. 2021, 128, 653–662.
  9. Maset, E.; Cucchiaro, S.; Cazorzi, F.; Crosilla, F.; Fusiello, A.; Beinat, A. Investigating the performance of a handheld mobile mapping system in different outdoor scenarios. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2021, 43, 103–109.
  10. Liu, L.; Zhang, A.; Xiao, S.; Hu, S.; He, N.; Pang, H.; Zhang, X.; Yang, S. Single tree segmentation and diameter at breast height estimation with mobile LiDAR. IEEE Access 2021, 9, 24314–24325.
  11. Henrich, J.; van Delden, J.; Seidel, D.; Kneib, T.; Ecker, A.S. TreeLearn: A deep learning method for segmenting individual trees from ground-based LiDAR forest point clouds. Ecol. Inform. 2024, 84, 102888.
  12. Kim, D.H.; Ko, C.U.; Kim, D.G.; Kang, J.T.; Park, J.M.; Cho, H.J. Automated segmentation of individual tree structures using deep learning over LiDAR point cloud data. Forests 2023, 14, 1159.
  13. Zhu, R.; Chen, L.; Chai, G.; Chen, M.; Zhang, X. Integrating extraction framework and methods of individual tree parameters based on close-range photogrammetry. Comput. Electron. Agric. 2023, 215, 108411.
  14. Maset, E.; Scalera, L.; Beinat, A.; Visintini, D.; Gasparetto, A. Performance investigation and repeatability assessment of a mobile robotic system for 3D mapping. Robotics 2022, 11, 54.
  15. Tiozzo Fasiolo, D.; Scalera, L.; Maset, E.; Gasparetto, A. Towards autonomous mapping in agriculture: A review of supportive technologies for ground robotics. Robot. Auton. Syst. 2023, 169, 104514.
  16. Tang, J.; Chen, Y.; Kukko, A.; Kaartinen, H.; Jaakkola, A.; Khoramshahi, E.; Hakala, T.; Hyyppä, J.; Holopainen, M.; Hyyppä, H. SLAM-aided stem mapping for forest inventory with small-footprint mobile LiDAR. Forests 2015, 6, 4588–4606.
  17. Tremblay, J.F.; Béland, M.; Gagnon, R.; Pomerleau, F.; Giguère, P. Automatic three-dimensional mapping for tree diameter measurements in inventory operations. J. Field Robot. 2020, 37, 1328–1346.
  18. Pierzchała, M.; Giguère, P.; Astrup, R. Mapping forests using an unmanned ground vehicle with 3D LiDAR and graph-SLAM. Comput. Electron. Agric. 2018, 145, 217–225.
  19. Sheng, Y.; Zhao, Q.; Wang, X.; Liu, Y.; Yin, X. Tree diameter at breast height extraction based on mobile laser scanning point cloud. Forests 2024, 15, 590.
  20. Malladi, M.V.; Guadagnino, T.; Lobefaro, L.; Mattamala, M.; Griess, H.; Schweier, J.; Chebrolu, N.; Fallon, M.; Behley, J.; Stachniss, C. Tree instance segmentation and traits estimation for forestry environments exploiting LiDAR data collected by mobile robots. In Proceedings of the 2024 IEEE International Conference on Robotics and Automation (ICRA), Yokohama, Japan, 13–17 May 2024; IEEE: Piscataway, NJ, USA, 2024; pp. 17933–17940.
  21. Zhang, X.; Liu, Y.; Liu, J.; Chen, X.; Xu, R.; Ma, W.; Zhang, Z.; Fu, S. An autonomous navigation system with a trajectory prediction-based decision mechanism for rubber forest navigation. Sci. Rep. 2024, 14, 29495.
  22. Ni, J.; Chen, Y.; Tang, G.; Cao, W.; Yang, S.X. An Integration Model of Blind Spot Estimation and Traversable Area Detection for Indoor Robots. IEEE Sens. J. 2025, 25, 17850–17866.
  23. Ni, J.; Chen, Y.; Zhang, Z.; Zhao, Y.; Yang, S.X. An Improved Fuzzy Decision and Geometric Inference Based Indoor Layout Estimation Model from RGB-D Images. IEEE Trans. Instrum. Meas. 2025, 74, 5015115.
  24. Ali, M.; Jardali, H.; Roy, N.; Liu, L. Autonomous navigation, mapping and exploration with gaussian processes. In Proceedings of the Robotics: Science and Systems (RSS), Daegu, Republic of Korea, 10–14 July 2023.
  25. Gasparino, M.V.; Sivakumar, A.N.; Liu, Y.; Velasquez, A.E.; Higuti, V.A.; Rogers, J.; Tran, H.; Chowdhary, G. Wayfast: Navigation with predictive traversability in the field. IEEE Robot. Autom. Lett. 2022, 7, 10651–10658.
  26. Fahnestock, E.; Fuentes, E.; Prentice, S.; Vasilopoulos, V.; Osteen, P.R.; Howard, T.; Roy, N. Far-Field Image-Based Traversability Mapping for A Priori Unknown Natural Environments. IEEE Robot. Autom. Lett. 2025, 10, 6039–6046.
  27. Pollayil, M.J.; Angelini, F.; de Simone, L.; Fanfarillo, E.; Fiaschi, T.; Maccherini, S.; Angiolini, C.; Garabini, M. Robotic monitoring of forests: A dataset from the EU habitat 9210* in the Tuscan Apennines (central Italy). Sci. Data 2023, 10, 845.
  28. Freißmuth, L.; Mattamala, M.; Chebrolu, N.; Schaefer, S.; Leutenegger, S.; Fallon, M. Online tree reconstruction and forest inventory on a mobile robotic system. In Proceedings of the 2024 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Abu Dhabi, United Arab Emirates, 14–18 October 2024; IEEE: Piscataway, NJ, USA, 2024; pp. 11765–11772.
  29. Tang, J.; Chen, Y.; Niu, X.; Wang, L.; Chen, L.; Liu, J.; Shi, C.; Hyyppä, J. LiDAR scan matching aided inertial navigation system in GNSS-denied environments. Sensors 2015, 15, 16710–16728.
  30. da Silva, D.Q.; Dos Santos, F.N.; Sousa, A.J.; Filipe, V. Visible and thermal image-based trunk detection with deep learning for forestry mobile robotics. J. Imaging 2021, 7, 176.
  31. da Silva, D.Q.; dos Santos, F.N.; Filipe, V.; Sousa, A.J.; Oliveira, P.M. Edge AI-based tree trunk detection for forestry monitoring robotics. Robotics 2022, 11, 136.
  32. Arslan, A.E.; Inan, M.; Çelik, M.F.; Erten, E. Estimations of forest stand parameters in open forest stand using point cloud data from Terrestrial Laser Scanning, Unmanned Aerial Vehicle and aerial LiDAR data. Eur. J. For. Eng. 2022, 8, 46–54.
  33. Proudman, A.; Ramezani, M.; Digumarti, S.T.; Chebrolu, N.; Fallon, M. Towards real-time forest inventory using handheld LiDAR. Robot. Auton. Syst. 2022, 157, 104240.
  34. Laino, D.; Cabo, C.; Prendes, C.; Janvier, R.; Ordonez, C.; Nikonovas, T.; Doerr, S.; Santin, C. 3DFin: A software for automated 3D forest inventories from terrestrial point clouds. For. Int. J. For. Res. 2024, 97, 479–496.
  35. Wołk, K.; Tatara, M.S. A review of semantic segmentation and instance segmentation techniques in forestry using LiDAR and imagery data. Electronics 2024, 13, 4139.
  36. Qi, C.R.; Yi, L.; Su, H.; Guibas, L.J. PointNet++: Deep hierarchical feature learning on point sets in a metric space. Adv. Neural Inf. Process. Syst. 2017, 30, 5105–5114.
  37. Xiang, B.; Wielgosz, M.; Kontogianni, T.; Peters, T.; Puliti, S.; Astrup, R.; Schindler, K. Automated forest inventory: Analysis of high-density airborne LiDAR point clouds with 3D deep learning. Remote Sens. Environ. 2024, 305, 114078.
  38. Wang, C.Y.; Bochkovskiy, A.; Liao, H.Y.M. YOLOv7: Trainable bag-of-freebies sets new state-of-the-art for real-time object detectors. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Vancouver, BC, Canada, 18–22 June 2023; pp. 7464–7475.
  39. Cai, Z.; Vasconcelos, N. Cascade R-CNN: High quality object detection and instance segmentation. IEEE Trans. Pattern Anal. Mach. Intell. 2019, 43, 1483–1498.
  40. Grondin, V.; Fortin, J.M.; Pomerleau, F.; Giguère, P. Tree detection and diameter estimation based on deep learning. For. Int. J. For. Res. 2022, 96, 264–276.
  41. Zhou, S.; Xi, J.; McDaniel, M.W.; Nishihata, T.; Salesses, P.; Iagnemma, K. Self-supervised learning to visually detect terrain surfaces for autonomous robots operating in forested terrain. J. Field Robot. 2012, 29, 277–297.
  42. Shan, T.; Englot, B.; Meyers, D.; Wang, W.; Ratti, C.; Rus, D. LIO-SAM: Tightly-coupled LiDAR Inertial Odometry via Smoothing and Mapping. In Proceedings of the 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Las Vegas, NV, USA, 25–29 October 2020; IEEE: Piscataway, NJ, USA, 2020; pp. 5135–5142.
  43. Tiozzo Fasiolo, D.; Scalera, L.; Maset, E. Comparing LiDAR and IMU-based SLAM approaches for 3D robotic mapping. Robotica 2023, 41, 2588–2604.
  44. Sorkine-Hornung, O.; Rabinovich, M. Least-squares rigid motion using SVD. Computing 2017, 1, 1–5.
  45. Rösmann, C.; Hoffmann, F.; Bertram, T. Integrated online trajectory planning and optimization in distinctive topologies. Robot. Auton. Syst. 2017, 88, 142–153.
  46. He, K.; Gkioxari, G.; Dollár, P.; Girshick, R. Mask R-CNN. In Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy, 22–29 October 2017; pp. 2961–2969.
  47. Tiozzo Fasiolo, D.; Maset, E.; Scalera, L.; Macaulay, S.; Gasparetto, A.; Fusiello, A. Combining LiDAR SLAM and deep learning-based people detection for autonomous indoor mapping in a crowded environment. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2022, 43, 447–452.
  48. Schubert, E.; Sander, J.; Ester, M.; Kriegel, H.P.; Xu, X. DBSCAN revisited, revisited: Why and how you should (still) use DBSCAN. ACM Trans. Database Syst. (TODS) 2017, 42, 1–21.
  49. Satopaa, V.; Albrecht, J.; Irwin, D.; Raghavan, B. Finding a “kneedle” in a haystack: Detecting knee points in system behavior. In Proceedings of the 2011 31st International Conference on Distributed Computing Systems Workshops, Minneapolis, MN, USA, 20–24 June 2011; IEEE: Piscataway, NJ, USA, 2011; pp. 166–171.
  50. Koide, K.; Oishi, S.; Yokozuka, M.; Banno, A. General, single-shot, target-less, and automatic LiDAR-camera extrinsic calibration toolbox. In Proceedings of the 2023 IEEE International Conference on Robotics and Automation (ICRA), London, UK, 29 May–2 June 2023; IEEE: Piscataway, NJ, USA, 2023; pp. 11301–11307.
Figure 1. Overview of the proposed navigation and 3D mapping approach. The data from the sensors are represented in blue, the user-defined inputs in magenta, the algorithms in red, and their outputs in green. The operations performed by the blocks inside the dotted rectangle are executed only if the GNSS signal is available.
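As a minimal illustration of the GNSS-availability gate described in the caption of Figure 1, the sketch below checks common fix-quality indicators before enabling the GNSS-dependent blocks. The indicator names (number of satellites, HDOP) and the thresholds are illustrative assumptions, not values taken from the paper.

def gnss_reliable(num_satellites, hdop, min_satellites=6, max_hdop=2.0):
    """Return True when the GNSS fix is considered reliable enough to be used.

    The number of tracked satellites and the horizontal dilution of precision
    (HDOP) are common quality indicators; the thresholds here are illustrative.
    """
    return num_satellites >= min_satellites and hdop <= max_hdop

# Example: with a poor fix, the GNSS-dependent blocks (dotted rectangle) are skipped
if gnss_reliable(num_satellites=4, hdop=3.5):
    pass  # run the GNSS-dependent blocks of the pipeline
else:
    pass  # fall back to SLAM-only localization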
Figure 2. Overview of the vision-based tree trait estimation approach. The data from the sensors are represented in blue, the algorithms in red, and their outputs in green.
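Figure 2 summarizes the vision-based estimation of tree traits from trunk masks and keypoints. As a rough, generic illustration (not the exact procedure used by the authors), the sketch below derives a trunk diameter from the LiDAR points associated with one segmented trunk by selecting a thin slice around breast height and fitting a circle to it with an algebraic least-squares (Kasa) fit; the function name and parameters are assumptions.

import numpy as np

def fit_trunk_diameter(points_xyz, breast_height=1.3, slice_thickness=0.1):
    """Estimate the diameter of one segmented trunk from its LiDAR points.

    points_xyz is an (N, 3) array of points belonging to a single trunk, in a
    frame whose z axis is vertical and whose z origin is at the ground. A thin
    horizontal slice around breast height (1.3 m) is selected and a circle is
    fitted to its (x, y) coordinates with an algebraic least-squares (Kasa) fit.
    """
    z = points_xyz[:, 2]
    in_slice = np.abs(z - breast_height) < slice_thickness / 2.0
    xy = points_xyz[in_slice, :2]
    if xy.shape[0] < 10:
        return None  # too few points for a reliable estimate (cf. Figure 7a)

    x, y = xy[:, 0], xy[:, 1]
    # Circle model: x^2 + y^2 + a*x + b*y + c = 0, solved in the least-squares sense
    A = np.column_stack([x, y, np.ones_like(x)])
    rhs = -(x**2 + y**2)
    (a, b, c), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    cx, cy = -a / 2.0, -b / 2.0
    radius = np.sqrt(cx**2 + cy**2 - c)
    return 2.0 * radius  # diameter, in the same unit as the input coordinates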
Figure 3. Mobile robot in a wooded area of the Friuli-Venezia Giulia region, Italy (a). Close-up view of the AgileX Scout 2.0 mobile robot and its sensor suite (b).
Figure 4. (a) Paths followed by the robot during test (1). (b) Example of a path planned to avoid a tree trunk during autonomous navigation of the mobile robot. (c) Result of the vision-based tree trait identification approach in a single image frame.
Figure 5. Example results for the robot trajectory in test (4): (a) robot position (X coordinate); (b) robot position (Y coordinate); (c) robot elevation with respect to the map reference frame; (d) robot heading; (e) robot speed; (f) distance to the next waypoint. The vertical dashed lines in each plot indicate the times when the robot reaches the target waypoints.
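In Figure 5f, the distance to the next waypoint drops each time a target is reached, at the instants marked by the dashed lines. A minimal sketch of such an arrival check is shown below; the planar Euclidean distance and the 0.5 m tolerance are illustrative assumptions rather than values from the paper.

import math

def reached_waypoint(robot_xy, waypoint_xy, tolerance=0.5):
    """Return True when the robot lies within `tolerance` meters of the waypoint (planar distance)."""
    dx = waypoint_xy[0] - robot_xy[0]
    dy = waypoint_xy[1] - robot_xy[1]
    return math.hypot(dx, dy) < tolerance

def update_target(robot_xy, waypoints, current_index, tolerance=0.5):
    """Advance along the global path once the current waypoint has been reached."""
    if current_index < len(waypoints) and reached_waypoint(robot_xy, waypoints[current_index], tolerance):
        return current_index + 1
    return current_index

# Example: with the robot at (4.1, 2.0) and the target at (4.3, 1.9), the waypoint counts as reached
print(update_target((4.1, 2.0), [(4.3, 1.9), (8.0, 2.0)], 0))  # -> 1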
Figure 6. Point clouds of the forest reconstructed by the LIO-SAM algorithm after ground removal (the color bar encodes the LiDAR intensity values as 8-bit integers; the order of the waypoints is shown in red): (a) test (2), (b) test (3), (c) test (4), and (g) test (5). Global path, actual path estimated by the SLAM algorithm, and locations of the trees detected during the surveys of (d) test (2), (e) test (3), (f) test (4), and (h) test (5). The labels in the plots indicate the identification number associated with each tree.
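The point clouds in Figure 6 are shown after ground removal. As a generic baseline for separating ground points from trunks (not necessarily the filter used by the authors), the sketch below keeps only the points that rise more than a given height above the lowest point of their horizontal grid cell; the cell size and threshold are illustrative.

import numpy as np

def remove_ground(points_xyz, cell_size=1.0, height_threshold=0.3):
    """Discard points lying within `height_threshold` m of the lowest point of their 2D grid cell.

    points_xyz is an (N, 3) array in a frame with a vertical z axis; the cell
    size (1 m) and threshold (0.3 m) are illustrative values only.
    """
    cells = np.floor(points_xyz[:, :2] / cell_size).astype(np.int64)
    # Index of the grid cell each point belongs to
    _, cell_index = np.unique(cells, axis=0, return_inverse=True)
    # Minimum elevation per cell, taken as the approximate local ground level
    ground_z = np.full(cell_index.max() + 1, np.inf)
    np.minimum.at(ground_z, cell_index, points_xyz[:, 2])
    keep = points_xyz[:, 2] - ground_z[cell_index] > height_threshold
    return points_xyz[keep]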
Figure 7. Examples of unreliable detections: (a) trees located farther away, resulting in too few LiDAR points within their masks; (b) poor-quality mask; (c) missed detection due to foliage covering the tree trunk.
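The failure cases in Figure 7 suggest simple plausibility checks before a detection is accepted, for instance requiring a minimum number of LiDAR points inside the trunk mask and a minimum mask confidence. The snippet below is a hedged sketch of such a filter; the field names and thresholds are illustrative assumptions, not values from the paper.

from dataclasses import dataclass

@dataclass
class TrunkDetection:
    mask_score: float       # confidence of the segmentation mask (0-1)
    lidar_point_count: int  # LiDAR points projected inside the mask

def is_reliable(det: TrunkDetection, min_points=50, min_score=0.8):
    """Accept a detection only if enough LiDAR points fall inside a sufficiently confident mask."""
    return det.lidar_point_count >= min_points and det.mask_score >= min_score

detections = [TrunkDetection(0.92, 310), TrunkDetection(0.55, 220), TrunkDetection(0.90, 12)]
reliable = [d for d in detections if is_reliable(d)]  # keeps only the first detection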
Table 2. Summary of results for the four surveying tests. For each test, the table reports the dimensions of the global path, the total path length followed by the robot, the duration of the survey, the number of LiDAR points in the point cloud, the number of trees detected, and the DBH of the trees in the scanned area (as mean value ± standard deviation).

          Global Path [m]  Robot Path Length [m]  Survey Duration [s]  Number of Points in the 3D Map  Number of Trees Detected  DBH [cm]
Test (2)  10 × 10          43.2                   112                  2,609,866                       16                        27.4 ± 11.8
Test (3)  15 × 15          66.4                   187                  4,552,286                       21                        21.6 ± 8.4
Test (4)  20 × 20          83.1                   211                  5,556,149                       17                        20.1 ± 8.0
Test (5)  15 × 10          89.2                   317                  2,834,634                       24                        30.0 ± 20.1
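Although not reported explicitly in Table 2, the average travel speed of the robot in each survey follows directly from the robot path length and the survey duration; the short snippet below computes it from the values in the table (about 0.3-0.4 m/s across the four tests).

# Robot path length [m] and survey duration [s] for tests (2)-(5), from Table 2
surveys = {
    "Test (2)": (43.2, 112),
    "Test (3)": (66.4, 187),
    "Test (4)": (83.1, 211),
    "Test (5)": (89.2, 317),
}

for name, (length_m, duration_s) in surveys.items():
    print(f"{name}: average speed {length_m / duration_s:.2f} m/s")
# Test (2): 0.39 m/s, Test (3): 0.36 m/s, Test (4): 0.39 m/s, Test (5): 0.28 m/s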
Table 3. Tree locations and DBH in test (2).

Tree ID    X [m]    Y [m]    DBH [cm]
0           2.57     3.80    34
1          −0.75     6.67    20
2           2.85     6.13    21
3           7.43     5.30    30
4           8.31     3.29    31
5          10.57    10.16    20
6          11.88    13.13    13
7          10.37    15.60    59
8           7.16    13.86    29
9          −1.64    17.65    29
10         −4.69    19.28    25
11         −6.60    11.38    19
12         −2.28     9.46    14
13         −7.19     7.21    48
14         −6.46     1.26    17
15         −4.90     3.95    29
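As a consistency check, the per-tree values in Table 3 reproduce the summary reported in Table 2 for test (2): the mean of the sixteen DBH values is 27.4 cm, and their population standard deviation is 11.8 cm. The snippet below performs this check.

import statistics

# DBH values (cm) of the 16 trees detected in test (2), taken from Table 3
dbh_test2 = [34, 20, 21, 30, 31, 20, 13, 59, 29, 29, 25, 19, 14, 48, 17, 29]

mean = statistics.fmean(dbh_test2)        # 27.4 cm
std = statistics.pstdev(dbh_test2)        # 11.8 cm (population standard deviation)
print(f"DBH: {mean:.1f} ± {std:.1f} cm")  # matches the Table 2 entry for test (2)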
Table 4. Tree locations and DBH in test (3).

Tree ID    X [m]    Y [m]    DBH [cm]
0           2.86     3.28    37
1           0.70     6.62    17
2           8.65     2.85    32
3           8.44     6.98     4
4          11.54     8.71    18
5          12.11    12.67    18
6          10.46    15.82    27
7           6.26    13.04    13
8          −0.61     7.44    23
9          −0.74    22.18    22
10         −0.73    25.83    31
11         −4.72     6.41    13
12         −8.36    22.80    25
13         −4.16    19.91    22
14         −1.84    20.85    14
15        −10.02    16.71    34
16         −5.84    11.86    19
17        −12.73    13.10    15
18         −4.43     4.80    29
19         −4.89     2.85    11
20          5.76     0.13    30
Table 5. Tree locations and DBH in test (4).

Tree ID    X [m]    Y [m]    DBH [cm]
0          −9.64    22.92    24
1          −5.44    18.84    14
2          −1.21    31.25    20
3           2.01    37.91    10
4          −4.58    35.17    10
5           8.92    24.24    34
6          12.39    18.37    23
7          11.31     5.04    24
8          15.81     5.36    19
9           6.25     0.18    21
10          0.86    −1.53    17
11         −3.92     2.05    40
12         −5.27     6.66    22
13         −2.34     8.86    29
14        −15.48    11.80    23
15        −15.61     8.82    15
16        −19.26    16.23    10
Table 6. Tree locations and DBH in test (5).

Tree ID    X [m]    Y [m]    DBH [cm]
0          −8.07    −5.44    36
1          −5.16    −7.47    91
2          −5.75    −8.47    21
3         −11.11   −15.10    27
4          −2.76   −11.25    22
5           1.39   −13.20    23
6          −1.19    −6.79    36
7           1.53    −3.95    20
8           5.70     0.07    24
10         10.86    −2.96    24
12          8.77   −14.37    18
13         −3.58   −19.71    27
14          5.52   −18.71    19
15          4.73   −24.45    27
16         10.18   −23.80    14
17         17.99    −5.84    23
18         17.79   −10.34     6
19         15.32   −10.25     9
20         −2.50    −1.27    34
21         −0.44     5.29    24
22         −5.19     3.32    58
23         −3.15     5.76    78