Article

Unimodal and Multimodal Perception for Forest Management: Review and Dataset

by Daniel Queirós da Silva 1,2,*,†, Filipe Neves dos Santos 1, Armando Jorge Sousa 1,3, Vítor Filipe 1,2 and José Boaventura-Cunha 1,2

1 INESC Technology and Science (INESC TEC), 4200-465 Porto, Portugal
2 School of Science and Technology, University of Trás-os-Montes e Alto Douro (UTAD), 5000-801 Vila Real, Portugal
3 Faculty of Engineering, University of Porto (FEUP), 4200-465 Porto, Portugal
* Author to whom correspondence should be addressed.
† Current address: Campus da FEUP, Rua Dr. Roberto Frias 400, 4200-465 Porto, Portugal.
Computation 2021, 9(12), 127; https://doi.org/10.3390/computation9120127
Submission received: 13 October 2021 / Revised: 25 November 2021 / Accepted: 26 November 2021 / Published: 29 November 2021
(This article belongs to the Special Issue Computation and Analysis of Remote Sensing Imagery and Image Motion)

Abstract

Robotic navigation and perception for forest management are challenging due to the many obstacles to detect and avoid and the sharp illumination changes. Advanced perception systems are needed because they enable the development of robotic and machinery solutions to accomplish a smarter, more precise, and more sustainable forestry. This article presents a state-of-the-art review of unimodal and multimodal perception in forests, detailing the work developed so far on perception using a single type of sensor (unimodal) and on combining data from different kinds of sensors (multimodal). This work also compares the perception datasets existing in the literature and presents a new multimodal dataset, composed of images and laser scanning data, as a contribution to this research field. Lastly, a critical analysis of the collected works is conducted, identifying strengths and research trends in this domain.

1. Introduction

In robotics, perception is the ability of a system to identify and interpret sensory information in order to better understand and be aware of the surrounding environment. This article formally reviews the state of the art of unimodal perception (using a single type of sensor) and multimodal perception (combining data from distinct kinds of sensors) in forestry environments. These types of perception are also relevant for agricultural purposes, but this work focuses only on the forestry domain. Therefore, this article covers scientific works tested in woods and/or forests; both terms represent forestry environments. In woods, 25–60% of the land is covered by trees, whereas in forests the tree canopy covers 60–100% of the land (https://www.reconnectwithnature.org/news-events/the-buzz/what-the-difference-woods-vs-forest, accessed on 6 October 2021). For simplicity, throughout this article, we refer to forestry environments using the term “forests”.
Over the years, several advances in perception systems have appeared, having a positive impact on the forestry domain. Combined with robotics, these systems have improved, in terms of precision and intelligence, several tasks and operations that have long been performed in forests. Some years back, such operations were carried out without concern for forest sustainability and with obvious limitations. In forestry, perception is of utmost importance, as it is required for detecting trees, stems, bushes, and rocks [1], and for measuring certain parameters of valuable vegetation whilst ignoring nonvaluable plants [2]. Such tasks have an inherent difficulty because of the illumination changes caused by tree-derived shading. With this in mind, this article presents an overview of the recent scientific developments in multimodal perception in forests for several purposes: species detection, disease detection, structural measurement, biomass and carbon dynamics assessment, and monitoring through autonomous navigation. The addition of cutting-edge technology to these and other operations not only leads to a smarter and more precise forestry but also helps to prevent and deal with natural disasters such as wildfires, which are estimated to have affected the lives of over 6.2 million people since 1998 [3].
The selection of works about perception systems for forestry was based on the current state of the art of this domain; therefore, the majority of the cited works are from the past 10 years. The main focus of this review was the production of a scientific survey; therefore, articles published in journals and conferences were preferred. The literature databases used to search for scientific information were: Scopus, ScienceDirect, IEEE Xplore, SpringerLink, and Google Scholar.
To perform this search, the following keywords were used: forests, sensor fusion, multimodal perception, images, lidar, radar, and navigation. These keywords were combined to form the following search strings: “forests AND images”, “forests AND lidar”, “forests AND ‘sensor fusion’ AND ‘multimodal perception’”, “forests AND images AND lidar AND radar”, and “forests AND navigation”.
The contributions of this work are the following:
  • A review of perception methods and datasets for multimodal systems and applications;
  • A publicly available dataset with multimodal perception data.
The rest of this article is structured as follows. Section 2 presents a review of unimodal and multimodal perception methods for forestry. In Section 3, our dataset and other perception datasets found in the literature are presented and detailed. Section 4 ends this article, drawing the main conclusions about the forestry unimodal and multimodal perception domains.

2. Unimodal and Multimodal Perception in Forestry

This section presents a literature review of scientific works about unimodal (using only images or LiDAR data) and multimodal perception in forestry.

2.1. Vision-Based Perception

Over the years, several works have appeared whose main goal was to use only vision-based data for performing forestry-related tasks. With this in mind, in this section, works related to vision-based perception in forest areas are covered.
The use of images to inspect forestry environments can serve multiple purposes: disease detection in vegetation, vegetation inventory reports, vegetation health monitoring, detection of forest obstacles for safe autonomous or semiautonomous navigation, assessment of the forest structure, and mapping of the forest land, among others.
Health monitoring and disease detection in trees are frequent topics in forest contexts that can be addressed using only cameras. In [4], a study about the detection of pine wilt disease was conducted. The authors used an Unmanned Aerial Vehicle (UAV) equipped with a camera to gather aerial images for further processing. The UAV captured several images during three consecutive months, which formed a dataset for training four different Deep Learning (DL) object detection methods—Faster R-CNN ResNet50, Faster R-CNN ResNet101, YOLOv3 DarkNet53, and YOLOv3 MobileNet—to diagnose such a disease. The authors claimed that the four methods achieved similar precision, but the YOLOv3-based models were lighter and faster than the Faster R-CNN variations. In [5], another study about disease detection in pinus trees was made. In this work, the authors also used UAV-based images to detect the disease; however, they developed a method that combines Convolutional Neural Networks (CNN) and Generative Adversarial Networks (GAN) with an AdaBoost classifier. The GAN was used to extend the diseased samples of the dataset; the CNN was used to remove noise that hampers the recognition task, such as roads, rocks, and soils; and the role of the AdaBoost classifier was to distinguish diseased trees from healthy ones and to identify shadows in the images. The proposed method attained better recognition performance than several well-known methods, such as support vector machines, AlexNet, VGG, and Inception-V3. Another work in which UAVs were used to capture aerial images for the identification of sick trees was proposed in [6]. In this work, the authors wanted to detect sick fir trees; for that, they started by obtaining a Digital Surface Model (DSM) from the aerial images, on top of which an algorithm developed by them was run to detect treetops. Then, the detected tree crowns were classified using five DL models: AlexNet, SqueezeNet, VGG, ResNet, and DenseNet. The obtained results showed that the proposed tree crown detection algorithm achieved, on average, the best matching and counting of treetops. In terms of treetop classification, DenseNet, ResNet, and VGG were the DL models presenting more stable detection results. In [7], the authors presented an approach for diagnosing forest health based on the detection of dead trees in aerial images. In this work, the authors used their own aerial image datasets and eight fine-tuned variations of a DL method called Mask R-CNN to produce dead tree detections, with the best variation achieving a mean average precision of about 54%. The purpose of the detections was to serve as an indicator of environmental changes and even as an alert for the possibility of forest fire occurrence. A study about monitoring trees’ health was made in [8], where the authors collected aerial images using a UAV and performed individual tree identification by using a k-means method for tree segmentation, followed by the use of histograms of oriented gradients to localise the treetops. Afterwards, the images went through a multipyramid feature extraction step where important features were extracted to further identify the health of the trees. The results showed that the proposed method performed better than other state-of-the-art methods.
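To make the detection pipelines surveyed above more concrete, the sketch below runs a generic Faster R-CNN detector on a single UAV image tile with torchvision. It is a minimal illustration only: the COCO-pretrained weights stand in for a model fine-tuned on annotated diseased-tree imagery, and the file name and confidence threshold are assumptions, not values taken from the reviewed works.

```python
# Minimal inference sketch for a Faster R-CNN detector on a UAV image tile.
# Assumes torchvision >= 0.13; COCO-pretrained weights stand in for a model
# fine-tuned on annotated diseased-tree imagery (hypothetical file name).
import torch
import torchvision
from torchvision.io import read_image
from torchvision.transforms.functional import convert_image_dtype

model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

image = convert_image_dtype(read_image("uav_tile.jpg"), torch.float)  # hypothetical path
with torch.no_grad():
    predictions = model([image])[0]

# Keep detections above a confidence threshold (0.5 used for illustration only).
keep = predictions["scores"] > 0.5
for box, label, score in zip(predictions["boxes"][keep],
                             predictions["labels"][keep],
                             predictions["scores"][keep]):
    print(f"label={label.item()} score={score.item():.2f} box={box.tolist()}")
```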
The production of inventory reports and the assessment of the forest structure and its characterisation are important issues that reveal the productivity of the forest land. In [9], the authors used a CNN called RetinaNet to detect palm trees in aerial images, achieving 89% and 77% precision in the validation and test datasets, respectively. The authors also presented a similar work in [10], where they went even deeper regarding the inventory report of palm trees, attaining a very high number of accounted palms with a confidence score above 50%. In [11], the authors used a photogrammetric technique called Structure from Motion (SfM) to generate point clouds from which some forest parameters were extrapolated, such as tree positions, Diameter at Breast Height (DBH), and stem curves (curves that define the stem diameter at different heights). The image capture was made at two locations in Austria and two locations in Slovakia, and Terrestrial Laser Scanning (TLS) measurements were used as ground-truth for the SfM parameter estimation. The results show that SfM is an accurate solution for forest inventory purposes and for measuring forest parameters, not falling far behind TLS. Another work where an SfM-based strategy was used to obtain forest plot characteristics such as tree positions, DBH, tree height, and tree density was presented in [12]. In this work, the authors combined the image acquisition with a type of differential Global Navigation Satellite System (GNSS) technology, which differs from the common approach of simply using photogrammetry to reconstruct the 3D point cloud of a scene; instead, their method is capable of directly extracting the real geographical coordinates of the points. The results showed minimal differences in the positioning accuracy (between 0.162 and 0.201 m), in the trunk DBH measurements (between 3.07% and 4.51%), and in the tree height measurements (between 11.26% and 11.91%). In [13], the authors also used SfM for 3D reconstruction from aerial data collected using a UAV. They applied a watershed segmentation method along with local maxima to detect individual trees, and afterwards, tree heights were calculated using DSMs and digital terrain models. The tree detection procedure was carried out with a maximum error of 6%, and the tree height estimation error was around 1 and 0.7 m for the pinus and eucalyptus stands, respectively. While it is important to quantify forest parameters such as DBH, tree height, and tree position, the measurement of tree crowns must not be underestimated, as it is quite difficult to assess manually and it provides insight into the stand timber volume. For this, the authors in [14] made a study on methods for the detection and extraction of tree crowns from UAV-based images and further crown measurement. They used three DL models, namely Faster R-CNN, YOLOv3, and Single-Shot MultiBox Detector (SSD). In terms of detection, the three models behaved similarly; however, in terms of crown width estimation (computed directly from the generated bounding boxes of the methods), SSD was the method that presented the lowest error.
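As a worked example of deriving a crown width directly from a detection bounding box, as done in [14], the following sketch converts a pixel-space box to metres through the ground sampling distance of a nadir UAV image. The sensor width, focal length, and altitude values are illustrative assumptions, not parameters from the reviewed study.

```python
# Sketch of how a crown width can be derived from a detection bounding box,
# assuming nadir UAV imagery over roughly flat terrain and a known camera setup.
# All numeric values are illustrative, not taken from the reviewed works.

def ground_sampling_distance(sensor_width_m, image_width_px, focal_length_m, altitude_m):
    """Metres on the ground covered by one pixel (nadir view, flat terrain)."""
    return (sensor_width_m * altitude_m) / (focal_length_m * image_width_px)

def crown_width_m(bbox, gsd):
    """bbox = (x_min, y_min, x_max, y_max) in pixels; returns crown width in metres."""
    x_min, _, x_max, _ = bbox
    return (x_max - x_min) * gsd

gsd = ground_sampling_distance(sensor_width_m=0.0132, image_width_px=4000,
                               focal_length_m=0.0088, altitude_m=80.0)
print(crown_width_m((1250, 800, 1390, 930), gsd))  # crown width in metres (~4.2 m here)
```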
Commonly, forestry inventory is estimated by detecting the trees. Several works proposed this approach as a way of quantitatively assessing the forest yield, forest biomass, and carbon dynamics from high-resolution remote sensing or UAV-based imagery [15,16,17,18,19,20,21,22]. The inventory of a certain ecosystem can also be estimated by mapping it through satellite images, as was done in [23] for a mangrove ecosystem. The authors used a pixel-based random forest classifier that resulted in a mangrove map with an overall accuracy of 93%. This work demonstrated that the production of detailed ecosystem maps can have a high impact on monitoring and managing natural resources.
Autonomous navigation in forests is a relevant challenge. For this purpose, it is fundamental that all obstacles in the forest are detected to avoid hazardous situations and damage. In [24], the authors installed a camera on a forwarder (a forestry vehicle that transports logs) and developed an algorithm that, using the images, detects trees and measures the distance to them. They trained an Artificial Neural Network (ANN) and a K-Nearest Neighbours (KNN) classifier to perform the detections. After detecting the trees, a distance measurement process is executed based on the intrinsic and extrinsic parameters of the camera and other parameters related to the vehicle. Then, if the distance is below a proximity threshold, the vehicle is stopped, and it waits for a command from the operator before returning to operation. Similar work was presented in [25], where the authors developed an autonomous navigation and obstacle avoidance system for a robotic mower with a mounted camera. The obstacles and landmarks were detected using a CNN. Autonomous navigation in forests can also happen in the air with UAVs, as was shown in [26], where the authors developed a UAV-based system capable of following footpaths in forest terrain. The CNN-based perception system detected the footpaths, and a decision-making system then calculated the deviation angle of the UAV’s motion vector from the desired path; if the angle did not exceed 80 degrees, the UAV would move forward; otherwise, it would turn left or right depending on the sign of the angle. In [27], the authors also used a UAV to develop an autonomous flight system using monocular vision in forest environments. The system is an enhanced version of an existing algorithm for rovers. The proposed method is capable of computing the distance to obstacles, calculating the angle to the nearest obstacle, and applying the correct yaw–velocity pair to manoeuvre the UAV around the obstacles. A similar work was developed in [28], where the authors proposed a DL-based system for obstacle avoidance in forests. The system was tested in a simulated and in a real environment: in the simulated environment, the UAV concluded 85% of the test flights without collisions, and in the real environment, the UAV concluded all test flights without collisions. Other studies focused on detecting tree trunks in street images using Deep Learning methods [29,30], in dense forests using visible and thermal imagery combined with Deep Learning [31], and even on the detection of stumps in harvested forests [32], to enhance the surrounding awareness of the operators and to endow machines with intelligent obstacle avoidance systems.
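The footpath-following rule described for [26] can be summarised in a few lines. The sketch below is our own hedged reconstruction of that decision logic: the function name, vector representation, and left/right mapping are assumptions; only the 80-degree threshold comes from the reviewed description.

```python
# Sketch of the decision rule described for the footpath-following UAV in [26]:
# move forward while the deviation angle stays within a threshold, otherwise
# turn towards the path. Names and the left/right convention are ours, not
# taken from the original implementation.
import math

def steering_command(motion_vector, path_vector, threshold_deg=80.0):
    """Return 'forward', 'turn_left', or 'turn_right' from 2D direction vectors."""
    # Signed angle from the desired path direction to the UAV motion direction.
    angle = math.degrees(
        math.atan2(motion_vector[1], motion_vector[0])
        - math.atan2(path_vector[1], path_vector[0])
    )
    angle = (angle + 180.0) % 360.0 - 180.0  # wrap into (-180, 180]
    if abs(angle) <= threshold_deg:
        return "forward"
    # Which sign maps to left or right depends on the coordinate convention.
    return "turn_right" if angle > 0 else "turn_left"

print(steering_command((0.2, 1.0), (0.0, 1.0)))    # small deviation -> 'forward'
print(steering_command((-1.0, -0.2), (0.0, 1.0)))  # large deviation -> turn
```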
Table 1 summarises the works related to vision-based perception in forests, organised by category, by the type of processing needed, and by the number of works found with impact in each category.
The aim of the “Health and diseases” category is to monitor the health of forest lands and detect the existence of diseases that affect forest trees, destroying some forest cultures and ecosystems. Data from this category are most of the time processed offline—the data are collected in the field and processed later. For this category, only five works were presented, since the way of acquiring the data for such a purpose is quite similar (the majority use UAV or satellite imagery), and their processing is mostly targeted at the detection of sick or dead trees in aerial images. The category of “Inventory and structure” is based on remote sensing to produce inventory reports about forest content, such as biomass volume, and to study the structure of forests using plot-level parameters to assess their growth and yield. The data of this category are also processed offline, and 15 works were collected for this category. Most of these works are focused on detecting and counting treetops in high-resolution and UAV images, and some of them are focused on measuring parameters of forest trees, such as DBH, height, tree density, and crown size. Lastly, the “Navigation” category mostly comprises works focused on detecting trees, in visible and thermal images, and also on measuring the distance to them to perform avoidance manoeuvres. Other studies focused on following footpaths with decision-making systems capable of deciding which route to choose. Such works are the foundations needed to attain fully or semiautonomous navigation in forests, hence the importance of the works in this area. The works of this category are all characterised by online data processing; otherwise, the aerial or terrestrial vehicles, which rely on visual perception, would crash, and serious damage would happen to their hardware.
Another perspective taken from the collected works on vision-based perception in forests concerns the nature of their perception systems, i.e., whether the systems are terrestrial or airborne. Of the 28 works, eight are of a terrestrial nature and 20 of an aerial nature. These numbers may indicate that the domain of terrestrial-based perception is still under development and that more research is needed, since advanced ground-level perception can enable the development of technological solutions for harvesting biomass and for cleaning and planting operations, which in turn can help to tackle environmental issues, such as the greenhouse effect, global warming, and even wildfires.

2.2. LiDAR Perception

This section is about forest perception using Light Detection And Ranging (LiDAR) technology. In this domain, the literature is divided into two main areas: LiDAR-based perception for estimating forestry inventory and structure, and for achieving autonomous navigation and other operations in forests.
Regarding the forest structure and inventory assessment domain, several works are focused on the development of methods to precisely perform tree detection and segmentation on LiDAR-based point clouds. In [33], the authors based their work on low-density full-waveform airborne laser scanning data for Individual Tree Detection (ITD) and tree species classification using a random forest classifier whose input was the features extracted from the detected trees. This work covered three tree species and, in the end, the results were compared with those obtained from discrete-return laser scanning data. In [34], a benchmark of eight ITD techniques was made over a dataset composed of Canopy Height Models (CHMs) obtained from Airborne Laser Scanning (ALS). Additionally, an automated tree-matching procedure was presented that was capable of linking each detection result to the reference tree. The method proved to work in an efficient manner. In [35], the authors presented an ITD method based on the watershed algorithm for further computing several tree-related variables. Deep Learning is another way of performing ITD, as was shown in some works [36,37] where distinct DL methods were used, such as Faster R-CNN, 3D-FCN, K-D Tree, and PointNet. Other works focused on Individual Tree Crown (ITC) detection and segmentation (or delineation), such as the one presented in [38]. In this work, the authors developed a framework that receives LiDAR-derived CHMs and 3D point cloud data and generates estimations of tree parameters such as tree height, mean crown width, and Above-Ground Biomass (AGB). The authors concluded that their framework is very accurate at ITC delineation, even in dense forestry areas. For the task of ITC delineation, some works also used Deep Learning. In [39], the method PointNet [40] was used: the authors presented a method that started by turning the point clouds containing the trees into voxels; then, the voxels were the input samples for training PointNet to detect the tree crowns; lastly, with the segmentation results provided by PointNet, height-related gradient information was used to distinguish the boundaries of each tree crown. Over the years, novel tree segmentation methods have appeared: gradient orientation clustering [41], graph-cut variations [42], region-based segmentation [43], mean-shift segmentation [44], and layer stacking [45]. The majority of works based on the use of LiDAR in forests are aimed at estimating and assessing forestry parameters and biomass. Some are focused on computing DBH [46,47,48,49]; others measure AGB [47,48,50,51,52,53,54,55,56,57], Leaf Area Index (LAI) [58,59,60], canopy height [47,48,49,54,61,62,63,64,65,66,67,68], tree crown diameter for estimating biomass and volume [69], basal area and tree density [52,70], land cover classification [71], and above-ground carbon density using an ITC segmentation that locates trees in the CHM, measures their heights and crown widths, and computes the biomass [72].
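Several of the ITD approaches above reduce to finding local maxima of a rasterised CHM. The following sketch shows that simplest variant with SciPy; the window size and minimum-height threshold are illustrative assumptions rather than values from any cited study.

```python
# Minimal sketch of local-maxima treetop detection on a rasterised Canopy
# Height Model (CHM), one of the simplest ITD strategies reviewed above.
import numpy as np
from scipy import ndimage

def detect_treetops(chm, window=5, min_height=2.0):
    """Return (row, col) indices of CHM local maxima higher than min_height metres."""
    local_max = ndimage.maximum_filter(chm, size=window) == chm
    candidates = local_max & (chm > min_height)
    return np.argwhere(candidates)

# Toy CHM (metres): two "trees" on flat ground.
chm = np.zeros((20, 20))
chm[5, 5], chm[14, 12] = 12.0, 9.5
print(detect_treetops(chm))  # -> [[ 5  5] [14 12]]
```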
Autonomous navigation and automated tasks in forests are still a challenge due to the unstructured nature of such environments and to the unavailability and/or degradation of GNSS signals [2,3]. In [73], the authors claimed to solve the localisation problem in sparse forests, where GNSS signals can be sporadically detected. They proposed a method that fuses GNSS information with a LiDAR-based odometry solution, which uses tree trunks as a feature input for a scan matching algorithm to estimate the relative movement of the aerial robot used in this work. The method employs a robust adaptive unscented Kalman filter and, for motion control, the authors implemented an obstacle avoidance system based on a probabilistic planner. In [74], an autonomous rubber-tapping ground robot was presented. The robot achieves autonomous navigation by collecting a sparse point cloud of tree trunks using a low-cost LiDAR and a gyroscope; the centre points of the trees are acquired; then, the points are connected to form a line that serves as the robot’s navigation path. Additionally, a fuzzy controller was used to analyse the heading and lateral errors while the robot performed certain operations: straight-line walking in a row at a fixed lateral distance, stopping at certain points, turning from one row to another, and gathering specific information regarding row spacing, plant spacing, and tree diameter. In [75], the authors presented a point cloud-based collision-free navigation system for UAVs. The system collects the point cloud using a LiDAR and converts it to an occupancy map that is the input for a random tree to generate path candidates. They used a modified version of the Covariant Hamiltonian Optimisation for Motion Planning objective function to choose the best candidate, whose trajectory is in turn the input of a model predictive controller. The authors’ strategy was tested in four different simulated environments, and the results showed that their method is more successful and has a “shorter goal-reaching distance” than the ground-truth ones. Most of the time, the problem of navigation and localisation in forests can be resolved by using Simultaneous Localisation and Mapping (SLAM) algorithms [2]. SLAM normally combines the data from a perception sensor, such as a camera, a LiDAR, or both, with the data from an Inertial Navigation System (INS). The authors in [76] developed a GNSS/INS/LiDAR-based SLAM method to perform highly precise stem mapping. The heading angles and velocities were extracted from GNSS/INS, enhancing the positioning accuracy of the SLAM method. In [77], the authors also used an INS- and LiDAR-based SLAM method to attain a stable, long-term navigation solution. They assessed the performance of two different approaches: performing SLAM with only a LiDAR, and performing SLAM with a LiDAR and an Inertial Measurement Unit (IMU). They concluded that the positioning error improved when the second approach (LiDAR+IMU-based SLAM) was in use. Similarly, in [78], the goal was stem mapping; to accomplish that, the authors combined GNSS+IMU with a LiDAR mounted on a terrestrial vehicle and performed SLAM. They concluded that the addition of LiDAR contributed to an improvement of 38% compared to the traditional approach of only using GNSS+IMU. In [79], the authors proposed a SLAM method called sparse SLAM (sSLAM) whose main application is in forests and for sparse point clouds. They tested their method in the field with a LiDAR and a GNSS receiver mounted on a harvester and compared their method with LeGO-LOAM. The results showed that sSLAM generates a lighter point cloud, incurs a lower GNSS parallel error, and is more consistent than LeGO-LOAM. Lastly, in [80], the authors proposed a new approach to match point clouds to tree maps using Delaunay triangulation. They tested their method with a dataset corresponding to a 200 m path travelled by a harvester with a LiDAR and a GNSS receiver mounted on it. Initially, the tree trunks are extracted from the map, resulting in a sparser map that is triangulated; then, a local submap of the harvester is registered, triangulated, and matched using triangular similarity maximisation, estimating the harvester’s position.
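As a small illustration of the map-matching idea in [80], the sketch below builds a Delaunay triangulation over a set of tree-trunk centre points, which is the structure a local submap would later be matched against. The trunk coordinates are synthetic, and only this first step is shown.

```python
# Sketch of the first step of the Delaunay-based map matching described in [80]:
# triangulate tree-trunk centre points extracted from the map. Trunk positions
# below are synthetic; the matching step itself is not reproduced here.
import numpy as np
from scipy.spatial import Delaunay

trunk_xy = np.array([[0.0, 0.0], [4.2, 1.1], [2.5, 5.0],
                     [7.8, 3.3], [5.1, 7.9], [1.0, 8.4]])

tri = Delaunay(trunk_xy)
print(tri.simplices)  # index triples of the triangles formed by neighbouring trunks
```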
Table 2 presents a summary of the aforementioned works related to LiDAR-based perception in forests, highlighting the category, processing type, and number of works found.
The majority of works are focused on perceiving the forest structure, estimating its inventory, and drawing conclusions about the forest carbon stock and vegetation yield from it. With respect to navigation purposes, more research is needed, as only eight works were found to be of interest for the study at hand. Additionally, it is worth mentioning that the majority of works (around 30) were based on aerial systems (including spaceborne ones), showing the predominance of aerial systems in forestry, similarly to the domain of vision-based perception.

2.3. Multimodal Perception

In this section, the domain of forest multimodal perception is addressed, and the works that meet the formal search are presented.
Multimodal perception combines data from different kinds of sensors through a sensor fusion approach to attain richer, more robust, and more accurate perception systems. In this sense, multimodal sensing is likely to present a superior performance compared to unimodal sensing, demonstrating the relevance of this type of sensing for the forestry domain.
One of the main applications of multimodal perception systems in forests is the classification of vegetation and the distinction of tree species. Fusing aerial visible and hyperspectral or multispectral images with LiDAR data is the most common practice for classifying forestry vegetation. In [81], the authors proposed a data fusion system that combined aerial hyperspectral images with aerial LiDAR data to distinguish 23 classes, including 19 tree species, shadows, snags, and grassy areas. The authors tested three classifiers: support vector machines, Gaussian maximum likelihood with leave-one-out covariance, and k-nearest neighbours. The results showed that the best classifier was support vector machines; the system benefited from the addition of LiDAR, improving the classification accuracies in almost all classes; and the system attained accuracies over 90% for some classes. In [82], the authors studied sensor fusion approaches to perform species classification. Initially, the trees were detected in the CHM derived from the ALS data, and then the detected trees were distinguished among four classes using different combinations of 23 features provided by ALS data and coloured orthoimages. Seven classification methods were studied: decision trees, discriminant analyses, support vector machines, k-nearest neighbours, ensemble classifiers, neural networks, and random forests. Again, the use of ALS-based features proved to improve the overall accuracy. The authors recommended the use of quadratic support vector machines for tree species classification, as this method performed better than the others. A similar work was produced in [83], where the authors proposed a method that first performed ITC delineation in a LiDAR-derived CHM, followed by hyperspectral feature extraction in each segmented tree for further classification through two classifiers: random forests and a multiclass classifier. In this study, seven tree species were classified. The authors compared the use of all 118 bands against the use of only 20 optimal bands (obtained by minimum noise fraction transformation) in terms of classification performance. The results showed that using only 20 bands is beneficial, as it increases the overall accuracy of the two classifiers, and that the multiclass classifier is more robust with high-dimensional datasets composed of small sample sizes. Another similar study was made in [84]. In this work, the authors presented a classification algorithm for tree species based on CNNs. Firstly, the algorithm performs ITD by using the local maximum method over a LiDAR-derived CHM; then, the trees are cropped from aerial images into patches that are classified by a ResNet50 CNN into one of seven classes. A comparison of the CNN from the algorithm with a traditional method (random forest) and two other CNNs (ResNet18 and DenseNet121) was made, with ResNet50 outperforming the other methods. In addition, a study regarding the resolution of the patches was made, which concluded that the largest tested patch size generated better results. A study involving the classification of land use and land cover was made in [85], where the authors used a combination of satellite images with satellite Radio Detection And Ranging (RaDAR). In [86], the authors presented a study about the classification of the vertical structure of the forest.
They fused the information of aerial orthophotos (an orthophoto is an aerial photograph or satellite image geometrically corrected (orthorectified) such that the scale is uniform (https://en.wikipedia.org/w/index.php?title=Orthophoto&oldid=1020970836, accessed on 6 October 2021)) with aerial LiDAR data and used an ANN to produce the classifications. In [87], the authors also used CNNs, but instead of classifying trees, they wanted to detect them in fused data composed of aerial images and an ALS-derived DSM. They concatenated a DSM with a Normalised Difference Vegetation Index (NDVI) map and with a concatenation of red, green, and near-infrared features. Their goal was to use a single CNN to process such a combination of data, and for that, they used AlexNet. The results showed that the input data pair NDVI–DSM achieved the best results. With respect to tree detection, in 2005, a work about the detection of obstacles behind foliage using LiDAR and RaDAR [88] was published. The detection of occluded obstacles in forests is a major issue, as it is important that mobile platforms avoid crashing into other objects while traversing the forest. With this in mind, the authors in [88] were capable of detecting a tree trunk behind a maximum foliage thickness of 2.5 m. Some works related to detection in forests are focused on detecting terrain surfaces by means of LiDAR and vision-based data [89], while others, which also combined LiDAR data with images, focused on detecting roads instead [90].
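A minimal sketch of the kind of channel-level fusion described for [87] is given below: NDVI is computed from the red and near-infrared bands and stacked with a DSM into a raster that a CNN such as AlexNet could consume. The arrays are synthetic placeholders, not data from the reviewed work.

```python
# Sketch of NDVI + DSM channel fusion as a CNN-ready raster. NDVI uses the
# standard formula (NIR - Red) / (NIR + Red); all arrays are synthetic.
import numpy as np

def ndvi(nir, red, eps=1e-6):
    """Normalised Difference Vegetation Index, safe against division by zero."""
    return (nir - red) / (nir + red + eps)

h, w = 256, 256
red = np.random.rand(h, w).astype(np.float32)          # red reflectance band
nir = np.random.rand(h, w).astype(np.float32)          # near-infrared reflectance band
dsm = np.random.rand(h, w).astype(np.float32) * 30.0   # surface heights in metres

fused = np.stack([ndvi(nir, red), dsm], axis=0)  # shape (2, H, W), CNN-ready
print(fused.shape)
```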
There are application areas where multimodal perception is crucial. The estimation of biomass is an important process that can help predict the forest yield and its carbon cycle. Such an assessment can be made by means of combined LiDAR data and multispectral imagery [91]; combined multispectral and RaDAR imagery [92]; and combined inventory data, multispectral imagery, and RaDAR imagery [93]. Moreover, the estimation of the vegetation or canopy height is also a field where multimodal perception plays an important role. This kind of estimation can be made by combining aerial photogrammetry with LiDAR-derived point clouds [94] and by combining LiDAR data with multispectral optical data [95]. Other applications are aimed at: autonomous navigation in forests using sensor fusion of GNSS, IMUs, LiDAR, and cameras [96,97]; mapping the forest using a LiDAR and a camera mounted on a ground vehicle by means of a SLAM approach [98]; characterising the root and canopy structure of the forest by combining LiDAR-derived point clouds at ground level with Ground Penetrating RaDAR (GPR) [99]; and measuring forest structure parameters, such as average height, canopy openness, AGB, tree density, basal area, and number of species, by combining spaceborne RaDAR images with multispectral images [100].
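To illustrate the estimation models mentioned above in their simplest form, the sketch below fits a linear regression of plot-level AGB on two fused predictors (a LiDAR height percentile and a vegetation index). It is a generic, hedged example with synthetic data, not the model of any cited study.

```python
# Generic illustration of a plot-level AGB regression on predictors derived
# from fused sources (a LiDAR 95th-percentile height and a vegetation index).
# The data are synthetic; real studies calibrate such models on field plots.
import numpy as np

rng = np.random.default_rng(0)
n_plots = 40
h95 = rng.uniform(5, 30, n_plots)       # LiDAR 95th-percentile height (m)
vi = rng.uniform(0.3, 0.9, n_plots)     # multispectral vegetation index
agb = 8.0 * h95 + 60.0 * vi + rng.normal(0, 10, n_plots)  # synthetic AGB (t/ha)

X = np.column_stack([np.ones(n_plots), h95, vi])
coef, *_ = np.linalg.lstsq(X, agb, rcond=None)
print("intercept, height, index coefficients:", coef)
```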

2.4. Perception in Other Contexts

Multimodal or unimodal perception also plays an important role in other contexts. Digital and precision agriculture, military robotics, and disaster robotics are some of the areas where robots can be combined with advanced perception systems to enhance the robots’ knowledge of their surroundings in several tasks.
In agriculture, the introduction of digital and automated solutions in recent years has fostered the appearance of precision agriculture procedures that can be applied to farmers’ crops, increasing the production yield and decreasing the environmental impact of using fertilisers. With precision agriculture, the fertiliser application is performed at the right time, at the right place, and in the right amount, fulfilling the crop needs. With this in mind, several scientific works have appeared in recent years. The majority of them are about autonomous harvesting, where the fruit or vegetable must be detected and/or segmented prior to its picking [101,102,103,104,105,106,107,108,109]. The detection of vegetables or fruits is also important for counting them and estimating the production yield [110,111,112,113]. Similarly to forestry contexts, some works are about disease detection and monitoring [114,115], and others are focused on detecting woody trunks, weeds, and general obstacles in crops for navigation [116,117,118,119], operation purposes [120,121], and cleaning tasks [122,123]. Another application of perception systems in precision agriculture is characterising, monitoring, and phenotyping vegetative cultures using stereo vision [124,125], point clouds [126,127], satellite imagery [128], low-altitude aerial images [129], or multispectral imagery [130]. Along with these advances in terms of perception, several robotic platforms have appeared: for harvesting [101,102,104,105,106], for precise spraying [131], for plant counting [132], and for general agricultural tasks [133,134,135,136]. A topic that is being increasingly studied in the agricultural sector is localisation and the consequent autonomous navigation in crops. To achieve this, the robots can rely on topological maps for path planning [137,138], ground-based sensing [119,135,136], aerial-based sensing [139,140], and simulated sensing [141].
Regarding the military and disaster robotics domains, both share some of the perception issues that exist in forestry and agriculture, such as illumination changes, occlusions, and possible dust and fog. Several scientific advances have been made in these domains. With respect to the occurrence of disastrous events, some scientific solutions have appeared that use UAVs with cameras to perform surveillance of shipwreck survivors at sea [142], to search for people after an avalanche [143], and to detect objects and people in buildings after calamity events [144,145,146,147]. Other developments have been achieved related to inspecting bridges after disasters [148], rescuing people using a mobile robot similar to a crane [149], and scouting and counting fallen distribution line poles [150]. In a military context, perception plays an important role, as it helps to detect airports, airplanes, and ships from satellite images [151,152] and even from RaDAR images [153]. Some autonomous and semiautonomous systems have appeared that travel by air [154,155] or by land [156], relying on vision sensors and/or LiDARs. Additionally, work has been conducted in specific areas, namely: opening doors using a robotic arm and 3D vision for unmanned ground vehicles [157], detecting obstacles in adverse conditions (fog, smoke, rain, snow, and camouflage) using an ultra-wide-band RaDAR [158] or a spectral laser [159,160], fusing camera and LiDAR data to recognise and follow soldiers [161], avoiding obstacles using a 2D LiDAR [162], autonomously following roads and trails using a visual perception algorithm [163], and even multitarget detection and tracking of intruders [164].

2.5. Discussion

Normally, vision sensing is more informative than LiDAR sensing, since each image datum comprises up to four values (red, green, blue, and possibly depth), whereas each LiDAR datum has only two possible values: distance and intensity. Even so, for perception applications in forests, the combination of vision sensor(s) with LiDAR(s) is the favoured approach due to the existence of sharp illumination changes that can compromise the performance of cameras. Thus, even if the cameras temporarily fail, the system can continue operating using LiDAR-based perception. A relevant detail is that, when using multimodal data, the diverse nature of the measurements is expected to produce uncorrelated errors, interference, and noise. The expected consequence is that multimodal sensing is likely to be superior to unimodal sensing, and an adequate data fusion technique should improve the quality of the final perception given these uncorrelated limitations. Compared with a set of high-quality sensors of the same nature, several sensors of different natures are likely to show their limitations in distinct situations, instead of measuring the same persistent noise and interference with high accuracy. Admittedly, multimodal data come from diverse sensors, which makes the overall system more expensive.
Table 3 shows the most innovative and disruptive works regarding the categories mentioned in Section 2.1, Section 2.2 and Section 2.3.
In the category “Health and diseases”, the two clearly highlighted works performed disease detection in trees using only aerial images captured from UAVs. The work developed in [5] focused on detecting diseases in pinus trees using deep learning models to remove the noisy background from the UAV images (such as soils, roads, and rocks), followed by disease recognition using the AdaBoost algorithm. The authors went even further and, to expand their training dataset, used a GAN. After the study, they concluded that their method achieved superior results compared to state-of-the-art methods. The novelty of this article lies in the fact that the proposed method not only recognises diseased trees but also other forestry objects, which can be used to assess other forest parameters, such as LAI and the rockiness of the forest terrain. The use of a GAN to augment the dataset is a relevant point as well. The other work in this category was aimed at identifying sick fir trees [6]. The authors’ proposed method differs from the common methods by combining a DSM, for detecting treetops, with UAV images, to classify the detected treetops. For the classification, the authors made a benchmark involving 10 deep learning classifiers. Their method achieved better results compared to three state-of-the-art methods.
Three works from the category “Inventory and structure” are of great relevance. The work proposed in [11] aimed to use SfM, by means of a handheld camera, to measure some inventory characteristics (tree positions and DBH). The point cloud resulting from terrestrial SfM was then evaluated against a TLS-based point cloud, which proved that the SfM method is an accurate solution for deriving inventory parameters from image-based point clouds, an important breakthrough, as this type of work is normally performed by means of aerial or terrestrial LiDAR data. A similar work was proposed in [38], where the authors wanted to obtain biomass attributes, such as tree height, mean crown width, and AGB, but in this work, aerial LiDAR data were used. To achieve that, they proposed a method that performs ITC detection over a CHM derived from a LiDAR point cloud. The authors claim that their method is very accurate and efficient even in dense forests, where traditional methods tend to present a limited performance, hence the mention of this work in Table 3. The last work to be mentioned in this category was about estimating biomass using a spaceborne multimodal approach [93]. The authors used a combination of in-field inventory data reports with satellite images and satellite RaDAR to estimate and map forest biomass. The features were extracted from the data and served as input to two different estimation models. The interesting aspect of this article is the combination of forest inventory reports with remote sensing data to attain a low-cost method with a high level of reliability and efficiency.
Within the “Navigation” category, three relevant works were considered in Table 3. One is focused on performing autonomous flight with a UAV using only vision sensing [28]. In this work, the authors developed an aerial system that uses a vision-based DL method to detect obstacles and then performs evasion manoeuvres. The results of this work are impressive and are the reason why it was chosen: out of 100 flights carried out in a simulated environment, the success rate was 85%, while in a real environment the success rate was 100% over 10 flights. Another remarkable work was developed in [74]. The aim of this work was to perform rubber-tapping autonomously. For that, the authors used a caterpillar robot with a gyroscope and a LiDAR mounted on it. The robotic system was capable of walking along one row at a fixed lateral distance and then turning from one row into another, while performing rubber-tapping automatically. Moreover, the system collected forest information and mapped the forest during the operation. These developments constitute a tremendous breakthrough, as this is one of the first works to implement an autonomous system performing a forestry operation without human interaction. Back in 2010, another work was published about autonomous navigation using a quadruped robot with a sensing system composed of a LiDAR, a stereo camera, a GNSS receiver, and an IMU [97]. The robot was tested in a forest environment, where it successfully completed 23 out of 26 autonomous runs and even managed to travel more than 130 m in one of them.
The last category covered in this review is “Species classification”. In this category, two works based on multimodal sensing for the classification of vegetation were considered. In one of them, the authors combined hyperspectral images with LiDAR data to distinguish 23 classes, of which 21 were vegetative, and benchmarked three classifiers [81]. In the other work, the authors developed a method that performs ITD over an ALS-derived CHM, and the detected tree crowns were then classified into four classes. The classification was performed by combining 23 features from ALS data and orthoimages [82]. These two works were selected because they covered a considerable number of classes, tested several combinations of multimodal features as input for classification, and benchmarked state-of-the-art methods for classifying forestry vegetation.
Figure 1 presents the distribution by year of the works studied and reviewed in this article.
From Figure 1, it can easily be concluded that the majority of works (around 70%) are from 2017 onwards. This not only means that there is a growing interest in developing technology for forestry but also that, in the last four years, there have been technological developments with a higher impact on the forestry domain.
The categories presented in Section 2.1, Section 2.2 and Section 2.3 are detailed in Figure 2 according to their coverage in this article.
Undoubtedly, the category with the greatest presence in this article (more than 50%) is “Inventory and structure”. The dominance of this category over the others can be explained by the fact that the works it embraces are mostly about forest characterisation through the extraction of vegetation parameters and biomass estimation (using vision and/or LiDAR), which are the most common lines of work to estimate important socioeconomic variables, such as forest yield, carbon stock, and wildfire risk. The second category with the most coverage in this article is “Navigation”. This denotes that an increasing effort is being made towards automation systems capable of navigating and performing autonomous tasks in forests. This is crucial, and it is expected that in the upcoming years a larger number of autonomous robotic solutions may appear, since the lack of manpower is a constant issue in forestry, for both manual and machine work. The third category is “Species classification/detection”, a relevant application domain that allows one to detect invasive species and avoid future implications for the biodynamics of forestry areas.
Another aspect that must be discussed in this article is the applicability and adequacy of sensor platforms for performing specific forestry operations. Table 4 presents an overview of different sensor platform types and their characteristics regarding area coverage, data resolution, and whether a certain platform is suited to real-time operation or not.
From Table 4, one can verify that the sensor platforms that should be used for collecting data from large forest areas are the airborne ones (spacecraft, aircraft, and UAVs), as they can cover more terrain than ground vehicles in less time. Ground vehicles and UAVs are the platforms to be employed to achieve highly precise data and real-time operation. However, ground vehicles are mostly preferred over UAVs, since they typically have more energy autonomy and support much larger payloads. Such characteristics are ideal for performing forestry tasks, which usually involve spending several hours in the terrain and carrying large amounts of weight. Nonetheless, ground vehicles/robots require advanced perception systems. To develop and test these systems, more datasets are needed.

3. Perception Datasets for Forestry

The existence of sufficient data is crucial for further developing multimodal perception in forests. Therefore, this section introduces our dataset as a contribution of this paper and also emphasises existing datasets within this field.

3.1. Proposed Dataset

The dataset proposed in this work is called the QuintaRei Forest Multimodal Dataset (QuintaReiFMD) and was acquired in a eucalyptus forest located in Valongo (Portugal) using a robotic platform named AgRob V16, which is presented in Figure 3.
The dataset is available in the Robot Operating System (ROS) format, is made up of nine rosbags, and includes visible, thermal, and depth images, as well as point clouds. The dataset was recorded during the navigation of the robot (manually controlled using a remote controller) in the forest, on flat and also steep terrain, at a maximum velocity of 0.5 m/s. These data were collected by means of different sensors mounted on the front of AgRob V16: a ZED stereo camera (https://www.stereolabs.com/zed, accessed on 24 September 2021), pointing forward, mounted 96 cm above the ground and tilted by 10 degrees, was used to acquire visible and depth images; a FLIR M232 camera (https://www.flir.eu/products/m232, accessed on 24 September 2021), pointing forward and mounted 70 cm above the ground with no tilt, was used to capture thermal images; an OAK-D camera (https://store.opencv.ai/products/oak-d, accessed on 24 September 2021), pointing to the left of the robot, mounted 96 cm above the ground and tilted by 10 degrees, was used to collect visible and depth images; and a Velodyne Puck LiDAR (https://velodynelidar.com/products/puck, accessed on 24 September 2021), mounted 100 cm above the ground, was used to acquire point clouds. These sensors are also presented in Figure 3 with coloured annotations. The dataset is publicly available at https://doi.org/10.5281/zenodo.5045354 (accessed on 24 September 2021), and a partial description of it is presented in Table 5, where the data types, data resolution, frame rate, Field Of View (FOV), and number of messages associated with each sensor are detailed.
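For readers who want to inspect the rosbags, the sketch below lists the topics of one bag and iterates over a point-cloud topic using the ROS 1 Python API. The bag file name and the topic name are placeholders (assumptions), since the actual topic list should be obtained with `rosbag info` or `get_type_and_topic_info()`.

```python
# Minimal sketch for inspecting one of the QuintaReiFMD rosbags with the ROS 1
# Python API. File and topic names are hypothetical placeholders.
import rosbag

with rosbag.Bag("quintarei_run1.bag") as bag:  # hypothetical file name
    info = bag.get_type_and_topic_info()
    for topic, meta in info.topics.items():
        print(topic, meta.msg_type, meta.message_count)

    # Iterate over one topic (the name is an assumption, not from the paper).
    for topic, msg, t in bag.read_messages(topics=["/velodyne_points"]):
        print(t.to_sec(), topic, type(msg).__name__)
        break  # just show the first message
```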

3.2. Publicly Available Datasets in the Literature

Other publicly available datasets were found in the literature. In [34], the authors built a unimodal dataset of laser scanning data (available at https://www.newfor.net/download-newfor-single-tree-detection-benchmark-dataset, accessed on 24 September 2021) to perform a tree detection benchmark. In [165], the authors constructed a dataset of low-viewpoint colour and depth images (available at https://doi.org/10.5281/zenodo.3690210, accessed on 24 September 2021) to enhance the intelligence of smaller robots, possibly achieving autonomous navigation in forests. In [31], the authors built a dataset of manually annotated visible and thermal images (available at https://doi.org/10.5281/zenodo.5213824, accessed on 24 September 2021) to perform trunk detection and enhance robot awareness in the forest. In [166], the authors presented a multimodal dataset of laser scans and colour and greyscale images (available at http://autonomy.cs.sfu.ca/sfu-mountain-dataset, accessed on 24 September 2021), whose data correspond to eight hours of trail navigation. In [167], the authors produced a dataset composed of colour images (available at https://etsin.fairdata.fi/dataset/06926f4b-b36a-4d6e-873c-aa3e7d84ab49, accessed on 24 September 2021) for forestry operations in general. Lastly, in [168], the authors proposed two multimodal datasets of laser scans and thermal images (available as DS_AG_34 and DS_AG_35 at https://doi.org/10.5281/zenodo.5357238, accessed on 12 October 2021) for forestry robotics, and they used the datasets to perform a SLAM benchmark. Table 6 summarises and describes the aforementioned datasets and our dataset. All datasets were acquired at ground level.
From Table 6, it can be seen that our dataset complements the other existing datasets, as it contains laser scans and three different image types (more than any other) altogether, enabling the development of more forestry applications, possibly in real time, during day and night, using multimodal data.

4. Conclusions

Perception in forests is of utmost interest, since the combination of perception systems with robotics and machinery can enable a smarter, more precise, and more sustainable forestry. In this sense, this work presents a formal review of several scientific articles on forestry applications and operations that rely on perceiving the forest environment.
This work reviewed unimodal and multimodal perception in forest environments. Additionally, this work contributes to the enrichment of multimodal data in forests by providing a public dataset composed of LiDAR data and three different types of imagery: visible, thermal, and depth. This dataset is more complete than any other, as it includes four different types of sensor data (refer to Table 6).
Regarding unimodal sensing, the most common sensors are vision and LiDAR. Multimodal sensing takes advantage of a set of data coming from vision, LiDAR, and RaDAR. The most common usages for perception are divided into categories, such as health and diseases, inventory, and navigation.
Processing can be performed online, in real time, onboard a given vehicle, or offline, to reach conclusions after the mission that collected the data. From the literature review made in this article, the perception trends in forestry environments can be detailed. Vision-based perception is mainly used with aerial vehicles and in offline tasks, such as detecting diseases in vegetation and assessing the forest yield from its inventory; LiDAR-based perception is mostly used with aerial vehicles (sometimes even spaceborne ones), and its data are most of the time processed offline for biomass estimation and structure measurement purposes; and multimodal perception is especially focused on offline operations, such as detecting and distinguishing vegetation species from aerial imagery and laser scanning systems, estimating biomass using multispectral and hyperspectral images with LiDAR and RaDAR data, and measuring the forest canopy.
In the coming years, perception trends in forests should focus on ground-based systems that perform forestry operations in real time, relying on visual perception and LiDAR perception alone and/or on a fusion of the two. Advances in these topics can enable further technological developments in forestry, including fully unmanned navigation for monitoring and the autonomous execution of operations such as cleaning, pruning, fertilising, and planting.

Author Contributions

Conceptualisation, D.Q.d.S. and F.N.d.S.; data curation, D.Q.d.S.; formal analysis, D.Q.d.S., F.N.d.S., A.J.S., V.F. and J.B.-C.; investigation, D.Q.d.S.; methodology, D.Q.d.S.; resources, D.Q.d.S.; supervision, F.N.d.S., A.J.S., V.F. and J.B.-C.; validation, D.Q.d.S., F.N.d.S., A.J.S., V.F. and J.B.-C.; visualisation, D.Q.d.S.; writing—original draft preparation, D.Q.d.S.; writing—review and editing, D.Q.d.S., F.N.d.S., A.J.S., V.F. and J.B.-C. All authors have read and agreed to the published version of the manuscript.

Funding

This work is cofinanced by the ERDF—European Regional Development Fund through the Operational Programme for Competitiveness and Internationalisation—COMPETE 2020 under the PORTUGAL 2020 Partnership Agreement, as a part of project «Project Replant—POCI-01-0247-FEDER-046081».

Data Availability Statement

The dataset presented in this work is publicly available at https://doi.org/10.5281/zenodo.5045354, accessed on 22 September 2021.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
AGB	Above-Ground Biomass
ALS	Airborne Laser Scanning
ANN	Artificial Neural Network
CHM	Canopy Height Model
CNN	Convolutional Neural Network
DBH	Diameter at Breast Height
DL	Deep Learning
DSM	Digital Surface Model
FOV	Field Of View
GAN	Generative Adversarial Network
GNSS	Global Navigation Satellite Systems
GPR	Ground Penetrating RaDAR
IMU	Inertial Measurement Unit
INS	Inertial Navigation System
ITC	Individual Tree Crown
ITD	Individual Tree Detection
KNN	K-Nearest Neighbours
LAI	Leaf Area Index
LiDAR	Light Detection And Ranging
NDVI	Normalised Difference Vegetation Index
QuintaReiFMD	QuintaRei Forest Multimodal Dataset
RaDAR	Radio Detection And Ranging
ROS	Robot Operating System
SfM	Structure from Motion
SLAM	Simultaneous Localisation and Mapping
SSD	Single-Shot MultiBox Detector
TLS	Terrestrial Laser Scanning
UAV	Unmanned Aerial Vehicle

References

  1. Talbot, B.; Pierzchała, M.; Astrup, R. Applications of Remote and Proximal Sensing for Improved Precision in Forest Operations. Croat. J. For. Eng. 2017, 38, 327–336. [Google Scholar]
  2. Billingsley, J.; Visala, A.; Dunn, M. Robotics in Agriculture and Forestry. In Springer Handbook of Robotics; Springer: Berlin/Heidelberg, Germany, 2008; pp. 1065–1077. [Google Scholar] [CrossRef] [Green Version]
  3. Oliveira, L.F.P.; Moreira, A.P.; Silva, M.F. Advances in Forest Robotics: A State-of-the-Art Survey. Robotics 2021, 10, 53. [Google Scholar] [CrossRef]
  4. Wu, B.; Liang, A.; Zhang, H.; Zhu, T.; Zou, Z.; Yang, D.; Tang, W.; Li, J.; Su, J. Application of conventional UAV-based high-throughput object detection to the early diagnosis of pine wilt disease by deep learning. For. Ecol. Manag. 2021, 486, 118986. [Google Scholar] [CrossRef]
  5. Hu, G.; Yin, C.; Wan, M.; Zhang, Y.; Fang, Y. Recognition of diseased Pinus trees in UAV images using deep learning and AdaBoost classifier. Biosyst. Eng. 2020, 194, 138–151. [Google Scholar] [CrossRef]
  6. Nguyen, H.T.; Lopez Caceres, M.L.; Moritake, K.; Kentsch, S.; Shu, H.; Diez, Y. Individual Sick Fir Tree (Abies mariesii) Identification in Insect Infested Forests by Means of UAV Images and Deep Learning. Remote Sens. 2021, 13, 260. [Google Scholar] [CrossRef]
  7. Chiang, C.Y.; Barnes, C.; Angelov, P.; Jiang, R. Deep Learning-Based Automated Forest Health Diagnosis From Aerial Images. IEEE Access 2020, 8, 144064–144076. [Google Scholar] [CrossRef]
  8. Barmpoutis, P.; Stathaki, T.; Kamperidou, V. Monitoring of Trees’ Health Condition Using a UAV Equipped with Low-cost Digital Camera. In Proceedings of the ICASSP 2019—2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Brighton, UK, 12–17 May 2019; pp. 8291–8295. [Google Scholar] [CrossRef]
  9. Culman, M.; Delalieux, S.; Tricht, K.V. Palm Tree Inventory From Aerial Images Using Retinanet. In Proceedings of the 2020 Mediterranean and Middle-East Geoscience and Remote Sensing Symposium (M2GARSS), Tunis, Tunisia, 9–11 March 2020; pp. 314–317. [Google Scholar] [CrossRef]
  10. Culman, M.; Delalieux, S.; Van Tricht, K. Individual Palm Tree Detection Using Deep Learning on RGB Imagery to Support Tree Inventory. Remote Sens. 2020, 12, 3476. [Google Scholar] [CrossRef]
  11. Piermattei, L.; Karel, W.; Wang, D.; Wieser, M.; Mokroš, M.; Surový, P.; Koreň, M.; Tomaštík, J.; Pfeifer, N.; Hollaus, M. Terrestrial Structure from Motion Photogrammetry for Deriving Forest Inventory Data. Remote Sens. 2019, 11, 950. [Google Scholar] [CrossRef] [Green Version]
  12. Liu, J.; Feng, Z.; Yang, L.; Mannan, A.; Khan, T.U.; Zhao, Z.; Cheng, Z. Extraction of Sample Plot Parameters from 3D Point Cloud Reconstruction Based on Combined RTK and CCD Continuous Photography. Remote Sens. 2018, 10, 1299. [Google Scholar] [CrossRef] [Green Version]
  13. Hentz, A.M.K.; Silva, C.A.; Dalla Corte, A.P.; Netto, S.P.; Strager, M.P.; Klauberg, C. Estimating forest uniformity in Eucalyptus spp. and Pinus taeda L. stands using field measurements and structure from motion point clouds generated from unmanned aerial vehicle (UAV) data collection. For. Syst. 2018, 27, 005. [Google Scholar] [CrossRef]
  14. Lou, X.; Huang, Y.; Fang, L.; Huang, S.; Gao, H.; Yang, L.; Weng, Y.; Hung, I.K. Measuring loblolly pine crowns with drone imagery through deep learning. J. For. Res. 2021. [Google Scholar] [CrossRef]
  15. Tianyang, D.; Jian, Z.; Sibin, G.; Ying, S.; Jing, F. Single-Tree Detection in High-Resolution Remote-Sensing Images Based on a Cascade Neural Network. ISPRS Int. J. Geo-Inf. 2018, 7, 367. [Google Scholar] [CrossRef] [Green Version]
  16. Hirschmugl, M.; Ofner, M.; Raggam, J.; Schardt, M. Single tree detection in very high resolution remote sensing data. Remote Sens. Environ. 2007, 110, 533–544. [Google Scholar] [CrossRef]
  17. Ferreira, M.P.; de Almeida, D.R.A.; de Almeida Papa, D.; Minervino, J.B.S.; Veras, H.F.P.; Formighieri, A.; Santos, C.A.N.; Ferreira, M.A.D.; Figueiredo, E.O.; Ferreira, E.J.L. Individual tree detection and species classification of Amazonian palms using UAV images and deep learning. For. Ecol. Manag. 2020, 475, 118397. [Google Scholar] [CrossRef]
  18. Daliman, S.; Abu-Bakar, S.A.R.; Azam, S.H.M.N. Development of young oil palm tree recognition using Haar- based rectangular windows. IOP Conf. Ser. Earth Environ. Sci. 2016, 37, 012041. [Google Scholar] [CrossRef] [Green Version]
  19. Li, W.; Fu, H.; Yu, L. Deep convolutional neural network based large-scale oil palm tree detection for high-resolution remote sensing images. In Proceedings of the 2017 IEEE International Geoscience and Remote Sensing Symposium (IGARSS), Fort Worth, TX, USA, 23–28 July 2017; pp. 846–849. [Google Scholar] [CrossRef]
  20. Pulido, D.; Salas, J.; Rös, M.; Puettmann, K.; Karaman, S. Assessment of Tree Detection Methods in Multispectral Aerial Images. Remote Sens. 2020, 12, 2379. [Google Scholar] [CrossRef]
  21. Fujimoto, A.; Haga, C.; Matsui, T.; Machimura, T.; Hayashi, K.; Sugita, S.; Takagi, H. An End to End Process Development for UAV-SfM Based Forest Monitoring: Individual Tree Detection, Species Classification and Carbon Dynamics Simulation. Forests 2019, 10, 680. [Google Scholar] [CrossRef] [Green Version]
  22. Roslan, Z.; Long, Z.A.; Ismail, R. Individual Tree Crown Detection using GAN and RetinaNet on Tropical Forest. In Proceedings of the 2021 15th International Conference on Ubiquitous Information Management and Communication (IMCOM), Seoul, Korea, 4–6 January 2021; pp. 1–7. [Google Scholar] [CrossRef]
  23. Ghorbanian, A.; Zaghian, S.; Asiyabi, R.M.; Amani, M.; Mohammadzadeh, A.; Jamali, S. Mangrove Ecosystem Mapping Using Sentinel-1 and Sentinel-2 Satellite Images and Random Forest Algorithm in Google Earth Engine. Remote Sens. 2021, 13, 2565. [Google Scholar] [CrossRef]
  24. Ali, W.; Georgsson, F.; Hellstrom, T. Visual tree detection for autonomous navigation in forest environment. In Proceedings of the 2008 IEEE Intelligent Vehicles Symposium, Eindhoven, The Netherlands, 4–6 June 2008; pp. 560–565. [Google Scholar] [CrossRef] [Green Version]
  25. Inoue, K.; Kaizu, Y.; Igarashi, S.; Imou, K. The development of autonomous navigation and obstacle avoidance for a robotic mower using machine vision technique. IFAC-PapersOnLine 2019, 52, 173–177. [Google Scholar] [CrossRef]
  26. Zhilenkov, A.A.; Epifantsev, I.R. System of autonomous navigation of the drone in difficult conditions of the forest trails. In Proceedings of the 2018 IEEE Conference of Russian Young Researchers in Electrical and Electronic Engineering (EIConRus), Moscow and St. Petersburg, Russia, 29 January–1 February 2018; pp. 1036–1039. [Google Scholar] [CrossRef]
  27. Mannar, S.; Thummalapeta, M.; Saksena, S.K.; Omkar, S. Vision-based Control for Aerial Obstacle Avoidance in Forest Environments. IFAC-PapersOnLine 2018, 51, 480–485. [Google Scholar] [CrossRef]
  28. Dionisio-Ortega, S.; Rojas-Perez, L.O.; Martinez-Carranza, J.; Cruz-Vega, I. A deep learning approach towards autonomous flight in forest environments. In Proceedings of the 2018 International Conference on Electronics, Communications and Computers (CONIELECOMP), Cholula, Mexico, 21–23 February 2018; pp. 139–144. [Google Scholar] [CrossRef]
  29. Itakura, K.; Hosoi, F. Automatic Tree Detection from Three-Dimensional Images Reconstructed from 360° Spherical Camera Using YOLO v2. Remote Sens. 2020, 12, 988. [Google Scholar] [CrossRef] [Green Version]
  30. Xie, Q.; Li, D.; Yu, Z.; Zhou, J.; Wang, J. Detecting Trees in Street Images via Deep Learning With Attention Module. IEEE Trans. Instrum. Meas. 2020, 69, 5395–5406. [Google Scholar] [CrossRef]
  31. da Silva, D.Q.; dos Santos, F.N.; Sousa, A.J.; Filipe, V. Visible and Thermal Image-Based Trunk Detection with Deep Learning for Forestry Mobile Robotics. J. Imaging 2021, 7, 176. [Google Scholar] [CrossRef] [PubMed]
  32. Li, S.; Lideskog, H. Implementation of a System for Real-Time Detection and Localization of Terrain Objects on Harvested Forest Land. Forests 2021, 12, 1142. [Google Scholar] [CrossRef]
  33. Yu, X.; Litkey, P.; Hyyppä, J.; Holopainen, M.; Vastaranta, M. Assessment of Low Density Full-Waveform Airborne Laser Scanning for Individual Tree Detection and Tree Species Classification. Forests 2014, 5, 1011–1031. [Google Scholar] [CrossRef] [Green Version]
  34. Eysn, L.; Hollaus, M.; Lindberg, E.; Berger, F.; Monnet, J.M.; Dalponte, M.; Kobal, M.; Pellegrini, M.; Lingua, E.; Mongus, D.; et al. A Benchmark of Lidar-Based Single Tree Detection Methods Using Heterogeneous Forest Data from the Alpine Space. Forests 2015, 6, 1721–1747. [Google Scholar] [CrossRef] [Green Version]
  35. Fernández-Álvarez, M.; Armesto, J.; Picos, J. LiDAR-Based Wildfire Prevention in WUI: The Automatic Detection, Measurement and Evaluation of Forest Fuels. Forests 2019, 10, 148. [Google Scholar] [CrossRef] [Green Version]
  36. Windrim, L.; Bryson, M. Detection, Segmentation, and Model Fitting of Individual Tree Stems from Airborne Laser Scanning of Forests Using Deep Learning. Remote Sens. 2020, 12, 1469. [Google Scholar] [CrossRef]
  37. Windrim, L.; Bryson, M. Forest Tree Detection and Segmentation using High Resolution Airborne LiDAR. In Proceedings of the 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Macau, 4–8 November 2019; pp. 3898–3904. [Google Scholar] [CrossRef] [Green Version]
  38. Wan Mohd Jaafar, W.S.; Woodhouse, I.H.; Silva, C.A.; Omar, H.; Abdul Maulud, K.N.; Hudak, A.T.; Klauberg, C.; Cardil, A.; Mohan, M. Improving Individual Tree Crown Delineation and Attributes Estimation of Tropical Forests Using Airborne LiDAR Data. Forests 2018, 9, 759. [Google Scholar] [CrossRef] [Green Version]
  39. Chen, X.; Jiang, K.; Zhu, Y.; Wang, X.; Yun, T. Individual Tree Crown Segmentation Directly from UAV-Borne LiDAR Data Using the PointNet of Deep Learning. Forests 2021, 12, 131. [Google Scholar] [CrossRef]
  40. Qi, C.R.; Su, H.; Mo, K.; Guibas, L.J. PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017. [Google Scholar]
  41. Dong, T.; Zhou, Q.; Gao, S.; Shen, Y. Automatic Detection of Single Trees in Airborne Laser Scanning Data through Gradient Orientation Clustering. Forests 2018, 9, 291. [Google Scholar] [CrossRef] [Green Version]
  42. Dersch, S.; Heurich, M.; Krueger, N.; Krzystek, P. Combining graph-cut clustering with object-based stem detection for tree segmentation in highly dense airborne lidar point clouds. ISPRS J. Photogramm. Remote Sens. 2021, 172, 207–222. [Google Scholar] [CrossRef]
  43. Burt, A.; Disney, M.; Calders, K. Extracting individual trees from lidar point clouds using treeseg. Methods Ecol. Evol. 2019, 10, 438–445. [Google Scholar] [CrossRef] [Green Version]
  44. Dai, W.; Yang, B.; Dong, Z.; Shaker, A. A new method for 3D individual tree extraction using multispectral airborne LiDAR point clouds. ISPRS J. Photogramm. Remote Sens. 2018, 144, 400–411. [Google Scholar] [CrossRef]
  45. Ayrey, E.; Fraver, S.; Kershaw, J.A., Jr.; Kenefic, L.S.; Hayes, D.; Weiskittel, A.R.; Roth, B.E. Layer Stacking: A Novel Algorithm for Individual Forest Tree Segmentation from LiDAR Point Clouds. Can. J. Remote Sens. 2017, 43, 16–27. [Google Scholar] [CrossRef]
  46. Lefsky, M.A.; Harding, D.J.; Keller, M.; Cohen, W.B.; Carabajal, C.C.; Del Bom Espirito-Santo, F.; Hunter, M.O.; de Oliveira, R., Jr. Estimates of forest canopy height and aboveground biomass using ICESat. Geophys. Res. Lett. 2005, 32. [Google Scholar] [CrossRef] [Green Version]
  47. Popescu, S.C. Estimating biomass of individual pine trees using airborne lidar. Biomass Bioenergy 2007, 31, 646–655. [Google Scholar] [CrossRef]
  48. Calders, K.; Newnham, G.; Burt, A.; Murphy, S.; Raumonen, P.; Herold, M.; Culvenor, D.; Avitabile, V.; Disney, M.; Armston, J.; et al. Nondestructive estimates of above-ground biomass using terrestrial laser scanning. Methods Ecol. Evol. 2015, 6, 198–208. [Google Scholar] [CrossRef]
  49. Dalla Corte, A.P.; Rex, F.E.; Almeida, D.R.A.D.; Sanquetta, C.R.; Silva, C.A.; Moura, M.M.; Wilkinson, B.; Zambrano, A.M.A.; Cunha Neto, E.M.D.; Veras, H.F.P.; et al. Measuring Individual Tree Diameter and Height Using GatorEye High-Density UAV-Lidar in an Integrated Crop-Livestock-Forest System. Remote Sens. 2020, 12, 863. [Google Scholar] [CrossRef] [Green Version]
  50. Ayrey, E.; Hayes, D.J. The Use of Three-Dimensional Convolutional Neural Networks to Interpret LiDAR for Forest Inventory. Remote Sens. 2018, 10, 649. [Google Scholar] [CrossRef] [Green Version]
  51. Saatchi, S.S.; Harris, N.L.; Brown, S.; Lefsky, M.; Mitchard, E.T.A.; Salas, W.; Zutta, B.R.; Buermann, W.; Lewis, S.L.; Hagen, S.; et al. Benchmark map of forest carbon stocks in tropical regions across three continents. Proc. Natl. Acad. Sci. USA 2011, 108, 9899–9904. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  52. Drake, J.B.; Dubayah, R.O.; Clark, D.B.; Knox, R.G.; Blair, J.; Hofton, M.A.; Chazdon, R.L.; Weishampel, J.F.; Prince, S. Estimation of tropical forest structural characteristics using large-footprint lidar. Remote Sens. Environ. 2002, 79, 305–319. [Google Scholar] [CrossRef]
  53. Gonzalez de Tanago, J.; Lau, A.; Bartholomeus, H.; Herold, M.; Avitabile, V.; Raumonen, P.; Martius, C.; Goodman, R.C.; Disney, M.; Manuri, S.; et al. Estimation of above-ground biomass of large tropical trees with terrestrial LiDAR. Methods Ecol. Evol. 2018, 9, 223–234. [Google Scholar] [CrossRef] [Green Version]
  54. Matasci, G.; Hermosilla, T.; Wulder, M.A.; White, J.C.; Coops, N.C.; Hobart, G.W.; Zald, H.S. Large-area mapping of Canadian boreal forest cover, height, biomass and other structural attributes using Landsat composites and lidar plots. Remote Sens. Environ. 2018, 209, 90–106. [Google Scholar] [CrossRef]
  55. Asner, G.P.; Brodrick, P.G.; Philipson, C.; Vaughn, N.R.; Martin, R.E.; Knapp, D.E.; Heckler, J.; Evans, L.J.; Jucker, T.; Goossens, B.; et al. Mapped aboveground carbon stocks to advance forest conservation and recovery in Malaysian Borneo. Biol. Conserv. 2018, 217, 289–310. [Google Scholar] [CrossRef]
  56. Stovall, A.E.; Vorster, A.G.; Anderson, R.S.; Evangelista, P.H.; Shugart, H.H. Non-destructive aboveground biomass estimation of coniferous trees using terrestrial LiDAR. Remote Sens. Environ. 2017, 200, 31–42. [Google Scholar] [CrossRef]
  57. Quegan, S.; Le Toan, T.; Chave, J.; Dall, J.; Exbrayat, J.F.; Minh, D.H.T.; Lomas, M.; D’Alessandro, M.M.; Paillou, P.; Papathanassiou, K.; et al. The European Space Agency BIOMASS mission: Measuring forest above-ground biomass from space. Remote Sens. Environ. 2019, 227, 44–60. [Google Scholar] [CrossRef] [Green Version]
  58. Korhonen, L.; Korpela, I.; Heiskanen, J.; Maltamo, M. Airborne discrete-return LIDAR data in the estimation of vertical canopy cover, angular canopy closure and leaf area index. Remote Sens. Environ. 2011, 115, 1065–1080. [Google Scholar] [CrossRef]
  59. Riaño, D.; Valladares, F.; Condés, S.; Chuvieco, E. Estimation of leaf area index and covered ground from airborne laser scanner (Lidar) in two contrasting forests. Agric. For. Meteorol. 2004, 124, 269–275. [Google Scholar] [CrossRef]
  60. Zhu, X.; Skidmore, A.K.; Darvishzadeh, R.; Niemann, K.O.; Liu, J.; Shi, Y.; Wang, T. Foliar and woody materials discriminated using terrestrial LiDAR in a mixed natural forest. Int. J. Appl. Earth Obs. Geoinf. 2018, 64, 43–50. [Google Scholar] [CrossRef]
  61. Zörner, J.; Dymond, J.R.; Shepherd, J.D.; Wiser, S.K.; Jolly, B. LiDAR-Based Regional Inventory of Tall Trees—Wellington, New Zealand. Forests 2018, 9, 702. [Google Scholar] [CrossRef] [Green Version]
  62. Wallace, L.; Lucieer, A.; Watson, C.; Turner, D. Development of a UAV-LiDAR System with Application to Forest Inventory. Remote Sens. 2012, 4, 1519–1543. [Google Scholar] [CrossRef] [Green Version]
  63. Andersen, H.E.; McGaughey, R.J.; Reutebuch, S.E. Estimating forest canopy fuel parameters using LIDAR data. Remote Sens. Environ. 2005, 94, 441–449. [Google Scholar] [CrossRef]
  64. Popescu, S.C.; Wynne, R.H.; Nelson, R.F. Estimating plot-level tree heights with lidar: Local filtering with a canopy-height based variable window size. Comput. Electron. Agric. 2002, 37, 71–95. [Google Scholar] [CrossRef]
  65. Simard, M.; Pinto, N.; Fisher, J.B.; Baccini, A. Mapping forest canopy height globally with spaceborne lidar. J. Geophys. Res. Biogeosciences 2011, 116. [Google Scholar] [CrossRef] [Green Version]
  66. Peng, X.; Li, X.; Wang, C.; Zhu, J.; Liang, L.; Fu, H.; Du, Y.; Yang, Z.; Xie, Q. SPICE-Based SAR Tomography over Forest Areas Using a Small Number of P-Band Airborne F-SAR Images Characterized by Non-Uniformly Distributed Baselines. Remote Sens. 2019, 11, 975. [Google Scholar] [CrossRef] [Green Version]
  67. Mlambo, R.; Woodhouse, I.H.; Gerard, F.; Anderson, K. Structure from Motion (SfM) Photogrammetry with Drone Data: A Low Cost Method for Monitoring Greenhouse Gas Emissions from Forests in Developing Countries. Forests 2017, 8, 68. [Google Scholar] [CrossRef] [Green Version]
  68. Zhao, K.; Suarez, J.C.; Garcia, M.; Hu, T.; Wang, C.; Londo, A. Utility of multitemporal lidar for forest and carbon monitoring: Tree growth, biomass dynamics, and carbon flux. Remote Sens. Environ. 2018, 204, 883–897. [Google Scholar] [CrossRef]
  69. Popescu, S.C.; Wynne, R.H.; Nelson, R.F. Measuring individual tree crown diameter with lidar and assessing its influence on estimating forest volume and biomass. Can. J. Remote Sens. 2003, 29, 564–577. [Google Scholar] [CrossRef]
  70. Hudak, A.T.; Crookston, N.L.; Evans, J.S.; Hall, D.E.; Falkowski, M.J. Nearest neighbor imputation of species-level, plot-scale forest structure attributes from LiDAR data. Remote Sens. Environ. 2008, 112, 2232–2245. [Google Scholar] [CrossRef] [Green Version]
  71. Antonarakis, A.; Richards, K.; Brasington, J. Object-based land cover classification using airborne LiDAR. Remote Sens. Environ. 2008, 112, 2988–2998. [Google Scholar] [CrossRef]
  72. Coomes, D.A.; Dalponte, M.; Jucker, T.; Asner, G.P.; Banin, L.F.; Burslem, D.F.; Lewis, S.L.; Nilus, R.; Phillips, O.L.; Phua, M.H.; et al. Area-based vs tree-centric approaches to mapping forest carbon in Southeast Asian forests from airborne laser scanning data. Remote Sens. Environ. 2017, 194, 77–88. [Google Scholar] [CrossRef] [Green Version]
  73. Chiella, A.C.B.; Machado, H.N.; Teixeira, B.O.S.; Pereira, G.A.S. GNSS/LiDAR-Based Navigation of an Aerial Robot in Sparse Forests. Sensors 2019, 19, 4061. [Google Scholar] [CrossRef] [Green Version]
  74. Zhang, C.; Yong, L.; Chen, Y.; Zhang, S.; Ge, L.; Wang, S.; Li, W. A Rubber-Tapping Robot Forest Navigation and Information Collection System Based on 2D LiDAR and a Gyroscope. Sensors 2019, 19, 2136. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  75. Lu, L.; Yunda, A.; Carrio, A.; Campoy, P. Robust autonomous flight in cluttered environment using a depth sensor. Int. J. Micro Air Veh. 2020, 12, 1756829320924528. [Google Scholar] [CrossRef]
  76. Qian, C.; Liu, H.; Tang, J.; Chen, Y.; Kaartinen, H.; Kukko, A.; Zhu, L.; Liang, X.; Chen, L.; Hyyppä, J. An Integrated GNSS/INS/LiDAR-SLAM Positioning Method for Highly Accurate Forest Stem Mapping. Remote Sens. 2017, 9, 3. [Google Scholar] [CrossRef] [Green Version]
  77. Tang, J.; Chen, Y.; Niu, X.; Wang, L.; Chen, L.; Liu, J.; Shi, C.; Hyyppä, J. LiDAR Scan Matching Aided Inertial Navigation System in GNSS-Denied Environments. Sensors 2015, 15, 16710–16728. [Google Scholar] [CrossRef] [PubMed]
  78. Tang, J.; Chen, Y.; Kukko, A.; Kaartinen, H.; Jaakkola, A.; Khoramshahi, E.; Hakala, T.; Hyyppä, J.; Holopainen, M.; Hyyppä, H. SLAM-Aided Stem Mapping for Forest Inventory with Small-Footprint Mobile LiDAR. Forests 2015, 6, 4588–4606. [Google Scholar] [CrossRef] [Green Version]
  79. Nevalainen, P.; Li, Q.; Melkas, T.; Riekki, K.; Westerlund, T.; Heikkonen, J. Navigation and Mapping in Forest Environment Using Sparse Point Clouds. Remote Sens. 2020, 12, 4088. [Google Scholar] [CrossRef]
  80. Li, Q.; Nevalainen, P.; Peña Queralta, J.; Heikkonen, J.; Westerlund, T. Localization in Unstructured Environments: Towards Autonomous Robots in Forests with Delaunay Triangulation. Remote Sens. 2020, 12, 1870. [Google Scholar] [CrossRef]
  81. Dalponte, M.; Bruzzone, L.; Gianelle, D. Fusion of Hyperspectral and LIDAR Remote Sensing Data for Classification of Complex Forest Areas. IEEE Trans. Geosci. Remote Sens. 2008, 46, 1416–1427. [Google Scholar] [CrossRef] [Green Version]
  82. Deng, S.; Katoh, M.; Yu, X.; Hyyppä, J.; Gao, T. Comparison of Tree Species Classifications at the Individual Tree Level by Combining ALS Data and RGB Images Using Different Algorithms. Remote Sens. 2016, 8, 1034. [Google Scholar] [CrossRef] [Green Version]
  83. Zhang, Z.; Kazakova, A.; Moskal, L.M.; Styers, D.M. Object-Based Tree Species Classification in Urban Ecosystems Using LiDAR and Hyperspectral Data. Forests 2016, 7, 122. [Google Scholar] [CrossRef] [Green Version]
  84. Sun, Y.; Xin, Q.; Huang, J.; Huang, B.; Zhang, H. Characterizing Tree Species of a Tropical Wetland in Southern China at the Individual Tree Level Based on Convolutional Neural Network. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2019, 12, 4415–4425. [Google Scholar] [CrossRef]
  85. Zhang, R.; Tang, X.; You, S.; Duan, K.; Xiang, H.; Luo, H. A Novel Feature-Level Fusion Framework Using Optical and SAR Remote Sensing Images for Land Use/Land Cover (LULC) Classification in Cloudy Mountainous Area. Appl. Sci. 2020, 10, 2928. [Google Scholar] [CrossRef]
  86. Kwon, S.K.; Jung, H.S.; Baek, W.K.; Kim, D. Classification of Forest Vertical Structure in South Korea from Aerial Orthophoto and Lidar Data Using an Artificial Neural Network. Appl. Sci. 2017, 7, 1046. [Google Scholar] [CrossRef] [Green Version]
  87. Pibre, L.; Chaumon, M.; Subsol, G.; Lenco, D.; Derras, M. How to deal with multi-source data for tree detection based on deep learning. In Proceedings of the 2017 IEEE Global Conference on Signal and Information Processing (GlobalSIP), Montreal, QC, Canada, 14–16 November 2017; pp. 1150–1154. [Google Scholar] [CrossRef] [Green Version]
  88. Matthies, L.; Bergh, C.; Castano, A.; Macedo, J.; Manduchi, R. Obstacle Detection in Foliage with Ladar and Radar. In Robotics Research. The Eleventh International Symposium; Dario, P., Chatila, R., Eds.; Springer: Berlin/Heidelberg, Germany, 2005; pp. 291–300. [Google Scholar]
  89. Zhou, S.; Xi, J.; McDaniel, M.W.; Nishihata, T.; Salesses, P.; Iagnemma, K. Self-supervised learning to visually detect terrain surfaces for autonomous robots operating in forested terrain. J. Field Robot. 2012, 29, 277–297. [Google Scholar] [CrossRef]
  90. Lei, G.; Yao, R.; Zhao, Y.; Zheng, Y. Detection and Modeling of Unstructured Roads in Forest Areas Based on Visual-2D Lidar Data Fusion. Forests 2021, 12, 820. [Google Scholar] [CrossRef]
  91. Shendryk, I.; Hellström, M.; Klemedtsson, L.; Kljun, N. Low-Density LiDAR and Optical Imagery for Biomass Estimation over Boreal Forest in Sweden. Forests 2014, 5, 992–1010. [Google Scholar] [CrossRef]
  92. Theofanous, N.; Chrysafis, I.; Mallinis, G.; Domakinis, C.; Verde, N.; Siahalou, S. Aboveground Biomass Estimation in Short Rotation Forest Plantations in Northern Greece Using ESA’s Sentinel Medium-High Resolution Multispectral and Radar Imaging Missions. Forests 2021, 12, 902. [Google Scholar] [CrossRef]
  93. Zhu, Y.; Feng, Z.; Lu, J.; Liu, J. Estimation of Forest Biomass in Beijing (China) Using Multisource Remote Sensing and Forest Inventory Data. Forests 2020, 11, 163. [Google Scholar] [CrossRef] [Green Version]
  94. Chen, S.; McDermid, G.J.; Castilla, G.; Linke, J. Measuring Vegetation Height in Linear Disturbances in the Boreal Forest with UAV Photogrammetry. Remote Sens. 2017, 9, 1257. [Google Scholar] [CrossRef] [Green Version]
  95. Popescu, S.; Wynne, R. Seeing the Trees in the Forest: Using Lidar and Multispectral Data Fusion with Local Filtering and Variable Window Size for Estimating Tree Height. Photogramm. Eng. Remote Sens. 2004, 70, 589–604. [Google Scholar] [CrossRef] [Green Version]
  96. Raibert, M.; Blankespoor, K.; Nelson, G.; Playter, R. BigDog, the Rough-Terrain Quadruped Robot. IFAC Proc. Vol. 2008, 41, 10822–10825. [Google Scholar] [CrossRef] [Green Version]
  97. Wooden, D.; Malchano, M.; Blankespoor, K.; Howardy, A.; Rizzi, A.A.; Raibert, M. Autonomous navigation for BigDog. In Proceedings of the 2010 IEEE International Conference on Robotics and Automation, Anchorage, AK, USA, 3–8 May 2010; pp. 4736–4741. [Google Scholar] [CrossRef]
  98. Pierzchała, M.; Giguère, P.; Astrup, R. Mapping forests using an unmanned ground vehicle with 3D LiDAR and graph-SLAM. Comput. Electron. Agric. 2018, 145, 217–225. [Google Scholar] [CrossRef]
  99. Hardiman, B.S.; Gough, C.M.; Butnor, J.R.; Bohrer, G.; Detto, M.; Curtis, P.S. Coupling Fine-Scale Root and Canopy Structure Using Ground-Based Remote Sensing. Remote Sens. 2017, 9, 182. [Google Scholar] [CrossRef] [Green Version]
  100. Mulatu, K.A.; Decuyper, M.; Brede, B.; Kooistra, L.; Reiche, J.; Mora, B.; Herold, M. Linking Terrestrial LiDAR Scanner and Conventional Forest Structure Measurements with Multi-Modal Satellite Data. Forests 2019, 10, 291. [Google Scholar] [CrossRef] [Green Version]
  101. Birrell, S.; Hughes, J.; Cai, J.Y.; Iida, F. A field-tested robotic harvesting system for iceberg lettuce. J. Field Robot. 2020, 37, 225–245. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  102. Xiong, Y.; Ge, Y.; Grimstad, L.; From, P.J. An autonomous strawberry-harvesting robot: Design, development, integration, and field evaluation. J. Field Robot. 2020, 37, 202–224. [Google Scholar] [CrossRef] [Green Version]
  103. Kang, H.; Zhou, H.; Chen, C. Visual Perception and Modeling for Autonomous Apple Harvesting. IEEE Access 2020, 8, 62151–62163. [Google Scholar] [CrossRef]
  104. Leu, A.; Razavi, M.; Langstädtler, L.; Ristić-Durrant, D.; Raffel, H.; Schenck, C.; Gräser, A.; Kuhfuss, B. Robotic Green Asparagus Selective Harvesting. IEEE/ASME Trans. Mechatronics 2017, 22, 2401–2410. [Google Scholar] [CrossRef]
  105. Lehnert, C.; English, A.; McCool, C.; Tow, A.W.; Perez, T. Autonomous Sweet Pepper Harvesting for Protected Cropping Systems. IEEE Robot. Autom. Lett. 2017, 2, 872–879. [Google Scholar] [CrossRef] [Green Version]
  106. Arad, B.; Balendonck, J.; Barth, R.; Ben-Shahar, O.; Edan, Y.; Hellström, T.; Hemming, J.; Kurtser, P.; Ringdahl, O.; Tielen, T.; et al. Development of a sweet pepper harvesting robot. J. Field Robot. 2020, 37, 1027–1039. [Google Scholar] [CrossRef]
  107. SepúLveda, D.; Fernández, R.; Navas, E.; Armada, M.; González-De-Santos, P. Robotic Aubergine Harvesting Using Dual-Arm Manipulation. IEEE Access 2020, 8, 121889–121904. [Google Scholar] [CrossRef]
  108. Yu, Y.; Zhang, K.; Liu, H.; Yang, L.; Zhang, D. Real-Time Visual Localization of the Picking Points for a Ridge-Planting Strawberry Harvesting Robot. IEEE Access 2020, 8, 116556–116568. [Google Scholar] [CrossRef]
  109. Ge, Y.; Xiong, Y.; Tenorio, G.L.; From, P.J. Fruit Localization and Environment Perception for Strawberry Harvesting Robots. IEEE Access 2019, 7, 147642–147652. [Google Scholar] [CrossRef]
  110. Padilha, T.C.; Moreira, G.; Magalhães, S.A.; dos Santos, F.N.; Cunha, M.; Oliveira, M. Tomato Detection Using Deep Learning for Robotics Application. In Progress in Artificial Intelligence; Marreiros, G., Melo, F.S., Lau, N., Lopes Cardoso, H., Reis, L.P., Eds.; Springer: Cham, Switzerland, 2021; pp. 27–38. [Google Scholar]
  111. Magalhães, S.A.; Castro, L.; Moreira, G.; dos Santos, F.N.; Cunha, M.; Dias, J.; Moreira, A.P. Evaluating the Single-Shot MultiBox Detector and YOLO Deep Learning Models for the Detection of Tomatoes in a Greenhouse. Sensors 2021, 21, 3569. [Google Scholar] [CrossRef]
  112. Aguiar, A.S.; Magalhães, S.A.; dos Santos, F.N.; Castro, L.; Pinho, T.; Valente, J.; Martins, R.; Boaventura-Cunha, J. Grape Bunch Detection at Different Growth Stages Using Deep Learning Quantized Models. Agronomy 2021, 11, 1890. [Google Scholar] [CrossRef]
  113. Bargoti, S.; Underwood, J.P. Image Segmentation for Fruit Detection and Yield Estimation in Apple Orchards. J. Field Robot. 2017, 34, 1039–1060. [Google Scholar] [CrossRef] [Green Version]
  114. Martin, J.; Ansuategi, A.; Maurtua, I.; Gutierrez, A.; Obregón, D.; Casquero, O.; Marcos, M. A Generic ROS-Based Control Architecture for Pest Inspection and Treatment in Greenhouses Using a Mobile Manipulator. IEEE Access 2021, 9, 94981–94995. [Google Scholar] [CrossRef]
  115. Su, J.; Yi, D.; Su, B.; Mi, Z.; Liu, C.; Hu, X.; Xu, X.; Guo, L.; Chen, W.H. Aerial Visual Perception in Smart Farming: Field Study of Wheat Yellow Rust Monitoring. IEEE Trans. Ind. Informatics 2021, 17, 2242–2249. [Google Scholar] [CrossRef] [Green Version]
  116. Aguiar, A.S.; Monteiro, N.N.; Santos, F.N.d.; Solteiro Pires, E.J.; Silva, D.; Sousa, A.J.; Boaventura-Cunha, J. Bringing Semantics to the Vineyard: An Approach on Deep Learning-Based Vine Trunk Detection. Agriculture 2021, 11, 131. [Google Scholar] [CrossRef]
  117. Aguiar, A.S.; Santos, F.N.D.; De Sousa, A.J.M.; Oliveira, P.M.; Santos, L.C. Visual Trunk Detection Using Transfer Learning and a Deep Learning-Based Coprocessor. IEEE Access 2020, 8, 77308–77320. [Google Scholar] [CrossRef]
  118. Pinto de Aguiar, A.S.; Neves dos Santos, F.B.; Feliz dos Santos, L.C.; de Jesus Filipe, V.M.; Miranda de Sousa, A.J. Vineyard trunk detection using deep learning – An experimental device benchmark. Comput. Electron. Agric. 2020, 175, 105535. [Google Scholar] [CrossRef]
  119. Sarmento, J.; Silva Aguiar, A.; Neves dos Santos, F.; Sousa, A.J. Autonomous Robot Visual-Only Guidance in Agriculture Using Vanishing Point Estimation. In Progress in Artificial Intelligence; Marreiros, G., Melo, F.S., Lau, N., Lopes Cardoso, H., Reis, L.P., Eds.; Springer: Cham, Switzerland, 2021; pp. 3–15. [Google Scholar]
  120. Campos, Y.; Sossa, H.; Pajares, G. Comparative analysis of texture descriptors in maize fields with plants, soil and object discrimination. Precis. Agric. 2017, 18, 717–735. [Google Scholar] [CrossRef]
  121. Kim, W.S.; Lee, D.H.; Kim, T.; Kim, H.; Sim, T.; Kim, Y.J. Weakly Supervised Crop Area Segmentation for an Autonomous Combine Harvester. Sensors 2021, 21, 4801. [Google Scholar] [CrossRef]
  122. Potena, C.; Nardi, D.; Pretto, A. Fast and Accurate Crop and Weed Identification with Summarized Train Sets for Precision Agriculture. In Intelligent Autonomous Systems 14; Chen, W., Hosoda, K., Menegatti, E., Shimizu, M., Wang, H., Eds.; Springer: Cham, Switzerland, 2017; pp. 105–121. [Google Scholar]
  123. Lottes, P.; Hörferlin, M.; Sander, S.; Stachniss, C. Effective Vision-based Classification for Separating Sugar Beets and Weeds for Precision Farming. J. Field Robot. 2017, 34, 1160–1178. [Google Scholar] [CrossRef]
  124. Rovira-Mas, F.; Zhang, Q.; Kise, M.; Reid, J. Agricultural 3D Maps with Stereovision. In Proceedings of the 2006 IEEE/ION Position, Location, and Navigation Symposium, Coronado, CA, USA, 25–27 April 2006; pp. 1045–1053. [Google Scholar] [CrossRef]
  125. Nugroho, A.; Fadilah, M.; Wiratmoko, A.; Azis, Y.; Efendi, A.; Sutiarso, L.; Okayasu, T. Implementation of crop growth monitoring system based on depth perception using stereo camera in plant factory. IOP Conf. Ser. Earth Environ. Sci. 2020, 542. [Google Scholar] [CrossRef]
  126. da Silva, D.Q.; Aguiar, A.S.; dos Santos, F.N.; Sousa, A.J.; Rabino, D.; Biddoccu, M.; Bagagiolo, G.; Delmastro, M. Measuring Canopy Geometric Structure Using Optical Sensors Mounted on Terrestrial Vehicles: A Case Study in Vineyards. Agriculture 2021, 11, 208. [Google Scholar] [CrossRef]
  127. Digumarti, S.T.; Nieto, J.; Cadena, C.; Siegwart, R.; Beardsley, P. Automatic Segmentation of Tree Structure From Point Cloud Data. IEEE Robot. Autom. Lett. 2018, 3, 3043–3050. [Google Scholar] [CrossRef]
  128. Santos, L.; Santos, F.N.; Filipe, V.; Shinde, P. Vineyard Segmentation from Satellite Imagery Using Machine Learning. In Progress in Artificial Intelligence; Moura Oliveira, P., Novais, P., Reis, L.P., Eds.; Springer: Cham, Switzerland, 2019; pp. 109–120. [Google Scholar]
  129. Chapman, S.C.; Merz, T.; Chan, A.; Jackway, P.; Hrabar, S.; Dreccer, M.F.; Holland, E.; Zheng, B.; Ling, T.J.; Jimenez-Berni, J. Pheno-Copter: A Low-Altitude, Autonomous Remote-Sensing Robotic Helicopter for High-Throughput Field-Based Phenotyping. Agronomy 2014, 4, 279–301. [Google Scholar] [CrossRef] [Green Version]
  130. Arunachalam, A.; Andreasson, H. Real-time plant phenomics under robotic farming setup: A vision-based platform for complex plant phenotyping tasks. Comput. Electr. Eng. 2021, 92, 107098. [Google Scholar] [CrossRef]
  131. Baltazar, A.R.; Santos, F.N.d.; Moreira, A.P.; Valente, A.; Cunha, J.B. Smarter Robotic Sprayer System for Precision Agriculture. Electronics 2021, 10, 2061. [Google Scholar] [CrossRef]
  132. Weyler, J.; Milioto, A.; Falck, T.; Behley, J.; Stachniss, C. Joint Plant Instance Detection and Leaf Count Estimation for In-Field Plant Phenotyping. IEEE Robot. Autom. Lett. 2021, 6, 3599–3606. [Google Scholar] [CrossRef]
  133. Quaglia, G.; Visconte, C.; Scimmi, L.S.; Melchiorre, M.; Cavallone, P.; Pastorelli, S. Design of a UGV Powered by Solar Energy for Precision Agriculture. Robotics 2020, 9, 13. [Google Scholar] [CrossRef] [Green Version]
  134. Sarri, D.; Lombardo, S.; Lisci, R.; De Pascale, V.; Vieri, M. AgroBot Smash a Robotic Platform for the Sustainable Precision Agriculture. In Innovative Biosystems Engineering for Sustainable Agriculture, Forestry and Food Production; Coppola, A., Di Renzo, G.C., Altieri, G., D’Antonio, P., Eds.; Springer: Cham, Switzerland, 2020; pp. 793–801. [Google Scholar]
  135. Gasparino, M.V.; Higuti, V.A.H.; Velasquez, A.E.B.; Becker, M. Improved localization in a corn crop row using a rotated laser rangefinder for three-dimensional data acquisition. J. Braz. Soc. Mech. Sci. Eng. 2020, 42, 592. [Google Scholar] [CrossRef]
  136. Astolfi, P.; Gabrielli, A.; Bascetta, L.; Matteucci, M. Vineyard Autonomous Navigation in the Echord++ GRAPE Experiment. IFAC-PapersOnLine 2018, 51, 704–709. [Google Scholar] [CrossRef]
  137. Santos, L.C.; Aguiar, A.S.; Santos, F.N.; Valente, A.; Petry, M. Occupancy Grid and Topological Maps Extraction from Satellite Images for Path Planning in Agricultural Robots. Robotics 2020, 9, 77. [Google Scholar] [CrossRef]
  138. Santos, L.; Santos, F.; Mendes, J.; Costa, P.; Lima, J.; Reis, R.; Shinde, P. Path Planning Aware of Robot’s Center of Mass for Steep Slope Vineyards. Robotica 2020, 38, 684–698. [Google Scholar] [CrossRef]
  139. Malyuta, D.; Brommer, C.; Hentzen, D.; Stastny, T.; Siegwart, R.; Brockers, R. Long-duration fully autonomous operation of rotorcraft unmanned aerial systems for remote-sensing data acquisition. J. Field Robot. 2020, 37, 137–157. [Google Scholar] [CrossRef] [Green Version]
  140. Wang, D.; Li, W.; Liu, X.; Li, N.; Zhang, C. UAV environmental perception and autonomous obstacle avoidance: A deep learning and depth camera combined solution. Comput. Electron. Agric. 2020, 175, 105523. [Google Scholar] [CrossRef]
  141. Iqbal, J.; Xu, R.; Sun, S.; Li, C. Simulation of an Autonomous Mobile Robot for LiDAR-Based In-Field Phenotyping and Navigation. Robotics 2020, 9, 46. [Google Scholar] [CrossRef]
  142. Mendonça, R.; Marques, M.M.; Marques, F.; Lourenço, A.; Pinto, E.; Santana, P.; Coito, F.; Lobo, V.; Barata, J. A cooperative multi-robot team for the surveillance of shipwreck survivors at sea. In Proceedings of the OCEANS 2016 MTS/IEEE Monterey, Monterey, CA, USA, 19–23 September 2016; pp. 1–6. [Google Scholar] [CrossRef]
  143. Bejiga, M.B.; Zeggada, A.; Nouffidj, A.; Melgani, F. A Convolutional Neural Network Approach for Assisting Avalanche Search and Rescue Operations with UAV Imagery. Remote Sens. 2017, 9, 100. [Google Scholar] [CrossRef] [Green Version]
  144. Pi, Y.; Nath, N.D.; Behzadan, A.H. Convolutional neural networks for object detection in aerial imagery for disaster response and recovery. Adv. Eng. Inform. 2020, 43, 101009. [Google Scholar] [CrossRef]
  145. Mishra, B.; Garg, D.; Narang, P.; Mishra, V. Drone-surveillance for search and rescue in natural disaster. Comput. Commun. 2020, 156, 1–10. [Google Scholar] [CrossRef]
  146. Sandino, J.; Vanegas, F.; Maire, F.; Caccetta, P.; Sanderson, C.; Gonzalez, F. UAV Framework for Autonomous Onboard Navigation and People/Object Detection in Cluttered Indoor Environments. Remote Sens. 2020, 12, 3386. [Google Scholar] [CrossRef]
  147. Yeum, C.M.; Dyke, S.J.; Ramirez, J. Visual data classification in post-event building reconnaissance. Eng. Struct. 2018, 155, 16–24. [Google Scholar] [CrossRef]
  148. Liang, X. Image-based post-disaster inspection of reinforced concrete bridge systems using deep learning with Bayesian optimization. Comput.-Aided Civ. Infrastruct. Eng. 2019, 34, 415–430. [Google Scholar] [CrossRef]
  149. Garcia-Cerezo, A.; Mandow, A.; Martinez, J.L.; Gomez-de Gabriel, J.; Morales, J.; Cruz, A.; Reina, A.; Seron, J. Development of ALACRANE: A Mobile Robotic Assistance for Exploration and Rescue Missions. In Proceedings of the 2007 IEEE International Workshop on Safety, Security and Rescue Robotics, Rome, Italy, 27–29 September 2007; pp. 1–6. [Google Scholar] [CrossRef]
  150. Chen, B.; Miao, X. Distribution Line Pole Detection and Counting Based on YOLO Using UAV Inspection Line Video. J. Electr. Eng. Technol. 2020, 15, 441–448. [Google Scholar] [CrossRef]
  151. Xu, Y.; Zhu, M.; Li, S.; Feng, H.; Ma, S.; Che, J. End-to-End Airport Detection in Remote Sensing Images Combining Cascade Region Proposal Networks and Multi-Threshold Detection Networks. Remote Sens. 2018, 10, 1516. [Google Scholar] [CrossRef] [Green Version]
  152. Zhang, Y.; Guo, L.; Wang, Z.; Yu, Y.; Liu, X.; Xu, F. Intelligent Ship Detection in Remote Sensing Images Based on Multi-Layer Convolutional Feature Fusion. Remote Sens. 2020, 12, 3316. [Google Scholar] [CrossRef]
  153. Zhang, T.; Zhang, X. High-Speed Ship Detection in SAR Images Based on a Grid Convolutional Neural Network. Remote Sens. 2019, 11, 1206. [Google Scholar] [CrossRef] [Green Version]
  154. Petrlík, M.; Báča, T.; Heřt, D.; Vrba, M.; Krajník, T.; Saska, M. A Robust UAV System for Operations in a Constrained Environment. IEEE Robot. Autom. Lett. 2020, 5, 2169–2176. [Google Scholar] [CrossRef]
  155. Sun, J.; Song, J.; Chen, H.; Huang, X.; Liu, Y. Autonomous State Estimation and Mapping in Unknown Environments With Onboard Stereo Camera for Micro Aerial Vehicles. IEEE Trans. Ind. Informatics 2020, 16, 5746–5756. [Google Scholar] [CrossRef]
  156. Anderson, S.J.; Karumanchi, S.B.; Johnson, B.; Perlin, V.; Rohde, M.; Iagnemma, K. Constraint-based semi-autonomy for unmanned ground vehicles using local sensing. In Unmanned Systems Technology XIV; Karlsen, R.E., Gage, D.W., Shoemaker, C.M., Gerhart, G.R., Eds.; International Society for Optics and Photonics, SPIE: Bellingham, WA, USA, 2012; Volume 8387, pp. 218–225. [Google Scholar] [CrossRef]
  157. Shane, D.J.; Rufo, M.A.; Berkemeier, M.D.; Alberts, J.A. Autonomous urban reconnaissance ingress system (AURIS): Providing a tactically relevant autonomous door-opening kit for unmanned ground vehicles. In Unmanned Systems Technology XIV; Karlsen, R.E., Gage, D.W., Shoemaker, C.M., Gerhart, G.R., Eds.; International Society for Optics and Photonics, SPIE: Bellingham, WA, USA, 2012; Volume 8387, pp. 421–430. [Google Scholar] [CrossRef]
  158. Yamauchi, B. All-weather perception for man-portable robots using ultra-wideband radar. In Proceedings of the 2010 IEEE International Conference on Robotics and Automation, Anchorage, AK, USA, 3–8 May 2010; pp. 3610–3615. [Google Scholar] [CrossRef] [Green Version]
  159. Powers, M.A.; Davis, C.C. Spectral ladar: Towards active 3D multispectral imaging. In Laser Radar Technology and Applications XV; Turner, M.D., Kamerman, G.W., Eds.; International Society for Optics and Photonics, SPIE: Bellingham, WA, USA, 2010; Volume 7684, pp. 74–85. [Google Scholar] [CrossRef]
  160. Powers, M.A.; Davis, C.C. Spectral LADAR: Active range-resolved three-dimensional imaging spectroscopy. Appl. Opt. 2012, 51, 1468–1478. [Google Scholar] [CrossRef]
  161. Tao, X.; Jingjing, F.; Shuai, G.; Zhipeng, L. Multi-sensor Spatial and Time Scale Fusion Method for Off-road Environment Personnel Identification. In Proceedings of the 2020 4th CAA International Conference on Vehicular Control and Intelligence (CVCI), Hangzhou, China, 18–20 December 2020; pp. 633–638. [Google Scholar] [CrossRef]
  162. Ghorpade, D.; Thakare, A.D.; Doiphode, S. Obstacle Detection and Avoidance Algorithm for Autonomous Mobile Robot using 2D LiDAR. In Proceedings of the 2017 International Conference on Computing, Communication, Control and Automation (ICCUBEA), Pune, India, 17–18 August 2017; pp. 1–6. [Google Scholar] [CrossRef]
  163. Marion, V.; Lecointe, O.; Lewandowski, C.; Morillon, J.G.; Aufrere, R.; Marcotegui, B.; Chapuis, R.; Beucher, S. Robust perception algorithms for road and track autonomous following. In Unmanned Ground Vehicle Technology VI; Gerhart, G.R., Shoemaker, C.M., Gage, D.W., Eds.; International Society for Optics and Photonics, SPIE: Bellingham, WA, USA, 2004; Volume 5422, pp. 55–66. [Google Scholar] [CrossRef]
  164. Li, J.; Ye, D.H.; Chung, T.; Kolsch, M.; Wachs, J.; Bouman, C. Multi-target detection and tracking from a single camera in Unmanned Aerial Vehicles (UAVs). In Proceedings of the 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Daejeon, Korea, 9–14 October 2016; pp. 4992–4997. [Google Scholar] [CrossRef]
  165. Niu, C.; Tarapore, D.; Zauner, K.P. Low-viewpoint forest depth dataset for sparse rover swarms. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Las Vegas, NV, USA, 24 October–24 January 2021. [Google Scholar]
  166. Bruce, J.; Wawerla, J.; Vaughan, R. The SFU Mountain Dataset: Semi-Structured Woodland Trails Under Changing Environmental Conditions. In Proceedings of the IEEE International Conference on Robotics and Automation 2015, Workshop on Visual Place Recognition in Changing Environments, Seattle, WA, USA, 25–30 May 2015. [Google Scholar]
  167. Ali, I.; Durmush, A.; Suominen, O.; Yli-Hietanen, J.; Peltonen, S.; Collin, J.; Gotchev, A. FinnForest dataset: A forest landscape for visual SLAM. Robot. Auton. Syst. 2020, 132, 103610. [Google Scholar] [CrossRef]
  168. Reis, R.; dos Santos, F.N.; Santos, L. Forest Robot and Datasets for Biomass Collection. In Proceedings of the Robot 2019: Fourth Iberian Robotics Conference, Porto, Portugal, 20–22 November 2019; Silva, M.F., Luís Lima, J., Reis, L.P., Sanfeliu, A., Tardioli, D., Eds.; Springer: Cham, Switzerland, 2020; pp. 152–163. [Google Scholar]
Figure 1. Year distribution of the reviewed articles.
Figure 2. Category distribution of the reviewed articles.
Figure 3. Lateral view of the ground mobile robotic platform AgRob V16 with the sensors annotated.
Table 1. Summary of the collected works about vision-based perception.
Category | Processing Type | Works
Health and diseases | Offline | [4,5,6,7,8]
Inventory and structure | Offline | [9,10,11,12,13,14,15,16,17,18,19,20,21,22,23]
Navigation | Online | [24,25,26,27,28,29,30,31,32]
Table 2. Summary of the collected works about LiDAR-based perception.
Category | Processing Type | Works
Inventory and structure | Offline | [33,34,35,36,37,38,39,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64,65,66,67,68,69,70,71,72]
Navigation | Online | [73,74,75,76,77,78,79,80]
Table 3. Best works in terms of innovation in each category.
Category | Work | Objective | Perception Type | Platforms
Health and diseases | [5] | Disease detection | Unimodal | UAV
Health and diseases | [6] | Disease detection | Unimodal | UAV
Inventory and structure | [11] | Inventory characterisation | Unimodal | Handheld
Inventory and structure | [38] | Biomass parameters | Unimodal | Airborne
Inventory and structure | [93] | Biomass estimation | Multimodal | Spaceborne
Navigation | [28] | Autonomous flight | Unimodal | UAV
Navigation | [74] | Autonomous rubber-tapping | Unimodal | Caterpillar robot
Navigation | [97] | Autonomous navigation | Multimodal | Quadruped robot
Species classification | [81] | Vegetation classification | Multimodal | Spaceborne, airborne
Species classification | [82] | Vegetation classification | Multimodal | Airborne
Table 4. Overview of sensor platform types. This table was adapted from [1].
Sensor Platform | Area Coverage | Spatial Resolution | Real-Time Operation
Spacecraft | Global | Low | No
Aircraft | Regional | Medium | No
UAV | Local | High/Very High | Maybe
Ground-Based | Site | Very High | Yes
Table 5. Partial description of the dataset acquired by the sensors mounted on AgRob V16: FOV, data types, data resolution, frame rate, and total number of messages related to each sensor.
Sensor | FOV (H° × V° × D°) | Frame Rate (Hz) | Data Type | Resolution (W × H px) | Number of Messages
ZED | 90 × 60 × 100 | 2.67 | Visible image | 1280 × 720 | 6133
ZED | 90 × 60 × 100 | 2.67 | Depth image | 1280 × 720 | 3045
OAK-D | 72 × 50 × 83 | 5.20 | Visible image | 1280 × 720 | 5105
OAK-D | 72 × 50 × 83 | 5.20 | Depth image | 1280 × 720 | 4811
FLIR | 24 × 18 | 27.87 | Thermal image | 640 × 512 | 34,375
Velodyne | 360 × 30 | 10.37 | Point cloud | - | 12,154
Table 6. Summary of works presenting perception datasets in forests and our dataset, as well as their characteristics.
Reference | Perception Data | Data Format
Eysn et al. [34] | Laser scans | LAS, TIFF, SHP
Niu et al. [165] | Colour and depth images | PNG, CSV
da Silva et al. [31] | Visible and thermal images | JPG, XML
Bruce et al. [166] | Laser scans; colour and monochrome images | ROS
Ali et al. [167] | Colour images | ROS
Reis et al. [168] | Laser scans; thermal images | ROS
QuintaReiFMD | Laser scans; visible, thermal, and depth images | ROS
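Since several of the datasets in Table 6, including QuintaReiFMD, are distributed as ROS bag files, the snippet below sketches how such a recording might be inspected. It assumes a ROS 1 Python environment with the rosbag and cv_bridge packages; the bag path and topic names are placeholders that should be replaced with the actual ones reported by `rosbag info`.

```python
# Minimal sketch: count messages in a ROS 1 bag and decode the first image.
# BAG_PATH, IMAGE_TOPIC, and CLOUD_TOPIC are placeholders, not the dataset's
# real filenames or topic names -- check them first with `rosbag info <file>.bag`.
import rosbag
from cv_bridge import CvBridge

BAG_PATH = "quintarei_fmd_sample.bag"     # placeholder bag filename
IMAGE_TOPIC = "/camera/image_raw"         # placeholder visible-image topic
CLOUD_TOPIC = "/velodyne_points"          # placeholder point-cloud topic

bridge = CvBridge()
with rosbag.Bag(BAG_PATH, "r") as bag:
    counts = {}
    for topic, msg, stamp in bag.read_messages(topics=[IMAGE_TOPIC, CLOUD_TOPIC]):
        counts[topic] = counts.get(topic, 0) + 1
        if topic == IMAGE_TOPIC and counts[topic] == 1:
            # Convert the first image message to an OpenCV array for inspection.
            frame = bridge.imgmsg_to_cv2(msg, desired_encoding="bgr8")
            print("first image shape:", frame.shape)
    print("message counts:", counts)
```

The same iteration pattern extends to the thermal, depth, and point-cloud topics; point clouds, for instance, can be unpacked with sensor_msgs.point_cloud2 before further processing.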
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

