Article

Unsupervised Methodology for Large-Scale Tree Seedling Mapping in Diverse Forestry Settings Using UAV-Based RGB Imagery

by Sadeepa Jayathunga 1,*, Grant D. Pearse 1 and Michael S. Watt 2

1 Scion, Titokorangi Drive, Private Bag 3020, Rotorua 3046, New Zealand
2 Scion, 10 Kyle St, Riccarton, Christchurch 8440, New Zealand
* Author to whom correspondence should be addressed.
Remote Sens. 2023, 15(22), 5276; https://doi.org/10.3390/rs15225276
Submission received: 16 October 2023 / Revised: 3 November 2023 / Accepted: 3 November 2023 / Published: 7 November 2023
(This article belongs to the Topic Individual Tree Detection (ITD) and Its Applications)

Abstract

Mapping and monitoring tree seedlings is essential for reforestation and restoration efforts. However, achieving this on a large scale, especially during the initial stages of growth, when seedlings are small and lack distinct morphological features, can be challenging. An accurate, reliable, and efficient method that detects seedlings using unmanned aerial vehicles (UAVs) could significantly reduce survey costs. In this study, we used an unsupervised approach to map young conifer seedlings utilising spatial, spectral, and structural information from UAV digital aerial photogrammetric (UAV-DAP) point clouds. We tested our method across eight trial stands of radiata pine with a wide height range (0.4–6 m) that comprised a total of ca. 100 ha and spanned diverse site conditions. Using this method, seedling detection was excellent, with an overall precision, sensitivity, and F1 score of 95.2%, 98.0%, and 96.6%, respectively. Our findings demonstrated the importance of combining spatial, spectral, and structural metrics for seedling detection. While spectral and structural metrics efficiently filtered out non-vegetation objects and weeds, they struggled to differentiate planted seedlings from regenerating ones due to their similar characteristics, resulting in a large number of false positives. The inclusion of a row segment detection algorithm overcame this limitation and successfully identified most regenerating seedlings, leading to a significant reduction in false positives and an improvement in overall detection accuracy. Our method generated vector files containing seedling positions and key structural characteristics (seedling height, crown dimensions), offering valuable outputs for precision management. This automated pipeline requires fewer resources and user inputs compared to manual annotations or supervised techniques, making it a rapid, cost-effective, and scalable solution which is applicable without extensive training data. While serving as primarily a standalone tool for assessing forestry projects, the proposed method can also complement supervised seedling detection methods like machine learning, i.e., by supplementing training datasets.

1. Introduction

Seedlings represent an acutely vulnerable stage of the tree development cycle. Trees at this stage are highly susceptible to many biotic and abiotic factors that can be exacerbated by transplanting shock [1] and poor site conditions in disturbed forest lands [2]. Consequently, seedling mortality during the establishment phase is considered a strong predictor of the mid- and long-term performance of forestry projects [2,3,4].
Monitoring seedling survival often requires the detection and mapping of seedlings at different spatial and temporal scales. When conducted regularly, seedling mapping facilitates effective monitoring that helps track the progress of intended outcomes of reforestation efforts, e.g., sustainable timber production, carbon sequestration [5], and ecological recovery [4,6]; it also simplifies the process of surveying individual trees [7] and can guide precision management techniques [8]. Seedling detection, mapping, and monitoring can also greatly assist the control of invasive tree species during the early growth stages [9,10,11].
Remote sensing (RS) offers rapid, scalable, and multi-temporal solutions for tree mapping and monitoring on sites where field surveys can be challenging, time-consuming, and labour-intensive. However, due to their small size and the absence of distinctive morphological structure at young ages, it can be challenging to detect seedlings using RS data that do not possess appropriate spatial–spectral resolutions [9,12,13].
Spectral data captured by sensors mounted on space-borne platforms have been commonly utilised for forest cover mapping due to their scalability and accessibility [14,15]. Nevertheless, such data are frequently affected by cloud cover, especially in regions with high cloud frequency. High-resolution imagery captured using aerial platforms, such as aircraft and helicopters, is another data source employed for individual tree detection in RS studies. Both high-resolution satellite imagery (e.g., WorldView imagery with a resolution of 0.31 m) and aerial imagery have limitations in seedling mapping, as they lack the necessary detail, accuracy, and flexibility required for effective seedling mapping and monitoring [9]. It is also important to note that high-resolution satellite and aerial imagery can be costly, depending on coverage and sensor usage, and neither allows direct measurement of structural information.
Methodologies using airborne light detection and ranging (LiDAR) can overcome some of these limitations and accurately detect seedlings using key structural features such as height [16,17,18]. However, airborne LiDAR data can be expensive to collect over large areas at sufficiently high pulse densities for the accurate detection of small seedlings, particularly when repeat acquisitions are required [19]. While LiDAR can provide intricate structural and spatial information about seedlings, its lack of embedded spectral details hinders its ability to distinguish between vegetative and non-vegetative elements in plantation settings, such as piles of harvest residue and tree stumps. To address this limitation, the integration of LiDAR with imagery is necessary [20,21]. However, this integration complicates the methodology by introducing additional steps, including co-registration and data fusion.
In recent decades, forest management has experienced a transformation with the widespread adoption of lightweight unmanned aerial vehicles (UAVs) [22,23]. These UAVs, equipped with miniaturised optical and LiDAR sensors, provide the flexibility of on-demand data captures at a local scale. Spectral imagery having red, green, and blue bands (RGB) with sufficient spatial–spectral resolution for the reliable detection of small seedlings can readily be collected using high-resolution cameras mounted on UAVs [4,24]. Furthermore, cost-effective consumer-grade UAV-based LiDAR sensors, such as the DJI-L1 Zenmuse (DJI, Shenzhen, China), can be employed to capture colourised point clouds that contain both structural and spectral information.
In cases where such equipment is unavailable, an alternative option is to use digital aerial photogrammetry (DAP) and computer vision techniques such as structure from motion (SfM). These relatively recent advances enable the reconstruction of geometrically precise, three-dimensional (3D) point clouds from overlapping two-dimensional (2D) images [25]; this technology has further broadened the horizons of UAV-based remote sensing in forestry [23]. Methodologies that use UAV-based RGB imagery for DAP eliminate the requirement for intricate sensors such as multispectral or LiDAR and negate the need for data fusion, thereby reducing the overall complexity of the methods employed.
The automated detection and mapping of seedlings using UAV-based, high-resolution RGB imagery is an active area of research. Many studies have reported a suite of common seedling detection methods, including the use of RGB-derived spectral indices with traditional image segmentation techniques such as thresholding, edge detection, and clustering [4,26,27], and more modern, artificial intelligence (AI) based methods such as deep learning [28,29,30]. Deep learning has seen a dramatic increase in usage over the past few years as a successful technique for small seedling detection, with some studies reporting precision levels as high as 97% with the use of convolutional neural networks (CNNs) [28]. However, a notable limitation of CNNs is the need for large training datasets. These datasets are often acquired through the visual interpretation of imagery, where a human operator annotates examples that allow the CNN algorithm to self-learn the characteristic features which are useful for detection. This annotation phase can be very time-consuming and, hence, expensive, and can introduce the potential for biases [29].
An alternative possibility is to use objective and data-driven unsupervised techniques. These methods recognise distinct units by automatically identifying natural groups based on spatial, spectral, and structural information within RS data. The key advantage of these techniques is that they do not require the user to have extensive prior knowledge of the study area. Unsupervised techniques have been widely implemented in a wide range of RS applications and are usually tailored to specific application requirements. Customised pipelines have been developed using data from a wide range of satellite, airborne, and terrestrial platforms to improve the accuracy and efficiency of vegetation detection and tested against a variety of species in a broad spectrum of forest settings [31,32,33,34,35]. Within this research area, a few studies have focused on the detection and mapping of conifer seedlings using unsupervised methods [4,24,36,37]. However, we are only aware of one study that has used a combination of spatial, spectral, and structural information for conifer seedling detection. That study showed that these complementary data sources improved detection accuracy in challenging settings, such as sites with weeds and coarse woody debris [36]. In the present study, unsupervised techniques proved to be suitable, as they enabled us to leverage the spatial, structural, and spectral details contained within the UAV–DAP point clouds without the need for site-specific model training.
Radiata pine (Pinus radiata D. Don) is one of the most widely planted exotic species, both globally and in the southern hemisphere [38]. Our research utilised data from six radiata pine plantations encompassing a diverse range of site conditions and weed infestations. Using these data, the primary goal of this study was to develop an easy to implement and robust pipeline for detecting and mapping conifer seedlings in plantation settings using a combination of spatial, spectral, and structural metrics derived from UAV–DAP point clouds. We extended the developed method to include the generation of vegetation masks and the delineation of crown structures.
In this study, DAP point clouds derived from high-resolution RGB imagery captured by UAVs were utilised, presenting a notable advantage over the direct application of LiDAR data for seedling detection. Similar observations highlighting the challenges of integrating LiDAR with spectral data and the advantage of UAV–DAP point clouds over direct applications of LiDAR have been documented in previous studies [20,21,23].

2. Materials and Method

2.1. Study Site

Eight trial stands located in the North Island of Aotearoa, New Zealand (Figure 1), comprising a total area of ca. 100 ha, were selected to assess the pipeline. At the time of measurement, seedlings were aged between 4 months and 3.5 years, with a height range of 0.4–4.2 m. In general, regular spacing was maintained at all sites, but occasional gaps were found as a result of missing or dead seedlings.
The Rangipo trial was established on a gently rolling site that was previously pasture. Half of the site was rip-mounded (i.e., a soil preparation activity that involves using a mechanical tool to loosen and elevate the soil), while the other half was planted into the bare pasture. Some areas of this site were infested with weeds. The Kaingaroa trial sites 1 and 3 were previously planted forest sites with significant levels of harvest residue. The seedlings were mechanically spot-mounded—a process by which harvest residue is cleared from the planting area and stacked in windrows, and the seedlings are established on a small mound of earth. Weeds on this site mainly comprised naturally regenerating P. radiata seedlings (which will be referred to as “regen” hereafter) that were unevenly spaced and smaller in size than the planted seedlings. Kaingaroa 2 was also a previously planted site with little to moderate harvest residues and significant weed infestation that comprised regen and patches of gorse (Ulex europaeus). The Scion Nursery site is a flat site in which seedlings were established onto grass that is regularly mowed. The Tarawera site is a previously forested site with moderate levels of harvest residue, weed cover, and regen. An overview of the study sites is given in Appendix A; please refer to [28,39] for more details.

2.2. Data

2.2.1. Remote Sensing Data

High-resolution imagery for the study sites was captured using the DJI P4P system (DJI, Shenzhen, China), equipped with an integrated 1-inch, 20 MP RGB camera. All flights were planned to maintain forward and side overlap of at least 85% and 80%, respectively. Other specific flight details for each site can be found in Appendix A.

2.2.2. Field Data

A total of 16,318 seedlings spanning all eight study stands were annotated using high-resolution UAV imagery by an operator. Subsequently, the positions of the annotated seedlings at all sites, apart from Tarawera and Kaingaroa 2, were converted to seedling maps. The accuracy of these seedling maps was then verified on the ground by a field crew, who matched the locations on the seedling maps with the stem numbers attached to the seedling stems. From these annotated seedlings, the height of a total of 6616 trees was measured across six sites, i.e., the Kaingaroa 1 and 3, Rangipo, and Scion nursery trials, using a height pole for trees up to ~6 m tall, and a Vertex 4 hypsometer (Haglöf, Langsele, Sweden) for taller trees. The measured heights ranged from 0.4 to 6.1 m.

2.3. Detection Process

Our seedling detection and measurement methodology consists of four stages: (i) UAV–DAP point cloud generation and processing, (ii) vegetation point isolation, (iii) individual seedling detection, and (iv) seedling segmentation and metric extraction. A flowchart describing the methodology is presented in Figure 2.

2.3.1. UAV–DAP Point Cloud Generation and Processing

Pix4Dmapper (Pix4D, Lausanne, Switzerland) was used to process the raw UAV images and generate SfM data. The standard 3D processing settings within Pix4D were followed, with some previously described minor adjustments (see [39]). The processing of SfM data comprised the following steps: (i) initial processing, (ii) point cloud and mesh generation, and (iii) DSM, orthomosaic, and point cloud generation. Once the initial processing step had been completed, spatial reference data in the form of 3D ground control points (GCPs) were added to reprocess the models into the required spatial coordinate system (NZGD2000/New Zealand Transverse Mercator 2000, EGM 96 Geoid). A point cloud and an orthomosaic were generated for each site in subsequent steps and exported in LAS and TIF format, respectively. The exported point cloud contained XYZ coordinates and RGB digital number (DN) values attached to each point as point attributes. These point clouds were tiled, de-noised, thinned, ground classified, and normalised using the LAStools software package (version 220310; [40]). The Kaingaroa 1 and Kaingaroa 3 datasets were processed separately, with care taken to ensure that the ground classification accounted for the spot mounds present at these sites. The detailed methodology used for UAV–DAP point cloud generation in Pix4D and point cloud processing in LAStools is described in [39].
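For readers who wish to reproduce this processing chain without LAStools, an approximately equivalent workflow can be expressed with the lidR package in R. The sketch below is a minimal, assumed equivalent only; the file name and all parameter values are illustrative and do not reproduce the settings used in the study, which are described in [39].

```r
# Minimal lidR sketch of the point cloud processing chain (assumed parameters;
# the study used LAStools with the settings described in [39]).
library(lidR)

las <- readLAS("site_dap_pointcloud.las")          # XYZ + RGB exported from Pix4D

# Flag and drop isolated noise points (statistical outlier removal).
las <- classify_noise(las, sor(k = 10, m = 3))
las <- filter_poi(las, Classification != LASNOISE)

# Classify ground points; a cloth simulation filter can cope with spot mounds.
las <- classify_ground(las, csf(cloth_resolution = 0.5))

# Normalise heights so that Z becomes height above ground.
las <- normalize_height(las, tin())

# Optional thinning to a manageable point density (points per m^2).
las <- decimate_points(las, homogenize(density = 400, res = 1))

writeLAS(las, "site_dap_normalised.las")
```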

2.3.2. Vegetation Point Isolation

A two-step process was used to isolate the vegetation on the site. This included the creation of a vegetation mask and the use of nearest neighbour and height filtering to remove noise.
We chose Otsu’s thresholding technique [41] to create the binary vegetation mask. Using this method, a single intensity value is selected as a global threshold that separates objects (UAV–DAP points in this study) into two classes—foreground and background (vegetation and non-vegetation points in this study). The threshold is determined either by maximising the separability of the classes or minimising intra-class variance in grey levels.
The accuracy of the method depends on how well the coloured input is transformed into greyscale. Vegetation indices (VIs) have proven to be a useful means for this transformation. VIs calculated from spectral information are simple and effective for quantitative and qualitative evaluations of vegetation cover, vigour, and growth dynamics [31,42] and have been widely implemented with high-resolution UAV-based RGB imagery [23]. We examined common RGB-based VIs (e.g., Excess green index [43], Excess red index [44], Green–red ratio index [45], Modified green blue vegetation index [46], Normalised green–red difference index [47], Vegetative index [48], and Visible atmospherically resistant index [49]) and found the red–green–blue vegetation index (RGBVI) introduced in [46] to be the most effective in segregating the seedlings. The RGBVI can be used to identify vegetation through the reflectance characteristics of chlorophyll, which include high reflectance in the green band and absorption in the red and blue bands. RGBVI is determined as:
$$\mathrm{RGBVI} = \frac{R_G^2 - (R_B \times R_R)}{R_G^2 + (R_B \times R_R)}$$
where RR, RG, and RB are, respectively, the normalised DN values of the red, green, and blue bands extracted from the spectral information within the imagery. To convert the RGB point cloud into a greyscale point cloud, we calculated the RGBVI value as a point attribute, determined the global thresholding by applying Otsu’s technique to the RGBVI values, and filtered out all the points having RGBVI values less than Otsu’s threshold (Figure 3c).
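As an illustration of this step, the RGBVI attribute and Otsu's global threshold can be computed directly from the point attributes. The following base-R sketch is not the authors' code; the data frame `pts` and the assumption of 8-bit digital numbers are illustrative.

```r
# Sketch: compute RGBVI per point and apply Otsu's threshold (base R).
# `pts` is assumed to be a data.frame with columns X, Y, Z, R, G, B (8-bit DN values).
rgbvi <- function(R, G, B) {
  r <- R / 255; g <- G / 255; b <- B / 255     # normalise DN values to 0-1 (assumed 8-bit)
  (g^2 - b * r) / (g^2 + b * r)
}

otsu_threshold <- function(x, n_bins = 256) {
  # Exhaustive search for the threshold that maximises between-class variance.
  h <- hist(x, breaks = n_bins, plot = FALSE)
  p <- h$counts / sum(h$counts); mids <- h$mids
  best <- -Inf; thr <- mids[1]
  for (i in seq_along(mids)[-length(mids)]) {
    w0 <- sum(p[1:i]); w1 <- 1 - w0
    if (w0 == 0 || w1 == 0) next
    mu0 <- sum(p[1:i] * mids[1:i]) / w0
    mu1 <- sum(p[(i + 1):length(mids)] * mids[(i + 1):length(mids)]) / w1
    between <- w0 * w1 * (mu0 - mu1)^2
    if (between > best) { best <- between; thr <- mids[i] }
  }
  thr
}

pts$RGBVI <- rgbvi(pts$R, pts$G, pts$B)
veg <- pts[pts$RGBVI >= otsu_threshold(pts$RGBVI), ]   # retain vegetation points only
```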
This thresholding was refined in the next step through the application of height and noise filters. A height filter was used to remove weeds that were spectrally similar to the trees and identified through the above process but could be separated based on structural differences. Lastly, a noise filter based on the number of nearest neighbours of a point within a specified distance was applied to remove isolated points and small point clusters that could not represent a seedling. The final output from this stage (Figure 3d) will be hereafter referred to as the “vegetation point cloud”.
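The two structural filters can be expressed compactly as follows. The sketch reuses the `veg` data frame from the previous example; the height, radius, and neighbour-count thresholds are assumptions for illustration, not the values used in the study, and the RANN package is used here only as one convenient kd-tree implementation.

```r
# Sketch: structural refinement of the vegetation mask (illustrative thresholds).
library(RANN)

max_seedling_height <- 6.5   # m; points taller than any plausible seedling are dropped (assumed)
search_radius       <- 0.15  # m; neighbourhood used for the isolation test (assumed)
min_neighbours      <- 10    # minimum cluster support for a point to be kept (assumed)

# Height filter: remove tall vegetation that cannot be a seedling.
veg <- veg[veg$Z > 0 & veg$Z <= max_seedling_height, ]

# Noise filter: count neighbours within the search radius using a kd-tree.
nn <- nn2(veg[, c("X", "Y", "Z")], k = min_neighbours + 1,
          searchtype = "radius", radius = search_radius)
# nn$nn.idx is zero-filled where fewer than k points fall inside the radius;
# a point is kept only if all requested neighbours (excluding itself) were found.
keep <- rowSums(nn$nn.idx > 0) > min_neighbours
veg  <- veg[keep, ]
```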

2.3.3. Individual Seedling Detection

Using the vegetation point cloud as input, this stage comprises the following steps: (i) detection of potential seedling locations, (ii) approximation of general planting distance and orientation of the site, (iii) multicriteria evaluation of detected seedling locations, and (iv) re-testing the identified seedling locations.

Detection of Potential Seedling Locations

This step establishes a baseline by detecting all possible seedling locations in a particular area. Assuming that the highest point in an individual conifer seedling point cloud represents the vertex (Figure 3e), a local maxima-based treetop detection algorithm was implemented on the vegetation point cloud using a fixed-size moving window. The output of the treetop detection algorithm is a vector file of the likely XY locations of the individual seedlings and their maximum height (Figure 4). These seedling locations were subjected to further evaluation in the next steps of the pipeline.
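A fixed-window local maximum filter of this kind is available, for example, in the lidR package. The sketch below illustrates the idea; the window size, minimum height, and file names are assumptions rather than the parameters used in the study.

```r
# Sketch: local-maxima detection of candidate seedling tops (assumed parameters).
library(lidR)
library(sf)

veg_las <- readLAS("site_vegetation_points.las")   # the isolated vegetation point cloud

# Fixed-size circular moving window; window size and minimum height are assumptions.
tops <- locate_trees(veg_las, lmf(ws = 1.0, hmin = 0.3, shape = "circular"))

# `tops` is an sf POINT layer carrying the apex height (Z); export the
# candidate seedling locations as a vector file for the subsequent steps.
st_write(tops, "candidate_seedling_tops.gpkg", delete_layer = TRUE)
```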

Approximation of General Planting Distance and Orientation of the Site

Planting distance and row orientation are often site-specific and are the most relevant inputs to plant row fitting algorithms. The authors of [36] suggested using a histogram-based method to estimate the general planting distance of a site. We adopted their method with some modifications. Assuming that the along-row distance is smaller than the across-row distance, the most likely nearest neighbour of a point is a seedling on the same row as the point being considered (called the principal point hereafter). The process that we used (i) detects the nearest neighbour of each point, (ii) calculates the distance and orientation of the nearest neighbour point with respect to the principal point, (iii) creates a histogram of the nearest neighbour distances of all the points in the site, (iv) fits a Gaussian curve, and (v) extracts the peak value of the curve as the general planting distance. This method works effectively in young, planted forest stands that have not yet been thinned, as the seedling spacing is relatively consistent throughout such sites. Often, the nominal planting distance of a site is specified at establishment and, in these cases, can be directly specified in the pipeline as the general planting distance. From this point forward, all the default distance thresholds are defined as a percentage of the general planting distance.
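A compact illustration of this histogram approach is given below. It fits a single normal distribution to the nearest-neighbour distances (via MASS::fitdistr) and takes its mean as the peak, which is a simplification of fitting a Gaussian curve to the histogram; the `tops` layer from the previous sketch and all other names are assumptions.

```r
# Sketch: estimate the general planting distance from nearest-neighbour spacing.
library(RANN)
library(MASS)

xy <- sf::st_coordinates(tops)[, c("X", "Y")]       # candidate seedling tops (previous sketch)

# Distance and bearing from each candidate top to its nearest neighbour.
nn         <- nn2(xy, k = 2)                        # column 1 is the point itself
nn_dist    <- nn$nn.dists[, 2]
nn_idx     <- nn$nn.idx[, 2]
nn_bearing <- atan2(xy[nn_idx, "Y"] - xy[, "Y"],
                    xy[nn_idx, "X"] - xy[, "X"]) * 180 / pi

# Fit a single Gaussian to the nearest-neighbour distances; its mean
# approximates the peak of the distribution, i.e., the general planting distance.
fit <- fitdistr(nn_dist, "normal")
general_planting_distance <- unname(fit$estimate["mean"])
```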
In contrast to the general planting distance, the row orientation is often not known and varies significantly within the site, depending on the terrain and site conditions. Therefore, we followed a more flexible approach when determining the general row orientation of a site. We used a mixture distribution method, which fits multiple Gaussian curves to a histogram depending on the complexity of the distribution (via the mixtools package in R software version 4.0.2; [50]). The mixture distribution method was applied to the histogram that was plotted using the nearest neighbour orientations of all points in a particular site. The initial value of the mixing proportion was set to 15% of the population size to prevent the dataset from splitting into insignificant clusters and to ensure erroneous seedling locations did not affect the process. We estimated the peak value of the Gaussian curve of the largest cluster as the general row orientation. Peak values of the Gaussian curves that were fitted to clusters constituting at least 15% of the total population size were recorded as secondary orientations.
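The orientation step can be sketched with the mixtools package as follows, reusing the nearest-neighbour bearings from the previous sketch. The number of mixture components and the application of the 15% mixing-proportion rule are simplified here for illustration.

```r
# Sketch: estimate the general row orientation with a Gaussian mixture (mixtools).
library(mixtools)

# Fold bearings into 0-180 degrees so that opposite directions along a row coincide.
bearings <- nn_bearing %% 180

# Fit a two-component normal mixture; in practice the number of components
# would be chosen from the complexity of the bearing distribution.
mix <- normalmixEM(bearings, k = 2, maxit = 1000)

# The component with the largest mixing proportion gives the general orientation;
# components holding at least 15% of the points are kept as secondary orientations.
ord                    <- order(mix$lambda, decreasing = TRUE)
general_orientation    <- mix$mu[ord[1]]
secondary_orientations <- mix$mu[ord[-1]][mix$lambda[ord[-1]] >= 0.15]
```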

Multicriteria Evaluation of Detected Seedling Locations

This process used all the potential seedling locations detected in the previous step. At this stage of the processing, the vegetation point cloud may have still been contaminated by non-seedling vegetation and regen that were not removed by the spectral, structural, and spatial filtering of the UAV–DAP point cloud described in Section 2.3.2. These contaminated UAV–DAP points may have propagated errors into the planted seedling location detection as well. In order to account for such errors and possible misdetections and to retain only uncontaminated seedling location points, we used a scoring system that can evaluate each seedling location point against a list of defined criteria (Table 1).
This method identified relevant criteria and assigned a relative weight to each criterion by distributing 10 points among all the criteria based on their relative importance. A probability rating scale (e.g., −1 = less probable, 0 = same probability, and +1 = more probable) was then established for each criterion. Each seedling location point was assigned a base score of 10, and the base value was adjusted accordingly as the point was evaluated against each criterion specified in Table 1. The summed score can range from 0 to 20, with higher values denoting greater confidence in the tree location. The user can specify the cut-off score below which seedlings are eliminated, depending on the level of contamination in their vegetation point cloud and the seedling location dataset (Figure 5b). When deciding on the cut-off, it is crucial to consider the end use of the seedling detection results. For instance, a higher cut-off would be suitable if predicted locations are used to generate a training dataset for a supervised method (e.g., deep learning) that needs to be as accurate as possible. The performance of the cut-off score was visually assessed on the Tarawera and Kaingaroa 1 sites, as these two sites had a high number of false positives. A cut-off score of 12 proved to be ideal for removing the majority of these false positives at these two sites without discarding a significant number of true positives. Therefore, the default cut-off was set to 12 for all our study sites. We favoured a scoring method over a filter-out method, as it gives the user more control over the point selection and error margin.
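Because the specific criteria and weights are listed in Table 1, the sketch below shows only the scoring mechanics in generic form: a base score of 10, weights summing to 10, ratings of −1/0/+1, and a user-defined cut-off (12 by default). The three unnamed criteria in the example are hypothetical.

```r
# Sketch: generic multicriteria scoring mechanics (the criteria themselves are in Table 1).
score_seedling_point <- function(ratings, weights, base_score = 10) {
  # `ratings` holds -1 / 0 / +1 per criterion; `weights` sum to 10 (relative importance).
  stopifnot(length(ratings) == length(weights))
  base_score + sum(ratings * weights)     # the resulting score lies between 0 and 20
}

# Hypothetical example with three unnamed criteria weighted 5, 3, and 2.
score  <- score_seedling_point(ratings = c(+1, 0, -1), weights = c(5, 3, 2))

cutoff <- 12                              # default cut-off used across the study sites
keep   <- score >= cutoff
```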
Within the multicriteria evaluation, many of the criteria were associated with the collinearity and row alignment of the points. Thus, a simple row segment detection algorithm was developed to identify seedling location points that did not fit a local line with their nearest neighbours. This was accomplished following the steps listed below:
(i)
The two nearest near-collinear neighbours of each seedling location point that were in alignment with the general row orientation were identified.
(ii)
The mean XY of the two identified points was calculated (midXY) and compared to the principal point.
(iii)
The two neighbours were marked as the collinear pair of the principal point if the distance between midXY and the principal point was less than a specified distance threshold. The midXY and the XY of the principal point should overlap for stands with regular spacing, but this was often not the case when the planting spacing was less uniform. Therefore, the distance threshold was estimated as a proportion of the general planting distance (a default of 0.2 times the general planting distance was used).
The first three steps were repeated on all points to detect their near-collinear neighbours that were in alignment with the general direction. The remaining non-collinear location points were then checked against the secondary orientations (only if available) to detect the collinear points. After identifying all collinear points, they were processed through the following steps:
(iv)
A line segment, represented by each point and its two collinear points, was created (known as collinear line segments hereafter).
(v)
Some of these collinear line segments may have mutual points as the collinear point detection was conducted independently on each point. Consequently, all collinear line segments with mutual points were grouped together.
(vi)
All the points belonging to each group of collinear line segments were extracted, and a local line was fitted to these points using the least square method. These fitted local lines corresponded to row segments, as depicted in Figure 5a. Finally, the shortest distance between the fitted local line and each point used for line fitting was estimated.
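The final line-fitting part of step (vi) can be illustrated as follows. The sketch fits a least-squares line to the points of one group of collinear segments and returns each point's perpendicular distance to that line; it is a simplified restatement of the step, not the authors' code, and the example coordinates are hypothetical.

```r
# Sketch: fit a local line to one group of collinear points (step vi) and
# compute each point's perpendicular distance to it.
fit_row_segment <- function(group_xy) {
  # Ordinary least squares of Y on X; rows running nearly parallel to the Y axis
  # would require a rotation or total least squares in practice.
  fit <- lm(Y ~ X, data = as.data.frame(group_xy))
  a <- coef(fit)[2]; b <- -1; c <- coef(fit)[1]          # line in aX + bY + c = 0 form
  dist <- abs(a * group_xy[, "X"] + b * group_xy[, "Y"] + c) / sqrt(a^2 + b^2)
  list(intercept = coef(fit)[1], slope = coef(fit)[2], point_distances = dist)
}

# Hypothetical group of seedling locations lying roughly on one planting row.
row_points <- cbind(X = c(0, 3.1, 6.0, 8.9), Y = c(0, 0.2, -0.1, 0.1))
segment    <- fit_row_segment(row_points)
```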

Re-Testing the Identified Seedling Locations

This step takes the filtered seedling location dataset as the input and re-tests each point. For each seedling location point, (i) two dummy points were placed along the fitted line direction at a distance similar to the general planting distance, (ii) a circular buffer with a radius of 0.2 times the general planting distance was applied to each dummy point (Figure 5d), and (iii) any tree peak point located within the buffer was confirmed as a seedling location. If no tree peak point was found within the buffer, the dummy points were marked as possible missing seedlings. These missing seedling location points were re-checked in the raw UAV–DAP point cloud by directly interrogating the Z values of the unfiltered UAV–DAP points within a circle of the same size. Whenever a point with a Z value within the specified maximum height range was found, the dummy point was flagged as a potential seedling point. If no peak fell within the specified maximum height range, the point was marked as a possible missing seedling. All the flagged and possible missing seedling location points were then exported as vector layers for the user to visually evaluate on the orthomosaic.
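The placement and buffering of the dummy points can be sketched as follows, with the buffer radius set to 0.2 times the general planting distance as described above. The helper name, the data layout, and the simplified "confirmed/possible missing" labelling are assumptions; in the full pipeline, unmatched dummy points are additionally re-checked against the raw UAV–DAP point cloud.

```r
# Sketch: place dummy points along the row direction and test for a tree peak nearby.
retest_location <- function(pt, row_dir_deg, spacing, peaks_xy, buffer_frac = 0.2) {
  theta   <- row_dir_deg * pi / 180
  step    <- c(cos(theta), sin(theta)) * spacing
  dummies <- rbind(pt + step, pt - step)        # one dummy point on each side of the seedling

  radius <- buffer_frac * spacing
  apply(dummies, 1, function(d) {
    dists <- sqrt((peaks_xy[, 1] - d[1])^2 + (peaks_xy[, 2] - d[2])^2)
    if (any(dists <= radius)) "confirmed" else "possible_missing"
  })
}

# Hypothetical usage: pt is one confirmed seedling XY, peaks_xy the candidate tops.
status <- retest_location(pt = c(102.4, 57.8), row_dir_deg = 35,
                          spacing = general_planting_distance,
                          peaks_xy = sf::st_coordinates(tops)[, c("X", "Y")])
```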

2.3.4. Seedling Segmentation and Metric Extraction

This last stage of the pipeline includes the following two steps: (i) seedling segmentation, and (ii) extraction of the seedling structural attributes. The input data for this stage were the isolated vegetation point cloud and the confirmed seedling locations.

Seedling Segmentation

Ideally, the seedlings were expected to appear as solid circular formations of point clusters of appropriate size, separated by at least the minimum planting distance. Using a region-growing method, point clusters that represented seedlings could be detected and any clusters that met the size and shape criteria could be safely delineated as seedling crowns.
Following the method proposed in [51], a decision tree method was implemented to grow individual crowns around the confirmed seedling location points. This was achieved through the following steps:
(i)
Each detected seedling location, with attached XYZ coordinates representing the top of the seedling crown, was marked as the initial seed point around which the seedling crown was grown.
(ii)
A maximum allowable distance between the seed point and neighboring points was defined. This represented the maximum crown diameter expected in a particular stand.
(iii)
A circle, centered on the initial seed point and having a diameter equal to the maximum allowable distance, was drawn and all UAV–DAP points falling within the circle were identified.
(iv)
The Z coordinates of these identified UAV–DAP points were then divided by the Z coordinate of the initial seed point and attached as a point attribute named “height fraction”. Subsequently, all the UAV–DAP points with a height fraction above 0.25 were labelled as potential crown points.
(v)
The distances between each of these potential crown points and the initial seed point were calculated and attached as a point attribute named “distance”. The potential crown points were then sorted in ascending order by distance.
(vi)
Starting from the potential crown point with the lowest distance, each potential crown point was visited and retained if its height fraction was above 0.25 but less than the height fraction of the previous point.
(vii)
A convex hull was fitted to the retained potential crown points and its circularity was estimated as $\frac{4\pi \times A}{P^2}$, where A and P are the area and the perimeter of the convex hull, respectively. The circularity value was 1 for a perfect circle and decreased to 0 for highly non-circular shapes.
(viii)
If the circularity of the convex hull was less than 0.6, starting from the point with the highest distance, points were removed until the circularity reached 0.6. A default circularity threshold of 0.6 was considered appropriate for this pipeline after testing various threshold values on weed-infested sites.
These steps were implemented iteratively for all the confirmed seedling locations (Figure 6a).
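The circularity check used in steps (vii) and (viii) can be illustrated with base R, computing the area and perimeter of the 2D convex hull of the candidate crown points. The function below is an illustrative sketch; the threshold of 0.6 matches the default stated above.

```r
# Sketch: circularity of the 2D convex hull fitted to candidate crown points.
crown_circularity <- function(xy) {
  hull <- xy[chull(xy), , drop = FALSE]               # convex hull vertices in order
  x <- hull[, 1]; y <- hull[, 2]
  # Shoelace formula for the polygon area.
  area  <- abs(sum(x * c(y[-1], y[1]) - c(x[-1], x[1]) * y)) / 2
  # Perimeter as the sum of edge lengths (closing the polygon).
  perim <- sum(sqrt(diff(c(x, x[1]))^2 + diff(c(y, y[1]))^2))
  4 * pi * area / perim^2                             # 1 = perfect circle, -> 0 for elongated shapes
}

# Points farthest from the seed point are dropped until circularity reaches the default threshold.
circularity_threshold <- 0.6
```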
In weed-infested sites, there could be point clusters that do not necessarily represent individual seedlings but clusters of vegetation as a result of weeds and regen that are established close to seedlings. This potential issue was resolved by using a maximum crown diameter threshold to prevent the delineation of seedling crowns of unrealistic size. In addition, a circularity threshold was applied to avoid unrealistic crown shapes that deviated significantly from the typical circular crown shape.

Seedling Metric Extraction

One of our objectives was to expand the seedling detection method to incorporate the generation of vegetation masks and the delineation of crown structures. At this stage, the locations of most seedlings had already been accurately identified following the procedures outlined in Section 2.3.3, and the UAV–DAP points belonging to each segmented seedling cluster had been classified as per the method described above. It was then possible to extract more precise individual seedling metrics, including crown dimensions and seedling height, from the detected segments. To achieve this, a 2D convex hull was fitted around each seedling cluster, and polygon parameters such as diameter (Figure 6b,d), circularity, and area were estimated. The outputs included vector files representing the confirmed seedling locations, potential missing seedling locations, potential seedling locations detected on the raw UAV–DAP point cloud, and crown polygons of confirmed seedlings, together with a database of seedling properties calculated from the spatial, structural, and spectral information of the points in each segment (e.g., maximum height, mean height, crown area, and mean RGBVI).

2.4. Parameterisation and Accuracy Assessment

All the steps in the pipeline were custom-written using the R software package (version 4.0.2; [50]), except for the initial UAV–DAP point cloud generation and processing step. Although the pipeline does not require training data, it employs rule-based techniques, which require the user to fine-tune a number of parameters to optimise the results (Table 2).
The seedling locations detected by the proposed pipeline were tested against the annotated seedling dataset, which recorded the seedlings present on the ground, to assess the performance of the proposed seedling detection method. To do this, a circle with a radius of 0.5 m was created around each of the detected seedling points, and spatial intersections between this layer and the annotated seedling layer were tested. If an annotated point fell within the circle, the detected seedling point was marked as a true positive; if no annotated point intersected the circle, it was marked as a false positive. Annotated seedlings that did not intersect any detection circle were counted as false negatives.
The performance of the detection method was assessed through precision, recall, and the F1 score, which were calculated as follows:
$$\mathrm{Precision} = \frac{TP}{TP + FP}$$
$$\mathrm{Recall} = \frac{TP}{TP + FN}$$
$$F1\ \mathrm{score} = \frac{2 \times \mathrm{Recall} \times \mathrm{Precision}}{\mathrm{Recall} + \mathrm{Precision}}$$
where TP, FP, and FN are the true positive (a prediction that correctly indicates the presence of a tree), false positive (a prediction that incorrectly indicates the presence of a tree), and false negative (a prediction that incorrectly indicates the absence of a tree) counts, respectively.
Precision measures the proportion of the positive predictions that were correct, while recall measures the proportion of actual positives that were correctly identified. The F1 score is the harmonic mean of precision and recall and is more useful than accuracy for unbalanced datasets, as it accounts for both false positives and false negatives [28]. The values for all metrics were expressed as a percentage (by multiplying by 100), and all ranged from 0–100%, with values >90% representing outstanding classification for the F1 score.
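A hedged sketch of the matching and metric computation is given below. Detected points are matched to annotated points within a 0.5 m radius, and unmatched annotations are counted as false negatives; the exact matching implementation used in the study may differ (for example, in how one-to-one matches are enforced), and the function and argument names are assumptions.

```r
# Sketch: accuracy assessment by matching detections to annotations within 0.5 m.
library(RANN)

assess_detections <- function(detected_xy, annotated_xy, radius = 0.5) {
  # Nearest annotated seedling for every detection.
  d2a <- nn2(annotated_xy, detected_xy, k = 1)
  tp  <- sum(d2a$nn.dists[, 1] <= radius)      # detection with an annotation nearby
  fp  <- nrow(detected_xy) - tp                # detection with no annotation nearby
  # Annotated seedlings with no detection nearby are false negatives.
  a2d <- nn2(detected_xy, annotated_xy, k = 1)
  fn  <- sum(a2d$nn.dists[, 1] > radius)

  precision <- tp / (tp + fp)
  recall    <- tp / (tp + fn)
  f1        <- 2 * precision * recall / (precision + recall)
  c(precision = precision, recall = recall, F1 = f1)
}
```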
In addition to assessing the overall performance of the method, we also assessed the performance of the predictions made using normalised UAV–DAP point clouds prior to vegetation point isolation.

3. Results

3.1. The Overall Accuracy of Seedling Detection

Averaged across all sites, the method's F1 score was 96.6%; the F1 score exceeded 95% at all sites, which represents outstanding performance (Table 3). The average recall across the sites slightly exceeded the average precision (98.0% vs. 95.2%), indicating that there were slightly more false positives than false negatives. The three Scion nursery stands and Rangipo had the highest F1 scores (range 97.9–99.1%), which was predominantly due to the very high precision achieved at these four sites (99.1–99.7%). In contrast, the F1 score was lowest within the three Kaingaroa and Tarawera sites (range 95.1–96.2%), predominantly because precision (range 92.1–94.6%) was lower than recall (range 96.7–98.4%).

3.2. Impact of the Multicriteria Evaluation on Seedling Detection Accuracy

To understand the impact of the cut-off score used in the multicriteria evaluation on the detections, we calculated detection statistics for cut-off scores ranging from 0 to 20. This sensitivity analysis examined how changes in the cut-off score affected true positives, false negatives, and false positives, thereby describing the impact of the multicriteria score on both the quantity and quality of detected seedlings. As shown in Figure 7, the number of false positives dwindled with increasing cut-off scores, subsequently increasing precision, which reached 98% when the cut-off score was at its highest. The most pronounced increase in false positives was seen in the seedling locations detected on the unfiltered UAV–DAP point cloud.
The false negatives in the unfiltered seedling location dataset can be categorised into two broad classes: (i) smaller seedlings, e.g., seedlings with poor growth or seedlings that were planted to replace dead seedlings that were barely visible on the UAV imagery and, thus, were inadequately reconstructed on the UAV–DAP point cloud (Figure 8); or (ii) seedling locations detected on the vegetation point cloud but which were then assigned a lower multicriteria score and subsequently filtered-out by higher cut-off scores.

3.3. Importance of Vegetation Point Isolation Using Spectral Information

The individual seedling locations detected on the unfiltered UAV–DAP point cloud, i.e., prior to vegetation isolation, constituted a substantial number of false positives (Figure 7), as the local maxima algorithm detected peak points on piles of harvest residues that had Z values in the same range as the maximum seedling height (Figure 5a). However, as expected, the vegetation point isolation stage of the proposed pipeline eliminated a substantial number of non-vegetation noise points from the input UAV–DAP point cloud, resulting in a cleaner point cloud, with the majority of points corresponding to seedlings, particularly on the sites with significant levels of harvest residue (e.g., Kaingaroa 1 and Kaingaroa 3; Figure 5b).

3.4. Results of Seedling Location Re-Testing

The potential missing and true seedling location vector files that were exported as an additional output of stage 3 of the pipeline were visually assessed by overlaying them on the orthomosaic. About 86% of the detected missing seedlings were confirmed as true missing seedlings (Figure 9a–c). About two-thirds of the seedlings that were missed on the vegetation point cloud but were located on the unfiltered UAV–DAP point cloud were small seedlings with very small crowns (Figure 9a,b). The remaining one-third comprised discoloured seedlings (e.g., standing dead and diseased seedlings) (Figure 9c), seedlings that deviated considerably from the planting row (Figure 9c), and large weeds growing in missing seedling locations (Figure 9d).

4. Discussion

4.1. Performance of the Pipeline

Using a comprehensive combination of remotely sensed metrics and algorithms, the developed pipeline was able to accurately detect trees. Previous studies that used unsupervised seedling detection methods on UAV–DAP point clouds have reported accuracies in the range of 76–95% [24,36]. Although the results from previous studies cannot be compared directly to our results due to the differences in methods used and species targeted, it should be noted that our results are within the upper range of reported values.
Although model performance was very high across all sites, there was some variation in tree detection accuracy among sites. The highest F1 scores were observed in the three Scion nursery stands and the Rangipo site. The excellent model performance at these sites can be attributed to the uniform planting spacing and the absence of weeds and regen that were spectrally and structurally similar to the planted seedlings. In contrast, the lowest F1 scores, observed within the three Kaingaroa and Tarawera sites, were most likely due to the presence of moderate to significant weed cover within these sites, which resulted in more false positives than at the sites with few weeds.
The high accuracy of our pipeline was attributable to the use of a combination of spatial, structural, and spectral metrics, which provided a comprehensive set of criteria for tree identification. The use of RGBVI efficiently segregated the vegetation and substantially reduced the number of false positives. Squaring the green band reflectance value in the RGBVI calculation amplifies the greenness of the vegetation [46] and, thus, was highly effective in discriminating between non-vegetation objects (e.g., harvest residue, soil, and some weeds) and green vegetation at all our test sites. The vegetation point isolation stage further reduced the number of points by at least 50%, making the dense UAV–DAP point clouds more manageable in the following steps of the pipeline, which substantially reduced the processing time. Further research should explore alternative VIs with potential for vegetation point isolation. For instance, the normalised difference vegetation index (NDVI) is known to exhibit robust correlations with parameters such as leaf area index (LAI), above-ground biomass, and plant vigour [52,53,54]. These strong correlations between VIs and LAI arise from their joint sensitivity to vegetation density: as leaf area (represented by LAI) increases, reflectance in the near-infrared rises while absorption in the red band strengthens, which elevates the values of indices such as NDVI.
At all sites, the false positives that remained after the vegetation point isolation stage originated from the weeds and regen that were not eliminated by the binary thresholding technique due to their spectral and structural similarity to the planted seedlings. Most of these false positives could be removed by increasing the cut-off score, further emphasising the advantage of using a combination of spectral, spatial, and structural metrics. However, it is crucial to understand that the increased precision comes at the cost of a higher number of false negatives, i.e., predictions that incorrectly indicated the absence of a seedling.
The majority of the seedlings that were missed on the vegetation point cloud but were located in the unfiltered UAV–DAP point cloud were small seedlings with very small crowns. These smaller seedlings were poorly reconstructed during the UAV–DAP process and, thus, were removed in the nearest neighbour noise filtering due to the small size of the point clusters. Some of these inadequate reconstructions could be improved to some extent by fine-tuning the UAV–DAP generation parameters on the photogrammetric software package, particularly by increasing the point density parameter [55]. This aspect could also be further improved using a moving window method in which the user is able to specify thresholds for each window separately to account for variations in the seedling size. Such a method would also be more suitable for uneven-aged plantations and sites characterised by a range of seedling sizes. It is also worth noting that higher-resolution UAV-based RGB imagery might be required to attain a higher detection rate in sites with small seedlings [4,24].

4.2. Downstream Applications

Accurate tree crown delineation is crucial for precision forestry, allowing for the estimation of various forest attributes, including crown size, stem volume, species classification, and vigour at the individual tree level [56,57,58]. Furthermore, incorporating crown metrics into models has been shown to enhance model fit and improve predictions of diameter at breast height [59], which is a commonly predicted forest attribute. In addition, studies indicate that individual tree metrics extracted from crown masks enhance above-ground biomass models, with the extent of improvement depending on the forest structure and species distribution [60].
The outcomes of the proposed method enable similar estimations and modelling for seedlings, especially in plantation forests. Forest managers will have the flexibility to customise this method to achieve diverse objectives, including seedling health assessments, survival analyses, weed infestation evaluations, and continuous seedling growth monitoring, ensuring accurate and timely oversight of forest health and productivity. Overall, the implementation of this method will lead to significant time and cost savings, enabling measurements to be taken more frequently.
This methodology also offers substantial time and cost savings when generating training datasets for supervised classification methods, such as machine learning applications in the forestry sector. Currently, the training datasets utilised by machine learning methods rely heavily on manual annotations, hindering their efficiency and restricting data size. The produced vector masks can be converted into training data for instance segmentation or object detection algorithms with minimal cleaning, thereby greatly reducing the amount of labour required to produce high-quality datasets for supervised methods [28,30]. In this pipeline, we utilised a scoring method that provides greater control over point selection and greater flexibility in terms of managing error margins. By setting a higher multicriteria score cut-off value, we were able to precisely regulate the purity of the generated training datasets. The proposed methodology not only accelerates data generation but also guarantees the availability of larger, more reliable datasets. Therefore, this method has the potential to significantly enhance the capabilities and applications of machine learning methods within the forestry sector.

5. Conclusions

In this study, we have developed an unsupervised method to detect and map young conifer seedlings using point clouds and orthomosaic imagery generated from high-resolution UAV-based RGB imagery through DAP. The processing pipeline comprises four main steps: (i) generating UAV–DAP point clouds from UAV RGB imagery, (ii) isolating vegetation points by utilising RGB spectral information from the UAV–DAP point cloud to eliminate non-vegetation points, (iii) identifying individual seedlings through a combination of structural and spatial details such as spacing and height, combined in a weighted scoring approach to detect potential seedling locations, and (iv) segmenting seedling crowns and extracting attributes like height and crown dimensions.
The efficacy of the developed method was assessed in radiata pine plantations with varying site conditions covering a wide range of seedling heights. The results showed high F1 scores (96.6%), precision (95.2%), and recall (98.0%) across a range of age classes and site conditions. Notably, the accuracy increased significantly when a multicriteria evaluation method was employed to filter out false detections, demonstrating the value of leveraging a combination of distinct yet complementary metrics.
In broader applications, this method could be adapted for purposes such as seedling health assessments, survival analyses, evaluating weed infestation levels, and monitoring seedling growth. Moreover, there is potential to adapt the pipeline for use with point cloud data captured by consumer-grade, colourised LiDAR sensors (e.g., DJI-L1 Zenmuse), which could subsequently enhance the precision of seedling attribute estimations.

Author Contributions

Conceptualisation, S.J. and G.D.P.; methodology, S.J. and G.D.P.; validation, S.J.; formal analysis, S.J.; investigation, S.J.; resources, G.D.P.; writing—original draft preparation, S.J.; writing—review and editing, M.S.W. and G.D.P.; supervision, M.S.W.; funding acquisition, M.S.W. and G.D.P. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by the Ministry of Business, Innovation and Employment (MBIE), programme entitled “Seeing the forest for the trees: transforming tree phenotyping for future forests” (programme grant number C04X2101).

Data Availability Statement

Data are contained within the article.

Acknowledgments

The authors would like to thank John Moore, Lien Pham and Michael Wilson from Timberlands Limited for providing processed SfM point clouds for use in this study. We extend our sincere gratitude to Hautu Rangipo Whenua Limited, Lake Taupo Forest Management and NZ Forest Managers for granting us access to their forests, and to Scion's Accelerator Trials team for providing access to the Scion accelerator trials and the associated data. We also extend our gratitude to Grant Evans, Robin J.L. Hartley, and Russell Main from Scion for their feedback on the manuscript before its submission. The authors are grateful to the reviewers for their invaluable suggestions, which significantly improved the quality of the manuscript.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

Table A1. Summary of site properties, field measurements and UAV flight parameters for the selected eight sites. GSD stands for ground sampling distance, while NA indicates that the information is not available.
| Site | Estab. Date | Area (ha) | Terrain | Weed Cover | Debris | Date of Field Measures | Seedling Height (m) | Date of UAV Imagery | Target Altitude (m) | Target Overlap (forward:side) | Flight Speed (m/s) | SfM Point Cloud Density (pt/m²) | GSD (cm) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Kaingaroa 1 | July 2016 | 2 | Gently rolling | Moderate weed cover (mostly radiata pine regen) | Moderate harvest residues | July–August 2017 | 2.9 (0.4–5.5) | June 2019 | 74 | 90:80 | 3.5 | 573 | 1.9 |
| Kaingaroa 2 | NA | 9 | Gently rolling | Significant weed cover | Little to moderate harvest residues | NA | NA | January 2022 | 100 | 85:85 | 7 | 201 | 2.7 |
| Kaingaroa 3 | August 2015 | 27 | Rolling | Moderate weed cover (mostly radiata pine regen) | Significant harvest residues | July–August 2017 | 0.4 (2.9–5.5) | August 2018 | 74 | 90:80 | 3.5 | 443 | 1.9 |
| Rangipo | August 2016 | 26 | Gently rolling | Little weed cover | Little harvest residues | September 2017 | 3.3 (0.6–5.6) | June 2019 | 74 | 90:80 | 3.5 | 580 | 1.9 |
| Scion Nursery 1 | October 2015 |  | Flat | Regularly mowed | No harvest residues | September 2017 | 4.2 (1.4–6.1) | April 2019 | 60 | 85:80 | 3 | 939 | 1.6 |
| Scion Nursery 2 | October 2016 | 0.9 | Flat | Regularly mowed | No harvest residues | September 2017 | 1.7 (0.34–3.1) | April 2019 | 60 | 85:80 | 3 | 939 | 1.6 |
| Scion Nursery 3 | October 2019 |  | Flat | Regularly mowed | No harvest residues | September 2017 | 0.4 (0.12–0.61) | March 2020 | 74 | 90:80 | 3.5 | 410 | 2.0 |
| Tarawera | NA | 25 | Gently rolling | Significant weed cover | Little to moderate harvest residues | NA | NA | January 2021 | 80 | 85:85 | 7 | 232 | 2.2 |

References

  1. Burdett, A.N. Physiological processes in plantation establishment and the development of specifications for forest planting stock. Can. J. For. Res. 1990, 20, 415–427. [Google Scholar] [CrossRef]
  2. Charles, L.S.; Dwyer, J.; Smith, T.J.; Connors, S.; Marschner, P.; Mayfield, M.; Cadotte, M.M. Species wood density and the location of planted seedlings drive early-stage seedling survival during tropical forest restoration. J. Appl. Ecol. 2017, 55, 1009–1018. [Google Scholar] [CrossRef]
  3. James, J.J.; Svejcar, T.J.; Rinella, M.J. Demographic processes limiting seedling recruitment in arid grassland restoration. J. Appl. Ecol. 2011, 48, 961–969. [Google Scholar] [CrossRef]
  4. Buters, T.; Belton, D.; Cross, A. Seed and Seedling Detection Using Unmanned Aerial Vehicles and Automated Image Classification in the Monitoring of Ecological Recovery. Drones 2019, 3, 53. [Google Scholar] [CrossRef]
  5. Fujimoto, A.; Haga, C.; Matsui, T.; Machimura, T.; Hayashi, K.; Sugita, S.; Takagi, H. An End to End Process Development for UAV-SfM Based Forest Monitoring: Individual Tree Detection, Species Classification and Carbon Dynamics Simulation. Forests 2019, 10, 680. [Google Scholar] [CrossRef]
  6. Hallett, L.M.; Standish, R.J.; Jonson, J.; Hobbs, R.J. Seedling emergence and summer survival after direct seeding for woodland restoration on old fields in south-western Australia. Ecol. Manag. Restor. 2014, 15, 140–146. [Google Scholar] [CrossRef]
  7. Roccaforte, J.P.; Fulé, P.Z.; Covington, W.W. Monitoring Landscape-Scale Ponderosa Pine Restoration Treatment Implementation and Effectiveness. Restor. Ecol. 2010, 18, 820–833. [Google Scholar] [CrossRef]
  8. Fardusi, M.J.; Chianucci, F.; Barbati, A. Concept to Practices of Geospatial Information Tools to Assist Forest Management and Planning under Precision Forestry Framework: A review. Ann. Silvic. Res. 2017, 41, 3–14. [Google Scholar] [CrossRef]
  9. Dash, J.P.; Watt, M.S.; Paul, T.S.H.; Morgenroth, J.; Pearse, G.D. Early Detection of Invasive Exotic Trees Using UAV and Manned Aircraft Multispectral and LiDAR Data. Remote Sens. 2019, 11, 1812. [Google Scholar] [CrossRef]
Figure 1. (a,b) Study site locations within the North Island of New Zealand. Also shown are UAV-acquired images of the (c) Scion nursery, (d) Tarawera, (e) Kaingaroa 1, (f) Kaingaroa 2, (g) Kaingaroa 3, and (h) Rangipo sites.
Figure 2. Overview of the general workflow for conifer seedling detection and segmentation.
Figure 3. Example results from stage 1 and stage 2 of the pipeline. A 50 × 50 m area within the Tarawera site is shown to visualise: (a) the raw UAV–DAP point cloud coloured by RGB values, (b) the normalised UAV–DAP point cloud coloured by height values, (c) the binary vegetation mask generated by Otsu thresholding of RGBVI values, (d) the vegetation point cloud after applying a minimum height filter and a nearest-neighbour noise filter, and (e) an individual tree point cloud segmented manually from the normalised UAV–DAP point cloud.
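For readers reproducing the vegetation-masking step shown in Figure 3c, a minimal sketch is given below. It assumes the orthomosaic bands are available as NumPy float arrays named red, green, and blue (scaled to [0, 1]); the array names are illustrative only, while the RGBVI formulation and Otsu thresholding follow the figure caption.

```python
# Minimal sketch of the vegetation-masking step in Figure 3c.
# Assumptions: red, green, blue are NumPy float arrays of the orthomosaic
# bands scaled to [0, 1]; these names are illustrative, not the authors' code.
import numpy as np
from skimage.filters import threshold_otsu

def rgbvi(red, green, blue):
    """Red-Green-Blue Vegetation Index: (G^2 - R*B) / (G^2 + R*B)."""
    num = green ** 2 - red * blue
    den = green ** 2 + red * blue
    return np.divide(num, den, out=np.zeros_like(num), where=den != 0)

index = rgbvi(red, green, blue)
veg_mask = index > threshold_otsu(index)   # binary vegetation mask (Figure 3c)
```

Points falling outside this mask, below the 0.2 m minimum height, or in clusters smaller than five points (Table 2) would then be discarded to produce the filtered vegetation point cloud in Figure 3d.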
Figure 4. Example detection of potential seedling locations within two 50 × 50 m areas at the Tarawera site. The results show the estimation of planting distance and row orientation in (a) an area with a single row orientation and (b) an area with two distinct row orientations.
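The row-based reasoning in Figure 4 relies on two quantities: the general planting distance and the dominant row orientation. The sketch below shows one simple way to estimate both from candidate seedling coordinates (an assumed (n, 2) array named xy); it is illustrative only and not the exact procedure used in the pipeline.

```python
# Illustrative estimation of planting distance and row orientation from
# candidate seedling positions; `xy` is an assumed (n, 2) coordinate array.
import numpy as np
from scipy.spatial import cKDTree

tree = cKDTree(xy)
dists, idx = tree.query(xy, k=2)            # k=2: each point plus its nearest neighbour
planting_dist = np.median(dists[:, 1])      # robust estimate of within-row spacing

vecs = xy[idx[:, 1]] - xy                   # vectors to nearest neighbours
angles = np.degrees(np.arctan2(vecs[:, 1], vecs[:, 0])) % 180.0
counts, edges = np.histogram(angles, bins=36, range=(0.0, 180.0))
row_orientation = edges[np.argmax(counts)] + 2.5   # centre of the modal 5° bin
```

A secondary orientation, where present (Figure 4b), would appear as a second strong mode in the same angle histogram.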
Figure 5. Images from a sample area at Kaingaroa 1 showing (a) fitted row segments, (b) the multicriteria evaluation score of each point, (c) the remaining seedling locations after filtering out points with scores < 12, and (d) dummy point placement with buffers drawn around them.
Figure 6. Illustrations of the crown segmentation process showing (a) UAV–DAP vegetation point cloud clusters representing individual seedling crowns coloured by cluster ID, (b) closeup of a 2D convex hull polygon drawn around a single cluster, (c) vertical representation of a point cluster, and (d) the results of seedling crown delineation for a section of the Tarawera site.
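A brief sketch of the crown-delineation idea in Figure 6b,c is given below: a 2D convex hull is fitted to each point cluster and simple crown metrics are read from it. The array name pts and the equivalent-circle crown diameter are illustrative assumptions, not the paper's implementation.

```python
# Sketch of crown delineation for one clustered seedling point cloud.
# `pts` is an assumed (n, 3) array of x, y, z coordinates for a single cluster.
import numpy as np
from scipy.spatial import ConvexHull

hull = ConvexHull(pts[:, :2])               # 2D convex hull of the crown (Figure 6b)
crown_polygon = pts[hull.vertices, :2]      # hull vertices in planimetric order
crown_area = hull.volume                    # for 2D input, .volume is the polygon area
crown_diameter = 2.0 * np.sqrt(crown_area / np.pi)   # equivalent-circle diameter
seedling_height = pts[:, 2].max()           # height of the highest point in the cluster
```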
Figure 7. Comparison of statistics for all sites combined at different cut-off scores of multicriteria evaluation. BVP represents the results of individual seedling location detection before the vegetation point isolation step. TP, FP, and FN are, respectively, true positive, false positive, and false negative.
Figure 8. Comparison of results of individual seedling location detection (a) before and (b) after vegetation point isolation in a small area of the Kaingaroa 1 site. Note the tree location points detected on piles of harvest residue in the left panel (a).
Figure 9. Example illustrations of true positives, false positives, and false negatives in a sample area at the Tarawera site.
Table 1. Seedling location evaluation criteria and scoring.
| Indicator | Weight | Score −1 | Score 0 | Score +1 |
|---|---|---|---|---|
| Maximum seedling height | 1 | < defined lower threshold (1 × −1) | Within the range (1 × 0) | > defined upper threshold (1 × 1) |
| Distance between actual and ideal point location (midXY) | 1.5 | > 0.2 times the general planting distance (1.5 × −1) | Between 0.1 and 0.2 times the general planting distance (1.5 × 0) | < 0.1 times the general planting distance (1.5 × 1) |
| Collinearity with nearest neighbours | 3 | No collinear neighbours in alignment with the general or secondary row orientation (3 × −1) | No collinear neighbours in alignment with the general row orientation, but present in alignment with the secondary row orientation (3 × 0) | Collinear neighbours in alignment with the general row orientation (3 × 1) |
| Mutual points with other collinear segments | 3 | Not present in any collinear segment (3 × −1) | Present in only one collinear segment (3 × 0) | Present in more than one collinear segment (3 × 1) |
| Distance from a fitted local line | 1.5 | Distance between the local line segment and the point is greater than 0.2 times the general planting distance (1.5 × −1) | Distance between the local line segment and the point is between 0.1 and 0.2 times the general planting distance (1.5 × 0) | Distance between the local line segment and the point is less than 0.1 times the general planting distance (1.5 × 1) |
| Total (including the base score) | 10 | −10 (0) | 0 (10) | 10 (20) |
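In other words, each candidate location starts from a base score of 10, to which the weighted criterion scores (weight × −1, 0, or +1) are added, giving a total between 0 and 20; locations scoring at least the cut-off (12 by default, Table 2) are retained. A worked sketch with hypothetical criterion scores and illustrative key names is shown below.

```python
# Worked example of the Table 1 scoring; the per-criterion scores are
# hypothetical and the dictionary keys are illustrative names only.
WEIGHTS = {
    "max_seedling_height": 1.0,
    "distance_to_ideal_location": 1.5,
    "collinearity_with_neighbours": 3.0,
    "mutual_points_with_segments": 3.0,
    "distance_from_fitted_line": 1.5,
}
BASE_SCORE = 10.0

def multicriteria_score(criterion_scores):
    """Each criterion score is -1, 0 or +1, as defined in Table 1."""
    return BASE_SCORE + sum(WEIGHTS[k] * s for k, s in criterion_scores.items())

score = multicriteria_score({
    "max_seedling_height": 0,
    "distance_to_ideal_location": 1,
    "collinearity_with_neighbours": 1,
    "mutual_points_with_segments": 1,
    "distance_from_fitted_line": 1,
})
# score = 10 + 0 + 1.5 + 3.0 + 3.0 + 1.5 = 19.0, above the default cut-off of 12
```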
Table 2. List of input parameters.
| User Input Parameter | Default Value | Required to Be Set by the User for Optimal Results |
|---|---|---|
| 1. Minimum height of UAV–DAP points that could represent a seedling | 0.2 m | No |
| 2. Minimum cluster size for noise filtering | 5 points | No |
| 3. Local maxima window size | 2.5 m | No |
| 4. Range of maximum seedling height | 0.5–5 m | Yes |
| 5. Variability of planting distance | ±20% of the general planting distance | No |
| 6. Cut-off score for seedling filtering | 12 | Yes |
| 7. Maximum allowable distance for crown delineation | 30% of the general planting distance | Yes |
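For scripting convenience, the Table 2 defaults could be gathered into a single configuration object, as in the illustrative sketch below; the parameter names are hypothetical and are not taken from the authors' pipeline.

```python
# Illustrative configuration mirroring the Table 2 defaults; the parameter
# names are hypothetical, not those used in the authors' code.
DEFAULT_PARAMS = {
    "min_point_height_m": 0.2,                   # minimum height of UAV-DAP points
    "min_cluster_size_pts": 5,                   # noise-filter cluster size
    "local_maxima_window_m": 2.5,                # local maxima search window
    "max_seedling_height_range_m": (0.5, 5.0),   # set per site for optimal results
    "planting_distance_tolerance": 0.20,         # ±20% of the general planting distance
    "multicriteria_cutoff_score": 12,            # set per site for optimal results
    "max_crown_delineation_dist": 0.30,          # fraction of planting distance; set per site
}
```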
Table 3. Summary of seedling detection accuracy assessment. Accuracy statistics were calculated using only the individual seedling locations with multicriteria scores ≥12.
| Site | Stand | Precision (%) | Recall (%) | F1 Score (%) |
|---|---|---|---|---|
| Kaingaroa | 1 | 94.6 | 97.8 | 96.2 |
| Kaingaroa | 2 | 92.1 | 98.4 | 95.1 |
| Kaingaroa | 3 | 93.6 | 98.3 | 95.9 |
| Scion nursery | 1 | 99.7 | 96.6 | 98.1 |
| Scion nursery | 2 | 99.3 | 96.5 | 97.9 |
| Scion nursery | 3 | 99.5 | 98.6 | 99.1 |
| Rangipo | | 99.1 | 99.0 | 99.0 |
| Tarawera | | 94.1 | 96.7 | 95.4 |
| All sites combined | | 95.2 | 98.0 | 96.6 |
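The statistics in Table 3 follow the usual detection definitions: precision = TP/(TP + FP), recall (sensitivity) = TP/(TP + FN), and F1 = 2PR/(P + R). The helper below applies these formulas; the counts in the example are hypothetical values chosen only to reproduce the "all sites combined" row.

```python
# Standard detection metrics used in Table 3; the example counts are hypothetical.
def detection_metrics(tp, fp, fn):
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)          # also called sensitivity
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

p, r, f1 = detection_metrics(tp=980, fp=49, fn=20)
# p ≈ 0.952, r = 0.980, f1 ≈ 0.966
```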
