Article

Estimation of the Living Vegetation Volume (LVV) for Individual Urban Street Trees Based on Vehicle-Mounted LiDAR Data

Co-Innovation Center for Sustainable Forestry in Southern China, Nanjing Forestry University, Nanjing 210037, China
* Author to whom correspondence should be addressed.
These authors contributed equally to this work.
Remote Sens. 2024, 16(10), 1662; https://doi.org/10.3390/rs16101662
Submission received: 22 February 2024 / Revised: 22 April 2024 / Accepted: 23 April 2024 / Published: 8 May 2024
(This article belongs to the Section Forest Remote Sensing)

Abstract

The living vegetation volume (LVV) can accurately describe the spatial structure of greening trees and quantitatively represent the relationship between this greening and its environment. Because of the mostly linear distribution and the complex species composition of street trees, as well as interference from artificial objects, current LVV survey methods are normally limited in their efficiency and accuracy. In this study, we propose an improved methodology based on vehicle-mounted LiDAR data to estimate the LVV of urban street trees. First, a point-cloud-based CSP (comparative shortest-path) algorithm was used to segment the individual tree point clouds, and an artificial objects and low shrubs identification algorithm was developed to extract the street trees. Second, a DBSCAN (density-based spatial clustering of applications with noise) algorithm was utilized to remove the branch point clouds, and a bottom-up slicing method combined with the random sample consensus (RANSAC) algorithm was employed to calculate the diameters of the tree trunks and obtain the canopy by comparing the variation in trunk diameters in the vertical direction. Finally, an envelope was fitted to the canopy point cloud using the adaptive AlphaShape algorithm to calculate the LVVs and their ecological benefits (e.g., O2 production and CO2 absorption). The results show that the CSP algorithm had a relatively high overall accuracy in segmenting individual trees (overall accuracy = 95.8%). The accuracies of the tree height and DBH extraction based on vehicle-mounted LiDAR point clouds were 1.66~3.92% (rRMSE) and 4.23~15.37% (rRMSE), respectively. For the plots on Zijin Mountain, the average LVV of an individual maple poplar was the highest (1049.667 m3), followed by that of the sycamore (557.907 m3), while that of the privet was the lowest (16.681 m3).

1. Introduction

A healthy urban forest plays a key role in air quality improvement and heat island effect mitigation [1,2,3]. Street trees have the functions of releasing oxygen, absorbing air pollution, transpiring water, and improving soil quality [4,5,6]. Furthermore, they play vital roles in regulating temperature and humidity on a regional scale. Evaluating the ecological value of street trees is of great importance, as it allows us to quantitatively understand their roles in air purification and mitigating environmental pollution [7]. The living vegetation volume (LVV) is one of the foremost indicators in the evaluation of the ecological benefits of urban street trees, and it is defined as the volume of branches and leaves within the tree canopy (m3) [8,9,10]. The LVV is highly correlated with the production capacity of trees, which is normally used to indicate the plant yield and ecological and economic benefits [11,12].
Traditionally, the LVVs of individual trees are measured by field survey, which is time-consuming and laborious [13]. Remote sensing technology can conduct large-scale observations in a relatively short period and obtain valuable remote sensing data. It has good timeliness and periodicity [14]. There are two approaches to estimating LVVs using optical remote sensing. One approach is to determine the structural information of trees in light of the parallax of two neighboring aerial images and then calculate the LVV based on the experimental formula of “crown diameter-crown height-volume” [15]. This approach relies on the estimated tree height and crown diameter, which easily accumulate errors [16]. Additionally, the fitted formula cannot fully accommodate the diverse complexities of tree canopy characteristics [17]. Another approach is to utilize UAV aerial photogrammetry and computer vision software (e.g., Pix4D4.5.6) to generate a three-dimensional model or point cloud model for the extraction of LVVs [18,19]. However, when dealing with natural objects such as vegetation canopies, capturing images with a UAV makes it difficult to obtain structural information under the canopy.
LiDAR is an advanced technology that is capable of capturing 3D measurements through the emission of laser light [20]. It excels at penetrating complex environments (including dense vegetation), capturing detailed tree information, and obtaining accurate elevation information for precise tree height and volume data. Lv et al. [21] used ULS (unmanned aerial vehicle laser scanner) data to obtain the point cloud data of 64 urban roadside ginkgo trees. Their developed algorithm (the voxel coupling convex hull by slices algorithm) accurately estimated the LVV of an individual ginkgo tree, achieving a high level of accuracy (RMSE = 11.17 m3). LiDAR systems can be classified as airborne LiDAR systems, vehicle-mounted LiDAR scanning systems, backpack LiDAR scanning systems, terrestrial LiDAR scanning systems, etc. [22,23,24]. The UAV LiDAR system is capable of acquiring forest information in a cost-effective and efficient manner. However, its special data acquisition style (from top to bottom) makes it difficult to penetrate the canopy of trees to capture the understory’s canopy structure information. At the same time, its point density is limited, which affects the accuracy of the extraction of a tree’s structural parameters. Terrestrial laser scanning (TLS) has advantages in accurately extracting individual trees’ structural parameter information, but for the extensive collection of street tree information, it requires the deployment of a large number of stations [25]. The data fusion and processing involved in this process are complex and tedious, making it difficult to complete within a brief duration [26]. The backpack LiDAR system has high portability and flexibility, making it easy to carry out surveys and scans in various environments, such as environments with a complex understory [27]. However, backpack LiDAR systems also face issues such as low efficiency and high error rates. In addition, when a backpack LiDAR system uses the SLAM algorithm, the system accumulates errors over time during self-localization and map building [28].
A vehicle-mounted LiDAR system (VLS) has the advantages of high efficiency and strong penetrability. It can obtain high-density point clouds and accurately acquire structural information under the tree canopy [29]. The VLS has a strong detection capability for the spatial structure of vegetation, which is applicable for quickly obtaining the structural information of trees over a wide area. Tao et al. [30] used data from a vehicle-mounted LiDAR system to segment the tree canopy of broadleaved and coniferous forests in China. Quantitative assessments at the point level revealed that, for the street trees scanned using the mobile LiDAR, all points were accurately classified as their respective trees.
Previous research has demonstrated that VLS data perform well when applied in various studies of urban street trees. Yan et al. [31] obtained the point data of 30 street trees (20 oriental plane (Platanus orientalis L.) trees and 10 ginkgo (Ginkgo biloba L.) trees) by VLS. They extracted individual tree crowns by identifying the first branching point of the crown and calculated their LVVs. Wu et al. [32] presented a new voxel-based marked neighborhood searching (VMNS) method to efficiently identify street trees and derive their morphological parameters from VLS point cloud data. The evaluation results show that the completeness and correctness of the method for street tree detection are over 98%. However, there are still some gaps and issues in the current research on the use of VLS to extract LVVs from street trees. One issue is how to automatically and accurately identify the tree point cloud from the original dataset containing massive natural and artificial objects. Another issue is how to accurately segment the tree-canopy point cloud and calculate the LVV efficiently. Few studies in the field of LVV extraction have considered the interference of artificial features. Furthermore, the accurate extraction of the tree crown structure and sequent calculation of the LVV have received little attention.
In order to accurately and nondestructively estimate the LVVs of urban street trees, this paper proposes an improved methodology based on vehicle-mounted LiDAR data. We first developed an artificial objects and low shrubs identification algorithm to accurately extract the street trees. Second, we utilized a DBSCAN algorithm to remove the branch point cloud. A bottom-up slicing method was used to calculate the diameter of the tree trunk and obtain the canopy. Finally, we fitted an envelope to the canopy point cloud using the adaptive AlphaShape algorithm to calculate the LVVs and their ecological benefits (e.g., O2 production and CO2 absorption). We conducted a statistical analysis to determine the LVV contributions of various tree species, as well as the ecological benefits they provide. The research objectives were as follows: (1) to develop an automated individual tree point cloud identification approach for extracting street trees based on VLS data, which takes into account the crown shape and height of the trees; (2) to develop a tree canopy point cloud extraction algorithm based on a bottom-up slicing method combined with the RANSAC and DBSCAN algorithms to estimate the LVVs and ecological benefits of individual trees.

2. Materials and Methods

To address the long planting routes and wide spatial extent of street trees, this paper designs a framework for the LVV estimation of urban street trees. This framework includes data collection, preprocessing, individual tree segmentation, tree extraction, and LVV calculation. The steps of the framework are as follows: (1) acquire precise 3D point cloud information of trees using a vehicle-mounted LiDAR system known for its efficiency and accuracy; (2) process the point cloud data by separating out ground points and, subsequently, segment the nonground point cloud into distinct entities; (3) use the shape index algorithm to remove the artificial features and low shrubs and then extract the tree point cloud; (4) use the DBSCAN algorithm to remove the lateral branch point cloud by clustering identification, combined with the RANSAC algorithm to fit the trunk cylinders and the slicing method to extract the individual tree canopy; (5) use the AlphaShape algorithm for the LVV calculation and to quantify the associated ecological benefits, such as O2 production and CO2 absorption. The technical route illustrating the methodology of this investigation is presented in Figure 1.

2.1. Study Area

Zijin Mountain is located in the eastern suburbs of Nanjing (118°48′24″~118°53′04″E, 32°01′57″~32°06′15″N), with a total area of about 30.09 km2 and a main peak altitude of 448.9 m, with a relative height of 420 m [33]. Zijin Mountain is situated in the northern subtropical region, characterized by an annual rainfall of 900 to 1000 mm and an average annual temperature exceeding 15.7 °C. The study area abounds in plant resources, featuring predominantly mixed deciduous and evergreen broad-leaved forests typical of the northern subtropical zone, with a prevalence of evergreen vegetation. A map of the study area is shown in Figure 2.

2.2. Data Collection

2.2.1. Field Investigation

Nine sample plots (100 m × 30 m) were established along the direction of the road, considering factors such as distance, slope, type of tree species, and density of canopy. The survey of all the trees included recording the individual tree species, DBH, tree height, canopy width, and canopy density, whereby the tree height was determined using a Vertex V laser altimeter (Haglöf, Sweden). Canopy width was measured with a tape measure as the projected crown distances in the two main directions. The positional data of each tree were recorded by a Trimble R8s GNSS receiver (Trimble, Kennesaw, GA, USA) with real-time differential correction and an accuracy of 5 cm. The parameters of the individual tree surveys within the sample plot are summarized in Table 1.

2.2.2. Vehicle LiDAR Data

A VLS equipped with a high-resolution (approximately 8.9-megapixel) RGB video camera and an XT 32 LiDAR sensor was used for LiDAR data acquisition. The LiDAR sensor was tilted at 45° from the horizontal plane to facilitate the acquisition of branch structure information in both the upper and lower regions of the forest canopy. The data were collected on 30 November 2021, when the vegetation in the study area showed rich colors with significant differences in leaf color among tree species. The weather was clear and windless at the time of data collection, and the vehicle speed was 40 km/h. The accuracy of the XT 32 LiDAR sensor was ±1 cm, the vertical viewing angle was −16~15°, the horizontal viewing angle was 0~360°, and the measurement range was 120 m. The camera was a wide-angle camera with approximately 8.9 megapixels, the frame rate was ≥3 fps, and the point density averaged 800 points per square meter (Figure 3). The parameters of the VLS are shown in Table 2.

2.3. Data Preprocessing

The study used the statistical outlier removal (SOR) filtering method, which is also known as spatial-distribution-based denoising filtering [34]. This approach examines the surrounding points within a specified neighborhood for each point. It computes the average distance (dij) from the point to its adjacent points and determines the mean (μ) and standard deviation (σ) of these average distances under a Gaussian distribution model (d~N(μ,σ)). If dij exceeds the maximum allowed distance, the point is designated as a noise point and subsequently removed. The formulas for the aforementioned method are outlined as follows:
$$ \mu = \frac{1}{mk}\sum_{i=1}^{m}\sum_{j=1}^{k} d_{ij} $$
$$ \sigma = \sqrt{\frac{1}{mk}\sum_{i=1}^{m}\sum_{j=1}^{k}\left(d_{ij}-\mu\right)^{2}} $$
where dij signifies the distance from each point to its adjacent points; i = [1, …, m] denotes m points; j = [1, ..., k] represents the total count of neighboring points; μ represents the average distance from each point to its adjacent points; and σ represents the standard deviation of the distances from each point to its neighboring points.
All points were iterated through, and each point whose average neighbor distance fell outside the specified confidence interval of the Gaussian distribution was removed as noise, as follows:
$$ \sum_{j=1}^{k} d_{ij} > \mu + x\sigma \quad \text{or} \quad \sum_{j=1}^{k} d_{ij} < \mu - x\sigma $$
where x is the multiple of the standard deviation.
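As an illustration, a minimal Python sketch of this SOR filter is given below; the neighborhood size k = 10 and multiplier x = 1.0 are example values, not parameters prescribed by this study.

```python
import numpy as np
from scipy.spatial import cKDTree

def sor_filter(points, k=10, x=1.0):
    """Statistical outlier removal: drop points whose mean distance to their
    k nearest neighbors lies outside mu +/- x*sigma of all mean distances."""
    tree = cKDTree(points)
    dists, _ = tree.query(points, k=k + 1)        # k+1 because the first neighbor is the point itself
    mean_dist = dists[:, 1:].mean(axis=1)         # average neighbor distance per point
    mu, sigma = mean_dist.mean(), mean_dist.std()
    keep = np.abs(mean_dist - mu) <= x * sigma
    return points[keep], points[~keep]

# Toy usage: a dense cluster plus a few isolated points far away
pts = np.vstack([np.random.rand(1000, 3), np.random.rand(5, 3) * 10 + 50])
clean, noise = sor_filter(pts)
print(len(clean), "points kept,", len(noise), "removed as noise")
```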
In the study, only the point cloud data of the road area were preserved. As a result, the buildings outside the road need to be cropped out. We set a 40 m horizontal buffer zone according to the GNSS track line to retain the filtered data, i.e., only the road area point cloud was retained.
The improved progressive TIN densification (IPTD) [35] filtering algorithm was applied to the cropped data to classify the ground points, and the point cloud was then normalized using the separated ground points, as follows:
$$ Z = Z_{i} - Z_{i}^{dem} $$
where $Z_i$ represents the elevation value at point i; $Z_i^{dem}$ represents the DEM pixel value at the corresponding location of point i. The normalized point cloud height value indicates the height of the point from the ground.
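The normalization itself reduces to a per-point DEM lookup and subtraction; a minimal sketch is given below, assuming the DEM is available as a regular grid with a known origin and cell size (nearest-cell lookup is used here for simplicity).

```python
import numpy as np

def normalize_heights(points, dem, origin, cell_size):
    """Height normalization Z' = Z_i - Z_i^dem using nearest-cell DEM lookup.
    `dem` is a 2D elevation grid whose lower-left corner is at `origin`."""
    cols = ((points[:, 0] - origin[0]) / cell_size).astype(int)
    rows = ((points[:, 1] - origin[1]) / cell_size).astype(int)
    rows = np.clip(rows, 0, dem.shape[0] - 1)
    cols = np.clip(cols, 0, dem.shape[1] - 1)
    out = points.copy()
    out[:, 2] -= dem[rows, cols]
    return out

# Toy usage: flat DEM at 10 m elevation; the returned z values are heights above ground
dem = np.full((100, 100), 10.0)
pts = np.array([[5.0, 5.0, 13.2], [20.0, 40.0, 10.0]])
print(normalize_heights(pts, dem, origin=(0.0, 0.0), cell_size=1.0))
```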

2.4. Individual Tree Segmentation

The comparative shortest-path (CSP) algorithm was utilized in this study. The algorithm is grounded on the premise that the forest structure changes in accordance with the shortest-path distance and considers that, where the branches of two trees of the same size intersect, the intersecting points most likely belong to the tree with the shorter trunk transmission distance [36]. If the trees are not of the same size, the transmission distances need to be normalized before comparison. This normalization is founded on the theoretical concept of metabolic ecology, which posits a strong correlation between branch length and radius characterized by an exponent approximately equal to two-thirds. Therefore, the following formula can be used to normalize the transmission distances of the trees [30]:
$$ D_{vTrunk}^{N} = D_{vTrunk} / DBH^{2/3} $$
where $D_{vTrunk}$ represents the transmission distance between v and the trunk; $D_{vTrunk}^{N}$ signifies the transmission distance normalized by the DBH.
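As a brief numerical illustration of this normalization (with hypothetical distances and DBH values), a point that is farther from a thick trunk can still obtain a smaller normalized distance than a point that is closer to a thin trunk:

```python
def normalized_transmission_distance(d_v_trunk, dbh):
    """Normalize a point-to-trunk transmission distance by DBH^(2/3)."""
    return d_v_trunk / dbh ** (2.0 / 3.0)

# A point 4.2 m (path length) from a 0.35 m DBH trunk vs. 3.8 m from a 0.18 m DBH trunk:
# the larger tree wins despite the longer raw path (about 8.5 vs. 11.9).
print(normalized_transmission_distance(4.2, 0.35), normalized_transmission_distance(3.8, 0.18))
```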
The precision of the individual tree segmentation was assessed by comparing the measured field data with the outcomes of the individual tree segmentation within the study area [37]. We recorded the quantity of trees obtained after segmentation, the number of accurately segmented trees, the number of incorrectly segmented trees, and the number of missed trees by comparing them with the measured values. The values of recall (r), precision (p), and F1-score (F1) were computed using the following formulas, respectively, all of which vary between 0 and 1 [38].
$$ r = \frac{TP}{TP + FN} $$
$$ p = \frac{TP}{TP + FP} $$
$$ F1 = \frac{2 \times r \times p}{r + p} $$
where r represents the percentage at which the trees are detected; p denotes the accuracy of the tree segmentation; and F1 is the overall accuracy of combining the wrong and missed segments. TP is the algorithm-detected quantity of tree crowns that corresponded to the visually observed results (i.e., correct segmentation); FN is the quantity of tree crowns present in the visual interpretation but not detected by the algorithm (i.e., missed segmentation); and FP is the quantity of tree crowns detected by the algorithm but not present in the visual interpretation (i.e., oversegmentation).
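These three metrics follow directly from the TP, FN, and FP counts, as in the short sketch below (the counts shown are placeholders, not results of this study).

```python
def segmentation_accuracy(tp, fn, fp):
    """Recall r, precision p, and F1-score from segmentation counts."""
    r = tp / (tp + fn)            # fraction of reference crowns that were detected
    p = tp / (tp + fp)            # fraction of detected crowns that are correct
    f1 = 2 * r * p / (r + p)
    return r, p, f1

# Placeholder counts for illustration only
print(segmentation_accuracy(tp=260, fn=6, fp=16))
```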

2.5. Tree Extraction

2.5.1. Low Shrub Removal

Since vehicle-mounted LiDAR exhibits strong penetration and a long range, the scanned point cloud data often include nontarget objects such as understory shrubs, road signs, utility poles, and other roadside greenery [39]. To facilitate further analysis, these objects must be removed from the dataset. Trees usually occupy more space and surface area than shrubs, so in most cases the LiDAR data of a single shrub contain fewer points than that of a tree [40]. Therefore, large trees and shrubs can be initially distinguished by the number of points per segmented object. Additionally, the ground survey data show that trees and shrubs also differ distinctly in individual height: trees typically exceed six meters, whereas shrubs are relatively short, measuring less than six meters and typically around three meters. The specific height threshold should therefore be determined from the characteristics observed in the sample plots. After examining the point cloud clusters resulting from the segmentation of individual trees in the sample plot LiDAR data, we identified initial height thresholds in the range of 6–8 m and point-count thresholds ranging from 1000 to 2000 (see figure for differences in points and heights between common shrubs and trees). Subsequently, we developed an algorithm that filters the segmented point cloud clusters by height and point count. After testing various parameter combinations on several sample plots, we found that classifying objects lower than 6 m with fewer than 15,000 points as shrubs effectively removed the shrub point clouds while retaining the trees. In this paper, objects with a point count of less than 15,000 and a height of less than six meters were therefore recognized as shrubs and removed.
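A minimal sketch of this height and point-count filter is given below; the thresholds follow the values stated above, while the input format (one height-normalized point array per segmented object) is an assumption for illustration.

```python
import numpy as np

def split_trees_and_shrubs(clusters, max_shrub_height=6.0, max_shrub_points=15000):
    """Classify segmented objects: an object lower than 6 m with fewer than
    15,000 points is treated as a shrub; everything else is kept as a tree.
    Each cluster is an (N, 3) array of height-normalized points."""
    trees, shrubs = [], []
    for cluster in clusters:
        height = cluster[:, 2].max()          # normalized z, so the maximum is the object height
        if height < max_shrub_height and len(cluster) < max_shrub_points:
            shrubs.append(cluster)
        else:
            trees.append(cluster)
    return trees, shrubs

# Toy example: a 3 m object with few points vs. a 12 m object with many points
small = np.random.rand(800, 3) * [1, 1, 3]
tall = np.random.rand(20000, 3) * [4, 4, 12]
trees, shrubs = split_trees_and_shrubs([small, tall])
print(len(trees), "tree(s),", len(shrubs), "shrub(s)")
```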

2.5.2. Artificial Objects Extraction

Differentiating trees from other objects such as billboards and street lamps is challenging because of their similarity in appearance. To address this issue, we propose an approach that fits an ellipse to each individual object through projection and filters the ellipses based on their shape index, denoted as r. The process involves projecting the point cloud of each tree vertically onto the horizontal (x–y) plane, fitting an ellipse to the projected points, extracting the major axis (a) and minor axis (b) of the fitted ellipse, and then calculating the shape index.
When r = 1, the projected shape is perfectly circular. As the value of r increases, the projected shape becomes a flatter ellipse. Because artificial objects such as billboards and streetlights appear flat in plan view, their shape indexes are larger than 1. Tree canopies, on the other hand, tend to be close to circular, so their shape indexes are very close to 1. The two can therefore be easily distinguished using the calculated shape index. After multiple experiments with the plot data, we established a threshold of 1.31 to distinguish between tree canopies and artificial objects. By retaining targets with a shape index of less than 1.31, we can effectively preserve tree canopies while efficiently removing artificial objects. Therefore, individual objects with a shape index r greater than 1.31 were identified as artificial objects and excluded. The specific process of removing artificial features is shown in Figure 4. The formulas for calculating the shape index are as follows:
$$ P = 2\pi b + 4\left(a - b\right) $$
$$ A = \pi a b $$
$$ r = \frac{P}{2\sqrt{\pi A}} $$
where a is the major axis of the ellipse; b is the minor axis of the ellipse; P is the perimeter of the fitted ellipse of the target shape; and A is the area of the fitted ellipse of the target shape.
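The sketch below illustrates the shape-index computation; for simplicity, the fitted ellipse is approximated by the half-extents of the projected points along their two principal axes rather than by a full ellipse-fitting routine, so it is an approximation of the procedure described above.

```python
import numpy as np

def shape_index(points_xy):
    """Shape index r = P / (2 * sqrt(pi * A)) of a crown projection, with the
    ellipse approximated by the half-extents along the two principal axes."""
    centered = points_xy - points_xy.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)   # principal axes of the projection
    proj = centered @ vt.T
    a = (proj[:, 0].max() - proj[:, 0].min()) / 2.0           # semi-major axis
    b = (proj[:, 1].max() - proj[:, 1].min()) / 2.0           # semi-minor axis
    a, b = max(a, b), min(a, b)
    perimeter = 2 * np.pi * b + 4 * (a - b)                   # ellipse perimeter approximation
    area = np.pi * a * b
    return perimeter / (2 * np.sqrt(np.pi * area))

# A near-circular crown projection gives r close to 1; an elongated, sign-like strip gives r well above 1
crown = np.random.randn(500, 2)
billboard = np.random.randn(500, 2) * np.array([4.0, 0.3])
print(shape_index(crown), shape_index(billboard))
```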

2.6. Individual Tree Structure Parameters Extraction

2.6.1. Remove Lateral Branches Based on DBSCAN Algorithm Combined with RANSAC Algorithm

For this research, we utilized a clustering algorithm called density-based spatial clustering of applications with noise (DBSCAN) to extract the DBH of individual trees because of its ability to effectively handle noise points and its efficient performance [41]. The DBSCAN algorithm does not require the number of clusters as an input. Instead, it uses the minimum number of points necessary to constitute a cluster (MinPts) and the neighborhood radius (the Eps neighborhood of a point) to automatically discover clusters of any shape. To start, DBSCAN selects a random point, p, from the point cloud, D. The Eps neighborhood of p, NEps(p), is the set of points q (including p itself) whose distance to p is within Eps:
$$ N_{Eps}\left(p\right) = \left\{ q \in D \mid \mathrm{dist}\left(p, q\right) \le Eps \right\} $$
If there are sufficient points in the neighborhood, a cluster is formed; otherwise, p is classified as an outlier [42]. This procedure is carried out for every unvisited point. When the points are approximately uniformly distributed, Eps can be derived from the extent of the point cloud and MinPts using the following equation [43]:
$$ Eps = \sqrt[n]{\frac{T \cdot MinPts \cdot \Gamma\left(\frac{n}{2} + 1\right)}{m\sqrt{\pi^{n}}}} $$
$$ T = \prod_{i=1}^{n}\left(\max\left(X_{i}\right) - \min\left(X_{i}\right)\right) $$
where m represents the quantity of points; n represents the number of dimensions of the points; Γ represents the gamma function; X denotes the dataset of the points; T represents the volume of the experimental space created by the m points.
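For reference, the Eps estimate can be computed directly from the data extent, as in the sketch below (the toy point cloud and MinPts value are illustrative only).

```python
import numpy as np
from math import gamma, pi, sqrt

def estimate_eps(points, min_pts):
    """Estimate the DBSCAN radius Eps from the data extent, assuming the
    points are roughly uniformly distributed (equation above)."""
    m, n = points.shape                                     # m points in n dimensions
    T = np.prod(points.max(axis=0) - points.min(axis=0))   # volume of the bounding box
    return (T * min_pts * gamma(n / 2 + 1) / (m * sqrt(pi ** n))) ** (1.0 / n)

# Toy uniform cloud in a unit cube: the returned radius encloses ~10 neighbors on average
pts = np.random.rand(5000, 3)
print(estimate_eps(pts, min_pts=10))
```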
The lateral branches of individual trees (noise) affect the subsequent analysis and the accuracy of the outcomes, and the inclination angle of a tree can also impact the precision of the DBH extraction, which in turn affects the accuracy of the subsequent tree crown extraction. The least-squares circle fitting commonly used for DBH extraction has difficulty addressing these issues. Therefore, the DBSCAN algorithm was used in the DBH extraction. First, point cloud clustering was performed based on the segmentation results, and the point clouds of the 1 to 2 m sections of the main stems of the individual trees were extracted. Branches were removed using DBSCAN clustering, and cylindrical models of the main stems were then fitted using the random sample consensus (RANSAC) algorithm. On this basis, the point cloud of the 1.2 to 1.4 m section was extracted, a cylindrical model was fitted with RANSAC, and the DBH was computed from the fitted cylinder. This approach avoids the extraction error caused by the tilt of the main stem of an individual tree. The specific workflow is depicted in Figure 5.
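A simplified sketch of this step is given below; it uses scikit-learn's DBSCAN to isolate the stem points in the 1.2–1.4 m slice and a least-squares circle fit as a stand-in for the cylinder-based RANSAC fitting described above, so the parameters and fitting strategy are illustrative rather than those used in the study.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def dbh_from_slice(points, z_min=1.2, z_max=1.4, eps=0.05, min_samples=10):
    """Rough DBH estimate: cluster the 1.2-1.4 m trunk slice with DBSCAN, keep the
    largest cluster (the stem, discarding lateral branches and noise), and fit a
    circle to it with an algebraic (Kasa) least-squares fit."""
    slab = points[(points[:, 2] >= z_min) & (points[:, 2] <= z_max)][:, :2]
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(slab)
    valid = labels[labels >= 0]
    if valid.size == 0:
        return None
    stem = slab[labels == np.bincount(valid).argmax()]
    # circle model x^2 + y^2 + D*x + E*y + F = 0 solved by linear least squares
    A = np.c_[stem[:, 0], stem[:, 1], np.ones(len(stem))]
    b = -(stem[:, 0] ** 2 + stem[:, 1] ** 2)
    D, E, F = np.linalg.lstsq(A, b, rcond=None)[0]
    return 2 * np.sqrt(D ** 2 / 4 + E ** 2 / 4 - F)   # diameter in point cloud units

# Toy trunk: a vertical cylinder of radius 0.15 m sampled between 0 and 3 m
theta = np.random.uniform(0, 2 * np.pi, 4000)
z = np.random.uniform(0, 3, 4000)
trunk = np.c_[0.15 * np.cos(theta), 0.15 * np.sin(theta), z]
print(dbh_from_slice(trunk))        # approximately 0.30
```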
The precision of the DBH extraction was assessed using the coefficient of determination (R2), root mean square error (RMSE), and relative root mean square error (rRMSE). We compared the values extracted from the vehicle-mounted LiDAR data with the field-measured DBH values to verify the precision of the LiDAR data. The formulas for the precision verification are as follows:
$$ R^{2} = 1 - \frac{\sum_{i=1}^{n}\left(x_{i} - \hat{x}_{i}\right)^{2}}{\sum_{i=1}^{n}\left(x_{i} - \bar{x}\right)^{2}} $$
$$ RMSE = \sqrt{\frac{1}{n}\sum_{i=1}^{n}\left(x_{i} - \hat{x}_{i}\right)^{2}} $$
$$ rRMSE = \frac{RMSE}{\bar{x}} \times 100\% $$
where $x_i$ is the measured value of the ith tree; $\hat{x}_i$ is the predicted value; $\bar{x}$ is the average value; and n is the number of all of the measured trees.
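These metrics can be computed as in the short sketch below (the DBH values shown are hypothetical).

```python
import numpy as np

def accuracy_metrics(measured, predicted):
    """R^2, RMSE, and rRMSE between field-measured and LiDAR-derived values."""
    measured = np.asarray(measured, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    residual = measured - predicted
    r2 = 1 - np.sum(residual ** 2) / np.sum((measured - measured.mean()) ** 2)
    rmse = np.sqrt(np.mean(residual ** 2))
    rrmse = rmse / measured.mean() * 100      # expressed as a percentage
    return r2, rmse, rrmse

# Hypothetical DBH values (cm) for illustration only
print(accuracy_metrics([20.1, 35.4, 28.0, 42.3], [19.5, 36.2, 27.1, 41.0]))
```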

2.6.2. Slicing Method of the Crown Extraction

After utilizing the DBSCAN algorithm to remove the tree branch point clouds, a bottom-up slicing approach combined with the random sample consensus (RANSAC) iteration method was developed to calculate the diameters of the tree trunks. Based on the growth characteristics of the trees, we observed that trunk diameters usually do not vary greatly from top to bottom, while crown widths are much wider than trunk diameters. The variation in trunk diameters in the vertical direction was therefore compared to capture the tree crown point clouds. Slicing was carried out vertically, the segmentation height was set to 0.1 m, and the slice thickness was 2d (d = 0.02 m). Each point cloud was segmented into n slices based on the height, h. The diameter of each layer's point cloud was calculated and sequentially stored as d1, d2, ..., dn, and the diameters of adjacent slices were compared. When a slice reached the crown section of a tree, its diameter increased abruptly and significantly. This allowed the starting elevation of the crown to be determined and the canopies to be extracted (Figure 6).
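A minimal sketch of this bottom-up crown-base detection is given below; the slice diameter is approximated by the horizontal extent of each slice, and the jump ratio used to flag the crown base is an illustrative parameter rather than a value fixed by the study.

```python
import numpy as np

def find_crown_base(points, slice_height=0.1, jump_ratio=1.5):
    """Bottom-up slicing: approximate a diameter for every 0.1 m slice and return
    the height at which the diameter abruptly increases (taken as the crown base)."""
    z0, z1 = points[:, 2].min(), points[:, 2].max()
    edges = np.arange(z0, z1, slice_height)
    diameters = []
    for low in edges:
        s = points[(points[:, 2] >= low) & (points[:, 2] < low + slice_height)]
        if len(s) < 5:
            diameters.append(np.nan)
            continue
        extent = s[:, :2].max(axis=0) - s[:, :2].min(axis=0)
        diameters.append(extent.max())        # horizontal extent as a diameter proxy
    for i in range(1, len(diameters)):
        if np.isnan(diameters[i - 1]) or np.isnan(diameters[i]):
            continue
        if diameters[i] > jump_ratio * diameters[i - 1]:
            return edges[i]                   # starting elevation of the crown
    return None

# Toy tree: a narrow trunk below 5 m topped by a wide crown
trunk = np.c_[np.random.uniform(-0.1, 0.1, (3000, 2)), np.random.uniform(0, 5, 3000)]
crown = np.c_[np.random.uniform(-2, 2, (6000, 2)), np.random.uniform(5, 9, 6000)]
print(find_crown_base(np.vstack([trunk, crown])))   # close to 5 m
```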

2.6.3. Extract Living Vegetation Volume

This study utilized the AlphaShape algorithm to estimate the LVV value for individual trees. The AlphaShape algorithm is a geometric shape reconstruction algorithm used for processing point cloud data [44]. It can be used to control the smoothness and level of detail of shapes based on the parameter alpha. Specifically, the AlphaShape algorithm can be employed to find the convex hull or nonconvex shapes of a set of points, with the alpha value determining the “looseness” of the geometric shape formed by the point set.
Since LVV is challenging to measure directly, we compared the calculated results with the traditional method (i.e., voxel method), which is a common approach for estimating the volume of three-dimensional models. Its principle involves dividing the three-dimensional model space into small cubic units (referred to as voxels) and calculating the total volume of these voxels to estimate the overall volume of the model. We chose to create a Bland–Altman plot to calculate the average values and differences of the volume results between the two methods in order to compare their consistency [45]. Please refer to Figure 7 for details on the two methods.
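To make the comparison concrete, a minimal sketch of the voxel-based volume estimate is given below (the AlphaShape envelope itself is not re-implemented here); the 0.4 m voxel size follows the parameter selection in Section 3.6.1, while the synthetic crown is purely illustrative.

```python
import numpy as np

def voxel_volume(points, voxel_size=0.4):
    """Voxel-based crown volume: count occupied cubes of side `voxel_size`
    and multiply by the single-voxel volume."""
    idx = np.floor((points - points.min(axis=0)) / voxel_size).astype(int)
    occupied = len(np.unique(idx, axis=0))
    return occupied * voxel_size ** 3

# Toy crown: points uniformly filling a sphere of radius 2 m (true volume ~33.5 m^3);
# partially filled boundary voxels make the voxel estimate slightly larger.
p = np.random.uniform(-2, 2, size=(80000, 3))
p = p[np.linalg.norm(p, axis=1) <= 2.0]
print(voxel_volume(p, voxel_size=0.4))
```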

2.6.4. Ecological Benefits Calculation

For the ecological benefits calculation, we referenced the standards for converting the LVV into environmental benefits from the research team of Dong and Wan [46]. We transformed the estimated LVV into O2 production, CO2 absorption, SO2 absorption, TSP (total suspended particulates) retention, and summer plant evaporation. Based on these five indicators, the ecological benefits of the forest stand were estimated [47]. For every 10,000 m3 of LVV, evergreen plants can absorb 48.5 t of carbon dioxide and release 35.2 t of oxygen per year, and deciduous plants can absorb 26.2 t of carbon dioxide and release 19.0 t of oxygen per year; mixed coniferous and broad-leaved forests can absorb 30.3 kg of sulfur dioxide and retain 11 t of dust annually, and their transpiration in summer is 5.5 t/d [48].
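These conversions are simple linear scalings of the LVV; the sketch below applies the coefficients listed above and, for deciduous species, approximately reproduces the plot 1 values reported in Section 3.7.

```python
# Conversion factors per 10,000 m^3 of LVV, as listed above: CO2 absorption and O2
# release differ between evergreen and deciduous plants, while the SO2, dust (TSP),
# and summer transpiration figures refer to mixed coniferous and broad-leaved forests.
LEAF_HABIT_FACTORS = {
    "evergreen": {"CO2_t": 48.5, "O2_t": 35.2},
    "deciduous": {"CO2_t": 26.2, "O2_t": 19.0},
}
MIXED_FOREST_FACTORS = {"SO2_kg": 30.3, "TSP_t": 11.0, "summer_evaporation_t_per_day": 5.5}

def ecological_benefits(lvv_m3, leaf_habit="deciduous"):
    """Scale the per-10,000 m^3 conversion factors by the estimated LVV."""
    scale = lvv_m3 / 10000.0
    benefits = {k: v * scale for k, v in LEAF_HABIT_FACTORS[leaf_habit].items()}
    benefits.update({k: v * scale for k, v in MIXED_FOREST_FACTORS.items()})
    return benefits

print(ecological_benefits(19469, leaf_habit="deciduous"))   # total LVV of plot 1
```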

3. Results

3.1. DEM Accuracy Verification

The ground points were resampled after normalization, and accuracy verification was carried out on the generated DEM. The verification shows that the RMSE of the densely vegetated area (198.33 plants/hm2) was 3.34 m, the RMSE of the sparsely vegetated area (96.67 plants/hm2) was 0.19 m, and the RMSE of the road (0 plants/hm2) was 0.03 m, exhibiting high accuracy (Table 3).

3.2. Individual Tree Segmentation

Through the application of the CSP segmentation algorithm, the trunks were identified and the individual trees were segmented. Nine 100 m validation sample areas were selected in the test area, and the numbers of true positive, false negative, and false positive results were calculated by comparing the segmented objects in the test area with the visual interpretation data.
The overall count of individual trees derived from the individual tree segmentation was 275, the recall (r) was 97.7%, the precision (p) was 94.1%, and the F1-score was 95.8%. The individual tree segmentation results can be seen in Table 4.

3.3. Tree Extraction Results

First, we calculated the point count for each segmented object and identified those with point counts below a predefined threshold (15,000) as shrubs. Considering the growth characteristics of different tree species, trees are usually able to reach a height of six meters or more, while shrubs tend to be between three and six meters tall. Therefore, according to the segmentation outcomes for each tree, we set a height threshold to remove low vegetation. In practice, the height threshold would also be adjusted according to the distribution of tree species in specific road sections. Through our testing, we found that setting the initial height threshold to eight meters performed well. This step effectively eliminates low vegetation and shrubs growing under the forest canopy (Figure 8).
We vertically projected the point cloud data of each tree onto the horizontal (x–y) plane and fitted an ellipse to the projected points. We then extracted the major axis (a) and minor axis (b) of the fitted ellipse and calculated its shape index (r). By comparing the projected shape index with the threshold value of 1.31, we identified individual objects with higher values as artificial features and rejected them from further analysis. The results are shown in Figure 8.

3.4. Tree Height Extraction and Assessment

We verified the accuracy of the tree height values extracted from the LiDAR data using the R2, RMSE, and rRMSE. The regression relationships between the tree heights estimated from the LiDAR data and the measured values are shown in Figure 9 and indicate a clear linear correlation, with a slope close to 1 and good robustness. The tree height fitting accuracy based on the vehicle-mounted LiDAR point clouds is high, with the R2 ranging from 0.85 to 0.99, the RMSE ranging from 0.17 m to 0.76 m, and the rRMSE ranging from 1.66% to 3.92% (Figure 9).

3.5. DBH Extraction and Assessment

We verified the precision of the DBH values extracted from the LiDAR data using the R2, RMSE, and rRMSE. The regression relationships between the DBHs estimated from the LiDAR data and the measured values are shown in Figure 10 and indicate a clear linear correlation, with a slope close to 1 and good robustness. The DBH fitting accuracy based on the vehicle-mounted LiDAR point clouds was high, with the R2 ranging from 0.82 to 0.96, the RMSE ranging from 1.14 cm to 9.77 cm, and the rRMSE ranging from 4.23% to 15.37% (Figure 10).

3.6. Living Vegetation Volume

3.6.1. Algorithm Comparison and Assessment

When using the AlphaShape algorithm, the parameters can significantly affect the results. Gaps in the point cloud data can result in an underestimated volume fit when smaller parameters are selected. Conversely, choosing excessively large parameters may lead to a rough fit that overlooks structural details, resulting in an overestimated volume fit. In the case of estimating the LVV using the voxel method, the size of the voxels directly affects the precision of the volume estimation. Smaller voxels can provide a more precise estimation, but they may yield less accurate results due to missing point cloud data. On the other hand, larger voxels may lead to a rough estimation. Therefore, it is crucial to select an appropriate voxel size based on the model characteristics and precision requirements. In reference to the study by Lv et al. [21] and the results of multiple experiments, we selected alpha = 1 and voxel size = 0.4 as the fitting parameters. The effects of the different parameters of the algorithm can be seen in Figure 11. Additionally, we used a Bland–Altman plot to compare the two methods and validate the feasibility of the AlphaShape method. The result is shown in Table 5.
From the Bland–Altman results, it is evident that the mean difference between the two methods is 0.724, which is close to 0, with the majority of data points falling within the 95% limits of agreement. These results indicate strong consistency between the two methods. This demonstrates the accuracy and robustness of the AlphaShape method in obtaining the LVV. The similarity in the results of the two methods underscores the reliability of the AlphaShape method in estimating the LVV. The result is shown in Figure 12.

3.6.2. LVV Accuracy Validation

Calculating the LVV of an individual tree based on the measured crown diameter and crown height can serve as a ground truth to validate the accuracy of the model [21,31,49]. To evaluate the accuracy of the AlphaShape method in calculating tree canopy volume from VLS data, we selected the manual measurement values of 100 trees, including sycamore, Koelreuteria paniculata, magnolia, Celtis sinensis, weeping willow, ginkgo, and privet trees, for verification. The formulas for the manual measurements are as follows:
For trees with an ovoid crown shape, such as sycamore, weeping willow, and Celtis sinensis trees, the crown volume can be calculated using the following formula [49]:
$$ V = \frac{\pi x^{2} y}{6} $$
where x is the crown diameter, and y is the crown height.
For trees with a conical crown shape, such as ginkgo trees and magnolia trees, the crown volume can be calculated using the following formula [50]:
$$ V = \frac{\pi x^{2} y}{12} $$
The values of x and y are the same as in the formula above.
For trees with a spherical sector crown shape, such as privet and Koelreuteria paniculata trees, the following formula was used:
$$ V = \frac{\pi\left(2y^{3} - y^{2}\sqrt{4y^{2} - x^{2}}\right)}{3} $$
From the results (Figure 13), we can conclude that the AlphaShape algorithm has high accuracy in extracting the LVV (R2 = 0.89~0.96, RMSE = 6.63~44.73 m3, and rRMSE = 7.40~34.64%).

3.6.3. LVV Estimation

The study extracted the average LVV of the main tree species in nine sample plot sections. The road section with the largest total LVV was plot 1 (sycamore (Platanus × acerifolia (Aiton) Willd.)/maple poplar (Pterocarya stenoptera C. DC.)), with a total of 19,469 m3 and an average LVV of 811.20 m3 per tree. The road section with the smallest total LVV was plot 5 (magnolia (Magnolia liliiflora Desr)/ginkgo (Ginkgo biloba L.)), with a total of 702.55 m3 and an average LVV of 58.55 m3 per tree. The LVV visualization of the nine plots (consisting of various tree species) is shown in Figure 14. It includes the maximum LVV of these trees, the minimum LVV of these trees, and their average value for each plot.
The tree species with the highest average LVV per individual tree was maple poplar (Pterocarya stenoptera C. DC.), with a LVV of 1049.67 m3 per tree, while the tree species with the lowest LVV per individual tree was privet (Ligustrum lucidum Ait.), with a LVV of 16.68 m3 per tree. The distribution map of the individual tree LVV is shown in Figure 15.

3.7. Ecological Benefits

The ecological benefits produced by the sample sections composed of different tree species also varied considerably, with plot 1 (maple poplar/sycamore) having the greatest ecological benefits, as follows: O2 production of 36.99 t/a, CO2 absorption of 51.01 t/a, SO2 absorption of 58.99 kg/a, TSP retention of 21.42 t/a, and summer evaporation of 10.71 t/d. Plot 5 (magnolia/ginkgo) had the least ecological benefits, with an O2 production of 1.91 t/a, CO2 absorption of 2.63 t/a, SO2 absorption of 1.95 kg/a, TSP retention of 0.71 t/a, and summer evaporation of 0.35 t/d. On the basis of these findings, planting more ecologically beneficial trees, such as sycamore and maple poplar, in cities may help improve the urban air environment. The total ecological benefits generated by each plot are shown in Table 6.

4. Discussion

This study was based on high-density vehicle-mounted LiDAR point clouds of urban street trees. The height and point-count features obtained through point cloud clustering segmentation were used for the extraction of tree point clouds. The RANSAC algorithm was integrated with the DBSCAN-based bottom-up point cloud segmentation method to detect individual tree canopy point clouds. The accurate LVV information of individual tree crowns was obtained through the construction of crown convex hulls. The study then calculated the LVV contributions of the predominant tree species within the sample plots and their ecological benefits. Many domestic and international studies have detailed methods for extracting LVV. Compared to other studies estimating LVV, this paper elaborates on the algorithm for automatically removing shrubs and artificial objects in the point cloud data processing section. This step facilitates the estimation of individual tree LVVs for different tree species in the subsequent analysis. There are still some limitations in verifying the accuracy of the extracted LVV values. Currently, obtaining field data is difficult, so most scholars use LVV values extracted from LiDAR data as the truth to validate their methods. Luo et al. [51] used the canopy height model (base area multiplied by height) to calculate the LVV, which was considered the ground truth value. Zhu et al. [52] used the values extracted using the convex hull method as true values. In this paper, we conducted a consistency test between the LVVs extracted using the AlphaShape method and the voxel method to demonstrate the feasibility of this approach. Additionally, we investigated the influence of different voxel sizes on the accuracy of the volume fitting using AlphaShape as the true value. On the basis of the findings of this study, planting more ecologically beneficial trees, such as sycamore and maple poplar, in cities may help improve the urban air environment.

4.1. Improvement of This Work

This study used a vehicle-mounted LiDAR for data collection based on the principle of the linear distribution of road canopy trees. The vehicle-mounted LiDAR continuously scans the road canopy trees during the driving process, thereby obtaining a large amount of point cloud data [53]. This data collection method does not require parking or additional equipment installation, greatly improving the efficiency of the data collection. Moreover, as road canopy trees are generally distributed linearly along the road, a vehicle-mounted LiDAR can effectively capture their position and shape information. Compared to ground LiDAR scanners, data collection using vehicle-mounted LiDAR is more efficient and yields more complete point clouds, making it more suitable for monitoring the LVVs of road canopy trees and providing important support for urban road tree management and planning [54].
The accuracy of the individual tree segmentation will affect the accuracy of the crown width segmentation. If the crowns of adjacent trees of different species intersect and are not accurately segmented, the crowns of the two trees will be either too large or too small. Incorrect crown point clouds will lead to errors in the calculated LVVs, thereby affecting the calculation of the subsequent ecological benefits. Therefore, the accuracy of the individual tree segmentation will affect the estimation of the LVV. This paper employed the CSP algorithm to segment individual tree trunks and crowns from street trees captured using a mobile LiDAR system, encompassing both broad-leaved and coniferous forests [36,55]. In recent decades, there have been noticeable advancements in single-tree segmentation algorithms, with the majority relying on airborne LiDAR data. Traditionally, trees are identified using CHMs derived from the first return points. Hyyppa et al. utilized a region-growing method to segment trees from CHM images in coniferous stands [26]. The CSP algorithm demonstrates high accuracy, not only in the number of trees detected but also in the segmentation itself (r = 97.7%, p = 94.1%, and F1 = 95.8%).
Artificial objects in urban areas, such as billboards and streetlights, are mixed in the point clouds of road canopy trees, and undergrowth shrubs can affect further tree crown extraction and analysis. Many studies do not mention removing the influence of such objects. Wu and Li [56] used the spatial distribution and geometric characteristics of typical objects in the urban environment to predefine elevation thresholds for different objects. In our study, in accordance with various geometric attributes, the shape index r was selected as the criterion for judging and removing artificial object point clouds when extracting tree point clouds. The threshold for removing objects was determined to be 1.31. Most tree crowns are close to circular when viewed from above, while billboards and streetlights have relatively flattened, elliptical projections. There is significant variation between the shapes of crowns and artificial objects, and the shape index r can quantify these shape characteristics and differentiate targets with different shapes. For example, the tree crown shape index r is close to 1, while billboards and streetlights have higher r values. Testing determined that a shape index of 1.31 appropriately distinguishes these objects.
The slicing method for extracting individual tree canopy point clouds can be affected by lateral branches and noise, leading to inaccurate canopy extraction and thus an inaccurate calculation of the LVV. This study utilized the DBSCAN algorithm to identify and remove tree branch point clouds. Additionally, a bottom-up slicing approach combined with RANSAC was used to calculate the diameter of the tree trunks, and the variation in trunk diameters in the vertical direction was compared to extract the tree crown point clouds. Through the developed slicing method combined with the improved DBH algorithm described above, the crown was extracted more accurately after removing the interference of lateral branches, which helps to improve the accuracy of the LVV calculation.
Unlike previous studies, this paper added a convenient algorithm for the batch extraction of individual tree canopies before extracting the LVV. Previous work extracted tree canopies by manually confirming the starting point of the canopy [31], which is a less efficient process. Our method slices the tree in the vertical direction, compares the diameter differences between the slices, and identifies the tree canopy automatically, allowing canopy point cloud data to be extracted for a large number of trees and significantly improving efficiency.
Existing LVV quantification models mainly rely on the measurement of growth parameters of individual tree species, making it difficult to generalize a single measurement model across different tree species. Luo et al. [51] used a drone aerial imaging system to obtain images of the Shanghai Botanical Garden and estimated the LVV. This study adopts the AlphaShape method to fit the envelope surface of the main road canopy trees and estimate the LVV of individual trees. The AlphaShape algorithm fits the envelope surface without being affected by irregular tree crown growth, resulting in more accurate shape simulation and, thus, more accurate estimation of the LVV.
The extraction of the LVV on Zijin Mountain can be applied to similar environments to obtain the LVV values for different types of tree species under similar conditions. This can provide a basis for urban forestry greening data and technical parameters for artificial vertical greening design. It not only improves ecological benefits but also has aesthetic effects, further contributing to the goal of environmentally friendly urban landscape forests.

4.2. Advantages and Limitations in Estimating LVV

Vauhkonen et al. [57] found that as the point cloud density decreases, the accuracy of predicting individual tree features using the concave hull algorithm decreases. This conclusion is consistent with our research. We thinned the point cloud data to different degrees, categorized into different densities—100% (2621.6 pts/m2), 75% (1966.2 pts/m2), 50% (1310.8 pts/m2), 25% (655.4 pts/m2), 10% (262.2 pts/m2), and 5% (131.1 pts/m2)—and calculated their LVVs and the changes in RMSE. As shown in the graph results, with a decrease in point density, the estimated volume also decreased, and the accuracy of the extracted LVV became lower. Specifically, when the point density dropped below 25% (655.4 pts/m2), the RMSE exceeded 2.34 m3; when the point density was 25% or above, a high accuracy was achieved (RMSE = 1.85~2.34 m3). The accuracy varies under different densities, but overall, the method maintains a relatively high accuracy across densities. The results can be seen in Figure 16.
The size of the voxel also affects the LVV extraction. When the voxel size is too large, the extracted LVV is overestimated; when the voxel size is too small, gaps appear in the generated voxels, leading to an inaccurate LVV extraction. In the study by Lv et al. [21], the LVV obtained by the voxel algorithm increased linearly with the voxel size, and the RMSE first decreased and then increased, reaching its minimum at a voxel size of 0.4 m (RMSE = 12.03 m3).

4.3. Challenges and Prospects

Currently, obtaining field data is difficult, so most scholars use LVV values extracted from LiDAR data as the truth to validate their methods. Luo et al. [51] used the canopy height model (base area multiplied by height) to calculate the LVV, which was considered the ground truth value. Zhu et al. [52] used the values extracted using the convex hull method as true values. In principle, we believe that the LVV calculated using the canopy height model is not as accurate as the LVV extracted by AlphaShape, because AlphaShape extracts the LVV based on the actual shape of the tree crown, while the formulaic method is an estimation. The trees in the sample plots are tall, making it challenging for the laser emitted by TLS to penetrate the canopy and obtain complete tree crown structure information. In the future, TLS-collected data can be combined with UAV data to obtain more complete high-density point cloud data for validation.
Our research did not involve automatic tree species identification based on LiDAR color information. Future research could explore tree species automatic identification based on LiDAR color information.
Uneven scanning coverage of the vehicle-mounted LiDAR can also affect the LVV estimation, since missing point cloud data do affect the results. In future research, backpack LiDAR can be used to obtain point clouds from the other side of the trees to improve the accuracy of the experiments.
In this study, the impact of height normalization on the estimation and extraction of tree height and LVV was negligible because the terrain is relatively flat. Khosravipour et al. [58] discovered that on steep slopes, the raw elevation values located on either the downhill or the uphill part of a tree crown are height-normalized with parts of the digital terrain model that may be much lower or higher than the tree stem base, respectively. They suggest that in order to minimize the negative effect of steep slopes on the CHM, it is best to use raw elevation values (i.e., use the un-normalized DSM) and compute the height after treetop detection. Nie et al. [59] suggest that the vertical displacement of tree crowns increases exponentially with terrain slope. Therefore, it is crucial to account for the influence of terrain slope in extremely steep areas. The study's findings also highlight that the impact of slope-distorted CHMs can vary significantly depending on the types of tree crowns and terrains. It can be seen that height normalization has a significant impact on the extraction of tree height and LVV when the terrain is not flat.

5. Conclusions

In this study, we proposed an improved methodology based on vehicle-mounted LiDAR data for estimating and assessing the LVVs of urban street trees. In this methodology, we obtained LiDAR data through a vehicle-borne laser scanning system and completed the preprocessing and individual tree segmentation. An artificial objects and low shrubs identification algorithm was developed for extracting the street trees. We utilized the DBSCAN algorithm and the AlphaShape algorithm to calculate the structural parameters and the LVVs of the street trees. The findings indicate that (1) the CSP algorithm has a relatively high overall accuracy in segmenting individual trees (overall accuracy = 95.8%); (2) the accuracy of the tree height and DBH extraction based on the vehicle-mounted LiDAR point clouds is high, with rRMSEs of 1.66% to 3.92% and 4.23% to 15.37%, respectively; and (3) the results of the nondestructive estimation of the LVV based on the AlphaShape algorithm show that the largest average LVV of an individual tree is that of the maple poplar (1049.667 m3), followed by the sycamore (557.907 m3), and the smallest is that of the privet (16.681 m3). Based on these findings, planting more ecologically beneficial trees, such as sycamore and maple poplar, in cities may help improve the urban air environment.

Author Contributions

Conceptualization, L.C.; methodology, Y.Y. and X.S.; software, Y.Y. and X.S.; validation, Y.Y. and X.S.; formal analysis, Y.Y.; writing—original draft preparation, Y.Y.; writing—review and editing, Y.Y., X.S. and L.C.; visualization, Y.Y.; supervision, L.C.; funding acquisition, X.S. and L.C. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Natural Science Foundation of Jiangsu Province (BK20220415), National Key Research and Development Program (2022YFD2200101), and the Priority Academic Program Development of Jiangsu Higher Education Institutions (PAPD).

Data Availability Statement

Data are contained within the article.

Acknowledgments

We gratefully acknowledge the graduate students from the Department of Forest Management at Nanjing Forestry University for their field work.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Endreny, T.A. Strategically growing the urban forest will improve our world. Nat. Commun. 2018, 9, 1160. [Google Scholar] [CrossRef] [PubMed]
  2. Gehring, U.; Koppelman, G.H. Improvements in air quality: Whose lungs benefit? Eur. Respir. J. 2019, 53, 1900365. [Google Scholar] [CrossRef] [PubMed]
  3. Huang, K. Urban forests facing climate risks. Nat. Clim. Chang. 2022, 12, 893–894. [Google Scholar] [CrossRef]
  4. Liu, J.; Slik, F. Are street trees friendly to biodiversity? Landsc. Urban Plan. 2021, 218, 104304. [Google Scholar] [CrossRef]
  5. McPherson, E.G.; van Doorn, N.; de Goede, J. Structure, function and value of street trees in California, USA. Urban For. Urban Green. 2016, 17, 104–115. [Google Scholar] [CrossRef]
  6. Poudel, S.; Zou, A.; Maroo, S.C. Disjoining pressure driven transpiration of water in a simulated tree. J. Colloid Interface Sci. 2022, 616, 895–902. [Google Scholar] [CrossRef]
  7. Hellegers, M.; Ozinga, W.A.; Hinsberg, A.; Huijbregts, M.A.J.; Hennekens, S.M.; Schaminée, J.H.J.; Dengler, J.; Schipper, A.M. Evaluating the ecological realism of plant species distribution models with ecological indicator values. Ecography 2019, 43, 161–170. [Google Scholar] [CrossRef]
  8. Yue, F.; Fu, F.; Dai, F.; Zeng, H. Correlation between particulate matter pollution concentration and 3d green space in mega cities based on remote sensing inversion. Chin. Landsc. Archit. 2021, 49, 83–88. [Google Scholar]
  9. Sun, X.; Xu, S.; Hua, W.; Tian, J.; Xu, Y. Feasibility study on the estimation of the living vegetation volume of individual street trees using terrestrial laser scanning. Urban For. Urban Green. 2022, 71, 127553. [Google Scholar] [CrossRef]
  10. Zhou, Y.; Zhou, J. Fast method to detect and calculate LVV. Acta Ecol. Sin. 2006, 26, 8. [Google Scholar]
  11. Anderson, K.; Hancock, S.; Casalegno, S.; Griffiths, A.; Griffiths, D.; Sargent, F.; McCallum, J.; Cox, D.T.C.; Gaston, K.J. Visualising the urban green volume: Exploring LiDAR voxels with tangible technologies and virtual models. Landsc. Urban Plan. 2018, 178, 248–260. [Google Scholar] [CrossRef]
  12. Anderson, K.; Hancock, S.; Disney, M.; Gaston, K.J. Is waveform worth it? A comparison of LiDAR approaches for vegetation and landscape characterization. Remote Sens. Ecol. Conserv. 2015, 2, 5–15. [Google Scholar] [CrossRef]
  13. Li, L.; Liu, C. A new approach for estimating living vegetation volume based on terrestrial point cloud data. PLoS ONE 2019, 14, e0221734. [Google Scholar] [CrossRef] [PubMed]
  14. Liu, D.; Zhang, Q.; Wang, J.; Wang, Y.; Shen, Y.; Shuai, Y. The potential of moonlight remote sensing: A systematic assessment with multi-source nightlight remote sensing data. Remote Sens. 2021, 13, 4639. [Google Scholar] [CrossRef]
  15. Sheng, Q.; Zhang, Y.; Zhu, Z.; Li, W.; Xu, J.; Tang, R. An experimental study to quantify road greenbelts and their association with PM2.5 concentration along city main roads in Nanjing, China. Sci. Total Environ. 2019, 667, 710–717. [Google Scholar] [CrossRef] [PubMed]
  16. Peng, S. Study on Estimation Model and Quantitative Method of LVV of Regional Vegetation Based on Fast 3D Reconstruction. Master’s Thesis, University of Electronic Science and Technology of China, Chengdu, China, 2022. [Google Scholar]
  17. Verma, N.; Lamb, D.; Reid, N.; Wilson, B. Comparison of canopy volume measurements of scattered eucalypt farm trees derived from high spatial resolution imagery and LiDAR. Remote Sens. 2016, 8, 388. [Google Scholar] [CrossRef]
  18. Marín-Buzón, C.; Pérez-Romero, A.; Tucci-Álvarez, F.; Manzano-Agugliaro, F. Assessing the orange tree crown volumes using google maps as a low-cost photogrammetric alternative. Agronomy 2020, 10, 893. [Google Scholar] [CrossRef]
  19. Cheng, L.; Chen, S.; Chu, S.; Li, S.; Yuan, Y.; Wang, Y.; Li, M. LiDAR-based three-dimensional street landscape indices for urban habitability. Earth Sci. Inform. 2017, 10, 457–470. [Google Scholar] [CrossRef]
  20. Wehr, A.; Lohr, U. Airborne laser scanning—An introduction and overview. ISPRS J. Photogramm. Remote Sens. 1999, 54, 68–82. [Google Scholar] [CrossRef]
21. Zhou, L.; Li, X.; Zhang, B.; Xuan, J.; Gong, Y.; Tan, C.; Huang, H.; Du, H. Estimating 3D green volume and aboveground biomass of urban forest trees by UAV-LiDAR. Remote Sens. 2022, 14, 5211. [Google Scholar] [CrossRef]
  22. Dassot, M.; Constant, T.; Fournier, M. The use of terrestrial LiDAR technology in forest science: Application fields, benefits and challenges. Ann. For. Sci. 2011, 68, 959–974. [Google Scholar] [CrossRef]
  23. Melendy, L.; Hagen, S.C.; Sullivan, F.B.; Pearson, T.R.H.; Walker, S.M.; Ellis, P.; Kustiyo, N.; Sambodo, A.K.; Roswintiarti, O.; Hanson, M.A.; et al. Automated method for measuring the extent of selective logging damage with airborne LiDAR data. ISPRS J. Photogramm. Remote Sens. 2018, 139, 228–240. [Google Scholar] [CrossRef]
24. Liu, K.; Shen, X.; Cao, L.; Wang, G.; Cao, F. Estimating forest structural attributes using UAV-LiDAR data in ginkgo plantations. ISPRS J. Photogramm. Remote Sens. 2018, 146, 465–482. [Google Scholar] [CrossRef]
  25. Liang, X.; Kankare, V.; Hyyppä, J.; Wang, Y.; Kukko, A.; Haggrén, H.; Yu, X.; Kaartinen, H.; Jaakkola, A.; Guan, F.; et al. Terrestrial laser scanning in forest inventories. ISPRS J. Photogramm. Remote Sens. 2016, 115, 63–77. [Google Scholar] [CrossRef]
26. Hyyppä, J.; Kelle, O.; Lehikoinen, M.; Inkinen, M. A segmentation-based method to retrieve stem volume estimates from 3-D tree height models produced by laser scanners. IEEE Trans. Geosci. Remote Sens. 2001, 39, 969–975. [Google Scholar] [CrossRef]
  27. Gong, Z.; Li, J.; Luo, Z.; Wen, C.; Wang, C.; Zelek, J. Mapping and semantic modeling of underground parking lots using a backpack LiDAR system. IEEE Trans. Intell. Transp. Syst. 2021, 22, 734–746. [Google Scholar] [CrossRef]
28. Wen, S.; Li, X.; Liu, X.; Li, J.; Tao, S.; Long, Y.; Qiu, T. Dynamic SLAM: A visual SLAM in outdoor dynamic scenes. IEEE Trans. Instrum. Meas. 2023, 72, 1–11. [Google Scholar] [CrossRef]
  29. Lin, Y.-C.; Manish, R.; Bullock, D.; Habib, A. Comparative analysis of different mobile LiDAR mapping systems for ditch line characterization. Remote Sens. 2021, 13, 2485. [Google Scholar] [CrossRef]
  30. Tao, S.; Wu, F.; Guo, Q.; Wang, Y.; Li, W.; Xue, B.; Hu, X.; Li, P.; Tian, D.; Li, C.; et al. Segmenting tree crowns from terrestrial and mobile LiDAR data by exploring ecological theories. ISPRS J. Photogramm. Remote Sens. 2015, 110, 66–76. [Google Scholar] [CrossRef]
  31. Yan, Z.; Liu, R.; Cheng, L.; Zhou, X.; Ruan, X.; Xiao, Y. A concave hull methodology for calculating the crown volume of individual trees based on vehicle-borne LiDAR data. Remote Sens. 2019, 11, 623. [Google Scholar] [CrossRef]
  32. Wu, B.; Yu, B.; Yue, W.; Shu, S.; Tan, W.; Hu, C.; Huang, Y.; Wu, J.; Liu, H. A voxel-based method for automated identification and morphological parameters estimation of individual street trees from mobile laser scanning data. Remote Sens. 2013, 5, 584–611. [Google Scholar] [CrossRef]
33. Han, G.; Feng, X.; Jia, Y.; Wang, C.; He, X.; Zhou, Q.; Tian, X. Isolation and evaluation of terrestrial fungi with algicidal ability from Zijin Mountain, Nanjing, China. J. Microbiol. 2011, 49, 562–567. [Google Scholar] [CrossRef]
  34. Mohd, A.; Lau, C.; Setan, H.; Majid, Z.; Chong, A.K.; Aspuri, A.; Idris, K.M.; Farid, M. Terrestrial laser scanners pre-processing: Registration and georeferencing. J. Teknol. 2014, 71, 2180–3722. [Google Scholar]
35. Axelsson, P. DEM generation from laser scanner data using adaptive TIN models. Int. Arch. Photogramm. Remote Sens. 2000, 33, 110–117. [Google Scholar]
  36. West, G.B.; Brown, J.H.; Enquist, B.J. A general model for the structure and allometry of plant vascular systems. Nature 1999, 400, 664–667. [Google Scholar] [CrossRef]
  37. Polewski, P.; Yao, W.; Cao, L.; Gao, S. Marker-free coregistration of uav and backpack LiDAR point clouds in forested areas. ISPRS J. Photogramm. Remote Sens. 2018, 147, 307–318. [Google Scholar] [CrossRef]
38. Goutte, C.; Gaussier, E. A probabilistic interpretation of precision, recall and F-score, with implication for evaluation. In Advances in Information Retrieval. ECIR 2005; Springer: Berlin/Heidelberg, Germany, 2005; Volume 51, p. 952. [Google Scholar]
  39. Kang, Z.; Yang, J. A probabilistic graphical model for the classification of mobile LiDAR point clouds. ISPRS J. Photogramm. Remote Sens. 2018, 143, 108–123. [Google Scholar] [CrossRef]
  40. Comai, L. The taming of the shrub. Nat. Plants 2018, 4, 742–743. [Google Scholar] [CrossRef] [PubMed]
41. Reitberger, J.; Schnörr, C.; Krzystek, P.; Stilla, U. 3D segmentation of single trees exploiting full waveform LiDAR data. ISPRS J. Photogramm. Remote Sens. 2009, 64, 561–574. [Google Scholar] [CrossRef]
  42. Ester, M.; Kriegel, H.P.; Sander, J.; Xu, X. A density-based algorithm for discovering clusters in large spatial databases with noise. In KDD-96 Proceedings; AAAI Press: Washington, DC, USA, 1996; Volume 96, pp. 226–231. [Google Scholar]
  43. Daszykowski, M.; Walczak, B.; Massart, D.L. Representative subset selection. Anal. Chim. Acta 2002, 468, 91–103. [Google Scholar] [CrossRef]
  44. Zhu, T.; Ma, X.; Guan, H.; Wu, X.; Wang, F.; Yang, C.; Jiang, Q. A calculation method of phenotypic traits based on three-dimensional reconstruction of tomato canopy. Comput. Electron. Agric. 2022, 204, 107515. [Google Scholar] [CrossRef]
  45. Bland, J.M.; Altman, D.G. Statistical methods for assessing agreement between two methods of clinical measurement. Lancet 1986, 327, 307–310. [Google Scholar] [CrossRef]
46. Dong, Y.; Wan, F. Ecological benefits and value assessment of forest stands based on three-dimensional green volume—A case study of Pujiang Country Park of Shanghai. Res. Soil Water Conserv. 2019, 26, 347–352. [Google Scholar]
47. Xue, X.; Zhang, J.C.; Sun, Y.T.; Zhuang, J.Y.; Wang, Y.X. Study of carbon sequestration & oxygen release and cooling & humidifying effect of main greening tree species in Shanghai. J. Nanjing For. Univ. (Nat. Sci. Ed.) 2016, 40, 81–86. [Google Scholar]
48. Sun, X.-Y.; Tian, J.-R.; Xu, Y.-N.; Xu, S.; Li, H.-D. Evaluation of ecological landscape of road based on terrestrial laser scanning: A case study of Huanghai National Forest Park. J. Ecol. Rural Environ. 2020, 36, 1477–1484. [Google Scholar]
  49. Lin, W.; Meng, Y.; Qiu, Z.; Zhang, S.; Wu, J. Measurement and calculation of crown projection area and crown volume of individual trees based on 3d laser-scanned point-cloud data. Int. J. Remote Sens. 2017, 38, 1083–1100. [Google Scholar] [CrossRef]
50. Yin, S.; Shen, Z.; Zhou, P.; Zou, X.; Che, S.; Wang, W. Quantifying air pollution attenuation within urban parks: An experimental approach in Shanghai, China. Environ. Pollut. 2011, 159, 2155–2163. [Google Scholar] [CrossRef]
51. Luo, J.; Zhou, Y.; Leng, H.; Meng, C.; Hou, Z.; Song, T.; Hu, Z.; Zhang, C.; Feng, S. Quick estimation of three-dimensional vegetation volume based on images from an unmanned aerial vehicle: A case study on Shanghai Botanical Garden. J. East China Norm. Univ. (Nat. Sci.) 2022, 2022, 122–134. [Google Scholar]
  52. Zhu, Z.; Kleinn, C.; Nölke, N. Towards tree green crown volume: A methodological approach using terrestrial laser scanning. Remote Sens. 2020, 12, 1841. [Google Scholar] [CrossRef]
  53. Wang, X.; Zhu, J. Vehicle-mounted imaging LiDAR with nonuniform distribution of instantaneous field of view. Opt. Laser Technol. 2023, 169, 110063. [Google Scholar] [CrossRef]
  54. Li, X.-X.; Tang, L.; Peng, W.; Chen, J.-X.; Ma, X. Estimation method of urban green space living vegetation volume based on backpack light detection and ranging. Chin. J. Appl. Ecol. 2022, 33, 2777–2784. [Google Scholar]
  55. Leopold, L.B. Trees and streams: The efficiency of branching patterns. J. Theor. Biol. 1971, 31, 339–354. [Google Scholar] [CrossRef] [PubMed]
  56. Wu, F.; Li, Q. On classification of vehicle-borne laser-scanning data. Sci. Surv. Mapp. 2007, 32, 7555–7577. [Google Scholar]
57. Vauhkonen, J.; Tokola, T.; Maltamo, M.; Packalén, P. Effects of pulse density on predicting characteristics of individual trees of Scandinavian commercial species using alpha shape metrics based on airborne laser scanning data. Can. J. Remote Sens. 2008, 34, S441–S459. [Google Scholar] [CrossRef]
  58. Khosravipour, A.; Skidmore, A.K.; Wang, T.; Isenburg, M.; Khoshelham, K. Effect of slope on treetop detection using a LiDAR canopy height model. ISPRS J. Photogramm. Remote Sens. 2015, 104, 44–52. [Google Scholar] [CrossRef]
59. Nie, S.; Wang, C.; Xi, X.; Luo, S.; Zhu, X.; Li, G.; Liu, H.; Tian, J.; Zhang, S. Assessing the impacts of various factors on treetop detection using LiDAR-derived canopy height models. IEEE Trans. Geosci. Remote Sens. 2019, 57, 10099–10115. [Google Scholar] [CrossRef]
Figure 1. Technical flowchart of this study, including VLS LiDAR point cloud data collection, data preprocessing, individual tree segmentation, street tree extraction, RANSAC-based separation of side branches, canopy extraction by the slicing method, envelope fitting, and analysis of the extracted LVV. For the CSP segmentation, r denotes the tree detection rate (recall), p denotes the precision of the segmentation, and F is the overall accuracy, which combines the commission (wrong) and omission (missed) errors.
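For reference, the three measures named in the Figure 1 caption follow the standard recall/precision/F-score definitions (cf. [38]), computed from the correctly segmented (TP), missed (FN), and falsely segmented (FP) trees:
$$ r = \frac{TP}{TP + FN}, \qquad p = \frac{TP}{TP + FP}, \qquad F_1 = \frac{2rp}{r + p} $$
These are the per-plot values reported in Table 4.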
Figure 2. Overview maps of the study area: (a) satellite image of the administrative divisions of Nanjing city; (b) satellite image of the Zijin Mountain area with the vehicle travel routes and sample site locations. The area contains nine sample plots (100 m × 30 m) composed of different tree species.
Figure 3. (a) Vehicle-mounted LiDAR system. (b) Top view of the street tree point cloud data. (c) Side view of the street tree point cloud data, with a schematic diagram of the street tree parameter extraction. The numbers (1–16) represent the results of the individual tree segmentation.
Figure 4. Schematic of the point cloud discrimination of billboards, street lights, and other artificial features. The red ellipses represent the fitting results: (a) side view of a billboard; (b) side view of a street light; (c) side view of an individual tree’s points; (d) top view of a billboard; (e) top view of a street light; (f) top view of an individual tree.
Figure 5. DBSCAN-based DBH extraction algorithm: (a) individual tree; (b) point cloud of the tree trunk at 1–2 m; (c) RANSAC-based separation of side branches; (d) trunk point cloud after removal of the lateral branches; (e) cylinder fitting of the point cloud; (f) removal of the excess points from the trunk cylinder; (g) trunk point cloud at breast height; (h) cylinder fitting of the breast-height point cloud and extraction of the DBH.
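To make the Figure 5 workflow concrete, the sketch below isolates a breast-height slab, keeps the densest DBSCAN cluster as the trunk, and fits a circle to it in the horizontal plane. The algebraic circle fit stands in for the cylinder/RANSAC fitting described above, and the parameter values (slab limits, eps, min_samples) are illustrative, not the settings used in this study.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def estimate_dbh(points_xyz, z_low=1.25, z_high=1.35, eps=0.05, min_samples=10):
    """Estimate DBH (m) from a single-tree point cloud (N x 3, heights above ground in m)."""
    # 1. Take the breast-height slab of the trunk.
    slab = points_xyz[(points_xyz[:, 2] >= z_low) & (points_xyz[:, 2] <= z_high)]
    if len(slab) < min_samples:
        return None

    # 2. DBSCAN in the XY plane: keep the largest cluster (the trunk),
    #    discarding low branches and noise points.
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(slab[:, :2])
    valid = labels[labels >= 0]
    if valid.size == 0:
        return None
    trunk = slab[labels == np.bincount(valid).argmax()]

    # 3. Algebraic least-squares circle fit: x^2 + y^2 + a*x + b*y + c = 0.
    x, y = trunk[:, 0], trunk[:, 1]
    A = np.column_stack([x, y, np.ones_like(x)])
    b_vec = -(x**2 + y**2)
    a, b, c = np.linalg.lstsq(A, b_vec, rcond=None)[0]
    radius = np.sqrt((a**2 + b**2) / 4.0 - c)
    return 2.0 * radius  # DBH in metres
```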
Figure 6. Crown separation by slicing: (a) individual tree; (b) bottom-up slicing of the tree after lateral branch removal; (c) comparison of the diameters between point cloud slices to identify the slice with the abrupt increase (i.e., the starting position of the canopy point cloud); (d) extracted canopy data. The red point cloud represents the successfully extracted tree crown.
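The slice-comparison idea of Figure 6 can be sketched as follows: the de-branched tree is cut into thin horizontal slices from the ground up, each slice's horizontal extent is taken as its diameter, and the crown base is placed where that diameter first jumps well above the trunk slices below it. The slice thickness and jump ratio below are illustrative assumptions, not the values used in this study.

```python
import numpy as np

def find_crown_base(points_xyz, slice_height=0.2, jump_ratio=1.5):
    """Return the height (m) at which the slice diameter first exceeds
    jump_ratio x the median diameter of the slices below it."""
    z = points_xyz[:, 2]
    heights = np.arange(z.min(), z.max(), slice_height)
    diameters = []
    for h in heights:
        s = points_xyz[(z >= h) & (z < h + slice_height)]
        if len(s) < 3:
            diameters.append(np.nan)
            continue
        # Horizontal extent of the slice, used as a proxy for its diameter.
        span_x = s[:, 0].max() - s[:, 0].min()
        span_y = s[:, 1].max() - s[:, 1].min()
        diameters.append(max(span_x, span_y))
    diameters = np.array(diameters)
    for i in range(1, len(diameters)):
        below = diameters[:i][~np.isnan(diameters[:i])]
        if below.size and diameters[i] > jump_ratio * np.median(below):
            return heights[i]  # crown base height
    return None
```

Points above the returned height would then be treated as the canopy for the subsequent LVV fitting.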
Figure 7. Canopy extraction and living vegetation volume (LVV) calculation: (a) point cloud of the ginkgo canopy; (b) voxel fitting visualization of the canopy (voxel size = 0.4); (c) voxel fitting of the canopy volume; (d) envelope surface fitted by the AlphaShape algorithm (alpha = 1); (e) canopy volume fitted by the AlphaShape algorithm.
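The two canopy-volume estimators compared in Figure 7 can be sketched as follows: the voxel estimate counts occupied cubes of a given edge length, while the alpha-shape estimate wraps an envelope around the canopy points. Open3D's alpha-shape meshing is used here only as a stand-in for the adaptive AlphaShape algorithm of this study; the library call and alpha value are assumptions, not the authors' implementation.

```python
import numpy as np

def voxel_volume(canopy_xyz, voxel_size=0.4):
    """LVV estimate: number of occupied voxels x voxel volume (m^3)."""
    idx = np.floor(canopy_xyz / voxel_size).astype(np.int64)
    n_occupied = len(np.unique(idx, axis=0))
    return n_occupied * voxel_size ** 3

def alpha_shape_volume(canopy_xyz, alpha=1.0):
    """LVV estimate via an alpha-shape envelope (sketch using Open3D)."""
    import open3d as o3d  # assumed dependency; not part of the paper's toolchain
    pcd = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(canopy_xyz))
    mesh = o3d.geometry.TriangleMesh.create_from_point_cloud_alpha_shape(pcd, alpha)
    # get_volume() requires a watertight mesh; real canopies may need hole filling first.
    return mesh.get_volume() if mesh.is_watertight() else None
```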
Figure 8. Diagram of the effect of removing artificial features such as road signs and power lines: (a) discrete point cloud; (b) individual tree segmentation using the CSP algorithm; (c) point cloud after low shrub removal; (d) point cloud after removing artificial objects; (e) extracted tree point cloud. The blue circles represent the shape fitting results, and the red boxes enclose the identified artificial objects.
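As a rough illustration of the low-shrub removal step in Figure 8c, a segment can be discarded when its vertical extent falls below a height threshold. The 3 m threshold and the dictionary interface below are illustrative assumptions, and the artificial-object rules (the shape fitting shown in Figure 4) are not reproduced here.

```python
import numpy as np

def remove_low_shrubs(segments, min_tree_height=3.0):
    """Keep only segments (dict of id -> N x 3 array) taller than the threshold.
    A simple stand-in for the low-shrub identification step; artificial objects
    would additionally be filtered by shape-based rules (cf. Figure 4)."""
    kept = {}
    for seg_id, pts in segments.items():
        height = pts[:, 2].max() - pts[:, 2].min()
        if height >= min_tree_height:
            kept[seg_id] = pts
    return kept
```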
Figure 9. Accuracy verification of the street tree heights extracted from the VLS LiDAR data against manual measurements: (a) mixed forest plot of maple poplar (Pterocarya stenoptera C. DC.)/sycamore (Platanus × acerifolia (Aiton) Willd.); (b) weeping willow (Salix babylonica L.) forest plot; (c) sycamore (Platanus × acerifolia (Aiton) Willd.) forest plot; (d) mixed forest plot of privet (Ligustrum lucidum Ait.)/celtis sinensis (Celtis sinensis Pers.); (e) mixed forest plot of magnolia (Magnolia liliiflora Desr)/ginkgo (Ginkgo biloba L.); (f) sycamore (Platanus × acerifolia (Aiton) Willd.) forest plot; (g) sycamore (Platanus × acerifolia (Aiton) Willd.) forest plot; (h) sycamore (Platanus × acerifolia (Aiton) Willd.) forest plot; (i) koelreuteria paniculata (Koelreuteria paniculata Laxm.) forest plot. The gray range is the 95% confidence band.
Figure 10. Accuracy verification of the street tree DBHs extracted from the VLS LiDAR data against manual measurements: (a) mixed forest plot of maple poplar (Pterocarya stenoptera C. DC.)/sycamore (Platanus × acerifolia (Aiton) Willd.); (b) weeping willow (Salix babylonica L.) forest plot; (c) sycamore (Platanus × acerifolia (Aiton) Willd.) forest plot; (d) mixed forest plot of privet (Ligustrum lucidum Ait.)/celtis sinensis (Celtis sinensis Pers.); (e) mixed forest plot of magnolia (Magnolia liliiflora Desr)/ginkgo (Ginkgo biloba L.); (f) sycamore (Platanus × acerifolia (Aiton) Willd.) forest plot; (g) sycamore (Platanus × acerifolia (Aiton) Willd.) forest plot; (h) sycamore (Platanus × acerifolia (Aiton) Willd.) forest plot; (i) koelreuteria paniculata (Koelreuteria paniculata Laxm.) forest plot. The gray range is the 95% confidence band, and purple represents the error bars.
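The tree height and DBH extraction accuracies associated with Figures 9 and 10 are reported as relative RMSE (rRMSE); by the usual definition,
$$ \mathrm{rRMSE} = \frac{\mathrm{RMSE}}{\bar{x}_{\mathrm{obs}}} \times 100\% $$
where $\bar{x}_{\mathrm{obs}}$ is the mean of the field-measured reference values.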
Figure 11. The effects of different algorithm parameters: (a–c) results of the voxel algorithm with voxel sizes of 0.2, 0.4, and 0.6; (d–f) results of the AlphaShape algorithm with alpha values of 0.3, 1, and 5.
Figure 12. Bland–Altman plots for the comparison of the voxel and AlphaShape methods to obtain the LVV.
Figure 13. Accuracy verification of the AlphaShape algorithm, showing the predicted versus true LVV values for (a) the ginkgo and magnolia trees; (b) the sycamore trees; (c) the weeping willow trees; (d) the privet trees; (e) the celtis sinensis trees; (f) the koelreuteria paniculata trees. The gray range is the 95% confidence band.
Figure 14. Visualization of the LVV in the sample plots: (a) mixed forest plot of maple poplar (Pterocarya stenoptera C. DC.)/sycamore (Platanus × acerifolia (Aiton) Willd.); (b) weeping willow (Salix babylonica L.) forest plot; (c) sycamore (Platanus × acerifolia (Aiton) Willd.) forest plot; (d) mixed forest plot of privet (Ligustrum lucidum Ait.)/celtis sinensis (Celtis sinensis Pers.); (e) mixed forest plot of magnolia (Magnolia liliiflora Desr)/ginkgo (Ginkgo biloba L.); (f) sycamore (Platanus × acerifolia (Aiton) Willd.) forest plot; (g) sycamore (Platanus × acerifolia (Aiton) Willd.) forest plot; (h) sycamore (Platanus × acerifolia (Aiton) Willd.) forest plot; (i) koelreuteria paniculata (Koelreuteria paniculata Laxm.) forest plot.
Figure 15. Ranks of the average LVVs for individual trees of eight tree species. The average LVV of individual trees increases in the order of privet (Ligustrum lucidum Ait.), ginkgo (Ginkgo biloba L.), weeping willow (Salix babylonica L.), celtis sinensis (Celtis sinensis Pers), magnolia (Magnolia liliiflora Desr), koelreuteria paniculata (Koelreuteria paniculata Laxm.), sycamore (Platanus × acerifolia (Aiton) Willd.), and maple poplar (Pterocarya stenoptera C. DC.).
Figure 16. The impact of different point cloud densities on the AlphaShape method. The black circles represent the RMSE at each density.
Table 1. Description of the forest characteristics of nine forest types.
Sample Plot | Tree Species | N | H (m) Mean | H (m) SD | DBH (cm) Mean | DBH (cm) SD | Canopy Width (m) Mean | Canopy Width (m) SD
Plot 1 | Maple Poplar (Pterocarya stenoptera C. DC.)/Sycamore (Platanus × acerifolia (Aiton) Willd.) | 26 | 21.61 | 2.30 | 63.61 | 24.42 | 14.16 | 2.45
Plot 2 | Weeping Willow (Salix babylonica L.) | 33 | 7.69 | 1.04 | 25.73 | 5.03 | 4.84 | 1.31
Plot 3 | Sycamore (Platanus × acerifolia (Aiton) Willd.) | 33 | 19.46 | 1.35 | 41.14 | 6.01 | 11.26 | 1.14
Plot 4 | Privet (Ligustrum lucidum Ait.)/Celtis sinensis (Celtis sinensis Pers.) | 29 | 7.47 | 1.81 | 12.12 | 5.91 | 4.31 | 1.42
Plot 5 | Magnolia (Magnolia liliiflora Desr)/Ginkgo (Ginkgo biloba L.) | 11 | 10.50 | 0.94 | 24.32 | 2.30 | 4.87 | 1.36
Plot 6 | Sycamore (Platanus × acerifolia (Aiton) Willd.) | 29 | 20.10 | 2.49 | 49.04 | 8.07 | 11.74 | 1.69
Plot 7 | Sycamore (Platanus × acerifolia (Aiton) Willd.) | 24 | 24.11 | 2.85 | 52.89 | 13.10 | 12.15 | 2.35
Plot 8 | Sycamore (Platanus × acerifolia (Aiton) Willd.) | 33 | 24.56 | 6.02 | 4.10 | 4.17 | 11.99 | 2.18
Plot 9 | Koelreuteria paniculata (Koelreuteria paniculata Laxm.) | 57 | 17.40 | 2.23 | 21.13 | 4.47 | 6.27 | 1.08
N: number of trees; H: tree height (m); DBH: diameter at breast height (cm); SD: standard deviation.
Table 2. Specifications of the mobile LiDAR system.
System | Parameter | Value
LiDAR sensor: XT 32 | Field of view | Vertical: −16~15°; Horizontal: 0~360°
LiDAR sensor: XT 32 | Position precision | ±1 cm
LiDAR sensor: XT 32 | Measurement range | 120 m
LiDAR sensor: XT 32 | Point density | 1000 pts/m²
GNSS parameters | Signal tracking | GPS: L1 C/A, L1 C, L2 C, L2 P, L5; GLONASS: L1 C/A, L2 C, L2 P, L3, L5; BeiDou: B1, B2
Table 3. Resampling ground points with DEM accuracy verification.
Area | ME (m) | SD (m) | RMSE (m)
Densely vegetated areas (198.33 plants/hm²) | 1.40 | 1.63 | 3.34
Sparsely vegetated areas (96.67 plants/hm²) | 0.09 | 0.17 | 0.19
Road area (0 plants/hm²) | 0.03 | 0.01 | 0.03
SD: standard deviation. ME: mean error. RMSE: root mean squared error.
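For reference, the error measures in Table 3 follow their standard definitions over the n ground check points, with $z_i$ the LiDAR-derived elevation and $z_i^{\mathrm{ref}}$ the reference elevation:
$$ \mathrm{ME} = \frac{1}{n}\sum_{i=1}^{n}\left(z_i - z_i^{\mathrm{ref}}\right), \qquad \mathrm{RMSE} = \sqrt{\frac{1}{n}\sum_{i=1}^{n}\left(z_i - z_i^{\mathrm{ref}}\right)^{2}} $$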
Table 4. Accuracy statistics of individual tree segmentation.
Plot | Number of Trees | TP/Plant | FN/Plant | FP/Plant | r (%) | p (%) | F1 (%)
P 1 | 26 | 24 | 0 | 2 | 100.0 | 92.3 | 96.0
P 2 | 33 | 31 | 1 | 3 | 96.9 | 91.2 | 93.9
P 3 | 33 | 31 | 0 | 1 | 100.0 | 96.9 | 98.4
P 4 | 29 | 28 | 0 | 1 | 100.0 | 96.6 | 98.2
P 5 | 11 | 10 | 0 | 1 | 100.0 | 90.9 | 95.2
P 6 | 29 | 26 | 1 | 2 | 96.3 | 92.9 | 94.5
P 7 | 24 | 22 | 1 | 1 | 95.7 | 95.7 | 95.7
P 8 | 33 | 31 | 0 | 2 | 100.0 | 93.9 | 96.9
P 9 | 57 | 51 | 3 | 3 | 94.4 | 94.4 | 94.4
Total | 275 | 254 | 6 | 16 | 97.7 | 94.1 | 95.8
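As a worked check of the totals row, r = 254/(254 + 6) = 97.7%, p = 254/(254 + 16) = 94.1%, and F1 = 2rp/(r + p) ≈ 95.8%, matching the overall segmentation accuracy reported in the Table 4 totals.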
Table 5. Summary of Bland–Altman consistency analysis results.
Item | Value
Effective Sample Size | 39
Mean (Method 1) | 42.304
Mean (Method 2) | 41.581
Mean (Differences) | 0.724
Standard deviation (Differences) | 6.711
95% Confidence Interval (Mean of Differences) | −1.452~2.899
95% Confidence Interval (Differences) | −12.430~13.877
t-Value (H0: Mean Difference = 0) | 0.673
p-Value (H0: Mean Difference = 0) | 0.505
CR Value (Coefficient of Repeatability) | 13.061
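The agreement statistics of Table 5 (summarised in Figure 12) can be recomputed from the paired per-tree LVV estimates of the two methods. The sketch below returns the mean difference and the 95% limits of agreement (mean ± 1.96 SD), which with the values above evaluate to roughly −12.43 to 13.88 m³.

```python
import numpy as np

def bland_altman(lvv_voxel, lvv_alpha):
    """Bland-Altman agreement between two LVV estimates (arrays of equal length)."""
    d = np.asarray(lvv_voxel) - np.asarray(lvv_alpha)
    mean_diff = d.mean()
    sd_diff = d.std(ddof=1)
    # 95% limits of agreement
    loa = (mean_diff - 1.96 * sd_diff, mean_diff + 1.96 * sd_diff)
    return mean_diff, sd_diff, loa

# With the values in Table 5 (mean difference 0.724, SD 6.711, n = 39),
# the limits of agreement evaluate to approximately (-12.43, 13.88) m^3.
```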
Table 6. The total ecological benefits generated by each plot.
Plot | Tree Species | O2 Production (t/a) | CO2 Absorption (t/a) | SO2 Absorption (kg/a) | TSP (t/a) | Summer Evaporation (t/d)
Plot 1 | Maple Poplar (Pterocarya stenoptera C. DC.)/Sycamore (Platanus × acerifolia (Aiton) Willd.) | 36.99 | 51.01 | 58.99 | 21.42 | 10.71
Plot 2 | Weeping Willow (Salix babylonica L.) | 3.72 | 5.12 | 5.93 | 2.15 | 1.08
Plot 3 | Sycamore (Platanus × acerifolia (Aiton) Willd.) | 36.07 | 49.74 | 57.52 | 20.88 | 10.44
Plot 4 | Privet (Ligustrum lucidum Ait.)/Celtis sinensis (Celtis sinensis Pers.) | 2.38 | 3.28 | 3.11 | 1.13 | 0.56
Plot 5 | Magnolia (Magnolia liliiflora Desr)/Ginkgo (Ginkgo biloba L.) | 1.91 | 2.63 | 1.95 | 0.71 | 0.35
Plot 6 | Sycamore (Platanus × acerifolia (Aiton) Willd.) | 30.6 | 42.2 | 48.8 | 17.72 | 8.86
Plot 7 | Sycamore (Platanus × acerifolia (Aiton) Willd.) | 27.93 | 38.52 | 44.55 | 16.17 | 8.09
Plot 8 | Sycamore (Platanus × acerifolia (Aiton) Willd.) | 28.77 | 39.68 | 45.89 | 16.66 | 8.33
Plot 9 | Koelreuteria paniculata (Koelreuteria paniculata Laxm.) | 11.68 | 16.11 | 18.63 | 6.76 | 3.38
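Assuming the benefits in Table 6 are obtained by multiplying each tree's LVV by species-specific per-unit-volume coefficients from the cited literature (e.g., [46,47]), the bookkeeping can be sketched as below. The coefficient names and values are hypothetical placeholders, not the figures used in this study.

```python
# Hypothetical per-unit-LVV benefit coefficients (per m^3 of LVV); placeholders only.
# The real species-specific values come from the cited literature and are NOT reproduced here.
COEFFS = {
    "sycamore": {"O2_t_a": 0.010, "CO2_t_a": 0.014, "SO2_kg_a": 0.016,
                 "TSP_t_a": 0.006, "evap_t_d": 0.003},
    # ... one entry per species
}

def plot_benefits(trees, coeffs=COEFFS):
    """Sum species-specific ecological benefits over a plot.

    trees: iterable of (species, lvv_m3) tuples, one per tree in the plot."""
    totals = {}
    for species, lvv_m3 in trees:
        for benefit, k in coeffs.get(species, {}).items():
            totals[benefit] = totals.get(benefit, 0.0) + k * lvv_m3
    return totals
```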