Technical Note

VO-LVV—A Novel Urban Regional Living Vegetation Volume Quantitative Estimation Model Based on the Voxel Measurement Method and an Octree Data Structure

Fang Huang, Shuying Peng, Shengyi Chen, Hongxia Cao and Ning Ma
1 School of Resources and Environment, University of Electronic Science and Technology of China (UESTC), Chengdu 611731, China
2 Suzhou Vocational Technical College, Suzhou 234099, China
* Author to whom correspondence should be addressed.
Remote Sens. 2022, 14(4), 855; https://doi.org/10.3390/rs14040855
Submission received: 4 January 2022 / Revised: 6 February 2022 / Accepted: 7 February 2022 / Published: 11 February 2022
(This article belongs to the Section Environmental Remote Sensing)

Abstract

Currently, three-dimensional (3D) point clouds are widely used in the field of remote sensing and mapping, including for the measurement of living vegetation volume (LVV) in cities. However, the existing quantitative methods for LVV measurement are based mainly on single tree modeling or on the calculation of single tree species’ growth parameters; they cannot handle the many tree species and complex forest layer structures present in urban regions and thus are unsuitable for broad application. LVV measurement relies primarily on vegetation point cloud data, which can be obtained through many methods and often lack some information, posing problems for traditional LVV measurement methods. To address these problems, this paper proposes a novel LVV estimation model, which combines the voxel measurement method with a point cloud organized in an octree structure (we call it VO-LVV), to estimate the LVV of typical vegetation and landforms in cities. Point cloud data of single plants and multiple plants were obtained through preprocessing to verify the improvement in the calculation efficiency and accuracy of the proposed method. The results indicated that the VO-LVV estimation method, compared with the traditional method, enabled substantial efficiency improvement and higher calculation accuracy. Furthermore, the new method can be applied to scenarios of both single plants and multiple plants, and can be used for the calculation of LVV in areas with various vegetation types in cities.

Graphical Abstract

1. Introduction

As a major part of urban ecosystems, vegetation has important ecological functions in maintaining carbon and oxygen balance, purifying water bodies, regulating climate, and conserving water sources [1,2]. Early studies of urban vegetation systems often used two-dimensional (2D) greening indicators, such as the green space rate, green coverage rate, and per capita public green area [3,4]. Various 2D greening indicators have been used and have played important roles in promoting greening construction over a certain period [5,6,7]. However, these indicators only roughly express the distribution and development level of vegetation space; in other words, it remains difficult to express the true ecological functions and environmental benefits of regional vegetation in a precise manner [8].
For this reason, researchers proposed a three-dimensional (3D) greening indicator, the living vegetation volume (LVV, also called 3D green quantity) [9], which is defined as the space occupied by the fresh stems and leaves of vegetation. Since the concept was proposed, the development and applications of LVV have attracted increasing attention and have become major areas of research interest in this field. LVV directly reflects the basis for vegetation productivity, including vegetation yield and environmental and economic benefits [10,11]. Furthermore, LVV can accurately indicate whether the arrangement of vegetation is scientific and can be used to evaluate the ecological benefits of a vegetation system; it has become an important green quantity index that countries worldwide use and continue to study [12,13]. However, when estimating urban regional LVV, the vegetation in cities has diverse species, varied spatial structures, and complex overlaps; furthermore, the vegetation is often interspersed among buildings, making information extraction difficult. Therefore, traditional methods such as remote sensing technology and hyperspectral technology cannot meet the requirements of LVV measurement in cities [14,15].
Early urban regional LVV measurements were based primarily on field vegetation quadrat morphology measurement [12,15,16,17] and on remote sensing retrieval of object morphology and spectral information [18,19,20,21]. The former methods include plane greening indicators, simulated 3D greening indicators, estimation of LVV by tridimensional volume, and plane quantity estimation by 2D greening indicators [12,15,17]; their accuracy depends on the results of field sampling surveys, and they are only semi-automatic. The latter methods depend on hyperspectral images obtained through remote sensing: they first infer the vegetation area, vegetation types, and several tree measurement factors from the images, and then calculate the entire LVV for a specific area [18,19,20,21]. Although these methods somewhat reduce the workload of manual collection and interpretation, they provide information only on the horizontal distribution of vegetation and cannot accurately extract its vertical structure.
With the development of 3D point cloud modeling, researchers have extensively explored its applications in LVV measurement. Commonly used methods include the vegetation height measurement method [22] and the wrapped surface approach [23]. For instance, the authors of [24] used light detection and ranging (LiDAR) to obtain individual wood structure parameters (tree height, diameter at breast height, crown width, and canopy closure), then established a regional biomass measurement model and validated it through comparison with field-measured vegetation samples. Using LiDAR point cloud data and high-resolution remote sensing image data, the authors of [25] applied the separation zone method to identify the distribution areas of different types of vegetation. The data were then segmented at the scale of the vegetation layer; after the edge feature points were extracted, the redundant data points were removed, and irregular triangulations were constructed to efficiently calculate the LVV of the area. Notably, the voxel measurement method is a relatively effective and efficient method for accurate volume estimation [26], and it has also been used to measure morphological parameters. For example, the authors of [27] used voxel measurement methods to automatically identify individual trees from mobile laser scanning (MLS) data; after the individual vegetation voxel grids were extended, derived morphological parameters, including the tree height and crown diameter of street trees along a road, were efficiently estimated. The authors of [28] studied the role of the voxel measurement method in the construction of 3D models, combining a binary space partitioning tree with grid division of the two voxel models, thereby providing an efficient organization method for digital city 3D models. Unfortunately, that method has not been used in LVV measurement.
In summary, the existing LVV measurement methods are based mainly on single tree modeling or single tree species’ growth parameter calculations; therefore, they cannot be applied to areas with many tree species and complex vegetation structures and are unsuitable for large-scale application. Furthermore, vegetation undergoes continuous change due to factors such as growth and pruning, and rapidly updating the regional LVV is difficult with traditional manual measurement or current point cloud measurement methods. Additionally, the existing methods do not solve the problem of missing point clouds in the original data. To address these deficiencies, this study constructed the VO-LVV model, an LVV model based on point cloud data that is suitable for urban scenes. Specifically, we used an octree structure to organize the point cloud and built a voxel measurement model to estimate the LVV value. Experiments confirmed that our novel model has a faster calculation speed and higher measurement accuracy than the traditional model, which will contribute to the large-scale application of LVV measurement in urban regions.
The main research included the following: (1) To establish a measurement method suitable for multiple types of vegetation in urban regions, we designed and implemented the VO-LVV measurement model (an LVV calculation that combines the voxel measurement method with a point cloud organized in an octree structure), based on organizing the vegetation point cloud with an octree. (2) In view of the lack of information in vegetation point cloud data obtained through different methods, multiple index parameters were included in the theoretical analysis to adjust the model and bring the estimation results closer to the true values. (3) A typical vegetation distribution area was selected as the experimental site, and point cloud data for the same time phase were collected for the experimental area through three data acquisition methods.
The remainder of this paper is organized as follows: Section 2 introduces the principle and design of our proposed LVV estimation model based on an octree. Section 3 focuses on the implementation of the VO-LVV estimation method in the C++ programming environment. Section 4 presents validation experiments comparing our method with traditional methods on point cloud datasets from different sources. Finally, Section 5 draws conclusions and discusses future research directions.

2. Principle and Design of a Novel LVV Estimation Model Based on a Voxel Method and an Octree

The voxel estimation method has a fairly high calculation speed and good calculation accuracy, owing to the organization of the point cloud data [29]. However, this method has several application limitations. According to [26,30], when calculating the LVV of a single plant, the volume is accumulated by traversing every individual voxel grid in the coordinate space; as the voxel resolution increases, the calculation time therefore grows rapidly. Using an octree to organize the point cloud is an ideal scheme, because the octree enables very efficient point cloud searching. An octree is a hierarchical spatial partitioning data structure whose nodes are recursively decomposed into eight sub-nodes until the minimum resolution is reached [31,32], as illustrated in Figure 1.
As shown in Figure 1, for vegetation point cloud data, a large bounding box surrounding the entire point cloud is first generated according to the distribution range of the dataset along the three coordinate directions X, Y, and Z, yielding the root node at level 0 of the octree. Second, the point cloud density in each non-empty node is judged against the set target density threshold. If the density is less than the threshold, the node is recursively divided along the three coordinate axes: in each iteration, the cuboid space represented by the current node is equally divided into eight congruent small cuboids to obtain the corresponding child nodes, until the number of points in each non-empty child node meets the density threshold. The iteration then stops, and the tree meshing structure is obtained.
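To make this recursive subdivision concrete, the following minimal C++ sketch applies the rule described above to a plain array of points (the Point, Box, and subdivide names are illustrative only and are not part of the implementation in Section 3, which relies on PCL's octree instead):

#include <vector>

struct Point { double x, y, z; };

// Axis-aligned bounding box of an octree node.
struct Box {
    double minX, minY, minZ;      // Lower corner of the node.
    double lenX, lenY, lenZ;      // Edge lengths along X, Y, and Z.
    bool contains(const Point& p) const {
        return p.x >= minX && p.x < minX + lenX &&
               p.y >= minY && p.y < minY + lenY &&
               p.z >= minZ && p.z < minZ + lenZ;
    }
    double volume() const { return lenX * lenY * lenZ; }
};

// Recursively split a non-empty node into eight congruent children until its
// point density (points per cubic meter) reaches the target threshold t, and
// collect the volumes of the accepted nodes; nodes still below the threshold
// at the minimum edge length are discarded as noise.
void subdivide(const Box& box, const std::vector<Point>& pts,
               double t, double minEdge, std::vector<double>& leafVolumes) {
    if (pts.empty()) return;                                  // Empty nodes are discarded.
    double density = static_cast<double>(pts.size()) / box.volume();
    if (density >= t) {                                       // Dense enough: count this node.
        leafVolumes.push_back(box.volume());
        return;
    }
    if (box.lenX * 0.5 < minEdge) return;                     // Sparse at the minimum scale: treated as noise.
    for (int i = 0; i < 8; ++i) {                             // One octave: bisection along each axis.
        Box child{box.minX + ((i & 1) ? box.lenX * 0.5 : 0.0),
                  box.minY + ((i & 2) ? box.lenY * 0.5 : 0.0),
                  box.minZ + ((i & 4) ? box.lenZ * 0.5 : 0.0),
                  box.lenX * 0.5, box.lenY * 0.5, box.lenZ * 0.5};
        std::vector<Point> childPts;
        for (const Point& p : pts)
            if (child.contains(p)) childPts.push_back(p);
        subdivide(child, childPts, t, minEdge, leafVolumes);
    }
}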
Let $t$ represent the target density threshold of the octree algorithm, $n$ the number of non-empty sub-nodes obtained by dividing the vegetation point cloud with the octree at this threshold, and $h_m$ the maximum level of the tree structure obtained by the octree division. Let the set of voxel centers ($M$) associated with the vegetation point cloud be represented by Equation (1):
$M = \{\, m_i \mid (x_i, y_i, z_i),\ 1 \le i \le |M| \,\}$  (1)
The point cloud density contained in the voxel centered on $m_i$ must meet the threshold, that is, Equation (2):
$\dfrac{|m_i|}{V_i} \ge t$  (2)
where $V_i$ indicates the volume of the corresponding voxel when the octree level is $i$; a detailed calculation formula is provided below. Furthermore, the global number set corresponding to all non-empty child nodes is obtained through the octree multi-level tree structure and is represented by Equation (3):
$B = \{\, b_i \mid 1 \le i \le n \,\}$  (3)
In Equation (3), $b_i$ is the global index number corresponding to the $i$-th non-empty node, and $1 \le b_i \le |M|$. Similarly, the correspondence between node level and node number is obtained from the octree multi-level index structure, and the node level calculation function $H$ is further obtained, as shown in Equation (4):
$H(b_i) = h_i, \quad h_i \in [0, h_m]$  (4)
Assume that the root node corresponds to level 0, and that the edge lengths of the root node along the three coordinate axes are $L_0 = (l_0^x, l_0^y, l_0^z)$. Since each division is an octave, which is equivalent to a bisection along each single dimension, the edge lengths of a single non-empty child node at level $i$ along the three coordinate directions are obtained as shown in Equation (5):
$(l_i^x, l_i^y, l_i^z) = \dfrac{(l_0^x, l_0^y, l_0^z)}{2^i}$  (5)
Furthermore, the volume $V_i$ of a single non-empty node at the corresponding level $i$ is obtained, as shown in Equation (6):
$V_i = l_i^x \times l_i^y \times l_i^z$  (6)
All non-empty sub-nodes are accumulated according to their levels to obtain the total estimated LVV ($V$), as shown in Equation (7):
$V = \sum_{i=1}^{n} V_{H(b_i)}$  (7)
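Equivalently, combining Equations (5)–(7), a non-empty node at level $H(b_i)$ contributes one $8^{H(b_i)}$-th of the root node volume, so Equation (7) can also be written as:
$V = \sum_{i=1}^{n} \dfrac{V_0}{8^{H(b_i)}}, \quad V_0 = l_0^x \, l_0^y \, l_0^z$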
Equation (7) describes the LVV of vegetation in an ideal situation. However, because the point cloud acquisition methods miss some information, the calculated result must be scaled up, and the specific magnification depends on the vegetation data acquisition method and the crown shape. Therefore, a better-fitting regional total LVV based on the octree is obtained after adaptive parameter optimization of the theoretical value, as shown in Equation (8):
$V = \sum_{i=1}^{n} V_{H(b_i)} \times c(Q) \times c(P)$  (8)
where $c(Q)$ is the morphological completion parameter; the specific method for calculating this parameter is described later for a specific case. $c(P)$ is the information completion parameter, which compensates for the information missing from the 3D point cloud data and therefore requires parameter fitting of the calculation results in the algorithms of this study. For vegetation point cloud data collected by airborne LiDAR or UAV airborne oblique photogrammetry, only the upper half of the vegetation canopy can be captured (unlike the MLS LiDAR instrument, which can penetrate the vegetation, these methods leave the middle of the built model hollow); such data can be considered approximately half of the complete canopy model, i.e., $c(P) = 2$. For MLS LiDAR data, the information on the back of the tree crown can be considered missing; that is, the amount of data obtained is only about 0.75 times the complete data, and $c(P) = 4/3$. The parameter can also be measured and calculated more accurately by fitting specific examples. Through this parameter fitting, the LVV measured by the proposed method is effectively optimized.
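As a minimal illustration of how Equation (8) applies these corrections, the sketch below scales the accumulated voxel volume by the two completion parameters (the function names are hypothetical; the c(P) values follow the assumptions stated above, and c(Q) follows Equation (10) in Section 4):

#include <stdexcept>
#include <string>

// Information completion parameter c(P), chosen according to the acquisition method:
// airborne LiDAR and oblique photogrammetry capture roughly the upper half of the
// canopy (c(P) = 2), while MLS data miss the back of the crown (c(P) = 4/3).
double informationCompletion(const std::string& source) {
    if (source == "airborne_lidar" || source == "oblique_photogrammetry") return 2.0;
    if (source == "mls") return 4.0 / 3.0;
    throw std::invalid_argument("unknown point cloud source");
}

// Morphological completion parameter c(Q) = a / b, where a is the largest crown
// diameter at the maximum cross-section and b is the diameter perpendicular to it.
double morphologicalCompletion(double a, double b) { return a / b; }

// Equation (8): corrected LVV from the raw voxel volume accumulated by Equation (7).
double correctedLVV(double rawVoxelVolume, double a, double b, const std::string& source) {
    return rawVoxelVolume * morphologicalCompletion(a, b) * informationCompletion(source);
}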
Thus, according to the above theoretical principles and rules, we derived a novel LVV calculation method based on the voxel measurement method and an octree data structure, which we call the VO-LVV estimation model. On the basis of these principles, the calculation process was designed as illustrated in Figure 2.
As shown in Figure 2, the entire process of the VO-LVV model is as follows:
(1) The 3D bounding box of the vegetation point cloud data is constructed; it serves as the partition space for the voxel model and the octree.
(2) According to the set voxel resolution, the voxel model is constructed; on this basis, the octree index is established for each vegetation point cloud.
(3) The voxel density at each node is calculated and compared with the set threshold.
(4) When the density is lower than the threshold, the algorithm descends to the node’s children and repeats the check at a smaller voxel scale, until the smallest voxel scale is reached.
(5) When the density meets or exceeds the threshold, the voxel is considered to be completely covered by the vegetation point cloud; its volume is then calculated and accumulated into the result according to the voxel level.
(6) According to the original data, the data collection method, and the vegetation type, parameter fitting is performed, and the corrected LVV value is finally obtained.

3. Implementation of the VO-LVV Estimation Model

The VO-LVV estimation model was developed in the C++ compilation environment and integrated with the Point Cloud Library (PCL), thus facilitating point cloud data processing. The research was based on LiDAR point cloud data collected with a laser scanning instrument and on dense point cloud data constructed by 3D modeling from UAV remote sensing images. After the vegetation information was extracted with commercial software, a quantitative analysis of the LVV of a single plant or of regional vegetation was performed. Specifically, the method was required to achieve the following:
(1) Obtain the XYZ space coordinate information for all point clouds.
(2) Construct an octree organizational structure for the point cloud collection.
(3) Use the voxel nearest neighbor method to establish a fast search method for point clouds in voxel space.
(4) Establish a method for measuring the volume of voxels at different levels.
LAS, a common point cloud format, is taken as the input. According to the above requirements, to enable LiDAR point cloud data to be processed with the PCL library, the mycloud point cloud class is constructed as the LAS data processing object. The structure of this class is shown in Figure 3.
In the mycloud class, the PointXYZ format of the PCL point cloud library is used to store the LAS format point cloud. When the point cloud class is initialized, liblas::Reader is used to read the point cloud XYZ coordinate values directly. The procedure is described by the pseudocode in Appendix A.
After the LAS data are converted to the PointCloud format, the OctreePointCloudSearch class is initialized to build an octree according to the preset voxel resolution parameter, and the hierarchical information and density measurement functions at the branch nodes of the octree point cloud are instantiated on the basis of this class. By traversing the octree, the voxel nearest neighbor search method is used to calculate the point cloud density in each subdivided voxel, and the volumes of voxels reaching the density threshold are accumulated. The VO-LVV procedure is described by the pseudocode in Appendix B.
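As a usage illustration, a minimal driver that ties these pieces together might look as follows (assuming the mycloud class of Appendix A, the getoctreevolumeV0 method of Appendix B, and the hypothetical correctedLVV helper sketched in Section 2; the file name and crown diameters are placeholders):

#include <iostream>

int main() {
    // Load a preprocessed vegetation point cloud in LAS format (placeholder file name).
    mycloud tree("vegetation_sample.las", "las");

    // Raw voxel volume from the octree traversal, using the Section 4 settings:
    // voxel resolution 0.2 m and density threshold 1000 points/m^3.
    double rawVolume = tree.getoctreevolumeV0(0.2, 1000);

    // Apply Equation (8): correct for crown ellipticity (a and b are placeholder
    // crown diameters) and for the missing lower canopy of 3D-reconstruction
    // data, i.e., c(P) = 2.
    double lvv = correctedLVV(rawVolume, /*a=*/4.4, /*b=*/3.9, "oblique_photogrammetry");
    std::cout << "Estimated LVV: " << lvv << " m^3" << std::endl;
    return 0;
}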

4. Experimental Verification of the VO-LVV Estimation Model

The VO-LVV estimation method in this study uses 3D point cloud data on vegetation to construct an octree search space; differences in point cloud quality and parameters among data sources therefore substantially influence the performance of the method. To build a generally applicable method, testing on different datasets and analyzing the sources of error were necessary.
In this study, we selected an experimental area (longitude 103.933113°, latitude 30.761321°) at the University of Electronic Science and Technology of China (UESTC) for data collection, and used three methods (an MLS system, UAV airborne LiDAR, and UAV airborne oblique photogrammetry) to obtain the corresponding point cloud datasets in the same time phase. The detailed parameters of the MLS system and the UAV airborne LiDAR are listed in Table 1. For the UAV airborne oblique photogrammetry equipment, we used the DJI M300 UAV (https://www.dji.com/cn, accessed on 3 January 2022) and the Zenmuse P1 camera (https://www.dji.com/cn/zenmuse-p1, accessed on 3 January 2022).
We then used MicroStation for the point cloud classification process to coarsely extract the multi-vegetation point clouds from the original data. Since the trees in these data are properly spaced from each other, and the canopies of adjacent plants do not overlap, the point cloud data of multiple plants could be segmented by using the fence tool. The point cloud data obtained from the above three methods were classified to obtain rows of plants in the same place. The quality information of those point cloud datasets is shown in Table 2.
Of note, on the basis of the quality of the above three types of data and our previous calculation experience, the two key parameters of this method, the voxel resolution and the density threshold, were adjusted. The voxel resolution (that is, the minimum voxel size obtained by the octree division) was set to $l_m^x = l_m^y = l_m^z = 0.2\ \mathrm{m}$, and the density threshold to $t = 1000\ \mathrm{points/m^3}$.
With the point cloud processing software, the visualization effects of the point clouds acquired by the three methods are shown in Figure 4. From top to bottom, the vegetation point clouds acquired by MLS (Figure 4a), UAV airborne LiDAR (Figure 4b), and 3D reconstruction with the UAV airborne oblique photogrammetry technique (Figure 4c) are shown. Owing to the different angles of data acquisition, some information is missing in the raw data acquired through the different methods: the MLS LiDAR data has a cavity in the canopy on the side with its back to the acquisition equipment, whereas the point cloud in the airborne LiDAR and 3D reconstruction with UAV airborne oblique photogrammetry images has a more severe absence of information at the bottom of the canopy.
First, the preprocessed single tree LAS point cloud data were imported into the calculation program used in this study for analysis. Two traditional LVV measurement methods were selected for comparison. The first was the commonly used measurement method based on a triangular mesh, i.e., the aforementioned wrapped surface approach [26], which constructs a triangular mesh on the point cloud of the crown surface to form a closed envelope and then uses the generated polyhedron as the fitting model for calculation. The second was the formula measurement method [17], which fits the most similar regular geometry to the observed crown shape and inputs the measured crown width, crown height, and tree measurement factors into the diameter–height correlation equation. The LVV value calculated with the formula method can be regarded as an approximation of the true LVV. In this experiment, the vegetation canopy in the verification area was ellipsoidal, and the LVV is calculated as shown in Equation (9):
$V = \dfrac{\pi d^2 h}{6}$  (9)
where $d$ is the crown width and $h$ is the crown height. For our approach, because the formula models the cross-section of each layer of a plant as a regular circle, whereas the cross-section of each layer of an actually measured plant is usually elliptical, the total LVV estimated with Equation (8) must be corrected by the shape completion parameter $c(Q)$, which is calculated as shown in Equation (10):
$c(Q) = \dfrac{a}{b}$  (10)
where $a$ is the maximum crown diameter measured at the maximum cross-section of the vegetation point cloud, and $b$ is the crown diameter perpendicular to $a$ on the same cross-section. A comparison of the calculation results for the same vegetation in the verification area under the same time phase is shown in Table 3.
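For reference, substituting the crown measurements listed in Table 3 (crown width d = 4.355 m, crown height h = 2.592 m) into Equation (9) reproduces the theoretical LVV used as the true value:
$V = \dfrac{\pi \times 4.355^2 \times 2.592}{6} \approx 25.7401\ \mathrm{m^3}$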
From Table 3, the following can be observed:
(1) Through the testing of vegetation point cloud samples, compared with the wrapped surface approach, the VO-LVV measurement method used in this study enabled faster calculation; the acceleration ratio generally reached approximately 44:1–102:1, indicating good calculation efficiency.
(2) For point cloud data from the three different sources, the corresponding LVV results can be calculated with either method (the wrapped surface approach or the VO-LVV measurement method). However, comparing how close the results are to the true LVV value shows that the calculation results from the point cloud data based on 3D reconstruction with the UAV airborne oblique photogrammetry technique were better than those obtained from the other sources (MLS or UAV airborne LiDAR). Therefore, point cloud data with a higher point density appear to improve the LVV calculation accuracy. Meanwhile, the mean value (16.0617 m³) over the three data sources for the wrapped surface approach is closer to the true value (25.7401 m³) than that of the VO-LVV method (14.0726 m³).
(3) For the point cloud data source most suitable for LVV estimation, i.e., the point cloud data from 3D reconstruction, the absolute value of the measurement error rate (|measured value − true value|/true value) of the VO-LVV method was 19.9%, indicating better measurement accuracy than the 26.8% absolute error rate of the traditional wrapped surface approach (see the worked calculation below).
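As a worked example of the error rate in item (3), using the 3D reconstruction results in Table 3 and the true value of 25.7401 m³:
$\left| \dfrac{20.6150 - 25.7401}{25.7401} \right| \approx 19.9\%\ \text{(VO-LVV)}, \qquad \left| \dfrac{18.8464 - 25.7401}{25.7401} \right| \approx 26.8\%\ \text{(wrapped surface approach)}$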
To verify the calculations of this method in complex scenarios, we tested the method on a sample of multiple plants (three plants in one row). As in the single tree experiment, the wrapped surface approach and the formula method were used to measure each plant separately; the totals were then accumulated and compared. Unlike these methods, our method calculated the vegetation point clouds as a whole. A comparison of the calculation results of the multiple plants experiment is shown in Table 4.
From Table 4, the following can be observed:
(1) In calculations involving multiple plants, the VO-LVV estimation method maintains an approximately linear relationship with the single-plant case; it is therefore suitable for both single-tree and multi-plant scenarios. In contrast, the wrapped surface approach and the formula measurement method can perform calculations only for single trees, and their calculation efficiency is considerably lower.
(2) As in item (2) of the Table 3 analysis, the point cloud data with higher point density yielded LVV results closer to the true value.
(3) The VO-LVV estimation method still maintained a good acceleration ratio compared with the wrapped surface approach, reaching approximately 98:1–230:1.
(4) For the point cloud data source most suitable for LVV estimation, i.e., the point cloud data from 3D reconstruction, the absolute error rate of the VO-LVV method was 14.5%, whereas that of the wrapped surface approach was 15.2%.
Finally, for complex vegetation coverage scenes, this study also carried out a corresponding experiment, as shown in Figure 5. The vegetation in this experimental area is dense and mutually overlapping, and the vegetation types differ greatly. Following the same operations as above, three different datasets of this region were acquired through the three different methods. After classification and extraction of the three datasets, their quality information is shown in Table 5.
As Table 5 shows, for this complex vegetation scene, only our proposed method could be used to calculate the LVV values of the three types of point cloud data. The calculated LVV values are 761.9390, 1056.9209, and 1331.9026 m³, respectively. Although the true value could not be obtained through actual measurement in this case, the values measured by our method are consistent with the quantitative relationship between the scene’s area and the vegetation crown height. According to the field investigation and statistics, there are nine plants with similar crown shape and size, together with an amount of low vegetation, in this area. For one of these plants, the measured LVV is about 121.9107 m³. On this basis, the true LVV of the area can be estimated to be about 1219.107–1341.0177 m³. Therefore, the method in this paper can generally be considered effective for the calculation of regional LVV, and it can also be considered effective for LVV measurement in complex scenes.

5. Discussion and Conclusions

The existing quantitative methods for LVV measurement are based mainly on single tree modeling or on the calculation of single tree species’ growth parameters; they cannot handle the many tree species and complex forest layer structures present in urban regions and thus are unsuitable for broad application. To address these problems, this paper proposes a novel LVV estimation model, VO-LVV, which combines the voxel measurement method with a point cloud organized in an octree structure, to estimate the LVV of typical vegetation and landforms in cities. The experimental results indicated that our method can be applied to point cloud data obtained through different methods, and that it provides clear advantages in calculation speed, accuracy, and the scope of the algorithm’s application:
  • The VO-LVV estimation method, compared with a traditional method, i.e., the wrapped surface approach, enabled substantial efficiency improvement and higher calculation accuracy. For the point cloud data source most suitable for LVV estimation (i.e., the point cloud data from 3D reconstruction), the maximum calculation acceleration ratio reached 230:1, and the absolute error rate of the VO-LVV method was up to 6.9 percentage points lower than that of the wrapped surface approach.
  • Furthermore, the new method can be simultaneously applied to scenarios of single plants and multiple plants, and can be used for the calculation of LVV in areas with various vegetation types in cities.
However, this method also has some limitations in the following aspects:
  • The voxel resolution and density threshold strongly influence the LVV calculation results. Because the algorithm lacks a theoretical analysis of voxel scale division, it cannot process data with low point density.
  • This method uses parameter fitting to compensate for the lack of information in the original point cloud data and does not analyze the spatial distribution of the point cloud; therefore, errors may be present in the estimation results.
Follow-up studies should start by investigating the above problems and should continue to refine the algorithm to improve the measurement accuracy, with a focus on the following aspects:
  • The amount of verification data will be increased to verify the adaptability of the algorithm to different canopy structures.
  • For low point density in the original point cloud data, the commonly used point cloud up-sampling algorithm [33] could be introduced.
  • For the lack of information due to the point cloud acquisition method, point cloud completion technology based on deep learning [34] could be used to complete the canopy data and improve the calculation accuracy.
  • Among different types of vegetation in cities, the lushness of branches and leaves greatly varies. Other measurement methods for specific tree species could be introduced and combined with the results of this study to yield an adaptive LVV high-precision calculation scheme. Through in-depth improvement of the method reported herein, a rapid estimation model of urban regional LVV could be established, which would enable urban greening measurements to be quickly obtained and analyzed.

Author Contributions

Conceptualization, F.H. and S.P.; methodology, F.H. and S.P.; software, S.C. and S.P.; writing—original draft preparation, F.H. and S.P.; writing—review and editing, S.C., H.C. and N.M.; visualization, H.C. and N.M.; supervision, F.H.; project administration, F.H.; funding acquisition, F.H. All authors have read and agreed to the published version of the manuscript.

Funding

This study was mainly supported by the Fundamental Research Funds for the Central Universities (grant Nos. ZYGX2019J069 and ZYGX2019J072).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The algorithm and data used for this study are available for download at: https://pan.baidu.com/s/1LQN2Mt6tVxr7c9aJPvMKwg, accessed on 3 January 2022, and extraction code is: seyw.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

// Create a class that stores the LAS point cloud in PCL format for the LVV calculation.
#include <cstdint>
#include <fstream>
#include <string>
#include <liblas/liblas.hpp>
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>

class mycloud {
public:
    pcl::PointCloud<pcl::PointXYZ>::Ptr cloud;   // The data structure which stores the point cloud.
    unsigned long nbPoints = 0;                  // Number of points in the input file.
    double vol = 0.0, area = 0.0;                // Initialize the LVV value.

    mycloud(const std::string& inputFilename, const std::string& type);
    double getoctreevolumeV0(double voxelSize = 0.2, int miu = 500);
};

// Load the LiDAR file and convert it from the .las format to the PCL point cloud format.
mycloud::mycloud(const std::string& inputFilename, const std::string& type)
    : cloud(new pcl::PointCloud<pcl::PointXYZ>) {
    std::ifstream ifs(inputFilename, std::ios::in | std::ios::binary);   // Open the .las file.
    if (ifs.is_open() && type == "las") {
        liblas::ReaderFactory f;
        liblas::Reader reader = f.CreateWithStream(ifs);
        liblas::Header const& header = reader.GetHeader();               // Read the file header.
        nbPoints = header.GetPointRecordsCount();                        // Get the number of points.
        cloud->width = static_cast<uint32_t>(nbPoints);                  // Keep the new data structure the same size as the original file.
        cloud->height = 1;
        cloud->is_dense = false;
        cloud->points.resize(nbPoints);
        for (std::size_t i = 0; i < nbPoints; ++i) {                     // Copy each point from the original data.
            reader.ReadNextPoint();
            cloud->points[i].x = reader.GetPoint().GetX();
            cloud->points[i].y = reader.GetPoint().GetY();
            cloud->points[i].z = reader.GetPoint().GetZ();
        }
        ifs.close();
    }
}

Appendix B

// Calculate the LVV by octree voxel searching.
#include <cstddef>
#include <vector>
#include <pcl/octree/octree_search.h>

double mycloud::getoctreevolumeV0(double voxelSize, int miu) {
    vol = 0.0;                                                        // Initialize the LVV.
    // Create the octree and set the resolution at the lowest octree level to voxelSize.
    pcl::octree::OctreePointCloudSearch<pcl::PointXYZ> oct(voxelSize);
    oct.setInputCloud(cloud);                                         // Set the input point cloud data for octree searching.
    oct.addPointsFromInputCloud();

    // The vector which stores the centers of the occupied (non-empty) voxels.
    std::vector<pcl::PointXYZ, Eigen::aligned_allocator<pcl::PointXYZ> > searchPoint;
    if (oct.getOccupiedVoxelCenters(searchPoint) == 0)                // Check whether the octree found any occupied voxels.
        return 0.0;

    std::vector<int> pointIdxVec;
    for (std::size_t j = 0; j < searchPoint.size(); ++j) {            // Check the voxel around each occupied-voxel center.
        pointIdxVec.clear();
        if (oct.voxelSearch(searchPoint[j], pointIdxVec)) {           // Collect the points that fall inside this voxel.
            std::size_t n = pointIdxVec.size();
            if (n >= miu * voxelSize * voxelSize * voxelSize)         // Check whether it is an isolated set of noise points.
                vol += voxelSize * voxelSize * voxelSize;             // Accumulate the volume of the voxel centered on this node.
        }
    }
    return vol;
}

References

1. Kim, J.; Mok, K. Study on the current status and direction of environmental governance around urban forest in Korea: With a focus on the recognition of local government officials. J. Korean Soc. For. Sci. 2010, 99, 580–589.
2. Grimmond, S.; Loridan, T.; Best, M. The importance of vegetation on urban surface-atmosphere exchanges: Evidence from measurements and modelling. In Proceedings of the International Workshop on Urban Weather and Climate: Observation and Modeling, Beijing, China, 12 July 2011.
3. Gao, Y.G.; Xu, H.Q. Estimation of multi-scale urban vegetation coverage based on multi-source remote sensing images. J. Infrared Millim. Waves (Chin. Ed.) 2017, 36, 225–235. (In Chinese)
4. Mishev, D. Vegetation index of a mixed class of natural formations. Acta Astronaut. 1992, 26, 665–667.
5. Hutmacher, A.M.; Zaimes, G.N.; Martin, J.; Green, D.M. Vegetation structure along urban ephemeral streams in southeastern Arizona. Urban Ecosyst. 2014, 17, 349–368.
6. Zhou, H.X.; Tao, G.X.; Yan, X.Y.; Sun, J.; Wu, Y. A review of research on the urban thermal environment effects of green quantity. Chin. J. Appl. Ecol. (Chin. Ed.) 2020, 31, 2804–2816.
7. Pataki, D.E. City trees: Urban greening needs better data. Nature 2013, 502, 624.
8. Zhao, W. Research on urban green space system evaluation index system. In Proceedings of the International Conference on Air Pollution and Environmental Engineering (APEE), Hong Kong, China, 26–28 October 2018; Volume 208.
9. Bai, X.Q.; Wang, W.; Lin, Z.Y.; Zhang, Y.J.; Wang, K. Three-dimensional measuring for green space based on high spatial resolution remote sensing images. Remote Sens. Land Resour. (Chin. Ed.) 2019, 31, 53–59.
10. Li, F.X.; Li, M.; Feng, X.G. High-precision method for estimating the three-dimensional green quantity of an urban forest. J. Indian Soc. Remote Sens. 2021, 49, 1407–1417.
11. Zheng, S.J.; Meng, C.; Xue, J.H.; Wu, Y.B.; Liang, J.; Xin, L.; Zhang, L. UAV-based spatial pattern of three-dimensional green volume and its influencing factors in Lingang New City in Shanghai, China. Front. Earth Sci. (Chin. Ed.) 2021, 15, 543–552.
12. Zhou, J.H.; Sun, T.Z. Study on remote sensing model of three-dimensional green biomass and the estimation of environmental benefits of greenery. Remote Sens. Environ. Chin. (Chin. Ed.) 1995, 10, 162–174.
13. Feng, D.L.; Liu, Y.H.; Wang, F.; Yuan, Y. Review on ecological benefits evaluation of urban green space based on three-dimensional green quantity. Chin. Agr. Sci. Bull. (Chin. Ed.) 2017, 33, 129–133.
14. Huang, Y.; Yu, B.L.; Zhou, J.H.; Hu, C.L.; Tan, W.Q.; Hu, Z.M.; Wu, J.P. Toward automatic estimation of urban green volume using airborne LiDAR data and high-resolution remote sensing images. Front. Earth Sci. (Chin. Ed.) 2013, 7, 43–54.
15. Liu, C.F.; Jiang, Y.Z.; Zhang, Q.L.; Liu, J.; Wu, B.; Li, C.M.; Zhang, S.X.; Wen, J.B.; Liu, S.F.; Li, Y.B.; et al. Tridimensional green biomass measures of Shenyang urban forests. J. Beijing For. Univ. (Chin. Ed.) 2006, 28, 32–37.
16. Zhou, J.H. Theory and practice on database of three-dimensional vegetation quantity. Acta Geogr. Sin. (Chin. Ed.) 2001, 56, 14–23.
17. Zhou, Y.F.; Zhou, J.H. Fast method to detect and calculate LVV. Acta Ecol. Sin. (Chin. Ed.) 2006, 26, 4204–4211.
18. Nowak, D.J.; Crane, D.E.; Dwyer, J.F. Compensatory value of urban trees in the United States. J. Arboric. 2002, 28, 194–199.
19. Nowak, D.J.; Greenfield, E.J.; Hoehn, R.E.; Lapoint, E. Carbon storage and sequestration by trees in urban and community areas of the United States. Environ. Pollut. 2013, 178, 229–236.
20. Sander, H.; Polasky, S.; Haight, R.G. The value of urban tree cover: A hedonic property price model in Ramsey and Dakota Counties, Minnesota, USA. Ecol. Econ. 2010, 69, 1646–1656.
21. Sun, G.; Ranson, K.J.; Kimes, D.S.; Blair, J.B.; Kovacs, K. Forest vertical structure from GLAS: An evaluation using LVIS and SRTM data. Remote Sens. Environ. 2008, 112, 107–117.
22. Li, W.; Niu, Z.; Wang, C.; Gao, S.; Feng, Q.; Chen, H.Y. Forest above-ground biomass estimation at plot and tree levels using airborne LiDAR data. J. Remote Sens. (Chin. Ed.) 2015, 4, 669–679.
23. Dai, C. The Predicting Models of Crown Volume and Crown Surface Area about Main Species of Tree of Beijing; North China University of Science and Technology: Qinhuangdao, China, 2015.
24. Cháidez, J.D.J.N. Allometric equations and expansion factors for tropical dry forest trees of Eastern Sinaloa, Mexico. Trop. Subtrop. Agroecosyst. 2009, 10, 45–52.
25. Chen, X.X.; Li, J.C.; Xu, Y.L. Hierarchical measurement of urban living vegetation volume based on LiDAR point cloud data. Eng. Surv. Mapp. (Chin. Ed.) 2018, 27, 43–48.
26. Hosoi, F.; Nakai, Y.; Omasa, K. 3D voxel-based solid modeling of a broad-leaved tree for accurate volume estimation using portable scanning LiDAR. ISPRS J. Photogramm. 2013, 82, 41–48.
27. Wu, B.; Yu, B.L.; Yue, W.H.; Shu, S.; Tan, W.Q.; Hu, C.L.; Huang, Y.; Wu, J.P.; Liu, H.X. A voxel-based method for automated identification and morphological parameters estimation of individual street trees from mobile laser scanning data. Remote Sens. 2013, 5, 584–611.
28. Che, D.F.; Zhang, C.L.; Du, H.Y. 3D model organization method of digital city based on BSP tree and grid division. Mine Surv. (Chin. Ed.) 2019, 47, 81–84.
29. Saona-Vazquez, C.; Navazo, I.; Brunet, P. The visibility octree: A data structure for 3D navigation. Comput. Graph. 1999, 23, 635–643.
30. Li, F.X.; Shi, H.; Sa, L.W.; Feng, X.G.; Li, M. 3D green volume measurement of single tree using 3D laser point cloud data and differential method. J. Xi’an Univ. of Arch. Tech. (Chin. Ed.) 2017, 49, 530–535.
31. Losasso, F.; Gibou, F.; Fedkiw, R. Simulating water and smoke with an octree data structure. ACM Trans. Graph. 2004, 23, 457–462.
32. Kato, A.; Moskal, L.M.; Schiess, P.; Swanson, M.E.; Calhoun, D.; Stuetzle, W. Capturing tree crown formation through implicit surface reconstruction using airborne LiDAR data. Remote Sens. Environ. 2009, 113, 1148–1162.
33. Yu, L.Q.; Li, X.Z.; Fu, C.W.; Cohen-Or, D.; Heng, P.A. PU-Net: Point Cloud Upsampling Network. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 2790–2799.
34. Cheng, M.; Li, G.Y.; Chen, Y.P.; Chen, J.; Wang, C.; Li, J. Dense Point Cloud Completion Based on Generative Adversarial Network. IEEE Trans. Geosci. Remote Sens. 2022, 60, 1–10.
Figure 1. Schematic diagram of the octree structure.
Figure 2. Flow chart of the VO-LVV measurement model based on the voxel method and an octree.
Figure 3. Mycloud class design diagram.
Figure 4. Vegetation point cloud data obtained through different acquisition methods. (a) MLS LiDAR data. (b) UAV airborne LiDAR data. (c) 3D reconstruction with UAV airborne photogrammetry images.
Figure 5. Schematic diagram of a complex vegetation coverage scene (LiDAR datasets). (a) Vertical view. (b) Survey view.
Table 1. Measurement parameters of the MLS system and UAV airborne LiDAR equipment.

Type | iScan-S-Z (MLS) | Zenmuse L1 (Airborne LiDAR)
Limit measurement distance | 119 m | 450 m
Laser emission frequency | 1.01 × 10^7 p/s | 2.4 × 10^6 p/s (single echo), 4.8 × 10^6 p/s (multiple echo)
Ranging accuracy | 0.9 mm (at 50 m) | 3 cm (at 100 m)
Angular resolution | 0.0088° | 0.01°
System accuracy | 5 cm (at 100 m) | 10 cm (at 50 m)
Table 2. Quality comparison of three types of point cloud data collected at the same site and in the same time phase.

Point Cloud Source | Data Format | Point Cloud Quantity | Point Cloud Density, Resolution | Data Details
MLS | .las | 23,762 | 8000 points/m³, 5 cm | Missing part of the crown dorsal information
UAV airborne LiDAR | .las | 50,398 | 8000 points/m³, 5 cm | Missing the trunk and crown bottom
UAV airborne oblique photogrammetry | .las | 58,700 | 51,000,000 points/m³, 1 cm | Missing the trunk and crown bottom
Table 3. Comparison of different LVV measurement methods (for a single tree *).

Calculation Method | Measurement Data | Calculated LVV (m³) | Calculation Time (s)
Wrapped surface approach | MLS | 16.1555 | 0.308
 | Airborne LiDAR | 13.1832 | 1.673
 | 3D reconstruction point cloud | 18.8464 | 7.660
VO-LVV method (voxel resolution: 0.2 m, density threshold: 1000/m³) | MLS | 11.3601 | 0.007
 | Airborne LiDAR | 10.2427 | 0.024
 | 3D reconstruction point cloud | 20.6150 | 0.075
True value (based on diameter–height correlation equation) | Crown width | 4.355 m | /
 | Crown height | 2.592 m |
 | LVV in theory | 25.7401 m³ |
* Note: This single tree is the middle one in Figure 4; other single trees can be calculated in the same way.
Table 4. Comparison of different LVV calculation methods (for multiple plants).

Calculation Method | Measurement Data | Calculated LVV (m³) | Calculation Time (s)
Wrapped surface approach | MLS | 58.1347 | 2.291
 | Airborne LiDAR | 39.9395 | 6.199
 | 3D reconstruction point cloud | 67.7621 | 26.942
VO-LVV method (voxel resolution: 0.2 m, density threshold: 1000/m³) | MLS | 39.2171 | 0.016
 | Airborne LiDAR | 39.1926 | 0.063
 | 3D reconstruction point cloud | 68.3091 | 0.117
True value (based on diameter–height correlation equation) | LVV in theory | 79.8619 | /
Table 5. Quality comparison of three point cloud datasets collected from the same complex vegetation coverage scene.

Point Cloud Source | Data Format | Point Cloud Quantity | Point Cloud Density, Resolution | Data Details
MLS | .las | 301,462 | 8000 points/m³, 5 cm | Missing part of the crown dorsal information
Airborne LiDAR | .las | 723,622 | 8000 points/m³, 5 cm | Missing the trunk and crown bottom
UAV airborne oblique photogrammetry | .las | 7,137,059 | 1,000,000 points/m³, 1 cm | Missing the trunk and crown bottom