Article

Point Cloud Data Processing Optimization in Spectral and Spatial Dimensions Based on Multispectral Lidar for Urban Single-Wood Extraction

1
The State Key Laboratory of Information Engineering in Surveying, Mapping and Remote Sensing, Wuhan University, Wuhan 430079, China
2
The Shanghai Radio Equipment Research Institute, Shanghai 201109, China
3
Electronic Information School, Wuhan University, Wuhan 430079, China
*
Author to whom correspondence should be addressed.
ISPRS Int. J. Geo-Inf. 2023, 12(3), 90; https://doi.org/10.3390/ijgi12030090
Submission received: 16 December 2022 / Revised: 16 February 2023 / Accepted: 21 February 2023 / Published: 23 February 2023
(This article belongs to the Special Issue Geomatics in Forestry and Agriculture: New Advances and Perspectives)

Abstract
Lidar can effectively obtain three-dimensional information on ground objects. In recent years, lidar has developed rapidly from single-wavelength to multispectral and hyperspectral systems. The multispectral airborne lidar Optech Titan is the first commercial system that can collect point cloud data in the 1550, 1064, and 532 nm channels. This study proposes a point cloud segmentation step within the preprocessing intensity-interpolation stage to solve the problem of inaccurate intensity at object boundaries during point cloud interpolation. The experiment consists of three steps. First, a multispectral lidar point cloud is obtained using point cloud segmentation followed by intensity interpolation; the spatial-dimension advantage of the multispectral point cloud is exploited to improve the accuracy of the interpolated spectral information. Second, the point clouds are divided into eight categories by constructing geometric information, spectral reflectance information, and spectral indices; accuracy evaluation and contribution analysis are conducted using the point cloud ground truth and the classification results. Lastly, the spatial-dimension information is enhanced by point cloud down-sampling, which resolves the errors caused by the airborne scanning geometry in single-tree extraction of urban trees. The classification results showed that point cloud segmentation before intensity interpolation effectively improves both interpolation and classification accuracies: the overall classification accuracy of the data improved by 3.7%. Compared with the single-tree extraction result without down-sampling (377 trees), the urban tree extraction result demonstrates the effectiveness of the proposed down-sampling step: the over-segmentation problem is resolved, and the final single-tree count (329) is markedly more consistent with the real situation of the region.

1. Introduction

In recent years, intensive urban development and economic activities have increased energy consumption and led to serious greenhouse gas emissions [1,2]. Urban trees significantly impact environmental quality and human health because they absorb carbon dioxide and store excess carbon in biomass [3,4]. Protecting carbon storage and improving green space infrastructure in urban areas present significant environmental benefits [5]. Therefore, estimating and monitoring urban carbon storage and green space is critical. Airborne lidar is an accurate remote sensing technology for monitoring biological carbon and has been widely used in surface condition research and land cover classification [6]. Multispectral lidar systems have also been widely used for vegetation state monitoring [7,8]. With the increasing demand for more accurate land cover classification and urban biomass monitoring, lidar is being used ever more frequently. Multispectral lidar prototypes have been developed in many laboratories; lidar has achieved satisfactory results in many application fields and has developed rapidly from single-wavelength to multispectral and, later, hyperspectral systems [9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25]. Although multispectral and hyperspectral lidar have shown better capabilities for land cover and forestry applications, such systems cannot yet be mass-produced, have no airborne implementations, and therefore cannot be used at large scale. At the system level, higher spectral resolution leads to larger instruments, so in terms of practicability, two- or three-wavelength lidar is more suitable for large-scale research. In 2014, Optech released the first commercial multispectral airborne lidar system, highlighted its outstanding land and extremely shallow water mapping capabilities, and demonstrated new applications enabled by its multispectral capability [26].
At present, many mature research methods exist for land cover classification of Titan data. Many studies convert the point cloud data into raster images to achieve land cover classification of the multispectral Titan data [27,28]. Some studies classify the data by constructing features [29,30], and some use morphology to improve classification accuracy [31]. In forestry, many studies have verified the high potential and advantages of airborne multispectral lidar in forest and urban vegetation detection and single-tree extraction [32,33,34,35,36,37,38,39]. Subsequent studies have also demonstrated lidar's ability to estimate tree biomass and carbon content [40,41,42,43,44,45,46,47,48]. However, the accurate extraction of individual trees in complex urban environments remains challenging.
The Titan sensor acquires multispectral point clouds from three groups of single-wavelength point clouds that differ spatially and spectrally. The intensities of the point clouds must therefore be interpolated in the preprocessing stage to convert them from single-wavelength to multispectral point cloud data. The traditional approach interpolates the original point cloud directly. Weinmann developed software that searches the other two channels' point clouds within a 1 m radius of each point and assigns the found intensity values [49]. Ekhtari used a large search radius to exclude the case in which the corresponding channel has no point cloud [50]. However, because neighboring ground objects lie close together, the intensity interpolated at object edges is inaccurate. We therefore need to improve the accuracy of point cloud intensity interpolation to properly obtain the spectral information of urban vegetation.
To address the geometric and spectral inconsistencies of the Titan data, we propose an improved processing method for multispectral lidar point clouds. Firstly, the preprocessed point cloud is segmented by the proposed method. Secondly, we select the bands according to the complexity of urban features in the classification, and spectral indices are chosen to improve the extraction of trees in the city. Then, a down-sampling method is adopted in the subsequent single-wood extraction to effectively eliminate the errors caused by the scanning geometry. Compared with other methods, ours improves the spectral information completion accuracy of the multispectral point cloud [49,50] and optimizes the geometric distribution of the point cloud [51,52]. Meanwhile, the quality of airborne multispectral lidar point cloud data for urban single-wood extraction is enhanced [32,33,36,37].

2. Materials and Methods

2.1. Study Area and Point Cloud Data

2.1.1. Study Area

The flight-area dataset was acquired with the Optech Titan airborne multispectral lidar system over the University of Houston in Houston, Texas, USA (29°43′09.68″ N, 95°21′12.72″ W) (Figure 1). The study area is a subset of the flight dataset located at the intersection of Wheeler Avenue and Scott Street, centered on the residential area west of the University of Houston.

2.1.2. Data Introduction

The data used are from igASS2018. The three Titan channels scan at different angles: the 1550 nm, 1064 nm, and 532 nm channels scan vertically downward, 3.5° forward, and 7° forward, respectively. Point cloud files collected through the three channels therefore exhibit some geometric inconsistencies. The numbers of points in the three channels are 4,436,481, 4,801,267, and 4,739,643, and the average point cloud density is 6.3 points per m². In addition to coordinates, the data include point source ID, user data, flight line edge, scan angle, echo count, and echo intensity. We obtained the ground truth of single trees in the region from the images provided with the Titan data and from the elevation-rendered point cloud. The area contains many detached houses and 308 trees. We referred to other articles on classification using Titan data [21,53]. Considering the several parking areas and many cars in the scene, we treated cars as a separate category, aided by the RGB images and point cloud data. For non-residential buildings we likewise referred to the RGB images and point clouds; in the three bands studied, non-residential buildings exhibit spectral characteristics different from other ground features, so we treated them as a separate category. We manually divided the target objects into eight categories: roads, low vegetation, high vegetation, cars, bare land, houses, non-residential buildings, and wires (Figure 2d) (all point clouds are displayed with CloudCompare software). The number of multispectral points in each category is shown in Table 1.

2.2. Methods

The proposed method for land cover classification using Titan multispectral lidar data is presented in the flowchart (Figure 3). Firstly, we segment each single-wavelength point cloud and, on the basis of the segmentation, perform intensity interpolation, improving the quality of the spectral information by exploiting the spatial-dimension information of the point cloud. After intensity interpolation, we obtain the multispectral point cloud data. We then construct spectral indices, classify the point cloud, and extract the tree point cloud from the classification result. To optimize the spatial distribution of the tree point cloud, we apply down-sampling. Finally, we perform single-wood segmentation on the tree point cloud and evaluate the accuracy of the classification and single-wood segmentation results against the ground truth. The details of the method are presented in the following subsections.

2.2.1. Spectral Interpolation Based on Space Information Enhancement

Owing to the geometric inconsistencies of the three Titan channels, each of the three point cloud datasets carries its own geometric information together with single-channel backscattered intensity at a single wavelength. In Figure 4, the red, green, and blue point clouds are those obtained by the 1550 nm SWIR channel, the 1064 nm NIR channel, and the 532 nm green channel, respectively. Our first step is therefore to process the three geometrically inconsistent channels, each containing different spectral information, into a multispectral point cloud. Traditional point cloud interpolation methods include nearest-point, bilinear, and inverse-distance-weighting interpolation. We need to interpolate the intensity of each single-wavelength point cloud. When we used the traditional methods to interpolate multispectral point clouds for classification, we found that the classification results at the edges of different ground objects were not ideal. Considering that spectral information drives the classification, we concluded that points at the edge of a ground object may acquire intensity information from points of other categories during interpolation. We therefore propose segmenting the point cloud before intensity interpolation, which effectively improves interpolation accuracy. After segmentation we obtain point cloud blocks of different object types, and interpolating within a single block effectively avoids mixing in intensities from other ground objects.
The point cloud curvature is calculated as follows. Here, $\mathbf{n}$ is the normal vector, $A$ is the infinitesimal region around $p$, $\operatorname{diam}(A)$ is the diameter of this region, $\nabla$ is the gradient operator with respect to point $p$, and $\alpha_{ij}$ and $\beta_{ij}$ are the two angles opposite the edge connecting $p_i$ and $p_j$. The mean curvature $x_n$ of point $p$ on the surface satisfies:

$$2 x_n \mathbf{n} = \lim_{\operatorname{diam}(A) \to 0} \frac{\nabla A}{A}$$

Discretizing this formula over the neighborhood $N_i$ of $p_i$ gives:

$$x_n(p_i)\,\mathbf{n} = \frac{\gamma}{4 A_{\min}}, \qquad \gamma = \sum_{j \in N_i} \left( \cot\alpha_{ij} + \cot\beta_{ij} \right)\left( p_i - p_j \right)$$
We choose the region-growing segmentation algorithm for segmentation. In this method, the point with the least curvature is used as the seed point to start growth; starting from the flattest point reduces the total number of regions. For each neighborhood point of the current seed, we compute the angle between its normal and the normal of the seed; neighborhood points whose angle is below the smoothness threshold are added to the current region. We then check the curvature of each added neighborhood point, and points whose curvature is below the curvature threshold are appended to the seed sequence. The normal vector computation, smoothness threshold, curvature threshold, and all subsequent steps are implemented with PCL (Point Cloud Library).
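As an illustration of the curvature-seeded region-growing logic described above, here is a minimal NumPy sketch (the paper uses PCL's implementation; the neighborhood size and thresholds below are hypothetical placeholders):

```python
import numpy as np

def knn(points, k):
    """Brute-force k-nearest-neighbour indices (row i lists i's neighbours)."""
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=2)
    return np.argsort(d, axis=1)[:, :k]

def normals_and_curvature(points, k=10):
    """PCA normal (smallest eigenvector) and surface-variation curvature."""
    idx = knn(points, k)
    normals = np.zeros_like(points)
    curv = np.zeros(len(points))
    for i, nb in enumerate(idx):
        w, v = np.linalg.eigh(np.cov(points[nb].T))
        normals[i] = v[:, 0]                  # eigenvalues ascend: v[:,0] is the normal
        curv[i] = w[0] / max(w.sum(), 1e-12)  # flat surfaces -> curvature near 0
    return normals, curv

def region_growing(points, normals, curv, k=10,
                   angle_thresh=np.deg2rad(10.0), curv_thresh=0.05):
    """Grow regions from the lowest-curvature seeds, PCL RegionGrowing style."""
    idx = knn(points, k)
    labels = -np.ones(len(points), dtype=int)
    region = 0
    for seed in np.argsort(curv):             # flattest points first
        if labels[seed] != -1:
            continue
        labels[seed] = region
        stack = [seed]
        while stack:
            p = stack.pop()
            for q in idx[p]:
                if labels[q] != -1:
                    continue
                cosang = np.clip(abs(normals[p] @ normals[q]), 0.0, 1.0)
                if np.arccos(cosang) < angle_thresh:   # smoothness test
                    labels[q] = region
                    if curv[q] < curv_thresh:          # flat points keep growing
                        stack.append(q)
        region += 1
    return labels
```

On two well-separated planar patches, every point of one patch shares one label and the two patches receive different labels, which is exactly the block-wise separation the interpolation step relies on.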
For the subsequent intensity interpolation, we refer to the tool developed by Weinmann, which searches the other two channels' point clouds within a 1 m radius of each point and assigns the found intensity values [49]; when a channel has no point within the radius, an intensity of 0 is assigned, so that every point ends up with two additional intensity values. Ekhtari instead used a large search radius to exclude the case in which the corresponding channel has no point [50]. Building on these methods, this study searches the other channels for the ten points closest to each point rather than using a fixed search radius. This effectively avoids zero intensity values, and the intensity is computed as a distance-weighted average over the found points. Lastly, the interpolation is performed for both of the other two channels, and the three resulting multispectral point clouds are merged to increase point density and reduce interpolation error.
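The distance-weighted k-nearest-neighbour assignment can be sketched as follows (a simplified stand-in for the actual preprocessing tool; k = 10 follows the text, while the inverse-distance power is an assumption):

```python
import numpy as np

def knn_idw_intensity(targets, source_pts, source_intensity, k=10, power=2.0):
    """For each target point, take the k nearest points of another channel and
    return their inverse-distance-weighted mean intensity, so no point is ever
    left with a zero intensity as in the fixed-radius approach."""
    out = np.empty(len(targets))
    for i, p in enumerate(targets):
        d = np.linalg.norm(source_pts - p, axis=1)
        nb = np.argsort(d)[:k]
        w = 1.0 / (d[nb] ** power + 1e-9)   # small epsilon guards coincident points
        out[i] = w @ source_intensity[nb] / w.sum()
    return out
```

Because the estimate is a weighted mean of k real neighbours, the result always lies within the range of the source intensities, unlike the zero-fill fallback of the radius search.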

2.2.2. Spectral Index Analysis Based on Land Cover Classification

One difficult task in urban areas is accurately distinguishing complex ground object categories merely from the intensity and geometric features of single-wavelength lidar point cloud data. Many studies have exploited the spectral advantages of the Titan data, whether in the spectral dimension itself or through index construction, demonstrating the benefit of the dataset's spectral content for point cloud classification [29,30,50,54,55,56,57].
For typical urban feature categories, spectral indices are strongly indicative compared with other feature sets: they can effectively detect and distinguish the corresponding features, and analyzing the classification results also helps reveal how spectral characteristics relate to ground object classes. To choose suitable bands for index construction, we first determine the two bands that contribute most to the classification: we classify the point cloud using pairwise band combinations together with the geometric information, and select the two most influential bands from the three pairwise results and the result using all spectral and geometric information. We then select six groups of spectral features (Table 2) suited to distinguishing vegetation, buildings, vehicles, wires, and other ground objects, according to the characteristics of the study area. After fixing the two bands, we analyze the contribution of each spectral feature according to the classification results.
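As a concrete example of index construction, a normalized-difference index between two channel intensities can be computed per point as below (the six actual indices of Table 2 are not reproduced here; the channel pairing follows the 1064/532 nm choice in the text, and the sample intensities are hypothetical):

```python
import numpy as np

def normalized_difference(band_a, band_b, eps=1e-9):
    """(a - b) / (a + b), an NDVI-style index in [-1, 1]; eps avoids 0/0."""
    band_a = np.asarray(band_a, dtype=float)
    band_b = np.asarray(band_b, dtype=float)
    return (band_a - band_b) / (band_a + band_b + eps)

# hypothetical per-point intensities for the 1064 nm and 532 nm channels
i1064 = np.array([120.0, 30.0, 80.0])
i532 = np.array([40.0, 90.0, 80.0])
ndvi_like = normalized_difference(i1064, i532)  # high for vegetation-like points
```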
To analyze the contribution of each spectral feature, we compare classification results obtained with all spectral indices, with no spectral index, and with each single index in turn, and we likewise analyze how the different bands and spectral indices affect the final classification accuracy.
For accuracy verification, a confusion matrix is created, from which the overall accuracy (OA) and the kappa statistic are calculated. Overall accuracy is the ratio of correctly classified points to the total number of points; the kappa statistic represents the proportion of error reduced by the classification compared with a completely random classification. Here, $i$ indexes the ground object categories, $a_i$ is the number of correctly classified samples of category $i$, $x_i$ and $y_i$ are the numbers of samples of category $i$ in the ground truth and the predicted labels, respectively, $N$ is the total number of samples, and $C$ is the number of categories:

$$OA = \frac{\sum_{i=1}^{C} a_i}{N}$$

$$Kappa = \frac{N \sum_{i=1}^{C} a_i - \sum_{i=1}^{C} x_i y_i}{N^2 - \sum_{i=1}^{C} x_i y_i}$$
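Both formulas map directly onto a confusion matrix; a small sketch:

```python
import numpy as np

def oa_kappa(conf):
    """Overall accuracy and kappa from a C x C confusion matrix
    (rows = ground-truth category counts x_i, columns = predictions y_i)."""
    conf = np.asarray(conf, dtype=float)
    n = conf.sum()
    correct = np.trace(conf)                            # sum of a_i
    oa = correct / n
    marg = (conf.sum(axis=1) * conf.sum(axis=0)).sum()  # sum of x_i * y_i
    kappa = (n * correct - marg) / (n**2 - marg)
    return oa, kappa
```

A perfect classifier yields OA = kappa = 1, while a classifier indistinguishable from chance yields kappa = 0 even when OA is nonzero, which is why kappa is reported alongside OA.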

2.2.3. Single-Wood Extraction Based on Point Cloud Density Homogenization

After classification, the tree points are segmented to obtain individual urban trees. Mature single-wood extraction methods using airborne lidar already exist: a local-maximum algorithm identifies crown tops and a region-growing algorithm extracts the crowns [64]; a point-casting algorithm, empirically based tree-shape assumptions, and canopy edge detection have been combined [65]; and k-means clustering has been used to segment individual trees from the point cloud [66], as have algorithms developed for small-footprint discrete-return airborne lidar. Unlike forest trees, urban trees are mostly planted, sparsely distributed, and fewer in number, so we adopt a single-wood segmentation method based on point clustering. Owing to the scanning mode of the airborne lidar, the point spacing is about 0.15 m while the line spacing is irregular (roughly 0.45 m wide by 1.1 m long), so some single-wood segmentation results are linear artifacts. We therefore voxel down-sample the point cloud before segmentation: an octree down-sampling method reduces the point density to one-fifth, keeping in each voxel the point closest to the voxel center. The characteristics of the original point set are preserved as much as possible while the impact of the uneven point distribution is reduced (Figure 5). The segmentation result after down-sampling is close to the real trees.
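A minimal version of the center-nearest voxel down-sampling step can be written as follows (the paper uses an octree that reduces density to one-fifth; the flat voxel grid and voxel size here are simplifying assumptions):

```python
import numpy as np

def voxel_downsample_nearest(points, voxel_size=1.0):
    """Keep, in every occupied voxel, the single point closest to the voxel
    centre, preserving the original geometry as much as possible."""
    points = np.asarray(points, dtype=float)
    keys = np.floor(points / voxel_size).astype(np.int64)   # voxel index per point
    centres = (keys + 0.5) * voxel_size
    dist = np.linalg.norm(points - centres, axis=1)
    order = np.argsort(dist)                      # nearest-to-centre first
    _, first = np.unique(keys[order], axis=0, return_index=True)
    keep = np.sort(order[first])                  # one survivor per voxel
    return points[keep]
```

Keeping the point nearest the voxel centre (rather than averaging) means every output point is a real measured return, so the intensity attributes attached to it stay valid.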

3. Results

3.1. Effect Evaluation of Improved Point Cloud Interpolation Method

After segmenting the three groups of single-wavelength point clouds, we obtain many individual object point clouds, number them, interpolate the intensity within each object, and fuse all interpolated data into the final multispectral point cloud. The false-color 3D display (Figure 6a,b) shows intuitively that, without segmentation, the intensities obtained at object edges are contaminated by nearby ground objects.
We are convinced that interpolating within segmented objects largely eliminates the influence of adjacent ground objects. However, although the influence of foreign features is eliminated, errors between similar features at different positions remain. This study therefore focuses on single-object interpolation: considering the errors caused by the laser incidence angle and multiple-echo intensities, we weight the searched points by distance and combine the weighted intensities of multiple neighbors, which also meets our accuracy requirement for the spectral information.
To verify the effectiveness of the method, we divide the whole area into four smaller areas along the streets, so that the same features fall into the same area, and test the method in each of them (Figure 7). We use the random forest method to classify according to the spectral information of the point clouds, comparing the directly interpolated multispectral point cloud with the one interpolated after segmentation. Under the same conditions, the classification results of the segmentation-before-interpolation data improve in all areas: area (d) shows the highest improvement (3.9%) and area (c) the lowest (3.46%) (Table 3). We attribute this to areas (b) and (d) containing more buildings and trees; segmentation separates such large features more easily, so our method optimizes more points.

3.2. Classification Result Based on Spectral Index Construction

The experiment uses the k-means and random forest classifiers to divide the point cloud data into eight categories: roads, low vegetation, high vegetation, cars, bare land, houses, non-residential buildings, and wires. We use only a small training sample (about 1.1% of the total data) to train the random forest classifier. The total area contains 13,977,391 points across the 8 categories. We randomly select 20,000 samples of each category, giving 160,000 training samples in total; the remaining 13,817,391 points are used as testing samples.
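The per-class training/testing split described above can be sketched as follows (20,000 points per class in the paper; the class sizes and seed here are illustrative):

```python
import numpy as np

def stratified_split(labels, n_per_class, seed=0):
    """Randomly draw n_per_class training indices from every category;
    everything else becomes the testing set."""
    rng = np.random.default_rng(seed)
    train = []
    for c in np.unique(labels):
        idx = np.flatnonzero(labels == c)
        train.append(rng.choice(idx, size=min(n_per_class, idx.size), replace=False))
    train = np.concatenate(train)
    test = np.setdiff1d(np.arange(labels.size), train)   # disjoint remainder
    return train, test
```

Sampling the same count from every category keeps the training set balanced even though the eight land cover classes are very unevenly represented in the full scene.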
A difference vegetation index is constructed by combining the two channels with the best classification results, the 1064 and 532 nm channels (Figure 8, Table 4).
In the follow-up experiments, we construct six groups of vegetation indices from the spectral information of the 1064 nm and 532 nm channels for classification, using the k-means (Table 5) and random forest (Table 6) classifiers. Each experiment contains eight groups, and every group includes the geometric information and the spectral information of the 1064 and 532 nm bands. The first group adds the spectral information of the two channels only, the second through seventh groups each add one spectral index, and the last group adds all the spectral indices. From the classification accuracies we can assess the contribution of each of the six spectral indices to the final result. According to the experiment that adds all the spectral indices, the overall accuracy can be increased by 23% (Figure 9).
From the experiments that each add a single spectral index, the contributions of the six spectral indices can be ranked as follows (Table 7).

3.3. Single-Wood Extraction Results

We selected the set of experimental results with the best classification of high vegetation and obtained tree point clouds with a classification accuracy of 92%. For the accuracy assessment, we compared the average of five single-wood extraction runs with the number of field-surveyed trees. Using the segmentation algorithm without down-sampling, the single trees in the entire area are divided into 377 trees; after segmentation, each tree is randomly rendered with a color (colors are not repeated within 20 m). This count is higher than the true value of 308 trees obtained from the hyperspectral images. The segmentation results indicate that the majority of urban single trees can be effectively extracted, while a few clustered trees are under-segmented. In addition, because of Titan's actual scanning pattern, the distance between some scan lines differs, so over-segmentation of trees appears in the left-center of the region (Figure 10b). After point cloud down-sampling, the segmentation result of 329 trees is close to the true value, indicating the effectiveness of the proposed method for urban tree extraction (Figure 10c). We also ran single-wood segmentation on point cloud data without the preprocessing stage: interpolating the intensity without segmentation yields inaccurate spectral information and hence lower classification accuracy, so additional points are misclassified as trees and the extracted tree count is much higher than the real one. In Table 8, we compare the single-tree segmentation results of point clouds with and without down-sampling, using the actual number of trees divided by the total number of segmented trees as the accuracy measure.
Because the geometry of the point cloud is optimized, the down-sampled point cloud yields more accurate segmentation results. In addition, the combination of a passive spectrometer and single-channel lidar is another solution. We combined the hyperspectral data provided with the Titan campaign with the point cloud data to obtain hyperspectral point cloud data, whose spectral coverage is 374.4–1047.1 nm with a spectral resolution of 15 nm. To compare multispectral lidar data against single-band lidar data and against the passive spectrometer plus single-channel lidar combination for urban point cloud classification and single-tree extraction, we classified the three single-band lidar datasets provided by Titan and the hyperspectral point cloud data, then segmented the resulting tree point clouds. The results show that the single-tree extraction ability of multispectral lidar data is higher than that of single-band lidar data, while the hyperspectral point cloud data show capabilities similar to the multispectral point cloud data (Figure 11, Table 8).

4. Discussion

The point cloud data provided by the Titan multispectral lidar contain extensive geometric and spectral information. Through in-depth mining of spectral information, the land cover classification ability of point cloud data has been substantially improved. We can effectively obtain different categories of point clouds in complex urban environments and provide effective support for subsequent tree detection.
Building on single-wavelength lidar point cloud interpolation, we first eliminate points with anomalous spectral or geometric positions. Segmenting the point cloud then effectively removes the interference of adjacent features during interpolation. We also tried other interpolation methods and finally chose inverse distance weighting; replacing the fixed search radius with a nearest-neighbor search leaves no point without an intensity value and reduces the intensity errors caused by the laser incidence angle and multiple echoes, further meeting our accuracy requirement for the spectral information [49,50].
When constructing the spectral indices, we select only six spectral features, considering the target object categories and features, the band limitations of some spectral indices, and the influence of data redundancy. In subsequent research, we will consider active-passive fusion to increase the number of spectral bands of the point cloud and obtain hyperspectral point clouds, and we will further study feature construction for point clouds. The results show that the highest classification accuracy is obtained using the geometric, spectral, and spectral index information of the multispectral lidar together. The contribution of the spectral indices constructed from the 1064 and 532 nm channels is mainly reflected in the distinction between vegetation and non-vegetation, which the classification results confirm. Moreover, the number of training samples is large enough, and the manually selected sample labels have high accuracy. Through intensity interpolation, we improve the accuracy of the spectral dimension by exploiting the information advantage of the spatial dimension, which reduces sample noise. Additionally, the parameters are carefully selected and few in number. These factors together keep our results from overfitting. The accuracy of tree point cloud extraction reaches 92%. The vegetation index constructed from the 1064 and 532 nm channels is close to the normalized green vegetation index, indicating its satisfactory vegetation discrimination ability. Compared with traditional single-wavelength lidar and hyperspectral sensors, multispectral lidar has shown advantages in ground object classification [56].
For urban single-wood segmentation, considering the distribution and categories of urban trees, we chose a segmentation method based on point clustering. Although the proposed preprocessing method effectively improves the single-wood extraction accuracy, some point clouds still exhibited abnormal intensities and segmentation results, so we also optimized the spatial dimension by trying different down-sampling methods. We finally selected octree down-sampling, which preserves the spatial characteristics of the point cloud to the greatest extent [67]; the over-segmentation caused by the airborne line scanning was thereby resolved, and in Figure 9 we have circled the areas where the effect is most obvious. We also compared single-channel lidar data and hyperspectral point cloud data (a passive spectrometer combined with single-channel lidar) for urban point cloud classification and single-tree extraction. Single-band lidar performs worse than multispectral lidar data in this respect: when extracting tree point clouds by classification, single-channel lidar has difficulty extracting trees accurately, with the 532 nm channel performing slightly better. The hyperspectral point cloud data show capabilities similar to the multispectral point cloud data, although in theory their accuracy in acquiring tree point clouds should be higher; we attribute the gap to inaccurate registration during active-passive data fusion.

5. Conclusions

This paper studies the optimization of multispectral lidar point cloud data in the spatial and spectral dimensions to improve the accuracy of urban point cloud classification and urban single-wood extraction.
We propose an urban tree extraction method suitable for multispectral lidar point cloud data. By comparing the results before and after each processing step, we reach the following conclusions. Performing segmentation before interpolation ensures that all points used for interpolation belong to the same object, which eliminates boundary errors and improves the final classification result by 3.7%. The results of the k-means and random forest classifiers show that the maximum classification accuracy is achieved when geometric information, spectral information, and the six groups of spectral features are used together, an increase of about 18.28%. Once the tree point cloud is obtained, the existing single-wood segmentation algorithm can effectively extract single trees and provide effective data support for follow-up studies on urban biomass and carbon storage. In the process of urban single-wood segmentation, considering the distribution and categories of urban single trees, we chose a single-wood segmentation method based on point clustering. Nevertheless, some point clouds continue to exhibit abnormal intensity and segmentation results, although the proposed preprocessing method effectively improves single-wood extraction accuracy. The assessment of lidar-extracted single wood should also focus on the spatial position of each tree, in addition to other important indicators. Because we lack accurate field measurement data, the results section only discusses the number of single trees after segmentation, since extracting a complex scene from multispectral lidar point cloud data first requires a feasibility validation. In a subsequent experiment, we will select an appropriate area to verify the spatial location accuracy of single-wood extraction.
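The point-clustering idea behind the single-wood segmentation can be illustrated with a minimal Euclidean clustering sketch: two points belong to the same tree if they are linked by a chain of neighbors within a distance threshold. The 1.5 m radius and the synthetic crowns are illustrative assumptions, not the paper's parameters:

```python
import numpy as np
from collections import deque

def euclidean_cluster(points, radius=1.5):
    """Group points into connected components over a fixed distance
    threshold (a simple stand-in for tree-crown point clustering).
    Returns one integer label per point."""
    points = np.asarray(points, dtype=float)
    labels = np.full(len(points), -1, dtype=int)
    cluster = 0
    for seed in range(len(points)):
        if labels[seed] != -1:
            continue                              # already assigned
        queue = deque([seed])
        labels[seed] = cluster
        while queue:                              # breadth-first flood fill
            i = queue.popleft()
            d = np.linalg.norm(points - points[i], axis=1)
            for j in np.where((d < radius) & (labels == -1))[0]:
                labels[j] = cluster
                queue.append(j)
        cluster += 1
    return labels

# Two synthetic "crowns" 10 m apart separate into two clusters.
rng = np.random.default_rng(1)
crown_a = rng.normal([0, 0, 5], 0.5, size=(100, 3))
crown_b = rng.normal([10, 0, 5], 0.5, size=(100, 3))
labels = euclidean_cluster(np.vstack([crown_a, crown_b]))
print(len(set(labels)))  # number of detected trees
```

On real scan-line data, large gaps between lines can split one crown into several components, which is why the down-sampling step above matters before clustering.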

Author Contributions

Conceptualization, Shuo Shi and Bowen Chen; Methodology, Shuo Shi and Biwu Chen; Software, Xingtao Tang; Validation, Xingtao Tang; Data curation, Qian Xu; Writing—original draft, Xingtao Tang; Writing—review & editing, Sifu Bi; Supervision, Wei Gong; Project administration, Shuo Shi. All authors have read and agreed to the published version of the manuscript.

Funding

This work is supported by the National Natural Science Foundation of China (41971307), Fundamental Research Funds for the Central Universities (Grant No. 2042022kf1200), Wuhan University Specific Fund for Major School-level Internationalization Initiatives, and LIESMARS Special Research Funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

No new data were created or analyzed in this study. Data sharing is not applicable to this article.

Acknowledgments

The authors would like to thank the anonymous reviewers and handling editors for their constructive comments. The authors also thank the Hyperspectral Image Analysis Lab at the University of Houston for providing the original Optech Titan data.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Carter, J.G.; Handley, J.; Butlin, T.; Gill, S. Adapting cities to climate change–exploring the flood risk management role of green infrastructure landscapes. J. Environ. Plan. Manag. 2018, 61, 1535–1552. [Google Scholar] [CrossRef] [Green Version]
  2. Lutz, W. How population growth relates to climate change. Proc. Natl. Acad. Sci. USA 2017, 114, 12103–12105. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  3. Donovan, G.H.; Butry, D.T. Trees in the city: Valuing street trees in Portland, Oregon. Landsc. Urban Plan. 2010, 94, 77–83. [Google Scholar] [CrossRef]
  4. Roy, S.; Byrne, J.; Pickering, C. A systematic quantitative review of urban tree benefits, costs, and assessment methods across cities in different climatic zones. Urban For. Urban Green. 2012, 11, 351–363. [Google Scholar] [CrossRef] [Green Version]
  5. Van den Berg, M.; Wendel-Vos, W.; van Poppel, M.; Kemper, H.; van Mechelen, W.; Maas, J. Health benefits of green spaces in the living environment: A systematic review of epidemiological studies. Urban For. Urban Green. 2015, 14, 806–816. [Google Scholar] [CrossRef]
  6. Yan, W.Y.; Shaker, A.; El-Ashmawy, N. Urban land cover classification using airborne LiDAR data: A review. Remote Sens. Environ. 2015, 158, 295–310. [Google Scholar] [CrossRef]
  7. Chen, B.; Shi, S.; Gong, W.; Xu, Q.; Tang, X.; Bi, S.; Chen, B. Wavelength selection of dual-mechanism LiDAR with reflection and fluorescence spectra for plant detection. Opt. Express 2023, 31, 3660–3675. [Google Scholar] [CrossRef]
  8. Xu, L.; Shi, S.; Gong, W.; Shi, Z.; Qu, F.; Tang, X.; Chen, B.; Sun, J. Improving leaf chlorophyll content estimation through constrained PROSAIL model from airborne hyperspectral and LiDAR data. Int. J. Appl. Earth Obs. Geoinf. 2022, 115, 103128. [Google Scholar] [CrossRef]
  9. Chen, B.; Shi, S.; Gong, W.; Sun, J.; Chen, B.; Du, L.; Yang, J.; Guo, K.; Zhao, X. True-color three-dimensional imaging and target classification based on hyperspectral LiDAR. Remote Sens. 2019, 11, 1541. [Google Scholar] [CrossRef] [Green Version]
  10. Chen, B.; Shi, S.; Sun, J.; Chen, B.; Guo, K.; Du, L.; Yang, J.; Xu, Q.; Song, S.; Gong, W. Using HSI color space to improve the multispectral lidar classification error caused by measurement geometry. IEEE Trans. Geosci. Remote Sens. 2020, 59, 3567–3579. [Google Scholar] [CrossRef]
  11. Sun, J.; Shi, S.; Wang, L.; Li, H.; Wang, S.; Gong, W.; Tagesson, T. Optimizing LUT-based inversion of leaf chlorophyll from hyperspectral lidar data: Role of cost functions and regulation strategies. Int. J. Appl. Earth Obs. Geoinf. 2021, 105, 102602. [Google Scholar] [CrossRef]
  12. Sun, J.; Shi, S.; Yang, J.; Chen, B.; Gong, W.; Du, L.; Mao, F.; Song, S. Estimating leaf chlorophyll status using hyperspectral lidar measurements by PROSPECT model inversion. Remote Sens. Environ. 2018, 212, 1–7. [Google Scholar] [CrossRef]
  13. Chen, Y.; Räikkönen, E.; Kaasalainen, S.; Suomalainen, J.; Hakala, T.; Hyyppä, J.; Chen, R. Two-channel hyperspectral LiDAR with a supercontinuum laser source. Sensors 2010, 10, 7057–7066. [Google Scholar] [CrossRef] [Green Version]
  14. Rall, J.A.; Knox, R.G. Spectral ratio biospheric lidar. In Proceedings of the IGARSS 2004 IEEE International Geoscience and Remote Sensing Symposium, Anchorage, AK, USA, 20–24 September 2004; pp. 1951–1954. [Google Scholar]
  15. Douglas, E.; Martel, J.; Cook, T.; Mendill, C.; Marshall, R.; Chakrabarti, S.; Strahler, A.; Schaaf, C.; Woodcock, C.; Liu, Z. A dual-wavelength echidna lidar for Ground-based Forest scanning. In Proceedings of the 2012 IEEE International Geoscience and Remote Sensing Symposium, Munich, Germany, 22–27 July 2012; pp. 4998–5001. [Google Scholar]
  16. Gaulton, R.; Danson, F.; Ramirez, F.; Gunawan, O. The potential of dual-wavelength laser scanning for estimating vegetation moisture content. Remote Sens. Environ. 2013, 132, 32–39. [Google Scholar] [CrossRef]
  17. Du, L.; Gong, W.; Shi, S.; Yang, J.; Sun, J.; Zhu, B.; Song, S. Estimation of rice leaf nitrogen contents based on hyperspectral LIDAR. Int. J. Appl. Earth Obs. Geoinf. 2016, 44, 136–143. [Google Scholar] [CrossRef]
  18. Chen, B.; Shi, S.; Gong, W.; Zhang, Q.; Yang, J.; Du, L.; Sun, J.; Zhang, Z.; Song, S. Multispectral LiDAR point cloud classification: A two-step approach. Remote Sens. 2017, 9, 373. [Google Scholar] [CrossRef] [Green Version]
  19. Gong, W.; Sun, J.; Shi, S.; Yang, J.; Du, L.; Zhu, B.; Song, S. Investigating the potential of using the spatial and spectral information of multispectral LiDAR for object classification. Sensors 2015, 15, 21989–22002. [Google Scholar] [CrossRef] [Green Version]
  20. Wei, G.; Shalei, S.; Bo, Z.; Shuo, S.; Faquan, L.; Xuewu, C. Multi-wavelength canopy LiDAR for remote sensing of vegetation: Design and system performance. ISPRS J. Photogramm. Remote Sens. 2012, 69, 1–9. [Google Scholar] [CrossRef]
  21. Shi, S.; Bi, S.; Gong, W.; Chen, B.; Chen, B.; Tang, X.; Qu, F.; Song, S. Land Cover Classification with Multispectral LiDAR Based on Multi-Scale Spatial and Spectral Feature Selection. Remote Sens. 2021, 13, 4118. [Google Scholar] [CrossRef]
  22. Shi, S.; Xu, L.; Gong, W.; Chen, B.; Chen, B.; Qu, F.; Tang, X.; Sun, J.; Yang, J. A convolution neural network for forest leaf chlorophyll and carotenoid estimation using hyperspectral reflectance. Int. J. Appl. Earth Obs. Geoinf. 2022, 108, 102719. [Google Scholar] [CrossRef]
  23. Luo, B.; Yang, J.; Song, S.; Shi, S.; Gong, W.; Wang, A.; Du, L. Target classification of similar spatial characteristics in complex urban areas by using multispectral LiDAR. Remote Sens. 2022, 14, 238. [Google Scholar] [CrossRef]
  24. Wang, B.; Song, S.; Shi, S.; Chen, Z.; Li, F.; Wu, D.; Liu, D.; Gong, W. Multichannel Interconnection Decomposition for Hyperspectral LiDAR Waveforms Detected From Over 500 m. IEEE Trans. Geosci. Remote Sens. 2021, 60, 1–14. [Google Scholar] [CrossRef]
  25. Sun, J.; Shi, S.; Yang, J.; Gong, W.; Qiu, F.; Wang, L.; Du, L.; Chen, B. Wavelength selection of the multispectral lidar system for estimating leaf chlorophyll and water contents through the PROSPECT model. Agric. For. Meteorol. 2019, 266, 43–52. [Google Scholar] [CrossRef]
  26. Fernandez-Diaz, J.C.; Carter, W.E.; Glennie, C.; Shrestha, R.L.; Pan, Z.; Ekhtari, N.; Singhania, A.; Hauser, D.; Sartori, M. Capability assessment and performance metrics for the Titan multispectral mapping lidar. Remote Sens. 2016, 8, 936. [Google Scholar] [CrossRef] [Green Version]
  27. Yu, Y.; Guan, H.; Li, D.; Gu, T.; Wang, L.; Ma, L.; Li, J. A hybrid capsule network for land cover classification using multispectral LiDAR data. IEEE Geosci. Remote Sens. Lett. 2019, 17, 1263–1267. [Google Scholar] [CrossRef]
  28. Xiaoliang, Z.; Guihua, Z.; Jonathan, L.; Yuanxi, Y.; Yong, F. 3D land cover classification based on multispectral lidar point clouds. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2016, 41, 741–747. [Google Scholar]
  29. Morsy, S.; Shaker, A.; El-Rabbany, A. Evaluation of distinctive features for land/water classification from multispectral airborne LiDAR data at Lake Ontario. In Proceedings of the 10th International Conference on Mobile Mapping Technology (MMT), Shenzhen, China, 6–8 May 2019; pp. 6–8. [Google Scholar]
  30. Morsy, S.; Shaker, A.; El-Rabbany, A. Clustering of Multispectral Airborne Laser Scanning Data Using Gaussian Decomposition. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2017, 42, 269–276. [Google Scholar] [CrossRef] [Green Version]
  31. Huo, L.-Z.; Silva, C.A.; Klauberg, C.; Mohan, M.; Zhao, L.-J.; Tang, P.; Hudak, A.T. Supervised spatial classification of multispectral LiDAR data in urban areas. PLoS ONE 2018, 13, e0206185. [Google Scholar] [CrossRef]
  32. Axelsson, A.; Lindberg, E.; Olsson, H. Multispectral ALS data for tree species classification. In Proceedings of the 20th EGU General Assembly, EGU2018, Vienna, Austria, 11 April 2018; p. 996. [Google Scholar]
  33. Budei, B.C.; St-Onge, B.; Hopkinson, C.; Audet, F.-A. Identifying the genus or species of individual trees using a three-wavelength airborne lidar system. Remote Sens. Environ. 2018, 204, 632–647. [Google Scholar] [CrossRef]
  34. Dai, W.; Yang, B.; Dong, Z.; Shaker, A. A new method for 3D individual tree extraction using multispectral airborne LiDAR point clouds. ISPRS J. Photogramm. Remote Sens. 2018, 144, 400–411. [Google Scholar] [CrossRef]
  35. Kukkonen, M.; Maltamo, M.; Korhonen, L.; Packalen, P. Multispectral airborne LiDAR data in the prediction of boreal tree species composition. IEEE Trans. Geosci. Remote Sens. 2019, 57, 3462–3471. [Google Scholar] [CrossRef]
  36. Dalponte, M.; Bruzzone, L.; Gianelle, D. A system for the estimation of single-tree stem diameter and volume using multireturn LiDAR data. IEEE Trans. Geosci. Remote Sens. 2011, 49, 2479–2490. [Google Scholar] [CrossRef]
  37. Yan, W.; Guan, H.; Cao, L.; Yu, Y.; Gao, S.; Lu, J. An automated hierarchical approach for three-dimensional segmentation of single trees using UAV LiDAR data. Remote Sens. 2018, 10, 1999. [Google Scholar] [CrossRef] [Green Version]
  38. Yao, W.; Krzystek, P.; Heurich, M. Tree species classification and estimation of stem volume and DBH based on single tree extraction by exploiting airborne full-waveform LiDAR data. Remote Sens. Environ. 2012, 123, 368–380. [Google Scholar] [CrossRef]
  39. Gupta, S.; Weinacker, H.; Koch, B. Comparative analysis of clustering-based approaches for 3-D single tree detection using airborne fullwave lidar data. Remote Sens. 2010, 2, 968–989. [Google Scholar] [CrossRef] [Green Version]
  40. Chen, X.; Chengming, Y.; Li, J.; Chapman, M.A. Quantifying the carbon storage in urban trees using multispectral ALS data. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2018, 11, 3358–3365. [Google Scholar] [CrossRef]
  41. Zhao, K.; Popescu, S.; Nelson, R. Lidar remote sensing of forest biomass: A scale-invariant estimation approach using airborne lasers. Remote Sens. Environ. 2009, 113, 182–196. [Google Scholar] [CrossRef]
  42. Boudreau, J.; Nelson, R.F.; Margolis, H.A.; Beaudoin, A.; Guindon, L.; Kimes, D.S. Regional aboveground forest biomass using airborne and spaceborne LiDAR in Québec. Remote Sens. Environ. 2008, 112, 3876–3890. [Google Scholar] [CrossRef]
  43. Dalponte, M.; Coomes, D.A. Tree-centric mapping of forest carbon density from airborne laser scanning and hyperspectral data. Methods Ecol. Evol. 2016, 7, 1236–1245. [Google Scholar] [CrossRef] [Green Version]
  44. Popescu, S.C. Estimating biomass of individual pine trees using airborne lidar. Biomass Bioenergy 2007, 31, 646–655. [Google Scholar] [CrossRef]
  45. Gleason, C.J.; Im, J. Forest biomass estimation from airborne LiDAR data using machine learning approaches. Remote Sens. Environ. 2012, 125, 80–91. [Google Scholar] [CrossRef]
  46. Popescu, S.C.; Wynne, R.H.; Nelson, R.F. Measuring individual tree crown diameter with lidar and assessing its influence on estimating forest volume and biomass. Can. J. Remote Sens. 2003, 29, 564–577. [Google Scholar] [CrossRef]
  47. Olagoke, A.; Proisy, C.; Féret, J.-B.; Blanchard, E.; Fromard, F.; Mehlig, U.; de Menezes, M.M.; Dos Santos, V.F.; Berger, U. Extended biomass allometric equations for large mangrove trees from terrestrial LiDAR data. Trees 2016, 30, 935–947. [Google Scholar] [CrossRef] [Green Version]
  48. Wilkes, P.; Disney, M.; Vicari, M.B.; Calders, K.; Burt, A. Estimating urban above ground biomass with multi-scale LiDAR. Carbon Balance Manag. 2018, 13, 10. [Google Scholar] [CrossRef]
  49. Weinmann, M.; Jutzi, B.; Mallet, C. Semantic 3D scene interpretation: A framework combining optimal neighborhood size selection with relevant features. ISPRS Ann. Photogramm. Remote Sens. Spat. Inf. Sci. 2014, 2, 181. [Google Scholar] [CrossRef] [Green Version]
  50. Ekhtari, N.; Glennie, C.; Fernandez-Diaz, J.C. Classification of airborne multispectral lidar point clouds for land cover mapping. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2018, 11, 2068–2078. [Google Scholar] [CrossRef]
  51. Zou, B.; Qiu, H.; Lu, Y. Point cloud reduction and denoising based on optimized downsampling and bilateral filtering. IEEE Access 2020, 8, 136316–136326. [Google Scholar] [CrossRef]
  52. Lv, C.; Lin, W.; Zhao, B. Approximate intrinsic voxel structure for point cloud simplification. IEEE Trans. Image Process. 2021, 30, 7241–7255. [Google Scholar] [CrossRef]
  53. Wang, Q.; Zhang, X.; Gu, Y. Spatial-Spectral Smooth Graph Convolutional Network for Multispectral Point Cloud Classification. In Proceedings of the IGARSS 2020–2020 IEEE International Geoscience and Remote Sensing Symposium, Waikoloa, HI, USA, 26 September 2020–2 October 2020; pp. 1062–1065. [Google Scholar]
  54. Wang, Y.; Gu, Y. Multispectral-lidar data fusion via multiple kernel learning for remote sensing classification. In Proceedings of the 2018 9th Workshop on Hyperspectral Image and Signal Processing: Evolution in Remote Sensing (WHISPERS), Amsterdam, The Netherlands, 23–26 September 2018; pp. 1–6. [Google Scholar]
  55. Shaker, A.; Yan, W.Y.; LaRocque, P.E. Automatic land-water classification using multispectral airborne LiDAR data for near-shore and river environments. ISPRS J. Photogramm. Remote Sens. 2019, 152, 94–108. [Google Scholar] [CrossRef]
  56. Teo, T.-A.; Wu, H.-M. Analysis of land cover classification using multi-wavelength LiDAR system. Appl. Sci. 2017, 7, 663. [Google Scholar] [CrossRef] [Green Version]
  57. Bakuła, K.; Kupidura, P.; Jełowicki, Ł. Testing of land cover classification from multispectral airborne laser scanning data. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2016, 41, 161–169. [Google Scholar] [CrossRef] [Green Version]
  58. Zhang, Y.; Yang, J.; Liu, X.; Du, L.; Shi, S.; Sun, J.; Chen, B. Estimation of multi-species leaf area index based on Chinese GF-1 satellite data using look-up table and gaussian process regression methods. Sensors 2020, 20, 2460. [Google Scholar] [CrossRef]
  59. Rousel, J.; Haas, R.; Schell, J.; Deering, D. Monitoring vegetation systems in the great plains with ERTS. In Proceedings of the Third Earth Resources Technology Satellite—1 Symposium, Washington, DC, USA, 10–14 December 1973; NASA SP-351. pp. 309–317. [Google Scholar]
  60. Yang, Z.; Willis, P.; Mueller, R. Impact of band-ratio enhanced AWIFS image to crop classification accuracy. Proc. Pecora 2008, 17, 1–11. [Google Scholar]
  61. Chappelle, E.W.; Kim, M.S.; McMurtrey III, J.E. Ratio analysis of reflectance spectra (RARS): An algorithm for the remote estimation of the concentrations of chlorophyll a, chlorophyll b, and carotenoids in soybean leaves. Remote Sens. Environ. 1992, 39, 239–247. [Google Scholar] [CrossRef]
  62. Huete, A.R. A soil-adjusted vegetation index (SAVI). Remote Sens. Environ. 1988, 25, 295–309. [Google Scholar] [CrossRef]
  63. Jiang, Z.; Huete, A.R.; Didan, K.; Miura, T. Development of a two-band enhanced vegetation index without a blue band. Remote Sens. Environ. 2008, 112, 3833–3845. [Google Scholar] [CrossRef]
  64. Tiede, D.; Hochleitner, G.; Blaschke, T. A full GIS-based workflow for tree identification and tree crown delineation using laser scanning. ISPRS Workshop CMRT 2005, 5, 2930. [Google Scholar]
  65. Koch, B.; Heyder, U.; Weinacker, H. Detection of individual tree crowns in airborne lidar data. Photogramm. Eng. Remote Sens. 2006, 72, 357–363. [Google Scholar] [CrossRef] [Green Version]
  66. Morsdorf, F.; Meier, E.; Allgöwer, B.; Nüesch, D. Clustering in airborne laser scanning raw data for segmentation of single trees. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2003, 34, W13. [Google Scholar]
  67. El-Sayed, E.; Abdel-Kader, R.F.; Nashaat, H.; Marei, M. Plane detection in 3D point cloud using octree-balanced density down-sampling and iterative adaptive plane extraction. IET Image Process. 2018, 12, 1595–1605. [Google Scholar] [CrossRef] [Green Version]
Figure 1. Study area location and RGB images.
Figure 2. (a) The 532 nm channel point cloud intensity; (b) the 1064 nm channel point cloud intensity; (c) the 1550 nm channel point cloud intensity; (d) multispectral lidar point cloud rendered by ground truth value.
Figure 3. Flowchart detailing the methodology.
Figure 4. Single-wavelength point cloud with three different channels (the points of the 1550 nm channel are red, the points of the 1064 nm channel are green, and the points of the 532 nm channel are blue).
Figure 5. (a) Point cloud of trees before down-sampling (the gap between scan lines is too large). (b) Point cloud of trees after down-sampling (less affected by scan lines).
Figure 6. (a) Pseudo-color display of conventional processing methods, (b) pseudo-color display after segmentation, (c) classification results of conventional processing methods, (d) classification results after segmentation, (e) pseudo-color image by merging the three spectral intensity images, and (f) pseudo-color image by merging the three spectral intensity images for 3D representation.
Figure 7. (a) The whole area is divided into four small areas, (b) the classification result of area (b) after segmentation processing, (c) the classification result of area (c) after segmentation processing, (d) the classification result of area (d) after segmentation processing, and (e) the classification result of area (e) after segmentation processing.
Figure 8. (a) The intensity combination classification results for the 1064 nm and 1550 nm channels, (b) the intensity combination classification results for the 1550 nm and 532 nm channels, (c) the intensity combination classification results for the 1064 nm and 532 nm channels.
Figure 9. (a) Classification results based on the intensity of the 1064 nm and 532 nm channels; (b) classification results based on all six spectral indices and the intensity of the 1064 nm and 532 nm channels.
Figure 10. (a) Tree point cloud; (b) single-wood point cloud before down-sampling; (c) single-wood point cloud after down-sampling.
Figure 11. (a) Lidar-detected trees (hyperspectral point cloud data), (b) lidar-detected trees (532 nm channel lidar data), (c) lidar-detected trees (1064 nm channel lidar data), (d) lidar-detected trees (1550 nm channel lidar data).
Table 1. Reference points for the eight classes.
Class | Number of Points
Wire | 790,066
Car | 284,271
Road | 3,329,665
Low Vegetation | 4,824,554
High Vegetation | 2,368,410
Bare Land | 408,167
Housing | 1,697,271
Non-Residential Building | 114,987
Table 2. Spectral exponential framework formula.
Spectral Index | Name | Formula | Reference
DVI | Differential vegetation index | I_532 − I_1064 | [58]
NDVI | Normalized differential vegetation index | (I_1064 − I_532) / (I_1064 + I_532) | [59]
RNDVI | Ratio normalized difference vegetation index | (I_1064^2 − I_532) / (I_1064 + I_532^2) | [60]
SR | Simple ratio index | I_1064 / I_532 | [61]
SAVI | Soil-adjusted vegetation index | (1 + 0.5) × (I_1064 − I_532) / (I_1064 + I_532 + 0.5) | [62]
EVI2 | 2-band enhanced vegetation index | (I_532 − I_1550) / (I_1064 − I_1550 + (6 − 7.5/2.08) × I_1064) | [63]
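Once each point carries interpolated intensities for all three channels, the indices in Table 2 can be computed per point. A minimal NumPy sketch of three of them (the input values are illustrative; SAVI uses the usual soil factor L = 0.5):

```python
import numpy as np

def spectral_indices(i_1064, i_532):
    """Per-point NDVI, SR, and SAVI from the 1064 nm and 532 nm
    channel intensities, following the formulas in Table 2."""
    i_1064 = np.asarray(i_1064, dtype=float)
    i_532 = np.asarray(i_532, dtype=float)
    ndvi = (i_1064 - i_532) / (i_1064 + i_532)        # normalized difference
    sr = i_1064 / i_532                               # simple ratio
    savi = 1.5 * (i_1064 - i_532) / (i_1064 + i_532 + 0.5)  # soil-adjusted
    return ndvi, sr, savi

# Two example points: one vegetation-like, one pavement-like.
ndvi, sr, savi = spectral_indices([0.6, 0.3], [0.1, 0.25])
print(ndvi.round(3), sr.round(3), savi.round(3))
```

Computed this way, the index values become additional per-point features that are stacked alongside the geometric and raw spectral features for classification.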
Table 3. Classification results of the improved point cloud interpolation method using the random forest classifier.
 | Area (b) | Area (c) | Area (d) | Area (e)
Classification accuracy without segmentation | OA: 76.99, Kappa: 0.57 | OA: 73.33, Kappa: 0.55 | OA: 76.49, Kappa: 0.57 | OA: 80.43, Kappa: 0.60
Classification accuracy with segmentation | OA: 80.25, Kappa: 0.60 | OA: 74.97, Kappa: 0.56 | OA: 80.42, Kappa: 0.60 | OA: 82.11, Kappa: 0.61
Table 4. Classification results by the random forest classifier.
 | 1064 nm and 1550 nm | 1550 nm and 532 nm | 1064 nm and 532 nm
Total accuracy | OA: 62.22, Kappa: 0.48 | OA: 63.91, Kappa: 0.50 | OA: 66.27, Kappa: 0.52
Table 5. Classification results using the k-means classifier.
 | Spectral Information | Spectral + DVI | Spectral + NDVI | Spectral + RNDVI | Spectral + SR | Spectral + SAVI | Spectral + EVI2 | Spectral + All Index
OA | 80.91 | 79.86 | 73.55 | 78.88 | 69.54 | 72.27 | 68.04 | 82.36
Kappa | 0.68 | 0.70 | 0.61 | 0.68 | 0.55 | 0.60 | 0.54 | 0.69
Table 6. Classification results using the random forest classifier.
 | Spectral Information | Spectral + DVI | Spectral + NDVI | Spectral + RNDVI | Spectral + SR | Spectral + SAVI | Spectral + EVI2 | Spectral + All Index
OA | 82.22 | 80.73 | 73.29 | 78.97 | 72.27 | 73.09 | 69.45 | 85.12
Kappa | 0.70 | 0.70 | 0.62 | 0.68 | 0.58 | 0.62 | 0.54 | 0.72
Table 7. The contributions of the spectral indexes.
Spectral Indices | Contribution Rate | Cumulative Contribution Rate
DVI | 25.02% | 64.79%
RNDVI | 4.89% | 88.09%
EVI2 | 4.76% | 92.85%
NDVI | 2.79% | 95.64%
SAVI | 2.35% | 97.99%
SR | 1.94% | 99.93%
Table 8. Single-wood segmentation accuracy evaluation.
 | Number | Accuracy
No. of Field-Surveyed Trees | 308 | 
No. of Lidar-Detected Trees (unsegmented data) | 404 | 76.2%
No. of Lidar-Detected Trees | 377 | 81.7%
No. of Lidar-Detected Trees (after down-sampling) | 329 | 93.6%
No. of Lidar-Detected Trees (532 nm channel lidar data) | 451 | 68.3%
No. of Lidar-Detected Trees (1064 nm channel lidar data) | 582 | 52.9%
No. of Lidar-Detected Trees (1550 nm channel lidar data) | 551 | 55.9%
No. of Lidar-Detected Trees (hyperspectral point cloud data) | 335 | 91.9%
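The accuracy column in Table 8 is consistent with the ratio of field-surveyed trees to lidar-detected trees, so over-detection lowers the score. A one-line sketch of this reading (our interpretation of the table, not a formula stated in the paper):

```python
def detection_accuracy(n_field, n_detected):
    """Detection accuracy as the field-surveyed count over the
    lidar-detected count, in percent (our reading of Table 8)."""
    return round(100 * n_field / n_detected, 1)

print(detection_accuracy(308, 329))  # 93.6, after down-sampling
print(detection_accuracy(308, 377))  # 81.7, before down-sampling
```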
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
