Article

Integrating Aerial LiDAR and Very-High-Resolution Images for Urban Functional Zone Mapping

1 School of Geomatics and Urban Spatial Informatics, Beijing University of Civil Engineering and Architecture, Beijing 100044, China
2 China Aero Geophysical Survey and Remote Sensing Center for Natural Resources, Beijing 100083, China
* Author to whom correspondence should be addressed.
Remote Sens. 2021, 13(13), 2573; https://doi.org/10.3390/rs13132573
Submission received: 3 June 2021 / Revised: 24 June 2021 / Accepted: 29 June 2021 / Published: 1 July 2021

Abstract

This study presents a new approach for Urban Functional Zone (UFZ) mapping that integrates two-dimensional (2D) Urban Structure Parameters (USPs), three-dimensional (3D) USPs, and the spatial patterns of land covers, and it proceeds in two steps. First, we extracted various features, i.e., spectral, textural, and geometrical features and 3D USPs, from very-high-resolution (VHR) images and light detection and ranging (LiDAR) point clouds. Multi-classifiers (MLCs), i.e., Random Forest, K-Nearest Neighbor, and Linear Discriminant Analysis, were then used to perform land cover mapping with the optimized features. Second, based on the land cover classification results, we extracted 2D and 3D USPs for the different land covers and used the MLCs to classify UFZs. Results for the northern part of Brooklyn, New York, USA, show that the approach yielded excellent UFZ mapping accuracy, with an overall accuracy of 91.9%. Moreover, we demonstrated that 3D USPs considerably improve the classification accuracies of UFZs and land covers, by 6.4% and 3.0%, respectively.


1. Introduction

Urban Functional Zones (UFZs; acronyms used throughout the manuscript are listed in Supplementary Material Table S1) refer to the functional divisions of urban land, e.g., commercial, residential, industrial, and park zones [1]. Different UFZs often feature different architectural environments and are composed of various land covers. However, previous studies have paid far more attention to land cover mapping than to large-scale UFZ classification [1]. As basic spatial units of cities, UFZs are vital for urban planners and managers in urban-related applications, e.g., the investigation of land surface temperatures, landscape patterns, urban planning, and urban ecological modeling [2,3,4,5]. Therefore, the detection of UFZs is a basis for urban management and provides a better understanding of urban spatial structures [6,7].
Very-High-Resolution (VHR) images represent urban surfaces in fine spatial detail, capturing subtle differences in spectral and textural records, and thus can be utilized for UFZ mapping [8,9,10]. Many authors have used VHR data for UFZ classification [11,12,13]. Zhang et al. [11] proposed a Hierarchical Semantic Cognition (HSC) method that establishes four semantic layers, i.e., visual features, object categories, spatial patterns of objects, and zone functions, and used their hierarchical relations to identify UFZs; the HSC method yielded good UFZ mapping accuracy (an overall accuracy of 90.8%). Further, Zhang et al. [1] proposed a top-down feedback method, Inverse Hierarchical Semantic Cognition (IHSC), to optimize the initial HSC results and found that IHSC increased the Overall Accuracy (OA) from 84.0% to 90.5%. More recently, authors have utilized Point of Interest (POI) data for UFZ mapping. For instance, Hu et al. [12] generated parcel information from road networks and integrated Landsat 8 Operational Land Imager images and POI data to classify parcels into eight functional zones (Level I, e.g., residential, commercial, industrial, and institutional areas) and 16 land covers (Level II); the OA of the Level I classification was 81.04%. In addition, Zhou et al. [13] proposed a Super Object-Convolutional Neural Network (SO-CNN) method for UFZ classification. They used POI data to identify four UFZs, i.e., commercial office, urban green, industrial warehouse, and residential zones, in Hangzhou, China, and found that the classification results were refined, with an OA of 91.1%. However, previous studies did not explore the impacts of three-dimensional (3D) urban structure parameters (USPs), e.g., building height (BH) and sky view factor (SVF), on UFZ detection.
It is noteworthy that 3D USPs play distinctive roles in describing urban layouts and constructions [14]. For example, an investigation of the northern part of Brooklyn, New York City, USA, shows that industrial zones are usually located on open ground, resulting in higher SVF values than those of residential and commercial zones (Figure 1a). Industrial zones generally feature low-rise, large buildings and thus often have low BHs (Figure 1b). In addition, different Street Aspect Ratios (SAR) and Floor Area Ratios (FAR) were observed among the functional zones (Figure 1c,d). Hence, it is important to consider 3D USPs for UFZ mapping. Recently, Light Detection And Ranging (LiDAR) technology has gained prominence because it provides a fast and straightforward way to acquire the height information of underlying surfaces [15]; LiDAR can therefore be regarded as a feasible approach to extracting 3D USPs [14]. Note that it is challenging to extract UFZs directly from VHR images because spectral, textural, and geometrical features are effective only for segmenting objects, not for identifying UFZs [11]. As essential elements of UFZs, the components and configurations of land covers exert significant influences on measuring and analyzing UFZs [16]. Thus, it is important to perform land cover classification before the subsequent UFZ mapping.
In this study, a new approach that integrates multiple machine learning algorithms and 3D USPs is introduced for UFZ mapping. The objectives of the study are:
  • To integrate multi-machine learning algorithms and various features, primarily 3D USPs, for enhancing land cover mapping;
  • To perform UFZ mapping by coupling 3D USPs and multi-classifiers (MLCs);
  • To evaluate the influence of 3D USPs on the classifications of both land covers and UFZs.

2. Study Area and Data

2.1. Study Area

Our study area lies in the northern part of Brooklyn, New York City, USA (Figure 2), with an area of 6.12 km². The eastern and northern parts of the area are along the East River. The area has 8779 buildings, 8146 parcels, and 493 blocks. In addition, the area includes four typical UFZs, i.e., commercial, residential, industrial, and park zones [10].

2.2. Data

The primary data used in the study include VHR images, LiDAR point clouds, road networks, and land-lot information.
  • VHR images
The high-resolution orthophotos of the study area were acquired from the New York City Office of Information Technology Services [17] (Figure 2b). The images consist of four bands (i.e., blue, green, red, and near-infrared bands) with a 0.3 m (1.0 ft) spatial resolution, which provides rich spectral information for the classification of land covers and UFZs.
  • LiDAR point clouds
The point cloud data were acquired in May 2017 and collected using a Cessna 402C or Cessna Caravan 208B aircraft equipped with Leica ALS80 and Riegl VQ-880-G laser systems. The data are released by the New York City Department of Information Technology and Telecommunications (NYCDITT) [18]. Furthermore, to generate an accurate Digital Surface Model (DSM), we eliminated noise points (i.e., outliers and isolated points) using the "StatisticalOutlierRemoval" filter of Point Cloud Library 1.6, and a voxel grid filter was adopted to reduce redundant points. After filtering, the density of the point clouds is about 8.0 points/m² [14].
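For readers reproducing this preprocessing step, the sketch below shows an equivalent denoising pipeline in Python with the Open3D library, used here as a stand-in for the Point Cloud Library filters; the file names and filter parameters are illustrative assumptions rather than the settings used in this study.

```python
# Minimal sketch of the LiDAR denoising step, assuming Open3D as a
# substitute for Point Cloud Library 1.6; file names and parameters
# are illustrative, not this study's actual settings.
import open3d as o3d

pcd = o3d.io.read_point_cloud("brooklyn_tile.ply")  # hypothetical input tile

# Statistical outlier removal: drop points whose mean distance to their
# 20 nearest neighbors exceeds 2 standard deviations of the global mean.
pcd_clean, _ = pcd.remove_statistical_outlier(nb_neighbors=20, std_ratio=2.0)

# Voxel grid filter: keep one representative point per 0.25 m voxel
# to reduce redundant points before DSM interpolation.
pcd_thin = pcd_clean.voxel_down_sample(voxel_size=0.25)

o3d.io.write_point_cloud("brooklyn_tile_filtered.ply", pcd_thin)
```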
  • Road networks and land-lot information
The road networks were obtained from OpenStreetMap (OSM) in 2017. We used the networks to delineate block boundaries, and each block was regarded as a basic unit for UFZ mapping [19,20]. In addition, the land-lot data, released by NYCDITT, provide basic information on land functions; thus, they could be used to label the blocks' functional attributes and provide the ground reference. Finally, 493 zones were generated from the road networks and land-lot data.

3. Methods

3.1. Overview of the Study Approach

Figure 3 shows the workflow of the UFZ classification, which includes two key steps. (1) Land cover mapping: a method ensembling multiple machine learning algorithms and 3D USPs was proposed to enhance land cover mapping. In detail, we extracted the spectral, textural, and geometrical features and 3D USPs of objects delineated by multi-resolution image segmentation; feature optimization was then performed using the mean decrease impurity method; finally, we selected the best of three machine learning classifiers, i.e., Random Forest (RF), K-Nearest Neighbor (KNN), and Linear Discriminant Analysis (LDA), to label the objects. (2) UFZ mapping: a method that integrates two-dimensional (2D) USPs, 3D USPs, and MLCs was utilized for UFZ mapping. In detail, we extracted 2D USPs from the land cover mapping results and 3D USPs from the LiDAR point clouds; the Nearest Neighbor Index (NNI) was then used to identify three spatial patterns of land covers, i.e., random distribution, aggregation, and uniform distribution; finally, we chose the best-performing classifier to conduct the UFZ classification.

3.2. Land Cover Mapping

3.2.1. Multi-Feature Extraction

To avoid the "salt and pepper" phenomenon in land cover classification, multi-resolution segmentation was first used to segment the VHR images into multi-scale objects in the eCognition software. In particular, the Estimation of Scale Parameter (ESP) tool [21,22,23,24,25,26] was used to determine an appropriate scale for the land cover objects. Second, we extracted four categories of features, i.e., spectral, textural, and geometrical features and 3D USPs, for the subsequent land cover labeling (Table 1). The spectral features included the spectral information (i.e., red, blue, green, and near-infrared bands), Normalized Difference Vegetation Index (NDVI), Ratio Vegetation Index (RVI), Difference Vegetation Index (DVI), Normalized Difference Water Index (NDWI), Meani, Brightness, Ratio, Mean diff. to neighbor (Mean. diff.), and Standard Deviation (Std. Dev). The textural features were revealed by indices derived from the Gray-Level Co-occurrence Matrix (GLCM), i.e., angular second moment, variance, contrast, entropy, energy, correlation, inverse differential moment, dissimilarity, and homogeneity. The geometrical features revealed the geometrical characteristics of objects, i.e., area, border length, length/width, compactness, asymmetry, border index, density, elliptic fit, main direction, and shape index. Spectral, textural, and geometrical features are widely used in object-based image research [27,28]. In particular, we selected three 3D USPs, i.e., the Digital Surface Model (DSM), Sky View Factor (SVF), and flatness, all of which were extracted from the LiDAR point clouds.
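As a concrete illustration of the spectral indices in Table 1, the sketch below computes them from the four image bands with NumPy; the toy band arrays and the small epsilon guard against division by zero are assumptions for illustration.

```python
# Sketch: the spectral indices of Table 1 computed from float band arrays.
# Band inputs and the epsilon guard are assumptions for illustration.
import numpy as np

def spectral_indices(b_red, b_green, b_nir, eps=1e-6):
    """Return NDVI, RVI, DVI, and NDWI as defined in Table 1."""
    ndvi = (b_nir - b_red) / (b_nir + b_red + eps)
    rvi = b_nir / (b_red + eps)
    dvi = b_nir - b_red
    ndwi = (b_nir - b_green) / (b_nir + b_green + eps)
    return ndvi, rvi, dvi, ndwi

# Toy 2 x 2 band values scaled to [0, 1]
red = np.array([[0.20, 0.30], [0.25, 0.40]])
green = np.array([[0.30, 0.35], [0.30, 0.45]])
nir = np.array([[0.60, 0.50], [0.55, 0.42]])
print(spectral_indices(red, green, nir)[0])  # NDVI
```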
Based on the spectral responses of surface features in the VHR images and a site investigation, six land covers were identified: buildings, trees, grasses, soil lands, impervious grounds, and water bodies. Table 2 provides the details of the samples used for the different land covers; the training samples were randomly selected using the "model_selection" tool in the sklearn package. As shown, 8/10 of the samples were randomly selected for training, and the remainder were used for the accuracy assessment.

3.2.2. Feature Optimization

Feature optimization provides a better understanding of feature importance and is crucial for improving classification accuracy. We used the Gini Index (GI) (i.e., Mean Decrease Impurity) to measure the importance of each variable. The GI was calculated from the structure of the RF classifier, representing the average degree of error reduction contributed by each feature, and can be defined as [42,43]:
$$GI(P) = \sum_{k=1}^{K} P_k (1 - P_k) = 1 - \sum_{k=1}^{K} P_k^2$$
where $GI(P)$ is the GI value, $k$ denotes the $k$th class among $K$ classes, and $P_k$ is the probability that a sample belongs to class $k$. Generally, a higher GI value means that the corresponding variable exerts more influence on the classification. Details of the feature optimization steps can be found in [44].
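In practice, this mean-decrease-impurity ranking is exposed directly by scikit-learn's Random Forest; the sketch below shows the idea on placeholder data standing in for the 44 extracted variables.

```python
# Sketch of the Gini (mean decrease impurity) importance ranking via
# scikit-learn; the random data and feature names are placeholders for
# the study's 44 extracted variables.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.random((1000, 44))                # 1000 objects x 44 features
y = rng.integers(0, 6, size=1000)         # six land cover classes
names = [f"feature_{i}" for i in range(44)]

rf = RandomForestClassifier(n_estimators=500, random_state=42).fit(X, y)

# feature_importances_ holds the normalized mean decrease in Gini impurity.
ranking = sorted(zip(names, rf.feature_importances_),
                 key=lambda pair: pair[1], reverse=True)
for name, gi in ranking[:5]:
    print(f"{name}: GI = {gi:.3f}")
```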

3.2.3. The Classifier of Multiple Machine Learning

Three classifiers were utilized to label land covers and UFZs, including Random Forest (RF, [45,46]), K-Nearest Neighbor (KNN, [47]), and Linear Discriminant Analysis (LDA, [48]).
(1) The RF classifier consists of multiple decision trees, each trained on samples and yielding its own prediction. By counting the votes across the decision trees, RF integrates their results to produce the final prediction. The RF model can therefore significantly improve classification results compared with a single decision tree. In addition, RF is robust to outliers and noise and can effectively avoid overfitting [3,49].
(2) The KNN classifier weighs the labels of an instance's nearest neighbors when classifying a new instance. It assigns categories according to these weights and is better suited than other classifiers when the class distributions overlap in the sample set [50].
(3) The LDA classifier projects the training samples onto a line such that projections of samples from the same class lie as close together as possible, while projections of samples from different classes lie as far apart as possible. The classifier assumes that the data follow a normal distribution and can reduce the dimensionality of the original data. LDA calculates the probability density of each class, and the classification result depends on the maximum probability across categories [51,52].
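A minimal sketch, on placeholder data, of training and comparing the three classifiers with scikit-learn follows; the feature matrix, labels, and hyperparameters are assumptions, not the study's settings.

```python
# Sketch: comparing RF, KNN, and LDA on placeholder object-level data.
# Data shapes and hyperparameters are assumptions for illustration.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
X = rng.random((1000, 24))             # 24 optimized features per object
y = rng.integers(0, 6, size=1000)      # six land cover classes
X_tr, X_te, y_tr, y_te = train_test_split(X, y, train_size=0.8, random_state=42)

classifiers = {
    "RF": RandomForestClassifier(n_estimators=500, random_state=42),
    "KNN": KNeighborsClassifier(n_neighbors=5),
    "LDA": LinearDiscriminantAnalysis(),
}
for name, clf in classifiers.items():
    clf.fit(X_tr, y_tr)
    oa = accuracy_score(y_te, clf.predict(X_te))   # overall accuracy
    print(f"{name}: OA = {oa:.3f}")
```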

3.2.4. Classification Post-Processing and Accuracy Evaluation

Classification post-processing is a pivotal step for optimizing the classification result [53,54]. We applied a set of post-processing rules to the classified objects (Table 3). In addition, we used a confusion matrix to evaluate the accuracy of the land cover classification [55]. Three indices, i.e., Overall Accuracy (OA), Producer's Accuracy (PA), and User's Accuracy (UA), were utilized to quantify the accuracy of the classification results.
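For reference, the three indices follow directly from the confusion matrix; a minimal sketch is given below, assuming rows hold the reference classes and columns the predictions.

```python
# Sketch: OA, PA, and UA from a confusion matrix whose rows are reference
# classes and whose columns are predicted classes (an assumed orientation).
import numpy as np

def accuracy_metrics(cm):
    cm = np.asarray(cm, dtype=float)
    oa = np.trace(cm) / cm.sum()        # overall accuracy
    pa = np.diag(cm) / cm.sum(axis=1)   # producer's accuracy (per class)
    ua = np.diag(cm) / cm.sum(axis=0)   # user's accuracy (per class)
    return oa, pa, ua

# Toy three-class example
cm = [[50, 3, 2],
      [4, 45, 1],
      [2, 2, 41]]
oa, pa, ua = accuracy_metrics(cm)
print(f"OA = {oa:.3f}, PA = {np.round(pa, 3)}, UA = {np.round(ua, 3)}")
```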

3.3. UFZ Mapping

3.3.1. Feature Extraction

We utilized three feature categories, i.e., 2D USPs, 3D USPs, and the spatial patterns of land covers, for the subsequent UFZ classification (Table 4). The 2D USPs describe the landscape compositions of different UFZs and comprise building coverage (BC), tree coverage (TC), grass coverage (GC), soil coverage (SC), impervious surface coverage at ground level (ISC_G), and water coverage (WC). We extracted the 3D USPs by integrating the land cover classification results and the LiDAR point clouds. For example, we obtained building labels from the land cover classification results and then calculated the average building height using the height information from the LiDAR data [56]. The 3D USPs include the sky view factor (SVF), building height (BH), street aspect ratio (SAR), and floor area ratio (FAR).
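As an illustration of how such block-level parameters can be derived, the sketch below computes one 2D USP (building coverage) and one 3D USP (mean building height) from raster inputs; the raster layout and the class code are assumptions for illustration.

```python
# Sketch: block-level USPs from raster inputs. `labels` is the land cover
# map, `ndsm` a normalized DSM, and `blocks` a raster of block IDs; the
# class code BUILDING = 1 is a hypothetical convention.
import numpy as np

BUILDING = 1  # assumed class code in the land cover map

def block_usps(labels, ndsm, blocks, block_id):
    mask = blocks == block_id
    block_labels = labels[mask]
    # 2D USP: building coverage = building pixels / block pixels
    bc = float(np.mean(block_labels == BUILDING))
    # 3D USP: mean normalized height over the block's building pixels
    heights = ndsm[mask][block_labels == BUILDING]
    bh = float(heights.mean()) if heights.size else 0.0
    return bc, bh
```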
Previous studies have demonstrated that the Nearest Neighbor Index (NNI) can measure the spatial patterns of land covers and thus assist UFZ mapping [57,58,59]. Because different UFZs can share similar landscape compositions or 3D USPs, we introduced the NNI to improve the UFZ classification by exploiting the different spatial patterns of land covers [60]. We considered three typical spatial patterns, i.e., random distribution, aggregation, and uniform distribution. The NNI can be defined as:
$$\bar{d}_{\min} = \frac{1}{n}\sum_{i=1}^{n} d_{\min}$$
$$E(d_{\min}) = \frac{1}{2\sqrt{n/A}}$$
$$NNI = \frac{\bar{d}_{\min}}{E(d_{\min})}$$
where $d_{\min}$ is the distance between a specific land cover object (e.g., a building) and its nearest object of the same class, and $\bar{d}_{\min}$ is the average of $d_{\min}$ within a block. $E(d_{\min})$ is the expectation of $d_{\min}$ under complete spatial randomness, calculated from the area of the block ($A$) and the number of objects ($n$). Random distribution, aggregation, and uniform distribution were identified when $NNI = 1$, $NNI < 1$, and $NNI > 1$, respectively.
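A minimal sketch of this computation for one land cover class in a block, using a KD-tree for the nearest-neighbor distances, follows; the object centroids and block area are assumed inputs.

```python
# Sketch: NNI for one land cover class in a block. `points` holds object
# centroids (n x 2) and `area` the block area; both are assumed inputs.
import numpy as np
from scipy.spatial import cKDTree

def nearest_neighbor_index(points, area):
    points = np.asarray(points, dtype=float)
    n = len(points)
    # k=2: the nearest hit at k=1 is the point itself
    dists, _ = cKDTree(points).query(points, k=2)
    d_mean = dists[:, 1].mean()                    # observed mean distance
    d_expected = 1.0 / (2.0 * np.sqrt(n / area))   # expectation under CSR
    return d_mean / d_expected   # ~1 random, <1 aggregated, >1 uniform

# Toy usage: 50 random centroids in a 100 m x 100 m block
rng = np.random.default_rng(0)
print(nearest_neighbor_index(rng.random((50, 2)) * 100.0, area=100.0 * 100.0))
```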

3.3.2. Experiment Design

We performed feature optimization using the GI method and selected the best combination of features for UFZ mapping (details can be found in Section 3.2.2). To further analyze the influence of 3D USPs on UFZ mapping, we designed seven experiments with different variable combinations based on the feature optimization results (Table 5), aiming to identify the most significant feature category for UFZ mapping. Exps. a, b, and c featured the 2D USPs, 3D USPs, and spatial pattern features alone, respectively, and were meant to examine the ability of a single category in UFZ mapping. Exp. d combined the 2D and 3D USPs and was designed to identify their joint effect on UFZ mapping. Exp. e used the 2D USPs and spatial pattern features as input variables, and Exp. f combined the 3D USPs and spatial pattern features. Exps. f and g differed in that Exp. f did not contain the 2D USPs, while Exp. g included all three categories. In addition, 307 zones were selected as training samples for classification, and the others were used for the accuracy assessment.
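The sketch below mirrors this design on placeholder data: the same classifier is trained on each feature-category combination of Table 5 and the resulting OA values are compared. The column grouping of the 16 block-level variables is an assumed layout.

```python
# Sketch of the Table 5 experiment design on placeholder block-level data;
# the column grouping of the 16 variables is an assumed layout.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.random((493, 16))            # 493 zones x 16 variables (Table 4)
y = rng.integers(0, 4, size=493)     # four UFZ classes

groups = {"2D": range(0, 6), "3D": range(6, 10), "SP": range(10, 16)}
experiments = {"a": ["2D"], "b": ["3D"], "c": ["SP"], "d": ["2D", "3D"],
               "e": ["2D", "SP"], "f": ["3D", "SP"], "g": ["2D", "3D", "SP"]}

# 307 training zones, as in the study; the split itself is random here.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, train_size=307, random_state=42)
for exp, cats in experiments.items():
    cols = [c for cat in cats for c in groups[cat]]
    clf = RandomForestClassifier(n_estimators=500, random_state=42)
    clf.fit(X_tr[:, cols], y_tr)
    oa = accuracy_score(y_te, clf.predict(X_te[:, cols]))
    print(f"Exp. {exp}: OA = {oa:.3f}")
```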

4. Results

4.1. Urban Land Cover Mapping

4.1.1. Results of Feature Optimization

Figure 4 shows the importance ranking of the 44 variables for land cover classification. DSM reached the highest variable importance (GI value = 0.057), and SVF also yielded a high GI value (0.042), indicating that 3D USPs exert considerable influence on land cover classification. The GI value of NDVI was 0.050, suggesting that vegetation coverage is vital for land cover mapping. The spectral features performed well in the classification (all of their GI values were higher than 0.020); in contrast, the GI values of the textural features were relatively low (most were lower than 0.02). Geometrical features, i.e., asymmetry, area, and compactness, had limited influence on the classification. In summary, the variable importance ranked from high to low was 3D USPs > spectral features > geometrical features > textural features.
We further tested the OA values associated with different numbers of input variables (Figure 5). The OA increased and then stabilized as the number of input variables rose. The highest OA value (87.4%) was observed when the number of input variables was 24. We therefore selected these 24 variables to perform the subsequent land cover mapping (details of the 24 optimal variables can be found in Supplementary Material Table S2).
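This selection procedure, i.e., adding variables one at a time in descending Gini importance and tracking the resulting OA, can be sketched as follows; the data are placeholders, and the reduced tree count in the inner loop is an assumption made only to keep the example fast.

```python
# Sketch of the incremental feature selection behind Figure 5: add variables
# in descending Gini importance and record the OA at each step. Placeholder
# data stand in for the 44 land cover variables.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.random((1000, 44))
y = rng.integers(0, 6, size=1000)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, train_size=0.8, random_state=42)

base = RandomForestClassifier(n_estimators=500, random_state=42).fit(X_tr, y_tr)
order = np.argsort(base.feature_importances_)[::-1]   # most important first

oa_curve = []
for k in range(1, len(order) + 1):
    cols = order[:k]
    clf = RandomForestClassifier(n_estimators=200, random_state=42)
    clf.fit(X_tr[:, cols], y_tr)
    oa_curve.append(accuracy_score(y_te, clf.predict(X_te[:, cols])))

best_k = int(np.argmax(oa_curve)) + 1   # 24 variables in this study's Figure 5
print(best_k, round(max(oa_curve), 3))
```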

4.1.2. Results of Land Cover Mapping

Table 6 shows the land cover classification accuracy of the three classifiers, i.e., RF, KNN, and LDA. The RF classifier had the highest accuracy, with an OA value of 87.4% (for its confusion matrix, see Supplementary Material Table S3). In contrast, the lowest classification accuracy was obtained with the LDA classifier (OA value of 74.0%), which failed to distinguish trees from grasses and buildings from ground-level impervious surfaces (see Supplementary Material Table S4). The KNN classifier reached an OA value of 77.4% but frequently confused trees and grasses (see Supplementary Material Table S5).
Figure 6 shows the details of the land cover classifications produced by the different MLCs. The RF classifier better captured small trees (location g in Figure 6b) and effectively identified building boundaries (location h in Figure 6b). In contrast, the KNN classifier could not depict the building boundaries clearly and misclassified the shapes of trees (location g in Figure 6c). The LDA classifier failed to capture small trees, and some building boundaries were poorly detected (location h in Figure 6d).

4.1.3. Advantages of Using 3D USPs to Land-Cover Mapping

Table 7 compares the land-cover classifications obtained using both 2D and 3D USPs against those using only 2D USPs (the confusion matrices of each classifier can be found in Supplementary Material Tables S6–S8). The accuracies of all classifiers increased after adding 3D USPs: the OA values of the RF, KNN, and LDA classifiers increased by 3.1%, 2.3%, and 3.5%, respectively. Further, once 3D USPs were involved in the classification, the PA values of all land covers increased, although to different degrees. Trees yielded the largest gain, with an average PA increase of 6.7%; the likely reason is that height information helps distinguish trees from grasses. In addition, 3D USPs increased the PA value of buildings by 3.3%, possibly because the SVF can distinguish building roofs (featuring a high SVF) from impervious grounds (featuring a low SVF). In summary, 3D USPs significantly increased the accuracy of land cover mapping, consistent with previous studies [14,64].
Figure 7 compares the spatial details of the land-cover classifications using 2D-3D USPs against those using only 2D USPs. The classification results using 2D-3D USPs were better than those using only 2D USPs. As shown in Figure 7a, the trees in the blue circle were mistakenly classified as impervious ground, and the building boundaries could not be captured using only 2D USPs (red rectangle). Both were captured after adding 3D USPs (Figure 7b), indicating that 3D USPs help distinguish trees from impervious grounds and delineate building boundaries. In addition, some impervious grounds were mistakenly recognized as buildings (red circle in Figure 7e), and some trees were poorly captured (blue circle in Figure 7e) using only 2D USPs. In contrast, Figure 7f shows that the impervious grounds and trees were correctly classified using both 2D and 3D USPs.

4.2. UFZ Mapping

4.2.1. Results of Feature Optimization

Figure 8 shows the importance ranking of the 16 variables for UFZ mapping. The most critical variable was SVF, which reached the highest GI value of 0.138. SAR also had a high GI value (0.131) and performed well, underscoring the importance of 3D USPs in UFZ mapping (note that all GI values of the 3D USPs were higher than 0.06). In addition, TC reached the second-highest GI value (0.134), suggesting that tree coverage plays an essential role in UFZ classification. The other 2D USPs (i.e., BC, ISC_G, and GC) were also crucial for UFZ mapping, as their GI values were higher than 0.06. In contrast, the spatial patterns of land covers occupied relatively low ranks, with all GI values lower than 0.060.
We also separately tested the variable importance within each feature category. Figure 9a–c show the results for the 2D USPs, 3D USPs, and spatial patterns, respectively. The most crucial 2D USP was TC (Figure 9a), with a high GI value of 0.32, again indicating that tree coverage plays an essential role in UFZ classification; building coverage also yielded a high GI value of 0.25, whereas SC and WC showed relatively low importance (GI values below 0.02). Figure 9b shows that SVF (GI value = 0.29) and SAR (GI value = 0.28) performed well in UFZ mapping; notably, all GI values of the 3D USPs exceeded 0.20, suggesting that 3D USPs are indispensable for UFZ mapping. Figure 9c shows that the top two spatial pattern features were TNNI (GI value = 0.30) and BNNI (GI value = 0.29); the importance ranking of the spatial patterns from high to low was TNNI > BNNI > INNI > GNNI > SNNI > WNNI.
Figure 10 shows the changes in the OA value with different numbers of input variables. The OA value increased rapidly as the number of variables rose from 1 to 4, and it reached the highest accuracy of 91.9% when the number of input variables was 14. Therefore, the optimal combination of 14 variables was selected to perform the UFZ mapping (details of the 14 optimal variables can be found in Supplementary Material Table S9).

4.2.2. Results of UFZ Mapping

Table 8 shows the UFZ classification accuracy of the different classifiers (the confusion matrices of the three classifiers are shown in Supplementary Material Tables S10–S12). The RF classifier generated the most accurate UFZ mapping, with an OA value of 91.9%. For example, for the commercial zone, the highest accuracy was found with the RF classifier (PA value = 88.9%), whereas lower accuracies were observed for the KNN (PA value = 64.4%) and LDA (PA value = 80.0%) algorithms. In summary, the RF classifier produced more accurate results and had clear advantages over the KNN and LDA classifiers in identifying UFZs.
The UFZ classification results of the RF, KNN, and LDA classifiers are shown in Figure 11a–c. We selected three sub-regions to show differences in spatial classification detail (Figure 11d–f). Figure 11d gives an example of a residential site: it was well recognized by the RF classifier but wrongly classified as an industrial zone by the KNN and LDA classifiers. Likewise, Figure 11e gives an example of a commercial site: the RF classifier classified it accurately, yet the KNN classifier mistook it for a residential zone and the LDA classifier for an industrial zone.
Moreover, Figure 11f shows an industrial zone with irregularly distributed buildings and some large trucks. The RF classifier recognized the site well, but it was incorrectly classified as a residential zone by the KNN and LDA classifiers. Based on the above analyses, the RF classifier was the most suitable for UFZ classification.

4.2.3. Advantages of Using 3D USPs to UFZ Mapping

Figure 12 summarizes the UFZ classification results for the different variable combinations (the corresponding table can be found in Supplementary Material Table S13). First, Exp. g yielded the highest accuracy, with an OA value of 91.9%, demonstrating the advantage of integrating 2D USPs, 3D USPs, and the spatial pattern features of land covers. In contrast, Exp. c produced the worst results (OA value = 67.7%), showing the disadvantage of using only the spatial patterns of land covers for UFZ mapping. Second, combinations of categories were generally better than single categories (e.g., Exp. a vs. Exp. d and Exp. b vs. Exp. e). However, the OA value of Exp. a was higher than that of Exp. f, suggesting that 2D USPs are indispensable for UFZ classification.
To explore the impacts of 3D USPs on UFZ mapping, we compared two pairs of experiments (i.e., Exp. a vs. Exp. d and Exp. e vs. Exp. g) in Figure 13. The experiments with both 2D and 3D USPs produced more accurate classification results than those with only 2D USPs. For example, block a in Figure 13 is an industrial zone with low-rise, large buildings; however, it was incorrectly classified as a park in Exp. a. Similarly, residential zone b was incorrectly identified as a commercial zone in Exp. a. Both blocks were correctly classified in Exp. d, which used 3D USPs. The likely reason is that SAR and FAR aid UFZ mapping: SARs in industrial zones are higher than those in parks (Figure 1c), and FARs in residential zones are higher than those in commercial zones (Figure 1d). In addition, blocks c and d were correctly identified as industrial and commercial zones, respectively, in Exp. g, whereas they were wrongly labeled as residential zones in Exp. e, which did not contain 3D USPs. Our results verified the importance of 3D USPs for UFZ classification.

5. Discussion

Previous studies used only 2D features, e.g., spectral, textural, and geometrical features, to perform UFZ mapping [11,12,13], whereas our approach is the first to consider the potential of 3D USPs, i.e., BH, SVF, FAR, and SAR, for UFZ mapping. Introducing 3D USPs is important because different UFZs exhibit essentially different 3D structures (Figure 1). Our results verified that 3D USPs can considerably improve the OA values of UFZ and land cover mapping, by 6.4% and 3.0%, respectively (Table 8 and Figure 12).
To better illustrate the advantages of our approach, we further compared it with the relevant literature in terms of data source and evaluated results, i.e., OA value (Table 9). First, our approach obtains more refined results than most of the considered strategies (OA value = 91.9%). Second, the main idea of UFZ mapping is to fully exploit the differences in spectral features [12], textural features [64], the spatial patterns of objects [1,11], and 3D USPs among the various UFZs. Note that different strategies and indicators may incur extra costs; our approach needs both VHR images and LiDAR point clouds, producing the additional expense of 3D data. Yet, accurate 3D data are becoming more accessible, e.g., low-cost LiDAR and terrestrial LiDAR systems are increasingly affordable [14]. Based on the above discussion, our approach is suitable for accurate UFZ mapping owing to its better accuracy and affordable costs.
As stated earlier, this study achieved an accurate method for UFZ classification by considering 3D USPs. However, several limitations need to be noted. First, this study highlights the distinctive role of 3D USPs in UFZ mapping; however, cities composed of complex and varied landscapes may require additional 3D variables, e.g., 3D spatial patterns, to label objects and obtain accurate UFZ classification results. Second, UFZs are relevant to socioeconomic activities, and people conduct different activities in different UFZs; open social data related to human activity (e.g., POIs, public transport data, and mobile phone positioning data) are therefore valuable for UFZ mapping [11,12,64]. Future studies should explore additional available features and integrate open social data for more accurate UFZ mapping.

6. Conclusions

In this study, we proposed a new approach for UFZ mapping that integrates 2D USPs, 3D USPs, and the spatial patterns of land covers. The approach was verified in Brooklyn, New York City, USA, and we evaluated the influence of 3D USPs on the classifications of land covers and UFZs. The conclusions are as follows.
Our results show that the approach yielded excellent UFZ mapping accuracy, with an overall accuracy of 91.9%. The RF classifier produced the highest accuracies for both the land cover and UFZ classifications. In addition, 3D USPs considerably improved the classification accuracy of land covers, increasing the average OA value by 3.0%, and aided UFZ recognition, increasing the OA value of UFZ mapping by 6.4%. Moreover, we verified that DSM was the most critical of the 44 features in land cover mapping, with a GI value of 0.057, and that SVF was the most important variable for UFZ classification, with a GI value of 0.138. Our research provides a new perspective for UFZ mapping and highlights that 3D USPs should be considered in future UFZ mapping studies.

Supplementary Materials

The following are available online at https://www.mdpi.com/article/10.3390/rs13132573/s1.

Author Contributions

Conceptualization, S.C. and M.D.; methodology, S.C., Y.M. and S.S.; software, S.S.; validation, S.C., W.H., Y.M. and M.D.; formal analysis, S.S.; investigation, S.S. and Y.M.; resources, S.C. and M.D.; data curation, S.C.; writing—original draft preparation, S.S. and S.C.; writing—review and editing, S.C.; visualization, Q.C. and W.H.; supervision, S.C. and M.D.; project administration, S.C.; funding acquisition, S.C. and M.D. All authors have read and agreed to the published version of the manuscript.

Funding

This study was supported by the Scientific Research Project of Beijing Municipal Education Commission (No. KM202110016004) and funded by the Fundamental Research Funds for Beijing University of Civil Engineering and Architecture (No. X20047) and by the National Natural Science Foundation (NSFC) of China [Key Project #41930650].

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data in this study are available within the article or its supplementary materials. The 2017 aerial LiDAR data used in this study can be acquired from http://gis.ny.gov/elevation/lidar-coverage.htm (accessed on 1 February 2019).

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Zhang, X.; Du, S.; Wang, Q. Integrating bottom-up classification and top-down feedback for improving urban land-cover and functional-zone mapping. Remote Sens. Environ. 2018, 212, 231–248.
  2. Alberti, M.; Weeks, R.; Coe, S. Urban land-cover change analysis in Central Puget Sound. Photogramm. Eng. Remote Sens. 2004, 70, 1043–1052.
  3. Yao, Y.; Li, X.; Liu, X.; Liu, P.; Liang, Z.; Zhang, J.; Mai, K. Sensing spatial distribution of urban land use by integrating points-of-interest and Google Word2Vec model. Int. J. Geogr. Inf. Sci. 2017, 31, 825–848.
  4. Kane, K.; Tuccillo, J.; York, A.M.; Gentile, L.; Ouyang, Y. A spatio-temporal view of historical growth in Phoenix, Arizona, USA. Landsc. Urban Plan. 2014, 121, 70–80.
  5. Zhou, G.; Li, C.; Li, M.; Zhang, J.; Liu, Y. Agglomeration and diffusion of urban functions: An approach based on urban land use conversion. Habitat Int. 2016, 56, 20–30.
  6. Heiden, U.; Heldens, W.; Roessner, S.; Segl, K.; Esch, T.; Mueller, A. Urban structure type characterization using hyperspectral remote sensing and height information. Landsc. Urban Plan. 2012, 105, 361–375.
  7. Hu, S.; Wang, L. Automated urban land-use classification with remote sensing. Int. J. Remote Sens. 2012, 34, 790–803.
  8. Feng, Y.; Du, S.; Myint, S.W.; Shu, M. Do urban functional zones affect land surface temperature differently? A case study of Beijing, China. Remote Sens. 2019, 11, 1802.
  9. Zhang, X.; Du, S.; Zheng, Z. Heuristic sample learning for complex urban scenes: Application to urban functional-zone mapping with VHR images and POI data. ISPRS J. Photogramm. Remote Sens. 2020, 161, 1–12.
  10. Xiao, J.; Shen, Y.; Ge, J.; Tateishi, R.; Tang, C.; Liang, Y.; Huang, Z. Evaluating urban expansion and land use change in Shijiazhuang, China, by using GIS and remote sensing. Landsc. Urban Plan. 2006, 75, 69–80.
  11. Zhang, X.; Du, S.; Wang, Q. Hierarchical semantic cognition for urban functional zones with VHR satellite images and POI data. ISPRS J. Photogramm. Remote Sens. 2017, 132, 170–184.
  12. Hu, T.; Yang, J.; Li, X.; Gong, P. Mapping urban land use by using landsat images and open social data. Remote Sens. 2016, 8, 151.
  13. Zhou, W.; Ming, D.; Lv, X.; Zhou, K.; Bao, H.; Hong, Z. SO–CNN based urban functional zone fine division with VHR remote sensing image. Remote Sens. Environ. 2020, 236, 111458.
  14. Cao, S.; Weng, Q.; Du, M.; Li, B.; Zhong, R.; Mo, Y. Multi-scale three-dimensional detection of urban buildings using aerial LiDAR data. GISci. Remote Sens. 2020, 57, 1125–1143.
  15. Liu, C.; Huang, X.; Zhu, Z.; Chen, H.; Tang, X.; Gong, J. Automatic extraction of built-up area from ZY3 multi-view satellite imagery: Analysis of 45 global cities. Remote Sens. Environ. 2019, 226, 51–73.
  16. Huang, X.; Chen, H.; Gong, J. Angular difference feature extraction for urban scene classification using ZY-3 multi-angle high-resolution satellite imagery. ISPRS J. Photogramm. Remote Sens. 2018, 135, 127–141.
  17. New York City Office of Information Technology Services (NYITS). High-Resolution Orthophotos of Red, Green, Blue, and Near-Infrared Bands. Available online: http://gis.ny.gov/gateway/mg/metadata.cfm (accessed on 1 February 2020).
  18. New York City Department of Information Technology and Telecommunications (NYCDITT). 2017 ALS Data. Available online: http://gis.ny.gov/elevation/lidar-coverage.htm (accessed on 1 February 2019).
  19. Shin, H.B. Residential redevelopment and the entrepreneurial local state: The implications of Beijing’s shifting emphasis on urban redevelopment policies. Urban Stud. 2009, 46, 2815–2839.
  20. Zhao, P.; Lu, B. Transportation implications of metropolitan spatial planning in mega-city Beijing. Int. Dev. Plan. Rev. 2009, 31, 235–261.
  21. Benz, U.C.; Hofmann, P.; Willhauck, G.; Lingenfelder, I.; Heynen, M. Multi-resolution, object-oriented fuzzy analysis of remote sensing data for GIS-ready information. ISPRS J. Photogramm. Remote Sens. 2004, 58, 239–258.
  22. Baatz, M.; Schape, A. Multiresolution segmentation: An optimization approach for high quality multi-scale image segmentation. Angew. Geogr. Inf. Verarbeitung. 2000, 12, 12–23.
  23. Burnett, C.; Blaschke, T. A multi-scale segmentation/object relationship modelling methodology for landscape analysis. Ecol. Model. 2003, 168, 233–249.
  24. Hay, G.J.; Blaschke, T.; Marceau, D.J.; Bouchard, A. A comparison of three image-object methods for the multiscale analysis of landscape structure. ISPRS J. Photogramm. Remote Sens. 2003, 57, 327–345.
  25. Silveira, M.; Nascimento, J.; Marques, J.S.; Marcal, A.R.S.; Mendonca, T.; Yamauchi, S.; Maeda, J.; Rozeira, J. Comparison of segmentation methods for melanoma diagnosis in dermoscopy images. IEEE J. Sel. Top. Signal Process. 2009, 3, 35–45.
  26. Drǎguţ, L.; Tiede, D.; Levick, S. ESP: A tool to estimate scale parameter for multiresolution image segmentation of remotely sensed data. Int. J. Geogr. Inf. Sci. 2010, 24, 859–871.
  27. Peña-Barragán, J.M.; Ngugi, M.K.; Plant, R.E.; Six, J. Object-based crop identification using multiple vegetation indices, textural features and crop phenology. Remote Sens. Environ. 2011, 115, 1301–1316.
  28. Cleve, C.; Kelly, M.; Kearns, F.R.; Moritz, M. Classification of the wildland–urban interface: A comparison of pixel- and object-based classifications using high-resolution aerial photography. Comput. Environ. Urban Syst. 2008, 32, 317–326.
  29. Faridatul, M.I.; Wu, B. Automatic classification of major urban land covers based on novel spectral indices. ISPRS Int. J. Geo-Inf. 2018, 7, 453.
  30. Jiang, Z.; Huete, A.R.; Chen, J.; Chen, Y.; Li, J.; Yan, G.; Zhang, X. Analysis of NDVI and scaled difference vegetation index retrievals of vegetation fraction. Remote Sens. Environ. 2006, 101, 366–378.
  31. Person, R.L.; Miller, L.D. Remote mapping of standing crop biomass for estimation of the productivity of the short-grass prairie. In Proceedings of the Eighth International Symposium on Remote Sensing of Environment, Ann Arbor, MI, USA, 2–6 October 1972; Volume 2, pp. 1357–1381.
  32. Tucker, C.J. Red and photographic infrared linear combinations for monitoring vegetation. Remote Sens. Environ. 1979, 8, 127–150.
  33. McFeeters, S.K. The use of the normalized difference water index (NDWI) in the delineation of open water features. Int. J. Remote Sens. 1996, 17, 1425–1432.
  34. Kim, M.; Warner, T.A.; Madden, M.; Atkinson, D.S. Multi-scale GEOBIA with very high spatial resolution digital aerial imagery: Scale, texture and image objects. Int. J. Remote Sens. 2011, 32, 2825–2850.
  35. Anys, H.; Bannari, A.; He, D.C.; Morin, D. Texture analysis for the mapping of urban areas using airborne MEIS-II images. In Proceedings of the First International Airborne Remote Sensing Conference and Exhibition, Strasbourg, France, 11–15 September 1994; Volume 3, pp. 231–245.
  36. Haralick, R.M.; Shanmugam, K.; Dinstein, I. Textural features for image classification. IEEE Trans. Syst. Man Cybern. 1973, 3, 610–621.
  37. Shaban, M.A.; Dikshit, O. Improvement of classification in urban areas by the use of textural features: The case study of Lucknow city, Uttar Pradesh. Int. J. Remote Sens. 2001, 22, 565–593.
  38. Wu, S.-S.; Qiu, X.; Usery, E.L.; Wang, L. Using geometrical, textural, and contextual information of land parcels for classification of detailed urban land use. Ann. Assoc. Am. Geogr. 2009, 99, 76–98.
  39. Gomez, C.; Hayakawa, Y.; Obanawa, H. A study of Japanese landscapes using structure from motion derived DSMs and DEMs based on historical aerial photographs: New opportunities for vegetation monitoring and diachronic geomorphology. Geomorphology 2015, 242, 11–20.
  40. Zakšek, K.; Oštir, K.; Kokalj, Ž. Sky-view factor as a relief visualization technique. Remote Sens. 2011, 3, 398–415.
  41. Rapidlasso. Lastools Introduction. Available online: https://rapidlasso.com/lastools/lasground/ (accessed on 1 March 2020).
  42. Strobl, C.; Boulesteix, A.-L.; Augustin, T. Unbiased split selection for classification trees based on the Gini Index. Comput. Stat. Data Anal. 2007, 52, 483–501.
  43. Zhang, F.; Yang, X. Improving land cover classification in an urbanized coastal area by random forests: The role of variable selection. Remote Sens. Environ. 2020, 251, 112105.
  44. Mo, Y.; Zhong, R.; Cao, S. Orbita hyperspectral satellite image for land cover classification using random forest classifier. J. Appl. Remote Sens. 2021, 15, 014519.
  45. Breiman, L. Random forests. Mach. Learn. 2001, 45, 5–32.
  46. Breiman, L. Statistical modeling: The two cultures (with comments and a rejoinder by the author). Stat. Sci. 2001, 16, 199–231.
  47. Cover, T.; Hart, P. Nearest neighbor pattern classification. IEEE Trans. Inf. Theory 1967, 13, 21–27.
  48. Fisher, R.A. The use of multiple measurements in taxonomic problems. Ann. Eugen. 1936, 7, 179–188.
  49. Pal, M. Random forest classifier for remote sensing classification. Int. J. Remote Sens. 2005, 26, 217–222.
  50. Ge, G.; Shi, Z.; Zhu, Y.; Yang, X.; Hao, Y. Land use/cover classification in an arid desert-oasis mosaic landscape of China using remote sensed imagery: Performance assessment of four machine learning algorithms. Glob. Ecol. Conserv. 2020, 22, e00971.
  51. Rao, C.R. The utilization of multiple measurements in problems of biological classification. J. R. Stat. Soc. Ser. B (Methodol.) 1948, 10, 159–193.
  52. Blei, D.M.; Ng, A.Y.; Jordan, M.I. Latent Dirichlet allocation. J. Mach. Learn. Res. 2003, 3, 993–1022.
  53. Stuckens, J.; Coppin, P.; Bauer, M. Integrating contextual information with per-pixel classification for improved land cover classification. Remote Sens. Environ. 2000, 71, 282–296.
  54. Huang, X.; Wen, D.; Li, J.; Qin, R. Multi-level monitoring of subtle urban changes for the megacities of China using high-resolution multi-view satellite imagery. Remote Sens. Environ. 2017, 196, 56–75.
  55. Lewis, H.G.; Brown, M. A generalized confusion matrix for assessing area estimates from remotely sensed data. Int. J. Remote Sens. 2001, 22, 3223–3235.
  56. Scaioni, M.; Höfle, B.; Kersting, A.P.B.; Barazzetti, L.; Previtali, M.; Wujanz, D. Methods from information extraction from lidar intensity data and multispectral lidar technology. ISPRS 2018, 3, 1503–1510.
  57. Diggle, P.J.; Besag, J.; Gleaves, J.T. Statistical analysis of spatial point patterns by means of distance methods. Biometrics 1976, 32, 659.
  58. Ri, C.-Y.; Yao, M. Bayesian network based semantic image classification with attributed relational graph. Multimed. Tools Appl. 2015, 74, 4965–4986.
  59. Yang, G.; Pu, R.; Zhang, J.; Zhao, C.; Feng, H.; Wang, J. Remote sensing of seasonal variability of fractional vegetation cover and its object-based spatial pattern analysis over mountain areas. ISPRS J. Photogramm. Remote Sens. 2013, 77, 79–93.
  60. Clark, P.J.; Evans, F.C. On some aspects of spatial pattern in biological populations. Science 1955, 121, 397–398.
  61. Chen, Z.; Xu, B.; Devereux, B. Urban landscape pattern analysis based on 3D landscape models. Appl. Geogr. 2014, 55, 82–91.
  62. Jike, C.; Wen, F.; Shuan, G.; Wen, Q.; Peijun, D.; Jun, S.; Jia, M.; Jiu, F.; Zi, H.; Long, L.; et al. Separate and combined impacts of building and tree on urban thermal environment from two- and three-dimensional perspectives. Build. Environ. 2021, 194, 107650.
  63. Hermosilla, T.; Palomar-Vázquez, J.; Balaguer-Beser, A.; Balsa-Barreiro, J.; Ruiz, L.A. Using street based metrics to characterize urban typologies. Comput. Environ. Urban Syst. 2014, 44, 68–79.
  64. Xin, H.; Yang, J.; Li, J.; Wen, D. Urban functional zone mapping by integrating high spatial resolution nighttime light and daytime multi-view imagery. ISPRS J. Photogramm. Remote Sens. 2021, 175, 403–415.
Figure 1. An investigation of 3D USPs across different functional zones in the northern part of Brooklyn, New York City, USA. (a) Sky view factor (SVF), the visible degree of sky at the ground level; (b) building height (BH); (c) street aspect ratio (SAR), the ratio of average building height to street width; and (d) floor area ratio (FAR), the ratio of total building floor area to block area.
Figure 2. The overview of the study area. (a) location of the study area, (b) true color composited image, (c) Digital Surface Model (DSM), and (d) sky view factor (SVF).
Figure 3. Workflow of the study.
Figure 4. Variable importance ranking for land cover mapping revealed by the RF algorithm.
Figure 5. Changes in the OA values of the land cover classification with different numbers of input variables.
Figure 6. Comparison of classification details by using the three classifiers. (a) RF classifier—total study area, (b) RF classifier—regions of interest, (c) KNN classifier, (d) LDA classifier, (e) reference model and (f) VHR image.
Figure 7. A comparison of classification details using 2D USPs and 2D-3D USPs. Panels (a,e) show the classification results using 2D USPs; panels (b,f) show the classification results using 2D-3D USPs; panels (c,g) show the reference models; panels (d,h) show the VHR images.
Figure 8. Variable importance ranking for UFZ mapping revealed by the RF algorithm.
Figure 9. The variable importance of 2D USPs, 3D USPs, and spatial patterns of land covers.
Figure 10. Changes in the OA values of the UFZ classification with a rising number of input variables.
Figure 11. Results of UFZ mapping by the different classifiers. (a–c) RF, KNN, and LDA classifiers, respectively; (d–f) enlargements of areas of interest in (a–c). The red labels refer to the ground reference.
Figure 12. UFZ classification accuracy for the different variable combinations.
Figure 13. Comparisons of the UFZ classification results of different experiments. Exps. a and e are the UFZ classification results without 3D USPs; Exps. d and g are the UFZ classification results with 3D USPs.
Table 1. Multi-feature extraction for land cover mapping.
Category | Feature | Description | References
Spectral feature | Spectral information | Red Band (BR), Green Band (BG), Blue Band (BB), and Near-Infrared Band (BNIR) | [29]
 | Normalized Difference Vegetation Index (NDVI) | NDVI = (BNIR − BR)/(BNIR + BR) | [30]
 | Ratio Vegetation Index (RVI) | RVI = BNIR/BR | [31]
 | Difference Vegetation Index (DVI) | DVI = BNIR − BR | [32]
 | Normalized Difference Water Index (NDWI) | NDWI = (BNIR − BG)/(BNIR + BG) | [33]
 | Meani | Average spectral value of pixels in an object of each layer (i refers to different spectral bands) | [11]
 | Brightness | The average value of the Meani of the image objects |
 | Ratio | The ratio of the Meani to the sum of Meani of the image objects |
 | Mean diff. to neighbor (Mean. diff.) | The difference between the layer average value and its adjacent objects |
 | Standard Deviation (Std. Dev) | Gray standard deviation of pixels in an object of each layer |
Textural feature | Angular Second Moment | The angular second moment derived from GLCM and GLDV, respectively | [34]
 | Variance | The variance derived from GLCM |
 | Contrast | The contrast derived from GLCM and GLDV, respectively |
 | Entropy | The entropy derived from GLCM and GLDV, respectively | [35]
 | Energy | The energy derived from GLCM |
 | Correlation | The gray correlation derived from GLCM | [36]
 | Inverse Differential Moment | The inverse differential moment derived from GLCM |
 | Dissimilarity | The heterogeneity parameters derived from GLCM | [37]
 | Homogeneity | The homogeneity derived from GLCM |
Geometrical feature | Area | The area of image objects | [13]
 | Border Length | The perimeter of image objects |
 | Length/Width | The length-width ratio of the image object's minimum bounding rectangle (MBR) |
 | Compactness | The ratio of the area of the object's MBR to the number of pixels within the image object | [38]
 | Asymmetry | The ratio of the short axis to the long axis of an approximate ellipse of the image object |
 | Border Index | The ratio of the perimeter of the image object to the perimeter of the object's MBR |
 | Density | The ratio of area to radius of image objects | [11]
 | Elliptic Fit | The fitting degree of the ellipse fit |
 | Main Direction | Eigenvectors of the covariance matrix of image objects |
 | Shape Index | The ratio of the perimeter to four times the side length |
3D USP | Digital Surface Model (DSM, Figure 2c) | DSM was produced using an interpolation algorithm (i.e., binning approach) with all points | [39]
 | Sky View Factor (SVF, Figure 2d) | The visible degree of sky at the ground level; values vary from 0 (sky not visible) to 1 (sky completely visible) | [40]
 | Flatness (details in Supplementary Material Figure S1) | Derived from the DSM; refers to the flatness of the non-ground points, which were generated using the "lasground" filter in LAStools | [14,41]
Table 2. Samples selected for the land cover mapping.
Category | Number of Training Samples | Number of Verification Samples | Number of Total Samples
Building | 5238 | 1379 | 6617
Tree | 1562 | 407 | 1969
Grass | 1443 | 329 | 1772
Soil | 756 | 224 | 980
Impervious ground | 7267 | 1854 | 9121
Water | 245 | 68 | 313
Table 3. Rules for the classification post-processing.
Confused Classes | Principles | Attributes | Rules a
Impervious ground and soil | Most of the soil and grass are spatially adjacent; likewise, impervious ground and buildings are spatially contiguous. | Relative border (RB), distance to grass (DG), and distance to building (DB) | Impervious ground → soil: DG = 0, DB > 0, and RB to nearest soil object > T1. Soil → impervious ground: DG > 0, DB = 0, and RB to nearest impervious ground object > T1.
Impervious ground and building | Buildings are always higher than impervious grounds. | Relative border (RB) and height (H) | Impervious ground → building: H > 0 and RB to nearest building object > T2. Building → impervious ground: H = 0 and RB to nearest impervious ground object > T2.
Tree and grass | Trees are always higher than grasses. | Relative border (RB) and height (H) | Tree → grass: H = 0 and RB to nearest grass object > T3. Grass → tree: H > 0 and RB to nearest tree object > T3.
a "Class 1 → Class 2" indicates that Class 1 is reclassified as Class 2 when the listed conditions are satisfied.
Table 4. Multi-feature extraction for UFZ mapping.
Category | Feature | Description | References
2D USP | Building coverage (BC) | Total building area divided by block area | [61]
 | Tree coverage (TC) | Total tree area divided by block area |
 | Grass coverage (GC) | Total grass area divided by block area | [14]
 | Soil coverage (SC) | Total soil area divided by block area |
 | Impervious surface coverage at ground level (ISC_G) | Total ground-level impervious surface area divided by block area |
 | Water coverage (WC) | Total water area divided by block area |
3D USP | Sky view factor (SVF) | Sky view factor influenced by buildings | [40]
 | Building height (BH) | The height of buildings | [62]
 | Street aspect ratio (SAR) | Average building height divided by street width | [61]
 | Floor area ratio (FAR) | Total building floor area divided by block area | [63]
Spatial pattern | Building Nearest Neighbor Index (BNNI) | The NNI value of buildings | [1]
 | Tree Nearest Neighbor Index (TNNI) | The NNI value of trees |
 | Grass Nearest Neighbor Index (GNNI) | The NNI value of grasses |
 | Soil Nearest Neighbor Index (SNNI) | The NNI value of soil lands | [60]
 | Impervious ground Nearest Neighbor Index (INNI) | The NNI value of impervious grounds |
 | Water Nearest Neighbor Index (WNNI) | The NNI value of water bodies |
Table 5. Experiments used for UFZ mapping.
Experiment | 2D USP | 3D USP | Spatial Pattern Feature
Exp. a | ✓ | |
Exp. b | | ✓ |
Exp. c | | | ✓
Exp. d | ✓ | ✓ |
Exp. e | ✓ | | ✓
Exp. f | | ✓ | ✓
Exp. g | ✓ | ✓ | ✓
Table 6. Comparisons of the performances of RF, KNN, and LDA algorithms in land cover mapping. OA, PA, and UA represent overall accuracy, producer’s accuracy, and user’s accuracy, respectively.
Category | RF PA/UA (%) | KNN PA/UA (%) | LDA PA/UA (%)
Building | 85.4/88.6 | 75.1/74.3 | 74.8/67.8
Tree | 82.1/88.6 | 74.9/78.0 | 72.7/77.1
Grass | 86.6/78.5 | 80.2/75.4 | 79.6/77.7
Soil | 84.4/84.0 | 79.5/82.0 | 71.9/86.1
Impervious ground | 90.3/88.1 | 78.5/78.7 | 72.9/76.2
Water | 92.6/96.9 | 86.8/96.7 | 75.0/92.7
OA | 87.4 | 77.4 | 74.0
Table 7. Comparison of land-cover classification by using 3D USPs against those by using only 2D features.
Category | RF PA with/without 3D (%) | KNN PA with/without 3D (%) | LDA PA with/without 3D (%) | Average Increase (%)
Building | 85.4/80.6 | 75.1/73.5 | 74.8/71.4 | 3.3
Tree | 82.1/75.2 | 74.9/67.3 | 72.7/67.1 | 6.7
Grass | 86.6/81.5 | 80.2/77.8 | 79.6/76.3 | 3.6
Soil | 84.4/81.7 | 79.5/77.7 | 71.9/68.3 | 2.7
Impervious ground | 90.3/89.5 | 78.5/76.8 | 72.9/69.7 | 1.9
Water | 92.6/91.2 | 86.8/83.8 | 75.0/73.5 | 2.0
OA | 87.4/84.3 | 77.4/75.1 | 74.0/70.5 | 3.0
Table 8. A comparison of using RF, KNN, and LDA algorithms in UFZ mapping.
Category | RF PA/UA (%) | KNN PA/UA (%) | LDA PA/UA (%)
Commercial zone | 89.7/78.8 | 72.4/61.8 | 75.9/73.3
Residential zone | 94.6/96.6 | 79.3/84.9 | 89.1/89.1
Industrial zone | 89.8/93.6 | 73.5/70.6 | 85.7/84.0
Park zone | 87.5/88.2 | 68.8/73.3 | 75.0/85.7
OA | 91.9 | 75.8 | 84.9
Table 9. Comparison of existing methods for UFZ mapping.
Method | Data Source | Study Area | OA Value
HSC method [11] | VHR images (0.61 m) and POIs | Beijing, China | 90.8%
Bottom-up and top-down feedback method [1] | VHR images (0.5 m) | Beijing, China | 84.0%
Super object-CNN method [13] | High-resolution images (1.19 m) and POIs | Hangzhou, China | 91.1%
Integrating Landsat images and POIs [12] | Coarse-resolution images (30 m) and POIs | Beijing, China | 81.0%
Integrating nighttime light and multi-view imagery [64] | High-resolution images (5.8 m), VHR images (0.92 m), and POIs | Beijing, China and Wuhan, China | 89.6% (Beijing); 85.2% (Wuhan)
Our method | VHR images (0.3 m) and LiDAR data | Brooklyn, New York City, USA | 91.9%
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Share and Cite

Sanlang, S.; Cao, S.; Du, M.; Mo, Y.; Chen, Q.; He, W. Integrating Aerial LiDAR and Very-High-Resolution Images for Urban Functional Zone Mapping. Remote Sens. 2021, 13, 2573. https://doi.org/10.3390/rs13132573
