Technical Note

Urban Functional Zone Recognition Integrating Multisource Geographic Data

1
Key Laboratory of Geographical Processes and Ecological Security in Changbai Mountains, Ministry of Education, School of Geographical Sciences, Northeast Normal University, Changchun 130024, China
2
Urban Remote Sensing Application Innovation Center, School of Geographical Sciences, Northeast Normal University, Changchun 130024, China
3
Changchun Automobile Industry Institute, Changchun 130011, China
*
Author to whom correspondence should be addressed.
Remote Sens. 2021, 13(23), 4732; https://doi.org/10.3390/rs13234732
Submission received: 28 October 2021 / Revised: 16 November 2021 / Accepted: 20 November 2021 / Published: 23 November 2021
(This article belongs to the Section Urban Remote Sensing)

Abstract

As the basic spatial unit of urban planning and management, urban functional zones must be understood in terms of their real development trends so that planning can be adjusted in time and reasonably. Because of their complexity, the automatic recognition of urban functional zones has become a significant scientific problem in urban research. Urban functional zones have both natural and socioeconomic characteristics, but existing identification methods fail to consider these features comprehensively. This paper proposes a framework that integrates multisource geographic data to recognize urban functional zones. We used high-resolution remote sensing imagery, point-of-interest (POI) data and high-spatial-resolution nighttime light imagery to extract both natural and socioeconomic features for accurate urban functional zone interpretation. The combined features provide a more accurate and comprehensive description of complex urban functional zones and thus improve recognition accuracy. At present, few studies have recognized urban functional zones by combining high-resolution remote sensing imagery, POI data and high-resolution nighttime light imagery, and the application potential of this combination remains to be explored. The experimental results show that recognition accuracy was clearly improved by combining the three data sources: the overall accuracy reached 80.30% and a comprehensive evaluation index reached 68.26%. This illustrates that combining the three data sources benefits urban functional zone recognition.

Graphical Abstract

1. Introduction

In the process of urban development, urban functional zones such as residential, industrial and commercial zones have gradually formed inside cities to meet the different living needs of urban residents [1]. These zones carry different human socioeconomic activities and reflect the characteristics of the city [2]; they are usually regarded as the basic division of the city and the basic spatial unit of urban planning and management [3,4]. Accurate urban functional zone recognition supports timely understanding of the real development trends of functional zones and corresponding planning adjustments, so as to optimize and improve the internal functional spatial structure of the city [5,6,7]. The results of urban functional zone recognition can also be used to study urban heat island effects, traffic congestion and air pollution [8,9]. In short, identifying urban functional zones precisely is of great significance for urban planning policy-making and urban environment perception [10].
High-spatial-resolution remote sensing images provide a detailed, large-area description of urban functional zones, so they are widely used in urban functional zone recognition [11,12]. Some methods extract visual features such as spectrum, texture, geometry and context from high-resolution remote sensing images and encode them to identify urban functional zones. Zhang et al. [13] presented a linear Dirichlet mixture model (LDMM) for decomposing urban scenes that can be used for urban functional zone recognition; experiments showed it to be more reliable and effective than the linear mixture model and scene classification techniques. Zhang et al. [14] proposed a hierarchical semantic cognition (HSC) structure for urban functional zone recognition, which relies on geographic cognition and differs from traditional classification methods. Zhang et al. [15] further proposed a method that explicitly considers top-down feedback. Other studies use multiscale segmentation to delineate urban functional zone boundaries from very high-resolution satellite images. Du et al. [16] calculated context features to measure the spatial context of objects in urban functional zones and used a scale-adaptive segmentation approach to determine appropriate boundaries. With the continuous development of artificial intelligence algorithms, deep learning has become one of the most powerful tools in computer vision, and urban functional zone recognition has benefited from deep learning methods [17]. Zhang et al. [18] proposed a convolutional neural network (CNN)-based functional zone classification method that divides each urban functional zone into patches and feeds them to a fully connected CNN. Zhou et al. [19] proposed a super object-convolutional neural network (SO-CNN)-based fine division method that combines the concepts of super object (SO) and CNN.
Urban functional zones have both natural and socioeconomic characteristics. High-resolution remote sensing images capture only the natural features, so using them alone loses the socioeconomic characteristics. To obtain these, various social sensing data are increasingly applied to urban functional zone identification. The social sensing data commonly used for this task include trajectory data, check-in data, mobile phone location data and point-of-interest (POI) data [20,21,22]. Some studies use topic models such as probabilistic latent semantic analysis (pLSA) [23], latent Dirichlet allocation (LDA) [24] and topical word embedding (TWE) [25] to construct topic vectors from these social sensing data; k-means, HDBSCAN and other clustering methods then identify clustering patterns in the topic vectors [26], and the resulting clusters are labeled with functions to recognize functional zones. Vanderhaegen [27] and Xing et al. [28,29] calculated landscape metrics in urban functional zones from social sensing data to describe urban form and function, then classified the metrics with random forest (RF) and other classifiers. Urban functional zones cannot be described comprehensively and accurately by natural or socioeconomic attributes alone [30,31]. To consider both, a growing number of studies combine high-resolution remote sensing data with various geographic big data to identify urban functional zones [32]. Xu et al. [33] established a multimode urban functional zone recognition framework by integrating remote sensing images and POI data. Cao et al. [34] developed an end-to-end deep-learning-based fusion model that combines multisource, multimodal remote and social sensing data for urban functional zone recognition. Bao et al. [35] determined urban functional zones using buildings extracted from remote sensing data and social functional semantics extracted from POI data.
Combining high-resolution remote sensing images with social sensing data is an effective way to understand urban functional zone patterns and recognize zones quickly and accurately [36]. However, a single kind of social sensing data describes the socioeconomic characteristics of urban functional zones at only one level and cannot represent them comprehensively. To reflect these characteristics comprehensively, multidimensional socioeconomic information must be captured from multiple data sources, avoiding the loss of urban functional zone attributes caused by single-source data. Unlike daytime remote sensing, nighttime light imagery detects low levels of light from the earth at night and directly reflects changes in the intensity of human socioeconomic activity [37]. In particular, high-resolution nighttime light imagery shows detailed lighting information inside the city and reflects the overall intensity of human socioeconomic activity, which brings a new opportunity for intra-urban research [38]. Huang et al. [39] showed that the human-activity-related light information captured by nighttime light remote sensing can add useful information to urban functional zone interpretation. Therefore, this paper combines high-resolution nighttime light imagery and POI data to extract socioeconomic information about urban functional zones, together with natural features extracted from high-resolution remote sensing imagery, to comprehensively characterize complex urban functional zones. Few studies have integrated POI data, high-resolution nighttime light imagery and high-resolution remote sensing imagery for urban functional zone recognition. Urban functional zone recognition using POI data or nighttime light imagery alone appears in the current literature [40], but the two have not been used together for this task. Both data sources represent socioeconomic characteristics of urban functional zones, and whether their combination is more effective remains to be explored.

2. Study Area and Datasets

2.1. Study Area

The study area, as shown in Figure 1, covers approximately 93 square kilometers in the center of Changchun, China, and its functional categories are complex and comprehensive, including government agencies, campuses, technology companies, business centers, residential zones, and so on. The complexity of functional categories in the study area is conducive to generalizing this research method to other areas.

2.2. Datasets

The high-resolution remote sensing imagery and high-resolution nighttime light imagery used in this study were acquired by the GF-2 and JL1-07 satellites, respectively. Table 1 shows the characteristics of the image datasets. The multispectral bands and panchromatic band of the GF-2 image were merged to produce a four-band pan-sharpened image at 0.81 m resolution, and the JL1-07 high-resolution nighttime light imagery was resampled to 0.81 m. A total of 44,730 POIs in the study area were collected in 2018 from the Gaode map, sorted into 14 major classes and 134 medium classes.
Road networks are often used to determine urban functional zone boundaries and have achieved good results [41,42]. In this study, road vectors were used to segment the study area, yielding road-based plots that represent the spatial units of urban functional zones. After removing the small blocks enclosed by circular overpasses, which are too small to carry a socioeconomic function, a total of 820 blocks were retained for subsequent analysis.
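As a rough illustration, the block-filtering step described above can be sketched as follows; the dictionary layout and the area threshold are hypothetical, not values taken from the paper:

```python
def filter_blocks(blocks, min_area):
    """Keep only road-enclosed blocks large enough to carry a
    socioeconomic function (dropping e.g. slivers inside circular
    overpasses). `min_area` (m^2) is a hypothetical threshold."""
    return [b for b in blocks if b["area"] >= min_area]
```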

3. Methods

3.1. Urban Functional Zone Feature Extraction

3.1.1. Extraction of Spectral Feature

Ground objects are the basic components of urban functional zones, and the object composition of different zones varies. For example, commercial zones are mostly composed of buildings, while parks and green spaces are mostly composed of vegetation. Classifying the objects in urban functional zones is the basis for identifying regional functions, and spectral features in high-resolution remote sensing images are the basic clues for identifying object categories in a functional zone. To extract urban functional zone spectral features simply and efficiently, the bag-of-visual-words (BOVW) model can be used [43,44]. The BOVW model represents an urban functional zone through mid-level features obtained by coding low-level spectral features. It defines a patch of an urban functional zone as a "visual word" and the entire zone as a bag of visual words. Each visual word contains local features of the urban functional zone, and the global spectral features of the zone can be quantified by the occurrence frequencies of the visual words. The detailed steps of BOVW are as follows:
  • Patch generation. To construct the BOVW model, each urban functional zone block was partitioned into a group of overlapping patches of size N × N m². A patch was assigned to a functional block when more than 80% of its area fell inside that block; otherwise it was excluded. In this study, N was set to 32 m and the overlap between adjacent patches was 24 m.
  • Patch-level spectral feature description. The histograms of the four multispectral bands were used to describe the spectral features of each patch [39]. Each image band was represented by a 32-bin histogram, yielding a 128-dimensional (32 × 4) spectral feature vector per patch.
  • Visual words generation. K-means unsupervised learning algorithm was used to cluster the patch-level features and generate visual words. Here, we set the k-value to 128, and acquired 128 visual words.
  • Urban functional zone spectral feature construction. We assigned each image patch to its nearest visual word according to the Euclidean distance. Each functional zone block can then be encoded as a frequency histogram over the 128 visual words, computed from the overlapping patches that belong to it.
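The last two steps (assigning patches to their nearest visual word and building the zone-level frequency histogram) can be sketched in Python; the tiny codebook here stands in for the 128 k-means centroids and is purely illustrative:

```python
import math

def assign_word(patch_vec, codebook):
    """Return the index of the nearest visual word (Euclidean distance)."""
    best, best_d = 0, float("inf")
    for k, word in enumerate(codebook):
        d = math.dist(patch_vec, word)
        if d < best_d:
            best, best_d = k, d
    return best

def bovw_histogram(patch_vecs, codebook):
    """Encode a functional-zone block as a normalized frequency
    histogram over the visual words its patches map to."""
    hist = [0.0] * len(codebook)
    for v in patch_vecs:
        hist[assign_word(v, codebook)] += 1.0
    total = sum(hist) or 1.0
    return [h / total for h in hist]
```

Because the histogram is normalized by the number of patches, blocks of different shapes and sizes yield comparable feature vectors, matching the property noted below.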
The above BOVW-based approach effectively represents the global features of the ground objects in an urban functional zone. At the same time, because the frequency histogram is not limited by shape or size, the method is suitable for describing urban functional zones of various shapes and sizes.

3.1.2. Extraction of Spatial Pattern Features

Urban functional zones are complex spatial units composed of various types of ground objects, and some zones are similar in object composition. For example, both residential and commercial zones consist of buildings, vegetation and impervious surfaces. Zones with similar object composition show little difference in spectral features, so it is difficult to distinguish their categories by spectral features alone. However, their spatial object patterns often differ: buildings in residential zones tend to be well laid out and similar in size and structure, while buildings in commercial zones are usually varied in structure and irregular in shape, so the two can be easily distinguished by spatial patterns. The spatial pattern of diverse objects is therefore an important feature for distinguishing urban functional zones. Window-independent context (WIC) features present spatial pattern information for each individual pixel [45]. In this paper, we use WIC features to define the spatial pattern feature of each urban functional zone. The procedure for obtaining WIC features is as follows:
  • Firstly, we performed the ISODATA unsupervised spectral classification to divide the multispectral satellite image into 20 classes.
  • Secondly, we calculated the nearest-neighbor distance between each pixel and each of the 20 spectral classes. The 20 distances constitute the contextual feature vector of pixel q, d̄_q = (d_q1, d_q2, …, d_qi, …, d_qN) with N = 20, where d_qi is the shortest distance between pixel q and spectral class S_i, calculated as the minimum Euclidean distance between pixel q and all pixels x_i belonging to S_i.
  • Finally, urban functional zone WIC feature calculation was performed. The WIC feature of each functional zone block is the average of the context characteristics of all the pixels within it. The contextual feature vector of a pixel describes the contextual surroundings for each individual pixel. Additionally, the WIC features computed from the contextual feature vector of pixels describe the spatial pattern of objects in the urban functional zone.
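A minimal, brute-force sketch of this computation, assuming pixels are given as (row, col) coordinates with ISODATA class labels and every spectral class has at least one member (the authors' actual implementation is not specified):

```python
import math

def wic_feature(pixels, labels, n_classes):
    """Window-independent context (WIC) sketch: for every pixel, the
    shortest Euclidean distance to each spectral class; the block-level
    WIC feature is the per-class average over all pixels.
    pixels: list of (row, col); labels: parallel list of class ids."""
    by_class = [[] for _ in range(n_classes)]
    for p, c in zip(pixels, labels):
        by_class[c].append(p)
    per_pixel = []
    for q in pixels:
        # nearest-neighbor distance from q to each spectral class
        per_pixel.append([min(math.dist(q, m) for m in members)
                          for members in by_class])
    n = len(pixels)
    return [sum(vec[i] for vec in per_pixel) / n for i in range(n_classes)]
```

For real imagery a spatial index (e.g. a KD-tree) would replace the brute-force minimum, but the logic is the same.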

3.1.3. Extraction of Socioeconomics Features from POI

Urban functional zones include not only natural attributes such as ground object composition and spatial pattern, but also socioeconomic attributes driven by human activities. To identify urban functional zones accurately, natural attributes must be combined with socioeconomic attributes to fully reveal their characteristics. POIs provide information about human activities and socioeconomic conditions, which can be used to model the socioeconomic characteristics of urban functional zones. For POI data, the frequency density (FD) and the term frequency-inverse document frequency (TF-IDF) indicator [41] were used to quantify socioeconomic characteristics. The FD of the ith POI category in region r is calculated as v_i = N_i / A_r, where N_i is the number of POIs belonging to class i and A_r is the area of region r. The POI frequency density feature vector of urban functional zone r is denoted x_r = (v_1, v_2, …, v_f), where f is the number of POI categories.
For a given urban functional zone, we formulate a POI vector (w_1, w_2, …, w_f), where w_i is the TF-IDF value of the ith POI category, calculated as follows:
w_i = (n_i / N) × log(R / q_i)
where n_i is the number of POIs belonging to the ith category in this urban functional zone, N is the total number of POIs located in it, R is the total number of urban functional zone blocks, and q_i is the number of blocks that contain the ith POI category.
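The FD and TF-IDF indicators can be sketched as follows, assuming each zone is represented simply as a list of POI category labels and that the zone being scored is one of the R blocks:

```python
import math

def frequency_density(zone_pois, area, categories):
    """FD: count of each POI category in the zone divided by zone area."""
    return [sum(1 for c in zone_pois if c == cat) / area for cat in categories]

def poi_tfidf(zone_pois, all_zones, categories):
    """TF-IDF weight per POI category for one zone: (n_i / N) * log(R / q_i),
    where R is the number of blocks and q_i the number of blocks
    containing category i. Categories absent from the zone get weight 0."""
    R = len(all_zones)
    N = len(zone_pois)
    weights = []
    for cat in categories:
        n_i = sum(1 for c in zone_pois if c == cat)
        q_i = sum(1 for z in all_zones if cat in z)
        weights.append((n_i / N) * math.log(R / q_i) if n_i else 0.0)
    return weights
```

Categories that occur in every block receive log(R/R) = 0, so, as with text TF-IDF, ubiquitous POI types contribute nothing to distinguishing zones.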

3.1.4. Extraction of Socioeconomics Features from Nighttime Light Imagery

High-resolution nighttime light imagery records the night radiance caused by human activities; it reflects nighttime human activity and socioeconomic attributes and can be used to capture the socioeconomic features of urban functional zones. The quartiles of brightness were calculated in each urban functional zone to represent its socioeconomic features. The brightness (BR) of each pixel in the high-resolution nighttime light imagery is calculated as follows [46]:
BR = 0.2989 × R + 0.5870 × G + 0.1140 × B
where R, G and B are the radiance of red, green and blue bands of high-resolution nighttime light imagery, respectively.
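A small sketch of this feature, assuming per-pixel RGB radiance values; the exact quartile convention used in the paper is an assumption (Python's statistics.quantiles with its default exclusive method is used here):

```python
import statistics

def brightness(r, g, b):
    """Luminance-weighted brightness of one nighttime-light pixel
    (the BR formula above)."""
    return 0.2989 * r + 0.5870 * g + 0.1140 * b

def zone_brightness_quartiles(pixels_rgb):
    """Q1, median and Q3 of per-pixel brightness within one zone block."""
    vals = [brightness(r, g, b) for r, g, b in pixels_rgb]
    return statistics.quantiles(vals, n=4)
```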

3.2. Urban Functional Zone Classification

Five feature sets were extracted from multisource geographic data to describe urban functional zones from multiple perspectives; these features are the data basis for functional zone identification. In addition to accurate and comprehensive feature extraction, an effective classifier is also important for recognition accuracy. In this paper, the extreme gradient boosting (XGBoost) algorithm is used as the urban functional zone classifier [47]. XGBoost is an ensemble learning algorithm that trains multiple weak learners and combines them, yielding more accurate predictions than a single classifier, with stronger generalization ability and robustness. Its support for shrinkage and column subsampling effectively prevents overfitting and reduces computation, and it handles sparse data efficiently, which makes it suitable for urban functional zone recognition. The framework of urban functional zone classification based on multisource geographic data is shown in Figure 2.
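The feature assembly feeding the classifier can be sketched as below; the group ordering is an assumption, and the dimensions in the test follow values given in the text (128 visual words, 20 spectral classes, 14 major POI classes, 3 brightness quartiles):

```python
def build_zone_feature_vector(spectral, spatial, fd, tfidf, br):
    """Concatenate the five per-zone feature groups into one input
    vector for the classifier (the ordering here is an assumption)."""
    return list(spectral) + list(spatial) + list(fd) + list(tfidf) + list(br)

# With the xgboost package installed, training might look like
# (max_depth and eta as reported in Section 4.1):
#   import xgboost as xgb
#   clf = xgb.XGBClassifier(max_depth=3, learning_rate=0.01)
#   clf.fit(X_train, y_train)
```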

4. Results

4.1. Experiments and Settings

This paper divided the research area into six functional categories: commercial zone, residential zone, industrial zone, parks and green spaces, public service zone and others. The actual functional categories of the samples were determined through manual analysis of high-resolution remote sensing images and internet maps, as well as field investigations. A functional zone block may contain two or more zone types; we took the function with the largest area proportion in each block as its dominant function type. The urban functional zone samples were randomly divided into training and test sets in a fixed proportion, and both sets were organized as polygon files. The number of training and test samples for each functional zone is shown in Table 2. The five feature sets obtained above were fed into the XGBoost classifier for training and testing, and the hyperparameters max_depth (maximum tree depth) and eta (learning rate) of the XGBoost model were set to 3 and 0.01, respectively, in our experiment.
The overall accuracy (OA) and the accuracy proposed by Papa (AccP) [48] were used to evaluate urban functional zone classification performance. Like the kappa index, AccP is a comprehensive evaluation index, but it is stricter. When category sizes in a dataset vary greatly, a classifier tends to assign labels to the larger classes, resulting in a high misclassification rate for smaller classes and a low kappa coefficient for the final result. AccP takes the size of each category in the test set into account to avoid low apparent accuracy caused by unbalanced datasets. AccP is therefore more appropriate for evaluating the classification results in this study, since the number of zones of each type in the study area varies greatly.

4.2. Recognition Results

Different combinations of geographic data sources were designed for urban functional zone recognition. First, high-resolution remote sensing images were combined with POI data; then the high-resolution remote sensing data, POI data and high-resolution nighttime light imagery were combined together. The recognition results with different data sources are shown in Table 3. Overall, the accuracy using only high-resolution remote sensing imagery was the lowest, the accuracy improved slightly after adding POI data, and the highest accuracy was obtained by combining all three sources. The recognition accuracy with the three fused data sources reached 80.30%, 4.17% higher than with high-resolution remote sensing imagery alone. The results illustrate that combining high-resolution remote sensing imagery, POI data and high-resolution nighttime light imagery improves the recognition accuracy of urban functional zones. They also indicate that, although both nighttime light imagery and POI data capture the socioeconomic characteristics of functional zones, they record them from different aspects and contain different socioeconomic information, so neither can fully replace the other.

5. Discussion

5.1. The Block Size Parameter Setting

When constructing the spectral features of urban functional zones with the BOVW model, the high-resolution remote sensing imagery must be divided into patches, and the patch size is a variable parameter. If the patch size is set too small, a patch contains too little information to extract object and context information from it; conversely, if it is set too large, the detailed structural characteristics of urban functional zones may be ignored. To determine an appropriate patch size for urban functional zone recognition, different sizes were tested and the suitable value was selected by comparing the identification results. As shown in Table 4, patch sizes of 16 × 16 m², 32 × 32 m², 48 × 48 m² and 64 × 64 m² were tested; the 32 × 32 m² and 16 × 16 m² settings obtained the highest OA and AccP, respectively. Considering that smaller patch sizes increase computational complexity, the patch size was finally set to 32 × 32 m² in this paper.

5.2. Comparison of Different Combination

5.2.1. Recognition Results Based on High-Resolution Remote Sensing Images and POI Data Combination

The accuracy of urban functional zone recognition based on high-resolution remote sensing images alone was relatively low, because such images cannot capture the socioeconomic characteristics of urban functional zones, and zones are easily confused when only natural characteristics are used. As shown in Table 5, zones with obvious natural characteristics, such as parks and green spaces, are extracted well directly from high-resolution remote sensing images, while zones with obvious socioeconomic characteristics, such as commercial zones and public service zones, are difficult to classify correctly from remote sensing images alone, with a low proportion of correct identifications. As shown in Table 6, the identification of residential, commercial and public service zones improved when high-resolution remote sensing images were combined with POI data. This suggests that the socioeconomic characteristics extracted from POI data contribute to the correct identification of urban functional zones.

5.2.2. Recognition Results Based on Remote Sensing Image, POI Data and Nighttime Light Imagery Combination

The best recognition effect was obtained by combining the three data sources. As shown in Table 7, residential, commercial and public service zones are identified better than with the two-source combination, and residential zones and parks and green spaces reach high identification accuracy. Some public service and commercial zones are wrongly classified as residential zones: careful analysis showed that some administrative agencies have family dormitories (supporting facilities) within their districts, and some commercial zones also contain residential buildings, which leads to the incorrect division of public service and commercial zones. Due to the small number of samples for industrial zones and the "others" category, the classifier lacked sufficient training samples to learn the features of these two categories, resulting in low recognition accuracy for them with all data source combinations.
The detailed recognition results of the different data sources are shown in Figure 3. The combination of three data sources has the best recognition effect in Figure 3a–c. The black ellipse contains a residential zone that was correctly classified by the three-source combination (Figure 3c) but misclassified as public service by the remote sensing image alone and by its combination with POI data (Figure 3a,b). The red ellipse in Figure 3c shows a commercial zone correctly identified by the three-source combination, which the remote sensing image alone confused with a residential zone (Figure 3a) and the combination with POI data confused with a public service zone (Figure 3b). The commercial zone inside the orange ellipse in Figure 3a was correctly identified by the remote sensing image alone, wrongly assigned to a residential zone after POI data were added, and corrected back to a commercial zone after the nighttime light imagery was combined.

5.3. Comparison of Different Feature

Five features of urban functional zones were extracted for recognition in this paper. Figure 4 ranks their importance. The total importance of the socioeconomic features is 0.46, the importance of the spectral feature is 0.37, and the importance of the spatial pattern feature is 0.17. The ranking indicates that urban functional zone recognition results from the joint action of spectral, spatial pattern and socioeconomic features, and that each feature matters. Among the three socioeconomic features, TF-IDF contributes the most, followed by FD, with BR contributing the least; this explains why the last two models reported in Table 3 achieve similar accuracies.

5.4. Limitations of the Proposed Method

Although the combination of high-resolution remote sensing imagery, POI and high-resolution nighttime light imagery has achieved the expected results and improved the accuracy of urban functional zone recognition, limitations still exist in the proposed method. POI data mostly occur in areas with more human activities, and high-resolution nighttime light imagery can only record nighttime human socioeconomic activities. Therefore, there is still a certain deviation in using POI and high-resolution nighttime light imagery to characterize the socioeconomic characteristics in urban functional zones. Moreover, high-resolution nighttime light imagery contains other information, such as the spatial pattern of lights, in addition to the brightness information used in this paper. We can explore using other information contained in high-resolution nighttime light imagery to further improve the contribution of nighttime light imagery in urban functional zone recognition.

6. Conclusions

In this paper, an urban functional zone recognition framework based on multisource geographic data is proposed by combining high-resolution remote sensing image, POI data and high-resolution nighttime light image. In this framework, natural features of urban functional zones are extracted from high-resolution remote sensing images; socioeconomic features are extracted from POI and high-resolution nighttime light imagery. Utilizing a variety of features, the characteristics of complex urban functional zones can be described more accurately and comprehensively. Finally, these features are fed into XGBoost classifier to classify urban functional zones. The complementary advantages of various geographical data can improve the recognition accuracy of urban functional zones. In the future, we will further study using light detection and ranging (LiDAR) data to extract three-dimensional urban information for urban functional zone recognition.

Author Contributions

Methodology, S.C.; Writing—Original Draft Preparation, H.Z.; Writing—Review & Editing, H.Y. All authors have read and agreed to the published version of the manuscript.

Funding

This paper was supported by the Education Department of Jilin Province (Grant No. JJKH20211291KJ), the National Natural Science Foundation of China (Grant No. 41771450, 42071359), Jilin Provincial Science and Technology Development Project (Grant No. 20190103151JH, 20210101101JC), the Fundamental Research Funds for the Central Universities (Grant No. 2412019BJ001, 2412020FZ004, 2412019FZ002).

Conflicts of Interest

The authors declare no conflict of interest.

Figure 1. The study area and datasets: (a) the study area pan-sharpened GF-2 multispectral image; (b) the nighttime light image acquired by JL1-07.
Figure 2. Framework of urban functional zone recognition.
Figure 3. Recognition results of urban functional zones in study area using (a) only remote sensing image; (b) combination of remote sensing image and POI; (c) combination of remote sensing image, POI and nighttime light image.
Figure 4. Importance ranking of features.
Table 1. Characteristics of different image datasets.

| Characteristics | High-Resolution Remote Sensing Imagery | High-Resolution Nighttime Light Imagery |
|---|---|---|
| Sensor | Gaofen-2 | Jilin1-07 |
| Spatial resolution (m) | Panchromatic: 1 (subastral point 0.81); Multispectral: 4 (subastral point 3.24) | 0.92 |
| Date acquired | 16 June 2019 | 25 April 2018 |
| Bands (nm) | Panchromatic: 450–900; Blue: 450–520; Green: 520–590; Red: 630–690; Infrared: 770–890 | Camera1-Blue: 426–546; Camera1-Green: 494–598; Camera1-Red: 584–738; Camera2-Blue: 424–512; Camera2-Green: 490–588; Camera2-Red: 582–730 |
| Data source link | http://www.cresda.com/CN/sjfw/zxsj/index.shtml/, accessed on 17 November 2021 | https://mall.charmingglobe.com/Sampledata/, accessed on 17 November 2021 |
Table 2. Sample number of training and test set for each functional zone in study area.

| Categories | Training | Test |
|---|---|---|
| Residential zones | 105 | 423 |
| Industrial zones | 3 | 4 |
| Commercial zones | 46 | 69 |
| Parks and green spaces | 19 | 30 |
| Public service | 45 | 69 |
| Others | 3 | 4 |
| Total | 221 | 599 |
Table 3. Accuracy of recognition results using different data sources. Numbers in bold indicate the highest accuracy.

| WIC | BOVW | FD | TF-IDF | BR | OA | AccP |
|---|---|---|---|---|---|---|
| | | | | | 0.7245 | 0.6587 |
| | | | | | 0.7179 | 0.6064 |
| | | | | | 0.7179 | 0.6404 |
| | | | | | 0.6895 | 0.6220 |
| | | | | | 0.6928 | 0.5350 |
| | | | | | 0.7613 | 0.6661 |
| | | | | | 0.7947 | 0.6796 |
| | | | | | **0.8030** | **0.6826** |
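The TF-IDF feature in Table 3 weights POI categories by how distinctive they are across zones. A minimal numpy sketch under the standard TF-IDF definition (the category counts below are made up for illustration):

```python
import numpy as np

# Rows: zones; columns: POI categories (e.g. residence, shop, school) -- toy counts.
counts = np.array([
    [30,  2,  1],   # zone dominated by residential POIs
    [ 2, 25,  0],   # zone dominated by shops
    [ 5,  5, 10],   # mixed zone with many schools
], dtype=float)

tf = counts / counts.sum(axis=1, keepdims=True)   # term frequency within each zone
df = (counts > 0).sum(axis=0)                     # number of zones containing each category
idf = np.log(counts.shape[0] / df)                # inverse document frequency
tfidf = tf * idf                                  # one TF-IDF vector per zone
```

Note that a category present in every zone gets an IDF of zero under this plain definition, which is why smoothed IDF variants are common in practice.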
Table 4. Classification accuracies using different patch sizes with high-resolution remote sensing imagery, POI and high-resolution nighttime light imagery combination. Numbers in bold indicate the highest accuracy.

| Patch Size (m²) | 16×16 | 32×32 | 48×48 | 64×64 |
|---|---|---|---|---|
| OA | 0.7997 | **0.8030** | 0.8013 | 0.8013 |
| AccP | **0.6891** | 0.6826 | 0.6865 | 0.6842 |
Table 5. Confusion matrix of the classification result based on high-resolution remote sensing images. R: residential zones; I: industrial zones; C: commercial zones; P1: parks and green spaces; P2: public service zones; O: others. Rows are test (reference) classes; columns are target (predicted) classes.

| Test \ Target | R | I | C | P1 | P2 | O |
|---|---|---|---|---|---|---|
| R | 0.874 | 0.0 | 0.043 | 0.007 | 0.076 | 0.0 |
| I | 0.5 | 0.0 | 0.0 | 0.0 | 0.5 | 0.0 |
| C | 0.45 | 0.0 | 0.463 | 0.0 | 0.087 | 0.0 |
| P1 | 0.1 | 0.0 | 0.1 | 0.7 | 0.067 | 0.033 |
| P2 | 0.435 | 0.0 | 0.087 | 0.0 | 0.478 | 0.0 |
| O | 0.25 | 0.0 | 0.0 | 0.25 | 0.5 | 0.0 |
Table 6. Confusion matrix of the classification result based on the combination of high-resolution remote sensing images and POI data. Abbreviations as in Table 5.

| Test \ Target | R | I | C | P1 | P2 | O |
|---|---|---|---|---|---|---|
| R | 0.91 | 0.0 | 0.033 | 0.005 | 0.05 | 0.002 |
| I | 0.0 | 0.0 | 0.0 | 0.0 | 1.0 | 0.0 |
| C | 0.38 | 0.0 | 0.49 | 0.01 | 0.12 | 0.0 |
| P1 | 0.067 | 0.0 | 0.1 | 0.633 | 0.167 | 0.033 |
| P2 | 0.333 | 0.0 | 0.116 | 0.0 | 0.551 | 0.0 |
| O | 0.5 | 0.0 | 0.0 | 0.25 | 0.25 | 0.0 |
Table 7. Confusion matrix of the classification result based on the combination of remote sensing image, POI and nighttime light image. Abbreviations as in Table 5.

| Test \ Target | R | I | C | P1 | P2 | O |
|---|---|---|---|---|---|---|
| R | 0.92 | 0.0 | 0.028 | 0.002 | 0.047 | 0.002 |
| I | 0.0 | 0.0 | 0.0 | 0.0 | 1.0 | 0.0 |
| C | 0.362 | 0.0 | 0.522 | 0.014 | 0.101 | 0.0 |
| P1 | 0.067 | 0.0 | 0.1 | 0.633 | 0.167 | 0.033 |
| P2 | 0.33 | 0.0 | 0.13 | 0.0 | 0.54 | 0.0 |
| O | 0.5 | 0.0 | 0.0 | 0.25 | 0.25 | 0.0 |
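As a consistency check, the reported overall accuracy of 0.8030 can be approximately recovered from the row-normalized matrix in Table 7 together with the test-set sizes in Table 2, by weighting each class's diagonal recall by its sample count:

```python
import numpy as np

# Per-class recall = diagonal of Table 7; test counts per class from Table 2.
recall = np.array([0.92, 0.0, 0.522, 0.633, 0.54, 0.0])   # R, I, C, P1, P2, O
counts = np.array([423, 4, 69, 30, 69, 4])                 # test samples per class

oa = (recall * counts).sum() / counts.sum()
print(round(oa, 4))   # prints 0.8037, close to the reported 0.8030 (matrix entries are rounded)
```

The small discrepancy comes from the three-decimal rounding of the matrix entries; the same calculation on Tables 5 and 6 recovers roughly 0.76 and 0.79, matching the single-source and two-source rows of Table 3.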
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
