Article

The Influence of CLBP Window Size on Urban Vegetation Type Classification Using High Spatial Resolution Satellite Images

1 School of Geomatics and Marine Information, Jiangsu Ocean University, Lianyungang 222002, China
2 Department of Systems Engineering and Engineering Management, City University of Hong Kong, Kowloon, Hong Kong, China
3 Department of Architecture and Civil Engineering, City University of Hong Kong, Hong Kong, China
4 Faculty of Social Science and Asia-Pacific Studies Institute, The Chinese University of Hong Kong, New Territories, Hong Kong, China
5 School of Marine Science, Nanjing University of Information Science and Technology, Nanjing 210044, China
* Author to whom correspondence should be addressed.
Remote Sens. 2020, 12(20), 3393; https://doi.org/10.3390/rs12203393
Submission received: 27 August 2020 / Revised: 10 October 2020 / Accepted: 14 October 2020 / Published: 16 October 2020
(This article belongs to the Special Issue Optical Remote Sensing Applications in Urban Areas)

Abstract

Urban vegetation can regulate the ecological balance, reduce the influence of urban heat islands, and improve human beings’ mental state. Accordingly, classification of urban vegetation types plays a significant role in urban vegetation research. This paper investigates various window sizes of completed local binary pattern (CLBP) texture features for classifying urban vegetation types from high spatial-resolution WorldView-2 images of areas in Shanghai (China) and Lianyungang (Jiangsu Province, China). Two study areas were selected to demonstrate the stability and universality of the different CLBP window textures. Using spectral information alone and spectral information combined with texture information, the imagery was classified by vegetation type using the random forest (RF) method. The results show that spectral information combined with CLBP window textures can achieve 7.28% greater accuracy than spectral information alone for urban vegetation type classification, with accuracy greater for single vegetation types than for mixed ones. The optimal CLBP window sizes for grass, shrub, arbor, shrub-grass, arbor-grass, and arbor-shrub-grass are 3 × 3, 3 × 3, 11 × 11, 9 × 9, 9 × 9, and 7 × 7, respectively. Furthermore, the optimal CLBP window size is determined by the roughness of the vegetation texture.


1. Introduction

Vegetation plays an important role in urban ecology, environment, and daily life. Urban vegetation maintains urban ecological balance, reduces the effects of urban heat islands, and improves the quality of the living environment [1,2,3,4,5,6,7,8]. In addition, it can produce social benefits [9], such as by reducing crime rates [10], improving social relationships [11,12], and boosting residential property values [13]. Accordingly, the study of urban vegetation is of great significance.
High-resolution remote sensing images are particularly suitable for urban vegetation type classification. Yu et al. [14] noted better classification accuracy with Digital Airborne Imagery System (DAIS) images than with medium-resolution images during detailed vegetation classification in Northern California. Zhang et al. [15] reported an accuracy of 87.71% in classifying five urban vegetation types in Nanjing (Jiangsu Province, China) using IKONOS images. Compared with medium-resolution satellite images, high-resolution images contain more detailed spatial information, which is instrumental in urban vegetation type classification.
Texture is the regular variation of pixel values in a digital image space. For vegetation images, textures are created by variations in the pattern, species, and density of vegetation, characteristics that are all closely related to vegetation type [16,17]. As a particularly important form of spatial information, textures have been used extensively to improve the accuracy of urban vegetation type classification based on high spatial-resolution remote sensing images [14,18]. The grey level co-occurrence matrix (GLCM) texture is the most widely used in vegetation classification. Yan et al. [19] used IKONOS images to extract urban grass information in Nanjing, Jiangsu Province, China, comparing the extraction accuracy of GLCM-Contrast, GLCM-Entropy, GLCM-Correlation, and GLCM-Angular-Second-Moment, and found the highest extraction accuracy, 90.56%, for GLCM-Contrast. Pu and Landry [20] used WorldView-2 and IKONOS images with six kinds of GLCM texture features to classify urban forest tree species in Tampa, Florida, through LDA and CART classification methods. Urban forest trees were divided into seven types, with the classification accuracy of WorldView-2 (2 m) exceeding that of IKONOS (4 m): the average highest classification accuracy was 67.22% for WorldView-2 and 53.67% for IKONOS.
Recently, the local binary pattern (LBP) has been reported to be more efficient than GLCM in remote sensing classification [21,22]. LBP, which encodes the gray value differences between a center pixel and its neighbors, has been used in remote sensing image classification [23,24,25]. Song et al. [21] compared LBP and GLCM for the classification of four land-cover types using IKONOS images and found that the overall accuracy for LBP was 4.42% higher than for GLCM. Similarly, Vigneshl and Thyagharajan [22] reported better overall accuracy for LBP than for traditional GLCM. The completed local binary pattern (CLBP) algorithm, an enhanced version of LBP [26], describes the texture information of remote sensing images more fully than LBP. Li et al. [27] investigated land-use classification using CLBP textures, achieving an overall accuracy of 93.3%. Wang et al. [28] employed CLBP textures to classify coastal wetland vegetation and achieved an overall accuracy of 85.38%. However, because CLBP textures have not been widely used to classify urban vegetation types, their potential for urban vegetation type classification needs further exploration.
Window size is an important parameter for CLBP textures and is closely related to the accuracy of image segmentation and classification [29,30]. Numerous studies have shown that no single window size can describe the landscape characteristics of all remotely sensed objects [31,32,33,34,35]. To achieve higher classification accuracy, the texture features of different objects should be analyzed using different window sizes. Studies using GLCM textures, for example, have found window size to be an important parameter in vegetation classification [36,37,38]. Yan et al. [19] used 3 × 3 and 5 × 5 GLCM window textures to improve grass extraction accuracy in Nanjing (Jiangsu Province, China) and found that the 3 × 3 window size produced better classification results. Fu and Lin [39] extracted loquat trees using 3 × 3, 5 × 5, 7 × 7, 9 × 9, and 11 × 11 GLCM window textures and found that the 7 × 7 window size produced the highest classification accuracy, 86.67%. They concluded that different window sizes affect the classification of different vegetation types differently.
Compared with GLCM, the optimal window size of CLBP textures has rarely been discussed for classification of urban vegetation types using high spatial-resolution remote sensing images. This study explores the impact of window sizes of CLBP texture extraction on classification of urban vegetation types.
The main experimental process of this paper, shown in Figure 1, comprises remote sensing image preprocessing, coding of CLBP window textures, image segmentation, classification, and accuracy analysis.
In summary, this introduction has described the functions of urban vegetation in cities, noted that high-resolution remote sensing images are well suited to the classification of urban vegetation types, and shown that texture features play an important role in improving classification results. As a texture feature, CLBP has great potential for the classification of urban vegetation types, and exploring the influence of CLBP textures with different window sizes on this classification is of considerable research value.

2. Study Area and Data

2.1. Study Area

Shanghai, one of China’s commercial capitals, has a subtropical maritime monsoon climate, abundant rainfall, advanced urban planning, and intelligent technology development, which have led to flourishing urban vegetation with a reasonable layout and scale. Urban vegetation construction in Shanghai is at the forefront in China.
Lianyungang, a coastal city in eastern China, in the northeast of Jiangsu Province, in the Huaihe River basin, has the climate characteristics of China’s south and north. Its economic level and population are representative of China as a whole.
The Shanghai study area was located at Shanghai Jiao Tong University (Minhang Campus), covering an area of 8.8 km2, at 121°25′30″–121°27′35″E and 31°0′55″–31°2′50″N (see Figure 2). The vegetation is mainly a mixture of evergreen broad-leaved forest, artificial shrubs, and grass; vegetation distribution and planning are reasonable, and vegetation growth is prosperous. The Lianyungang study area is near the Haizhou Municipal Government, covering an area of 9.38 km2, at 119°7′55″–119°9′55″E and 34°34′5″–34°35′35″N (see Figure 2). The vegetation types in this area are similar to those in Shanghai, although vegetation growth, planning, and management are poorer than in Shanghai. Urban vegetation planning and growth in Lianyungang are representative of China as a whole.

2.2. Data

Both study areas used WorldView-2 images acquired from April to May 2015, without clouds or deformation; the terrain of both study areas is flat. Accordingly, the images are suitable for remote sensing vegetation classification. The images of the study areas in Shanghai and Lianyungang comprise multispectral bands at 2 m resolution and a panchromatic band at 0.5 m resolution. Table 1 gives relevant information about the images (https://dg-cms-uploads-production.s3.amazonaws.com/uploads/document/file/98/WorldView2-DS-WV2-rev2.pdf) and Table 2 shows the projected and geographic coordinate systems of the images.
In data preprocessing, geometric correction and atmospheric correction were first conducted for each study area. Subsequently, the Gram-Schmidt (GS) method was used to fuse the four multispectral bands with the panchromatic band, producing an image with 0.5 m spatial resolution and multispectral information. CLBP texture features were extracted from the pan-sharpened images, and urban vegetation in the two study areas was classified using spectral information and textures.
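As an illustration only (not the authors' implementation), the following is a minimal component-substitution sketch in the spirit of Gram-Schmidt pan-sharpening, assuming the multispectral bands have already been resampled to the panchromatic grid; the function name, variable names, and placeholder data are hypothetical.

```python
import numpy as np

def gs_style_pansharpen(ms, pan):
    """Simplified Gram-Schmidt-style (component substitution) pan-sharpening sketch.

    ms  : (bands, H, W) multispectral array resampled to the pan grid
    pan : (H, W) panchromatic array
    Returns a (bands, H, W) pan-sharpened array.
    """
    ms = ms.astype(np.float64)
    pan = pan.astype(np.float64)

    # Simulated low-resolution pan band: mean of the MS bands (first GS component)
    sim_pan = ms.mean(axis=0)

    # Match the real pan band's statistics to the simulated pan before substitution
    pan_adj = (pan - pan.mean()) / (pan.std() + 1e-12) * sim_pan.std() + sim_pan.mean()

    # Per-band injection gains: covariance of each band with the simulated pan
    gains = np.array([np.cov(b.ravel(), sim_pan.ravel())[0, 1] for b in ms])
    gains /= sim_pan.var() + 1e-12

    # Inject the spatial detail of the pan band into every MS band
    return ms + gains[:, None, None] * (pan_adj - sim_pan)

# Example with placeholder data: four MS bands and one pan band
fused = gs_style_pansharpen(np.random.rand(4, 128, 128), np.random.rand(128, 128))
print(fused.shape)  # (4, 128, 128)
```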

2.3. Image Segmentation

Image segmentation models spatial relationships and dependencies by dividing a whole image into continuous and spatially independent objects [40]. The essence of segmentation is the clustering of pixels. In continuous iterative steps, similar pixels are merged into small objects that are themselves merged into larger objects [41], solving the salt-and-pepper problem [13]. Image segmentation makes full use of images’ spatial information [17] and has a high classification efficiency [40]. In this study, image objects are used to classify urban vegetation based on image segmentation.
The image was segmented using the multiresolution segmentation algorithm, a bottom-up region-merging technique [39]. This approach has three main parameters: scale, shape, and compactness [14]. The scale parameter sets the maximum allowed heterogeneity of an image object and thus determines the size of the segmented objects; the shape parameter determines the weight given to object shape relative to spectral homogeneity; and the compactness parameter determines the degree of compactness (fragmentation) of the segmented objects.
In this study, image segmentation had two steps: first, segmentation of the entire image; then, segmentation of the vegetation information. After many trials, the shape parameter was set to 0.2 and the compactness parameter to 0.6 in the first segmentation step. With the shape and compactness parameters fixed, the ESP (estimation of scale parameter) tool was used to help identify the optimal segmentation scale; the tool builds on the idea of the local variance of object heterogeneity within a scene, and peaks in its rate of change (ROC) graph indicate the scales at which the imagery is most appropriately segmented [42].
The most suitable ESP scale for distinguishing vegetation in detail was 52 (Figure 3). Based on the ESP result, a manual comparison (Table 3) was conducted, which identified the optimal scale parameter for the first segmentation as 50.
The second segmentation built on the first. Its optimal scale parameter was 80, with shape and compactness parameters of 0.2 and 0.4. In the first segmentation, vegetation information was mainly separated from other ground objects, so a relatively small segmentation scale was used to obtain more detailed vegetation information. In the second segmentation, too small a segmentation scale would make the vegetation prone to over-segmentation. Therefore, through ESP and manual testing, the optimal scale of the first segmentation was set to 50 and that of the second to 80.
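The multiresolution segmentation algorithm and the ESP tool belong to commercial object-based software and are not reproduced here. Purely as a rough, openly available stand-in for generating image objects from a pan-sharpened image, the sketch below uses SLIC superpixels from scikit-image; n_segments and compactness are only loose analogues of the scale and shape/compactness parameters, and the array and parameter values are illustrative assumptions.

```python
import numpy as np
from skimage.segmentation import slic

# Placeholder pan-sharpened image: 4 bands (R, G, B, NIR) scaled to [0, 1]
image = np.random.rand(256, 256, 4)

# More segments behaves like a smaller segmentation scale; higher compactness
# favours compact object shapes over spectral homogeneity.
objects = slic(image, n_segments=800, compactness=0.1,
               channel_axis=-1, start_label=1)

print("number of image objects:", objects.max())
```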

2.4. Feature Extraction

For this study, 19 features (seven spectral features and 12 CLBP texture features) were used in each vegetation classification experiment. The spectral features comprised the spectral values of the four bands (red, blue, green, and NIR), brightness, max.diff, and NDVI. The texture features comprised the CLBP_M, CLBP_S, and CLBP_C textures of each band (red, blue, green, and NIR); six different window sizes were tested. Table 4 gives a detailed description of the features.
In Formula (1), $w_k^B$ is the brightness weight of layer $k$, $K$ is the number of layers, and $\bar{c}_k(v)$ is the mean intensity of layer $k$ of image object $v$ (Trimble eCognition® Developer 9.0.1 User Guide, https://geospatial.trimble.com/ecognition-trial).

$$\mathrm{Brightness} = \frac{1}{w^B}\sum_{k=1}^{K} w_k^B\,\bar{c}_k(v), \qquad w^B = \sum_{k=1}^{K} w_k^B \tag{1}$$

In Formula (2), $i$ and $j$ are image layers, $\bar{c}(v)$ is the brightness of image object $v$, $\bar{c}_i(v)$ and $\bar{c}_j(v)$ are the mean intensities of layers $i$ and $j$ of image object $v$, $K_B$ is the set of layers with positive brightness weight, and $w_k$ is the layer weight (Trimble eCognition® Developer 9.0.1 User Guide, https://geospatial.trimble.com/ecognition-trial).

$$\mathrm{Max.diff} = \frac{\max_{i,j \in K_B}\left|\bar{c}_i(v) - \bar{c}_j(v)\right|}{\bar{c}(v)}, \qquad K_B = \{k \in K : w_k = 1\} \tag{2}$$
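As a minimal sketch of Formulas (1) and (2), assuming each image object is supplied as a (K, N) array of K spectral layers by N pixels (function names and placeholder data are hypothetical):

```python
import numpy as np

def brightness(object_pixels, weights=None):
    """Formula (1): weighted mean of the per-layer mean intensities of an object.
    object_pixels: (K, N) array, K spectral layers, N pixels of the object.
    weights: brightness weights w_k^B (default: all ones)."""
    K = object_pixels.shape[0]
    w = np.ones(K) if weights is None else np.asarray(weights, dtype=float)
    layer_means = object_pixels.mean(axis=1)      # mean intensity of each layer
    return (w * layer_means).sum() / w.sum()

def max_diff(object_pixels, weights=None):
    """Formula (2): largest absolute difference between the mean intensities of
    layers with weight 1, divided by the object brightness."""
    K = object_pixels.shape[0]
    w = np.ones(K) if weights is None else np.asarray(weights, dtype=float)
    layer_means = object_pixels.mean(axis=1)
    used = layer_means[w == 1]                    # layers in K_B
    return abs(used.max() - used.min()) / brightness(object_pixels, weights)

# Example: an object with 4 layers (R, G, B, NIR) and 500 pixels (placeholder data)
obj = np.random.rand(4, 500)
print(brightness(obj), max_diff(obj))
```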
CLBP textures were used as an improvement on and optimization of LBP, a nonparametric local texture description algorithm that is simple in principle, has low computational complexity, and is robust to illumination changes [25].
$$\mathrm{LBP}_{P,R} = \sum_{p=0}^{P-1} s(g_p - g_c)\,2^p, \qquad s(x) = \begin{cases} 1, & x \ge 0 \\ 0, & x < 0 \end{cases} \tag{3}$$

In Formula (3), $g_c$ is the gray value of the central pixel of the window, $g_p$ is the gray value of the $p$-th neighbor, $P$ is the number of neighbor pixels in the window, and $R$ is the size (radius) of the window [26].
As the original LBP does not make full use of image information, Guo et al. proposed CLBP [26], which comprises a sign component (CLBP_S), a magnitude component (CLBP_M), and a central pixel binary component (CLBP_C). Each neighborhood pixel value is compared with the central pixel value to obtain the corresponding sign value for CLBP_S and magnitude value for CLBP_M. CLBP_C is obtained by comparing the central pixel value with the global average pixel value. On the basis of these CLBP components (CLBP_S, CLBP_M, and CLBP_C), window sizes were varied to study the classification of urban vegetation types.
Different window sizes of CLBP textures were coded following the coding mode of LBP. In CLBP_S, a value of 1 was encoded as 1 and a value of −1 as 0; CLBP_S therefore corresponds to the original LBP in Formula (3), with its encoding given in Formula (4). The value of CLBP_M was likewise encoded as 1/0, as shown in Formula (5), and CLBP_C is a 1/0 binary grayscale image, as shown in Formula (6).
$$\mathrm{CLBP\_S} = \sum_{s=0}^{N-1} S(P_s - P_c)\,2^s, \qquad S(P_s - P_c) = \begin{cases} 1, & P_s - P_c \ge 0 \\ 0, & P_s - P_c < 0 \end{cases} \tag{4}$$

Formula (4) shows the encoding process of CLBP_S, in which $P_s$ is the value of the $s$-th neighborhood pixel of the window, $P_c$ is the value of the central pixel of the window, $N$ is the number of neighborhood pixels, and $S(P_s - P_c)$ gives the sign of the difference between the neighborhood pixel value and the central pixel value [26]. The sign is positive (1) when the difference is non-negative and negative (−1) otherwise; in the encoding, the positive sign is coded as 1 and the negative sign as 0, which yields the value of CLBP_S.
$$\mathrm{CLBP\_M} = \sum_{s=0}^{N-1} M(U_s, c)\,2^s, \qquad M(U_s, c) = \begin{cases} 1, & U_s \ge c \\ 0, & U_s < c \end{cases}, \qquad U_s = |P_s - P_c| \tag{5}$$

In Formula (5), $U_s$ is the absolute value of the difference between the $s$-th neighborhood pixel value and the central pixel value of the window, $N$ is the number of neighborhood pixels, and $c$ is the mean of $U_s$ computed over the whole image [26]. The value of CLBP_M is obtained from this 1/0 encoding by converting it to a decimal value.
$$\mathrm{CLBP\_C} = C(P_c, m_c), \qquad C(P_c, m_c) = \begin{cases} 1, & P_c \ge m_c \\ 0, & P_c < m_c \end{cases} \tag{6}$$

In Formula (6), $m_c$ is the mean pixel value of the whole image [26]. When the central pixel value $P_c \ge m_c$, the value of CLBP_C is 1; when $P_c < m_c$, it is 0. As the CLBP window size increases, the locality of the CLBP information is gradually lost; in the theoretical limit, the window extends to the entire image.
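The following is a minimal per-band sketch of Formulas (4)-(6) for a square window, assuming the neighborhood consists of the pixels on the window border; this is an illustrative simplification rather than the circular-neighborhood implementation of Guo et al. [26], and the function name is hypothetical.

```python
import numpy as np

def clbp_maps(band, window=3):
    """Per-pixel CLBP_S, CLBP_M and CLBP_C maps for a single band using a square
    window of odd size (3, 5, ..., 13). Neighbours are taken on the window border
    and compared with the centre pixel."""
    band = band.astype(np.float64)
    r = window // 2
    H, W = band.shape
    padded = np.pad(band, r, mode="edge")

    # Offsets of the pixels on the border of the square window (the neighbours)
    offsets = [(dy, dx)
               for dy in range(-r, r + 1)
               for dx in range(-r, r + 1)
               if abs(dy) == r or abs(dx) == r]

    # Signed differences P_s - P_c for every neighbour, stacked as (N, H, W)
    diffs = np.stack([padded[r + dy: r + dy + H, r + dx: r + dx + W] - band
                      for dy, dx in offsets])
    mags = np.abs(diffs)          # U_s = |P_s - P_c|
    c = mags.mean()               # threshold c: mean magnitude over the whole image

    weights = (2 ** np.arange(len(offsets), dtype=np.int64))[:, None, None]
    clbp_s = ((diffs >= 0).astype(np.int64) * weights).sum(axis=0)   # sign code
    clbp_m = ((mags >= c).astype(np.int64) * weights).sum(axis=0)    # magnitude code
    clbp_c = (band >= band.mean()).astype(np.uint8)                  # centre code
    return clbp_s, clbp_m, clbp_c

# Example on placeholder data with a 9 x 9 window
s_map, m_map, c_map = clbp_maps(np.random.rand(64, 64), window=9)
```

In the classification experiments, such CLBP_S, CLBP_M, and CLBP_C maps would be computed for each of the four bands at the chosen window size and then summarized per image object.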

2.5. Vegetation Type Determination and Sample Selection

The vegetation types of the study areas were divided into three single types and three mixed types: grass, shrub, arbor, shrub-grass, arbor-grass, and arbor-shrub-grass. Shrub-grass, arbor-grass, and arbor-shrub-grass are abbreviated as SG, AG, and ASG, respectively. These types were determined by the characteristics of urban vegetation and the results of multiresolution segmentation.
Table 5 shows the vegetation types, the principles for sample selection, and field photographs of the vegetation type samples. Table 6 shows typical examples of each vegetation type under different features (spectrum, CLBP_M, CLBP_S, CLBP_C).
The vegetation samples comprise two parts: training samples and inspection samples. In total, 6350 vegetation samples were used: 3773 samples from Shanghai and 2577 from Lianyungang, comprising 2688 training samples and 3662 inspection samples. Table 7 shows the sample selection.
A unified training sample size could not be achieved in this experiment because CLBP textures with different window sizes are added to the multiresolution segmentation process. Different CLBP window sizes lead to different segmentation results at the same segmentation scale; this, in turn, segments the vegetation types more clearly and aids classification.

2.6. Classification Method

The random forest (RF) algorithm has been applied in many fields of remote sensing and has achieved good results [44,45,46,47]. It randomly selects classification features, with a random combination of features evaluated at the nodes of each decision tree. The multiple decision trees that form the forest evaluate the classification results of these feature combinations, and the best result is selected as the output [48]. Compared with support vector machines (SVMs), RF has fewer parameters. It shows the relative importance of different features in classification [49] and uses the random trees to estimate the internal error during the training stage. Wang et al. [50] found that the RF method had higher classification accuracy than the SVM and K-nearest neighbors (KNN) methods in coastal wetland classification. RF not only improves the classification accuracy of remote sensing images but is also insensitive to noise and over-training [51,52]. After many trials, max-depth, max-tree-number, and forest-accuracy were set to 100, 100, and 0.01, respectively.
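As an illustrative sketch only (not the software configuration used by the authors), the following shows an analogous random forest setup with scikit-learn on placeholder data; the forest-accuracy termination criterion has no direct scikit-learn equivalent, and the feature matrix and labels are random stand-ins for the 19 object features and six vegetation types.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.random((378, 19))                 # placeholder: 19 features per image object
y = rng.integers(0, 6, size=378)          # placeholder: 6 vegetation type labels

rf = RandomForestClassifier(
    n_estimators=100,     # analogous to the max-tree-number of 100
    max_depth=100,        # analogous to the max-depth of 100
    oob_score=True,       # out-of-bag estimate of the internal error
    random_state=0,
)
rf.fit(X, y)
print("OOB accuracy:", round(rf.oob_score_, 3))
print("Most important features (indices):", np.argsort(rf.feature_importances_)[::-1][:5])
```

The out-of-bag score plays the role of the internal error estimated during training, and the feature importances expose the relative contribution of the spectral and CLBP features mentioned above.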

3. Classification Results

Classification of urban vegetation types was carried out in Shanghai and Lianyungang. Vegetation planning and growth in Shanghai are at an advanced level in China, whereas in Lianyungang they are at an average level. Combining research in Shanghai and Lianyungang can show whether CLBP window textures are stable and universal.
To select the optimal CLBP window texture sizes for classification of urban vegetation types, the OA (overall classification accuracy) and KIA (kappa coefficient) of the confusion matrix are first used to analyze the overall performance of each CLBP window texture in extracting urban vegetation. Second, producer accuracy (PA) and user accuracy (UA) are used to analyze the selection of the optimal CLBP window texture for each urban vegetation type. PA is the ratio of the number of objects correctly classified as Class A to the actual total number of Class A objects in the study area (one column of the confusion matrix). UA is the ratio of the number of objects correctly classified as Class A to the total number of objects the classifier assigned to Class A (one row of the confusion matrix). Finally, the results of the two research areas are compared and summarized.
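As a minimal sketch of these accuracy measures, assuming a confusion matrix with classified results in rows and reference samples in columns (the function name and example matrix are hypothetical):

```python
import numpy as np

def accuracy_metrics(cm):
    """OA, KIA (kappa), producer's accuracy (PA) and user's accuracy (UA) from a
    confusion matrix with rows = classified labels and columns = reference labels."""
    cm = cm.astype(np.float64)
    n = cm.sum()
    oa = np.trace(cm) / n                                   # overall accuracy
    pe = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / n ** 2   # chance agreement
    kia = (oa - pe) / (1 - pe)                              # kappa coefficient
    pa = np.diag(cm) / cm.sum(axis=0)                       # per-class PA (columns)
    ua = np.diag(cm) / cm.sum(axis=1)                       # per-class UA (rows)
    return oa, kia, pa, ua

# Tiny two-class example
cm = np.array([[50, 10],
               [ 5, 35]])
print(accuracy_metrics(cm))
```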

3.1. Optimal CLBP Window Texture Size Analysis in Shanghai

Shanghai is one of China’s economic centers and has a climate suitable for living. Its vegetation planning and growth are among the best in China. Using Shanghai as a research area allows the feasibility of CLBP window textures for classification of urban vegetation types to be tested.

3.1.1. Classification Performance of Different CLBP Window Textures

By analyzing the OA and KIA of the confusion matrix, the overall performance of urban vegetation type classification in Shanghai using CLBP window textures of different sizes can be assessed. Table 8 shows the OA and KIA of urban vegetation type classification using different CLBP window textures. In Table 8, the “spectrum” column indicates that only spectral information was used to classify urban vegetation types, whereas the other columns indicate that spectral information and CLBP window textures of different sizes were used.
As Table 8 shows, combining spectral information with CLBP window textures generally achieved higher accuracy than using only spectral information. Among the six selected CLBP window texture sizes from 3 × 3 to 13 × 13, all except 13 × 13 showed markedly improved classification results. The 3 × 3 CLBP window size achieved the best overall accuracy, 66.17%, which is 17.28% higher than that achieved using spectral information alone. The best KIA, 57.67%, was obtained with the 9 × 9 window size, an improvement of 19.41% over spectral information alone.

3.1.2. Selection of Optimal CLBP Window Sizes in Shanghai

The selection of optimal CLBP window textures for different urban vegetation types is mainly obtained by PA (Producer Accuracy) and UA (User Accuracy) in the confusion matrix. Table 9 shows the UA and PA of urban vegetation type classification in Shanghai.
Figure 4a–g shows the classification results of urban vegetation types using only spectral information and combining spectral information with 3 × 3, 5 × 5, 7 × 7, 9 × 9, 11 × 11, and 13 × 13 CLBP window textures in Shanghai.
The optimal CLBP window textures in Shanghai can be obtained from the PA and UA in Table 9 and the classification maps in Figure 4. The optimal window size for grass and shrub was 3 × 3, with average PA and UA values of 74.10% and 75.60%, respectively. The optimal window size for arbor was 11 × 11, with an average PA and UA value of 70.48%. The optimal window size for SG and AG was 9 × 9, with average PA and UA values of 60.84% and 73.00%, respectively. The optimal window size for ASG was 7 × 7, with an average PA and UA value of 75.06%.

3.2. Optimal CLBP Window Texture Sizes Analysis in Lianyungang

The Lianyungang area, in the Huaihe River region, has climate characteristics of both North and South China. Its vegetation planning and growth are representative of China overall. Research into classification of urban vegetation types in Lianyungang can verify the universality of CLBP window textures.

3.2.1. Classification Performance of Different CLBP Window Textures

Table 10 shows the overall performance of urban vegetation type classification with different sizes of CLBP window textures in Lianyungang.
As Table 10 shows, in the Lianyungang study area the 9 × 9 window size achieved the highest overall classification accuracy. The results also show that the overall accuracy using only spectral information is lower than that of most combinations of spectral information and CLBP window textures. Combining the 9 × 9 CLBP window texture with spectral information improved OA by 7.28% and KIA by 8.99% relative to spectral information alone.

3.2.2. Selection of Optimal CLBP Window Sizes in Lianyungang

Table 11 shows the PA and UA of different CLBP window textures under different vegetation types in Lianyungang.
Figure 5a–g shows the classification results of urban vegetation types using only spectral information and combining spectral information with 3 × 3, 5 × 5, 7 × 7, 9 × 9, 11 × 11, and 13 × 13 CLBP window textures in Lianyungang.
Combining Figure 5 and Table 11, it can be concluded that the optimal window size for grass and shrub in Lianyungang was 3 × 3, with average PA and UA values of 75.33% and 74.15%, respectively. The optimal window size for arbor was 11 × 11, with an average PA and UA value of 75.10%. The optimal window size for SG and AG was 9 × 9, with average PA and UA values of 65.70% and 66.13%, respectively. The optimal window size for ASG was 7 × 7, with an average PA and UA value of 73.04%.

3.3. Summary of Research Results from Shanghai and Lianyungang

Combining the classification results in Figure 4 and Figure 5 with the overall classification accuracies in Table 8 and Table 10 shows that vegetation planning and growth in Shanghai are better than in Lianyungang, that combining spectral information with CLBP window textures yields more accurate classification than using spectral information alone, and that the 9 × 9 window gave the best overall classification performance in both research areas. Table 9 and Table 11 indicate that single vegetation types have better separability than mixed types, with the classification accuracy of single vegetation types higher than that of mixed types.
Table 12 summarizes the experiment and the optimal window sizes for classification of each vegetation type in the two study areas. The results show that the optimal window size for grass and shrub was 3 × 3, for arbor 11 × 11, for SG and AG 9 × 9, and for ASG 7 × 7. The experimental results therefore indicate that the optimal windows have good stability and universality when CLBP window textures are used for urban vegetation type classification.
The selection of the optimal CLBP window texture for an urban vegetation type is related to the characteristics of the vegetation texture. Grass and shrub have fine textures, so their optimal window is 3 × 3, whereas the texture of arbor is very rough, giving an optimal window of 11 × 11. For mixed vegetation, the texture roughness combines two or three roughness levels ranging from grass to arbor, giving an optimal window between 3 × 3 and 11 × 11. It was also found that the optimal CLBP window size has a limit: Table 12 shows that arbor has the coarsest texture, yet its optimal window size is 11 × 11 rather than 13 × 13.

4. Discussion

For urban vegetation type classification, high-resolution remote sensing images can provide more vegetation detail; WorldView satellite images and other high-resolution remote sensing images are suitable for classification of urban vegetation types. Compared with using only spectral information, combining spectral information with CLBP texture information improved the accuracy of urban vegetation classification by up to 7.28% in Lianyungang and 17.28% in Shanghai. In this paper, different sizes of CLBP window texture were compared to identify the optimal CLBP window for each type of urban vegetation. The results show that the selection of the optimal window is affected by plant size and vegetation type. The size of the CLBP window texture strongly affects the accuracy of urban vegetation type classification, so optimal windows vary among vegetation types.
Compared with other studies: Wang et al. [28] used CLBP window textures to classify coastal vegetation types. Their optimal window texture was CLBP24,3 (a circular window), and the accuracy of the CLBP24,3 window texture was higher than that of GLCM. However, optimal windows were selected based only on separability, and the impact of CLBP window texture size on the accuracy of vegetation type classification was not discussed; notably, the complexity of the coastal vegetation types studied was much lower than that of urban vegetation types. In research on GLCM window textures, Fu and Lin [39] studied window sizes of 3 × 3, 5 × 5, 7 × 7, 9 × 9, and 11 × 11 and found that the 7 × 7 GLCM window size produced the highest accuracy in loquat tree extraction. In our experiment, by contrast, the optimal CLBP window size for arbor was 11 × 11; because the texture features differ and loquat trees are not urban arbor, the conclusions differ. Similarly, Yan et al. [19] studied GLCM window texture sizes and found that a 3 × 3 window extracted grass in Nanjing with the greatest accuracy. This optimal GLCM window size for grass, 3 × 3, is consistent with the CLBP window texture result in our experiment, as both studies concern urban grass. The texture characteristics of vegetation types are thus key to determining the optimal texture window size. Woźniak et al. [53] studied SAR image classification using different decomposition windows, with 3 × 3, 5 × 5, 7 × 7, and 9 × 9 decomposition window sizes examined in depth for land-use classification accuracy. They reported that for classification of uniform features, the 3 × 3 window size can be more accurate: as the window becomes larger, classification details are lost. This conclusion is similar to the results of our experiment, indicating that texture characteristics determine window size to a degree.
As early as 1992, Woodcock et al. [54] proposed corresponding window texture sizes for different image feature types, and by 1995 it had been concluded that no single window size could be applied to all types of remote sensing objects [55,56], making it inappropriate to use arbitrary window sizes for the analysis of remote sensing images. The results of this paper validate these earlier views while quantitatively clarifying the impact of CLBP window texture size on classification accuracy for urban vegetation types. Within the body of research on remote sensing images, little work has studied texture window size alone; this paper should prompt scholars to study the impact of window texture size on remote sensing image analysis [55,56].
In future research, the CLBP window shape will be changed from square to circular, and CLBP textures of different window sizes will be combined to classify urban vegetation types, allowing multiscale window textures to provide more spatial information about vegetation types and further improve the accuracy of CLBP-based urban vegetation type classification. CLBP window textures of different sizes will also be applied in further remote sensing research.

5. Conclusions

Classification of urban vegetation types in Shanghai and Lianyungang was more accurate when using spectral and CLBP window texture features than when using only spectral features. Considering all vegetation types together, the 9 × 9 window size performed best for urban vegetation type classification. The 3 × 3 window size was optimal for grass and shrub, 11 × 11 for arbor, 9 × 9 for SG (shrub-grass) and AG (arbor-grass), and 7 × 7 for ASG (arbor-shrub-grass). It was also found that the optimal window size is determined by the roughness of the vegetation texture. The classification effects were consistent between the two research areas, showing that CLBP window textures have stability and universality when used to classify urban vegetation types.

Author Contributions

Investigation, Z.C., X.W. and H.Z.; methodology, Z.C., X.W., X.G. and H.Z.; writing—original draft, Z.C. and X.F.; writing—review and editing, J.Y.T., Y.Z., and K.W.; improving the data analysis, K.W., X.F. and Y.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Natural Science Foundation of China (NSFC No. U1901215, 31270745, and 41506106), Lianyungang Land and Resources Project (LYGCHKY201701), Lianyungang Science and Technology Bureau Project (SH1629), the Priority Academic Program Development of Jiangsu Higher Education Institutions (PAPD), the Top-notch Academic Programs Project of Jiangsu Higher Education Institutions (TAPP), and the Marine Special Program of Jiangsu Province in China (JSZRHYKJ202007).

Acknowledgments

The provision of WorldView-2 satellite imagery data is highly appreciated.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Jo, H. Impacts of urban green space on offsetting carbon emissions for middle Korea. J. Environ. Manag. 2002, 64, 115–126. [Google Scholar] [CrossRef] [PubMed]
  2. Chen, W.; Jim, C.Y. Assessment and valuation of the ecosystem services provided by urban forests. In Ecology, Planning, and Management of Urban Forests International Perspectives; Carreiro, M.M., Song, Y.C., Wu, J., Eds.; Springer: New York, NY, USA, 2008; pp. 53–83. Available online: https://link.springer.com/chapter/10.1007/978-0-387-71425-7_5 (accessed on 6 July 2020).
  3. Coutts, C.; Hahn, W. Green infrastructure, ecosystem services, and human health. Int. J. Environ. Res. Public Health 2015, 12, 9768–9798. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  4. Cornelis, J.; Hermy, M. Biodiversity relationships in urban and suburban parks in Flanders. Landsc. Urban Plan. 2004, 69, 385–401. [Google Scholar] [CrossRef]
  5. Small, C. Estimation of urban vegetation abundance by spectral mixture analysis. Int. J. Remote Sens. 2001, 22, 1305–1334. [Google Scholar] [CrossRef]
  6. Avissar, R. Potential effects of vegetation on the urban thermal environment. Atmos. Environ. 1996, 30, 437–448. [Google Scholar] [CrossRef]
  7. Grimmond, C.S.B.; Souch, C.; Hubble, M.D. The influence of tree cover on summer time energy balance fluxes, San Gabriel Valley, Los Angeles. Clim. Res. 1996, 6, 45–57. [Google Scholar] [CrossRef]
  8. Nowak, D.J.; Dwyer, J.F. The Urban Forest Effects (UFORE) Model: Quantifying urban forest structure and functions. In Integrated Tools for Natural Resources Inventories in the 21st Century, Proceedings of the IUFRO Conference, Boise, ID, USA, 16–20 August 1998; Hansen, M., Burk, T., Eds.; General Technical Report NC-212; U.S. Department of Agriculture, Forest Service, North Central Research Station: St. Paul, MN, USA, 2000; pp. 714–720. Available online: https://www.nrs.fs.fed.us/pubs/gtr/gtr_nc212/gtr_nc212_714.pdf (accessed on 6 July 2020).
  9. Liu, L.; Pang, Y.; Li, Y.; Si, L.; Liao, S. Combining Airborne and Terrestrial Laser Scanning Technologies to Measure Forest Understorey Volume. Forests 2017, 8, 111. [Google Scholar] [CrossRef] [Green Version]
  10. Kuo, F.E.; Sullivan, W.C. Environment and crime in the inner city: Does vegetation reduce crime? Environ. Behav. 2001, 33, 343–367. Available online: https://journals.sagepub.com/doi/abs/10.1177/0013916501333002 (accessed on 6 July 2020). [CrossRef] [Green Version]
  11. Young, R.F. Managing municipal green space for ecosystem services. Urban For. Urban Green. 2010, 9, 313–321. [Google Scholar] [CrossRef]
  12. Peters, K.B.M.; Elands, B.H.M.; Buijs, A.E. Social interactions in urban parks: Stimulating social cohesion? Urban For. Urban Green. 2010, 9, 93–100. [Google Scholar] [CrossRef] [Green Version]
  13. Anderson, L.M.; Cordell, H.K. Residential property values improved by landscaping with trees. South. J. Appl. For. 1985, 9, 162–166. [Google Scholar] [CrossRef]
  14. Yu, Q.; Gong, P.; Clinton, N.; Biging, G.; Kelly, M.; Schirokauer, D. Object-based detailed vegetation classification with airborne high spatial resolution remote sensing imagery. Photogramm. Eng. Remote Sens. 2006, 7, 799–811. [Google Scholar] [CrossRef] [Green Version]
  15. Zhang, X.; Feng, X.; Jiang, H. Object-oriented method for urban vegetation mapping using Ikonos imagery. Int. J. Remote Sens. 2010, 31, 177–196. [Google Scholar] [CrossRef]
  16. Haralick, R.M.; Shanmugam, K.; Dinstein, I.H. Textural features for image classification. IEEE Trans. Syst. Man. Cybern. 1973, 3, 610–621. [Google Scholar] [CrossRef] [Green Version]
  17. Gonzalez, R.C.; Wintz, P. Addison-Wesley Digital Image Processing, 2nd ed.; Prentice Hall: Upper Saddle River, NJ, USA, 1987; Volume 8, pp. 70–71. [Google Scholar]
  18. Moran, E.F. Land cover classification in a complex urban-rural landscape with QuickBird Imagery. Photogramm. Eng. Remote Sens. 2010, 76, 1159–1168. [Google Scholar]
  19. Yan, C.M.; Zhang, Y.J.; Bao, Y.S. Extraction of urban grassland information from IKONOS image based on grey level co-occurrence matrix. Eng. Surv. Mapp. 2005, 14, 26–29. [Google Scholar]
  20. Pu, R.; Landry, S. A comparative analysis of high spatial resolution IKONOS and WorldView-2 imagery for mapping urban tree species. Remote Sens. Environ. 2012, 124, 516–533. [Google Scholar] [CrossRef]
  21. Song, C.; Yang, F.; Li, P. Rotation Invariant Texture Measured by Local Binary Pattern for Remote Sensing Image Classification. In Proceedings of the 2010 Second International Workshop on Education Technology and Computer Science, Wuhan, China, 6–7 March 2010. [Google Scholar]
  22. Vigneshl, T.; Thyagharajan, K.K. Local binary pattern texture feature for satellite imagery classification. In Proceedings of the 2014 International Conference on Science Engineering and Management Research (ICSEMR), Chennai, India, 27–29 November 2014; pp. 1–6. [Google Scholar]
  23. Huang, X.; Li, S.Z.; Wang, Y. Shape localization based on statistical method using extended local binary pattern. In Proceedings of the Third International Conference on Image and Graphics (ICIG 2004), Hong Kong, China, 18–20 December 2004; pp. 184–187. [Google Scholar]
  24. Iakovidis, D.K.; Keramidas, E.G.; Maroulis, D. Fuzzy Local Binary Patterns for Ultrasound Texture Characterization. In Proceedings of the International Conference Image Analysis and Recognition (ICIAR 2008), Póvoa de Varzim, Portugal, 25–27 June 2008; Volume 5122, pp. 750–759. [Google Scholar]
  25. Nanni, L.; Lumini, A.; Brahnam, S. Local binary patterns variants as texture descriptors for medical image analysis. Artif. Intell. Med. 2010, 49, 117–125. [Google Scholar] [CrossRef] [PubMed]
  26. Guo, Z.; Zhang, L.; Zhang, D. A completed modeling of local binary pattern operator for texture classification. IEEE Trans. Image Process. 2010, 19, 1657–1663. [Google Scholar]
  27. Li, W.; Chen, C.; Su, H.; Du, Q. Local binary patterns and extreme learning machine for hyperspectral imagery classification. IEEE Trans. Geosci. Remote Sens. 2015, 53, 3681–3693. [Google Scholar] [CrossRef]
  28. Wang, M.; Fei, X.; Zhang, Y.; Chen, Z.; Wang, X.; Tsou, J.Y.; Liu, D.; Lu, X. Assessing texture features to classify coastal wetland vegetation from high spatial resolution imagery using completed local binary patterns (CLBP). Remote Sens. 2018, 10, 778. [Google Scholar] [CrossRef] [Green Version]
  29. Robinson, A.; Sale, R.; Morrison, J.; Muehrcke, P. Elements of Cartography; John Wiley & Sons: Toronto, ON, Canada, 1985. [Google Scholar]
  30. Foley, J.; van Dam, A.; Feiner, S.; Hughes, J. Computer Graphic: Principles and Practice; Addison-Wesley Publ. Co.: Ontario, ON, Canada, 1990; p. 1174. [Google Scholar]
  31. Franklin, S.; Wulder, M.; Lavigne, M. Automated derivation of geographic window sizes for use in remote sensing digital image texture analysis. Comput. Geosci. 1995, 22, 665–673. [Google Scholar] [CrossRef]
  32. Hodgson, M. Characteristics of the window for neighborhood analysis of nominal data. ASPRS/ACSM Ann. Conv. Tech. 1991, 3, 206–214. [Google Scholar]
  33. Franklin, S.; McDermid, G. Empirical relations between digital SPOT HRV and CASI spectral response and lodgepole pine (Pinus contorta) forest stand parameters. Int. J. Remote Sens. 1993, 14, 2331–2348. [Google Scholar] [CrossRef]
  34. Wilson, B.A. Estimating Forest Structure Using SAR Imagery: Unpublished Doctoral Dissertation. Ph.D. Thesis, University Calgary, Calgary, AB, Canada, 1995; p. 175. [Google Scholar]
  35. Franklin, S.; Wulder, M.; Gerylo, G. Texture analysis of IKONOS panchromatic data for Douglas-fir forest age class separability in British Columbia. Int. J. Remote Sens. 2001, 22, 2627–2632. [Google Scholar] [CrossRef]
  36. Marceau, D.J.; Howarth, P.J.; Dubois, J.M.; Gratton, D.J. Evaluation of the grey level co-occurrence matrix method for land-cover classification using SPOT imagery. IEEE Trans. Geosci. Remote Sens. 1990, 28, 513–519. [Google Scholar] [CrossRef]
  37. Kayitakire, F.; Hamel, C.; Defourny, P. Retrieving forest structure variables based on image texture analysis and IKONOS-2 imagery. Remote Sens. Environ. 2006, 102, 390–401. [Google Scholar] [CrossRef]
  38. Murray, H.; Lucieer, A.; Williams, R. Texture-based classification of sub-Antarctic vegetation communities on Heard Island. Int. J. Appl. Earth Obs. Geoinf. 2010, 12, 138–149. [Google Scholar] [CrossRef]
  39. Fu, W.; Lin, M. Study on extracting of loquat information using SVM and gray-level co-occurrence matrix from QuickBird image. Remote Sens. Technol. Appl. 2010, 25, 695–699. [Google Scholar]
  40. Blaschke, T.; Burnett, C.; Pekkarinen, A. Image segmentation methods for object-based analysis and classification. In Remote Sensing Image Analysis: Including the Spatial Domain; de Jong, S.M., van der Meer, F.D., Eds.; Kluwer Academic Publishers: Dordrecht, The Netherlands, 2004; pp. 211–236. [Google Scholar]
  41. Mathieu, R.; Aryal, J.; Chong, A.K. Object-based classification of Ikonos imagery for mapping large-scal vegetation communities in urban areas. Sensors 2007, 7, 2860–2880. [Google Scholar] [CrossRef] [Green Version]
  42. Drǎguţ, L.; Tiede, D.; Levick, S.R. ESP: A tool to estimate scale parameter for multiresolution image segmentation of remotely sensed data. Int. J. Geogr. Inf. Sci. 2010, 24, 859–871. [Google Scholar] [CrossRef]
  43. Ojala, T.; Pietikäinen, M.; Mäenpää, T. Multiresolution gray-scale and rotation invariant texture classification with local binary patterns. IEEE Trans. Pattern Anal. Mach. Intell. 2002, 7, 971–987. [Google Scholar] [CrossRef]
  44. Rodriguez-Galiano, V.F.; Ghimire, B.; Rogan, J.; Chica-Olmo, M.; Rigol-Sanchez, J.P. An assessment of the effectiveness of a Random Forest classifier for land-cover classification. ISPRS J. Photogram. Remote Sens. 2012, 67, 93–104. [Google Scholar] [CrossRef]
  45. Stefanski, J.; Mack, B.; Waske, B. Optimization of object-based image analysis with Random Forests for land cover mapping. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2013, 6, 2492–2504. [Google Scholar] [CrossRef]
  46. Adelabu, S.; Mutanga, O.; Adam, E. Evaluating the impact of red-edge band from Rapideye image for classifying insect defoliation levels. ISPRS J. Photogram. Remote Sens. 2014, 95, 34–41. [Google Scholar] [CrossRef]
  47. Wang, Z.; Lai, C.; Chen, X.; Yang, B.; Zhao, S.; Bai, X. Flood hazard risk assessment model based on Random Forest. J. Hydrol. 2015, 527, 1130–1141. [Google Scholar] [CrossRef]
  48. Belgiu, M.; Drăguţ, L. Random Forest in remote sensing: A review of applications and future directions. ISPRS J. Photogram. Remote Sens. 2016, 114, 24–31. [Google Scholar] [CrossRef]
  49. Pal, M. Random Forest classifier for remote sensing classification. Int. J. Remote Sens. 2005, 26, 217–222. [Google Scholar] [CrossRef]
  50. Wang, X.; Gao, X.; Zhang, Y.; Fei, X.; Chen, Z.; Wang, J.; Zhao, H. Land-cover classification of coastal wetlands using the RF algorithm for Worldview-2 and Landsat 8 Images. Remote Sens. 2019, 11, 1927. [Google Scholar] [CrossRef] [Green Version]
  51. Pal, M.; Mather, P.M. An assessment of the effectiveness of decision tree methods for land cover classification. Remote Sens. Environ. 2003, 86, 554–565. [Google Scholar] [CrossRef]
  52. Gislason, P.O.; Benediktsson, J.A.; Sveinsson, J.R. Random Forests for land cover classification. Pattern Recognit. Lett. 2006, 27, 294–300. [Google Scholar] [CrossRef]
  53. Woźniak, E.; Kofman, W.; Wajer, P.; Lewiński, S.; Nowakowski, A. The influence of filtration and decomposition window size on the threshold value and accuracy of land-cover classification of polarimetric SAR images. Int. J. Remote Sens. 2016, 37, 212–228. [Google Scholar] [CrossRef]
  54. Woodcock, C.E.; Harward, V.J. Nested-hierarchical scene models and image segmentation. Int. J. Remote Sens. 1992, 13, 3167–3187. [Google Scholar] [CrossRef]
  55. Zhang, Y.; Huang, Z.; Fu, D.; Tsou, J.Y.; Jiang, T.; Liang, X.S.; Lu, X. Monitoring of chlorophyll-a and sea surface silicate concentrations in the south part of Cheju island in the East China sea using MODIS data. Int. J. Appl. Earth Obs. 2018, 67, 173–178. [Google Scholar] [CrossRef]
  56. Ji, C.; Zhang, Y.; Cheng, Q.; Tsou, J.; Jiang, T.; Liang, X.S. Evaluating the impact of sea surface temperature (SST) on spatial distribution of chlorophyll-a concentration in the East China Sea. Int. J. Appl. Earth Obs. 2018, 68, 252–261. [Google Scholar] [CrossRef]
Figure 1. The experimental process.
Figure 2. The location of the study area in Shanghai and Lianyungang, China.
Figure 3. Scale parameters obtained by ESP (estimation of scale parameter).
Figure 4. Classification map of urban vegetation types in Shanghai based on different CLBP window textures, using only spectral information (a) and combining spectral information with CLBP window texture in Shanghai at 3 × 3 (b), 5 × 5 (c), 7 × 7 (d), 9 × 9 (e), 11 × 11 (f), and 13 × 13 (g).
Figure 5. Classification map of urban vegetation types in Lianyungang based on different CLBP window textures, using only spectral information (a) and combining spectral information with CLBP window texture in Lianyungang at 3 × 3 (b), 5 × 5 (c), 7 × 7 (d), 9 × 9 (e), 11 × 11 (f), and 13 × 13 (g).
Table 1. WorldView-2 band descriptions.
Spectral Band | Wavelength (nm) | Spatial Resolution (m)
Red | 630–690 | 0.46
Green | 510–580 | 0.46
Blue | 450–510 | 0.46
NIR | 770–895 | 0.46
Table 2. The projection coordinates and geographical coordinates of the study areas.
Coordinates | Shanghai | Lianyungang
Projected coordinate | WGS_1984_UTM_Zone_51N | WGS_1984_UTM_Zone_50N
Geographic coordinate | GCS_WGS_1984 | GCS_WGS_1984
Table 3. Comparison of segmentation scale effects (based on ESP).
Scale | Target 1 | Target 2 | Target 3 | Shape/Compactness
40 | Remotesensing 12 03393 i001 | Remotesensing 12 03393 i002 | Remotesensing 12 03393 i003 | 0.2/0.6
50 | Remotesensing 12 03393 i004 | Remotesensing 12 03393 i005 | Remotesensing 12 03393 i006 | 0.2/0.6
60 | Remotesensing 12 03393 i007 | Remotesensing 12 03393 i008 | Remotesensing 12 03393 i009 | 0.2/0.6
Table 4. The features used in each window-size experiment.
Object Features | Feature Parameters | Explanation
Spectral features | Spectral values | Red, Blue, Green, NIR
Spectral features | Brightness | Average value of the image spectrum (Formula (1))
Spectral features | NDVI | (NIR - Red)/(NIR + Red)
Spectral features | Max.diff | Absolute value of the maximum difference between band gray values divided by the average brightness (Formula (2))
Texture features | CLBP_S | Sign component of the Red, Blue, Green, and NIR bands (Formula (4))
Texture features | CLBP_M | Magnitude component of the Red, Blue, Green, and NIR bands (Formula (5))
Texture features | CLBP_C | Central pixel value binary component of the Red, Blue, Green, and NIR bands (Formula (6))
Table 5. The selection principle of vegetation type samples.
Vegetation Types | Selection Principles | Field Photographs
Grass | The color transition is uniform and regular on the pseudo-color map, and the CLBP window texture is composed of regular, continuous, and smooth small patches. | Remotesensing 12 03393 i010
Shrub | The color transition is uneven and discontinuous on the pseudo-color map, and the CLBP window texture is composed of irregular, discontinuous patches of different sizes. | Remotesensing 12 03393 i011
Arbor | There are obvious crowns on the pseudo-color map, and the CLBP window texture presents regular, discontinuous large patches. | Remotesensing 12 03393 i012
SG | Composed of shrub and grass; the color shows the characteristics of shrub and grass on the pseudo-color map. The CLBP window texture is composed of irregular small and medium patches and smooth patches. | Remotesensing 12 03393 i013
AG | The pseudo-color map shows characteristics of arbor and grass. The CLBP window texture is composed of large patches of arbor crowns and small patches or smooth parts of grass. | Remotesensing 12 03393 i014
ASG | Consists of the color characteristics of arbor, shrub, and grass on the pseudo-color map. The CLBP window texture has the texture characteristics of all vegetation types. | Remotesensing 12 03393 i015
Table 6. The typical examples under different features of each vegetation type.
Vegetation Types | Spectrum | CLBP_M | CLBP_S | CLBP_C
Grass | Remotesensing 12 03393 i016 | Remotesensing 12 03393 i017 | Remotesensing 12 03393 i018 | Remotesensing 12 03393 i019
Shrub | Remotesensing 12 03393 i020 | Remotesensing 12 03393 i021 | Remotesensing 12 03393 i022 | Remotesensing 12 03393 i023
Arbor | Remotesensing 12 03393 i024 | Remotesensing 12 03393 i025 | Remotesensing 12 03393 i026 | Remotesensing 12 03393 i027
AG | Remotesensing 12 03393 i028 | Remotesensing 12 03393 i029 | Remotesensing 12 03393 i030 | Remotesensing 12 03393 i031
SG | Remotesensing 12 03393 i032 | Remotesensing 12 03393 i033 | Remotesensing 12 03393 i034 | Remotesensing 12 03393 i035
ASG | Remotesensing 12 03393 i036 | Remotesensing 12 03393 i037 | Remotesensing 12 03393 i038 | Remotesensing 12 03393 i039
Table 7. The sample statistics.
Window | Samples | Grass | Shrub | Arbor | SG | AG | ASG | Sum
3 × 3 | Training | 134 | 120 | 99 | 38 | 65 | 43 | 499
3 × 3 | Inspection | 146 | 146 | 108 | 47 | 68 | 44 | 559
5 × 5 | Training | 93 | 88 | 79 | 53 | 55 | 47 | 415
5 × 5 | Inspection | 101 | 119 | 94 | 65 | 65 | 51 | 495
7 × 7 | Training | 66 | 84 | 83 | 44 | 59 | 48 | 384
7 × 7 | Inspection | 83 | 85 | 97 | 60 | 79 | 68 | 472
9 × 9 | Training | 66 | 75 | 77 | 65 | 57 | 38 | 378
9 × 9 | Inspection | 132 | 106 | 100 | 83 | 92 | 70 | 583
11 × 11 | Training | 70 | 72 | 81 | 44 | 52 | 43 | 362
11 × 11 | Inspection | 108 | 93 | 127 | 66 | 67 | 67 | 528
13 × 13 | Training | 47 | 49 | 50 | 41 | 38 | 47 | 272
13 × 13 | Inspection | 88 | 83 | 82 | 62 | 60 | 64 | 439
Only Spectrum | Training | 64 | 84 | 96 | 46 | 46 | 42 | 378
Only Spectrum | Inspection | 100 | 124 | 112 | 90 | 93 | 67 | 586
Sum | | 1298 | 1328 | 1285 | 804 | 896 | 739 | 6350
Table 8. The OA and KIA of urban vegetation types classification in Shanghai.
Accuracy | 3 × 3 | 5 × 5 | 7 × 7 | 9 × 9 | 11 × 11 | 13 × 13 | Spectrum
OA (%) | 66.17 | 63.44 | 61.13 | 65.33 | 57.32 | 48.85 | 48.89
KIA (%) | 55.74 | 54.64 | 52.82 | 57.67 | 48.44 | 38.52 | 38.26
Table 9. The PA and UA of urban vegetation types classification in Shanghai.
PA (%) | Grass | Shrub | Arbor | SG | AG | ASG
3 × 3 | 78.10 | 75.60 | 72.46 | 29.41 | 54.05 | 8.33
5 × 5 | 75.76 | 60.87 | 65.52 | 56.25 | 60.60 | 42.86
7 × 7 | 60.00 | 66.67 | 53.52 | 53.33 | 63.27 | 72.97
9 × 9 | 54.02 | 81.36 | 48.84 | 66.67 | 78.72 | 67.74
11 × 11 | 45.00 | 71.43 | 72.46 | 40.48 | 50.00 | 52.50
13 × 13 | 42.86 | 57.45 | 47.22 | 43.75 | 43.33 | 56.67
UA (%) | Grass | Shrub | Arbor | SG | AG | ASG
3 × 3 | 70.09 | 75.60 | 62.50 | 45.45 | 50.00 | 50.00
5 × 5 | 65.79 | 66.67 | 70.37 | 48.65 | 60.60 | 56.25
7 × 7 | 65.85 | 77.27 | 62.30 | 50.00 | 44.29 | 77.14
9 × 9 | 72.30 | 69.57 | 55.26 | 55.00 | 67.27 | 63.64
11 × 11 | 61.36 | 77.59 | 68.50 | 34.70 | 43.48 | 47.73
13 × 13 | 47.37 | 67.50 | 53.13 | 37.84 | 52.00 | 37.78
Table 10. The OA and KIA of urban vegetation types classification in Lianyungang.
Accuracy | 3 × 3 | 5 × 5 | 7 × 7 | 9 × 9 | 11 × 11 | 13 × 13 | Spectrum
OA (%) | 59.33 | 54.17 | 61.90 | 64.66 | 57.94 | 52.25 | 57.38
KIA (%) | 50.91 | 44.67 | 54.17 | 57.37 | 48.87 | 42.62 | 48.38
Table 11. The PA and UA of urban vegetation types classification in Lianyungang.
PA (%) | Grass | Shrub | Arbor | SG | AG | ASG
3 × 3 | 68.30 | 68.30 | 58.97 | 56.67 | 45.16 | 51.85
5 × 5 | 57.14 | 70.00 | 55.56 | 42.42 | 46.88 | 43.33
7 × 7 | 55.26 | 58.82 | 65.38 | 56.67 | 63.33 | 74.20
9 × 9 | 62.22 | 53.20 | 70.18 | 70.00 | 73.33 | 56.41
11 × 11 | 52.08 | 63.33 | 72.41 | 54.17 | 44.44 | 48.15
13 × 13 | 50.00 | 52.78 | 60.87 | 60.00 | 43.33 | 41.18
UA (%) | Grass | Shrub | Arbor | SG | AG | ASG
3 × 3 | 82.35 | 80.00 | 42.60 | 53.13 | 41.18 | 70.00
5 × 5 | 80.00 | 71.43 | 52.63 | 33.33 | 42.86 | 48.15
7 × 7 | 53.85 | 68.97 | 65.38 | 54.84 | 59.38 | 71.88
9 × 9 | 84.85 | 54.35 | 63.50 | 61.40 | 58.93 | 78.57
11 × 11 | 78.13 | 54.29 | 77.78 | 37.14 | 40.00 | 46.43
13 × 13 | 67.65 | 59.38 | 59.57 | 32.73 | 56.52 | 48.39
Table 12. The optimal CLBP window sizes.
Vegetation Type | Optimal Window Size | Shanghai PA (%) | Shanghai UA (%) | Lianyungang PA (%) | Lianyungang UA (%)
Grass | 3 × 3 | 78.10 | 70.09 | 68.30 | 82.35
Shrub | 3 × 3 | 75.60 | 75.60 | 68.30 | 80.00
Arbor | 11 × 11 | 72.46 | 68.50 | 72.41 | 77.78
SG | 9 × 9 | 66.67 | 55.00 | 70.00 | 61.40
AG | 9 × 9 | 78.72 | 67.27 | 73.33 | 58.93
ASG | 7 × 7 | 72.97 | 77.14 | 74.20 | 71.88
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
