Article

Multi-Level Spatial Analysis for Change Detection of Urban Vegetation at Individual Tree Scale

1 Key Laboratory of Geographic Information Science, Ministry of Education, East China Normal University, Shanghai 200241, China
2 Shanghai Botanical Garden, Shanghai 200231, China
* Author to whom correspondence should be addressed.
Remote Sens. 2014, 6(9), 9086-9103; https://doi.org/10.3390/rs6099086
Submission received: 27 May 2014 / Revised: 28 August 2014 / Accepted: 10 September 2014 / Published: 23 September 2014
(This article belongs to the Special Issue Advances in Geographic Object-Based Image Analysis (GEOBIA))

Abstract

Spurious change is a common problem in urban vegetation change detection using multi-temporal, high-resolution remote sensing images. It usually results from false-absent and false-present vegetation patches in obscured and/or shaded scenes. The presented approach focuses on object-based change detection with joint use of spatial and spectral information, referred to here as multi-level spatial analysis. The analyses are conducted in three phases: (1) the pixel-level spatial analysis is performed by adding a density dimension to a multi-feature space for classification, indicating the spatial dependency between pixels; (2) the member-level spatial analysis is conducted by self-adaptive morphology to readjust incorrectly classified members according to the spatial dependency between members; (3) the object-level spatial analysis is realized by self-adaptive morphology combined with an additional rule of shared boundaries. Spatial analysis at this level helps detect spurious change objects according to the spatial dependency between objects. The results show that the error of the automatically extracted vegetation objects with the pixel- and member-level spatial analyses is no more than 2.56%, compared with 12.15% without spatial analysis. Moreover, the error of the automatically detected spurious changes with the object-level spatial analysis is no higher than 3.26% of all dynamic vegetation objects, meaning that fully automatic detection of vegetation change at a joint maximum error of 5.82% can be guaranteed.

1. Introduction

Information on the presence and change of urban vegetation layouts and abundances is most useful for modeling the behavior of the urban environment and estimating carbon storage and other ecological benefits of urban vegetation [1–3]. Remotely sensed imagery is an important source of data for characterizing changes systematically and consistently [4]. Despite frequent criticism and the availability of many alternatives, change detection from remotely sensed imagery remains one of the most widely applied techniques due to its simplicity and intuitive manner [5].

Change detection techniques can be broadly grouped by objective into two types, change enhancement and "from–to" change information extraction, and by handling manner into pixel-based and object-based change detection [6,7]. The proposed approach extracts from–to change information of urban vegetation in an object-based manner, with joint use of the spatial and spectral information provided by remote sensing images. A single object may represent an individual tree crown, several adhering crowns, or a vegetation-covered region with mixed trees, shrubs and grassland. The minimum mapping object is three meters in diameter, which is usually associated with the single crown of a young tree. The approach is thus defined as operating at individual tree scale.

It has been shown that the traditional image differencing method only occasionally works properly, owing to the effects of varying illumination levels associated with changes in season, sun angle, off-nadir distance, etc. [3,8]. Efforts to improve this have mainly focused on making the radiation levels of the image pair consistent or on acquiring the quantitative correlation between the radiation levels of the image pair. The image rationing method [9] is an example of the former; the regression method [10] and change vector analysis [11] are dedicated to the latter. However, such global analysis of the radiation level of a whole image, whether or not it discerns different classes, is often difficult to adapt to the change detection of urban micro-scale objects.

The difficulty is usually caused by the heterogeneous internal reflectance patterns of urban landscape features associated with shadowing effects and off-nadir issues that are not constant through time due to variations in sun angle and sensor look angle. Vegetation object pairs in dual-temporal images therefore often differ even if no vegetation change occurs within the considered time interval [12,13]. Several works on the identification of shaded members have been reported [14–16]. Some of them assigned all shaded surfaces to a single class, probably limited by the poor separability between different shaded classes, which is often beyond the capability of the applied classification methods [14,15]. Although some researchers have paid attention to the detection and reconstruction of shaded scenes, the difficulty of compensating each band of weakened reflection means that, thus far, shadow-free imagery can be reconstructed only visually rather than spectrally [17–20].

Object-based image analysis (OBIA) and geographic OBIA (GEOBIA) have become very popular since the turn of the century. In order to contend with the challenges of extracting meaningful information from increasingly high-resolution data products, GEOBIA focuses on the spatial patterns created by many pixels rather than only on the statistical features of each individual pixel [21,22]. Many practicable methods for automatically delineating and labeling geographic objects have been developed [23]. Most of them are linked to the concept of multi-resolution segmentation (MRS) [24]. MRS relies on the scale parameter (SP) [25] and the spatial connectivity of homogeneous pixels at a given scale to partition an image into image objects. However, for certain applications, such as mapping vegetation at individual tree scale in a downtown area using high-resolution data, deciding the SPs is a severe challenge because the vegetation-covered surfaces are often surrounded by buildings and other urban facilities, resulting in locally random radiation noise superimposed on an already very complex layout. Consequently, no scale is available that can capture the normal and the noise-contaminated pixels at the same time, leaving the knitted objects incomplete.

In order to provide a possible solution, a framework of multi-level spatial analyses is put forward in this paper. Based on the fact that some of the spatial patterns (e.g., distance and density) created by the neighboring pixels of a certain pixel may help label that pixel, the patterns are formulated as a "density dimension" that takes the density of neighboring pixels similar in attribute to the center pixel as an indicator of their homogeneity. The indicator is added to the feature space to represent the spatial dependency between pixels for better classification accuracy. The pixels and patches labeled by the classification are called members and serve as object candidates. In addition, it is not rare that a member or object is geographically adjacent to another of a related class even though their attributes are completely distinct, such as shaded patches connecting to sunlit ones and spurious change objects adjoining stable ones. Such spatial dependency can be formulated and therefore detected. This lays the conceptual foundation for further spatial analyses at two more levels: member and object. The former can be applied to further improve member accuracy, and the latter is useful for detecting spurious changes and repairing the defects on original objects by comparing a pair of objects in two date images. Thus the objective of detecting urban vegetation change with better accuracy and automation can be reached.

2. Study Region and Data Collection

Figure 1 shows the study region.

The test images were randomly selected from nine groups of false-color aerial near-infrared (NIR) images, referred to as "NIR images" in the following text, in a time sequence from 1988 to 2006. The images were purchased from the governmental geographic information service institution. The sensor aboard the aircraft was a photogrammetric camera. The original photographic scales ranged from 1:8000 to 1:15,000, so the spatial resolution at nadir is better than two meters. The original photo size was 23 cm by 23 cm, and the photos had been assembled into a complete image for each test sub-region and each date, with geometric and orthographic corrections applied before delivery. The film used was sensitive to reflection in the NIR band. The photo colors of red, green and blue indicate the NIR (760–850 nm), R (red, 630–690 nm) and G (green, 520–600 nm) bands, respectively. In addition, it is a common characteristic of object-based change detection that the detection accuracy depends mainly on the accuracy of extracting individual objects rather than on the spectral similarity between the pair of images from which the objects are extracted. Therefore, radiometric correction of individual images (e.g., atmospheric adjustment) is not as essential as it is for pixel-based change detection.

In Shanghai, as in other rainy southern cities in China, NIR images are widely used for city surveying and mapping because of the difficulty of acquiring satellite images with low cloud cover in all seasons. Technology for detecting vegetation change from such NIR images is therefore desirable.

3. Methods

3.1. Overview

As mentioned previously, the spatial analyses are conducted in three phases. (1) The pixel-level spatial analysis is performed by adding the density dimension to a feature space for classification, indicating the inter-pixel dependency for each pixel; (2) the member-level spatial analysis is conducted by self-adaptive morphology to readjust incorrectly classified members according to the inter-member dependency rule; (3) the object-level spatial analysis is realized by self-adaptive morphology combined with a preset inter-object dependency rule. The detected spurious changes are used to repair both dual-temporal vegetation object sets. The added, stable and subtracted objects and the repaired vegetation objects are finally obtained. Figure 2 shows the flowchart. MATLAB served as the simulation testing tool.

3.2. The Pixel-Level Spatial Analysis

In this phase, the whole image is classified into members of different classes through supervised classification. Owing to the obvious spectral differences between sunlit and shaded objects and the obvious textural differences between trees and grass in remote sensing imagery of a given resolution, the original six classes are: sunlit tree, shaded tree, sunlit grass, shaded grass, bright background and dark background (shaded background and other dark surfaces such as water). A support vector machine (SVM) [26] serves as the classifier. In order to adapt the complexity of the feature space to the number of classes and to highlight the differences in image features between these classes, two spectral and two textural features are utilized (Table 1 [19,27–29]).
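To make this step concrete, the sketch below (Python with scikit-learn, not the authors' MATLAB implementation) trains an SVM on per-pixel feature rasters such as those of Table 1 plus the density dimension De introduced in the next paragraph; the function name, hyper-parameters and input conventions are hypothetical.

```python
import numpy as np
from sklearn.svm import SVC

def classify_pixels(feature_rasters, train_idx, train_labels):
    """Supervised SVM classification into the six classes of this section.

    feature_rasters : list of 2-D arrays of equal shape (e.g., NDVI, NDSV,
                      Cd, Dd, De), computed beforehand
    train_idx       : flat indices of hand-digitized training pixels
    train_labels    : class codes, e.g., 0 = sunlit tree ... 5 = dark background
    """
    # Stack the rasters into an (n_pixels, n_features) matrix.
    X = np.stack([f.ravel() for f in feature_rasters], axis=1)
    clf = SVC(kernel="rbf", C=10.0, gamma="scale")   # assumed hyper-parameters
    clf.fit(X[train_idx], train_labels)
    return clf.predict(X).reshape(feature_rasters[0].shape)
```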

Besides the four descriptors, a density dimension, denoted De, is added to the feature space. De can be understood as a "feature of features" and is formulated as a density distribution function of multiple features (Equations (1) and (2)). Each element of De represents the density of homogeneous neighboring elements, where a homogeneous element is a pixel whose feature tuple falls within the tuple of the center element under a given tolerance. With these two equations, the inter-pixel dependency of each pixel can be locally defined in a multi-feature space.

$$D_e(i,j)=\mathrm{COUNT}\big(BW(i,j)_n\big)/n^2\qquad(1)$$
and
$$p=\begin{cases}F_1(i,j)\pm d_{F_1}\\ \ \ \vdots\\ F_k(i,j)\pm d_{F_k}\\ \ \ \vdots\\ F_m(i,j)\pm d_{F_m}\end{cases}\qquad(2)$$
where De(i,j) is the element in the ith row and jth column of De; BW(i,j)n is an n-by-n binary image in which each true member meets the condition formulated in Equation (2); p is a pixel of BW(i,j)n; Fk is the kth feature (k = 1, 2, …, m) and dFk is its tolerance, experimentally specified as 2% to 5% of the whole range of Fk.
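A minimal sketch of Equations (1) and (2) is given below (Python/NumPy, assuming the feature rasters are 2-D float arrays of equal shape); the loop over window offsets is written for clarity rather than speed, and off-image neighbors are simply counted as non-homogeneous, a boundary treatment the paper does not specify.

```python
import numpy as np

def density_dimension(features, tolerances, n=9):
    """Density dimension De (Equations (1) and (2)): for every pixel, the
    fraction of pixels in its n-by-n neighborhood whose feature tuple lies
    within the center pixel's tuple +/- the per-feature tolerance dF_k.

    features   : list of 2-D float arrays F_1..F_m (same shape)
    tolerances : list of scalars dF_1..dF_m (about 2%-5% of each feature range)
    n          : odd neighborhood size (assumed value)
    """
    rows, cols = features[0].shape
    half = n // 2
    count = np.zeros((rows, cols))
    for di in range(-half, half + 1):
        for dj in range(-half, half + 1):
            ok = np.ones((rows, cols), dtype=bool)
            for F, dF in zip(features, tolerances):
                # Value of the neighbor at offset (di, dj), aligned to the center
                # pixel; off-image neighbors get +inf so they never pass the test.
                shifted = np.full((rows, cols), np.inf)
                shifted[max(0, -di):rows - max(0, di),
                        max(0, -dj):cols - max(0, dj)] = \
                    F[max(0, di):rows - max(0, -di),
                      max(0, dj):cols - max(0, -dj)]
                ok &= np.abs(shifted - F) <= dF          # Equation (2)
            count += ok
    return count / float(n * n)                          # Equation (1)
```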

Figure 3 gives an example of classification with and without De. It can be seen that the discrimination between different shaded members is obviously improved by adding De. This benefit stems from three novelties: (1) the scale-labeled segments of MRS are replaced with a density field defined by De; the field helps cluster those pixels that spatially depart from the patches knitted by other connecting pixels; (2) the homogeneity of pixels is indicated by the density of similar neighboring pixels rather than by the SP of MRS, which gives the approach better tolerance to image noise; (3) the SP is replaced with dFk, and the choice of dFk affects classification accuracy far less than the choice of SP does. Benefiting from this, a small and relatively stable default dFk can serve as the tolerance for a single-level segmentation to offer a finer homogeneity between pixels, while the spatial patterns created by these homogeneous pixels are indicated by their density at the same time. Our experiments have revealed that these novelties have great potential for extracting illegible details, such as shaded vegetation members.

3.3. The Member-Level Spatial Analysis

There are four main steps in this phase: (1) removing noises (Section 3.3.1); (2) readjusting misclassified members based on the member-level spatial analysis (Section 3.3.2); (3) knitting vegetation members into objects (Section 3.3.3); and (4) refining objects by interaction (Section 3.3.4).

3.3.1. Removing Noises

Some severely discrete small members should be treated as noise before the readjustment, to save computational cost, because the vegetation members will be analyzed individually afterwards. With morphological closing, small neighboring members of the same class are spatially integrated; the remaining discrete small members are then removed by area filtering.

3.3.2. Readjusting Misclassified Members Based on the Member-Level Spatial Analysis

After removing the noise, the confusion between shaded grass and dark background members is still common, seriously damaging member accuracy. Instead of simply reassigning small members to the surrounding majority class in the manner of minimum mapping units (MMUs) [5], we use self-adaptive morphology to model the inter-member dependency for a more reliable reassignment.

Based on the member-level spatial analysis, each misclassified shaded grass member is readjusted according to the inter-member dependency rule. The dependency of each member can be indicated by the density of neighboring members of the involved classes. According to this density, it is decided whether the center member should be removed as misclassified or morphologically closed with other involved members. The rule can be formulated as Equation (3).

$$BW_{sg}=\begin{cases}BW_{sg}\cap\,!M_{sg}(i), & M_{sg}(i)\mid R<T_1\\ BW_{sg}\cup\big(M_{sg}(i)\bullet SE_t\big), & M_{sg}(i)\mid R\ge T_1\end{cases}\qquad(3)$$
where BWsg is the binary image of shaded grass members; Msg(i) is the ith member in BWsg; R is the density of sunlit vegetation members in the dilated region of Msg(i); T1 is the lower density threshold with an experimental default of 0.1; • is the sign of morphological closing; SEt is the structural element with a given size t, where t = ROUND(h·R), ROUND(·) is the rounding function, and h is a coefficient with an experimental default of 20, thus limiting t to between 2 and 10.

The self-adaptive morphology is achieved by adaptively adjusting t, the size of the structural element, according to R. The higher R is, the larger t becomes and the more neighboring members are closed. The algorithm is based on the fact that the higher R is, the more members of the involved classes surround the center member and the lower the probability that the center member is noise. By adjusting t adaptively, the center member can be enlarged to an appropriately integrated vegetation patch, so a significant improvement in member accuracy can be expected. Figure 4 gives an example.
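The sketch below (Python/SciPy, not the authors' code) illustrates the self-adaptive closing of Equation (3) under the stated defaults T1 = 0.1 and h = 20; the probe radius used to measure R and the restriction of the closing to the neighborhood of each member are assumptions.

```python
import numpy as np
from scipy import ndimage

def readjust_shaded_grass(bw_sg, bw_sunlit_veg, t1=0.1, h=20, probe_radius=3):
    """Member-level readjustment of shaded-grass members (Equation (3), sketch).

    bw_sg         : binary image of shaded-grass members
    bw_sunlit_veg : binary image of sunlit vegetation members
    t1, h         : experimental defaults from the paper (0.1 and 20)
    probe_radius  : radius (pixels) used to dilate a member before measuring R
    """
    out = np.zeros_like(bw_sg, dtype=bool)
    labels, n = ndimage.label(bw_sg)
    probe = np.ones((2 * probe_radius + 1, 2 * probe_radius + 1), dtype=bool)
    for i in range(1, n + 1):
        member = labels == i
        ring = ndimage.binary_dilation(member, structure=probe) & ~member
        r = bw_sunlit_veg[ring].mean() if ring.any() else 0.0   # density R
        if r < t1:
            continue                       # treated as misclassified and dropped
        t = int(np.clip(round(h * r), 2, 10))                   # adaptive SE size
        se = np.ones((t, t), dtype=bool)
        # Close the member together with the surrounding sunlit vegetation and
        # keep only the result near the member, enlarging it into the patch.
        closed = ndimage.binary_closing(member | bw_sunlit_veg, structure=se)
        out |= closed & ndimage.binary_dilation(member, structure=se)
    return out
```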

3.3.3. Knitting Vegetation Members into Objects

The readjusted members are still scattered and mixed between classes. In general, as long as the members of a class are closely clustered, they can be morphologically knitted into objects, for example by bonding discrete members into objects with morphological closing, smoothing object boundaries with morphological opening, and removing small discrete members with area filtering. Figure 5c gives an example.

In addition, the member-level spatial analysis can also be used to remove false concaves inside vegetation patches. If a dark background patch is surrounded by a far larger vegetation patch, the former is most likely a false concave. The area ratio of the former to the latter can serve as a measure for the readjustment: the smaller the ratio, the higher the probability that the member is a false concave. Figure 5a,b provide an example.
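A possible implementation of the knitting and false-concave removal is sketched below (Python/SciPy); the structuring-element sizes, the minimum object area and the area-ratio threshold are illustrative values not given in the paper.

```python
import numpy as np
from scipy import ndimage

def knit_vegetation_objects(bw_veg, bw_dark_bg, close_size=11, open_size=5,
                            min_area=50, concave_ratio=0.1):
    """Knit vegetation members into objects and fill false concaves (sketch).

    bw_veg        : binary image of the readjusted vegetation members
    bw_dark_bg    : binary image of dark-background members
    close_size/open_size : square SE sizes for closing/opening (assumed values)
    min_area      : area filter for small discrete members (assumed value)
    concave_ratio : maximum area ratio of a hole to its surrounding patch
    """
    obj = ndimage.binary_closing(bw_veg, structure=np.ones((close_size,) * 2))
    obj = ndimage.binary_opening(obj, structure=np.ones((open_size,) * 2))
    # Area filtering of small discrete members.
    lbl, n = ndimage.label(obj)
    areas = ndimage.sum(obj, lbl, index=np.arange(1, n + 1))
    obj = np.isin(lbl, 1 + np.flatnonzero(areas >= min_area))
    # False concaves: dark-background holes that are small relative to the
    # vegetation patch surrounding them (Figure 5a,b).
    filled = ndimage.binary_fill_holes(obj)
    holes = filled & ~obj & bw_dark_bg
    lbl_h, nh = ndimage.label(holes)
    lbl_p, _ = ndimage.label(filled)
    for i in range(1, nh + 1):
        hole = lbl_h == i
        patch_area = np.sum(lbl_p == lbl_p[hole][0])
        if hole.sum() / float(patch_area) < concave_ratio:
            obj |= hole                                   # fill the false concave
    return obj
```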

3.3.4. Refining Vegetation Objects

The originally generated dual-temporal vegetation objects are sometimes insufficiently accurate for change detection and can be refined by human–computer interaction. Two new algorithms were developed for the refinement. One, named the hitting algorithm, is effective for locating false-present objects; the other, the point expansion algorithm, can be used to generate false-absent objects.

(1) Hitting algorithm

In the following, VS0 denotes the binary image of the originally extracted vegetation objects. It is relatively easy to locate the over-extracted patches because they already exist in VS0. With the conventional hitting algorithm, a false-present object is located when it is hit by the cursor.

(2) Point expansion algorithm

Determining a false-absent object, however, is another matter. Its absence from VS0 results from the fact that the majority of its pixels likely deviate from the mass center of their class in a given feature space. A new algorithm, referred to as point expansion, has been developed to capture such pixels. Figure 6 gives an example.

Our experiments have revealed good separability between vegetation and background in the NDVI–NDSV space. The point expansion is therefore conducted through the following steps: (1) taking a cursor-pointed pixel and its neighboring pixels as samples; (2) deriving an NDVI–NDSV relationship from these samples by a second-order nonlinear fit; and (3) conducting a seeded region growth within a buffer along this relationship, with the samples as the initial seeds, to capture the otherwise under-extracted vegetation pixels. The growth always begins from the more reliable end and searches for new vegetation members within the buffer. The weights gradually narrow the width of the buffer as the separability worsens. The growing process can be formulated as Equation (4).

$$V=\bigcup_{i=0,\ ND_i=\max VI,\ P_i=0.5}^{\,i=n,\ ND_i=\min VI,\ P_i=0.2}\Big[f_{\mathrm{VI\text{-}SV}}(ND_i)\pm P_i\,d_V\Big]\quad\text{s.t.}\quad\begin{cases}V_0=\mathrm{seed} & (a)\\ V_{i-1}\cap V=\mathrm{TRUE} & (b)\\ ER<T_2=\mathrm{TRUE} & (c)\end{cases}\qquad(4)$$
where V is the result set of one point expansion; V0 and Vi are the original set and the expanded set in the ith iteration, respectively; fVI-SV is the NDVI–NDSV relationship derived from the current sample set; seed refers to all pixels in the sample set; minVI, maxVI, minSV and maxSV are the end values of the NDVI–NDSV region decided by the samples; dV = maxSV − minSV; Pi is the weight in the ith iteration. ER and T2 are explained below.

In order to avoid the unexpected entrance of smooth dark background members, a constraint applied during the growth, termed the "expansion rate," is proposed (Equation (5)).

$$ER_i=\big(EA_i-EA_{i-1}\big)/EA_{i-1}\qquad(5)$$
where ERi and EAi are the expansion rate and expansion area respectively at the ith step of a growing process.

ER is usually stable during the iteration but increases suddenly when a larger body of dark smooth background (e.g., water or shaded roads) enters the iteratively expanding region. Therefore, condition (c) of Equation (4) serves as a constraint on ER to limit this unexpected entrance. T2 is a threshold with an experimental default of 3. The best separation between shaded vegetation and smooth dark background can be achieved by adjusting T2. Figure 6 provides an example. Additionally, the hitting algorithm and point expansion can also serve as tools for the later accuracy assessment (Section 4.1).
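The sketch below (Python/NumPy/SciPy, not the authors' implementation) illustrates the point-expansion growth of Equations (4) and (5): a second-order NDVI–NDSV fit from the seed samples, a buffer whose half-width P·dV narrows from 0.5·dV to 0.2·dV over the iterations, and a stop when the expansion rate ER reaches T2 = 3. The maximum iteration count and the one-pixel dilation used as the growth operator are assumptions.

```python
import numpy as np
from scipy import ndimage

def point_expansion(ndvi, ndsv, seed_mask, n_iter=30, p_start=0.5, p_end=0.2, t2=3.0):
    """Point-expansion growth of a cursor-seeded vegetation patch (sketch).

    ndvi, ndsv : 2-D feature rasters
    seed_mask  : binary image of the clicked pixel and its neighbors
    n_iter     : maximum number of growth iterations (assumed)
    p_start, p_end : buffer weights, narrowed from 0.5 to 0.2 as in the paper
    t2         : expansion-rate threshold (experimental default 3)
    """
    ys, xs = np.nonzero(seed_mask)
    coeff = np.polyfit(ndvi[ys, xs], ndsv[ys, xs], 2)     # two-order NDVI-NDSV fit
    d_v = ndsv[ys, xs].max() - ndsv[ys, xs].min()         # dV = maxSV - minSV
    grown = seed_mask.copy()
    for i in range(n_iter):
        p_i = p_start + (p_end - p_start) * i / max(n_iter - 1, 1)
        # Pixels inside the buffer of half-width p_i*dV around the fitted curve.
        in_buffer = np.abs(ndsv - np.polyval(coeff, ndvi)) <= p_i * d_v
        candidate = (ndimage.binary_dilation(grown) & in_buffer) | grown
        ea_prev, ea = grown.sum(), candidate.sum()
        er = (ea - ea_prev) / float(ea_prev)              # expansion rate, Equation (5)
        if er >= t2:
            break          # a large dark smooth body entered the region: stop
        grown = candidate
        if ea == ea_prev:
            break          # no further growth
    return grown
```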

3.4. The Object-Level Spatial Analysis

There are three main steps in this phase: (1) detecting spurious changes through the object-level spatial analysis and then revising both the dynamic and the stable object sets (Section 3.4.1); (2) recovering the misjudgments by interaction (Section 3.4.2); and (3) repairing the original dual-temporal vegetation object sets by merging the renewed stable set into both of them (Section 3.4.3).

3.4.1. Detecting Spurious Changes

Before the detection, the dual-temporal vegetation objects are divided into three sets, the added, the subtracted and the stable (Badd0, Bsub0 and Bsta0), which denote the new, the disappearing and the stable objects, respectively, through per-pixel binary logical operations. There are usually considerable spurious changes in the initial dynamic object sets (Badd0 and Bsub0) due to variations in sun angle and sensor look angle, and the accuracy of the stable objects is also seriously damaged. The efforts described above to improve member accuracy can merely reduce the area of a pseudo-change patch and make it easier to detect in this step.
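For reference, the per-pixel logical split reads as follows in a short sketch (Python/NumPy, assuming the two binary vegetation masks are already co-registered).

```python
import numpy as np

def split_change_sets(veg_t1, veg_t2):
    """Divide dual-temporal vegetation masks (NumPy boolean arrays) into the
    added, subtracted and stable sets (B_add0, B_sub0, B_sta0)."""
    b_add0 = veg_t2 & ~veg_t1     # new vegetation (present only at date 2)
    b_sub0 = veg_t1 & ~veg_t2     # disappearing vegetation (present only at date 1)
    b_sta0 = veg_t1 & veg_t2      # stable vegetation (present at both dates)
    return b_add0, b_sub0, b_sta0

# Example with two tiny hypothetical masks:
v1 = np.array([[1, 1, 0], [0, 1, 0]], dtype=bool)
v2 = np.array([[1, 0, 1], [0, 1, 0]], dtype=bool)
b_add0, b_sub0, b_sta0 = split_change_sets(v1, v2)
```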

The dependency rule for the object-level spatial analysis can be obtained by assessing how closely a dynamic object is located to a stable one. For example, a dynamic object is most likely part of a stable one if the area of the former is far less than that of the latter and the two have long shared boundaries. Assessed by such an inter-object dependency rule, most spurious changes can be detected. The rule can be formulated as Equation (6).

$$F=\bigcup_{i=1}^{n}F_i,\qquad F_i=\Big\{F_i\in(B_{add0}\cup B_{sub0}),\ F_i\ \Big|\ \big[A_i<T_3\ \&\ (F_{dila}(i)\cap B_{sta0})>0\big]\ \|\ \big[A_i<2T_3\ \&\ (F_{dila}(i)\cap B_{sta0})>T_4\big]\Big\}\qquad(6)$$
where F is the spurious-change object set; Fi is the ith object in F; Ai is the area of Fi; Fdila = Badd0 ⊕ se or Fdila = Bsub0 ⊕ se, where ⊕ is the sign of morphological dilation and se is the structural element (usually a 3-by-3 disk); Fdila(i) is the ith object in Fdila; n is the number of spurious-change patches; T3 is the upper area threshold, which is generally assumed to depend on image resolution and is therefore associated with the height (r) and width (c) of an image, T3 = ROUND(w·(r + c)·0.1), where w is a weight with a default of 1; w is a reserved parameter that should be increased when spurious changes are under-detected, and vice versa; T4 is the lower length limit of the shared boundaries, associated with Li (the perimeter of the ith dynamic object) by T4 = Li/4.

The rule can be described linguistically as follows: (1) when a dynamic object is located at an end of a stable one, the area of the former is smaller than T3 and the intersection of the dilated former with the latter is not empty; (2) when a dynamic object lies along a side of a stable one, the two objects have sufficiently long shared boundaries and the area of the former is smaller. If either condition is satisfied, the former is added to F. All objects in F are then merged into Bsta0 and removed from their original sets at the same time. Figure 7 gives an example of this process. Bsta, Badd and Bsub represent the renewed sets from Bsta0, Badd0 and Bsub0, respectively. The accuracy assessment reveals that more than 97% of spurious-change objects can be detected.
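A sketch of the rule is given below (Python/SciPy, one dynamic set at a time); the perimeter estimate used for Li and the interpretation of the intersection term as a pixel count are assumptions, while T3, T4 and the 3-by-3 dilation element follow the defaults stated above.

```python
import numpy as np
from scipy import ndimage

def detect_spurious_changes(b_dyn, b_sta0, w=1.0):
    """Object-level detection of spurious changes in one dynamic set
    (B_add0 or B_sub0) following Equation (6) (sketch).

    Returns (spurious_mask, cleaned_dynamic_mask); the spurious mask is then
    merged into B_sta0 as described in the text.
    """
    rows, cols = b_dyn.shape
    t3 = int(round(w * (rows + cols) * 0.1))              # upper area threshold
    se = np.ones((3, 3), dtype=bool)                      # 3-by-3 dilation element
    spurious = np.zeros_like(b_dyn, dtype=bool)
    labels, n = ndimage.label(b_dyn)
    for i in range(1, n + 1):
        obj = labels == i
        a_i = int(obj.sum())
        # Rough perimeter L_i: pixels removed by a one-pixel erosion.
        l_i = a_i - int(ndimage.binary_erosion(obj).sum())
        t4 = l_i / 4.0                                    # shared-boundary threshold
        shared = int((ndimage.binary_dilation(obj, structure=se) & b_sta0).sum())
        if (a_i < t3 and shared > 0) or (a_i < 2 * t3 and shared > t4):
            spurious |= obj
    return spurious, b_dyn & ~spurious

# The renewed sets of this section would then be, e.g.:
# f_add, b_add = detect_spurious_changes(b_add0, b_sta0)
# f_sub, b_sub = detect_spurious_changes(b_sub0, b_sta0)
# b_sta = b_sta0 | f_add | f_sub
```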

3.4.2. Recovering Misjudgments by Human–Computer Interaction

Another version of the hitting algorithm was developed for recovering the misjudgments. Recovering operations are executed in accordance with the properties of mouse click events: either a hit dynamic object is reassigned to Bsta, or the hit part of a stable object is restored to its original dynamic status. If the differences between before and after these revisions are regarded as errors, this new version of the hitting algorithm can also serve as a tool to assess the accuracy of vegetation change detection.

3.4.3. Repairing Dual-Temporal Vegetation Objects

It is not rare that a vegetation object is less shaded by buildings, or not sheltered by buildings at all, in one of the image pair due to variations in sun angle and sensor look angle. The objects in F usually indicate these variations. Therefore, taking these objects as supplements, the shaded and obscured vegetation objects can be repaired afterwards. Figure 8 provides an example.
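Continuing the previous sketches, the repair itself amounts to a union of the renewed stable set with each date's vegetation objects (a sketch under the same assumptions, not the authors' code).

```python
def repair_vegetation_objects(veg_t1, veg_t2, b_sta):
    """Repair both dual-temporal vegetation object sets by merging the renewed
    stable set B_sta (which now absorbs the detected spurious changes)."""
    return veg_t1 | b_sta, veg_t2 | b_sta
```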

4. Results and Discussion

In this section, the accuracy of extracting vegetation objects and the accuracy of detecting spurious changes are assessed separately.

Both the extraction and the detection address only binary splitting: the former divides pixels into vegetation and background, and the latter divides the vegetation objects into dynamic and stable ones. Instead of using the confusion matrix, which is suited to assessing the accuracy of classification among multiple classes, we use relative errors to indicate the splitting mistakes. In addition, the incorrect vegetation objects, including under- and over-extracted ones, and the incorrect changes, including false stable and false dynamic ones, cannot be known before the vegetation objects are extracted and the changes are detected. In most cases, such objects cannot be collected as reserved checking samples for accuracy assessment in the sampling phase, since they usually do not show the typical appearance of their respective classes. Therefore, they were collected as accurately as possible by human–computer interaction with the hitting and point expansion algorithms to guarantee an objective assessment.

4.1. The Accuracy of Extracting Vegetation Object

Four scenes (Figures 3–6) are used for the accuracy assessment, and Table 2 provides the results. Different combinations of spatial analyses are tested in each example to assess the spatial analyses at both the pixel and member levels. The feature space, the classifier model and the original classes are the same as those of the example in Section 3.2.

As mentioned previously, the false-absent and false-present patches in the original object set can be located and then repaired by human–computer interaction. The differences between before and after the repair are regarded as errors for the accuracy assessment (Table 2). It can be seen that the accuracy of automatic extraction of vegetation objects with both the pixel- and member-level spatial analyses is good, and only a few objects need to be recovered by interaction. The errors are no more than 2.56%, 7.04% and 12.15% for the cases with both the pixel- and member-level spatial analyses, with only the pixel-level spatial analysis, and with no spatial analysis, respectively.

4.2. The Accuracy of Detecting Vegetation Changes

The accuracy assessment is carried out using the hitting algorithm described in Section 3.4.2, with the differences between before and after the repair regarded as errors. Four pairs of images provided in Figures 7 and 9 are used for the assessment, and Table 3 provides the results. The assessment follows the first three steps of the object-level spatial analysis given at the beginning of Section 3.4.

The data in Table 3 reveal that the accuracy of detecting spurious changes through the object-level spatial analysis is also good enough that only a few objects are misjudged by the computer. The objects that need to be recovered by human–computer interaction amount to only 2.03% on average, and no more than 3.26%, of the area of all dynamic objects. The interactive recovery is mostly required to deal with off-nadir issues where vegetation is located close to tall buildings (e.g., the object indicated by the red arrow in Figure 9c1).

5. Conclusions

This paper presents a novel method to improve the accuracy of urban vegetation change detection using high-resolution remote sensing images. The following conclusions are supported by the work described above. (1) The proposed approach takes advantage of a series of spatial analyses at multiple levels, which significantly reduces the errors of the extracted objects and the detected spurious changes. (2) The spatial analyses at the different levels connect one after another, and each has proved indispensable in the entire process. The errors of the automatic extraction of vegetation objects are no more than 2.56%, 7.04% and 12.15% for the cases with both the pixel- and member-level spatial analyses, with only the pixel-level spatial analysis, and with no spatial analysis, respectively. The error of the automatic detection of spurious changes with the object-level spatial analysis is no more than 3.26%. This means that no more than 2.56% of incorrectly extracted vegetation objects and 3.26% of misjudged dynamic objects need to be recovered by human–computer interaction. (3) The main limitation of the approach is its dependence on the positional matching of the two date images; without accurate matching, the error of the detected spurious changes would increase heavily. However, accurate positional matching in a dense high-rise building area is difficult to achieve because some of the ground reference points used for matching are often sheltered by buildings. A method with better tolerance of this shortcoming would therefore be welcome.

Acknowledgments

We would like to express our gratitude to Brian Finlayson, University of Melbourne, Australia and Professor Minhe Ji, East China Normal University, China, for their reviews and many helpful suggestions. The work presented in this paper is supported by the National Natural Science Foundation of China (Grant No. 41071275).

Author Contributions

Jun Qin analyzed the data. Jianhua Zhou performed the experiments and wrote the paper. Bailang Yu revised the paper.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Coppin, P.; Jonckheere, I.; Nackaerts, K.; Muys, B. Digital change detection methods in ecosystem monitoring: A review. Int. J. Remote Sens 2004, 25, 1565–1596. [Google Scholar]
  2. Small, C.; Lu, W.T. Estimation and vicarious validation of urban vegetation abundance by spectral mixture analysis. Remote Sens. Environ 2006, 100, 441–456. [Google Scholar]
  3. Berberoglu, S.; Akin, A. Assessing different remote sensing techniques to detect land use/cover changes in the eastern Mediterranean. Int. J. Appl. Earth Obs. Geoinf 2009, 11, 46–53. [Google Scholar]
  4. Coops, N.C.; Wulder, M.A.; White, J.C. Identifying and describing forest disturbance and spatial pattern: Data selection issues and methodological implications. In Understanding Forest Disturbance and Spatial Pattern: Remote Sensing and GIS Approaches; Wulder, M.A., Franklin, S.E., Eds.; CRC Press: Boca Raton, FL, USA, 2006; pp. 33–60. [Google Scholar]
  5. Colditz, R.R.; Acosta-Velázquez, J.; Díaz Gallegos, J.R.; Vázquez Lule, A.D.; Rodríguez-Zúñiga, M.T.; Maeda, P.; Cruz López, M.I.; Ressl, R. Potential effects in multi-resolution post-classification change detection. Int. J. Remote Sens 2012, 33, 6426–6445. [Google Scholar]
  6. Chan, J.C.; Chan, K.; Yeh, A.G. Detecting the nature of change in an urban environment: A comparison of machine learning algorithms. Photogram. Eng. Remote Sens 2001, 67, 213–225. [Google Scholar]
  7. Rafiee, R.A.; Salman, M.A.; Khorasani, N. Assessment of changes in urban green spaces of Mashad city using satellite data. Int. J. Appl. Earth Obs. Geoinf 2009, 11, 431–438. [Google Scholar]
  8. Prakash, A.; Gupta, R.P. Land-use mapping and change detection in a coal mining area—A case study in the Jharia Coalfield, India. Int. J. Remote Sens 1998, 19, 391–410. [Google Scholar]
  9. Bindschadler, R.A.; Scambos, T.A.; Choi, H.; Haran, T.M. Ice sheet change detection by satellite image differencing. Remote Sens. Environ 2010, 114, 1353–1362. [Google Scholar]
  10. Coppin, P.R.; Bauer, M.E. Digital change detection in forest ecosystems with remote sensing imagery. Remote Sens. Rev 1996, 13, 207–234. [Google Scholar]
  11. Malila, W.A. Change vector analysis: An approach for detecting forest changes with Landsat. Proceedings of the Machine Processing of Remotely Sensed Data Symposium, West Lafayette, IN, USA, 3–6 June 1980; pp. 326–335.
  12. Wulder, M.A.; Ortlepp, S.M.; White, J.C.; Coops, N.C. Impact of sun-surface-sensor geometry upon multitemporal high spatial resolution satellite imagery. Can. J. Remote Sens 2008, 34, 455–461. [Google Scholar]
  13. Chen, G.; Hay, G.J.; Carvalho, L.M.T.; Wulder, M.A. Object-based change detection. Int. J. Remote Sens 2012, 33, 4434–4457. [Google Scholar]
  14. Small, C. Estimation of urban vegetation abundance by spectral mixture analysis. Int. J. Remote Sens 2001, 22, 1305–1334. [Google Scholar]
  15. Pu, R.L.; Gong, P.; Michishita, R.; Sasagawa, T. Spectral mixture analysis for mapping abundance of urban surface components from the Terra/ASTER data. Remote Sens. Environ 2007, 112, 939–954. [Google Scholar]
  16. Van der Linden, S.; Hostert, P. The influence of urban structures on impervious surface maps from airborne hyperspectral data. Remote Sens. Environ 2009, 113, 2298–2305. [Google Scholar]
  17. Prati, A.; Mikic, I.; Trivedi, M.M.; Cucchiara, R. Detecting moving shadows: Algorithms and evaluations. IEEE Trans. Pattern Anal. Mach. Intell 2003, 25, 918–923. [Google Scholar]
  18. Tsai, V.J.D. A comparative study on shadow compensation of color aerial images in invariant color models. IEEE Trans. Geosci. Remote Sens 2006, 44, 1661–1671. [Google Scholar]
  19. Zhou, J.H.; Zhou, Y.F.; Guo, X.H.; Ren, Z. Methods of extracting distribution information of plants at urban darken areas and repairing their brightness. J. E. China Norm. Univ. (Natl. Sci. Ed.) 2011, 6, 1–9. [Google Scholar]
  20. Gao, X.J.; Wan, Y.C.; Zheng, S.Y.; Li, J. Automatic shadow detection and compensation of aerial remote sensing images. Geomat. Inf. Sci. Wuhan Univ 2012, 37, 1299–1302. [Google Scholar]
  21. Hofmann, P.; Blaschke, T.; Strobl, J. Quantifying the robustness of fuzzy rule sets in object based image analysis. Int. J. Remote Sens 2011, 32, 7359–7381. [Google Scholar]
  22. Blaschke, T.; Hay, G.J.; Kelly, M.; Lang, S.; Hofmann, P.; Addink, E.; Feitosa, R.; van der Meer, F.; van der Werff, H.; van Coillie, F.; et al. Geographic object-based image analysis: A new paradigm in remote sensing and geographic information science. ISPRS Int. J. Photogram. Remote Sens 2014, 87, 180–191. [Google Scholar]
  23. Blaschke, T. Object based image analysis for remote sensing. ISPRS Int. J. Photogram. Remote Sens 2010, 65, 2–16. [Google Scholar]
  24. Baatz, M.; Schäpe, A. Multiresolution segmentation—An optimization approach for high quality multi-scale image segmentation. In Angewandte Geographische Informationsverarbeitung XII; Strobl, J., Blaschke, T., Griesebner, G., Eds.; Wichmann-Verlag: Heidelberg, Germany, 2000; pp. 12–23. [Google Scholar]
  25. Drăguţ, L.; Csillik, O.; Eisank, C.; Tiede, D. Automated parameterisation for multi-scale image segmentation on multiple layers. ISPRS Int. J. Photogram. Remote Sens 2014, 88, 119–127. [Google Scholar]
  26. Bennett, K.P.; Campbell, C. Support vector machines: Hype or Hallelujah? SIGKDD Explor 2000, 2, 1–13. [Google Scholar]
  27. Price, J.C. Estimating vegetation amount from visible and near infrared reflectances. Remote Sens. Environ 1992, 41, 29–34. [Google Scholar]
  28. Zhou, J.H.; Zhou, Y.F.; Mu, W.S. Mathematic descriptors for identifying plant species: A case study on urban landscape vegetation. J. Remote Sens 2011, 15, 524–538. [Google Scholar]
  29. Huete, A.R. A soil-adjusted vegetation index (SAVI). Remote Sens. Environ 1988, 25, 295–309. [Google Scholar]
Figure 1. The study region is in the downtown area of Shanghai, located on the eastern coast of mainland China. Three sub-regions are involved, shown with small red circles in (a). Only sub-region 2, the main test region, has multi-temporal images. (b) gives a more detailed image of this sub-region; the yellow rectangles show the case test sites used in the following figures, and the strings starting with "F" denote the figure numbers. Both background images were downloaded from Google Earth.
Figure 2. The flowchart of the multi-level spatial analyses.
Figure 3. An example of classification in the feature space of NDVI, NDSV, Cd, Dds, Ddm and De, where Dds and Ddm are two cases of Dd involving dark details of small and middle sizes, respectively. (a) The original image; (b) the members classified in the feature space including De; and (c) the members classified without De. It can be seen that shaded tree and dark background members, without De, are often mistaken for shaded grass (e.g., those in the elliptical and rectangular regions of different colors), but the result is significantly improved by adding De. The test site is located at Sub-site 3 (see Figure 1a).
Figure 4. Readjusting members by the member-level spatial analysis. (a) Original image and the samples of sunlit grass (apricot cross) and shaded grass (olive-green cross); (b) originally classified members; and (c) originally classified shaded grass members (white patches) and ensured ones (magenta sketched). It can be seen that the once poor separability between shaded grass and dark background members has been improved by using the inter-member dependency rule.
Figure 5. An example of removing false concaves and forming vegetation objects. (a) Original vegetation members; (b) after removing the false concaves; and (c) vegetation objects (cyan sketched). Comparing Figure 5a,c shows that most concave members within larger vegetation patches are tree crown shadows and can be readjusted by self-adaptive area filtering. The test site is located at Sub-site 1 (see Figure 1a).
Figure 6. An example of refining vegetation objects. (a) A case-fitting process at a mouse click event. Weight P will make the width of the buffer gradually narrowed (in this case P from 0.5 to 0.2); and (b) an example of the refining. The newly captured objects (sketched by magenta lines) reveal that the point expansion algorithm works well.
Figure 7. An example of vegetation change detection with spatial analyses. (a) The initial stable and detected spurious-change objects in a case site (in the orange rectangle in Figure 7b); (b) and (c) the readjusted objects with the background images of date 1 and date 2, respectively.
Figure 8. An example of repairing vegetation objects by the renewed stable objects. (a) Original vegetation objects (cyan-line sketched); (b) the stable objects in Bsta (cyan-line sketched) and the original vegetation objects (white patches), and (c) the repaired vegetation objects (cyan-line sketched). It can be seen that the repair will make the vegetation objects much more complete and accurate than usual.
Figure 9. Three additional examples for the accuracy assessment of detecting vegetation changes. The years of the image pairs are: (a) 1993/2003; (b) 2000/2005; (c) 2003/2006. In each pair, the left panel takes the image of date 1 as the background and the right panel the image of date 2.
Table 1. Four features for the classification.
Sign: NDVI. Name: Normalized Difference Vegetation Index [27]. Meaning: NDVI = (IR − R)/(IR + R), where IR and R are the DNs of the NIR and red bands, respectively. Applicability: distinguishing between vegetation and background.

Sign: NDSV. Name: Normalized Difference of Saturation and Brightness [19]. Meaning: NDSV = (S − V)/(S + V), where S and V are saturation and brightness in the hue-saturation-brightness color system. Applicability: distinguishing between shaded vegetation and dark background.

Sign: Cd. Name: Density of low-NIR pixels. Meaning: Cd = Sp/A [a], where Sp is a supplemental pixel set, Sp = {BWlow ∩ !BWhigh, BWlow | NDVI > a × TNDVI & SAVI > 1.5 × a × TNDVI, BWhigh | NDVI > TNDVI} [b,c]. Applicability: extracting winter deciduous tree crowns.

Sign: Dd. Name: Density of dark details [28]. Meaning: Dd(k) = COUNT(BWd(k))/Aplant(k) [a,d], with BWd = {BWd ∈ I, BWd | (I • se − I) > c × d}, where BWd(k) is the kth block of a binary image of dark details; d is the mean size of dark details; I is a gray image; • and se are the sign of morphological closing and its structural element; c is a coefficient. Applicability: Dd is sensitive to variations in crown roughness; different Dd matrices associated with d can often serve as independent features in a feature space (e.g., the example in Figure 3).

[a] A = block area; Aplant = vegetation-covered area in a block. [b] BWhigh and BWlow = binary images of vegetation cover associated with the normal and lowered thresholds, respectively; SAVI = soil-adjusted vegetation index [29]; the condition SAVI > 1.5·a·TNDVI can often work well for BWlow to extract low-NIR-reflection crowns, where a is a coefficient with experimental defaults of 0.6 in summer and 0.3 in winter. [c] TNDVI = the threshold for the segmentation of NDVI, with a widely used experimental default of 0.17 for extracting vegetation from NIR images. [d] COUNT(•) = counting function for the number of true members in a binary image.

Table 2. Assessment of accuracy of extracting vegetation object. AVS is the area of total vegetation; Eunder = (Aunder/AVS) × 100(%) and Eover = (Aover/AVS) × 100(%) where Aunder and Aover are the area of the false-absent and the false-present, respectively. Etotal is the maximum error and Etotal = Eunder + Eover.
Figure No. | AVS (pixels) | No spatial analysis: Eunder / Eover / Etotal (%) | Pixel-level analysis only: Eunder / Eover / Etotal (%) | Pixel- and member-level analyses: Eunder / Eover / Etotal (%)
4 | 335,340 | 2.66 / 5.48 / 8.14 | 0.86 / 4.44 / 5.30 | 0.47 / 0.51 / 0.98
5 | 195,250 | 1.63 / 7.04 / 8.67 | 0.87 / 6.17 / 7.04 | 0.85 / 1.71 / 2.56
6 | 108,640 | 2.98 / 9.17 / 12.15 | 2.33 / 1.74 / 4.08 | 1.34 / 0.50 / 1.84
3 | 191,510 | 1.26 / 9.57 / 10.83 | 0.77 / 1.58 / 2.34 | 0.46 / 0.81 / 1.27
Mean | | 2.13 / 7.81 / 9.95 | 1.21 / 3.48 / 4.69 | 0.78 / 0.88 / 1.66
Maximum | | 2.98 / 9.57 / 12.15 | 2.33 / 6.17 / 7.04 | 1.34 / 1.71 / 2.56
Table 3. Assessment of the accuracy of detecting vegetation change. Adyn0 is the area of all initial dynamic objects; Adyn is the area of the dynamic objects after the interactive recovery. Ddyn = (Adyn − Adyn0) × 100/Adyn0. Davg and Dmax are the average and the maximum of Ddyn, respectively.
Figure No. | Year | Image size | Adyn0 (pixels) | Adyn (pixels) | Ddyn (%)
7 | 2000/2003 | 1020 × 1125 | 80,756 | 83,392 | 3.26
9a | 1993/2003 | 441 × 496 | 47,245 | 48,393 | 2.43
9b | 2000/2005 | 512 × 496 | 29,681 | 29,538 | −0.48
9c | 2003/2006 | 893 × 954 | 67,890 | 66,566 | −1.95

Davg = 2.03%; Dmax = 3.26%
