Forests 2014, 5(6), 1304-1330; doi:10.3390/f5061304

Article
Evaluation and Comparison of QuickBird and ADS40-SH52 Multispectral Imagery for Mapping Iberian Wild Pear Trees (Pyrus bourgaeana, Decne) in a Mediterranean Mixed Forest
Salvador Arenas-Castro 1,*, Juan Fernández-Haeger 2 and Diego Jordano-Barbudo 2
1 Department of Land Use and Improvement, Faculty of Environmental Sciences, Czech University of Life Sciences Prague, Kamýcká 129, 165 21 Prague 6 (Suchdol), Czech Republic
2 Department of Botany, Ecology and Plant Physiology (Area of Ecology), Faculty of Sciences, University of Cordoba, Cordoba 14071, Spain; E-Mails: bv1fehaj@uco.es (J.F.-H.); bv1jobad@uco.es (D.J.-B.)
* Author to whom correspondence should be addressed; E-Mail: b62arcas@uco.es; Tel.: +420-22-438-2144; Fax: +420-23-438-1848.
Received: 9 January 2014; in revised form: 15 April 2014 / Accepted: 29 May 2014 / Published: 11 June 2014

Abstract

The availability of images with very high spatial and spectral resolution from airborne or satellite sensors is opening new possibilities for fine-scale vegetation analysis, such as the identification and classification of individual tree species. To evaluate the potential of these images, a study was carried out to compare the spatial, spectral and temporal resolution of QuickBird and ADS40-SH52 imagery for discriminating and identifying individuals of the Iberian wild pear (Pyrus bourgaeana) within a mixed Mediterranean forest. This is a typical species of the Mediterranean forest, but its biology and ecology are still poorly known. The images were subjected to different correction processes and the data were homogenized. Vegetation classes and individual trees were identified on the images, which were then classified pixel by pixel using two supervised classification methods (Maximum Likelihood and Support Vector Machines). The classification values were satisfactory. When the classifiers were compared, Support Vector Machines was the algorithm that provided the best results in terms of overall accuracy; the QuickBird image showed the highest overall accuracy (86.16%) when this algorithm was applied. In addition, individuals of the Iberian wild pear were discriminated with a probability of over 55% when the Maximum Likelihood algorithm was applied. From the perspective of optimizing the sampling effort, these results are a starting point for research on the abundance, distribution and spatial structure of P. bourgaeana at different scales, in order to quantify the conservation status of this species.
Keywords: vegetation mapping; QuickBird imagery; ADS40-SH52 imagery; multispectral image; mosaicking; Maximum Likelihood; Support Vector Machines; accuracy assessment; Iberian wild pear; Pyrus bourgaeana

1. Introduction

One of the main applications of remote sensing in recent decades has been the mapping of vegetation [1] at high spatial and spectral resolution [2], in order to quantify the status and the environmental requirements of certain species and to prioritize conservation efforts [3,4]. This creates a simultaneous need for high spatial and spectral resolution in the images generated by remote sensors (hyperspectral and/or multispectral) on satellites or airplanes, since such data provide better classification results, increased reliability and enhanced visual quality [5]. The high spatial and spectral resolution of these systems therefore offers new opportunities not only to classify and discriminate vegetation units or forest types [6,7], but also to discriminate or locate individuals of a species within a complex vegetation matrix [8,9,10]. This is an important tool for managing and conserving biodiversity, since knowledge about the spatial structure and geographical distribution of species can reduce sampling effort. In addition, these systems can help to expand existing datasets at regional and global scales [11].

The tools available for this kind of work have evolved over time. For years, mapping and classification of individual tree species have ranged from aerial photointerpretation [12,13] to multispectral [14] and hyperspectral [15,16] image classification from commercial satellites such as Landsat, Ikonos and QuickBird. However, new airborne digital sensors such as Ultracam [17] and ADS40/ADS80 [18] have spectral and radiometric characteristics superior to those of analog cameras [19], and their data provide very high spatial resolution. These digital sensors have opened a new window for research into applying remote-sensing techniques to locate individual trees with high accuracy [20,21].

One of the advantages of high spatial and spectral resolution images is that they enable individual trees to be identified, especially when the vegetation is not too dense [22,23]. This is the situation in coniferous and deciduous temperate forests, where such images provide high accuracy in identifying individual tree species [24]. In the Mediterranean forest, however, classification results are more moderate, mainly because of the high density and diversity of species that coexist in the same space and the lack of space between trees. This causes overlaps between the crowns, which can generate erroneous spectral information [25,26].

Until a few decades ago, the Mediterranean forest was less thoroughly researched and less well known than coniferous and deciduous temperate forests, mainly because it has lacked commercial interest [27]. However, there has been a significant increase in scientific work on the Mediterranean evergreen open woodlands (dehesas) of southern Europe in the last decade. Studies applying remote-sensing techniques have had more or less specific objectives [28,29,30], generally aiming to enhance knowledge about the origin, structure and function of these systems. The Mediterranean region is considered a biodiversity hotspot [31,32], but knowledge about the abundance and spatial distribution of some species typical of the Mediterranean forest is still limited.

This is the case with the Iberian wild pear (Pyrus bourgaeana), a deciduous tree species typical of the Mediterranean forest and the dehesas of central and southern Spain. This tree can reach 10 m in height, and it has an irregular crown with an average diameter of approximately 5 m. Notably, the species plays an important trophic role in the ecological balance of these systems [33]. It produces very palatable leaves as well as a good quantity of fleshy fruits throughout the summer, which are highly attractive to phytophagous animals and herbivores at a time when other resources are scarce. However, it is not an abundant species. It is rare, and it is less well known than holm oak (Quercus ilex) and cork oak (Q. suber); its ecology [34,35] and its geographical distribution [36] remain poorly known. In order to conserve this species, it is of critical importance to know and map the spatial distribution of P. bourgaeana.

For these reasons, and considering the size and characteristics of the crown of this species, Arenas-Castro et al. [37] evaluated various methods for atmospheric correction and fusion of multispectral (color-infrared) QuickBird imagery. They aimed to determine which method gave the best results for locating and distinguishing P. bourgaeana in a study plot in Sierra Morena (Andalusia, Spain). They performed a supervised classification, based on pixel-by-pixel analysis, using the Maximum Likelihood method. According to the indices used to assess the spatial and spectral quality of the resulting images, Fast Line-of-Sight Atmospheric Analysis of Spectral Hypercubes (FLAASH) [38] was the best atmospheric correction method, and IHS (or HSI) processing [39] was the best image fusion method. After the supervised classification of the QuickBird image, atmospherically corrected and fused to 0.60 m of spatial resolution, the confusion matrix for 11 classes yielded a kappa value of 78.1% and an overall accuracy of 80.42%. It was therefore possible to discriminate different categories in the study area. However, the user's and producer's accuracy values for the Pyrus class were low (39.89% and 37.25%, respectively), mainly because it was confused with the mixed vegetation class and with trees of other species. Another explanation could be related to the date on which the QuickBird image was acquired (July 2008). During the summer, many deciduous species, such as P. bourgaeana, respond to the summer drought by entering leaf senescence. This influences their spectral response and makes them harder to distinguish.
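The IHS fusion step can be illustrated with a minimal sketch. This is not the exact ENVI implementation used in the earlier study; it is the simple additive variant of intensity substitution, with illustrative array shapes:

```python
import numpy as np

def ihs_fuse(ms, pan):
    """Simple additive IHS pan-sharpening sketch.

    ms  : float array (3, H, W) -- R, G, B bands resampled to the pan grid
    pan : float array (H, W)    -- panchromatic band

    Replaces the intensity component I = (R + G + B) / 3 with the pan
    band, which is equivalent to adding (pan - I) to every band; hue and
    saturation are left unchanged.
    """
    intensity = ms.mean(axis=0)
    return ms + (pan - intensity)[None, :, :]
```

The fused result keeps the multispectral color information but inherits the spatial detail of the panchromatic band, since its per-pixel intensity equals the pan value by construction.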

The Maximum Likelihood algorithm has been one of the most widely used pixel-based classifiers for mapping evergreen and deciduous tree species [15], and is considered a standard approach to thematic mapping from remotely sensed imagery; its classification accuracy is often used as a benchmark for newly developed non-parametric classifiers [40]. Various learning-based algorithms have been developed in recent years to obtain more accurate and more reliable information from satellite images. One of them is the Support Vector Machine algorithm [41], a machine-learning classifier that has been used widely for remote-sensing data classification [42] and is considered among the best classifiers in remote sensing [43].

Therefore, and because no remote-sensing information is available for P. bourgaeana, the main objective of this work was to evaluate and compare the potential of color-infrared QuickBird images and aerial orthophotos obtained with the ADS40-SH52 linear scanning airborne sensor, at different spatial and temporal resolutions, through a performance evaluation of two classification methods, Maximum Likelihood (ML) and Support Vector Machine (SVM). More specifically, our objectives were: (1) to test whether the use of SVM classifiers improved image classification compared with the ML algorithm; (2) to assess the optimum spatial resolution for the examined classifiers; and (3) to analyze the accuracy of the classifications for mapping and discriminating P. bourgaeana trees within a mixed Mediterranean forest. The ability to use remote-sensing techniques to distinguish and map wild pear trees over large areas of inaccessible patches of open woodland, pasture or scrub could facilitate the collection of field data and improve the conservation management of this woody plant.

2. Materials and Methods

2.1. Study Area

The study area is located in the Sierra Morena (37°53′53.53″ N, 4°58′49.61″ W), in the province of Cordoba (Andalusia, Spain). The plot covers about 230 ha. It is crossed by various anthropogenic structures (roads, boundaries, etc.), as well as several seasonal streams and temporary pools. The current vegetation generally results from the management of holm oak (Quercus ilex) and cork oak (Q. suber) woodland, which covers a large area of the Sierra Morena. This type of management has been intensified by human intervention, mainly for livestock use, hunting and agriculture. The main ecosystem is the typical oak dehesa (Q. ilex subsp. ballota), with trees scattered among high-diversity grasslands. There are some limited patches of pristine sparse Mediterranean forest formations, consisting of evergreen shrubs belonging to several families (Cistaceae, Labiatae, Rosaceae, Ericaceae, Anacardiaceae and Aristolochiaceae, among others). A section of the study area is a traditionally managed olive grove (Olea europaea), now abandoned. Although there is some extensive sheep and goat farming, management is mainly focused on hunting (deer and wild boar).

2.2. Satellite Data and Pre-Processing

2.2.1. QuickBird Imagery

After studying the phenology of the species involved in the study, both evergreen and deciduous, we chose two scenes from the QuickBird 2 satellite taken in different seasons and years: July 2008 and May 2009 (Figure 1). Their spectral and spatial features were very well suited to our needs, as the crown size of the Iberian wild pear, the many other species involved in the study, and the other cover types required images with finer spatial resolution. The products and data gathered are shown in Table 1.

Figure 1. Images acquired by QuickBird 2. (a) Panchromatic image from May 2009 (0.6 m); (b) Multispectral image from July 2008 (2.4 m); and (c) Multispectral image from May 2009 (2.4 m).
Table 1. QuickBird Imagery Data.

Product: BUNDLE (PAN 0.6 m + MS 2.4 m)

Date         | Capture Area | Latitude     | Longitude    | Time (GMT) | Julian Day
15 July 2008 | 25 km²       | 37°54′21.51″ | 4°59′55.96″  | 11:30:47   | 197
2 May 2009   | 64 km²       | 37°54′4.25″  | 4°58′10.40″  | 13:17:21   | 122

The multispectral image has 16-bit radiometric resolution, 2.4 m spatial resolution (ground sample distance, GSD) and four spectral bands: Red (R), Green (G), Blue (B) and Near-infrared (Nir). The panchromatic image has 16-bit radiometric resolution, 0.6 m spatial resolution and a single spectral band.

2.2.2. ADS40-SH52 (Airborne Digital Sensor, 2nd Generation) Imagery

Orthoimagery with 25 cm spatial resolution was obtained from the ADS40-SH52 airborne digital sensor during a photogrammetric flight over the study plot (STEREOCARTO SL, 7 May 2009, between 12:02 and 14:02 UTC). The weather and visibility conditions in the study area had previously been checked to ensure that they were suitable for the task. The data were delivered as four-band orthoimagery (R, G, B and Nir) with 16-bit radiometric resolution (TIFF), keeping the original radiometry from capture. The data acquisition was structured by zones, and each zone contained several parts. In order to get a complete picture of the study area, the original images were combined into a mosaic. For this task, we used the Mosaicking tool [44], which provides the tools necessary for the common mosaic requirements, such as blending seam edges through gradual degradation (Feathering), making image edges transparent (Edge Feathering) and matching histograms between adjacent images (Histogram Matching). Virtual Mosaic is able to create and display mosaics without large output files. Finally, an orthophotograph with four spectral bands (R, G, B and Nir), the original radiometry (16-bit TIFF format) and 25 cm spatial resolution was obtained from five different images (Figure 2).
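The histogram-matching step used to remove brightness jumps along the seamlines can be sketched with classic CDF matching. This is a simplified stand-in for ENVI's Histogram Matching option, not its actual code:

```python
import numpy as np

def match_histogram(source, reference):
    """Map source pixel values so their histogram matches the reference.

    Classic CDF matching: each source value is replaced by the reference
    value at the same cumulative-distribution position, the same idea as
    histogram matching between adjacent flight passes in a mosaic.
    """
    s_values, s_idx, s_counts = np.unique(source.ravel(),
                                          return_inverse=True,
                                          return_counts=True)
    r_values, r_counts = np.unique(reference.ravel(), return_counts=True)
    s_cdf = np.cumsum(s_counts).astype(float) / source.size
    r_cdf = np.cumsum(r_counts).astype(float) / reference.size
    matched = np.interp(s_cdf, r_cdf, r_values)
    return matched[s_idx].reshape(source.shape)
```

Applied band by band to the darker pass, this pulls its radiometry toward the reference pass so the seamline becomes far less visible.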

A comparison was made between the flight images to check whether spatial or spectral information was lost during the mosaic creation process. The mosaic, and also each of the image passes used to create it (7663 × 8832 pixels), provided separately and without any correction, were degraded to different spatial resolutions (60, 120, 180, 200, 220 and 240 cm), and the same classification process was performed using the same regions of interest (ROIs).

We therefore obtained five different types of images, all expressed as reflectance values in every band: two original multispectral images (798 × 920 pixels) acquired by the QuickBird sensor in 2008 and 2009, both with 240 cm spatial resolution; two images fused by the IHS method (3191 × 3678 pixels) with 60 cm spatial resolution, obtained from the multispectral and panchromatic images of each year; and the mosaic of the 2009 orthophotograph (7663 × 8832 pixels), obtained as described above, with four spectral bands (RGBNir) and 25 cm spatial resolution.

In order to make the images comparable with each other, a degradation process (from high to low resolution) was applied to each of them. Images at 60, 120, 180, 200, 220 and 240 cm spatial resolution were obtained by cutting and resampling with the "pixel aggregate" method. In this way, we were also able to observe whether the fusion process had an impact on the results.
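"Pixel aggregate" resampling amounts to averaging non-overlapping blocks of pixels. A minimal numpy sketch, assuming an integer degradation factor and cropping incomplete edge blocks:

```python
import numpy as np

def pixel_aggregate(band, factor):
    """Degrade a band by averaging non-overlapping factor x factor blocks.

    Mirrors the pixel-aggregate resampling used to bring, e.g., a 60 cm
    image down to 120-240 cm. Edge rows/columns that do not fill a whole
    block are cropped before averaging.
    """
    h, w = band.shape
    h, w = h - h % factor, w - w % factor
    blocks = band[:h, :w].reshape(h // factor, factor, w // factor, factor)
    return blocks.mean(axis=(1, 3))
```

For example, degrading a 60 cm band to 240 cm uses factor 4, so each output pixel is the mean of a 4 × 4 block of input pixels.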

Figure 2. (a) Images corresponding to pass 1; (b) Pass 2; (c) Mosaic with uncorrected edge effect; (d) Mosaic corrected using the Edge Feathering method and histogram matching.

2.2.3. Atmospheric Correction of the Images

Atmospheric correction removes the effects of scattering and absorption of electromagnetic radiation caused by gases and particles suspended in the atmosphere, ensuring that variations in the patterns are independent of weather conditions. There are different models depending on the parameters and variables used. In our case, we used atmospheric modeling [45], which is the most complex correction technique and requires atmospheric data from the day the image was captured. We therefore chose Fast Line-of-Sight Atmospheric Analysis of Spectral Hypercubes (FLAASH). This correction is based on MODTRAN 4 (Moderate Resolution Atmospheric Transmission), with tested codes and algorithms parameterized for each image, and it provides very accurate results. However, this correction should not be performed until the digital numbers of the original image have been transformed to radiance values. Thus, a radiometric calibration was done in accordance with the requirements of each manufacturer and for each type of image, i.e., for QuickBird [46] and for ADS40-SH52 [47].
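The QuickBird digital-number-to-radiance step follows the standard gain formula, with per-band coefficients read from the scene's .IMD metadata file. The numbers in the example below are placeholders for illustration, not actual calibration values:

```python
def dn_to_radiance(dn, abs_cal_factor, effective_bandwidth):
    """Convert QuickBird digital numbers to top-of-atmosphere spectral
    radiance (W m^-2 sr^-1 um^-1), the input that FLAASH expects.

    abs_cal_factor and effective_bandwidth are per-band values taken
    from the image metadata delivered with each scene.
    """
    return dn * abs_cal_factor / effective_bandwidth

# Placeholder metadata values for a single band (illustrative only):
radiance = dn_to_radiance(100, 0.016, 0.08)
```

The same conversion is applied band by band (dn can equally be a numpy array), after which the radiance cube is passed to the atmospheric correction.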

2.3. Reference Data Collection

Based on field data and careful photointerpretation of the aerial imagery, homogeneous areas with similar spectral responses (classes) were identified according to various attributes (color, texture, hue, shape and position). The areas with the highest spectral purity for each class served as the basis for developing training patterns or regions of interest (ROIs) coinciding with the different types of land cover in the study area. These regions were chosen randomly and independently, and the sample size (number of pixels) was proportional to the area each class occupies in the study area [48,49]. To enhance the comparability of results between classifications of different dates, we used the same training areas as far as possible. During this phase, we identified the units, formations and species involved in this study. We selected 11 training and testing regions of interest (classes) spread evenly over the area, corresponding to different vegetation units (dry grass, wet grass, mixed woodland and riparian forest) and to individual species of the mixed Mediterranean forest (P. bourgaeana, O. europaea and Q. ilex). In addition, soil, saturated soil, ponds and shade were considered as additional ROI classes (Table 2). For each generated image, the ROIs were converted to the corresponding spatial resolution by prior conversion to a vector file and subsequent export as ROIs to the image in ENVI. Data from these regions of interest were used to classify the images through the training and testing phases.
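The proportional training/testing design can be mimicked with a stratified random split. A sketch with synthetic pixels (the class count matches the 11 classes, but band values and sample sizes are invented, not the ROI data of Table 2); scikit-learn's train_test_split is one way to preserve per-class proportions:

```python
import numpy as np
from sklearn.model_selection import train_test_split

# Toy stand-in for ROI pixels: rows are 4-band reflectance vectors,
# labels are 11 land-cover classes (100 synthetic pixels per class).
rng = np.random.default_rng(0)
X = rng.normal(size=(1100, 4))
y = np.repeat(np.arange(11), 100)

# Stratifying the split keeps each class's share of pixels the same in
# the training and testing sets, mirroring a proportional ROI design.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.5, stratify=y, random_state=0)
```

With equal class sizes and a 50/50 split, every class contributes exactly half of its pixels to training and half to testing.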

2.4. Classification Techniques

Two widely used classification techniques were selected: one parametric (Maximum Likelihood, ML) and one non-parametric (Support Vector Machine, SVM). ML is a very popular classifier in pattern recognition and image classification [50], based on the assumption of normally distributed data for each class and on accurate selection of the training samples [51]. However, in practice the nature of the distribution is rarely known, and it can be preferable to use non-parametric classifiers that are free from such assumptions [52]. For this purpose, we used the SVM classifier with the Gaussian kernel known as the Radial Basis Function (RBF), following the recommendations of [53]. SVM is a binary classifier that locates the optimal hyperplane between two classes, separating them in a new high-dimensional feature space. To do this, it takes into account only the training samples that lie on the edge of the class distributions, known as support vectors. It does not require the assumption of normality, and it has often been found to provide higher classification accuracies than other widely used techniques [54]. Both classifiers (ML and SVM) were applied to every image to investigate the effect of the classifier.
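The two classifiers can be sketched with scikit-learn on synthetic "pixels": Gaussian Maximum Likelihood with a full covariance matrix per class corresponds to quadratic discriminant analysis, and the SVM uses the RBF kernel as in the paper. Band means and sample sizes below are invented for illustration:

```python
import numpy as np
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis
from sklearn.svm import SVC

# Synthetic stand-in for ROI pixels: 4-band reflectance vectors for
# three well-separated classes (200 training pixels each).
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(m, 0.05, size=(200, 4))
               for m in (0.2, 0.4, 0.6)])
y = np.repeat([0, 1, 2], 200)

# Maximum Likelihood: one Gaussian per class with its own covariance,
# i.e. quadratic discriminant analysis.
ml = QuadraticDiscriminantAnalysis().fit(X, y)

# SVM with the Gaussian (RBF) kernel; the one-vs-one extension to
# multiple classes is handled internally by SVC.
svm = SVC(kernel="rbf", gamma="scale", C=1.0).fit(X, y)

acc_ml = ml.score(X, y)
acc_svm = svm.score(X, y)
```

In a real workflow the ROI pixels of Table 2 would replace the synthetic arrays, the models would be fitted on the training pixels only, and `predict` would be applied to every pixel of the image to produce the thematic map.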

Table 2. Number of pixels for each type of image, resolution and class.

Columns: Resolution (cm) | Pyrus | Olea | Quercus | Wet Grass | Dry Grass | Mixed Woodland | Riparian Forest | Soil | Saturated Soil | Ponds | Shade

QB 2008 Fused
60  | 570/524 | 245/258 | 1891/1847 | 1598/1556 | 1786/1768 | 2849/2524 | 1428/1578 | 1467/1442 | 1421/1448 | 1002/945 | 1501/1482
120 | 140/128 | 66/67 | 475/472 | 391/376 | 445/425 | 680/623 | 364/396 | 382/329 | 357/360 | 237/245 | 369/349
180 | 57/54 | 29/29 | 199/203 | 171/166 | 192/204 | 315/291 | 165/166 | 153/159 | 166/168 | 111/105 | 174/153
200 | 48/46 | 29/18 | 167/167 | 144/127 | 163/159 | 254/227 | 133/150 | 145/131 | 121/138 | 91/84 | 132/137
220 | 43/35 | 24/22 | 146/133 | 113/113 | 129/125 | 210/196 | 105/113 | 105/105 | 105/108 | 75/67 | 110/117
240 | 36/46 | 79/31 | 111/121 | 94/100 | 110/106 | 174/166 | 95/103 | 74/90 | 87/97 | 56/59 | 105/102

QB 2008 Non-Fused
60  | 528/512 | 232/239 | 1891/1847 | 1598/1556 | 1786/1768 | 2849/2524 | 1428/1578 | 1467/1442 | 1421/1448 | 1002/945 | 1497/1482
120 | 126/125 | 61/62 | 475/472 | 391/376 | 445/425 | 680/623 | 364/396 | 382/329 | 357/360 | 237/245 | 368/349
180 | 57/57 | 29/32 | 199/203 | 171/166 | 192/204 | 315/291 | 165/166 | 153/159 | 166/168 | 111/105 | 174/153
200 | 48/49 | 29/19 | 167/167 | 144/127 | 163/159 | 254/227 | 133/150 | 145/131 | 121/138 | 91/84 | 132/137
220 | 43/35 | 24/22 | 146/133 | 113/113 | 129/125 | 210/196 | 105/113 | 105/105 | 105/108 | 75/67 | 111/117
240 | 52/46 | 28/31 | 120/121 | 98/100 | 111/106 | 162/159 | 92/103 | 96/90 | 88/97 | 68/59 | 101/102

QB 2009 Fused
60  | 501/117 | 198/457 | 1600/1886 | 1272/1503 | 1657/1844 | 3530/2675 | 1488/1517 | 1468/1451 | 546/132 | 1052/1017 | 1631/1576
120 | 106/106 | 108/118 | 418/504 | 372/388 | 465/454 | 617/684 | 399/361 | 363/378 | 39/43 | 278/258 | 385/392
180 | 55/55 | 56/50 | 193/227 | 155/161 | 210/206 | 283/300 | 180/170 | 161/162 | 41990.00 | 126/119 | 165/170
200 | 45/45 | 46/34 | 153/190 | 146/139 | 175/162 | 234/253 | 154/136 | 141/130 | 16/19 | 100/97 | 142/146
220 | 41/41 | 42/37 | 127/155 | 108/110 | 137/139 | 194/205 | 124/114 | 114/117 | 41954.00 | 90/77 | 111/117
240 | 33/33 | 40/34 | 107/126 | 93/103 | 116/116 | 159/174 | 108/96 | 95/90 | 41946.00 | 63/64 | 97/89

QB 2009 Non-Fused
60  | 333/133 | 190/181 | 1740/1740 | 1461/1503 | 1867/1867 | 2506/2675 | 1599/1517 | 1421/1451 | 59/18 | 980/981 | 1306/1306
120 | 72/72 | 46/42 | 418/418 | 372/388 | 465/465 | 617/684 | 399/361 | 363/378 | 41804.00 | 256/246 | 328/328
180 | 38/38 | 27/25 | 193/193 | 155/161 | 210/210 | 283/300 | 180/170 | 161/162 | 41916.00 | 118/114 | 136/136
200 | 33/33 | 26/18 | 153/153 | 146/139 | 175/175 | 234/253 | 154/136 | 141/130 | 41885.00 | 94/94 | 120/120
220 | 27/27 | 21/18 | 127/127 | 108/110 | 137/137 | 194/205 | 124/114 | 114/117 | 155/146 | 84/75 | 97/97
240 | 33/38 | 40/34 | 107/126 | 93/103 | 116/116 | 159/174 | 108/96 | 95/90 | 11/3 | 63/64 | 97/89

Flight 2009 (Mosaic)
60  | 646/649 | 504/505 | 1993/1913 | 1599/1596 | 1670/1621 | 2425/2675 | 1565/1597 | 1465/1423 | 1208/1241 | 1045/961 | 1678/1668
120 | 77/162 | 111/131 | 231/461 | 509/408 | 381/399 | 801/673 | 559/401 | 364/363 | 301/303 | 258/250 | 377/418
180 | 64/68 | 60/59 | 214/217 | 169/182 | 171/183 | 277/301 | 181/169 | 163/159 | 140/143 | 116/110 | 185/200
200 | 53/59 | 49/45 | 179/168 | 151/143 | 150/138 | 218/244 | 140/148 | 130/131 | 107/97 | 92/90 | 152/150
220 | 43/50 | 37/30 | 156/141 | 120/127 | 122/114 | 179/206 | 117/112 | 103/95 | 89/95 | 80/73 | 121/125
240 | 42/34 | 35/23 | 137/114 | 97/104 | 102/104 | 152/166 | 106/105 | 91/83 | 81/78 | 55/57 | 107/98

Training and testing pixel counts are separated by a slash (training/testing).

2.5. Accuracy Assessment and a Comparison of Overall Classification Accuracy

In order to evaluate the effect of the degradation process on the images, a qualitative visual inspection of the classified images was made. In addition, classification accuracy was quantified from the confusion (error) matrix, from which we analyzed the overall accuracy (OA), the producer's accuracy (PA) and the user's accuracy (UA). We decided not to use the kappa coefficient, because it lacks the probabilistic interpretation that OA, PA and UA have [55], and because it has been shown to be an inappropriate map accuracy measure for comparing thematic maps, particularly when the same reference data are used throughout [56,57]. We therefore compared the best results in terms of OA, PA and UA for the Pyrus class, for the images acquired by the QuickBird satellite and by the ADS40-SH52 sensor. Furthermore, statistically rigorous criteria are needed for an objective comparison of classification accuracies. We used the McNemar test without continuity correction to assess the statistical significance of the difference in OA between each pair of classifiers (ML and SVM), because we had used identical reference data to generate the confusion matrices and thus obtain the proportion of correctly allocated cases [58]. The non-parametric McNemar test is based on a 2 × 2 matrix and compares the frequencies of cases correctly allocated in one classification but misclassified in the other. Under the null hypothesis that the two classifiers have the same error rate, the statistic follows a chi-square distribution; with p < 0.05, the two classifications are considered significantly different at the 95% confidence level [59].
Training the two classifiers with identical data sets, and evaluating them with a single set of pixels distinct from the training data, ensured that differences in accuracy were due to the pixel assignment process, and therefore to the algorithm used [60].
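The accuracy measures and the McNemar statistic can be written down directly from their definitions. This is a sketch; the row/column convention of the confusion matrix (rows = classified map, columns = reference data) is an assumption:

```python
import numpy as np

def accuracy_measures(cm):
    """OA, producer's and user's accuracy from a confusion matrix
    with rows = classified map and columns = reference data."""
    oa = np.trace(cm) / cm.sum()
    pa = np.diag(cm) / cm.sum(axis=0)   # per-class, reference totals
    ua = np.diag(cm) / cm.sum(axis=1)   # per-class, map totals
    return oa, pa, ua

def mcnemar_chi2(b, c):
    """McNemar statistic without continuity correction.

    b and c are the counts of test pixels correct under one classifier
    but wrong under the other (the off-diagonal cells of the 2 x 2
    agreement matrix). Compare against the chi-square distribution with
    1 degree of freedom: values above 3.841 are significant at p = 0.05.
    """
    return (b - c) ** 2 / (b + c)
```

For example, if 30 test pixels are correct under SVM but wrong under ML while only 10 show the reverse pattern, the statistic is (30 − 10)² / 40 = 10.0, well above 3.841, so the two classifiers would differ significantly.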

The software used for pre-processing, image classification and classification accuracy assessment was ENVI 4.8 (ITT Visual Information Solutions, Boulder, CO, USA).

3. Results

A total of 84 classifications were performed: two classifiers at six spatial resolutions for each type of image, including the images from the mosaic creation process. In general, the results were good, with OA between 52% and 86% (Table 3 and Table 4).

3.1. Visual Analysis of the Images

A visual analysis provided a preliminary assessment of the spatial quality of the images. Figure 3 illustrates the QuickBird images from July 2008 and May 2009, fused (60 cm spatial resolution) and non-fused (240 cm spatial resolution), and the flight image from May 2009 degraded (high to low spatial resolution) to 240 cm. At first sight, there are no significant differences, except in the color of the images. However, it must be borne in mind that the images were obtained in different seasons and different years.

Figure 3. Comparison between images with different spatial resolution: (a) QuickBird 2008 Fused (60 cm); (b) QuickBird 2008 Non-fused (240 cm); (c) QuickBird 2009 Fused (60 cm); (d) QuickBird 2009 Non-fused (240 cm); and (e) Flight 2009 (240 cm).

3.2. Assessment of the Mosaic Creation Process

Analyzing the results for the separately degraded images, there are differences according to the classification method and the image used (Table 3). Focusing on the classifier, for pass 1 the OA with the ML method varied from 70.17% at 200 cm down to 63.19% at 60 cm; with the SVM method, OA was 67.87% at 200 cm and 63.88% at 60 cm. For the mosaic image, the values ranged from 70.56% at 200 cm to 66.56% at 240 cm with the ML method, while with the SVM method the values were 69.68% at 180 cm and 67.73% at 60 cm. For pass 2, the results were 66.28% at 180 cm and 63.45% at 120 cm for ML, and 66.53% at 60 cm and 63.62% at 240 cm for SVM. Overall, the accuracy gain between methods differs for each image, and is more pronounced for pass 1 and the mosaic. Comparing between images (Table 3), there is a general tendency to gain accuracy at intermediate spatial resolutions, which is strongest for the mosaic. The method providing greater accuracy was ML for pass 1 and the mosaic, and SVM for pass 2. Taken together, the highest accuracies across all images indicate that no spectral information is lost in the mosaic creation process described above.

Table 3. Analysis of the resolution between images and classifiers based on overall accuracy (%).

Resolution (cm) | Pass 1 ML | Pass 1 SVM | Pass 2 ML | Pass 2 SVM | Mosaic ML | Mosaic SVM
60  | 63.19 | 63.88 | 64.94 | 66.53 | 67.42 | 67.73
120 | 64.80 | 64.07 | 63.45 | 64.73 | 69.03 | 67.87
180 | 67.42 | 66.67 | 66.28 | 65.67 | 70.07 | 69.68
200 | 70.17 | 67.87 | 64.17 | 65.58 | 70.56 | 68.93
220 | 68.68 | 66.98 | 65.04 | 64.51 | 66.69 | 68.06
240 | 67.56 | 64.45 | 65.60 | 63.62 | 66.56 | 69.25

In addition, for the flight image obtained as a mosaic from two separate passes, the maximum value for OA was 70.56% at 200 cm spatial resolution applying the ML method (Figure 4).

Figure 4. Comparison between images based on the highest indices.

3.3. Accuracy Assessment and a Comparison of Overall Classification Accuracy

We applied the same degradation methodology as in the previous section to the QuickBird images from July 2008 and May 2009, in order to determine which date gave better results in terms of spatial and spectral resolution. For this purpose, we compared the OA values for the fused and non-fused images at different spatial resolutions, applying both the ML and SVM classification methods, to assess the effect of changing spatial resolution on classifier performance.

3.3.1. QuickBird Image from 2008

After the fusion and degradation processes of the image from July 2008, the images were classified. Figure 5 shows the OA values for fused (F) and non-fused (NF) images, left and right respectively, for each classification method.

Figure 5. Overall accuracy analysis for the QuickBird 2008 image applying ML and SVM classifiers: (a) fused image (F); (b) non-fused image (NF).

For the fused image, the OA values ranged between 80.42% at 60 cm of spatial resolution and 83.77% at 220 cm of spatial resolution for the ML classifier, and 80.87% at 120 cm of spatial resolution, and 85.21% at 180 cm of spatial resolution for the SVM classifier. For the non-fused image, the values ranged between 79.39% at 120 cm of spatial resolution, and 85.71% at 220 cm of spatial resolution for the ML classifier, while for the SVM classifier, the values were 81.04% at 120 cm of spatial resolution and 86.16% at 200 cm of spatial resolution.

Therefore, the OA index of the classification for the QuickBird fused image from 2008 took its maximum value (85.21%) at 180 cm of spatial resolution, and for the QuickBird non-fused image from 2008, the maximum OA index (86.16%) was at 200 cm of spatial resolution, in both cases applying the SVM method.

3.3.2. QuickBird Image from 2009

For the QuickBird image from May 2009, we proceeded in the same way as for the 2008 image. However, the data derived from the classification are considerably different (Figure 6). For the fused image, the OA values for the ML classifier ranged between 60.16% at 60 cm and 64.24% at 200 cm of spatial resolution, and for the SVM classifier between 52.04% at 240 cm and 73.65% at 200 cm. For the non-fused image, the ML values ranged between 57.12% at 240 cm and 65.94% at 200 cm, while the SVM values ranged between 64.63% at 240 cm and 71.44% at 180 cm.

Figure 6. Overall accuracy analysis for the QuickBird 2009 image applying ML and SVM classifiers: (a) fused image (F); (b) non-fused image (NF).


Therefore, the OA index of the classification for the QuickBird fused image from 2009 took its maximum value (73.65%) at 200 cm of spatial resolution, and for the non-fused image, the maximum OA index (71.44%) was at 180 cm of spatial resolution, in both cases applying the SVM method. Table 4 presents a comparison of the highest OA values for the QuickBird and aerial flight images at different resolutions.

Table 4. Highest values of overall accuracy for the QuickBird and aerial flight images. Values marked a and b were classified with the ML and SVM methods, respectively.

Spatial Resolution (cm) | QB 2008 F | QB 2008 NF | QB 2009 F | QB 2009 NF | Flight 2009
60  | 81.69 b | 81.85 b | 69.2 b  | 70.55 b | 67.73 b
120 | 80.95 a | 81.04 b | 71.89 b | 70.83 b | 69.03 a
180 | 85.21 b | 84.88 b | 72.51 b | 71.44 b | 70.07 a
200 | 83.1 b  | 86.16 b | 73.65 b | 70.81 b | 70.56 a
220 | 84.48 b | 85.71 a | 72.57 b | 68.54 b | 68.06 b
240 | 81.19 b | 82.56 a | 63.25 a | 64.63 b | 69.25 b

Therefore, after applying a supervised classification by the Support Vector Machine method, based on selected regions of interest (ROIs) (Pyrus, Olea, Quercus, wet grass, dry grass, mixed woodland, riparian forest, soil, saturated soil, ponds and shade), the highest values in the classification were obtained for the QuickBird July 2008 image, non-fused and at 200 cm of spatial resolution.
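The per-pixel supervised classification workflow (train on ROI pixels, then label every pixel) can be illustrated with a minimal sketch; a Gaussian Maximum Likelihood classifier, the other method used in the study, is shown because it is compact to implement with NumPy alone. The band values, ROI sizes and two-class setup are hypothetical, and the implementation actually used in the study may differ in detail:

```python
import numpy as np

def ml_classify(pixels, roi_sets):
    """Maximum Likelihood classification: model each class by the mean and
    covariance of its ROI (training) pixels, then assign every pixel to the
    class with the highest Gaussian log-likelihood."""
    scores = []
    for samples in roi_sets:
        mu = samples.mean(axis=0)
        cov = np.cov(samples, rowvar=False)
        cov_inv = np.linalg.inv(cov)
        _, logdet = np.linalg.slogdet(cov)
        d = pixels - mu
        # log N(x | mu, cov), dropping the constant term
        scores.append(-0.5 * (logdet + np.einsum("ij,jk,ik->i", d, cov_inv, d)))
    return np.argmax(scores, axis=0)

rng = np.random.default_rng(1)
# Two hypothetical, well-separated classes in 4-band feature space
roi_a = rng.normal(0.2, 0.05, size=(60, 4))   # e.g. one vegetation class
roi_b = rng.normal(0.7, 0.05, size=(60, 4))   # e.g. a soil class
labels = ml_classify(np.vstack([roi_a[:5], roi_b[:5]]), [roi_a, roi_b])
print(labels.tolist())   # first five pixels -> 0, last five -> 1
```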

To assess the performance of the two classification methods (ML and SVM) through the statistical significance of the differences in overall accuracy, we used the July 2008 QuickBird image, because it provided the highest overall accuracy values for all studied spatial resolutions (Table 4). The results of the McNemar Chi-squared test after comparing pairs of overall accuracy values (Table 5) were statistically significant in the following cases: at 60 cm (NF), 120 cm (NF) and 180 cm (F), SVM outperforms ML (p < 0.001). However, ML performed better than SVM at 220 cm (NF) of resolution (p < 0.05). There are no significant differences between the classifiers at 200 cm (NF) and 240 cm (NF) of resolution. Therefore, at high spatial resolution the SVM classifier performs better than ML, with significant increases in overall accuracy. However, from 2 m of spatial resolution onwards, the highest overall accuracy values correspond to the ML classifier, but the statistical significance decreases or disappears.

Table 5. The McNemar Chi-squared test for comparing the overall accuracy (%) of the classifications by the ML and SVM methods in the July 2008 QuickBird image at different spatial resolutions. Significance is indicated as ns (not significant); * (p < 0.05); *** (p < 0.001).

Image   | Resolution (cm) | ML    | SVM   | Chi-Square
QB08 NF | 60  | 79.86 | 81.85 | 28.1 ***
QB08 NF | 120 | 79.39 | 81.04 | 10.4 ***
QB08 F  | 180 | 82.15 | 85.21 | 30.1 ***
QB08 NF | 200 | 84.94 | 86.16 | 1.2 ns
QB08 NF | 220 | 85.71 | 83.77 | 4.1 *
QB08 NF | 240 | 82.56 | 82.27 | 0.2 ns
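The McNemar statistic compares the two classifiers on the same set of validation pixels, using only the discordant pairs; a minimal sketch on hypothetical outcomes (without continuity correction, which the study may or may not have applied):

```python
import numpy as np

def mcnemar_chi2(correct_a, correct_b):
    """McNemar Chi-square for paired classifiers: `correct_a` and
    `correct_b` are boolean arrays indicating whether classifier A
    (resp. B) labelled each validation pixel correctly."""
    f01 = np.sum(~correct_a & correct_b)   # A wrong, B right
    f10 = np.sum(correct_a & ~correct_b)   # A right, B wrong
    return (f01 - f10) ** 2 / (f01 + f10)

# Hypothetical per-pixel outcomes for two classifiers
a = np.array([1, 1, 0, 0, 1, 0, 1, 1, 0, 1], dtype=bool)
b = np.array([1, 1, 1, 1, 1, 0, 1, 1, 1, 1], dtype=bool)
print(mcnemar_chi2(a, b))   # 3 discordant pairs, all favouring b -> 3.0
```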

3.4. Comparison between Images According to the Pyrus Class

The producer’s accuracy (PA) is the proportion of actual Pyrus pixels that are classified as Pyrus, out of the total reference pixels of the Pyrus class. The user’s accuracy (UA) is the proportion of pixels classified as Pyrus that actually are Pyrus, out of the total number of pixels classified as Pyrus. Table 6 shows the PA and UA values for the Pyrus class for each type of image analyzed.

Table 6. Comparison between images based on the producer’s and user’s (P/U) accuracy values for the Pyrus class. Values marked a and b were classified with the ML and SVM methods, respectively (for the Flight 2009 image, the left and right columns, respectively).

Spatial Resolution (cm) | QB 2008 F a | QB 2008 NF a | QB 2008 F b | QB 2008 NF b | QB 2009 F a | QB 2009 NF a | QB 2009 F b | QB 2009 NF b | Flight 2009 a | Flight 2009 b
60  | 39.9/37.2 | 23.6/40.2 | 32.0/50.6 | 16.4/28   | 14.3/6.8  | 28.8/14.8 | 8.4/36   | 21/11.1   | 35.7/28.9 | 22.5/38.3
120 | 37.5/43.6 | 19.2/28.9 | 27.3/49.3 | 10.4/17.3 | 25.4/9.9  | 29.1/12.5 | 5.6/6.6  | 5/6.2     | 30.8/23.9 | 20.3/42.8
180 | 40.7/36.6 | 29.8/36.9 | 22.2/44.4 | 14/32     | 40/16.5   | 34.2/14.6 | 18.1/15  | 10.1/13.7 | 38.2/20.8 | 8.8/16.6
200 | 43.5/39.2 | 38.8/46.3 | 17.4/38.1 | 16.3/33.3 | 31.1/10.5 | 12.1/13.8 | 2.2/10.3 | 9.3/11    | 27.1/19.7 | 7.5/14.3
220 | 45.7/42.1 | 40/45.1   | 20/36.8   | 8.5/21.4  | 29.2/13.3 | 7.4/5.8   | 10/15.6  | 8.8/10.3  | 44/21.8   | 4/15.8
240 | 43.5/55.5 | 47.8/44   | 15.2/33.3 | 30.4/40   | 51.5/20.7 | 15.8/7.9  | 3.3/12   | 4/9.7     | 47/21.9   | 8.8/14.8
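The PA and UA definitions can be made concrete with a short sketch; the 2 × 2 matrix below is a hypothetical reduction of an error matrix to Pyrus versus everything else, with counts chosen so that the Pyrus figures reported later for Table 7 (20 correct pixels, a row total of 36 and a column total of 46) are reproduced:

```python
import numpy as np

def producer_user_accuracy(matrix, k):
    """PA and UA (%) for class k of an error matrix whose rows are the
    classified categories and whose columns are the actual (reference)
    categories."""
    pa = 100 * matrix[k, k] / matrix[:, k].sum()   # column total = actual pixels
    ua = 100 * matrix[k, k] / matrix[k, :].sum()   # row total = classified pixels
    return pa, ua

m = np.array([[20, 16],    # classified Pyrus: 20 correct, 16 other pixels
              [26, 38]])   # classified other: 26 missed Pyrus pixels
pa, ua = producer_user_accuracy(m, 0)
print(round(pa, 2), round(ua, 2))   # 43.48 55.56
```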

For the fused image from July 2008, the maximum classification value (OA = 85.21%) at 180 cm resolution presents values of 40.74% for PA and 36.67% for UA, applying the SVM method. However, at 240 cm resolution and applying the ML classifier, the OA value is lower (80.60%), but it presents the highest values: 43.48% for PA and 55.56% for UA. The results are different for the non-fused image from July 2008. The highest OA value (86.16%) occurs at 200 cm resolution, applying the SVM method, with 16.33% for PA and 33.33% for UA. However, as in the case of the fused image, the OA value is a few points lower (84.94%) at 200 cm resolution, but the PA and UA values are higher (PA: 38.78%; UA: 46.34%) when we apply the ML method.

For the image from May 2009, we analyzed the PA and UA values in the same way. The maximum classification value for the fused image (OA = 73.65%), obtained with the SVM method at 200 cm resolution, corresponds to 2.22% for PA and 10.33% for UA. At 60 cm resolution, the OA is lower (69.2%), but the accuracies are higher, at 8.4% for PA and 36% for UA, also when the SVM method is applied. However, the results are different for the non-fused image from 2009. When we apply the SVM classifier, the highest OA value (71.44%) occurs at 180 cm, with 10.12% PA and 13.79% UA. When we apply the ML method, the OA value is a few points lower (65.02%) at 60 cm resolution, but the accuracy values are higher (PA: 28.83%; UA: 14.88%).

Finally, for the Flight 2009 images, the highest OA value (70.56%) occurs at 200 cm of resolution when we apply the ML method, with 27.1% PA and 19.7% UA. However, at 120 cm of resolution (OA = 67.87%), applying the SVM classifier yields 20.3% PA and 42.8% UA.

We analyze below the results of the error matrices with the highest classification values for the QuickBird fused image from July 2008 at 240 cm of resolution and classified by the ML method (Table 7), for the QuickBird fused image from May 2009 at 60 cm and classified by the SVM method (Table 8), and from the Flight 2009 image at 120 cm and classified by the SVM method (Table 9).

For the image from July 2008, the PA for the individual categories ranged between 25.81% for the Olea class and 95.28% for the dry grass class, while the UA was between 10.13% for the Olea class and 100% for the soil and saturated soil classes. The Pyrus class provided a UA of 55.56% and a PA of 43.48%. The classification confused some pixels of the Pyrus class with other vegetation formations, mainly Olea, and even with shadows. The results change considerably when we analyze the image from May 2009. The PA ranged from 8.38% for the Pyrus class to 99.72% for the soil class, whereas the UA ranged between 22.89% for the saturated soil class and 99.76% for the wet grass class. The Pyrus class provided a UA of 35.9% and a PA of 8.38%. In this case, the classification mistook most of the pixels of the Pyrus class for the Quercus, mixed woodland and dry grass classes. In the case of the Flight 2009 image, the PA ranged from 19.52% for the Quercus class to 99.72% for the soil class, whereas the UA ranged between 30.63% for the Olea class and 99.67% for the saturated soil class. The Pyrus class provided a UA of 42.86% and a PA of 20.37%. In this case, the classification mistook most of the pixels of the Pyrus class for the Olea class.

Table 7. Error matrix of the QuickBird fused image from July 2008 at 240 cm of resolution and classified by the ML method.

Classified Category \ Actual Category | Pyrus | Olea | Quercus | Wet grass | Dry grass | Mixed woodland | Riparian forest | Soil | Saturated soil | Ponds | Shade | Total | User’s Accuracy (%)
Pyrus | 20 | 6 | 1 | 2 | 2 | 2 | 0 | 0 | 0 | 0 | 3 | 36 | 55.56
Olea | 9 | 8 | 7 | 2 | 0 | 26 | 0 | 15 | 5 | 3 | 4 | 79 | 10.13
Quercus | 7 | 9 | 90 | 0 | 0 | 4 | 0 | 0 | 0 | 0 | 1 | 111 | 81.08
Wet grass | 0 | 2 | 0 | 89 | 0 | 0 | 3 | 0 | 0 | 0 | 0 | 94 | 94.68
Dry grass | 0 | 0 | 0 | 0 | 101 | 0 | 0 | 1 | 4 | 0 | 4 | 110 | 91.82
Mixed woodland | 6 | 4 | 14 | 0 | 0 | 133 | 15 | 0 | 0 | 1 | 1 | 174 | 76.44
Riparian forest | 0 | 1 | 2 | 7 | 0 | 0 | 85 | 0 | 0 | 0 | 0 | 95 | 89.47
Soil | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 74 | 0 | 0 | 0 | 74 | 100
Saturated soil | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 87 | 0 | 0 | 87 | 100
Ponds | 0 | 0 | 2 | 0 | 0 | 1 | 0 | 0 | 0 | 50 | 3 | 56 | 89.29
Shade | 4 | 1 | 5 | 0 | 3 | 0 | 0 | 0 | 1 | 5 | 86 | 105 | 81.90
Total | 46 | 31 | 121 | 100 | 106 | 166 | 103 | 90 | 97 | 59 | 102 | |
Producer’s Accuracy (%) | 43.48 | 25.81 | 74.38 | 89 | 95.28 | 80.12 | 82.52 | 82.22 | 89.69 | 84.75 | 84.31 | |
Table 8. Error matrix of the QuickBird fused image from May 2009 at 60 cm and classified by the SVM method.

Classified Category \ Actual Category | Pyrus | Olea | Quercus | Wet grass | Dry grass | Mixed woodland | Riparian forest | Soil | Saturated soil | Ponds | Shade | Total | User’s Accuracy (%)
Pyrus | 42 | 5 | 28 | 0 | 9 | 26 | 2 | 0 | 0 | 0 | 5 | 117 | 35.9
Olea | 28 | 83 | 86 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 198 | 41.92
Quercus | 103 | 68 | 949 | 0 | 51 | 346 | 32 | 0 | 0 | 2 | 49 | 1600 | 59.31
Wet grass | 0 | 3 | 0 | 1269 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1272 | 99.76
Dry grass | 38 | 54 | 137 | 0 | 1369 | 39 | 18 | 0 | 0 | 0 | 2 | 1657 | 82.62
Mixed woodland | 239 | 112 | 460 | 9 | 0 | 2110 | 504 | 0 | 0 | 2 | 94 | 3530 | 59.77
Riparian forest | 27 | 129 | 152 | 225 | 0 | 0 | 955 | 0 | 0 | 0 | 0 | 1488 | 64.18
Soil | 0 | 0 | 3 | 0 | 11 | 0 | 0 | 1447 | 7 | 0 | 0 | 1468 | 98.57
Saturated soil | 0 | 0 | 14 | 0 | 403 | 0 | 0 | 4 | 125 | 0 | 0 | 546 | 22.89
Ponds | 2 | 0 | 1 | 0 | 0 | 36 | 4 | 0 | 0 | 655 | 354 | 1052 | 62.26
Shade | 22 | 3 | 56 | 0 | 0 | 118 | 2 | 0 | 0 | 358 | 1072 | 1631 | 65.73
Total | 501 | 457 | 1886 | 1503 | 1844 | 2675 | 1517 | 1451 | 132 | 1017 | 1576 | |
Producer’s Accuracy (%) | 8.38 | 18.16 | 50.32 | 84.43 | 74.24 | 78.88 | 62.95 | 99.72 | 94.70 | 64.41 | 68.02 | |


Table 9. Error matrix of the Flight 2009 image at 120 cm and classified by the SVM method.

Classified Category \ Actual Category | Pyrus | Olea | Quercus | Wet grass | Dry grass | Mixed woodland | Riparian forest | Soil | Saturated soil | Ponds | Shade | Total | User’s Accuracy (%)
Pyrus | 33 | 43 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 77 | 42.86
Olea | 46 | 34 | 0 | 0 | 0 | 0 | 29 | 0 | 0 | 0 | 2 | 111 | 30.63
Quercus | 7 | 9 | 90 | 0 | 31 | 46 | 19 | 0 | 0 | 0 | 29 | 231 | 38.96
Wet grass | 3 | 0 | 3 | 321 | 9 | 104 | 69 | 0 | 0 | 0 | 0 | 509 | 63.06
Dry grass | 1 | 0 | 10 | 18 | 335 | 6 | 0 | 0 | 1 | 0 | 10 | 381 | 87.93
Mixed woodland | 25 | 20 | 258 | 63 | 2 | 379 | 39 | 0 | 0 | 0 | 15 | 801 | 47.32
Riparian forest | 42 | 25 | 91 | 6 | 22 | 128 | 243 | 0 | 0 | 0 | 2 | 559 | 43.47
Soil | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 362 | 2 | 0 | 0 | 364 | 99.45
Saturated soil | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 300 | 0 | 0 | 301 | 99.67
Ponds | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 248 | 10 | 258 | 96.12
Shade | 5 | 0 | 9 | 0 | 0 | 10 | 2 | 0 | 0 | 2 | 349 | 377 | 92.57
Total | 162 | 131 | 461 | 408 | 399 | 673 | 401 | 363 | 303 | 250 | 418 | |
Producer’s Accuracy (%) | 20.37 | 25.95 | 19.52 | 78.68 | 83.96 | 56.32 | 60.60 | 99.72 | 99.01 | 99.20 | 83.49 | |


4. Discussion

The degradation process carried out over the images at different spatial resolutions produced smoothed images which are much more useful for distinguishing shapes, features and sizes, especially for the small-sized crown of P. bourgaeana. In addition, the analysis of the aerial flight images showed that the construction of mosaics from “parts” of an image, following the methodology described in this paper, facilitates the treatment of these parts without losing spectral information [61,62]. Thus, for the flight image from 2009, obtained as a mosaic, the maximum overall accuracy value (70.56%) was for the image with a spatial resolution of 200 cm, applying the Maximum Likelihood method, with statistically significant differences (McNemar test Chi-square = 18.1, p < 0.05) from the Support Vector Machine method.

We applied the same methodology for the mosaic image as for the QuickBird images from July 2008 and May 2009, in order to observe which date gave better results in terms of spatial, spectral and temporal resolution, based on the classification results. In this case, the comparison was made between fused images by the IHS method and non-fused images, for the same date and between different dates. Visual analysis of the QuickBird images with different levels of degradation, and after applying two supervised classification methods (Maximum Likelihood and Support Vector Machine), showed that there were no significant differences. However, slight differences did appear in the color of the images, because they had been obtained in different months.

After applying the two classification methods to the QuickBird fused and non-fused images, at different spatial resolutions and from different seasons, the highest overall accuracy values were obtained for the July 2008 image. For the fused image, the overall accuracy was 85.21% at 180 cm of spatial resolution, and for the non-fused image it was 86.16% at 200 cm of spatial resolution. In contrast, the overall accuracy for the QuickBird fused image from 2009 took its maximum value of 73.65% at 200 cm of spatial resolution, and for the non-fused image the maximum value was 71.44% at 180 cm of spatial resolution. In all cases, the classification method was the Support Vector Machines algorithm.

These results are in consonance with the superior performance of the SVM method compared with the ML method reported by other authors [63]. However, when we compared the highest overall accuracy values corresponding to the July 2008 QuickBird image, the differences between the two algorithms were in some cases not significant. The McNemar Chi-squared test demonstrated that there were differences depending on the spatial resolution that was used. A comparison of the overall accuracy values revealed that either classifier at intermediate spatial resolution (180–200 cm) was superior to the corresponding classifier at higher and lower resolutions, since the overall accuracy increases when the pixel size is larger. However, the statistical significance varied from higher to lower spatial resolution, and was more pronounced at higher spatial resolution. A possible explanation for the lower accuracies of the high spatial resolution images is that the smaller pixel size introduced higher spectral variance within a class, and this resulted in lower spectral separability among the classes. Conversely, at low spatial resolution, the spectral signatures of the different classes became over-generalized, and the spectral separability among them was also reduced. This reasoning could also explain the slight difference between the fused image and the non-fused image. Therefore, the highest overall accuracy values were found at 1.8–2 m pixel size, regardless of the classification method that was used.
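The variance argument can be demonstrated numerically: averaging independent fine pixels into larger ones divides within-class variance roughly by the number of pixels aggregated. The single-band, single-class image below, with arbitrary reflectance statistics, is purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)
# One "class": 120 x 120 fine-resolution pixels whose reflectance varies
# randomly within the class (mean 0.4, standard deviation 0.1)
fine = rng.normal(0.4, 0.1, size=(120, 120))

# Aggregate 4 x 4 blocks, i.e. 60 cm -> 240 cm pixels
coarse = fine.reshape(30, 4, 30, 4).mean(axis=(1, 3))

# For independent pixel values, within-class variance shrinks by about
# the block size (16), which helps class separability
print(fine.var() / coarse.var())   # close to 16
```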

Furthermore, the accuracy at class level may differ between algorithms, and may thus influence the choice of one classifier or the other, depending on the purposes of the user. The analysis of the user’s and producer’s accuracy data for the Pyrus class showed that the differences are highly dependent on the date of acquisition, on degradation, and on the type of image (fused or non-fused). If any of these characteristics of the image differ, the overall rates can be reinforced at the expense of the partial indices, and vice versa. Although the results in terms of overall classification were lower than for the non-fused image, and there were no significant differences in overall accuracy when we compared the classification methods (McNemar Chi-squared test = 0.25, ns), the QuickBird image from July 2008, fused by the IHS method, degraded to 240 cm spatial resolution and classified by the Maximum Likelihood method, provided accuracy values higher than 55% when classifying wild pears in the study area (producer’s accuracy = 43.48% and user’s accuracy = 55.56%). The Maximum Likelihood algorithm identified the Pyrus class with greater accuracy than SVM, with differences between classifiers of 28.26% for PA and 22.23% for UA.

A hypothesis that can explain why the overall classification is better for the July image is that P. bourgaeana can still maintain a considerable percentage of green leaves on its branches during the summer, as evidenced by its phenological cycle, in which vegetative development extends from March to early August. There may be confusion with other vegetation because, at this time of the year, the spectral response of vegetation, and mainly of deciduous trees such as the wild pear, is strongly influenced by the process of leaf senescence and shedding, mainly related to the summer drought. It would therefore be advisable to capture the selected scene at a date between the end of spring and early summer. However, the lower results extracted from the QuickBird image from May 2009 revealed that the greatest confusion is among vegetation classes. This may be mainly due to the overlap between the crowns of the trees, pastures and scrub, because in springtime most plant species put out leaves and flower at the same time, and this may lead to an overlap between spectral signatures. This could be confirmed by an analysis of the spectral differences between the wild pear and the surrounding vegetation, based on their spectral signatures and on hyperspectral information, in order to minimize the effect of spectral overlap [64,65].

The small crown size of this species and its low abundance and scattered distribution across the Mediterranean evergreen forest and open woodland (dehesa) make individual trees difficult to locate, limiting the search and selection of pixels required to carry out such studies. We must take into account that factors such as the density or the morphology of the leaves of the crown, and the shadows that they project, could influence the classification results [66]. Even with a manual delineation of the crowns, the yield of the classification may be low [67]. Clearly, it is a difficult task to classify from multispectral images (with accuracies below 50%) individual trees of limited size, immersed in a matrix of mixed vegetation, especially when their crowns overlap to some extent [68,69]. However, although this process, which was based on the interpretation of single pixels, has improved the visual and spectral quality of the multispectral images, we are considering the use of hyperspectral data in the future to improve the classification results. In addition, other techniques could also be tested in future work, such as object-based classification applied to the original images using groups of pixels [70].

5. Conclusions

The degradation process at different spatial resolutions that was carried out over the images enabled us to discriminate shapes, features and sizes, especially for the small-sized crown of P. bourgaeana, without losing spectral information.

Both the Maximum Likelihood and Support Vector Machines algorithms performed moderately well in discriminating individual P. bourgaeana trees from color-infrared QuickBird images and aerial orthophotos obtained from the ADS40-SH52 airborne sensor, achieving overall accuracies of up to 86%. In general, Support Vector Machines gave better accuracies than the Maximum Likelihood algorithm for all fused and non-fused images. In fact, the highest overall accuracy value was provided by Support Vector Machines for the July 2008 QuickBird non-fused image at 200 cm of spatial resolution, with the highest values reached at intermediate-to-low spatial resolutions. In addition, it outperformed Maximum Likelihood for images with high spatial resolution, with significant differences in overall accuracy. However, the QuickBird image from July 2008, fused by the IHS method, degraded to 240 cm spatial resolution and classified by the Maximum Likelihood method, provided the highest accuracy values for the Pyrus class (producer’s accuracy = 43.48% and user’s accuracy = 55.56%).

The results provided in this study, although modest, are a valuable starting point for understanding the distribution and the spatial structure of P. bourgaeana, aimed at improving and prioritizing conservation efforts. Furthermore, the approach used in this work might also be applied to other taxa, and might benefit from future improvements in both the quality of remote sensing imagery and the methods of analysis.

Acknowledgments

The authors express their gratitude to Petr Sklenička for supporting this study and to Juan Pablo Argañaraz for statistical remarks. This research was supported by two grants awarded to Juan Fernández-Haeger, as the main researcher (ACUAVIR and IPA S.L.), and by the Postdoc ČZU Project grant (ESF/MŠMT CZ.1.07/2.3.00/30.0040) awarded to Salvador Arenas-Castro.

Author Contributions

Salvador Arenas-Castro: principal responsibility for the reported research, planning, data collection, interpretation of results, and preparation and editing of the manuscript (75% contribution). Juan Fernández-Haeger and Diego Jordano-Barbudo: planning, interpretation of results and critical review of the manuscript (25% contribution).

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Xie, Y.; Sha, Z.; Yu, M. Remote Sensing Imagery in Vegetation Mapping: A review. J. Plant. Ecol. 2008, 1, 9–23, doi:10.1093/jpe/rtm005.
  2. Ghosh, A.; Fassnacht, F.E.; Joshi, P.K.; Koch, B. A Framework for Mapping Tree Species Combining Hyperspectral and LiDAR Data: Role of Selected Classifiers and Sensor across Three Spatial Scales. Int. J. Appl. Earth Obs. Geoinf. 2014, 26, 49–63, doi:10.1016/j.jag.2013.05.017.
  3. Nagendra, H. Review article. Using Remote Sensing to Assess Biodiversity. Int. J. Remote Sens. 2001, 22, 2377–2400, doi:10.1080/01431160117096.
  4. Wulder, M.A.; Hall, R.J.; Coops, N.C.; Franklin, S.E. High Spatial Resolution Remotely Sensed Data for Ecosystem Characterization. Bioscience 2004, 4, 511–521.
  5. Thomas, C.; Ranchin, T.; Wald, L. Synthesis of Multispectral Images to High Spatial Resolution: A Critical Review of Fusion Methods Based on Remote Sensing Physics. IEEE Trans. Geosci. Remote Sens. 2008, 46, 1301–1312, doi:10.1109/TGRS.2007.912448.
  6. Culvenor, D. TIDA: An Algorithm for the Delineation of Tree Crowns in High Spatial Resolution Remotely Sensed Imagery. Comput. Geosci. 2002, 28, 33–44, doi:10.1016/S0098-3004(00)00110-2.
  7. Everitt, J.H.; Fletcher, R.S.; Elder, H.S.; Yang, C. Mapping Giant Salvinia with Satellite Imagery and Image Analysis. Environ. Monit. Assess. 2008, 139, 35–40, doi:10.1007/s10661-007-9807-y.
  8. Wulder, M.; Niemann, O.; Goodenough, D. Local Maximum Filtering for the Extraction of Tree Location and Basal Area from High Spatial Resolution Imagery. Remote Sens. Environ. 2000, 73, 103–114, doi:10.1016/S0034-4257(00)00101-2.
  9. Wulder, M.A.; White, J.C.; Niemann, K.O.; Nelson, T. Comparison of Airborne and Satellite High Spatial Resolution Data for the Identification of Individual Trees with Local Maxima Filtering. Int. J. Remote Sens. 2004, 10, 2225–2232.
  10. Nelson, T.; Boots, B.; Wulder, M.A. Techniques for Accuracy Assessment of Tree Locations Extracted From Remotely Sensed Imagery. J. Environ. Manage. 2005, 74, 265–271, doi:10.1016/j.jenvman.2004.10.002.
  11. Ferrier, S. Mapping Spatial Pattern in Biodiversity for Regional Conservation Planning: Where to from Here? Syst. Biol. 2002, 51, 331–363, doi:10.1080/10635150252899806.
  12. Heller, R.C.; Doverspike, G.E.; Aldrich, R.C. Identification of Tree Species on Large Scale Panchromatic and Color Aerial Photographs; Department of Agriculture: Washington, DC, USA, 1964; pp. 1–17.
  13. Erikson, M. Species Classification of Individually Segmented Tree Crowns in High-Resolution Aerial Images Using Radiometric and Morphologic Image Measures. Remote Sens. Environ. 2004, 91, 469–477, doi:10.1016/j.rse.2004.04.006.
  14. Everitt, J.H.; Yang, C.; Drawe, D.L. Mapping Spiny Aster Infestations with QuickBird Imagery. Geocarto. Int. 2007, 22, 273–283, doi:10.1080/10106040701337543.
  15. Clark, M.L.; Roberts, D.A.; Clark, D.B. Hyperspectral Discrimination of Tropical Rain Forest Tree Species at Leaf to Crown Scales. Remote Sens. Environ. 2005, 96, 375–398, doi:10.1016/j.rse.2005.03.009.
  16. Lawrence, R.L.; Wood, S.D.; Sheley, R.L. Mapping Invasive Plants using Hyperspectral Imagery and Breiman Cutler Classifications (randomForest). Remote Sens. Environ. 2006, 100, 356–362, doi:10.1016/j.rse.2005.10.014.
  17. Hirschmugl, M.; Weninger, B.; Raggam, H.; Schardt, M. Single Tree Detection in Very High Resolution Remote Sensing Data. Remote Sens. Environ. 2007, 110, 533–544, doi:10.1016/j.rse.2007.02.029.
  18. Waser, L.T.; Ginzler, C.; Kuechler, M.; Baltsavias, E.; Hurni, L. Semi-automatic Classification of Tree Species in Different Forest Ecosystems by Spectral and Geometric Variables Derived from Airborne Digital Sensor (ADS40) and RC30 Data. Remote Sens. Environ. 2011, 115, 76–85, doi:10.1016/j.rse.2010.08.006.
  19. Petrie, G.; Walker, A.S. Airborne Digital Imaging Technology: A New Overview. Photogramm. Rec. 2007, 22, 203–225, doi:10.1111/j.1477-9730.2007.00446.x.
  20. Key, T.; Warner, T.A.; McGraw, J.B.; Ann Fajvan, M. A Comparison of Multispectral and Multitemporal Information in High Spatial Resolution Imagery for Classification of Individual Tree Species in A Temperate Hardwood Forest. Remote Sens. Environ. 2001, 75, 100–112, doi:10.1016/S0034-4257(00)00159-0.
  21. Heinzel, J.N.; Koch, B. Exploring Full-Waveform LiDAR Parameters for Tree Species Classification. Int. J. Appl. Earth Obs. Geoinf. 2011, 13, 152–160, doi:10.1016/j.jag.2010.09.010.
  22. Brandtberg, T. Individual Tree-Based Species Classification in High Spatial Resolution Aerial Images of Forests using Fuzzy Sets. Fuzzy Sets Syst. 2002, 132, 371–387, doi:10.1016/S0165-0114(02)00049-0.
  23. Heinzel, J.; Koch, B. Investigating Multiple Data Sources for Tree Species Classification in Temperate Forest and Use for Single Tree Delineation. Int. J. Appl. Earth Obs. Geoinf. 2012, 18, 101–110, doi:10.1016/j.jag.2012.01.025.
  24. Dalponte, M.; Orka, H.O.; Gobakken, T.; Gianelle, D.; Naeesset, E. Tree Species Classification in Boreal Forests with Hyperspectral Data. IEEE Trans. Geosci. Remote Sensing. 2013, 51, 2632–2645, doi:10.1109/TGRS.2012.2216272.
  25. Everitt, J.H.; Yang, C.; Johnson, H.B. Canopy Spectra and Remote Sensing of Ashe Juniper and Associated Vegetation. Environ. Monit. Assess. 2007, 130, 403–413, doi:10.1007/s10661-006-9407-2.
  26. Fernandes, M.R.; Aguiar, F.C.; Silva, J.M.N.; Ferreira, M.T.; Pereira, J.M.C. Spectral Discrimination of Giant Reed (Arundo. donax, L.): A Seasonal Study in Riparian Areas. ISPRS J. Photogramm. Remote Sens. 2013, 80, 80–90, doi:10.1016/j.isprsjprs.2013.03.007.
  27. Scarascia-Mugnozzaa, G.; Oswald, H.; Piussi, P.; Radoglou, K. Forests of the Mediterranean Region: Gaps in Knowledge and Research Needs. For. Ecol. Manage. 2000, 132, 97–109, doi:10.1016/S0378-1127(00)00383-2.
  28. Carreiras, J.M.B.; Pereira, J.M.C.; Pereira, J.S. Estimation of Tree Canopy cover in Evergreen Oak Woodlands using Remote Sensing. For. Ecol. Manag. 2006, 223, 45–53, doi:10.1016/j.foreco.2005.10.056.
  29. Calvao, T.; Palmeirim, J.M. Mapping Mediterranean scrub with Satellite Imagery: Biomass Estimation and Spectral Behavior. Int. J. Remote Sens. 2004, 25, 3113–3126, doi:10.1080/01431160310001654978.
  30. Viedma, O.; Torres, I.; Pérez, B.; Moreno, J.M. Modeling Plant Species Richness using Reflectance and Texture Data Derived from QuickBird in A Recently Burned Area of Central Spain. Remote Sens. Environ. 2012, 119, 208–221, doi:10.1016/j.rse.2011.12.024.
  31. Médail, F.; Quézel, P. Hot-spots Analysis for Conservation of Plant Biodiversity in the Mediterranean Basin. Ann. Mo. Bot. Gard. 1997, 84, 112–127, doi:10.2307/2399957.
  32. Myers, N.; Mittermeier, R.A.; Mittermeier, C.G.; da Fonseca, G.A.B.; Kent, J. Biodiversity Hotspots for Conservation Priorities. Nature 2000, 403, 853–858, doi:10.1038/35002501.
  33. Arenas-Castro, S. Análisis de la estructura de una población de Piruétano (Pyrus. bourgaeana, Decne) basado en técnicas de Teledetección y SIG. Ph.D. Thesis; University of Cordoba: Spain, 2012; p. 348. Available online: http://hdl.handle.net/10396/7832 (accessed on 08 June 2013).
  34. Fedriani, J.M.; Delibes, M. Seed Dispersal in the Iberian Pear Pyrus. bourgaeana: A Role for Infrequent Mutualists. Ecoscience 2009, 16, 311–321, doi:10.2980/16-3-3253.
  35. Fedriani, J.M.; Wiegand, T.; Delibes, M. Spatial Pattern of Adult Trees and the Mammal-Generated Seed Rain in the Iberian Pear. Ecography 2010, 33, 545–555.
  36. Aldasoro, J.J.; Aedo, C.; Muñoz Garmendia, F. The Genus Pyrus. L. (Rosaceae.) in South-West Europe and North Africa. Bot. J. Linn. Soc. 1996, 121, 143–158.
  37. Arenas-Castro, S.; Julien, Y.; Jiménez-Muñoz, J.C.; Sobrino, J.A.; Fernández-Haeger, J.; Jordano-Barbudo, D. Mapping Wild Pear Trees (Pyrus bourgaeana) in Mediterranean Forest using High Resolution QuickBird Satellite Imagery. Int. J. Remote Sens. 2013, 34, 1–21, doi:10.1080/01431161.2012.700133.
  38. ENVI FLAASH. Atmospheric Correction Module; Spectral Sciences Incorporated (SSI): Burlington, MA, USA, 2009.
  39. Haydn, R.; Dalke, G.W.; Henkel, J.; Bare, J.E. Applications of the IHS Colour Transform to the Processing of Multisensor Data and Image Enhancement. In Proceedings of the International Symposium on Remote Sensing of Arid and Semi-Arid Lands, Cairo, Egypt, 19–25 January 1982; pp. 599–616.
  40. Huang, C.; Davis, L.S.; Townshend, J.R.G. An Assessment of Support Vector Machines for Land Cover Classification. Int. J. Remote Sens. 2002, 23, 725–749, doi:10.1080/01431160110040323.
  41. Cortes, C.; Vapnik, V. Support-Vector Networks. Mach. Learn. 1995, 20, 273–297.
  42. Mountrakis, G.; Im, J.; Ogole, C. Support Vector Machines in Remote Sensing: A Review. ISPRS-J. Photogramm. Remote Sens. 2011, 66, 247–259, doi:10.1016/j.isprsjprs.2010.11.001.
  43. Melgani, F.; Bruzzone, L. Classification of Hyperspectral Remote Sensing Images with Support Vector Machines. IEEE Trans. Geosci. Remote Sens. 2004, 42, 1778–1790, doi:10.1109/TGRS.2004.831865.
  44. Exelis Visual Information Solutions. version 4.6; The Environment for Visualizing Images (ENVI): Boulder, CO, USA, 2010.
  45. Vermote, E.F.; El Saleous, N.; Justice, C.O.; Kaufman, Y.J.; Privette, J.L.; Remer, L.; Roger, J.C.; Tanré, D. Atmospheric Correction of Visible to Middle-Infrared EOS-MODIS Data over Land Surfaces: Background, Operational Algorithm and Validation. J. Geophys. Res. 1997, 102, 17131–17141, doi:10.1029/97JD00201.
  46. DigitalGlobe, Inc. Radiometric Radiance Conversion for QB Data; DigitalGlobe, Inc.: Longmont, CO, USA, 2007.
  47. Beisl, U. Absolute Spectroradiometric Calibration of the ADS40 Sensor. Int. Arch. Photogramm. Remote Sens. 2006, 36, 1–5.
  48. Congalton, R.G.; Green, K. Assessing the Accuracy of Remotely Sensed Data—Principles and Practices; CRC Press, Taylor & Francis Group: Boca Raton, FL, USA, 2009.
  49. Foody, G.M. Sample Size Determination for Image Classification Accuracy Assessment and Comparison. Int. J. Remote Sens. 2009, 30, 5273–5291, doi:10.1080/01431160903130937.
  50. Jia, X.P.; Richards, J.A. Progressive Two-Class Decision Classifier for Optimization of Class Discriminations. Remote Sens. Environ. 1998, 63, 289–297, doi:10.1016/S0034-4257(97)00164-8.
  51. Oommen, T.; Misra, D.; Twarakavi, N.K.C.; Prakash, A.; Sahoo, B.; Bandopadhyay, S. An Objective Analysis of Support Vector Machine Based Classification for Remote Sensing. Math. Geosci. 2008, 40, 409–424, doi:10.1007/s11004-008-9156-6.
  52. Foody, G.M.; Mathur, A. A Relative Evaluation of Multiclass Image Classification by Support Vector Machines. IEEE Trans. Geosci. Remote Sens. 2004, 42, 1335–1343, doi:10.1109/TGRS.2004.827257.
  53. Hsu, C.W.; Lin, C.J. A Comparison of Methods for Multiclass Support Vector Machines. IEEE Trans. Neural Netw. 2002, 13, 415–425, doi:10.1109/72.991427.
  54. Chen, C.H.; Ho, P.G.P. Statistical Pattern Recognition in Remote Sensing. Pattern Recognit. 2008, 41, 2731–2741, doi:10.1016/j.patcog.2008.04.013.
  55. Stehman, S.V. Selecting and Interpreting Measures of Thematic Classification Accuracy. Remote Sens. Environ. 1997, 62, 77–89, doi:10.1016/S0034-4257(97)00083-7.
  56. Congalton, R.G. A Review of Assessing the Accuracy of Classifications of Remotely Sensed Data. Remote Sens. Environ. 1991, 37, 35–46, doi:10.1016/0034-4257(91)90048-B.
  57. Pontius, R.G., Jr.; Millones, M. Death to Kappa: Birth of Quantity Disagreement and Allocation Disagreement for Accuracy Assessment. Int. J. Remote Sens. 2011, 32, 4407–4429, doi:10.1080/01431161.2011.552923.
  58. Agresti, A. An Introduction to Categorical Data Analysis, 2nd ed.; Wiley-Interscience: Hoboken, NJ, USA, 2007.
  59. De Leeuw, J.; Jia, H.; Yang, L.; Liu, X.; Schmidt, K.; Skidmore, A.K. Comparing Accuracy Assessments to Infer Superiority of Image Classification Methods. Int. J. Remote Sens. 2006, 27, 223–232, doi:10.1080/01431160500275762.
  60. Foody, G.M. Thematic Map Comparison: Evaluating the Statistical Significance of Differences in Classification Accuracy. Photogramm. Eng. Remote Sens. 2004, 70, 627–633, doi:10.14358/PERS.70.5.627.
  61. Nicolas, H. New Methods for Dynamic Mosaicking. IEEE Trans. Image Process. 2001, 10, 1239–1251, doi:10.1109/83.935039.
  62. Zagrouba, E.; Barhoumi, W.; Amri, S. An Efficient Image-Mosaicing Method Based on Multifeature Matching. Mach. Vis. Appl. 2009, 20, 139–162, doi:10.1007/s00138-007-0114-y.
  63. Pal, M.; Mather, P.M. Support Vector Machines for Classification in Remote Sensing. Int. J. Remote Sens. 2005, 26, 1007–1011, doi:10.1080/01431160512331314083.
  64. Arenas-Castro, S.; Sobrino, J.A.; Fernández-Haeger, J.; Jordano-Barbudo, D. 2013, submitted.
  65. Everitt, J.H.; Yang, C. Mapping Broom Snakeweed through Image Analysis of Color-Infrared Photography and Digital Imagery. Environ. Monit. Assess. 2007, 134, 287–292, doi:10.1007/s10661-007-9619-0.
  66. Schmidt, K.S.; Skidmore, A.K. Spectral Discrimination of Vegetation Types in A Coastal Wetland. Remote Sens. Environ. 2003, 85, 92–108, doi:10.1016/S0034-4257(02)00196-7.
  67. Larsen, M. Single Tree Species Classification with a Hypothetical Multi-Spectral Satellite. Remote Sens. Environ. 2007, 110, 523–532, doi:10.1016/j.rse.2007.02.030.
  68. Leckie, D.G.; Gougeon, F.A.; Tinis, S.; Nelson, T.; Burnett, C.N.; Paradine, D. Automated Tree Recognition in Old Growth Conifer Stands with High Resolution Digital Imagery. Remote Sens. Environ. 2005, 94, 311–326, doi:10.1016/j.rse.2004.10.011.
  69. Carleer, A.; Wolff, E. Exploitation of Very High Resolution Satellite Data for Tree Species Identification. Photogramm. Eng. Remote Sens. 2004, 70, 135–140, doi:10.14358/PERS.70.1.135.
  70. Walter, V. Object-Based Classification of Remote Sensing Data for Change Detection. ISPRS J. Photogramm. Remote Sens. 2004, 58, 225–238, doi:10.1016/j.isprsjprs.2003.09.007.
Forests EISSN 1999-4907; Published by MDPI AG, Basel, Switzerland