Remote Sens. 2018, 10(7), 1111; doi:10.3390/rs10071111

Tree Species Classification of the UNESCO Man and the Biosphere Karkonoski National Park (Poland) Using Artificial Neural Networks and APEX Hyperspectral Images
Department of Geoinformatics, Cartography and Remote Sensing, University of Warsaw (UW), ul. Krakowskie Przedmieście 30, 00-927 Warsaw, Poland
Author to whom correspondence should be addressed.
Received: 29 May 2018 / Accepted: 10 July 2018 / Published: 12 July 2018


Knowledge of tree species composition is obligatory in forest management. Accurate tree species maps allow for detailed analysis of a forest ecosystem and its interactions with the environment. The research presented here focused on developing methods of tree species identification using aerial hyperspectral data. The research area is located in southwestern Poland and covers Karkonoski National Park (KNP), which was significantly damaged by acid rain and pest infestation in the 1980s. High-resolution (3.35 m) Airborne Prism Experiment (APEX) hyperspectral images (288 spectral bands in the range of 413 to 2440 nm) were used as the basis for tree species classification. Beech (Fagus sylvatica), birch (Betula pendula), alder (Alnus incana), larch (Larix decidua), pine (Pinus sylvestris), and spruce (Picea abies) were classified. The classification algorithm used was a feed-forward multilayer perceptron (MLP) with a single hidden layer. To simulate such a network, we used the R programming environment and the nnet package. To provide a more robust measure of accuracy, an iterative accuracy assessment was performed. The final tree species maps cover the whole area of KNP; a median overall accuracy (OA) of 87% was achieved, with median producer accuracy (PA) for all classes exceeding 68%. The best-classified classes were spruce, beech, and birch, with median producer accuracies of 93%, 88%, and 83%, respectively. The pine class achieved the lowest median producer and user accuracies (68% and 75%, respectively). The results show great potential for the use of hyperspectral data as a tool for identifying tree species locations in diverse mountainous forests.
Keywords: classification; hyperspectral data; protected areas; neural networks

1. Introduction

Managing forest resources requires a detailed inventory of forest species and information about changes occurring in the forest ecosystem, in both biotic and abiotic components. The main focus of forest management and nature protection endeavors should be on the monitoring of species composition changes over time, spatial distribution of species, and vegetation conditions [1]. Forested areas cover large expanses, making them difficult to analyze using traditional methods of creating forest inventory, which also require substantial financial investment [2]. In addition to purely pragmatic reasons for protecting forests, increased knowledge of forest ecosystem dynamics and relatively high willingness of developed nations to spend resources on nature conservation create a friendly environment for actions aimed at recreating natural forest ecosystems in place of existing forests. Such actions allow for effective protection of forest ecosystem biodiversity and the creation of new habitats that further increase species composition richness [3]. Existing methods of creating forest inventories use recurring field surveys, which are often supported by analyses of remotely sensed data (in the form of orthophoto maps or other remote sensing products) [4,5]. Remote sensing has proven to provide reliable information about spatial distribution of vegetation conditions [6], biophysical properties of plants [7,8], and monitoring of the human impact on valuable habitats [9,10], land cover [11], vegetation communities [12,13], and many other topics.
Classification of plant species using remote sensing is a challenging research area, mainly because of the high correlation of spectral reflectance between species [14] and the small number of variables that control the overall shape of the spectral reflectance curve (chlorophyll a and b content, carotenoids, leaf cellular structure) [15]. Moreover, leaf spectral properties are also dependent on plant age, phenology of a given species, and climate adaptations [16]. Masaitis and Mozgeris [15] have shown that the greatest spectral differences between several tree species (Populus tremula, Alnus glutinosa, Picea abies, Pinus sylvestris, Betula pendula) at the beginning of the growing period exist mainly in the blue and near-infrared (NIR) regions of the spectrum, while in summer the shortwave infrared (SWIR) and red regions provide better differentiation [15]. Most of the spectral features typical of vegetation (such as chlorophyll a and b absorption features, red edge, pigment absorption features) are concentrated in the visible and near-infrared (VNIR) part of the spectrum, making VNIR an attractive spectral range for tree species classification [17]. On the other hand, some works prove the usefulness of the 1000–2500 nm spectral range in tree species classification [2,18,19]. The above reasons necessitate careful selection of spectral bands used in the analysis of plant species.
Recent studies have demonstrated the use of hyperspectral remote sensing methods to gather information about the spatial distribution of forest tree species. Ghosh et al. [20] presented results of the classification of pine (Pinus sylvestris), beech (Fagus sylvatica), two species of oak (Quercus robur and Quercus petraea), and Douglas fir (Pseudotsuga menziesii) using minimum noise fraction (MNF) transformed bands from a HySpex hyperspectral sensor [20]. Their results showed similar performance of random forest (RF) and support vector machine (SVM) classifiers after the MNF transformation of data (achieving 94% overall accuracy (OA) in the best case). Graves et al. [21] reported 62% OA for classification of 20 tropical tree species in southern Panama. Classes with large numbers of samples showed the smallest variation in results and the highest accuracy. The authors suggested detailed analysis of classification errors and their sources, with the help of cross-validation methods [21]. Despite the good results obtained using hyperspectral data, some researchers enrich spectral data with LiDAR (Light Detection and Ranging) data. Among other things, LiDAR data allow identification of individual tree crowns, which reduces the problem of “mixels” in classification and enables use of an object-oriented approach [22,23,24]. One good example of the synergistic use of LiDAR and hyperspectral data is the work by Ballanti et al. [23]. A comprehensive review of remote sensing use in tree species classification can be found in Fassnacht et al. 2016 [19].
Artificial neural networks (ANNs) were used relatively infrequently in remote sensing until recently [25,26,27]. According to [19], only three of 129 analyzed studies using remote sensing for tree species classification used ANNs as classification algorithms. The main drawback of ANNs is their large computational requirements, while offering performance similar to faster, better-known algorithms such as RF and SVM [19]. While this is true in general, some studies have reported higher accuracy from ANNs [28]. One of the simpler types of ANN is the multilayer feed-forward perceptron, built from at least three interconnected layers of neurons [29]. Such a network of connected neurons arranged into layers can be seen as a highly concurrent computational system, capable of finding solutions to a given problem [30]. Typically, training of a neural network is aimed at minimizing the error of a given objective function, and its degree of success in doing so characterizes how capable the network is of providing a correct solution to the problem [31].
This paper aims to answer the following questions:
  • Are hyperspectral data and artificial neural networks useful for mapping tree species?
  • What are the differences between forest inventory and Airborne Prism Experiment (APEX) derived tree species compositions of forest growing in Karkonoski National Park (KNP)?

2. Research Area

Karkonoski National Park (KNP) is located in southwestern Poland, bordering the Czech Republic (Figure 1). The park was founded in 1959 with the goal of protecting and restoring mountain vegetation and preserving endemic alpine and subalpine habitats. Its plant and forest communities were severely damaged during the 1980s, when pollution from local heavy industry produced acid rain that weakened local vegetation. This led to a large outbreak of bark beetles, which easily infested the damaged forest ecosystem. As a result, large swaths of forested areas were cut down to control the pest outbreak, while the remaining communities suffered from soil acidification and air pollution. Another important source of forest disturbance was strong winds [32]. As of 2012, 71% of KNP was occupied by forests [33]. Current forest management practices are aimed at restoring and enriching forest habitats. According to the most recent forest inventory, the forests of KNP are dominated by spruce (85.0% of all trees), while the remaining tree species are birch Betula pendula Roth (4.8%), larch Larix decidua Mill (4.6%), beech Fagus sylvatica L. (4.2%), pine Pinus sylvestris L. (0.6%), sycamore Acer pseudoplatanus L. (0.3%), alder Alnus incana (L.) Moench (0.3%), and fir Abies alba Mill (0.2%) [33].

3. Materials and Methods

The proposed research method is aimed at creating accurate forest species classification using airborne hyperspectral data and artificial neural networks. The whole work was divided into three parts: input, data processing, and classification and results (Figure 2). The most important parts of the research are as follows:
  • Acquisition of APEX hyperspectral data and LiDAR data.
  • Creation of normalized digital surface model (nDSM) using LiDAR data.
  • Acquisition of a large and representative sample of geographic locations of field-identified tree species during field surveys. Collected data were used to extract classified tree species spectral characteristics from images.
  • Band selection using principal component analysis (PCA).
  • Creation of forest mask used to limit classification scope only to forested areas.
  • Optimization of ANN learning parameters.
  • Assessment of iterative accuracy.
  • Classification of APEX scenes using trained and validated ANNs.
Data used in this work came from the APEX hyperspectral sensor (VITO, Mol, Belgium; Table 1). APEX is a result of Swiss–Belgian cooperation (University of Zurich and the Flemish Institute for Technological Research—VITO) as part of the ESA-PRODEX project, aimed at developing new sensors [34]. Data used in this paper were acquired on 10 September 2012, by the German Aerospace Center (DLR) as part of the HyMountEcos project. Raw data were corrected and processed into a reflectance product by VITO. Raw data collected during overflights were first geometrically corrected using navigational data recorded during the flight (such as platform geographical position and local deviations from the planned trajectory) [35]. To produce orthorectified APEX scenes, VITO was supplied with a 1 m resolution digital elevation model (DEM) and a digital surface model (DSM) (described in Section 3.5). Next, APEX data went through atmospheric correction, where the MODerate resolution atmospheric TRANsmission (MODTRAN® 4, Spectral Sciences Inc., Burlington, MA, USA) program was used to determine sources of radiation and apply atmospheric corrections [36]. VITO’s in-house central data processing center is responsible for producing reflectance products from at-sensor radiance. Spectrometric ground reference data (providing benchmark spectral characteristics of spectrally stable materials) were collected using a FieldSpec 3 field spectrometer (ASD Inc., Boulder, CO, USA). Processed scenes had 288 spectral bands in the range of 413 to 2440 nm, with spatial resolution varying from 3.12 to 3.4 m. Before any processing steps, all scenes had their spatial resolution standardized to 3.35 m.

3.1. Band Selection

Due to the high spectral resolution and number of bands coming from hyperspectral sensors, it is reasonable to select a smaller subset of spectral bands that contains the most useful information while discarding redundant data and noise [38]. Moreover, high spectral resolution causes neighboring bands to be highly correlated, resulting in significant data redundancy. Before the classification step, the APEX dataset quality was assessed manually. Bands located in water vapor absorption spectral ranges (1300–1500 nm, 1770–2000 nm) and low-quality bands (visible striping and low signal-to-noise ratio) were removed. In total, 66 bands were removed. The remaining bands were subjected to information-load analysis to determine which bands were the most informative. Based on previous studies, it was determined that 40 bands is a reasonable number that provides the most information for the lowest band count [28,39]. In this work we decided to use a band selection method based on PCA. Instead of using transformed bands, the method of band selection proposed by Thenkabail et al. [40] and Sommer et al. [17] was used. The information load of each spectral band can be assessed by analyzing PCA band loadings and eigenvalues. Band loadings can be interpreted as a measure of the correlation between each spectral band and each PCA-transformed band; higher absolute values indicate a spectral band that contains more useful information (Figure 3). Based on the above, we decided to calculate band loadings for the first three PCA bands, and then select the 40 spectral bands with the highest contribution to those three components.
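The loading-based selection described above can be sketched as follows (a simplified numpy illustration, not the authors' exact implementation; the function name and array layout are assumptions):

```python
import numpy as np

def select_bands_by_pca_loadings(cube, n_components=3, n_bands=40):
    """Select the spectral bands contributing most to the first PCA components.

    cube: (rows, cols, bands) reflectance array.
    Returns the indices of the selected bands, sorted ascending.
    """
    pixels = cube.reshape(-1, cube.shape[-1]).astype(float)
    pixels -= pixels.mean(axis=0)                      # center each band
    cov = np.cov(pixels, rowvar=False)                 # band covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)             # eigenvalues ascending
    order = np.argsort(eigvals)[::-1][:n_components]   # top components
    # Loadings: correlation between each band and each retained component.
    loadings = eigvecs[:, order] * np.sqrt(np.maximum(eigvals[order], 0.0))
    loadings /= pixels.std(axis=0, ddof=1)[:, None]
    # Rank bands by their largest absolute loading on the top components.
    score = np.abs(loadings).max(axis=1)
    return np.sort(np.argsort(score)[::-1][:n_bands])
```

With the full APEX cube this would be applied after the manual removal of the 66 low-quality and water vapor bands described above.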

3.2. Field Data Collection

To be able to create accurate tree species maps using remote sensing, it is important to gather a large and representative sample of locations of field-identified species. Field samples were collected during field surveys carried out in 2013, 2014, and 2016. Before each field campaign, the survey route was planned with the help of a high-resolution orthophoto map (acquired in 2012; 15 cm spatial resolution) and existing tree species maps. Acquisition of measurement polygons characterizing classified tree species was carried out with the help of a Leica Zeno 10 GPS receiver (Leica Geosystems AG, Heerbrugg, Switzerland) with a Leica external antenna (Figure 4). Each collected geographical location was used as the center of a circle with a radius of 4 m to create a measurement polygon. The acquired polygons were then used to extract spectral characteristics from APEX scenes for each tree species. To further increase the number of measurement polygons, data collected during the most recent forest inventory were used. As a result, 712 measurement polygons were gathered during field surveys, while the forest inventory database provided an additional 564 polygons. In total, 1276 polygons were used to create spectral samples characterizing each classified tree species (Table 2). Measurement polygons were used to extract pixel samples representing the classified classes.
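Extracting the image pixels that fall within a 4 m circular plot around a GPS point can be sketched as follows (a minimal illustration assuming a north-up image with a square pixel grid; the function name and arguments are hypothetical):

```python
import numpy as np

def pixels_in_plot(center_xy, origin_xy, pixel_size, shape, radius=4.0):
    """Return (row, col) indices of pixels whose centers fall within a
    circular measurement plot of the given radius around a GPS point.

    origin_xy: map coordinates of the upper-left corner of the image.
    pixel_size: ground sampling distance in meters (3.35 m for APEX here).
    """
    rows, cols = shape
    # Map coordinates of every pixel center (north-up grid assumed).
    xs = origin_xy[0] + (np.arange(cols) + 0.5) * pixel_size
    ys = origin_xy[1] - (np.arange(rows) + 0.5) * pixel_size
    dx = xs[None, :] - center_xy[0]
    dy = ys[:, None] - center_xy[1]
    inside = dx**2 + dy**2 <= radius**2
    return np.argwhere(inside)
```

The returned indices can then be used to pull the corresponding spectra out of the APEX cube for each measurement polygon.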

3.3. Artificial Neural Network Parameter Tuning

A feed-forward artificial neural network with a single hidden layer was used as the classification algorithm. In this work we used the nnet package [41] and the R programming environment [42] for simulation of artificial neural networks. Before final selection of network parameters, a grid search optimization method was used: ANN architectures with 5 to 120 neurons in the hidden layer were tested and compared. The best results were obtained with a network with 34 neurons in the hidden layer. The final architecture was a feed-forward multilayer artificial neural network with three layers. The input layer had 40 neurons, corresponding to the number of spectral bands used in classification, and was connected to a hidden layer of 34 neurons. The output layer connected to the hidden layer had six neurons, corresponding to the number of classified classes. The back-propagation learning algorithm was used, with the following training parameters: weight decay = 0.0005, maximum iterations = 2500, convergence tolerance = 0.0001.
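The configuration above can be approximated in Python with scikit-learn's MLPClassifier (the authors used R's nnet; the mapping of nnet's decay parameter to the L2 penalty `alpha` is approximate, and the data below are random placeholders for illustration only):

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# Placeholder training data: rows are pixel samples with the 40 selected
# APEX bands; labels are the six tree species classes.
rng = np.random.default_rng(0)
X = rng.random((600, 40))
y = rng.integers(0, 6, 600)

# Rough scikit-learn analogue of the R nnet configuration in the paper:
# one hidden layer of 34 neurons, weight decay 0.0005, up to 2500
# iterations, convergence tolerance 1e-4.
mlp = MLPClassifier(hidden_layer_sizes=(34,),
                    activation='logistic',   # nnet uses logistic units
                    solver='lbfgs',
                    alpha=0.0005,
                    max_iter=2500,
                    tol=1e-4,
                    random_state=0)
mlp.fit(X, y)
pred = mlp.predict(X)
```

The input and output layer sizes follow automatically from the 40 features and six class labels.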

3.4. Iterative Accuracy Assessment

To increase accuracy assessment credibility and remove some of the bias introduced by arbitrarily splitting the samples into training and validation datasets, we decided to use a 0.632 bootstrap procedure [20,28,43]. The accuracy assessment procedure is divided into a number of iterations. In every iteration, all available samples are randomly split (using stratified random sampling) into training and validation datasets (63.2% and 36.8% of all samples, respectively; Table 3). Next, the artificial neural network classifier is trained using the training dataset. After training, classifier performance is assessed against the validation dataset. During this step, OA, the kappa coefficient, producer accuracy (PA), and user accuracy (UA) are calculated for each class and saved to a file, and the training and validation datasets are discarded. This comprises a single iteration, after which the next iteration may start. In each iteration, the classifier is trained from scratch. After 100 iterations, the median OA, kappa coefficient, PA, and UA for each class were calculated and their distributions analyzed.
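The procedure can be sketched as follows (a simplified Python analogue of the R workflow; the 63.2/36.8 stratified split, retraining from scratch, and reported medians follow the text, while per-class PA/UA bookkeeping is omitted for brevity):

```python
import numpy as np
from sklearn.model_selection import StratifiedShuffleSplit
from sklearn.metrics import accuracy_score, cohen_kappa_score
from sklearn.neural_network import MLPClassifier

def iterative_accuracy(X, y, n_iter=100, seed=0):
    """Repeat a stratified 63.2/36.8 split, retrain the classifier from
    scratch each iteration, and collect OA and kappa so their
    distributions can be analyzed."""
    splitter = StratifiedShuffleSplit(n_splits=n_iter, train_size=0.632,
                                      random_state=seed)
    oa, kappa = [], []
    for train_idx, val_idx in splitter.split(X, y):
        clf = MLPClassifier(hidden_layer_sizes=(34,), alpha=0.0005,
                            solver='lbfgs', max_iter=2500, tol=1e-4)
        clf.fit(X[train_idx], y[train_idx])
        pred = clf.predict(X[val_idx])
        oa.append(accuracy_score(y[val_idx], pred))
        kappa.append(cohen_kappa_score(y[val_idx], pred))
    return np.median(oa), np.median(kappa)
```

Beyond the medians, the stored per-iteration values allow the width and skewness of the accuracy distributions to be inspected, as discussed in the Results.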

3.5. Forest Mask Creation

In order to eliminate from the classification results all pixels not representing forested areas, a forest mask was constructed using four datasets: nDSM, a vector layer containing the spatial extent of KNP, a modified red edge normalized difference vegetation index (mreNDVI) [44], and an orthophoto map. The normalized digital surface model was created with LAStools software (rapidlasso GmbH, Gilching, Germany) from LiDAR point cloud data collected in early spring 2012; the resulting data had 1 m spatial resolution. The overall classification area was delimited by the spatial extent of KNP. To remove areas not covered by vegetation, mreNDVI was used with a threshold value of 0.4: all pixels with an mreNDVI value higher than 0.4 were considered valid for classification. In order to focus only on pixels covering tree canopies, an nDSM-based mask was used. Small vegetation growing in forest clearings and meadows would create confusion when classification results are analyzed, so we focused only on vegetation higher than 2.5 m, which should consist almost exclusively of trees. Finally, to remove any remaining imperfections of the forest mask created using the above-mentioned datasets, a manually delineated mask of forest areas was created using the high-resolution orthophoto map.
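Combining the four layers into the final mask reduces to a logical conjunction of boolean rasters (a sketch with hypothetical variable names, assuming all layers are co-registered at the same resolution):

```python
import numpy as np

def forest_mask(ndsm, mre_ndvi, park_extent, manual_mask,
                ndvi_thr=0.4, height_thr=2.5):
    """Combine the four layers described above into a boolean forest mask.

    ndsm: normalized digital surface model, meters above ground.
    mre_ndvi: modified red edge NDVI raster.
    park_extent, manual_mask: boolean rasters (inside KNP / manually
    verified forest), assumed already co-registered with the others.
    """
    vegetation = mre_ndvi > ndvi_thr   # drop non-vegetated pixels
    canopy = ndsm > height_thr         # keep tree canopies, not low cover
    return park_extent & vegetation & canopy & manual_mask
```

Only pixels where all four conditions hold are passed to the classifier.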

3.6. Scene Classification

After iterative accuracy assessment, the artificial neural network was retrained using all available samples. This network was used to classify APEX scenes covering the area of KNP with a pixel-wise approach. Classified scenes were then generalized using a majority filter (a 3 × 3 kernel window). This step was aimed at removing any “salt and pepper” effect from classified scenes [45].
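A 3 × 3 majority (modal) filter of this kind can be sketched with scipy (the authors' generalization software is not specified here; this is an illustrative equivalent):

```python
import numpy as np
from scipy import ndimage

def majority_filter(classified, size=3):
    """Replace each pixel with the most frequent class label in its
    size x size neighborhood, suppressing 'salt and pepper' noise."""
    def local_mode(values):
        vals, counts = np.unique(values, return_counts=True)
        return vals[np.argmax(counts)]
    return ndimage.generic_filter(classified, local_mode, size=size)
```

Applied to the classified APEX scenes, isolated single-pixel class labels are replaced by the dominant class of their neighborhood.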

4. Results

The band selection procedure selected the 40 most informative spectral bands (Figure 5). Most of the selected bands lay in the VNIR spectral region. In VNIR, the 600–690 nm and 850–1000 nm regions contained the majority of the selected bands. Those regions are associated with light absorption by chlorophyll and increased reflection of electromagnetic waves caused by the cellular structure of leaves. None of the selected spectral bands were located in the red edge region. Spectral bands in SWIR were less often selected, but all of those deemed informative were located in characteristic parts of the spectral curve (associated with absorption features and bending points of the curve). A full list of selected bands can be found in Table 4.
Median overall accuracy achieved by classification was 87%, with a median kappa coefficient of 0.81. The classes with the highest median producer accuracy were beech and spruce (88% and 92%), and the classes with the lowest producer accuracy were larch and pine (73% and 69%; Figure 6). The birch and alder classes achieved 83% and 82% PA, respectively (Figure 6). The beech and spruce classes had the highest median user accuracy (89% and 91%), while the birch and alder classes achieved around 85% UA. The larch and pine classes, which had the lowest PA of all classes, achieved 75% median UA. In addition to commonly used accuracy measures, iterative accuracy assessment allows analysis not only of single summary values for each measure, but also of the distribution of accuracies. The width and skewness of a distribution can be used to assess training datasets for spectral homogeneity, reveal properties of the analyzed classes (such as high spectral heterogeneity, which would cause a wide distribution of accuracies), or describe how well a given classification algorithm performs on the dataset. Moreover, classes with relatively narrow accuracy distributions can be seen as having been classified with higher confidence than those with wider distributions. The spruce, beech, birch, and larch classes had the narrowest PA distributions (4, 9, 13, and 14 percentage points, respectively), while alder and pine had the widest (30 percentage points for alder and 33 for pine). Analysis of the UA distributions shows similar results, with spruce (three percentage points) and beech (six percentage points) having the smallest distribution widths and alder and pine the largest (24 and 35 percentage points).
The created tree species map and nDSM were used to calculate the species composition of KNP by height. This allows for analysis of forest structure in terms of not only tree species, but also tree height. It is worth noting that nDSM does not provide trunk height, but measures tree height from ground level to the top of the tree crown. The species with the largest share of trees over 20 m high is beech (60% are over 20 m high, 27% are between 10 m and 20 m, and only 13% are smaller than 10 m). Larch and alder trees do not have a dominant height class (Figure 7). This can be explained by the fact that these were classified with relatively low accuracy; it is possible that some pixels classified as larch or alder are, in fact, covered by other tree species, particularly beech or spruce. The effects of 1980s pollution and the bark beetle outbreak can best be seen in the results for spruce. Only 47% of spruce trees are over 20 m high, and 41% are smaller than 10 m. This may give an estimate of the amount of spruce regrown after old forests were cut down to combat the pest outbreak or died due to soil poisoning and air pollution. More than 50% of birch trees in KNP are under 10 m high, 36% are between 10 m and 20 m, and the remaining 12% are above 20 m. This seems to be in line with birch's growth patterns: birch is typically one of the first tree species to occupy new areas (such as forest clearings), and is then replaced by other tree species more adapted to a given habitat, but less expansive.
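The cross-tabulation of species against nDSM height classes described above can be sketched as follows (a numpy illustration using the <10 m, 10–20 m, and >20 m classes from the text; the function and variable names are assumptions):

```python
import numpy as np

def composition_by_height(species_map, ndsm, mask,
                          bins=(0.0, 10.0, 20.0, np.inf)):
    """Cross-tabulate classified species against nDSM height classes,
    returning per-species percentage shares of each height class.

    species_map: integer class raster; ndsm: canopy height raster (m);
    mask: boolean forest mask selecting the pixels to tabulate.
    """
    # Height class index: 0 for <10 m, 1 for 10-20 m, 2 for >20 m.
    heights = np.digitize(ndsm[mask], bins) - 1
    species = species_map[mask]
    table = {}
    for s in np.unique(species):
        counts = np.bincount(heights[species == s], minlength=len(bins) - 1)
        table[s] = 100.0 * counts / counts.sum()
    return table
```

The resulting per-species percentages correspond to the shares reported above (e.g., beech: 13% / 27% / 60%).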
The tree species map (Figure 8) created from classified APEX scenes was compared with the most recent forest inventory data. Comparing those two datasets can be an interesting point of discussion on the accuracy of remote sensing products and existing datasets. As seen in Table 5, forest inventory data and classification results agree most closely for the share of birch, larch, alder, and pine in overall forest tree species composition. The greatest discrepancies exist for beech and spruce. Classification results suggest that beech trees comprise more than 10% of all trees, compared to the 4% indicated by the forest inventory. On the other hand, classification results show a lower share of spruce in forest composition as compared to the forest inventory data.

5. Discussion

When comparing the present classification results with those obtained by other researchers using remote sensing data, the data and classification algorithm used must be taken into account, as well as the number of classes defined by individual authors. Pixel size is particularly important, because large pixels register not only trees, but also surrounding shadows or other objects around or even under trees. Such “mixels” make correct interpretation of the results difficult. Too small a pixel is also not optimal, because it registers the area between individual branches, thus introducing artifacts.
Works on tree stand classification are often practical in nature and aim to map the spatial distribution of tree species [2,46], although there are also those that treat the classification of tree species as a nontrivial problem on which they test different data-processing methods and classifiers [22].
In this study, producer accuracy was 93% for the spruce class (Picea abies L.) and 78% for pine (Pinus sylvestris L.). In the work of Dalponte et al. [47], using an SVM algorithm, PA was 97% for spruce (Picea abies L. Karst) and 95% for pine (Pinus sylvestris L.), and 71% for other deciduous tree species [birch (Betula spp. L.) and poplar (Populus tremula L.)]. Compared with the quoted work, similarly high accuracy was obtained here for the spruce class, although the pine class was not classified with as high accuracy. In this case, the limited spatial extent of pine occurrence in KNP is of great importance, as it makes it considerably more difficult to select a large set of samples. The authors of this paper obtained an average producer accuracy for all broadleaf tree species of 88%, which is higher than the 71% of Dalponte et al. [47]. An overall accuracy above 80% is not uncommon in forest stand classification, which justifies the use of remote sensing techniques, and hyperspectral remote sensing in particular, to support the mapping of tree species. The vast majority of works focus on classifying only a few tree species, although there are also those that classify more [17,21,48].
While this work shows good performance of an ANN classifier, previous works were not so successful. Féret and Asner [48] attempted to classify 17 species of tropical trees growing in Hawaii and achieved decidedly poorer results, around 40% OA, despite using artificial neural networks in MATLAB (MathWorks, Inc., Natick, MA, USA). These lower results may attest to the greater spatial diversity of tropical tree species. A comparable pixel size and significantly lower accuracy (than the 87% obtained in this work, albeit with 17 classified tree species) confirm the need for a well-thought-out strategy for selecting ANN learning parameters. Data from the APEX sensor have already been used to classify tree species. A team led by Tagliabue [18] classified five tree species in Lorraine (hornbeam Carpinus betulus; two species of oak, Quercus petraea and Quercus robur, as a single class; lime Tilia; and pine Pinus) using APEX data with 3 m spatial resolution, acquired at the beginning of September. That work used a maximum likelihood algorithm as the classifier and all acquired spectral bands. The classification had an overall accuracy of 74% and a kappa coefficient of 0.63. The lowest producer accuracies were for lime (71%) and hornbeam (70%), and the highest for pine (80%) and oak (85%). User accuracy ranged from 61% (oak) to 86% (hornbeam). Some authors have suggested using higher spatial resolution data or images from a different season [18]. The present work suggests that this is not necessary, provided that the data and the classification algorithm are appropriately selected. When considering forest stand classification, it is worth comparing results against those obtained for an area where similar tree species occur. Lee et al. [24] obtained very good results when classifying tree species using an SVM algorithm in a forest in Oxfordshire, England.
Accuracies of over 90% for beech (Fagus sylvatica) and larch (Larix decidua) show that it is possible to classify a larch class better than in the present work (76% in the best iteration). The low accuracy for the larch class can be explained by how trees of this species occur in KNP (singly or in small groups, usually planted in a line) and by the sparseness of its crown. This makes it difficult to obtain good samples and to correctly classify trees of this species in an image, because it is difficult to find a pixel that is not a mixel of the crowns of larch and other tree species. It is worth mentioning that Lee et al. [24] used bands after PCA transformation together with LiDAR data, which significantly improved their results (increasing OA from 85% to 91%). Due to the dataset used in the present work (imperfections in atmospheric correction caused the spectral characteristics of the same object to differ between two scenes), PCA-transformed images were not used, which could have made the classification of some classes worse than anticipated. Overall, the results are in line with those obtained by other researchers (Table 6), especially considering that few papers have addressed the classification of tree species in an area as large as KNP.
Tree species distribution maps derived from hyperspectral data exploit spectral differences among species more than tree crown texture and shape to properly assign labels to pixels. While very effective, hyperspectral data collection is mostly limited to aerial sensors, an obvious limiting factor in terms of both the area covered by images and cost. On the other hand, very high-resolution multispectral sensors on satellites can provide high-quality data for most of Earth's landmass at reasonable cost. Work by Zeng et al. [49] showed the use of multispectral Advanced Land Observation Satellite (ALOS) images for classification of six tree species (Chinese fir, Masson pine, Fujian cypress, Schima superba, Michelia macclurei, and tung) in Jiangle Forest Farm within the city of Sanming, Fujian Province, China. The sensor's spectral bands had 10 m spatial resolution, which was boosted to 2.5 m by pansharpening with the ALOS 2.5 m panchromatic band. Using a combination of image segmentation, object-based classification techniques, and rough set theory–based classifiers, the researchers achieved classification results with 80% OA and a 0.76 kappa coefficient, outperforming a simple nearest neighbor classifier. Pu and Landry [50] managed to classify seven tree species (Quercus geminata, Q. laurifolia, Q. virginiana, pine, palm, Cinnamomum camphora, Magnolia grandiflora) growing in urban areas of Tampa, Florida, USA, using high-resolution IKONOS and WorldView-2 (WV2) satellite images. IKONOS images had five bands (four spectral bands with 4 m spatial resolution and one panchromatic band with 1 m spatial resolution), while WV2 had nine bands (eight multispectral bands at 2 m spatial resolution and one panchromatic band at 0.5 m spatial resolution). Both satellites provided bands in the VNIR spectral range. Linear discriminant analysis (LDA) and classification and regression tree (CART) classification algorithms were used.
The classification based on IKONOS data achieved 57% OA and a 0.47 kappa coefficient, while that based on WV2 data achieved 62% OA and a 0.53 kappa coefficient. In conclusion, WV2 data proved better for species classification thanks to higher spatial resolution and four additional spectral bands that IKONOS lacked. Classes classified with above 80% PA were Quercus geminata for IKONOS data, and pine and palm for WV2 data. Typically, when using high-resolution multispectral data, researchers perform various transformations of spectral bands to create new, potentially helpful features [49,50]. While tree species distribution maps derived from multispectral data are of lower quality than those based on hyperspectral data, the literature suggests that submeter red-green-blue (RGB) images coupled with LiDAR data can be an interesting and competitive alternative. Deng et al. [51] classified four tree species (Pinus densiflora, Chamaecyparis obtusa, Larix kaempferi, and one class containing all broadleaved trees) using a quadratic SVM classifier. For their study, they acquired 0.25 m spatial resolution RGB images and a LiDAR point cloud with density ranging from 13 to 30 points per m². Classification based on RGB images alone achieved 76% OA, while including LiDAR-derived products increased OA to 91%.
The overall accuracy of the tree species classification can be considered very high (median 87%). This is partly a result of the high producer accuracy for the classes represented by the majority of sample pixels used in the classification (beech and spruce). A more balanced view of achieved accuracy is offered by Fassnacht et al. [19] and Ghosh et al. [20]. In both works, the authors drew an equal number of reference pixels for each classified tree species, which, after division into a training set and a test set, were used to train and validate the classifier. This approach avoids inflation of the overall accuracy index, which in that case is the simple arithmetic mean of the classes' producer accuracies. In both works, 60 pixels per class were chosen. Despite the aforementioned advantages of using a reference dataset with equal sample numbers per class, some works feature very unbalanced numbers of samples across classes [52,53]. For example, Lee et al. [24] used 636 reference pixels for the ash class (Fraxinus excelsior), 255 for larch (Larix decidua), and 186 for birch (Betula spp.), obtaining an overall accuracy of 91%. The same applies to this work, where there are quite large differences in sample sizes between classes (2677 reference pixels for spruce, 685 for larch, and 90 for alder).
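The inflation effect can be made concrete with a small numeric example: overall accuracy weights every class by its number of validation pixels, while the arithmetic mean of producer accuracies weights all classes equally. The sketch below uses an invented confusion matrix (class names and counts are illustrative, not this study's data):

```python
import numpy as np

# Hypothetical confusion matrix (rows = reference, cols = predicted);
# the dominant class ("spruce") has far more validation pixels.
classes = ["spruce", "beech", "alder"]
cm = np.array([
    [950,  30,  20],   # spruce: 1000 reference pixels
    [ 20,  70,  10],   # beech:   100 reference pixels
    [ 10,  15,  25],   # alder:    50 reference pixels
])

overall_accuracy = np.trace(cm) / cm.sum()          # weighted by class size
producer_accuracy = np.diag(cm) / cm.sum(axis=1)    # per-class recall
macro_accuracy = producer_accuracy.mean()           # equal weight per class

print(f"OA = {overall_accuracy:.2f}")       # dominated by spruce
print(f"mean PA = {macro_accuracy:.2f}")    # penalized by the weak alder class
```

With these invented numbers, OA is about 0.91 while the mean of producer accuracies is only about 0.72, showing how a dominant class can mask poor minority-class performance.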
Ballanti et al. [23] classified eight tree species (sequoia (Sequoia sempervirens), Douglas fir (Pseudotsuga menziesii), laurel (Umbellularia californica), oak (Quercus agrifolia), alder (Alnus rubra), willow (Salix lasiolepis), eucalyptus (Eucalyptus globulus), and horse chestnut (Aesculus californica)) growing in Muir Woods, CA, USA, using an AISA Eagle scanner. In addition to spectral signatures, they included a tree crown model in the classification. The classified image obtained an overall accuracy of 95%; all classes except willow (58%) obtained a producer accuracy above 90%, and only two classes had a user accuracy below 92% (oak: 67%, willow: 84%). This is convincing evidence for including LiDAR data in forest stand classification, on the condition that the desired data are extractable [23]. In this work, it was decided not to include LiDAR data as an additional predictor, because the LiDAR data were collected in 2012, earlier than the spectral data. While LiDAR data can be helpful in tree species classification, our observations made us reconsider using them. First of all, the forests of KNP are actively reshaped, which includes the creation of new forest clearings. Comparing RGB APEX images with LiDAR-derived products (such as the DSM), we noticed that several places clearly visible as forest clearings on APEX scenes were counted on the DSM as forested, with tree heights of more than 10 m. This suggests that those areas were cleared between LiDAR data collection and APEX data collection. It is preferable to use LiDAR data obtained at the same time as the spectral data; out-of-date LiDAR data could significantly distort results.
Gathering field data is a very costly task and requires good logistical preparation [19]. Whether for classification or for other studies using remote sensing data, the in-field data collection stage is certainly very important. Having access to maps, orthophoto maps, and data from various government institutions and private companies does not remove the need to at least inspect the research area. Ideally, field studies should be conducted at the same time as imaging. Depending on resources, some authors carry out fieldwork in the same month as the imaging, supporting it with data from state institutions and maps [2]. Other researchers use data from state institutions and maps without conducting fieldwork [19]. Naturally, the need for field studies depends on the research area: areas that are well mapped with up-to-date data (e.g., managed forests) do not require extensive field research, as opposed to sparsely mapped areas with out-of-date data. Tree species classification can use field data collected after the overflight, because natural changes in the forest are slow and large, fast changes (felling, windthrow, etc.) are very easy to spot in the field. Of course, when collecting data for classifying communities that occur only during part of the year, field surveys should be carried out at the time of overflight in order not to bias the results. In this work, a series of field surveys was conducted to gather reference data for classification. The field surveys took place one and two years after APEX imaging. Therefore, during data collection, we avoided areas that were actively being transformed or where cursory inspection detected changes (removed trees, remaining felled trunks) that could have occurred between the overflight and the field research.
During the preparatory work, available maps and an orthophoto map were used to determine the area for field research.
Significant differences (mainly for spruce and beech) between the forest inventory and our results can be explained in several ways. First, we used a forest mask that excluded from our analysis areas covered by vegetation shorter than 2.5 m. This excluded areas occupied by saplings and small trees, which would be included in a forest inventory [54]. Moreover, with the forest mask applied, the forest area included in the classification covered about 2027 ha, while officially, forests in KNP cover 4022 ha [33]. This rather large difference might explain the discrepancies between the forest inventory and our results. A second probable cause is the method of creating the forest inventory. The forest inventory in KNP is based on a network of measurement plots distributed on a 200 × 300 m grid [54]. This rather coarse resolution (though reasonable, considering cost constraints) might bias the recorded forest composition toward the most commonly occurring tree species (spruce). High accuracy measures for the classes with the largest differences in forest composition (spruce and beech) argue against the claim that those differences result from an inaccurate tree species distribution map produced in our work. Differences between tree species maps created using hyperspectral remote sensing and forest inventory data were also reported by Sommer et al. [17], where the greatest discrepancies were observed for spruce, fir, maple, and birch (39, 15, 8, and 7 percentage points, respectively).
The use of artificial neural networks in remote sensing is not widespread [19]. This is mainly due to the difficulty of using them and the fact that other classifiers, such as SVM and RF, provide comparable results with much less work. Optimizing an artificial neural network's learning has a large influence on the final result [48], and it is only one of many steps required to obtain a good result (others include selecting the architecture, choosing the learning algorithm, and balancing sensible training time against accuracy). Using the nnet package available for R showed that artificial neural networks can successfully be used in classification tasks, and the wealth of additional packages available for R allows the entire processing chain to be kept in one place. In light of these facts, nothing stands in the way of extending the use of ANNs in remote sensing. It is also worth analyzing the use of deep learning networks in hyperspectral remote sensing, as well as how cloud cover during the aerial pass may affect the results.
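For illustration, the core of a single-hidden-layer feed-forward network of the kind provided by the R nnet package can be sketched in a few dozen lines. The Python/NumPy version below trains such a network with plain backpropagation on synthetic "spectra"; all sizes, class means, and learning parameters are illustrative assumptions, not the configuration used in the study:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for per-pixel reflectance samples: 3 classes, 40 "bands".
# Class means and noise level are invented purely for the demonstration.
n_per_class, n_bands, n_classes, n_hidden = 60, 40, 3, 16
means = rng.normal(0.0, 1.0, (n_classes, n_bands))
X = np.vstack([rng.normal(m, 0.3, (n_per_class, n_bands)) for m in means])
y = np.repeat(np.arange(n_classes), n_per_class)

# Single hidden layer with tanh units and a softmax output: the same
# feed-forward architecture an nnet-style MLP provides.
W1 = rng.normal(0, 0.1, (n_bands, n_hidden)); b1 = np.zeros(n_hidden)
W2 = rng.normal(0, 0.1, (n_hidden, n_classes)); b2 = np.zeros(n_classes)

def forward(X):
    h = np.tanh(X @ W1 + b1)                      # hidden activations
    logits = h @ W2 + b2
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return h, e / e.sum(axis=1, keepdims=True)    # softmax probabilities

onehot = np.eye(n_classes)[y]
lr = 0.1
for _ in range(500):                              # plain batch gradient descent
    h, p = forward(X)
    g = (p - onehot) / len(X)                     # softmax cross-entropy gradient
    gh = (g @ W2.T) * (1.0 - h ** 2)              # backpropagate through tanh
    W2 -= lr * (h.T @ g); b2 -= lr * g.sum(axis=0)
    W1 -= lr * (X.T @ gh); b1 -= lr * gh.sum(axis=0)

train_acc = float((forward(X)[1].argmax(axis=1) == y).mean())
print(f"training accuracy: {train_acc:.2f}")
```

The R package additionally offers weight decay and a quasi-Newton optimizer, which this sketch omits; it is the structure of the forward pass and gradient, not the exact optimizer, that the example illustrates.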

6. Conclusions

Information on the spatial distribution of spruce (Picea abies (L.) H. Karst.), birch (Betula pendula Roth), beech (Fagus sylvatica L.), larch (Larix decidua Mill.), pine (Pinus sylvestris L.), and alder (Alnus incana (L.) Moench) within Karkonoski National Park was obtained. All classes achieved producer accuracy over 76% and user accuracy over 77%. The best-classified classes were spruce (93%), alder (93%), beech (90%), and birch (84%). Based on the classification results, the tree species composition of KNP differs from the official data. The greatest discrepancies concerned the beech and spruce classes (6 and 7 percentage points, respectively). There were no significant discrepancies for the other classes.
After initial processing, 40 APEX spectral channels were selected, based upon which further analyses were carried out. Even though it contained only one-seventh of the input channels, the dataset allowed high classification accuracy to be achieved. Working on a smaller dataset made it possible to carry out more experiments and to optimize the network’s learning parameters within a reasonable time. The selected channels were located in areas that are important in terms of vegetation identification.
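Band selection of this kind can be illustrated with a generic separability ranking; the sketch below scores each band by a Fisher-style ratio on toy two-class "reflectance" samples and keeps the 40 highest-scoring bands. It is an illustration of the idea only, not the selection algorithm actually applied in the study, and all data are invented:

```python
import random
import statistics

random.seed(1)

# Toy two-class samples over 288 "bands"; class separation grows with
# band index purely so the ranking has something to find.
n_bands, n_samples = 288, 30
spectra = {cls: [[random.gauss(cls * b / n_bands, 1.0) for b in range(n_bands)]
                 for _ in range(n_samples)] for cls in (0, 1)}

def fisher_score(band):
    """Between-class separation over within-class spread for one band."""
    a = [s[band] for s in spectra[0]]
    b = [s[band] for s in spectra[1]]
    between = (statistics.mean(a) - statistics.mean(b)) ** 2
    within = statistics.variance(a) + statistics.variance(b)
    return between / within

ranked = sorted(range(n_bands), key=fisher_score, reverse=True)
selected = sorted(ranked[:40])          # keep the 40 highest-scoring bands
print(len(selected), "bands selected")
```

In practice such a filter would be run on the real training spectra, and the surviving bands passed to the classifier, which is what keeps the network's training time manageable.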
The following conclusions can be drawn:
  • Despite some difficulty in classifying protected areas (due to the less orderly occurrence of species as compared to commercial forests), the classification achieved an overall accuracy of 87%. This allows the usefulness of hyperspectral data to be confirmed for studies of protected and difficult-to-access areas.
  • Hyperspectral data are undoubtedly useful in classifying tree species in protected areas. A large number of very narrow spectral channels allows tree species to be successfully distinguished, although the wealth of data requires an answer to the question of which data to use.
  • The results allowed the dominant tree species in the research area (spruce, beech) to be classified effectively, as well as species that occur there far less frequently (larch, pine, birch, alder).
  • The algorithm used to select hyperspectral channels allowed us to select the 40 most informative bands. Using the entire dataset and ANNs would have resulted in very long training times, which would have seriously hindered the optimization of learning parameters and the assessment of accuracy. Proper band selection is particularly important when analyzing large areas using advanced data-processing techniques.
  • Instead of relying on a one-time training and validation process, multiple sets of samples for training and validation were drawn at random. This method reduces unintended influence of the researcher on the results and frees the researcher from the tedious process of manually selecting training and validation samples. Moreover, the accuracy assessment results can be better understood: the method provides not only accuracy measures, but also insight into how accuracy varies across training and validation sets. One downside is the need to repeat the training and validation procedure a number of times, which translates into longer processing times. Open questions remain about the spatial autocorrelation of randomly selected pixels, which can bias results.
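The iterative accuracy assessment described above can be sketched as repeated random train/validation draws with a median summary. The classifier below is a trivial nearest-centroid stand-in (not the MLP used in the study), and the split ratio, draw count, and data are assumed for illustration:

```python
import random
import statistics

random.seed(0)

# Hypothetical labeled reference pixels: (feature, class) pairs. A 1-D
# "feature" stands in for the spectral bands of a real sample.
reference = [(random.gauss(cls, 0.4), cls) for cls in (0, 1, 2) for _ in range(100)]

def classify(x, centroids):
    """Assign x to the class with the nearest training centroid."""
    return min(centroids, key=lambda cls: abs(x - centroids[cls]))

accuracies = []
for _ in range(100):                        # repeat the random draw many times
    random.shuffle(reference)
    split = int(0.65 * len(reference))      # assumed ~65/35 train/validation split
    train, valid = reference[:split], reference[split:]
    centroids = {cls: statistics.mean(x for x, c in train if c == cls)
                 for cls in {c for _, c in train}}
    hits = sum(classify(x, centroids) == c for x, c in valid)
    accuracies.append(hits / len(valid))

print(f"median accuracy over 100 draws: {statistics.median(accuracies):.2f}")
```

Summarizing over draws with the median, as done for the accuracies reported here, makes the estimate robust to occasional unlucky splits, at the cost of training the classifier once per draw.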

Author Contributions

E.R. and B.Z. conceived and designed the experiments, performed the field measurements, and wrote the paper. E.R. performed data processing and classification tasks.


The authors wish to thank the European Facility for Airborne Research Transnational Access (EUFAR TA), which funded a flight campaign within the framework of the HyMountEcos project. The research was funded by the Polish Ministry of Science and Higher Education from science development funds in 2013–2017 as a research project under the framework of the Diamond Grant program for young researchers. The publishing costs were covered by the H2020-MSCA-RISE-2016: innoVation in geOspatiaL and 3D daTA (VOLTA), ref. GA No. 734687.


Special thanks go to the German Space Agency, for execution of the flight; VITO, for obtaining and processing APEX data; Adrian Ochtyra (University of Warsaw), who preprocessed ALS data; and the administration of Karkonoski National Park, for help during field research. Feature extraction methods were tested in the frame of the WP3 Data and sensor integration of the project H2020-MSCA-RISE-2016: innoVation in geOspatiaL and 3D daTA (VOLTA).

Conflicts of Interest

The authors declare no conflict of interest.


  1. Shen, G.; Sakai, K.; Hoshino, Y. High Spatial Resolution Hyperspectral Mapping for Forest Ecosystem at Tree Species Level. Agric. Inf. Res. 2010, 19, 71–78. [Google Scholar] [CrossRef]
  2. Peerbhay, K.; Mutanga, O.; Ismail, R. Commercial tree species discrimination using airborne AISA Eagle hyperspectral imagery and partial least squares discriminant analysis (PLS-DA) in KwaZulu–Natal, South Africa. ISPRS J. Photogramm. Remote Sens. 2013, 79, 19–28. [Google Scholar] [CrossRef]
  3. Martinez del Castillo, E.; García-Martin, A.; Longares Aladrén, L.; de Luis, M. Evaluation of forest cover change using remote sensing techniques and landscape metrics in Moncayo Natural Park (Spain). Appl. Geogr. 2015, 62, 247–255. [Google Scholar] [CrossRef]
  4. Martin, M.; Newman, S.; Aber, J.; Congalton, R. Determining forest species composition using high spectral resolution remote sensing data. Remote Sens. Environ. 1998, 65, 249–254. [Google Scholar] [CrossRef]
  5. Bergseng, E.; Ørka, H.; Næsset, E.; Gobakken, T. Assessing forest inventory information obtained from different inventory approaches and remote sensing data sources. Ann. For. Sci. 2015, 72, 33–45. [Google Scholar] [CrossRef]
  6. Jarocińska, A.M.; Kacprzyk, M.; Marcinkowska-Ochtyra, A.; Ochtyra, A.; Zagajewski, B.; Meuleman, K. The application of APEX images in the assessment of the state of non-forest vegetation in the Karkonosze Mountains. Misc. Geogr. 2016, 20, 21–27. [Google Scholar] [CrossRef]
  7. Asner, G.P.; Martin, R.E.; Knapp, D.E.; Tupayachi, R.; Anderson, C.; Carranza, L.; Martinez, P.; Houcheime, M.; Sinca, F.; Weiss, P. Spectroscopy of canopy chemicals in humid tropical forests. Remote Sens. Environ. 2011, 115, 3587–3598. [Google Scholar] [CrossRef]
  8. Zagajewski, B.; Tømmervik, H.; Bjerke, J.W.; Raczko, E.; Bochenek, Z.; Kłos, A.; Jarocińska, A.; Lavender, S.; Ziółkowski, D. Intraspecific Differences in Spectral Reflectance Curves as Indicators of Reduced Vitality in High-Arctic Plants. Remote Sens. 2017, 9, 1289. [Google Scholar] [CrossRef]
  9. Kycko, M.; Zagajewski, B.; Zwijacz-Kozica, M.; Cierniewski, J.; Romanowska, E.; Orłowska, K.; Ochtyra, A.; Jarocińska, A. Assessment of Hyperspectral Remote Sensing for Analyzing the Impact of Human Trampling on Alpine Swards. Mt. Res. Dev. 2017, 37, 66–74. [Google Scholar] [CrossRef]
  10. Kycko, M.; Zagajewski, B.; Lavender, S.; Romanowska, E.; Zwijacz-Kozica, M. The Impact of Tourist Traffic on the Condition and Cell Structures of Alpine Swards. Remote Sens. 2018, 10, 220. [Google Scholar] [CrossRef]
  11. Guidici, D.; Clark, M.L. One-Dimensional Convolutional Neural Network Land-Cover Classification of Multi-Seasonal Hyperspectral Imagery in the San Francisco Bay Area, California. Remote Sens. 2017, 9, 629. [Google Scholar] [CrossRef]
  12. Kupková, L.; Červená, L.; Suchá, R.; Jakešová, L.; Zagajewski, B.; Březina, S.; Albrechtová, J. Classification of Tundra Vegetation in the Krkonoše Mts. National Park Using APEX, AISA Dual and Sentinel-2A Data. Eur. J. Remote Sens. 2017, 50, 29–46. [Google Scholar] [CrossRef]
  13. Marcinkowska-Ochtyra, A.; Zagajewski, B.; Raczko, E.; Ochtyra, A.; Jarocińska, A. Classification of High-Mountain Vegetation Communities within a Diverse Giant Mountains Ecosystem Using Airborne APEX Hyperspectral Imagery. Remote Sens. 2018, 10, 570. [Google Scholar] [CrossRef]
  14. Price, J.C. How unique are spectral signatures? Remote Sens. Environ. 1994, 49, 181–186. [Google Scholar] [CrossRef]
  15. Masaitis, G.; Mozgeris, G.; Augustaitis, A. Spectral reflectance properties of healthy and stressed coniferous trees. iForest 2013, 6, 30–36. [Google Scholar] [CrossRef]
  16. Cochrane, M.A. Using vegetation reflectance variability for species level classification of hyperspectral data. Int. J. Remote Sens. 2000, 10, 2075–2087. [Google Scholar] [CrossRef]
  17. Sommer, C.; Holzwarth, S.; Heiden, U.; Heurich, M.; Mueller, J.; Mauser, W. Feature-based tree species classification using airborne hyperspectral and LiDAR data in the Bavarian Forest National Park. EARSeL eProc. 2015, 14, 49–70. [Google Scholar]
  18. Tagliabue, G.; Panigada, C.; Colombo, R.; Fava, F.; Cilia, C.; Baret, F.; Vreys, K.; Meuleman, K.; Rossini, M. Forest species mapping using airborne hyperspectral APEX data. Misc. Geogr. 2016, 20, 28–33. [Google Scholar] [CrossRef]
  19. Fassnacht, F.E.; Latifi, H.; Stereńczak, K.; Modzelewska, A.; Lefsky, M.; Waser, L.; Straub, C.; Ghosh, A. Review of studies on tree species classification from remotely sensed data. Remote Sens. Environ. 2016, 186, 64–87. [Google Scholar] [CrossRef]
  20. Ghosh, A.; Fassnacht, F.E.; Joshi, P.K.; Koch, B. A framework for mapping tree species combining hyperspectral and LiDAR data: Role of selected classifiers and sensor across three spatial scales. Int. J. Appl. Earth Obs. Geoinf. 2014, 26, 49–63. [Google Scholar] [CrossRef]
  21. Graves, S.J.; Asner, G.P.; Martin, R.E.; Anderson, C.B.; Colgan, M.S.; Kalantari, L.; Bohlman, S.A. Tree species abundance predictions in a tropical agricultural landscape with a supervised classification model and imbalanced data. Remote Sens. 2016, 8, 161. [Google Scholar] [CrossRef]
  22. Dalponte, M.; Ørka, H.O.; Gobakken, T.; Gianelle, D.; Naesset, E. Tree species classification in boreal forests with hyperspectral data. IEEE Trans. Geosci. Remote Sens. 2013, 51, 2632–2645. [Google Scholar] [CrossRef]
  23. Ballanti, L.; Blesius, L.; Hines, E.; Kruse, B. Tree Species Classification Using Hyperspectral Imagery: A Comparison of Two Classifiers. Remote Sens. 2016, 8, 445. [Google Scholar] [CrossRef]
  24. Lee, J.; Cai, X.; Dalponte, M.; Schonlieb, C.; Coomes, D.; Lellmann, J.; Malhi, Y.; Butt, N.; Morecroft, M. Individual Tree Species Classification from Airborne Multisensor Imagery Using Robust PCA. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2016, 9, 2554–2567. [Google Scholar] [CrossRef]
  25. Mas, J.F.; Flores, J.J. The application of artificial neural networks to the analysis of remotely sensed data. Int. J. Remote Sens. 2008, 29, 617–663. [Google Scholar] [CrossRef]
  26. Chen, Y.; Zhao, X.; Jia, X. Spectral–Spatial Classification of Hyperspectral Data Based on Deep Belief Network. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2015, 8, 2381–2392. [Google Scholar] [CrossRef]
  27. Paoletti, M.; Haut, M.; Plaza, J.; Plaza, A. A new deep convolutional neural network for fast hyperspectral image classification. ISPRS J. Photogramm. Remote Sens. 2017. [Google Scholar] [CrossRef]
  28. Raczko, E.; Zagajewski, B. Comparison of Support Vector Machine, Random Forest and Neural Network Classifiers for Tree Species Classification on Airborne Hyperspectral APEX images. Eur. J. Remote Sens. 2017, 50, 144–154. [Google Scholar] [CrossRef]
  29. Beluco, A.; Engel, P.; Alexandre, B. Classification of textures in satellite image with Gabor filters and a multilayer perceptron with back propagation algorithm obtaining high accuracy. Int. J. Energy Environ. 2015, 6, 437–459. [Google Scholar] [CrossRef]
  30. Jain, A.K.; Duin, R.P.W.; Mao, J. Statistical pattern recognition: A review. IEEE Trans. Pattern Anal. Mach. Intel. 2000, 22, 4–37. [Google Scholar] [CrossRef]
  31. Miguez, G.; Xavier, A.; Maculan, N. An evaluation of the bihyperbolic function in the optimization of the backpropagation algorithm. Int. Trans. Oper. Res. 2014, 21, 835–854. [Google Scholar] [CrossRef]
  32. Raj, A. Przemiany Krajobrazu Leśnego Karkonoskiego Parku Narodowego w Okresie Ostatnich Kilkudziesięciu Lat; Karkonoski Park Narodowy: Jelenia Góra, Poland, 2014; p. 100. ISBN 978-83-64528-16-3. [Google Scholar]
  33. Danielewicz, W.; Raj, A.; Zientarski, J. Ekosystemy Leśne Karkonoskiego Parku Narodowego; Karkonoski Park Narodowy: Jelenia Góra, Poland, 2012; p. 96. ISBN 978-83-926933-5-2. [Google Scholar]
  34. Itten, K.I.; Dell’Endice, F.; Hueni, A.; Kneubuhler, M.; Schlaepfer, D.; Odermatt, D.; Seidel, F.; Huber, S.; Schopfer, J.; Kellenberger, T.; et al. APEX—The Hyperspectral ESA Airborne Prism Experiment. Sensors 2008, 8, 6235–6259. [Google Scholar] [CrossRef] [PubMed]
  35. Vreys, K.; Iordache, M.; Biesemans, J.; Meuleman, K. Geometric correction of APEX hyperspectral data. Misc. Geogr. 2016, 20, 11–15. [Google Scholar] [CrossRef]
  36. Sterckx, S.; Vreys, K.; Biesemans, J.; Iordache, M.; Bertels, L.; Meuleman, K. Atmospheric correction of APEX hyperspectral data. Misc. Geogr. 2016, 20, 16–20. [Google Scholar] [CrossRef]
  37. Popp, C.; Brunner, D.; Damm, A.; van Roozendael, M.; Fayt, C.; Buchmann, B. High-resolution NO2 remote sensing from the Airborne Prism EXperiment (APEX) imaging spectrometer. Atmos. Meas. Tech. 2012, 5, 2211–2225. [Google Scholar] [CrossRef]
  38. Thenkabail, P.S.; Enclona, E.A.; Ashton, M.S.; van der Meer, B. Accuracy assessments of hyperspectral waveband performance for vegetation analysis applications. Remote Sens. Environ. 2004, 91, 354–376. [Google Scholar] [CrossRef]
  39. Pal, M.; Mather, P.M. Some issues in the classification of DAIS hyperspectral data. Int. J. Remote Sens. 2006, 27, 2895–2916. [Google Scholar] [CrossRef]
  40. Thenkabail, P.S.; Lyon, J.G.; Huete, A. Advances in hyperspectral remote sensing of Vegetation and Agricultural Croplands. In Hyperspectral Remote Sensing of Vegetation; Thenkabail, P.S., Lyon, J.G., Huete, A., Eds.; CRC Press Taylor & Francis Group: Boca Raton, FL, USA, 2012; pp. 3–35. [Google Scholar]
  41. Venables, W.N.; Ripley, B.D. Modern Applied Statistics with S, 4th ed.; Springer: New York, NY, USA, 2002; p. 462. ISBN 0-387-95457-0. [Google Scholar]
  42. R Core Team. R: A Language and Environment for Statistical Computing; R Foundation for Statistical Computing: Vienna, Austria, 2017. [Google Scholar]
  43. Braga-Neto, U.M.; Dougherty, E.R. Is cross-validation valid for small sample microarray classification? Bioinformatics 2004, 20, 374–380. [Google Scholar] [CrossRef] [PubMed]
  44. Sims, D.A.; Gamon, J.A. Relationships between leaf pigment content and spectral reflectance across a wide range of species, leaf structures and developmental stages. Remote Sens. Environ. 2002, 81, 337–354. [Google Scholar] [CrossRef]
  45. Raczko, E.; Zagajewski, B.; Ochtyra, A.; Jarocińska, A.; Marcinkowska-Ochtyra, A.; Dobrowolski, M. Forest species identification of Mount Chojnik (Karkonoski National Park) using airborne hyperspectal APEX data. Sylwan 2015, 159, 593–599. [Google Scholar]
  46. Kokaly, R.F.; Despain, D.G.; Clark, R.N.; Livo, K.E. Mapping vegetation in Yellowstone National Park using spectral feature analysis of AVIRIS data. Remote Sens. Environ. 2003, 84, 437–456. [Google Scholar] [CrossRef]
  47. Dalponte, M.; Ørka, H.O.; Ene, L.T.; Gobakken, T.; Næsset, E. Tree crown delineation and tree species classification in boreal forests using hyperspectral and ALS data. Remote Sens. Environ. 2014, 140, 306–317. [Google Scholar] [CrossRef]
  48. Feret, J.B.; Asner, G.P. Tree species discrimination in tropical forests using Airborne Imaging Spectroscopy. IEEE Trans. Geosci. Remote Sens. 2013, 51, 73–84. [Google Scholar] [CrossRef]
  49. Zeng, Y.; Wang, S.; Zhao, T.; Wang, J. An application of tree species classification using high-resolution remote sensing image based on the rough set theory. Multimed. Tools Appl. 2017, 76, 22999–23015. [Google Scholar] [CrossRef]
  50. Pu, R.; Landry, S. A comparative analysis of high spatial resolution IKONOS and WorldView-2 imagery for mapping urban tree species. Remote Sens. Environ. 2012, 124, 516–533. [Google Scholar] [CrossRef]
  51. Deng, S.; Katoh, M.; Hyyppa, J.; Gao, T. Comparison of Tree Species Classifications at the Individual Tree Level by Combining ALS Data and RGB Images Using Different Algorithms. Remote Sens. 2016, 8, 1034. [Google Scholar] [CrossRef]
  52. Priedītis, G.; Šmits, I.; Daģis, S.; Paura, L.; Krūmiņš, J.; Dubrovskis, D. Assessment Of Hyperspectral Data Analysis Methods To Classify Tree Species. Res. Rural Dev. 2015, 2, 7–13. [Google Scholar]
  53. Baldeck, C.A.; Asner, G.P.; Martin, R.E.; Anderson, C.B.; Knapp, D.E.; Kellner, J.R.; Wright, J.S. Operational Tree Species Mapping in a Diverse Tropical Forest with Airborne Imaging Spectroscopy. PLoS ONE 2015, 10, 21. [Google Scholar] [CrossRef] [PubMed]
  54. Knapik, R.; Raj, A. Monitoring Przyrodniczy w Karkonoskim Parku Narodowym Jako Narzędzie do Śledzenia Zmian w Środowisku; Peckiana, Senckenberg Museum für Naturkunde: Görlitz, Germany, 2014; Volume 9, pp. 3–44. ISSN 1618-1735. [Google Scholar]
Figure 1. Research area location.
Figure 2. Overview of research method. Explanations: nDSM—normalized Digital Surface Model; mreNDVI—modified red edge Normalized Difference Vegetation Index; PCA—Principal Component Analysis.
Figure 3. Band loadings for the first three PCA bands (bands 1, 2, and 3 explained 88% of total variance of the APEX dataset).
Figure 4. Example survey plot. In the foreground, the Leica Zeno 10 GPS receiver can be seen.
Figure 5. Locations of selected spectral bands (red lines). For comparison, spectral curves for two tree species were added.
Figure 6. Producer and user accuracies.
Figure 7. Percentage of height classes for classified tree species.
Figure 8. Tree species distribution map of Karkonoski National Park.
Table 1. Airborne Prism Experiment (APEX) sensor characteristics [37]. VNIR—visible and near-infrared; SWIR—shortwave infrared.
Spectral range: 380.0–971.7 nm (VNIR); 941.2–2501.5 nm (SWIR)
Number of bands: up to 334 (VNIR); 198 (SWIR)
Spectral sampling interval: 0.6–6.3 nm (VNIR); 6.2–11 nm (SWIR)
Field of view (FOV): 28°
Instantaneous field of view (IFOV): 0.028° (≈0.5 mrad)
Spatial resolution: 2.5 m at 5000 m above ground level (AGL)
Table 2. Number of collected locations during field surveys (forest inventory data included) and the corresponding number of pixel samples.
Species: Beech | Birch | Larch | Alder | Pine | Spruce
Number of measurement polygons: 115 | 124 | 193 | 11 | 27 | 806
Number of pixel samples: 615 | 972 | 685 | 90 | 125 | 2677
Table 3. Number of samples in training and validation datasets.
Species: Beech | Birch | Larch | Alder | Pine | Spruce
Training dataset: 389 | 615 | 433 | 57 | 79 | 1692
Validation dataset: 226 | 357 | 252 | 33 | 46 | 985
Table 4. Spectral bands used for classification of tree species.
Selected spectral bands (nm): 530, 550, 590, 600, 610, 620, 630, 650, 680, 690, 780, 800, 820, 830, 850, 860, 880, 900, 930, 940, 960, 980, 1000, 1040, 1120, 1200, 1240, 1260, 1280, 1500, 1530, 1560, 1660, 1720, 1760, 2000, 2030, 2060, 2090, 2110
Table 5. Comparison of KNP forest composition from classification results and forest inventory [33]. Values in %. The greatest differences are marked in bold.
Species: Alder | Beech | Birch | Larch | Pine | Spruce
Classification result: 4.81 | 10.25 | 6.30 | 0.11 | 0.72 | 77.81
Forest inventory: 4.79 | 4.14 | 4.58 | 0.12 | 0.53 | 85.55
Difference (percentage points): 0.02 | 6.11 | 1.72 | 0.01 | 0.19 | 7.74
Table 6. Comparison of results with selected literature. Explanations: NA—not available.
Authors | Data | Classification Algorithm | Number of Classified Tree Species | Overall Accuracy (%) | Kappa Coefficient
Kokaly et al. [46] | Hyperspectral | Expert system | 4 | 74 | 0.62
Peerbhay et al. [2] | Hyperspectral | Partial least squares discriminant analysis (PLS-DA) | 6 | 88 | 0.87
Dalponte et al. [22] | Hyperspectral | Support vector machine (SVM), random forest (RF), Gaussian maximum likelihood | 4 | 90 | <0.8
Feret et al. [48] | Hyperspectral | Support vector machine (radial kernel) | 17 | 83 | NA
Dalponte et al. [47] | Hyperspectral and LiDAR | Support vector machine | 3 | 93 | 0.88
Ghosh et al. [20] | Hyperspectral and LiDAR | Support vector machine, random forest | 5 | 94 | 0.95
Fassnacht et al. [19] | Hyperspectral | Support vector machine, random forest | 5 | 92 | 0.83
Priedītis et al. [52] | Hyperspectral | Linear discriminant analysis (LDA) and artificial neural networks (ANN) | 5 | 86 (LDA); 71 (ANN) | NA
Sommer et al. [17] | Hyperspectral | Random forest | 13 | 94 | 0.93
Baldeck et al. [53] | Hyperspectral | Support vector machine | 3 | 98 | NA
Ballanti et al. [23] | Hyperspectral and LiDAR | Support vector machine | 8 | 95 | NA
Graves et al. [21] | Hyperspectral | Support vector machine | 20 | 62 | NA
Lee et al. [24] | Hyperspectral and LiDAR | Support vector machine | 6 | 91 | 0.89
Raczko and Zagajewski (present study) | Hyperspectral | Artificial neural networks | 6 | 87 | 0.82

© 2018 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license.