Article

Object Oriented Classification for Mapping Mixed and Pure Forest Stands Using Very-High Resolution Imagery

1 Department for Innovation in Biological Agro-Food and Forestry System (DIBAF), University of Tuscia, Via San Camillo De Lellis, SNC, 01100 Viterbo, Italy
2 Council for Agricultural Research and Economics, Research Centre for Forestry and Wood, Viale S. Margherita, 80, 52100 Arezzo, Italy
* Author to whom correspondence should be addressed.
Remote Sens. 2021, 13(13), 2508; https://doi.org/10.3390/rs13132508
Submission received: 31 May 2021 / Revised: 21 June 2021 / Accepted: 23 June 2021 / Published: 26 June 2021

Abstract
The importance of mixed forests is increasingly recognized at the scientific level, owing to their greater productivity and resource-use efficiency compared to pure stands. However, a reliable quantification of the actual spatial extent of mixed stands at a fine spatial scale is still lacking. Indeed, the classification and mapping of mixed stands, especially with semi-automatic procedures, has been a challenging issue to date. The main objective of this study is to evaluate the potential of Object-Based Image Analysis (OBIA) and Very-High-Resolution (VHR) imagery to detect and map mixed forests of broadleaved and coniferous trees with a Minimum Mapping Unit (MMU) of 500 m2. This study evaluates segmentation-based classification paired with the non-parametric k-nearest-neighbors (kNN) method, trained with a dataset independent from the validation one. The forest area mapped as mixed forest canopies amounts to 11% of the study area, with an overall accuracy of 85% and a K of 0.78. Better levels of user's and producer's accuracy (85–93%) are reached in conifer- and broadleaved-dominated stands. The study findings demonstrate that very-high-resolution images (0.20 m spatial resolution) can be reliably used to detect the fine-grained pattern of rare mixed forests, thus supporting the monitoring and management of forest resources also at fine spatial scales.


1. Introduction

A mixed forest is defined as an area where at least two species coexist at any stage of development, sharing resources including light, water, and nutrients [1]. The relative abundance of species can be quantified as a percentage proportion of stand density, volume or canopy cover. For operational or forest inventory purposes it is common practice to classify as "mixed" those forest stands where two (or more) tree species each contribute more than 10–30% to stand basal area [1]. Similar thresholds are used for canopy cover. For instance, the Corine Land Cover nomenclature defines "mixed forest" as the alternation of patches, groups or single trees of broadleaved and coniferous trees over a minimum mapping unit of 25 ha, with both coniferous and broadleaved species representing at least 25%, but at most 75%, of the tree-covered area [2]. Mixed-species stands have gained considerable traction in science and policy, especially in Europe, for several reasons. Forests composed of several tree species are expected to foster biodiversity in the forest habitat, because tree species mixing sets the stage for variation in other structural components (e.g., tree size differentiation, tree layering). Further, several studies have demonstrated that mixed forest communities are generally more productive, resilient and capable of providing more ecosystem services than stands dominated by a single species [3,4,5,6,7,8,9,10]. Although knowledge about the ecology of mixed forests has recently expanded, almost all studies have been conducted on experimental or observational platforms, where pairs or triplets of pure and mixed stands growing under similar environmental conditions are compared in terms of productivity, growth stability and ecosystem services provision, without any prior knowledge of their actual spatial extent in the investigated area.
It is worth noting that the occurrence of mixed forest stands may represent an exception, rather than the rule, in many forest regions since, at least in Europe, managed forests are to a large extent human-made or secondary forests, often established and managed as monocultures [11]. Because of their rarity, mixed forest stands might not be sampled with sufficient intensity by random or systematic forest inventories, compared to monocultures. This limits the set-up of replicated experiments to analyze, in real-world conditions, the effect of mixing two or more tree species on forest ecosystem properties and functions, and also to fill the existing knowledge gaps perceived by forest managers [12]. From this perspective, mapping the actual occupancy of mixed stands at a fine spatial scale can be a fundamental tool for both experimental studies and monitoring activities such as forest inventories. Indeed, maps of occurrence of tree species mixtures vs. single species would allow, by a stratified sampling approach, the extraction of sufficiently large samples of mixed forest plots. Several projects and studies have been developed at the European level to map the distribution of single tree species or forest types by remote sensing, with acceptable classification results [13,14,15,16]. In this regard, mapping forest habitat types dominated by one canopy species is relatively straightforward [17,18], but where the forest landscape is more heterogeneous and includes mixed-species forests, the mapping procedures may become more challenging [19].
In the scientific literature, there are several studies on forest-type mapping based on freely available multi-source and time-series imagery such as high-resolution (HR) Sentinel-2 (S2) data [20,21,22]. However, the possibility of detecting forest canopies dominated by different tree species at a fine spatial scale greatly depends on the spatial resolution of the images. Indeed, [23] demonstrated that the spatial resolution of Sentinel-2A images (i.e., 10 m) may be insufficient for the classification of heterogeneous forests with fragmented species distribution, and recommended combining these images with very-high-resolution (VHR) data. VHR data have been successfully used for the identification of canopy tree species in simplified contexts such as urban areas [24]. However, to the best of our knowledge no study has attempted to use VHR images to map mixed stands at fine spatial scales across forest landscapes. Hyperspectral data [25,26], especially if coupled with lidar data [27,28], have been successful in the identification of canopy tree species. However, we believe that much research effort is still needed to demonstrate that, for the operational task of tree species identification and mixed stand delineation, the semi-automatic classification of VHR multispectral data can result in a consistent improvement over the business-as-usual method of photointerpretation.
One approach for developing semi-automatic procedures for mapping mixed forest stands is to rely on Object-Based Image Analysis (OBIA). This technique is widely used to extract and classify information from imagery with high spatial detail [29]. OBIA has been successfully applied in various fields of research [30,31] and, in particular, in classifications for environmental studies [32] and small-scale forest mapping [33,34]. The OBIA technique encompasses two main steps: (i) "segmentation", the delineation of homogeneous objects from the input imagery, following the principle of clustering neighboring image pixels into "objects" so as to maximize intra-object spectral homogeneity and inter-object spectral heterogeneity; and (ii) "classification", which labels and assigns each polygon to the target cover class [35]. One advantage of segmentation is that it creates objects that can be associated with land cover types that may be spectrally variable at the pixel level, thus eliminating the "salt and pepper" effect associated with per-pixel classification [36]. Another advantage is that OBIA delineates non-arbitrary units for analysis as opposed to pixels; objects can approximate real-world features better than pixels [36], resulting in better classification of high and very high spatial resolution imagery than pixel-based techniques [37,38]. OBIA is notably well-suited to detect the fine-scaled pattern of forest canopy and to delineate specific attributes (e.g., tree crowns, canopy gaps) [24,39,40]. In addition, the use of shape features, hierarchical structures of objects and classes, and the topological features relating objects are other benefits of OBIA approaches. In particular, Multi-Resolution Segmentation (MRS) [35] can generate multiple hierarchical levels of image segmentation, i.e., a hierarchical set of image segmentations at different levels of spectral and shape homogeneity.
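The two OBIA steps can be illustrated with a minimal, self-contained sketch. This is not the eCognition MRS/kNN workflow used in the study: here a toy "segmentation" labels connected regions of high NDVI as objects, and a toy "classification" labels each object from its mean spectral response. All threshold values are illustrative assumptions.

```python
import numpy as np
from scipy import ndimage

def segment_and_classify(nir, red, ndvi_split=0.4):
    """Toy OBIA pipeline: (i) segment pixels into objects by thresholding
    NDVI and labelling connected regions (a stand-in for MRS), then
    (ii) classify each object from its mean spectral response.
    Thresholds are illustrative, not those of the study."""
    ndvi = (nir - red) / (nir + red + 1e-9)
    # (i) segmentation: connected high-NDVI regions become "objects"
    mask = ndvi > ndvi_split
    objects, n = ndimage.label(mask)
    # (ii) classification: label each object by its mean NDVI
    labels = {}
    for obj_id in range(1, n + 1):
        mean_ndvi = ndvi[objects == obj_id].mean()
        labels[obj_id] = "broadleaved" if mean_ndvi > 0.5 else "conifer"
    return objects, labels
```

The real workflow replaces both stand-ins: MRS produces the objects, and a trained kNN assigns the class labels.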
The main objective of this study is therefore to advance research in airborne VHR Object-Based Image Analysis (OBIA), with the goal to detect and map mixed forests of broadleaves and coniferous trees with a Minimum Mapping Unit (MMU) of 500 m2 in a forest-dominated landscape in Southern Italy. The selected MMU size approximately corresponds to the minimum area covered by forest inventory plots in National Forest Inventories in Europe.

2. Materials and Methods

2.1. Study Area

The study was conducted on the Sila plateau, which is located in Southern Italy, specifically in the Calabria region (Figure 1). The test site extends over about 6080 ha, with an elevation ranging between 850 and 1840 m above sea level. The actual forest area in the investigated test site amounts to 4846 ha, predominantly located on north-western exposures, based on ancillary information retrieved from a high-resolution land use map available for the study area [41]. Forest types are mainly represented by stands dominated by beech (Fagus sylvatica L.) or Corsican pine (Pinus nigra J.F. Arnold subsp. laricio (Poir.) Maire) at elevations above 1000 m. Mixed stands of the two species can be found either in ecotone zones or can develop from stages of forest succession in mature stands of Corsican pine. In the latter case, the most typical physiognomy is a bi-layered vertical structure with an open upper canopy layer of Corsican pine and an underlying layer of beech. In the submontane belt, ranging between 700 and 1200 m, other broadleaved deciduous trees, such as chestnut and turkey oak, dominate the landscape [42].

2.2. Image Data

The multispectral imagery was acquired by the high-resolution imaging system ADS 40 (Airborne Digital Sensor), in the framework of the Terra-Italy project [43]. The time of acquisition was late spring (May 2017), a phenological period corresponding to the early stage of leaf development for the deciduous tree species in the area. Each single VHR image (pixel size = 0.2 m, 8-bit radiometric resolution) acquired by the aerial platform covers around 11.5 km2. Therefore, a mosaic of digital orthophotos was created for the study area. The spectral resolution of the ADS 40 sensor covers the three spectral bands most relevant for vegetation mapping (Green, Red, Near-Infrared).

2.3. Methods

The multispectral orthophotos were processed in order to delineate mixed-species stands of a broadleaved deciduous species (beech) and a conifer (Corsican pine) from a wider forest landscape largely dominated by single-species stands of these two species. The data processing workflow (Figure 2) consisted of the following steps: (1) multi-resolution segmentation; (2) noise removal; (3) training sample selection and classification; (4) classification of smaller-scale objects by the kNN algorithm; (5) build-up of an independent classification of sample polygons for map validation; (6) map accuracy assessment. Steps 1 to 4 were carried out using the eCognition Developer 9.5 software of Trimble Germany GmbH (München, Germany), which is specifically designed for object-oriented image analysis [35].

2.3.1. Image Segmentation

In our study, OBIA was implemented through Multi-Resolution Segmentation (MRS) of the VHR imagery. We took advantage of this option by performing two hierarchical segmentations: a coarser one (FS150) and a finer one (FS80). Four key parameters were used to adjust MRS at both levels: the scale parameter (FS), the weights of compactness and smoothness, the weights of color and shape, and the layer weights. All these parameters were found empirically, by trial and error, to ensure the best delineation results. The image layer weights were tailored for the best differentiation between conifers and broadleaves, with double weight on the NIR band (RED = 1, NIR = 2, GREEN = 1). For the finer level of segmentation (FS80) the scale parameter was set to 80, the shape criterion to 0.20 and the compactness criterion to 0.80. For the coarser level (FS150) the scale parameter was set to 150, the shape criterion to 0.40 and the compactness criterion to 0.80.
The rationale behind this two-level segmentation is that the small polygons delineated by the FS80 segmentation are used to outline spectrally homogeneous tree crowns, which can then be recognized and classified as "broadleaved deciduous" or "coniferous" by a semi-automatic algorithm; based on the proportion of areas assigned to "broadleaved"- or "coniferous"-dominated polygons, results can be aggregated at the coarser scale level (FS150 polygons) in order to delineate larger areas where the forest canopy is dominated by one of the two species groups or is composed of small-scaled groups of the two species, i.e., mixed-species stands. In this way, the coarser scale level can be classified based on simple merges of regions from segmentations at finer detail levels.

2.3.2. Classification of Noise

The "extremely" high spatial resolution of the orthomosaic implied the generation of a very high number of image polygons (more than polygons/km2). These correspond not only to sunlit tree crowns, supposedly of the same tree species group (conifer or broadleaved deciduous), but also to "noise" due to inter-crown space, shadows or even dead or unhealthy tree crowns. Therefore, one of the first tasks in the semi-automatic classification of extremely high-resolution images is to handle the considerable presence of shadows affecting the spectral response of the observed objects. Accordingly, at the most detailed segmentation level (FS80), those polygons presumably associated with shadows, light shade or canopy gaps with a grassland understory were identified and removed from the subsequent steps of the classification. The detection of these "noise" polygons was performed using the "Assign Class" tool, a simple classification algorithm that allows the user to assign a class based on a spectral reflectance condition (for example, a brightness range). These thresholds (Table 1) were determined by conducting repeated tests, with photointerpretation control, using the "feature view" function provided by the software.
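The logic of the "Assign Class" step can be mimicked with a simple brightness cut-off on the per-polygon mean values. The `shadow_max` value below is a hypothetical placeholder for the tuned ranges of Table 1, which are not reproduced here.

```python
def flag_noise(object_ids, mean_brightness, shadow_max=60.0):
    """Return the set of object ids treated as shadow/gap "noise".
    `mean_brightness` holds the mean spectral brightness of each FS80
    polygon; `shadow_max` is an illustrative threshold, not the value
    actually tuned in the study."""
    return {oid for oid, b in zip(object_ids, mean_brightness)
            if b < shadow_max}
```

Polygons flagged this way would simply be dropped before training-sample selection and kNN classification.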

2.3.3. Training Sample Dataset

The selection of a sufficient number of representative training samples is a critical step for semi-automatic image classification methods [44,45]. However, the precise determination of the number of training samples needed to achieve an accurate classification is elusive.
In order to select sufficiently large training samples, only FS80 polygons with a size >100 m2 were considered, corresponding to the average size of the FS80 image polygons left after the removal of noise polygons. In order to spread the training sample at regular intervals across the area of interest, a grid with square cells of 1 km2 was superimposed on the study area. A set of training samples (n = 115) was initially extracted, including two random polygons per square kilometer. However, the samples were extracted so as to represent, in a proportionate way, the variability of the spectral response of the polygons deemed to be "sunlit" assemblages of tree crowns, due to different tree species groups (broadleaved deciduous and conifer) but also to other factors (topography, health conditions, phenological status). To this end, the normalized difference vegetation index NDVI ((NIR − RED)/(NIR + RED), where NIR is reflectance in the near-infrared band and RED is reflectance in the red band) was calculated for the orthomosaic, along with the average NDVI value associated with each FS80 polygon. Based on a roughly bimodal distribution, two main classes of average NDVI value were identified for the FS80 polygons: 0.2–0.4 (73%) and 0.4–0.6 (27%). Using these two NDVI classes as strata for proportional allocation of polygons to the sample, the FS80 training samples were randomly drawn from polygons larger than 100 m2 in each square cell. All the training samples were then visually interpreted to be assigned to crowns of pure broadleaved deciduous trees (beech), pure coniferous trees (Corsican pine) or, more rarely, mixtures of broadleaved deciduous trees. This admixture was only occasionally found in the study area, so it was included in the "broadleaved deciduous forest" class (Table 2).
The final proportion of broadleaved and coniferous training samples, being the result of the combination of a systematic and a stratified (using NDVI as stratum) sampling, reflects the actual spectral variability and proportion of pure FS80 classes in the study area.
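The stratified draw described above can be sketched as follows. This is a simplification under stated assumptions: it ignores the per-cell systematic component and allocates the whole sample proportionally to the two NDVI strata; all function and variable names are our own.

```python
import numpy as np

rng = np.random.default_rng(42)  # fixed seed for reproducibility

def draw_training_sample(ndvi_means, areas, n_total=115, min_area=100.0):
    """Stratified random draw of training polygons, proportional to the
    two NDVI strata of the study (0.2-0.4 vs 0.4-0.6). `ndvi_means` and
    `areas` are per-polygon arrays; polygon ids are array indices."""
    # keep only polygons larger than the minimum size
    eligible = np.flatnonzero(areas > min_area)
    low = eligible[ndvi_means[eligible] < 0.4]    # stratum 0.2-0.4
    high = eligible[ndvi_means[eligible] >= 0.4]  # stratum 0.4-0.6
    # proportional allocation between the two strata
    n_low = round(n_total * len(low) / (len(low) + len(high)))
    picks_low = rng.choice(low, size=min(n_low, len(low)), replace=False)
    picks_high = rng.choice(high, size=min(n_total - n_low, len(high)),
                            replace=False)
    return np.concatenate([picks_low, picks_high])
```

In the study, each sampled polygon was then visually interpreted and assigned to the broadleaved or conifer class before kNN training.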

2.3.4. Nearest Neighbor Classifier

The nearest neighbor approach was first introduced by [46] and later used in many models [47,48]. This non-parametric approach is widely used for the supervised classification of image polygons because of its simplicity and lack of assumptions. The basic idea behind kNN is that the algorithm finds, in the training dataset, the group of k samples spectrally nearest to each image polygon to be classified. To improve the classification, a set of spectral and topographic variables calculated on the orthomosaic and providing the best separability between classes was selected. This step was carried out by means of the Feature Space Optimization (FSO) tool implemented in eCognition. The variables that turned out to maximize the spectral separability between the two classes were mean NIR, standard deviation of the green band, mean GRVI ((GREEN − RED)/(GREEN + RED)) and mean eastness. In particular, eastness is calculated as the sine of aspect [49], which was previously derived from the Digital Elevation Model (DEM) of the study area released by the Italian Military survey office.
The most significant individual band was, as expected, the near infrared.
From the k samples, the label (in our case, one of the two classes in Table 2) of an unclassified polygon is determined by averaging the response variables (i.e., the class attributes of the k nearest neighbors). The value of k therefore plays an important role in the performance of kNN, i.e., it is its key tuning parameter. Usually, k is chosen empirically; in our case it was set equal to 3 (i.e., the default value in eCognition). This value was also shown to produce the best classification results in similar studies [50].
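A minimal sketch of this classification step, assuming scikit-learn in place of the eCognition implementation: the four FSO-selected features (mean NIR, standard deviation of the green band, mean GRVI, eastness) are assembled into a feature matrix and fed to a kNN classifier with k = 3. The training values below are synthetic, not data from the study.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def grvi(green, red):
    """Green-red vegetation index (GREEN - RED)/(GREEN + RED)."""
    return (green - red) / (green + red + 1e-9)

def eastness(aspect_deg):
    """Eastness = sine of terrain aspect (aspect given in degrees)."""
    return np.sin(np.radians(aspect_deg))

# synthetic per-polygon features: [mean NIR, std green, mean GRVI, eastness]
X_train = np.array([[0.60, 0.05, 0.20, 0.5],    # broadleaved-like
                    [0.62, 0.06, 0.22, -0.3],   # broadleaved-like
                    [0.30, 0.02, -0.05, 0.4],   # conifer-like
                    [0.28, 0.03, -0.02, -0.6]]) # conifer-like
y_train = np.array(["broadleaved", "broadleaved", "conifer", "conifer"])

knn = KNeighborsClassifier(n_neighbors=3)  # k = 3, as in the study
knn.fit(X_train, y_train)
```

With the model fitted, every unclassified FS80 polygon would receive the majority label of its three nearest training polygons in this four-dimensional feature space.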

2.3.5. Mixed Forest Mapping

The polygons of the finer segmentation level (FS80), as classified by the kNN algorithm, were used to label the coarser polygons (FS150), based on their relative proportions. The rule applied is reported in Table 3.
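A sketch of such a labelling rule is given below. The 25%/75% cut-offs are assumptions standing in for the actual rule of Table 3 (not reproduced here); they mirror the Corine mixed-forest definition cited in the Introduction, and the class names are ours.

```python
def label_coarse(frac_conifer):
    """Label an FS150 polygon from the area fraction of its FS80
    polygons classified as conifer. The 0.25/0.75 thresholds are
    illustrative assumptions, not the values of Table 3."""
    if frac_conifer >= 0.75:
        return "conifer"      # conifer-dominated stand
    if frac_conifer <= 0.25:
        return "broadleaved"  # broadleaved-dominated stand
    return "mixed"            # mixed-species stand
```

Applied to every FS150 polygon, a rule of this shape yields the three-class forest type map evaluated in the Results.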

2.3.6. Validation Dataset

In order to validate the classification, an independent and reliable classification of FS150 polygons was performed on a sufficiently large proportion of the study area. Sampling units were circular 1 ha plots with the center randomly selected according to a systematic unaligned sampling design across the grid with square cells of 1 km2. Each square cell contained one circular plot that crossed one or more FS150 image polygons (Figure 3). The overall forest area covered by the FS150 polygons intersecting the 1 ha plot was assigned to one of the three forest types (0, 1, 2) by visual interpretation based on the same thresholds defined for the classification of FS150 from FS80 polygons (§ 2.3.5) and used as “ground truth” independent data for map validation (Table 4). The surface covered by the validation sample amounts to 3.51% of the total forest area.

2.3.7. Accuracy Assessment

In order to assess the accuracy of the map, a confusion matrix was calculated. The error matrix is a cross-tabulation of the class labels allocated by the semi-automatic mapping procedure and by the reference data [51]. To examine the reliability of the mapping approach we used four metrics: Overall Accuracy (OA), Cohen's Kappa index of agreement (K), Producer's Accuracy (PA), and User's Accuracy (UA). We calculated K to account for the possibility of agreement occurring simply by chance [52]. K is a robust statistic useful for reliability testing; it typically varies from 0 to 1, where 0 represents the amount of agreement expected from random chance and 1 represents perfect agreement. K may also assume negative values, meaning that the two classifications agree less than expected just by chance.

3. Results

3.1. Forest Types Mapping

The OBIA procedure applied to the airborne multispectral VHR orthomosaic allowed us to produce a map of the areas covered by coniferous, broadleaved and mixed forests in the study area, for a very high number of FS150 image objects (Figure 4). As shown in the summary data reported in Table 5, the mixed forest class covers only 11% of the total area, a proportion significantly lower than that of the areas dominated by deciduous species (beech or other broadleaved deciduous species) or Corsican pine (Table 5). In addition, tree species mixtures appear to occur mostly as small patches, as large as the size of forest inventory plots (Figure 5 and Figure 6). The overall frequency distribution of patch size of mixed broadleaved and coniferous stands follows a reverse J-shaped distribution, with patches larger than 2000 m2 being relatively rare in the investigated area.

3.2. Accuracy Assessment

The applied mapping approach produced good results, with an overall accuracy of 85% and a K of 0.78. The detection and mapping of mixed stands with an MMU of 500 m2 from single-date imagery turned out to be feasible with a relatively small training sample (115 polygons), and satisfactory levels of producer's (84%) and user's (73%) accuracy were reached (Table 6). Better levels of user's and producer's accuracy (85–93%) were reached in conifer- and broadleaved-dominated stands.

4. Discussion

The study findings confirm the initial hypothesis, i.e., the possibility of delineating mixed stands at a fine spatial scale from VHR multispectral imagery (0.20 m). In particular, the OBIA approach here applied to the analysis of very high spatial resolution images proved to be a successful technique for detecting the fine-grained pattern of mixed forest in the investigated area, i.e., small patches (most often 500 to 2000 m2) dispersed in a forest landscape characterized by tracts dominated by pure stands of broadleaved or coniferous trees. Considering that the spatial variation between areas dominated by one species group and admixtures of the two species is continuous, the main strength of the proposed approach is to reduce as much as possible the subjectivity in the delineation of boundaries between the three examined forest types (the main limit of the photointerpretation method), taking full advantage of the potential of multi-resolution segmentation. It is worth noting that, within the currently growing scientific literature reporting advances in the use of VHR imagery for forest type delineation [53,54], this study has demonstrated that a simple and cheap single-date image, despite its limited spectral resolution, allowed accurate mapping of mixed stands at an MMU of 500 m2.
Under the examined conditions, the thematic accuracy of the map of the three forest types (conifers, broadleaves and mixed stands) achieved remarkable values, not lower than 0.73. As expected, the results for the broadleaved and conifer classes were better than those for mixed forests. Indeed, many works have confirmed that pure conifer and broadleaved stands can be discriminated rather straightforwardly [40,55]. The results for mixed stands were less obvious, since previous studies faced several difficulties in discriminating such stands from pure ones [19,56]. In this regard, the use of a topographical variable such as "eastness", combined with the multispectral data, makes it possible to resolve part of the spectral overlap between conifers and broadleaves caused by the different illumination of their canopies at the time of image acquisition due to aspect or slope. For example, broadleaved trees facing west or north may show a reflectance similar to that of conifers facing south. By introducing eastness, the classifier "learns" to discriminate shaded deciduous trees from illuminated conifer canopies even when the reflectance in the examined bands is the same.
In this sense our results are encouraging, showing how the high resolution of the orthorectified multispectral data and the OBIA methods are well suited for mapping mixed stands composed either of small groups or of single trees of conifers and broadleaved trees.
Our results showed that most omission and commission errors are mainly due to confusion between conifers and areas with mixtures of broadleaves such as chestnut or turkey oak. This can be explained by the spectral similarities between these groups of species in the period of image acquisition. In fact, during late spring the spectral signatures of chestnut or turkey oak can still be confused with the spectral reflectance of Corsican pine, because these tree species have a delayed leaf phenology compared to beech. This suggests that differences in canopy tree phenology are crucial for a successful discrimination of broadleaved species. An airborne image acquired in early summer could possibly have solved most of the spectral confusion experienced in this study. Moreover, as highlighted in other studies [40,57], fall imagery tends to have high discriminating ability when the leaf color change occurs. For these reasons, the use of multitemporal data could undoubtedly improve classification accuracy. Indeed, multitemporal data could help to discriminate forest types that may be spectrally similar in any single time frame, especially if the appropriate timing of the images is selected, thus maximizing phenological differences and reducing redundant information that would not be used by the classifier. VHR commercial satellite imagery would be a suitable option to cover larger areas, but it has considerable costs, especially if mapping is required over large geographical areas (e.g., the cost of 0.5 m pan-sharpened imagery from Pléiades or WorldView is EUR 562 and 937, respectively, for a 25 km2 minimum order area and 5% or less cloud cover). In this perspective, a possible further development of this study is to apply the proposed object-oriented classification methodology to the new generation of VHR multispectral satellite products characterized by frequent revisit times (e.g., PlanetScope monitoring products).
Other possible developments of the study can be the use of alternative methods for the parameterization of multi-scale image segmentation (e.g., the Estimation of the Scale Parameter tool [58]), or classifiers other than kNN, including for example Support Vector Machine (SVM) or Random Forest (RF).
Even with the above-mentioned limitations, the method proved effective in mapping admixtures of broadleaved deciduous and coniferous trees, not only in terms of thematic accuracy but also in terms of the replicability of the image classification process. Indeed, the proposed method is an attempt to integrate, in a fairly transparent procedure, different image classification algorithms (MRS, Assign Class, kNN), such that spectrally homogeneous tree crowns are identified and outlined at the finer segmentation level, the tree species is recognized and classified, and the stand label is assigned and validated. Such an automated procedure does not exclude careful input in the form of visual interpretation, namely training and validation data. However, this support is limited to a negligible proportion of the investigated area. Consequently, this (semi-)automatic delineation of the target forest types from VHR imagery through MRS and classification also proved efficient in terms of time, compared to visual interpretation.

5. Conclusions

This study sought to assess the performance of VHR data in detecting mixtures of broadleaved deciduous (beech) and conifer (Corsican pine) trees in Sila, Italy. In this regard, we developed an effective approach for mapping, at the stand level, rare and patchily distributed mixed stands of conifers and broadleaves using VHR remotely sensed data. The proposed methodology, where similar data types are available, offers a good foundation for similar applications in other contexts where mixed stands of broadleaved deciduous and coniferous trees coexist with pure stands in the same landscape.
Knowledge of the spatial pattern of the mixed stands could be used to assess the actual extent of these forest types and to steer more accurate investigations at a local level. In particular, reliable maps of tree species mixtures vs. single species are instrumental to sample with sufficient intensity mixed forest plots, in forest landscapes dominated by single species stands. This is a clear advantage for the set-up of experimental or observational platforms, aimed to study the effect of species mixing on the provision of habitat and other forest ecosystem services.

Author Contributions

Conceptualization, L.O., D.G., A.T. and A.B.; methodology, L.O. and D.G.; validation, L.O. and D.G.; resources, A.T. and A.B.; data curation, L.O.; writing—original draft preparation, L.O. and A.T.; writing—review and editing, L.O., A.T. and A.B.; visualization, L.O.; supervision, A.B.; project administration, A.T. and A.B.; funding acquisition, A.B. All authors have read and agreed to the published version of the manuscript.

Funding

The study was supported by the ERA-Net SUMFOREST project REFORM “Mixed species forest management. Lowering risk, increasing resilience” (Italian ministry of agricultural food and forestry policies: Ministerial Decree no. 31950/7303/16).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Acknowledgments

The authors are grateful to Andrea Gentilucci for his support in the visual interpretation of the validation photoplots.

Conflicts of Interest

The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript, or in the decision to publish the results.

Figure 1. The study area (in red) of the Sila plateau in Calabria (Southern Italy) (EPSG 3004).
Figure 2. Overview of the image processing flow.
Figure 3. Example of validation sample (EPSG 3004). The limits of the sample and the circular plot are in yellow and blue, respectively. The codes 0, 1, 2 refer to the classes reported in Table 3.
Figure 4. On the left, the VHR aerial image (false color infrared) of the study area, with “no forest” highlighted in white; on the right, the map of forest areas covered by coniferous, broadleaved and mixed stands (EPSG 3004).
Figure 5. Frequency distribution of the size of polygons assigned to the “mixed forest” class.
Figure 6. Examples of patches of different size assigned to “mixed forest” class (EPSG 3004).
Table 1. Decision-rules for classification of noise.

Class          Parameter     Threshold
Dark Shade     Mean green    ≤35
Light Shade    Mean green    >35 and ≤45
Grassland      Brightness    ≥110
               Max diff      ≥0.08 and ≤0.329
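The thresholds in Table 1 translate directly into a rule-based labeling step. The sketch below restates that logic in Python; the study applied these rules within the OBIA classification environment, so the function name, call signature, and the order in which the rules are evaluated are assumptions, while the thresholds come from the table.

```python
def classify_noise(mean_green, brightness, max_diff):
    """Assign a noise class to an image segment using the Table 1 thresholds.

    Returns None when no rule fires, i.e. the segment is passed on to the
    forest classification step. Rule order is an assumption.
    """
    if mean_green <= 35:
        return "dark_shade"
    if 35 < mean_green <= 45:
        return "light_shade"
    if brightness >= 110 and 0.08 <= max_diff <= 0.329:
        return "grassland"
    return None

# Example segments: (mean green, brightness, max diff)
classify_noise(30, 90, 0.05)    # -> "dark_shade"
classify_noise(40, 90, 0.05)    # -> "light_shade"
classify_noise(120, 120, 0.20)  # -> "grassland"
classify_noise(80, 100, 0.50)   # -> None (candidate forest segment)
```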
Table 2. Definition of the forest classes training sample.

Classification Code   Tree Crown Assemblage          Number of Samples
0                     Broadleaved deciduous forest   76
1                     Coniferous forest              39
Total                                                115
Table 3. Decision-rules for classification of FS150 polygons.

Code 0 (Broadleaved deciduous forest): no less than 70% of the total area of the coarser polygon (FS150) is covered by sub-polygons (FS80) classified as broadleaved deciduous forest.
Code 1 (Coniferous forest): no less than 70% of the total area of the coarser polygon (FS150) is covered by sub-polygons (FS80) classified as coniferous forest.
Code 2 (Mixed forest of broadleaved deciduous and coniferous trees): FS80 polygons classified as broadleaved and FS80 polygons classified as coniferous each occupy at least 30%, and at most 70%, of the total area of the coarser polygon (FS150).
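The two-scale decision rules of Table 3 amount to an area-fraction test on each coarse polygon. A minimal sketch, assuming the area fractions of broadleaved and coniferous FS80 sub-polygons have already been computed for each FS150 polygon; the function name is illustrative, while the thresholds and return codes match the table.

```python
def classify_fs150(frac_broadleaved, frac_conifer):
    """Classify a coarse FS150 polygon from the area fractions of its FS80
    sub-polygons (Table 3). Fractions are relative to the FS150 area."""
    if frac_broadleaved >= 0.70:
        return 0  # broadleaved deciduous forest
    if frac_conifer >= 0.70:
        return 1  # coniferous forest
    if frac_broadleaved >= 0.30 and frac_conifer >= 0.30:
        return 2  # mixed forest of broadleaved deciduous and coniferous trees
    return None  # residual cover (e.g. shade, grassland) too large to decide

classify_fs150(0.85, 0.10)  # -> 0
classify_fs150(0.10, 0.85)  # -> 1
classify_fs150(0.45, 0.50)  # -> 2
```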
Table 4. Distribution of the three forest type classes in the validation sample.

Code    Class                                                        Number of Validation Polygons   Surface (ha)   %
0       Broadleaved deciduous forest                                 47                              69.01          50
1       Coniferous forest                                            34                              35.41          25
2       Mixed forest of broadleaved deciduous and coniferous trees   33                              33.31          24
Total                                                                114                             137.73         100
Table 5. Area covered by forest classes.

Code    Class                                                        Area (ha)   % Area
0       Broadleaved deciduous forest                                 2502.44     55
1       Coniferous forest                                            1553.44     34
2       Mixed forest of broadleaved deciduous and coniferous trees   497.10      11
Total                                                                4552.98     100
Table 6. Accuracy of the kNN classification: user’s accuracy (UA) and producer’s accuracy (PA) per class (0 = broadleaved deciduous, 1 = coniferous, 2 = mixed), overall accuracy (OA) and kappa coefficient (K).

        0      1      2
UA      0.93   0.90   0.73
PA      0.85   0.88   0.84
OA      0.85
K       0.78
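All four statistics in Table 6 derive from a single error (confusion) matrix. The sketch below shows the standard computation; the example counts are illustrative only and are not the study's actual validation matrix.

```python
def accuracy_metrics(cm):
    """User's (UA) and producer's (PA) accuracy per class, overall accuracy
    (OA) and Cohen's kappa (K) from a confusion matrix given as a list of
    rows, with rows = mapped class and columns = reference class."""
    n = len(cm)
    total = sum(sum(row) for row in cm)
    row_sums = [sum(row) for row in cm]                     # mapped totals
    col_sums = [sum(cm[i][j] for i in range(n)) for j in range(n)]  # reference totals
    diag = [cm[i][i] for i in range(n)]                     # correctly mapped
    ua = [diag[i] / row_sums[i] for i in range(n)]
    pa = [diag[j] / col_sums[j] for j in range(n)]
    oa = sum(diag) / total
    # Expected chance agreement, for the kappa coefficient
    pe = sum(row_sums[i] * col_sums[i] for i in range(n)) / total ** 2
    kappa = (oa - pe) / (1 - pe)
    return ua, pa, oa, kappa

# Illustrative counts only (not the matrix behind Table 6)
cm = [[40, 4, 3],
      [2, 30, 2],
      [5, 0, 28]]
ua, pa, oa, k = accuracy_metrics(cm)
print(round(oa, 2), round(k, 2))  # 0.86 0.79
```

UA divides each diagonal entry by its row (map) total and PA by its column (reference) total, which is why a class can score well on one and poorly on the other, as the mixed class (UA 0.73, PA 0.84) does here.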
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Share and Cite

Oreti, L.; Giuliarelli, D.; Tomao, A.; Barbati, A. Object Oriented Classification for Mapping Mixed and Pure Forest Stands Using Very-High Resolution Imagery. Remote Sens. 2021, 13, 2508. https://doi.org/10.3390/rs13132508

