Article

Semi-Automatic Extraction of Hedgerows from High-Resolution Satellite Imagery

by Anna Lilian Gardossi 1,*, Antonio Tomao 1, MD Abdul Mueed Choudhury 2,3, Ernesto Marcheggiani 3 and Maurizia Sigura 1
1 Department of Agricultural, Food, Environmental and Animal Sciences, University of Udine, 33100 Udine, Italy
2 Department of Agriculture, Mediterranea University of Reggio Calabria, 89124 Reggio Calabria, Italy
3 Department of Agricultural, Food and Environmental Sciences, Marche Polytechnic University, 60131 Ancona, Italy
* Author to whom correspondence should be addressed.
Remote Sens. 2025, 17(9), 1506; https://doi.org/10.3390/rs17091506
Submission received: 28 February 2025 / Revised: 15 April 2025 / Accepted: 16 April 2025 / Published: 24 April 2025

Abstract

Small landscape elements are critical in ecological systems, encompassing vegetated and non-vegetated features. As vegetated elements, hedgerows contribute significantly to biodiversity conservation, erosion protection, and wind speed reduction within agroecosystems. This study focuses on the semi-automatic extraction of hedgerows by applying the Object-Based Image Analysis (OBIA) approach to two multispectral satellite datasets. Multitemporal image data from PlanetScope and Copernicus Sentinel-2 were used to test the applicability of the proposed approach for detailed land cover mapping, with an emphasis on extracting Small Woody Elements. The study achieved significant results in classifying and extracting hedgerows, a small landscape element, from both Sentinel-2 and PlanetScope images. Good overall accuracy (OA) was obtained using PlanetScope data (OA = 95%) and Sentinel-2 data (OA = 85%), despite the coarser resolution of the latter. These results demonstrate the effectiveness of the OBIA approach in leveraging readily available image data for detailed land cover mapping, particularly in identifying and classifying hedgerows, thus supporting biodiversity conservation and ecological infrastructure enhancement.


1. Introduction

Small Woody Elements (SWEs), including linear features such as hedgerows, tree lines, small woods, and riparian vegetation, are vital components of ecological systems. These features, located primarily outside forested areas, form a mosaic of interconnected ecosystem resources, significantly influencing landscape patterns [1]. Their ecological importance goes beyond their spatial extent, as they contribute to connectivity and biodiversity conservation by mitigating habitat fragmentation and enhancing structural complexity [2]. Beyond ecological functions, SWEs provide essential ecosystem services, including carbon sequestration, water filtration, erosion control, and microclimate regulation [3,4,5].
The multifunctionality of hedgerows, encompassing their capacity to provide multiple ecological, socio-economic, and cultural functions [6], must be carefully considered, especially at a local scale, where the management, structure, and composition of the vegetation layer significantly influence hedgerow biodiversity and ecological functioning [7]. Agricultural intensification, urban development, and habitat fragmentation have led to a simplification of landscape structures [8]. To counteract these effects, implementing green landscape features, such as hedgerows, tree lines, and other SWEs, is essential for preserving biodiversity and maintaining the ecosystem services they support [9,10].
European policy strongly supports the increase and conservation of SWEs. The Common Agricultural Policy emphasizes the importance of landscape elements in maintaining biodiversity and ecosystem services. Moreover, the European Union Biodiversity Strategy for 2030 aims to ensure a coherent Trans-European Nature Network (Natura 2000). This includes, for example, restoring high-diversity landscape features, such as buffer strips, fallow land, and hedgerows outside protected areas. The overarching objective of the strategy is to transform at least 10% of agricultural land into high-diversity landscape features to provide space for wild animals, plants, pollinators, and natural pest regulators [11]. Furthermore, the Nature Restoration Law (NRL) (Regulation (EU) 2024/1991) mandates that countries implement measures to achieve a national upward trend in at least two of three indicators for agricultural ecosystems, one being the percentage of agricultural land featuring landscape elements with high biodiversity, such as hedgerows [11]. The NRL also emphasizes the protection of landscape features with high biodiversity, such as hedgerows, from negative external disturbances to ensure safe habitats and support biodiversity. Understanding the distribution, extent, and condition of SWEs within landscape ecology and biodiversity conservation is fundamental for guiding effective conservation efforts across different spatial scales [12]. As with forests, timely and precise monitoring and mapping of cover is a vital aspect of sustainable management and the monitoring of ecosystem transformations [13,14]. In addition, accurate mapping of SWEs is important for achieving the objectives of outcome indicator I.10/C21 of the Common Agricultural Policy [15]: “Improved provision of ecosystem services: Proportion of agricultural land covered by landscape features”. This indicator aims to estimate the proportion of agricultural land covered by different landscape features, including linear features such as hedgerows, tree patches, forests, wetlands, and semi-natural areas such as field margins. The indicator has two components: the proportion of agricultural land covered by landscape features and a more detailed index of the structure of landscape elements [16].
Maps of SWEs are also useful tools to support biodiversity studies. Hedgerows play a critical role in providing food resources for wildlife, serving as habitats for important species like pollinators and resident and overwintering birds [17]. As highlighted by Hinsley et al. [18] in their study across the UK, hedgerow size and the presence of trees were found to positively influence bird species richness and abundance. In this regard, reliable maps of such landscape elements can support the design of more efficient sampling schemes. Moreover, spatially explicit information can be potentially included in decision support systems at a regional scale [19,20], thus supporting local and regional planning aimed at nature conservation. For instance, accurate identification of hedgerows enables the mapping of agricultural areas that require restoration measures, identifying where hedgerows have been destroyed or lost, and facilitating the implementation of ecological corridors [21].
In recent years, remote sensing has emerged as a valuable tool for accurately and efficiently mapping small landscape features. Initially, high-resolution multispectral imagery facilitated mapping through visual interpretation and extensive fieldwork, a time-consuming and labor-intensive process [22]. However, advancements in remote sensing technology have led to the development of automated methods for delineating vegetation elements, enhancing objectivity, and reducing time and resource demands [23]. Consequently, remote sensing approaches offer several advantages: repeatability, wide area coverage, and real-time accessibility.
Several factors influence the successful extraction of SWEs. Due to their shape and size, medium spatial resolution imagery may be insufficient to distinguish individual elements, leading to mixed pixels. Mixed pixels represent situations where hedgerows are interspersed with other landscape features or land cover types. To ensure consistent and accurate extraction, the features must be significantly larger than the spatial resolution of the imagery. Coarse-scale multispectral images complicate the identification and drawing of hedgerow boundaries, with factors such as the length-to-width ratio and the area of the feature influencing extraction accuracy. Consequently, low spatial resolution can lead to a reduction in classification accuracy [24].
Remote sensing techniques for SWE mapping vary based on resolution, ranging from MODIS-based tree cover assessments at low resolution to object-based classifications and pixel-swapping techniques at medium resolution [25,26]. Later approaches introduced automated feature extraction using Object-Based Image Analysis (OBIA) and machine learning [27,28,29]. In the context of landscape study and classification, OBIA has emerged as a highly efficient method for processing satellite imagery and supporting data fusion [30]. Unlike traditional pixel-based approaches, OBIA groups pixels into meaningful objects based on their spectral, spatial, and contextual characteristics [31]. This object-based approach provides more accurate information within the Geographic Object-Based Image Analysis (GEOBIA) paradigm. GEOBIA significantly enhances the analysis and understanding of remote-sensing images by processing objects defined by one or more criteria of pixel homogeneity. The fundamental unit for classification becomes the object, to which a set of classifications is then applied [32,33].
Recent advancements include methods that combine multiple data sources such as hyperspectral and multispectral for enhanced vegetation classification, particularly with Sentinel-2 and PRISMA imagery [34,35,36]. However, challenges remain in harmonizing datasets across different resolutions and accounting for spatial variability in spectral signatures.
The ‘Small Woody Features’ (SWF) inventory released by the Copernicus Land Monitoring Service provides a valuable overview of woodlands, including linear and patchy woody elements. The SWF dataset is part of the High Resolution Layers and provides pan-European data on linear and patchy woody vegetation elements outside forests, such as hedgerows, tree alignments, and isolated tree clusters. The 2018 SWF product was derived from satellite imagery provided by Copernicus Contributing Missions. The processing workflow included different steps: segmentation and pre-classification using GEOBIA, manual enhancement to remove artificial features, and post-processing to apply geometric constraints, ensuring that mapped elements met predefined criteria (e.g., minimum length of 30 m for linear structures). The available layers offer spatial resolutions of 5 m and 100 m. While the broad coverage and the free, open accessibility of this database are significant advantages, it is crucial to acknowledge that the dataset is based on mono-temporal Earth Observation imagery (European Union), and the validation process has been conducted only for the data at the resolution of 100 m. These limitations of the Copernicus SWF dataset mean that vegetation dynamics across seasons are not considered. In addition, the dataset has a three-year update cycle, which may limit its applicability for frequent monitoring needs [21].
This research addresses specific limitations in the Copernicus product and previous literature, particularly regarding the underutilization of multitemporal and multi-source data [37,38]. To overcome this, we integrate multitemporal, high-resolution PlanetScope imagery, offering a more accurate and temporally consistent extraction of Small Woody Elements (SWEs). Multitemporal imagery within a defined time frame is crucial in agriculture, especially for delineating field boundaries, a challenging task for conventional satellite systems. The innovative PlanetScope constellation provides access to high-resolution, multitemporal satellite imagery. Comprising 150–200 nano-satellites, this system enables daily acquisition of multispectral imagery with a resolution ranging from 3 to 5 m, all at a relatively low cost [39,40]. Additionally, our object-oriented approach enhances delineation accuracy by exploiting spectral and textural features at multiple scales, helping to overcome limitations associated with mixed-pixel and geometric constraints present in the Copernicus product [21,39,40].
This study proposes a semi-automatic method for extracting linear green landscape elements from multitemporal and multispectral satellite imagery using OBIA. Our objective is twofold: (i) to evaluate the effectiveness of OBIA in extracting SWEs, particularly hedgerows, from satellite imagery at different resolutions (i.e., Sentinel-2 and PlanetScope); and (ii) to compare the accuracy of their delineation and characterization with the Copernicus Small Woody Features dataset.
This paper is structured as follows: Section 2 details the study area, data sources, and methodology, including image segmentation, classification, and validation processes. Section 3 presents the results, analyzing the classification performance of Sentinel-2 and PlanetScope imagery. Section 4 discusses the findings, highlighting the advantages and limitations of the proposed approach. Finally, Section 5 provides conclusions and defines potential future research directions.

2. Materials and Methods

2.1. Dataset

The study area is located in the plain of the Friuli Venezia Giulia region (North-Eastern Italy), in a rural–urban landscape centered at approximately 46°2′38.05″N, 13°7′45.45″E (Figure 1). The area is characterized by a mixed mosaic of intensively and extensively cultivated areas enriched by tree formations generally dominated by black poplar (Populus nigra L.) or willow thickets of Salix eleagnos Scop. and Salix purpurea L., often contaminated by invasive exotic species (e.g., Robinia pseudoacacia L., Amorpha fruticosa L., and Reynoutria japonica Houtt.). The soils of the area consist mainly of Quaternary sand, silt, and silt–clay sediments formed by glacial–fluvial transport during the Pleistocene and alluvial deposition during the Holocene.
To address the issue of data availability at high resolutions, our study coupled data from the PlanetScope platform (Planet Labs, Inc., San Francisco, CA, USA; spatial resolution of 3 m/pxl) with images from the Copernicus Sentinel-2 platform (European Space Agency, Paris, France; spatial resolution of 10 m/pxl). The Sentinel-2 bands used include B8 Near-Infrared (NIR) at 842 nm, B2 (Blue) at 490 nm, B3 (Green) at 560 nm, and B4 (Red) at 665 nm. The bands for PlanetScope included B2 (Blue), B4 (Green), B6 (Red), and B8 (Near-Infrared, NIR) at 780–860 nm.
To mitigate the impact of cloud cover on data, a pre-processing step was performed using the Google Earth Engine platform for Sentinel-2 images, applying a cloud filter threshold of 20%. For PlanetScope images, a 10% cloud cover threshold was set within the PlanetScope platform. The difference in cloud cover thresholds was chosen based on the revisit time of each satellite. Sentinel-2 has a longer revisit time compared to PlanetScope’s daily acquisitions.
To increase the possibility of obtaining an image temporally aligned with the PlanetScope data, we adopted a higher cloud cover threshold for Sentinel-2. This approach gave us a better temporal match between datasets while retaining sufficient image quality for analysis.
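As an illustration of this pre-processing step, a minimal Google Earth Engine sketch of the Sentinel-2 cloud filter is given below; the collection identifier, date window, and point coordinates are assumptions for illustration, since the exact script used here is not reported.

```python
# Sketch: filtering Sentinel-2 scenes by cloud cover in the Google Earth Engine
# Python API, using the 20% threshold described in the text. The collection ID,
# date range, and coordinates are illustrative assumptions.
import ee

ee.Initialize()

# Approximate study-area centre (hypothetical point near 46°2'38"N, 13°7'45"E)
aoi = ee.Geometry.Point(13.129, 46.044)

s2 = (
    ee.ImageCollection("COPERNICUS/S2_SR_HARMONIZED")
    .filterBounds(aoi)
    .filterDate("2022-04-01", "2022-04-30")
    .filter(ee.Filter.lt("CLOUDY_PIXEL_PERCENTAGE", 20))  # 20% cloud cover threshold
)

print("Candidate Sentinel-2 scenes:", s2.size().getInfo())
```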
Data acquisition encompassed vegetation and non-vegetation seasons, explicitly focusing on April, July, and November 2022 (Table 1). A total of six images were collected during these months to capture seasonal variability and provide a robust dataset for subsequent analysis.
Very high resolution (VHR) true color orthophotos (Regione Friuli Venezia Giulia) were used for validation. The acquisition period was from 2017 to 2020; the spatial resolution is 20 cm/px.

2.2. Methods

The proposed method used the GEOBIA paradigm, with object-oriented classification of the images performed in eCognition Developer 10.3 (Trimble Germany GmbH, Munich, Germany). The applied approach began with a segmentation and classification of the images to distinguish hedgerows from other land cover classes. A second round of segmentation and classification was performed to improve the accuracy and refine the results, focusing on addressing misclassifications and enhancing the detection of hedgerows. Finally, a validation phase was implemented.
The workflow (Figure 2) consisted of three main steps: (1) identification of the main SWE and refinement, (2) noise removal, and (3) validation.
The workflow includes the initial segmentation, feature set construction, object-oriented classification with the Nearest Neighbor algorithm, and subsequent refinements. Validation was performed using reference data and visual interpretation on orthophotos. QGIS 3.34.10 Prizren (QGIS Development Team. QGIS Geographic Information System, version 3.34. Open Source Geospatial Foundation) was used to handle spatial datasets and perform Geographical Information System (GIS) operations. eCognition 10.3 software was employed for object-based image classification, and finally, Microsoft Excel 2019 was used for deriving statistical analyses.

2.2.1. Image Segmentation

The first phase of the rule set was segmentation, which partitioned the Sentinel-2 and PlanetScope images into distinct and homogeneous regions based on feature properties, such as texture, color, or brightness [42]. A multiresolution segmentation method was adopted [41]. In this region-growing approach, pixels are initially treated as individual objects and then iteratively merged based on similarity to form larger, homogeneous segments [43]. Previous research has consistently identified determining optimal segmentation parameters as a significant challenge [44]. Parameters such as segmentation scale (Ss), shape (Sh), and compactness (Cm) are often determined through trial-and-error methods [45]. In our study, the Ss was set to 30, ensuring that linear vegetation elements were accurately represented without excessive fragmentation. The shape and compactness parameters were set to 0.8 and 0.2, respectively. These values were obtained through an iterative optimization process aimed at finding the best balance between (i) spectral homogeneity and object shape, and (ii) geometric fidelity to the target object and segmentation performance.
The segmentation process relies not only on shape and size (i.e., Ss, Sh, and Cm) criteria, but also incorporates spectral features (Table 2). In this regard, unlike conventional segmentation approaches that depend solely on single-date spectral properties, we introduced multitemporal differences in the Normalized Difference Vegetation Index (NDVI) as a key segmentation feature to distinguish stable vegetated elements, such as hedgerows, from agricultural areas with seasonal spectral variations [46,47]. This approach is particularly useful in agroecosystems where crops exhibit strong phenological cycles that affect their spectral response over different collection periods [48]. Cropland spectral response at different phenological stages was found to be sensitive to the variations of the growth variables that characterized plant seasonal development [49]. During the growing season, crops show high NDVI values, whereas NDVI drops markedly around the harvest period. In this study, the difference in NDVI between summer and spring is hypothesized to be large for the different crops, while minimal or no difference in values corresponds to objects exhibiting stable vegetation characteristics, such as hedgerows.
Based on these considerations, we selected as spectral features not only individual multispectral bands (Red, Green, Blue, Near-Infrared), which help distinguish basic spectral differences across land cover types, but also NDVI values from different time periods. Specifically, NDVI for April and July were used to capture vegetation greenness at two distinct times of the year, facilitating the identification of vegetated versus non-vegetated areas. Multitemporal information was then used to compute a new layer representing the difference in NDVI values between July and April.
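The multitemporal NDVI difference layer can be illustrated with the following minimal sketch, assuming co-registered four-band rasters for April and July; the file names and band order are hypothetical.

```python
# Sketch: per-pixel NDVI for two dates and their difference layer (July minus
# April), used here to separate seasonally varying crops (large |difference|)
# from stable woody vegetation (difference near zero). Inputs are assumptions.
import numpy as np
import rasterio

def ndvi(red: np.ndarray, nir: np.ndarray) -> np.ndarray:
    """NDVI = (NIR - Red) / (NIR + Red), guarding against division by zero."""
    return np.where((nir + red) == 0, 0.0, (nir - red) / (nir + red))

def read_band(path: str, index: int) -> np.ndarray:
    with rasterio.open(path) as src:
        return src.read(index).astype("float32")

# Hypothetical 4-band (Blue, Green, Red, NIR) co-registered images
ndvi_apr = ndvi(read_band("april.tif", 3), read_band("april.tif", 4))
ndvi_jul = ndvi(read_band("july.tif", 3), read_band("july.tif", 4))

# Additional input layer for segmentation: values near zero suggest stable
# vegetation such as hedgerows; large values suggest cropland phenology.
ndvi_diff = ndvi_jul - ndvi_apr
```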
Each spectral feature was assigned a specific weight based on its relevance in distinguishing target objects (see Table 2). Similarly to Ss parameters, the weight assignment process was not automated but required iterative testing—a widely used method to define weights in OBIA [50,51]—in order to develop a setup that enabled optimal differentiation between hedgerows and other land uses.

2.2.2. Feature Space Construction

Each image object has a variety of characteristics, including spectral, geometric, spatial, topological, and hierarchical attributes. These measurable properties of an image object, such as its color, texture, or compactness, are utilized in the classification process and are called features. An object can be assigned to a particular class depending on the feature value. A multitemporal detection of changes helps to differentiate objects exhibiting high NDVI variability from those maintaining a more constant value, thereby enhancing classification accuracy. Therefore, multitemporal profiles of the NDVI layer were identified as attributes useful for the classification.
In order to classify the objects, a set of features was extracted. Various categories of features are commonly employed for the classification of agricultural regions, including spectral, textural, structural, and geometric attributes [52]. Here, we used a combination of spectral and textural features (Table 3) to discriminate hedgerows from other land covers.
To optimize the set of features for classification, we adopted a two-step selection process. First, the existing literature was reviewed to identify features used in similar studies. Based on this, we initially considered a set of 11 spectral and textural features [46,47,48,53]. Second, we used the Feature Space Optimization (FSO) tool in eCognition, which allowed us to refine the selection and retain only the most influential features. FSO calculates an optimum feature combination based on class samples: it evaluates the Euclidean distance in feature space between the samples of all classes and selects the feature combination resulting in the best class separation distance, defined as the largest of the minimum distances between the least separable classes [54].
This method evaluates class separability based on different features and helps to identify those that maximize classification accuracy, and is commonly used in studies using eCognition (e.g., [50,55]).
The five most influential features extracted with FSO were Brightness, GLDV Entropy (quick 9/11) (90°), Maximum difference, and the differences in NDVI mean values between July and April and between July and November (the full list of features and the results of the FSO process are available in the Supplementary Materials). These features were chosen for classification because substantial differences in the spectral response of agricultural fields can be identified between the different seasons. Among the spectral features, Maximum difference informs on the spectral variability within an object, while Brightness and Entropy measure the spectral heterogeneity. Entropy was derived from the Haralick texture features, a set of statistical measures used to describe an image’s texture. These features were introduced in the 1970s and are widely used in image analysis and computer vision to characterize the spatial arrangement of image pixel intensities [53,56,57]. The Haralick texture features are derived from the grey-level co-occurrence matrix (GLCM) and grey-level difference vector (GLDV), quantifying distance and angular spatial relationships within specific image sub-regions [58]. GLCM and GLDV texture measures provide information about the spectral differences between neighboring pixels. From the GLDV texture features, we selected GLDV Entropy (quick 9/11) (90°), which in eCognition calculates the entropy of the gradient direction distribution using a 9 × 11 quick neighborhood with a vertical (90°) gradient direction.
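For readers without access to eCognition, the separation criterion behind FSO can be approximated as in the sketch below; this is a simplified re-implementation of the idea described above under the stated assumptions, not the tool's actual code, and the feature names and arrays are hypothetical.

```python
# Sketch: brute-force search for the feature subset that maximizes the class
# separation distance, i.e. the smallest Euclidean distance between samples of
# different classes in a standardized feature space (an approximation of FSO).
from itertools import combinations
import numpy as np

def separation_distance(X: np.ndarray, y: np.ndarray) -> float:
    """Smallest between-class sample distance in the standardized feature space."""
    std = X.std(axis=0)
    std[std == 0] = 1.0
    Xs = (X - X.mean(axis=0)) / std
    d_min = np.inf
    for i in range(len(Xs)):
        for j in range(i + 1, len(Xs)):
            if y[i] != y[j]:
                d_min = min(d_min, float(np.linalg.norm(Xs[i] - Xs[j])))
    return d_min

def best_feature_subset(X: np.ndarray, y: np.ndarray, names: list[str], k: int) -> list[str]:
    """Return the k-feature subset with the largest class separation distance."""
    best = max(
        combinations(range(len(names)), k),
        key=lambda idx: separation_distance(X[:, list(idx)], y),
    )
    return [names[i] for i in best]
```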

2.2.3. Classification and Refinement

The Nearest Neighbor (NN) classification approach was adopted to categorize hedgerows. This technique assigns membership values to image objects by comparing them to a set of reference samples from known classes. The classification process involves two main steps:
  • Training phase, in which the system learns from representative sample objects;
  • Classification phase, in which image objects are assigned to the class of the nearest sample in feature space. Proximity is determined by calculating the distance between the image objects and each sample, with distance values standardized using the standard deviation of all feature values. This approach allows for a flexible and efficient categorization of image objects, relying on their nearest representative samples [54].
To classify the created objects into the classes “Hedgerows” and “Other”, the Nearest Neighbor classifier was trained with samples manually labeled in eCognition. Sixty training samples were collected for each class within the study area; they correspond to 60 polygons resulting from the segmentation, each containing at least 100 pixels of the respective class. These samples were chosen randomly, and the same procedure was followed for validation.
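A minimal sketch of the Nearest Neighbor rule described above is given below, operating on object-level feature vectors standardized by their standard deviation; the arrays are placeholders and the sketch only approximates eCognition's implementation.

```python
# Sketch: 1-Nearest-Neighbor classification of image objects, with distances
# computed on features divided by their standard deviation, as described in the
# text. Inputs (samples, labels, objects) are hypothetical placeholders.
import numpy as np

def nn_classify(samples: np.ndarray, labels: np.ndarray, objects: np.ndarray) -> np.ndarray:
    """Assign each object the label of its nearest training sample in feature space."""
    std = samples.std(axis=0)
    std[std == 0] = 1.0                        # avoid division by zero
    s, o = samples / std, objects / std        # standardize by feature std. dev.
    d = np.linalg.norm(o[:, None, :] - s[None, :, :], axis=2)  # pairwise distances
    return labels[d.argmin(axis=1)]

# Example usage: labels 0 = "Other", 1 = "Hedgerows"; 60 training samples per class.
```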
A second classification step was performed by assigning class labels to the image objects based on a threshold value. This step utilized the Blue Normalized Difference Vegetation Index (BNDVI) as the sole feature. Unlike NDVI, which relies on the Red and Near-Infrared (NIR) bands, BNDVI utilizes the Blue and NIR bands, making it more sensitive to atmospheric interference and variations in ground moisture. Several studies have reported that the BNDVI can effectively contribute to assessing the spatial variability and distribution of chlorophyll within ecosystems [59,60]. Gallegos et al. (2023) investigated tree health in urban green areas and found significant correlations between BNDVI and variables such as crown density and transparency [61]. Furthermore, the use of BNDVI has demonstrated strong spatial and environmental coherence in previous studies [62], so BNDVI offers a complementary approach that can potentially enhance the accuracy and interpretability of classification results.
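The BNDVI used in this threshold-based step is computed as BNDVI = (NIR − Blue)/(NIR + Blue); a short sketch follows, in which the threshold value is hypothetical because the exact cut-off applied here is not reported.

```python
# Sketch: BNDVI computation and a simple threshold rule mirroring the second
# classification step. The 0.4 threshold is an illustrative assumption.
import numpy as np

def bndvi(blue: np.ndarray, nir: np.ndarray) -> np.ndarray:
    """BNDVI = (NIR - Blue) / (NIR + Blue), guarding against division by zero."""
    return np.where((nir + blue) == 0, 0.0, (nir - blue) / (nir + blue))

def relabel_by_bndvi(object_mean_bndvi: np.ndarray, threshold: float = 0.4) -> np.ndarray:
    """Return 1 ('Hedgerows') where the object's mean BNDVI exceeds the threshold, else 0 ('Other')."""
    return (object_mean_bndvi > threshold).astype(int)
```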

2.2.4. Noise Removal

Shadows are a common artefact in high-resolution remote sensing imagery, arising from variations in terrain illumination captured by satellite sensors. Shadows can significantly hinder image analysis by diminishing spectral radiance in shaded areas, making it challenging to separate spectral and spatial features within them [63,64]. Moreover, shadows are often misclassified as water or water-related land cover due to spectral and texture similarities with water bodies [65]. Given their detrimental impact on object classification, accurate shadow detection and removal are crucial for effective image analysis. In our case, shadows act as false hedgerow areas, distorting the size and shape of the target objects. To detect shaded areas misclassified as hedgerows, shadows were explicitly addressed by identifying them as a distinct class. As shaded areas often appear contiguous with vegetation, they were identified through a refined sub-segmentation process applied exclusively to the “Hedgerow” class. To this aim, a scale parameter of 10, a shape factor of 0.8, and a compactness factor of 0.2 were used. The image layer weights employed in this sub-segmentation were consistent with those used in the previous segmentation step. The feature space for this sub-segmentation was defined considering the spectral absorption characteristics in the infrared and red spectral ranges. Four distinct features were defined to characterize this space, as reported in Table 4.
The Brightness feature was considered crucial because shaded areas exhibit brightness imbalance characterized by low reflectivity and the appearance of dark pixels, which can significantly interfere with analysis [66]. Recent shadow detection methods effectively leverage the NIR band to improve the segmentation of shadowed regions in imagery [28]. While shadows diminish reflectance across all spectral bands, this effect is more pronounced in the NIR band compared to the Red. The Red/NIR ratio effectively highlights these differences, increasing the contrast between illuminated and shadowed areas. This consideration is crucial given that completely dark objects often exhibit higher reflectivity values in the Near-Infrared spectrum [64,67].
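The two features emphasized above, Brightness and the Red/NIR ratio, can be computed per object as in the following sketch; the label raster from the sub-segmentation and the band arrays are hypothetical inputs, and the decision thresholds would be learned from the training samples.

```python
# Sketch: object-level Brightness (mean of visible and NIR bands) and Red/NIR
# ratio, the two features used here to separate shadows from sunlit hedgerows.
# The label raster and band arrays are assumed, co-registered inputs.
import numpy as np

def object_shadow_features(labels: np.ndarray, bands: dict[str, np.ndarray]) -> dict:
    """Return {object_id: (brightness, red_nir_ratio)} for each segment."""
    feats = {}
    for obj_id in np.unique(labels):
        mask = labels == obj_id
        brightness = float(np.mean([bands[b][mask].mean() for b in ("blue", "green", "red", "nir")]))
        red_nir = float(bands["red"][mask].mean() / max(bands["nir"][mask].mean(), 1e-6))
        feats[obj_id] = (brightness, red_nir)
    return feats

# Shadow candidates combine low brightness with a Red/NIR ratio distinct from
# sunlit canopy; cut-off values would be tuned on the 30 training samples per class.
```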
A second supervised classification employing the same Nearest Neighbor algorithm as in the previous step was implemented within the eCognition environment to differentiate shadows from the initially classified hedgerows. The classification scheme included two classes: “Hedgerows” and “Shadows”. A total of 30 training samples were collected for each class within the study area.

2.2.5. Validation

To ensure robust validation, we adopted a two-step approach combining automated segmentation and visual interpretation. A validation dataset was derived from high-resolution orthophoto maps segmented in eCognition, producing structured reference polygons.
The orthophotos, provided by Friuli-Venezia Giulia Region—Central Directorate for Property, State Property, General Services, and Information Systems—are characterized by 0.20 m resolution and include RGB and Near-Infrared bands. This high spatial resolution provided a more robust validation dataset for comparison and helped to mitigate potential segmentation errors.
The orthophoto was segmented in eCognition using a scale parameter of 100, a shape factor of 0.8, and a compactness factor of 0.2. Unlike the segmentations performed during the hedgerow detection phase, this analysis utilized a mono-temporal layer. The NDVI was calculated, and image layer weights were assigned as follows: 4 for the NDVI and 2 for the NIR.
To ensure classification reliability, an independent polygon classification was conducted across a substantial portion of the study area. Sampling units consisted of circular plots of 3 ha [50], with centers selected using a systematic unaligned sampling design over a 4 × 5 km grid of 1 km² square cells. Each square cell contained a circular plot intersecting one or more “Hedgerows” or “Other” image polygons. All polygons within each circular plot were visually interpreted (Figure 3) and assigned to one of two classes: 0 for “Other” and 1 for “Hedgerows”. This served as “ground truth” data for map validation. The validation dataset encompassed 3.0% of the study area, and 4595 polygons were visually interpreted (Table 5).
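A sketch of how such a validation sample could be generated is shown below, assuming one common formulation of systematic unaligned sampling (one random offset per grid column and per grid row) and a circular-plot radius of about 97.7 m, which corresponds to 3 ha; the grid origin, extent, and coordinate system are placeholders.

```python
# Sketch: one circular 3 ha validation plot per 1 km grid cell, placed with a
# simplified systematic unaligned design (shared random x offset per column,
# shared random y offset per row). Grid origin and size are assumptions.
import math
import random
from shapely.geometry import Point

CELL = 1_000.0                             # 1 km square cells
RADIUS = math.sqrt(30_000.0 / math.pi)     # ~97.7 m radius -> 3 ha plot

def sample_plots(xmin: float, ymin: float, ncols: int, nrows: int, seed: int = 42):
    """Return one circular plot polygon per grid cell (systematic unaligned sampling)."""
    rng = random.Random(seed)
    x_off = [rng.uniform(0, CELL) for _ in range(ncols)]   # one x offset per column
    y_off = [rng.uniform(0, CELL) for _ in range(nrows)]   # one y offset per row
    return [
        Point(xmin + i * CELL + x_off[i], ymin + j * CELL + y_off[j]).buffer(RADIUS)
        for i in range(ncols)
        for j in range(nrows)
    ]

# e.g. a 4 x 5 km grid -> 20 plots of 3 ha each (~60 ha, about 3% of the study area)
plots = sample_plots(xmin=0.0, ymin=0.0, ncols=4, nrows=5)
```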
By using automated segmentation, we ensured standardized object boundaries, while visual interpretation refined classification reliability.
The first step, i.e., automatic segmentation of orthophotos in eCognition, provided a structured dataset of objectively delineated polygons, avoiding the subjective biases that can arise from manual delineation.
Compared to full visual interpretation, this approach provides a better spatial distribution of validation samples across the study area, while significantly reducing the time required to generate the dataset. Segmentation-based validation also improves comparability with object-based classification results, ensuring consistency in data processing. This two-step approach effectively balances accuracy, spatial representativeness, and time efficiency, thereby increasing the robustness of the validation dataset.
A confusion matrix was built to evaluate classification accuracy by comparing the object-based classified image with polygons of visually interpreted data. Validation and training samples did not overlap. The kappa coefficient was also calculated to assess the agreement between the two classifications. Overall accuracy, producer’s accuracy, and user’s accuracy were calculated, offering a comprehensive assessment of method performance and reliability in mapping hedgerows within the study area.
The confusion matrix is a two-dimensional table that compares the actual classes of objects to their predicted classes, summarizing the correct and incorrect classifications made by a classification system [68].
By converting sample counts into estimated areas, we evaluated overall accuracy (OA), producer’s accuracy (PA), and user’s accuracy (UA), providing a robust accuracy assessment of our classification results concerning both Sentinel-2 and PlanetScope data.
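The accuracy measures reported in Section 3.2 can be derived from an area-based confusion matrix as sketched below; the matrix values are placeholders, not the figures from Tables 9 and 10.

```python
# Sketch: overall, user's, and producer's accuracy plus Cohen's kappa from a
# 2x2 confusion matrix expressed in estimated area (hectares). Values below
# are hypothetical, for illustration only.
import numpy as np

def accuracy_metrics(cm: np.ndarray):
    """cm[i, j] = area mapped as class i that belongs to reference class j."""
    total = cm.sum()
    oa = np.trace(cm) / total                      # overall accuracy
    ua = np.diag(cm) / cm.sum(axis=1)              # user's accuracy (rows = map classes)
    pa = np.diag(cm) / cm.sum(axis=0)              # producer's accuracy (cols = reference)
    pe = (cm.sum(axis=1) * cm.sum(axis=0)).sum() / total**2
    kappa = (oa - pe) / (1 - pe)                   # Cohen's kappa
    return oa, ua, pa, kappa

cm = np.array([[1500.0, 40.0],                     # "Other" (hypothetical areas, ha)
               [30.0, 130.0]])                     # "Hedgerows"
print(accuracy_metrics(cm))
```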

3. Results

3.1. Image Segmentation and Classification

The first level segmentation of the PlanetScope dataset yielded 11,844 objects, while the Sentinel-2 dataset produced 1582 objects. The mean area of segments was approximately 1700 m2 for PlanetScope and 12,600 m2 for Sentinel-2, reflecting the higher spatial resolution of PlanetScope imagery (Table 6).
The objects classified as “Hedgerows” were subsequently sub-segmented into smaller objects to identify and separate adjacent shadow areas, potentially false positives. These shaded regions were often misclassified as hedgerows. The initial class “Hedgerows” for the PlanetScope dataset included 2157 objects. The sub-segmentation process resulted in a total of 10,345 objects. For the Sentinel-2 dataset, the initial “Hedgerows” class comprised 437 objects (Figure 4).
After the second classification, the shaded regions were eliminated and moved to the “Other” class. Analysis of the map derived from classification of PlanetScope images revealed that shadows covered approximately 3.6% of the mapped area, whereas Sentinel-2 imagery indicated shadow coverage of approximately 7.9% of the total surface.
The detailed statistics of shadow coverage and classification accuracy are reported in Table 7, while Figure 5 and Figure 6 illustrate the final mapping results after the second classification.
Analysis of the higher resolution PlanetScope dataset identified 226 hectares of hedgerows, which constitute 11.3% of the study area. In the Sentinel-2 dataset, the analysis identified 292 hedgerow objects covering 413 hectares, accounting for 20.7% of the study area (Table 8).

3.2. Accuracy Assessment

Accuracy assessment is crucial in remote sensing-based mapping to evaluate the reliability of the resulting maps. In this context, the producer’s and the user’s accuracy are key metrics used to quantify the classification algorithm’s performance by applying a confusion matrix to each dataset.
Table 9 shows the actual and predicted areas (in hectares) for each class (Other and Hedgerow).
The assessment (Table 10) reveals an overall accuracy of 95% for PlanetScope data, demonstrating a good performance. Specifically, the producer’s accuracy for the “Hedgerows” class is 0.79, indicating that 79% of the reference area identified as hedgerows on the ground is correctly classified as such in the map derived from PlanetScope images. The user’s accuracy of 0.82 indicates that 82% of the area classified as hedgerows in the map corresponds to actual hedgerows on the ground.
For Sentinel-2 data, the confusion matrix shows an overall accuracy of 85%. While the producer’s accuracy for the “Hedgerows” class is 71%, the user’s accuracy is 46%, suggesting lower reliability in correctly identifying and classifying hedgerows than the PlanetScope dataset.
Cohen’s Kappa coefficient (K) further highlights the differences between the two datasets. A K value of 0.77 for PlanetScope data indicates substantial agreement between classification and reference data, while the Sentinel-2 dataset, with a K of 0.47, shows only moderate agreement, reflecting higher classification uncertainty.

3.3. Comparison with SWF Copernicus Dataset

A comparison with large-scale datasets, such as the Copernicus Small Woody Features (SWF) (https://land.copernicus.eu/en/products/high-resolution-layer-small-woody-features, accessed on 10 April 2024), revealed a significant underestimation of hedgerow presence within the study area. The Copernicus SWF dataset estimated hedgerow coverage at just 6.2% of the total area. In contrast, our method using Sentinel-2 data estimated hedgerow coverage at 20.7%, showing a substantial discrepancy in area coverage. This marked difference highlights the limitations of the Copernicus SWF dataset in accurately capturing hedgerow coverage at a local scale. The SWF dataset, generated at a pan-European scale, may not account for the finer spatial variability of hedgerows, leading to a more generalized and lower estimate.
Further refining our approach, we applied PlanetScope data, which detected hedgerow coverage at 11.3% (Figure 7). This estimate lies between the two previous ones—significantly higher than the Copernicus SWF estimate but lower than the Sentinel-2 estimate—providing a more nuanced and precise depiction of hedgerow coverage.
A spatial comparison between the two datasets was conducted. After excluding the SWF polygons not detected with Sentinel-2 imagery and removing polygons mapped with Sentinel-2 imagery that contained no SWF features, we observed that each hedgerow polygon derived from Sentinel-2 data contains an average of three SWF elements. This suggests that the SWF dataset represents hedgerows as fragmented entities.
Similarly, for hedgerows mapped using PlanetScope data, each extracted hedgerow corresponds to an average of 1.96 SWF elements. The lower ratio compared to Sentinel-2 shows PlanetScope’s greater ability to represent hedgerows as continuous features while still accounting for the presence of gaps. The lack of continuity detected in SWF is possibly due to differences in mapping methodology, resolution, or classification method.
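The ratio of SWF elements per extracted hedgerow polygon can be reproduced with a simple spatial join, as sketched below; the file names and formats are hypothetical.

```python
# Sketch: counting Copernicus SWF elements intersecting each extracted hedgerow
# polygon with a GeoPandas spatial join, i.e. the ratios reported above
# (~3 per Sentinel-2 polygon, ~1.96 per PlanetScope polygon). Paths are assumed.
import geopandas as gpd

hedges = gpd.read_file("hedgerows_planetscope.gpkg")
swf = gpd.read_file("copernicus_swf.gpkg").to_crs(hedges.crs)

joined = gpd.sjoin(hedges, swf, how="inner", predicate="intersects")
counts = joined.groupby(joined.index).size()       # SWF elements per hedgerow polygon

print("Mean SWF elements per hedgerow polygon:", counts.mean())
```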

4. Discussion

This study aimed to develop a semi-automatic method for extracting hedgerows using multitemporal and multispectral satellite imagery. A key focus was comparing datasets with varying spatial resolutions to assess the effectiveness of the object-oriented approach (OBIA).
A significant challenge in hedgerow monitoring is the lack of up-to-date local inventories. While programs like Copernicus provide broad overviews, large-scale datasets often lack the necessary temporal resolution and detail for precise site-specific monitoring. This is particularly crucial as hedgerow structures and configurations change rapidly at the local scale, requiring frequent data updates to map this dynamic feature accurately [69]. The lack of localized, semi-automatic tools that can efficiently track hedgerows’ presence and topological accuracy further complicates conservation and management efforts.
By implementing OBIA, we sought to improve the delineation and characterization of small and linear vegetation elements, like hedgerows, overcoming the limitations of traditional pixel-based analysis. Our goal was to enhance the extraction accuracy of these ecologically essential features, thereby contributing to more precise mapping and better management of landscape elements in agroecosystems [70].
The results provide valuable insights into the potential of multiresolution and multitemporal data for refining detection and classification of hedgerows. Furthermore, by comparing results from imagery with different spatial resolutions (3 m from PlanetScope and 10 m from Sentinel-2), we could evaluate the effectiveness of the object-oriented approach in delineating hedgerows.
Utilizing a multitemporal dataset proved highly beneficial, enabling the detection of seasonal vegetation changes and effectively differentiating between agricultural fields and stable linear vegetation elements. This study demonstrated robust classification results achieved by constructing a suitable feature space and employing the Nearest Neighbor classifier, further refined through shadow removal techniques.
The results showed that higher spatial resolution data (PlanetScope) significantly improved hedgerow extraction accuracy, as confirmed by the accuracy assessment. Integrating multitemporal NDVI differences and Haralick texture features, our methodological approach effectively distinguished hedgerows from other land cover types. The PlanetScope dataset achieved an overall accuracy of 95%. Such a result is comparable to accuracies achieved in other studies [71], but even higher than that of recent studies using very-high-resolution satellites (IKONOS) [72].
The accuracy assessment of the Sentinel-2 data indicated a good overall accuracy (85%), but exhibited a low user’s accuracy (46%), suggesting an overestimation of hedgerow presence. The overestimation is attributable to the lower spatial resolution (roughly three times lower than PlanetScope), which increases the influence of mixed pixels in defining the target class. This issue was evident during the segmentation phase, as hedgerows are often smaller than the 10 × 10 m pixel unit, leading to errors in polygon generation. The commission error, representing the percentage of areas incorrectly classified as “Hedgerows”, is notably high at 54%. This overestimation is further evidenced by the significantly larger area mapped (nearly double) as hedgerows using Sentinel-2 data compared with PlanetScope data.
While both datasets exhibited good overall accuracy, slightly higher measures of accuracy can be found in the literature. It should be noted that these studies often rely on presence/absence validations using control points [34,73]. In contrast, our validation aimed to assess the accuracy of the presence/absence predictions and the actual mapped area. We employed an independent dataset (regional orthophoto) to extract the surface area of the target element. Subsequently, an area comparison was conducted to verify the topological correctness of the elements mapped by our model. The results demonstrated good user accuracy and producer accuracy, especially for PlanetScope data, with values of 0.87 and 0.71, respectively.
The results of the comparison of the maps produced in our study and SWF dataset suggest that PlanetScope imagery offers a balanced compromise, capturing more detailed local features than Sentinel-2 while reflecting a broader view than the Copernicus SWF dataset. Our results are in line with previous studies that have highlighted limitations of the SWF dataset in capturing linear vegetation structures. For example, Huber Garcia et al. (2025) [74] mapped hedgerows in Bavaria using high-resolution orthophotos and convolutional neural networks, finding that approximately 43% of the identified hedgerows were absent from the SWF layer. This suggests that the SWF dataset may struggle to detect narrow and elongated vegetation elements, reinforcing the need for high-resolution mapping approaches. Similarly, Ahlswede et al. (2021) [72] demonstrated that convolutional neural networks applied to very-high-resolution IKONOS imagery outperformed traditional classification methods in hedgerow detection. These findings emphasize that while large-scale datasets such as Copernicus SWF provide a valuable baseline, accurate hedgerow mapping often necessitates the use of locally adapted methodologies with higher spatial resolution and advanced classification techniques.
Our findings confirm the importance of selecting the appropriate spatial resolution for accurate mapping at the local scale. While European-scale maps provide valuable baseline information, achieving accurate hedgerow area and boundary estimates often necessitates applying locally scaled methods.
The current availability of free satellite data does not yet match the spatial resolution provided by commercial platforms such as PlanetScope. However, future technological advancements may lead to higher-resolution imagery becoming accessible on non-commercial platforms as well. At present, the proposed method with the PlanetScope dataset is not directly scalable at a continental level; however, with a moderate investment, its applicability on a regional scale is feasible. It is also worth noting that PlanetScope imagery is offered under a variety of subscription plans for research and institutional use, and Planet also provides non-commercial access through programs such as the Copernicus Data Space Ecosystem and the Norwegian International Climate and Forest Initiative tropical forest monitoring program. In addition, environmental agencies and public institutions can access high-resolution imagery at reduced rates through special agreements or partnerships. Compared to commercial VHR imagery, PlanetScope’s daily revisit time and 3 m spatial resolution make it a cost-effective alternative for land-monitoring applications that require frequent updates.
This study demonstrates that, in the future, the approach could potentially be extended to larger spatial scales. The method is likely transferable to other contexts with a predominantly agricultural landscape where hedgerows are a minor landscape element, as the selected features are characteristic of hedgerows and should remain relevant across similar environments. However, a potential limitation may arise in regions where hedgerows are narrower than those in Friuli-Venezia Giulia. In such cases, the source of the dataset and the spatial resolution would have to be re-evaluated to ensure accurate detection and classification. On the other hand, in regions with very large hedgerows, it may be sufficient to use freely available data, such as Sentinel-2.
In terms of software implementation, eCognition was chosen because of its robust Object-Based Image Analysis (OBIA) capabilities, which can perform segmentation and classification of SWEs based on spectral and spatial features. eCognition has a flexible rule-based classification and machine learning environment that is suitable for working with high-resolution imagery.
Other potential limitations of the study include challenges with both Sentinel-2 data resolution and the processing of high-resolution imagery in eCognition. Extracting numerous features for target identification required substantial processing time, hindering the method’s scalability, especially for larger areas. The increased data volume of larger study areas greatly extends computation time, potentially making analysis prohibitive. Furthermore, processing a high number of features complicates and reduces the efficiency of the analysis. Secondly, our method was developed based on the specific landscape characteristics of the study area. Therefore, careful consideration must be given to potential variations when applying this methodology to other contexts.
Our approach shows promise for mapping hedgerows, but several technical challenges remain. Future work could involve incorporating a broader range of spectral indices and exploring more advanced processing algorithms to enhance classification accuracy, especially in differentiating hedgerows from adjacent vegetation types.
In addition, integrating data from other remote sensing technologies, such as elevation data from Light Detection and Ranging (LiDAR), would provide crucial information for defining landscape structure [75], especially when combining aerial and terrestrial laser scanning [76]. This combination would allow a more comprehensive characterization of the three-dimensional structure of vegetation elements. However, it is worth noting that aerial LiDAR is not available everywhere and LiDAR acquisitions are usually infrequent. Indeed, increasing the temporal frequency of data collection would provide more detailed information about seasonal dynamics and lead to more robust hedgerow detection.

5. Conclusions

Our research develops a semi-automatic method for SWE extraction using Object-Based Image Analysis on high-resolution satellite imagery, comparing the performance of our method on PlanetScope and Sentinel-2 data. By integrating multitemporal spectral indices and texture-based features, we have improved object mapping and classification accuracy. Furthermore, our approach has addressed the limitations of existing datasets, such as the Copernicus SWF inventory, by constructing a more frequently updated and adaptable mapping solution. Finally, we have analyzed the critical aspects of our method and discussed additional data sources that could be integrated to further improve the SWE mapping.
The results indicate that PlanetScope data achieved significantly higher accuracy, whereas Sentinel-2 data, despite their coarser resolution, still reached an overall accuracy (OA) exceeding 80%. This comparative analysis illustrates the potential of using both open-access and high-resolution commercial imagery for hedgerow detection. Secondly, the classification method applied in this study enhances the feasibility of developing a comprehensive database and addresses the challenge posed by the absence of a standardized reference dataset. Such a dataset would improve research comparability, foster collaboration, and drive progress in future applications. Moreover, these findings will contribute to advancing hedgerow detection research and deepening the understanding of hedgerows’ ecological significance, thereby supporting sustainable land management and biodiversity conservation.

Supplementary Materials

The following supporting information can be downloaded at https://www.mdpi.com/article/10.3390/rs17091506/s1, Table S1: Feature Space Optimization process, Figure S1: Results from Feature Space Optimization.

Author Contributions

Conceptualization, A.T., M.A.M.C., E.M. and M.S.; methodology, A.T. and M.A.M.C.; software, A.T., M.A.M.C. and A.L.G.; validation, A.T. and A.L.G.; formal analysis, A.L.G.; investigation, data curation, writing—original draft preparation, writing—review and editing, A.L.G., A.T., M.A.M.C., E.M. and M.S.; visualization, A.L.G.; supervision, E.M. and M.S.; resources, M.S. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Ministry of University and Research (MUR) Italy, project PRIN Eye-Land: A crowd-sensing geospatial database for the monitoring of rural areas (PRIN 2020–Settore ERC LS9–Bando 2020 Prot. 2020EMLWTN), CUP Eye-Land: G53C22000080001.

Data Availability Statement

The original contributions presented in the study are included in the article/Supplementary Materials; further inquiries can be directed to the corresponding author.

Acknowledgments

We thank the anonymous reviewers for their constructive, detailed, and helpful comments.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Álvarez, F.A.; Gómez-Mediavilla, G.; López-Estébanez, N.; Holgado, P.M.; Barajas, J.A. Hedgerows and Enclosures in Rural Areas: Traditional vs. Modern Land Use in Mediterranean Mountains. Land 2021, 10, 57. [Google Scholar] [CrossRef]
  2. Lecoq, L.; Mony, C.; Saiz, H.; Marsot, M.; Ernoult, A. Investigating the Effect of Habitat Amount and Landscape Heterogeneity on the Gamma Functional Diversity of Grassland and Hedgerow Plants. J. Ecol. 2022, 110, 1871–1882. [Google Scholar] [CrossRef]
  3. Jose, S. Agroforestry for Ecosystem Services and Environmental Benefits: An Overview. Agroforest Syst. 2009, 76, 1–10. [Google Scholar] [CrossRef]
  4. Boinot, S.; Alignier, A.; Pétillon, J.; Ridel, A.; Aviron, S. Hedgerows Are More Multifunctional in Preserved Bocage Landscapes. Ecol. Indic. 2023, 154, 110689. [Google Scholar] [CrossRef]
  5. Marshall, E.J.P.; Moonen, A.C. Field Margins in Northern Europe: Their Functions and Interactions with Agriculture. Agric. Ecosyst. Environ. 2002, 89, 5–21. [Google Scholar] [CrossRef]
  6. Hölting, L.; Jacobs, S.; Felipe-Lucia, M.R.; Maes, J.; Norström, A.V.; Plieninger, T.; Cord, A.F. Measuring Ecosystem Multifunctionality across Scales. Environ. Res. Lett. 2019, 14, 124083. [Google Scholar] [CrossRef]
  7. Dainese, M.; Montecchiari, S.; Sitzia, T.; Sigura, M.; Marini, L. High Cover of Hedgerows in the Landscape Supports Multiple Ecosystem Services in Mediterranean Cereal Fields. J. Appl. Ecol. 2017, 54, 380–388. [Google Scholar] [CrossRef]
  8. Croxton, P.J.; Hann, J.P.; Greatorex-Davies, J.N.; Sparks, T.H. Linear Hotspots? The Floral and Butterfly Diversity of Green Lanes. Biol. Conserv. 2005, 121, 579–584. [Google Scholar] [CrossRef]
  9. Pe’er, G.; Bonn, A.; Bruelheide, H.; Dieker, P.; Eisenhauer, N.; Feindt, P.H.; Hagedorn, G.; Hansjürgens, B.; Herzon, I.; Lomba, Â.; et al. Action Needed for the EU Common Agricultural Policy to Address Sustainability Challenges. People Nat. 2020, 2, 305–316. [Google Scholar] [CrossRef]
  10. Lechner, A.M.; Stein, A.; Jones, S.D.; Ferwerda, J.G. Remote Sensing of Small and Linear Features: Quantifying the Effects of Patch Size and Length, Grid Position and Detectability on Land Cover Mapping. Remote Sens. Environ. 2009, 113, 2194–2204. [Google Scholar] [CrossRef]
  11. Regulation (EU) 2024/1991. EUR-Lex. Available online: https://eur-lex.europa.eu/legal-content/IT/TXT/?uri=CELEX:32024R1991 (accessed on 18 November 2024).
  12. Staley, J.T.; Wolton, R.; Norton, L.R. Improving and Expanding Hedgerows—Recommendations for a Semi-natural Habitat in Agricultural Landscapes. Ecol. Sol. Evid. 2023, 4, e12209. [Google Scholar] [CrossRef]
  13. Chen, B.; Wang, L.; Fan, X.; Bo, W.; Yang, X.; Tjahjadi, T. Semi-FCMNet: Semi-Supervised Learning for Forest Cover Mapping from Satellite Imagery via Ensemble Self-Training and Perturbation. Remote Sens. 2023, 15, 4012. [Google Scholar] [CrossRef]
  14. Lewis, S.L.; Lopez-Gonzalez, G.; Sonké, B.; Affum-Baffoe, K.; Baker, T.R.; Ojo, L.O.; Phillips, O.L.; Reitsma, J.M.; White, L.; Comiskey, J.A.; et al. Increasing Carbon Storage in Intact African Tropical Forests. Nature 2009, 457, 1003–1006. [Google Scholar] [CrossRef]
  15. D’Andrimont, R.; Czucz, B.; De Marchi, D.; Gallego, J.; Lordanov, M.; Koeble, R.; Musavi, T.; Skoien, J.; Martinez Sanchez, L.; Terres, J. Estimation of the Share of Landscape Features in Agricultural Land Based on the LUCAS 2022 Survey; Publications Office of the European Union: Luxembourg, 2024. [Google Scholar]
  16. Rubio-Delgado, J.; Schnabel, S.; Lavado-Contador, J.F.; Schmutz, U. Small Woody Features in Agricultural Areas: Agroforestry Systems of Overlooked Significance in Europe. Agric. Syst. 2024, 218, 103973. [Google Scholar] [CrossRef]
  17. Smigaj, M.; Gaulton, R. Capturing Hedgerow Structure and Flowering Abundance with UAV Remote Sensing. Remote Sens. Ecol. Conserv. 2021, 7, 521–533. [Google Scholar] [CrossRef]
  18. Hinsley, S.A.; Bellamy, P.E. The Influence of Hedge Structure, Management and Landscape Context on the Value of Hedgerows to Birds: A Review. J. Environ. Manag. 2000, 60, 33–49. [Google Scholar] [CrossRef]
  19. Cadez, L.; Tomao, A.; Giannetti, F.; Chirici, G.; Alberti, G. Mapping Forest Growing Stock and Its Current Annual Increment Using Random Forest and Remote Sensing Data in Northeast Italy. Forests 2024, 15, 1356. [Google Scholar] [CrossRef]
  20. Cadez, L.; Giannetti, F.; De Luca, A.; Tomao, A.; Chirici, G.; Alberti, G. A WebGIS Tool to Support Forest Management at Regional and Local Scale. iForest 2023, 16, 361–367. [Google Scholar] [CrossRef]
  21. European Union, Copernicus Land Monitoring Service 2018; European Environment Agency (EEA): Copenhagen, Denmark, 2020.
  22. Barr, C.J.; Gillespie, M.K. Estimating Hedgerow Length and Pattern Characteristics in Great Britain Using Countryside Survey Data. J. Environ. Manag. 2000, 60, 23–32. [Google Scholar] [CrossRef]
  23. Aksoy, S.; Akcay, H.G.; Wassenaar, T. Automatic Mapping of Linear Woody Vegetation Features in Agricultural Landscapes Using Very High Resolution Imagery. IEEE Trans. Geosci. Remote Sens. 2010, 48, 511–522. [Google Scholar] [CrossRef]
  24. Rosier, I.; Diels, J.; Somers, B.; Van Orshoven, J. A Workflow to Extract the Geometry and Type of Vegetated Landscape Elements from Airborne LiDAR Point Clouds. Remote Sens. 2021, 13, 4031. [Google Scholar] [CrossRef]
  25. Thornton, M.W.; Atkinson, P.M.; Holland, D.A. A Linearised Pixel-Swapping Method for Mapping Rural Linear Land Cover Features from Fine Spatial Resolution Remotely Sensed Imagery. Comput. Geosci. 2007, 33, 1261–1272. [Google Scholar] [CrossRef]
  26. Perry, C.H.; Woodall, C.W.; Liknes, G.C.; Schoeneberger, M.M. Filling the Gap: Improving Estimates of Working Tree Resources in Agricultural Landscapes. Agroforest Syst. 2009, 75, 91–101. [Google Scholar] [CrossRef]
  27. Tansey, K.; Chambers, I.; Anstee, A.; Denniss, A.; Lamb, A. Object-Oriented Classification of Very High Resolution Airborne Imagery for the Extraction of Hedgerows and Field Margin Cover in Agricultural Areas. Appl. Geogr. 2009, 29, 145–157. [Google Scholar] [CrossRef]
  28. Han, H.; Han, C.; Lan, T.; Huang, L.; Hu, C.; Xue, X. Automatic Shadow Detection for Multispectral Satellite Remote Sensing Images in Invariant Color Spaces. Appl. Sci. 2020, 10, 6467. [Google Scholar] [CrossRef]
  29. Wolstenholme, J.M.; Cooper, F.; Thomas, R.E.; Ahmed, J.; Parsons, K.J.; Parsons, D.R. Automated Identification of Hedgerows and Hedgerow Gaps Using Deep Learning. Remote Sens. Ecol. Conserv. 2025, rse2.432. [Google Scholar] [CrossRef]
  30. Vizzari, M.; Antonielli, F.; Bonciarelli, L.; Grohmann, D.; Menconi, M.E. Urban Greenery Mapping Using Object-Based Classification and Multi-Sensor Data Fusion in Google Earth Engine. Urban. For. Urban. Green. 2025, 105, 128697. [Google Scholar] [CrossRef]
  31. Hossain, M.D.; Chen, D. Segmentation for Object-Based Image Analysis (OBIA): A Review of Algorithms and Challenges from Remote Sensing Perspective. ISPRS J. Photogramm. Remote Sens. 2019, 150, 115–134. [Google Scholar] [CrossRef]
  32. Blaschke, T. Object Based Image Analysis for Remote Sensing. ISPRS J. Photogramm. Remote Sens. 2010, 65, 2–16. [Google Scholar] [CrossRef]
  33. Torres-Sánchez, J.; Mesas-Carrascosa, F.J.; Pérez-Porras, F.; López-Granados, F. Detection of Ecballium elaterium in Hedgerow Olive Orchards Using a Low-cost Uncrewed Aerial Vehicle and Open-source Algorithms. Pest. Manag. Sci. 2023, 79, 645–654. [Google Scholar] [CrossRef]
  34. Perretta, M.; Delogu, G.; Funsten, C.; Patriarca, A.; Caputi, E.; Boccia, L. Testing the Impact of Pansharpening Using PRISMA Hyperspectral Data: A Case Study Classifying Urban Trees in Naples, Italy. Remote Sens. 2024, 16, 3730. [Google Scholar] [CrossRef]
  35. Acito, N.; Diani, M.; Corsini, G. PRISMA Spatial Resolution Enhancement by Fusion With Sentinel-2 Data. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2022, 15, 62–79. [Google Scholar] [CrossRef]
  36. De Luca, G.; Carotenuto, F.; Genesio, L.; Pepe, M.; Toscano, P.; Boschetti, M.; Miglietta, F.; Gioli, B. Improving PRISMA Hyperspectral Spatial Resolution and Geolocation by Using Sentinel-2: Development and Test of an Operational Procedure in Urban and Rural Areas. ISPRS J. Photogramm. Remote Sens. 2024, 215, 112–135. [Google Scholar] [CrossRef]
  37. Faucqueur, L.; Morin, N.; Masse, A.; Remy, P.-Y.; Hugé, J.; Kenner, C.; Dazin, F.; Desclée, B.; Sannier, C. A New Copernicus High Resolution Layer at Pan-European Scale: Small Woody Features. In Proceedings of the Remote Sensing for Agriculture, Ecosystems, and Hydrology XXI, Strasbourg, France, 21 October 2019; SPIE: Bellingham, WA, USA; Volume 11149, pp. 268–278. [Google Scholar]
  38. Enache, S.; Louis, J.; Pflug, B.; de Los Reyes, R.; Lafrance, B.; Clerc, S.; Barrot, G.; Alhammoud, B.; Poustomis, F.; Iannone, R.Q.; et al. Copernicus Sentinel-2 Collection-1: A Consistent Dataset of Multi-Spectral Imagery with Enhanced Quality. In Proceedings of the IGARSS 2023–2023 IEEE International Geoscience and Remote Sensing Symposium, Pasadena, CA, USA, 16–21 July 2023; pp. 4302–4305. [Google Scholar]
  39. Houborg, R.; McCabe, M. High-Resolution NDVI from Planet’s Constellation of Earth Observing Nano-Satellites: A New Data Source for Precision Agriculture. Remote Sens. 2016, 8, 768. [Google Scholar] [CrossRef]
  40. Pirbasti, M.A.; McArdle, G.; Akbari, V. Hedgerows Monitoring in Remote Sensing: A Comprehensive Review. IEEE Access 2024, 12, 156184–156207. [Google Scholar] [CrossRef]
  41. Baatz, M.; Schäpe, A. Multiresolution Segmentation: An Optimization Approach for High Quality Multi-Scale Image Segmentation. Angew. Geogr. Inf. Verarb. 2000, XII, 12–23. [Google Scholar]
  42. Cufí, X.; Muñoz, X.; Freixenet, J.; Martí, J. A Review of Image Segmentation Techniques Integrating Region and Boundary Information. In Advances in Imaging and Electron Physics; Elsevier: Amsterdam, The Netherlands, 2003; Volume 120, pp. 1–39. ISBN 978-0-12-014762-5. [Google Scholar]
  43. Baatz, M.; Benz, U.; Dehghani, S.; Heynen, M.; Holtje, A.; Hofmann, P.; Lingenfelder, I.; Mimler, M.; Sohlbach, M.; Weber, M.; et al. eCognition Professional User Guide; Version 4.0; 2004. Available online: https://support.ecognition.com/hc/article_attachments/4407404758418/DefiniensProfessional4_Manual.pdf (accessed on 4 October 2024).
  44. Li, M.; Ma, L.; Blaschke, T.; Cheng, L.; Tiede, D. A Systematic Comparison of Different Object-Based Classification Techniques Using High Spatial Resolution Imagery in Agricultural Environments. Int. J. Appl. Earth Obs. Geoinf. 2016, 49, 87–98. [Google Scholar] [CrossRef]
  45. Rasuly, A.; Naghdifar, R.; Rasoli, M. Monitoring of Caspian Sea Coastline Changes Using Object-Oriented Techniques. Procedia Environ. Sci. 2010, 2, 416–426. [Google Scholar] [CrossRef]
  46. Gómez, C.; White, J.C.; Wulder, M.A. Characterizing the State and Processes of Change in a Dynamic Forest Environment Using Hierarchical Spatio-Temporal Segmentation. Remote Sens. Environ. 2011, 115, 1665–1679. [Google Scholar] [CrossRef]
  47. Zhu, L.; Zhang, J.; Sun, Y. Remote Sensing Image Change Detection Using Superpixel Cosegmentation. Information 2021, 12, 94. [Google Scholar] [CrossRef]
  48. Jamali, S.; Jönsson, P.; Eklundh, L.; Ardö, J.; Seaquist, J. Detecting Changes in Vegetation Trends Using Time Series Segmentation. Remote Sens. Environ. 2015, 156, 182–195. [Google Scholar] [CrossRef]
  49. Kancheva, R.; Georgiev, G. Seasonal Spectral Response Patterns of Winter Wheat Canopy for Crop Performance Monitoring. In Proceedings of the Remote Sensing for Agriculture, Ecosystems, and Hydrology XV, Dresden, Germany, 16 October 2013; p. 88871V. [Google Scholar]
  50. Oreti, L.; Giuliarelli, D.; Tomao, A.; Barbati, A. Object Oriented Classification for Mapping Mixed and Pure Forest Stands Using Very-High Resolution Imagery. Remote Sens. 2021, 13, 2508. [Google Scholar] [CrossRef]
  51. Oreti, L.; Barbati, A.; Marini, F.; Giuliarelli, D. Very High-Resolution True Color Leaf-off Imagery for Mapping Taxus Baccata L. and Ilex Aquifolium L. Understory Population. Biodivers. Conserv. 2020, 29, 2605–2622. [Google Scholar] [CrossRef]
  52. Helmholz, P.; Rottensteiner, F.; Heipke, C. Semi-Automatic Verification of Cropland and Grassland Using Very High Resolution Mono-Temporal Satellite Images. ISPRS J. Photogramm. Remote Sens. 2014, 97, 204–218. [Google Scholar] [CrossRef]
  53. Lacombe, T.; Favreliere, H.; Pillet, M. Modal Features for Image Texture Classification. Pattern Recognit. Lett. 2020, 135, 249–255. [Google Scholar] [CrossRef]
  54. About Classification. Available online: https://docs.ecognition.com/v9.5.0/eCognition_documentation/User%20Guide%20Developer/6%20About%20Classification.htm (accessed on 9 April 2025).
55. Rana, M.; Kharel, S. Feature Extraction for Urban and Agricultural Domains Using eCognition Developer. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2019, XLII-3-W6, 609–615. [Google Scholar] [CrossRef]
  56. Duan, M.; Zhang, X. Using Remote Sensing to Identify Soil Types Based on Multiscale Image Texture Features. Comput. Electron. Agric. 2021, 187, 106272. [Google Scholar] [CrossRef]
  57. Lee, C.-H.; Chen, K.-Y.; Liu, L.D. Effect of Texture Feature Distribution on Agriculture Field Type Classification with Multitemporal UAV RGB Images. Remote Sens. 2024, 16, 1221. [Google Scholar] [CrossRef]
  58. Haralick, R.M. Statistical and Structural Approaches to Texture. Proc. IEEE 1979, 67, 786–804. [Google Scholar] [CrossRef]
  59. Kleefeld, A.; Gypser, S.; Herppich, W.B.; Bader, G.; Veste, M. Identification of Spatial Pattern of Photosynthesis Hotspots in Moss- and Lichen-Dominated Biological Soil Crusts by Combining Chlorophyll Fluorescence Imaging and Multispectral BNDVI Images. Pedobiologia 2018, 68, 1–11. [Google Scholar] [CrossRef]
  60. Van Der Merwe, D.; Price, K. Harmful Algal Bloom Characterization at Ultra-High Spatial and Temporal Resolution Using Small Unmanned Aircraft Systems. Toxins 2015, 7, 1065–1078. [Google Scholar] [CrossRef]
  61. Morales-Gallegos, L.M.; Martínez-Trinidad, T.; Hernández-de La Rosa, P.; Gómez-Guerrero, A.; Alvarado-Rosales, D.; Saavedra-Romero, L.D.L. Tree Health Condition in Urban Green Areas Assessed through Crown Indicators and Vegetation Indices. Forests 2023, 14, 1673. [Google Scholar] [CrossRef]
62. Zerbe, L.M.; Liew, S.C. Reevaluating the Traditional Maximum NDVI Compositing Methodology: The Normalized Difference Blue Index. In Proceedings of the IEEE International Geoscience and Remote Sensing Symposium (IGARSS 2004), Anchorage, AK, USA, 20–24 September 2004; IEEE: New York, NY, USA, 2004; Volume 4, pp. 2401–2404. [Google Scholar]
  63. Hu, X.; Wang, T.; Fu, C.-W.; Jiang, Y.; Wang, Q.; Heng, P.-A. Revisiting Shadow Detection: A New Benchmark Dataset for Complex World. IEEE Trans. Image Process. 2021, 30, 1925–1934. [Google Scholar] [CrossRef]
  64. Zhang, Y.; Chen, G.; Vukomanovic, J.; Singh, K.K.; Liu, Y.; Holden, S.; Meentemeyer, R.K. Recurrent Shadow Attention Model (RSAM) for Shadow Removal in High-Resolution Urban Land-Cover Mapping. Remote Sens. Environ. 2020, 247, 111945. [Google Scholar] [CrossRef]
  65. Kang, X.; Lin, H.; Benediktsson, J.A. Extended Random Walker for Shadow Detection in Very High Resolution Remote Sensing Images. IEEE Trans. Geosci. Remote Sens. 2018, 56, 867–876. [Google Scholar] [CrossRef]
  66. Mo, N.; Zhu, R.; Yan, L.; Zhao, Z. Deshadowing of Urban Airborne Imagery Based on Object-Oriented Automatic Shadow Detection and Regional Matching Compensation. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2018, 11, 585–605. [Google Scholar] [CrossRef]
  67. Rufenacht, D.; Fredembach, C.; Susstrunk, S. Automatic and Accurate Shadow Detection Using Near-Infrared Information. IEEE Trans. Pattern Anal. Mach. Intell. 2014, 36, 1672–1678. [Google Scholar] [CrossRef]
  68. Deng, X.; Liu, Q.; Deng, Y.; Mahadevan, S. An Improved Method to Construct Basic Probability Assignment Based on the Confusion Matrix for Classification Problem. Inf. Sci. 2016, 340–341, 250–261. [Google Scholar] [CrossRef]
  69. Schmucki, R.; De Blois, S.; Bouchard, A.; Domon, G. Spatial and Temporal Dynamics of Hedgerows in Three Agricultural Landscapes of Southern Quebec, Canada. Environ. Manag. 2002, 30, 651–664. [Google Scholar] [CrossRef]
  70. Bennett, E.M.; Baird, J.; Baulch, H.; Chaplin-Kramer, R.; Fraser, E.; Loring, P.; Morrison, P.; Parrott, L.; Sherren, K.; Winkler, K.J.; et al. Ecosystem Services and the Resilience of Agricultural Landscapes. In Advances in Ecological Research; Elsevier: Amsterdam, The Netherlands, 2021; Volume 64, pp. 1–43. ISBN 978-0-12-822979-8. [Google Scholar]
  71. Schnell, S. Monitoring Trees Outside Forests: A Review. Environ. Monit. Assess. 2015, 187, 1–17. [Google Scholar] [CrossRef]
  72. Ahlswede, S.; Asam, S.; Röder, A. Hedgerow Object Detection in Very High-Resolution Satellite Images Using Convolutional Neural Networks. J. Appl. Rem. Sens. 2021, 15, 018501. [Google Scholar] [CrossRef]
  73. Patriarca, A.; Caputi, E.; Gatti, L.; Marcheggiani, E.; Recanatesi, F.; Rossi, C.M.; Ripa, M.N. Wide-Scale Identification of Small Woody Features of Landscape from Remote Sensing. Land 2024, 13, 1128. [Google Scholar] [CrossRef]
  74. Huber-García, V.; Kriese, J.; Asam, S.; Dirscherl, M.; Stellmach, M.; Buchner, J.; Kerler, K.; Gessner, U. Hedgerow Map of Bavaria, Germany, Based on Orthophotos and Convolutional Neural Networks. Remote Sens. Appl. Soc. Environ. 2025, 37, 101451. [Google Scholar] [CrossRef]
  75. Lucas, C.; Bouten, W.; Koma, Z.; Kissling, W.D.; Seijmonsbergen, A.C. Identification of Linear Vegetation Elements in a Rural Landscape Using LiDAR Point Clouds. Remote Sens. 2019, 11, 292. [Google Scholar] [CrossRef]
  76. Puletti, N.; Innocenti, S.; Guasti, M. A Co-Registration Approach between Terrestrial and UAV Laser Scanning Point Clouds Based on Ground and Trees Features. Ann. Silvic. Res. 2024, 49, 4466. [Google Scholar] [CrossRef]
Figure 1. Study area in the Friuli-Venezia Giulia region.
Figure 2. Workflow for the semi-automatic extraction of Small Woody Elements (SWEs) using Sentinel-2 and PlanetScope data through multiresolution segmentation [41]. The Ss parameter represents the scaling factor adopted in the eCognition environment.
Figure 3. Sampling design for independent classification.
Figure 4. Sub-segmentation of the “Hedgerows” class (yellow) to separate shadows (blue) and reduce misclassification.
Figure 5. Classification results from PlanetScope image data.
Figure 6. Classification results from Sentinel-2 image data.
Figure 7. SWE mapping comparison: The Copernicus Small Woody Features (SWF) 2018 dataset (pink) underestimates hedgerow coverage compared to the mapping with PlanetScope data (blue). The finer spatial resolution of PlanetScope imagery captures a greater level of detail, detecting additional hedgerow segments that are missed in the Copernicus SWF dataset.
Table 1. Multitemporal image data.

| Acquisition Date | Sentinel-2 | PlanetScope |
|---|---|---|
| Spring season | 28 April 2022 | 29 April 2022 |
| Summer season | 12 July 2022 | 13 July 2022 |
| Autumn season | 6 November 2022 | 13 November 2022 |
Table 2. Image layers used in the segmentation, with their descriptions and the weights assigned to each layer.

| Image Layer | Description | Weight |
|---|---|---|
| Red, Green, Blue, Near-Infrared bands | April and July scenes | 1 |
| $NDVI_{apr}$, $NDVI_{jul}$ | $(NIR - Red)/(NIR + Red)$ | 2 |
| $dNDVI_{jul-apr}$ | $NDVI_{jul} - NDVI_{apr}$ | 4 |
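For readers reproducing the layer stack in Table 2 outside eCognition, the minimal sketch below shows how the NDVI and seasonal dNDVI layers could be computed from the red and near-infrared bands before segmentation; the array names, shapes, and random test data are illustrative assumptions, not part of the study's processing chain.

```python
import numpy as np

def ndvi(nir: np.ndarray, red: np.ndarray) -> np.ndarray:
    """Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red)."""
    nir = nir.astype(np.float64)
    red = red.astype(np.float64)
    return (nir - red) / np.maximum(nir + red, 1e-9)  # avoid division by zero

# Hypothetical reflectance arrays for the spring (April) and summer (July) scenes.
red_apr, nir_apr = np.random.rand(100, 100), np.random.rand(100, 100)
red_jul, nir_jul = np.random.rand(100, 100), np.random.rand(100, 100)

ndvi_apr = ndvi(nir_apr, red_apr)
ndvi_jul = ndvi(nir_jul, red_jul)
dndvi_jul_apr = ndvi_jul - ndvi_apr  # seasonal NDVI difference

# Relative layer weights as listed in Table 2 (bands = 1, NDVI = 2, dNDVI = 4).
layer_weights = {"bands": 1, "NDVI_apr": 2, "NDVI_jul": 2, "dNDVI_jul_apr": 4}
```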
Table 3. Set of features extracted for the classification process.

| Object Feature | Description |
|---|---|
| $dNDVI_{jul-apr}$ | $NDVI_{jul} - NDVI_{apr}$ |
| $dNDVI_{jul-nov}$ | $NDVI_{jul} - NDVI_{nov}$ |
| Maximum difference | – |
| Brightness | – |
| GLDV Entropy (quick 9/11) (90°) | $\sum_{i,j} p_{ij} \log p_{ij}$, where $p_{ij} = x_{ij} / \sum_{i,j} x_{ij}$ |
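The GLDV entropy in Table 3 is computed by eCognition; purely as an illustration of the kind of texture it captures, the sketch below derives an entropy from the grey-level difference histogram of a patch in the vertical (90°) direction. The quantisation depth, the standard −Σ p ln p sign convention, and the omission of the “quick 9/11” kernel are simplifying assumptions, so this is not the exact software implementation.

```python
import numpy as np

def gldv_entropy_vertical(gray: np.ndarray, levels: int = 32) -> float:
    """Entropy of the grey-level difference vector (GLDV) in the 90° direction.

    The GLDV is the histogram of absolute grey-level differences between each
    pixel and its vertical neighbour; high entropy indicates heterogeneous texture.
    """
    # Quantise the patch to a limited number of grey levels, as is usual for texture features.
    span = max(float(np.ptp(gray)), 1e-9)
    q = np.floor((gray - gray.min()) / span * (levels - 1)).astype(int)
    diffs = np.abs(q[1:, :] - q[:-1, :]).ravel()        # vertical neighbour pairs
    hist = np.bincount(diffs, minlength=levels).astype(float)
    p = hist / hist.sum()                               # p_k = x_k / sum_k x_k
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())

# Example on a random 8-bit patch standing in for the pixels of one image object.
patch = np.random.randint(0, 256, size=(30, 30)).astype(float)
print(gldv_entropy_vertical(patch))
```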
Table 4. Spectral features selected to enhance the distinction between hedgerows and adjacent shadows.

| Feature Space Element | Description |
|---|---|
| Brightness | – |
| $NDVI_{jul}$ | $(NIR - Red)/(NIR + Red)$ |
| $\Delta R{-}G$ | Red − Green |
| R/IR | Red/NIR |
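As a rough illustration of how the Table 4 features separate vegetated objects from their shadows, the sketch below evaluates them for two hypothetical objects described by mean band reflectances; the simple three-band brightness and the example reflectance values are assumptions, not the study's exact definitions.

```python
def shadow_separation_features(red: float, green: float, nir: float) -> dict:
    """Per-object features used to distinguish hedgerow canopy from adjacent shadow.

    Shadowed objects typically show low brightness and a weak NIR response,
    so the Red/NIR ratio and the Red-Green difference help split the two classes.
    """
    eps = 1e-9
    return {
        "brightness": (red + green + nir) / 3.0,   # simplified mean of the listed bands
        "NDVI_jul": (nir - red) / max(nir + red, eps),
        "delta_R_G": red - green,                  # Red - Green
        "R_over_NIR": red / max(nir, eps),         # Red / NIR
    }

# Hypothetical mean reflectances: a sunlit hedgerow object vs. a shadowed object.
print(shadow_separation_features(red=0.04, green=0.06, nir=0.45))
print(shadow_separation_features(red=0.02, green=0.03, nir=0.08))
```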
Table 5. Classification of polygons for map validation.

| Class | Number of Polygons | Surface (ha) | % |
|---|---|---|---|
| Hedgerows | 881 | 8.21 | 14 |
| Other | 3714 | 52.01 | 86 |
| Total | 4595 | 60 | 100 |
Table 6. Statistics of the initial segmentation of the PlanetScope and Sentinel-2 datasets.

| Dataset | Count | Min Value [m²] | Max Value [m²] | Mean [m²] | Median [m²] |
|---|---|---|---|---|---|
| PlanetScope | 11,844 | 54,016.0 | 27,116.2 | 1691.4 | 1269.4 |
| Sentinel-2 | 1582 | 100.0 | 149,747.2 | 12,589.2 | 9903.0 |
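Segment-size statistics such as those in Table 6 (and Table 7 below) can be recomputed from any exported segmentation layer; the sketch below assumes a hypothetical shapefile of segment polygons in a metric coordinate system, so the file name and column handling are illustrative rather than the study's actual workflow.

```python
import geopandas as gpd

# Hypothetical export of the first-level segmentation as polygons (metric CRS).
segments = gpd.read_file("planetscope_first_segmentation.shp")
areas_m2 = segments.geometry.area  # polygon areas in square metres

stats = {
    "Count": int(len(areas_m2)),
    "Min [m2]": float(areas_m2.min()),
    "Max [m2]": float(areas_m2.max()),
    "Mean [m2]": float(areas_m2.mean()),
    "Median [m2]": float(areas_m2.median()),
}
print(stats)
```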
Table 7. Surface area statistics of different features after the second segmentation and classification for both datasets.

PlanetScope dataset [m²]:

| Class Name | Count | Min | Max | Mean | Median |
|---|---|---|---|---|---|
| hedgerow | 7366 | 27.0 | 2133.7 | 307.8 | 270.1 |
| shadow | 2967 | 27.0 | 1395.4 | 242.3 | 207.1 |
| subseg_unclassified | 12 | 18.0 | 486.2 | 207.8 | 139.5 |
| total | 10,345 | – | – | – | – |

Sentinel-2 dataset [m²]:

| Class Name | Count | Min | Max | Mean | Median |
|---|---|---|---|---|---|
| hedgerow | 960 | 100.0 | 13,604.3 | 1630.5 | 1300.4 |
| shadow | 2198 | 100.0 | 12,203.8 | 1878.6 | 1600.5 |
| subseg_unclassified | 9 | 100.0 | 1300.4 | 333.4 | 200.1 |
| total | 3167 | – | – | – | – |
Table 8. Area and number of elements for each class after classification for both the PlanetScope and Sentinel-2 datasets.

| Dataset | Class | Area [ha] | % Area | n° Elements |
|---|---|---|---|---|
| PlanetScope | Hedgerow | 226 | 11.3 | 533 |
| PlanetScope | Other | 1774 | 88.7 | – |
| Sentinel-2 | Hedgerow | 413 | 20.7 | 292 |
| Sentinel-2 | Other | 1587 | 79.3 | – |
Table 9. Confusion matrix for the PlanetScope and Sentinel-2 datasets (areas in hectares).

PlanetScope data:

| Predicted \ Actual [ha] | Other | Hedgerow |
|---|---|---|
| Other | 50.59 | 1.75 |
| Hedgerow | 1.42 | 6.46 |

Sentinel-2 data:

| Predicted \ Actual [ha] | Other | Hedgerow |
|---|---|---|
| Other | 45.07 | 2.41 |
| Hedgerow | 6.92 | 5.80 |
Table 10. Accuracy metrics for both the PlanetScope and Sentinel-2 data.

PlanetScope dataset:

| Metric | Overall | Other | Hedgerow |
|---|---|---|---|
| User's Accuracy | – | 0.97 | 0.82 |
| Producer's Accuracy | – | 0.97 | 0.79 |
| Overall Accuracy | 0.95 | – | – |
| Cohen's K | 0.77 | – | – |

Sentinel-2 dataset:

| Metric | Overall | Other | Hedgerow |
|---|---|---|---|
| User's Accuracy | – | 0.95 | 0.46 |
| Producer's Accuracy | – | 0.87 | 0.71 |
| Overall Accuracy | 0.85 | – | – |
| Cohen's K | 0.47 | – | – |
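The accuracy metrics in Table 10 follow directly from the area-based confusion matrices in Table 9. The short NumPy sketch below (not the software used in the study) reproduces the reported user's, producer's, and overall accuracies and Cohen's kappa from those matrices, up to rounding.

```python
import numpy as np

def accuracy_metrics(cm: np.ndarray):
    """User's/producer's accuracy, overall accuracy, and Cohen's kappa
    from a confusion matrix with rows = predicted and columns = actual classes."""
    total = cm.sum()
    overall = np.trace(cm) / total
    users = np.diag(cm) / cm.sum(axis=1)       # per predicted class (rows)
    producers = np.diag(cm) / cm.sum(axis=0)   # per actual class (columns)
    expected = (cm.sum(axis=1) * cm.sum(axis=0)).sum() / total**2
    kappa = (overall - expected) / (1 - expected)
    return users, producers, overall, kappa

# Table 9 matrices in hectares, classes ordered as [Other, Hedgerow].
matrices = {
    "PlanetScope": np.array([[50.59, 1.75], [1.42, 6.46]]),
    "Sentinel-2": np.array([[45.07, 2.41], [6.92, 5.80]]),
}
for name, cm in matrices.items():
    ua, pa, oa, k = accuracy_metrics(cm)
    print(name, "UA:", np.round(ua, 2), "PA:", np.round(pa, 2),
          "OA:", round(float(oa), 2), "kappa:", round(float(k), 2))
```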
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
