Article

GIS-Based Approach for Estimating Olive Tree Heights Using High-Resolution Satellite Imagery and Shadow Analysis

Raffaella Brigante, Valerio Baiocchi, Roberto Calisti, Laura Marconi, Primo Proietti, Fabio Radicioni, Luca Regni and Alessandra Vinci
1 Department of Engineering, University of Perugia, 06125 Perugia, Italy
2 Department of Civil, Constructional and Environmental Engineering, ‘La Sapienza’ University, 00185 Roma, Italy
3 Department of Agricultural, Food and Environmental Sciences, University of Perugia, 06121 Perugia, Italy
* Author to whom correspondence should be addressed.
Appl. Sci. 2025, 15(6), 3066; https://doi.org/10.3390/app15063066
Submission received: 10 February 2025 / Revised: 5 March 2025 / Accepted: 10 March 2025 / Published: 12 March 2025
(This article belongs to the Special Issue GIS-Based Spatial Analysis for Environmental Applications)

Abstract:
Measuring tree heights is a critical step for assessing ecological and agricultural parameters, including biomass, carbon stock, and canopy volume. In extensive areas exceeding a few hectares, traditional terrestrial measurement methods are often prohibitively expensive in terms of time and cost. This study introduces a GIS-based methodology for estimating olive tree (Olea europaea L.) heights using very-high-resolution (VHR) satellite imagery. The approach integrates a mathematical model that incorporates slope and aspect information derived in a GIS environment from a large-scale Digital Elevation Model. By leveraging sun position data embedded in satellite image metadata, a dedicated geometric model was developed to calculate tree heights. Comparative analyses with a drone-based 3D model demonstrated the statistical reliability of the proposed methodology. While this study focuses on olive trees due to their unique canopy structure, the method could also be applied to other tree species or even to buildings and other vertically developed structures on the ground. Future developments aim to enhance efficiency and usability through the creation of a specialized GIS tool, making it a valuable resource for environmental monitoring, sustainable agricultural management, and broader spatial analysis applications.

1. Introduction

High-resolution orthorectified imagery acquired via aerial or satellite sensors is highly valuable for Geographic Information System (GIS) applications, including environmental monitoring [1], precision agriculture [2], natural disaster assessments [3], infrastructure planning, and urban development [4,5]. These images lack height information, which prevents certain types of analyses. Three-dimensional data can be obtained through methods such as stereo image acquisition and processing (from aerial, drone, or satellite sources), laser scanning surveys, and Digital Surface Model (DSM) analysis. Unmanned Aerial Vehicle (UAV) imagery is often processed using Structure from Motion (SfM) techniques, which enable the extraction of height data through triangulation between consecutive image pairs [6]. Similarly, laser scanning surveys can provide altimetric data by generating point clouds, which can contain millions of points for extensive areas. However, both SfM and lidar point cloud processing for large-scale areas require substantial computational resources and significant storage capacity to handle the vast amounts of data generated.
The use of simpler materials, such as orthophotos, to obtain 3D information could significantly simplify workflows and reduce processing times.
An important feature in orthophotos, closely linked to altitude data, is the presence of shadows. Several studies have focused on deriving altitude information from single-source images, such as those captured by drones, aerial platforms, or satellites, often integrating additional data like DSMs or LiDAR, with a primary emphasis on buildings.
Liu et al. [7] introduced an automated technique for estimating building heights from a single image using a Convolutional Neural Network (CNN) architecture. Their method processes a single optical image to derive a Digital Surface Model. To train the CNN for height inference, aerial imagery was aligned with the corresponding DSM. Similarly, Xu et al. [8] proposed a methodology for extracting building heights from high-resolution remote sensing images, combining shadow and side information through the RMU-Net framework. This approach addresses issues like pixel detail loss and inaccuracies in edge segmentation caused by scale variations within the segmented buildings.
Gavankar et al. [9] calculated building heights by analyzing shadows in high-resolution multispectral Ikonos images. Their method considered both the Sun’s and satellite’s positions during image capture, employing multispectral classification to measure shadow widths and estimate heights.
The influence of acquisition geometry in building height estimation has been explored in various studies. Izadi and Saeedi [10] developed an automatic strategy for identifying flat polygonal roof types in QuickBird images by using shadow shapes and acquisition geometry for height estimation. To improve shadow detection accuracy, Liasis and Stavrou [11] optimized shadow segmentation with an active contour model that incorporated both spectral and spatial information. Their method enhanced shadow clarity by utilizing the solar elevation angle to estimate building heights. Xie et al. [12] introduced a method for estimating building heights using shadows in high-resolution imagery. Their framework accounts for varying building densities, terrains, and complex environments, effectively distinguishing between simple and complex building shapes. Giacaman [13] further refined shadow-based methods by integrating a geometric approach that accounts for the movement of the ASTER satellite. This method utilizes physical models to position shadows accurately, enhancing refraction calculations from meteorological data. Shettigara and Sumerling [14] applied similar techniques in flat terrain using a single SPOT image, factoring in potential obstructions like trees and considering Sun and satellite positions for height estimation.
The connection between shadow length and tree height has also been explored in studies like the one by Verma [15], which focused on Eucalyptus trees and accounted for terrain slope and aspect; this is one of the few studies examining tree height estimation, in contrast to the predominance of building-focused research. By using orthorectified images and photogrammetric data from the ADS40 airborne digital scanner, Verma analyzed shadows cast by 180 Eucalyptus trees with varying canopy conditions on farmland in southeastern Australia. The study utilized 50 cm resolution multispectral aerial imagery and developed a geometric shadow model incorporating local ground slope and aspect from a Digital Elevation Model (DEM). The model calculates a “local tree time” to deduce the sun elevation angle, making it applicable to single-scene images and mosaicked imagery where acquisition times may be unknown. The study highlighted that accuracy depends on precise shadow azimuth and shadow length measurements. Errors arise due to the off-nadir displacement of tree canopies and shadow projection distortions. A correction factor, derived from a geometric model assuming circular canopy profiles, was introduced to improve height estimates.
For accurate tree height estimation, understanding tree shape is crucial. For example, olive trees, with their broad and open canopy, cast distinctive shadows, unlike trees with central, symmetrical high points like conifers, which create different shadow patterns (Figure 1).
The canopies of olive trees are typically broad and irregular, characterized by twisted branches that form an expansive, structured arrangement. Canopy density varies depending on the cultivar, training system, tree age, and cultivation practices with reference to frequency and intensity of pruning, with managed trees often pruned to optimize light exposure and fruit production. In natural settings, the canopy can appear denser and more intricate.
For precise shadow identification of the tree canopy, high-resolution images, such as those from UAVs or satellites, are essential for reliable measurements.
WorldView-3 (WV3) is a high-resolution Earth observation satellite known for its advanced capabilities in capturing detailed imagery [16]. It is equipped with eight spectral bands, including visible (coastal blue, blue, green, yellow, and red) and near-infrared (NIR1 and NIR2). WV3 offers a panchromatic and multispectral resolution of up to 0.31 m per pixel and 1.24 m per pixel, respectively, allowing for accurate feature identification and detailed analysis. The satellite can revisit the same location every 1 to 3 days, enabling change detection examinations.
Thanks to its high spatial resolution and multispectral capabilities, WV3 imagery has proven to be a valuable tool for a wide range of applications, including environmental monitoring, agricultural management, and ecological studies. It enables the detailed analysis of vegetation patterns, species diversity, and crop characteristics, making it particularly well suited for precision agriculture and sustainable land management, as well as establishing itself as a powerful asset for remote sensing.
Lelong et al. [17] demonstrate how WV3 imagery aids in mapping tree diversity in semi-arid environments. By leveraging the satellite’s high spatial and spectral resolution, the study successfully classifies tree species and analyzes distribution patterns, highlighting the role of WV3 in supporting biodiversity and land management. In temperate montane forests, Liu et al. [18] use stereo WV3 imagery along with a fusion method to improve the mapping of standing dead trees, enhancing ecological monitoring. Similarly, Tong et al. [19] employ advanced image processing techniques to delineate tree crowns, demonstrating the capability of WV3 in precision forestry, urban management, and biodiversity studies. Vermote et al. [20] utilize WV3 imagery to remotely sense coconut trees across Tonga, contributing to agricultural monitoring and resource management. Ferreira et al. [21] combine WV3 imagery and convolutional neural networks (CNNs) to map Brazil nut trees in the Amazon, demonstrating an effective method for monitoring key species in dense forests. Rahman et al. [22] explore how WV3 imagery can estimate mango yield by analyzing tree health and canopy structures, providing insights for precision agriculture.
The versatility of WV3 imagery for multispectral classification is further highlighted in various studies. For example, Li et al. [23] show how bi-temporal WV3 images enhance urban tree species classification. Johansen et al. [24] demonstrate the effective use of WV3 imagery with UAV data for assessing macadamia tree crop health. Ferreira et al. [25] combine texture analysis and WV3 data for improved tree species classification in tropical forests. Similarly, Wang et al. [26] use a combination of spectral and textural features to classify mangrove species. Solano et al. [27] propose a methodology for deriving detailed vegetation indices in olive orchards, offering insights into agricultural monitoring. Hively et al. [28] use WV3 shortwave infrared indices for mapping crop residue and tillage intensity, while Ab Majid et al. [29] further emphasize the utility of WV3 for tree species classification.
In this study, a method to estimate the height of olive trees using a monoscopic WV3 satellite image is proposed. The approach is based on a mathematical model that calculates the height of the trees; this is conducted by using the solar position information (available in the satellite image metadata), measuring the shadow length on the orthoimage, and considering the slope and aspect of the ground. To validate the results, we compare the estimated tree heights with altimetric data obtained through drone surveys, providing a reliable ground-truth assessment of the proposed methodology.
Methods similar to the one proposed here focus solely on buildings or objects with well-defined geometries, and no standardized tests exist for measuring tree heights. The innovative aspect of the proposed methodology lies in its focus on measuring tree heights (e.g., olive trees) by exploiting the very high resolution of WorldView-3 monoscopic satellite imagery and the metadata related to the Sun’s position at the acquisition time. This approach enables rapid and cost-effective measurements compared to other techniques, such as UAV surveys. It allows for the analysis of very large areas (over a few square kilometers) much more efficiently than ground-based or drone measurements and benefits from the wide availability of archived satellite images, which can be purchased at lower prices compared to new acquisitions. Additionally, it facilitates data collection in remote or inaccessible regions, such as areas in developing countries, where other techniques may be limited.
Potential applications of the proposed method include using tree height measurements to estimate parameters such as biomass, carbon stock, or canopy volume, contributing to ecological and agricultural studies.

2. Materials and Methods

2.1. WorldView-3 Satellite Image

The high-resolution satellite image used for this study was acquired on 29 July 2023 by the WorldView-3 satellite. The acquired scene covers approximately 28.54 square kilometers around Spoleto, a city in the Umbria region (central Italy), and is bounded by the following coordinates (Datum ETRF 2000, EPSG:6708 RDN2008/UTM zone 33N (N-E)): upper left at 4,743,417.3 N, 317,279.4 E and lower right at 4,737,005.7 N, 321,757.2 E (Figure 2).
Satellite image processing, including orthorectification and pan-sharpening, was performed using the Catalyst software v. 2223.0.3 (by PCI Geomatics Enterprises) [30]. First, the fusion of the panchromatic and multispectral images was performed using the pan-sharpening algorithm, resulting in a multispectral image with a resolution of 0.31 m/pixel (Figure 3).
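As a purely conceptual illustration of what pan-sharpening does (Catalyst implements its own algorithm, which is not reproduced here), a minimal Brovey-style fusion can be sketched in Python, assuming the panchromatic and multispectral arrays have already been co-registered and resampled to the same grid:

```python
import numpy as np

def brovey_pansharpen(pan, ms):
    """Minimal Brovey-style pan-sharpening sketch (illustrative only).

    pan: 2D panchromatic array; ms: 3D array (bands, rows, cols), both assumed
    co-registered and resampled to the panchromatic grid beforehand.
    """
    pan = pan.astype("float64")
    ms = ms.astype("float64")
    intensity = ms.mean(axis=0)  # synthetic intensity from the multispectral bands
    ratio = np.divide(pan, intensity, out=np.zeros_like(pan), where=intensity > 0)
    return ms * ratio            # inject the panchromatic spatial detail into each band

# Random arrays standing in for the WV3 panchromatic and eight-band multispectral data
pan = np.random.rand(400, 400)
ms = np.random.rand(8, 400, 400)
print(brovey_pansharpen(pan, ms).shape)  # (8, 400, 400), at the panchromatic resolution
```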
The geometric correction and orthorectification of the image were performed using the following methods:
  • the positions of 20 Ground Control Points (GCPs) distributed across the scene were obtained through an NRTK (Network Real Time Kinematic) GNSS (Global Navigation Satellite System) survey. The GPSUmbria network [31] was used to determine the GCP positions using the Virtual Reference Station (VRS) mode (Figure 4);
  • a Digital Elevation Model (DEM) was derived from the regional vectorial cartography [32] (scale 1:5000) with a resolution of 2 m.
Six of the measured points were designated as Check Points (CPs) to properly estimate the planimetric accuracy of the orthorectification process.
The results of the orthorectification are reported in Table 1.
The data indicate that the altimetric residuals on the GCPs and CPs are lower than the planimetric ones, although the latter remain acceptable considering the geometric resolution of the WV3 images. This may be attributed to the fact that the points were accurately acquired using a dual-frequency GNSS receiver and corrected with a suitable geoid undulation model (UmbriaGeo). Furthermore, the effect of altitude on the orientation of the images is generally smaller than that of the planimetric coordinates, especially in areas with relatively low elevation changes, such as the one under study.
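As an aside on how CP residuals translate into an accuracy estimate such as the one reported in Table 1, the planimetric Root Mean Square Error can be computed as in the short sketch below; the residual values used here are placeholders, not those measured in this study.

```python
import numpy as np

# Placeholder East/North residuals (m) at six hypothetical Check Points
res_east = np.array([0.12, -0.20, 0.15, -0.18, 0.21, -0.14])
res_north = np.array([-0.10, 0.16, -0.13, 0.17, -0.12, 0.15])

rmse_east = np.sqrt(np.mean(res_east ** 2))    # East RMSE
rmse_north = np.sqrt(np.mean(res_north ** 2))  # North RMSE
rmse_2d = np.hypot(rmse_east, rmse_north)      # combined planimetric RMSE
print(f"RMSE East {rmse_east:.3f} m, North {rmse_north:.3f} m, 2D {rmse_2d:.3f} m")
```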

2.2. Study Area and Surveying Methods

This study was conducted in three specific areas of some olive orchards of a farm located in the area described above (42°46′15″ N, 12°46′41″ E).
The considered orchards contain four typical and widespread Italian cultivars: Frantoio, Leccino, Moraiolo, and Maurino [33]. The four cultivars differ in size, consistent with their typical characteristics: “Frantoio” and “Leccino” are generally larger, “Moraiolo” is smaller, and “Maurino” is even smaller. These olive trees are trained using the “vase system” typical of traditional rain-fed orchards, with 3 or 4 main branches growing from the trunk with an upward and outward inclination to facilitate a wide interception of light and optimize photosynthesis. This shape also allows for easy and efficient harvesting of the olives, which is carried out manually, with pneumatic or electric combs on poles, or by means of a trunk shaker machine.
The first area, hereafter referred to as Area 1 (Figure 5), contains almost 1000 olive trees and is divided into two portions; the portion towards the north-east was 14 years old at the time of the survey and featured a prevalence of the Leccino cultivar with alternating rows of “Frantoio” and “Maurino”; the portion towards the south-west was 12 years old at the time of the survey and was characterized by a prevalence of the Moraiolo cultivar and two rows of “Maurino”. Figure 6 shows the different cultivars in Area 1. The plants are distributed with a 6 × 6 m planting distance, and each row contains a single cultivar. In this area, a sample of about 600 olive trees was selected for the study.
Further analyses were conducted in two additional regions, Area 2 and Area 3 (Figure 5), encompassing a total of forty olive trees. These analyses were specifically designed to account for the varying impact of terrain slope on shadow patterns, which was crucial for the accurate measurement of tree heights.
In particular, Area 2 was characterized by flat soil, and a sample of 20 olive trees of the Leccino cultivar was selected for this study; these olive trees were 22 years old at the time of the survey, and the planting distance was 5 × 7 m.
In contrast, Area 3 featured a steep slope, and a sample of 20 olive trees belonging to an ancient varietal selection was selected, attributable mainly, but not exclusively, to the Moraiolo cultivar; these trees were 68 years old at the time of the survey, and the planting distance was 6 × 6 m.
The orchard in Area 1 was surveyed using UAV photogrammetry to obtain point cloud data serving as the ground truth reference. In particular, the survey was performed in July 2023, at the same time as the satellite image acquisition, with a DJI Phantom 4 RTK multispectral drone [34]. The flight plan was designed with the DJI GSPro app (Apple iPad Pro 11’, fourth generation, version 17.5), with a 75% overlap in both side and heading directions and a flight speed of 5 m/s. The flight altitude was set to 30.7 m, achieving a Ground Sample Distance (GSD) of 1.6 cm/pixel.
The DJI Phantom 4 RTK multispectral captures images with an imaging system that includes an RGB camera and a multispectral camera with five sensors that cover the following bands: blue (B), green (G), red (R), red edge (RE), and near-infrared (NIR). For precise positioning, the drone was used with a D-RTK2 high-precision GNSS mobile station [35], an RTK base station capable of centimeter-level accuracy, supporting GPS, BEIDOU, GLONASS, and Galileo satellite systems.
Regarding the additional analyses in Area 2 and Area 3, more detailed photogrammetric UAV surveys were available. For these, the UAV data were collected with a DJI Phantom 4 RTK photogrammetric drone [36] flown at an average altitude of 25 m, resulting in a ground sample distance of 0.68 cm. The flight was conducted with a 90% longitudinal overlap and 80% lateral overlap at a speed of 2 m/s. The drone’s positions during data acquisition were obtained via the GPSUmbria network of permanent stations.
The UAV survey data were processed using Agisoft Metashape Professional v.1.7.5 [37], obtaining the three point clouds shown in Figure 7.
The extracted UAV point clouds were utilized to validate the tree heights estimated by the proposed model, which is described in the following sections.
In addition to the validation conducted with the drone survey, for Areas 2 and 3, a Terrestrial Laser Scanning (TLS) survey was available for a small number of trees. The TLS point cloud, with a higher level of detail compared to the UAV survey (about 7 mm at a distance of 10 m), was used to validate the experimental results, accounting for the influence of terrain slope and aspect on height estimation. A comparison between UAV and TLS surveys was performed in a previous study [38], which analyzed the geometric features extracted from the respective point clouds.

2.3. Geometric Model for Tree Height Measurement

A specific geometric model was developed to measure tree heights by considering the length of the shadows, the terrain’s slope and aspect, and the Sun’s position, which were derived from the image’s metadata.
In particular, the Sun’s azimuth and elevation were considered (Figure 8) and defined as follows:
  • Sun’s azimuth (θS): the angle between the north direction and the Sun’s direction projected onto a horizontal plane (143.7° for this specific acquisition);
  • Sun’s elevation (φS): the angle between the horizontal plane and the direction of the Sun (62.2° for this specific acquisition).
Figure 8. Sun’s azimuth (θS) and Sun’s elevation (φS).
The terrain slope and aspect were extracted from the DEM using QGIS software v. 3.34.10 [39]. Additionally, the image was visualized in false color (Figure 9a) with QGIS to enhance the identification of the tree canopy, minimizing the influence of shadows and the surrounding terrain. The following was conducted for each tree investigated in the study area:
  • The shadow length was estimated directly on the orthophoto along the Sun’s azimuth direction by creating a shape file containing a segment with an inclination of 143.7° from the shadow’s endpoint to the corresponding point on the canopy (Figure 9b);
  • The segment length was measured using the “Add geometry attributes” tool;
  • The two extreme points of the segment were then extracted using the “Extract vertices” tool of the vector menu (Figure 9c);
  • The first point of the segment (ground point) was used to measure the terrain’s aspect and slope using the “Extract value by point” tool;
  • The second point of the segment (canopy point) was used to identify the elevation of the corresponding point on the canopy from the UAV point cloud, which was later used to validate the tree height measured by the model.
Figure 9. Study area in false color visualization (a); cyan segment represented the shadow length (b); “ground point” and “canopy point” (cyan dots) were extracted from the shadow segment (c).
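The per-tree steps above were carried out interactively in QGIS; as a hedged sketch of how the same measurements could be scripted, the snippet below assumes the digitized shadow segments have been exported to a hypothetical shadows.gpkg file and that slope and aspect rasters (slope.tif, aspect.tif) have already been derived from the DEM:

```python
import geopandas as gpd
import rasterio

# One line segment per tree, digitized from the shadow endpoint (ground) to the canopy point
shadows = gpd.read_file("shadows.gpkg")
shadows["d"] = shadows.geometry.length  # shadow length (m), as with "Add geometry attributes"

# Segment vertices, mirroring "Extract vertices": first = ground point, last = canopy point
shadows["ground_xy"] = shadows.geometry.apply(lambda g: g.coords[0])
shadows["canopy_xy"] = shadows.geometry.apply(lambda g: g.coords[-1])

# Sample the slope and aspect rasters at the ground point, mirroring "Extract value by point"
with rasterio.open("slope.tif") as slope_src, rasterio.open("aspect.tif") as aspect_src:
    ground_pts = list(shadows["ground_xy"])
    shadows["slope_deg"] = [v[0] for v in slope_src.sample(ground_pts)]
    shadows["aspect_deg"] = [v[0] for v in aspect_src.sample(ground_pts)]
```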
Finally, the following equation (Equation (1)) was used to estimate the tree height (H) (Figure 10):
H = h + Δh = d · tan(φS) + d · cos(β − θS) · tan(α)    (1)
where
  • d represents the shadow length measured along the Sun’s azimuth direction;
  • φS represents the Sun’s elevation angle;
  • θS represents the Sun’s azimuth angle;
  • α represents the terrain’s slope;
  • β represents the terrain’s aspect.
In particular,
h = d · tan(φS)
is the tree height without the terrain slope effect;
Δh = d · cos(β − θS) · tan(α)
is the difference between H (actual tree height) and h (Figure 11).
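For clarity, a minimal implementation of Equation (1) is sketched below; the Sun angles are those of this acquisition (elevation 62.2°, azimuth 143.7°), while the shadow length, slope, and aspect in the example call are illustrative values only:

```python
import math

def tree_height(d, sun_elev_deg=62.2, sun_az_deg=143.7, slope_deg=0.0, aspect_deg=0.0):
    """Tree height from shadow length via Equation (1):
    H = d*tan(phi_S) + d*cos(beta - theta_S)*tan(alpha)."""
    phi_s = math.radians(sun_elev_deg)    # Sun's elevation
    theta_s = math.radians(sun_az_deg)    # Sun's azimuth
    alpha = math.radians(slope_deg)       # terrain slope
    beta = math.radians(aspect_deg)       # terrain aspect
    h = d * math.tan(phi_s)                                   # height without the slope effect
    delta_h = d * math.cos(beta - theta_s) * math.tan(alpha)  # slope/aspect correction
    return h + delta_h

# Illustrative call: a 1.6 m shadow on ground sloping 8 degrees with a 160-degree aspect
print(round(tree_height(d=1.6, slope_deg=8.0, aspect_deg=160.0), 2))
```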
Figure 10. Geometric entities used for the tree height, as defined via Equation (1).
Figure 11. Impact of terrain slope on shadow patterns.

3. Results and Validation of Tree Height Estimation

A total of 612 trees were investigated within Area 1 by performing two different tests: Test 1a, which considers the length of the shadow of a specific point on the tree canopy, and Test 1b, which evaluates the maximum length of the shadow to extract the maximum elevation of the canopy (Figure 12).
Test 1a was the most time-consuming due to the difficulty in recognizing the specific corresponding points; two different “IDs” were assigned to the segment, “1” for the clearly identifiable shadow points (80.72% of the total trees) and “0” for points where the shadow was not clearly identifiable (19.28% of the total trees), in order to assess their respective contributions to the results during the subsequent validation process.
Test 1b was quicker and more approximate; in fact, it was conducted without identifying specific points on the canopy, but it allowed for the estimation of tree height values, which were useful for conducting faster analyses related to vegetative growth and conditions. Rather than assessing the height of a particular point on the canopy, the maximum height of an individual tree is likely a more meaningful parameter for subsequent vegetative analyses, such as estimating biomass, monitoring canopy development, or assessing growth trends.
For the additional forty trees investigated with a more detailed UAV survey, Tests 2 and 3 were conducted to determine the tree height at a specific point of the canopy (along with Test 1a). Specifically, for Test 2, twenty trees were considered in Area 2 (flat area), while for Test 3, twenty trees were studied in Area 3 (steep terrain slope) to consider the effect of the terrain’s slope on the calculated tree heights. Increasing the sample size for Tests 2 and 3 could certainly further enhance the reliability of the results, but due to logistical constraints, the available data were restricted to these specific regions.
A comparison was made by calculating the difference between the tree heights obtained using Equation (1) and those measured from the UAV point cloud, considering the vertical distance from points on the canopy to those on the DEM. The statistical parameters obtained are shown in Table 2. A t-test was then performed to compare the mean tree heights obtained from the different methods. In particular, the null hypothesis that the difference between the mean tree heights obtained from Equation (1) and from the UAV measurements (for Test 1a, Test 1b, Test 2, and Test 3) is zero was tested at a 5% significance level.
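As a hedged sketch of this test (with placeholder height values in place of the real per-tree data), a pooled-variance two-sample t-test of the kind summarized in Appendix B can be run as follows:

```python
import numpy as np
from scipy import stats

# Placeholder per-tree heights (m): from the UAV point cloud and from Equation (1)
h_uav = np.array([2.10, 2.35, 1.95, 2.60, 2.20, 2.45])
h_eq1 = np.array([2.05, 2.40, 2.00, 2.55, 2.30, 2.40])

# Two-sample t-test with pooled variance, H0: equal mean heights (alpha = 0.05)
t_stat, p_two_tailed = stats.ttest_ind(h_uav, h_eq1, equal_var=True)
print(f"t = {t_stat:.3f}, two-tailed p = {p_two_tailed:.3f}")
```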
The graphs in Figure 13 compare the heights calculated using Equation (1) (x-axis) with those measured via UAV surveys (y-axis) for Test 1a and Test 1b. In both cases, there is a good agreement between the datasets, as indicated by the points being well distributed around the 1:1 line, which represents the ideal correspondence between calculated and measured values. The regression lines further confirm this agreement, with coefficients of determination (R2) of 0.85 and 0.77 for Test 1a and Test 1b, respectively. These values suggest a strong correlation, supporting the reliability of Equation (1) in estimating heights with good accuracy when compared to UAV-derived measurements. The results obtained for Tests 2 and 3 are provided in Appendix A (Figure A1).
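The agreement shown in Figure 13 can be summarized with an ordinary least-squares regression of UAV-measured heights against the calculated ones, as in the sketch below (again with placeholder values):

```python
import numpy as np
from scipy import stats

# Placeholder heights (m): calculated with Equation (1) (x) and measured via UAV (y)
h_eq1 = np.array([2.1, 2.4, 2.0, 2.8, 3.1, 2.6])
h_uav = np.array([2.0, 2.5, 2.1, 2.7, 3.0, 2.7])

fit = stats.linregress(h_eq1, h_uav)   # regression line through the scatter of Figure 13
r_squared = fit.rvalue ** 2            # coefficient of determination (R^2)
print(f"slope {fit.slope:.2f}, intercept {fit.intercept:.2f}, R^2 {r_squared:.2f}")
```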
The frequency distributions of elevation differences between the heights calculated using Equation (1) and those measured via UAV surveys are shown in Figure 14. It can be observed that, for most of the examined plants, the maximum difference between the two heights is on the order of 30 cm, with the mean value on the order of centimeters. The results obtained for Test 2 and Test 3 are provided in Appendix A (Figure A2).
The graph in Figure 15 shows the mean elevation differences between the heights calculated using Equation (1) and those measured via UAV for the four tests considered, along with the corresponding standard deviations. The results indicate that, on average, the calculated heights are in close agreement with the UAV measurements, with centimetric deviations and standard deviations of about 0.20 m.
A further validation of the tree height calculation obtained from Equation (1) was carried out for Areas 2 and 3 using the available high-resolution TLS survey over nine olive trees. The differences between the heights calculated using Equation (1) and those measured from the TLS point cloud were computed, obtaining a mean error of −0.079 m, a standard deviation of 0.166 m, and a mean absolute error of 0.147 m.

4. Discussion

The primary aim of this experiment was to evaluate whether and with what level of accuracy olive tree heights can be estimated from high-resolution and very-high-resolution monoscopic satellite images, i.e., without employing stereoscopy. The reference data in this case consisted of the heights of the same olive trees measured using three-dimensional models derived from photogrammetric surveys conducted via drone, almost concurrently with the satellite image acquisition. The three-dimensional drone data serve as a reliable reference due to their higher accuracy, provided that the following points are clarified:
  • The satellite image was orthorectified using a photogrammetrically rigorous model [40], 20 ground control points (GCPs) measured with GNSS RTK receivers, and a digital elevation model (DEM) derived from 1:5000 scale vector cartography. The estimated planimetric accuracy of this orthorectification, based on independent check points (CPs), is approximately 20 cm. It should be noted that such planimetric accuracy cannot be achieved with raw satellite images or without the use of GCPs acquired via differential GPS/GNSS;
  • The two drones used are equipped with GNSS/GPS RTK receivers, enabling a planimetric accuracy of a few centimeters and an altimetric accuracy of about one decimeter. Most drones currently in use do not employ RTK technology but rely solely on autonomous GNSS positioning, with accuracies of approximately 10 m planimetrically and 15 m in height. The accuracy of the drone-derived models was nonetheless further validated through comparisons with additional GCPs;
  • The coordinates of the two surveys are compatible and comparable, as both are referenced to the same realization of the WGS84 datum (EPSG:6708 RDN2008/UTM zone 33N (N-E)) materialized by the same Continuously Operating Reference Station Network (CORSN) managed by the Umbria region [31];
  • The respective planimetric accuracies of the two methods, with expected maximum deviations of a few decimeters, ensure the consistency of the comparative test, as they prevent the possibility of confusing one olive tree with another within the same row. The use of differential GNSS positioning technologies (here employed in RTK mode) is crucial for avoiding such misidentifications.
After verifying that the planimetric and altimetric accuracies in the current configuration were sufficient to render the comparison with the drone data meaningful, subsequent tests were conducted to separately assess specific factors that could potentially reduce the accuracy and significance of the proposed methodology’s results.
The three primary challenges identified during the experimentation were as follows:
  • The difference between measurements at the tree’s apex and those at the most easily recognizable point on its shadow;
  • The influence of the recognizability of the measured points on the shadow and the corresponding olive tree canopy image;
  • The impact of terrain slope, which, for simplicity, is considered constant over the length of the shadow. For this purpose, an average value derived from the reference DEM is used.

4.1. Statistical Results

The analysis of the test results revealed the general statistical reliability of the proposed methodology. Although the sample size was not extensive, it was deemed sufficiently representative to demonstrate the method’s effectiveness in real-world applications.
From the results presented in Table 2, a preliminary observation is that there should be no systematic bias between the two datasets (heights calculated from the drone 3D model and the proposed methodology). This outcome was not guaranteed, despite referencing the same CORSN, due to potential decimetric systematic discrepancies observed in similar past experiments [41]. The discrepancies in mean errors, with maximum values of a few centimeters, confirm the proper planimetric and altimetric alignment of the two datasets.
Further analysis of Table 2 shows that, overall, the results are highly satisfactory, with standard deviations (SDs) between 10 and 20 cm. Considering the 30 cm resolution of the satellite images, these results exceed our expectations in certain respects. A key factor contributing to these outcomes was the precise absolute and relative planimetric and altimetric alignment of the two datasets (satellite orthophoto and UAV point cloud).
In Test 1a, it was observed that the accuracy of height measurements at points with higher identification difficulty (“0” class) was slightly better than at points with greater collimation certainty (“1” class). However, the difference (in terms of SD) is less than 3 cm, suggesting that the difficulty of identifying shadow points has little to no significant impact on the results.
In Test 1b, which involves the height calculated from the tree canopy apex to the corresponding point on the shadow for the same set of trees, the results appeared slightly less accurate than those of Test 1a. This is because, in Test 1b, the height is not calculated from a clearly identifiable point on the canopy but rather from the point that generates the longest shadow. Additionally, we are assuming that the point generating the longest shadow is located at the outermost edge of the canopy, which is true for most olive trees given their canopy structure, but this may not always be the case. However, the differences between the results of Test 1a and Test 1b amount to only a few centimeters, rendering them relatively insignificant given the satellite image resolution. Moreover, Test 1b might be more suitable for applications where a rapid and approximate estimation of tree heights is required.
Tests 2 and 3 produced even more accurate results, approximately twice as accurate as the previous tests. However, it should be noted that the dataset for these tests was smaller and therefore not directly comparable to the previous ones, but only relatively comparable among themselves. The similarity of errors obtained in Test 2 (flat terrain) and Test 3 (sloped terrain) can be attributed to the inclusion of a term in the height calculation formula (Equation (1)) that properly accounts for both the slope and aspect of the terrain. Test 3 was specifically designed to assess the impact of these factors, and the results suggest that this adjustment, considering the image resolution, effectively modeled potential distortions in sloped areas.
The t-test indicated that the difference between the tree heights obtained from Equation (1) and those from UAV surveys (Test 1a, 1b, 2, and 3) was not statistically significant at the 5% significance level. Detailed t-test outcomes are provided in Appendix B (Table A1, Table A2, Table A3 and Table A4).
Concerning the olive cultivars surveyed, there is a convergence with what was described in the section “Study area and surveying methods” about the normal size of the different cultivars. In fact, the average heights obtained in Test 1b were as follows: “Frantoio”: 3.04 m, “Leccino”: 3.20 m, “Maurino”: 2.57 m, and “Moraiolo”: 2.63 m, basically confirming what was reported above.

4.2. Cost–Benefit Analysis

For large areas, in comparison to ground-based or drone surveys, the monoscopic satellite-based method is more cost-effective, as it allows for the analysis of extensive areas with less time and resource investment. In fact, almost all operations can be performed remotely. High- to medium-resolution DEMs are often available for free from public agencies, either through cartographic data or, in some cases, from LiDAR flights. Only some GCPs need to be acquired on-site; for an RPC (Rational Polynomial Coefficient) model, fewer than 10 GCPs are required, which, depending on the extension of the study area, can be collected with a few hours of work. Ground-based methods require surveying each individual tree, which, in areas spanning several hectares, may take days of work. However, for very small areas, the proposed method may not be as cost-efficient, as the setup and processing time could become excessive relative to the area covered. Furthermore, ground-based laser surveys are even more time-consuming than drone surveys, as they require more extensive on-site data collection.
In any case, both time and economic costs remain relatively contained when using satellite imagery: an archived image costs approximately EUR 1000, fewer than 10 GCPs need to be acquired with differential GPS, and the remaining processing (georectification, then measurements on individual trees) is performed remotely. When using a drone (approximately EUR 5000 in initial costs for the drone, insurance, and pilot), it is still necessary to survey dozens of GCPs. However, it is important to emphasize that to acquire the approximately 3600 km2 covered by a single satellite image, a drone would theoretically require 360 flight hours, which, of course, cannot be flown consecutively. Additionally, battery recharge time limits the number of flights per day, meaning that acquiring the same area covered by a WorldView-3 image using a drone could take several weeks of field survey work.
Thus, the choice of method will depend on the specific size and nature of the area to be surveyed.

4.3. Limitations of the Proposed Methodology

Some limitations of the proposed method need to be highlighted. First of all, difficulties may arise in identifying the exact base of the tree and the endpoint of the shadow, especially where shadows may blend with other ground elements. Therefore, this method is most effective in areas where there is sufficient distance between trees to ensure that shadows are fully visible on the ground without overlapping other elements (a condition often met in modern orchards, for example, to facilitate mechanized operations). Additionally, the identification of segment vertices representing shadow length in the image, which follows a predetermined orientation (dependent on the Sun’s azimuth), may be influenced by the operator. The irregular shape of olive tree canopies can cause varied shadow projections on the ground, introducing a degree of subjectivity into tree height calculation. However, the results remain encouraging, as similar limitations exist when using traditional photogrammetry (e.g., from stereo pairs). Regarding Test 1b, we assume that the longest shadow corresponds to the maximum height of the trees. This assumption may affect the height estimation because, based on the canopy shape, it is not certain that the longest shadow corresponds to the highest point. Nonetheless, the shape of the olive tree canopy can be considered appropriate in this case, as the maximum height of the canopy is typically found at the outer perimeter.
In any case, this methodology can be considered a rapid assessment tool; drone or ground-based measurements could achieve slightly better accuracy, but they would be significantly more time-consuming for larger areas.

5. Conclusions and Future Prospects

The analysis of the results comparing olive tree heights calculated using the proposed methodology with those obtained from the drone 3D model indicates that the differences are within the decimetric range, aligning with expectations given the satellite image resolution. The primary advantage of this method over drone usage lies in its greater efficiency over large areas, as drones are typically limited to a few hectares per flight due to battery constraints. Therefore, for areas larger than a few hectares, the method proposed here is faster and more cost-effective, especially when using archived satellite imagery. Furthermore, as is well known, drone flights are subject to regulatory constraints regarding operator permits and specific areas.
The proposed methodology is therefore preferable for large-scale applications and can also be advantageous in cases where high-resolution monoscopic satellite images are already available. It is worth emphasizing that the achieved accuracy was made possible by the careful orthorectification of the satellite image, ensuring full utilization of its intrinsic accuracy.
Summarizing the results of the standard deviation analysis from the comparisons between the two datasets (drone heights and shadow-based heights), three factors were tested for their influence on the final results:
  • The ease or difficulty of identifying specific points on the canopy and shadow;
  • The choice of measuring from the canopy’s apex or the most recognizable point;
  • The effect of terrain morphology by comparing flat and sloped areas.
In all three cases, these parameters appear to have no significant influence on the final results, which are highly accurate overall.
To streamline and expedite operations, the development of a specific tool, such as a plugin for an open GIS platform like QGIS, is under consideration.
The experimentation conducted so far has focused exclusively on olive trees, which have a distinct canopy structure. Trees with different canopy shapes may behave differently, and the proposed method might show variations in accuracy for denser trees or asymmetrical canopies. To address this, further experimentation is needed, involving other tree species with different morphologies than those of olive trees, to more thoroughly evaluate the method’s general reliability.
Furthermore, additional tests will be performed to account for the multispectral characteristics of WV3 images in the classification of different land cover types, a methodology that could enhance the automatic recognition of shadows.
The proposed methodology offers an innovative approach for measuring tree heights, providing a significant advantage in speed and cost-effectiveness compared to traditional techniques, such as UAV surveys, over large areas. It also benefits from the widespread availability of archived VHR monoscopic satellite images and their metadata, which are often more affordable than new acquisitions. Furthermore, this approach is particularly well suited for remote or inaccessible regions, including developing countries, where traditional methods may be limited.

Author Contributions

Conceptualization, V.B. and R.B.; methodology, R.B., V.B. and L.M.; software, R.B., L.M. and V.B.; validation, R.B., V.B., L.M. and A.V.; formal analysis, R.B., V.B., L.M. and A.V.; resources, R.B., V.B., F.R., L.M., A.V., P.P., L.R. and R.C.; data curation, R.B. and V.B.; writing—original draft preparation, R.B., V.B. and L.M.; writing—review and editing, R.B., V.B., F.R., L.M., A.V., P.P., L.R. and R.C.; visualization, R.B., V.B., F.R., L.M. and A.V.; supervision, V.B.; project administration, V.B.; funding acquisition, P.P., L.R., R.C. and A.V. All authors have read and agreed to the published version of the manuscript.

Funding

This research was financially supported by the project “PRECISOLIVO” under the 2014–2022 Rural Development Programme of the Umbria Region, Italy.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The original contributions presented in the study are included in the article; further inquiries can be directed to the corresponding author.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

Figure A1. Comparison between heights calculated using Equation (1) and UAV-measured heights for Test 2 (on the left) and Test 3 (on the right). The dashed red line represents the 1:1 correspondence.
Figure A2. Frequency distributions of elevation differences between heights calculated using Equation (1) and those measured via UAV surveys for Test 2 (on the left) and Test 3 (on the right).

Appendix B

Table A1. t-test results for tree height comparisons, including means, variances, degrees of freedom, t-statistics, and p-values for both one-tailed and two-tailed tests (Test 1a).

Test 1a | H_UAV | H_Eq1
Mean | 2.106232026 | 2.11813773
Variance | 0.206459206 | 0.184138669
Observations | 612 | 612
Total Variance | 0.195298938
Hypothesized Difference for Means | 0
Degrees of Freedom | 1222
t Statistic | −0.471265718
P(T ≤ t) One-Tailed | 0.318767557
t Critical One-Tailed | 1.646101525
P(T ≤ t) Two-Tailed | 0.637535113
t Critical Two-Tailed | 1.961907178
Table A2. t-test results for tree height comparisons, including means, variances, degrees of freedom, t-statistics, and p-values for both one-tailed and two-tailed tests (Test 1b).

Test 1b | H_UAV | H_Eq1
Mean | 2.729218636 | 2.704842175
Variance | 0.218173587 | 0.21410717
Observations | 626 | 626
Total Variance | 0.216140378
Hypothesized Difference for Means | 0
Degrees of Freedom | 1250
t Statistic | 0.927630189
P(T ≤ t) One-Tailed | 0.176889254
t Critical One-Tailed | 1.646073552
P(T ≤ t) Two-Tailed | 0.353778508
t Critical Two-Tailed | 1.961863609
Table A3. t-test results for tree height comparisons, including means, variances, degrees of freedom, t-statistics, and p-values for both one-tailed and two-tailed tests (Test 2).

Test 2 | H_UAV | H_Eq1
Mean | 3.3134 | 3.351447175
Variance | 0.237838568 | 0.235708219
Observations | 20 | 20
Total Variance | 0.236773394
Hypothesized Difference for Means | 0
Degrees of Freedom | 38
t Statistic | −0.247261197
P(T ≤ t) One-Tailed | 0.403018587
t Critical One-Tailed | 1.68595446
P(T ≤ t) Two-Tailed | 0.806037174
t Critical Two-Tailed | 2.024394164
Table A4. t-test results for tree height comparisons, including means, variances, degrees of freedom, t-statistics, and p-values for both one-tailed and two-tailed tests (Test 3).

Test 3 | H_UAV | H_Eq1
Mean | 3.499627756 | 3.4565
Variance | 0.411250197 | 0.387079737
Observations | 20 | 20
Total Variance | 0.399164967
Hypothesized Difference for Means | 0
Degrees of Freedom | 38
t Statistic | 0.215864214
P(T ≤ t) One-Tailed | 0.415124071
t Critical One-Tailed | 1.68595446
P(T ≤ t) Two-Tailed | 0.830248143
t Critical Two-Tailed | 2.024394164

References

  1. Parra, L. Remote Sensing and GIS in Environmental Monitoring. Appl. Sci. 2022, 12, 8045. [Google Scholar] [CrossRef]
  2. Sahu, B.; Chatterjee, S.; Mukherjee, S.; Sharma, C. Tools of precision agriculture: A review. Int. J. Chem. Stud. 2019, 7, 2692–2697. [Google Scholar]
  3. Manfré, L.A.; Hirata, E.; Silva, J.B.; Shinohara, E.J.; Giannotti, M.A.; Larocca, A.P.C.; Quintanilha, J.A. An Analysis of Geospatial Technologies for Risk and Natural Disaster Management. ISPRS Int. J. Geo-Inf. 2012, 1, 166–185. [Google Scholar] [CrossRef]
  4. Masser, I. Managing our urban future: The role of remote sensing and geographic information systems. Habitat Int. 2001, 25, 503–512. [Google Scholar] [CrossRef]
  5. Mundia, C.N.; Aniya, M. Analysis of land use/cover changes and urban expansion of Nairobi city using remote sensing and GIS. Int. J. Remote Sens. 2005, 26, 2831–2849. [Google Scholar] [CrossRef]
  6. Jiang, S.; Jiang, C.; Jiang, W. Efficient structure from motion for large-scale UAV images: A review and a comparison of SfM tools. ISPRS J. Photogramm. Remote Sens. 2020, 167, 230–251. [Google Scholar] [CrossRef]
  7. Liu, C.-J.; Krylov, V.A.; Kane, P.; Kavanagh, G.; Dahyot, R. IM2ELEVATION: Building Height Estimation from Single-View Aerial Imagery. Remote Sens. 2020, 12, 2719. [Google Scholar] [CrossRef]
  8. Xu, W.; Feng, Z.; Wan, Q.; Xie, Y.; Feng, D.; Zhu, J.; Liu, Y. Building Height Extraction From High-Resolution Single-View Remote Sensing Images Using Shadow and Side Information. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2024, 17, 6514–6528. [Google Scholar] [CrossRef]
  9. Gavankar, N.L.; Rathod, R.R.; Waghmare, V.N. Estimation of Building Height Using High-Resolution Satellite Imagery. In Proceedings of the 10th International Conference on Geographical Information Systems Theory, Applications and Management (GISTAM 2024), Angers, France, 2–4 May 2024; pp. 83–93. [Google Scholar]
  10. Izadi, M.H.; Saeedi, P. Three-Dimensional Polygonal Building Model Estimation From Single Satellite Images. IEEE Trans. Geosci. Remote Sens. 2012, 50, 2254–2272. [Google Scholar] [CrossRef]
  11. Liasis, G.; Stavrou, S. Satellite Image Analysis for Shadow Detection and Building Height Estimation. ISPRS J. Photogramm. Remote Sens. 2016, 119, 437–450. [Google Scholar] [CrossRef]
  12. Xie, Y.; Feng, D.; Xiong, S.; Zhu, J.; Liu, Y. Multi-Scene Building Height Estimation Method Based on Shadow in High Resolution Imagery. Remote Sens. 2021, 13, 2862. [Google Scholar] [CrossRef]
  13. Rada Giacaman, C.A. High-Precision Measurement of Height Differences from Shadows in Non-Stereo Imagery: New Methodology and Accuracy Assessment. Remote Sens. 2022, 14, 1702. [Google Scholar] [CrossRef]
  14. Shettigara, V.K.; Sumerling, G.M. Height Determination of Extended Objects Using Shadows in SPOT Images. Photogramm. Eng. Remote Sens. 1998, 64, 35–44. [Google Scholar]
  15. Verma, N.K. Estimating Trunk Diameter at Breast Height for Scattered Eucalyptus Trees: A Comparison of Remote Sensing Systems and Analysis Techniques. Ph.D. Thesis, University of New England, Armidale, Australia, 2015. [Google Scholar]
  16. WorldView3. Available online: https://earth.esa.int/eogateway/missions/worldview-3 (accessed on 10 December 2024).
  17. Lelong, C.C.D.; Tshingomba, U.K.; Soti, V. Assessing Worldview-3 Multispectral Imaging Abilities to Map the Tree Diversity in Semi-Arid Parklands. Int. J. Appl. Earth Obs. Geoinf. 2020, 93, 102211. [Google Scholar] [CrossRef]
  18. Liu, X.; Frey, J.; Denter, M.; Zielewska-Büttner, K.; Still, N.; Koch, B. Mapping Standing Dead Trees in Temperate Montane Forests Using a Pixel- and Object-Based Image Fusion Method and Stereo WorldView-3 Imagery. Ecol. Indic. 2021, 133, 108438. [Google Scholar] [CrossRef]
  19. Tong, F.; Tong, H.; Mishra, R.; Zhang, Y. Delineation of Individual Tree Crowns Using High Spatial Resolution Multispectral WorldView-3 Satellite Imagery. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2021, 14, 7751–7776. [Google Scholar] [CrossRef]
  20. Vermote, E.F.; Skakun, S.; Becker-Reshef, I.; Saito, K. Remote Sensing of Coconut Trees in Tonga Using Very High Spatial Resolution WorldView-3 Data. Remote Sens. 2020, 12, 3113. [Google Scholar] [CrossRef]
  21. Ferreira, M.P.; Lotte, R.G.; D’Elia, F.V.; Stamatopoulos, C.; Kim, D.-H.; Benjamin, A.D. Accurate Mapping of Brazil Nut Trees (Bertholletia excelsa) in Amazonian Forests Using WorldView-3 Satellite Images and Convolutional Neural Networks. Ecol. Inform. 2021, 63, 101302. [Google Scholar] [CrossRef]
  22. Rahman, M.M.; Robson, A.; Bristow, M. Exploring the Potential of High Resolution WorldView-3 Imagery for Estimating Yield of Mango. Remote Sens. 2018, 10, 1866. [Google Scholar] [CrossRef]
  23. Li, D.; Ke, Y.; Gong, H.; Li, X. Object-Based Urban Tree Species Classification Using Bi-Temporal WorldView-2 and WorldView-3 Images. Remote Sens. 2015, 7, 16917–16937. [Google Scholar] [CrossRef]
  24. Johansen, K.; Duan, Q.; Tu, Y.-H.; Searle, C.; Wu, D.; Phinn, S.; Robson, A.; McCabe, M.F. Mapping the Condition of Macadamia Tree Crops Using Multi-Spectral UAV and WorldView-3 Imagery. ISPRS J. Photogramm. Remote Sens. 2020, 165, 28–40. [Google Scholar] [CrossRef]
  25. Ferreira, M.P.; Wagner, F.H.; Aragão, L.E.; Shimabukuro, Y.E.; de Souza Filho, C.R. Tree species classification in tropical forests using visible to shortwave infrared WorldView-3 images and texture analysis. ISPRS J. Photogramm. Remote Sens. 2019, 149, 119–131. [Google Scholar] [CrossRef]
  26. Wang, T.; Zhang, H.; Lin, H.; Fang, C. Textural–Spectral Feature-Based Species Classification of Mangroves in Mai Po Nature Reserve from Worldview-3 Imagery. Remote Sens. 2016, 8, 24. [Google Scholar] [CrossRef]
  27. Solano, F.; Di Fazio, S.; Modica, G. A Methodology Based on GEOBIA and WorldView-3 Imagery to Derive Vegetation Indices at Tree Crown Detail in Olive Orchards. Int. J. Appl. Earth Obs. Geoinf. 2019, 83, 101912. [Google Scholar] [CrossRef]
  28. Hively, W.D.; Lamb, B.T.; Daughtry, C.S.T.; Shermeyer, J.; McCarty, G.W.; Quemada, M. Mapping Crop Residue and Tillage Intensity Using WorldView-3 Satellite Shortwave Infrared Residue Indices. Remote Sens. 2018, 10, 1657. [Google Scholar] [CrossRef]
  29. Ab Majid, I.; Abd Latif, Z.; Adnan, N.A. Tree species classification using worldview-3 data. In Proceedings of the 2016 7th IEEE Control and System Graduate Research Colloquium (ICSGRC), Shah Alam, Malaysia, 8 August 2016; pp. 73–76. [Google Scholar]
  30. Catalyst Earth. Available online: https://catalyst.earth/ (accessed on 10 December 2024).
  31. Radicioni, F.; Stoppini, A.; Tosi, G.; Marconi, L. Multi-constellation Network RTK for Automatic Guidance in Precision Agriculture. In Proceedings of the 2022 IEEE Workshop on Metrology for Agriculture and Forestry (MetroAgriFor), Perugia, Italy, 3–5 November 2022; pp. 260–265. [Google Scholar] [CrossRef]
  32. WebGIS CTRonWeb 3.0. Available online: https://siat.regione.umbria.it/webgisctr/ (accessed on 10 December 2024).
  33. Tombesi, A.; Proietti, P.; Iacovelli, G.; Tombesi, S.; Farinelli, D. Vegetative and productive behaviour of four olive italian cultivars and ‘arbequina’ according to super intensive olive training system in central Italy. In Proceedings of the XXVIII International Horticultural Congress on Science and Horticulture for People (IHC2010): Olive Trends Symposium—From the Olive Tree to Olive Oil: New Trends and Future Challenges, Lisbon, Portugal, 22–27 August 2011. [Google Scholar]
  34. dji P4 Multispectral. Available online: https://www.dji.com/it/p4-multispectral/specs (accessed on 10 December 2024).
  35. dji D-RTK 2. Available online: https://www.dji.com/it/d-rtk-2/info#specs (accessed on 10 December 2024).
  36. dji PHANTOM 4 RTK. Available online: https://www.dji.com/it/phantom-4-rtk/info#specs (accessed on 10 December 2024).
  37. Agisoft. Available online: https://www.agisoft.com/features/professional-edition/ (accessed on 10 December 2024).
  38. Brigante, R.; Calisti, R.; Marconi, L.; Proietti, P.; Radicioni, F.; Vinci, A. GNSS NRTK-UAV photogrammetry and LiDAR point clouds for geometric features extraction of olive orchard. In Proceedings of the 2024 IEEE International Workshop on Metrology for Agriculture and Forestry (MetroAgriFor), Padova, Italy, 29–31 October 2024. [Google Scholar]
  39. QGis. Available online: https://www.qgis.org/ (accessed on 10 December 2024).
  40. Baiocchi, V.; Giannone, F.; Monti, F. How to Orient and Orthorectify PRISMA Images and Related Issues. Remote Sens. 2022, 14, 1991. [Google Scholar] [CrossRef]
  41. Baiocchi, V.; Fortunato, S.; Giannone, F.; Marzaioli, V.; Monti, F.; Onori, R.; Ruzzi, L.; Vatore, F. LiDAR RTK Unmanned Aerial Vehicles for security purposes. Geogr. Tech. 2023, 19, 34–42. [Google Scholar] [CrossRef]
Figure 1. Different shapes of canopy trees: olive (on the left) and fir (on the right) trees.
Figure 2. Location of the study area (on the left) and the acquired WorldView-3 satellite image (on the right).
Figure 3. Panchromatic (a), multispectral (b), and pan-sharpened (c) images.
Figure 4. GCP distributions: the red points and the corresponding arrows indicate some examples of the measured GCPs.
Figure 5. Study areas.
Figure 6. Cultivars distribution in Area 1.
Figure 7. Area 1 UAV point cloud obtained using DJI Phantom 4 multispectral RTK (a); Area 2 (b) and Area 3 (c) point clouds obtained using DJI Phantom 4 photogrammetric RTK.
Figure 12. Different shadow lengths for Test 1a and Test 1b.
Figure 13. Comparison between heights calculated using Equation (1) and UAV-measured heights for Test 1a (on the left) and Test 1b (on the right). The dashed red line represents the 1:1 correspondence.
Figure 14. Frequency distributions of differences between heights calculated using Equation (1) and those measured via UAV surveys for Test 1a (on the left) and Test 1b (on the right).
Figure 15. Box plot of the differences between heights calculated using Equation (1) and those measured via UAV for Tests 1a, 1b, 2, and 3.
Table 1. Residual errors of the orthorectification.

Type of Point | East (m) | North (m) | Elevation (m) | Image X (px) | Image Y (px)
GCPs | 0.099 | 0.103 | 0.025 | 0.33 | 0.36
CPs | 0.168 | 0.145 | 0.035 | 0.56 | 0.51
Table 2. The mean value, standard deviation, and absolute value of the mean difference between the heights calculated using Equation (1) and those measured via UAV point cloud.

Test | Number of Trees | Mean Error (m) | Standard Deviation (m) | Mean Absolute Error (m)
1a (Total) | 612 | 0.015 | 0.190 | 0.140
1a (“1” class) | 494 (80.7%) | 0.014 | 0.195 | 0.144
1a (“0” class) | 118 (19.3%) | 0.021 | 0.169 | 0.127
1b (Total) | 612 | −0.024 | 0.288 | 0.175
2 (Total) | 20 | 0.043 | 0.135 | 0.115
3 (Total) | 20 | 0.038 | 0.121 | 0.101
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

