Article

Improving the Accuracy of Forest Structure Analysis by Consumer-Grade UAV Photogrammetry Through an Innovative Approach to Mitigate Lens Distortion Effects

by Arvin Fakhri 1, Hooman Latifi 1,2,*, Kyumars Mohammadi Samani 3,4 and Fabian Ewald Fassnacht 5

1 Department of Photogrammetry and Remote Sensing, Faculty of Geodesy and Geomatics Engineering, K. N. Toosi University of Technology, Tehran 19967-15433, Iran
2 Department of Remote Sensing, Institute of Geography and Geology, University of Wuerzburg, 97074 Wuerzburg, Germany
3 Department of Forestry, Faculty of Natural Resources, University of Kurdistan, Sanandaj 66177-15175, Iran
4 Center for Research and Development of Northern Zagros Forestry, Baneh 66919-14919, Iran
5 Department of Remote Sensing and Geoinformation, Institute of Geographic Sciences, Freie Universität Berlin, Malteserstraße 74-100, 12249 Berlin, Germany
* Author to whom correspondence should be addressed.
Remote Sens. 2025, 17(3), 383; https://doi.org/10.3390/rs17030383
Submission received: 30 November 2024 / Revised: 15 January 2025 / Accepted: 22 January 2025 / Published: 23 January 2025

Abstract:
The generation of aerial and unmanned aerial vehicle (UAV)-based 3D point clouds in forests, and their subsequent structural analysis including tree delineation and modeling, pose multiple technical challenges that partly arise from the calibration of the non-metric cameras mounted on UAVs. We present a novel method to deal with this problem in forest structure analysis by photogrammetric 3D modeling, particularly in areas with complex textures and varying levels of tree canopy cover. Our proposed method selects various subsets of a camera’s interior orientation parameters (IOPs), generates a dense point cloud for each, and then synthesizes these models into a combined model. We hypothesize that this combined model can provide a better representation of tree structure than a model calibrated with an optimal subset of IOPs alone. The effectiveness of our methodology was evaluated on sites across a semi-arid forest ecosystem known for diverse crown structures and varied canopy density resulting from a traditional pruning method known as pollarding. The results demonstrate that the enhanced model outperformed the standard models by 23% and 37% in site- and tree-based metrics, respectively, and can therefore be suggested for further applications in forest structural analysis based on consumer-grade UAV data.

1. Introduction

Forests play a pivotal role in maintaining global biodiversity and climate regulation, providing a wide array of ecosystem services that are crucial for sustaining nature and the surrounding societies [1]. Structural forest attributes such as tree height, diameter, and canopy cover are critical indicators of forest health and productivity [2,3]. As such, accurately measuring the structure of forests, especially in fragile ecosystems, is critical for their preservation and sustainable management [4]. Traditionally, these attributes have been measured using conventional ground-based methods which, while accurate, are labor-intensive, time-consuming, and often limited in spatial coverage [5]. These limitations underscore the need for more advanced and efficient methods of forest structural analysis. Geospatial tools like remote sensing and photogrammetry have emerged as powerful means for the analysis of forest structures [6]. These are based on using sensors mounted on platforms such as satellites, aircraft, and unmanned aerial vehicles (UAVs) to collect high-resolution imaging data across space and time.
Among these, UAV photogrammetry has shown great promise for small-scale data acquisition and inventories due to its ability to capture high-resolution three-dimensional data at a relatively low cost [7,8,9]. UAV-based photogrammetry commonly involves the capture of overlapping aerial images, which are then processed into detailed 3D models through a process known as structure from motion (SfM). These models allow for extracting various structural attributes such as tree height [10], crown diameter [11], and crown volume [12]. However, the precise application of UAV photogrammetry in forest structural analysis is associated with challenges [13], which can be broadly categorized into two groups: (1) those arising from UAV flight planning [14] and (2) those related to forest texture and image processing [13].
While the first group of challenges can be mitigated by optimizing flight parameters such as flight height and speed, as well as by conducting additional flights if necessary [15], dealing with the second group is more complex. With their compound textures and varying levels of canopy cover, tree formations within forest stands can pose significant complications for image matching algorithms [16]. For instance, pruned trees with low canopy cover can cause problems in image matching, since matching involves identifying common features across overlapping images, which are then linked to create a 3D model [17]. This matching relies heavily on the presence of distinct, recognizable features in the images, and in the presence of low canopy cover such features are scarce across overlapping UAV images. This can confuse image matching algorithms, leading to errors and inaccuracies in the resulting 3D models.
Furthermore, camera lens distortion in non-metric cameras mounted on consumer-grade UAVs can exacerbate this problem due to typically higher lens distortion in such cameras [18]. Since the texture of tree objects cannot be altered, it is imperative to at least minimize the impact of lens distortion, which necessitates developing equations for modeling lens distortion. Previous studies have proposed models to tackle this [19], but selecting the best subset of calibration parameters requires careful consideration of the specific analytical requirements and constraints.
Accurate camera calibration is a critical step in UAV photogrammetry, as inaccuracies in interior orientation parameters (IOPs)—including focal length, principal point coordinates, and lens distortion coefficients—can significantly degrade the accuracy of derived 3D models. The challenge raised by the use of non-metric cameras is the instability of their IOPs, making the calibration process akin to solving an optimization problem that lacks a global solution. Consequently, residual errors persist even after calibration [20].
Inaccurate calibration can introduce systematic errors during the bundle adjustment process, where image observations are used to refine camera parameters and 3D point positions. These errors may result in geometric distortions, such as scale inconsistencies and spatial deformations, compromising the reliability of photogrammetric outputs [21].
However, the selection of appropriate calibration parameters is crucial, as different subsets can variably impact the quality of 3D reconstructions. For instance, neglecting to model certain lens distortions or inaccurately estimating focal length can lead to residual errors that propagate through the data processing workflow, affecting the precision of measurements [22].
Here, we propose a novel approach for addressing the challenges associated with lens distortions in non-metric cameras used for UAV-based tree reconstructions. Rather than directly calibrating the IOPs, our method explores various subsets of IOPs, generates a dense point cloud for each subset, and then synthesizes these individual models into a combined model. The primary goal of our research is to evaluate whether this combined model can improve the quality of the resulting dense point cloud as compared to a model calibrated using a single optimal subset of IOPs.
To evaluate the effectiveness of our methodology, we focused on forest stands within the semi-arid Zagros forests of Iran, known for their notably low crown density and the complex crown structure of single trees. These stands have continuously undergone extensive structural changes due to a traditional pruning method known as pollarding (see [23]), resulting in a considerably sparse canopy. Common UAV-based 3D modeling techniques often yield suboptimal representations of such challenging forest structures. The main objective of this study is to examine whether the fidelity of the photogrammetric point clouds for these trees can be significantly improved by applying our proposed method to address the challenges of camera calibration, thereby helping to provide a clearer and more accurate retrieval of their current structure. Because we leveraged optical UAV imagery, our method relies on the tree canopy; consequently, we evaluated its performance on the upper parts of the trees, given the limited visibility of under-canopy structures, i.e., tree trunks, in the acquired UAV data.

2. Theoretical Background

Lens distortion causes an image point to deviate from its theoretically correct location to its actual position [24]. Although it is among the most persistent lens aberrations, distortion does not degrade image quality, yet it significantly influences image geometry. This distortion is primarily composed of two components: radial distortion and decentering distortion [19]. Radial distortion originates from imperfections in the lens grinding process [25], whereas decentering distortion results from inaccuracies in the placement of individual lens elements within the camera cone, as well as other manufacturing defects [26].
The values for lens distortion are derived through camera calibration procedures in the bundle adjustment process. These values are typically presented in a tabular format or expressed as a polynomial. Radial distortion is represented in the form of a polynomial as follows [19]:
$$\delta_r = k_1 r^3 + k_2 r^5 + k_3 r^7 + \cdots$$
where $r = \sqrt{x^2 + y^2}$. The corrections to the Cartesian coordinate components x and y for the distortion effects are calculated by [19,27]
$$\delta_x = \frac{\delta_r}{r}\,x = \left(k_1 r^2 + k_2 r^4 + \cdots\right)x, \qquad \delta_y = \frac{\delta_r}{r}\,y = \left(k_1 r^2 + k_2 r^4 + \cdots\right)y.$$
The corrected image coordinates can then be computed as follows:
$$x_c = x - \delta_x = \left(1 - \frac{\delta_r}{r}\right)x = \left(1 - k_1 r^2 - k_2 r^4 - \cdots\right)x, \qquad y_c = y - \delta_y = \left(1 - k_1 r^2 - k_2 r^4 - \cdots\right)y.$$
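As a minimal illustration, the radial correction above can be sketched in Python (the scripting language used later in Section 3.3). The coefficient values are purely illustrative, not calibrated values from any real camera:

```python
def correct_radial(x, y, k):
    """Correct normalized image coordinates (x, y) for radial distortion
    with coefficients k = [k1, k2, ...].

    Implements x_c = (1 - k1*r^2 - k2*r^4 - ...) * x, and likewise for y.
    """
    r2 = x * x + y * y
    # delta_r / r = k1*r^2 + k2*r^4 + ...
    dr_over_r = sum(ki * r2 ** (i + 1) for i, ki in enumerate(k))
    return x * (1.0 - dr_over_r), y * (1.0 - dr_over_r)

# Illustrative coefficients; the correction vanishes at the principal point
xc, yc = correct_radial(0.4, 0.3, [2.5e-2, -1.0e-3])
```

Note that the correction shrinks with decreasing radial distance and is exactly zero at the principal point, reflecting the rotational symmetry of radial distortion.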
Decentering lens distortion is asymmetric about the principal point. Along one radial direction the tangential component vanishes and radial lines remain straight; this direction is referred to as the axis of zero tangential distortion. The correction developed for lens distortion due to decentering is expressed as [19,27]
$$\delta_x = \left(J_1 r^2 + J_2 r^4 + \cdots\right)\sin\varphi_0, \qquad \delta_y = \left(J_1 r^2 + J_2 r^4 + \cdots\right)\cos\varphi_0.$$
Here, $J_1$ and $J_2$ are the coefficients of the profile function of the decentering distortion, while $\varphi_0$ is the angle subtended by the axis of maximum tangential distortion with the photo x-axis. This was termed the thin prism model [27]. This model was found to be insufficient to fully account for the effects of decentering distortion. As a result, the Conrady–Brown model was developed to calculate the effects of decentering on the x and y coordinates [19]:
$$\delta_x = \left(J_1 r^2 + J_2 r^4 + \cdots\right)\left[\left(1 + \frac{2x^2}{r^2}\right)\sin\varphi_0 - \frac{2xy}{r^2}\cos\varphi_0\right], \qquad \delta_y = \left(J_1 r^2 + J_2 r^4 + \cdots\right)\left[\frac{2xy}{r^2}\sin\varphi_0 - \left(1 + \frac{2y^2}{r^2}\right)\cos\varphi_0\right].$$
A revised Conrady–Brown model introduced further refinements to the computation of decentering distortion [19]. This model is expressed as
$$\delta_x = \left[P_1\left(r^2 + 2x^2\right) + 2P_2\,xy\right]\left[1 + P_3 r^2 + \cdots\right], \qquad \delta_y = \left[P_2\left(r^2 + 2y^2\right) + 2P_1\,xy\right]\left[1 + P_3 r^2 + \cdots\right].$$
Parameters $k_1, k_2, \ldots$ and $P_1, P_2, \ldots$ must be estimated in order to correct lens distortion. The parameters $P_1$ and $P_2$ are the tangential distortion coefficients (for the x- and y-axes) of the lens distortion model. Tangential distortion arises from misalignment between the lens and the image sensor; it shifts image points along a tangential direction relative to the image center.
It is therefore crucial to understand which parameters are needed, as they vary based on the camera and the site conditions, which calls for testing parameters under each specific condition.
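Combining the radial terms with the revised Conrady–Brown decentering terms yields the Brown–Conrady model used by most SfM packages. The sketch below applies the forward model to ideal normalized coordinates; sign conventions differ between implementations, so this should be read as one common variant rather than the exact formulation of any specific software:

```python
def brown_distort(x, y, k1, k2, k3, p1, p2):
    """Apply a Brown-Conrady distortion model (radial k1..k3 plus
    tangential p1, p2) to ideal normalized image coordinates.
    Sign conventions vary between software packages.
    """
    r2 = x * x + y * y
    # Radial scaling factor: 1 + k1*r^2 + k2*r^4 + k3*r^6
    radial = 1.0 + k1 * r2 + k2 * r2**2 + k3 * r2**3
    # Tangential (decentering) terms of the revised Conrady-Brown model
    dx_tan = p1 * (r2 + 2 * x * x) + 2 * p2 * x * y
    dy_tan = p2 * (r2 + 2 * y * y) + 2 * p1 * x * y
    return x * radial + dx_tan, y * radial + dy_tan
```

Each IOP subset tested in Section 3.3 corresponds to fixing some of these coefficients to zero and estimating the remainder in the bundle adjustment.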
In the following, we tested several subsets of parameters, estimated the optimal value for each parameter, and calibrated the camera accordingly. This will allow for generating different models tailored to reconstruct different tree crown structures. We evaluated the performance of the combined model and compared it to the best subset of parameters, which helps us to understand the effectiveness of our approach.

3. Implementation

3.1. Study Area

We applied our method for addressing camera calibration challenges in forest sites located in the Zagros forests of Iran. Zagros accounts for ca. 20% of Iran’s vegetation and hosts three endemic oak species (Quercus brantii Lindl., Q. infectoria Olivier, and Q. libani Olivier) along with a number of other sub-dominant species. These forests are subdivided into three zones along the latitudinal gradient [28]. The northern region, where all three oak species coexist, has been heavily used by locals as sources of fuel and forage, leading to the widespread use of a traditional silvopastoral method known as pollarding [29]. Each family owns a forest portion and cuts leafy branches from oak trees for livestock feed in winter. As a result, forests undergo a severe manipulation of tree crowns, leading to sparse crowns with numerous gaps and occasionally crooked branches [23]. The severity of these changes varies over time, with trees having the smallest crown area in the first year of pollarding and gradually increasing their crown to regain its shape [23]. The sparse tree crowns pose challenges for 3D modeling using image-based methods [16], which was the reason we selected these forests as a challenging case for 3D modeling by UAV photogrammetry. Figure 1 shows the location of the study area and examples of variations in tree structures.

3.2. Materials

3.2.1. UAV Imagery

We utilized a DJI Phantom 4 Pro multi-rotor UAV as a consumer-grade product for aerial imaging. This device is equipped with a three-axis stabilization gimbal, a non-metric sensor camera, and an 8.8 mm/24 mm lens that provides an 84° field of view. The flight plans for various sites were tailored to the specific topographic conditions and tree cover of each site, using the iOS version of Pix4DCapture installed on an iPad tablet. These flights took place across all six sites from 14 to 16 June 2021. We maintained uniformity in flight planning across all sites, conducting a Nadir cross-flight at a height of 80 m for each site. We used a multi-frequency GNSS to precisely measure five ground control points (GCPs) located at the corners and center of each site to ensure the accurate georeferencing of the 3D models generated from the UAV imagery. While five GCPs are considered the bare minimum, this number was selected to reflect common practice in UAV imagery in dense stands where both access and visibility are constrained. We assessed the georeferencing accuracy using LOOPCV with GCPs and obtained an RMSE of 4.2 cm for check points (CPs) across all sites.

3.2.2. Reference Data

We quantified two primary structural tree attributes. Initially, we measured tree heights utilizing accurate field inventory methods, which are widely recognized for their precision in capturing tree attributes [30]. Subsequently, we generated 3D models of the trees by integrating close-range photogrammetry (CRP) and the iPhone’s LiDAR technology, adhering to the standards outlined in [16] for the assessment of tree crown attributes. The iPhone’s LiDAR has been widely acknowledged for its accuracy in tree measurements [31]. Here, we further enhanced the precision by combining point clouds from LiDAR and CRP, which helped generate a more reliable and accurate dataset. In total, we gathered a minimum of five tree height measurements and five 3D tree models for each site. After generating models for individual trees, we found that each tree was reconstructed with at least 1 million points, i.e., ca. 10 times more than the points within the UAV-based models. This higher point density makes these models a reliable reference against which to compare the UAV-based results.

3.3. Methods

We implemented our method by coding Python scripts, written within Agisoft Metashape 2.1.1, which incorporate the parametric lens distortion model developed by Brown [19,27]. We initially aligned the cameras and generated a sparse point cloud following the capture of UAV images. We then established four distinct parameter subsets of the lens distortion model as follows:
  • k1, k2, k3, P1, P2 (the default and most used subset of parameters [32]);
  • k1, k2, k3, k4, P1, P2;
  • k1, k2, k3, k4, b1, P1, P2;
  • k1, k2, k3, k4, P1, P2, b1, b2.
To circumvent a blind search among parameters, which would result in a time-consuming and complex process, we began with the default subset and then incrementally added terms to increase the complexity of the models. In addition to the aforementioned four subsets, we introduced additional distortion parameters as a fifth scenario by selecting “Fit additional corrections” (an option introduced by Metashape) that was added to case 4. Subsequently, we performed a refinement of the sparse point cloud that achieved the following:
  • Removed all points visible in two or fewer images;
  • Removed key points in such a way that the reprojection error was halved, followed by an optimization of the camera parameters;
  • Removed points in such a way that the reconstruction uncertainty was halved, followed by an optimization of the camera parameters;
  • Removed points in such a way that the projection accuracy was halved, followed by an optimization of the camera parameters;
  • Repeated these steps until the stopping condition was met.
Our stopping condition was set at a reprojection error of 0.5 px, a reconstruction uncertainty of 5 px, and a projection accuracy of 3 px, based on our prior knowledge and as suggested in [32,33]. Finally, we generated a dense point cloud for each scenario and merged all the obtained dense point clouds to form an enhanced point cloud. An overall workflow of the above-mentioned steps is shown in Figure 2.
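The refinement loop above can be summarized in a stand-alone sketch. The actual pipeline uses the Agisoft Metashape Python API (gradual selection filters plus camera optimization); here, tie points are represented as plain dictionaries and the optimization step is only indicated by a comment, so the control flow can be followed without the software:

```python
# Stopping thresholds from Section 3.3 (pixels)
TARGETS = {"reproj_error": 0.5, "recon_uncertainty": 5.0, "proj_accuracy": 3.0}

def halve_worst(points, key):
    """Remove points so the worst value of `key` is at most half its
    current maximum (mimicking gradual selection)."""
    threshold = max(p[key] for p in points) / 2.0
    return [p for p in points if p[key] <= threshold]

def refine(points, min_images=3, max_iter=20):
    # Step 1: drop tie points visible in two or fewer images
    points = [p for p in points if p["n_images"] >= min_images]
    for _ in range(max_iter):
        if not points or all(
            max(p[k] for p in points) <= t for k, t in TARGETS.items()
        ):
            break  # stopping condition met
        for key, target in TARGETS.items():
            if points and max(p[key] for p in points) > target:
                points = halve_worst(points, key)
                # ...re-optimize camera parameters here in a real pipeline...
    return points
```

In the real workflow, each filtering step is followed by a re-optimization of the camera parameters, so the error metrics are recomputed before the next criterion is applied.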

3.4. Evaluation

We used distortion plots as a visual tool to illustrate the distortion inherent in a camera lens and to deepen our understanding of the camera’s distortions. We generated a distortion plot displaying the residuals of the lens distortion model for each pixel. This was based on the assumption that each model targets a distinct aspect of the lens distortion, making such plots instrumental in realizing the potential benefits of integrating multiple models into a combined 3D model.
Furthermore, we compared the enhanced model with standard models to assess its effectiveness. Initially, we evaluated models derived from five different scenarios (as detailed in Section 3.3) for each site. The best model was selected per site, which was then contrasted with the model created using the suggested method. The evaluation metrics employed were based on both site- and tree-level assessments. An evaluation covering the entire site was not possible due to the limited availability of reference trees at each site. Thus, the site-based evaluation involved a comprehensive analysis of the entire site to quantify the overall performance of the model. Meanwhile, the tree-based evaluation focused on a selected number of trees, specifically examining two main structural attributes of tree height and tree crown. This dual approach allowed for a more complete evaluation given the constraints of the available reference data. We utilized the CaR3DMIC approach [16] to assess the quality over the entire site. The CaR3DMIC is our newly suggested method to evaluate 3D forest models solely by considering tree point clouds. It provides an accuracy ranging between 0 and 1, where 1 indicates a model that perfectly mirrors reality.
While this approach ensures a fair evaluation of trees in terms of their structural attributes, we supplemented our assessment with two additional evaluations. Firstly, we compared the UAV-based 3D point cloud with the reference point cloud, which is a combination of CRP and iPhone’s LiDAR, as described in Section 3.2.2. Secondly, we compared the field-measured tree height with the tree height extracted from the UAV-based model (also detailed in Section 3.2.2) by calculating the relative root mean square error (rRMSE). The rRMSE, which quantifies the difference between observed and estimated values relative to the observed mean, is calculated as
$$\mathrm{rRMSE} = \frac{\sqrt{\dfrac{1}{n}\sum_{i=1}^{n}\left(y_i - \hat{y}_i\right)^2}}{\bar{y}} \times 100\%$$
where $y_i$ represents the observed (reference) values, $\hat{y}_i$ the estimated values, $n$ the sample size, and $\bar{y}$ the mean of the reference values. Due to our small sample size, the rRMSE values here are interpreted as indicative rather than statistically conclusive.
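The rRMSE computation is straightforward to implement; the height values in the usage example below are purely illustrative, not measurements from the study:

```python
import math

def rrmse(observed, estimated):
    """Relative RMSE (%) of estimates against reference values."""
    n = len(observed)
    mse = sum((o - e) ** 2 for o, e in zip(observed, estimated)) / n
    return math.sqrt(mse) / (sum(observed) / n) * 100.0

# e.g., field-measured vs. UAV-derived tree heights (illustrative values)
error_pct = rrmse([10.0, 12.0, 8.0], [10.5, 11.5, 8.0])
```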

4. Results

Our approach was executed through the creation of five distinct camera calibration scenarios. Initially, we selected the best model based on the evaluation metrics detailed in Section 3.4. Following this, we compared the selected model with the enhanced model, which was a combination of all five dense point clouds. Subsequently, a comparative analysis was conducted amongst the residuals of the five models (Figure 3 shows Site 5 in detail; the results for the other sites are shown in Figures 4–8).
Figure 9 provides a visual comparison of two representative sample trees, while Table 1 lists the detailed numerical results of this comparison.
As shown in Table 1, the enhanced model demonstrated superior performance in both site- and tree-based evaluation metrics. When examining the rRMSE for tree height, enhancements of the model were evidently not as effective as they were for the tree crown area. However, one may note that the enhanced model still managed to return lower rRMSE values compared to the standard models, indicating its overall improved accuracy. Nonetheless, further investigation with additional reference data is suggested to validate these findings across larger spatial domains.

5. Discussion

The retrieval of tree structural attributes has become increasingly common through the use of remote sensing tools, which help facilitate the process and improve its efficiency. Several studies have employed various tools, including both active and passive sensors, to estimate different structural attributes such as DBH, tree height, and tree crown features [34]. While active sensors have been widely used [35,36] due to their accuracy and ability to overcome many of the challenges faced by passive sensors, there are studies that have utilized passive sensor data such as UAV photogrammetry [10,37,38]. However, this approach comes with its own set of challenges, including the lens distortions of non-metric cameras.
Previous research has shown that lens distortion causes an image point to deviate from its theoretically correct location to its actual position [24]. Recognizing this phenomenon, we enhanced the lens distortion correction process by employing different models of lens distortion. Since each model leaves a different pattern of residual distortion, any single model may be error-prone in a specific region of the image. By using all models, we aimed to balance the strengths and weaknesses of each dataset, acknowledging that each contains areas of error. This combined approach seeks to improve overall point cloud quality in challenging regions. To showcase the effectiveness of our suggested calibration approach for non-metric UAV cameras, we focused on pollarded semi-arid tree stands, a type of forest that presents unique challenges for photogrammetric 3D modeling. While pollarding has also been reported in numerous studies [39,40,41,42], our selection of these sites was driven by the photogrammetric complexity of the objects, which requires an approach able to cope with single trees with a complex crown and branching structure. The low texture of tree crowns in such forests complicates the common image matching process [43], leading to an often noisy dense point cloud [44]. To address this, we proposed an approach to produce a 3D point cloud by minimizing the impact of lens distortion, a significant source of errors in 3D modeling [45,46].
Our methodology was grounded on the premise that each model focuses on a unique portion of the lens distortion. Consequently, the integration of these models would yield a more comprehensive picture of lens distortion. As illustrated in Figure 3, the residual distortions vary across all five models. This variation can be attributed to the differing model complexities and their distinct responses to lens distortion. We improved the quality of the point cloud by retaining only well-modeled points identified through filtering reprojection error [47]. This demonstrates the effective exploitation of the benefits inherent in each calibration scenario. While this approach improves performance compared to the unenhanced version, the enhanced point cloud’s accuracy is still influenced by the limitations of each model. In essence, our methodology leverages the strengths of each model, thereby enhancing the overall accuracy of representing lens distortion.
Results demonstrated that a combination of the outputs of several camera calibration methods can significantly reduce inaccuracies in dense aerial point clouds. This is a critical aspect that other relevant studies [48,49] have overlooked, opting instead to produce models using default parameters of the lens distortion model. Furthermore, one of the main challenges in complex tree structures is the presence of numerous edges that result in the distortion of crown edges [50]. Pollarded trees do not comprise a continuously textured crown, which results in frequent gaps and fractions within their crown. Consequently, a single pixel error at the edges can cause a significant change in elevation, often leading to failure in reconstructing the tree crown. Similar situations may also occur in other forests with sparse crown covers or areas affected by tree decline. In this study, we minimized most errors in these regions by filtering out inaccurate points, particularly those from edge areas. Additionally, we increased the likelihood of accurately reconstructing these regions with precise points by applying different lens distortion correction approaches and combining various models.
A crucial aspect of our approach is the use of control points. In bundle adjustment, different lens distortion models indirectly affect the estimated values for exterior orientation parameters (EOPs), resulting in different EOPs for each model [51]. This directly impacts the georeferencing of the resulting photogrammetric products. Although the changes are small, these become apparent when combining models. To address this issue, we propose using control points when using data from a consumer-grade UAV without an RTK module, which is commonly the most cost-effective mode for UAV imaging flight in the absence of RTK-equipped UAV alternatives.
The proposed methodology has yielded some intriguing findings, as it appears that the quality and density of the point cloud representing tree crowns are significantly enhanced. This enhancement has a profound impact on the accuracy of measuring crown area, as demonstrated in Table 1. However, the same does not apply to the estimation of tree height, which showed little improvement when applying the method. This inconsistency can be traced back to how tree height is defined, namely as the vertical distance from the ground level to the tree top. When tree height is measured using a dense point cloud, we typically use the highest point to represent the top of the tree, i.e., a point that is likely to be present even in less accurate models. Consequently, an improved 3D model is not guaranteed to have a significant influence on the estimation of tree height, while it may notably boost the precision of tree canopy area estimation.
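The contrast between the two attributes can be illustrated with a minimal sketch: tree height depends only on the single highest point, whereas crown area integrates over the whole planimetric outline of the crown. Here, crown area is approximated by a 2D convex hull, a deliberate simplification of how crown extent may be delineated in practice:

```python
def tree_height(points, ground_z=0.0):
    """Tree height: the single highest point above ground. Insensitive to
    the quality of the rest of the crown reconstruction."""
    return max(z for _, _, z in points) - ground_z

def crown_area(points):
    """Planimetric crown area: area of the 2D convex hull of (x, y),
    computed with the monotone-chain algorithm and the shoelace formula.
    Sensitive to how completely the crown outline is reconstructed."""
    pts = sorted(set((x, y) for x, y, _ in points))
    if len(pts) < 3:
        return 0.0

    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    def chain(seq):
        h = []
        for p in seq:
            while len(h) >= 2 and cross(h[-2], h[-1], p) <= 0:
                h.pop()
            h.append(p)
        return h

    hull = chain(pts)[:-1] + chain(reversed(pts))[:-1]
    n = len(hull)
    return 0.5 * abs(sum(hull[i][0] * hull[(i + 1) % n][1]
                         - hull[(i + 1) % n][0] * hull[i][1]
                         for i in range(n)))
```

Adding or removing crown points changes the hull area directly, while the height estimate is unaffected as long as the tree top survives in the point cloud, which matches the pattern observed in Table 1.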

6. Conclusions

We introduced a novel approach for non-metric UAV camera calibration aimed at mitigating the impact of the parameter selection of lens distortion models on the 3D modeling of tree objects. Our proposed model, a combination of results from various camera calibration scenarios, significantly improved the quality of 3D dense point clouds for single tree reconstruction. While this enhancement might not be significant in tree height estimation, it proves to be extremely beneficial in measuring structures related to the canopy. We tested our approach across a range of study sites, each presenting unique challenges for photogrammetric UAV 3D modeling due to numerous edges, complex branching, and poor tree crown texture. Although our method is not specific to forests and can be applied to a wide range of cases that suffer from modeling errors due to camera distortion, we showcased it here on pollarded tree stands due to the availability of reference data and their challenging nature. Furthermore, we used control points to georeference and align the models, a common practice when dealing with errors found in semi-professional UAVs. This method could be further improved by using UAVs equipped with an RTK module, which is less common for many cost-effective applications. As a future direction, we suggest exploring automatic dense cloud matching and alignment in the absence of control points for co-registering the point clouds produced in the five scenarios introduced in this research, as this approach could prove highly effective in close-range photogrammetry.

Author Contributions

Conceptualization, A.F. and H.L.; methodology, A.F.; software, A.F.; validation, A.F. and H.L.; formal analysis, A.F., H.L., K.M.S. and F.E.F.; data curation, A.F., H.L. and F.E.F.; writing—original draft preparation, A.F. and H.L.; writing—review and editing, A.F., H.L., K.M.S. and F.E.F.; visualization, A.F.; supervision, H.L., F.E.F. and K.M.S.; funding acquisition, H.L., K.M.S. and A.F. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Iran National Science Foundation (project no. 99031132).

Data Availability Statement

Data are available upon request.

Acknowledgments

We would like to express our sincerest appreciation to the kind people of Baneh county, Kurdistan Province, who hosted us during our data acquisition phase. We are particularly grateful to Mohammad Bahari for helping with our field inventories. This research was supported by the Iran National Science Foundation (INSF) (project no. 99031132) within the project “Spatial documentation and structural analysis of old-growth trees in traditionally managed and unmanaged forests in northern Zagros by means of Unmanned Aerial Vehicle (UAV)-based photogrammetry”. This research was conducted within the “Remote Sensing for Ecology and Ecosystem Conservation (RSEEC)” research lab of the K. N. Toosi University of Technology https://www.researchgate.net/lab/Remote-Sensing-for-Ecology-and-Ecosystem-Conservation-RSEEC-Hooman-Latifi (accessed date 15 January 2025).

Conflicts of Interest

The authors declare no conflicts of interest.

Figure 1. Six managed study sites located in the northern part of Zagros forests. Each site features a representative tree that exemplifies the general form of trees in that particular site. The figure illustrates the diversity of trees in terms of crown area, a result of pollarding at various stages. This diversity presents varying levels of complexity in 3D modeling.
Figure 2. The overall workflow of the implemented methodology.
Figure 3. A visual comparison between the residuals of different lens distortion models for Site 5, indicating that a model accurately models a specific part (for example, A and D in Models 2 and 5 and B and C in Model 4) and fails in another (for example, A and D in Models 1, 3, and 4 and B and C in Models 1, 3, and 5). This suggests that the models are complementary to each other, collectively providing a more accurate depiction of lens distortion.
Figure 4. A visual comparison between the residuals of different lens distortion models in Site 1.
Figure 5. A visual comparison between the residuals of different lens distortion models in Site 2.
Figure 6. A visual comparison between the residuals of different lens distortion models in Site 3.
Figure 7. A visual comparison between the residuals of different lens distortion models in Site 4.
Figure 8. A visual comparison between the residuals of different lens distortion models in Site 6.
Figure 9. Comparison between our enhanced model and the top-performing standard model. The superiority of our method is visually evident in the quality of the models for two sample trees. The upper section represents a tree with the smallest canopy, while the lower section depicts a tree with the largest canopy from our studied sites.
Table 1. Outcomes of assessing our proposed approach utilizing two distinct evaluation metrics compared with the top-performing standard model. CaR3DMIC is the site-based metric; crown area and height are tree-based metrics reported as rRMSE (%).

| Site | CaR3DMIC Standard | CaR3DMIC Enhanced | Crown Area Standard | Crown Area Enhanced | Height Standard | Height Enhanced |
|------|-------------------|-------------------|---------------------|---------------------|-----------------|-----------------|
| 1    | 0.70              | 0.89              | 24                  | 16                  | 6               | 6               |
| 2    | 0.74              | 0.86              | 21                  | 9                   | 15              | 13              |
| 3    | 0.68              | 0.88              | 15                  | 3                   | 7               | 6               |
| 4    | 0.69              | 0.84              | 39                  | 23                  | 14              | 9               |
| 5    | 0.66              | 0.79              | 18                  | 9                   | 5               | 3               |
| 6    | 0.70              | 0.84              | 12                  | 3                   | 3               | 2               |
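To make the table's aggregate message concrete, the sketch below computes the mean per-site relative change for each metric, with the values transcribed from Table 1. The averaging scheme behind the summary percentages quoted in the abstract is not stated, so this is one plausible aggregation offered purely for illustration, not a reproduction of the published figures.

```python
# Per-site values transcribed from Table 1 (standard vs. enhanced).
car3dmic_std = [0.70, 0.74, 0.68, 0.69, 0.66, 0.70]
car3dmic_enh = [0.89, 0.86, 0.88, 0.84, 0.79, 0.84]
crown_std = [24, 21, 15, 39, 18, 12]   # crown-area rRMSE, %
crown_enh = [16, 9, 3, 23, 9, 3]
height_std = [6, 15, 7, 14, 5, 3]      # height rRMSE, %
height_enh = [6, 13, 6, 9, 3, 2]

def mean_rel_change(old, new):
    """Mean per-site relative change, in percent (negative = reduction)."""
    return 100 * sum((n - o) / o for o, n in zip(old, new)) / len(old)

print(f"CaR3DMIC change:        {mean_rel_change(car3dmic_std, car3dmic_enh):+.1f}%")
print(f"Crown-area rRMSE change: {mean_rel_change(crown_std, crown_enh):+.1f}%")
print(f"Height rRMSE change:     {mean_rel_change(height_std, height_enh):+.1f}%")
```

Under this per-site averaging, the enhanced model raises CaR3DMIC by about 22% and lowers crown-area and height rRMSE by roughly 56% and 23%, respectively; the height columns also show why the conclusions describe the height gain as modest relative to the canopy-related gains.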

