Article

Accuracy of Reconstruction of the Tree Stem Surface Using Terrestrial Close-Range Photogrammetry

1
Faculty of Forestry and Wood Sciences, Czech University of Life Sciences, Kamýcká 129, Praha 165 21, Czech Republic
2
The Institute of Statistical Mathematics, 10-3 Midori-cho, Tachikawa, Tokyo 190-8562, Japan
*
Author to whom correspondence should be addressed.
Remote Sens. 2016, 8(2), 123; https://doi.org/10.3390/rs8020123
Received: 8 September 2015 / Revised: 20 January 2016 / Accepted: 25 January 2016 / Published: 5 February 2016
(This article belongs to the Special Issue Digital Forest Resource Monitoring and Uncertainty Analysis)

Abstract
Airborne laser scanning (ALS) allows for extensive coverage, but the accuracy of tree detection and stem form reconstruction can be limited. Terrestrial laser scanning (TLS) can improve on ALS accuracy, but it is rather expensive and its area coverage is limited. Multi-view stereopsis (MVS) techniques, which combine computer vision and photogrammetry, may offer some of the coverage benefits of ALS together with the improved accuracy of TLS; MVS applies automatic analysis of digital images from common commercial digital cameras, together with various algorithms, to reconstruct three-dimensional (3D) objects with realistic shape and appearance. Although the processing software reports the relative accuracy (relative geometrical distortion) of the reconstructions, the absolute accuracy is uncertain and difficult to evaluate. We evaluated data collected with a common digital camera and processed with photogrammetric software (Agisoft PhotoScan ©) by comparing the reconstructions with direct measurements from a 3D magnetic motion tracker. Our analyses indicated that the error is mostly concentrated in the portions of the tree where visibility is lower, i.e., the bottom and upper parts of the stem. For each reference point from the digitizer, we determined how many cameras could view that point. As expected, the accuracy of the measured object space point positions increased with the number of cameras: beyond five cameras the accuracy began to increase more abruptly, but eight cameras or more provided no further increase. This method allows for the retrieval of larger datasets from the measurements, which could improve the accuracy of estimates of the 3D structure of trees at potentially reduced costs.

1. Introduction

In forest biometrics and other related research areas, analysis of three-dimensional (3D) data has gained a great deal of attention during the last decade. The 3D data may be characterized by three primary acquisition methods: (1) laser scanning; (2) magnetic motion tracking; and (3) photogrammetric reconstruction (stereoscopy). The data can also be divided into so-called surface data and structural data. Surface data are sensed directly from the surface of the three-dimensional object and are relatively easily and quickly displayable. Structural data describe the structure, i.e., the permanent relation of several features [1], of an organism, such as a plant or a tree. Features include common geometric shapes such as cylinders, spheres, and rectangles. The main advantage of structural data is that they describe the parts of a tree more directly, thus providing an overview of the architecture and appearance of a tree. Estimates of tree biomass are easily quantified based on the lengths and volumes of individual segments, such as branches, bole, crown, and roots [2,3].
Light detection and ranging (LiDAR) technology allows us to measure terrestrial 3D data, and is widely used for commercial purposes in forest inventories [4,5,6]. Airborne LiDAR tends to slightly underestimate field-measured estimates in the case of dense forest stands because of its poor ability to penetrate the canopy and reach the forest floor. Tree height estimates may also be underestimated because laser pulses are not always reflected from treetops, particularly for trees with smaller crown diameters or conically shaped crowns, whereby the laser pulse may detect the sides of the tree instead of the treetop [7,8]. One of the main advantages of airborne laser scanning (ALS) is that it covers large areas, but costs can be relatively high, and lower point densities tend to limit tree detection accuracy, according to [9]. In contrast, terrestrial laser scanning (TLS) produces very high point densities and fills the gap between tree-scale manual measurements and large-scale airborne LiDAR measurements by providing a large amount of precise information on various forest structural parameters [6,10,11]. It also provides a digital record of the three-dimensional structure of forests at a given time. Several studies have shown that TLS is a promising technology because it provides objective measures of tree characteristics including height, plot-level volumes, diameter at breast height (DBH), canopy cover, and stem density [12]. However, drawbacks include reduced spatial resolution of the tree surface point cloud with increasing distance to the sensor, and laser pulses are unable to penetrate through occluding vegetation; thus, TLS point density may be insufficient and provide underestimates compared to field-measured estimates [13,14]. However, modern laser scanners with a sufficiently wide beam, whether terrestrial or installed on Unmanned Aerial Vehicles (UAVs) (for example, the Riegl VUX-1), appear to be very promising.
To evaluate DBH, [7] implemented a circle approximation and concluded that it estimated DBH capably if there were a sufficient number of surface laser points, but DBH estimates were smaller with too few data points. Similarly, [15] estimated DBH efficiently using a circle-fitting algorithm; they concluded that the use of TLS could be fraught with errors if there were an inadequate number of laser points due to occlusion from other stems. In another study, [16] concluded that accurate DBH measurements from TLS datasets could be obtained only for unobstructed trees. Additionally, [17] tested several geometrical primitives to represent the surface and topology of the point clouds from TLS; they concluded that the fit is dependent on the data quality and that the circular shape is the most robust primitive for estimation of the stem parameters.
In another study, [18] used a novel approach for acquisition of the forest horizontal structure (stem distribution) with panoramic photography, a technique that may also be applied with a smartphone [19]. More recently, multi-view stereopsis (MVS) techniques combining computer vision and photogrammetry, for example [20], together with algorithms such as the Scale-Invariant Feature Transform (SIFT) [21] or Speeded-Up Robust Features (SURF) [22], allow the use of common optical cameras for the reconstruction of 3D objects and can improve the architectural representation of tree stems and crown structures, as demonstrated, for example, in [23,24]. The reconstruction is based on automatic detection of the same points, i.e., stem or crown features, in subsequent paired images. The algorithm aligns the photos, whereby the detected points are used to estimate camera positions in a relative 3D coordinate system. The dense point cloud is then calculated, whereby points from one photo are identified in the aligned corresponding photos (the number of pixels considered depends on the settings). Finally, the mesh is created using one of several predefined techniques in the Agisoft software, and the texture is mapped onto the resultant mesh; however, some authors define mesh creation from points (see [25] for a detailed description of possible procedures).
Point cloud generation based on unmanned aerial vehicle (UAV) imaging and MVS techniques could fill the gap between ALS and TLS because it could cover large areas and deliver high point densities for precise detection at low costs. Recent studies by [26,27,28,29] all successfully adopted MVS to derive dense point cloud data from UAV photography of complex terrain with 1–2 cm of point spacing (distance between detected points). This method can create visually realistic 3D sceneries and objects such as trees or groups of trees; however, the accuracy of these models is difficult to verify. The authors of [30] used MVS to build a more realistic and accurate model that captures sufficient control points for producing a mesh of a tree stem. The efficiency of this approach proved to be much higher in the case of mature trees with thicker stems. It was less effective with younger trees with small-diameter stems and branches, as is often the case in younger forest stands, because of an insufficient number of points in the point cloud; as a result, the method fails to produce a mesh from the images and the point cloud. Excessive shadow is another disruption for the measurement, especially at the lower and upper parts of the stem where visibility from the optical camera is restricted. The authors of [31] studied the accuracy of dense point field reconstructions of coastal areas using a light UAV and digital images. They used a total station survey and a GPS survey from the UAV for comparison, and differences between the methods were analyzed by root mean square error. They concluded that sub-decimeter terrain change is detectable by the method given the pixel resolution from a flight height of approximately 50 m. The total station offers high accuracy for ground truth points, though for spherical or cylindrical objects such as the tree stem, these points cannot be obtained from a single position of the total station.
Additional positions of the total station may become a source of additional error that is avoidable when using the system for 3D digitization, such as the magnetic motion tracker [3,25], especially for the measurement of close distance points whereby the occlusion of points by the stem would require several total station positions.
To overcome the occlusion problem and accurately represent tree characteristics, the position of the sensor relative to the source of the magnetic field needs to be recorded, even if the sensor is concealed behind an obstacle, such as the stem. The method must also initially produce an accurate 3D model, which could then be the basis for further data analyses and allow for maximum precision of the outputs. In this work we describe how to create photo-reconstructed models and compare them with points and models obtained from a magnetic motion tracker [25]. The aim of this article is not only to investigate the precision of the photo reconstruction, but also to identify factors that affect the accuracy.

2. Materials and Methods

We conducted an empirical study using overlapping terrestrial photos of individual stems for the creation of 3D photomodels. These stems were digitized using a magnetic motion tracker as described in [25]; the motion tracker points were then used as ground control data to evaluate the accuracy of the photo reconstructions. The reason for using the magnetic motion tracker is that the magnetic field passes through materials such as wood, which allows continuous measurement of points hidden behind such materials. This applies to the points measured around the stem that would be occluded, for example, for a laser scanning device. All such points can be measured from a single position of the digitizer, whereas a laser scanner would have to be repositioned several times.

2.1. Study Area

The research area is located in the University of Tokyo Chiba Forest (UTCBF), Chiba Prefecture, Japan, in the south-eastern part of the Boso Peninsula (Figure 1). It extends from 140°5′33″E to 140°10′10″E and from 35°8′25″N to 35°12′51″N. The forest is composed of various types of forest stands, including Cryptomeria japonica (L. f.) D. Don, Chamaecyparis obtusa (Siebold and Zucc.) Siebold and Zucc. ex Endl., Abies spp., Tsuga spp., and other evergreen broad-leaved trees. The plot detail is displayed in Figure 2 with the digital terrain model based on the Z-axis measurement. We selected 20 trees for measurement within a pure stand of C. japonica with different diameters, slopes, and light conditions.
Figure 1. Location of the research area in Chiba Prefecture, Japan (source Google Earth, detail image partially modified for enhanced brightness).
Figure 2. Scheme of tree positions inside plot with the digital surface model based on their Z position.

2.2. Photo Reconstruction

The photos were taken using a Sony NEX-7 digital camera with a zoom lens fixed at a focal length of 28 mm. The aperture was fixed at f/8.0, the shutter speed was set according to the light conditions of the scene, and a flash was used because of the dense canopy and limited light penetration to the understory. However, in other (unpublished) datasets we verified that the flash is not necessary for accurate reconstruction; the potential problem that occurs in certain light environments on the stem (an illuminated and a shaded side) can be solved by using spot exposure metering with the metering point placed on the stem. We took approximately 20 photos regularly distributed around the stem, each of which included a view of the ground, and the distance was chosen so that the stem occupied approximately one-third of the photo. The distance from the camera to the tree was between 1 and 2 m, with a corresponding spatial pixel resolution of approximately 0.2–0.5 mm on the stem. The settings were selected in consultation with the Agisoft PhotoScan © software support team. We then took an additional ring of 20 photos including the breast height (1.3 m) portion of the stem and above in order to cover a larger part of the stem. We used Agisoft PhotoScan © for image alignment and for the sparse and dense point cloud reconstruction. The mesh was reconstructed from the dense point cloud, and the texture was mapped with density 4096 × 2 (an option offered in Agisoft PhotoScan ©). The reference points were then identified on the textured model, and the real-world positions from the measurements were attributed to them without geometrical rectification of the model.
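For intuition, the quoted on-stem pixel resolution can be checked with the standard ground-sample-distance approximation, GSD ≈ object distance × pixel pitch / focal length. A minimal sketch, assuming a pixel pitch of about 3.9 µm (our estimate for a 24-megapixel APS-C sensor such as the NEX-7's; this value is not given in the text):

```python
def ground_sample_distance(distance_m, pixel_pitch_m, focal_length_m):
    """Approximate size of one pixel projected onto the object
    (thin-lens model, object distance much larger than focal length)."""
    return distance_m * pixel_pitch_m / focal_length_m

# Assumed values: 28 mm lens, ~3.9 um pixel pitch, camera 2 m from the stem.
gsd = ground_sample_distance(2.0, 3.9e-6, 0.028)
print(f"{gsd * 1000:.2f} mm per pixel")  # prints "0.28 mm per pixel"
```

At 1–2 m this lands in the sub-millimeter range quoted above; the exact figure depends on the assumed pixel pitch.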

2.3. Field Measurement and Comparison

The trees were measured in the field, including their XYZ positions (Figure 2), the circumference at breast height (1.3 m), and the height. The models produced by Agisoft PhotoScan © were imported into a 3D processing library for the .NET environment (devDept Eyeshot), and cross-sections at 1.3 m were produced. The area of each cross-section and its perimeter (circumference) were annotated. The root mean square error (RMSE) between the circumferences measured in the field and those obtained by photo reconstruction was calculated using the following formula:

$$\mathrm{RMSE} = \sqrt{\frac{\sum_{i=1}^{n}\left(\mathrm{CBH}_{p,i} - \mathrm{CBH}_{f,i}\right)^{2}}{n}}$$

where $\mathrm{CBH}_{p}$ is the circumference at breast height from the photo model, $\mathrm{CBH}_{f}$ is the circumference at breast height from the field measurements, and $n$ is the total number of samples.
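A minimal sketch of this computation (function and variable names are illustrative, not from the study's code):

```python
import math

def rmse(cbh_photo, cbh_field):
    """Root mean square error between photo-model and field-measured
    circumferences, paired sample by sample."""
    assert len(cbh_photo) == len(cbh_field)
    n = len(cbh_photo)
    return math.sqrt(sum((p - f) ** 2 for p, f in zip(cbh_photo, cbh_field)) / n)

# Hypothetical circumferences in cm for two trees:
print(round(rmse([100.0, 80.0], [98.0, 81.0]), 3))  # prints 1.581
```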

2.4. Magnetic Digitization

Next, we analyzed the error using the digitized points and their distances from the photo-reconstructed model. The digitization of these control points was implemented using a newly developed method similar to [32]. We collected surface data with a magnetic motion tracker, POLHEMUS FASTRAK®. We used FastrakDigitizer® software, as described in [25], with a TX4 source mounted on a wooden tripod to avoid measuring points at the edge of the magnetic field, where the accuracy, according to [33], might be lower. The manufacturer reports submillimeter accuracy (more precisely, 0.03 inches (0.7 mm) root mean square error for the X, Y, or Z positions [33]). The system has an operational range of approximately five feet with the standard source and seven feet with the TX4 source, with the option to extend the range up to 10 feet (using the Long Ranger extension). FASTRAK® provides both position and orientation data with no need for additional calculations.
For each stem, we recorded the horizontal contours around the stem at each change in curvature or at more significant changes in the stem’s thickness, up to a height of approximately 3 m (the lower part of the stem). We marked the north and the vertical directions for post-processing orientation of the model. Six reference points regularly distributed along the stem were also assigned for later alignment with the photo model. The coordinates of these points were used as reference points in Agisoft PhotoScan © for georeferencing the point cloud and the 3D mesh. The magnetic digitizer places the coordinate origin [0,0,0] at the source, so the points are referenced in this local coordinate system. During the photo reconstruction, we attempted to recover the texture precisely, i.e., with high detail. The model referenced in the local coordinate system was exported and further processed in FastrakDigitizer in order to calculate the displacement of each point and the average displacement compared to the digitized points.

2.5. Error Term—Minimum Distance and Morphology

The error was defined as the shortest distance from a digitized point to the photo-reconstructed model. The algorithm for finding the smallest distance begins by searching for the closest vertex of the model. Once that vertex is found, it tests whether any of the mesh triangles incident to this vertex has a perpendicular distance to the test point shorter than the distance to the vertex; if so, that shorter distance becomes the distance error.
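The closest-vertex-then-incident-triangles search can be sketched as follows, assuming the mesh is given as a vertex list and vertex-index triples; this illustrates the described scheme and is not the authors' implementation:

```python
import math

def distance_error(p, vertices, triangles):
    """Shortest distance from digitized point p to a triangle mesh:
    find the closest vertex, then test the perpendicular distance to each
    incident triangle and keep the smaller value."""
    iv = min(range(len(vertices)), key=lambda i: math.dist(p, vertices[i]))
    best = math.dist(p, vertices[iv])
    for tri in triangles:
        if iv not in tri:
            continue  # only triangles sharing the closest vertex
        d = _perpendicular_distance(p, *(vertices[i] for i in tri))
        if d is not None and d < best:
            best = d
    return best

def _perpendicular_distance(p, a, b, c):
    """Distance from p to the plane of triangle abc, valid only when the
    foot of the perpendicular lies inside the triangle (else None)."""
    ab = [b[i] - a[i] for i in range(3)]
    ac = [c[i] - a[i] for i in range(3)]
    ap = [p[i] - a[i] for i in range(3)]
    # barycentric coordinates of the projection of p onto the plane
    d00 = sum(x * y for x, y in zip(ab, ab))
    d01 = sum(x * y for x, y in zip(ab, ac))
    d11 = sum(x * y for x, y in zip(ac, ac))
    d20 = sum(x * y for x, y in zip(ap, ab))
    d21 = sum(x * y for x, y in zip(ap, ac))
    denom = d00 * d11 - d01 * d01
    if denom == 0:
        return None  # degenerate triangle
    v = (d11 * d20 - d01 * d21) / denom
    w = (d00 * d21 - d01 * d20) / denom
    if v < 0 or w < 0 or v + w > 1:
        return None  # foot of perpendicular falls outside the triangle
    # distance along the unit normal of the triangle plane
    n = [ab[1] * ac[2] - ab[2] * ac[1],
         ab[2] * ac[0] - ab[0] * ac[2],
         ab[0] * ac[1] - ab[1] * ac[0]]
    nn = math.sqrt(sum(x * x for x in n))
    return abs(sum(x * y for x, y in zip(ap, n))) / nn

# A point 0.5 above a single triangle in the z = 0 plane:
demo = distance_error((0.2, 0.2, 0.5),
                      [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)],
                      [(0, 1, 2)])
print(demo)  # prints 0.5
```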
The morphological error was defined as the difference between the perimeter and cross-sectional area of the stem reconstructed from the magnetic measurement and from the photo reconstruction. Each of the models is sectioned horizontally along the vertical direction, and the values of the perimeter and area are compared for each cross-section. The distance error described above is a non-negative value, i.e., it does not indicate whether the point lies outside or inside the mesh surface (it may also be understood as the absolute magnitude of the error). The reason we tested for the morphological error is that the displacement error may result from either of two different situations, which can also be referred to as random and systematic errors: the random case, in which points fall alternately outside and inside the true stem (no morphology error), and the systematic case, in which all points fall either outside or inside the stem (resulting in a clear morphology error).
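For a single cross-section represented as a closed polygon of XY points, the perimeter and enclosed area can be computed with edge sums and the shoelace formula; a hypothetical unit square serves as a check. This is our sketch, not the Eyeshot routine used in the study:

```python
import math

def perimeter(poly):
    """Total edge length of a closed polygon given as [(x, y), ...]."""
    return sum(math.dist(poly[i], poly[(i + 1) % len(poly)])
               for i in range(len(poly)))

def area(poly):
    """Enclosed area via the shoelace formula (vertices in order)."""
    s = 0.0
    for i in range(len(poly)):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % len(poly)]
        s += x1 * y2 - x2 * y1
    return abs(s) / 2.0

square = [(0, 0), (1, 0), (1, 1), (0, 1)]
print(perimeter(square), area(square))  # prints 4.0 1.0
```

The morphological error at a given height is then simply the difference of these quantities between the magnetic-tracker and photo-reconstructed cross-sections.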
The results were analyzed in three steps. In step 1 we conducted the point-based analysis and compared the distance between the individual control points and the photo model. We investigated how the minimal distance correlated with the height of the control point on a stem and how it correlated with the azimuthal angle relative to the north position.
We hypothesized that the texture, particularly the presence or lack of lichens on the northern and southern sides, respectively, may influence the error on some sites. We used circular statistics in the R statistical software for this analysis [34]. The behavior of a linear variable over an azimuthal (circular) variable is better described by cylindrical statistics, a sub-area of circular statistics [34]. A cylinder best approximated the stem surface, and therefore we used cylindrical statistics to model the stem texture. We used the Johnson-Wehrly-Mardia correlation coefficient to test the correlation between the error and the azimuth [34].
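A compact sketch of a linear-circular association statistic of this family, built from the Pearson correlations of the linear variable with the sine and cosine of the angle; the exact estimator used in [34] may differ in details, so this is an illustration only:

```python
import math

def _pearson(x, y):
    """Plain Pearson correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return sum((a - mx) * (b - my) for a, b in zip(x, y)) / (sx * sy)

def linear_circular_r2(x, theta):
    """Squared linear-circular correlation between a linear variable x
    (here: point error) and an angle theta in radians (here: azimuth),
    combining the correlations of x with cos(theta) and sin(theta)."""
    c = [math.cos(t) for t in theta]
    s = [math.sin(t) for t in theta]
    rxc, rxs, rcs = _pearson(x, c), _pearson(x, s), _pearson(c, s)
    return (rxc ** 2 + rxs ** 2 - 2 * rxc * rxs * rcs) / (1 - rcs ** 2)
```

A value near 0, as found in Section 3.2.1, indicates that the error does not vary systematically with azimuth.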
The left panel of Figure 4 shows an example of the typical surface texture of a C. japonica stem. The presence of lichens on the northern portion of the stem supported a hypothesis of better performance of the algorithm on the northern side. However, the right panel of Figure 4 shows that in the lower part of the stem, the detected points are also present on non-northern portions of the stem. Other features that may hypothetically influence accuracy include tree age, season, or stem conditions, all of which may be examined in a future study. Step 2 was the analysis of morphological characteristics based on the stem perimeter and horizontal cross-section. Because the error distance is always positive, it is difficult to distinguish the direction of the error, either outside or inside the stem. The analysis of cross-sectional differences reveals the direction of the distance error; in this case we used the squared value of the difference.
Step 3 was to analyze the mean error of individual points in relation to the number of cameras able to capture a designated focal point. The number of cameras was determined by two factors: whether the point was inside the camera’s field of view, and whether the point could actually be seen by the camera (i.e., was not occluded). To determine whether a point was in the camera’s field of view, we modeled each camera’s view as a cone whose apex is situated at the camera position, specified by its xyz coordinates, and whose orientation is given by the camera rotation (Figure 5, left). The height (length) of the cone was set to a large number exceeding any possible distance to the control points. We then calculated whether the camera could view each point without the line of sight crossing the stem. The contour points were collected all over the surface, but they were partially hidden from each camera. This can be verified as shown in Figure 5 (right).
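The two checks, cone of view and unobstructed line of sight, can be sketched as follows, assuming each camera is given as a position, a viewing-axis vector, and a half-angle, with occlusion tested by segment-triangle intersection (the Möller–Trumbore test). All names and the data layout are illustrative:

```python
import math

def _sub(u, v): return [a - b for a, b in zip(u, v)]
def _dot(u, v): return sum(a * b for a, b in zip(u, v))
def _cross(u, v):
    return [u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0]]

def _segment_hits_triangle(p0, p1, a, b, c, eps=1e-9):
    """Moller-Trumbore: does the segment p0 -> p1 cross triangle abc?"""
    d = _sub(p1, p0)
    e1, e2 = _sub(b, a), _sub(c, a)
    h = _cross(d, e2)
    det = _dot(e1, h)
    if abs(det) < eps:
        return False               # segment parallel to the triangle plane
    f = 1.0 / det
    s = _sub(p0, a)
    u = f * _dot(s, h)
    if u < 0 or u > 1:
        return False
    q = _cross(s, e1)
    v = f * _dot(d, q)
    if v < 0 or u + v > 1:
        return False
    t = f * _dot(e2, q)
    return eps < t < 1 - eps       # strictly between camera and point

def cameras_seeing(point, cameras, vertices, triangles):
    """Count cameras whose view cone contains the point and whose line of
    sight to it is not blocked by the stem mesh.
    cameras: list of (position, axis_vector, half_angle_rad) tuples."""
    count = 0
    for pos, axis, half_angle in cameras:
        to_p = _sub(point, pos)
        dist = math.sqrt(_dot(to_p, to_p))
        alen = math.sqrt(_dot(axis, axis))
        if dist == 0 or alen == 0:
            continue
        cos_ang = _dot(to_p, axis) / (dist * alen)
        angle = math.acos(max(-1.0, min(1.0, cos_ang)))
        if angle > half_angle:
            continue               # outside this camera's cone of view
        blocked = any(_segment_hits_triangle(pos, point,
                                             *(vertices[i] for i in tri))
                      for tri in triangles)
        if not blocked:
            count += 1
    return count
```

In the study, this count per control point is the explanatory variable for the trend analysis in Section 2.6.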
Figure 3. Surface data collected from a tree stem surface by a magnetic motion tracker.
The left-hand side of Figure 5 shows the positions, rotations, and angle of view for every position of a camera around the stem. The middle panel of Figure 5 shows the area of the stem covered by a camera from one position, which was used to calculate the number of cameras theoretically seeing a point. The right-most panel of Figure 5 shows the connection of a camera to individual points and the reconstructed surface of the stem. Cameras whose connecting line crosses the stem’s surface are eliminated from the total number of cameras seeing the point.
Figure 4. Example of typical surface texture of Cryptomeria japonica and the points automatically detected by Agisoft Photoscan © software.

2.6. Trend Detection

To evaluate the trend of errors associated with the number of cameras, we expected that the error would decrease with an increasing number of cameras pointing at a point on the stem, and that beyond a certain number of cameras the error would stabilize. We attempted to identify the number of cameras beyond which the point location error stabilizes. We used the method of [35] to analyze the trend in our data using joinpoint models, which consist of different linear trends connected at certain points; these are termed “joinpoints” by the authors. The Joinpoint Trend Analysis Software evaluates the trend data and fits the joinpoint model, starting from a minimum number of joinpoints up to a user-defined maximum; we used a minimum of 0 and a maximum of 10. Significance is tested using a Monte Carlo permutation method; for each number of joinpoints, the software estimates significance values for the number of joinpoints and for each of the partial linear trends.
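As an illustration of the idea behind joinpoint models (not the Monte Carlo machinery of the actual software), a single joinpoint can be located by exhaustive search over candidate break positions, fitting an ordinary least-squares line on each side:

```python
def fit_one_joinpoint(x, y):
    """Try each candidate break position, fit an ordinary least-squares
    line to each side, and keep the break with the smallest total sum of
    squared errors (SSE)."""
    def ols(xs, ys):
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        sxx = sum((a - mx) ** 2 for a in xs)
        slope = sum((a - mx) * (b - my) for a, b in zip(xs, ys)) / sxx
        intercept = my - slope * mx
        sse = sum((b - (intercept + slope * a)) ** 2 for a, b in zip(xs, ys))
        return slope, sse
    best = None
    for k in range(2, len(x) - 1):        # at least two points per segment
        s1, e1 = ols(x[:k], y[:k])
        s2, e2 = ols(x[k:], y[k:])
        if best is None or e1 + e2 < best[1]:
            best = (x[k], e1 + e2, s1, s2)
    return best   # (break location, total SSE, left slope, right slope)

# Hypothetical error series: declining up to x = 5, then flat.
x = list(range(10))
y = [10, 9, 8, 7, 6, 5, 5, 5, 5, 5]
print(fit_one_joinpoint(x, y)[0])  # prints 5
```

The actual software extends this idea to multiple joinpoints and tests their number by permutation.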

3. Results

3.1. Accuracy of Diameter Estimation from Photo Reconstructed Shapes

We evaluated the accuracy of the photo-reconstructed stems by comparing them to the common field measurement of stem circumference with a measuring tape at breast height (1.3 m). The root mean square error (RMSE) and its standard deviation are displayed in Table 1. The RMSE between the circumference measured by tape and that measured on the photo models was 1.87 cm; however, its variability was rather high (SD 2.23 cm). This difference in circumference corresponds to an error of 0.59 cm in diameter.
Table 1. The root mean square error (RMSE) of circumference in breast height (CBH) calculated from photo reconstructed models and the terrain measurement using common dendrometric tape, and RMSE of diameter in breast height (DBH). Both values include standard deviation (SD).
RMSE CBH (cm)   SD (cm)   RMSE DBH (cm)   SD (cm)
1.87            2.23      0.59            0.72

3.2. Accuracy of Point Reconstruction

The analysis of the accuracy of the individual point reconstruction is divided into the following parts: analysis of point error in the vertical and horizontal directions, morphological analysis of the error, i.e., the systematic error, and analysis of image overlap, here described as the number of cameras seeing a point.

3.2.1. Displacement Error and Its Distribution in Vertical and Horizontal Directions

As evident on the left side of Figure 6, when comparing the error by height, the error in the middle portion of the stem was lowest, and it was higher at the base and crown of the trees, where the visibility and the ability of the optical sensor to capture an image were low. Based on the azimuth (right panel of Figure 6), it appeared that the error was lower on the northern face of the stems, although the differences were not statistically significant. The Johnson-Wehrly-Mardia correlation coefficient [34] yielded $R^{2}_{x\theta} = 0.0094$ with a p-value for the test of independence equal to 0.0680; the low value of $R^{2}_{x\theta}$ suggests that the correlation between error size and azimuth was very low.
Figure 5. Left: the positions of individual cameras with their rotations and field of view, middle: the angle of view of one camera, right: the connections of point to all cameras for determination of visibility.

3.2.2. Morphological Error and Accuracy of Stem Morphology

Figure 7 shows the morphological error, defined by stem circumference (left panel) and cross-sectional area (right panel), in relation to height at 20 cm increments. Similar to the point error analysis, the error was highest at the bottom and upper portions of the stem model and lowest at the breast height portion of the stem, where the overlap of the individual images was highest.
Figure 6. Error (cm) and its distribution with height (left) and azimuth (right).

3.2.3. Influence of the Number of Cameras and Related Precision

The model with two joinpoints was found to be the best (Table 2); the other models (with zero or three joinpoints) were not significant or less significant than this one. Figure 8 shows the results of the trend analysis and the determination of joinpoints. The two significant joinpoints occurred at five cameras and at eight cameras (Table 3). All three segment slope estimates were negative, and the steepest decrease was observed in the second segment (Table 4), between five and eight cameras, implying that the shape reconstruction error decreased markedly for every camera added beyond five cameras, but only up to eight cameras, where the slope essentially became zero. Beyond a total of eight cameras, the error stabilized, meaning that adding more cameras (views) would not improve the performance (Figure 8).
Table 2. Model statistics.
Cohort   Number of Joinpoints   Number of Observations   Number of Parameters   Degrees of Freedom   Sum of Squared Errors   Mean Squared Error   Autocorrelation Parameter
All      2                      21                       6                      15                   0.182                   0.012                Uncorrelated
Figure 7. The error in circumference (cm) by height (left) and stem sectional area (cm2) by height (right).
Table 3. Estimated joinpoints with corresponding confidence limits (CL).
Cohort   Joinpoint   Estimate   Lower CL   Upper CL
All      1           5          3          7
All      2           8          7          10
Table 4. Estimated regression coefficient (general parameterization).
Cohort   Parameter     Parameter Estimate   Standard Error   Test Statistic   p-Value
All      Intercept 1    0.2632              0.1408            1.869           0.084
All      Intercept 2    1.7074              1.1136            1.533           0.149
All      Intercept 3   −0.9348              0.1322           −7.067           0.000
All      Slope 1       −0.0437              0.0497           −0.878           0.395
All      Slope 2       −0.3325              0.1745           −1.905           0.079
All      Slope 3       −0.0022              0.0082           −0.275           0.787
Figure 8. Trend analysis of the data: the error on the y-axis and the number of cameras that see the focal point on the x-axis.

4. Discussion

Recent advances in computer vision allow accurate reconstruction of the terrain surface from remotely sensed images (e.g., [29]), and they can also be used for the reconstruction of individual 3D objects (as demonstrated in this study). This reconstruction method has great potential, especially when compared to the commonly used laser scanning methods. Although laser scanning is most likely more precise and creates more points from fewer positions, the high cost of the equipment is currently restrictive. With uncalibrated cameras, photo reconstruction is possible with any mid-level commercial camera, and the processing is possible using several different software packages, some of which are even freeware. However, the use of calibrated cameras (even ones calibrated with freeware) can only enhance the model, as demonstrated, for example, in [36]. Recently, optical cameras have mostly been deployed on Unmanned Aerial Vehicles for the extraction of inventory parameters, e.g., [37,38]. An overview of possible uses is given, for example, in [39].
In this study we used a handheld camera to create three-dimensional models of stems using the settings recommended by the software manufacturer. The resulting stem objects are visually realistic, and comparing them to the field measurements we obtained an RMSE of 1.87 cm in circumference at breast height. However, this value has large variability, so, viewing the object as a remotely sensed surface model, we designed a method for evaluating the accuracy at individual points spread over the stem.
The study in [31] used differential GPS (DGPS) as the ground truth to verify the accuracy of point clouds derived from photos, and concluded that terrain reconstruction accuracy was around 2.5 to 4 cm, provided that the ground control points were clearly visible, well contrasted with the surrounding landscape, and sufficiently spread around the investigated scene. They also concluded that flight planning must ensure a high degree of overlap (at least 70% between images); the flight altitude, however, is generally considered the most critical factor for the recognition of individual features. The use of DGPS is limited inside a forest, though, mostly because the GPS signal is weak under a dense canopy. Other authors have used total station data as the ground reference. For instance, [39] reported that a total station provided accuracy of approximately 1 cm in horizontal position and about 2 cm in vertical position (elevation). The studies [40,41] used total station ground truth to assess the accuracy of LiDAR, and [42] used it to assess the accuracy of GPS.
In this study, a magnetic motion tracker provided the reference data. A series of overlapping images was photographed around each tree in the sample plot using a hand-held camera. From these images a point cloud was generated for each stem and used to construct a mesh object with a high-resolution texture in Agisoft PhotoScan © software. This model allowed the reference points measured by the digitizer to be expressed in the same local coordinate system.
We used the remaining points to evaluate the accuracy of the model and found that the accuracy does not depend significantly on the presence or absence of lichens (typically found on the north side of the stem), but it does decrease and then increase again with height. We evaluated the newly introduced quantity "number of cameras which see the point", which expresses the degree of overlap at that point, and found that the higher this degree of overlap, the better the precision. In general, we found in our data that once the number of cameras exceeds eight, there is no further increase in accuracy.
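The "number of cameras which see the point" can be computed by projecting each reference point into every camera with a simple pinhole model. The sketch below is an illustration under assumed names and conventions (it is not the implementation used in the study) and it ignores occlusion by the stem itself, which a full visibility test would also need to handle.

```python
import numpy as np

def cameras_seeing_point(point, cameras, width, height):
    """Count the cameras whose pinhole projection of `point` falls inside
    the image and in front of the camera. Each camera is a tuple (K, R, t)
    with camera coordinates x_cam = R @ X + t and intrinsic matrix K."""
    n = 0
    for K, R, t in cameras:
        pc = R @ point + t
        if pc[2] <= 0:            # point is behind the camera
            continue
        u, v, w = K @ pc          # homogeneous pixel coordinates
        u, v = u / w, v / w
        if 0 <= u < width and 0 <= v < height:
            n += 1
    return n

# Two hypothetical cameras with the same intrinsics: one looks toward the
# point at the origin, the other has the point behind it.
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
I = np.eye(3)
cams = [(K, I, np.array([0.0, 0.0, 5.0])),    # sees the point
        (K, I, np.array([0.0, 0.0, -5.0]))]   # point behind this camera
count = cameras_seeing_point(np.zeros(3), cams, 640, 480)
```

Repeating this count for every digitized reference point yields the per-point overlap values analyzed in the trend analysis above.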

5. Conclusions

We evaluated the suitability of a handheld camera for reconstructing the tree stem surface, using the settings recommended by the manufacturer of the Agisoft PhotoScan © software, which are based on the algorithms deployed in it. The RMSE of the stem circumference relative to field measurement with a tape was 1.87 cm. Such an error corresponds to a diameter estimation error of approximately 0.59 cm, which can be considered rather good for inventory purposes; however, the large variability (standard deviation) of this error led us to a more detailed error study based on control points from the magnetic motion tracker. The magnetic motion tracker allows the measurement of points even without a line of sight from the camera positions (e.g., points behind the stem), since only proximity to the source of the magnetic field is required.
Compared with the control points, we determined that when five or more cameras could see a point, the error decreased significantly; however, eight or more cameras did not appreciably lower the error further. Based on these observations, we conclude that terrestrial multi-view photography is a promising method for forest inventory, as it can provide reliable estimates not only of the diameter at breast height but also of additional diameters at different heights.

Acknowledgments

This research was partially funded by a Grant-in-Aid for Scientific Research from the Ministry of Education, Science, Sports, and Culture of Japan (Grant No. 22252002), a project of the Ministry of Agriculture of the Czech Republic (Grant No. QJ1520187), and the Internal Grant Agency of the Faculty of Forestry and Wood Sciences, Czech University of Life Sciences in Prague (No. B07/15). The authors would like to acknowledge substantial help from Dr. Takuya Hiroshima of the University of Tokyo, Chiba School Forest, for his support in the field campaign and measurements.

Author Contributions

Peter Surový and Atsushi Yoshimoto designed the experiment and analyzed the data. Peter Surový wrote the script for point processing and evaluation. Atsushi Yoshimoto designed and performed the statistical analysis. Peter Surový, Atsushi Yoshimoto and Dimitrios Panagiotidis wrote the manuscript, the literature review and the discussion. Dimitrios Panagiotidis prepared and adjusted the figures.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Pullan, W. Structure; Cambridge University Press: Cambridge, MA, USA, 2000.
  2. Frolking, S.; Palace, M.W.; Clark, D.B.; Chambers, J.Q.; Shugart, H.H.; Hurtt, G.C. Forest disturbance and recovery: A general review in the context of spaceborne remote sensing of impacts on aboveground biomass and canopy structure. J. Geophys. Res. Biogeosci. 2009, 114.
  3. Danjon, F.; Reubens, B. Assessing and analyzing 3D architecture of woody root systems, a review of methods and applications in tree and soil stability, resource acquisition and allocation. Plant Soil 2008, 303, 1–34.
  4. Hyyppä, J.; Hyyppä, H.; Leckie, D.; Gougeon, F.; Yu, X.; Maltamo, M. Review of methods of small-footprint airborne laser scanning for extracting forest inventory data in boreal forests. Int. J. Remote Sens. 2008, 29, 1339–1366.
  5. Brolly, G.; Kiraly, G. Algorithms for stem mapping by means of terrestrial laser scanning. Acta Silv. Lign. Hung. 2009, 5, 119–130.
  6. Dassot, M.; Constant, T.; Fournier, M. The use of terrestrial LiDAR technology in forest science: Application fields, benefits and challenges. Ann. For. Sci. 2011, 68, 959–974.
  7. Huang, S.; Hager, S.A.; Halligan, K.Q.; Fairweather, I.S.; Swanson, A.K.; Crabtree, R.L. A comparison of individual tree and forest plot height derived from LiDAR and InSAR. Photogramm. Eng. Remote Sens. 2009, 75, 159–167.
  8. Zolkos, S.G.; Goetz, S.J.; Dubayah, R.A. Meta-analysis of terrestrial aboveground biomass estimation using LiDAR remote sensing. Remote Sens. Environ. 2013, 128, 289–298.
  9. Tesfamichael, S.; Ahmed, F.; Van Aardt, J.; Blakeway, F.A. Semi-variogram approach for estimating stems per hectare in Eucalyptus grandis plantations using discrete-return lidar height data. For. Ecol. Manag. 2009, 258, 1188–1199.
  10. Liang, X.; Litkey, P.; Hyyppä, J.; Kaartinen, H.; Kukko, A.; Holopainen, M. Automatic plot-wise tree location mapping using single-scan terrestrial laser scanning. Photogramm. J. Finl. 2011, 22, 37–48.
  11. Liang, X.; Kankare, V.; Yu, X.; Hyyppä, J.; Holopainen, M. Automated stem curve measurement using terrestrial laser scanning. IEEE Trans. Geosci. Remote Sens. 2014, 52, 1739–1748.
  12. Hopkinson, C.; Chasmer, L.; Young-Pow, C.; Treitz, P. Assessing forest metrics with ground-based scanning LiDAR. Can. J. For. Res. 2004, 34, 573–583.
  13. Moskal, L.M.; Zheng, G. Retrieving forest inventory variables with Terrestrial Laser Scanning (TLS) in urban heterogeneous forest. Remote Sens. 2012, 4, 1–20.
  14. Van der Zande, D.; Hoet, W.; Jonckheere, I.; Van Aardt, J.; Coppin, P. Influence of measurement set-up of ground-based LiDAR for derivation of tree structure. Agric. For. Meteorol. 2006, 141, 147–160.
  15. Bienert, A.; Scheller, S.; Keane, E.; Mullooly, G.; Mohan, F. Application of terrestrial laser scanners for the determination of forest inventory parameters. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2006, 36, Part 5.
  16. Watt, P.J.; Donoghue, D.N.M. Measuring forest structure with terrestrial laser scanning. Int. J. Remote Sens. 2005, 26, 1437–1446.
  17. Åkerblom, M.; Raumonen, P.; Kaasalainen, M.; Casella, E. Analysis of geometric primitives in quantitative structure models of tree stems. Remote Sens. 2015, 7, 4581–4603.
  18. Dick, A.R.; Kershaw, J.A.; MacLean, D.A. Spatial tree mapping using photography. North. J. Appl. For. 2010, 27, 68–74.
  19. Vastaranta, M.; Latorre, E.G.; Luoma, V.; Saarinen, N.; Holopainen, M.; Hyyppä, J. Evaluation of a smartphone app for forest sample plot measurements. Forests 2015, 6, 1179–1194.
  20. Furukawa, Y.; Ponce, J. Accurate, dense and robust multi-view stereopsis. In Proceedings of the 2007 IEEE Conference on Computer Vision and Pattern Recognition, Minneapolis, MN, USA, 18–23 June 2007; pp. 1–8.
  21. Lowe, D.G. Method and Apparatus for Identifying Scale Invariant Features in an Image and Use of Same for Locating an Object in an Image. U.S. Patent 6,711,293, 23 March 2004.
  22. Bay, H.; Ess, A.; Tuytelaars, T.; Van Gool, L. SURF: Speeded up robust features. Comput. Vis. Image Underst. 2008, 110, 346–359.
  23. Liang, X.; Jaakkola, A.; Wang, Y.; Hyyppä, J.; Honkavaara, E.; Liu, J.; Kaartinen, H. The use of a hand-held camera for individual tree 3D mapping in forest sample plots. Remote Sens. 2014, 6, 6587–6603.
  24. Miller, J.; Morgenroth, J.; Gomez, C. 3D modelling of individual trees using a handheld camera: Accuracy of height, diameter and volume estimates. Urban For. Urban Green. 2015, 14, 932–940.
  25. Yoshimoto, A.; Surový, P.; Konoshima, M.; Kurth, W. Constructing tree stem form from digitized surface measurements by a programming approach within discrete mathematics. Trees 2014, 28, 1577–1588.
  26. Neitzel, F.; Klonowski, J. Mobile 3D mapping with a low-cost UAV system. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2011, 38, 1–6.
  27. Rosnell, T.; Honkavaara, E. Point cloud generation from aerial image data acquired by a quadcopter type micro unmanned aerial vehicle and a digital still camera. Sensors 2012, 12, 453–480.
  28. Dandois, J.P.; Ellis, E.C. Remote sensing of vegetation structure using computer vision. Remote Sens. 2010, 2, 1157–1176.
  29. Lucieer, A.; Robinson, S.; Turner, D. Unmanned Aerial Vehicle (UAV) remote sensing for hyperspatial terrain mapping of Antarctic moss beds based on Structure from Motion (SfM) point clouds. In Proceedings of the 34th International Symposium on Remote Sensing of Environment (ISRSE34), Sydney, Australia, 10–15 April 2011; pp. 11–15.
  30. Morgenroth, J.; Gomez, C. Assessment of tree structure using a 3D image analysis technique—A proof of concept. Urban For. Urban Green. 2014, 13, 198–203.
  31. Harwin, S.; Lucieer, A. Assessing the accuracy of georeferenced point clouds produced via multi-view stereopsis from Unmanned Aerial Vehicle (UAV) imagery. Remote Sens. 2012, 4, 1573–1599.
  32. Surový, P.; Ribeiro, N.A.; Pereira, J.S. Observations on 3-dimensional crown growth of Stone pine. Agroforest. Syst. 2011, 82, 105–110.
  33. Polhemus Company. Fastrak Brochure. Available online: http://polhemus.com/_assets/img/FASTRAK_Brochure.pdf (accessed on 30 January 2016).
  34. Pewsey, A.; Neuhäuser, M.; Ruxton, G.D. Circular Statistics in R; Oxford University Press: Oxford, UK, 2013.
  35. Kim, H.J.; Fay, M.P.; Feuer, E.J.; Midthune, D.N. Permutation tests for joinpoint regression with applications to cancer rates. Stat. Med. 2000, 19, 335–351.
  36. Remondino, F.; Fraser, C. Digital camera calibration methods: Considerations and comparisons. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2006, 36, 266–272.
  37. Puliti, S.; Ørka, H.O.; Gobakken, T.; Næsset, E. Inventory of small forest areas using an unmanned aerial system. Remote Sens. 2015, 7, 9632–9654.
  38. Watts, A.C.; Ambrosia, V.G.; Hinkley, E.A. Unmanned aircraft systems in remote sensing and scientific research: Classification and considerations of use. Remote Sens. 2012, 4, 1671–1692.
  39. Walker, J.P.; Willgoose, G.R. A comparative study of Australian cartometric and photogrammetric digital elevation model accuracy. Photogramm. Eng. Remote Sens. 2006, 72, 771–779.
  40. Shrestha, R.L.; Carter, W.E.; Lee, M.; Finer, P.; Sartori, M. Airborne laser swath mapping: ALSM. Civil Eng. 1999, 59, 83–94.
  41. Töyrä, J.; Pietroniro, A.; Hopkinson, C.; Kalbfleisch, W. Assessment of airborne scanning laser altimetry (lidar) in a deltaic wetland environment. Can. J. Remote Sens. 2003, 29, 718–728.
  42. Farah, A.; Talaat, A.; Farrag, F. Accuracy assessment of digital elevation models using GPS. Artif. Satell. 2008, 43, 151–161.

Surový, P.; Yoshimoto, A.; Panagiotidis, D. Accuracy of Reconstruction of the Tree Stem Surface Using Terrestrial Close-Range Photogrammetry. Remote Sens. 2016, 8, 123. https://doi.org/10.3390/rs8020123
