Article

Optimization and Evaluation of Sensor Angles for Precise Assessment of Architectural Traits in Peach Trees

by Mugilan Govindasamy Raman 1, Eduardo Fermino Carlos 1,2 and Sindhuja Sankaran 1,*

1 Department of Biological System Engineering, Washington State University, Pullman, WA 99164, USA
2 Laboratory of Biotechnology, AMG, IDR-IAPAR-EMATER-Agronomic Institute of Paraná, Londrina-PR 86001-970, Brazil
* Author to whom correspondence should be addressed.
Sensors 2022, 22(12), 4619; https://doi.org/10.3390/s22124619
Submission received: 11 May 2022 / Revised: 6 June 2022 / Accepted: 14 June 2022 / Published: 18 June 2022
(This article belongs to the Special Issue Feature Papers in the Remote Sensors Section 2022)

Abstract

Fruit industries play a significant role in many aspects of global food security. They provide recognized vitamins, antioxidants, and other nutritional supplements packed in fresh fruits and in processed commodities such as juices, jams, and pies. However, many fruit crops, including peach (Prunus persica (L.) Batsch), are perennial trees requiring dedicated orchard management. The architectural and morphological traits of peach trees, notably tree height, canopy area, and canopy crown volume, help to determine yield potential and support precise orchard management. Thus, unmanned aerial vehicles (UAVs) coupled with RGB sensors can play an important role in the high-throughput acquisition of data for evaluating architectural traits. One of the main factors that defines data quality is the sensor imaging angle, which is important for extracting architectural characteristics from the trees. In this study, the goal was to optimize the sensor imaging angles to extract precise architectural trait information by evaluating the integration of nadir and oblique images. A UAV integrated with an RGB imaging sensor at three different angles (90°, 65°, and 45°) and a 3D light detection and ranging (LiDAR) system were used to acquire images of peach trees located at Washington State University's Tukey Horticultural Orchard, Pullman, WA, USA. A total of four approaches, comprising the use of 2D data (from the UAV) and 3D point clouds (from the UAV and LiDAR), were utilized to segment individual trees and measure their height and canopy crown volume. Overall, the features extracted from the images acquired at 45° and from the integrated nadir and oblique images showed a strong correlation with the ground reference tree height data, while the latter was highly correlated with the canopy crown volume. Thus, the selection of the sensor angle during UAV flight is critical for improving the accuracy of extracted architectural traits and may be useful for precision orchard management.

1. Introduction

The challenges of a growing global population, combined with unresolved rural poverty in developing areas and the need for better worldwide resource management, have raised public awareness and encouraged the search for healthier and more sustainable practices in crop production. Among these, methods for achieving higher productivity have been a constant topic in food security forums. Although yield is a result of intrinsic genetics and all of the conditions a plant experiences during the cultivation cycle, precise monitoring of those conditions may lead to better management and higher yields.
For fruit crops, management is also critical, in many cases requiring additional attention for perennial trees. Incorrect management may add years of corrective practices, and thus accurate ongoing monitoring is very important. Fruit crops, including peach trees (Prunus persica (L.) Batsch), can be cultivated under different agricultural conditions. In any tree, the canopy is the part exposed to sunlight, and external conditions affect the overall photosynthesis and respiration levels. Thus, it is important to assess tree canopy conditions at the right time for better decision-making in orchard management, as well as for further actions such as fertilization, spraying, irrigation, and pruning [1,2,3]. The architectural features of an orchard include the tree height, canopy area, and crown volume. They provide vital information about tree conditions and help to design precise inputs for fruit production [4].
Traditional orchard practices have provided a low degree of information about tree canopies and have led to various problems, including excessive use of chemicals and water [5,6,7] and imprecise overall management. In addition, conventional methods of measuring architectural tree canopy parameters such as the height, diameter, area, and volume of the crown are laborious and expensive when high accuracy is required over a large area [4]. Modern orchard management will be enabled by precise information, which will give growers better opportunities to make decisions based on actual data. Thus, there is an urgent need to enhance sensing tools that measure crucial architectural canopy information to strengthen modern and precise orchard management [2].
Remote sensing is widely employed in precision agriculture applications for orchard management [8]. The main strength of these techniques is obtaining information over large areas at different levels of precision, allowing different goals to be achieved [9]. Remote sensing platforms such as satellite systems, high- and low-altitude aircraft, and unmanned aerial vehicles (UAVs) differ in spatial–temporal resolution, surface coverage, and cost. Space-borne satellite systems are promising tools and are generally focused on the long-term monitoring and surveillance of larger areas [10]. Although potentially useful, satellite observations have limited spatial and temporal resolutions and may render heterogeneous canopy structures difficult to evaluate [11]. High- and low-altitude aircraft provide better resolution and a deeper level of detail, but usually come with greater flight planning effort and higher operational costs [12]. UAVs are small platforms with a low operational cost and higher spatial and temporal resolutions, and are capable of monitoring site-specific areas [13]. Comparing the different platforms, canopy measurement over a large orchard area can be accomplished through the joint use of UAVs and visual imaging technology [14]. Such measurements provide scientific and reliable information for decision-making and bring greater significance to the development of precision orchards [2].
UAVs with imaging systems can provide data for the digital reconstruction of individual trees in an orchard by computing 3D point clouds, large-area orthomosaics, and digital surface and terrain models, providing a basis for orchard canopy measurement. A significant number of studies have used UAV-integrated sensing systems for precision agriculture and smart farming applications, such as the estimation of leaf area density and chlorophyll content [15,16,17,18,19,20,21]. Most UAV observations are carried out at nadir, where the sensor/camera axis capturing the images is vertical. Nadir image acquisition is suitable for 2D mapping, but it may not be suitable for 3D mapping and modeling [22]. Oblique images are utilized in the reconstruction of 3D models and achieve better accuracy in terms of both point cloud density and measurements [23].
The tree row volume (TRV), total leaf area, and canopy cover were estimated for grapevine and apple orchards using aerial photogrammetry with different combinations of sensor angles and flight altitudes (15–60 m flight altitude, 65° and 75° angles) [24]. The measured TRV showed a strong relationship between aerial and ground measurements at the lowest ground sampling distance (GSD) for the grapevines (R² = 0.77 at 0.45 cm pixel⁻¹) and apple canopies (R² = 0.82 at 0.90 cm pixel⁻¹), whereas increasing the GSD weakened the relationship for both canopies. Overall, the aerial flight with a double grid mission and a sensor inclination angle of 65° provided accurate measurements and had greater significance in the development of site-specific, higher-quality, and precise canopy vigor maps [24]. Ground-based 2D and UAV-based 3D models were also used to evaluate critical architectural traits of apple trees [25]. The extracted traits, such as DBS (box-counting fractal dimension), middle branch angle, number of branches, trunk basal diameter, and tree volume, were compared with ground reference datasets associated with three training systems (spindle, V-trellis, and bi-axis), two rootstocks, and two pruning methods in the apple trees. The results indicated that DBS showed significantly higher values (associated with tree architecture complexity) for the spindle training system than the V-trellis training system, and also showed a correlation between the ground and TRV features (total fruit per unit area and trunk area) [25].
In another study, apple orchard canopy parameters such as height and volume were estimated from 3D light detection and ranging (LiDAR) data using voxel grid and convex hull techniques [26]. The heights and volumes estimated from the 3D point cloud showed a strong correlation with the manually measured height (r = 0.84) and volume (r = 0.81) [26]. Many studies, such as those described above [24], have collected UAS-RGB imagery at either nadir or oblique angles no lower than 65° and at altitudes above 30 m to measure tree architectural traits. Most of this research focused on finding the best flight mission parameters to capture and extract traits with the maximum accuracy relative to ground measurements. To our knowledge, no study has yet explored the integration of data collected at different angles combined with a low altitude, especially for evaluating tree architectural traits. However, high-resolution topography of quarries has been reconstructed by combining nadir and off-nadir imagery, achieving better accuracy in the 3D reconstruction of surfaces [27].
Therefore, the overall goal of this study was to digitally reconstruct the architectural traits of peach trees for precise orchard assessment through the optimization and integration of sensor angles, using a UAV platform integrated with RGB imaging and a ground-based LiDAR sensing approach.

2. Materials and Methods

2.1. Study Area and Ground Reference Data

The study was carried out at the Washington State University's Tukey Horticultural Orchard (46°43′52.88″ N, 117°8′29.09″ W), Pullman, WA, USA. Two rows of peach trees (Prunus persica (L.) Batsch) were selected for this study, from which 20 trees were analyzed. Data collection was performed with a UAV integrated with an RGB camera and with a LiDAR system. Ground reference data corresponding to the individual tree height, as well as the longitudinal and transversal width, were collected manually using a measurement scale.

2.2. UAV Imagery and LiDAR Data Acquisition

Photogrammetric analysis of the UAV data was used to extract the architectural traits of the peach trees in the orchard. The UAV system (Model: Phantom 4 Pro, SZ DJI Technology Co. Ltd., Shenzhen, China), equipped with an RGB camera (SZ DJI Technology Co. Ltd., Shenzhen, China; 20 megapixel, 84° field of view), was operated using the Pix4Dcapture ground control station, which enables planning of the flight missions and flight parameters for UAV systems. A total of three missions were set up in Pix4Dcapture by defining the boundary of the study area, the flight path, the flight altitude, the speed, and the forward and side overlap (Figure 1). All three missions were carried out under cloud-free conditions, and the parameters were the same for all three missions except the sensor inclination angle (Table 1).
A 3D LiDAR system (VLP 16, Velodyne LiDAR, San Jose, CA, USA) was also used to generate the 3D point cloud of individual trees at ground level. The sensor has a range of 100 m, produces up to ~600,000 points/s, and provides a 360° horizontal and 30° vertical field of view; data were acquired at a distance of 1 m from each tree [26]. The acquired 3D point cloud was saved in .pcap (Packet Capture) format and was visualized using the dedicated software, Veloview.

2.3. Preprocessing of UAV Images and 3D LiDAR Data

After image acquisition, the stereo-paired images were stitched using Pix4Dmapper (version 4.3.31, Pix4D, Lausanne, Switzerland) to generate the 3D point clouds, digital surface model (DSM), digital terrain model (DTM), and orthomosaic images for each individual flight mission. Each mission was processed separately, and all of the stereo-paired images acquired from the three missions were also processed in combination. The pre-processing of UAV datasets in Pix4Dmapper includes computing key points, computing match points, calibration, matching, and point-cloud densification based on the geotagged information (longitude, latitude, and altitude) of the waypoints. The 3D point cloud (.las format), DSM, DTM, and orthomosaic images for each individual angle and for the integrated angles (with a GSD of 0.40 cm/pixel) were generated for the extraction of the architectural features of the individual trees. The integrated dataset was created by setting the images acquired at the different angles as the input in the Pix4Dmapper software. The .las 3D point cloud was converted to .csv format using CloudCompare software to calculate the canopy information. Veloview, the 3D LiDAR point cloud visualization software, which can analyze and measure 3D point clouds, was used to visualize live 3D point cloud scenes, allowing the user to select a frame and export it to .csv format.
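The .las-to-.csv conversion above was performed in the CloudCompare GUI. As an illustrative alternative only (not used in the study), a minimal Python sketch with the laspy library could export the same x, y, z columns; the file names here are hypothetical.

```python
import numpy as np
import laspy  # assumes laspy >= 2.0; the study used CloudCompare instead

# "integrated_angles.las" is a hypothetical name for a densified Pix4D point cloud
las = laspy.read("integrated_angles.las")
xyz = np.column_stack((np.asarray(las.x), np.asarray(las.y), np.asarray(las.z)))

# Export x, y, z columns to .csv for downstream canopy trait extraction
np.savetxt("integrated_angles.csv", xyz, delimiter=",", header="x,y,z", comments="")
```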

2.4. Extraction of Architectural Features

This study focused on measuring the tree height and canopy crown volume using 2D images and 3D point clouds. The 2D canopy height models were generated using the DTM obtained directly from Pix4D (technique 1, referred to as T1) and a DTM derived by point sampling from the DSM (technique 2, referred to as T2; similar to that reported in [28]) for the individual sensor angle and integrated angle images, and were used for measuring the tree height and canopy crown volume (Figure 2). In addition, UAV-based and LiDAR point cloud data were also used to extract the same features.

2.4.1. Height and Volume Estimation from 2D Datasets

A geospatial processing platform, ArcMap 10.7 (Environmental Systems Research Institute, Redlands, CA, USA), was used to estimate the canopy height model (CHM) by subtracting the digital terrain model (DTM) from the digital surface model (DSM). The DSM represents the Earth's surface including all objects on it (trees, weeds, trellis, etc.), whereas the DTM represents the Earth's surface without any objects (the soil surface as a baseline). In this study, two techniques were used to generate the canopy height model: one used the DTM generated by Pix4D (T1 approach) and the other used a DTM generated with the point sampling tool in ArcMap 10.7 (T2 approach).
In ArcMap 10.7, the point-sampling DTM was created by plotting random points on the ground surface and extracting the terrain elevation associated with those points. The extracted point data were interpolated (inverse distance weighting method) to generate a DTM raster. The CHM from both techniques was then reclassified to separate the trees from the other features. The extracted tree canopy area was delineated with three shapes, namely a polygon, a bounding box, and a circle (Figure 3). The tree height was extracted as the maximum CHM value within the individual tree canopy area of the defined shape using zonal statistics. Similarly, the volume (V) of the tree canopy crown was computed by setting the crown height to 0.80 times the maximum tree height, such that the canopy height (excluding the trunk height) was used for measuring the volume (square or circular pyramid) using Equations (1) and (2) for the polygon/bounding box and circle boundary areas, respectively. The ratio of the canopy height to the maximum tree height was finalized based on the manual measurements.
$V = \frac{1}{3}(A \times H)$  (1)
$V = \frac{\pi r^{2} \times H}{3}$  (2)
where A is the area of the polygon or bounding box, r is the radius of the circle, and H is the maximum height of the tree.
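To make Equations (1) and (2) concrete, the following Python sketch (an illustration, not the authors' ArcMap workflow; array inputs and function names are assumed) computes a CHM by subtracting the DTM from the DSM and derives the crown volume from the zonal maximum height of one tree.

```python
import numpy as np

def canopy_height_model(dsm, dtm):
    """CHM = DSM - DTM, clipped at zero; both rasters as aligned numpy arrays."""
    return np.maximum(dsm - dtm, 0.0)

def crown_volume(chm_tile, boundary="circle", area=None, radius=None):
    """Pyramid-style crown volume per Equations (1) and (2).

    chm_tile: CHM cells inside one tree boundary (the zonal extract).
    The crown height is taken as 0.80 x the maximum tree height (trunk
    excluded), per the ratio described in the text.
    """
    h = 0.80 * np.nanmax(chm_tile)            # crown height from zonal maximum
    if boundary == "circle":
        return np.pi * radius**2 * h / 3.0    # Eq. (2): circular boundary
    return area * h / 3.0                     # Eq. (1): polygon or bounding box
```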

2.4.2. Height and Volume Estimation from UAV-Based 3D Point Cloud

The .csv files converted using CloudCompare software (http://www.cloudcompare.org/, accessed on 12 September 2021) were utilized for the tree height and canopy crown volume measurements. Using MATLAB (R2016a, MathWorks Inc., Natick, MA, USA), a semi-automated algorithm was developed to process the data and extract the architectural traits. The processing steps were as follows: defining the matrix, rotating the point cloud using an affine transformation to rectify the angular bias, and filtering unwanted 3D points. The individual trees were segmented using manually defined boundary points based on the latitude (X) and longitude (Y) extent of each tree. Finally, the architectural traits of each tree were computed from the 3D point cloud: the height was obtained by subtracting the minimum Z value from the maximum Z value (H = Zmax − Zmin), and the volume (using the voxel grid technique, for the canopy crown only) was measured using the tetramesh function in MATLAB. The tetramesh function creates a 3D tetrahedral mesh across the boundary of the canopy crown by connecting the extent of the matrix. The canopy crown volume was evaluated by comparing the extracted data with the manual measurements.
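A minimal Python analogue of the MATLAB steps above, under stated assumptions: a Z-axis rotation stands in for the affine bias correction, the rotation angle and X/Y extents are hypothetical values, and a convex hull volume stands in for the tetramesh-based crown volume.

```python
import numpy as np
from scipy.spatial import ConvexHull

def rotate_about_z(points, angle_deg):
    """Affine rotation about the Z axis to rectify the angular bias of the cloud."""
    a = np.radians(angle_deg)
    rot = np.array([[np.cos(a), -np.sin(a), 0.0],
                    [np.sin(a),  np.cos(a), 0.0],
                    [0.0,        0.0,       1.0]])
    return points @ rot.T

def segment_tree(points, x_min, x_max, y_min, y_max):
    """Keep points inside one tree's manually defined X/Y extent."""
    keep = ((points[:, 0] >= x_min) & (points[:, 0] <= x_max) &
            (points[:, 1] >= y_min) & (points[:, 1] <= y_max))
    return points[keep]

cloud = np.loadtxt("integrated_angles.csv", delimiter=",", skiprows=1)  # x, y, z
tree = segment_tree(rotate_about_z(cloud, 2.5), 0.0, 2.0, 0.0, 2.0)     # hypothetical values
height = tree[:, 2].max() - tree[:, 2].min()                            # H = Zmax - Zmin
crown = tree[tree[:, 2] > tree[:, 2].min() + 0.2 * height]              # assumed trunk cut-off
volume = ConvexHull(crown).volume   # hull volume as a stand-in for the tetramesh volume
```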

2.4.3. Height and Volume Estimation from LiDAR System-Based 3D Point Cloud

The .pcap files were converted into .csv format using CloudCompare software. The data processing and extraction of the tree height and canopy crown volume were performed in MATLAB (R2016a, MathWorks Inc., Natick, MA, USA). The background noise and ground points were eliminated prior to measuring the tree height and canopy crown volume. The acquired point cloud had an angular bias, which was corrected by an affine transformation, and background noise was filtered by downsampling the point cloud. The individual trees were segmented by providing the width and length of the individual trees as thresholds. The tree height was measured by subtracting the minimum Z value from the maximum Z value (the topmost level of the tree). The canopy crown volume was estimated using the voxel grid technique, with the canopy crown alone, rather than the whole tree, used for the estimation. The voxel grid technique created a boundary fitted around the structure of the crown and measured the volume of the canopy crown.
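A sketch of the voxel grid volume estimate described here, assuming the crown points are already isolated in an N×3 array in meters; the 0.05 m voxel size is an assumed value, not taken from the paper.

```python
import numpy as np

def voxel_grid_volume(crown_points, voxel_size=0.05):
    """Sum the volume of all voxels occupied by at least one crown point."""
    voxel_idx = np.floor(crown_points / voxel_size).astype(np.int64)  # index per point
    n_occupied = np.unique(voxel_idx, axis=0).shape[0]                # distinct voxels
    return n_occupied * voxel_size ** 3                               # m^3 for meter input
```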

2.5. Statistical Analysis

Pearson's correlation analysis was performed in R (version 4.1.1) to compute the correlation coefficients at three levels of significance (5%, 1%, and 0.1%) between the ground reference data and the estimated tree height and canopy crown volume under each scenario (individual and integrated nadir and oblique angles). As one tree in the second row was unusually large, its data were eliminated to reduce its effect on the correlation results (n = 19). The data from each technique, including the ground reference data (manual measurements), were presented as violin box plots to visualize the variation between the measurements of both the tree height and canopy crown volume. In the violin box plots, the median is represented as a red dot, the box ends represent the first and third quartiles (the interquartile range), and the violin tips represent the upper and lower values.
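The analysis was performed in R; an equivalent Python sketch using scipy, with the same three significance levels, is shown below (the input vectors are placeholders).

```python
from scipy.stats import pearsonr

def pearson_with_stars(ground, estimated):
    """Pearson's r and its significance flag at the 5%, 1%, and 0.1% levels."""
    r, p = pearsonr(ground, estimated)
    stars = "***" if p < 0.001 else "**" if p < 0.01 else "*" if p < 0.05 else "ns"
    return r, p, stars

# Example: ground reference heights vs. heights from the 45-degree dataset (n = 19)
# r, p, stars = pearson_with_stars(ground_heights, estimated_heights_45)
```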

3. Results

3.1. UAV Data Analysis

3.1.1. Tree Segmentation

A total of three UAV missions were conducted at three sensor inclination angles (90°, 65°, and 45°) to capture stereo-paired images. The images from each angle were processed individually using photogrammetric software, and the nadir (90°) and oblique (65° and 45°) images were also processed together. The data types generated were the DTM, DSM, orthomosaic, and 3D point cloud for each condition (45°, 65°, 90°, and integrated data). Using ArcMap 10.7, the orthomosaic imagery was used to identify the 19 peach trees and to segment them by creating a boundary for each tree as a shapefile (.shp) for the 2D data analyses (Figure 4a). The 3D point cloud was segmented using the latitude–longitude extent of the individual trees (Figure 4b,c).

3.1.2. Tree Height Estimation

The tree height was estimated from both the CHM and the 3D point cloud data generated for each sensor inclination angle and for the integrated datasets. For the CHM approach, two data sources (T1 from Pix4Dmapper and T2 from the point sampling approach) were evaluated (Figure 5). For each tree, a polygon capturing the canopy area was created, in addition to a bounding box and a circle delineating the tree boundary on the canopy height model. The tree height was estimated from both CHMs (generated using the T1 and T2 approaches) using the polygon, bounding box, and circular canopy areas.
The maximum tree height based on the polygon, bounding box, and circular canopy areas did not differ for either the T1 or T2 dataset. The manually measured peach tree height (ground reference data) ranged from 2.13 to 3.05 m, with a median of 2.70 m. In both the T1- and T2-based tree height estimations (Figure 6 and Figure 7), the tree heights from the images acquired at 45° and from the integrated nadir and oblique images showed a similar trend and a high correlation with the ground reference data. The correlation of the T1-based tree height estimation with the ground reference data was slightly higher than that of the T2-based estimation.
In the 3D point cloud dataset, similar to the T1- and T2-based approaches, the tree heights from the data acquired at 45° and from the integrated nadir and oblique images were more closely associated with the ground reference data than those from the data acquired at 65° and 90° (Figure 8). The tree height from the integrated datasets was also highly correlated with the 45° datasets (r ≥ 0.92) in all three approaches (T1, T2, and point cloud).
Among all three techniques (T1, T2, and point cloud), the data extracted from the 45° and integrated nadir and oblique datasets showed a higher correlation and significant p-values. Comparing the T1-, T2-, and point cloud-based approaches, the correlation with the ground reference data was marginally higher in the T1-based approach (45° and integrated datasets), while the median tree height was closest to the ground reference data in the 45° dataset. The slope in most cases (T1, T2, and point cloud; 45° and integrated datasets) was close to one (Figure 9). Given the efforts involved in data acquisition, processing, and analysis, a 45° sensor inclination angle may be beneficial. Overall, for orchard peach trees, tree height can be estimated with higher accuracy at a 45° sensor inclination angle.

3.1.3. Canopy Crown Volume Estimation

In the T1- and T2-based approaches, the volume of the canopy crown was measured using the canopy height (excluding the trunk length) instead of the whole tree height for the polygon, bounding box, and circular canopy areas (Figure 10 and Figure 11). Only the 45° and integrated datasets were analyzed, as these had produced the most accurate tree height estimates. Among the volumes extracted from the polygon, bounding box, and circle-shaped canopy areas of the 45° and integrated datasets, the circular and bounding box areas with the integrated dataset showed a higher similarity to the ground reference data than the polygon-shaped canopy areas for the T1-based approach. However, the polygon showed a similar relationship to the ground reference data with both datasets (45° and integrated) for the T2-based approach. The best results (highest correlation with the ground reference data) were obtained for the canopy crown volume extracted from the integrated nadir and oblique dataset (especially with the circular canopy area) using the T1-based approach.
In the 3D point cloud dataset, the volume was extracted by delineating the canopy crown from the tree height using a threshold for each tree and measuring the volume of the boundary with the voxel grid (Figure 12). The 3D point cloud-based canopy crown volume showed the highest correlation with the ground reference data for the 45° dataset, followed by the integrated nadir and oblique dataset. The lower correlations for the T1- and T2-based approaches could be associated with a mismatch with the ground reference data [26]. Among the three techniques (T1, T2, and 3D point cloud), the T1-based approach with the integrated nadir and oblique datasets showed the strongest relationship with the ground reference data.

3.2. LiDAR Data Analysis

In addition to the UAV data, LiDAR data from 17 trees were acquired to measure the tree height and canopy crown volume (Figure 13). The canopy crown volume was estimated using the voxel grid technique. The estimated tree height (r = 0.98, R² = 0.96) and canopy crown volume (r = 0.87, R² = 0.77) showed strong correlations with the ground reference data (Figure 14).
Comparing the tree heights estimated from the UAV data for the 17 trees with the LiDAR data indicated that, for all datasets, the correlation coefficients were similar to those with the ground reference data. This could be because of the strong correlation between the ground reference and LiDAR-based tree height data. However, similar results were not observed for the canopy crown volume. The canopy crown volumes (from the integrated and 45° datasets) extracted from the UAV data were not as highly correlated with the LiDAR data as with the ground reference data. This could result from the nature of the ground reference measurements, where the UAV data may relate more directly to the ground reference data than the LiDAR data. Further evaluation of these aspects needs to be performed.

4. Discussion

In recent years, researchers have utilized sensor-based technology coupled with UAV systems in various applications for monitoring and decision-making in orchard management [29]. UAV sensing techniques have been used to extract the architectural traits in tree crops, such as canopy height, volume, and other structures [30,31]. This study focused on the importance and need for optimizing flight and sensor parameters in the extraction of architectural traits such as tree height and canopy crown volume. In general, UAV flight variables such as flight altitude, image overlap, flying direction, flying speed, and solar elevation and azimuth affect the image quality [32].
In this study, three UAV missions at different sensor inclination angles (45°, 65°, and 90°) and the integration of all angles (nadir and oblique) were assessed for the accurate measurement of the architectural traits of individual trees using three different techniques. The results showed that the features extracted from the images collected at a sensor inclination of 45° and from the integration of nadir and oblique images exhibited the strongest correlation with the ground reference data, followed by the 65° and 90° angles. Similar research findings showed that images acquired at oblique angles improved the 3D reconstruction of the forest canopy: the oblique images increased the understory point density, and the accuracies of the canopy crown percentage and tree height increased by 33% and 50%, respectively [33].
In the tree height measurements, all UAV-based approaches (T1, T2, and point cloud) showed similar results for the 45° and integrated datasets. The best accuracy was found for the heights derived from the LiDAR data, although the results from the UAV-based approaches were reasonable, with slopes close to one. Regarding the canopy crown volume, the Pix4D-derived DTM (T1) showed a strong correlation with the ground reference data for the integrated nadir and oblique datasets. Overall, collecting oblique images and integrating datasets can improve the precision of architectural trait measurement. The most significant improvements were found when including the oblique imagery for canopy representation [34]. The 3D LiDAR point cloud also showed a strong relationship with the ground reference data.

5. Conclusions

High-resolution RGB imagery acquired at nadir and oblique angles was processed, and orthomosaic, DSM, DTM, 3D UAV point cloud, and LiDAR point cloud datasets were generated to measure the peach tree height and canopy crown volume. The study evaluated the accuracy of the tree height and canopy crown volume estimation at each angle and with the integrated nadir and oblique imagery datasets. The results were validated with statistical analyses comparing the extracted features to the ground reference data (manual measurements). Overall, the images acquired at 45° and the integrated nadir and oblique images yielded accurate tree height estimations using all three UAV-based approaches (T1, T2, and point cloud), with the integrated dataset proving useful in extracting an accurate canopy crown volume. The 3D LiDAR point cloud also showed a very high correlation for both the tree height and canopy crown volume. The change in sensor inclination angles and the integration of multiple angular datasets did affect the measurement accuracy. Nevertheless, further studies are recommended to assess changes in flying altitude with respect to different sensor inclination angles and their effect on the accuracy of the estimation of tree canopy attributes.

Author Contributions

Conceptualization, M.G.R. and S.S.; methodology, M.G.R., E.F.C. and S.S.; validation, M.G.R., E.F.C. and S.S.; formal analysis, M.G.R.; investigation, M.G.R. and S.S.; resources, S.S.; data curation, M.G.R.; writing—original draft preparation, M.G.R.; writing—review and editing, M.G.R., E.F.C. and S.S.; visualization, M.G.R.; supervision, S.S.; project administration, S.S.; funding acquisition, S.S. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the US Department of Agriculture National Institute of Food and Agriculture (USDA-NIFA) competitive project (accession number 1011741) and Hatch project (accession number 1014919), and by Washington State University's College of Agricultural, Human, and Natural Resource Sciences' Emerging Research Issues competitive grant (ERI-20-04).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data presented in this study are available upon request from the corresponding author.

Acknowledgments

Sincere thanks to Milton Valencia Ortiz, Worasit Sangjan, and Kesevan Veloo for their valuable help during data collection. Special thanks to Bradley Jaeckel for allowing us to use Washington State University’s Tukey Horticultural Orchard Farm.

Conflicts of Interest

The authors declare that the research was conducted in the absence of any potential conflict of interest.

References

  1. Zhang, C.; Valente, J.; Kooistra, L.; Guo, L.; Wang, W. Orchard management with small unmanned aerial vehicles: A survey of sensing and analysis approaches. Precis. Agric. 2021, 22, 2007–2052.
  2. Sun, G.; Wang, X.; Ding, Y.; Lu, W.; Sun, Y. Remote measurement of apple orchard canopy information using unmanned aerial vehicle photogrammetry. Agronomy 2019, 9, 774.
  3. Underwood, J.P.; Hung, C.; Whelan, B.; Sukkarieh, S. Mapping almond orchard canopy volume, flowers, fruit, and yield using lidar and vision sensors. Comput. Electron. Agric. 2016, 130, 83–96.
  4. Ghanbari Parmehr, E.; Amati, M. Individual tree canopy parameters estimation using UAV-based photogrammetric and LiDAR point clouds in an urban park. Remote Sens. 2021, 13, 2062.
  5. Garcia-Ruiz, F.; Sankaran, S.; Maja, J.M.; Lee, W.S.; Rasmussen, J.; Ehsani, R. Comparison of two aerial imaging platforms for identification of Huanglongbing-infected citrus trees. Comput. Electron. Agric. 2013, 91, 106–115.
  6. Pforte, F.; Selbeck, J.; Hensel, O. Comparison of two different measurement techniques for automated determination of plum tree canopy cover. Biosyst. Eng. 2012, 113, 325–333.
  7. Torres-Sánchez, J.; de Castro, A.I.; Pena, J.M.; Jimenez-Brenes, F.M.; Arquero, O.; Lovera, M.; Lopez-Granados, F. Mapping the 3D structure of almond trees using UAV acquired photogrammetric point clouds and object-based image analysis. Biosyst. Eng. 2018, 176, 172–184.
  8. Gallo, R.; Grigolato, S.; Cavalli, R.; Mazzetto, F. GNSS-based operational monitoring devices for forest logging operation chains. J. Agric. Eng. 2013, 44, e27.
  9. Di Gennaro, S.F.; Nati, C.; Dainelli, R.; Pastonchi, L.; Berton, A.; Toscano, P.; Matese, A. An automatic UAV based segmentation approach for pruning biomass estimation in irregularly spaced chestnut orchards. Forests 2020, 11, 308.
  10. Dash, J.P.; Pearse, G.D.; Watt, M.S. UAV multispectral imagery can complement satellite data for monitoring forest health. Remote Sens. 2018, 10, 1216.
  11. Campos, I.; Neale, C.M.; Calera, A.; Balbontín, C.; González-Piqueras, J. Assessing satellite-based basal crop coefficients for irrigated grapes (Vitis vinifera L.). Agric. Water Manag. 2010, 98, 45–54.
  12. Matese, A.; Toscano, P.; Di Gennaro, S.F.; Genesio, L.; Vaccari, F.P.; Primicerio, J.; Gioli, B. Intercomparison of UAV, aircraft, and satellite remote sensing platforms for precision viticulture. Remote Sens. 2015, 7, 2971–2990.
  13. Yu, L.; Huang, J.; Zong, S.; Huang, H.; Luo, Y. Detecting shoot beetle damage on Yunnan pine using Landsat time-series data. Forests 2018, 9, 39.
  14. Aasen, H.; Honkavaara, E.; Lucieer, A.; Zarco-Tejada, P.J. Quantitative remote sensing at ultra-high resolution with UAV spectroscopy: A review of sensor technology, measurement procedures, and data correction workflows. Remote Sens. 2018, 10, 1091.
  15. Zhang, X.; Ren, Y.; Yin, Z.Y.; Lin, Z.; Zheng, D. Spatial and temporal variation patterns of reference evapotranspiration across the Qinghai-Tibetan Plateau during 1971–2004. J. Geophys. Res. Atmos. 2009, 114, D15105.
  16. Hardin, P.J.; Jensen, R.R. Small-scale unmanned aerial vehicles in environmental remote sensing: Challenges and opportunities. GIScience Remote Sens. 2011, 48, 99–111.
  17. Knoth, C.; Klein, B.; Prinz, T.; Kleinebecker, T. Unmanned aerial vehicles as innovative remote sensing platforms for high-resolution infrared imagery to support restoration monitoring in cut-over bogs. Appl. Veg. Sci. 2013, 16, 509–517.
  18. Linchant, J.; Lisein, J.; Semeki, J.; Lejeune, P.; Vermeulen, C. Are unmanned aircraft systems (UASs) the future of wildlife monitoring? A review of accomplishments and challenges. Mamm. Rev. 2015, 45, 239–252.
  19. Whitehead, K.; Hugenholtz, C.H.; Myshak, S.; Brown, O.; LeClair, A.; Tamminga, A.; Barchyn, T.E.; Moorman, B.; Eaton, B. Remote sensing of the environment with small, unmanned aircraft systems (UASs), part 2: Scientific and commercial applications. J. Unmanned Veh. Syst. 2014, 2, 86–102.
  20. Shahbazi, M.; Théau, J.; Ménard, P. Recent applications of unmanned aerial imagery in natural resource management. GIScience Remote Sens. 2014, 51, 339–365.
  21. Wallace, L.; Lucieer, A.; Watson, C.; Turner, D. Development of a UAV-LiDAR system with application to forest inventory. Remote Sens. 2012, 4, 1519–1543.
  22. Persad, R.A.; Armenakis, C. Alignment of point cloud DSMs from TLS and UAV platforms. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2015, 40, 369.
  23. Vacca, G.; Dessì, A.; Sacco, A. The use of nadir and oblique UAV images for building knowledge. ISPRS Int. J. Geo-Inf. 2017, 6, 393.
  24. Sinha, R.; Quirós, J.J.; Sankaran, S.; Khot, L.R. High resolution aerial photogrammetry-based 3D mapping of fruit crop canopies for precision inputs management. Inf. Process. Agric. 2021, 9, 11–23.
  25. Zhang, C.; Serra, S.; Quirós-Vargas, J.; Sangjan, W.; Musacchi, S.; Sankaran, S. Non-invasive sensing techniques to phenotype multiple apple tree architectures. Inf. Process. Agric. 2022.
  26. Chakraborty, M.; Khot, L.R.; Sankaran, S.; Jacoby, P.W. Evaluation of mobile 3D light detection and ranging based canopy mapping system for tree fruit crops. Comput. Electron. Agric. 2019, 158, 284–293.
  27. Rossi, P.; Mancini, F.; Dubbini, M.; Mazzone, F.; Capra, A. Combining nadir and oblique UAV imagery to reconstruct quarry topography: Methodology and feasibility analysis. Eur. J. Remote Sens. 2017, 50, 211–221.
  28. Zhang, C.; Craine, W.A.; McGee, R.J.; Vandemark, G.J.; Davis, J.B.; Brown, J.; Sankaran, S. High-throughput phenotyping of canopy height in cool-season crops using sensing techniques. Agron. J. 2021, 113, 3269–3280.
  29. Zhang, C.; Valente, J.; Kooistra, L.; Guo, L.; Wang, W. Opportunities of UAVs in orchard management. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2019, XLII-2/W13, 673–680.
  30. Rosell, J.R.; Sanz, R. A review of methods and applications of the geometric characterization of tree crops in agricultural activities. Comput. Electron. Agric. 2012, 81, 124–141.
  31. Sangjan, W.; Sankaran, S. Phenotyping architecture traits of tree species using remote sensing techniques. Trans. ASABE 2021, 64, 1611–1624.
  32. Tu, Y.H.; Phinn, S.; Johansen, K.; Robson, A.; Wu, D. Optimising drone flight planning for measuring horticultural tree crop structure. ISPRS J. Photogramm. Remote Sens. 2020, 160, 83–96.
  33. Díaz, G.M.; Mohr-Bell, D.; Garrett, M.; Muñoz, L.; Lencinas, J.D. Customizing unmanned aircraft systems to reduce forest inventory costs: Can oblique images substantially improve the 3D reconstruction of the canopy? Int. J. Remote Sens. 2020, 41, 3480–3510.
  34. Wallace, L.; Bellman, C.; Hally, B.; Hernandez, J.; Jones, S.; Hillman, S. Assessing the ability of image-based point clouds captured from a UAV to measure the terrain in the presence of canopy cover. Forests 2019, 10, 284.
Figure 1. (a) Graphical representation of UAV image acquisition for each flight mission at each sensor angle (nadir and oblique angles) and (b) different tile points for three different imaging angles. The blue points indicate the GPS positions from the UAV (initial camera position) and the green points are the calibrated positions extracted using the Pix4Dmapper.
Figure 2. The methodology utilized in this study to extract the canopy architectural traits from the individual trees using UAV and LiDAR data.
Figure 3. Individual tree boundary (polygon, bounding box, and circle) used to extract the tree height and canopy crown volume.
Figure 4. (a) Boundary representing the individual peach trees for segmentation, (b) 3D point cloud, and (c) 3D point cloud of an individual tree after segmentation.
Figure 5. Canopy height model generated using the T1 (Pix4D) and T2 (point sampling) approaches.
Figure 6. (a) Violin box plot showing the tree height range (m) and the (b) correlation matrix showing the correlation between the ground reference data and estimated T1-based tree height at individual angles (45°, 65°, and 90°) and the integration of nadir and oblique images. Significant probability level: * 0.05, ** 0.01, and *** 0.001.
Figure 7. (a) Violin box plot showing the tree height range (m) and (b) correlation matrix showing the correlation between the ground reference data and estimated T2-based tree height at individual angles (45°, 65°, and 90°) and the integration of nadir and oblique images. Significant probability level: * 0.05, ** 0.01, and *** 0.001.
Figure 8. (a) Violin box plot showing the tree height range (m) and (b) correlation matrix showing the correlation between the ground reference data and UAV point cloud-based estimated tree height at individual angles (45°, 65°, and 90°) and the integration of nadir and oblique images. Significant probability level: ** 0.01, and *** 0.001.
Figure 9. Relationship between the ground reference data and estimated tree height (a) and volume (b) acquired at 45° and the integrated nadir and oblique datasets using T1 (circular), T2 (polygon), and point cloud datasets, respectively.
Figure 10. (a) Violin box plot showing the canopy crown volume range (m³) and (b) the correlation matrix showing the correlation between the ground reference data and the estimated volume with the 45° sensor angle dataset using the T1, T2, and point cloud (PC) UAV-based approaches, with the canopy area estimated using the polygon, box, and circular canopy areas. Significant probability level: * 0.05, ** 0.01, and *** 0.001.
Figure 11. (a) Violin box plot showing the canopy crown volume range (m³) and (b) the correlation matrix showing the correlation between the ground reference data and the estimated volume with the integrated sensor angle dataset using the T1, T2, and point cloud (PC) UAV-based approaches, with the canopy area estimated using the polygon, box, and circular canopy areas. Significant probability level: * 0.05, ** 0.01, and *** 0.001.
Figure 12. (a,b) Tree canopy crown delineation and (c) voxel grid model creating the boundary to measure the volume of the canopy crown.
Figure 13. Visualization of a representative peach tree 3D data captured using the 3D LiDAR system. The color scale refers to height (z) data in m above sea level.
Figure 14. Relationship between the ground reference data with the estimated tree height (a) and tree volume (b) data acquired using the point cloud dataset from the LiDAR system.
Table 1. Summary of parameters used for UAV flight.
Mission | Type        | Altitude | Sensor Inclination | GSD (cm/pixel) | Flight Speed | Overlap (Forward/Side)
1       | Double grid | 15 m     | 90°                | 0.29           | 2.5 m/s      | 80%
2       | Double grid | 15 m     | 65°                | 0.37           | 2.5 m/s      | 80%
3       | Double grid | 15 m     | 45°                | 0.81           | 2.5 m/s      | 80%