Article

Structural Component Phenotypic Traits from Individual Maize Skeletonization by UAS-Based Structure-from-Motion Photogrammetry

by Monica Herrero-Huerta 1,2,*, Diego Gonzalez-Aguilera 1 and Yang Yang 2

1 Department of Cartographic and Land Engineering, Higher Polytechnic School of Avila, Universidad de Salamanca, Hornos Caleros 50, 05003 Avila, Spain
2 Institute for Plant Sciences, College of Agriculture, Purdue University, West Lafayette, IN 47906, USA
* Author to whom correspondence should be addressed.
Drones 2023, 7(2), 108; https://doi.org/10.3390/drones7020108
Submission received: 22 December 2022 / Revised: 27 January 2023 / Accepted: 3 February 2023 / Published: 4 February 2023

Abstract

The bottleneck in plant breeding programs is the lack of cost-effective, high-throughput phenotyping methodologies to efficiently describe the new lines and hybrids developed. In this paper, we propose a fully automatic approach that addresses both individual maize plant extraction and the quantification of structural component traits from unmanned aerial system (UAS) imagery. The experimental setup was carried out at the Indiana Corn and Soybean Innovation Center at the Agronomy Center for Research and Education (ACRE) in West Lafayette (IN, USA). On 27 July and 3 August 2021, two flights were performed over maize trials using a custom-designed UAS platform with a Sony Alpha ILCE-7R photogrammetric sensor onboard. The RGB images were processed using a standard photogrammetric pipeline based on structure from motion (SfM) to obtain a final scaled 3D point cloud of the study field. Individual plants were extracted by, first, semantically segmenting the point cloud into ground and maize using 3D deep learning and, second, applying a connected component algorithm to the maize end-members. Finally, once individual plants were accurately extracted, a Laplacian-based contraction skeleton algorithm was robustly applied to compute several structural component traits from each plant. Phenotypic traits such as plant height and number of leaves show determination coefficients (R2) better than 90% against on-field and digital measurements, respectively. Our test trial reveals the viability of extracting several phenotypic traits of individual maize plants using a skeletonization approach on the basis of a UAS imagery-based point cloud. As a limitation of the proposed methodology, we highlight that plant occlusions in the UAS images reduce the completeness of the plant point cloud and, consequently, the accuracy of the extracted traits.

1. Introduction

Nowadays, climate change and environmental degradation are increasing the risk of fiber, fuel and food insecurity; cost-effective phenotyping methods are needed to meet this challenge. Plant traits serve as features that highlight associations between genetic or physiological characteristics [1] and are imperative to plant breeding programs, biomass and yield estimation [2,3] and growth simulation [4]. Traditionally, phenotypic data have been manually measured in the field, which is time-consuming, labor-intensive and error-prone, not to mention destructive. The demand for precision agriculture and the development of close-range remote sensing technology make image-based methods the solution to the challenge of extracting phenotypic traits related to plant physiology and structure [2], yield-related traits [3], canopy cover [5] or root architecture [6,7]. Another current challenge for plant phenotyping is to accurately extract, with high throughput, the structural components, usually composed of the root, stem, leaf, flower, fruit and seed [8]. Structural component traits are directly connected to functional phenomics, an emerging discipline leading to an increased understanding of plant functioning by leveraging high-throughput phenotyping and data analytics [9].
Evaluating the information encoded in the shape of a plant is vital to understanding the function of plant organs [10]. A powerful shape descriptor of plant networks is the skeleton, easily computed from imaging data [11]. The skeleton opens a wide range of possibilities for quantitative phenotyping at the plant level, including the description of hierarchies and branching plant networks. In the literature, there are several methods to extract the curve-skeleton from a solid, usually classified into two key types: volumetric and geometric [12]. This classification relies on the solid's representation, i.e., whether an interior or a surface representation is used. Volumetric approaches normally use a discrete volumetric representation, either a regularly partitioned voxelized representation or a discretized function defined in 3D space. The potential loss of detail within the solid and numerical instability due to an inappropriate discretization resolution are the general disadvantages of these methods [13]. Geometric approaches, on the other hand, work directly on meshes or point sets. The most commonly used geometric methods are the Voronoi diagram [14] and the medial axis [15]. Recently, Reeb graph-based methods have increased in popularity [16]. In addition, there is another group of approaches based on 3D modelling: voxel approaches and parametric surface methods. It is worth mentioning that voxel-based approaches are limited in modelling irregular surfaces.
Recently, unmanned aerial systems (UAS) have positioned themselves as a basic tool for high-throughput plant phenotyping in precision agriculture [3]. The latest advances in technology and the miniaturization of their components provide additional opportunities for UAS data collection platforms. Among high-resolution sensors, light detection and ranging (LiDAR) can acquire 3D measurements of plants even in the absence of light [17,18]. This technology relies on the reflection of laser beams from surfaces [19,20]. Currently, there are several studies using terrestrial LiDAR to perform organ stratification (even leaf labeling) and to measure organ angles in field maize [21,22,23,24]. However, the payload burden and higher cost of LiDAR onboard UAS are its main disadvantages. On the other hand, passive imaging technologies, such as visible cameras, are lighter and less expensive. In addition, structure from motion (SfM), a photogrammetric range imaging technique, can produce point clouds from images taken from various viewpoints [3]. As massive three-dimensional data, point clouds can be used to extract complex structural information [25]. Deep learning methods can handle object detection, classification and segmentation tasks [8], operating on voxels, octrees, multiple surfaces, multiple views or directly on point clouds; their high memory cost, however, means these networks are mainly applied to small datasets. There are some approaches using UAS imagery-based point clouds to compute basic traits such as plant height or the leaf area index in maize [26,27,28].
Still, methodologies to fully exploit the potential of UAS-collected data in agriculture are urgently required. In this paper, we present a novel pipeline to automatically and accurately characterize several structural component phenotypic traits in maize trials. To the best of our knowledge, the skeletonization of maize from UAS imagery-based point clouds has not been performed before. RGB images acquired by UAS are the input of the proposed workflow, from which a georeferenced dense point cloud of the entire study field is obtained using SfM. Topological and deep learning-based algorithms were combined to extract individual plants from the point cloud. Once a surface reconstruction of each individual plant was achieved, the skeleton extraction algorithm was applied. Finally, we were able to easily compute structural component traits that are highly demanded in phenotypic tasks, comparing them with on-field and digital measurements. The paper is structured as follows: after this brief introduction, the materials, including the experimental setup, data acquisition and proposed methodology, are described in detail. Next, the experimental results are presented, validated and discussed. Finally, the main conclusions of this study are addressed, along with future perspectives.

2. Materials and Methods

2.1. Experimental Setup and Data Acquisition

The research trial was located at the Indiana Corn and Soybean Innovation Center at the Agronomy Center for Research and Education (ACRE) of Purdue University in West Lafayette (IN, USA). Figure 1 illustrates the proposed workflow.
The dates of planting (DOP) were June 6 and 17, 2021. The trial was designed with an arrangement of 18 ranges and 4 rows, planted at two densities, as Figure 2 shows: approximately 14 seeds·row−1 (DOP June 17) and 18 seeds·row−1 (DOP June 6), with the first density occupying 3 ranges by 4 rows and the second one 15 ranges by 4 rows. Four ground control points (GCPs) were placed on the ground and measured using a GNSS device for georeferencing. These accuracy markers were made of a highly reflective material so that they could be easily detected in the UAS imagery. The flights were carried out on 27 July (flight 1) and 3 August (flight 2), 2021, around solar noon on sunny, cloud-free days. A Sony Alpha ILCE-7R RGB camera with a Sony 35 mm lens was the photogrammetric sensor onboard a DJI Matrice 600 Pro (M600P) platform (Gryfn, West Lafayette, IN, USA). This platform is a rotocopter UAS with onboard GPS, IMU and magnetometer and a maximum payload of 6 kg. The photogrammetric flights were configured with an along- and across-track overlap of 88% and a flight altitude of 22 m. A total of 530 and 518 images were captured in flights 1 and 2, respectively, with a dimension of 7952 × 5304 pixels, given the sensor characteristics: pixel size of 4.52 µm, focal length of 35 mm and sensor size of 35.9 × 23.9 mm2. The sensor was configured with ISO 200, an aperture of f/5.6 and a fixed exposure time of 1/1250 s.
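From the reported pixel size, focal length and flight altitude, the approximate ground sampling distance (GSD) can be derived; the short check below follows the standard pinhole relation and is not a figure quoted in the paper.

```python
# Approximate ground sampling distance (GSD) implied by the reported flight and
# sensor parameters; a back-of-the-envelope check, not a value stated in the paper.
pixel_size_m = 4.52e-6    # 4.52 um pixel pitch
focal_length_m = 0.035    # 35 mm lens
flight_altitude_m = 22.0  # flight altitude above ground

gsd_m = pixel_size_m * flight_altitude_m / focal_length_m
print(f"GSD ~ {gsd_m * 1000:.1f} mm/pixel")  # roughly 2.8 mm/pixel
```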
As for ground measurements, stem count and plant height for the full experiment were taken on the same dates as the UAS image acquisitions. Notice that, before the second flight, 30% of the plants were pulled (removed) in order to avoid occlusions in the aerial images.

2.2. Imagery-Based Point Cloud

The Pix4Dmapper software package (Pix4D SA, Lausanne, Switzerland) was used to process the aerial images, including camera calibration, image orientation and dense point cloud extraction. In this way, the point cloud of the study field was obtained and accurately georeferenced to the World Geodetic System 84 (WGS84) reference system. However, point clouds automatically generated by SfM techniques often contain outlier points. To remove these points, a statistical outlier removal filter was applied. First, the mean distance of each point to its nearest neighbors is computed (the number of neighbors considered is the first parameter). Then, points whose mean distance is farther than the global mean distance plus a number of times the standard deviation (the second parameter) are rejected. In other words, the process computes a threshold from the Gaussian distribution of the neighbor distances, defined by the mean distance µ and a multiple k of the standard deviation σ, as Equation (1) shows. Points with a mean distance larger than the threshold are classified as outliers and removed from the point cloud [17].
threshold = \mu + k\sigma    (1)
where µ is the mean distance, σ is the standard deviation, and k is a constant.
Figure 3 illustrates the low-cost photogrammetric 3D reconstruction of a random plant from UAS imagery, together with the outlier removal process described above.
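As an illustration, a minimal sketch of the statistical outlier removal of Equation (1), assuming the point cloud is available as an N × 3 NumPy array; the function and parameter names are illustrative, not those of the software used in the paper.

```python
import numpy as np
from scipy.spatial import cKDTree

def remove_statistical_outliers(points, k=8, n_std=1.0):
    """Reject points whose mean distance to their k nearest neighbours
    exceeds threshold = mu + n_std * sigma (Equation (1))."""
    tree = cKDTree(points)
    # query k+1 neighbours because the closest one is the point itself
    dists, _ = tree.query(points, k=k + 1)
    mean_dist = dists[:, 1:].mean(axis=1)          # per-point mean neighbour distance
    mu, sigma = mean_dist.mean(), mean_dist.std()  # global statistics of those distances
    keep = mean_dist <= mu + n_std * sigma         # threshold from Equation (1)
    return points[keep], keep

# usage (hypothetical file name):
# cloud = np.loadtxt("plant_cloud.xyz")            # N x 3 array of XYZ coordinates
# clean, mask = remove_statistical_outliers(cloud, k=8, n_std=1.0)
```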

2.3. Individual Maize Extraction

A 3D deep learning unified architecture named PointNet [29] was employed to automatically perform a semantic segmentation and extract the plants from the point cloud. As its main advantage, PointNet runs directly on point clouds and is invariant to permutations of the input points. Moreover, PointNet is highly robust to small perturbations of the input points, as well as to outliers and missing data. The PointNet architecture works as follows: each point is represented by six values, its three coordinates (x, y, z) and its color (R, G, B). The final fully connected layers of the network aggregate the learned per-point values into a global descriptor used for the extraction. Because of this input format, rigid or affine transformations can easily be applied independently to each point; therefore, a data-dependent spatial transformer network was added to standardize the data and further improve the results. In addition, we reduced overfitting using a data augmentation procedure that creates a new dataset through label-preserving transformations [30]. The first stage of the data augmentation generates n translations of the training dataset, which was defined by manually extracting individual maize plants from the point cloud. The second stage modifies the RGB intensities: principal component analysis was computed on the set of RGB values of each training point cloud, and multiples of the principal components were added m times, with magnitudes proportional to the corresponding eigenvalues times a random variable drawn from a Gaussian with mean µ = 0 and standard deviation σ = 0.1. In this way, the training set was increased by a factor of n·m. In terms of geometry and of the intensity and color of the illumination, the corn plant characteristics were largely invariant.
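The augmentation described above can be sketched as follows; this is an illustrative reimplementation, assuming each training cloud is an N × 6 array of XYZ plus RGB normalized to [0, 1], with translation ranges chosen arbitrarily.

```python
import numpy as np

def augment_plant_cloud(xyzrgb, n_translations=5, m_color_jitter=3, sigma=0.1, rng=None):
    """Label-preserving augmentation sketch: random translations of the XYZ coordinates
    plus PCA-based jitter of the RGB channels (eigenvalue-scaled, Gaussian magnitudes
    with standard deviation 0.1), as described in Section 2.3."""
    rng = np.random.default_rng() if rng is None else rng
    xyz, rgb = xyzrgb[:, :3], xyzrgb[:, 3:6]

    # PCA on the RGB values of this training cloud
    rgb_centered = rgb - rgb.mean(axis=0)
    cov = np.cov(rgb_centered, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)

    augmented = []
    for _ in range(n_translations):
        shift = rng.uniform(-0.5, 0.5, size=3)          # random translation (metres, illustrative)
        for _ in range(m_color_jitter):
            alpha = rng.normal(0.0, sigma, size=3)      # magnitudes ~ N(0, 0.1)
            jitter = eigvecs @ (alpha * eigvals)        # add multiples of the principal components
            new_rgb = np.clip(rgb + jitter, 0.0, 1.0)   # RGB assumed normalised to [0, 1]
            augmented.append(np.hstack([xyz + shift, new_rgb]))
    return augmented                                     # n * m augmented copies
```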
Once the semantic segmentation of the plants was undertaken, we extracted individual maize plants by connected component labeling, setting an octree level that defines the minimum gap between two components, i.e., the cell size of the 3D grid used for extraction [31]. This processing consists of an octree decomposition followed by a split-and-merge procedure. First, a decomposition of the point cloud into an octree based on point density is performed. Then, the points within each voxel are split into spatially connected components. Finally, a recursive merging of components across voxels is carried out, based on a connectivity criterion, until the root node is reached. As a visual example, Figure 4 displays the outputs from the steps of our pipeline to extract individual maize plants from the point cloud within a random plot.
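A simplified stand-in for the octree-based connected-component step is sketched below: it replaces the octree split-and-merge with a uniform voxel grid and 26-connected labeling, with the voxel size playing the role of the octree cell size; the thresholds are illustrative.

```python
import numpy as np
from scipy import ndimage

def label_individual_plants(maize_points, voxel_size=0.05, min_points=100):
    """Approximate the octree-based connected-component extraction with a uniform voxel
    grid: occupied voxels that touch (26-connectivity) are merged into one plant.
    voxel_size plays the role of the octree cell size; min_points discards noise blobs."""
    mins = maize_points.min(axis=0)
    idx = np.floor((maize_points - mins) / voxel_size).astype(int)

    # occupancy grid of the maize-labelled points
    grid = np.zeros(idx.max(axis=0) + 1, dtype=bool)
    grid[idx[:, 0], idx[:, 1], idx[:, 2]] = True

    # 26-connected component labelling of occupied voxels
    labels, n_components = ndimage.label(grid, structure=np.ones((3, 3, 3)))

    # map each point to the label of its voxel and collect components
    point_labels = labels[idx[:, 0], idx[:, 1], idx[:, 2]]
    plants = []
    for comp in range(1, n_components + 1):
        member = maize_points[point_labels == comp]
        if len(member) >= min_points:
            plants.append(member)
    return plants
```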

2.4. Curve-Skeleton Extraction

Once the individual plants were extracted, the skeletonization process was applied to each point cloud. The skeleton structure abstracts the volume and topological characteristics of the model. In this case, a Laplacian-based contraction algorithm was used [13], which works directly on the point cloud and operates on every point [32]. Advantageously, no resampled volumetric representation is required; moreover, the method is pose-insensitive and invariant to global rotation. The stages of the skeletonization process can be summarized as follows: first, the mesh is contracted into a zero-volume skeletal shape by iteratively moving all the vertices along their curvature normal directions. After each iteration, all the collapsed faces of the degenerated mesh are removed until no triangles remain. During the contraction, the mesh connectivity is not altered, so all the key features are retained with sufficient skeletal nodes. Lastly, the geometric embedding of the skeleton is refined by moving each node to the center of mass of its local mesh region [32,33]. After these steps, we obtain the curve-skeleton of each individual maize plant.
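To make the contraction step concrete, the sketch below applies a strongly simplified Laplacian-based contraction directly to a point cloud, following the least-squares formulation of [32] but with a kNN-graph umbrella Laplacian instead of cotangent weights and without the subsequent topological thinning and node refinement; it is an assumption-laden illustration, not the exact implementation used here.

```python
import numpy as np
from scipy.sparse import csr_matrix, diags, identity, vstack
from scipy.sparse.linalg import lsqr
from scipy.spatial import cKDTree

def contract_point_cloud(points, k=12, iterations=5, w_l=1.0, w_h=1.0, s_l=2.0):
    """Simplified Laplacian-based contraction sketch (after Cao et al. [32]): each
    iteration solves the least-squares system [w_l * L; w_h * I] P' = [0; w_h * P]
    and then increases the contraction weight w_l."""
    n = len(points)
    p = points.copy()
    for _ in range(iterations):
        # kNN graph Laplacian with uniform weights (umbrella operator, a simplification)
        _, nbrs = cKDTree(p).query(p, k=k + 1)
        rows = np.repeat(np.arange(n), k)
        cols = nbrs[:, 1:].ravel()
        adj = csr_matrix((np.ones(n * k), (rows, cols)), shape=(n, n))
        adj = adj.maximum(adj.T)                                  # symmetrise the graph
        lap = diags(np.asarray(adj.sum(axis=1)).ravel()) - adj    # L = D - A

        # least-squares trade-off: drive the Laplacian to zero vs. stay near current positions
        a = vstack([w_l * lap, w_h * identity(n)]).tocsr()
        b = np.vstack([np.zeros((n, 3)), w_h * p])
        p = np.column_stack([lsqr(a, b[:, d])[0] for d in range(3)])
        w_l *= s_l                                                # contract more strongly next round
    return p   # near-zero-volume point set; graph/skeleton extraction would follow
```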

2.5. Phenotypic Traits of Structural Components

The curve-skeleton is a structure that abstracts the volume and topological characteristics of each individual plant, represented by a point cloud and a 3D polyline. In this way, we can easily define individual plant traits and the different structural components of the plant: stem and leaves. As individual plant phenotypic traits, we extracted the total height (difference between the maximum and minimum z), the crown diameter (difference between the maximum and minimum x) and the plant azimuth. The plant azimuth angle is defined as the angle between the maximum eigenvector of the plant skeleton and the north direction on the vertical projection plane; the origin of the coordinate axes was selected as the leftmost point of the plant skeleton, and its value lies between 0 and 180°. The stem was defined as the most vertical line. As a topological rule, the leaves originate from stem bifurcations and end in a dead end; in addition, a leaf must have a minimum length to be considered a proper leaf. Stem lodging was calculated as the orientation between the mean points of the beginning and end stretches (defined by a minimum distance) of the stem skeleton. For each leaf, we computed its length, based on the length of the skeleton segment labeled as leaf, and its azimuth. The leaf azimuth is defined as the angle between the maximum eigenvector of a leaf skeleton and the north direction on the vertical projection plane; the origin of the coordinate axes was selected as the connection node between the leaf and the stem, and its value lies between 0 and 360° [34]. Figure 5 shows how the traits were extracted from the skeleton.
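For illustration, the plant-level traits defined above can be computed from the skeleton nodes roughly as follows, assuming the skeleton is an N × 3 array with X = easting, Y = northing and Z = elevation; the handling of the azimuth projection is a simplification of the definition given in the text.

```python
import numpy as np

def plant_level_traits(skeleton_xyz):
    """Sketch of the individual-plant traits of Section 2.5 from the skeleton nodes."""
    height = skeleton_xyz[:, 2].max() - skeleton_xyz[:, 2].min()           # z_max - z_min
    crown_diameter = skeleton_xyz[:, 0].max() - skeleton_xyz[:, 0].min()   # x_max - x_min

    # plant azimuth: angle between the dominant eigenvector of the skeleton (horizontal
    # components) and north (+Y), folded into [0, 180) degrees
    centered = skeleton_xyz - skeleton_xyz.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    ex, ey = vt[0, 0], vt[0, 1]                                            # dominant direction, XY part
    azimuth = np.degrees(np.arctan2(ex, ey)) % 180.0
    return height, crown_diameter, azimuth
```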
Furthermore, this skeletal structure drives the registration process for temporal series, which is critical to automatically evaluating the growth of each individual plant. To register a temporal series, principal component analysis (PCA) was performed [35]. In general, the principal components are the eigenvectors of the data's covariance matrix. More specifically, this statistical analysis uses the first and second moments of the curve-skeleton, resulting in three orthogonal vectors anchored at its center of gravity. The PCA summarizes the distribution of the skeleton lines along the three dimensions and models the principal directions and magnitudes of the curve-skeleton distribution around the center of gravity. Thereby, the registration of the temporal series was carried out by overlapping the principal component axes. After the registration, growth can be robustly monitored as variation in orientation and length.
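A minimal sketch of the PCA-based registration of two dates, under the assumption that eigenvector sign ambiguities can be ignored (in practice they must be resolved, e.g., by enforcing a consistent orientation convention):

```python
import numpy as np

def pca_axes(skeleton_xyz):
    """Centre of gravity and principal axes (eigenvectors of the covariance matrix)
    of a curve-skeleton, as used for the temporal registration in Section 2.5."""
    centroid = skeleton_xyz.mean(axis=0)
    cov = np.cov(skeleton_xyz - centroid, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)
    order = np.argsort(eigvals)[::-1]              # principal axes, largest variance first
    return centroid, eigvecs[:, order]

def register_by_pca(skeleton_t0, skeleton_t1):
    """Align the second-date skeleton to the first by overlapping centroids and
    principal component axes (sign ambiguities of the eigenvectors ignored here)."""
    c0, ax0 = pca_axes(skeleton_t0)
    c1, ax1 = pca_axes(skeleton_t1)
    rotation = ax0 @ ax1.T                         # maps the axes of t1 onto the axes of t0
    return (skeleton_t1 - c1) @ rotation.T + c0
```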

2.6. Accuracy Assessment

The correlation between the plant height, stem count and number of leaves estimated from the skeleton and the on-field measurements or digital leaf counts was verified to evaluate the accuracy of the proposed methodology. Moreover, the remaining skeleton-derived phenotypic traits (lengths and angles) were compared with manual digital measurements on the point cloud of each individual plant. The leaf azimuth was manually measured by choosing the most suitable view direction, the one with the largest inclination. The determination coefficient (R2), root mean square error (RMSE) and normalized root mean square error (nRMSE) were calculated. The R2 value was used to evaluate the agreement between the computed and the measured value, and the RMSE to measure the deviation between both values. The nRMSE represents the degree of difference between the two values (nRMSE < 10% indicates no difference, 10% ≤ nRMSE < 20% denotes a small difference, 20% ≤ nRMSE < 30% is moderate, and nRMSE ≥ 30% represents a large difference) [36]. A larger R2 value indicates a better data fit, and smaller RMSE and nRMSE values indicate a higher estimation accuracy [37]. The calculation formulas of R2, RMSE and nRMSE are given in Equations (2)–(4):
R^2 = 1 - \frac{\sum_{i=1}^{n}\left(x_{comp,i} - \bar{x}_{act}\right)^2}{\sum_{i=1}^{n}\left(x_{act,i} - \bar{x}_{act}\right)^2}    (2)
RMSE = \sqrt{\frac{\sum_{i=1}^{n}\left(x_{comp,i} - x_{act,i}\right)^2}{n}}    (3)
nRMSE = \frac{RMSE}{\bar{x}_{act}}    (4)
where x_{act,i} and \bar{x}_{act} represent the actual value and the average of the actual values, respectively (measured on-field in the case of plant height and measured manually on the individual point cloud for the lengths and angles), x_{comp,i} represents the computed value of the trait, and n represents the number of samples (leaves, stems or individual plants).
Furthermore, the mean bias error (MBE), the absolute mean bias error (AMBE), the relative error (RE) and the absolute error (AE) were computed as follows (Equations (5)–(8)):
MBE = \frac{\sum_{i=1}^{n}\left(x_{comp,i} - x_{act,i}\right)}{n}    (5)
AMBE = \frac{\sum_{i=1}^{n}\left|x_{comp,i} - x_{act,i}\right|}{n}    (6)
RE = \frac{100}{n}\sum_{i=1}^{n}\frac{x_{comp,i} - x_{act,i}}{x_{act,i}}    (7)
AE = \frac{100}{n}\sum_{i=1}^{n}\frac{\left|x_{pred,i} - x_{act,i}\right|}{x_{act,i}}    (8)
In addition, the Nash and Sutcliffe index, η, was also computed (Equation (9)); it is used in modelling to characterize the error related to spatial heterogeneity:
\eta = 1 - \frac{\sum_{i=1}^{n}\left(x_{pred,i} - x_{act,i}\right)^2}{\sum_{i=1}^{n}\left(x_{pred,i} - \bar{x}_{act}\right)^2}    (9)
Some of these evaluation metrics have been extensively used to analyze the performance of regression models [38]. Smaller values of MBE, AMBE, RE and AE and larger values of η (−∞ < η ≤ 1) indicate better precision and accuracy of the prediction model.
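The evaluation metrics of Equations (2)–(9) can be gathered in a single helper. The sketch below, with illustrative names, follows the formulas exactly as printed (note that the printed R2 and η use the mean of the actual values rather than the usual residual-based forms).

```python
import numpy as np

def accuracy_metrics(computed, actual):
    """Evaluation metrics of Equations (2)-(9), written as the formulas are printed:
    `computed` are the skeleton-derived values, `actual` the reference measurements."""
    computed = np.asarray(computed, dtype=float)
    actual = np.asarray(actual, dtype=float)
    n = len(actual)
    mean_act = actual.mean()
    rmse = np.sqrt(np.sum((computed - actual) ** 2) / n)
    return {
        "R2":    1 - np.sum((computed - mean_act) ** 2) / np.sum((actual - mean_act) ** 2),
        "RMSE":  rmse,
        "nRMSE": rmse / mean_act,                              # reported as a percentage in Table 2
        "MBE":   np.sum(computed - actual) / n,
        "AMBE":  np.sum(np.abs(computed - actual)) / n,
        "RE":    100 * np.sum((computed - actual) / actual) / n,
        "AE":    100 * np.sum(np.abs(computed - actual) / actual) / n,
        "eta":   1 - np.sum((computed - actual) ** 2) / np.sum((computed - mean_act) ** 2),
    }
```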

3. Results

All the experimental results presented below were obtained on a 3.6-GHz desktop computer with an Intel Core i7 CPU and 32 GB of RAM. Image processing started with the Pix4Dmapper software package (Pix4D SA, Lausanne, Switzerland) as a commercial SfM solution. The RGB imagery and the ground control points measured with terrestrial GPS served as inputs to reconstruct the study field into a scaled 3D point cloud. As an outcome, two point clouds from the flights on different dates (27 July, first flight, and 3 August 2021, second flight) were computed and accurately georeferenced to EPSG 32616 (WGS84 CRS). The point clouds comprised a total of 35,983,365 points for the first flight and 34,851,008 points for the second flight, with a spatial resolution better than 24,800 points/m2 in both cases. These values are valid within the limits of the study field (14 × 100 m2). Due to the automated and massive character of the photogrammetric processing, an uncertain number of outliers may be included in these point clouds. A statistical analysis was carried out, assuming a Gaussian distribution of the neighbor distances, to establish the threshold and detect outliers; this procedure was already applied in [39]. Once the outlier detection was finished, a spatial resolution better than 22,100 points/m2 was retained for both flights. A total of 257 plants for the first flight and 172 for the second flight were counted in the field, and all the plants were correctly and accurately extracted from the point clouds of both flights (30% of the plants were pulled after the first flight and before the second one in order to avoid occlusions in the aerial images). The average number of points per plant is 3968. Figure 6a represents the point clouds from the two dates and the corresponding individual plant extraction in top view. In Figure 6b, a zoomed window is shown in a 3D view. Figure 6c illustrates the growth of the individual plants within this zoomed area between the two dates, precisely quantified in meters. We registered the point cloud of each individual plant from the two dates by computing the PCA of the skeleton and overlapping the principal component axes. In this particular case, the maximum growth is 0.41 m. In addition, the average maximum growth per plant is 0.22 m, and we can point out that the growth between these two dates is always greater in the upper part of the corn plant.
Next, once the individual plants were extracted, the skeleton was computed from each point cloud using the Laplacian-based contraction algorithm, as Section 2.4 explains. Figure 7 graphically shows, in 3D, the individual point clouds in black overlapped by the skeletons in red for 16 plant cases covering the maximum and minimum height, crown diameter, number of points and growth increment from the two flights (27 July and 3 August 2021). It is worth mentioning that the axes are in relative values. Below, Table 1 shows the plant location data and traits of the plant samples: the number of points, the UTM coordinates of the point cloud center and the dimensions of the bounding box of each individual point cloud, as well as the traits computed by skeletonization of the individual point cloud, such as the number of leaves, plant height, crown diameter, plant azimuth, lodging calculated as stem azimuth, stem height, mean leaf azimuth and mean leaf length.

4. Discussion

In this section, the results are discussed and validated. The stem counts measured in the field with GPS were exactly the same as the digitally obtained stem counts for both flights: 257 plants for the first flight and 172 for the second one. The individual height of each plant was also measured in the field using a tape with centimetric precision. Comparing this on-field measurement with the digital height computed from the point cloud of each extracted plant, an R2 of more than 0.99 was achieved; no outliers were detected in this regression, guaranteeing accurate and precise height results. From Table 1, it is remarkable that the plants with a greater number of leaves are the ones with the maximum plant height and a greater number of points. This seems reasonable: when the point density is higher, the plant has more detail to distinguish the leaves, and taller plants are more likely to have more leaves. On the other hand, the plants with fewer recognizable leaves coincide with those of minimum crown diameter. We can also observe that the more vertical plants are the tallest ones, while the more inclined (lodged) plants coincide with those of minimum crown diameter. Table 2 shows statistical values of the computed traits for all the individual plant point clouds from both flights (27 July and 3 August 2021): mean, standard deviation (Std), median, normalized median absolute deviation (NMAD) (Equation (10)) and square root of the biweight midvariance (BwMv) (Equation (11)). It is worth mentioning that, for the computation of this table, outliers were discarded according to the studentized residuals at a significance level of 0.05 with a two-tailed distribution.
NMAD = 1.4826 \cdot MAD    (10)
BwMv = \frac{n\sum_{i=1}^{n} a_i \left(x_i - m\right)^2 \left(1 - U_i^2\right)^4}{\left(\sum_{i=1}^{n} a_i \left(1 - U_i^2\right)\left(1 - 5U_i^2\right)\right)^2}    (11)
a_i = \begin{cases} 1, & \text{if } \left|U_i\right| < 1 \\ 0, & \text{if } \left|U_i\right| \geq 1 \end{cases}
U_i = \frac{x_i - m}{9 \cdot MAD}
where MAD is the median absolute deviation, i.e., the median of the absolute deviations from the data's median m.
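A short sketch of the two robust statistics of Equations (10) and (11), following the printed definitions with U_i = (x_i − m)/(9·MAD):

```python
import numpy as np

def nmad(x):
    """Normalized median absolute deviation (Equation (10))."""
    m = np.median(x)
    return 1.4826 * np.median(np.abs(x - m))

def biweight_midvariance(x):
    """Square root of the biweight midvariance (Equation (11)), following the printed
    formula with a_i = 1 if |U_i| < 1 and 0 otherwise."""
    x = np.asarray(x, dtype=float)
    m = np.median(x)
    mad = np.median(np.abs(x - m))
    u = (x - m) / (9.0 * mad)
    a = (np.abs(u) < 1.0).astype(float)
    num = len(x) * np.sum(a * (x - m) ** 2 * (1 - u ** 2) ** 4)
    den = np.sum(a * (1 - u ** 2) * (1 - 5 * u ** 2)) ** 2
    return np.sqrt(num / den)
```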
Analyzing the computed traits, the plant height and the stem height show the most dispersed values, as the larger NMAD and BwMv values indicate. With regard to the errors, the determination coefficient is above 0.9 for all the traits except the mean leaf length. The algorithm fails to recognize the short leaves and, therefore, the mean of the computed leaf lengths is considerably larger than the reference. For the same reason, the normalized root mean square error and the Nash and Sutcliffe index obtain their worst scores for this trait. The remaining error metrics show practically no difference between the actual and computed values.

5. Conclusions

As this study highlights, skeletons are powerful descriptors for analyzing plant networks, allowing structural components to be defined and several phenotypic traits to be computed. Moreover, close-range platforms together with novel deep learning networks are a powerful combination for extracting individual maize plants. The approach proposed here is therefore rapid, accurate and cost-effective. It is worth mentioning that particular attention must be paid to the spatial resolution and completeness of the computed point cloud to run our approach effectively. These aspects are directly related to the plant spacing, which can generate shadows, and to the flight variables (overlap, altitude and flight direction) required to obtain a dense point cloud. In this study, the image acquisition strategy was nadir only; oblique images would improve the completeness of the plant point cloud. Future analyses are needed to apply our pipeline to different plant species and growth stages, as well as to investigate the influence of environmental factors such as soil properties and light conditions. In addition, other high-throughput phenotyping platforms, including terrestrial platforms and LiDAR-collected point clouds, are intended to be tested.

Author Contributions

M.H.-H. conceived the idea, developed the data analysis pipelines and software, performed the data analysis and visualization and wrote the manuscript; D.G.-A. supported the research and edited the manuscript; Y.Y. supervised the research. All authors have read and agreed to the published version of the manuscript.

Funding

MHH was supported by the Spanish Government under Maria Zambrano (Requalification of the Spanish university system for 2021–2023). This research was also funded by the European project H2020 CHAMELEON: A Holistic Approach to Sustainable, Digital EU Agriculture, Forestry, Livestock and Rural Development based on Reconfigurable Aerial Enablers and Edge Artificial Intelligence-on-Demand Systems. Ref: 101060529, Call: HORIZON-CL6-2021-GOVERNANCE-01-21.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data analyzed during the current study (the point cloud of the area of interest on the two dates and the point cloud of each individual plant and its skeleton) can be found as supplementary data at https://drive.google.com/file/d/1Yp-5ujfXRvtNqqmZ4xr-uRGKvXKJjpYM/view?usp=share_link (accessed on 9 January 2020).

Acknowledgments

The authors would like to thank the Institute for Plant Sciences (College of Agriculture), Jason Adams, Brian Dilkes and his lab and all at Purdue University (IN, USA) for their collaboration during the experimental phase of this research.

Conflicts of Interest

The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

References

  1. Roitsch, T.; Cabrera-Bosquet, L.; Fournier, A.; Ghamkhar, K.; Jiménez-Berni, J.; Pinto, F.; Ober, E.S. New sensors and data-driven approaches—A path to next generation phenomics. Plant Sci. 2019, 282, 2–10. [Google Scholar] [CrossRef] [PubMed]
  2. Herrero-Huerta, M.; Bucksch, A.; Puttonen, E.; Rainey, K.M. Canopy roughness: A new phenotypic trait to estimate above-ground biomass from unmanned aerial system. Plant Phenomics 2020, 2020, 1–10. [Google Scholar] [CrossRef] [PubMed]
  3. Herrero-Huerta, M.; Rodriguez-Gonzalvez, P.; Rainey, K.M. Yield prediction by machine learning from UAS-based multi-sensor data fusion in soybean. Plant Methods 2020, 16, 78. [Google Scholar] [CrossRef] [PubMed]
  4. Symonova, O.; Topp, C.N.; Edelsbrunner, H. DynamicRoots: A software platform for the reconstruction and analysis of growing plant roots. PLoS ONE 2015, 10, e0127657. [Google Scholar] [CrossRef] [PubMed]
  5. Moreira, F.F.; Hearst, A.A.; Cherkauer, K.A.; Rainey, K.M. Improving the efficiency of soybean breeding with high-throughput canopy phenotyping. Plant Methods 2019, 15, 139–148. [Google Scholar] [CrossRef]
  6. Herrero-Huerta, M.; Meline, V.; Iyer-Pascuzzi, A.S.; Souza, A.M.; Tuinstra, M.R.; Yang, Y. 4D Structural root architecture modeling from digital twins by X-Ray Computed Tomography. Plant Methods 2021, 17, 1–12. [Google Scholar] [CrossRef]
  7. Gerth, S.; Claußen, J.; Eggert, A.; Wörlein, N.; Waininger, M.; Wittenberg, T.; Uhlmann, N. Semiautomated 3D root segmentation and evaluation based on X-Ray CT imagery. Plant Phenomics 2021, 2021, 8747930. [Google Scholar] [CrossRef]
  8. Jin, S.; Su, Y.; Gao, S.; Wu, F.; Ma, Q.; Xu, K.; Guo, Q. Separating the structural components of maize for field phenotyping using terrestrial lidar data and deep convolutional neural networks. IEEE Trans. Geosci. Remote Sens. 2019, 58, 2644–2658. [Google Scholar] [CrossRef]
  9. York, L.M. Functional phenomics: An emerging field integrating high-throughput phenotyping, physiology, and bioinformatics. J. Exp. Bot. 2019, 70, 379–386. [Google Scholar] [CrossRef]
  10. Bucksch, A.; Fleck, S. Automated detection of branch dimensions in woody skeletons of fruit tree canopies. Photogramm. Eng. Remote Sens. 2011, 77, 229–240. [Google Scholar] [CrossRef]
  11. Bucksch, A. A practical introduction to skeletons for the plant sciences. Appl. Plant Sci. 2014, 2, 1400005. [Google Scholar] [CrossRef] [PubMed]
  12. Cornea, N.D.; Silver, D.; Min, P. Curve-skeleton properties, applications, and algorithms. IEEE Trans. Vis. Comput. Graph. 2007, 13, 530. [Google Scholar] [CrossRef] [PubMed]
  13. Au, O.K.C.; Tai, C.L.; Chu, H.K.; Cohen-Or, D.; Lee, T.Y. Skeleton extraction by mesh contraction. ACM Trans. Graph. 2008, 27, 1–10. [Google Scholar] [CrossRef]
  14. Brandt, J.W.; Algazi, V.R. Continuous skeleton computation by Voronoi diagram. CVGIP Image Underst. 1992, 55, 329–338. [Google Scholar] [CrossRef]
  15. Marie, R.; Labbani-Igbida, O.; Mouaddib, E.M. The delta medial axis: A fast and robust algorithm for filtered skeleton extraction. Pattern Recognit. 2016, 56, 26–39. [Google Scholar] [CrossRef]
  16. Mohamed, W.; Hamza, A.B. Reeb graph path dissimilarity for 3D object matching and retrieval. Vis. Comput. 2012, 28, 305–318. [Google Scholar] [CrossRef]
  17. Herrero-Huerta, M.; Lindenbergh, R.; Gard, W. Leaf Movements of indoor plants monitored by Terrestrial LiDAR. Front. Plant Sci. 2018, 9, 189. [Google Scholar] [CrossRef]
  18. Moeslund, J.E.; Clausen, K.K.; Dalby, L.; Fløjgaard, C.; Pärtel, M.; Pfeifer, N.; Brunbjerg, A.K. Using airborne lidar to characterize North European terrestrial high-dark-diversity habitats. Remote Sens. Ecol. Conserv. 2022. [Google Scholar] [CrossRef]
  19. Li, N.; Kähler, O.; Pfeifer, N. A comparison of deep learning methods for airborne lidar point clouds classification. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2021, 14, 6467–6486. [Google Scholar] [CrossRef]
  20. Campos, M.B.; Litkey, P.; Wang, Y.; Chen, Y.; Hyyti, H.; Hyyppä, J.; Puttonen, E. A long-term terrestrial laser scanning measurement station to continuously monitor structural and phenological dynamics of boreal forest canopy. Front. Plant Sci. 2021, 11, 606752. [Google Scholar] [CrossRef]
  21. Lei, L.; Li, Z.; Wu, J.; Zhang, C.; Zhu, Y.; Chen, R.; Yang, G. Extraction of maize leaf base and inclination angles using Terrestrial Laser Scanning (TLS) data. IEEE Trans. Geosci. Remote Sens. 2022, 60, 1–17. [Google Scholar] [CrossRef]
  22. Lin, C.; Hu, F.; Peng, J.; Wang, J.; Zhai, R. Segmentation and stratification methods of field maize terrestrial LiDAR point cloud. Agriculture 2022, 12, 1450. [Google Scholar] [CrossRef]
  23. Wu, S.; Wen, W.; Xiao, B.; Guo, X.; Du, J.; Wang, C.; Wang, Y. An accurate skeleton extraction approach from 3D point clouds of maize plants. Front. Plant Sci. 2019, 10, 248. [Google Scholar] [CrossRef] [PubMed]
  24. Miao, T.; Zhu, C.; Xu, T.; Yang, T.; Li, N.; Zhou, Y.; Deng, H. Automatic stem-leaf segmentation of maize shoots using three-dimensional point cloud. Comput. Electron. Agric. 2021, 187, 106310. [Google Scholar] [CrossRef]
  25. Xin, X.; Iuricich, F.; Calders, K.; Armston, J.; De Floriani, L. Topology-based individual tree segmentation for automated processing of terrestrial laser scanning point clouds. Int. J. Appl. Earth Obs. Geoinf. 2023, 116, 103145. [Google Scholar]
  26. Li, M.; Shamshiri, R.R.; Schirrmann, M.; Weltzien, C.; Shafian, S.; Laursen, M.S. UAV oblique imagery with an adaptive micro-terrain model for estimation of leaf area index and height of maize canopy from 3D point clouds. Remote Sens. 2022, 14, 585. [Google Scholar] [CrossRef]
  27. Tirado, S.B.; Hirsch, C.N.; Springer, N.M. UAV-based imaging platform for monitoring maize growth throughout development. Plant Direct 2020, 4, e00230. [Google Scholar] [CrossRef]
  28. Du, L.; Yang, H.; Song, X.; Wei, N.; Yu, C.; Wang, W.; Zhao, Y. Estimating leaf area index of maize using UAV-based digital imagery and machine learning methods. Sci. Rep. 2022, 12, 15937. [Google Scholar] [CrossRef]
  29. Qi, C.R.; Su, H.; Mo, K.; Guibas, L.J. Pointnet: Deep learning on point sets for 3d classification and segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 652–660. [Google Scholar]
  30. Krizhevsky, A.; Sutskever, I.; Hinton, G.E. Imagenet classification with deep convolutional neural networks. Adv. Neural Inf. Process. Syst. 2012, 25, 1097–1105. [Google Scholar] [CrossRef]
  31. Su, Y.-T.; Bethel, J.; Hu, S. Octree-based segmentation for terrestrial LiDAR point cloud data in industrial applications. ISPRS J. Photogramm. Remote Sens. 2016, 113, 59–74. [Google Scholar] [CrossRef]
  32. Cao, J.; Tagliasacchi, A.; Olson, M.; Zhang, H.; Su, Z. Point Cloud Skeletons via Laplacian Based Contraction. In Proceedings of 2010 Shape Modeling International Conference, Aix-en-Provence, France, 21–23 June 2010; pp. 187–197. [Google Scholar]
  33. Herrero-Huerta, M.; Meline, V.; Iyer-Pascuzzi, A.S.; Souza, A.M.; Tuinstra, M.R.; Yang, Y. Root phenotyping from X-ray Computed Tomography: Skeleton extraction. Int. Arch. Photogramm. Remote Sens. Spatial Inf. Sc 2021, XLIII-B4–2021, 417–422. [Google Scholar] [CrossRef]
  34. Jin, S.; Su, Y.; Zhang, Y.; Song, S.; Li, Q.; Liu, Z.; Ma, Q.; Ge, Y.; Liu, L.; Ding, Y.; et al. Exploring Seasonal and Circadian Rhythms in Structural Traits of Field Maize from LiDAR Time Series. Plant Phenomics 2021, 2021, 9895241. [Google Scholar] [CrossRef] [PubMed]
  35. Russ, T.; Boehnen, C.; Peters, T. 3D Face Recognition Using 3D Alignment for PCA. In Proceedings of IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR′06), New York, NY, USA, 17–22 June 2006; pp. 1391–1398. [CrossRef]
  36. Jin, S.; Su, Y.; Wu, F.; Pang, S.; Gao, S.; Hu, T.; Guo, Q. Stem–leaf segmentation and phenotypic trait extraction of individual maize using terrestrial LiDAR data. IEEE Trans. Geosci. Remote Sens. 2018, 57, 1336–1346. [Google Scholar] [CrossRef]
  37. Zhou, L.; Gu, X.; Cheng, S.; Yang, G.; Shu, M.; Sun, Q. Analysis of plant height changes of lodged maize using UAV-LiDAR data. Agriculture 2020, 10, 146. [Google Scholar] [CrossRef]
  38. Elarab, M.; Ticlavilca, A.M.; Torres-Rua, A.F.; Maslova, I.; McKee, M. Estimating chlorophyll with thermal and broadband multispectral high-resolution imagery from an unmanned aerial system using relevance vector machines for precision agriculture. Int. J. Appl. Earth Obs. Geoinf. 2015, 43, 32–42. [Google Scholar]
  39. Herrero-Huerta, M.; Rodriguez-Gonzalvez, P.; Lindenbergh, R. Automatic Tree Parameter Extraction by Mobile Lidar System in an Urban Context. PLoS ONE 2018, 13, e0196004. [Google Scholar] [CrossRef]
Figure 1. Proposed workflow.
Figure 2. Point cloud samples processed from the imagery datasets of the two flights (27 July and 3 August 2021) for the two planting densities: 14 seeds·row−1 (DOP June 17) (a) and 18 seeds·row−1 (DOP June 6) (b).
Figure 3. Photogrammetric 3D reconstruction of a random plant: 2D manual picture from the ground (a), scaled 3D point cloud from UAS imagery (b), clean point cloud (outlier removal) (c).
Figure 4. Partial and global outputs of the plant extraction pipeline within a random plot: RGB-based point cloud (a), height-based point cloud (b), vegetation-based semantic classification (c) and individual maize extraction and labeling (d).
Figure 5. Skeleton-based structural component phenotypic traits: visual definition of the traits (a) and all the computed traits listed by type of the structural component (b).
Figure 6. Point clouds from the two dates and individual plant extraction in different colors surrounded by a bounding box (a); zoomed window of a random area with a 3D view of the extracted plants (b); plant growth within this zoomed area from the two dates quantified in meters (c).
Figure 7. The 3D skeleton extraction (red) overlapped on the individual point cloud (black) of 16 plant samples: maximum and minimum height, crown diameter, number of points and growth increment from the two flights (27 July and 3 August 2021).
Table 1. Plant data and traits of the 16 plant samples: number of points, UTM coordinates of the point cloud center and dimensions of the bounding box from the point cloud; traits computed by skeletonization of the individual point cloud, such as number of leaves, plant height, crown diameter, plant azimuth, lodging calculated as stem azimuth, stem height, mean leaf azimuth (LA) and mean leaf length (LL).
Plant ID | Nº Points | AX+500,000 (m) | AY+4,480,000 (m) | AZ+0 (m) | BBox X (m) | BBox Y (m) | BBox Z (m) | #Leaves | Height (m) | Crown Diam. (m) | Plant Azimuth (º) | Lodging (º) | Stem Height (m) | Mean LA (º) | Mean LL (m)
1 | 2429 | 224.21 | 202.40 | 181.93 | 0.66 | 0.30 | 1.04 | 9 | 0.97 | 0.69 | 339.4 | 351.0 | 0.58 | 344.4 | 0.21
2 | 956 | 225.36 | 156.35 | 180.90 | 0.38 | 0.26 | 0.25 | 4 | 0.22 | 0.37 | 338.2 | 341.1 | 0.16 | 340.7 | 0.13
3 | 3481 | 223.47 | 205.84 | 182.20 | 0.43 | 0.82 | 1.32 | 8 | 1.29 | 0.78 | 1.1 | 3.2 | 0.87 | 1.4 | 0.34
4 | 1729 | 223.11 | 156.49 | 181.19 | 0.58 | 0.34 | 0.32 | 5 | 0.29 | 0.52 | 27.8 | 19.4 | 0.24 | 24.3 | 0.20
5 | 3929 | 223.33 | 190.44 | 181.87 | 0.90 | 0.31 | 0.83 | 8 | 0.81 | 0.88 | 6.7 | 8.4 | 0.56 | 8.3 | 0.24
6 | 2259 | 225.88 | 221.57 | 181.49 | 0.16 | 0.23 | 0.81 | 7 | 0.79 | 0.24 | 358.1 | 5.5 | 0.61 | 1.7 | 0.12
7 | 3891 | 223.3 | 172.99 | 181.98 | 0.89 | 0.64 | 1.15 | 7 | 1.12 | 0.82 | 3.9 | 2.5 | 0.89 | 2.7 | 0.30
8 | 740 | 223.79 | 149.17 | 180.86 | 0.139 | 0.31 | 0.39 | 3 | 0.36 | 0.29 | 19.8 | 23.3 | 0.16 | 22.0 | 0.07
9 | 4641 | 224.96 | 199.35 | 181.90 | 0.59 | 0.59 | 0.87 | 7 | 0.85 | 0.57 | 18.8 | 8.1 | 0.60 | 15.6 | 0.21
10 | 369 | 223.85 | 149.24 | 180.68 | 0.44 | 0.40 | 0.26 | 5 | 0.23 | 0.47 | 9.4 | 12.4 | 0.12 | 10.8 | 0.06
11 | 5045 | 223.44 | 197.97 | 182.17 | 0.88 | 0.58 | 1.22 | 9 | 1.20 | 0.81 | 357.6 | 1.2 | 0.89 | 259.6 | 0.32
12 | 547 | 223.22 | 158.67 | 181.20 | 0.36 | 0.42 | 0.40 | 4 | 0.35 | 0.40 | 345.6 | 354.8 | 0.13 | 347.2 | 0.10
13 | 2115 | 225.82 | 212.08 | 181.68 | 0.86 | 0.45 | 1.00 | 6 | 0.97 | 0.79 | 17 | 4.0 | 0.49 | 4.9 | 0.23
14 | 947 | 223.87 | 153.23 | 180.80 | 0.40 | 0.43 | 0.45 | 4 | 0.39 | 0.41 | 2.9 | 6.2 | 0.16 | 5.8 | 0.15
15 | 1636 | 225.72 | 212.19 | 181.68 | 0.74 | 0.23 | 0.82 | 8 | 0.79 | 0.68 | 349.2 | 356.1 | 0.50 | 355.6 | 0.38
16 | 847 | 223.85 | 153.22 | 180.84 | 0.30 | 0.34 | 0.56 | 8 | 0.51 | 0.38 | 6.1 | 4.2 | 0.14 | 6.7 | 0.11
Table 2. Statistics of the computed traits (mean, Std, median, NMAD and BwMv) and error metrics of all plants from both flights at the 95% confidence interval (R2, RMSE, nRMSE, MBE, AMBE, RE, AE and η).
Statistic/Metric | #Leaf | Height (cm) | Crown Diam. (cm) | Azimuth (º) | Lodging (º) | Hstem (cm) | Mean LA (º) | Mean LL (cm)
Mean | 5.98 | 70.16 | 54.66 | 1.18 | 4.56 | 42.76 | −4.94 | 19.19
Std | 1.40 | 34.84 | 21.43 | 15.61 | 8.48 | 28.12 | 26.82 | 9.61
Median | 7 | 79.81 | 54.91 | 1.71 | 4.66 | 51.57 | 3.54 | 20.55
NMAD | 2.43 | 44.84 | 28.45 | 14.45 | 13.2 | 35.80 | 23.13 | 11.44
BwMv | 0.25 | 82.66 | 31.43 | 12.22 | 4.82 | 53.80 | 24.65 | 6.22
R2 (%) | 90.9 | 99.8 | 99.7 | 99.8 | 99.9 | 99.4 | 99.7 | 68.8
RMSE | 0.661 | 1.769 | 1.137 | 8.456 | 4.650 | 2.341 | 11.054 | 8.231
nRMSE (%) | 10.5 | 2.5 | 2.0 | 6.1 | 4.9 | 5.2 | 6.1 | 32.7
MBE | 0.063 | −0.431 | −0.150 | −3.375 | −2.125 | −0.544 | −3.563 | −5.000
AMBE | 0.438 | 1.244 | 0.888 | 6.000 | 3.875 | 1.906 | 10.063 | 5.613
RE | 0.781 | −0.026 | −0.052 | −0.429 | −0.122 | −0.333 | −0.294 | −2.780
AE | 0.781 | 0.026 | 0.104 | 0.429 | 0.122 | 0.333 | 0.294 | 14.053
η | 0.879 | 0.997 | 0.997 | 0.997 | 0.999 | 0.993 | 0.996 | 0.267