3D Characterization of Sorghum Panicles Using a 3D Point Cloud Derived from UAV Imagery

Sorghum is one of the most important crops worldwide. An accurate and efficient high-throughput phenotyping method for individual sorghum panicles is needed for assessing genetic diversity, variety selection, and yield estimation. High-resolution imagery acquired using an unmanned aerial vehicle (UAV) provides a high-density 3D point cloud with color information. In this study, we developed a method for detecting and characterizing individual sorghum panicles using a 3D point cloud derived from UAV images. The RGB color ratio was used to filter out non-panicle points and select potential panicle points. Individual sorghum panicles were detected using the concept of tree identification. Panicle length and width were determined from the potential panicle points. We propose cylinder fitting and disk stacking to estimate individual panicle volumes, which are directly related to yield. The results showed that the correlation coefficients of the average panicle length and width between the UAV-based and ground measurements were 0.61 and 0.83, respectively. The UAV-derived panicle length and diameter were more highly correlated with the panicle weight than the ground measurements. The cylinder fitting and disk stacking yielded R² values of 0.77 and 0.67 with the actual panicle weight, respectively. The experimental results show that a 3D point cloud derived from UAV imagery can provide reliable and consistent individual sorghum panicle parameters, which are highly correlated with ground measurements of panicle weight.


Introduction
Sorghum is commonly used for human consumption, as livestock feed, and in ethanol fuel production [1]. There are numerous varieties, including grain sorghums used for human food, and forage sorghum for livestock hay and fodder [2]. Sorghum products also play an important role in food security because sorghum is one of the leading cereal crops worldwide, and it is a principal source of energy and nutrition for humans [3]. Although final grain yield can potentially be estimated by measuring the plant population and the weight per panicle [4], traditional hand-sampling methods in the field are labor-intensive, time-consuming, and prone to human error. Therefore, an accurate and efficient method to measure crop phenotypes is required to enable research scientists and breeders to increase sorghum yield and to develop improved cultivars [5].
The rapid development of unmanned aerial vehicle (UAV) and sensor technologies in recent years has enabled the collection of very-high-resolution images and high-throughput phenotyping (HTP) from remotely sensed imagery data [6]. Crop phenotypes, such as plant height, canopy cover, and vegetation indices, can be estimated with high accuracy and reliability from UAV images for agricultural applications [6][7][8][9]. UAV-based phenotypic data have been utilized to predict cotton yield and select high-yielding varieties [10]. Ashapure et al. [11] demonstrated that multi-temporal UAV data can provide more consistent measurements, and that UAV-based phenotypes are significant features for monitoring the difference between tillage and no-tillage management. In addition, artificial intelligence (AI) algorithms have been adopted for soybean and cotton yield estimation using UAV-based HTP [12,13]. Many studies have verified that UAV imagery can provide high-quality phenotypes and is feasible for agricultural applications [14].
In sorghum breeding plots, Chang et al. [7] and Watanabe et al. [15] showed that the plant height estimated from a UAV-based digital surface model (DSM) is highly correlated with the plant height measured with rulers in the field. Although the estimated sorghum heights varied with the UAV image specifications and field conditions, it was verified that UAV data can provide reliable plant height. Shafian et al. [16] evaluated the performance of a fixed-wing UAV system for quantifying crop growth parameters of sorghum, including leaf area index (LAI), fractional vegetation cover (fc), and yield. UAV-derived crop parameters have also been used to estimate the biomass and yield of cereal crops, such as maize, corn, and rice [17][18][19].
Although UAV data have shown good performance in estimating crop parameters and predicting yield, most researchers have used UAV-based phenotypes at the plot level. Phenotypic data at the individual plant scale, such as panicle count, number of seeds per panicle, and panicle size (length and width), play key roles in assessing genetic diversity, selecting new cultivars, and estimating potential yields [20][21][22]. However, the hand-sampling method of measuring panicle traits has proved to be a bottleneck to sorghum crop improvement [23,24]. More recently, deep learning has shown tremendous potential for detecting and counting sorghum panicles from UAV images [5,24]. However, a large number of training images is required to obtain robust and accurate machine-learning algorithms, and constructing large training sample sets requires much time and labor. A terrestrial LiDAR (light detection and ranging) method has also been used for the detection and measurement of individual sorghum panicles, including their length, width, and height [25]. Although terrestrial LiDAR can provide a high-density 3D point cloud, the point density may not be uniform, as it depends on the location of the LiDAR sensor and the distance between the sensor and objects.
Structure from motion (SfM) techniques can generate colored 3D point clouds from UAV images. When a UAV is flown at a low altitude with high overlap under stable atmospheric conditions, point clouds dense enough to extract phenotypic data of individual sorghum panicles can be produced. The aim of this study is to develop a high-throughput method for detecting and characterizing individual sorghum panicles from UAV-derived 3D point clouds. Figure 1 shows the overall procedure employed to characterize individual sorghum panicles from UAV imagery. Very-high-resolution imagery is collected using a UAV platform and processed to generate an orthomosaic, a DSM, and a colored 3D point cloud. The color ratio is adopted to select potential panicle points. Then, an individual tree identification method is applied to detect individual sorghum panicles. We propose a phenotyping method for estimating panicle length, width, and volume. The panicle parameters estimated from the UAV-based 3D point cloud are compared with ground measurements.

Study Area and Data Collection
The study area was located at the research farm of the Texas A&M AgriLife Research and Extension Center in Corpus Christi, TX, USA. Three types of grain sorghum (100 pedigrees) and one forage sorghum (15 pedigrees) with four replications were planted on 28 March 2016, in north-south-oriented two-row plots. The single-row size was 1 × 5 m. There were a total of 784 rows, including filler and border rows. Grain sorghum plots were selected to extract 3D information on the panicles growing at the top of the plants. The panicle length and diameter were manually measured with a ruler in the field in early June. At the end of the growing season, panicle samples were collected over a 1 m space in the middle of each row. Whole plants were cut and moved into the lab to measure the panicle weight after drying. The total panicle weight in each plot was used to calculate the correlation with the UAV measurements.
UAV RGB images were collected using a DJI Phantom 4 (DJI, Shenzhen, China) quadcopter platform on 7 June 2016, at a 10 m altitude with 85% overlap to generate an ultra-high-density 3D point cloud. Four flight missions were conducted at a flight speed of about 1 m/s under very calm wind conditions (<2.5 m/s) to circumvent plant-motion effects, which can produce image misalignment and sparse point density in 3D point cloud generation due to insufficient tie-points and blurred imagery. Ground control points (GCPs) were also installed around the sorghum field and surveyed by PPK (post-processed kinematic) GPS (global positioning system) devices with sub-centimeter accuracy to precisely eliminate the bowl effect and georeference the data in SfM processing [7,26].

UAV Data Pre-Processing
Agisoft PhotoScan Pro (Agisoft LLC, St. Petersburg, Russia), a commercial SfM software package, was employed to generate the 3D point cloud, the DSM, and the orthomosaic image from the UAV data. The ground sampling distances (GSDs) of the orthomosaic image and DSM were 4.4 mm and 8.7 mm, respectively (Figure 2a,b). The point density of the RGB-colored 3D point cloud was 1.32 points/cm² (Figure 2c). The orthomosaic image, DSM, and 3D point cloud were clipped using a single-row boundary to detect sorghum panicles and extract the phenotypes of each variety.

Sorghum Panicle Detection
In this study, blooming plots were selected for extracting panicle parameters, because panicles appear differently colored from other objects such as leaves, soil, and shadow. Potential blooming panicle pixels, which are yellowish, were extracted from the clipped orthomosaic image using the RGB color ratio given by Equation (1), where R, G, and B are the pixel values of the red, green, and blue bands of the RGB orthomosaic image, respectively. In an RGB color system, a yellowish pixel is composed of high red, high green, and low blue values. Therefore, the ratio among the red, green, and blue bands was adopted to extract the panicles. Patrignani and Ochsner [27] developed the Canopeo algorithm to extract green pixels using ratio parameters in RGB images. We used the reversed conditions to select yellowish pixels. The threshold values were determined empirically in this study. Morphological operations (opening and closing) were applied to remove speckle noise and fill holes in the binary classification result obtained using Equation (1). Elevation values of potential panicles in the selected area were used to determine the geometrical characteristics of individual panicles. Figure 2d shows an example of the concept of individual panicle detection. The concept of identifying individual tree crowns was adopted [28]. After extracting local maximum points, boundary points were searched using a threshold of 50 cm in eight directions (every 45°) around each local maximum; that is, pixels with an elevation difference from neighboring pixels larger than the threshold were determined to be boundary pixels. The panicle cross section was assumed to be circular, and Kåsa's circle-fitting algorithm [29] was used to calculate the center and diameter of a suitable circle.
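Kåsa's circle fit is an algebraic least-squares method: writing the circle as x² + y² + ax + by + c = 0, the coefficients a, b, and c follow from an ordinary linear least-squares solve, giving the center (−a/2, −b/2) and radius √(a²/4 + b²/4 − c). The sketch below is an illustrative Python implementation applied to synthetic boundary points, not the authors' code:

```python
import numpy as np

def kasa_circle_fit(xy):
    """Algebraic (Kasa) least-squares circle fit.

    xy: (N, 2) array of boundary points.
    Returns (cx, cy, r): circle center and radius.
    """
    xy = np.asarray(xy, dtype=float)
    x, y = xy[:, 0], xy[:, 1]
    # Solve x^2 + y^2 + a*x + b*y + c = 0 in the least-squares sense.
    A = np.column_stack([x, y, np.ones_like(x)])
    rhs = -(x**2 + y**2)
    (a, b, c), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    cx, cy = -a / 2.0, -b / 2.0
    r = np.sqrt(cx**2 + cy**2 - c)
    return cx, cy, r

# Synthetic noisy boundary points on a 4 cm radius circle centered at (1.0, 2.0)
rng = np.random.default_rng(0)
t = np.linspace(0.0, 2.0 * np.pi, 40, endpoint=False)
pts = np.column_stack([1.0 + 0.04 * np.cos(t), 2.0 + 0.04 * np.sin(t)])
pts += rng.normal(scale=1e-4, size=pts.shape)
cx, cy, r = kasa_circle_fit(pts)
```

Because the fit is linear, it needs no initial guess or iteration, which suits processing many panicle cross sections per plot.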
Horizontal 3D points within the fitted circle were selected as panicle points, among which those that did not satisfy the RGB color ratio determined using Equation (1) were filtered out. The most suitable circle (red circle in Figure 2d) was fitted using the boundary points of an individual panicle.
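As a rough sketch of the color-ratio filtering and morphological cleanup described above, the following Python fragment selects yellowish pixels by band ratios. The threshold values are illustrative assumptions only; the paper's empirically determined thresholds (Equation (1)) are not reproduced here.

```python
import numpy as np
from scipy import ndimage

def yellow_panicle_mask(rgb, red_green_min=0.95, blue_green_max=0.75):
    """Select yellowish (blooming-panicle) pixels by RGB band ratios.

    The two ratio thresholds are illustrative assumptions, not the
    paper's empirical values. rgb: (H, W, 3) uint8 orthomosaic tile.
    """
    r = rgb[..., 0].astype(float)
    g = rgb[..., 1].astype(float)
    b = rgb[..., 2].astype(float)
    eps = 1e-6  # avoid division by zero in dark pixels
    # Yellowish pixels: red comparable to or above green, blue well below green.
    mask = (r / (g + eps) > red_green_min) & (b / (g + eps) < blue_green_max)
    # Morphological opening removes speckle noise; closing fills small holes.
    mask = ndimage.binary_opening(mask, structure=np.ones((3, 3)))
    mask = ndimage.binary_closing(mask, structure=np.ones((3, 3)))
    return mask

# Tiny synthetic tile: greenish background with one yellowish 10 x 10 patch
img = np.zeros((20, 20, 3), dtype=np.uint8)
img[..., 0] = 30    # low red
img[..., 1] = 120   # high green background
img[5:15, 5:15] = (200, 180, 40)  # yellowish patch
mask = yellow_panicle_mask(img)
```

On real orthomosaics, the thresholds would need tuning per illumination condition, which is why the paper determined them empirically.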


3D Characterization of Sorghum Panicles
The 3D characteristics of individual panicles, such as the length, diameter, and volume, were estimated from the RGB-colored panicle points for each plot (Figure 3c). The UAV-based panicle length (L_C) was estimated as the elevation difference between the highest and lowest points of the individual panicle points, and the UAV-based panicle diameter (D_C) was estimated as the diameter of the circle fitted during individual panicle detection (Figure 3a).
The volume of individual panicles was calculated from the 3D point cloud by two methods: (1) cylinder fitting and (2) disk stacking. Cylinder fitting involved using all the panicle points to calculate a cylinder volume from the estimated panicle length and diameter (Figure 3a). Multiple disk stacking was used to capture details of the elongated ellipsoidal shape of the panicle (Figure 3b). Individual panicle points were vertically divided into multiple layers of constant height (L_di) to select the points within a prescribed disk. The diameter (D_di) of the i-th disk, from top to bottom, was determined by fitting a circle to the selected disk points. When an insufficient number of points or a poor point distribution resulted in an unreasonable disk diameter (too large or too small), the previous disk diameter was used instead. All the disk volumes were summed to estimate the individual panicle volume.
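The two volume estimators can be sketched as follows. This is an illustrative reimplementation under assumed parameters (layer count, plausibility bound on the disk radius), not the authors' code; both routines reuse a least-squares circle fit for the radius:

```python
import numpy as np

def kasa_radius(xy):
    """Least-squares (Kasa) circle-fit radius for (N, 2) points."""
    x, y = xy[:, 0], xy[:, 1]
    A = np.column_stack([x, y, np.ones_like(x)])
    (a, b, c), *_ = np.linalg.lstsq(A, -(x**2 + y**2), rcond=None)
    return np.sqrt(a**2 / 4.0 + b**2 / 4.0 - c)

def cylinder_volume(points):
    """Single cylinder: height from the elevation range, radius from one fit."""
    length = points[:, 2].max() - points[:, 2].min()  # panicle length L_C
    radius = kasa_radius(points[:, :2])               # half of diameter D_C
    return np.pi * radius**2 * length

def disk_stack_volume(points, n_layers=10, max_radius=0.2):
    """Sum of thin-disk volumes; one circle fit per horizontal layer.

    When a layer has too few points or yields an implausible radius,
    the previous layer's radius is reused, as described in the text.
    n_layers and max_radius (m) are assumed parameters.
    """
    z = points[:, 2]
    edges = np.linspace(z.min(), z.max(), n_layers + 1)
    thickness = edges[1] - edges[0]
    volume, prev_r = 0.0, 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        layer = points[(z >= lo) & (z <= hi)]
        if len(layer) >= 3:
            r = kasa_radius(layer[:, :2])
            if not (0.0 < r < max_radius):  # implausible fit
                r = prev_r
        else:
            r = prev_r
        volume += np.pi * r**2 * thickness
        prev_r = r
    return volume

# Synthetic "panicle": noisy points on a cylinder, radius 3 cm, height 25 cm
rng = np.random.default_rng(1)
theta = rng.uniform(0.0, 2.0 * np.pi, 2000)
zval = rng.uniform(0.0, 0.25, 2000)
pts = np.column_stack([0.03 * np.cos(theta), 0.03 * np.sin(theta), zval])
pts[:, :2] += rng.normal(scale=5e-4, size=(2000, 2))
v_cyl = cylinder_volume(pts)
v_disk = disk_stack_volume(pts)
```

For a true cylinder the two estimates agree; for a tapered panicle shape, disk stacking tracks the varying cross section while cylinder fitting averages it out.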

Figure 3d,e provide an example of how cylinder fitting and disk stacking are used to estimate the volume of an individual panicle. Disk stacking can capture fine details of the panicle shape, but could be more sensitive to the point density and distribution of panicle points than cylinder fitting.

Comparison of Panicle Numbers
A total of seventeen blooming varieties were selected to discriminate panicles from other objects, such as soil and leaves, in the 3D point cloud. Since the panicles were sampled over a 1 m space in the middle of each row, the total number of panicles in two rows (a single variety) was counted by visual assessment of the orthomosaic image. There were a few errors in detecting individual panicles. For example, multiple panicles were classified as a single panicle when they were closely located and overlapping, while small or green panicles, which were not yet mature, were not detected properly. Despite these errors, the correlation coefficient and R² value of the linear regression were 0.91 and 0.83, respectively (Figure 4). If UAV imagery is collected at the proper time at very high resolution, individual sorghum panicles can be successfully detected using the proposed method without a field survey, which would be efficient and useful for variety selection in breeding programs.

Evaluation of Panicle Length and Diameter
As a limited number of panicle samples were collected in the field, the averages of all ground-measured panicle lengths and diameters were compared with those estimated from UAV data for each variety (Figure 5). The R² values of the linearly regressed panicle length and diameter were 0.38 and 0.69, respectively. Although the panicle length and diameter extracted from UAV data were underestimated and overestimated, respectively, a linear trend between field and UAV measurements was found. The UAV and field measurements showed high correlation coefficients for panicle length (0.62) and diameter (0.83). The UAV-based panicle length exhibited a weaker correlation and a wider distribution than the UAV-based panicle diameter, which implies that panicle diameter is a more stable parameter to extract from the UAV images.

The 3D point cloud generated from UAV images resulted in a nonuniform point density due to image misalignment, camera angle, and blurred images. Thus, sufficiently dense 3D points to express smooth panicle surfaces could not always be generated. That is, 3D points on one side or the bottom of a panicle can occasionally be missed when these parts are not directly captured by the camera. For these reasons, the panicle diameter can be estimated more reliably, with less sensitivity to the point density, than the panicle length from UAV-imagery-based 3D point clouds. An insufficient number of panicle points can result in vertical variability in the panicle length, which was estimated from the highest and lowest elevations in this study. The panicle diameter was determined by fitting a circle to boundary points located around a horizontal cross section of the panicle. This method reduces the possibility of estimating highly unrealistic panicle diameters.

Correlation between Panicle Phenotypes and Weight
We analyzed the correlations of the panicle weight versus the panicle length, diameter, and volume estimated by the proposed methods. Figure 6a shows a comparison of the panicle weights of the ground samples in each plot versus the average panicle length obtained from the field data (red dots and line) and the UAV measurements (blue dots and line). The UAV-derived panicle length exhibited a linear trend with the weight, showing a very high R² value (0.9). Although the ground measurements of the panicle length and weight were also positively correlated, the wide scatter in the data resulted in an R² value below 0.5. The average UAV-derived panicle diameter in each plot was also more highly correlated with the panicle weight than the ground measurements. The panicle diameters obtained from field sampling were distributed within an approximately 2 cm range, whereas the UAV measurements were spread over a wider range (Figure 6b).
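For a simple linear regression with an intercept, such as the fits in Figure 6, the coefficient of determination equals the squared Pearson correlation (R² = r²). A minimal check with hypothetical plot-level numbers (not the study's data):

```python
import numpy as np

# Hypothetical paired measurements (not the study's data): average UAV-derived
# panicle diameter (cm) and ground-measured panicle weight (g) per plot.
diameter = np.array([4.1, 4.5, 4.8, 5.2, 5.6, 6.0, 6.3, 6.9])
weight = np.array([310.0, 342.0, 365.0, 401.0, 455.0, 470.0, 512.0, 560.0])

# Pearson correlation coefficient r
r = np.corrcoef(diameter, weight)[0, 1]

# Ordinary least-squares line and its coefficient of determination R^2
slope, intercept = np.polyfit(diameter, weight, 1)
pred = slope * diameter + intercept
ss_res = np.sum((weight - pred) ** 2)           # residual sum of squares
ss_tot = np.sum((weight - weight.mean()) ** 2)  # total sum of squares
r2 = 1.0 - ss_res / ss_tot
```

For simple linear regression the two statistics coincide, which is why either can be reported; the distinction matters only for multiple regression.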

The advantage of applying UAV-based phenotyping to sorghum panicles is that a large number of samples can be measured more efficiently. As field sampling is labor-intensive and time-consuming, only a limited number of samples can be collected over entire plots or rows. By comparison, UAV data can provide more observations over a larger area. The results of this study show that UAV-imagery-based 3D point clouds can provide more reliable panicle parameters, which are more highly correlated with the actual panicle weight than field measurements by hand-sampling. A linear regression model was employed to fit the ground-measured panicle weight to the total panicle volume obtained by cylinder fitting and disk stacking for each variety. The R² value (0.77) for a linear fit of the data obtained using cylinder fitting was higher than that obtained using disk stacking (0.67) (Figure 7). The correlation coefficients between UAV and field measurements were also calculated as 0.88 and 0.82 for cylinder fitting and disk stacking, respectively. There was a very strong relationship between the UAV-derived panicle volume and the field-measured panicle weight. Although cylinder fitting estimated an approximately two times larger panicle volume than that calculated using disk stacking, the panicle volumes obtained using the proposed methods were highly correlated with the field-measured panicle weight. If a sufficiently dense 3D point cloud can be acquired from very-high-resolution images, the panicle weight could be estimated more efficiently without destructive sampling in the field.

Figure 7. Relationships for ground-measured panicle weight versus panicle volume of selected varieties obtained by cylinder fitting (blue) and disk stacking (red); scattered points were fitted using a linear regression model.

Conclusions
Individual sorghum panicles were characterized using a UAV-imagery-based 3D point cloud in this study. An ultra-high-density point cloud was generated from UAV RGB images collected at a 10 m altitude. Despite the limitations of the 3D point cloud generated from the UAV images, individual panicle parameters, such as the length and diameter, were successfully estimated from the RGB-colored 3D panicle points. The proposed cylinder-fitting and disk-stacking methods performed well in calculating individual panicle volumes at the plot level, which were highly correlated with ground measurements of panicle weight.

In the current study, it was necessary to generate a canopy height model (CHM) or remove the ground elevation to estimate the 3D parameters of individual panicles. It could be possible to apply the proposed method to a 3D point cloud obtained using LiDAR sensors to extract phenotypic data for sorghum. However, a main limitation of this study was the need to collect high-quality images at a very low altitude under fine weather during the blooming season. In the future, we will test different types of 3D point clouds to estimate phenotypic information for various crops.