WG-3D: A Low-Cost Platform for High-Throughput Acquisition of 3D Information on Wheat Grain

The three-dimensional (3D) morphological information of wheat grains is an important parameter for discriminating seed health, wheat yield, and wheat quality. High-throughput acquisition of 3D indicators of wheat grains is of great importance for wheat cultivation management, genetic breeding, and economic value. Currently, the 3D morphology of wheat grains still relies on manual investigation, which is subjective, inefficient, and poorly reproducible. The existing 3D acquisition equipment is complicated to operate and expensive, which cannot meet the requirements of high-throughput phenotype acquisition. In this paper, an automatic, economical, and efficient method for the 3D morphometry of wheat grain is proposed. A line laser binocular camera was used to obtain high-quality point-cloud data. A wheat grain 3D model was constructed by point-cloud segmentation, finding, clustering, projection, and reconstruction. Based on this, 3D morphological indicators of wheat grains were calculated. The results show that the root mean square error (RMSE) and mean absolute percentage error (MAPE) of the length were 0.2256 mm and 2.60%; the width, 0.2154 mm and 5.83%; the thickness, 0.2119 mm and 5.81%; and the volume, 1.7740 mm³ and 4.31%. The scanning time was around 12 s and the data processing time was around 3.18 s under a scanning speed of 25 mm/s. This method can achieve the high-throughput acquisition of the 3D information of wheat grains, and it provides a reference for in-depth study of the 3D morphological indicators of wheat and other grains.


Introduction
Wheat is one of the most important foods for humans and the most popular ingredient for bread. Wheat for bread alone accounts for 20% of the world's calorie consumption [1]. As the "chip of agriculture", the seed is the key to producing more and better food on limited arable land. A complete and healthy wheat grain can yield more and better flour, which in turn affects people's consumption. The morphological characteristics of wheat grains are not only important evaluation indicators of grains' morphological quality and of wheat processing quality but also important evaluation criteria for wheat-grain grading. Studies have shown that wheat-grain size and traits are closely related to wheat flour production and yield [2,3]. Wheat-grain morphological characteristics are also important markers for distinguishing different kinds of wheat. In this study, the 3D morphological indicators of the grains were obtained based on the 3D grain model, and indicators such as length, width, thickness, and volume were used to verify the accuracy of the 3D model.

Experimental Materials
The materials used in the experiment were the Yangmai series varieties developed and supplied by the Institute of Agricultural Science in Lixiahe, Jiangsu Province, China. The test materials were divided into two groups according to different experimental purposes. The first group was a validation experiment, which included three varieties, namely Zhenmai No. 9, Yangmai No. 23, and Ningmai No. 13, each with 100 randomly selected grains. The second group was a test experiment and comprised 26 varieties of the Yangmai series (Table 1), and 40 grains from each variety were randomly selected. Figure 1 shows the images of the wheat grains of different Yangmai series varieties.

Experimental Device
This study used a high-quality wide-field stereo camera (VZ-JGY-1300G-120M6, working distance: 20 cm; lens parameters: 6 mm-1/1.8"-120 mm) manufactured by Beijing Weijing Intelligent Technology Co. (Beijing, China). It adopts a line laser binocular stereo vision system to acquire 3D spatial information through the binocular parallax principle. The infrared laser is designed to be invisible and primarily performs auxiliary positioning, allowing the intelligent binocular stereo camera to extract 3D data only from points on the laser line, thus excluding external light interference. The computer configuration was as follows: 64-bit Windows 7 Professional operating system, Intel(R) Core(TM) i5-7400 3.00 GHz processor, 8 GB RAM, 120 GB SSD hard disk, and Intel HD Graphics 630 graphics card. Figure 2 shows the 3D point-cloud acquisition platform.

Grain Point Cloud Processing Pipeline
The overall processing pipeline of the grain point cloud is shown in Figure 3. It mainly includes 5 steps: point-cloud acquisition, point-cloud preprocessing, single-grain point-cloud extraction, 3D model construction, and 3D morphological feature calculation.

Point Cloud Acquisition
The process of acquiring 3D point-cloud data with the binocular camera consists of three parts.
1. Binocular calibration. This uses the known correspondence between the world coordinate system (calibration plate) and the image coordinate system (after processing the image of the calibration plate) to calculate the parameter information of the binocular camera in the current position relationship. Before binocular calibration, it is also necessary to perform single-camera calibration for each camera to determine its distortion coefficients, camera intrinsic matrix, and other parameters.

2. Extraction of points of interest (feature extraction). In this process, the binocular camera extracts all data points on the laser line. Two pictures taken with the left and right cameras from different angles are used to describe the laser line. Then, a suitable preprocessing algorithm is applied to extract and segment the laser lines from the two images.


3. Accurate digital description (stereo matching). In this stage, the stereo matching algorithm is used, which calculates the fundamental matrix based on the coordinates of the feature points in the left and right images and matches the corresponding points of the same name in the left and right images one by one. The calculation is performed using the parallax principle (Figure 4). The point P in Figure 4a is a point in 3D space; O_L and O_R are the left and right camera optical centers; and P_L and P_R are the imaging points of the point P on the left and right image planes. The parallax of the point P in the left and right cameras is defined as d = u_L − u_R, where u_L and u_R are the distances of the two imaging points on the left and right image planes from the left edge of the image, respectively.
According to similar triangles, it follows that Z = f·T_x/d, where Z denotes the depth information; T_x is the distance between the two camera optical centers, also called the baseline; f is the focal length; and d is the parallax defined above. When the point P moves in 3D space, the imaging positions of the point P on the left and right cameras also change, and thus the parallax changes accordingly. As can be seen from the above equation, the parallax is inversely proportional to the distance from the point in 3D space to the center plane of the projection. Therefore, as long as the parallax of a point is known, the depth information of that point can be obtained. From Figure 4b, the 3D coordinates of the point P are finally determined as X = x_L·T_x/d, Y = y_L·T_x/d, and Z = f·T_x/d.

All grains are placed on the carrier table in a uniform position, with the back of the belly facing down and the front facing up. The grains can be quickly adjusted with tweezers and kept from overlapping and piling up. After the grains are prepared, the 3D point-cloud scanning system is activated, and the carrier table moves between the limiters at an even speed (25 mm/s).
The binocular camera starts generating valid point-cloud data in real time as the carrier moves within the binocular camera capture distance. When the carrier leaves the capture range of the binocular camera, all point-cloud data are generated and transferred to the computer for point-cloud data processing.
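The parallax-to-depth relations above can be sketched in code. The following is a minimal Python illustration (the paper's own processing used MATLAB); the focal length, baseline, and pixel coordinates below are made-up values, not the calibration of the camera used in this study:

```python
def disparity_to_3d(x_l, y_l, u_l, u_r, f, t_x):
    """Recover the 3D coordinates of a point from its left/right image
    positions using the binocular parallax principle.

    x_l, y_l : image-plane coordinates of the point in the left camera
    u_l, u_r : distances of the two imaging points from the left edge
               of the left and right images
    f        : focal length
    t_x      : baseline (distance between the two optical centers)
    """
    d = u_l - u_r                 # parallax d = u_L - u_R
    if d <= 0:
        raise ValueError("parallax must be positive for a valid match")
    z = f * t_x / d               # depth is inversely proportional to parallax
    x = x_l * t_x / d
    y = y_l * t_x / d
    return x, y, z

# Halving the parallax doubles the recovered depth.
_, _, z1 = disparity_to_3d(10.0, 5.0, 110.0, 100.0, f=6.0, t_x=50.0)
_, _, z2 = disparity_to_3d(10.0, 5.0, 105.0, 100.0, f=6.0, t_x=50.0)
```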


Point Cloud Preprocessing
The raw point-cloud data contains both grain information and carrier information. It was necessary to segment the two objects to obtain useful grain point-cloud information. We set the segmentation plane threshold φ1, and the threshold size is related to the height of the carrier table from the binocular camera, as shown in Figure 5b. The horizontal height of the carrier stage is fixed. After analyzing a large amount of data, it was concluded that φ1 = −213.00 can split the grain from the carrier stage. Figure 5a shows the original point cloud, where blue represents the carrier table, and yellow-green represents the grain. Figure 5c-e shows the extracted point clouds of the grains after segmentation.
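The plane-threshold segmentation can be sketched as follows. Only the φ1 value comes from the paper; representing points as (x, y, z) tuples is an assumption for illustration:

```python
PHI1 = -213.00  # segmentation plane height from the paper

def split_grain_from_table(points, phi1=PHI1):
    """Separate grain points from carrier-table points by a horizontal
    plane threshold: points above phi1 belong to grains, points at or
    below it to the carrier table."""
    grain = [p for p in points if p[2] > phi1]
    table = [p for p in points if p[2] <= phi1]
    return grain, table

# a tiny made-up cloud: two table points and two grain points
cloud = [(0, 0, -214.0), (1, 0, -213.2), (1, 1, -212.1), (2, 1, -210.5)]
grain, table = split_grain_from_table(cloud)
```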

Single-Grain Point-Cloud Extraction
In the experimental environment, the grains were spaced apart and split from each other easily. However, there were also cases of grain adhesion. Therefore, we partitioned between grains under more extreme conditions to obtain more reliable single grains. First, a segmentation plane threshold φ2 was set to split the grain point cloud into two parts: the upper and lower parts. Second, the AlphaShape function method for constructing polyhedra from points was used for the upper part [33]. By setting the Alpha shape radius value, the upper part of the grains could be clustered into different polyhedra. Finally, for the lower part of the point cloud, each point was classified to its nearest polyhedron.
Usually, the thickness of the grain was not less than 2 mm, and the segmentation plane was set to φ2 = −211.5, which allowed the grain point cloud to be easily split into two parts: the upper and lower parts (Figure 6a,b). As can be seen in Figure 6c, the red line distance is larger than the green line distance, which ensured that the right radius was used to categorize the different grain point clouds. The Alpha shape radius was set to α = 0.5, which accurately clustered the grain point clouds of the same category together (Figure 6d). Thus, the upper part of the grain point cloud was grouped into different polyhedra (Figure 6e).
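The two-stage extraction can be sketched as below. The greedy distance-based grouping here is a simplified stand-in for the AlphaShape clustering used in the paper; φ2 and the 0.5 radius come from the text, while the point cloud and helper names are illustrative:

```python
from math import dist

PHI2 = -211.5  # upper/lower split plane from the paper

def cluster_upper(points, radius=0.5):
    """Greedy distance-based clustering of the upper point cloud: a
    point joins a cluster if it lies within `radius` of any member.
    (A simplified stand-in for the paper's AlphaShape grouping.)"""
    clusters = []
    for p in points:
        for c in clusters:
            if any(dist(p, q) <= radius for q in c):
                c.append(p)
                break
        else:
            clusters.append([p])
    return clusters

def extract_single_grains(points, phi2=PHI2, radius=0.5):
    upper = [p for p in points if p[2] > phi2]
    lower = [p for p in points if p[2] <= phi2]
    clusters = cluster_upper(upper, radius)
    # assign each lower point to the cluster with the nearest member
    for p in lower:
        best = min(clusters, key=lambda c: min(dist(p, q) for q in c))
        best.append(p)
    return clusters

# two grains 3 mm apart, each with one upper and one lower point
cloud = [(0.0, 0.0, -211.0), (0.0, 0.0, -212.0),
         (3.0, 0.0, -211.0), (3.0, 0.0, -212.0)]
grains = extract_single_grains(cloud)
```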

3D Model Construction
All the grain point-cloud data were taken from the frontal scan of the grains. To construct a complete 3D grain body, the grain bottom needed to be obtained. Overhead projection was used to project all grain point clouds onto the height level of the carrier table, which simulated the bottom of the grains (Figure 8b). This was combined with the bottomless point cloud (Figure 8a) to obtain a complete grain point cloud (Figure 8c). The Alpha shape radius was again set to α = 0.5, which clustered all point clouds into a complete 3D model of the grains (Figure 8d).
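The bottom-projection step can be sketched as follows; the table height reuses the φ1 plane from the preprocessing step, and the point format is again an illustrative assumption:

```python
TABLE_Z = -213.00  # carrier-table height (the phi1 plane from the paper)

def close_grain_bottom(points, table_z=TABLE_Z):
    """Simulate the unseen grain bottom by projecting every frontal
    point straight down onto the carrier-table plane, then merging
    the projected bottom with the original (bottomless) cloud."""
    bottom = [(x, y, table_z) for (x, y, _z) in points]
    return points + bottom

front = [(0.0, 0.0, -211.0), (1.0, 0.0, -210.5)]
closed = close_grain_bottom(front)
```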



3D Morphological Feature Calculation
The grain length and width were calculated based on the minimum bounding rectangle (MBR) method on the bottom of the grain obtained after mapping. The grain thickness (T) is the difference between the maximum value of the grain on the Z-axis and the Z-axis value of the carrier table. The formula used to calculate it is as follows: T = Z_max − Z_ref, where Z_max is the maximum coordinate of the grain in the Z-axis direction, and Z_ref is the Z-axis coordinate of the carrier table.
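A sketch of the MBR length/width and thickness calculations, assuming the projected grain bottom is given as 2D points. The convex-hull-plus-edge-alignment search below is one standard way to find the minimum bounding rectangle, not necessarily the exact routine the authors implemented:

```python
from math import hypot

def convex_hull(pts):
    """Andrew's monotone chain convex hull of 2D points (CCW order)."""
    pts = sorted(set(pts))
    if len(pts) <= 2:
        return pts
    def half(points):
        h = []
        for p in points:
            # pop while the last turn is not a strict left turn
            while len(h) >= 2 and ((h[-1][0]-h[-2][0])*(p[1]-h[-2][1])
                                   - (h[-1][1]-h[-2][1])*(p[0]-h[-2][0])) <= 0:
                h.pop()
            h.append(p)
        return h[:-1]
    return half(pts) + half(pts[::-1])

def min_bounding_rect(pts):
    """Length and width of the minimum-area bounding rectangle, found
    by aligning a rectangle with each hull edge in turn."""
    hull = convex_hull(pts)
    best = None
    n = len(hull)
    for i in range(n):
        (x1, y1), (x2, y2) = hull[i], hull[(i + 1) % n]
        ex, ey = x2 - x1, y2 - y1
        L = hypot(ex, ey)
        ux, uy = ex / L, ey / L          # unit vector along the edge
        proj = [(p[0]*ux + p[1]*uy, -p[0]*uy + p[1]*ux) for p in hull]
        w = max(a for a, _ in proj) - min(a for a, _ in proj)
        h = max(b for _, b in proj) - min(b for _, b in proj)
        if best is None or w * h < best[0] * best[1]:
            best = (max(w, h), min(w, h))   # (length, width)
    return best

def thickness(points, z_ref):
    """T = Z_max - Z_ref."""
    return max(p[2] for p in points) - z_ref

# a 6 x 3 axis-aligned grain footprint (made-up values)
length, width = min_bounding_rect([(0, 0), (6, 0), (6, 3), (0, 3)])
```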
Volumes of more complicated polyhedra may not have simple formulas and may be computed by subdividing the polyhedron into smaller pieces (for example, by triangulation). For example, the volume of a regular polyhedron can be computed by dividing it into congruent pyramids, with each pyramid having a face of the polyhedron as its base and the center of the polyhedron as its apex. In general, the volume can be derived from the divergence theorem, which states that the volume of a polyhedral solid is given by V = (1/3) Σ_F (Q_F · N_F) · area(F), where the sum is over the faces F of the polyhedron; Q_F is an arbitrary point on face F; N_F is the unit vector perpendicular to F pointing outside the solid; and the multiplication dot is the dot product.
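For a triangulated surface, each face's divergence-theorem term (1/3)(Q_F · N_F) · area(F) reduces to a signed tetrahedron volume, det([a b c])/6. A sketch, verified on a unit cube:

```python
def tri_volume_term(a, b, c):
    """Contribution of one outward-oriented triangular face (a, b, c)
    to the divergence-theorem volume: (1/3) * (Q_F . N_F) * area(F),
    which for a triangle equals the scalar triple product a.(b x c)/6."""
    return (a[0] * (b[1] * c[2] - b[2] * c[1])
            - a[1] * (b[0] * c[2] - b[2] * c[0])
            + a[2] * (b[0] * c[1] - b[1] * c[0])) / 6.0

def polyhedron_volume(vertices, faces):
    """Sum the per-face terms over a closed triangulated surface whose
    faces are indexed counter-clockwise as seen from outside."""
    return sum(tri_volume_term(*(vertices[i] for i in f)) for f in faces)

# unit cube triangulated into 12 outward-oriented faces
V = [(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0),
     (0, 0, 1), (1, 0, 1), (1, 1, 1), (0, 1, 1)]
F = [(0, 2, 1), (0, 3, 2),   # bottom, outward normal -z
     (4, 5, 6), (4, 6, 7),   # top, outward normal +z
     (0, 1, 5), (0, 5, 4),   # y = 0 side
     (1, 2, 6), (1, 6, 5),   # x = 1 side
     (2, 3, 7), (2, 7, 6),   # y = 1 side
     (3, 0, 4), (3, 4, 7)]   # x = 0 side
vol = polyhedron_volume(V, F)
```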

Manual Measurement Indicators
High-precision electronic vernier calipers were used to measure the size of the longest, widest, and thickest parts of the grains. We marked and positioned 100 grains acquired from the 3D images to achieve matching of the manual measurement data with the data measured by the WG-3D platform.
General methods of solid volume measurement include the sand discharge method and the drainage method. These conventional methods are less effective here because the volume of an individual grain is too small, the accuracy of a measuring cylinder is not high, and air bubbles attach to the surface of the grains. This experiment used a high-precision burette as the volumetric measuring device (Figure 9), in which water was replaced with alcohol to reduce the air bubbles adhering to the surface of the grains. We divided the 100 grains into 10 groups and took a reading after each group of 10 grains was submerged, thereby obtaining the total volume of each group of 10 grains.


Evaluation Indicators
The root mean square error (RMSE) and mean absolute percentage error (MAPE) were used to evaluate the WG-3D measurement accuracy. The formulae are as follows: RMSE = sqrt((1/n) Σ_{i=1}^{n} (x_i − y_i)²) and MAPE = (100%/n) Σ_{i=1}^{n} |x_i − y_i|/y_i, where x_i is the WG-3D measurement, y_i is the manual measurement, and n is the number of grains.
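The two evaluation indicators are direct to compute; the sample values below are illustrative, not measured data:

```python
from math import sqrt

def rmse(x, y):
    """Root mean square error between WG-3D (x) and manual (y) values."""
    return sqrt(sum((xi - yi) ** 2 for xi, yi in zip(x, y)) / len(x))

def mape(x, y):
    """Mean absolute percentage error, relative to the manual value."""
    return 100.0 * sum(abs(xi - yi) / yi for xi, yi in zip(x, y)) / len(x)

wg3d = [6.2, 6.0, 5.9]    # platform measurements (illustrative)
manual = [6.0, 6.0, 6.0]  # caliper measurements (illustrative)
```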

Data Processing and Analysis Software
The point-cloud data processing software was MATLAB R2019a, and IBM SPSS Statistics 26 and Microsoft Excel 2019 Professional Edition were used for data analysis and plotting.

Measurement Accuracy
This research verified the usefulness of the grain 3D model construction by assessing the accuracy of four indicators: length, width, thickness, and volume. The WG-3D measurements of 100 grains of three varieties were compared with manual measurements at a carrier table moving speed of 25 mm/s. The results are shown in Table 2. The measurement errors of the length, width, and thickness of the three varieties were at the same level, with an RMSE of about 0.20 mm. The MAPE of the length was lower than that of the width and thickness because of its larger base value. The volume data are the total value of the measured 10 grains. The measurement error of the volume fluctuated among varieties, and the RMSE was within 20.00 mm³. From the distribution of all samples (Figure 10), the manual measurement data and WG-3D measurement data were evenly distributed around the 1:1 line. The RMSE and MAPE of the length were 0.2256 mm and 2.60%; of the width, 0.2154 mm and 5.83%; of the thickness, 0.2119 mm and 5.81%; and of the volume, 1.7740 mm³ and 4.31%. Overall, the measurement accuracy of the four indicators was reliable, which indicates that our method can be used as an effective means for the high-throughput acquisition of 3D information of wheat grains. In order to further evaluate the accuracy of the method proposed in this paper, 26 varieties of the Yangmai series were tested. The results are shown in Table 3. There were no abnormal values, and the measurement accuracy for all varieties was good. The average RMSE and MAPE of the length were 0.2181 mm and 2.93%; of the width, 0.1974 mm and 5.43%; of the thickness, 0.2027 mm and 5.15%; and of the volume, 1.8997 mm³ and 4.61%. These results are consistent with the previous verification results, which shows that the method is strongly robust.

Measurement Efficiency
The WG-3D information acquisition platform can perform point-cloud data acquisition, 3D point-cloud data processing, and several indicator measurement tasks in real time. Therefore, evaluating the real-time measurement efficiency is necessary. The two main time-consuming blocks are the scan acquisition of the point-cloud data and the computation involved in 3D point-cloud data processing and indicator measurement. The results at a carrier table scanning speed of 25 mm/s are shown in Figure 11a. The scanning time was not affected as the number of grains increased and remained stable at approximately 12 s. This is because the whole scanning process scans the point-cloud data above the carrier (if there are no grains, only the carrier point-cloud data are acquired). Therefore, the total amount of point-cloud data remained the same at a constant scanning speed, and the time consumed was also stable. However, the time consumed for processing the grain 3D point-cloud data and measuring the indicators increased with the number of grains. At a scanning speed of 25 mm/s, the effective interval time (calculation time) for each batch was approximately 9 s, which enables automated wheat grain 3D data measurement in batches of up to 500 grains.

Usually, the slower the carrier table moves, the slower the scanning speed, and the more point-cloud data can be acquired with higher quality. To better balance speed and quality, we tested 100 grains and compared the effect of different scanning speeds on the total measurement time and RMSE. The results are shown in Figure 11b. If the speed is kept within 30 mm/s, high measurement accuracy can be effectively guaranteed. However, too slow a speed seriously reduces efficiency. Therefore, 25 mm/s was set as the normal scanning speed, and the time consumed for one batch was around 14 s. This value is also related to the computer configuration, the number of measurement indicators, and the efficiency of the algorithm: the higher the computer configuration and the fewer the measurement indicators, the faster the data processing and the lower the time consumption.


Comparison of Related Studies
The significance of the 3D morphological traits of grains has attracted the attention of many researchers, and Table 4 shows the comparison with related studies. The results show that most of the methods used binocular cameras for their low cost and high accuracy. Among them, the accuracy of reference [27] is the best, but this method measures only one grain at a time. Therefore, it is not suitable for mass screening scenarios because of its low efficiency. The methods in references [34,35] and WG-3D-5 in this paper are at the same accuracy level, but the former two do not measure volume parameters. In addition, they have unstable threshold segmentation results for two-dimensional images and low efficiency in obtaining point clouds from multiple angles. The accuracy and efficiency of reference [32] are good, but it measures only a single indicator. Compared with WG-3D-25, WG-3D-5 improves accuracy but with reduced efficiency. Therefore, the proposed method can meet the challenges of different grain morphology measurement scenarios.

3D Morphological Cluster Analysis
Based on the rapid batch measurement of grain 3D morphology, the 3D morphological traits of length, width, thickness, and volume were used as cluster analysis variables for the stratified clustering of the Yangmai series varieties. As shown in Table 5 and Figure 12, the differences in grain 3D morphology were small, and the Yangmai series varieties could be classified into two classes at a genetic distance level of 5. The first category is dominated by small-grain wheat, with the volume generally not exceeding 35 mm³, including Y 1#, Y 2#, Y 3#, Y 6#, Y 12#, Y 16#, Y 22#, Y 23#, and Y 158. The second category is dominated by large-grain wheat with a volume of more than 35 mm³, such as Y 28#. Zhang et al. [36] clustered the Yangmai series varieties into two categories using different quality traits as variables, which was supported by the results of genealogical analysis. The results of the present study are similar to their findings: Y 158 is the genetic basis of the strong-gluten, small-grain varieties of Yangmai, Y 23# is inherited from Y 16#, and Y 16# is inherited from the Y 158 superior line Yang91F138. Yang80Jian3 and Ningmai No.9 are the genetic basis of the weak-gluten, large-grain varieties of Yangmai; Y 15# is inherited from Yang 89-40, and Yang 89-40 is inherited from Yang80Jian3; Y 20# and Y 13# are also inherited from Yang80Jian3; and Y 18# and Y 21# are inherited from Ningmai No.9.
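The stratified (hierarchical) clustering on the four traits can be sketched with single-linkage agglomeration; the actual analysis was performed in SPSS, and the trait vectors below are illustrative, not measured values:

```python
from math import dist

def agglomerate(samples, n_clusters=2):
    """Bottom-up hierarchical clustering (single linkage) of trait
    vectors [length, width, thickness, volume]; merging stops when
    n_clusters remain, mirroring a two-class cut of the dendrogram."""
    clusters = [[s] for s in samples]
    while len(clusters) > n_clusters:
        best = None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                # single linkage: distance between closest members
                d = min(dist(a, b) for a in clusters[i] for b in clusters[j])
                if best is None or d < best[0]:
                    best = (d, i, j)
        _, i, j = best
        clusters[i].extend(clusters.pop(j))
    return clusters

# illustrative trait vectors: two small-grain and two large-grain
# varieties, separated mainly by volume
traits = [(6.1, 3.2, 2.9, 30.0), (6.0, 3.1, 2.8, 31.0),
          (6.8, 3.6, 3.2, 40.0), (6.9, 3.7, 3.3, 41.0)]
groups = agglomerate(traits, n_clusters=2)
```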


Advantages and Disadvantages
The WG-3D phenotyping platform has obvious advantages. First, it enables the automated high-throughput acquisition of grain 3D indicators at a high level of accuracy. Second, it uses a line laser binocular camera (which costs only a few hundred dollars), greatly reducing the cost compared to other expensive 3D acquisition equipment (which can cost up to tens of thousands of dollars for tomography). In addition, it can acquire RGB color information and fuse it into RGBD point-cloud data, so the grain color characteristics can be analyzed effectively. The platform is not limited to morphological indicators and will be developed for more indicators, such as surface area, curvature, and roughness. However, the platform also has some shortcomings. When the back side of the grain belly is placed facing upward, the grain tilts at an angle, which is not suitable for horizontal scanning. Therefore, it is currently recommended to scan only the frontal point-cloud data of the grain; the 3D point-cloud data and indicator parameters of the grain belly back are not available. In the future, we will consider how to achieve a standard for grain belly balance, such as by designing notches in the carrier table.

Conclusions
Grain 3D morphological indicators have good application value and research prospects. This study developed a low-cost WG-3D platform for high-throughput acquisition of wheat grain 3D indicators to lay the foundation for the in-depth study of grain 3D characteristics. The platform uses a line laser binocular camera to capture high-quality point-cloud data. It realizes the construction of grain 3D models using point-cloud segmentation, finding, clustering, projection, and reconstruction. The grain length and width information are calculated using the MBR method. The grain thickness is calculated using height difference. The grain volume is calculated using the division method to derive the irregular polyhedral volume. Validation of the manual measurement data proved that the WG-3D platform can achieve an effective and reliable accuracy level. The platform also has high efficiency in point-cloud scanning and parameter calculation. It is a reliable platform for high-throughput acquisition of wheat grain 3D indicators and lays the foundation for in-depth study of other grain 3D morphologies.