Article

Soybean-MVS: Annotated Three-Dimensional Model Dataset of Whole Growth Period Soybeans for 3D Plant Organ Segmentation

1 College of Engineering, Northeast Agricultural University, Harbin 150030, China
2 College of Arts and Sciences, Northeast Agricultural University, Harbin 150030, China
3 College of Agriculture, Northeast Agricultural University, Harbin 150030, China
* Author to whom correspondence should be addressed.
Agriculture 2023, 13(7), 1321; https://doi.org/10.3390/agriculture13071321
Submission received: 7 May 2023 / Revised: 23 June 2023 / Accepted: 24 June 2023 / Published: 28 June 2023

Abstract

The study of plant phenotypes based on 3D models has become an important research direction for automatic plant phenotype acquisition. Building a labeled three-dimensional dataset covering the whole growth period can support the development of point cloud segmentation for 3D crop plant models, so the demand for organ-level annotated 3D whole-growth-period model datasets is growing rapidly. In this study, five soybean varieties were selected, and three-dimensional reconstruction was carried out for the whole growth period (13 stages) of soybean using multiple-view stereo (MVS) technology. Leaves, main stems, and stems of the obtained three-dimensional models were manually labeled. Finally, two point cloud semantic segmentation models, RandLA-Net and BAAF-Net, were used for training. In this paper, 102 soybean stereoscopic plant models were obtained. A dataset with the original point clouds was constructed, and subsequent analysis confirmed that the number of plant points was consistent with the corresponding real plant development. At the same time, a labeled 3D dataset named Soybean-MVS, covering the whole soybean growth period, was constructed. Test results with mAcc values of 88.52% and 87.45% verified the usability of this dataset. In order to further promote the study of point cloud segmentation and phenotype acquisition of soybean plants, this paper proposes an annotated three-dimensional model dataset of the whole soybean growth period for 3D plant organ segmentation. The release of the dataset provides an important basis for proposing updated, highly accurate, and efficient 3D crop model segmentation algorithms. In the future, this dataset will provide important and usable basic data support for the development of three-dimensional point cloud segmentation and automatic phenotype acquisition technology for soybeans.

1. Introduction

With the continuous development of plant phenomics, three-dimensional plant phenotypic analysis has become a challenging research topic. Point cloud segmentation using deep learning is the foundation of crop phenotype measurement and breeding. However, the point cloud datasets commonly used for training are scarce and difficult to obtain, and there are no widely used basic data for organ instance segmentation aimed at phenotype extraction. In addition, due to the complex structure of plants, data annotation requires considerable manual processing. A well-labeled dataset is essential for segmenting plant point clouds with deep learning, and it should have the following characteristics: complete plant structure, high precision, and coverage of multiple varieties and growth periods. Consequently, building a labeled crop plant point cloud dataset of the entire growth period is a key step toward achieving accurate crop point cloud segmentation using deep learning.
Although the lack of well-labeled 3D plant datasets limits further progress in plant point cloud segmentation [1], many scholars have made significant advances in building plant point cloud segmentation datasets in recent years. Zhou et al. [2] manually segmented the 3D point cloud data of soybean plants and gave each point a real label; this served as the training set for point cloud segmentation and as ground truth for evaluating segmentation accuracy using machine learning methods. Li et al. [3] used the MVS-Pheno platform to obtain multi-view images and point clouds of maize plants in a study of organ-level automatic point cloud segmentation of maize shoots based on high-throughput data acquisition and deep learning; the same team developed a data annotation toolkit specifically for maize plants, called Label3DMatch, and annotated the data to build a training dataset. Conn et al. [4] planted tomato, tobacco, and sorghum under five growth conditions (ambient light, shade, high temperature, strong light, and drought) and performed 3D laser scanning (311 tomato scans, 105 tobacco scans, and 141 sorghum scans) of the plant shoot architecture during 20–30 days of development; a 3D plant dataset was constructed after summarizing the species, conditions, and time points. Li et al. [5] used this original dataset, manually marked the semantic labels belonging to stems and leaves using the semantic segmentation editor (SSE) tool, and established a well-labeled point cloud dataset for plant stem-leaf semantic segmentation and leaf instance segmentation. Uchiyama et al. [6] proposed a 3D phenotyping platform that can measure plant growth and environmental information in a small indoor environment to obtain plant image datasets; in addition, annotation tools were introduced that can manually, but effectively, create leaf labels in plant images on a pixel-by-pixel basis. Barth et al. [7] rendered a synthetic dataset containing 10,500 images through Blender. The scene used 42 procedurally generated plant models with random plant parameters, based on 21 empirically measured plant characteristics at 115 locations on 15 plant stems. The fruit model was obtained through 3D scanning, and plant part textures were collected from photos as a reference dataset for modeling and evaluating segmentation performance. David et al. [8] established a large, diverse, and well-labeled wheat image dataset, called the Global Wheat Head Detection (GWHD) dataset; it contains 4700 high-resolution RGB images from multiple countries and 190,000 wheat head annotations at different growth stages, with a wide range of genotypes. Wang et al. [9] constructed a lettuce point cloud dataset consisting of 620 real and synthetic point clouds fused together for 3D instance segmentation network training. Lai et al. [10] first used the SfM-MVS method to obtain point clouds of plant population scenes, which were then annotated in a manner similar to the S3DIS dataset to obtain data that could be trained and tested. In order to provide important and usable basic data support for the development of three-dimensional point cloud segmentation and automatic phenotype acquisition technology for soybeans, this study uses multiple-view stereo technology, taking advantage of its low cost, speed, and precision, to construct 102 three-dimensional soybean plant models, which are then manually labeled to build a dataset for point cloud segmentation. Compared with other datasets, this dataset contains three-dimensional information on soybean plants during the whole growth period, which gives it certain advantages in model accuracy and quantity.
Binocular stereovision spatial positioning involves several key technologies: image acquisition, camera calibration, image preprocessing, edge feature extraction, and stereo matching. Multi-view vision builds on binocular vision by adding one or more cameras as measuring assistants, so that multiple pairs of images of the same object from different angles can be obtained. For the 3D reconstruction of a single plant, this method is well suited to low-sunlight conditions in the laboratory (Duan et al. [11]; Hui et al. [12]). It can also be used for 3D reconstruction in the field, for example when studying overall crop canopy volumes (Biskup et al. [13]; Shafiekhani et al. [14]). Compared with other methods, the multiple-view stereo method requires relatively simple equipment, and the model can be established quickly and effectively with minimal human-computer interaction. Although the reconstruction speed is only moderate and the requirements on the imaging environment are high, the reconstruction accuracy is high, the method is easy to use, and the required equipment is relatively inexpensive. Zhu et al. [15] built a soybean digital image acquisition platform based on the principle of constructing a multi-perspective stereovision system with digital cameras covering different angles, effectively alleviating the problem of mutual occlusion between soybean leaves; the morphological sequence images of target plants for 3D reconstruction were then obtained. Nguyen et al. [16] described a field 3D reconstruction system for plant phenotype acquisition; the system used synchronous, multi-view, high-resolution color digital images to create realistic 3D crop reconstructions and successfully obtained the geometric characteristic parameters of the plant canopy. Lu et al. [17] developed an MCP-based (multi-camera photography) SfM system using multiple-view stereo technology and studied an appropriate 3D reconstruction method and the optimal range of camera shooting angles. Choudhury et al. [18] devised the 3DPhenoMV method, in which plant images captured from multiple side views are used as the algorithm input and a 3D model of the plant is reconstructed using these views and the camera parameters. Miller et al. [19] used low-cost hand-held cameras and SfM-MVS to reconstruct spatially accurate 3D models of individual trees. Shi et al. [20] adopted a multi-view method, allowing information from two-dimensional (2D) images to be integrated into the three-dimensional (3D) plant point cloud model, and evaluated the performance of 2D and multi-view methods on tomato seedlings. Lee et al. [21] proposed an image-based 3D plant reconstruction system based on multiple UAVs that simultaneously obtains images of plants from different views during growth and reconstructs 3D crop models based on structure-from-motion and multiple-view stereo algorithms. Sunvittayakul et al. [22] developed a platform for acquiring 3D cassava root crown (CRC) models using close-range photogrammetry for phenotypic analysis; this novel method is low cost, the 3D acquisition setup requires only a background sheet, a reference object, and a camera, and it is suitable for field experiments in remote areas. Wu et al. [23] developed a small plant-branch phenotyping platform, MVS-Pheno V2, based on multi-view 3D reconstruction, which focuses on low plant branches and realizes high-throughput 3D data collection.
In this study, the multiple-view stereo (MVS) method was used to reconstruct soybean plants. A soybean image acquisition platform was constructed to obtain multi-angle images of soybean plants at different growth stages. Based on the silhouette contour principle, models were established through contour approximation, vertex analysis, and triangulation, and a 3D point cloud dataset of original soybean plants was constructed. Meanwhile, the obtained 3D soybean models were manually labeled using CloudCompare v2.6.3 software, and an annotated 3D dataset called Soybean-MVS, including 102 models, was established. Due to the inherent variation in the appearance and shape of natural objects, segmenting plant parts is challenging. In this paper, to verify the usability of this dataset, the RandLA-Net and BAAF-Net point cloud semantic segmentation networks were used to train and test on the Soybean-MVS dataset.

2. Materials and Methods

2.1. Method Process

In 2018 and 2019, we cultivated high-quality soybean plants of five varieties: DN251, DN252, DN253, HN48, and HN51. An original 3D soybean dataset and a labeled 3D soybean plant dataset were constructed for the whole soybean growth period, consisting of the first trifoliolate stage (V1), second trifoliolate stage (V2), third trifoliolate stage (V3), fourth trifoliolate stage (V4), fifth trifoliolate stage (V5), initial flowering stage (R1), full bloom stage (R2), initial pod stage (R3), full pod stage (R4), initial seed stage (R5), full seed stage (R6), initial maturity stage (R7), and full maturity stage (R8). Here, V denotes the vegetative growth stages and R the reproductive growth stages. Table 1 shows the basic characteristics of the experimental soybean materials, including variety, growing days, planting method, and active accumulated temperature above 10 °C. The research process of this paper mainly involved 3D reconstruction based on the multiple-view stereo method, manual labeling of the data to build the datasets, and training and evaluation of the datasets through point cloud segmentation. Figure 1 details the overall process of building a soybean 3D dataset for point cloud segmentation.

2.2. Image Acquisition

Image acquisition for 3D reconstruction was carried out indoors. The tools used to collect plant images included: (1) a photo studio, (2) a Canon EOS 600D SLR digital camera (Canon (China) Co. Ltd., Beijing, China) and camera rack, (3) a rotary table, (4) a calibration pad, and (5) white light-absorbing cloth. A light source was added around the plant to guarantee the basic environment required for 3D reconstruction based on the multiple-view stereo method. The pot was about 90 cm from the camera. During image acquisition for each pot, we placed the plant pot on the rotary table, positioned a dot calibration pad at the plant roots, lowered the camera height, manually operated the rotary table, took a photo every 10–25° (24° in this study, determined by the black dots on the calibration pad), and collected 15 photos after one full rotation. Then, according to the height of the plant, we raised the camera height three times, from low to high, and repeated the process. In total, 60 photos were obtained by taking four sets of circular rotation shots at different heights. Image acquisition was conducted at each growth stage according to soybean development (Figure 2). The final number of images of the different soybean varieties is shown in Appendix A Table A1.
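The acquisition schedule described above (a shot every 24°, 15 shots per revolution, four camera heights, 60 images per plant) can be summarized by the following minimal Python sketch; the function name and structure are illustrative only and are not part of the authors' pipeline.

```python
# Illustrative sketch of the acquisition schedule described above: one shot
# every 24 degrees (15 per revolution) at four camera heights, 60 images per
# plant. Names and structure are hypothetical, not the authors' software.

def acquisition_schedule(step_deg=24, n_heights=4):
    shots_per_circle = 360 // step_deg  # 15 shots per full rotation
    schedule = [
        (height_idx, angle)
        for height_idx in range(n_heights)
        for angle in range(0, 360, step_deg)
    ]
    assert len(schedule) == shots_per_circle * n_heights  # 60 images in total
    return schedule

if __name__ == "__main__":
    plan = acquisition_schedule()
    print(len(plan), "images per plant; first shots:", plan[:3])
```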

2.3. Three-Dimensional Reconstruction

This study obtained about 60 corresponding soybean plant images from multiple perspectives for each plant. Basic image preprocessing operations, such as noise removal and distortion correction, were performed in Python. In the process of three-dimensional modeling, images from different directions must be connected and combined, so the relationship between the spatial positions of the various images is particularly important. This study adopted an auxiliary camera calibration method based on a calibration device, using the calibration pad to resolve image overlap and to determine the shooting direction of each multi-angle image. The model was established using the "contour extraction", "vertex calculation", and "visual shell generation" steps of the silhouette contour method. A silhouette contour is the contour line of the image projected on the imaging plane, which is an important clue to the geometric shape of the object. When a space object is observed from multiple perspectives by perspective projection, a silhouette line of the object can be obtained in the corresponding image of each perspective. The silhouette line and the corresponding perspective projection center together determine a generalized cone in three-dimensional space, and the observed object is located inside this cone. By extension, increasing the number of viewing angles of the target object from different directions makes the intersection of the corresponding cones approach the surface of the object, allowing three-dimensional visualization of the shape features of the target object.
Firstly, we masked the multi-angle images, selected the position of the soybean plant in each image, and removed all background and calibration pad areas unrelated to the soybean plant, leaving only the complete soybean plant information. Then, according to the partial information of the target object in each multi-angle image, we obtained several approximate polygonal contours, numbered each approximate contour, calculated three vertices from each polygonal contour, and recorded the information of each vertex. A triangular mesh was used to divide the complete surface and outline its fine details. The above realizes the "contour extraction", "vertex calculation", and "visual shell generation" steps of the silhouette contour method. At that point, only the soybean plant skeleton had been generated, and further optimization operations such as volume optimization and surface refinement were required to obtain the final soybean plant surface morphology model. Finally, according to the orientation information of the three-dimensional surface contour model obtained above, combined with the orientation information of the different multi-angle images, texture mapping of the surface was performed, so that the model had more visual features and better described the characteristics of the actual object. Following three-dimensional reconstruction, 102 original models were obtained and named according to year, date, and variety.
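To make the silhouette-based reconstruction step more concrete, the following is a minimal voxel-carving sketch of the visual hull idea described above: a voxel is kept only if its projection falls inside the plant silhouette in every masked view. It is an illustrative sketch under assumed inputs (binary masks and a user-supplied projection function), not the reconstruction software actually used in this study.

```python
import numpy as np

def carve_visual_hull(masks, project, grid_points):
    """Keep only voxels whose projection lies inside the plant silhouette
    in every view (visual hull by voxel carving).

    masks       : list of HxW boolean arrays (True = plant pixel)
    project     : project(view_idx, pts) -> (N, 2) integer pixel coordinates
    grid_points : (N, 3) array of candidate voxel centre coordinates
    """
    keep = np.ones(len(grid_points), dtype=bool)
    for i, mask in enumerate(masks):
        uv = project(i, grid_points)                     # pixel coords of every voxel
        h, w = mask.shape
        inside = (uv[:, 0] >= 0) & (uv[:, 0] < w) & (uv[:, 1] >= 0) & (uv[:, 1] < h)
        hit = np.zeros(len(grid_points), dtype=bool)
        # A voxel survives this view only if it projects onto a silhouette pixel.
        hit[inside] = mask[uv[inside, 1], uv[inside, 0]]
        keep &= hit
    return grid_points[keep]
```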

2.4. Data Annotation

The data annotation work in this study was completed using the open-source software CloudCompare v2.6.3. Each acquired soybean 3D plant model (.obj format file) was imported into CloudCompare, the leaves, main stems, and stems were manually segmented and marked on the soybean plants, and each point was given a real label. At the same time, points were sampled on the mesh of each segmented and labeled organ, with the number of sampling points fixed at 50,000. The labeled point cloud information included xyzRGB information and was stored in .txt format. The labeled leaves, main stems, and stems of a soybean plant are shown in Figure 3 (using 20180612_HN48 as an example). Finally, a labeled soybean 3D point cloud dataset named Soybean-MVS was constructed, including 102 3D models, of which 89 models were used as the training set and 13 models were used as the test set.
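As an illustration of the fixed-count surface sampling and xyzRGB export described above, the following sketch uses the trimesh library; this library choice, the file names, and the single colour per organ are assumptions for demonstration and do not represent the authors' exact tooling (annotation itself was done in CloudCompare).

```python
import numpy as np
import trimesh  # assumed helper library for mesh surface sampling

N_POINTS = 50_000  # fixed number of sampled points, as described above

def sample_organ(mesh_path, out_txt, rgb=(0, 255, 0)):
    """Sample a fixed number of surface points from an organ mesh and write
    them as 'x y z r g b' lines, one point per line (hypothetical file names)."""
    mesh = trimesh.load(mesh_path, force="mesh")
    points, _ = trimesh.sample.sample_surface(mesh, N_POINTS)
    colors = np.tile(np.asarray(rgb), (N_POINTS, 1))
    np.savetxt(out_txt, np.hstack([points, colors]),
               fmt="%.6f %.6f %.6f %d %d %d")

# Example usage (hypothetical paths):
# sample_organ("20180612_HN48_leaf.obj", "20180612_HN48_leaf.txt", rgb=(0, 255, 0))
```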

2.5. Point Cloud Segmentation Network

For the semantic segmentation of the Soybean-MVS 3D point cloud dataset, this study selected two deep learning-based point cloud segmentation architectures, (1) RandLA-Net [24] and (2) BAAF-Net [25], to test the dataset's usability. Appendix A Table A2 shows the hardware, software, and hyperparameter configuration of the deep learning models. Figure 4 shows the architecture of the two point cloud semantic segmentation models. We have made the data and computer programs used for the analysis available, which will allow the results of our experiments to be reproduced by anyone. The link addresses are https://github.com/18545155636/BAAF-Net.git (accessed on 1 January 2023) and https://github.com/18545155636/randla-net.git (accessed on 1 January 2023). The following briefly describes the key methods of these architectures for encoding the local geometry of 3D point clouds. Please refer to the original publications for the default structure and other details of the architectures.

2.5.1. RandLA-Net

RandLA-Net is an efficient and lightweight network that can infer the semantics of each point in large-scale point clouds. It uses a local feature aggregation (LFA) module to progressively enlarge the receptive field of each 3D point, which effectively preserves the geometric details of the point cloud. The local feature aggregation module involves three main steps:
The first step is local spatial encoding (LocSE). The coordinates and features of a point (the center point) in the point cloud P and of its K neighboring points are taken as input. It consists of three parts: (1) finding neighboring points, (2) relative point position encoding, and (3) point feature augmentation. A new aggregated neighborhood feature of the center point is output, which encodes the local geometric features around that point. This module allows the network to learn the local geometric features of the point cloud, which ultimately benefits the learning of the complex local structure of the entire scene. The second step is attention pooling. The LocSE output is used as the input of this step, which includes two parts: (1) computing attention scores and (2) weighted summation. The feature vector aggregating the local features of the center point is then output. The third step is the dilated residual block, which consists of multiple LocSE and attention pooling layers plus a skip connection.
RandLA-Net regards each point as a center point, and each point aggregates the information of its surrounding points to itself. Because points drawn by random sampling remain representative of the distribution of the whole point cloud, random sampling is adopted directly, which greatly accelerates the sampling step.
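For illustration, the relative point position encoding inside LocSE (step one above) can be sketched in NumPy as follows: for each centre point, its K nearest neighbours are found and the concatenation of the centre coordinates, neighbour coordinates, relative offsets, and Euclidean distances is formed. This sketch omits the shared MLP and the point feature augmentation, uses brute-force neighbour search, and is not the authors' training code.

```python
import numpy as np

def relative_position_encoding(points, k=16):
    """LocSE-style relative position encoding:
    [p_i, p_i^k, p_i - p_i^k, ||p_i - p_i^k||] for the k nearest neighbours.

    points : (N, 3) array of xyz coordinates
    returns: (N, k, 10) encoding, before the shared MLP
    """
    # Brute-force kNN for clarity; real implementations use a KD-tree or grid.
    d2 = np.sum((points[:, None, :] - points[None, :, :]) ** 2, axis=-1)
    knn_idx = np.argsort(d2, axis=1)[:, :k]                   # (N, k)
    neighbours = points[knn_idx]                              # (N, k, 3)
    centres = np.repeat(points[:, None, :], k, axis=1)        # (N, k, 3)
    offsets = centres - neighbours                            # (N, k, 3)
    dists = np.linalg.norm(offsets, axis=-1, keepdims=True)   # (N, k, 1)
    return np.concatenate([centres, neighbours, offsets, dists], axis=-1)
```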

2.5.2. BAAF-Net

BAAF-Net is a point cloud semantic segmentation network that uses a bilateral structure to augment the local context information of each point while adaptively fusing multi-resolution features. It involves the following two steps:
The first step is the bilateral context module, which consists of multiple bilateral context blocks (BCBs). A BCB is composed of bilateral augmentation and mixed local aggregation. During bilateral augmentation, neighborhood information around a point is aggregated to that point to obtain local context information in both the geometric and feature spaces; on its own, this is insufficient to express the neighborhood information. The local geometric context is therefore adjusted through the local semantic context, which in turn is adjusted through the augmented local geometric context. Finally, an MLP further processes the augmented local geometric and local semantic context information, and the two are stacked together to obtain the augmented local context information. The mixed local aggregation process first uses max pooling, that is, the maximum over the K neighbor values of each feature is taken as that feature's value for point i. Then, the mean of the local neighborhood of the point is processed through an MLP and also taken as a feature of point i. Lastly, the two aggregated features are concatenated to obtain the final feature of point i. The bilateral context module stacks multiple BCBs and continuously feeds the downsampled points into them; this forms the encoder. The second step is the adaptive fusion module, which corresponds to the decoder. The encoder outputs feature maps at different resolutions, and the output of each layer is gradually upsampled to obtain full-size feature maps, with the feature maps of the previous layer fused at each upsampling step. The full-size feature maps obtained from these multiple scales then need to be fused. To capture important information at different scales, each full-size feature map is fed into an MLP to obtain point-level scores, which are normalized using Softmax. Finally, the integrated feature map for semantic segmentation is obtained by fusing the normalized point-level scores with the upsampled full-size feature maps.
BAAF-Net augments local context by making full use of geometric and semantic features in a bilateral structure. It exploits the distinctiveness of points across multiple resolutions and fuses point-level feature maps with an adaptive fusion method for accurate semantic segmentation.
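The adaptive fusion step can be illustrated with the following minimal sketch, in which full-size feature maps upsampled from several resolutions are combined with softmax-normalised point-level weights. The per-map score here is a plain mean rather than a learned MLP, so this is only a structural illustration of the fusion idea, not the BAAF-Net implementation.

```python
import numpy as np

def adaptive_fusion(full_size_maps):
    """Fuse full-size feature maps from several resolutions using
    softmax-normalised point-level weights (structural sketch only).

    full_size_maps : list of (N, C) arrays, one per resolution
    returns        : (N, C) fused feature map
    """
    maps = np.stack(full_size_maps, axis=0)        # (S, N, C)
    # Stand-in for the learned point-level score MLP: a per-point mean feature.
    scores = maps.mean(axis=-1, keepdims=True)     # (S, N, 1)
    weights = np.exp(scores - scores.max(axis=0))  # softmax over the S resolutions
    weights /= weights.sum(axis=0)
    return (weights * maps).sum(axis=0)
```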

2.6. Evaluation Index

In this study, the average value of the IoU scores of three categories (mIoU) and the average accuracy (mAcc) were used to evaluate the success of each architecture. The number of true positives, true negatives, false positives, and false negatives in each category were expressed as TP, TN, FP, and FN, respectively. Then, the intersection over union (IoU) of each semantic class, the total accuracy (Acc) of each plant, the mean score of IoU (mIoU), and the mean accuracy (mAcc) were defined as:
$$\mathrm{Accuracy} = \frac{TP + TN}{TP + TN + FP + FN},$$

$$\mathrm{IoU} = \frac{TP}{TP + FP + FN},$$

$$\mathrm{mAcc} = \frac{1}{n}\sum_{i=1}^{n}\mathrm{Acc}_i,$$

$$\mathrm{mIoU} = \frac{1}{k}\sum_{i=1}^{k}\mathrm{IoU}_i,$$
where n represents the number of models in the test set (13) and k represents the number of classes.
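A minimal Python sketch of these metrics, computed directly from predicted and ground-truth labels, is given below; the function name and interface are illustrative rather than the evaluation code used in the experiments.

```python
import numpy as np

def segmentation_metrics(y_true, y_pred, num_classes=3):
    """Per-class IoU, overall per-plant accuracy, and mean IoU,
    following the definitions above (classes: leaf, main stem, stem)."""
    ious = []
    for c in range(num_classes):
        tp = np.sum((y_pred == c) & (y_true == c))
        fp = np.sum((y_pred == c) & (y_true != c))
        fn = np.sum((y_pred != c) & (y_true == c))
        denom = tp + fp + fn
        ious.append(tp / denom if denom else 0.0)
    acc = float(np.mean(y_pred == y_true))  # Acc for one plant
    return acc, ious, float(np.mean(ious))  # last value is mIoU over the k classes

# mAcc is then the mean of the per-plant Acc values over the n = 13 test models.
```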

3. Results

3.1. Soybean-MVS Dataset

3.1.1. Original 3D Dataset

This paper tracked and recorded the entire growth period of five soybean varieties and performed 3D reconstruction of the soybean plants at each stage. A total of 102 3D virtual soybean plants were obtained and a 3D point cloud dataset of the original soybean plants was constructed. Appendix A Table A3 details the point clouds of the original soybean 3D plant dataset. Figure 5 shows the point cloud information of the original soybean three-dimensional plant dataset. Figure 5a displays the comparison of the total point cloud amount of stage V and stage R using a t-test: there was a significant difference between the two, with the stage R point cloud amount significantly larger than that of stage V. Figure 5b shows the comparison of the reconstructed point cloud amount in 2018 and 2019 using a t-test: the reconstructed models had almost the same point cloud amount in the two years. Figure 5c compares the point cloud amount of soybean plants at different development stages using an ANOVA test; the point cloud amount at the R5 stage is the greatest, indicating that soybean plants grow most vigorously during the R5 stage and reach the peak of their development. These two comparisons show that the more complex the soybean plant, the greater the point cloud amount of its model. Figure 5d compares the point cloud amount of the different soybean varieties using an ANOVA test, and no significant difference in point cloud amount among varieties was found.
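As an illustration of the stage comparison in Figure 5a, the following sketch runs a two-sample t-test on point counts with SciPy, using a small subset of the 2018 V3 and R5 counts from Table A3 purely as example inputs; the exact test settings (e.g., the equal-variance assumption) used by the authors are not specified, so this is only a sketch.

```python
import numpy as np
from scipy import stats

# Example inputs: 2018 V3 and R5 point counts for the five varieties (Table A3).
v3_counts = np.array([66_528, 85_871, 5_164, 63_915, 5_390])
r5_counts = np.array([99_451, 37_704, 51_456, 61_301, 808_638])

# Two-sample t-test comparing vegetative- and reproductive-stage point counts;
# Welch's variant (equal_var=False) is an assumption, not the authors' setting.
t_stat, p_value = stats.ttest_ind(v3_counts, r5_counts, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```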

3.1.2. Labeled 3D Dataset

This study annotated the original dataset. In order to homogenize the point clouds, points were sampled on the mesh of each labeled organ, with the number of sampled points controlled at 50,000, and a labeled soybean 3D point cloud dataset was constructed. Figure 6 compares the point amount of the original 3D dataset and the sampled point cloud amount of the labeled 3D dataset, taking the DN252 soybean plant as an example. Three soybean plant organs, leaves, main stems, and stems, were manually marked. Table 2 shows the number of labeled organs for the different soybean varieties.
Finally, 89 labeled models were divided into a training set, and 13 labeled models were divided into a test set. The point cloud amount distribution of each organ in the training set and test set is shown in Table 3.

3.2. Point Cloud Segmentation

The test results of the 13 test models in the Soybean-MVS dataset on the RandLA-Net and BAAF-Net models are shown in Table 4.
Figure 7 shows the Acc of the same soybean plant (DN251) at different growth stages after RandLA-Net and BAAF-Net network tests. Overall, the mAcc tested by the two networks was high. For the different complex stages of soybean plant growth, the segmentation accuracy was high and there was no significant difference. Among them, the Acc value in the R5 period was the highest, which may be because the soybean plants are the most vigorous and the leaves are the most luxuriant during the R5 period. The effect of the two networks on the leaf segmentation was better than on the main stems and stems. At the R8 stage, because the soybean plant was leafless, the Acc value was lowest.
Figure 8 shows the label data, label data visualization results, RandLA-Net test visualization results, and BAAF-Net test visualization results for the DN251 soybean plants. From the results, both networks separated soybean plant leaves, main stems, and stems, but there were still identification errors in some details. Figure 9 highlights examples of false predictions with red ellipses. In terms of leaves, both networks performed well, which may be due to the regular leaf shape and the large amount of training data, and the leaves were all segmented. However, Figure 9a,b show that both networks recognized stems as leaves when recognizing the petiole. In terms of the main stem, BAAF-Net performed worse than RandLA-Net; Figure 9c,d show that some main stem components were identified as stems, possibly because of the small amount of main stem training data and the similar morphology of main stems and stems. In terms of the stem, Figure 9e,f show that both networks identified parts of the stems as leaves. In addition, Figure 9g,h show that RandLA-Net identified the connection between main stems and stems as a leaf, while BAAF-Net performed well there.

4. Discussion

This paper explored the growth of soybean plants based on 3D reconstruction technology. Figure 10 shows the full soybean plant growth period, using the three-dimensional model of DN251 soybean plants constructed in this study as an example. The original three-dimensional soybean plant whole growth period dataset and the labeled three-dimensional plant soybean whole growth period dataset constructed in this study can provide an important basis for solving and tackling issues raised by breeders, producers, and consumers. For example, research on crop phenotypic measurement and other issues requires the effective phenotypic analysis of plant growth and morphological changes throughout the growth period. Considering this, we propose the use of point cloud segmentation.
First of all, this paper chose the multiple-view stereo method to reconstruct the entire growth period of soybean plants. This method obtains detailed information about plants through crop images and extracts the phenotypic parameters of crops through related algorithms. Cao et al. [26] developed a 3D imaging acquisition system to collect plant images from different angles to reconstruct 3D plant models. However, only 20 images were collected in that study to meet the minimum image overlap requirements for 3D model reconstruction. In our study, 60 soybean plant images from different perspectives were collected at four different heights during image acquisition, so the 3D model obtained after 3D reconstruction was more accurate. At the same time, a three-dimensional dataset of the whole growth period of the original soybean was established. By comparing the original point cloud amount of the V and R stages, the relationship between the point cloud amount of the three-dimensional soybean plant model and the growth period was analyzed, which confirmed that the number of plant point clouds was consistent with corresponding real plant development. This provides an important basis for more accurate three-dimensional reconstruction of crops in the whole growth period in the future.
Secondly, training point cloud segmentation models usually requires a large amount of labeled data, the cost of which is very high, particularly for dense prediction tasks such as semantic segmentation. In addition, plant phenotype datasets face the additional challenges of severe occlusion and varying lighting conditions, which makes obtaining annotations more time-consuming (Rawat et al. [27]). Gong et al. [28] used a structured light 3D scanning platform based on a special turntable to obtain 3D point cloud data of rice panicles, and then used the open-source software LabelMe to mark the data point by point and create a rice panicle point cloud dataset. Boogaard et al. [29] manually marked cucumber plants twice with CloudCompare and constructed annotated datasets A and B. Dutagaci et al. [30] obtained 11 3D point cloud models of Rosa through X-ray tomography and manually annotated them, creating a labeled dataset, called the ROSE-X dataset, to evaluate 3D plant organ segmentation methods. However, these datasets do not emphasize the importance of three-dimensional data covering the entire plant growth period, and their data volumes are relatively small, which limits their completeness for subsequent studies such as phenotypic measurement across whole plant growth periods. In our study, Soybean-MVS, a labeled three-dimensional dataset of the whole soybean growth period, was constructed; it fully meets the data volume requirements for training and evaluating deep learning point cloud segmentation and ensures the integrity of the dataset used for point cloud segmentation research. This not only provides a basis for plant phenotype measurement, bionics, and other issues, but may also provide a basis for exploring the natural laws of plant growth.
Thirdly, in the process of labeling the dataset in our paper, since the main stem and stem of a soybean plant are relatively similar and each plant has only one main stem, the number of main stem points is much lower than that of leaves and stems, leading to lower segmentation accuracy for the main stems. In some cases, points on the petiole were classified as leaves. However, the visualization results show that each point cloud segmentation network still segmented most of the points on the main stems correctly. Therefore, the Soybean-MVS dataset can ensure the effectiveness of the point cloud segmentation task.
Finally, the Soybean-MVS dataset is universal. The universality of datasets is crucial for empirical research evaluation for at least three reasons: (1) it provides a basis for measuring progress by replicating and comparing results; (2) it reveals the shortcomings of the latest technology, thus paving the way for novel methods and research directions; (3) methods can be developed without first collecting and labeling data (Schunck et al. [31]). Furthermore, data with high universality can meet the requirements of different point cloud segmentation models and yield highly reliable segmentation models. Turgut et al. [32] evaluated performance on real rosebush models based on the ROSE-X and synthetic datasets and adapted six point-cloud-based deep learning architectures (PointNet, etc.) to segment the structure of a rosebush model. In our paper, RandLA-Net and BAAF-Net were used for testing, but the dataset is also applicable to other deep learning-based 3D point cloud classification and segmentation models. In the future, we will continue to expand and adjust the Soybean-MVS dataset and apply it to other point cloud segmentation network models to further improve it.

5. Conclusions

In order to provide important and usable basic data support for the development of three-dimensional point cloud segmentation and automatic phenotype acquisition technology for soybeans, this paper adopted multiple-view stereo technology, obtaining 60 photos per plant through circular rotation shots at four different heights. Three-dimensional plant reconstruction was carried out using the silhouette contour method to construct the original three-dimensional soybean plant dataset of the whole growth period, and it was confirmed that the number of point clouds was consistent with the actual plant development. The leaves, main stems, and stems in the obtained models were manually annotated, and points were sampled on the mesh. A soybean three-dimensional plant dataset named Soybean-MVS was thus constructed for point cloud semantic segmentation. Finally, the RandLA-Net and BAAF-Net models were used to evaluate the dataset, and the mAcc values of the test results were 88.52% and 87.45%, respectively, verifying the usability of the Soybean-MVS labeled 3D plant dataset. The publication of this dataset provides an important basis for proposing updated, high-precision, and efficient 3D crop model segmentation algorithms. In the future, we will continue to update and supplement the dataset and apply it to more point cloud segmentation models to make it more universal. At the same time, automatic acquisition of soybean phenotypes and soybean breeding will be further explored on the basis of this dataset.

Author Contributions

Y.S.: formal analysis, investigation, methodology, image acquisition, three-dimensional reconstruction, annotation of data, and writing—original draft. Z.Z. (Zhixin Zhang): supervision and validation. K.S. and J.Y.: image acquisition and three-dimensional reconstruction. S.L. and L.M.: annotation of data. Y.L., Z.H., Z.Z. (Zhanguo Zhang) and H.Z.: project administration and resources. D.X.: writing—review and editing and funding acquisition. Q.C.: writing—review and editing, funding acquisition, and resources. R.Z.: design of the research, conceptualization, data curation, funding acquisition, resources, and writing—review and editing. All authors agree to be accountable for all aspects of their work and to ensure that questions related to the accuracy or integrity of any part are appropriately investigated and resolved. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the Natural Science Foundation of Heilongjiang Province of China (LH2021C021).

Institutional Review Board Statement

Not applicable.

Data Availability Statement

The original 3D models are available in a publicly accessible repository: https://www.kaggle.com/datasets/soberguo/soybean-original-model (accessed on 1 January 2023). The Soybean-MVS dataset is available in a publicly accessible repository: https://www.kaggle.com/datasets/soberguo/soybeanmvs (accessed on 1 January 2023).

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

Table A1. Image collection quantity of soybean plants of different varieties in different stages.
Year | V1 | V2 | V3 | V4 | V5 | R1 | R2 | R3 | R4 | R5 | R6 | R7 | R8
2018 | 0 | 0 | 60 | 60 | 0 | 60 | 60 | 60 | 0 | 60 | 60 | 60 | 60
2019 | 60 | 60 | 60 | 60 | 60 | 60 | 60 | 60 | 60 | 60 | 60 | 60 | 60
(The same image counts apply to each of the five varieties: DN251, DN252, DN253, HN48, and HN51.)
Notes: In this study, five soybean varieties, DN251, DN252, DN253, HN48, and HN51, were planted in the pot farm of Northeast Agricultural University in 2018 and 2019, and images were collected during the whole soybean growth period. Table A1 shows the specific number of images collected.
Table A2. Hardware, software, and hyperparameter configuration of deep learning models.
Catalogue | Content
CPU | Core i9-12900KF
RAM | 64 GB
GPU | NVIDIA 3090 (24 GB)
Operating system | Ubuntu 18.04
CUDA | 11.3
cuDNN | 8.4
Data annotation | CloudCompare
Deep learning framework | TensorFlow 2.6.0
Anaconda | Anaconda 5.2
Momentum | 0.9
Threshold | 0.5
Table A3. Original information of 3D soybean plant model.
Variety | Date of Reconstruction | Stage | Points
DN251 | 12 June 2018 | V3 | 66,528
DN252 | 12 June 2018 | V3 | 85,871
DN253 | 12 June 2018 | V3 | 5164
HN48 | 12 June 2018 | V3 | 63,915
HN51 | 12 June 2018 | V3 | 5390
DN251 | 19 June 2018 | V4 | 78,211
DN252 | 19 June 2018 | V4 | 7482
DN253 | 19 June 2018 | V4 | 6581
HN48 | 19 June 2018 | V4 | 5776
HN51 | 19 June 2018 | V4 | 6734
DN251 | 26 June 2018 | R1 | 10,752
DN252 | 26 June 2018 | R1 | 140,986
DN253 | 26 June 2018 | R1 | 11,535
HN48 | 26 June 2018 | R1 | 9371
DN251 | 4 July 2018 | R2 | 14,842
DN252 | 4 July 2018 | R2 | 21,367
HN48 | 4 July 2018 | R2 | 18,757
HN51 | 4 July 2018 | R2 | 12,300
DN251 | 11 July 2018 | R3 | 25,306
DN252 | 11 July 2018 | R3 | 24,316
DN253 | 11 July 2018 | R3 | 26,733
HN48 | 11 July 2018 | R3 | 22,995
HN51 | 11 July 2018 | R3 | 271,221
DN251 | 26 July 2018 | R5 | 99,451
DN252 | 26 July 2018 | R5 | 37,704
DN253 | 26 July 2018 | R5 | 51,456
HN48 | 26 July 2018 | R5 | 61,301
HN51 | 26 July 2018 | R5 | 808,638
DN251 | 17 August 2018 | R6 | 35,193
DN252 | 17 August 2018 | R6 | 37,896
DN251 | 8 September 2018 | R7 | 24,864
DN252 | 8 September 2018 | R7 | 19,805
DN253 | 8 September 2018 | R7 | 19,145
HN48 | 8 September 2018 | R7 | 35,983
HN51 | 8 September 2018 | R7 | 33,647
DN251 | 3 October 2018 | R8 | 5574
DN252 | 3 October 2018 | R8 | 8662
DN253 | 3 October 2018 | R8 | 11,313
HN48 | 3 October 2018 | R8 | 11,220
HN51 | 3 October 2018 | R8 | 9366
DN251 | 29 May 2019 | V1 | 9415
DN252 | 29 May 2019 | V1 | 10,233
DN253 | 29 May 2019 | V1 | 7014
HN48 | 29 May 2019 | V1 | 8766
HN51 | 29 May 2019 | V1 | 6541
DN251 | 3 June 2019 | V2 | 6113
DN252 | 3 June 2019 | V2 | 4671
DN253 | 3 June 2019 | V2 | 4860
HN48 | 3 June 2019 | V2 | 4947
HN51 | 3 June 2019 | V2 | 4269
DN251 | 8 June 2019 | V3 | 8322
DN252 | 8 June 2019 | V3 | 5228
DN253 | 8 June 2019 | V3 | 5161
HN48 | 8 June 2019 | V3 | 7974
HN51 | 8 June 2019 | V3 | 5777
DN251 | 12 June 2019 | V4 | 7890
DN252 | 12 June 2019 | V4 | 5612
DN253 | 12 June 2019 | V4 | 88,756
HN48 | 12 June 2019 | V4 | 113,444
HN51 | 12 June 2019 | V4 | 5956
DN251 | 18 June 2019 | V5 | 9132
DN252 | 18 June 2019 | V5 | 7669
DN253 | 18 June 2019 | V5 | 9416
HN48 | 18 June 2019 | V5 | 10,604
HN51 | 18 June 2019 | V5 | 100,902
DN251 | 24 June 2019 | R1 | 149,372
DN252 | 24 June 2019 | R1 | 9728
DN253 | 24 June 2019 | R1 | 135,007
HN48 | 24 June 2019 | R1 | 160,789
HN51 | 24 June 2019 | R1 | 7672
DN251 | 27 June 2019 | R2 | 13,951
DN252 | 27 June 2019 | R2 | 171,706
DN253 | 27 June 2019 | R2 | 176,975
HN48 | 27 June 2019 | R2 | 242,936
HN51 | 27 June 2019 | R2 | 11,597
DN251 | 5 July 2019 | R3 | 19,569
DN252 | 5 July 2019 | R3 | 20,336
DN253 | 5 July 2019 | R3 | 286,872
HN48 | 5 July 2019 | R3 | 22,544
HN51 | 5 July 2019 | R3 | 17,661
DN251 | 13 July 2019 | R4 | 29,729
DN252 | 13 July 2019 | R4 | 26,609
DN253 | 13 July 2019 | R4 | 28,611
HN48 | 13 July 2019 | R4 | 35,583
HN51 | 13 July 2019 | R4 | 26,426
DN251 | 22 July 2019 | R5 | 37,823
DN252 | 22 July 2019 | R5 | 50,636
DN253 | 22 July 2019 | R5 | 54,806
HN48 | 22 July 2019 | R5 | 56,830
DN251 | 6 August 2019 | R6 | 54,325
DN252 | 6 August 2019 | R6 | 712,682
DN253 | 6 August 2019 | R6 | 632,552
HN48 | 6 August 2019 | R6 | 603,497
DN251 | 26 August 2019 | R7 | 45,556
DN252 | 26 August 2019 | R7 | 45,332
DN253 | 26 August 2019 | R7 | 44,100
HN48 | 26 August 2019 | R7 | 27,986
DN251 | 21 September 2019 | R8 | 9990
DN252 | 21 September 2019 | R8 | 8426
DN253 | 21 September 2019 | R8 | 9317
HN48 | 21 September 2019 | R8 | 7229
HN51 | 21 September 2019 | R8 | 9964

References

1. Li, D.; Shi, G.; Li, J.; Chen, Y.; Zhang, S.; Xiang, S.; Jin, S. PlantNet: A dual-function point cloud segmentation network for multiple plant species. ISPRS J. Photogramm. Remote Sens. 2022, 184, 243–263.
2. Zhou, J.; Fu, X.; Zhou, S.; Zhou, J.; Ye, H.; Nguyen, H.T. Automated segmentation of soybean plants from 3D point cloud using machine learning. Comput. Electron. Agric. 2019, 162, 143–153.
3. Li, Y.; Wen, W.; Miao, T.; Wu, S.; Yu, Z.; Wang, X.; Guo, X.; Zhao, C. Automatic organ-level point cloud segmentation of maize shoots by integrating high-throughput data acquisition and deep learning. Comput. Electron. Agric. 2022, 193, 106702.
4. Conn, A.; Pedmale, U.V.; Chory, J.; Navlakha, S. High-Resolution Laser Scanning Reveals Plant Architectures that Reflect Universal Network Design Principles. Cell Syst. 2017, 5, 53–62.e3.
5. Li, D.; Li, J.; Xiang, S.; Pan, A. PSegNet: Simultaneous Semantic and Instance Segmentation for Point Clouds of Plants. Plant Phenomics 2022, 2022, 9787643.
6. Uchiyama, H.; Sakurai, S.; Mishima, M.; Arita, D.; Okayasu, T.; Shimada, A.; Taniguchi, R.I. An easy-to-setup 3D phenotyping platform for KOMATSUNA dataset. In Proceedings of the IEEE International Conference on Computer Vision Workshops (ICCVW), Venice, Italy, 22–29 October 2017; pp. 2038–2045.
7. Barth, R.; Ijsselmuiden, J.; Hemming, J.; Henten, E.J.V. Data synthesis methods for semantic segmentation in agriculture: A Capsicum annuum dataset. Comput. Electron. Agric. 2018, 144, 284–296.
8. David, E.; Madec, S.; Sadeghi-Tehran, P.; Aasen, H.; Zheng, B.; Liu, S.; Kirchgessner, N.; Ishikawa, G.; Nagasawa, K.; Badhon, M.A.; et al. Global Wheat Head Detection (GWHD) Dataset: A Large and Diverse Dataset of High-Resolution RGB-Labelled Images to Develop and Benchmark Wheat Head Detection Methods. Plant Phenomics 2020, 2020, 3521852.
9. Wang, L.; Zheng, L.; Wang, M. 3D Point Cloud Instance Segmentation of Lettuce Based on PartNet. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, New Orleans, LA, USA, 19–23 June 2022; pp. 1647–1655.
10. Lai, Y.; Lu, S.; Qian, T.; Chen, M.; Zhen, S.; Guo, L. Segmentation of Plant Point Cloud based on Deep Learning Method. Comput. Aided Des. Appl. 2022, 19, 1117–1129.
11. Duan, T.; Chapman, S.C.; Holland, E.; Rebetzke, G.J.; Guo, Y.; Zheng, B. Dynamic quantification of canopy structure to characterize early plant vigour in wheat genotypes. J. Exp. Bot. 2016, 67, 4523–4534.
12. Hui, F.; Zhu, J.; Hu, P.; Meng, L.; Zhu, B.; Guo, Y.; Li, B.; Ma, Y. Image-based dynamic quantification and high-accuracy 3D evaluation of canopy structure of plant populations. Ann. Bot. 2018, 121, 1079–1088.
13. Biskup, B.; Scharr, H.; Schurr, U.; Rascher, U. A stereo imaging system for measuring structural parameters of plant canopies. Plant Cell Environ. 2007, 30, 1299–1308.
14. Shafiekhani, A.; Kadam, S.; Fritschi, F.B.; Desouza, G.N. Vinobot and Vinoculer: Two Robotic Platforms for High-Throughput Field Phenotyping. Sensors 2017, 17, 214.
15. Zhu, R.; Sun, K.; Yan, Z.; Yan, X.; Yu, J.; Shi, J.; Hu, Z.; Jiang, H.; Xin, D.; Zhang, Z.; et al. Analysing the phenotype development of soybean plants using low-cost 3D reconstruction. Sci. Rep. 2020, 10, 7055.
16. Nguyen, T.T.; Slaughter, D.C.; Townsley, B.; Carriedo, L.; Sinha, N. Comparison of Structure-from-Motion and Stereo Vision Techniques for Full In-Field 3D Reconstruction and Phenotyping of Plants: An Investigation in Sunflower. In Proceedings of the ASABE International Meeting, Orlando, FL, USA, 17–20 July 2016.
17. Lu, X.; Ono, E.; Lu, S.; Zhang, Y.; Teng, P.; Aono, M.; Shimizu, Y.; Hosoi, F.; Omasa, K. Reconstruction method and optimum range of camera-shooting angle for 3D plant modeling using a multi-camera photography system. Plant Methods 2020, 16, 118.
18. Das Choudhury, S.; Maturu, S.; Samal, A.; Stoerger, V.; Awada, T. Leveraging Image Analysis to Compute 3D Plant Phenotypes Based on Voxel-Grid Plant Reconstruction. Front. Plant Sci. 2020, 11, 521431.
19. Miller, J.; Morgenroth, J.; Gomez, C. 3D modelling of individual trees using a handheld camera: Accuracy of height, diameter and volume estimates. Urban For. Urban Green. 2015, 14, 932–940.
20. Shi, W.; Van De Zedde, R.; Jiang, H.; Kootstra, G. Plant-part segmentation using deep learning and multi-view vision. Biosyst. Eng. 2019, 187, 81–95.
21. Lee, H.-S.; Thomasson, J.A.; Han, X. Improvement of field phenotyping from synchronized multi-camera image collection based on multiple UAVs collaborative operation systems. In Proceedings of the 2022 ASABE Annual International Meeting, Houston, TX, USA, 17–20 July 2022.
22. Sunvittayakul, P.; Kittipadakul, P.; Wonnapinij, P.; Chanchay, P.; Wannitikul, P.; Sathitnaitham, S.; Phanthanong, P.; Changwitchukarn, K.; Suttangkakul, A.; Ceballos, H.; et al. Cassava root crown phenotyping using three-dimension (3D) multi-view stereo reconstruction. Sci. Rep. 2022, 12, 10030.
23. Wu, S.; Wen, W.; Gou, W.; Lu, X.; Zhang, W.; Zheng, C.; Xiang, Z.; Chen, L.; Guo, X. A miniaturized phenotyping platform for individual plants using multi-view stereo 3D reconstruction. Front. Plant Sci. 2022, 13, 897746.
24. Hu, Q.; Yang, B.; Xie, L.; Rosa, S.; Guo, Y.; Wang, Z.; Trigoni, N.; Markham, A. RandLA-Net: Efficient semantic segmentation of large-scale point clouds. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 13–19 June 2020; pp. 11108–11117.
25. Qiu, S.; Anwar, S.; Barnes, N. Semantic segmentation for real point cloud scenes via bilateral augmentation and adaptive fusion. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA, 20–25 June 2021; pp. 1757–1767.
26. Cao, W.; Zhou, J.; Yuan, Y.; Ye, H.; Nguyen, H.T.; Chen, J.; Zhou, J. Quantifying Variation in Soybean Due to Flood Using a Low-Cost 3D Imaging System. Sensors 2019, 19, 2682.
27. Rawat, S.; Chandra, A.L.; Desai, S.V.; Balasubramanian, V.N.; Ninomiya, S.; Guo, W. How Useful Is Image-Based Active Learning for Plant Organ Segmentation? Plant Phenomics 2022, 2022, 9795275.
28. Gong, L.; Du, X.; Zhu, K.; Lin, K.; Lou, Q.; Yuan, Z.; Huang, G.; Liu, C. Panicle-3D: Efficient Phenotyping Tool for Precise Semantic Segmentation of Rice Panicle Point Cloud. Plant Phenomics 2021, 2021, 9838929.
29. Boogaard, F.P.; Van Henten, E.J.; Kootstra, G. Boosting plant-part segmentation of cucumber plants by enriching incomplete 3D point clouds with spectral data. Biosyst. Eng. 2021, 211, 167–182.
30. Dutagaci, H.; Rasti, P.; Galopin, G.; Rousseau, D. ROSE-X: An annotated dataset for evaluation of 3D plant organ segmentation methods. Plant Methods 2020, 16, 28.
31. Schunck, D.; Magistri, F.; Rosu, R.A.; Cornelissen, A.; Chebrolu, N.; Paulus, S.; Leon, J.; Behnke, S.; Stachniss, C.; Kuhlmann, H.; et al. Pheno4D: A spatio-temporal dataset of maize and tomato plant point clouds for phenotyping and advanced plant analysis. PLoS ONE 2021, 16, e0256340.
32. Turgut, K.; Dutagaci, H.; Galopin, G.; Rousseau, D. Segmentation of structural parts of rosebush plants with 3D point-based deep learning methods. Plant Methods 2022, 18, 20.
Figure 1. The process of building a soybean 3D dataset for point cloud segmentation. The process mainly includes three parts: 3D reconstruction, building the dataset, and point cloud segmentation. 3D reconstruction includes: (A) original image acquisition; (B) image preprocessing; (C) generation of 3D model skeleton; (D) generation of 3D model texture. Building the dataset includes: (E) data annotation; (F) construction of annotated dataset. Point cloud segmentation includes: (G) point cloud segmentation network selection; (H) result of point cloud segmentation.
Figure 2. Soybean 3D reconstruction image acquisition. (a) Soybean image acquisition platform. (b) Schematic diagram of soybean plant 3D reconstruction image acquisition. The 3D reconstruction was carried out in a laboratory with no wind and sufficient light, using multiple-view stereo technology (MVS).
Figure 3. Manually mark leaves, main stems, and stems of soybean plants. The organs of the soybean plants were manually labeled.
Figure 4. Point cloud semantic segmentation architecture. (a) RandLA-Net semantic segmentation architecture diagram. (b) BAAF-Net semantic segmentation architecture diagram. The dataset was trained and tested on two networks.
Figure 5. Point cloud information map of original soybean 3D plant dataset. (a) Comparison chart of total point cloud amount of stage V and stage R. (b) Comparison chart of reconstructed point cloud amount in 2018 and 2019. (c) Comparison chart of point cloud amount in different development stages. (d) Comparison chart of point cloud amounts of various varieties.
Figure 6. Comparison between the amount of points in the original 3D dataset and the amount of sampled point clouds in the labeled 3D dataset. (a,c) Point volume of the original dataset. (b,d) Sampled point cloud volume of labeled dataset.
Figure 7. Acc results of soybean plants tested in the RandLA-Net network and BAAF-Net network during the whole growth period. This shows a comparison of the Acc results of the test set on the two models.
Figure 8. Soybean plant annotation data and RandLA-Net and BAAF-Net visualization results at different stages. This compares the overall segmentation effect of the two models.
Figure 9. Example of error prediction. (a,b) Examples of false prediction of the petiole. (c,d) Examples of main stem error prediction. (e,f) Examples of stem error prediction. (g,h) Examples of error prediction at the connection of main stem and stem. (a,c,e,g) The RandLA-Net test results. (b,d,f,h) BAAF-Net test results. By contrast, this shows the local segmentation difference between the two models.
Figure 10. The whole growth period of soybean, illustrated with the three-dimensional models of DN251.
Table 1. Basic characteristics of soybean materials. This shows the basic attribute information of the soybean materials selected for this experiment, including variety, growing days, accumulated temperature, and planting method.
Variety | Growing Days | >10 °C Accumulated Temperature | Planting Method
DN251 | 125 | 2600 °C | potted planting
DN252 | 124 | 2500 °C | potted planting
DN253 | 115 | 2350 °C | potted planting
HN48 | 118 | 2350 °C | potted planting
HN51 | 126 | 2600 °C | potted planting
Table 2. Number of organ markers in different soybean plants. The number of leaves, the number of main stems, and the number of stems were compared by counting the organs of labeled soybean plants.
Variety | Leaf | Main Stem | Stem
DN251 | 756 | 22 | 182
DN252 | 813 | 22 | 188
DN253 | 718 | 20 | 165
HN48 | 649 | 21 | 161
HN51 | 437 | 17 | 125
Table 3. Point cloud amount distribution of each organ in the training set and test set (%). The proportion of points belonging to each organ in the training set and the test set was calculated.
Dataset | Leaf | Main Stem | Stem
Soybean-MVS training models | 78.08 | 2.72 | 19.20
Soybean-MVS test models | 79.13 | 2.36 | 18.51
Table 4. Point cloud segmentation test results (%). The results of the dataset on two models, including IoU, mIoU, and mAcc.
Metric | Class | RandLA-Net | BAAF-Net
IoU | leaf | 88.58 | 88.83
IoU | main stem | 57.03 | 27.25
IoU | stem | 45.54 | 48.23
mIoU | | 63.72 | 54.77
mAcc | | 88.52 | 87.45