Article

Optimization of the Canopy Three-Dimensional Reconstruction Method for Intercropped Soybeans and Early Yield Prediction

by Xiuni Li 1,2,3, Menggen Chen 1, Shuyuan He 1, Xiangyao Xu 1, Panxia Shao 1, Yahan Su 1, Lingxiao He 1, Jia Qiao 4, Mei Xu 1, Yao Zhao 1,2,3, Wenyu Yang 1,2,3, Wouter H. Maes 5,* and Weiguo Liu 1,2,3,*
1 College of Agronomy, Sichuan Agricultural University, Chengdu 610000, China
2 Sichuan Engineering Research Center for Crop Strip Intercropping System, Chengdu 610000, China
3 Key Laboratory of Crop Ecophysiology and Farming System in Southwest China, Ministry of Agriculture, Chengdu 610000, China
4 School of Marxism, Nanjing Agricultural University, Nanjing 210014, China
5 UAV Research Center, Department of Plants and Crops, Ghent University, 9000 Ghent, Belgium
* Authors to whom correspondence should be addressed.
Agriculture 2025, 15(7), 729; https://doi.org/10.3390/agriculture15070729
Submission received: 11 March 2025 / Revised: 27 March 2025 / Accepted: 27 March 2025 / Published: 28 March 2025
(This article belongs to the Section Digital Agriculture)

Abstract: Intercropping is a key cultivation strategy for safeguarding national food and oil security. Accurate early-stage yield prediction of intercropped soybeans is essential for the rapid screening and breeding of high-yield soybean varieties. Three-dimensional (3D) reconstruction is a widely used technique for crop yield estimation, and the accuracy of the reconstructed models directly affects the reliability of yield predictions. This study focuses on optimizing the 3D reconstruction process for intercropped soybeans to efficiently extract canopy structural parameters throughout the entire growth cycle, thereby enhancing the accuracy of early yield prediction. To achieve this, we optimized image acquisition protocols by testing four imaging angles (15°, 30°, 45°, and 60°), four plant rotation speeds (0.8 rpm, 1.0 rpm, 1.2 rpm, and 1.4 rpm), and four image acquisition counts (24, 36, 48, and 72 images). Point cloud preprocessing was refined through the application of homogeneous transformation matrices, color thresholding, statistical filtering, and scaling. Key algorithms—including the convex hull algorithm, voxel method, and 3D α-shape algorithm—were optimized using MATLAB, enabling the extraction of multi-dimensional canopy parameters. Subsequently, a stepwise regression model was developed to achieve precise early-stage yield prediction for soybeans. The study identified the optimal image acquisition settings: a 30° imaging angle, a plant rotation speed of 1.2 rpm, and the collection of 36 images during the vegetative stage and 48 images during the reproductive stage. With these improvements, a high-precision 3D canopy point-cloud model of soybeans covering the entire growth period was successfully constructed. The optimized pipeline enabled batch extraction of 23 canopy structural parameters, achieving high accuracy, with linear fitting R2 values of 0.990 for plant height and 0.950 for plant width. Furthermore, the voxel volume-based prediction approach yielded a maximum yield prediction accuracy of R2 = 0.788. This study presents an integrated 3D reconstruction framework, spanning image acquisition, point cloud generation, and structural parameter extraction, effectively enabling early and precise yield prediction for intercropped soybeans. The proposed method offers an efficient and reliable technical reference for acquiring 3D structural information of soybeans in strip intercropping systems and contributes to the accurate identification of soybean germplasm resources, providing substantial theoretical and practical value.

1. Introduction

Soybean is a vital crop, serving as a major source of food, oil, and feed, and playing a crucial role in both national economies and daily life. Intercropping systems, which efficiently utilize vertical space, significantly enhance land use efficiency and improve soybean self-sufficiency, making them a key strategy for increasing overall soybean production. Furthermore, soybean root nodules exhibit strong nitrogen-fixing capacity, contributing to improved soil fertility, a higher cropping index, and a substantial reduction in nitrogen fertilizer and pesticide usage, thereby delivering notable ecological benefits [1,2]. Globally, in major soybean-producing regions such as the United States, China, India, Australia, Egypt, and Nigeria, intercropping and relay cropping systems are widely practiced. These systems allow soybean to be grown alongside a variety of crops, including cassava [3,4], wheat [5,6,7], maize [8], sugarcane [9], sorghum [10,11], and sunflower [12]. However, in such systems, soybeans, being shorter in stature, are often shaded by taller companion crops, limiting light interception, restricting growth, and ultimately reducing yield [13].
Three-dimensional (3D) modeling technology allows for the precise reconstruction of crop canopy structures, providing detailed information on canopy surface area, volume, centroid height, and top canopy projection [14,15]. Such data are critical for high-throughput crop phenotyping, yield prediction, and functional–structural plant modeling [16].
Currently, crop 3D reconstruction is mainly conducted using three approaches. Rule-based methods rely on measuring crop morphological parameters to generate approximate models by adjusting structural features, though these tend to have relatively high error margins [17]. Sensor-based methods, such as LiDAR, directly capture high-precision 3D point clouds, but are costly, complex, and sensitive to environmental factors [18]. Stereovision-based methods use multi-view image sequences to reconstruct 3D crop structures, offering lower costs, better adaptability across environments, and reduced labor requirements [19].
Structure-from-motion (SfM), a stereovision-based technique utilizing multi-view 2D image sequences for automatic camera calibration, has gained attention for its cost-effectiveness [20]. SfM generates sparse point clouds from multi-view images, which are further processed using multi-view stereo (MVS) techniques to extract depth and normal information, resulting in dense 3D point clouds [21]. The open-source platform Colmap integrates both SfM and MVS, producing high-precision 3D reconstructions [22]. However, this approach requires strict image acquisition protocols, and manual collection is labor-intensive, time-consuming, and difficult to standardize, which may compromise model quality. Recent advancements in automated control systems and precision mechanical devices have made low-cost, high-accuracy crop 3D reconstruction increasingly feasible [22].
Several studies have demonstrated the utility of 3D reconstruction for various crops. For instance, Wu et al. [23] applied a contour projection method based on visual hulls to generate rice plant point clouds and extract plant height, bounding box volume, and leaf number. Thapa et al. [24] used LiDAR to scan rotating samples of corn and sorghum, extracting parameters such as leaf area and leaf angle after point cloud denoising and triangulation. Li et al. [25] employed SfM and MVS to reconstruct wheat point clouds and estimate leaf parameters. Similarly, Li et al. [26] applied SfM to maize, showing high correlations between 3D-derived parameters and manual measurements, with R2 values of 0.991 for plant height, 0.989 for leaf length, 0.926 for relative leaf area, and 0.963 for leaf width.
Additionally, numerous studies have explored crop yield prediction using unmanned aerial vehicles (UAVs) combined with multi-sensor fusion and machine learning techniques [27,28]. The fusion of multispectral and thermal infrared data has been shown to significantly improve prediction accuracy. For example, support vector machine (SVM) and deep neural network (DNN) models achieved an R2 of 0.692 for wheat yield prediction [29]. By further integrating multimodal sensor data, winter wheat yield predictions reached an R2 of 0.78, with a 22% reduction in RMSE [30]. Spatiotemporal deep learning models such as 3D-CNN, which use multi-temporal RGB sequences, have achieved high accuracy during early growth stages, with a mean absolute error (MAE) of 292.8 kg/ha [31]. The combination of LiDAR with machine learning methods has further improved farmland biomass predictions, with R2 values of 0.71 and 0.93 at 1 m and 2 m resolutions, respectively [32]. These advances offer strong technical support for high-throughput plant phenotyping and precision agriculture.
Despite these achievements, most research focuses on crops with relatively simple architectures, such as maize [33], rice [34], wheat [35], and sorghum [36]. In contrast, soybean poses unique challenges due to its complex plant architecture, characterized by significant occlusion from leaves and branches. Current research on soybean 3D reconstruction remains limited to early seedling stages [37] or individual leaves [38].
How can soybean 3D reconstruction be optimized to enable early and accurate yield prediction? This study systematically investigates image acquisition angles, plant rotation speeds, point cloud preprocessing, and multi-dimensional parameter extraction across the entire soybean growth cycle. The objective is to provide technical support for the efficient and accurate acquisition of soybean 3D structural data under strip intercropping conditions and to establish a scientific basis for the precise identification of soybean germplasm resources.

2. Experimental Site and Experimental Design

The experiment was conducted during the 2022–2023 growing seasons at the Chongzhou Experimental Base of Sichuan Agricultural University (103°39′ E, 30°33′ N). The site is characterized by a subtropical monsoon humid climate, with an average annual temperature of 16.2 °C, approximately 1400 h of sunshine per year, and an annual precipitation of 918 mm. The soil chemical properties at a depth of 0–20 cm were as follows: soil organic matter, 24.3 g·kg−1; total potassium, 15.2 g·kg−1; total nitrogen, 1.6 g·kg−1; total phosphorus, 1.3 g·kg−1; available potassium, 169.4 mg·kg−1; available nitrogen, 299.5 mg·kg−1; and available phosphorus, 36.5 mg·kg−1.
As Figure 1 illustrates, the field was laid out using a wide–narrow row configuration, with each strip measuring 2 m in width and 20 m in length. The maize strips were planted in two rows, with row spacings of 60 cm (wide) and 40 cm (narrow), and an in-row plant spacing of 20 cm. Maize was sown on 31 March, accompanied by a basal application of a compound fertilizer (N:P:K = 13:5:7) at a rate of 923 kg·hm−2. Urea (N ≥ 46%) was applied as a topdressing at rates of 98 kg·hm−2 during the jointing stage and 163 kg·hm−2 at the tasseling stage. The experiment was designed using a maize–soybean strip intercropping system, with Zhongyu 3, a semi-compact spring maize variety, selected as the maize material. The soybean materials included nine varieties (A: E9-2, B: Nandou 25, C: Nandou 12, D: Chuanxiadou 3, E: Dazhu Dongdouzi, F: Bayuehuang, G: Gongxiadou 12, H: Shuangjiang Zongpidou, I: Hehuadou), all provided by the Crop Strip Intercropping Engineering Technology Research Center of the College of Agronomy, Sichuan Agricultural University.
Soybeans were sown on 9 June in a pot-based system. Each variety was grown in 60 pots, resulting in a total of 540 pots. The pots measured 25 cm in top diameter, 20 cm in bottom diameter, and 25 cm in height. The potted soybean plants were randomly mixed and arranged in two parallel rows within the wide maize row by placing them into pre-dug trenches. No fertilizers were applied throughout the entire soybean growth period.

2.1. High-Throughput Phenotyping Acquisition

During the 2022–2023 study period, soybean image acquisition was performed using a custom-built imaging platform developed by Sichuan Agricultural University. The platform is centered around an automated rotating table, integrated with a high-precision industrial camera to enable multi-view imaging of soybean plants. The table’s rotation speed and number of revolutions were programmable via a PLC, allowing flexibility to meet diverse experimental needs. The imaging system consisted of a Hikvision industrial camera (model: MV-CH250-90GC, Hangzhou, China) paired with a Hikvision machine vision lens (model: MVL-KF1624M-25MP, Hangzhou, China) featuring a 16 mm focal length, a maximum aperture of F2.4, and compatibility with a 1.2-inch sensor. Image acquisition was controlled by custom-developed automatic shooting software, built on Hikvision’s official MVS V3.4.1 platform and operated within a Windows environment. The software allowed for flexible adjustments of camera settings, including exposure and imaging area. Camera triggering was precisely synchronized using signals from a barcode scanner. Based on predefined shooting schedules and the required number of images, the system autonomously captured raw images, which were then categorized and archived according to the corresponding field soybean variety numbers.

2.1.1. Raw Image Acquisition

Soybean Plant Image Acquisition Angles

Given the complex three-dimensional architecture of soybean plants, selecting an appropriate imaging angle is essential for accurately capturing their structural details. In this study, the angle α (15°, 30°, 45°, and 60°)—defined as the angle between the imaging sensor’s line of sight and the horizontal plane—was used as a variable (Figure 2). For each angle setting, 36 images were captured and used for 3D reconstruction. The optimal imaging angle was identified by evaluating the completeness of the resulting 3D reconstruction models, focusing on aspects such as realism and the retention of structural details. A visual assessment approach was employed to determine which angle yielded the most comprehensive and accurate models.
The angle α was calculated using the tangent function, expressed as tan(α) = a/b, where a represents the length of the side opposite to angle α, and b denotes the length of the adjacent side. During camera adjustments, the center of the plant was consistently aligned with the center of the image frame to ensure uniform imaging conditions.
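As a hypothetical worked example (values for illustration only, not taken from the experiment): if the lens is mounted $a = 0.58$ m above the height of the plant center at a horizontal distance of $b = 1.0$ m, then $\tan(\alpha) = 0.58/1.0$ and $\alpha = \arctan(0.58) \approx 30°$, matching the optimal angle identified in Section 3.1.1.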
Using the global matching algorithm in Colmap 3.7 software, this study investigated the effect of varying image quantities on both the quality and processing time of soybean plant 3D point cloud reconstructions.

Soybean Plant Rotation Speed

In the strip intercropping system, soybean seedlings are relatively fragile, and excessive rotation speeds can adversely affect image quality, leading to artifacts such as “ghosting”. To achieve a balance between image quality and acquisition efficiency, rotation speeds were adjusted according to the growth stage of the plants. Preliminary trials indicated a marked deterioration in image quality when the rotation speed exceeded 1.5 rpm. Therefore, four rotation speeds, denoted as ω (0.8 rpm, 1.0 rpm, 1.2 rpm, and 1.4 rpm), were selected for testing. To determine the optimal rotation speed, image clarity under each condition was evaluated. The rotation speed was programmed via the programmable logic controller (PLC) integrated into the high-throughput phenotyping platform.
Image quality was assessed using standard metrics, including intersection over union (IoU), pixel accuracy (PA), and recall. The calculation formulas are as follows:

$$\mathrm{IoU} = \frac{TP}{TP + FP + FN}$$

$$\mathrm{PA} = \frac{TP + TN}{TP + FP + TN + FN}$$

$$\mathrm{Recall} = \frac{TP}{TP + FN}$$

where TP refers to actual plant pixel points that are correctly classified as plant points by the network, FP refers to background pixel points that are incorrectly classified as plant points, TN represents actual background pixel points that are correctly identified as background, and FN refers to plant pixel points that are mistakenly classified as background points.
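For illustration, the three metrics can be computed directly from a predicted binary plant mask and a manually labeled reference mask. The following MATLAB sketch assumes hypothetical single-channel mask files and is not part of the platform’s acquisition software:

```matlab
% Segmentation quality metrics from two binary masks (true = plant pixel).
pred  = imread('pred_mask.png')  > 0;   % predicted segmentation (hypothetical file)
truth = imread('truth_mask.png') > 0;   % manually labeled reference (hypothetical file)

TP = nnz( pred &  truth);               % plant pixels correctly classified as plant
FP = nnz( pred & ~truth);               % background pixels misclassified as plant
TN = nnz(~pred & ~truth);               % background pixels correctly classified
FN = nnz(~pred &  truth);               % plant pixels misclassified as background

IoU    = TP / (TP + FP + FN);
PA     = (TP + TN) / (TP + FP + TN + FN);
Recall = TP / (TP + FN);
fprintf('IoU = %.3f, PA = %.3f, Recall = %.3f\n', IoU, PA, Recall);
```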

Image Acquisition Quantity

Multi-view 3D reconstruction often generates large datasets, which can increase computational load and processing time. Previous studies indicate that the typical number of images required for effective multi-view 3D reconstruction ranges from 20 to 90 [39]. To balance reconstruction quality with time efficiency, this study tested four different image quantities—24, 36, 48, and 72 images—across various soybean growth stages. The optimal number of images was determined by assessing the visual completeness of the resulting sparse point clouds and evaluating the trade-off between image quality and acquisition efficiency.

2.1.2. Point Cloud Preprocessing

Spatial Angle Transformation

As soybean plants grow, changes in their height and morphology continuously alter the relative position between the camera and the plants, posing challenges for batch image processing throughout the entire growth cycle. To address this issue, spatial angle transformation is required to standardize the camera–plant spatial relationship across all stages. This study applies homogeneous transformation matrices, which add a fourth dimension so that translation and rotation can be performed in a single operation, thereby improving point cloud consistency and processing efficiency. For instance, translating a point $(p_1, p_2, p_3)$ by an offset $(x_1, x_2, x_3)$ is represented in homogeneous coordinates as follows:

$$\begin{bmatrix} 1 & 0 & 0 & x_1 \\ 0 & 1 & 0 & x_2 \\ 0 & 0 & 1 & x_3 \\ 0 & 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} p_1 \\ p_2 \\ p_3 \\ 1 \end{bmatrix} = \begin{bmatrix} p_1 + x_1 \\ p_2 + x_2 \\ p_3 + x_3 \\ 1 \end{bmatrix}$$
In CloudCompare, the corresponding homogeneous transformation matrix after translation and rotation can be calculated, where T is the homogeneous matrix for the transformation in this experiment.
$$T = \begin{bmatrix} 0.9653113 & 0.1246889 & 0.2507586 & 0.0921756 \\ 0.2685687 & 0.4248055 & 0.8370305 & 1.7990317 \\ 0.0060606 & 0.8368102 & 0.4567661 & 5.4710474 \\ 0.0000000 & 0.0000000 & 0.0000000 & 1.0000000 \end{bmatrix}$$
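Applying such a matrix to an N × 3 point cloud reduces to a single matrix product, as in this MATLAB sketch (input file hypothetical; T as above):

```matlab
% Apply a 4x4 homogeneous transformation (rotation + translation) to a point cloud.
pts = readmatrix('canopy_points.txt');   % N x 3 XYZ coordinates (hypothetical file)
T   = [0.9653113 0.1246889 0.2507586 0.0921756;
       0.2685687 0.4248055 0.8370305 1.7990317;
       0.0060606 0.8368102 0.4567661 5.4710474;
       0         0         0         1];
hom    = [pts, ones(size(pts, 1), 1)]';  % 4 x N homogeneous coordinates
ptsOut = (T * hom)';                     % transform every point in one product
ptsOut = ptsOut(:, 1:3);                 % drop the homogeneous coordinate
```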

Point Cloud Segmentation

To improve the accuracy of feature point matching and enhance the overall efficiency and precision of 3D reconstruction, a reference object with distinct color characteristics was introduced during image acquisition. This reference object rotated synchronously with the soybean plant. During the 3D model construction, the point cloud corresponding to the reference object was removed using a color thresholding technique, with the threshold set to a deviation of 0.2 in each of the three color channels. This method effectively isolated and excluded the reference object’s point cloud, thereby ensuring greater accuracy and reliability in the subsequent analyses.
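A minimal sketch of such color-threshold filtering on an RGB point cloud, assuming a hypothetical input file and reference-object color:

```matlab
% Remove reference-object points whose color falls within +/-0.2 of a target color.
pc   = pcread('canopy_raw.ply');          % RGB point cloud (hypothetical file)
rgb  = double(pc.Color) / 255;            % per-point colors normalized to [0, 1]
ref  = [0.85 0.10 0.10];                  % hypothetical reference-object color (red)
dev  = abs(rgb - ref);                    % per-channel deviation from the reference
keep = any(dev > 0.2, 2);                 % keep points outside the threshold band
pcCanopy = select(pc, find(keep));        % canopy-only point cloud
```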

Point Cloud Denoising

To eliminate outliers and noise introduced by measurement errors, a statistical filtering-based point cloud denoising method was employed. First, the mean distance $\mu$ from each point to its $n$ nearest neighbors was calculated, along with the standard deviation $\sigma$ of these mean distances across the cloud. A distance threshold $d_{min} = \mu + a\sigma$ was then defined, where $a$ is a constant multiplier. Each point whose mean neighbor distance was below $d_{min}$ was retained as a target point; otherwise, it was classified as noise and removed. This method produced a clean target point cloud for subsequent analyses.
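A compact sketch of this statistical filter follows (neighbor count and multiplier hypothetical); the Computer Vision Toolbox function pcdenoise implements the same criterion directly:

```matlab
% Statistical outlier removal: drop points whose mean distance to their
% n nearest neighbors exceeds d_min = mu + a*sigma.
pc  = pcread('canopy_segmented.ply');         % hypothetical input file
pts = double(pc.Location);
n   = 20;                                     % number of neighbors (hypothetical)
a   = 1.0;                                    % standard-deviation multiplier (hypothetical)

[~, D]   = knnsearch(pts, pts, 'K', n + 1);   % +1: each point finds itself first
meanDist = mean(D(:, 2:end), 2);              % mean distance to the n true neighbors
dMin     = mean(meanDist) + a * std(meanDist);
pcDenoised = select(pc, find(meanDist < dMin));
```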

Point Cloud Scaling

During the 3D reconstruction process, Colmap generates the spatial scale relationships of the canopy point cloud but does not provide actual physical size information. To determine the real physical size L of the soybean canopy, a reference object was utilized to calculate the scaling ratio, which was then applied to the measurements. The scaling calculation formula is as follows:
$$L = \frac{L_1}{l_1}\, l$$

where $L_1$ is the actual size of the reference object, $l_1$ is the size of the reference object measured in the point cloud, $l$ is the size of the soybean canopy measured in the point cloud, and $L$ is the actual size of the soybean canopy.
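In code, the scaling reduces to multiplying every coordinate by the ratio; the following sketch uses hypothetical values:

```matlab
% Restore metric scale using a reference object of known physical size.
pc  = pcread('canopy_denoised.ply');          % hypothetical input file
L1  = 0.25;                                   % actual reference size, m (hypothetical)
l1  = 1.84;                                   % reference size in point-cloud units (hypothetical)
scale    = L1 / l1;                           % meters per point-cloud unit
pcScaled = pointCloud(pc.Location * scale, 'Color', pc.Color);
```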

2.1.3. Extraction of Image Parameters

One-Dimensional Parameters

The soybean plant height ($H$) was calculated as $H = z_{max} - z_{min}$, where $z_{max}$ and $z_{min}$ are the maximum and minimum coordinates of the point cloud along the Z-axis; plant width was calculated analogously from the extreme coordinates along the horizontal axes. The canopy centroid $(x_0, y_0, z_0)$ was obtained by averaging the coordinates of all points:

$$x_0 = \frac{1}{n}\sum_{i=1}^{n} x_i, \quad y_0 = \frac{1}{n}\sum_{i=1}^{n} y_i, \quad z_0 = \frac{1}{n}\sum_{i=1}^{n} z_i,$$

where $n$ is the number of 3D point cloud data points and $(x_i, y_i, z_i)$ represents the 3D coordinates of any given point. The canopy centroid height was then computed as $H_0 = z_0 - z_{min}$, i.e., by subtracting the Z-coordinate of the lowest point in the point cloud from that of the centroid.
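A sketch of extracting these one-dimensional parameters from a scaled point cloud (input file hypothetical):

```matlab
% One-dimensional canopy parameters from a scaled point cloud.
pc  = pcread('canopy_scaled.ply');            % hypothetical input file
pts = double(pc.Location);                    % N x 3 XYZ coordinates, meters

H  = max(pts(:,3)) - min(pts(:,3));           % plant height: z_max - z_min
Wx = max(pts(:,1)) - min(pts(:,1));           % plant width along X
Wy = max(pts(:,2)) - min(pts(:,2));           % plant width along Y
c  = mean(pts, 1);                            % canopy centroid (x0, y0, z0)
H0 = c(3) - min(pts(:,3));                    % centroid height
```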

Two-Dimensional Parameters

Two-dimensional parameters primarily describe the projected area of the canopy, which can be further divided into top and side projections. Traditional measurement approaches often approximate the canopy using simplified geometric shapes, such as circles, ellipses, or rectangles, which can introduce substantial errors. To improve accuracy, this study employed both concave hull and convex hull algorithms to extract the soybean canopy’s projected area more precisely (Figure 3).
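For reference, both hulls are available as MATLAB built-ins; a minimal sketch (hypothetical input file), where boundary with a shrink factor of 1 yields a tight concave hull and convhull the convex hull:

```matlab
% Top-view projected canopy area: concave hull vs. convex hull.
pc  = pcread('canopy_scaled.ply');                 % hypothetical input file
xy  = double(pc.Location(:, 1:2));                 % project onto the ground plane
[~, aConcave] = boundary(xy(:,1), xy(:,2), 1.0);   % shrink factor 1: tightest concave hull
[~, aConvex]  = convhull(xy(:,1), xy(:,2));        % second output is the enclosed area
fprintf('Concave hull: %.4f m^2, convex hull: %.4f m^2\n', aConcave, aConvex);
```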

Three-Dimensional Parameters

To precisely extract three-dimensional parameters, this study utilized the convex hull algorithm, voxel method, and 3D α-shape algorithm to capture the soybean canopy’s 3D structural features. The convex hull algorithm was implemented using the 3D convex hull function available on the MATLAB platform, adopting a layered convex hull approach to minimize gaps within the 3D hull. Specifically, the soybean canopy was uniformly segmented along the height axis, and the approximate convex hull volume for each layer was calculated. The total canopy volume was then obtained by summing the volumes of all layers, as expressed by the following formula:
$$V = \frac{1}{3} S_n h + \sum_{i=1}^{n-1} S_i h$$

where $V$ is the soybean canopy volume in cubic meters; $S_n$ is the area of the topmost layer of the canopy (modeled as an approximate cone), in square meters; $h$ is the canopy layer thickness, in meters; $n$ is the number of layers; and $S_i$ is the convex hull area of the $i$-th layer, in square meters.
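A sketch of this layered computation, assuming a hypothetical input file and layer count:

```matlab
% Layered convex hull volume: slice along Z, sum S_i * h per layer,
% treating the topmost layer as an approximate cone (1/3 * S_n * h).
pc  = pcread('canopy_scaled.ply');                 % hypothetical input file
pts = double(pc.Location);
nLayers = 20;                                      % number of layers (hypothetical)
zEdges  = linspace(min(pts(:,3)), max(pts(:,3)), nLayers + 1);
h     = zEdges(2) - zEdges(1);                     % layer thickness
layer = discretize(pts(:,3), zEdges);              % layer index of every point
V = 0;
for i = 1:nLayers
    sel = (layer == i);
    if nnz(sel) < 3, continue; end                 % a 2D hull needs >= 3 points
    [~, Si] = convhull(pts(sel,1), pts(sel,2));    % convex hull area of layer i
    if i == nLayers
        V = V + Si * h / 3;                        % top layer as an approximate cone
    else
        V = V + Si * h;
    end
end
```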
The voxel method divides the space of the point cloud into numerous small cubes. For each cube, it is determined whether it contains any 3D points. Cubes that do not contain points are removed, and the resulting model represents the point cloud’s fit. The formula for calculating the voxel volume is as follows:
$$V = a^3 \sum_{i=1}^{n} K_i$$

where $V$ is the canopy volume in cubic meters, $a$ is the voxel edge length in meters, $n$ is the number of canopy layers, and $K_i$ is the number of valid voxels in the $i$-th layer.
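Counting occupied voxels needs no explicit geometry; binning points into integer voxel indices suffices, as in this sketch (hypothetical input file; edge length from Section 3.3.3):

```matlab
% Voxel volume: bin points into cubes of edge a and count occupied voxels.
pc  = pcread('canopy_scaled.ply');                 % hypothetical input file
pts = double(pc.Location);
a   = 0.015;                                       % voxel edge length, m (Section 3.3.3)
ijk = floor((pts - min(pts, [], 1)) / a);          % integer voxel index of every point
V   = a^3 * size(unique(ijk, 'rows'), 1);          % volume = a^3 x number of occupied voxels
```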
The 3D α-shape algorithm uses spheres of radius α to identify the boundary points of a point cloud. By connecting these boundary points, triangular facets are formed, enabling the reconstruction of the object’s surface. Consequently, the value of α directly affects the quality of the triangular mesh surface reconstructed from the point cloud. To determine the optimal α value for soybean canopy research, this study reconstructed the triangular mesh model of the soybean canopy using different α values (0.1 m, 0.05 m, and 0.02 m).
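MATLAB’s built-in alphaShape object implements this reconstruction; a minimal sketch using the α value ultimately selected in Section 3.3.3 (input file hypothetical):

```matlab
% Alpha-shape surface reconstruction and derived 3D parameters.
pc  = pcread('canopy_scaled.ply');                     % hypothetical input file
pts = double(pc.Location);
shp = alphaShape(pts(:,1), pts(:,2), pts(:,3), 0.02);  % alpha radius = 0.02 m
canopyVolume  = volume(shp);                           % enclosed canopy volume, m^3
canopySurface = surfaceArea(shp);                      % triangulated surface area, m^2
```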

2.1.4. High-Throughput Phenotyping Information Acquisition Time

During the soybean growth period, images were captured every 7 days, from the V1 stage (when the first trifoliate leaf opens) to the R8 stage (maturity). Three pots of each variety were photographed.

2.2. Traditional Phenotyping

Throughout the entire soybean growth period, samples were collected at 7-day intervals starting from the V1 stage. Plant height and width were measured manually using a tape measure [32], while the fresh weight of the soybean canopy was recorded using an electronic scale. Leaf area was calculated with Image-Pro Plus 6.3 software. Upon reaching maturity, the yield of individual soybean plants was measured.

2.3. Establishment and Evaluation of the Prediction Model

Using image parameters extracted from the 3D model as independent variables and yield as the dependent variable, a stepwise regression model was constructed. To evaluate the model’s accuracy, the root mean square error (RMSE) and the coefficient of determination (R2) were calculated using the following formulas:
$$\mathrm{RMSE} = \sqrt{\frac{\sum_{i=1}^{n}\left(y_i - \hat{y}_i\right)^2}{n}}$$

$$R^2 = 1 - \frac{\sum_{i=1}^{n}\left(y_i - \hat{y}_i\right)^2}{\sum_{i=1}^{n}\left(y_i - \bar{y}\right)^2}$$

where $y_i$ is the actual measured value for sample $i$, $\hat{y}_i$ is the predicted value from the model, $\bar{y}$ is the average actual measured value, and $n$ is the number of samples.
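The stepwise regression itself was run in IBM SPSS Statistics 21 (Section 2.4); as a rough MATLAB equivalent, assuming a hypothetical parameter matrix X (n samples × p image-derived parameters) and yield vector y, the model and both metrics can be obtained as follows:

```matlab
% Stepwise linear regression of single-plant yield on image-derived parameters.
% X: n x p matrix of canopy parameters; y: n x 1 yields (both hypothetical).
mdl  = stepwiselm(X, y, 'constant', 'Upper', 'linear');   % stepwise term selection
yhat = predict(mdl, X);                                   % fitted yields
RMSE = sqrt(mean((y - yhat).^2));
R2   = 1 - sum((y - yhat).^2) / sum((y - mean(y)).^2);
```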

2.4. Data Analysis Software

This study implemented a multi-platform collaborative data analysis workflow (Figure 4). Initially, raw soybean image sequences were automatically processed using the COLMAP 3.8 3D reconstruction system to generate an initial plant point cloud model through feature point detection and matching, followed by both sparse and dense reconstruction algorithms.
Subsequently, CloudCompare 2.12.4 and MATLAB R2022b were used in combination to optimize the point cloud data. Preprocessing steps, including spatial segmentation and noise filtering, were applied to produce a high-precision 3D point cloud model of the soybean canopy.
To facilitate the extraction of canopy structural parameters, a customized program module was developed on the MATLAB platform, integrating point cloud coordinate analysis with geometric feature calculation algorithms. This enabled the automated batch extraction of key parameters such as canopy volume and leaf area index.
Finally, a stepwise regression model was developed using IBM SPSS Statistics 21 for yield prediction, and data visualization along with chart refinement was performed using Origin 2021b.

3. Experimental Results

3.1. Three-Dimensional Reconstruction Image Acquisition Parameters

3.1.1. Imaging Angle

As shown in Figure 5a, as the camera’s imaging angle increases from 15° to 60°, the completeness of the top part of the soybean canopy 3D model gradually improves, while the completeness of the bottom part decreases. At small angles (15°), the 3D model exhibits significant gaps in the leaves, while at larger angles (60°), information from the bottom of the canopy is lost. Losses at both the top and bottom significantly impact the accuracy of canopy structure parameters, resulting in substantial estimation deviations in key traits such as plant height and volume. Taking into account the need for completeness in both the upper and lower sections of the canopy, an imaging angle of 30° was found to provide the best balance, allowing for comprehensive capture of structural details across the entire canopy and yielding a more complete 3D reconstruction. Therefore, a 30° imaging angle was selected as the optimal setting for acquiring images for soybean 3D reconstruction, ensuring both the accuracy and reliability of the extracted structural parameters.

3.1.2. Plant Rotation Speed

The rotation speed of the plant directly influences both image clarity and acquisition efficiency. In this study, rotation speeds of 0.8 rpm, 1.0 rpm, 1.2 rpm, and 1.4 rpm were tested, with Figure 5b presenting the results. While the quality of the raw images showed minimal variation across the different speeds, the completeness and clarity of the reconstructed 3D point cloud models differed significantly. As the rotation speed increased from 0.8 rpm to 1.2 rpm, no notable decline in model completeness was observed. However, at 1.4 rpm, the point cloud models began to exhibit a “ghosting” effect, with leaf structures appearing duplicated and the model boundaries becoming blurred. This degradation is likely due to plant vibration at higher rotation speeds, which causes slight displacements of leaf positions within the image sequence, leading to spatial misalignment and ghosting artifacts in the reconstructed model. Considering both image quality and acquisition efficiency, a rotation speed of 1.2 rpm was identified as optimal, providing a balance between maintaining model integrity and ensuring efficient data collection.
By calculating the IOU, PA, and recall values for images captured at different rotation speeds, it was observed that increasing the speed from 0.8 rpm to 1.2 rpm had minimal impact on image quality—IOU and recall remained stable, while PA decreased by only 0.01. However, when the rotation speed increased from 1.2 rpm to 1.4 rpm, all three metrics—IOU, PA, and recall—showed significant declines (Table 1). Based on a comprehensive assessment of image quality and acquisition efficiency, a rotation speed of 1.2 rpm was selected as the optimal setting as it ensures the accuracy of the 3D model while maintaining a balanced and efficient data collection process.

3.1.3. Number of Images Required for 3D Reconstruction

As shown in Figure 5c, increasing the number of images from 24 to 72 resulted in a progressive improvement in model completeness but also led to longer reconstruction times. Specifically, when using 24, 36, 48, and 72 images, the corresponding 3D reconstruction times were approximately 12, 22, 26, and 30 min, respectively. The most notable improvement in model quality occurred between 24 and 36 images, whilst the gains from 36 to 48 images were relatively modest.
The required number of images for effective 3D reconstruction varied depending on the soybean growth stage. During the vegetative growth phase, 36 images were sufficient to achieve a complete and accurate 3D model, enabling reliable extraction of structural parameters. However, in the reproductive growth phase, when the canopy became denser and structurally more complex, models reconstructed from 36 images exhibited missing details and reduced completeness. Balancing reconstruction efficiency with model accuracy, this study selected 36 images for 3D reconstruction during the vegetative phase and 48 images for the reproductive phase to ensure both time efficiency and high-quality model outputs.

3.2. Point Cloud Preprocessing Results

3.2.1. Spatial Angle Transformation

The left image in Figure 6a shows a 3D point cloud model of the soybean canopy captured from a 45° top-down perspective, while the right image displays the front view of the same model after applying rotation and translation using a homogeneous transformation matrix. The results demonstrate that the homogeneous transformation matrix effectively performs translation and rotation of the 3D point cloud in any desired direction while preserving the integrity and quality of the point cloud data. This approach provides a reliable technical foundation for subsequent data alignment and analysis.

3.2.2. Point Cloud Segmentation

During image acquisition, rotating the plant alongside distinctive reference objects improves the accuracy of feature point matching, thereby enhancing the efficiency and precision of 3D model reconstruction. However, in subsequent soybean phenotypic analyses, the point cloud data associated with these reference objects increase the computational load and may compromise the accuracy of the analytical results. Therefore, it is essential to accurately isolate and remove non-canopy point clouds from the initial model. As shown in Figure 6b, the initial 3D point cloud model contains stray points, particularly around the plant edges, which differ in color from the actual canopy. By applying a color threshold, both the reference objects and stray points are effectively filtered out, resulting in a clean and complete 3D point cloud model of the soybean canopy. This preprocessing step substantially improves both the efficiency and reliability of subsequent data analyses.

3.2.3. Point Cloud Denoising

To reduce background noise interference, this study stabilized the imaging environment and optimized the background conditions. However, despite initial segmentation, some randomly distributed discrete noise points remained within the point cloud. To address this, the effectiveness of two denoising methods—pass-through filtering and statistical filtering—was compared (Figure 6c). The results revealed that pass-through filtering was less effective at removing noise points and noticeably reduced the density of the target point cloud. In contrast, statistical filtering successfully eliminated discrete noise surrounding the plant while preserving the shape and structural integrity of the canopy. Based on these findings, statistical filtering was selected as the primary denoising method to ensure both the accuracy and completeness of the final point cloud model.

3.2.4. Point Cloud Scaling

To restore the physical dimensions of the soybean canopy point cloud model, this study employed a potted plant support frame as a reference object. Owing to its regular geometry, the support frame allowed for easy measurement of both its actual physical dimensions and the corresponding point cloud dimensions in terms of length, width, and height. As illustrated in Figure 6d, the measurements confirmed that the scaling ratios across all three axes were consistent, verifying the model’s high fidelity in accurately representing the 3D structure of the canopy. These results demonstrate that the point cloud model can be reliably used to calculate the actual 3D dimensions of the soybean canopy, providing a solid foundation for the precise extraction of canopy structural parameters.

3.3. Parameter Extraction

3.3.1. One-Dimensional Parameters

To assess the accuracy of the 3D canopy model, linear regression analysis was conducted between model-extracted values and manual measurements of plant height and width across the entire soybean growth period. The coefficient of determination (R2) was used as the primary indicator of model accuracy. As shown in Figure 7a, the R2 value between the manually measured canopy height and the model-extracted height reached 0.990, with a root mean square error (RMSE) of 0.018 m. Similarly, the R2 value for canopy width was 0.950, with an RMSE of 0.016 m. These results demonstrate that the 3D canopy model exhibits high accuracy in both vertical and horizontal dimensions and effectively captures the structural characteristics of the soybean canopy. This provides a robust and reliable data foundation for further research on soybean canopy architecture.

3.3.2. Two-Dimensional Parameters

In this experiment, the concave hull algorithm was employed to generate the top-view projection of the soybean canopy point cloud, as Figure 7b illustrates. The resulting projection closely matched the actual canopy shape. In comparison, the convex hull algorithm consistently overestimated the projection area, with deviations ranging from 17% to 41%. These findings indicate that the concave hull algorithm provides greater accuracy in calculating the canopy projection area of soybeans and is therefore better suited for the precise extraction of two-dimensional canopy parameters in phenotypic analysis.

3.3.3. Three-Dimensional Parameters

To enhance the accuracy of 3D canopy volume extraction, this study optimized multiple computational methods. The initial 3D convex hull model generated using MATLAB contained large voids, leading to an overestimation of canopy volume (Figure 7c). To address this issue, a hierarchical convex hull approach was introduced, which effectively reduced these gaps and improved the accuracy of volume estimation.
The α-shape algorithm was employed to further refine canopy boundary detection. By fitting spheres of radius α to the point cloud, the algorithm identified boundary points and generated triangular facets to reconstruct the canopy surface. In this study, 3D mesh models were generated using various α values (0.1 m, 0.05 m, and 0.02 m) (Figure 7d). The results showed that smaller α values improved the retention of canopy details; however, reducing α to 0.01 m introduced gaps in the mesh, negatively affecting accuracy. Since canopy volume is linearly correlated with fresh weight, and canopy surface area correlates with leaf area, α optimization was guided by these relationships. A curve was plotted with α as the independent variable and the correlation coefficient as the dependent variable. As shown in Figure 7d, an α value of 0.02 m yielded the optimal balance between canopy surface area and volume correlations, and was therefore selected for this study.
Voxel size was also found to be critical for accurately estimating canopy volume. As voxel side length decreased, the reconstructed model increasingly resembled the true canopy structure (Figure 7e). Using the R2 value between canopy volume and fresh weight as an evaluation metric, the study found that within the voxel size range of 0.01 m to 0.02 m, correlations were consistently high. However, reducing voxel size led to a sharp increase in voxel count, significantly raising computational costs and lowering efficiency. To balance accuracy and computational efficiency, voxel size optimization was conducted during both vegetative and reproductive growth stages. The results (Figure 7f) revealed that at small voxel sizes, canopy volume varied linearly with voxel size. However, once the voxel side length exceeded certain thresholds (e.g., V3: 0.02 m, V6: 0.03 m, R4: 0.04 m, R6: 0.05 m), the volume began fluctuating, with greater fluctuation amplitudes as voxel size increased. Considering data across all growth stages, a voxel size of 0.015 m was identified as the optimal value, providing stable canopy volume estimates while maintaining computational efficiency.
In summary, this study successfully extracted a total of 25 canopy structural parameters from the 3D point cloud data, including 8 one-dimensional parameters, 4 two-dimensional parameters, and 13 three-dimensional parameters. Table 2 provides detailed parameter information.

3.4. Soybean Yield Prediction

To develop an early-stage soybean yield prediction model, this study performed stepwise regression analysis between image-derived parameters collected throughout the entire growth period and single-plant yield. As shown in Figure 8a, prediction performance progressively improved as the plants advanced through growth stages, with increasing accuracy and decreasing prediction error. The highest prediction accuracy and lowest error were observed at the V6 (vegetative) and R4 (reproductive) stages.
The stepwise regression results showed that at the V4 stage, the model achieved a prediction performance of R2 = 0.503, RMSE = 2.54 g, and MAE = 2.17 g. The regression equation at this stage was y = 6.903 + 4.615x1 + 3124.496x2, where x1 represents the α-shape volume and x2 the minimum bounding box surface area (Figure 8b).
At the R4 stage, prediction performance improved further to R2 = 0.625, RMSE = 2.12 g, and MAE = 1.7 g, with the regression equation y = 5.557 + 717.88x1, where x1 represents the voxel volume (Figure 8c).
To investigate the factors underlying yield variation in intercropped soybeans, a statistical analysis of yield data across all soybean varieties was conducted. Four varieties with significant yield differences—A, F, H, and I—were selected for further analysis (Figure 9a). Since the R4 stage exhibited the highest yield prediction accuracy and voxel-derived canopy volume emerged as the key predictor in the stepwise regression analysis, canopy volume change curves for these four varieties were fitted across the entire growth period. The results revealed substantial differences in peak canopy volume, following the order H > F > A > I (Figure 9b), which corresponded to their respective yield levels.
Further analysis showed that during the coexistence phase, canopy volume increments did not differ significantly among varieties. However, in the independent growth phase, high-yield varieties (F and H) exhibited significantly larger canopy volume increases compared to low-yield varieties (A and I) (Figure 9c). A dynamic assessment of canopy volume trends across the full growth cycle revealed a typical three-phase pattern: an initial slow growth phase, followed by an accelerated growth phase, and a final phase of slowed growth until the peak volume was reached (Figure 10).
These results suggest that dynamic changes in soybean canopy volume, particularly the volume increments during the independent growth phase, are critical factors influencing yield variation. This study provides valuable insights for optimizing intercropping strategies to enhance soybean productivity.

4. Discussion

To ensure the quality of the original images, this study focused on three key factors, with the shooting angle being the most critical, as it directly influences the model’s ability to capture structural details and extract features effectively [40]. Previous research has consistently highlighted the importance of imaging angles in 3D modeling. For instance, Jiang et al. [41] used a depth camera to acquire top-view images of plants for 3D model construction and parameter extraction. Their results indicated that fiber yield was associated with static traits after the canopy development stage (R2 = 0.35–0.71) and with growth rate during the early canopy development stage (R2 = 0.29–0.52). Similarly, Andújar et al. [42] compared the effects of four different viewing angles—top view (0°), oblique view (45°), vertical side-view (90°), and ground-up view (−45°)—on 3D reconstruction performance. They found that the top view performed poorly due to upper leaves obscuring lower canopy layers, whereas the other angles yielded better results. These findings are consistent with the preliminary experimental observations of this study. Building on this prior research, the present study systematically explored four gradient imaging angles for soybeans and optimized them based on image-quality criteria. The selected optimal angle maximized the capture of canopy structural details, thereby enhancing both the accuracy and stability of the resulting 3D models.
In addition, under intercropping conditions, soybean stems tend to be fragile and susceptible to lodging due to limited light availability [43], making rotation speed a key factor in image clarity and reconstruction quality. To address this, the rotation speed was optimized to ensure both image stability and sharpness. However, as stem weakness is primarily influenced by shading severity, the rotation speed could be further reduced in future experiments conducted under more intense shading conditions to preserve image quality.
Third, as soybean plants grow and their canopy structure becomes increasingly complex, the relationship between image quantity and model quality exhibits a “diminishing returns” effect. While increasing the number of images initially leads to significant improvements in model quality, this effect plateaus beyond a certain threshold [44]. Based on this observation, this study optimized the number of images for different growth stages (vegetative and reproductive) to balance data processing efficiency with model accuracy.
It is important to note that, in this study, soybeans were grown outdoors, while image acquisition was conducted in a controlled indoor environment. Thus, weather conditions were a crucial consideration during image collection. In extreme weather, soybean growth may be temporarily impacted, but plants typically resume growth through self-regulatory mechanisms once conditions improve. To ensure the accuracy of phenotypic data acquisition, image collection was scheduled under optimal conditions—specifically on clear and windless days.
During image processing, batch operations such as rotation, scaling, and color threshold filtering were employed to remove noise and streamline the workflow, significantly reducing processing time. For parameter extraction, traditional methods that simplify canopies into regular geometric shapes often introduce measurement errors (e.g., an accuracy of only 0.890 for stem diameter) [45]. In contrast, this study retrieved extreme points from the 3D point cloud and accurately extracted basic structural parameters, such as plant height and width, via coordinate calculations, achieving R2 values of 0.990 and 0.950, and RMSE values of 0.018 m and 0.016 m, respectively. These results represent a substantial improvement in extraction accuracy over previous studies [46,47]. Building on prior research [48], this study also optimized parameter settings for the concave hull, convex hull, voxel, and α-shape algorithms [49,50,51] by evaluating their correlations with fresh weight, leaf area, and model-derived parameters at different soybean growth stages. This significantly enhanced both data accuracy and model applicability.
In total, this study extracted 23 one-, two-, and three-dimensional canopy parameters across the full soybean growth period, providing a comprehensive characterization of plant architectural dynamics and biomass accumulation. Nonetheless, some limitations remain. The current approach—capturing rotating plants using a fixed camera—though cost-effective and efficient, restricts the capture of complete canopy information in later growth stages when leaves and branches become denser. To address this, the use of multi-view imaging, as recommended by prior studies, could help reduce leaf occlusion and improve model completeness and accuracy [52]. Increasing the number of cameras would help achieve this without compromising efficiency.
Regarding yield prediction, accuracy varied across growth stages. Results indicate that prediction accuracy peaked at the V6 stage during the vegetative phase and at the R4 stage during the reproductive phase. This pattern is attributed to changes in biomass accumulation and phenotypic characteristics over time. During the vegetative phase, greater biomass is often linked to higher yield potential, and by V6, growth disparities between cultivars become more apparent. In the reproductive phase, the R4 stage precedes significant biomass transfer to pods, allowing for relatively accurate yield prediction. However, as soybeans transition into the grain-filling period, pod phenotyping is hindered by nutrient translocation and increased leaf occlusion, reducing the predictive power of canopy traits.
Additionally, the study observed that after the maize–soybean co-growth period ended, cultivars A and I entered the reproductive stage earlier, while cultivars F and H remained in the vegetative stage. During weeks 8–11, the canopy volume of cultivars F and H increased rapidly, significantly exceeding that of A and I, highlighting distinct differences in architectural responses to environmental conditions, which directly impacted yield formation.

5. Conclusions

This study developed a three-dimensional reconstruction method applicable to the entire growth cycle of soybeans, encompassing image acquisition, 3D canopy reconstruction, and structural parameter extraction. This method enables continuous and non-destructive monitoring of canopy parameters in intercropped soybeans and establishes an early-yield prediction model. The study optimized image acquisition parameters (capture angle of 30°, plant rotation speed of 1.2 rpm, and image numbers of 36 and 48 for the vegetative and reproductive stages, respectively) and point cloud preprocessing methods to ensure high-precision 3D canopy reconstruction. Additionally, the voxel volume-based yield prediction achieved an R2 of up to 0.788. This research provides a scientific basis for phenotypic screening under stress conditions, high-yield soybean variety selection, and optimization of intercropping systems. Moreover, it offers a reliable approach for accurately identifying soybean germplasm resources and efficiently obtaining 3D structural information of intercropped soybeans, holding significant theoretical and practical value.

Author Contributions

X.L.: Writing—review and editing, writing—original draft, visualization, validation, investigation, and formal analysis. M.C.: validation, investigation, and formal analysis. S.H. and X.X.: validation, and methodology. P.S., Y.S. and L.H.: data curation. J.Q.: methodology, and conceptualization. M.X. and Y.Z.: conceptualization. W.Y., W.H.M. and W.L.: resources, project administration, conceptualization, and funding acquisition. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the Major Project on Agricultural Biotechnology Breeding under the Technology Innovation 2030 Initiative (2023ZD0403405), the National Natural Science Foundation of China (32172122), the National Modern Agricultural Industry Technology System, Sichuan Soybean Innovation Team (SC-CXTD-2024-21) and by the Fundamental Research Funds for the Central Universities in Humanities and Social Sciences, Nanjing Agricultural University (Grant No. SKYC2024015).

Data Availability Statement

The data from the experimental results can be found in the attachment. The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Skidmore, M.E.; Sims, K.M.; Gibbs, H.K. Agricultural intensification and childhood cancer in Brazil. Proc. Natl. Acad. Sci. USA 2023, 120, e2306003120.
2. Li, X.-F.; Wang, Z.-G.; Bao, X.-G.; Sun, J.-H.; Yang, S.-C.; Wang, P.; Wang, C.-B.; Wu, J.-P.; Liu, X.-R.; Tian, X.-L. Long-term increased grain yield and soil fertility from intercropping. Nat. Sustain. 2021, 4, 943–950.
3. Tsay, J.; Fukai, S.; Wilson, G. Effects of relative sowing time of soybean on growth and yield of cassava in cassava/soybean intercropping. Field Crops Res. 1988, 19, 227–239.
4. Mbah, E.U.; Ogidi, E. Effect of soybean plant populations on yield and productivity of cassava and soybean grown in a cassava-based intercropping system. Trop. Subtrop. Agroecosyst. 2012, 15, 241–248.
5. Buehring, N.; Reginelli, D.; Blaine, M. Long term wheat and soybean response to an intercropping system. In Proceedings of the Southern Conservation Tillage Conference, Raleigh, NC, USA, 16–17 July 1990; Citeseer: Princeton, NJ, USA, 1990; pp. 65–68.
6. Li, L.; Sun, J.; Zhang, F.; Li, X.; Yang, S.; Rengel, Z. Wheat/maize or wheat/soybean strip intercropping: I. Yield advantage and interspecific interactions on nutrients. Field Crops Res. 2001, 71, 123–137.
7. Li, L.; Sun, J.; Zhang, F.; Li, X.; Rengel, Z.; Yang, S. Wheat/maize or wheat/soybean strip intercropping: II. Recovery or compensation of maize and soybean after wheat harvesting. Field Crops Res. 2001, 71, 173–181.
8. Chen, P.; Song, C.; Liu, X.-M.; Zhou, L.; Yang, H.; Zhang, X.; Zhou, Y.; Du, Q.; Pang, T.; Fu, Z.-D. Yield advantage and nitrogen fate in an additive maize-soybean relay intercropping system. Sci. Total Environ. 2019, 657, 987–999.
9. Wang, X.; Feng, Y.; Yu, L.; Shu, Y.; Tan, F.; Gou, Y.; Luo, S.; Yang, W.; Li, Z.; Wang, J. Sugarcane/soybean intercropping with reduced nitrogen input improves crop productivity and reduces carbon footprint in China. Sci. Total Environ. 2020, 719, 137517.
10. Ghosh, P.; Tripathi, A.; Bandyopadhyay, K.; Manna, M. Assessment of nutrient competition and nutrient requirement in soybean/sorghum intercropping system. Eur. J. Agron. 2009, 31, 43–50.
11. Egbe, O. Effects of plant density of intercropped soybean with tall sorghum on competitive ability of soybean and economic yield at Otobi, Benue State, Nigeria. J. Cereals Oilseeds 2010, 1, 1–10.
12. Saudy, H.; El-Metwally, I. Weed management under different patterns of sunflower-soybean intercropping. J. Cent. Eur. Agric. 2009, 10, 41–51.
13. Qin, C.; Li, Y.-H.; Li, D.; Zhang, X.; Kong, L.; Zhou, Y.; Lyu, X.; Ji, R.; Wei, X.; Cheng, Q. PH13 improves soybean shade traits and enhances yield for high-density planting at high latitudes. Nat. Commun. 2023, 14, 6813.
14. Zhu, Z.; Kleinn, C.; Nölke, N. Towards tree green crown volume: A methodological approach using terrestrial laser scanning. Remote Sens. 2020, 12, 1841.
15. Wang, J.; Zhang, Y.; Gu, R. Research status and prospects on plant canopy structure measurement using visual sensors based on three-dimensional reconstruction. Agriculture 2020, 10, 462.
16. Fahlgren, N.; Gehan, M.A.; Baxter, I. Lights, camera, action: High-throughput plant phenotyping is ready for a close-up. Curr. Opin. Plant Biol. 2015, 24, 93–99.
17. Liu, G.; Si, Y.; Feng, J. Research progress on 3D reconstruction methods for agricultural and forestry crops. Trans. Chin. Soc. Agric. Mach. 2014, 45, 38–46.
18. Kaminuma, E.; Heida, N.; Tsumoto, Y.; Yamamoto, N.; Goto, N.; Okamoto, N.; Konagaya, A.; Matsui, M.; Toyoda, T. Automatic quantification of morphological traits via three-dimensional measurement of Arabidopsis. Plant J. 2004, 38, 358–365.
19. Alenya, G.; Dellen, B.; Foix, S.; Torras, C. Robotized plant probing: Leaf segmentation utilizing time-of-flight data. IEEE Robot. Autom. Mag. 2013, 20, 50–59.
20. Ivanov, N.; Boissard, P.; Chapron, M.; Andrieu, B. Computer stereo plotting for 3-D reconstruction of a maize canopy. Agric. For. Meteorol. 1995, 75, 85–102.
21. Biskup, B.; Scharr, H.; Schurr, U.; Rascher, U. A stereo imaging system for measuring structural parameters of plant canopies. Plant Cell Environ. 2007, 30, 1299–1308.
22. Wang, J.; Zhong, Y.; Dai, Y.; Birchfield, S.; Zhang, K.; Smolyanskiy, N.; Li, H. Deep two-view structure-from-motion revisited. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA, 20–25 June 2021; pp. 8953–8962.
23. Wu, D.; Yu, L.; Ye, J.; Zhai, R.; Duan, L.; Liu, L.; Wu, N.; Geng, Z.; Fu, J.; Huang, C. Panicle-3D: A low-cost 3D-modeling method for rice panicles based on deep learning, shape from silhouette, and supervoxel clustering. Crop J. 2022, 10, 1386–1398.
24. Thapa, S.; Zhu, F.; Walia, H.; Yu, H.; Ge, Y. A novel LiDAR-based instrument for high-throughput, 3D measurement of morphological traits in maize and sorghum. Sensors 2018, 18, 1187.
25. Li, M.; Shamshiri, R.R.; Schirrmann, M.; Weltzien, C. Impact of camera viewing angle for estimating leaf parameters of wheat plants from 3D point clouds. Agriculture 2021, 11, 563.
26. Li, Y.; Liu, J.; Zhang, B.; Wang, Y.; Yao, J.; Zhang, X.; Fan, B.; Li, X.; Hai, Y.; Fan, X. Three-dimensional reconstruction and phenotype measurement of maize seedlings based on multi-view image sequences. Front. Plant Sci. 2022, 13, 974339.
27. Maimaitijiang, M.; Sagan, V.; Sidike, P.; Hartling, S.; Esposito, F.; Fritschi, F.B. Soybean yield prediction from UAV using multimodal data fusion and deep learning. Remote Sens. Environ. 2020, 237, 111599.
28. Vatter, T.; Gracia-Romero, A.; Kefauver, S.C.; Nieto-Taladriz, M.T.; Aparicio, N.; Araus, J.L. Preharvest phenotypic prediction of grain quality and yield of durum wheat using multispectral imaging. Plant J. 2022, 109, 1507–1518.
29. Fei, S.; Hassan, M.A.; Xiao, Y.; Su, X.; Chen, Z.; Cheng, Q.; Duan, F.; Chen, R.; Ma, Y. UAV-based multi-sensor data fusion and machine learning algorithm for yield prediction in wheat. Precis. Agric. 2023, 24, 187–212.
30. Shen, Y.; Mercatoris, B.; Cao, Z.; Kwan, P.; Guo, L.; Yao, H.; Cheng, Q. Improving wheat yield prediction accuracy using LSTM-RF framework based on UAV thermal infrared and multispectral imagery. Agriculture 2022, 12, 892.
31. Nevavuori, P.; Narra, N.; Linna, P.; Lipping, T. Crop yield prediction using multitemporal UAV data and spatio-temporal deep learning models. Remote Sens. 2020, 12, 4000.
32. Revenga, J.C.; Trepekli, K.; Oehmcke, S.; Jensen, R.; Li, L.; Igel, C.; Gieseke, F.C.; Friborg, T. Above-ground biomass prediction for croplands at a sub-meter resolution using UAV–LiDAR and machine learning methods. Remote Sens. 2022, 14, 3912.
33. Wen, W.; Wu, S.; Lu, X.; Liu, X.; Gu, S.; Guo, X. Accurate and semantic 3D reconstruction of maize leaves. Comput. Electron. Agric. 2024, 217, 108566.
34. Rungyaem, K.; Sukvichai, K.; Phatrapornnant, T.; Kaewpunya, A.; Hasegawa, S. Comparison of 3D rice organs point cloud classification techniques. In Proceedings of the 2021 25th International Computer Science and Engineering Conference (ICSEC), Chiang Rai, Thailand, 18–20 November 2021; IEEE: New York, NY, USA, 2021; pp. 196–199.
35. Gu, W.; Wen, W.; Wu, S.; Zheng, C.; Lu, X.; Chang, W.; Xiao, P.; Guo, X. 3D reconstruction of wheat plants by integrating point cloud data and virtual design optimization. Agriculture 2024, 14, 391.
36. Patel, A.K.; Park, E.-S.; Lee, H.; Priya, G.L.; Kim, H.; Joshi, R.; Arief, M.A.A.; Kim, M.S.; Baek, I.; Cho, B.-K. Deep learning-based plant organ segmentation and phenotyping of sorghum plants using LiDAR point cloud. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2023, 16, 8492–8507.
37. Sun, Y.; Miao, L.; Zhao, Z.; Pan, T.; Wang, X.; Guo, Y.; Xin, D.; Chen, Q.; Zhu, R. An efficient and automated image preprocessing using semantic segmentation for improving the 3D reconstruction of soybean plants at the vegetative stage. Agronomy 2023, 13, 2388.
38. Zhu, R.; Sun, K.; Yan, Z.; Yan, X.; Yu, J.; Shi, J.; Hu, Z.; Jiang, H.; Xin, D.; Zhang, Z. Analysing the phenotype development of soybean plants using low-cost 3D reconstruction. Sci. Rep. 2020, 10, 7055.
39. Fang, W.; Feng, H.; Yang, W.; Liu, Q. Rapid 3D reconstruction method for wheat plant type research in phenotypic detection. China Agric. Sci. Technol. Bull. 2016, 18, 95–101.
40. Ling, S.; Li, J.; Ding, L.; Wang, N. Multi-view jujube tree trunks stereo reconstruction based on UAV remote sensing imaging acquisition system. Appl. Sci. 2024, 14, 1364.
41. Jiang, Y.; Li, C.; Paterson, A.H.; Sun, S.; Xu, R.; Robertson, J. Quantitative analysis of cotton canopy size in field conditions using a consumer-grade RGB-D camera. Front. Plant Sci. 2018, 8, 2233.
42. Andújar, D.; Escolà, A.; Rosell-Polo, J.R.; Ribeiro, A.; San Martín, C.; Fernández-Quintanilla, C.; Dorado, J. Using depth cameras for biomass estimation—A multi-angle approach. In Precision Agriculture ’15; Wageningen Academic: Wageningen, The Netherlands, 2015; pp. 97–102.
43. Liu, W.-G.; Jiang, T.; Zhou, X.-R.; Yang, W.-Y. Characteristics of expansins in soybean (Glycine max) internodes and responses to shade stress. Asian J. Crop Sci. 2011, 3, 26–34.
44. Javadnejad, F.; Slocum, R.K.; Gillins, D.T.; Olsen, M.J.; Parrish, C.E. Dense point cloud quality factor as proxy for accuracy assessment of image-based 3D reconstruction. J. Surv. Eng. 2021, 147, 04020021.
45. Ma, X.; Zhu, K.; Guan, H.; Feng, J.; Yu, S.; Liu, G. Calculation method for phenotypic traits based on the 3D reconstruction of maize canopies. Sensors 2019, 19, 1201.
  46. Zhu, C.; Miao, T.; Xu, T.; Yang, T.; Li, N. Stem-leaf segmentation and phenotypic trait extraction of maize shoots from three-dimensional point cloud. arXiv 2020, arXiv:200903108. [Google Scholar]
  47. Ma, X.; Zhu, K.; Guan, H.; Feng, J.; Yu, S.; Liu, G. High-throughput phenotyping analysis of potted soybean plants using colorized depth images based on a proximal platform. Remote Sens. 2019, 11, 1085. [Google Scholar] [CrossRef]
  48. Wang, F.; Ma, X.; Liu, M.; Wei, B. Three-dimensional reconstruction of soybean canopy based on multivision technology for calculation of phenotypic traits. Agronomy 2022, 12, 692. [Google Scholar] [CrossRef]
  49. Xu, W.; Feng Zh Su, Z.; Xu, H.; Jiao Yo Deng, O. An Automatic Extraction Algorithm for Individual Tree Canopy Projection Area and Canopy Volume Based on 3D Laser Point Cloud Data. Spectrosc. Spectr. Anal. 2014, 34, 465–471. [Google Scholar]
  50. Li Xi Tang, L.; Huang, H.; Chen Ch He, J. A Method for Estimating Individual Tree 3D Green Volume Based on Point Cloud Data. Remote Sens. Technol. Appl. 2022, 37, 1119–1127. [Google Scholar]
  51. Li, Q.; Gao, X.; Fei, X.; Zhang, H.; Wang, J.; Cui, Y.; Li, B. Constructing Tree Canopy 3D Models Using the Alpha-shape Algorithm. Surv. Mapp. Bulletin. 2018, 91–95. [Google Scholar] [CrossRef]
  52. Lee, B.; Di Girolamo, L.; Zhao, G.; Zhan, Y. Three-Dimensional Cloud Volume Reconstruction from the Multi-angle Imaging SpectroRadiometer. Remote Sens. 2018, 10, 1858. [Google Scholar] [CrossRef]
Figure 1. Field planting model diagram.
Figure 2. Image acquisition angle. Note: 1. camera, 2. plant, 3. turntable.
Figure 3. Schematic diagram of the 2D convex and concave hulls.
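The contrast Figure 3 illustrates can be reproduced in a few lines of MATLAB. The sketch below is illustrative only and uses synthetic points; in practice, x and y would be the top-view projection of the canopy point cloud.

% Illustrative sketch: 2D convex vs. concave hull of a projected point set.
% Synthetic points stand in for the top-view canopy projection.
rng(1);
x = randn(500,1);  y = randn(500,1);

k_convex  = convhull(x, y);          % smallest convex polygon around all points
k_concave = boundary(x, y, 0.8);     % concave hull; shrink factor in (0,1]

area_convex  = polyarea(x(k_convex),  y(k_convex));
area_concave = polyarea(x(k_concave), y(k_concave));
fprintf('Convex hull area: %.2f, concave hull area: %.2f\n', ...
        area_convex, area_concave);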
Figure 4. Soybean 3D reconstruction workflow diagram.
Figure 5. Soybean intercropping 3D reconstruction image acquisition. (a) Comparison of models at different acquisition angles; (b) original images and model quality comparison at different plant rotation speeds; (c) comparison of model quality with different numbers of images.
Figure 6. Point cloud preprocessing. (a) Comparison of the model transformation before (left) and after (right) changing the spatial angle; (b) comparison of the model after color threshold segmentation, from left to right: original 3D point cloud model, threshold segmentation process, segmented point cloud model; (c) comparison of denoising effects of different methods, from left to right: original point cloud model, point cloud model after pass-through filtering denoising, point cloud model after statistical filtering denoising; (d) measurement of point cloud dimensions before and after scaling.
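As a companion to Figure 6, the following MATLAB sketch strings the segmentation, denoising, and scaling steps together. It assumes the Computer Vision Toolbox and a colored point cloud; the file name, green-dominance rule, and threshold values are illustrative assumptions, not the study's settings.

% Hedged preprocessing sketch (Computer Vision Toolbox assumed).
pc = pcread('soybean_canopy.ply');              % hypothetical input file

% (b) Color-threshold segmentation: keep green-dominant points.
rgb = double(pc.Color);
idx = find(rgb(:,2) > rgb(:,1) & rgb(:,2) > rgb(:,3));
pcPlant = select(pc, idx);

% (c) Statistical filtering: drop points whose mean distance to their
% k nearest neighbors is an outlier.
pcClean = pcdenoise(pcPlant, 'NumNeighbors', 20, 'Threshold', 1.0);

% (d) Scaling: convert model units to meters using a reference object of
% known length (scaleFactor here is a placeholder, not a measured value).
scaleFactor = 0.01;
pcScaled = pointCloud(pcClean.Location * scaleFactor, 'Color', pcClean.Color);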
Figure 7. Extraction of image parameters; * denotes multiplication. (a) Measurement of canopy model height and width, and linear fitting between the 3D model-extracted height/width and manual measurements; (b) convex hull and concave hull projections of the soybean canopy top; (c) schematic of the 3D convex hull and hierarchical convex hull of the soybean canopy; (d) soybean canopy simulation results at different α values, with correlation curves showing how canopy surface area and volume change with α; (e) soybean canopy simulations at different voxel sizes, and the correlation curve between soybean canopy biomass and canopy volume as voxel size changes; (f) the relationship between canopy volume measured with the voxel method and voxel size at different growth periods.
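The three volume estimators compared in Figure 7 differ only in how they wrap the same point set. A minimal MATLAB sketch follows; the synthetic point set, the α value, and the voxel size are chosen for illustration and are not the optimized values from the study.

% Volume estimators on an N-by-3 canopy point matrix (z vertical assumed).
pts = rand(2000,3) .* [0.4 0.4 0.9];            % synthetic canopy-like points (m)

% Plant height/width from coordinate ranges (cf. panel a).
height = max(pts(:,3)) - min(pts(:,3));
width  = max(pts(:,1)) - min(pts(:,1));

% 3D convex hull volume (cf. panel c).
[~, v_convex] = convhull(pts(:,1), pts(:,2), pts(:,3));

% Alpha-shape volume and surface area; both vary with alpha (cf. panel d).
shp = alphaShape(pts, 0.1);                     % alpha = 0.1 m, for example
v_alpha = volume(shp);
s_alpha = surfaceArea(shp);

% Voxel volume: occupied-voxel count times single-voxel volume (cf. panel e).
voxelSize = 0.01;                               % 1 cm voxels, for example
occ = unique(floor(pts ./ voxelSize), 'rows');
v_voxel = size(occ,1) * voxelSize^3;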
Figure 8. Prediction of soybean yield. The red solid line in (b,c) represents the linear fit, and the black dashed line the 1:1 line. (a) Dynamic prediction performance of soybean yield based on image-derived parameters throughout the entire growth period; the green background marks the vegetative growth stage and the red background the reproductive growth stage. (b) Prediction performance of soybean yield at the V6 stage based on image-derived parameters. (c) Prediction performance of soybean yield at the R4 stage based on image-derived parameters.
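The predictions in Figure 8 rest on a stepwise regression over the image-derived parameters. The sketch below shows the pattern with MATLAB's stepwiselm on synthetic data; the variable names, coefficients, and sample size are invented for illustration.

% Hedged sketch of stepwise-regression yield prediction (synthetic data).
rng(7);
n = 60;
voxelVolume = 0.5 + 0.3*rand(n,1);              % m^3 (synthetic)
plantHeight = 0.6 + 0.4*rand(n,1);              % m   (synthetic)
yield = 20*voxelVolume + 5*plantHeight + randn(n,1);   % g/plant (synthetic)

T = table(voxelVolume, plantHeight, yield);
mdl = stepwiselm(T, 'constant', 'ResponseVar', 'yield', ...
                 'Upper', 'linear', 'Verbose', 0);      % add/remove linear terms
fprintf('Selected model R^2 = %.3f\n', mdl.Rsquared.Ordinary);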
Figure 9. Soybean individual plant yield and canopy volume. Different lowercase letters (a–d) indicate significant differences (p < 0.05); identical letters indicate no significant difference. (a) Differences in individual plant yield among cultivars. (b) Growth curves of canopy volume across the full growth period for four representative soybean cultivars. (c) Comparison of canopy volume increase between the co-growth period and the independent growth period for four representative soybean cultivars.
Figure 10. Changes in canopy volume growth rate during the entire growth period for four representative soybean varieties.
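A growth-rate curve like Figure 10's is, in essence, a first difference of the canopy-volume series over time. A sketch with a synthetic, logistic-like volume curve (sampling dates and curve shape are assumptions for illustration):

% Growth rate as a finite difference of canopy volume over time (synthetic).
das = 20:7:90;                                  % days after sowing (example)
vol = 1 ./ (1 + exp(-(das - 55)/8));            % logistic-like volume curve
rate = diff(vol) ./ diff(das);                  % volume gained per day
tMid = das(1:end-1) + diff(das)/2;              % interval midpoints
plot(tMid, rate, '-o');
xlabel('Days after sowing'); ylabel('Canopy volume growth rate');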
Table 1. Image quality evaluation.

Rotation Speed (rpm) | IOU  | PA   | Recall
0.8                  | 0.97 | 0.98 | 0.97
1.0                  | 0.97 | 0.98 | 0.97
1.2                  | 0.97 | 0.97 | 0.97
1.4                  | 0.95 | 0.95 | 0.95
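The three scores in Table 1 are standard mask-overlap metrics; assuming PA denotes pixel accuracy, they reduce to the pixel counts below. The masks here are synthetic stand-ins for a predicted and a ground-truth segmentation.

% IOU, pixel accuracy (PA), and recall from logical masks (synthetic).
gtMask   = false(100); gtMask(20:80, 20:80)   = true;   % ground truth
predMask = false(100); predMask(22:82, 20:80) = true;   % prediction

tp     = nnz(predMask & gtMask);
iou    = tp / nnz(predMask | gtMask);           % intersection over union
pa     = nnz(predMask == gtMask) / numel(gtMask);   % pixel accuracy
recall = tp / nnz(gtMask);                      % true-positive rate
fprintf('IOU %.3f, PA %.3f, Recall %.3f\n', iou, pa, recall);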
Table 2. Soybean canopy structural parameters extracted from 3D point clouds.

Dimensions        | Methods      | Image Indexes
One-dimensional   | –            | Plant height, plant length, plant width, centroid height, minimum bounding box length, minimum bounding box width, minimum bounding box height, centroid height ratio.
Two-dimensional   | Convex hull  | Top projection convex hull area, side projection convex hull area.
Two-dimensional   | Concave hull | Top projection concave hull area, side projection concave hull area.
Three-dimensional | Convex hull  | Convex hull volume, convex hull surface area, layered convex hull surface area, layered convex hull volume.
Three-dimensional | Voxel        | Voxel volume, voxel surface area, voxel surface area ratio (top/bottom), minimum bounding box volume, minimum bounding box surface area.
Three-dimensional | α-shape      | α-shape volume, α-shape surface area, canopy upper-to-lower volume ratio.
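Most of Table 2's one-dimensional indexes fall out of coordinate ranges and the centroid. A sketch, assuming z is the vertical axis and substituting an axis-aligned box for the true minimum bounding box:

% One-dimensional canopy indexes from an N-by-3 point matrix (synthetic).
pts = rand(1000, 3);
bboxLen = max(pts(:,1)) - min(pts(:,1));        % bounding-box length
bboxWid = max(pts(:,2)) - min(pts(:,2));        % bounding-box width
bboxHt  = max(pts(:,3)) - min(pts(:,3));        % bounding-box height
centroidHt      = mean(pts(:,3)) - min(pts(:,3));   % centroid height
centroidHtRatio = centroidHt / bboxHt;          % centroid height ratio
% An oriented minimum bounding box would additionally search over
% rotations (e.g. along PCA axes); the axis-aligned box is shown for brevity.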