Review

Applications of 3D Reconstruction Techniques in Crop Canopy Phenotyping: A Review

1 College of Mechanical Engineering, Guangxi University, Nanning 530004, China
2 Shenzhen Branch, Guangdong Laboratory of Lingnan Modern Agriculture, Genome Analysis Laboratory of the Ministry of Agriculture and Rural Affairs, Agricultural Genomics Institute at Shenzhen, Chinese Academy of Agricultural Sciences, Shenzhen 518120, China
* Author to whom correspondence should be addressed.
These authors contributed equally to this work.
Agronomy 2025, 15(11), 2518; https://doi.org/10.3390/agronomy15112518
Submission received: 28 August 2025 / Revised: 26 October 2025 / Accepted: 28 October 2025 / Published: 29 October 2025

Abstract

Amid growing challenges to global food security, high-throughput crop phenotyping has become an essential tool, playing a critical role in genetic improvement, biomass estimation, and disease prevention. Unlike controlled laboratory environments, field-based phenotypic data collection is highly vulnerable to unpredictable factors, significantly complicating the data acquisition process. As a result, the choice of appropriate data collection equipment and processing methods has become a central focus of research. Currently, three key technologies for extracting crop phenotypic parameters are Light Detection and Ranging (LiDAR), Multi-View Stereo (MVS), and depth camera systems. LiDAR is valued for its rapid data acquisition and high-quality point cloud output, despite its substantial cost. MVS offers the potential to combine low-cost deployment with high-resolution point cloud generation, though challenges remain in the complexity and efficiency of point cloud processing. Depth cameras strike a favorable balance between processing speed, accuracy, and cost-effectiveness, yet their performance can be influenced by ambient conditions such as lighting. Data processing techniques primarily involve point cloud denoising, registration, segmentation, and reconstruction. This review summarizes advances over the past five years in 3D reconstruction technologies—focusing on both hardware and point cloud processing methods—with the aim of supporting efficient and accurate 3D phenotype acquisition in high-throughput crop research.

1. Introduction

The world population is projected to reach 8.4 billion by 2075 [1]. In the face of global warming and other global crises, the ability of food systems to feed the world’s population will continue to be challenged. Conventional yet continually evolving strategies—such as precision agriculture, genetic crop improvement, sustainable soil management, and the use of biofertilizers—have demonstrated significant potential in enhancing productivity and resilience under resource constraints. Sustainable soil management practices—such as conservation tillage, crop rotation, and balanced fertilization—play a crucial role in maintaining soil health and long-term productivity. Recent findings indicate that balanced fertilization combining organic and inorganic inputs can improve microbial activity, nutrient cycling, and soil structure [2]. Collectively, these approaches represent not only practical pathways to increasing agricultural productivity but also essential strategies for ensuring long-term global food security under the dual pressures of population growth and environmental change.
Alongside these established agronomic practices, the integration of digital and sensing technologies has become increasingly important for enabling more precise, data-driven agricultural management. While conventional strategies provide a solid foundation, there is growing interest in applying advanced technologies such as 3D reconstruction techniques in crop phenotyping to further accelerate crop improvement and precision agriculture efforts. About five years ago, most crop phenotyping studies relied mainly on two-dimensional imaging and manual measurements, with limited use of 3D reconstruction techniques due to high equipment costs and immature data-processing algorithms. Now, the development of sufficiently intelligent, high-throughput (rapid, automated processing of large volumes of data or samples in a short time) 3D crop phenotypic analysis platforms has become a research focus. Such platforms can significantly improve the efficiency and quality of agricultural production and support sustainable agriculture development [3,4]. This technological transformation bridges traditional agronomic research and the digital agriculture paradigm, illustrating how 3D reconstruction offers a measurable and visual basis for data-driven crop management.
3D reconstruction technology provides a foundation for accurate and efficient crop phenotyping under field conditions. Large-scale analysis of phenotypic traits in the field is essential for crop genetic improvement [5], biomass estimation [6], and understanding crop responses to environmental conditions [7]. Phenotypic traits such as canopy parameters, stem and leaf parameters, and fruit size show great potential in studying field crops like wheat, maize, rice, and soybean. Monitoring these traits to provide timely feedback for crop management can help alleviate the pressure of global food problems to some extent [8,9,10,11]. For example, the leaf area index (LAI) directly reflects canopy density [12]. It is influenced by multiple factors, including plant species, growth stages, and environmental conditions (such as light, water, and temperature). LAI is closely associated with photosynthesis, respiration, and transpiration rates. It can also be used to assess crop growth status and yield potential. Leaf angle distribution (LAD) is another key structural parameter that influences plant photosynthetic efficiency and water evaporation. Understanding and controlling LAD can help optimize crop planting density and growth conditions, thereby improving crop yield and quality [13]. Together, LAI and LAD reflect the influence of light radiation on plant canopies [14]. Accurate 3D characterization of these traits thus provides structural inputs that connect phenotyping data with modeling and management in precision agriculture.
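For reference, LAI is conventionally defined as the one-sided leaf area accumulated over a unit of ground area,

\mathrm{LAI} = \frac{A_{\mathrm{leaf}}}{A_{\mathrm{ground}}},

where A_leaf denotes the total one-sided leaf area and A_ground the corresponding ground surface area; values above 1 therefore indicate a canopy whose cumulative leaf surface exceeds the ground it covers.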
In terms of measurement, sensor technology has largely replaced traditional manual measurements. RGB cameras, multi-view stereo (MVS) systems, depth cameras, and light detection and ranging (LiDAR) technology all demonstrate significant potential for phenotypic data acquisition [15,16]. Among these, depth cameras are often preferred due to their low cost, high precision, and ease of point cloud post-processing [17]. Additionally, LiDAR mounted on unmanned aerial vehicles (UAVs) and unmanned ground vehicles (UGVs) has proven to be a cost-effective solution for large-scale data collection [15,18]. These sensing technologies support complementary 3D reconstruction pipelines, enabling multi-scale data capture from the organ to the field level.
Despite the rapid evolution of sensing and computing technologies, few comprehensive studies have compared LiDAR, MVS, depth cameras, and multi-sensor fusion within a unified framework. Recent reviews have begun to address this gap [19,20], while OB-NeRF and related neural methods have pushed 3D reconstruction toward higher efficiency and fidelity [21]. Aiming to assess the application potential of 3D reconstruction technology in agriculture and promote the development of high-throughput crop phenotyping analysis, this paper comprehensively compares the data acquisition methods, reconstruction efficiency, accuracy, advantages, disadvantages, and environmental applicability of LiDAR, MVS, depth cameras and multi-sensor fusion in crop phenotyping. We also examine their applications in both laboratory and field settings to shed light on future development directions. In addition, this paper analyzes the main algorithms and applications of three core point cloud processing technologies—point cloud denoising, registration, segmentation and reconstruction—which are essential in the 3D reconstruction workflow. Through this comparative analysis, this paper aims to clarify how 3D reconstruction technologies can be effectively integrated with other digital agriculture tools to improve crop monitoring, yield prediction, and intelligent decision-making.
To summarize developments and trends in 3D reconstruction technologies, this review used the Web of Science (WOS) Core Collection as the primary source, collecting literature on the applications of 3D reconstruction in crop phenotyping published between 2019 and 2025. To ensure relevance, an exact search was conducted within the subject, title, and abstract fields of WOS using predefined keywords such as “MVS”, “LIDAR three-dimensional reconstruction”, “plant phenotype”, “multi-sensor fusion”, and “point cloud processing”. The inclusion criteria covered journal articles, conference papers, and review articles, which were further screened for methodological rigor, experimental relevance, and citation impact to ensure the representativeness and quality of the selected studies. The search retrieved 7120 records; after removing duplicates, 4970 records remained. Initial screening of titles and abstracts excluded 4400 irrelevant documents, and full-text review of the remaining 570 articles led to the exclusion of 442 studies that did not meet the research criteria. Ultimately, 124 studies were included in this review (Figure 1).

2. Data Acquisition

2.1. Light Detection and Ranging

Light detection and ranging (LiDAR) technology generates point cloud data by scanning a laser beam and measuring the reflected signal. The coordinates of these points represent the depth information of the crops in the scene, thereby enabling the construction of a 3D model [22]. Given the high penetration of laser beams, LiDAR technology is less affected by ambient light conditions. Therefore, certain environmental factors can be disregarded in field measurements, allowing for the acquisition of more accurate 3D point clouds [23]. The resolution of the 3D point cloud generated by LiDAR is lower than that of image-based methods but higher than that of depth camera methods, which is sufficient to meet the requirements of most data acquisition tasks [24,25]. The larger sensing angle of LiDAR also significantly reduces the possibility of data loss [26]. Compared with image-based methods, which are constrained by photographic and alignment accuracy [23], LiDAR sensors, as active sensors, are well-suited for high-frequency and large-scale characterization information extraction. Additionally, LiDAR technology can penetrate crop canopies to obtain measurement data, making it more effective in addressing occlusion problems [22,27]. However, due to the limited laser throughput, the data acquisition speed and detection range of LiDAR are constrained [28,29]. Currently, the most significant challenge facing LiDAR technology is its high cost, which includes both initial investment and ongoing operational expenses. At the same time, some consumer-grade LiDAR sensors are susceptible to environmental conditions. For example, the Intel RealSense L515 requires strict control over ambient light conditions to ensure optimal performance [30].
LiDAR can be mounted on phenotyping platforms based on ground or aerial systems. Common ground platforms include handheld devices, gantry systems, and unmanned ground vehicles (UGVs), while aerial platforms encompass unmanned aerial vehicles (UAVs) and satellites. The optimization of UAV flight parameters has recently been shown to markedly enhance canopy-level trait estimation accuracy in wide–narrow row cultivation systems [31]. Handheld LiDAR sensors are highly flexible and portable, allowing for close-range scanning of crops from multiple angles and resulting in high point cloud coverage. However, due to their slow acquisition speed and the inherent instability of manual operation [26], they are typically used for collecting small-scale verification data [32]. The mounting platform for LiDAR sensors is not strictly defined; for example, the handheld ZEB1 scanner and ZEB-REVO also support vehicle-mounted configurations [26]. Gantry-type LiDAR platforms ensure the stability and accuracy of the system. Additionally, gantries or similar large-scale structures can be equipped with multiple sensors to collect crop phenotypic information from various directions. However, due to the size of the framework, the scope of data collection is constrained. Small ground-based fixed platforms are more suitable for indoor measurement of individual plants than for large-scale data extraction due to their conical viewing angles and installation heights. Forero et al. fixed a LiDAR vertically on a slide rail under laboratory conditions, placed the crop on a rotatable platform, and extracted point cloud data of the entire seedling growth process through vertical scanning [33]. Saha et al. achieved good reconstruction efficiency and accuracy by installing 2D LiDAR sensors on a linear transporter to scan tomato plants from both sides at a uniform speed [34]. In cases of lower plant height or higher outdoor mounting platforms, tripods [35] or suspension systems [36] can also be employed.
Ground mobile platforms enhance the efficiency of data acquisition, while airborne platforms offer higher, faster, and longer-range capabilities, making them suitable for large-scale field applications [37]. Murcia et al. developed a mobile terrestrial laser scanner mounted on a GPS-based UGV for small-scale crop mapping, demonstrating its potential in field applications [38]. Esser et al. integrated sensors with field robots equipped with autonomous navigation systems, achieving significant advantages in 3D phenotypic analysis at both plant and organ levels [8]. Yan et al. employed a UAV-mounted LiDAR to capture the vertical structure, yielding high-resolution data [39]. Meanwhile, when combined with global navigation satellite systems, UAV-mounted LiDAR shows good performance in measuring crop height [40]. However, these platforms often prioritize efficiency at the expense of detailed accuracy and perform poorly in organ-level point cloud acquisition. Additionally, the presence of numerous irrelevant background elements can severely impact point cloud quality, making subsequent processing techniques crucial for data usability.
In large-scale experiments, the occlusion caused by adjacent plants often prevents field phenotyping platforms from obtaining fine phenotypic traits of individual plants [41]. The results obtained under controlled indoor conditions are not universally applicable. Large-scale field equipment capable of achieving similar trait accuracy at the individual plant level as indoor equipment is prohibitively expensive [42].
The applications of the LiDAR sensors mentioned above, in terms of mounting methods and crop phenotypic measurements, are summarized in Table 1. The selection of phenotyping platforms is primarily based on plant size, planting area, and equipment accuracy. Inputting the point cloud data obtained by LiDAR into relevant supporting software can further enhance the efficiency of segmentation and reconstruction.

2.2. Multi-View Stereo

Multi-View Stereo (MVS) reconstruction is an image-based 3D reconstruction technique, the accuracy of which is primarily contingent upon the subject plant characteristics, camera capture precision, and the efficacy of point cloud processing [45]. The intrinsic plant parameters encompass growth status, size, and structural complexity. Camera shooting accuracy is determined by factors such as camera configuration, calibration, shooting angle, and environmental conditions. Point cloud processing involves techniques such as denoising, registration, and segmentation. Leveraging the MVS approach, extensive image sequence data can be utilized to enhance the precision of dense point cloud reconstruction. It achieves optimal results at the fine scale for individual crops and is predominantly employed under controlled laboratory conditions. Nonetheless, this method also escalates the challenge of feature matching, significantly increasing both the temporal and computational costs. In field settings, image accuracy can be compromised by environmental factors such as lighting and wind speed, thereby affecting the stability of image capture [46].
Currently, there is a scarcity of high-throughput field data capture platforms that utilize MVS, with the majority of existing methods focusing on experiments with individual crops under either laboratory or field conditions. Unlike some other techniques, MVS does not directly acquire 3D point clouds. Instead, it employs Structure from Motion (SFM) as the principal technique for reconstruction. SFM is capable of rapidly reconstructing the 3D structure of plants from a series of high-resolution two-dimensional images [47]. This method is less susceptible to environmental influences and is recognized for its high accuracy [45].
In the realm of 3D reconstruction and phenotypic analysis, numerous phenotypic analysis platforms have been designed based on MVS-SFM technology. Wu et al. proposed small-scale phenotypic analysis platforms for both tall and low plants, known as MVS-Pheno V1 and MVS-Pheno V2 [46,48]. These platforms are portable and suitable for phenotypic measurement of individual plants in both indoor and outdoor settings. MVS-Pheno V2 was an improvement over V1, integrating an open-source MVS reconstruction system, a robust calibration system, and an automatic point cloud phenotype calculation module to provide higher reconstruction accuracy. Sandhu et al. focused on developing imaging platforms for the compositional traits of small plants, offering new solutions for the 3D reconstruction of small plants [49]. These platforms typically use plant fixation and camera rotation to reduce blade vibration and decrease 3D point cloud noise. Additionally, UAVs utilizing SFM technology can collect multi-view overlapping images of field crops, generate high-density 3D point clouds, and obtain information such as plant height [50] and canopy [51]. It is straightforward to accurately locate and track the UAVs by employing the Global Navigation Satellite System and Real-Time Kinematic Global Navigation Satellite System [52]. The trajectory data obtained can subsequently be utilized to facilitate image matching and parameter estimation [47].
With technological advancements, smartphone cameras have also been utilized to capture multi-view images, albeit with relatively lower accuracy, making them more suitable for the reconstruction and segmentation of simple structures. He et al. used the iPhone 11 for multi-view shooting, reconstructing the 3D point cloud of soybean plants based on the SFM algorithm [45]. Rui et al. captured different angles of corn using high-resolution smartphones and Agisoft Metashape software to generate point cloud data, further proposing a point cloud segmentation network for maize [53]. These developments illustrate the diverse applications of MVS-SFM technology in plant phenotypic analysis, ranging from portable platforms to drones and smartphones, offering practical tools for image sequence acquisition across different scales and complexity.
The three MVS-based phenotypic measurement platforms mentioned above are summarized in Table 2. They offer straightforward and precise solutions for phenotypic measurements across different scales and volumes of crops. These platforms are often utilized in conjunction with software such as PhotoScan, Point Cloud Library (PCL), CloudCompare, MATLAB and others to facilitate subsequent processing needs.
Nevertheless, as previously discussed, MVS-based reconstruction techniques are highly contingent upon favorable environmental conditions and image quality. It is essential to capture images from viewpoints that intersect at as wide a range of angles as possible to ensure the most comprehensive reconstruction of features. This approach guarantees that all aspects of the plant are captured while minimizing occlusion, albeit at the expense of increased computational costs [26]. When it comes to large-scale data collection, the issue of occlusion between plants is inevitable. Consequently, only basic and straightforward phenotypic analyses can be conducted, such as measuring plant height and canopy size. Emerging neural radiance field methods such as OB-NeRF have achieved high-fidelity reconstruction with a peak signal-to-noise ratio of over 29 decibels within 30 s, maintaining a geometric error of less than 0.2 mm for organs like leaves. The exposure-adaptive feature of OB-NeRF further mitigates the interference of natural light, pioneering a new paradigm for high-throughput field phenotypic analysis [21].

2.3. Depth Camera

Depth camera-based crop phenotypic data acquisition methods exhibit notable advantages in terms of accuracy and cost-effectiveness. Although the point cloud density generated by depth cameras is slightly lower than that of LiDAR, it suffices for the imaging requirements of most scenarios and significantly reduces costs. In contrast to the MVS method, depth cameras provide a more direct means of obtaining depth information, which decreases reliance on complex image-matching algorithms and streamlines the data processing workflow. Kinect v1 (Microsoft, Redmond, WA, USA) employs structured light technology with a color camera resolution of 640 × 480, making it suitable for close-range data acquisition. Kinect v2 (Microsoft, Redmond, WA, USA) utilizes Time-of-Flight (TOF) technology. It emits optical pulses towards the target, receives the reflected light with a sensor, and calculates the target distance by measuring the pulse’s flight time [54]. Compared to Kinect v1, Kinect v2 offers significantly improved resolution, with the color camera resolution enhanced to 1920 × 1080 and the depth image resolution to 512 × 424. Its capability to deliver clearer images and a broader field of view has made it the most widely adopted option for phenotypic acquisition tasks [55]. Guan et al. constructed a heterogeneous data acquisition system for maize utilizing three Kinect v2 sensors, successfully capturing 810 sets of color images and depth data of maize plants [56]. The Azure Kinect DK (Microsoft, Redmond, WA, USA) facilitates real-time capture of environmental data and conversion into 3D point clouds [57].
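As a point of reference for the TOF principle mentioned above, the sensor recovers the target distance d from the measured round-trip time Δt of the emitted pulse and the speed of light c,

d = \frac{c \, \Delta t}{2},

which is why ranging precision is governed primarily by how accurately the pulse’s flight time can be resolved.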
Depth cameras possess a certain degree of robustness to lighting conditions. However, image acquisition is still best conducted in the morning, evening, on cloudy days, or indoors where ambient light is not intense [58]. Under field conditions, the carrying platform should be equipped with a shading structure [59]. Intense sunlight can impair the resolution of Kinect cameras, leading to a significant reduction in point cloud density and negatively impacting the reconstruction of canopy structures [60], edges [58] and fine details [61]. Conversely, excessively low illumination can compromise the quality of color data, necessitating the use of artificial lighting. Nonetheless, the primary consideration should always be ensuring that the point cloud is sufficiently dense. Gene-Mola et al. concluded from experiments in apple orchards that the lighting level should be maintained between 50 and 2000 lux [62]. They also provided a coordination relationship between lighting and shooting distance: under high illumination, the distance from the sensor to the target can be decreased to enhance point cloud density. Cameras are typically mounted on movable vehicles or gantries to facilitate the extraction of large-scale plant point cloud data [63]. The former approach extends the reach of data collection, whereas the latter ensures stable imaging conditions. Vázquez-Arellano et al. affixed a Kinect v2 to the mobile robot platform TALOS, which systematically traversed greenhouse farmlands, capturing data from 41 corn seedlings per row [64]. The establishment of a shape model for the entire plant group demonstrated the efficacy of depth cameras in large-scale point cloud data extraction. Additionally, they explored the relationship between the accuracy of 3D reconstruction and the perspective of camera data acquisition. Ma et al. employed a Kinect v2 sensor mounted on a gantry to vertically obtain canopy information from soybean plants, calculating plant height and LAI with determination coefficients R2 exceeding 0.97 and 0.94, respectively [65]. For the phenotypic extraction of individual plants, methods such as positioning the plant on a turntable and photographing it with a camera [18,66,67] or arranging multiple cameras around the plant to capture images simultaneously [68,69] can be utilized. These approaches enable the extraction of more detailed data but also increase the complexity of point cloud registration. Beyond the study of static traits, the development of a real-time monitoring platform based on dynamic plant traits represents a future direction for advancement [70]. However, given variations in environmental conditions such as illumination and background, obtaining real-time and precise point clouds remains challenging. The development of more refined algorithms and the utilization of advanced sensors are necessary to enhance the accuracy of crop 3D modelling.
We have distilled the aforementioned findings in Table 3, which illustrates that LiDAR sensors offer the highest accuracy and efficiency but come at a premium cost. In contrast, MVS and depth cameras, while more affordable, are less efficient than LiDAR. Specifically, MVS is constrained by its indirect method of obtaining point cloud data, which relies on the SFM algorithm to transform image data into point clouds. This process encounters significant challenges when dealing with large-scale data. Nonetheless, because it reconstructs geometry from high-resolution images, MVS demonstrates considerable advantages in capturing fine details. To ensure both scale consistency and geometric accuracy, both MVS-based methods and depth camera technologies necessitate precise camera calibration [71]. The most commonly employed methods for static calibration are the checkerboard calibration method and Zhang’s calibration method [71,72]. By utilizing the detected corner coordinates and the known dimensions of the checkerboard, camera parameters are estimated by solving the camera model equations, followed by the correction of distortions [73].
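To illustrate the checkerboard-based calibration workflow described above, the following minimal Python sketch uses OpenCV; the board geometry (9 × 6 inner corners, 25 mm squares), image paths, and termination criteria are placeholder assumptions rather than values from the cited studies.

import glob
import cv2
import numpy as np

# Assumed checkerboard geometry: 9 x 6 inner corners, 25 mm squares.
pattern = (9, 6)
square_mm = 25.0

# 3D corner coordinates in the board plane (Z = 0), scaled to millimetres.
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2) * square_mm

obj_points, img_points = [], []
for path in glob.glob("calib_images/*.png"):  # placeholder image folder
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        # Refine corner locations to sub-pixel accuracy.
        corners = cv2.cornerSubPix(
            gray, corners, (11, 11), (-1, -1),
            (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
        obj_points.append(objp)
        img_points.append(corners)

# Estimate the intrinsic matrix K and lens distortion coefficients,
# then undistort an image with the recovered parameters.
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None)
undistorted = cv2.undistort(cv2.imread(path), K, dist)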
Depth cameras outperform MVS and LiDAR in controlled settings with stable lighting (low to moderate intensity) and minimal plant movement, providing cost-effective 3D reconstruction for small-scale phenotyping (e.g., maize, soybean). At close range, they deliver high accuracy comparable to MVS at a lower cost and with simpler processing than LiDAR, making them ideal for indoor or shaded field setups (e.g., greenhouses) for traits like leaf area index [74].

2.4. Multi-Sensor Fusion

In the realm of multi-sensor fusion, systems can be categorized into three primary types: fusion imaging systems between 2D sensors, fusion imaging systems between 2D and 3D sensors, and fusion imaging systems among 3D sensors. Utilizing an array of capture devices enables the extraction of crop phenotypic parameters at varying levels of detail. For instance, when employing mobile phones and cameras for data acquisition, mobile phones, despite their lower resolution, offer shorter capture times, while cameras, with their higher resolution, require longer capture times [75]. Multiple sensors can also complement one another. For example, RGB cameras can compensate for the lack of color information in LiDAR sensors [30] and some depth cameras, such as PMD cameras [76]. RGB imagery can also aid in the positional and attitudinal calibration of depth cameras, which is essential for the accurate alignment of multi-view 3D data [71]. Clearly, multi-sensor fusion systems offer a more comprehensive, reliable, and robust scheme for crop data acquisition. This fusion strategy enhances the completeness and richness of the data, such as in combined RGB camera and LiDAR systems. LiDAR can swiftly acquire large-area 3D structural data unaffected by lighting and reflections, while the camera system can supply abundant color information and more comprehensive geometric details to remedy the shortcomings of the LiDAR system [63]. In UAV-based wheat phenotyping, Fei et al. fused RGB, LiDAR, and multispectral sensors for yield prediction. The multi-sensor model achieved a 15–20% reduction in canopy height RMSE and improved yield estimation accuracy from R2 = 0.68 to 0.89, outperforming single-sensor systems [77]. This demonstrates that sensor fusion not only compensates for illumination and occlusion limitations but also enhances the robustness of phenotypic modeling in large-scale field conditions. However, multi-sensor systems are more complex than their single-sensor counterparts and require longer acquisition and processing times. Rapidly and accurately fusing multi-source data therefore remains an open challenge that requires careful consideration.

3. Point Cloud Data Processing

Three-dimensional plant phenotypic analysis can break through the dimensional limitations of traditional two-dimensional images, thereby providing more comprehensive and accurate phenotypic data such as plant height, leaf width, leaf area and fruit volume [78]. Although point clouds are the tool of choice for precisely capturing the 3D morphology of plants, the raw data obtained from acquisition equipment rarely represent the intricate structure of plants directly. Open-source software such as CloudCompare, Open3D, and PCL offers a multitude of point cloud processing solutions. Point cloud denoising marks the initial phase in the processing sequence. Given the substantial volume of raw point cloud data and the impact of noise interference, both denoising and downsampling have become essential components of the process. Point cloud registration is pivotal for constructing a comprehensive 3D model. It is crucial to establish the consistency and integrity of target features by accurately aligning disparate datasets within the same coordinate system, especially when dealing with point cloud data gathered from multiple viewpoints and at various time intervals. The segmentation and reconstruction process is central to extracting plant morphological characteristics and developing a detailed model, laying a solid groundwork for plant phenotypic assessment and analysis. Section 3.1, Section 3.2 and Section 3.3 of this paper explore the application and evolution of algorithms in the three critical domains of point cloud denoising, registration, and segmentation and reconstruction, focusing on advances in recent years.

3.1. Point Cloud Denoising

Point cloud denoising is an essential step in 3D reconstruction, aimed at eliminating background noise and jitter that occur during the capture process. Denoising not only enhances the accuracy of point cloud registration but also reduces computational overhead. Commonly used denoising approaches can be broadly classified into several categories: statistical methods (e.g., Gaussian filtering, Bilateral filtering, and Median filtering), clustering-based techniques (e.g., Density-Based Spatial Clustering of Applications with Noise, DBSCAN [76]; Random Sample Consensus, RANSAC [43]), local neighborhood methods (e.g., Statistical Outlier Removal, SOR [79]; K-Nearest Neighbors, KNN [80]), and geometric-topology-based strategies such as Voxel filtering [35].
Large-area background noise can be removed by semantic segmentation [81], while obvious outliers, such as soil, weeds, and flowerpots, can be directly trimmed using distance thresholds and bounding boxes [26,35,59,65]. Bilateral filtering reduces noise while preserving edge details by considering both spatial proximity and intensity differences between points. The DBSCAN algorithm, which clusters points based on density, is effective for complex data distributions but is sensitive to parameter choices and relies heavily on prior knowledge [82,83]. RANSAC is primarily used for model fitting and outlier removal, iterating multiple times to identify the optimal model [84]. SOR filtering identifies and removes points that deviate significantly from local density patterns; it is straightforward and efficient, requiring only two parameters (k and n) [85]. Voxel filtering reduces noise and computational load through downsampling, though the voxel size must be carefully selected to avoid loss of detail [35,86].
Combining multiple filtering techniques can leverage their complementary strengths and improve denoising performance. Chen et al. proposed a hierarchical method integrating DBSCAN and fast bilateral filtering to handle noise with diverse spatial distributions while preserving structural edges [87]. Jiang et al. used SOR and DBSCAN to progressively filter non-target points, achieving the extraction of clean target point clouds [26]. Hu et al. applied Gaussian filtering, SOR, and voxel filtering to denoise rapeseed plant point clouds during downsampling [35]. They demonstrated that appropriately reducing point density can shorten processing time without compromising accuracy. Larger voxels help reduce density more rapidly but may sacrifice fine details; smaller voxels retain more detail at the cost of higher computational load. Thus, selecting an appropriate voxel size is essential for balancing point density and detail preservation [88].
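As a minimal illustration of how such filters are chained in practice, the Python sketch below combines voxel downsampling, SOR, and DBSCAN using Open3D; the file names and all parameter values (voxel size, nb_neighbors, std_ratio, eps, min_points) are assumptions that would need tuning to the sensor and crop.

import numpy as np
import open3d as o3d

pcd = o3d.io.read_point_cloud("plant_scan.ply")  # placeholder input file

# 1) Voxel downsampling: reduce density (and computation) with a fixed voxel edge.
pcd_down = pcd.voxel_down_sample(voxel_size=0.005)

# 2) Statistical outlier removal (SOR): discard points whose mean distance to
#    their k nearest neighbours exceeds n standard deviations of the global mean.
pcd_sor, _ = pcd_down.remove_statistical_outlier(nb_neighbors=20, std_ratio=2.0)

# 3) DBSCAN clustering: keep the largest density-connected cluster and treat
#    small, sparse clusters (soil clods, stray leaves) as residual noise.
labels = np.array(pcd_sor.cluster_dbscan(eps=0.02, min_points=10))
if labels.max() >= 0:
    largest = np.argmax(np.bincount(labels[labels >= 0]))
    pcd_clean = pcd_sor.select_by_index(np.where(labels == largest)[0])
else:
    pcd_clean = pcd_sor

o3d.io.write_point_cloud("plant_clean.ply", pcd_clean)  # placeholder output file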
A summary of the above denoising methods and their effects is provided in Table 4. The choice of method should be guided by the specific characteristics of the point cloud data and the requirements of the application.
Although traditional filtering algorithms remain widely used, deep learning technologies have demonstrated considerable potential—particularly in maintaining structural integrity and spatial uniformity of point cloud edges. For example, PointCleanNet denoises point clouds by estimating local geometric features. It effectively handles unordered point clouds and remains robust under rigid transformations [89]. Wang et al. introduced the PFN model, which integrates filtering concepts into a deep learning framework equipped with an outlier recognizer and a denoiser [90]. The outlier recognizer assigns a probability of being an outlier to each point, while the denoiser uses a two-stage sub-network architecture to progressively refine point positions through coarse and fine recovery. A drawback of this approach is its three-stage training process, each with separate loss functions, making the network sensitive to the tuning of multiple parameters. Seppanen et al. developed a lightweight spatio-temporal denoising network capable of handling adverse weather conditions such as rain, fog, and snow [91]. Designed for integration with LiDAR, it enables real-time point cloud enhancement.
However, as a supervised model, its performance may degrade under environmental conditions not well represented in the training set. Moreover, deep learning methods generally require large annotated datasets and often exhibit limited generalization across different crop types or varying environmental conditions [82,83]. Several denoising networks—such as FCNet (based on a teacher–student framework) [92], the lightweight LPCDNet [93], PD-Flow (which integrates normalized flows and noise decoupling) [94], and NPD (based on neural projection) [95]—are all supervised methods whose performance depends heavily on the availability of clean training samples.
When labeled samples are scarce, data augmentation and semi-supervised learning can help alleviate overfitting and improve generalization [90,92,96]. Given the high variability of agricultural settings, unsupervised methods that do not require clean samples hold particular promise. Wang et al. proposed an unsupervised denoising framework that synthesizes point cloud versions with different noise levels to estimate displacement vectors of noisy points and recover accurate point positions [97].
Current research on point cloud denoising focuses mainly on individual samples, with limited attention paid to group point cloud data. Owing to the complexity, diversity, and variability of plant point cloud samples, traditional filtering algorithms remain the dominant preprocessing choice, and the application of deep learning in agricultural contexts warrants further exploration. Strengthening agricultural datasets through synthetic noise injection, geometric transformations, partial-occlusion simulation, and domain adaptation can substantially improve model robustness and transferability across crop types.

3.2. Point Cloud Registration

Point cloud registration is a critical technique for precisely aligning multi-source point cloud datasets into a unified coordinate reference system. It is especially vital when dealing with data captured from diverse viewpoints or across different time intervals. The Iterative Closest Point (ICP) algorithm stands as a foundational method within the domain of point cloud registration. It operates by iteratively identifying the nearest points and determining the optimal alignment parameters to ensure the most effective superposition between the source and target point clouds [18]. Nonetheless, the ICP algorithm can be susceptible to noise and reliant on initial alignment parameters. Consequently, it is often essential to preprocess point cloud coordinates using supplementary techniques to enhance the precision of the initial alignment. Yang et al. utilized over 100 calibration balls in red, yellow, and blue to assist in the calibration process [98]. By leveraging the spherical coordinates of these calibration balls from various perspectives, they derived the corresponding spatial transformation matrix to achieve coarse registration from a single viewpoint. Subsequently, they employed the ICP algorithm for precise registration, resulting in a complete 3D point cloud model of fruit trees. They accurately calculated canopy height, maximum canopy width, and canopy thickness through two-view data fusion, with the average relative error between their measurements and field measurements being approximately 3%. Wang et al. firstly determined the positional relationship between point clouds based on the turntable angle relationship and then used the ICP algorithm for registration [18]. This approach optimized the stem-leaf matching rate of the model and effectively avoided falling into local optimal solutions. Guan et al. employed Principal Component Analysis to align the source and target point clouds into the same reference frame prior to the application of the ICP algorithm [76]. This approach not only diminished computational load but also enhanced the accuracy of the registration process. Sun et al. sequentially applied the RANSAC algorithm followed by the ICP algorithm to achieve coarse and fine registration, respectively [81]. However, there is no one-size-fits-all coarse registration algorithm. The specific choice of algorithm must be tailored based on factors such as sample characteristics, acquisition schemes, point cloud quality, and data complexity.
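To make the coarse-to-fine idea concrete, the sketch below follows the PCA-then-ICP pattern described above, implemented with NumPy and Open3D; the file names and the correspondence threshold are assumptions, and the principal-axis alignment is deliberately simplified (axis sign ambiguity and reflections are ignored), so it is a starting point rather than a faithful reproduction of any cited method.

import numpy as np
import open3d as o3d

def pca_frame(points):
    # Return the centroid and principal axes (as columns) of an N x 3 array.
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid, full_matrices=False)
    return centroid, vt.T

source = o3d.io.read_point_cloud("view_a.ply")  # placeholder files
target = o3d.io.read_point_cloud("view_b.ply")

# Coarse registration: align the principal axes of the two clouds.
# A robust implementation would also test sign flips of the axes and keep
# the candidate transform with the smallest residual.
c_s, R_s = pca_frame(np.asarray(source.points))
c_t, R_t = pca_frame(np.asarray(target.points))
R = R_t @ R_s.T
T_coarse = np.eye(4)
T_coarse[:3, :3] = R
T_coarse[:3, 3] = c_t - R @ c_s

# Fine registration: point-to-point ICP seeded with the coarse transform
# (arguments: source, target, correspondence distance in metres, initial guess).
result = o3d.pipelines.registration.registration_icp(
    source, target, 0.01, T_coarse,
    o3d.pipelines.registration.TransformationEstimationPointToPoint())

source.transform(result.transformation)  # bring the source into the target frame
print("fitness:", result.fitness, "inlier RMSE:", result.inlier_rmse)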
Integrating the temporal dimension into point cloud registration provides a direct reflection of plant growth states. Magistri et al. introduced a hierarchical registration scheme for 4D plant point clouds, leveraging pre-recorded 3D point cloud time series to monitor LAI, leaf length, stem diameter, and stem length through feature matching [99]. This approach facilitated effective data association across various growth stages of plants. Building on this, Chebrolu et al. proposed estimating deformation parameters by utilizing the plant skeleton structure over extended intervals, thereby ascertaining the non-rigid registration process and the corresponding relationships [100]. They also developed a method for interpolating point clouds between actual scan times to estimate data points at intermediate times. However, these methods are primarily applicable to plants with simple structures. As plant morphology evolves over extended timescales, it can no longer be considered merely as rigid deformation. Capturing the non-rigid deformation parameters of plants with intricate topological structures presents a significant challenge. A detailed summary of certain point cloud registration schemes, including their algorithms and accuracy, is provided in Table 5.

3.3. Segmentation and Reconstruction

High-throughput phenotyping facilitates non-contact measurement of a diverse array of phenotypic traits associated with plant growth, yield, and adaptability [15,101]. Rapid and precise segmentation of the crops and their organs from the acquired point cloud is an essential step, as it directly impacts the accuracy of subsequent phenotypic parameter extraction and analysis. The discussion that follows will be divided into three sections to outline various segmentation and reconstruction algorithms and their research outcomes, which are compiled in Table 6.

3.3.1. Canopy

Accurately reconstructing the structure of the plant canopy holds substantial importance for agricultural scientific research, crop management, and yield estimation [106,107]. Calculation of canopy parameters, such as LAI, aids in providing feedback on pesticide deposition and optimizing spraying parameters [12]. With advancements in remote sensing technology, particularly the use of UAVs and LiDARs, 3D reconstruction techniques for plant canopies have advanced rapidly [51,108]. When calculating canopy height, it is essential to account for the influence of ground slopes, particularly in extensive farmlands. Ground slopes can be effectively modelled using the linear best-fitting plane method and by expanding the sampling range [109]. Point cloud processing techniques, such as the convex hull and concave hull algorithms [59], the α-shape (AS) algorithm and the voxel-based (VB) algorithm [110,111], can be directly employed to extract the 3D boundary structure. These methods enable surface volume estimation and surface reconstruction from point cloud data. However, the large-scale estimated volume tends to be smaller than the actual volume, as the data typically represents the top surface, which usually lacks information from the sides. This bias can be mitigated by integrating multi-sensor fusion (e.g., LiDAR with RGB-depth cameras) to capture lateral geometry, complementing missing spatial information. The convex hull and concave hull algorithms are particularly effective for dense crop canopies [51], while the AS algorithm can reconstruct more intricate details when the parameter α is set to a lower value [85]. Although these methods perform well in shape capture, they are associated with high computational costs. In contrast, VB and stereo vision methods are dominant in computational efficiency [112], but they have limitations in shape capture accuracy. Li et al. comprehensively evaluated a variety of algorithms, including the manual geometric model, convex hull, slice-based convex hull, platform convex hull, AS, slice-based AS, VB, and their proposed dynamic slicing and reconstruction algorithm [85]. The dynamic slicing and reconstruction algorithm captured canopy growth characteristics by dynamically cutting adjacent slices. Although it had the longest running time, it could better reflect the actual growth of the canopy due to its efficient search and iterative operations. Deng et al. further optimized canopy boundary detection by employing the α-shape algorithm.
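For readers implementing these boundary-based estimators, the sketch below contrasts a convex-hull volume (SciPy) with an α-shape surface mesh (Open3D); the input file and the α value are assumptions, and, as noted above, smaller α values recover finer concave detail at a higher computational cost.

import numpy as np
import open3d as o3d
from scipy.spatial import ConvexHull

canopy = o3d.io.read_point_cloud("canopy.ply")  # placeholder file
pts = np.asarray(canopy.points)

# Convex-hull estimate: fast, but overestimates volume for concave canopies.
hull = ConvexHull(pts)
print(f"convex-hull volume: {hull.volume:.4f}, surface area: {hull.area:.4f}")

# Alpha-shape surface: follows concavities; alpha controls the level of detail.
alpha = 0.05  # assumed value, in the units of the point cloud
mesh = o3d.geometry.TriangleMesh.create_from_point_cloud_alpha_shape(canopy, alpha)
mesh.compute_vertex_normals()

# A simple canopy-height proxy from the vertical extent; on sloped fields a
# fitted ground plane should replace the minimum-z reference, as noted above.
height = pts[:, 2].max() - pts[:, 2].min()
print(f"canopy height (z-extent): {height:.3f}")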
Table 6 shows the key accuracy metrics of representative stem–leaf segmentation and reconstruction models. The variation in evaluation metrics reflects the different focuses and application contexts of these methods. Geometry-based approaches (e.g., MST–DBSCAN, RANSAC–Kalman) emphasize structural accuracy and hence use metrics such as R2, MAPE, or AP to assess stalk geometry and node precision. In contrast, deep learning models (e.g., U-Net, DeepLabv3+, PF-Net) target semantic segmentation, using IoU, RMSE, or overall accuracy to capture surface completeness and pixel-level fidelity. Thus, metric differences mainly indicate distinct analytical priorities rather than inconsistency.

3.3.2. Stem and Leaf

Accurate segmentation and reconstruction of crop stems and leaves is a crucial step in analyzing plant phenotypic characteristics [66,81]. These characteristics include plant height, volume, stem cross-sectional area, as well as the number, length, and volume of leaves. This process is of great significance for understanding plant growth dynamics, optimizing planting strategies, and increasing crop yield [13,58]. Plant seedlings are widely used in this field due to the simplicity of their stems and the sparseness of their leaves. The segmentation process involves first isolating individual plants and then sequentially extracting the skeleton, stem, and leaves [16,84]. Skeleton extraction aims to simplify the data structure, reduce computational complexity, and retain key topological information about the plant structure. The extraction of the stem is the most critical step, requiring identification of the main stem followed by “trimming” of the nodes. Sun et al. proposed a graph-based identification method [16], while Zermas et al. introduced a Random Interception Node algorithm [84]. The former method selects the path with the highest terminal point as the main stem and identifies nodes along this path by traversing the skeleton from the bottom upward. In contrast, the latter simulates the flow path of raindrops on the plant surface from the top downward to locate the main stem and nodes. Subsequently, both methods gradually reduce their parameters so that regions close to the stem are trimmed and fitted. This pruning strategy can also assist in the design of automatic leaf-cutting systems, where the distance between the cutting point and the main stem can be appropriately increased [102]. The reconstruction of leaves can be achieved by using the remaining point cloud after removing the main stem, employing clustering methods such as DBSCAN, Euclidean Clustering, or K-means, thereby enhancing the completeness of single leaf reconstruction.
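A simplified sketch of the final step described above, clustering the residual point cloud into individual leaves once the main stem has been removed, is shown below using Open3D's DBSCAN; the stem removal itself (skeletonisation and node trimming) is assumed to have been done already, and the eps and min_points values are assumptions.

import numpy as np
import open3d as o3d

# Assumes the main stem has already been identified and removed, so the
# remaining cloud contains only leaf points (plus a little residual noise).
leaves = o3d.io.read_point_cloud("plant_without_stem.ply")  # placeholder file

labels = np.array(leaves.cluster_dbscan(eps=0.015, min_points=30))

for leaf_id in range(labels.max() + 1):
    leaf = leaves.select_by_index(np.where(labels == leaf_id)[0])
    # Per-leaf descriptors: point count and oriented-bounding-box extent as a
    # rough proxy for leaf length and width before any surface fitting.
    extent = np.asarray(leaf.get_oriented_bounding_box().extent)
    print(f"leaf {leaf_id}: {len(leaf.points)} points, extent {np.round(extent, 3)}")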
However, the topological structure of the plant is sometimes not clear enough, with apparent occlusions and distortions present. Chen et al. indicated that the PSGR model has achieved significant improvements in visual clarity [113]. It can more effectively remove backgrounds, substantially reduce visual artifacts, and better preserve fine structural details. These enhancements enable morphological features such as leaf curvature and spatial topology to be clearly presented, which are often obscured or lost in traditional 3DGS outputs. Ando et al. proposed a reconstruction method for curled blades, which enhances robustness to noise and missing points [114]. However, this method requires high-density point clouds and cannot handle overly complex distortions. New neural-field-based segmentation frameworks have emerged, integrating NeRF-derived dense point clouds with lightweight segmentation networks for efficient stem and leaf trait extraction [103]. Deep learning networks, such as DeepLabv3+, U-Net, and U-Net++, as well as machine learning algorithms including support vector machine (SVM) and K-means clustering, have demonstrated excellent performance in dealing with leaf overlap [104,105]. The strong learning capabilities of these algorithms enable them to extract key features from complex image data and perform efficient and accurate classification [81]. Chen et al. further advanced this field by proposing a point cloud completion network, PF-Net [115]. This network can learn the structural features of plant leaf point clouds at different scales, significantly improving the accuracy and robustness of completion and providing a new perspective for solving 3D reconstruction problems under leaf occlusion conditions. Nevertheless, the aforementioned approach requires manual intervention and can only handle a single leaf blade or a small number of them.
In recent years, virtual design technology has proposed innovative solutions to address overlapping problems [116]. Through extensive iterative calculations, the virtual model can gradually converge towards the real model. Particularly in handling occlusion issues, virtual design technology has demonstrated robustness and accuracy by iteratively adjusting the orientation of 3D plant components [117]. However, optimizing iterative parameters and enhancing algorithm efficiency remains a subject that requires further exploration through experimentation in this field [118].
The reconstruction of a voxel grid can discretize an actual object or scene into a voxel-based representation. Voxels can be derived from 2D images captured from multiple perspectives [119] or obtained from point cloud data using tool libraries such as PCL, Visualization Toolkit, and OpenCV. Das Choudhury et al. proposed a multi-view image reconstruction method, PhenoMV-3D, for single tall crops [120]. They utilized a spatial carving method combined with light consistency theory to reconstruct the 3D voxel grid model of maize from multi-view contours. Based on voxel overlap consistency checks, the seedling stem was identified, and subsequently, point cloud clustering techniques were employed to detect and separate the leaves. Kliman et al. developed a methodology for generating precise and memory-efficient 3D models of plant root systems and leaf architectures, although two significant challenges, optical distortion and aggressive voxel culling, still require further resolution [121].
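As a brief illustration of the voxelisation step itself, Open3D can discretise a point cloud into a voxel grid in a single call; the input file and voxel size below are assumed values, and smaller voxels preserve more detail at the cost of memory.

import open3d as o3d

pcd = o3d.io.read_point_cloud("maize_plant.ply")  # placeholder file

# Discretise the cloud into cubic voxels with a 1 cm edge (assumed units: metres).
voxel_grid = o3d.geometry.VoxelGrid.create_from_point_cloud(pcd, 0.01)
print("occupied voxels:", len(voxel_grid.get_voxels()))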

3.3.3. Fruit

In the 3D reconstruction of fruits, the process can be divided into two categories: one involves fruits separated from the stem, and the other involves fruits connected to the stem. For fruits that are detached from the stem, they are often placed on a rotary table or platform due to the absence of branches and leaves, and are scanned directly to obtain point cloud data [43,53,122]. However, obtaining point cloud data at the bottom of the fruit’s rotation axis can be challenging. In such cases, the symmetry characteristics of the fruit can be utilized for interpolation processing [43]. Fourier analysis plays a significant role in calculating the centroid of the seed, providing a powerful mathematical tool for precise seed positioning [123]. For fruits with pronounced depressions or excessive asymmetry, traditional mirror completion methods are not applicable [122].
For fruits connected to the stem, their 3D reconstructions are directly related to the evaluation of crop yield. The reconstruction of such fruits often aims for high throughput but generally faces challenges such as occlusion by stems and leaves, as well as complex backgrounds. In this context, clustering algorithms provide an effective solution for separating fruits from complex crop structures [66,101]. Wang et al. proposed an unsupervised framework based on an adaptive k-means algorithm [124]. This framework, incorporating a dynamic perspective, can automatically segment and reconstruct hundreds of wheat spikes simultaneously. They separated wheat spikes and stems using side-view clustering and identified individual wheat spikes via top-view clustering, thereby completing the segmentation process. Subsequently, the RANSAC algorithm was employed for shape fitting, and the size of each wheat spike was measured based on the fitting results. In addition, deep learning technology has demonstrated excellent performance in the 3D reconstruction of fruits. Panicle-3D is a comprehensive method that integrates rice reconstruction technology based on contour shape recovery, two-dimensional ear segmentation technology based on deep convolutional neural networks, and 3D ear segmentation technology based on ray tracing and supervoxel clustering [101]. This method can recover the 3D shape of rice panicles from multi-view images and is adaptable to rice panicles of different varieties and growth stages. The Panicle-3D method is characterized by its automation, rapidity, low cost, and non-destructiveness. It is suitable for high-throughput 3D phenotypic analysis of large-scale rice populations and provides strong technical support for agricultural research and production.
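To illustrate the clustering-then-fitting pattern in code (a heavily simplified stand-in for the adaptive k-means framework of Wang et al., which chooses k automatically), the sketch below clusters pre-filtered spike points with scikit-learn's KMeans and measures each cluster with an oriented bounding box; the spike count, input file, and the bounding-box proxy for spike length are all assumptions.

import numpy as np
import open3d as o3d
from sklearn.cluster import KMeans

# Assumes stems, leaves, and ground have already been filtered out,
# leaving only spike points in the cloud.
spikes = o3d.io.read_point_cloud("spike_points.ply")  # placeholder file
pts = np.asarray(spikes.points)

n_spikes = 50  # assumed; the adaptive framework estimates this automatically
labels = KMeans(n_clusters=n_spikes, n_init=10, random_state=0).fit_predict(pts)

lengths = []
for k in range(n_spikes):
    cluster = spikes.select_by_index(np.where(labels == k)[0])
    extent = np.asarray(cluster.get_oriented_bounding_box().extent)
    lengths.append(extent.max())  # longest box edge as a rough spike-length proxy
print(f"mean spike length: {np.mean(lengths):.3f}")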
Overall, the above analysis systematically compared the segmentation and reconstruction algorithms applied to canopy, stem-leaf and fruit. The objects and the performance of reconstruction and segmentation of each algorithm are comprehensively summarized in Table 7, demonstrating the practical effects of the current segmentation and reconstruction methods for different plant components.

4. Conclusions

Three-dimensional reconstruction technology has become increasingly important in high-throughput crop phenotyping. This paper comprehensively reviews the application progress of four main data acquisition techniques—LiDAR, MVS, depth cameras, and multi-sensor fusion. LiDAR offers high-precision, rapid data acquisition for large-scale field crop phenotyping, but its high cost limits adoption. MVS provides detailed, high-resolution point clouds, ideal for controlled settings; yet, it is computationally intensive and sensitive to environmental factors. Depth cameras balance cost, accuracy, and processing simplicity and are suitable for both laboratory and field use, but they require controlled lighting. Multi-sensor fusion enhances data richness by combining LiDAR’s structural accuracy with RGB cameras’ color detail, though it increases system complexity and processing time. Each method’s efficiency and accuracy depend on application scale, environmental conditions, and computational resources. Meanwhile, a hybrid LiDAR–depth camera system mounted on a UGV can fuse LiDAR’s precise, large-scale 3D data with the depth camera’s affordable, detailed depth and color information, using voxel filtering, ICP registration, and DBSCAN segmentation to achieve trait accuracies above 95% for high-throughput crop phenotyping, with deployment feasible within 3–6 months. Overall, these technologies represent a continuum of trade-offs between accuracy, scalability, and cost, suggesting that hybrid approaches combining LiDAR’s precision, MVS’s fine detail, and depth cameras’ efficiency will drive the next generation of high-throughput phenotyping systems.
This paper also reviews the three core point cloud processing steps: denoising, registration, segmentation and reconstruction. Denoising employs algorithms like Statistical Outlier Removal (SOR), Voxel filtering, and DBSCAN, often combined to balance noise reduction and detail preservation, with deep learning methods like PointCleanNet showing potential but limited by data needs. Registration relies on the Iterative Closest Point (ICP) algorithm, enhanced by coarse registration methods like RANSAC and PCA, to align multi-source point clouds, supporting temporal analysis despite challenges with non-rigid plant deformations. Segmentation and reconstruction use algorithms such as α-shape, voxel-based methods, and clustering (e.g., DBSCAN, K-means) for canopy, stem, leaf, and fruit analysis, with deep learning models like PF-Net and Panicle-3D improving accuracy under occlusion, though computational costs remain a challenge. These techniques enable precise phenotypic trait extraction, critical for high-throughput crop analysis. Although crop 3D reconstruction technology holds broad application prospects, it still faces several challenges in practical use.
First, it is necessary to balance accuracy and cost. Cost considerations include not only economic expenditure but also experimental and time costs. LiDAR demonstrates significant potential for large-scale field data acquisition due to its high precision. However, its high economic cost limits its widespread application. Depth cameras offer lower accuracy but are more cost-effective. They are also suitable for large-scale outdoor data acquisition. Nevertheless, the influence of ambient light must be considered, and shading measures should be implemented when necessary. MVS achieves the highest detail accuracy but requires more complex experimental setups and data processing, resulting in longer processing times. Multi-sensor fusion systems provide richer information but increase computational costs. However, the emergence of new technologies is gradually solving these difficulties. For instance, OB-NeRF has successfully overcome the limitations of traditional MVS in terms of sensitivity to lighting conditions and computational efficiency, establishing a new paradigm for high-throughput phenotyping.
Second, for point cloud denoising, combining multiple filtering schemes effectively improves denoising performance. Deep learning methods show great potential, but their application in agriculture is still relatively limited. For point cloud registration, the ICP algorithm remains the mainstream method, and applying a coarse registration algorithm beforehand improves alignment accuracy. 4D monitoring still struggles to capture non-rigid deformation parameters over long time scales and requires further development. For segmentation and reconstruction, accuracy at the organ level continues to improve, but these advances are often limited to the individual-plant scale.
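The coarse-to-fine idea can be illustrated with a short sketch: a PCA-based coarse alignment followed by ICP refinement in Open3D, in the spirit of the PCA + ICP schemes listed in Table 5. The file names, distance threshold, and the simple axis-matching step are assumptions, not a published algorithm.

```python
# Coarse-to-fine registration sketch: PCA aligns principal axes and centroids,
# then point-to-point ICP refines the transform. Illustrative parameters only.
import numpy as np
import open3d as o3d

def pca_frame(pcd):
    """Return the centroid and principal axes (columns) of a point cloud."""
    pts = np.asarray(pcd.points)
    centroid = pts.mean(axis=0)
    _, _, vt = np.linalg.svd(pts - centroid, full_matrices=False)
    return centroid, vt.T

source = o3d.io.read_point_cloud("scan_day1.ply")          # hypothetical files
target = o3d.io.read_point_cloud("scan_day7.ply")

# Coarse step: rotate the source principal axes onto the target axes.
c_s, a_s = pca_frame(source)
c_t, a_t = pca_frame(target)
R = a_t @ a_s.T
if np.linalg.det(R) < 0:           # avoid a reflection from the SVD sign ambiguity
    a_s[:, -1] *= -1
    R = a_t @ a_s.T
T = np.eye(4)
T[:3, :3] = R
T[:3, 3] = c_t - R @ c_s

# Fine step: point-to-point ICP starting from the coarse transform.
result = o3d.pipelines.registration.registration_icp(
    source, target, 0.02, T,
    o3d.pipelines.registration.TransformationEstimationPointToPoint())
print("fitness:", result.fitness, "inlier RMSE:", result.inlier_rmse)
```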
Finally, the complexity of data processing and resource allocation is a critical issue. Compared with single-plant phenotypic analysis under laboratory conditions, high-throughput phenotyping of crops under field conditions holds far greater practical potential. However, the detailed and complex algorithms that improve accuracy may not suit large-scale point cloud data, as they increase computational resource demands and time costs. Recent developments such as UAV-based 3D reconstruction combined with deep learning and NeRF-driven neural modeling [21,77] indicate a clear transition from laboratory-scale demonstrations to fully automated field-scale phenotyping systems. These efforts complement ongoing surveys emphasizing multiscale integration across sensing and modeling domains [20].
In the future, research should focus on addressing these challenges to achieve high-throughput, high-precision, automated, and non-destructive crop phenotyping. Future efforts should prioritize multi-scale fusion of LiDAR and depth camera data to combine large-scale structural accuracy with fine organ-level detail while reducing the computational cost of high-throughput field phenotyping. Developing real-time algorithms, such as unsupervised deep learning for denoising and segmentation, will address occlusion and environmental variability and enable rapid processing. Establishing benchmark datasets for diverse crops (e.g., maize, soybean) under varied field conditions will standardize validation and improve algorithm robustness, as highlighted in this review's analysis of current limitations. In addition, the combination of artificial intelligence and 3D reconstruction is a key research direction, aiming at intelligent, adaptive, and real-time crop phenotypic analysis. Deep learning models can enhance point cloud denoising, segmentation, and trait extraction, while the multi-scale fusion of LiDAR, depth camera, and multispectral data will improve accuracy at both the canopy and organ levels. The construction of standardized datasets and open-source frameworks will further promote the transition from sensor-driven to AI-driven phenotypic analysis and facilitate scalable, automated agricultural monitoring.
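As a simple illustration of the trait extraction that such pipelines ultimately serve, the hedged sketch below derives plant height and a convex-hull canopy volume from an already segmented plant point cloud; the file name and percentile thresholds are assumptions, and the convex hull is a deliberately crude volume proxy.

```python
# Illustrative trait-extraction sketch (assumed file name and thresholds):
# plant height from the vertical extent of the cloud, canopy volume from a
# convex hull, two traits repeatedly reported in the studies reviewed above.
import numpy as np
import open3d as o3d
from scipy.spatial import ConvexHull

plant = o3d.io.read_point_cloud("plant_segmented.ply")
pts = np.asarray(plant.points)

# Plant height: 99.5th-percentile z minus ground level, robust to stray points.
ground_z = np.percentile(pts[:, 2], 0.5)
top_z = np.percentile(pts[:, 2], 99.5)
height_m = top_z - ground_z

# Canopy volume: convex hull of the segmented canopy points.
volume_m3 = ConvexHull(pts).volume

print(f"plant height ~ {height_m:.3f} m, canopy volume ~ {volume_m3:.4f} m^3")
```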

Author Contributions

Y.L.: Writing—original draft, Formal analysis, Supervision. Z.L.: Writing—review and editing, Validation, Data curation. B.L.: Writing—review and editing, Resources. L.Y.: Funding acquisition, Resources. F.W.: Resources, Investigation. W.Q.: Funding acquisition, Supervision, Resources. X.Q.: Funding acquisition, Project administration, Resources, Writing—review and editing. All authors have read and agreed to the published version of the manuscript.

Funding

This work was funded by the National Key Research and Development Program of China (2024YFC2607700 & 2024YFC2607702), the National Natural Science Foundation of China (32272633), and the Agricultural Science and Technology Innovation Program (CAAS-ZDRW202505).

Data Availability Statement

No new data were created or analyzed in this study.

Acknowledgments

The authors thank the NetEase Youdao Translation Tool for writing assistance.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Becker, S.; Fanzo, J. Population and food systems: What does the future hold? Popul. Environ. 2023, 45, 20. [Google Scholar] [CrossRef]
  2. Xing, Y.; Xie, Y.; Wang, X. Enhancing soil health through balanced fertilization: A pathway to sustainable agriculture and food security. Front. Microbiol. 2025, 16, 1536524. [Google Scholar] [CrossRef]
  3. Song, P.; Wang, J.; Guo, X.; Yang, W.; Zhao, C. High-throughput phenotyping: Breaking through the bottleneck in future crop breeding. Crop J. 2021, 9, 633–645. [Google Scholar] [CrossRef]
  4. Ninomiya, S. High-throughput field crop phenotyping: Current status and challenges. Breed. Sci. 2022, 72, 3–18. [Google Scholar] [CrossRef] [PubMed]
  5. Li, Y.; Wang, W.; Hu, C.; Yang, S.; Ma, C.; Wu, J.; Wang, Y.; Xu, Z.; Li, L.; Huang, Z.; et al. Ectopic Expression of a Maize Gene ZmDUF1645 in Rice Increases Grain Length and Yield, but Reduces Drought Stress Tolerance. Int. J. Mol. Sci. 2023, 24, 9794. [Google Scholar] [CrossRef] [PubMed]
  6. van Eeuwijk, F.A.; Bustos-Korts, D.; Millet, E.J.; Boer, M.P.; Kruijer, W.; Thompson, A.; Malosetti, M.; Iwata, H.; Quiroz, R.; Kuppe, C.; et al. Modelling strategies for assessing and increasing the effectiveness of new phenotyping techniques in plant breeding. Plant Sci. 2019, 282, 23–39. [Google Scholar] [CrossRef]
  7. Gracia-Romero, A.; Vergara-Díaz, O.; Thierfelder, C.; Cairns, J.E.; Kefauver, S.C.; Araus, J.L. Phenotyping Conservation Agriculture Management Effects on Ground and Aerial Remote Sensing Assessments of Maize Hybrids Performance in Zimbabwe. Remote Sens. 2018, 10, 349. [Google Scholar] [CrossRef]
  8. Esser, F.; Klingbeil, L.; Zabawa, L.; Kuhlmann, H. Quality Analysis of a High-Precision Kinematic Laser Scanning System for the Use of Spatio-Temporal Plant and Organ-Level Phenotyping in the Field. Remote Sens. 2023, 15, 1117. [Google Scholar] [CrossRef]
  9. Fei, S.; Hassan, M.A.; Xiao, Y.; Su, X.; Chen, Z.; Cheng, Q.; Duan, F.; Chen, R.; Ma, Y. UAV-based multi-sensor data fusion and machine learning algorithm for yield prediction in wheat. Precis. Agric. 2023, 24, 187–212. [Google Scholar] [CrossRef]
  10. Prey, L.; Hanemann, A.; Ramgraber, L.; Seidl-Schulz, J.; Noack, P.O. UAV-Based Estimation of Grain Yield for Plant Breeding: Applied Strategies for Optimizing the Use of Sensors, Vegetation Indices, Growth Stages, and Machine Learning Algorithms. Remote Sens. 2022, 14, 6345. [Google Scholar] [CrossRef]
  11. Tadesse, W.; Bishaw, Z.; Assefa, S. Wheat production and breeding in Sub-Saharan Africa Challenges and opportunities in the face of climate change. Int. J. Clim. Change Strateg. Manag. 2019, 11, 696–715. [Google Scholar] [CrossRef]
  12. Chen, C.; Jia, Y.; Zhang, J.; Yang, L.; Wang, Y.; Kang, F. Development of a 3D point cloud reconstruction-based apple canopy liquid sedimentation model. J. Clean. Prod. 2024, 451, 142038. [Google Scholar] [CrossRef]
  13. Mochida, K.; Koda, S.; Inoue, K.; Hirayama, T.; Tanaka, S.; Nishii, R.; Melgani, F. Computer vision-based phenotyping for improvement of plant productivity: A machine learning perspective. Gigascience 2019, 8, giy153. [Google Scholar] [CrossRef] [PubMed]
  14. Zou, X.; Mõttus, M.; Tammeorg, P.; Torres, C.L.; Takala, T.; Pisek, J.; Mäkelä, P.; Stoddard, F.L.; Pellikka, P. Photographic measurement of leaf angles in field crops. Agric. For. Meteorol. 2014, 184, 137–146. [Google Scholar] [CrossRef]
  15. Qiu, R.; Wei, S.; Zhang, M.; Li, H.; Sun, H.; Liu, G.; Li, M. Sensors for measuring plant phenotyping: A review. Int. J. Agric. Biol. Eng. 2018, 11, 1–17. [Google Scholar] [CrossRef]
  16. Sun, S.; Li, C.; Chee, P.W.; Paterson, A.H.; Meng, C.; Zhang, J.; Ma, P.; Robertson, J.S.; Adhikari, J. High resolution 3D terrestrial LiDAR for cotton plant main stalk and node detection. Comput. Electron. Agric. 2021, 187, 106276. [Google Scholar] [CrossRef]
  17. Milella, A.; Marani, R.; Petitti, A.; Reina, G. In-field high throughput grapevine phenotyping with a consumer-grade depth camera. Comput. Electron. Agric. 2019, 156, 293–306. [Google Scholar] [CrossRef]
  18. Wang, Y.; Chen, Y. Non-Destructive Measurement of Three-Dimensional Plants Based on Point Cloud. Plants 2020, 9, 571. [Google Scholar] [CrossRef]
  19. Li, S.; Cui, Z.; Yang, J.; Wang, B. A Review of Optical-Based Three-Dimensional Reconstruction and Multi-Source Fusion for Plant Phenotyping. Sensors 2025, 25, 3401. [Google Scholar] [CrossRef]
  20. Qi, J.; Gao, F.; Wang, Y.; Zhang, W.; Yang, S.; Qi, K.; Zhang, R. Multiscale phenotyping of grain crops based on three-dimensional models: A comprehensive review of trait detection. Comput. Electron. Agric. 2025, 237, 110597. [Google Scholar] [CrossRef]
  21. Wu, S.; Hu, C.; Tian, B.; Huang, Y.; Yang, S.; Li, S.; Xu, S. A 3D reconstruction platform for complex plants using OB-NeRF. Front. Plant Sci. 2025, 16, 1449626. [Google Scholar] [CrossRef]
  22. Jin, S.; Sun, X.; Wu, F.; Su, Y.; Li, Y.; Song, S.; Xu, K.; Ma, Q.; Baret, F.; Jiang, D.; et al. Lidar sheds new light on plant phenomics for plant breeding and management: Recent advances and future prospects. ISPRS J. Photogramm. Remote Sens. 2021, 171, 202–223. [Google Scholar] [CrossRef]
  23. Ali, B.; Zhao, F.; Li, Z.; Zhao, Q.; Gong, J.; Wang, L.; Tong, P.; Jiang, Y.; Su, W.; Bao, Y.; et al. Sensitivity Analysis of Canopy Structural and Radiative Transfer Parameters to Reconstructed Maize Structures Based on Terrestrial LiDAR Data. Remote Sens. 2021, 13, 3751. [Google Scholar] [CrossRef]
  24. Jimenez-Berni, J.A.; Deery, D.M.; Rozas-Larraondo, P.; Condon, A.G.; Rebetzke, G.J.; James, R.A.; Bovill, W.D.; Furbank, R.T.; Sirault, X.R.R. High Throughput Determination of Plant Height, Ground Cover, and Above-Ground Biomass in Wheat with LiDAR. Front. Plant Sci. 2018, 9, 237. [Google Scholar] [CrossRef] [PubMed]
  25. Zhu, Y.; Sun, G.; Ding, G.; Zhou, J.; Wen, M.; Jin, S.; Zhao, Q.; Colmer, J.; Ding, Y.; Ober, E.S.; et al. Large-scale field phenotyping using backpack LiDAR and CropQuant-3D to measure structural variation in wheat. Plant Physiol. 2021, 187, 716–738. [Google Scholar] [CrossRef] [PubMed]
  26. Jiang, Y.; Li, C.; Takeda, F.; Kramer, E.A.; Ashrafi, H.; Hunter, J. 3D point cloud data to quantitatively characterize size and shape of shrub crops. Hortic. Res. 2019, 6, 43. [Google Scholar] [CrossRef]
  27. Walter, J.D.C.; Edwards, J.; McDonald, G.; Kuchel, H. Estimating Biomass and Canopy Height with LiDAR for Field Crop Breeding. Front. Plant Sci. 2019, 10, 1145. [Google Scholar] [CrossRef]
  28. Virlet, N.; Sabermanesh, K.; Sadeghi-Tehran, P.; Hawkesford, M.J. Field Scanalyzer: An automated robotic field phenotyping platform for detailed crop monitoring. Funct. Plant Biol. 2017, 44, 143–153. [Google Scholar] [CrossRef]
  29. Paulus, S. Measuring crops in 3D: Using geometry for plant phenotyping. Plant Methods 2019, 15, 103. [Google Scholar] [CrossRef]
  30. Wei, K.; Liu, S.; Chen, Q.; Huang, S.; Zhong, M.; Zhang, J.; Sun, H.; Wu, K.; Fan, S.; Ye, Z.; et al. Fast Multi-View 3D reconstruction of seedlings based on automatic viewpoint planning. Comput. Electron. Agric. 2024, 218, 108708. [Google Scholar] [CrossRef]
  31. Yan, P.; Feng, Y.; Han, Q.; Wu, H.; Hu, Z.; Kang, S. Revolutionizing crop phenotyping: Enhanced UAV LiDAR flight parameter optimization for wide-narrow row cultivation. Remote Sens. Environ. 2025, 320, 114638. [Google Scholar] [CrossRef]
  32. Ma, X.; Zhu, K.; Guan, H.; Feng, J.; Yu, S.; Liu, G. Calculation Method for Phenotypic Traits Based on the 3D Reconstruction of Maize Canopies. Sensors 2019, 19, 1201. [Google Scholar] [CrossRef] [PubMed]
  33. Forero, M.G.; Murcia, H.F.; Mendez, D.; Betancourt-Lozano, J. LiDAR Platform for Acquisition of 3D Plant Phenotyping Database. Plants 2022, 11, 2199. [Google Scholar] [CrossRef] [PubMed]
  34. Saha, K.K.; Weltzien, C.; Zude-Sasse, M. Non-destructive Leaf Area Estimation of Tomato Using Mobile LiDAR Laser Scanner. In Proceedings of the 1st IEEE International Workshop on Metrology for the Agriculture and Forestry (IEEE MetroAgriFor), Trento-Bolzano, Italy, 3–5 November 2021; pp. 187–191. [Google Scholar] [CrossRef]
  35. Hu, F.; Lin, C.; Peng, J.; Wang, J.; Zhai, R. Rapeseed Leaf Estimation Methods at Field Scale by Using Terrestrial LiDAR Point Cloud. Agronomy 2022, 12, 2409. [Google Scholar] [CrossRef]
  36. Medic, T.; Manser, N.; Kirchgessner, N.; Roth, L. Towards Wheat Yield Estimation in Plant Breeding from Inhomogeneous LiDAR Point Clouds Using Stochastic Features. In Proceedings of the 5th International-Society-for-Photogrammetry-and-Remote-Sensing (ISPRS) Geospatial Week (GSW), Cairo, Egypt, 2–7 September 2023; pp. 741–747. [Google Scholar]
  37. Guo, Q.; Su, Y.; Hu, T.; Guan, H.; Jin, S.; Zhang, J.; Zhao, X.; Xu, K.; Wei, D.; Kelly, M.; et al. Lidar Boosts 3D Ecological Observations and Modelings: A Review and Perspective. IEEE Geosci. Remote Sens. Mag. 2021, 9, 232–257. [Google Scholar] [CrossRef]
  38. Murcia, H.F.; Tilaguy, S.; Ouazaa, S. Development of a Low-Cost System for 3D Orchard Mapping Integrating UGV and LiDAR. Plants 2021, 10, 2804. [Google Scholar] [CrossRef]
  39. Yan, W.; Guan, H.; Cao, L.; Yu, Y.; Li, C.; Lu, J. A Self-Adaptive Mean Shift Tree-Segmentation Method Using UAV LiDAR Data. Remote Sens. 2020, 12, 515. [Google Scholar] [CrossRef]
  40. Pan, Y.; Han, Y.; Wang, L.; Chen, J.; Meng, H.; Wang, G.; Zhang, Z.; Wang, S. 3D Reconstruction of Ground Crops Based on Airborne LiDAR Technology. IFAC-PapersOnLine 2019, 52, 35–40. [Google Scholar] [CrossRef]
  41. Young, S.N.; Kayacan, E.; Peschel, J.M. Design and field evaluation of a ground robot for high-throughput phenotyping of energy sorghum. Precis. Agric. 2019, 20, 697–722. [Google Scholar] [CrossRef]
  42. Yao, L.; van de Zedde, R.; Kowalchuk, G. Recent developments and potential of robotics in plant eco-phenotyping. Emerg. Top. Life Sci. 2021, 5, 289–300. [Google Scholar] [CrossRef]
  43. Huang, X.; Zheng, S.; Zhu, N. High-Throughput Legume Seed Phenotyping Using a Handheld 3D Laser Scanner. Remote Sens. 2022, 14, 431. [Google Scholar] [CrossRef]
  44. Malambo, L.; Popescu, S.C.; Horne, D.W.; Pugh, N.A.; Rooney, W.L. Automated detection and measurement of individual sorghum panicles using density-based clustering of terrestrial lidar data. ISPRS J. Photogramm. Remote Sens. 2019, 149, 1–13. [Google Scholar] [CrossRef]
  45. He, W.; Ye, Z.; Li, M.; Yan, Y.; Lu, W.; Xing, G. Extraction of soybean plant trait parameters based on SfM-MVS algorithm combined with GRNN. Front. Plant Sci. 2023, 14, 1181322. [Google Scholar] [CrossRef] [PubMed]
  46. Wu, S.; Wen, W.; Wang, Y.; Fan, J.; Wang, C.; Gou, W.; Guo, X. MVS-Pheno: A Portable and Low-Cost Phenotyping Platform for Maize Shoots Using Multiview Stereo 3D Reconstruction. Plant Phenomics 2020, 2020, 1848437. [Google Scholar] [CrossRef] [PubMed]
  47. Hasheminasab, S.M.; Zhou, T.; Habib, A. GNSS/INS-Assisted Structure from Motion Strategies for UAV-Based Imagery over Mechanized Agricultural Fields. Remote Sens. 2020, 12, 351. [Google Scholar] [CrossRef]
  48. Wu, S.; Wen, W.; Gou, W.; Lu, X.; Zhang, W.; Zheng, C.; Xiang, Z.; Chen, L.; Guo, X. A miniaturized phenotyping platform for individual plants using multi-view stereo 3D reconstruction. Front. Plant Sci. 2022, 13, 897746. [Google Scholar] [CrossRef]
  49. Sandhu, J.; Zhu, F.; Paul, P.; Gao, T.; Dhatt, B.K.; Ge, Y.; Staswick, P.; Yu, H.; Walia, H. PI-Plat: A high-resolution image-based 3D reconstruction method to estimate growth dynamics of rice inflorescence traits. Plant Methods 2019, 15, 162. [Google Scholar] [CrossRef]
  50. Schirrmann, M.; Hamdorf, A.; Garz, A.; Ustyuzhanin, A.; Dammer, K.-H. Estimating wheat biomass by combining image clustering with crop height. Comput. Electron. Agric. 2016, 121, 374–384. [Google Scholar] [CrossRef]
  51. Tolomelli, G.; Kothawade, G.S.; Chandel, A.K.; Manfrini, L.; Jacoby, P.; Khot, L.R. Aerial-RGB imagery based 3D canopy reconstruction and mapping of grapevines for precision management. In Proceedings of the IEEE International Workshop on Metrology for Agriculture and Forestry (MetroAgriFor), Perugia, Italy, 3–5 November 2022; pp. 203–207. [Google Scholar] [CrossRef]
  52. Matsuura, Y.; Zhang, H.; Nakao, K.; Qiong, C.; Firmansyah, I.; Kawai, S.; Yamaguchi, Y.; Maruyama, T.; Hayashi, H.; Nobuhara, H. High-precision plant height measurement by drone with RTK-GNSS and single camera for real-time processing. Sci. Rep. 2023, 13, 6329. [Google Scholar] [CrossRef]
  53. Yang, R.; He, Y.; Lu, X.; Zhao, Y.; Li, Y.; Yang, Y.; Kong, W.; Liu, F. 3D-based precise evaluation pipeline for maize ear rot using multi-view stereo reconstruction and point cloud semantic segmentation. Comput. Electron. Agric. 2024, 216, 108512. [Google Scholar] [CrossRef]
  54. Yang, L.; Zhang, L.; Dong, H.; Alelaiwi, A.; El Saddik, A. Evaluating and Improving the Depth Accuracy of Kinect for Windows v2. IEEE Sens. J. 2015, 15, 4275–4285. [Google Scholar] [CrossRef]
  55. Wasenmueller, O.; Stricker, D. Comparison of Kinect V1 and V2 Depth Images in Terms of Accuracy and Precision. In Proceedings of the 13th Asian Conference on Computer Vision (ACCV), Taipei, Taiwan, 20–24 November 2016; Volume 10117, pp. 34–45. [Google Scholar] [CrossRef]
  56. Guan, H.; Zhang, X.; Ma, X.; Zhuo, Z.; Deng, H. Recognition and phenotypic detection of maize stem and leaf at seedling stage based on 3D reconstruction technique. Opt. Laser Technol. 2025, 187, 112787. [Google Scholar] [CrossRef]
  57. Teng, X.; Zhou, G.; Wu, Y.; Huang, C.; Dong, W.; Xu, S. Three-Dimensional Reconstruction Method of Rapeseed Plants in the Whole Growth Period Using RGB-D Camera. Sensors 2021, 21, 4628. [Google Scholar] [CrossRef] [PubMed]
  58. Song, S.; Duan, J.; Yang, Z.; Zou, X.; Fu, L.; Ou, Z. A three-dimensional reconstruction algorithm for extracting parameters of the banana pseudo-stem. Optik 2019, 185, 486–496. [Google Scholar] [CrossRef]
  59. Jiang, Y.; Li, C.; Paterson, A.H.; Sun, S.; Xu, R.; Robertson, J. Quantitative Analysis of Cotton Canopy Size in Field Conditions Using a Consumer-Grade RGB-D Camera. Front. Plant Sci. 2018, 8, 2233. [Google Scholar] [CrossRef]
  60. Ma, X.; Zhu, K.; Guan, H.; Feng, J.; Yu, S.; Liu, G. High-Throughput Phenotyping Analysis of Potted Soybean Plants Using Colorized Depth Images Based on A Proximal Platform. Remote Sens. 2019, 11, 1085. [Google Scholar] [CrossRef]
  61. Moreno, H.; Rueda-Ayala, V.; Ribeiro, A.; Bengochea-Guevara, J.; Lopez, J.; Peteinatos, G.; Valero, C.; Andujar, D. Evaluation of Vineyard Cropping Systems Using on-Board RGB-Depth Perception. Sensors 2020, 20, 6912. [Google Scholar] [CrossRef]
  62. Gene-Mola, J.; Llorens, J.; Rosell-Polo, J.R.; Gregorio, E.; Arno, J.; Solanelles, F.; Martinez-Casasnovas, J.A.; Escola, A. Assessing the Performance of RGB-D Sensors for 3D Fruit Crop Canopy Characterization under Different Operating and Lighting Conditions. Sensors 2020, 20, 7072. [Google Scholar] [CrossRef]
  63. Esser, F.; Rosu, R.A.; Cornelißen, A.; Klingbeil, L.; Kuhlmann, H.; Behnke, S. Field Robot for High-Throughput and High-Resolution 3D Plant Phenotyping: Towards Efficient and Sustainable Crop Production. IEEE Robot. Autom. Mag. 2023, 30, 20–29. [Google Scholar] [CrossRef]
  64. Vázquez-Arellano, M.; Reiser, D.; Paraforos, D.S.; Garrido-Izard, M.; Burce, M.E.C.; Griepentrog, H.W. 3-D reconstruction of maize plants using a time-of-flight camera. Comput. Electron. Agric. 2018, 145, 235–247. [Google Scholar] [CrossRef]
  65. Ma, X.; Wei, B.; Guan, H.; Yu, S. A method of calculating phenotypic traits for soybean canopies based on three-dimensional point cloud. Ecol. Inform. 2022, 68, 101524. [Google Scholar] [CrossRef]
  66. Ma, X.; Wei, B.; Guan, H.; Cheng, Y.; Zhuo, Z. A method for calculating and simulating phenotype of soybean based on 3D reconstruction. Eur. J. Agron. 2024, 154, 127070. [Google Scholar] [CrossRef]
  67. Sun, G.; Wang, X. Three-Dimensional Point Cloud Reconstruction and Morphology Measurement Method for Greenhouse Plants Based on the Kinect Sensor Self-Calibration. Agronomy 2019, 9, 596. [Google Scholar] [CrossRef]
  68. Wang, F.; Ma, X.; Liu, M.; Wei, B. Three-Dimensional Reconstruction of Soybean Canopy Based on Multivision Technology for Calculation of Phenotypic Traits. Agronomy 2022, 12, 692. [Google Scholar] [CrossRef]
  69. Zhu, T.; Ma, X.; Guan, H.; Wu, X.; Wang, F.; Yang, C.; Jiang, Q. A calculation method of phenotypic traits based on three-dimensional reconstruction of tomato canopy. Comput. Electron. Agric. 2023, 204, 107515. [Google Scholar] [CrossRef]
  70. Song, P.; Li, Z.; Yang, M.; Shao, Y.; Pu, Z.; Yang, W.; Zhai, R. Dynamic detection of three-dimensional crop phenotypes based on a consumer-grade RGB-D camera. Front. Plant Sci. 2023, 14, 1097725. [Google Scholar] [CrossRef] [PubMed]
  71. Li, J.; Tang, L. Developing a low-cost 3D plant morphological traits characterization system. Comput. Electron. Agric. 2017, 143, 1–13. [Google Scholar] [CrossRef]
  72. Yang, D.; Yang, H.; Liu, D.; Wang, X. Research on automatic 3D reconstruction of plant phenotype based on Multi-View images. Comput. Electron. Agric. 2024, 220, 108866. [Google Scholar] [CrossRef]
  73. Xu, N.; Sun, G.; Bai, Y.; Zhou, X.; Cai, J.; Huang, Y. Global Reconstruction Method of Maize Population at Seedling Stage Based on Kinect Sensor. Agriculture 2023, 13, 348. [Google Scholar] [CrossRef]
  74. Tolgyessy, M.; Dekan, M.; Chovanec, L.; Hubinsky, P. Evaluation of the Azure Kinect and Its Comparison to Kinect V1 and Kinect V2. Sensors 2021, 21, 413. [Google Scholar] [CrossRef]
  75. Li, Y.; Liu, J.; Zhang, B.; Wang, Y.; Yao, J.; Zhang, X.; Fan, B.; Li, X.; Hai, Y.; Fan, X. Three-dimensional reconstruction and phenotype measurement of maize seedlings based on multi-view image sequences. Front. Plant Sci. 2022, 13, 974339. [Google Scholar] [CrossRef]
  76. Guan, H.; Liu, M.; Ma, X.; Yu, S. Three-Dimensional Reconstruction of Soybean Canopies Using Multisource Imaging for Phenotyping Analysis. Remote Sens. 2018, 10, 1206. [Google Scholar] [CrossRef]
  77. Qiao, G.; Zhang, Z.; Niu, B.; Han, S.; Yang, E. Plant stem and leaf segmentation and phenotypic parameter extraction using neural radiance fields and lightweight point cloud segmentation networks. Front. Plant Sci. 2025, 16, 1491170. [Google Scholar] [CrossRef]
  78. Chen, Y.; Xiao, K.; Gao, G.; Zhang, F. High-fidelity 3D reconstruction of peach orchards using a 3DGS-Ag model. Comput. Electron. Agric. 2025, 234, 110225. [Google Scholar] [CrossRef]
  79. Rusu, R.B.; Marton, Z.C.; Blodow, N.; Dolha, M.; Beetz, M. Towards 3D Point cloud based object maps for household environments. Robot. Auton. Syst. 2008, 56, 927–941. [Google Scholar] [CrossRef]
  80. Zhu, B.; Zhang, Y.; Sun, Y.; Shi, Y.; Ma, Y.; Guo, Y. Quantitative estimation of organ-scale phenotypic parameters of field crops through 3D modeling using extremely low altitude UAV images. Comput. Electron. Agric. 2023, 210, 107910. [Google Scholar] [CrossRef]
  81. Sun, Y.; Miao, L.; Zhao, Z.; Pan, T.; Wang, X.; Guo, Y.; Xin, D.; Chen, Q.; Zhu, R. An Efficient and Automated Image Preprocessing Using Semantic Segmentation for Improving the 3D Reconstruction of Soybean Plants at the Vegetative Stage. Agronomy 2023, 13, 2388. [Google Scholar] [CrossRef]
  82. Zhou, L.; Sun, G.; Li, Y.; Li, W.; Su, Z. Point cloud denoising review: From classical to deep learning-based approaches. Graph. Models 2022, 121, 101140. [Google Scholar] [CrossRef]
  83. Li, Z.; Pan, W.; Wang, S.; Tang, X.; Hu, H. A point cloud denoising network based on manifold in an unknown noisy environment. Infrared Phys. Technol. 2023, 132, 104735. [Google Scholar] [CrossRef]
  84. Zermas, D.; Morellas, V.; Mulla, D.; Papanikolopoulos, N. 3D model processing for high throughput phenotype extraction—The case of corn. Comput. Electron. Agric. 2020, 172, 105047. [Google Scholar] [CrossRef]
  85. Li, W.; Tang, B.; Hou, Z.; Wang, H.; Bing, Z.; Yang, Q.; Zheng, Y. Dynamic Slicing and Reconstruction Algorithm for Precise Canopy Volume Estimation in 3D Citrus Tree Point Clouds. Remote Sens. 2024, 16, 2142. [Google Scholar] [CrossRef]
  86. Chantrapornchai, C.; Srijan, P. On the 3D point clouds-palm and coconut trees data set extraction and their usages. BMC Res. Notes 2023, 16, 363. [Google Scholar] [CrossRef] [PubMed]
  87. Chen, L.; Yuan, Y.; Song, S. Hierarchical Denoising Method of Crop 3D Point Cloud Based on Multi-view Image Reconstruction. In Proceedings of the 11th IFIP WG 5.14 International Conference on Computer and Computing Technologies in Agriculture (CCTA), Jilin, China, 12–15 August 2017; Volume 545, pp. 416–427. [Google Scholar] [CrossRef]
  88. Yu, S.; Yan, X.; Jia, T.; Qiu, D.; Hu, D. Binocular structured light-based 3D reconstruction for morphological measurements of apples. Postharvest Biol. Technol. 2024, 213, 112952. [Google Scholar] [CrossRef]
  89. Rakotosaona, M.-J.; La Barbera, V.; Guerrero, P.; Mitra, N.J.; Ovsjanikov, M. POINTCLEANNET: Learning to Denoise and Remove Outliers from Dense Point Clouds. Comput. Graph. Forum 2020, 39, 185–203. [Google Scholar] [CrossRef]
  90. Wang, X.; Fan, X.; Zhao, D. PointFilterNet: A Filtering Network for Point Cloud Denoising. IEEE Trans. Circuits Syst. Video Technol. 2023, 33, 1276–1290. [Google Scholar] [CrossRef]
  91. Seppanen, A.; Ojala, R.; Tammi, K. 4DenoiseNet: Adverse Weather Denoising from Adjacent Point Clouds. IEEE Robot. Autom. Lett. 2023, 8, 456–463. [Google Scholar] [CrossRef]
  92. Wang, X.; Cui, W.; Xiong, R.; Fan, X.; Zhao, D. FCNet: Learning Noise-Free Features for Point Cloud Denoising. IEEE Trans. Circuits Syst. Video Technol. 2023, 33, 6288–6301. [Google Scholar] [CrossRef]
  93. Sheng, H.; Li, Y. Denoising point clouds with fewer learnable parameters. Comput.-Aided Des. 2024, 172, 103708. [Google Scholar] [CrossRef]
  94. Mao, A.; Du, Z.; Wen, Y.-H.; Xuan, J.; Liu, Y.-J. PD-Flow: A Point Cloud Denoising Framework with Normalizing Flows. In Proceedings of the 17th European Conference on Computer Vision (ECCV), Tel Aviv, Israel, 23–27 October 2022; Volume 13663, pp. 398–415. [Google Scholar] [CrossRef]
  95. Duan, C.; Chen, S.; Kovacevic, J. 3D Point Cloud Denoising via Deep Neural Network Based Local Surface Estimation. In Proceedings of the 44th IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Brighton, UK, 12–17 May 2019; pp. 8553–8557. [Google Scholar] [CrossRef]
  96. Xin, B.; Sun, J.; Bartholomeus, H.; Kootstra, G. 3D data-augmentation methods for semantic segmentation of tomato plant parts. Front. Plant Sci. 2023, 14, 1045545. [Google Scholar] [CrossRef]
  97. Wang, W.; Liu, X.; Zhou, H.; Wei, L.; Deng, Z.; Murshed, M.; Lu, X. Noise4Denoise: Leveraging noise for unsupervised point cloud denoising. Comput. Vis. Media 2024, 10, 659–669. [Google Scholar] [CrossRef]
  98. Yang, H.; Wang, X.; Sun, G. Three-Dimensional Morphological Measurement Method for a Fruit Tree Canopy Based on Kinect Sensor Self-Calibration. Agronomy 2019, 9, 741. [Google Scholar] [CrossRef]
  99. Magistri, F.; Chebrolu, N.; Stachniss, C. Segmentation-Based 4D Registration of Plants Point Clouds for Phenotyping. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Las Vegas, NV, USA, 25–29 October 2020; pp. 2433–2439. [Google Scholar] [CrossRef]
  100. Chebrolu, N.; Magistri, F.; Labe, T.; Stachniss, C. Registration of spatio-temporal point clouds of plants for phenotyping. PLoS ONE 2021, 16, e0247243. [Google Scholar] [CrossRef]
  101. Wu, D.; Yu, L.; Ye, J.; Zhai, R.; Duan, L.; Liu, L.; Wu, N.; Geng, Z.; Fu, J.; Huang, C.; et al. Panicle-3D: A low-cost 3D-modeling method for rice panicles based on deep learning, shape from silhouette, and supervoxel clustering. Crop J. 2022, 10, 1386–1398. [Google Scholar] [CrossRef]
  102. Giang, T.T.H.; Ryoo, Y.-J. Pruning Points Detection of Sweet Pepper Plants Using 3D Point Clouds and Semantic Segmentation Neural Network. Sensors 2023, 23, 4040. [Google Scholar] [CrossRef]
  103. Yao, J.; Wang, W.; Fu, H.; Deng, Z.; Cui, G.; Wang, S.; Wang, D.; She, W.; Cao, X. Automated measurement of field crop phenotypic traits using UAV 3D point clouds and an improved PointNet+. Front. Plant Sci. 2025, 16, 1654232. [Google Scholar] [CrossRef] [PubMed]
  104. Zhou, J.; Fu, X.; Zhou, S.; Zhou, J.; Ye, H.; Nguyen, H.T. Automated segmentation of soybean plants from 3D point cloud using machine learning. Comput. Electron. Agric. 2019, 162, 143–153. [Google Scholar] [CrossRef]
  105. Kok, E.; Wang, X.; Chen, C. Obscured tree branches segmentation and 3D reconstruction using deep learning and geometrical constraints. Comput. Electron. Agric. 2023, 210, 107884. [Google Scholar] [CrossRef]
  106. Fasiolo, D.T.; Pichierri, A.; Sivilotti, P.; Scalera, L. An analysis of the effects of water regime on grapevine canopy status using a UAV and a mobile robot. Smart Agric. Technol. 2023, 6, 100344. [Google Scholar] [CrossRef]
  107. Rosell, J.R.; Sanz, R. A review of methods and applications of the geometric characterization of tree crops in agricultural activities. Comput. Electron. Agric. 2012, 81, 124–141. [Google Scholar] [CrossRef]
  108. Liu, F.; Hu, P.; Zheng, B.; Duan, T.; Zhu, B.; Guo, Y. A field-based high-throughput method for acquiring canopy architecture using unmanned aerial vehicle images. Agric. For. Meteorol. 2021, 296, 108231. [Google Scholar] [CrossRef]
  109. Valluvan, A.B.; Raj, R.; Pingale, R.; Jagarlapudi, A. Canopy height estimation using drone-based RGB images. Smart Agric. Technol. 2023, 4, 100145. [Google Scholar] [CrossRef]
  110. Kothawade, G.S.; Chandel, A.K.; Schrader, M.J.; Rathnayake, A.P.; Khot, L.R. High throughput canopy characterization of a commercial apple orchard using aerial RGB imagery. In Proceedings of the 1st IEEE International Workshop on Metrology for the Agriculture and Forestry (IEEE MetroAgriFor), Trento-Bolzano, Italy, 3–5 November 2021; pp. 177–181. [Google Scholar]
  111. Dharni, J.S.; Dhatt, B.K.; Paul, P.; Gao, T.; Awada, T.; Bacher, H.; Peleg, Z.; Staswick, P.; Hupp, J.; Yu, H.; et al. A non-destructive approach for measuring rice panicle-level photosynthetic responses using 3D-image reconstruction. Plant Methods 2022, 18, 126. [Google Scholar] [CrossRef] [PubMed]
  112. Malekabadi, A.J.; Khojastehpour, M. Optimization of stereo vision baseline and effects of canopy structure, pre-processing and imaging parameters for 3D reconstruction of trees. Mach. Vis. Appl. 2022, 33, 87. [Google Scholar] [CrossRef]
  113. Chen, J.; Jiao, Y.; Jin, F.; Qin, X.; Ning, Y.; Yang, M.; Zhan, Y. Plant Sam Gaussian Reconstruction (PSGR): A High-Precision and Accelerated Strategy for Plant 3D Reconstruction. Electronics 2025, 14, 2291. [Google Scholar] [CrossRef]
  114. Ando, R.; Ozasa, Y.; Guo, W. Robust Surface Reconstruction of Plant Leaves from 3D Point Clouds. Plant Phenomics 2021, 2021, 3184185. [Google Scholar] [CrossRef]
  115. Chen, H.; Liu, S.; Wang, C.; Wang, C.; Gong, K.; Li, Y.; Lan, Y. Point Cloud Completion of Plant Leaves under Occlusion Conditions Based on Deep Learning. Plant Phenomics 2023, 5, 0117. [Google Scholar] [CrossRef]
  116. Badhan, S.; Desai, K.; Dsilva, M.; Sonkusare, R.; Weakey, S. Real-Time Weed Detection using Machine Learning and Stereo-Vision. In Proceedings of the 6th International Conference for Convergence in Technology (I2CT), Maharashtra, India, 2–4 April 2021. [Google Scholar] [CrossRef]
  117. Gu, W.; Wen, W.; Wu, S.; Zheng, C.; Lu, X.; Chang, W.; Xiao, P.; Guo, X. 3D Reconstruction of Wheat Plants by Integrating Point Cloud Data and Virtual Design Optimization. Agriculture 2024, 14, 391. [Google Scholar] [CrossRef]
  118. Lu, L.; Ma, J.; Qu, S. Value of Virtual Reality Technology in Image Inspection and 3D Geometric Modeling. IEEE Access 2020, 8, 139070–139083. [Google Scholar] [CrossRef]
  119. Tross, M.C.; Gaillard, M.; Zweiner, M.; Miao, C.; Grove, R.J.; Li, B.; Benes, B.; Schnable, J.C. 3D reconstruction identifies loci linked to variation in angle of individual sorghum leaves. PeerJ 2021, 9, e12628. [Google Scholar] [CrossRef]
  120. Das Choudhury, S.; Maturu, S.; Samal, A.; Stoerger, V.; Awada, T. Leveraging Image Analysis to Compute 3D Plant Phenotypes Based on Voxel-Grid Plant Reconstruction. Front. Plant Sci. 2020, 11, 521431. [Google Scholar] [CrossRef]
  121. Kliman, R.; Huang, Y.; Zhao, Y.; Chen, Y. Toward an Automated System for Nondestructive Estimation of Plant Biomass. Plant Direct 2025, 9, e70043. [Google Scholar] [CrossRef]
  122. Cai, Z.; Jin, C.; Xu, J.; Yang, T. Measurement of Potato Volume with Laser Triangulation and Three-Dimensional Reconstruction. IEEE Access 2020, 8, 176565–176574. [Google Scholar] [CrossRef]
  123. Zapata, N.T.; Tsoulias, N.; Saha, K.K.; Zude-Sasse, M. Fourier analysis of LiDAR scanned 3D point cloud data for surface reconstruction and fruit size estimation. In Proceedings of the IEEE International Workshop on Metrology for Agriculture and Forestry (MetroAgriFor), Perugia, Italy, 3–5 November 2022; pp. 197–202. [Google Scholar] [CrossRef]
  124. Wang, F.; Li, F.; Mohan, V.; Dudley, R.; Gu, D.; Bryant, R. An unsupervised automatic measurement of wheat spike dimensions in dense 3D point clouds for field application. Biosyst. Eng. 2022, 223, 103–114. [Google Scholar] [CrossRef]
Figure 1. Identification of studies via the Web of Science (WOS) database.
Table 1. Commonly used LiDAR sensors and their applications.

| Sensors | Platforms | Objects | Measured Parameters | Bundled Software | References |
|---|---|---|---|---|---|
| ZEB1 | Handheld device, Unmanned ground vehicle | Blueberry bushes | Canopy size | GeoSLAM Cloud | [26] |
| Focus S70 | Tripod | Rapeseed leaves | Planting density, Number of plants | PCL version 1.11.1, Visual Studio 2022 | [35] |
| RigelScan Elite | Handheld device | Soybeans, Peas, Black beans, Red beans, Mung beans | 34 traits (shapes, edge feature, etc.) | Geomagic Studio | [43] |
| LMS4121R-13,000 | Slide rail | Maize seedlings | Plant height, Volume, Classification of stalks and leaves | SOPAS ET, CloudCompare | [33] |
| Focus X330 | Agricultural spray tractor | Sorghum panicles | Plant height, Panicle length, Panicle width | CloudCompare (version 2.7.0), FUSION/LDV (version 3.60) | [44] |
| Zoller + Froehlich Profiler 9012 A | Unmanned ground robot | Maizes, Soybeans, Potatoes, Wheat, Sugar beets | LAI, LAD, Plant height | CloudCompare | [8] |
| Focus 3D S120 TLS | Suspension system | Wheat | Yield | CloudCompare, FARO Scene V2021.1.0, Open3D | [36] |
| RPLIDAR A2 | Unmanned aerial vehicle | Wheat, American mints | Plant height | MATLAB 2018a | [40] |
Table 2. Phenotypic measurement platforms based on MVS.

| Names | Equipment | Objects | Measured Parameters | Bundled Software | References |
|---|---|---|---|---|---|
| MVS-Pheno V1 | Canon 77D cameras, FARO Focus3D X120, Fastrak 3D Digitizer | Higher crops (maize, etc.) | Plant height (R2 = 0.99), Leaf width (R2 = 0.87), Leaf area (R2 = 0.93) | Agisoft Photoscan Professional, PCL | [46] |
| MVS-Pheno V2 | Canon 77D cameras, Fastrak 3D Digitizer | Lower crops (wheat, etc.), Seedling plants, Plant organs | Plant height (R2 = 0.999), Leaf length (R2 = 0.995), Leaf width (R2 = 0.969) | OpenGL, PCL, OpenMVG, OpenMVS, ContextCapture v.4.4.9, CloudCompare | [48] |
| PI-Plat | Sony α6500 cameras, Epson Expression 12,000 XL | Rice panicles, Grain new branches, New leaves | Number of seeds, Color | MATLAB | [49] |
Table 3. Comparison of three reconstruction methods.

| Methods | Platforms | Accuracy | Get the Point Clouds Directly | Price | Scalability | Environmental Robustness | Scale of Analysis | References |
|---|---|---|---|---|---|---|---|---|
| LiDAR | Unmanned aerial vehicle, Unmanned ground vehicle, Fixed platform | mm | Yes | $10,000~$50,000 | Moderate; limited by platform weight and cost, but suitable for various scales. | Strong; performs well under different lighting and weather conditions. | Can be applied from plant to field scale depending on platform (handheld, UAV, UGV). | [8,27,43,48,49,50,51] |
| MVS | Unmanned aerial vehicle, Fixed platform | mm~cm | No | $1000~$10,000 | High; easy to deploy in greenhouse and indoor environments. | Weak; performance degrades under strong sunlight or reflective surfaces. | Mostly used for single-plant or small-scale canopy studies. | [21,45,52,53,72] |
| Depth camera + SDK | Unmanned ground vehicle, Fixed platform | mm~cm | Yes | $500~$5000 | High; adaptable for large-scale or field phenotyping. | Moderate; affected by lighting and weather variations. | Suitable for plot or field-level analysis. | [54,55,56,57,65,66,73] |
| Multi-Sensor Fusion System | Unmanned aerial vehicle, Unmanned ground vehicle, Fixed platform | mm~cm | Yes | >$50,000 | Moderate; complexity limits scalability but enhances versatility. | Strong; maintains stability across variable environments. | Applicable for plant to field scale, depending on sensor configuration. | [58,59,60] |

Pricing estimates are based on products available in the market and general market conditions; actual prices may vary.
Table 4. Algorithms and effects of point cloud denoising filtering schemes.

| Classifications | Algorithms | Objects | Effects | References |
|---|---|---|---|---|
| Obvious outliers | Elevation Values + RGB Attribute | Rapeseed plants, Cotton canopy | Set a fixed height; suitable for flat ground and flowerpots; enhance color features to improve segmentation accuracy | [35,59] |
| Obvious outliers | Adaptive Threshold | Blueberry shrubs | Adjust height-histogram gradient dynamically according to terrain conditions; suitable for complex field environments | [26] |
| Obvious outliers | Bounding Box | Soybean plants | Use box selection to remove large background interference and extract target objects | [65] |
| Obvious outliers | Cloth Simulation Filter | Apple trees | Extract ground model from point cloud data; separate crown point cloud data | [12] |
| Non-obvious outliers | Random Sample Consensus | Soybeans, Peas, Black beans, Red beans, Mung beans | Set a threshold to distinguish between internal and external points; multiple fitting to find the model | [43] |
| Non-obvious outliers | K-Nearest Neighbors + Plane Layered Fitting | Leaves of monocots and dicots | Calculate point cloud density to identify noise points; calculate distance from point to fitting plane to filter out noise points based on Interquartile Range | [80] |
| Non-obvious outliers | Density-Based Spatial Clustering of Applications with Noise + Fast Bilateral Filter | Rice and cucumber plants | Remove noise of different scales; adjust parameters according to shape and noise distribution of different crops | [87] |
| Non-obvious outliers | Statistical Outlier Removal + Density-Based Spatial Clustering of Applications with Noise | Shrub canopy | Remove isolated noise points and clusters of non-target objects (weeds) in turn | [26] |
| Non-obvious outliers | Gaussian Filter + Statistical Outlier Removal + Voxel Filter | Rapeseed leaves | Ensure smoothness and consistency of point cloud; optimize point cloud density and improve operating efficiency | [35] |
Table 5. Algorithms and accuracy of point cloud registration schemes.

| Algorithms | Objects | Accuracy | References |
|---|---|---|---|
| Calibration + Iterative Closest Point | Apple trees | Canopy height (ARE = 2.5%), Canopy width (R2 = 3.6%), Canopy thickness (R2 = 3.2%) | [98] |
| Refined Iterative Closest Point | Plants of early growth | Plant height (MAE = 0.392 cm), Leaf length (MAE = 0.2537 cm), Leaf width (MAE = 0.2676 cm), Leaf area (MAE = 0.957 cm2) | [18] |
| Principal Component Analysis + Iterative Closest Point | Soybean plants | Side view: Plant height (R2 = 0.9890), Green index (R2 = 0.6059); Top view: Plant height (R2 = 0.9936), Green index (R2 = 0.8864) | [76] |
| Random Sample Consensus + Iterative Closest Point | Soybean plants | Model matching accuracy > 0.9 | [81] |
| Intrinsic Shape Signatures-Coherent Point Drift + Iterative Closest Point | Soybean plants | RMSE = 0.0107~0.0128 cm; registration error and time are decreased | [66] |
| Hungarian method + Non-rigid Registration | Tomatoes, Maizes | Tomatoes: F1-score = 97.73, Maizes: F1-score = 98.76 | [99] |
| Hidden Markov Model + Non-rigid Registration | Tomatoes, Maizes | Tomatoes: AP = 89%, Maizes: AP = 97% | [100] |

AP: Average Precision; ARE: Absolute Relative Error; RMSE: Root Mean Square Error; R2: Coefficient of Determination; MAE: Mean Absolute Error.
Table 6. Key accuracy metrics of representative stem–leaf segmentation and reconstruction models.

| Model/Method | Crop/Target | Performance (Metric & Value) | Reference |
|---|---|---|---|
| Geometric Contraction + MST + DBSCAN | Cotton (stalk & node extraction) | R2 = 0.94; MAPE = 5.1% | [16] |
| Random Interception Node + RANSAC–Kalman | Maize (stem & leaf structure) | AP (LAI) = 92.5%; Height = 89.2%; Leaf length = 74.8% | [84] |
| U-Net | Sweet pepper (pruning node detection) | Mean error = 4.1–6.2 mm | [102] |
| DeepLabv3+ | Soybean (leaf segmentation) | IoU = 0.92–0.95 | [103] |
| U-Net++ | Maize (organ segmentation) | Overall accuracy = 0.90–0.94 | [104] |
| PF-Net | Chinese flowering cabbage (leaf completion) | RMSE = 6.79 cm2 | [105] |
Table 7. Algorithms and effects of segmentation and reconstruction on different plant parts.

| Classifications | Algorithms | Objects | Effects | Results | References |
|---|---|---|---|---|---|
| Canopy | Euclidean Clustering + Voxel Profiling | Apple tree canopy | Separate trunk/branch point clouds; voxelize canopy for calculating LAD/LAI to estimate pesticide deposition | Pesticide deposition model: R2 = 0.92 | [12] |
| Canopy | Poisson Surface Reconstruction | Maize canopy | Generate smooth polygonal meshes; analyze vertices to estimate canopy height | Canopy height: R2 = 0.86 | [109] |
| Canopy | Spatial analysis + Convex/Concave Hull | Cotton canopy | Use position info to distinguish weeds; detect canopy boundary to estimate height/volume | Canopy height: R2 = 0.87, RMSE = 0.04 m; Canopy volume: R2 = 0.87 | [59] |
| Canopy | Density-Based Spatial Clustering of Applications with Noise + Convex Hull | Grape canopy | Recognize and segment canopy; create convex polygon to estimate volume | Canopy volume: r = 0.64 | [51] |
| Canopy | Dynamic Slicing and Reconstruction | Citrus tree canopy | Reconstruct canopy shape; calculate volume by dynamic slicing and iterative optimization | Canopy volume: R2 = 0.794 | [85] |
| Stem and Leaf | Geometric Contraction + Topological Thinning + Minimum Spanning Tree + Density-Based Spatial Clustering of Applications with Noise | Cotton plants | Perform 3D skeleton extraction; detect main stalk and node in turn; extract nodes and stalk length | Main stalk length: R2 = 0.94, MAPE = 4.3%; Node counting: R2 = 0.70, MAPE = 5.1% | [16] |
| Stem and Leaf | Random Interception Node + Skeletonization + Random Sample Consensus + Kalman Filter | Maize plants | Extract topological structure; extract stems and leaves after skeletonization | LAI: AP = 92.5%; Plant height: AP = 89.2%; Leaf length: AP = 74.8% | [84] |
| Stem and Leaf | Distance-field-based Segmentation Pipeline + Median Normalized-Vectors Growth Segmentation | Soybean plants | Segment the stems and leaves by distance field segmentation method; extract stems by region growing method | OA = 0.7619~0.9064 | [66] |
| Stem and Leaf | Semantic Segmentation Neural Network | Sweet pepper plants | Combine 3D point cloud and semantic segmentation to detect stems/leaves and reconstruct their 3D structures; use semantic labels to mark pruning nodes | Average error between trim points: 4.1~6.2 mm | [102] |
| Stem and Leaf | Leaf Shape Decomposition | Soybean and sugar beet leaves | Decompose leaf shape into flat and twisted parts to improve reconstruction robustness | No obvious distortion, protrusion or missing parts after reconstruction: R2 > 0.91 | [114] |
| Stem and Leaf | PF-Net + Delaunay 2.5D Triangulation | Chinese flowering cabbages | Complete point cloud using PF-Net; reconstruct surface using triangulation | RMSE = 6.79 cm2, AvRE = 8.82% | [115] |
| Stem and Leaf | Virtual Design Technology | Wheat plants | Design 3D models; obtain optimal model by iterative modification | Height: R2 = 0.80, NRMSE = 0.10; Crown width: R2 = 0.73, NRMSE = 0.12; Leaf area: R2 = 0.90, NRMSE = 0.08 | [117] |
| Stem and Leaf | Voxel Carving + Skeletonization + Machine Learning Classifier | Sorghum plants | Reconstruct 3D structure; identify/classify leaves using machine learning classifier | Small-scale experiment: r = 0.98; Single leaf: r = 0.86 | [119] |
| Stem and Leaf | 3DPhenoMV + Voxel Overlapping Consistency Check + Clustering | Maize plants | Separate stems, leaves, and leaf clusters using voxel-based mesh reconstruction and clustering | 10 side views to reconstruct the 3D voxel grid of a single maize plant: Time = 3.5 min | [120] |
| Fruit | Euclidean Clustering Segmentation + Principal Component Analysis + Symmetrical 3D Reconstruction + Poisson Surface Reconstruction | Soybeans, Peas, Black beans, Red beans, Mung beans | Segment and extract seeds using clustering; complete 3D shape model using symmetry | MAE: sub-millimeter; MRE < 3%; R2 > 0.9983 | [43] |
| Fruit | α-shape Algorithm + Fourier Series Reference Shape Generation | Apples | Reconstruct surface; estimate fruit size using Fourier analysis-based method | RMSE = 5.85 mm | [123] |
| Fruit | Point Cloud Slicing and B-spline Curve Fitting + Local Vertex Search and Interpolation + α-shape Algorithm | Potatoes | Use the fitting method to reduce point cloud dispersion in the interpolation area; repair missing areas at the top and bottom; generate 3D model | Volume: AvRE = −0.08%, MaxRE = 2.17%, MinRE = 0.03%; Mass: AvRE = 0.48%, MaxRE = 2.92%, MinRE = 0.12% | [122] |
| Fruit | Adaptive K-means Based on Dynamic Perspective + Random Sample Consensus | Wheat spikes | Adjust k-means parameters dynamically; fit spike shape | Number: AvRE = 16.27%; Height: AvRE = 5.24%; Width: AvRE = 12.38% | [124] |
| Fruit | SegNet + Ray Tracing + Supervoxel Clustering + Shape from Silhouette | Rice panicles | Process the 2D and 3D segmentation in turn to improve complex plant structures non-destructively | Average IoU = 0.95; Time cost: about 26 min per plant | [101] |

AP: Average Precision; AvRE: Average Relative Error; IoU: Intersection over Union; MAE: Mean Absolute Error; MAPE: Mean Absolute Percentage Error; MRE: Mean Relative Error; MaxRE: Maximum Relative Error; MinRE: Minimum Relative Error; NRMSE: Normalized Root Mean Square Error; OA: Overall Accuracy; RMSE: Root Mean Square Error; R2: Coefficient of Determination; r: Correlation Coefficient.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
