Article

Winter Wheat Canopy Height Estimation Based on the Fusion of LiDAR and Multispectral Data

1 College of Agricultural Equipment Engineering, Henan University of Science and Technology, Luoyang 471023, China
2 Longmen Laboratory, Luoyang 471000, China
3 Luoyang Academy of Agriculture and Forestry Sciences, Luoyang 471022, China
4 Department of Bioproducts and Biosystems Engineering, University of Minnesota, Minneapolis, MN 55455-0213, USA
5 Beijing Academy of Agriculture and Forestry Sciences, Beijing 100097, China
6 Luoyang Tractor Research Institute Co., Ltd., Luoyang 471039, China
* Author to whom correspondence should be addressed.
Agronomy 2025, 15(5), 1094; https://doi.org/10.3390/agronomy15051094
Submission received: 6 March 2025 / Revised: 14 April 2025 / Accepted: 28 April 2025 / Published: 29 April 2025
(This article belongs to the Collection Machine Learning in Digital Agriculture)

Abstract
Wheat canopy height is an important parameter for monitoring growth status. Accurate canopy height prediction can improve field management efficiency and help optimize fertilization and irrigation. Changes in the growth characteristics of wheat across growth stages alter the canopy structure, degrading the quality of the LiDAR point cloud (e.g., lower density, more noise points). Multispectral data can capture these canopy changes and provide additional information about the growth status of wheat. Therefore, a method is proposed that fuses LiDAR point cloud features and multispectral feature parameters to estimate the canopy height of winter wheat. Low-altitude unmanned aerial systems (UASs) equipped with LiDAR and multispectral cameras were used to collect point cloud and multispectral data from experimental winter wheat fields during three key growth stages: green-up (GUS), jointing (JS), and booting (BS). Analysis of variance, the variance inflation factor, and Pearson correlation analysis were employed to extract point cloud features and multispectral feature parameters significantly correlated with canopy height. Four wheat canopy height estimation models were constructed based on the Optuna-optimized RF (OP-RF), Elastic Net regression, Extreme Gradient Boosting, and Support Vector Regression models. The model training results showed that the OP-RF model provided the best performance across all three growth stages. The coefficient of determination values were 0.921, 0.936, and 0.842 at the GUS, JS, and BS, respectively; the root mean square error values were 0.009 m, 0.016 m, and 0.015 m; and the mean absolute error values were 0.006 m, 0.011 m, and 0.011 m. Moreover, fusing point cloud features with multispectral feature parameters produced better estimates than using either type of feature parameter alone.
These results meet the requirements for canopy height prediction and demonstrate that the fusion of point cloud features and multispectral parameters can improve the accuracy of crop canopy height monitoring. The approach offers a valuable method for the remote sensing of phenotypic information in short, densely planted crops and provides important data support for crop growth assessment and field management.

1. Introduction

Wheat is a major global cereal crop, and monitoring wheat growth is an important basis for precision agriculture [1,2]. Canopy height, as an important indicator of growth status, not only reflects the growth rate of crops but is also crucial for developing agricultural management strategies, such as fertilization and irrigation, as well as for preventing lodging [3,4,5]. In addition, canopy height not only reflects crop health and growth but also serves as an indirect indicator for yield estimation [6,7]. For example, Gao et al. confirmed the correlation between wheat height and yield [8]. Therefore, accurate monitoring of wheat canopy height is important for optimizing agricultural management and assessing yield potential. Although traditional ground-based measurement methods are highly accurate in estimating wheat canopy height, they are laborious and inefficient and can introduce subjective errors that affect data accuracy [9,10].
In recent years, remote sensing has become an effective tool for monitoring crop canopy height. In particular, point cloud data from light detection and ranging (LiDAR) have been widely used to estimate structural parameters such as vegetation height [11,12,13]. LiDAR is an active sensing technology that captures the three-dimensional structure of objects using lasers and is employed to track crop growth [14,15]. Moreover, LiDAR offers many advantages over passive sensing technologies, including independence from light conditions [16] and the ability to quickly measure crop canopy height and structure [4]. It has been shown that LiDAR can provide highly accurate vegetation height data [17], which is valuable for crop growth monitoring. However, LiDAR data also face some challenges; for example, Fareed et al. [18] utilized multi-temporal LiDAR for crop height estimation and found that the R2 values for the height estimation of tall crops, such as corn, ranged from 0.83 to 1, while those for short crops, such as soybeans, ranged from 0.59 to 0.75 across different phenological stages. Point cloud data are thus more suitable for estimating the height of tall crops, while the estimation accuracy for short crops is lower. Since wheat is a short, densely planted crop, shading and overlap between plants cause the collected LiDAR data to contain invalid or noisy points, which degrades the quality of the point cloud data and thus reduces the accuracy of height estimations.
Meanwhile, spectral remote sensing techniques have also been applied to crop height estimation. For example, Li et al. [19], using a UAS multispectral platform, found that vegetation indices calculated from different spectral bands can be used to estimate crop height. Xie et al. [20] found that vegetation indices are advantageous for estimating crop heights in areas with uneven topography. In addition, some researchers have attempted to fuse point cloud and spectral data for plant height estimation [21]. For example, when Sankey et al. [22] fused LiDAR and hyperspectral data to estimate forest height, they found that data fusion was significantly more effective than using a single data type and could more accurately reflect the structural characteristics of forests. Luo et al. [23] fused hyperspectral imagery and UAV LiDAR data to estimate corn height and found that combining the two data sources improved the prediction accuracy by 7.2% compared to using a single source. Fusing LiDAR point cloud and spectral data to estimate height therefore yields better accuracy than using point cloud or spectral data alone. However, the fusion of point cloud and spectral data for height estimation has so far focused mainly on tall-canopy forests and sparsely planted crops. Wheat, in contrast, has a complex canopy structure because it is short and densely planted with overlapping leaf blades. Although Fareed et al. [18] analyzed the height estimation of short crops such as soybeans, their study used only point cloud data. The feasibility of the fusion method for estimating wheat canopy height therefore needs further investigation.
To address these issues, this study fuses UAS point cloud and multispectral data. By combining spatial structure information with spectral response features, we explore the feasibility and advantages of multi-source data fusion for wheat canopy height estimation. Point cloud and multispectral data of winter wheat at different growth stages were used to construct a canopy height estimation model, and the correlations between point cloud and multispectral parameters and canopy height were analyzed. The specific aims of this study were as follows: (1) to analyze the correlations between different point cloud and multispectral feature parameters and canopy height and to screen out the feature parameters suitable for estimating wheat canopy height; (2) to establish a wheat canopy height estimation model that fuses point cloud features and multispectral feature parameters to address the low accuracy of height estimation based on point cloud or multispectral feature parameters alone.

2. Materials and Methods

2.1. Experimental Site

This experiment was conducted in a wheat experimental field at the Luoyang Academy of Agriculture and Forestry in Luoyang City, Henan Province, China (34°39′0.34″ N, 112°28′36.34″ E) (Figure 1). The total area of the experimental field was approximately 371 m2. A total of 28 experimental plots were selected, comprising 16 irrigated plots and 12 rainfed plots. The dimensions of a single plot were 7 m × 1.4 m (length × width). The wheat varieties used were Luohan 7 and Luohan 22, and the planting densities were 120,000, 140,000, 160,000, and 180,000 plants/ha. During data collection, each experimental plot was delineated into five subplots along its length, each measuring 1.4 m × 1.4 m after boundary delineation, for a total of 140 subplots.

2.2. Data Acquisition

Canopy height data were collected for three key growth stages of wheat: the green-up stage (GUS) on 2 March 2024, the jointing stage (JS) on 29 March 2024, and the booting stage (BS) on 11 April 2024. The UAS was used to carry LiDAR and multispectral equipment for data acquisition. The LiDAR point cloud data were acquired using a LiDAR surveying system (model: AA10, Shanghai Huace Navigation Technology Ltd., Shanghai, China). The multispectral data were acquired using a Phantom 4 Multispectral UAS (DJI Technology Co., Ltd., Shenzhen, China). The specific parameters of the UAS are shown in Table 1. Data collection was scheduled between 11:00 a.m. and 2:00 p.m. under clear weather conditions and with ground wind speeds less than 5 m/s. To ensure the efficient coverage of large areas, reduce flight deviations, and maintain the uniformity and completeness of data collection, an “S”-shaped flight path was adopted (Figure 1).
Once the remote sensing data collection of the wheat canopy was completed, a tape measure was used to measure the height of the wheat canopy in the experimental subplots. Samples were randomly selected from each experimental subplot, and three replicate measurements were taken for each sample. The average of the three measurements was used as the height measurement for that sample, and the average of all measurements within each experimental subplot was used as the canopy height for that subplot. These values were then used to calculate the basic statistics, including the maximum, minimum, mean, and standard deviation of crop heights (Table 2). At the JS, the standard deviation range between the irrigated and rainfed subplots was relatively large, indicating a higher degree of data dispersion during this period.

2.3. Data Processing

The overall workflow of this study is shown in Figure 2. The workflow contains four main steps: data acquisition, data processing, feature extraction, and model construction. Details of UAS data acquisition are provided in Section 2.2, and data processing is described in Section 2.3. The subsequent sections describe and analyze the feature extraction and the model construction and evaluation.
First, both the acquired UAS LiDAR point cloud and multispectral data were converted to the CGCS2000 coordinate system. During preprocessing, Copre2 software (Version 2.7.1, Shanghai Huace Navigation Technology Ltd., Shanghai, China), which supports the laser aerial survey system, was used to perform Position and Orientation System (POS) solving and obtain the LiDAR point cloud data. The acquired multispectral images were processed (including radiometric correction) using DJI Terra software (Version 3.7.6, DJI Technology Co., Ltd., Shenzhen, China) to generate multispectral orthophotos.
The point cloud data and multispectral orthophotos were then aligned. The alignment process used white markers with known ground coordinates, common to both datasets, as ground control points (GCPs) (Figure 1) and employed DJI Terra software to align them with the corresponding positions in the visible images. Next, the inter-image alignment function of ENVI software (Version 5.3.1, Exelis Visual Information Solutions, Inc., Broomfield, CO, USA) was used to perform spatial alignment between the aligned visible image and the multispectral image, completing the alignment between the point cloud and the multispectral image. The accuracy and reliability of the alignment were verified by evaluating the alignment error between the image control points and the images, ensuring the spatial consistency of the point cloud data and the multispectral images in the same coordinate system.

2.3.1. Point Cloud Data Processing

To address the mutual occlusion of crop canopies between experimental plots, which degrades the accuracy of plot segmentation, this study performed angle correction, ground point filtering, and plot segmentation and boundary delineation on the experimental field point cloud data.
Angle correction. Due to the geographic location of the experimental field, the acquired remote sensing data deviated from the true north–south axis, which hindered the subsequent delineation of experimental fields and plots. Therefore, the iterative linear regression rotation method was used to perform the angular correction. To reduce the influence of noisy point cloud data on the accuracy of the angular correction, a statistical filtering method was applied to remove outliers. Subsequently, linear regression analysis was conducted using the x and y coordinates to obtain the best-fit curve, which was then used to calculate the inclination angle of the curve relative to the X-axis. A series of iterative processes was performed to ensure the accuracy of the angular measurement and to align the north–south axis (the experimental field was rotated by an angle of 6.389 degrees) (Figure 3a).
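This iterative rotation can be sketched in a few lines of NumPy (a minimal sketch under simplifying assumptions: the statistical outlier filtering is omitted, and the function name and iteration count are illustrative):

```python
import numpy as np

def correct_rotation(points, n_iters=5):
    """Iteratively estimate the field's inclination from a least-squares
    line fit on (x, y) and rotate the point cloud so the rows align with
    the coordinate axes. Returns the rotated points and the total angle
    removed, in degrees."""
    pts = points.copy()
    total_angle = 0.0
    for _ in range(n_iters):
        # Fit y = a*x + b and take the inclination of the fitted line.
        a, _b = np.polyfit(pts[:, 0], pts[:, 1], 1)
        angle = np.arctan(a)
        total_angle += angle
        # Rotate by -angle to flatten the fitted line onto the x-axis.
        c, s = np.cos(-angle), np.sin(-angle)
        rot = np.array([[c, -s], [s, c]])
        pts[:, :2] = pts[:, :2] @ rot.T
    return pts, np.degrees(total_angle)
```

On a perfectly linear row the first iteration removes essentially all of the tilt; subsequent iterations refine the angle when noise or curvature is present.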
Ground point filtering. Based on the spectral reflectance properties of the red, green, and blue color channels in the point cloud, the RGB Vegetation Index is calculated (as shown in Formula (1)), and the ground points are filtered out using the thresholding method. Through the actual observation and analysis of different vegetation types, combined with the descriptions in available references, the vegetation index threshold for the experimental field was set to 0.12 (Figure 3a).
Vegetation Index = Green − (Red + Blue)/2        (1)
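Formula (1) and the threshold filtering might be implemented as follows (a sketch; the column layout of the colored point cloud and the function name are assumptions, with color values normalized to [0, 1] and the threshold 0.12 taken from the text):

```python
import numpy as np

def filter_ground(points_rgb, threshold=0.12):
    """Split a colored point cloud into vegetation and ground points
    using the RGB vegetation index of Formula (1).
    points_rgb: (N, 6) array of x, y, z, r, g, b with colors in [0, 1]."""
    r, g, b = points_rgb[:, 3], points_rgb[:, 4], points_rgb[:, 5]
    vi = g - (r + b) / 2.0            # Formula (1)
    veg_mask = vi > threshold          # points above the threshold count as vegetation
    return points_rgb[veg_mask], points_rgb[~veg_mask]
```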
Plot segmentation and boundary delineation. To reduce the interference of plant morphological differences on the partitioning results during wheat growth, the plots were partitioned and delineated at the BS when plant morphology is more stable. The segmentation process is displayed in Figure 3b. Firstly, the K-means algorithm was employed to perform initial clustering on the point cloud data, dividing the experimental field into four sections. Then, point cloud upper- and lower-layer segmentation was conducted using the height information of each experimental section as a threshold, preserving the high-canopy characteristics of vegetation. Next, the DBSCAN algorithm was utilized to perform secondary clustering on the point cloud data of high-canopy vegetation to achieve a more accurate boundary delineation of experimental plots. Finally, boundary boxes of the clustered vegetation point clouds were drawn to obtain the contour point cloud (Figure 3c).
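The two-stage clustering could look roughly like this with scikit-learn (a simplified sketch: the height threshold is taken as a per-section quantile, the bounding-box step is omitted, and the parameter values are illustrative, not the study's):

```python
import numpy as np
from sklearn.cluster import KMeans, DBSCAN

def segment_plots(points, n_sections=4, height_quantile=0.5,
                  eps=0.3, min_samples=20):
    """Two-stage plot segmentation: K-means splits the field into sections
    on (x, y); within each section, points above a height threshold are
    re-clustered with DBSCAN to delineate plot boundaries. Returns a plot
    label per point (-1 for ground/noise points)."""
    sections = KMeans(n_clusters=n_sections, n_init=10,
                      random_state=0).fit_predict(points[:, :2])
    labels = np.full(len(points), -1)
    offset = 0
    for s in range(n_sections):
        idx = np.where(sections == s)[0]
        z = points[idx, 2]
        # Keep only the high-canopy layer of this section.
        upper = idx[z > np.quantile(z, height_quantile)]
        db = DBSCAN(eps=eps, min_samples=min_samples).fit(points[upper, :2])
        keep = db.labels_ >= 0
        labels[upper[keep]] = db.labels_[keep] + offset
        offset += (db.labels_.max() + 1) if keep.any() else 0
    return labels
```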
To address the poor plot segmentation caused by overlapping wheat canopy layers between experimental plots, adjacent plots were separated using terrain variation curves between them (Figure 3c). From the obtained contour point cloud, it was observed that the fifth and sixth experimental plots were not effectively delineated with clear boundaries. They were therefore segmented based on the relatively lower terrain between them, enabling effective separation of the adjacent plots. Each experimental plot was then evenly delineated into five subplots along its length to thoroughly analyze the growth status of wheat within the plots, dividing the entire experimental field into 140 independent subplots (Figure 3d).

2.3.2. Multispectral Image Processing

To ensure the consistency of the results for experimental field plot segmentation and the boundary delineation of LiDAR point cloud and multispectral data on a spatial scale, the processing results of the point cloud data were used to guide the segmentation and boundary delineation of plots in the multispectral data (Figure 4).
To remove the interference of non-crop backgrounds, such as bare soil, in the multispectral image data, the Normalized Difference Vegetation Index (NDVI) [24], the Enhanced Vegetation Index (EVI) [25], and the Normalized Difference Water Index (NDWI) [26] were calculated. A classification function was then defined to categorize pixels meeting at least two of the vegetation index threshold conditions as vegetation (vegetated areas were color-mapped to white and non-vegetated areas to black), as shown in Figure 4c. Threshold ranges for each vegetation index were initially set based on the references [27,28,29], as follows: 0.4 < NDVI ≤ 1, 0.4 < EVI ≤ 1, and −0.5 < NDWI ≤ 1. Based on the actual observation and analysis of different vegetation types, combined with previous studies, the NDVI and EVI thresholds for the test plots in this study were both set to 0.4, and the NDWI threshold was set to −0.5.
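The two-of-three classification rule can be sketched as follows (the index formulas shown, in particular the two-band EVI variant, are common textbook forms and may differ from the study's exact implementation; the thresholds are those stated above):

```python
import numpy as np

def vegetation_mask(nir, red, green, blue):
    """Classify pixels as vegetation when at least two of the three
    index conditions hold. Inputs are reflectance arrays in [0, 1]."""
    ndvi = (nir - red) / (nir + red + 1e-9)
    # Two-band EVI variant used here for brevity.
    evi = 2.5 * (nir - red) / (nir + 2.4 * red + 1.0)
    ndwi = (green - nir) / (green + nir + 1e-9)
    conds = ((ndvi > 0.4).astype(int)
             + (evi > 0.4).astype(int)
             + (ndwi > -0.5).astype(int))
    return conds >= 2   # vegetation where at least two conditions are met
```

The `blue` band is accepted for interface completeness but unused in these three index forms.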
The multispectral imagery was then delineated using the plot-split distance values from Section 2.3.1 to define boundaries. Each experimental plot was then equally delineated into five subplots along its length, dividing the experimental field covered by the multispectral imagery into 140 experimental subplots, as shown in Figure 4d.

2.4. Feature Parameter Extraction and Model Construction

2.4.1. Feature Parameter Extraction

To estimate the wheat canopy height accurately, a method was proposed that fuses LiDAR point cloud feature parameters with multispectral feature parameters. The point cloud feature parameters included roughness (Rough), mean planarity (MP), density (Den), mean height difference (MHD), skewness (Skew), and three-dimensional volume (3DVol) [30,31]. The specific calculation methods are shown in Table 3. The multispectral feature parameters are vegetation indices calculated from different bands, including the NDVI, the Soil-Adjusted Vegetation Index (SAVI), the Normalized Difference Red Edge Index (NDRE), the Near-Infrared Reflectance of Vegetation (NIRV), the Modified Triangular Vegetation Index 1 (MTVI1), the Modified Triangular Vegetation Index 2 (MTVI2), the Enhanced Normalized Green–Blue Vegetation Index (ENGBVI), the Wide Dynamic Range Vegetation Index (WDRVI), and the Modified Chlorophyll Absorption Ratio Index (MCARI), among other parameters [24,32] (Table 4).
To identify the point cloud and multispectral feature parameters significantly correlated with wheat canopy height, correlation analyses were conducted between the canopy height and feature parameters at different growth stages. The analysis proceeded in the following steps:
Data Standardization. To eliminate the influence of dimensional differences between variables, the z-score method was applied to standardize the data, as shown in Formula (2).
Z_i = (X_i − μ)/σ        (2)
where X_i is the original data point, μ is the mean of the dataset, and σ is the standard deviation of the dataset.
Analysis of variance. To investigate whether there was a significant relationship between each feature parameter and the canopy height, one-way analysis of variance was performed, with each feature parameter as the independent variable and the wheat canopy height as the dependent variable. The p-values were used for the evaluation, and features with a p-value less than or equal to 0.05 were considered significant features. The feature parameters significantly associated with the canopy height were also selected for a subsequent analysis.
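With a continuous predictor, this significance screening is equivalent to a univariate regression F-test, which scikit-learn exposes directly (a sketch; the study's exact ANOVA implementation is not specified, and the helper name is illustrative):

```python
import numpy as np
from sklearn.feature_selection import f_regression

def significant_features(X, y, names, alpha=0.05):
    """Univariate F-test of each standardized feature column of X against
    canopy height y; keep features whose p-value is <= alpha."""
    _f_stats, p_values = f_regression(X, y)
    return [name for name, p in zip(names, p_values) if p <= alpha]
```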
Variance inflation factor. To evaluate the degree of multicollinearity between the feature parameters, the variance inflation factor was introduced, with the point cloud and multispectral feature parameters as independent variables, to quantify the linear relationship between each independent variable and the others. The larger the variance inflation factor, the more severe the collinearity between variables. It is generally accepted that a variance inflation factor greater than 10 indicates severe multicollinearity, and eliminating or combining variables should be considered; values below 5 are generally considered to indicate weak, acceptable collinearity (Formula (3)).
VIF_i = 1/(1 − R_i^2)        (3)
where R_i^2 is the coefficient of determination obtained by linearly regressing the ith independent variable on the other independent variables.
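Formula (3) can be computed by regressing each feature on the remaining ones (a minimal sketch using scikit-learn; the function name is illustrative):

```python
import numpy as np
from sklearn.linear_model import LinearRegression

def vif(X):
    """Variance inflation factor for each column of X: regress the column
    on the remaining columns and apply VIF_i = 1 / (1 - R_i^2)."""
    X = np.asarray(X, dtype=float)
    out = []
    for i in range(X.shape[1]):
        others = np.delete(X, i, axis=1)
        r2 = LinearRegression().fit(others, X[:, i]).score(others, X[:, i])
        out.append(1.0 / (1.0 - r2 + 1e-12))  # small epsilon guards R^2 == 1
    return np.array(out)
```

Independent features give VIF values near 1; nearly duplicated features push the VIF far above the 10 threshold mentioned above.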
Pearson correlation analysis. The Pearson correlation coefficient was used to assess the correlation between feature parameters. By comparing the magnitudes of the correlation coefficients, the relationships between parameters were quantified, and strongly intercorrelated parameters were removed to obtain the key point cloud and multispectral feature parameters. As before, features with p-values less than or equal to 0.05 were treated as significant.

2.4.2. Modeling and Evaluation Indexes of Wheat Canopy Height Estimation

A dataset was constructed from the extracted point cloud and multispectral feature parameters together with the canopy height. The dataset was randomly divided into training and test sets in a 7:3 ratio, with 98 experimental subplots assigned to the training set and 42 to the test set. To comprehensively evaluate wheat canopy height estimation, the fused point cloud and multispectral feature data were modeled using machine learning. More than ten methods, including the Optuna-optimized RF (OP-RF) model, Elastic Net regression, Extreme Gradient Boosting (XGBoost), Support Vector Regression (SVR), Multi-Layer Perceptron regression, Partial Least Squares regression, and Decision Tree regression, were trained and evaluated. Considering prediction accuracy (OP-RF), efficiency (XGBoost), and the ability to capture linear and nonlinear relationships (Elastic Net and SVR), four models, namely, OP-RF, XGBoost, Elastic Net, and SVR, were finally selected for subsequent analysis.
During model construction, optimal random seed values (random_state) for each model were determined by systematically selecting and testing different seed values within the range of 0 to 500. At the same time, five-fold cross-validation with a grid search was used for parameter optimization to avoid overfitting or underfitting caused by unreasonable parameter settings. The optimized parameters for each model are presented in Table 5.
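The 70/30 split and five-fold grid search described above might look as follows (an illustrative sketch: the synthetic data, grid values, and seeds are placeholders, not the study's actual features or tuned parameters):

```python
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import GridSearchCV, train_test_split

# Placeholder data standing in for the 140-subplot feature matrix
# (three selected features per growth stage, canopy height as target).
X, y = make_regression(n_samples=140, n_features=3, noise=0.1, random_state=0)

# 7:3 train/test split.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=42)

# Five-fold cross-validated grid search over an illustrative parameter grid.
grid = GridSearchCV(
    RandomForestRegressor(random_state=42),
    param_grid={"n_estimators": [50, 100], "max_depth": [None, 10]},
    cv=5,
    scoring="r2",
)
grid.fit(X_tr, y_tr)
print(grid.best_params_, round(grid.best_estimator_.score(X_te, y_te), 3))
```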
Model training was performed on a computer equipped with an Intel (R) Core (TM) i5-12400F processor, an NVIDIA GeForce RTX 3060 GPU (NVIDIA Corporation, Santa Clara, CA, USA), and 32 GB of memory (Crucial Technology, Meridian, ID, USA). The experimental platform was set up on a Windows 11 operating system and a Python 3.8 programming environment, incorporating CUDA 11.8, cuDNN 8, and PyTorch 2.0.1. The coefficient of determination (R2), root mean square error (RMSE), and mean absolute error (MAE) were used as evaluation metrics, as shown in Formulas (4)–(6). The most suitable model for estimating the wheat canopy height was selected by comparing the R2, RMSE, and MAE values of the different models.
R^2 = 1 − [Σ_{i=1}^{n} (y_i − ŷ_i)^2] / [Σ_{i=1}^{n} (y_i − ȳ)^2]        (4)
RMSE = √[(1/n) Σ_{i=1}^{n} (y_i − ŷ_i)^2]        (5)
MAE = (1/n) Σ_{i=1}^{n} |y_i − ŷ_i|        (6)
where y_i is the ith measured value, ŷ_i is the ith predicted value, ȳ is the mean of the measured values, and n is the sample size.
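Formulas (4)–(6) translate directly into code (a minimal sketch; the helper name is illustrative):

```python
import numpy as np

def evaluate(y_true, y_pred):
    """Return R^2, RMSE, and MAE as defined in Formulas (4)-(6)."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    ss_res = np.sum((y_true - y_pred) ** 2)            # residual sum of squares
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)     # total sum of squares
    r2 = 1.0 - ss_res / ss_tot
    rmse = np.sqrt(np.mean((y_true - y_pred) ** 2))
    mae = np.mean(np.abs(y_true - y_pred))
    return r2, rmse, mae
```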

3. Results

3.1. Selection of Feature Parameters

3.1.1. Feature Parameters—Height Variance Analysis Results

The analysis of variance and variance inflation factor results between the canopy heights of the GUS, JS, and BS and the standardized feature parameters are shown in Table 6 and Table 7.
As shown in Table 6 and Table 7, the feature parameters showing significant correlations differed between growth stages. At the GUS, the p-values (≤0.001) of the analysis of variance between the point cloud feature parameters and canopy height indicated extremely significant correlations, as did those for the multispectral feature parameters NDVI, SAVI, MTVI1, MTVI2, and ENGBVI. However, the variance inflation factor values of 3DVol, NDVI, MTVI1, MTVI2, and ENGBVI were in the range of 5–10, suggesting multicollinearity between these features and the others. At the JS, the point cloud feature parameters showing extremely significant correlations with canopy height were MP and 3DVol, and the multispectral feature parameters were WDRVI and MCARI; the variance inflation factor values of these four parameters were in the range of 2–5, indicating weak collinearity. At the BS, the point cloud feature parameters showing extremely significant correlations with canopy height were MP, MHD, and Skew, and the multispectral feature parameters were SAVI, NDRE, MTVI1, MTVI2, ENGBVI, and WDRVI; the variance inflation factor values of MHD, Skew, MTVI1, and WDRVI were in the range of 2–5, indicating weak collinearity among these four features.

3.1.2. Pearson Correlation Coefficient Analysis Results Between Feature Parameters

The pairwise correlation analysis between feature parameters and the correlation analysis results between each parameter and canopy height are shown in Figure 5.
As shown in Figure 5a–c, the correlation analysis between the point cloud feature parameters and canopy height at different growth stages reveals that at the GUS, the two feature parameters with the highest correlation coefficients were Rough and MHD, with values of 0.72 and 0.71, respectively. At the JS, the two feature parameters with the highest correlation coefficients were MP and 3DVol, at 0.50 and 0.33, respectively. At the BS, the two feature parameters with the highest correlation coefficients were Skew and MHD, at −0.71 and 0.62, respectively. The point cloud feature parameters for each of these growth stages were statistically significant, and Table 7 shows that their variance inflation factor values were in the range of 2–5, indicating weak collinearity.
As shown in Figure 5d–f, the correlation analysis of the multispectral feature parameters with canopy height at different growth stages revealed that the highest Pearson correlation coefficient at the GUS was observed for SAVI (r = 0.37), at the JS for WDRVI (r = 0.45), and at the BS for MTVI1 (r = 0.40). All three multispectral parameters were statistically significant, and from Table 7, their variance inflation factor values were in the range of 4–5, indicating weak collinearity.
A comprehensive analysis of variance, the variance inflation factor, and the Pearson correlation coefficient showed that the feature parameters finally selected for the GUS were Rough, MHD, and SAVI; for the JS, MP, 3DVol, and WDRVI; and for the BS, Skew, MHD, and MTVI1, for subsequent analysis.

3.2. Canopy Height Model Fusing Point Cloud and Multispectral Feature Parameters

Based on the selected point cloud and multispectral feature parameters, a wheat canopy height estimation model was constructed by fusing the point cloud features and multispectral feature parameters. The optimized parameters for each model, based on 5-fold cross-validation, are shown in Table 8, and the fitting results of the machine learning model for the wheat canopy height are presented in Table 9.
Table 9 indicates the following:
The OP-RF model achieved the best height estimation results at all three growth stages, with the lowest RMSE and MAE values on the test set for each stage. This shows that the model had the smallest prediction errors and fit better than the other models. The fit was best at the JS, with an R2 of 0.950, an RMSE of 0.013 m, and an MAE of 0.009 m for the training set, and an R2 of 0.936, an RMSE of 0.016 m, and an MAE of 0.011 m for the test set.
The wheat canopy height estimation models constructed with Elastic Net and SVR showed lower overall prediction accuracy than the other models. The SVR model was most accurate at the JS, but its test set R2 was only 0.847, with an RMSE of 0.021 m and an MAE of 0.013 m, a difference in R2 of 0.089 from OP-RF. This indicates that the prediction accuracy of the Elastic Net and SVR models was significantly lower than that of the OP-RF model. Meanwhile, the XGBoost-based wheat canopy height model reached an R2 of 0.954 on the training set at the GUS, an improvement of 0.023 over OP-RF; however, its test set R2 was lower than that of OP-RF, with a difference of 0.016. Comparing the test set results at the JS and BS likewise showed that XGBoost performed below OP-RF. For example, at the BS, the test set R2, RMSE, and MAE of XGBoost were 0.743, 0.021 m, and 0.016 m, respectively, versus 0.842, 0.015 m, and 0.011 m for the OP-RF model. This suggests that XGBoost's fit was weaker than the OP-RF model's at all three stages.
Therefore, the OP-RF model was selected as it provided better performance in estimating the canopy height. The structure of the model is shown in Figure 6a, and the fitting results of the OP-RF model are presented in Figure 6b. When Optuna hyperparameter optimization was used for the GUS, JS, and BS, the number of trials (n_trials) for each growth stage was set to 30, and the number of decision trees (n_estimators) used was 64, 72, and 145, respectively.

3.3. Modeling of Wheat Canopy Height with a Single Class of Feature Parameters

When OP-RF, Elastic Net, XGBoost, and SVR were used to build the wheat canopy height estimation models, the OP-RF model performed well at all three growth stages of wheat (Table 9). Therefore, the OP-RF model was also used to construct estimation models based on a single class of feature parameters (Table 10).
As shown in Table 10, the canopy height models constructed from point cloud feature parameters alone outperformed those using only multispectral feature parameters at both the GUS and BS. In contrast, at the JS, the model using multispectral feature parameters improved the test-set R2 by 0.028 and reduced the RMSE and MAE by 0.003 m and 0.009 m, respectively, compared with using point cloud feature parameters alone, indicating that the multispectral feature parameters fit better than the point cloud feature parameters at this stage. As can be seen from Table 9 and Table 10, at every growth stage the models built on a single class of feature parameters fit less well than the model fusing point cloud features and multispectral feature parameters. Compared with using point cloud feature parameters alone, the fused model increased the R2 by 0.034, 0.244, and 0.001 on the training set and by 0.088, 0.241, and 0.074 on the test set at the GUS, JS, and BS, respectively. This indicates that the fused-feature estimation model outperformed the single-class models at all three growth stages.
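The single-source versus fused comparison amounts to training the same regressor on different feature subsets and scoring each on a held-out set with R2, RMSE, and MAE. A minimal sketch with synthetic stand-in data (the feature values, 70/30 split, and default model settings are assumptions, not the paper's measurements):

```python
# Illustrative single-source vs fused feature comparison using the reported
# metrics (R2, RMSE, MAE); all data here are synthetic stand-ins.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
pc = rng.random((100, 3))        # point cloud features (e.g., Rough, MHD, 3DVol)
ms = rng.random((100, 3))        # multispectral features (e.g., SAVI, WDRVI)
y = 0.5 + 0.1 * pc[:, 0] + 0.1 * ms[:, 0] + 0.01 * rng.random(100)

def evaluate(features, target):
    X_tr, X_te, y_tr, y_te = train_test_split(features, target,
                                              test_size=0.3, random_state=0)
    pred = RandomForestRegressor(random_state=0).fit(X_tr, y_tr).predict(X_te)
    return (r2_score(y_te, pred),
            mean_squared_error(y_te, pred) ** 0.5,   # RMSE
            mean_absolute_error(y_te, pred))         # MAE

results = {}
for name, feats in [("point cloud", pc), ("multispectral", ms),
                    ("fused", np.hstack([pc, ms]))]:
    results[name] = evaluate(feats, y)
    r2, rmse, mae = results[name]
    print(f"{name}: R2={r2:.3f} RMSE={rmse:.4f} m MAE={mae:.4f} m")
```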

3.4. Stability Analysis Based on the OP-RF Estimation Model

To verify the stability of the model, the best-performing model and growth stage were selected. From Table 9 and Table 10, the OP-RF model constructed by fusing point cloud features and multispectral feature parameters obtained the best estimation results at the JS. Therefore, this study ran 500 repeated tests of the OP-RF model built from the fused feature parameter data at the JS and obtained the distributions of the RMSE and MAE over the 500 test results (Figure 7).
As can be seen from Figure 7a,b, the mean values of RMSE and MAE for both the training and test sets were close to their medians. For the RMSE, the mean and median were 0.013 m on the training set and 0.016 m on the test set; for the MAE, they were 0.010 m on the training set and 0.013 m on the test set. The prediction error on the test set was thus slightly higher than on the training set, with a difference of 0.003 m for both RMSE and MAE, indicating that the model fit the training data well and performed stably overall. Although the means and medians coincided, the boxplots showed a slight right-skewness in the training-set RMSE and MAE distributions and a slight left-skewness in the test-set distributions. Such skewness indicates that, despite the relatively concentrated and stable error distributions in both sets, a certain level of asymmetry remained.
In summary, these results demonstrate good model performance in wheat canopy height estimation, with a concentrated and stable error distribution. The slight skewness was not significant, and the model performed consistently on both the training and test sets, further validating its stability and generalization ability.
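The repeated-evaluation stability check above can be sketched as random train/test splits run many times, collecting the test-set RMSE and MAE distributions. The data are synthetic stand-ins, the split scheme is an assumption, and the repetition count is reduced from 500 to 50 for brevity:

```python
# Sketch of the stability check: repeated random splits, collecting RMSE and
# MAE distributions (the paper uses 500 repetitions; 50 here for brevity).
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error, mean_squared_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
X = rng.random((100, 6))                        # JS fused feature stand-ins
y = 0.5 + 0.2 * X[:, 0] + 0.02 * rng.random(100)

rmse_test, mae_test = [], []
for seed in range(50):
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                              random_state=seed)
    pred = RandomForestRegressor(n_estimators=72,   # JS setting from Table 8
                                 random_state=seed).fit(X_tr, y_tr).predict(X_te)
    rmse_test.append(mean_squared_error(y_te, pred) ** 0.5)
    mae_test.append(mean_absolute_error(y_te, pred))

print(f"RMSE mean={np.mean(rmse_test):.4f} m, median={np.median(rmse_test):.4f} m")
print(f"MAE  mean={np.mean(mae_test):.4f} m, median={np.median(mae_test):.4f} m")
```

Comparing the mean and median of each distribution (and plotting boxplots of `rmse_test` and `mae_test`) reproduces the kind of skewness inspection reported for Figure 7.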

4. Discussion

This study verified the feasibility of fusing UAS point cloud features and multispectral feature parameters to estimate wheat canopy height, with the fused approach outperforming either single data source in accuracy. As an important indicator of crop growth and health, accurate canopy height information provides data support for field management and a methodological reference for subsequent yield-related studies. Compared with traditional manual measurement, the UAS method adopted in this study is high-throughput and non-contact, giving it good potential for improving operational efficiency and reducing manual intervention.
Canopy height, as a key crop growth parameter, has been the focus of numerous studies [33]. Dhami et al. [13] used point cloud data to estimate crop heights on wheat farms with a measurement accuracy of RMSE = 0.061 m. Li et al. [34] used point clouds to estimate the canopy height of wheat at different growth stages, with the highest R2 of the estimation results being 0.784. In contrast, after introducing multispectral information, the model in this study fusing point cloud features and multispectral feature parameters reached a highest R2 of 0.936 with an RMSE of only 0.016 m, reducing the RMSE by 0.045 m compared with Dhami et al. and improving the R2 by 0.152 compared with Li et al. This suggests that fusing multispectral features can compensate for the limited accuracy of point cloud estimation caused by the variation in wheat growth characteristics, thereby enhancing canopy height estimation accuracy. Current research on height estimation using point cloud and spectral fusion has focused on tall and sparse vegetation, such as corn and trees [22,35]. Wheat, in contrast, is a low and dense crop with small plant spacing, high growth density, and a complex canopy structure, and LiDAR point cloud data are affected by the growth characteristics of wheat at different growth stages, resulting in poor estimation accuracy when point cloud data are used alone (Table 10). Multispectral data, however, can reflect the growth characteristics of wheat and better capture its growth information, compensating for the deficiencies of the point cloud data and improving the canopy height estimation accuracy.
Especially for the JS, the canopy height model estimation results, which fused point cloud features and multispectral feature parameters, improved the R2 of the training and test sets by 0.244 and 0.241, respectively, compared to the estimation results using point cloud features alone.
Although this study provided an in-depth analysis of the three key growth stages of winter wheat, its scope was not comprehensive enough to fully capture the growth characteristics and changes across all stages. Expanding the data collection to include more growth stages of winter wheat is necessary. In addition, geographic location was a limiting factor, as the trial was conducted at only one study site. Future work could involve testing the feasibility of the current fusion data model across different geographic locations.

5. Conclusions

A method for estimating the canopy height by fusing point cloud features and multispectral feature parameters was proposed. A UAS equipped with LiDAR and a multispectral camera was used to acquire remote sensing data of wheat at different growth stages. The experimental field was processed using the vegetation index threshold method and the Height–Kmeans–DBSCAN clustering algorithm. Feature parameters were then extracted using an analysis of variance, the variance inflation factor, and Pearson’s correlation coefficient analysis to construct a canopy height estimation model that fused the point cloud features and multispectral feature parameters. The conclusions are as follows:
The point cloud feature parameters and multispectral feature parameters that were significantly correlated with canopy height were screened based on an analysis of variance, the variance inflation factor, and Pearson’s correlation coefficient at different growth stages. During the GUS, the feature parameters significantly correlated with canopy height included Rough, MHD, and SAVI. The feature parameters for the JS were MP, 3DVol, and WDRVI. For the BS, the feature parameters were Skew, MHD, and MTVI1.
A canopy height estimation model fusing point cloud features and multispectral feature parameters was constructed using OP-RF, Elastic Net, XGBoost, and SVR. The OP-RF-based canopy height model performed well at all three growth stages: the test-set R2 values at the GUS, JS, and BS were 0.921, 0.936, and 0.842; the RMSE values were 0.009 m, 0.016 m, and 0.015 m; and the MAE values were 0.006 m, 0.011 m, and 0.011 m, respectively. Compared with using point cloud feature parameters alone, fusing point cloud and multispectral feature parameters improved the test-set R2 by 0.088, 0.241, and 0.074, respectively; compared with using multispectral feature parameters alone, it improved the test-set R2 by 0.350, 0.213, and 0.380, respectively. This shows that the model fusing point cloud features and multispectral feature parameters achieves higher accuracy and stability in estimating the canopy height.
The results show that the method can quickly estimate the wheat canopy height with high accuracy, overcoming the accuracy limitations of any single class of feature parameters. Canopy height is an important indicator of crop growth: it provides basic data support for agricultural production and field management, serves as an indirect indicator of potential yield, and offers a methodological reference for subsequent yield-estimation studies. Given the significant differences in growth characteristics, spectral reflectance, and environmental response among crops, the study can be extended to collect data from other crops (e.g., soybean, peanut) to further evaluate the applicability and stability of the fused point cloud and multispectral model for height estimation across various crop types and diverse environments.

Author Contributions

Conceptualization, H.M., Y.L., C.Y., X.A. and H.C.; methodology, H.M., Y.L., H.C. and S.J.; software, Y.L., H.C. and S.J.; validation, H.M., Y.L. and S.J.; formal analysis, H.M., Y.L. and H.C.; investigation, Y.Z. and H.C.; resources, H.M., Y.Z., K.Z. and H.C.; data curation, H.M., Y.L. and Y.Z.; writing-original draft preparation, H.M., Y.L. and S.J.; writing-review and editing, H.M., Y.L., H.C., S.J., C.Y. and X.A.; supervision, Y.Z., C.Y., X.A. and K.Z.; project administration, H.M., H.C. and K.Z.; funding acquisition, H.M. and H.C. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Longmen Laboratory Major Projects (No. 231100220200), Henan Provincial Higher Education Technological Innovation Talent Support Program (No. 23HASTIT020), Henan Province Professional Degree Graduate High-Quality Teaching Case Project (No. YJS2023AL035), Major Science and Technology Project of Henan Province (No. 242102111188), National Natural Science Foundation of China (No. 32401696), and Youth Science and Technology Fund Project of China National Machinery Industry Corporation Ltd. (No. QNJJ-PY-2024-24).

Data Availability Statement

The original contributions presented in the research are included in the article; further inquiries can be directed to the corresponding author.

Conflicts of Interest

Author Kai Zhang was employed by the company Luoyang Tractor Research Institute Co., Ltd. The remaining authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

References

1. Watanabe, K.; Guo, W.; Arai, K.; Takanashi, H.; Kajiya-Kanegae, H.; Kobayashi, M.; Yano, K.; Tokunaga, T.; Fujiwara, T.; Tsutsumi, N.; et al. High-throughput phenotyping of sorghum plant height using an unmanned aerial vehicle and its application to genomic prediction modeling. Front. Plant Sci. 2017, 8, 421.
2. Singh, S.K.; Houx, J.H., III; Maw, M.J.; Fritschi, F.B. Assessment of growth, leaf N concentration and chlorophyll content of sweet sorghum using canopy reflectance. Field Crops Res. 2017, 209, 47–57.
3. Oehme, L.H.; Reineke, A.-J.; Weiß, T.M.; Würschum, T.; He, X.; Müller, J. Remote Sensing of Maize Plant Height at Different Growth Stages Using UAV-Based Digital Surface Models (DSM). Agronomy 2022, 12, 958.
4. Madec, S.; Baret, F.; De Solan, B.; Thomas, S.; Dutartre, D.; Jezequel, S.; Hemmerlé, M.; Colombeau, G.; Comar, A. High-throughput phenotyping of plant height: Comparing unmanned aerial vehicles and ground LiDAR estimates. Front. Plant Sci. 2017, 8, 2002.
5. Fei, S.; Hassan, M.A.; Ma, Y.; Shu, M.; Cheng, Q.; Li, Z.; Chen, Z.; Xiao, Y. Entropy weight ensemble framework for yield prediction of winter wheat under different water stress treatments using unmanned aerial vehicle-based multispectral and thermal data. Front. Plant Sci. 2021, 12, 730181.
6. Liu, T.; Wu, F.; Mou, N.; Zhu, S.; Yang, T.; Zhang, W.; Wang, H.; Wu, W.; Zhao, Y.; Sun, C.; et al. The estimation of wheat yield combined with UAV canopy spectral and volumetric data. Food Energy Secur. 2024, 13, e527.
7. Mustafa, G.; Liu, Y.; Khan, I.H.; Hussain, S.; Jiang, Y.; Liu, J.; Arshad, S.; Osman, R. Establishing a knowledge structure for yield prediction in cereal crops using unmanned aerial vehicles. Front. Plant Sci. 2024, 15, 1401246.
8. Gao, Z.; Wang, Y.; Tian, G.; Zhao, Y.; Li, C.; Cao, Q.; Han, R.; Shi, Z.; He, M. Plant height and its relationship with yield in wheat under different irrigation regime. Irrig. Sci. 2020, 38, 365–371.
9. Holman, F.H.; Riche, A.B.; Michalski, A.; Castle, M.; Wooster, M.J.; Hawkesford, M.J. High Throughput Field Phenotyping of Wheat Plant Height and Growth Rate in Field Plot Trials Using UAV Based Remote Sensing. Remote Sens. 2016, 8, 1031.
10. Sun, S.; Li, C.; Paterson, A.H. In-field high-throughput phenotyping of cotton plant height using LiDAR. Remote Sens. 2017, 9, 377.
11. Friedli, M.; Kirchgessner, N.; Grieder, C.; Liebisch, F.; Mannale, M.; Walter, A. Terrestrial 3D laser scanning to track the increase in canopy height of both monocot and dicot crop species under field conditions. Plant Methods 2016, 12, 9.
12. Ten Harkel, J.; Bartholomeus, H.; Kooistra, L. Biomass and Crop Height Estimation of Different Crops Using UAV-Based Lidar. Remote Sens. 2019, 12, 17.
13. Dhami, H.; Yu, K.; Xu, T.; Zhu, Q.; Dhakal, K.; Friel, J.; Li, S.; Tokekar, P. Crop Height and Plot Estimation for Phenotyping from Unmanned Aerial Vehicles using 3D LiDAR. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2020; pp. 2643–2649.
14. Debnath, S.; Paul, M.; Debnath, T. Applications of LiDAR in agriculture and future research directions. J. Imaging 2023, 9, 57.
15. Jimenez-Berni, J.A.; Deery, D.M.; Rozas-Larraondo, P.; Condon, A.T.G.; Rebetzke, G.J.; James, R.A.; Bovill, W.D.; Furbank, R.T.; Sirault, X.R. High throughput determination of plant height, ground cover, and above-ground biomass in wheat with LiDAR. Front. Plant Sci. 2018, 9, 237.
16. Nidamanuri, R.R.; Jayakumari, R.; Ramiya, A.M.; Astor, T.; Wachendorf, M.; Buerkert, A. High-resolution multispectral imagery and LiDAR point cloud fusion for the discrimination and biophysical characterisation of vegetable crops at different levels of nitrogen. Biosyst. Eng. 2022, 222, 177–195.
17. Vatandaslar, C.; Narin, O.G.; Abdikan, S. Retrieval of forest height information using spaceborne LiDAR data: A comparison of GEDI and ICESat-2 missions for Crimean pine (Pinus nigra) stands. Trees 2023, 37, 717–731.
18. Fareed, N.; Das, A.K.; Flores, J.P.; Mathew, J.J.; Mukaila, T.; Numata, I.; Janjua, U.U.R. UAS Quality Control and Crop Three-Dimensional Characterization Framework Using Multi-Temporal LiDAR Data. Remote Sens. 2024, 16, 699.
19. Li, Z.; Feng, X.; Li, J.; Wang, D.; Hong, W.; Qin, J.; Wang, A.; Ma, H.; Yao, Q.; Chen, S. Time Series Field Estimation of Rice Canopy Height Using an Unmanned Aerial Vehicle-Based RGB/Multispectral Platform. Agronomy 2024, 14, 883.
20. Xie, T.; Li, J.; Yang, C.; Jiang, Z.; Chen, Y.; Guo, L.; Zhang, J. Crop height estimation based on UAV images: Methods, errors, and strategies. Comput. Electron. Agric. 2021, 185, 106155.
21. Kahraman, S.; Bacher, R. A comprehensive review of hyperspectral data fusion with lidar and sar data. Annu. Rev. Control 2021, 51, 236–253.
22. Sankey, T.; Donager, J.; McVay, J.; Sankey, J.B. UAV lidar and hyperspectral fusion for forest monitoring in the southwestern USA. Remote Sens. Environ. 2017, 195, 30–43.
23. Luo, S.; Wang, C.; Xi, X.; Nie, S.; Fan, X.; Chen, H.; Yang, X.; Peng, D.; Lin, Y.; Zhou, G. Combining hyperspectral imagery and LiDAR pseudo-waveform for predicting crop LAI, canopy height and above-ground biomass. Ecol. Indic. 2019, 102, 801–812.
24. Guo, Y.; Xiao, Y.; Li, M.; Hao, F.; Zhang, X.; Sun, H.; Beurs, K.; Fu, Y.H.; He, Y. Identifying crop phenology using maize height constructed from multi-sources images. Int. J. Appl. Earth Obs. Geoinf. 2022, 115, 103121.
25. Xue, J.; Su, B. Significant remote sensing vegetation indices: A review of developments and applications. J. Sens. 2017, 1, 1353691.
26. Zhou, H.; Zhou, G.; Song, X.; He, Q. Dynamic characteristics of canopy and vegetation water content during an entire maize growing season in relation to spectral-based indices. Remote Sens. 2022, 14, 584.
27. Kwan, C.; Gribben, D.; Ayhan, B.; Li, J.; Bernabe, S.; Plaza, A. An Accurate Vegetation and Non-Vegetation Differentiation Approach Based on Land Cover Classification. Remote Sens. 2020, 12, 3880.
28. Mehta, A.; Shukla, S.; Rakholia, S. Vegetation Change Analysis using Normalized Difference Vegetation Index and Land Surface Temperature in Greater Gir Landscape. J. Sci. Res. 2021, 65, 1–6.
29. Gao, B. NDWI—A normalized difference water index for remote sensing of vegetation liquid water from space. Remote Sens. Environ. 1996, 58, 257–266.
30. Crosilla, F.; Macorig, D.; Scaioni, M.; Sebastianutti, I.; Visintini, D. LiDAR data filtering and classification by skewness and kurtosis iterative analysis of multiple point cloud data categories. Appl. Geomat. 2013, 5, 225–240.
31. Đorić, D.; Nikolić-Đorić, E.; Jevremović, V.; Mališić, J. On measuring skewness and kurtosis. Qual. Quant. 2009, 43, 481–493.
32. Yue, J.; Yang, G.; Li, C.; Li, Z.; Wang, Y.; Feng, H.; Xu, B. Estimation of Winter Wheat Above-Ground Biomass Using Unmanned Aerial Vehicle-Based Snapshot Hyperspectral Sensor and Crop Height Improved Models. Remote Sens. 2017, 9, 708.
33. Ma, X.; Wei, B.; Guan, H.; Yu, S. A method of calculating phenotypic traits for soybean canopies based on three-dimensional point cloud. Ecol. Inf. 2022, 68, 101524.
34. Li, Y.; Li, C.; Cheng, Q.; Chen, L.; Li, Z.; Zhai, W.; Mao, B.; Chen, Z. Precision estimation of winter wheat crop height and above-ground biomass using unmanned aerial vehicle imagery and oblique photography point cloud data. Front. Plant Sci. 2024, 15, 1437350.
35. Liu, T.; Zhu, S.; Yang, T.; Zhang, W.; Xu, Y.; Zhou, K.; Wu, W.; Zhao, Y.; Yao, Z.; Yang, G.; et al. Maize height estimation using combined unmanned aerial vehicle oblique photography and LIDAR canopy dynamic characteristics. Comput. Electron. Agric. 2024, 218, 108685.
Figure 1. Experimental site and experimental plot demarcation.
Figure 2. Schematic diagram of the overall workflow.
Figure 3. Field and plot delineation process based on point cloud data. (a) Angle correction and ground point filtering. (b) Height–Kmeans–DBSCAN clustering segmentation. (c) Boundary delineation of adjacent plots. (d) Fine boundary delineation of the field. Note: (a) Green represents the true color of the point cloud. (b) Different colors represent the process of experimental plot segmentation: the experimental field is first divided into four sections, and each section is then finely delineated into four experimental plots. (c,d) Red represents the height value of wheat, with colors closer to red representing higher values. The dark blue line in the terrain change curve represents the distribution of the point cloud in the xz-plane.
Figure 4. Field and plot delineation process based on multispectral data. (a) Raw multispectral imagery. (b) Angle correction. (c) Ground filtering. (d) Segmentation and boundary delineation of the experimental field.
Figure 5. Correlation analysis results between feature parameters and canopy height. (a–c) The point cloud features for the GUS, JS, and BS, respectively. (d–f) The multispectral features for the GUS, JS, and BS, respectively. Note: in the figure, GUS_height (GUS_H), JS_height (JS_H), and BS_height (BS_H) represent the canopy heights at the green-up, jointing, and booting stages, respectively.
Figure 6. Structure and results of the wheat canopy height estimation model based on OP-RF. (a) Model structure. (b) Model fitting results.
Figure 7. Stability analysis of wheat canopy height estimation model based on OP-RF. (a) Distribution of RMSE estimation results. (b) Distribution of MAE estimation results.
Table 1. Detailed parameter settings of the UAS.

| Parameter | DJI Matrice 350 | DJI Phantom 4 Multispectral |
| Relative flight altitude | 20 m | 20 m |
| Flight speed | 3.5 m/s | 3.5 m/s |
| Side overlap rate | 60% | 60% |
| Forward overlap rate | 80% | 80% |
| Sensor | AlphaAir 10 (LiDAR + RGB System) | Multispectral and RGB |
Table 2. Descriptive statistics of the measured crop heights (m) at different growth stages.

| Growth Stage | Irrigated Min | Irrigated Max | Irrigated Mean | Irrigated SD | Rainfed Min | Rainfed Max | Rainfed Mean | Rainfed SD |
| GUS | 0.238 | 0.355 | 0.296 | 0.028 | 0.222 | 0.359 | 0.295 | 0.027 |
| JS | 0.459 | 0.593 | 0.526 | 0.030 | 0.421 | 0.622 | 0.537 | 0.052 |
| BS | 0.650 | 0.790 | 0.707 | 0.031 | 0.620 | 0.799 | 0.729 | 0.039 |
Table 3. Formulas for point cloud feature parameters.

| Variable | Formula | Explanation |
| Rough | $Rough = \sqrt{\frac{1}{n}\sum_{i=1}^{n}(z_i-\bar{z})^2}$ | Quantifying the roughness of the terrain surface |
| MP | $MP = \frac{1}{n}\sum_{i=1}^{n}\left|z_i-(ax_i+by_i+c)\right|$ | Measuring the smoothness of the terrain surface |
| Den | $Den = \frac{n}{V}$ | Describing the density of point distribution in the point cloud |
| MHD | $MHD = \frac{1}{n}\sum_{i=1}^{n}\left|z_i-\bar{z}\right|$ | Providing a measure of central deviation of point cloud data relative to average height |
| Skew | $Skew = \frac{n}{(n-1)(n-2)}\sum_{i=1}^{n}\left(\frac{z_i-\bar{z}}{\sigma}\right)^3$ | Describing the statistical measure of asymmetry in point cloud height distribution |
| 3DVol | $3DVol = V_{convex\ hull}$ | Describing the distribution of the point cloud in space |

Note: in the table, $n$ is the total number of points in the point cloud data; $z_i$ is the height value (z coordinate) of the i-th point; $\bar{z}$ is the average height value of all points; $ax_i+by_i+c$ is the height value of the i-th point on the fitted plane; $V$ is the volume of the region where the point cloud is located; $\sigma$ is the standard deviation of the heights in the point cloud data; $V_{convex\ hull}$ is the convex hull volume of the point cloud.
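The Table 3 features can be sketched for one plot's point cloud as below. This is a hedged illustration: the exact absolute-value and square-root placements in the paper's formulas are assumed, and `scipy`'s `ConvexHull` stands in for the convex hull volume computation.

```python
# Minimal sketch of the Table 3 point cloud features; points is an (n, 3)
# array of x, y, z coordinates. Formula details are assumptions.
import numpy as np
from scipy.spatial import ConvexHull

def point_cloud_features(points):
    z = points[:, 2]
    n, z_mean, sigma = len(z), z.mean(), z.std()
    rough = np.sqrt(np.mean((z - z_mean) ** 2))      # Rough (RMS height)
    # MP: mean deviation from a least-squares plane z = a*x + b*y + c
    A = np.c_[points[:, 0], points[:, 1], np.ones(n)]
    coeffs = np.linalg.lstsq(A, z, rcond=None)[0]
    mp = np.mean(np.abs(z - A @ coeffs))
    hull = ConvexHull(points)                        # 3DVol (convex hull volume)
    den = n / hull.volume                            # Den (points per unit volume)
    mhd = np.mean(np.abs(z - z_mean))                # MHD
    skew = n / ((n - 1) * (n - 2)) * np.sum(((z - z_mean) / sigma) ** 3)  # Skew
    return dict(Rough=rough, MP=mp, Den=den, MHD=mhd, Skew=skew,
                Vol3D=hull.volume)

feats = point_cloud_features(np.random.default_rng(0).random((200, 3)))
```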
Table 4. Formulas for multispectral feature parameters.

| Vegetation Index | Formula |
| NDVI | $NDVI = \frac{R_{nir}-R_{red}}{R_{nir}+R_{red}}$ |
| SAVI | $SAVI = \frac{1.5\times(R_{nir}-R_{red})}{R_{nir}+R_{red}+0.5}$ |
| NDRE | $NDRE = \frac{R_{nir}-R_{redg}}{R_{nir}+R_{redg}}$ |
| NIRV | $NIRV = \frac{R_{nir}-R_{red}}{R_{nir}+R_{red}}\times R_{nir}$ |
| MTVI1 | $MTVI1 = 1.2\left[1.2(R_{nir}-R_{green})-2.5(R_{red}-R_{green})\right]$ |
| MTVI2 | $MTVI2 = \frac{1.5\left[1.2(R_{nir}-R_{green})-2.5(R_{red}-R_{green})\right]}{\left\{(2R_{nir}+1)^2-\left[6R_{nir}-5(R_{red})^{1/2}\right]-0.5\right\}^{1/2}}$ |
| ENGBVI | $ENGBVI = \frac{R_{redg}\times R_{nir}+2\times R_{green}-2\times R_{blue}}{R_{redg}\times R_{nir}+2\times R_{green}+2\times R_{blue}}$ |
| WDRVI | $WDRVI = \frac{0.1\times R_{nir}-R_{red}}{0.1\times R_{nir}+R_{red}}$ |
| MCARI | $MCARI = \left[(R_{redg}-R_{red})-0.2\times(R_{redg}-R_{green})\right]\times\frac{R_{redg}}{R_{red}}$ |

Note: in the table, $R_{blue}$, $R_{green}$, $R_{red}$, $R_{redg}$, and $R_{nir}$ represent the spectral reflectance of the blue (450 nm), green (560 nm), red (650 nm), red edge (730 nm), and near-infrared (840 nm) bands, respectively.
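Several of the Table 4 indices can be computed from per-band reflectance arrays as sketched below; the band values here are synthetic stand-ins, and the formulas follow the standard definitions of each index.

```python
# Sketch of a subset of the Table 4 vegetation indices; inputs are per-band
# reflectance arrays (synthetic values for illustration).
import numpy as np

def vegetation_indices(nir, red, green, redg):
    ndvi = (nir - red) / (nir + red)
    return {
        "NDVI": ndvi,
        "SAVI": 1.5 * (nir - red) / (nir + red + 0.5),
        "NDRE": (nir - redg) / (nir + redg),
        "NIRV": ndvi * nir,
        "MTVI1": 1.2 * (1.2 * (nir - green) - 2.5 * (red - green)),
        "WDRVI": (0.1 * nir - red) / (0.1 * nir + red),
        "MCARI": ((redg - red) - 0.2 * (redg - green)) * (redg / red),
    }

rng = np.random.default_rng(3)
nir, red, green, redg = (rng.uniform(0.05, 0.6, 100) for _ in range(4))
vi = vegetation_indices(nir, red, green, redg)
```

Each index is an element-wise function of the band arrays, so the same code applies per pixel or per plot-averaged reflectance.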
Table 5. Parameters to be optimized for different models.

| Model | Parameters |
| OP-RF | n_estimators, n_trials |
| Elastic Net | alpha, l1_ratio |
| XGBoost | n_estimators, reg_alpha, learning_rate |
| SVR | epsilon, gamma |
Table 6. The variance analysis results between the canopy height and feature parameters.

| Type | Feature Parameter | GUS p-Value | JS p-Value | BS p-Value |
| Point cloud | Rough | 2.63 × 10^−23 *** | 1.12 × 10^−3 ** | 2.72 × 10^−3 ** |
| | MP | 3.58 × 10^−9 *** | 2.30 × 10^−10 *** | 1.54 × 10^−4 *** |
| | Den | 6.02 × 10^−12 *** | 4.17 × 10^−3 ** | 1.06 × 10^−1 |
| | MHD | 4.87 × 10^−23 *** | 2.79 × 10^−1 | 3.27 × 10^−16 *** |
| | Skew | 6.08 × 10^−20 *** | 4.32 × 10^−2 * | 7.41 × 10^−23 *** |
| | 3DVol | 3.50 × 10^−11 *** | 7.61 × 10 *** | 1.51 × 10^−1 |
| Multispectral | NDVI | 3.37 × 10^−4 *** | 4.48 × 10^−1 | 5.41 × 10^−3 ** |
| | SAVI | 8.78 × 10^−6 *** | 6.31 × 10^−1 | 1.63 × 10^−4 *** |
| | NDRE | 4.60 × 10^−1 | 1.07 × 10^−1 | 1.68 × 10^−4 *** |
| | NIRV | 1.23 × 10^−1 | 8.44 × 10^−1 | 7.83 × 10^−3 ** |
| | MTVI1 | 4.04 × 10^−5 *** | 4.42 × 10^−2 * | 1.00 × 10^−6 *** |
| | MTVI2 | 9.02 × 10^−5 *** | 4.69 × 10^−2 * | 1.87 × 10^−5 *** |
| | ENGBVI | 1.77 × 10^−4 *** | 1.85 × 10^−1 | 1.09 × 10^−4 *** |
| | WDRVI | 2.02 × 10^−1 | 1.73 × 10 *** | 2.29 × 10^−5 *** |
| | MCARI | 4.60 × 10^−3 ** | 9.01 × 10 *** | 1.61 × 10^−3 ** |
Note: “*” indicates that it was significant (p-value ≤ 0.05), “**” indicates that it was highly significant (p-value ≤ 0.01), and “***” indicates that it was extremely significant (p-value ≤ 0.001).
Table 7. Variance inflation factor results between feature parameters.

| Feature Parameter | GUS | JS | BS |
| Rough | 3.96 | 2.86 | 3.60 |
| MP | 3.65 | 3.31 | 8.43 |
| Den | 4.24 | 1.84 | 1.54 |
| MHD | 3.40 | 4.55 | 3.61 |
| Skew | 4.90 | 5.82 | 2.39 |
| 3DVol | 5.44 | 3.88 | 2.13 |
| NDVI | 7.70 | 8.16 | 8.81 |
| SAVI | 4.64 | 9.43 | 7.23 |
| NDRE | 2.46 | 10.67 | 9.23 |
| NIRV | 6.07 | 4.30 | 2.92 |
| MTVI1 | 8.40 | 11.13 | 4.73 |
| MTVI2 | 9.68 | 10.42 | 7.29 |
| ENGBVI | 8.33 | 6.53 | 7.63 |
| WDRVI | 2.57 | 4.94 | 3.34 |
| MCARI | 5.48 | 2.94 | 9.27 |
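The collinearity check reported in Table 7 can be sketched with `statsmodels`' `variance_inflation_factor`; the feature values below are synthetic, and the common VIF > 10 flagging threshold is an illustrative assumption.

```python
# Sketch of the Table 7 collinearity check via variance inflation factors;
# data and threshold are illustrative assumptions.
import numpy as np
from statsmodels.stats.outliers_influence import variance_inflation_factor

rng = np.random.default_rng(5)
X = rng.random((60, 4))                     # candidate feature columns
Xc = np.column_stack([X, np.ones(len(X))])  # append an intercept column

# VIF of each feature column (the intercept column itself is skipped)
vif = [variance_inflation_factor(Xc, i) for i in range(X.shape[1])]
flagged = [i for i, v in enumerate(vif) if v > 10]  # strongly collinear features
```

Because VIF_i = 1 / (1 − R²_i), where R²_i comes from regressing feature i on the remaining features, values near 1 indicate independence and large values indicate redundancy.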
Table 8. Machine learning model parameter settings.

| Model | Parameter | GUS | JS | BS |
| OP-RF | n_estimators | 64 | 72 | 145 |
| | n_trials | 30 | 30 | 30 |
| Elastic Net | alpha | 0.001 | 0.001 | 0.001 |
| | l1_ratio | 0.5 | 0.1 | 0.8 |
| XGBoost | n_estimators | 1000 | 50 | 100 |
| | reg_alpha | 0.1 | 0.1 | 0.1 |
| | learning_rate | 0.05 | 0.2 | 0.2 |
| SVR | epsilon | 0.001 | 0.01 | 0.01 |
| | gamma | 0.2 | 0.2 | 0.3 |
Table 9. Machine learning model fitting results.

| Growth Stage | Model | Training R² | Training RMSE | Training MAE | Test R² | Test RMSE | Test MAE |
| GUS | OP-RF | 0.931 | 0.008 m | 0.006 m | 0.921 | 0.009 m | 0.006 m |
| | Elastic Net | 0.767 | 0.014 m | 0.010 m | 0.741 | 0.018 m | 0.010 m |
| | XGBoost | 0.954 | 0.006 m | 0.003 m | 0.905 | 0.009 m | 0.005 m |
| | SVR | 0.851 | 0.011 m | 0.006 m | 0.721 | 0.019 m | 0.009 m |
| JS | OP-RF | 0.950 | 0.013 m | 0.009 m | 0.936 | 0.016 m | 0.011 m |
| | Elastic Net | 0.729 | 0.033 m | 0.026 m | 0.622 | 0.036 m | 0.022 m |
| | XGBoost | 0.912 | 0.019 m | 0.013 m | 0.912 | 0.018 m | 0.013 m |
| | SVR | 0.861 | 0.023 m | 0.015 m | 0.847 | 0.021 m | 0.013 m |
| BS | OP-RF | 0.854 | 0.014 m | 0.011 m | 0.842 | 0.015 m | 0.011 m |
| | Elastic Net | 0.511 | 0.026 m | 0.021 m | 0.505 | 0.027 m | 0.018 m |
| | XGBoost | 0.757 | 0.021 m | 0.014 m | 0.743 | 0.021 m | 0.016 m |
| | SVR | 0.701 | 0.022 m | 0.017 m | 0.633 | 0.023 m | 0.013 m |
Table 10. OP-RF model fitting results.

| Growth Stage | Model Input Parameters | Training R² | Training RMSE | Training MAE | Test R² | Test RMSE | Test MAE |
| GUS | point cloud features | 0.897 | 0.009 m | 0.007 m | 0.833 | 0.010 m | 0.007 m |
| | multispectral features | 0.584 | 0.019 m | 0.014 m | 0.571 | 0.016 m | 0.013 m |
| JS | point cloud features | 0.706 | 0.032 m | 0.025 m | 0.695 | 0.033 m | 0.027 m |
| | multispectral features | 0.744 | 0.031 m | 0.022 m | 0.723 | 0.030 m | 0.018 m |
| BS | point cloud features | 0.853 | 0.015 m | 0.012 m | 0.768 | 0.016 m | 0.011 m |
| | multispectral features | 0.532 | 0.024 m | 0.018 m | 0.462 | 0.029 m | 0.022 m |
Ma, H.; Liu, Y.; Jiang, S.; Zhao, Y.; Yang, C.; An, X.; Zhang, K.; Cui, H. Winter Wheat Canopy Height Estimation Based on the Fusion of LiDAR and Multispectral Data. Agronomy 2025, 15, 1094. https://doi.org/10.3390/agronomy15051094
