Article

Multiscale Inversion of Leaf Area Index in Citrus Tree by Merging UAV LiDAR with Multispectral Remote Sensing Data

Weicheng Xu †, Feifan Yang †, Guangchao Ma, Jinhao Wu, Jiapei Wu and Yubin Lan *
1 College of Electronic Engineering, South China Agricultural University, Guangzhou 510642, China
2 Rice Research Institute, Guangdong Academy of Agricultural Sciences, Guangzhou 510642, China
3 Guangdong Laboratory for Lingnan Modern Agriculture, Guangzhou 510642, China
4 National Center for International Collaboration on Precision Agricultural Aviation Pesticide Spraying Technology, Guangzhou 510642, China
5 Department of Biological and Agricultural Engineering, Texas A&M University, College Station, TX 77843, USA
* Author to whom correspondence should be addressed.
† These authors contributed equally to this work.
Agronomy 2023, 13(11), 2747; https://doi.org/10.3390/agronomy13112747
Submission received: 13 September 2023 / Revised: 18 October 2023 / Accepted: 30 October 2023 / Published: 31 October 2023
(This article belongs to the Special Issue New Trends in Agricultural UAV Application)

Abstract

The leaf area index (LAI) is an important parameter that describes the canopy structure of citrus trees, characterizes plant photosynthesis, and provides an important basis for selecting parameters for orchard plant protection operations. Fusing LiDAR data with multispectral data compensates for the limited spatial information in multispectral data alone, thereby yielding higher LAI inversion accuracy. This study proposed a multiscale LAI inversion method for citrus orchards based on the fusion of point cloud data and multispectral data. By comparing various machine learning algorithms, the mapping relationship between citrus LAI and the characteristic parameters extracted from multispectral and point cloud data was established, and the inversion model was built on this basis after removing redundant features through redundancy analysis. The experimental results showed that the BP neural network performed best at both the community scale and the individual scale. After removing redundant features, the R2, RMSE, and MAE of the BP neural network were 0.896, 0.112, and 0.086 at the community scale and 0.794, 0.408, and 0.328 at the individual scale, respectively. Adding the three-dimensional gap fraction feature to the two-dimensional vegetation index features increased R2 by 4.43% at the community scale and 7.29% at the individual scale. These results suggest that fusing point cloud and multispectral data achieves higher accuracy in multiscale citrus LAI inversion than relying on a single data source. This study proposes a fast and efficient multiscale LAI inversion method for citrus, providing a new approach for precise orchard management and precision plant protection operations.

1. Introduction

Citrus is one of the most important fruit crops in the world, ranking among the top crops in both cultivation area and yield. China is a major citrus producer, with a total cultivation area of 2829.8 thousand hectares in 2020. Currently, chemical pesticides are the main means of pest control in citrus orchards. However, conventional uniform-rate spraying often causes problems such as pesticide residues and environmental pollution due to overspraying. Precision variable-rate spraying technology is an effective way to address these problems.
The leaf area index (LAI) is a comprehensive indicator of the extent to which vegetation utilizes light energy for photosynthesis [1,2,3,4]. The LAI was introduced by the British ecologist D.J. Watson in the 1940s and is defined as the total one-sided area of green leaves per unit land area [5]. The formula for calculating LAI is shown in Equation (1). For example, if the total leaf area of a plant growing on 2 m2 of land is 6 m2, the LAI of that plant on this land is 3. Crop yield is positively correlated with LAI up to a certain threshold; beyond it, insufficient light penetration lowers photosynthetic efficiency and ultimately reduces yield. Additionally, LAI can be used to regulate the amount of pesticide sprayed per unit land area, which is crucial for precise pest control and cost reduction [6,7,8]. Furthermore, the LAI serves as an important phenotype of citrus trees, facilitating yield prediction and the evaluation of tree health. Conventional remote sensing techniques relying on a single vegetation index exhibit limited accuracy due to saturation issues. Research has indicated that estimation methods based on multiple vegetation indices invert LAI with higher accuracy [9].
$$\mathrm{LAI} = \frac{\text{Area of leaves}}{\text{Area of ground}} \tag{1}$$
Traditionally, LAI has been measured with two main classes of methods: direct and indirect [10,11,12]. Direct methods include leaf collection and grid methods, while indirect methods are primarily based on radiative transfer theory, such as optical methods like hemispherical photography [13]. Although traditional LAI measurement methods have proven highly accurate, their efficiency is low, rendering them unsuitable for large-scale LAI measurement. With advances in remote sensing technology, unmanned aerial vehicle (UAV) and satellite remote sensing have emerged as alternative approaches for LAI inversion. However, satellite remote sensing is limited by revisit time and resolution, making it better suited to large areas than to real-time LAI inversion over intermediate-scale regions [14,15,16]. In contrast, UAV remote sensing offers greater flexibility, higher resolution, stronger real-time performance, and the capacity to carry multiple sensors. As agricultural informatization advances, an increasing number of scholars are using UAV technology for agricultural monitoring and crop information acquisition [17].
UAV remote sensing technology can be categorized into passive optical remote sensing and active LiDAR remote sensing [18]. Passive optical remote sensing uses UAVs equipped with multispectral or hyperspectral cameras to acquire canopy reflectance and calculate vegetation indices from it. Hyperspectral imaging has higher spectral resolution than multispectral imaging, and previous studies have shown that it estimates plant LAI with high accuracy [19,20]; however, hyperspectral sensors are expensive. Active laser technology includes mobile laser scanner (MLS) technology and UAV LiDAR technology. MLS actively emits laser light that penetrates the canopy to obtain tree point cloud data and has been widely used to assess canopy characteristics such as height, volume, and LAI [21,22,23]. MLS has the advantage of obtaining complete point cloud data, and its on-the-go capability allows it to be installed directly on farm tractors, enabling automatic data collection during field operations. However, this technology is more time-consuming and struggles to collect data in mountainous and hilly environments. UAV LiDAR likewise emits laser light that penetrates the canopy to obtain structural information such as the gap fraction, which can be described as the ratio of ground points to the total number of points [24], the ratio of intensity values from ground returns to the total intensity [25], or the ratio of energy from ground returns to the total return energy in the LiDAR point cloud data [26]. By establishing a statistical regression model between vegetation indices, canopy structure, and leaf area index (LAI), LAI can be estimated [27]. In forestry, LiDAR is used to obtain canopy parameters such as tree height and diameter and to establish allometric equations between LAI and these parameters for LAI estimation. However, allometric equations apply only to certain tree species, and statistical models generally exhibit low accuracy. With advances in artificial intelligence, machine learning algorithms such as random forest regression, support vector regression, gradient boosting regression, and artificial neural networks have been applied to plant phenotype monitoring owing to their advantages in processing large datasets and fitting nonlinear relationships [28,29,30,31,32]. UAV remote sensing has already found wide application in LAI extraction for crops such as wheat, rice, corn, and cotton [33,34,35,36]. However, current research is mostly based on a single data source, and accuracy needs to be improved. This experiment merges UAV LiDAR with multispectral remote sensing data to estimate citrus LAI with higher accuracy.

2. Materials and Methods

2.1. Research Area and Technical Route

2.1.1. Research Area

Figure 1 depicts the location of the experimental area in Zhaoqing City, Guangdong Province, China (23°26′25″ N, 112°31′11″ E). The locale has a subtropical monsoon climate with high humidity, plentiful sunshine, and ample rainfall, resulting in a warm and agreeable temperature throughout the year. The average annual temperature is 21.2 °C, and the average annual rainfall is 1803.8 mm. The research fields are the property of Sihui Cuitian Agricultural Technology Co., Ltd. (Zhaoqing, China). The terrain is slightly higher in the north and south and slightly lower in the middle, with a gentle slope. The planting pattern features a trunk-based row spacing of 3.5 m and a column spacing of 4.0 m. The study area includes two species: Gonggan and Shatangju. Gonggan is located in Zones A and B, while Shatangju is in Zones C, D, and E. The community-scale LAI research focuses on Zones C, D, and E, from which 220 samples were collected for community-scale modeling, as shown in Figure 2b. The individual-scale LAI research focuses on Zones A, B, C, and D, from which 341 samples were collected for individual-scale modeling, as shown in Figure 2a.

2.1.2. Technical Route

The technical methodology adopted in this study is presented in Figure 3. In the first row, the left and right panels show ground-truth LAI collection at the community and single-tree scales, respectively, and the two middle panels show remote sensing data collection, comprising UAV multispectral data and UAV LiDAR data. The method proceeds as follows: (1) preprocess the UAV data to prepare for feature extraction; (2) extract two-dimensional (vegetation index) and three-dimensional (gap fraction) features at the community and single-tree scales, respectively; (3) establish models between the extracted features and LAI at the single-tree and community scales, and use them to invert the LAI of large-area citrus orchards.

2.2. Data Acquisition

2.2.1. Ground-Truth LAI Acquisition

In this study, the CI-110 plant canopy analyzer produced by CID Bio Science (Washington, DC, USA) was used to measure ground-truth LAI, as shown in Figure 4a. A high-definition fisheye lens and a CCD image sensor capture plant canopy images, and the canopy LAI is obtained through software analysis (setting the azimuth divisions parameter to the same value for all measurements to mask out the operator), as shown in Figure 4b,c. The Ice River 660RTK (Shenzhen, China) positioning device was used to record the longitude and latitude coordinates, as shown in Figure 4d. The ground-truth LAI was collected from 26 April 2023 to 5 May 2023.
Because the research area is a purposefully planted artificial orchard with a neat arrangement, we developed a method for measuring LAI in artificial orchards at the community level. Following Černý et al. [37], we designed optimal layouts for placing transects in a pure plantation established through line planting, as depicted in Figure 5a. Because the Gonggan trees in Zones A and B are relatively short (height: 1.0–2.5 m; diameter: 1.0–2.0 m) and their crowns contract inward into a spherical shape, so that the leaves cannot be captured by the fisheye lens, community-scale LAI estimation was conducted only in Zones C, D, and E. Each community comprised six trees; we measured three points within each community and took the average as the LAI of that community, as illustrated in Figure 5b. The blue dashed points represent sampling points, and 220 communities were surveyed to collect ground-truth LAI data, as shown in Figure 2b.
For the individual scale, the research area includes zones A, B, C, and D. We conducted measurements at four points below each tree canopy and calculated the average value as the LAI for that specific tree, as shown in Figure 5c. We collected LAI data from 341 trees, which are shown in Figure 2a. Table 1 provides the maximum, minimum, mean, and standard deviation of the ground-truth LAI obtained at both community and individual scales.

2.2.2. Remote Sensing Data Collection

Two types of UAV, the DJI Phantom 4 Multispectral and the DJI M300, produced by DJ-Innovations (Shenzhen, China), were used to capture multispectral remote sensing data and LiDAR point cloud data, respectively, as shown in Figure 6. The DJI Phantom 4 Multispectral UAV, depicted in Figure 6a, is equipped with a 5-band camera for capturing multispectral data, and a calibration grey plate is used to obtain reflectance, as depicted in Figure 6b. The DJI M300 UAV, shown in Figure 6c, is equipped with a ZENMUSE L1 LiDAR to collect point cloud data. Both UAVs are equipped with centimeter-level positioning systems (RTK). The UAV remote sensing data were collected on 26 April 2023, from 11:30 to 12:30 for multispectral data and from 13:00 to 15:00 for LiDAR data, under clear and windless weather conditions. Detailed parameters of the two UAVs and the flight parameters during data collection can be found in Table 2 and Table 3, respectively. DJI Terra software was used to stitch the remote sensing data collected by the UAVs into an orthophoto panorama of the research area.

2.3. Remote Sensing Data Preprocessing

2.3.1. Multispectral Data Preprocessing

Vegetation Index

The absorption and scattering effects of vegetation on incident light of different wavelengths result in characteristic spectral responses. Vegetation exhibits strong absorption in the red band but reflects strongly in the near-infrared band. Numerous studies have shown that these two bands are closely related to vegetation cover and LAI [38]. In this study, 11 vegetation indices and 4 bands (R, G, NIR, REG) were used as partial features to estimate the LAI of citrus trees during the fruit growth and development period. Table 4 provides the abbreviations, full names, formulas, and references for the 11 vegetation indices.
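For illustration, the following minimal sketch shows how such indices can be computed from per-band reflectance arrays; the function and band-array names are our own, and the epsilon guard against division by zero is an assumption added for numerical safety, not part of the published workflow.

```python
import numpy as np

def vegetation_indices(nir, red, green, reg, eps=1e-8):
    """Compute a subset of the Table 4 indices from reflectance arrays.

    nir, red, green, reg: 2D reflectance arrays of equal shape.
    eps guards against division by zero over dark pixels (an assumption,
    not part of the published workflow).
    """
    return {
        "NDVI": (nir - red) / (nir + red + eps),
        "GNDVI": (nir - green) / (nir + green + eps),
        "NDRE": (nir - reg) / (nir + reg + eps),
        "OSAVI": (nir - red) / (nir + red + 0.16),
        "RVI": nir / (red + eps),
        "GCI": nir / (green + eps) - 1.0,
    }

# Example with synthetic reflectance values in [0, 1]
rng = np.random.default_rng(0)
bands = {k: rng.random((100, 100)) for k in ("nir", "red", "green", "reg")}
print(vegetation_indices(**bands)["NDVI"].mean())
```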

Removing Soil Background

Because soil background pixels occupy a significant share of the spectral image of the study area, accurate extraction of the vegetation component requires masking the soil pixels. In this research, the bimodal threshold method was employed for this purpose: an image histogram is generated, counting the number of pixels at each value, and the threshold separating soil from vegetation is taken as the abscissa of the trough between the two peaks of the histogram. Taking RVI as an example, the technical process is illustrated in Figure 7.
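A minimal sketch of this step is given below, assuming the trough is taken as the deepest local minimum of a lightly smoothed histogram; the smoothing and valley-selection rules are our assumptions, since the paper specifies only the histogram trough.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d
from scipy.signal import argrelmin

def bimodal_threshold(index_image, bins=256):
    """Return the index value at the histogram trough separating the
    soil and vegetation peaks of a vegetation-index image."""
    values = index_image[np.isfinite(index_image)].ravel()
    hist, edges = np.histogram(values, bins=bins)
    smooth = gaussian_filter1d(hist.astype(float), sigma=3)  # suppress noise
    minima = argrelmin(smooth)[0]                # candidate valleys
    trough = minima[np.argmin(smooth[minima])]   # deepest valley
    return 0.5 * (edges[trough] + edges[trough + 1])

# Usage (rvi is a 2D RVI image): vegetation pixels lie above the trough.
# veg_mask = rvi > bimodal_threshold(rvi)
```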

2.3.2. LiDAR Data Preprocessing

Gap Fraction

The gap fraction is derived from the Beer–Lambert law and can be obtained from the reflectivity measurements of UAV LiDAR. It can be described as the ratio of ground points to the total number of points in the LiDAR point cloud data [24], the ratio of intensity values from ground returns to the total intensity [25], or the ratio of energy from ground returns to the total return energy [26]. The gap fraction decreases as the LAI increases. The key factor in this method is determining the ground threshold. In this study, the gap fraction calculated from the LiDAR point cloud data was selected as one of the training features; its formula is shown in Equation (2). Because the value of the gap fraction depends on the height threshold used to separate ground and vegetation points, height normalization of the point cloud data is necessary to extract the gap fraction accurately.
$$\text{Gap Fraction} = \frac{\text{Number of ground points}}{\text{Total number of points}} \tag{2}$$
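In code, Equation (2) reduces to a one-line ratio over the height-normalized points of an ROI; the sketch below assumes the heights have already been normalized as described in the next subsection.

```python
import numpy as np

def gap_fraction(z, height_threshold):
    """Equation (2): fraction of points at or below the ground threshold
    in a height-normalized point cloud (z: 1D array of point heights)."""
    z = np.asarray(z)
    return np.count_nonzero(z <= height_threshold) / z.size

# e.g., with the 1.21 m community-scale threshold found in Section 2.4.1:
# gf = gap_fraction(roi_points[:, 2], 1.21)
```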

Height Normalization

In this study, the LiDAR360 software developed by Beijing Green Valley Technology Co., Ltd. (Beijing, China) was used for height normalization. Elevation normalization involved three steps: (1) ground points were separated with LiDAR360's ground separation tool [50]; (2) a digital elevation model (DEM) was generated from the ground points using inverse distance weighted (IDW) interpolation, with the parameter settings shown in Figure 8b; (3) the point cloud was normalized in elevation by subtracting the DEM values from the original Z-values of the points. The specific flowchart is illustrated in Figure 8a.
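The sketch below reproduces steps (2) and (3) in simplified form, assuming the ground points have already been classified; the k-nearest-neighbor IDW and the regular-grid lookup are our stand-ins for LiDAR360's implementation, not the software's actual code.

```python
import numpy as np
from scipy.spatial import cKDTree

def idw_dem(ground_xyz, grid_x, grid_y, power=2, k=8):
    """DEM from classified ground points via inverse-distance weighting
    over the k nearest ground points of each grid node."""
    tree = cKDTree(ground_xyz[:, :2])
    gx, gy = np.meshgrid(grid_x, grid_y)
    dist, idx = tree.query(np.column_stack([gx.ravel(), gy.ravel()]), k=k)
    dist = np.maximum(dist, 1e-6)                 # avoid division by zero
    w = 1.0 / dist**power
    dem = (w * ground_xyz[idx, 2]).sum(axis=1) / w.sum(axis=1)
    return dem.reshape(gx.shape)

def normalize_heights(points, dem, grid_x, grid_y):
    """Subtract the DEM elevation under each point from its Z value."""
    ix = np.clip(np.searchsorted(grid_x, points[:, 0]) - 1, 0, len(grid_x) - 1)
    iy = np.clip(np.searchsorted(grid_y, points[:, 1]) - 1, 0, len(grid_y) - 1)
    out = points.copy()
    out[:, 2] -= dem[iy, ix]
    return out
```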

2.4. Extract Features

2.4.1. Community Scale

Vegetation Indices in Each ROI

In this study, the region of interest (ROI) was generated for each community and we obtained 220 ROIs, as shown in Figure 2b. The average value of each vegetation index (or band) within each ROI area was extracted from multispectral data as input features for modeling. A total of 15 features were obtained from the multispectral reflectance data (including 11 vegetation indices detailed in Table 4 and 4 multispectral bands representing R, G, REG, and NIR).

Gap Fraction in Each ROI

The formula for calculating the gap fraction is presented in Equation (2). Its value depends on the choice of height threshold: points below the threshold are treated as ground points. To determine the optimal threshold, a binary search method was implemented in this study (Figure 9). First, the middle value of the elevation range was computed, dividing the range into two halves. As displayed in Figure 9, the normalized elevation ranged from 0 to 4.78, so the middle value was 2.39. To prevent the gap fraction from being exactly 0 or 1, the minimum and maximum heights were offset by +0.01 and −0.01, respectively, giving three candidate thresholds: 0.01, 2.39, and 4.77. The gap fraction of each community (ROI) was then calculated at each of the three thresholds using Equation (2), and the Pearson correlation coefficient (r) between the gap fraction and the ground-truth LAI was determined using Equation (3). Here, i is the ID of each community, n the total number of communities, Xi the gap fraction of the i-th community, and Yi the ground-truth LAI of the i-th community; X̄ and Ȳ denote the means of X and Y. If the maximum correlation coefficient occurred at one of the two endpoints, the half-range containing that endpoint was retained; otherwise, the half-range whose endpoint correlation differed least from the midpoint correlation was chosen for the next iteration. This process continued until the threshold with the highest correlation coefficient was found. Ultimately, an elevation threshold of 1.21 m was used to calculate the gap fraction at the community scale.
$$r = \frac{\sum_{i=1}^{n}(X_i - \bar{X})(Y_i - \bar{Y})}{\sqrt{\sum_{i=1}^{n}(X_i - \bar{X})^2}\,\sqrt{\sum_{i=1}^{n}(Y_i - \bar{Y})^2}} \tag{3}$$
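A simplified version of this search is sketched below: it repeatedly halves the height range, keeping the half whose endpoint correlates best with LAI. This collapses the paper's endpoint/midpoint comparison into a plain bisection and assumes the correlation is unimodal in the threshold, so it is an approximation of the procedure in Figure 9 rather than the authors' exact code.

```python
import numpy as np
from scipy.stats import pearsonr

def search_threshold(roi_heights, lai, z_min, z_max, tol=0.01):
    """Halving search for the ground height threshold that maximizes the
    correlation between ROI gap fraction (Equation (2)) and measured LAI.

    roi_heights: list of 1D arrays of normalized heights, one per ROI.
    lai:         matching sequence of ground-truth LAI values.
    """
    def corr(th):
        gf = [np.count_nonzero(z <= th) / z.size for z in roi_heights]
        return abs(pearsonr(gf, lai)[0])

    lo, hi = z_min + 0.01, z_max - 0.01  # keep gap fraction off 0 and 1
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if corr(lo) >= corr(hi):         # keep the better-correlated half
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)
```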

Pearson Correlation Coefficient and Scatter Plot

We drew the scatter plot of all features and calculated the Pearson correlation coefficient between LAI and the features extracted in the "Vegetation Indices in Each ROI" and "Gap Fraction in Each ROI" sections. The formula for the Pearson correlation coefficient is shown in Equation (3), and the result is shown in Figure 10.
According to the significance test table of the correlation coefficient, for 220 samples a correlation above 0.18 is significant at the 0.01 confidence level. This indicates that, at the 0.01 confidence level, LAI is significantly correlated with all features (column 1 in Figure 10), satisfying the experimental requirements.

2.4.2. Individual Tree Scale

Instance Segmentation

This experiment uses the watershed algorithm proposed by Chen et al. for instance segmentation of the normalized point cloud [51]. The resolution of both the digital surface model (DSM) and the digital elevation model (DEM) was 0.5 m. The segmentation results are shown in Figure 11a and were evaluated using the detection rate (Equation (4)), accuracy (Equation (5)), and overall accuracy (Equation (6)) [52]:
$$r = \frac{N_c}{N_c + N_l} \tag{4}$$
$$p = \frac{N_c}{N_c + N_o} \tag{5}$$
$$F = \frac{2rp}{r + p} \tag{6}$$
Here, $r$ represents the detection rate, $p$ the accuracy, and $F$ the overall accuracy; $N_c$, $N_l$, and $N_o$ represent the numbers of correctly segmented, missed, and oversegmented trees, respectively. A high $F$ indicates a high accuracy of the individual tree detection results. The specific results are shown in Table 5.
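Equations (4)-(6) can be checked directly against Table 5; the helper below is a trivial restatement of the three formulas.

```python
def segmentation_scores(n_correct, n_missed, n_over):
    """Detection rate r, accuracy p, and overall accuracy F
    (Equations (4)-(6))."""
    r = n_correct / (n_correct + n_missed)
    p = n_correct / (n_correct + n_over)
    return r, p, 2 * r * p / (r + p)

# Reproduces Table 5 with Nc = 921, Nl = 330, No = 468:
print(segmentation_scores(921, 330, 468))  # ~(0.736, 0.663, 0.698)
```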
The results show that a considerable number of trees were oversegmented or missed. Therefore, we manually performed a secondary segmentation on the watershed segmentation results with the hexagonal tool of LiDAR360 (Figure 11c). The final segmentation result is shown in Figure 11b.

Extracting Boundary of Individual Tree

We used the concave envelope method to extract the boundary of each individual tree. It involves three steps: (1) project the single-tree point cloud onto the XOY plane, (2) establish a Delaunay triangulation network for the points in the XOY plane, and (3) remove edges longer than a threshold, set to 0.5 m in this study. The process and results are shown in Figure 12. We generated an ROI from each concave hull.
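A compact sketch of the three steps is shown below, using scipy's Delaunay triangulation; the boundary rule (keep edges that belong to exactly one surviving triangle) is a standard concave-hull construction that we assume matches the paper's intent.

```python
import numpy as np
from collections import Counter
from scipy.spatial import Delaunay

def concave_hull_edges(xy, max_edge=0.5):
    """Concave envelope of a single tree: triangulate the XOY projection,
    drop triangles containing an edge longer than max_edge (0.5 m here),
    and return the boundary edges of what remains."""
    tri = Delaunay(xy)
    edge_count = Counter()
    for s in tri.simplices:
        pts = xy[s]
        lengths = [np.linalg.norm(pts[i] - pts[(i + 1) % 3]) for i in range(3)]
        if max(lengths) > max_edge:
            continue  # discard triangles touched by an over-long edge
        for i in range(3):
            edge_count[tuple(sorted((s[i], s[(i + 1) % 3])))] += 1
    # boundary edges belong to exactly one surviving triangle
    return [e for e, c in edge_count.items() if c == 1]

# xy = single_tree_points[:, :2]  # step (1): project onto the XOY plane
# boundary = concave_hull_edges(xy)
```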

Pearson Correlation Coefficient and Scatter Plot

We extracted the features of each ROI using the methods described in the "Vegetation Indices in Each ROI" and "Gap Fraction in Each ROI" sections, applying a height threshold of 0.29 m to calculate the gap fraction at the individual scale. We then calculated the Pearson correlation coefficient between LAI and the features and drew the scatter plot of all features, as shown in Figure 13.
According to the significance test table of the correlation coefficient, for 341 samples a correlation above 0.14 is significant at the 0.01 confidence level. This indicates that, at the 0.01 confidence level, LAI is significantly correlated with all features (column 1 in Figure 13), satisfying the experimental requirements.

2.5. Regression Model

2.5.1. BP Neural Network Model

The BP neural network is a feedforward neural network trained by error backpropagation, comprising an input layer, a hidden layer, and an output layer [53]. Training consists of forward and backward propagation. Its learning rule uses the steepest descent method, continuously adjusting the network's weights and biases through backpropagation so as to minimize the sum of squared errors. Since this experiment was a regression problem, the output layer had one neuron. The Adam optimizer was used for gradient descent. The hidden layer had 5 neurons and the learning rate was 0.001. If the loss decreased by less than 0.001 for 10 consecutive epochs, the learning rate was multiplied by 0.98. The maximum number of epochs was set to 5000. An early stopping condition was set to avoid overfitting: if the loss did not decrease for 100 consecutive epochs, training stopped. The network structure is shown in Figure 14.
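A minimal PyTorch sketch of this configuration follows. The sigmoid activation, full-batch training, and MSE loss are our assumptions (the paper does not state them); the layer sizes, optimizer, learning-rate decay, and early stopping follow the description above.

```python
import torch
from torch import nn

class BPNet(nn.Module):
    """One hidden layer with 5 neurons and a single regression output."""
    def __init__(self, n_features):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(n_features, 5), nn.Sigmoid(),
                                 nn.Linear(5, 1))

    def forward(self, x):
        return self.net(x)

def train(model, x, y, lr=1e-3, max_epochs=5000, patience=100):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    best, stall = float("inf"), 0
    for _ in range(max_epochs):
        opt.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        opt.step()
        if best - loss.item() < 1e-3:    # loss dropped by less than 0.001
            stall += 1
            if stall % 10 == 0:          # 10 stalled epochs: decay lr
                for g in opt.param_groups:
                    g["lr"] *= 0.98
            if stall >= patience:        # early stopping
                break
        else:
            best, stall = loss.item(), 0
    return model
```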

2.5.2. Other Models

In this study, other traditional machine learning models were used for comparison, including random forest regression [54], gradient boosting regression [55], support vector regression [56], linear regression [57], and Bayesian ridge regression [58].
Random forest regression is an algorithm based on ensemble learning. It carries out regression tasks by constructing multiple decision trees and integrating their prediction results. In random forest, each decision tree is independent and trained on the randomly selected subsamples, which can effectively reduce the risk of overfitting. Random forest can obtain the final regression result by averaging or weighted averaging the prediction results of multiple decision trees. The advantages of random forest include the following: (1) It is capable of processing high-dimensional data and large-scale datasets; (2) It has good generalization performance and can effectively reduce the risk of overfitting; (3) It is able to handle missing values and outliers; (4) For data with nonlinear relationships, it has strong fitting ability.
Unlike random forest, gradient boosting regression predicts the result by constructing multiple decision trees in sequence, with each subsequent tree fitting the residual between the true value and the sum of the predictions of all previous trees.
Support vector regression seeks a regression plane such that all samples in the set lie as close to it as possible. The model creates a margin band on both sides of the linear function whose half-width is the tolerance deviation, a manually set empirical value. Samples falling inside the band incur no loss; only the support vectors influence the model. The optimized model is obtained by minimizing the total loss while maximizing the margin.
Bayesian ridge regression is a statistical modeling method that utilizes Bayes’ theorem to calculate the posterior distribution. By establishing a Gaussian prior distribution for the parameters and combining the prior distribution with the observed data, it is capable of incorporating prior information into the model, thereby enhancing the accuracy and stability of parameter estimation.
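For reference, all five comparison models are available in scikit-learn; the paper does not list their hyperparameters, so the sketch below uses library defaults.

```python
from sklearn.ensemble import RandomForestRegressor, GradientBoostingRegressor
from sklearn.svm import SVR
from sklearn.linear_model import LinearRegression, BayesianRidge

# Baselines compared against the BP network (default hyperparameters,
# since the paper does not specify them).
baselines = {
    "RF": RandomForestRegressor(random_state=0),
    "GBR": GradientBoostingRegressor(random_state=0),
    "SVR": SVR(),
    "LR": LinearRegression(),
    "Bayesian": BayesianRidge(),
}
```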

3. Results

3.1. Model Comparison

We divided the dataset into ten equal parts and evaluated the models with 10-fold cross-validation. The out-of-fold predictions were pooled and compared with the ground-truth values to calculate the coefficient of determination (R2), mean absolute error (MAE), and root-mean-square error (RMSE) of each model. The scatter plots between the predicted and true values at the community and individual scales are shown in Figure 15. Python 3.9.7 was used for the modeling and result evaluation in this experiment.
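This evaluation can be expressed compactly with scikit-learn; the sketch below pools the out-of-fold predictions of a 10-fold split and scores them, with the shuffling seed being our assumption.

```python
import numpy as np
from sklearn.model_selection import KFold, cross_val_predict
from sklearn.metrics import r2_score, mean_absolute_error, mean_squared_error

def evaluate(model, X, y):
    """R2, RMSE, and MAE from pooled 10-fold out-of-fold predictions."""
    cv = KFold(n_splits=10, shuffle=True, random_state=0)
    pred = cross_val_predict(model, X, y, cv=cv)
    return (r2_score(y, pred),
            np.sqrt(mean_squared_error(y, pred)),
            mean_absolute_error(y, pred))

# e.g., r2, rmse, mae = evaluate(baselines["RF"], X, y)
```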
It can be seen from the results that the prediction performance of BP neural network modeling is superior to the other five machine learning models at both the community scale and the individual scale. Therefore, the BP neural network was chosen as the framework for the prediction model.

3.2. Remove Redundant Features

As seen in Section 3.1, BP neural network modeling achieved the highest R2: 0.888 at the community scale and 0.791 at the individual scale. To remove redundant features, a one-by-one removal method was applied to all input features of the BP neural network model for redundancy analysis: after each feature was removed, R2 was recorded (Table 6). If R2 increased or remained unchanged after removing a feature, that feature was considered redundant.
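This criterion translates into a leave-one-feature-out scan; the sketch below applies it with a generic scikit-learn regressor, whereas the paper used the BP network, so treat it as an illustration of the procedure only.

```python
import numpy as np
from sklearn.model_selection import KFold, cross_val_predict
from sklearn.metrics import r2_score

def redundancy_analysis(model, X, y, feature_names, baseline_r2):
    """Flag a feature as redundant when R2 does not drop after removing
    it (the Table 6 criterion)."""
    cv = KFold(n_splits=10, shuffle=True, random_state=0)
    redundant = []
    for j, name in enumerate(feature_names):
        pred = cross_val_predict(model, np.delete(X, j, axis=1), y, cv=cv)
        if r2_score(y, pred) >= baseline_r2:
            redundant.append(name)
    return redundant
```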
Table 6 shows that the redundant features at the community scale are DVI, NDRE, NIR, and RED, while OSAVI is the redundant feature at the individual scale. To obtain the final inversion model, the BP neural network was retrained on the feature set with the redundant features eliminated. The outcomes of the retrained BP neural network are illustrated in Figure 16. Additionally, Figure 17 presents a histogram of feature importance within the random forest model.

3.3. The Effect of Gap Fraction on Results

This study combines two-dimensional spectral information with the three-dimensional gap fraction. To demonstrate how much the gap fraction improves the results, Figure 18 shows the fit between the predicted and true values after the gap fraction is removed.
According to Figure 16 and Figure 18, it can be observed that the coefficient of determination (R2) exhibited an improvement from 0.858 to 0.896, corresponding to a 4.43% increase at the community scale. Additionally, R2 increased from 0.740 to 0.794, representing a 7.29% increase at the individual scale. These findings indicate that the fusion approach combining two-dimensional multispectral data with three-dimensional spatial information is superior to conventional two-dimensional multispectral methods.

3.4. Model Application

The BP neural network model trained in Section 3.2 was utilized for the inversion of leaf area index (LAI) in both the community research area and the individual research area. At the community scale, LAI inversion was conducted at a grid size of 5 m × 5 m, with a pixel resolution of 0.5 m. At the individual tree scale, the pixel resolution was set to 0.1 m. The inversion results are presented in Figure 19.
At the individual scale, the tree species on the left side of the study area were identified as Gonggan, while the tree species on the right side were identified as Shatangju. As shown in Figure 19b, it is evident that the LAI of Gonggan is higher than that of Shatangju, which aligns with the actual conditions observed.

4. Discussion

This study employed the fusion of LiDAR data and multispectral data to model and invert LAI in citrus orchards at both the community and individual scales. The results show that higher accuracy is achieved at both scales than with a single data source. Compared with manual ground measurement, the UAV-based method has the advantages of higher efficiency and lower cost. Compared with MLS, the UAV is not affected by mountainous environments, but MLS obtains a more complete canopy point cloud than UAV LiDAR, which can improve the accuracy of LAI extraction. Previous studies have indicated a strong correlation between LAI and the volume extracted by MLS [21]. The main methods for extracting volume are the voxel method, the grid method, and the alpha-shape method [23,59,60,61]. In the future, we plan to use MLS to extract LAI based on the voxel method and compare the accuracy and feasibility of LAI extraction between MLS and UAV LiDAR. Additionally, hyperspectral full-frame imaging systems can acquire canopy spectral reflectance with more bands and higher spectral resolution. We will add hyperspectral imaging data in subsequent studies to further explore its ability to estimate citrus LAI.
In addition, the LAI of fruit trees is highly correlated with common vegetation indices [62]. A single vegetation index carries only limited information and may saturate at different levels, resulting in poor generalization performance for the model. Methods based on multiple vegetation indices achieve higher accuracy [63,64]. In this study, we combined the gap fraction, a parameter describing canopy structure, with multiple vegetation indices to improve the generalization performance of the model. Because the relationship between citrus LAI and the vegetation indices and canopy structure parameters is complex, traditional linear regression models may struggle to express the mapping. Compared with traditional linear regression, machine learning can achieve higher-accuracy regression by training on the input features, and BP neural networks can adjust the number of neurons and the network depth to adapt to different levels of complexity. In this study, the BP neural network performed better than the other machine learning models.
From the results, higher accuracy was observed at the community scale than at the individual tree scale. This is because LAI is strongly related to the proportion of gaps: at the community scale, each modeled community comprised six trees and the gaps between them, whereas at the individual tree scale the variation of the gap proportion within a single tree was not obvious. In addition, at the individual scale, error was reduced by averaging the LAI values acquired at four points in the ground-truth measurements. It is worth noting that the zenith angle had to be adjusted so that the lens field of view avoided capturing adjacent fruit trees.
In this study, we only collected data on the growth and development period of citrus trees in a small region without sampling in different large regions. We plan to collect data for the entire growth cycle in different large regions in the future and optimize the model to enrich the diversity of the dataset and improve the generalization and accuracy of the model for wider applications.

5. Conclusions

This study integrated active and passive remote sensing technology (fusing LiDAR data with multispectral data) to develop a machine learning model, implemented with the PyTorch framework in a Python 3.9.7 environment, for estimating the leaf area index (LAI) of citrus orchards at both the community and individual scales, providing decision-making support for the management of large citrus orchards. The main research findings and conclusions are as follows:
(1)
The R2 values of the six models at both the community scale and individual scale, before removing redundant features, were as follows: 0.808 (SVR), 0.841 (GBR), 0.859 (LR), 0.859 (RF), 0.859 (Bayesian), and 0.888 (BP) for the community scale; and 0.681 (SVR), 0.680 (GBR), 0.738 (LR), 0.689 (RF), 0.748 (Bayesian), and 0.791 (BP) for the individual scale. The BP neural network demonstrated the best performance among the models at both scales.
(2)
The R2 values of the BP neural network model, after removing redundant features, were found to be 0.896 at the community scale and 0.794 at the individual scale. It was observed that the model achieved higher accuracy at the community scale compared to the individual scale.
(3)
By integrating LiDAR data with multispectral data, we observed a substantial improvement in the R2 values. Specifically, at the community scale, there was a notable increase of 4.43%, while at the individual scale, the improvement reached an impressive 7.29%. These results strongly suggest that the fusion approach, which combines the two-dimensional multispectral information with the three-dimensional spatial information, outperforms the conventional two-dimensional multispectral methods.
In this study, we employed the approach of integrating LiDAR and multispectral data to achieve precise estimation of leaf area index (LAI) in citrus orchards at both community and individual scales. Our findings demonstrate that this integrated method offers superior accuracy and efficiency compared to traditional techniques such as hemispherical photography and leaf collection methods, and is superior to the methods that rely on a single data source. Consequently, this approach proves to be highly advantageous for acquiring LAI data in large citrus orchards.

Author Contributions

The contributions of the authors involved in this study are as follows: W.X.: conceptualization, data curation, formal analysis, methodology, supervision, investigation, validation, writing—original draft, and writing—review and editing; F.Y.: data curation, formal analysis, investigation, methodology, software, validation, visualization, writing—original draft, and writing—review and editing; G.M.: methodology, investigation, and resources; J.W. (Jinhao Wu): methodology, investigation, and resources; J.W. (Jiapei Wu): methodology, investigation, and resources; Y.L.: conceptualization, funding acquisition, project administration, software, supervision, resources, and writing—review and editing. W.X. and F.Y. contributed equally to this work. All authors have read and agreed to the published version of the manuscript.

Funding

Discipline Innovation and Talent Introduction Program for Higher Education Institutions (No.: D18019).

Data Availability Statement

The data used in this study are available from the corresponding author upon reasonable request.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this article.

References

  1. Wang, Y.; Leuning, R. A two-leaf model for canopy conductance, photosynthesis and partitioning of available energy I.: Model description and comparison with a multi-layered model. Agric. Forest Meteorol. 1998, 91, 89–111. [Google Scholar] [CrossRef]
  2. Calvet, J.; Noilhan, J.; Roujean, J.; Bessemoulin, P.; Cabelguenne, M.; Olioso, A.; Wigneron, J. An interactive vegetation SVAT model tested against data from six contrasting sites. Agric. Forest Meteorol. 1998, 92, 73–95. [Google Scholar] [CrossRef]
  3. Asner, G.P.; Scurlock, J.M.; Hicke, J.A. Global synthesis of leaf area index observations: Implications for ecological and remote sensing studies. Glob. Ecol. Biogeogr. 2003, 12, 191–205. [Google Scholar] [CrossRef]
  4. Sellers, P.J.; Dickinson, R.E.; Randall, D.A.; Betts, A.K.; Hall, F.G.; Berry, J.A.; Collatz, G.J.; Denning, A.S.; Mooney, H.A.; Nobre, C.A. Modeling the exchanges of energy, water, and carbon between continents and the atmosphere. Science 1997, 275, 502–509. [Google Scholar] [CrossRef]
  5. Watson, D.J. Comparative physiological studies on the growth of field crops: II. The effect of varying nutrient supply on net assimilation rate and leaf area. Ann. Bot. London 1947, 11, 375–407. [Google Scholar] [CrossRef]
  6. Verrelst, J.; Rivera, J.P.; Veroustraete, F.; Muñoz-Marí, J.; Clevers, J.G.; Camps-Valls, G.; Moreno, J. Experimental Sentinel-2 LAI estimation using parametric, non-parametric and physical retrieval methods—A comparison. ISPRS J. Photogramm. 2015, 108, 260–272. [Google Scholar] [CrossRef]
  7. Zhai, C.; Zhao, C.; Ning, W.; Long, J.; Wang, X.; Weckler, P.; Zhang, H. Research progress on precision control methods of air-assisted spraying in orchards. Trans. Chin. Soc. Agric. Eng. 2018, 34, 1–15. [Google Scholar]
  8. Liao, J.; Zang, Y.; Luo, X.; Zhou, Z.; Zang, Y.; Wang, P.; Hewitt, A.J. The relations of leaf area index with the spray quality and efficacy of cotton defoliant spraying using unmanned aerial systems (UASs). Comput. Electron. Agric. 2020, 169, 105228. [Google Scholar] [CrossRef]
  9. Qi, H.; Zhu, B.; Wu, Z.; Liang, Y.; Li, J.; Wang, L.; Chen, T.; Lan, Y.; Zhang, L. Estimation of peanut leaf area index from unmanned aerial vehicle multispectral images. Sensors 2020, 20, 6732. [Google Scholar] [CrossRef]
  10. Bréda, N.J. Ground-based measurements of leaf area index: A review of methods, instruments and current controversies. J. Exp. Bot. 2003, 54, 2403–2417. [Google Scholar] [CrossRef]
  11. Zheng, G.; Moskal, L.M. Retrieving leaf area index (LAI) using remote sensing: Theories, methods and sensors. Sensors 2009, 9, 2719–2745. [Google Scholar] [CrossRef] [PubMed]
  12. Jonckheere, I.; Fleck, S.; Nackaerts, K.; Muys, B.; Coppin, P.; Weiss, M.; Baret, F. Review of methods for in situ leaf area index determination: Part I. Theories, sensors and hemispherical photography. Agric. Forest Meteorol. 2004, 121, 19–35. [Google Scholar] [CrossRef]
  13. Klingler, A.; Schaumberger, A.; Vuolo, F.; Kalmár, L.B.; Pötsch, E.M. Comparison of direct and indirect determination of leaf area index in permanent grassland. PFG—J. Photogramm. Remote Sens. Geoinf. Sci. 2020, 88, 369–378. [Google Scholar] [CrossRef]
  14. Schraik, D.; Varvia, P.; Korhonen, L.; Rautiainen, M. Bayesian inversion of a forest reflectance model using sentinel-2 and landsat 8 satellite images. J. Quant. Spectrosc. Radiat. Transfer. 2019, 233, 1–12. [Google Scholar] [CrossRef]
  15. Lee, J.; Kang, Y.; Son, B.; Im, J.; Jang, K. Estimation of Leaf Area Index Based on Machine Learning/PROSAIL Using Optical Satellite Imagery. Korean J. Remote Sens. 2021, 37, 1719–1729. [Google Scholar]
  16. Kang, Y.; Gao, F.; Anderson, M.; Kustas, W.; Nieto, H.; Knipper, K.; Yang, Y.; White, W.; Alfieri, J.; Torres-Rua, A. Evaluation of satellite Leaf Area Index in California vineyards for improving water use estimation. Irrig. Sci. 2022, 40, 531–551. [Google Scholar] [CrossRef]
  17. Weiss, M.; Jacob, F.; Duveiller, G. Remote sensing for agricultural applications: A meta-review. Remote Sens. Environ. 2020, 236, 111402. [Google Scholar] [CrossRef]
  18. Wang, Y.; Fang, H. Estimation of LAI with the LiDAR technology: A review. Remote Sens. 2020, 12, 3457. [Google Scholar] [CrossRef]
  19. Ali, A.; Imran, M. Evaluating the potential of red edge position (REP) of hyperspectral remote sensing data for real time estimation of LAI & chlorophyll content of kinnow mandarin (Citrus reticulata) fruit orchards. Sci. Hortic. 2020, 267, 109326. [Google Scholar]
  20. Ma, J.; Wang, L.; Chen, P. Comparing different methods for wheat LAI inversion based on hyperspectral data. Agriculture 2022, 12, 1353. [Google Scholar] [CrossRef]
  21. Pagliai, A.; Ammoniaci, M.; Sarri, D.; Lisci, R.; Perria, R.; Vieri, M.; D’Arcangelo, M.E.M.; Storchi, P.; Kartsiotis, S. Comparison of Aerial and Ground 3D Point Clouds for Canopy Size Assessment in Precision Viticulture. Remote Sens. 2022, 14, 1145. [Google Scholar] [CrossRef]
  22. Colaço, A.F.; Trevisan, R.G.; Molin, J.P.; Rosell-Polo, J.R.; Escolà, A. A method to obtain orange crop geometry information using a mobile terrestrial laser scanner and 3D modeling. Remote Sens. 2017, 9, 763. [Google Scholar] [CrossRef]
  23. Li, Q.; Xue, Y. Total leaf area estimation based on the total grid area measured using mobile laser scanning. Comput. Electron. Agric. 2023, 204, 107503. [Google Scholar] [CrossRef]
  24. Luo, S.; Chen, J.M.; Wang, C.; Gonsamo, A.; Xi, X.; Lin, Y.; Qian, M.; Peng, D.; Nie, S.; Qin, H. Comparative performances of airborne LiDAR height and intensity data for leaf area index estimation. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2017, 11, 300–310. [Google Scholar] [CrossRef]
  25. Tang, H.; Brolly, M.; Zhao, F.; Strahler, A.H.; Schaaf, C.L.; Ganguly, S.; Zhang, G.; Dubayah, R. Deriving and validating Leaf Area Index (LAI) at multiple spatial scales through lidar remote sensing: A case study in Sierra National Forest, CA. Remote Sens. Environ. 2014, 143, 131–141. [Google Scholar] [CrossRef]
  26. Lang, A.; Yueqin, X. Estimation of leaf area index from transmission of direct sunlight in discontinuous canopies. Agric. Forest Meteorol. 1986, 37, 229–243. [Google Scholar] [CrossRef]
  27. Tunca, E.; Köksal, E.S.; Çetin, S.; Ekiz, N.M.; Balde, H. Yield and leaf area index estimations for sunflower plants using unmanned aerial vehicle images. Environ. Monit. Assess. 2018, 190, 1–12. [Google Scholar] [CrossRef]
  28. López-Calderón, M.J.; Estrada-Ávalos, J.; Rodríguez-Moreno, V.M.; Mauricio-Ruvalcaba, J.E.; Martínez-Sifuentes, A.R.; Delgado-Ramírez, G.; Miguel-Valle, E. Estimation of total nitrogen content in forage maize (Zea mays L.) Using Spectral Indices: Analysis by Random Forest. Agriculture 2020, 10, 451. [Google Scholar] [CrossRef]
  29. Shen, B.; Ding, L.; Ma, L.; Li, Z.; Pulatov, A.; Kulenbekov, Z.; Chen, J.; Mambetova, S.; Hou, L.; Xu, D. Modeling the Leaf Area Index of Inner Mongolia Grassland Based on Machine Learning Regression Algorithms Incorporating Empirical Knowledge. Remote Sens. 2022, 14, 4196. [Google Scholar] [CrossRef]
  30. Zhang, Z.; Masjedi, A.; Zhao, J.; Crawford, M.M. Prediction of sorghum biomass based on image based features derived from time series of UAV images. In Proceedings of the 2017 IEEE International Geoscience and Remote Sensing Symposium (IGARSS), Fort Worth, TX, USA, 23–28 July 2017; pp. 6154–6417. [Google Scholar]
  31. Mansaray, L.R.; Kabba, V.T.; Yang, L. Rice biophysical parameter retrieval with optical satellite imagery: A comparative assessment of parametric and non-parametric models. Geocarto Int. 2022, 37, 13561–13578. [Google Scholar] [CrossRef]
  32. Zhu, X.; Li, J.; Liu, Q.; Yu, W.; Li, S.; Zhao, J.; Dong, Y.; Zhang, Z.; Zhang, H.; Lin, S. Use of a BP Neural Network and Meteorological Data for Generating Spatiotemporally Continuous LAI Time Series. IEEE Trans. Geosci. Remote Sens. 2021, 60, 1–14. [Google Scholar] [CrossRef]
  33. Li, S.; Yuan, F.; Ata-UI-Karim, S.T.; Zheng, H.; Cheng, T.; Liu, X.; Tian, Y.; Zhu, Y.; Cao, W.; Cao, Q. Combining color indices and textures of UAV-based digital imagery for rice LAI estimation. Remote Sens. 2019, 11, 1763. [Google Scholar] [CrossRef]
  34. Yan, P.; Han, Q.; Feng, Y.; Kang, S. Estimating lai for cotton using multisource uav data and a modified universal model. Remote Sens. 2022, 14, 4272. [Google Scholar] [CrossRef]
  35. Hasan, U.; Sawut, M.; Chen, S. Estimating the leaf area index of winter wheat based on unmanned aerial vehicle RGB-image parameters. Sustainability 2019, 11, 6829. [Google Scholar] [CrossRef]
  36. Sun, X.; Yang, Z.; Su, P.; Wei, K.; Wang, Z.; Yang, C.; Wang, C.; Qin, M.; Xiao, L.; Yang, W. Non-destructive monitoring of maize LAI by fusing UAV spectral and textural features. Front. Plant Sci. 2023, 14, 1158837. [Google Scholar] [CrossRef] [PubMed]
  37. Černý, J.; Pokorný, R.; Haninec, P.; Bednář, P. Leaf area index estimation using three distinct methods in pure deciduous stands. J. Vis. Exp. 2019, 150, e59757. [Google Scholar]
  38. Zhang, X.; Zhang, K.; Wu, S.; Shi, H.; Sun, Y.; Zhao, Y.; Fu, E.; Chen, S.; Bian, C.; Ban, W. An investigation of winter wheat leaf area index fitting model using spectral and canopy height model data from unmanned aerial vehicle imagery. Remote Sens. 2022, 14, 5087. [Google Scholar] [CrossRef]
  39. Gitelson, A.A.; Merzlyak, M.N. Remote sensing of chlorophyll concentration in higher plant leaves. Adv. Space Res. 1998, 22, 689–692. [Google Scholar] [CrossRef]
  40. Peñuelas, J.; Isla, R.; Filella, I.; Araus, J.L. Visible and near-infrared reflectance assessment of salinity effects on barley. Crop Sci. 1997, 37, 198–202. [Google Scholar] [CrossRef]
  41. Gitelson, A.; Merzlyak, M.N. Spectral reflectance changes associated with autumn senescence of Aesculus hippocastanum L. and Acer platanoides L. leaves. Spectral features and relation to chlorophyll estimation. J. Plant Physiol. 1994, 143, 286–292. [Google Scholar] [CrossRef]
  42. Rondeaux, G.; Steven, M.; Baret, F. Optimization of soil-adjusted vegetation indices. Remote Sens. Environ. 1996, 55, 95–107. [Google Scholar] [CrossRef]
  43. Jordan, C.F. Derivation of leaf-area index from quality of light on the forest floor. Ecology 1969, 50, 663–666. [Google Scholar] [CrossRef]
  44. Wu, W. The generalized difference vegetation index (GDVI) for dryland characterization. Remote Sens. 2014, 6, 1211–1233. [Google Scholar] [CrossRef]
  45. Becker, F.; Choudhury, B.J. Relative sensitivity of normalized difference vegetation index (NDVI) and microwave polarization difference index (MDPI) for vegetation and desertification monitoring. Remote Sens. Environ. 1988, 24, 297–311. [Google Scholar] [CrossRef]
  46. Xiao, H.; Chen, X.; Yang, Z.; Li, H.; Zhu, H. Vegetation index estimation by chlorophyll content of grassland based on spectral analysis. Spectrosc. Spect. Anal. 2014, 34, 3075–3078. [Google Scholar]
  47. Cao, Q.; Miao, Y.; Gao, X.; Liu, B.; Feng, G.; Yue, S. Estimating the nitrogen nutrition index of winter wheat using an active canopy sensor in the North China Plain. In Proceedings of the 2012 First International Conference on Agro-Geoinformatics (Agro-Geoinformatics), Shanghai, China, 2–4 August 2012; pp. 1–5. [Google Scholar]
  48. Xiaoqin, W.; Miaomiao, W.; Shaoqiang, W.; Yundong, W. Extraction of vegetation information from visible unmanned aerial vehicle images. Trans. Chin. Soc. Agric. Eng. 2015, 31, 152–159. [Google Scholar]
  49. Wu, H.; Jiang, J.J.; Zhang, H.L.; Zhang, L.; Zhou, J. Application of ratio resident-area index to retrieve urban residential areas based on landsat TM data. J. Nanjing Norm. Univ. Nat. Sci. 2006, 3, 118–121. [Google Scholar]
  50. Zhao, X.; Guo, Q.; Su, Y.; Xue, B. Improved progressive TIN densification filtering algorithm for airborne LiDAR data in forested areas. ISPRS J. Photogramm. 2016, 117, 79–91. [Google Scholar] [CrossRef]
  51. Chen, Q.; Baldocchi, D.; Gong, P.; Kelly, M. Isolating individual trees in a savanna woodland using small footprint lidar data. Photogramm. Eng. Remote Sens. 2006, 72, 923–932. [Google Scholar] [CrossRef]
  52. Ma, K.; Chen, Z.; Fu, L.; Tian, W.; Jiang, F.; Yi, J.; Du, Z.; Sun, H. Performance and sensitivity of individual tree segmentation methods for UAV-LiDAR in multiple forest types. Remote Sens. 2022, 14, 298. [Google Scholar] [CrossRef]
  53. Rumelhart, D.E.; Hinton, G.E.; Williams, R.J. Learning representations by back-propagating errors. Nature 1986, 323, 533–536. [Google Scholar] [CrossRef]
  54. Rigatti, S.J. Random forest. J. Insur. Med. 2017, 47, 31–39. [Google Scholar] [CrossRef] [PubMed]
  55. Friedman, J.H. Greedy function approximation: A gradient boosting machine. Ann. Stat. 2001, 29, 1189–1232. [Google Scholar] [CrossRef]
  56. Awad, M.; Khanna, R.; Awad, M.; Khanna, R. Support vector regression. In Efficient Learning Machines: Theories, Concepts, and Applications for Engineers and System Designers; Springer Nature: New York, NY, USA, 2015; pp. 67–80. [Google Scholar]
  57. Su, X.; Yan, X.; Tsai, C.L. Linear regression. Wiley Interdiscip. Rev. Comput. Stat. 2012, 4, 275–294. [Google Scholar] [CrossRef]
  58. Shi, Q.; Abdel-Aty, M.; Lee, J. A Bayesian ridge regression analysis of congestion’s impact on urban expressway safety. Accid. Anal. Prev. 2016, 88, 124–137. [Google Scholar] [CrossRef] [PubMed]
  59. Di Gennaro, S.F.; Matese, A. Evaluation of novel precision viticulture tool for canopy biomass estimation and missing plant detection based on 2.5D and 3D approaches using RGB images acquired by UAV platform. Plant Methods 2020, 16, 1–12. [Google Scholar] [CrossRef]
  60. Ouyang, J.; De Bei, R.; Collins, C. Assessment of canopy size using UAV-based point cloud analysis to detect the severity and spatial distribution of canopy decline. Oeno One 2021, 55, 253–266. [Google Scholar] [CrossRef]
  61. López-Granados, F.; Torres-Sánchez, J.; Jiménez-Brenes, F.M.; Oneka, O.; Marín, D.; Loidi, M.; de Castro, A.I.; Santesteban, L.G. Monitoring vineyard canopy management operations using UAV-acquired photogrammetric point clouds. Remote Sens. 2020, 12, 2331. [Google Scholar] [CrossRef]
  62. Liu, Z.; Guo, P.; Liu, H.; Fan, P.; Zeng, P.; Liu, X.; Feng, C.; Wang, W.; Yang, F. Gradient boosting estimation of the leaf area index of apple orchards in uav remote sensing. Remote Sens. 2021, 13, 3263. [Google Scholar] [CrossRef]
  63. Yao, X.; Wang, N.; Liu, Y.; Cheng, T.; Tian, Y.; Chen, Q.; Zhu, Y. Estimation of wheat LAI at middle to high levels using unmanned aerial vehicle narrowband multispectral imagery. Remote Sens. 2017, 9, 1304. [Google Scholar] [CrossRef]
  64. He, L.; Ren, X.; Wang, Y.; Liu, B.; Zhang, H.; Liu, W.; Feng, W.; Guo, T. Comparing methods for estimating leaf area index by multi-angular remote sensing in winter wheat. Sci. Rep. 2020, 10, 13943. [Google Scholar] [CrossRef] [PubMed]
Figure 1. Test site location.
Figure 2. Schematic diagram of the distribution of the sampling points. (a) Individual scale; (b) community scale.
Figure 3. Flow chart.
Figure 4. (a) CI-110 plant canopy analyzer; (b) setting azimuth divisions; (c) LAI result; (d) using the Ice River 660RTK to record the longitude and latitude coordinates.
Figure 5. Ground-truth LAI. (a) Sampling point distribution at the community scale; (b) average LAI of 3 points in each community; (c) average LAI of 4 points in each tree (individual tree scale).
Figure 6. Equipment for collecting remote sensing data. (a) DJI Phantom 4 with multispectral camera; (b) multispectral grey-plate calibration; (c) DJI M300 with ZENMUSE L1.
Figure 7. Removing soil background using the bimodal threshold method.
Figure 8. (a) Height normalization flowchart; (b) IDW parameters.
Figure 9. Flow chart of the half-search threshold method.
Figure 10. Scatter plot of the community scale.
Figure 11. Instance segmentation results. (a) Watershed algorithm results; (b) manual secondary segmentation results; (c) hexagonal tool.
Figure 12. (a) Concave envelope method (red lines are longer than the threshold); (b) ROI.
Figure 13. Scatter plot of the individual scale.
Figure 14. BP neural network model.
Figure 15. Ten-fold cross-validation results. (a) Community scale; (b) individual scale.
Figure 16. Ten-fold cross-validation results after removing redundant features. (a) Community scale; (b) individual tree scale.
Figure 17. Importance of features. (a) Community scale; (b) individual tree scale.
Figure 18. Ten-fold cross-validation results after removing gap fraction. (a) Community scale; (b) individual tree scale.
Figure 19. Inversion results. (a) Community scale; (b) individual tree scale.
Table 1. Ground-truth LAI parameters.

Scale      | Number of Samples | Max   | Min   | Mean  | Standard Deviation
Community  | 220               | 1.963 | 0.160 | 0.927 | 0.348
Individual | 341               | 5.00  | 1.080 | 2.372 | 0.901
Table 2. Phantom 4 Multispectral equipment parameters and flight parameters.

Parameter Name   | Value
CMOS pixels      | 1600 × 1300
Flight altitude  | 35 m
Course overlap   | 75%
Lateral overlap  | 75%
Flight speed     | 3 m/s
Photography mode | Equidistant photography
Pan-tilt angle   | −90°
Bands            | Blue (450 ± 16 nm); Green (560 ± 16 nm); Red (650 ± 16 nm); Red edge (730 ± 16 nm); Near infrared (840 ± 26 nm)
Table 3. DJI M300 equipment parameters and flight parameters.

Parameter Name      | Value
Point cloud density | 4041 points/m2
Flight altitude     | 35 m
Scan mode           | Repeat scan
Lateral overlap     | 20%
Flight speed        | 1 m/s
Table 4. Mathematical formulas for vegetation indices.

Vegetation Index | Full Name                                    | Formula                         | Reference
GNDVI            | Green Normalized Difference Vegetation Index | (NIR − G)/(NIR + G)             | [39]
NDVI             | Normalized Difference Vegetation Index       | (NIR − R)/(NIR + R)             | [40]
NDRE             | Normalized Difference Red Edge               | (NIR − REG)/(NIR + REG)         | [41]
OSAVI            | Optimized Soil Adjusted Vegetation Index     | (NIR − R)/(NIR + R + 0.16)      | [42]
RVI              | Ratio Vegetation Index                       | NIR/R                           | [43]
GDVI             | Green Difference Vegetation Index            | NIR − G                         | [44]
DVI              | Difference Vegetation Index                  | NIR − R                         | [45]
GCI              | Green Chlorophyll Index                      | NIR/G − 1                       | [46]
RNDVI            | Red Normalized Difference Vegetation Index   | (REG − R)/(REG + R)             | [47]
VDVI             | Visible Difference Vegetation Index          | (2G − R − B)/(2G + R + B)       | [48]
RRI              | Red Ratio Index                              | NIR/REG                         | [49]
Table 5. Results of instance segmentation with watershed.

Nc  | Nl  | No  | r     | p     | F
921 | 330 | 468 | 0.736 | 0.663 | 0.698
Table 6. Redundancy analysis.

Feature Removed | R2 (Community Scale) | R2 (Individual Tree Scale) | Redundant? (Community, Individual)
Gap fraction    | 0.855 | 0.738 | ×, ×
DVI             | 0.890 | 0.780 | √, ×
GCI             | 0.886 | 0.765 | ×, ×
GDVI            | 0.883 | 0.764 | ×, ×
GNDVI           | 0.877 | 0.757 | ×, ×
NDRE            | 0.889 | 0.785 | √, ×
NDVI            | 0.872 | 0.770 | ×, ×
OSAVI           | 0.885 | 0.794 | ×, √
GREEN           | 0.879 | 0.789 | ×, ×
NIR             | 0.889 | 0.788 | √, ×
RED             | 0.892 | 0.784 | √, ×
REDEDGE         | 0.882 | 0.751 | ×, ×
RNDVI           | 0.887 | 0.778 | ×, ×
RRI             | 0.882 | 0.789 | ×, ×
RVI             | 0.882 | 0.758 | ×, ×
VDVI            | 0.879 | 0.784 | ×, ×