Article

Enabling Deep-Neural-Network-Integrated Optical and SAR Data to Estimate the Maize Leaf Area Index and Biomass with Limited In Situ Data

Peilei Luo, Huichun Ye, Wenjiang Huang, Jingjuan Liao, Quanjun Jiao, Anting Guo and Binxiang Qian

1 State Key Laboratory of Remote Sensing Science, Aerospace Information Research Institute, Chinese Academy of Sciences, Beijing 100094, China
2 Key Laboratory of Earth Observation of Hainan, Aerospace Information Research Institute, Chinese Academy of Sciences, Sanya 572029, China
3 University of Chinese Academy of Sciences, Beijing 100049, China
* Author to whom correspondence should be addressed.
Remote Sens. 2022, 14(21), 5624; https://doi.org/10.3390/rs14215624
Submission received: 16 September 2022 / Revised: 29 October 2022 / Accepted: 4 November 2022 / Published: 7 November 2022

Abstract:
Accurate estimation of the maize leaf area index (LAI) and biomass is of great importance in guiding field management and early yield estimation. Physical models and traditional machine learning methods are commonly used for LAI and biomass estimation, but they mostly rely on handcrafted features and theoretical formulas under idealized assumptions, which limits their accuracy. Deep neural networks have demonstrated great superiority in automatic feature extraction and complicated nonlinear approximation, but their application to LAI and biomass estimation has been hindered by the shortage of in situ data. Bridging this data gap so that deep neural networks can be used to estimate the maize LAI and biomass is therefore of great significance. Optical data cannot provide information on the lower canopy because of their limited penetrability, whereas synthetic aperture radar (SAR) data can, so the integration of optical and SAR data is necessary. In this paper, 158 samples from the jointing, trumpet, flowering, and filling stages of maize were collected for investigation. First, we propose an improved version of the mixup training method, termed mixup+, to augment the number of samples. We then construct a novel gated Siamese deep neural network (GSDNN), based on a gating mechanism and a Siamese architecture, to integrate optical and SAR data for the estimation of the LAI and biomass. We compared the accuracy of the GSDNN with those of other machine learning methods, i.e., multiple linear regression (MLR), support vector regression (SVR), random forest regression (RFR), and a multilayer perceptron (MLP). The experimental results show that without the use of mixup+, the GSDNN achieved an accuracy similar to that of the simple neural network MLP in terms of R² and RMSE, slightly lower than those of MLR, SVR, and RFR. However, with the help of mixup+, the GSDNN achieved state-of-the-art performance (R² = 0.71, 0.78, and 0.86 and RMSE = 0.58, 871.83 g/m², and 150.76 g/m² for the LAI, Biomass_wet, and Biomass_dry, respectively), exceeding the accuracies of MLR, SVR, RFR, and MLP. In addition, through the integration of optical and SAR data, the GSDNN achieved better accuracy in LAI and biomass estimation than when optical or SAR data alone were used. We found that the most appropriate amount of synthetic data from mixup+ was five times the amount of original data. Overall, this study demonstrates that the GSDNN + mixup+ has great potential for integrating optical and SAR data to improve the estimation accuracy of the maize LAI and biomass with limited in situ data.

1. Introduction

Maize is one of the most important crops grown throughout the world, with the United States, China, and Brazil being the top three maize-producing countries [1]. It is an important staple food for more than two billion people and a valuable source material for the production of ethanol, animal feed, biofuel, and other products, such as starch and syrup [2,3]. With population growth, the demand for maize is rapidly increasing, so monitoring maize growth status has attracted much attention. The leaf area index (LAI) and biomass are important indicators of maize growth, reflecting the effects of nutritional deficiencies, pests and diseases, droughts and floods, etc.; therefore, their accurate estimation can assist in the monitoring of maize growth status to guide field management and early yield estimation [4,5,6]. Traditional estimation methods rely mainly on field sampling and manual measurement, which are time-consuming, labor-intensive, and prone to errors from subjective factors. However, satellite remote sensing technology has developed rapidly over the last 60 years, providing ever more data at high spatial and temporal resolutions, and it is now possible to use remote sensing data to obtain dynamic estimates of the LAI and biomass rapidly, accurately, and on a large scale [7]. As a consequence, increasing effort is being devoted to research on LAI and biomass estimation based on remote sensing inversion methods [8,9,10,11,12,13].
Remote sensing inversion methods can be classified into two categories: physical models and statistical models [14,15,16]. Physical models are commonly used for LAI and biomass estimation and provide an effective representation of the relationship between biophysical parameters and remote sensing data. However, physical models rely on some prior knowledge for input parameters and some idealized assumptions regarding the crop canopy; therefore, they are of limited accuracy. In addition, it is difficult to estimate the LAI and biomass through the application of physical models to optical and synthetic aperture radar (SAR) data, since the different imaging mechanisms involved in the acquisition of optical and SAR data mean that the corresponding physical models are very different. Therefore, statistical methods play a vital role in the estimation of the LAI and biomass. Traditional machine learning methods, such as multiple linear regression (MLR), support vector regression (SVR), and random forest regression (RFR), are widely used in LAI and biomass estimation and have achieved very good results [17,18]. Nevertheless, all of these methods rely on empirical formulas and custom-built features, such as the normalized difference vegetation index (NDVI) and the enhanced vegetation index (EVI), which limit the capabilities of the corresponding inversion models. Deep neural networks, also known as deep learning, which have demonstrated great superiority in automated extraction of deep features and in the approximation of complicated nonlinear relationships, have attracted much attention in recent years. They have achieved great success in many remote sensing tasks [19,20,21,22,23,24], such as classification [25], image preprocessing [26], object detection [27,28], and scene understanding [29,30]. However, their application to LAI and biomass estimation has been hindered by the limited amounts of in situ data that are available [24]. Owing to the limited efficiency of in situ data collection, it is very expensive to obtain the massive amounts of data that deep neural networks, as data-driven models, require for the training of their multitudes of parameters. Consequently, finding a way for deep neural networks to estimate the maize LAI and biomass with limited in situ data is a necessary, albeit challenging, task.
Data augmentation is one of the techniques used in machine learning to increase the size and diversity of training datasets as much as possible, thereby enhancing the generalization abilities of models. Widely used data augmentation methods in image processing include horizontal/vertical flipping, rotation, scaling, clipping, translation, contrast adjustment, color disturbance, and the addition of noise. However, these methods change the position or color of the original pixels and can therefore only be applied to scenes that are not sensitive to target correspondence or local color changes (such as image classification [31] and target recognition [32]); they are not suitable for quantitative remote sensing inversion. Changes in contrast or color vibrance and the addition of noise alter the one-to-one mappings between the different bands and the LAI and biomass, while rotation and translation destroy the correspondence between the remote sensing data and the measured LAI and biomass, leading to what we call inconsistency between source and target (IST). The data mixing-up method (mixup) [33] is a novel method for extending datasets based on interpolation and has achieved excellent results in the field of image classification. By mixing training data, it greatly improves the generalization ability of a model and its robustness to adversarial attacks. This paper adapts the mixup method to the estimation of the maize LAI and biomass with a number of innovative improvements. Instead of using the mixed-up data directly, we predict the interpolation coefficient through a deep neural model, thereby mitigating the interpolation error that arises in the original mixup method. We call our method mixup+.
With the help of mixup+, we are able to leverage a deep neural network to estimate the maize LAI and biomass with limited in situ data. In this paper, we propose a novel deep neural network, a gated Siamese deep neural network (GSDNN), that integrates optical and SAR data through a gating mechanism in order to estimate the maize LAI and biomass. Considering the respective advantages and disadvantages of optical and SAR data, our approach is to integrate the two for maize LAI and biomass estimation [17]. Specifically, optical data provide rich spectral information, but they struggle to reflect the contributions of leaves within the maize canopy. When the leaf density of the canopy is high, electromagnetic waves in the visible and near-infrared bands interact mainly with the middle and upper layers of the canopy, which leads to saturation of the reflectivity and vegetation indices derived from optical data. For example, the most commonly used vegetation index, the NDVI, is sensitive to low LAI values (≤3) but saturates for medium or high LAI values (>3) [34]. Similarly, when the biomass is at a medium to high level (>2 kg/m²), the NDVI also exhibits saturation [35]. SAR data, by contrast, have quite good penetration and contain much more three-dimensional structural information about the maize canopy, but radar remote sensing data are easily affected by scattering and attenuation by the canopy, which limits the inversion accuracy of growth parameters. The backscatter coefficient and polarization decomposition parameters extracted from SAR data can help to alleviate the saturation phenomenon in LAI and biomass inversion [18,36], but they are easily affected by soil background and topographic factors, introducing errors into the inversion [37]. Therefore, integrating optical and SAR data to estimate the maize LAI and biomass is essential, and the key is the establishment of an effective and deep integration mechanism. In this paper, the GSDNN is proposed as a way to achieve these goals by leveraging a gating mechanism [38] and a Siamese architecture [39], thereby establishing multiple information interaction channels between the two branches of the neural network corresponding to the optical and radar remote sensing data, respectively, and enabling relatively exact and deep information interactions during optical and radar data fusion. At the same time, the gating mechanism helps to increase the depth of the network while avoiding the vanishing and exploding gradient problems [40].
Overall, in this paper, we focus on using a deep neural network that integrates optical and SAR data to retrieve the maize LAI and biomass. Our objectives are (1) to adapt the mixup method to maize LAI and biomass estimation tasks, (2) to propose a novel deep neural network that integrates optical and SAR data to estimate the maize LAI and biomass, and (3) to study the effect of the amount of synthetic data on model accuracy.

2. Materials

2.1. Study Area

The study area was located in Dajianchang Town (39°54.2′N, 117°26.9′E), Wuqing District, Tianjin City, China (Figure 1). Dajianchang Town has a warm, temperate, semi-humid continental monsoon climate, with an annual average temperature, precipitation, and sunshine duration of 11.6 °C, 606.8 mm, and 2705 h, respectively. The terrain is relatively flat and the soil is loose and fertile, making the area suitable for the cultivation and production of crops. The main summer crops are maize and soybeans, accounting for 78.3% and 8.9% of the total crop area, respectively [41]. Maize is planted in large contiguous plots, which facilitated our collection of in situ data and the corresponding satellite remote sensing data.

2.2. Data

2.2.1. Satellite Data

Optical and SAR data were acquired and preprocessed for four growth stages (jointing, trumpet, flowering, and filling) of maize in 2018 (Table 1). The optical data consisted of three scenes of Sentinel-2 data and one scene of Landsat-8 OLI data; the Landsat-8 scene was used because the Sentinel-2 scenes were covered by thick clouds during the jointing stage. The Sentinel-2 multispectral data and Landsat-8 OLI data were preprocessed with the Sen2cor plug-in provided by the ESA and with the ENVI software, respectively; both optical datasets were subjected to radiometric calibration and atmospheric correction. The SAR data consisted of four scenes of Sentinel-1B GRDH and SLC data, which were preprocessed with the SNAP software provided on the official ESA website, mainly comprising radiometric calibration, multi-look processing, refined Lee filtering, and geocoding. Finally, both the optical and SAR data were resampled to a spatial resolution of 10 m × 10 m.

2.2.2. In Situ Data

We also collected in situ LAI and biomass data (Table 2) in the four growth stages (jointing, trumpet, flowering, and filling) of maize in the study area on 20–23 July, 31 July–2 August, 15–17 August, and 3–5 September 2018, respectively. During the measurements, 40 sample points (Figure 1) were collected for each growth stage in the study area. Each sample point was at the center of a 100 m × 100 m quadrat, with no overlap between quadrats, and the coordinates were recorded with a Garmin GPS 60. We measured the lengths and widths of maize leaves manually and calculated the LAI with an empirical formula: area of a single leaf = length × width × 0.73, and LAI = total leaf area per plant × plant density. We then put each fresh sample into a kraft paper bag and weighed it to obtain Biomass_wet, and then dried it in an oven and weighed the dry sample to obtain Biomass_dry per unit area, combining the per-plant weights with the measured plant density (biomass = wet or dry weight per plant × plant density). To ensure the accuracy of the in situ data, we selected three representative plants at each sample point and averaged the measurements to obtain the final result. We collected 158 samples in total, and the statistics of the LAI and biomass values are shown in Table 2. The ranges of the LAI, Biomass_wet, and Biomass_dry were 0.38–5.13, 193.54–8448.79 g/m², and 16.48–1559.08 g/m², respectively.
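To make the empirical formulas concrete, the following Python snippet works through the LAI calculation for a single plant; the leaf dimensions and plant density are illustrative values of our own, not survey data.

```python
# Worked example of the in situ LAI formula (illustrative numbers only).
leaves = [(0.62, 0.080), (0.70, 0.085), (0.55, 0.070)]  # (length, width) in m

# Area of a single leaf = length x width x 0.73
leaf_areas = [length * width * 0.73 for length, width in leaves]

# LAI = total leaf area per plant x plant density (plants per m^2)
plant_density = 6.0                                     # assumed density
lai = sum(leaf_areas) * plant_density
print(f"LAI = {lai:.2f}")                               # ~0.65 for these values
```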

3. Methods

In this paper, our aim is to enable a deep neural network to estimate the maize LAI and biomass with limited in situ data. Deep neural networks are data-driven and, therefore, are difficult to apply directly to scenarios such as LAI and biomass estimation, where there is a shortage of training data. First, we attempt to augment the training data through an improved version of the mixup method. We then propose a novel gated Siamese deep neural network (GSDNN) to leverage both SAR and optical data to improve the accuracy of LAI and biomass estimation. The proposed GSDNN can effectively extract deep features from SAR and optical data through its use of a Siamese architecture and learn to fuse them to yield better estimation accuracy of the LAI and biomass through a gating mechanism.

3.1. From Mixup to Mixup+

The mixup method is a data augmentation method that was first proposed for image classification in machine learning [33]. It augments data by incorporating the prior knowledge that linear interpolations of feature representations should lead to the same interpolations of the associated targets. In mixup, virtual feature–target training examples are constructed based on convex combinations of pairs of examples and their labels sampled from the mixup vicinal distribution as follows:
$$\tilde{x} = \lambda x_i + (1 - \lambda)\,x_j, \qquad \tilde{y} = \lambda y_i + (1 - \lambda)\,y_j, \tag{1}$$
where $x_i$ and $x_j$ are raw input vectors, $y_i$ and $y_j$ are the corresponding labels,
and $\lambda \sim \mathrm{Beta}(\alpha, \alpha)$ for $\alpha \in (0, \infty)$. The hyperparameter $\alpha$ controls the interpolation coefficient $\lambda$, i.e., the strength of the interpolation between feature–target pairs. The function $f$ can then be learned by minimizing the following expression:
$$R(f) = \frac{1}{m} \sum_{i=1}^{m} \ell\big(f(\tilde{x}_i), \tilde{y}_i\big), \tag{2}$$
where $m$ is the total number of samples.
This is known as the empirical vicinal risk (EVR) principle [42]. Previous studies have shown that mixup is simple but quite effective in image classification.
Motivated by mixup, we tried to adapt it for use in maize LAI and biomass estimation instead of image classification. We termed this adaptation mixup+. Unlike the applications considered in previous studies, maize LAI and biomass estimation is a regression task and, therefore, much more sensitive to the problem of inconsistency between source and target (IST) caused by data augmentation. As a linear interpolation method, mixup still suffers from the IST problem when the training data are nonlinear. As has been shown in previous studies, the values of the LAI and biomass gradually increase with the nonlinear growth of maize. Although the use of a smaller $\lambda$ can alleviate the IST problem, this reduces the diversity of the virtual data and increases the risk of a large neural network memorizing the limited data, leading to serious overfitting of the model.
To solve this problem, we propose a training method that predicts the interpolation coefficient $\lambda$ from $\tilde{x}$, as follows:
$$R(f) = \frac{1}{m} \sum_{i=1}^{m} \ell\!\left(\frac{f(\tilde{x}_i) - y_{i2}}{y_{i1} - y_{i2}},\; \lambda\right), \tag{3}$$
where $y_{i1}$ and $y_{i2}$ are the in situ LAI or biomass values corresponding to the synthetic $\tilde{x}_i$. This provides the model with the ability to decouple the interpolated data into its components, which greatly improves its prediction ability and alleviates the inconsistency between source and target. As can be seen from Equation (3), the loss function is weighted according to the distance between the two interpolated samples: as this distance increases, the weight of the virtual samples constructed from them gradually weakens, and thus the inconsistency between source and target is reduced. In this paper, we call the method represented by Equations (2) and (3) mixup+.
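As a concrete illustration, here is a minimal PyTorch sketch of the virtual-sample construction in Equation (1) and the mixup+ loss in Equation (3); the small eps guard against a zero denominator is our own assumption, not part of the original formulation.

```python
import torch

def mixup_plus_batch(x, y, alpha=0.2):
    """Build virtual samples per Equation (1), keeping both endpoint labels
    so that the mixup+ loss can renormalize the prediction later."""
    lam = torch.distributions.Beta(alpha, alpha).sample()
    idx = torch.randperm(x.size(0))
    x_tilde = lam * x + (1 - lam) * x[idx]
    return x_tilde, y, y[idx], lam

def mixup_plus_loss(pred, y1, y2, lam, eps=1e-6):
    """Equation (3): the prediction, renormalized by the two endpoint
    labels, should recover the interpolation coefficient lambda."""
    lam_hat = (pred - y2) / (y1 - y2 + eps)  # eps: assumed numerical guard
    return ((lam_hat - lam) ** 2).mean()
```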

3.2. Deep Neural Network for LAI and Biomass Estimation: GSDNN

Figure 2 presents the architecture of our proposed GSDNN. This model consists mainly of two types of modules: a fusion layer and a regression layer. The former is used for deep fusion of optical and SAR data, and the latter is used to perform quantitative inversion of the maize LAI and biomass. First, a gating mechanism and a Siamese architecture are used to realize the effective fusion of optical and SAR data. Then, multitask learning is used to obtain the maize LAI, Biomass_wet, and Biomass_dry simultaneously, which helps to overcome the overfitting problem and improve model accuracy.

3.2.1. Fusion Layer

The fusion layer consists of two main parts: (1) a gated control layer (GCL), which extracts the complementary effective information of each channel and reduces mutual interference, and (2) a fully connected layer (FCL), which performs a nonlinear transformation of the features and maps data from high to low dimensions or from low to high dimensions.
In the $i$th fusion layer, we denote the input data of the optical and SAR channels by $x_i^o$ and $x_i^s$, respectively. In the GCL, the optical channel obtains complementary information from the SAR channel, and vice versa. Specifically, the gating mechanism, which is designed to select effective information, is defined as
$$g_i^o = \sigma(W_i^o \cdot x_i^o + b_i^o), \qquad g_i^s = \sigma(W_i^s \cdot x_i^s + b_i^s), \tag{4}$$
where the superscripts $o$ and $s$ indicate the optical and SAR channels, respectively, and $\sigma(\cdot)$ denotes the activation function; in this paper, ReLU is used. $W_i^o$, $b_i^o$, $W_i^s$, and $b_i^s$ are the parameters of the gating mechanism and are learned with the stochastic gradient descent (SGD) algorithm. We then obtain the output of the GCL for the fusion layer as follows:
$$h_i^s = g_i^o \odot x_i^o + x_i^s, \qquad h_i^o = g_i^s \odot x_i^s + x_i^o, \tag{5}$$
where $\odot$ denotes the Hadamard product. As shown in Equation (5), the information extracted from the optical data is selectively integrated into the SAR channel, thus enriching the information of the SAR channel, and vice versa. In this way, the GCL enables deep fusion of optical and SAR information.
The FCL in the fusion layer is a nonlinear transformation that increases the depth of the network and the fitting ability of the model. The output of the fusion layer is defined as follows:
$$y_i^o = \sigma(W_{i,f}^o \cdot h_i^o + b_{i,f}^o), \qquad y_i^s = \sigma(W_{i,f}^s \cdot h_i^s + b_{i,f}^s), \tag{6}$$
where $y_i^o$ and $y_i^s$ denote the outputs of the optical and SAR channels, respectively, in the $i$th fusion layer. $W_{i,f}^o$ and $b_{i,f}^o$ are the parameters of the nonlinear transformation of the optical channel, and $W_{i,f}^s$ and $b_{i,f}^s$ are those of the SAR channel.
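The following PyTorch sketch shows how one such fusion layer could be implemented, under the assumption that both channels have already been projected to a common hidden size (300 in Section 3.5); the class and variable names are ours, not from the paper.

```python
import torch
import torch.nn as nn

class FusionLayer(nn.Module):
    """One GSDNN fusion layer: a gated control layer (Eqs. (4)-(5))
    followed by per-channel fully connected transforms (Eq. (6))."""

    def __init__(self, dim: int = 300):
        super().__init__()
        self.gate_o = nn.Linear(dim, dim)  # W_i^o, b_i^o
        self.gate_s = nn.Linear(dim, dim)  # W_i^s, b_i^s
        self.fc_o = nn.Linear(dim, dim)    # W_{i,f}^o, b_{i,f}^o
        self.fc_s = nn.Linear(dim, dim)    # W_{i,f}^s, b_{i,f}^s
        self.act = nn.ReLU()

    def forward(self, x_o: torch.Tensor, x_s: torch.Tensor):
        g_o = self.act(self.gate_o(x_o))   # Eq. (4): optical gate
        g_s = self.act(self.gate_s(x_s))   # Eq. (4): SAR gate
        h_s = g_o * x_o + x_s              # Eq. (5): gated optical -> SAR
        h_o = g_s * x_s + x_o              # Eq. (5): gated SAR -> optical
        return self.act(self.fc_o(h_o)), self.act(self.fc_s(h_s))  # Eq. (6)
```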

3.2.2. Regression Layer

The regression layer is mainly used to explore the relationship between the depth features obtained from the fusion layer and the maize LAI and biomass. Considering the difference between the LAI and biomass, multitask learning is used to obtain the maize LAI, Biomass_wet, and Biomass_dry simultaneously, which helps to overcome the overfitting problem and improve model accuracy [43,44].
First, the regression layer concatenates the optical and SAR information from the fusion layer:
$$x_r = [\,y_l^o;\; y_l^s\,], \tag{7}$$
where $l$ denotes the last fusion layer. Then, an FCL is used to implement dimensionality reduction and feature optimization:
$$x_f = f_{\mathrm{FCL}}(x_r). \tag{8}$$
Note that $f_{\mathrm{FCL}}(\cdot)$ denotes an FCL. All of the FCLs considered in this paper have the same structure, namely,
$$y = f_{\mathrm{FCL}}(x; \theta) = \sigma(W \cdot x + b), \tag{9}$$
where $x$ and $y$ denote the input and output, respectively, of the FCL, and $\theta$ represents the parameters learned by training, including $W$ and $b$; $\sigma$ is the activation function, for which ReLU is used in this paper. All of the FCLs in Figure 2 have their own independent parameters. Finally, based on the fused optical and SAR deep features, an independent fully connected network is used for the regression of each maize parameter, as follows:
$$\mathrm{LAI}^{*} = f_{\mathrm{FCL}}^{\mathrm{LAI}}(x_f), \qquad \mathrm{Biomass}_{\mathrm{wet}}^{*} = f_{\mathrm{FCL}}^{\mathrm{Biomass\_wet}}(x_f), \qquad \mathrm{Biomass}_{\mathrm{dry}}^{*} = f_{\mathrm{FCL}}^{\mathrm{Biomass\_dry}}(x_f), \tag{10}$$
where $\mathrm{LAI}^{*}$, $\mathrm{Biomass}_{\mathrm{wet}}^{*}$, and $\mathrm{Biomass}_{\mathrm{dry}}^{*}$ are the estimated values of the model.
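A corresponding sketch of the regression head follows, with the sizes from Section 3.5 (concatenated input of 600, hidden size 300, one scalar output per task); the dict-based multitask structure is our own reading of Equations (7)–(10).

```python
import torch
import torch.nn as nn

class RegressionLayer(nn.Module):
    """GSDNN regression head: concatenate the two channel outputs (Eq. (7)),
    reduce with a shared FCL (Eq. (8)), then apply an independent FCL per
    target (Eq. (10))."""

    def __init__(self, dim: int = 300, hidden: int = 300):
        super().__init__()
        self.shared = nn.Sequential(nn.Linear(2 * dim, hidden), nn.ReLU())
        self.heads = nn.ModuleDict({
            name: nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU(),
                                nn.Linear(hidden, 1))
            for name in ("lai", "biomass_wet", "biomass_dry")
        })

    def forward(self, y_o: torch.Tensor, y_s: torch.Tensor):
        x_f = self.shared(torch.cat([y_o, y_s], dim=-1))  # Eqs. (7)-(8)
        return {name: head(x_f).squeeze(-1)               # Eq. (10)
                for name, head in self.heads.items()}
```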

3.2.3. Timestamp Embedding

Given that crop growth parameters exhibit a regular trend of change as the growth period progresses, this study encodes time information at a half-month (15-day) resolution to take the growth stage into account and thereby enhance the inversion accuracy of the model. Because of the discreteness and large ranges of variation of remote sensing data and imaging dates, simply taking the imaging date as a feature would reduce the generalization ability of the model. To deal with this problem, inspired by the word vector technique in artificial intelligence (AI), we propose timestamp embedding to encode time information. In detail, one year is divided into 25 time groups, with 15 days as a time stage, and each group is represented by an n-dimensional vector.
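A minimal sketch of this embedding; the exact day-to-group indexing is our assumption, not specified in the paper.

```python
import torch.nn as nn

# 25 half-month groups, each mapped to a learned n-dimensional vector
# (n = 10 in Section 3.5).
time_embedding = nn.Embedding(num_embeddings=25, embedding_dim=10)

def time_group(day_of_year: int) -> int:
    """Map a day of year (1-365) to one of the 25 fifteen-day groups."""
    return min((day_of_year - 1) // 15, 24)  # days 361-365 fold into group 24
```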

3.2.4. Objective Function

In this subsection, we discuss in detail the objective function of the GSDNN for maize LAI and biomass inversion. We denote by $y_{\mathrm{LAI}}$, $y_{\mathrm{Biomass\_wet}}$, and $y_{\mathrm{Biomass\_dry}}$ the in situ LAI and biomass data and by $x_{\mathrm{opt}}$ and $x_{\mathrm{SAR}}$ the optical and SAR data. A training sample can then be expressed as $(x_{\mathrm{opt}}, x_{\mathrm{SAR}}, y_{\mathrm{LAI}}, y_{\mathrm{Biomass\_wet}}, y_{\mathrm{Biomass\_dry}})$. According to the minimum mean-square error (MMSE) criterion, the objective function based on multitask learning is as follows:
$$L = \min \frac{1}{3} \Big[ \mathrm{MSE}(\mathrm{LAI}^{*}, y_{\mathrm{LAI}}) + \mathrm{MSE}(\mathrm{Biomass}_{\mathrm{wet}}^{*}, y_{\mathrm{Biomass\_wet}}) + \mathrm{MSE}(\mathrm{Biomass}_{\mathrm{dry}}^{*}, y_{\mathrm{Biomass\_dry}}) \Big], \tag{11}$$
where $\mathrm{LAI}^{*}$, $\mathrm{Biomass}_{\mathrm{wet}}^{*}$, and $\mathrm{Biomass}_{\mathrm{dry}}^{*}$ are the estimated values of the GSDNN, and MSE is the mean-square error, which is defined as
$$\mathrm{MSE}(\hat{y}, y) = \frac{1}{n} \sum_{i=1}^{n} (\hat{y}_i - y_i)^2. \tag{12}$$
To obtain the final objective function based on the training examples from mixup+, we combine Equations (3), (11), and (12):
$$L = \min \frac{1}{3m} \sum_{i=1}^{m} \left[ \left( \frac{T_{\mathrm{LAI}}(\hat{x}_{\mathrm{opt}}, \hat{x}_{\mathrm{SAR}}) - y_{\mathrm{LAI},2}}{y_{\mathrm{LAI},1} - y_{\mathrm{LAI},2}} - \lambda \right)^{2} + \left( \frac{T_{\mathrm{Biomass\_wet}}(\hat{x}_{\mathrm{opt}}, \hat{x}_{\mathrm{SAR}}) - y_{\mathrm{Biomass\_wet},2}}{y_{\mathrm{Biomass\_wet},1} - y_{\mathrm{Biomass\_wet},2}} - \lambda \right)^{2} + \left( \frac{T_{\mathrm{Biomass\_dry}}(\hat{x}_{\mathrm{opt}}, \hat{x}_{\mathrm{SAR}}) - y_{\mathrm{Biomass\_dry},2}}{y_{\mathrm{Biomass\_dry},1} - y_{\mathrm{Biomass\_dry},2}} - \lambda \right)^{2} \right], \tag{13}$$
where $T_{\mathrm{LAI}}(\cdot,\cdot)$, $T_{\mathrm{Biomass\_wet}}(\cdot,\cdot)$, and $T_{\mathrm{Biomass\_dry}}(\cdot,\cdot)$ denote the GSDNN models for the LAI, Biomass_wet, and Biomass_dry, respectively. In this paper, Equations (11) and (13) are used as the objective functions for the in situ data and the augmented data, respectively.
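In code, Equation (13) could look like the following sketch, consuming the task-keyed output dict of the RegressionLayer above; the dict keying and the eps guard are our own choices.

```python
def gsdnn_mixup_plus_loss(preds, y1, y2, lam, eps=1e-6):
    """Equation (13): average the mixup+ squared error over the three tasks.
    preds, y1, y2 are dicts keyed by 'lai', 'biomass_wet', 'biomass_dry'."""
    terms = [(((preds[t] - y2[t]) / (y1[t] - y2[t] + eps) - lam) ** 2).mean()
             for t in preds]
    return sum(terms) / len(terms)
```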

3.2.5. Accuracy Assessment

To reduce the impact of data randomness, fivefold cross-validation was used to assess the accuracy of the LAI and biomass estimation models. First, the dataset was randomly divided into five parts, each of which was used in turn as the test data. Then, 10% of the data from the remaining four parts were randomly selected as the validation set, with all of the remaining data used as the training set. The coefficient of determination (R²) and the root-mean-square error (RMSE) were used to assess the precision of the LAI and biomass estimation models. Each model was thus trained and tested five times per cross-validation run; the cross-validation was repeated 100 times, and the mean value over all runs was taken as the final result.
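As an illustration, this protocol could be implemented with scikit-learn utilities as follows; train_and_eval is a placeholder for fitting a model with early stopping on the validation set and scoring R²/RMSE on the test fold.

```python
import numpy as np
from sklearn.model_selection import KFold, train_test_split

def cross_validate(X, Y, runs=100):
    """Fivefold cross-validation with a 10% validation split, repeated."""
    scores = []
    for _ in range(runs):
        for train_idx, test_idx in KFold(n_splits=5, shuffle=True).split(X):
            tr_idx, val_idx = train_test_split(train_idx, test_size=0.1)
            scores.append(train_and_eval(X, Y, tr_idx, val_idx, test_idx))
    return np.mean(scores, axis=0)  # mean R2/RMSE over all runs
```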

3.3. GSDNN Workflow

Figure 3 shows the workflow of the GSDNN, which includes the following procedures:
  • Preprocessing optical and SAR data with the methods described in Section 2.2.
  • Normalizing the input data obtained in the previous step using the standard preprocessing method of deep learning, to avoid the optimization difficulties caused by excessive differences among the data dimensions. In detail, given input data $(x, y)$, the normalization is
    $$x := \frac{x - \bar{x}}{\sigma^2}, \tag{14}$$
    where $\bar{x}$ denotes the average of the input data and $\sigma^2$ denotes its variance.
  • Initializing the model parameters using He’s initialization method [45].
  • Randomly sampling a batch of data as the input of the GSDNN model and then performing forward propagation using Equations (4)–(10) to predict the LAI and biomass.
  • Computing the prediction loss using Equation (13) and judging the convergence of the model on the validation set. If the prediction loss on the validation set keeps decreasing, which indicates that the model has not yet converged, training proceeds to the next step; otherwise, the training process is ended.
  • Using the stochastic gradient descent method to update the parameters of the GSDNN. Go to Step 4.

3.4. Experimental Data Preparation

In this paper, the 158 samples from the four growth stages were used, with each sample consisting of in situ data (LAI and biomass) and the corresponding optical and SAR features. First, we selected six bands (B2–B4, B8, and B11–B12 for Sentinel-2 and B2–B8 for Landsat-8) and five vegetation indices (NDVI, RVI, EVI, SAVI, and MSAVI) and extracted their GLCM texture features as the optical features (59 dimensions). Then, we selected the two polarization channels (VV and VH) and their ratio (VV/VH), the polarization decomposition parameters (H, A, and α), and their GLCM texture features as the SAR features (22 dimensions). Finally, we combined the optical and SAR features as the input features, with the in situ LAI and biomass as their labels.
Considering the different ranges of the features and their labels, we normalized them as in Equation (14). When the GSDNN performs the LAI and biomass inversion, the model output must be converted back to the original value range, which is done by the inverse operation of Equation (14), as follows:
$$\hat{y} := y \cdot \sigma^{2} + \bar{y}, \tag{15}$$
where $y$ is the normalized model output.
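A sketch of this normalization and its inverse; following the text, the variance itself is used as the scale factor, and train_y stands for the array of training labels.

```python
import numpy as np

mu, var = train_y.mean(axis=0), train_y.var(axis=0)  # training statistics
y_norm = (train_y - mu) / var    # Eq. (14): normalize labels
y_orig = y_norm * var + mu       # Eq. (15): back to LAI or g/m^2 units
```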
In mixup+, α was set to 0.2 [33]. Synthetic data were generated by repeated random sampling: for each draw, two of the 158 samples were randomly selected and mixed according to Equation (1). The number of draws determined the size of the synthetic dataset. A possible implementation of this synthesis step is sketched below.
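In this sketch, the ratio argument controls how many synthetic samples are produced per original sample (Section 5.4 finds a ratio of 5 to work best); the function name and signature are ours.

```python
import numpy as np

def synthesize(X, Y, ratio=5, alpha=0.2, seed=None):
    """Generate ratio * len(X) synthetic samples by mixing random pairs
    according to Equation (1); endpoint labels and lambda are kept for
    the mixup+ objective of Equation (13)."""
    rng = np.random.default_rng(seed)
    n, k = len(X), ratio * len(X)
    i, j = rng.integers(n, size=k), rng.integers(n, size=k)
    lam = rng.beta(alpha, alpha, size=(k, 1))
    X_syn = lam * X[i] + (1 - lam) * X[j]
    return X_syn, Y[i], Y[j], lam
```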

3.5. Settings and Training Details for the GSDNN

As shown in Figure 2, the GSDNN consists of two main parts: a fusion layer and a regression layer. In this paper, three fusion layers were used to fuse the optical and SAR features. The optical and SAR input features were, respectively, 59-dimensional and 22-dimensional, and thus, the total input features were 81-dimensional. Other parameters of each layer for the GSDNN were defined as follows:
  • Fusion layer: The hidden layer sizes of both the gated control layer (GCL) and the FCL were set to 300, the output dimension was 300, and the internal parameter sizes of the three fusion layers were identical.
  • Regression layer: This layer took as input the concatenation of the outputs of the SAR and optical channels. In the present case, the size of the input layer for the regression layer was set to 600, the hidden layer sizes of the LAI, Biomass_wet, and Biomass_dry predictors were set to 300, and the final outputs were three scalars.
  • The timestamp embedding dimension n was set to 10.
For training, Adam [46] was used as the optimizer, with a learning rate of 0.0001. The batch size was set to 100. To reduce the impact of the randomness of training and data, 100 runs of fivefold cross-validation were conducted, and the mean and variance of the results of the 500 experiments in total were calculated and reported. Specifically, the data were randomly divided into five parts each time, and each part was used in turn as the test set. Because the duration of model training depended on convergence on the validation set, a random selection of 10% of the data from the remaining four parts was taken as the validation set, with all of the remaining data used as the training set. In this study, we used the PyTorch deep learning framework based on Python 3.6.8 and ran all experiments under the Ubuntu 18.04 operating system on a GeForce RTX 2080Ti GPU; a condensed training loop is sketched below.
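In this sketch, GSDNN stands for a hypothetical module chaining an input projection, three FusionLayers, and the RegressionLayer sketched earlier, and loader is assumed to yield mixup+ batches (Sections 3.1 and 3.4); neither is code from the paper.

```python
import torch

model = GSDNN()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
for x_opt, x_sar, y1, y2, lam in loader:          # batch size 100
    preds = model(x_opt, x_sar)                   # dict of three outputs
    loss = gsdnn_mixup_plus_loss(preds, y1, y2, lam)  # Eq. (13)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```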

4. Results

4.1. Comparison of the GSDNN with Other Machine Learning Models

As already mentioned, a variety of traditional machine learning methods, such as MLR and SVR, and simple neural networks, such as multilayer perceptrons (MLPs), have been used to retrieve the LAI and biomass. To evaluate the performance of the GSDNN, we compared its accuracy in LAI and biomass estimation with those of MLR, RFR, SVR, and an MLP. Table 3 shows the R² and RMSE of the different models for the LAI, Biomass_wet, and Biomass_dry (all values are test results). On the whole, the GSDNN with mixup+ achieved the best results in terms of R² and RMSE. The R² values for the LAI, Biomass_wet, and Biomass_dry were 0.71, 0.78, and 0.86, respectively, and the RMSEs were 0.58, 871.83 g/m², and 150.76 g/m², respectively. Figure 4 compares the LAI and biomass estimated by the GSDNN-based model with the measured values (all values are test results); the measured values were in good agreement with the estimated values for both the LAI and biomass. Figure 5 shows the training and testing behavior of the GSDNN-based model: as the number of iterations increased, the training loss gradually decreased and the test R² gradually increased for both the LAI and biomass. The MLP, as a simple neural network, achieved the lowest accuracy (for the LAI, R² = 0.58 and RMSE = 0.65; for Biomass_wet, R² = 0.61 and RMSE = 1043.04 g/m²; for Biomass_dry, R² = 0.57 and RMSE = 246.55 g/m²), and MLR had slightly better accuracy (for the LAI, R² = 0.61 and RMSE = 0.56; for Biomass_wet, R² = 0.64 and RMSE = 1153.90 g/m²; for Biomass_dry, R² = 0.58 and RMSE = 202.53 g/m²). SVR and RFR, the most widely used machine learning methods, were nearly equal in accuracy, higher than MLR and the MLP but lower than the GSDNN with mixup+. Overall, the GSDNN with mixup+ had the best performance in both LAI and biomass estimation for maize.

4.2. Comparison of Multiple Machine Learning Models before and after Use of Mixup+

Most machine learning methods, especially those with complex structures, are trained to minimize their average error over a large amount of training data. With the help of mixup+, we could construct massive numbers of virtual feature–target training samples. We therefore compared the most commonly used machine learning methods (MLR, RFR, SVR, and MLP) and the GSDNN before and after the use of mixup+. Table 4 shows the results of this comparison in terms of R² and RMSE (all values are test results). After the use of mixup+, the traditional machine learning methods, namely, MLR, RFR, and SVR, showed no marked differences, with variations in R² within 0.05. However, the results from the neural networks, namely, the MLP and the GSDNN, were clearly better after mixup+ was used, with R² increasing by 0.05–0.08 for the MLP and by 0.13–0.14 for the GSDNN. Thus, after the use of mixup+ to augment the training data, the results from the GSDNN showed the greatest improvement, followed by those from the MLP, while the traditional machine learning methods (MLR, RFR, and SVR) showed little improvement.

4.3. Results of Maize LAI and Biomass Estimation Based on the GSDNN with Mixup+

As we have seen, the GSDNN with mixup+ achieved the best results for maize LAI and biomass estimation. Therefore, we estimated the maize LAI and biomass in Wuqing District using the GSDNN with mixup+. Figure 6 shows the results of maize LAI and biomass estimation in the trumpet and filling growth stages. From Figure 6a,b, it can be seen that the maize LAI was much lower in the filling stage than in the trumpet stage, owing to the reduction in the number of leaves during the late growth period. Over the growth period as a whole, the LAI first increased, reached a peak in the flowering stage, and then slightly decreased. From Figure 6c,d, it can be seen that the biomass in the filling stage was significantly higher than that in the trumpet stage, because the maize was still growing robustly after the trumpet stage and biomass was gradually accumulating. The results of maize LAI and biomass estimation in this paper were therefore consistent with the trends of change found in the field measurements, indicating that the inversion method proposed in this paper is reliable.

5. Discussion

5.1. GSDNN Compared with Other Machine Learning Methods

In this paper, we have proposed a novel method, the GSDNN + mixup+, for integrating optical and SAR data to estimate the maize LAI and biomass. To evaluate its performance, we compared it with other machine learning methods, namely, MLR, SVR, RFR, and MLP. The results show that the GSDNN + mixup+ achieves the best accuracy in terms of R² and RMSE among all of the LAI and biomass estimation models (Table 3 and Table 4). Although SVR and RFR are the most popular methods for LAI and biomass estimation, both give results that are somewhat poorer than those of the GSDNN + mixup+. This indicates that, owing to its multilayer structure and large number of parameters, the deep neural network has advantages over traditional machine learning methods in integrating optical and SAR data and in fitting the complex relationship between remote sensing features and the LAI or biomass, which is in agreement with what was found by Bahrami et al. [47]. MLR, a simple linear method, has an accuracy much lower than those of SVR and RFR, which may be because MLR has limited capability to fit LAI and biomass variation. The neural network MLP has the lowest accuracy among all of the methods, which may appear surprising, but it actually illustrates another problem: it is difficult for neural networks to achieve good results with limited training data. This is why the GSDNN gives good results when combined with mixup+. Overall, compared with other machine learning methods, the combination GSDNN + mixup+ proposed in this paper has greater potential for LAI and biomass estimation.

5.2. Effects of Combining Mixup+ with Different Machine Learning Models

As discussed above, mixup+ can be used to augment training data to avoid the overfitting problem. Therefore, the effects of combining mixup+ with different machine learning methods, namely, MLR, SVR, RFR, MLP, and the GSDNN, were analyzed. From Table 4, it can be seen that the performances of the MLP and the GSDNN both improved greatly when mixup+ was used, with R² increasing by 0.05–0.08 and 0.13–0.14, respectively. However, there was little improvement in the performances of the traditional machine learning methods, with R² increasing by less than 0.05. These results indicate that a deep neural network, as a data-driven method, needs more training data than a traditional learning method. This may be because deep neural networks have many more parameters and more complex structures and therefore require more data for training. The GSDNN proposed in this paper exploits both a gating mechanism and a Siamese architecture to realize effective fusion of optical and SAR data with the aim of improving the accuracy of LAI and biomass estimates. The GSDNN is therefore much more sensitive to the size of the training dataset than the other methods, and thus it is the method that benefits most from the use of mixup+.

5.3. Effects of Integrating Optical and SAR Data on LAI and Biomass Estimates

Owing to the different imaging mechanisms involved in obtaining optical and SAR data, each type of data has its own advantages and disadvantages when used for LAI and biomass estimates. As Table 5 shows, with the use of integrated optical and SAR data, the accuracy of the MLP is improved compared with that of an MLP model using only optical or only SAR data. Because the GSDNN is designed for integrated optical and SAR data, it cannot provide results when supplied with only optical or only SAR data. We therefore compared the GSDNN with the MLP using only optical or SAR data, and this comparison also indicated that integrating optical and SAR data helps to give a better result, which is consistent with the results of Luo et al. [17] and Bahrami et al. [48]. Moreover, without the use of mixup+, the GSDNN, although it uses a gating mechanism and a Siamese architecture, achieves a result similar to that of the MLP for the LAI and only slightly better results for Biomass_wet and Biomass_dry; this is because the shortage of training data limits its performance. When the augmented data from mixup+ are used, the GSDNN achieves the best result, and its R² increases much more than that of the MLP. Consequently, integrating optical and SAR data contributes to the improved performance of the GSDNN in providing LAI and biomass estimates.

5.4. Effects of the Amount of Synthetic Data from Mixup+ on LAI and Biomass Estimates

As already mentioned, mixup+ can construct synthetic data to increase the amount of training data with the aim of preventing overfitting. The ratio of synthetic data to original data then becomes an important factor affecting the experimental results, so the influence of the amount of synthetic data on the LAI and biomass estimates obtained using the GSDNN was investigated. Table 6 shows the accuracies of the LAI and biomass estimates as the ratio of synthetic data to original data increases. At first, with an increasing proportion of synthetic data, the estimation accuracies of the LAI and biomass steadily improve in terms of R², and although the RMSEs fluctuate slightly, they generally exhibit a downward trend. This indicates that the addition of synthetic data can improve the convergence point of the model and thereby its inversion accuracy. The greatest accuracy is achieved when there is five times as much synthetic data as original data: for the LAI, R² increases from 0.58 to 0.71 and the RMSE decreases from 0.64 to 0.58, while for Biomass_dry, R² increases from 0.73 to 0.86 and the RMSE decreases from 181.62 g/m² to 150.76 g/m². However, further increases in the amount of synthetic data do not continue to improve the inversion accuracy: when the ratio of synthetic data to original data reaches 10, the estimation accuracies for both the LAI and biomass have decreased to varying degrees, possibly because an excessive amount of synthetic data introduces noise into the model. Thus, for the case considered in this paper, the most appropriate amount of synthetic data is five times the amount of original data; in general, the appropriate amount of synthetic data needs to be analyzed under the specific conditions at hand.

6. Conclusions

In this study, a novel method, the GSDNN + mixup+, was proposed to integrate optical and SAR data for the estimation of the maize LAI and biomass from limited in situ data. We proposed a modified version of the mixup training method, called mixup+, to deal with the problem of data shortage, and we found that the most appropriate amount of synthetic data from mixup+ was five times the amount of original data. The GSDNN proposed in this study realizes deep fusion of optical and SAR data, and its use of a gating mechanism and a Siamese architecture leads to more accurate estimation of the maize LAI and biomass. The GSDNN + mixup+ gives significantly more accurate estimates of the LAI and biomass than other machine learning methods, such as MLR, SVR, RFR, and MLP, with R² values of 0.71, 0.78, and 0.86 and RMSEs of 0.58, 871.83 g/m², and 150.76 g/m² for the LAI, Biomass_wet, and Biomass_dry, respectively (Table 3 and Table 4). This study of the GSDNN + mixup+ provides insights into how a deep neural network with integrated optical and SAR data can be used to estimate the maize LAI and biomass with limited data. Evaluating the performance of the GSDNN + mixup+ for the estimation of other crop growth parameters and further exploring novel methods to overcome in situ data shortages will be topics of our future work.

Author Contributions

Conceptualization, P.L. and W.H.; methodology, P.L.; software, P.L.; validation, P.L., H.Y., Q.J., J.L., A.G. and B.Q.; formal analysis, P.L. and J.L.; investigation, P.L. and Q.J.; writing—original draft preparation, P.L.; writing—review and editing, W.H., H.Y. and Q.J.; supervision, W.H.; funding acquisition, W.H. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the National Natural Science Foundation of China (41871339), the National Key Research and Development Program of China (2021YFB3900501), the Open Fund of the State Key Laboratory of Remote Sensing Science (OFSLRSS202222), the Hainan Provincial Key R&D Program of China (ZDYF2021GXJS038), the Youth Innovation Promotion Association CAS (2021119), and the Future Star Talent Program of the Aerospace Information Research Institute, Chinese Academy of Sciences (2020KTYWLZX08).

Data Availability Statement

Not applicable.

Acknowledgments

The authors gratefully acknowledge ESA and USGS for providing Sentinel-1, Sentinel-2, and Landsat-8 data. We thank our colleagues who participated in the field surveys and data collection. We would also like to thank the editor and reviewers for their valuable comments and suggestions.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Ranum, P.; Peña-Rosas, J.P.; Garcia-Casal, M.N. Global maize production, utilization, and consumption. Ann. N. Y. Acad. Sci. 2014, 1312, 105–112.
  2. Shiferaw, B.; Prasanna, B.M.; Hellin, J.; Bänziger, M. Crops that feed the world 6. Past successes and future challenges to the role played by maize in global food security. Food Secur. 2011, 3, 307–327.
  3. Nuss, E.T.; Tanumihardjo, S.A. Maize: A paramount staple crop in the context of global nutrition. Compr. Rev. Food Sci. Food Saf. 2010, 9, 417–436.
  4. Xia, T.; Miao, Y.; Wu, D.; Shao, H.; Khosla, R.; Mi, G. Active optical sensing of spring maize for in-season diagnosis of nitrogen status based on nitrogen nutrition index. Remote Sens. 2016, 8, 605.
  5. Zhang, J.; Huang, Y.; Pu, R.; Gonzalez-Moreno, P.; Yuan, L.; Wu, K.; Huang, W. Monitoring plant diseases and pests through remote sensing technology: A review. Comput. Electron. Agric. 2019, 165, 104943.
  6. Bi, W.; Wang, M.; Weng, B.; Yan, D.; Yang, Y.; Wang, J. Effects of drought–flood abrupt alternation on the growth of summer maize. Atmosphere 2019, 11, 21.
  7. Yang, J.; Gong, P.; Fu, R.; Zhang, M.; Chen, J.; Liang, S.; Xu, B.; Shi, J.; Dickinson, R. The role of satellite remote sensing in climate change studies. Nat. Clim. Chang. 2013, 3, 875–883.
  8. Che, Y.; Wang, Q.; Zhou, L.; Wang, X.; Li, B.; Ma, Y. The effect of growth stage and plant counting accuracy of maize inbred lines on LAI and biomass prediction. Precis. Agric. 2022, 1–27.
  9. Fang, H.; Baret, F.; Plummer, S.; Schaepman-Strub, G. An overview of global leaf area index (LAI): Methods, products, validation, and applications. Rev. Geophys. 2019, 57, 739–799.
  10. Jinsong, C.; Yu, H.; Xinping, D. Monitoring rice growth in Southern China using TerraSAR-X dual polarization data. In Proceedings of the 2017 6th International Conference on Agro-Geoinformatics, Fairfax, VA, USA, 7–10 August 2017; IEEE: Piscataway, NJ, USA, 2017; pp. 1–4.
  11. Chen, P.F.; Nicolas, T.; Wang, J.H.; Philippe, V.; Huang, W.J.; Li, B.G. New index for crop canopy fresh biomass estimation. Spectrosc. Spectr. Anal. 2010, 30, 512–517.
  12. Gitelson, A.A.; Viña, A.; Arkebauer, T.J.; Rundquist, D.C.; Keydan, G.; Leavitt, B. Remote estimation of leaf area index and green leaf biomass in maize canopies. Geophys. Res. Lett. 2003, 30.
  13. Price, J.C. Estimating leaf area index from satellite data. IEEE Trans. Geosci. Remote Sens. 1993, 31, 727–734.
  14. Fei, Y.; Jiulin, S.; Hongliang, F.; Zuofang, Y.; Jiahua, Z.; Yunqiang, Z.; Kaishan, S.; Zongming, W.; Maogui, H. Comparison of different methods for corn LAI estimation over northeastern China. Int. J. Appl. Earth Obs. Geoinf. 2012, 18, 462–471.
  15. Mandal, D.; Hosseini, M.; McNairn, H.; Kumar, V.; Bhattacharya, A.; Rao, Y.; Mitchell, S.; Robertson, L.D.; Davidson, A.; Dabrowska-Zielinska, K. An investigation of inversion methodologies to retrieve the leaf area index of corn from C-band SAR data. Int. J. Appl. Earth Obs. Geoinf. 2019, 82, 101893.
  16. Darvishzadeh, R.; Atzberger, C.; Skidmore, A.; Schlerf, M. Mapping grassland leaf area index with airborne hyperspectral imagery: A comparison study of statistical approaches and inversion of radiative transfer models. ISPRS J. Photogramm. Remote Sens. 2011, 66, 894–906.
  17. Luo, P.; Liao, J.; Shen, G. Combining spectral and texture features for estimating leaf area index and biomass of maize using Sentinel-1/2, and Landsat-8 data. IEEE Access 2020, 8, 53614–53626.
  18. Wang, J.; Xiao, X.; Bajgain, R.; Starks, P.; Steiner, J.; Doughty, R.B.; Chang, Q. Estimating leaf area index and aboveground biomass of grazing pastures using Sentinel-1, Sentinel-2 and Landsat images. ISPRS J. Photogramm. Remote Sens. 2019, 154, 189–201.
  19. Shafique, A.; Cao, G.; Khan, Z.; Asad, M.; Aslam, M. Deep learning-based change detection in remote sensing images: A review. Remote Sens. 2022, 14, 871.
  20. Ma, L.; Liu, Y.; Zhang, X.; Ye, Y.; Yin, G.; Johnson, B.A. Deep learning in remote sensing applications: A meta-analysis and review. ISPRS J. Photogramm. Remote Sens. 2019, 152, 166–177.
  21. Zhong, L.; Hu, L.; Zhou, H. Deep learning based multi-temporal crop classification. Remote Sens. Environ. 2019, 221, 430–443.
  22. Osco, L.P.; Junior, J.M.; Ramos, A.P.M.; de Castro Jorge, L.A.; Fatholahi, S.N.; de Andrade Silva, J.; Matsubara, E.T.; Pistori, H.; Gonçalves, W.N.; Li, J. A review on deep learning in UAV remote sensing. Int. J. Appl. Earth Obs. Geoinf. 2021, 102, 102456.
  23. Wang, D.; Cao, W.; Zhang, F.; Li, Z.; Xu, S.; Wu, X. A review of deep learning in multiscale agricultural sensing. Remote Sens. 2022, 14, 559.
  24. Yuan, Q.; Shen, H.; Li, T.; Li, Z.; Li, S.; Jiang, Y.; Xu, H.; Tan, W.; Yang, Q.; Wang, J.; et al. Deep learning in environmental remote sensing: Achievements and challenges. Remote Sens. Environ. 2020, 241, 111716.
  25. Xu, X.; Chen, Y.; Zhang, J.; Chen, Y.; Anandhan, P.; Manickam, A. A novel approach for scene classification from remote sensing images using deep learning methods. Eur. J. Remote Sens. 2021, 54, 383–395.
  26. Zheng, L.; Xu, W. An improved adaptive spatial preprocessing method for remote sensing images. Sensors 2021, 21, 5684.
  27. Sun, X.; Wang, P.; Wang, C.; Liu, Y.; Fu, K. PBNet: Part-based convolutional neural network for complex composite object detection in remote sensing imagery. ISPRS J. Photogramm. Remote Sens. 2021, 173, 50–65.
  28. Zhang, S.; Wang, X.; Li, P.; Wang, L.; Zhu, M.; Zhang, H.; Zeng, Z. An improved YOLO algorithm for rotated object detection in remote sensing images. In Proceedings of the 2021 IEEE 4th Advanced Information Management, Communications, Electronic and Automation Control Conference (IMCEC), Chongqing, China, 18–20 June 2021; IEEE: Piscataway, NJ, USA, 2021; Volume 4, pp. 840–845.
  29. Potnis, A.V.; Durbha, S.S.; Shinde, R.C. Semantics-driven remote sensing scene understanding framework for grounded spatio-contextual scene descriptions. ISPRS Int. J. Geo-Inf. 2021, 10, 32.
  30. Rahnemoonfar, M.; Chowdhury, T.; Sarkar, A.; Varshney, D.; Yari, M.; Murphy, R.R. FloodNet: A high resolution aerial imagery dataset for post flood scene understanding. IEEE Access 2021, 9, 89644–89654.
  31. Wong, S.C.; Gatt, A.; Stamatescu, V.; McDonnell, M.D. Understanding data augmentation for classification: When to warp? In Proceedings of the 2016 International Conference on Digital Image Computing: Techniques and Applications (DICTA), Gold Coast, Australia, 30 November–2 December 2016; IEEE: Piscataway, NJ, USA, 2016; pp. 1–6.
  32. Simonyan, K.; Zisserman, A. Very deep convolutional networks for large-scale image recognition. arXiv 2014, arXiv:1409.1556.
  33. Zhang, H.; Cisse, M.; Dauphin, Y.N.; Lopez-Paz, D. mixup: Beyond empirical risk minimization. arXiv 2017, arXiv:1710.09412.
  34. Viña, A.; Gitelson, A.A.; Nguy-Robertson, A.L.; Peng, Y. Comparison of different vegetation indices for the remote assessment of green leaf area index of crops. Remote Sens. Environ. 2011, 115, 3468–3478.
  35. Kross, A.; McNairn, H.; Lapen, D.; Sunohara, M.; Champagne, C. Assessment of RapidEye vegetation indices for estimation of leaf area index and biomass in corn and soybean crops. Int. J. Appl. Earth Obs. Geoinf. 2015, 34, 235–248.
  36. Jin, X.; Yang, G.; Xu, X.; Yang, H.; Feng, H.; Li, Z.; Shen, J.; Zhao, C.; Lan, Y. Combined multi-temporal optical and radar parameters for estimating LAI and biomass in winter wheat using HJ and RADARSAR-2 data. Remote Sens. 2015, 7, 13251–13272.
  37. Karimi, S.; Sadraddini, A.A.; Nazemi, A.H.; Xu, T.; Fard, A.F. Generalizability of gene expression programming and random forest methodologies in estimating cropland and grassland leaf area index. Comput. Electron. Agric. 2018, 144, 232–240.
  38. Dey, R.; Salem, F.M. Gate-variants of gated recurrent unit (GRU) neural networks. In Proceedings of the 2017 IEEE 60th International Midwest Symposium on Circuits and Systems (MWSCAS), Boston, MA, USA, 6–9 August 2017; IEEE: Piscataway, NJ, USA, 2017; pp. 1597–1600.
  39. Koch, G.; Zemel, R.; Salakhutdinov, R. Siamese neural networks for one-shot image recognition. In Proceedings of the ICML Deep Learning Workshop, Lille, France, 6–11 July 2015; Volume 2.
  40. Hochreiter, S.; Schmidhuber, J. Long short-term memory. Neural Comput. 1997, 9, 1735–1780.
  41. Xu, L.; Zhang, H.; Wang, C.; Zhang, B.; Liu, M. Crop classification based on temporal information using Sentinel-1 SAR time-series data. Remote Sens. 2018, 11, 53.
  42. Chapelle, O.; Weston, J.; Bottou, L.; Vapnik, V. Vicinal risk minimization. In Advances in Neural Information Processing Systems 13; MIT Press: London, UK, 2000.
  43. Ruder, S. An overview of multi-task learning in deep neural networks. arXiv 2017, arXiv:1706.05098.
  44. Zhang, Y.; Yang, Q. A survey on multi-task learning. IEEE Trans. Knowl. Data Eng. 2021, 34, 5586–5609.
  45. He, K.; Zhang, X.; Ren, S.; Sun, J. Delving deep into rectifiers: Surpassing human-level performance on ImageNet classification. In Proceedings of the IEEE International Conference on Computer Vision, Santiago, Chile, 7–13 December 2015; pp. 1026–1034.
  46. Kingma, D.P.; Ba, J. Adam: A method for stochastic optimization. arXiv 2014, arXiv:1412.6980.
  47. Bahrami, H.; Homayouni, S.; Safari, A.; Mirzaei, S.; Mahdianpari, M.; Reisi-Gahrouei, O. Deep learning-based estimation of crop biophysical parameters using multi-source and multi-temporal remote sensing observations. Agronomy 2021, 11, 1363.
  48. Bahrami, H.; Homayouni, S.; McNairn, H.; Hosseini, M.; Mahdianpari, M. Regional crop characterization using multi-temporal optical and synthetic aperture radar earth observations data. Can. J. Remote Sens. 2022, 48, 258–277.
Figure 1. Location of the study area and distribution of sample points.
Figure 2. Gated Siamese deep neural network (GSDNN) architecture for maize LAI and biomass estimation.
Figure 3. Workflow of the GSDNN for maize LAI and biomass estimation.
Figure 4. Comparison of the measured values with estimates from the model based on the GSDNN: (a) LAI; (b) Biomass_wet; (c) Biomass_dry.
Figure 5. Training and testing results of the LAI and biomass with estimates from the model based on the GSDNN: (a) LAI; (b) Biomass_wet; (c) Biomass_dry.
Figure 6. Results of maize LAI and biomass estimation based on the GSDNN with mixup+: (a) LAI—Trumpet; (b) LAI—Filling; (c) Biomass—Trumpet; (d) Biomass—Filling.
Table 1. List of satellite data parameters in this study.

| Data Type | Dataset | Date of Acquisition (Month/Day) | Resolution | Source | Revisit |
|---|---|---|---|---|---|
| SAR data | Sentinel-1B SLC | 07/20, 08/01, 08/18, 08/30 | 5 m × 20 m | ESA | 12-day |
| SAR data | Sentinel-1B GRDH | 07/20, 08/01, 08/18, 08/30 | 22 m × 20 m | ESA | 12-day |
| Optical data | Sentinel-2A MSI | 08/03, 08/16, 09/05 | 10 m × 10 m | ESA | 10-day |
| Optical data | Landsat-8 OLI | 07/22 | 15 m × 15 m | USGS | 16-day |
Table 2. Statistics of the in situ LAI and biomass data (Min/Max/Mean).

| Growth Stage | LAI | Biomass_wet (g/m²) | Biomass_dry (g/m²) |
|---|---|---|---|
| 07/20–07/23 (Jointing) | 0.38 / 2.34 / 1.23 | 193.54 / 2091.26 / 852.32 | 16.48 / 190.14 / 84.46 |
| 07/31–08/02 (Trumpet) | 1.37 / 4.58 / 2.88 | 949.57 / 6201.51 / 2653.85 | 93.66 / 697.78 / 307.55 |
| 08/15–08/17 (Flowering) | 2.20 / 4.84 / 3.49 | 1989.14 / 5979.48 / 4330.61 | 242.68 / 963.31 / 628.28 |
| 09/03–09/05 (Filling) | 1.55 / 5.13 / 3.28 | 2657.97 / 8448.79 / 5013.90 | 497.97 / 1559.08 / 1051.33 |
Table 3. Comparison of the GSDNN with other machine learning methods.

| Model | LAI R² | LAI RMSE | Biomass_wet R² | Biomass_wet RMSE (g/m²) | Biomass_dry R² | Biomass_dry RMSE (g/m²) |
|---|---|---|---|---|---|---|
| MLR | 0.61 | 0.56 | 0.64 | 1153.90 | 0.58 | 202.53 |
| SVR | 0.67 | 0.63 | 0.70 | 958.27 | 0.71 | 202.85 |
| RFR | 0.64 | 0.63 | 0.70 | 995.39 | 0.74 | 200.03 |
| MLP | 0.58 | 0.65 | 0.61 | 1043.04 | 0.57 | 246.55 |
| GSDNN + mixup+ | 0.71 | 0.58 | 0.78 | 871.83 | 0.86 | 150.76 |
Table 4. Comparison of multiple machine learning methods before and after the use of mixup+.

| Model | LAI R² | LAI RMSE | Biomass_wet R² | Biomass_wet RMSE (g/m²) | Biomass_dry R² | Biomass_dry RMSE (g/m²) |
|---|---|---|---|---|---|---|
| MLR | 0.61 | 0.56 | 0.64 | 1153.90 | 0.58 | 202.53 |
| MLR + mixup+ | 0.64 | 0.64 | 0.68 | 1029.95 | 0.56 | 257.92 |
| SVR | 0.67 | 0.63 | 0.70 | 958.27 | 0.71 | 202.85 |
| SVR + mixup+ | 0.63 | 0.66 | 0.71 | 1003.86 | 0.71 | 211.23 |
| RFR | 0.64 | 0.63 | 0.70 | 995.39 | 0.74 | 200.03 |
| RFR + mixup+ | 0.67 | 0.61 | 0.71 | 938.54 | 0.75 | 192.11 |
| MLP | 0.58 | 0.65 | 0.61 | 1043.04 | 0.57 | 246.55 |
| MLP + mixup+ | 0.64 | 0.59 | 0.69 | 923.98 | 0.62 | 212.22 |
| GSDNN | 0.58 | 0.64 | 0.64 | 1027.31 | 0.73 | 181.62 |
| GSDNN + mixup+ | 0.71 | 0.58 | 0.78 | 871.83 | 0.86 | 150.76 |
Table 5. Comparison of results between different deep learning models.

| Model | LAI R² | LAI RMSE | Biomass_wet R² | Biomass_wet RMSE (g/m²) | Biomass_dry R² | Biomass_dry RMSE (g/m²) |
|---|---|---|---|---|---|---|
| MLP (optical) | 0.56 | 0.75 | 0.55 | 1157.71 | 0.51 | 252.65 |
| MLP (SAR) | 0.26 | 0.93 | 0.28 | 1600.63 | 0.25 | 333.54 |
| MLP | 0.58 | 0.65 | 0.61 | 1043.04 | 0.57 | 246.55 |
| MLP + mixup+ | 0.64 | 0.59 | 0.69 | 923.98 | 0.62 | 212.22 |
| GSDNN | 0.58 | 0.64 | 0.64 | 1027.31 | 0.73 | 181.62 |
| GSDNN + mixup+ | 0.71 | 0.58 | 0.78 | 871.83 | 0.86 | 150.76 |
Table 6. Influence of the amount of synthetic data in mixup+ on the inversion accuracies of the LAI and biomass.

| Synthetic/Original Data Ratio | LAI R² | LAI RMSE | Biomass_wet R² | Biomass_wet RMSE (g/m²) | Biomass_dry R² | Biomass_dry RMSE (g/m²) |
|---|---|---|---|---|---|---|
| 0.0 | 0.58 | 0.64 | 0.64 | 1027.31 | 0.73 | 181.62 |
| 0.2 | 0.60 | 0.68 | 0.68 | 1032.45 | 0.76 | 180.75 |
| 0.5 | 0.64 | 0.65 | 0.71 | 1022.83 | 0.79 | 180.37 |
| 1.0 | 0.67 | 0.63 | 0.74 | 940.03 | 0.80 | 167.30 |
| 2.0 | 0.67 | 0.64 | 0.74 | 971.36 | 0.81 | 169.82 |
| 5.0 | 0.71 | 0.58 | 0.78 | 871.83 | 0.86 | 150.76 |
| 10.0 | 0.71 | 0.59 | 0.64 | 876.17 | 0.85 | 151.48 |
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
