Article

Transferability of Models for Predicting Rice Grain Yield from Unmanned Aerial Vehicle (UAV) Multispectral Imagery across Years, Cultivars and Sensors

1 National Engineering and Technology Center for Information Agriculture, MARA Key Laboratory for Crop System Analysis and Decision Making, Jiangsu Key Laboratory for Information Agriculture, Nanjing 210095, China
2 Collaborative Innovation Center for Modern Crop Production, Nanjing Agricultural University, Nanjing 210095, China
* Author to whom correspondence should be addressed.
Drones 2022, 6(12), 423; https://doi.org/10.3390/drones6120423
Submission received: 15 November 2022 / Revised: 9 December 2022 / Accepted: 14 December 2022 / Published: 16 December 2022

Abstract
Timely and accurate prediction of crop yield prior to harvest is vital for precise agricultural management. Unmanned aerial vehicles (UAVs) provide a fast and convenient approach to crop yield prediction, but most existing crop yield models have rarely been tested across different years, cultivars and sensors. This has limited the ability of these yield models to be transferred to other years or regions or to be potentially used with data from other sensors. In this study, UAV-based multispectral imagery was used to predict rice grain yield at the booting and filling stages from four field experiments, involving three years, two rice cultivars, and two UAV sensors. Reflectance and texture features were extracted from the UAV imagery, and vegetation indices (VIs) and normalized difference texture indices (NDTIs) were computed. The models were independently validated to test the stability and transferability across years, rice cultivars, and sensors. The results showed that the red edge normalized difference texture index (RENDTI) was superior to other texture indices and vegetation indices for model regression with grain yield in most cases. However, the green normalized difference texture index (GNDTI) achieved the highest prediction accuracy in model validation across rice cultivars and sensors. The yield prediction model of Japonica rice achieved stronger transferability to Indica rice with root mean square error (RMSE), bias, and relative RMSE (RRMSE) of 1.16 t/ha, 0.08, and 11.04%, respectively. Model transferability was improved significantly between different sensors after band correction with a decrease of 15.05–59.99% in RRMSE. Random forest (RF) was found to be a good solution to improve the model transferability across different years and cultivars and obtained the highest prediction accuracy with RMSE, bias, and RRMSE of 0.94 t/ha, −0.21, and 9.37%, respectively. This study provides a valuable reference for crop yield prediction when existing models are transferred across different years, cultivars and sensors.

1. Introduction

In the 21st century, one of the greatest challenges is to increase crop production to meet the rising demand for cereals from a growing population. Rice (Oryza sativa L.) is one of the most important cereal crops and feeds more people than any other crop in the world [1]. Accurate and timely prediction of rice grain yield is crucial for global food security and in-season crop management. Traditionally, crop yield prediction has depended on field investigations, which are costly, labor-intensive, and poorly timed.
Remote sensing techniques have been widely applied in precision agriculture, especially for estimating biomass [2,3], LAI [4,5], and N/chlorophyll content [6,7], and for detecting disease [8,9]. Crop yield prediction with remote sensing has also attracted much attention in recent decades, with most studies relying on ground-based or satellite platforms. Owing to their large coverage, satellite images have been widely adopted for crop yield prediction [10,11], but their limited spatial and temporal resolution constrains their usefulness in southern China, where farmland is highly fragmented. Alternatively, portable ground-based sensors (e.g., GreenSeeker, Crop Circle, ASD hyperspectral sensors) are convenient and easy to operate for crop yield prediction over small areas [12,13]. In terms of efficiency and prediction accuracy, neither satellite images nor ground-based sensors are optimal for crop yield prediction in smallholder fields.
The emergence of unmanned aerial vehicles (UAVs) has made it convenient to increase monitoring frequency and spatio-temporal resolution at low cost. To date, UAVs have been widely used to estimate various structural, biophysical, and biochemical parameters in precision agriculture [14], such as plant height [15,16], LAI [17,18], biomass [19,20], N/chlorophyll content [21,22,23], and yield [24,25,26]. Numerous studies have used UAV imagery of various types to predict crop yield with different methodologies (Table 1), with multispectral imagery being the most common. As sensor technology has developed rapidly, numerous UAV-mountable multispectral cameras have appeared on the market (Table 1). The Parrot Sequoia, one of the most widely used sensors, is equipped with four core spectral bands (550, 660, 735, and 790 nm) and can be mounted on multi-rotor and fixed-wing UAVs [27,28,29,30]. Another popular multispectral camera is the RedEdge developed by MicaSense, which has five spectral bands (475, 560, 668, 717, and 842 nm) [28,31,32]. Other multispectral cameras, such as the Mini-MCA 6/12 and MQ022MG-CM, have also been used for crop yield prediction. Deng et al. [33] compared two UAV-based multispectral cameras (Mini-MCA6 and Sequoia) in data acquisition, processing, and application. They found that the narrowband Mini-MCA6 camera produced more accurate reflectance values than the broadband Sequoia camera, but that the accuracy of the VIs did not depend entirely on the accuracy of the reflectance. Ramos et al. [28] used two multispectral sensors (Sequoia and RedEdge) to predict maize yield with random forest, but the difference between the two sensors for yield prediction remained unclear. Because sensor technology develops rapidly, UAV sensors are updated year by year. The main problem is that a prediction model developed for one sensor is difficult to transfer to data collected by another sensor, since different sensors have different spectral bands, bandwidths, fields of view, and processing procedures. Thus, in this study, we used hyperspectral reflectance measured over the rice canopy to simulate the sensor bands, and the relationships between corresponding bands of different sensors were used for band correction. To the best of our knowledge, this is the first time that model transfer with a band-correction strategy has been proposed to validate the robustness of a grain yield prediction model across different UAV sensors.
For crop yield prediction, most studies have used spectral vegetation indices (VIs) or color indices derived from UAV imagery [26,43]. The problem is that most VIs saturate easily at high biomass levels, resulting in low accuracy of crop yield prediction at late vegetative stages (e.g., the booting stage). The heading and filling stages have also been taken as optimal periods for crop yield estimation [25,32,34], because these two stages are important for yield formation. However, the emergence of panicles at the heading and filling stages reduces the sensitivity of VIs for yield prediction [26,41]. Reducing the adverse influence of panicles is a great challenge; we therefore employed texture information to improve the prediction accuracy of crop yield. Unlike spectral reflectance, texture in the context of image analysis is the spatial variability of image tones and describes the relationship between two elements of surface cover [44]. Texture therefore contains structural information, as variation in texture is related to changes in the spatial distribution of vegetation [45]. Zheng et al. [20] proposed two texture indices to estimate rice aboveground biomass and demonstrated that they outperformed other VIs throughout the whole growing season. Given the close relationship between aboveground biomass and yield, texture analysis might have great potential for crop yield prediction. Furthermore, canopy structure varies markedly from the vegetative to the reproductive stage due to the emergence of panicles, a change that image texture might capture well. Therefore, it is worth exploring the capability of image texture for crop yield prediction.
Although various models, such as multivariate linear regression and other statistical models [26], have been developed to estimate crop yield from UAV-based image data, these data-driven empirical models are not robust and vary with years, cultivars, and climatic zones. Researchers have therefore focused on improving model transferability across different years, crop cultivars, and climatic zones. Duan et al. [35] found that single-stage VIs correlated weakly with the grain yield of different rice cultivars, indicating that cultivar influences the relationship between VIs and grain yield. In addition, when a yield prediction model is applied in a different climatic zone, the difference in rice growth duration must be considered. Consequently, verification and calibration of yield prediction models across years are useful not only for understanding productive potential among years but also for developing long-term yield forecasting strategies. Compared with other prediction models, machine-learning methods capture nonlinear and hierarchical relationships among multiple variables, which might be a good solution for datasets involving different years, cultivars, and climatic zones. Random forest (RF) is one of the most popular machine-learning methods, with the advantages of insensitivity to noise and resistance to overfitting [46]. Therefore, the purposes of this study are: (1) to explore the potential of texture information from UAV multispectral images for rice yield prediction; (2) to investigate the feasibility of band correction with canopy hyperspectral reflectance to improve the transferability of prediction models between different sensors; and (3) to validate the robustness of RF for yield prediction with a dataset from different years and cultivars.
This paper is organized as follows. The Materials and Methods section describes multispectral imagery collection and processing, grain yield collection, and model calibration and validation. The Results section presents our findings, including the performance of different VIs and NDTIs for yield prediction across years, rice cultivars, and sensors. The Discussion considers the advantages of texture indices for yield prediction, solutions to improve model transferability across years, rice cultivars, and sensors, and the implications and limitations of this study. Finally, the Conclusions summarize the main findings of this study.

2. Materials and Methods

2.1. Experimental Design

Four experiments were conducted in the Rugao (120°45′ E, 32°16′ N) and Xinghua (119°53′ E, 33°05′ N) districts of Jiangsu province, China, from 2015 to 2018 (Table 2). The annual average temperature, number of precipitation days, and annual precipitation were 15.6 °C, 115.1 days, and 1002.4 mm for Rugao and 15 °C, 124.3 days, and 1040.4 mm for Xinghua, respectively. All four experiments involved different rice (Oryza sativa L.) cultivars, nitrogen application rates, and planting densities. In-season weed and pest control was practiced according to regional recommendations.

2.2. Yield Data Collection

Before harvesting, one 1 m × 1 m subplot was randomly selected in the non-sampling area of each experimental plot. After harvesting, the rice grains were sun-dried and the final yield was determined (t/ha).

2.3. UAV, Sensor, and Image Acquisition

In Rugao, an eight-rotor UAV (Mikrokopter Inc., Moormerland, Germany) was used. This UAV has a maximum payload of 2.5 kg and a flight duration of 8–25 min, depending on the battery and actual payload. A six-band multispectral camera (Mini-MCA6, Tetracam, Chatsworth, CA, USA) was employed to acquire rice canopy images during the UAV flights. This camera comprises six individual image sensors with filters whose center wavelengths and full-width-at-half-maximum bandwidths are 490 ± 10 nm, 550 ± 10 nm, 680 ± 10 nm, 720 ± 10 nm, 800 ± 10 nm, and 900 ± 20 nm. The flight speed was set at 1.0 m/s, and the camera was oriented vertically downward. Images were acquired every 3 s throughout each flight at 100 m above ground level, giving a ground spatial resolution of 5.4 cm per pixel. The UAV was flown over the paddy field at the booting and filling stages during the three growing seasons (Table 2). After each flight, only one image (covering all the plots) was selected for subsequent analysis because of the small study area. Since each flight took about 10 min, the influence of the solar zenith angle was not considered. Before the flight campaigns, 25 ground control points (GCPs) were evenly distributed across the study area, and their geographic coordinates were obtained with an X900 GNSS receiver (Huace Inc., Beijing, China). The GCPs were used for band registration and to geo-reference the UAV images from different growth stages.
In Xinghua, a six-rotor UAV (DJI M600 PRO, Shenzhen, China) equipped with an Airphen multispectral camera (HI-PHEN, Avignon, France) was used to collect images over the paddy field. The Airphen camera has six channels with filter centers at 450, 530, 570, 675, 730, and 850 nm, each with a spectral resolution of 10 nm. The focal length of the camera lens is 8 mm, and the camera was oriented vertically downward. Images of 1280 × 960 pixels were acquired at 1 Hz and saved in TIFF format. The flight plan was kept constant across the whole season and was designed with the DJI GS PRO software package. The across-track and along-track overlap rates were both 90%. The flight speed and altitude were 3 m/s and 100 m (pixel size = 4.7 cm), respectively.
During the study, ground-based hyperspectral reflectance of the rice canopy was measured with an ASD FieldSpec Pro spectrometer (Analytical Spectral Devices, Boulder, CO, USA). Measurements were conducted immediately after each flight, and a white-reference calibration was performed before data acquisition for each plot. The bare fiber was held approximately 1 m vertically above the rice canopy. Each plot was measured at three points, three reflectance spectra were acquired at each point, and the average was taken as the spectral reflectance of the plot. Since the two UAV sensors had different spectral bands, correlations were computed between the corresponding bands of the two sensors as simulated from the ASD spectra (Figure 1). All corresponding bands from the two sensors were highly correlated (R2 ≥ 0.98) except for the red-edge band. These inter-band relationships were used for band correction between the two cameras.
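To make the band-correction step concrete, the following is a minimal Python sketch of the procedure described above: each camera band is simulated by convolving the ASD canopy spectra with an assumed Gaussian spectral response function, and a linear relationship is then fitted between corresponding bands of the two cameras. The Gaussian response shape, the FWHM values, and the random stand-in spectra are illustrative assumptions, not the cameras' published filter responses.

```python
import numpy as np

def gaussian_srf(wavelengths, center, fwhm):
    # Assumed Gaussian spectral response function (the true filter
    # responses of the MCA and Airphen bands are not reproduced here).
    sigma = fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    srf = np.exp(-0.5 * ((wavelengths - center) / sigma) ** 2)
    return srf / srf.sum()

def simulate_band(asd_wl, asd_refl, center, fwhm):
    # Weight the 1 nm ASD reflectance by the band response and integrate.
    return np.sum(asd_refl * gaussian_srf(asd_wl, center, fwhm), axis=-1)

asd_wl = np.arange(350, 1001, 1.0)                           # ASD sampling grid (nm)
asd_refl = np.random.uniform(0.02, 0.6, (36, asd_wl.size))   # stand-in canopy spectra

# Simulate the red-edge band of each camera (MCA: 720 nm, Airphen: 730 nm).
mca_re = simulate_band(asd_wl, asd_refl, 720, 10)
air_re = simulate_band(asd_wl, asd_refl, 730, 10)

# Linear band correction: map Airphen red-edge reflectance onto the MCA scale.
slope, intercept = np.polyfit(air_re, mca_re, deg=1)
air_re_corrected = slope * air_re + intercept
```

In practice, the fit would be computed from the measured plot spectra rather than random values, one regression per band pair, and the correction would be applied to the target sensor's reflectance before any index calculation.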

2.4. Calculation of VIs and Texture Indices

After the flights, imagery from the MCA camera was processed as described in [47], including noise reduction, lens vignetting correction, removal of lens distortion, band registration, and radiometric correction. All procedures were conducted in the IDL/ENVI environment (Exelis Visual Information Solutions, Boulder, CO, USA). For imagery from the Airphen camera, the processing workflow followed [48], including image mosaicking, geometric correction, and radiometric calibration. Image mosaicking was conducted with Agisoft PhotoScan Professional software (version 1.4.5, Agisoft LLC, St. Petersburg, Russia). The reflectance of each plot was extracted with a region of interest (ROI) tool over the non-sampling area of the plot, and the mean reflectance within the ROI was taken as the reflectance of the plot.
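As an illustration of the plot-level reflectance extraction, the sketch below clips a multiband reflectance mosaic to each plot polygon and averages the pixels inside the ROI. The file names, the plot_id attribute, and the use of rasterio/geopandas are illustrative assumptions, since the original workflow used the IDL/ENVI ROI tool.

```python
import numpy as np
import geopandas as gpd
import rasterio
from rasterio.mask import mask

# Hypothetical inputs: digitized plot boundaries (non-sampling areas only)
# and a float-valued six-band reflectance mosaic.
plots = gpd.read_file("plot_rois.shp")
records = []
with rasterio.open("rice_multispectral_reflectance.tif") as src:
    plots = plots.to_crs(src.crs)
    for _, plot in plots.iterrows():
        # Clip the image to the plot polygon; pixels outside the ROI are
        # filled with NaN so they drop out of the per-band mean.
        img, _ = mask(src, [plot["geometry"]], crop=True,
                      nodata=np.nan, filled=True)
        band_means = np.nanmean(img.reshape(img.shape[0], -1), axis=1)
        records.append({"plot_id": plot["plot_id"], "reflectance": band_means})
```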
A suite of VIs was calculated from the multispectral images to examine their correlations with rice grain yield (Table 3). In addition, the normalized difference texture index (NDTI) proposed by [20] was calculated from texture metrics of different spectral bands. Zheng et al. [20] reported that the red-edge normalized difference texture index (RENDTI) and the green normalized difference texture index (GNDTI) had strong capabilities for estimating rice aboveground biomass. Given the close correlation between biomass and grain yield, both RENDTI and GNDTI were employed to predict rice grain yield.
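For clarity, a minimal sketch of how an NDTI can be computed is given below. Following the gray-level co-occurrence matrix (GLCM) framework of Haralick and Shanmugam [44], the mean (MEA) statistic is derived per band and combined as in Table 3. The quantization to 32 levels, the random patch contents, and the single distance/angle are illustrative choices, not the study's exact texture settings.

```python
import numpy as np
from skimage.feature import graycomatrix  # scikit-image >= 0.19

def glcm_mean(band, levels=32, distance=1, angle=0.0):
    # GLCM mean (MEA) of a single-band patch:
    # MEA = sum_i sum_j i * P(i, j), with P the normalized co-occurrence matrix.
    q = np.digitize(band, np.linspace(band.min(), band.max(), levels)) - 1
    glcm = graycomatrix(q.astype(np.uint8), [distance], [angle],
                        levels=levels, symmetric=True, normed=True)
    p = glcm[:, :, 0, 0]
    i = np.arange(levels).reshape(-1, 1)
    return float(np.sum(i * p))

def ndti(mea_a, mea_b):
    # Normalized difference texture index from two band-wise GLCM means [20].
    return (mea_a - mea_b) / (mea_a + mea_b)

# Example: RENDTI from the 800 nm and 720 nm bands of one plot ROI
# (random patches stand in for the 2-D reflectance arrays of a plot).
nir_patch = np.random.uniform(0.3, 0.6, (40, 40))
rededge_patch = np.random.uniform(0.1, 0.3, (40, 40))
rendti = ndti(glcm_mean(nir_patch), glcm_mean(rededge_patch))
```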

2.5. Meteorological Data Collection and Analysis

Meteorological data were acquired from the China Meteorological Administration (http://data.cma.cn/, accessed on 1 October 2021), including daily average temperature (Temdaily, °C), daily maximum temperature (°C), daily minimum temperature (°C), daily precipitation (mm), and hours of sunshine (h). The temperature data were used to derive Temdaily and the accumulative growing degree days (AGDD), defined as the sum of growing degree days (GDD = Temdaily − 12.5 °C) from transplanting to the day of destructive sampling.
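As a worked example of the AGDD definition above, the following sketch sums daily GDD over the period from transplanting to sampling. The dates and the constant temperature series are placeholders, not the station records used in the study.

```python
import pandas as pd

T_BASE = 12.5  # base temperature (deg C) used in the GDD definition above

def agdd(daily_mean_temp: pd.Series, transplanting: str, sampling: str) -> float:
    # AGDD = sum of (Temdaily - 12.5 degC) from transplanting to sampling,
    # following the definition in the text (no clipping of negative GDD).
    window = daily_mean_temp.loc[transplanting:sampling]
    return float((window - T_BASE).sum())

# Usage with a hypothetical daily temperature series indexed by date.
dates = pd.date_range("2016-06-15", "2016-09-08", freq="D")
temps = pd.Series(26.0, index=dates)  # stand-in for station records
print(agdd(temps, "2016-06-15", "2016-08-14"))  # AGDD at the booting flight
```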
Figure 2 shows the meteorological factors during the rice growing seasons in different years. The monthly average temperature was lowest in Exp. 1 and highest in Exp. 4, especially during the critical growth stages in July and August. Precipitation was highest in Exp. 2 and lowest in Exp. 3 in July, September, and October. Furthermore, Exp. 3 received more sunshine than the other experiments, especially in July and August.

2.6. Model Calibration and Validation

A total of 168 samples from the four experiments covered two climatic zones, three years, two rice cultivars, two growth stages, and two UAV sensors. Data processing was conducted in Matlab R2021a (The MathWorks, Inc., Natick, MA, USA). Simple linear regression was used to fit the relationships between rice grain yield and the remote sensing variables (VIs and texture indices) derived from the UAV multispectral images. The samples were separated into different categories for model calibration and validation according to year, rice cultivar, and sensor. In addition, the RF regression algorithm was employed to predict yield from the whole dataset, which was randomly partitioned into training (75%) and validation (25%) sets. The predictive capability of the models was assessed by the root mean square error (RMSE), bias, and relative root mean square error (RRMSE).
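The evaluation protocol can be summarized in a short sketch: RMSE, bias, and RRMSE are computed on a held-out set, and an RF regressor is trained on a random 75/25 partition. The random data, the RF hyperparameters, and the sign convention for bias (predicted minus observed) are assumptions for illustration; the study's own analysis was run in Matlab.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

def rmse(y_true, y_pred):
    return float(np.sqrt(np.mean((y_pred - y_true) ** 2)))

def bias(y_true, y_pred):
    # Sign convention assumed here: mean of (predicted - observed).
    return float(np.mean(y_pred - y_true))

def rrmse(y_true, y_pred):
    # Relative RMSE, expressed as a percentage of the observed mean yield.
    return 100.0 * rmse(y_true, y_pred) / float(np.mean(y_true))

# X: remote sensing variables (VIs, NDTIs, optionally meteorological factors);
# y: plot-level grain yield (t/ha). Random values stand in for the field data.
rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(168, 8))
y = rng.uniform(4.6, 14.3, size=168)

# Random 75/25 partition for RF training and validation, as in the study.
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.25,
                                                  random_state=0)
rf = RandomForestRegressor(n_estimators=500, random_state=0)  # hyperparameters assumed
rf.fit(X_train, y_train)
y_hat = rf.predict(X_val)
print(rmse(y_val, y_hat), bias(y_val, y_hat), rrmse(y_val, y_hat))

# Relative importance of input variables (cf. Figure 10b,d).
importance = rf.feature_importances_
```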

3. Results

3.1. Variations of Rice Grain Yield and Meteorological Factors

Table 4 shows the descriptive statistics of grain yield in this study. The highest yield (14.33 t/ha) was obtained by Indica rice in Exp. 1, and the lowest (4.6 t/ha) by Japonica rice in Exp. 2. In all four experiments, the grain yield of Indica rice was higher than that of Japonica rice, whereas the yield of Japonica rice varied more, with a higher C.V. The yield of Japonica rice in Exp. 2 fluctuated the most (4.6–12.4 t/ha, C.V. = 27.88%), whereas the yield of Indica rice in Exp. 3 fluctuated the least (9.19–11.5 t/ha, C.V. = 6.31%).

3.2. Relationships of VIs and Texture Indices with Rice Grain Yield

Figure 3 shows the relationships between the VIs, texture indices, and rice grain yield. The color index (VARI) was weakly related to grain yield in most cases. At the booting stage, RENDTI was superior to the other indices for rice yield prediction, with R2 of 0.68–0.83 across different years and sensors, while GNDTI outperformed the other indices across rice cultivars. At the filling stage, NDRE was the optimal index for rice yield prediction across years, whereas RENDTI showed the highest correlation with rice yield (R2 = 0.57–0.71) across rice cultivars and sensors.
Considering years, rice cultivars, and sensors, the VIs and texture indices correlated more strongly with rice yield in 2016RG (Rugao, 2016), and the highest correlation (R2 = 0.83) was obtained by RENDTI at the booting stage. An obvious saturation of NDVI was found at the filling stage (Figure 4). The yield of Japonica rice correlated more strongly with the VIs and texture indices than that of Indica rice in most cases (Figure 5), and GNDTI obtained the highest correlation with yield (R2 = 0.73) at the booting stage. Across sensors, the VIs and texture indices from the Airphen exhibited stronger correlations with yield than those from the MCA in most cases, and the highest correlation (R2 = 0.76) was obtained by RENDTI at the booting stage. In addition, the index values of the MCA were higher than those of the Airphen (Figure 6).

3.3. Validation of Yield Prediction Models

3.3.1. Model Validation across Different Years

Table 5 shows the validation results of the prediction models across years. When the regression models of 2016 were applied to 2015, RENDTI exhibited the highest prediction accuracy at the booting stage, with RMSE, bias, and RRMSE of 1.23 t/ha, 0.42, and 12.25%, respectively. However, the 2016 models overestimated the 2015 yield at low levels and underestimated it at high levels (Figure 7a), and the booting-stage models were more accurate than the filling-stage models. When the models of 2016 were applied to 2018, NDRE outperformed the other indices and obtained the highest prediction accuracy at the filling stage, with RMSE, bias, and RRMSE of 1.19 t/ha, −0.34, and 13.32%, respectively. The 2016 models slightly overestimated the 2018 yield, and two clusters were observed (Figure 7b); in this case, the filling-stage models were more accurate than the booting-stage models.

3.3.2. Model Validation across Different Rice Cultivars

Table 6 shows the validation results of the prediction models across rice cultivars. When the Indica models were applied to Japonica rice, GNDTI exhibited the highest prediction accuracy at the booting stage, with RMSE, bias, and RRMSE of 1.39 t/ha, −0.60, and 16.46%, respectively. However, the Indica models overestimated Japonica yield at low levels and underestimated it at high levels (Figure 8a), and the booting-stage models were more accurate than the filling-stage models. When the Japonica models were applied to Indica rice, similar results were found, but the validation performance was better. GNDTI obtained the highest prediction accuracy at the booting stage, with RMSE, bias, and RRMSE of 1.16 t/ha, 0.08, and 11.04%, respectively, and the estimated yield was distributed evenly around the 1:1 line (Figure 8b).

3.3.3. Model Validation across Different Sensors

Table 7 shows the validation results of the prediction models across sensors. When the MCA models were applied to the Airphen data, GNDTI exhibited the highest prediction accuracy at the booting stage, with RMSE, bias, and RRMSE of 1.69 t/ha, 1.49, and 16.13%, respectively; however, the MCA models severely underestimated the Airphen yield (Figure 9a), and the booting-stage models were more accurate than the filling-stage models. When the Airphen models were applied to the MCA data, similar results were found, but the validation performance was worse. GNDTI obtained the highest prediction accuracy at the booting stage, with RMSE, bias, and RRMSE of 1.82 t/ha, −1.49, and 20.40%, respectively, but the Airphen models severely overestimated the MCA yield (Figure 9b). After band correction of the red-edge bands, the prediction accuracy improved significantly (Table 8), and the highest accuracy (RMSE = 1.29 t/ha, bias = 0.88, RRMSE = 12.37%) was obtained by RENDTI at the filling stage when the model was transferred from the MCA to the Airphen.

3.3.4. Yield Prediction with Random Forest

Figure 10 shows the performance of the RF models for rice yield prediction. The booting-stage model was superior to the filling-stage model, with RMSE, bias, and RRMSE of 0.94 t/ha, −0.21, and 9.37%, respectively, and the RF models clearly outperformed the linear models built with VIs and NDTIs. In the relative importance analysis, GNDTI was the most important variable of the booting-stage model, and RENDTI ranked first among the variables of the filling-stage model.

4. Discussion

4.1. Advantages of Texture Information on Predicting Rice Grain Yield

Most previous studies used spectral information such as VIs for crop yield prediction [25,26,29,35]. In this study, various VIs were employed to predict rice grain yield, and NDRE was the optimal VI, consistent with the findings of [26,39]. This is because the red-edge bands are sensitive to changes in pigment content, while the near-infrared bands are sensitive to variations in canopy structure, and together they can indicate leaf health status [53]. However, GNDTI exhibited higher accuracy for rice grain yield prediction than the optimal VI, which indicates the advantage of texture information for grain yield prediction. Compared with NDRE, GNDTI showed a broader variation at the filling stage, which might alleviate the saturation of spectral indices at high biomass levels and preserve sensitivity to yield variation in the reproductive growth stages. In addition, texture features can provide structural information when the relationship between spectral information and yield weakens [20,45]. Similarly, both Yue et al. [54] and Zheng et al. [20] found that textural measures were superior to spectral indices for crop biomass estimation. Furthermore, the NDTIs described a greater share of the yield variability at the filling stage, because texture indices can enhance canopy signals and minimize adverse interference from the soil background, sun angle, sensor view angle, and canopy geometry [45,55]. GNDTI and RENDTI ranked first in the relative importance analysis of the RF models at the booting and filling stages, respectively, further indicating that texture indices have great potential for crop yield prediction. In addition, Wang et al. [34] found that combining spectral and textural information could improve the prediction accuracy of rice grain yield. Therefore, the fusion of spectral and texture indices will be explored in future work to further improve crop yield prediction.

4.2. Influence of Rice Cultivar and Year on Yield Prediction

Most previous studies predicted crop yield using datasets from multiple years and cultivars and tested their models with cross-validation [24,28,32,40]; thus, the influence of year and cultivar on the prediction models remained unclear. Duan et al. [35] reported that spectral indices correlated weakly with grain yield across different rice cultivars. Similarly, we found that cultivar influenced the relationship between VIs and grain yield, and the prediction accuracy for Japonica rice was higher than that for Indica rice in most cases (Table 4). This can be explained by three factors. First, Indica rice has higher yield potential and stronger tillering ability than Japonica rice, leading to higher canopy coverage (Figure 11) and making the Indica spectrum more prone to saturation [56], which reduces the accuracy of yield prediction. Second, the panicle type of Japonica rice is erect, while that of Indica rice is scattered, and the proportion of panicles of Indica rice at the filling stage was larger than that of Japonica rice (Figure 11e,f); this made the canopy more complex and weakened the spectral signal of the leaves, which was not conducive to yield prediction [26]. Third, the yield variation of Japonica rice was larger than that of Indica rice, which made its prediction model more broadly applicable. Therefore, prediction models based on simple linear regression had weak transferability across rice cultivars.
Rice yield varied from year to year owing to differences in climate, nitrogen application level, and rice cultivar (Table 4), consistent with [25,26,35]. The yield prediction models differed significantly between years, and the 2016 models achieved the highest accuracy (Figure 3a,b). This can be explained by the fact that meteorological conditions varied between years and influenced the final grain yield [35]. The average temperatures of 2015–2018 were similar, but precipitation and hours of sunshine varied considerably. Among the three years, accumulative precipitation was highest in July 2016 and lowest in August 2016, and accumulative hours of sunshine in July and August 2016 were the highest. Abundant precipitation and sunshine benefit the vegetative growth of rice, but excessive precipitation and insufficient sunshine impede grain filling in the reproductive stage, resulting in low grain yield. Although the correlations between the remote sensing indicators and yield were highest in 2016, the transferability of the models between years was weak. Wan et al. [25] improved the robustness of their prediction model with a model-updating strategy, which added some samples from the testing dataset to the training dataset. In our study, the RF models showed stronger predictive ability for rice grain yield than simple linear regression, especially when meteorological data were added to the models [2], which can be explained by the ability of the RF algorithm to handle large, high-dimensional datasets efficiently [57]. Therefore, the transferability of yield prediction models can be improved by using the RF algorithm and fusing multi-source data.

4.3. The Variable Performance of Different Sensors on Rice Grain Yield Prediction

Previous studies have used various UAV-mounted multispectral cameras to predict crop yield, such as the Sequoia [24,29], RedEdge [31,32], Mini-MCA [26,35,41], and Airphen [37]. Different sensors have different wavelength bands, bandwidths, and fields of view, but few studies have compared the performance of different sensors for crop yield prediction. This study found that the vegetation and texture indices from the Airphen camera correlated more strongly with yield (Figure 3e,f), which could be explained by the Airphen's narrower field of view and higher spatial resolution, allowing more canopy detail to be captured. However, when the yield prediction models were transferred directly between the two sensors, the results were not satisfactory: the best accuracy was only RMSE = 1.69 t/ha, bias = 1.49, and RRMSE = 16.13% (Table 7). The main reason was that the core bands of the two sensors were inconsistent, with gaps of 50 nm and 10 nm in the near-infrared and red-edge bands, respectively. The near-infrared band is sensitive to canopy structure, while the red-edge band is sensitive to chlorophyll content [53]; the difference in spectral bands therefore gave the two sensors different sensitivities to canopy structure and chlorophyll content, leading to differences in index values and regression models. In addition, the two cameras used different radiometric correction methods: the Mini-MCA6 used the classical empirical linear correction method [26], while the Airphen used a rectangular gray panel with 8% reflectance to transform digital numbers into reflectance values [48]. Nevertheless, the model transferability between sensors could be improved significantly through band correction with hyperspectral reflectance data (Table 8). Therefore, band correction is a prerequisite when prediction models are transferred between sensors with different central bands [33].

4.4. Implications for Further Work

Grain yield is a crucial trait in crop breeding programs [58]. However, traditional manual measurement of crop yield is time- and labor-consuming, and measuring all traits in a breeding program is not viable given the cost of labor and the time required. By contrast, UAV-based remote sensing is a fast and efficient technology that can obtain yield information for a large number of breeding plots simultaneously [16,59]; it is therefore suitable for crop breeding and can improve the efficiency of breeding programs [16,60]. In this study, we found that GNDTI was superior to the spectral indices for rice grain yield prediction, which sheds new light on methods for crop yield prediction. For a high-yield rice breeding program, GNDTI derived from UAV-based multispectral imagery might be an efficient criterion for selecting high-yield cultivars. Moreover, the booting stage was identified as the optimal stage for yield prediction, indicating that crop yield can be predicted at an earlier stage.
However, since crop breeding programs involve a large number of cultivars, single-stage VIs would correlate weakly with grain yield [35]. When predicting grain yield for a breeding program, the canopy structure of each cultivar should therefore be taken into consideration when constructing prediction models. Moreover, breeding programs are conducted in different climatic zones across multiple years [61], so meteorological data should be employed to improve the applicability of yield prediction models. Machine learning or deep learning algorithms, which can handle high-dimensional datasets, should be employed to deal with datasets involving multiple years, ecological sites, crop cultivars, and sensors.

5. Conclusions

This study predicted rice grain yield from UAV-based multispectral images and assessed model transferability across years, rice cultivars, and sensors. Spectral and texture information was extracted from the UAV images, and yield prediction models were independently constructed and validated with datasets from different years, cultivars, and sensors. GNDTI was the best index for rice yield prediction across years, rice cultivars, and sensors. Across years, the 2016 models achieved higher prediction accuracy on the 2015 data, with RMSE, bias, and RRMSE of 1.23 t/ha, 0.42, and 12.25%, respectively. Across rice cultivars, the Japonica model obtained higher prediction accuracy on Indica rice, with RMSE, bias, and RRMSE of 1.16 t/ha, 0.08, and 11.04%, respectively. Furthermore, a sensor band-correction strategy successfully improved the model transferability between sensors. The RF model proved to be a good solution for a complicated dataset involving different years and cultivars, and the highest prediction accuracy (RMSE = 0.94 t/ha, bias = −0.21, RRMSE = 9.37%) was obtained by the RF model at the booting stage. The relative importance analysis of the RF models also confirmed the advantage of texture indices for grain yield prediction. Based on these results, the proposed sensor correction approach is promising for model transfer between sensors, and texture indices from high spatio-temporal resolution imagery enable accurate estimation of crop yield in smallholder fields and breeding programs. However, more data from multiple years, ecological sites, and cultivars should be collected to validate these findings, and future work will test this methodology on other crops.

Author Contributions

Conceptualization, H.Z. and T.C.; methodology, H.Z. and J.L.; formal analysis, H.Z., W.J., W.W. and D.L.; writing-original draft, H.Z. and C.G.; writing-review and editing, T.C. and X.Y.; funding acquisition, Y.Z., Y.T. and W.C. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by grants from the National Key Research and Development Program of China (2019YFE0125500-04), the Fundamental Research Funds for the Central Universities, the National Natural Science Foundation of China (32101617), the China Postdoctoral Science Foundation (2022T150327), the Natural Science Fund of Jiangsu Province (BK20190517), the earmarked fund for the Jiangsu Agricultural Industry Technology System (JATS [2018]290).

Data Availability Statement

Not applicable.

Acknowledgments

We would like to thank Xiaoqing Xu and Xiang Zhou for their help with field data collection, and the reviewers for recommendations that improved the manuscript.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Cantrell, R.P.; Reeves, T.G. The cereal of the world’s poor takes center stage. Science 2002, 296, 53.
2. Li, Z.; Zhao, Y.; Taylor, J.; Gaulton, R.; Jin, X.; Song, X.; Li, Z.; Meng, Y.; Chen, P.; Feng, H.; et al. Comparison and transferability of thermal, temporal and phenological-based in-season predictions of above-ground biomass in wheat crops from proximal crop reflectance data. Remote Sens. Environ. 2022, 273, 112967.
3. Ma, J.; Li, Y.; Chen, Y.; Du, K.; Zheng, F.; Zhang, L.; Sun, Z. Estimating above ground biomass of winter wheat at early growth stages using digital images and deep convolutional neural network. Eur. J. Agron. 2019, 103, 117–129.
4. Nandan, R.; Bandaru, V.; He, J.Y.; Daughtry, C.; Gowda, P.; Suyker, A.E. Evaluating optical remote sensing methods for estimating leaf area index for corn and soybean. Remote Sens. 2022, 14, 5301.
5. Liang, L.; Geng, D.; Yan, J.; Qiu, S.Y.; Di, L.P.; Wang, S.G.; Xu, L.; Wang, L.J.; Kang, J.R.; Li, L. Estimating crop LAI using spectral feature extraction and the hybrid inversion method. Remote Sens. 2020, 12, 3534.
6. Li, D.; Chen, J.M.; Yu, W.; Zheng, H.; Yao, X.; Cao, W.; Wei, D.; Xiao, C.; Zhu, Y.; Cheng, T. Assessing a soil-removed semi-empirical model for estimating leaf chlorophyll content. Remote Sens. Environ. 2022, 282, 113284.
7. Sharma, L.K.; Bali, S.K. A review of methods to improve nitrogen use efficiency in agriculture. Sustainability 2018, 10, 51.
8. Carella, E.; Orusa, T.; Viani, A.; Meloni, D.; Borgogno-Mondino, E.; Orusa, R. An integrated, tentative remote-sensing approach based on NDVI entropy to model canine distemper virus in wildlife and to prompt science-based management policies. Animals 2022, 12, 1049.
9. Mustafa, G.; Zheng, H.; Khan, I.H.; Tian, L.; Jia, H.; Li, G.; Cheng, T.; Tian, Y.; Cao, W.; Zhu, Y.; et al. Hyperspectral reflectance proxies to diagnose in-field fusarium head blight in wheat with machine learning. Remote Sens. 2022, 14, 2784.
10. Vallentin, C.; Harfenmeister, K.; Itzerott, S.; Kleinschmit, B.; Conrad, C.; Spengler, D. Suitability of satellite remote sensing data for yield estimation in northeast Germany. Precis. Agric. 2022, 23, 52–82.
11. Dong, J.; Lu, H.B.; Wang, Y.W.; Ye, T.; Yuan, W.P. Estimating winter wheat yield based on a light use efficiency model and wheat variety data. ISPRS J. Photogramm. Remote Sens. 2020, 160, 18–32.
12. Yang, W.; Nigon, T.; Hao, Z.; Paiao, G.D.; Yang, C. Estimation of corn yield based on hyperspectral imagery and convolutional neural network. Comput. Electron. Agric. 2021, 184, 106092.
13. Zhang, K.; Ge, X.; Shen, P.; Li, W.; Liu, X.; Cao, Q.; Zhu, Y.; Cao, W.; Tian, Y. Predicting rice grain yield based on dynamic changes in vegetation indexes during early to mid-growth stages. Remote Sens. 2019, 11, 387.
14. Zhang, H.; Wang, L.; Tian, T.; Yin, J. A review of unmanned aerial vehicle low-altitude remote sensing (UAV-LARS) use in agricultural monitoring in China. Remote Sens. 2021, 13, 1221.
15. Chang, A.; Jung, J.; Maeda, M.M.; Landivar, J. Crop height monitoring with digital imagery from Unmanned Aerial System (UAS). Comput. Electron. Agric. 2017, 141, 232–237.
16. Volpato, L.; Pinto, F.; Gonzalez-Perez, L.; Thompson, I.G.; Borem, A.; Reynolds, M.; Gerard, B.; Molero, G.; Rodrigues, F.A., Jr. High throughput field phenotyping for plant height using UAV-based RGB imagery in wheat breeding lines: Feasibility and validation. Front. Plant Sci. 2021, 12, 591587.
17. Zhang, X.; Zhang, K.; Sun, Y.; Zhao, Y.; Zhuang, H.; Ban, W.; Chen, Y.; Fu, E.; Chen, S.; Liu, J.; et al. Combining spectral and texture features of UAS-based multispectral images for maize leaf area index estimation. Remote Sens. 2022, 14, 331.
18. Zhou, C.; Gong, Y.; Fang, S.; Yang, K.; Peng, Y.; Wu, X.; Zhu, R. Combining spectral and wavelet texture features for unmanned aerial vehicles remote estimation of rice leaf area index. Front. Plant Sci. 2022, 13, 957870.
19. Lu, N.; Zhou, J.; Han, Z.; Li, D.; Cao, Q.; Yao, X.; Tian, Y.; Zhu, Y.; Cao, W.; Cheng, T. Improved estimation of aboveground biomass in wheat from RGB imagery and point cloud data acquired with a low-cost unmanned aerial vehicle system. Plant Methods 2019, 15, 17.
20. Zheng, H.; Cheng, T.; Zhou, M.; Li, D.; Yao, X.; Tian, Y.; Cao, W.; Zhu, Y. Improved estimation of rice aboveground biomass combining textural and spectral analysis of UAV imagery. Precis. Agric. 2019, 20, 611–629.
21. Lu, N.; Wang, W.; Zhang, Q.; Li, D.; Yao, X.; Tian, Y.; Zhu, Y.; Cao, W.; Baret, F.; Liu, S.; et al. Estimation of nitrogen nutrition status in winter wheat from unmanned aerial vehicle based multi-angular multispectral imagery. Front. Plant Sci. 2019, 10, 1601.
22. Jay, S.; Baret, F.; Dutartre, D.; Malatesta, G.; Heno, S.; Comar, A.; Weiss, M.; Maupas, F. Exploiting the centimeter resolution of UAV multispectral imagery to improve remote-sensing estimates of canopy structure and biochemistry in sugar beet crops. Remote Sens. Environ. 2019, 231, 110898.
23. Wang, W.; Zheng, H.; Wu, Y.; Yao, X.; Zhu, Y.; Cao, W.; Cheng, T. An assessment of background removal approaches for improved estimation of rice leaf nitrogen concentration with unmanned aerial vehicle multispectral imagery at various observation times. Field Crops Res. 2022, 283, 108543.
24. Nevavuori, P.; Narra, N.; Linna, P.; Lipping, T. Crop yield prediction using multitemporal UAV data and spatio-temporal deep learning models. Remote Sens. 2020, 12, 4000.
25. Wan, L.; Cen, H.; Zhu, J.; Zhang, J.; Zhu, Y.; Sun, D.; Du, X.; Zhai, L.; Weng, H.; Li, Y.; et al. Grain yield prediction of rice using multi-temporal UAV-based RGB and multispectral images and model transfer—A case study of small farmlands in the South of China. Agric. For. Meteorol. 2020, 291, 108096.
26. Zhou, X.; Zheng, H.; Xu, X.; He, J.; Ge, X.; Yao, X.; Cheng, T.; Zhu, Y.; Cao, W.; Tian, Y. Predicting grain yield in rice using multi-temporal vegetation indices from UAV-based multispectral and digital imagery. ISPRS J. Photogramm. Remote Sens. 2017, 130, 246–255.
27. Maresma, A.; Chamberlain, L.; Tagarakis, A.; Kharel, T.; Ketterings, Q.M. Accuracy of NDVI-derived corn yield predictions is impacted by time of sensing. Comput. Electron. Agric. 2020, 169, 105236.
28. Ramos, A.; Osco, L.P.; Furuya, D.; Gonalves, W.N.; Pistori, H. A random forest ranking approach to predict yield in maize with UAV-based vegetation spectral indices. Comput. Electron. Agric. 2020, 178, 105791.
29. Maimaitijiang, M.; Sagan, V.; Sidike, P.; Hartling, S.; Esposito, F.; Fritschi, F.B. Soybean yield prediction from UAV using multimodal data fusion and deep learning. Remote Sens. Environ. 2020, 237, 111599.
30. Nevavuori, P.; Narra, N.; Lipping, T. Crop yield prediction with deep convolutional neural networks. Comput. Electron. Agric. 2019, 163, 104859.
31. Shafiee, S.; Lied, L.M.; Burud, I.; Dieseth, J.A.; Lillemo, M. Sequential forward selection and support vector regression in comparison to LASSO regression for spring wheat yield prediction based on UAV imagery. Comput. Electron. Agric. 2021, 183, 106036.
32. Fei, S.P.; Hassan, M.A.; Xiao, Y.G.; Su, X.; Chen, Z.; Cheng, Q.; Duan, F.Y.; Chen, R.Q.; Ma, Y.T. UAV-based multi-sensor data fusion and machine learning algorithm for yield prediction in wheat. Precis. Agric. 2022.
33. Deng, L.; Mao, Z.; Li, X.; Hu, Z.; Duan, F.; Yan, Y. UAV-based multispectral remote sensing for precision agriculture: A comparison between different cameras. ISPRS J. Photogramm. Remote Sens. 2018, 146, 124–136.
34. Wang, F.M.; Yi, Q.X.; Hu, J.H.; Xie, L.L.; Yao, X.P.; Xu, T.Y.; Zheng, J.Y. Combining spectral and textural information in UAV hyperspectral images to estimate rice grain yield. Int. J. Appl. Earth Obs. Geoinf. 2021, 102, 102397.
35. Duan, B.; Fang, S.; Gong, Y.; Peng, Y.; Wu, X.; Zhu, R. Remote estimation of grain yield based on UAV data in different rice cultivars under contrasting climatic zone. Field Crops Res. 2021, 267, 108148.
36. Garcia-Martinez, H.; Flores-Magdaleno, H.; Ascencio-Hernandez, R.; Khalil-Gardezi, A.; Tijerina-Chavez, L.; Mancilla-Villa, O.R.; Vazquez-Pena, M.A. Corn grain yield estimation from vegetation indices, canopy cover, plant density, and a neural network using multispectral and RGB images acquired with unmanned aerial vehicles. Agriculture 2020, 10, 277.
37. Fu, Z.; Jiang, J.; Gao, Y.; Krienke, B.; Wang, M.; Zhong, K.; Cao, Q.; Tian, Y.; Zhu, Y.; Cao, W.; et al. Wheat growth monitoring and yield estimation based on multi-rotor unmanned aerial vehicle. Remote Sens. 2020, 12, 508.
38. Tao, H.L.; Feng, H.K.; Xu, L.J.; Miao, M.K.; Yang, G.J.; Yang, X.D.; Fan, L.L. Estimation of the yield and plant height of winter wheat using UAV-based hyperspectral images. Sensors 2020, 20, 1231.
39. Wang, F.; Wang, F.; Zhang, Y.; Hu, J.; Huang, J.; Xie, J. Rice yield estimation using parcel-level relative spectral variables from UAV-based hyperspectral imagery. Front. Plant Sci. 2019, 10, 453.
40. Zhang, X.Y.; Zhao, J.M.; Yang, G.J.; Liu, J.G.; Cao, J.Q.; Li, C.Y.; Zhao, X.Q.; Gai, J.Y. Establishment of plot-yield prediction models in soybean breeding programs using UAV-based hyperspectral remote sensing. Remote Sens. 2019, 11, 2752.
41. Duan, B.; Fang, S.; Zhu, R.; Wu, X.; Wang, S.; Gong, Y.; Peng, Y. Remote estimation of rice yield with unmanned aerial vehicle (UAV) data and spectral mixture analysis. Front. Plant Sci. 2019, 10, 204.
42. Kanning, M.; Kuhling, I.; Trautz, D.; Jarmer, T. High-resolution UAV-based hyperspectral imagery for LAI and chlorophyll estimations from wheat for yield prediction. Remote Sens. 2018, 10, 2000.
43. Du, M.; Noguchi, N. Monitoring of wheat growth status and mapping of wheat yield’s within-field spatial variations using color images acquired from UAV-camera system. Remote Sens. 2017, 9, 289.
44. Haralick, R.M.; Shanmugam, K. Textural features for image classification. IEEE Trans. Syst. Man Cybern. 1973, 6, 610–621.
45. Sarker, L.R.; Nichol, J.E. Improved forest biomass estimates using ALOS AVNIR-2 texture indices. Remote Sens. Environ. 2011, 115, 968–977.
46. Trifi, M.; Gasmi, A.; Carbone, C.; Majzlan, J.; Nasri, N.; Dermech, M.; Charef, A.; Elfil, H. Machine learning-based prediction of toxic metals concentration in an acid mine drainage environment, northern Tunisia. Environ. Sci. Pollut. Res. 2022, 29, 87490–87508.
47. Zheng, H.; Cheng, T.; Li, D.; Zhou, X.; Yao, X.; Tian, Y.; Cao, W.; Zhu, Y. Evaluation of RGB, color-infrared and multispectral images acquired from unmanned aerial systems for the estimation of nitrogen accumulation in rice. Remote Sens. 2018, 10, 824.
48. Wang, W.; Wu, Y.; Zhang, Q.; Zheng, H.; Yao, X.; Zhu, Y.; Cao, W.; Cheng, T. AAVI: A novel approach to estimating leaf nitrogen concentration in rice from unmanned aerial vehicle multispectral imagery at early and middle growth stages. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2021, 14, 6716–6728.
49. Gitelson, A.A.; Kaufman, Y.J.; Stark, R.; Rundquist, D. Novel algorithms for remote estimation of vegetation fraction. Remote Sens. Environ. 2002, 80, 76–87.
50. Rouse, J.; Haas, R.; Schell, J.; Deering, D. Monitoring vegetation systems in the Great Plains with ERTS. NASA Spec. Publ. 1974, 351, 309.
51. Rondeaux, G.; Steven, M.; Baret, F. Optimization of soil-adjusted vegetation indices. Remote Sens. Environ. 1996, 55, 95–107.
52. Gitelson, A.; Merzlyak, M.N. Spectral reflectance changes associated with autumn senescence of Aesculus hippocastanum L. and Acer platanoides L. leaves—Spectral features and relation to chlorophyll estimation. J. Plant Physiol. 1994, 143, 286–292.
53. Asner, G.P. Biophysical and biochemical sources of variability in canopy reflectance. Remote Sens. Environ. 1998, 64, 234–253.
54. Yue, J.; Yang, G.; Tian, Q.; Feng, H.; Xu, K.; Zhou, C. Estimate of winter-wheat above-ground biomass based on UAV ultrahigh-ground-resolution image textures and vegetation indices. ISPRS J. Photogramm. Remote Sens. 2019, 150, 226–244.
55. Tucker, C.J. Red and photographic infrared linear combinations for monitoring vegetation. Remote Sens. Environ. 1979, 8, 127–150.
56. Hatfield, J.; Gitelson, A.A.; Schepers, J.S.; Walthall, C. Application of spectral remote sensing for agronomic decisions. Agron. J. 2008, 100, 117–131.
57. Gasmi, A.; Gomez, C.; Chehbouni, A.; Dhiba, D.; El Gharous, M. Using PRISMA hyperspectral satellite imagery and GIS approaches for soil fertility mapping (FertiMap) in northern Morocco. Remote Sens. 2022, 14, 4080.
58. Yang, J.C.; Zhang, J.H. Grain-filling problem in ‘super’ rice. J. Exp. Bot. 2010, 61, 1–5.
59. Xie, C.; Yang, C. A review on plant high-throughput phenotyping traits using UAV-based sensors. Comput. Electron. Agric. 2020, 178, 105731.
60. Hassan, M.A.; Yang, M.; Fu, L.; Rasheed, A.; Zheng, B.; Xia, X.; Xiao, Y.; He, Z. Accuracy assessment of plant height using an unmanned aerial vehicle for quantitative genomic analysis in bread wheat. Plant Methods 2019, 15, 37.
61. Melandri, G.; AbdElgawad, H.; Riewe, D.; Hageman, J.A.; Asard, H.; Beemster, G.T.S.; Kadam, N.; Jagadish, K.; Altmann, T.; Ruyter-Spira, C.; et al. Biomarkers for grain yield stability in rice under drought stress. J. Exp. Bot. 2020, 71, 669–683.
Figure 1. Relationship between spectral bands from different sensors at the booting (a) and filling (b) stages.
Figure 2. Monthly average temperature (a), accumulative precipitation (b), and accumulative hours of sunshine (c) during the rice growing seasons in different experiments.
Figure 3. Regressions between rice yield and VIs, NDTIs at booting (a,c,e) and filling stages (b,d,f).
Figure 4. Relationship of rice yield with NDVI (a,c), and RENDTI (b,d) across years.
Figure 5. Correlation between rice yield and NDVI (a,c), and RENDTI (b,d) across different rice cultivars.
Figure 6. Correlation between rice yield and NDVI (a,c), and RENDTI (b,d) across different sensors.
Figure 7. Validation results of rice yield prediction across different years ((a): 2016 to 2015; (b): 2016 to 2018).
Figure 8. Validation results of rice yield prediction across different rice cultivars ((a): Indica to Japonica; (b): Japonica to Indica).
Figure 9. Validation results of rice yield prediction across different sensors ((a): MCA to Airphen; (b): Airphen to MCA).
Figure 10. The performance of random forest in grain yield prediction at the booting (a) and filling (c) stages, and the relative importance of input variables (b,d). AccPRE, AccSSH, and AGDD denote accumulative precipitation, accumulative hours of sunshine, and accumulative growing degree days, respectively.
Figure 11. The false color maps of multispectral imagery at the booting (a) and filling (b) stages, and the canopy scene of different rice cultivars (Japonica rice: (c,e); Indica rice: (d,f)) at different growth stages.
Table 1. Summary of crop yield prediction studies with UAV imagery in the last five years.

| Reference | Crop | UAV Sensor | Method | Accuracy | Validation |
|---|---|---|---|---|---|
| [32] | Wheat | RedEdge-MX, Zenmuse XT2 | Ensemble learning | R2 = 0.692, RMSE = 0.916 t ha−1 | Cross-validation |
| [34] | Rice | Rikola | Multiple linear regression | RMSE = 0.521 Mg ha−1, MAPE = 6.63% | Independent validation |
| [35] | Rice | Mini-MCA12 | Neural network regression | R2 = 0.57, RMSE = 47.895 g m−2, RRMSE = 5.3% | Cross-validation |
| [13] | Rice | MQ022MG-CM, Sony NEX-7 | Random forest | R2 = 0.83, RRMSE = 2.75% | Cross-validation |
| [27] | Corn | Parrot Sequoia | Exponential regression | R2 = 0.63 | × |
| [28] | Maize | Parrot Sequoia, RedEdge-M | Random forest | r = 0.78, MAE = 853.11 kg ha−1 | Cross-validation |
| [24] | Wheat, Barley | Parrot Sequoia | 3D convolutional neural network | MAE = 218.9 kg ha−1, MAPE = 5.51% | Cross-validation |
| [36] | Corn | Parrot Sequoia, DJI RGB | Neural network model | r = 0.96, MAE = 0.209 kg ha−1, RMSE = 0.449 kg ha−1 | Independent validation |
| [37] | Wheat | Airphen | Random forest | R2 = 0.78, RRMSE = 10.3% | Independent validation |
| [29] | Soybean | Mapir Survey-2, Parrot Sequoia, FLIR R 640 | Deep neural network | R2 = 0.72, RMSE = 478.9 kg ha−1, RRMSE = 15.9% | Independent validation |
| [38] | Winter wheat | UHD 185 | Partial least-squares regression | R2 = 0.77, RMSE = 648.90 kg ha−1, NRMSE = 10.63% | Independent validation |
| [39] | Rice | Rikola | Multiple linear regression | RMSE = 215.08 kg ha−1, RRMSE = 3% | Cross-validation |
| [40] | Soybean | UHD 185 | Linear regression | R2 = 0.67, RMSE = 0.142 t ha−1 | Cross-validation |
| [30] | Wheat, Barley | Parrot Sequoia | Convolutional neural network | MAE = 484.3 kg ha−1, MAPE = 8.8% | Cross-validation |
| [41] | Rice | Mini-MCA6 | Linear regression | R2 = 0.593, RMSE = 0.268 kg | Cross-validation |
| [42] | Wheat | Resonon Pika-L | Partial least-squares regression | R2 = 0.88, RMSE = 4.18 dt ha−1 | Cross-validation |
| [43] | Wheat | Sony ILCE-6000 | Stepwise regression | r = 0.69, RMSEP = 0.06 t ha−1 | Cross-validation |
| [26] | Rice | Mini-MCA6, Canon 5D | Multiple linear regression | R2 = 0.76 | × |

MAE = Mean Absolute Error, MAPE = Mean Absolute Percentage Error. × denotes no validation in the study.
Table 2. Synthesis of experimental design and data acquisition calendar.

| Exp | Year | Site | Rice Cultivar | N Rate (kg/ha) | Density (cm) | UAV Flight Date | Sampling Date | Sample Size |
|---|---|---|---|---|---|---|---|---|
| 1 | 2015 | Rugao | Wuyunjing24 (Japonica), Yliangyou1 (Indica) | 0, 100, 200, 300 | 30 × 15, 50 × 15 | 14 August (Booting) | 15 August | 36 |
| | | | | | | 9 September (Filling) | 10 September | 36 |
| 2 | 2016 | Rugao | Wuyunjing24 (Japonica), Yliangyou1 (Indica) | 0, 150, 300 | 30 × 15, 50 × 15 | 14 August (Booting) | 14 August | 36 |
| | | | | | | 8 September (Filling) | 8 September | 36 |
| 3 | 2018 | Rugao | Wuyunjing27 (Japonica), Liangyou728 (Indica) | 100, 300 | 30 × 15 | 14 August (Booting) | 14 August | 48 |
| | | | | | | 4 September (Filling) | 4 September | 48 |
| 4 | 2018 | Xinghua | Nangeng9108 (Japonica), Yongyou2640 (Indica) | 0, 135, 270, 405 | 30 × 15 | 19 August (Booting) | 19 August | 48 |
| | | | | | | 11 September (Filling) | 11 September | 48 |
Table 3. Vegetation indices and texture indices used in this study.

| Index | Formulation | Reference |
|---|---|---|
| Visible atmospherically resistant index (VARI) | $VARI = (R_{550} - R_{680}) / (R_{550} + R_{680} - R_{490})$ | [49] |
| Normalized difference vegetation index (NDVI) | $NDVI = (R_{800} - R_{680}) / (R_{800} + R_{680})$ | [50] |
| Optimized soil-adjusted vegetation index (OSAVI) | $OSAVI = (1 + 0.16)(R_{800} - R_{670}) / (R_{800} + R_{670} + 0.16)$ | [51] |
| Normalized difference red edge index (NDRE) | $NDRE = (R_{800} - R_{720}) / (R_{800} + R_{720})$ | [52] |
| Red edge normalized difference texture index (RENDTI) | $RENDTI = (MEA_{800} - MEA_{720}) / (MEA_{800} + MEA_{720})$ | [20] |
| Green normalized difference texture index (GNDTI) | $GNDTI = (MEA_{800} - MEA_{550}) / (MEA_{800} + MEA_{550})$ | [20] |
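As a concrete illustration of the formulations above, the short Python sketch below computes each index from per-plot band values. Here $R_{\lambda}$ denotes band reflectance and $MEA_{\lambda}$ is taken to be the gray-level co-occurrence matrix (GLCM) mean texture of the corresponding band (an assumption based on the paper's use of texture features); all numeric values are purely hypothetical.

```python
def ndi(a, b):
    """Generic normalized difference used by NDVI, NDRE, RENDTI, and GNDTI."""
    return (a - b) / (a + b)

# Hypothetical per-plot band reflectance (R) and GLCM mean texture (MEA);
# keys are the band center wavelengths (nm) appearing in Table 3.
R = {490: 0.06, 550: 0.09, 670: 0.05, 680: 0.05, 720: 0.12, 800: 0.42}
MEA = {550: 0.31, 720: 0.44, 800: 0.63}

VARI = (R[550] - R[680]) / (R[550] + R[680] - R[490])
NDVI = ndi(R[800], R[680])
OSAVI = (1 + 0.16) * (R[800] - R[670]) / (R[800] + R[670] + 0.16)
NDRE = ndi(R[800], R[720])
RENDTI = ndi(MEA[800], MEA[720])
GNDTI = ndi(MEA[800], MEA[550])
```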
Table 4. Descriptive statistics of grain yield (t/ha) in this study.

| Exp | Cultivar | Min | Max | Mean | SD | C.V |
|---|---|---|---|---|---|---|
| 1 | Japonica | 4.96 | 13.59 | 9.196 | 2.406 | 26.16% |
| | Indica | 6.34 | 14.33 | 10.97 | 2.297 | 20.94% |
| 2 | Japonica | 4.6 | 12.4 | 8.611 | 2.401 | 27.88% |
| | Indica | 7.4 | 12.6 | 10.44 | 1.773 | 16.98% |
| 3 | Japonica | 4.98 | 10.66 | 7.753 | 1.832 | 23.63% |
| | Indica | 9.19 | 11.5 | 10.13 | 0.639 | 6.31% |
| 4 | Japonica | 7.31 | 13.02 | 10.24 | 1.687 | 16.47% |
| | Indica | 7.06 | 12.51 | 10.69 | 1.553 | 14.53% |
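For reference, the coefficient of variation in Table 4 is consistent with C.V = SD / Mean × 100; a one-line check (Python) against the Exp 1 Japonica row:

```python
# C.V = SD / Mean * 100, checked against Exp 1 Japonica in Table 4
sd, mean = 2.406, 9.196
print(f"{sd / mean * 100:.2f}%")  # -> 26.16%
```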
Table 5. Validation results of rice grain yield prediction across different years (RMSE in t/ha).

| Stage | Index | 2016 to 2015: RMSE | Bias | RRMSE | 2016 to 2018: RMSE | Bias | RRMSE |
|---|---|---|---|---|---|---|---|
| Booting | VARI | 2.52 | 1.81 | 24.97% | 1.63 | **0.15** | 18.21% |
| | NDVI | 1.81 | 1.28 | 17.96% | 1.67 | −0.39 | 18.62% |
| | OSAVI | 1.75 | 1.22 | 17.40% | 2.72 | −1.59 | 30.36% |
| | NDRE | 1.36 | 0.74 | 13.53% | 2.26 | −1.60 | 25.31% |
| | RENDTI | **1.23** | 0.42 | **12.25%** | 2.09 | −1.78 | 23.38% |
| | GNDTI | 1.25 | −0.27 | 12.35% | 1.34 | −0.78 | 14.97% |
| Filling | VARI | 4.66 | −3.95 | 46.23% | 3.52 | −3.14 | 39.40% |
| | NDVI | 3.03 | −2.38 | 30.03% | 1.30 | −0.16 | 14.50% |
| | OSAVI | 1.59 | 0.72 | 15.74% | 1.38 | −0.21 | 15.43% |
| | NDRE | 1.85 | −1.25 | 18.36% | **1.19** | −0.34 | **13.32%** |
| | RENDTI | 1.43 | **−0.25** | 14.16% | 1.24 | −0.36 | 13.86% |
| | GNDTI | 5.95 | −5.65 | 59.01% | 2.56 | −2.04 | 28.62% |

Note: The numbers in bold denote the highest accuracy in each column.
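The accuracy measures reported in Tables 5–8 can be reproduced from observed and predicted yields as sketched below (Python). This assumes the common conventions bias = mean(predicted − observed) and RRMSE = RMSE / mean(observed) × 100, which are consistent with the sign patterns and magnitudes in the tables but are stated here as assumptions rather than the paper's exact definitions.

```python
import numpy as np

def validation_metrics(y_obs, y_pred):
    """RMSE (t/ha), bias, and relative RMSE (%) as reported in Tables 5-8.
    Assumed conventions: bias = mean(pred - obs);
    RRMSE = RMSE / mean(obs) * 100."""
    y_obs = np.asarray(y_obs, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    rmse = float(np.sqrt(np.mean((y_pred - y_obs) ** 2)))
    bias = float(np.mean(y_pred - y_obs))
    rrmse = rmse / float(np.mean(y_obs)) * 100
    return rmse, bias, rrmse

# Usage with hypothetical plot-level yields (t/ha):
rmse, bias, rrmse = validation_metrics([9.2, 10.5, 8.7], [9.6, 10.1, 9.3])
print(f"RMSE = {rmse:.2f} t/ha, bias = {bias:.2f}, RRMSE = {rrmse:.2f}%")
```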
Table 6. Validation results of rice grain yield prediction across different rice cultivars (RMSE in t/ha).

| Stage | Index | Indica to Japonica: RMSE | Bias | RRMSE | Japonica to Indica: RMSE | Bias | RRMSE |
|---|---|---|---|---|---|---|---|
| Booting | VARI | 2.26 | −0.93 | 26.75% | 1.84 | 1.30 | 17.55% |
| | NDVI | 1.72 | **−0.15** | 20.35% | 1.28 | 0.58 | 12.20% |
| | OSAVI | 2.10 | −0.84 | 24.83% | 1.55 | 0.71 | 14.76% |
| | NDRE | 2.02 | −1.04 | 23.94% | 1.72 | −0.12 | 16.46% |
| | RENDTI | 1.93 | −1.08 | 22.87% | 1.64 | 0.14 | 15.61% |
| | GNDTI | **1.39** | −0.60 | **16.46%** | **1.16** | **0.08** | **11.04%** |
| Filling | VARI | 2.65 | −1.73 | 31.39% | 2.21 | 1.58 | 21.10% |
| | NDVI | 2.26 | −1.52 | 26.74% | 1.99 | 1.41 | 18.97% |
| | OSAVI | 1.59 | 0.32 | 18.83% | 1.18 | −0.29 | 11.22% |
| | NDRE | 1.85 | −1.36 | 21.97% | 1.70 | 1.37 | 16.23% |
| | RENDTI | 1.87 | −1.44 | 22.21% | 1.74 | 1.47 | 16.56% |
| | GNDTI | 1.81 | 0.76 | 21.49% | 2.24 | 1.81 | 21.43% |

Note: The numbers in bold denote the highest accuracy in each column.
Table 7. Validation of rice grain yield prediction across sensors (RMSE in t/ha).

| Stage | Index | MCA to Airphen: RMSE | Bias | RRMSE | Airphen to MCA: RMSE | Bias | RRMSE |
|---|---|---|---|---|---|---|---|
| Booting | VARI | 2.56 | 2.18 | 24.44% | 2.67 | −2.12 | 29.82% |
| | NDVI | 2.10 | 1.86 | 20.10% | 2.33 | −1.90 | 26.07% |
| | OSAVI | 2.90 | 2.64 | 27.74% | 3.88 | −3.50 | 43.33% |
| | NDRE | 3.87 | 3.74 | 36.97% | 5.66 | −5.44 | 63.32% |
| | RENDTI | 2.60 | 2.44 | 24.83% | 3.06 | −2.82 | 34.26% |
| | GNDTI | **1.69** | **1.49** | **16.13%** | **1.82** | **−1.49** | **20.40%** |
| Filling | VARI | 5.90 | 5.69 | 56.37% | 4.92 | −4.67 | 55.05% |
| | NDVI | 3.08 | 2.89 | 29.41% | 3.01 | −2.72 | 33.62% |
| | OSAVI | 5.11 | 4.98 | 48.83% | 5.17 | −5.02 | 57.83% |
| | NDRE | 4.28 | 4.18 | 40.91% | 4.18 | −4.02 | 46.70% |
| | RENDTI | 3.03 | 2.89 | 28.94% | 3.11 | −2.88 | 34.77% |
| | GNDTI | 3.50 | 3.34 | 33.41% | 3.43 | −3.12 | 38.30% |

Note: The numbers in bold denote the highest accuracy in each column.
Table 8. Validation of rice grain yield prediction across sensors after band correction (RMSE in t/ha).

| Stage | Index | MCA Models Applied to Airphen: RMSE | Bias | RRMSE | Airphen Models Applied to MCA: RMSE | Bias | RRMSE |
|---|---|---|---|---|---|---|---|
| Booting | NDRE | 3.28 | 3.14 | 31.38% | 2.26 | −1.89 | 25.30% |
| | RENDTI | 1.70 | 1.47 | 16.23% | 1.80 | 1.38 | 20.12% |
| Filling | NDRE | 2.60 | 2.40 | 24.82% | 2.17 | −1.80 | 24.29% |
| | RENDTI | 1.29 | 0.88 | 12.37% | 1.36 | −0.56 | 15.21% |
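Comparing Table 8 with Table 7 shows how much a band correction can narrow the gap between sensors. Below is a minimal sketch of one plausible approach, a per-band linear mapping fitted on co-located samples from both sensors; the sample values are hypothetical and the paper's actual correction procedure may differ.

```python
import numpy as np

def fit_band_correction(r_source, r_target):
    """Fit a per-band linear (gain, offset) mapping one sensor's
    reflectance onto another's, from co-located samples."""
    gain, offset = np.polyfit(r_source, r_target, deg=1)
    return gain, offset

# Hypothetical co-located red-edge (720 nm) reflectance from both sensors
mca_720 = np.array([0.10, 0.14, 0.18, 0.22])
airphen_720 = np.array([0.12, 0.15, 0.20, 0.25])

gain, offset = fit_band_correction(mca_720, airphen_720)
# Corrected MCA reflectance, suitable as input to an Airphen-trained model
mca_720_corrected = gain * mca_720 + offset
```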