Review

A Systematic Literature Review on Crop Yield Prediction with Deep Learning and Remote Sensing

by Priyanga Muruganantham *, Santoso Wibowo, Srimannarayana Grandhi, Nahidul Hoque Samrat and Nahina Islam
School of Engineering and Technology, Central Queensland University, Melbourne 3000, Australia
* Author to whom correspondence should be addressed.
Remote Sens. 2022, 14(9), 1990; https://doi.org/10.3390/rs14091990
Submission received: 28 February 2022 / Revised: 16 April 2022 / Accepted: 20 April 2022 / Published: 21 April 2022
(This article belongs to the Special Issue Artificial Intelligence and Automation in Sustainable Smart Farming)

Abstract
Deep learning has emerged as a potential tool for crop yield prediction, allowing models to automatically extract features and learn from datasets. Meanwhile, smart farming technology enables farmers to achieve maximum crop yield by extracting essential parameters of crop growth. This systematic literature review highlights the existing research gaps in deep learning methodologies and guides the analysis of how vegetation indices and environmental factors affect crop yield. To achieve the aims of this study, prior studies from 2012 to 2022 are collected from various databases and analyzed. The study focuses on the advantages of using deep learning in crop yield prediction, the suitable remote sensing technology based on data acquisition requirements, and the various features that influence crop yield prediction. This study finds that Long Short-Term Memory (LSTM) and Convolutional Neural Networks (CNN) are the most widely used deep learning approaches for crop yield prediction. The most commonly used remote sensing technology is satellite remote sensing, in particular the Moderate-Resolution Imaging Spectroradiometer (MODIS). Findings show that vegetation indices are the most used features for crop yield prediction. However, it is also observed that the most used features in the literature do not always work for all approaches. The main challenges of using deep learning approaches and remote sensing for crop yield prediction are improving the models for better accuracy, putting the models into practice to provide accurate yield information to agriculturalists, growers, and policymakers, and addressing the black-box property of deep learning models.

Graphical Abstract

1. Introduction

Crop yield prediction is becoming more important because of the growing concern about food security [1,2,3]. Early crop yield prediction plays an important role in reducing famine by estimating the food availability for the growing world population [2]. Hunger is one of the most devastating issues in the world, and increasing crop yield production is a feasible solution to this problem. The World Health Organization [1] estimated that the food supply is still inadequate for 820 million people around the world. The target of the United Nations Sustainable Development Goals is to eliminate hunger, achieve food security, and encourage sustainable agriculture by 2030. The Food and Agriculture Organization (FAO) estimated that food demand will increase by 60 per cent to supply the world population of 9.3 billion by 2050 [2]. Therefore, crop yield prediction can offer crucial information for developing reasonable solutions to achieve this target and end hunger [1].
Crop yield is influenced by various parameters, and it is difficult to build a reliable prediction model with traditional methods. However, with advancements in computational technology, the development and training of novel approaches for crop yield prediction have become feasible [3]. Deep learning is a significant technique that is extensively used in the agricultural domain because of the availability of large datasets and high-performance computing [4]. Deep learning is a class of machine learning with multiple layers of neural networks capable of learning from unstructured and unlabeled data, whereby the learning can be supervised, semi-supervised, or unsupervised. Sarker [4] pointed out that deep learning techniques focus on learning abstract features of large datasets. Accurate crop yield prediction requires prior knowledge of the association between functional attributes and interactive factors. Studying such correlations requires both comprehensive datasets and high-efficiency algorithms, which deep learning can provide.
This paper conducts a systematic literature review on the application of deep learning approaches to crop yield prediction using remote sensing data. The rationale for conducting this systematic literature review is that it can highlight the existing research gaps in deep learning methodologies and guide the analysis of how vegetation indices and environmental factors affect crop growth. This systematic literature review provides a new research perspective by investigating the advantages of using deep learning in crop yield prediction, the suitable remote sensing technology based on data acquisition requirements, and the various features that influence crop yield prediction.
The structure of this paper is organized as follows. Section 2 presents the research methodology adopted in conducting the systematic review. Section 3 provides an overview of the existing approaches. Section 4 presents the results and discussion, followed by the conclusions in Section 5.

2. Research Methods

2.1. Review Methodology

This systematic literature review helps us to understand the application of deep learning approaches to crop yield prediction using remote sensing data. It is carried out to highlight the existing research gaps in deep learning methodologies and guide the analysis of how vegetation indices and environmental factors affect crop growth. For the systematic literature review, all research studies from journals, conferences, and other electronic databases are not only assessed but also integrated and presented in correspondence with the research questions posed in our study.
A systematic literature review is an exceptional way to evaluate theory or evidence in a specific area or to study the accuracy or validity of a specific theory [5]. The review guidelines given by Kitchenham and Charters [6] are appropriate for our systematic literature review as they provide objectivity and transparency. Based on these guidelines, the research questions are formulated first. The review is undertaken in accordance with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) statement [7]. Several databases, such as IEEE Xplore, ScienceDirect, Scopus, Google Scholar, MDPI, and Web of Science, are used for selecting the relevant research articles. These research articles are assessed and filtered based on the quality criteria. The complete PRISMA checklist (prisma-statement.org) is used for conducting and reporting the results of the review.

2.2. Research Questions

The following research questions are developed to guide the systematic review:
Q1. What deep learning approaches are used for crop yield prediction?
This question helps us to analyze both the advantages and limitations of using deep learning approaches in crop yield prediction.
Q2. What remote sensing technologies are used with deep learning approaches for crop yield prediction?
With various remote sensing technologies in existence, this question helps us to understand the suitable remote sensing technology based on the data acquisition requirements for the study of crop yield prediction, such as land area and crop type.
Q3. What are the vegetation indices and environmental parameters used in crop yield prediction?
Answering this question helps us to learn about the various features that are influencing the deep learning approaches in crop yield prediction.
Q4. What are the challenges in using deep learning approaches and remote sensing for crop yield prediction?
This question helps us to understand the limitations and challenges in the existing approaches.

2.3. Procedure for Article Search

The approach to searching for articles is designed based on the framed research questions and the aim of the systematic literature review. Narrowing down the focus from a broad concept to the central idea of the review helps in creating an effective search strategy. Using “deep learning” alone as a search string would return many published articles from application fields unrelated to the aim of the review and complicate the search. Refining the search strategy to “crop yield prediction” AND “remote sensing” AND “deep learning” reduces the probability of deviating from the scope of the review. Initially, using these search strings, articles were retrieved from five databases: IEEE Xplore, ScienceDirect, Scopus, Google Scholar, and MDPI. Further, to include any other relevant studies, the keywords “crop yield prediction” OR “crop yield estimation” AND “deep learning” AND “remote sensing” OR “artificial intelligence” AND “smart farming” were used to retrieve articles from the databases. Articles from the last 10 years (2012–2022) were used for the study, as deep learning approaches gained momentum after 2012 [8]. Since then, much research has been conducted on deep learning approaches.

2.4. Article Selection Criteria

The retrieved articles are initially selected based on aspects such as the quality of the journal, the type of remote sensing technology used for the study, and the type of deep learning approaches adopted. Analyzing the abstracts of the articles helps in understanding the keywords and in selecting articles. Irrelevant articles were excluded based on the following criteria:
  • Articles that belong to the agricultural sector but do not fall under crop yield prediction;
  • Publications that use only traditional machine learning approaches (without deep learning) for crop yield prediction;
  • Publications that are not open access;
  • Articles published before 2012;
  • Articles in languages other than English.
After applying all the exclusion criteria, a total of 51 articles are selected. After further removing articles repeated across the selected databases, 44 articles are retained for the review. Figure 1 explains the process of article selection and rejection from the databases based on PRISMA. Table 1 shows the number of articles retrieved after the selection criteria are applied and the number of articles obtained after excluding the repeated articles from the selected databases. The research questions are addressed after all the data from the retrieved articles are summarized and synthesized.

3. Overview of the Existing Approaches

3.1. Deep Learning

Developing a reliable crop yield prediction model with traditional approaches such as the static regression approach and the mechanistic approach is difficult due to their limited applicability and uncertainty [9,10]. Many studies have used machine learning approaches such as regression trees, random forests, multivariate regression, association rule mining, and artificial neural networks for crop yield prediction [9,11]. Machine learning models treat the output, crop yield, as an implicit function of the input variables, such as weather components and soil conditions, which could be very complex [11]. Moreover, supervised learning approaches in machine learning often fail to capture the nonlinear relationship between input and output variables [12]. However, advancements in technology in recent years have made it possible to develop advanced crop yield prediction models utilizing deep learning. Deep learning is a class of machine learning that uses hierarchical structures of linked layers, and its capability to analyze both unlabeled and unstructured data sets it apart from traditional machine learning approaches [13]. Deep learning is broadly used in the agricultural field as it can analyze huge datasets, learn the relationships between various variables, and use nonlinear functions. These approaches can extract features from huge datasets in an unsupervised environment. Compared to traditional machine learning approaches, deep learning approaches perform better in feature extraction [13]. Since accurate crop yield prediction relies on the many factors influencing crop growth, deep learning’s strong ability to extract features from available data makes it well suited to the task.
Deep neural networks consist of stacked nonlinear layers that transform the input data into a more abstract representation at each layer [14]. Deep neural networks with multiple hidden layers are important for discovering the nonlinear correlation between input and response variables [14]. Nevertheless, they are difficult to train and require modern hardware and optimization methodologies [15]. Thus, increasing the number of hidden layers can be effective, but it introduces problems that can be resolved with certain techniques. The vanishing gradient problem in deeper neural networks can be reduced by adding residual skip connections to the network [15,16,17]. Moreover, the performance of deep learning approaches has been improved by adopting techniques such as stochastic gradient descent (SGD), batch normalization, and dropout. Some of the deep learning approaches are described below.
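As an illustration of the residual skip connection mentioned above, the following NumPy sketch (a hypothetical toy example, not drawn from any of the reviewed models) adds a block’s input back to its output, giving gradients an identity path around the nonlinear layers:

```python
import numpy as np

def dense(x, w, b):
    """Single fully connected layer with ReLU activation."""
    return np.maximum(0.0, x @ w + b)

def residual_block(x, w1, b1, w2, b2):
    """Two dense layers whose output is added back to the input (skip
    connection), so gradients can also flow around the nonlinear path."""
    h = dense(x, w1, b1)
    return x + (h @ w2 + b2)   # identity shortcut: f(x) + x

rng = np.random.default_rng(0)
d = 8
x = rng.normal(size=(4, d))                           # 4 samples, 8 features
w1, b1 = rng.normal(scale=0.1, size=(d, d)), np.zeros(d)
w2, b2 = rng.normal(scale=0.1, size=(d, d)), np.zeros(d)

y = residual_block(x, w1, b1, w2, b2)
print(y.shape)  # → (4, 8)
```

Because the block computes f(x) + x rather than f(x) alone, even when the learned path f contributes little, the gradient through the shortcut remains close to one, which is what mitigates the vanishing gradient problem in deep stacks of such blocks.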

3.1.1. Artificial Neural Networks (ANN)

Artificial neural networks are simple neural networks modeled on the neural structure of the human brain [14]. The network consists of interconnected nodes (neurons) grouped into layers. It has three types of layer: the input layer, the hidden layer, and the output layer. Inputs are received by the input layer of neurons; the hidden layer of interconnected neurons applies its function and then passes the result to the output layer [16]. Moreover, to initiate the process, initial weights are assigned randomly.

3.1.2. Deep Neural Networks (DNN)

A DNN is a special kind of feed-forward neural network with many fully connected hidden layers. Generally, activation functions such as ReLU, loss functions such as mean squared error, and regularization techniques such as L2 regularization are used with the hidden layers [13].
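The layer structure described above can be sketched in plain NumPy; the layer sizes, weight scales, and regression target here are illustrative assumptions, not a reference implementation from any reviewed study:

```python
import numpy as np

def relu(z):
    return np.maximum(0.0, z)

def dnn_forward(x, layers):
    """Feed-forward pass through fully connected hidden layers with ReLU;
    the final layer is linear (a regression output, e.g., predicted yield)."""
    for w, b in layers[:-1]:
        x = relu(x @ w + b)
    w, b = layers[-1]
    return x @ w + b

def mse_loss(y_pred, y_true, layers, l2=1e-4):
    """Mean squared error plus an L2 weight-regularization penalty."""
    penalty = l2 * sum(np.sum(w ** 2) for w, _ in layers)
    return np.mean((y_pred - y_true) ** 2) + penalty

rng = np.random.default_rng(1)
sizes = [10, 32, 32, 1]              # 10 input features -> two hidden layers -> 1 output
layers = [(rng.normal(scale=0.1, size=(m, n)), np.zeros(n))
          for m, n in zip(sizes[:-1], sizes[1:])]

x = rng.normal(size=(5, 10))         # 5 samples, 10 input features
y_true = rng.normal(size=(5, 1))
y_pred = dnn_forward(x, layers)
loss = mse_loss(y_pred, y_true, layers)
```

Training would then adjust the weights by gradient descent on this loss; only the forward pass and loss are shown here.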

3.1.3. Bayesian Neural Networks (BNN)

A BNN applies Bayesian inference to a neural network, using probability distributions rather than point values as weights. Using a Bayesian neural network can prevent overfitting without requiring separate validation data to tune the regularization parameter [18]. For better accuracy, training a BNN with a large dataset can be helpful.
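A minimal sketch of the idea, assuming a one-layer Bayesian linear model with independent Gaussian weight distributions (all numbers here are made up for illustration): sampling the weights repeatedly yields both a mean prediction and an uncertainty estimate:

```python
import numpy as np

def bnn_predict(x, w_mean, w_std, n_samples=200, rng=None):
    """Monte Carlo prediction with a one-layer Bayesian linear model:
    each forward pass samples weights from N(w_mean, w_std**2), and the
    spread of the outputs gives a predictive uncertainty estimate."""
    rng = rng or np.random.default_rng(0)
    preds = []
    for _ in range(n_samples):
        w = rng.normal(w_mean, w_std)   # sample one weight vector
        preds.append(x @ w)
    preds = np.stack(preds)
    return preds.mean(axis=0), preds.std(axis=0)

# Hypothetical posterior over three weights.
w_mean = np.array([0.5, -0.2, 1.0])
w_std = np.array([0.05, 0.05, 0.05])
x = np.array([[1.0, 2.0, 3.0]])

mean, std = bnn_predict(x, w_mean, w_std)
```

In a full BNN, every layer’s weights carry such distributions and the posterior is learned from data; the Monte Carlo averaging step, however, looks exactly like this.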

3.1.4. Convolution Neural Network (CNN)

Compared to conventional neural network approaches, a CNN includes convolution layers, pooling layers, and fully connected layers, which help in efficiently finding salient features within the data. The convolution layer consists of a convolution operation and an activation function, which perform feature extraction [19]. The convolution operation involves a filter and a feature map: a filter is a group of weights applied across the input, and a feature map is the corresponding output for a given filter. Moreover, a pooling operation is used to perform down-sampling, as it helps to detect features effectively [19]. The outputs are then passed through a nonlinear activation function, which introduces nonlinearity into the output. Fully connected (FC) layers are used after the convolution layers; the network’s capacity to learn the mapping between features and the target can be increased by adding FC layers [20].
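The convolution, activation, and pooling steps can be illustrated with a toy NumPy sketch (single band, single filter; the image and filter values are arbitrary assumptions):

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2D convolution (cross-correlation): slide the filter across
    the input and produce a feature map."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def max_pool(fmap, size=2):
    """Non-overlapping max pooling for down-sampling the feature map."""
    h, w = fmap.shape
    return fmap[:h - h % size, :w - w % size] \
        .reshape(h // size, size, w // size, size).max(axis=(1, 3))

image = np.arange(36, dtype=float).reshape(6, 6)    # toy single-band image
kernel = np.array([[-1.0, 0.0], [0.0, 1.0]])        # simple gradient-like filter
fmap = np.maximum(0.0, conv2d(image, kernel))       # convolution + ReLU
pooled = max_pool(fmap)                             # down-sampled feature map
```

A real CNN stacks many such filters per layer and learns the filter weights; the flow of data (convolve, activate, pool) is the same as in this sketch.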

3.1.5. 2D-CNN and 3D-CNN

A 2D-CNN is called a spatial method, whereas a 3D-CNN is called a spatio-temporal method [16]. In a 2D-CNN, the input data are treated as a spatial–spectral volume, and the kernel slides along the two spatial dimensions, width and height. In a 3D-CNN, a third dimension, typically temporal, is added to the two spatial dimensions. A 3D-CNN uses three-dimensional kernels, which slide along width, height, and depth and generate a 3D feature map [21]. The 3D-CNN approach is built from 3D convolutional layers [21].
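The dimensional difference can be made concrete with a small shape calculation; the stack of 8 time steps of 32 × 32 imagery and the 3 × 3 (× 3) kernels below are illustrative assumptions:

```python
import numpy as np

# A 2D-CNN kernel covers (height, width) per frame, while a 3D-CNN kernel
# also spans a third dimension (here, time).
stack = np.zeros((8, 32, 32))        # 8 time steps of 32x32 imagery

k2d = np.zeros((3, 3))               # slides over height and width only
k3d = np.zeros((3, 3, 3))            # slides over time, height, and width

# Valid-convolution output sizes for each case:
out2d = tuple(s - k + 1 for s, k in zip(stack.shape[1:], k2d.shape))
out3d = tuple(s - k + 1 for s, k in zip(stack.shape, k3d.shape))
print(out2d, out3d)  # → (30, 30) (6, 30, 30)
```

The 3D output retains a (shrunken) temporal axis, which is why 3D-CNNs can learn spatio-temporal patterns such as crop growth dynamics, whereas a 2D-CNN processes each time step independently.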

3.1.6. Faster R-CNN

The region-based convolutional neural network (R-CNN) is predominantly used for object localization and object detection [22]. There are four variants: R-CNN, Fast R-CNN, Faster R-CNN, and Mask R-CNN. They differ in their pooling and region proposal methods, which make the later variants progressively faster.

3.1.7. Long Short-Term Memory (LSTM)

LSTM is a special kind of recurrent neural network (RNN) that can learn time-dependent information with an appropriate gradient-based algorithm. An LSTM comprises a chain structure that starts with an input layer, followed by one or more LSTM layers and an output layer. To control the cell state and output, the LSTM uses three gates: the input gate, the forget gate, and the output gate. These gates act like small neural network layers that control the transfer of information [14]. Each cell in an LSTM layer has three gates: the input gate decides which information needs to be retained; the forget gate determines how much previous information must be forgotten and how much of the current input needs to be reserved; and the output gate uses both the current input and the previous output to decide the final output [23]. Tian et al. [24] proposed the ALSTM model, which has six layers: one input layer, one LSTM layer, one attention mechanism layer, two dropout layers, and one output layer.
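A single LSTM time step with the three gates can be sketched in NumPy as follows; the weight layout (one stacked matrix for all gates) and the sizes are illustrative choices, not tied to any reviewed model:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h_prev, c_prev, W, b):
    """One LSTM time step. W maps [x, h_prev] to the stacked pre-activations
    of the input (i), forget (f), and output (o) gates and the candidate (g)."""
    z = np.concatenate([x, h_prev]) @ W + b
    n = len(h_prev)
    i = sigmoid(z[0 * n:1 * n])      # input gate: what to write to the cell
    f = sigmoid(z[1 * n:2 * n])      # forget gate: what to keep from c_prev
    o = sigmoid(z[2 * n:3 * n])      # output gate: what to expose as h
    g = np.tanh(z[3 * n:4 * n])      # candidate cell update
    c = f * c_prev + i * g           # new cell state
    h = o * np.tanh(c)               # new hidden state / output
    return h, c

rng = np.random.default_rng(2)
n_in, n_hid = 4, 3
W = rng.normal(scale=0.1, size=(n_in + n_hid, 4 * n_hid))
b = np.zeros(4 * n_hid)

h, c = np.zeros(n_hid), np.zeros(n_hid)
for x in rng.normal(size=(5, n_in)):   # run over a 5-step input sequence
    h, c = lstm_step(x, h, c, W, b)
```

The cell state c is the memory that the forget and input gates jointly edit at each step, which is what lets the network carry information across long time series such as a growing season.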

3.2. Remote Sensing for Data Acquisition

Crop yield can vary according to environmental factors, climatic conditions, disease, and other parameters. Crop growth during different stages is influenced by these factors, and this is reflected in crop production [10]. Environmental factors, other parameters, and crop growth can be monitored using various instruments and methodologies, such as ground observation, remote sensing, global positioning systems, and on-field surveying. With ground observation and other traditional methods, it is challenging to acquire data in person for a large area, and the results can be less accurate and unreliable [10]. To counteract this limitation, remote sensing is nowadays increasingly utilized for crop monitoring.
Remote sensing techniques provide details about the status of crops at various growth stages through the use of spectral signatures, and at minimal cost when compared to extensive on-field surveying. Remote sensing is the acquisition and analysis of information about the Earth and its objects by an instrument placed in the atmosphere or on a satellite, without any physical contact [25]. Remote sensing can produce an adequate amount of data compared with other data acquisition techniques such as field surveying [26]. It is the process of monitoring and recognizing places on Earth by measuring emitted and reflected radiation with the help of sensors [27]. Data acquired using remote sensing have several applications in agriculture, including crop type classification, crop yield prediction, soil property detection, crop health monitoring, weather data assessment, and soil moisture retrieval [28].
One of the most important reasons to use optical remote sensing for acquiring crop information is the computation of vegetation indices. Combinations of spectral measurements at various wavelengths are known as spectral indices. They are employed to derive vegetation phenology and calculate biophysical parameters [29]. Among the various spectral indices, vegetation indices are the ones most used in crop yield prediction. Healthy crops are indicated by strong absorption in the red band and strong reflectance in the near-infrared band [30]. The strong difference between the intensities of red absorption and near-infrared reflectance can be combined into various quantitative indices of the vegetation environment. These linear or nonlinear combinational operations are known as vegetation indices (VI) [31,32]. Some VIs are the normalized difference vegetation index (NDVI), green vegetation index (GVI), and chlorophyll absorption ratio index (CARI), among many others.
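As an example, the NDVI is computed from the near-infrared and red reflectance bands as (NIR - Red) / (NIR + Red); the reflectance values below are hypothetical:

```python
import numpy as np

def ndvi(nir, red, eps=1e-9):
    """Normalized difference vegetation index from near-infrared and red
    reflectance; values near +1 indicate dense, healthy vegetation."""
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    return (nir - red) / (nir + red + eps)   # eps guards against division by zero

# Toy reflectances: a healthy canopy reflects NIR strongly and absorbs red.
nir = np.array([0.50, 0.45, 0.10])
red = np.array([0.08, 0.10, 0.09])
print(np.round(ndvi(nir, red), 2))  # → [0.72 0.64 0.05]
```

Applied per pixel to co-registered red and NIR bands, the same formula turns a multispectral scene into an NDVI map; the first two pixels here would read as vigorous vegetation and the third as bare or stressed ground.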

3.3. Impact of Vegetation Indices and Environmental Factors

Vegetation indices are formulated so that sensitivity to vegetation characteristics is maximized while factors such as soil background reflectance and directional or atmospheric effects are minimized. Most vegetation indices use information from the red and near-infrared (NIR) canopy reflectances or radiances [33]. A satellite with a multispectral sensor and several bands covering the visible, near-infrared, and short-wave infrared wavelength regions provides numerous vegetation indices.
Different types of vegetation indices are designed by various researchers and are extensively utilized in several research areas. Even though there is some variation in these proposed indices, all these designed indices are sensitive to biochemical attributes and biophysical parameters such as leaf angle distribution function, leaf chemical contents, fraction of absorbed photosynthetically active radiation, biomass, fraction of green coverage, and leaf area index (LAI). Due to the strong correlation between vegetation indices and biophysical parameters, they are widely used to determine the nutritional level of plants, mostly relative to nitrogen [34,35], to classify vegetation and to schedule crop management. However, the phenological stage of evaluation and the type of indices utilized influence their accuracy. Zhao et al. [35] developed a function that established a relationship between the crop coefficient (Kc) for irrigation management and the vegetation index, and it was used in water conservation. Other significant areas where these indices are used are the estimation of crop yield, protein content, biomass, weed management, and fertilizer management [36,37,38]. Some of the most commonly used indices are the normalized difference vegetation index (NDVI), green vegetation index (GVI), enhanced vegetation index (EVI), chlorophyll absorption ratio index (CARI), and many others.
Various studies have investigated how the correlation between remotely sensed data and crop yield differs as a function of time in the course of the growing season [39,40]. The studies have indicated that the relationship between vegetation indices and crop yield varies during the crop growth cycle [41,42,43,44]. Moreover, the relationship between vegetation indices and crop yield is not consistent in every growth stage. For example, the suitable phenological growth stages for wheat to obtain spatial yield data from satellite remote sensing are stem elongation, heading, and the development of fruit until early ripening [45,46,47]. Similarly, Ali et al. [48] proposed that the appropriate crop growth stage to study the correlation between vegetation index and crop yield and to estimate crop yield and biomass for oat grain was the appearance stage of the leaf health and kernel watery ripe stage. Most previous studies have performed correlation analyses between vegetation indices, soil data, and yield data and were mostly carried out for particular crop types, specific years, and a restricted number of vegetation indices [48,49,50,51]. The ultimate goal in correlation analysis between vegetation indices and crop yield data is to develop an optimal crop yield prediction model [52,53,54].
For developing a workable crop yield prediction model, it is important to determine the appropriate vegetation indices and environmental factors [55,56]. You et al. [57] predicted corn yield using the greenness index with 90% accuracy. Fernandes et al. [58] noticed that crop yield prediction is influenced by the selection of vegetation indices. Their study on maize yield prediction showed that NDVIre, NDVI, and GNDVI performed well for field variability. Haghverdi et al. [59] observed that crop yield prediction with NDVIre was more effective when compared to NDVI and GNDVI. Wang et al. [60] estimated corn yield by combining vegetation indices such as the normalized difference vegetation index (NDVI) and Absorbed Photosynthetically Active Radiation (APAR) with environmental factors including canopy surface temperature and the water stress index [56]. Further, other features, such as humidity, nutrients, and soil information, are also used in crop yield prediction. Although so many features are already used in crop yield prediction, there has been little investigation into which specific features most strongly impact crop yield prediction. Hence, detailed research is essential to achieve a better overview of the variables and factors influencing crop yield prediction.
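The kind of stage-wise correlation analysis discussed above can be sketched as follows; the synthetic NDVI series and yields are fabricated purely for illustration, with the index deliberately made most informative mid-season:

```python
import numpy as np

def stagewise_correlation(vi_series, yields):
    """Pearson correlation between a vegetation index and final yield,
    computed separately at each time step of the growing season.
    vi_series: (n_fields, n_timesteps) array; yields: (n_fields,) array."""
    return np.array([np.corrcoef(vi_series[:, t], yields)[0, 1]
                     for t in range(vi_series.shape[1])])

rng = np.random.default_rng(3)
yields = rng.normal(5.0, 1.0, size=50)          # hypothetical yields (t/ha)

# Synthetic NDVI that tracks yield most strongly mid-season (stage 2).
weights = np.array([0.1, 0.5, 0.9, 0.6, 0.2])
vi = yields[:, None] * weights + rng.normal(0, 0.5, size=(50, 5))

r = stagewise_correlation(vi, yields)
best_stage = int(np.argmax(r))
```

Plotting r against the growth stage identifies the phenological window in which the index carries the most yield information, which is the first step toward choosing inputs for a prediction model.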

4. Results and Discussion

The articles selected for the review are analyzed and summarized. Figure 2 shows the number of articles published between 2012 and 2022. For the years 2012–2014, the articles that we retrieved did not satisfy the condition of using both deep learning and remote sensing to predict crop yield. It is evident that the study of crop yield prediction using remote sensing has increased in recent years. Table 2 provides a detailed review of the type of remote sensing used for the study, data and parameters used in the study, and the model.
Based on the data analysis, the following research questions can be addressed:
RQ1—Approaches used in the literature:
For the first research question (RQ1), the deep learning approaches used for crop yield prediction are summarized in Table 2. Some unique approaches used are Neuroevolution of Augmenting Topologies (NEAT) and YieldNet. Table 2 shows that LSTM and CNN are the most used deep learning approaches for crop yield prediction. Beyond being used individually, CNN and LSTM are also widely combined, as in CNN-LSTM, in multilevel deep learning approaches, and in multimodal fusion approaches. Simple neural networks are found to be the least used for crop yield prediction with remote sensing data. The use of Neuroevolution of Augmenting Topologies (NEAT) in ANN, Caffe for implementing CNN, and Faster R-CNN are some promising aspects of deep learning. Most of the approaches included data pre-processing, and in some approaches, a Gaussian process was used along with CNN and LSTM. Figure 3 shows the distribution of crop yield prediction articles based on deep learning approaches.
Table 2 shows that one of the most frequently used deep learning approaches for crop yield prediction is the convolutional neural network (CNN). It is widely used because of its special ability to find the important features within the data. Nevavuori et al. [20] studied crop yield prediction with a unique profile of temperature and photoperiod, developing a region-specific deep learning model. Terliksiz and Altılar [66] explained that choosing the right data frame for a 3D-CNN is significant for crop yield prediction. Wolanin et al. [67] analyzed how reducing the input time series by 25 days influenced model performance; their CNN model with 10 variables gave better predictions for days during the flowering stage of wheat. Nevavuori et al. [85] developed 3D-CNN, ConvLSTM, and CNN-LSTM models for crop yield; among them, a model with five 3D-CNN layers performed best. Yang et al. [86] studied crop yield prediction during the ripening stage, in which the VI-based model performed poorly because the maximum greenness stage influenced the VIs, but the CNN with RGB and multispectral images performed well even at the fully ripened stage. Yang et al. [87] proposed a CNN approach to study corn yield, in which the CNN using spectral and color image features performed well compared to the 1D-CNN and 2D-CNN using spectral and spatial features.
LSTM was also used extensively in crop yield prediction, and its ability to learn time-dependent information makes it different from the other deep learning approaches. Wang et al. [60] studied the model performance accuracy by using a different combination of remote sensing and meteorological data and soil data. The addition of soil data was an advantage for the model in acquiring the spatial variability of yield. Tian et al. [23] used different combinations of input data, such as a combination of vegetation temperature condition index (VTCI) and remote sensing data and a combination of VTCI, meteorological data, and remote sensing data, to identify the best-performing wheat yield estimation model. The authors pointed out that the incorporation of VTCI with remote sensing and meteorological data can acquire information about various influences on crop growth. Cunha and Silva [61] studied crop yield prediction using LSTM and performed thirty different train–test cycles to assess the performance. Their approach was to use weather and soil data rather than satellite data as NDVI-based crop yield prediction is achievable only after a certain plant growth stage, whereas weather-based prediction is feasible before the planting period. Kaneko et al. [77] studied crop yield prediction in a group of countries using the LSTM approach. LSTM with Gaussian processing performed well for random splits when compared to the chronological splits. Jiang et al. [78] proposed an LSTM approach; prediction accuracy was good in extreme weather conditions compared to LASSO and RF. Moreover, LSTM performed well in learning temporal features when compared to ML approaches such as RF and LASSO. Zhang et al. [80] also found that the combination of input data performs well for crop yield prediction and estimated a suitable VI for crop yield prediction. 
The authors trained the model with six VIs and two climatic indices individually to choose the optimal CI, and then found the optimal combination for crop yield prediction from the following optimal VI and environmental indices (EI) combination, optimal CI and EI combination, optimal VI and optimal CI combination, and optimal VI, optimal CI, and EI combination. Incorporating all these indices helped to capture important variations in maize yield prediction.
Tian et al. [24] applied a stepwise sensitive analysis method to investigate the significance of each input variable in wheat yield prediction, which can help in understanding the ALSTM algorithm. ALSTM has two parts: one is the LSTM network and the other is the attention mechanism. The ALSTM’s performance was better when compared to LSTM. Moreover, ALSTM has good generalization ability and performed well even though the sampling sites had varied farming systems.
The CNN and LSTM approaches are combined and are studied as CNN-LSTM approaches. Sharma et al. [70] studied crop yield estimation using the CNN-LSTM approach. They studied the influence of contextual information such as water bodies, farmlands, and urban landscape in crop yield prediction by comparing it with another prediction model using CNN-LSTM without these factors. The CNN-LSTM along with contextual information performed well, thus improving the yield estimation. Gastli et al. [72] used CNN-LSTM with Gaussian Process (GP) to predict crop yield in California and compared the approach with CNN and LSTM. It was found that CNN-LSTM with GP performed better on data with and without moisture and histograms. When compared to CNN and LSTM, the CNN-LSTM aims to capture features effectively. Sun et al. [92] used the CNN-LSTM approach to study the end-of-season crop yield prediction and compared its performance with the CNN and LSTM approaches. The CNN-LSTM-based end-of-season model performed well when compared to both CNN and LSTM for five years. Compared to environmental features, the MODIS surface reflectance had a significant influence on the CNN-LSTM-based model.
DNN was also widely used in crop yield prediction, either individually or in multimodal fusion. Cao et al. [68] compared DNN with ML approaches such as SVM and random forest to find the best winter wheat yield prediction model. Jin et al. [82] studied biomass estimation using the DNN approach. Initially, the DNN performed well for biomass estimation with 15 vegetation indices; however, the estimation accuracy improved when LAI was combined with the 15 vegetation indices. Ma et al. [79] compared a Bayesian neural network (BNN) with ML approaches, and the BNN performed well in both end-of-season and within-season crop yield prediction.
Chen et al. [22] used a region-based convolutional neural network (R-CNN), as this reduces the number of proposed regions generated while ensuring precise object detection. Moreover, the authors compared the performance of R-CNN, Fast R-CNN, and Faster R-CNN for detecting strawberry flowers and fruit. The Faster R-CNN showed the best performance, with the lowest training time and the fastest detection rate.
Multimodal data fusion, a fundamental method of multimodal data mining, aims to integrate data of different distributions, sources, and types into a global space in which both intermodality and cross-modality can be represented uniformly [93,94,95,96]. Gavahi et al. [62] proposed the DeepYield approach by combining 3D-CNN and Conv-LSTM networks. Maimaitijiang et al. [90] used feature fusion at the input level and at the intermediate level within a DNN framework to predict crop yield. The proposed multimodal deep learning performed well; considering prediction accuracy, spatial adaptivity, and robustness, the intermediate-level feature fusion DNN framework performed better overall than the input-level feature fusion DNN framework. Similarly, Danilevicz et al. [91] used multimodal deep learning by incorporating a tab-DNN, an sp-DNN, and a fusion module with two linear layers and ReLU. As input to the fusion module, the authors concatenated the weights from the last layers of the tab-DNN and sp-DNN. The approach performed well for early crop yield prediction. In multilevel deep learning (MLDL)/ensembling techniques, by contrast, several networks are stacked in levels and the model uses the features extracted by those networks. Sun et al. [92] proposed an MLDL model that combines models into a single network and stacks them in different levels with convolution layers, pooling layers, and fully connected layers.
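The difference between input-level and intermediate-level fusion can be illustrated with a minimal NumPy sketch. The embedding sizes and the two-layer fusion module below are illustrative assumptions in the spirit of the cited designs, not their actual dimensions:

```python
import numpy as np

rng = np.random.default_rng(1)
relu = lambda z: np.maximum(z, 0.0)

# Hypothetical per-modality embeddings, e.g. the last-layer outputs of a
# tabular branch and a spatial branch (dimensions are illustrative).
tab_embed = relu(rng.standard_normal(8))    # "tab-DNN" embedding
sp_embed = relu(rng.standard_normal(16))    # "sp-DNN" embedding

# Intermediate-level fusion: concatenate the learned embeddings, then
# pass them through a small fusion module (two linear layers + ReLU).
fused = np.concatenate([tab_embed, sp_embed])
W1, b1 = rng.standard_normal((12, fused.size)) * 0.1, np.zeros(12)
W2, b2 = rng.standard_normal((1, 12)) * 0.1, np.zeros(1)
yield_pred = float(W2 @ relu(W1 @ fused + b1) + b2)

# Input-level fusion, by contrast, would concatenate the raw feature
# vectors *before* any branch-specific layers are applied.
```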
A deep residual network, or ResNet, is a type of neural network introduced to avoid the vanishing gradient problem. ResNet enables the training of networks up to a thousand layers deep. The core elements of ResNet are the residual blocks, whose skip connections allow the network to bypass any layer that harms performance, acting as a form of regularization [77]. The YieldNet framework was proposed by Khaki et al. [97], who used a transfer learning methodology to share the weights of the backbone feature extractor. Based on this approach, the authors predicted the yield of corn and soy simultaneously. Moreover, compared to YieldNet, other approaches such as 3D-CNN and DFNN required more parameters and longer training times. Barbosa et al. [81] proposed an approach in which a genetic algorithm called Neuroevolution of Augmenting Topologies (NEAT) was applied to the proposed dataset. NEAT adjusts both the topology and the weight parameters in the development of the artificial neural network for crop yield prediction.
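The residual block and its skip connection can be sketched in NumPy; the layer width and near-zero weight initialization are illustrative, chosen to show why residual blocks default to the identity mapping:

```python
import numpy as np

rng = np.random.default_rng(2)
relu = lambda z: np.maximum(z, 0.0)

def residual_block(x, W1, W2):
    """y = ReLU(x + F(x)): the skip connection adds the input back,
    so the signal (and its gradient) can flow around the two weight
    layers instead of vanishing through them."""
    return relu(x + W2 @ relu(W1 @ x))

d = 8
x = rng.standard_normal(d)
# With near-zero weights the block approximates the identity mapping,
# which is what makes very deep stacks trainable.
W1 = rng.standard_normal((d, d)) * 1e-3
W2 = rng.standard_normal((d, d)) * 1e-3
y = residual_block(x, W1, W2)
assert np.allclose(y, relu(x), atol=1e-2)  # ≈ identity (up to the ReLU)
```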
It can be seen that CNN approaches have performed well on data that share similar features. Yang et al. [87] used a CNN to predict crop yield at the ripening stage; the CNN performed well when both the test set and the training set covered the same phenological stage. The CNN-LSTM model performs better when predicting crop yield from a large dataset [60,68,78,93]. From Table 2, based on the evaluation metrics, the CNN- and LSTM-based approaches outperform the ANN and DNN approaches.
From the study, we observed that multiple software programs were used to process the data and various techniques were adopted to overcome overfitting in deep learning approaches. Fernandes et al. [58] used TIMESAT to extract a set of metrics from the NDVI growing curves, considering only the unmixed sugarcane pixels while extracting metrics for each municipality in Brazil using an Interactive Data Language (IDL) technique. A wrapper with sequential backward elimination and an ANN was used for feature extraction, and the extracted features served as input to a stacking ensemble neural network model. To overcome overfitting, the authors divided the datasets into three parts (70% for training, 10% for validation, and 20% for testing). Haghverdi et al. [59] calculated vegetation indices and implemented the ANN approach using NeuroSolutions V 7.1.1.1 software. To avoid overfitting, the authors stopped the training when the mean square error of the cross-validation subset started to increase or showed no improvement after 100 iterations. For UAV images, Chen et al. [22] used orthoimages to avoid photo distortion, and all the images were labeled manually using the labeling program developed by the Computer Science and Artificial Intelligence Laboratory (MIT, MA, USA). Kaneko et al. [77] used the surface reflectance data (all seven bands of MOD09A1.006) and temperature data (two bands of MOD11A2.006) from MODIS to generate features such as NDVI and EVI. The authors employed a histogram dimensionality reduction approach with LSTM to counteract overfitting, as the quantity of labeled data might be sparse. Nevavuori et al. [20] processed the irregular set of data points obtained from the yield measurement devices as rasters of field-wise yield. FarmWorks software was used for filtering and for generating rasterizable vector files.
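The histogram dimensionality reduction mentioned above discards pixel locations and keeps only the distribution of reflectance values per band. A minimal NumPy sketch follows; the band count, bin count, and value range are illustrative, not the exact settings of the cited studies:

```python
import numpy as np

rng = np.random.default_rng(3)

def image_to_histograms(image, bins=32, value_range=(0.0, 1.0)):
    """Reduce an (H, W, bands) reflectance image to a (bands, bins)
    matrix of normalized per-band histograms. Pixel positions are
    discarded; only the reflectance distribution is kept, which
    shrinks the input and helps when labeled data are sparse."""
    bands = image.shape[-1]
    hists = np.empty((bands, bins))
    for b in range(bands):
        counts, _ = np.histogram(image[..., b], bins=bins, range=value_range)
        hists[b] = counts / counts.sum()  # normalize to a distribution
    return hists

img = rng.random((64, 64, 7))   # e.g. 7 surface-reflectance bands
h = image_to_histograms(img)
print(h.shape)                  # (7, 32)
```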
The authors extracted field-wise image data (UAV images) and yield values using a sliding window technique to obtain geolocationally matched pairs of inputs and targets. To overcome overfitting, two distinct regularization strategies were used with the CNN: weight decay and early stopping. A GEE-based transformation was used by Sun et al. [69] to transform an entire image collection of remote sensing data into 32-bin normalized histograms at the county level. The authors employed the early stopping regularization technique to reduce the generalization error in the CNN-LSTM model. Pooling layers together with convolution layers were used to extract features [62]. Sharma et al. [70] did not include a pooling layer in the CNN-LSTM approach because of strided convolutions. Regularization strategies such as early stopping, the L2 penalty, and the dropout technique were used in CNN- and LSTM-based approaches to overcome overfitting.
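Early stopping, recurring across these studies, can be sketched as a generic training loop; the patience value and the synthetic validation-loss curve below are illustrative assumptions:

```python
import numpy as np

def train_with_early_stopping(step, patience=10, max_epochs=500):
    """Generic early-stopping loop: `step(epoch)` returns the
    validation loss; stop when it has not improved for `patience`
    consecutive epochs and report the best epoch seen."""
    best_loss, best_epoch, waited = np.inf, 0, 0
    for epoch in range(max_epochs):
        val_loss = step(epoch)
        if val_loss < best_loss:
            best_loss, best_epoch, waited = val_loss, epoch, 0
        else:
            waited += 1
            if waited >= patience:
                break   # validation loss is rising: the model is overfitting
    return best_epoch, best_loss

# Illustrative loss curve: improves until epoch 30, then overfits.
curve = lambda e: (e - 30) ** 2 / 900 + 1.0
epoch, loss = train_with_early_stopping(curve, patience=5)
print(epoch, round(loss, 3))  # 30 1.0
```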
Some of the software and techniques used for calculating VIs, processing, and pre-processing the data are summarized below:
  • TIMESAT;
  • NeuroSolutions version 7.1.1.1;
  • Use of Orthoimages;
  • The labeling program developed by the Computer Science and Artificial Intelligence Laboratory (MIT, Massachusetts, USA);
  • FarmWorks;
  • Google Earth Engine-based tensor generation;
  • Orthomosaic map generation (RGB images)—Agisoft PhotoScan Professional 1.2.5;
  • Orthomosaic reflectance map (multispectral images)—Pix4Dmapper 4.0;
  • Georeferencing—Esri ArcGIS 10.3;
  • The clipping process—“ExtractByMask” function from Arcpy Python module;
  • Layer stacking of images;
  • Mosaic and orthorectify, lens distortion, and vignetting issue correction (UAV RGB images)—Pix4Dmapper software;
  • Drawing of shapefile—plotshpcreate of R library;
  • Conversion of Geotiff plots to Numpy arrays—Python script;
  • Dimension transform technique—irregular shaped images;
  • MODIS products;
  • Image pre-processing—Spectronon software (version 2.134; Resonon, Inc., Bozeman, Montana);
  • Spectral data denoising—wavelet transform technique.
The overall result of this study indicates that the use of deep learning approaches for crop yield estimation has increased over the years, and the number of articles retrieved by our initial search strategy before applying exclusion criteria indicates the considerable amount of research carried out on crop yield prediction. However, the number of articles using remote sensing and deep learning approaches for crop yield prediction is still relatively low. Even after selecting all potentially relevant articles for this review, some valuable articles might have been missed. Nevertheless, the effectiveness of this systematic literature review can be validated by the process adopted to conduct it and the search string used to retrieve the maximum number of related articles.
RQ2
—Remote sensing used with deep learning:
Table 2 shows that the most commonly used remote sensing technology was satellite remote sensing, in particular MODIS. Data such as surface reflectance, land surface temperature, and annual land cover were obtained from MODIS. The other satellite remote sensing technologies used along with deep learning were Landsat 8 and Landsat 7, Sentinel-2, WorldView-3, and PlanetScope. Satellite remote sensing is able to produce an adequate amount of data and is a reliable source of data acquisition, as it is easily available and inexpensive compared to other remote sensing technologies. Multispectral satellites can offer huge amounts of information on crop yield at high spatial and temporal resolution [57]. The raw bands from remote sensing help to derive the vegetation indices, and the extracted features are used with DL for crop yield prediction. The correlation between these features and the crop yield was studied in the articles to assess the efficiency of these features in crop yield prediction. Sometimes, using all the derived vegetation indices does not significantly improve the performance of the approach. The choice of remote sensing technology for data acquisition depends on the requirements of the study; for instance, satellite images are more useful when dealing with a large area of land, such as a state or district. Figure 4 depicts the distribution of articles using satellite data across the years 2015–2022. The majority of the articles on the use of satellite data with deep learning were published in 2015 and 2020, with 39% and 25%, respectively.
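Deriving vegetation indices from raw satellite bands is arithmetic on per-pixel reflectances. A NumPy sketch for NDVI and EVI follows; the reflectance values and the EVI coefficients shown are standard textbook values used for illustration, not data from any cited study:

```python
import numpy as np

def ndvi(nir, red):
    """NDVI = (NIR - Red) / (NIR + Red), clipped to the valid range."""
    return np.clip((nir - red) / (nir + red + 1e-9), -1.0, 1.0)

def evi(nir, red, blue, G=2.5, C1=6.0, C2=7.5, L=1.0):
    """EVI = G * (NIR - Red) / (NIR + C1*Red - C2*Blue + L)."""
    return G * (nir - red) / (nir + C1 * red - C2 * blue + L)

# Illustrative surface-reflectance fractions for a vegetated pixel.
nir, red, blue = np.array([0.45]), np.array([0.08]), np.array([0.04])
print(float(ndvi(nir, red)[0]))  # ≈ 0.698, typical of healthy vegetation
```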
RQ3
—Features used in crop yield prediction
The features used for crop yield prediction are provided in Table 3. Organizing them this way helped in viewing the different types of features with respect to the selected crop and the approach used for crop yield prediction. Crop yield data were used in all the articles, as they are a key feature in crop yield prediction. Apart from the vegetation indices obtained from remote sensing technologies, features such as temperature and precipitation were grouped under meteorological data in the retrieved articles. Similarly, features such as soil type, clay content, silt content, sand content, bulk density, and coarse fragments were grouped under soil data. Moreover, features such as growing degree days and killing degree days were used in county-level model performance. Features such as pH value, Total Potential Evapotranspiration (PET), biomass, and cropland information were also used in the studies.
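Growing degree days (GDD) and killing degree days (KDD) accumulate temperature relative to crop thresholds across the season. A hedged sketch follows; the 8 °C base and 29 °C heat-stress ceiling are common maize values chosen for illustration, not taken from the cited studies:

```python
import numpy as np

def degree_days(tmax, tmin, t_base=8.0, t_kill=29.0):
    """Season totals of growing degree days (daily mean temperature
    above a base threshold) and killing degree days (daily maximum
    above a heat-stress ceiling)."""
    tmax, tmin = np.asarray(tmax), np.asarray(tmin)
    tmean = (tmax + tmin) / 2.0
    gdd = np.maximum(tmean - t_base, 0.0).sum()
    kdd = np.maximum(tmax - t_kill, 0.0).sum()
    return gdd, kdd

# Four illustrative days of daily max/min temperatures (°C).
tmax = [25.0, 31.0, 33.0, 20.0]
tmin = [12.0, 18.0, 20.0, 10.0]
gdd, kdd = degree_days(tmax, tmin)
print(gdd, kdd)  # 52.5 6.0
```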
Zhang et al. [80] compared the performance of satellite-derived cumulative precipitation, the standardized precipitation index, and land surface temperature (MODIS daily LST) with ground-observed climate metrics, and the satellite-derived climate metrics performed well. The authors state that using MODIS Land Surface Temperature rather than air temperature is effective, as the LST-based growing, killing, and frozen degree days were able to explain most of the inter-annual variation in yield. Moreover, LST responds to the surface energy balance, leaf canopy temperatures, and plant moisture, which makes it advantageous for crop yield prediction. However, the performance of satellite-based precipitation was lower than that of the temperature-based metrics. In some articles, RGB images outperformed spectral images: Nevavuori et al. [85] established that RGB images performed better than NDVI images, and Maimaitijiang et al. [90] found that RGB-based information had comparatively better prediction accuracy than the spectral features.
You et al. [57] observed that the growth-related infrared band and the short-wave infrared band were significant during the crop growing months, whereas the land surface temperature bands were important in the earlier crop growing months. Fernandes et al. [58] identified the multi-temporal component of Sentinel-2 images as an important element in improving rice crop classification accuracy; the use of auxiliary climate and soil data helped in crop yield estimation only for small patch sizes. Haghverdi et al. [59] noted that the Greenness Index and Wetness Index were the most effective features compared to the normalized difference vegetation index (NDVI), red band, simple ratio, near-infrared band, green normalized difference vegetation index, and soil brightness index. Barbosa et al. [94] observed that the Leaf Area Index (LAI) was the most effective feature among LAI, tree height, crown diameter, and the RGB values, based on seven different regression algorithms and feature selection. The features that influence crop yield are grouped and summarized in Table 3 below. In Figure 5, we present the distribution of features across articles. It can be seen that vegetation indices are the most commonly used feature for crop yield prediction. However, the most used features in the literature do not always work for all the approaches.
RQ4
—Challenges in using deep learning approaches and remote sensing for crop yield prediction
The challenges described here are based on the limitations mentioned in the retrieved articles. One challenge is how to improve the working model for better accuracy by training on a larger dataset, testing the model for better performance, and learning the micro-topological changes affecting crop yield prediction [98]. Another challenge is to investigate the practical implications of the model for providing accurate information about crop yield to agriculturalists, growers, and policymakers. The black box property is also one of the major challenges of deep learning approaches, as it is difficult to understand the working behind the neural networks, and understanding it may help to develop a better model. Khaki et al. [99] trained DNN and CNN-RNN models using the backpropagation method to perform feature selection and reduce the black box nature of the model. The authors used this feature selection method to evaluate the individual effects of weather components, soil conditions, and management variables, as well as the time period. To improve the traceability and interpretability of the LSTM model, recurrent network structures such as the Gated Recurrent Unit and the basic RNN, and advanced analytical tools such as the attention network, can be further explored [67]. Moreover, as the influence of the crop growth phase/phenological stage on crop yield prediction is significant, any change in the climatic conditions during crop growth stages can drastically affect plant growth, which eventually leads to variation in crop yield estimation. Nevavuori et al. [85] observed the significance of the crop growth phase in crop yield prediction using weather information at a city scale; however, using local temperatures and other climatic variables from local weather stations could improve the model's ability to learn changes across crop growth stages accurately.
The ANN approach was not able to efficiently capture the variability in micro-topography, which affected the prediction and produced large model errors for fields with high yield variability [75]. A further challenge with deep learning approaches is that large amounts of data are required to achieve good accuracy, and the complexity of deep learning approaches increases the time complexity of the algorithm [76]. Terliksiz and Altılar [66] observed that the results can be affected when data frames with different cropland coverage are fed to the 3D-CNN. Ma et al. [79] mentioned that combining the Bayesian deep learning approach with another deep learning approach such as LSTM, which can handle time-series features, could help to improve the model. Apart from the features mentioned in the previous section, including other factors such as phenological stages, crop varieties, and water availability can help in developing a better model [69,75]. Moreover, the challenge lies in developing an approach with potential variables that is likely to perform well for all crops or for specific kinds of crops.

5. Conclusions

This paper has presented a systematic literature review on the application of deep learning approaches for crop yield prediction using remote sensing data. The rationale for conducting this systematic literature review was to highlight the existing research gaps in deep learning methodologies and provide useful information on the impact of vegetation indices and environmental factors on crop yield prediction. This systematic literature review has covered the various deep learning approaches, features, and factors adopted for crop yield prediction. The studies were carried out on different types of crops, in different geographical locations, and with various features. Overall, the performance and accuracy of deep learning approaches for crop yield prediction are better than those of traditional machine learning approaches. The deep learning approaches are all capable of crop yield prediction, depending on the factors/parameters used in the model; however, the most effective deep learning approaches for crop yield prediction are the CNN- and LSTM-based ones. CNN has the ability to find important features that influence crop yield prediction, and LSTM identifies not only the variation pattern in the data but also the temporal dependencies in time-series data [23]. Based on this study, it is observed that the vegetation indices and the meteorological data are the most used features: the vegetation indices describe the crops' characteristics, while the meteorological data help to monitor the climatic conditions, which have a direct influence on crop yield prediction. It can also be seen that the factors influencing crop yield prediction are selected based on the crop yield and its correlation with other factors.
It is also shown that the main challenges of using deep learning approaches and remote sensing for crop yield prediction are how to improve the working model for better accuracy, the practical implications of the model for providing accurate information about crop yield to agriculturalists, growers, and policymakers, and the issue concerning the black box property.
In future work, we will consider conducting a systematic literature review on articles that use data from hyperspectral sensors for crop yield prediction. Moreover, we will include the specific food crops and classify the techniques and approaches used for different crop yield predictions.

Author Contributions

Conceptualization, P.M.; methodology, P.M., S.W. and S.G.; validation, N.H.S. and N.I.; writing—original draft preparation, P.M.; writing—review and editing, S.W., S.G., N.H.S. and N.I.; supervision, S.W., S.G., N.H.S. and N.I. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Informed Consent Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. WHO. World Hunger Is Still Not Going Down after Three Years and Obesity Is Still Growing—UN Report. Available online: https://www.who.int/news/item/15-07-2019-world-hunger-is-still-not-going-down-after-three-years-and-obesity-is-still-growing-un-report (accessed on 15 December 2021).
  2. UN. Pathways to Zero Hunger. Available online: https://www.un.org/zerohunger/content/challenge-hunger-can-be-eliminated-our-lifetimes (accessed on 15 December 2021).
  3. Kheir, A.M.S.; Alkharabsheh, H.M.; Seleiman, M.F.; Al-Saif, A.M.; Ammar, K.A.; Attia, A.; Zoghdan, M.G.; Shabana, M.M.A.; Aboelsoud, H.; Schillaci, C. Calibration and validation of AQUACROP and APSIM models to optimize wheat yield and water saving in Arid regions. Land 2021, 10, 1375. [Google Scholar] [CrossRef]
  4. Sarker, I.H. Deep Learning: A Comprehensive Overview on Techniques, Taxonomy, Applications and Research Directions. SN Comput. Sci. 2021, 2, 1–20. [Google Scholar] [CrossRef] [PubMed]
  5. Tranfield, D.; Denyer, D.; Smart, P. Towards a methodology for developing evidence-informed management knowledge by means of systematic review. Br. J. Manag. 2003, 14, 207–222. [Google Scholar] [CrossRef]
  6. Kitchenham, B.A.; Charters, S. Guidelines for Performing Systematic Literature Reviews in Software Engineering (EBSE 2007-001); Keele University: Keele, UK; Durham University: Durham, UK, 2007. [Google Scholar]
  7. Page, M.J.; McKenzie, J.E.; Bossuyt, P.M.; Boutron, I.; Hoffmann, T.C.; Mulrow, C.D.; Shamseer, L.; Tetzlaff, J.M.; Akl, E.A.; Brennan, S.E. The PRISMA 2020 statement: An updated guideline for reporting systematic reviews. Int. J. Surg. 2021, 88, 105906. [Google Scholar] [CrossRef] [PubMed]
  8. Shen, C. A Transdisciplinary review of deep learning research and its relevance for water resources scientists. Water Resour. Res. 2018, 54, 8558–8593. [Google Scholar] [CrossRef]
  9. Basso, B.; Cammarano, D.; Carfagna, E. Review of crop yield forecasting methods and early warning systems. In Proceedings of the First Meeting of the Scientific Advisory Committee of the Global Strategy to Improve Agricultural and Rural Statistics, FAO, Rome, Italy, 9–10 April 2013. [Google Scholar]
  10. Horie, T.; Yajima, M.; Nakagawa, H. Yield forecasting. Agric. Syst. 1992, 40, 211–236. [Google Scholar] [CrossRef]
  11. Jeong, J.H.; Resop, J.P.; Mueller, N.D.; Fleisher, D.H.; Yun, K.; Butler, E.E.; Timlin, D.J.; Shim, K.-M.; Gerber, J.S.; Reddy, V.R. Random Forests for global and regional crop yield predictions. PLoS ONE 2016, 11, e0156571. [Google Scholar] [CrossRef]
  12. Islam, N.; Rashid, M.; Wibowo, S.; Xu, C.-Y.; Morshed, A.; Wasimi, S.; Moore, S.; Rahman, S. Early weed detection using image processing and machine learning techniques in an Australian chilli farm. Agriculture 2021, 11, 387. [Google Scholar] [CrossRef]
  13. Islam, N.; Rashid, M.M.; Wibowo, S.; Wasimi, S.; Morshed, A.; Xu, C.; Moore, S.T. Machine learning based approach for weed detection in chilli field using RGB images. In Advances in Natural Computation, Fuzzy Systems and Knowledge Discovery; Meng, H., Lei, T., Li, M., Li, K., Xiong, N., Wang, L., Eds.; Springer: Cham, Switzerland, 2021; Volume 88, pp. 1097–1105. [Google Scholar]
  14. LeCun, Y.; Bengio, Y.; Hinton, G. Deep learning. Nature 2015, 521, 436–444. [Google Scholar] [CrossRef]
  15. Goodfellow, I.; Bengio, Y.; Courville, A.; Bengio, Y. Deep Learning; MIT Press: Cambridge, MA, USA, 2016. [Google Scholar]
  16. Khaki, S.; Wang, L.; Archontoulis, S.V. A CNN-RNN framework for crop yield prediction. Front. Plant Sci. 2020, 10, 1750. [Google Scholar] [CrossRef]
  17. Szegedy, C.; Wei, L.; Yangqing, J.; Sermanet, P.; Reed, S.; Anguelov, D.; Erhan, D.; Vanhoucke, V.; Rabinovich, A. Going deeper with convolutions. In Proceedings of the 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Boston, MA, USA, 7–12 June 2015; pp. 1–9. [Google Scholar]
  18. Johnson, M.D.; Hsieh, W.W.; Cannon, A.J.; Davidson, A.; Bédard, F. Crop yield forecasting on the Canadian Prairies by remotely sensed vegetation indices and machine learning methods. Agric. For. Meteorol. 2016, 218–219, 74–84. [Google Scholar] [CrossRef]
  19. Yamashita, R.; Nishio, M.; Do, R.K.G.; Togashi, K. Convolutional neural networks: An overview and application in radiology. Insights Imaging 2018, 9, 611–629. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  20. Nevavuori, P.; Narra, N.; Lipping, T. Crop yield prediction with deep convolutional neural networks. Comput. Electron. Agric. 2019, 163, 104859. [Google Scholar] [CrossRef]
  21. Fernandez-Beltran, R.; Baidar, T.; Kang, J.; Pla, F. Rice-yield prediction with multi-temporal Sentinel-2 Data and 3D CNN: A case study in Nepal. Remote Sens. 2021, 13, 1391. [Google Scholar] [CrossRef]
  22. Chen, Y.; Lee, W.S.; Gan, H.; Peres, N.; Fraisse, C.; Zhang, Y.; He, Y. Strawberry Yield Prediction Based on a Deep Neural Network Using High-Resolution Aerial Orthoimages. Remote Sens. 2019, 11, 1584. [Google Scholar] [CrossRef] [Green Version]
  23. Tian, H.; Wang, P.; Tansey, K.; Zhang, J.; Zhang, S.; Li, H. An LSTM neural network for improving wheat yield estimates by integrating remote sensing data and meteorological data in the Guanzhong Plain, PR China. Agric. For. Meteorol. 2021, 310, 108629. [Google Scholar] [CrossRef]
  24. Tian, H.; Wang, P.; Tansey, K.; Han, D.; Zhang, J.; Zhang, S.; Li, H. A deep learning framework under attention mechanism for wheat yield estimation using remotely sensed indices in the Guanzhong Plain, PR China. Int. J. Appl. Earth Obs. Geoinf. 2021, 102, 102375. [Google Scholar] [CrossRef]
  25. Baghdadi, N.; Zribi, M. Optical Remote Sensing of Land Surface: Techniques and Methods; Elsevier: Oxford, UK, 2016. [Google Scholar]
  26. Liu, P. A survey of remote-sensing big data. Front. Environ. Sci. 2015, 3, 45. [Google Scholar] [CrossRef] [Green Version]
  27. Kirkaya, A. Smart farming—Precision agriculture technologies and practices. J. Sci. Perspect. 2020, 4, 123–136. [Google Scholar] [CrossRef]
  28. Gómez, D.; Salvador, P.; Sanz, J.; Casanova, J.L. Potato yield prediction using machine learning techniques and Sentinel 2 data. Remote Sens. 2019, 11, 1745. [Google Scholar] [CrossRef] [Green Version]
  29. Kobayashi, N.; Tani, H.; Wang, X.; Sonobe, R. Crop classification using spectral indices derived from Sentinel-2A imagery. J. Inf. Telecommun. 2020, 4, 67–90. [Google Scholar] [CrossRef]
  30. Thomas, G.; Taylor, J.; Wood, G. Mapping yield potential with remote sensing. In Proceedings of the First European Conference on Precision Agriculture, Warwick University Conference Centre, Coventry, UK, 7–10 September 1997. [Google Scholar]
  31. Liang, S.; Wang, J. Advanced Remote Sensing: Terrestrial Information Extraction and Applications; Academic Press: Cambridge, MA, USA, 2019. [Google Scholar]
  32. Liang, S.; Wang, J. Chapter 3—Compositing, Smoothing, and Gap-Filling Techniques; Academic Press: Cambridge, MA, USA, 2020; pp. 107–130. [Google Scholar]
  33. Fang, H.; Liang, S. Leaf area index models. In Reference Module in Earth Systems and Environmental Sciences; Elsevier: Burlington, MA, USA, 2014. [Google Scholar]
  34. Santos, G.O.; Rosalen, D.L.; de Faria, R.T. Use of active optical sensor in the characteristics analysis of the fertigated brachiaria with treated sewage. Eng. Agrícola 2017, 37, 1213–1221. [Google Scholar] [CrossRef] [Green Version]
  35. Zhao, B.; Duan, A.; Ata-Ul-Karim, S.T.; Liu, Z.; Chen, Z.; Gong, Z.; Zhang, J.; Xiao, J.; Liu, Z.; Qin, A.; et al. Exploring new spectral bands and vegetation indices for estimating nitrogen nutrition index of summer maize. Eur. J. Agron. 2018, 93, 113–125. [Google Scholar] [CrossRef]
  36. Junior, C.K.; Guimarães, A.M.; Caires, E.F. Use of active canopy sensors to discriminate wheat response to nitrogen fertilization under no-tillage. Eng. Agrícola 2016, 36, 886–894. [Google Scholar] [CrossRef] [Green Version]
  37. Pantazi, X.-E.; Moshou, D.; Bravo, C. Active learning system for weed species recognition based on hyperspectral sensing. Biosyst. Eng. 2016, 146, 193–202. [Google Scholar] [CrossRef]
  38. Yao, Y.; Miao, Y.; Huang, S.; Gao, L.; Ma, X.; Zhao, G.; Jiang, R.; Chen, X.; Zhang, F.; Yu, K.; et al. Active canopy sensor-based precision N management strategy for rice. Agron. Sustain. Dev. 2012, 32, 925–933. [Google Scholar] [CrossRef] [Green Version]
  39. Mkhabela, M.; Bullock, P.; Raj, S.; Wang, S.; Yang, Y. Crop yield forecasting on the Canadian Prairies using MODIS NDVI data. Agric. For. Meteorol. 2011, 151, 385–393. [Google Scholar] [CrossRef]
  40. Wall, L.; Larocque, D.; Léger, P. The early explanatory power of NDVI in crop yield modelling. Int. J. Remote Sens. 2008, 29, 2211–2225. [Google Scholar] [CrossRef]
  41. Bolton, D.K.; Friedl, M.A. Forecasting crop yield using remotely sensed vegetation indices and crop phenology metrics. Agric. For. Meteorol. 2013, 173, 74–84. [Google Scholar] [CrossRef]
  42. Basnyat, B.M.P.; Lafond, G.P.; Moulin, A.; Pelcat, Y. Optimal time for remote sensing to relate to crop grain yield on the Canadian prairies. Can. J. Plant Sci. 2004, 84, 97–103. [Google Scholar] [CrossRef]
  43. Ren, J.; Chen, Z.; Zhou, Q.; Tang, H. Regional yield estimation for winter wheat with MODIS-NDVI data in Shandong, China. Int. J. Appl. Earth Obs. Geoinf. 2008, 10, 403–413. [Google Scholar] [CrossRef]
  44. Salazar, L.; Kogan, F.; Roytman, L. Use of remote sensing data for estimation of winter wheat yield in the United States. Int. J. Remote Sens. 2007, 28, 3795–3811. [Google Scholar] [CrossRef]
  45. Hack, H.; Bleiholder, H.; Buhr, L.; Meier, U.; Schnock-Fricke, U.; Weber, E.; Witzenberger, A. A uniform code for phenological growth stages of mono- and dicotyledonous plants: Extended BBCH scale, general. Nachr. Des. Dtsch. Pflanzenschutzd. 1992, 44, 265–270. [Google Scholar]
  46. Knoblauch, C.; Watson, C.; Berendonk, C.; Becker, R.; Wrage-Mönnig, N.; Wichern, F. Relationship between remote sensing data, plant biomass and soil nitrogen dynamics in intensively managed grasslands under controlled conditions. Sensors 2017, 17, 1483. [Google Scholar] [CrossRef] [Green Version]
  47. Marti, J.; Bort, J.; Slafer, G.A.; Araus, J.L. Can wheat yield be assessed by early measurements of normalized difference vegetation index? Ann. Appl. Biol. 2007, 150, 253–257. [Google Scholar] [CrossRef]
  48. Ali, A.; Martelli, R.; Lupia, F.; Barbanti, L. Assessing multiple years’ spatial variability of crop yields using satellite vegetation indices. Remote Sens. 2019, 11, 2384. [Google Scholar] [CrossRef] [Green Version]
  49. Domínguez, J.; Kumhálová, J.; Novák, P. Assessment of the relationship between spectral indices from satellite remote sensing and winter oilseed rape yield. Agron. Res. 2017, 15, 055–068. [Google Scholar]
  50. Panek, E.; Gozdowski, D. Analysis of relationship between cereal yield and NDVI for selected regions of Central Europe based on MODIS satellite data. Remote Sens. Appl. Soc. Environ. 2019, 17, 100286. [Google Scholar] [CrossRef]
  51. Vallentin, C.; Harfenmeister, K.; Itzerott, S.; Kleinschmit, B.; Conrad, C.; Spengler, D. Suitability of satellite remote sensing data for yield estimation in northeast Germany. Precis. Agric. 2021, 23, 52–82. [Google Scholar] [CrossRef]
  52. Prey, L.; Hu, Y.; Schmidhalter, U. High-throughput field phenotyping traits of grain yield formation and nitrogen use efficiency: Optimizing the selection of vegetation indices and growth stages. Front. Plant Sci. 2020, 10, 1672. [Google Scholar] [CrossRef] [Green Version]
  53. Thenkabail, P.S.; Smith, R.B.; De Pauw, E. Hyperspectral vegetation indices and their relationships with agricultural crop characteristics. Remote Sens. Environ. 2000, 71, 158–182. [Google Scholar] [CrossRef]
  54. Zhang, Y.; Qin, Q.; Ren, H.; Sun, Y.; Li, M.; Zhang, T.; Ren, S. Optimal hyperspectral characteristics determination for winter wheat yield prediction. Remote Sens. 2018, 10, 2015. [Google Scholar] [CrossRef] [Green Version]
  55. Kayad, A.G.; Al-Gaadi, K.A.; Tola, E.; Madugundu, R.; Zeyada, A.M.; Kalaitzidis, C. Assessing the spatial variability of alfalfa yield using satellite imagery and ground-based data. PLoS ONE 2016, 11, e0157166. [Google Scholar] [CrossRef] [PubMed]
  56. Kuwata, K.; Shibasaki, R. Estimating crop yields with deep learning and remotely sensed data. In Proceedings of the 2015 IEEE International Geoscience and Remote Sensing Symposium (IGARSS), Milan, Italy, 26–31 July 2015; pp. 858–861. [Google Scholar]
  57. You, J.; Li, X.; Low, M.; Lobell, D.; Ermon, S. Deep Gaussian process for crop yield prediction based on remote sensing data. In Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence, San Francisco, CA, USA, 4–10 February 2017; pp. 4559–4565. [Google Scholar]
  58. Fernandes, J.L.; Ebecken, N.F.F.; Esquerdo, J. Sugarcane yield prediction in Brazil using NDVI time series and neural networks ensemble. Int. J. Remote Sens. 2017, 38, 4631–4644. [Google Scholar] [CrossRef]
  59. Haghverdi, A.; Washington-Allen, R.A.; Leib, B.G. Prediction of cotton lint yield from phenology of crop indices using artificial neural networks. Comput. Electron. Agric. 2018, 152, 186–197. [Google Scholar] [CrossRef]
  60. Wang, X.; Huang, J.; Feng, Q.; Yin, D. Winter wheat yield prediction at county level and uncertainty analysis in main wheat-producing regions of China with deep learning approaches. Remote Sens. 2020, 12, 1744. [Google Scholar] [CrossRef]
  61. de Freitas Cunha, R.L.; Silva, B. Estimating crop yields with remote sensing and deep learning. In Proceedings of the 2020 IEEE Latin American GRSS & ISPRS Remote Sensing Conference (LAGIRS), Santiago, Chile, 22–26 March 2020; pp. 273–278. [Google Scholar]
  62. Gavahi, K.; Abbaszadeh, P.; Moradkhani, H. DeepYield: A combined convolutional neural network with long short-term memory for crop yield forecasting. Expert Syst. Appl. 2021, 184, 115511. [Google Scholar] [CrossRef]
63. Qiao, M.; He, X.; Cheng, X.; Li, P.; Luo, H.; Zhang, L.; Tian, Z. Crop yield prediction from multi-spectral, multi-temporal remotely sensed imagery using recurrent 3D convolutional neural networks. Int. J. Appl. Earth Obs. Geoinf. 2021, 102, 102436. [Google Scholar] [CrossRef]
  64. Abbaszadeh, P.; Gavahi, K.; Alipour, A.; Deb, P.; Moradkhani, H. Bayesian multi-modeling of deep neural nets for probabilistic crop yield prediction. Agric. For. Meteorol. 2021, 314, 108773. [Google Scholar] [CrossRef]
  65. Mu, H.; Zhou, L.; Dang, X.; Yuan, B. Winter wheat yield estimation from multitemporal remote sensing images based on convolutional neural networks. In Proceedings of the 2019 10th International Workshop on the Analysis of Multitemporal Remote Sensing Images (MultiTemp), Shanghai, China, 5–7 August 2019; pp. 1–4. [Google Scholar]
  66. Terliksiz, A.S.; Altylar, D.T. Use of deep neural networks for crop yield prediction: A case study of soybean yield in Lauderdale County, Alabama, USA. In Proceedings of the 2019 8th International Conference on Agro-Geoinformatics (Agro-Geoinformatics), Istanbul, Turkey, 16–19 July 2019; pp. 1–4. [Google Scholar] [CrossRef]
67. Wolanin, A.; Mateo-García, G.; Camps-Valls, G.; Gómez-Chova, L.; Meroni, M.; Duveiller, G.; Liangzhi, Y.; Guanter, L. Estimating and understanding crop yields with explainable deep learning in the Indian wheat belt. Environ. Res. Lett. 2020, 15, 024019. [Google Scholar] [CrossRef]
  68. Cao, J.; Zhang, Z.; Luo, Y.; Zhang, L.; Zhang, J.; Li, Z.; Tao, F. Wheat yield predictions at a county and field scale with deep learning, machine learning, and google earth engine. Eur. J. Agron. 2020, 123, 126204. [Google Scholar] [CrossRef]
  69. Sun, J.; Di, L.; Sun, Z.; Shen, Y.; Lai, Z. County-level soybean yield prediction using deep CNN-LSTM model. Sensors 2019, 19, 4363. [Google Scholar] [CrossRef] [PubMed] [Green Version]
70. Sharma, S.; Rai, S.; Krishnan, N.C. Wheat crop yield prediction using deep LSTM model. arXiv 2020, arXiv:2011.01498. [Google Scholar]
  71. Ghazaryan, G.; Skakun, S.; König, S.; Rezaei, E.E.; Siebert, S.; Dubovyk, O. Crop yield estimation using multi-source satellite image series and deep learning. In Proceedings of the 2020 IEEE International Geoscience and Remote Sensing Symposium, Online, 26 September–2 October 2020; pp. 5163–5166. [Google Scholar]
  72. Gastli, M.S.; Nassar, L.; Karray, F. Satellite images and deep learning tools for crop yield prediction and price forecasting. In Proceedings of the 2021 International Joint Conference on Neural Networks (IJCNN), Shenzhen, China, 18–22 July 2021; pp. 1–8. [Google Scholar] [CrossRef]
  73. Jeong, S.; Ko, J.; Yeom, J.M. Predicting rice yield at pixel scale through synthetic use of crop and deep learning models with satellite data in South and North Korea. Sci. Total Environ. 2021, 802, 149726. [Google Scholar] [CrossRef]
  74. Dang, C.; Liu, Y.; Yue, H.; Qian, J.; Zhu, R. Autumn crop yield prediction using data-driven approaches: Support vector machines, random forest, and deep neural network methods. Can. J. Remote Sens. 2020, 47, 162–181. [Google Scholar] [CrossRef]
  75. Gao, J.; Li, P.; Chen, Z.; Zhang, J. A survey on deep learning for multimodal data fusion. Neural Comput. 2020, 32, 829–864. [Google Scholar] [CrossRef]
  76. Khaki, S.; Pham, H.; Wang, L. Simultaneous corn and soybean yield prediction from remote sensing data using deep transfer learning. Sci. Rep. 2021, 11, 1–14. [Google Scholar] [CrossRef]
  77. Kaneko, A.; Kennedy, T.; Mei, L.; Sintek, C.; Burke, M.; Ermon, S.; Lobell, D. Deep learning for crop yield prediction in Africa. In Proceedings of the International Conference on Machine Learning AI for Social Good Workshop, Long Beach, CA, USA, 10–15 June 2019; pp. 1–5. [Google Scholar]
78. Jiang, H.; Hu, H.; Zhong, R.; Xu, J.; Xu, J.; Huang, J.; Wang, S.; Ying, Y.; Lin, T. A deep learning approach to conflating heterogeneous geospatial data for corn yield estimation: A case study of the US corn belt at the county level. Glob. Chang. Biol. 2020, 26, 1754–1766. [Google Scholar] [CrossRef]
  79. Ma, Y.; Zhang, Z.; Kang, Y.; Özdoğan, M. Corn yield prediction and uncertainty analysis based on remotely sensed variables using a Bayesian neural network approach. Remote Sens. Environ. 2021, 259, 112408. [Google Scholar] [CrossRef]
  80. Zhang, L.; Zhang, Z.; Luo, Y.; Cao, J.; Xie, R.; Li, S. Integrating satellite-derived climatic and vegetation indices to predict smallholder maize yield using deep learning. Agric. For. Meteorol. 2021, 311, 108666. [Google Scholar] [CrossRef]
  81. Xie, Y.; Huang, J. Integration of a Crop Growth Model and Deep Learning Methods to Improve Satellite-Based Yield Estimation of Winter Wheat in Henan Province, China. Remote Sens. 2021, 13, 4372. [Google Scholar] [CrossRef]
  82. Jin, X.; Li, Z.; Feng, H.; Ren, Z.; Li, S. Deep neural network algorithm for estimating maize biomass based on simulated Sentinel 2A vegetation indices and leaf area index. Crop J. 2019, 8, 87–97. [Google Scholar] [CrossRef]
  83. Engen, M.; Sandø, E.; Sjølander, B.L.O.; Arenberg, S.; Gupta, R.; Goodwin, M. Farm-scale crop yield prediction from multi-temporal data using deep hybrid neural networks. Agronomy 2021, 11, 2576. [Google Scholar] [CrossRef]
  84. Xie, Y. Combining CERES-wheat model, Sentinel-2 data, and deep learning method for winter wheat yield estimation. Int. J. Remote Sens. 2022, 43, 630–648. [Google Scholar] [CrossRef]
  85. Nevavuori, P.; Narra, N.; Linna, P.; Lipping, T. Crop yield prediction using multitemporal UAV data and spatio-temporal deep learning models. Remote Sens. 2020, 12, 4000. [Google Scholar] [CrossRef]
  86. Yang, Q.; Shi, L.; Han, J.; Zha, Y.; Zhu, P. Deep convolutional neural networks for rice grain yield estimation at the ripening stage using UAV-based remotely sensed images. Field Crop. Res. 2019, 235, 142–153. [Google Scholar] [CrossRef]
  87. Yang, Q.; Shi, L.; Lin, L. Plot-scale rice grain yield estimation using UAV-based remotely sensed images via CNN with time-invariant deep features decomposition. In Proceedings of the 2019 IEEE International Geoscience and Remote Sensing Symposium, 28 July–2 August 2019; pp. 7180–7183. [Google Scholar] [CrossRef]
  88. Yang, W.; Nigon, T.; Hao, Z.; Dias Paiao, G.; Fernández, F.G.; Mulla, D.; Yang, C. Estimation of corn yield based on hyper-spectral imagery and convolutional neural network. Comput. Elect. Agric. 2021, 184, 106092. [Google Scholar] [CrossRef]
89. Sagan, V.; Maimaitijiang, M.; Bhadra, S.; Maimaitiyiming, M.; Brown, D.R.; Sidike, P.; Fritschi, F.B. Field-scale crop yield prediction using multi-temporal WorldView-3 and PlanetScope satellite data and deep learning. ISPRS J. Photogramm. Remote Sens. 2021, 174, 265–281. [Google Scholar] [CrossRef]
  90. Maimaitijiang, M.; Sagan, V.; Sidike, P.; Hartling, S.; Esposito, F.; Fritschi, F.B. Soybean yield prediction from UAV using multimodal data fusion and deep learning. Remote Sens. Environ. 2019, 237, 111599. [Google Scholar] [CrossRef]
  91. Danilevicz, M.F.; Bayer, P.E.; Boussaid, F.; Bennamoun, M.; Edwards, D. Maize yield prediction at an early developmental stage using multispectral images and genotype data for preliminary hybrid selection. Remote Sens. 2021, 13, 3976. [Google Scholar] [CrossRef]
  92. Sun, J.; Lai, Z.; Di, L.; Sun, Z.; Tao, J.; Shen, Y. Multilevel deep learning network for county-level corn yield estimation in the U.S. corn belt. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2020, 13, 5048–5060. [Google Scholar] [CrossRef]
  93. Kross, A.; Znoj, E.; Callegari, D.; Kaur, G.; Sunohara, M.; Lapen, D.; McNairn, H. Using artificial neural networks and remotely sensed data to evaluate the relative importance of variables for prediction of within-field corn and soybean yields. Remote Sens. 2020, 12, 2230. [Google Scholar] [CrossRef]
94. Barbosa, B.D.S.; Ferraz, G.A.e.S.; Costa, L.; Ampatzidis, Y.; Vijayakumar, V.; dos Santos, L.M. UAV-based coffee yield prediction utilizing feature selection and deep learning. Smart Agric. Technol. 2021, 1, 100010. [Google Scholar] [CrossRef]
  95. Bronstein, M.M.; Bronstein, A.M.; Michel, F.; Paragios, N. Data fusion through cross-modality metric learning using similarity-sensitive hashing. In Proceedings of the 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, San Francisco, CA, USA, 13–18 June 2010; pp. 3594–3601. [Google Scholar]
  96. Poria, S.; Cambria, E.; Bajpai, R.; Hussain, A. A review of affective computing: From unimodal analysis to multimodal fusion. Inf. Fusion 2017, 37, 98–125. [Google Scholar] [CrossRef] [Green Version]
  97. Khaki, S.; Wang, L. Crop yield prediction using deep neural networks. Front. Plant Sci. 2019, 10, 621. [Google Scholar] [CrossRef] [Green Version]
98. Bramon, R.; Boada, I.; Bardera, A.; Rodriguez, J.; Feixas, M.; Puig, J.; Sbert, M. Multimodal data fusion based on mutual information. IEEE Trans. Vis. Comput. Graph. 2011, 18, 1574–1587. [Google Scholar] [CrossRef] [PubMed]
  99. Matese, A.; Toscano, P.; Di Gennaro, S.F.; Genesio, L.; Vaccari, F.P.; Primicerio, J.; Belli, C.; Zaldei, A.; Bianconi, R.; Gioli, B. Intercomparison of UAV, Aircraft and satellite remote sensing platforms for precision viticulture. Remote Sens. 2015, 7, 2971–2990. [Google Scholar] [CrossRef] [Green Version]
Figure 1. PRISMA flow chart showing the results of searches.
Figure 2. Distribution of articles between 2012 and 2022.
Figure 3. Distribution of articles based on deep learning approaches.
Figure 4. Distribution of articles on the use of satellite data with deep learning between 2015 and 2022.
Figure 5. Distribution of features across articles.
Table 1. Distribution of articles among various databases.
Databases | # of Retrieved Articles after Search Protocol | # of Articles after Exclusion Criteria | # of Articles after Removing Repeated Articles
Scopus | 77 | 9 | 4
Science Direct | 243 | 17 | 11
IEEE Xplore | 20 | 7 | 7
Google Scholar | 154 | 14 | 14
MDPI | 13 | 4 | 4
Web of Science | 64 | 4 | 4
Total | 571 | 55 | 44
Table 2. Summary of articles between 2012 and 2022.
Type of Remote Sensing | Model | Data and Features Used in Study | Authors
AVHRR | Bayesian neural networks (BNN) | Crop yield data | Johnson et al. [16]
| LSTM | Yield data, surface reflectance, NDVI, air temperature, precipitation, air pressure, humidity | Wang et al. [60]
ERA5 reanalysis | LSTM | Growing Degree Days (GDD), plant–harvest cycle, average yield data, maximum temperature, minimum temperature | Cunha and Silva [61]
Landsat 8 | ANN | NDVI, SR, NIR, GNDVI, GI, WI, and SBI | Haghverdi et al. [59]
MODIS | 3D-CNN | Surface reflectance, land surface temperature, and land cover type | Gavahi et al. [62], Qiao et al. [63], Abbaszadeh et al. [64]
| CNN | NDVI, NDWI, NIR, precipitation, minimum, mean and maximum air temperatures | Kuwata and Shibasaki [56], Mu et al. [65], Terliksiz and Altylar [66], Wolanin et al. [67], Cao et al. [68]
| CNN-LSTM | Surface reflectance, land surface temperature | You et al. [57], Sun et al. [69], Sharma et al. [70], Ghazaryan et al. [71], Gastli et al. [72], Jeong et al. [73]
| DNN | NDVI, Absorbed Photosynthetically Active Radiation (APAR), land surface temperature | Dang et al. [74], Gao et al. [75]
| Deep forward neural network (DFNN) | Yield data, surface reflectance, land surface temperature, cropland data layers | Khaki et al. [76]
| LSTM | NDVI, EVI, land surface temperature | Tian et al. [23], Tian et al. [24], Kaneko et al. [77], Jiang et al. [78], Ma et al. [79], Zhang et al. [80], Xie et al. [81]
| Neural networks ensemble | NDVI, Red, SR, NIR, GNDVI, GI, WI, and SBI | Fernandes et al. [58]
Sentinel-2 | 3D-CNN | Crop yield, rice crop mask, bands B02–B08, B8A, B11, B12 and NDVI, climate data | Fernandez-Beltran et al. [21]
| DNN | Precipitation, temperature, NIR, and SWIR | Jin et al. [82], Engen et al. [83]
| LSTM | Minimum and maximum temperature, integrated solar radiation, cumulative precipitation, soil texture, soil chemical parameters, hydrological properties | Xie et al. [84]
UAV | CNN | EVI, GRVI, GNDVI, MSAVI, OSAVI, NDVI, SAVI, WDRVI | Nevavuori et al. [85], Yang et al. [86], Yang et al. [87], Yang et al. [88]
| CNN-LSTM | RGB images, thermal time, crop yield data, cumulative temperature | Nevavuori et al. [85]
| DNN | NDVI, GNDVI, EVI, EVI2, WDRVI, SIPI, NRVI, VARI, TVI, OSAVI, MCARI, TCARI, NDWI, NDRE, RECI, GLCM | Sagan et al. [89]
| Faster R-CNN | Weather images | Chen et al. [22]
| Multimodal data fusion | Surface temperature, air temperature, humidity, normalized relative canopy temperature (NRCT), Vegetation Fraction (VF) | Maimaitijiang et al. [90]
| Spectral deep neural network (sp-DNN) | Crop yield, harvested yield, multispectral images, NIR, NDVI, NDVI-RE, NDRE, ENVI, CCCI, GNDVI, GLI, and OSAVI | Danilevicz et al. [91]
Table 3. Features that influence crop yield.
Vegetation Indices | NDVI—Normalized Difference Vegetation Index, EVI—Enhanced Vegetation Index, GCI—Green Chlorophyll Index, NDWI—Normalized Difference Water Index, NIR—Near-Infrared, MODIS Surface Reflectance, MODIS Land Surface Temperature, GNDVI—Green Normalized Difference Vegetation Index, EVI2—Two-Band Enhanced Vegetation Index, WDRVI—Wide Dynamic Range Vegetation Index, SIPI—Structure Insensitive Pigment Index, NRVI—Normalized Ratio Vegetation Index, VARI—Visible Atmospherically Resistant Index, TVI—Triangular Vegetation Index, OSAVI—Optimized Soil Adjusted Vegetation Index, MCARI—Modified Chlorophyll Absorption Ratio Index, TCARI—Transformed Chlorophyll Absorption Reflectance Index, NDRE—Normalized Difference Red-Edge Index, RECI—Red-Edge Chlorophyll Index, GLCM—Gray-Level Co-Occurrence Matrix, GI—Greenness Index, WI—Wetness Index, Red Band, SBI—Soil Brightness Index, SAVI—Soil-Adjusted Vegetation Index, MSAVI—Modified Soil-Adjusted Vegetation Index, SIF—Solar-Induced Chlorophyll Fluorescence, APAR—Absorbed Photosynthetically Active Radiation, PCI—Precipitation Condition Index, VHI—Vegetation Health Index, PAR—Photosynthetically Active Radiation, TVDI—Temperature Vegetation Dryness Index, VSWI—Vegetation Supply Water Index, PDI—Perpendicular Drought Index, RZSM—Root Zone Soil Moisture, NDMI—Normalized Difference Moisture Index, LAI—Leaf Area Index, ET—Total Evapotranspiration, LE—Average Latent Heat Flux, PET—Total Potential Evapotranspiration, PLE—Average Potential Latent Heat Flux, GPP—Gross Primary Productivity, PsnNet—Net Photosynthesis, CCCI—Canopy Chlorophyll Content Index, GLI—Green Leaf Index, NDVI-RE—Normalized Difference Vegetation Index Red Edge, NDRE-R—Normalized Difference Red Edge Red, VTCI—Vegetation Temperature Condition Index, NRCT—Normalized Relative Canopy Temperature, VF—Vegetation Fraction, Sentinel-2 bands B02 to B08, B8A, B11, B12, RDVI—Renormalized Difference Vegetation Index, MTVI1—Modified Triangular Vegetation Index, TBWI—Three-Band Water Index, NDII—Normalized Difference Infrared Index, DCNI—Canopy Nitrogen Index
Meteorological Data/Weather Conditions | Precipitation, minimum temperature, mean temperature, maximum air temperature, temperature, weather, accumulated precipitation, cumulative temperature, average precipitation, average temperature, GDD—Growing Degree Days, KDD—Killing Degree Days, FDD—Frozen Degree Days, Surface Downward Shortwave Radiation Flux (SWdown), Water Vapor Pressure Deficit (VPD), air pressure, air-specific humidity, surface downward longwave radiation, wind speed, evapotranspiration, water stress indicator
Crop Yield Information (Excluding Crop Yield Data) | Growing phase as percentage of total thermal time, start of crop season, end of crop season, length of crop season, harvest cycle, country-level yield, field-level yield, biomass measure, fresh grain yield, dry grain yield, wheat crop fraction
Images | RGB images, hyperspectral images, moisture images
Soil Data | Clay content mass fraction, sand content mass fraction, water content, pH, bulk density, carbon content, silt content, coarse fragments, cation exchange capacity, pH in H2O, pH in KCl, Soil Available Water Holding Capacity (AWC), particle size distribution, total nitrogen
Others | Annual land cover, plant growth stage, micro-topographic fields, plant height, crop planting areas, thermal time, solar radiation, crop land data
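Most of the spectral indices listed under Vegetation Indices in Table 3 are simple arithmetic on per-pixel reflectance bands. As an illustrative sketch only (the function names and toy reflectance values below are ours, not taken from any reviewed study), NDVI, GNDVI, and EVI with the standard MODIS coefficients can be computed as:

```python
import numpy as np

def ndvi(nir, red):
    # Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red)
    return (nir - red) / (nir + red)

def gndvi(nir, green):
    # Green NDVI: same ratio with the green band in place of red
    return (nir - green) / (nir + green)

def evi(nir, red, blue):
    # Enhanced Vegetation Index, standard MODIS coefficients
    # (gain 2.5, C1 = 6, C2 = 7.5, canopy background L = 1)
    return 2.5 * (nir - red) / (nir + 6.0 * red - 7.5 * blue + 1.0)

# Toy per-pixel surface reflectances (fractions of incident light)
nir = np.array([0.5])
red = np.array([0.1])
green = np.array([0.15])
blue = np.array([0.05])

print(ndvi(nir, red), gndvi(nir, green), evi(nir, red, blue))
```

Because the operations are elementwise, the same functions apply unchanged to whole satellite or UAV image arrays, which is how such index layers are typically stacked into the input tensors of the CNN and LSTM models surveyed above.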
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
