Review

A Review of Artificial Intelligence Techniques for Wheat Crop Monitoring and Management

by
Jayme Garcia Arnal Barbedo
Embrapa Digital Agriculture, Campinas 13083-886, SP, Brazil
Agronomy 2025, 15(5), 1157; https://doi.org/10.3390/agronomy15051157
Submission received: 17 April 2025 / Revised: 28 April 2025 / Accepted: 6 May 2025 / Published: 9 May 2025
(This article belongs to the Section Precision and Digital Agriculture)

Abstract

Artificial intelligence (AI) techniques, particularly machine learning and deep learning, have shown great promise in advancing wheat crop monitoring and management. However, the application of AI in this domain faces persistent challenges that hinder its full potential. Key limitations include the high variability of agricultural environments, which complicates data acquisition and model generalization; the scarcity and limited diversity of labeled datasets; and the substantial computational demands associated with training and deploying deep learning models. Additionally, difficulties in ground-truth generation, cloud contamination in remote sensing imagery, coarse spatial resolution, and the “black-box” nature of deep learning models pose significant barriers. Although strategies such as data augmentation, semi-supervised learning, and crowdsourcing have been explored, they are often insufficient to fully overcome these obstacles. This review provides a comprehensive synthesis of recent advancements in AI for wheat applications, critically examines the major unresolved challenges, and highlights promising directions for future research aimed at bridging the gap between academic development and real-world agricultural practices.

1. Introduction

Wheat (Triticum aestivum L.) is one of the most important staple crops worldwide, providing a significant portion of daily caloric intake for millions of people. Given its global significance, optimizing wheat production is crucial to ensuring food security. However, challenges such as climate change, pest infestations, and resource inefficiencies continue to impact wheat yields and quality [1,2]. As the demand for wheat continues to grow, innovative solutions that leverage modern technology are needed to enhance productivity while promoting sustainable agricultural practices.
Most potential technological solutions for agriculture are inherently data driven, that is, they can only be effective if data covering the whole variety of conditions found for that specific application are available. Although sensors to collect data from crop fields have been available for many decades, this kind of technology has experienced accelerated evolution and growth since the turn of the twenty-first century [3]. Soil and meteorological sensors are now sensitive and affordable enough for a detailed characterization and modeling of the cultivation process [4]. Digital cameras can be utilized to monitor diseases, pests, nutrient deficiencies, and other stress factors, which are major contributors to agricultural losses [5]. Meanwhile, advanced multispectral and hyperspectral cameras are enabling the early detection of issues, allowing for timely intervention to prevent significant yield losses [6]. Drones have revolutionized data collection, enabling the coverage of vast areas while capturing high-resolution images with efficiency and precision [7]. A growing number of satellites now continuously monitor the Earth, with increasing revisit frequencies and ever-improving sensor resolution and sensitivity [8]. Internet of Things (IoT) technologies have enabled the seamless interconnection of devices, allowing them to communicate and exchange data autonomously over the internet, without the need for human intervention [9]. As a result, the volume of collected data has been rapidly increasing, even in previously inaccessible areas where data collection was once logistically impractical. Extracting meaningful insights from these diverse data types is a complex challenge, but artificial intelligence techniques and models have proven highly effective in overcoming it [10].
Artificial intelligence (AI) has emerged as a transformative tool in addressing these challenges. Recent advancements in AI-driven agriculture have led to notable progress in key areas such as disease detection, yield prediction, weed management, and phenotyping [11]. For example, ref. [12] developed a deep learning model achieving high accuracy in early detection of wheat rust based on hyperspectral imaging. Similarly, ref. [13] demonstrated that convolutional neural networks (CNNs) could outperform traditional machine learning models in predicting wheat yield from UAV-acquired imagery. These and other studies underscore the growing reliability and precision of AI-driven approaches in wheat production monitoring.
Emerging deep learning models, including self-supervised learning and attention-based architectures, are proving to be highly effective in automating large-scale wheat monitoring and optimizing crop management decisions [14]. The integration of multispectral and hyperspectral imaging, UAV-based monitoring, and remote sensing technologies has further strengthened AI applications in precision wheat farming [15]. Additionally, the integration of transfer learning, multi-source data fusion [16], and hybrid AI models has contributed to overcoming challenges associated with data scarcity and model generalization [17].
Despite recent advancements, the application of artificial intelligence to wheat management and monitoring still faces a range of persistent challenges that extend beyond data-related issues. Among the foremost limitations are the high computational demands of training and deploying complex models, which can hinder adoption in settings with limited infrastructure [18]. Additionally, enhancing model interpretability remains a crucial concern, as current deep learning architectures often function as “black boxes”, limiting their usability in decision-making processes that require transparency and trust [19,20,21]. The dynamic nature of agricultural environments adds another layer of complexity—fields are unstructured and influenced by constantly changing variables such as weather conditions, light incidence, phenological stages, and the presence of pests or diseases.
Amid these broader issues, data limitations continue to be a major bottleneck. Unlike more stable domains like urban environments, agricultural systems demand datasets that are not only large in volume but also diverse enough to capture complex interactions among environmental and biological factors [5]. This challenge is particularly acute for digital imagery, where the cost and logistics of acquiring representative samples under varied conditions are considerable. While fixed sensor networks may partially alleviate this burden, alternative strategies are still necessary. Methods such as semi-supervised learning, domain adaptation, and improved annotation techniques have shown promise, but they cannot entirely substitute for robust, well-curated datasets. To address this gap, innovative approaches based on crowdsourcing and citizen science have demonstrated potential [11,22]. These participatory methods can contribute valuable, real-world data at scale, though further refinement is needed to ensure quality, standardization, and integration into existing AI workflows. A holistic response to these challenges requires coordinated progress across model development, data infrastructure, and interdisciplinary collaboration.
Thus, although substantial progress has been made in applying machine learning and deep learning techniques to wheat crop monitoring and management, significant gaps remain. Existing studies often rely on limited, site-specific datasets, which restrict the generalizability of the proposed models across diverse agroecological environments. Moreover, challenges such as data scarcity, ground-truthing difficulties, limited temporal resolution, and the lack of interpretable AI models continue to hinder practical deployment in real-world agricultural settings. While recent works have explored advanced methods such as data augmentation, transfer learning, and multi-source fusion, these approaches have yet to fully bridge the gap between controlled experimental results and scalable field applications. This review aims to critically examine these persistent challenges, synthesize emerging strategies, and identify directions for future research to advance the robust integration of AI in wheat production systems.
Numerous studies in the literature address one or more of the challenges and research gaps outlined above, as well as various application-specific difficulties. However, the diversity of methodologies and approaches can make it difficult to determine which solutions are most appropriate for specific problems. To help organize the growing body of scientific knowledge and provide a clearer view of the current landscape, this article presents a comprehensive review of state-of-the-art artificial intelligence applications in wheat monitoring and management. It examines recent advances, highlights persistent challenges, and outlines promising directions for future research and integration. By assessing the capabilities and limitations of current AI models, this review seeks to bridge the gap between academic research and practical implementation in agricultural settings, ultimately contributing to improved food security and the promotion of more sustainable wheat production practices.
The remainder of this article is organized as follows. Section 2 defines the key terms and acronyms used throughout the review. Section 3 examines the state-of-the-art AI applications in various stages of wheat cultivation. Section 4 provides an in-depth discussion of the main technical and practical challenges, as well as unresolved research gaps. Section 5 concludes with final remarks and reflections on future directions.

2. Definitions and Acronyms

Some terms considered particularly important in the context of this work are defined in this section. Most definitions have been adapted from [23]. A list of acronyms used in this article, along with their respective meanings, is provided in Abbreviations.
Artificial intelligence: This is a computational, data-driven approach capable of performing tasks that typically require human intelligence, such as detecting, tracking, or classifying plant diseases autonomously.
Big data: This is a term used to describe large, complex, and high-volume datasets that exceed the capabilities of traditional data processing methods.
Data annotation: This is the process of adding metadata to a dataset, such as marking symptom locations in an image. This task is typically performed manually by human specialists using image analysis software.
Data fusion: This is the process in which different types of data are combined in order to provide results that could not be achieved using single data sources.
Deep learning: This is a specialized subset of machine learning that utilizes artificial neural networks with multiple processing layers to extract features from data and recognize patterns of interest. Deep learning is particularly suited for large datasets with complex features and unknown relationships.
Domain adaptation: This is a subfield of transfer learning in machine learning where a model trained on one source domain (the dataset on which the model is originally trained) is adapted to perform well on a different but related target domain (the dataset on which the model needs to perform but has different characteristics).
Ensemble learning: This is a machine learning technique that combines multiple models, often called “base learners” or “weak learners”, to create a more accurate and robust predictive model.
Feature: This is a measurable property of a data sample, such as color, texture, shape, reflectance intensity, index values, or spatial information.
Hyperspectral imaging: This is the process of using a spectral imaging sensor to capture and analyze reflectance information across the electromagnetic spectrum, generating a unique spectral signature for each pixel in the specimen’s image. Hyperspectral imaging typically evaluates hundreds of narrow wavebands, extending beyond the visible spectrum to provide detailed spectral insights.
Image augmentation: This is the process of applying image processing techniques to modify existing images, thereby generating additional training data for a model.
Imaging: This is the use of sensors to capture images across specific ranges of the electromagnetic spectrum. Imaging sensors include RGB (red-green-blue), multispectral, hyperspectral, and thermal cameras.
Internet of Things: This is a network of interconnected physical devices embedded with sensors, software, and communication technologies that enable them to collect, exchange, and analyze data over the internet without human intervention.
Interpretability: This refers to the degree to which a human can understand and explain how an AI model makes its decisions.
Machine learning: This is a subset of artificial intelligence (AI) that enables algorithms to learn patterns by extracting features from large datasets. Machine learning models are often trained using annotated data and, once developed, can predict outcomes for new, unseen data.
Model: This is a representation of the knowledge learned by a machine learning algorithm from training data.
Model generalization: This is the ability of a machine learning model to perform well on new, unseen data after being trained on a given dataset.
Multimodality: This refers to the ability of a system, particularly in artificial intelligence (AI) and machine learning, to process, integrate, and interpret multiple types of data or sensory inputs simultaneously.
Multispectral imaging: This is a sensor-based technique for capturing and processing reflectance information from multiple wavebands of the electromagnetic spectrum. Typically, up to 10 wavebands in the visible or near-infrared range are analyzed.
Overfitting: This is a phenomenon where a model performs well on training data but fails to generalize to new, unseen test data.
Proximal sensing: This is the acquisition of optical information from a crop specimen under controlled conditions, without direct physical contact, but at relatively close distances—typically conducted in a greenhouse or laboratory setting.
Remote sensing: This is the acquisition of optical information from an object in the field or landscape through a noninvasive, contactless approach, using sensors such as the human eye or artificial spectral sensors.
Segmentation: This is the process of dividing a digital image into multiple distinct segments or classes, based on similar pixel characteristics such as hue, saturation, and intensity. This can be performed automatically using algorithms or manually by human annotators.
Semi-supervised learning: This is a hybrid approach combining supervised and unsupervised learning, in which a small amount of labeled data is used for initial training, while the remainder of the training process relies on unlabeled data.
Supervised learning: This is a machine learning approach where a model is trained on labeled data to predict either categorical labels (classification) or numerical values (regression) for new data.
Transfer learning: This refers to a machine learning technique where a model trained on one task or dataset (source domain) is adapted to perform well on a different but related task or dataset (target domain).
Unsupervised learning: This is a machine learning technique that identifies patterns and structures in unlabeled data without predefined categories.

3. Literature Review

The article selection process was conducted in March 2025 using Scopus and Google Scholar, two comprehensive bibliographic databases. The search employed a Boolean expression: wheat AND (artificial intelligence OR deep learning OR machine learning). Conference papers were immediately excluded, based on the rationale that such publications often lack rigorous peer review. This initial search returned approximately 320 articles.
To refine this large set, two exclusion criteria were systematically applied:
Thematic Focus: Studies were included only if they focused exclusively on wheat or, at most, included one additional crop.
Methodological Relevance: Articles in which artificial intelligence or machine learning techniques were not the primary focus of the investigation were excluded.
Applying these criteria, 96 articles were excluded for not meeting the thematic focus, and 31 for not prioritizing AI/ML methodologies. After this screening process, 193 articles remained. An additional eight relevant articles were identified through manual examination of the reference lists of these papers, leading to a final selection of 201 articles for in-depth review. Although no formal quality assessment (e.g., minimum dataset size or standardized validation procedures) was applied during selection, studies were critically evaluated regarding dataset characteristics, validation strategies, and model robustness as discussed in the Results and Discussion sections.
The selected articles were categorized into seven main research areas: yield prediction (46 articles), disease management (44 articles), other stresses and damages (22 articles), phenotyping/genetic selection (21 articles), spike/ear/head detection (31 articles), grain/kernel classification (18 articles), and other applications (19 articles). It is worth noting that additional articles not included in the selected set are cited throughout the text whenever they provide relevant clarification or support for specific aspects discussed.

3.1. Yield Prediction and LAI/Biomass Estimation

Table 1 presents all articles focused on yield prediction and LAI/biomass estimation, outlining each reference alongside its key challenges, limitations, tested techniques, and best reported accuracies.
Yield prediction, along with the related tasks of LAI and biomass estimation, remains one of the most extensively studied applications of AI in wheat-related research. Several factors contribute to this focus. The widespread availability of satellite-derived data, including long-term time series spanning several decades, provides a rich foundation for developing and validating AI models. Additionally, the use of unmanned aerial vehicles (UAVs) for data collection in this context is becoming increasingly common [16,27,32,45,50,59,61,62,63]. This abundance and accessibility of data make yield prediction a particularly attractive and feasible problem for AI-based approaches.
AI excels at extracting meaningful insights from complex, high-dimensional agricultural datasets, enabling it to capture subtle patterns and relationships that might be difficult to detect using traditional analytical methods [51]. This capability makes AI particularly well suited for tasks like yield prediction, where multiple interacting variables must be considered. Additionally, wheat yield data are highly nonlinear [58,63], requiring techniques capable of effectively modeling nonlinear relationships [33,51,53]. However, while many AI techniques are inherently well suited for this purpose, selecting the optimal model architecture, parameters, and activation functions can be challenging [57]. In extreme cases of nonlinearity, even sophisticated AI techniques may struggle to capture the underlying patterns accurately [26].
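To make the nonlinearity point concrete, the following sketch uses purely synthetic, illustrative data (not drawn from any reviewed study) in which yield saturates at high vegetation-index values; a linear model underfits this curvature, while even a slightly richer model captures it:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic, illustrative data: yield responds nonlinearly to a vegetation
# index (the quadratic term mimics saturation at high NDVI values).
ndvi = rng.uniform(0.2, 0.9, 200)
yield_t = 2.0 + 6.0 * ndvi - 4.0 * (ndvi - 0.55) ** 2 + rng.normal(0, 0.1, 200)

def rmse(y, y_hat):
    return float(np.sqrt(np.mean((y - y_hat) ** 2)))

# Linear fit (degree 1) versus a simple nonlinear fit (degree 2).
lin_rmse = rmse(yield_t, np.polyval(np.polyfit(ndvi, yield_t, 1), ndvi))
quad_rmse = rmse(yield_t, np.polyval(np.polyfit(ndvi, yield_t, 2), ndvi))

print(lin_rmse, quad_rmse)  # the nonlinear fit achieves the lower error
```

Real yield data involve far more interacting variables, but the same principle drives the preference for nonlinear models reported in [33,51,53].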
Another challenge associated with AI models is the difficulty in interpreting and explaining their outputs [58], largely due to their inherent “black-box” nature [48,56]. Although a deep understanding of the model’s internal workings is not strictly required for its application, the lack of transparency makes it harder to identify weaknesses and refine aspects that do not perform as expected [42,64]. Ensemble learning models pose a particular challenge for interpretability, as their potential to improve accuracy often comes at the cost of reduced transparency, limiting their practical applicability [16,43]. In response, some researchers have sought to enhance interpretability [48], though many note that domain experts often find certain relationships identified by these models to be counterintuitive [21].
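One widely used model-agnostic diagnostic for opaque models is permutation importance: shuffling one input column and measuring the resulting loss increase. The sketch below illustrates the idea on synthetic data with one informative variable (labeled "rainfall" purely for illustration) and one noise column:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500

# Synthetic inputs: one informative variable and one irrelevant noise column.
rain = rng.normal(500, 80, n)
noise = rng.normal(0, 1, n)
y = 0.01 * rain + rng.normal(0, 0.1, n)

X = np.column_stack([rain, noise, np.ones(n)])  # last column is the intercept
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

def mse(M):
    return float(np.mean((M @ beta - y) ** 2))

base = mse(X)
importances = []
for j in range(2):  # skip the intercept column
    Xp = X.copy()
    Xp[:, j] = rng.permutation(Xp[:, j])  # break the feature-target link
    importances.append(mse(Xp) - base)

print(importances)  # the informative column dominates the noise column
```

The same scheme applies unchanged to ensemble or deep models, which is why it is a common first step toward the interpretability goals discussed in [48].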
The difficulty of yield prediction varies significantly depending on both the type of data used and the representativeness of the datasets in the experiments. Studies focused on a single geographic area tend to achieve higher accuracy but at the expense of lower generalizability [2,16,20,21,24,25,26,27,28,29,30,31,32,34,35,36,39,40,41,42,43,44,48,49,50,53,54,57,58,59,60,61,62]. Generalization between different wheat varieties can also be difficult to achieve [38,45,46,51]. Additionally, the time series length used in the experiments is often insufficient to fully capture the seasonal variability of crops, as agricultural conditions can vary significantly across different growing seasons [24,40]. As a result, the accuracy levels reported in the literature vary widely, reflecting differences in data sources, environmental conditions, and modeling approaches.
Poor generalization capabilities are often a direct consequence of overfitting. As discussed earlier, if the dataset used for model development fails to capture the full variability of the problem [20,29,46,61], the model may fit the training data distribution too closely but struggle to generalize when applied to unseen data with different distributions [30,47,63]. This issue is further exacerbated in complex models with a large number of parameters [25,32,35], as their increased degrees of freedom allow them to memorize training data rather than learning meaningful patterns [32]. Striking a balance between dataset representativeness, model complexity, and predictive accuracy remains a significant challenge [43] and a major limitation in many studies [2].
If factors such as dataset representativeness and overfitting are not properly addressed, the reliability of the reported results may be compromised. Some studies [2,35] report extremely high accuracy values in their experiments. While these results are impressive, they raise concerns regarding the realism and generalizability of the models. Such high performance often suggests potential overfitting, particularly when models are trained and tested on limited or insufficiently diverse datasets. In many cases, datasets may be collected from homogeneous environments, or validation may be conducted using simple train/test splits without employing more robust methods like k-fold cross-validation or independent external testing. Consequently, the reported accuracies may not translate effectively to broader, more variable agricultural conditions. It is therefore critical to interpret these results cautiously, recognizing that reported metrics may not fully reflect model performance under real-world, field-scale applications. Future research should prioritize rigorous validation protocols and the use of diverse, multi-location datasets to ensure the development of more generalizable and reproducible AI models.
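The k-fold protocol recommended above can be sketched in a few lines; the example below uses synthetic data and a plain least-squares model, so the numbers are illustrative only:

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(size=(120, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(0, 0.2, 120)

def kfold_rmse(X, y, k=5):
    """Average test RMSE over k disjoint train/test splits."""
    idx = rng.permutation(len(y))
    folds = np.array_split(idx, k)
    scores = []
    for i in range(k):
        test = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        beta, *_ = np.linalg.lstsq(X[train], y[train], rcond=None)
        pred = X[test] @ beta
        scores.append(np.sqrt(np.mean((pred - y[test]) ** 2)))
    return float(np.mean(scores))

cv_rmse = kfold_rmse(X, y)
print(cv_rmse)  # close to the true noise level, unlike an optimistic in-sample fit
```

Because every sample serves as test data exactly once, the averaged score is far less sensitive to a lucky split than a single train/test partition.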
Although deep learning has been steadily replacing traditional AI techniques in many domains, shallow neural networks and other machine learning models still predominate in yield and biomass prediction [16,26,43,61], with some exceptions [25,37,41]. This is primarily because satellite-derived data, which have been widely used for decades, have already been successfully processed using well-established traditional methods [39]. Additionally, time-series analysis with deep learning remains challenging in certain scenarios, particularly when the number of available samples is relatively low [21,24,39]. Another challenge in applying deep learning techniques to yield estimation is the limited availability of large, annotated yield datasets that can serve as reliable references for model development [40,48,49,56,60,61,64].
One of the challenges associated with traditional machine learning models is their reliance on carefully designed feature extraction for optimal performance [49]. In many cases, standard features such as vegetation indices are insufficient for producing reliable estimates [31,38,41], particularly due to the variability introduced by different crop growth stages [27,57] and to limited sensitivity to photosynthesis [39]. As a result, there is often a need to develop custom features tailored to the specific conditions of the dataset in order to improve model accuracy [33]. However, these tailor-made features can be highly sensitive to even minor variations in data distribution, which can compromise model robustness and make the entire process more challenging [2]. Additionally, when the number of features is too high, the dataset may include a significant amount of redundancy and irrelevant variables, which can negatively impact model performance. In such cases, effective feature selection or combination becomes essential to reduce dimensionality, eliminate noise, and enhance model accuracy [35,42,59].
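A minimal instance of the feature-selection step described above is a correlation filter that discards near-duplicate variables, a frequent situation when many vegetation indices are derived from the same bands. The data and threshold below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 300
base = rng.normal(size=n)
# Three features: two nearly duplicate "indices" plus one independent variable.
X = np.column_stack([base, base + rng.normal(0, 0.01, n), rng.normal(size=n)])

def drop_correlated(X, threshold=0.95):
    """Greedy filter: drop any feature highly correlated with a kept one."""
    corr = np.abs(np.corrcoef(X, rowvar=False))
    keep = []
    for j in range(X.shape[1]):
        if all(corr[j, k] < threshold for k in keep):
            keep.append(j)
    return keep

kept = drop_correlated(X)
print(kept)  # keeps [0, 2]; the near-duplicate index is removed
```

More sophisticated alternatives (mutual information, wrapper methods, embedded regularization) follow the same goal of reducing redundancy before model fitting [35,42,59].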
One way to avoid complex feature engineering is through the use of deep learning techniques, which can implicitly learn and extract relevant features to characterize the data under analysis. While this approach is often practical and efficient, the inherent “black-box” nature of deep learning models poses challenges. It becomes difficult to verify whether the extracted features are scientifically meaningful, and manual fine-tuning of the models is often hindered [21].
In many cases, obtaining high-quality, long-term satellite and climatic data for a specific region is challenging due to missing values, inconsistencies [24,48], and data corruption caused by factors such as cloud cover [31,40,52,53] and noise [2,53]. Additionally, limited satellite coverage and low revisit frequency are common issues that not only hinder the use of data-intensive techniques but also significantly restrict the generalizability of models [56]. Other types of data, such as historical production records and agronomic field data, may also exhibit inconsistencies, which can negatively affect model performance if left unaddressed [42]. As such, the application of correction or normalization techniques is often necessary to ensure data quality and reliability [33].
Data inconsistencies and fluctuations can often be partially mitigated through preprocessing techniques [32,34,50,61]. However, challenges such as handling missing values and normalizing datasets may never be fully resolved, as these issues can persist depending on the quality and variability of the data [25,46]. While some preprocessing techniques are standardized and validated across diverse conditions, others are specifically tailored to the dataset used in individual studies [46]. This case-specific approach may limit the direct applicability of preprocessing methods to different regions or crops [24], further exacerbating the lack of generalizability. Moreover, preprocessing is often applied without prior evaluation of its effects, which can be problematic. In many cases, results may actually improve without preprocessing, highlighting the need for careful assessment before its implementation [5].
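Two of the preprocessing steps mentioned above, gap filling and normalization, can be sketched as follows; the short NDVI series with cloud-induced gaps is a made-up illustration:

```python
import numpy as np

# A short NDVI time series with gaps (NaN) from cloud-contaminated acquisitions.
series = np.array([0.31, 0.35, np.nan, 0.44, np.nan, 0.52, 0.55])

def fill_gaps(x):
    """Linear interpolation across missing observations (illustrative sketch)."""
    idx = np.arange(len(x))
    ok = ~np.isnan(x)
    return np.interp(idx, idx[ok], x[ok])

def zscore(x):
    """Standardize to zero mean and unit variance."""
    return (x - x.mean()) / x.std()

filled = fill_gaps(series)
print(filled)          # gaps replaced by interpolated values
print(zscore(filled))  # normalized series, mean ~0
```

As noted above, even simple steps like these should be evaluated against a no-preprocessing baseline before being adopted, since their effect is dataset dependent.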
Wheat yield is highly sensitive to climate variability [20,26,32,38,42,57,58], including factors such as drought, rainfall, and temperature fluctuations, which are inherently difficult to predict [25,35,43]. In addition, variations in soil properties and management practices can exert a substantial influence on yield [16,30,31,35,63]. Even government policies, such as subsidies, land use regulations, and water access restrictions, can significantly affect crop productivity [33]. This adds complexity to modeling efforts and can result in large estimation errors under certain conditions [2,21,24,64], especially if some of those variables are not explicitly incorporated into the model [44,52,53,54,60]. The challenge is further compounded by the fact that certain climatic variables exhibit weak or nonlinear correlations with wheat yield [44].
The variability issue can be mitigated when long-term temporal datasets are available (which is not always the case [41,42,49,50,59]), as they increase the likelihood of capturing rare or extreme events [39], thereby enhancing the model’s robustness and adaptability to such variations. However, if the temporal resolution of the data is too coarse [57], it may fail to capture short-term yield fluctuations [34], potentially overlooking critical growth stages or environmental events that significantly impact crop performance [28]. Additionally, with longer time series, the influence of technological advancements becomes significant, necessitating preprocessing and detrending to ensure data consistency [44,55]. In any case, incorporating a diverse set of variables, rather than relying on a single data type, can significantly enhance model robustness by providing a more comprehensive representation of the crop system and its interactions with environmental factors [28,54]. Failing to adopt a more systemic perspective may compromise model performance, as essential components of the system can be overlooked or inadequately represented [38,50,51,53,57,62].
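The detrending step cited above [44,55] is commonly implemented by removing a fitted linear trend so that models see weather-driven anomalies rather than the steady gains from improving varieties and management. A sketch on synthetic yields:

```python
import numpy as np

rng = np.random.default_rng(4)
years = np.arange(1990, 2020)
# Synthetic yields: a steady technological trend plus weather-driven variation.
yields = 2.5 + 0.05 * (years - 1990) + rng.normal(0, 0.2, len(years))

# Fit and subtract the linear trend; what remains are the anomalies of interest.
trend = np.polyval(np.polyfit(years, yields, 1), years)
anomalies = yields - trend

print(anomalies.mean())  # ~0 by construction of the least-squares fit
```

Higher-order or piecewise trends are used when technology adoption is uneven across the series, but the principle is identical.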
Integrating multiple data sources presents a significant challenge [30,54,55], particularly when datasets differ in temporal and spatial resolutions [28,40,43,45,49,59]. Addressing these inconsistencies often requires extensive preprocessing and feature engineering [29,31,39,43,47,60], which can be both time-consuming and error-prone [30]. As a result, some studies opt to use a more limited set of variables [31,33,43], which may be insufficient to fully capture the complexity and variability of the crop [25,26,27,39,40]. Emerging research areas such as data fusion [16,52,56,65] and multimodality [66] are already making significant strides in tackling these challenges [32], enabling more effective and comprehensive data integration.
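A frequent first step in such fusion is aligning sources with different temporal resolutions onto a common grid before stacking them into one feature matrix. The 16-day "satellite" and daily "temperature" series below are illustrative stand-ins:

```python
import numpy as np

# Two sources on different temporal grids: 16-day satellite NDVI and daily
# temperature (both synthetic, for illustration only).
sat_days = np.arange(0, 160, 16)
ndvi = np.linspace(0.3, 0.8, len(sat_days))
daily_days = np.arange(160)
temp = 15 + 10 * np.sin(daily_days / 25.0)

# Align both to the coarser satellite grid before fusing into one matrix.
temp_on_sat_grid = np.interp(sat_days, daily_days, temp)
fused = np.column_stack([ndvi, temp_on_sat_grid])

print(fused.shape)  # (10, 2): one row per satellite acquisition date
```

Spatial alignment proceeds analogously (resampling rasters to a common grid), and the choice of target resolution is itself a modeling decision that can discard information.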
While a significant portion of satellite data still lacks the spatial resolution necessary for fine-grained yield estimation [54,55,57,58,60], high-resolution imagery (with a ground sample distance (GSD) better than 20 m) is becoming increasingly accessible. However, as resolution improves, so do the associated computational demands [52]. The computational power required for model training is a frequently cited bottleneck in the literature [2,16,24,36,38,53,54,55,56,60]. As computational infrastructure continues to advance, the development of increasingly larger AI models poses challenges for institutions without dedicated data centers or with limited resources to afford cloud services capable of supporting such demands [52]. However, it is important to note that while many models require substantial computational resources for training, their inference phase is often much less demanding. In some cases, these models can even run efficiently on portable devices with limited computational power, making them more accessible for real-world applications. On the other hand, models that are computationally expensive during inference may face significant constraints for real-time or mobile deployment [2,21,33,34,40,46,58]. This limitation often necessitates further research and development to optimize model efficiency and make the technology practical and deployable.

3.2. Disease Management

Table 2 presents all articles focused on disease management, following a structure similar to that of Table 1 for consistency and ease of comparison.
In contrast to yield prediction, which still sees widespread use of traditional machine learning approaches, disease detection is overwhelmingly dominated by deep learning techniques, particularly convolutional neural networks (CNNs), with only a few notable exceptions [70,83,96,101]. For most crops, disease detection and management rely heavily on leaf images, as leaves are typically where the earliest and most visible symptoms appear [110]. However, in the case of wheat, the narrow shape and positioning of leaves make them difficult to image effectively. As a result, many approaches instead focus on kernel [82,99,100,111] or ear (spike) images [68,69,74,93,94,97,101,103,105,106,108,111], which sometimes provide more accessible and informative visual cues for detecting diseases. Most studies focused on wheat imagery have utilized ground-based image collection, which offers high resolution and close-range detail. However, an increasing number of studies have also explored the use of UAV-based methods [71,85,91,98,101,104,107,109], broadening the scope of data sources for wheat analysis.
Deep learning techniques are inherently data intensive, requiring large, diverse datasets that capture the full variability of the problem to achieve reliable performance [5]. With the exception of highly specific applications constrained to a narrow set of conditions, building truly representative datasets for disease detection and recognition has proven largely unfeasible [1,74,76,79,84,86,96,97,106]. As a result, many studies rely on limited datasets for both training and testing, often producing overly optimistic and unrealistic performance results [68,69,70,88,102,107]. For example, Azimi et al. [68] reported a perfect accuracy of 1.00 in their classification tasks. However, their model was trained and tested on a relatively small dataset collected under controlled conditions, which limits environmental variability and may inflate performance metrics. Similarly, the study reported in [94] achieved an accuracy of 1.00, but the lack of external validation across diverse geographic regions raises concerns regarding model generalizability. These results suggest that overly optimistic performance metrics may stem from methodological oversights, such as insufficient dataset diversity, inadequate validation protocols, or overfitting to training data. A more critical evaluation of dataset composition and validation strategies is essential to assess the true robustness and practical applicability of AI models in wheat research.
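One concrete safeguard against such optimistic results is a grouped evaluation protocol. The sketch below is illustrative (field names and labels are made up): it holds out one acquisition location at a time, so that images from the same field never appear in both training and test sets, unlike a purely random split.

```python
import numpy as np

# Hypothetical sketch: leave-one-location-out evaluation. Random splits let
# near-duplicate images from the same field leak across the train/test
# boundary, inflating accuracy; grouping by field avoids this.
rng = np.random.default_rng(0)
locations = np.array(["field_A", "field_B", "field_C"] * 20)  # one tag per image
labels = rng.integers(0, 2, size=locations.size)

for held_out in np.unique(locations):
    test_mask = locations == held_out
    train_idx = np.flatnonzero(~test_mask)
    test_idx = np.flatnonzero(test_mask)
    # ... train on train_idx, evaluate on test_idx ...
    assert set(locations[train_idx]).isdisjoint(set(locations[test_idx]))
    print(held_out, train_idx.size, test_idx.size)
```

The same idea extends to grouping by season, cultivar, or sensor when those factors drive the domain shift.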
To address the lack of data, data augmentation is commonly applied, particularly in the case of digital images [1,69,72,73,74,78,87,108,109,111]. While this strategy can help mitigate data scarcity, even advanced techniques like Generative Adversarial Networks (GANs) and Frequency Domain Adaptation (FDA) generate synthetic data that may introduce biases and unrealistic artifacts, ultimately limiting their effectiveness [75]. Given these constraints, the results reported in the literature must be interpreted with caution and considered in light of the experimental context in which they were obtained, as they are unlikely to reflect the true accuracy achievable under real-world conditions [5]. While some studies acknowledge these limitations, many fail to report this critical caveat, which can undermine the credibility and generalizability of their findings.
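For reference, the basic geometric and photometric transforms that most of the cited studies use can be sketched in a few lines; this is a minimal illustration, not the augmentation pipeline of any particular paper.

```python
import numpy as np

# Minimal augmentation sketch for leaf/ear images stored as (H, W, 3) arrays
# in [0, 1]. Real pipelines add crops, color jitter, elastic warps, etc.
def augment(image, rng):
    out = image
    if rng.random() < 0.5:
        out = np.fliplr(out)                       # random horizontal flip
    out = np.rot90(out, k=rng.integers(0, 4))      # random 90-degree rotation
    noise = rng.normal(0.0, 0.02, out.shape)       # mild Gaussian noise
    return np.clip(out + noise, 0.0, 1.0)

rng = np.random.default_rng(42)
img = rng.random((64, 64, 3))
aug = augment(img, rng)
print(aug.shape)  # (64, 64, 3)
```

Such transforms expand apparent dataset size but only recombine existing appearance; they cannot introduce genuinely new symptom variability, which is the root of the limitation discussed above.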
Most disease detection and recognition efforts rely on digital images of symptoms that are either visibly apparent or detectable through spectrum-based sensors [95,112]. A major challenge in this context is the wide variety of plant disorders, many of which produce similar physiological and visual alterations [73,78,80,81,84,86,90,102,111]. Ideally, a dataset should include examples of all relevant disorders to enable accurate discrimination. However, despite significant strides made by some studies to achieve this goal [87], attaining truly comprehensive coverage remains virtually unfeasible. As a result, most studies are limited to a narrow subset of disorders, often ignoring other potential causes of the observed symptoms [1,69,70,74,78,80,83,95,96]. This leads to models that are constrained to select from the known classes, even when the input belongs to an unseen or unrelated category, potentially yielding inaccurate predictions [72,90,95].
Some researchers have attempted to address this by introducing an “other”, or “I do not know”, class to capture unknown or unmodeled conditions, but defining and representing this class meaningfully in the training data remains a significant challenge [87]. This issue is somewhat less critical when the focus is on a single disease, turning the problem into a binary classification between the target disease and all other conditions [68,71,74,89,91,94,97,106,107,111]. Still, this approach is not without limitations, as many non-target disorders may exhibit symptoms that overlap with the class of interest, leading to potential misclassifications [104].
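A lightweight alternative to training an explicit "other" class is to reject low-confidence predictions at inference time. The sketch below is an assumption-laden illustration (class names and the 0.6 threshold are invented, not drawn from the reviewed studies): the model abstains whenever no known class is sufficiently probable.

```python
import numpy as np

# Hedged sketch: reject low-confidence predictions as "unknown" instead of
# forcing a choice among known disease classes. Class names and threshold
# are illustrative assumptions.
CLASSES = ["rust", "septoria", "powdery_mildew"]
THRESHOLD = 0.6

def predict_with_rejection(probs):
    k = int(np.argmax(probs))
    return CLASSES[k] if probs[k] >= THRESHOLD else "unknown"

print(predict_with_rejection(np.array([0.85, 0.10, 0.05])))  # rust
print(predict_with_rejection(np.array([0.40, 0.35, 0.25])))  # unknown
```

Confidence thresholding is far from a complete open-set solution, since deep networks are often overconfident on unseen classes, but it makes the failure mode explicit rather than silent.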
To address the limitations of traditional classification methods, more advanced techniques, such as few-shot learning and one-shot learning, have been explored for their potential to recognize previously unseen classes with limited labeled examples. These approaches have shown promise in plant disease monitoring and detection [113,114], offering a pathway toward more adaptable diagnostic systems. However, in the specific context of wheat diseases, the existing literature remains scarce; only a handful of conference proceedings mention the use of such methods, and to date, no peer-reviewed journal articles have demonstrated their successful application. As a result, the problem of generalizing to unseen disease classes in wheat remains a fundamental and unresolved challenge, for which no robust or scalable solutions have yet been established.
Almost all studies included in this review assume the presence of only a single disease at the time of detection. However, in real-world scenarios, it is common for multiple diseases or disorders to co-occur, leading to overlapping symptoms and increased diagnostic complexity [81,86,89,95]. Under such conditions, model behavior can become unpredictable, and error rates typically rise [73,75,78,87,92,96,102]. One potential approach to address this issue is to shift the focus from diagnosing the entire plant organ to analyzing individual lesions or symptomatic regions, enabling multi-label classification [110]. However, this strategy introduces significant challenges, particularly the need for accurate localization and segmentation of each lesion prior to classification, steps that are often complex and computationally demanding. Some authors have attempted to treat different combinations of diseases as distinct classes; however, the limited number of samples representing these combinations resulted in relatively low classification accuracy [75].
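The multi-label formulation mentioned above differs from the standard softmax setup in that each disease gets an independent score, so co-occurring diseases can both be flagged. A minimal sketch, with invented disease names and logits:

```python
import numpy as np

# Sketch of multi-label prediction for co-occurring diseases: independent
# sigmoid scores per disease, thresholded separately, so two diseases can
# be flagged on the same organ. Names and scores are illustrative.
DISEASES = ["rust", "septoria", "fusarium"]

def multilabel_predict(logits, threshold=0.5):
    probs = 1.0 / (1.0 + np.exp(-np.asarray(logits)))  # sigmoid per label
    return [d for d, p in zip(DISEASES, probs) if p >= threshold]

print(multilabel_predict([2.0, 1.5, -3.0]))  # flags both rust and septoria
```

The formulation itself is simple; the hard part, as noted above, is obtaining lesion-level annotations and enough samples of each disease combination.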
The primary goal of plant disease recognition technologies is to enable the earliest possible detection of problems, allowing for timely interventions that can minimize crop losses [81,91,95,101,107]. Conventional RGB sensors have become widely available, and even low-end consumer-grade devices are capable of capturing images with sufficient quality and resolution. As a result, RGB imaging has been extensively employed in disease detection efforts [1,67,69,70,73,74,75,76,80,81,86,90,102,103]. However, a significant limitation of RGB-based methods is that visible symptoms often appear only after substantial damage has already occurred, at which point preventive measures may no longer be effective [69]. This has driven growing interest in more advanced sensing technologies [112], including spectrometry [99], multispectral [71,85,98,107,115], thermal [85], and particularly hyperspectral sensors [82], which offer high spectral resolution capable of detecting subtle physiological changes in plants before visual symptoms manifest [94,95,101,104,105,116].
Numerous studies have demonstrated the potential of hyperspectral imaging for early-stage disease detection; however, even in these cases, detection accuracy typically improves at later stages of disease development [104]. In addition, the high cost of these sensors remains a major barrier to widespread adoption [94,100,105]. The challenge is even more pronounced when such sensors are mounted on unmanned aerial vehicles (UAVs) [95,101,104], as the risk of damage or accidents is relatively high and obtaining insurance coverage for such equipment is often difficult [117]. An alternative approach involves deploying hyperspectral sensors on satellites, which eliminates some logistical risks. However, the ground sampling distance (GSD) of current hyperspectral satellite platforms is still too coarse for early stress detection, limiting their utility to cases where the affected area is already sufficiently large to be detected from orbit [91].
In many cases, relying on a single type of sensor does not provide sufficient information to fully resolve complex agricultural problems. Combining multiple sensor types offers a promising solution, and recent studies have successfully applied multimodal learning and data fusion techniques to improve the detection and recognition of wheat diseases. However, integrating heterogeneous data remains a technically challenging task, often requiring sophisticated preprocessing, normalization, and the development of custom features to ensure compatibility and effectiveness across data sources [71].
With the predominance of deep learning techniques in plant disease detection and recognition, computational requirements have become a critical consideration, particularly during the training phase. Many of the challenges discussed in the context of yield prediction also apply here and will not be reiterated. However, a key distinction lies in the operational requirements of each task. Unlike yield prediction, which typically does not demand real-time processing, disease recognition often requires rapid responses, especially for field-based applications such as smartphone apps for symptom identification [72]. In such scenarios, it is essential to consider the use of lightweight models optimized for fast inference, even if this comes at the expense of a modest reduction in accuracy. Prioritizing efficiency and responsiveness is crucial when deploying AI tools in real-world agricultural settings where timely decision-making can significantly impact outcomes [1,74,87,89,92,102].
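The scale of the efficiency gains available from lightweight architectures can be shown with a back-of-the-envelope parameter count. The example below is a generic illustration of the MobileNet-style depthwise-separable convolution often used in mobile-oriented models, not the architecture of any cited study; the layer sizes are arbitrary.

```python
# Back-of-the-envelope sketch of why lightweight architectures matter for
# on-device inference: a depthwise-separable convolution uses far fewer
# parameters than a standard convolution of the same shape.
def standard_conv_params(c_in, c_out, k):
    return c_in * c_out * k * k

def depthwise_separable_params(c_in, c_out, k):
    return c_in * k * k + c_in * c_out  # depthwise pass + 1x1 pointwise pass

std = standard_conv_params(128, 256, 3)        # 294,912 parameters
dws = depthwise_separable_params(128, 256, 3)  # 33,920 parameters
print(std, dws, round(std / dws, 1))           # roughly 8.7x fewer
```

Multiply-accumulate counts scale similarly, which is why such blocks, together with pruning and quantization, dominate smartphone-deployable disease recognition models.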

3.3. Other Stresses and Damages

Table 3 presents all the articles that focus on plant stresses other than diseases.
The techniques and methods found in the literature addressing plant stresses share many similarities, meaning that several observations made in the section on plant diseases are also applicable here. Nevertheless, certain stress-specific approaches warrant distinct discussion.
In the context of weed detection and management, a major challenge lies in distinguishing weeds from wheat when their visual characteristics are highly similar [118,119,121,123], and even their spectral signatures can be closely related [124]. Additional complexity arises from plant overlapping and occlusion, which significantly hampers accurate detection [120,121,123,125,126]. To enhance model accuracy and prepare for future herbicide-specific recommendations, some studies have opted to create separate classes for each weed species [17,119,120,123,125], with a few works considering up to ten species [121]. However, this strategy presents challenges, especially in detecting and classifying weed species not included in the training set [17]. Moreover, class imbalance can negatively impact the recall of underrepresented classes [17,121,123].
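The recall problem for underrepresented weed classes is easy to make concrete. In the toy confusion below (species names and counts are invented), a handful of misclassifications barely dents overall accuracy yet collapses the recall of the rarest species:

```python
import numpy as np

# Sketch of why imbalance hurts: per-class recall on a toy weed-species
# dataset where the rare class is often missed. Labels are illustrative.
classes = ["wheat", "ryegrass", "wild_oat"]
y_true = np.array([0] * 90 + [1] * 8 + [2] * 2)
y_pred = y_true.copy()
y_pred[[92, 95, 98, 99]] = 0      # four rare-class samples mistaken for wheat

for k, name in enumerate(classes):
    mask = y_true == k
    recall = np.mean(y_pred[mask] == k)
    print(f"{name}: recall = {recall:.2f}")
```

Here overall accuracy is still 96%, while wild oat recall drops to zero, which is why per-class metrics, resampling, or class-weighted losses matter for species-level weed mapping.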
Due to limitations in the datasets used during experiments, such as restricted diversity in conditions, geography, and species, generalizing to unseen data remains difficult [118]. To address this, several authors have adopted data augmentation techniques [17,119,120,125], with some employing advanced augmentation strategies [122]. Although conventional RGB sensors are the most commonly used [17], some studies have explored multispectral imaging as an alternative for enhancing spectral discrimination [124]. Data collection is typically performed using ground-based cameras [119,120,121,122,125,126] or UAV-mounted systems [17,118,124], while satellite imagery is generally avoided due to its insufficient spatial resolution for weed-level analysis.
Although substantial progress has been made in weed detection using AI techniques, the majority of studies focus on post-emergence weeds, where plants are already well developed and easier to distinguish. Early-stage weed detection, however, is critical for timely management interventions and minimizing crop losses. This remains a significant challenge due to the small size of seedlings, spectral and morphological similarity to crop plants, and limited availability of annotated datasets. Addressing these challenges through improved imaging techniques, data augmentation, and transfer learning approaches represents a key opportunity for future research. A few studies have tackled the problem of early weed detection [126], although performance tends to be limited for seedling recognition [120,121]. In contrast, better results have been observed when the task involves semantic segmentation rather than classification [122]. Notably, among the reviewed literature, only one study deliberately did not employ deep learning techniques: Su et al. [124] opted for alternative approaches due to a lack of sufficient labeled data.
Pest management and recognition present a distinct set of challenges. Agricultural pests are typically small and may appear in a variety of poses and orientations, making accurate detection difficult for most models [128,130,131]. This results in high variability, and because many datasets fail to capture the full spectrum of visual variations, data augmentation is commonly employed to improve model generalization [130,131].
The use of traps specifically designed to attract target pest species is a common practice in agricultural monitoring. However, within the scope of this review, no study employing such traps for image-based pest detection was identified. Instead, all reviewed works focused on the direct imaging of pests on plant organs, such as leaves and stems [128,130]. One possible reason for this is that traps often accumulate non-target objects, such as other insects, debris, spores, or plant material, which can complicate detection, particularly when the target pests are very small [139].
Most studies concentrate on the detection of a single pest species [128], though some propose methods capable of classifying multiple species [130]. While the latter approach offers richer and more informative outputs, it also introduces the risk of misclassification when species not seen during training are present during inference.
Although the majority of pest detection methods rely on conventional RGB imaging [128], some studies have explored alternative sensing technologies that aim to detect indirect physiological responses of plants to pest presence. These include near-infrared spectroscopy and electronic nose (E-nose) systems [129]. Rather than detecting the pest itself, these approaches attempt to identify plant-level changes, such as variations in volatile organic compound (VOC) emissions, that may indicate pest activity. However, E-nose systems face specific challenges: different plant cultivars emit distinct VOC profiles, and the compounds released may not be pest specific, as they can also reflect responses to other biotic or abiotic stressors [129].
Only two studies listed in Table 3 address evapotranspiration estimation and drought monitoring, yet a few domain-specific challenges can be identified in this context. Climatic data are a crucial input for estimating evapotranspiration; however, such data are often limited in spatial and temporal availability, and the parameters commonly used in modeling may be insufficient to account for the full complexity of factors influencing evapotranspiration dynamics [132]. Moreover, drought is a multifactorial phenomenon influenced by a combination of variables such as precipitation, soil moisture, and vegetation conditions, which complicates the development of a unified predictive model. To address this, studies frequently rely on multi-source data integration, which demands extensive preprocessing and harmonization to ensure consistency across spatial resolutions, formats, and temporal coverage [133].
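For context on the climatic inputs discussed above, the conventional baseline that AI models typically extend or replace is the FAO-56 Penman-Monteith reference evapotranspiration. The implementation below is a standard-formula sketch with made-up daily inputs, included only to show which climatic variables are needed and why their limited availability is a bottleneck.

```python
import math

# Illustrative FAO-56 Penman-Monteith reference evapotranspiration
# (ET0, mm/day). Inputs: mean temperature (deg C), net radiation Rn and soil
# heat flux G (MJ m-2 day-1), wind speed at 2 m (m/s), mean relative
# humidity (%), atmospheric pressure (kPa). Values below are made up.
def et0_fao56(t_mean, rn, g, u2, rh_mean, pressure=101.3):
    es = 0.6108 * math.exp(17.27 * t_mean / (t_mean + 237.3))  # sat. vapour pressure
    ea = es * rh_mean / 100.0                                  # actual vapour pressure
    delta = 4098.0 * es / (t_mean + 237.3) ** 2                # slope of sat. curve
    gamma = 0.000665 * pressure                                # psychrometric constant
    num = 0.408 * delta * (rn - g) + gamma * 900.0 / (t_mean + 273.0) * u2 * (es - ea)
    den = delta + gamma * (1.0 + 0.34 * u2)
    return num / den

print(round(et0_fao56(t_mean=20.0, rn=15.0, g=0.0, u2=2.0, rh_mean=60.0), 2))
```

Every term depends on station-quality climate data, which is exactly the spatial and temporal availability constraint raised in [132].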
One of the main challenges in detecting herbicide and pesticide stress is that symptoms often manifest only at later stages, making early identification difficult with traditional methods. To address this, many studies employ sensors capable of capturing the reflectance spectrum of the target, enabling the detection of physiological changes at earlier stages. Common approaches include near-infrared hyperspectral imaging [134] and surface-enhanced Raman spectroscopy (SERS) [135]. Another limitation is that some studies are conducted under controlled conditions, which can hinder the applicability of their models in real-world scenarios [134]. Notably, both studies reviewed here used deep learning algorithms for stress detection [134,135].
The final wheat disorder addressed in this study is lodging, which affects the plant at a structural level. Because lodging is a broad, canopy-level phenomenon, UAV-based imaging is commonly used for its detection [136,137,138]. To improve accuracy under complex field conditions, some studies have combined digital imagery with additional data sources such as Digital Surface Models (DSM) [136]. Multispectral imagery has also been employed, often outperforming RGB sensors in detecting lodging [137,138].
A major limitation in this area of research is the difficulty in collecting large, diverse datasets, which often restricts studies to a single geographic region and wheat variety, limiting model generalization [136,137,138]. Another challenge is that lodging manifests differently depending on the plant’s growth stage, adding further complexity to detection efforts [137]. In most cases, healthy plant data are far more abundant than lodging data, necessitating the use of data augmentation [136,138] or class-balancing techniques such as the Tversky loss function [137]. Notably, all studies reviewed in this context have adopted deep learning approaches for lodging detection [136,137,138].
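The Tversky loss mentioned above can be stated compactly: it generalizes the Dice loss by weighting false positives and false negatives asymmetrically, which is what makes it useful when lodged pixels are rare. The sketch below uses illustrative predictions and the common alpha/beta choice of 0.3/0.7; setting both to 0.5 recovers Dice.

```python
import numpy as np

# Sketch of the Tversky loss for class-imbalanced lodging segmentation:
# alpha weights false positives, beta weights false negatives
# (alpha = beta = 0.5 recovers the Dice loss). Values are illustrative.
def tversky_loss(y_true, y_pred, alpha=0.3, beta=0.7, eps=1e-7):
    tp = np.sum(y_true * y_pred)
    fp = np.sum((1 - y_true) * y_pred)
    fn = np.sum(y_true * (1 - y_pred))
    return 1.0 - (tp + eps) / (tp + alpha * fp + beta * fn + eps)

y_true = np.array([1, 1, 0, 0, 1], dtype=float)
y_pred = np.array([0.9, 0.2, 0.1, 0.0, 0.8], dtype=float)
print(round(tversky_loss(y_true, y_pred), 3))
```

With beta > alpha, missed lodged pixels are penalized more heavily than false alarms, pushing the model toward higher recall on the minority class.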

3.4. Phenotyping and Genetic Selection

Table 4 presents all the articles that focus on phenotyping and genetic selection.
Phenotyping and genotyping are complementary approaches that, when combined, provide powerful insights into the genetic control and environmental expression of plant traits. This integrated perspective is crucial for advancing crop productivity, resilience, and sustainability. Accordingly, this subsection groups together studies that address either or both dimensions.
Studies focused on phenotyping often face challenges similar to those encountered in yield prediction and stress management. A recurrent issue is the difficulty in constructing truly representative datasets. This limitation undermines model generalization, particularly under high variability conditions [140,146]. Class imbalance is another common challenge [156], which frequently motivates the use of data augmentation techniques [142,146,157,158,159,160]. Occlusions further complicate image-based phenotyping, leading to errors in trait estimation [142,157]. Hyperparameter tuning is also cited as a non-trivial hurdle [145].
Among the sensors used for phenotyping, RGB cameras are the most prevalent [140,146,152,156,157,158], but others such as microscopy [160], multispectral cameras [146], multispectral radiometers [153], and hyperspectral sensors [147,159] are also employed. Hyperspectral imaging, in particular, is effective for detecting physiological traits invisible to the naked eye, although it may suffer from noise due to atmospheric and sensor-related artifacts [147]. In some studies, data are collected in controlled environments using lab or field experiments rather than onboard sensors [145].
Ground-based phenotyping remains the most common practice [140,142,153,156,157,158], although the use of UAVs has expanded since the early 2010s [146]. Nonetheless, determining optimal flight altitude and camera configurations is challenging, especially for hyperspectral setups [147]. Additionally, the ground sampling distance (GSD) from UAVs may be insufficient for capturing early-stage plant traits, which are vital for genetic selection [152]. Satellite imagery currently lacks the spatial resolution needed for most phenotyping applications [152].
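The altitude/resolution trade-off underlying these GSD concerns follows from simple camera geometry: GSD equals sensor pixel pitch times flight altitude divided by focal length. The numbers below are illustrative, not taken from the reviewed studies.

```python
# Quick sketch of the ground sampling distance (GSD) trade-off:
# GSD = pixel pitch x altitude / focal length. Camera parameters and
# altitudes below are illustrative assumptions.
def gsd_cm_per_px(pixel_pitch_um, altitude_m, focal_length_mm):
    return (pixel_pitch_um * 1e-6) * altitude_m / (focal_length_mm * 1e-3) * 100.0

# Same hypothetical camera (2.4 um pixels, 8.8 mm lens) at two altitudes:
print(round(gsd_cm_per_px(2.4, 50, 8.8), 2))   # ~1.36 cm/px at 50 m
print(round(gsd_cm_per_px(2.4, 120, 8.8), 2))  # ~3.27 cm/px at 120 m
```

Halving altitude roughly halves GSD but also halves swath width, multiplying flight time, which is why altitude selection is a genuine optimization problem rather than a fixed recommendation.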
Ground-truth generation is another major constraint, particularly when destructive sampling or complex measurements are involved [140]. Moreover, some agronomic indicators like yield lack the spatial precision required for robust model training and evaluation [147]. Annotation challenges are widespread, especially for high-volume datasets [156,160] and traits that involve subjective interpretation [142,158,159]. When field visits are necessary, logistical constraints often limit the number of measurements, prompting the use of interpolation techniques [152].
The traits targeted in phenotyping studies include grain yield [147], leaf area index [140], plant biomass [146], ear counting and length [142], flowering time [156], root characteristics [157], plant counting, height, and tillering [152,159], shoot regeneration frequency [145], awn morphology [156], vegetative cover [158], drought responses [159], stomatal index [160], and stem elongation onset [152].
Genotyping brings its own set of challenges, largely due to the nature of genomic data, which require specialized processing methods. Model tuning in this domain may be more complex than in image-based tasks, due to fewer reference studies, smaller datasets [151], and the intrinsic complexity of the data, which demands meticulous selection of model architecture and parameters [141,143,148,155]. Some studies must handle a mix of binary, ordinal, and continuous variables [149,150]. Additionally, certain traits are influenced by both major and minor genes, which can lead to underfitting or overfitting [143].
Data quality is another concern in genotyping. Missing data are common and need to be managed through filtering [141,153] or manual imputation [150]. Furthermore, effective genomic selection requires accounting for genotype-by-environment interactions [141,148,149,150], a non-trivial modeling challenge. Ground-truth acquisition can also be problematic due to subjective evaluation [143,149].
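The simplest form of the imputation step referenced above is marker-mean filling on a numeric genotype matrix. The sketch below is generic (tiny synthetic matrix, 0/1/2 allele coding, NaN for missing calls) and stands in for the more elaborate filtering-plus-imputation pipelines used in practice.

```python
import numpy as np

# Sketch of marker-mean imputation for a SNP matrix (rows = wheat lines,
# columns = markers, genotypes coded 0/1/2, NaN = missing). Real pipelines
# usually filter markers by missing rate and minor allele frequency first.
geno = np.array([[0.0, 2.0, np.nan],
                 [1.0, np.nan, 2.0],
                 [2.0, 0.0, 0.0]])

col_means = np.nanmean(geno, axis=0)            # per-marker mean, ignoring NaN
imputed = np.where(np.isnan(geno), col_means, geno)
print(imputed)
```

Mean imputation keeps downstream models runnable but blurs genotype information; dedicated genotype imputation tools that exploit linkage patterns are preferred when missing rates are high.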
Traits studied in genotyping-based research include grain yield [141,143,148,149,155], plant height [148,149,155], disease resistance [143,149], days to heading and maturity [148,149,150,155], grain color and protein content [149,155], lodging [149], and anthesis-silking interval [148]. While many studies address one trait at a time, multi-trait models have been proposed to enhance genomic prediction [148], although they are more susceptible to overfitting [151].
Some studies integrate phenotyping and genotyping for a comprehensive trait characterization [144,150,151,153,154,159]. For example, Guo et al. [144] combined manual phenotyping with genotyping-by-sequencing to assess grain yield and related traits. Montesinos-López et al. [150,151] integrated SNP and phenotypic data to predict multiple agronomic traits. Zhang et al. [159] combined high-throughput phenotyping with GWAS to improve drought resistance and yield predictions.
As with other domains in agricultural research, both phenotyping and genotyping are increasingly leveraging deep learning [140,141,142,143,144,146,147,148,149,153,156,157,158,159,160], though shallow neural networks [145,150] and conventional machine learning approaches [152] remain in use for specific data types.

3.5. Spike Detection

Table 5 presents all the articles that focus on spike (ear) detection.
In this review, all studies focused on spike detection and counting rely on digital RGB imagery combined with deep learning techniques. Minor deviations from the standard include the use of stereo RGB images [162] and ultra-wide-angle lenses [177]. Due to limited dataset diversity, data augmentation is commonly employed [13,165,167,169,170,171,172,173,178,179,180,182,186,187,188,189]. Most datasets were built with ground-based images due to the relatively small size of wheat spikes, although UAV imagery has also been widely adopted [13,169,173,176,179,188,189]. A notable portion of the literature relies on the Global Wheat Head Detection (GWHD) dataset [165,167,169,178,179,182,186], which was specifically developed for spike detection tasks [163,164].
Spike detection differs from other detection tasks discussed earlier in several key ways: it is almost always conducted in-field (with a few exceptions [166,175]), the objects of interest are almost always present, and occlusion is significantly more frequent and problematic [13,161,162,163,164,165,166,167,168,169,171,172,173,174,175,176,177,178,179,181,182,183,184,186,187,188,189]. Accordingly, individual spike separation becomes a central challenge in most works [171], with varying levels of success. While many authors have attempted to overcome occlusion through model fine-tuning [13,171,173,179,182,188], others seek improvements at the image acquisition stage [168].
Another major hurdle is the heterogeneity in spike density [165,184]. In some cases, a single image patch may contain between 0 and 120 spikes [163], while in others, up to 10,000 spikes may appear in one image [184]. Such variation introduces difficulties in both annotation and model training/inference.
Due to the complexity of annotation, multiple strategies are found in the literature. The most commonly used are bounding boxes, which offer a straightforward method for object counting and are comparatively easier to annotate [182]. However, they remain labor intensive and prone to subjectivity and error [164,170,174,179,186,187,188,189]. Furthermore, bounding boxes do not easily accommodate occlusions, nor do they enable extraction of more detailed morphological information [163,165,178]. To increase annotation reliability, some authors employed multiple experts and repeated labeling for each image to produce a robust ground-truth [186].
Despite being simpler than segmentation, bounding box annotation may still pose a heavy workload. This has led some researchers to explore point-level annotation, where each spike is marked with a single point, usually at the center [13,184]. This approach reduces annotation time and is effective for object counting, though it can reduce the accuracy of object localization.
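The usual way point annotations are turned into a training target is a density map: each annotated spike center is spread into a small Gaussian of unit mass, so the map's integral equals the count. A minimal sketch with invented point coordinates:

```python
import numpy as np

# Sketch of count estimation from point annotations: each spike point is
# spread into a normalized Gaussian on a density map, and the map's sum
# recovers the count (the usual target for density-based counters).
def density_map(points, shape, sigma=2.0):
    h, w = shape
    yy, xx = np.mgrid[0:h, 0:w]
    dmap = np.zeros(shape)
    for (py, px) in points:
        g = np.exp(-((yy - py) ** 2 + (xx - px) ** 2) / (2 * sigma ** 2))
        dmap += g / g.sum()                 # each spike contributes mass 1
    return dmap

points = [(10, 12), (30, 40), (31, 41)]    # annotated spike centers
dmap = density_map(points, (64, 64))
print(round(dmap.sum(), 4))                # sums to the number of spikes
```

A network regressing such maps can be trained purely from point labels, which is what makes the approach attractive at high spike densities despite the weaker localization it provides.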
A third approach involves pixel-level segmentation of the spikes, and occasionally awns [166], which allows for precise delineation and facilitates the extraction of additional traits [162,172]. However, this method is highly labor intensive and subjective, even when supported by computational tools [161,166,172,173,175,177,181,183]. Some authors have combined bounding boxes for detection with segmentation for refinement, achieving enhanced performance [162]. The literature suggests that segmentation is more accurate, particularly under occlusion [161,162,177], but the annotation effort remains a limiting factor.
A fourth, less common approach divides images into patches and performs binary classification (“spikes present” or “spikes absent”) [180]. This technique, used for automatic estimation of the wheat-heading date, is noted to be more robust and easier to annotate than bounding box or segmentation methods in phenological studies.
Although it is desirable to detect viable spikes as early as possible [170,173,175,180], many models struggle during the booting and heading stages, primarily due to confusion with background elements and limited training samples [161,163,170]. Conversely, spike detection at maturity can also be problematic, as ears bend under grain weight and become harder to identify [162,164].

3.6. Grain Classification

Table 6 presents all the articles that focus on grain classification.
The application of AI techniques to wheat grain analysis is primarily concentrated in four areas: the classification of wheat varieties, the identification of damage types, discrimination between bread and durum wheat, and grain counting. While most of the studies reviewed adopt deep learning approaches for these tasks, shallow neural networks and other conventional machine learning methods are still in use [191,197,198,199]. All studies mentioned in this section used data collected in controlled environments rather than in the field.
The classification of wheat grains by variety is crucial for multiple reasons, including quality control, market segmentation, economic valuation, and supply chain management. Consequently, the topic has received considerable attention in the literature. The complexity of this classification task is strongly influenced by the number of varieties involved, which in the studies reviewed ranges from as few as 3 [191] to as many as 41 [14].
While RGB imaging remains widely used, there is a growing interest in sensors capable of capturing the spectral characteristics of wheat kernels. This includes hyperspectral imaging [196,205,206] and soft X-ray imaging [191]. Additionally, sensor fusion strategies, such as combining RGB, SWIR, and VNIR data, have been explored to enhance classification performance [195].
To improve model generalization, data augmentation is commonly applied. Most studies employ standard techniques such as rotation, flipping, cropping, translation, and scaling [14,194,201]. However, more advanced methods have also been adopted. Notably, Passos and Mishra [196] enhanced the input feature space by stacking multiple chemometrically preprocessed versions of the reflectance spectra (e.g., SNV, first and second derivatives), expanding the number of features from 200 to 1200.
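The feature-stacking idea can be sketched with the two best-known chemometric transforms, SNV normalization and spectral derivatives. The code below is a simplified illustration on synthetic spectra, stacking four variants rather than the six used in the cited work:

```python
import numpy as np

# Sketch of chemometric feature stacking: augment each reflectance spectrum
# with transformed versions (SNV, first and second derivatives), multiplying
# the feature count. Spectra here are synthetic.
rng = np.random.default_rng(0)
spectra = rng.random((50, 200))               # 50 kernels x 200 wavelengths

def snv(x):                                   # standard normal variate, per spectrum
    return (x - x.mean(axis=1, keepdims=True)) / x.std(axis=1, keepdims=True)

d1 = np.gradient(spectra, axis=1)             # first derivative along wavelength
d2 = np.gradient(d1, axis=1)                  # second derivative
stacked = np.hstack([spectra, snv(spectra), d1, d2])
print(stacked.shape)  # (50, 800): 4 x 200 features
```

Each transform emphasizes different spectral structure (scatter correction, slopes, curvature), giving the downstream model several complementary views of the same measurement.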
It is important to note that differences between wheat varieties can be subtle, making classification highly sensitive to minor alterations, such as those induced by storage conditions. Although this concern has been acknowledged in the literature [192], none of the reviewed studies explicitly examined whether classification accuracy is maintained when using stored grains as opposed to freshly harvested samples.
The detection of damaged kernels is critical for assessing the quality and marketability of wheat batches. Although only five studies on this topic were included in this review, they employ a diverse array of methods to address the challenge. RGB imaging was used by Gao et al. [190] to classify broken, sprouted, injured, moldy, and spotted kernels, and by Sabanci [199] to detect kernels damaged by sunn pests. Gao et al. noted that distinguishing between five visually similar damage categories posed significant challenges, not only in terms of model performance but also due to increased annotation errors during dataset preparation.
Hyperspectral imaging has also been employed to detect damaged, germinated, and mildewed grains [193], as well as to identify slightly sprouted kernels [204], offering richer spectral information for nuanced classification. In an alternative approach, Yang et al. [203] explored the use of impact acoustic signals to identify kernels affected by mildew or insect damage. In this method, kernels are dropped from a height of 50 cm onto a metal surface, and the resulting sounds are captured by a microphone. These audio signals are then transformed into spectrograms, two-dimensional visual representations of frequency and intensity over time, which serve as inputs for a deep learning model.
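The signal-to-spectrogram conversion behind the acoustic approach is a short-time Fourier transform: the recording is split into overlapping windows and the magnitude spectrum of each window becomes one column of a 2D image. The sketch below uses a synthetic decaying "ping" in place of a real kernel impact recording.

```python
import numpy as np

# Sketch of turning an impact sound into a spectrogram image for a CNN.
# The signal is synthetic; real recordings come from kernels hitting
# a metal plate.
fs = 16000
t = np.arange(0, 0.2, 1 / fs)
signal = np.exp(-40 * t) * np.sin(2 * np.pi * 3000 * t)   # decaying "ping"

win, hop = 256, 128
frames = [signal[i:i + win] * np.hanning(win)
          for i in range(0, len(signal) - win, hop)]
spec = np.abs(np.fft.rfft(np.array(frames), axis=1)).T    # freq x time matrix
print(spec.shape)                                         # (freq bins, frames)
```

The resulting frequency-by-time matrix is then treated like any image, which is what allows standard CNN architectures to be reused for an acoustic classification problem.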
The task of distinguishing between bread and durum wheat was explored in three studies, all led by the same first author [197,198,200]. Two of these studies employed RGB imaging to perform the classification [197,200], while the third utilized a multispectral imaging system covering a broad spectral range from the ultraviolet to the near-infrared [198], thereby capturing more detailed spectral information to improve discrimination. Additionally, the problem of grain counting, important for yield estimation and crop assessment, was addressed by Wei et al. [202], who combined RGB imaging with image augmentation techniques to enhance model robustness and performance. This model was designed for healthy wheat grains and may struggle with broken or irregular grains.

3.7. Other Applications

Table 7 presents all the remaining articles considered in this review.
The task of wheat mapping and row identification is inherently grounded in the use of remote sensing imagery, predominantly captured by satellites [52,208,209,210,211], though some studies have also relied on UAV-based data [207]. Among the five studies reviewed on this topic, three employed deep learning models [207,209,211], while the remaining two applied traditional machine learning algorithms [52,208]. Multispectral imagery was the most frequently used data type [52,208,209,211], although RGB [207] and Synthetic Aperture Radar (SAR) imagery [52] have also been incorporated.
With the exception of Fang et al. [208], all studies reviewed applied some form of data fusion. For instance, Cai et al. [207] integrated texture, grayscale, and hue–saturation–value (HSV) features extracted from UAV imagery using a deep learning-based feature fusion framework. Similarly, Luo et al. [209] combined diverse data sources—including satellite-derived vegetation indices (NDVI and LAI), climate variables from TerraClimate, soil properties from the Harmonized World Soil Database, and cropland masks from GFSAD1k—to enhance wheat area mapping and yield estimation. In another example, Tian et al. [52] fused optical imagery (Sentinel-2 and Landsat-8) with SAR data (Sentinel-1) to differentiate between garlic and winter wheat cropping areas. Lastly, Zhong et al. [211] trained deep learning models for winter wheat mapping using fused MODIS time-series NDVI data (from Terra and Aqua satellites) and county-level agricultural statistics from the USDA NASS.
Three main challenges are frequently associated with wheat mapping. First, cloud contamination in optical imagery can significantly degrade dataset quality [52,208]. Second, the spatial resolution (GSD) of some satellite platforms may be too coarse to capture fine-scale variations in wheat fields, leading to mixed pixels that contain multiple land cover classes. While constellations such as Sentinel and Landsat offer moderate resolutions (10–30 m) [52,208], others like MODIS provide much coarser resolutions [209,211]. Third, ground-truth generation presents substantial difficulties across all reviewed studies. For example, Cai et al. [207] noted the complexity of annotating UAV images due to irregular crop row structures and the presence of vacant or cluttered areas. Other studies relied on manual visual interpretation, a process that is both labor intensive and inherently subjective [208]. To improve annotation accuracy, some authors incorporated field surveys [52]. In the case of Luo et al. [209], subnational agricultural census data were used, though these datasets varied in format, quality, and temporal coverage across different countries. Finally, the lack of pixel-level labeled training data was highlighted as a major limitation, impacting both the training and validation of models.
In the context of wheat flour classification, Nargesi et al. [213] employed a hyperspectral imaging system to differentiate between various wheat flour types. Accurate classification is critical, as the misuse of specific flour types can compromise the quality of the final product. The authors noted the need for manual preprocessing, such as sieving to 300 µm, to mitigate spectral noise caused by particle size variation. Complementing this, Shen et al. [214] developed a deep learning model to identify wheat impurities using RGB image data. While the method proved effective, the authors observed that occlusion and overlap between wheat and impurities (e.g., straw or insects) impaired classification accuracy. To improve model robustness, data augmentation techniques, including image rotation and flipping, were applied to the training set.
A more sophisticated approach to impurity detection was proposed by Shen et al. [215], who introduced a method integrating terahertz spectral imaging with convolutional neural networks. This fusion of spectral and spatial information yielded pseudo-color THz images that improved classification accuracy. Despite promising results, the system faced limitations in scalability due to the high cost of THz sensors and the restricted range of impurity types analyzed. As in the previous study, data augmentation was used to enhance model generalization.
Beyond the realm of image classification, Bourguet et al. [212] proposed an AI-based argumentation framework to support policy decisions related to wheat-based food quality. Their system synthesizes knowledge from the scientific literature, expert interviews, and regulatory documents to evaluate trade-offs in public health policies, particularly those concerning bread production. Applied to the French PNNS (Programme National Nutrition Santé), the framework facilitated decisions about promoting whole-grain versus refined flour by considering factors such as nutritional benefits, sanitary risks, economic feasibility, and consumer preferences. The study emphasized the complexity of formalizing stakeholder arguments and the reliance on manual expert input.
Both Bartley et al. [216] and Shafaei et al. [217] aimed to estimate grain moisture content, a key factor affecting quality, shelf-life, pricing, and storage risk. Bartley et al. [216] proposed a non-destructive, real-time method using a microwave transmission system with horn antennas and a network analyzer. The study employed artificial neural networks (ANNs) with input features derived from amplitude, phase, and permittivity values, constituting a form of data fusion. In contrast, Shafaei et al. [217] used the hydration time and temperature to predict hydration characteristics, including moisture content, through AI models. Measurements were based on weight changes, without electronic sensors or data fusion. The models used were not deep learning based but relied on traditional methods such as MLP and ANFIS. While both studies addressed moisture prediction, Bartley et al. [216] focused on sensor-driven, real-time estimation, whereas Shafaei et al. [217] employed a lab-based, classical modeling approach.
Two studies addressed nitrogen monitoring in wheat, highlighting its importance for crop health, yield, and environmental sustainability. Nitrogen is vital for chlorophyll production and photosynthesis, and its accurate estimation enables precision fertilization and improved nitrogen use efficiency. Singh et al. [218] used a proximal hyperspectral sensor (ASD FieldSpec) to collect high-resolution canopy reflectance data and applied traditional machine learning models to estimate nitrogen content directly. This method provided detailed spectral insights under controlled conditions. Wu et al. [219] employed multi-temporal UAV multispectral imagery to estimate chlorophyll content (SPAD), a proxy for nitrogen status. Using a DJI Phantom 4 Multispectral UAV, they combined multiple vegetation indices across four time points after wheat heading. This approach, which involved feature- and temporal-level data fusion, supported broad-scale, non-destructive nitrogen monitoring. Four models were tested, including one deep learning algorithm.
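The index computation and temporal stacking involved in such approaches can be sketched as follows. The band values, the two-date stack, and the simple stacking scheme are synthetic illustrations; the specific indices and fusion strategy of [219] may differ.

```python
import numpy as np

def ndvi(nir, red):
    # NDVI = (NIR - Red) / (NIR + Red); epsilon avoids division by zero
    return (nir - red) / (nir + red + 1e-9)

# Synthetic 2x2 reflectance patches (illustrative values)
nir = np.array([[0.6, 0.5], [0.7, 0.4]])
red = np.array([[0.1, 0.2], [0.1, 0.3]])
v = ndvi(nir, red)

# Temporal-level fusion (one simple form): stack per-date index maps into
# a feature cube before regressing against SPAD readings
dates = np.stack([v, v * 0.9])   # e.g., two acquisition dates after heading
print(dates.shape)               # (2, 2, 2)
```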
Yang et al. [220] proposed DeepTranSpectra (DTS), a deep learning method for transferring calibration models across five different NIR spectrometers. To ensure consistency, spectral data were harmonized through wavelength transformation and interpolation, a form of instrument-level data fusion. The study aimed to predict crude protein content in wheat and soybean meal, an essential parameter for quality control and non-destructive analysis. Due to limited data, the training sets were augmented tenfold using random spectral variations. Although based on simulated scenarios, DTS demonstrated strong potential for improving model transferability and reliability across heterogeneous NIR devices.
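The harmonization and tenfold augmentation steps can be sketched with simple linear interpolation and additive noise. The wavelength grids, the synthetic spectrum, and the noise level below are illustrative assumptions, not the actual DTS settings.

```python
import numpy as np

def harmonize(spectrum, src_wavelengths, target_wavelengths):
    # Resample a spectrum from one instrument's wavelength grid onto a
    # common target grid (instrument-level harmonization via interpolation)
    return np.interp(target_wavelengths, src_wavelengths, spectrum)

# Instrument A: 1100-2500 nm in 2 nm steps; common grid: 8 nm steps
wl_a = np.arange(1100, 2500, 2.0)
target = np.arange(1100, 2500, 8.0)
spec_a = np.sin(wl_a / 100.0)                 # synthetic spectrum

common = harmonize(spec_a, wl_a, target)

# Tenfold augmentation with small random spectral variations
rng = np.random.default_rng(1)
augmented = np.stack([common + rng.normal(0, 0.01, common.size)
                      for _ in range(10)])
print(common.shape, augmented.shape)          # (175,) (10, 175)
```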
Akkem et al. [221] developed a machine learning-based crop recommendation system aimed at improving transparency and trust. The system utilized tabular data from sources like soil, weather, and historical yields, integrating features without applying full data fusion. To address the “black-box” issue, the study employed XAI methods, helping users interpret model outputs. A Streamlit-based interface was also created for interactive visualization. While effective, the authors noted that counterfactual explanations still require further validation in real-world applications.
Bai et al. [222] investigated the use of wheat germ oil and hydrogen in dual fuel mode to improve diesel engine performance and reduce emissions. To avoid extensive experimental trials, the study employed traditional machine learning algorithms to predict key engine parameters. The experimental setup included gas analyzers for emissions, a smoke meter, a piezoelectric pressure transducer, flow meters, and a crank angle encoder. This combination of dual-fuel combustion and machine learning enabled accurate predictions while reducing the need for costly physical testing.
Ghasemi-Mobtaker et al. [223] aimed to support sustainable wheat farming by predicting output energy, economic profit, and global warming potential (GWP). They compared the performance of different ML models to evaluate environmental impacts. Data were collected through field surveys and farmer interviews, without using sensors or remote sensing tools. While this method offered valuable insights, it also posed a risk of response bias due to the subjective nature of interview-based data.
Núñez et al. [224] aimed to optimize amylase production using solid-state fermentation with Rhizopus microsporus and low-cost agro-industrial wastes. The study compared traditional response surface methodology with ANNs combined with genetic algorithms to improve modeling and prediction accuracy. Using ternary mixtures of substrates, the study applied composition-level data fusion to identify optimal substrate combinations. While ANN-GA provided strong predictive performance, the research was limited to laboratory-scale experiments, with no industrial validation.

4. Discussion

The challenges associated with applying AI to wheat production are diverse, encompassing both application-specific issues and broader, cross-cutting barriers that affect nearly all research in the field. Some of these general challenges stand out as the most pervasive obstacles to the wider and more effective adoption of AI technologies in agriculture. This section focuses on discussing these key challenges and proposing potential solutions to address them.
Deep learning methods have generally outperformed traditional machine learning approaches, such as support vector machines (SVMs) and random forests, in tasks like disease detection, yield prediction, and phenotypic trait estimation. This superiority stems from their ability to automatically extract hierarchical features from raw data without the need for handcrafted feature engineering, which is often required in traditional models. For instance, ref. [13] reported that convolutional neural networks (CNNs) achieved higher prediction accuracies for wheat yield compared to classical regression models when applied to UAV imagery. Similarly, ref. [12] demonstrated that deep learning models provided more robust disease classification under variable field conditions than support vector machines. However, it is important to note that deep learning approaches typically demand larger datasets and higher computational resources, which may limit their applicability in certain agricultural contexts.
Crop fields are inherently unstructured environments, where both intrinsic and extrinsic factors introduce significant variability into nearly all types of data collected [81,118]. This issue is especially pronounced in the case of digital images [225], as conditions such as lighting, angle of insolation, plant architecture, soil background, and sensor settings vary widely [226], making it virtually impossible to capture two images under identical conditions [5,110]. High levels of variability usually lead to models with poor generalization capabilities [227,228]. Deep learning models, in particular, are vulnerable to unseen conditions and thus require exposure to data from diverse environments and conditions for reliable predictions [229].
Building datasets that fully capture the entire range of real-world variation is largely unfeasible [230]. In practice, most published studies rely on datasets that fall far short of representing the true diversity of field conditions [227,231]. Consequently, the models developed under such constrained scenarios tend to produce overly optimistic results that fail to reflect real-world performance [232]. This issue is especially pronounced when model performance is validated using a subset of the original dataset rather than an independent, external dataset, which can lead to inflated accuracy metrics and misleading conclusions [35,44]. It is important to note, however, that efforts are currently underway to generate large-scale, annotated public datasets with different types of data [233].
While data augmentation is often used in an attempt to enhance dataset representativity [227,234], it remains an imperfect and limited solution, frequently insufficient for producing technologies that are truly ready for field deployment [5]. Even with the support of advanced techniques such as GANs [235,236,237,238,239], constructing truly representative datasets remains a significant challenge [240]. In addition, augmentation is not always applied correctly. If data augmentation is performed prior to dividing the dataset into training and test subsets, the random split may result in nearly identical images (differing only slightly due to augmentation) appearing across all subsets. This introduces significant bias into the results. Unfortunately, this flawed approach has been adopted in numerous published studies [96] and is often cited as justification for its continued use. Ultimately, the most effective way to overcome data limitations is by collecting additional data across a broader range of environmental and operational conditions. However, achieving such diversity demands considerable effort, which in turn calls for collaboration among research groups and the development of data-sharing networks aligned around common goals.
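The leakage-safe ordering described above, splitting first and augmenting only the training subset, can be sketched as follows (the image arrays and augmentation operations are illustrative):

```python
import numpy as np

rng = np.random.default_rng(42)
images = [rng.random((32, 32)) for _ in range(100)]

# 1) Split FIRST, so no augmented near-duplicate of a test image can
#    leak into the training set
idx = rng.permutation(len(images))
train_idx, test_idx = idx[:80], idx[80:]
train = [images[i] for i in train_idx]
test = [images[i] for i in test_idx]

# 2) Augment ONLY the training subset afterwards
def augment(img):
    return [np.fliplr(img), np.flipud(img), np.rot90(img)]

train_aug = train + [a for img in train for a in augment(img)]
print(len(train_aug), len(test))  # 320 20
```

Reversing steps 1 and 2, i.e., augmenting the full dataset before the random split, would scatter near-identical copies across training and test subsets and inflate accuracy metrics.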
Promoting interdisciplinary collaboration is essential for advancing AI-driven solutions in wheat research. Agronomists and plant pathologists can contribute domain-specific knowledge for accurate ground-truth labeling and agronomic interpretation of results. Remote sensing specialists can aid in selecting optimal data acquisition strategies, while computer scientists and AI researchers can focus on model development, optimization, and explainability. Collaborative efforts should prioritize the creation of large, diverse, and standardized datasets to improve model generalizability. Additionally, the establishment of shared research platforms, open benchmarks, and coordinated field trials would accelerate the transition from experimental results to real-world applications. Funding agencies and academic institutions are encouraged to support interdisciplinary research initiatives that bridge gaps between agriculture and AI.
In particularly complex domains such as plant pathology, even collaborative research efforts may not be sufficient to overcome data scarcity. In such cases, leveraging citizen science and social media-based data collection emerges as a promising solution [110,239]. Citizen science initiatives, which engage farmers and non-expert volunteers in data collection, have already shown success in supporting agricultural machine learning models. For example, the Radiant Earth Foundation [241] has utilized citizen-contributed data for land cover classification and crop type identification across Africa, while the PlantVillage Nuru app [242] enables farmers to monitor plant health through smartphone imagery, generating large and diverse datasets [78,87]. Encouraging similar frameworks in wheat monitoring could greatly enhance the geographic and phenotypic diversity of datasets, while fostering user engagement and technology adoption. Nonetheless, effectively engaging stakeholders across the agricultural ecosystem remains a challenge, often dependent on favorable conditions and appropriate incentives. Moreover, more informal forms of citizen science, such as compiling datasets from online sources, can introduce substantial noise due to inconsistencies in image quality, resolution, and background conditions [227], underscoring the need for careful data curation and validation.
Beyond expanding datasets, advanced learning strategies such as few-shot learning (FSL) and self-supervised learning (SSL) offer promising alternatives to traditional supervised approaches. Few-shot learning methods enable models to generalize from a very limited number of labeled examples, thereby reducing the dependency on extensive annotated datasets. For instance, Uzhinskiy [243] evaluated different few-shot learning methods for plant disease recognition, demonstrating that accurate classification could be achieved even with a minimal number of training samples. Similarly, Ghanbarzadeh and Soleimani [244] showed that self-supervised learning approaches significantly improved remote sensing image classification by enabling models to learn meaningful representations from unlabeled data. Applying such methodologies to wheat monitoring tasks could help address current data limitations, enhancing model robustness and facilitating reliable performance in data-scarce environments.
The integration of heterogeneous data sources such as genomic, phenotypic, environmental, and management information has become essential in agricultural AI research [245,246]. Combining different types of images has also been frequently explored [11]. Known as data fusion, this process allows models to capture complex interactions and improve predictive performance [111,232,247,248]. Farooq et al. [249] highlight its role in strengthening genotype–phenotype associations, while other authors note that combining different types of remote sensing data enhances the accuracy of deep learning models [22,235,250,251]. In addition, Darwin et al. [252] emphasize that including contextual variables during modeling is crucial for improving reliability. Despite its advantages, implementing data fusion poses technical challenges. These include the need for dense, high-quality datasets and robust models capable of handling variable formats and scales [253,254]. Overall, while data fusion holds clear potential, its success depends on both computational strategies and comprehensive datasets.
Some problems require multi-class classification, where the data must be categorized into one of several possible classes. In such cases, it is common for some classes to be significantly more frequent than others [92,111,173,227,237]. For example, certain wheat diseases may occur almost every season, while others appear only sporadically [238]. This results in severe class imbalance, which must be properly addressed to prevent the development of biased models that underperform on underrepresented classes [68,87,95,98,255]. A variety of techniques are available to handle class imbalance, including resampling methods, cost-sensitive learning, and data augmentation [69,73,90,233]. However, the choice of method should be made carefully, taking into account the specific characteristics and constraints of the problem at hand [240].
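One common cost-sensitive remedy, inverse-frequency class weighting, can be sketched as follows; the class names and counts are hypothetical, and resampling or augmentation are alternative remedies.

```python
from collections import Counter

def inverse_frequency_weights(labels):
    # Cost-sensitive remedy: weight each class inversely to its frequency
    # so rare classes contribute proportionally more to the training loss
    counts = Counter(labels)
    n, k = len(labels), len(counts)
    return {c: n / (k * cnt) for c, cnt in counts.items()}

# Hypothetical labels: a disease seen every season vs. a sporadic one
labels = ["rust"] * 90 + ["smut"] * 10
weights = inverse_frequency_weights(labels)
print(weights)  # rare class receives the larger weight
```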
Another important data-related challenge, particularly relevant to prediction and estimation tasks, is the need for accurate ground-truth values to serve as reference points and training targets for the models [227]. However, generating ground-truth data is often labor intensive [1,71,76,108,233], costly, and, in some cases, destructive [140], which adds logistical complexity and increases the overall cost of the research [32,60]. Crowdsourcing [11,22] and automated labeling tools [235] offer valuable support, but they frequently introduce errors that can distort both training and validation processes. To mitigate these ground-truth issues, some studies have adopted weak supervision strategies [22], for example using high-accuracy classification outputs from traditional machine learning methods as proxy labels for training deep learning models [91]. Some authors have emphasized the need for semi-supervised, unsupervised, and self-supervised learning approaches to reduce reliance on manually labeled data [226].
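A minimal sketch of this proxy-labeling idea keeps only high-confidence outputs of a classical scorer as training labels for the deep model; the scorer, thresholds, and feature values below are all hypothetical placeholders.

```python
def classical_score(x):
    # Stand-in for a calibrated probability from a traditional model
    # (e.g., an SVM or random forest); identity is a placeholder here
    return x

unlabeled = [0.95, 0.2, 0.7, 0.05, 0.85]

# Keep only high-confidence predictions as proxy labels; ambiguous
# samples are discarded rather than given noisy labels
proxy = [(x, int(s > 0.5)) for x in unlabeled
         if (s := classical_score(x)) > 0.9 or s < 0.1]
print(proxy)  # [(0.95, 1), (0.05, 0)]
```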
Moreover, the process of establishing ground-truth can involve subjective judgment, especially in field-based evaluations [91,102,111,227], which introduces uncertainty and reduces the reliability and reproducibility of the results [78,92,232,256]. Additionally, inter-annotator variability can be substantial, underscoring the importance of involving multiple experts or adopting consensus-based strategies to ensure reliable labeling [227]. Although there are no straightforward solutions to the challenges of ground-truth generation, it is crucial that studies explicitly disclose potential sources of error in their annotation processes. Such transparency enables a more nuanced interpretation of the findings and enhances the overall credibility and reproducibility of the research.
High computational demand is a recurring challenge in the application of artificial intelligence, particularly in deep learning [226]. When computational burden arises on the training side, there are technically viable solutions, such as the use of GPUs, cloud computing, or model parallelization, that can reduce training time to acceptable levels [72]. However, these solutions often come with significant financial costs, which may be prohibitive for some research groups or institutions [257]. In contrast, when models are computationally intensive during inference, it can severely limit their practical usability, especially when deployment is intended on devices with limited processing capabilities, such as smartphones or edge devices [258].
That said, it is important to recognize that not all applications require real-time or near real-time operation [237,240,247,259]. In some cases, inference times measured in minutes or even hours may be perfectly acceptable, depending on the urgency and context of the task at hand [227,233]. This flexibility opens the door for the use of more complex models in offline or batch processing scenarios, where immediate feedback is not critical. Nonetheless, it is important to note that in precision agriculture applications involving UAVs or robotic systems, near-real-time inference becomes particularly relevant, thereby favoring the use of lightweight and computationally efficient models [225,227,252,254].
An often overlooked but increasingly important issue in agricultural AI applications is data privacy [226,245]. With many countries enforcing strict regulations on data sharing and processing, including the need for explicit consent from landowners or data subjects, ensuring compliance has become a significant challenge [260]. This is particularly problematic for technologies intended for direct use by farmers and rural workers, where ease of deployment is crucial. In response, some studies focused on real-world applications have adopted security measures such as encrypted communication and token-based access [87]. Additionally, recent research has investigated privacy-preserving approaches that eliminate the need for centralized data transfer or sharing. Techniques like federated learning allow models to be trained locally on users’ devices, thereby mitigating legal and ethical concerns related to data movement and aligning with emerging privacy regulations [72,226].
Federated learning (FL) offers a promising decentralized framework for developing AI models while preserving data privacy across different farms and institutions. Although applications of FL in wheat research are still emerging, several case studies in agriculture highlight its potential. For instance, ref. [261] demonstrated the use of FL to collaboratively train crop disease detection models across geographically distributed farms without sharing sensitive data. Similarly, ref. [262] applied FL to precision irrigation management, enabling multiple farms to optimize water usage based on shared model improvements. These examples illustrate how FL can overcome data-sharing barriers, making it a promising approach for future wheat disease monitoring and yield prediction systems across diverse agroecological regions.
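The core FedAvg aggregation step, in which only model weights leave each farm, can be sketched as follows; the toy weight vectors and client dataset sizes are illustrative.

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    # Size-weighted average of locally trained model weights; raw field
    # data never leaves each farm, only the weight vectors do
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

# Three farms, each with a locally trained model (here, one weight vector)
local = [np.array([1.0, 2.0]), np.array([3.0, 4.0]), np.array([5.0, 6.0])]
sizes = [100, 100, 200]
global_w = fedavg(local, sizes)
print(global_w)  # [3.5 4.5]
```

In a full FL round, the server would broadcast `global_w` back to the clients, which resume local training, and the cycle repeats until convergence.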
Despite the growing number of studies exploring the application of AI in wheat production, relatively few practical technologies have successfully transitioned from academic research to real-world farm implementation [232,236,237]. Several factors contribute to this gap between research and adoption. First, the cost–benefit ratio of many AI-based solutions may not be compelling enough to justify their adoption, particularly for small- and medium-sized producers [237,263]. Second, some models are computationally intensive, making them incompatible with the hardware constraints of field-deployable devices [11,225,258]. Third, in some cases, the technologies developed are misaligned with the actual needs and constraints of the intended users, limiting their relevance and usability [12,232,246,264,265]. Fourth, even promising models may underperform under real-world conditions due to the challenges previously discussed, such as poor generalizability, data limitations, and environmental variability [225,227,237]. Finally, some authors cite the lack of connectivity in production areas as a major hurdle for the adoption of the technologies [235].
To bridge this gap, greater emphasis must be placed on translating academic advancements into practical, user-centered technologies that are cost effective, scalable, and responsive to the real needs of farmers and agricultural stakeholders [226,266]. This includes stronger collaboration between researchers [229,247], technology developers, and end users, as well as investments in infrastructure, training, and extension services to support adoption [230]. A simple framework to enable this is suggested in Figure 1. Following these guidelines, successful applications have emerged in areas like cereal quality [255], plant phenotyping [233], yield estimation [257], crop monitoring [266], autonomous irrigation systems [226,263], and beyond. For instance, ref. [91] demonstrated the practical application of drone-based imaging for wheat disease detection under real farm conditions, achieving high classification accuracy despite environmental variability. Similarly, Schirrmann et al. [95] successfully employed UAV-mounted multispectral cameras to detect wheat leaf rust in operational agricultural settings.
Deployment strategies for such AI-driven tools often require accessible and cost-effective UAV platforms, standardized flight protocols, and basic training for farmers or agricultural technicians to interpret outputs. However, infrastructural needs, including reliable internet connectivity for cloud-based processing and availability of affordable sensor equipment, remain critical barriers to large-scale adoption. Costs for drones and multispectral or hyperspectral sensors, though decreasing, still represent a significant investment for smallholder farmers.
To facilitate the adoption of these technologies, protocols could be developed, emphasizing low-cost drone models equipped with simplified imaging systems, integration with farmer-friendly mobile applications for disease alerts, and partnerships with extension services for capacity building. Successful pilot programs that bundle equipment, software, and training could serve as scalable prototypes for broader deployment.

5. Conclusions

This review examined the current state of the art in artificial intelligence (AI) techniques and models applied to challenges related to wheat crops. The volume of research in this area has been growing steadily, and substantial advances have been made not only in prediction accuracy but also in understanding how AI models generate their outputs. Despite these achievements, numerous challenges and research gaps remain unresolved. Many of these were identified and discussed throughout the article, with potential solutions proposed where feasible.
Emerging trends point to promising directions for future research, particularly in the fusion of heterogeneous data sources and the development of hybrid modeling approaches. For instance, Shen et al. [47] demonstrated that integrating multispectral and thermal imagery significantly improved wheat yield estimation accuracy compared to using either modality alone, highlighting the value of multi-source data fusion in enhancing model robustness and sensitivity to key crop parameters. Such approaches can better capture the complexity of agricultural systems by leveraging complementary information from different sensor types.
Another important trend involves combining deep learning techniques with physical modeling. Cao et al. [30] proposed a hybrid framework that integrates process-based crop models with deep neural networks, enabling models to incorporate domain-specific knowledge while retaining the flexibility and pattern recognition capabilities of AI methods. This hybridization has the potential to improve model generalization under diverse and changing environmental conditions, addressing some of the limitations associated with purely data-driven models. Future research should prioritize the exploration of data fusion strategies that combine satellite, UAV, ground sensor, and meteorological data, as well as the further development of hybrid AI-physical models tailored to specific agricultural tasks such as yield prediction, disease monitoring, and stress detection.
Looking ahead, based on recent developments in AI and crop management, several trajectories appear likely to dominate. AI and deep learning methods are expected to continue advancing rapidly, broadening their applicability across a wide range of crop management tasks. At the same time, progress in model interpretability may enable the development of lighter, more robust architectures suited for deployment in real-world environments. As technical barriers diminish, an increasing number of AI-based technologies should become viable under operational conditions. Although limitations related to data representativeness and model generalization will persist, these challenges are likely to diminish as sensor technologies and data acquisition methods evolve. Additionally, the swift progress in other AI domains may yield unforeseen impacts, as illustrated by the societal influence of conversational models.

Funding

This research was funded by Fapesp, process numbers 2022/09319-9 and 2024/01308-3.

Conflicts of Interest

The author declares no conflicts of interest.

Abbreviations

Acronym - Meaning
ACO - Ant Colony Optimization
AdaBoost - Adaptive Boosting
AI - Artificial Intelligence
AK - Arc-Cosine Kernel
ANFIS - Adaptive Neuro-Fuzzy Inference System
ANN - Artificial Neural Network
ARIMA - Auto-Regressive Integrated Moving Average
BPNN - Backpropagation Neural Network
BMTME - Bayesian Multi-Trait and Multi-Environment model
CEEMDAN - Complete Ensemble Empirical Mode Decomposition with Adaptive Noise
CNN - Convolutional Neural Network
CW - CERES-Wheat
DF - Deep Forest
DL - Deep Learning
DNN - Deep Neural Network
DON - Deoxynivalenol
DT - Decision Tree
E-MMC - Elliptical-Maximum Margin Criterion
EnKF - Ensemble Kalman Filter
FCN - Fully Convolutional Network
GA - Genetic Algorithm
GAN - Generative Adversarial Network
GBDT - Gradient Boosting Decision Trees
GBM - Gradient Boosting Machine
GBRT - Gradient Boosting Regression Tree
GBLUP - Genomic Best Linear Unbiased Prediction
GK - Gaussian Kernel
GPR - Gaussian Process Regression
GRNN - Generalized Regression Neural Network
GRU - Gated Recurrent Unit
GSD - Ground Sample Distance
GWO - Grey Wolf Optimization
IABC - Improved Artificial Bee Colony
IPSO - Improved Particle Swarm Optimization
kNN - k-Nearest Neighbors
KRR - Kernel Ridge Regression
LAI - Leaf Area Index
Lasso - Least Absolute Shrinkage and Selection Operator
LDA - Linear Discriminant Analysis
LR - Linear Regression
LSTM - Long Short-Term Memory
ML - Machine Learning
MLP - Multilayer Perceptron
MLR - Multiple Linear Regression
MTDL - Multi-Trait Deep Learning
NB - Naive Bayes
NDVI - Normalized Difference Vegetation Index
NLB - Non-Local Block
OLS - Ordinary Least Squares
PCANet - Principal Component Analysis Network
PCNN - Pulse-Coupled Neural Network
PLS - Partial Least Squares
PLSDA - Partial Least Squares Discriminant Analysis
PLSR - Partial Least Squares Regression
PSPNet - Pyramid Scene Parsing Network
RCTC - Residual-Capsule Network with Threshold Convolution
RF - Random Forest
RFR - Random Forest Regression
RGB - Red-Green-Blue
RNN - Recurrent Neural Network
RPN - Region Proposal Network
RR - Ridge Regression
RRBLUP - Ridge Regression Best Linear Unbiased Predictor
SAR - Synthetic Aperture Radar
SCNN - Shallow Convolutional Neural Network
SIF - Solar-Induced Fluorescence
SPGAN - Spectrogram Generative Adversarial Network
SSD - Single-Shot Detector
SVM - Support Vector Machine
SVR - Support Vector Regression
TGBLUP - Threshold Genomic Best Linear Unbiased Prediction
TRMM - Tropical Rainfall Measuring Mission
UAV - Unmanned Aerial Vehicle
XGBoost - Extreme Gradient Boosting
YOLO - You Only Look Once

References

  1. Aboneh, T.; Rorissa, A.; Srinivasagan, R.; Gemechu, A. Computer Vision Framework for Wheat Disease Identification and Classification Using Jetson GPU Infrastructure. Technologies 2021, 9, 47. [Google Scholar] [CrossRef]
  2. Ahmed, A.A.M.; Sharma, E.; Jui, S.J.J.; Deo, R.C.; Nguyen-Huy, T.; Ali, M. Kernel Ridge Regression Hybrid Method for Wheat Yield Prediction with Satellite-Derived Predictors. Remote Sens. 2022, 14, 1136. [Google Scholar] [CrossRef]
  3. Soussi, A.; Zero, E.; Sacile, R.; Trinchero, D.; Fossa, M. Smart Sensors and Smart Data for Precision Agriculture: A Review. Sensors 2024, 24, 2647. [Google Scholar] [CrossRef] [PubMed]
  4. Elashmawy, R.; Uysal, I. Precision Agriculture Using Soil Sensor Driven Machine Learning for Smart Strawberry Production. Sensors 2023, 23, 2247. [Google Scholar] [CrossRef]
  5. Barbedo, J.G.A. Deep learning applied to plant pathology: The problem of data representativeness. Trop. Plant Pathol. 2021, 47, 85–94. [Google Scholar] [CrossRef]
  6. Bock, C.H.; Barbedo, J.G.A.; Ponte, E.M.D.; Bohnenkamp, D.; Mahlein, A.K. From visual estimates to fully automated sensor-based measurements of plant disease severity: Status and challenges for improving accuracy. Phytopathol. Res. 2020, 2, 9. [Google Scholar] [CrossRef]
  7. Tsouros, D.C.; Bibi, S.; Sarigiannidis, P.G. Drones in agriculture: A review and bibliometric analysis. Comput. Electron. Agric. 2022, 198, 107017. [Google Scholar] [CrossRef]
  8. Atzberger, C. Advances in Remote Sensing of Agriculture: Context Description, Existing Operational Monitoring Systems and Major Information Needs. Remote Sens. 2013, 5, 949–981. [Google Scholar] [CrossRef]
  9. de Moraes Navarro, E.; Costa, N.; de Jesus Pereira, A.M. A Systematic Review of IoT Solutions for Smart Farming. Sensors 2020, 20, 4231. [Google Scholar] [CrossRef]
  10. Bhat, S.A.; Huang, N. Big Data and AI Revolution in Precision Agriculture: Survey and Challenges. IEEE Access 2021, 9, 110209–110222. [Google Scholar] [CrossRef]
  11. Shaikh, T.A.; Rasool, T.; Lone, F.R. Towards Leveraging the Role of Machine Learning and Artificial Intelligence in Precision Agriculture and Smart Farming. Comput. Electron. Agric. 2022, 198, 107119. [Google Scholar] [CrossRef]
  12. Shafi, U.; Mumtaz, R.; Shafaq, Z.; Zaidi, S.M.H.; Kaifi, M.O.; Mahmood, Z.; Zaidi, S.A.R. Wheat rust disease detection techniques: A technical perspective. J. Plant Dis. Prot. 2022, 129, 489–504. [Google Scholar] [CrossRef]
  13. Khaki, S.; Safaei, N.; Pham, H.; Wang, L. WheatNet: A lightweight convolutional neural network for high-throughput image-based wheat head detection and counting. Neurocomputing 2022, 489, 78–89. [Google Scholar] [CrossRef]
  14. Çelik, Y.; Başaran, E.; Dilay, Y. Identification of durum wheat grains by using hybrid convolution neural network and deep features. Signal Image Video Process. 2022, 16, 1135–1142. [Google Scholar] [CrossRef]
  15. Balaska, V.; Adamidou, Z.; Vryzas, Z.; Gasteratos, A. Sustainable Crop Protection via Robotics and Artificial Intelligence Solutions. Machines 2023, 11, 774. [Google Scholar] [CrossRef]
  16. Yang, S.; Li, L.; Fei, S.; Yang, M.; Tao, Z.; Meng, Y.; Xiao, Y. Wheat Yield Prediction Using Machine Learning Method Based on UAV Remote Sensing Data. Drones 2024, 8, 284. [Google Scholar] [CrossRef]
  17. de Camargo, T.; Schirrmann, M.; Landwehr, N.; Dammer, K.H.; Pflanz, M. Optimized Deep Learning Model as a Basis for Fast UAV Mapping of Weed Species in Winter Wheat Crops. Remote Sens. 2021, 13, 1704. [Google Scholar] [CrossRef]
  18. Oliveira, F.; Costa, D.G.; Assis, F.; Silva, I. Internet of Intelligent Things: A convergence of embedded systems, edge computing and machine learning. Internet Things 2024, 26, 101153. [Google Scholar] [CrossRef]
  19. Ryo, M. Explainable artificial intelligence and interpretable machine learning for agricultural data analysis. Artif. Intell. Agric. 2022, 6, 46–50. [Google Scholar] [CrossRef]
  20. Mostafaeipour, A.; Fakhrzad, M.B.; Gharaat, S.; Jahangiri, M.; Dhanraj, J.A.; Band, S.S.; Issakhov, A.; Mosavi, A. Machine Learning for Prediction of Energy in Wheat Production. Agriculture 2020, 10, 517. [Google Scholar] [CrossRef]
  21. Paudel, D.; de Wit, A.; Boogaard, H.; Marcos, D.; Osinga, S.; Athanasiadis, I.N. Interpretability of deep learning models for crop yield forecasting. Comput. Electron. Agric. 2023, 206, 107663. [Google Scholar] [CrossRef]
  22. Joshi, A.; Pradhan, B.; Gite, S.; Chakraborty, S. Remote-Sensing Data and Deep-Learning Techniques in Crop Mapping and Yield Prediction: A Systematic Review. Remote Sens. 2023, 15, 2014. [Google Scholar] [CrossRef]
  23. Bock, C.H.; Pethybridge, S.J.; Barbedo, J.G.A.; Esker, P.D.; Mahlein, A.K.; Ponte, E.M.D. A phytopathometry glossary for the twenty-first century: Towards consistency and precision in intra- and inter-disciplinary dialogues. Trop. Plant Pathol. 2022, 47, 14–24. [Google Scholar] [CrossRef]
  24. Ahmed, M.U.; Hussain, I. Prediction of Wheat Production Using Machine Learning Algorithms in northern areas of Pakistan. Telecommun. Policy 2022, 46, 102370. [Google Scholar] [CrossRef]
  25. Bali, N.; Singla, A. Deep Learning Based Wheat Crop Yield Prediction Model in Punjab Region of North India. Appl. Artif. Intell. 2021, 35, 1304–1328. [Google Scholar] [CrossRef]
  26. Bhojani, S.H.; Bhatt, N. Wheat crop yield prediction using new activation functions in neural network. Neural Comput. Appl. 2020, 32, 13941–13951. [Google Scholar] [CrossRef]
  27. Bian, C.; Shi, H.; Wu, S.; Zhang, K.; Wei, M.; Zhao, Y.; Sun, Y.; Zhuang, H.; Zhang, X.; Chen, S. Prediction of Field-Scale Wheat Yield Using Machine Learning Method and Multi-Spectral UAV Data. Remote Sens. 2022, 14, 1474. [Google Scholar] [CrossRef]
  28. Cao, J.; Zhang, Z.; Tao, F.; Zhang, L.; Luo, Y.; Han, J.; Li, Z. Identifying the Contributions of Multi-Source Data for Winter Wheat Yield Prediction in China. Remote Sens. 2020, 12, 750. [Google Scholar] [CrossRef]
  29. Cao, J.; Zhang, Z.; Luo, Y.; Zhang, L.; Zhang, J.; Li, Z.; Tao, F. Wheat yield predictions at a county and field scale with deep learning, machine learning, and google earth engine. Eur. J. Agron. 2021, 123, 126204. [Google Scholar] [CrossRef]
  30. Cao, J.; Wang, H.; Li, J.; Tian, Q.; Niyogi, D. Improving the Forecasting of Winter Wheat Yields in Northern China with Machine Learning–Dynamical Hybrid Subseasonal-to-Seasonal Ensemble Prediction. Remote Sens. 2022, 14, 1707. [Google Scholar] [CrossRef]
  31. Cheng, E.; Zhang, B.; Peng, D.; Zhong, L.; Yu, L.; Liu, Y.; Xiao, C.; Li, C.; Li, X.; Chen, Y.; et al. Wheat yield estimation using remote sensing data based on machine learning approaches. Front. Plant Sci. 2022, 13, 1090970. [Google Scholar] [CrossRef]
  32. Fei, S.; Hassan, M.A.; Xiao, Y.; Su, X.; Chen, Z.; Cheng, Q.; Duan, F.; Chen, R.; Ma, Y. UAV-based Multi-Sensor Data Fusion and Machine Learning Algorithm for Yield Prediction in Wheat. Precis. Agric. 2023, 24, 187–212. [Google Scholar] [CrossRef]
  33. Haider, S.A.; Naqvi, S.R.; Akram, T.; Umar, G.A.; Shahzad, A.; Sial, M.R.; Khaliq, S.; Kamran, M. LSTM Neural Network Based Forecasting Model for Wheat Production in Pakistan. Agronomy 2019, 9, 72. [Google Scholar] [CrossRef]
  34. Huang, H.; Huang, J.; Wu, Y.; Zhuo, W.; Song, J.; Li, X.; Li, L.; Su, W.; Ma, H.; Liang, S. The Improved Winter Wheat Yield Estimation by Assimilating GLASS LAI Into a Crop Growth Model With the Proposed Bayesian Posterior-Based Ensemble Kalman Filter. IEEE Trans. Geosci. Remote Sens. 2023, 61, 4401818. [Google Scholar] [CrossRef]
  35. Kheir, A.M.S.; Ammar, K.A.; Amer, A.; Ali, M.G.M.; Ding, Z.; Elnashar, A. Machine learning-based cloud computing improved wheat yield simulation in arid regions. Comput. Electron. Agric. 2022, 203, 107457. [Google Scholar] [CrossRef]
  36. Khoshnevisan, B.; Rafiee, S.; Omid, M.; Mousazadeh, H. Development of an intelligent system based on ANFIS for predicting wheat grain yield on the basis of energy inputs. Inf. Process. Agric. 2014, 1, 14–22. [Google Scholar] [CrossRef]
  37. Li, Y.; Liu, H.; Ma, J.; Zhang, L. Estimation of Leaf Area Index for Winter Wheat at Early Stages Based on Convolutional Neural Networks. Comput. Electron. Agric. 2021, 190, 106480. [Google Scholar] [CrossRef]
  38. Li, L.; Wang, B.; Feng, P.; Liu, D.L.; He, Q.; Zhang, Y.; Wang, Y.; Li, S.; Lu, X.; Yue, C.; et al. Developing machine learning models with multi-source environmental data to predict wheat yield in China. Comput. Electron. Agric. 2022, 194, 106790. [Google Scholar] [CrossRef]
  39. Liu, Y.; Wang, S.; Wang, X.; Chen, B.; Chen, J.; Wang, J.; Huang, M.; Wang, Z.; Ma, L.; Wang, P.; et al. Exploring the Superiority of Solar-Induced Chlorophyll Fluorescence Data in Predicting Wheat Yield Using Machine Learning and Deep Learning Methods. Comput. Electron. Agric. 2022, 192, 106612. [Google Scholar] [CrossRef]
  40. Aamir, R.; Shahid, M.A.; Zaman, M.; Miao, Y.; Huang, Y.; Safdar, M.; Maqbool, S.; Muhammad, N.E. Improving Wheat Yield Prediction with Multi-Source Remote Sensing Data and Machine Learning in Arid Regions. Comput. Electron. Agric. 2023, 209, 108317. [Google Scholar] [CrossRef]
  41. Nevavuori, P.; Narra, N.; Lipping, T. Crop yield prediction with deep convolutional neural networks. Comput. Electron. Agric. 2019, 163, 104859. [Google Scholar] [CrossRef]
  42. Romero, J.R.; Roncallo, P.F.; Akkiraju, P.C.; Ponzoni, I.; Echenique, V.C.; Carballido, J.A. Using classification algorithms for predicting durum wheat yield in the province of Buenos Aires. Comput. Electron. Agric. 2013, 96, 173–179. [Google Scholar] [CrossRef]
  43. Ruan, G.; Li, X.; Yuan, F.; Cammarano, D.; Ata-UI-Karim, S.T.; Liu, X.; Tian, Y.; Zhu, Y.; Cao, W.; Cao, Q. Improving wheat yield prediction integrating proximal sensing and weather data with machine learning. Comput. Electron. Agric. 2022, 195, 106852. [Google Scholar] [CrossRef]
  44. Salehnia, N.; Salehnia, N.; Ansari, H.; Kolsoumi, S.; Bannayan, M. Climate data clustering effects on arid and semi-arid rainfed wheat yield: A comparison of artificial intelligence and K-means approaches. Int. J. Biometeorol. 2019, 63, 861–872. [Google Scholar] [CrossRef]
  45. Schreiber, L.V.; Amorim, J.G.A.; Guimarães, L.; Matos, D.M.; Maciel da Costa, C.; Parraga, A. Above-ground Biomass Wheat Estimation: Deep Learning with UAV-based RGB Images. Appl. Artif. Intell. 2022, 36, 2055392. [Google Scholar] [CrossRef]
  46. Sharma, A.; Georgi, M.; Tregubenko, M.; Tselykh, A.; Tselykh, A. Enabling smart agriculture by implementing artificial intelligence and embedded sensing. Comput. Ind. Eng. 2022, 165, 107936. [Google Scholar] [CrossRef]
  47. Shen, Y.; Mercatoris, B.; Cao, Z.; Kwan, P.; Guo, L.; Yao, H.; Cheng, Q. Improving Wheat Yield Prediction Accuracy Using LSTM-RF Framework Based on UAV Thermal Infrared and Multispectral Imagery. Agriculture 2022, 12, 892. [Google Scholar] [CrossRef]
  48. Srivastava, A.K.; Safaei, N.; Khaki, S.; Lopez, G.; Zeng, W.; Ewert, F.; Gaiser, T.; Rahimi, J. Winter wheat yield prediction using convolutional neural networks from environmental and phenological data. Sci. Rep. 2022, 12, 3215. [Google Scholar] [CrossRef]
  49. Sun, Z.; Li, Q.; Jin, S.; Song, Y.; Xu, S.; Wang, X.; Cai, J.; Zhou, Q.; Ge, Y.; Zhang, R.; et al. Simultaneous Prediction of Wheat Yield and Grain Protein Content Using Multitask Deep Learning from Time-Series Proximal Sensing. Plant Phenomics 2022, 2022, 9757948. [Google Scholar] [CrossRef]
  50. Tanabe, R.; Matsui, T.; Tanaka, T.S. Winter wheat yield prediction using convolutional neural networks and UAV-based multispectral imagery. Field Crop. Res. 2023, 291, 108786. [Google Scholar] [CrossRef]
  51. Tian, H.; Wang, P.; Tansey, K.; Zhang, S.; Zhang, J.; Li, H. An IPSO-BP neural network for estimating wheat yield using two remotely sensed variables in the Guanzhong Plain, PR China. Comput. Electron. Agric. 2020, 169, 105180. [Google Scholar] [CrossRef]
  52. Tian, H.; Pei, J.; Huang, J.; Li, X.; Wang, J.; Zhou, B.; Qin, Y.; Wang, L. Garlic and Winter Wheat Identification Based on Active and Passive Satellite Imagery and the Google Earth Engine in Northern China. Remote Sens. 2020, 12, 3539. [Google Scholar] [CrossRef]
  53. Tripathi, A.; Tiwari, R.K.; Tiwari, S.P. A Deep Learning Multi-Layer Perceptron and Remote Sensing Approach for Soil Health-Based Crop Yield Estimation. Int. J. Appl. Earth Obs. Geoinf. 2022, 113, 102959. [Google Scholar] [CrossRef]
  54. Wang, Y.; Zhang, Z.; Feng, L.; Du, Q.; Runge, T. Combining Multi-Source Data and Machine Learning Approaches to Predict Winter Wheat Yield in the Conterminous United States. Remote Sens. 2020, 12, 1232. [Google Scholar] [CrossRef]
  55. Wang, X.; Huang, J.; Feng, Q.; Yin, D. Winter Wheat Yield Prediction at County Level and Uncertainty Analysis in Main Wheat-Producing Regions of China with Deep Learning Approaches. Remote Sens. 2020, 12, 1744. [Google Scholar] [CrossRef]
  56. Wang, J.; Si, H.; Gao, Z.; Shi, L. Winter Wheat Yield Prediction Using an LSTM Model from MODIS LAI Products. Agriculture 2022, 12, 1707. [Google Scholar] [CrossRef]
  57. Wang, J.; Wang, P.; Tian, H.; Tansey, K.; Liu, J.; Quan, W. A deep learning framework combining CNN and GRU for improving wheat yield estimates using time series remotely sensed multi-variables. Comput. Electron. Agric. 2023, 206, 107705. [Google Scholar] [CrossRef]
  58. Wolanin, A.; Mateo-García, G.; Camps-Valls, G.; Gómez-Chova, L.; Meroni, M.; Duveiller, G.; Liangzhi, Y.; Guanter, L. Estimating and understanding crop yields with explainable deep learning in the Indian Wheat Belt. Environ. Res. Lett. 2020, 15, 024019. [Google Scholar] [CrossRef]
  59. Wu, S.; Deng, L.; Guo, L.; Wu, Y. Wheat leaf area index prediction using data fusion based on high-resolution unmanned aerial vehicle imagery. Plant Methods 2022, 18, 68. [Google Scholar] [CrossRef]
  60. Xie, Y.; Huang, J. Integration of a Crop Growth Model and Deep Learning Methods to Improve Satellite-Based Yield Estimation of Winter Wheat in Henan Province, China. Remote Sens. 2021, 13, 4372. [Google Scholar] [CrossRef]
  61. Yang, S.; Hu, L.; Wu, H.; Ren, H.; Qiao, H.; Li, P.; Fan, W. Integration of Crop Growth Model and Random Forest for Winter Wheat Yield Estimation From UAV Hyperspectral Imagery. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2021, 14, 6253–6268. [Google Scholar] [CrossRef]
  62. Zhang, J.; Cheng, T.; Guo, W.; Xu, X.; Qiao, H.; Xie, Y.; Ma, X. Leaf area index estimation model for UAV image hyperspectral data based on wavelength variable selection and machine learning methods. Plant Methods 2021, 17, 49. [Google Scholar] [CrossRef] [PubMed]
  63. Zhou, X.; Kono, Y.; Win, A.; Matsui, T.; Tanaka, T.S.T. Predicting within-field variability in grain yield and protein content of winter wheat using UAV-based multispectral imagery and machine learning approaches. Plant Prod. Sci. 2021, 24, 137–151. [Google Scholar] [CrossRef]
  64. Zhou, W.; Liu, Y.; Ata-Ul-Karim, S.T.; Ge, Q.; Li, X.; Xiao, J. Integrating climate and satellite remote sensing data for predicting county-level wheat yield in China using machine learning methods. Int. J. Appl. Earth Obs. Geoinf. 2022, 111, 102861. [Google Scholar] [CrossRef]
  65. Barbedo, J.G.A. Data Fusion in Agriculture: Resolving Ambiguities and Closing Data Gaps. Sensors 2022, 22, 2285. [Google Scholar] [CrossRef]
  66. Zhou, J.; Li, J.; Wang, C.; Wu, H.; Zhao, C.; Teng, G. Crop disease identification and interpretation method based on multimodal deep learning. Comput. Electron. Agric. 2021, 189, 106408. [Google Scholar] [CrossRef]
  67. Akbar, S.; Ahmad, K.T.; Abid, M.K.; Aslam, N. Wheat Disease Detection for Yield Management Using IoT and Deep Learning Techniques. VFAST Trans. Softw. Eng. 2020, 9, 19–30. [Google Scholar] [CrossRef]
  68. Azimi, N.; Sofalian, O.; Davari, M.; Asghari, A.; Zare, N. Statistical and Machine Learning-Based FHB Detection in Durum Wheat. Plant Breed. Biotechnol. 2020, 8, 265–280. [Google Scholar] [CrossRef]
  69. Bao, W.; Yang, X.; Liang, D.; Hu, G.; Yang, X. Lightweight convolutional neural network model for field wheat ear disease identification. Comput. Electron. Agric. 2021, 189, 106367. [Google Scholar] [CrossRef]
  70. Bao, W.; Zhao, J.; Hu, G.; Zhang, D.; Huang, L.; Liang, D. Identification of wheat leaf diseases and their severity based on elliptical-maximum margin criterion metric learning. Sustain. Comput. Inform. Syst. 2021, 30, 100526. [Google Scholar] [CrossRef]
  71. Deng, J.; Hong, D.; Li, C.; Yao, J.; Yang, Z.; Zhang, Z.; Chanussot, J. RustQNet: Multimodal deep learning for quantitative inversion of wheat stripe rust disease index. Comput. Electron. Agric. 2024, 225, 109245. [Google Scholar] [CrossRef]
  72. Fahim-Ul-Islam, M.; Chakrabarty, A.; Ahmed, S.T.; Rahman, R.; Kwon, H.H.; Piran, M.J. A Comprehensive Approach Toward Wheat Leaf Disease Identification Leveraging Transformer Models and Federated Learning. IEEE Access 2024, 12, 109128–109136. [Google Scholar] [CrossRef]
  73. Fang, X.; Zhen, T.; Li, Z. Lightweight Multiscale CNN Model for Wheat Disease Detection. Appl. Sci. 2023, 13, 5801. [Google Scholar] [CrossRef]
  74. Gao, Y.; Wang, H.; Li, M.; Su, W.H. Automatic Tandem Dual BlendMask Networks for Severity Assessment of Wheat Fusarium Head Blight. Agriculture 2022, 12, 1493. [Google Scholar] [CrossRef]
  75. Genaev, M.A.; Skolotneva, E.S.; Gultyaeva, E.I.; Orlova, E.A.; Bechtold, N.P.; Afonnikov, D.A. Image-Based Wheat Fungi Diseases Identification by Deep Learning. Plants 2021, 10, 1500. [Google Scholar] [CrossRef]
  76. Gonçalves, J.P.; Pinto, F.A.; Queiroz, D.M.; Villar, F.M.; Barbedo, J.G.; Ponte, E.M.D. Deep learning architectures for semantic segmentation and automatic estimation of severity of foliar symptoms caused by diseases or pests. Biosyst. Eng. 2021, 210, 129–142. [Google Scholar] [CrossRef]
  77. Goyal, L.; Sharma, C.M.; Singh, A.; Singh, P.K. Leaf and spike wheat disease detection & classification using an improved deep convolutional architecture. Inform. Med. Unlocked 2021, 25, 100642. [Google Scholar] [CrossRef]
  78. Haider, W.; Rehman, A.-U.; Durrani, N.M.; Rehman, S.U. A Generic Approach for Wheat Disease Classification and Verification Using Expert Opinion. IEEE Access 2021, 9, 31122–31135. [Google Scholar] [CrossRef]
  79. Hayit, T.; Erbay, H.; Varçın, F.; Hayit, F.; Akci, N. Determination of the Severity Level of Yellow Rust Disease in Wheat by Using Convolutional Neural Networks. J. Plant Pathol. 2021, 103, 923–934. [Google Scholar] [CrossRef]
  80. Jiang, Z.; Dong, Z.; Jiang, W.; Yang, Y. Recognition of Rice Leaf Diseases and Wheat Leaf Diseases Based on Multi-Task Deep Transfer Learning. Comput. Electron. Agric. 2021, 186, 106184. [Google Scholar] [CrossRef]
  81. Jiang, J.; Liu, H.; Zhao, C.; He, C.; Ma, J.; Cheng, T.; Zhu, Y.; Cao, W.; Yao, X. Evaluation of Diverse Convolutional Neural Networks and Training Strategies for Wheat Leaf Disease Identification with Field-Acquired Photographs. Remote Sens. 2022, 14, 3446. [Google Scholar] [CrossRef]
  82. Jin, X.; Jie, L.; Wang, S.; Qi, H.J.; Li, S.W. Classifying Wheat Hyperspectral Pixels of Healthy Heads and Fusarium Head Blight Disease Using a Deep Neural Network in the Wild Field. Remote Sens. 2018, 10, 395. [Google Scholar] [CrossRef]
  83. Khan, H.; Haq, I.U.; Munsif, M.; Mustaqeem; Khan, S.U.; Lee, M.Y. Automated Wheat Diseases Classification Framework Using Advanced Machine Learning Technique. Agriculture 2022, 12, 1226. [Google Scholar] [CrossRef]
  84. Lin, Z.; Mu, S.; Huang, F.; Mateen, K.A.; Wang, M.; Gao, W.; Jia, J. A Unified Matrix-Based Convolutional Neural Network for Fine-Grained Image Classification of Wheat Leaf Diseases. IEEE Access 2019, 7, 11570–11586. [Google Scholar] [CrossRef]
  85. Liu, Y.; Liu, G.; Sun, H.; An, L.; Zhao, R.; Liu, M.; Tang, W.; Li, M.; Yan, X.; Ma, Y.; et al. Exploring multi-features in UAV based optical and thermal infrared images to estimate disease severity of wheat powdery mildew. Comput. Electron. Agric. 2024, 225, 109285. [Google Scholar] [CrossRef]
  86. Lu, J.; Hu, J.; Zhao, G.; Mei, F.; Zhang, C. An in-field automatic wheat disease diagnosis system. Comput. Electron. Agric. 2017, 142, 369–379. [Google Scholar] [CrossRef]
  87. Dainelli, R.; Bruno, A.; Martinelli, M.; Moroni, D.; Rocchi, L.; Morelli, S.; Ferrari, E.; Silvestri, M.; La Cava, P.; Toscano, P. GranoScan: An AI-powered mobile app for in-field identification of biotic threats of wheat. Front. Plant Sci. 2023, 14, 1119032. [Google Scholar] [CrossRef]
  88. Maqsood, M.H.; Mumtaz, R.; Haq, I.U.; Shafi, U.; Zaidi, S.M.H.; Hafeez, M. Super Resolution Generative Adversarial Network (SRGANs) for Wheat Stripe Rust Classification. Sensors 2021, 21, 7903. [Google Scholar] [CrossRef]
  89. Mi, Z.; Zhang, X.; Su, J.; Han, D.; Su, B. Wheat Stripe Rust Grading by Deep Learning With Attention Mechanism and Images From Mobile Devices. Front. Plant Sci. 2020, 11, 558126. [Google Scholar] [CrossRef]
  90. Nigam, S.; Jain, R.; Marwaha, S.; Arora, A.; Haque, M.A.; Dheeraj, A.; Singh, V.K. Deep transfer learning model for disease identification in wheat crop. Ecol. Inform. 2023, 75, 102068. [Google Scholar] [CrossRef]
  91. Pan, Q.; Gao, M.; Wu, P.; Yan, J.; Li, S. A Deep-Learning-Based Approach for Wheat Yellow Rust Disease Recognition from Unmanned Aerial Vehicle Images. Sensors 2021, 21, 6540. [Google Scholar] [CrossRef]
  92. Pan, Q.; Gao, M.; Wu, P.; Yan, J.; AbdelRahman, M.A.E. Image Classification of Wheat Rust Based on Ensemble Learning. Sensors 2022, 22, 6047. [Google Scholar] [CrossRef] [PubMed]
  93. Qiu, R.; Yang, C.; Moghimi, A.; Zhang, M.; Steffenson, B.J.; Hirsch, C.D. Detection of Fusarium Head Blight in Wheat Using a Deep Neural Network and Color Imaging. Remote Sens. 2019, 11, 2658. [Google Scholar] [CrossRef]
  94. Rangarajan, A.K.; Whetton, R.L.; Mouazen, A.M. Detection of fusarium head blight in wheat using hyperspectral data and deep learning. Expert Syst. Appl. 2022, 208, 118240. [Google Scholar] [CrossRef]
  95. Schirrmann, M.; Landwehr, N.; Giebel, A.; Garz, A.; Dammer, K.H. Early Detection of Stripe Rust in Winter Wheat Using Deep Residual Neural Networks. Front. Plant Sci. 2021, 12, 469689. [Google Scholar] [CrossRef]
  96. Shafi, U.; Mumtaz, R.; Haq, I.U.; Hafeez, M.; Iqbal, N.; Shaukat, A.; Zaidi, S.M.H.; Mahmood, Z. Wheat Yellow Rust Disease Infection Type Classification Using Texture Features. Sensors 2022, 22, 146. [Google Scholar] [CrossRef] [PubMed]
  97. Su, W.H.; Zhang, J.; Yang, C.; Page, R.; Szinyei, T.; Hirsch, C.D.; Steffenson, B.J. Automatic Evaluation of Wheat Resistance to Fusarium Head Blight Using Dual Mask-RCNN Deep Learning Frameworks in Computer Vision. Remote Sens. 2021, 13, 26. [Google Scholar] [CrossRef]
  98. Su, J.; Yi, D.; Su, B.; Mi, Z.; Liu, C.; Hu, X.; Xu, X.; Guo, L.; Chen, W.H. Aerial Visual Perception in Smart Farming: Field Study of Wheat Yellow Rust Monitoring. IEEE Trans. Ind. Inform. 2021, 17, 2242–2252. [Google Scholar] [CrossRef]
  99. Weng, S.; Hu, X.; Zhu, W.; Li, P.; Zheng, S.; Zheng, L.; Huang, L.; Zhang, D. Surface-enhanced Raman spectroscopy with gold nanorods modified by sodium citrate and liquid–liquid interface self-extraction for detection of deoxynivalenol in Fusarium head blight-infected wheat kernels coupled with a fully convolution network. Food Chem. 2021, 359, 129847. [Google Scholar] [CrossRef]
  100. Weng, S.; Han, K.; Chu, Z.; Zhu, G.; Liu, C.; Zhu, Z.; Zhang, Z.; Zheng, L.; Huang, L. Reflectance images of effective wavelengths from hyperspectral imaging for identification of Fusarium head blight-infected wheat kernels combined with a residual attention convolution neural network. Comput. Electron. Agric. 2021, 190, 106483. [Google Scholar] [CrossRef]
  101. Xiao, Y.; Dong, Y.; Huang, W.; Liu, L.; Ma, H. Wheat Fusarium Head Blight Detection Using UAV-Based Spectral and Texture Features in Optimal Window Size. Remote Sens. 2021, 13, 2437. [Google Scholar] [CrossRef]
  102. Xu, L.; Cao, B.; Zhao, F.; Ning, S.; Xu, P.; Zhang, W.; Hou, X. Wheat Leaf Disease Identification Based on Deep Learning Algorithms. Physiol. Mol. Plant Pathol. 2023, 123, 101940. [Google Scholar] [CrossRef]
  103. Zhang, D.; Wang, D.; Gu, C.; Jin, N.; Zhao, H.; Chen, G.; Liang, H.; Liang, D. Using Neural Network to Identify the Severity of Wheat Fusarium Head Blight in the Field Environment. Remote Sens. 2019, 11, 2375. [Google Scholar] [CrossRef]
  104. Zhang, X.; Han, L.; Dong, Y.; Shi, Y.; Huang, W.; Han, L.; González-Moreno, P.; Ma, H.; Ye, H.; Sobeih, T. A Deep Learning-Based Approach for Automated Yellow Rust Disease Detection from High-Resolution Hyperspectral UAV Images. Remote Sens. 2019, 11, 1554. [Google Scholar] [CrossRef]
  105. Zhang, D.Y.; Chen, G.; Yin, X.; Hu, R.J.; Gu, C.Y.; Pan, Z.G.; Zhou, X.G.; Chen, Y. Integrating Spectral and Image Data to Detect Fusarium Head Blight of Wheat. Comput. Electron. Agric. 2020, 175, 105588. [Google Scholar] [CrossRef]
  106. Zhang, D.; Wang, Z.; Jin, N.; Gu, C.; Chen, Y.; Huang, Y. Evaluation of Efficacy of Fungicides for Control of Wheat Fusarium Head Blight Based on Digital Imaging. IEEE Access 2020, 8, 109876–109890. [Google Scholar] [CrossRef]
  107. Zhang, T.; Xu, Z.; Su, J.; Yang, Z.; Liu, C.; Chen, W.H.; Li, J. Ir-UNet: Irregular Segmentation U-Shape Network for Wheat Yellow Rust Detection by UAV Multispectral Imagery. Remote Sens. 2021, 13, 3892. [Google Scholar] [CrossRef]
  108. Zhang, D.Y.; Luo, H.S.; Wang, D.Y.; Zhou, X.G.; Li, W.F.; Gu, C.Y.; Zhang, G.; He, F.M. Assessment of the levels of damage caused by Fusarium head blight in wheat using an improved YoloV5 method. Comput. Electron. Agric. 2022, 198, 107086. [Google Scholar] [CrossRef]
  109. Zhang, T.; Yang, Z.; Xu, Z.; Li, J. Wheat Yellow Rust Severity Detection by Efficient DF-UNet and UAV Multispectral Imagery. IEEE Sens. J. 2022, 22, 9057–9068. [Google Scholar] [CrossRef]
  110. Barbedo, J.G.A. Plant disease identification from individual lesions and spots using deep learning. Biosyst. Eng. 2019, 180, 96–107. [Google Scholar] [CrossRef]
  111. Feng, G.; Gu, Y.; Wang, C.; Zhou, Y.; Huang, S.; Luo, B. Wheat Fusarium Head Blight Automatic Non-Destructive Detection Based on Multi-Scale Imaging: A Technical Perspective. Plants 2024, 13, 1722. [Google Scholar] [CrossRef]
  112. Mahlein, A.K.; Barbedo, J.G.A.; Chiang, K.S.; Ponte, E.M.D.; Bock, C.H. From Detection to Protection: The Role of Optical Sensors, Robots, and Artificial Intelligence in Modern Plant Disease Management. Phytopathology 2024, 114, 6–16. [Google Scholar] [CrossRef]
  113. Saad, M.H.; Salman, A.E. A plant disease classification using one-shot learning technique with field images. Multimed. Tools Appl. 2024, 83, 58935–58960. [Google Scholar] [CrossRef]
  114. Argüeso, D.; Picón, A.; Irusta, U.; Medela, A.; San-Emeterio, M.G.; Bereciartua, A.; Álvarez Gila, A. Few-Shot Learning Approach for Plant Disease Classification Using Images Taken in the Field. Comput. Electron. Agric. 2020, 175, 105542. [Google Scholar] [CrossRef]
  115. Zhou, L.; Tan, L.; Zhang, C.; Zhao, N.; He, Y.; Qiu, Z. A Portable NIR-System for Mixture Powdery Food Analysis Using Deep Learning. LWT Food Sci. Technol. 2022, 153, 112456. [Google Scholar] [CrossRef]
  116. Barbedo, J.G.A.; Tibola, C.S.; Fernandes, J.M.C. Detecting Fusarium head blight in wheat kernels using hyperspectral imaging. Biosyst. Eng. 2015, 131, 65–76. [Google Scholar] [CrossRef]
  117. Thompson, M.; Tarr, A.; Tarr, J.A.; Ritterband, S. Unmanned Aerial Vehicles. In Drone Law and Policy: Global Development, Risks, Regulation and Insurance; Tarr, A.A., Tarr, J.A., Thompson, M., Ellis, J., Eds.; Routledge: New York, NY, USA, 2021; pp. 153–168. [Google Scholar] [CrossRef]
  118. El-Kenawy, E.S.M.; Khodadadi, N.; Mirjalili, S.; Makarovskikh, T.; Abotaleb, M.; Karim, F.K.; Alkahtani, H.K.; Abdelhamid, A.A.; Eid, M.M.; Horiuchi, T.; et al. Metaheuristic Optimization for Improving Weed Detection in Wheat Images Captured by Drones. Mathematics 2022, 10, 4421. [Google Scholar] [CrossRef]
  119. Jabir, B.; Falih, N. Deep Learning-Based Decision Support System for Weeds Detection in Wheat Fields. Int. J. Electr. Comput. Eng. 2022, 12, 816–825. [Google Scholar] [CrossRef]
  120. Li, Z.; Wang, D.; Yan, Q.; Zhao, M.; Wu, X.; Liu, X. Winter wheat weed detection based on deep learning models. Comput. Electron. Agric. 2024, 227, 109448. [Google Scholar] [CrossRef]
  121. Mishra, A.M.; Harnal, S.; Mohiuddin, K.; Gautam, V.; Nasr, O.A.; Goyal, N.; Alwetaishi, M.; Singh, A. A Deep Learning-Based Novel Approach for Weed Growth Estimation. Intell. Autom. Soft Comput. 2022, 31, 1156–1172. [Google Scholar] [CrossRef]
  122. Su, D.; Kong, H.; Qiao, Y.; Sukkarieh, S. Data augmentation for deep learning based semantic segmentation and crop-weed classification in agricultural robotics. Comput. Electron. Agric. 2021, 190, 106418. [Google Scholar] [CrossRef]
  123. Su, D.; Qiao, Y.; Kong, H.; Sukkarieh, S. Real-time detection of inter-row ryegrass in wheat farms using deep learning. Biosyst. Eng. 2021, 204, 198–211. [Google Scholar] [CrossRef]
  124. Su, J.; Yi, D.; Coombes, M.; Liu, C.; Zhai, X.; McDonald-Maier, K.; Chen, W.H. Spectral analysis and mapping of blackgrass weed by leveraging machine learning and UAV multispectral imagery. Comput. Electron. Agric. 2022, 192, 106621. [Google Scholar] [CrossRef]
  125. Wang, K.; Hu, X.; Zheng, H.; Lan, M.; Liu, C.; Liu, Y.; Zhong, L.; Li, H.; Tan, S. Weed Detection and Recognition in Complex Wheat Fields Based on an Improved YOLOv7. Front. Plant Sci. 2024, 15, 1372237. [Google Scholar] [CrossRef]
  126. Zhuang, J.; Li, X.; Bagavathiannan, M.; Jin, X.; Yang, J.; Meng, W.; Li, T.; Li, L.; Wang, Y.; Chen, Y.; et al. Evaluation of different deep convolutional neural networks for detection of broadleaf weed seedlings in wheat. Pest Manag. Sci. 2022, 78, 521–529. [Google Scholar] [CrossRef] [PubMed]
  127. Zou, K.; Liao, Q.; Zhang, F.; Che, X.; Zhang, C. A segmentation network for smart weed management in wheat fields. Comput. Electron. Agric. 2022, 202, 107303. [Google Scholar] [CrossRef]
  128. Chen, P.; Li, W.; Yao, S.; Ma, C.; Zhang, J.; Wang, B.; Zheng, C.; Xie, C.; Liang, D. Recognition and counting of wheat mites in wheat fields by a three-step deep learning method. Neurocomputing 2021, 437, 21–30. [Google Scholar] [CrossRef]
  129. Fuentes, S.; Tongson, E.; Unnithan, R.R.; Viejo, C.G. Early Detection of Aphid Infestation and Insect-Plant Interaction Assessment in Wheat Using a Low-Cost Electronic Nose (E-Nose), Near-Infrared Spectroscopy and Machine Learning Modeling. Sensors 2021, 21, 5948. [Google Scholar] [CrossRef]
  130. Li, R.; Wang, R.; Zhang, J.; Xie, C.; Liu, L.; Wang, F.; Chen, H.; Chen, T.; Hu, H.; Jia, X.; et al. An Effective Data Augmentation Strategy for CNN-Based Pest Localization and Recognition in the Field. IEEE Access 2019, 7, 160274–160283. [Google Scholar] [CrossRef]
  131. Li, W.; Chen, P.; Wang, B.; Xie, C. Automatic Localization and Count of Agricultural Crop Pests Based on an Improved Deep Learning Pipeline. Sci. Rep. 2019, 9, 7024. [Google Scholar] [CrossRef]
  132. Elbeltagi, A.; Deng, J.; Wang, K.; Malik, A.; Maroufpoor, S. Modeling long-term dynamics of crop evapotranspiration using deep learning in a semi-arid environment. Agric. Water Manag. 2020, 241, 106334. [Google Scholar] [CrossRef]
  133. Shen, R.; Huang, A.; Li, B.; Guo, J. Construction of a drought monitoring model using deep learning based on multi-source remote sensing data. Int. J. Appl. Earth Obs. Geoinf. 2019, 79, 48–57. [Google Scholar] [CrossRef]
  134. Chu, H.; Zhang, C.; Wang, M.; Gouda, M.; Wei, X.; He, Y.; Liu, Y. Hyperspectral imaging with shallow convolutional neural networks (SCNN) predicts the early herbicide stress in wheat cultivars. J. Hazard. Mater. 2022, 421, 126706. [Google Scholar] [CrossRef]
  135. Weng, S.; Yuan, H.; Zhang, X.; Li, P.; Zheng, L.; Zhao, J.; Huang, L. Deep learning networks for the recognition and quantitation of surface-enhanced Raman spectroscopy. Analyst 2020, 145, 4827–4835. [Google Scholar] [CrossRef]
  136. Yang, B.; Zhu, Y.; Zhou, S. Accurate Wheat Lodging Extraction from Multi-Channel UAV Images Using a Lightweight Network Model. Sensors 2021, 21, 6826. [Google Scholar] [CrossRef] [PubMed]
  137. Zhang, D.; Ding, Y.; Chen, P.; Zhang, X.; Pan, Z.; Liang, D. Automatic extraction of wheat lodging area based on transfer learning method and DeepLabv3+ network. Comput. Electron. Agric. 2020, 179, 105845. [Google Scholar] [CrossRef]
  138. Zhang, Z.; Flores, P.; Igathinathane, C.; Naik, D.L.; Kiran, R.; Ransom, J.K. Wheat Lodging Detection from UAS Imagery Using Machine Learning Algorithms. Remote Sens. 2020, 12, 1838. [Google Scholar] [CrossRef]
  139. Barbedo, J.G.A. Detecting and Classifying Pests in Crops Using Proximal Images and Machine Learning: A Review. AI 2020, 1, 312–328. [Google Scholar] [CrossRef]
  140. Apolo-Apolo, O.E.; Pérez-Ruiz, M.; Martínez-Guanter, J.; Egea, G. A Mixed Data-Based Deep Neural Network to Estimate Leaf Area Index in Wheat Breeding Trials. Agronomy 2020, 10, 175. [Google Scholar] [CrossRef]
  141. Crossa, J.; Martini, J.W.R.; Gianola, D.; Pérez-Rodríguez, P.; Jarquin, D.; Juliana, P.; Montesinos-López, O.; Cuevas, J. Deep Kernel and Deep Learning for Genome-Based Prediction of Single Traits in Multienvironment Breeding Trials. Front. Genet. 2019, 10, 1168. [Google Scholar] [CrossRef]
  142. Ghahremani, M.; Williams, K.; Corke, F.M.K.; Tiddeman, B.; Liu, Y.; Doonan, J.H. Deep Segmentation of Point Clouds of Wheat. Front. Plant Sci. 2021, 12, 608732. [Google Scholar] [CrossRef]
  143. González-Camacho, J.M.; Ornella, L.; Pérez-Rodríguez, P.; Gianola, D.; Dreisigacker, S.; Crossa, J. Applications of Machine Learning Methods to Genomic Selection in Breeding Wheat for Rust Resistance. Plant Genome 2018, 11, 170104. [Google Scholar] [CrossRef] [PubMed]
  144. Guo, J.; Khan, J.; Pradhan, S.; Shahi, D.; Khan, N.; Avci, M.; Mcbreen, J.; Harrison, S.; Brown-Guedira, G.; Murphy, J.P.; et al. Multi-Trait Genomic Prediction of Yield-Related Traits in US Soft Wheat under Variable Water Regimes. Genes 2020, 11, 1270. [Google Scholar] [CrossRef]
  145. Hesami, M.; Condori-Apfata, J.A.; Valencia, M.V.; Mohammadi, M. Application of Artificial Neural Network for Modeling and Studying In Vitro Genotype-Independent Shoot Regeneration in Wheat. Appl. Sci. 2020, 10, 5370. [Google Scholar] [CrossRef]
  146. Khan, Z.; Rahimi-Eichi, V.; Haefele, S.; Garnett, T.; Miklavcic, S.J. Estimation of Vegetation Indices for High-Throughput Phenotyping of Wheat Using Aerial Imaging. Plant Methods 2018, 14, 20. [Google Scholar] [CrossRef] [PubMed]
  147. Moghimi, A.; Yang, C.; Anderson, J.A. Aerial hyperspectral imagery and deep neural networks for high-throughput yield phenotyping in wheat. Comput. Electron. Agric. 2020, 172, 105299. [Google Scholar] [CrossRef]
  148. Montesinos-López, O.A.; Montesinos-López, A.; Crossa, J.; Gianola, D.; Hernández-Suárez, C.M.; Martín-Vallejo, J. Multi-trait, Multi-environment Deep Learning Modeling for Genomic-Enabled Prediction of Plant Traits. G3 Genes Genomes Genet. 2018, 8, 3829–3840. [Google Scholar] [CrossRef]
  149. Montesinos-López, O.A.; Martín-Vallejo, J.; Crossa, J.; Gianola, D.; Hernández-Suárez, C.M.; Montesinos-López, A.; Juliana, P.; Singh, R. New Deep Learning Genomic-Based Prediction Model for Multiple Traits with Binary, Ordinal, and Continuous Phenotypes. G3 Genes Genomes Genet. 2019, 9, 1545–1556. [Google Scholar] [CrossRef]
  150. Montesinos-López, O.A.; Martín-Vallejo, J.; Crossa, J.; Gianola, D.; Hernández-Suárez, C.M.; Montesinos-López, A.; Juliana, P.; Singh, R. A Benchmarking Between Deep Learning, Support Vector Machine and Bayesian Threshold Best Linear Unbiased Prediction for Predicting Ordinal Traits in Plant Breeding. G3 Genes Genomes Genet. 2019, 9, 601–616. [Google Scholar] [CrossRef]
  151. Montesinos-López, O.A.; Montesinos-López, A.; Tuberosa, R.; Maccaferri, M.; Sciara, G.; Ammar, K.; Crossa, J. Multi-Trait, Multi-Environment Genomic Prediction of Durum Wheat With Genomic Best Linear Unbiased Predictor and Deep Learning Methods. Front. Plant Sci. 2019, 10, 1311. [Google Scholar] [CrossRef]
  152. Roth, L.; Camenzind, M.; Aasen, H.; Kronenberg, L.; Barendregt, C.; Camp, K.H.; Walter, A.; Kirchgessner, N.; Hund, A. Repeated Multiview Imaging for Estimating Seedling Tiller Counts of Wheat Genotypes Using Drones. Plant Phenomics 2020, 2020, 3729715. [Google Scholar] [CrossRef] [PubMed]
  153. Sandhu, K.; Patil, S.S.; Pumphrey, M.; Carter, A. Multitrait machine- and deep-learning models for genomic selection using spectral information in a wheat breeding program. Plant Genome 2021, 14, e20119. [Google Scholar] [CrossRef] [PubMed]
  154. Sandhu, K.S.; Aoun, M.; Morris, C.F.; Carter, A.H. Genomic Selection for End-Use Quality and Processing Traits in Soft White Winter Wheat Breeding Program with Machine and Deep Learning Models. Biology 2021, 10, 689. [Google Scholar] [CrossRef] [PubMed]
  155. Sandhu, K.S.; Lozada, D.N.; Zhang, Z.; Pumphrey, M.O.; Carter, A.H. Deep Learning for Predicting Complex Traits in Spring Wheat Breeding Program. Front. Plant Sci. 2021, 11, 613325. [Google Scholar] [CrossRef]
  156. Wang, X.; Xuan, H.; Evers, B.; Shrestha, S.; Pless, R.; Poland, J. High-throughput phenotyping with deep learning gives insight into the genetic architecture of flowering time in wheat. GigaScience 2019, 8, giz120. [Google Scholar] [CrossRef]
  157. Yasrab, R.; Atkinson, J.A.; Wells, D.M.; French, A.P.; Pridmore, T.P.; Pound, M.P. RootNav 2.0: Deep learning for automatic navigation of complex plant root architectures. GigaScience 2019, 8, giz123. [Google Scholar] [CrossRef]
  158. Zenkl, R.; Timofte, R.; Kirchgessner, N.; Roth, L.; Hund, A.; Van Gool, L.; Walter, A.; Aasen, H. Outdoor Plant Segmentation With Deep Learning for High-Throughput Field Phenotyping on a Diverse Wheat Dataset. Front. Plant Sci. 2022, 12, 774068. [Google Scholar] [CrossRef]
  159. Zhang, Z.; Qu, Y.; Ma, F.; Lv, Q.; Zhu, X.; Guo, G.; Li, M.; Yang, W.; Que, B.; Zhang, Y.; et al. Integrating high-throughput phenotyping and genome-wide association studies for enhanced drought resistance and yield prediction in wheat. New Phytol. 2024, 243, 1758–1775. [Google Scholar] [CrossRef]
  160. Zhu, C.; Hu, Y.; Mao, H.; Li, S.; Li, F.; Zhao, C.; Luo, L.; Liu, W.; Yuan, X. A Deep Learning-Based Method for Automatic Assessment of Stomatal Index in Wheat Microscopic Images of Leaf Epidermis. Front. Plant Sci. 2021, 12, 716784. [Google Scholar] [CrossRef]
  161. Alkhudaydi, T.; Reynolds, D.; Griffiths, S.; Zhou, J.; de la Iglesia, B. An Exploration of Deep-Learning Based Phenotypic Analysis to Detect Spike Regions in Field Conditions for UK Bread Wheat. Plant Phenomics 2019, 2019, 7368761. [Google Scholar] [CrossRef]
  162. Dandrifosse, S.; Ennadifi, E.; Carlier, A.; Gosselin, B.; Dumont, B.; Mercatoris, B. Deep learning for wheat ear segmentation and ear density measurement: From heading to maturity. Comput. Electron. Agric. 2022, 199, 107161. [Google Scholar] [CrossRef]
  163. David, E.; Madec, S.; Sadeghi-Tehran, P.; Aasen, H.; Zheng, B.; Liu, S.; Kirchgessner, N.; Ishikawa, G.; Nagasawa, K.; Badhon, M.A.; et al. Global Wheat Head Detection (GWHD) Dataset: A Large and Diverse Dataset of High-Resolution RGB-Labelled Images to Develop and Benchmark Wheat Head Detection Methods. Plant Phenomics 2020, 2020, 3521852. [Google Scholar] [CrossRef] [PubMed]
  164. David, E.; Serouart, M.; Smith, D.; Madec, S.; Velumani, K.; Liu, S.; Wang, X.; Pinto, F.; Shafiee, S.; Tahir, I.S.A.; et al. Global Wheat Head Detection 2021: An Improved Dataset for Benchmarking Wheat Head Detection Methods. Plant Phenomics 2021, 2021, 9846158. [Google Scholar] [CrossRef]
  165. Fourati, F.; Mseddi, W.S.; Attia, R. Wheat Head Detection using Deep, Semi-Supervised and Ensemble Learning. Can. J. Remote Sens. 2021, 47, 198–208. [Google Scholar] [CrossRef]
  166. Genaev, M.A.; Komyshev, E.G.; Smirnov, N.V.; Kruchinina, Y.V.; Goncharov, N.P.; Afonnikov, D.A. Morphometry of the Wheat Spike by Analyzing 2D Images. Agronomy 2019, 9, 390. [Google Scholar] [CrossRef]
  167. Gong, B.; Ergu, D.; Cai, Y.; Ma, B. Real-Time Detection for Wheat Head Applying Deep Neural Network. Sensors 2021, 21, 191. [Google Scholar] [CrossRef] [PubMed]
  168. Hasan, M.M.; Chopin, J.P.; Laga, H.; Miklavcic, S.J. Detection and Analysis of Wheat Spikes Using Convolutional Neural Networks. Plant Methods 2018, 14, 100. [Google Scholar] [CrossRef]
  169. He, M.X.; Hao, P.; Xin, Y.Z. A Robust Method for Wheatear Detection Using UAV in Natural Scenes. IEEE Access 2020, 8, 189043–189053. [Google Scholar] [CrossRef]
  170. Li, J.; Li, C.; Fei, S.; Ma, C.; Chen, W.; Ding, F.; Wang, Y.; Li, Y.; Shi, J.; Xiao, Z. Wheat Ear Recognition Based on RetinaNet and Transfer Learning. Sensors 2021, 21, 4845. [Google Scholar] [CrossRef]
  171. Li, R.; Wu, Y. Improved YOLO v5 Wheat Ear Detection Algorithm Based on Attention Mechanism. Electronics 2022, 11, 1673. [Google Scholar] [CrossRef]
  172. Ma, J.; Li, Y.; Liu, H.; Du, K.; Zheng, F.; Wu, Y.; Zhang, L. Improving segmentation accuracy for ears of winter wheat at flowering stage by semantic segmentation. Comput. Electron. Agric. 2020, 176, 105662. [Google Scholar] [CrossRef]
  173. Ma, J.; Li, Y.; Du, K.; Zheng, F.; Zhang, L.; Gong, Z.; Jiao, W. Segmenting ears of winter wheat at flowering stage using digital images and deep learning. Comput. Electron. Agric. 2020, 168, 105159. [Google Scholar] [CrossRef]
  174. Madec, S.; Jin, X.; Lu, H.; Solan, B.D.; Liu, S.; Duyme, F.; Heritier, E.; Baret, F. Ear density estimation from high resolution RGB imagery using deep learning technique. Agric. For. Meteorol. 2019, 264, 225–234. [Google Scholar] [CrossRef]
  175. Misra, T.; Arora, A.; Marwaha, S.; Chinnusamy, V.; Rao, A.R.; Jain, R.; Sahoo, R.N.; Ray, M.; Kumar, S.; Raju, D.; et al. SpikeSegNet: A deep learning approach utilizing encoder-decoder network with hourglass for spike segmentation and counting in wheat plant from visual imaging. Plant Methods 2020, 16, 40. [Google Scholar] [CrossRef]
  176. Qing, S.; Qiu, Z.; Wang, W.; Wang, F.; Jin, X.; Ji, J.; Zhao, L.; Shi, Y. Improved YOLO-FastestV2 wheat spike detection model based on a multi-stage attention mechanism with a LightFPN detection head. Front. Plant Sci. 2024, 15, 1411510. [Google Scholar] [CrossRef]
  177. Sadeghi-Tehran, P.; Virlet, N.; Ampe, E.M.; Reyns, P.; Hawkesford, M.J. DeepCount: In-Field Automatic Quantification of Wheat Spikes Using Simple Linear Iterative Clustering and Deep Convolutional Neural Networks. Front. Plant Sci. 2019, 10, 1176. [Google Scholar] [CrossRef]
  178. Shen, R.; Zhen, T.; Li, Z. YOLOv5-Based Model Integrating Separable Convolutions for Detection of Wheat Head Images. IEEE Access 2023, 11, 12059–12074. [Google Scholar] [CrossRef]
  179. Sun, J.; Yang, K.; Chen, C.; Shen, J.; Yang, Y.; Wu, X.; Norton, T. Wheat head counting in the wild by an augmented feature pyramid networks-based convolutional neural network. Comput. Electron. Agric. 2022, 193, 106705. [Google Scholar] [CrossRef]
  180. Velumani, K.; Madec, S.; de Solan, B.; Lopez-Lozano, R.; Gillet, J.; Labrosse, J.; Jezequel, S.; Comar, A.; Baret, F. An automatic method based on daily in situ images and deep learning to date wheat heading stage. Field Crop. Res. 2020, 252, 107793. [Google Scholar] [CrossRef]
  181. Wang, D.; Fu, Y.; Yang, G.; Yang, X.; Liang, D.; Zhou, C.; Zhang, N.; Wu, H.; Zhang, D. Combined Use of FCN and Harris Corner Detection for Counting Wheat Ears in Field Conditions. IEEE Access 2019, 7, 178930–178941. [Google Scholar] [CrossRef]
  182. Wang, Y.; Qin, Y.; Cui, J. Occlusion Robust Wheat Ear Counting Algorithm Based on Deep Learning. Front. Plant Sci. 2021, 12, 645899. [Google Scholar] [CrossRef] [PubMed]
  183. Wang, D.; Zhang, D.; Yang, G.; Xu, B.; Luo, Y.; Yang, X. SSRNet: In-Field Counting Wheat Ears Using Multi-Stage Convolutional Neural Network. IEEE Trans. Geosci. Remote Sens. 2022, 60, 4403311. [Google Scholar] [CrossRef]
  184. Xiong, H.; Cao, Z.; Lu, H.; Madec, S.; Liu, L.; Shen, C. TasselNetv2: In-field counting of wheat spikes with context-augmented local regression networks. Plant Methods 2019, 15, 150. [Google Scholar] [CrossRef]
  185. Xu, X.; Li, H.; Yin, F.; Xi, L.; Qiao, H.; Ma, Z.; Shen, S.; Jiang, B.; Ma, X. Wheat Ear Counting Using K-means Clustering Segmentation and Convolutional Neural Network. Plant Methods 2020, 16, 106. [Google Scholar] [CrossRef] [PubMed]
  186. Yang, B.; Gao, Z.; Gao, Y.; Zhu, Y. Rapid Detection and Counting of Wheat Ears in the Field Using YOLOv4 with Attention Module. Agronomy 2021, 11, 1202. [Google Scholar] [CrossRef]
  187. Zang, H.; Wang, Y.; Ru, L.; Zhou, M.; Chen, D.; Zhao, Q.; Zhang, J.; Li, G.; Zheng, G. Detection method of wheat spike improved YOLOv5s based on the attention mechanism. Front. Plant Sci. 2022, 13, 993244. [Google Scholar] [CrossRef]
  188. Zhao, J.; Zhang, X.; Yan, J.; Qiu, X.; Yao, X.; Tian, Y.; Zhu, Y.; Cao, W. A Wheat Spike Detection Method in UAV Images Based on Improved YOLOv5. Remote Sens. 2021, 13, 3095. [Google Scholar] [CrossRef]
  189. Zhao, J.; Yan, J.; Xue, T.; Wang, S.; Qiu, X.; Yao, X.; Tian, Y.; Zhu, Y.; Cao, W.; Zhang, X. A deep learning method for oriented and small wheat spike detection (OSWSDet) in UAV images. Comput. Electron. Agric. 2022, 198, 107087. [Google Scholar] [CrossRef]
  190. Gao, H.; Zhen, T.; Li, Z. Detection of Wheat Unsound Kernels Based on Improved ResNet. IEEE Access 2022, 10, 20092–20101. [Google Scholar] [CrossRef]
  191. Khatri, A.; Agrawal, S.; Chatterjee, J.M. Wheat Seed Classification: Utilizing Ensemble Machine Learning Approach. Sci. Program. 2022, 2022, 1–9. [Google Scholar] [CrossRef]
  192. Laabassi, K.; Belarbi, M.A.; Mahmoudi, S.; Mahmoudi, S.A.; Ferhat, K. Wheat varieties identification based on a deep learning approach. J. Saudi Soc. Agric. Sci. 2021, 20, 281–289. [Google Scholar] [CrossRef]
  193. Li, H.; Zhang, L.; Sun, H.; Rao, Z.; Ji, H. Discrimination of Unsound Wheat Kernels Based on Deep Convolutional Generative Adversarial Network and Near-Infrared Hyperspectral Imaging Technology. Spectrochim. Acta Part A Mol. Biomol. Spectrosc. 2022, 268, 120722. [Google Scholar] [CrossRef]
  194. Lingwal, S.; Bhatia, K.K.; Tomer, M.S. Image-Based Wheat Grain Classification Using Convolutional Neural Network. Multimed. Tools Appl. 2021, 80, 35441–35465. [Google Scholar] [CrossRef]
  195. Özkan, K.; Işık, Ş.; Yavuz, B.T. Identification of wheat kernels by fusion of RGB, SWIR, and VNIR samples. J. Sci. Food Agric. 2019, 99, 4977–4984. [Google Scholar] [CrossRef] [PubMed]
  196. Passos, D.; Mishra, P. An automated deep learning pipeline based on advanced optimisations for leveraging spectral classification modelling. Chemom. Intell. Lab. Syst. 2021, 215, 104354. [Google Scholar] [CrossRef]
  197. Sabanci, K.; Kayabasi, A.; Toktas, A. Computer vision-based method for classification of wheat grains using artificial neural network. J. Sci. Food Agric. 2017, 97, 2588–2593. [Google Scholar] [CrossRef]
  198. Sabanci, K.; Aslan, M.F.; Durdu, A. Bread and durum wheat classification using wavelet-based image fusion. J. Sci. Food Agric. 2020, 100, 5577–5585. [Google Scholar] [CrossRef]
  199. Sabanci, K. Detection of sunn pest-damaged wheat grains using artificial bee colony optimization-based artificial intelligence techniques. J. Sci. Food Agric. 2020, 100, 817–824. [Google Scholar] [CrossRef]
  200. Sabanci, K.; Aslan, M.F.; Ropelewska, E.; Unlersen, M.F.; Durdu, A. A Novel Convolutional-Recurrent Hybrid Network for Sunn Pest–Damaged Wheat Grain Detection. Food Anal. Methods 2022, 15, 1748–1760. [Google Scholar] [CrossRef]
  201. Unlersen, M.F.; Sonmez, M.E.; Aslan, M.F.; Demir, B.; Aydin, N.; Sabanci, K.; Ropelewska, E. CNN–SVM hybrid model for varietal classification of wheat based on bulk samples. Eur. Food Res. Technol. 2022, 248, 2043–2052. [Google Scholar] [CrossRef]
  202. Wei, W.; Tian-le, Y.; Rui, L.; Chen, C.; Tao, L.; Kai, Z.; Cheng-ming, S.; Chun-yan, L.; Xin-kai, Z.; Wen-shan, G. Detection and Enumeration of Wheat Grains Based on a Deep Learning Method Under Various Scenarios and Scales. J. Integr. Agric. 2020, 19, 1998–2008. [Google Scholar] [CrossRef]
  203. Yang, X.; Guo, M.; Lyu, Q.; Ma, M. Detection and Classification of Damaged Wheat Kernels Based on Progressive Neural Architecture Search. Biosyst. Eng. 2021, 208, 176–185. [Google Scholar] [CrossRef]
  204. Zhang, L.; Sun, H.; Rao, Z.; Ji, H. Non-destructive identification of slightly sprouted wheat kernels using hyperspectral data on both sides of wheat kernels. Biosyst. Eng. 2020, 200, 188–199. [Google Scholar] [CrossRef]
  205. Zhao, X.; Que, H.; Sun, X.; Zhu, Q.; Huang, M. Hybrid convolutional network based on hyperspectral imaging for wheat seed varieties classification. Infrared Phys. Technol. 2022, 125, 104270. [Google Scholar] [CrossRef]
  206. Zhou, L.; Zhang, C.; Taha, M.F.; Wei, X.; He, Y.; Qiu, Z.; Liu, Y. Wheat Kernel Variety Identification Based on a Large Near-Infrared Spectral Dataset and a Novel Deep Learning-Based Feature Selection Method. Front. Plant Sci. 2020, 11, 575810. [Google Scholar] [CrossRef] [PubMed]
  207. Cai, W.; Wei, Z.; Song, Y.; Li, M.; Yang, X. Residual-capsule networks with threshold convolution for segmentation of wheat plantation rows in UAV images. Multimed. Tools Appl. 2021, 80, 32131–32147. [Google Scholar] [CrossRef]
  208. Fang, P.; Zhang, X.; Wei, P.; Wang, Y.; Zhang, H.; Liu, F.; Zhao, J. The Classification Performance and Mechanism of Machine Learning Algorithms in Winter Wheat Mapping Using Sentinel-2 10 m Resolution Imagery. Appl. Sci. 2020, 10, 5075. [Google Scholar] [CrossRef]
  209. Luo, Y.; Zhang, Z.; Cao, J.; Zhang, L.; Zhang, J.; Han, J.; Zhuang, H.; Cheng, F.; Tao, F. Accurately mapping global wheat production system using deep learning algorithms. Int. J. Appl. Earth Obs. Geoinf. 2022, 110, 102823. [Google Scholar] [CrossRef]
  210. Meng, S.; Wang, X.; Hu, X.; Luo, C.; Zhong, Y. Deep Learning-Based Crop Mapping in the Cloudy Season Using One-Shot Hyperspectral Satellite Imagery. Remote Sens. 2021, 13, 2674. [Google Scholar] [CrossRef]
  211. Zhong, L.; Hu, L.; Zhou, H.; Tao, X. Deep learning based winter wheat mapping using statistical data as ground references in Kansas and northern Texas, US. Remote Sens. Environ. 2019, 233, 111411. [Google Scholar] [CrossRef]
  212. Bourguet, J.R.; Thomopoulos, R.; Mugnier, M.L.; Abécassis, J. An artificial intelligence-based approach to deal with argumentation applied to food quality in a public health policy. Expert Syst. Appl. 2013, 40, 4539–4546. [Google Scholar] [CrossRef]
  213. Nargesi, M.H.; Kheiralipour, K.; Jayas, D.S. Classification of different wheat flour types using hyperspectral imaging and machine learning techniques. Infrared Phys. Technol. 2024, 142, 105520. [Google Scholar] [CrossRef]
  214. Shen, Y.; Yin, Y.; Zhao, C.; Li, B.; Wang, J.; Li, G.; Zhang, Z. Image Recognition Method Based on an Improved Convolutional Neural Network to Detect Impurities in Wheat. IEEE Access 2019, 7, 162206–162218. [Google Scholar] [CrossRef]
  215. Shen, Y.; Yin, Y.; Li, B.; Zhao, C.; Li, G. Detection of impurities in wheat using terahertz spectral imaging and convolutional neural networks. Comput. Electron. Agric. 2021, 181, 105931. [Google Scholar] [CrossRef]
  216. Bartley, P.G.; Nelson, S.O.; McClendon, R.W.; Trabelsi, S. Determining Moisture Content of Wheat with an Artificial Neural Network from Microwave Transmission Measurements. IEEE Trans. Instrum. Meas. 1998, 47, 123–127. [Google Scholar] [CrossRef]
  217. Shafaei, S.; Nourmohamadi-Moghadami, A.; Kamgar, S. Development of artificial intelligence based systems for prediction of hydration characteristics of wheat. Comput. Electron. Agric. 2016, 128, 34–45. [Google Scholar] [CrossRef]
  218. Singh, H.; Roy, A.; Setia, R.K.; Pateriya, B. Estimation of nitrogen content in wheat from proximal hyperspectral data using machine learning and explainable artificial intelligence (XAI) approach. Model. Earth Syst. Environ. 2022, 8, 2505–2511. [Google Scholar] [CrossRef]
  219. Wu, Q.; Zhang, Y.; Zhao, Z.; Xie, M.; Hou, D. Estimation of Relative Chlorophyll Content in Spring Wheat Based on Multi-Temporal UAV Remote Sensing. Agronomy 2023, 13, 211. [Google Scholar] [CrossRef]
  220. Yang, J.; Li, J.; Hu, J.; Yang, W.; Zhang, X.; Xu, J.; Zhang, Y.; Luo, X.; Ting, K.; Lin, T.; et al. An Interpretable Deep Learning Approach for Calibration Transfer Among Multiple Near-Infrared Instruments. Comput. Electron. Agric. 2022, 192, 106584. [Google Scholar] [CrossRef]
  221. Akkem, Y.; Biswas, S.K.; Varanasi, A. Streamlit-based enhancing crop recommendation systems with advanced explainable artificial intelligence for smart farming. Neural Comput. Appl. 2024, 36, 20011–20025. [Google Scholar] [CrossRef]
  222. Bai, F.J.J.S.; Shanmugaiah, K.; Sonthalia, A.; Devarajan, Y.; Varuvel, E.G. Application of machine learning algorithms for predicting the engine characteristics of a wheat germ oil–Hydrogen fueled dual fuel engine. Int. J. Hydrogen Energy 2023, 48, 23308–23322. [Google Scholar] [CrossRef]
  223. Ghasemi-Mobtaker, H.; Kaab, A.; Rafiee, S.; Nabavi-Pelesaraei, A. A comparative modeling techniques and life cycle assessment for prediction of output energy, economic profit, and global warming potential for wheat farms. Energy Rep. 2022, 8, 4922–4934. [Google Scholar] [CrossRef]
  224. Núñez, E.G.F.; Barchi, A.C.; Ito, S.; Escaramboni, B.; Herculano, R.D.; Mayer, C.R.M.; de Oliva Neto, P. Artificial intelligence approach for high-level production of amylase using Rhizopus microsporus var. oligosporus and different agro-industrial wastes. J. Chem. Technol. Biotechnol. 2017, 92, 684–692. [Google Scholar] [CrossRef]
  225. Kamilaris, A.; Prenafeta-Boldú, F.X. Deep Learning in Agriculture: A Survey. Comput. Electron. Agric. 2018, 147, 70–90. [Google Scholar] [CrossRef]
  226. Attri, I.; Awasthi, L.K.; Sharma, T.P.; Rathee, P. A review of deep learning techniques used in agriculture. Ecol. Inform. 2023, 77, 102217. [Google Scholar] [CrossRef]
  227. Ahmad, A.; Saraswat, D.; El Gamal, A. A survey on using deep learning techniques for plant disease diagnosis and recommendations for development of appropriate tools. Smart Agric. Technol. 2023, 3, 100083. [Google Scholar] [CrossRef]
  228. van Klompenburg, T.; Kassahun, A.; Catal, C. Crop Yield Prediction Using Machine Learning: A Systematic Literature Review. Comput. Electron. Agric. 2020, 177, 105709. [Google Scholar] [CrossRef]
  229. Bayer, P.E.; Edwards, D. Machine learning in agriculture: From silos to marketplaces. Plant Biotechnol. J. 2021, 19, 648–650. [Google Scholar] [CrossRef]
  230. Barbedo, J.G.A. Impact of dataset size and variety on the effectiveness of deep learning and transfer learning for plant disease classification. Comput. Electron. Agric. 2018, 153, 46–53. [Google Scholar] [CrossRef]
  231. Noon, S.K.; Amjad, M.; Qureshi, M.A.; Mannan, A. Use of deep learning techniques for identification of plant leaf stresses: A review. Inf. Process. Agric. 2020, 7, 341–365. [Google Scholar] [CrossRef]
  232. Jung, J.; Maeda, M.; Chang, A.; Bhandari, M.; Ashapure, A.; Landivar-Bowles, J. The potential of remote sensing and artificial intelligence as tools to improve the resilience of agriculture production systems. Remote Sens. 2021, 13, 3230. [Google Scholar] [CrossRef]
  233. Arya, S.; Sandhu, K.S.; Singh, J.; Kumar, S. Deep learning: As the new frontier in high-throughput plant phenotyping. Euphytica 2022, 218, 47. [Google Scholar] [CrossRef]
  234. Bouguettaya, A.; Zarzour, H.; Kechida, A.; Taberkit, A.M. Deep learning techniques to classify agricultural crops through UAV imagery: A review. Neural Comput. Appl. 2022, 34, 9511–9536. [Google Scholar] [CrossRef]
  235. Bouguettaya, A.; Zarzour, H.; Kechida, A.; Taberkit, A.M. A survey on deep learning-based identification of plant and crop diseases from UAV-based aerial images. Clust. Comput. 2023, 26, 1297–1317. [Google Scholar] [CrossRef]
  236. Harfouche, A.L.; Nakhle, F.; Harfouche, A.H.; Sardella, O.G.; Dart, E.; Jacobson, D. A primer on artificial intelligence in plant digital phenomics: Embarking on the data to insights journey. Trends Plant Sci. 2023, 28, 154–184. [Google Scholar] [CrossRef] [PubMed]
  237. Li, W.; Zheng, T.; Yang, Z.; Li, M.; Sun, C.; Yang, X. Classification and detection of insects from field images using deep learning for smart pest management: A systematic review. Ecol. Inform. 2021, 66, 101460. [Google Scholar] [CrossRef]
  238. Wang, D.; Cao, W.; Zhang, F.; Li, Z.; Xu, S.; Wu, X. A Review of Deep Learning in Multiscale Agricultural Sensing. Remote Sens. 2022, 14, 559. [Google Scholar] [CrossRef]
  239. Zhang, L.; Zhang, L. Artificial Intelligence for Remote Sensing Data Analysis: A Review of Challenges and Opportunities. ISPRS J. Photogramm. Remote Sens. 2022, 184, 183–204. [Google Scholar] [CrossRef]
  240. Li, L.; Zhang, S.; Wang, B. Plant Disease Detection and Classification by Deep Learning—A Review. IEEE Access 2021, 9, 56683–56698. [Google Scholar] [CrossRef]
  241. Alemohammad, H.; Booth, K. LandCoverNet: A global benchmark land cover classification training dataset. arXiv 2020, arXiv:2012.03111. [Google Scholar] [CrossRef]
  242. Hughes, D.P.; Legg, J.; Alemohammad, H. PlantVillage Nuru: Pest and disease monitoring using AI. 2019. Available online: https://bigdata.cgiar.org/digital-intervention/plantvillage-nuru-pest-and-disease-monitoring-using-ai/ (accessed on 28 April 2025).
  243. Uzhinskiy, A. Evaluation of Different Few-Shot Learning Methods in the Plant Disease Classification Domain. Biology 2025, 14, 99. [Google Scholar] [CrossRef] [PubMed]
  244. Ghanbarzadeh, A.; Soleimani, H. Self-supervised in-domain representation learning for remote sensing image scene classification. Heliyon 2024, 10, e37962. [Google Scholar] [CrossRef] [PubMed]
  245. Mushtaq, M.A.; Ahmed, H.G.M.D.; Zeng, Y. Applications of Artificial Intelligence in Wheat Breeding for Sustainable Food Security. Sustainability 2024, 16, 5688. [Google Scholar] [CrossRef]
  246. Sheikh, M.; Iqra, F.; Ambreen, H.; Pravin, K.A.; Ikra, M.; Chung, Y.S. Integrating artificial intelligence and high-throughput phenotyping for crop improvement. J. Integr. Agric. 2024, 23, 1787–1802. [Google Scholar] [CrossRef]
  247. Kuswidiyanto, L.W.; Noh, H.H.; Han, X. Plant Disease Diagnosis Using Deep Learning Based on Aerial Hyperspectral Images: A Review. Remote Sens. 2022, 14, 5073. [Google Scholar] [CrossRef]
  248. Zhang, X.; Yang, J.; Lin, T.; Ying, Y. Food and Agro-Product Quality Evaluation Based on Spectroscopy and Deep Learning: A Review. Trends Food Sci. Technol. 2021, 112, 431–441. [Google Scholar] [CrossRef]
  249. Farooq, M.A.; Gao, S.; Hassan, M.A.; Huang, Z.; Rasheed, A.; Hearne, S.; Prasanna, B.; Li, X.; Li, H. Artificial intelligence in plant breeding. Trends Genet. 2024, 40, 891–905. [Google Scholar] [CrossRef]
  250. Ferchichi, A.; Abbes, A.B.; Barra, V.; Farah, I.R. Forecasting vegetation indices from spatio-temporal remotely sensed data using deep learning-based approaches: A systematic literature review. Environ. Res. 2022, 214, 113845. [Google Scholar] [CrossRef]
  251. Muruganantham, P.; Wibowo, S.; Grandhi, S.; Samrat, N.H.; Islam, N. A Systematic Literature Review on Crop Yield Prediction with Deep Learning and Remote Sensing. Remote Sens. 2022, 14, 1990. [Google Scholar] [CrossRef]
  252. Darwin, B.; Dharmaraj, P.; Prince, S.; Popescu, D.E.; Hemanth, D.J. Recognition of Bloom/Yield in Crop Images Using Deep Learning Models for Smart Agriculture: A Review. Agronomy 2021, 11, 646. [Google Scholar] [CrossRef]
  253. Campos-Taberner, M.; García-Haro, F.J.; Martínez, B.; Izquierdo-Verdiguier, E.; Atzberger, C.; Camps-Valls, G.; Gilabert, M.A. Understanding deep learning in land use classification based on Sentinel-2 time series. Sci. Rep. 2020, 10, 17188. [Google Scholar] [CrossRef] [PubMed]
  254. Chakraborty, S.K.; Chandel, N.S.; Jat, D.; Tiwari, M.K.; Rajwade, Y.A.; Subeesh, A. Deep learning approaches and interventions for futuristic engineering in agriculture. Neural Comput. Appl. 2022, 34, 20539–20573. [Google Scholar] [CrossRef]
  255. An, D.; Zhang, L.; Liu, Z.; Liu, J.; Wei, Y. Advances in infrared spectroscopy and hyperspectral imaging combined with artificial intelligence for the detection of cereals quality. Crit. Rev. Food Sci. Nutr. 2023, 63, 9766–9796. [Google Scholar] [CrossRef]
  256. Aslan, M.F.; Sabanci, K.; Aslan, B. Artificial Intelligence Techniques in Crop Yield Estimation Based on Sentinel-2 Data: A Comprehensive Survey. Sustainability 2024, 16, 8277. [Google Scholar] [CrossRef]
  257. Mamat, N.; Othman, M.F.; Abdoulghafor, R.; Belhaouari, S.B.; Mamat, N.; Mohd Hussein, S.F. Advanced Technology in Agriculture Industry by Implementing Image Annotation Technique and Deep Learning Approach: A Review. Agriculture 2022, 12, 1033. [Google Scholar] [CrossRef]
  258. Ilyas, Q.M.; Ahmad, M.; Mehmood, A. Automated Estimation of Crop Yield Using Artificial Intelligence and Remote Sensing Technologies. Bioengineering 2023, 10, 125. [Google Scholar] [CrossRef]
  259. Shahi, T.B.; Xu, C.Y.; Neupane, A.; Guo, W. Recent Advances in Crop Disease Detection Using UAV and Deep Learning Techniques. Remote Sens. 2023, 15, 2450. [Google Scholar] [CrossRef]
  260. Kaur, J.; Fard, S.M.H.; Amiri-Zarandi, M.; Dara, R. Protecting farmers’ data privacy and confidentiality: Recommendations and considerations. Front. Sustain. Food Syst. 2022, 6, 903230. [Google Scholar] [CrossRef]
  261. Kabala, D.M.; Hafiane, A.; Bobelin, L.; Canals, R. Image-based crop disease detection with federated learning. IEEE Access 2021, 9, 138074–138105. [Google Scholar] [CrossRef]
  262. Bera, S.; Dey, T.; Mukherjee, A.; De, D. FLAG: Federated Learning for Sustainable Irrigation in Agriculture 5.0. IEEE Access 2022, 10, 66693–66715. [Google Scholar] [CrossRef]
  263. Jha, K.; Doshi, A.; Patil, P.S.K.; Kumar, M. A Comprehensive Review on Automation in Agriculture Using Artificial Intelligence. Artif. Intell. Agric. 2019, 2, 48–55. [Google Scholar] [CrossRef]
  264. Sarkar, C.; Gupta, D.; Gupta, U.; Hazarika, B.B. Leaf disease detection using machine learning and deep learning: Review and challenges. Appl. Soft Comput. 2023, 145, 110534. [Google Scholar] [CrossRef]
  265. Siregar, R.R.A.; Seminar, K.B.; Wahjuni, S.; Santosa, E. Vertical Farming Perspectives in Support of Precision Agriculture Using Artificial Intelligence: A Review. Computers 2022, 11, 135. [Google Scholar] [CrossRef]
  266. Pathan, M.; Patel, N.; Yagnik, H.; Shah, M. Artificial Cognition for Applications in Smart Agriculture: A Comprehensive Review. Artif. Intell. Agric. 2020, 4, 81–95. [Google Scholar] [CrossRef]
Figure 1. Proposed framework for translating AI research into practical applications in wheat production.
Table 1. References related to yield prediction.
Reference | Challenges | Limitations | Proposed Techniques | Accuracy
Ahmed et al. [2] | Data limitations, complexity of feature selection, computational complexity, environmental variability, model generalization | Dependence on satellite-derived data, regional constraints, potential overfitting, computational cost | GWO-CEEMDAN-KRR | 0.998
Ahmed and Hussain [24] | Limited availability of high-quality data, lack of soil data, variability in environmental conditions, computational complexity, generalization of the model | Dependence on limited data sources, exclusion of critical variables, lack of standardized data preprocessing methods, challenges in handling large-scale agricultural data | 12 models | 0.99
Bali and Singla [25] | Complexity of climate factors, challenging data preprocessing, computational complexity, limited availability of methods for comparison | Limited geographic scope, dependence on historical data, potential overfitting, need for real-time data integration | RNN-LSTM | N/A
Bhojani and Bhatt [26] | Problems selecting the best activation function, handling climate variability, optimizing the neural networks, and preprocessing data | Limited geographic scope, lack of comparison with deep learning models, manual selection of random weights and bias values, effect of soil and fertilization not considered | MLP | 0.90
Bian et al. [27] | Variability in growth stages, need for extensive preprocessing, need for careful tuning of hyperparameters, validation across different scales | Limited study region, lack of climate and soil data, single UAV sensor type, destructive sampling for validation | GPR, SVR, RFR, DT, Lasso, GBRT | 0.88
Cao et al. [28] | Quantifying the contribution of each data source, balancing spatial vs. temporal variability, computational complexity of ML models, data processing and normalization | Limited generalization beyond China, exclusion of certain biophysical factors, dependence on historical data trends, need for more frequent updates | RR, RF, LightGBM | 0.75
Cao et al. [29] | Need for extensive preprocessing, high spatiotemporal variability, computational complexity, handling different spatial scales | Limited generalization beyond China, deep learning requires more training data, high computational cost for DL models, yield prediction at the field scale | RF, DNN, 1D-DNN, LSTM | 0.66–0.89
Cao et al. [30] | High similarity between different wheat varieties, limited accuracy of single CNN models, computational complexity of DL models, need for a large dataset | Model limited to durum wheat grains, reliance on image features only, potential overfitting in deep learning models, lack of real-time testing | CNN, SVM, LDA, kNN | 0.92
Cheng et al. [31] | Complexity of wheat growth dynamics, trade-offs between spatial and spectral resolution, data preprocessing and feature selection, high computational demand | Limited geographic scope and generalizability, dependence on satellite data quality, lack of real-time environmental factors, computational complexity of DL models | LSTM, RF, GBDT, SVR | 0.96
Fei et al. [32] | Variability in wheat growth conditions, high-dimensional UAV data processing, machine learning model selection and tuning, limited availability of high-quality ground-truth data | Limited geographic scope, lack of external validation, focus on UAV-based sensors only, potential overfitting of ML models | SVM, DNN, RR, RF, ensemble | 0.69
Haider et al. [33] | Limited data availability and quality, difficulties choosing the best prediction model, high computational complexity, influence of external factors | Limited external factors considered, dependence on data preprocessing, scalability issues | ARIMA, RNN, LSTM | 0.81
Huang et al. [34] | Limitation in quantifying model uncertainty, limited remote sensing data availability, computational complexity of Bayesian data assimilation | Limited generalization of the proposed model, high computational complexity, dependency on high-quality, heavily preprocessed remote sensing data | EnKF | 0.57
Kheir et al. [35] | High degree of data complexity and variability, crop model limitations, feature selection was challenging, need for significant computational resources for training | Crop model training on limited data, overestimation in earlier decades, lack of real-time deployment, model not validated in different regions | RFR, ANN, SVR, kNN | 1.00
Khoshnevisan et al. [36] | Complexity of energy consumption data, highly complex selection of the best AI model configuration, complex data collection and preprocessing, high computational cost | Limited scope in geographical region, poor computational scalability, dependence on historical data, limited comparison with other ML models | ANFIS, ANN | 0.97
Li et al. [37] | Complex backgrounds in field images, limited data for training, network depth and overfitting issues | Dependency on RGB images, lack of validation across wheat varieties, LAI underestimation for high-density wheat canopies | CNN | 0.82
Li et al. [38] | Complex interactions between variables, data limitations, variability in vegetation indices, need for large datasets and computational resources | Limited generalization across different wheat varieties, lack of real-time yield monitoring, model performance varies by region, influence of management practices not considered | RF, SVM | 0.74
Liu et al. [39] | Limitations of vegetation indices, data variability, need for extensive hyperparameter tuning, need for data cleaning and feature scaling, extreme weather events | Incomplete crop management data, small training dataset, limited generalization across regions, real-world deployment challenges | SVR, LSTM, XGBoost, RF, RR, Lasso | 0.85–0.87
Liu et al. [40] | Variability in remote sensing data, lack of large-scale labeled datasets, high computational complexity, model generalization issues | Dependence on satellite data availability, limited temporal coverage, sensitivity to environmental factors, high computational cost | LSTM, CNN, RF, SVR, RR | 0.88
Mostafaeipour et al. [20] | Limited data availability and quality, high environmental variability, limited model interpretability | Potential generalization issues, high computational power requirements, important factors may not be properly represented | RF, SVM, ANN | 0.96
Nevavuori et al. [41] | Variability in yield data, high computational complexity of CNNs, unexpected results from the RGB vs. NDVI data comparison | Limited geographic scope, dataset size and diversity, lack of multi-year data | CNN | 0.91
Paudel et al. [21] | Limited interpretability of DL models, lack of standardized feature engineering, impact of data availability and quality, challenges in capturing extreme events | Inability to capture extreme weather effects, performance depends on data size, limited integration with domain knowledge, high computational costs | LSTM, GBDT, 1D-CNN | N/A
Romero et al. [42] | Complexity of yield determination, need for extensive data cleaning and preprocessing, limited generalization to new environments | Limited data scope, sensitivity of yield components to environmental factors, limited model interpretability, absence of external validation | Rule classifier, kNN, DT | 0.57–0.93
Ruan et al. [43] | Need for careful preprocessing and feature selection, complex feature selection and aggregation, high computational complexity of ensemble learning models | Dependence on historical weather data, limited generalizability, overestimation of low yields, some relevant agronomic factors are not considered | 11 ML models | 0.83–0.85
Salehnia et al. [44] | High variability in climate data, low effectiveness of some attributes, high computational complexity, need for substantial data preprocessing and detrending | Limited spatial scope, use of limited climate variables, lack of external validation, dependence on historical data | GA, ACO, K-Means | 0.37–0.54
Schreiber et al. [45] | High variability in crop growth, high temporal and spatial variability, temporal color pattern changes, ensuring that the models could generalize across different conditions | Lower accuracy in later growth stages, use of only RGB images, limited scalability to very large farms, limited dataset | ANN, CNN | 0.90
Sharma et al. [46] | Varying lighting conditions, complex crop variability, high computational demand, complex data preprocessing | Limited generalization, need for considerable computational resources, set of employed features may not be robust for all conditions, testing performed on a limited dataset | ANN, GA | 0.98
Shen et al. [47] | Complexity of crop yield prediction, complexity of combining multispectral and thermal data, high computational complexity, insufficient data for proper validation | Lack of data obtained under uncontrolled environmental conditions, limited sensor diversity, potential overfitting | LSTM, LSTM-RF | 0.78
Srivastava et al. [48] | Difficulty in acquiring comprehensive datasets, data inconsistencies across spatial and temporal dimensions, difficulty interpreting models | Lack of model interpretability, data limited to specific geographical and climatic conditions | kNN, RF, XGBoost, Lasso, RR, RT, SVR, DNN, CNN | 0.81
Sun et al. [49] | High data complexity, difficulty integrating multispectral and LiDAR data, complex feature extraction, limited training data, high computational requirements | Limited model generalization, data encompasses a single growth cycle, manual data collection introduces subjectivity, lack of early-stage predictions, high computational cost | Several DL models | 0.83–0.85
Tanabe et al. [50] | Challenging determination of the optimal wheat growth stage, high data heterogeneity, limited training data, need for significant computational power | Limited model generalization, limited to single-year predictions, no external validation, no integration of weather data, limited impact of multi-temporal data | CNN, linear regression | 0.61
Tian et al. [51] | Nonlinearity in crop growth modeling, variability in weather and soil conditions, limited spatial and temporal data, high computational complexity | Limited model generalization, absence of weather and soil data, assumption that growth stages remain the same every year, high computational requirements | BPNN, IPSO-BP | 0.34
Tian et al. [52] | Spectral similarity between garlic and winter wheat, cloud cover in optical imagery, integration of optical and radar data, balancing accuracy and computational efficiency | Dependence on satellite data availability, lack of historical data analysis, no inclusion of climate and soil data, potential confusion with other winter crops | RF | 0.97
Tripathi et al. [53] | Complexity in soil health estimation, variability in satellite data, limited historical validation, high computational complexity, impact of soil parameters on yield | Limited generalization, lack of validation for previous years, dependence on satellite data, no explicit use of weather data, yield underestimation for high-productivity fields | DL-MLP, RF, DT, SVR, kNN | 0.68
Wang et al. [54] | Challenging combination of multi-source data, high variability in wheat yield, high computational complexity, scaling the model to large regions | No consideration of management practices, coarse spatial resolution for some inputs, limited generalization, overestimation/underestimation in certain areas | OLS, Lasso, SVM, RF, AdaBoost, DNN | 0.86
Wang et al. [55] | Data integration complexity, yield variability across regions, computational demands of deep learning, need for yield detrending, uncertainty quantification was challenging | Limited inclusion of socioeconomic factors, yield detrending challenges, no real-time yield prediction, data limitations in rainfed regions, fixed spatial scale limits applicability | LSTM-CNN, RF, SVM, Lasso | 0.77
Wang et al. [56] | Data quality and availability, limited model interpretability, high computational complexity, high climate variability | Limited generalizability, time-consuming hyperparameter tuning, data fusion limitations, high cost of time-series data acquisition | Attention Mechanism, CNN, LSTM, RNN | 0.83
Wang et al. [57] | Time-series data complexity, high computational requirements, inter-annual yield variability, feature selection and model tuning, limited high-resolution data | Limited generalization to other crops and regions, yield underestimation in high-yielding areas, no integration of weather and soil data, temporal resolution constraints | GRU, CNN-GRU | 0.64
Wolanin et al. [58] | Complex interactions in yield prediction, lack of interpretability, limited high-resolution data, variability in crop responses across different years, high computational demand | Limited generalization beyond one region, dependence on available satellite and meteorological data, poor performance in extreme weather years, no real-time forecasting | CNN, RF, RR | 0.83–0.87
Wu et al. [59] | Impact of soil background, feature selection and data fusion, need for extensive preprocessing, complexity of data fusion, high computational demands, limited generalization | Limited temporal scope, dependency on high-resolution UAV data, model generalization, high computational costs, lack of real-time application | SVR, RFR, MLR | 0.81
Xie and Huang [60] | Data integration complexity, time-series data processing, high computational demand, challenging model generalization, difficult validation and accuracy assessment | Limited spatial resolution, single study region, use of pre-simulated data, no real-time prediction, only LAI-based estimation | LSTM, 1D-CNN, RF | 0.77
Yang et al. [61] | High condition variability, limited ground-truth data, complexity of data processing, integration of empirical and mechanistic models, errors in parameter retrieval | Limited geographic scope, not tested for large-scale applications, no comparison with other models, uncertainty from crop growth model simulations | CW-RF, empirical | 0.91
Yang et al. [16] | Variability in environmental conditions, integration of multiple sensors, selection of optimal ML model, computational cost of ensemble learning | Limited study area, dependence on UAV data, lack of deep learning comparisons, no real-time testing | Ensemble, XGBoost, RF, PLS, RR, kNN | 0.73
Zhang et al. [62] | Data collection complexity, high-dimensional data processing, difficult model selection, generalization issues | Limited generalization due to single experimental field, relatively small dataset, the impact of some environmental factors was not explicitly considered | PLSR, SVR, XGBoost | 0.89
Zhou et al. [63] | Models tended to overfit, alternative models did not succeed, uneven fertilizer spreading introduced noise, accuracy of UAV-derived data was influenced by spatial resolution | Model not precise enough to detect small treatment effects, limited generalizability due to nonlinearities | LR, SVR, RF, ANN | 0.73
Zhou et al. [64] | Limited scalability due to complex variable interactions, large uncertainties for large-scale yield prediction, problems with collinearity and assumptions of stationarity | Limited model interpretability, some products had low resolution, model reliability needs improvement, more data are required for accuracy improvement | RF, SVM, Lasso | 0.67–0.78
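Ridge regression (RR) recurs throughout the yield-prediction studies in Table 1, typically mapping vegetation-index features to yield while guarding against overfitting on small datasets. As an illustrative sketch only, the single-feature case reduces to a closed form; the NDVI values and yields below are hypothetical, not drawn from any cited study:

```python
def ridge_fit_1d(x, y, lam):
    """Closed-form ridge regression for one feature.

    Centers x and y, shrinks the slope by the penalty lam,
    and recovers the intercept from the sample means.
    """
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    sxx = sum((xi - mx) ** 2 for xi in x)
    w = sxy / (sxx + lam)   # penalized (shrunken) slope
    b = my - w * mx         # intercept from the means
    return w, b

# Hypothetical season-mean NDVI vs. yield (t/ha) for six fields.
ndvi = [0.42, 0.55, 0.61, 0.68, 0.73, 0.80]
yld = [2.1, 3.0, 3.4, 3.9, 4.2, 4.7]

w0, b0 = ridge_fit_1d(ndvi, yld, lam=0.0)   # lam = 0: ordinary least squares
w1, b1 = ridge_fit_1d(ndvi, yld, lam=0.05)  # lam > 0: slope shrinks toward zero
```

With lam = 0 this is ordinary least squares; increasing lam trades a little bias for lower variance, which is the property that makes RR attractive for the small, noisy field datasets reported in Table 1.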
Table 2. References related to disease management.
Reference | Challenges | Limitations | Proposed Techniques | Accuracy
Aboneh et al. [1] | High computational complexity, lack of structured datasets, high variability of images, limited number of training samples, limited awareness and technological adoption | Dependence on image quality, limited datasets, lack of real-time implementation, limited model comparisons, poor generalization to other crops | CNN | 0.96
Akbar et al. [67] | Difficulty gathering a dataset of sufficient size and quality, training required extensive computational resources, risk of overfitting, difficulty making the system real-time | Limited dataset, focus on only two diseases, potentially poor generalizability, IoT implementation is complex | CNN | 0.97
Azimi et al. [68] | Extensive manual data collection, high data variability, highly complex feature selection, high computational complexity | Limited dataset variability, subjective manual feature extraction, lack of real-time detection, results obtained under controlled greenhouse conditions, DL models were not explored | SVM, DT, kNN, NB | 1.00
Bao et al. [69] | Complex backgrounds in field images, high computational costs, limited availability of disease images, resolution loss during down-sampling | Limited dataset may lead to poor generalization, early disease detection difficulty, reliance on a single type of sensor, real-time performance needs improvement | CNN | 0.94
Bao et al. [70] | Complex backgrounds in field images, limited image dataset, difficulty choosing features, optimization of the metric learning model | Limited data collection area, difficulty in identifying mild disease cases, dependence on a single type of sensor, high computational costs | E-MMC, SVM, BPNN | 0.94
Deng et al. [71] | Variability in disease progression, varying spatial and spectral resolutions, time-consuming manual annotation, challenging early disease detection | Lack of temporal generalization, challenges in very early disease detection, need for validation in other regions, limited comparison with other methods | RustQNet | 0.80
Fahim-Ul-Islam et al. [72] | Data privacy and security, computational constraints, disease variability and image quality, difficulty ensuring model generalization | Limited dataset diversity, high computational cost, dependence on pretrained models | Transformer Federated Learning | 0.98–0.99
Fang et al. [73] | Symptom diversity, high computational costs, high levels of data variability, optimization for mobile deployment is difficult | Limited dataset size and diversity, lack of hyperspectral and multispectral data, challenges with disease co-occurrence, limited field deployment testing | CNN | 0.99
Gao et al. [74] | Complexity of wheat spike segmentation, variability in disease symptoms, labor-intensive data acquisition and annotation, high computational complexity | Limited generalization across varieties, lack of hyperspectral data integration, challenges with early-stage and late-stage infections | BlendMask (DL) | 0.78–0.85
Genaev et al. [75] | Difficulties building the dataset, complexity of wheat disease symptoms, challenges balancing accuracy vs. model efficiency, high computational demand | Limited dataset diversity, absence of multispectral data, difficulty in distinguishing co-infections, need for more field validation | CNN | 0.94
Gonçalves et al. [76] | High variability in image conditions, time-consuming annotation, difficulties with generalization, high computational costs | Limited dataset size, tendency to overestimate severity, need for extensive computing resources, low robustness to noise and poor annotations | CNN | 0.95–0.98
Goyal et al. [77] | Complexity of wheat disease symptoms, limited availability of labeled wheat disease images, significant class imbalance, high computational complexity | Limited dataset diversity, high dependency on image quality, high computational demand | CNN | 0.98
Haider et al. [78] | Dataset was small and of poor quality, training suffered from high loss and overfitting, symptom similarity between classes, high computational requirements | Potential generalization issues, limited disease coverage, poor model performance on rare diseases, challenging real-time deployment | CNN | 0.97
Hayit et al. [79] | Variability in disease symptoms, labor-intensive annotation, model training was complex, overfitting difficult to prevent, high computational costs | Potential generalization issues, class imbalance had a negative impact, computational requirements hinder real-time deployment | CNN | 0.91
Jiang et al. [80] | Limited dataset required extensive augmentation, symptom similarities between diseases, high computational requirements | Potential generalization issues, dependence on transfer learning, computational requirements hinder real-time deployment | CNN | 0.97–0.99
Jiang et al. [81] | High image variability, small dataset and disease imbalance, high symptom similarity, computational constraints for deployment | Potential generalization issues, dependency on one type of sensor, small dataset increases overfitting risk, real-time application is challenging | CNN | 0.90–0.95
Jin et al. [82] | High dimensionality and redundancy in hyperspectral data, variability due to environmental factors, noisy and complex field conditions, large class imbalance, overfitting risk | Limited to pixel-level classification, high misclassification rates, high sensitivity to noise, manual ROI labeling required | CNN, SVM | 0.74
Khan et al. [83] | Lack of diverse datasets, challenges in field image acquisition, challenging disease segmentation, challenging selection of optimal feature extractors and classifiers | Limited dataset, high overfitting risk, real-world deployment challenges, high sensitivity to environmental factors | CNN | 0.97
Lin et al. [84] | High similarity between diseases, visual interferences in field conditions, high computational complexity, lack of large-scale datasets | Limited geographic coverage, deploying the model on edge devices is a challenge, model generalization needs further testing, limited real-world testing | CNN | 0.90
Liu et al. [85] | Complexity of symptoms, canopy-scale detection difficulty, inconsistent feature response, limited sensitivity in early stages | Inability to detect early disease stage, data encompasses a single year and single cultivar | MLR | 0.90
Lu et al. [86] | Real-world image complexity, dataset representativity limitations, computationally expensive training, similarity between diseases | Potential generalization challenges, difficulty in detecting small or overlapping disease areas, model deployment on edge devices still challenging, absence of multi-crop training | CNN | 0.98
Dainelli et al. [87] | Lack of high-quality in-field image datasets, challenging image acquisition and annotation, difficulties with poor lighting or low connectivity, social and adoption barriers | Limited dataset coverage, incomplete threat representation, poor performance in real-world conditions, need for more field-condition data | CNN | 0.77
Maqsood et al. [88] | Low-resolution images, noise and variability in field images, high computational complexity, challenges balancing model accuracy across disease classes | Limited dataset size, untested generalization to other wheat varieties, challenging real-time implementation | CNN | 0.75–0.83
Mi et al. [89] | Slight differences between severity levels, challenges in field image collection, high computational costs, difficulties generalizing to different wheat varieties | Lack of automated leaf extraction, focus on only one disease, real-time deployment is challenging, untested model generalization | CNN | 0.98
Nigam et al. [90] | Lack of large-scale public datasets, high similarity between diseases, high computational costs | Limited dataset size and scope, real-time deployment depends on further optimizations, model developed under controlled conditions | CNN | 0.99
Pan et al. [91] | Poor performance by machine learning methods, manual image labeling was time-consuming and error-prone, ensuring generalization was challenging | Limited generalization scope, dependence on UAV and high-resolution data, weakly supervised learning decreases accuracy | PSPNet, U-Net, FCN, BPNN, SVM, RF | 0.96
Pan et al. [92] | Difficulty in differentiating diseases, dataset limitations and class imbalance, high computational complexity | Limited dataset size and geographic scope, high computational cost, real-world validation needed | Ensemble Learning | 0.92
Qiu et al. [93] | Variability in wheat spikes and disease symptoms, laborious data collection and annotation, challenging balance between model accuracy and computational efficiency | Limited dataset size, challenges with partial or occluded spikes, influence of wheat awns on detection, lack of testing with field conditions | R-CNN | 0.80
Rangarajan et al. [94] | High data dimensionality, need for standardizing image acquisition conditions, high computational costs | Limited dataset scope, challenges with real-time implementation, spectral data compression affects accuracy, lack of external validation | CNN | 1.00
Schirrmann et al. [95] | Highly heterogeneous background, image quality was affected by environmental factors, difficulties identifying early symptoms | Poor accuracy in early stages of the disease, no tests focused on model transferability to different fields or crops, image annotation was prone to error | CNN | 0.77–0.90
Shafi et al. [96] | Manual data collection and labeling, high variability in disease symptoms, problems with image quality, small dataset limited model performance | Small dataset limits the model's generalizability, high computational demands limited the experiments, limited classification categories, high dependency on feature engineering | DT, RF, XGBoost, LightGBM, CatBoost | 0.90–0.92
Su et al. [97] | Complexity of wheat spike segmentation, variability in infection patterns, labor-intensive manual data annotation, high computational costs | High dependence on data annotation, limited generalization to different environments, limited model interpretability, limited application in field conditions | Dual Mask-RCNN | 0.77
Su et al. [98] | Symptom variations with environmental conditions, limitations of RGB imaging, labor-intensive labeling, significant computational demands, high level of false positives | Limited generalization, dependence on specific spectral bands, potential overfitting, high computational cost | U-Net, RF | 0.90
Weng et al. [99] | Low DON concentrations are hard to detect, interference from wheat components, complex sample preparation, signal variability, need for large datasets and fine-tuning | Limited generalization across wheat varieties, no comparison with traditional methods, low stability due to environmental factors, possibility of overestimating DON levels | |
Weng et al. [100] | Challenging band selection, high data variability, high feature extraction complexity, high computational complexity | Limited generalization, overlap of wheat kernels in practical applications, hyperspectral imaging equipment cost | CNN, kNN, RF | 0.98
Xiao et al. [101] | Interference from environmental factors, spectral feature selection complexity, need for high-precision UAV imaging, need for generalization across wheat varieties | Limited temporal coverage, data collected from a single region, dependency on high-cost hyperspectral cameras, no real-time disease monitoring | Logistic Regression Model | 0.90
Xu et al. [102] | Variability in wheat leaf appearance, fine-grained disease differences, high computational demand, datasets lack diversity, need for high-quality image acquisition | Limited to five disease classes, suboptimal performance in diverse environments, accuracy decreases with multiple simultaneous diseases | CNN | 0.98–1.00
Zhang et al. [103] | Complex field environment, difficult wheat ear segmentation, need for parameter tuning in neural networks, labor-intensive annotation | Dependence on RGB images with limited spectral information, high computational complexity | FCN, PCNN, IABC | 0.98
Zhang et al. [104] | Variability in spectral profiles, high spatial resolution complexity, high computational complexity, limited training data, laborious comparison with traditional methods | Uncertain generalization capabilities, dependence on hyperspectral data, trade-off between accuracy and processing time, poor late-stage detection performance | CNN, RF | 0.85
Zhang et al. [105] | High dimensionality of hyperspectral data, feature selection complexity, variability in disease symptoms, limited data for model training | Untested generalization across different environments, dependence on expensive equipment, high computational cost, potential overfitting | PLSR, SVR, RF, CNN | 0.97
Zhang et al. [106] | Complexity of wheat ear segmentation, occlusion of wheat ears, variability in disease symptoms, laborious selection of relevant features, limited availability of annotated data | High dependence on digital imaging conditions, single experimental site, limited comparison with other models, no real-time field deployment | K-means + RF | 0.86
Zhang et al. [107] | Irregular boundaries make segmentation difficult, limited dataset size, high computational complexity | Small training dataset, lack of transformer-based models | UNet | 0.97
Zhang et al. [108] | Difficulty distinguishing overlapping wheat ears, high computational cost, high field environment complexity | Small training dataset, manual annotation introduces subjectivity, limited validation scope | YOLOv5, RF | 0.91
Zhang et al. [109] | High computational costs, difficulties differentiating between severity levels, high field environment variability | Geographically limited dataset, poor early detection, limited generalization to different wheat varieties and environmental conditions | UNet | 0.97
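The accuracy values in Table 2 are overall scores, yet several of the studies report significant class imbalance, under which overall accuracy can mask poor detection of rare diseases. As a minimal illustration of why per-class recall is more diagnostic (the label sequences below are hypothetical, stdlib only):

```python
from collections import Counter

def per_class_recall(y_true, y_pred):
    """Overall accuracy plus per-class recall.

    Recall for a class = correct predictions for that class
    divided by its number of true instances.
    """
    assert len(y_true) == len(y_pred)
    totals = Counter(y_true)
    correct = Counter(t for t, p in zip(y_true, y_pred) if t == p)
    overall = sum(correct.values()) / len(y_true)
    recall = {c: correct[c] / n for c, n in totals.items()}
    return overall, recall

# Imbalanced toy labels: 'healthy' dominates, 'rust' is rare.
truth = ["healthy"] * 8 + ["rust"] * 2
pred = ["healthy"] * 8 + ["healthy", "rust"]  # one rust case missed

acc, rec = per_class_recall(truth, pred)
# acc = 0.9 overall, but rust recall is only 0.5
```

Overall accuracy reads as 0.9, while half of the rare class is missed, which is why single-number comparisons across the heterogeneous datasets in Table 2 should be interpreted with care.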
Table 3. References related to other stresses and damages.
ReferenceChallengesLimitationsProposed TechniquesAccuracy
Weed Management
de Camargo et al. [17]High computational cost, difficult balance between accuracy and speed, handling of large images, differentiating between similar weed speciesLimited generalizability, exclusion of multispectral data, potential misclassification of unknown species, manual thresholding in optimizationCNN, UNet0.94
| El-Kenawy et al. [118] | Complexity of infield weed classification, high computational cost, feature selection difficulties, ensuring model generalization | Limited dataset diversity, focus on image-based classification only, potential for overfitting due to ensemble learning, computational complexity of feature selection | NN, SVM, KNN | 0.98 |
| Jabir and Falih [119] | Variation in weed appearance, annotation was labor-intensive, optimization for deployment on edge devices, balancing accuracy vs. speed | Limited dataset and generalization, real-world implementation issues, model complexity and computational constraints | YOLOv5 | 0.94 |
| Li et al. [120] | Complex backgrounds and overlapping weeds, domain adaptation and generalization issues, computational cost and real-time deployment, dataset limitations | Limited dataset size and regional focus, small and medium weed detection difficulties, high computational complexity, lack of tests under real-world field conditions | NLB attention mechanism | 0.93 |
| Mishra et al. [121] | Variation in weed growth due to soil types, similarity between weed and crop, need for large dataset, high computational complexity | Limited generalization to other weed species, high model complexity for real-time applications, high impact of environmental conditions, segmentation is done manually | Inception V4, EfficientNet-B7 | 0.97 |
| Su et al. [122] | Difficulty obtaining large, well-labeled datasets, complicated annotation process, high computational cost | Data augmentation has limited impact, small difference between the methods tested | Bonnet DNN | 0.98 |
| Su et al. [123] | Visual similarity of ryegrass and wheat, misclassification by off-the-shelf algorithms, real-time processing constraints | Specific only to ryegrass in wheat fields, method requires a large dataset for training, method requires powerful GPUs for training and inference | Bonnet, SegNet, PSPNet, DeepLabV3, UNet | 0.95 |
| Su et al. [124] | Spectral similarity between weed and wheat, limited labelled data, UAV flight constraints, high computational complexity | No early-season mapping, generalization to other crops or conditions requires further validation, limited temporal analysis | RF | 0.94 |
| Wang et al. [125] | Weed and wheat similarities, poor recognition of small weeds, occlusion and complex field environments, need for computational efficiency | Limited dataset scope, not yet optimized for UAV deployment, potential false positives on background elements, herbicide decision-making not integrated | YOLOv7 | 0.98 |
| Zhuang et al. [126] | Low recall in object detection models, high weed density issues, similarity in appearance between weeds and wheat | Ineffectiveness of object detection models, variability in image sizes affects accuracy, need for more robust deep learning architectures | CenterNet, Faster R-CNN, TridentNet, VFNet, YOLOv3 | 0.68–0.99 |
| Zou et al. [127] | Optimization of network complexity, selection of the best neural network structure, difficulty ensuring generalization | Use of images with simple characteristics, limited number of output classes, no multi-class weed classification | ResNet50, MobileNet, VGG16, VGG19 | 0.98 |
Pest Management
| Chen et al. [128] | Complex background in field images, small object detection, computational costs of deep learning models, balancing accuracy and processing speed | Limited generalization to other crops/pests, performance degradation in low-quality images, lack of real-time deployment, manual labeling of training data | CNN, RPN | 0.94 |
| Fuentes et al. [129] | Limited e-nose development for crop protection, variability in infestation patterns, sensor calibration and data integration, computational complexity in real-time detection | Limited field validation, dependence on sensor sensitivity, lack of large-scale deployment, potential cross-detection of other stress factors | ANN | 0.97–0.99 |
| Li et al. [130] | Complex backgrounds, pest variability in scale and orientation, limited data for model training, computational complexity | Dependency on data augmentation, limited number of pest categories, lack of real-time deployment evaluation, fixed image resolutions in training | CNN, GAN | 0.83 |
| Li et al. [131] | Small size and complexity of wheat mites, limited dataset, background complexity, high computational complexity, difficult optimization of key parameters | Small dataset and limited generalization, limited to wheat mites, fixed imaging conditions, lack of real-time testing, model depth and computation constraints | CNN, RPN | 0.89 |
Evapotranspiration/Drought Monitoring
| Elbeltagi et al. [132] | Limited availability of climatic data, complexity of modeling using AI techniques, difficult model calibration and validation | Model trained and validated using only three climatic variables, need for significant computational resources | DNN | 0.94–0.99 |
| Shen et al. [133] | Complexity of drought factors, data integration issues, high computational requirements, difficult generalization and validation | Limited comparison with other models, dependency on TRMM data, fixed input variables, scalability concerns | DNN | 0.89 |
Herbicide/Pesticide Stress
| Chu et al. [134] | Lack of early visual symptoms, trade-off between spectral resolution and computation, high computational requirements, limited datasets, generalization challenges | Limited to controlled greenhouse conditions, focus on three herbicide types, dependence on specific spectral regions, potential overfitting | SCNN | 0.96 |
| Weng et al. [135] | Large-scale data handling, feature extraction complexity, high data variability, selection of optimal model | Limited dataset, high computational intensity, lack of generalization | CNN, FCN, PCANet | 0.96–1.00 |
Lodging
| Yang et al. [136] | Low accuracy of traditional methods, high computational cost, variability in field conditions, selection of input data | Limited study area, dependency on UAV data, not tested for large-scale implementation, limited comparison with other techniques | Mobile U-Net, FCN | 0.89 |
| Zhang et al. [137] | Variation in wheat growth stages, imbalanced data, high computational complexity, different imaging modalities, feature extraction optimization | Dependence on UAV data, limited generalization, poor multispectral image availability, potential overfitting | DeepLabv3+, UNet | 0.82–0.92 |
| Zhang et al. [138] | Low spatial and temporal resolution of satellite imagery, UAV data requires extensive preprocessing, complex feature extraction and selection | The study was conducted in a single experimental field, need for significant computational resources | RF, NN, SVM, CNN | 0.85–0.93 |
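Many of the weed- and pest-detection studies above rely on one- or two-stage detectors (YOLO variants, Faster R-CNN); a post-processing step they share is confidence filtering followed by non-maximum suppression (NMS) to remove duplicate boxes over the same plant. The sketch below is a generic, illustrative NumPy implementation of that step, not code from any cited paper; the threshold values are common defaults, not values reported in the references.

```python
import numpy as np

def iou(box, boxes):
    """Intersection-over-union between one box and an array of boxes (x1, y1, x2, y2)."""
    x1 = np.maximum(box[0], boxes[:, 0])
    y1 = np.maximum(box[1], boxes[:, 1])
    x2 = np.minimum(box[2], boxes[:, 2])
    y2 = np.minimum(box[3], boxes[:, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    area_a = (box[2] - box[0]) * (box[3] - box[1])
    area_b = (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])
    return inter / (area_a + area_b - inter)

def non_max_suppression(boxes, scores, conf_thresh=0.25, iou_thresh=0.45):
    """Drop low-confidence detections, then greedily suppress overlapping boxes."""
    keep_mask = scores >= conf_thresh
    boxes, scores = boxes[keep_mask], scores[keep_mask]
    order = np.argsort(-scores)              # highest confidence first
    kept = []
    while order.size > 0:
        i = order[0]
        kept.append(i)
        rest = order[1:]
        overlaps = iou(boxes[i], boxes[rest])
        order = rest[overlaps < iou_thresh]  # keep only boxes that overlap little
    return boxes[kept], scores[kept]
```

With two heavily overlapping candidate boxes and one distant box, the suppression keeps the higher-scoring member of the overlapping pair plus the distant box, yielding two detections.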
Table 4. References related to phenotyping and genetic selection.
| Reference | Challenges | Limitations | Proposed Techniques | Accuracy |
| --- | --- | --- | --- | --- |
| Apolo-Apolo et al. [140] | High data collection complexity, risk of poor model generalization, high computational demands, high environmental variability | Limited dataset size, dependence on visual features, potential overfitting, lack of comparison with alternative sensors | CNN, MLP | 0.87–0.90 |
| Crossa et al. [141] | Complex hyperparameter optimization, high computational complexity, complex genotype × environment interaction modeling, too small genomic datasets | Limited dataset scope, hyperparameters may not have been fully optimized, single-trait focus may be too limited | DL, ANN, AK, GK | 0.72 |
| Ghahremani et al. [142] | Occlusion in 2D images, high computational cost, boundary classification is a challenge, small datasets | Limited dataset, flawed delimitation of the objects, significant computational constraints | Pattern-Net, TasselNetV2+, Faster R-CNN | 0.92 |
| González-Camacho et al. [143] | Limited training samples, genotyping errors, complexity of rust resistance, ordinal nature of resistance scales, high training times, difficult feature selection | Dataset limited to a few wheat populations, high model performance variability, limited scalability and interpretability, need for large computational resources | Parametric linear regression, ML models | 0.71–0.80 |
| Guo et al. [144] | Fine-tuning of models is complex, high variation of prediction accuracies, computational efficiency is difficult to achieve | Deep learning models do not always perform well, stratified cross-validation did not significantly improve accuracy | Deep learning models | 0.03–0.85 |
| Hesami et al. [145] | Variability in wheat genotypes, nonlinear and complex interactions between phytohormones, complexity of model training | Potentially poor model generalization, high computational complexity, limited experimental validation | GRNN, GA | 0.78 |
| Khan et al. [146] | Absence of NIR band in RGB images, high variability in environmental conditions, high model training complexity, high computational demand | Potentially limited generalization, RGB-based VI estimation was limited, lack of real-time deployment, need for more robust feature engineering | DNN | 0.99 |
| Moghimi et al. [147] | Variability in yield within experimental plots, noise and artifacts in hyperspectral images, computational complexity of DL models, limitations in plot size optimization | Limited generalization across environments, high impact of environmental variability, limited dataset size, UAV and sensor relatively limited | DNN | 0.79 |
| Montesinos-López et al. [148] | Complexity of multi-trait genomic selection, computational cost of the models, challenging genotype × environment interactions, limited data quality and availability | Uncertain generalization across crops and traits, limited interpretability of the models, need for extensive hyperparameter optimization | DL, Bayesian Multi-Trait | 0.14–1.00 |
| Montesinos-López et al. [149] | Handling mixed phenotypes, difficult hyperparameter optimization, high computational costs | Modest gains in prediction accuracy, limited evaluation of genotype × environment interaction, limited field validation | Multi-Trait and Univariate DL | 0.72 |
| Montesinos-López et al. [150] | Difficulty in modeling ordinal traits, complex hyperparameter tuning, high computational requirement, poor generalization across datasets | No significant improvement using ML models, limited model generalization, difficulty dealing with genotype × environment interactions | TGBLUP, MLP, SVM | 0.45–0.70 |
| Montesinos-López et al. [151] | Complex genotype × environment interaction, complexity of multi-trait analysis, complex hyperparameter selection, small sample size | Small dataset size, overfitting in multi-trait models, genomic selection model performance variability, high computational costs | GBLUP, Multi-Trait and Univariate DL | N/A |
| Roth et al. [152] | Difficult balance between accuracy and scalability, phenotyping early growth stages is challenging, difficult trait assessment, high computational complexity | Lack of dense point clouds, high sensitivity to variability in plant emergence, potential bias in growth stage estimation | SVM, RF | 0.77–0.86 |
| Sandhu et al. [153] | Difficulty dealing with lower heritability traits, high data dimensionality, varying performance across environments | High computational complexity, lack of external validation, limited interpretability, high dependence on secondary traits | RF, MLP, CNN, SVM, GBLUP | 0.67–0.72 |
| Sandhu et al. [154] | Cost of quality trait evaluation, complexity of genotype × environment interaction, limited datasets | Potentially limited generalizability, high computational burden | Nine parametric, ML and DL models | 0.27–0.81 |
| Sandhu et al. [155] | Complex hyperparameter optimization, high risk of overfitting, high computational costs | Trait-specific optimization limits generalizability, lack of biological interpretability, need for large datasets | MLP, CNN, RRBLUP | 0.24–0.57 |
| Wang et al. [156] | Field conditions are difficult and varied, optimization of computational efficiency is difficult, clustered objects are difficult to separate, manual annotation is costly | Images taken with fixed camera angle, dataset is too small, no integration with other types of data, high error levels under some conditions | FCN, CNN | 0.98 |
| Yasrab et al. [157] | Complexity of root systems, errors in early image processing stages, balancing model accuracy with computational efficiency, generalization across plant species | Dependency on high-quality training data, limited testing on real-world field images, overfitting in small datasets, high error rates with overlapping roots | CNN | 0.95–0.99 |
| Zenkl et al. [158] | High lighting variability, changing soil properties, high scene complexity, annotation inconsistencies | Severely limited dataset, high human annotation variability, limited external validation, no multispectral data | SVM, RF, CNN | 0.86–0.95 |
| Zhang et al. [159] | Difficulty handling large-scale phenotypic data, complex integration of different imaging techniques, complexity of drought trait | Limited field validation, hyperspectral data are expensive and computationally complex, small dataset size may produce a biased model | RF, CNN | 0.70–0.82 |
| Zhu et al. [160] | Difficulty distinguishing objects of interest, variability in magnifications affected the stomatal index calculation | Stomata and epidermal cells were treated as independent tasks, single-task CNNs may not be the best option for the problem | Faster R-CNN, U-Net | 0.89–0.98 |
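Several genomic-selection entries in Table 4 (GBLUP, RRBLUP, TGBLUP) reduce, in their simplest form, to ridge regression on a marker matrix: all marker effects are shrunk equally toward zero. The NumPy sketch below is an illustrative baseline only; the function names are invented here, and the fixed shrinkage parameter `lam` stands in for a value that would normally be derived from estimated variance components.

```python
import numpy as np

def rrblup_effects(Z, y, lam=1.0):
    """RR-BLUP marker effects: solve (Z'Z + lam*I) beta = Z'y, which shrinks
    all marker effects uniformly toward zero (ridge regression)."""
    p = Z.shape[1]
    return np.linalg.solve(Z.T @ Z + lam * np.eye(p), Z.T @ y)

def genomic_predict(Z_new, beta):
    """Predicted breeding values for new genotypes coded as a marker matrix."""
    return Z_new @ beta

# Toy demonstration with simulated genotypes coded 0/1/2:
rng = np.random.default_rng(0)
Z = rng.integers(0, 3, size=(60, 20)).astype(float)   # 60 lines, 20 markers
beta_true = rng.normal(size=20)
y = Z @ beta_true + rng.normal(scale=0.1, size=60)    # phenotype = genetics + noise
beta_hat = rrblup_effects(Z, y)
y_hat = genomic_predict(Z, beta_hat)
```

On this low-noise toy data the fitted values track the phenotypes closely; the wide accuracy ranges in the table (e.g., 0.14–1.00) reflect how strongly real performance depends on trait heritability and population structure.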
Table 5. References related to spike detection.
| Reference | Challenges | Limitations | Proposed Techniques | Accuracy |
| --- | --- | --- | --- | --- |
| Alkhudaydi et al. [161] | Complex field conditions, large and noisy datasets, high computational complexity, difficult generalization across growth stages, lack of balanced datasets | Limited success in early growth stages, high false positive rates, segmentation strongly affected by environmental variability, high dependence on high-quality data | FCN | 0.76 |
| Dandrifosse et al. [162] | High variability in wheat growth stages, lighting and shadow effects, difficult conversion of ear count to density, differences in fertilization scenarios | Limited dataset scope, underestimated ear densities, relatively high segmentation error rates | YOLOv5, DeepMAC | 0.86–0.93 |
| David et al. [163] | High variability in image conditions, differences in genotypes and growth stages, difficulties in image labeling, difficulties with occluded wheat heads and dense plantings | Geographic bias in the dataset, flawed detection of overlapping heads, dataset with limited temporal variability, baseline model performance was limited | YOLOv3, Faster R-CNN | 0.77 |
| David et al. [164] | High variability in wheat growth stages, dataset labeling challenges, geographic and environmental differences, non-trivial model evaluation | Bias toward developed countries, bounding box annotations instead of segmentation, difficulty dealing with overlapping wheat heads | Faster R-CNN, ensemble DL | 0.70 |
| Fourati et al. [165] | High density of wheat heads, high data variability, accuracy affected by environmental factors, high computational complexity | Limited dataset variability, potential bias due to geographical limitations, evaluation metric limitations | Faster R-CNN, EfficientDet | 0.74 |
| Genaev et al. [166] | Variations in spike characteristics increase complexity, need for large training datasets, different imaging angles can cause distortions | Exclusive focus on morphometric features, limited number of wheat varieties considered | Machine learning, regression | 0.97 |
| Gong et al. [167] | Available datasets are small, trade-off between speed and accuracy, high variability in field conditions, presence of small or occluded wheat heads | Only one dataset used, potentially poor generalization, high computational complexity | YOLO, Faster R-CNN | 0.94 |
| Hasan et al. [168] | Complex field imaging conditions, labor-intensive data annotation, high variability in spike characteristics | Potentially poor generalization, model too sensitive to growth stages, high computational complexity | R-CNN, CNN | 0.93 |
| He et al. [169] | Wheat spike overlapping and motion blur, wheatear variability, high computational demand | Potential generalization issues, small objects are often missed, high computational complexity for inference | Improved YOLOv4 | 0.97 |
| Khaki et al. [13] | Variability in wheat head appearance, lack of data diversity, difficulty balancing accuracy and efficiency, difficulties with real-time deployment | Limited generalization across wheat varieties, absence of real-world testing, point-level annotations affected accuracy, computational constraints on edge devices | WheatNet | 0.96 |
| Li et al. [170] | Background complexity and visual similarity, differences in wheat growth stages, data limitations, computational and processing constraints | Performance drops in some growth stages, lack of real-time deployment, influence of environmental factors not fully studied | CNN | 0.97–0.98 |
| Li and Wu [171] | Complex backgrounds and occlusions, small target detection, feature extraction limitations | Dependence on specific data augmentation techniques, limited generalization, high computational demand | Faster R-CNN, YOLO, SSD | 0.94 |
| Ma et al. [172] | Complexity of wheat canopy images, trade-off between model complexity and efficiency, difficult generalization across different cultivars | Limited dataset, low performance in complex field conditions, models are computationally expensive | EarSegNet, DeepLabv3+ | 0.87 |
| Ma et al. [173] | Difficult segmentation in complex field conditions, high computational cost, balancing model complexity and efficiency | Dataset diversity limitations, sensitivity to small-scale variability, high computational cost, relatively poor performance with UAV images | DCNN, FCN, RF | 0.84 |
| Madec et al. [174] | Variability in field conditions, selection of the optimal spatial resolution, high computational complexity, labeling subjectivity | Poor generalization capability, errors due to small object size, relatively poor performance with UAV images, low accuracy of manual annotations | Faster R-CNN, TasselNet | 0.85 |
| Misra et al. [175] | Variability in image conditions, complexity of wheat spikes, need for large amounts of labeled data for training, high computational cost | Potentially poor generalization, counting errors due to overlapping spikes, real-time deployment needs further optimization, limited dataset | SpikeSegNet | 0.99 |
| Qing et al. [176] | High-density and overlapping wheat spikes, balancing accuracy and computational efficiency, challenging model optimization and feature extraction | Limited generalization across varieties, high computational cost, absence of field validation and real-time testing | YOLO-FastestV2 | 0.81 |
| Sadeghi-Tehran et al. [177] | Variability in environmental conditions, overlapping spikes, dataset diversity limitations | Field measurement uncertainties caused inconsistencies, lower spatial resolutions degraded performance, ultra-wide-angle lenses introduced perspective distortions | DeepCount | 0.57–0.97 |
| Shen et al. [178] | Variation in wheat characteristics, occlusion and overlapping wheat heads, complex backgrounds and illumination changes, hardware limitations | Accuracy is affected by varying illumination and backgrounds, poor accuracy in detecting occluded heads, limited generalizability, high computational complexity | YOLO, Faster R-CNN | 0.94 |
| Sun et al. [179] | High-density targets, scale variation of wheat heads, varying lighting conditions, overlapping wheat heads, limited training data | Potentially poor generalization, no multi-temporal analysis, high computational complexity, image overlapping can lead to duplicate counts | WHCnet, SSD, Cascade R-CNN, YOLOv4 | 0.96 |
| Velumani et al. [180] | Variability in environmental conditions, dataset imbalance and annotation challenges, image noise and artifacts, limited scalability to large fields | Dependence on fixed camera systems, small sampling area, no real-time prediction, potential overfitting | CNN | 0.98 |
| Wang et al. [181] | Difficult field conditions, challenges processing high-resolution images, clustered wheat ears are difficult to separate, labor-intensive manual annotation | Fixed camera angle and small field of view, limited dataset, high error levels when conditions are not ideal, no real-time large-scale field deployment | FCN, Harris Corner Detection | 0.98 |
| Wang et al. [182] | Ear occlusions and overlap, variability in lighting and wheat maturity, excessive data imbalance, difficult optimization of feature fusion | Dataset captured under specific conditions, dependence on pretrained models, not fully real-time, modest improvement in comparison with previous approaches | YOLOv3, SSD, Faster R-CNN, EfficientDet-D1 | 0.94 |
| Wang et al. [183] | Time-series data complexity, high computational requirements, inter-annual yield variability, difficult hyperparameter optimization, limited high-resolution data | Limited generalization to other crops and regions, yield underestimation in high-yielding areas, temporal resolution constraints | CNN, GRU | 0.64 |
| Xiong et al. [184] | Variability in wheat appearance, high-density wheat fields make it difficult to separate individual spikes, image quality issues, occlusions and partial spikes | Limited geographic scope, fixed camera positioning, possible overfitting, not tested in real-time UAV deployment | TasselNet, CNN | 0.91 |
| Xu et al. [185] | Variability in wheat ear appearance, image processing complexity, influence of lighting conditions, balancing accuracy and efficiency | Limited generalization across wheat varieties, dependence on image acquisition conditions, optimal performance only at late grain-filling stage | CNN | 0.96 |
| Yang et al. [186] | Occlusions and overlapping wheat ears, background noise interference, variability in image conditions, bounding box localization errors | Limited dataset diversity, fixed image resolution, not tested on real-time UAV deployment, no detection of small wheat ears | CBAM-YOLOv4, YOLOv3, YOLOv4 | 0.89–0.98 |
| Zang et al. [187] | Spike occlusion and overlap, densely packed spikes, impact of image resolution and environmental factors | High density and visual similarity decrease accuracy, only one object can be detected per grid cell, model depends on image resolution, potentially limited generalizability | Faster R-CNN, YOLO | 0.72 |
| Zhao et al. [188] | Small-sized and densely packed wheat spikes, background noise in images, limitations of existing object detection methods | Dependence on high-quality labeled data, limited scalability to different environments, high computational complexity, high sensitivity to image resolution | Faster R-CNN, RetinaNet, SSD, YOLOv3, YOLOv5 | 0.94 |
| Zhao et al. [189] | Small and densely packed spikes, occlusions and overlapping spikes, variability in spike orientation, complex field background interference | Dependence on high-quality UAV images, high computational complexity, limited generalizability, need for manual labeling in training | Seven detection models | 0.90 |
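Counting-oriented approaches in Table 5 (TasselNet- and DeepCount-style models) often sidestep bounding boxes, which fail on densely overlapping spikes, and instead regress a density map: each annotated spike centre is rendered as a unit-mass Gaussian, so the integral of the map equals the spike count. The NumPy sketch below shows only this training-target construction (the sigma value and array shapes are illustrative, not taken from the cited papers); the predicted count at inference time is simply the sum of the network's output map.

```python
import numpy as np

def gaussian_density_map(points, shape, sigma=2.0):
    """Build a density-map training target: one unit-mass Gaussian per annotated
    spike centre, so that density.sum() equals the number of spikes."""
    h, w = shape
    yy, xx = np.mgrid[0:h, 0:w]
    density = np.zeros(shape, dtype=float)
    for (py, px) in points:
        g = np.exp(-((yy - py) ** 2 + (xx - px) ** 2) / (2 * sigma ** 2))
        density += g / g.sum()     # normalise each blob to unit mass
    return density

# Three annotated spike centres -> a map whose integral is 3:
target = gaussian_density_map([(10, 10), (30, 40), (50, 20)], (64, 64))
```

This formulation is why overlapping spikes degrade box-based detectors in the table more than density-based counters: overlapping Gaussians add, while overlapping boxes get suppressed.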
Table 6. References related to grain classification.
| Reference | Challenges | Limitations | Proposed Techniques | Accuracy |
| --- | --- | --- | --- | --- |
| Çelik et al. [14] | High similarity between different durum wheat grains, limited performance of single CNN models, need for a large dataset | Potentially limited generalizability, reliance on image features only, potential overfitting, lack of real-time testing | Hybrid CNN Model | 0.92 |
| Gao et al. [190] | Difficulty separating touching wheat kernels, equipment dependency, feature redundancy in deep networks, processing efficiency | Dataset with limited variability, lack of real-time automation, single-view imaging, limited comparison with other DL methods | ResNet | 0.94 |
| Khatri et al. [191] | High similarity between wheat varieties, dataset limitations, difficult feature selection, high computational complexity | Limited dataset size, potentially limited generalization, need for real-world testing, focus on limited features | Ensemble, kNN, NB | 0.95 |
| Laabassi et al. [192] | High visual similarity between wheat varieties, variability in growing conditions, high computational demand, complex model validation | Limited number of wheat varieties, temporal variability not considered, impact of storage conditions not analyzed, potential for model overfitting | CNN | 0.95–0.99 |
| Li et al. [193] | Imbalanced and limited dataset, high similarity between healthy and unsound kernels, proper application of augmentation, classifier selection | Dependence on hyperspectral imaging, GAN-based augmentation does not fully replace real data, limited model generalization, limited real-time application testing | CNN, SVM | 0.97 |
| Lingwal et al. [194] | High similarity among wheat varieties, need for a large and diverse dataset, selection of optimal hyperparameters, high computational complexity | Dependence on a specific dataset, generalization challenges, computational constraints on mobile devices, need for real-world validation | CNN | 0.95 |
| Özkan et al. [195] | High inter-class similarity of wheat kernels, computational complexity of CNNs, variability in imaging conditions | Limited generalization, feature fusion optimization needed, scalability for large-scale agricultural applications | CNN, SVM | 0.98 |
| Passos and Mishra [196] | Choosing the right DL architecture, computational cost of optimization, balancing preprocessing techniques | Limited neural architecture search, significant computational constraints, fixed preprocessing methods | 1D-CNNs | 0.95 |
| Sabanci et al. [197] | Feature selection complexity, data processing challenges, training data limitations, model optimization complexity | Small sample size, dependence on visual features only, fixed experimental setup, potential overfitting | ANN | 1.00 |
| Sabanci et al. [198] | Selecting the optimal imaging technique, feature extraction from noisy images, image fusion complexity, machine learning model optimization | Limited sample size, dependence on texture features only, experimental setup constraints, potential for overfitting | MLP, SVM, kNN | 0.98 |
| Sabanci [199] | Feature extraction from noisy images, feature selection for AI models, time-consuming hyperparameter tuning, small dataset size | Limited dataset, dependence on visual features only, fixed imaging setup, model generalization issues | ANN, ELM | 1.00 |
| Sabanci et al. [200] | Intensive image preprocessing, computational cost of CNN training, model generalization issues | Small dataset size, dependence on visual features only, fixed imaging conditions, potential overfitting | Hybrid CNN-BiLSTM, AlexNet | 0.99 |
| Unlersen et al. [201] | Variation in wheat cultivars, need for high-resolution images, limited training data, feature extraction complexity, high computational demand | Limited to bulk samples, fixed imaging conditions, no consideration of chemical and rheological properties | CNN, SVM | 0.98 |
| Wei et al. [202] | Variability in wheat grain images, separation of overlapping grains, computational demand of DL models, lack of pre-existing datasets | Dataset limited to three wheat varieties, not tested in real-world field conditions, inability to distinguish damaged or deformed grains, computation speed needs optimization | Faster R-CNN | 0.91 |
| Yang et al. [203] | Data scarcity, variability in kernel appearance, complexity of acoustic signal processing, manual feature engineering, high computational cost | Limited to three classes, dependence on high-quality acoustic signals, not tested on real-world bulk grain samples, limited scalability | SPGAN-PNAS, CNN | 0.96 |
| Zhang et al. [204] | Hyperspectral imaging technology is sensitive to several factors, difficult data preprocessing and feature selection | The study was conducted on a single wheat variety, limited generalizability, overfitting problems when using full-wavelength spectral data, need for optimization for real-world use | LDA, SVM, DF | 0.94 |
| Zhao et al. [205] | Difficult extraction from hyperspectral images, balancing spectral and spatial information, high computational requirements, variability in seed appearance | Limited generalizability, dependence on high-quality hyperspectral imaging, substantial computational resource constraints, need for larger training datasets | 1D-CNN, 2D-CNN | 0.96 |
| Zhou et al. [206] | High dimensionality of data, feature redundancy and selection, high computational complexity, variation in kernel properties | Dependence on large datasets, need for further optimization for real-time applications, limited generalization | CNN, SVM, PLSDA | 0.93 |
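Several grain-classification entries in Table 6 pair hand-crafted kernel features (size, colour, texture) with classical classifiers such as kNN, SVM, or Naive Bayes. The NumPy sketch below illustrates only the kNN voting step on a toy two-class feature space; the two-dimensional features and class labels are placeholders, not features used in any cited study.

```python
import numpy as np

def knn_classify(train_X, train_y, query, k=3):
    """k-nearest-neighbour vote: find the k training kernels closest to the
    query feature vector and return the majority class label."""
    dists = np.linalg.norm(train_X - query, axis=1)
    nearest = np.argsort(dists)[:k]
    labels, counts = np.unique(train_y[nearest], return_counts=True)
    return labels[np.argmax(counts)]

# Toy data: two well-separated clusters standing in for two wheat varieties.
train_X = np.array([[0, 0], [0, 1], [1, 0], [10, 10], [10, 11], [11, 10]], dtype=float)
train_y = np.array([0, 0, 0, 1, 1, 1])
```

The near-perfect accuracies reported in the table (0.92–1.00) are consistent with this picture: once the imaging setup is fixed, kernel feature clusters separate cleanly, which is also why generalization to new imaging conditions is the recurring limitation.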
Table 7. References related to other applications.
| Reference | Challenges | Limitations | Proposed Techniques | Accuracy |
| --- | --- | --- | --- | --- |
Wheat Mapping and Row Identification
| Cai et al. [207] | Difficulty in capturing detailed growth vacancies, feature extraction complexity, need for adaptive feature selection | Manual threshold setting, limited training data, absence of multispectral or hyperspectral data, high computational complexity | RCTC, CNN | 0.86 |
| Fang et al. [208] | Balancing classification accuracy and generalization, need for careful hyperparameter tuning, remote sensing data limitations | Potentially limited generalizability, only three ML techniques were considered, impact of additional environmental and soil factors was not explored | SVM, RF, CART | 0.94–0.95 |
| Luo et al. [209] | Variability in crop growth and climate conditions, limitations of satellite-based yield estimation, computational complexity of DL models, data availability and consistency | Limited temporal coverage, coarse spatial resolution, challenges in detecting small-scale variations, poor generalization | LSTM, RF, LightGBM | 0.76 |
| Meng et al. [210] | Cloud contamination, spectral complexity of hyperspectral data, fragmented farmland and mixed land use, cloudy and rainy conditions | Sensitivity to cloud contamination, limited generalization, no analysis of real-time operation, limited field sampling | 1D-CNN, 2D-CNN, 3D-CNN, RF, SVM | 0.95 |
| Tian et al. [52] | Spectral similarity between garlic and winter wheat, cloud cover in optical imagery, large data processing requirements, integration of optical and radar data | Dependence on Sentinel-1 and Sentinel-2 availability, lack of historical data analysis, no inclusion of climate and soil data, potential confusion with other winter crops | RF | 0.96 |
| Zhong et al. [211] | Trial-and-error approach is time-consuming, difficulty in handling high-dimensional data, pixel misalignment, discrepancies between data sources | Lower pixelwise accuracy in the spatiotemporal model, need for pixel-level reference data, lack of generalization | Deep learning | 0.99 |
Food Quality
| Bourguet et al. [212] | Balancing nutritional and sensory quality, conflicting stakeholder priorities, complex multi-criteria decision-making | Dependence on expert knowledge, high computational complexity, limited quantitative validation | Argumentation models | N/A |
| Nargesi et al. [213] | Similarity between flour types, time-consuming data acquisition, high computational demand | Limited dataset scope, computational complexity of hyperspectral imaging, practical use needs further validation | ANN, SVM, LDA | 0.98 |
| Shen et al. [214] | Complexity of impurity detection, some impurities resemble wheat grains, occlusions and overlapping impurities, need for large labeled datasets | High error levels with occlusions, limited generalization, need for larger datasets | CNN | 0.98 |
| Shen et al. [215] | Limited impurity dataset, expensive equipment, need for more stable models | Limited number of wheat impurities, THz detection method too expensive for real-world application | CNN | 0.97 |
Moisture Content
| Bartley et al. [216] | Complexity of microwave-based moisture measurement, ensuring density independence, limited number of samples | Temperature variations affect accuracy, study conducted on static wheat samples, limited dataset size, need for further hardware optimization | ANN | 0.99 |
| Shafaei et al. [217] | The hydration process depends on multiple factors, need for multiple trials and optimizations | High model complexity, lack of generalization due to data limitations, only one wheat variety was considered | ANN, ANFIS | 0.99 |
Nitrogen and Chlorophyll Content
| Singh et al. [218] | Complexity of nitrogen prediction, machine learning model complexity, high computational demands, need for field validation | Dataset with limited variability, model does not fully account for environmental conditions, potential overfitting | SVR, RF, kNN, MLP, PLSR, GBR | 0.89 |
| Wu et al. [219] | Selection of optimal time for data collection, complex feature selection, best prediction model varied at different growth stages | Limited to the reproductive stage of spring wheat, variation in optimal machine learning models, high computational requirements | DNN, PLS, RF, AdaBoost | 0.77–0.97 |
Protein Content
| Yang et al. [220] | Variability across spectrometers, dependency on standard samples, need for careful fine-tuning | Tested on only five spectrometers, limited dataset, no comparison with transformer-based models, not evaluated for real-time applications | DeepTranSpectra, CNN | 0.98 |
Crop Recommendation Systems
| Akkem et al. [221] | Black-box nature of AI models, difficulty meeting real-world agricultural needs, high computational cost of explainability methods | Training data not always available or accurate, need for domain-specific validation, potential ethical and social transparency challenges | ML models (not specified) | N/A |
Wheat as Fuel
| Bai et al. [222] | High viscosity of wheat germ oil, poor engine efficiency, high nitrogen oxide emissions, hydrogen safety risks | Emissions increased with hydrogen addition, limited comparison with other biofuels, high cost of hydrogen infrastructure, low energy output per unit fuel | MLR, DT, RF, SVR | 0.99 |
Optimization of Energy Use
| Ghasemi-Mobtaker et al. [223] | Uncertainty in energy efficiency, economic and environmental risks, data collection limitations | Limited generalizability, environmental impact is high | ANN, ANFIS | 0.98 |
Optimization of Amylase Production
| Núñez et al. [224] | Complexity of optimization, variability in substrate composition, computational demands of AI models | Limited experimental validation, small dataset size, lack of enzyme characterization, limited comparison with other AI models | ANN, GA | 0.98 |
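The nitrogen and chlorophyll entries in Table 7 ([218,219]) typically regress canopy vegetation indices against ground-truth leaf measurements. The sketch below is a deliberately simple NumPy illustration of that pipeline: an NDVI computation per pixel, followed by an ordinary least-squares calibration line standing in for the stronger regressors (SVR, RF, PLSR, MLP) used in those studies. The numeric values are synthetic placeholders, not data from the cited papers.

```python
import numpy as np

def ndvi(nir, red):
    """Normalised Difference Vegetation Index, computed per pixel."""
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    return (nir - red) / (nir + red + 1e-9)   # epsilon avoids division by zero

def calibrate_linear(vi_means, n_measured):
    """Least-squares line mapping plot-mean vegetation index to measured
    nitrogen content; returns (slope, intercept)."""
    A = np.vstack([vi_means, np.ones_like(vi_means)]).T
    slope, intercept = np.linalg.lstsq(A, n_measured, rcond=None)[0]
    return slope, intercept

# Synthetic example: four plots whose nitrogen content is linear in NDVI.
vi = np.array([0.2, 0.4, 0.6, 0.8])
n_content = 10.0 * vi + 1.0
slope, intercept = calibrate_linear(vi, n_content)
```

The recurring limitation in those rows (models "do not fully account for environmental conditions") corresponds to this calibration line shifting between sites and growth stages, which is what the nonlinear regressors in the table attempt to absorb.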

Share and Cite

MDPI and ACS Style

Barbedo, J.G.A. A Review of Artificial Intelligence Techniques for Wheat Crop Monitoring and Management. Agronomy 2025, 15, 1157. https://doi.org/10.3390/agronomy15051157

