Abstract
Artificial intelligence (AI) techniques, particularly machine learning and deep learning, have shown great promise in advancing wheat crop monitoring and management. However, the application of AI in this domain faces persistent challenges that hinder its full potential. Key limitations include the high variability of agricultural environments, which complicates data acquisition and model generalization; the scarcity and limited diversity of labeled datasets; and the substantial computational demands associated with training and deploying deep learning models. Additionally, difficulties in ground-truth generation, cloud contamination in remote sensing imagery, coarse spatial resolution, and the “black-box” nature of deep learning models pose significant barriers. Although strategies such as data augmentation, semi-supervised learning, and crowdsourcing have been explored, they are often insufficient to fully overcome these obstacles. This review provides a comprehensive synthesis of recent advancements in AI for wheat applications, critically examines the major unresolved challenges, and highlights promising directions for future research aimed at bridging the gap between academic development and real-world agricultural practices.
1. Introduction
Wheat (Triticum aestivum L.) is one of the most important staple crops worldwide, providing a significant portion of daily caloric intake for millions of people. Given its global significance, optimizing wheat production is crucial to ensuring food security. However, challenges such as climate change, pest infestations, and resource inefficiencies continue to impact wheat yields and quality [1,2]. As the demand for wheat continues to grow, innovative solutions that leverage modern technology are needed to enhance productivity while promoting sustainable agricultural practices.
Most potential technological solutions for agriculture are inherently data driven, that is, they can only be effective if data covering the whole variety of conditions found for that specific application are available. Although sensors to collect data from crop fields have been available for many decades, this kind of technology has experienced accelerated evolution and growth since the turn of the twenty-first century [3]. Soil and meteorological sensors are now sensitive and affordable enough for a detailed characterization and modeling of the cultivation process [4]. Digital cameras can be utilized to monitor diseases, pests, nutrient deficiencies, and other stress factors, which are major contributors to agricultural losses [5]. Meanwhile, advanced multispectral and hyperspectral cameras are enabling the early detection of issues, allowing for timely intervention to prevent significant yield losses [6]. Drones have revolutionized data collection, enabling the coverage of vast areas while capturing high-resolution images with efficiency and precision [7]. A growing number of satellites now continuously monitor the Earth, with increasing revisit frequencies and ever-improving sensor resolution and sensitivity [8]. Internet of Things (IoT) technologies have enabled the seamless interconnection of devices, allowing them to communicate and exchange data autonomously over the internet, without the need for human intervention [9]. As a result, the volume of collected data has been rapidly increasing, even in previously inaccessible areas where data collection was once logistically impractical. Extracting meaningful insights from these diverse data types is a complex challenge, but artificial intelligence techniques and models have proven highly effective in overcoming it [10].
Artificial intelligence (AI) has emerged as a transformative tool in addressing these challenges. Recent advancements in AI-driven agriculture have led to notable progress in key areas such as disease detection, yield prediction, weed management, and phenotyping [11]. For example, ref. [12] developed a deep learning model achieving high accuracy in early detection of wheat rust based on hyperspectral imaging. Similarly, ref. [13] demonstrated that convolutional neural networks (CNNs) could outperform traditional machine learning models in predicting wheat yield from UAV-acquired imagery. These and other studies underscore the growing reliability and precision of AI-driven approaches in wheat production monitoring.
Emerging deep learning models, including self-supervised learning and attention-based architectures, are proving to be highly effective in automating large-scale wheat monitoring and optimizing crop management decisions [14]. The integration of multispectral and hyperspectral imaging, UAV-based monitoring, and remote sensing technologies has further strengthened AI applications in precision wheat farming [15]. Additionally, the integration of transfer learning, multi-source data fusion [16], and hybrid AI models has contributed to overcoming challenges associated with data scarcity and model generalization [17].
Despite recent advancements, the application of artificial intelligence to wheat management and monitoring still faces a range of persistent challenges that extend beyond data-related issues. Among the foremost limitations are the high computational demands of training and deploying complex models, which can hinder adoption in settings with limited infrastructure [18]. Additionally, enhancing model interpretability remains a crucial concern, as current deep learning architectures often function as “black boxes”, limiting their usability in decision-making processes that require transparency and trust [19,20,21]. The dynamic nature of agricultural environments adds another layer of complexity—fields are unstructured and influenced by constantly changing variables such as weather conditions, light incidence, phenological stages, and the presence of pests or diseases.
Amid these broader issues, data limitations continue to be a major bottleneck. Unlike more stable domains like urban environments, agricultural systems demand datasets that are not only large in volume but also diverse enough to capture complex interactions among environmental and biological factors [5]. This challenge is particularly acute for digital imagery, where the cost and logistics of acquiring representative samples under varied conditions are considerable. While fixed sensor networks may partially alleviate this burden, alternative strategies are still necessary. Methods such as semi-supervised learning, domain adaptation, and improved annotation techniques have shown promise, but they cannot entirely substitute for robust, well-curated datasets. To address this gap, innovative approaches based on crowdsourcing and citizen science have demonstrated potential [11,22]. These participatory methods can contribute valuable, real-world data at scale, though further refinement is needed to ensure quality, standardization, and integration into existing AI workflows. A holistic response to these challenges requires coordinated progress across model development, data infrastructure, and interdisciplinary collaboration.
Thus, although substantial progress has been made in applying machine learning and deep learning techniques to wheat crop monitoring and management, significant gaps remain. Existing studies often rely on limited, site-specific datasets, which restrict the generalizability of the proposed models across diverse agroecological environments. Moreover, challenges such as data scarcity, ground-truthing difficulties, limited temporal resolution, and the lack of interpretable AI models continue to hinder practical deployment in real-world agricultural settings. While recent works have explored advanced methods such as data augmentation, transfer learning, and multi-source fusion, these approaches have yet to fully bridge the gap between controlled experimental results and scalable field applications. This review aims to critically examine these persistent challenges, synthesize emerging strategies, and identify directions for future research to advance the robust integration of AI in wheat production systems.
Numerous studies in the literature address one or more of the challenges and research gaps outlined above, as well as various application-specific difficulties. However, the diversity of methodologies and approaches can make it difficult to determine which solutions are most appropriate for specific problems. To help organize the growing body of scientific knowledge and provide a clearer view of the current landscape, this article presents a comprehensive review of state-of-the-art artificial intelligence applications in wheat monitoring and management. It examines recent advances, highlights persistent challenges, and outlines promising directions for future research and integration. By assessing the capabilities and limitations of current AI models, this review seeks to bridge the gap between academic research and practical implementation in agricultural settings, ultimately contributing to improved food security and the promotion of more sustainable wheat production practices.
The remainder of this article is organized as follows. Section 2 defines the key terms and acronyms used throughout the review. Section 3 examines the state-of-the-art AI applications in various stages of wheat cultivation. Section 4 provides an in-depth discussion of the main technical and practical challenges, as well as unresolved research gaps. Section 5 concludes with final remarks and reflections on future directions.
2. Definitions and Acronyms
Some terms considered particularly important in the context of this work are defined in this section. Most definitions have been adapted from [23]. A list of acronyms used in this article, along with their respective meanings, is provided in Abbreviations.
Artificial intelligence: This is a computational, data-driven approach capable of performing tasks that typically require human intelligence, such as autonomously detecting, tracking, or classifying plant diseases.
Big data: This is a term used to describe large, complex, and high-volume datasets that exceed the capabilities of traditional data processing methods.
Data annotation: This is the process of adding metadata to a dataset, such as marking symptom locations in an image. This task is typically performed manually by human specialists using image analysis software.
Data fusion: This is the process in which different types of data are combined in order to provide results that could not be achieved using single data sources.
Deep learning: This is a specialized subset of machine learning that utilizes artificial neural networks with multiple processing layers to extract features from data and recognize patterns of interest. Deep learning is particularly suited for large datasets with complex features and unknown relationships.
Domain adaptation: This is a subfield of transfer learning in machine learning where a model trained on one source domain (the dataset on which the model is originally trained) is adapted to perform well on a different but related target domain (the dataset on which the model needs to perform but has different characteristics).
Ensemble learning: This is a machine learning technique that combines multiple models, often called “base learners” or “weak learners”, to create a more accurate and robust predictive model.
Feature: This is a measurable property of a data sample, such as color, texture, shape, reflectance intensity, index values, or spatial information.
Hyperspectral imaging: This is the process of using a spectral imaging sensor to capture and analyze reflectance information across the electromagnetic spectrum, generating a unique spectral signature for each pixel in the specimen’s image. Hyperspectral imaging typically evaluates hundreds of narrow wavebands, extending beyond the visible spectrum to provide detailed spectral insights.
Image augmentation: This is the process of applying image processing techniques to modify existing images, thereby generating additional training data for a model.
Imaging: This is the use of sensors to capture images across specific ranges of the electromagnetic spectrum. Imaging sensors include RGB (red-green-blue), multispectral, hyperspectral, and thermal cameras.
Internet of Things: This is a network of interconnected physical devices embedded with sensors, software, and communication technologies that enable them to collect, exchange, and analyze data over the internet without human intervention.
Interpretability: This refers to the degree to which a human can understand and explain how an AI model makes its decisions.
Machine learning: This is a subset of artificial intelligence (AI) that enables algorithms to learn patterns of plant diseases by extracting features from large datasets. Machine learning models are often trained using annotated data and, once developed, can predict outcomes for new, unseen data.
Model: This is a representation of the knowledge learned by a machine learning algorithm from training data.
Model generalization: This is the ability of a machine learning model to perform well on new, unseen data after being trained on a given dataset.
Multimodality: This refers to the ability of a system, particularly in artificial intelligence (AI) and machine learning, to process, integrate, and interpret multiple types of data or sensory inputs simultaneously.
Multispectral imaging: This is a sensor-based technique for capturing and processing reflectance information from multiple wavebands of the electromagnetic spectrum. Typically, up to 10 wavebands in the visible or near-infrared range are analyzed to support disease detection.
Overfitting: This is a phenomenon where a model performs well on training data but fails to generalize to new, unseen test data.
Proximal sensing: This is the acquisition of optical information from a crop specimen under controlled conditions, without direct physical contact, but at relatively close distances—typically conducted in a greenhouse or laboratory setting.
Remote sensing: This is the acquisition of optical information from an object in the field or landscape through a noninvasive, contactless approach, using sensors such as the human eye or artificial spectral sensors.
Segmentation: This is the process of dividing a digital image into multiple distinct segments or classes, based on similar pixel characteristics such as hue, saturation, and intensity. This can be performed automatically using algorithms or manually by human annotators.
Semi-supervised learning: This is a hybrid approach combining supervised and unsupervised learning, where a small portion of labeled data is used for initial training, while the remainder of the learning process relies on unlabeled data.
Supervised learning: This is a machine learning approach where a model is trained on labeled data to predict either categorical labels (classification) or numerical values (regression) for new data.
Transfer learning: This refers to a machine learning technique where a model trained on one task or dataset (source domain) is adapted to perform well on a different but related task or dataset (target domain).
Unsupervised learning: This is a machine learning technique that identifies patterns and structures in unlabeled data without predefined categories.
3. Literature Review
The article selection process was conducted in March 2025 using Scopus and Google Scholar, two comprehensive bibliographic databases. The search employed a Boolean expression: wheat AND (artificial intelligence OR deep learning OR machine learning). Conference papers were immediately excluded, based on the rationale that such publications often lack rigorous peer review. This initial search returned approximately 320 articles.
To refine this large set, two screening criteria were systematically applied:
Thematic Focus: Studies were included only if they focused exclusively on wheat or, at most, included one additional crop.
Methodological Relevance: Articles in which artificial intelligence or machine learning techniques were not the primary focus of the investigation were excluded.
Applying these criteria, 96 articles were excluded for not meeting the thematic focus, and 31 for not prioritizing AI/ML methodologies. After this screening process, 193 articles remained. An additional eight relevant articles were identified through manual examination of the reference lists of these papers, leading to a final selection of 201 articles for in-depth review. Although no formal quality assessment (e.g., minimum dataset size or standardized validation procedures) was applied during selection, studies were critically evaluated regarding dataset characteristics, validation strategies, and model robustness as discussed in the Results and Discussion sections.
The selected articles were categorized into seven main research areas: yield prediction (46 articles), disease management (44 articles), other stresses and damages (22 articles), phenotyping/genetic selection (21 articles), spike/ear/head detection (31 articles), grain/kernel classification (18 articles), and other applications (19 articles). It is worth noting that additional articles not included in the selected set are cited throughout the text whenever they provide relevant clarification or support for specific aspects discussed.
3.1. Yield Prediction and LAI/Biomass Estimation
Table 1 presents all articles focused on yield prediction and LAI/biomass estimation, outlining each reference alongside its key challenges, limitations, tested techniques, and best-reported accuracy(ies).
Table 1.
References related to yield prediction.
Yield prediction, along with the related tasks of LAI and biomass estimation, remains one of the most extensively studied applications of AI in wheat-related research. Several factors contribute to this focus. The widespread availability of satellite-derived data, including long-term time series spanning several decades, provides a rich foundation for developing and validating AI models. Additionally, the use of unmanned aerial vehicles (UAVs) for data collection in this context is becoming increasingly common [16,27,32,45,50,59,61,62,63]. This abundance and accessibility of data make yield prediction a particularly attractive and feasible problem for AI-based approaches.
AI excels at extracting meaningful insights from complex, high-dimensional agricultural datasets, enabling it to capture subtle patterns and relationships that might be difficult to detect using traditional analytical methods [51]. This capability makes AI particularly well suited for tasks like yield prediction, where multiple interacting variables must be considered. Additionally, wheat yield data are highly nonlinear [58,63], requiring techniques capable of effectively modeling nonlinear relationships [33,51,53]. However, while many AI techniques are inherently well suited for this purpose, selecting the optimal model architecture, parameters, and activation functions can be challenging [57]. In extreme cases of nonlinearity, even sophisticated AI techniques may struggle to capture the underlying patterns accurately [26].
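To make the nonlinearity point concrete, the following sketch (entirely synthetic data and scikit-learn defaults, not the pipeline of any cited study) shows a tree ensemble capturing a nonlinear response between two hypothetical predictors (say, a vegetation index and cumulative rainfall) and "yield" that a linear model cannot:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)

# Synthetic "yield" with a nonlinear response to two hypothetical
# predictors plus a small amount of noise.
X = rng.uniform(0, 1, size=(500, 2))
y = np.sin(3 * X[:, 0]) * X[:, 1] ** 2 + rng.normal(0, 0.02, size=500)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

linear = LinearRegression().fit(X_tr, y_tr)
forest = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)

r2_linear = r2_score(y_te, linear.predict(X_te))
r2_forest = r2_score(y_te, forest.predict(X_te))
print(f"Linear R2: {r2_linear:.2f}, Random forest R2: {r2_forest:.2f}")
```

On this toy problem the ensemble clearly outperforms the linear baseline; on real yield data, of course, the gap depends on how nonlinear the underlying relationships actually are.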
Another challenge associated with AI models is the difficulty in interpreting and explaining their outputs [58], largely due to their inherent “black-box” nature [48,56]. Although a deep understanding of the model’s internal workings is not strictly required for its application, the lack of transparency makes it harder to identify weaknesses and refine aspects that do not perform as expected [42,64]. Ensemble learning models pose a particular challenge for interpretability, as their potential to improve accuracy often comes at the cost of reduced transparency, limiting their practical applicability [16,43]. In response, some researchers have sought to enhance interpretability [48], though many note that domain experts often find certain relationships identified by these models to be counterintuitive [21].
The difficulty of yield prediction varies significantly depending on both the type of data used and the representativity of the datasets in the experiments. Studies focused on a single geographic area tend to achieve higher accuracy but at the expense of lower generalizability [2,16,20,21,24,25,26,27,28,29,30,31,32,34,35,36,39,40,41,42,43,44,48,49,50,53,54,57,58,59,60,61,62]. Generalization between different wheat varieties can also be difficult to achieve [38,45,46,51]. Additionally, the time series length used in the experiments is often insufficient to fully capture the seasonal variability of crops, as agricultural conditions can vary significantly across different growing seasons [24,40]. As a result, the accuracy levels reported in the literature vary widely, reflecting differences in data sources, environmental conditions, and modeling approaches.
Poor generalization capabilities are often a direct consequence of overfitting. As discussed earlier, if the dataset used for model development fails to capture the full variability of the problem [20,29,46,61], the model may fit the training data distribution too closely but struggle to generalize when applied to unseen data with different distributions [30,47,63]. This issue is further exacerbated in complex models with a large number of parameters [25,32,35], as their increased degrees of freedom allow them to memorize training data rather than learning meaningful patterns [32]. Striking a balance between dataset representativity, model complexity, and predictive accuracy remains a significant challenge [43] and a major limitation in many studies [2].
If factors such as dataset representativeness and overfitting are not properly addressed, the reliability of the reported results may be compromised. Some studies [2,35] report extremely high accuracy values in their experiments. While these results are impressive, they raise concerns regarding the realism and generalizability of the models. Such high performance often suggests potential overfitting, particularly when models are trained and tested on limited or insufficiently diverse datasets. In many cases, datasets may be collected from homogeneous environments, or validation may be conducted using simple train/test splits without employing more robust methods like k-fold cross-validation or independent external testing. Consequently, the reported accuracies may not translate effectively to broader, more variable agricultural conditions. It is therefore critical to interpret these results cautiously, recognizing that reported metrics may not fully reflect model performance under real-world, field-scale applications. Future research should prioritize rigorous validation protocols and the use of diverse, multi-location datasets to ensure the development of more generalizable and reproducible AI models.
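The validation concern raised above can be illustrated with a minimal k-fold protocol (sketched here with scikit-learn on synthetic regression data; the model and dataset are placeholders, not taken from any cited study), which yields a distribution of scores rather than a single, possibly lucky, train/test split:

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import KFold, cross_val_score

# Synthetic stand-in for a tabular yield dataset.
X, y = make_regression(n_samples=300, n_features=8, noise=10.0, random_state=0)

model = GradientBoostingRegressor(random_state=0)

# 5-fold cross-validation: every sample is used for testing exactly once,
# exposing variability that a single split would hide.
cv = KFold(n_splits=5, shuffle=True, random_state=0)
scores = cross_val_score(model, X, y, cv=cv, scoring="r2")

print(f"R2 per fold: {np.round(scores, 2)}")
print(f"Mean R2: {scores.mean():.2f} +/- {scores.std():.2f}")
```

Reporting the per-fold spread, not just the mean, is itself informative: a large spread across folds is an early warning that results may not transfer to new conditions.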
Although deep learning has been steadily replacing traditional AI techniques in many domains, shallow neural networks and other machine learning models still predominate in yield and biomass prediction [16,26,43,61], with some exceptions [25,37,41]. This is primarily because satellite-derived data, which have been widely used for decades, have already been successfully processed using well-established traditional methods [39]. Additionally, time-series analysis with deep learning remains challenging in certain scenarios, particularly when the number of available samples is relatively low [21,24,39]. Another challenge in applying deep learning techniques to yield estimation is the limited availability of large, annotated yield datasets that can serve as reliable references for model development [40,48,49,56,60,61,64].
One of the challenges associated with traditional machine learning models is their reliance on carefully designed feature extraction for optimal performance [49]. In many cases, standard features such as vegetation indices are insufficient for producing reliable estimates [31,38,41], particularly due to the variability introduced by different crop growth stages [27,57] and to limited sensitivity to photosynthesis [39]. As a result, there is often a need to develop custom features tailored to the specific conditions of the dataset in order to improve model accuracy [33]. However, these tailor-made features can be highly sensitive to even minor variations in data distribution, which can compromise model robustness and make the entire process more challenging [2]. Additionally, when the number of features is too high, the dataset may include a significant amount of redundancy and irrelevant variables, which can negatively impact model performance. In such cases, effective feature selection or combination becomes essential to reduce dimensionality, eliminate noise, and enhance model accuracy [35,42,59].
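The vegetation indices mentioned above are typically simple band arithmetic. As one example, NDVI can be computed from near-infrared and red reflectance as (NIR − Red) / (NIR + Red); the sketch below uses toy reflectance values, not data from any cited study:

```python
import numpy as np

def ndvi(nir: np.ndarray, red: np.ndarray) -> np.ndarray:
    """Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red)."""
    nir = nir.astype(float)
    red = red.astype(float)
    denom = nir + red
    # Guard against division by zero on dark pixels.
    return np.where(denom == 0, 0.0, (nir - red) / np.where(denom == 0, 1, denom))

# Toy 2x2 reflectance bands: healthy vegetation reflects strongly in NIR.
nir = np.array([[0.60, 0.55], [0.20, 0.05]])
red = np.array([[0.10, 0.08], [0.15, 0.05]])

print(np.round(ndvi(nir, red), 2))
```

Because indices like this collapse full spectra into a single scalar per pixel, it is unsurprising that they are sometimes insufficient on their own, motivating the custom features discussed above.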
One way to avoid complex feature engineering is through the use of deep learning techniques, which can implicitly learn and extract relevant features to characterize the data under analysis. While this approach is often practical and efficient, the inherent “black-box” nature of deep learning models poses challenges. It becomes difficult to verify whether the extracted features are scientifically meaningful, and manual fine-tuning of the models is often hindered [21].
In many cases, obtaining high-quality, long-term satellite and climatic data for a specific region is challenging due to missing values, inconsistencies [24,48], and data corruption caused by factors such as cloud cover [31,40,52,53] and noise [2,53]. Additionally, limited satellite coverage and low revisit frequency are common issues that not only hinder the use of data-intensive techniques but also significantly restrict the generalizability of models [56]. Other types of data, such as historical production records and agronomic field data, may also exhibit inconsistencies, which can negatively affect model performance if left unaddressed [42]. As such, the application of correction or normalization techniques is often necessary to ensure data quality and reliability [33].
Data inconsistencies and fluctuations can often be partially mitigated through preprocessing techniques [32,34,50,61]. However, challenges such as handling missing values and normalizing datasets may never be fully resolved, as these issues can persist depending on the quality and variability of the data [25,46]. While some preprocessing techniques are standardized and validated across diverse conditions, others are specifically tailored to the dataset used in individual studies [46]. This case-specific approach may limit the direct applicability of preprocessing methods to different regions or crops [24], further exacerbating the lack of generalizability. Moreover, preprocessing is often applied without prior evaluation of its effects, which can be problematic. In many cases, results may actually improve without preprocessing, highlighting the need for careful assessment before its implementation [5].
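Two of the routine preprocessing steps discussed above, gap filling and normalization, can be sketched as follows (a hypothetical weekly NDVI series with cloud-contaminated gaps; the values and choice of linear interpolation and min-max scaling are illustrative, not prescriptive):

```python
import numpy as np
import pandas as pd

# Toy weekly NDVI series with cloud-contaminated (missing) observations.
series = pd.Series(
    [0.31, 0.35, np.nan, 0.44, np.nan, np.nan, 0.58, 0.61],
    index=pd.date_range("2024-04-01", periods=8, freq="W"),
)

# 1. Fill gaps by linear interpolation along the time axis.
filled = series.interpolate(method="time")

# 2. Min-max normalization to [0, 1] for model input.
normalized = (filled - filled.min()) / (filled.max() - filled.min())

print(filled.round(3).tolist())
print(normalized.round(3).tolist())
```

As the surrounding text cautions, each such step should be evaluated rather than applied by default: interpolation across a long cloudy gap, for instance, can fabricate a smooth trajectory where the crop actually experienced a sharp event.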
Wheat yield is highly sensitive to climate variability [20,26,32,38,42,57,58], including factors such as drought, rainfall, and temperature fluctuations, which are inherently difficult to predict [25,35,43]. In addition, variations in soil properties and management practices can exert a substantial influence on yield [16,30,31,35,63]. Even government policies, such as subsidies, land use regulations, and water access restrictions, can significantly affect crop productivity [33]. This adds complexity to modeling efforts and can result in large estimation errors under certain conditions [2,21,24,64], especially if some of those variables are not explicitly incorporated into the model [44,52,53,54,60]. The challenge is further compounded by the fact that certain climatic variables exhibit weak or nonlinear correlations with wheat yield [44].
The variability issue can be mitigated when long-term temporal datasets are available (which is not always the case [41,42,49,50,59]), as they increase the likelihood of capturing rare or extreme events [39], thereby enhancing the model’s robustness and adaptability to such variations. However, if the temporal resolution of the data is too coarse [57], it may fail to capture short-term yield fluctuations [34], potentially overlooking critical growth stages or environmental events that significantly impact crop performance [28]. Additionally, with longer time series, the influence of technological advancements becomes significant, necessitating preprocessing and detrending to ensure data consistency [44,55]. In any case, incorporating a diverse set of variables, rather than relying on a single data type, can significantly enhance model robustness by providing a more comprehensive representation of the crop system and its interactions with environmental factors [28,54]. Failing to adopt a more systemic perspective may compromise model performance, as essential components of the system can be overlooked or inadequately represented [38,50,51,53,57,62].
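The detrending step mentioned above removes the slow, technology-driven component of long yield series so that the model can focus on weather-driven anomalies. A minimal sketch (hypothetical 30-year series with a linear trend; real studies may prefer more flexible detrending) follows:

```python
import numpy as np

# Hypothetical 30-year yield series (t/ha) with a steady technology-driven
# upward trend plus weather-driven variability.
rng = np.random.default_rng(7)
years = np.arange(1990, 2020)
yields = 2.0 + 0.05 * (years - 1990) + rng.normal(0, 0.2, size=years.size)

# Fit and subtract a linear trend, leaving the year-to-year anomalies.
slope, intercept = np.polyfit(years, yields, deg=1)
trend = slope * years + intercept
anomalies = yields - trend

print(f"Estimated trend: {slope:.3f} t/ha per year")
print(f"Anomaly mean: {anomalies.mean():.4f}")
```

By construction the residual anomalies are centered on zero; whether a linear trend is adequate, or a quadratic or locally weighted alternative is needed, depends on the region and period studied.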
Integrating multiple data sources presents a significant challenge [30,54,55], particularly when datasets differ in temporal and spatial resolutions [28,40,43,45,49,59]. Addressing these inconsistencies often requires extensive preprocessing and feature engineering [29,31,39,43,47,60], which can be both time-consuming and error-prone [30]. As a result, some studies opt to use a more limited set of variables [31,33,43], which may be insufficient to fully capture the complexity and variability of the crop [25,26,27,39,40]. Emerging research areas such as data fusion [16,52,56,65] and multimodality [66] are already making significant strides in tackling these challenges [32], enabling more effective and comprehensive data integration.
While a significant portion of satellite data still lacks the spatial resolution necessary for fine-grained yield estimation [54,55,57,58,60], high-resolution imagery (with a ground sampling distance, GSD, better than 20 m) is becoming increasingly accessible. However, as resolution improves, so do the associated computational demands [52]. The computational power required for model training is a frequently cited bottleneck in the literature [2,16,24,36,38,53,54,55,56,60]. As computational infrastructure continues to advance, the development of increasingly larger AI models poses challenges for institutions without dedicated data centers or with limited resources to afford cloud services capable of supporting such demands [52]. It is important to note, though, that while many models require substantial computational resources for training, their inference phase is often much less demanding. In some cases, these models can even run efficiently on portable devices with limited computational power, making them more accessible for real-world applications. Conversely, models that are computationally expensive during inference may face significant constraints for real-time or mobile deployment [2,21,33,34,40,46,58]. This limitation often necessitates further research and development to optimize model efficiency and make the technology practical and deployable.
3.2. Disease Management
Table 2 presents all articles focused on disease management, following a structure similar to that of Table 1 for consistency and ease of comparison.
Table 2.
References related to disease management.
In contrast to yield prediction, which still sees widespread use of traditional machine learning approaches, disease detection is overwhelmingly dominated by deep learning techniques, particularly convolutional neural networks (CNNs), with only a few notable exceptions [70,83,96,101]. For most crops, disease detection and management rely heavily on leaf images, as leaves are typically where the earliest and most visible symptoms appear [110]. However, in the case of wheat, the narrow shape and positioning of leaves make them difficult to image effectively. As a result, many approaches instead focus on kernel [82,99,100,111] or ear (spike) images [68,69,74,93,94,97,101,103,105,106,108,111], which sometimes provide more accessible and informative visual cues for detecting diseases. Most studies focused on wheat imagery have utilized ground-based image collection, which offers high resolution and close-range detail. However, an increasing number of studies have also explored the use of UAV-based methods [71,85,91,98,101,104,107,109], broadening the scope of data sources for wheat analysis.
Deep learning techniques are inherently data intensive, requiring large, diverse datasets that capture the full variability of the problem to achieve reliable performance [5]. With the exception of highly specific applications constrained to a narrow set of conditions, building truly representative datasets for disease detection and recognition has proven largely unfeasible [1,74,76,79,84,86,96,97,106]. As a result, many studies rely on limited datasets for both training and testing, often producing overly optimistic and unrealistic performance results [68,69,70,88,102,107]. For example, Azimi et al. [68] reported a perfect accuracy of 1.00 in their classification tasks. However, their model was trained and tested on a relatively small dataset collected under controlled conditions, which limits environmental variability and may inflate performance metrics. Similarly, ref. [94] also achieved an accuracy of 1.00, but the lack of external validation across diverse geographic regions raises concerns regarding model generalizability. These results suggest that overly optimistic performance metrics may stem from methodological oversights, such as insufficient dataset diversity, inadequate validation protocols, or overfitting to training data. A more critical evaluation of dataset composition and validation strategies is essential to assess the true robustness and practical applicability of AI models in wheat research.
To address the lack of data, data augmentation is commonly applied, particularly in the case of digital images [1,69,72,73,74,78,87,108,109,111]. While this strategy can help mitigate data scarcity, even advanced techniques like Generative Adversarial Networks (GANs) and Frequency Domain Adaptation (FDA) generate synthetic data that may introduce biases and unrealistic artifacts, ultimately limiting their effectiveness [75]. Given these constraints, the results reported in the literature must be interpreted with caution and considered in light of the experimental context in which they were obtained, as they are unlikely to reflect the true accuracy achievable under real-world conditions [5]. While some studies acknowledge these limitations, many fail to report this critical caveat, which can undermine the credibility and generalizability of their findings.
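Basic geometric augmentation of the kind these studies apply can be sketched in a few lines. The sketch below is a minimal illustration using only NumPy, with flips, rotations, and mild noise standing in for the far richer pipelines (and GAN/FDA synthesis) used in practice; the image here is a random stand-in for a real leaf or ear photograph.

```python
import numpy as np

def augment(image, rng):
    """Return simple geometric and noise variants of one H x W x C image.

    A minimal illustration of common augmentation; real pipelines add
    colour jitter, random crops, scaling, and synthetic data generation.
    """
    variants = [image]
    variants.append(np.fliplr(image))              # horizontal flip
    variants.append(np.flipud(image))              # vertical flip
    for k in (1, 2, 3):                            # 90/180/270 degree rotations
        variants.append(np.rot90(image, k))
    noisy = image + rng.normal(0, 5, image.shape)  # mild Gaussian noise
    variants.append(np.clip(noisy, 0, 255))        # keep valid pixel range
    return variants

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(64, 64, 3)).astype(float)  # stand-in image
augmented = augment(img, rng)
print(len(augmented))  # -> 7 (original plus six variants)
```

Each original image thus yields several training samples, though, as noted above, such variants cannot substitute for genuinely diverse field data.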
Most disease detection and recognition efforts rely on digital images of symptoms that are either visibly apparent or detectable through spectrum-based sensors [95,112]. A major challenge in this context is the wide variety of plant disorders, many of which produce similar physiological and visual alterations [73,78,80,81,84,86,90,102,111]. Ideally, a dataset should include examples of all relevant disorders to enable accurate discrimination. However, despite significant strides made by some studies to achieve this goal [87], attaining truly comprehensive coverage remains virtually unfeasible. As a result, most studies are limited to a narrow subset of disorders, often ignoring other potential causes of the observed symptoms [1,69,70,74,78,80,83,95,96]. This leads to models that are constrained to select from the known classes, even when the input belongs to an unseen or unrelated category, potentially yielding inaccurate predictions [72,90,95].
Some researchers have attempted to address this by introducing an “other”, or “I do not know”, class to capture unknown or unmodeled conditions, but defining and representing this class meaningfully in the training data remains a significant challenge [87]. This issue is somewhat less critical when the focus is on a single disease, turning the problem into a binary classification between the target disease and all other conditions [68,71,74,89,91,94,97,106,107,111]. Still, this approach is not without limitations, as many non-target disorders may exhibit symptoms that overlap with the class of interest, leading to potential misclassifications [104].
To address the limitations of traditional classification methods, more advanced techniques, such as few-shot learning and one-shot learning, have been explored for their potential to recognize previously unseen classes with limited labeled examples. These approaches have shown promise in plant disease monitoring and detection [113,114], offering a pathway toward more adaptable diagnostic systems. However, in the specific context of wheat diseases, the existing literature remains scarce; only a handful of conference proceedings mention the use of such methods, and to date, no peer-reviewed journal articles have demonstrated their successful application. As a result, the problem of generalizing to unseen disease classes in wheat remains a fundamental and unresolved challenge, for which no robust or scalable solutions have yet been established.
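To make the few-shot idea concrete, the sketch below implements nearest-prototype classification in the spirit of prototypical networks: each class prototype is the mean embedding of a handful of labelled support images, and queries are assigned to the closest prototype. The two-dimensional "embeddings" are toy stand-ins; a real system would obtain them from a pretrained feature extractor, and this is not a method drawn from the wheat literature reviewed here.

```python
import numpy as np

def prototypes(support_emb, support_labels):
    """Mean embedding per class from a few labelled 'support' examples."""
    classes = np.unique(support_labels)
    return classes, np.stack([support_emb[support_labels == c].mean(axis=0)
                              for c in classes])

def classify(query_emb, classes, protos):
    """Assign each query to the nearest class prototype (Euclidean distance)."""
    d = np.linalg.norm(query_emb[:, None, :] - protos[None, :, :], axis=2)
    return classes[d.argmin(axis=1)]

# Toy 2-D embeddings: two disease classes, three support images each.
support = np.array([[0.0, 0.0], [0.1, 0.0], [0.0, 0.1],
                    [1.0, 1.0], [0.9, 1.0], [1.0, 0.9]])
labels = np.array([0, 0, 0, 1, 1, 1])
cls, protos = prototypes(support, labels)

queries = np.array([[0.05, 0.05], [0.95, 0.95]])
print(classify(queries, cls, protos))  # -> [0 1]
```

A new disease class can be added at inference time by supplying a few support images for it, which is precisely the adaptability that motivates these methods.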
Almost all studies included in this review assume the presence of only a single disease at the time of detection. However, in real-world scenarios, it is common for multiple diseases or disorders to co-occur, leading to overlapping symptoms and increased diagnostic complexity [81,86,89,95]. Under such conditions, model behavior can become unpredictable, and error rates typically rise [73,75,78,87,92,96,102]. One potential approach to address this issue is to shift the focus from diagnosing the entire plant organ to analyzing individual lesions or symptomatic regions, enabling multi-label classification [110]. However, this strategy introduces significant challenges, particularly the need for accurate localization and segmentation of each lesion prior to classification, steps that are often complex and computationally demanding. Some authors have attempted to treat different combinations of diseases as distinct classes; however, the limited number of samples representing these combinations resulted in relatively low classification accuracy [75].
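The multi-label formulation mentioned above differs from standard single-label classification mainly at the output layer: each disease receives an independent sigmoid score, so any subset of labels can fire for one lesion image, whereas a softmax forces exactly one winner. A minimal sketch (the disease names and logit values below are hypothetical):

```python
import numpy as np

DISEASES = ["rust", "septoria", "powdery_mildew"]  # hypothetical label set

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def multilabel_predict(logits, threshold=0.5):
    """Independent sigmoid per disease: co-occurring disorders can all be
    flagged, unlike a softmax head that must pick a single class."""
    probs = sigmoid(np.asarray(logits, dtype=float))
    return [d for d, p in zip(DISEASES, probs) if p >= threshold]

# Logits for one lesion image (stand-in values): two diseases present.
print(multilabel_predict([2.0, 1.5, -3.0]))  # -> ['rust', 'septoria']
```

The harder part in practice, as the paragraph above notes, is localizing and segmenting each lesion before such a head can be applied.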
The primary goal of plant disease recognition technologies is to enable the earliest possible detection of problems, allowing for timely interventions that can minimize crop losses [81,91,95,101,107]. Conventional RGB sensors have become widely available, and even low-end consumer-grade devices are capable of capturing images with sufficient quality and resolution. As a result, RGB imaging has been extensively employed in disease detection efforts [1,67,69,70,73,74,75,76,80,81,86,90,102,103]. However, a significant limitation of RGB-based methods is that visible symptoms often appear only after substantial damage has already occurred, at which point preventive measures may no longer be effective [69]. This has driven growing interest in more advanced sensing technologies [112], including spectrometry [99], multispectral [71,85,98,107,115], thermal [85], and particularly hyperspectral sensors [82], which offer high spectral resolution capable of detecting subtle physiological changes in plants before visual symptoms manifest [94,95,101,104,105,116].
Numerous studies have demonstrated the potential of hyperspectral imaging for early-stage disease detection; however, even in these cases, detection accuracy typically improves at later stages of disease development [104]. In addition, the high cost of these sensors remains a major barrier to widespread adoption [94,100,105]. The challenge is even more pronounced when such sensors are mounted on unmanned aerial vehicles (UAVs) [95,101,104], as the risk of damage or accidents is relatively high and obtaining insurance coverage for such equipment is often difficult [117]. An alternative approach involves deploying hyperspectral sensors on satellites, which eliminates some logistical risks. However, the ground sampling distance (GSD) of current hyperspectral satellite platforms is still too coarse for early stress detection, limiting their utility to cases where the affected area is already sufficiently large to be detected from orbit [91].
In many cases, relying on a single type of sensor does not provide sufficient information to fully resolve complex agricultural problems. Combining multiple sensor types offers a promising solution, and recent studies have successfully applied multimodal learning and data fusion techniques to improve the detection and recognition of wheat diseases. However, integrating heterogeneous data remains a technically challenging task, often requiring sophisticated preprocessing, normalization, and the development of custom features to ensure compatibility and effectiveness across data sources [71].
With the predominance of deep learning techniques in plant disease detection and recognition, computational requirements have become a critical consideration, particularly during the training phase. Many of the challenges discussed in the context of yield prediction also apply here and will not be reiterated. However, a key distinction lies in the operational requirements of each task. Unlike yield prediction, which typically does not demand real-time processing, disease recognition often requires rapid responses, especially for field-based applications such as smartphone apps for symptom identification [72]. In such scenarios, it is essential to consider the use of lightweight models optimized for fast inference, even if this comes at the expense of a modest reduction in accuracy. Prioritizing efficiency and responsiveness is crucial when deploying AI tools in real-world agricultural settings where timely decision-making can significantly impact outcomes [1,74,87,89,92,102].
3.3. Other Stresses and Damages
Table 3 presents all the articles that focus on plant stresses other than diseases.
Table 3.
References related to other stresses and damages.
The techniques and methods found in the literature addressing plant stresses share many similarities, meaning that several observations made in the section on plant diseases are also applicable here. Nevertheless, certain stress-specific approaches warrant distinct discussion.
In the context of weed detection and management, a major challenge lies in distinguishing weeds from wheat when their visual characteristics are highly similar [118,119,121,123], and even their spectral signatures can be closely related [124]. Additional complexity arises from plant overlapping and occlusion, which significantly hampers accurate detection [120,121,123,125,126]. To enhance model accuracy and prepare for future herbicide-specific recommendations, some studies have opted to create separate classes for each weed species [17,119,120,123,125], with a few works considering up to ten species [121]. However, this strategy presents challenges, especially in detecting and classifying weed species not included in the training set [17]. Moreover, class imbalance can negatively impact the recall of underrepresented classes [17,121,123].
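A common remedy for such imbalance is to reweight the loss by inverse class frequency, so that rare weed species contribute more per sample and are not drowned out by the dominant crop class. A minimal sketch with hypothetical species labels:

```python
import numpy as np

def inverse_frequency_weights(labels):
    """Weight each class inversely to its sample count, normalised so the
    weights average to 1; underrepresented species then carry more weight
    in the training loss."""
    classes, counts = np.unique(labels, return_counts=True)
    w = counts.sum() / (len(classes) * counts)
    return dict(zip(classes.tolist(), w.tolist()))

# Hypothetical labels: wheat dominates, one weed species is rare.
labels = ["wheat"] * 90 + ["ryegrass"] * 8 + ["wild_oat"] * 2
print(inverse_frequency_weights(labels))
```

Such weights (or equivalent resampling) are a standard complement to the augmentation strategies discussed next, though they cannot create the missing visual diversity.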
Due to limitations in the datasets used during experiments, such as restricted diversity in conditions, geography, and species, generalizing to unseen data remains difficult [118]. To address this, several authors have adopted data augmentation techniques [17,119,120,125], with some employing advanced augmentation strategies [122]. Although conventional RGB sensors are the most commonly used [17], some studies have explored multispectral imaging as an alternative for enhancing spectral discrimination [124]. Data collection is typically performed using ground-based cameras [119,120,121,122,125,126] or UAV-mounted systems [17,118,124], while satellite imagery is generally avoided due to its insufficient spatial resolution for weed-level analysis.
Although substantial progress has been made in weed detection using AI techniques, the majority of studies focus on post-emergence weeds, where plants are already well developed and easier to distinguish. Early-stage weed detection, however, is critical for timely management interventions and minimizing crop losses. This remains a significant challenge due to the small size of seedlings, spectral and morphological similarity to crop plants, and limited availability of annotated datasets. Addressing these challenges through improved imaging techniques, data augmentation, and transfer learning approaches represents a key opportunity for future research. A few studies have tackled the problem of early weed detection [126], although performance tends to be limited for seedling recognition [120,121]. In contrast, better results have been observed when the task involves semantic segmentation rather than classification [122]. Notably, among the reviewed literature, only one study deliberately did not employ deep learning techniques: Su et al. [124] opted for alternative approaches due to a lack of sufficient labeled data.
Pest management and recognition present a distinct set of challenges. Agricultural pests are typically small and may appear in a variety of poses and orientations, making accurate detection difficult for most models [128,130,131]. This results in high variability, and because many datasets fail to capture the full spectrum of visual variations, data augmentation is commonly employed to improve model generalization [130,131].
The use of traps specifically designed to attract target pest species is a common practice in agricultural monitoring. However, within the scope of this review, no study employing such traps for image-based pest detection was identified. Instead, all reviewed works focused on the direct imaging of pests on plant organs, such as leaves and stems [128,130]. One possible reason for this is that traps often accumulate non-target objects, such as other insects, debris, spores, or plant material, which can complicate detection, particularly when the target pests are very small [139].
Most studies concentrate on the detection of a single pest species [128], though some propose methods capable of classifying multiple species [130]. While the latter approach offers richer and more informative outputs, it also introduces the risk of misclassification when species not seen during training are present during inference.
Although the majority of pest detection methods rely on conventional RGB imaging [128], some studies have explored alternative sensing technologies that aim to detect indirect physiological responses of plants to pest presence. These include near-infrared spectroscopy and electronic nose (E-nose) systems [129]. Rather than detecting the pest itself, these approaches attempt to identify plant-level changes, such as variations in volatile organic compound (VOC) emissions, that may indicate pest activity. However, E-nose systems face specific challenges: different plant cultivars emit distinct VOC profiles, and the compounds released may not be pest specific, as they can also reflect responses to other biotic or abiotic stressors [129].
Only two studies listed in Table 3 address evapotranspiration estimation and drought monitoring, yet a few domain-specific challenges can be identified in this context. Climatic data are a crucial input for estimating evapotranspiration; however, such data are often limited in spatial and temporal availability, and the parameters commonly used in modeling may be insufficient to account for the full complexity of factors influencing evapotranspiration dynamics [132]. Moreover, drought is a multifactorial phenomenon influenced by a combination of variables such as precipitation, soil moisture, and vegetation conditions, which complicates the development of a unified predictive model. To address this, studies frequently rely on multi-source data integration, which demands extensive preprocessing and harmonization to ensure consistency across spatial resolutions, formats, and temporal coverage [133].
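As a concrete example of how sparse climatic records constrain evapotranspiration modelling, the Hargreaves-Samani equation from FAO-56 estimates reference evapotranspiration from temperature extremes alone; it is a common fallback when fuller Penman-Monteith inputs are unavailable, though it is not necessarily the method used in the cited studies. The input values below are illustrative stand-ins.

```python
def hargreaves_et0(tmin, tmax, ra):
    """Hargreaves-Samani reference evapotranspiration (mm/day).

    ra is extraterrestrial radiation expressed in mm/day equivalent
    (FAO-56). Only daily temperature extremes are required, which is
    why this formula is popular where climatic records are sparse.
    """
    tmean = (tmin + tmax) / 2
    return 0.0023 * ra * (tmean + 17.8) * (tmax - tmin) ** 0.5

# Stand-in values for a warm day with Ra of about 15 mm/day equivalent.
print(round(hargreaves_et0(tmin=12.0, tmax=28.0, ra=15.0), 2))  # -> 5.22
```

The trade-off the paragraph describes is visible here: the simpler the input requirements, the more of the underlying physics (wind, humidity, radiation balance) the model leaves unaccounted for.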
One of the main challenges in detecting herbicide and pesticide stress is that symptoms often manifest only at later stages, making early identification difficult with traditional methods. To address this, many studies employ sensors capable of capturing the reflectance spectrum of the target, enabling the detection of physiological changes at earlier stages. Common approaches include near-infrared hyperspectral imaging [134] and surface-enhanced Raman spectroscopy (SERS) [135]. Another limitation is that some studies are conducted under controlled conditions, which can hinder the applicability of their models in real-world scenarios [134]. Notably, both studies reviewed here used deep learning algorithms for stress detection [134,135].
The final wheat disorder addressed in this study is lodging, which affects the plant at a structural level. Because lodging is a broad, canopy-level phenomenon, UAV-based imaging is commonly used for its detection [136,137,138]. To improve accuracy under complex field conditions, some studies have combined digital imagery with additional data sources such as Digital Surface Models (DSM) [136]. Multispectral imagery has also been employed, often outperforming RGB sensors in detecting lodging [137,138].
A major limitation in this area of research is the difficulty in collecting large, diverse datasets, which often restricts studies to a single geographic region and wheat variety, limiting model generalization [136,137,138]. Another challenge is that lodging manifests differently depending on the plant’s growth stage, adding further complexity to detection efforts [137]. In most cases, healthy plant data are far more abundant than lodging data, necessitating the use of data augmentation [136,138] or class-balancing techniques such as the Tversky loss function [137]. Notably, all studies reviewed in this context have adopted deep learning approaches for lodging detection [136,137,138].
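The Tversky loss mentioned above generalises the Dice loss via the index TI = TP / (TP + a·FP + b·FN); choosing b > a penalises missed lodged pixels (the rare class) more than false alarms. A minimal NumPy sketch on soft segmentation masks, with illustrative coefficient values:

```python
import numpy as np

def tversky_loss(pred, target, alpha=0.3, beta=0.7, eps=1e-7):
    """1 - Tversky index for soft binary masks.

    With beta > alpha, false negatives (missed lodged pixels) are
    penalised more heavily than false positives; alpha = beta = 0.5
    recovers the Dice loss.
    """
    pred, target = pred.ravel(), target.ravel()
    tp = (pred * target).sum()
    fp = (pred * (1 - target)).sum()
    fn = ((1 - pred) * target).sum()
    return 1 - (tp + eps) / (tp + alpha * fp + beta * fn + eps)

target = np.array([[1, 1], [0, 0]], dtype=float)
perfect = target.copy()
misses = np.array([[1, 0], [0, 0]], dtype=float)  # one false negative

print(round(float(tversky_loss(perfect, target)), 4))  # -> 0.0
print(tversky_loss(misses, target) > tversky_loss(perfect, target))  # -> True
```

In practice the coefficients are tuned to the degree of imbalance, and the loss is applied per batch inside the segmentation network's training loop.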
3.4. Phenotyping and Genetic Selection
Table 4 presents all the articles that focus on phenotyping and genetic selection.
Table 4.
References related to phenotyping and genetic selection.
Phenotyping and genotyping are complementary approaches that, when combined, provide powerful insights into the genetic control and environmental expression of plant traits. This integrated perspective is crucial for advancing crop productivity, resilience, and sustainability. Accordingly, this subsection groups together studies that address either or both dimensions.
Studies focused on phenotyping often face challenges similar to those encountered in yield prediction and stress management. A recurrent issue is the difficulty in constructing truly representative datasets. This limitation undermines model generalization, particularly under high variability conditions [140,146]. Class imbalance is another common challenge [156], which frequently motivates the use of data augmentation techniques [142,146,157,158,159,160]. Occlusions further complicate image-based phenotyping, leading to errors in trait estimation [142,157]. Hyperparameter tuning is also cited as a non-trivial hurdle [145].
Among the sensors used for phenotyping, RGB cameras are the most prevalent [140,146,152,156,157,158], but others such as microscopy [160], multispectral cameras [146], multispectral radiometers [153], and hyperspectral sensors [147,159] are also employed. Hyperspectral imaging, in particular, is effective for detecting physiological traits invisible to the naked eye, although it may suffer from noise due to atmospheric and sensor-related artifacts [147]. In some studies, data are collected in controlled environments using lab or field experiments rather than onboard sensors [145].
Ground-based phenotyping remains the most common practice [140,142,153,156,157,158], although the use of UAVs has expanded since the early 2010s [146]. Nonetheless, determining optimal flight altitude and camera configurations is challenging, especially for hyperspectral setups [147]. Additionally, the ground sampling distance (GSD) from UAVs may be insufficient for capturing early-stage plant traits, which are vital for genetic selection [152]. Satellite imagery currently lacks the spatial resolution needed for most phenotyping applications [152].
Ground-truth generation is another major constraint, particularly when destructive sampling or complex measurements are involved [140]. Moreover, some agronomic indicators like yield lack the spatial precision required for robust model training and evaluation [147]. Annotation challenges are widespread, especially for high-volume datasets [156,160] and traits that involve subjective interpretation [142,158,159]. When field visits are necessary, logistical constraints often limit the number of measurements, prompting the use of interpolation techniques [152].
The traits targeted in phenotyping studies include grain yield [147], leaf area index [140], plant biomass [146], ear counting and length [142], flowering time [156], root characteristics [157], plant counting, height, and tillering [152,159], shoot regeneration frequency [145], awn morphology [156], vegetative cover [158], drought responses [159], stomatal index [160], and stem elongation onset [152].
Genotyping brings its own set of challenges, largely due to the nature of genomic data, which require specialized processing methods. Model tuning in this domain may be more complex than in image-based tasks, due to fewer reference studies, smaller datasets [151], and the intrinsic complexity of the data, which demands meticulous selection of model architecture and parameters [141,143,148,155]. Some studies must handle a mix of binary, ordinal, and continuous variables [149,150]. Additionally, certain traits are influenced by both major and minor genes, which can lead to underfitting or overfitting [143].
Data quality is another concern in genotyping. Missing data are common and need to be managed through filtering [141,153] or manual imputation [150]. Furthermore, effective genomic selection requires accounting for genotype-by-environment interactions [141,148,149,150], a non-trivial modeling challenge. Ground-truth acquisition can also be problematic due to subjective evaluation [143,149].
Traits studied in genotyping-based research include grain yield [141,143,148,149,155], plant height [148,149,155], disease resistance [143,149], days to heading and maturity [148,149,150,155], grain color and protein content [149,155], lodging [149], and anthesis-silking interval [148]. While many studies address one trait at a time, multi-trait models have been proposed to enhance genomic prediction [148], although they are more susceptible to overfitting [151].
Some studies integrate phenotyping and genotyping for a comprehensive trait characterization [144,150,151,153,154,159]. For example, Guo et al. [144] combined manual phenotyping with genotyping-by-sequencing to assess grain yield and related traits. Montesinos-López et al. [150,151] integrated SNP and phenotypic data to predict multiple agronomic traits. Zhang et al. [159] combined high-throughput phenotyping with GWAS to improve drought resistance and yield predictions.
As with other domains in agricultural research, both phenotyping and genotyping are increasingly leveraging deep learning [140,141,142,143,144,146,147,148,149,153,156,157,158,159,160], though shallow neural networks [145,150] and conventional machine learning approaches [152] remain in use for specific data types.
3.5. Spike Detection
Table 5 presents all the articles that focus on spike (ear) detection.
Table 5.
References related to spike detection.
In this review, all studies focused on spike detection and counting rely on digital RGB imagery combined with deep learning techniques. Minor deviations from the standard include the use of stereo RGB images [162] and ultra-wide-angle lenses [177]. Due to limited dataset diversity, data augmentation is commonly employed [13,165,167,169,170,171,172,173,178,179,180,182,186,187,188,189]. Most datasets were built with ground-based images due to the relatively small size of wheat spikes, although UAV imagery has also been widely adopted [13,169,173,176,179,188,189]. A notable portion of the literature relies on the Global Wheat Head Detection (GWHD) dataset [165,167,169,178,179,182,186], which was specifically developed for spike detection tasks [163,164].
Spike detection differs from other detection tasks discussed earlier in several key ways: it is almost always conducted in-field (with a few exceptions [166,175]), the objects of interest are almost always present, and occlusion is significantly more frequent and problematic [13,161,162,163,164,165,166,167,168,169,171,172,173,174,175,176,177,178,179,181,182,183,184,186,187,188,189]. Accordingly, individual spike separation becomes a central challenge in most works [171], with varying levels of success. While many authors have attempted to overcome occlusion through model fine-tuning [13,171,173,179,182,188], others seek improvements at the image acquisition stage [168].
Another major hurdle is the heterogeneity in spike density [165,184]. In some cases, a single image patch may contain between 0 and 120 spikes [163], while in others, up to 10,000 spikes may appear in one image [184]. Such variation introduces difficulties in both annotation and model training/inference.
Due to the complexity of annotation, multiple strategies are found in the literature. The most commonly used are bounding boxes, which offer a straightforward method for object counting and are comparatively easier to annotate [182]. However, they remain labor intensive and prone to subjectivity and error [164,170,174,179,186,187,188,189]. Furthermore, bounding boxes do not easily accommodate occlusions, nor do they enable extraction of more detailed morphological information [163,165,178]. To increase annotation reliability, some authors employed multiple experts and repeated labeling for each image to produce a robust ground-truth [186].
Despite being simpler than segmentation, bounding box annotation may still pose a heavy workload. This has led some researchers to explore point-level annotation, where each spike is marked with a single point, usually at the center [13,184]. This approach reduces annotation time and is effective for object counting, though it can reduce the accuracy of object localization.
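Point annotations are typically converted into a density map whose integral equals the object count, so a network can be trained to regress the map and count spikes by summing its output. A minimal sketch of the target-map construction, where sigma is a tunable smoothing parameter and the point coordinates are stand-ins:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def density_map(points, shape, sigma=4.0):
    """Place a unit impulse at each annotated spike centre and blur it.

    The resulting map integrates to the spike count, so a regression
    network trained on such maps can count by summation.
    """
    canvas = np.zeros(shape, dtype=float)
    for r, c in points:
        canvas[int(r), int(c)] += 1.0
    return gaussian_filter(canvas, sigma=sigma, mode="constant")

points = [(20, 30), (60, 80), (60, 85)]  # three annotated spike centres
dmap = density_map(points, shape=(128, 128))
print(round(float(dmap.sum()), 3))  # ~3.0 (mass near borders can leak)
```

This construction also explains the localization trade-off noted above: the blurred map preserves counts well but discards the precise extent of each spike.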
A third approach involves pixel-level segmentation of the spikes, and occasionally awns [166], which allows for precise delineation and facilitates the extraction of additional traits [162,172]. However, this method is highly labor intensive and subjective, even when supported by computational tools [161,166,172,173,175,177,181,183]. Some authors have combined bounding boxes for detection with segmentation for refinement, achieving enhanced performance [162]. The literature suggests that segmentation is more accurate, particularly under occlusion [161,162,177], but the annotation effort remains a limiting factor.
A fourth, less common approach divides images into patches and performs binary classification (“spikes present” or “spikes absent”) [180]. This technique, used for automatic estimation of the wheat-heading date, is noted to be more robust and easier to annotate than bounding box or segmentation methods in phenological studies.
Although it is desirable to detect viable spikes as early as possible [170,173,175,180], many models struggle during the booting and heading stages, primarily due to confusion with background elements and limited training samples [161,163,170]. Conversely, spike detection at maturity can also be problematic, as ears bend under grain weight and become harder to identify [162,164].
3.6. Grain Classification
Table 6 presents all the articles that focus on grain classification.
Table 6.
References related to grain classification.
The application of AI techniques to wheat grain analysis is primarily concentrated in four areas: the classification of wheat varieties, the identification of damage types, discrimination between bread and durum wheat, and grain counting. While most of the studies reviewed adopt deep learning approaches for these tasks, shallow neural networks and other conventional machine learning methods are still in use [191,197,198,199]. All studies mentioned in this section used data collected in controlled environments rather than in the field.
The classification of wheat grains by variety is crucial for multiple reasons, including quality control, market segmentation, economic valuation, and supply chain management. Consequently, the topic has received considerable attention in the literature. The complexity of this classification task is strongly influenced by the number of varieties involved, which in the studies reviewed ranges from as few as 3 [191] to as many as 41 [14].
While RGB imaging remains widely used, there is a growing interest in sensors capable of capturing the spectral characteristics of wheat kernels. This includes hyperspectral imaging [196,205,206] and soft X-ray imaging [191]. Additionally, sensor fusion strategies, such as combining RGB, SWIR, and VNIR data, have been explored to enhance classification performance [195].
To improve model generalization, data augmentation is commonly applied. Most studies employ standard techniques such as rotation, flipping, cropping, translation, and scaling [14,194,201]. However, more advanced methods have also been adopted. Notably, Passos and Mishra [196] enhanced the input feature space by stacking multiple chemometrically preprocessed versions of the reflectance spectra (e.g., SNV, first and second derivatives), expanding the number of features from 200 to 1200.
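The feature-stacking idea can be sketched as follows: each spectrum is kept raw and also passed through SNV and Savitzky-Golay first and second derivatives, and the versions are concatenated along the feature axis. The sketch stacks four views (200 to 800 features) rather than the six used in [196], and the spectra below are random stand-ins for real reflectance measurements.

```python
import numpy as np
from scipy.signal import savgol_filter

def snv(spectra):
    """Standard normal variate: centre and scale each spectrum
    individually to suppress multiplicative scatter effects."""
    mean = spectra.mean(axis=1, keepdims=True)
    std = spectra.std(axis=1, keepdims=True)
    return (spectra - mean) / std

def stacked_features(spectra, window=11, poly=2):
    """Concatenate raw, SNV, and Savitzky-Golay 1st/2nd-derivative
    versions of each spectrum, multiplying the feature count."""
    d1 = savgol_filter(spectra, window, poly, deriv=1, axis=1)
    d2 = savgol_filter(spectra, window, poly, deriv=2, axis=1)
    return np.hstack([spectra, snv(spectra), d1, d2])

rng = np.random.default_rng(0)
spectra = rng.random((5, 200))        # 5 kernels x 200 wavelengths
features = stacked_features(spectra)
print(features.shape)                 # -> (5, 800)
```

Offering the model several chemometrically preprocessed views of the same spectrum lets it exploit whichever representation best separates the varieties.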
It is important to note that differences between wheat varieties can be subtle, making classification highly sensitive to minor alterations, such as those induced by storage conditions. Although this concern has been acknowledged in the literature [192], none of the reviewed studies explicitly examined whether classification accuracy is maintained when using stored grains as opposed to freshly harvested samples.
The detection of damaged kernels is critical for assessing the quality and marketability of wheat batches. Although only five studies on this topic were included in this review, they employ a diverse array of methods to address the challenge. RGB imaging was used by Gao et al. [190] to classify broken, sprouted, injured, moldy, and spotted kernels, and by Sabanci [199] to detect kernels damaged by sunn pests. Gao et al. noted that distinguishing between five visually similar damage categories posed significant challenges, not only in terms of model performance but also due to increased annotation errors during dataset preparation.
Hyperspectral imaging has also been employed to detect damaged, germinated, and mildewed grains [193], as well as to identify slightly sprouted kernels [204], offering richer spectral information for nuanced classification. In an alternative approach, Yang et al. [203] explored the use of impact acoustic signals to identify kernels affected by mildew or insect damage. In this method, kernels are dropped from a height of 50 cm onto a metal surface, and the resulting sounds are captured by a microphone. These audio signals are then transformed into spectrograms, two-dimensional visual representations of frequency and intensity over time, which serve as inputs for a deep learning model.
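The signal-to-spectrogram step can be sketched with SciPy's short-time Fourier routine. The decaying sine below is a synthetic stand-in for a recorded kernel impact, and the sampling rate and window sizes are illustrative choices, not parameters reported by Yang et al.

```python
import numpy as np
from scipy.signal import spectrogram

fs = 44_100                            # sampling rate in Hz (assumed)
t = np.arange(0, 0.1, 1 / fs)          # 100 ms of audio
# Synthetic impact: a decaying 3 kHz ping standing in for a kernel strike.
signal = np.exp(-40 * t) * np.sin(2 * np.pi * 3000 * t)

# Short-time Fourier magnitude: a frequency x time image suitable for a CNN.
freqs, times, sxx = spectrogram(signal, fs=fs, nperseg=256, noverlap=128)
log_sxx = 10 * np.log10(sxx + 1e-12)   # decibel scale, common for audio
print(sxx.shape)                       # (frequency bins, time frames)
```

Once rendered as an image in this way, the acoustic signal can be fed to the same convolutional architectures used elsewhere in this review for visual data.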
The task of distinguishing between bread and durum wheat was explored in three studies, all led by the same first author [197,198,200]. Two of these studies employed RGB imaging to perform the classification [197,200], while the third utilized a multispectral imaging system covering a broad spectral range from the ultraviolet to the near-infrared [198], thereby capturing more detailed spectral information to improve discrimination. Additionally, the problem of grain counting, important for yield estimation and crop assessment, was addressed by Wei et al. [202], who combined RGB imaging with image augmentation techniques to enhance model robustness and performance. This model was designed for healthy wheat grains and may struggle with broken or irregular grains.
3.7. Other Applications
Table 7 presents all the remaining articles considered in this review.
Table 7.
References related to other applications.
The task of wheat mapping and row identification is inherently grounded in the use of remote sensing imagery, predominantly captured by satellites [52,208,209,210,211], though some studies have also relied on UAV-based data [207]. Among the five studies reviewed on this topic, three employed deep learning models [207,209,211], while the remaining two applied traditional machine learning algorithms [52,208]. Multispectral imagery was the most frequently used data type [52,208,209,211], although RGB [207] and Synthetic Aperture Radar (SAR) imagery [52] have also been incorporated.
With the exception of Fang et al. [208], all studies reviewed applied some form of data fusion. For instance, Cai et al. [207] integrated texture, grayscale, and hue–saturation–value (HSV) features extracted from UAV imagery using a deep learning-based feature fusion framework. Similarly, Luo et al. [209] combined diverse data sources—including satellite-derived vegetation indices (NDVI and LAI), climate variables from TerraClimate, soil properties from the Harmonized World Soil Database, and cropland masks from GFSAD1k—to enhance wheat area mapping and yield estimation. In another example, Tian et al. [52] fused optical imagery (Sentinel-2 and Landsat-8) with SAR data (Sentinel-1) to differentiate between garlic and winter wheat cropping areas. Lastly, Zhong et al. [211] trained deep learning models for winter wheat mapping using fused MODIS time-series NDVI data (from Terra and Aqua satellites) and county-level agricultural statistics from the USDA NASS.
Three main challenges are frequently associated with wheat mapping. First, cloud contamination in optical imagery can significantly degrade dataset quality [52,208]. Second, the spatial resolution (GSD) of some satellite platforms may be too coarse to capture fine-scale variations in wheat fields, leading to mixed pixels that contain multiple land cover classes. While constellations such as Sentinel and Landsat offer moderate resolutions (10–30 m) [52,208], others like MODIS provide much coarser resolutions [209,211]. Third, ground-truth generation presents substantial difficulties across all reviewed studies. For example, Cai et al. [207] noted the complexity of annotating UAV images due to irregular crop row structures and the presence of vacant or cluttered areas. Other studies relied on manual visual interpretation, a process that is both labor intensive and inherently subjective [208]. To improve annotation accuracy, some authors incorporated field surveys [52]. In the case of Luo et al. [209], subnational agricultural census data were used, though these datasets varied in format, quality, and temporal coverage across different countries. Finally, the lack of pixel-level labeled training data was highlighted as a major limitation, impacting both the training and validation of models.
In the context of wheat flour classification, Nargesi et al. [213] employed a hyperspectral imaging system to differentiate between various wheat flour types. Accurate classification is critical, as the misuse of specific flour types can compromise the quality of the final product. The authors noted the need for manual preprocessing, such as sieving to 300 µm, to mitigate spectral noise caused by particle size variation. Complementing this, Shen et al. [214] developed a deep learning model to identify wheat impurities using RGB image data. While the method proved effective, the authors observed that occlusion and overlap between wheat and impurities (e.g., straw or insects) impaired classification accuracy. To improve model robustness, data augmentation techniques, including image rotation and flipping, were applied to the training set.
A more sophisticated approach to impurity detection was proposed by Shen et al. [215], who introduced a method integrating terahertz spectral imaging with convolutional neural networks. This fusion of spectral and spatial information yielded pseudo-color THz images that improved classification accuracy. Despite promising results, the system faced limitations in scalability due to the high cost of THz sensors and the restricted range of impurity types analyzed. Like the previous study, data augmentation was utilized to enhance model generalization.
Beyond the realm of image classification, Bourguet et al. [212] proposed an AI-based argumentation framework to support policy decisions related to wheat-based food quality. Their system synthesizes knowledge from the scientific literature, expert interviews, and regulatory documents to evaluate trade-offs in public health policies, particularly those concerning bread production. Applied to the French PNNS (Programme National Nutrition Santé), the framework facilitated decisions about promoting whole-grain versus refined flour by considering factors such as nutritional benefits, sanitary risks, economic feasibility, and consumer preferences. The study emphasized the complexity of formalizing stakeholder arguments and the reliance on manual expert input.
Both Bartley et al. [216] and Shafaei et al. [217] aimed to estimate grain moisture content, a key factor affecting quality, shelf-life, pricing, and storage risk. The former proposed a non-destructive, real-time method using a microwave transmission system with horn antennas and a network analyzer. The study employed artificial neural networks (ANNs) with input features derived from amplitude, phase, and permittivity values, constituting a form of data fusion. In contrast, Shafaei et al. [217] used hydration time and temperature to predict hydration characteristics, including moisture content, through AI models. Measurements were based on weight changes, without electronic sensors or data fusion. The models used were not deep learning based but relied on traditional methods such as MLP and ANFIS. While both studies addressed moisture prediction, Bartley et al. [216] focused on sensor-driven, real-time estimation, whereas Shafaei et al. [217] employed a lab-based, classical modeling approach.
Two studies addressed nitrogen monitoring in wheat, highlighting its importance for crop health, yield, and environmental sustainability. Nitrogen is vital for chlorophyll production and photosynthesis, and its accurate estimation enables precision fertilization and improved nitrogen use efficiency. Singh et al. [218] used a proximal hyperspectral sensor (ASD FieldSpec) to collect high-resolution canopy reflectance data and applied traditional machine learning models to estimate nitrogen content directly. This method provided detailed spectral insights under controlled conditions. Wu et al. [219] employed multi-temporal UAV multispectral imagery to estimate chlorophyll content (SPAD), a proxy for nitrogen status. Using a DJI Phantom 4 Multispectral UAV, they combined multiple vegetation indices across four time points after wheat heading. This approach, which involved feature- and temporal-level data fusion, supported broad-scale, non-destructive nitrogen monitoring. Four models were tested, including one deep learning algorithm.
Yang et al. [220] proposed DeepTranSpectra (DTS), a deep learning method for transferring calibration models across five different NIR spectrometers. To ensure consistency, spectral data were harmonized through wavelength transformation and interpolation, a form of instrument-level data fusion. The study aimed to predict crude protein content in wheat and soybean meal, an essential parameter for quality control and non-destructive analysis. Due to limited data, the training sets were augmented tenfold using random spectral variations. Although based on simulated scenarios, DTS demonstrated strong potential for improving model transferability and reliability across heterogeneous NIR devices.
Akkem et al. [221] developed a machine learning-based crop recommendation system aimed at improving transparency and trust. The system utilized tabular data from sources like soil, weather, and historical yields, integrating features without applying full data fusion. To address the “black-box” issue, the study employed XAI methods, helping users interpret model outputs. A Streamlit-based interface was also created for interactive visualization. While effective, the authors noted that counterfactual explanations still require further validation in real-world applications.
Bai et al. [222] investigated the use of wheat germ oil and hydrogen in dual fuel mode to improve diesel engine performance and reduce emissions. To avoid extensive experimental trials, the study employed traditional machine learning algorithms to predict key engine parameters. The experimental setup included gas analyzers for emissions, a smoke meter, a piezoelectric pressure transducer, flow meters, and a crank angle encoder. This combination of dual-fuel combustion and machine learning enabled accurate predictions while reducing the need for costly physical testing.
Ghasemi-Mobtaker et al. [223] aimed to support sustainable wheat farming by predicting output energy, economic profit, and global warming potential (GWP). They compared the performance of different ML models to evaluate environmental impacts. Data were collected through field surveys and farmer interviews, without using sensors or remote sensing tools. While this method offered valuable insights, it also posed a risk of response bias due to the subjective nature of interview-based data.
Núñez et al. [224] aimed to optimize amylase production using solid-state fermentation with Rhizopus microsporus and low-cost agro-industrial wastes. The study compared traditional response surface methodology with ANNs combined with genetic algorithms to improve modeling and prediction accuracy. Using ternary mixtures of substrates, the study applied composition-level data fusion to identify optimal substrate combinations. While ANN-GA provided strong predictive performance, the research was limited to laboratory-scale experiments, with no industrial validation.
4. Discussion
The challenges associated with applying AI to wheat production are diverse, encompassing both application-specific issues and broader, cross-cutting barriers that affect nearly all research in the field. Some of these general challenges stand out as the most pervasive obstacles to the wider and more effective adoption of AI technologies in agriculture. This section focuses on discussing these key challenges and proposing potential solutions to address them.
Deep learning methods have generally outperformed traditional machine learning approaches, such as support vector machines (SVMs) and random forests, in tasks like disease detection, yield prediction, and phenotypic trait estimation. This superiority stems from their ability to automatically extract hierarchical features from raw data without the need for handcrafted feature engineering, which is often required in traditional models. For instance, ref. [13] reported that convolutional neural networks (CNNs) achieved higher prediction accuracies for wheat yield compared to classical regression models when applied to UAV imagery. Similarly, ref. [12] demonstrated that deep learning models provided more robust disease classification under variable field conditions than support vector machines. However, it is important to note that deep learning approaches typically demand larger datasets and higher computational resources, which may limit their applicability in certain agricultural contexts.
Crop fields are inherently unstructured environments, where both intrinsic and extrinsic factors introduce significant variability into nearly all types of data collected [81,118]. This issue is especially pronounced in the case of digital images [225], as conditions such as lighting, angle of insolation, plant architecture, soil background, and sensor settings vary widely [226], making it virtually impossible to capture two images under identical conditions [5,110]. High levels of variability usually lead to models with poor generalization capabilities [227,228]. Deep learning models, in particular, are vulnerable to unseen conditions and thus require exposure to data from diverse environments and conditions for reliable predictions [229].
Building datasets that fully capture the entire range of real-world variation is largely unfeasible [230]. In practice, most published studies rely on datasets that fall far short of representing the true diversity of field conditions [227,231]. Consequently, the models developed under such constrained scenarios tend to produce overly optimistic results that fail to reflect real-world performance [232]. This issue is especially pronounced when model performance is validated using a subset of the original dataset rather than an independent, external dataset, which can lead to inflated accuracy metrics and misleading conclusions [35,44]. It is important to note, however, that efforts are currently underway to generate large-scale, annotated public datasets with different types of data [233].
While data augmentation is often used in an attempt to enhance dataset representativeness [227,234], it remains an imperfect and limited solution, frequently insufficient for producing technologies that are truly ready for field deployment [5]. Even with the support of advanced techniques such as GANs [235,236,237,238,239], constructing truly representative datasets remains a significant challenge [240]. In addition, augmentation is not always applied correctly. If data augmentation is performed prior to dividing the dataset into training and test subsets, the random split may result in nearly identical images (differing only slightly due to augmentation) appearing across all subsets. This introduces significant bias into the results. Unfortunately, this flawed approach has been adopted in numerous published studies [96], which are in turn cited as justification for its continued use. Ultimately, the most effective way to overcome data limitations is by collecting additional data across a broader range of environmental and operational conditions. However, achieving such diversity demands considerable effort, which in turn calls for collaboration among research groups and the development of data-sharing networks aligned around common goals.
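The difference between the flawed and the correct ordering can be made concrete with a toy sketch. Integers stand in for images, `augment` is a stand-in perturbation (e.g., a rotation or flip of a real image), and both pipeline functions are hypothetical.

```python
import random

def augment(image):
    """Stand-in augmentation: a tiny perturbation that leaves the
    augmented copy nearly identical to its source."""
    return image + 0.5

def flawed_split(images, test_frac=0.2, seed=0):
    """WRONG: augment first, then split. Near-duplicate pairs can land
    on both sides of the train/test boundary (data leakage)."""
    pool = list(images) + [augment(im) for im in images]
    random.Random(seed).shuffle(pool)
    n_test = int(len(pool) * test_frac)
    return pool[n_test:], pool[:n_test]  # train, test

def sound_split(images, test_frac=0.2, seed=0):
    """RIGHT: split first, then augment only the training subset, so no
    augmented twin of a test image can appear in training."""
    pool = list(images)
    random.Random(seed).shuffle(pool)
    n_test = int(len(pool) * test_frac)
    test, train = pool[:n_test], pool[n_test:]
    return train + [augment(im) for im in train], test
```

With the correct ordering, the identities present in the test set are guaranteed to be absent from training; with the flawed ordering, that guarantee is lost.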
Promoting interdisciplinary collaboration is essential for advancing AI-driven solutions in wheat research. Agronomists and plant pathologists can contribute domain-specific knowledge for accurate ground-truth labeling and agronomic interpretation of results. Remote sensing specialists can aid in selecting optimal data acquisition strategies, while computer scientists and AI researchers can focus on model development, optimization, and explainability. Collaborative efforts should prioritize the creation of large, diverse, and standardized datasets to improve model generalizability. Additionally, the establishment of shared research platforms, open benchmarks, and coordinated field trials would accelerate the transition from experimental results to real-world applications. Funding agencies and academic institutions are encouraged to support interdisciplinary research initiatives that bridge gaps between agriculture and AI.
In particularly complex domains such as plant pathology, even collaborative research efforts may not be sufficient to overcome data scarcity. In such cases, leveraging citizen science and social media-based data collection emerges as a promising solution [110,239]. Citizen science initiatives, which engage farmers and non-expert volunteers in data collection, have already shown success in supporting agricultural machine learning models. For example, the Radiant Earth Foundation [241] has utilized citizen-contributed data for land cover classification and crop type identification across Africa, while the PlantVillage Nuru app [242] enables farmers to monitor plant health through smartphone imagery, generating large and diverse datasets [78,87]. Encouraging similar frameworks in wheat monitoring could greatly enhance the geographic and phenotypic diversity of datasets, while fostering user engagement and technology adoption. Nonetheless, effectively engaging stakeholders across the agricultural ecosystem remains a challenge, often dependent on favorable conditions and appropriate incentives. Moreover, more informal forms of citizen science, such as compiling datasets from online sources, can introduce substantial noise due to inconsistencies in image quality, resolution, and background conditions [227], underscoring the need for careful data curation and validation.
Beyond expanding datasets, advanced learning strategies such as few-shot learning (FSL) and self-supervised learning (SSL) offer promising alternatives to traditional supervised approaches. Few-shot learning methods enable models to generalize from a very limited number of labeled examples, thereby reducing the dependency on extensive annotated datasets. For instance, Uzhinskiy [243] evaluated different few-shot learning methods for plant disease recognition, demonstrating that accurate classification could be achieved even with a minimal number of training samples. Similarly, Ghanbarzadeh and Soleimani [244] showed that self-supervised learning approaches significantly improved remote sensing image classification by enabling models to learn meaningful representations from unlabeled data. Applying such methodologies to wheat monitoring tasks could help address current data limitations, enhancing model robustness and facilitating reliable performance in data-scarce environments.
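The intuition behind few-shot classification can be illustrated with a nearest-class-prototype sketch in the spirit of prototypical networks; the embeddings below are invented, and this is not the specific method evaluated by Uzhinskiy [243].

```python
import numpy as np

def prototype_classify(support_x, support_y, query_x):
    """Few-shot 'nearest class prototype' classification: each class is
    summarized by the mean of its handful of labeled embeddings, and a
    query is assigned to the closest prototype."""
    support_y = np.asarray(support_y)
    classes = sorted(set(support_y.tolist()))
    protos = np.stack([support_x[support_y == c].mean(axis=0)
                       for c in classes])
    # Euclidean distance from every query to every class prototype
    dists = np.linalg.norm(query_x[:, None, :] - protos[None, :, :], axis=2)
    return [classes[i] for i in dists.argmin(axis=1)]

# Two classes, two labeled examples each (toy 2-D embeddings)
support_x = np.array([[0.0, 0.0], [0.0, 1.0], [10.0, 10.0], [10.0, 11.0]])
support_y = [0, 0, 1, 1]
queries = np.array([[0.2, 0.4], [9.5, 10.6]])
predictions = prototype_classify(support_x, support_y, queries)
```

In practice, the embeddings would come from a pretrained feature extractor; the appeal is that only a few labeled examples per class are needed at deployment time.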
The integration of heterogeneous data sources such as genomic, phenotypic, environmental, and management information has become essential in agricultural AI research [245,246]. Combining different types of images has also been frequently explored [11]. Known as data fusion, this process allows models to capture complex interactions and improve predictive performance [111,232,247,248]. Farooq et al. [249] highlight its role in strengthening genotype–phenotype associations, while other authors note that combining different types of remote sensing data enhances the accuracy of deep learning models [22,235,250,251]. In addition, Darwin et al. [252] emphasize that including contextual variables during modeling is crucial for improving reliability. Despite its advantages, implementing data fusion poses technical challenges. These include the need for dense, high-quality datasets and robust models capable of handling variable formats and scales [253,254]. Overall, while data fusion holds clear potential, its success depends on both computational strategies and comprehensive datasets.
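A minimal example of feature-level fusion, assuming each source has already been reduced to a per-sample feature vector; the function name and the toy values are illustrative only.

```python
import numpy as np

def feature_level_fusion(*sources):
    """Feature-level fusion: standardize each source separately (so that
    sources with different units and scales contribute comparably), then
    concatenate along the feature axis."""
    blocks = []
    for x in sources:
        x = np.asarray(x, dtype=float)
        std = x.std(axis=0)
        std[std == 0] = 1.0  # guard: constant features would divide by zero
        blocks.append((x - x.mean(axis=0)) / std)
    return np.concatenate(blocks, axis=1)

# Toy example: 3 samples with 2 spectral features fused with 1 weather feature
spectral = [[0.10, 0.20], [0.30, 0.40], [0.20, 0.30]]
weather = [[18.0], [30.0], [24.0]]
fused = feature_level_fusion(spectral, weather)
```

Per-source standardization is one simple answer to the scale-mismatch problem noted above; real pipelines must additionally align samples in time and space before features can be joined row-wise.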
Some problems require multi-class classification, where the data must be categorized into one of several possible classes. In such cases, it is common for some classes to be significantly more frequent than others [92,111,173,227,237]. For example, certain wheat diseases may occur almost every season, while others appear only sporadically [238]. This results in severe class imbalance, which must be properly addressed to prevent the development of biased models that underperform on underrepresented classes [68,87,95,98,255]. A variety of techniques are available to handle class imbalance, including resampling methods, cost-sensitive learning, and data augmentation [69,73,90,233]. However, the choice of method should be made carefully, taking into account the specific characteristics and constraints of the problem at hand [240].
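One common cost-sensitive remedy is inverse-frequency class weighting. The sketch below mirrors the widely used n / (k * n_c) heuristic (the same scheme as scikit-learn's `class_weight='balanced'`) and is illustrative, not a prescription from any reviewed study; the class labels are invented.

```python
from collections import Counter

def inverse_frequency_weights(labels):
    """Cost-sensitive class weights: class c gets weight n / (k * n_c),
    where n is the dataset size, k the number of classes, and n_c the
    class count, so rare classes contribute proportionally more to the
    training loss."""
    counts = Counter(labels)
    n, k = len(labels), len(counts)
    return {c: n / (k * counts[c]) for c in counts}

# Toy imbalanced dataset: a frequent disease vs. a rare healthy class
weights = inverse_frequency_weights(['rust'] * 90 + ['healthy'] * 10)
```

These weights can be passed to most loss functions or estimators; whether weighting, resampling, or augmentation works best still depends on the problem's characteristics, as noted above.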
Another important data-related challenge, particularly relevant to prediction and estimation tasks, is the need for accurate ground-truth values to serve as reference points and training targets for the models [227]. However, generating ground-truth data is often labor intensive [1,71,76,108,233], costly, and, in some cases, destructive [140], which adds logistical complexity and increases the overall cost of the research [32,60]. Crowdsourcing [11,22] and automated labeling tools [235] offer valuable support, but they frequently introduce errors that can distort both training and validation processes. To mitigate these ground-truth issues, some studies have adopted weak supervision strategies [22], for example using high-accuracy classification outputs from traditional machine learning methods as proxy labels for training deep learning models [91]. Some authors have emphasized the need for semi-supervised, unsupervised, and self-supervised learning approaches to reduce reliance on manually labeled data [226].
Moreover, the process of establishing ground-truth can involve subjective judgment, especially in field-based evaluations [91,102,111,227], which introduces uncertainty and reduces the reliability and reproducibility of the results [78,92,232,256]. Additionally, inter-annotator variability can be substantial, underscoring the importance of involving multiple experts or adopting consensus-based strategies to ensure reliable labeling [227]. Although there are no straightforward solutions to the challenges of ground-truth generation, it is crucial that studies explicitly disclose potential sources of error in their annotation processes. Such transparency enables a more nuanced interpretation of the findings and enhances the overall credibility and reproducibility of the research.
High computational demand is a recurring challenge in the application of artificial intelligence, particularly in deep learning [226]. When computational burden arises on the training side, there are technically viable solutions, such as the use of GPUs, cloud computing, or model parallelization, that can reduce training time to acceptable levels [72]. However, these solutions often come with significant financial costs, which may be prohibitive for some research groups or institutions [257]. In contrast, when models are computationally intensive during inference, it can severely limit their practical usability, especially when deployment is intended on devices with limited processing capabilities, such as smartphones or edge devices [258].
That said, it is important to recognize that not all applications require real-time or near real-time operation [237,240,247,259]. In some cases, inference times measured in minutes or even hours may be perfectly acceptable, depending on the urgency and context of the task at hand [227,233]. This flexibility opens the door for the use of more complex models in offline or batch processing scenarios, where immediate feedback is not critical. Nonetheless, it is important to note that in precision agriculture applications involving UAVs or robotic systems, near-real-time inference becomes particularly relevant, thereby favoring the use of lightweight and computationally efficient models [225,227,252,254].
An often overlooked but increasingly important issue in agricultural AI applications is data privacy [226,245]. With many countries enforcing strict regulations on data sharing and processing, including the need for explicit consent from landowners or data subjects, ensuring compliance has become a significant challenge [260]. This is particularly problematic for technologies intended for direct use by farmers and rural workers, where ease of deployment is crucial. In response, some studies focused on real-world applications have adopted security measures such as encrypted communication and token-based access [87]. Additionally, recent research has investigated privacy-preserving approaches that eliminate the need for centralized data transfer or sharing. Techniques like federated learning allow models to be trained locally on users’ devices, thereby mitigating legal and ethical concerns related to data movement and aligning with emerging privacy regulations [72,226].
Federated learning (FL) offers a promising decentralized framework for developing AI models while preserving data privacy across different farms and institutions. Although applications of FL in wheat research are still emerging, several case studies in agriculture highlight its potential. For instance, ref. [261] demonstrated the use of FL to collaboratively train crop disease detection models across geographically distributed farms without sharing sensitive data. Similarly, ref. [262] applied FL to precision irrigation management, enabling multiple farms to optimize water usage based on shared model improvements. These examples illustrate how FL can overcome data-sharing barriers, making it a promising approach for future wheat disease monitoring and yield prediction systems across diverse agroecological regions.
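The core FedAvg aggregation step can be sketched in a few lines. The client parameter lists and dataset sizes below are toy values, and a real deployment would add local training loops, secure aggregation, and communication logic.

```python
import numpy as np

def federated_average(client_params, client_sizes):
    """FedAvg aggregation: average each parameter tensor across clients,
    weighted by local dataset size. Only model parameters (never raw
    field data) leave each client."""
    total = float(sum(client_sizes))
    n_params = len(client_params[0])
    return [
        sum(params[i] * (size / total)
            for params, size in zip(client_params, client_sizes))
        for i in range(n_params)
    ]

# Toy example: two farms, each holding one parameter vector of a shared model
farm_a = [np.array([1.0, 1.0])]   # trained on 100 local samples
farm_b = [np.array([3.0, 3.0])]   # trained on 300 local samples
avg = federated_average([farm_a, farm_b], [100, 300])
```

The server broadcasts the averaged parameters back to all clients, and the cycle repeats; each farm's imagery and yield records remain on-premise throughout.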
Despite the growing number of studies exploring the application of AI in wheat production, relatively few practical technologies have successfully transitioned from academic research to real-world farm implementation [232,236,237]. Several factors contribute to this gap between research and adoption. First, the cost–benefit ratio of many AI-based solutions may not be compelling enough to justify their adoption, particularly for small- and medium-sized producers [237,263]. Second, some models are computationally intensive, making them incompatible with the hardware constraints of field-deployable devices [11,225,258]. Third, in some cases, the technologies developed are misaligned with the actual needs and constraints of the intended users, limiting their relevance and usability [12,232,246,264,265]. Fourth, even promising models may underperform under real-world conditions due to the challenges previously discussed, such as poor generalizability, data limitations, and environmental variability [225,227,237]. Finally, some authors cite the lack of connectivity in production areas as a major hurdle for the adoption of the technologies [235].
To bridge this gap, greater emphasis must be placed on translating academic advancements into practical, user-centered technologies that are cost effective, scalable, and responsive to the real needs of farmers and agricultural stakeholders [226,266]. This includes stronger collaboration between researchers [229,247], technology developers, and end users, as well as investments in infrastructure, training, and extension services to support adoption [230]. A simple framework to enable this is suggested in Figure 1. Following these guidelines, successful applications have emerged in areas like cereal quality [255], plant phenotyping [233], yield estimation [257], crop monitoring [266], autonomous irrigation systems [226,263], and beyond. For instance, ref. [91] demonstrated the practical application of drone-based imaging for wheat disease detection under real farm conditions, achieving high classification accuracy despite environmental variability. Similarly, Schirrmann et al. [95] successfully employed UAV-mounted multispectral cameras to detect wheat leaf rust in operational agricultural settings.
Figure 1.
Proposed framework for translating AI research into practical applications in wheat production.
Deployment strategies for such AI-driven tools often require accessible and cost-effective UAV platforms, standardized flight protocols, and basic training for farmers or agricultural technicians to interpret outputs. However, infrastructural needs, including reliable internet connectivity for cloud-based processing and availability of affordable sensor equipment, remain critical barriers to large-scale adoption. Costs for drones and multispectral or hyperspectral sensors, though decreasing, still represent a significant investment for smallholder farmers.
To facilitate the adoption of these technologies, protocols could be developed, emphasizing low-cost drone models equipped with simplified imaging systems, integration with farmer-friendly mobile applications for disease alerts, and partnerships with extension services for capacity building. Successful pilot programs that bundle equipment, software, and training could serve as scalable prototypes for broader deployment.
5. Conclusions
This review examined the current state of the art in artificial intelligence (AI) techniques and models applied to challenges related to wheat crops. The volume of research in this area has been growing steadily, and substantial advances have been made not only in prediction accuracy but also in understanding how AI models generate their outputs. Despite these achievements, numerous challenges and research gaps remain unresolved. Many of these were identified and discussed throughout the article, with potential solutions proposed where feasible.
Emerging trends point to promising directions for future research, particularly in the fusion of heterogeneous data sources and the development of hybrid modeling approaches. For instance, Shen et al. [47] demonstrated that integrating multispectral and thermal imagery significantly improved wheat yield estimation accuracy compared to using either modality alone, highlighting the value of multi-source data fusion in enhancing model robustness and sensitivity to key crop parameters. Such approaches can better capture the complexity of agricultural systems by leveraging complementary information from different sensor types.
Another important trend involves combining deep learning techniques with physical modeling. Cao et al. [30] proposed a hybrid framework that integrates process-based crop models with deep neural networks, enabling models to incorporate domain-specific knowledge while retaining the flexibility and pattern recognition capabilities of AI methods. This hybridization has the potential to improve model generalization under diverse and changing environmental conditions, addressing some of the limitations associated with purely data-driven models. Future research should prioritize the exploration of data fusion strategies that combine satellite, UAV, ground sensor, and meteorological data, as well as the further development of hybrid AI-physical models tailored to specific agricultural tasks such as yield prediction, disease monitoring, and stress detection.
Looking ahead, based on recent developments in AI and crop management, several trajectories appear likely to dominate. AI and deep learning methods are expected to continue advancing rapidly, broadening their applicability across a wide range of crop management tasks. At the same time, progress in model interpretability may enable the development of lighter, more robust architectures suited for deployment in real-world environments. As technical barriers diminish, an increasing number of AI-based technologies should become viable under operational conditions. Although limitations related to data representativeness and model generalization will persist, these challenges are likely to diminish as sensor technologies and data acquisition methods evolve. Additionally, the swift progress in other AI domains may yield unforeseen impacts, as illustrated by the societal influence of conversational models.
Funding
This research was funded by Fapesp, process numbers 2022/09319-9 and 2024/01308-3.
Conflicts of Interest
The author declares no conflicts of interest.
Abbreviations
| Acronym | Meaning |
| ACO | Ant Colony Optimization |
| AdaBoost | Adaptive Boosting |
| AI | Artificial Intelligence |
| AK | Arc-Cosine Kernel |
| ANFIS | Adaptive Neuro-Fuzzy Inference System |
| ANN | Artificial Neural Network |
| ARIMA | Auto-Regressive Integrated Moving Average |
| BPNN | Backpropagation Neural Network |
| BMTME | Bayesian Multi-Trait and Multi-Environment model |
| CEEMDAN | Complete Ensemble Empirical Mode Decomposition with Adaptive Noise |
| CNN | Convolutional Neural Network |
| CW | CERES-Wheat |
| DF | Deep Forest |
| DL | Deep Learning |
| DNN | Deep Neural Network |
| DON | Deoxynivalenol |
| DT | Decision Tree |
| E-MMC | Elliptical-Maximum Margin Criterion |
| EnKF | Ensemble Kalman Filter |
| FCN | Fully Convolutional Network |
| GA | Genetic Algorithm |
| GAN | Generative Adversarial Network |
| GBDT | Gradient Boosting Decision Trees |
| GBLUP | Genomic Best Linear Unbiased Prediction |
| GBM | Gradient Boosting Machine |
| GBRT | Gradient Boost Regression Tree |
| GK | Gaussian Kernel |
| GPR | Gaussian Process Regression |
| GRNN | Generalized Regression Neural Network |
| GRU | Gated Recurrent Unit |
| GSD | Ground Sample Distance |
| GWO | Grey Wolf Optimization |
| IABC | Improved Artificial Bee Colony |
| IPSO | Improved Particle Swarm Optimization |
| kNN | k-Nearest Neighbors |
| KRR | Kernel Ridge Regression |
| LAI | Leaf Area Index |
| Lasso | Least Absolute Shrinkage and Selection Operator |
| LDA | Linear Discriminant Analysis |
| LR | Linear Regression |
| LSTM | Long Short-Term Memory |
| ML | Machine Learning |
| MLP | Multilayer Perceptron |
| MLR | Multiple Linear Regression |
| MTDL | Multi-Trait Deep Learning |
| NB | Naive Bayes |
| NDVI | Normalized Difference Vegetation Index |
| NLB | Non-Local Block |
| OLS | Ordinary Least Squares |
| PCANet | Principal Component Analysis Network |
| PCNN | Pulse-Coupled Neural Network |
| PLS | Partial Least Squares |
| PLSDA | Partial Least Squares Discriminant Analysis |
| PLSR | Partial Least Squares Regression |
| PSPNet | Pyramid Scene Parsing Network |
| RCTC | Residual-Capsule Network with Threshold Convolution |
| RF | Random Forest |
| RFR | Random Forest Regression |
| RGB | Red-Green-Blue |
| RNN | Recurrent Neural Network |
| RPN | Region Proposal Networks |
| RR | Ridge Regression |
| RRBLUP | Ridge Regression Best Linear Unbiased Predictor |
| SAR | Synthetic Aperture Radar |
| SCNN | Shallow Convolutional Neural Networks |
| SIF | Solar-Induced Fluorescence |
| SPGAN | Spectrogram Generative Adversarial Networks |
| SSD | Single-Shot Detector |
| SVM | Support Vector Machine |
| SVR | Support Vector Regression |
| TGBLUP | Threshold Genomic Best Linear Unbiased Prediction |
| TRMM | Tropical Rainfall Measuring Mission |
| UAV | Unmanned Aerial Vehicle |
| XGBoost | Extreme Gradient Boosting |
| YOLO | You Only Look Once |
References
- Aboneh, T.; Rorissa, A.; Srinivasagan, R.; Gemechu, A. Computer Vision Framework for Wheat Disease Identification and Classification Using Jetson GPU Infrastructure. Technologies 2021, 9, 47. [Google Scholar] [CrossRef]
- Ahmed, A.A.M.; Sharma, E.; Jui, S.J.J.; Deo, R.C.; Nguyen-Huy, T.; Ali, M. Kernel Ridge Regression Hybrid Method for Wheat Yield Prediction with Satellite-Derived Predictors. Remote Sens. 2022, 14, 1136. [Google Scholar] [CrossRef]
- Soussi, A.; Zero, E.; Sacile, R.; Trinchero, D.; Fossa, M. Smart Sensors and Smart Data for Precision Agriculture: A Review. Sensors 2024, 24, 2647. [Google Scholar] [CrossRef] [PubMed]
- Elashmawy, R.; Uysal, I. Precision Agriculture Using Soil Sensor Driven Machine Learning for Smart Strawberry Production. Sensors 2023, 23, 2247. [Google Scholar] [CrossRef]
- Barbedo, J.G.A. Deep learning applied to plant pathology: The problem of data representativeness. Trop. Plant Pathol. 2021, 47, 85–94. [Google Scholar] [CrossRef]
- Bock, C.H.; Barbedo, J.G.A.; Ponte, E.M.D.; Bohnenkamp, D.; Mahlein, A.K. From visual estimates to fully automated sensor-based measurements of plant disease severity: Status and challenges for improving accuracy. Phytopathol. Res. 2020, 2, 9. [Google Scholar] [CrossRef]
- Tsouros, D.C.; Bibi, S.; Sarigiannidis, P.G. Drones in agriculture: A review and bibliometric analysis. Comput. Electron. Agric. 2022, 198, 107017. [Google Scholar] [CrossRef]
- Atzberger, C. Advances in Remote Sensing of Agriculture: Context Description, Existing Operational Monitoring Systems and Major Information Needs. Remote Sens. 2013, 5, 949–981. [Google Scholar] [CrossRef]
- de Moraes Navarro, E.; Costa, N.; de Jesus Pereira, A.M. A Systematic Review of IoT Solutions for Smart Farming. Sensors 2020, 20, 4231. [Google Scholar] [CrossRef]
- Bhat, S.A.; Huang, N. Big Data and AI Revolution in Precision Agriculture: Survey and Challenges. IEEE Access 2021, 9, 110209–110222. [Google Scholar] [CrossRef]
- Shaikh, T.A.; Rasool, T.; Lone, F.R. Towards Leveraging the Role of Machine Learning and Artificial Intelligence in Precision Agriculture and Smart Farming. Comput. Electron. Agric. 2022, 198, 107119. [Google Scholar] [CrossRef]
- Shafi, U.; Mumtaz, R.; Shafaq, Z.; Zaidi, S.M.H.; Kaifi, M.O.; Mahmood, Z.; Zaidi, S.A.R. Wheat rust disease detection techniques: A technical perspective. J. Plant Dis. Prot. 2022, 129, 489–504. [Google Scholar] [CrossRef]
- Khaki, S.; Safaei, N.; Pham, H.; Wang, L. WheatNet: A lightweight convolutional neural network for high-throughput image-based wheat head detection and counting. Neurocomputing 2022, 489, 78–89. [Google Scholar] [CrossRef]
- Çelik, Y.; Başaran, E.; Dilay, Y. Identification of durum wheat grains by using hybrid convolution neural network and deep features. Signal Image Video Process. 2022, 16, 1135–1142. [Google Scholar] [CrossRef]
- Balaska, V.; Adamidou, Z.; Vryzas, Z.; Gasteratos, A. Sustainable Crop Protection via Robotics and Artificial Intelligence Solutions. Machines 2023, 11, 774. [Google Scholar] [CrossRef]
- Yang, S.; Li, L.; Fei, S.; Yang, M.; Tao, Z.; Meng, Y.; Xiao, Y. Wheat Yield Prediction Using Machine Learning Method Based on UAV Remote Sensing Data. Drones 2024, 8, 284. [Google Scholar] [CrossRef]
- de Camargo, T.; Schirrmann, M.; Landwehr, N.; Dammer, K.H.; Pflanz, M. Optimized Deep Learning Model as a Basis for Fast UAV Mapping of Weed Species in Winter Wheat Crops. Remote Sens. 2021, 13, 1704. [Google Scholar] [CrossRef]
- Oliveira, F.; Costa, D.G.; Assis, F.; Silva, I. Internet of Intelligent Things: A convergence of embedded systems, edge computing and machine learning. Internet Things 2024, 26, 101153. [Google Scholar] [CrossRef]
- Ryo, M. Explainable artificial intelligence and interpretable machine learning for agricultural data analysis. Artif. Intell. Agric. 2022, 6, 46–50. [Google Scholar] [CrossRef]
- Mostafaeipour, A.; Fakhrzad, M.B.; Gharaat, S.; Jahangiri, M.; Dhanraj, J.A.; Band, S.S.; Issakhov, A.; Mosavi, A. Machine Learning for Prediction of Energy in Wheat Production. Agriculture 2020, 10, 517. [Google Scholar] [CrossRef]
- Paudel, D.; de Wit, A.; Boogaard, H.; Marcos, D.; Osinga, S.; Athanasiadis, I.N. Interpretability of deep learning models for crop yield forecasting. Comput. Electron. Agric. 2023, 206, 107663. [Google Scholar] [CrossRef]
- Joshi, A.; Pradhan, B.; Gite, S.; Chakraborty, S. Remote-Sensing Data and Deep-Learning Techniques in Crop Mapping and Yield Prediction: A Systematic Review. Remote Sens. 2023, 15, 2014. [Google Scholar] [CrossRef]
- Bock, C.H.; Pethybridge, S.J.; Barbedo, J.G.A.; Esker, P.D.; Mahlein, A.K.; Ponte, E.M.D. A phytopathometry glossary for the twenty-first century: Towards consistency and precision in intra- and inter-disciplinary dialogues. Trop. Plant Pathol. 2022, 47, 14–24. [Google Scholar] [CrossRef]
- Ahmed, M.U.; Hussain, I. Prediction of Wheat Production Using Machine Learning Algorithms in northern areas of Pakistan. Telecommun. Policy 2022, 46, 102370. [Google Scholar] [CrossRef]
- Bali, N.; Singla, A. Deep Learning Based Wheat Crop Yield Prediction Model in Punjab Region of North India. Appl. Artif. Intell. 2021, 35, 1304–1328. [Google Scholar] [CrossRef]
- Bhojani, S.H.; Bhatt, N. Wheat crop yield prediction using new activation functions in neural network. Neural Comput. Appl. 2020, 32, 13941–13951. [Google Scholar] [CrossRef]
- Bian, C.; Shi, H.; Wu, S.; Zhang, K.; Wei, M.; Zhao, Y.; Sun, Y.; Zhuang, H.; Zhang, X.; Chen, S. Prediction of Field-Scale Wheat Yield Using Machine Learning Method and Multi-Spectral UAV Data. Remote Sens. 2022, 14, 1474. [Google Scholar] [CrossRef]
- Cao, J.; Zhang, Z.; Tao, F.; Zhang, L.; Luo, Y.; Han, J.; Li, Z. Identifying the Contributions of Multi-Source Data for Winter Wheat Yield Prediction in China. Remote Sens. 2020, 12, 750. [Google Scholar] [CrossRef]
- Cao, J.; Zhang, Z.; Luo, Y.; Zhang, L.; Zhang, J.; Li, Z.; Tao, F. Wheat yield predictions at a county and field scale with deep learning, machine learning, and google earth engine. Eur. J. Agron. 2021, 123, 126204. [Google Scholar] [CrossRef]
- Cao, J.; Wang, H.; Li, J.; Tian, Q.; Niyogi, D. Improving the Forecasting of Winter Wheat Yields in Northern China with Machine Learning–Dynamical Hybrid Subseasonal-to-Seasonal Ensemble Prediction. Remote Sens. 2022, 14, 1707. [Google Scholar] [CrossRef]
- Cheng, E.; Zhang, B.; Peng, D.; Zhong, L.; Yu, L.; Liu, Y.; Xiao, C.; Li, C.; Li, X.; Chen, Y.; et al. Wheat yield estimation using remote sensing data based on machine learning approaches. Front. Plant Sci. 2022, 13, 1090970. [Google Scholar] [CrossRef]
- Fei, S.; Hassan, M.A.; Xiao, Y.; Su, X.; Chen, Z.; Cheng, Q.; Duan, F.; Chen, R.; Ma, Y. UAV-based Multi-Sensor Data Fusion and Machine Learning Algorithm for Yield Prediction in Wheat. Precis. Agric. 2023, 24, 187–212. [Google Scholar] [CrossRef]
- Haider, S.A.; Naqvi, S.R.; Akram, T.; Umar, G.A.; Shahzad, A.; Sial, M.R.; Khaliq, S.; Kamran, M. LSTM Neural Network Based Forecasting Model for Wheat Production in Pakistan. Agronomy 2019, 9, 72. [Google Scholar] [CrossRef]
- Huang, H.; Huang, J.; Wu, Y.; Zhuo, W.; Song, J.; Li, X.; Li, L.; Su, W.; Ma, H.; Liang, S. The Improved Winter Wheat Yield Estimation by Assimilating GLASS LAI Into a Crop Growth Model With the Proposed Bayesian Posterior-Based Ensemble Kalman Filter. IEEE Trans. Geosci. Remote Sens. 2023, 61, 4401818. [Google Scholar] [CrossRef]
- Kheir, A.M.S.; Ammar, K.A.; Amer, A.; Ali, M.G.M.; Ding, Z.; Elnashar, A. Machine learning-based cloud computing improved wheat yield simulation in arid regions. Comput. Electron. Agric. 2022, 203, 107457. [Google Scholar] [CrossRef]
- Khoshnevisan, B.; Rafiee, S.; Omid, M.; Mousazadeh, H. Development of an intelligent system based on ANFIS for predicting wheat grain yield on the basis of energy inputs. Inf. Process. Agric. 2014, 1, 14–22. [Google Scholar] [CrossRef]
- Li, Y.; Liu, H.; Ma, J.; Zhang, L. Estimation of Leaf Area Index for Winter Wheat at Early Stages Based on Convolutional Neural Networks. Comput. Electron. Agric. 2021, 190, 106480. [Google Scholar] [CrossRef]
- Li, L.; Wang, B.; Feng, P.; Liu, D.L.; He, Q.; Zhang, Y.; Wang, Y.; Li, S.; Lu, X.; Yue, C.; et al. Developing machine learning models with multi-source environmental data to predict wheat yield in China. Comput. Electron. Agric. 2022, 194, 106790. [Google Scholar] [CrossRef]
- Liu, Y.; Wang, S.; Wang, X.; Chen, B.; Chen, J.; Wang, J.; Huang, M.; Wang, Z.; Ma, L.; Wang, P.; et al. Exploring the Superiority of Solar-Induced Chlorophyll Fluorescence Data in Predicting Wheat Yield Using Machine Learning and Deep Learning Methods. Comput. Electron. Agric. 2022, 192, 106612. [Google Scholar] [CrossRef]
- Aamir, R.; Shahid, M.A.; Zaman, M.; Miao, Y.; Huang, Y.; Safdar, M.; Maqbool, S.; Muhammad, N.E. Improving Wheat Yield Prediction with Multi-Source Remote Sensing Data and Machine Learning in Arid Regions. Comput. Electron. Agric. 2023, 209, 108317. [Google Scholar] [CrossRef]
- Nevavuori, P.; Narra, N.; Lipping, T. Crop yield prediction with deep convolutional neural networks. Comput. Electron. Agric. 2019, 163, 104859. [Google Scholar] [CrossRef]
- Romero, J.R.; Roncallo, P.F.; Akkiraju, P.C.; Ponzoni, I.; Echenique, V.C.; Carballido, J.A. Using classification algorithms for predicting durum wheat yield in the province of Buenos Aires. Comput. Electron. Agric. 2013, 96, 173–179. [Google Scholar] [CrossRef]
- Ruan, G.; Li, X.; Yuan, F.; Cammarano, D.; Ata-UI-Karim, S.T.; Liu, X.; Tian, Y.; Zhu, Y.; Cao, W.; Cao, Q. Improving wheat yield prediction integrating proximal sensing and weather data with machine learning. Comput. Electron. Agric. 2022, 195, 106852. [Google Scholar] [CrossRef]
- Salehnia, N.; Salehnia, N.; Ansari, H.; Kolsoumi, S.; Bannayan, M. Climate data clustering effects on arid and semi-arid rainfed wheat yield: A comparison of artificial intelligence and K-means approaches. Int. J. Biometeorol. 2019, 63, 861–872. [Google Scholar] [CrossRef]
- Schreiber, L.V.; Amorim, J.G.A.; Guimarães, L.; Matos, D.M.; Maciel da Costa, C.; Parraga, A. Above-ground Biomass Wheat Estimation: Deep Learning with UAV-based RGB Images. Appl. Artif. Intell. 2022, 36, 2055392. [Google Scholar] [CrossRef]
- Sharma, A.; Georgi, M.; Tregubenko, M.; Tselykh, A.; Tselykh, A. Enabling smart agriculture by implementing artificial intelligence and embedded sensing. Comput. Ind. Eng. 2022, 165, 107936. [Google Scholar] [CrossRef]
- Shen, Y.; Mercatoris, B.; Cao, Z.; Kwan, P.; Guo, L.; Yao, H.; Cheng, Q. Improving Wheat Yield Prediction Accuracy Using LSTM-RF Framework Based on UAV Thermal Infrared and Multispectral Imagery. Agriculture 2022, 12, 892. [Google Scholar] [CrossRef]
- Srivastava, A.K.; Safaei, N.; Khaki, S.; Lopez, G.; Zeng, W.; Ewert, F.; Gaiser, T.; Rahimi, J. Winter wheat yield prediction using convolutional neural networks from environmental and phenological data. Sci. Rep. 2022, 12, 3215. [Google Scholar] [CrossRef]
- Sun, Z.; Li, Q.; Jin, S.; Song, Y.; Xu, S.; Wang, X.; Cai, J.; Zhou, Q.; Ge, Y.; Zhang, R.; et al. Simultaneous Prediction of Wheat Yield and Grain Protein Content Using Multitask Deep Learning from Time-Series Proximal Sensing. Plant Phenomics 2022, 2022, 9757948. [Google Scholar] [CrossRef]
- Tanabe, R.; Matsui, T.; Tanaka, T.S. Winter wheat yield prediction using convolutional neural networks and UAV-based multispectral imagery. Field Crop. Res. 2023, 291, 108786. [Google Scholar] [CrossRef]
- Tian, H.; Wang, P.; Tansey, K.; Zhang, S.; Zhang, J.; Li, H. An IPSO-BP neural network for estimating wheat yield using two remotely sensed variables in the Guanzhong Plain, PR China. Comput. Electron. Agric. 2020, 169, 105180. [Google Scholar] [CrossRef]
- Tian, H.; Pei, J.; Huang, J.; Li, X.; Wang, J.; Zhou, B.; Qin, Y.; Wang, L. Garlic and Winter Wheat Identification Based on Active and Passive Satellite Imagery and the Google Earth Engine in Northern China. Remote Sens. 2020, 12, 3539. [Google Scholar] [CrossRef]
- Tripathi, A.; Tiwari, R.K.; Tiwari, S.P. A Deep Learning Multi-Layer Perceptron and Remote Sensing Approach for Soil Health-Based Crop Yield Estimation. Int. J. Appl. Earth Obs. Geoinf. 2022, 113, 102959. [Google Scholar] [CrossRef]
- Wang, Y.; Zhang, Z.; Feng, L.; Du, Q.; Runge, T. Combining Multi-Source Data and Machine Learning Approaches to Predict Winter Wheat Yield in the Conterminous United States. Remote Sens. 2020, 12, 1232. [Google Scholar] [CrossRef]
- Wang, X.; Huang, J.; Feng, Q.; Yin, D. Winter Wheat Yield Prediction at County Level and Uncertainty Analysis in Main Wheat-Producing Regions of China with Deep Learning Approaches. Remote Sens. 2020, 12, 1744. [Google Scholar] [CrossRef]
- Wang, J.; Si, H.; Gao, Z.; Shi, L. Winter Wheat Yield Prediction Using an LSTM Model from MODIS LAI Products. Agriculture 2022, 12, 1707. [Google Scholar] [CrossRef]
- Wang, J.; Wang, P.; Tian, H.; Tansey, K.; Liu, J.; Quan, W. A deep learning framework combining CNN and GRU for improving wheat yield estimates using time series remotely sensed multi-variables. Comput. Electron. Agric. 2023, 206, 107705. [Google Scholar] [CrossRef]
- Wolanin, A.; Mateo-García, G.; Camps-Valls, G.; Gómez-Chova, L.; Meroni, M.; Duveiller, G.; Liangzhi, Y.; Guanter, L. Estimating and understanding crop yields with explainable deep learning in the Indian Wheat Belt. Environ. Res. Lett. 2020, 15, 024019. [Google Scholar] [CrossRef]
- Wu, S.; Deng, L.; Guo, L.; Wu, Y. Wheat leaf area index prediction using data fusion based on high-resolution unmanned aerial vehicle imagery. Plant Methods 2022, 18, 68. [Google Scholar] [CrossRef]
- Xie, Y.; Huang, J. Integration of a Crop Growth Model and Deep Learning Methods to Improve Satellite-Based Yield Estimation of Winter Wheat in Henan Province, China. Remote Sens. 2021, 13, 4372. [Google Scholar] [CrossRef]
- Yang, S.; Hu, L.; Wu, H.; Ren, H.; Qiao, H.; Li, P.; Fan, W. Integration of Crop Growth Model and Random Forest for Winter Wheat Yield Estimation From UAV Hyperspectral Imagery. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2021, 14, 6253–6268. [Google Scholar] [CrossRef]
- Zhang, J.; Cheng, T.; Guo, W.; Xu, X.; Qiao, H.; Xie, Y.; Ma, X. Leaf area index estimation model for UAV image hyperspectral data based on wavelength variable selection and machine learning methods. Plant Methods 2021, 17, 49. [Google Scholar] [CrossRef] [PubMed]
- Zhou, X.; Kono, Y.; Win, A.; Matsui, T.; Tanaka, T.S.T. Predicting within-field variability in grain yield and protein content of winter wheat using UAV-based multispectral imagery and machine learning approaches. Plant Prod. Sci. 2021, 24, 137–151. [Google Scholar] [CrossRef]
- Zhou, W.; Liu, Y.; Ata-Ul-Karim, S.T.; Ge, Q.; Li, X.; Xiao, J. Integrating climate and satellite remote sensing data for predicting county-level wheat yield in China using machine learning methods. Int. J. Appl. Earth Obs. Geoinf. 2022, 111, 102861. [Google Scholar] [CrossRef]
- Barbedo, J.G.A. Data Fusion in Agriculture: Resolving Ambiguities and Closing Data Gaps. Sensors 2022, 22, 2285. [Google Scholar] [CrossRef]
- Zhou, J.; Li, J.; Wang, C.; Wu, H.; Zhao, C.; Teng, G. Crop disease identification and interpretation method based on multimodal deep learning. Comput. Electron. Agric. 2021, 189, 106408. [Google Scholar] [CrossRef]
- Akbar, S.; Ahmad, K.T.; Abid, M.K.; Aslam, N. Wheat Disease Detection for Yield Management Using IoT and Deep Learning Techniques. VFAST Trans. Softw. Eng. 2020, 9, 19–30. [Google Scholar] [CrossRef]
- Azimi, N.; Sofalian, O.; Davari, M.; Asghari, A.; Zare, N. Statistical and Machine Learning-Based FHB Detection in Durum Wheat. Plant Breed. Biotechnol. 2020, 8, 265–280. [Google Scholar] [CrossRef]
- Bao, W.; Yang, X.; Liang, D.; Hu, G.; Yang, X. Lightweight convolutional neural network model for field wheat ear disease identification. Comput. Electron. Agric. 2021, 189, 106367. [Google Scholar] [CrossRef]
- Bao, W.; Zhao, J.; Hu, G.; Zhang, D.; Huang, L.; Liang, D. Identification of wheat leaf diseases and their severity based on elliptical-maximum margin criterion metric learning. Sustain. Comput. Inform. Syst. 2021, 30, 100526. [Google Scholar] [CrossRef]
- Deng, J.; Hong, D.; Li, C.; Yao, J.; Yang, Z.; Zhang, Z.; Chanussot, J. RustQNet: Multimodal deep learning for quantitative inversion of wheat stripe rust disease index. Comput. Electron. Agric. 2024, 225, 109245. [Google Scholar] [CrossRef]
- Fahim-Ul-Islam, M.; Chakrabarty, A.; Ahmed, S.T.; Rahman, R.; Kwon, H.H.; Piran, M.J. A Comprehensive Approach Toward Wheat Leaf Disease Identification Leveraging Transformer Models and Federated Learning. IEEE Access 2024, 12, 109128–109136. [Google Scholar] [CrossRef]
- Fang, X.; Zhen, T.; Li, Z. Lightweight Multiscale CNN Model for Wheat Disease Detection. Appl. Sci. 2023, 13, 5801. [Google Scholar] [CrossRef]
- Gao, Y.; Wang, H.; Li, M.; Su, W.H. Automatic Tandem Dual BlendMask Networks for Severity Assessment of Wheat Fusarium Head Blight. Agriculture 2022, 12, 1493. [Google Scholar] [CrossRef]
- Genaev, M.A.; Skolotneva, E.S.; Gultyaeva, E.I.; Orlova, E.A.; Bechtold, N.P.; Afonnikov, D.A. Image-Based Wheat Fungi Diseases Identification by Deep Learning. Plants 2021, 10, 1500. [Google Scholar] [CrossRef]
- Gonçalves, J.P.; Pinto, F.A.; Queiroz, D.M.; Villar, F.M.; Barbedo, J.G.; Ponte, E.M.D. Deep learning architectures for semantic segmentation and automatic estimation of severity of foliar symptoms caused by diseases or pests. Biosyst. Eng. 2021, 210, 129–142. [Google Scholar] [CrossRef]
- Goyal, L.; Sharma, C.M.; Singh, A.; Singh, P.K. Leaf and spike wheat disease detection & classification using an improved deep convolutional architecture. Inform. Med. Unlocked 2021, 25, 100642. [Google Scholar] [CrossRef]
- Haider, W.; Rehman, A.-U.; Durrani, N.M.; Rehman, S.U. A Generic Approach for Wheat Disease Classification and Verification Using Expert Opinion. IEEE Access 2021, 9, 31122–31135. [Google Scholar] [CrossRef]
- Hayit, T.; Erbay, H.; Varçın, F.; Hayit, F.; Akci, N. Determination of the Severity Level of Yellow Rust Disease in Wheat by Using Convolutional Neural Networks. J. Plant Pathol. 2021, 103, 923–934. [Google Scholar] [CrossRef]
- Jiang, Z.; Dong, Z.; Jiang, W.; Yang, Y. Recognition of Rice Leaf Diseases and Wheat Leaf Diseases Based on Multi-Task Deep Transfer Learning. Comput. Electron. Agric. 2021, 186, 106184. [Google Scholar] [CrossRef]
- Jiang, J.; Liu, H.; Zhao, C.; He, C.; Ma, J.; Cheng, T.; Zhu, Y.; Cao, W.; Yao, X. Evaluation of Diverse Convolutional Neural Networks and Training Strategies for Wheat Leaf Disease Identification with Field-Acquired Photographs. Remote Sens. 2022, 14, 3446. [Google Scholar] [CrossRef]
- Jin, X.; Jie, L.; Wang, S.; Qi, H.J.; Li, S.W. Classifying Wheat Hyperspectral Pixels of Healthy Heads and Fusarium Head Blight Disease Using a Deep Neural Network in the Wild Field. Remote Sens. 2018, 10, 395. [Google Scholar] [CrossRef]
- Khan, H.; Haq, I.U.; Munsif, M.; Mustaqeem; Khan, S.U.; Lee, M.Y. Automated Wheat Diseases Classification Framework Using Advanced Machine Learning Technique. Agriculture 2022, 12, 1226. [Google Scholar] [CrossRef]
- Lin, Z.; Mu, S.; Huang, F.; Mateen, K.A.; Wang, M.; Gao, W.; Jia, J. A Unified Matrix-Based Convolutional Neural Network for Fine-Grained Image Classification of Wheat Leaf Diseases. IEEE Access 2019, 7, 11570–11586. [Google Scholar] [CrossRef]
- Liu, Y.; Liu, G.; Sun, H.; An, L.; Zhao, R.; Liu, M.; Tang, W.; Li, M.; Yan, X.; Ma, Y.; et al. Exploring multi-features in UAV based optical and thermal infrared images to estimate disease severity of wheat powdery mildew. Comput. Electron. Agric. 2024, 225, 109285. [Google Scholar] [CrossRef]
- Lu, J.; Hu, J.; Zhao, G.; Mei, F.; Zhang, C. An in-field automatic wheat disease diagnosis system. Comput. Electron. Agric. 2017, 142, 369–379. [Google Scholar] [CrossRef]
- Dainelli, R.; Bruno, A.; Martinelli, M.; Moroni, D.; Rocchi, L.; Morelli, S.; Ferrari, E.; Silvestri, M.; La Cava, P.; Toscano, P. GranoScan: An AI-powered mobile app for in-field identification of biotic threats of wheat. Front. Plant Sci. 2023, 14, 1119032. [Google Scholar] [CrossRef]
- Maqsood, M.H.; Mumtaz, R.; Haq, I.U.; Shafi, U.; Zaidi, S.M.H.; Hafeez, M. Super Resolution Generative Adversarial Network (SRGANs) for Wheat Stripe Rust Classification. Sensors 2021, 21, 7903. [Google Scholar] [CrossRef]
- Mi, Z.; Zhang, X.; Su, J.; Han, D.; Su, B. Wheat Stripe Rust Grading by Deep Learning With Attention Mechanism and Images From Mobile Devices. Front. Plant Sci. 2020, 11, 558126. [Google Scholar] [CrossRef]
- Nigam, S.; Jain, R.; Marwaha, S.; Arora, A.; Haque, M.A.; Dheeraj, A.; Singh, V.K. Deep transfer learning model for disease identification in wheat crop. Ecol. Inform. 2023, 75, 102068. [Google Scholar] [CrossRef]
- Pan, Q.; Gao, M.; Wu, P.; Yan, J.; Li, S. A Deep-Learning-Based Approach for Wheat Yellow Rust Disease Recognition from Unmanned Aerial Vehicle Images. Sensors 2021, 21, 6540. [Google Scholar] [CrossRef]
- Pan, Q.; Gao, M.; Wu, P.; Yan, J.; AbdelRahman, M.A.E. Image Classification of Wheat Rust Based on Ensemble Learning. Sensors 2022, 22, 6047. [Google Scholar] [CrossRef] [PubMed]
- Qiu, R.; Yang, C.; Moghimi, A.; Zhang, M.; Steffenson, B.J.; Hirsch, C.D. Detection of Fusarium Head Blight in Wheat Using a Deep Neural Network and Color Imaging. Remote Sens. 2019, 11, 2658. [Google Scholar] [CrossRef]
- Rangarajan, A.K.; Whetton, R.L.; Mouazen, A.M. Detection of fusarium head blight in wheat using hyperspectral data and deep learning. Expert Syst. Appl. 2022, 208, 118240. [Google Scholar] [CrossRef]
- Schirrmann, M.; Landwehr, N.; Giebel, A.; Garz, A.; Dammer, K.H. Early Detection of Stripe Rust in Winter Wheat Using Deep Residual Neural Networks. Front. Plant Sci. 2021, 12, 469689. [Google Scholar] [CrossRef]
- Shafi, U.; Mumtaz, R.; Haq, I.U.; Hafeez, M.; Iqbal, N.; Shaukat, A.; Zaidi, S.M.H.; Mahmood, Z. Wheat Yellow Rust Disease Infection Type Classification Using Texture Features. Sensors 2022, 22, 146. [Google Scholar] [CrossRef] [PubMed]
- Su, W.H.; Zhang, J.; Yang, C.; Page, R.; Szinyei, T.; Hirsch, C.D.; Steffenson, B.J. Automatic Evaluation of Wheat Resistance to Fusarium Head Blight Using Dual Mask-RCNN Deep Learning Frameworks in Computer Vision. Remote Sens. 2021, 13, 26. [Google Scholar] [CrossRef]
- Su, J.; Yi, D.; Su, B.; Mi, Z.; Liu, C.; Hu, X.; Xu, X.; Guo, L.; Chen, W.H. Aerial Visual Perception in Smart Farming: Field Study of Wheat Yellow Rust Monitoring. IEEE Trans. Ind. Inform. 2021, 17, 2242–2252. [Google Scholar] [CrossRef]
- Weng, S.; Hu, X.; Zhu, W.; Li, P.; Zheng, S.; Zheng, L.; Huang, L.; Zhang, D. Surface-enhanced Raman spectroscopy with gold nanorods modified by sodium citrate and liquid–liquid interface self-extraction for detection of deoxynivalenol in Fusarium head blight-infected wheat kernels coupled with a fully convolution network. Food Chem. 2021, 359, 129847. [Google Scholar] [CrossRef]
- Weng, S.; Han, K.; Chu, Z.; Zhu, G.; Liu, C.; Zhu, Z.; Zhang, Z.; Zheng, L.; Huang, L. Reflectance images of effective wavelengths from hyperspectral imaging for identification of Fusarium head blight-infected wheat kernels combined with a residual attention convolution neural network. Comput. Electron. Agric. 2021, 190, 106483. [Google Scholar] [CrossRef]
- Xiao, Y.; Dong, Y.; Huang, W.; Liu, L.; Ma, H. Wheat Fusarium Head Blight Detection Using UAV-Based Spectral and Texture Features in Optimal Window Size. Remote Sens. 2021, 13, 2437. [Google Scholar] [CrossRef]
- Xu, L.; Cao, B.; Zhao, F.; Ning, S.; Xu, P.; Zhang, W.; Hou, X. Wheat Leaf Disease Identification Based on Deep Learning Algorithms. Physiol. Mol. Plant Pathol. 2023, 123, 101940. [Google Scholar] [CrossRef]
- Zhang, D.; Wang, D.; Gu, C.; Jin, N.; Zhao, H.; Chen, G.; Liang, H.; Liang, D. Using Neural Network to Identify the Severity of Wheat Fusarium Head Blight in the Field Environment. Remote Sens. 2019, 11, 2375. [Google Scholar] [CrossRef]
- Zhang, X.; Han, L.; Dong, Y.; Shi, Y.; Huang, W.; Han, L.; González-Moreno, P.; Ma, H.; Ye, H.; Sobeih, T. A Deep Learning-Based Approach for Automated Yellow Rust Disease Detection from High-Resolution Hyperspectral UAV Images. Remote Sens. 2019, 11, 1554. [Google Scholar] [CrossRef]
- Zhang, D.Y.; Chen, G.; Yin, X.; Hu, R.J.; Gu, C.Y.; Pan, Z.G.; Zhou, X.G.; Chen, Y. Integrating Spectral and Image Data to Detect Fusarium Head Blight of Wheat. Comput. Electron. Agric. 2020, 175, 105588. [Google Scholar] [CrossRef]
- Zhang, D.; Wang, Z.; Jin, N.; Gu, C.; Chen, Y.; Huang, Y. Evaluation of Efficacy of Fungicides for Control of Wheat Fusarium Head Blight Based on Digital Imaging. IEEE Access 2020, 8, 109876–109890. [Google Scholar] [CrossRef]
- Zhang, T.; Xu, Z.; Su, J.; Yang, Z.; Liu, C.; Chen, W.H.; Li, J. Ir-UNet: Irregular Segmentation U-Shape Network for Wheat Yellow Rust Detection by UAV Multispectral Imagery. Remote Sens. 2021, 13, 3892. [Google Scholar] [CrossRef]
- Zhang, D.Y.; Luo, H.S.; Wang, D.Y.; Zhou, X.G.; Li, W.F.; Gu, C.Y.; Zhang, G.; He, F.M. Assessment of the levels of damage caused by Fusarium head blight in wheat using an improved YoloV5 method. Comput. Electron. Agric. 2022, 198, 107086. [Google Scholar] [CrossRef]
- Zhang, T.; Yang, Z.; Xu, Z.; Li, J. Wheat Yellow Rust Severity Detection by Efficient DF-UNet and UAV Multispectral Imagery. IEEE Sens. J. 2022, 22, 9057–9068. [Google Scholar] [CrossRef]
- Barbedo, J.G.A. Plant disease identification from individual lesions and spots using deep learning. Biosyst. Eng. 2019, 180, 96–107. [Google Scholar] [CrossRef]
- Feng, G.; Gu, Y.; Wang, C.; Zhou, Y.; Huang, S.; Luo, B. Wheat Fusarium Head Blight Automatic Non-Destructive Detection Based on Multi-Scale Imaging: A Technical Perspective. Plants 2024, 13, 1722. [Google Scholar] [CrossRef]
- Mahlein, A.K.; Barbedo, J.G.A.; Chiang, K.S.; Ponte, E.M.D.; Bock, C.H. From Detection to Protection: The Role of Optical Sensors, Robots, and Artificial Intelligence in Modern Plant Disease Management. Phytopathology 2024, 114, 6–16. [Google Scholar] [CrossRef]
- Saad, M.H.; Salman, A.E. A plant disease classification using one-shot learning technique with field images. Multimed. Tools Appl. 2024, 83, 58935–58960. [Google Scholar] [CrossRef]
- Argüeso, D.; Picón, A.; Irusta, U.; Medela, A.; San-Emeterio, M.G.; Bereciartua, A.; Álvarez Gila, A. Few-Shot Learning Approach for Plant Disease Classification Using Images Taken in the Field. Comput. Electron. Agric. 2020, 175, 105542. [Google Scholar] [CrossRef]
- Zhou, L.; Tan, L.; Zhang, C.; Zhao, N.; He, Y.; Qiu, Z. A Portable NIR-System for Mixture Powdery Food Analysis Using Deep Learning. LWT Food Sci. Technol. 2022, 153, 112456. [Google Scholar] [CrossRef]
- Barbedo, J.G.A.; Tibola, C.S.; Fernandes, J.M.C. Detecting Fusarium head blight in wheat kernels using hyperspectral imaging. Biosyst. Eng. 2015, 131, 65–76. [Google Scholar] [CrossRef]
- Thompson, M.; Tarr, A.; Tarr, J.A.; Ritterband, S. Unmanned Aerial Vehicles. In Drone Law and Policy: Global Development, Risks, Regulation and Insurance; Tarr, A.A., Tarr, J.A., Thompson, M., Ellis, J., Eds.; Routledge: New York, NY, USA, 2021; pp. 153–168. [Google Scholar] [CrossRef]
- El-Kenawy, E.S.M.; Khodadadi, N.; Mirjalili, S.; Makarovskikh, T.; Abotaleb, M.; Karim, F.K.; Alkahtani, H.K.; Abdelhamid, A.A.; Eid, M.M.; Horiuchi, T.; et al. Metaheuristic Optimization for Improving Weed Detection in Wheat Images Captured by Drones. Mathematics 2022, 10, 4421. [Google Scholar] [CrossRef]
- Jabir, B.; Falih, N. Deep Learning-Based Decision Support System for Weeds Detection in Wheat Fields. Int. J. Electr. Comput. Eng. 2022, 12, 816–825. [Google Scholar] [CrossRef]
- Li, Z.; Wang, D.; Yan, Q.; Zhao, M.; Wu, X.; Liu, X. Winter wheat weed detection based on deep learning models. Comput. Electron. Agric. 2024, 227, 109448. [Google Scholar] [CrossRef]
- Mishra, A.M.; Harnal, S.; Mohiuddin, K.; Gautam, V.; Nasr, O.A.; Goyal, N.; Alwetaishi, M.; Singh, A. A Deep Learning-Based Novel Approach for Weed Growth Estimation. Intell. Autom. Soft Comput. 2022, 31, 1156–1172.
- Su, D.; Kong, H.; Qiao, Y.; Sukkarieh, S. Data augmentation for deep learning based semantic segmentation and crop-weed classification in agricultural robotics. Comput. Electron. Agric. 2021, 190, 106418.
- Su, D.; Qiao, Y.; Kong, H.; Sukkarieh, S. Real-time detection of inter-row ryegrass in wheat farms using deep learning. Biosyst. Eng. 2021, 204, 198–211.
- Su, J.; Yi, D.; Coombes, M.; Liu, C.; Zhai, X.; McDonald-Maier, K.; Chen, W.H. Spectral analysis and mapping of blackgrass weed by leveraging machine learning and UAV multispectral imagery. Comput. Electron. Agric. 2022, 192, 106621.
- Wang, K.; Hu, X.; Zheng, H.; Lan, M.; Liu, C.; Liu, Y.; Zhong, L.; Li, H.; Tan, S. Weed Detection and Recognition in Complex Wheat Fields Based on an Improved YOLOv7. Front. Plant Sci. 2024, 15, 1372237.
- Zhuang, J.; Li, X.; Bagavathiannan, M.; Jin, X.; Yang, J.; Meng, W.; Li, T.; Li, L.; Wang, Y.; Chen, Y.; et al. Evaluation of different deep convolutional neural networks for detection of broadleaf weed seedlings in wheat. Pest Manag. Sci. 2022, 78, 521–529.
- Zou, K.; Liao, Q.; Zhang, F.; Che, X.; Zhang, C. A segmentation network for smart weed management in wheat fields. Comput. Electron. Agric. 2022, 202, 107303.
- Chen, P.; Li, W.; Yao, S.; Ma, C.; Zhang, J.; Wang, B.; Zheng, C.; Xie, C.; Liang, D. Recognition and counting of wheat mites in wheat fields by a three-step deep learning method. Neurocomputing 2021, 437, 21–30.
- Fuentes, S.; Tongson, E.; Unnithan, R.R.; Viejo, C.G. Early Detection of Aphid Infestation and Insect-Plant Interaction Assessment in Wheat Using a Low-Cost Electronic Nose (E-Nose), Near-Infrared Spectroscopy and Machine Learning Modeling. Sensors 2021, 21, 5948.
- Li, R.; Wang, R.; Zhang, J.; Xie, C.; Liu, L.; Wang, F.; Chen, H.; Chen, T.; Hu, H.; Jia, X.; et al. An Effective Data Augmentation Strategy for CNN-Based Pest Localization and Recognition in the Field. IEEE Access 2019, 7, 160274–160283.
- Li, W.; Chen, P.; Wang, B.; Xie, C. Automatic Localization and Count of Agricultural Crop Pests Based on an Improved Deep Learning Pipeline. Sci. Rep. 2019, 9, 7024.
- Elbeltagi, A.; Deng, J.; Wang, K.; Malik, A.; Maroufpoor, S. Modeling long-term dynamics of crop evapotranspiration using deep learning in a semi-arid environment. Agric. Water Manag. 2020, 241, 106334.
- Shen, R.; Huang, A.; Li, B.; Guo, J. Construction of a drought monitoring model using deep learning based on multi-source remote sensing data. Int. J. Appl. Earth Obs. Geoinf. 2019, 79, 48–57.
- Chu, H.; Zhang, C.; Wang, M.; Gouda, M.; Wei, X.; He, Y.; Liu, Y. Hyperspectral imaging with shallow convolutional neural networks (SCNN) predicts the early herbicide stress in wheat cultivars. J. Hazard. Mater. 2022, 421, 126706.
- Weng, S.; Yuan, H.; Zhang, X.; Li, P.; Zheng, L.; Zhao, J.; Huang, L. Deep learning networks for the recognition and quantitation of surface-enhanced Raman spectroscopy. Analyst 2020, 145, 4827–4835.
- Yang, B.; Zhu, Y.; Zhou, S. Accurate Wheat Lodging Extraction from Multi-Channel UAV Images Using a Lightweight Network Model. Sensors 2021, 21, 6826.
- Zhang, D.; Ding, Y.; Chen, P.; Zhang, X.; Pan, Z.; Liang, D. Automatic extraction of wheat lodging area based on transfer learning method and DeepLabv3+ network. Comput. Electron. Agric. 2020, 179, 105845.
- Zhang, Z.; Flores, P.; Igathinathane, C.; Naik, D.L.; Kiran, R.; Ransom, J.K. Wheat Lodging Detection from UAS Imagery Using Machine Learning Algorithms. Remote Sens. 2020, 12, 1838.
- Barbedo, J.G.A. Detecting and Classifying Pests in Crops Using Proximal Images and Machine Learning: A Review. AI 2020, 1, 312–328.
- Apolo-Apolo, O.E.; Pérez-Ruiz, M.; Martínez-Guanter, J.; Egea, G. A Mixed Data-Based Deep Neural Network to Estimate Leaf Area Index in Wheat Breeding Trials. Agronomy 2020, 10, 175.
- Crossa, J.; Martini, J.W.R.; Gianola, D.; Pérez-Rodríguez, P.; Jarquin, D.; Juliana, P.; Montesinos-López, O.; Cuevas, J. Deep Kernel and Deep Learning for Genome-Based Prediction of Single Traits in Multienvironment Breeding Trials. Front. Genet. 2019, 10, 1168.
- Ghahremani, M.; Williams, K.; Corke, F.M.K.; Tiddeman, B.; Liu, Y.; Doonan, J.H. Deep Segmentation of Point Clouds of Wheat. Front. Plant Sci. 2021, 12, 608732.
- González-Camacho, J.M.; Ornella, L.; Pérez-Rodríguez, P.; Gianola, D.; Dreisigacker, S.; Crossa, J. Applications of Machine Learning Methods to Genomic Selection in Breeding Wheat for Rust Resistance. Plant Genome 2018, 11, 170104.
- Guo, J.; Khan, J.; Pradhan, S.; Shahi, D.; Khan, N.; Avci, M.; Mcbreen, J.; Harrison, S.; Brown-Guedira, G.; Murphy, J.P.; et al. Multi-Trait Genomic Prediction of Yield-Related Traits in US Soft Wheat under Variable Water Regimes. Genes 2020, 11, 1270.
- Hesami, M.; Condori-Apfata, J.A.; Valencia, M.V.; Mohammadi, M. Application of Artificial Neural Network for Modeling and Studying In Vitro Genotype-Independent Shoot Regeneration in Wheat. Appl. Sci. 2020, 10, 5370.
- Khan, Z.; Rahimi-Eichi, V.; Haefele, S.; Garnett, T.; Miklavcic, S.J. Estimation of Vegetation Indices for High-Throughput Phenotyping of Wheat Using Aerial Imaging. Plant Methods 2018, 14, 20.
- Moghimi, A.; Yang, C.; Anderson, J.A. Aerial hyperspectral imagery and deep neural networks for high-throughput yield phenotyping in wheat. Comput. Electron. Agric. 2020, 172, 105299.
- Montesinos-López, O.A.; Montesinos-López, A.; Crossa, J.; Gianola, D.; Hernández-Suárez, C.M.; Martín-Vallejo, J. Multi-trait, Multi-environment Deep Learning Modeling for Genomic-Enabled Prediction of Plant Traits. G3 Genes Genomes Genet. 2018, 8, 3829–3840.
- Montesinos-López, O.A.; Martín-Vallejo, J.; Crossa, J.; Gianola, D.; Hernández-Suárez, C.M.; Montesinos-López, A.; Juliana, P.; Singh, R. New Deep Learning Genomic-Based Prediction Model for Multiple Traits with Binary, Ordinal, and Continuous Phenotypes. G3 Genes Genomes Genet. 2019, 9, 1545–1556.
- Montesinos-López, O.A.; Martín-Vallejo, J.; Crossa, J.; Gianola, D.; Hernández-Suárez, C.M.; Montesinos-López, A.; Juliana, P.; Singh, R. A Benchmarking Between Deep Learning, Support Vector Machine and Bayesian Threshold Best Linear Unbiased Prediction for Predicting Ordinal Traits in Plant Breeding. G3 Genes Genomes Genet. 2019, 9, 601–616.
- Montesinos-López, O.A.; Montesinos-López, A.; Tuberosa, R.; Maccaferri, M.; Sciara, G.; Ammar, K.; Crossa, J. Multi-Trait, Multi-Environment Genomic Prediction of Durum Wheat With Genomic Best Linear Unbiased Predictor and Deep Learning Methods. Front. Plant Sci. 2019, 10, 1311.
- Roth, L.; Camenzind, M.; Aasen, H.; Kronenberg, L.; Barendregt, C.; Camp, K.H.; Walter, A.; Kirchgessner, N.; Hund, A. Repeated Multiview Imaging for Estimating Seedling Tiller Counts of Wheat Genotypes Using Drones. Plant Phenomics 2020, 2020, 3729715.
- Sandhu, K.; Patil, S.S.; Pumphrey, M.; Carter, A. Multitrait machine- and deep-learning models for genomic selection using spectral information in a wheat breeding program. Plant Genome 2021, 14, e20119.
- Sandhu, K.S.; Aoun, M.; Morris, C.F.; Carter, A.H. Genomic Selection for End-Use Quality and Processing Traits in Soft White Winter Wheat Breeding Program with Machine and Deep Learning Models. Biology 2021, 10, 689.
- Sandhu, K.S.; Lozada, D.N.; Zhang, Z.; Pumphrey, M.O.; Carter, A.H. Deep Learning for Predicting Complex Traits in Spring Wheat Breeding Program. Front. Plant Sci. 2021, 11, 613325.
- Wang, X.; Xuan, H.; Evers, B.; Shrestha, S.; Pless, R.; Poland, J. High-throughput phenotyping with deep learning gives insight into the genetic architecture of flowering time in wheat. GigaScience 2019, 8, giz120.
- Yasrab, R.; Atkinson, J.A.; Wells, D.M.; French, A.P.; Pridmore, T.P.; Pound, M.P. RootNav 2.0: Deep learning for automatic navigation of complex plant root architectures. GigaScience 2019, 8, giz123.
- Zenkl, R.; Timofte, R.; Kirchgessner, N.; Roth, L.; Hund, A.; Van Gool, L.; Walter, A.; Aasen, H. Outdoor Plant Segmentation With Deep Learning for High-Throughput Field Phenotyping on a Diverse Wheat Dataset. Front. Plant Sci. 2022, 12, 774068.
- Zhang, Z.; Qu, Y.; Ma, F.; Lv, Q.; Zhu, X.; Guo, G.; Li, M.; Yang, W.; Que, B.; Zhang, Y.; et al. Integrating high-throughput phenotyping and genome-wide association studies for enhanced drought resistance and yield prediction in wheat. New Phytol. 2024, 243, 1758–1775.
- Zhu, C.; Hu, Y.; Mao, H.; Li, S.; Li, F.; Zhao, C.; Luo, L.; Liu, W.; Yuan, X. A Deep Learning-Based Method for Automatic Assessment of Stomatal Index in Wheat Microscopic Images of Leaf Epidermis. Front. Plant Sci. 2021, 12, 716784.
- Alkhudaydi, T.; Reynolds, D.; Griffiths, S.; Zhou, J.; de la Iglesia, B. An Exploration of Deep-Learning Based Phenotypic Analysis to Detect Spike Regions in Field Conditions for UK Bread Wheat. Plant Phenomics 2019, 2019, 7368761.
- Dandrifosse, S.; Ennadifi, E.; Carlier, A.; Gosselin, B.; Dumont, B.; Mercatoris, B. Deep learning for wheat ear segmentation and ear density measurement: From heading to maturity. Comput. Electron. Agric. 2022, 199, 107161.
- David, E.; Madec, S.; Sadeghi-Tehran, P.; Aasen, H.; Zheng, B.; Liu, S.; Kirchgessner, N.; Ishikawa, G.; Nagasawa, K.; Badhon, M.A.; et al. Global Wheat Head Detection (GWHD) Dataset: A Large and Diverse Dataset of High-Resolution RGB-Labelled Images to Develop and Benchmark Wheat Head Detection Methods. Plant Phenomics 2020, 2020, 3521852.
- David, E.; Serouart, M.; Smith, D.; Madec, S.; Velumani, K.; Liu, S.; Wang, X.; Pinto, F.; Shafiee, S.; Tahir, I.S.A.; et al. Global Wheat Head Detection 2021: An Improved Dataset for Benchmarking Wheat Head Detection Methods. Plant Phenomics 2021, 2021, 9846158.
- Fourati, F.; Mseddi, W.S.; Attia, R. Wheat Head Detection using Deep, Semi-Supervised and Ensemble Learning. Can. J. Remote Sens. 2021, 47, 198–208.
- Genaev, M.A.; Komyshev, E.G.; Smirnov, N.V.; Kruchinina, Y.V.; Goncharov, N.P.; Afonnikov, D.A. Morphometry of the Wheat Spike by Analyzing 2D Images. Agronomy 2019, 9, 390.
- Gong, B.; Ergu, D.; Cai, Y.; Ma, B. Real-Time Detection for Wheat Head Applying Deep Neural Network. Sensors 2021, 21, 191.
- Hasan, M.M.; Chopin, J.P.; Laga, H.; Miklavcic, S.J. Detection and Analysis of Wheat Spikes Using Convolutional Neural Networks. Plant Methods 2018, 14, 100.
- He, M.X.; Hao, P.; Xin, Y.Z. A Robust Method for Wheatear Detection Using UAV in Natural Scenes. IEEE Access 2020, 8, 189043–189053.
- Li, J.; Li, C.; Fei, S.; Ma, C.; Chen, W.; Ding, F.; Wang, Y.; Li, Y.; Shi, J.; Xiao, Z. Wheat Ear Recognition Based on RetinaNet and Transfer Learning. Sensors 2021, 21, 4845.
- Li, R.; Wu, Y. Improved YOLO v5 Wheat Ear Detection Algorithm Based on Attention Mechanism. Electronics 2022, 11, 1673.
- Ma, J.; Li, Y.; Liu, H.; Du, K.; Zheng, F.; Wu, Y.; Zhang, L. Improving segmentation accuracy for ears of winter wheat at flowering stage by semantic segmentation. Comput. Electron. Agric. 2020, 176, 105662.
- Ma, J.; Li, Y.; Du, K.; Zheng, F.; Zhang, L.; Gong, Z.; Jiao, W. Segmenting ears of winter wheat at flowering stage using digital images and deep learning. Comput. Electron. Agric. 2020, 168, 105159.
- Madec, S.; Jin, X.; Lu, H.; Solan, B.D.; Liu, S.; Duyme, F.; Heritier, E.; Baret, F. Ear density estimation from high resolution RGB imagery using deep learning technique. Agric. For. Meteorol. 2019, 264, 225–234.
- Misra, T.; Arora, A.; Marwaha, S.; Chinnusamy, V.; Rao, A.R.; Jain, R.; Sahoo, R.N.; Ray, M.; Kumar, S.; Raju, D.; et al. SpikeSegNet: A deep learning approach utilizing encoder-decoder network with hourglass for spike segmentation and counting in wheat plant from visual imaging. Plant Methods 2020, 16, 40.
- Qing, S.; Qiu, Z.; Wang, W.; Wang, F.; Jin, X.; Ji, J.; Zhao, L.; Shi, Y. Improved YOLO-FastestV2 wheat spike detection model based on a multi-stage attention mechanism with a LightFPN detection head. Front. Plant Sci. 2024, 15, 1411510.
- Sadeghi-Tehran, P.; Virlet, N.; Ampe, E.M.; Reyns, P.; Hawkesford, M.J. DeepCount: In-Field Automatic Quantification of Wheat Spikes Using Simple Linear Iterative Clustering and Deep Convolutional Neural Networks. Front. Plant Sci. 2019, 10, 1176.
- Shen, R.; Zhen, T.; Li, Z. YOLOv5-Based Model Integrating Separable Convolutions for Detection of Wheat Head Images. IEEE Access 2023, 11, 12059–12074.
- Sun, J.; Yang, K.; Chen, C.; Shen, J.; Yang, Y.; Wu, X.; Norton, T. Wheat head counting in the wild by an augmented feature pyramid networks-based convolutional neural network. Comput. Electron. Agric. 2022, 193, 106705.
- Velumani, K.; Madec, S.; de Solan, B.; Lopez-Lozano, R.; Gillet, J.; Labrosse, J.; Jezequel, S.; Comar, A.; Baret, F. An automatic method based on daily in situ images and deep learning to date wheat heading stage. Field Crop. Res. 2020, 252, 107793.
- Wang, D.; Fu, Y.; Yang, G.; Yang, X.; Liang, D.; Zhou, C.; Zhang, N.; Wu, H.; Zhang, D. Combined Use of FCN and Harris Corner Detection for Counting Wheat Ears in Field Conditions. IEEE Access 2019, 7, 178930–178941.
- Wang, Y.; Qin, Y.; Cui, J. Occlusion Robust Wheat Ear Counting Algorithm Based on Deep Learning. Front. Plant Sci. 2021, 12, 645899.
- Wang, D.; Zhang, D.; Yang, G.; Xu, B.; Luo, Y.; Yang, X. SSRNet: In-Field Counting Wheat Ears Using Multi-Stage Convolutional Neural Network. IEEE Trans. Geosci. Remote Sens. 2022, 60, 4403311.
- Xiong, H.; Cao, Z.; Lu, H.; Madec, S.; Liu, L.; Shen, C. TasselNetv2: In-field counting of wheat spikes with context-augmented local regression networks. Plant Methods 2019, 15, 150.
- Xu, X.; Li, H.; Yin, F.; Xi, L.; Qiao, H.; Ma, Z.; Shen, S.; Jiang, B.; Ma, X. Wheat Ear Counting Using K-means Clustering Segmentation and Convolutional Neural Network. Plant Methods 2020, 16, 106.
- Yang, B.; Gao, Z.; Gao, Y.; Zhu, Y. Rapid Detection and Counting of Wheat Ears in the Field Using YOLOv4 with Attention Module. Agronomy 2021, 11, 1202.
- Zang, H.; Wang, Y.; Ru, L.; Zhou, M.; Chen, D.; Zhao, Q.; Zhang, J.; Li, G.; Zheng, G. Detection method of wheat spike improved YOLOv5s based on the attention mechanism. Front. Plant Sci. 2022, 13, 993244.
- Zhao, J.; Zhang, X.; Yan, J.; Qiu, X.; Yao, X.; Tian, Y.; Zhu, Y.; Cao, W. A Wheat Spike Detection Method in UAV Images Based on Improved YOLOv5. Remote Sens. 2021, 13, 3095.
- Zhao, J.; Yan, J.; Xue, T.; Wang, S.; Qiu, X.; Yao, X.; Tian, Y.; Zhu, Y.; Cao, W.; Zhang, X. A deep learning method for oriented and small wheat spike detection (OSWSDet) in UAV images. Comput. Electron. Agric. 2022, 198, 107087.
- Gao, H.; Zhen, T.; Li, Z. Detection of Wheat Unsound Kernels Based on Improved ResNet. IEEE Access 2022, 10, 20092–20101.
- Khatri, A.; Agrawal, S.; Chatterjee, J.M. Wheat Seed Classification: Utilizing Ensemble Machine Learning Approach. Sci. Program. 2022, 2022, 1–9.
- Laabassi, K.; Belarbi, M.A.; Mahmoudi, S.; Mahmoudi, S.A.; Ferhat, K. Wheat varieties identification based on a deep learning approach. J. Saudi Soc. Agric. Sci. 2021, 20, 281–289.
- Li, H.; Zhang, L.; Sun, H.; Rao, Z.; Ji, H. Discrimination of Unsound Wheat Kernels Based on Deep Convolutional Generative Adversarial Network and Near-Infrared Hyperspectral Imaging Technology. Spectrochim. Acta Part A Mol. Biomol. Spectrosc. 2022, 268, 120722.
- Lingwal, S.; Bhatia, K.K.; Tomer, M.S. Image-Based Wheat Grain Classification Using Convolutional Neural Network. Multimed. Tools Appl. 2021, 80, 35441–35465.
- Özkan, K.; Işık, Ş.; Yavuz, B.T. Identification of wheat kernels by fusion of RGB, SWIR, and VNIR samples. J. Sci. Food Agric. 2019, 99, 4977–4984.
- Passos, D.; Mishra, P. An automated deep learning pipeline based on advanced optimisations for leveraging spectral classification modelling. Chemom. Intell. Lab. Syst. 2021, 215, 104354.
- Sabanci, K.; Kayabasi, A.; Toktas, A. Computer vision-based method for classification of wheat grains using artificial neural network. J. Sci. Food Agric. 2017, 97, 2588–2593.
- Sabanci, K.; Aslan, M.F.; Durdu, A. Bread and durum wheat classification using wavelet-based image fusion. J. Sci. Food Agric. 2020, 100, 5577–5585.
- Sabanci, K. Detection of sunn pest-damaged wheat grains using artificial bee colony optimization-based artificial intelligence techniques. J. Sci. Food Agric. 2020, 100, 817–824.
- Sabanci, K.; Aslan, M.F.; Ropelewska, E.; Unlersen, M.F.; Durdu, A. A Novel Convolutional-Recurrent Hybrid Network for Sunn Pest–Damaged Wheat Grain Detection. Food Anal. Methods 2022, 15, 1748–1760.
- Unlersen, M.F.; Sonmez, M.E.; Aslan, M.F.; Demir, B.; Aydin, N.; Sabanci, K.; Ropelewska, E. CNN–SVM hybrid model for varietal classification of wheat based on bulk samples. Eur. Food Res. Technol. 2022, 248, 2043–2052.
- Wei, W.; Tian-le, Y.; Rui, L.; Chen, C.; Tao, L.; Kai, Z.; Cheng-ming, S.; Chun-yan, L.; Xin-kai, Z.; Wen-shan, G. Detection and Enumeration of Wheat Grains Based on a Deep Learning Method Under Various Scenarios and Scales. J. Integr. Agric. 2020, 19, 1998–2008.
- Yang, X.; Guo, M.; Lyu, Q.; Ma, M. Detection and Classification of Damaged Wheat Kernels Based on Progressive Neural Architecture Search. Biosyst. Eng. 2021, 208, 176–185.
- Zhang, L.; Sun, H.; Rao, Z.; Ji, H. Non-destructive identification of slightly sprouted wheat kernels using hyperspectral data on both sides of wheat kernels. Biosyst. Eng. 2020, 200, 188–199.
- Zhao, X.; Que, H.; Sun, X.; Zhu, Q.; Huang, M. Hybrid convolutional network based on hyperspectral imaging for wheat seed varieties classification. Infrared Phys. Technol. 2022, 125, 104270.
- Zhou, L.; Zhang, C.; Taha, M.F.; Wei, X.; He, Y.; Qiu, Z.; Liu, Y. Wheat Kernel Variety Identification Based on a Large Near-Infrared Spectral Dataset and a Novel Deep Learning-Based Feature Selection Method. Front. Plant Sci. 2020, 11, 575810.
- Cai, W.; Wei, Z.; Song, Y.; Li, M.; Yang, X. Residual-capsule networks with threshold convolution for segmentation of wheat plantation rows in UAV images. Multimed. Tools Appl. 2021, 80, 32131–32147.
- Fang, P.; Zhang, X.; Wei, P.; Wang, Y.; Zhang, H.; Liu, F.; Zhao, J. The Classification Performance and Mechanism of Machine Learning Algorithms in Winter Wheat Mapping Using Sentinel-2 10 m Resolution Imagery. Appl. Sci. 2020, 10, 5075.
- Luo, Y.; Zhang, Z.; Cao, J.; Zhang, L.; Zhang, J.; Han, J.; Zhuang, H.; Cheng, F.; Tao, F. Accurately mapping global wheat production system using deep learning algorithms. Int. J. Appl. Earth Obs. Geoinf. 2022, 110, 102823.
- Meng, S.; Wang, X.; Hu, X.; Luo, C.; Zhong, Y. Deep Learning-Based Crop Mapping in the Cloudy Season Using One-Shot Hyperspectral Satellite Imagery. Remote Sens. 2021, 13, 2674.
- Zhong, L.; Hu, L.; Zhou, H.; Tao, X. Deep learning based winter wheat mapping using statistical data as ground references in Kansas and northern Texas, US. Remote Sens. Environ. 2019, 233, 111411.
- Bourguet, J.R.; Thomopoulos, R.; Mugnier, M.L.; Abécassis, J. An artificial intelligence-based approach to deal with argumentation applied to food quality in a public health policy. Expert Syst. Appl. 2013, 40, 4539–4546.
- Nargesi, M.H.; Kheiralipour, K.; Jayas, D.S. Classification of different wheat flour types using hyperspectral imaging and machine learning techniques. Infrared Phys. Technol. 2024, 142, 105520.
- Shen, Y.; Yin, Y.; Zhao, C.; Li, B.; Wang, J.; Li, G.; Zhang, Z. Image Recognition Method Based on an Improved Convolutional Neural Network to Detect Impurities in Wheat. IEEE Access 2019, 7, 162206–162218.
- Shen, Y.; Yin, Y.; Li, B.; Zhao, C.; Li, G. Detection of impurities in wheat using terahertz spectral imaging and convolutional neural networks. Comput. Electron. Agric. 2021, 181, 105931.
- Bartley, P.G.; Nelson, S.O.; McClendon, R.W.; Trabelsi, S. Determining Moisture Content of Wheat with an Artificial Neural Network from Microwave Transmission Measurements. IEEE Trans. Instrum. Meas. 1998, 47, 123–127.
- Shafaei, S.; Nourmohamadi-Moghadami, A.; Kamgar, S. Development of artificial intelligence based systems for prediction of hydration characteristics of wheat. Comput. Electron. Agric. 2016, 128, 34–45.
- Singh, H.; Roy, A.; Setia, R.K.; Pateriya, B. Estimation of nitrogen content in wheat from proximal hyperspectral data using machine learning and explainable artificial intelligence (XAI) approach. Model. Earth Syst. Environ. 2022, 8, 2505–2511.
- Wu, Q.; Zhang, Y.; Zhao, Z.; Xie, M.; Hou, D. Estimation of Relative Chlorophyll Content in Spring Wheat Based on Multi-Temporal UAV Remote Sensing. Agronomy 2023, 13, 211.
- Yang, J.; Li, J.; Hu, J.; Yang, W.; Zhang, X.; Xu, J.; Zhang, Y.; Luo, X.; Ting, K.; Lin, T.; et al. An Interpretable Deep Learning Approach for Calibration Transfer Among Multiple Near-Infrared Instruments. Comput. Electron. Agric. 2022, 192, 106584.
- Akkem, Y.; Biswas, S.K.; Varanasi, A. Streamlit-based enhancing crop recommendation systems with advanced explainable artificial intelligence for smart farming. Neural Comput. Appl. 2024, 36, 20011–20025.
- Bai, F.J.J.S.; Shanmugaiah, K.; Sonthalia, A.; Devarajan, Y.; Varuvel, E.G. Application of machine learning algorithms for predicting the engine characteristics of a wheat germ oil–Hydrogen fueled dual fuel engine. Int. J. Hydrogen Energy 2023, 48, 23308–23322.
- Ghasemi-Mobtaker, H.; Kaab, A.; Rafiee, S.; Nabavi-Pelesaraei, A. A comparative modeling techniques and life cycle assessment for prediction of output energy, economic profit, and global warming potential for wheat farms. Energy Rep. 2022, 8, 4922–4934.
- Núñez, E.G.F.; Barchi, A.C.; Ito, S.; Escaramboni, B.; Herculano, R.D.; Mayer, C.R.M.; de Oliva Neto, P. Artificial intelligence approach for high-level production of amylase using Rhizopus microsporus var. oligosporus and different agro-industrial wastes. J. Chem. Technol. Biotechnol. 2017, 92, 684–692.
- Kamilaris, A.; Prenafeta-Boldú, F.X. Deep Learning in Agriculture: A Survey. Comput. Electron. Agric. 2018, 147, 70–90.
- Attri, I.; Awasthi, L.K.; Sharma, T.P.; Rathee, P. A review of deep learning techniques used in agriculture. Ecol. Inform. 2023, 77, 102217.
- Ahmad, A.; Saraswat, D.; El Gamal, A. A survey on using deep learning techniques for plant disease diagnosis and recommendations for development of appropriate tools. Smart Agric. Technol. 2023, 3, 100083.
- van Klompenburg, T.; Kassahun, A.; Catal, C. Crop Yield Prediction Using Machine Learning: A Systematic Literature Review. Comput. Electron. Agric. 2020, 177, 105709.
- Bayer, P.E.; Edwards, D. Machine learning in agriculture: From silos to marketplaces. Plant Biotechnol. J. 2021, 19, 648–650.
- Barbedo, J.G.A. Impact of dataset size and variety on the effectiveness of deep learning and transfer learning for plant disease classification. Comput. Electron. Agric. 2018, 153, 46–53.
- Noon, S.K.; Amjad, M.; Qureshi, M.A.; Mannan, A. Use of deep learning techniques for identification of plant leaf stresses: A review. Inf. Process. Agric. 2020, 7, 341–365.
- Jung, J.; Maeda, M.; Chang, A.; Bhandari, M.; Ashapure, A.; Landivar-Bowles, J. The potential of remote sensing and artificial intelligence as tools to improve the resilience of agriculture production systems. Remote Sens. 2021, 13, 3230.
- Arya, S.; Sandhu, K.S.; Singh, J.; Kumar, S. Deep learning: As the new frontier in high-throughput plant phenotyping. Euphytica 2022, 218, 47.
- Bouguettaya, A.; Zarzour, H.; Kechida, A.; Taberkit, A.M. Deep learning techniques to classify agricultural crops through UAV imagery: A review. Neural Comput. Appl. 2022, 34, 9511–9536.
- Bouguettaya, A.; Zarzour, H.; Kechida, A.; Taberkit, A.M. A survey on deep learning-based identification of plant and crop diseases from UAV-based aerial images. Clust. Comput. 2023, 26, 1297–1317.
- Harfouche, A.L.; Nakhle, F.; Harfouche, A.H.; Sardella, O.G.; Dart, E.; Jacobson, D. A primer on artificial intelligence in plant digital phenomics: Embarking on the data to insights journey. Trends Plant Sci. 2023, 28, 154–184.
- Li, W.; Zheng, T.; Yang, Z.; Li, M.; Sun, C.; Yang, X. Classification and detection of insects from field images using deep learning for smart pest management: A systematic review. Ecol. Inform. 2021, 66, 101460.
- Wang, D.; Cao, W.; Zhang, F.; Li, Z.; Xu, S.; Wu, X. A Review of Deep Learning in Multiscale Agricultural Sensing. Remote Sens. 2022, 14, 559.
- Zhang, L.; Zhang, L. Artificial Intelligence for Remote Sensing Data Analysis: A Review of Challenges and Opportunities. ISPRS J. Photogramm. Remote Sens. 2022, 184, 183–204.
- Li, L.; Zhang, S.; Wang, B. Plant Disease Detection and Classification by Deep Learning—A Review. IEEE Access 2021, 9, 56683–56698.
- Alemohammad, H.; Booth, K. LandCoverNet: A global benchmark land cover classification training dataset. arXiv 2020, arXiv:2012.03111.
- Hughes, D.P.; Legg, J.; Alemohammad, H. PlantVillage Nuru: Pest and disease monitoring using AI. 2019. Available online: https://bigdata.cgiar.org/digital-intervention/plantvillage-nuru-pest-and-disease-monitoring-using-ai/ (accessed on 28 April 2025).
- Uzhinskiy, A. Evaluation of Different Few-Shot Learning Methods in the Plant Disease Classification Domain. Biology 2025, 14, 99.
- Ghanbarzadeh, A.; Soleimani, H. Self-supervised in-domain representation learning for remote sensing image scene classification. Heliyon 2024, 10, e37962.
- Mushtaq, M.A.; Ahmed, H.G.M.D.; Zeng, Y. Applications of Artificial Intelligence in Wheat Breeding for Sustainable Food Security. Sustainability 2024, 16, 5688.
- Sheikh, M.; Iqra, F.; Ambreen, H.; Pravin, K.A.; Ikra, M.; Chung, Y.S. Integrating artificial intelligence and high-throughput phenotyping for crop improvement. J. Integr. Agric. 2024, 23, 1787–1802.
- Kuswidiyanto, L.W.; Noh, H.H.; Han, X. Plant Disease Diagnosis Using Deep Learning Based on Aerial Hyperspectral Images: A Review. Remote Sens. 2022, 14, 5073.
- Zhang, X.; Yang, J.; Lin, T.; Ying, Y. Food and Agro-Product Quality Evaluation Based on Spectroscopy and Deep Learning: A Review. Trends Food Sci. Technol. 2021, 112, 431–441.
- Farooq, M.A.; Gao, S.; Hassan, M.A.; Huang, Z.; Rasheed, A.; Hearne, S.; Prasanna, B.; Li, X.; Li, H. Artificial intelligence in plant breeding. Trends Genet. 2024, 40, 891–905.
- Ferchichi, A.; Abbes, A.B.; Barra, V.; Farah, I.R. Forecasting vegetation indices from spatio-temporal remotely sensed data using deep learning-based approaches: A systematic literature review. Environ. Res. 2022, 214, 113845.
- Muruganantham, P.; Wibowo, S.; Grandhi, S.; Samrat, N.H.; Islam, N. A Systematic Literature Review on Crop Yield Prediction with Deep Learning and Remote Sensing. Remote Sens. 2022, 14, 1990.
- Darwin, B.; Dharmaraj, P.; Prince, S.; Popescu, D.E.; Hemanth, D.J. Recognition of Bloom/Yield in Crop Images Using Deep Learning Models for Smart Agriculture: A Review. Agronomy 2021, 11, 646.
- Campos-Taberner, M.; García-Haro, F.J.; Martínez, B.; Izquierdo-Verdiguier, E.; Atzberger, C.; Camps-Valls, G.; Gilabert, M.A. Understanding deep learning in land use classification based on Sentinel-2 time series. Sci. Rep. 2020, 10, 17188.
- Chakraborty, S.K.; Chandel, N.S.; Jat, D.; Tiwari, M.K.; Rajwade, Y.A.; Subeesh, A. Deep learning approaches and interventions for futuristic engineering in agriculture. Neural Comput. Appl. 2022, 34, 20539–20573.
- An, D.; Zhang, L.; Liu, Z.; Liu, J.; Wei, Y. Advances in infrared spectroscopy and hyperspectral imaging combined with artificial intelligence for the detection of cereals quality. Crit. Rev. Food Sci. Nutr. 2023, 63, 9766–9796.
- Aslan, M.F.; Sabanci, K.; Aslan, B. Artificial Intelligence Techniques in Crop Yield Estimation Based on Sentinel-2 Data: A Comprehensive Survey. Sustainability 2024, 16, 8277.
- Mamat, N.; Othman, M.F.; Abdoulghafor, R.; Belhaouari, S.B.; Mamat, N.; Mohd Hussein, S.F. Advanced Technology in Agriculture Industry by Implementing Image Annotation Technique and Deep Learning Approach: A Review. Agriculture 2022, 12, 1033.
- Ilyas, Q.M.; Ahmad, M.; Mehmood, A. Automated Estimation of Crop Yield Using Artificial Intelligence and Remote Sensing Technologies. Bioengineering 2023, 10, 125.
- Shahi, T.B.; Xu, C.Y.; Neupane, A.; Guo, W. Recent Advances in Crop Disease Detection Using UAV and Deep Learning Techniques. Remote Sens. 2023, 15, 2450.
- Kaur, J.; Fard, S.M.H.; Amiri-Zarandi, M.; Dara, R. Protecting farmers’ data privacy and confidentiality: Recommendations and considerations. Front. Sustain. Food Syst. 2022, 6, 903230.
- Kabala, D.M.; Hafiane, A.; Bobelin, L.; Canals, R. Image-based crop disease detection with federated learning. IEEE Access 2021, 9, 138074–138105.
- Bera, S.; Dey, T.; Mukherjee, A.; De, D. FLAG: Federated Learning for Sustainable Irrigation in Agriculture 5.0. IEEE Access 2022, 10, 66693–66715.
- Jha, K.; Doshi, A.; Patil, P.S.K.; Kumar, M. A Comprehensive Review on Automation in Agriculture Using Artificial Intelligence. Artif. Intell. Agric. 2019, 2, 48–55.
- Sarkar, C.; Gupta, D.; Gupta, U.; Hazarika, B.B. Leaf disease detection using machine learning and deep learning: Review and challenges. Appl. Soft Comput. 2023, 145, 110534.
- Siregar, R.R.A.; Seminar, K.B.; Wahjuni, S.; Santosa, E. Vertical Farming Perspectives in Support of Precision Agriculture Using Artificial Intelligence: A Review. Computers 2022, 11, 135.
- Pathan, M.; Patel, N.; Yagnik, H.; Shah, M. Artificial Cognition for Applications in Smart Agriculture: A Comprehensive Review. Artif. Intell. Agric. 2020, 4, 81–95.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2025 by the author. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).