Review

Computer Vision Meets Generative Models in Agriculture: Technological Advances, Challenges and Opportunities

by Xirun Min 1, Yuwen Ye 1, Shuming Xiong 1,* and Xiao Chen 2,*

1 School of Computer Science and Communication Engineering, Jiangsu University, Zhenjiang 212013, China
2 School of Computing Science and Mathematical Sciences, University of Leicester, Leicester LE1 7RH, UK
* Authors to whom correspondence should be addressed.
Appl. Sci. 2025, 15(14), 7663; https://doi.org/10.3390/app15147663
Submission received: 4 June 2025 / Revised: 1 July 2025 / Accepted: 4 July 2025 / Published: 8 July 2025
(This article belongs to the Section Electrical, Electronics and Communications Engineering)

Abstract

The integration of computer vision (CV) and generative artificial intelligence (GenAI) into smart agriculture has revolutionised traditional farming practices by enabling real-time monitoring, automation, and data-driven decision-making. This review systematically examines the applications of CV in key agricultural domains, such as crop health monitoring, precision farming, harvesting automation, and livestock management, while highlighting the transformative role of GenAI in addressing data scarcity and enhancing model robustness. Advanced techniques, including convolutional neural networks (CNNs), YOLO variants, and transformer-based architectures, are analysed for their effectiveness in tasks like pest detection, fruit maturity classification, and field management. The survey reveals that generative models, such as generative adversarial networks (GANs) and diffusion models, significantly improve dataset diversity and model generalisation, particularly in low-resource scenarios. However, challenges persist, including environmental variability, edge deployment limitations, and the need for interpretable systems. Emerging trends, such as vision–language models and federated learning, offer promising avenues for future research. The study concludes that the synergy of CV and GenAI holds immense potential for advancing smart agriculture, though scalable, adaptive, and trustworthy solutions remain critical for widespread adoption. This comprehensive analysis provides valuable insights for researchers and practitioners aiming to harness AI-driven innovations in agricultural ecosystems.

1. Introduction

The integration of computer vision (CV) into agricultural practices has driven a paradigm shift from traditional farming to data-driven Agriculture 4.0 [1]. Classical CV techniques, such as image segmentation for crop health monitoring [2,3,4] and object detection for livestock management [5,6,7], form the technological foundation for three operational pillars: continuous field surveillance [8,9], predictive analytics [10,11,12], and automated control systems [13,14]. These advancements enable precise yield forecasting via feature extraction methodologies [15,16,17] and quality assessment through pattern recognition architectures [18,19,20], thereby enhancing agricultural productivity while mitigating environmental impact. The synergy between continuous surveillance, predictive modelling, and frameworks for automated decision-making underpins the transition toward precision agriculture, offering scalable solutions for resource optimisation and sustainable farming practices [21].
Expanding on these technological foundations, smart agriculture increasingly depends on the integration of CV with autonomous systems and distributed computing frameworks [1]. This convergence now supports navigation systems in unmanned ground vehicles through stereo-vision mapping [22,23,24], enables drone-based crop surveillance via hyperspectral imaging [25,26,27], and facilitates real-time decision-making through Internet of Things (IoT)-based sensor fusion [28,29,30]. These technological integrations collectively establish an interconnected agricultural ecosystem where edge computing platforms process visual data from distributed sensors to coordinate automated machinery responses [31]. This closed-loop system effectively bridges the gap between data acquisition and field interventions, enabling more responsive and precise agricultural operations [32].
However, this technological progression also exposes critical limitations when deployed across diverse agricultural contexts [33]. Operational challenges arise from environmental variability, including fluctuating illumination and occluded crop structures, which compromise model generalisation [34]. Infrastructure constraints further exacerbate these technical limitations, particularly in resource-constrained settings lacking the computational capacity required for complex vision algorithms [35]. The fundamental bottleneck, however, remains data scarcity [36,37,38]. The prohibitive cost of acquiring labelled agricultural datasets across numerous crop phenotypes and growth stages restricts system scalability, ultimately limiting the practical deployment reliability of these technologies [39].
Generative artificial intelligence (GenAI) has recently emerged as a transformative approach to resolving long-standing challenges in agricultural technology through advanced synthetic data generation and adaptive model optimisation techniques [40]. Current implementations leverage diffusion models [41] and generative adversarial networks (GANs) [42] to synthesise photorealistic agricultural imagery encompassing rare crop phenotypes and extreme meteorological conditions [43,44,45]. This methodology effectively mitigates data paucity challenges while maintaining annotation fidelity through intelligent style-transfer mechanisms. Crucially, latent space manipulations within foundation models facilitate cross-domain adaptation across diverse geographical regions and seasonal variations [46], thereby addressing the generalisation constraints prevalent in conventional CV systems.
For practical deployment, GenAI can augment knowledge distillation methodologies to transform complex vision architectures into computationally lean models deployable on edge computing platforms [47,48,49], thereby systematically addressing the operational limitations inherent to rural agricultural infrastructure. Furthermore, sophisticated simulation environments, powered by generative techniques, are leveraged to create targeted training datasets that systematically include challenging scenarios like occlusions and variable crop geometries, essential for robust autonomous harvesting systems. This approach yields two critical advancements: (1) it substantially enhances the capability to detect objects even when they are occluded, and (2) it significantly reduces the need for field datasets that are laborious to collect. These outcomes collectively underscore the transformative potential of GenAI in enabling robust CV solutions for precision agriculture.
Building upon this transformative potential, this survey provides a comprehensive examination of GenAI’s burgeoning role in revolutionising CV for smart agriculture. We chart the evolution from established CV techniques to cutting-edge GenAI-driven solutions, elucidating how these novel approaches address persistent challenges like data scarcity, environmental variability, and the need for adaptable models. The primary achievements of this review are threefold: firstly, it offers a structured analysis of how GenAI, particularly through diffusion models and GANs, facilitates the generation of diverse, high-fidelity synthetic agricultural data, thereby enhancing model training, resilience, and cross-domain adaptability. Secondly, it presents an extensive overview of state-of-the-art applications where GenAI-augmented CV is driving innovation, spanning crop health monitoring, autonomous harvesting, precision field management, livestock and aquaculture surveillance, and agricultural product quality control. Finally, this survey critically assesses the remaining operational and ethical challenges while outlining promising future research directions, providing a holistic perspective on the ongoing integration of GenAI into the fabric of Agriculture 4.0. Figure 1 provides a visual summary of the methodological spectrum from traditional CV to GenAI, highlighting key applications across smart agriculture domains.

2. Review Methodology

To ensure this survey’s systematic and comprehensive nature, we designed and executed a rigorous literature search and screening process following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines [50], as illustrated in Figure 2. The literature for this study was sourced from the following major academic databases: IEEE Xplore, ACM Digital Library, Elsevier ScienceDirect, SpringerLink, and Google Scholar. To comprehensively cover the field’s foundational and recent advancements, we set the publication time frame from 2009 to 2025, with a strong focus on the latest literature from the past five years, while foundational earlier works were included when historically significant.
Our search strategy employed a structured combination of keywords, which were divided into four core dimensions to ensure broad and deep coverage of the research area.
  • Core Themes: This dimension defined the study’s broader context, including terms such as “Precision Agriculture”, “Smart Farming”, and “Digital Agriculture”.
  • Core Technologies: This dimension focused on the key technical methods central to this review, with search terms like “Artificial Intelligence (AI)”, “Computer Vision”, “Deep Learning”, “Machine Learning”, “Generative AI”, “Generative Adversarial Networks (GANs)”, “Foundation Models”, “Convolutional Neural Networks (CNNs)”, and “Transformer”.
  • Application Tasks: This dimension aimed to cover specific application scenarios of AI and vision in agriculture, including keywords such as “Disease Detection”, “Pest Detection”, “Weed Detection”, “Yield Prediction”, “Fruit Grading”, “Maturity Detection”, “Behaviour Recognition”, “Robotic Harvesting”, and “Autonomous Navigation”.
  • Enabling Systems and Sensing Technologies: This dimension covered the hardware platforms and data acquisition technologies supporting the aforementioned applications, for instance, “Unmanned Aerial Vehicle (UAV)/Drone”, “Internet of Things (IoT)”, “Remote Sensing”, “Hyperspectral Imaging”, “Multispectral Imaging”, “Stereo Vision”, and “Edge Computing”.
During the actual search, we utilised Boolean operators (AND, OR) to flexibly combine keywords from different dimensions to construct precise search queries. A typical query example is as follows: (“Computer Vision” OR “Deep Learning”) AND (“Precision Agriculture” OR “Smart Farming”) AND (“Disease Detection” OR “Weed Detection” OR “Fruit Grading”).
The literature screening process was divided into two stages. In the initial screening, we reviewed titles and abstracts to exclude entries clearly irrelevant to the survey’s topic; simultaneously, non-English literature, short abstracts, and articles without full-text access were eliminated. In the secondary screening, we conducted a full-text review of the remaining literature, making the final selection based on the following inclusion criteria: (1) the research content is highly relevant to the application of computer vision and artificial intelligence in agriculture; (2) the study includes a clear methodology, detailed experimental procedures, and verifiable results. Following this screening process, a total of 309 publications were included for analysis in this survey.

3. An Overview of GenAI-Driven CV Frameworks

3.1. Traditional CV Technologies in Smart Agriculture

The traditional CV technological framework is anchored in classical algorithms such as image segmentation [51] and object detection [52], relying on manually engineered feature extraction techniques (e.g., texture, colour, and shape analysis) and pattern recognition architectures [53,54]. These rule-driven systems exhibit a strong dependency on structured data, employing methods like threshold-based segmentation or edge detection for crop health monitoring [55] or feature-matching approaches for pest identification [56]. Characterised by computational robustness in controlled environments, these techniques nonetheless face inherent limitations, including environmental sensitivity (e.g., fluctuating illumination, occlusions) and constrained generalisation capabilities, necessitating extensive labelled datasets for scenario-specific training [34]. While stable in closed-loop settings, their high computational cost, need for labour-intensive annotation, and inability to adapt to diverse and changing farm environments have historically made it difficult to deploy them at scale [57].
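To make the character of these rule-driven pipelines concrete, the following minimal sketch illustrates threshold-based canopy segmentation of the kind used in early crop health monitoring. It assumes an OpenCV environment; the HSV bounds and file name are illustrative placeholders requiring scene-specific calibration, not values drawn from the cited studies.

```python
import cv2
import numpy as np

def segment_green_canopy(bgr_image: np.ndarray) -> np.ndarray:
    """Return a binary mask of green vegetation using a fixed HSV threshold."""
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    # Hue/saturation/value bounds for "healthy green" are scene-dependent and
    # would need calibration for any real field deployment.
    lower, upper = np.array([35, 40, 40]), np.array([85, 255, 255])
    mask = cv2.inRange(hsv, lower, upper)
    # Morphological opening removes small speckles caused by soil or sensor noise.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    return cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)

image = cv2.imread("leaf.jpg")           # hypothetical field image
canopy_mask = segment_green_canopy(image)
coverage = canopy_mask.mean() / 255      # fraction of pixels classified as canopy
print(f"Estimated canopy cover: {coverage:.1%}")
```

The sensitivity of such fixed thresholds to illumination and background clutter is precisely the limitation motivating the data-driven methods discussed next.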
Conventional CV methodologies underpin three critical pillars of smart agriculture: (1) Continuous field surveillance leverages image segmentation with multispectral imaging for real-time crop phenotyping [58,59,60], while object detection enables livestock behavioural tracking [61,62,63]. (2) Predictive analytics utilises feature extraction (e.g., leaf area index quantification [64,65]) to construct yield forecasting models, augmented by temporal data analysis for disease risk prediction [66,67]. (3) Automated control systems integrate visual navigation (e.g., path planning for unmanned ground vehicles) with sensor fusion to execute precision farming operations. For instance, drone-mounted CV algorithms facilitate large-scale field inspections [68,69], while edge computing platforms synthesise visual inputs to optimise irrigation scheduling [70,71]. Though limited by their reliance on handcrafted features in complex agroecological contexts, these technologies established the foundational infrastructure for Agriculture 4.0, enabling data-driven decision cycles that transitioned farming from empirical practices to precision resource management.

3.2. A Paradigm Shift from Traditional CV to Generative AI

GenAI refers to advanced computational frameworks capable of autonomously synthesising high-fidelity data—such as images, videos, or 3D representations—through self-supervised learning paradigms [72]. Core methodologies, including diffusion models [41], GANs [42], and transformer-based architectures [73], are characterised by their ability to learn latent data distributions and generate contextually coherent outputs. Their key strengths lie in their scalability, annotation efficiency, and capacity to model complex, non-linear relationships within data, often augmented by large-scale pretraining on multimodal datasets [74]. Recent advancements have propelled GenAI beyond theoretical frameworks, enabling applications in synthetic dataset generation [75], domain adaptation [76], and physics-informed neural rendering, which incorporates physical laws into neural networks to create photorealistic and consistent 3D representations of objects and scenes [77]. These breakthroughs solidify its role as a cornerstone of next-generation AI solutions.
These capabilities enable GenAI to overcome the fundamental limitations of traditional CV systems by addressing two critical bottlenecks: data scarcity and domain generalisation. Conventional CV systems, dependent on handcrafted feature engineering and scenario-specific labelled datasets, are not robust to changing environmental conditions (such as different lighting or weather), and the cost to annotate data for them is unsustainably high [34]. GenAI circumvents these issues through photorealistic agricultural image synthesis, spanning diverse crop phenotypes [78,79], growth stages [80,81], and meteorological extremes [82], leveraging conditional diffusion models [41]. Synthetic datasets emulating rare disease manifestations or occluded plant structures, for instance, enhance model generalisability without resource-intensive field data collection [83,84].
Style-transfer mechanisms help maintain consistent annotations between synthetic and real-world datasets by transforming the visual characteristics of synthetic images to more closely match those found in real-world scenarios. This reduces the domain gap and ensures that models trained on synthetic data generalise better when applied to real-world agricultural tasks [85]. Meanwhile, latent space interpolation refers to the process of blending representations within the neural network’s latent space—essentially, the abstract feature space learned during training—which allows for smooth adaptation between different domains. For example, by interpolating between the features of datasets from different regions, models can be fine-tuned to specific soil compositions or local microclimates, thereby improving cross-domain adaptability [86].
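As a simplified illustration of latent space interpolation, the sketch below blends two latent codes and decodes each intermediate point with a pretrained generator; the generator and latent vectors are hypothetical stand-ins for the region- or season-specific representations discussed above.

```python
import torch

@torch.no_grad()
def interpolate_latents(generator, z_a, z_b, steps=8):
    """Blend two latent codes and decode each intermediate point.

    `generator` is any pretrained decoder mapping latent vectors to images
    (e.g. a GAN generator); z_a and z_b could be latents fitted to imagery
    from two different regions or seasons.
    """
    images = []
    for alpha in torch.linspace(0.0, 1.0, steps):
        z = (1.0 - alpha) * z_a + alpha * z_b   # linear interpolation in latent space
        images.append(generator(z))
    return images
```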
Furthermore, integrating physics-aware neural rendering with agro-environmental simulations enables GenAI to generate training data that remains robust even under conditions of occlusion—such as when crops are partially hidden by machinery or foliage—a challenge that conventional computer vision pipelines struggle to address [78,87,88]. These innovations not only reduce the dependence on labour-intensive manual annotation but also enhance computational efficiency by leveraging knowledge distillation techniques. This approach compresses large-scale models into lightweight versions suitable for deployment on edge devices, making advanced AI solutions more accessible in resource-constrained agricultural settings.
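The following sketch shows a standard knowledge distillation objective of the kind referred to here, combining softened teacher targets with hard labels in PyTorch; the temperature and weighting values are illustrative defaults rather than settings reported in the cited works.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature: float = 4.0, alpha: float = 0.7):
    """Combine soft-target KL loss (teacher -> student) with hard-label cross-entropy."""
    soft_targets = F.softmax(teacher_logits / temperature, dim=1)
    soft_preds = F.log_softmax(student_logits / temperature, dim=1)
    # Temperature^2 rescaling keeps gradient magnitudes comparable across temperatures.
    kd = F.kl_div(soft_preds, soft_targets, reduction="batchmean") * temperature ** 2
    ce = F.cross_entropy(student_logits, labels)
    return alpha * kd + (1.0 - alpha) * ce
```

Minimising this loss lets a compact student network inherit the behaviour of a large teacher, which is what makes edge deployment of the resulting model practical.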

3.3. GenAI-Driven CV Framework with Agricultural IoT and Edge Computing Systems

The GenAI-based CV framework establishes a synergistic ecosystem by converging agricultural IoT networks [89], edge computing architectures [90], and physics-informed neural rendering [77]. This integration addresses three core challenges in smart agriculture: (1) the lack of data for rare events, (2) the need for low-latency (real-time) responses in field operations, and (3) the difficulty of applying a model to different farm environments (domain shift).
To illustrate how these technologies intersect in practice, consider the task of detecting early-stage nutrient deficiency in a cornfield—a problem requiring timely and precise intervention. The process unfolds as an integrated, adaptive loop:
  • Data Acquisition and Edge Processing (IoT and Edge): An autonomous drone, acting as an agricultural IoT device, captures multispectral imagery of the field. Simultaneously, in-ground sensors report real-time soil moisture and temperature. Instead of transmitting vast amounts of raw data to the cloud, this information is processed locally on an edge computing device. This ensures a low-latency response, which is critical for guiding immediate in-field operations.
  • The Data and Domain Challenge: The central problem is that a pretrained, generic CV model would likely fail. It lacks sufficient real-world examples of this specific crop variety’s nutrient deficiency under today’s unique lighting, atmospheric, and growth-stage conditions. This is a classic domain shift problem where models trained in one context perform poorly in another.
  • Real-time, Physics-Informed Synthesis (GenAI and IoT): This is where GenAI becomes transformative. A lightweight generative model (e.g., a diffusion model) running on the edge device takes the real-time, multimodal IoT inputs—such as ambient light conditions from the camera and soil data—as conditional parameters [83,91]. Leveraging physics-informed neural rendering, the model synthesises a small, bespoke dataset of photorealistic images. These images accurately depict what the early signs of deficiency should look like, right here and now, effectively closing the domain gap.
  • Adaptive Perception (CV and GenAI): The onboard CV model is then instantly fine-tuned with these newly generated, perfectly contextualised synthetic images. Now, the perception model is precisely adapted to the current environment and can accurately identify the subtle spectral and textural anomalies on the corn leaves that indicate deficiency.
  • Actionable Insight for the Practitioner: Finally, the system moves beyond mere detection. It generates a precise, intuitive heat map of the affected areas, which is relayed to the farmer’s tablet or a variable-rate tractor. This enables a targeted application of fertiliser only where needed, optimising resource use and maximising crop health.
This example demonstrates a cohesive vision where IoT, edge computing, GenAI, and CV are not merely coexisting technologies but are deeply integrated into a single, adaptive cognitive system. This framework transforms raw environmental data into immediate, actionable intelligence, making advanced AI solutions truly accessible and impactful for practitioners in the field.
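A highly simplified sketch of one pass through such an edge-side loop is given below; every component (drone, sensors, on-device generator, detector, fine-tuning routine, notifier) is a hypothetical placeholder intended only to show how the steps above could be chained, not an implementation from any cited system.

```python
def adaptive_monitoring_cycle(drone, soil_sensors, edge_generator, detector, fine_tune, notify):
    """One pass of the conceptual edge loop described above (all components are placeholders)."""
    imagery = drone.capture_multispectral()            # IoT data acquisition
    conditions = soil_sensors.read()                   # moisture, temperature, ambient light
    # Condition the on-device generative model on the current field context to
    # synthesise a small, bespoke set of "what deficiency would look like now" images.
    synthetic_batch = edge_generator.sample(conditions, n_images=64)
    fine_tune(detector, synthetic_batch)               # rapid on-device adaptation
    deficiency_map = detector.predict(imagery)         # adapted perception model
    notify(deficiency_map)                             # heat map to tablet / variable-rate tractor
```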

4. Applications of GenAI-Driven CV in Smart Agriculture

Smart agriculture is undergoing a significant transformation, largely driven by advancements in CV. While traditional CV and early deep learning models have laid a foundational framework for automating agricultural tasks, their performance is often hampered by two persistent challenges: data scarcity and domain shift. Data scarcity refers to the difficulty and expense of collecting and annotating large, diverse datasets required to train robust models, especially for rare conditions or specific crop varieties. Domain shift occurs when a model trained in one environment (e.g., a specific farm, lighting condition, or season) performs poorly in another, limiting its generalisability.
The emergence of GenAI has introduced a paradigm shift, offering powerful tools to directly address these limitations. By synthetically generating realistic data and enabling models to adapt across different domains, GenAI is unlocking unprecedented innovations in smart agriculture.
To systematically synthesise the literature and highlight this paradigm shift, this section is organised thematically around key agricultural applications, such as crop and field management, drone and remote sensing analysis, livestock and aquaculture monitoring, and product quality control (see Figure 3). Within each theme, we first provide an overview of the limitations of traditional CV techniques and early deep learning models, establishing the specific data and generalisation challenges inherent to that application context. We then analyse how GenAI-driven solutions, functioning as a distinct methodological category, directly tackle these challenges—focusing on two primary mechanisms: synthetic data generation to overcome data scarcity, and domain adaptation techniques to mitigate domain shift.

4.1. Crop Health Monitoring and Pest Detection

Traditional CV techniques have long been employed for crop health monitoring and pest detection, relying predominantly on handcrafted feature extraction and classical machine learning classifiers. Early methodologies, such as colour-based segmentation (often employing clustering algorithms like k-means) and statistical feature extraction, were used to identify pest-infested areas on plant leaves [92]. Similarly, thresholding and morphological operations have been applied to detect fungal infections on rice leaves [93]. However, these techniques are highly sensitive to illumination variance and background clutter, limiting their scalability in heterogeneous field environments. The reliance on manually engineered features also necessitates extensive labelled datasets for scenario-specific training, which is labour-intensive and impractical for large-scale agricultural applications.
Building on these early techniques, the advent of deep learning, particularly convolutional neural networks (CNNs), has significantly advanced the field of crop health monitoring. CNNs such as ResNet, VGGNet, Inception, and YOLO variants like YOLOv8 (applied, for instance, to detecting Phytophthora blight in chili [94] and segmenting lotus seedpods [95]) have demonstrated superior accuracy in diagnosing plant diseases from leaf images [96,97,98,99], and CV systems have also been applied to specific conditions such as Citrus Huanglongbing detection based on leaf characteristics [100]. Hybrid models that combine handcrafted features with CNNs have also been developed to enhance disease recognition in maize leaves [101], and transformer-embedded CNNs have shown promise for identifying field crop diseases [102]. Additionally, 3D CNNs, such as 3D-ResNet, have been utilised for the early detection of specific plant diseases, like downy mildew in kimchi cabbage, leveraging hyperspectral imaging data [103]. Similarly, a CNN-LSTM approach has been developed for predicting cucumber downy mildew [104]. However, these deep learning models typically require vast labelled datasets for training, and their performance can degrade when exposed to unseen conditions or rare disease classes [105]. This limitation has spurred the need for more advanced techniques to address data scarcity and improve model generalisation.
To address these challenges, GANs have emerged as a transformative solution to address data scarcity and class imbalance in agricultural datasets. As illustrated in Figure 4, GAN architectures, such as DCGAN and CycleGAN, have been used to generate realistic leaf images with various disease symptoms, thereby enriching training datasets without the need for additional field collection [106,107,108,109]. This synthetic augmentation has proven particularly effective for improving CNN performance on minority classes, such as detecting early-stage infections that are often underrepresented in real-world data [109]. Furthermore, GANs have shown potential in unsupervised anomaly detection, where models learn normal healthy patterns and flag deviations suggestive of disease or pest attacks [110,111]. These advancements have led to improved model generalisation and enhanced recognition of minority classes.
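For readers unfamiliar with these architectures, the sketch below defines a minimal DCGAN-style generator in PyTorch that maps random noise to small synthetic leaf images; once adversarially trained on real leaf photographs, samples from such a generator could be appended to underrepresented classes. The network sizes and training data are illustrative assumptions, not the configurations used in the cited studies.

```python
import torch
import torch.nn as nn

class LeafGenerator(nn.Module):
    """DCGAN-style generator mapping a 100-d noise vector to a 64x64 RGB leaf image."""
    def __init__(self, z_dim=100, base=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(z_dim, base * 8, 4, 1, 0, bias=False),     # -> 4x4
            nn.BatchNorm2d(base * 8), nn.ReLU(True),
            nn.ConvTranspose2d(base * 8, base * 4, 4, 2, 1, bias=False),  # -> 8x8
            nn.BatchNorm2d(base * 4), nn.ReLU(True),
            nn.ConvTranspose2d(base * 4, base * 2, 4, 2, 1, bias=False),  # -> 16x16
            nn.BatchNorm2d(base * 2), nn.ReLU(True),
            nn.ConvTranspose2d(base * 2, base, 4, 2, 1, bias=False),      # -> 32x32
            nn.BatchNorm2d(base), nn.ReLU(True),
            nn.ConvTranspose2d(base, 3, 4, 2, 1, bias=False),             # -> 64x64
            nn.Tanh(),
        )

    def forward(self, z):
        return self.net(z.view(z.size(0), -1, 1, 1))

# After adversarial training against real leaf photos, synthetic minority-class
# samples can be drawn and appended to the training set:
g = LeafGenerator()
fake_leaves = g(torch.randn(16, 100))   # 16 synthetic 3x64x64 images in [-1, 1]
```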
To combat the domain shift problem, generative models offer sophisticated adaptation techniques. For example, CycleGANs can translate visual characteristics between domains, such as applying disease patterns from one crop species to the healthy leaves of another, thereby improving a model’s robustness and adaptability to new agricultural settings [112]. Spatiotemporal GANs even allow for modelling disease progression over time, synthesising future crop health states under various conditions [113,114].
Modern crop health monitoring systems increasingly integrate multimodal data streams, including hyperspectral, thermal, and RGB imagery. Generative models now play a crucial role in translating between these modalities. For instance, conditional GANs have been used to generate synthetic thermal maps from RGB inputs, enriching the input space and compensating for sensor limitations, especially in resource-constrained environments [115,116]. This multimodal fusion not only enhances the feature representation but also improves the system’s resilience to sensor failures and environmental variability. However, aligning heterogeneous data modalities remains a complex task, requiring sophisticated techniques to ensure the consistency and accuracy of the fused data.
Despite the significant advancements in GAN-augmented CV systems, challenges remain in terms of interpretability and real-time deployment. GAN training is known for its instability and mode collapse, and synthetic outputs may lack semantic consistency if not well-regularised [117]. To address these limitations, hybrid approaches combining GANs with explainable AI (XAI) frameworks have been proposed. To provide more transparency, techniques such as SHAP and LIME have been applied to CNN models that were trained on GAN-augmented data. These methods offer interpretable, feature-based explanations for the model’s predictions, which helps agronomists and farmers understand the results and gain actionable insights [118]. Additionally, edge AI accelerators [119] and quantised GAN architectures [120] have been developed to enable real-time processing on edge devices, reducing computational overhead and latency. However, ensuring that models are stable and that the synthetic data they generate is semantically consistent remains a critical challenge. Overcoming this challenge is key for these technologies to be widely adopted.
Looking ahead, integrating foundation models that have been pretrained on large agricultural datasets holds great promise for the future of crop health monitoring. These models can facilitate few-shot adaptation, enabling systems to learn from limited data and generalise across diverse agricultural scenarios [121,122]. The emergence of vision–language models (VLMs), such as GPT-Vision, further opens the door to pest advisory systems that operate through dialogue. In these systems, an AI can provide real-time recommendations based on visual inputs and contextual information [123,124]. Efforts to democratise access to these technologies through open-source datasets and lightweight edge deployment solutions are also gaining momentum [125]. These advancements are expected to make AI-driven crop health monitoring more accessible and scalable, ultimately enhancing agricultural productivity and sustainability.
In summary, while traditional CV methods laid the foundation for automated crop health monitoring, the integration of deep learning and GANs has significantly enhanced the capabilities of these systems. From improving dataset diversity to enhancing model generalisation and resilience, GANs have become indispensable tools in modern precision agriculture. However, ensuring model stability, interpretability, and real-time deployment efficiency remain critical for the sustainable adoption of these technologies. Future directions point towards foundation models and human-centric AI, promising to further democratise access and improve the robustness of crop health monitoring systems. A comparison of these techniques is presented in Table 1.

4.2. Fruit and Vegetable Maturity Detection and Harvesting Automation

The evolution of fruit and vegetable maturity detection began with traditional CV techniques, which employed manual feature engineering such as colour thresholding and texture analysis. Early approaches, including visual saliency maps combined with CNNs for citrus ripeness classification [126] and morphological operations for coconut maturity assessment [127], achieved moderate success in controlled environments. However, these methods exhibited pronounced limitations under field conditions, particularly sensitivity to illumination fluctuations and occlusion, leading to reduced detection accuracy. Such constraints highlighted the inadequacy of rule-based systems for complex agricultural settings, prompting the adoption of data-driven methodologies.
To address these shortcomings, deep learning architectures revolutionised maturity assessment through automated feature extraction and contextual understanding. CNNs, notably YOLO variants, enabled real-time detection of fruits like apples [128,129,130,131], strawberries [132,133], and cherry tomatoes [134] in occluded canopies and have been used for tasks like predicting feed quantity for harvesters [135], often achieving high accuracies. Vision Foundation Models (VFMs) utilising transformer architectures further enhanced performance in dense clusters by leveraging attention mechanisms to resolve overlapping fruit instances and address occlusion challenges in complex orchard environments [136]. Despite these advancements, challenges persisted in deploying deep neural networks on edge devices due to substantial resource demands, necessitating optimised architectures such as DNNShifter’s pruned variants [137] and lightweight detection models for specific produce like broccoli heads in complex field environments [138].
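As an indication of how lightweight such detectors are to deploy, the snippet below runs inference with the Ultralytics YOLO API; the weight file and image path are hypothetical placeholders standing in for a model fine-tuned on orchard imagery.

```python
from ultralytics import YOLO

# "apple_ripeness.pt" is a placeholder for task-specific weights fine-tuned on
# annotated orchard frames; it is not a published checkpoint.
model = YOLO("apple_ripeness.pt")
results = model.predict(source="orchard_frame.jpg", conf=0.4)

for box in results[0].boxes:
    cls_name = model.names[int(box.cls)]
    x1, y1, x2, y2 = box.xyxy[0].tolist()
    print(f"{cls_name}: confidence={float(box.conf):.2f}, "
          f"bbox=({x1:.0f},{y1:.0f},{x2:.0f},{y2:.0f})")
```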
The integration of multimodal sensing emerged as a pivotal advancement, combining hyperspectral, thermal, and RGB data to assess maturity based on biochemical indicators, rather than just appearance. Hyperspectral imaging, for instance, enabled non-destructive quantification of chlorophyll degradation in grapes [139] and sugar accumulation in melons [140], while fusion models improved robustness against environmental noise by correlating visual features with complementary sensor data [141]. These systems bridged the gap between superficial appearance and internal quality metrics, though issues of sensor interoperability and cost hindered widespread adoption.
GenAI offers powerful techniques to overcome the data limitations and environmental variations that challenge maturity detection models. For example, to tackle data scarcity and class imbalance, in contrast to conventional image data augmentation techniques, GANs can synthetically augment datasets with images of rare maturity stages or underrepresented fruit types [142,143]. More advanced diffusion models, such as the text-guided D4 method, generate fully annotated images of vineyard shoots that are adapted to diverse domain conditions, boosting object detection performance in field trials by up to 28.65% [144]. These models also provide a path for domain adaptation. For instance, generative techniques can be used to translate visual features from one orchard environment to another, helping a harvesting robot trained on one farm generalise to another with different lighting or tree structures. Similarly, conditional GANs can reconstruct images of occluded grapevine berries, providing a clearer view for more accurate yield estimation in dense foliage [145]. These algorithmic advancements underpinned the development of autonomous harvesting systems, which integrated 3D vision, LiDAR, and reinforcement learning to synchronise perception and action [146], including stereo vision for tasks like the precise automatic steering of combine harvesters through edge detection [147]. The Occluder–Occludee Relational Network (O2RNet) enhanced apple detection in clustered orchard environments, achieving a detection accuracy of 94% and an F1-score of 0.88, thereby improving localisation precision in occluded clusters [148], while soft pneumatic grippers minimised mechanical damage during cucumber harvesting [149]. Real-time defect detection systems, exemplified by finger-vision sensors [150], further ensured selective picking of market-ready produce. A critical component for the safe and efficient operation of such autonomous harvesting systems is robust field obstacle detection, for which improved algorithms based on models like YOLOv8 have been developed [151]. Beyond direct harvesting tasks, specialised robotic units like RoMu4o contribute to orchard automation by performing tasks such as proximal hyperspectral leaf sensing [152]. However, common operational constraints, including limited battery endurance and mechanical agility, broadly hinder the scalability of such robotic systems.
To enhance real-time adaptability, temporal analysis and edge AI frameworks prioritised low-latency processing and decentralised computation. A video-based tracking system integrating MangoYOLO detection, Kalman filtering, and an enhanced Hungarian algorithm effectively monitored mango fruit across growth stages, reducing redundant detections by approximately 9.9% [153], while federated learning allowed models to be trained across distributed farms without compromising data privacy [154]. Smartphone-integrated CNNs democratised access to quality assessment, as evidenced by portable systems for Zanthoxylum fruit evaluation [155]. However, seasonal and regional domain shifts necessitated frequent model retraining, exacerbating computational overheads [156].
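The detection-to-track association step in such pipelines is typically solved with the Hungarian algorithm; the sketch below shows a minimal IoU-based assignment using SciPy, with the IoU threshold chosen for illustration rather than taken from the cited tracking system.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def iou(a, b):
    """Intersection-over-union of two [x1, y1, x2, y2] boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter + 1e-9)

def match_detections(track_boxes, det_boxes, min_iou=0.3):
    """Assign new detections to existing (e.g. Kalman-predicted) track boxes."""
    cost = np.array([[1.0 - iou(t, d) for d in det_boxes] for t in track_boxes])
    rows, cols = linear_sum_assignment(cost)           # Hungarian algorithm
    return [(r, c) for r, c in zip(rows, cols) if cost[r, c] <= 1.0 - min_iou]
```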
In addition, cross-modal VLMs and physics-informed synthetic data generation represent transformative frontiers. Vision–language systems like AgroGPT empower farmers to obtain expert agronomic insights from images through intuitive natural language queries, bridging the gap between AI capabilities and field-level decision-making [123], while synthetic data generation frameworks replicate the variability of agricultural environments to create realistic, specific training datasets for farming vision models [157]. Ethical considerations, including labour displacement and algorithmic bias, demand rigorous governance frameworks to ensure equitable technology deployment [158]. Addressing these challenges will require interdisciplinary collaboration to harmonise technical innovation with socio-economic sustainability in global agriculture. Table 2 provides a comparison of different approaches discussed in this subsection.

4.3. Precision Agriculture and Field Management

Precision agriculture has revolutionised traditional farming by integrating sensor-based technologies, data analytics, and intelligent decision-making systems to optimise agricultural inputs, enhance productivity, and minimise environmental impact [159,160,161]. At its core, precision agriculture leverages spatial and temporal variability analysis to enable site-specific management decisions, a paradigm significantly advanced by the convergence of CV and GenAI. This section examines the technological advancements, applications, and challenges in modern field management systems.
UAV-Based Monitoring and Weed Management. Unmanned aerial vehicles (UAVs) equipped with multispectral, hyperspectral, and RGB sensors have become indispensable tools for large-scale agricultural monitoring. These platforms facilitate high-resolution data acquisition, enabling crop stress detection, weed infestation mapping, and nutrient deficiency analysis [162,163,164]. For instance, hyperspectral imaging has proven effective in identifying barnyard grass in paddy fields, enabling spatially targeted herbicide applications [163]. UAV-derived aerial perspectives further enhance canopy structure mapping and yield prediction by correlating spectral signatures with crop health metrics [160,162].
Weed detection remains a critical challenge due to inter-class similarities, occlusions, and environmental variability. Deep learning architectures, particularly CNNs and GANs, have addressed these limitations through robust feature extraction and synthetic data augmentation [165,166,167,168]. Recent models, such as the YOLOv8-based YOLO-CWD and YOLOv10n variants [167,169], improved YOLOv5 networks like HAD-YOLO [170] and STBNA-YOLOv5 for rapeseed fields [171], and approaches using improved YOLOv4 for UAV imagery [172] demonstrate superior performance in distinguishing crops from weeds under cluttered field conditions, achieving accuracies above 90% in real-time analysis. Semi-supervised learning frameworks further reduce dependency on manual labelling, which is labour-intensive, while improving generalisation across diverse agroecological contexts [173,174]. Innovations like region-based CNNs enable precise monocot/dicot weed classification in dense crop fields, forming the basis for autonomous precision sprayers that reduce herbicide usage by 30–40% [166,175,176].
Resource Optimisation and Irrigation Systems. Precision irrigation systems integrated with IoT networks and edge computing optimise water allocation based on real-time evapotranspiration, soil moisture, and meteorological data [177,178,179]. These systems employ predictive algorithms to dynamically adjust irrigation schedules, improving water-use efficiency without compromising yield. For example, vapour pressure deficit forecasting in semi-arid regions has enabled adaptive irrigation, reducing water consumption by 25% while maintaining crop health [177,179]. Sprinkler systems embedded with CV modules further enhance precision by modulating spray patterns according to the canopy density and spatial distribution, achieving uniform water delivery across heterogeneous fields [178,180].
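A deliberately simplified version of such a rule-based irrigation decision is sketched below; the trigger level, target moisture, and conversion factor are illustrative assumptions and would need agronomic calibration for any real deployment.

```python
def irrigation_depth_mm(soil_moisture_pct, forecast_et_mm, rain_forecast_mm,
                        trigger_pct=55.0, target_pct=80.0, mm_per_pct=0.9):
    """Simplified site-specific irrigation rule (all thresholds are illustrative).

    Irrigate only when soil moisture falls below a trigger level, replacing the
    moisture deficit plus expected evapotranspiration minus forecast rainfall.
    """
    if soil_moisture_pct >= trigger_pct:
        return 0.0
    deficit = (target_pct - soil_moisture_pct) * mm_per_pct
    demand = deficit + forecast_et_mm - rain_forecast_mm
    return max(demand, 0.0)

print(irrigation_depth_mm(soil_moisture_pct=48, forecast_et_mm=5.2, rain_forecast_mm=1.0))
```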
Plant Growth Tracking and Non-Invasive Phenotyping. Non-invasive imaging methodologies, including smartphone-driven and camera-based systems, provide continuous insights into plant morphology and phenological stages [181,182,183,184]. Recursive segmentation models, such as those developed for bok choy, enable real-time nutrient and irrigation adjustments in controlled environment agriculture (CEA), reducing labour costs while improving temporal resolution compared to manual assessments [181]. Hyperspectral imaging further augments phenotyping by quantifying biochemical indicators like chlorophyll degradation in grapes, bridging the gap between visual appearance and internal quality metrics [139].
GenAI and Multimodal Data Fusion. In precision agriculture, where collecting diverse, large-scale data from sources like hyperspectral sensors is costly, GenAI provides a critical advantage. GANs synthesise multispectral or normalised difference vegetation index (NDVI) images from RGB inputs or enhance existing multispectral satellite imagery via techniques like super-resolution, circumventing the cost and complexity of hyperspectral sensors [185,186,187]. For instance, GAN-based translations preserve spectral fidelity in synthetic datasets, enabling scalable vegetation monitoring with low-cost hardware [186]. Hybrid architectures combining transformers and GAN modules, such as Tranvolution Detection Networks, enhance disease diagnosis under limited data availability by leveraging style augmentation and transfer learning [111,188].
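Because several of these pipelines ultimately feed standard vegetation indices, the short sketch below shows the NDVI computation itself, which is one way a GAN-synthesised near-infrared band could be sanity-checked against a reference multispectral sensor.

```python
import numpy as np

def ndvi(nir: np.ndarray, red: np.ndarray) -> np.ndarray:
    """Normalised difference vegetation index: NDVI = (NIR - Red) / (NIR + Red)."""
    nir, red = nir.astype(np.float32), red.astype(np.float32)
    return (nir - red) / (nir + red + 1e-6)   # epsilon avoids division by zero

# NDVI from a GAN-synthesised NIR band can be compared pixel-wise against NDVI
# derived from a reference multispectral capture to assess spectral fidelity.
```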
Cloud-based platforms like Agroview centralise UAV-collected data, integrating deep learning pipelines with spatial analysis tools for comprehensive field assessments [161]. These systems fuse satellite, drone, and ground-level imagery to support multi-source data analysis, while IoT networks synchronise soil sensors and actuators for closed-loop control of irrigation and fertilisation systems [159,189].
Challenges and Emerging Solutions. Despite advancements, domain generalisation remains constrained by environmental variability, including lighting fluctuations and soil heterogeneity [165,190]. Semi-supervised frameworks, such as those applied to sunflower weed mapping via UAVs, mitigate these limitations by leveraging unlabelled data structures [174]. Explainability tools, including Grad-CAM and SHAP, are increasingly integrated to enhance user trust and regulatory compliance, particularly in high-stakes decision-making scenarios [191].
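To illustrate the kind of saliency evidence such tools produce, the following sketch implements a bare-bones Grad-CAM using forward and backward hooks in PyTorch; the untrained ResNet-18 is a stand-in for whatever crop or weed classifier is being audited, and the random input merely demonstrates the mechanics.

```python
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet18(weights=None).eval()   # stand-in for the classifier under audit
activations, gradients = {}, {}

def fwd_hook(_, __, output):
    activations["feat"] = output

def bwd_hook(_, grad_in, grad_out):
    gradients["feat"] = grad_out[0]

layer = model.layer4[-1]                        # last convolutional block
layer.register_forward_hook(fwd_hook)
layer.register_full_backward_hook(bwd_hook)

def grad_cam(image_batch, class_idx):
    logits = model(image_batch)
    model.zero_grad()
    logits[0, class_idx].backward()
    weights = gradients["feat"].mean(dim=(2, 3), keepdim=True)   # pooled gradients
    cam = F.relu((weights * activations["feat"]).sum(dim=1))     # weighted activation map
    cam = F.interpolate(cam.unsqueeze(1), size=image_batch.shape[-2:],
                        mode="bilinear", align_corners=False)
    return (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)    # normalise to [0, 1]

heatmap = grad_cam(torch.randn(1, 3, 224, 224), class_idx=0)
```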
Efforts to democratise access include low-cost imaging platforms and open-source training pipelines, exemplified by smartphone-integrated CNNs for Zanthoxylum fruit quality assessment [155]. Future directions emphasise physics-informed synthetic data generation and VLMs, which offer actionable insights through natural language queries, bridging the gap between AI systems and agricultural stakeholders [114,192].
In conclusion, the integration of CV and GenAI has redefined precision agriculture, enabling data-driven resource optimisation and adaptive field management. While challenges persist in model robustness and scalability, advancements in edge computing, synthetic data generation, and human-centric AI promise to further democratise intelligent farming solutions, aligning technical innovation with global sustainability goals [157,193,194]. Representative studies highlighting traditional CV and GAN-based methods in this domain are summarised in Table 3.

4.4. Agricultural Drones and Remote Sensing Image Analysis

The integration of unmanned aerial vehicles (UAVs) and advanced remote sensing technologies has redefined precision agriculture by enabling high-resolution, real-time monitoring of crops, soil, and environmental conditions [195,196,197]. UAV platforms equipped with multispectral, hyperspectral, thermal, and RGB sensors provide unparalleled flexibility in data acquisition, supporting critical tasks such as yield estimation, disease detection, and resource management. Unlike satellite systems constrained by revisit intervals and cloud cover, drones offer on-demand imaging under customisable flight plans, making them indispensable for dynamic crop surveillance [198,199].
Sensor Integration and Data Acquisition. Hyperspectral and multispectral imaging dominate UAV-based systems due to their capacity to capture subtle crop stress signals. For instance, hyperspectral sensors enable the early detection of diseases by analysing spectral anomalies, such as identifying citrus gummosis [200,201] or tea leaf blight, even with lower-resolution UAV imagery [202]. In parallel, multispectral data facilitate fine-scale phenotyping, which includes tasks like seedling density estimation and canopy cover analysis [203,204,205,206]. Recent advancements in lightweight LiDAR and thermal cameras further enhance UAV capabilities, enabling 3D crop modelling and heat stress assessment [207,208]. These multimodal datasets form the foundation for AI-driven decision-making, bridging gaps between aerial observations and ground-level agricultural processes.
Deep Learning for Image Analysis. Deep learning architectures, particularly CNNs and transformer-based models, have significantly improved the accuracy of UAV image processing. Hybrid frameworks like CCTNet, which integrates CNN and transformer modules, achieve superior crop segmentation across diverse lighting and growth conditions [209]. For disease detection, GAN-based models such as DSCONV-GAN enhance feature alignment in temporally varying datasets [201], and other machine learning approaches also leverage GANs with UAV technology in two-step processes for crop disease detection [210]. Attention-augmented networks like HSI-TransUNet improve the robustness of hyperspectral classification [211]. Further advancements in precise crop classification using hyperspectral images include methods like multi-branch feature fusion with dilation-based MLPs [212], spectral–spatial-scale attention networks such as S3ANet for precise end-to-end crop classification based on UAV-borne hyperspectral imagery [213], and self-attention 3D convolutional neural network approaches for enhancing precision agriculture and land cover classification from hyperspectral data [214]. Semi-supervised learning frameworks address labelling bottlenecks by leveraging unlabelled UAV imagery, as demonstrated in sunflower weed mapping [174]. YOLO variants, optimised for real-time performance, excel in object detection tasks such as weed identification and boundary mapping [215].
Multi-Source Data Fusion and Applications. The fusion of UAV data with satellite imagery and ground sensors enhances model generalisability. For example, combining Sentinel-1/2 data with UAV-derived imagery improves land cover classification accuracy through DC-CNN architectures [216], and deep learning-based methods using dual-polarimetric synthetic aperture radar and high-resolution optical images are also employed for the identification of typical crops [217]. Furthermore, the inversion of soil moisture in farmland areas can be achieved using SSA-CNN with multi-source remote sensing data [218]. Multimodal deep learning enables soybean phenotyping and winter wheat yield prediction [205,219]. Hyperspectral imaging further supports species-level crop classification and biochemical analysis, such as chlorophyll degradation monitoring in grapes [139] and sugar accumulation in melons [220], with spectrally segmented-enhanced neural networks being developed for precise land cover object classification in such imagery [221]. Real-time anomaly detection systems, powered by spatiotemporal GANs, identify crop stress patterns and predict disease progression [110,222], facilitating proactive interventions.
Autonomous Operations and Challenges. UAVs increasingly support autonomous field operations, including navigation and obstacle avoidance. Autonomous operations rely on several key technologies. SLAM algorithms like YOLOD-SLAM2 enable precise localisation in orchards [223]. Lightweight algorithms such as YOLOv4-Tiny provide real-time plant detection, enhancing the UAV’s ability to localise objects precisely [224]. Furthermore, collaborative drone networks improve the scalability of large-scale spraying and scouting missions [197]. However, operational challenges persist, including limited battery endurance, weather sensitivity, and computational constraints on edge devices. Lightweight models, such as YOLOv4-Tiny and pruned ResNet variants, mitigate these issues by balancing accuracy with resource efficiency [225,226]. To further enhance model robustness, particularly when field data is sparse or difficult to obtain, GenAI is used to create synthetic datasets for tasks like seed maturity estimation or to perform image-to-image translation for mapping complex orchard environments [222,227].
Emerging trends emphasise edge AI deployment and physics-informed synthetic data. Federated learning frameworks enable privacy-preserving model training across distributed farms [154], while neural rendering techniques simulate agro-environmental dynamics to enhance dataset realism [78,87,88]. Standardisation of sensor outputs and interoperability with farm management platforms remain critical for scalable adoption. Advances in miniaturised sensors and AI-driven compression will further democratise UAV technologies, particularly for smallholder farms [199,228].
In summary, UAVs and remote sensing image analysis constitute a cornerstone of modern smart agriculture. By synergising high-resolution imaging, deep learning, and multi-source data fusion, these systems empower farmers with actionable insights, driving sustainable and efficient agricultural practices. Persistent challenges in hardware limitations and data processing underscore the need for continued innovation in lightweight algorithms and adaptive AI frameworks. Table 4 summarises the key methodologies, sensor types, and outcomes discussed in this section, providing a concise reference for UAV-based agricultural applications.

4.5. Livestock and Aquaculture Monitoring

4.5.1. Livestock Monitoring

The integration of CV and artificial intelligence (AI) is revolutionising livestock monitoring by providing non-invasive, scalable, and automated alternatives to traditional labour-intensive methods. This advancement not only enhances operational efficiency but also significantly improves animal welfare, behavioural understanding, health diagnostics, and yield optimisation.
In livestock environments, CV plays a crucial role in identifying individual animals, tracking their movements, and assessing behavioural patterns. For example, Luo et al. [229] developed a deep learning-based pose estimation framework for red pandas, demonstrating its potential for tracking subtle movements and social interactions in farm animals. Similarly, Kuncheva [230] highlighted the application of restricted set classification for animal re-identification, which is crucial for livestock traceability and population management. High-resolution stereo-vision systems, as used by Hsieh and Lee [231] for extracting morphological parameters of marine species, also show promise for livestock body condition scoring.
Deep learning advancements have enabled more robust recognition of complex behaviours. Chen et al. [232] reviewed the evolution of behaviour detection in pigs and cattle, noting the transition from simple movement tracking to comprehensive behavioural understanding. Huang et al. [233] successfully applied deep convolutional networks for tail tracking in cows, aiding in stress and discomfort assessment. Lee et al. [234] enhanced swine behaviour detection using YOLO models integrated with efficient aggregation networks, enabling real-time monitoring in commercial farming settings.
Vision-based systems can automatically detect and quantify contact behaviour in pigs, as shown by Alameer et al. [235], aiding in the early detection of aggression and health issues. Zheng and Qin [236] introduced PrunedYOLO-Tracker for recognising and tracking basic cow behaviours in group housing conditions. Pan et al. [237] proposed a self-activated CNN approach for buffalo breed identification, and Nguyen et al. [238] developed a handheld RGB-D camera system for rapid pig weight assessment, underscoring the potential of mobile vision solutions.

4.5.2. Aquaculture Monitoring

Aquaculture presents unique challenges for computer vision systems, primarily due to the significant domain gap across different imaging environments, as illustrated in Figure 5. Real-world underwater images are often degraded by lighting variations, water turbidity, and occlusions, starkly contrasting with clearer laboratory or Internet-sourced images. This discrepancy can cause models to overfit to clean data and fail to generalise to operational conditions. To address these issues, Li et al. [239] refined the YOLOv5 model for fish detection tasks in aquatic environments, achieving a 1.6% improvement in precision over the original model and reaching an accuracy of 95.7%. Moreover, Chang et al. [240] developed a two-mode AIoT-based smart sensor system that combines CV with environmental monitoring for precision aquaculture. Chieza et al. [241] compared YOLOv8 and YOLO-NAS models for underwater fish detection, demonstrating the superior performance of YOLOv8 even in visually noisy conditions. Tackling the complexity of underwater environments, Zhao et al. [242] designed Composited FishNet, a framework that utilises a novel composite backbone and an enhanced path aggregation network to enable fish recognition even in challenging conditions with poor lighting and turbidity.
Understanding fish behaviour is critical for stress detection and disease prediction. Yu et al. [243] analysed the spatial behavioural characteristics of fish schools to identify special behaviours, while Cai et al. [244] presented a two-stage framework for recognising fish feeding behaviour, supporting optimised feeding schedules and waste reduction. Zhao et al. [245] proposed DMDnet, a cross-domain model capable of robust detection across species and environments.
Advanced image processing techniques are also vital for aquaculture monitoring. Sarkar et al. [246] introduced the UICE-MIRNet framework for underwater image enhancement, improving the detection of submerged objects and the accuracy of downstream CV tasks. Sanhueza et al. [247] employed VIS-NIR hyperspectral imaging for pelagic fish characterisation, offering new possibilities for non-destructive species classification. Xu et al. [248] utilised SE-ResNet152 and transfer learning to accurately classify fish species in imbalanced datasets.
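A minimal transfer learning recipe of the kind described by Xu et al. is sketched below, using a torchvision ResNet-152 as a readily available stand-in for SE-ResNet152 and inverse-frequency class weights to counteract species imbalance; the species count and class frequencies are illustrative assumptions.

```python
import torch
import torch.nn as nn
from torchvision import models

# SE-ResNet152 weights from a model zoo (e.g. timm) could be substituted here;
# a torchvision ResNet-152 backbone serves as a readily available stand-in.
backbone = models.resnet152(weights=None)
backbone.fc = nn.Linear(backbone.fc.in_features, 12)     # e.g. 12 fish species (illustrative)

# Freeze early layers so only the last block and the new head are fine-tuned.
for name, param in backbone.named_parameters():
    if not (name.startswith("layer4") or name.startswith("fc")):
        param.requires_grad = False

# Inverse-frequency class weights counteract the imbalance between common and rare species.
class_counts = torch.tensor([1200, 950, 40, 15, 300, 80, 60, 500, 25, 700, 33, 10],
                            dtype=torch.float)            # illustrative counts
criterion = nn.CrossEntropyLoss(weight=class_counts.sum() / (len(class_counts) * class_counts))
optimizer = torch.optim.AdamW(filter(lambda p: p.requires_grad, backbone.parameters()), lr=1e-4)
```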
Health monitoring is a critical application in aquaculture. Bohara et al. [249] reviewed emerging technologies in aquatic animal health, emphasising the role of CV in early disease detection. Zhou et al. [250] proposed a lightweight model for detecting dead fish in recirculating systems, while Li et al. [251] addressed hypoxia detection through multi-target fish tracking to infer oxygen stress levels. Hu et al. [252] explored the real-time identification of uneaten feed pellets, enabling optimised feeding and waste reduction.
Feeding behaviour is an essential metric for health and growth performance. Schellewald et al. [253] demonstrated how mouth-opening frequency in salmon could be detected using video data, offering an indirect measure of feeding readiness. Yu et al. [254] introduced SED-RCNN-BE, a binocular depth estimation model for automatic fish size measurement, aiding in growth tracking and harvest planning. Xun et al. [255] showed the potential of multispectral imaging for real-time meat quality evaluation.
Cross-species and complex environments require models with strong generalisation abilities. Yang et al. [256] proposed an atrous convolution-based segmentation model for fish in complex underwater scenes, addressing multi-species entanglement. An et al. [257] introduced a marine intelligent netting monitoring system, and Riego del Castillo et al. [258] extended CV applications to sheep herding using a vision-based module integrated with robotic shepherds.
CV systems are also contributing to species identification and behaviour modelling in wild and lab conditions. Martin et al. [259] explored visual cues in social recognition among mangrove fish, while Desgarnier et al. [260] used aerial video surveys and deep learning to map eagle ray populations. Shreesha et al. [261] refined pattern prediction in aquaculture using deep learning for intelligent decision support systems.
Li et al. [262] provide comprehensive overviews of CNN-based CV applications in animal farming, emphasising the need for adaptable, real-time, and cost-effective monitoring systems to meet global demands for sustainable and ethical animal farming practices.
Despite these advancements, challenges remain, including the high cost of deploying underwater or aerial sensors, limitations in data annotation, and the need for generalised models that function across species and farm conditions. Environmental variability in livestock barns or open aquatic spaces can also degrade detection accuracy. Integrating GenAI holds promise in several areas: it can enhance image resolution, simulate rare behaviours to expand training datasets, and improve model robustness when data is scarce.
In conclusion, CV continues to redefine livestock and aquaculture monitoring through enhanced behaviour analysis, health tracking, and yield optimisation. With the ongoing integration of multispectral sensors, generative models, and edge AI hardware, future monitoring systems are expected to become more autonomous, reliable, and scalable, catering to both smallholder farmers and industrial operations. Table 5 summarises selected representative studies that demonstrate the diversity of CV techniques applied to livestock and aquaculture monitoring tasks, highlighting a spectrum of animal species, deployment environments, and technical approaches.

4.6. Agricultural Product Quality Control and Food Safety

Ensuring the quality and safety of agricultural products is critical in the smart agriculture paradigm. The integration of CV and GenAI has revolutionised quality control, enabling real-time detection of defects, contamination, and freshness issues. This advancement enhances consumer safety and regulatory compliance while improving efficiency and scalability.
Deep Learning in Quality Monitoring. Deep learning models trained on hyperspectral, RGB, and NIR images have become central to detecting quality indicators in fresh produce. Hyperspectral imaging combined with deep learning excels in early bruise detection on apples, offering a non-invasive alternative to conventional methods [263]. Similarly, CNNs can detect oxidative damage to lipids and proteins in frozen–thawed pork with high sensitivity and specificity [264]. Multi-task learning further enhances these models by allowing simultaneous monitoring of multiple quality parameters, reducing computational costs and improving generalisation.
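To make the multi-task idea above concrete, the sketch below (a minimal illustration, not a reproduction of any cited system) attaches a classification head and a regression head to a shared torchvision backbone; the backbone choice, class count, and loss weighting are assumptions.

```python
import torch
import torch.nn as nn
from torchvision import models

class MultiTaskQualityNet(nn.Module):
    """Shared CNN backbone with two heads: defect class and a freshness score.
    Minimal sketch; backbone, head sizes and loss weights are assumptions."""
    def __init__(self, num_defect_classes=4):
        super().__init__()
        backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
        self.features = nn.Sequential(*list(backbone.children())[:-1])  # drop the FC layer
        self.defect_head = nn.Linear(512, num_defect_classes)   # classification head
        self.freshness_head = nn.Linear(512, 1)                 # regression head

    def forward(self, x):
        z = self.features(x).flatten(1)
        return self.defect_head(z), self.freshness_head(z).squeeze(1)

model = MultiTaskQualityNet()
cls_loss, reg_loss = nn.CrossEntropyLoss(), nn.MSELoss()

images = torch.randn(8, 3, 224, 224)        # placeholder RGB batch
defect_labels = torch.randint(0, 4, (8,))   # placeholder defect classes
freshness = torch.rand(8)                   # placeholder freshness scores in [0, 1]

logits, scores = model(images)
loss = cls_loss(logits, defect_labels) + 0.5 * reg_loss(scores, freshness)
loss.backward()
```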
Addressing Data Scarcity with Generative Models. Acquiring sufficient data for every possible product defect or contamination type is a major hurdle in quality control. Generative models offer a solution by creating synthetic data to augment sparse datasets. Synthetic data from these models have improved classifier performance in food safety tasks. For instance, in detecting pesticide residues in Hami melons, augmenting hyperspectral data with a Regression Generative Adversarial Network (RGAN) improved a Support Vector Regression (SVR) model’s predictive performance, achieving a coefficient of determination ($R_p^2$) of 0.8781 and a Ratio of Prediction to Deviation (RPD) of 2.7882 [265]. Similarly, for predicting postharvest decay in apples, a Pix2PixHD model was used to generate synthetic VNIR images from RGB images with high fidelity (Structural Similarity Index Measure, SSIM = 0.972). A subsequent Mask R-CNN model trained on these images effectively segmented fungal and decay zones with an F1-score of up to 94.8% [266]. In the context of industrial production, StyleGAN2 was employed to address class imbalance in candy defect detection. By augmenting the dataset with synthetic defective candy images, an improved YOLOv7 model’s recognition accuracy increased by 3.0% and its recall rate by 3.7% [267], demonstrating the practical benefits of generative augmentation in real-time detection scenarios.
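The augmentation workflows cited above typically follow a common pattern: synthetic images of the rare class are generated offline and mixed with real data before the classifier or detector is retrained. The minimal PyTorch sketch below illustrates only this mixing and re-balancing step; the folder layout, class structure, and sampling weights are hypothetical, and the synthetic images themselves would come from a separately trained generative model such as a GAN.

```python
import torch
from torch.utils.data import ConcatDataset, DataLoader, WeightedRandomSampler
from torchvision import datasets, transforms

# Mix real and GAN-generated defect images so the rare "defective" class is
# better represented during training. Paths and the real/synthetic split are hypothetical.
tf = transforms.Compose([transforms.Resize((224, 224)), transforms.ToTensor()])
real = datasets.ImageFolder("data/real", transform=tf)             # e.g. {normal, defective}
synthetic = datasets.ImageFolder("data/synthetic", transform=tf)   # GAN-generated defectives

train_set = ConcatDataset([real, synthetic])

# Re-weight samples so each class is drawn with roughly equal probability.
targets = [s[1] for s in real.samples] + [s[1] for s in synthetic.samples]
class_counts = torch.bincount(torch.tensor(targets)).float()
weights = 1.0 / class_counts[targets]
sampler = WeightedRandomSampler(weights, num_samples=len(weights), replacement=True)

loader = DataLoader(train_set, batch_size=32, sampler=sampler)
```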
Real-Time Quality Assessment in Commercial Applications. Real-time quality assessment has been increasingly deployed in commercial grading lines, facilitated by lightweight object detection algorithms like improved YOLOv5 and YOLOv10 [268,269]. These networks are often embedded in automated grader systems or mobile applications, enabling on-the-spot quality evaluations. For example, a mobile food grading system combining YOLO and SVMs demonstrated high performance in detecting various food defects [270], while a CNN framework integrated into a smartphone has been developed for the rapid assessment of Zanthoxylum fruit quality [155].
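As a rough illustration of such an on-line grading step, the sketch below runs a stock detector from the ultralytics package over a single frame; the weight file, confidence threshold, and image path are assumptions, and the cited systems rely on customised YOLOv5/YOLOv10 variants rather than this off-the-shelf model.

```python
from ultralytics import YOLO

# Minimal sketch of a single grading-line inference pass with a stock detector.
model = YOLO("yolov8n.pt")

results = model.predict(source="grading_line_frame.jpg", conf=0.4)
for r in results:
    for box in r.boxes:
        cls_name = model.names[int(box.cls)]
        print(f"{cls_name}: confidence={float(box.conf):.2f}, xyxy={box.xyxy.tolist()}")
```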
Spectral Sensor Fusion and Electronic Sensing Technologies. Beyond visual defect detection, research has focused on spectral sensor fusion and electronic sensing technologies. Colourimetric sensor arrays, often embedded in low-cost electronic noses, have been used for tasks like detecting pork adulteration in beef [271], tracing the geographical origin of edible bird’s nests [272], and discriminating rice varieties [273]. When combined with CV models, these sensors extend inspection capabilities to chemical contamination and authenticity verification. Portable Vis/NIR systems integrated with CNNs have enabled the non-destructive detection of the watercore degree in apples [274].
Quality Authentication through Multi-Source Data Integration. The authentication of quality in fruits like citrus has benefited significantly from non-destructive sensing approaches that are based on data fusion. By integrating hyperspectral images, RGB texture features, and spectroscopic sensors, studies have achieved superior prediction of Brix values, acidity, and phenolic content [275]. Models like FastSegFormer have demonstrated how deep learning can balance computational efficiency and accuracy in real-time processing, as seen in surface defect detection in navel oranges [276].
Industrial Applications and Food Packaging Innovations. In industrial settings, real-time visual inspection systems for grading fruits and vegetables are becoming commonplace. These systems combine high-resolution imaging with CNNs for defect classification, achieving scalable performance with minimal human intervention [18,277]. One system grades mangoes by detecting colour, shape, and texture inconsistencies using deep grading networks [20]. Traditional CV methods using monochrome imaging are still explored for their simplicity and interpretability [278].
CV has also been applied to food packaging and spoilage monitoring. Smart packaging films embedded with pH-sensitive dyes can detect spoilage in real time [279]. Electrochemical aptasensors and biosensors are used for detecting pesticide residues and microbial contaminants, such as acetamiprid detection using impedimetric sensors [280] and Fusarium spore detection with attention-residual networks [281].
Enhancing Transparency with Explainable AI. To improve interpretability and model transparency, XAI is being integrated into classification frameworks. Visual saliency maps and feature importance scores have been used to demystify CV models in tasks like carambola fruit classification [282]. Robust defect detection in noisy environments remains a challenge, but bilayer inspection methods and fusion techniques have shown promise. For example, a graded-line inspection model improved kiwifruit defect detection by considering multi-angle views and spatial distribution [283].
Real-Time Monitoring and Combating Food Adulteration. The real-time monitoring of product quality during harvesting is gaining traction. Robots enabled with “finger vision” can assess fruit surface conditions before detachment, preventing damaged or diseased products from entering the supply chain [150]. Density estimation methods complement object detection models like YOLO to evaluate fruit clustering and packing density [128].
Food adulteration detection has seen advancements, with deep learning models identifying adulterants in powdered spices like red chilli [284]. Hyperspectral imaging combined with CNN classification has been used for high-confidence predictions of surface anomalies in Achacha fruit [285].
Innovations in Sensing and Grading Systems. Recent innovations include mobile-integrated fluorescent sensors for detecting trace elements in food and water [286]. Electrospun membranes embedded with AuAg nanoclusters have demonstrated high selectivity and sensitivity for heavy metals and amino acids [286]. Near-infrared hyperspectral imaging has yielded reliable predictions of physicochemical attributes in durian pulp, such as firmness and sugar content [287].
Fruit grading systems continue to evolve with improvements in model architecture and training data quality. A multidimensional feature extraction network incorporating transformer modules has enhanced detection rates in complex backgrounds [288]. Multi-view image processing has improved visual evaluation consistency in apple grading systems by minimising occlusion effects [289].
Future Directions. The integration of regression-based GANs with hyperspectral imaging offers a promising pathway for real-time chemical safety monitoring [265]. Automated fruit quality inspection systems utilising transfer learning and GANs are being developed to maximise performance with minimal labelled data [290]. Specular highlight removal algorithms and denoising preprocessing techniques are increasingly integrated into CV pipelines to enhance inspection accuracy under varying lighting conditions [291]. When combined with lightweight detection architectures, these improvements facilitate deployment in resource-constrained environments. Mobile-based systems and edge AI solutions are transforming food quality monitoring. Deep learning models optimised for mobile deployment, such as those integrating CNNs with real-time classification pipelines, enable immediate, reliable quality assessments [292]. Smartphone-based colourimetric sensor arrays and spectrometers are increasingly used in field testing and supply chain quality management [273,286].
In conclusion, CV and GenAI are reshaping agricultural product quality control and food safety, making inspections more accurate, scalable, and accessible. The synergy between sensing hardware, CV models, and GenAI ensures an intelligent and automated pipeline from field to consumer, safeguarding public health and fostering transparency across the food supply chain. Table 6 summarises key representative applications of CV and GenAI in agricultural quality control and food safety, highlighting specific methodologies and examples.

5. Challenges of CV in Smart Agriculture

The transition of CV applications in smart agriculture from experimental systems to field-ready solutions faces several critical challenges. These challenges stem from the complex agricultural environments and the limitations of current technologies. Although GenAI integration has brought improvements in data augmentation and model generalisation, practical barriers still affect reliable deployment. These challenges can be broadly categorised into six interconnected areas: data, socio-economic factors, model performance, environmental conditions, hardware deployment, and system trustworthiness, as summarised in Figure 6. Below is a detailed discussion of the six main challenges in CV-based smart agriculture, along with the impact of these issues on system performance and scalability, and the potential solutions offered by recent GenAI advancements.

5.1. Data Scarcity and Quality

The effectiveness of CV systems in smart agriculture is closely tied to the availability of high-quality image data for training, validation, and deployment. However, the scarcity of large-scale, diverse, and well-annotated agricultural image datasets is a persistent challenge. Unlike general CV tasks with abundant datasets like ImageNet or COCO, agricultural data is often domain-specific, seasonal, and geographically constrained, which limits the quantity and diversity of available data [293,294]. Collecting agricultural image data, especially in field conditions, involves logistical difficulties, variable lighting, weather disturbances, and inconsistent object appearances, making standardised data acquisition tough [198,295].
The annotation process presents another bottleneck. Labelling agricultural images requires expert knowledge to accurately classify plant diseases, pests, or growth stages, making it time-consuming and cost-intensive, especially for tasks requiring pixel-level segmentation or temporal tracking across image sequences [296,297]. Intra-class variability caused by natural growth diversity, disease symptoms, and environmental conditions further complicates the labelling task, often requiring repeated efforts and multi-expert validation to ensure reliability and consistency [298].
To tackle the shortage of annotated datasets, recent advances have turned to GenAI, particularly GANs and diffusion models, which can synthesise agricultural images for data augmentation [299]. These techniques can theoretically enhance model robustness and generalisation by producing varied samples that mimic rare conditions or underrepresented classes. However, practical adoption faces issues such as questionable realism and biological validity of generated samples, especially for fine-grained tasks like disease spotting or pest identification, where subtle textural patterns are critical [300,301]. Synthetic data may also lack the environmental and phenotypic diversity needed to capture the full range of field variability, leading to potential overfitting on artificial patterns not present in real-world images [302,303].
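As a simplified illustration of how such synthetic samples might be produced, the sketch below uses an off-the-shelf text-to-image diffusion pipeline from the diffusers library; the checkpoint, prompts, and output paths are assumptions, and a model fine-tuned on agricultural imagery (or a GAN trained on field data) would typically be needed for biologically plausible results.

```python
import torch
from diffusers import StableDiffusionPipeline

# Minimal sketch of prompt-based synthetic image generation for augmentation.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

prompts = [
    "close-up photo of a tomato leaf with early blight lesions, field lighting",
    "close-up photo of a healthy tomato leaf, overcast daylight",
]
for i, prompt in enumerate(prompts):
    image = pipe(prompt, num_inference_steps=30).images[0]
    image.save(f"synthetic_{i}.png")   # hypothetical output path for the augmented set
```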
Another concern is the domain shift between laboratory-generated datasets and real-world deployment scenarios. CV models trained on controlled or synthetic datasets may fail to generalise in the field due to differences in sensor quality, background clutter, and unanticipated anomalies [304,305]. This highlights the demand for domain adaptation and transfer learning strategies that can leverage limited real data while benefiting from large synthetic corpora. Semi-supervised and self-supervised learning paradigms, though promising, remain underexplored in agriculture and need further validation under diverse field conditions [306,307].
In response to these challenges, hybrid approaches combining real annotated images, weakly labelled data, and generative augmentation are gaining attention. These approaches aim to reduce dependency on expensive annotations while enhancing the diversity and volume of training data. Meanwhile, data standardisation protocols, open-access platforms, and community-driven datasets like PlantVillage or Global Wheat Head Dataset represent key efforts in democratising agricultural CV development [308].
Despite these developments, data challenges remain a fundamental limitation in deploying CV in smart agriculture. Future efforts should prioritise data collection and annotation strategies, as well as the development of rigorous evaluation benchmarks, transparency in dataset reporting, and ethical considerations related to data ownership, privacy, and representativeness [307].

5.2. Model Generalisation and Regional Adaptation

Achieving robust model generalisation is a key challenge in applying CV techniques to smart agriculture due to the high variability of agricultural environments. Crops grow under drastically different soil conditions, climates, and cultivation practices across regions, leading to significant visual heterogeneity. This variation often hinders the transferability of models trained on specific datasets, a challenge echoed across multiple studies in both robotics and deep learning for agriculture [301,304,306]. For instance, models trained in temperate, well-lit greenhouses may fail when deployed in open fields with variable light, occlusion, and crop morphology [295].
To address this, researchers have explored domain adaptation and transfer learning strategies, which allow pretrained deep networks to be fine-tuned with smaller region-specific datasets. However, these methods are constrained when data scarcity coincides with domain shift, as often happens in less-developed agricultural zones [293,297]. GenAI has been proposed as a means to bridge these gaps by producing synthetic images to simulate rare or underrepresented conditions [294]. Yet, the synthetic samples often suffer from a lack of realism or fail to represent critical edge cases, thus limiting their utility in enhancing generalisation [299]. Moreover, the diverse growth stages and seasonal appearance changes of crops introduce temporal inconsistencies that standard CNN-based models are not well equipped to handle without extensive retraining [296].
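A minimal sketch of the fine-tuning strategy described above is shown below: a pretrained backbone is frozen and only a new classifier head is trained on a small, region-specific dataset. The number of target classes, learning rate, and data loader are assumptions.

```python
import torch
import torch.nn as nn
from torchvision import models

# Freeze a pretrained backbone and fine-tune only the classifier head.
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
for p in model.parameters():
    p.requires_grad = False                       # keep generic ImageNet features fixed

model.fc = nn.Linear(model.fc.in_features, 5)     # e.g. five locally relevant disease classes
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# `regional_loader` would iterate over the small local dataset (not shown):
# for images, labels in regional_loader:
#     optimizer.zero_grad()
#     loss = criterion(model(images), labels)
#     loss.backward()
#     optimizer.step()
```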
Another important aspect is generalisation across different agricultural tasks, such as weed detection, fruit harvesting, and disease classification. Each task may require a distinct set of features and decision rules. For example, a weed detection model optimised for cotton fields [306] is unlikely to be effective in wheat or maize fields due to differences in plant spacing, structure, and colouration. Similarly, models used for phenotyping in aquaculture show limited transferability to terrestrial livestock monitoring due to differences in texture and movement patterns [293]. This fragmentation necessitates task-specific retraining or the adoption of modular AI systems that can adapt their feature representation dynamically [300].
Large foundation models, including transformers and vision–language models, have shown promise in improving generalisation by leveraging broad contextual understanding [307]. However, their application in agriculture is still in its early stages, and their benefits have not yet been fully realised in real-world settings due to limited labelled data and computing resources [303]. Furthermore, some studies suggest that XAI could support model adaptation by revealing which features are relied upon during inference, potentially identifying overfitting to spurious cues in the training data [298,305].
The lack of benchmarking standards further exacerbates the problem. Many research works use proprietary datasets or measure success using inconsistent metrics, which makes it difficult to assess the true generalisation capabilities of proposed methods. This gap highlights the need for open, standardised multi-location datasets that represent the diversity of real-world agricultural environments [308].
In conclusion, while the generalisation ability of CV models in agriculture has improved through advanced training strategies and the emergence of large-scale architectures, true robustness across tasks and regions remains elusive. Future efforts must integrate synthetic data, domain adaptation, modular architectures, and explainability tools while prioritising the creation of benchmark datasets that reflect the diversity of global agricultural systems.

5.3. Edge Deployment Limitations

The practical implementation of CV in smart agriculture faces significant hardware-level constraints, especially regarding the deployment of deep learning models on edge devices. Unlike industrial or cloud-based environments, smart agricultural systems often operate under limited power, bandwidth, and processing capabilities. Devices such as drones, field robots, and handheld terminals are expected to deliver real-time predictions under stringent energy and memory constraints [198]. This mismatch between model complexity and device capabilities presents a considerable barrier to wide-scale CV adoption in agricultural tasks.
Deep neural networks (DNNs), especially large-scale architectures such as CNNs and ViTs, demand substantial computational resources for both training and inference. Although these models achieve remarkable performance in laboratory settings, they often require gigabytes of memory and billions of floating-point operations, which exceed the capabilities of edge devices commonly used in field scenarios [300]. For instance, UAVs used for real-time weed detection or crop disease classification must process high-resolution images rapidly while conserving battery life and maintaining flight stability, a trade-off that current models struggle to address effectively.
Several lightweight network architectures, including backbones like MobileNet, SqueezeNet, and EfficientNet-lite, and specialised object detectors, such as lightweight YOLO variants [309], have been proposed to alleviate these issues by reducing model size and computational load. However, the performance of these lightweight models often drops when they are used in real agricultural environments, which contain diverse and complex visual patterns [298]. Moreover, compression techniques such as pruning, quantisation, and knowledge distillation provide partial solutions, yet they introduce accuracy–efficiency trade-offs that are non-trivial to optimise [303]. These limitations become even more apparent in tasks involving multimodal data fusion, such as combining RGB, multispectral, and thermal images for stress analysis, where model complexity escalates sharply [305].
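As one concrete example of the compression techniques mentioned above, the sketch below implements a standard knowledge-distillation objective that blends soft teacher targets with hard labels; the temperature and weighting values are illustrative assumptions rather than tuned settings.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.7):
    """Blend a soft-target KL term (teacher vs. student at temperature T)
    with the usual hard-label cross-entropy. Values of T and alpha are assumptions."""
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard

# Example with placeholder logits from a large teacher and a lightweight student.
teacher_logits = torch.randn(16, 10)
student_logits = torch.randn(16, 10, requires_grad=True)
labels = torch.randint(0, 10, (16,))
loss = distillation_loss(student_logits, teacher_logits, labels)
loss.backward()
```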
Emerging techniques in GenAI provide alternative strategies. For example, generative models can be used to simulate intermediate image features, reducing the need for high-resolution input and consequently lowering computation costs during inference [301]. Yet, such strategies remain in the early research phase and face challenges in reliability and robustness when deployed in the field. Additionally, federated learning frameworks have been considered for decentralised training of agricultural CV models directly on edge devices, thereby enhancing privacy and reducing data transfer burdens. However, real-time synchronisation and bandwidth limitations continue to hinder practical deployments [296].
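The following sketch illustrates the core aggregation step of such a federated setup, a FedAvg-style weighted parameter average; client training loops, communication, and synchronisation are omitted, and the helper name and weighting scheme are assumptions.

```python
import copy
import torch

def federated_average(global_model, client_state_dicts, client_sizes):
    """Average client parameters weighted by local dataset size and load the
    result back into the shared global model. Clients are assumed to train
    copies of the same architecture; integer buffers (e.g. BatchNorm counters)
    are simply cast back after averaging, which is acceptable for a sketch."""
    total = float(sum(client_sizes))
    avg_state = copy.deepcopy(client_state_dicts[0])
    for key in avg_state:
        stacked = torch.stack([
            sd[key].float() * (n / total)
            for sd, n in zip(client_state_dicts, client_sizes)
        ])
        avg_state[key] = stacked.sum(dim=0).to(avg_state[key].dtype)
    global_model.load_state_dict(avg_state)
    return global_model

# Usage sketch (local training and communication omitted): after each round the
# coordinator would call
#   federated_average(global_model, [m.state_dict() for m in client_models],
#                     [len(d) for d in client_datasets])
# and redistribute the updated weights to the edge devices.
```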
Therefore, future research should focus on the co-design of hardware-aware AI models and domain-specific integrated circuits (DSICs) tailored for agricultural CV tasks. This includes developing models optimised for edge inference without sacrificing robustness and accuracy, especially under varying environmental conditions. Model-agnostic optimisation techniques and neural architecture search (NAS) tuned for agricultural datasets could play a crucial role in bridging this computational gap. Overall, addressing edge deployment constraints is vital for achieving scalable and autonomous smart agriculture systems, and it requires a multidisciplinary approach spanning AI model design, embedded systems, and agricultural engineering.

5.4. Environmental Variability and Robustness

Smart agriculture is intrinsically tied to the outdoor, dynamic, and often unpredictable nature of farm environments. As such, environmental variability—ranging from changing illumination and weather fluctuations to background clutter—poses a persistent challenge to the robustness of CV systems. While deep learning models have achieved significant success in controlled settings, their performance frequently degrades in real-world agricultural scenes due to a lack of adaptability to shifting conditions [299].
Lighting changes, including variations in sunlight, cloud coverage, and shadow patterns, significantly affect image quality and model performance. For example, tasks such as fruit ripeness detection or leaf disease classification rely heavily on colour and texture features that can be distorted under inconsistent lighting conditions [302]. Similarly, seasonal changes influence plant morphology and field appearance, rendering models trained in one season ineffective in another. Furthermore, rain, dust, and wind can introduce occlusions and motion blur, compromising image clarity and annotation accuracy [293].
Real-time systems such as autonomous tractors or harvest robots demand not only accurate but also rapid predictions. Yet, the underlying models must adapt to environmental feedback loops, which is difficult to achieve using static models trained on offline datasets. Current solutions often involve data augmentation strategies, such as simulating environmental effects during training or applying adaptive image enhancement. However, these approaches offer limited generalisation, especially when confronted with rare or extreme conditions [295]. Temporal and spatial domain shifts further complicate the problem, where models trained in one region fail when deployed in another with different soil types, vegetation structures, or pest prevalence [294].
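A minimal example of such environment-simulating augmentation is sketched below using standard torchvision transforms to mimic lighting shifts, blur, and partial occlusion; the parameter ranges are assumptions and would need tuning per crop, sensor, and season.

```python
from torchvision import transforms

# Training-time augmentation that crudely mimics field variability:
# lighting shifts, blur from wind or motion, and partial occlusion.
field_augment = transforms.Compose([
    transforms.RandomResizedCrop(224, scale=(0.7, 1.0)),
    transforms.ColorJitter(brightness=0.5, contrast=0.4, saturation=0.3, hue=0.05),
    transforms.RandomApply([transforms.GaussianBlur(kernel_size=5, sigma=(0.1, 2.0))], p=0.3),
    transforms.ToTensor(),
    transforms.RandomErasing(p=0.2),   # rough stand-in for occlusion by leaves or equipment
])

# The pipeline would be passed as the `transform` of a training dataset,
# e.g. torchvision.datasets.ImageFolder("data/train", transform=field_augment).
```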
Some progress has been made by integrating multi-sensor fusion (e.g., combining visual, thermal, and LiDAR data) to improve robustness, though at the cost of increased system complexity and energy demands [297]. Transfer learning and continual learning approaches have also been explored to adapt models over time. However, these methods require frequent retraining, which is computationally expensive and sometimes impractical for edge-based deployment [307]. Few-shot learning and meta-learning frameworks offer a promising direction for rapid adaptation under low-resource scenarios but are still underdeveloped in agricultural applications [308].
To build truly adaptive and robust agricultural CV systems, research must advance in the direction of self-supervised learning, domain generalisation, and online learning. Future models must be capable of continual self-improvement and uncertainty estimation to alert users when predictions become unreliable. Additionally, to assess how a model will actually perform in the field, evaluation benchmarks should shift from accuracy metrics on static datasets towards testing under dynamic, real-world conditions [295]. Overall, enhancing CV robustness under environmental variability is key to sustainable and reliable smart agriculture.
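One lightweight route to the uncertainty estimation mentioned above is Monte Carlo dropout, sketched below: dropout is kept active at inference, and the variance across several stochastic forward passes serves as an unreliability cue. The number of passes and any downstream alert threshold are assumptions.

```python
import torch

@torch.no_grad()
def mc_dropout_predict(model, x, passes=20):
    """Run several stochastic forward passes with dropout active and return the
    mean class probabilities plus their variance as an uncertainty cue."""
    model.train()  # keeps dropout stochastic; in practice only dropout layers
                   # (not batch norm) should be switched to train mode
    probs = torch.stack([torch.softmax(model(x), dim=1) for _ in range(passes)])
    return probs.mean(dim=0), probs.var(dim=0)

# Usage sketch: mean_probs, variance = mc_dropout_predict(model, image_batch);
# a high variance on the predicted class could trigger a "review manually" alert.
```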

5.5. Trustworthiness and Explainability

The integration of AI-based CV technologies into agriculture introduces not only technical challenges but also socio-ethical concerns, particularly regarding transparency, interpretability, and trustworthiness. Agricultural stakeholders, including farmers, agronomists, and policymakers, often rely on these systems for high-stakes decisions—such as pesticide application, irrigation scheduling, and yield forecasting—making it critical that AI systems offer clear justifications for their outputs [307].
Traditional deep learning models, especially CNNs and generative architectures, are often perceived as black boxes, lacking interpretability and traceability. This lack of transparency can undermine a user’s confidence and limit the adoption of AI tools, especially in rural regions where farmers may have less familiarity with digital technology. For instance, a farmer may question why a disease detection system flagged a healthy plant or failed to identify a visibly infected crop, particularly when such decisions lead to economic loss [294]. Moreover, automated decisions that lack context-specific justification may conflict with indigenous farming knowledge or experience, creating a barrier between traditional and digital practices [296].
XAI methods such as Grad-CAM, LIME, and SHAP have been adopted to visualise model attention and feature importance, enabling partial insights into decision mechanisms. However, these techniques are mostly post hoc and provide only localised explanations that may not be consistent across different inputs or tasks [300]. Additionally, many XAI tools have yet to be optimised for the multimodal and temporal nature of agricultural CV data. There is a pressing need for native interpretability within model architectures, allowing for transparency without compromising performance.
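For reference, the sketch below shows the core of a Grad-CAM-style computation: activations and gradients are captured at the last convolutional block and combined into a coarse saliency map. The backbone, target layer, and example target class are assumptions.

```python
import torch
from torchvision import models

# Capture activations and their gradients at the last convolutional block,
# then combine them into a coarse class-specific saliency map.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()
activations, gradients = {}, {}

def fwd_hook(_module, _inputs, output):
    activations["value"] = output
    # tensor hook so we also receive the gradient of this activation on backward
    output.register_hook(lambda grad: gradients.update(value=grad))

model.layer4[-1].register_forward_hook(fwd_hook)

x = torch.randn(1, 3, 224, 224)            # placeholder leaf image
score = model(x)[0, 0]                     # logit of a hypothetical "diseased" class
score.backward()

weights = gradients["value"].mean(dim=(2, 3), keepdim=True)    # pooled gradients per channel
cam = torch.relu((weights * activations["value"]).sum(dim=1))  # weighted activation map
cam = (cam / (cam.max() + 1e-8)).detach()                      # normalised map for overlay
```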
Trust also intersects with model fairness and accountability. Models trained on skewed datasets may propagate biases—for example, under-detecting diseases on underrepresented crop types or soil textures—leading to unequal treatment or decision errors [301]. Furthermore, in scenarios involving automated robots or drones, liability in cases of system malfunction or misjudgement remains legally ambiguous. These risks underscore the importance of building AI systems that are not only technically sound but also socially responsible and ethically aligned [298].
To this end, collaborative human-in-the-loop systems are gaining traction. These systems keep the human decision-maker involved during critical inference steps, thus facilitating better oversight, correction, and learning. Additionally, regulatory frameworks and auditing mechanisms must be developed in tandem with technical innovation to ensure responsible deployment. Ultimately, the trustworthiness of AI systems in agriculture will depend not only on their predictive accuracy but also on their transparency, user-friendliness, and alignment with human values.

5.6. Socio-Technical and Economic Barriers

Beyond technical hurdles, the widespread adoption of CV and GenAI in agriculture is profoundly constrained by socio-technical and economic barriers. The assumption that technological availability leads to adoption overlooks the complex on-farm reality, particularly in smallholder contexts, which represent over 90% of the world’s farms [310]. These barriers are often more decisive than purely computational challenges.
A primary concern is data sovereignty and ownership. As farms become data sources, questions arise over who controls this information—the farmer, the technology provider, or corporate aggregators. For smallholder farmers, this is especially critical. There is a significant risk of data exploitation, where aggregated data on soil quality, yield, and practices could be used to their disadvantage in market negotiations or for land speculation. Without equitable data governance models, such as farmer-led data cooperatives, the benefits of AI may flow away from the farmers who generate the foundational data, exacerbating existing power imbalances.
Furthermore, significant economic and infrastructural gaps limit accessibility. The high upfront costs of hardware like drones and sensors and the recurring expenses for software and data processing present formidable barriers for farmers with low profit margins. The return on investment is often uncertain and long-term, making it a risky proposition. This is compounded by the digital divide, including a lack of reliable Internet connectivity and the technical literacy required to operate and maintain these sophisticated systems in many rural areas. Consequently, there is a risk of creating a two-tiered agricultural system, where large, well-capitalised farms advance while smallholders are left behind. Addressing these socio-technical challenges through inclusive design, fair data policies, and affordable, context-appropriate technology is therefore as crucial as algorithmic innovation for the equitable and sustainable future of smart agriculture.

6. Future Trends and Outlook

The agricultural sector is undergoing a cognitive revolution, driven by the symbiotic evolution of GenAI and CV. This transformation transcends mere technological substitution, establishing an adaptive framework where computational systems dynamically interact with agricultural ecosystems. Several key shifts are redefining the agricultural intelligence paradigm:
Intelligent Scenario Engineering. Generative architectures are advancing beyond conventional data augmentation. They enable the simulation of complex agricultural scenarios and the prediction of environmental impacts. Concurrently, physics-informed generative models are increasingly capable of preserving crucial biophysical dynamics, leading to the creation of highly realistic synthetic datasets. These datasets can depict subtle, early-stage issues that are difficult to capture at scale in the real world. This dual capability—forecasting potential scenarios and generating rich training data—enhances the ability of CV models to identify and diagnose issues with greater precision, thereby supporting more effective agricultural management.
Distributed Intelligence Ecosystems. The deployment of AI at the edge is evolving towards more autonomous and resource-efficient cognitive networks. Advances in model optimisation allow sophisticated vision models to operate effectively on low-power, potentially self-sustaining, local devices. This facilitates near-real-time analysis and response directly within agricultural environments. Federated learning approaches are crucial for updating models collectively without centralising sensitive raw data, thereby maintaining functionality and privacy even in areas with limited connectivity. The scope of such systems is also broadening to encompass diverse applications, including animal monitoring, by integrating various sensor inputs and analytical techniques.
Human-Centric Cognitive Symbiosis. A key future trend is the establishment of human-centric cognitive symbiosis, which refers to the development of AI systems designed to work intuitively and collaboratively with human users, such as farmers and agronomists, to augment rather than replace their knowledge and agency. Emerging frameworks for human–AI interaction are central to this vision, aiming to bridge the gap between digital insights and physical farming practices. To achieve this, multimodal and XAI systems are being developed to make algorithmic outputs more understandable and to align them with traditional farming knowledge and local contexts. This symbiotic relationship can be enhanced through innovative interfaces that translate complex data into intuitive formats, broadening accessibility. The shift towards data-centric AI, which prioritises the quality and strategic use of data, further empowers agricultural stakeholders by improving data annotation efficiency and exploring mechanisms to encourage broader participation in data ecosystems.
Cross-Scale Environmental Intelligence. The vertical integration of data from diverse sources, including satellite, aerial, and ground-based sensors, enables comprehensive, multi-resolution environmental modelling. Advanced analytical techniques applied to these fused datasets can provide early warnings for large-scale environmental challenges and detect subtle crop issues before they become visually apparent. Such integrated intelligence can support dynamic and adaptive management strategies in response to changing environmental conditions.
Ethical Implementation Imperative. As agricultural AI matures, the need for robust ethical frameworks and governance becomes paramount. This includes considerations for algorithmic transparency, accountability, and data privacy. Efforts to bridge digital divides in rural areas and address data sovereignty concerns are critical for ensuring equitable access to and benefits from these technologies.
The trajectory of these advancements suggests a significant potential for AI systems to contribute to more sustainable agricultural practices, such as optimising resource use at a highly localised level. However, achieving this potential demands a tight coupling of technological innovation with socio-ecological awareness—where AI serves not as an automation engine but as a stewardship mechanism that amplifies ecological resilience and farmer agency. Success will ultimately be measured by how effectively these systems translate computational intelligence into equitable and sustainable food system outcomes.

7. Conclusions

In conclusion, the integration of CV and GenAI is profoundly reshaping smart agriculture, offering unprecedented capabilities for monitoring, automation, and data-informed decision-making across diverse agricultural domains. This review has systematically examined how CV techniques, significantly amplified by GenAI’s ability to address data scarcity and enhance model robustness through sophisticated data synthesis and adaptation, are revolutionising crop health management, precision farming, automated harvesting, and livestock monitoring. Despite these substantial technological advancements, the path to widespread, effective deployment is encumbered by a host of challenges. These include not only technical issues like environmental variability and edge computing limitations but also critical socio-economic and ethical barriers, such as concerns over data sovereignty and the risk of a widening digital divide. The future trajectory points towards increasingly sophisticated applications, including intelligent scenario engineering and distributed intelligence ecosystems, alongside a growing emphasis on human–AI collaboration and ethical considerations. Realising the full transformative potential of CV and GenAI in agriculture will depend critically on fostering scalable, adaptive, and dependable solutions that contribute to resilient and sustainable food production systems globally.

Author Contributions

Conceptualisation, X.M.; methodology, X.M.; investigation, X.M. and Y.Y.; writing—original draft preparation, Y.Y.; writing—review and editing, X.M., Y.Y., and X.C.; visualisation, X.M. and Y.Y.; supervision, X.C. and S.X.; project administration, X.C. All authors have read and agreed to the published version of the manuscript.

Funding

This work was partly supported by the Natural Science Foundation of Jiangsu Province, China (Grant no. BK20220515), the Leading-edge Technology Program of Jiangsu Natural Science Foundation, China (Grant no. BK20202001), the China Postdoctoral Science Foundation, China (Grant no. 2021M691310), and the Qinglan Project of Jiangsu Province, China.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

No new data were created or analysed in this study.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Ghazal, S.; Munir, A.; Qureshi, W.S. Computer Vision in Smart Agriculture and Precision Farming: Techniques and Applications. Artif. Intell. Agric. 2024, 13, 64–83. [Google Scholar]
  2. Kumar, K.V.; Jayasankar, T. An Identification of Crop Disease Using Image Segmentation. Int. J. Pharm. Sci. Res. 2019, 10, 1054–1064. [Google Scholar]
  3. Granwehr, A.; Hofer, V. Analysis on Digital Image Processing for Plant Health Monitoring. J. Comput. Nat. Sci. 2021, 1, 5–8. [Google Scholar]
  4. Putra Laksamana, A.D.; Fakhrurroja, H.; Pramesti, D. Developing a Labeled Dataset for Chili Plant Health Monitoring: A Multispectral Image Segmentation Approach with YOLOv8. In Proceedings of the 2024 International Conference on Computer, Control, Informatics and its Applications (IC3INA), Bandung, Indonesia, 9–10 October 2024; pp. 440–445. [Google Scholar]
  5. Doll, O.; Loos, A. Comparison of Object Detection Algorithms for Livestock Monitoring of Sheep in UAV Images. In Proceedings of the International Workshop Camera Traps, AI, and Ecology, Jena, Germany, 7–8 September 2023. [Google Scholar]
  6. Ahmad, M.; Abbas, S.; Fatima, A.; Ghazal, T.M.; Alharbi, M.; Khan, M.A.; Elmitwally, N.S. AI-Driven Livestock Identification and Insurance Management System. Egypt. Inform. J. 2023, 24, 100390. [Google Scholar]
  7. Li, G.; Sun, J.; Guan, M.; Sun, S.; Shi, G.; Zhu, C. A New Method for Non-Destructive Identification and Tracking of Multi-Object Behaviors in Beef Cattle Based on Deep Learning. Animals 2024, 14, 2464. [Google Scholar]
  8. Kirongo, A.C.; Maitethia, D.; Mworia, E.; Muketha, G.M. Application of Real-Time Deep Learning in Integrated Surveillance of Maize and Tomato Pests and Bacterial Diseases. J. Kenya Natl. Comm. UNESCO 2024, 4, 1–13. [Google Scholar]
  9. K, S.R.; Sannakashappanavar, B.S.; Kumar, M.; Hegde, G.S.; Megha, M. Intelligent Surveillance and Protection System for Farmlands from Animals. In Proceedings of the 2024 IEEE International Conference on Contemporary Computing and Communications (InC4), Bangalore, India, 15–16 March 2024; IEEE: Piscataway, NJ, USA, 2024; Volume 1, pp. 1–6. [Google Scholar]
  10. Jiang, G.; Grafton, M.; Pearson, D.; Bretherton, M.; Holmes, A. Predicting Spatiotemporal Yield Variability to Aid Arable Precision Agriculture in New Zealand: A Case Study of Maize-Grain Crop Production in the Waikato Region. N. Z. J. Crop Hortic. Sci. 2021, 49, 41–62. [Google Scholar]
  11. Asadollah, S.B.H.S.; Jodar-Abellan, A.; Pardo, M.Á. Optimizing Machine Learning for Agricultural Productivity: A Novel Approach with RScv and Remote Sensing Data over Europe. Agric. Syst. 2024, 218, 103955. [Google Scholar]
  12. Sompal; Singh, R. To Identify a ML and CV Method for Monitoring and Recording the Variables that Impact on Crop Output. In Proceedings of the International Conference on Cognitive Computing and Cyber Physical Systems, Delhi, India, 1–2 December 2023; Springer: Singapore, 2025; pp. 371–382. [Google Scholar]
  13. Fadhaeel, T.; H, P.C.; Al Ahdal, A.; Rakhra, M.; Singh, D. Design and Development an Agriculture Robot for Seed Sowing, Water Spray and Fertigation. In Proceedings of the 2022 International Conference on Computational Intelligence and Sustainable Engineering Solutions (CISES), Greater Noida, India, 20–21 May 2022; IEEE: Piscataway, NJ, USA, 2022; pp. 148–153. [Google Scholar]
  14. Yeshmukhametov, A.; Dauletiya, D.; Zhassuzak, M.; Buribayev, Z. Development of Mobile Robot with Autonomous Mobile Robot Weeding and Weed Recognition by Using Computer Vision. In Proceedings of the 2023 23rd International Conference on Control, Automation and Systems (ICCAS), Yeosu, Republic of Korea, 17–20 October 2023; IEEE: Piscataway, NJ, USA, 2023; pp. 1407–1412. [Google Scholar]
  15. Maimaitijiang, M.; Sagan, V.; Sidike, P.; Hartling, S.; Esposito, F.; Fritschi, F.B. Soybean Yield Prediction from UAV Using Multimodal Data Fusion and Deep Learning. Remote Sens. Environ. 2020, 237, 111599. [Google Scholar]
  16. Zhou, H.; Yang, J.; Lou, W.; Sheng, L.; Li, D.; Hu, H. Improving Grain Yield Prediction through Fusion of Multi-Temporal Spectral Features and Agronomic Trait Parameters Derived from UAV Imagery. Front. Plant Sci. 2023, 14, 1217448. [Google Scholar]
  17. Su, X.; Nian, Y.; Shaghaleh, H.; Hamad, A.; Yue, H.; Zhu, Y.; Li, J.; Wang, W.; Wang, H.; Ma, Q.; et al. Combining Features Selection Strategy and Features Fusion Strategy for SPAD Estimation of Winter Wheat Based on UAV Multispectral Imagery. Front. Plant Sci. 2024, 15, 1404238. [Google Scholar]
  18. Ismail, N.; Malik, O.A. Real-Time Visual Inspection System for Grading Fruits Using Computer Vision and Deep Learning Techniques. Inf. Process. Agric. 2022, 9, 24–37. [Google Scholar]
  19. Hemamalini, V.; Rajarajeswari, S.; Nachiyappan, S.; Sambath, M.; Devi, T.; Singh, B.K.; Raghuvanshi, A. Food Quality Inspection and Grading Using Efficient Image Segmentation and Machine Learning-Based System. J. Food Qual. 2022, 2022, 5262294. [Google Scholar]
  20. Gururaj, N.; Vinod, V.; Vijayakumar, K. Deep Grading of Mangoes Using Convolutional Neural Network and Computer Vision. Multimed. Tools Appl. 2023, 82, 39525–39550. [Google Scholar]
  21. Dönmez, D.; Isak, M.A.; İzgü, T.; Şimşek, Ö. Green Horizons: Navigating the Future of Agriculture through Sustainable Practices. Sustainability 2024, 16, 3505. [Google Scholar] [CrossRef]
  22. Gai, J.; Xiang, L.; Tang, L. Using a Depth Camera for Crop Row Detection and Mapping for Under-Canopy Navigation of Agricultural Robotic Vehicle. Comput. Electron. Agric. 2021, 188, 106301. [Google Scholar]
  23. Yan, Y.; Zhang, B.; Zhou, J.; Zhang, Y.; Liu, X. Real-Time Localization and Mapping Utilizing Multi-Sensor Fusion and Visual–IMU–Wheel Odometry for Agricultural Robots in Unstructured, Dynamic and GPS-Denied Greenhouse Environments. Agronomy 2022, 12, 1740. [Google Scholar]
  24. Nakaguchi, V.M.; Abeyrathna, R.R.D.; Liu, Z.; Noguchi, R.; Ahamed, T. Development of a Machine Stereo Vision-Based Autonomous Navigation System for Orchard Speed Sprayers. Comput. Electron. Agric. 2024, 227, 109669. [Google Scholar]
  25. Li, M.; Shamshiri, R.R.; Weltzien, C.; Schirrmann, M. Crop Monitoring Using Sentinel-2 and UAV Multispectral Imagery: A Comparison Case Study in Northeastern Germany. Remote Sens. 2022, 14, 4426. [Google Scholar]
  26. Yang, Q.; Shi, L.; Han, J.; Chen, Z.; Yu, J. A VI-Based Phenology Adaptation Approach for Rice Crop Monitoring Using UAV Multispectral Images. Field Crops Res. 2022, 277, 108419. [Google Scholar]
  27. Shammi, S.A.; Huang, Y.; Feng, G.; Tewolde, H.; Zhang, X.; Jenkins, J.; Shankle, M. Application of UAV Multispectral Imaging to Monitor Soybean Growth with Yield Prediction through Machine Learning. Agronomy 2024, 14, 672. [Google Scholar] [CrossRef]
  28. Coito, T.; Firme, B.; Martins, M.S.E.; Vieira, S.M.; Figueiredo, J.; Sousa, J.M.C. Intelligent Sensors for Real-Time Decision-Making. Automation 2021, 2, 62–82. [Google Scholar] [CrossRef]
  29. Al-Jarrah, M.A.; Yaseen, M.A.; Al-Dweik, A.; Dobre, O.A.; Alsusa, E. Decision Fusion for IoT-Based Wireless Sensor Networks. IEEE Internet Things J. 2020, 7, 1313–1326. [Google Scholar]
  30. Kumar, P.; Motia, S.; Reddy, S. Integrating Wireless Sensing and Decision Support Technologies for Real-Time Farmland Monitoring and Support for Effective Decision Making: Designing and Deployment of WSN and DSS for Sustainable Growth of Indian Agriculture. Int. J. Inf. Technol. 2023, 15, 1081–1099. [Google Scholar]
  31. Almadani, B.; Mostafa, S.M. IIoT Based Multimodal Communication Model for Agriculture and Agro-Industries. IEEE Access 2021, 9, 10070–10088. [Google Scholar]
  32. Suciu, G.; Marcu, I.; Balaceanu, C.; Dobrea, M.; Botezat, E. Efficient IoT System for Precision Agriculture. In Proceedings of the 2019 15th International Conference on Engineering of Modern Electric Systems (EMES), Oradea, Romania, 13–14 June 2019; IEEE: Piscataway, NJ, USA, 2019; pp. 173–176. [Google Scholar]
  33. Beldek, C.; Cunningham, J.; Aydin, M.; Sariyildiz, E.; Phung, S.; Alici, G. Sensing-Based Robustness Challenges in Agricultural Robotic Harvesting. In Proceedings of the 2025 IEEE International Conference on Mechatronics (ICM), Wollongong, Australia, 28 February–2 March 2025; IEEE: Piscataway, NJ, USA, 2025; pp. 1–6. [Google Scholar]
  34. Luo, J.; Li, B.; Leung, C. A Survey of Computer Vision Technologies in Urban and Controlled-Environment Agriculture. ACM Comput. Surv. 2023, 56, 1–39. [Google Scholar]
  35. Zhang, X.; Cao, Z.; Dong, W. Overview of Edge Computing in the Agricultural Internet of Things: Key Technologies, Applications, Challenges. IEEE Access 2020, 8, 141748–141761. [Google Scholar]
  36. Williamson, H.F.; Brettschneider, J.; Caccamo, M.; Davey, R.P.; Goble, C.; Kersey, P.J.; May, S.; Morris, R.J.; Ostler, R.; Pridmore, T.; et al. Data Management Challenges for Artificial Intelligence in Plant and Agricultural Research. F1000Research 2023, 10, 324. [Google Scholar]
  37. Paulus, S.; Leiding, B. Can Distributed Ledgers Help to Overcome the Need of Labeled Data for Agricultural Machine Learning Tasks? Plant Phenomics 2023, 5, 0070. [Google Scholar]
  38. Culman, M.; Delalieux, S.; Beusen, B.; Somers, B. Automatic Labeling to Overcome the Limitations of Deep Learning in Applications with Insufficient Training Data: A Case Study on Fruit Detection in Pear Orchards. Comput. Electron. Agric. 2023, 213, 108196. [Google Scholar]
  39. Idoje, G.; Dagiuklas, T.; Iqbal, M. Survey for Smart Farming Technologies: Challenges and Issues. Comput. Electr. Eng. 2021, 92, 107104. [Google Scholar]
  40. Sai, S.; Kumar, S.; Gaur, A.; Goyal, S.; Chamola, V.; Hussain, A. Unleashing the Power of Generative AI in Agriculture 4.0 for Smart and Sustainable Farming. Cogn. Comput. 2025, 17, 63. [Google Scholar]
  41. Ho, J.; Jain, A.; Abbeel, P. Denoising Diffusion Probabilistic Models. In Advances in Neural Information Processing Systems; Larochelle, H., Ranzato, M., Hadsell, R., Balcan, M., Lin, H., Eds.; Curran Associates, Inc.: Red Hook, NY, USA, 2020; Volume 33, pp. 6840–6851. [Google Scholar]
  42. Radford, A.; Metz, L.; Chintala, S. Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks. arXiv 2015, arXiv:1511.06434. [Google Scholar]
  43. Fawakherji, M.; Potena, C.; Prevedello, I.; Pretto, A.; Bloisi, D.D.; Nardi, D. Data Augmentation Using GANs for Crop/Weed Segmentation in Precision Farming. In Proceedings of the 2020 IEEE Conference on Control Technology and Applications (CCTA), Montreal, QC, Canada, 24–26 August 2020; IEEE: Piscataway, NJ, USA, 2020; pp. 279–284. [Google Scholar]
  44. Pasaribu, F.B.; Dewi, L.J.E.; Aryanto, K.Y.E.; Seputra, K.A.; Varnakovida, P.; Kertiasih, N.K. Generating Synthetic Data on Agricultural Crops with DCGAN. In Proceedings of the 2024 11th International Conference on Advanced Informatics: Concept, Theory and Application (ICAICTA), Singapore, 28–30 September 2024; IEEE: Piscataway, NJ, USA, 2024; pp. 1–6. [Google Scholar]
  45. Zhao, K.; Nguyen, M.; Yan, W. Evaluating Accuracy and Efficiency of Fruit Image Generation Using Generative AI Diffusion Models for Agricultural Robotics. In Proceedings of the 2024 39th International Conference on Image and Vision Computing New Zealand (IVCNZ), Christchurch, New Zealand, 4–6 December 2024; IEEE: Piscataway, NJ, USA, 2024; pp. 1–6. [Google Scholar]
  46. Yu, Z.; Ye, J.; Liufu, S.; Lu, D.; Zhou, H. TasselLFANetV2: Exploring Vision Models Adaptation in Cross-Domain. IEEE Geosci. Remote Sens. Lett. 2024, 21, 2504105. [Google Scholar]
  47. Hinton, G.; Vinyals, O.; Dean, J. Distilling the Knowledge in a Neural Network. arXiv 2015, arXiv:1503.02531. [Google Scholar]
  48. Gu, Y.; Dong, L.; Wei, F.; Huang, M. MiniLLM: Knowledge Distillation of Large Language Models. In Proceedings of the The Twelfth International Conference on Learning Representations, Vienna, Austria, 7–11 May 2024. [Google Scholar]
  49. Huang, Q.; Wu, X.; Wang, Q.; Dong, X.; Qin, Y.; Wu, X.; Gao, Y.; Hao, G. Knowledge Distillation Facilitates the Lightweight and Efficient Plant Diseases Detection Model. Plant Phenomics 2023, 5, 0062. [Google Scholar]
  50. Haddaway, N.R.; Page, M.J.; Pritchard, C.C.; McGuinness, L.A. PRISMA2020: An R package and Shiny app for producing PRISMA 2020-compliant flow diagrams, with interactivity for optimised digital transparency and Open Synthesis. Campbell Syst. Rev. 2022, 18, e1230. [Google Scholar]
  51. Long, J.; Shelhamer, E.; Darrell, T. Fully Convolutional Networks for Semantic Segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Boston, MA, USA, 7–12 June 2015; pp. 3431–3440. [Google Scholar]
  52. Girshick, R.; Donahue, J.; Darrell, T.; Malik, J. Rich Feature Hierarchies for Accurate Object Detection and Semantic Segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Columbus, OH, USA, 23–28 June 2014; pp. 580–587. [Google Scholar]
  53. Sharif, M.; Khan, M.A.; Iqbal, Z.; Azam, M.F.; Lali, M.I.U.; Javed, M.Y. Detection and Classification of Citrus Diseases in Agriculture Based on Optimized Weighted Segmentation and Feature Selection. Comput. Electron. Agric. 2018, 150, 220–234. [Google Scholar]
  54. Aygün, S.; Güneş, E.O. A Benchmarking: Feature Extraction and Classification of Agricultural Textures Using LBP, GLCM, RBO, Neural Networks, k-NN, and Random Forest. In Proceedings of the 2017 6th International Conference on Agro-Geoinformatics, Fairfax, VA, USA, 7–10 August 2017; pp. 1–4. [Google Scholar]
  55. Dutta, K.; Talukdar, D.; Bora, S.S. Segmentation of Unhealthy Leaves in Cruciferous Crops for Early Disease Detection Using Vegetative Indices and Otsu Thresholding of Aerial Images. Measurement 2022, 189, 110478. [Google Scholar]
  56. Li, Z.; Sun, J.; Shen, Y.; Yang, Y.; Wang, X.; Wang, X.; Tian, P.; Qian, Y. Deep Migration Learning-Based Recognition of Diseases and Insect Pests in Yunnan Tea under Complex Environments. Plant Methods 2024, 20, 101. [Google Scholar]
  57. Heider, N.; Gunreben, L.; Zürner, S.; Schieck, M. A Survey of Datasets for Computer Vision in Agriculture. arXiv 2025, arXiv:2502.16950. [Google Scholar]
  58. Xu, R.; Li, C.; Paterson, A.H. Multispectral Imaging and Unmanned Aerial Systems for Cotton Plant Phenotyping. PLoS ONE 2019, 14, e0205083. [Google Scholar]
  59. Sarić, R.; Nguyen, V.D.; Burge, T.; Berkowitz, O.; Trtílek, M.; Whelan, J.; Lewsey, M.G.; Čustović, E. Applications of Hyperspectral Imaging in Plant Phenotyping. Trends Plant Sci. 2022, 27, 301–315. [Google Scholar] [PubMed]
  60. Detring, J.; Barreto, A.; Mahlein, A.K.; Paulus, S. Quality Assurance of Hyperspectral Imaging Systems for Neural Network Supported Plant Phenotyping. Plant Methods 2024, 20, 189. [Google Scholar]
  61. Tong, L.; Fang, J.; Wang, X.; Zhao, Y. Research on Cattle Behavior Recognition and Multi-Object Tracking Algorithm Based on YOLO-BoT. Animals 2024, 14, 2993. [Google Scholar] [CrossRef] [PubMed]
  62. Qiao, Y.; Guo, Y.; He, D. Cattle Body Detection Based on YOLOv5-ASFF for Precision Livestock Farming. Comput. Electron. Agric. 2023, 204, 107579. [Google Scholar]
  63. Vogg, R.; Lüddecke, T.; Henrich, J.; Dey, S.; Nuske, M.; Hassler, V.; Murphy, D.; Fischer, J.; Ostner, J.; Schülke, O.; et al. Computer Vision for Primate Behavior Analysis in the Wild. Nat. Methods 2025, 22, 1154–1166. [Google Scholar]
  64. Gaso, D.V.; de Wit, A.; Berger, A.G.; Kooistra, L. Predicting within-Field Soybean Yield Variability by Coupling Sentinel-2 Leaf Area Index with a Crop Growth Model. Agric. For. Meteorol. 2021, 308, 108553. [Google Scholar]
  65. Raj, R.; Walker, J.P.; Pingale, R.; Nandan, R.; Naik, B.; Jagarlapudi, A. Leaf Area Index Estimation Using Top-of-Canopy Airborne RGB Images. Int. J. Appl. Earth Obs. Geoinf. 2021, 96, 102282. [Google Scholar]
  66. Das Menon H, K.; Mishra, D.; Deepa, D. Automation and Integration of Growth Monitoring in Plants (with Disease Prediction) and Crop Prediction. Mater. Today Proc. 2021, 43, 3922–3927. [Google Scholar]
  67. Marcu, I.; Drăgulinescu, A.M.; Oprea, C.; Suciu, G.; Bălăceanu, C. Predictive Analysis and Wine-Grapes Disease Risk Assessment Based on Atmospheric Parameters and Precision Agriculture Platform. Sustainability 2022, 14, 11487. [Google Scholar] [CrossRef]
  68. Donmez, C.; Villi, O.; Berberoglu, S.; Cilek, A. Computer Vision-Based Citrus Tree Detection in a Cultivated Environment Using UAV Imagery. Comput. Electron. Agric. 2021, 187, 106273. [Google Scholar]
  69. Vergara, B.; Toledo, K.; Fernández-Campusano, C.; Santamaria, A.R. Drone and Computer Vision-Based Detection of Aleurothrixus Floccosus in Citrus Crops. In Proceedings of the 2024 IEEE International Conference on Automation/XXVI Congress of the Chilean Association of Automatic Control (ICA-ACCA), Santiago, Chile, 20–23 October 2024; IEEE: Piscataway, NJ, USA, 2024; pp. 1–6. [Google Scholar]
  70. Ali, H.; Nidzamuddin, S.; Elshaikh, M. Smart Irrigation System Based IoT for Indoor Housing Farming. In AIP Conference Proceedings; AIP Publishing: Melville, NY, USA, 2024; Volume 2898. [Google Scholar]
  71. Manikandan, J.; Saran, J.; Samitha, S.; Rhikshitha, K. An Effective Study on the Machine Vision-Based Automatic Control and Monitoring in Furrow Irrigation and Precision Irrigation. In Computer Vision in Smart Agriculture and Crop Management; Scrivener Publishing LLC: Beverly, MA, USA, 2025; pp. 323–342. [Google Scholar]
  72. Banh, L.; Strobel, G. Generative Artificial Intelligence. Electron. Mark. 2023, 33, 63. [Google Scholar]
  73. Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A.N.; Kaiser, L.u.; Polosukhin, I. Attention is All you Need. In Advances in Neural Information Processing Systems; Guyon, I., Luxburg, U.V., Bengio, S., Wallach, H., Fergus, R., Vishwanathan, S., Garnett, R., Eds.; Curran Associates, Inc.: Red Hook, NY, USA, 2017; Volume 30. [Google Scholar]
  74. Hu, X.; Gan, Z.; Wang, J.; Yang, Z.; Liu, Z.; Lu, Y.; Wang, L. Scaling Up Vision-Language Pre-Training for Image Captioning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA, 18–24 June 2022; pp. 17980–17989. [Google Scholar]
  75. Sapkota, R.; Karkee, M. Creating Image Datasets in Agricultural Environments Using DALL.E: Generative AI-Powered Large Language Model. arXiv 2023, arXiv:2307.08789. [Google Scholar]
  76. Chou, C.B.; Lee, C.H. Generative Neural Network-Based Online Domain Adaptation (GNN-ODA) Approach for Incomplete Target Domain Data. IEEE Trans. Instrum. Meas. 2023, 72, 3508110. [Google Scholar]
  77. Mildenhall, B.; Srinivasan, P.P.; Tancik, M.; Barron, J.T.; Ramamoorthi, R.; Ng, R. Nerf: Representing Scenes as Neural Radiance Fields for View Synthesis. Commun. ACM 2021, 65, 99–106. [Google Scholar]
  78. Modak, S.; Stein, A. Synthesizing Training Data for Intelligent Weed Control Systems Using Generative AI. In Proceedings of the International Conference on Architecture of Computing Systems, Potsdam, Germany, 13–15 May 2024; Springer: Cham, Switzerland, 2024; pp. 112–126. [Google Scholar]
  79. Bhugra, S.; Srivastava, S.; Kaushik, V.; Mukherjee, P.; Lall, B. Plant Data Generation with Generative AI: An Application to Plant Phenotyping. In Applications of Generative AI; Springer: Cham, Switzerland, 2024; pp. 503–535. [Google Scholar]
  80. Madsen, S.L.; Dyrmann, M.; Jørgensen, R.N.; Karstoft, H. Generating Artificial Images of Plant Seedlings Using Generative Adversarial Networks. Biosyst. Eng. 2019, 187, 147–159. [Google Scholar]
  81. Miranda, M.; Drees, L.; Roscher, R. Controlled Multi-Modal Image Generation for Plant Growth Modeling. In Proceedings of the 2022 26th International Conference on Pattern Recognition (ICPR), Montreal, QC, Canada, 21–25 August 2022; IEEE: Piscataway, NJ, USA, 2022; pp. 5118–5124. [Google Scholar]
  82. Dhumale, Y.; Bamnote, G.; Kale, R.; Sawale, G.; Chaudhari, A.; Karwa, R. Generative Modeling Techniques for Simulating Rare Agricultural Events on Prediction of Wheat Yield Production. In Proceedings of the 2024 2nd DMIHER International Conference on Artificial Intelligence in Healthcare, Education and Industry (IDICAIEI), Wardha, India, 29–30 November 2024; pp. 1–5. [Google Scholar]
  83. Klair, Y.S.; Agrawal, K.; Kumar, A. Impact of Generative AI in Diagnosing Diseases in Agriculture. In Proceedings of the 2024 2nd International Conference on Disruptive Technologies (ICDT), Greater Noida, India, 15–16 March 2024; IEEE: Piscataway, NJ, USA, 2024; pp. 870–875. [Google Scholar]
  84. Klein, J.; Waller, R.; Pirk, S.; Pałubicki, W.; Tester, M.; Michels, D.L. Synthetic Data at Scale: A Development Model to Efficiently Leverage Machine Learning in Agriculture. Front. Plant Sci. 2024, 15, 1360113. [Google Scholar]
  85. Saraceni, L.; Motoi, I.M.; Nardi, D.; Ciarfuglia, T.A. Self-Supervised Data Generation for Precision Agriculture: Blending Simulated Environments with Real Imagery. In Proceedings of the 2024 IEEE 20th International Conference on Automation Science and Engineering (CASE), Bari, Italy, 28 August–1 September 2024; IEEE: Piscataway, NJ, USA, 2024; pp. 71–77. [Google Scholar]
  86. Lohar, R.; Mathur, H.; Patil, V.R. Estimation of Crop Recommendation Using Generative Adversarial Network with Optimized Machine Learning Model. Cuest. Fisioter. 2025, 54, 328–341. [Google Scholar]
  87. Yoon, S.; Cho, Y.; Shin, M.; Lin, M.Y.; Kim, D.; Ahn, T.I. Melon Fruit Detection and Quality Assessment Using Generative AI-Based Image Data Augmentation. J. Bio-Environ. Control 2024, 33, 352–360. [Google Scholar]
  88. Yoon, S.; Lee, Y.; Jung, E.; Ahn, T.I. Agricultural Applicability of AI Based Image Generation. J. Bio-Environ. Control 2024, 33, 120–128. [Google Scholar]
  89. Duguma, A.; Bai, X. Contribution of Internet of Things (IoT) in Improving Agricultural Systems. Int. J. Environ. Sci. Technol. 2024, 21, 2195–2208. [Google Scholar]
  90. David, P.E.; Chelliah, P.R.; Anandhakumar, P. Reshaping Agriculture Using Intelligent Edge Computing. In Advances in Computers; Elsevier: Amsterdam, The Netherlands, 2024; Volume 132, pp. 167–204. [Google Scholar]
  91. Majumder, S.; Khandelwal, Y.; Sornalakshmi, K. Computer Vision and Generative AI for Yield Prediction in Digital Agriculture. In Proceedings of the 2024 2nd International Conference on Networking and Communications (ICNWC), Chennai, India, 2–4 April 2024; IEEE: Piscataway, NJ, USA, 2024; pp. 1–6. [Google Scholar]
  92. Dey, A.; Bhoumik, D.; Dey, K.N. Automatic Detection of Whitefly Pest Using Statistical Feature Extraction and Image Classification Methods. Int. Res. J. Eng. Technol. 2016, 3, 950–959. [Google Scholar]
  93. Islam, A.; Islam, R.; Haque, S.R.; Islam, S.M.; Khan, M.A.I. Rice Leaf Disease Recognition using Local Threshold Based Segmentation and Deep CNN. Int. J. Intell. Syst. Appl. 2021, 10, 35. [Google Scholar]
  94. Duan, Y.; Han, W.; Guo, P.; Wei, X. YOLOv8-GDCI: Research on the Phytophthora Blight Detection Method of Different Parts of Chili Based on Improved YOLOv8 Model. Agronomy 2024, 14, 2734. [Google Scholar] [CrossRef]
  95. Ma, J.; Zhao, Y.; Fan, W.; Liu, J. An Improved YOLOv8 Model for Lotus Seedpod Instance Segmentation in the Lotus Pond Environment. Agronomy 2024, 14, 1325. [Google Scholar] [CrossRef]
  96. Yang, N.; Chang, K.; Dong, S.; Tang, J.; Wang, A.; Huang, R.; Jia, Y. Rapid Image Detection and Recognition of Rice False Smut Based on Mobile Smart Devices with Anti-Light Features from Cloud Database. Biosyst. Eng. 2022, 218, 229–244. [Google Scholar]
  97. Mahmood ur Rehman, M.; Liu, J.; Nijabat, A.; Faheem, M.; Wang, W.; Zhao, S. Leveraging Convolutional Neural Networks for Disease Detection in Vegetables: A Comprehensive Review. Agronomy 2024, 14, 2231. [Google Scholar] [CrossRef]
  98. Verma, S.; Chug, A.; Singh, A.P.; Singh, D. Plant Disease Detection and Severity Assessment Using Image Processing and Deep Learning Techniques. SN Comput. Sci. 2023, 5, 83. [Google Scholar]
  99. Eunice, J.; Popescu, D.E.; Chowdary, M.K.; Hemanth, J. Deep Learning-Based Leaf Disease Detection in Crops using Images for Agricultural Applications. Agronomy 2022, 12, 2395. [Google Scholar] [CrossRef]
  100. Xu, Q.; Cai, J.R.; Zhang, W.; Bai, J.W.; Li, Z.Q.; Tan, B.; Sun, L. Detection of Citrus Huanglongbing (HLB) Based on the HLB-Induced Leaf Starch Accumulation Using a Home-Made Computer Vision System. Biosyst. Eng. 2022, 218, 163–174. [Google Scholar]
  101. Jia, Y.; Shi, Y.; Luo, J.; Sun, H. Y-Net: Identification of Typical Diseases of Corn Leaves Using a 3D-2D Hybrid CNN Model Combined with a Hyperspectral Image Band Selection Module. Sensors 2023, 23, 1494. [Google Scholar] [PubMed]
  102. Zhu, W.; Sun, J.; Wang, S.; Shen, J.; Yang, K.; Zhou, X. Identifying Field Crop Diseases Using Transformer-Embedded Convolutional Neural Network. Agriculture 2022, 12, 1083. [Google Scholar] [CrossRef]
  103. Kuswidiyanto, L.W.; Wang, P.; Noh, H.H.; Jung, H.Y.; Jung, D.H.; Han, X. Airborne Hyperspectral Imaging for Early Diagnosis of Kimchi Cabbage Downy Mildew using 3D-ResNet and Leaf Segmentation. Comput. Electron. Agric. 2023, 214, 108312. [Google Scholar]
  104. Wang, Y.; Li, T.; Chen, T.; Zhang, X.; Taha, M.F.; Yang, N.; Mao, H.; Shi, Q. Cucumber Downy Mildew Disease Prediction Using a CNN-LSTM Approach. Agriculture 2024, 14, 1155. [Google Scholar] [CrossRef]
  105. Bhargava, A.; Shukla, A.; Goswami, O.P.; Alsharif, M.H.; Uthansakul, P.; Uthansakul, M. Plant Leaf Disease Detection, Classification, and Diagnosis Using Computer Vision and Artificial Intelligence: A Review. IEEE Access 2024, 12, 37443–37469. [Google Scholar]
  106. Hu, Y.; Zhang, Y.; Liu, S.; Zhou, G.; Li, M.; Hu, Y.; Li, J.; Sun, L. DMFGAN: A Multifeature Data Augmentation Method for Grape Leaf Disease Identification. Plant J. 2024, 120, 1278–1303. [Google Scholar]
  107. Min, B.; Kim, T.; Shin, D.; Shin, D. Data Augmentation Method for Plant Leaf Disease Recognition. Appl. Sci. 2023, 13, 1465. [Google Scholar]
  108. Zhao, Y.; Chen, Z.; Gao, X.; Song, W.; Xiong, Q.; Hu, J.; Zhang, Z. Plant Disease Detection Using Generated Leaves Based on DoubleGAN. IEEE-ACM Trans. Comput. Biol. Bioinform. 2022, 19, 1817–1826. [Google Scholar]
  109. Abbas, A.; Jain, S.; Gour, M.; Vankudothu, S. Tomato Plant Disease Detection Using Transfer Learning with C-GAN Synthetic Images. Comput. Electron. Agric. 2021, 187, 106279. [Google Scholar]
  110. Cheng, W.; Ma, T.; Wang, X.; Wang, G. Anomaly Detection for Internet of Things Time Series Data Using Generative Adversarial Networks With Attention Mechanism in Smart Agriculture. Front. Plant Sci. 2022, 13, 890563. [Google Scholar]
  111. Zhang, Y.; Wa, S.; Zhang, L.; Lv, C. Automatic Plant Disease Detection Based on Tranvolution Detection Network With GAN Modules Using Leaf Images. Front. Plant Sci. 2022, 13, 875693. [Google Scholar]
  112. Cui, X.; Ying, Y.; Chen, Z. CycleGAN Based Confusion Model for Cross-Species Plant Disease Image Migration. J. Intell. Fuzzy Syst. 2021, 41, 6685–6696. [Google Scholar]
  113. Drees, L.; Demie, D.T.; Paul, M.R.; Leonhardt, J.; Seidel, S.J.; Doering, T.F.; Roscher, R. Data-Driven Crop Growth Simulation on Time-Varying Generated Images using Multi-Conditional Generative Adversarial Networks. Plant Methods 2024, 20, 93. [Google Scholar]
  114. Lopes, F.A.; Sagan, V.; Esposito, F. PlantPlotGAN: A Physics-Informed Generative Adversarial Network for Plant Disease Prediction. In Proceedings of the 2024 IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), Waikoloa, HI, USA, 3–8 January 2024; IEEE: Piscataway, NJ, USA, 2024; pp. 7051–7060. [Google Scholar]
  115. Fawakherji, M.; Potena, C.; Pretto, A.; Bloisi, D.D.; Nardi, D. Multi-Spectral Image Synthesis for Crop/Weed Segmentation in Precision Farming. Robot. Auton. Syst. 2021, 146, 103861. [Google Scholar]
  116. Ismail; Budiman, D.; Asri, E.; Aidha, Z.R. The Smart Agriculture based on Reconstructed Thermal Image. In Proceedings of the 2022 2nd International Conference on Intelligent Technologies (CONIT), Hubli, India, 24–26 June 2022; pp. 1–6. [Google Scholar]
  117. Saxena, D.; Cao, J. Generative Adversarial Networks (GANs): Challenges, Solutions, and Future Directions. ACM Comput. Surv. (CSUR) 2021, 54, 1–42. [Google Scholar]
  118. Pothapragada, I.S.; R, S. GANs for Data Augmentation with Stacked CNN Models and XAI for Interpretable Maize Yield Prediction. Smart Agric. Technol. 2025, 11, 100992. [Google Scholar]
  119. Lin, W.; Adetomi, A.; Arslan, T. Low-Power Ultra-Small Edge AI Accelerators for Image Recognition with Convolution Neural Networks: Analysis and Future Directions. Electronics 2021, 10, 2048. [Google Scholar]
  120. Zhao, Y.; Li, C.; Yu, P.; Gao, J.; Chen, C. Feature Quantization Improves GAN Training. arXiv 2020, arXiv:2004.02088. [Google Scholar]
  121. Kudo, S.; Kagiwada, S.; Iyatomi, H. Few-shot Metric Domain Adaptation: Practical Learning Strategies for an Automated Plant Disease Diagnosis. arXiv 2024, arXiv:2412.18859. [Google Scholar]
  122. Chang, Y.C.; Stewart, A.J.; Bastani, F.; Wolters, P.; Kannan, S.; Huber, G.R.; Wang, J.; Banerjee, A. On the Generalizability of Foundation Models for Crop Type Mapping. arXiv 2025, arXiv:2409.09451. [Google Scholar]
  123. Awais, M.; Alharthi, A.H.S.A.; Kumar, A.; Cholakkal, H.; Anwer, R.M. AgroGPT: Efficient Agricultural Vision-Language Model with Expert Tuning. In Proceedings of the 2025 IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), Tucson, AZ, USA, 26 February–6 March 2025. [Google Scholar]
  124. Roumeliotis, K.I.; Sapkota, R.; Karkee, M.; Tselikas, N.D.; Nasiopoulos, D.K. Plant Disease Detection through Multimodal Large Language Models and Convolutional Neural Networks. arXiv 2025, arXiv:2504.20419. [Google Scholar]
  125. Joshi, H. Edge-AI for Agriculture: Lightweight Vision Models for Disease Detection in Resource-Limited Settings. arXiv 2024, arXiv:2412.18635. [Google Scholar]
  126. Chen, S.; Xiong, J.; Jiao, J.; Xie, Z.; Huo, Z.; Hu, W. Citrus fruits maturity detection in natural environments based on convolutional neural networks and visual saliency map. Precis. Agric. 2022, 23, 1515–1531. [Google Scholar]
  127. Terdwongworakul, A.; Chaiyapong, S.; Jarimopas, B.; Meeklangsaen, W. Physical properties of fresh young Thai coconut for maturity sorting. Biosyst. Eng. 2009, 103, 208–216. [Google Scholar]
  128. Jiang, L.; Wang, Y.; Wu, C.; Wu, H. Fruit Distribution Density Estimation in YOLO-Detected Strawberry Images: A Kernel Density and Nearest Neighbor Analysis Approach. Agriculture 2024, 14, 1848. [Google Scholar] [CrossRef]
  129. Ji, W.; Gao, X.; Xu, B.; Pan, Y.; Zhang, Z.; Zhao, D. Apple Target Recognition Method in Complex Environment Based on Improved YOLOv4. J. Food Process Eng. 2021, 44, e13866. [Google Scholar]
  130. Yan, B.; Fan, P.; Lei, X.; Liu, Z.; Yang, F. A Real-Time Apple Targets Detection Method for Picking Robot Based on Improved YOLOv5. Remote Sens. 2021, 13, 1619. [Google Scholar]
  131. Hu, T.; Wang, W.; Gu, J.; Xia, Z.; Zhang, J.; Wang, B. Research on Apple Object Detection and Localization Method Based on Improved YOLOX and RGB-D Images. Agronomy 2023, 13, 1816. [Google Scholar] [CrossRef]
  132. Chen, Y.; Xu, H.; Chang, P.; Huang, Y.; Zhong, F.; Jia, Q.; Chen, L.; Zhong, H.; Liu, S. CES-YOLOv8: Strawberry Maturity Detection Based on the Improved YOLOv8. Agronomy 2024, 14, 1353. [Google Scholar] [CrossRef]
  133. Wang, M.; Li, F. Real-Time Accurate Apple Detection Based on Improved YOLOv8n in Complex Natural Environments. Plants 2025, 14, 365. [Google Scholar] [PubMed]
  134. Zhang, F.; Chen, Z.; Ali, S.; Yang, N.; Fu, S.; Zhang, Y. Multi-Class Detection of Cherry Tomatoes Using Improved Yolov4-Tiny Model. Int. J. Agric. Biol. Eng. 2023, 16, 225–231. [Google Scholar]
  135. Zhang, Q.; Chen, Q.; Xu, W.; Xu, L.; Lu, E. Prediction of Feed Quantity for Wheat Combine Harvester Based on Improved YOLOv5s and Weight of Single Wheat Plant without Stubble. Agriculture 2024, 14, 1251. [Google Scholar] [CrossRef]
  136. Li, J.; Lammers, K.; Yin, X.; Yin, X.; He, L.; Lu, R.; Li, Z. MetaFruit Meets Foundation Models: Leveraging a Comprehensive Multi-Fruit Dataset for Advancing Agricultural Foundation Models. Comput. Electron. Agric. 2025, 231, 109908. [Google Scholar]
  137. Eccles, B.J.; Rodgers, P.; Kilpatrick, P.; Spence, I.; Varghese, B. DNNShifter: An efficient DNN pruning system for edge computing. Future Gener. Comput. Syst. 2024, 152, 43–54. [Google Scholar]
  138. Zuo, Z.; Gao, S.; Peng, H.; Xue, Y.; Han, L.; Ma, G.; Mao, H. Lightweight Detection of Broccoli Heads in Complex Field Environments Based on LBDC-YOLO. Agronomy 2024, 14, 2359. [Google Scholar] [CrossRef]
  139. Li, Y.; Xu, X.; Wu, W.; Zhu, Y.; Yang, G.; Yang, X.; Meng, Y.; Jiang, X.; Xue, H. Hyperspectral Estimation of Chlorophyll Content in Grape Leaves Based on Fractional-Order Differentiation and Random Forest Algorithm. Remote Sens. 2024, 16, 2174. [Google Scholar]
  140. Sun, M.; Zhang, D.; Liu, L.; Wang, Z. How to Predict the Sugariness and Hardness of Melons: A Near-Infrared Hyperspectral Imaging Method. Food Chem. 2017, 218, 413–421. [Google Scholar]
  141. Sun, Q.; Chai, X.; Zeng, Z.; Zhou, G.; Sun, T. Noise-Tolerant RGB-D Feature Fusion Network for Outdoor Fruit Detection. Comput. Electron. Agric. 2022, 198, 107034. [Google Scholar]
  142. Lu, P.; Zheng, W.; Lv, X.; Xu, J.; Zhang, S.; Li, Y.; Zhangzhong, L. An Extended Method Based on the Geometric Position of Salient Image Features: Solving the Dataset Imbalance Problem in Greenhouse Tomato Growing Scenarios. Agriculture 2024, 14, 1893. [Google Scholar] [CrossRef]
  143. Huo, Y.; Liu, Y.; He, P.; Hu, L.; Gao, W.; Gu, L. Identifying Tomato Growth Stages in Protected Agriculture with StyleGAN3-Synthetic Images and Vision Transformer. Agriculture 2025, 15, 120. [Google Scholar]
  144. Hirahara, K.; Nakane, C.; Ebisawa, H.; Kuroda, T.; Iwaki, Y.; Utsumi, T.; Nomura, Y.; Koike, M.; Mineno, H. D4: Text-Guided Diffusion Model-Based Domain Adaptive Data Augmentation for Vineyard Shoot Detection. Comput. Electron. Agric. 2025, 230, 109849. [Google Scholar]
  145. Kierdorf, J.; Weber, I.; Kicherer, A.; Zabawa, L.; Drees, L.; Roscher, R. Behind the Leaves: Estimation of Occluded Grapevine Berries with Conditional Generative Adversarial Networks. Front. Artif. Intell. 2022, 5, 830026. [Google Scholar]
  146. Jin, Y.; Liu, J.; Xu, Z.; Yuan, S.; Li, P.; Wang, J. Development Status and Trend of Agricultural Robot Technology. Int. J. Agric. Biol. Eng. 2021, 14, 1–19. [Google Scholar]
  147. Luo, Y.; Wei, L.; Xu, L.; Zhang, Q.; Liu, J.; Cai, Q.; Zhang, W. Stereo-Vision-Based Multi-Crop Harvesting Edge Detection for Precise Automatic Steering of Combine Harvester. Biosyst. Eng. 2022, 215, 115–128. [Google Scholar]
  148. Chu, P.; Li, Z.; Zhang, K.; Chen, D.; Lammers, K.; Lu, R. O2RNet: Occluder-Occludee Relational Network for Robust Apple Detection in Clustered Orchard Environments. Smart Agric. Technol. 2023, 5, 100284. [Google Scholar]
  149. Jo, Y.; Park, Y.; Son, H.I. A Suction Cup-Based Soft Robotic Gripper for Cucumber Harvesting: Design and Validation. Biosyst. Eng. 2024, 238, 143–156. [Google Scholar]
  150. Zhou, H.; Ahmed, A.; Liu, T.; Romeo, M.; Beh, T.; Pan, Y.; Kang, H.; Chen, C. Finger Vision Enabled Real-Time Defect Detection in Robotic Harvesting. Comput. Electron. Agric. 2025, 234, 110222. [Google Scholar]
  151. Zhou, X.; Chen, W.; Wei, X. Improved Field Obstacle Detection Algorithm Based on YOLOv8. Agriculture 2024, 14, 2263. [Google Scholar] [CrossRef]
  152. Mortazavi, M.; Cappelleri, D.J.; Ehsani, R. RoMu4o: A Robotic Manipulation Unit For Orchard Operations Automating Proximal Hyperspectral Leaf Sensing. arXiv 2025, arXiv:2501.10621. [Google Scholar]
  153. Wang, Z.; Walsh, K.; Koirala, A. Mango Fruit Load Estimation using a Video Based MangoYOLO—Kalman Filter—Hungarian Algorithm Method. Sensors 2019, 19, 2742. [Google Scholar] [PubMed]
  154. Manoj, T.; Makkithaya, K.; Narendra, V. A Blockchain-Assisted Trusted Federated Learning for Smart Agriculture. SN Comput. Sci. 2025, 6, 221. [Google Scholar]
  155. Bai, J.; Mei, Y.; He, F.; Long, F.; Liao, Y.; Gao, H.; Huang, Y. Rapid and Non-Destructive Quality Grade Assessment of Hanyuan Zanthoxylum Bungeanum Fruit Using a Smartphone Application Integrating Computer Vision Systems and Convolutional Neural Networks. Food Control 2025, 168, 110844. [Google Scholar]
  156. Nyborg, J.; Pelletier, C.; Lefèvre, S.; Assent, I. TimeMatch: Unsupervised Cross-Region Adaptation by Temporal Shift Estimation. ISPRS J. Photogramm. Remote Sens. 2022, 188, 301–313. [Google Scholar]
  157. Cieslak, M.; Govindarajan, U.; Garcia, A.; Chandrashekar, A.; Hadrich, T.; Mendoza-Drosik, A.; Michels, D.L.; Pirk, S.; Fu, C.C.; Palubicki, W. Generating Diverse Agricultural Data for Vision-Based Farming Applications. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 16–22 June 2024; pp. 5422–5431. [Google Scholar]
  158. Alexander, C.S.; Yarborough, M.; Smith, A. Who is Responsible for ‘Responsible AI’?: Navigating Challenges to Build Trust in AI Agriculture and Food System Technology. Precis. Agric. 2024, 25, 146–185. [Google Scholar]
  159. Saranya, T.; Deisy, C.; Sridevi, S.; Anbananthen, K.S.M. A Comparative Study of Deep Learning and Internet of Things for Precision Agriculture. Eng. Appl. Artif. Intell. 2023, 122, 106034. [Google Scholar]
  160. Radoglou-Grammatikis, P.; Sarigiannidis, P.; Lagkas, T.; Moscholios, I. A Compilation of UAV Applications for Precision Agriculture. Comput. Netw. 2020, 172, 107148. [Google Scholar]
  161. Ampatzidis, Y.; Partel, V.; Costa, L. Agroview: Cloud-Based Application to Process, Analyze and Visualize UAV-Collected Data for Precision Agriculture Applications Utilizing Artificial Intelligence. Comput. Electron. Agric. 2020, 174, 105457. [Google Scholar]
  162. Anderegg, J.; Tschurr, F.; Kirchgessner, N.; Treier, S.; Schmucki, M.; Streit, B.; Walter, A. On-Farm Evaluation of UAV-Based Aerial Imagery for Season-Long Weed Monitoring under Contrasting Management and Pedoclimatic Conditions in Wheat. Comput. Electron. Agric. 2023, 204, 107558. [Google Scholar]
  163. Zhang, Y.; Yan, Z.; Gao, J.; Shen, Y.; Zhou, H.; Tang, W.; Lu, Y.; Yang, Y. UAV Imaging Hyperspectral for Barnyard Identification and Spatial Distribution in Paddy Fields. Expert Syst. Appl. 2024, 255, 124771. [Google Scholar]
  164. Velusamy, P.; Rajendran, S.; Mahendran, R.K.; Naseer, S.; Shafiq, M.; Choi, J.G. Unmanned Aerial Vehicles (UAV) in Precision Agriculture: Applications and Challenges. Energies 2021, 15, 217. [Google Scholar] [CrossRef]
  165. Punithavathi, R.; Rani, A.D.C.; Sughashini, K.; Kurangi, C.; Nirmala, M.; Ahmed, H.F.T.; Balamurugan, S. Computer Vision and Deep Learning-Enabled Weed Detection Model for Precision Agriculture. Comput. Syst. Sci. Eng. 2023, 44, 2759–2774. [Google Scholar]
  166. Khan, S.; Tufail, M.; Khan, M.T.; Khan, Z.A.; Anwar, S. Deep Learning-Based Identification System of Weeds and Crops in Strawberry and Pea Fields for a Precision Agriculture Sprayer. Precis. Agric. 2021, 22, 1711–1727. [Google Scholar]
  167. Ma, C.; Chi, G.; Ju, X.; Zhang, J.; Yan, C. YOLO-CWD: A Novel Model for Crop and Weed Detection Based on Improved YOLOv8. Crop Prot. 2025, 192, 107169. [Google Scholar]
  168. Sharma, V.; Tripathi, A.K.; Mittal, H.; Parmar, A.; Soni, A.; Amarwal, R. Weedgan: A Novel Generative Adversarial Network for Cotton Weed Identification. Vis. Comput. 2023, 39, 6503–6519. [Google Scholar]
  169. Li, Y.; Guo, Z.; Sun, Y.; Chen, X.; Cao, Y. Weed Detection Algorithms in Rice Fields Based on Improved YOLOv10n. Agriculture 2024, 14, 2066. [Google Scholar] [CrossRef]
  170. Deng, L.; Miao, Z.; Zhao, X.; Yang, S.; Gao, Y.; Zhai, C.; Zhao, C. HAD-YOLO: An Accurate and Effective Weed Detection Model Based on Improved YOLOV5 Network. Agronomy 2025, 15, 57. [Google Scholar]
  171. Tao, T.; Wei, X. STBNA-YOLOv5: An Improved YOLOv5 Network for Weed Detection in Rapeseed Field. Agriculture 2024, 15, 22. [Google Scholar] [CrossRef]
  172. Pei, H.; Sun, Y.; Huang, H.; Zhang, W.; Sheng, J.; Zhang, Z. Weed Detection in Maize Fields by UAV Images Based on Crop Row Preprocessing and Improved YOLOv4. Agriculture 2022, 12, 975. [Google Scholar] [CrossRef]
  173. Hu, C.; Thomasson, J.A.; Bagavathiannan, M.V. A Powerful Image Synthesis and Semi-Supervised Learning Pipeline for Site-Specific Weed Detection. Comput. Electron. Agric. 2021, 190, 106423. [Google Scholar]
  174. Pérez-Ortiz, M.; Peña, J.; Gutiérrez, P.A.; Torres-Sánchez, J.; Hervás-Martínez, C.; López-Granados, F. A Semi-Supervised System for Weed Mapping in Sunflower Crops Using Unmanned Aerial Vehicles and a Crop Row Detection Method. Appl. Soft Comput. 2015, 37, 533–544. [Google Scholar]
  175. Teimouri, N.; Jørgensen, R.N.; Green, O. Novel Assessment of Region-Based CNNs for Detecting Monocot/Dicot Weeds in Dense Field Environments. Agronomy 2022, 12, 1167. [Google Scholar]
  176. Liu, J.; Abbas, I.; Noor, R.S. Development of Deep Learning-Based Variable Rate Agrochemical Spraying System for Targeted Weeds Control in Strawberry Crop. Agronomy 2021, 11, 1480. [Google Scholar] [CrossRef]
  177. Lakhiar, I.A.; Yan, H.; Zhang, C.; Wang, G.; He, B.; Hao, B.; Han, Y.; Wang, B.; Bao, R.; Syed, T.N.; et al. A Review of Precision Irrigation Water-Saving Technology under Changing Climate for Enhancing Water Use Efficiency, Crop Yield, and Environmental Footprints. Agriculture 2024, 14, 1141. [Google Scholar] [CrossRef]
  178. Chauhdary, J.N.; Li, H.; Jiang, Y.; Pan, X.; Hussain, Z.; Javaid, M.; Rizwan, M. Advances in Sprinkler Irrigation: A Review in the Context of Precision Irrigation for Crop Production. Agronomy 2023, 14, 47. [Google Scholar] [CrossRef]
  179. Elbeltagi, A.; Srivastava, A.; Deng, J.; Li, Z.; Raza, A.; Khadke, L.; Yu, Z.; El-Rawy, M. Forecasting Vapor Pressure Deficit for Agricultural Water Management Using Machine Learning in Semi-Arid Environments. Agric. Water Manag. 2023, 283, 108302. [Google Scholar]
  180. Zhu, X.; Chikangaise, P.; Shi, W.; Chen, W.H.; Yuan, S. Review of Intelligent Sprinkler Irrigation Technologies for Remote Autonomous System. Int. J. Agric. Biol. Eng. 2018, 11, 23–30. [Google Scholar]
  181. Kang, C.; Mu, X.; Seffrin, A.N.; Di Gioia, F.; He, L. A Recursive Segmentation Model for Bok Choy Growth Monitoring with Internet of Things (IoT) Technology in Controlled Environment Agriculture. Comput. Electron. Agric. 2025, 230, 109866. [Google Scholar]
  182. Chen, T.; Yin, H. Camera-Based Plant Growth Monitoring for Automated Plant Cultivation with Controlled Environment Agriculture. Smart Agric. Technol. 2024, 8, 100449. [Google Scholar]
  183. Li, C.; Adhikari, R.; Yao, Y.; Miller, A.G.; Kalbaugh, K.; Li, D.; Nemali, K. Measuring Plant Growth Characteristics Using Smartphone Based Image Analysis Technique in Controlled Environment Agriculture. Comput. Electron. Agric. 2020, 168, 105123. [Google Scholar]
  184. Zhang, T.; Zhou, J.; Liu, W.; Yue, R.; Yao, M.; Shi, J.; Hu, J. Seedling-YOLO: High-Efficiency Target Detection Algorithm for Field Broccoli Seedling Transplanting Quality Based on YOLOv7-Tiny. Agronomy 2024, 14, 931. [Google Scholar]
  185. Farooque, A.A.; Afzaal, H.; Benlamri, R.; Al-Naemi, S.; MacDonald, E.; Abbas, F.; MacLeod, K.; Ali, H. Red-Green-Blue to Normalized Difference Vegetation Index Translation: A Robust and Inexpensive Approach for Vegetation Monitoring Using Machine Vision and Generative Adversarial Networks. Precis. Agric. 2023, 24, 1097–1115. [Google Scholar]
  186. Fawakherji, M.; Suriani, V.; Nardi, D.; Bloisi, D.D. Shape and Style GAN-Based Multispectral Data Augmentation for Crop/Weed Segmentation in Precision Farming. Crop Prot. 2024, 184, 106848. [Google Scholar]
  187. Kong, J.; Ryu, Y.; Jeong, S.; Zhong, Z.; Choi, W.; Kim, J.; Lee, K.; Lim, J.; Jang, K.; Chun, J.; et al. Super Resolution of Historic Landsat Imagery Using a Dual Generative Adversarial Network (GAN) Model with CubeSat Constellation Imagery for Spatially Enhanced Long-Term Vegetation Monitoring. ISPRS J. Photogramm. Remote Sens. 2023, 200, 1–23. [Google Scholar]
  188. Li, X.; Li, X.; Zhang, M.; Dong, Q.; Zhang, G.; Wang, Z.; Wei, P. SugarcaneGAN: A Novel Dataset Generating Approach for Sugarcane Leaf Diseases Based on Lightweight Hybrid CNN-Transformer Network. Comput. Electron. Agric. 2024, 219, 108762. [Google Scholar]
  189. Mohamed, E.S.; Belal, A.; Abd-Elmabod, S.K.; El-Shirbeny, M.A.; Gad, A.; Zahran, M.B. Smart Farming for Improving Agricultural Management. Egypt. J. Remote Sens. Space Sci. 2021, 24, 971–981. [Google Scholar]
  190. Ganatra, N.; Patel, A. Deep Learning Methods and Applications for Precision Agriculture. In Machine Learning for Predictive Analysis; Springer: Singapore, 2020; pp. 515–527. [Google Scholar]
  191. Murindanyi, S.; Nakatumba-Nabende, J.; Sanya, R.; Nakibuule, R.; Katumba, A. Enhanced Infield Agriculture with Interpretable Machine Learning Approaches for Crop Classification. arXiv 2024, arXiv:2408.12426. [Google Scholar]
  192. Li, L.; Li, J.; Chen, D.; Pu, L.; Yao, H.; Huang, Y. VLLFL: A Vision-Language Model Based Lightweight Federated Learning Framework for Smart Agriculture. arXiv 2025, arXiv:2504.13365. [Google Scholar]
  193. Zhang, R.; Li, X. Edge Computing Driven Data Sensing Strategy in the Entire Crop Lifecycle for Smart Agriculture. Sensors 2021, 21, 7502. [Google Scholar] [CrossRef]
  194. Deka, S.A.; Phodapol, S.; Gimenez, A.M.; Fernandez-Ayala, V.N.; Wong, R.; Yu, P.; Tan, X.; Dimarogonas, D.V. Enhancing Precision Agriculture Through Human-in-the-Loop Planning and Control. In Proceedings of the 2024 IEEE 20th International Conference on Automation Science and Engineering (CASE), CASE 2024, Bari, Italy, 28 August–1 September 2024; IEEE: Piscataway, NJ, USA, 2024; pp. 78–83. [Google Scholar]
  195. Zhang, H.; Wang, L.; Tian, T.; Yin, J. A Review of Unmanned Aerial Vehicle Low-Altitude Remote Sensing (UAV-LARS) Use in Agricultural Monitoring in China. Remote Sens. 2021, 13, 1221. [Google Scholar]
  196. Sishodia, R.P.; Ray, R.L.; Singh, S.K. Applications of Remote Sensing in Precision Agriculture: A Review. Remote Sens. 2020, 12, 3136. [Google Scholar]
  197. Mammarella, M.; Comba, L.; Biglia, A.; Dabbene, F.; Gay, P. Cooperation of Unmanned Systems for Agricultural Applications: A Theoretical Framework. Biosyst. Eng. 2022, 223, 61–80. [Google Scholar]
  198. Delavarpour, N.; Koparan, C.; Nowatzki, J.; Bajwa, S.; Sun, X. A Technical Study on UAV Characteristics for Precision Agriculture Applications and Associated Practical Challenges. Remote Sens. 2021, 13, 1204. [Google Scholar]
  199. Matese, A.; Czarnecki, J.M.P.; Samiappan, S.; Moorhead, R. Are Unmanned Aerial Vehicle-Based Hyperspectral Imaging and Machine Learning Advancing Crop Science? Trends Plant Sci. 2024, 29, 196–209. [Google Scholar] [PubMed]
  200. Moriya, É.A.S.; Imai, N.N.; Tommaselli, A.M.G.; Berveglieri, A.; Santos, G.H.; Soares, M.A.; Marino, M.; Reis, T.T. Detection and Mapping of Trees Infected with Citrus Gummosis Using UAV Hyperspectral Data. Comput. Electron. Agric. 2021, 188, 106298. [Google Scholar]
  201. Zhang, J.; Zhang, D.; Liu, J.; Zhou, Y.; Cui, X.; Fan, X. DSCONV-GAN: A UAV-BASED Model for Verticillium Wilt Disease Detection in Chinese Cabbage in Complex Growing Environments. Plant Methods 2024, 20, 186. [Google Scholar]
  202. Hu, G.; Ye, R.; Wan, M.; Bao, W.; Zhang, Y.; Zeng, W. Detection of Tea Leaf Blight in Low-Resolution UAV Remote Sensing Images. IEEE Trans. Geosci. Remote Sens. 2024, 62, 5601218. [Google Scholar]
  203. Yeh, J.F.; Lin, K.M.; Yuan, L.C.; Hsu, J.M. Automatic Counting and Location Labeling of Rice Seedlings from Unmanned Aerial Vehicle Images. Electronics 2024, 13, 273. [Google Scholar] [CrossRef]
  204. Feng, Z.; Cai, J.; Wu, K.; Li, Y.; Yuan, X.; Duan, J.; He, L.; Feng, W. Enhancing the Accuracy of Monitoring Effective Tiller Counts of Wheat Using Multi-Source Data and Machine Learning Derived from Consumer Drones. Comput. Electron. Agric. 2025, 232, 110120. [Google Scholar]
  205. Yu, H.; Weng, L.; Wu, S.; He, J.; Yuan, Y.; Wang, J.; Xu, X.; Feng, X. Time-Series Field Phenotyping of Soybean Growth Analysis by Combining Multimodal Deep Learning and Dynamic modeling. Plant Phenomics 2024, 6, 0158. [Google Scholar]
  206. Zhao, J.; Li, H.; Chen, C.; Pang, Y.; Zhu, X. Detection of Water Content in Lettuce Canopies Based on Hyperspectral Imaging Technology under Outdoor Conditions. Agriculture 2022, 12, 1796. [Google Scholar] [CrossRef]
  207. Fareed, N.; Das, A.K.; Flores, J.P.; Mathew, J.J.; Mukaila, T.; Numata, I.; Janjua, U.U.R. UAS Quality Control and Crop Three-Dimensional Characterization Framework Using Multi-Temporal LiDAR Data. Remote Sens. 2024, 16, 699. [Google Scholar]
  208. Burchard-Levine, V.; Guerra, J.G.; Borra-Serrano, I.; Nieto, H.; Mesias-Ruiz, G.; Dorado, J.; de Castro, A.I.; Herrezuelo, M.; Mary, B.; Aguirre, E.P.; et al. Evaluating the Utility of Combining High Resolution Thermal, Multispectral and 3D Imagery from Unmanned Aerial Vehicles to Monitor Water Stress in Vineyards. Precis. Agric. 2024, 25, 2447–2476. [Google Scholar]
  209. Wang, H.; Chen, X.; Zhang, T.; Xu, Z.; Li, J. CCTNet: Coupled CNN and Transformer Network for Crop Segmentation of Remote Sensing Images. Remote Sens. 2022, 14, 1956. [Google Scholar]
  210. Prasad, A.; Mehta, N.; Horak, M.; Bae, W.D. A Two-Step Machine Learning Approach for Crop Disease Detection Using GAN and UAV Technology. Remote Sens. 2022, 14, 4765. [Google Scholar]
  211. Niu, B.; Feng, Q.; Chen, B.; Ou, C.; Liu, Y.; Yang, J. HSI-TransUNet: A Transformer Based Semantic Segmentation Model for Crop Mapping from UAV Hyperspectral Imagery. Comput. Electron. Agric. 2022, 201, 107297. [Google Scholar]
  212. Wu, H.; Zhou, H.; Wang, A.; Iwahori, Y. Precise Crop Classification of Hyperspectral Images Using Multi-Branch Feature Fusion and Dilation-Based MLP. Remote Sens. 2022, 14, 2713. [Google Scholar]
  213. Hu, X.; Wang, X.; Zhong, Y.; Zhang, L. S3ANet: Spectral-Spatial-Scale Attention Network for End-to-End Precise Crop Classification Based on UAV-Borne H2 Imagery. ISPRS J. Photogramm. Remote Sens. 2022, 183, 147–163. [Google Scholar]
  214. Reddy, K.K.; Daduvy, A.; Mohana, R.M.; Assiri, B.; Shuaib, M.; Alam, S.; Sheneamer, A. Enhancing Precision Agriculture and Land Cover Classification: A Self-Attention 3D Convolutional Neural Network Approach for Hyperspectral Image Analysis. IEEE Access 2024, 12, 125592–125608. [Google Scholar]
  215. Rai, N.; Zhang, Y.; Villamil, M.; Howatt, K.; Ostlie, M.; Sun, X. Agricultural Weed Identification in Images and Videos by Integrating Optimized Deep Learning Architecture on an Edge Computing Technology. Comput. Electron. Agric. 2024, 216, 108442. [Google Scholar]
  216. Zhang, K.; Yuan, D.; Yang, H.; Zhao, J.; Li, N. Synergy of Sentinel-1 and Sentinel-2 Imagery for Crop Classification Based on DC-CNN. Remote Sens. 2023, 15, 2727. [Google Scholar]
  217. Ma, X.; Li, L.; Wu, Y. Deep-Learning-Based Method for the Identification of Typical Crops Using Dual-Polarimetric Synthetic Aperture Radar and High-Resolution Optical Images. Remote Sens. 2025, 17, 148. [Google Scholar]
  218. Wang, R.; Zhao, J.; Yang, H.; Li, N. Inversion of Soil Moisture on Farmland Areas Based on SSA-CNN Using Multi-Source Remote Sensing Data. Remote Sens. 2023, 15, 2515. [Google Scholar]
  219. Fu, H.; Lu, J.; Li, J.; Zou, W.; Tang, X.; Ning, X.; Sun, Y. Winter Wheat Yield Prediction Using Satellite Remote Sensing Data and Deep Learning Models. Agronomy 2025, 15, 205. [Google Scholar] [CrossRef]
  220. Ong, P.; Chen, S.; Tsai, C.Y.; Wu, Y.J.; Shen, Y.T. A Non-Destructive Methodology for Determination of Cantaloupe Sugar Content using Machine Vision and Deep Learning. J. Sci. Food Agric. 2022, 102, 6586–6595. [Google Scholar] [PubMed]
  221. Islam, T.; Islam, R.; Uddin, P.; Ulhaq, A. Spectrally Segmented-Enhanced Neural network for Precise Land Cover Object Classification in Hyperspectral Imagery. Remote Sens. 2024, 16, 807. [Google Scholar]
  222. Dericquebourg, E.; Hafiane, A.; Canals, R. Generative-Model-Based Data Labeling for Deep Network Regression: Application to Seed Maturity Estimation from UAV Multispectral Images. Remote Sens. 2022, 14, 5238. [Google Scholar]
  223. Ma, Z.; Yang, S.; Li, J.; Qi, J. Research on Slam Localization Algorithm for Orchard Dynamic Vision Based on YOLOD-SLAM2. Agriculture 2024, 14, 1622. [Google Scholar] [CrossRef]
  224. Wang, J.; Gao, Z.; Zhang, Y.; Zhou, J.; Wu, J.; Li, P. Real-Time Detection and Location of Potted Flowers Based on a ZED Camera and a YOLO V4-Tiny Deep Learning Algorithm. Horticulturae 2021, 8, 21. [Google Scholar]
  225. Mwitta, C.; Rains, G.C.; Prostko, E. Evaluation of Inference Performance of Deep Learning Models for Real-Time Weed Detection in an Embedded Computer. Sensors 2024, 24, 514. [Google Scholar] [CrossRef]
  226. Khater, O.H.; Siddiqui, A.J.; Hossain, M.S. EcoWeedNet: A Lightweight and Automated Weed Detection Method for Sustainable Next-Generation Agricultural Consumer Electronics. arXiv 2025, arXiv:2502.00205. [Google Scholar]
  227. Zhang, T.; Hu, D.; Wu, C.; Liu, Y.; Yang, J.; Tang, K. Large-Scale Apple Orchard Mapping from Multi-Source Data Using the Semantic Segmentation Model with Image-to-Image Translation and Transfer Learning. Comput. Electron. Agric. 2023, 213, 108204. [Google Scholar]
  228. Galodha, A.; Vashisht, R.; Nidamanuri, R.R.; Ramiya, A.M. Convolutional Neural Network (CNN) for Crop-Classification of Drone Acquired Hyperspectral Imagery. In Proceedings of the IGARSS 2022—2022 IEEE International Geoscience and Remote Sensing Symposium, Kuala Lumpur, Malaysia, 17–22 July 2022; IEEE: Piscataway, NJ, USA, 2022; pp. 7741–7744. [Google Scholar]
  229. Luo, P.; Niu, Y.; Tang, D.; Huang, W.; Luo, X.; Mu, J. A Computer Vision Solution for Behavioral Recognition in Red Pandas. Sci. Rep. 2025, 15, 9201. [Google Scholar]
  230. Kuncheva, L. Animal ReIdentification Using Restricted Set Classification. Ecol. Inform. 2021, 62, 101225. [Google Scholar]
  231. Hsieh, Y.Z.; Lee, P.Y. Analysis of Oplegnathus Punctatus Body Parameters Using Underwater Stereo Vision. IEEE Trans. Emerg. Top. Comput. Intell. 2023, 8, 879–891. [Google Scholar]
  232. Chen, C.; Zhu, W.; Norton, T. Behaviour Recognition of Pigs and Cattle: Journey from Computer Vision to Deep Learning. Comput. Electron. Agric. 2021, 187, 106255. [Google Scholar]
  233. Huang, X.; Hu, Z.; Qiao, Y.; Sukkarieh, S. Deep Learning-Based Cow Tail Detection and Tracking for Precision Livestock Farming. IEEE/ASME Trans. Mechatron. 2022, 28, 1213–1221. [Google Scholar]
  234. Lee, J.H.; Choi, Y.H.; Lee, H.S.; Park, H.J.; Hong, J.S.; Lee, J.H.; Sa, S.J.; Kim, Y.M.; Kim, J.E.; Jeong, Y.D.; et al. Enhanced Swine Behavior Detection with YOLOs and a Mixed Efficient Layer Aggregation Network in Real Time. Animals 2024, 14, 3375. [Google Scholar] [CrossRef]
  235. Alameer, A.; Buijs, S.; O’Connell, N.; Dalton, L.; Larsen, M.; Pedersen, L.; Kyriazakis, I. Automated Detection and Quantification of Contact Behaviour in Pigs Using Deep Learning. Biosyst. Eng. 2022, 224, 118–130. [Google Scholar]
  236. Zheng, Z.; Qin, L. PrunedYOLO-Tracker: An Efficient Multi-Cows Basic Behavior Recognition and Tracking Technique. Comput. Electron. Agric. 2023, 213, 108172. [Google Scholar]
  237. Pan, Y.; Jin, H.; Gao, J.; Rauf, H.T. Identification of Buffalo Breeds Using Self-Activated-Based Improved Convolutional Neural Networks. Agriculture 2022, 12, 1386. [Google Scholar]
  238. Nguyen, A.H.; Holt, J.P.; Knauer, M.T.; Abner, V.A.; Lobaton, E.J.; Young, S.N. Towards Rapid Weight Assessment of Finishing Pigs Using a Handheld, Mobile RGB-D Camera. Biosyst. Eng. 2023, 226, 155–168. [Google Scholar]
  239. Li, L.; Shi, G.; Jiang, T. Fish Detection Method Based on Improved YOLOv5. Aquac. Int. 2023, 31, 2513–2530. [Google Scholar]
  240. Chang, C.C.; Ubina, N.A.; Cheng, S.C.; Lan, H.Y.; Chen, K.C.; Huang, C.C. A Two-Mode Underwater Smart Sensor Object for Precision Aquaculture Based on AIoT Technology. Sensors 2022, 22, 7603. [Google Scholar]
  241. Chieza, K.; Brown, D.; Connan, J.; Salie, D. Automated Fish Detection in Underwater Environments: Performance Analysis of YOLOv8 and YOLO-NAS. In Communications in Computer and Information Science, Proceedings of the 5th Southern African Conference for Artificial Intelligence Research, SACAIR 2024, Bloemfontein, South Africa, 2–6 December 2024; Gerber, A., Maritz, J., Pillay, A., Eds.; Springer: Cham, Switzerland, 2025; Volume 2326, pp. 334–351. [Google Scholar]
  242. Zhao, Z.; Liu, Y.; Sun, X.; Liu, J.; Yang, X.; Zhou, C. Composited FishNet: Fish Detection and Species Recognition from Low-Quality Underwater Videos. IEEE Trans. Image Process. 2021, 30, 4719–4734. [Google Scholar]
  243. Yu, X.; Wang, Y.; An, D.; Wei, Y. Identification Methodology of Special Behaviors for Fish School Based on Spatial Behavior Characteristics. Comput. Electron. Agric. 2021, 185, 106169. [Google Scholar]
  244. Cai, K.; Yang, Z.; Gao, T.; Liang, M.; Liu, P.; Zhou, S.; Pang, H.; Liu, Y. Efficient Recognition of Fish Feeding Behavior: A Novel Two-Stage Framework Pioneering Intelligent Aquaculture Strategies. Comput. Electron. Agric. 2024, 224, 109129. [Google Scholar]
  245. Zhao, T.; Zhang, G.; Zhong, P.; Shen, Z. DMDnet: A Decoupled Multi-Scale Discriminant Model for Cross-Domain Fish Detection. Biosyst. Eng. 2023, 234, 32–45. [Google Scholar]
  246. Sarkar, P.; De, S.; Gurung, S.; Dey, P. UICE-MIRNet Guided Image Enhancement for Underwater Object Detection. Sci. Rep. 2024, 14, 22448. [Google Scholar]
  247. Sanhueza, M.I.; Montes, C.S.; Sanhueza, I.; Montoya-Gallardo, N.; Escalona, F.; Luarte, D.; Escribano, R.; Torres, S.; Godoy, S.E.; Amigo, J.M.; et al. VIS-NIR Hyperspectral Imaging and Multivariate Analysis for Direct Characterization of Pelagic Fish Species. Spectrochim. Acta Part A-Mol. Biomol. Spectrosc. 2025, 328, 125451. [Google Scholar]
  248. Xu, X.; Li, W.; Duan, Q. Transfer Learning and SE-ResNet152 Networks-Based for Small-Scale Unbalanced Fish Species Identification. Comput. Electron. Agric. 2021, 180, 105878. [Google Scholar]
  249. Bohara, K.; Joshi, P.; Acharya, K.P.; Ramena, G. Emerging Technologies Revolutionising Disease Diagnosis and Monitoring in Aquatic Animal Health. Rev. Aquac. 2024, 16, 836–854. [Google Scholar]
  250. Zhou, C.; Wang, C.; Sun, D.; Hu, J.; Ye, H. An Automated Lightweight Approach for Detecting Dead Fish in a Recirculating Aquaculture System. Aquaculture 2025, 594, 741433. [Google Scholar]
  251. Li, Y.; Tan, H.; Deng, Y.; Zhou, D.; Zhu, M. Hypoxia Monitoring of Fish in Intensive Aquaculture Based on Underwater Multi-Target Tracking. Comput. Electron. Agric. 2025, 232, 110127. [Google Scholar]
  252. Hu, X.; Liu, Y.; Zhao, Z.; Liu, J.; Yang, X.; Sun, C.; Chen, S.; Li, B.; Zhou, C. Real-Time Detection of Uneaten Feed Pellets in Underwater Images for Aquaculture Using an Improved YOLO-V4 Network. Comput. Electron. Agric. 2021, 185, 106135. [Google Scholar]
  253. Schellewald, C.; Saad, A.; Stahl, A. Mouth Opening Frequency of Salmon from Underwater Video Exploiting Computer Vision. IFAC-PapersOnLine 2024, 58, 313–318. [Google Scholar]
  254. Yu, H.; Song, H.; Xu, L.; Li, D.; Chen, Y. SED-RCNN-BE: A SE-Dual Channel RCNN Network Optimized Binocular Estimation Model for Automatic Size Estimation of Free Swimming Fish in Aquaculture. Expert Syst. Appl. 2024, 255, 124519. [Google Scholar]
  255. Xun, Z.; Wang, X.; Xue, H.; Zhang, Q.; Yang, W.; Zhang, H.; Li, M.; Jia, S.; Qu, J.; Wang, X. Deep Machine Learning Identified Fish Flesh Using Multispectral Imaging. Curr. Res. Food Sci. 2024, 9, 100784. [Google Scholar]
  256. Yang, Y.; Li, D.; Zhao, S. A Novel Approach for Underwater Fish Segmentation in Complex Scenes Based on Multi-Levels Triangular Atrous Convolution. Aquac. Int. 2024, 32, 5215–5240. [Google Scholar]
  257. An, S.; Wang, L.; Wang, L. MINM: Marine Intelligent Netting Monitoring Using Multi-Scattering Model and Multi-Space Transformation. ISA Trans. 2024, 150, 278–297. [Google Scholar]
  258. Riego del Castillo, V.; Sánchez-González, L.; Campazas-Vega, A.; Strisciuglio, N. Vision-Based Module for Herding with a Sheepdog Robot. Sensors 2022, 22, 5321. [Google Scholar] [CrossRef] [PubMed]
  259. Martin, K.E.; Blewett, T.A.; Burnett, M.; Rubinger, K.; Standen, E.M.; Taylor, D.S.; Trueman, J.; Turko, A.J.; Weir, L.; West, C.M.; et al. The Importance of Familiarity, Relatedness, and Vision in Social Recognition in Wild and Laboratory Populations of a Selfing, Hermaphroditic Mangrove Fish. Behav. Ecol. Sociobiol. 2022, 76, 34. [Google Scholar]
  260. Desgarnier, L.; Mouillot, D.; Vigliola, L.; Chaumont, M.; Mannocci, L. Putting Eagle Rays on the Map by Coupling Aerial Video-Surveys and Deep Learning. Biol. Conserv. 2022, 267, 109494. [Google Scholar]
  261. Shreesha, S.; Pai, M.M.; Pai, R.M.; Verma, U. Pattern Detection and Prediction Using Deep Learning for Intelligent Decision Support to Identify Fish Behaviour in Aquaculture. Ecol. Inform. 2023, 78, 102287. [Google Scholar]
  262. Li, G.; Huang, Y.; Chen, Z.; Chesser, G.D., Jr.; Purswell, J.L.; Linhoss, J.; Zhao, Y. Practices and Applications of Convolutional Neural Network-Based Computer Vision Systems in Animal Farming: A review. Sensors 2021, 21, 1492. [Google Scholar] [CrossRef]
  263. Zhang, C.; Liu, C.; Zeng, S.; Yang, W.; Chen, Y. Hyperspectral Imaging Coupled with Deep Learning Model for Visualization and Detection of Early Bruises on Apples. J. Food Compos. Anal. 2024, 134, 106489. [Google Scholar]
  264. Cheng, J.; Sun, J.; Yao, K.; Xu, M.; Dai, C. Multi-Task Convolutional Neural Network for Simultaneous Monitoring of Lipid and Protein Oxidative Damage in Frozen-Thawed Pork Using Hyperspectral Imaging. Meat Sci. 2023, 201, 109196. [Google Scholar]
  265. Tan, H.; Ma, B.; Xu, Y.; Dang, F.; Yu, G.; Bian, H. An Innovative Variant Based on Generative Adversarial Network (GAN): Regression GAN Combined with Hyperspectral Imaging to Predict Pesticide Residue Content of Hami Melon. Spectrochim. Acta Part A-Mol. Biomol. Spectrosc. 2025, 325, 125086. [Google Scholar]
  266. Stasenko, N.; Shukhratov, I.; Savinov, M.; Shadrin, D.; Somov, A. Deep Learning in Precision Agriculture: Artificially Generated VNIR Images Segmentation for Early Postharvest Decay Prediction in Apples. Entropy 2023, 25, 987. [Google Scholar] [CrossRef]
  267. Li, X.; Xue, S.; Li, Z.; Fang, X.; Zhu, T.; Ni, C. A Candy Defect Detection Method Based on StyleGAN2 and Improved YOLOv7 for Imbalanced Data. Foods 2024, 13, 3343. [Google Scholar] [CrossRef]
  268. Xu, B.; Cui, X.; Ji, W.; Yuan, H.; Wang, J. Apple Grading Method Design and Implementation for Automatic Grader Based on Improved YOLOv5. Agriculture 2023, 13, 124. [Google Scholar] [CrossRef]
  269. Li, A.; Wang, C.; Ji, T.; Wang, Q.; Zhang, T. D3-YOLOv10: Improved YOLOv10-Based Lightweight Tomato Detection Algorithm Under Facility Scenario. Agriculture 2024, 14, 2268. [Google Scholar]
  270. Zhu, L.; Spachos, P. Support Vector Machine and YOLO for a Mobile Food Grading System. Internet Things 2021, 13, 100359. [Google Scholar]
  271. Han, F.; Huang, X.; Aheto, J.H.; Zhang, D.; Feng, F. Detection of Beef Adulterated with Pork Using a Low-Cost Electronic Nose Based on Colorimetric Sensors. Foods 2020, 9, 193. [Google Scholar] [CrossRef]
  272. Huang, X.; Li, Z.; Xiaobo, Z.; Shi, J.; Tahir, H.E.; Xu, Y.; Zhai, X.; Hu, X. Geographical Origin Discrimination of Edible Bird’s Nests Using Smart Handheld Device Based on Colorimetric Sensor Array. J. Food Meas. Charact. 2020, 14, 514–526. [Google Scholar]
  273. Arslan, M.; Zareef, M.; Tahir, H.E.; Guo, Z.; Rakha, A.; Xuetao, H.; Shi, J.; Zhihua, L.; Xiaobo, Z.; Khan, M.R. Discrimination of Rice Varieties Using Smartphone-Based Colorimetric Sensor Arrays and Gas Chromatography Techniques. Food Chem. 2022, 368, 130783. [Google Scholar] [PubMed]
  274. Guo, Z.; Zou, Y.; Sun, C.; Jayan, H.; Jiang, S.; El-Seedi, H.R.; Zou, X. Nondestructive Determination of Edible Quality and Watercore Degree of Apples by Portable Vis/NIR Transmittance System Combined with CARS-CNN. J. Food Meas. Charact. 2024, 18, 4058–4073. [Google Scholar]
  275. Srivastava, S.; Sadistap, S. Data Fusion for Fruit Quality Authentication: Combining Non-Destructive Sensing Techniques to Predict Quality Parameters of Citrus Cultivars. J. Food Meas. Charact. 2022, 16, 344–365. [Google Scholar]
  276. Cai, X.; Zhu, Y.; Liu, S.; Yu, Z.; Xu, Y. FastSegFormer: A Knowledge Distillation-Based Method for Real-Time Semantic Segmentation of Surface Defects in Navel Oranges. Comput. Electron. Agric. 2024, 217, 108604. [Google Scholar]
  277. Qiu, D.; Guo, T.; Yu, S.; Liu, W.; Li, L.; Sun, Z.; Peng, H.; Hu, D. Classification of Apple Color and Deformity Using Machine Vision Combined with CNN. Agriculture 2024, 14, 978. [Google Scholar] [CrossRef]
  278. Patel, K.K.; Kar, A.; Khan, M.A. Monochrome Computer Vision for Detecting Common External Defects of Mango. J. Food Sci. Technol. 2021, 58, 4550–4557. [Google Scholar]
  279. Yang, Z.; Zhai, X.; Zou, X.; Shi, J.; Huang, X.; Li, Z.; Gong, Y.; Holmes, M.; Povey, M.; Xiao, J. Bilayer pH-Sensitive Colorimetric Films with Light-Blocking Ability and Electrochemical Writing Property: Application in Monitoring Crucian Spoilage in Smart Packaging. Food Chem. 2021, 336, 127634. [Google Scholar]
  280. Xu, Y.; Zhang, W.; Shi, J.; Li, Z.; Huang, X.; Zou, X.; Tan, W.; Zhang, X.; Hu, X.; Wang, X.; et al. Impedimetric Aptasensor Based on Highly Porous Gold for Sensitive Detection of Acetamiprid in Fruits and Vegetables. Food Chem. 2020, 322, 126762. [Google Scholar]
  281. Wu, Y.; Li, P.; Xie, T.; Yang, R.; Zhu, R.; Liu, Y.; Zhang, S.; Weng, S. Enhanced Quasi-Meshing Hotspot Effect Integrated Embedded Attention Residual Network for Culture-Free SERS Accurate Determination of Fusarium Spores. Biosens. Bioelectron. 2025, 271, 117053. [Google Scholar]
  282. de Moraes, I.A.; Junior, S.B.; Barbin, D.F. Interpretation and Explanation of Computer Vision Classification of Carambola (Averrhoa Carambola L.) according to Maturity Stage. Food Res. Int. 2024, 192, 114836. [Google Scholar]
  283. Wang, F.; Lv, C.; Dong, L.; Li, X.; Guo, P.; Zhao, B. Development of Effective Model for Non-Destructive Detection of Defective Kiwifruit Based on Graded Lines. Front. Plant Sci. 2023, 14, 1170221. [Google Scholar]
  284. Sarkar, T.; Choudhury, T.; Bansal, N.; Arunachalaeshwaran, V.; Khayrullin, M.; Shariati, M.A.; Lorenzo, J.M. Artificial Intelligence Aided Adulteration Detection and Quantification for Red Chilli Powder. Food Anal. Methods 2023, 16, 721–748. [Google Scholar]
  285. Nguyen, N.M.T.; Liou, N.S. Detecting Surface Defects of Achacha Fruit (Garcinia Humilis) with Hyperspectral Images. Horticulturae 2023, 9, 869. [Google Scholar] [CrossRef]
  286. Wu, H.; Xie, R.; Hao, Y.; Pang, J.; Gao, H.; Qu, F.; Tian, M.; Guo, C.; Mao, B.; Chai, F. Portable Smartphone-Integrated AuAg Nanoclusters Electrospun Membranes for Multivariate Fluorescent Sensing of Hg2+, Cu2+ and l-histidine in Water and Food Samples. Food Chem. 2023, 418, 135961. [Google Scholar]
  287. Sharma, S.; Sirisomboon, P.; K.C, S.; Terdwongworakul, A.; Phetpan, K.; Kshetri, T.B.; Sangwanangkul, P. Near-Infrared Hyperspectral Imaging Combined with Machine Learning for Physicochemical-Based Quality Evaluation of Durian Pulp. Postharvest Biol. Technol. 2023, 200, 112334. [Google Scholar]
  288. Ji, W.; Zhai, K.; Xu, B.; Wu, J. Green Apple Detection Method Based on Multidimensional Feature Extraction Network Model and Transformer Module. J. Food Prot. 2025, 88, 100397. [Google Scholar]
  289. Ji, W.; Wang, J.; Xu, B.; Zhang, T. Apple Grading Based on Multi-Dimensional View Processing and Deep Learning. Foods 2023, 12, 2117. [Google Scholar] [CrossRef] [PubMed]
  290. Nabil, P.; Mohamed, K.; Atia, A. Automating Fruit Quality Inspection Through Transfer Learning and GANs. In Proceedings of the 2024 International Mobile, Intelligent, and Ubiquitous Computing Conference (MIUCC), Cairo, Egypt, 13–14 November 2024; IEEE: Piscataway, NJ, USA, 2024; pp. 401–406. [Google Scholar]
  291. Hao, J.; Zhao, Y.; Peng, Q. A Specular Highlight Removal Algorithm for Quality Inspection of Fresh Fruits. Remote Sens. 2022, 14, 3215. [Google Scholar]
  292. Fakhrou, A.; Kunhoth, J.; Al Maadeed, S. Smartphone-Based Food Recognition System Using Multiple Deep CNN Models. Multimed. Tools Appl. 2021, 80, 33011–33032. [Google Scholar]
  293. Zhao, Y.; Qin, H.; Xu, L.; Yu, H.; Chen, Y. A Review of Deep Learning-Based Stereo Vision Techniques for Phenotype Feature and Behavioral Analysis of Fish in Aquaculture. Artif. Intell. Rev. 2025, 58, 7. [Google Scholar]
  294. Zha, J. Artificial Intelligence in Agriculture. In Journal of Physics: Conference Series; IOP Publishing: Bristol, UK, 2020; Volume 1693, p. 012058. [Google Scholar]
  295. Salman, Z.; Muhammad, A.; Piran, M.J.; Han, D. Crop-Saving with AI: Latest Trends in Deep Learning Techniques for Plant Pathology. Front. Plant Sci. 2023, 14, 1224709. [Google Scholar]
  296. Sharma, R. Artificial Intelligence in Agriculture: A Review. In Proceedings of the 2021 5th International Conference on Intelligent Computing and Control Systems (ICICCS), Madurai, India, 6–8 May 2021; IEEE: Piscataway, NJ, USA, 2021; pp. 937–942. [Google Scholar]
  297. Coulibaly, S.; Kamsu-Foguem, B.; Kamissoko, D.; Traore, D. Deep Learning for Precision Agriculture: A Bibliometric Analysis. Intell. Syst. Appl. 2022, 16, 200102. [Google Scholar]
  298. Subeesh, A.; Mehta, C. Automation and Digitization of Agriculture Using Artificial Intelligence and Internet of Things. Artif. Intell. Agric. 2021, 5, 278–291. [Google Scholar]
  299. Tian, H.; Wang, T.; Liu, Y.; Qiao, X.; Li, Y. Computer Vision Technology in Agricultural Automation—A Review. Inf. Process. Agric. 2020, 7, 1–19. [Google Scholar]
  300. Khan, A.A.; Laghari, A.A.; Awan, S.A. Machine Learning in Computer Vision: A Review. EAI Endorsed Trans. Scalable Inf. Syst. 2021, 8, e4. [Google Scholar]
  301. Ding, W.; Abdel-Basset, M.; Alrashdi, I.; Hawash, H. Next Generation of Computer Vision for Plant Disease Monitoring in Precision Agriculture: A Contemporary Survey, Taxonomy, Experiments, and Future Direction. Inf. Sci. 2024, 665, 120338. [Google Scholar]
  302. Zhang, J.; Kang, N.; Qu, Q.; Zhou, L.; Zhang, H. Automatic Fruit Picking Technology: A Comprehensive Review of Research Advances. Artif. Intell. Rev. 2024, 57, 54. [Google Scholar]
  303. Kassim, M.R.M. IoT Applications in Smart Agriculture: Issues and Challenges. In Proceedings of the 2020 IEEE Conference on Open Systems (ICOS), Kota Kinabalu, Malaysia, 17–19 November 2020; IEEE: Piscataway, NJ, USA, 2020; pp. 19–24. [Google Scholar]
  304. Wang, T.; Chen, B.; Zhang, Z.; Li, H.; Zhang, M. Applications of Machine Vision in Agricultural Robot Navigation: A Review. Comput. Electron. Agric. 2022, 198, 107085. [Google Scholar]
  305. Akhter, R.; Sofi, S.A. Precision Agriculture Using IoT Data Analytics and Machine Learning. J. King Saud Univ.-Comput. Inf. Sci. 2022, 34, 5602–5618. [Google Scholar]
  306. Memon, M.S.; Chen, S.; Shen, B.; Liang, R.; Tang, Z.; Wang, S.; Zhou, W.; Memon, N. Automatic Visual Recognition, Detection and Classification of Weeds in Cotton Fields Based on Machine Vision. Crop Prot. 2025, 187, 106966. [Google Scholar]
  307. Tzachor, A.; Devare, M.; King, B.; Avin, S.; Ó hÉigeartaigh, S. Responsible Artificial Intelligence in Agriculture Requires Systemic Understanding of Risks and Externalities. Nat. Mach. Intell. 2022, 4, 104–109. [Google Scholar]
  308. Karunathilake, E.; Le, A.T.; Heo, S.; Chung, Y.S.; Mansoor, S. The Path to Smart Farming: Innovations and Opportunities in Precision Agriculture. Agriculture 2023, 13, 1593. [Google Scholar] [CrossRef]
  309. Zhang, Z.; Lu, Y.; Zhao, Y.; Pan, Q.; Jin, K.; Xu, G.; Hu, Y. TS-YOLO: An All-Day and Lightweight Tea Canopy Shoots Detection Model. Agronomy 2023, 13, 1411. [Google Scholar]
  310. Lowder, S.K.; Sánchez, M.V.; Bertini, R. Which Farms Feed the World and Has Farmland Become More Concentrated? World Dev. 2021, 142, 105455. [Google Scholar]
Figure 1. Overview of computer vision and generative AI methods in smart agriculture applications.
Figure 2. PRISMA flow diagram.
Figure 3. Conceptual framework of GenAI-driven CV for smart agriculture applications.
Figure 4. Original and synthetic diseased tomato leaf images generated by C-GAN [109].
Figure 5. Illustration of the significant domain gap in fish imagery across different environments [239].
Figure 6. Key challenges of computer vision in smart agriculture.
Table 1. Comparison of techniques for crop health monitoring and pest detection.
Study | Technique | Methods | Advantages | Limitations
[92] | Classical CV | Colour segmentation, thresholding, morphological operations | Simple, computationally inexpensive, intuitive feature extraction | Sensitive to illumination, requires manual features, lacks scalability
[94,101,104] | Deep Learning | ResNet, VGGNet, Inception, YOLO variants, hybrid CNN models, 3D-CNN, CNN-LSTM | High accuracy, automatic feature extraction, suitable for complex patterns | Requires extensive datasets, performance degrades with unseen conditions
[109,113] | Generative Adversarial Networks | DCGAN, CycleGAN, spatiotemporal GANs | Addresses data scarcity and class imbalance, enhances dataset diversity, facilitates cross-domain adaptation | Training instability, potential mode collapse, semantic consistency challenges
[115,116] | Multimodal Integration | Fusion of hyperspectral, thermal, and RGB imagery; conditional GANs for modality translation | Improved resilience to sensor variability, enriched feature representation | Complex data alignment, sensor integration challenges
[118,120] | Explainable and Edge AI | SHAP, LIME, edge AI accelerators, quantised GAN architectures | Interpretability, real-time deployment, reduced computational overhead | Limited semantic consistency, stability concerns
[121,123] | Foundation Models and VLMs | Few-shot learning, GPT-Vision, dialogue-based advisory systems | Improved generalisation, accessibility, enhanced human interaction | Data privacy, resource-intensive pretraining
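To make the classical CV row of Table 1 concrete, the sketch below illustrates the kind of colour-thresholding-plus-morphology pipeline those studies describe. It is an illustration only, not code from any cited work: the OpenCV library is assumed, and the HSV bounds, the input file name `leaf.jpg`, and the lesion-area severity proxy are placeholders that would need tuning per crop and camera.

```python
# Illustrative classical CV pipeline: HSV colour thresholding + morphological clean-up
# to isolate lesion-like (yellow/brown) regions on a leaf photograph.
import cv2
import numpy as np

def segment_lesions(bgr_image: np.ndarray) -> np.ndarray:
    """Return a binary mask of lesion-like regions in a BGR leaf image."""
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    # Hue/saturation/value bounds are placeholders; real systems calibrate them per dataset.
    lower, upper = np.array([10, 60, 60]), np.array([35, 255, 255])
    mask = cv2.inRange(hsv, lower, upper)
    # Opening removes speckle noise; closing fills small holes inside lesions.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)
    return mask

if __name__ == "__main__":
    img = cv2.imread("leaf.jpg")            # hypothetical input image
    lesion_mask = segment_lesions(img)
    severity = lesion_mask.mean() / 255.0   # crude lesion-area ratio as a severity proxy
    print(f"Approximate lesion coverage: {severity:.1%}")
```

As Table 1 notes, such hand-tuned thresholds are cheap to run but remain sensitive to illumination and background changes, which is precisely what motivates the deep learning and generative approaches in the subsequent rows.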
Table 2. Comparison of approaches for fruit and vegetable maturity detection and harvesting automation.
Study | Methodology | Core Technology | Typical Applications
[126,127] | Classical CV | Colour thresholding, texture analysis, visual saliency, morphology | Citrus ripeness, coconut maturity assessment
[128,129,132,133,134,135] | Deep Learning | CNNs, YOLO variants | Fruit detection (apple, strawberry, tomato), harvester feed prediction
[139,140,141] | Multimodal Sensing | Hyperspectral, thermal, and RGB data fusion; hyperspectral imaging | Grape/melon biochemical maturity (chlorophyll, sugar); robust sensing
[136] | Transformer-Based Models | VFMs, transformers, attention | Overlapping fruit resolution, occlusion handling in dense clusters
[143,144,145] | Generative AI | Diffusion models, GANs | Annotated image generation, synthetic data augmentation, occluded berry reconstruction, domain adaptation
[146,148,149,150,152] | Robotic Harvesting | 3D vision, LiDAR, RL, O2RNet, soft grippers, finger vision | Autonomous harvesting (apple, cucumber), selective picking, orchard automation
[123] | VLMs | Cross-modal VLMs, conversational agents (e.g., AgroGPT) | Image-based agronomic advice via NLP, field decision support
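The YOLO-based rows of Table 2 all follow the same inference pattern: a detector fine-tuned on fruit classes is run on an orchard image and its boxes are read out per maturity class. The minimal sketch below assumes the open-source `ultralytics` package; the weights file `strawberry_maturity.pt`, the class names, and the confidence threshold are hypothetical and stand in for any of the fine-tuned detectors cited in the table.

```python
# Illustrative YOLO-style maturity detection inference (not the exact models of the cited studies).
from ultralytics import YOLO

model = YOLO("strawberry_maturity.pt")                    # hypothetical fine-tuned detector
results = model.predict(source="orchard_frame.jpg", conf=0.4)

for box in results[0].boxes:
    cls_name = results[0].names[int(box.cls)]             # e.g. "ripe" or "unripe" (assumed classes)
    x1, y1, x2, y2 = box.xyxy[0].tolist()
    print(f"{cls_name} ({float(box.conf):.2f}) at [{x1:.0f}, {y1:.0f}, {x2:.0f}, {y2:.0f}]")
```

In a harvesting pipeline, the resulting boxes would typically be passed to a localisation or grasp-planning module of the kind summarised in the robotic harvesting row.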
Table 3. Representative studies of traditional CV and GAN-based methods in precision agriculture and field management.
Study | Methodology | Task | Approach | Outcome
[166] | CNN | Weed and crop classification | Precision spraying | High accuracy in crop-specific weed identification
[165] | CNN | Weed detection | Real-time field analysis | Effective under diverse field conditions
[169] | YOLOv10n | Weed detection in rice | Object detection | Optimised for real-time performance
[161] | UAV + AI pipeline | UAV data processing | Visualisation and analysis | Real-time analysis of UAV imagery
[175] | Region-based CNN | Weed type classification | Dense environment detection | Monocot/dicot separation in complex scenes
[168] | GANs + CNN | Weed image synthesis | Data augmentation | Enhanced training dataset diversity
[185] | GAN | Vegetation monitoring | RGB-to-NDVI translation | Low-cost monitoring alternative
[173] | GANs + CNN | Weed detection | Semi-supervised learning | Improved site-specific model accuracy
[188] | GANs + CNN | Disease dataset expansion | Image synthesis | Enabled training with limited real data
[187] | Satellite + CubeSat and Dual-GAN | Vegetation mapping | Super-resolution | High-resolution imagery for long-term analysis
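The RGB-to-NDVI translation entry in Table 3 [185] is easier to interpret alongside the index it reproduces: NDVI = (NIR − Red)/(NIR + Red), computed per pixel from near-infrared and red reflectance. The short numpy sketch below gives the standard definition under the assumption of two co-registered reflectance bands; the GAN in [185] learns to approximate this map from RGB imagery alone, which is not reproduced here.

```python
# NDVI from co-registered near-infrared and red reflectance bands (numpy sketch).
import numpy as np

def ndvi(nir: np.ndarray, red: np.ndarray, eps: float = 1e-6) -> np.ndarray:
    """Return the Normalised Difference Vegetation Index in [-1, 1]."""
    nir = nir.astype(np.float32)
    red = red.astype(np.float32)
    return (nir - red) / (nir + red + eps)  # eps avoids division by zero on dark pixels

# Synthetic reflectance values: dense canopy reflects strongly in the near-infrared.
nir_band = np.array([[0.60, 0.55], [0.20, 0.10]])
red_band = np.array([[0.08, 0.10], [0.15, 0.09]])
print(ndvi(nir_band, red_band))  # values near 1 indicate healthy vegetation
```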
Table 4. Summary of key remote sensing and UAV-based image analysis applications in agriculture.
Study | Application | Methodology Used | Sensor Type | Outcomes
[209,212,216,217] | Crop Classification | CNN, transformer, Morphological-LSTM | Hyperspectral, RGB | High-accuracy crop type and variety mapping
[202,210] | Disease Detection | GANs, YOLOv8, attention-based models | RGB, multispectral | Early disease recognition and precision treatment
[205,219] | Yield Prediction | Multimodal DL, time-series modelling | Multispectral, satellite | Enhanced yield forecasting from environmental cues
[218] | Soil Moisture Monitoring | SSA-CNN, neural networks | Multispectral | Non-invasive estimation of soil properties
[203,204,222] | Phenotyping | Object detection, regression, counting algorithms | RGB, multispectral | High-throughput analysis of seedlings and tillers
[223,224] | Target Localisation | YOLOv4-Tiny, YOLOD-SLAM2 | RGB + depth | Accurate target identification and positioning
[213,214,221] | Land Cover Mapping | Self-attention 3D CNN, S3ANet | Hyperspectral | Fine-resolution land and vegetation type segmentation
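Several rows of Table 4 rely on spectral-spatial CNNs applied to hyperspectral patches. The following is a minimal, self-contained sketch of such a patch classifier; the patch size, band count, and layer widths are illustrative and do not correspond to the architectures in the cited works.

```python
# Minimal spectral-spatial 3D CNN sketch for hyperspectral patch classification (PyTorch).
import torch
import torch.nn as nn

class HyperspectralPatchNet(nn.Module):
    def __init__(self, num_classes: int = 8):
        super().__init__()
        # Input: (batch, 1, bands, height, width); 3D kernels mix spectral and spatial context.
        self.features = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=(7, 3, 3), padding=(3, 1, 1)),
            nn.ReLU(inplace=True),
            nn.MaxPool3d((2, 1, 1)),                 # downsample along the spectral axis
            nn.Conv3d(16, 32, kernel_size=(5, 3, 3), padding=(2, 1, 1)),
            nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool3d(1),                 # global pooling -> one vector per patch
        )
        self.classifier = nn.Linear(32, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x).flatten(1))

# Example: a batch of 4 patches, 64 spectral bands, 9x9 pixels each.
patches = torch.randn(4, 1, 64, 9, 9)
logits = HyperspectralPatchNet(num_classes=8)(patches)
print(logits.shape)  # torch.Size([4, 8])
```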
Table 5. Summary of representative studies on CV applications in livestock and aquaculture monitoring.
Study | Task Type | Method | Key Contribution
[229] | Behaviour recognition | Pose estimation + deep learning | Demonstrated fine-grained motion analysis in wild animals
[235] | Social behaviour detection | 2D CNN + data augmentation | Non-invasive contact behaviour recognition
[234] | Activity detection | YOLO + efficient aggregation | Improved real-time behaviour recognition in commercial barns
[236] | Behaviour recognition/tracking | Pruned YOLO + Kalman filter | Lightweight model for multi-cow tracking and action classification
[238] | Weight estimation | RGB-D camera + depth regression | Portable real-time pig weight prediction system
[242] | Species recognition | FishNet + low-light enhancement | Robust fish classification from low-quality videos
[241] | Object detection | YOLOv8, YOLO-NAS | Model comparison for fish detection under murky conditions
[244] | Feeding behaviour recognition | Two-stage framework + temporal modelling | Automated recognition of appetite-driven behaviour
[246] | Image enhancement | UICE-MIRNet | Improved image clarity for better detection downstream
[254] | Size estimation | SED-RCNN-BE (binocular depth) | Accurate growth estimation using stereo vision
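The multi-cow tracking entry in Table 5 [236] pairs a pruned YOLO detector with a Kalman filter. The sketch below implements the underlying constant-velocity predict/update cycle for a single animal's image centroid in numpy; the noise covariances and example measurements are illustrative, and a full tracker would additionally associate detections to tracks.

```python
# Constant-velocity Kalman filter sketch for tracking one animal's image centroid (numpy).
# A detector (e.g., a YOLO model) supplies noisy (x, y) measurements each frame;
# the filter smooths them and predicts the position when a detection is missed.
import numpy as np

dt = 1.0  # one frame between updates
F = np.array([[1, 0, dt, 0],   # state transition: position advances by velocity * dt
              [0, 1, 0, dt],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], dtype=float)
H = np.array([[1, 0, 0, 0],    # we only observe position, not velocity
              [0, 1, 0, 0]], dtype=float)
Q = np.eye(4) * 0.01           # process noise (illustrative)
R = np.eye(2) * 4.0            # measurement noise in pixels^2 (illustrative)

x = np.zeros(4)                # state: [px, py, vx, vy]
P = np.eye(4) * 100.0          # large initial uncertainty

def step(z):
    """One predict/update cycle; z is the detected centroid or None if missed."""
    global x, P
    x = F @ x                                  # predict
    P = F @ P @ F.T + Q
    if z is not None:                          # update only when a detection exists
        y = np.asarray(z, dtype=float) - H @ x
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)         # Kalman gain
        x = x + K @ y
        P = (np.eye(4) - K @ H) @ P
    return x[:2]                               # smoothed/predicted centroid

for z in [(100, 50), (104, 52), None, (112, 57)]:  # None = missed detection
    print(step(z))
```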
Table 6. Representative applications of CV and GenAI in agricultural quality control and food safety.
Study | Application Area | Methodologies | Representative Examples
[263,264] | Deep Learning-based Quality Monitoring | Hyperspectral, RGB, and NIR imaging; CNN, multi-task learning | Apple bruise detection, pork freshness evaluation
[265,267] | Data Augmentation with Generative Models | GANs, time-series GANs, StyleGAN2 | Pesticide residue detection, candy defect classification
[155,268,270] | Real-Time Quality Assessment | Lightweight CNNs (YOLOv5, YOLOv10) | Food defect detection on mobile systems, fruit quality grading
[271,274] | Sensor Fusion and Electronic Sensing | Colourimetric arrays, electronic noses, Vis/NIR spectroscopy | Pork adulteration detection, apple watercore evaluation
[282,283] | Explainable AI Integration | Visual saliency maps, feature importance | Carambola fruit classification, kiwifruit defect inspection
[280,286] | Innovative Sensing Systems | Fluorescent sensors, electrochemical biosensors | Trace element detection, pesticide residue identification
[20,279] | Industrial Grading and Packaging | CNNs, smart packaging films | Fruit grading systems, spoilage detection
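The explainable AI row of Table 6 mentions visual saliency maps for quality inspection. A minimal Grad-CAM-style sketch is given below using PyTorch hooks on a generic pretrained ResNet-18; the backbone, weights, and random input are placeholders, and the cited studies may use different attribution methods.

```python
# Minimal Grad-CAM-style saliency sketch (PyTorch) for a produce-quality classifier.
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()

activations, gradients = {}, {}

def fwd_hook(module, inputs, output):
    activations["value"] = output            # feature maps of the last conv stage

def bwd_hook(module, grad_input, grad_output):
    gradients["value"] = grad_output[0]      # gradients flowing back into those maps

layer = model.layer4[-1]
layer.register_forward_hook(fwd_hook)
layer.register_full_backward_hook(bwd_hook)

image = torch.randn(1, 3, 224, 224)          # placeholder for a preprocessed produce image
logits = model(image)
logits[0, logits.argmax()].backward()        # backprop the top predicted class score

# Channel weights = global-average-pooled gradients; CAM = weighted sum of activations.
weights = gradients["value"].mean(dim=(2, 3), keepdim=True)
cam = F.relu((weights * activations["value"]).sum(dim=1, keepdim=True))
cam = F.interpolate(cam, size=image.shape[-2:], mode="bilinear", align_corners=False)
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)  # normalise to a [0, 1] heatmap
print(cam.shape)  # torch.Size([1, 1, 224, 224])
```

Overlaying such a heatmap on the inspected item indicates which regions drove the grading decision, which is the kind of evidence the cited inspection studies present to operators.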