Review

Research Status and Development Trends of Artificial Intelligence in Smart Agriculture

School of Electrical and Information Engineering, Jiangsu University, Zhenjiang 212013, China
* Author to whom correspondence should be addressed.
Agriculture 2025, 15(21), 2247; https://doi.org/10.3390/agriculture15212247
Submission received: 29 September 2025 / Revised: 21 October 2025 / Accepted: 27 October 2025 / Published: 28 October 2025
(This article belongs to the Special Issue Perception, Decision-Making, and Control of Agricultural Robots)

Abstract

Artificial Intelligence (AI) is a key technological enabler for the transition of agricultural production and management from experience-driven to data-driven, continuously advancing modern agriculture toward smart agriculture. This evolution ultimately aims to achieve a precise agricultural production model characterized by low resource consumption, high safety, high quality, high yield, and stable, sustainable development. Although machine learning, deep learning, computer vision, the Internet of Things, and other AI technologies have made significant progress in numerous agricultural applications, most studies focus on a single agricultural scenario or a specific AI algorithm, such as object detection, navigation, agricultural machinery maintenance, or food safety, resulting in relatively limited coverage. To comprehensively elucidate the applications of AI in agriculture and provide a valuable reference for practitioners and policymakers, this paper reviews research across the entire agricultural production process, including planting, management, and harvesting. The application scenarios covered include seed selection during the cultivation phase, pest and disease identification and intelligent management during the growth phase, and agricultural product grading during the harvest phase, as well as agricultural machinery and devices, with topics such as fault diagnosis and predictive maintenance of agricultural equipment, agricultural robots, and the agricultural Internet of Things. The paper first analyzes the fundamental principles and potential advantages of typical AI technologies, followed by a systematic and in-depth review of the latest progress in applying these core technologies to smart agriculture.
The challenges faced by existing technologies are also explored, such as the inherent limitations of AI models—including poor generalization capability, low interpretability, and insufficient real-time performance—as well as the complex agricultural operating environments that result in multi-source, heterogeneous, and low-quality, unevenly annotated data. Furthermore, future research directions are discussed, such as lightweight network models, transfer learning, embodied intelligent agricultural robots, multimodal perception technologies, and large language models for agriculture. The aim is to provide meaningful insights for both theoretical research and practical applications of AI technologies in agriculture.

1. Introduction

Faced with global population growth and food security challenges, coupled with the increasing demand for high-quality agricultural products, it is urgent to realize the precise, automated and intelligent management of agricultural production, namely smart agriculture [1]. However, the traditional manual agricultural production model relies solely on sparse experience and subjective judgment, making it impossible to achieve continuous and comprehensive coverage of macro and micro-level perception and decision-making throughout the entire agricultural production process. Specifically:
  • It fails to enable real-time operational responses and long-term predictions, such as addressing soil nutrient variations, early diagnosis of crop diseases, and yield forecasting.
  • Extensive agricultural management leads to resource waste and environmental pollution, including excessive use of chemical fertilizers and pesticides, soil degradation, and inefficient flood irrigation.
  • The aging trend among agricultural practitioners results in low operational efficiency, high costs, and unstable production quality, placing immense pressure on high-intensity, high-precision repetitive tasks like planting, management, and harvesting.
  • Farmers’ personal safety cannot be adequately ensured, for example, during manual pesticide spraying in large-scale plant protection environments or in complex indoor temperature, humidity, and gas conditions in livestock and poultry farming.
  • Reliance on subjective product grading hinders standardized quality assurance and prevents effective traceability of agricultural product quality.
In essence, deficiencies in perception, decision-making, physical capability, and safety demonstrate that manual operations are no longer sufficient to meet the requirements of smart agriculture. With the rapid development of information technologies such as Artificial Intelligence (AI), Internet of Things, robotics, and big data, smart agriculture can leverage the Internet of Things for large-scale environmental perception and data collection, use AI to build predictive control models and make optimal decisions, and employ robotics to replace high-intensity human labor. Ultimately, this will enable efficient, low-cost, and safe agricultural production across the entire process [2]. Among these advanced technologies, data-driven AI has become the core of smart agriculture.
AI aims to analyze datasets, identify different patterns, and make predictions about targets, enabling machines to perform complex tasks such as learning, reasoning, perception, and decision-making. It encompasses various fields, including machine learning [3], deep learning [4], and computer vision [5], and is widely applied in areas such as industrial automation, autonomous driving, smart healthcare, and social security. In agricultural production, unlike traditional human experience-based analysis and decision-making or simple mechanical replacement of labor, AI technologies are reshaping agricultural production methods in terms of data, algorithms, and equipment. These technologies cover almost all aspects of agricultural production, including planting, cultivation, management, and harvesting, with applications in agricultural production management, soil and crop monitoring, pest and disease detection, yield prediction and crop planning, food quality, climate-smart agriculture, supply chain optimization, autonomous farming equipment, mechanical equipment fault analysis, etc. However, the application of AI in agriculture faces significant challenges due to open and unstructured complex agricultural environments, scarce and unevenly distributed high-quality annotated data, multi-source heterogeneous data fusion, and issues such as inadequate generalization ability, poor interpretability, and insufficient real-time performance of AI models.
In recent years, substantial research has emerged on the application of AI technologies in agriculture. Our earlier published study analyzed relevant publications in the Web of Science and Engineering Village databases over the past decade (from 1 January 2014 to 1 April 2025), further confirming a steadily rising trend in the development of AI in agriculture [6]. Statistical reports also indicate that the global market value of AI in agriculture is expected to reach $16.92 billion by 2034, with a compound annual growth rate of 23.32% over the forecast period from 2024 to 2034 [7]. Typical applications include:
In the area of detection and identification, most studies leverage acquired crop images and integrate machine learning algorithms such as Support Vector Machines (SVM) [8] and Random Forests (RF) [9], as well as deep learning algorithms like You Only Look Once (YOLO) [10], ResNet [11], Visual Geometry Group (VGG) [12], and Transformer [13] to achieve accurate pest and disease identification and agricultural product quality inspection. These approaches are particularly suitable for detecting early and subtle symptoms, minor defects in products (such as small bruises, early mold, and slight color unevenness), and internal quality attributes (e.g., sugar content and hollowness).
In terms of crop growth and environment prediction, time-series data on crop growth environments and management history, along with spatial distribution data of crop growth collected by Unmanned Aerial Vehicles (UAV) and field cameras, etc., are typically utilized to predict crop growth trends and accurately forecast growth environments in combination with neural network models such as Long Short-Term Memory (LSTM) [14] and Gated Recurrent Unit (GRU) [15].
In the field of agricultural robotics, core AI technologies such as computer vision [16], navigation path planning [17], and operation control [18] are primarily involved. These technologies are widely applied in tasks including planting, weeding, spraying, fertilization, and picking, achieving refined operations far beyond human capabilities. This enhances operational safety and mitigates cost pressures and inefficiencies caused by labor shortages.
To enable predictive maintenance of agricultural machinery and equipment, thereby improving utilization rates and lifespan, comprehensive data reflecting operational status—such as vibration, temperature, pressure, infrared images, and sound—are integrated. Techniques like Auto-encoders [19] are employed for anomaly detection, while SVM [20], RF [21], and Convolutional Neural Networks (CNN) [22] are used for fault classification. Additionally, time-series analysis and regression models based on LSTM networks [23] are applied for fault prediction. The application of these technologies has laid a solid foundation for the development of smart agriculture.
In order to gain a clearer understanding of how AI technologies facilitate the development of smart agriculture, this paper focuses on various application domains, including pest and disease identification, product quality detection, growth and environmental prediction, agricultural robotics, fault diagnosis, agricultural Internet of Things (IoT), etc. Furthermore, a systematic and in-depth investigation is conducted for each category, covering aspects such as planting, management, and harvesting, with a review of related methods and application cases. Additionally, this paper analyzes the current challenges in applying AI to smart agriculture and outlines future research directions. Unlike existing reviews that primarily focus on simply listing current research methods, this paper adopts an application-driven approach to dissect recent studies in the context of agricultural production processes, highlighting their strengths and limitations.

2. Representative AI Fundamentals

In this section, several typical AI algorithms are outlined to better understand their application in smart agriculture.

2.1. Machine Learning, Deep Learning, and Reinforcement Learning

Machine Learning (ML) is a mechanism enabling machines to learn automatically without explicit programming [24]. Its core principle involves algorithms discovering patterns and rules from large volumes of known data, achieving self-optimization and improvement, and utilizing these patterns and rules to predict or classify unknown data. Machine learning algorithms can be categorized into various types based on different classification criteria. Figure 1 illustrates the classification results by learning paradigm and presents several representative models.
The evolution of machine learning can be divided into two phases: shallow learning and deep learning. As a branch of machine learning, deep learning is fundamentally a system where multiple classifiers work collaboratively. Its foundation lies in the combination of linear regression and activation functions, sharing the core principles with traditional statistical linear regression [25]. Deep neural networks automatically learn hierarchical features of data through multiple nonlinear hidden layers. Their classifiers autonomously generate hypotheses without manual construction, enabling efficient learning of nonlinear relationships. Current deep learning models primarily include Deep Neural Networks (DNN), CNN, Recurrent Neural Networks (RNN), Generative Adversarial Networks (GAN), and Transformer architectures.
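The stacking of linear maps and nonlinear activations described above can be made concrete with a minimal NumPy forward pass. The weights here are random and purely illustrative, a sketch of the mechanism rather than a trained model:

```python
import numpy as np

def relu(z):
    return np.maximum(0.0, z)

def mlp_forward(x, weights, biases):
    """Forward pass of a feed-forward network: each hidden layer is a
    linear map followed by a nonlinear activation; stacking such layers
    lets the network learn hierarchical, nonlinear features."""
    a = x
    for W, b in zip(weights[:-1], biases[:-1]):
        a = relu(a @ W + b)                  # nonlinear hidden layer
    return a @ weights[-1] + biases[-1]      # linear output layer

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))                  # 4 samples, 8 input features
weights = [rng.normal(size=(8, 16)), rng.normal(size=(16, 3))]
biases = [np.zeros(16), np.zeros(3)]
y = mlp_forward(x, weights, biases)
print(y.shape)  # (4, 3): one 3-dimensional output per sample
```

Deep models such as CNNs and Transformers elaborate on this same core of composed linear transforms and nonlinearities.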
Reinforcement Learning (RL) is an important branch of ML that investigates how agents autonomously learn behavioral strategies to maximize long-term rewards or achieve specific goals through interaction with an environment [26]. It primarily consists of seven elements: agent, environment, state, action, reward, policy, and objective. Unlike supervised learning, which relies on labeled data for training, reinforcement learning does not provide demonstrations of correct actions. Instead, it evaluates the quality of actions and provides feedback signals to guide the agent in progressively optimizing its decision-making strategy. This mechanism allows for more flexible reward function design and requires less prior knowledge, making reinforcement learning particularly well-suited for highly complex, sequential decision-making tasks.
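The interaction loop of agent, environment, state, action, reward, and policy can be sketched with minimal tabular Q-learning. The one-dimensional toy "field" below is an illustrative assumption, not an agricultural benchmark:

```python
import numpy as np

# Toy environment: 5 cells in a row; reaching the rightmost cell pays reward 1.
n_states, n_actions = 5, 2                  # actions: 0 = left, 1 = right
Q = np.zeros((n_states, n_actions))         # state-action value table
alpha, gamma, eps = 0.5, 0.9, 0.1           # learning rate, discount, exploration
rng = np.random.default_rng(1)

def step(s, a):
    s2 = max(0, s - 1) if a == 0 else min(n_states - 1, s + 1)
    r = 1.0 if s2 == n_states - 1 else 0.0  # reward signal, no demonstrations
    return s2, r, s2 == n_states - 1

for _ in range(500):                        # episodes
    s, done = 0, False
    while not done:
        # epsilon-greedy policy: mostly exploit, occasionally explore
        a = int(rng.integers(n_actions)) if rng.random() < eps else int(np.argmax(Q[s]))
        s2, r, done = step(s, a)
        # temporal-difference update toward the long-term discounted reward
        Q[s, a] += alpha * (r + gamma * np.max(Q[s2]) - Q[s, a])
        s = s2

greedy = [int(np.argmax(Q[s])) for s in range(n_states - 1)]
print(greedy)  # learned policy: move right from every non-goal state
```

Note that the agent never sees a correct action, only the scalar reward, which is exactly the feedback mechanism the paragraph above describes.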

2.2. Computer Vision

As a pivotal domain within AI, computer vision aims to teach computers how to understand images and videos. This involves not only extracting meaningful information from visual data but also interpreting and judging it to achieve intelligent recognition and interaction across diverse scenarios [27]. Computer vision primarily trains machines using cameras, massive datasets, and algorithmic models, enabling them to execute visual tasks at speeds far exceeding human capabilities. The workflow diagram for computer vision is shown in Figure 2. Currently, computer vision is mainly applied in image classification, object detection, image segmentation, and pose estimation.

3. Applications of AI in Crop Detection

AI technologies are revolutionizing crop detection by integrating intelligence and high precision through the deep fusion of multi-source data, encompassing spectral, vibrational, and chemical sensing. As shown in Figure 3 [28,29,30] and Table 1, this section highlights breakthrough applications of AI across four key domains: seed variety identification, pest and disease detection, food safety monitoring (including pesticide residues, heavy metals, and mycotoxins), and product quality upgrading (such as non-destructive meat quality evaluation and agricultural product grading). It systematically reviews cutting-edge advances in the synergistic fusion of vibration signal processing, Hyperspectral Imaging (HSI), nano-enhanced sensing, and neural networks, thereby establishing robust technical paradigms for next-generation intelligent agricultural detection systems.

3.1. Seed Variety Identification-Optimal Selection

As a significant area of agricultural scientific innovation, seed variety identification and detection play a critical role in modern variety management, quality control, and crop yield enhancement, thereby highlighting their indispensability for agricultural development. Concurrently, AI is facilitating the transition of seed detection into intelligence.
Recent advances in non-destructive seed identification have leveraged hybrid machine learning methods integrating spectral imaging and feature optimization techniques. Fu et al. [31] introduced a method combining Stacked Sparse Autoencoder, Cuckoo Search, and SVM (SSAE-CS-SVM) to achieve rapid screening of corn (Zea mays L.) seeds. The process includes preprocessing near-infrared spectra via Savitzky–Golay (SG) smoothing and Standard Normal Variate (SNV) correction, followed by feature extraction using SSAE and classification with a CS-optimized SVM model. Similarly, Sun et al. [32] developed an Artificial Fish Swarm Algorithm-SVM (AFSA-SVM) model for rice (Oryza sativa L.) seed identification, integrating spectral-image multimodal features with the Bootstrapping Soft Shrinkage (BOSS) variable method. This approach demonstrated superior performance over conventional techniques, offering a viable tool for agricultural quality control.
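Of the preprocessing steps mentioned above, SNV correction is simple enough to sketch in a few lines of NumPy. The synthetic spectra here are placeholders, not data from the cited studies:

```python
import numpy as np

def snv(spectra):
    """Standard Normal Variate: centre and scale each spectrum
    individually, removing scatter-induced baseline and slope effects
    before feature extraction and classification."""
    mean = spectra.mean(axis=1, keepdims=True)
    std = spectra.std(axis=1, keepdims=True)
    return (spectra - mean) / std

rng = np.random.default_rng(0)
raw = rng.normal(loc=5.0, scale=2.0, size=(10, 200))  # 10 spectra, 200 bands
corrected = snv(raw)
print(corrected.mean(axis=1).round(6))  # each row now has mean ~0 and std ~1
```

SG smoothing would typically be applied alongside this step (e.g., via `scipy.signal.savgol_filter`) to suppress high-frequency noise before SNV.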
Further extending these methodologies, Zhang et al. [33] proposed a framework for detecting selenium-enriched millet (Setaria italica (L.) P. Beauv.), which combines SG smoothing with dual feature selection via Competitive Adaptive Reweighted Sampling and Successive Projections Algorithm (CARS-SPA) to build an SVM model. The model achieved higher accuracy compared to single-feature selection methods. In the context of fruit seeds, Xu et al. [34] developed a hyperspectral-based approach that incorporates joint denoising via Ensemble Empirical Mode Decomposition (EEMD) and Discrete Wavelet Transform (DWT), along with CARS-SPA feature selection and SVM modeling, to discriminate grape (Vitis vinifera L.) varieties effectively. Meanwhile, Sun et al. [35] established a non-destructive testing system for watermelon (Citrullus lanatus (Thunb.) Matsum. & Nakai) seed viability using HSI merged with Principal Component Analysis (PCA) feature extraction and an Artificial Bee Colony (ABC)-optimized SVM classifier. These studies collectively highlight the trend of combining signal preprocessing, feature selection, and machine learning optimization to enhance accuracy and efficiency in seed identification.
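The PCA feature-extraction step that typically precedes such SVM classifiers reduces hundreds of spectral bands to a handful of scores. A minimal sketch via NumPy's SVD, with a random matrix standing in for hyperspectral data:

```python
import numpy as np

def pca_features(X, k):
    """Project samples onto the top-k principal components (via SVD),
    a common dimensionality-reduction step before an SVM classifier."""
    Xc = X - X.mean(axis=0)                  # centre each band
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T                     # scores: (n_samples, k)

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 300))               # 50 seeds x 300 spectral bands
Z = pca_features(X, k=10)
print(Z.shape)  # (50, 10): compact features for a downstream classifier
```

By construction, successive score columns capture non-increasing variance, so the first few components carry most of the spectral information.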

3.2. Pest and Disease Detection-Health Monitoring

Plant diseases and insect pests lead to significant reductions in crop yield and quality, while also raising control expenses and impinging upon environmental and food safety. Consequently, the advancement of intelligent monitoring and precision control technologies has become essential in modern agricultural practices.
To accurately identify pests and diseases in field grain crops, several advanced approaches have been developed. Zuo et al. [29] introduced a multi-fine-grained feature aggregation network for crop disease classification, integrating multi-source features to improve the recognition of discriminative symptom characteristics. Zhu et al. [13] presented a hybrid recognition model combining transformers and CNN architectures. By incorporating a transformer encoder, the model strengthens global feature representation, and with CenterLoss optimization, it achieves compact feature distribution, leading to improved discriminative power. Cross-dataset validation confirmed its generalization capacity and robustness, particularly in distinguishing visually similar diseases. Zhao et al. [11] developed an attention-guided deep CNN for multi-disease diagnosis, incorporating residual learning and attention mechanisms to improve recognition under complex conditions. The model achieved high performance in terms of both accuracy and efficiency, demonstrating promise for real-time applications. Peng et al. [8] implemented a grape leaf disease recognition system utilizing CNN-extracted features fed into a SVM, attaining high classification accuracy with reduced computational overhead. Cap et al. [36] designed LeafGAN, an attention-based image synthesis model that generates realistic diseased leaf images from healthy reference images. This data augmentation strategy significantly enhanced the generalization performance of diagnostic models.
Beyond image-based approaches, researchers have also integrated spectroscopy with AI for non-destructive disease detection. Lu et al. [37] established an Extreme Learning Machine (ELM) framework for tea (Camellia sinensis (L.) Kuntze) disease recognition by applying masking techniques to isolate spectral signatures of disease spots, yielding higher accuracy compared to full-leaf spectral analysis. This methodology offers a new reference for hyperspectral disease classification. Yang et al. [38] introduced an early diagnostic technique for rice blast based on spore diffraction fingerprint textures and CNN, achieving 97.18% accuracy and a 90% increase in detection speed. This approach provides an efficient and accurate solution for early-stage disease prevention. However, existing methods heavily rely on high-quality images or spectral data and remain limited in their adaptability to complex and variable field conditions, such as light variations, shading, and different growth stages. Furthermore, the models focus on single-disease identification, and their diagnostic capabilities for concurrent multiple pests and diseases or early latent symptoms still need improvement.

3.3. Food Safety Monitoring-Risk Screening

3.3.1. Detection of Pesticide Residues in Agricultural Products

The rapid detection of pesticide residues is crucial for safeguarding food safety and public health. To overcome the limitations of conventional methods, recent research has increasingly integrated advanced sensing technologies (e.g., Surface-Enhanced Raman Spectroscopy (SERS)) with deep learning algorithms, significantly enhancing detection sensitivity and efficiency. This section reviews emerging applications of nanomaterial-enhanced SERS substrates and CNN for quantifying pesticide residues in food commodities such as tea, fruits, and vegetables.
The quantitative analysis of pesticide residues in tea has benefited from the combination of SERS-based sensors and nanotechnology, enabling highly sensitive detection of traces. For example, Li et al. [39] developed a sensing platform using gold-silver octahedral hollow cages (Au–Ag OHCs) as a SERS substrate, coupled with a CNN model for quantifying thiram and pymetrozine residues. The CNN achieved determination coefficients (R2) of 0.995 and 0.977 for thiram and pymetrozine, respectively, with detection limits as low as 0.286 ppb and 29 ppb. In a related study, Chen et al. [57] fabricated a SERS substrate based on gold@silver core–shell nanorods (Au@Ag NRs) for rapid detection of thiabendazole in fruit juices such as apple (Malus domestica (Suckow) Borkh.) and peach (Prunus persica (L.) Batsch).
To address the need for efficient screening of diverse pesticide residues across multiple agricultural products, innovative approaches such as molecularly imprinted polymers and fluorescence-based sensors have been introduced. Qiu et al. [58] synthesized silica-based Fluorescent Molecularly Imprinted Polymers (FMIPs) for selective detection of beta-cyfluthrin. Li et al. [59] constructed an upconversion fluorescence nanosensor capable of screening organophosphorus pesticides, including dimethoate.

3.3.2. Detection of Heavy Metals in Food

The high-sensitivity detection of heavy metal contamination in food is essential for preventing and controlling food safety risks. Researchers are increasingly combining advanced sensing technologies—such as microwave sensing and HSI—with deep feature extraction networks to develop efficient quantitative models for heavy metal analysis. This section reviews recent technological advances and practical applications in detecting heavy metals in foods, including edible oils and vegetables, with a particular emphasis on multi-source sensor data fusion, wavelet transforms, and residual neural networks.
The identification of heavy metal residues in edible oils has made progress through the combination of microwave sensing and SERS, enabling efficient and quantitative safety assessments. Deng et al. [40] introduced a method combining microwave detection with an attention-based deep residual network for quantitative analysis of lead (Pb) in edible oils. Chen et al. [60] synthesized silver nanoparticles and established a quantitative model for cadmium (Cd) using SERS coupled with chemometric techniques.
Monitoring heavy metal stress in vegetable crops increasingly uses HSI combined with deep learning-based feature extraction to support non-invasive diagnosis. Zhou et al. [41] proposed a method integrating fluorescence HSI, Wavelet Transform (WT), and a Stacked Denoising Autoencoder (SDAE) for detecting Pb in rapeseed leaves. Zhou et al. [42] also presented a fusion approach linking hyperspectral technology, WT, and a Stacked Convolutional Autoencoder (SCAE) for detecting composite contamination from Cd and Pb in lettuce (Lactuca sativa L.). However, these methods often rely on sophisticated laboratory equipment and complex sample preparation procedures, making rapid, on-site testing difficult to implement within the actual food supply chain.

3.3.3. Detection of Mycotoxins in Agricultural Products

The detection of mycotoxins in agricultural products is of critical practical importance, playing an essential role in enhancing product quality, protecting consumer health, and ensuring food safety. In recent years, AI-based detection technologies have been rapidly advancing, offering increasingly robust tools to support the reliable monitoring of mycotoxin contamination.
Novel AI-driven approaches are effectively overcoming the limitations of traditional methods, such as low sensitivity and poor interference resistance. For instance, Yang et al. [43] developed an SVM classification model using line-scan Raman HSI combined with SG smoothing and adaptive iteratively reweighted Penalized Least Squares (airPLS) preprocessing to accurately detect aflatoxin contamination in peanuts (Arachis hypogaea L.). This method offers a high-precision and rapid detection strategy for fungal toxins. Similarly, Lin et al. [44] introduced an aflatoxin detection method for maize (Zea mays L.) based on a highly sensitive nano-colorimetric sensor array. By integrating colorimetric features with the Linear Discriminant Analysis (LDA) or K-Nearest Neighbors (KNN) algorithm, the approach achieved 100% accuracy in identifying toxigenic Aspergillus flavus infections, providing an efficient tool for on-site screening.
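The KNN voting step in such colorimetric pipelines is conceptually simple: each sample is assigned the majority class among its nearest training samples in feature space. A sketch with synthetic, well-separated toy features (`knn_predict` is an illustrative helper, not the cited authors' code):

```python
import numpy as np

def knn_predict(X_train, y_train, X_query, k=3):
    """Classify each query by majority vote among its k nearest training
    points (Euclidean distance), as used to map colorimetric feature
    vectors to contamination classes."""
    preds = []
    for q in X_query:
        d = np.linalg.norm(X_train - q, axis=1)      # distances to all points
        nearest = y_train[np.argsort(d)[:k]]         # labels of k nearest
        preds.append(int(np.bincount(nearest).argmax()))
    return np.array(preds)

# Toy colorimetric features: two well-separated classes in 4 dimensions
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 0.3, size=(20, 4)),
               rng.normal(2.0, 0.3, size=(20, 4))])
y = np.array([0] * 20 + [1] * 20)
queries = np.array([[0.1, 0.0, 0.0, 0.1], [2.1, 1.9, 2.0, 2.0]])
print(knn_predict(X, y, queries))  # [0 1]
```

Real sensor-array data are far less cleanly separated, which is why the cited work pairs KNN with careful feature engineering.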
With the expanding adoption of deep learning, mycotoxin detection is increasingly shifting toward more intelligent and user-friendly applications. Wang et al. [45] proposed a high-accuracy model for aflatoxin B1 detection in corn by combining Near-Infrared Reflectance Spectroscopy (NIRS) with a CNN. Using a Markov Transition Field (MTF) to convert 1D spectral data into 2D images, the proposed two-dimensional MTF-CNN architecture significantly outperformed conventional techniques. Zhu et al. [46] developed a method for detecting zearalenone (ZEN) in corn by integrating surface-enhanced Raman spectroscopy (SERS) with a deep learning model. By adopting a composite enhancement strategy for SERS spectral optimization and constructing a 2D-CNN model, the method achieved markedly higher prediction accuracy. Zhao et al. [47] designed a non-destructive detection system for ZEN in wheat (Triticum aestivum L.) using a colorimetric sensor array and CNN. This method combined multispectral and colorimetric features within a CNN-based quantitative model, substantially outperforming traditional prediction models and offering a new practical tool for grain safety inspection. However, these data-driven approaches face a dual challenge: an over-reliance on scarce, high-quality labeled data, and the potential loss of features during image transformation.
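The Markov Transition Field encoding used by Wang et al. [45] to turn 1-D spectra into 2-D CNN inputs can be sketched as follows; the sine wave stands in for a real spectrum, and the bin count is an illustrative choice:

```python
import numpy as np

def markov_transition_field(x, n_bins=8):
    """Markov Transition Field: quantile-bin a 1-D signal, estimate the
    bin-to-bin transition matrix W from consecutive points, then set
    MTF[i, j] = W[bin(x_i), bin(x_j)], yielding a 2-D image that
    preserves temporal/spectral transition structure for a CNN."""
    edges = np.quantile(x, np.linspace(0, 1, n_bins + 1)[1:-1])
    q = np.digitize(x, edges)                          # bin index per point
    W = np.zeros((n_bins, n_bins))
    for a, b in zip(q[:-1], q[1:]):                    # count transitions
        W[a, b] += 1
    W /= np.maximum(W.sum(axis=1, keepdims=True), 1)   # row-normalise
    return W[q[:, None], q[None, :]]                   # (len(x), len(x)) image

x = np.sin(np.linspace(0, 6 * np.pi, 100))  # stand-in for a 1-D spectrum
img = markov_transition_field(x)
print(img.shape)  # (100, 100)
```

Every pixel of the resulting image is a transition probability in [0, 1], so the encoding is directly usable as a single-channel CNN input.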

3.4. Product Quality Upgrading-Value Enhancement

3.4.1. Non-Destructive Meat Quality Detection

Non-destructive meat quality detection represents a critical technological foundation for ensuring food safety and upholding quality standards. Driven by the deep fusion of spectral imaging and deep learning, researchers are developing intelligent inspection systems marked by high precision and cross-modal data fusion. This section highlights synergistic advances combining optical technologies—such as HSI and fluorescence HSI (F-HSI)—with CNNs and attention mechanisms, demonstrating their ground-breaking applications in pork freshness evaluation, meat adulteration identification, and quantitative component prediction.
Quantitative and non-destructive evaluation of key indicators in pork, including freshness and oxidative damage, has been achieved through the combination of spectral imaging and deep learning. Cheng et al. [48] introduced a fusion framework integrating HSI with a multi-task CNN, enabling simultaneous monitoring of lipid and protein oxidation in freeze-thawed pork. The model achieved predictive coefficients of determination (R2p) of 0.97 and 0.96 for these key metrics, respectively. Sun et al. [49] innovatively incorporated two-dimensional Correlation Spectroscopy (2D-COS) images into a dual-branch CNN architecture. By leveraging synchronous-asynchronous spectral feature fusion, the model achieved an R2p of 0.958 for predicting Total Volatile Basic Nitrogen (TVB-N), a key indicator of freshness. Cheng et al. [50] proposed a Hybrid Fusion Attention Network (HFA-Net) to integrate F-HSI and electronic nose data. Through attention-guided feature optimization, the model reached an R2 of 0.937 for TVB-N prediction.
To ensure meat authenticity, HSI coupled with transfer learning has been employed for accurate detection of foreign substances in chicken products. Yang et al. [51] utilized an HSI with a fine-tuned GoogleNet model to identify starch adulteration in minced chicken non-destructively. Similarly, Sun et al. [12] applied VGG16-SVM fusion for the detection of soy protein impurities, demonstrating robust performance in authenticity verification.
NIRS combined with CNNs provides an efficient tool for rapid geographical origin traceability and authenticity discrimination of herbal medicines. Huang et al. [52] implemented a 1D-CNN model with preprocessed NIRS data for recognizing the origin of Chuanxiong (Ligusticum chuanxiong Hort.) slices, achieving 92.2% accuracy—a 12.1% improvement over conventional techniques.
In summary, the cross-modal fusion of spectral technologies and deep learning has established itself as a prominent paradigm in meat quality detection. Methodologies such as multi-task learning, cross-sensor feature fusion, and transfer learning have substantially elevated detection accuracy and model generalizability, opening avenues for intelligent food inspection systems.

3.4.2. Quality Grading and Detection

Grading and quality detection of agricultural products are essential for enhancing their market value and meeting diverse consumer demands. Moreover, these practices significantly contribute to accelerating market circulation and fostering a sustainable agricultural ecosystem.
In recent years, the application of non-destructive testing technologies has expanded from evaluating external quality to precisely assessing internal physiological indicators. Li et al. [61] introduced an innovative method for diagnosing potassium nutrition levels in tomato (Solanum lycopersicum L.) leaves using electrical impedance spectroscopy. The established prediction model demonstrated greater robustness against environmental interference compared to optical techniques. Subsequently, in the domain of fruit quality evaluation, Xu et al. [53] developed a novel model for the non-destructive and rapid assessment of grape quality. This method employed Variational Mode Decomposition combined with Regression Coefficient (VMD-RC) to process hyperspectral data, and constructed Least Squares SVM (LSSVM) models under both full-spectrum and feature-spectrum wavelengths to predict Total Soluble Solids (TSS). The results indicated that the model achieved optimal TSS prediction accuracy. Qiu et al. [30] conducted a feasibility study using CNNs for apple classification based on color and deformity, assembling a machine vision dataset to facilitate model training. AlexNet, GoogLeNet, and VGG16 were deployed for three-class quality evaluation, categorizing apples through visual traits. Experiments revealed that VGG16 attained a top accuracy of 92.29%, establishing a practical deep learning framework for agricultural quality inspection. Ji et al. [10] proposed an apple grading method based on the YOLOv5s network, incorporating Omni-dimensional Dynamic Convolution (ODConv) to improve surface defect feature extraction, and designing a lightweight architecture combining Grouped Spatial Convolution (GSConv) and VoVGSCSP. The model effectively recognized defects and graded quality. Xu et al. [54] tackled the challenges of low accuracy and efficiency in automated apple grading by integrating Squeeze-and-Excitation (SE) modules into the YOLOv5 backbone to strengthen feature representation, and adopting Distance-IoU Loss (DIoU_Loss) to accelerate convergence. This approach attained high grading speed and accuracy in practical automated grading systems.
Shifting to other agricultural products, Guo et al. [55] constructed a non-destructive and efficient method for tea blend ratio analysis and sample matching. Based on a ResNet architecture integrated with the Convolutional Block Attention Module (CBAM), the model enhanced feature capture capability and achieved high precision in tea categorization and mixture analysis. Zhang et al. [56] proposed a YOLOv4 Tiny-based deep learning model for the inspection of vegetable quality. By leveraging the Cross Stage Partial Network (CSPNet) structure to design a feature enhancement network, the method effectively extracted discriminative features of various cherry tomato categories, proving to be a practical and efficient solution for multi-class detection. However, existing methods still suffer from insufficient recognition stability when confronted with target defects in agricultural products characterized by diverse appearances, varying lighting conditions, and complex backgrounds.
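The studies above all treat grading as supervised classification over visual features. As a minimal, hypothetical illustration of this idea (not the CNN pipelines of [30] or [10]), the sketch below grades an apple from its mean RGB color with a 1-nearest-neighbor rule; all feature values and grade labels are invented for demonstration:

```python
import math

# Hypothetical training set: mean (R, G, B) per apple image and a
# human-assigned grade. All numbers are illustrative, not from the cited work.
TRAIN = [
    ((190.0, 60.0, 55.0), "premium"),    # uniformly red
    ((150.0, 130.0, 70.0), "standard"),  # partially colored
    ((110.0, 140.0, 80.0), "reject"),    # green / defective
]

def grade_apple(mean_rgb):
    """Assign a grade via 1-nearest-neighbor in RGB feature space."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(TRAIN, key=lambda item: dist(item[0], mean_rgb))[1]
```

Deep models such as VGG16 or the YOLOv5 variants above replace this hand-picked color feature with learned representations, which is what drives the reported accuracy gains under varying lighting and backgrounds.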

4. AI in Crop Growth and Agricultural Condition Forecasting

Crop growth monitoring and agricultural condition forecasting systems can assist farmers in understanding crop development, enhancing production efficiency, and safeguarding agricultural output [62]. With advances in AI technology, intelligent learning methods have demonstrated superior performance in crop growth and agricultural condition prediction. As illustrated in Figure 4, this section provides a concise overview of recent research findings across three domains: growth environments, production management, and yield forecasting.

4.1. Growing Environment

The nutrient content within soil is crucial for crop selection and preparatory fertilizer work, directly impacting crop growth and yield [63]. Soil nutrient forecasting primarily involves predicting soil moisture levels and nutrient concentrations within the soil. Reda et al. [64] employed multiple machine learning algorithms to build estimation models for Soil Organic Carbon (SOC) and Total Nitrogen (TN). Khanal et al. [65] utilized aerial remote sensing imagery of farmland to generate datasets for predicting soil nutrient content. SVM, Cubist algorithms, RF, and neural networks were employed to forecast soil pH levels and the concentrations of substances such as magnesium (Mg) and potassium (K). Bhattacharyya et al. [66] proposed a two-stage ensemble prediction model integrating the Gaussian Probability Method (GPM), CNN, and SVM to forecast soil moisture. The proposed method was validated using soil moisture measurement datasets from the Godavari River Plateau and Krishna River Plateau. Experimental results demonstrated that the proposed method achieved a prediction accuracy of 89.53%, whereas traditional single-stage ensemble prediction models reached only 82.56%, indicating a significant performance improvement. Cui et al. [67] employed three machine learning algorithms—RF, Support Vector Regression (SVR), and Artificial Neural Network (ANN)—to construct different Soil Salinity Content (SSC) estimation models. Utilizing multi-temporal Sentinel-2 imagery and ground-truth data, they classified sunflower (Helianthus annuus L.) and maize crop types and analyzed temporal characteristics across different growth stages. By integrating vegetation indices and salinity indices, they estimated the SSC. Experimental results indicate that classifying crop types and accounting for different time series significantly enhances the correlation between SSC and spectral indices, thereby positively influencing SSC estimation.
However, existing analytical and predictive methods still suffer from inefficiencies, limited coverage, and challenges in deploying on edge devices. Future efforts must further optimize multi-source data fusion and lightweight algorithms to facilitate the transition of soil analysis from monitoring to agricultural decision-making support.
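Most of the soil studies above reduce, at their core, to regressing a soil property on remotely sensed predictors. As a toy baseline under assumed data (a single hypothetical spectral index calibrated against lab-measured SOC), ordinary least squares can be sketched in a few lines; the cited works use richer feature sets and nonlinear learners such as RF and SVR:

```python
def fit_linear(xs, ys):
    """Ordinary least squares for y = a*x + b (closed form)."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    a = sxy / sxx
    return a, my - a * mx

# Hypothetical calibration pairs: spectral index vs. lab-measured SOC (%)
index_vals = [0.20, 0.35, 0.50, 0.65]
soc_vals = [0.9, 1.4, 1.9, 2.4]
slope, intercept = fit_linear(index_vals, soc_vals)

def predict_soc(index):
    """Predict SOC (%) from the calibrated index."""
    return intercept + slope * index
```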
Climate factors also significantly influence crop growth, with suitable temperatures and rainfall directly impacting plant development and final yields [68]. Yue et al. [69] innovatively proposed an encoder–decoder model based on Convolutional LSTM (ConvLSTM), achieving high-precision annual forecasting of three key meteorological elements—sunshine duration, cumulative precipitation, and daily mean temperature—by integrating LSTM with ConvLSTM architectures. However, in practical applications, model reliability significantly decreases when real-time weather data inputs are insufficient, while emergency response mechanisms for extreme climate events remain underdeveloped. For extreme weather events, Devarashetti et al. [70] proposed a climate prediction model that integrates residual LSTM (R-LSTM) with Artificial Gorilla Troops Optimized Deep Learning Networks (AGTO-DLN), significantly enhancing the forecasting accuracy of extreme weather occurrences. However, this model fails to incorporate real-time climatic factors, severely restricting its applicability in dynamic agricultural environments.
Agricultural drought represents one of the recurrent natural disasters in crop cultivation, typically arising from meteorological or soil drought conditions. This leads to sustained soil moisture levels falling below crop requirements throughout the growing season, subsequently causing reduced yields, complete crop failure, or plant mortality [71]. Consequently, numerous scholars have devoted considerable effort to developing various drought prediction systems. Statistical methods, exemplified by the Autoregressive Integrated Moving Average (ARIMA) model, are commonly employed prediction techniques. However, such statistical approaches neglect the temporal information inherent in remote sensing data. Guo et al. [72] innovatively combined the ARIMA model with the LSTM model, leveraging the former’s linear expressive capability and the latter’s temporal information extraction capacity. By simultaneously utilizing both the linear trend and temporal information within the Vegetation Temperature Condition Index (VTCI) time series, they achieved high-precision drought forecasting for the Sichuan Basin. Khan et al. [73] designed a hybrid model that combines the wavelet transform and ARIMA-ANN (W-2A), using the Standardized Precipitation Index (SPI) and Standard Index of Annual Precipitation (SIAP) as drought indices to forecast future drought events. Gowri et al. [74] employed a Triple Exponential-LSTM (TEX-LSTM) model to forecast the Standard Vegetation Index (SVI) and Vegetation Health Index (VHI), thereby determining drought occurrence. Mokhtarzad et al. [75] utilized SVM, ANN, and Adaptive Neuro-Fuzzy Inference Systems (ANFIS) for drought prediction, employing SPI as the drought indicator. De Vos et al. 
[76] integrated Earth Observation (EO)-based monitoring systems with Categorical Boosting (CatBoost) regression models to achieve three-month advance, high-precision forecasting of Normalized Difference Vegetation Index (NDVI) negative anomalies (a key indicator of drought) across Africa. This advances drought management from a reactive response to a proactive early warning, although challenges persist in forecasting during the transition period from the rainy season.
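Several of the cited systems rest on standardized drought indices such as the SPI. Operational SPI fits a gamma distribution to the precipitation record; the sketch below uses the simpler z-score approximation to show the idea, with the conventional threshold of −1.0 for moderate drought:

```python
import math

def spi_zscore(precip_history, current):
    """Simplified SPI: standardize current precipitation against history.
    Operational SPI fits a gamma distribution; this z-score is a common
    quick approximation."""
    n = len(precip_history)
    mean = sum(precip_history) / n
    var = sum((p - mean) ** 2 for p in precip_history) / n
    return (current - mean) / math.sqrt(var)

def is_drought(history, current, threshold=-1.0):
    """SPI below -1.0 is conventionally read as moderate (or worse) drought."""
    return spi_zscore(history, current) < threshold
```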

4.2. Crop Growth and Management

The growth status of crops largely reflects their health condition. Predicting crop growth stages aids in arranging cultivation practices, determining mechanical harvesting timings, and forecasting crop yields. Some scholars approach crop growth stage prediction by first forecasting indicators related to crop growth, followed by decision-makers determining the specific growth stage [77,78,79]. However, this approach requires decision-makers to possess specialized knowledge to analyze and evaluate the predicted growth indicators effectively. Consequently, some researchers directly forecast crop growth characteristics for comprehensive assessment. Pal et al. [80] proposed a Digital Twin (DT) system for cotton (Gossypium hirsutum L.) crops based on UAV remote sensing data and machine learning, capable of accurately predicting key characteristic parameters, including Canopy Cover (CC), Canopy Height (CH), Canopy Volume (CV), and Excess Greenness (EXG). Lou et al. [81] employed a 1D-ResNet18 deep learning model to predict polyphenol and crude fiber content in fresh tea leaves. Tsai et al. [82] utilized ensemble machine learning to assess bean sprout (Glycine max (L.) Merr.) growth conditions and forecast growth requirements. Yue et al. [69] combined a ConvLSTM encoder–decoder with traditional neural networks, achieving high accuracy in predicting maize growth stages.
In agricultural production management and decision-making, the application of intelligent monitoring and prediction systems can effectively reduce labor intensity for workers, minimize losses of grain and other agricultural products during storage, and improve storage quality. This robustly supports the advancement and construction of smart agricultural warehouses and precision farming. Agricultural greenhouses serve as core facilities driving the intensive, intelligent, and sustainable development of modern agriculture [83], with numerous scholars consistently dedicated to researching intelligent management systems for such structures. Francik et al. [84] proposed an ANN-based system for predicting internal temperatures within heated plastic greenhouses. Lee et al. [85] employed a hybrid deep learning model combining ConvLSTM, CNN, and a regression Backpropagation Neural Network (BPNN) to construct an AI-powered greenhouse environmental control system. Scientific irrigation significantly enhances crop yield and quality; however, improper irrigation practices can lead to resource wastage and ecological issues [86,87,88]. Elbeltagi et al. [89] and Raza et al. [90] evaluated the predictive performance of multiple machine learning algorithms using evapotranspiration (ET) as an irrigation indicator. Tong et al. [91] combined Competitive Adaptive Reweighted Sampling (CARS) with the CatBoost machine learning method to forecast transpiration rates (Tr) in greenhouse tomatoes, providing an intelligent solution for precise irrigation control. Appropriate nitrogen fertilizer aids crop growth, yet excessive application leads to soil compaction, water pollution, and increased crop lodging susceptibility [92]. Zhang et al. [93] employed multiple machine learning algorithms with UAV RGB imagery, integrating spectral and textural features to estimate winter wheat Plant Nitrogen Content (PNC), and demonstrated the feasibility of UAV RGB systems for this purpose.
Zhang et al. [94] employed SVM in conjunction with UAV multispectral vegetation indices to predict winter wheat Leaf Chlorophyll Content (LCC), thereby enabling indirect nitrogen fertilizer management.
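For context on the ET-based irrigation indicators used in [89,90], the Hargreaves–Samani equation is a widely used low-data baseline for reference evapotranspiration, and machine learning models are typically benchmarked against such formulas. A direct implementation:

```python
import math

def hargreaves_et0(t_mean, t_max, t_min, ra):
    """Hargreaves-Samani reference evapotranspiration (mm/day).
    t_mean, t_max, t_min: daily temperatures in deg C.
    ra: extraterrestrial radiation expressed in mm/day of
    evaporation equivalent."""
    return 0.0023 * ra * (t_mean + 17.8) * math.sqrt(t_max - t_min)
```

For example, a warm summer day (25 °C mean, 32/18 °C max/min, ra = 15) yields a reference ET of roughly 5.5 mm/day.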
In summary, despite the immense potential of smart technologies across all aspects of agricultural management, current practical applications still commonly face challenges such as reliance on experience-based decision-making, inefficient resource utilization, and difficulties in real-time monitoring of key physiological parameters.

4.3. Yield Forecast

Reliable crop yield forecasting is of paramount importance to agricultural producers and remains a prominent research focus within the field of smart agriculture both domestically and internationally [95]. Yield prediction methods based on statistical models or crop growth models are commonly employed; however, both approaches exhibit limitations: the former requires high-quality data, while the latter suffers from significant uncertainty at large spatial scales. With the advancement of AI, an increasing number of scholars are applying machine learning techniques to crop yield prediction. Some scholars have evaluated the performance of various single machine learning algorithms in predicting crop yields [96,97,98]. Another group of scholars combined machine learning algorithms with different prediction models, validating the feasibility of hybrid prediction models [99,100,101,102]. Feng et al. [103] developed a hybrid dynamic prediction model by integrating APSIM-simulated biomass at multiple crop growth stages, extreme weather events preceding the prediction date, NDVI, and the Standard Precipitation and Evapotranspiration Index (SPEI). This approach utilized random forests and multiple linear regression models, achieving high prediction accuracy in experiments. Lee et al. [104] proposed a country-specific maize yield forecasting system based on Earth Observation (EO) and Extremely Randomized Trees (ERT), achieving high-precision maize yield predictions across multiple nations in southern Africa.
While machine learning demonstrates promising performance in crop yield prediction, traditional machine learning methods are constrained in their ability to extract features from input data, limiting their capacity to fully exploit information characteristics within remote sensing data. In contrast, deep learning overcomes this limitation by delivering superior fitting capabilities and enhanced flexibility. Consequently, numerous scholars have employed deep neural networks to extract spatial features from remote sensing data for crop yield forecasting [105,106,107]. LSTM-based models are also frequently used to predict wheat and rice yields [108,109,110]. Ren et al. [111] combined the World Food Studies (WOFOST) growth model with a GRU, employing distinct feature combinations across growth stages to forecast maize yields. Fieuzal et al. [112] proposed a dual-mode maize yield prediction system based on neural networks. By integrating multi-source satellite data (optical and microwave remote sensing), it enables both diagnostic predictions throughout the entire growth period and real-time dynamic forecasting during the growing season, achieving optimal prediction accuracy three months prior to harvest. However, cloud cover obstruction during remote sensing data acquisition frequently causes data gaps, compromising prediction accuracy. Future research should prioritize developing hybrid models to mitigate the impacts of such data deficiencies.
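Hybrid yield models of the kind cited above ultimately fuse heterogeneous predictors (simulated biomass, vegetation indices, drought indices, extreme-weather counts) into a single estimate. The toy sketch below performs such a fusion with hand-set, purely illustrative weights; in practice the mapping is learned, e.g., by random forests or neural networks:

```python
def hybrid_yield_estimate(biomass_sim, ndvi, spei, heat_days,
                          weights=(0.5, 3.0, 0.4, -0.05), intercept=1.0):
    """Toy linear fusion of multi-source predictors into a yield
    estimate (t/ha). Weights and intercept are illustrative
    placeholders, not fitted values from any cited study."""
    w_b, w_n, w_s, w_h = weights
    return (intercept + w_b * biomass_sim + w_n * ndvi
            + w_s * spei + w_h * heat_days)
```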
The outstanding performance of AI-based crop growth and agricultural condition prediction methods relies heavily on large-scale, high-quality data. However, the industry lacks unified standards for data collection, and the process is susceptible to noise, resulting in significant variability within datasets. Furthermore, most existing models are tailored to specific crops in particular regions, exhibiting limited generalizability and demanding substantial computational resources. Future research should focus on lightweight model design and enhanced generalizability. The methods employed in some of the literature cited in this section are summarized in Table 2.

5. AI in Fault Diagnosis of Agricultural Machinery

The condition monitoring and fault diagnosis of agricultural machinery play a pivotal role in ensuring the stability of agricultural production and have demonstrated significant potential with advancements in AI technology [6]. This section provides an overview of fault diagnosis applications based on AI methods in agricultural machinery. We only highlight some recent research and provide a simple classification. As shown in Figure 5, the AI-based fault diagnosis methods primarily include data acquisition, feature extraction, and fault classification.
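The feature-extraction stage of this pipeline commonly starts from simple time-domain statistics of the vibration signal, which then feed a classifier. A minimal sketch of such generic statistics (not the specific feature sets of the studies cited below):

```python
import math

def vibration_features(signal):
    """Common time-domain features used as classifier inputs:
    RMS, peak, crest factor, and kurtosis."""
    n = len(signal)
    mean = sum(signal) / n
    centered = [x - mean for x in signal]
    var = sum(c * c for c in centered) / n
    rms = math.sqrt(sum(x * x for x in signal) / n)
    peak = max(abs(x) for x in signal)
    kurtosis = (sum(c ** 4 for c in centered) / n) / (var ** 2)
    return {"rms": rms, "peak": peak,
            "crest_factor": peak / rms,
            "kurtosis": kurtosis}
```

High kurtosis and crest factor are classic indicators of impulsive bearing faults, which is why such features recur across the SVM- and RF-based diagnosers reviewed in this section.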

5.1. Harvesters

Harvesters, as essential equipment in agriculture, frequently experience various faults due to their complex structure and harsh working conditions [113]. Wang et al. [114] comprehensively summarized the current status and future development trends of data-driven methods, including signal processing and AI methods, in structural fault detection for combine harvesters. Traditional signal processing methods have proven valuable in diagnosing faults in agricultural machinery, enabling the extraction of fault characteristics [115]. For example, Li et al. [116] employed order tracking to locate gearbox bearing fault frequencies precisely. However, the performance and generalization capabilities of signal processing methods depend on expert experience and prior knowledge. In contrast, AI-based approaches such as machine learning and deep learning can adaptively learn the mapping relationships between fault features and signals, thereby enhancing diagnostic accuracy and robustness [117].
To address component-level faults, Xu et al. [20] presented an SVM-based model for detecting bolt loosening in combine harvesters. Through the analysis of critical bolt torque and the extraction of a high-dimensional feature matrix, the method enabled accurate identification of bolt failure states under various excitation conditions. Gomez-Gil et al. [118] explored the feasibility of utilizing a KNN classifier combined with the Harmonic Search (HS) algorithm to diagnose the operational status of rotating components in a harvester using vibration signals. Martínez-Martínez et al. [119] employed a single vibration signal in conjunction with an ANN model optimized by a Genetic Algorithm (GA) to identify the different operational states of multiple rotating components in agricultural machinery. Yang et al. [19] employed SDAE to remove noise from vibration signals and achieved fault diagnosis of combine harvester rolling bearings using SVM. Confronted with insufficient samples, She et al. [120] proposed a method for diagnosing faults in combine harvester gearboxes under variable operating conditions and limited data. It utilizes a meta-transfer learning-driven approach that combines Multi-step Loss Optimization (MSL) and Conditional Domain Adversarial Networks (CDAN), ensuring high diagnostic accuracy even with few-shot data. Ling et al. [121] proposed a threshing cylinder blockage diagnosis model based on a hybrid search sparrow algorithm (HSSA) and SVM. The model significantly improved diagnostic accuracy and reduced sample requirements by optimizing algorithm initialization and perturbation strategies. However, the performance of existing models heavily relies on relatively clean vibration signals, making it difficult to handle complex noise and material interference in the field. This results in insufficient generalization capabilities under real-world conditions.

5.2. Sensors

Sensors are the core of the Agricultural Internet of Things (Ag-IoT), mainly used to collect various types of data during agricultural production processes, making agricultural machinery operate better in a complex farmland environment [122]. To achieve smart agriculture, timely detection and diagnosis of sensor failures are particularly crucial [123]. Karimzadeh et al. [124] investigated the effectiveness of five machine learning models for fault diagnosis of Electrical Conductivity (EC) and potential of Hydrogen (pH) sensors in closed-loop hydroponic systems, and innovatively proposed a sensor-independent EC prediction framework that does not rely on target sensor readings. Kaur et al. [125] proposed an intelligent agricultural diagnosis system that combines reinforcement learning to optimize mobile sink path planning and enhances sensor fault detection accuracy through hyperparameter-tuned least square SVM (HT-LS-SVM), improving network reliability and agricultural monitoring efficiency. Similarly, Ling et al. [126] also optimized SVM and proposed an improved dung beetle optimization (IDBO) SVM model for Ag-IoT sensor fault diagnosis, which significantly enhanced diagnosis accuracy under conditions of limited sample data. With technological advancements, wireless sensor networks (WSNs) are increasingly being applied in agriculture monitoring [127]. Barriga et al. [128] proposed an expert system for detecting faults in leaf-turgor pressure sensors. The system leverages a machine learning model trained on real-world agricultural data, which shows potential to replace human experts, improving the reliability and efficiency of precision irrigation. Salhi et al. [129] explored the real-time monitoring and intelligent fault diagnosis of farmland environments and equipment using an Evolutionary Recurrent Self-Organizing Map (ERSOM) deep learning model based on WSNs. 
Most models exhibit insufficient adaptability to varying climatic and soil conditions and are susceptible to signal interference between sensors. Simultaneously, the scarcity of samples constrains the models’ generalization capabilities. These issues require further resolution in future research.
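A common denominator of the sensor-fault methods above is a residual test: compare each reading against a model's prediction and flag large deviations. A minimal, generic sketch of that test (the cited HT-LS-SVM and IDBO-SVM pipelines learn both the predictor and, effectively, the threshold from data):

```python
def detect_sensor_fault(readings, predictions, threshold):
    """Return indices of samples whose residual
    |reading - model prediction| exceeds the threshold.
    A generic residual test, not any specific cited pipeline."""
    return [i for i, (r, p) in enumerate(zip(readings, predictions))
            if abs(r - p) > threshold]

# Example: a pH sensor that a model expects to read ~7.0
faults = detect_sensor_fault([7.0, 7.1, 9.5, 7.0],
                             [7.0, 7.0, 7.0, 7.0],
                             threshold=1.0)
```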

5.3. Tractors

Tractors are another typical representative of agricultural power machinery [130]. Xu et al. [23] employed a feature fusion method based on an improved CNN-Bidirectional LSTM (CNN-BILSTM) for diagnosing faults in tractor transmission systems, which maintains high accuracy even in noisy environments. The same authors also integrated TimeGAN with a multi-head self-attention Transformer model, effectively addressing the issue of sample scarcity [131]. Ni et al. [132] utilized SVM to identify various faults in the PST electro-hydraulic system of tractors, analyzing flow and pressure signals to achieve high classification and fault identification performance. Hosseinpour-Zarnaq et al. [21] proposed a fault diagnosis method for tractor transmissions based on vibration analysis and the RF classifier. The method acquired vibration signals at different rotational speeds and extracted features using the Discrete Wavelet Transform (DWT). It obtained accurate classification results through feature optimization with Correlation-based Feature Selection (CFS). Xue et al. [133] investigated a method based on PCA and an improved Gaussian Naive Bayes Algorithm (GNBA) for diagnosing faults in the control system of a tractor hydrostatic power split continuously variable transmission (CVT). The proposed method achieved superior fault classification accuracy compared to other conventional algorithms through optimized time window selection and feature extraction. In addition, Wang et al. [22] developed a fault diagnosis method based on Infrared Thermography (IRT) and a CNN for diesel generators. Xiao et al. [134] proposed the Competitive Multiswarm Cooperative Particle Swarm Optimizer (COM-MCPSO) algorithm for fault diagnosis of diesel engines.
A major limitation in current tractor fault diagnosis research is that most methods conduct isolated analyses of individual subsystems, such as the transmission or engine, failing to adequately account for systemic coupling faults within the tractor as a complex integrated machine. Furthermore, the decision-making processes within these models often lack interpretability, hindering direct on-site repairs.

5.4. Pumps

Pumps find extensive application in fields such as agricultural irrigation and urban water supply because of their high flow capacity and reliability [135]. Thus, extensive research has been conducted to ensure the stable operation of pumps [136,137]. Wang et al. [138] proposed a self-attention mechanism combined with a Dense Neural Network (DNN) model to identify flexible winding faults in double-suction centrifugal pumps, which maintains superior classification accuracy even in noisy environments. Prasshanth et al. [139] converted vibration signals collected by sensors into images to enhance feature representation, integrated transfer learning techniques, and employed 15 pre-trained deep learning models to classify different faults. Moreover, the optimized AlexNet achieved the highest classification accuracy with the shortest processing time. However, in reality, it is often difficult to obtain sufficient data to train models. Therefore, Zou et al. [140] combined the Self-Calibrating Attention Mechanism (SCAM) with the Distributed Edge Prediction Strategy (DEPS) for pump fault diagnosis. The method transferred features from the source domain via SCAM and generated new samples conforming to a Gaussian distribution. It then employed DEPS to design an edge discriminator, preventing feature aliasing. Ultimately, it effectively enhances classification accuracy under few-shot conditions. Furthermore, Zou et al. [141] also integrated the diffusion mechanism and a self-adaptive learning rate with the Model-Agnostic Meta-Learning Strategy (MAMLS) to enhance the accuracy of pump anomaly detection.
Although these advanced methods enhance performance in scenarios with small samples and noisy environments, their model structures are often highly complex, demanding significant computational resources. Furthermore, when addressing unknown or compound failures in pump equipment, their interpretability and generalization capabilities may be limited.

5.5. Others

The study also identifies several other applications, which are grouped into a single category due to limited research on them. Bai et al. [6] summarized the application of deep learning-based technologies in the motor fault diagnosis of agricultural machinery, providing reliable solutions for sustainable agricultural practices. In two separate studies, Xie et al. employed Graph Convolutional Networks (GCNs) [142] and SVM [143] for fault diagnosis of rolling bearings in agricultural machines.
To address the faults of agricultural vehicles, Rajakumar et al. [144] acquired acoustic signals from vehicles using smartphones and employed an optimized deep CNN (DCNN) for feature extraction and classification, enabling vehicle fault identification. Gupta et al. [145] investigated lightweight ANN models optimized with a GA for vehicle fault detection. Additionally, Mystkowski et al. [146] presented a method based on vibration data and Multilayer Perceptron (MLP) neural networks for diagnosing faults in the rotary hay tedder. The method combines time and frequency domain metrics from vibration signals to train the MLP model, achieving high detection accuracy in both laboratory and real-world conditions. Luque et al. [147] also utilized vibration signals, combining them with the RF algorithm to diagnose faults in the gripping pliers of bottling plants. Such studies remain relatively few in number, lacking systematic comparisons and in-depth optimization tailored to specific agricultural applications. Moreover, the effectiveness and robustness of most methods have yet to be fully validated in real-world farmland environments.
Despite significant achievements in AI-based fault diagnosis methods, as summarized in Table 3, these methods necessitate abundant data to train the diagnostic model, which should cover various load and supply conditions with different levels of faults. In addition, the training process consumes excessive computational resources and time, and the model cannot be applied well when the operating conditions change. Future research can be focused on aspects such as operational efficiency, limited samples, and model generalization to enhance the effectiveness of AI methods further.

6. AI Agricultural Robots

AI agricultural robots represent a cutting-edge application of AI and robotics technology in modern agriculture. In the current era, AI agricultural robots are increasingly being adopted in agricultural production. The primary reasons for this trend lie in their significant potential to address environmental protection, enhance production efficiency, mitigate labor shortages, and improve food safety [5,148,149]. Currently, the technological evolution of agricultural robots centers on intelligent applications for field operations, particularly in planting, growth management, and harvesting. Numerous robots capable of replacing traditional manual labor have emerged across these three primary stages of agricultural production, as illustrated in Figure 6. This transformation has become a crucial pathway for addressing global food security challenges and driving the automation and upgrading of agriculture [150]. In this context, autonomous navigation technology has also become a core pillar of innovative agricultural robots. The autonomous navigation framework for agricultural robots primarily relies on three key modules: localization and perception [151,152,153,154], path planning [155,156], and path tracking control [157].

6.1. Autonomous Navigation Technologies of AI Agricultural Robots

6.1.1. Localization and Perception

Positioning and perception form the foundation of automated navigation for AI agricultural robots. Through high-precision positioning and perception technologies, they provide core data support for situational awareness in complex field environments.
The localization technology of AI agricultural robots primarily achieves relatively high-precision navigation through the integration of Global Navigation Satellite Systems (GNSS) and Inertial Navigation Systems (INS) [158]. However, both GNSS and INS have certain limitations. For example, GNSS may experience signal degradation or loss under adverse weather conditions or in complex environments. Zhou et al. [159] proposed a neural network-based Simultaneous Localization and Mapping (SLAM)/GNSS fusion localization algorithm to address the loss of control in agricultural robots caused by signal degradation or loss in complex agricultural environments. The algorithm achieved an overall average positioning error of 0.12 m in orchard field tests, demonstrating its capability to maintain centimeter-level accuracy even under fluctuating or denied GNSS conditions. Li et al. [160] proposed a navigation method that integrates Real-Time Kinematic GNSS (RTK-GNSS), INS, and Light Detection and Ranging (LiDAR) technologies. Through dynamic switching strategies and Proportional Integral Derivative (PID) path tracking control, it achieved an average lateral error of 0.1 m and a positioning accuracy improvement of 1.6 m in orchard environments, effectively addressing satellite signal loss issues.
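GNSS/INS fusion of the kind described above is typically built on Kalman filtering. The sketch below shows one predict/update cycle of a scalar (one-dimensional) filter with illustrative noise variances; production systems use full-state extended or error-state filters rather than this simplified form:

```python
def fuse_gnss_ins(x_est, p_est, ins_delta, gnss_pos, q=0.05, r=0.5):
    """One predict/update cycle of a scalar Kalman filter.
    x_est, p_est: previous position estimate and its variance.
    ins_delta: displacement integrated from the INS (predict step).
    gnss_pos: GNSS position fix (update step).
    q, r: process and measurement noise variances (illustrative)."""
    # Predict: dead-reckon forward with the INS displacement
    x_pred = x_est + ins_delta
    p_pred = p_est + q
    # Update: blend in the GNSS measurement via the Kalman gain
    k = p_pred / (p_pred + r)
    x_new = x_pred + k * (gnss_pos - x_pred)
    p_new = (1 - k) * p_pred
    return x_new, p_new
```

When GNSS drops out, the update step is skipped and the estimate coasts on the INS prediction, with its variance growing by q each step, which is exactly why dynamic switching to other sensors (e.g., LiDAR) becomes necessary during prolonged outages.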
The perception technology of AI agricultural robots primarily relies on sensors to perceive the surrounding environment, such as visual sensors [16] and LiDAR [161]. Visual sensors and LiDAR have been widely adopted for autonomous navigation of agricultural robots in fields, playing a crucial role particularly in tasks such as spraying and harvesting.
To address navigation challenges for agricultural robots under varying conditions, numerous visual navigation technologies based on deep learning and traditional image processing have emerged. Liu et al. [162] proposed a machine vision navigation method based on field ridge color, which achieved an average recognition success rate of 98.8% in four typical ridge environments through the gray reconstruction method and the approximate quadrilateral method, successfully enabling automated navigation of agricultural machinery on crop-free field ridges. Syed et al. [163] introduced a CNN-based model for obstacle classification in orchard environments, enabling agricultural robots to operate collision-free during autonomous navigation. Zhang et al. [164] developed a seedling band-based navigation line extraction model—Seedling Navigation CNN (SN-CNN)—to achieve autonomous navigation for cross-ridge field operation robots. However, vision-based navigation suffers from poor environmental adaptability and is highly vulnerable to adverse weather and complex surroundings.
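Whatever the detector (color thresholding, gray reconstruction, or a learned model such as SN-CNN), the navigation line itself is commonly obtained by least-squares fitting through the detected row points. The sketch below fits x = a·y + b through hypothetical crop-row centroids in image coordinates; it illustrates only the geometric fit, not the detection stage of the cited methods:

```python
def fit_navigation_line(points):
    """Least-squares line x = a*y + b through crop-row centroids
    given as (x, y) image coordinates; a encodes heading deviation,
    b the lateral position of the row in the image."""
    n = len(points)
    my = sum(y for _, y in points) / n
    mx = sum(x for x, _ in points) / n
    syy = sum((y - my) ** 2 for _, y in points)
    sxy = sum((x - mx) * (y - my) for x, y in points)
    a = sxy / syy
    return a, mx - a * my
```

Steering then reduces to driving the slope a toward zero and the intercept b toward the image center column.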
Compared to vision-based navigation, LiDAR offers superior stability and environmental adaptability in complex settings. It not only provides high-precision distance measurements but also generates detailed point cloud data. Teng et al. [165] proposed an adaptive LiDAR odometry and mapping framework. By employing generalized ICP point cloud matching and selective map update strategies, it effectively addresses motion distortion and dynamic interference issues in agricultural unstructured environments. While maintaining computational efficiency, the framework achieves superior odometry estimation accuracy and environmental adaptability compared to competing methods. Firkat et al. [166] proposed a ground segmentation algorithm specifically designed for complex agricultural field environments. This algorithm is compatible with a wide range of LiDAR sensors, enabling agricultural robots to effectively distinguish between horizontal and sloped terrains during operations. However, LiDAR-based solutions are constrained by the high cost of LiDAR sensors, which greatly limits their large-scale popularization.
The aforementioned GNSS, vision, and LiDAR technologies each have their own advantages and limitations. A comprehensive understanding of their characteristics and effective integration would significantly enhance the environmental perception capabilities and autonomous operation levels of agricultural machinery.

6.1.2. Path Planning

Path planning is a critical component of autonomous navigation for agricultural robots. Rational path planning can enhance operational efficiency and multi-machine collaboration while reducing unnecessary energy consumption [167,168]. Path planning includes full-coverage path planning, headland turn planning, and multi-vehicle cooperative path planning, among others.
The primary objective of full-coverage path planning is to ensure complete coverage of the operational area while minimizing path overlap and missed zones. Multi-vehicle cooperative path planning involves the precise and rational arrangement of operational paths for multiple agricultural robots, representing a major direction in the development of agricultural automation. Jeon et al. [169] developed an automated tillage path planning method for polygonal paddy fields, which synergistically optimizes internal and peripheral field paths to enhance trajectory tracking accuracy and reduce missed areas. Wang et al. [170] addressed target constraints in complex environments by proposing a full-coverage path planning approach that integrates an improved ant colony algorithm with a shuttle strategy, achieving high coverage rates and operational efficiency for harvesting machinery. Soitinaho et al. [171] introduced a novel method for dual-machine cooperative full-coverage path planning, utilizing short-path decomposition and real-time collision detection scheduling to enable synchronous tillage and seeding by autonomous agricultural machinery in fields. However, the aforementioned cooperative schemes generally assume regular field geometry and zero-delay communication; when irregular ridges, crop occlusion, or local signal loss occur in practice, positioning drift readily leads to overlapping or missed coverage, leaving a clear shortfall before truly robust large-scale multi-robot coordination.
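For a rectangular field, the baseline that these planners improve upon is the boustrophedon (back-and-forth) sweep. The sketch below is a hedged illustration with hypothetical field dimensions and implement width, not the optimized planners of [169,170,171]:

```python
import math

def boustrophedon(width, length, swath):
    """Generate back-and-forth waypoints covering a width x length field
    with an implement of working width `swath`. Each pass is centred on
    its strip, so adjacent passes neither overlap nor leave gaps when
    the field width is a multiple of the swath."""
    waypoints = []
    n_passes = math.ceil(width / swath)
    for i in range(n_passes):
        # Centre of strip i, clamped so the last pass stays in the field.
        x = min(swath * (i + 0.5), width - swath / 2)
        if i % 2 == 0:            # even passes travel "up" the field
            waypoints += [(x, 0.0), (x, length)]
        else:                     # odd passes return "down"
            waypoints += [(x, length), (x, 0.0)]
    return waypoints

# 20 m wide, 100 m long field, 4 m implement: 5 passes, 2 waypoints each.
path = boustrophedon(width=20.0, length=100.0, swath=4.0)
```

Headland turn planning then connects consecutive pass endpoints with feasible turning curves, which is where the cited methods add most of their value.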

6.1.3. Path Tracking Control

Path tracking control is critical for the autonomous navigation of agricultural robots and a key factor determining the performance of navigation systems [172,173]. Its primary function is to drive agricultural machinery to accurately execute planned trajectories while maintaining system stability during dynamic operations. However, achieving high-precision path tracking control in a complex field environment remains challenging due to susceptibility to external environmental variations, parameter inaccuracies, and actuator saturation. Therefore, path-tracking control algorithms used in agricultural settings must combine environmental adaptability with strong robustness.
Currently, path tracking control methods applied to unmanned agricultural vehicles are primarily based on three classes of models: geometric models, kinematic models, and dynamic models. Yang et al. [174] proposed an agricultural robot path tracking algorithm that incorporates optimal goal points, simulating driver preview behavior and evaluation functions to achieve adaptive optimization of target points. This approach reduces tracking errors by over 20% compared to traditional algorithms, significantly enhancing navigation accuracy in complex field environments. Cheng et al. [175] developed a path tracking method integrating preview theory with an adaptive PID architecture, significantly enhancing the tractor’s tracking robustness and disturbance rejection capabilities across diverse paths and operating conditions. Yang et al. [176] developed a novel Fast Super-Twisting Sliding Mode (FSTSM) control method based on an Anti-Peaking Extended State Observer (AESO), enabling finite-time stable tracking of unmanned agricultural vehicles under disturbances while significantly enhancing interference resistance and convergence performance. However, these path-tracking methods still rely on high-frequency GNSS-RTK and multi-sensor fusion; under canopy shading, rolling terrain, or dust-covered conditions the signals drift easily, so real-world tracking errors exceed simulation results, leaving a gap before low-cost, all-weather, all-terrain robust deployment.
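Among the geometric models, pure pursuit is the canonical example: the vehicle steers toward a look-ahead point on the planned path, and bicycle-model geometry yields the steering angle. The sketch below (pure Python, hypothetical parameters) shows only that core law, without the adaptive preview or disturbance rejection of the cited methods:

```python
import math

def pure_pursuit_steer(x, y, heading, target, wheelbase):
    """Pure-pursuit steering for a bicycle-model vehicle: steer so the
    rear axle follows a circular arc through the look-ahead target."""
    dx, dy = target[0] - x, target[1] - y
    # Angle between the vehicle heading and the line to the target.
    alpha = math.atan2(dy, dx) - heading
    ld = math.hypot(dx, dy)  # look-ahead distance
    # Classic pure-pursuit law: delta = atan(2 * L * sin(alpha) / ld).
    return math.atan2(2.0 * wheelbase * math.sin(alpha), ld)

# Target dead ahead -> zero steering; target to the left -> positive steer.
straight = pure_pursuit_steer(0.0, 0.0, 0.0, (5.0, 0.0), wheelbase=2.0)
left = pure_pursuit_steer(0.0, 0.0, 0.0, (5.0, 2.0), wheelbase=2.0)
```

The look-ahead distance is the main tuning knob: short values track tightly but oscillate, long values smooth the response but cut corners, which is precisely what the adaptive goal-point methods above optimize.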
Furthermore, the above agricultural robot autonomous navigation technologies are classified and summarized as shown in Table 4.

6.2. Categories of AI-Agricultural Robots

6.2.1. Planting Operations

Planting is a critical stage in agricultural production, directly affecting crop yield and quality. Agricultural robots for planting operations can be broadly categorized into automatic transplanting robots [177,178] and automatic seeding robots [179], which are designed for different types of crops. Liu et al. [180] developed a strawberry autonomous transplanting system that integrates photoelectric navigation and a pneumatic dual-gripper mechanism. This system enables arched-back oriented transplanting in elevated cultivation facilities, achieving a success rate of 95.3% and an operational efficiency of 1047.8 plants per hour, which significantly advances the automation of strawberry (Fragaria × ananassa Duchesne) transplantation. Abo-habaga et al. [181] proposed an Automated Precision Seeding Unit (APSU) for greenhouses, achieving precise seeding within pots for four crop types by optimizing seed-aspiration nozzle size, with an operational efficiency of 35 s per pot. Liu et al. [182] introduced an automatic seeding robot utilizing basketball motion capture technology, which achieves a seeding positioning accuracy of 95.5% and a target recognition rate of 98.2%, substantially enhancing high-precision seeding performance and crop yield. However, when confronted with uneven substrate, deformed plug trays, or variations in seed coating, such transplanting robots tend to miss, double-plant, or damage roots; coupled with delayed real-time sensing and feedback during high-speed operation, overall reliability drops, leaving a gap before stable unmanned operation in large, complex field environments.

6.2.2. Growth Management Operations

The crop growth management phase is the longest and most complex stage in agricultural production. Agricultural robots play a more diverse and intelligent role in this phase compared to the planting stage. Key types of robots for growth management include pollination robots, weeding robots, and pesticide spraying robots.
Pollination is one of the key factors in fruit development. Standard pollination methods include natural pollination and artificial pollination [183]. Natural pollination primarily relies on insects [184] and is susceptible to environmental influences, while artificial pollination requires excessive labor. Robotic pollination overcomes these limitations by utilizing computer vision to locate flowers and then employing mechanical devices to physically contact the stigma and transfer pollen. Yang et al. [185] proposed a pistil orientation detection method named PSTL_Orient. By integrating visual servo technology with a robotic arm system, the autonomous pollination robot achieved an 86.19% pollination success rate on forsythia (Forsythia suspensa (Thunb.) Vahl) flowers. Cao et al. [186] developed a kiwifruit (Actinidia chinensis Planch.) pollination robot integrating vision, air-liquid spraying, and a robotic arm, achieving a 99.3% pollination success rate and an 88.5% fruit set rate, with pollen consumption of only 0.15 g per 60 flowers, significantly improving pollen utilization efficiency. Masuda et al. [187] introduced a rail-suspended pollination robot equipped with a flexible multi-degree-of-freedom manipulator and a dual-actuator system, enabling automated pollination of tomato flowers in greenhouse environments. However, current pollination robots still face bottlenecks such as missed detection caused by clustered flower spikes, insufficient force control of the actuator on stamens leading to damage, high sensitivity of pollen viability to temperature and humidity fluctuations, and short continuous operation endurance, leaving a gap before large-scale deployment in complex open environments outside greenhouses.
Weed-removing robots primarily identify crops and weeds through machine vision and AI, then locate and eliminate weeds. Based on their weeding methods, these robots can be categorized into two types: physical weed removal [188,189] and precision spraying [190]. Zhao et al. [191] developed an autonomous laser weeding robot for strawberry fields based on the detection method for drip irrigation pipe navigation and laser weeding, achieving a 92.6% field weeding rate with only 1.2% crop damage, enabling precise weed identification and removal in strawberry plantations. Zheng et al. [192] proposed an electric swing-type intra-row weeding control system that integrates deep learning for accurate cabbage (Brassica oleracea L.) recognition and dynamic obstacle avoidance, attaining a 96% weeding accuracy and 1.57% crop injury rate at a low speed of 0.1 m/s. Balabantaray et al. [193] designed an AI-powered weeding robot targeting Palmer amaranth (Amaranthus palmeri S. Watson), which employs a self-developed AI recognition model and robotic technology to achieve real-time weed identification and spot spraying. Fan et al. [194] introduced a cotton field weed detection model incorporating a Convolutional Block Attention Module (CBAM), a Bidirectional Feature Pyramid Network (BiFPN), and bilinear interpolation algorithms. This allowed robots to achieve high-precision weed recognition and highly effective spraying operations. However, the aforementioned weeding robots generally rely on AI recognition models and multi-sensor fusion, resulting in high overall costs. In dense or tall-stalk crops, plant occlusion still causes a sharp drop in recognition accuracy and a surge in seedling-damage rates, leaving some bottlenecks before large-scale deployment.

6.2.3. Harvesting Operations

Harvesting is the final stage of agricultural production and also the most challenging one. It demands high speed and precision from robotic harvesters, as fruits not only ripen simultaneously but are also highly perishable.
Fruit harvesting robots primarily identify and locate fruits through machine vision and AI, then use robotic arms for precise picking and transportation [195,196]. Yang et al. [197] developed an automated pumpkin (Cucurbita moschata Duchesne) harvesting robot system based on AI and an RGB-D camera, achieving 99% fruit detection accuracy and a 90% picking success rate, thereby alleviating the labor demands of harvesting heavy fruit. Yang et al. [198] proposed a grape harvesting robot utilizing a multi-camera system and an AI object detection algorithm, enabling accurate identification of grape cutting points and automated outdoor grape harvesting. Chang et al. [199] introduced a wire-driven multi-joint strawberry harvesting robotic arm, combining deep learning with two-stage fuzzy logic control to achieve precise strawberry fruit recognition and successful harvesting. Choi et al. [200] developed an AI robotic system for harvesting citrus (Citrus sinensis (L.) Osbeck). Through eye-hand coordination of the robotic arm and 6D fruit posture sensing, it achieves adaptive grasping and precise stem cutting, demonstrating high success rates and harvesting efficiency in real orchard environments. Although fruit-harvesting robots can alleviate labor shortages, they still suffer from poor scene generalization and a high fruit-damage rate.
Despite continuous technological breakthroughs and advancements in agricultural robots, their application in real-world field environments still faces numerous challenges. On one hand, existing navigation algorithms struggle to maintain high stability and reliability in highly unstructured and dynamically evolving complex farmland scenarios. On the other hand, AI struggles to achieve high recognition accuracy for targets in relatively complex agricultural environments. Overall, for agricultural robots to achieve widespread adoption, further breakthroughs are needed in both the robustness of their autonomous navigation algorithms and the precision of their AI recognition capabilities. Table 5 summarizes the application scenarios and methods of AI-agricultural robots in different stages of crop production.

7. AI-Internet of Things for Smart Agriculture

The rapid development of the Internet of Things and AI has strongly propelled the transformation of agriculture toward intelligence and automation. By integrating wireless sensing, cloud computing, machine learning, and related methods, smart agriculture achieves accurate monitoring, intelligent decision-making, and automatic control across the entire production process, i.e., the AI-Internet of Things for smart agriculture. Existing research can be grouped into three categories: environmental monitoring and intelligent control, bioinformation perception and processing, and system architecture and prospects.

7.1. Environmental Monitoring and Intelligent Control

This line of research collects parameters such as temperature, humidity, and soil elements in farmland, greenhouses, and other production environments in real time through sensor networks, and applies cloud computing and deep learning to judge and adjust environmental variables automatically. For instance, Bu et al. [201] introduced a four-layer intelligent agriculture system comprising agricultural data collection, edge computing, data transmission, and cloud computing. This system enables real-time environmental monitoring, analysis of soil moisture and climatic conditions, and automatic adjustment of irrigation and fertilization dosage. It also facilitates the transfer of existing knowledge to new agricultural environments to accommodate different working scenarios. Mohamed et al. [202] designed a fog cultivation system that uses Arduino-based cooling fans to intelligently cool the lettuce root zone based on real-time temperature and humidity data from the surrounding environment, improving plant growth conditions and enabling real-time user notifications. Separately, Qureshi et al. [203] explored plasma technology for plant nitrogen fixation and microbial regulation, developing an AIoT-enabled system that autonomously adjusts spray cycles, nutrient concentration, and plasma composition to reduce resource consumption while ensuring crop yield.
Moreover, Shi et al. [204] adopted wireless sensor networks with different topologies to monitor dissolved oxygen concentration and water temperature in a freshwater fishpond aquaculture environment. The results show that tree topology yields a lower packet loss rate compared to star or mesh topology. Furthermore, the system’s energy consumption can be greatly reduced with the PM2 energy-saving mode and data fusion strategy. Ma et al. [205] designed a LoRa IoT data collection platform for irrigation management. This system determines plant growth stages through color analysis and automatically controls irrigation by leveraging historical data and AI-driven predictions. Similarly, Sitharthan et al. [206] developed an autonomous irrigation system based on AI and 6G Internet of Things. The implementation deploys algorithms on microprocessors to identify rainfall patterns and climate change based on historical meteorological data, thereby providing data support for irrigation decisions, and monitoring soil moisture content in real-time to achieve precise water replenishment. Furthermore, Singh et al. [207] utilized IoT and machine learning methods to investigate the relationship between tractor driving characteristics (such as average speed, plowing depth and traction) and ride comfort in actual rotary tillage operations. Using the ESP8266 IoT module for remote data acquisition, their study aims to identify vibration thresholds that may pose health risks to operators.
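The read-decide-actuate loop shared by these systems can be reduced to a small example. The sketch below shows hysteresis-band irrigation control with hypothetical thresholds, the simplest form of the automatic adjustment described above; real deployments replace the fixed thresholds with AI-driven predictions as in [205,206]:

```python
def irrigation_step(moisture, pump_on, low=25.0, high=40.0):
    """Hysteresis-band controller: start the pump when volumetric soil
    moisture (%) falls below `low`, stop it above `high`; in between,
    keep the previous state to avoid rapid pump cycling."""
    if moisture < low:
        return True
    if moisture > high:
        return False
    return pump_on

# Simulated readings drifting down, then recovering after irrigation.
state, log = False, []
for m in [45, 38, 30, 24, 28, 35, 41, 39]:
    state = irrigation_step(m, state)
    log.append(state)
```

The hysteresis band is what keeps an unattended, battery-powered node from toggling its actuator on every noisy reading, which matters for the longevity concerns raised below.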
Although IoT technology enables precise monitoring and intelligent control in agricultural environments, the complex and harsh conditions—such as wind, sun exposure, rain, dust, and corrosion—impose higher demands on the longevity and stability of network equipment. Additionally, since these devices often operate unattended and rely on battery power, AI-enabled IoT solutions in agricultural settings must meet requirements for low cost, low power consumption, and high robustness.

7.2. Bioinformation Perception and Processing

This line of research primarily leverages AI technology to process biological signals and image data for species identification, growth monitoring and health status assessment.
For example, Jose et al. [208] proposed an intelligent acoustic sensor network that classifies birdsong using a CNN-BiLSTM deep learning model, integrating Mel-Frequency Cepstral Coefficients (MFCC), Mel spectrograms, and the Short-Time Fourier Transform (STFT) for high-accuracy recognition. The system employs a LoRaWAN network for long-distance, low-power communication between sensing nodes and the AWS IoT cloud platform, forming a comprehensive farm monitoring system. Its core bird-repellent module is equipped with an ultrasonic generator, a laser scintillation device, and a distress-call simulator, and can immediately activate the corresponding sonic deterrent upon identification of the target species. By introducing graphene oxide, Li et al. [209] designed a wearable non-destructive detection sensor based on PDS-GO-SS flexible material. The sensor exhibits high sensitivity and strong anti-interference ability, enabling accurate detection of the leaf-scale Vapor Pressure Deficit (VPD) and reliably reflecting the true transpiration strength of plants. This device provides a novel approach for the quantitative monitoring of crop growth parameters. Characterized by its multi-functionality, miniaturization, flexibility, and high sensitivity, it shows significant potential for advancing intelligent agricultural systems.
Moreover, the work in [210] demonstrated that deploying AI algorithms on embedded systems enables precise seed positioning in containers and identification of distinct germination stages. These capabilities facilitate the evaluation of planting systems and the prediction of harvest cycles. Subsequently, an advanced embedded system was introduced in [211]. Equipped with a low-power sensing platform and an integrated graphics processing unit, this system can locally execute neural network algorithms. Utilizing LSTM as its AI core, it performs intelligent analysis and autonomous control, allowing for continuous analysis and prediction of plant leaf growth dynamics.
However, embedded sensing network devices with constrained computing and energy resources are unable to run computationally intensive AI network models in real time, while cloud computing introduces significant network latency. Thus, embedded network devices based on edge computing and lightweight network models are crucial for the development of agricultural IoT.
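One concrete route to such lightweight models is post-training quantization, which stores weights as 8-bit integers with a shared scale and cuts memory roughly fourfold at a small accuracy cost. The sketch below (pure Python, hypothetical weight values) illustrates symmetric per-tensor quantization; production systems would rely on framework tooling rather than hand-rolled code:

```python
def quantize_int8(weights):
    """Symmetric per-tensor quantization: map floats to int8 codes in
    [-127, 127] using a single scale chosen from the largest magnitude."""
    scale = max(abs(w) for w in weights) / 127.0
    q = [max(-127, min(127, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from the int8 codes."""
    return [v * scale for v in q]

weights = [0.52, -1.27, 0.03, 0.88, -0.41]
q, scale = quantize_int8(weights)
approx = dequantize(q, scale)
# Round-trip error is bounded by half a quantization step (scale / 2).
```

Combined with pruning and operator fusion, this is the kind of compression that lets inference run on the microcontroller-class nodes discussed above instead of the cloud.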

7.3. System Architecture and Prospects

In addition, some studies have discussed the overall network architecture, technology integration, and future challenges of smart agriculture at a macro level, approaching these topics from different perspectives. Zhu et al. [86] provided an early review of the key technical elements in intelligent sprinkler irrigation systems, covering sensors, communications, and autonomous control solutions. As technologies advanced, Taha et al. [1] provided a comprehensive survey of the emerging technological landscape for Agriculture 5.0 (Ag5.0) and emphasized the integration and application of AI, machine learning, digital twins, and blockchain. Muhammed et al. [212] and Adli et al. [213] both focused on the core paradigm of the AI Internet of Things (AIoT), systematically outlining its architecture, key technologies, and solutions for smart agriculture, while also identifying persistent challenges in data management and connectivity performance. Mowla et al. [214] summarized the practical application of the Internet of Things and wireless sensor networks in agriculture through a systematic review, analyzing in particular the key roles of wireless communication protocols such as ZigBee and LoRaWAN.
To address the challenge of communication coverage in agricultural areas, Liu et al. [215] proposed a satellite Internet of Things framework to support large-scale agricultural monitoring through satellite edge computing. In terms of security and trust mechanisms, Vangala et al. [216] studied the information security problem in smart agriculture from a blockchain perspective, proposing a general security architecture and conducting a cost–benefit analysis. Regarding sustainability, Dhanaraju et al. [217] and AlZubi et al. [218] both emphasized the sustainable value of smart agriculture technologies. The latter also proposed an AI and IoT convergence framework for Smart Sustainable Agriculture (SSA) to address the systemic problems arising from the highly decentralized nature of agricultural production. From a data management perspective, Luo et al. [219] provided a comprehensive review, outlining a technology integration path for AIoT across the four-layer architecture of perception, transmission, platform, and application. Their study also delved into bottlenecks related to privacy protection and system interoperability. Furthermore, the systematic review in [220] revealed that smart agriculture technology faces multiple obstacles in practical promotion, including high infrastructure costs, insufficient system interoperability, limited connectivity in rural areas, and data privacy and ethical concerns. It also highlighted the critical issues of the digital divide and the need for inclusive access mechanisms.
Despite the extensive research and application of AI-IoT in smart agriculture, significant research gaps remain. Beyond common AI challenges such as generalization capability and model interpretability, issues like missing or anomalous data caused by agricultural environmental interference and faulty IoT devices—compounded by the high cost of data annotation in agriculture—impose additional pressure on AI algorithms. Furthermore, the application of various IoT technologies (e.g., NB-IoT, LoRa, WSNs) in agricultural settings has inadvertently led to network and device heterogeneity, triggering communication silos. Achieving an integrated smart agricultural system that combines perception, networking, decision-making, and control—while providing farmers without specialized technical expertise with understandable and trustworthy AI systems—remains a formidable challenge.
Table 6 further summarizes the application scenarios and methods of AI and IoT technologies in agriculture.

8. Challenges and Future Perspectives

AI technologies have extensive applications in smart agriculture, with each method offering unique advantages. However, significant limitations persist in practical implementation.
In AI detection, agricultural machinery fault diagnosis, and crop condition forecasting, most existing models require extensive labeled datasets and computational resources to achieve the desired accuracy [36,39,66,69,119,120]. However, due to the complexity of agricultural production environments, acquiring large-scale, balanced, and thoroughly annotated agricultural datasets is costly and labor-intensive, severely limiting practical application in real-world farming. Furthermore, the poor interpretability of AI models often leaves agricultural producers struggling to understand the specific basis and reasoning behind a model's judgments; this lack of transparency undermines trust and confidence in diagnostic and predictive results [41,45,81,82,126,133]. Insufficient generalization also limits the practical application of the proposed models [48,52,70,85,118,140]. Future research could develop models that handle low-quality, imbalanced data and remain stable under varying agricultural conditions. Exploring lightweight neural network architectures and optimization algorithms can reduce computational demands, enabling high-performance AI models to be deployed directly on agricultural machinery, drones, or field edge computing devices for real-time, online detection and diagnosis. Combining transfer learning with domain adaptation to build diagnostic networks that adapt to dynamic operations and environmental changes can help address data imbalance and enhance model generalization.
Beyond the aforementioned common challenges, AI-based detection, fault diagnosis, crop condition prediction, and agriculture robots also face the following issues:
(1)
A large number of AI detection technologies have achieved high-precision detection by integrating multi-source signals. However, traditional machine learning models have limited capabilities in processing unstructured text and fusing multimodal information, making comprehensive and systematic detection challenging. Large language models (LLMs), pre-trained on massive cross-domain datasets, can handle diverse tasks with minimal task-specific fine-tuning, enabling higher-precision, more comprehensive, and systematic agricultural detection tasks.
(2)
AI-based agricultural machinery fault diagnosis models demonstrate high accuracy across various machine failures, yet their low real-time performance severely limits practical application. The integration of edge AI, IoT, and sensor fusion offers new avenues for development. Edge AI enables real-time, local data processing, reducing latency and bandwidth consumption; IoT facilitates seamless transmission of data from diverse sensors; sensor fusion combines multi-source data to uncover correlations. Together, these technologies achieve low-latency, highly stable agricultural machinery fault diagnosis.
(3)
Existing crop condition prediction models achieve strong performance by integrating traditional growth models with machine learning approaches; however, because the primary influencing factors differ across crop growth stages, a single model struggles to provide comprehensive prediction and management across all stages. Multimodal sensing technology, capable of integrating diverse data modalities, acquires more comprehensive and accurate environmental information. This offers a novel approach for crop condition prediction: fusing multi-source data, including imagery, environmental, and physiological data, to achieve holistic perception and precise control over the crop growth process.
(4)
Existing agricultural robots rely on advanced AI algorithms and extensive pre-trained data to achieve intelligent and precise operations in perception, decision-making, and execution. However, factors such as the open and complex working environment, as well as the inconsistent postures of target objects, make robots designed for single-task types and specific working conditions unable to meet the demands of cross-scenario operations and adaptive functioning in complex agricultural environments. Embodied intelligence, which emphasizes the interaction between intelligent agents and the physical environment to achieve intelligence, offers a new direction for the further development of agricultural robots. This leads to the concept of embodied intelligent agricultural robots, which aim to establish a closed-loop intelligent system capable of further creating a new generation of intelligent agricultural robots with adaptive environmental perception, multi-machine collaboration, transferability, and evolution.

9. Conclusions

This paper provides a systematic review of the application domains, key technologies, challenges, and future research directions of AI in smart agriculture. The analysis demonstrates that AI technologies, with their powerful perception, cognition, and decision-making capabilities, have been deeply integrated into various stages of agricultural production, driving the transition of modern agriculture toward smart farming. In terms of perception, the integration of computer vision, the Internet of Things, and multi-source information sensing has enabled accurate detection of crop growth and the production environment. In cognition, machine learning and deep learning algorithms are employed to mine and analyze agricultural production data, constructing intelligent methods for tasks such as pest and disease identification, yield prediction, and equipment fault alerts. In decision-making, robotics technology is applied to agricultural machinery for tasks including planting, irrigation, fertilization, pesticide spraying, and harvesting, achieving precision operations. Despite these advancements, the application of AI in smart agriculture still faces challenges due to limitations in AI technologies, the complex and dynamic agricultural environment, and economic considerations. Finally, this paper discusses future research directions, including AI algorithms for small-sample and weakly annotated learning, lightweight and easily embeddable network models, AI-driven IoT with multimodal perception, and the application of emerging concepts such as embodied intelligence and large-scale models in agriculture. It is expected that this work can offer valuable insights for the research, application, and development of AI technologies in agriculture.

Author Contributions

Conceptualization, C.G. and X.S.; methodology, X.S.; software, G.Z.; validation, C.G., X.S. and Z.W.; formal analysis, C.G.; investigation, G.Z., Y.W. and D.S.; resources, C.G.; data curation, C.G.; writing—original draft preparation, C.G., G.Z., X.S. and Z.W.; writing—review and editing, X.S. and Z.W.; visualization, X.S.; supervision, X.S.; project administration, X.S.; funding acquisition, X.S. and Z.W. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by Natural Science Foundation of Jiangsu Province, grant number BK20200887, and the Doctor’s Program of Entrepreneurship and Innovation in Jiangsu Province.

Institutional Review Board Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

The authors would like to thank the editor and reviewers for their valuable suggestions for improving this paper.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Taha, M.F.; Mao, H.P.; Zhang, Z.; Elmasry, G.; Awad, M.A.; Abdalla, A.; Mousa, S.; Elwakeel, A.E.; Elsherbiny, O. Emerging technologies for precision crop management towards agriculture 5.0: A comprehensive overview. Agriculture 2025, 15, 582. [Google Scholar] [CrossRef]
  2. Yang, X.; Shu, L.; Chen, J.N.; Ferrag, M.A.; Wu, J.; Nurellari, E.; Huang, K. A survey on smart agriculture: Development modes, technologies, and security and privacy challenges. IEEE-CAA J. Autom. Sin. 2021, 8, 273–302. [Google Scholar] [CrossRef]
  3. Shaikh, T.A.; Rasool, T.; Lone, F.R. Towards leveraging the role of machine learning and artificial intelligence in precision agriculture and smart farming. Comput. Electron. Agric. 2022, 198, 107119. [Google Scholar] [CrossRef]
  4. Pan, Y.Z.; Zhang, Y.Z.; Wang, X.P.; Gao, X.X.; Hou, Z.Y. Low-cost livestock sorting information management system based on deep learning. Artif. Intell. Agric. 2023, 9, 110–126. [Google Scholar] [CrossRef]
  5. Wang, H.J.; Gu, J.A.; Wang, M.N. A review on the application of computer vision and machine learning in the tea industry. Front. Sustain. Food Syst. 2023, 7, 1172543. [Google Scholar] [CrossRef]
  6. Bai, X.S.; Chen, Q.; Song, X.J.; Hong, W.H. Advancing agricultural machinery maintenance: Deep learning-enabled motor fault diagnosis. IEEE Access 2025, 13, 129933–129951. [Google Scholar] [CrossRef]
  7. Artificial Intelligence in Agriculture Market Size, Share, and Trends 2024 to 2034. Precedence Research. 2024. Available online: https://www.precedenceresearch.com/artificial-intelligence-in-agriculture-market (accessed on 17 October 2024).
  8. Peng, Y.; Zhao, S.Y.; Liu, J.Z. Fused-deep-features based grape leaf disease diagnosis. Agronomy 2021, 11, 2234. [Google Scholar] [CrossRef]
  9. Singh, A.; Singh, K.; Kaur, J.; Singh, M.L. Smart agriculture framework for automated detection of leaf blast disease in paddy crop using colour slicing and GLCM features based random forest approach. Wirel. Pers. Commun. 2023, 131, 2445–2462. [Google Scholar] [CrossRef]
  10. Ji, W.; Wang, J.C.; Xu, B.; Zhang, T. Apple grading based on multi-dimensional view processing and deep learning. Foods 2023, 12, 2117. [Google Scholar] [CrossRef]
  11. Zhao, S.Y.; Peng, Y.; Liu, J.Z.; Wu, S. Tomato leaf disease diagnosis based on improved convolution neural network by attention module. Agriculture 2021, 11, 651. [Google Scholar] [CrossRef]
  12. Sun, J.; Yang, F.Y.; Cheng, J.H.; Wang, S.M.; Fu, L.H. Nondestructive identification of soybean protein in minced chicken meat based on hyperspectral imaging and VGG16-SVM. J. Food Compos. Anal. 2024, 125, 105713. [Google Scholar] [CrossRef]
  13. Zhu, W.D.; Sun, J.; Wang, S.M.; Shen, J.F.; Yang, K.F.; Zhou, X. Identifying field crop diseases using transformer-embedded convolutional neural network. Agriculture 2022, 12, 1083. [Google Scholar] [CrossRef]
  14. Wang, Y.F.; Li, T.Z.; Chen, T.H.; Zhang, X.D.; Taha, M.F.; Yang, N.; Mao, H.P.; Shi, Q. Cucumber downy mildew disease prediction using a CNN-LSTM approach. Agriculture 2024, 14, 1155. [Google Scholar] [CrossRef]
  15. Akilan, T.; Baalamurugan, K.M. Automated weather forecasting and field monitoring using GRU-CNN model along with IoT to support precision agriculture. Expert Syst. Appl. 2024, 249, 123468. [Google Scholar] [CrossRef]
  16. Wu, Q.; Gu, J.N. Design and research of robot visual servo system based on artificial intelligence. Agro Food Ind. Hi-Tech 2017, 28, 125–128. [Google Scholar]
  17. Chen, W.M.; Yang, J.X.; Zhang, S.C.; Wei, X.H.; Liu, C.L.; Zhou, X.Y.; Sun, L.; Wang, F.; Wang, A.Z. Variable scale operational path planning for land levelling based on the improved ant colony optimization algorithm. Sci. Rep. 2025, 15, 9854. [Google Scholar] [CrossRef] [PubMed]
  18. Ji, W.; Gao, X.X.; Xu, B.; Pan, Y.; Zhang, Z.; Zhao, D. Apple target recognition method in complex environment based on improved YOLOv4. J. Food Process Eng. 2021, 44, e13866. [Google Scholar] [CrossRef]
  19. Yang, G.Y.; Cheng, Y.; Xi, C.B.; Liu, L.; Gan, X. Combine harvester bearing fault-diagnosis method based on SDAE-RCmvMSE. Entropy 2022, 24, 1139. [Google Scholar] [CrossRef] [PubMed]
  20. Xu, J.J.; Jing, T.T.; Fang, M.; Li, P.C.; Tang, Z. Failure state identification and fault diagnosis method of vibrating screen bolt under multiple excitation of combine harvester. Agriculture 2025, 15, 455. [Google Scholar] [CrossRef]
  21. Hosseinpour-Zarnaq, M.; Omid, M.; Biabani-Aghdam, E. Fault diagnosis of tractor auxiliary gearbox using vibration analysis and random forest classifier. Inf. Process. Agric. 2022, 9, 60–67. [Google Scholar] [CrossRef]
  22. Wang, R.C.; Yan, H.; Dong, E.Z.; Cheng, Z.H.; Li, Y.; Jia, X.S. Infrared thermography based fault diagnosis of diesel engines using convolutional neural network and image enhancement. Open Phys. 2024, 22, 20240110. [Google Scholar] [CrossRef]
  23. Xu, L.Y.; Zhao, G.X.; Zhao, S.X.; Wu, Y.W.; Chen, X.L. Fault diagnosis method for tractor transmission system based on improved convolutional neural network–bidirectional long short-term memory. Machines 2024, 12, 492. [Google Scholar] [CrossRef]
  24. An, Q.; Rahman, S.; Zhou, J.W.; Kang, J.J. A comprehensive review on machine learning in healthcare industry: Classification, restrictions, opportunities and challenges. Sensors 2023, 23, 4178. [Google Scholar] [CrossRef]
  25. Dong, S.; Wang, P.; Abbas, K. A survey on deep learning and its applications. Comput. Sci. Rev. 2021, 40, 100379. [Google Scholar] [CrossRef]
  26. Elguea-Aguinaco, I.; Serrano-Muñoz, A.; Chrysostomou, D.; Inziarte-Hidalgo, I.; Bogh, S.; Arana-Arexolaleiba, N. A review on reinforcement learning for contact-rich robotic manipulation tasks. Robot. Comput.-Integr. Manuf. 2023, 81, 102517. [Google Scholar] [CrossRef]
  27. Matsuzaka, Y.; Yashiro, R. AI-based computer vision techniques and expert systems. AI 2023, 4, 289–302. [Google Scholar] [CrossRef]
  28. Tu, K.L.; Wen, S.Z.; Cheng, Y.; Xu, Y.A.; Pan, T.; Hou, H.N.; Gu, R.L.; Wang, J.H.; Wang, F.G.; Sun, Q. A model for genuineness detection in genetically and phenotypically similar maize variety seeds based on hyperspectral imaging and machine learning. Plant Methods 2022, 18, 81. [Google Scholar] [CrossRef] [PubMed]
  29. Zuo, X.; Chu, J.; Shen, J.F.; Sun, J. Multi-granularity feature aggregation with self-attention and spatial reasoning for fine-grained crop disease classification. Agriculture 2022, 12, 1499. [Google Scholar] [CrossRef]
  30. Qiu, D.K.; Guo, T.H.; Yu, S.Q.; Liu, W.; Li, L.; Sun, Z.Z.; Peng, H.H.; Hu, D. Classification of apple color and deformity using machine vision combined with CNN. Agriculture 2024, 14, 978. [Google Scholar] [CrossRef]
  31. Fu, L.H.; Sun, J.; Wang, S.M.; Xu, M.; Yao, K.S.; Cao, Y.; Tang, N.Q. Identification of maize seed varieties based on stacked sparse autoencoder and near-infrared hyperspectral imaging technology. J. Food Process Eng. 2022, 45, e14120. [Google Scholar] [CrossRef]
  32. Sun, J.; Zhang, L.; Zhou, X.; Yao, K.S.; Tian, Y.; Nirere, A. A method of information fusion for identification of rice seed varieties based on hyperspectral imaging technology. J. Food Process Eng. 2022, 44, e13797. [Google Scholar] [CrossRef]
  33. Zhang, F.; Cui, X.H.; Zhang, C.C.; Cao, W.H.; Wang, X.Y.; Fu, S.L.; Teng, S. Rapid non-destructive identification of selenium-enriched millet based on hyperspectral imaging technology. Czech J. Food Sci. 2022, 40, 445–455. [Google Scholar] [CrossRef]
  34. Xu, M.; Sun, J.; Zhou, X.; Tang, N.Q.; Shen, J.F.; Wu, X.H. Research on nondestructive identification of grape varieties based on EEMD-DWT and hyperspectral image. J. Food Sci. 2021, 86, 2011–2023. [Google Scholar] [CrossRef] [PubMed]
  35. Sun, J.; Nirere, A.; Dusabe, K.D.; Zhang, Y.H.; Adrien, G. Rapid and nondestructive watermelon (Citrullus lanatus) seed viability detection based on visible near-infrared hyperspectral imaging technology and machine learning algorithms. J. Food Sci. 2024, 89, 4403–4418. [Google Scholar] [CrossRef]
  36. Cap, Q.H.; Uga, H.; Kagiwada, S.; Iyatomi, H. LeafGAN: An effective data augmentation method for practical plant disease diagnosis. IEEE Trans. Autom. Sci. Eng. 2022, 19, 1258–1267. [Google Scholar] [CrossRef]
  37. Lu, B.; Sun, J.; Yang, N.; Wu, X.H.; Zhou, X. Identification of tea white star disease and anthrax based on hyperspectral image information. J. Food Process Eng. 2021, 44, e13584. [Google Scholar] [CrossRef]
  38. Yang, N.; Yu, J.J.; Wang, A.Y.; Tang, J.; Zhang, R.B.; Xie, L.L.; Shu, F.Y.; Kwabena, O.P. A rapid rice blast detection and identification method based on crop disease spores’ diffraction fingerprint texture. J. Sci. Food Agric. 2020, 100, 3608–3621. [Google Scholar] [CrossRef]
  39. Li, H.H.; Luo, X.F.; Haruna, S.A.; Zareef, M.; Chen, Q.S.; Ding, Z.; Yan, Y.Y. Au-Ag OHCs-based SERS sensor coupled with deep learning CNN algorithm to quantify thiram and pymetrozine in tea. Food Chem. 2023, 428, 136798. [Google Scholar] [CrossRef]
  40. Deng, J.H.; Zhao, X.K.; Luo, W.F.; Bai, X.; Xu, L.J.; Jiang, H. Microwave detection technique combined with deep learning algorithm facilitates quantitative analysis of heavy metal Pb residues in edible oils. J. Food Sci. 2024, 89, 6005–6015. [Google Scholar] [CrossRef] [PubMed]
  41. Zhou, X.; Zhao, C.J.; Sun, J.; Cao, Y.; Yao, K.S.; Xu, M. A deep learning method for predicting lead content in oilseed rape leaves using fluorescence hyperspectral imaging. Food Chem. 2023, 409, 135251. [Google Scholar] [CrossRef] [PubMed]
  42. Zhou, X.; Sun, J.; Yan, T.; Lu, B.; Hang, Y.Y.; Chen, Q.S. Hyperspectral technique combined with deep learning algorithm for detection of compound heavy metals in lettuce. Food Chem. 2020, 321, 126503. [Google Scholar] [CrossRef]
  43. Yang, G.; Tian, X.; Fan, Y.Y.; Xiang, D.Q.; An, T.; Huang, W.Q.; Long, Y. Identification of peanut kernels infected with multiple Aspergillus flavus fungi using line-scan Raman hyperspectral imaging. Food Anal. Methods 2024, 17, 155–165. [Google Scholar] [CrossRef]
  44. Lin, H.; Chen, Z.Y.; Adade, S.Y.S.S.; Yang, W.J.; Chen, Q.S. Detection of maize mold based on a nanocomposite colorimetric sensor array under different substrates. J. Agric. Food Chem. 2024, 72, 11164–11173. [Google Scholar] [CrossRef] [PubMed]
  45. Wang, B.; Deng, J.H.; Jiang, H. Markov transition field combined with convolutional neural network improved the predictive performance of near-infrared spectroscopy models for determination of aflatoxin b1 in maize. Foods 2022, 11, 2210. [Google Scholar] [CrossRef]
  46. Zhu, J.J.; Jiang, X.; Rong, Y.W.; Wei, W.Y.; Wu, S.D.; Jiao, T.H.; Chen, Q.S. Label-free detection of trace level zearalenone in corn oil by surface-enhanced Raman spectroscopy (SERS) coupled with deep learning models. Food Chem. 2023, 414, 135705. [Google Scholar] [CrossRef] [PubMed]
  47. Zhao, Y.Q.; Deng, J.H.; Chen, Q.S.; Jiang, H. Near-infrared spectroscopy based on colorimetric sensor array coupled with convolutional neural network detecting zearalenone in wheat. Food Chem. X 2024, 22, 101322. [Google Scholar] [CrossRef] [PubMed]
  48. Cheng, J.H.; Sun, J.; Yao, K.S.; Xu, M.; Dai, C.X. Multi-task convolutional neural network for simultaneous monitoring of lipid and protein oxidative damage in frozen-thawed pork using hyperspectral imaging. Meat Sci. 2023, 201, 109196. [Google Scholar] [CrossRef]
  49. Sun, J.; Cheng, J.H.; Xu, M.; Yao, K.S. A method for freshness detection of pork using two-dimensional correlation spectroscopy images combined with dual-branch deep learning. J. Food Compos. Anal. 2024, 129, 106144. [Google Scholar] [CrossRef]
  50. Cheng, J.H.; Sun, J.; Shi, L.; Dai, C.X. An effective method fusing electronic nose and fluorescence hyperspectral imaging for the detection of pork freshness. Food Biosci. 2024, 59, 103880. [Google Scholar] [CrossRef]
  51. Yang, F.Y.; Sun, J.; Cheng, J.H.; Fu, L.H.; Wang, S.M.; Xu, M. Detection of starch in minced chicken meat based on hyperspectral imaging technique and transfer learning. J. Food Process Eng. 2023, 46, e14304. [Google Scholar] [CrossRef]
  52. Huang, Y.X.; Pan, Y.; Liu, C.; Zhou, L.; Tang, L.J.; Wei, H.Y.; Fan, K.; Wang, A.C.; Tang, Y. Rapid and non-destructive geographical origin identification of chuanxiong slices using near-infrared spectroscopy and convolutional neural networks. Agriculture 2024, 14, 1281. [Google Scholar] [CrossRef]
  53. Xu, M.; Sun, J.; Yao, K.S.; Wu, X.H.; Shen, J.F.; Cao, Y.; Zhou, X. Nondestructive detection of total soluble solids in grapes using VMD-RC and hyperspectral imaging. J. Food Sci. 2022, 87, 326–338. [Google Scholar] [CrossRef]
  54. Xu, B.; Cui, X.; Ji, W.; Yuan, H.; Wang, J.C. Apple grading method design and implementation for automatic grader based on improved YOLOv5. Agriculture 2023, 13, 124. [Google Scholar] [CrossRef]
  55. Guo, J.L.; Zhang, K.X.; Adade, S.Y.S.S.; Lin, J.S.; Lin, H.; Chen, Q.S. Tea grading, blending, and matching based on computer vision and deep learning. J. Sci. Food Agric. 2025, 105, 3239–3251. [Google Scholar] [CrossRef] [PubMed]
  56. Zhang, F.; Chen, Z.J.; Ali, S.; Yang, N.; Fu, S.L.; Zhang, Y.K. Multi-class detection of cherry tomatoes using improved YOLOv4-Tiny. Int. J. Agric. Biol. Eng. 2023, 16, 225–231. [Google Scholar] [CrossRef]
  57. Chen, Z.Y.; Sun, Y.; Shi, J.Y.; Zhang, W.; Zhang, X.A.; Huang, X.W.; Zou, X.B.; Li, Z.H.; Wei, R.C. Facile synthesis of Au@Ag core-shell nanorod with bimetallic synergistic effect for SERS detection of thiabendazole in fruit juice. Food Chem. 2022, 370, 131276. [Google Scholar] [CrossRef]
  58. Qiu, H.; Gao, L.; Wang, J.X.; Pan, J.M.; Yan, Y.S.; Zhang, X.F. A precise and efficient detection of Seta-Cyfluthrin via fluorescent molecularly imprinted polymers with ally fluorescein as functional monomer in agricultural products. Food Chem. 2017, 217, 620–627. [Google Scholar] [CrossRef]
  59. Li, S.H.; Zhang, S.; Wu, J.Z.; Khan, I.M.; Chen, M.; Jiao, T.H.; Wei, J.; Chen, X.M.; Chen, Q.M.; Chen, Q.S. Upconversion fluorescence nanosensor based on enzymatic inhibited and copper-triggered o-phenylenediamine oxidation for the detection of dimethoate pesticides. Food Chem. 2024, 453, 139666. [Google Scholar] [CrossRef]
  60. Chen, P.; Yin, L.M.; El-Seedi, H.R.; Zou, X.B.; Guo, Z.M. Green reduction of silver nanoparticles for cadmium detection in food using surface-enhanced Raman spectroscopy coupled multivariate calibration. Food Chem. 2022, 394, 133481. [Google Scholar] [CrossRef] [PubMed]
  61. Li, J.Y.; Li, M.Q.; Mao, H.P.; Zhu, W.J. Diagnosis of potassium nutrition level in Solanum lycopersicum based on electrical impedance. Biosyst. Eng. 2016, 147, 130–138. [Google Scholar] [CrossRef]
  62. Balyan, S.; Jangir, H.; Tripathi, S.N.; Tripathi, A.; Jhang, T.; Pandey, P. Seeding a sustainable future: Navigating the digital horizon of smart agriculture. Sustainability 2024, 16, 475. [Google Scholar] [CrossRef]
  63. Abubaker, B.A.; Yan, H.F.; Hong, L.; You, W.Y.; Elshaikh, N.A.; Hussein, G.; Pandab, S.; Hassan, S. Enhancement of depleted loam soil as well as cucumber productivity utilizing biochar under water stress. Commun. Soil Sci. Plant Anal. 2019, 50, 49–64. [Google Scholar]
  64. Reda, R.; Saffaj, T.; Ilham, B.; Saidi, O.; Issam, K.; Brahim, L.; El, H.; El Hadrami, E. A comparative study between a new method and other machine learning algorithms for soil organic carbon and total nitrogen prediction using near infrared spectroscopy. Chemom. Intell. Lab. Syst. 2019, 195, 103873. [Google Scholar] [CrossRef]
  65. Khanal, S.; Fulton, J.; Klopfenstein, A.; Douridas, N.; Shearer, S. Integration of high resolution remotely sensed data and machine learning techniques for spatial prediction of soil properties and corn yield. Comput. Electron. Agric. 2018, 153, 213–225. [Google Scholar] [CrossRef]
  66. Bhattacharyya, D.; Joshua, E.S.N.; Rao, N.T.; Kim, T. Hybrid CNN-SVM classifier approaches to process semi-structured data in sugarcane yield forecasting production. Agronomy 2023, 13, 1169. [Google Scholar] [CrossRef]
  67. Cui, X.; Han, W.T.; Zhang, H.H.; Dong, Y.X.; Ma, W.T.; Zhai, X.D.; Zhang, L.Y.; Li, G. Estimating and mapping the dynamics of soil salinity under different crop types using Sentinel-2 satellite imagery. Geoderma 2023, 440, 116738. [Google Scholar] [CrossRef]
  68. Han, S.H.; Kim, S.; Chang, H.N.; Li, G.L.; Son, Y.H. Increased soil temperature stimulates changes in carbon, nitrogen, and mass loss in the fine roots of Pinus koraiensis under experimental warming and drought. Turk. J. Agric. For. 2019, 43, 80–87. [Google Scholar] [CrossRef]
  69. Yue, Y.; Li, J.H.; Fan, L.F.; Zhang, L.L.; Zhao, P.F.; Zhou, Q.; Wang, N.; Wang, Z.Y.; Huang, L.; Dong, X.H. Prediction of maize growth stages based on deep learning. Comput. Electron. Agric. 2020, 172, 105351. [Google Scholar] [CrossRef]
  70. Devarashetti, D.; Aravinth, S.S. Design and development of gorilla optimized deep resilient architecture for prediction of agro-climatic changes to increase the crop-yield production. Int. J. Comput. Intell. Syst. 2025, 18, 136. [Google Scholar] [CrossRef]
  71. Li, L.; Liu, J.Z.; Peng, Q.; Wang, X.W.; Xu, J.T.; Cai, H.J. Propagation process-based agricultural drought typology and its copula-based risk. Irrig. Drain. 2024, 73, 1496–1519. [Google Scholar] [CrossRef]
  72. Guo, F.W.; Wang, P.X.; Tansey, K.; Sun, Y.F.; Li, M.Q.; Zhou, J. Pixel-based agricultural drought forecasting based on deep learning approach: Considering the linear trend and residual feature of vegetation temperature condition index. Comput. Electron. Agric. 2025, 237, 110570. [Google Scholar] [CrossRef]
  73. Khan, M.M.H.; Muhammad, N.S.; El-Shafie, A. Wavelet based hybrid ANN-ARIMA models for meteorological drought forecasting. J. Hydrol. 2020, 590, 125380. [Google Scholar] [CrossRef]
  74. Gowri, L.; Manjula, K.R. Development of agricultural drought prediction using triple exponential-long short-term memory (TEX-LSTM) model. Theor. Appl. Climatol. 2025, 156, 426. [Google Scholar] [CrossRef]
  75. Mokhtarzad, M.; Eskandari, F.; Vanjani, N.J.; Arabasadi, A. Drought forecasting by ANN, ANFIS, and SVM and comparison of the models. Environ. Earth Sci. 2017, 76, 729. [Google Scholar] [CrossRef]
  76. De Vos, K.; Gebruers, S.; Degerickx, J.; Iordache, M.D.; Keune, J.; Di Giuseppe, F.; Pereira, F.V.; Wouters, H.; Swinnen, E.; Van Rossum, K.; et al. Predicting below-average NDVI anomalies for agricultural drought impact forecasting. Remote Sens. Environ. 2025, 330, 114980. [Google Scholar] [CrossRef]
  77. Wang, X.; Yang, Y.; Zhao, X.; Huang, M.; Zhu, Q.B. Integrating field images and microclimate data to realize multi-day ahead forecasting of maize crop coverage using CNN-LSTM. Int. J. Agric. Biol. Eng. 2023, 16, 199–206. [Google Scholar] [CrossRef]
  78. Ahmad, R.; Yang, B.; Ettlin, G.; Berger, A.; Rodríguez-Bocca, P. A machine-learning based ConvLSTM architecture for NDVI forecasting. Int. Trans. Oper. Res. 2023, 30, 2025–2048. [Google Scholar] [CrossRef]
  79. Zhang, L.L.; Wang, X.W.; Zhang, H.H.; Zhang, B.; Zhang, J.; Hu, X.K.; Du, X.T.; Cai, J.R.; Jia, W.D.; Wu, C.D. UAV-based multispectral winter wheat growth monitoring with adaptive weight allocation. Agriculture 2024, 14, 1900. [Google Scholar] [CrossRef]
  80. Pal, P.; Landivar-Bowles, J.; Landivar-Scott, J.; Duffield, N.; Nowka, K.; Jung, J.H.; Chang, A.; Lee, K.; Zhao, L.; Bhandari, M. Unmanned aerial system and machine learning driven digital-twin framework for in-season cotton growth forecasting. Comput. Electron. Agric. 2025, 228, 109589. [Google Scholar] [CrossRef]
  81. Luo, X.L.; Sun, C.J.; He, Y.; Zhu, F.L.; Li, X.L. Cross-cultivar prediction of quality indicators of tea based on VIS-NIR hyperspectral imaging. Ind. Crop. Prod. 2023, 202, 117009. [Google Scholar] [CrossRef]
  82. Tsai, A.C.; Saengsoi, A. Artificial intelligent IoT-based cognitive hardware for agricultural precision analysis. Mob. Netw. Appl. 2024, 29, 334–348. [Google Scholar] [CrossRef]
  83. Acquah, S.J.; Yan, H.F.; Zhang, C.; Wang, G.Q.; Zhao, B.S.; Wu, H.M.; Zhang, H.N. Application and evaluation of Stanghellini model in the determination of crop evapotranspiration in a naturally ventilated greenhouse. Int. J. Agric. Biol. Eng. 2018, 11, 95–103. [Google Scholar] [CrossRef]
  84. Francik, S.; Kurpaska, S. The use of artificial neural networks for forecasting of air temperature inside a heated foil tunnel. Sensors 2020, 20, 652. [Google Scholar] [CrossRef]
  85. Lee, M.H.; Yao, M.H.; Kow, P.Y.; Kuo, B.J.; Chang, F.J. An artificial intelligence-powered environmental control system for resilient and efficient greenhouse farming. Sustainability 2024, 16, 10958. [Google Scholar] [CrossRef]
  86. Zhu, X.Y.; Chikangaise, P.; Shi, W.D.; Chen, W.H.; Yuan, S.Q. Review of intelligent sprinkler irrigation technologies for remote autonomous system. Int. J. Agric. Biol. Eng. 2018, 11, 23–30. [Google Scholar] [CrossRef]
  87. Gao, S.K.; Yu, S.E.; Wang, M.; Meng, J.J.; Tang, S.H.; Ding, J.H.; Li, S.; Miao, Z.M. Effect of different controlled irrigation and drainage regimes on crop growth and water use in paddy rice. Int. J. Agric. Biol. 2018, 20, 486–492. [Google Scholar] [CrossRef]
  88. Darko, R.O.; Yuan, S.Q.; Hong, L.; Liu, J.P.; Yan, H.F. Irrigation, a productive tool for food security—A review. Acta Agric. Scand. Sect. B-Soil Plant Sci. 2016, 66, 191–206. [Google Scholar] [CrossRef]
  89. Elbeltagi, A.; Srivastava, A.; Deng, J.S.; Li, Z.B.; Raza, A.; Khadke, L.; Yu, Z.L.; El-Rawy, M. Forecasting vapor pressure deficit for agricultural water management using machine learning in semi-arid environments. Agric. Water Manag. 2023, 283, 108302. [Google Scholar] [CrossRef]
  90. Raza, A.; Saber, K.; Hu, Y.G.; Ray, R.L.; Ziya Kaya, Y.; Dehghanisanij, H.; Kisi, O.; Elbeltagi, A. Modelling reference evapotranspiration using principal component analysis and machine learning methods under different climatic environments. Irrig. Drain. 2023, 72, 945–970. [Google Scholar] [CrossRef]
  91. Tong, Z.Y.; Zhang, S.R.; Yu, J.X.; Zhang, X.L.; Wang, B.J.; Zheng, W.A. A hybrid prediction model for CatBoost tomato transpiration rate based on feature extraction. Agronomy 2023, 13, 2371. [Google Scholar] [CrossRef]
  92. Zhu, Q.Z.; Zhang, H.Y.; Zhu, Z.H.; Gao, Y.Y.; Chen, L.P. Structural design and simulation of pneumatic conveying line for a paddy side-deep fertilisation system. Agriculture 2022, 12, 867. [Google Scholar] [CrossRef]
  93. Zhang, L.Y.; Song, X.Y.; Niu, Y.X.; Zhang, H.H.; Wang, A.C.; Zhu, Y.H.; Zhu, X.Y.; Chen, L.P.; Zhu, Q.Z. Estimating winter wheat plant nitrogen content by combining spectral and texture features based on a low-cost UAV RGB system throughout the growing season. Agriculture 2024, 14, 456. [Google Scholar] [CrossRef]
  94. Zhang, L.Y.; Wang, A.C.; Zhang, H.Y.; Zhu, Q.Z.; Zhang, H.H.; Sun, W.H.; Niu, Y.X. Estimating leaf chlorophyll content of winter wheat from UAV multispectral images using machine learning algorithms under different species, growth stages, and nitrogen stress conditions. Agriculture 2024, 14, 1064. [Google Scholar] [CrossRef]
  95. Schauberger, B.; Jägermeyr, J.; Gornott, C. A systematic review of local to regional yield forecasting approaches and frequently used data resources. Eur. J. Agron. 2020, 120, 126153. [Google Scholar] [CrossRef]
  96. Hirooka, Y.; Eda, S.; Ikazaki, K.; Batieno, J.B.; Iseki, K. Development of a simple prediction model for cowpea yield under environmentally growth-restricted conditions. Sci. Rep. 2024, 14, 28706. [Google Scholar] [CrossRef]
  97. Hoque, M.J.; Islam, M.S.; Uddin, J.; Samad, M.A.; De Abajo, B.S.; Vargas, D.L.R.; Ashraf, I. Incorporating meteorological data and pesticide information to forecast crop yields using machine learning. IEEE Access 2024, 12, 47768–47786. [Google Scholar] [CrossRef]
  98. Kularathne, S.; Rathnayake, N.; Herath, M.; Rathnayake, U.; Hoshino, Y. Impact of economic indicators on rice production: A machine learning approach in Sri Lanka. PLoS ONE 2024, 19, e303883. [Google Scholar] [CrossRef]
  99. Guo, Y.L. Integrating genetic algorithm with ARIMA and reinforced random forest models to improve agriculture economy and yield forecasting. Soft Comput. 2024, 28, 1685–1706. [Google Scholar] [CrossRef]
  100. Ashfaq, M.; Khan, I.; Afzal, R.F.; Shah, D.; Ali, S.; Tahir, M. Enhanced wheat yield prediction through integrated climate and satellite data using advanced AI techniques. Sci. Rep. 2025, 15, 18093. [Google Scholar] [CrossRef]
  101. Pinto, A.A.; Zerbato, C.; Rolim, G.D. A machine learning models approach and remote sensing to forecast yield in corn with based cumulative growth degree days. Theor. Appl. Climatol. 2024, 155, 7285–7294. [Google Scholar] [CrossRef]
  102. Xie, J.Y.; Zhang, D.Y.; Jin, N.; Cheng, T.; Zhao, G.; Han, D.; Niu, Z.; Li, W.F. Coupling crop growth models and machine learning for scalable winter wheat yield estimation across major wheat regions in China. Agric. For. Meteorol. 2025, 372, 110687. [Google Scholar] [CrossRef]
  103. Feng, P.Y.; Wang, B.; Liu, D.; Waters, C.; Xiao, D.P.; Shi, L.J.; Yu, Q. Dynamic wheat yield forecasts are improved by a hybrid approach using a biophysical model and machine learning technique. Agric. For. Meteorol. 2020, 285, 107922. [Google Scholar] [CrossRef]
  104. Lee, D.H.; Davenport, F.; Shukla, S.; Husak, G.; Funk, C.; Harrison, L.; Mcnally, A.; Rowland, J.; Budde, M.; Verdin, J. Maize yield forecasts for sub-Saharan Africa using earth observation data and machine learning. Glob. Food Secur.-Agric. Policy 2022, 33, 100643. [Google Scholar] [CrossRef]
  105. Wei, L.L.; Yang, H.S.; Niu, Y.X.; Zhang, Y.N.; Xu, L.Z.; Chai, X.Y. Wheat biomass, yield, and straw-grain ratio estimation from multi-temporal UAV-based RGB and multispectral images. Biosyst. Eng. 2023, 234, 187–205. [Google Scholar] [CrossRef]
  106. Khan, T.; Sherazi, H.H.R.; Ali, M.; Letchmunan, S.; Butt, U.M. Deep learning-based growth prediction system: A use case of China agriculture. Agronomy 2021, 11, 1551. [Google Scholar] [CrossRef]
  107. Ashfaq, M.; Khan, I.; Shah, D.; Ali, S.; Tahir, M. Predicting wheat yield using deep learning and multi-source environmental data. Sci. Rep. 2025, 15, 26446. [Google Scholar] [CrossRef]
  108. Sathya, P.; Gnanasekaran, P. Paddy yield prediction in Tamilnadu delta region using MLR-LSTM model. Appl. Artif. Intell. 2023, 37, 2175113. [Google Scholar] [CrossRef]
  109. Shook, J.; Gangopadhyay, T.; Wu, L.J.; Ganapathysubramanian, B.; Sarkar, S.; Singh, A.K. Crop yield prediction integrating genotype and weather variables using deep learning. PLoS ONE 2021, 16, e252402. [Google Scholar] [CrossRef] [PubMed]
  110. Di, Y.; Gao, M.F.; Feng, F.K.; Li, Q.; Zhang, H.J. A new framework for winter wheat yield prediction integrating deep learning and Bayesian optimization. Agronomy 2022, 12, 3194. [Google Scholar] [CrossRef]
  111. Ren, Y.T.; Li, Q.Z.; Du, X.; Zhang, Y.; Wang, H.Y.; Shi, G.W.; Wei, M.F. Analysis of corn yield prediction potential at various growth phases using a process-based model and deep learning. Plants 2023, 12, 446. [Google Scholar] [CrossRef]
  112. Fieuzal, R.; Marais Sicre, C.M.; Baup, F. Estimation of corn yield using multi-temporal optical and radar satellite data and artificial neural networks. Int. J. Appl. Earth Obs. Geoinf. 2017, 57, 14–23. [Google Scholar] [CrossRef]
  113. Liang, Z.W.; Wada, M.E. Development of cleaning systems for combine harvesters: A review. Biosyst. Eng. 2023, 236, 79–102. [Google Scholar] [CrossRef]
  114. Wang, H.Y.; Lao, L.Y.; Zhang, H.L.; Tang, Z.; Qian, P.F.; He, Q. Structural fault detection and diagnosis for combine harvesters: A critical review. Sensors 2025, 25, 3851. [Google Scholar] [CrossRef] [PubMed]
  115. Hao, S.H.; Tang, Z.; Guo, S.B.; Ding, Z.; Su, Z. Model and method of fault signal diagnosis for blockage and slippage of rice threshing drum. Agriculture 2022, 12, 1968. [Google Scholar] [CrossRef]
  116. Li, Y.M.; Liu, Y.B.; Ji, K.Z.; Zhu, R.H. A fault diagnosis method for a differential inverse gearbox of a crawler combine harvester based on order analysis. Agriculture 2022, 12, 1300. [Google Scholar] [CrossRef]
  117. Zhou, X.L.; Xu, X.C.; Zhang, J.F.; Wang, L.; Wang, D.F.; Zhang, P.P. Fault diagnosis of silage harvester based on a modified random forest. Inf. Process. Agric. 2023, 10, 301–311. [Google Scholar] [CrossRef]
  118. Gomez-Gil, F.J.; Martínez-Martínez, V.; Ruiz-Gonzalez, R.; Martínez-Martínez, L.; Gomez-Gil, J. Vibration-based monitoring of agro-industrial machinery using a k-nearest neighbors (kNN) classifier with a harmony search (HS) frequency selector algorithm. Comput. Electron. Agric. 2024, 217, 108556. [Google Scholar] [CrossRef]
  119. Martínez-Martínez, V.; Gomez-Gil, F.J.; Gomez-Gil, J.; Ruiz-Gonzalez, R. An artificial neural network based expert system fitted with genetic algorithms for detecting the status of several rotary components in agro-industrial machines using a single vibration signal. Expert Syst. Appl. 2015, 42, 6433–6441. [Google Scholar] [CrossRef]
  120. She, D.M.; Yang, Z.C.; Duan, Y.D.; Pecht, M.G. A meta transfer learning-driven few-shot fault diagnosis method for combine harvester gearboxes. Comput. Electron. Agric. 2024, 227, 109605. [Google Scholar] [CrossRef]
  121. Ling, G.M.; Zhang, L.K.; Liu, W.R.; Lyu, Z.; Xu, H.M.; Wu, Q.; Zhang, G.Z. Diagnosis model of threshing cylinder blockage condition based on hybrid sparrow search algorithm and support vector machine. Comput. Electron. Agric. 2025, 237, 110660. [Google Scholar] [CrossRef]
  122. Zhang, Y.Y.; Zhang, B.; Shen, C.; Liu, H.L.; Huang, J.C.; Tian, K.P.; Tang, Z. Review of the field environmental sensing methods based on multi-sensor information fusion technology. Int. J. Agric. Biol. Eng. 2024, 17, 1–13. [Google Scholar] [CrossRef]
  123. Zou, X.G.; Liu, W.C.; Huo, Z.Q.; Wang, S.Y.; Chen, Z.L.; Xin, C.R.; Bai, Y.A.; Liang, Z.Y.; Gong, Y.; Qian, Y.; et al. Current status and prospects of research on sensor fault diagnosis of agricultural Internet of Things. Sensors 2023, 23, 2528. [Google Scholar] [CrossRef] [PubMed]
  124. Karimzadeh, S.; Li, Z.; Ahamed, M.S. Machine learning-based fault detection and diagnosis of electrical conductivity and pH sensors in hydroponic systems. Comput. Electron. Agric. 2025, 237, 110544. [Google Scholar] [CrossRef]
  125. Kaur, G.; Bhattacharya, M. Intelligent fault diagnosis for AIoT-based smart farming applications. IEEE Sens. J. 2023, 23, 28261–28269. [Google Scholar] [CrossRef]
  126. Liang, S.C.; Liu, P.Z.; Zhang, Z.W.; Wu, Y. Research on fault diagnosis of agricultural IoT sensors based on improved dung beetle optimization–support vector machine. Sustainability 2024, 16, 10001. [Google Scholar] [CrossRef]
  127. Lu, W.D.; Xu, X.H.; Huang, G.X.; Li, B.; Wu, Y.; Zhao, N.; Yu, F.R. Energy efficiency optimization in SWIPT-enabled WSNs for smart agriculture. IEEE Trans. Ind. Inform. 2021, 17, 4335–4344. [Google Scholar] [CrossRef]
  128. Barriga, A.; Barriga, J.A.; Moñino, M.J.; Clemente, P.J. IoT-based expert system for fault detection in Japanese Plum leaf-turgor pressure WSN. Internet Things 2023, 23, 100829. [Google Scholar] [CrossRef]
  129. Salhi, M.S.; Salhi, M.; Touti, E.; Zitouni, N.; Benzarti, F. On the use of wireless sensor nodes for agricultural smart fault detection. Wirel. Pers. Commun. 2024, 134, 95–117. [Google Scholar] [CrossRef]
  130. Zhu, Z.; Zeng, L.X.; Chen, L.; Zou, R.; Cai, Y.F. Fuzzy adaptive energy management strategy for a hybrid agricultural tractor equipped with HMCVT. Agriculture 2022, 12, 1986. [Google Scholar] [CrossRef]
  131. Xu, L.Y.; Zhang, G.D.; Zhao, S.X.; Wu, Y.W.; Xi, Z.Q. Fault diagnosis of tractor transmission system based on time GAN and transformer. IEEE Access 2024, 12, 107153–107169. [Google Scholar] [CrossRef]
  132. Ni, H.T.; Lu, L.Q.; Sun, M.; Bai, X.; Yin, Y.F. Research on fault diagnosis of PST electro-hydraulic control system of heavy tractor based on support vector machine. Processes 2022, 10, 791. [Google Scholar] [CrossRef]
  133. Xue, L.J.; Jiang, H.H.; Zhao, Y.H.; Wang, J.B.; Wang, G.M.; Xiao, M.H. Fault diagnosis of wet clutch control system of tractor hydrostatic power split continuously variable transmission. Comput. Electron. Agric. 2022, 194, 106778. [Google Scholar] [CrossRef]
  134. Xiao, M.H.; Wang, W.C.; Wang, K.X.; Zhang, W.; Zhang, H.T. Fault diagnosis of high-power tractor engine based on competitive multiswarm cooperative particle swarm optimizer algorithm. Shock Vib. 2020, 2020, 8829257. [Google Scholar] [CrossRef]
  135. Wang, Z.; Chen, Y.J.; Rakibuzzaman, M.; Agarwal, R.; Zhou, L. Numerical and experimental investigations of a double-suction pump with a middle spacer and a staggered impeller. Irrig. Drain. 2025, 74, 944–956. [Google Scholar] [CrossRef]
  136. Balaji, A.M.; Venkatesh, N.S.; Sakthivel, N.R.; Sugumaran, V. Can pretrained networks be used in fault diagnosis of monoblock centrifugal pump? Proc. Inst. Mech. Eng. Part E J. Process Mech. Eng. 2025, 239, 1306–1317. [Google Scholar]
  137. Loukatos, D.; Kondoyanni, M.; Alexopoulos, G.; Maraveas, C.; Arvanitis, K.G. On-device intelligence for malfunction detection of water pump equipment in agricultural premises: Feasibility and experimentation. Sensors 2023, 23, 839. [Google Scholar] [CrossRef]
  138. Wang, Z.Y.; Xie, R.T.; Chen, J.W.; Kang, T.B.; Zhang, M. A flexible winding identification method utilizing self-attention and DNN for the double-suction centrifugal pump. IEEE Trans. Instrum. Meas. 2025, 74, 3525511. [Google Scholar] [CrossRef]
  139. Prasshanth, C.V.; Naveen Venkatesh, S.N.; Mahanta, T.K.; Sakthivel, N.R.; Sugumaran, V. Fault diagnosis of monoblock centrifugal pumps using pre-trained deep learning models and scalogram images. Eng. Appl. Artif. Intell. 2024, 136, 109022. [Google Scholar] [CrossRef]
  140. Zou, F.Q.; Sang, S.T.; Jiang, M.; Guo, H.L.; Yan, S.Q.; Li, X.M.; Liu, X.W.; Zhang, H.F. A few-shot sample augmentation algorithm based on SCAM and DEPS for pump fault diagnosis. ISA Trans. 2023, 142, 445–453. [Google Scholar] [CrossRef]
  141. Zou, F.Q.; Sang, S.T.; Jiang, M.; Li, X.M.; Zhang, H.F. Few-shot pump anomaly detection via Diff-WRN-based model-agnostic meta-learning strategy. Struct. Health Monit. 2023, 22, 2674–2687. [Google Scholar] [CrossRef]
  142. Xie, F.Y.; Li, G.; Liu, H.; Sun, E.G.; Wang, Y. Advancing early fault diagnosis for multi-domain agricultural machinery rolling bearings through data enhancement. Agriculture 2024, 14, 112. [Google Scholar] [CrossRef]
  143. Xie, F.Y.; Sun, E.G.; Wang, L.L.; Wang, G.; Xiao, Q. Rolling bearing fault diagnosis in agricultural machinery based on multi-source locally adaptive graph convolution. Agriculture 2024, 14, 1333. [Google Scholar] [CrossRef]
  144. Rajakumar, M.P.; Ramya, J.; Maheswari, B.U. Health monitoring and fault prediction using a lightweight deep convolutional neural network optimized by levy flight optimization algorithm. Neural Comput. Appl. 2021, 33, 12513–12534. [Google Scholar] [CrossRef]
  145. Gupta, N.; Khosravy, M.; Gupta, S.; Dey, N.; Crespo, R.G. Lightweight artificial intelligence technology for health diagnosis of agriculture vehicles: Parallel evolving artificial neural networks by genetic algorithm. Int. J. Parallel Program. 2022, 50, 1–26. [Google Scholar] [CrossRef]
  146. Mystkowski, A.; Wolniakowski, A.; Idzkowski, A.; Ciężkowski, M.; Ostaszewski, M.; Kociszewski, R.; Kotowski, A.; Kulesza, Z.; Dobrzański, S.; Miastkowski, K. Measurement and diagnostic system for detecting and classifying faults in the rotary hay tedder using multilayer perceptron neural networks. Eng. Appl. Artif. Intell. 2024, 133, 108513. [Google Scholar] [CrossRef]
  147. Luque, A.; Campos Olivares, D.C.; Mazzoleni, M.; Ferramosca, A.; Previdi, F.; Carrasco, A. Use of artificial intelligence techniques in characterization of vibration signals for application in agri-food engineering. Appl. Intell. 2025, 55, 534. [Google Scholar] [CrossRef]
  148. Li, C.Q.; Wu, J.G.; Pan, X.Y.; Dou, H.J.; Zhao, X.G.; Gao, Y.Y.; Yang, S.; Zhai, C.Y. Design and experiment of a breakpoint continuous spraying system for automatic-guidance boom sprayers. Agriculture 2023, 13, 2203. [Google Scholar] [CrossRef]
  149. Qing, Y.R.; Li, Y.M.; Yang, Y.; Xu, L.Z.; Ma, Z. Development and experiments on reel with improved tine trajectory for harvesting oilseed rape. Biosyst. Eng. 2021, 206, 19–31. [Google Scholar] [CrossRef]
  150. Liu, L.; Yang, F.; Liu, X.Y.; Du, Y.F.; Li, X.Y.; Li, G.R.; Chen, D.; Zhu, Z.X.; Song, Z.H. A review of the current status and common key technologies for agricultural field robots. Comput. Electron. Agric. 2024, 227, 109630. [Google Scholar] [CrossRef]
  151. Jia, W.K.; Zheng, Y.J.; Zhao, D.A.; Yin, X.; Liu, X.Y.; Du, R.C. Preprocessing method of night vision image application in apple harvesting robot. Int. J. Agric. Biol. Eng. 2018, 11, 158–163. [Google Scholar] [CrossRef]
  152. Wang, J.Z.; Gao, Z.H.; Zhang, Y.; Zhou, J.; Wu, J.Z.; Li, P.P. Real-time detection and location of potted flowers based on a ZED camera and a YOLO V4-tiny deep learning algorithm. Horticulturae 2022, 8, 21. [Google Scholar] [CrossRef]
  153. Cui, L.F.; Le, F.X.; Xue, X.Y.; Sun, T.; Jiao, Y.X. Design and experiment of an agricultural field management robot and its navigation control system. Agronomy 2024, 14, 654. [Google Scholar] [CrossRef]
  154. Hu, T.T.; Wang, W.B.; Gu, J.A.; Xia, Z.L.; Zhang, J.; Wang, B. Research on apple object detection and localization method based on improved YOLOX and RGB-D images. Agronomy 2023, 13, 1816. [Google Scholar] [CrossRef]
  155. Gugan, G.; Haque, A. Path planning for autonomous drones: Challenges and future directions. Drones 2023, 7, 169. [Google Scholar] [CrossRef]
  156. Jiang, Y.; Xu, X.X.; Zheng, M.Y.; Zhan, Z.H. Evolutionary computation for unmanned aerial vehicle path planning: A survey. Artif. Intell. Rev. 2024, 57, 267. [Google Scholar] [CrossRef]
  157. Amer, N.H.; Zamzuri, H.; Hudha, K.; Kadir, Z.A. Modelling and control strategies in path tracking control for autonomous ground vehicles: A review of state of the art and challenges. J. Intell. Robot. Syst. 2017, 86, 225–254. [Google Scholar] [CrossRef]
  158. Zhu, F.H.; Chen, J.; Guan, Z.H.; Zhu, Y.H.; Shi, H.; Cheng, K. Development of a combined harvester navigation control system based on visual simultaneous localization and mapping-inertial guidance fusion. J. Agric. Eng. 2024, 55, 1583. [Google Scholar] [CrossRef]
  159. Zhou, H.X.; Wang, J.T.; Chen, Y.Q.; Hu, L.; Li, Z.H.; Xie, F.M.; He, J.; Wang, P. Neural network-based SLAM/GNSS fusion localization algorithm for agricultural robots in orchard GNSS-degraded or denied environments. Agriculture 2025, 15, 1612. [Google Scholar] [CrossRef]
  160. Li, Y.L.; Feng, Q.C.; Ji, C.; Sun, J.H.; Sun, Y. GNSS and LiDAR integrated navigation method in orchards with intermittent GNSS dropout. Appl. Sci. 2024, 14, 3231. [Google Scholar] [CrossRef]
  161. Ma, Z.; Yang, S.Y.; Li, J.B.; Qi, J.T. Research on SLAM localization algorithm for orchard dynamic vision based on YOLOD-SLAM2. Agriculture 2024, 14, 1622. [Google Scholar] [CrossRef]
  162. Liu, W.; Hu, J.P.; Liu, J.X.; Yue, R.C.; Zhang, T.F.; Yao, M.J.; Li, J. Method for the navigation line recognition of the ridge without crops via machine vision. Int. J. Agric. Biol. Eng. 2024, 17, 230–239. [Google Scholar] [CrossRef]
  163. Syed, T.N.; Zhou, J.; Lakhiar, I.A.; Marinello, F.; Gemechu, T.T.; Rottok, L.T.; Jiang, Z.Z. Enhancing autonomous orchard navigation: A real-time convolutional neural network-based obstacle classification system for distinguishing ‘real’ and ‘fake’ obstacles in agricultural robotics. Agriculture 2025, 15, 827. [Google Scholar] [CrossRef]
  164. Zhang, T.F.; Zhou, J.H.; Liu, W.; Yue, R.C.; Shi, J.W.; Zhou, C.J.; Hu, J.P. SN-CNN: A lightweight and accurate line extraction algorithm for seedling navigation in ridge-planted vegetables. Agriculture 2024, 14, 1446. [Google Scholar] [CrossRef]
  165. Teng, H.Z.; Wang, Y.P.; Chatziparaschis, D.; Karydis, K. Adaptive LiDAR odometry and mapping for autonomous agricultural mobile robots in unmanned farms. Comput. Electron. Agric. 2025, 232, 110023. [Google Scholar] [CrossRef]
  166. Firkat, E.; An, F.; Peng, B.; Zhang, J.L.; Mijit, T.; Ahat, A.; Zhu, J.H.; Hamdulla, A. FGSeg: Field-ground segmentation for agricultural robot based on LiDAR. Comput. Electron. Agric. 2023, 211, 107965. [Google Scholar] [CrossRef]
  167. Lu, E.; Xu, L.Z.; Li, Y.M.; Tang, Z.; Ma, Z. Modeling of working environment and coverage path planning method of combine harvesters. Int. J. Agric. Biol. Eng. 2020, 13, 132–137. [Google Scholar] [CrossRef]
  168. Zhou, X.Y.; Chen, W.M.; Wei, X.H. Improved field obstacle detection algorithm based on YOLOv8. Agriculture 2024, 14, 2263. [Google Scholar] [CrossRef]
  169. Jeon, C.W.; Kim, H.J.; Yun, C.H.; Han, X.Z.; Kim, J.H. Design and validation testing of a complete paddy field-coverage path planner for a fully autonomous tillage tractor. Biosyst. Eng. 2021, 208, 79–97. [Google Scholar] [CrossRef]
  170. Wang, L.H.; Wang, Z.X.; Liu, M.J.; Ying, Z.H.; Xu, N.H.; Meng, Q. Full coverage path planning methods of harvesting robot with multi-objective constraints. J. Intell. Robot. Syst. 2022, 106, 17. [Google Scholar] [CrossRef]
  171. Soitinaho, R.; Väyrynen, V.; Oksanen, T. Heuristic cooperative coverage path planning for multiple autonomous agricultural field machines performing sequentially dependent tasks of different working widths and turn characteristics. Biosyst. Eng. 2024, 242, 16–28. [Google Scholar] [CrossRef]
  172. Ahmed, S.; Qiu, B.; Kong, C.; Xin, H.; Ahmad, F.; Lin, J. A data-driven dynamic obstacle avoidance method for liquid-carrying plant protection UAVs. Agronomy 2022, 12, 873. [Google Scholar] [CrossRef]
  173. Wu, H.X.; Wang, X.Z.; Chen, X.G.; Zhang, Y.F.; Zhang, Y.W. Review on key technologies for autonomous navigation in field agricultural machinery. Agriculture 2025, 15, 1297. [Google Scholar] [CrossRef]
  174. Yang, Y.; Li, Y.K.; Wen, X.; Zhang, G.; Ma, Q.L.; Cheng, S.K.; Qi, J.; Xu, L.Y.; Chen, L.Q. An optimal goal point determination algorithm for automatic navigation of agricultural machinery: Improving the tracking accuracy of the pure pursuit algorithm. Comput. Electron. Agric. 2022, 194, 106760. [Google Scholar] [CrossRef]
  175. Cheng, J.; Zhang, B.L.; Zhang, C.B.; Zhang, Y.Y.; Shen, G. A model-free adaptive predictive path-tracking controller with pid terms for tractors. Biosyst. Eng. 2024, 242, 38–49. [Google Scholar] [CrossRef]
  176. Yang, W.H.; Ding, S.H.; Ding, C. Fast supertwisting sliding mode control with antipeaking extended state observer for path-tracking of unmanned agricultural vehicles. IEEE Trans. Ind. Electron. 2024, 71, 12973–12982. [Google Scholar] [CrossRef]
  177. Han, L.H.; Mao, H.P.; Hu, J.P.; Kumi, F. Development of a riding-type fully automatic transplanter for vegetable plug seedlings. Span. J. Agric. Res. 2019, 17, e205. [Google Scholar] [CrossRef]
  178. Han, L.; Mao, H.; Kumi, F.; Hu, J. Development of a multi-task robotic transplanting workcell for greenhouse seedlings. Appl. Eng. Agric. 2018, 34, 335–342. [Google Scholar] [CrossRef]
  179. Shanmugasundar, G.; Manoj Kumar, G.; Gouthem, S.E.; Surya Prakash, V. Design and development of solar powered autonomous seed sowing robot. J. Pharm. Negat. Results 2022, 13, 1013–1016. [Google Scholar] [CrossRef]
  180. Liu, J.Z.; Zhao, S.Y.; Li, N.; Faheem, M.; Zhou, T.; Cai, W.J.; Zhao, M.Z.; Zhu, X.Y.; Li, P.P. Development and field test of an autonomous strawberry plug seeding transplanter for use in elevated cultivation. Appl. Eng. Agric. 2019, 35, 1067–1078. [Google Scholar] [CrossRef]
  181. Abo-Habaga, M.; Ismail, Z.; Moustafa, N.; Okasha, M. Developing an automatic precision seeding unit (APSU) for pot seed planting. INMATEH-Agric. Eng. 2024, 74, 260–272. [Google Scholar] [CrossRef]
  182. Liu, H.M. Control of automatic seeding robot based on basketball movement capture. J. Intell. Fuzzy Syst. 2020, 38, 7475–7485. [Google Scholar] [CrossRef]
  183. Hiraguri, T.; Kimura, T.; Endo, K.; Ohya, T.; Takanashi, T.; Shimizu, H. Shape classification technology of pollinated tomato flowers for robotic implementation. Sci. Rep. 2023, 13, 2159. [Google Scholar] [CrossRef]
  184. Hiraguri, T.; Shimizu, H.; Kimura, T.; Matsuda, T.; Maruta, K.; Takemura, Y.; Ohya, T.; Takanashi, T. Autonomous drone-based pollination system using AI classifier to replace bees for greenhouse tomato cultivation. IEEE Access 2023, 11, 99352–99364. [Google Scholar] [CrossRef]
  185. Yang, M.H.; Lyu, H.C.; Zhao, Y.J.; Sun, Y.C.; Pan, H.; Sun, Q.; Chen, J.L.; Qiang, B.H.; Yang, H.B. Delivery of pollen to forsythia flower pistils autonomously and precisely using a robot arm. Comput. Electron. Agric. 2023, 214, 108274. [Google Scholar] [CrossRef]
  186. Gao, C.Q.; He, L.L.; Fang, W.T.; Wu, Z.C.; Jiang, H.H.; Li, R.; Fu, L.S. A novel pollination robot for kiwifruit flower based on preferential flowers selection and precisely target. Comput. Electron. Agric. 2023, 207, 107762. [Google Scholar] [CrossRef]
  187. Masuda, N.; Khalil, M.M.; Toda, S.; Takayama, K.; Kanada, A.; Mashimo, T. A suspended pollination robot with a flexible multi-degrees-of-freedom manipulator for self-pollinated plants. IEEE Access 2024, 12, 142449–142458. [Google Scholar] [CrossRef]
  188. Ju, J.Y.; Chen, G.Q.; Lv, Z.Y.; Zhao, M.Y.; Sun, L.; Wang, Z.T.; Wang, J.F. Design and experiment of an adaptive cruise weeding robot for paddy fields based on improved YOLOv5. Comput. Electron. Agric. 2024, 219, 108824. [Google Scholar] [CrossRef]
  189. Zhu, H.B.; Zhang, Y.Y.; Mu, D.L.; Bai, L.Z.; Zhuang, H.; Li, H. YOLOX-based blue laser weeding robot in corn field. Front. Plant Sci. 2022, 13, 1017803. [Google Scholar] [CrossRef] [PubMed]
  190. Hu, C.S.; Xie, S.Y.; Song, D.Z.; Thomasson, J.A.; Hardin, R.G.; Bagavathiannan, M. Algorithm and system development for robotic micro-volume herbicide spray towards precision weed management. IEEE Robot. Autom. Lett. 2022, 7, 11633–11640. [Google Scholar] [CrossRef]
  191. Zhao, P.; Chen, J.L.; Li, J.H.; Ning, J.F.; Chang, Y.M.; Yang, S.Q. Design and testing of an autonomous laser weeding robot for strawberry fields based on DIN-LW-YOLO. Comput. Electron. Agric. 2025, 229, 109808. [Google Scholar] [CrossRef]
  192. Zheng, S.Y.; Zhao, X.G.; Fu, H.; Tan, H.R.; Zhai, C.Y.; Chen, L.P. Design and experimental evaluation of a smart intra-row weed control system for open-field cabbage. Agronomy 2025, 15, 112. [Google Scholar] [CrossRef]
  193. Balabantaray, A.; Behera, S.; Liew, C.; Chamara, N.; Singh, M.; Jhala, A.J.; Pitla, S. Targeted weed management of palmer amaranth using robotics and deep learning (YOLOv7). Front. Robot. AI 2024, 11, 1441371. [Google Scholar] [CrossRef]
  194. Fan, X.P.; Chai, X.J.; Zhou, J.P.; Sun, T. Deep learning based weed detection and target spraying robot system at seedling stage of cotton field. Comput. Electron. Agric. 2023, 214, 108317. [Google Scholar] [CrossRef]
  195. Lei, X.J.; Liu, J.Z.; Jiang, H.K.; Xu, B.C.; Jin, Y.C.; Gao, J.A. Design and testing of a four-arm multi-joint apple harvesting robot based on singularity analysis. Agronomy 2025, 15, 1446. [Google Scholar] [CrossRef]
  196. Yu, Y.; Xie, H.H.; Zhang, K.L.; Wang, Y.J.; Li, Y.T.; Zhou, J.M.; Xu, L.Z. Design, development, integration, and field evaluation of a ridge-planting strawberry harvesting robot. Agriculture 2024, 14, 2126. [Google Scholar] [CrossRef]
  197. Yang, L.L.; Noguchi, T.; Hoshino, Y. Development of a pumpkin fruits pick-and-place robot using an RGB-D camera and a Yolo based object detection AI model. Comput. Electron. Agric. 2024, 227, 109625. [Google Scholar] [CrossRef]
  198. Yang, L.L.; Noguchi, T.; Hoshino, Y. Development of a grape cut point detection system using multi-cameras for a grape-harvesting robot. Sensors 2024, 24, 8035. [Google Scholar] [CrossRef] [PubMed]
  199. Chang, C.L.; Huang, C.C. Design and implementation of an AI-based robotic arm for strawberry harvesting. Agriculture 2024, 14, 2057. [Google Scholar] [CrossRef]
  200. Choi, D.W.; Park, J.H.; Yoo, J.H.; Ko, K.E. Ai-driven adaptive grasping and precise detaching robot for efficient citrus harvesting. Comput. Electron. Agric. 2025, 232, 110131. [Google Scholar] [CrossRef]
  201. Bu, F.Y.; Wang, X. A smart agriculture IoT system based on deep reinforcement learning. Futur. Gener. Comp. Syst. 2019, 99, 500–507. [Google Scholar] [CrossRef]
  202. Mohamed, T.M.K.; Gao, J.M.; Tunio, M. Development and experiment of the intelligent control system for rhizosphere temperature of aeroponic lettuce via the Internet of Things. Int. J. Agric. Biol. Eng. 2022, 15, 225–233. [Google Scholar] [CrossRef]
  203. Qureshi, W.A.; Gao, J.M.; Elsherbiny, O.; Mosha, A.H.; Tunio, M.H.; Qureshi, J.A. Boosting aeroponic system development with plasma and high-efficiency tools: AI and IoT-A Review. Agronomy 2025, 15, 546. [Google Scholar] [CrossRef]
  204. Shi, B.; Sreeram, V.; Zhao, D.A.; Duan, S.L.; Jiang, J.M. A wireless sensor network-based monitoring system for freshwater fishpond aquaculture. Biosyst. Eng. 2018, 172, 57–66. [Google Scholar] [CrossRef]
  205. Ma, Y.W.; Chen, J.L.; Shih, C.C.; Dabirian, A. An automatic and intelligent internet of things for future agriculture. IT Prof. 2022, 24, 74–80. [Google Scholar] [CrossRef]
  206. Sitharthan, R.; Rajesh, M.; Vimal, S.; Kumar, E.S.; Yuvaraj, S.; Kumar, A.; Raglend, I.J.; Vengatesan, K. A novel autonomous irrigation system for smart agriculture using AI and 6G enabled IoT network. Microprocess. Microsyst. 2023, 101, 104905. [Google Scholar]
  207. Singh, A.; Nawayseh, N.; Singh, H.; Dhabi, Y.K.; Samuel, S. Internet of agriculture: Analyzing and predicting tractor ride comfort through supervised machine learning. Eng. Appl. Artif. Intell. 2023, 125, 106720. [Google Scholar] [CrossRef]
  208. Jose, T.; Mayan, J.A. LoRaWAN and artificial intelligence integrated smart acoustic sensor network for bird species identification and deterrence system for farm protection. Measurement 2025, 256, 118452. [Google Scholar] [CrossRef]
  209. Li, Z.K.; Mao, H.P.; Li, L.Z.; Wei, Y.Z.; Yu, Y.S.; Zhao, M.X.; Liu, Z. A flexible wearable sensor for in situ non-destructive detection of plant leaf transpiration information. Agriculture 2024, 14, 2174. [Google Scholar] [CrossRef]
  210. Shadrin, D.; Menshchikov, A.; Ermilov, D.; Somov, A. Designing future precision agriculture: Detection of seeds germination using artificial intelligence on a low-power embedded system. IEEE Sens. J. 2019, 19, 11573–11582. [Google Scholar] [CrossRef]
  211. Shadrin, D.; Menshchikov, A.; Ermilov, D.; Somov, A. Enabling precision agriculture through embedded sensing with artificial intelligence. IEEE Trans. Instrum. Meas. 2020, 69, 4103–4113. [Google Scholar] [CrossRef]
  212. Muhammed, D.; Ahvar, E.; Ahvar, S.; Trocan, M.; Montpetit, M.; Ehsani, R. Artificial intelligence of things (AIoT) for smart agriculture: A review of architectures, technologies and solutions. J. Netw. Comput. Appl. 2024, 228, 103905. [Google Scholar] [CrossRef]
  213. Adli, H.K.; Remli, M.A.; Wong, K.N.S.W.S.; Ismail, N.A.; González-Briones, A.; Corchado, J.M.; Mohamad, M.S. Recent advancements and challenges of AIoT application in smart agriculture: A review. Sensors 2023, 23, 3752. [Google Scholar] [CrossRef] [PubMed]
  214. Mowla, M.N.; Mowla, N.; Shah, A.F.M.S.; Rabie, K.M.; Shongwe, T. Internet of things and wireless sensor networks for smart agriculture applications: A Survey. IEEE Access 2023, 11, 145813–145852. [Google Scholar] [CrossRef]
  215. Liu, J.H.; Jiang, W.W.; Han, H.Y.; He, M.; Gu, W.X. Satellite internet of things for smart agriculture applications: A case study of computer vision. In Proceedings of the IEEE International Conference on Sensing Communication and Networking, Madrid, Spain, 11–14 September 2023. [Google Scholar]
  216. Vangala, A.; Das, A.K.; Kumar, N.; Alazab, M. Smart secure sensing for IoT-based agriculture: Blockchain perspective. IEEE Sens. J. 2021, 21, 17591–17607. [Google Scholar] [CrossRef]
  217. Dhanaraju, M.; Chenniappan, P.; Ramalingam, K.; Pazhanivelan, S.; Kaliaperumal, R. Smart farming: Internet of Things (IoT)-based sustainable agriculture. Agriculture 2022, 12, 1745. [Google Scholar] [CrossRef]
  218. AlZubi, A.A.; Galyna, K. Artificial intelligence and internet of things for sustainable farming and smart agriculture. IEEE Access 2023, 11, 78686–78692. [Google Scholar] [CrossRef]
  219. Luo, X.; Xiong, S.M.; Jia, X.C.; Zeng, Y.; Chen, X. AIoT-enabled data management for smart agriculture: A comprehensive review on emerging technologies. IEEE Access 2025, 13, 102964–102993. [Google Scholar] [CrossRef]
  220. Miller, T.; Mikiciuk, G.; Durlik, I.; Mikiciuk, M.; Lobodzinska, A.; Snieg, M. The IoT and AI in agriculture: The time is now-a systematic review of smart sensing technologies. Sensors 2025, 25, 3583. [Google Scholar] [CrossRef]
Figure 1. Classification of ML algorithms.
Figure 2. Workflow diagram of computer vision.
Figure 3. Illustration of AI-crop detection.
Figure 4. Illustration of crop growth and agricultural forecasts.
Figure 5. Illustration of AI-fault diagnosis in agricultural machinery.
Figure 6. AI agricultural robot application scenarios.
Table 1. Summary of AI crop-detection methods.
| Fields | Methods | Application Scenarios | Acc. | Dataset | Refs. |
|---|---|---|---|---|---|
| Optimal Selection of Seeds | SSAE + SVM + CS | Maize seeds | 95.81% | Lab data | [31] |
| | SVM + AFSA | Rice varieties | 99.44% | Lab data | [32] |
| | SVM | Selenium-rich foxtail millet | 99.58% | Lab data | [33] |
| | SVM | Grape varieties | 99.3125% | Lab data | [34] |
| | SVM + ABC | Vitality of watermelon seeds | 100% | Lab data | [35] |
| Health Monitoring | Self-attention mechanism | Crop diseases | 89.95% | FGVC8 | [29] |
| | Transformer + CNN + Center Loss | Crop diseases | 99.62% | Plant Village | [13] |
| | CNN + Attention mechanism + ResNet | Tomato leaf diseases | 96.81% | Plant Village | [11] |
| | CNN + SVM | Grape leaf diseases | 99.77% | Kaggle | [8] |
| | GAN + Attention mechanism | Crop diseases | 78.7% | Lab data | [36] |
| | ELM | Tea disease | 95.77% | Lab data | [37] |
| | CNN | Rice blast disease | 97.18% | Lab data | [38] |
| Risk Screening-Food Safety | CNN | Tea pesticide residues | R2p = 0.995 | Lab data | [39] |
| | Attention + ResNet | Heavy metal residues in edible oils | R2p = 0.9605 | Lab data | [40] |
| | SDAE | Lead content in oilseed rape leaves | R2p = 0.9388 | Lab data | [41] |
| | SCAE | Heavy metal analysis in lettuce | R2p = 0.9418 | Lab data | [42] |
| | SVM | Peanut kernels infected with Aspergillus flavus fungi | 92.4% | Lab data | [43] |
| | KNN & LDA | Maize mold | 100% | Lab data | [44] |
| | CNN | Aflatoxin B1 in maize | R2p = 0.9955 | Lab data | [45] |
| | CNN | Trace-level zearalenone in corn oil | R2p = 0.9872 | Lab data | [46] |
| | CNN | Wheat zearalenone | R2p = 0.91 | Lab data | [47] |
| Value Enhancement | Multi-task CNN | Lipid and protein oxidative damage in frozen-thawed pork | R2p = 0.9724 | Lab data | [48] |
| | Dual-branch CNN | Pork freshness | R2p = 0.9579 | Lab data | [49] |
| | HFA-Net | Pork freshness | R2p = 0.9373 | Lab data | [50] |
| | TL + CNN | Chicken meat adulteration | 98.6% | Lab data | [51] |
| | CNN + SVM | Chicken meat adulteration | 98.1% | Lab data | [12] |
| | 1D-CNN + CAM | Herb traceability | 92.22% | Lab data | [52] |
| | LSSVM | Grape quality | R2p = 0.93 | Lab data | [53] |
| | CNN | Apple quality | 90.50% | Lab data | [30] |
| | YOLOv5s | Apple quality | 94.46% | Lab data | [10] |
| | YOLOv5 | Apple grading and detection | 90.6% | Lab data | [54] |
| | ResNet | Tea grading and detection | 99.13% | Lab data | [55] |
| | YOLOv4-Tiny | Cherry tomato grading and detection | mAP = 94.72% | Lab data | [56] |
Note. SSAE: Stacked Sparse Autoencoder; SVM: Support Vector Machines; CS: Cuckoo Search; AFSA: Artificial Fish Swarm Algorithm; ABC: Artificial Bee Colony; CNN: Convolutional Neural Networks; GAN: Generative Adversarial Networks; ELM: Extreme Learning Machine; SDAE: Stacked Denoising Autoencoder; SCAE: Stacked Convolutional Autoencoder; KNN: K-Nearest Neighbors; LDA: Linear Discriminant Analysis; HFA-Net: Hybrid Fusion Attention Network; TL: Transfer Learning; CAM: Calibrating Attention Mechanism; YOLO: You Only Look Once.
Table 2. Summary of AI methods for crop growth and agricultural condition forecasting.
| Scenarios | Methods | Target | Acc. | Dataset | Refs. |
|---|---|---|---|---|---|
| Soil Analysis | CNN-SVM | Soil moisture | MAE < 3.24; RMSE < 4.12; MAPE < 12.52 | Godavari River Plateau and Krishna River Plateau soil moisture measurements | [66] |
| | ANN & RF & SVR | Soil salinization | R2 > 0.43; RMSE < 0.16 | Sentinel-2 images obtained from the USGS Earth probe data portal | [67] |
| Weather Forecasting | LSTM-ConvLSTM | Sunshine duration, cumulative precipitation, and average temperature | Performed very well | Meteorological data from Dandong Meteorological Station, 1981–2017 | [69] |
| | ARIMA-LSTM | Drought | MAE = 0.05; RMSE = 0.06 | VTCI remote sensing data of the Sichuan Basin downloaded from the Copernicus Data Centre of the European Space Agency | [72] |
| | W-2A | Drought | R2 > 0.836; R > 0.914; RMSE < 0.448 | 30-year rainfall data for the Langa River Basin | [73] |
| | TEX-LSTM | Drought | 97.27%; MAE = 4.1; MSE = 8.89; RMSE = 2.98 | Standard Vegetation Index and Vegetation Health Index datasets from Thanjavur | [74] |
| | ANN & SVM & ANFIS | Drought | R > 0.9237 | Standard Precipitation Index data for the Awash River Basin, Ethiopia | [75] |
| | EO-CatBoost | Drought | Performed very well | Integrated data on remote sensing, weather forecasts, soil moisture, and static environmental descriptors for Mali, Mozambique, and Somalia | [76] |
| Crop Growth | CNN-LSTM | CC | R2 > 0.922; RMSE < 8.260 | Corn image data and microclimate data from Zhengzhou, Henan; Tai'an, Shandong; and Gucheng, Hebei, China | [77] |
| | ConvLSTM | NDVI | RMSE < 0.10264 | Spectral index data for Uruguay | [78] |
| | DT | CC, CV, CH, and EXG | Performed very well | Cotton growth characteristics data from farmland in Driscoll, Texas, USA | [80] |
| Crop Management | ANN | Greenhouse temperature | RMSE < 4.59 | Data from a heated foil tunnel at the University of Agriculture in Krakow | [84] |
| | ConvLSTM-CNN-BPNN | Greenhouse temperature | Performed very well | A greenhouse located at the TARI in Central Taiwan | [85] |
| | CARS-ConvLSTM | Tr | MAE < 0.009; RMSE < 0.012 | National Precision Agriculture Demonstration Base in Changping District, Beijing | [91] |
| Yield Forecast | RF-APSIM | Wheat | Performed very well | Climate data, remote sensing data, experimental variety data, and soil hydraulic properties data for the wheat belt of New South Wales (NSW), Australia | [103] |
| | EO-ERT | Corn | Performed very well | South African corn production data and corn growing season anomaly hotspot data | [104] |
| | WOFOST-GRU | Corn | R2 = 0.98; RMSE = 102.65; MRE = 1.53% | Meteorological and soil data for Shandong Province, China | [111] |
Note. ANN: Artificial Neural Network; RF: Random Forests; SVR: Support Vector Regression; LSTM: Long Short-Term Memory; ARIMA: Autoregressive Integrated Moving Average; W-2A: Wavelet Transform and ARIMA-ANN; TEX-LSTM: Triple Exponential-LSTM; ANFIS: Adaptive Neuro-Fuzzy Inference Systems; EO-CatBoost: Earth Observation-Categorical Boosting; NDVI: Normalized Difference Vegetation Index; DT: Digital-Twin; CC: Canopy Cover; CH: Canopy Height; CV: Canopy Volume; EXG: Excess Greenness; BPNN: Backpropagation Neural Network; CARS: Competitive Adaptive Reweighted Sampling; APSIM: Agricultural Production Systems sIMulator; EO-ERT: Earth Observation and Extremely Randomized Trees; WOFOST-GRU: World Food Studies growth model with Gated Recurrent Unit.
Table 3. Summary of AI-based fault diagnosis methods.
| Scenarios | Method | Type | Applied Signal | Acc. | Dataset | Refs. |
|---|---|---|---|---|---|---|
| Harvester | SVM | Vibrating screen bolt | Vibration | >96.8% | Lab data | [20] |
| | KNN + HS | Rotating components | Vibration | >99% | Lab data | [118] |
| | ANN + GA | Rotating components | Vibration | 92.96% | Lab data | [119] |
| | SVM + SDAE | Threshing drum bearing | Vibration | >99% | Lab data | [19] |
| | CDAN + MSL | Gearbox | Vibration | >99% | Lab data | [120] |
| | SVM + HSSA | Threshing cylinder | Vibration | 100% | Lab data | [121] |
| Sensor | RF, LSTM, ANN, KNN, and SVM | / | Temperature and humidity | >90% | Lab data | [124] |
| | HT-LS-SVM | / | Temperature and humidity | 99% | Real-world environmental data | [125] |
| | SVM + IDBO | / | Temperature and humidity | 94.91% | Sichuan pepper farm in Laiwu District, Jinan City, Shandong Province, China | [126] |
| | SVM | / | Pressure | 90% | CICYTEX—La Orden Research Center in Badajoz (Spain) | [128] |
| | ERSOM | / | Vibration | 92% | Lab data | [129] |
| Tractor | CNN + BILSTM | Transmission system | Vibration | >98% | CWRU and Lab data | [23] |
| | GAN + Transformer | Transmission system | Vibration | >93% | CWRU and Lab data | [131] |
| | SVM | Transmission | Pressure | 95% | Lab and simulation data | [132] |
| | RF + CFS | Auxiliary gearbox | Vibration | 92.5% | Lab data | [21] |
| | PCA + GNBA | Wet-clutch control system | Pressure | 98.2% | Lab data | [133] |
| | CNN | Diesel engine | Infrared image | 97.67% | Lab data | [22] |
| Pump | DNN | Centrifugal pump | Pressure | >82.82% | Lab data | [138] |
| | AlexNet | Monoblock centrifugal pump | Vibration | >99% | Lab data | [139] |
| | SCAM + DEPS | / | Vibration | >72% | Lab data | [140] |
| | MAMLS | / | Vibration | >95% | Lab and simulation data | [141] |
| Others | GCN | Rolling bearing | Vibration | 97% | Xi'an Jiaotong University bearing data | [142] |
| | SVM | Rolling bearing | Vibration | 99% | Jiangnan University bearing data | [143] |
| | DCNN | Agricultural vehicle | Acoustic | >81% | Lab data | [144] |
| | ANN + GA | Agricultural vehicle | Acoustic | 76.56% | Lab data | [145] |
| | MLP | Hay tedder | Vibration | 92.5% | Lab data | [146] |
| | RF | Gripping plier | Vibration | >83% | Lab data | [147] |
Note. HS: Harmonic Search; GA: Genetic Algorithm; CDAN: Conditional Domain Adversarial Networks; MSL: Multi-step Loss Optimization; HSSA: Hybrid Search Sparrow Algorithm; HT-LS-SVM: Hyperparameter-Tuned Least Square SVM; IDBO: Improved Dung Beetle Optimization; ERSOM: Evolutionary Recurrent Self-Organizing Map; BILSTM: Bidirectional LSTM; CFS: Correlation-based Feature Selection; PCA: Principal Component Analysis; GNBA: Gaussian Naive Bayes Algorithm; SCAM: Self-Calibrating Attention Mechanism; DEPS: Distributed Edge Prediction Strategy; MAMLS: Model-Agnostic Meta-Learning Strategy; GCN: Graph Convolutional Networks; DCNN: Deep CNN; MLP: Multilayer Perceptron; CWRU: Case Western Reserve University.
Table 4. Autonomous navigation technologies for agricultural robots.
| Types | Methods | Scenarios | Performance | Refs. |
|---|---|---|---|---|
| Localization and Perception | SLAM/GNSS fusion positioning algorithm | Farmland, orchard | Avg. Pos. Error: 0.12 m | [159] |
| | RTK-GNSS/INS/LiDAR navigation | Orchard | Avg. Lat. Error: 0.1 m | [160] |
| | Gray reconstruction and approximate quadrilateral method | Field ridge | 98.8% | [162] |
| | CNN | Orchard | Recognition success rate: 95% | [163] |
| | SN-CNN | Seedbed | Recognition success rate: 94.6% | [164] |
| | Adaptive LiDAR odometry and mapping framework | Farmland | Centimeter-level positioning accuracy | [165] |
| | Ground segmentation algorithm and LiDAR sensors | Farmland | Recognition success rate: 96.54% | [166] |
| Path Planning | Boundary corner turning methods and coverage path planner | Paddy fields | Coverage rate ≥ 98% | [169] |
| | Ant colony algorithm and shuttle strategy | Farmland | Coverage rate ≥ 90% | [170] |
| | Short-path decomposition and real-time collision detection scheduling | Farmland | Coverage rate ≥ 96% | [171] |
| Path Tracking Control | Optimal goal points | Farmland | High stability and accuracy | [174] |
| | Pre-aiming theory and adaptive PID architecture | Autonomous tractor | RMSE = 0.022 m | [175] |
| | FSTSM control method with AESO | Unmanned agricultural vehicles | Small heading deviation error | [176] |
Note. SLAM: Simultaneous Localization and Mapping; GNSS: Global Navigation Satellite Systems; RTK-GNSS: Real-Time Kinematic GNSS; INS: Inertial Navigation Systems; LiDAR: Light Detection and Ranging; SN-CNN: Seedling Navigation CNN; PID: Proportional Integral Derivative; FSTSM: Fast Super-Torsional Sliding Mode; AESO: Anti-Peak Expansion State Observer.
Table 5. Agricultural robots used for different stages of crop growth.
| Operations | Methods | Scenarios | Performance | Refs. |
|---|---|---|---|---|
| Planting | Transplanting system with photoelectric navigation and pneumatic dual-gripper mechanism | Strawberry transplanting | Overall success rate: 95.3%; Speed: 047.8 plants/h | [180] |
| | APSU for greenhouses | Sowing in pots | Sowing speed: 35 s/pot | [181] |
| | Basketball motion capture technology | Automatic seeding | Sowing positioning accuracy: 95.5% | [182] |
| Growth Management | PSTL_Orient | Pollination of forsythia | Pollination success rate: 86.19% | [185] |
| | Pollination robot integrating vision, air-liquid spraying, and robotic arm | Pollination of kiwifruit | Pollination success rate: 99.3% | [186] |
| | Flexible multi-degree-of-freedom manipulator, a dual-actuator system | Pollination of tomato | Pollination success rate: 92% | [187] |
| | Detection method for drip irrigation pipe navigation, laser weeding | Weeding in strawberry fields | Weeding accuracy: 92.6% | [191] |
| | Electric swing-type intra-row weeding control system | Weeding in cabbage fields | Weeding accuracy: 96% | [192] |
| | Self-developed AI recognition model, robotic technology | Weeding targeting Palmer amaranth | Spraying precision rate: 60.4% | [193] |
| | CBAM, BiFPN, bilinear interpolation algorithms | Weeding in cotton fields | Spraying precision rate: 98.93% | [194] |
| Harvesting | AI, RGB-D camera | Harvesting pumpkins | Picking success rate: 90.2% | [197] |
| | Multi-camera system, AI object detection algorithms | Harvesting grapes | Picking success rate: 93% | [198] |
| | Wire-driven multi-joint robotic arm, deep learning, two-stage fuzzy logic control | Harvesting strawberries | Picking success rate: 82% | [199] |
| | Eye-hand coordination of the robotic arm, 6D fruit posture sensing | Harvesting citrus | Picking success rate: 83.33% | [200] |
Note. APSU: Automatic Precision Seeding Unit; PSTL_Orient: PisTil Orientation; CBAM: Convolutional Block Attention Module; BiFPN: Bi-directional Feature Pyramid Network; AI: Artificial Intelligence; RGB-D: Red Green Blue-Depth camera.
Table 6. Summary of AI-IoT technologies in agriculture.
| Scenarios | Methods | Refs. |
| --- | --- | --- |
| Smart farming optimization | DRL, edge-cloud computing, IoT, and multi-task learning | [201] |
| Real-time bird identification, crop protection | CNN-BiLSTM model, LoRaWAN communication, edge AI processing | [208] |
| Automated rhizosphere cooling | IoT sensor monitoring, Arduino-based fan control, real-time data logging | [202] |
| Aeroponics optimization via plasma-induced nitrogen fixation | Comparative analysis of PAW/PAM efficacy | [203] |
| Flexible wearable sensor for plant transpiration monitoring | GO-based humidity sensing, PDMS substrate, in situ leaf attachment | [209] |
| Wireless aquaculture monitoring (DO, temperature) | WSN with tree topology, sleep mode, data merging | [204] |
| On-device AI for automated seed germination monitoring | Custom CNN, low-power embedded system | [210] |
| Long-term autonomous plant growth prediction | LSTM-RNN on a low-power embedded system | [211] |
| Automated, water-efficient smart farming | LoRa IoT, image analysis, dynamic transmission, machine learning, automated irrigation | [205] |
| Autonomous irrigation for smart agriculture | AI prediction, 6G-IoT, soil moisture sensing | [206] |
| Tractor ride comfort prediction | Supervised ML (ANN, SVR, GPR, DTR, LR) with hyperparameter optimization | [207] |
Note. DRL: Deep Reinforcement Learning; IoT: Internet of Things; LoRaWAN: Long Range Wide Area Network; PAW: Plasma-Activated Water; PAM: Plasma-Activated Mist; GO: Graphene Oxide; PDMS: Polydimethylsiloxane; WSN: Wireless Sensor Network; DO: Dissolved Oxygen; ML: Machine Learning; ANN: Artificial Neural Network; SVR: Support Vector Regression; GPR: Gaussian Process Regression; DTR: Decision Tree Regressor; LR: Logistic Regression.
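The last row of Table 6 describes comparing several supervised regressors with hyperparameter optimization to predict tractor ride comfort [207]. A minimal sketch of that workflow (not the authors' implementation: the features, the ridge-regression model, the penalty grid, and the synthetic data are all illustrative assumptions) is a held-out-split search over a single hyperparameter:

```python
# Illustrative sketch of supervised regression with hyperparameter search,
# in the spirit of the ride-comfort study [207]. Features (e.g. speed, tire
# pressure, payload) and data are synthetic; the model is closed-form ridge
# regression with a grid search over the regularization strength alpha.
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(size=(200, 3))  # hypothetical operating-condition features
y = X @ np.array([2.0, -1.0, 0.5]) + 0.1 * rng.standard_normal(200)

# Simple train/validation split for model selection.
X_tr, y_tr, X_val, y_val = X[:150], y[:150], X[150:], y[150:]

def ridge_fit(X, y, alpha):
    """Closed-form ridge regression: w = (X^T X + alpha*I)^-1 X^T y."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + alpha * np.eye(d), X.T @ y)

def mse(w, X, y):
    """Mean squared error of the linear predictor X @ w."""
    return float(np.mean((X @ w - y) ** 2))

# Grid search over the regularization strength on the held-out split.
best_alpha, best_err = None, np.inf
for alpha in [0.01, 0.1, 1.0, 10.0]:
    err = mse(ridge_fit(X_tr, y_tr, alpha), X_val, y_val)
    if err < best_err:
        best_alpha, best_err = alpha, err

print(f"best alpha = {best_alpha}, validation MSE = {best_err:.4f}")
```

In a study like [207], the search would instead span the listed model families (ANN, SVR, GPR, DTR, LR), each with its own hyperparameter grid, and would typically use cross-validation rather than a single split.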

Ge, C.; Zhang, G.; Wang, Y.; Shao, D.; Song, X.; Wang, Z. Research Status and Development Trends of Artificial Intelligence in Smart Agriculture. Agriculture 2025, 15, 2247. https://doi.org/10.3390/agriculture15212247

