Editorial

Artificial Intelligence and Machine Learning for Smart and Sustainable Agriculture

Department of Electrical Engineering and Computer Science, Florida Atlantic University, Boca Raton, FL 33431, USA
Submission received: 26 December 2025 / Accepted: 30 December 2025 / Published: 6 January 2026
(This article belongs to the Special Issue Artificial Intelligence in Agriculture)

1. Introduction

Agriculture is entering a profound period of transformation, driven by the accelerating integration of artificial intelligence (AI), machine learning, computer vision, autonomous sensing, and data-driven decision support [1]. These technological advances are reshaping the ways in which crops are monitored, soils are assessed, environmental conditions are forecast, disease symptoms are detected, yields are predicted, and even animal welfare is evaluated [2]. What once required extensive manual labor, specialized expertise, and repeated field visits is increasingly supported, or in some cases replaced, by intelligent systems capable of delivering real-time insights and precision-guided interventions.
The contributions assembled in this edited volume capture the current momentum and future promise of AI for agriculture. Collectively, they illustrate how modern learning paradigms are being brought into the demanding context of agricultural environments, where variability in lighting, occlusion, weather, biological diversity, and resource limitations challenge the adaptability and robustness of conventional algorithms. The works included here demonstrate not only scientific innovation but also a strong emphasis on real-world applicability, highlighting robotic systems, mobile applications, edge devices, spectral-analysis pipelines, and federated learning networks [3] engineered for seamless deployment in agricultural settings.

2. Organization

To provide a coherent perspective on this broad and rapidly evolving landscape, the chapters in this edited volume are organized into six thematic clusters, each representing a major axis of contemporary research. These clusters progress naturally from field robotics and perceptual intelligence to soil and environmental modeling, plant stress and disease analytics, yield prediction and spatial modeling, smart hydroponics and edge-enabled computing ecosystems, and finally to livestock welfare assessment through advanced bioacoustic analysis. Figure 1 provides an overview of this organization, illustrating the structure of the volume across its six thematic clusters. Collectively, these contributions offer a comprehensive view of how artificial intelligence is reshaping agriculture from soil to canopy and from crops to livestock.
Cluster 1—Robotic Perception, Object Detection, and Scene Understanding in Agricultural Environments
Field robotics and automated sensing systems must contend with visual environments that are significantly more complex than those represented in standard computer vision benchmarks. Agricultural settings are marked by heterogeneous background textures, low-contrast crop canopies, foliage-induced occlusion, and highly dynamic illumination, all of which complicate robust perception. The works in this cluster advance the state-of-the-art by developing perceptual methods capable of meeting these challenges and maintaining reliable performance under real-world field conditions.
  • Spatial AI for Robotic Wheat Detection and Navigation (Gunturu et al.) presents a stereo-vision robotic platform designed for autonomous navigation in dense wheat fields. By fusing depth perception with efficient object detection architectures, the system performs real-time wheat plant detection and collision avoidance, illustrating how spatial AI can support field scouting and future robotic field operations.
  • Model Size and Image Augmentation for Object Detection in Complex Background Scenes (Sangha and Darr) rigorously analyzes how detector size and photometric image augmentation influence the performance of standard one-stage and two-stage detectors in the visually demanding context of agriculture. The work provides valuable guidance for designing models intended for deployment in the field rather than the laboratory.
  • Detection and Counting of Olive Tree Fruits in Olive Groves (Osco-Mamani et al.) compares several deep learning models for olive fruit detection using imagery collected directly from olive orchards. Results indicate that YOLOv8m achieves highly accurate detection and strong correlation with manual counts, demonstrating that deep detection models are now sufficiently mature for practical yield-monitoring applications.
Cluster 2—Soil, Water, and Environmental Intelligence for Sustainable Crop Production
Sustainable crop production depends on a precise understanding of soil condition, water availability, and the complex interactions between plant physiology and the surrounding environment. As climate variability intensifies drought risk and amplifies nutrient–water coupling effects, growers and researchers increasingly rely on data-driven methods to monitor resource dynamics and anticipate stress before yield losses occur. The chapters in this cluster advance this goal by integrating proximal and remote sensing with meteorological observations and predictive machine learning. Together, they demonstrate how spectral analytics and forecast-informed models can support scalable estimation of soil attributes, improve soil moisture prediction for irrigation planning, and strengthen crop stress characterization under interacting environmental and nutrient constraints.
  • Dimensionality Reduction Models for Soil Attribute Prediction (Wei et al.) demonstrates that dimensionality reduction statistical models such as principal components regression (PCR) and least absolute shrinkage and selection operator (LASSO) can derive meaningful predictions directly from raw spectral data without complex preprocessing. This study provides a promising direction for scalable, low-cost soil sensing in diverse agricultural contexts.
  • Machine Learning for Topsoil Moisture Forecasting (Martínez et al.) demonstrates highly accurate soil moisture predictions by integrating soil probe measurements with meteorological and forecast data. Such models are essential for proactive irrigation management, especially in drought-prone regions where efficient water resource management is crucial for ensuring sustainable crop production.
  • Comparison of Nitrogen and Water Dual-Stress Effects Through Hyperspectral Imaging (Maki et al.) examines how nitrogen availability and water stress jointly affect maize physiology. By leveraging hyperspectral imaging, the authors demonstrate that dual-stress modeling improves the robustness of relative water content prediction, while maintaining reliable nitrogen content estimation, which is an essential advancement for climate-adaptive crop breeding and precision management.
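To make the dimensionality-reduction idea from Wei et al. concrete, the following sketch fits a LASSO model to raw spectra. All data here are synthetic, and the band count, noise level, and informative-band positions are illustrative assumptions, not the authors' setup; the point is only that sparse regression can pick out a handful of predictive bands from hundreds without preprocessing.

```python
# Sketch: predicting a soil attribute from raw reflectance spectra with
# LASSO, in the spirit of Wei et al. Synthetic data; hypothetical setup.
import numpy as np
from sklearn.linear_model import LassoCV
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_samples, n_bands = 200, 300                 # hypothetical spectral resolution
X = rng.normal(size=(n_samples, n_bands))     # stand-in for raw spectra
true_coef = np.zeros(n_bands)
true_coef[[40, 120, 250]] = [1.5, -2.0, 0.8]  # a few informative bands
y = X @ true_coef + 0.1 * rng.normal(size=n_samples)  # e.g., organic matter

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = LassoCV(cv=5).fit(X_tr, y_tr)   # LASSO shrinks most bands to zero
print(f"R^2 on held-out spectra: {model.score(X_te, y_te):.2f}")
print(f"bands retained: {np.sum(model.coef_ != 0)}")
```

A principal components regression (PCR) variant would instead pipeline `PCA` into `LinearRegression`; both avoid hand-crafted spectral preprocessing, which is the study's central appeal for low-cost sensing.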
Cluster 3—Vision-Based Crop Stress, Disease Detection, and Agricultural Product Health Assessment
Vision-based analytics play a central role in enabling timely, scalable, and non-invasive assessment of crop condition across the agricultural lifecycle. From early detection of nutrient stress and pest infestation in the field to post-harvest evaluation of produce quality, computer vision and deep learning provide powerful tools for translating visual cues into actionable agronomic insights. The contributions in this cluster illustrate how modern image-based learning frameworks, ranging from convolutional neural networks and vision transformers to ensemble and neural architecture search-driven models, can reliably characterize plant health, diagnose disease, and assess product quality under both pre-harvest and post-harvest conditions. Together, these works demonstrate the maturation of vision-driven agricultural intelligence, emphasizing robustness, generalization, and deployability across diverse crops, environments, and stages of production.
  • Deep Learning for Nitrogen Stress Detection in Maize Using RGB Imaging (Ghazal et al.) evaluates deep learning models for predicting nitrogen deficiency in maize crops using a field-acquired RGB imaging dataset. The authors benchmark several convolutional neural networks (CNNs) alongside a custom-built Vision Transformer. Among the tested models, EfficientNetB0 achieves the best accuracy. This work demonstrates how cost-effective imaging solutions can support nutrient-stress monitoring at scale.
  • Enhancing Pest Detection on Tomato Plants Using Ensemble Techniques (Giakoumoglou et al.) shows that ensemble techniques, such as Non-Maximum Suppression (NMS), Soft Non-Maximum Suppression (Soft NMS), Non-Maximum Weighted (NMW), and Weighted Boxes Fusion (WBF), significantly improve the detection of the pest Tuta absoluta. This work contributes toward the development of more effective pest management strategies in tomato production.
  • Neural Architecture Search for Soybean Disease Recognition (Omole et al.) proposes AgriNAS, a framework that employs neural architecture search to automatically discover optimal architectures for soybean disease classification. By incorporating adaptive convolution and spatial–temporal augmentation, the approach outperforms conventional CNNs, demonstrating the promise of automated architecture design for pest and disease detection in different agricultural settings.
  • Detection of Leaf Diseases in Banana Crops Using Deep Learning Techniques (Jiménez et al.) explores deep learning techniques for the detection of leaf diseases, such as Black Sigatoka and Cordana, in banana crops. The authors compare and evaluate the performance of various deep learning models (e.g., ResNet50, EfficientNetB0, VGG19) for disease detection in banana crops. Further, this work describes the implementation of deep learning models in a mobile application, which is an important step toward practical, in-field disease diagnostics for growers.
  • Machine Learning Model for Assessing Fruit Quality (Apostolopoulos et al.) extends vision-based agricultural health assessment beyond the field and into the post-harvest stage by introducing a generalizable machine learning framework for fruit quality evaluation. Using deep visual representations learned through vision transformers, the authors demonstrate that a single, unified model can accurately distinguish between high-quality and defective produce across a wide range of fruit types, without reliance on fruit-specific training or handcrafted features. Extensive experiments across sixteen fruit categories show that the proposed approach achieves accuracy comparable to, and in many cases exceeding, that of dedicated per-fruit models. By framing fruit quality inspection as a transferable visual recognition problem, this work advances scalable and deployable solutions for automated grading, sorting, and quality control in agricultural supply chains, complementing earlier in-field vision-based health monitoring approaches within this cluster.
Cluster 4—Yield Prediction, Synthetic Data Generation, and Spatial Analytics
Accurate yield prediction is fundamental to agricultural planning, market forecasting, and risk management, yet it is often hindered by sparse, imbalanced, or spatially heterogeneous data. These limitations are especially pronounced in field-scale and smallholder contexts, where comprehensive historical records are rarely available. The contributions in this cluster address these challenges by introducing data-centric innovations, including synthetic data generation and spatially aware learning frameworks. Together, they demonstrate how generative modeling and neighborhood-informed analytics can improve predictive robustness and accuracy, even under constrained data conditions.
  • Data Synthesis for Alfalfa Biomass Yield Estimation (Vance et al.) explores the use of generative models to synthesize datasets for accurately predicting alfalfa biomass. By using a Conditional Tabular Generative Adversarial Network (CTGAN) and a Tabular Variational Autoencoder (TVAE) to augment limited yield datasets, the authors significantly improve prediction accuracy and introduce a practical tool, Predict Your CropS (PYCS, pronounced like "Pisces"), accessible to end users. Their work illustrates how generative models can compensate for the scarcity of labeled agricultural data.
  • Spatial-Lagged Machine Learning for Corn Yield Prediction (Noa-Yarasca et al.) proposes an approach to crop yield prediction that incorporates spatially lagged spectral data (SLSD) from neighboring pixels through a spatial-lagged machine learning (SLML) model. The results indicate that incorporating SLSD through SLML outperforms traditional vegetation-index-based methods in predicting crop yield. This research highlights the significance of spatial context and neighborhood information for corn yield prediction and emphasizes the need to optimize spatial parameters, feature selection, and neighborhood size to enhance model accuracy.
Cluster 5—Controlled-Environment Agriculture, IoT-Enabled Monitoring, and Edge/Federated Intelligence
Controlled and semi-controlled agricultural environments, such as hydroponic farms, grain storage facilities, and distributed field-deployed sensing networks, offer powerful opportunities for automation, continuous monitoring, and closed-loop decision support. Yet these settings also impose practical constraints: systems must operate reliably under variable illumination and environmental conditions, run on resource-limited devices, and scale across distributed deployments without excessive energy or connectivity demands. The contributions in this cluster address these challenges by coupling AI with Internet of Things (IoT) and edge computing, and by introducing federated learning strategies that reduce energy consumption and communication overhead while preserving performance. Together, these works illustrate how intelligent monitoring and optimization pipelines can be translated into deployable systems for controlled-environment agriculture, post-harvest protection, and large-area sensing networks.
  • Parameters Optimization for Smart Hydroponic Systems (Shareef et al.) conducts a systematic literature review to determine the optimal growth parameters for leafy green vegetables, herbs, and spices cultivated in hydroponic systems. The work analyzes numerous research papers to identify the optimal ranges for pH, electrical conductivity (EC), temperature (nutrient solution and ambient), aeration/dissolved oxygen (DO), growing media/substrate, lighting/artificial illumination, relative humidity, and CO2 dosing, as well as plant-specific parameters. The work further highlights the increasing role of AI and automation in controlled-environment agriculture.
  • Machine Learning for Insect Monitoring in Grain Facilities (Mendoza et al.) addresses a critical but often underrepresented part of the agricultural pipeline, that is, post-harvest protection and stored-product security, by developing an affordable, camera-based insect monitoring system for grain and food facilities. The proposed setup couples a manual-focus camera with an NVIDIA Jetson Nano to run a trained deep learning detector on a live video stream, enabling real-time identification of key stored-product pests. The system is evaluated under multiple illumination conditions (white LED, yellow LED, and no dedicated lighting), demonstrating both the feasibility of low-cost automation and the practical sensitivity of machine-vision monitoring to lighting variability. The study further details an end-to-end deployment pipeline, from data collection and annotation to model training (SSD-MobileNet), model conversion (ONNX), and accelerated inference (TensorRT), highlighting the engineering considerations required for operational edge deployment. By translating deep learning-based detection into a deployable facility-scale monitoring workflow, this work contributes an important step toward scalable, continuous pest surveillance that can support integrated pest management decisions and reduce post-harvest losses.
  • FedBirdAg: A Low-Energy Federated Learning Platform for Bird Detection (Benhoussa et al.) proposes an energy-efficient federated learning framework for bird detection in crop fields using a compact smart camera network. The results reveal that the proposed federated learning approach reduces training energy consumption by a factor of eight compared with a centrally trained model. Their approach demonstrates how decentralized intelligence can support wildlife monitoring and crop protection at scale.
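The aggregation principle behind platforms like FedBirdAg is federated averaging (FedAvg [3]): each client trains locally and only model weights travel to the server, never raw images. The sketch below uses plain least-squares weight vectors as stand-ins for illustration; real deployments average neural network parameters, and the client sizes and learning rate here are arbitrary assumptions.

```python
# Sketch of federated averaging (FedAvg): clients train locally, the
# server averages their models weighted by local dataset size.
import numpy as np

def local_update(weights, data, labels, lr=0.1, epochs=5):
    """A few steps of local least-squares gradient descent on one client."""
    w = weights.copy()
    for _ in range(epochs):
        grad = data.T @ (data @ w - labels) / len(labels)
        w -= lr * grad
    return w

def fed_avg(client_weights, client_sizes):
    """Server step: average client models weighted by local dataset size."""
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

rng = np.random.default_rng(1)
true_w = np.array([2.0, -1.0])
clients = []
for n in (30, 50):                      # two cameras with different data volumes
    X = rng.normal(size=(n, 2))
    clients.append((X, X @ true_w))     # each client's local data and labels

global_w = np.zeros(2)
for round_ in range(30):                # communication rounds
    updates = [local_update(global_w, X, y) for X, y in clients]
    global_w = fed_avg(updates, [len(y) for _, y in clients])
print(global_w)                         # approaches [2.0, -1.0]
```

Only `global_w` and the client updates cross the network each round, which is what makes such schemes attractive for bandwidth- and energy-constrained camera deployments.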
Cluster 6—Livestock Bioacoustics and Welfare Assessment Through AI
As agricultural intelligence expands beyond crop-focused applications, AI is increasingly being applied to the monitoring and management of livestock systems. Among emerging modalities, bioacoustic sensing offers a non-invasive and continuous means of assessing animal condition and behavior. The contribution in this cluster explores the use of transformer-based acoustic modeling to extract meaningful representations from animal vocalizations, highlighting a novel pathway for advancing livestock welfare assessment within precision livestock farming.
  • Adapting a Large-Scale Transformer Model to Decode Chicken Vocalizations (Neethirajan) explores the feasibility of repurposing a large-scale Transformer-based model, OpenAI’s Whisper, originally developed for human speech recognition, to decode chicken vocalizations. The work records chicken vocal data under diverse experimental conditions, including healthy versus unhealthy birds, pre-stress versus post-stress scenarios, and quiet versus noisy environments. Processing these recordings through Whisper produces text-like outputs, which are then passed to natural language processing (NLP) sentiment analysis tools to obtain “negative” and “positive” scores that correspond closely to documented changes in vocal intensity associated with stress events and altered physiological states. The work reveals that repurposed transformer-based architectures can learn meaningful representations of non-human acoustic signals, opening the door to non-invasive, real-time monitoring of poultry welfare.

3. Conclusions

The research presented in this volume highlights both the scientific sophistication and practical relevance of AI innovations across the agricultural domain. Several themes resonate throughout the contributions: the push toward field-ready robustness; the emphasis on model efficiency and energy-aware computation; the growing use of multi-modal sensing; and the extension of AI from crop-centered applications to soil, environment, and livestock systems.
Together, these works illustrate a field that is advancing rapidly toward intelligent, resilient, and sustainable agricultural ecosystems. As Guest Editor, I extend my deepest appreciation to the authors, reviewers, and editorial team whose efforts made this collection possible. It is my hope that this volume serves not only as a snapshot of current progress but also as a catalyst for future innovation at the nexus of agriculture and artificial intelligence.

Conflicts of Interest

The author declares no conflicts of interest.

List of Contributions

  • Gunturu, S.; Munir, A.; Ullah, H.; Welch, S.; Flippo, D. A Spatial AI-Based Agricultural Robotic Platform for Wheat Detection and Collision Avoidance. AI 2022, 3, 719–738. https://doi.org/10.3390/ai3030042.
  • Sangha, H.S.; Darr, M.J. Influence of Model Size and Image Augmentations on Object Detection in Low-Contrast Complex Background Scenes. AI 2025, 6, 52. https://doi.org/10.3390/ai6030052.
  • Osco-Mamani, E.; Santana-Carbajal, O.; Chaparro-Cruz, I.; Ochoa-Donoso, D.; Alcazar-Alay, S. The Detection and Counting of Olive Tree Fruits Using Deep Learning Models in Tacna, Perú. AI 2025, 6, 25. https://doi.org/10.3390/ai6020025.
  • Wei, M.C.F.; Canal Filho, R.; Tavares, T.R.; Molin, J.P.; Vieira, A.M.C. Dimensionality Reduction Statistical Models for Soil Attribute Prediction Based on Raw Spectral Data. AI 2022, 3, 809–819. https://doi.org/10.3390/ai3040049.
  • Zambudio Martínez, M.; Silveira, L.H.M.d.; Marin-Perez, R.; Gomez, A.F.S. Development and Comparison of Artificial Neural Networks and Gradient Boosting Regressors for Predicting Topsoil Moisture Using Forecast Data. AI 2025, 6, 41. https://doi.org/10.3390/ai6020041.
  • Maki, H.; Lynch, V.; Ma, D.; Tuinstra, M.R.; Yamasaki, M.; Jin, J. Comparison of Various Nitrogen and Water Dual Stress Effects for Predicting Relative Water Content and Nitrogen Content in Maize Plants through Hyperspectral Imaging. AI 2023, 4, 692–705. https://doi.org/10.3390/ai4030036.
  • Ghazal, S.; Kommineni, N.; Munir, A. Comparative Analysis of Machine Learning Techniques Using RGB Imaging for Nitrogen Stress Detection in Maize. AI 2024, 5, 1286–1300. https://doi.org/10.3390/ai5030062.
  • Giakoumoglou, N.; Pechlivani, E.-M.; Frangakis, N.; Tzovaras, D. Enhancing Tuta absoluta Detection on Tomato Plants: Ensemble Techniques and Deep Learning. AI 2023, 4, 996–1009. https://doi.org/10.3390/ai4040050.
  • Omole, O.J.; Rosa, R.L.; Saadi, M.; Rodriguez, D.Z. AgriNAS: Neural Architecture Search with Adaptive Convolution and Spatial–Time Augmentation Method for Soybean Diseases. AI 2024, 5, 2945–2966. https://doi.org/10.3390/ai5040142.
  • Jiménez, N.; Orellana, S.; Mazon-Olivo, B.; Rivas-Asanza, W.; Ramírez-Morales, I. Detection of Leaf Diseases in Banana Crops Using Deep Learning Techniques. AI 2025, 6, 61. https://doi.org/10.3390/ai6030061.
  • Apostolopoulos, I.D.; Tzani, M.; Aznaouridis, S.I. A General Machine Learning Model for Assessing Fruit Quality Using Deep Image Features. AI 2023, 4, 812–830. https://doi.org/10.3390/ai4040041.
  • Vance, J.; Rasheed, K.; Missaoui, A.; Maier, F.W. Data Synthesis for Alfalfa Biomass Yield Estimation. AI 2023, 4, 1–15. https://doi.org/10.3390/ai4010001.
  • Noa-Yarasca, E.; Osorio Leyton, J.M.; Hajda, C.B.; Adhikari, K.; Smith, D.R. Leveraging Spectral Neighborhood Information for Corn Yield Prediction with Spatial-Lagged Machine Learning Modeling: Can Neighborhood Information Outperform Vegetation Indices? AI 2025, 6, 58. https://doi.org/10.3390/ai6030058.
  • Shareef, U.; Rehman, A.U.; Ahmad, R. A Systematic Literature Review on Parameters Optimization for Smart Hydroponic Systems. AI 2024, 5, 1517–1533. https://doi.org/10.3390/ai5030073.
  • Mendoza, Q.A.; Pordesimo, L.; Neilsen, M.; Armstrong, P.; Campbell, J.; Mendoza, P.T. Application of Machine Learning for Insect Monitoring in Grain Facilities. AI 2023, 4, 348–360. https://doi.org/10.3390/ai4010017.
  • Benhoussa, S.; De Sousa, G.; Chanet, J.-P. FedBirdAg: A Low-Energy Federated Learning Platform for Bird Detection with Wireless Smart Cameras in Agriculture 4.0. AI 2025, 6, 63. https://doi.org/10.3390/ai6040063.
  • Neethirajan, S. Adapting a Large-Scale Transformer Model to Decode Chicken Vocalizations: A Non-Invasive AI Approach to Poultry Welfare. AI 2025, 6, 65. https://doi.org/10.3390/ai6040065.

References

  1. Ghazal, S.; Munir, A.; Qureshi, W.S. Computer vision in smart agriculture and precision farming: Techniques and applications. Artif. Intell. Agric. 2024, 13, 64–83. [Google Scholar] [CrossRef]
  2. Muqaddas, S.; Qureshi, W.S.; Jabbar, H.; Munir, A.; Haider, A. A Comprehensive Deep Learning Approach for Harvest Ready Sugarcane Pixel Classification in Punjab, Pakistan Using Sentinel-2 Multispectral Imagery. Remote Sens. Appl. Soc. Environ. 2024, 35, 101225. [Google Scholar] [CrossRef]
  3. McMahan, B.; Moore, E.; Ramage, D.; Hampson, S.; Arcas, B.A. Communication-Efficient Learning of Deep Networks from Decentralized Data. In Proceedings of the 20th International Conference on Artificial Intelligence and Statistics (AISTATS) 2017, Fort Lauderdale, FL, USA, 20–22 April 2017; Proceedings of Machine Learning Research (PMLR): Cambridge, MA, USA, 2017; pp. 1273–1282. [Google Scholar]
Figure 1. Thematic map of clusters in AI and ML for smart and sustainable agriculture.

Share and Cite

Munir, A. Artificial Intelligence and Machine Learning for Smart and Sustainable Agriculture. AI 2026, 7, 12. https://doi.org/10.3390/ai7010012