Artificial Intelligence in Agriculture

A special issue of AI (ISSN 2673-2688).

Deadline for manuscript submissions: 30 April 2025 | Viewed by 53356

Special Issue Information

Dear Colleagues,

The agriculture industry has long used technology to improve farming practices and yields. However, traditional methods alone cannot feed a growing world population: food production is estimated to need to increase by approximately 60% to feed an additional two billion people by 2050. This pressure is driving farmers and the agriculture industry to devise new ways of increasing production, improving crop quality, and reducing waste and the nitrogen footprint. Artificial intelligence (AI) is emerging as a promising means for the agriculture industry to meet these food challenges in the coming years. AI can be applied at every stage of farming and food production, from seed sowing to monitoring and harvesting, and is a key enabling technology for precision agriculture. In crop monitoring, AI can detect diseases, nutrient deficiencies, and pest infestations; in soil monitoring, it can identify nutrient deficiencies and soil defects. AI can also detect weeds and then guide intelligent, precise spraying so that herbicides are applied only where needed. AI-enabled autonomous agricultural robots outfitted with sensors and actuators can assist not only in crop harvesting and fruit picking, but also in crop and soil monitoring. Finally, AI can provide predictive insights for maximizing crop productivity, such as predicting the impact of weather conditions on crops, the best time to sow seeds, expected crop yields, and crop prices in the coming weeks.

This Special Issue on “Artificial Intelligence in Agriculture” focuses on fundamental and applied research targeting AI in all stages of agriculture, from soil preparation to the sowing of seeds, addition of fertilizers, irrigation, weed protection, harvesting, storage, packing, and transportation. Topics of interest include but are not limited to the following:

  • Smart farming and agriculture;
  • AI-assisted precision agriculture;
  • AI-based soil and plant nutrient analysis;
  • AI-assisted sowing;
  • Computer vision in agriculture;
  • Spatial AI-based agricultural robotics;
  • AI-based crop monitoring;
  • AI-based disease detection in crops;
  • AI-based pest infestation detection and management;
  • Intelligent irrigation for agriculture;
  • Intelligent spraying of crops;
  • AI-assisted phenotyping and genotyping;
  • Predictive analytics for agriculture;
  • Computational intelligence in agriculture;
  • Livestock health monitoring;
  • Smart Internet of Things (IoT) in agriculture;
  • Edge AI in agriculture;
  • AI in food supply chain.

Dr. Arslan Munir
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. AI is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1600 CHF (Swiss Francs). Submitted papers should be well formatted and written in good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • artificial intelligence
  • agriculture
  • smart farming
  • precision agriculture
  • agricultural robotics
  • intelligent irrigation
  • crop monitoring
  • predictive analytics
  • computational intelligence

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (17 papers)


Research


18 pages, 2018 KiB  
Article
Adapting a Large-Scale Transformer Model to Decode Chicken Vocalizations: A Non-Invasive AI Approach to Poultry Welfare
by Suresh Neethirajan
AI 2025, 6(4), 65; https://doi.org/10.3390/ai6040065 - 25 Mar 2025
Viewed by 202
Abstract
Natural Language Processing (NLP) and advanced acoustic analysis have opened new avenues in animal welfare research by decoding the vocal signals of farm animals. This study explored the feasibility of adapting a large-scale Transformer-based model, OpenAI’s Whisper, originally developed for human speech recognition, to decode chicken vocalizations. Our primary objective was to determine whether Whisper could effectively identify acoustic patterns associated with emotional and physiological states in poultry, thereby enabling real-time, non-invasive welfare assessments. To achieve this, chicken vocal data were recorded under diverse experimental conditions, including healthy versus unhealthy birds, pre-stress versus post-stress scenarios, and quiet versus noisy environments. The audio recordings were processed through Whisper, producing text-like outputs. Although these outputs did not represent literal translations of chicken vocalizations into human language, they exhibited consistent patterns in token sequences and sentiment indicators strongly correlated with recognized poultry stressors and welfare conditions. Sentiment analysis using standard NLP tools (e.g., polarity scoring) identified notable shifts in “negative” and “positive” scores that corresponded closely with documented changes in vocal intensity associated with stress events and altered physiological states. Despite the inherent domain mismatch—given Whisper’s original training on human speech—the findings clearly demonstrate the model’s capability to reliably capture acoustic features significant to poultry welfare. Recognizing the limitations associated with applying English-oriented sentiment tools, this study proposes future multimodal validation frameworks incorporating physiological sensors and behavioral observations to further strengthen biological interpretability. 
To our knowledge, this work provides the first demonstration that Transformer-based architectures, even without species-specific fine-tuning, can effectively encode meaningful acoustic patterns from animal vocalizations, highlighting their transformative potential for advancing productivity, sustainability, and welfare practices in precision poultry farming. Full article
(This article belongs to the Special Issue Artificial Intelligence in Agriculture)
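
The polarity scoring mentioned above can be illustrated with a tiny lexicon-based scorer; the word list and weights below are invented for illustration (the study itself used standard NLP sentiment tools):

```python
# Minimal lexicon-based polarity scorer: average the per-token sentiment
# weights, roughly how off-the-shelf polarity tools score a text.

LEXICON = {"calm": 0.75, "quiet": 0.5, "distress": -0.75, "alarm": -0.25}

def polarity(text):
    """Mean sentiment weight over tokens; unknown tokens score 0."""
    tokens = text.lower().split()
    if not tokens:
        return 0.0
    return sum(LEXICON.get(t, 0.0) for t in tokens) / len(tokens)

print(polarity("calm quiet"))      # 0.625
print(polarity("distress alarm"))  # -0.5
```

A shift of such scores toward the negative end, aggregated over the token sequences Whisper emitted, is the kind of signal the authors correlated with stress events.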

34 pages, 14344 KiB  
Article
FedBirdAg: A Low-Energy Federated Learning Platform for Bird Detection with Wireless Smart Cameras in Agriculture 4.0
by Samy Benhoussa, Gil De Sousa and Jean-Pierre Chanet
AI 2025, 6(4), 63; https://doi.org/10.3390/ai6040063 - 21 Mar 2025
Viewed by 299
Abstract
Birds can cause substantial damage to crops, directly affecting farmers’ productivity and profitability. As a result, detecting bird presence in crop fields is crucial for effective crop management. Traditional agricultural practices have used various tools and techniques to deter pest birds, while digital agriculture has advanced these efforts through Internet of Things (IoT) and artificial intelligence (AI) technologies. With recent advancements in hardware and processing chips, connected devices can now utilize deep convolutional neural networks (CNNs) for on-field image classification. However, training these models can be energy-intensive, especially when large amounts of data, such as images, need to be transmitted for centralized model training. Federated learning (FL) offers a solution by enabling local training on edge devices, reducing data transmission costs and energy demands while preserving data privacy and sharing model knowledge across connected devices. This paper proposes a low-energy federated learning framework for a compact smart camera network designed to perform simple image classification for bird detection in crop fields. The results demonstrate that this decentralized approach achieves performance comparable to a centrally trained model while consuming at least 8 times less energy. Further efficiency improvements, with only a minimal reduction in performance, are explored through early stopping. Full article
(This article belongs to the Special Issue Artificial Intelligence in Agriculture)
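
The decentralized training described above rests on federated averaging (FedAvg); a minimal sketch, assuming each camera ships back its locally trained weights as a flat list (the function names and data layout are illustrative, not from the paper):

```python
# Minimal federated averaging (FedAvg) sketch: a server combines locally
# trained model weights from several edge cameras without collecting images.

def fedavg(client_weights, client_sizes):
    """Average per-parameter weights, weighted by each client's sample count."""
    total = sum(client_sizes)
    n_params = len(client_weights[0])
    return [
        sum(w[i] * n for w, n in zip(client_weights, client_sizes)) / total
        for i in range(n_params)
    ]

# Three cameras with different amounts of local bird-image data.
weights = [[0.25, -1.0], [0.5, -0.5], [0.75, -0.25]]
sizes = [100, 100, 200]

global_weights = fedavg(weights, sizes)
print(global_weights)  # [0.5625, -0.5]
```

Because only weights travel over the radio link, the images stay on the cameras, which is where the energy savings and privacy benefits come from.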

27 pages, 7182 KiB  
Article
Detection of Leaf Diseases in Banana Crops Using Deep Learning Techniques
by Nixon Jiménez, Stefany Orellana, Bertha Mazon-Olivo, Wilmer Rivas-Asanza and Iván Ramírez-Morales
AI 2025, 6(3), 61; https://doi.org/10.3390/ai6030061 - 17 Mar 2025
Viewed by 439
Abstract
Leaf diseases, such as Black Sigatoka and Cordana, represent a growing threat to banana crops in Ecuador. These diseases spread rapidly, impacting both leaf and fruit quality. Early detection is crucial for effective control measures. Recently, deep learning has proven to be a powerful tool in agriculture, enabling more accurate analysis and identification of crop diseases. This study applied the CRISP-DM methodology, consisting of six phases: business understanding, data understanding, data preparation, modeling, evaluation, and deployment. A dataset of 900 banana leaf images was collected: 300 of Black Sigatoka, 300 of Cordana, and 300 of healthy leaves. Three pre-trained models (EfficientNetB0, ResNet50, and VGG19) were trained on this dataset, identifying leaf diseases with accuracies of 88.33%, 88.90%, and 87.22%, respectively. To improve performance, data augmentation was applied using TensorFlow Keras’s ImageDataGenerator class, expanding the dataset to 9000 images; owing to the high computational demands of ResNet50 and VGG19, only EfficientNetB0 was retrained on the augmented data. It reached an accuracy of 87.83%, which was not a significant improvement over its baseline. These findings highlight the value of deep learning techniques for early disease detection in banana crops, enhancing diagnostic accuracy and efficiency. Full article
(This article belongs to the Special Issue Artificial Intelligence in Agriculture)
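
Augmentation of the kind applied above expands a dataset with label-preserving transforms; a toy sketch using flips on nested lists (Keras's ImageDataGenerator generates such variants on the fly, with rotations, shifts, and zooms as well):

```python
# Toy data augmentation: expand a small labeled image set with horizontal
# and vertical flips, the kind of transforms ImageDataGenerator applies.

def hflip(img):
    return [row[::-1] for row in img]

def vflip(img):
    return img[::-1]

def augment(dataset):
    """Return each original image plus two flipped variants, same label."""
    out = []
    for img, label in dataset:
        out.append((img, label))
        out.append((hflip(img), label))
        out.append((vflip(img), label))
    return out

leaf = [[0, 1],
        [2, 3]]
augmented = augment([(leaf, "sigatoka")])
print(len(augmented))   # 3 samples from 1
print(augmented[1][0])  # [[1, 0], [3, 2]]
```

The same 10x expansion the paper reports (900 to 9000 images) simply uses a richer set of such transforms.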

25 pages, 4169 KiB  
Article
Leveraging Spectral Neighborhood Information for Corn Yield Prediction with Spatial-Lagged Machine Learning Modeling: Can Neighborhood Information Outperform Vegetation Indices?
by Efrain Noa-Yarasca, Javier M. Osorio Leyton, Chad B. Hajda, Kabindra Adhikari and Douglas R. Smith
AI 2025, 6(3), 58; https://doi.org/10.3390/ai6030058 - 13 Mar 2025
Viewed by 240
Abstract
Accurate and reliable crop yield prediction is essential for optimizing agricultural management, resource allocation, and decision-making, while also supporting farmers and stakeholders in adapting to climate change and increasing global demand. This study introduces an innovative approach to crop yield prediction by incorporating spatially lagged spectral data (SLSD) through the spatial-lagged machine learning (SLML) model, an enhanced version of the spatial lag X (SLX) model. The research aims to show that SLSD improves prediction compared to traditional vegetation index (VI)-based methods. Conducted on a 19-hectare cornfield at the ARS Grassland, Soil, and Water Research Laboratory during the 2023 growing season, this study used five-band multispectral image data and 8581 yield measurements ranging from 1.69 to 15.86 Mg/Ha. Four predictor sets were evaluated: Set 1 (spectral bands), Set 2 (spectral bands + neighborhood data), Set 3 (spectral bands + VIs), and Set 4 (spectral bands + top VIs + neighborhood data). These were evaluated using the SLX model and four decision-tree-based SLML models (RF, XGB, ET, GBR), with performance assessed using R2 and RMSE. Results showed that incorporating spatial neighborhood data (Set 2) outperformed VI-based approaches (Set 3), emphasizing the importance of spatial context. SLML models, particularly XGB, RF, and ET, performed best with 4–8 neighbors, while excessive neighbors slightly reduced accuracy. In Set 3, VIs improved predictions, but a smaller subset (10–15 indices) was sufficient for optimal yield prediction. Set 4 showed slight gains over Sets 2 and 3, with XGB and RF achieving the highest R2 values. Key predictors included spatially lagged spectral bands (e.g., Green_lag, NIR_lag, RedEdge_lag) and VIs (e.g., CREI, GCI, NCPI, ARI, CCCI), highlighting the value of integrating neighborhood data for improved corn yield prediction. 
This study underscores the importance of spatial context in corn yield prediction and lays the foundation for future research across diverse agricultural settings, focusing on optimizing neighborhood size, integrating spatial and spectral data, and refining spatial dependencies through localized search algorithms. Full article
(This article belongs to the Special Issue Artificial Intelligence in Agriculture)
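
A minimal sketch of what a spatially lagged predictor looks like: for each plot, append the mean value of a spectral band over its k nearest neighbors. The coordinates and band values below are made up; the study used 5-band imagery and varied the neighborhood size:

```python
import math

# Spatially lagged feature sketch: for each sample point, compute the mean
# spectral value of its k nearest neighbors as an extra predictor column.

def lagged_feature(points, values, k=2):
    """points: (x, y) coords; values: one spectral band; returns per-point
    means of the band over the k nearest other points."""
    lags = []
    for i, (xi, yi) in enumerate(points):
        dists = sorted(
            (math.hypot(xi - xj, yi - yj), j)
            for j, (xj, yj) in enumerate(points) if j != i
        )
        nearest = [j for _, j in dists[:k]]
        lags.append(sum(values[j] for j in nearest) / k)
    return lags

points = [(0, 0), (0, 1), (1, 0), (5, 5)]
nir = [0.5, 0.75, 0.25, 0.125]
print(lagged_feature(points, nir, k=2))  # [0.5, 0.375, 0.625, 0.5]
```

Feeding such columns to a tree-based regressor alongside the raw bands is the essence of the SLX/SLML setup described above.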

24 pages, 23486 KiB  
Article
Influence of Model Size and Image Augmentations on Object Detection in Low-Contrast Complex Background Scenes
by Harman Singh Sangha and Matthew J. Darr
AI 2025, 6(3), 52; https://doi.org/10.3390/ai6030052 - 5 Mar 2025
Viewed by 351
Abstract
Background: Bigger and more complex models are often developed for challenging object detection tasks, and image augmentations are used to train robust deep learning models on small image datasets. Previous studies have suggested that smaller models outperform bigger ones in agricultural applications, and that not all image augmentation methods contribute equally to model performance. An important part of these studies was also to define the scene of the image. Methods: A standard definition was developed to describe scenes in real-world agricultural datasets by reviewing various image-based machine-learning applications in the agriculture literature. This study primarily evaluates the effect of model size, in both one-stage and two-stage detectors, on performance in low-contrast complex background applications. It further explores the influence of different photometric image augmentation methods on the performance of standard one-stage and two-stage detectors. Results: For one-stage detectors, a smaller model performed better than a bigger one, whereas for two-stage detectors performance increased with model size. Among the image augmentations, some methods considerably improved model performance while others provided no improvement or reduced performance relative to the baseline, in both one-stage and two-stage detectors. Full article
(This article belongs to the Special Issue Artificial Intelligence in Agriculture)

27 pages, 39555 KiB  
Article
Development and Comparison of Artificial Neural Networks and Gradient Boosting Regressors for Predicting Topsoil Moisture Using Forecast Data
by Miriam Zambudio Martínez, Larissa Haringer Martins da Silveira, Rafael Marin-Perez and Antonio Fernando Skarmeta Gomez
AI 2025, 6(2), 41; https://doi.org/10.3390/ai6020041 - 19 Feb 2025
Viewed by 416
Abstract
Introduction: The Earth’s growing population is increasing resource consumption and heavily pressuring agriculture, which currently uses 70% of the world’s freshwater from rivers and lakes, bodies that themselves comprise only 1% of the Earth’s water reserves. Combined with climate change, the situation is alarming. These challenges drive Agriculture 4.0, which focuses on sustainable agricultural processes to optimise water use. Objective: In this context, this study proposes a model based on Artificial Intelligence (AI) techniques to predict topsoil moisture in a study area located in the south of the Iberian Peninsula, a primarily agricultural region facing recurrent droughts and water scarcity. Methods: To develop the model, Artificial Neural Networks (ANNs) and Gradient Boosting Regressors (GBRs) were compared, using topsoil moisture data from seven probes distributed over the study area, together with several variables (temperature, relative humidity, solar radiation, wind speed, precipitation and evapotranspiration) from a selection of weather stations and ensemble forecasts from meteorological models. Results: The final GBR model, with a 0.01 learning rate, a maximum depth of 5, and 350 estimators, predicted topsoil moisture with an average mean squared error (MSE) of 0.027 and a maximum difference between observed and predicted data of 20.09% over a two-year series (May 2022–June 2024). Full article
(This article belongs to the Special Issue Artificial Intelligence in Agriculture)
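
To make the gradient boosting idea concrete, here is a from-scratch sketch with squared loss and depth-1 stumps on made-up 1-D data. This is only the general scheme behind a GBR; the study's actual model (learning rate 0.01, max depth 5, 350 estimators) was trained on multi-variable weather, forecast, and probe data:

```python
# Minimal gradient boosting regressor sketch (squared loss, 1-D input,
# depth-1 stumps): each stage fits a stump to the residuals of the current
# ensemble, and its output is added scaled by the learning rate.

def fit_stump(x, r):
    """Best single-threshold split minimizing squared error on residuals r."""
    best = None
    for t in sorted(set(x)):
        left = [ri for xi, ri in zip(x, r) if xi <= t]
        right = [ri for xi, ri in zip(x, r) if xi > t]
        if not left or not right:
            continue
        lm, rm = sum(left) / len(left), sum(right) / len(right)
        err = sum((ri - (lm if xi <= t else rm)) ** 2 for xi, ri in zip(x, r))
        if best is None or err < best[0]:
            best = (err, t, lm, rm)
    _, t, lm, rm = best
    return lambda xi: lm if xi <= t else rm

def fit_gbr(x, y, n_estimators=200, learning_rate=0.1):
    base = sum(y) / len(y)          # stage 0: predict the mean
    stumps = []
    pred = [base] * len(x)
    for _ in range(n_estimators):
        resid = [yi - pi for yi, pi in zip(y, pred)]
        stump = fit_stump(x, resid)
        stumps.append(stump)
        pred = [pi + learning_rate * stump(xi) for pi, xi in zip(pred, x)]
    return lambda xi: base + learning_rate * sum(s(xi) for s in stumps)

# Toy soil-moisture-like data: moisture falls as temperature rises.
x = [10, 15, 20, 25, 30, 35]
y = [0.9, 0.8, 0.6, 0.5, 0.3, 0.2]
model = fit_gbr(x, y)
print(round(model(12), 2), round(model(32), 2))
```

Library implementations differ mainly in using deeper trees, many input features, and subsampling, but the residual-fitting loop is the same.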

18 pages, 11587 KiB  
Article
The Detection and Counting of Olive Tree Fruits Using Deep Learning Models in Tacna, Perú
by Erbert Osco-Mamani, Oliver Santana-Carbajal, Israel Chaparro-Cruz, Daniel Ochoa-Donoso and Sylvia Alcazar-Alay
AI 2025, 6(2), 25; https://doi.org/10.3390/ai6020025 - 1 Feb 2025
Viewed by 1076
Abstract
Predicting crop performance is key to decision making for farmers and business owners. Tacna is the main olive-producing region in Perú, with an annual yield of 6.4 t/ha, mainly of the Sevillana variety. Recently, olive production levels have fluctuated due to severe weather conditions and disease outbreaks, and these climatic phenomena are expected to continue in the coming years. The objective of the study was to evaluate the performance of CNN models for detecting and counting olive fruits from images in natural and specific environments of the olive grove. Among the models evaluated, YOLOv8m proved to be the most effective (94.960), followed by YOLOv8s, Faster R-CNN and RetinaNet. For the mAP50-95 metric, YOLOv8m was also the most effective (0.775). YOLOv8m achieved the best counting performance, with an RMSE of 402.458 and a coefficient of determination R2 of 0.944, indicating a high correlation with the actual fruit count. As part of this study, a novel olive fruit dataset was developed to capture the variability under different fruit conditions. We concluded that predicting crops from images requires consideration of field imaging conditions, color tones, and the similarity between olives and leaves. Full article
(This article belongs to the Special Issue Artificial Intelligence in Agriculture)

22 pages, 686 KiB  
Article
AgriNAS: Neural Architecture Search with Adaptive Convolution and Spatial–Time Augmentation Method for Soybean Diseases
by Oluwatoyin Joy Omole, Renata Lopes Rosa, Muhammad Saadi and Demóstenes Zegarra Rodriguez
AI 2024, 5(4), 2945-2966; https://doi.org/10.3390/ai5040142 - 16 Dec 2024
Viewed by 1068
Abstract
Soybean is a critical agricultural commodity, serving as a vital source of protein and vegetable oil, and contributing significantly to the economies of producing nations. However, soybean yields are frequently compromised by disease and pest infestations, which, if not identified early, can lead to substantial production losses. To address this challenge, we propose AgriNAS, a method that integrates a Neural Architecture Search (NAS) framework with an adaptive convolutional architecture specifically designed for plant pathology. AgriNAS employs a novel data augmentation strategy and a Spatial–Time Augmentation (STA) method, and it utilizes a multi-stage convolutional network that dynamically adapts to the complexity of the input data. The proposed AgriNAS leverages powerful GPU resources to handle the intensive computational tasks involved in NAS and model training. The framework incorporates a bi-level optimization strategy and entropy-based regularization to enhance model robustness and prevent overfitting. AgriNAS achieves classification accuracies superior to VGG-19 and a transfer learning method using convolutional neural networks. Full article
(This article belongs to the Special Issue Artificial Intelligence in Agriculture)

15 pages, 5569 KiB  
Article
Comparative Analysis of Machine Learning Techniques Using RGB Imaging for Nitrogen Stress Detection in Maize
by Sumaira Ghazal, Namratha Kommineni and Arslan Munir
AI 2024, 5(3), 1286-1300; https://doi.org/10.3390/ai5030062 - 28 Jul 2024
Cited by 3 | Viewed by 2239
Abstract
Proper nitrogen management in crops is crucial to ensure optimal growth and yield maximization. While hyperspectral imagery is often used for nitrogen status estimation in crops, it is not feasible for real-time applications due to the complexity and high cost associated with it. Much of the research utilizing RGB data for detecting nitrogen stress in plants relies on datasets obtained under laboratory settings, which limits its usability in practical applications. This study focuses on identifying nitrogen deficiency in maize crops using RGB imaging data from a publicly available dataset obtained under field conditions. We have proposed a custom-built vision transformer model for the classification of maize into three stress classes. Additionally, we have analyzed the performance of convolutional neural network models, including ResNet50, EfficientNetB0, InceptionV3, and DenseNet121, for nitrogen stress estimation. Our approach involves transfer learning with fine-tuning, adding layers tailored to our specific application. Our detailed analysis shows that while vision transformer models generalize well, they converge prematurely with a higher loss value, indicating the need for further optimization. In contrast, the fine-tuned CNN models classify the crop into stressed, non-stressed, and semi-stressed classes with higher accuracy, achieving a maximum accuracy of 97% with EfficientNetB0 as the base model. This makes our fine-tuned EfficientNetB0 model a suitable candidate for practical applications in nitrogen stress detection. Full article
(This article belongs to the Special Issue Artificial Intelligence in Agriculture)

14 pages, 31064 KiB  
Article
Enhancing Tuta absoluta Detection on Tomato Plants: Ensemble Techniques and Deep Learning
by Nikolaos Giakoumoglou, Eleftheria-Maria Pechlivani, Nikolaos Frangakis and Dimitrios Tzovaras
AI 2023, 4(4), 996-1009; https://doi.org/10.3390/ai4040050 - 20 Nov 2023
Cited by 6 | Viewed by 2997
Abstract
Early detection and efficient management practices to control Tuta absoluta (Meyrick) infestation are crucial for safeguarding tomato production yield and minimizing economic losses. This study investigates the detection of T. absoluta infestation on tomato plants using object detection models combined with ensemble techniques. Additionally, this study highlights the importance of utilizing a dataset captured in real settings in open-field and greenhouse environments to address the complexity of real-life challenges in object detection of plant health scenarios. The effectiveness of deep-learning-based models, including Faster R-CNN and RetinaNet, was evaluated in terms of detecting T. absoluta damage. The initial model evaluations revealed diminishing performance levels across various model configurations, including different backbones and heads. To enhance detection predictions and improve mean Average Precision (mAP) scores, ensemble techniques such as Non-Maximum Suppression (NMS), Soft Non-Maximum Suppression (Soft NMS), Non-Maximum Weighted (NMW), and Weighted Boxes Fusion (WBF) were applied. The outcomes showed that the WBF technique significantly improved the mAP scores, resulting in a 20% improvement from 0.58 (max mAP from individual models) to 0.70. The results of this study contribute to the field of agricultural pest detection by emphasizing the potential of deep learning and ensemble techniques in improving the accuracy and reliability of object detection models. Full article
(This article belongs to the Special Issue Artificial Intelligence in Agriculture)
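
A simplified single-class sketch of the Weighted Boxes Fusion step applied above: overlapping detections from several models are merged into a confidence-weighted average box instead of being suppressed as in NMS. The boxes, scores, and threshold below are invented, and real WBF (Solovyev et al.) also rescales scores by the number of contributing models:

```python
# Simplified Weighted Boxes Fusion: cluster boxes by IoU, then fuse each
# cluster into one box whose coordinates are score-weighted averages.

def iou(a, b):
    ax1, ay1, ax2, ay2 = a
    bx1, by1, bx2, by2 = b
    ix = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    iy = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = ix * iy
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union else 0.0

def fuse(cluster):
    """Score-weighted mean of coordinates; fused score is the mean score."""
    w = sum(s for _, s in cluster)
    box = tuple(sum(b[i] * s for b, s in cluster) / w for i in range(4))
    return box, sum(s for _, s in cluster) / len(cluster)

def wbf(boxes, scores, iou_thr=0.55):
    clusters = []  # each cluster: list of (box, score)
    for box, score in sorted(zip(boxes, scores), key=lambda p: -p[1]):
        for c in clusters:
            if iou(fuse(c)[0], box) > iou_thr:
                c.append((box, score))
                break
        else:
            clusters.append([(box, score)])
    return [fuse(c) for c in clusters]

boxes = [(10, 10, 50, 50), (12, 12, 52, 52), (200, 200, 240, 240)]
scores = [0.9, 0.6, 0.8]
fused = wbf(boxes, scores)
print(len(fused))  # 2 -- the two overlapping boxes are merged
```

Averaging rather than discarding is why WBF can lift mAP when detections come from several complementary models.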

19 pages, 3076 KiB  
Article
A General Machine Learning Model for Assessing Fruit Quality Using Deep Image Features
by Ioannis D. Apostolopoulos, Mpesi Tzani and Sokratis I. Aznaouridis
AI 2023, 4(4), 812-830; https://doi.org/10.3390/ai4040041 - 27 Sep 2023
Cited by 15 | Viewed by 12019
Abstract
Fruit quality is a critical factor in the produce industry, affecting producers, distributors, consumers, and the economy. High-quality fruits are more appealing, nutritious, and safe, boosting consumer satisfaction and revenue for producers. Artificial intelligence can aid in assessing the quality of fruit using images. This paper presents a general machine learning model for assessing fruit quality using deep image features. This model leverages the learning capabilities of the recent successful networks for image classification called vision transformers (ViT). The ViT model is built and trained with a combination of various fruit datasets and taught to distinguish between good and rotten fruit images based on their visual appearance and not predefined quality attributes. The general model demonstrated impressive results in accurately identifying the quality of various fruits, such as apples (with a 99.50% accuracy), cucumbers (99%), grapes (100%), kakis (99.50%), oranges (99.50%), papayas (98%), peaches (98%), tomatoes (99.50%), and watermelons (98%). However, it showed slightly lower performance in identifying guavas (97%), lemons (97%), limes (97.50%), mangoes (97.50%), pears (97%), and pomegranates (97%). Full article
(This article belongs to the Special Issue Artificial Intelligence in Agriculture)

14 pages, 3156 KiB  
Article
Comparison of Various Nitrogen and Water Dual Stress Effects for Predicting Relative Water Content and Nitrogen Content in Maize Plants through Hyperspectral Imaging
by Hideki Maki, Valerie Lynch, Dongdong Ma, Mitchell R. Tuinstra, Masanori Yamasaki and Jian Jin
AI 2023, 4(3), 692-705; https://doi.org/10.3390/ai4030036 - 18 Aug 2023
Cited by 2 | Viewed by 2504
Abstract
Water and nitrogen (N) are major factors in plant growth and agricultural production. However, these are often confounded and produce overlapping symptoms of plant stress. The objective of this study is to verify whether the different levels of N treatment influence water status prediction and vice versa with hyperspectral modeling. We cultivated 108 maize plants in a greenhouse under three-level N treatments in combination with three-level water treatments. Hyperspectral images were collected from those plants, then Relative Water Content (RWC), as well as N content, was measured as ground truth. A Partial Least Squares (PLS) regression analysis was used to build prediction models for RWC and N content. Then, their accuracy and robustness were compared according to the different N treatment datasets and different water treatment datasets, respectively. The results demonstrated that the PLS prediction for RWC using hyperspectral data was impacted by N stress difference (Ratio of Performance to Deviation; RPD from 0.87 to 2.27). Furthermore, the dataset with water and N dual stresses improved model accuracy and robustness (RPD from 1.69 to 2.64). Conversely, the PLS prediction for N content was found to be robust against water stress difference (RPD from 2.33 to 3.06). In conclusion, we suggest that water and N dual treatments can be helpful in building models with wide applicability and high accuracy for evaluating plant water status such as RWC. Full article
(This article belongs to the Special Issue Artificial Intelligence in Agriculture)
13 pages, 3794 KiB  
Article
Application of Machine Learning for Insect Monitoring in Grain Facilities
by Querriel Arvy Mendoza, Lester Pordesimo, Mitchell Neilsen, Paul Armstrong, James Campbell and Princess Tiffany Mendoza
AI 2023, 4(1), 348-360; https://doi.org/10.3390/ai4010017 - 22 Mar 2023
Cited by 20 | Viewed by 8102
Abstract
In this study, a basic insect detection system consisting of a manual-focus camera, a Jetson Nano (a low-cost, low-power single-board computer), and a trained deep learning model was developed. The model was validated through a live visual feed. Detecting, classifying, and monitoring insect pests in a grain storage or food facility in real time is vital to making insect control decisions. The camera captures the image of the insect and passes it to the Jetson Nano for processing. The Jetson Nano runs a trained deep-learning model to detect the presence and species of insects. The detection results, under three different lighting conditions (white LED light, yellow LED light, and no lighting), are displayed on a monitor. Validated using F1 scores, with accuracy compared across light sources, the system was tested with a variety of stored grain insect pests and was able to detect and classify adult cigarette beetles and warehouse beetles with acceptable accuracy. The results demonstrate that the system is an effective and affordable automated solution to insect detection. Such an automated insect detection system can help reduce pest control costs and save producers time and energy while safeguarding the quality of stored products. Full article
(This article belongs to the Special Issue Artificial Intelligence in Agriculture)
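The F1 validation the abstract mentions reduces to precision and recall over detection counts. The per-lighting counts below are hypothetical placeholders (the paper does not report these exact numbers); the snippet only shows the metric being computed per light source.

```python
# F1 from detection counts: tp = correct detections, fp = spurious
# detections, fn = missed insects.
def f1(tp, fp, fn):
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Hypothetical counts per lighting condition, for illustration only.
counts = {
    "white LED": (46, 3, 4),
    "yellow LED": (44, 5, 6),
    "no light": (30, 8, 20),
}
for lighting, (tp, fp, fn) in counts.items():
    print(f"{lighting:>10}: F1 = {f1(tp, fp, fn):.3f}")
```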
15 pages, 1696 KiB  
Article
Data Synthesis for Alfalfa Biomass Yield Estimation
by Jonathan Vance, Khaled Rasheed, Ali Missaoui and Frederick W. Maier
AI 2023, 4(1), 1-15; https://doi.org/10.3390/ai4010001 - 21 Dec 2022
Cited by 1 | Viewed by 2496
Abstract
Alfalfa is critical to global food security, and its data is abundant in the U.S. nationally, but often scarce locally, limiting the potential performance of machine learning (ML) models in predicting alfalfa biomass yields. Training ML models on local-only data results in very low estimation accuracy when the datasets are very small. Therefore, we explore synthesizing non-local data to estimate biomass yields labeled as high, medium, or low. One option to remedy scarce local data is to train models using non-local data; however, this only works about as well as using local data. Therefore, we propose a novel pipeline that trains models using data synthesized from non-local data to estimate local crop yields. Our pipeline, synthesized non-local training (SNLT pronounced like sunlight), achieves a gain of 42.9% accuracy over the best results from regular non-local and local training on our very small target dataset. This pipeline produced the highest accuracy of 85.7% with a decision tree classifier. From these results, we conclude that SNLT can be a useful tool in helping to estimate crop yields with ML. Furthermore, we propose a software application called Predict Your CropS (PYCS pronounced like Pisces) designed to help farmers and researchers estimate and predict crop yields based on pretrained models. Full article
(This article belongs to the Special Issue Artificial Intelligence in Agriculture)
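The core SNLT idea, training a classifier on rows synthesized from non-local data and evaluating on a tiny local set, can be sketched as below. The feature names, jitter-based synthesis, and class thresholds are illustrative assumptions, not the paper's actual pipeline.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(1)

# Synthesize extra training rows from non-local data by adding small
# feature jitter (a simple stand-in for the paper's synthesis step).
def synthesize(X, y, copies=5, noise=0.05):
    Xs = np.vstack([X + rng.normal(0, noise, X.shape) for _ in range(copies)])
    ys = np.tile(y, copies)
    return Xs, ys

# Non-local samples: [rainfall, growing degree days, soil moisture],
# all scaled to [0, 1]; yield class 0/1/2 = low/medium/high.
X_nonlocal = rng.uniform(0, 1, (40, 3))
y_nonlocal = np.digitize(X_nonlocal.sum(axis=1), [1.0, 2.0])

X_syn, y_syn = synthesize(X_nonlocal, y_nonlocal)
clf = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X_syn, y_syn)

# Very small "local" target set, mirroring the paper's scarce-data setting.
X_local = rng.uniform(0, 1, (7, 3))
y_local = np.digitize(X_local.sum(axis=1), [1.0, 2.0])
acc = clf.score(X_local, y_local)
print(f"local accuracy: {acc:.3f}")
```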
11 pages, 337 KiB  
Article
Dimensionality Reduction Statistical Models for Soil Attribute Prediction Based on Raw Spectral Data
by Marcelo Chan Fu Wei, Ricardo Canal Filho, Tiago Rodrigues Tavares, José Paulo Molin and Afrânio Márcio Corrêa Vieira
AI 2022, 3(4), 809-819; https://doi.org/10.3390/ai3040049 - 30 Sep 2022
Cited by 4 | Viewed by 2728
Abstract
To obtain a better performance when modeling soil spectral data for attribute prediction, researchers frequently resort to data pretreatment, aiming to reduce noise and highlight the spectral features. Even with the awareness of the existence of dimensionality reduction statistical approaches that can cope with data sparse dimensionality, few studies have explored its applicability in soil sensing. Therefore, this study’s objective was to assess the predictive performance of two dimensionality reduction statistical models that are not widespread in the proximal soil sensing community: principal components regression (PCR) and least absolute shrinkage and selection operator (lasso). Here, these two approaches were compared with multiple linear regressions (MLR). All of the modelling strategies were applied without employing pretreatment techniques for soil attribute determination using X-ray fluorescence spectroscopy (XRF) and visible and near-infrared diffuse reflectance spectroscopy (Vis-NIR) data. In addition, the achieved results were compared against the ones reported in the literature that applied pretreatment techniques. The study was carried out with 102 soil samples from two distinct fields. Predictive models were developed for nine chemical and physical soil attributes, using lasso, PCR and MLR. Both Vis-NIR and XRF raw spectral data presented a great performance for soil attribute prediction when modelled with PCR and the lasso method. In general, similar results were found comparing the root mean squared error (RMSE) and coefficient of determination (R2) from the literature that applied pretreatment techniques and this study. For example, considering base saturation (V%), for Vis-NIR combined with PCR, in this study, RMSE and R2 values of 10.60 and 0.79 were found compared with 10.38 and 0.80, respectively, in the literature. In addition, looking at potassium (K), XRF associated with lasso yielded an RMSE value of 0.60 and R2 of 0.92, and in the literature, RMSE and R2 of 0.53 and 0.95, respectively, were found. The major discrepancy was observed for phosphorus (P) and organic matter (OM) prediction applying PCR in the XRF data, which showed R2 of 0.33 (for P) and 0.52 (for OM) without using pretreatment techniques in this study, and R2 of 0.01 (for P) and 0.74 (for OM) when using preprocessing techniques in the literature. These results indicate that data pretreatment can be dispensable for predicting some soil attributes when using Vis-NIR and XRF raw data modeled with dimensionality reduction statistical models. Despite this, there is no consensus on the best way to calibrate data, as this seems to be attribute and area specific. Full article
(This article belongs to the Special Issue Artificial Intelligence in Agriculture)
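The PCR-versus-lasso comparison on raw (untreated) spectra can be sketched as follows. The spectra and target are synthetic (the informative channel indices and noise are illustrative assumptions); what matters is the pattern: fit both dimensionality-reduction models on raw data and compare RMSE and R2.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression, Lasso
from sklearn.metrics import mean_squared_error, r2_score

rng = np.random.default_rng(2)

# Raw spectra: 102 samples x 300 channels (stand-in for XRF/Vis-NIR data);
# the target attribute depends on two channels plus noise (illustrative).
X = rng.normal(0, 1, (102, 300))
y = 2.0 * X[:, 10] - 1.5 * X[:, 200] + rng.normal(0, 0.3, 102)

train, test = slice(0, 80), slice(80, None)

# PCR = PCA followed by ordinary least squares; lasso does shrinkage
# and channel selection directly on the raw spectra.
models = {
    "PCR": make_pipeline(PCA(n_components=20), LinearRegression()),
    "lasso": Lasso(alpha=0.05),
}

results = {}
for name, model in models.items():
    model.fit(X[train], y[train])
    pred = model.predict(X[test])
    rmse = mean_squared_error(y[test], pred) ** 0.5
    results[name] = (rmse, r2_score(y[test], pred))
    print(f"{name}: RMSE={results[name][0]:.2f}  R2={results[name][1]:.2f}")
```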
20 pages, 11177 KiB  
Article
A Spatial AI-Based Agricultural Robotic Platform for Wheat Detection and Collision Avoidance
by Sujith Gunturu, Arslan Munir, Hayat Ullah, Stephen Welch and Daniel Flippo
AI 2022, 3(3), 719-738; https://doi.org/10.3390/ai3030042 - 30 Aug 2022
Cited by 9 | Viewed by 4793
Abstract
To obtain more consistent measurements through the course of a wheat growing season, we conceived and designed an autonomous robotic platform that performs collision avoidance while navigating in crop rows using spatial artificial intelligence (AI). The main constraint the agronomists have is to [...] Read more.
To obtain more consistent measurements through the course of a wheat growing season, we conceived and designed an autonomous robotic platform that performs collision avoidance while navigating in crop rows using spatial artificial intelligence (AI). The main constraint the agronomists have is to not run over the wheat while driving. Accordingly, we have trained a spatial deep learning model that helps navigate the robot autonomously in the field while avoiding collisions with the wheat. To train this model, we used publicly available databases of prelabeled images of wheat, along with the images of wheat that we have collected in the field. We used the MobileNet single shot detector (SSD) as our deep learning model to detect wheat in the field. To increase the frame rate for real-time robot response to field environments, we trained MobileNet SSD on the wheat images and used a new stereo camera, the Luxonis Depth AI Camera. Together, the newly trained model and camera could achieve a frame rate of 18–23 frames per second (fps)—fast enough for the robot to process its surroundings once every 2–3 inches of driving. Once we knew the robot accurately detects its surroundings, we addressed the autonomous navigation of the robot. The new stereo camera allows the robot to determine its distance from the trained objects. In this work, we also developed a navigation and collision avoidance algorithm that utilizes this distance information to help the robot see its surroundings and maneuver in the field, thereby precisely avoiding collisions with the wheat crop. Extensive experiments were conducted to evaluate the performance of our proposed method. We also compared the quantitative results obtained by our proposed MobileNet SSD model with those of other state-of-the-art object detection models, such as the YOLO V5 and Faster region-based convolutional neural network (R-CNN) models. The detailed comparative analysis reveals the effectiveness of our method in terms of both model precision and inference speed. Full article
(This article belongs to the Special Issue Artificial Intelligence in Agriculture)
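The distance-based avoidance step the abstract describes (stereo depth per detection drives the maneuver) can be sketched as a small decision function. The thresholds and command names are illustrative assumptions, not the paper's tuned values.

```python
# Each detection is (x_center in [0, 1] across the image, depth in meters)
# as a stereo depth camera would report for a detected wheat plant.
SLOW_M, STOP_M = 1.5, 0.5   # illustrative thresholds

def avoid(detections):
    """Return a drive command for the nearest detected obstacle."""
    if not detections:
        return "forward"
    x, depth = min(detections, key=lambda d: d[1])  # nearest detection
    if depth < STOP_M:
        return "stop"
    if depth < SLOW_M:
        # steer away from the side the obstacle is on
        return "steer_right" if x < 0.5 else "steer_left"
    return "forward"

print(avoid([]))                        # forward: row is clear
print(avoid([(0.3, 1.0), (0.7, 2.0)]))  # steer_right: wheat close on the left
print(avoid([(0.5, 0.4)]))              # stop: obstacle inside stop range
```

At 18–23 fps this check runs every frame, i.e., roughly once per 2–3 inches of travel at the robot's driving speed.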
Review


17 pages, 2210 KiB  
Review
A Systematic Literature Review on Parameters Optimization for Smart Hydroponic Systems
by Umar Shareef, Ateeq Ur Rehman and Rafiq Ahmad
AI 2024, 5(3), 1517-1533; https://doi.org/10.3390/ai5030073 - 27 Aug 2024
Cited by 1 | Viewed by 4587
Abstract
Hydroponics is a soilless farming technique that has emerged as a sustainable alternative. However, new technologies such as Industry 4.0, the internet of things (IoT), and artificial intelligence are needed to keep up with issues related to economics, automation, and social challenges in hydroponics farming. One significant issue is optimizing growth parameters to identify the best conditions for growing fruits and vegetables. These parameters include pH, total dissolved solids (TDS), electrical conductivity (EC), light intensity, daily light integral (DLI), and nutrient solution/ambient temperature and humidity. To address these challenges, a systematic literature review was conducted, aiming to answer research questions regarding the optimal growth parameters for leafy green vegetables and herbs and spices grown in hydroponic systems. The review selected a total of 131 papers related to indoor farming, hydroponics, and aquaponics. The majority of the articles focused on technology description (38.5%), artificial illumination (26.2%), and nutrient solution composition/parameters (13.8%). The remaining articles (10.7%) focused on the application of sensors, slope, environment, and economy. This comprehensive review provides valuable information on optimized growth parameters for smart hydroponic systems and explores future prospects and the application of digital technologies in this field. Full article
(This article belongs to the Special Issue Artificial Intelligence in Agriculture)
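A smart hydroponic controller built on such optimized parameters ultimately reduces to checking sensor readings against crop-specific target bands. The ranges below are illustrative placeholders, not the review's recommendations; the review's point is precisely that these bands are crop-specific and must be optimized.

```python
# Illustrative target bands for a hypothetical leafy-green recipe:
# (low, high) per monitored parameter.
TARGETS = {
    "pH": (5.5, 6.5),
    "EC_mS_cm": (1.2, 2.4),
    "water_temp_C": (18.0, 24.0),
}

def out_of_range(readings):
    """Return the parameters whose reading falls outside its target band."""
    return {k: v for k, v in readings.items()
            if k in TARGETS and not TARGETS[k][0] <= v <= TARGETS[k][1]}

alerts = out_of_range({"pH": 7.1, "EC_mS_cm": 1.8, "water_temp_C": 26.5})
print(alerts)  # pH and water temperature are outside their bands
```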