Exploring the Application of Artificial Intelligence and Image Processing in Agriculture

A special issue of AgriEngineering (ISSN 2624-7402). This special issue belongs to the section "Computer Applications and Artificial Intelligence in Agriculture".

Deadline for manuscript submissions: 31 May 2025 | Viewed by 16325

Special Issue Editors


Guest Editor
School of Architecture, Feng Chia University, Taichung 40724, Taiwan
Interests: image processing; robotics in indoor navigation; deep learning; AI vision computing; image object detection and recognition system; DNA computing; discrete mathematics

Guest Editor
Department of Electronic Engineering, Feng Chia University, Taichung 40724, Taiwan
Interests: image processing; computer vision; data analytics

Special Issue Information

Dear Colleagues,

In recent years, artificial intelligence (AI) technology has gained popularity across many sectors, including, but not limited to, robotics, education, banking, and agriculture. Advances in sensing technologies, such as RGB-D, multi- and hyperspectral, and 3D imaging, together with the proliferation of the Internet of Things (IoT), have enabled the retrieval of information across a wide range of spatial, spectral, and temporal domains. Coupled with AI approaches, this has yielded new insights and analyses. In particular, AI-powered computer vision technologies have become crucial to the development of intelligent and automated solutions. Within the agricultural sector, the implementation of AI has led to significant improvements in crop production and in real-time monitoring, harvesting, processing, and marketing. Various high-tech computer-based systems have been developed to determine important parameters such as weed presence, yield, and crop quality. However, understanding and addressing the challenges related to safety and quality assessment for food production using AI technologies is a necessary step toward realizing the full potential of these technologies within the agricultural sector. As such, this Special Issue welcomes both fundamental science and applied research describing practical applications of AI methods in agriculture, food and bio-system engineering, and related areas.

This Special Issue welcomes original research articles, review articles, perspective papers, and short communications on the following topics of interest:

  • AI-based precision agriculture;
  • Smart sensors and Internet of Things;
  • Agricultural robotics and automation equipment;
  • Computational intelligence in agriculture;
  • AI in agricultural optimization management;
  • Intelligent systems for agriculture;
  • Precision agricultural aviation;
  • Expert systems in agriculture;
  • Remote sensing in agriculture;
  • AI technology in aquaculture;
  • AI in food engineering;
  • Automatic navigation and self-driving technology;
  • Intelligent interfaces and human–machine interaction;
  • Machine vision and image/signal processing;
  • Machine learning and pattern recognition.

Dr. Yee Siang Gan
Dr. Sze-Teng Liong
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. AgriEngineering is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • agriculture
  • AIoT
  • artificial intelligence
  • image processing
  • robotics

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (16 papers)

Research

15 pages, 9032 KiB  
Article
Flowering Intensity Estimation Using Computer Vision
by Sergejs Kodors, Imants Zarembo, Ilmars Apeinans, Edgars Rubauskis and Lienite Litavniece
AgriEngineering 2025, 7(4), 117; https://doi.org/10.3390/agriengineering7040117 - 10 Apr 2025
Viewed by 196
Abstract
Flowering intensity is an important parameter for predicting and controlling fruit yield. However, its estimation is often based on subjective evaluations by fruit growers. This study explores the application of the YOLO framework to flowering intensity estimation. YOLO is a popular computer vision solution for object-detection tasks and has been applied to flower detection in several studies. Still, it requires manual annotation of photographs of flowering trees, which is a complex and time-consuming process: individual flowers are hard to distinguish in photos due to their overlapping and indistinct outlines, false-positive flowers in the background, and the density of flowers in panicles. Our experiment shows that a small dataset of images (320 × 320 px) is sufficient to achieve 0.995 and 0.994 mAP@50 for YOLOv9m and YOLOv11m using aggregated mosaic augmentation. The AI-based method was compared with the manual method (flowering intensity estimation on a 0–9 scale) using data analysis and the MobileNetV2 classifier as an evaluation model. The analysis shows that the AI-based method is more effective than the manual method. Full article
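The count-to-score step that the study compares against manual 0–9 ratings can be sketched in a few lines; the linear mapping and the saturation count below are illustrative assumptions, not the authors' calibration:

```python
def flowering_intensity(detections, conf_thresh=0.5, max_flowers=450):
    """Convert YOLO-style flower detections for one tree image into a
    0-9 intensity score (0 = no bloom, 9 = full bloom).

    `detections` is a list of (x, y, w, h, confidence) tuples. Both the
    linear count-to-scale mapping and the `max_flowers` saturation point
    are hypothetical, chosen only to illustrate the idea.
    """
    # Count detections the model is confident about, then scale.
    count = sum(1 for *_, conf in detections if conf >= conf_thresh)
    return min(9, round(9 * count / max_flowers))
```

An image with no confident detections scores 0, while a densely flowering canopy saturates at 9.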

30 pages, 4911 KiB  
Article
In-Field Forage Biomass and Quality Prediction Using Image and VIS-NIR Proximal Sensing with Machine Learning and Covariance-Based Strategies for Livestock Management in Silvopastoral Systems
by Claudia M. Serpa-Imbett, Erika L. Gómez-Palencia, Diego A. Medina-Herrera, Jorge A. Mejía-Luquez, Remberto R. Martínez, William O. Burgos-Paz and Lorena A. Aguayo-Ulloa
AgriEngineering 2025, 7(4), 111; https://doi.org/10.3390/agriengineering7040111 - 8 Apr 2025
Viewed by 336
Abstract
Controlling forage quality and grazing are crucial for sustainable livestock production, health, productivity, and animal performance. However, the limited availability of reliable handheld sensors for timely pasture quality prediction hinders farmers’ ability to make informed decisions. This study investigates the in-field dynamics of Mombasa grass (Megathyrsus maximus) forage biomass production and quality using optical techniques such as visible imaging and near-infrared (VIS-NIR) hyperspectral proximal sensing combined with machine learning models enhanced by covariance-based error reduction strategies. Data collection was conducted using a cellphone camera and a handheld VIS-NIR spectrometer. Feature extraction to build the dataset involved image segmentation, performed using the Mahalanobis distance algorithm, as well as spectral processing to calculate multiple vegetation indices. Machine learning models, including linear regression, LASSO, Ridge, ElasticNet, k-nearest neighbors, and decision tree algorithms, were employed for predictive analysis, achieving high accuracy with R2 values ranging from 0.938 to 0.998 in predicting biomass and quality traits. A strategy to achieve high performance was implemented by using four spectral captures and computing the reflectance covariance at NIR wavelengths, accounting for the three-dimensional characteristics of the forage. These findings are expected to advance the development of AI-based tools and handheld sensors particularly suited for silvopastoral systems. Full article
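The covariance strategy described above, built from repeated captures of the same forage canopy, might be sketched as a per-wavelength variance feature; the abstract does not specify the exact estimator, so this is one plausible reading:

```python
def per_band_covariance(captures):
    """Per-wavelength spread of reflectance across repeated captures of
    the same canopy, restricted to NIR bands.

    `captures` is a list of equal-length reflectance vectors, one per
    capture. The sample variance per band is returned as a feature
    vector; using it to encode the forage's 3D structure is the strategy
    the abstract describes, but this estimator is an assumption.
    """
    n = len(captures)
    bands = len(captures[0])
    means = [sum(c[b] for c in captures) / n for b in range(bands)]
    # Unbiased sample variance across captures, band by band.
    return [sum((c[b] - means[b]) ** 2 for c in captures) / (n - 1)
            for b in range(bands)]
```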

13 pages, 6074 KiB  
Article
Hyperspectral Imaging for the Dynamic Mapping of Total Phenolic and Flavonoid Contents in Microgreens
by Pawita Boonrat, Manish Patel, Panuwat Pengphorm, Preeyabhorn Detarun and Chalongrat Daengngam
AgriEngineering 2025, 7(4), 107; https://doi.org/10.3390/agriengineering7040107 - 7 Apr 2025
Viewed by 326
Abstract
This study investigates the application of hyperspectral imaging (HSI) combined with machine learning (ML) models for the dynamic mapping of total phenolic content (TPC) and total flavonoid content (TFC) in sunflower microgreens. Spectral data were collected across different cultivation durations (Days 5, 6, and 7) to assess the secondary metabolite distribution in leaves and stems. Overall, the results indicate that TFC in leaves peaked on Day 5, followed by a decline on Days 6 and 7, while stems exhibited an opposite trend. However, TPC did not show a consistent pattern. Spectral reflectance analysis revealed higher near-infrared reflectance in leaves compared to stems. The variation in trait and spectral data among the collected samples was sufficient to develop models predicting the TPC and TFC content. K-nearest neighbours provided the highest predictive accuracy for TPC (R2 = 0.95 and 1.6 mg GAE/100 g) and ridge regression performed best for TFC (R2 = 0.97 and 6.1 mg QE/100 g). Dimensionality reduction via principal component analysis (PCA) proved effective for TPC and TFC prediction, with PC1 alone achieving performance comparable to the full spectral dataset. This integrated HSI-ML approach offers a non-destructive, real-time method for monitoring bioactive compounds, supporting sustainable agricultural practices, optimising harvest timing, and enhancing crop management. The findings can be further developed for smart microgreen farming to enable real-time secondary metabolite quantification, with future research recommended to explore other microgreen varieties for broader applicability. Full article
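Given that PC1 alone matched the full spectrum for TFC prediction, the final predictor reduces to a one-feature ridge regression; a closed-form sketch (the regularization strength `lam` is illustrative, not the study's tuned value):

```python
def ridge_1d(x, y, lam=1.0):
    """One-feature ridge regression, e.g. PC1 score -> total flavonoid
    content. Closed form on centred data; returns (slope, intercept).

    The lambda value is a placeholder, not the study's tuned setting.
    """
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((a - mx) ** 2 for a in x)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    # Ridge shrinks the OLS slope by the penalty added to sxx.
    slope = sxy / (sxx + lam)
    return slope, my - slope * mx
```

With `lam=0` this is ordinary least squares; larger `lam` shrinks the slope toward zero.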

13 pages, 5239 KiB  
Article
Random Reflectance: A New Hyperspectral Data Preprocessing Method for Improving the Accuracy of Machine Learning Algorithms
by Pavel A. Dmitriev, Anastasiya A. Dmitrieva and Boris L. Kozlovsky
AgriEngineering 2025, 7(3), 90; https://doi.org/10.3390/agriengineering7030090 - 20 Mar 2025
Viewed by 254
Abstract
Hyperspectral plant phenotyping is a method that has a wide range of applications in various fields, including agriculture, forestry, food processing, medicine and plant breeding. It can be used to obtain a large amount of spectral and spatial information about an object. However, it is important to acknowledge the inherent limitations of this approach, which include the presence of noise and the redundancy of information. The present study aims to assess a novel approach to hyperspectral data preprocessing, namely Random Reflectance (RR), for the classification of plant species. This study employs machine learning (ML) algorithms, specifically Random Forest (RF) and Gradient Boosting (GB), to analyse the performance of RR in comparison to Min–Max Normalisation (MMN) and Principal Component Analysis (PCA). The testing process was conducted on data derived from the proximal hyperspectral imaging (HSI) of leaves from three different maple species, which were sampled from trees at 7–10-day intervals between 2021 and 2024. The RF algorithm demonstrated a relative increase of 8.8% in the F1-score in 2021, 9.7% in 2022, 11.3% in 2023 and 11.8% in 2024. The GB algorithm exhibited a similar trend: 6.5% in 2021, 13.2% in 2022, 16.5% in 2023 and 17.4% in 2024. It has been demonstrated that hyperspectral data preprocessing with the MMN and PCA methods does not result in enhanced accuracy when classifying species using ML algorithms. The impact of preprocessing spectral profiles using the RR method may be associated with the observation that the synthesised set of spectral profiles exhibits a stronger reflection of the general parameters of spectral reflectance compared to the set of actual profiles. Subsequent research endeavours are anticipated to elucidate a mechanistic rationale for the RR method in conjunction with the RF and GB algorithms. Furthermore, the efficacy of this method will be evaluated through its application in deep machine learning algorithms. Full article
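One plausible reading of the RR preprocessing is band-wise uniform sampling between the class-wise minimum and maximum reflectance of the real profiles; the sketch below is an assumption about the method, not the authors' exact recipe:

```python
import random

def random_reflectance(profiles, n_synthetic, seed=None):
    """Synthesize spectral profiles by drawing, per wavelength, a random
    reflectance between the min and max observed in the real profiles of
    one class. This band-wise uniform sampling is one interpretation of
    the Random Reflectance (RR) idea, not the published algorithm.
    """
    rng = random.Random(seed)
    bands = list(zip(*profiles))          # per-wavelength reflectance values
    lo = [min(b) for b in bands]
    hi = [max(b) for b in bands]
    return [[rng.uniform(lo[b], hi[b]) for b in range(len(bands))]
            for _ in range(n_synthetic)]
```

The synthesized set preserves the class's overall reflectance envelope while discarding profile-specific noise, which is consistent with the explanation offered in the abstract.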

20 pages, 13379 KiB  
Article
From Simulation to Field Validation: A Digital Twin-Driven Sim2real Transfer Approach for Strawberry Fruit Detection and Sizing
by Omeed Mirbod, Daeun Choi and John K. Schueller
AgriEngineering 2025, 7(3), 81; https://doi.org/10.3390/agriengineering7030081 - 17 Mar 2025
Viewed by 506
Abstract
Typically, developing new digital agriculture technologies requires substantial on-site resources and data. However, the crop’s growth cycle provides only limited time windows for experiments and equipment validation. This study presents a photorealistic digital twin of a commercial-scale strawberry farm, coupled with a simulated ground vehicle, to address these constraints by generating high-fidelity synthetic RGB and LiDAR data. These data enable the rapid development and evaluation of a deep learning-based machine vision pipeline for fruit detection and sizing without continuously relying on real-field access. Traditional simulators often lack visual realism, leading many studies to mix real images or adopt domain adaptation methods to address the reality gap. In contrast, this work relies solely on photorealistic simulation outputs for training, eliminating the need for real images or specialized adaptation approaches. After training exclusively on images captured in the virtual environment, the model was tested on a commercial-scale strawberry farm using a physical ground vehicle. Two separate trials with field images resulted in F1-scores of 0.92 and 0.81 for detection and a sizing error of 1.4 mm (R2 = 0.92) when comparing image-derived diameters against caliper measurements. These findings indicate that a digital twin-driven sim2real transfer can offer substantial time and cost savings by refining crucial tasks such as stereo sensor calibration and machine learning model development before extensive real-field deployments. In addition, the study examined geometric accuracy and visual fidelity through systematic comparisons of LiDAR and RGB sensor outputs from the virtual and real farms. Results demonstrated close alignment in both topography and textural details, validating the digital twin’s ability to replicate intricate field characteristics, including raised bed geometry and strawberry plant distribution. The techniques developed and validated in this strawberry project have broad applicability across agricultural commodities, particularly for fruit and vegetable production systems. This study demonstrates that integrating digital twins with simulation tools can significantly reduce the need for resource-intensive field data collection while accelerating the development and refinement of agricultural robotics algorithms and hardware. Full article
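The image-derived diameters compared against caliper measurements can be obtained, in the simplest case, from the pinhole camera model; this sketch ignores lens distortion and off-axis effects and is not the study's full stereo pipeline:

```python
def fruit_diameter_mm(pixel_width, depth_mm, focal_px):
    """Estimate a fruit's real-world diameter from its pixel extent using
    the pinhole camera model: size = pixel width x depth / focal length
    (in pixels). Depth comes from a stereo or LiDAR sensor; distortion
    and off-axis foreshortening are ignored in this minimal sketch.
    """
    return pixel_width * depth_mm / focal_px
```

For example, a berry spanning 100 px at 500 mm range under a 1000 px focal length measures 50 mm.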

19 pages, 4335 KiB  
Article
Cost-Effective Active Laser Scanning System for Depth-Aware Deep-Learning-Based Instance Segmentation in Poultry Processing
by Pouya Sohrabipour, Chaitanya Kumar Reddy Pallerla, Amirreza Davar, Siavash Mahmoudi, Philip Crandall, Wan Shou, Yu She and Dongyi Wang
AgriEngineering 2025, 7(3), 77; https://doi.org/10.3390/agriengineering7030077 - 12 Mar 2025
Viewed by 455
Abstract
The poultry industry plays a pivotal role in global agriculture, with poultry serving as a major source of protein and contributing significantly to economic growth. However, the sector faces challenges associated with labor-intensive tasks that are repetitive and physically demanding. Automation has emerged as a critical solution to enhance operational efficiency and improve working conditions. Specifically, robotic manipulation and handling of objects is becoming ubiquitous in factories. However, challenges exist to precisely identify and guide a robot to handle a pile of objects with similar textures and colors. This paper focuses on the development of a vision system for a robotic solution aimed at automating the chicken rehanging process, a fundamental yet physically strenuous activity in poultry processing. To address the limitation of the generic instance segmentation model in identifying overlapped objects, a cost-effective, dual-active laser scanning system was developed to generate precise depth data on objects. The well-registered depth data generated were integrated with the RGB images and sent to the instance segmentation model for individual chicken detection and identification. This enhanced approach significantly improved the model’s performance in handling complex scenarios involving overlapping chickens. Specifically, the integration of RGB-D data increased the model’s mean average precision (mAP) detection accuracy by 4.9% and significantly improved the center offset—a customized metric introduced in this study to quantify the distance between the ground truth mask center and the predicted mask center. Precise center detection is crucial for the development of future robotic control solutions, as it ensures accurate grasping during the chicken rehanging process. The center offset was reduced from 22.09 pixels (7.30 mm) to 8.09 pixels (2.65 mm), demonstrating the approach’s effectiveness in mitigating occlusion challenges and enhancing the reliability of the vision system. Full article
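The center-offset metric introduced in the study can be reproduced, under the assumption that "center" means the mask centroid, as:

```python
def center_offset(gt_mask, pred_mask, mm_per_px=1.0):
    """Euclidean distance between the centroids of a ground-truth and a
    predicted instance mask (binary 2D lists), optionally converted to
    millimetres. Mirrors the customized center-offset metric described
    in the abstract; the centroid definition is our assumption.
    """
    def centroid(mask):
        pts = [(r, c) for r, row in enumerate(mask)
               for c, v in enumerate(row) if v]
        n = len(pts)
        return (sum(p[0] for p in pts) / n, sum(p[1] for p in pts) / n)

    (r1, c1), (r2, c2) = centroid(gt_mask), centroid(pred_mask)
    return ((r1 - r2) ** 2 + (c1 - c2) ** 2) ** 0.5 * mm_per_px
```

With the study's reported scale (8.09 px ≈ 2.65 mm, i.e. about 0.33 mm/px), pixel offsets convert directly to grasp-point error.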

16 pages, 8656 KiB  
Article
What Is the Predictive Capacity of Sesamum indicum L. Bioparameters Using Machine Learning with Red–Green–Blue (RGB) Images?
by Edimir Xavier Leal Ferraz, Alan Cezar Bezerra, Raquele Mendes de Lira, Elizeu Matos da Cruz Filho, Wagner Martins dos Santos, Henrique Fonseca Elias de Oliveira, Josef Augusto Oberdan Souza Silva, Marcos Vinícius da Silva, José Raliuson Inácio da Silva, Jhon Lennon Bezerra da Silva, Antônio Henrique Cardoso do Nascimento, Thieres George Freire da Silva and Ênio Farias de França e Silva
AgriEngineering 2025, 7(3), 64; https://doi.org/10.3390/agriengineering7030064 - 3 Mar 2025
Viewed by 411
Abstract
The application of machine learning techniques to determine bioparameters, such as the leaf area index (LAI) and chlorophyll content, has shown significant potential, particularly with the use of unmanned aerial vehicles (UAVs). This study evaluated the use of RGB images obtained from UAVs to estimate bioparameters in sesame crops, utilizing machine learning techniques and data selection methods. The experiment was conducted at the Federal Rural University of Pernambuco and involved using a portable AccuPAR ceptometer to measure the LAI and spectrophotometry to determine photosynthetic pigments. Field images were captured using a DJI Mavic 2 Enterprise Dual remotely piloted aircraft equipped with RGB and thermal cameras. To manage the high dimensionality of the data, CRITIC and Pearson correlation methods were applied to select the most relevant indices for the XGBoost model. The data were divided into training, testing, and validation sets to ensure model generalization, with performance assessed using the R2, MAE, and RMSE metrics. XGBoost effectively estimated the LAI, chlorophyll a, total chlorophyll, and carotenoids (R2 > 0.7) but had limited performance for chlorophyll b. Pearson correlation was found to be the most effective data selection method for the algorithm. Full article
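The Pearson-based data selection that worked best for the XGBoost model can be sketched as a correlation filter over candidate vegetation indices; the 0.7 cutoff is a hypothetical threshold, not the paper's value:

```python
def pearson_r(x, y):
    """Pearson correlation coefficient between a candidate vegetation
    index and a measured bioparameter (e.g. LAI)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def select_indices(features, target, threshold=0.7):
    """Keep only indices whose |r| with the target exceeds `threshold`,
    reducing dimensionality before model fitting. The cutoff here is an
    illustrative assumption."""
    return {name: vals for name, vals in features.items()
            if abs(pearson_r(vals, target)) >= threshold}
```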

24 pages, 11989 KiB  
Article
Deep Learning-Based System for Early Symptoms Recognition of Grapevine Red Blotch and Leafroll Diseases and Its Implementation on Edge Computing Devices
by Carolina Lazcano-García, Karen Guadalupe García-Resendiz, Jimena Carrillo-Tripp, Everardo Inzunza-Gonzalez, Enrique Efrén García-Guerrero, David Cervantes-Vasquez, Jorge Galarza-Falfan, Cesar Alberto Lopez-Mercado and Oscar Adrian Aguirre-Castro
AgriEngineering 2025, 7(3), 63; https://doi.org/10.3390/agriengineering7030063 - 3 Mar 2025
Viewed by 716
Abstract
In recent years, the agriculture sector has undergone a significant digital transformation, integrating artificial intelligence (AI) technologies to harness and analyze the growing volume of data from diverse sources. Machine learning (ML), a powerful branch of AI, has emerged as an essential tool for developing knowledge-based agricultural systems. Grapevine red blotch disease (GRBD) and grapevine leafroll disease (GLD) are viral infections that severely impact grapevine productivity and longevity, leading to considerable economic losses worldwide. Conventional diagnostic methods for these diseases are costly and time-consuming. To address this, ML-based technologies have been increasingly adopted by researchers for early detection by analyzing the foliar symptoms linked to viral infections. This study focused on detecting GRBD and GLD symptoms using Convolutional Neural Networks (CNNs) in computer vision. YOLOv5 outperformed the other deep learning (DL) models tested, such as YOLOv3, YOLOv8, and ResNet-50, where it achieved 95.36% Precision, 95.77% Recall, and an F1-score of 95.56%. These metrics underscore the model’s effectiveness at accurately classifying grapevine leaves with and without GRBD and/or GLD symptoms. Furthermore, benchmarking was performed with two edge computer devices, where Jetson NANO obtained the best cost–benefit performance. The findings support YOLOv5 as a reliable tool for early diagnosis, offering potential economic benefits for large-scale agricultural monitoring. Full article
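The reported Precision, Recall, and F1-score follow directly from detection counts; a small helper makes the relationship explicit:

```python
def detection_metrics(tp, fp, fn):
    """Precision, recall, and F1 from true-positive, false-positive, and
    false-negative counts, the metrics used to compare YOLOv5 against
    the other CNN models in the study."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1
```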

26 pages, 7080 KiB  
Article
Methodology for Determining the Main Physical Parameters of Apples by Digital Image Analysis
by Jakhfer Alikhanov, Aidar Moldazhanov, Akmaral Kulmakhambetova, Dimitriy Zinchenko, Alisher Nurtuleuov, Zhandos Shynybay, Tsvetelina Georgieva and Plamen Daskalov
AgriEngineering 2025, 7(3), 57; https://doi.org/10.3390/agriengineering7030057 - 25 Feb 2025
Viewed by 415
Abstract
This paper presents the validation of a digital image analysis method for the quantitative determination of physical quality parameters of apples by comparative analysis with a traditional measurement method. The method was used to determine the quality indicators of apples based on digital image analysis in accordance with standard requirements. Five popular apple varieties from Kazakhstan were selected for the study: Aport Alexander, Ainur, Sinap Almaty, Nursat and Kazakhskij Yubilejnyj. The parameters of the five apple varieties were measured both manually and digitally, which revealed close agreement between the obtained values. The analysis of the results of measuring the geometric parameters of the apples and the percentage of red color in the images was carried out. The maximum relative errors in determining the diameters (d, D) and height (h) were 2.99%, 3.03% and 4.12%, respectively. Regression models were developed to predict the weight and volume of apples. The best results in weight prediction were obtained for the Sinap Almatinsky variety using stepwise linear regression (R2 = 0.96), and volume prediction showed the best results for the Nursat variety (R2 = 0.92). This study lays the foundation for the development of automated systems for sorting apples by commercial varieties. Full article

21 pages, 4161 KiB  
Article
Systemic Uptake of Rhodamine Tracers Quantified by Fluorescence Imaging: Applications for Enhanced Crop–Weed Detection
by Yu Jiang, Masoume Amirkhani, Ethan Lewis, Lynn Sosnoskie and Alan Taylor
AgriEngineering 2025, 7(3), 49; https://doi.org/10.3390/agriengineering7030049 - 20 Feb 2025
Viewed by 468
Abstract
Systemic fluorescence tracers introduced into crop plants provide an active signal for crop–weed differentiation that can be exploited for precision weed management. Rhodamine B (RB), a widely used tracer for seeds and seedlings, possesses desirable properties; however, its application as a seed treatment has been limited due to potential phytotoxic effects on seedling growth. Therefore, investigating mitigation strategies or alternative systemic tracers is necessary to fully leverage active signaling for crop–weed differentiation. This study aimed to identify and address the phytotoxicity concerns associated with Rhodamine B and evaluate Rhodamine WT and Sulforhodamine B as potential alternatives. A custom 2D fluorescence imaging system, along with analytical methods, was developed to optimize fluorescence imaging quality and facilitate quantitative characterization of fluorescence intensity and patterns in plant seedlings, individual leaves, and leaf disc samples. Rhodamine compounds were applied as seed treatments or in-furrow (soil application). Rhodamine B phytotoxicity was mitigated by growing in a sand and perlite media due to the adsorption of RB to perlite. Additionally, in-furrow and seed treatment methods were tested for Rhodamine WT and Sulforhodamine B to evaluate their efficacy as non-phytotoxic alternatives. Experimental results demonstrated that Rhodamine B applied via seed pelleting and Rhodamine WT used as a direct seed treatment were the most effective approaches. A case study was conducted to assess fluorescence signal intensity for crop–weed differentiation at a crop–weed seed distance of 2.5 cm (1 inch). Results indicated that fluorescence from both Rhodamine B via seed pelleting and Rhodamine WT as seed treatment was clearly detected in plant tissues and was ~10× higher than that from neighboring weed plant tissues. These findings suggest that RB applied via seed pelleting effectively differentiates plant seedlings from weeds with reduced phytotoxicity, while Rhodamine WT as seed treatment offers a viable, non-phytotoxic alternative. In conclusion, the combination of the developed fluorescence imaging system and RB seed pelleting presents a promising technology for crop–weed differentiation and precision weed management. Additionally, Rhodamine WT, when used as a seed treatment, provides satisfactory efficacy as a non-phytotoxic alternative, further expanding the options for fluorescence-based crop–weed differentiation in weed management. Full article

16 pages, 7431 KiB  
Article
Deep Learning-Based Model for Effective Classification of Ziziphus jujuba Using RGB Images
by Yu-Jin Jeon, So Jin Park, Hyein Lee, Ho-Youn Kim and Dae-Hyun Jung
AgriEngineering 2024, 6(4), 4604-4619; https://doi.org/10.3390/agriengineering6040263 - 3 Dec 2024
Cited by 2 | Viewed by 823
Abstract
Ensuring the quality of medicinal herbs in the herbal market is crucial. However, the genetic and physical similarities among medicinal materials have led to issues of mixing and counterfeit distribution, posing significant challenges to quality assurance. Recent advancements in deep learning technology, widely applied in the field of computer vision, have demonstrated the potential to classify images quickly and accurately, even those that can only be distinguished by experts. This study aimed to develop a classification model based on deep learning technology to distinguish RGB images of seeds from Ziziphus jujuba Mill. var. spinosa, Ziziphus mauritiana Lam., and Hovenia dulcis Thunb. Using three advanced convolutional neural network (CNN) architectures—ResNet-50, Inception-v3, and DenseNet-121—all models demonstrated a classification performance above 98% on the test set, with classification times as low as 23 ms. These results validate that the models and methods developed in this study can effectively distinguish Z. jujuba seeds from morphologically similar species. Furthermore, the strong performance and speed of these models make them suitable for practical use in quality inspection settings. Full article

17 pages, 5119 KiB  
Article
Application of a Real-Time Field-Programmable Gate Array-Based Image-Processing System for Crop Monitoring in Precision Agriculture
by Sabiha Shahid Antora, Mohammad Ashik Alahe, Young K. Chang, Tri Nguyen-Quang and Brandon Heung
AgriEngineering 2024, 6(3), 3345-3361; https://doi.org/10.3390/agriengineering6030191 - 14 Sep 2024
Viewed by 1439
Abstract
Precision agriculture (PA) technologies combined with remote sensors, GPS, and GIS are transforming the agricultural industry while promoting sustainable farming practices with the ability to optimize resource utilization and minimize environmental impact. However, their implementation faces challenges such as high computational costs, complexity, low image resolution, and limited GPS accuracy. These issues hinder the timely delivery of prescription maps and impede farmers’ ability to make effective, on-the-spot decisions regarding farm management, especially in stress-sensitive crops. Therefore, this study proposes field-programmable gate array (FPGA)-based hardware solutions and real-time kinematic GPS (RTK-GPS) to develop a real-time crop-monitoring system that can address the limitations of current PA technologies. Our proposed system uses high-accuracy RTK and real-time FPGA-based image-processing (RFIP) devices for data collection, geotagging real-time field data via Python and a camera. The acquired images are processed to extract metadata and then visualized as a heat map on Google Maps, indicating green-area intensity based on romaine lettuce leafage. The RFIP system showed a strong correlation (R2 = 0.9566) with a reference system and performed well in field tests, providing a Lin’s concordance correlation coefficient (CCC) of 0.8292. This study demonstrates the potential of the developed system to address current PA limitations by providing real-time, accurate data for immediate decision making. In the future, this proposed system will be integrated with autonomous farm equipment to further enhance sustainable farming practices, including real-time crop health monitoring, yield assessment, and crop disease detection. Full article
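The geotagging step this abstract describes can be sketched in a few lines: score each image for green-area intensity and pair the score with its RTK fix for heat-map rendering. The excess-green index used below is a common vegetation index standing in for the paper's leafage measure; the function names and coordinates are illustrative assumptions.

```python
import numpy as np

def green_intensity(rgb):
    """Fraction of pixels whose excess-green index (2G - R - B) is
    positive; a simple stand-in for the paper's leafage measure."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    return float((2.0 * g - r - b > 0).mean())

def geotag(images, rtk_fixes):
    """Pair each image's green intensity with its (lat, lon) RTK fix,
    ready to be rendered as heat-map points."""
    return [(lat, lon, green_intensity(img))
            for img, (lat, lon) in zip(images, rtk_fixes)]
```

On the real system this scoring runs in FPGA hardware; the point of the sketch is only the data flow from camera frame and GPS fix to a geolocated intensity value.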

24 pages, 3934 KiB  
Article
Computational Techniques for Analysis of Thermal Images of Pigs and Characterization of Heat Stress in the Rearing Environment
by Maria de Fátima Araújo Alves, Héliton Pandorfi, Rodrigo Gabriel Ferreira Soares, Gledson Luiz Pontes de Almeida, Taize Calvacante Santana and Marcos Vinícius da Silva
AgriEngineering 2024, 6(3), 3203-3226; https://doi.org/10.3390/agriengineering6030183 - 6 Sep 2024
Viewed by 1340
Abstract
Heat stress stands out as one of the main elements linked to concerns related to animal thermal comfort. This research aims to develop a sequential methodology for the automatic extraction of characteristics from thermal images and the classification of heat stress in pigs by means of machine learning. Infrared images were obtained from 18 pigs housed in air-conditioned and non-air-conditioned pens. The image analysis consisted of pre-processing, followed by color segmentation to isolate the region of interest, extraction of the animal’s surface temperatures using a developed algorithm, and finally recognition of the comfort pattern through machine learning. The results indicated that the automated color segmentation method identified the region of interest with an average accuracy of 88%, and the extracted temperatures differed from those of the ThermaCam program by 0.82 °C. Using a support vector machine (SVM), the research achieved a success rate of 80% in the automatic classification of pigs into thermal comfort and discomfort classes, with an accuracy of 91%, indicating that the proposal has the potential to monitor and evaluate the thermal comfort of pigs effectively. Full article
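The pipeline in this abstract runs segmentation, temperature extraction, then classification. The sketch below shows the last two stages with a linear SVM-style decision rule; the feature choice (mean and maximum ROI temperature), the weights, and the function names are placeholders, not the paper's trained model.

```python
import numpy as np

def extract_temperatures(thermal, roi_mask):
    """Mean and maximum surface temperature over the segmented
    animal region (the region of interest)."""
    t = thermal[roi_mask]
    return np.array([t.mean(), t.max()])

def classify_stress(features, w, b):
    """Linear SVM-style decision rule: 1 = heat stress, 0 = comfort.
    In practice w and b come from training; here they are placeholders."""
    return int(np.dot(w, features) + b > 0)
```

A trained SVM would additionally handle kernel choice and margin tuning; the sketch only illustrates how extracted thermal features feed a binary comfort decision.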

13 pages, 2643 KiB  
Article
Model Development for Identifying Aromatic Herbs Using Object Detection Algorithm
by Samira Nascimento Antunes, Marcelo Tsuguio Okano, Irenilza de Alencar Nääs, William Aparecido Celestino Lopes, Fernanda Pereira Leite Aguiar, Oduvaldo Vendrametto, João Carlos Lopes Fernandes and Marcelo Eloy Fernandes
AgriEngineering 2024, 6(3), 1924-1936; https://doi.org/10.3390/agriengineering6030112 - 21 Jun 2024
Cited by 3 | Viewed by 1915
Abstract
The rapid evolution of digital technology and the increasing integration of artificial intelligence in agriculture have paved the way for groundbreaking solutions in plant identification. This research pioneers the development and training of a deep learning model to identify three aromatic plants—rosemary, mint, and bay leaf—using advanced computer-aided detection within the You Only Look Once (YOLO) framework. Employing the Cross Industry Standard Process for Data Mining (CRISP-DM) methodology, the study meticulously covers data understanding, preparation, modeling, evaluation, and deployment phases. The dataset, consisting of images from diverse devices and annotated with bounding boxes, was instrumental in the training process. The model’s performance was evaluated using the mean average precision at a 50% intersection over union (mAP50), a metric that combines precision and recall. The results demonstrated that the model achieved a precision of 0.7 or higher for each herb, though recall values indicated potential over-detection, suggesting the need for database expansion and methodological enhancements. This research underscores the innovative potential of deep learning in aromatic plant identification and addresses both the challenges and advantages of this technique. The findings significantly advance the integration of artificial intelligence in agriculture, promoting greater efficiency and accuracy in plant identification. Full article
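The mAP50 metric reported above builds on intersection-over-union (IoU) matching: a predicted box counts as a true positive only when its IoU with a ground-truth box is at least 0.5. A minimal sketch of that matching test follows; function names are illustrative.

```python
def iou(a, b):
    """Intersection over union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

def is_true_positive_at_50(pred, gt):
    """Under mAP50, a detection matches a ground-truth box
    when their IoU is at least 0.5."""
    return iou(pred, gt) >= 0.5
```

Averaging precision over recall levels for each class, using this 0.5 threshold, yields the mAP50 figure the study reports.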

14 pages, 5138 KiB  
Article
YOLO Network with a Circular Bounding Box to Classify the Flowering Degree of Chrysanthemum
by Hee-Mun Park and Jin-Hyun Park
AgriEngineering 2023, 5(3), 1530-1543; https://doi.org/10.3390/agriengineering5030094 - 31 Aug 2023
Cited by 8 | Viewed by 3360
Abstract
Detecting objects in digital images is challenging in computer vision, traditionally requiring manual threshold selection. However, object detection has improved significantly with convolutional neural networks (CNNs) and other advanced algorithms, such as region-based convolutional neural networks (R-CNNs) and you only look once (YOLO). Deep learning methods have various applications in agriculture, including detecting pests, diseases, and fruit quality. We propose a lightweight YOLOv4-Tiny-based object detection system with a circular bounding box to accurately determine chrysanthemum flower harvest time. The proposed network uses a circular bounding box to accurately classify the blooming degree of chrysanthemums and detect circular objects effectively, showing better results than the network with the traditional rectangular bounding box. The proposed network has excellent scalability and can be applied to recognize general objects in a circular form. Full article
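A circular bounding box replaces the rectangle's corner coordinates with a center and radius, so overlap between detections becomes a circle-intersection computation. The sketch below derives circle IoU from standard geometry; it illustrates the representation, not the paper's implementation.

```python
import math

def circle_iou(c1, c2):
    """IoU of two circular boxes, each given as (cx, cy, r)."""
    (x1, y1, r1), (x2, y2, r2) = c1, c2
    d = math.hypot(x2 - x1, y2 - y1)
    if d >= r1 + r2:                      # disjoint circles
        inter = 0.0
    elif d <= abs(r1 - r2):               # one circle inside the other
        inter = math.pi * min(r1, r2) ** 2
    else:                                 # lens-shaped overlap region
        a1 = r1 * r1 * math.acos((d * d + r1 * r1 - r2 * r2) / (2 * d * r1))
        a2 = r2 * r2 * math.acos((d * d + r2 * r2 - r1 * r1) / (2 * d * r2))
        a3 = 0.5 * math.sqrt((-d + r1 + r2) * (d + r1 - r2)
                             * (d - r1 + r2) * (d + r1 + r2))
        inter = a1 + a2 - a3
    union = math.pi * (r1 * r1 + r2 * r2) - inter
    return inter / union if union else 0.0
```

For round targets such as chrysanthemum blooms, a circle fits the object with fewer background pixels than an axis-aligned rectangle, which is the intuition behind the reported improvement.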

Review

15 pages, 2954 KiB  
Review
Rapid Analysis of Soil Organic Carbon in Agricultural Lands: Potential of Integrated Image Processing and Infrared Spectroscopy
by Nelundeniyage Sumuduni L. Senevirathne and Tofael Ahamed
AgriEngineering 2024, 6(3), 3001-3015; https://doi.org/10.3390/agriengineering6030172 - 20 Aug 2024
Viewed by 1648
Abstract
The significance of soil in the agricultural industry is profound, with healthy soil playing an important role in ensuring food security. In addition, soil is the largest terrestrial carbon sink on Earth. The soil carbon pool is composed of both inorganic and organic forms. The equilibrium of the soil carbon pool directly impacts the carbon cycle via all of the other processes on the planet. With the development of agricultural systems from traditional to conventional ones, and with the current era of precision agriculture, which involves making decisions based on information, the importance of understanding soil is becoming increasingly clear. The control of microenvironment conditions and soil fertility represents a key factor in achieving higher productivity in these systems. Furthermore, agriculture represents a significant contributor to carbon emissions, a topic that has become timely given the necessity for carbon neutrality. In addition to these concerns, updating soil-related data, including information on macro- and micronutrient conditions, is important. Carbon represents one of the major nutrients for crops and plays a key role in the retention and release of other nutrients and the management of soil physical properties. Despite the importance of carbon, existing analytical methods are complex and expensive. This discourages frequent analyses, which results in a lack of soil carbon-related data for agricultural fields. From this perspective, in situ soil organic carbon (SOC) analysis can provide timely management information for calibrating fertilizer applications based on the soil–carbon relationship to increase soil productivity. In addition, the available data need frequent updates due to rapid changes in ecosystem services and the use of extensive fertilizers and pesticides. Despite the importance of this topic, few studies have investigated the potential of image analysis based on image processing and spectral data recording.
The use of spectroscopy and visual color matching to develop SOC predictions has been considered, and the use of spectroscopic instruments has led to increased precision. Our extensive literature review shows that color models, especially Munsell color charts, are better for qualitative purposes and that Cartesian-type color models are appropriate for quantification. Even for the color model, spectroscopy data could be used, and these data have the potential to improve the precision of measurements. On the other hand, mid-infrared radiation (MIR) and near-infrared radiation (NIR) diffuse reflection has been reported to have a greater ability to predict SOC. Finally, this article reports the availability of inexpensive portable instruments that can enable the development of in situ SOC analysis from reflection and emission information with the integration of images and spectroscopy. This integration refers to machine learning algorithms with a reflection-oriented spectrophotometer and emission-based thermal images which have the potential to predict SOC without the need for expensive instruments and are easy to use in farm applications. Full article
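The spectroscopy-to-SOC calibrations surveyed in this review are, at their simplest, regressions from reflectance features to laboratory-measured carbon. The sketch below uses ordinary least squares as a hypothetical stand-in for the PLS and machine-learning models discussed; the function names and data shapes are assumptions.

```python
import numpy as np

def fit_soc_model(spectra, soc):
    """Least-squares calibration mapping reflectance features
    (rows = samples, columns = bands) to measured SOC values.
    A stand-in for the PLS/machine-learning calibrations the
    review discusses."""
    X = np.column_stack([spectra, np.ones(len(spectra))])  # add intercept
    coef, *_ = np.linalg.lstsq(X, soc, rcond=None)
    return coef

def predict_soc(coef, spectra):
    """Apply a fitted calibration to new reflectance spectra."""
    X = np.column_stack([spectra, np.ones(len(spectra))])
    return X @ coef
```

A field-ready version would swap the input for readings from a portable spectrophotometer (or features from thermal images) and the model for a calibration validated against laboratory SOC analyses.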
