Applications of Computer Vision in Agriculture

A special issue of AgriEngineering (ISSN 2624-7402).

Deadline for manuscript submissions: 28 February 2027

Special Issue Editors

Dr. Yuzhen Lu
Guest Editor
Department of Biosystems & Agricultural Engineering, Michigan State University, East Lansing, MI 48824, USA
Interests: optical sensing; machine vision; food inspection; precision agriculture; plant/animal phenotyping; AI and robotics

Dr. Xinyang Mu
Guest Editor
Department of Biosystems & Agricultural Engineering, Michigan State University, East Lansing, MI 48824, USA
Interests: precision agriculture; machine learning; robotic automation; computer vision; hyperspectral imaging

Special Issue Information

Dear Colleagues,

Computer vision is becoming a cornerstone of precision agriculture, enabling scalable, non-destructive monitoring of crops, orchards, livestock, and farm environments. By converting imagery into actionable information, vision-based systems can support timely decisions across the production chain, from scouting and phenotyping to automation and postharvest quality assessment. Yet agricultural scenes remain uniquely challenging due to changing illumination, complex backgrounds, occlusions, variable growth stages, and strong seasonal dependence. These challenges motivate continued advances in robust algorithms, sensing strategies, and deployable systems.

This Special Issue, “Applications of Computer Vision in Agriculture,” focuses on practical developments that bridge research innovation and real-world adoption. The scope covers imaging and sensing platforms (RGB, multispectral/hyperspectral, thermal, depth/LiDAR, ground, UAV, and fixed installations), core vision tasks (detection, segmentation, tracking, counting, classification, disease/pest recognition, phenotyping, and yield estimation), and integration with agricultural machinery and robotics. We welcome papers that emphasize reproducible pipelines, well-designed field experiments, and evaluation protocols that reflect operational constraints and generalization across cultivars, locations, and seasons.

This Special Issue supplements the existing literature by consolidating contributions that translate computer vision performance into precision agriculture outcomes: actionable prescriptions, decision support, and site-specific management at plant, row, and field scales. Beyond reporting accuracy, we encourage studies that demonstrate how vision outputs can be used to drive practical operations, such as variable-rate spraying, targeted thinning, selective harvesting, automated scouting, maturity and yield mapping, and early detection of disease, pests, or abiotic stress. We also welcome work that tackles the realities that determine field usability, including data efficiency and adaptability across seasons and cultivars (annotation strategy, semi-/self-supervised learning, domain adaptation), uncertainty-aware predictions that support risk-sensitive decisions, and deployment constraints such as latency, robustness, and maintainability on embedded platforms. By bringing together interdisciplinary advances from computer vision, agricultural engineering, and biological sciences, this Special Issue aims to accelerate the development of reliable, scalable vision tools that measurably improve efficiency, sustainability, and resilience in precision agriculture.

Dr. Yuzhen Lu
Dr. Xinyang Mu
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once registered, proceed to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 250 words) can be sent to the Editorial Office for assessment.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. AgriEngineering is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1800 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • computer vision
  • precision agriculture
  • image processing
  • machine learning
  • orchard environment

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • Reprint: MDPI Books provides the opportunity to republish successful Special Issues in book format, both online and in print.

Further information on MDPI's Special Issue policies is available on the MDPI website.

Published Papers (3 papers)


Research

25 pages, 6302 KB  
Article
Artificial Intelligence-Based Detection of On-Ground Chestnuts Toward Automated Picking
by Kaixuan Fang, Yuzhen Lu and Xinyang Mu
AgriEngineering 2026, 8(3), 116; https://doi.org/10.3390/agriengineering8030116 - 19 Mar 2026
Abstract
Traditional mechanized chestnut harvesting is too costly for small producers, non-selective, and prone to damaging nuts. Accurate, reliable detection of chestnuts on the orchard floor is crucial for developing low-cost, vision-guided automated harvesting technology. However, developing a reliable chestnut detection system faces challenges in complex environments with shading, varying natural light conditions, and interference from weeds, fallen leaves, stones, and other foreign on-ground objects, which have remained unaddressed. This study collected 319 images of chestnuts on the orchard floor, containing 6524 annotated chestnuts. A comprehensive set of 29 state-of-the-art real-time object detectors, including 14 in the YOLO (v11–v13) and 15 in the RT-DETR (v1–v4) families at various model scales, was systematically evaluated through replicated modeling experiments for chestnut detection. Experimental results show that the YOLOv12m model achieved the best mAP@0.5 of 95.1% among all the evaluated models, while RT-DETRv2-R101 was the most accurate variant among the RT-DETR models, with a mAP@0.5 of 91.1%. In terms of mAP@[0.5:0.95], the YOLOv11x model achieved the best accuracy of 80.1%. All models demonstrated significant potential for real-time chestnut detection, and YOLO models outperformed RT-DETR models in both detection accuracy and inference speed, making them better suited for on-board deployment. This work lays a foundation for developing AI-based, vision-guided intelligent chestnut harvesting systems. Full article
(This article belongs to the Special Issue Applications of Computer Vision in Agriculture)
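The mAP@0.5 figures reported in this abstract rest on IoU-based matching of predicted boxes against ground truth. As a hedged illustration only (plain Python with hypothetical box data, not the authors' evaluation code), the core matching step can be sketched as:

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def match_detections(preds, gts, thr=0.5):
    """Greedy matching at a fixed IoU threshold (the "0.5" in mAP@0.5).
    preds: list of (box, confidence); each ground-truth box matches once.
    Returns (true_positives, false_positives)."""
    used = set()
    tp = fp = 0
    for box, _conf in sorted(preds, key=lambda p: -p[1]):
        best, best_iou = None, thr
        for i, gt in enumerate(gts):
            if i in used:
                continue
            v = iou(box, gt)
            if v >= best_iou:
                best, best_iou = i, v
        if best is None:
            fp += 1
        else:
            used.add(best)
            tp += 1
    return tp, fp
```

Full mAP additionally sweeps the confidence threshold to build a precision–recall curve per class and averages its area; mAP@[0.5:0.95] repeats this over IoU thresholds from 0.5 to 0.95 in steps of 0.05.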

30 pages, 7368 KB  
Article
Heterogeneous Network Framework for Predicting Novel Disease–Plant Associations Using Random Walk with Restart (RWR)
by Hina Shafi, Ali Ghulam, Mir. Sajjad Hussain Talpur and Rahu Sikander
AgriEngineering 2026, 8(3), 113; https://doi.org/10.3390/agriengineering8030113 - 16 Mar 2026
Abstract
Understanding the complex interplay between diseases and medicinal plants is essential for identifying new therapeutic agents from natural sources. However, current knowledge of disease–plant interactions remains incomplete, making it difficult to identify potentially effective plant-based treatments in a rational way. To address this challenge, we propose a heterogeneous network approach for predicting novel disease–plant associations using the Random Walk with Restart (RWR) algorithm. The framework combines three relational networks: (i) a disease–plant association network built from curated literature and biological databases, (ii) a disease–disease similarity network constructed from shared symptoms and therapeutic profiles, and (iii) a plant–plant similarity network based on phytochemical and functional similarities. These components are integrated into a single heterogeneous graph so that information can flow through related nodes. The model applies RWR from known disease or plant seed nodes and explores the graph to estimate association probabilities for disease–plant pairs that were not previously connected. Experimental tests show that the proposed model has excellent predictive ability, with a ROC-AUC of 0.9987, a PR-AUC of 0.915, and a Precision@10 of 1.0, significantly outperforming baseline models, including random- and degree-based models. Bootstrap analysis supported the robustness of the model, with a mean ROC-AUC of 0.9987 and a standard deviation of 0.00051. The proposed framework offers an effective computational methodology for systematically exploring disease–plant interactions, aiding the discovery of novel herbal drugs and accelerating drug discovery through network-based inference. Full article
(This article belongs to the Special Issue Applications of Computer Vision in Agriculture)

19 pages, 4538 KB  
Article
YOLO-EGASF: A Small-Target Detection Algorithm for Surface Residual Film in UAV Imagery of Arid-Region Cotton Fields
by Xiao Yang, Ji Shi, Kailin Yang, Xiaoqing Lian, Shufeng Zhang, Hongbiao Wang and Zheng Li
AgriEngineering 2026, 8(3), 106; https://doi.org/10.3390/agriengineering8030106 - 10 Mar 2026
Abstract
Mulch-film covering technology has been widely adopted in cotton production in arid regions; however, the associated problem of residual-film pollution has become increasingly prominent, creating an urgent demand for efficient and accurate monitoring approaches. Owing to the small target scale, irregular morphology, blurred boundaries, and complex soil backgrounds of residual-film fragments, residual-film detection based on close-range UAV imagery remains a challenging task. To address these issues, this study proposes an improved algorithm, termed YOLO-EGASF, for residual-film detection in arid-region cotton fields, built upon the lightweight YOLOv11n framework. To enhance the detection of small targets with weak boundary characteristics, the baseline model is improved from three aspects. First, a boundary-enhanced multi-branch small-target extraction module (EMSE) is designed to reinforce shallow-layer details and gradient information through multi-scale convolution and explicit edge enhancement. Second, a GLoCA attention module that integrates global coordinate information with local geometric features is constructed to improve the discriminative capability of the model for residual-film targets under complex background conditions. Third, an ASF-layer multi-scale feature fusion structure is introduced, together with an additional small-target detection layer, to strengthen the participation of high-resolution features in cross-scale fusion and prediction. Experimental results on a self-constructed UAV-based residual-film dataset from cotton fields demonstrate that YOLO-EGASF outperforms several mainstream detection models in terms of Precision, Recall, mAP@0.5, and mAP@0.5:0.95, achieving mAP@0.5 and mAP@0.5:0.95 values of 71.9% and 26.8%, respectively. These results indicate a significant improvement in detection accuracy and robustness, confirming that the proposed method can effectively meet the practical requirements of fine-grained residual-film monitoring in arid-region cotton fields. Full article
(This article belongs to the Special Issue Applications of Computer Vision in Agriculture)
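The "explicit edge enhancement" in the EMSE module follows a common pattern: compute a gradient-magnitude response and add it back onto the feature map so weak film boundaries stand out. As a hedged illustration only (a plain Sobel operator on a 2D array in pure Python, not the paper's learned module):

```python
SOBEL_X = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]  # horizontal gradient kernel
SOBEL_Y = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]  # vertical gradient kernel

def conv3x3(img, k):
    """'Valid' 3x3 cross-correlation over a 2D list of numbers."""
    h, w = len(img), len(img[0])
    out = [[0.0] * (w - 2) for _ in range(h - 2)]
    for y in range(h - 2):
        for x in range(w - 2):
            out[y][x] = sum(img[y + i][x + j] * k[i][j]
                            for i in range(3) for j in range(3))
    return out

def edge_enhance(img, weight=0.5):
    """Add the Sobel gradient magnitude back onto the (cropped) image."""
    gx, gy = conv3x3(img, SOBEL_X), conv3x3(img, SOBEL_Y)
    core = [row[1:-1] for row in img[1:-1]]  # crop to match 'valid' output
    return [[core[y][x] + weight * (gx[y][x] ** 2 + gy[y][x] ** 2) ** 0.5
             for x in range(len(gx[0]))] for y in range(len(gx))]
```

In the paper's module this idea is applied inside a multi-branch convolutional block with learned weights; the sketch only shows why boosting gradient magnitude helps low-contrast, blurred-boundary targets survive into deeper layers.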
