Search Results (25)

Search Parameters:
Keywords = computer vision for plant health

23 pages, 3326 KB  
Article
Hybrid Multi-Scale Neural Network with Attention-Based Fusion for Fruit Crop Disease Identification
by Shakhmaran Seilov, Akniyet Nurzhaubayev, Marat Baideldinov, Bibinur Zhursinbek, Medet Ashimgaliyev and Ainur Zhumadillayeva
J. Imaging 2025, 11(12), 440; https://doi.org/10.3390/jimaging11120440 - 10 Dec 2025
Viewed by 626
Abstract
Unobserved fruit crop illnesses are a major threat to agricultural productivity worldwide and frequently cause farmers to suffer large financial losses. Manual field inspection-based disease detection techniques are time-consuming, unreliable, and unsuitable for extensive monitoring. Deep learning approaches, in particular convolutional neural networks, have shown promise for automated plant disease identification, although they still face significant obstacles. These include poor generalization across complicated visual backdrops, limited resilience to different illness sizes, and high processing needs that make deployment on resource-constrained edge devices difficult. We suggest a Hybrid Multi-Scale Neural Network (HMCT-AF with GSAF) architecture for precise and effective fruit crop disease identification in order to overcome these drawbacks. In order to extract long-range dependencies, HMCT-AF with GSAF combines a Vision Transformer-based structural branch with multi-scale convolutional branches to capture both high-level contextual patterns and fine-grained local information. These disparate features are adaptively combined using a novel HMCT-AF with a GSAF module, which enhances model interpretability and classification performance. We conduct evaluations on both PlantVillage (controlled environment) and CLD (real-world in-field conditions), observing consistent performance gains that indicate strong resilience to natural lighting variations and background complexity. With an accuracy of up to 93.79%, HMCT-AF with GSAF outperforms vanilla Transformer models, EfficientNet, and traditional CNNs. These findings demonstrate how well the model captures scale-variant disease symptoms and how it may be used in real-time agricultural applications using hardware that is compatible with the edge. According to our research, HMCT-AF with GSAF presents a viable basis for intelligent, scalable plant disease monitoring systems in contemporary precision farming. Full article
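
The adaptive fusion idea described above — combining convolutional and Transformer branch features through an attention gate — can be sketched in miniature. This is an illustrative toy, not the authors' HMCT-AF/GSAF implementation; the per-dimension softmax gate stands in for the learned fusion module:

```python
import math

def gated_fusion(local_feat, global_feat):
    """Fuse two equal-length feature vectors with a per-dimension softmax gate.

    The gate is derived from the features themselves, so whichever branch
    responds more strongly dominates that dimension of the fused output.
    """
    assert len(local_feat) == len(global_feat)
    fused = []
    for l, g in zip(local_feat, global_feat):
        wl, wg = math.exp(l), math.exp(g)
        a = wl / (wl + wg)              # attention weight for the local branch
        fused.append(a * l + (1 - a) * g)
    return fused
```

By symmetry, swapping the two branches leaves the fused response unchanged, which is the property a learned gate would generalize.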

26 pages, 20055 KB  
Article
Design and Development of a Neural Network-Based End-Effector for Disease Detection in Plants with 7-DOF Robot Integration
by Harol Toro, Hector Moncada, Kristhian Dierik Gonzales, Cristian Moreno, Claudia L. Garzón-Castro and Jose Luis Ordoñez-Avila
Processes 2025, 13(12), 3934; https://doi.org/10.3390/pr13123934 - 5 Dec 2025
Viewed by 528
Abstract
This study presents the design and development of an intelligent end-effector integrated into a custom 7-degree-of-freedom (DOF) robotic arm for monitoring the health status of tomato plants during their growth stages. The robotic system combines five rotational and two prismatic joints, enabling both horizontal reach and vertical adaptability to inspect plants of varying heights without repositioning the robot’s base. The integrated vision module employs a YOLOv5 neural network trained with 7864 images of tomato leaves, including both healthy and diseased samples. Image preprocessing included normalization and data augmentation to enhance robustness under natural lighting conditions. The optimized model achieved a detection accuracy of 90.2% and a mean average precision (mAP) of 92.3%, demonstrating high reliability in real-time disease classification. The end-effector, fabricated using additive manufacturing, incorporates a Raspberry Pi 4 for onboard processing, allowing autonomous operation in agricultural environments. The experimental results validate the feasibility of combining a custom 7-DOF robotic structure with a deep learning-based detector for continuous plant monitoring. This research contributes to the field of agricultural robotics by providing a flexible and precise platform capable of early disease detection in dynamic cultivation conditions, promoting sustainable and data-driven crop management. Full article
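
Detection figures such as the reported mAP rest on intersection-over-union (IoU) matching between predicted and ground-truth boxes. For reference, the standard IoU formula (not code from the paper):

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)   # zero when boxes are disjoint
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0
```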

23 pages, 7244 KB  
Article
Computer Vision for Cover Crop Seed-Mix Detection and Quantification
by Karishma Kumari, Kwanghee Won and Ali M. Nafchi
Seeds 2025, 4(4), 59; https://doi.org/10.3390/seeds4040059 - 12 Nov 2025
Viewed by 585
Abstract
Cover crop mixes play an important role in enhancing soil health, nutrient turnover, and ecosystem resilience; yet, maintaining even seed dispersion and planting uniformity is difficult due to significant variances in seed physical and aerodynamic properties. These discrepancies produce non-uniform seeding and species separation in drill hoppers, which has an impact on stand establishment and biomass stability. The thousand-grain weight is an important measure for determining cover crop seed quality and yield since it represents the weight of 1000 seeds in grams. Accurate seed counting is thus a key factor in calculating thousand-grain weight. Accurate mixed-seed identification is also helpful in breeding, phenotypic assessment, and the detection of moldy or damaged grains. However, in real-world conditions, the overlap and thickness of adhesion of mixed seeds make precise counting difficult, necessitating current research into powerful seed detection. This study addresses these issues by integrating deep learning-based computer vision algorithms for multi-seed detection and counting in cover crop mixes. The Canon LP-E6N R6 5D Mark IV camera was used to capture high-resolution photos of flax, hairy vetch, red clover, radish, and rye seeds. The dataset was annotated, augmented, and preprocessed on RoboFlow, split into train, validation, and test splits. Two top models, YOLOv5 and YOLOv7, were tested for multi-seed detection accuracy. The results showed that YOLOv7 outperformed YOLOv5 with 98.5% accuracy, 98.7% recall, and a mean Average Precision (mAP 0–95) of 76.0%. The results show that deep learning-based models can accurately recognize and count mixed seeds using automated methods, which has practical applications in seed drill calibration, thousand-grain weight estimation, and fair cover crop establishment. Full article
(This article belongs to the Special Issue Agrotechnics in Seed Quality: Current Progress and Challenges)
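
The thousand-grain-weight calculation the abstract describes is simple arithmetic once the detector has counted the seeds. A sketch, with hypothetical class labels:

```python
from collections import Counter

def thousand_grain_weight(detected_labels, sample_mass_g):
    """Estimate thousand-grain weight (grams per 1000 seeds) from a counted sample.

    `detected_labels` is the list of per-seed class labels produced by the
    detector; `sample_mass_g` is the weighed mass of the whole sample.
    Returns the TGW and the per-species counts.
    """
    n = len(detected_labels)
    if n == 0:
        raise ValueError("no seeds detected")
    tgw = sample_mass_g / n * 1000.0
    return tgw, Counter(detected_labels)
```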

23 pages, 3575 KB  
Article
Performance-Guided Aggregation for Federated Crop Disease Detection Across Heterogeneous Farmland Regions
by Yiduo Chen, Ruohong Zhou, Chongyu Wang, Mafangzhou Mo, Xinrui Hu, Xinyi He and Min Dong
Horticulturae 2025, 11(11), 1285; https://doi.org/10.3390/horticulturae11111285 - 25 Oct 2025
Viewed by 820
Abstract
A region-aware federated learning framework (RAFL) is proposed to address the non-IID heterogeneity in multi-regional crop disease recognition while reducing communication and computation costs. RAFL integrates three complementary modules: a region embedding module that captures region-specific representations, a cross-region feature alignment module that aligns semantic distributions across regions on the server, and an attention-based aggregation module that dynamically weights client updates based on performance through Transformer attention. Without sharing raw images, RAFL achieves efficient and privacy-preserving collaboration among heterogeneous farmlands. Experiments on datasets from Bayan Nur, Zhungeer, and Tangshan demonstrate substantial improvements: a classification accuracy of 89.4%, an F1-score of 88.5%, an AUC of 0.948, while the detection performance reaches mAP@50=62.5. Compared with FedAvg, RAFL improves accuracy and F1 by over 5%, and converges faster with reduced communication overhead (total 2822 MB over 95 rounds). Ablation studies verify that the three modules act synergistically—regional embeddings enhance local discriminability, feature alignment mitigates cross-domain drift, and attention-based aggregation stabilizes training—resulting in a robust and deployable solution for large-scale, privacy-preserving agricultural monitoring. Furthermore, the framework enables regional-level economic analysis by correlating disease incidence with yield reduction and estimating potential economic losses, providing a data-driven reference for agricultural policy and resource allocation. Full article
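
The performance-guided aggregation idea — weighting client updates by validation performance rather than averaging them uniformly as FedAvg does — can be illustrated with a softmax-weighted average. This is a conceptual sketch, not the RAFL Transformer-attention aggregator; the temperature parameter is an assumption:

```python
import math

def performance_weighted_aggregate(client_weights, client_scores, temperature=1.0):
    """Server-side aggregation: average client parameter vectors, weighting
    each client by a softmax over its validation score, so higher-performing
    clients contribute more than in plain uniform FedAvg."""
    exps = [math.exp(s / temperature) for s in client_scores]
    total = sum(exps)
    alphas = [e / total for e in exps]          # per-client mixing weights
    dim = len(client_weights[0])
    return [sum(a * w[i] for a, w in zip(alphas, client_weights))
            for i in range(dim)]
```

With equal scores this reduces exactly to uniform averaging, which is the FedAvg baseline the paper compares against.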

34 pages, 3764 KB  
Review
Research Progress and Applications of Artificial Intelligence in Agricultural Equipment
by Yong Zhu, Shida Zhang, Shengnan Tang and Qiang Gao
Agriculture 2025, 15(15), 1703; https://doi.org/10.3390/agriculture15151703 - 7 Aug 2025
Cited by 5 | Viewed by 2368
Abstract
With the growth of the global population and the increasing scarcity of arable land, traditional agricultural production is confronted with multiple challenges, such as efficiency improvement, precision operation, and sustainable development. The progressive advancement of artificial intelligence (AI) technology has created a transformative opportunity for the intelligent upgrade of agricultural equipment. This article systematically presents recent progress in computer vision, machine learning (ML), and intelligent sensing. The key innovations are highlighted in areas such as object detection and recognition (e.g., a K-nearest neighbor (KNN) achieved 98% accuracy in distinguishing vibration signals across operation stages); autonomous navigation and path planning (e.g., a deep reinforcement learning (DRL)-optimized task planner for multi-arm harvesting robots reduced execution time by 10.7%); state perception (e.g., a multilayer perceptron (MLP) yielded 96.9% accuracy in plug seedling health classification); and precision control (e.g., an intelligent multi-module coordinated control system achieved a transplanting efficiency of 5000 plants/h). The findings reveal a deep integration of AI models with multimodal perception technologies, significantly improving the operational efficiency, resource utilization, and environmental adaptability of agricultural equipment. This integration is catalyzing the transition toward intelligent, automated, and sustainable agricultural systems. Nevertheless, intelligent agricultural equipment still faces technical challenges regarding data sample acquisition, adaptation to complex field environments, and the coordination between algorithms and hardware. Looking ahead, the convergence of digital twin (DT) technology, edge computing, and big data-driven collaborative optimization is expected to become the core of next-generation intelligent agricultural systems. 
These technologies have the potential to overcome current limitations in perception and decision-making, ultimately enabling intelligent management and autonomous decision-making across the entire agricultural production chain. This article aims to provide a comprehensive foundation for advancing agricultural modernization and supporting green, sustainable development. Full article
(This article belongs to the Section Agricultural Technology)
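
As an illustration of the KNN result cited above (vibration-signal classification across operation stages), a minimal k-nearest-neighbour classifier looks like this; the feature values and stage labels are invented for demonstration:

```python
import math
from collections import Counter

def knn_predict(train, query, k=3):
    """Classify `query` by majority vote among its k nearest training
    samples under Euclidean distance.

    `train` is a list of (feature_vector, label) pairs.
    """
    neighbors = sorted(train, key=lambda sample: math.dist(sample[0], query))[:k]
    votes = Counter(label for _, label in neighbors)
    return votes.most_common(1)[0][0]
```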

26 pages, 11510 KB  
Article
Beyond Color: Phenomic and Physiological Tomato Harvest Maturity Assessment in an NFT Hydroponic Growing System
by Dugan Um, Chandana Koram, Prasad Nethala, Prashant Reddy Kasu, Shawana Tabassum, A. K. M. Sarwar Inam and Elvis D. Sangmen
Agronomy 2025, 15(7), 1524; https://doi.org/10.3390/agronomy15071524 - 23 Jun 2025
Viewed by 2270
Abstract
Current tomato harvesters rely primarily on external color as the sole indicator of ripeness. However, this approach often results in premature harvesting, leading to insufficient lycopene accumulation and a suboptimal nutritional content for human consumption. Such limitations are especially critical in controlled-environment agriculture (CEA) systems, where maximizing fruit quality and nutrient density is essential for both the yield and consumer health. To address that challenge, this study introduces a novel, multimodal harvest readiness framework tailored to nutrient film technology (NFT)-based smart farms. The proposed approach integrates plant-level stress diagnostics and fruit-level phenotyping using wearable biosensors, AI-assisted computer vision, and non-invasive physiological sensing. Key physiological markers—including the volatile organic compound (VOC) methanol, phytohormones salicylic acid (SA) and indole-3-acetic acid (IAA), and nutrients nitrate and ammonium concentrations—are combined with phenomic traits such as fruit color (a*), size, chlorophyll index (rGb), and water status. The innovation lies in a four-stage decision-making pipeline that filters physiologically stressed plants before selecting ripened fruits based on internal and external quality indicators. Experimental validation across four plant conditions (control, water-stressed, light-stressed, and wounded) demonstrated the efficacy of VOC and hormone sensors in identifying optimal harvest candidates. Additionally, the integration of low-cost electrochemical ion sensors provides scalable nutrient monitoring within NFT systems. This research delivers a robust, sensor-driven framework for autonomous, data-informed harvesting decisions in smart indoor agriculture. 
By fusing real-time physiological feedback with AI-enhanced phenotyping, the system advances precision harvest timing, improves fruit nutritional quality, and sets the foundation for resilient, feedback-controlled farming platforms suited to meeting global food security and sustainability demands. Full article
(This article belongs to the Collection AI, Sensors and Robotics for Smart Agriculture)
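
At its core, the staged decision pipeline filters physiologically stressed plants before applying fruit-level ripeness checks. A toy two-level sketch; every threshold and field name here is an illustrative placeholder, not a value from the study:

```python
def harvest_ready(plant, fruit, voc_max=1.0, sa_max=0.5,
                  a_star_min=25.0, diameter_min_mm=55.0):
    """Two-level harvest decision: reject stressed plants first, then
    check fruit-level ripeness indicators. Thresholds are placeholders."""
    # Stage 1: plant-level stress screening (VOC methanol, salicylic acid).
    if plant["voc_methanol"] > voc_max or plant["salicylic_acid"] > sa_max:
        return False
    # Stage 2: fruit-level phenotype (redness a* and size).
    return fruit["a_star"] >= a_star_min and fruit["diameter_mm"] >= diameter_min_mm
```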

27 pages, 7107 KB  
Article
CBACA-YOLOv5: A Symmetric and Asymmetric Attention-Driven Detection Framework for Citrus Leaf Disease Identification
by Jiaxian Zhu, Jiahong Chen, Huiyang He, Weihua Bai and Teng Zhou
Symmetry 2025, 17(4), 617; https://doi.org/10.3390/sym17040617 - 18 Apr 2025
Cited by 3 | Viewed by 1160
Abstract
The citrus industry plays a pivotal role in modern agriculture. With the expansion of citrus plantations, the intelligent detection and prevention of diseases and pests have become essential for advancing smart agriculture. Traditional citrus leaf disease identification methods primarily rely on manual observation, which is often time-consuming, labor-intensive, and prone to inaccuracies due to inherent asymmetries in disease manifestations. This work introduces CBACA-YOLOv5, an enhanced YOLOv5s-based detection algorithm designed to effectively capture the symmetric and asymmetric features of common citrus leaf diseases. Specifically, the model integrates the convolutional block attention module (CBAM), which symmetrically enhances feature extraction across spatial and channel dimensions, significantly improving the detection of small and occluded targets. Additionally, we incorporate coordinate attention (CA) mechanisms into the YOLOv5s C3 module, explicitly addressing asymmetrical spatial distributions of disease features. The CARAFE upsampling module further optimizes feature fusion symmetry, enhancing the extraction efficiency and accelerating the network convergence. Experimental findings demonstrate that CBACA-YOLOv5 achieves an accuracy of 96.1% and a mean average precision (mAP) of 92.1%, and improvements of 0.6% and 2.3%, respectively, over the baseline model. The proposed CBACA-YOLOv5 model exhibits considerable robustness and reliability in detecting citrus leaf diseases under diverse and asymmetrical field conditions, thus holding substantial promise for practical integration into intelligent agricultural systems. Full article
(This article belongs to the Section Computer)
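
The channel half of CBAM, which the model above integrates, can be reduced to its squeeze-and-gate essence. This sketch deliberately omits the shared MLP and the parallel max-pooling path of the real module:

```python
import math

def channel_attention(feature_maps):
    """CBAM-style channel attention, reduced to its core idea: squeeze each
    channel to its global average, pass that through a sigmoid gate, and
    rescale the channel by the resulting weight."""
    out = []
    for ch in feature_maps:                       # each ch: flat list of activations
        squeeze = sum(ch) / len(ch)               # global average pooling
        gate = 1.0 / (1.0 + math.exp(-squeeze))   # sigmoid excitation
        out.append([gate * v for v in ch])
    return out
```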

23 pages, 1660 KB  
Article
A Deep Learning Model for Accurate Maize Disease Detection Based on State-Space Attention and Feature Fusion
by Tong Zhu, Fengyi Yan, Xinyang Lv, Hanyi Zhao, Zihang Wang, Keqin Dong, Zhengjie Fu, Ruihao Jia and Chunli Lv
Plants 2024, 13(22), 3151; https://doi.org/10.3390/plants13223151 - 9 Nov 2024
Cited by 6 | Viewed by 3918
Abstract
In improving agricultural yields and ensuring food security, precise detection of maize leaf diseases is of great importance. Traditional disease detection methods show limited performance in complex environments, making it challenging to meet the demands for precise detection in modern agriculture. This paper proposes a maize leaf disease detection model based on a state-space attention mechanism, aiming to effectively utilize the spatiotemporal characteristics of maize leaf diseases to achieve efficient and accurate detection. The model introduces a state-space attention mechanism combined with a multi-scale feature fusion module to capture the spatial distribution and dynamic development of maize diseases. In experimental comparisons, the proposed model demonstrates superior performance in the task of maize disease detection, achieving a precision, recall, accuracy, and F1 score of 0.94. Compared with baseline models such as AlexNet, GoogLeNet, ResNet, EfficientNet, and ViT, the proposed method achieves a precision of 0.95, with the other metrics also reaching 0.94, showing significant improvement. Additionally, ablation experiments verify the impact of different attention mechanisms and loss functions on model performance. The standard self-attention model achieved a precision, recall, accuracy, and F1 score of 0.74, 0.70, 0.72, and 0.72, respectively. The Convolutional Block Attention Module (CBAM) showed a precision of 0.87, recall of 0.83, accuracy of 0.85, and F1 score of 0.85, while the state-space attention module achieved a precision of 0.95, with the other metrics also at 0.94. In terms of loss functions, cross-entropy loss showed a precision, recall, accuracy, and F1 score of 0.69, 0.65, 0.67, and 0.67, respectively. Focal loss showed a precision of 0.83, recall of 0.80, accuracy of 0.81, and F1 score of 0.81. 
State-space loss demonstrated the best performance in these experiments, achieving a precision of 0.95, with recall, accuracy, and F1 score all at 0.94. These results indicate that the model based on the state-space attention mechanism achieves higher detection accuracy and better generalization ability in the task of maize leaf disease detection, effectively improving the accuracy and efficiency of disease recognition and providing strong technical support for the early diagnosis and management of maize diseases. Future work will focus on further optimizing the model’s spatiotemporal feature modeling capabilities and exploring multi-modal data fusion to enhance the model’s application in real agricultural scenarios. Full article
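
The precision, recall, accuracy, and F1 figures quoted throughout the abstract all derive from the same confusion counts. For reference, the standard formulas:

```python
def detection_metrics(tp, fp, fn, tn=0):
    """Precision, recall, accuracy, and F1 score from confusion counts."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, accuracy, f1
```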

37 pages, 14848 KB  
Article
Design and Implementation of a Low-Cost, Linear Robotic Camera System, Targeting Greenhouse Plant Growth Monitoring
by Zacharias Kamarianakis, Spyros Perdikakis, Ioannis N. Daliakopoulos, Dimitrios M. Papadimitriou and Spyros Panagiotakis
Future Internet 2024, 16(5), 145; https://doi.org/10.3390/fi16050145 - 23 Apr 2024
Cited by 7 | Viewed by 5000
Abstract
Automated greenhouse production systems frequently employ non-destructive techniques, such as computer vision-based methods, to accurately measure plant physiological properties and monitor crop growth. By utilizing an automated image acquisition and analysis system, it becomes possible to swiftly assess the growth and health of plants throughout their entire lifecycle. This valuable information can be utilized by growers, farmers, and crop researchers who are interested in self-cultivation procedures. At the same time, such a system can alleviate the burden of daily plant photography for human photographers and crop researchers, while facilitating automated plant image acquisition for crop status monitoring. Given these considerations, the aim of this study was to develop an experimental, low-cost, 1-DOF linear robotic camera system specifically designed for automated plant photography. As an initial evaluation of the proposed system, which targets future research endeavors of simplifying the process of plant growth monitoring in a small greenhouse, the experimental setup and precise plant identification and localization are demonstrated in this work through an application on lettuce plants, imaged mostly under laboratory conditions. Full article
(This article belongs to the Section Smart System Infrastructure and Applications)

14 pages, 5540 KB  
Article
Addressing Ergonomic Challenges in Agriculture through AI-Enabled Posture Classification
by Siddhant Kapse, Ruoxuan Wu and Ornwipa Thamsuwan
Appl. Sci. 2024, 14(2), 525; https://doi.org/10.3390/app14020525 - 7 Jan 2024
Cited by 8 | Viewed by 4119
Abstract
In this study, we explored the application of Artificial Intelligence (AI) for posture detection in the context of ergonomics in the agricultural field. Leveraging computer vision and machine learning, we aim to overcome limitations in accuracy, robustness, and real-time application found in traditional approaches such as observation and direct measurement. We first collected field videos to capture real-world scenarios of workers in an outdoor plant nursery. Next, we labeled workers’ trunk postures into three distinct categories: neutral, slight forward bending and full forward bending. Then, through CNNs, transfer learning, and MoveNet, we investigated the effectiveness of different approaches in accurately classifying trunk postures. Specifically, MoveNet was utilized to extract key anatomical features, which were then fed into various classification algorithms including DT, SVM, RF and ANN. The best performance was obtained using MoveNet together with ANN (accuracy = 87.80%, precision = 87.46%, recall = 87.52%, and F1-score = 87.41%). The findings of this research contributed to the integration of computer vision techniques with ergonomic assessments especially in the outdoor field settings. The results highlighted the potential of correct posture classification systems to enhance health and safety prevention practices in the agricultural industry. Full article
(This article belongs to the Special Issue Computer Vision in Human Activity Recognition and Behavior Analysis)
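
The three trunk-posture categories above can be derived from two pose keypoints by measuring the flexion angle of the hip-to-shoulder segment against the vertical. A sketch with illustrative angle thresholds (the study's own class boundaries are not given in the abstract):

```python
import math

def trunk_posture(hip, shoulder, slight_deg=20.0, full_deg=60.0):
    """Classify trunk posture from two 2D keypoints in image coordinates
    (y increasing downward): the flexion angle is measured between the
    hip->shoulder segment and the vertical. Thresholds are illustrative."""
    dx = shoulder[0] - hip[0]
    dy = hip[1] - shoulder[1]          # vertical rise from hip to shoulder
    angle = math.degrees(math.atan2(abs(dx), dy))
    if angle < slight_deg:
        return "neutral"
    if angle < full_deg:
        return "slight forward bending"
    return "full forward bending"
```

In the study, such keypoints would come from MoveNet, with the angle (or raw keypoints) fed to the downstream classifier.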

29 pages, 3750 KB  
Review
An Overview of Recent Advances in Greenhouse Strawberry Cultivation Using Deep Learning Techniques: A Review for Strawberry Practitioners
by Jong-Won Yang and Hyun-Il Kim
Agronomy 2024, 14(1), 34; https://doi.org/10.3390/agronomy14010034 - 21 Dec 2023
Cited by 14 | Viewed by 5384
Abstract
Strawberry (Fragaria × ananassa Duch.) has been widely accepted as the “Queen of Fruits”. It has been identified as having high levels of vitamin C and antioxidants that are beneficial for maintaining cardiovascular health and maintaining blood sugar levels. The implementation of advanced techniques like precision agriculture (PA) is crucial for enhancing production compared to conventional farming methods. In recent years, the successful application of deep learning models was represented by convolutional neural networks (CNNs) in a variety of disciplines of computer vision (CV). Due to the dearth of a comprehensive and detailed discussion on the application of deep learning to strawberry cultivation, a particular review of recent technologies is needed. This paper provides an overview of recent advancements in strawberry cultivation utilizing Deep Learning (DL) techniques. It provides a comprehensive understanding of the most up-to-date techniques and methodologies used in this field by examining recent research. It also discusses the recent advanced variants of the DL model, along with a fundamental overview of CNN architecture. In addition, techniques for fine-tuning DL models have been covered. Besides, various strawberry-planting-related datasets were examined in the literature, and the limitations of using research models for real-time research have been discussed. Full article
(This article belongs to the Special Issue Food and Agricultural Imaging Systems – An Outlook to the Future)

18 pages, 9991 KB  
Article
Intelligent Monitoring System to Assess Plant Development State Based on Computer Vision in Viticulture
by Marina Rudenko, Anatoliy Kazak, Nikolay Oleinikov, Angela Mayorova, Anna Dorofeeva, Dmitry Nekhaychuk and Olga Shutova
Computation 2023, 11(9), 171; https://doi.org/10.3390/computation11090171 - 3 Sep 2023
Cited by 6 | Viewed by 2977
Abstract
Plant health plays an important role in influencing agricultural yields and poor plant health can lead to significant economic losses. Grapes are an important and widely cultivated plant, especially in the southern regions of Russia. Grapes are subject to a number of diseases that require timely diagnosis and treatment. Incorrect identification of diseases can lead to large crop losses. A neural network deep learning dataset of 4845 grape disease images was created. Eight categories of common grape diseases typical of the Black Sea region were studied: Mildew, Oidium, Anthracnose, Esca, Gray rot, Black rot, White rot, and bacterial cancer of grapes. In addition, a set of healthy plants was included. In this paper, a new selective search algorithm for monitoring the state of plant development based on computer vision in viticulture, based on YOLOv5, was considered. The most difficult part of object detection is object localization. As a result, the fast and accurate detection of grape health status was realized. The test results showed that the accuracy was 97.5%, with a model size of 14.85 MB. An analysis of existing publications and patents found using the search “Computer vision in viticulture” showed that this technology is original and promising. The developed software package implements the best approaches to the control system in viticulture using computer vision technologies. A mobile application was developed for practical use by the farmer. The developed software and hardware complex can be installed in any vehicle. Such a mobile system will allow for real-time monitoring of the state of the vineyards and will display it on a map. The novelty of this study lies in the integration of software and hardware. Decision support system software can be adapted to solve other similar problems. The software product commercialization plan is focused on the automation and robotization of agriculture, and will form the basis for adding the next set of similar software. 
Full article

20 pages, 1319 KB  
Article
UAV-Based Computer Vision System for Orchard Apple Tree Detection and Health Assessment
by Hela Jemaa, Wassim Bouachir, Brigitte Leblon, Armand LaRocque, Ata Haddadi and Nizar Bouguila
Remote Sens. 2023, 15(14), 3558; https://doi.org/10.3390/rs15143558 - 15 Jul 2023
Cited by 18 | Viewed by 6481
Abstract
Accurate and efficient orchard tree inventories are essential for acquiring up-to-date information, which is necessary for effective treatments and crop insurance purposes. Surveying orchard trees, including tasks such as counting, locating, and assessing health status, plays a vital role in predicting production volumes and facilitating orchard management. However, traditional manual inventories are known to be labor-intensive, expensive, and prone to errors. Motivated by recent advancements in UAV imagery and computer vision methods, we propose a UAV-based computer vision framework for individual tree detection and health assessment. Our proposed approach follows a two-stage process. Firstly, we propose a tree detection model by employing a hard negative mining strategy using RGB UAV images. Subsequently, we address the health classification problem by leveraging multi-band imagery-derived vegetation indices. The proposed framework achieves an F1-score of 86.24% for tree detection and an overall accuracy of 97.52% for tree health assessment. Our study demonstrates the robustness of the proposed framework in accurately assessing orchard tree health from UAV images. Moreover, the proposed approach holds potential for application in various other plantation settings, enabling plant detection and health assessment using UAV imagery. Full article
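
Vegetation-index-based health classification, as used in the second stage above, can be illustrated with NDVI, the most common such index. The threshold is a placeholder, and the study combines several multi-band indices rather than a single one:

```python
def ndvi(nir, red):
    """Normalized Difference Vegetation Index from NIR and red reflectance."""
    return (nir - red) / (nir + red) if (nir + red) else 0.0

def tree_health(nir, red, threshold=0.4):
    """Illustrative per-tree health call from a single NDVI threshold."""
    return "healthy" if ndvi(nir, red) >= threshold else "stressed"
```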

19 pages, 11152 KB  
Review
Horticulture 4.0: Adoption of Industry 4.0 Technologies in Horticulture for Meeting Sustainable Farming
by Rajat Singh, Rajesh Singh, Anita Gehlot, Shaik Vaseem Akram, Neeraj Priyadarshi and Bhekisipho Twala
Appl. Sci. 2022, 12(24), 12557; https://doi.org/10.3390/app122412557 - 8 Dec 2022
Cited by 43 | Viewed by 10862
Abstract
The United Nations has set a significant agenda for reducing hunger and protein malnutrition, as well as micronutrient (vitamin and mineral) malnutrition, which is estimated to affect the health of up to two billion people. The UN also recognized this need through the Sustainable Development Goals (SDG 2 and SDG 12), which aim to end hunger and foster sustainable agriculture by enhancing the production and consumption of fruits and vegetables. Previous studies stressed various industry-related issues in horticulture but did not emphasize the centrality of Industry 4.0 technologies for confronting these issues, from production to marketing, in the context of sustainability. The current study addresses the significance and application of Industry 4.0 technologies such as the Internet of Things, cloud computing, artificial intelligence, blockchain, and big data in horticulture for enhancing traditional practices in disease detection, irrigation management, fertilizer management, maturity identification, marketing and supply chain management, soil fertility, and weather-pattern monitoring at the pre-harvest, harvest, and post-harvest stages. On the basis of this analysis, the article identifies challenges and suggests a few vital recommendations for future work: robotics; drones with vision technology and AI for detecting pests, weeds, plant diseases, and malnutrition; and portable edge-computing devices, developed with IoT and AI, for predicting and estimating crop diseases. Full article
(This article belongs to the Special Issue Agriculture 4.0 – the Future of Farming Technology)

30 pages, 10675 KB  
Article
How Are Macro-Scale and Micro-Scale Built Environments Associated with Running Activity? The Application of Strava Data and Deep Learning in Inner London
by Hongchao Jiang, Lin Dong and Bing Qiu
ISPRS Int. J. Geo-Inf. 2022, 11(10), 504; https://doi.org/10.3390/ijgi11100504 - 27 Sep 2022
Cited by 66 | Viewed by 7370
Abstract
Running can promote public health. However, the association between running and the built environment, especially micro street-level factors, has rarely been studied. This study explored the influence of built environments at different scales on running in Inner London. The 5Ds framework (density, diversity, design, destination accessibility, and distance to transit) was used to classify the macro-scale features, while computer vision (CV) and deep learning (DL) were used to measure the micro-scale features. We extracted accumulated GPS running data for 40,290 sample points from Strava. The spatial autoregressive combined (SAC) model revealed a spatial autocorrelation effect. The results showed that, for macro-scale features: (1) running occurs more frequently on trunk, primary, secondary, and tertiary roads, cycleways, and footways, whereas runners choose tracks, paths, pedestrian streets, and service streets relatively less; (2) safety, larger open-space areas, and longer street lengths promote running; (3) streets with higher accessibility might attract runners (according to a space syntax analysis); and (4) higher job density, POI entropy, canopy density, and high levels of PM2.5 might impede running. For micro-scale features: (1) wider roads (especially sidewalks), more streetlights, more trees, higher sky openness, and proximity to mountains and water facilitate running; and (2) more architectural interfaces, fences, and plants with low branching points might hinder running. These results reveal the linkages between macro- and micro-scale built environments and running in Inner London, and can provide practical suggestions for creating running-friendly cities. Full article
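The SAC model used above is motivated by spatial autocorrelation in the Strava sample points; a standard first diagnostic for such autocorrelation is global Moran's I. A minimal NumPy sketch, where the row-standardized weights matrix `W` is an illustrative assumption rather than the paper's actual spatial-weights specification:

```python
import numpy as np

def morans_i(values, W) -> float:
    """Global Moran's I for spatial autocorrelation.

    values: (n,) observations at n locations.
    W: (n, n) spatial weights matrix with zero diagonal
       (e.g. row-standardized contiguity or k-nearest-neighbor weights).
    Returns a value in roughly [-1, 1]; positive means similar values cluster.
    """
    x = np.asarray(values, dtype=float)
    n = x.size
    z = x - x.mean()                 # deviations from the mean
    num = z @ W @ z                  # spatially weighted cross-products
    den = z @ z                      # total variance term
    return float((n / W.sum()) * (num / den))
```

A significantly positive Moran's I on the model residuals is what justifies moving from ordinary least squares to a spatial specification such as SAC.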
