Review

Computer Vision for Site-Specific Weed Management in Precision Agriculture: A Review

1 Biological Systems Engineering, University of Nebraska-Lincoln, Lincoln, NE 68583, USA
2 School of Natural Resources, University of Nebraska-Lincoln, Lincoln, NE 68583, USA
* Author to whom correspondence should be addressed.
Current address: Plant and Soil Sciences, University of Delaware, Newark, DE 19716, USA.
Agriculture 2025, 15(21), 2296; https://doi.org/10.3390/agriculture15212296
Submission received: 30 September 2025 / Revised: 23 October 2025 / Accepted: 27 October 2025 / Published: 4 November 2025

Abstract

Weed management remains a persistent challenge in crop production, exacerbated by the issue of herbicide resistance. Excessive herbicide application not only drives the development of herbicide-resistant weeds but also causes environmental problems. In precision agriculture, innovative weed management methods, especially advanced remote sensing and computer vision technologies for targeted herbicide applications, i.e., site-specific weed management (SSWM), have recently drawn considerable attention. Challenges remain in accurately and reliably detecting diverse weed species under varying field conditions. Significant efforts have been made to advance computer vision technologies for weed detection. This comprehensive review provides an in-depth examination of the methodologies used in developing weed detection systems, spanning a spectrum from traditional image processing techniques to state-of-the-art machine and deep learning models. The review further discusses the potential of these methods for real-time applications, highlights recent innovations, and identifies future research hotspots in SSWM. These advancements hold great promise for further enhancing and innovating weed management practices in precision agriculture.

1. Introduction

In the 21st century, one of the major concerns in agriculture is improving food production efficiency to feed a growing global population with a declining rural labor force, which necessitates the adoption of sustainable and more efficient production methods [1]. In addition to breeding higher-yielding crop varieties, it is vital to address the numerous factors causing crop yield losses, such as weed infestations. Weeds, defined as unwanted plants that compete with crops for essential resources such as water, sunlight, nutrients, and physical space, significantly reduce agricultural productivity. Beyond resource competition, weeds also serve as hosts for harmful insects and pathogens, further threatening crop health and quality. Traditional weed control methods, such as extensive herbicide use, have led to environmental pollution, land degradation, and the emergence of herbicide-resistant weed species [2]. These challenges underscore the urgent need for innovative and sustainable weed management strategies in precision agriculture (PA).
Weed management is a complex, labor-intensive, and information-intensive task. Since the 1940s, pesticides, particularly herbicides, have contributed significantly to weed management practices. However, traditional practices, including manual weeding and uniform herbicide applications, have proven inefficient and labor-intensive, yet they remain widespread among farmers operating small fields in developing countries [3]. The widespread use of herbicides, while effective in increasing crop yields, has raised significant environmental and economic concerns, including soil and water contamination and the emergence of herbicide-resistant weeds, posing risks to human and ecosystem health [4].
In response, PA technologies that leverage unmanned aerial vehicles (UAVs) and remote sensing (RS) tools for large-scale data collection, combined with machine vision analysis, have enabled real-time weed detection and targeted herbicide application, ultimately reducing chemical inputs and promoting sustainable agricultural practices [1]. A core idea of PA is a sustainable, data-driven approach that ensures crops and soil receive exactly what they need (e.g., nutrients, water, and space) to optimize resource use and enhance crop productivity [5]. Factors such as the emergence of herbicide-resistant weeds, a crop's ability to suppress weed growth, and the effects of different weed species on yield must be considered to understand how weeds influence overall crop yield [4]. As more agricultural data have been gathered, particularly image data, the advent of advanced technologies, especially computer vision, has further expanded the capabilities of PA. Computer vision techniques have proven invaluable in tasks such as crop yield prediction, disease detection, species identification, and weed management [1].
In the context of weed detection and management, PA combined with computer vision has introduced site-specific herbicide spraying techniques designed to cut waste and lower chemical residues compared to conventional spraying practices [6]. However, the success of site-specific spraying relies on the accurate identification of crops and weeds. Weed detection has therefore been driven by the integration of traditional image processing techniques, machine learning (ML), and deep learning (DL) methods. Early approaches relied on extracting key features such as color, texture, and shape, combined with object-based image analysis to segment images into meaningful objects rather than individual pixels [7,8,9]. These features were then used to train ML algorithms, enhancing the accuracy of weed detection models [1,10]. More recently, DL methods, particularly convolutional neural networks (CNNs), have become increasingly popular because of their strong capacity for automatic feature extraction and self-learning, enabling the development of robust weed detection models [11]. These advancements in computer vision have laid the foundation for real-time or near-real-time site-specific weed management (SSWM) applications.
SSWM focuses on applying herbicides precisely where weeds are detected, allowing treatments to be performed in real time or near real time as weeds are identified [12]. This approach leverages advanced machinery equipped with real-time weed detection technologies, such as machine vision systems and sensor-integrated platforms [13]. Numerous practical SSWM applications have been developed, leveraging both chemical and non-chemical weeding strategies. Chemical-based methods include spot-spraying of herbicide using sprayer tractors, unmanned ground vehicles (UGVs), and UAVs [14]. One notable example is the “See & Spray” system developed by John Deere (Moline, IL, USA), an advanced technology that detects weeds and performs targeted spot-spraying of herbicides using tractor-mounted sprayers. Non-chemical strategies, on the other hand, employ techniques such as high-voltage electric discharge and mechanical actuators to physically remove weeds [15,16]. The success of these weed-removal technologies rests on the automated identification and classification of weed species, i.e., the computer vision system. Developing a computer vision system for SSWM typically involves five key steps [1,17]:
  • Data acquisition using remote sensing,
  • Data pre-processing and development of weed detection models,
  • Generation of weed density maps,
  • Weeding application via actuators, and
  • Performance evaluation of the precision operation.
Among these steps, weed detection plays a pivotal role, as it informs subsequent decision-making processes and ensures the effectiveness of SSWM; a minimal sketch of this control loop follows.
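To make the workflow concrete, the following minimal Python sketch mimics the five steps with synthetic stand-ins; every function here is a hypothetical placeholder for illustration, not code from any reviewed system.

    import numpy as np

    def acquire_frame() -> np.ndarray:
        """Step 1: stand-in for camera/RS data acquisition (random RGB frame)."""
        return np.random.randint(0, 255, (480, 640, 3), dtype=np.uint8)

    def detect_weeds(frame: np.ndarray) -> np.ndarray:
        """Step 2: toy detector returning a boolean weed mask (green dominance)."""
        return frame[..., 1].astype(int) > frame[..., 0].astype(int) + 20

    def weed_density_map(mask: np.ndarray, cell: int = 80) -> np.ndarray:
        """Step 3: grid the mask into cells and compute per-cell weed fraction."""
        h, w = mask.shape
        grid = mask[: h // cell * cell, : w // cell * cell]
        return grid.reshape(h // cell, cell, w // cell, cell).mean(axis=(1, 3))

    def actuate(density: np.ndarray, threshold: float = 0.2) -> np.ndarray:
        """Step 4: trigger a (virtual) nozzle for each cell above the threshold."""
        return density > threshold

    if __name__ == "__main__":
        nozzles = actuate(weed_density_map(detect_weeds(acquire_frame())))
        # Step 5: a crude performance summary of the precision operation.
        print(f"spraying {nozzles.mean():.0%} of grid cells")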
In this paper, we review computer vision techniques for weed detection using remotely sensed data in PA. While several reviews on weed detection models in agriculture have been published, covering topics such as conventional image processing methods, vegetation index-based approaches, and ML/DL techniques, there remains a need for a comprehensive analysis focusing on real-time weed detection models [1,3,18,19]. Given the growing emphasis on real-time or near-real-time applications in SSWM, this review addresses that gap through an in-depth analysis of existing real-time weed detection models. The primary objective is to compare conventional image-processing approaches with recent ML- and DL-based computer vision methods for developing weed detection models. We discuss the development of real-time weed detection models, analyze the challenges associated with existing methods, and explore future research directions. In doing so, this review seeks to contribute to ongoing efforts to enhance weed management practices, ultimately supporting sustainable agricultural production.

2. Methodology

This paper reviews and investigates the role of computer vision techniques in developing real-time applications for SSWM and in creating weed maps that involve the classification and detection of weed patches, along with their corresponding location data in agricultural fields. A systematic search was conducted through Google Scholar, ResearchGate, ScienceDirect, MDPI, Wiley Online Library, ProQuest, and the University of Nebraska-Lincoln Library to gather relevant literature on weed identification models utilizing computer vision in PA.
The search strategy employed a combination of the primary keyword “Computer Vision” and the secondary keyword “weed detection” to identify relevant literature. Additionally, a manual search was conducted to ensure comprehensive coverage of the topic by including any further pertinent publications. As a result, 62 research articles were selected, encompassing various weed management practices utilizing real-time or near-real-time weed detection models from 2002 to 2025. Furthermore, 9 review articles were also examined to gain a broader understanding of the advancements and trends in computer vision-based weed detection. Figure 1 presents the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) flow diagram outlining the systematic process used for identifying, screening, and selecting the studies included in this review.

3. Computer Vision for Weed Detection

Figure 2 presents the chronological progression of computer vision methods applied to weed detection, evolving from traditional image processing approaches to classical ML, CNN-based DL, and the modern era of attention and transformer-based architectures. It summarizes the key algorithms and model developments that have driven PA toward advanced, real-time, and intelligent weed management solutions. The subsequent sections provide detailed discussions of each phase in this evolution.

3.1. Remotely Sensed Images as a Foundation of Data Source

RS refers to the process of observing and measuring the physical properties of objects without direct contact. It is the foundation for applying computer vision in SSWM and has been used intensively over the past few decades for numerous PA applications, including crop monitoring, irrigation management, weather prediction, air quality observation, disease and pest management, and yield prediction [3,5,6,19]. It also serves as an essential data source for computer vision techniques applicable to SSWM [13]. RS platforms such as satellites, airplanes, UAVs, and UGVs collect remotely sensed data by carrying sensors capable of capturing RGB and multi/hyperspectral imagery, visible and near-infrared (NIR) spectroscopy, and light detection and ranging (LiDAR) data for distance sensing [17,20,21,22]. The collected datasets are processed, analyzed, and then used as inputs to train models designed for weed identification. For instance, several weed detection algorithms have been developed to classify various vegetation types based on features such as spectral characteristics, leaf color, height, texture, and shape extracted from remotely sensed data [2].

3.2. Traditional Computer Vision Based Approaches for Weed Detection

Image analysis has long served as a practical tool for identifying weeds from crops. Several established techniques form the foundation of this approach: edge detection algorithms trace the outlines of plants and objects, color segmentation exploits the visual differences between weed and crop species, texture-based methods examine surface patterns to tell plants apart, and template comparison checks portions of images against known weed examples to confirm matches [19]. These approaches provide effective, though limited, solutions for weed management by relying on visual cues from captured images and videos [18].
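As an illustration of such color-based segmentation, the following sketch computes the excess green index (ExG = 2g - r - b), a common vegetation cue in traditional pipelines, and separates vegetation from soil with Otsu thresholding. It assumes OpenCV is available; "field.jpg" is a placeholder path, not an image from the reviewed studies.

    import cv2
    import numpy as np

    # Normalize BGR channels so ExG is less sensitive to overall brightness.
    bgr = cv2.imread("field.jpg").astype(np.float32)
    total = bgr.sum(axis=2) + 1e-6
    b, g, r = bgr[..., 0] / total, bgr[..., 1] / total, bgr[..., 2] / total
    exg = 2 * g - r - b   # excess green index per pixel

    # Otsu's method picks a global threshold separating vegetation from soil.
    exg_u8 = cv2.normalize(exg, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    _, veg_mask = cv2.threshold(exg_u8, 0, 255,
                                cv2.THRESH_BINARY + cv2.THRESH_OTSU)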
However, their adaptability varies significantly across crop types and environmental conditions. In cereal crops such as wheat, barley, and maize, color and shape-based segmentation methods generally performed well due to the relatively uniform canopy and clear contrast between weeds and crop plants [23]. In contrast, in broadleaf crops like soybean, sugar beet, and canola, where weeds and crops often exhibit similar leaf color, texture, and morphology, similar algorithms sometimes struggled to distinguish between species reliably [9,13]. Texture-based methods demonstrated better adaptability under these conditions, as they could capture structural and surface differences between leaves. Yet, their performance was still affected by external factors such as lighting variations, shadows, and soil background noise, particularly in heterogeneous crop environments. Spectral reflectance-based approaches showed broader adaptability across multiple crop systems, as they relied on multispectral or hyperspectral data rather than relying solely on the information available from the visible range [7,20]. For example, in sugar beet and maize fields, spectral indices such as NDVI and Red Edge NDVI allowed for more reliable separation of crops and weeds under variable illumination. However, these methods required precise calibration and specialized sensors, limiting their feasibility for real-time or low-cost applications.
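For reference, a spectral index such as NDVI is computed directly from co-registered band arrays, NDVI = (NIR - Red) / (NIR + Red). The sketch below uses synthetic reflectance values; a real workflow would load calibrated multispectral imagery, and the threshold would be tuned per sensor and field.

    import numpy as np

    # Synthetic stand-ins for co-registered NIR and red reflectance bands.
    nir = np.random.rand(512, 512).astype(np.float32)
    red = np.random.rand(512, 512).astype(np.float32)

    ndvi = (nir - red) / (nir + red + 1e-6)   # small epsilon avoids /0
    vegetation = ndvi > 0.4                   # illustrative threshold only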
Several studies have explored these traditional techniques of weed detection across different crops. For instance, reflectance-based weed identification was investigated by analyzing the spectral signatures of crop canopies (sugar beet and maize) and weeds under controlled laboratory settings, ultimately achieving a classification accuracy of 97.0% [13]. In another study, hyperspectral data were leveraged by selecting two optimal spectral channels from 100 available channels for soil-crop segmentation, followed by texture analysis to detect weeds [20]. Similarly, spectral differences were utilized to classify sugar beet crops and four weed species [9]. A combination of plant height and spectral reflectance was used to develop a weed detection model to distinguish crops from weeds in organic carrot fields [8]. These studies demonstrate the wide range of conventional computer vision approaches used for weed detection through the extraction of color, shape, texture, and spectral features. Nonetheless, their susceptibility to environmental variations and differences in plant characteristics underscores the need for more resilient and adaptable methods [6].
An early study conducted in Spain developed robotic weed controllers equipped with dual vision systems: the first vision system enabled the robot to differentiate between cultivated plants and weeds, while the second pinpointed the precise coordinates of weed locations in the field. In this setup, a battery-powered electrode executed weed eradication by emitting an electric discharge of 15,000 volts [15]. Similarly, a robotic platform was developed with the capability to adapt and operate between row crops spaced between 0.25 m and 0.50 m, employing cameras for row guidance and weed detection. The robot included improved modules such as four-wheel steering and propulsion, which provided better mobility through adjustments in orientation [24].
Traditional detection algorithms demonstrated inadequate and inconsistent accuracy for real-time applications in prototype trials. They rely on spectral and physical features (shape, color, texture, height) and are sensitive to changing illumination conditions [25]. For example, detection algorithms that relied on the vegetative spectral reflectance of weed-infested regions varied in effectiveness as the time of day and lighting conditions changed. Deploying these detection algorithms on devices for real-time weed detection also encountered other challenges, such as slow processing speeds, large memory requirements, and high hardware costs. Their dependence on manually designed features and vulnerability to factors such as canopy density, soil reflectance, and uneven lighting restricted their generalization to broader agricultural settings. Consequently, the transition toward data-driven approaches became essential, with ML and DL algorithms offering more robust, adaptive solutions capable of learning crop-specific and environmental variations, thereby overcoming many of the limitations inherent in traditional methods.

3.3. Machine Learning-Based Computer Vision Approaches for Weed Detection

ML employs algorithms that enable systems to learn patterns from data and progressively enhance their performance as more data are processed, serving as a fundamental component of artificial intelligence (AI) and forming the basis for many computer vision applications. In this section, we present a detailed review of ML applications developed specifically for the weed detection domain. ML can be broadly classified into two categories based on the learning approach used to train the model: supervised and unsupervised learning [26,27,28,29].
The key distinction between supervised and unsupervised learning lies in how the learning process is structured. In supervised learning, the model is trained on a dataset where both the inputs and the corresponding output labels are explicitly provided [3]. Simply put, the model learns from a well-annotated dataset, enabling it to make predictions on new data based on patterns identified during training. A related paradigm is reinforcement learning, in which the model improves through feedback received during training [2]: positive reinforcement is given when the model performs correctly, while negative feedback is applied when it deviates from expected results. To assess a model's ability to generalize to unseen data, the dataset is typically divided into separate training and testing sets. The model learns from the training set, and its performance is evaluated on the testing set by comparing predicted outputs with actual values. Unsupervised learning, on the other hand, involves training a model on unlabeled data, where no predefined output labels are provided [28]. Instead, the model independently identifies patterns and structures within the data, learning to categorize or cluster similar data points. Unlike supervised learning, there is no prior knowledge of how data values relate to each other. Table 1 presents example studies on weed detection using ML models, with particular attention to specific algorithms, including clustering methods, regression models, support vector machines (SVM), decision trees (DT), artificial neural networks (ANN), and ensemble learning approaches.

3.3.1. Clustering Algorithms

Clustering is an example of an unsupervised ML approach. For example, K-means clustering groups data samples with similar feature characteristics into the same cluster [19]. This approach has been used by various researchers to perform weed detection by clustering vegetation pixels into weeds and crops based on feature inputs. For instance, a clustering algorithm was employed to group plants with similar shapes, distinguishing them from weeds that were clustered separately based on their distinct characteristics [28]. Additionally, a weed detection study employed image segmentation, feature extraction, and crop-row detection, utilizing a clustering algorithm to classify weeds and crops based on sixteen features extracted from field imagery [29]. Clustering algorithms perform well in less complex environments, such as early growth stages when weeds and crops exhibit distinct morphological differences. As the growing season advances and the visual similarity between weeds and crops increases, the effectiveness of clustering-based approaches becomes limited, suggesting the need for complementary detection methods in later growth stages [19].
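A minimal K-means sketch of this idea follows, assuming scikit-learn is available. The feature matrix is synthetic; the reviewed studies extracted per-plant features such as color, texture, and morphology instead.

    import numpy as np
    from sklearn.cluster import KMeans

    # 200 plants described by 5 extracted features (synthetic stand-ins).
    features = np.random.rand(200, 5)

    # Two clusters, intended to correspond to crop-like vs. weed-like groups.
    kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(features)
    labels = kmeans.labels_   # unsupervised cluster assignment per plant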

3.3.2. K-Nearest Neighbors Algorithm

The K-nearest neighbors (KNN) classifier is a supervised learning algorithm that classifies data points based on the majority label of their K closest neighbors in the feature space. It is a non-parametric, instance-based method that relies on distance metrics to make predictions [23]. In one study, KNN classifiers were used to perform weed identification on sugarcane plantations using imagery collected by a remotely piloted aircraft, reporting an overall accuracy of 83.1% and a kappa coefficient of 0.775 [32]. KNN classifiers have shown promising potential for weed detection by identifying plants based on their resemblance to known training samples. However, their performance largely depends on choosing the right distance metric and the appropriate number of neighbors, making careful parameter tuning essential for consistent results under varying field conditions [2].
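The sketch below shows the two tuning choices highlighted above (the number of neighbors k and the distance metric) in scikit-learn; the features and labels are synthetic placeholders.

    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.neighbors import KNeighborsClassifier

    # Synthetic plant features and labels (1 = weed, 0 = crop).
    X = np.random.rand(300, 8)
    y = np.random.randint(0, 2, 300)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                              random_state=0)

    # k and the distance metric are the key parameters to tune per field.
    knn = KNeighborsClassifier(n_neighbors=5, metric="euclidean")
    knn.fit(X_tr, y_tr)
    print(f"accuracy: {knn.score(X_te, y_te):.2f}")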

3.3.3. Regression Algorithms

Regression algorithms are supervised ML models used to determine relationships between input (independent) and output (dependent) variables. They can perform both prediction and classification tasks depending on the response variable. Common types include linear regression, multiple regression, and logistic regression (LoR). While simple and multiple regression predict continuous outputs, LoR adapts the regression framework for classification by modeling the probability of discrete categories. This approach was applied to distinguish between crops and weeds by modeling extracted features with LoR [27]. A data matrix is created where rows correspond to individual image observations and columns represent predictor variables alongside a binary response indicator: for example, “1” denoting weed presence and “0” signifying absence (or vice versa). LoR has been applied to detect weed occurrence in fields by modeling its association with weed density metrics derived from image analysis [30]. These models were used to generate weed density maps of annual grass weeds in the later stages of cereals. In addition, multiple regression was used to relate ultrasonic readings to the coverage of crops and weeds [31]. Overall, regression algorithms provide a simple yet powerful framework for modeling relationships between input features and target variables in weed detection tasks. Their interpretability and low computational demands make them suitable for early-stage studies or small datasets. However, their performance often declines under highly variable field conditions or with complex, nonlinear patterns, highlighting the need for more advanced learning methods in such scenarios.
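The following sketch mirrors the data matrix described above (rows as image observations, a binary weed-presence response) with scikit-learn's LogisticRegression; all values are synthetic.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # Rows = image observations, columns = predictors (e.g., density metrics);
    # y is the binary response: 1 = weed present, 0 = absent.
    X = np.random.rand(500, 4)
    y = (X[:, 0] + 0.3 * np.random.randn(500) > 0.5).astype(int)

    model = LogisticRegression().fit(X, y)
    probs = model.predict_proba(X[:5])[:, 1]   # P(weed) for the first 5 rows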

3.3.4. Support Vector Machines

With advancements in image processing techniques, an increasing number of features can be extracted from images, presenting challenges for regression algorithms in handling complex tasks. One major issue is multicollinearity, where highly correlated features can negatively impact the performance of regression models and lead to overfitting. To address these challenges, SVM, a supervised ML algorithm, has been widely adopted for weed detection and other classification tasks [37]. SVM works by mapping data into a higher-dimensional feature space (using a kernel function if necessary) and finding the maximum margin hyperplane that best separates different classes [19]. This approach helps mitigate overfitting in high-dimensional feature spaces by maximizing the margin between classes and using regularization to balance model complexity and training accuracy.
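A minimal SVM sketch follows, assuming scikit-learn: the RBF kernel performs the higher-dimensional mapping described above, and the C parameter controls the regularization/margin trade-off. The fourteen features per plant are synthetic stand-ins for descriptors such as Gabor, FFT, or shape features.

    import numpy as np
    from sklearn.svm import SVC

    # Synthetic features (e.g., 14 descriptors per plant) and labels.
    X = np.random.rand(400, 14)
    y = np.random.randint(0, 2, 400)   # 1 = weed, 0 = crop

    # RBF kernel maps features implicitly; C balances margin vs. training fit.
    svm = SVC(kernel="rbf", C=1.0, gamma="scale").fit(X, y)
    preds = svm.predict(X[:10])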
In weed detection, SVM classifiers were used to detect broad and narrow weeds using features extracted from Gabor and Fast Fourier Transform (FFT) filters [10]. In a study classifying four weed species in sugar beet, Fourier descriptors and moment-invariant features, together with several shape features, were used to train a supervised SVM classifier that achieved an overall accuracy of 95.0% [33]. In another study, an SVM classifier trained with fourteen optimal features extracted from digital images of crops and weeds achieved an overall classification accuracy of 97.0% [34]. An SVM classifier was also able to successfully perform the complex classification of Avena sterilis weeds and cereal crops that share similar spectral signatures, even with minimal system memory and computational requirements [40].

3.3.5. Decision Trees

DTs are a type of supervised ML algorithm designed in a hierarchical structure, comprising a root node, branches, and internal nodes that systematically divide data based on specific feature values. This sequential splitting allows DTs to classify data or make predictions by learning patterns from labeled training datasets. The model progressively organizes objects with similar characteristics under a shared root node, facilitating effective classification. For instance, DT algorithms were applied to analyze shape and texture features of eight plant species using hyperspectral images, successfully differentiating corn from weeds in a controlled laboratory setting, with classification accuracy exceeding 95.0% [26]. DT algorithms were also employed to classify hyperspectral data from corn plots into three categories: water stress, nitrogen application levels, and weed presence, achieving accuracy scores of 96.0%, 83.0%, and 100%, respectively, highlighting the effectiveness of DTs in agricultural data analysis [35]. However, decision trees are prone to high variance and overfitting, because small changes in the training data can lead to very different tree structures and predictions.
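A short scikit-learn sketch illustrates the hierarchical splitting; limiting max_depth is one simple guard against the high-variance/overfitting behavior just noted. Features and labels are synthetic.

    import numpy as np
    from sklearn.tree import DecisionTreeClassifier

    # Synthetic shape/texture features and weed-vs-crop labels.
    X = np.random.rand(300, 6)
    y = np.random.randint(0, 2, 300)

    # max_depth caps the number of sequential splits to reduce overfitting.
    tree = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X, y)
    print(tree.get_depth(), tree.score(X, y))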

3.3.6. Ensemble Learning

Ensemble learning refers to training multiple ML models on different samples from training datasets and aggregating responses from all the models to generate a final output. This approach enhances generalizability to unseen datasets and reduces overfitting problems [2,19]. The two common categories of ensemble learning are bagging and boosting, which differ in both technical design and practical application.
Bagging trains multiple base learners in parallel on different bootstrap samples of the training data and combines their predictions through averaging (for regression) or voting (for classification). This parallel structure reduces model variance and improves stability, making bagging-based methods suitable for tasks with noisy data or highly variable environmental conditions. A widely used bagging algorithm is the random forest (RF), which builds multiple DTs using random subsets of features and combines their predictions by averaging or voting to improve accuracy and reduce overfitting [36]. RF-based weed detection models have shown strong adaptability for large-scale UAV imagery with high-dimensional spectral datasets, where variability in illumination, soil background, and weed density can introduce noise [19]. RF classifiers were used to perform weed species recognition on imagery collected at low altitudes using UAVs under low infestation levels in sugarcane fields [32]. RF models were also developed to detect alligator weeds, which form dense infestations in aquatic environments, using RS data to improve biosecurity supervision and monitoring efforts in Australia [36]. The bagging-enabled RF technique is especially well-suited to broad-area weed mapping or monitoring scenarios where input variability is high but the system must remain stable.
Boosting employs a sequential learning strategy in which each subsequent model focuses on correcting the errors made by its predecessors [2]. This iterative process enhances accuracy and corrects bias, making boosting techniques advantageous where weed-crop spectral differences are subtle or where datasets are imbalanced and weed samples are underrepresented [19]. A widely used boosting algorithm is XGBoost, which has proven effective in scenarios requiring fine-grained discrimination, such as distinguishing between buffel grass and spinifex with accuracy scores of 97.0%. Predictions performed under different scenarios, such as object rotation, illumination changes, and background clutter, suggest that XGBoost-based models are robust for weed detection in complex environments [41].
In summary, bagging-based methods (e.g., RF) are ideal for large-scale, variable, and noise-prone agricultural data environments where model stability is critical, while boosting-based methods (e.g., XGBoost) excel in high-precision, small-object detection tasks that demand adaptive error correction and fine differentiation between crop and weed species [19].
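The contrast can be shown side by side in a few lines, assuming scikit-learn and the xgboost package are installed; data here are synthetic placeholders.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from xgboost import XGBClassifier  # assumes xgboost is installed

    X = np.random.rand(500, 10)
    y = np.random.randint(0, 2, 500)

    # Bagging: parallel trees on bootstrap samples with random feature subsets.
    rf = RandomForestClassifier(n_estimators=200, max_features="sqrt",
                                random_state=0).fit(X, y)

    # Boosting: trees added sequentially, each correcting prior errors.
    xgb = XGBClassifier(n_estimators=200, learning_rate=0.1,
                        max_depth=4).fit(X, y)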

3.4. Deep Learning-Based Computer Vision Approaches for Weed Detection

The advent of advanced computer hardware, particularly graphics processing units (GPUs), and the development of sophisticated software frameworks have catalyzed the evolution from traditional ML to DL algorithms. DL, with its ability to automatically learn hierarchical features from raw data, has emerged as a powerful tool for addressing complex tasks in agriculture, including weed detection [3,6,27,41]. DL models usually require large datasets to be trained efficiently for tasks such as prediction and classification. DL algorithms such as CNNs, Long Short-Term Memory (LSTM) networks, Generative Adversarial Networks (GANs), Fully Convolutional Networks (FCNs), Recurrent Neural Networks (RNNs), and attention-based transformer models have been used to develop both supervised and unsupervised DL models [42]. In recent years, transformer-based models have gained significant attention as an alternative to traditional CNN architectures for various computer vision applications. Originally developed for natural language processing tasks, transformers leverage a self-attention mechanism, which allows them to effectively capture long-range dependencies and contextual information within an image [43,44]. Unlike CNNs, which rely on localized receptive fields, transformers analyze the entire image at once, enhancing their ability to differentiate between weeds and crops with similar visual characteristics.
DL approaches have been used to develop weed detection models that can perform SSWM tasks and support real-time sensing and spraying practices [3]. For instance, several CNN architectures, such as AlexNet, DenseNet, Xception, GoogLeNet, Inception, MobileNet, ResNet, ShuffleNet, and VGGNet, have been used to develop weed detection models [18]. These CNN architectures differ in the arrangement and number of convolutional filters, pooling layers, and fully connected layers. Deep CNN architectures (with many convolutional and pooling layers), such as Inception and ResNet, have been successfully used to solve complex classification tasks and achieve high accuracy scores, whereas shallow CNN architectures (with fewer convolutional and pooling layers), such as MobileNet and Xception, have been used on edge computing devices for real-time applications [45]. These architectures are widely used as backbones in popular DL paradigms, namely object detection and semantic segmentation, facilitating the application of state-of-the-art DL to weed detection and management. Vision transformers (ViTs) and hybrid architectures combining transformers with CNNs have shown promising results in PA due to their advanced feature representation capabilities [46]. Table 2 presents example studies on weed detection using DL models. Typically, weed detection DL models can be categorized into three types: object detection, semantic segmentation, and instance segmentation.

3.4.1. Object Detection

DL models that perform detection by generating bounding boxes around objects of interest, with probability scores indicating the confidence of each prediction, fall under the object detection category. Popular CNN object detection models used to train DL-based weed detection models include the Single Shot Detector (SSD), Faster R-CNN, and You Only Look Once (YOLO) [6,55,56]. Two-stage detectors, such as Faster R-CNN, use a region proposal network (RPN) to first identify high-probability object locations within an image, which are then processed by a convolutional network for detection and localization [55]. In contrast, single-stage detectors, such as YOLO and SSD, perform both identification and localization directly through a single convolutional network [2]. A key advantage of RPN-based models is their ability to detect small objects more effectively, whereas earlier YOLO versions and SSD could struggle with smaller objects, though recent YOLO releases have substantially improved small-object detection performance [6]. However, the complex structure of RPN-based models leads to longer processing times, making them less suitable for real-time applications. As a result, YOLO and SSD models are generally preferred for real-time or near-real-time applications, as they offer faster processing speeds and can be efficiently deployed on edge-computing devices [6,41]. However, performance trade-offs between speed and accuracy vary across models and use cases. A comparative study evaluated Faster R-CNN and SSD models based on IoU scores and inference speed, revealing that Faster R-CNN achieved higher IoU scores while maintaining a comparable inference time to SSD [55]. In another study, YOLOv3 and Faster R-CNN were trained to perform weed detection in vegetable crops; the former outperformed the latter, achieving higher accuracy and significantly faster inference times [41]. Additionally, RMS-DETR, a multi-scale feature-enhanced detection transformer, was developed to improve the detection of small, occluded, and densely distributed weeds in rice fields [48]. By integrating multi-scale feature extraction and partial convolution, the model achieved higher accuracy and faster inference compared to traditional DETR-based models. Lightweight CNN-based models, including SSD, YOLOv8n, and a newly proposed MobileNetV4-Seg, were developed and deployed on Jetson Nano and Jetson Orin Nano platforms for real-time weed detection in corn and soybean fields. Among these architectures, MobileNetV4-Seg achieved the highest performance, with IoU scores of 69.9% and 76.8% and F1 scores of 82.3% and 86.9% for the corn and soybean datasets, respectively, while maintaining a real-time inference speed of 44 frames per second (FPS) [6].
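To show the single-stage workflow in practice, the following sketch runs inference with the ultralytics package (assumed installed); "yolov8n.pt" and "field.jpg" are generic placeholders, and a weed detector would first be fine-tuned on annotated crop/weed images rather than used off the shelf.

    from ultralytics import YOLO  # assumes the ultralytics package is installed

    # Load a small pretrained YOLO model and run one frame through it.
    model = YOLO("yolov8n.pt")
    results = model("field.jpg", conf=0.25)   # per-box confidence threshold

    for box in results[0].boxes:
        cls_id = int(box.cls)                      # predicted class index
        x1, y1, x2, y2 = box.xyxy[0].tolist()      # bounding-box corners
        print(cls_id, float(box.conf), (x1, y1, x2, y2))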

3.4.2. Semantic Segmentation

Semantic segmentation is a computer vision task that classifies each pixel in an image into a specific category or class. Unlike object detection, which identifies objects and locates them using bounding boxes, semantic segmentation models provide a more detailed understanding by delineating the shape and outline of objects during detection [2,6,43]. Several weed detection models have been trained using popular semantic segmentation models such as SegNet, UNet, and LinkNet [57]. Most semantic segmentation models consist of an encoder–decoder architecture comprising convolutional, pooling, and transposed convolutional layers [58]. The encoder extracts key features from the input image using convolutional filters and reduces the spatial dimensions through pooling layers, creating a condensed representation. The decoder then uses transposed convolutional layers to reconstruct this condensed representation back to the original image size for prediction. Different CNN architectures, such as DenseNet, VGG, EfficientNet, ResNet, ResNeXt, MobileNet, or Inception, can be used as the encoder backbone to perform feature extraction [2]. A comparative evaluation was carried out between two semantic segmentation models, UNet and SegNet, for weed detection in canola fields, with two backbones, ResNet-50 and VGG16, also assessed [49]. The SegNet model with a VGG16 backbone achieved the better mean IoU score of 0.829. In another study, a UNet model employing InceptionV3 as the encoder and enhanced with data augmentation techniques achieved a mean IoU of 88.9% [50].
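A minimal encoder–decoder sketch follows, assuming the segmentation_models_pytorch package; the backbone choice mirrors the designs above, and the input is a dummy tile rather than real field imagery.

    import torch
    import segmentation_models_pytorch as smp  # assumed installed

    # UNet with a ResNet-50 encoder pretrained on ImageNet; two classes
    # (background vs. weed) for pixel-wise prediction.
    model = smp.Unet(encoder_name="resnet50", encoder_weights="imagenet",
                     in_channels=3, classes=2)

    x = torch.randn(1, 3, 256, 256)      # one dummy RGB tile
    with torch.no_grad():
        logits = model(x)                # (1, 2, 256, 256) per-pixel scores
    mask = logits.argmax(dim=1)          # pixel-wise class map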
Recently, ViT models have emerged as an alternative to CNNs for feature extraction in semantic segmentation tasks. Unlike CNNs, which use localized convolutional filters, ViTs employ self-attention mechanisms to capture global relationships across the entire image. This architecture is particularly advantageous in scenarios with limited labeled datasets, as it enables better generalization through contextual learning [46]. A transformer-based semantic segmentation model integrating multi-scale feature extraction, global response normalization, and residual attention mechanisms was developed for weed detection [52]. This design improves efficiency by reducing training parameters by 25.0%, while achieving an accuracy of 97.0% and a mean IoU of 94.1%. In another study, multiple semantic segmentation models (UNet++, DeepLabV3+, PSPNet, and MANet) were trained using vision transformer-based backbones in the encoder section for invasive grass species detection, achieving a mean IoU score of 90.0% across eight different vegetation species [43]. Similarly, the ViT B-16 model was explored for differentiating between weeds and crops in high-resolution UAV imagery, leveraging the self-attention mechanism of transformers and demonstrating superior performance compared to traditional CNN models such as EfficientNet and ResNet, particularly in scenarios with limited labeled datasets [59].

3.4.3. Instance Segmentation

Instance segmentation is a DL technique that performs pixel-wise detection and segmentation of individual objects within an image while distinguishing between multiple instances of the same category. One of the most widely used models for instance segmentation is Mask R-CNN [53]. This model enhances Faster R-CNN by incorporating an additional branch dedicated to mask prediction. Along with detecting objects and drawing bounding boxes, Mask R-CNN generates pixel-level segmentation masks for each detected instance [54]. In one study, a Mask R-CNN model was developed and tested using twelve different backbone architectures from the ResNet family, all pre-trained on the COCO dataset [53]. The model's effectiveness in detecting weeds within sugarcane fields was assessed using performance metrics including mAP50, recall, precision, and F1-score. Among the evaluated configurations, the Mask R-CNN model with a ResNet101 backbone achieved the highest performance, recording an mAP50 of 65.5% on the test set, which reflects a satisfactory level of accuracy for weed detection. An improved version of the Mask R-CNN model was developed by integrating a convolutional block attention module to enhance performance in complex agricultural settings [54]. This enhancement enabled the network to focus more precisely on salient features, resulting in improved detection accuracy. Compared to standard UNet and Mask R-CNN architectures, the modified model achieved superior results, with a mAP of 91.9% and a mean IoU of 0.768, demonstrating its enhanced robustness and precision under challenging conditions. Despite these promising results, instance segmentation is less commonly employed in practical weed detection applications than object detection and semantic segmentation, as the additional computational complexity may not always justify the marginal gains in accuracy in many agricultural scenarios [2].
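For illustration, torchvision ships a COCO-pretrained Mask R-CNN that returns boxes, labels, scores, and a soft mask per instance; the sketch below uses a dummy image, and a weed application would fine-tune on instance-annotated field data.

    import torch
    from torchvision.models.detection import maskrcnn_resnet50_fpn

    # COCO-pretrained Mask R-CNN in inference mode.
    model = maskrcnn_resnet50_fpn(weights="DEFAULT").eval()

    image = torch.rand(3, 512, 512)      # dummy RGB image with values in [0, 1]
    with torch.no_grad():
        out = model([image])[0]

    masks = out["masks"]                 # (N, 1, H, W) soft mask per instance
    keep = out["scores"] > 0.5           # drop low-confidence instances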

3.5. Comparative Evaluation of Classical Machine Learning and Deep Learning Approaches for Weed Detection

The evolution of weed detection systems has progressed from classical ML algorithms toward more advanced DL models capable of real-time, high-accuracy performance in the field. While both paradigms have contributed significantly to SSWM, they differ considerably in terms of computational complexity, scalability, and robustness under diverse agricultural conditions. ML models, such as KNN, SVM, DT, RF, and XGBoost, rely on manually engineered features to distinguish crops from weeds [26,32,33,37,39]. These models offer the benefits of simplicity, lower computational requirements, and interpretability, often achieving acceptable accuracies. However, their dependency on handcrafted features makes them highly sensitive to environmental variability, including illumination changes, weed density, occlusion, and soil background noise. Consequently, their transferability across regions and crops remains limited, and their performance tends to deteriorate under complex field scenarios.
While DL methods represent an evolution within the broader ML domain, their automation of feature learning and scalability has made them the current standard for high-accuracy, real-time weed detection. Architectures such as region-based CNNs, single-stage detectors, encoder–decoder networks, and ViTs have demonstrated remarkable improvements in accuracy and generalization [2,6,43,46,48,49,51]. Notably, the emergence of the latest YOLO architectures has greatly enhanced the real-time capability of DL-based systems [6,42]. These models integrate advanced feature pyramids, decoupled heads, and lightweight attention mechanisms that significantly reduce inference time while maintaining high precision. As a result, modern YOLO variants can be deployed on edge devices like NVIDIA Jetson or Coral TPU, bridging the gap between research-grade and deployable real-time systems in agricultural fields.
Despite these advancements, DL models still face several challenges. They demand large annotated datasets, require expensive GPU resources, and may produce errors in scenarios where crops and weeds share similar spectral or morphological features [2,6,43]. Overfitting and reduced transparency (the “black-box” issue) remain key barriers to widespread adoption. On the other hand, ML models, while less accurate, are lightweight, explainable, and suitable for low-resource settings. A hybrid approach leveraging ML for rapid pre-screening and DL for fine-grained classification could optimize performance while balancing speed, interpretability, and computational demand. Future research should focus on developing energy-efficient, explainable, and adaptive DL models that maintain robustness across environmental and phenological variations.

4. Challenges and Opportunities of Real-Time Site-Specific Weed Management Applications Using Deep Learning

Developing and deploying DL models for SSWM presents both significant opportunities and critical challenges that must be addressed to enable practical field applications. While DL architectures have demonstrated remarkable capabilities in weed detection, their practical deployment in real-world agricultural systems requires careful consideration of multiple factors. Object detection models, such as YOLO and SSD, are designed for low-latency, real-time applications like on-the-go weed detection and herbicide applications [55]. However, these models rely on bounding boxes for weed detection, which may slightly reduce precision compared to pixel-level classification. In contrast, semantic segmentation models classify every pixel in an image, offering greater accuracy and precise weed identification, making them ideal for spot spraying [1,2,6,43]. Despite their accuracy, these models often experience higher latency, limiting their real-time application. Similarly, instance segmentation models, such as Mask R-CNN, not only classify individual pixels but also differentiate between multiple weed instances, providing detailed segmentation results. However, their computational demands result in longer inference times, making them more suitable for offline analysis than real-time deployment [41]. Additionally, self-attention mechanisms enhance a model’s ability to handle occlusion, lighting variations, and scale differences, making them particularly beneficial for SSWM [43]. As larger annotated datasets become more accessible, transformer-based architectures are expected to play a crucial role in advancing real-time weed detection and performing SSWM tasks. The following subsections examine key challenges related to data availability and real-time deployment, along with emerging opportunities and solutions that address these limitations, including strategies for efficient model development, computational optimization, and practical system integration.

4.1. Overcome Limited Training Data for SSWM Models

A fundamental challenge in developing DL models for SSWM is the requirement for large, high-quality annotated datasets, yet collecting such datasets under varied field conditions is time-consuming, expensive, and labor-intensive. Moreover, differences in lighting, soil background, growth stages, and weed-crop interactions make data acquisition and annotation even more complex. These data limitations often restrict model generalization, leading to inconsistent performance when models are applied beyond the training environment. However, recent advances in transfer learning, data augmentation, and collaborative data sharing offer promising pathways to overcome these barriers and accelerate robust model development.

4.1.1. Transfer Learning and Data Augmentation

A significant obstacle in developing DL models is their typical requirement for extensive datasets during the training phase, with model accuracy generally improving as training image quantities increase [42]. To address limited data availability when training these models, researchers commonly employ transfer learning methodologies and data augmentation strategies as effective workarounds [2,43,58].
Transfer learning uses knowledge gained from a task with abundant training data to solve a related task with limited data [2]. This approach allows models to leverage pre-trained features, reducing the need for large datasets and extensive training, making it especially useful for applications where collecting new data is challenging [41]. Transfer learning techniques were used to develop weed detection models, with performance comparisons made between models built on MobileNet and Inception architectures already trained on ImageNet datasets [2,58]. In one study, weed detection models were developed using combinations of pre-trained convolutional architectures, such as Xception, Inception-ResNet, MobileNet, and DenseNet, with traditional ML models such as SVMs and LoR [47]. Additionally, semantic segmentation models employing ViT-based backbones for encoder-level feature extraction have demonstrated effective transfer learning capabilities in invasive grass species classification through attention-based modeling of global image relationships [43].
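A minimal PyTorch sketch of the pattern: reuse ImageNet features from a pretrained MobileNetV2 and train only a new head on a small crop/weed dataset. The two-class setup is an illustrative assumption, not a configuration from the cited studies.

    import torch
    import torch.nn as nn
    from torchvision.models import mobilenet_v2

    # Load ImageNet-pretrained weights and freeze the feature extractor.
    model = mobilenet_v2(weights="DEFAULT")
    for p in model.parameters():
        p.requires_grad = False

    # Replace the final classifier layer with a new crop-vs-weed head.
    num_classes = 2
    model.classifier[1] = nn.Linear(model.last_channel, num_classes)

    # Only the new head's parameters are trained.
    optimizer = torch.optim.Adam(model.classifier[1].parameters(), lr=1e-3)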
Data augmentation is a technique used to artificially expand the training dataset by generating modified versions of existing data or creating synthetic samples from it [1,55]. This is achieved through various transformations, such as adjusting color intensities, applying random rotations, flipping, and cropping. Since DL models rely on features extracted from training images, augmenting the dataset significantly enhances model training by increasing the number of available samples. These augmentation methods have been shown to improve the robustness of CNN architectures, which are commonly used for feature extraction in DL [51]. In one study, data augmentation techniques including brightness adjustments, random rotations and flips, and added color variations were used to expand the original dataset, contributing to improved model accuracy [41].
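The transformations mentioned above map directly onto a standard torchvision pipeline; specific parameter values here are illustrative choices only.

    from torchvision import transforms

    # Crop, flip, rotate, and jitter colors; each training image yields a
    # new random variant every epoch, effectively expanding the dataset.
    augment = transforms.Compose([
        transforms.RandomResizedCrop(224),
        transforms.RandomHorizontalFlip(),
        transforms.RandomRotation(degrees=15),
        transforms.ColorJitter(brightness=0.3, contrast=0.3, saturation=0.3),
        transforms.ToTensor(),
    ])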

4.1.2. Publishing and Sharing Data Sets to Develop Powerful Models

As datasets are essential for training DL-based weed detection models, the current tendency of studies to rely on independent or unpublished datasets hinders and complicates the practice of real-time SSWM. Hence, developing and expanding publicly accessible datasets will not only enable more robust models but also provide a platform for a broader audience to explore new DL models on the available data. Table 3 highlights several such datasets that provide opportunities for benchmarking, collaboration, and improving model performance. These open datasets facilitate the development of robust, high-accuracy models that generalize across diverse field conditions, and they enable consistent performance comparison among different approaches. By leveraging them, future research can focus on balancing speed, accuracy, and computational efficiency, contributing to more effective weed management solutions.

4.2. Real-Time Deployment and System Integration of SSWM Models

A fundamental challenge in real-time SSWM deployment is that DL detection algorithms often exceed the computational capabilities of the edge devices used in agricultural machinery. Although DL methods excel in weed detection tasks, they are computationally demanding and require considerable memory and power resources. Edge devices such as the Raspberry Pi, Orange Pi, and NVIDIA Jetson are compact and suitable for deployment on tractors, UGVs, and UAVs, but are typically constrained in processing power, memory, and storage. For these reasons, implementing DL-based weed detection models for real-time SSWM applications remains technically challenging.
To address these computational constraints, several potential solutions are proposed and discussed: leveraging cloud computing with high-bandwidth IoT connectivity to offload intensive processing tasks, advancing edge device capabilities with more powerful hardware, and developing faster, more efficient model architectures. The following subsections examine these approaches and their integration with agricultural machinery for practical SSWM implementation.

4.2.1. High-Bandwidth Connectivity and Cloud Computing for SSWM

One potential solution to the high computational requirements of DL models, which perform millions of computations during the training and inference phases, is cloud computing. Cloud computing is a model for enabling convenient, on-demand network access to a shared pool of configurable computing resources that can be rapidly provisioned and released with minimal management effort or service provider interaction [66]. It enables offloading computationally intensive tasks, such as data preprocessing and weed map generation, to remote servers via IoT connectivity, overcoming the high computational complexity of computer vision algorithms for SSWM through on-demand access to scalable computing resources [62]. Platforms such as Amazon Web Services, Microsoft Azure, Google Cloud Platform, and Alibaba Cloud have popularized cloud-based technologies in the recent past [42,67].
A cloud-based AI application was developed to process RS imagery from airplanes, UAVs, and satellites for tasks including plant height measurement, canopy analysis, and weed detection [67]. In this study, a LinkNet model with a ResNet34 backbone was employed to develop a parthenium weed detection model, achieving an average accuracy of 59.8% within 0.217 s. Similarly, an IoT-based system was implemented in which field images are transmitted to a cloud server for weed detection using the YOLOv5 DL model [68]. However, the use of cloud-based technologies for real-time SSWM applications faces several challenges [42]:
(1) Latency: Cloud-based weed detection requires exchanging both the input data generated locally in the fields and the weed maps generated after processing between the cloud and in-field sensors to perform real-time SSWM applications. Strong, constant network connectivity is therefore vital, and requests may face additional queuing on cloud platforms. These issues can introduce network latency and delay decision-making for real-time weeding applications.
(2) Scalability: Data generated by in-field sensors must be shared with the cloud regularly, and transferring large volumes of data in a short time can be challenging. Uploading high-resolution imagery or streaming video to the cloud consumes excessive bandwidth and can lead to scalability issues, which worsen when multiple cameras share data concurrently.
(3) Privacy and Data Security: There are risks associated with data leakage or the compromise of personal data stored in the cloud, as well as potential misuse of sensitive uploaded information by cloud providers. Users need to be wary of the privacy implications of the information they share with the cloud.
While advancements in 5G networks, satellite internet, and high-bandwidth IoT infrastructure may improve connectivity and reduce latency in the future, the fundamental dependency on network infrastructure and data transfer speed remains a significant limitation for cloud-based real-time SSWM, particularly in remote or connectivity-limited agricultural regions. Although technologies like Starlink satellite internet mounted on agricultural equipment offer potential solutions for remote-area connectivity, their practical implementation for real-time SSWM applications has yet to be demonstrated in operational field settings and can be explored in the future.

4.2.2. Advancing Edge Computing Capabilities and Model Efficiency for Machinery Integration

Edge computing has emerged as a practical solution to address the limitations of cloud-based systems. By performing computations locally at the edge of the network, edge computing reduces latency, enhances scalability, and mitigates privacy risks [69,70]. Recent advancements in GPU-accelerated edge devices, such as the NVIDIA Jetson, Google Coral, and Intel Neural Compute Stick, have enabled the deployment of DL models directly on edge devices with significantly enhanced computational capabilities [6,71]. Edge devices are compact computing units that integrate hardware and software components to execute specific functions and can be mounted on agricultural platforms to perform real-time SSWM tasks [2]. To perform real-time weeding operations, edge devices control sensors and cameras to acquire field data, pre-process the information to generate weed density maps, and activate actuators, including spray nozzles, electrical discharge units, or mechanical tools attached to agricultural equipment, to eliminate weeds [15,16,24,27].
Despite their strong potential, deploying DL models on edge devices presents significant challenges. Although DL methods excel at tasks such as object detection and segmentation, they are computationally demanding and require considerable memory and power resources. These requirements often exceed the capabilities of edge devices, which are typically constrained in processing power, memory, and storage. Consequently, implementing DL-based weed detection models for real-time SSWM applications remains a technically complex and resource-intensive task.
To help address these constraints, software development kits (SDKs) such as TensorRT (NVIDIA) and TensorFlow Lite (Google) optimize DL models for edge deployment [70]. TensorRT, for instance, performs transformations such as constant folding, layer fusion, and graph pruning to reduce computational and memory requirements while maintaining performance. In one study, TensorRT optimization on an NVIDIA Jetson Nano achieved a 14.7% reduction in inference time, albeit with a 14.8% reduction in mean IoU score. This trade-off between accuracy and inference time highlights the need for further research to balance performance and efficiency in edge-based DL models [71]. Furthermore, the continuous evolution of detection architectures, particularly the YOLO family of models, has produced progressively faster and more efficient architectures designed specifically for real-time applications [6,70]. These newer models achieve smaller computational footprints and reduced memory requirements while maintaining competitive detection accuracy, making them increasingly suitable for deployment on resource-constrained edge devices. The trend toward developing smaller yet more powerful models continues to narrow the gap between detection performance requirements and edge device capabilities [42,45].
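As a brief illustration of such SDK-based optimization, the sketch below applies post-training float16 quantization with TensorFlow Lite to a trained Keras model; the model path is a placeholder, and this is only one of several techniques (alongside pruning and layer fusion) that these toolkits offer.

```python
# Post-training float16 quantization with TensorFlow Lite.
# The model path is a hypothetical placeholder.
import tensorflow as tf

model = tf.keras.models.load_model("weed_segmenter.h5")  # trained Keras model

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]      # enable quantization
converter.target_spec.supported_types = [tf.float16]      # float16 weights

tflite_model = converter.convert()
with open("weed_segmenter_fp16.tflite", "wb") as f:
    f.write(tflite_model)
```

Float16 quantization roughly halves model size and often accelerates inference on edge GPUs, typically at a small accuracy cost, mirroring the speed-accuracy trade-off noted above.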
The practical success of SSWM systems depends on the coordinated operation of weed detection algorithms and the mechanical units responsible for weed control. While significant technological advancements have been achieved in the design and performance of detection algorithms, their true impact is measured by how well these developments enhance overall agricultural productivity and sustainability. Different categories of machinery, such as ground-based autonomous platforms, tractor-mounted implements, and aerial systems, impose distinct requirements on detection models. Ground-based systems generally demand high spatial precision and fine-scale segmentation outputs to enable accurate mechanical or localized chemical removal at close range. Tractor-mounted sprayers, operating at higher field speeds, require models capable of real-time inference and rapid decision-making to synchronize with nozzle actuation and maintain treatment accuracy. Smart spraying systems, such as John Deere’s (Moline, IL, USA) “See & Spray” technology, have demonstrated herbicide savings of approximately 70–90%, highlighting the practical benefits of integrating DL-based weed detection with intelligent actuation mechanisms. In contrast, aerial platforms, especially UAVs, usually operate at high speeds with limited payload capacity, favoring models that balance detection accuracy, inference time, and computational requirements.
The overall efficiency of intelligent weed detection systems depends on aligning model performance metrics, such as inference speed, spatial resolution, detection accuracy, and robustness under varying lighting or canopy conditions, with machinery characteristics such as travel speed, working width, and actuation delay. Establishing this alignment allows detection models to maintain consistent performance when moving from controlled environments to real-world field operations. Models optimized for real-time processing enable rapid coverage of extensive field areas while preserving high detection reliability, ensuring timely and accurate weed management. In addition, integrating cloud-edge hybrid frameworks enhances scalability and responsiveness: data-intensive processes such as model updates are managed in the cloud, whereas time-critical decisions, including real-time weed mapping, nozzle actuation, and robotic control, are handled locally on edge devices. This coordinated approach improves operational efficiency, precision, and adaptability across diverse agricultural settings.
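The timing relationship described above can be made explicit with a simple calculation; the inference time, actuation delay, and ground speed below are illustrative assumptions, not measurements from any cited system.

```python
# Matching detection latency to sprayer travel speed (illustrative values).

INFERENCE_S = 0.050   # model inference time per frame (50 ms, assumed)
ACTUATION_S = 0.100   # nozzle valve opening delay (100 ms, assumed)
SPEED_MPS = 3.0       # tractor ground speed, ~10.8 km/h

# The sprayer travels this far between image capture and spray release,
# so the camera must be mounted at least this far ahead of the nozzle.
lookahead_m = SPEED_MPS * (INFERENCE_S + ACTUATION_S)
print(f"required camera-to-nozzle look-ahead: {lookahead_m:.2f} m")

# Conversely, with a fixed 0.30 m look-ahead, the maximum safe speed is:
max_speed = 0.30 / (INFERENCE_S + ACTUATION_S)
print(f"max ground speed for 0.30 m look-ahead: {max_speed:.1f} m/s")
```

Calculations of this kind show why faster inference directly translates into higher permissible field speeds or more compact camera-to-nozzle geometries.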

5. Summary and Conclusions

Weed management plays a crucial role in crop production practices. The global increase in herbicide usage for weed control has led to several challenges, including reduced crop quality, economic losses due to excessive off-target herbicide application, and the emergence of herbicide-resistant weeds. SSWM strategies involve applying precise amounts of herbicide only in areas where weed infestations are detected. As many studies have explored the development of SSWM technologies in agriculture, this review consolidates and evaluates a wide range of image processing, machine vision, ML, and DL-based computer vision approaches recently applied to weed detection. The paper offers researchers a comprehensive perspective on the evolving strategies for weed management and highlights the pivotal role of computer vision technologies in advancing site-specific weed detection and control.
There is significant potential for research in optimizing DL models for real-time SSWM applications on edge devices using SDKs such as TensorFlow Lite and TensorRT. While optimized DL models can achieve faster inference times, they often sacrifice some prediction accuracy in return. Cloud computing infrastructure offers complementary opportunities for SSWM by enabling the processing of large-scale field data, training sophisticated models with extensive datasets, and supporting PA tasks that do not require immediate responses, such as field mapping, seasonal weed distribution analysis, and long-term management planning. Future studies should further explore the synchronization of weed detection model performance metrics with the mechanical response characteristics of sprayers, robotic weeders, and UAV systems to improve the overall practicality of SSWM implementation. Additionally, the development of large, publicly available datasets will be the next milestone in weed detection and management research and practice.

Author Contributions

Conceptualization, P.S., B.Z. and Y.S.; writing—original draft preparation, P.S.; methodology, P.S., B.Z. and Y.S.; validation, B.Z. and Y.S.; formal analysis, P.S. and B.Z.; resources, Y.S.; writing—review and editing, P.S., B.Z. and Y.S.; supervision, B.Z. and Y.S.; funding acquisition, Y.S.; project administration, Y.S. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by the grant titled An Intelligent Unmanned Aerial Application System for Site-Specific Weed Management under AFRI Foundational and Applied Science Program of the United States Department of Agriculture (USDA award No. 2021-67021-34412).

Data Availability Statement

No new data were created or analyzed in this study. Data sharing is not applicable to this article.

Acknowledgments

During the preparation of this manuscript, Grammarly (version: 1.2.202) was used to correct grammar, enhance readability, and ensure a smooth flow of information. The authors have reviewed and edited the output and take full responsibility for the content of this publication.

Conflicts of Interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Abbreviations

The following abbreviations are used in this manuscript:
SSWM: Site-Specific Weed Management
PA: Precision Agriculture
ML: Machine Learning
DL: Deep Learning
RS: Remote Sensing
AI: Artificial Intelligence
PRISMA: Preferred Reporting Items for Systematic Reviews and Meta-Analyses
CNN: Convolutional Neural Network
UAV: Unmanned Aerial Vehicle
UGV: Unmanned Ground Vehicle
KNN: K-Nearest Neighbor
CART: Classification and Regression Tree
SVM: Support Vector Machine
ANN: Artificial Neural Network
DT: Decision Tree
LoR: Logistic Regression
RF: Random Forest
FFT: Fast Fourier Transform
XGBoost: Extreme Gradient Boosting
LSTM: Long Short-Term Memory
GAN: Generative Adversarial Network
FCN: Fully Convolutional Network
FPS: Frames per Second
RNN: Recurrent Neural Network
GPU: Graphics Processing Unit
R-CNN: Region-based Convolutional Neural Network
YOLO: You Only Look Once
RPN: Region Proposal Network
IoU: Intersection over Union
IoT: Internet of Things
SDK: Software Development Kit
SSD: Single Shot Detector
ViT: Vision Transformer
NIR: Near-Infrared

References

  1. Wang, D.; Cao, W.; Zhang, F.; Li, Z.; Xu, S.; Wu, X. A Review of Deep Learning in Multiscale Agricultural Sensing. Remote Sens. 2022, 14, 559. [Google Scholar] [CrossRef]
  2. Singh, P. Semantic Segmentation Based Deep Learning Approaches for Weed Detection. Master’s Thesis, University of Nebraska-Lincoln, Lincoln, NE, USA, 16 December 2022. Available online: https://digitalcommons.unl.edu/biosysengdiss/137/ (accessed on 25 September 2025).
  3. Hasan, A.M.; Sohel, F.; Diepeveen, D.; Laga, H.; Jones, M.G. A survey of deep learning techniques for weed detection from images. Comput. Electron. Agric. 2021, 184, 106067. [Google Scholar] [CrossRef]
  4. Heap, I. Global perspective of herbicide-resistant weeds. Pest Manag. Sci. 2014, 70, 1306–1315. [Google Scholar] [CrossRef]
  5. Zhang, N.; Wang, M.; Wang, N. Precision agriculture—A worldwide overview. Comput. Electron. Agric. 2002, 36, 113–132. [Google Scholar] [CrossRef]
  6. Islam, M.D.; Liu, W.; Izere, P.; Singh, P.; Yu, C.; Riggan, B.; Zhang, K.; Jhala, A.J.; Knezevic, S.; Ge, Y.; et al. Towards real-time weed detection and segmentation with lightweight CNN models on edge devices. Comput. Electron. Agric. 2025, 237, 110600. [Google Scholar] [CrossRef]
  7. Feyaerts, F.; Van Gool, L. Multi-spectral vision system for weed detection. Pattern Recognit. Lett. 2001, 22, 667–674. [Google Scholar] [CrossRef]
  8. Okamoto, H.; Murata, T.; Kataoka, T.; Hata, S.I. Plant classification for weed detection using hyperspectral imaging with wavelet analysis. Weed Biol. Manag. 2007, 7, 31–37. [Google Scholar] [CrossRef]
  9. Piron, A.; Leemans, V.; Lebeau, F.; Destain, M.F. Improving in-row weed detection in multispectral stereoscopic images. Comput. Electron. Agric. 2009, 69, 73–79. [Google Scholar] [CrossRef]
  10. Ishak, A.J.; Mustafa, M.M.; Tahir, N.M.; Hussain, A. Weed detection system using support vector machine. In Proceedings of the 2008 International Symposium on Information Theory and Its Applications (ISITA), Auckland, New Zealand, 7–10 December 2008; pp. 1–4. [Google Scholar] [CrossRef]
  11. Zhao, B.; Li, J.; Baenziger, P.S.; Belamkar, V.; Ge, Y.; Zhang, J.; Shi, Y. Automatic wheat lodging detection and mapping in aerial imagery to support high-throughput phenotyping and in-season crop management. Agronomy 2020, 10, 1762. [Google Scholar] [CrossRef]
  12. Christensen, S.; Søgaard, H.T.; Kudsk, P.; Nørremark, M.; Lund, I.; Nadimi, E.S.; Jørgensen, R. Site-specific weed control technologies. Weed Res. 2009, 49, 233–241. [Google Scholar] [CrossRef]
  13. Vrindts, E.; De Baerdemaeker, J.; Ramon, H. Weed detection using canopy reflection. Precis. Agric. 2002, 3, 63–80. [Google Scholar] [CrossRef]
  14. Shankar, R.H.; Veeraraghavan, A.K.; Sivaraman, K.; Ramachandran, S.S. Application of UAV for pest, weeds and disease detection using open computer vision. In Proceedings of the 2018 International Conference on Smart Systems and Inventive Technology (ICSSIT), Tirunelveli, India, 13–14 December 2018; pp. 287–292. [Google Scholar] [CrossRef]
  15. Blasco, J.; Aleixos, N.; Roger, J.M.; Rabatel, G.; Moltó, E. Robotic weed control using machine vision. Biosyst. Eng. 2002, 83, 149–157. [Google Scholar] [CrossRef]
  16. Raja, R.; Nguyen, T.T.; Slaughter, D.C.; Fennimore, S.A. Real-time robotic weed knife control system for tomato and lettuce based on geometric appearance of plant labels. Biosyst. Eng. 2020, 194, 152–164. [Google Scholar] [CrossRef]
  17. López-Granados, F. Weed detection for site-specific weed management: Mapping and real-time approaches. Weed Res. 2011, 51, 1–11. [Google Scholar] [CrossRef]
  18. Murad, N.Y.; Mahmood, T.; Forkan, A.R.M.; Morshed, A.; Jayaraman, P.P.; Siddiqui, M.S. Weed detection using deep learning: A systematic literature review. Sensors 2023, 23, 3670. [Google Scholar] [CrossRef]
  19. Liakos, K.; Busato, P.; Moshou, D.; Pearson, S.; Bochtis, D. Machine learning in agriculture: A review. Sensors 2018, 18, 2674. [Google Scholar] [CrossRef]
  20. Alchanatis, V.; Ridel, L.; Hetzroni, A.; Yaroslavsky, L. Weed detection in multi-spectral images of cotton fields. Comput. Electron. Agric. 2005, 47, 243–260. [Google Scholar] [CrossRef]
  21. Sa, I.; Chen, Z.; Popović, M.; Khanna, R.; Liebisch, F.; Nieto, J.; Siegwart, R. WeedNet: Dense semantic weed classification using multispectral images and MAV for smart farming. IEEE Robot. Autom. Lett. 2017, 3, 588–595. [Google Scholar] [CrossRef]
  22. Shahbazi, N.; Ashworth, M.B.; Callow, J.N.; Mian, A.; Beckie, H.J.; Speidel, S.; Nicholls, E.; Flower, K.C. Assessing the capability and potential of LiDAR for weed detection. Sensors 2021, 21, 2328. [Google Scholar] [CrossRef]
  23. Forero, M.G.; Herrera-Rivera, S.; Ávila-Navarro, J.; Franco, C.A.; Rasmussen, J.; Nielsen, J. Color classification methods for perennial weed detection in cereal crops. In Proceedings of the Iberoamerican Congress on Pattern Recognition, Madrid, Spain, 14–17 November 2018; Springer: Cham, Switzerland, 2018; pp. 117–123. [Google Scholar] [CrossRef]
  24. Bak, T.; Jakobsen, H. Agricultural robotic platform with four-wheel steering for weed detection. Biosyst. Eng. 2004, 87, 125–136. [Google Scholar] [CrossRef]
  25. Jordan, M.I.; Mitchell, T.M. Machine learning: Trends, perspectives, and prospects. Science 2015, 349, 255–260. [Google Scholar] [CrossRef]
  26. Lin, F.; Zhang, D.; Huang, Y.; Wang, X.; Chen, X. Detection of corn and weed species by the combination of spectral, shape and textural features. Sustainability 2017, 9, 1335. [Google Scholar] [CrossRef]
  27. Rani, S.V.; Kumar, P.S.; Priyadharsini, R.; Srividya, S.J.; Harshana, S. An automated weed detection system in smart farming for developing sustainable agriculture. Int. J. Environ. Sci. Technol. 2022, 19, 9083–9094. [Google Scholar] [CrossRef]
  28. Sukumar, P.; Ravi, D.S. Weed detection using image processing by clustering analysis. Int. J. Emerg. Technol. Eng. Res. 2016, 4, 14–18. [Google Scholar]
  29. Zhang, X.; Li, X.; Zhang, B.; Zhou, J.; Tian, G.; Xiong, Y.; Gu, B. Automated robust crop-row detection in maize fields based on position clustering algorithm and shortest path method. Comput. Electron. Agric. 2018, 154, 165–175. [Google Scholar] [CrossRef]
  30. Jensen, S.M.; Akhter, M.J.; Azim, S.; Rasmussen, J. The predictive power of regression models to determine grass weed infestations in cereals based on drone imagery—Statistical and practical aspects. Agronomy 2021, 11, 2277. [Google Scholar] [CrossRef]
  31. Andújar, D.; Weis, M.; Gerhards, R. An ultrasonic system for weed detection in cereal crops. Sensors 2012, 12, 17343–17357. [Google Scholar] [CrossRef]
  32. Yano, I.H.; Mesa, N.F.O.; Santiago, W.E.; Aguiar, R.H.; Teruel, B. Weed identification in sugarcane plantations through images taken from remotely piloted aircraft (RPA) and KNN classifier. J. Food Nutr. Sci. 2017, 5, 211. [Google Scholar] [CrossRef]
  33. Bakhshipour, A.; Jafari, A. Evaluation of support vector machine and artificial neural networks in weed detection using shape features. Comput. Electron. Agric. 2018, 145, 153–160. [Google Scholar] [CrossRef]
  34. Ahmed, F.; Al-Mamun, H.A.; Bari, A.H.; Hossain, E.; Kwan, P. Classification of crops and weeds from digital images: A support vector machine approach. Crop Prot. 2012, 40, 98–104. [Google Scholar] [CrossRef]
  35. Waheed, T.; Bonnell, R.B.; Prasher, S.O.; Paulet, E. Measuring performance in precision agriculture: CART—A decision tree approach. Agric. Water Manag. 2006, 84, 173–185. [Google Scholar] [CrossRef]
  36. Sheffield, K.J.; Clements, D.; Clune, D.J.; Constantine, A.; Dugdale, T.M. Detection of Aquatic Alligator Weed (Alternanthera philoxeroides) from aerial imagery using random forest classification. Remote Sens. 2022, 14, 2674. [Google Scholar] [CrossRef]
  37. Dadashzadeh, M.; Abbaspour-Gilandeh, Y.; Mesri-Gundoshmian, T.; Sabzi, S.; Hernández-Hernández, J.L.; Hernández-Hernández, M.; Arribas, J.I. Weed classification for site-specific weed management using an automated stereo computer-vision machine-learning system in rice fields. Plants 2020, 9, 559. [Google Scholar] [CrossRef] [PubMed]
  38. Sabzi, S.; Abbaspour-Gilandeh, Y. Using video processing to classify potato plant and three types of weed using hybrid of artificial neural network and particle swarm algorithm. Measurement 2018, 126, 22–36. [Google Scholar] [CrossRef]
  39. Kamath, R.; Balachandra, M.; Prabhu, S. Crop and weed discrimination using Laws’ texture masks. Int. J. Agric. Biol. Eng. 2020, 13, 191–197. [Google Scholar] [CrossRef]
  40. Castillejo-González, I.L.; Pena-Barragán, J.M.; Jurado-Expósito, M.; Mesas-Carrascosa, F.J.; López-Granados, F. Evaluation of pixel-and object-based approaches for mapping wild oat (Avena sterilis) weed patches in wheat fields using QuickBird imagery for site-specific management. Eur. J. Agron. 2014, 59, 57–66. [Google Scholar] [CrossRef]
  41. Jin, X.; Sun, Y.; Che, J.; Bagavathiannan, M.; Yu, J.; Chen, Y. A novel deep learning-based method for detection of weeds in vegetables. Pest. Manag. Sci. 2022, 78, 1861–1869. [Google Scholar] [CrossRef] [PubMed]
  42. Chen, J.; Ran, X. Deep learning with edge computing: A review. Proc. IEEE 2019, 107, 1655–1674. [Google Scholar] [CrossRef]
  43. Singh, P.; Perez, M.A.; Donald, W.N.; Bao, Y. A comparative study of deep semantic segmentation and UAV-based multispectral imaging for enhanced roadside vegetation composition assessment. Remote Sens. 2025, 17, 1991. [Google Scholar] [CrossRef]
  44. Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A.N.; Kaiser, L.; Polosukhin, I. Attention is all you need. Adv. Neural Inf. Process. Syst. 2017, 30, 5998–6008. [Google Scholar] [CrossRef]
  45. Razfar, N.; True, J.; Bassiouny, R.; Venkatesh, V.; Kashef, R. Weed detection in soybean crops using custom lightweight deep learning models. J. Agric. Food Res. 2022, 8, 100308. [Google Scholar] [CrossRef]
  46. Dosovitskiy, A.; Beyer, L.; Kolesnikov, A.; Weissenborn, D.; Zhai, X.; Unterthiner, T.; Dehghani, M.; Minderer, M.; Heigold, G.; Gelly, S.; et al. An image is worth 16×16 words: Transformers for image recognition at scale. arXiv 2020, arXiv:2010.11929. [Google Scholar] [CrossRef]
  47. Espejo-Garcia, B.; Mylonas, N.; Athanasakos, L.; Fountas, S.; Vasilakoglou, I. Towards weeds identification assistance through transfer learning. Comput. Electron. Agric. 2020, 171, 105306. [Google Scholar] [CrossRef]
  48. Guo, Z.; Cai, D.; Zhou, Y.; Xu, T.; Yu, F. Identifying rice field weeds from unmanned aerial vehicle remote sensing imagery using deep learning. Plant Methods 2024, 20, 105. [Google Scholar] [CrossRef] [PubMed]
  49. Asad, M.H.; Bais, A. Weed detection in canola fields using maximum likelihood classification and deep convolutional neural network. Inf. Process. Agric. 2020, 7, 535–545. [Google Scholar] [CrossRef]
  50. Boyina, L.; Sandhya, G.; Vasavi, S.; Koneru, L.; Koushik, V. Weed Detection in Broad Leaves using Invariant U-Net Model. In Proceedings of the 2021 International Conference on Communication, Control and Information Sciences (ICCISc), Chennai, India, 16–18 June 2021; pp. 1–4. [Google Scholar] [CrossRef]
  51. Wang, A.; Xu, Y.; Wei, X.; Cui, B. Semantic segmentation of crop and weed using an encoder-decoder network and image enhancement method under uncontrolled outdoor illumination. IEEE Access 2020, 8, 81724–81734. [Google Scholar] [CrossRef]
  52. Sun, C.; Zhang, M.; Zhou, M.; Zhou, X. An improved transformer network with multi-scale convolution for weed identification in sugarcane field. IEEE Access 2024, 12, 31168–31181. [Google Scholar] [CrossRef]
  53. Mini, G.A.; Sales, D.O.; Luppe, M. Weed segmentation in sugarcane crops using Mask R-CNN through aerial images. In Proceedings of the 2020 International Conference on Computational Science and Computational Intelligence (CSCI), Las Vegas, NV, USA, 16–18 December 2020; pp. 485–491. [Google Scholar] [CrossRef]
  54. Jin, S.; Dai, H.; Peng, J.; He, Y.; Zhu, M.; Yu, W.; Li, Q. An improved mask R-CNN method for weed segmentation. In Proceedings of the 2022 IEEE 17th Conference on Industrial Electronics and Applications (ICIEA), Chengdu, China, 1–4 December 2022; pp. 1430–1435. [Google Scholar] [CrossRef]
  55. Veeranampalayam Sivakumar, A.N.; Li, J.; Scott, S.; Psota, E.; Jhala, A.J.; Luck, J.D.; Shi, Y. Comparison of object detection and patch-based classification deep learning models on mid-to-late-season weed detection in UAV imagery. Remote Sens. 2020, 12, 2136. [Google Scholar] [CrossRef]
  56. Singh, P.; Bao, Y.; Ru, S. Deep learning approaches for yield prediction and maturity assessment in Southern Highbush blueberry cultivation. In Proceedings of the ASA, CSSA, SSSA International Annual Meeting, San Antonio, TX, USA, 10–13 November 2024; Available online: https://scisoc.confex.com/scisoc/2024am/meetingapp.cgi/Paper/161550 (accessed on 28 October 2025).
  57. Chaurasia, A.; Culurciello, E. LinkNet: Exploiting encoder representations for efficient semantic segmentation. In Proceedings of the 2017 IEEE Visual Communications and Image Processing (VCIP), St. Petersburg, FL, USA, 10–13 December 2017; pp. 1–4. [Google Scholar] [CrossRef]
  58. Singh, P.; Bao, Y.; Perez, M.A.; Donald, W.N. Image-based assessment of vegetation cover and composition using U-Net-based semantic segmentation. In Proceedings of the 2023 ASABE Annual International Meeting, Omaha, NE, USA, 9–12 July 2023; p. 1. Available online: https://elibrary.asabe.org/abstract.asp?aid=54094 (accessed on 28 October 2025).
  59. Reedha, R.; Dericquebourg, E.; Canals, R.; Hafiane, A. Transformer neural network for weed and crop classification of high-resolution UAV images. Remote Sens. 2022, 14, 592. [Google Scholar] [CrossRef]
  60. Giselsson, T.M.; Jørgensen, R.N.; Jensen, P.K.; Dyrmann, M.; Midtiby, H.S. A public image database for benchmark of plant seedling classification algorithms. arXiv 2017, arXiv:1711.05458. [Google Scholar] [CrossRef]
  61. Chebrolu, N.; Lottes, P.; Schaefer, A.; Winterhalter, W.; Burgard, W.; Stachniss, C. Agricultural robot dataset for plant classification, localization and mapping on sugar beet fields. Int. J. Robot. Res. 2017, 36, 1045–1052. [Google Scholar] [CrossRef]
  62. Lameski, P.; Zdravevski, E.; Trajkovik, V.; Kulakov, A. Cloud-based architecture for automated weed control. In Proceedings of the IEEE EUROCON 2017—17th International Conference on Smart Technologies, Ohrid, North Macedonia, 6–8 July 2017; pp. 757–762. [Google Scholar] [CrossRef]
  63. Haug, S.; Ostermann, J. A crop/weed field image dataset for the evaluation of computer vision based precision agriculture tasks. In Proceedings of the European Conference on Computer Vision (ECCV), Zurich, Switzerland, 6–12 September 2014; pp. 105–116. [Google Scholar] [CrossRef]
  64. Leminen Madsen, S.; Mathiassen, S.K.; Dyrmann, M.; Laursen, M.S.; Paz, L.C.; Jørgensen, R.N. Open plant phenotype database of common weeds in Denmark. Remote Sens. 2020, 12, 1246. [Google Scholar] [CrossRef]
  65. Teimouri, N.; Dyrmann, M.; Nielsen, P.R.; Mathiassen, S.K.; Somerville, G.J.; Jørgensen, R.N. Weed growth stage estimator using deep convolutional neural networks. Sensors 2018, 18, 1580. [Google Scholar] [CrossRef] [PubMed]
  66. Mell, P.; Grance, T. Effectively and securely using the cloud computing paradigm. NIST Inf. Technol. Lab. 2009, 2, 304–311. Available online: https://zxr.io/nsac/ccsw09/slides/mell.pdf (accessed on 25 September 2025).
  67. Ampatzidis, Y.; Partel, V.; Costa, L. Agroview: Cloud-based application to process, analyze and visualize UAV-collected data for precision agriculture applications utilizing artificial intelligence. Comput. Electron. Agric. 2020, 174, 105457. [Google Scholar] [CrossRef]
  68. Alrowais, F.; Asiri, M.M.; Alabdan, R.; Marzouk, R.; Hilal, A.M.; Gupta, D. Hybrid leader based optimization with deep learning driven weed detection on internet of things enabled smart agriculture environment. Comput. Electr. Eng. 2022, 104, 108411. [Google Scholar] [CrossRef]
  69. Shi, W.; Cao, J.; Zhang, Q.; Li, Y.; Xu, L. Edge computing: Vision and challenges. IEEE Internet Things J. 2016, 3, 637–646. [Google Scholar] [CrossRef]
  70. Singh, P.; Niknejad, N.; Spiers, J.D.; Bao, Y.; Ru, S. Development of a smartphone application for rapid blueberry detection and yield estimation. Smart Agric. Technol. 2025, 1, 101361. [Google Scholar] [CrossRef]
  71. Assunção, E.; Gaspar, P.D.; Mesquita, R.; Simões, M.P.; Alibabaei, K.; Veiros, A.; Proença, H. Real-Time Weed Control Application Using a Jetson Nano Edge Device and a Spray Mechanism. Remote Sens. 2022, 14, 4217. [Google Scholar] [CrossRef]
Figure 1. PRISMA flowchart showing the study selection process for the review on computer vision-based weed detection.
Figure 2. Evolution of Computer Vision Methods for Weed Detection (1990s–2025).
Table 1. Example studies on weed detection using machine learning models.

Data Collection | ML Model | Results | Reference
Features extracted from Gabor and FFT filters | SVM | Classification of narrow and broad-leaved weeds; overall accuracy: 100% | [10]
Hyper-spectral images | DTs developed with boosting | Global accuracy of 95.0% using spectral and shape features | [26]
Bi-spectral images | Unsupervised clustering | Overall weed detection accuracy of 75.0% | [28]
RGB images | Linear regression | Average balanced accuracy of 75.0–83.0% across 6 different fields | [30]
RGB images | Multiple regression | Classification accuracy of 92.8% between infested and non-infested regions | [31]
Pattern recognition features | K-Nearest Neighbors (KNN) classifier | Overall accuracy of 83.1% with a kappa coefficient of 0.775 | [32]
Three types of shape features used for training | SVM and ANN | Overall classification accuracy of crops and weeds; ANN: 92.9%, SVM: 92.5% | [33]
Optimal features extracted from RGB images | SVM | Overall classification accuracy of 97.0% | [34]
Hyper-spectral images | Classification and Regression Tree (CART) and DT | Classification accuracy of 96.0% for early growth stages of weeds in corn crops | [35]
RGB images | Random Forest (RF) | Overall pixel-based classification accuracy of 98.2% | [36]
RGB images | SVM, Extreme Gradient Boosting (XGBoost), linear regression | SVM classifier obtained an F1-score of 99.3% | [37]
RGB images | ANN | Overall classification accuracy of 98.1% | [38]
Grayscale and RGB images | RF | Overall classification accuracy of 94.0% | [39]
Table 2. Example studies on weed detection using deep learning models.

Dataset Collection | DL Model | Results | Reference
RGB images captured at an altitude of 0.6 m | Object detection models: Faster R-CNN, YOLO-v3, and CenterNet | YOLO-v3 achieved the highest F1-score (0.971) and computational efficiency | [41]
RGB imagery from UAV | Semantic segmentation models: UNet++, MAnet, DeepLab V3+, and PSPNet | Best-performing model achieved mAP of 90.0% with MAnet and a mit_b4 backbone | [43]
400 RGB images from UAV | 5 DL models: MobileNetV2, ResNet50, and 3 custom CNNs | 5-layer CNN achieved a detection accuracy of 95.0% | [45]
RGB images under natural light conditions | DenseNet model combined with SVM | Proposed model achieved an F1-score of 99.3% | [47]
Rice field weed dataset | Transformer-based high-level semantic feature extraction | Developed RMS-DETR model achieved an accuracy of 79.2% | [48]
RGB imagery from UAV | Semantic segmentation models: UNet and SegNet | SegNet performed best with an Intersection over Union (IoU) score of 0.821 | [49]
RGB imagery from UAV | UNet model with InceptionV3 as feature extractor | Weed detection accuracy of up to 90.0% | [50]
RGB imagery and NIR information | Encoder-decoder based deep CNN | Mean IoU (mIoU) of 88.9% for pixel-wise segmentation | [51]
RGB imagery from a Nikon Z5 digital camera | Multi-scale residual attention transformer | Best-performing model achieved 97.0% accuracy and 94.1% mIoU | [52]
RGB imagery from UAV | Mask R-CNN with ResNet50 and ResNet101 backbones | Best-performing model achieved mAP50 of 65.5% with the ResNet101 backbone | [53]
RGB imagery from UAV | Mask R-CNN with Convolutional Block Attention Module | Improved Mask R-CNN achieved mAP of 0.919 | [54]
RGB imagery from UAV | Object detection: Faster R-CNN and SSD | Best-performing Faster R-CNN achieved an IoU of 0.850 on the test dataset | [55]
Table 3. Publicly available weed detection datasets.

Dataset Name | Dataset Format | Annotation Type | Total No. of Images | URL
WeedNet [21] | Multispectral | Images per class category | 465 | https://github.com/inkyusa/weedNet (accessed on 28 October 2025)
Early crop weed [47] | RGB | Images per class category | 508 | https://github.com/AUAgroup/early-crop-weed (accessed on 28 October 2025)
Plant Seedling [60] | RGB | Images per class category | 407 | https://www.kaggle.com/competitions/plant-seedlings-classification/ (accessed on 28 October 2025)
Sugar Beets [61] | Multiple formats | Images per class category | >10,000 | https://www.ipb.uni-bonn.de/datasets_IJRR2017/annotations/ (accessed on 28 October 2025)
Carrot-Weed [62] | RGB | Pixel level | 39 | https://github.com/lameski/rgbweeddetection (accessed on 28 October 2025)
CWFI dataset [63] | Multispectral | Pixel level | 60 | https://github.com/cwfid/dataset (accessed on 28 October 2025)
Plant Phenotype database [64] | RGB | Bounding box | 7590 | https://gitlab.au.dk/AUENG-Vision/OPPD/-/tree/master/ (accessed on 28 October 2025)
Leaf counting [65] | RGB | Images per class category | 9372 | https://www.kaggle.com/code/girgismicheal/plant-s-leaf-counting-using-vgg16 (accessed on 28 October 2025)