Review

Precision Weeding in Agriculture: A Comprehensive Review of Intelligent Laser Robots Leveraging Deep Learning Techniques

1 Laboratory Management Center, Qingdao Agricultural University, Qingdao 266109, China
2 College of Science and Information, Qingdao Agricultural University, Qingdao 266109, China
* Author to whom correspondence should be addressed.
Agriculture 2025, 15(11), 1213; https://doi.org/10.3390/agriculture15111213
Submission received: 8 April 2025 / Revised: 8 May 2025 / Accepted: 27 May 2025 / Published: 1 June 2025
(This article belongs to the Section Digital Agriculture)

Abstract:
With the advancement of modern agriculture, intelligent laser robots driven by deep learning have emerged as an effective solution to address the limitations of traditional weeding methods. These robots offer precise and efficient weed control, crucial for boosting agricultural productivity. This paper provides a comprehensive review of recent research on laser weeding applications using intelligent robots. Firstly, we introduce the content analysis method employed to organize the reviewed literature. Subsequently, we present the workflow of weeding systems, emphasizing key technologies such as the perception, decision-making, and execution layers. A detailed discussion follows on the application of deep learning algorithms, including Convolutional Neural Networks (CNNs), YOLO, and Faster R-CNN, in weed control. Here, we show that these algorithms can achieve high accuracy in weed detection, with YOLO demonstrating particularly fast and accurate performance. Furthermore, we analyze the challenges and open problems associated with deep learning detection systems and explore future trends in this research field. By summarizing the role of intelligent laser robots powered by deep learning, we aim to provide insights for researchers and practitioners in agriculture, fostering further innovation and development in this promising area.

1. Introduction

In the realm of modern agriculture, weed management stands as a cornerstone for ensuring optimal crop yields and sustainable farming practices. Traditional weeding methods, predominantly mechanical and chemical weeding, have long been employed to combat weed infestations [1,2]. However, these conventional approaches come with significant drawbacks. Mechanical weeding often results in collateral damage to beneficial organisms and disrupts the soil structure, while chemical weeding poses risks to human health [3] and has detrimental impacts on the ecological environment.
With the rapid advancement of agricultural modernization and the continuous progress of science and technology, laser weeding technology has emerged as a promising alternative, demonstrating remarkable potential in the agricultural field. The application of laser weeding has expanded across diverse agricultural production environments, including farmlands, orchards, and greenhouses. Nevertheless, as the scale of agricultural production continues to expand, the ability to accurately and efficiently identify and remove weeds in complex farmland settings without harming crops has become a pressing challenge. This challenge has spurred the development of the intelligent laser robot weeding system based on machine learning, which offers an effective solution to the aforementioned issues.
A comprehensive review of the current research landscape reveals that the WeLASER project, as proposed in [4], represents a significant advancement in this field. This project aims to develop a high-powered self-driving weeding vehicle equipped with lasers, boasting high environmental performance from a life-cycle perspective. Compared to other weeding technologies, high-powered self-driving laser weeding vehicles hold the promise of being both environmentally friendly and highly efficient in practical applications. Through the integration of expertise, guidance, and advice, continuous improvements have been made in their design, operating characteristics, and the promotion of smart applications in agricultural practice.
Intelligent laser robots are a management tool for preventing the excessive growth of weeds in farmland. Given the increasingly complex weed problem in modern agriculture, precise weeding has become an important strategy to promote efficient agricultural production. By leveraging precision weeding techniques, the issues of weed interference and crop damage that farmers encounter during farmland management can be effectively mitigated. In this context, intelligent laser robot weeding systems can assist farmers in removing weeds and protecting crops with enhanced efficiency and accuracy by analyzing the farmland environment and identifying weed characteristics. Unlike traditional weeding methods, these systems require minimal labor, and do not disrupt the soil structure, harm beneficial organisms, or negatively impact the ecological environment.
When compared to other laser application fields, the intelligent laser robot weeding system must carefully consider the impact of laser intensity on agricultural products. Different crops have distinct physiological characteristics and tolerance levels to laser energy. Under varying environmental conditions, such as different light intensities, temperatures, and soil moisture levels, the effects of laser irradiation can also vary significantly. This consideration is essential to ensure that the weeding effect is achieved without compromising the quality and safety of agricultural products.
The intelligent laser robot weeding system is built upon two core technologies: deep learning technology [5] and laser control technology [6]. These technologies have been seamlessly integrated into modern intelligent agriculture. The intelligent laser weeding robot combines multiple advanced technologies, including perception technology, image recognition technology, and automatic control technology. It supports a wide range of intelligent weeding services and applications [7], making it a versatile tool in the field of intelligent agriculture. Through sensors, the intelligent laser weeding robot can perceive environmental changes and weed growth conditions in real-time. In the context of intelligent weeding operations, the system can be divided into three levels: the perception layer, the decision layer, and the execution layer. The perception layer is responsible for collecting image information and monitoring the real-time position of the robot. The decision layer analyzes and identifies the collected image information, utilizing deep learning algorithms to classify and distinguish between weeds and crops. Finally, the execution layer performs weeding operations. Based on the weed coordinates identified by the decision layer, it controls the laser device to emit lasers, thereby achieving precise weeding [8]. The system architecture of the intelligent laser weeding robot is shown in Figure 1.
This review aims to provide a comprehensive overview of the intelligent laser robot weeding system based on deep learning. It will first delve into the significance of weed management in modern agriculture and the limitations of traditional weeding methods. Then, it will explore the development and application of laser weeding technology, with a particular focus on the intelligent laser robot weeding system. Subsequently, the core technologies, system architecture, and practical applications of the intelligent laser robot will be analyzed in detail. Finally, the current challenges and future prospects of this technology will be discussed, offering insights into the direction of future research and development in this field.
In view of the above requirements and characteristics, this paper discusses and analyzes the research status of intelligent laser robots and their application to weed control. The aim of this study is to comprehensively review recent research on laser weeding applications based on intelligent robots. The paper offers an overall view through which readers can quickly grasp intelligent laser robot technology and its application in the field of weed control, laying a foundation for promoting the innovative application of intelligent laser robots in agriculture. This survey is intended for researchers, practitioners, and educators interested in intelligent laser robot weeding technology, providing a rough guide for choosing weeding technologies to solve weed-control problems. In summary, the main contributions of this survey are threefold:
(1)
A content analysis method is provided to organize the literature and present the research process.
(2)
The key technologies used by intelligent laser robots in the weed-control process are described.
(3)
Challenges and open problems are analyzed from multiple perspectives, and new trends and future directions in this research field are identified, outlining the broader vision for research on intelligent laser robot weeding.

2. Content Analysis Method and Research Process Design

2.1. Sample Extraction

Generally speaking, compared with monographs, research reports, and dissertations, journal papers more directly reflect research hotspots and frontiers. Therefore, the sampling principles are as follows:
Paper Collection: We used Google Scholar as the main search engine and also adopted the CNKI (https://www.cnki.net/, accessed on 20 May 2025), Web of Science, and IEEE Xplore databases as three important tools for discovering related papers. In addition, we screened the most relevant well-known conferences, such as CVPR, ICCV, NIPS, ICML, AAAI, SIGGRAPH, and IEEE ICMA.
Time Interval: From 2008 to 2024.
Major Search Keywords: The major search keywords are “intelligent laser robots”, “deep learning”, “weeding in agriculture”, “systematic literature review”, “agricultural robotics”, “laser technology”, “weed detection”, “Artificial intelligence in agriculture”, “precision agriculture”, “autonomous weeding”. These keywords were used both individually and in combination.

2.2. Content Analysis Coding

According to the research objectives of this paper, two researchers jointly discussed and set the analysis coding rules.
Basic Information of the Papers: Title, author, year of publication, the journal name, technique used, applied model, specific content of the model study.
Research Content Analysis: A quantitative analysis of intelligent laser robot algorithms, an analysis of the advantages and disadvantages of intelligent laser robot algorithms, and an analysis of the technical applications of intelligent laser robot algorithms.

2.3. Research Steps

Step one: According to the principle of sample extraction, papers were extracted and screened, and then 122 initial samples were obtained.
Step two: Does the paper use intelligent laser robot algorithm technology? If yes, it is classified into the statistical sample. Otherwise, it is discarded.
Step three: Identify the technologies used in the papers and their application models; this step was carried out independently by two researchers.
Step four: The preliminary identification results of the two researchers were combined, and the identification results that were controversial were discussed and determined by two researchers.
Step five: The preliminary classification was performed by two researchers.
Step six: The controversial classifications were discussed and determined by two researchers, and finally the final research samples were obtained.

3. Weed-Control System

The weed-control system is an innovative intelligent weed-control solution operated through the perception layer, decision layer, and execution layer. Table 1 briefly describes the main processes of a typical intelligent weed-control system. The rest of this section describes each part in detail.

3.1. Perception Layer

The perception layer is the foundation of the entire weed-control system. During image acquisition, high-resolution cameras, scanners, or other advanced image acquisition equipment may be used. These devices can capture all-round, multi-angle, and high-frequency photos of the target area to ensure that comprehensive and accurate image information is obtained, and they may offer functions such as autofocus and light adjustment to adapt to different environmental conditions and shooting requirements. The image-processing stage involves a series of algorithms and techniques: the collected raw images are first pre-processed by denoising and enhancement, which improves image quality and clarity. After this processing and analysis, the perception layer can accurately output the location information of the weeds, providing a key basis for the subsequent decision making and execution of the system. The following subsections introduce the main image acquisition methods.

3.1.1. USB Camera

The primary role of cameras in the weed-control system is to obtain pixel-based images. These images contain basic visual information about the target area, such as the color, shape, and texture of plants, which serve as the raw data for subsequent weed identification and analysis.
Cameras sense the light of the external environment [9] through their built-in image sensors, such as Complementary Metal Oxide Semiconductor (CMOS) [10] or Charge-Coupled Device (CCD) sensors. The sensor is composed of numerous photosensitive units, each of which generates an electrical signal according to the intensity of the received light. When light enters the camera lens and is projected onto the image sensor, the sensor converts the light signal into an analog electrical signal. Subsequently, the Analog-to-Digital Converter (ADC) inside the camera [11] transforms these analog electrical signals into digital signals. The converted digital image data are then transmitted to a connected computer or other device, typically in the form of data packets that carry the image’s pixel information, color information, and synchronization signals.
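As a minimal illustration of this acquisition step (not taken from any of the reviewed systems), the following Python sketch grabs a frame from a USB camera with OpenCV and applies the basic denoising mentioned above; the device index, resolution, and file name are assumptions.

```python
# Minimal sketch: grabbing pixel-based frames from a USB camera with OpenCV.
# Device index 0 and the 1280x720 resolution are illustrative assumptions.
import cv2

cap = cv2.VideoCapture(0)                      # open the first USB camera
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 1280)        # request a resolution (camera may override)
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 720)

ok, frame = cap.read()                         # frame is an HxWx3 BGR uint8 array
if ok:
    # Basic pre-processing mentioned in Section 3.1: denoising before analysis.
    denoised = cv2.GaussianBlur(frame, (5, 5), 0)
    cv2.imwrite("field_frame.png", denoised)   # hand the image to the decision layer
cap.release()
```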

3.1.2. Binocular Camera

While cameras provide basic pixel-based images, binocular cameras play a crucial role in acquiring in-depth information, which is essential for a more accurate understanding of the spatial relationship between weeds and the surrounding environment in the weed-control system. This in-depth information, especially the depth data, enables a more precise targeting of weeds for subsequent treatment actions.
The binocular camera [12] operates based on the principle of parallax. It consists of two horizontally placed cameras with a specific distance between them. When capturing the same object, due to the different positions of the two cameras, the images they obtain are distinct. By calculating and analyzing the position deviation, namely the parallax, of the corresponding points in these two images, the depth information of the object can be derived. During data collection, the binocular camera simultaneously captures the scene image. Each camera’s acquired image contains basic visual details of the object, and through matching the corresponding pixel points in the two images, and combining with camera parameters and parallax information, the position and depth of the object in space can be calculated using the principle of triangulation [13]. As a result, the binocular camera provides not only two-dimensional image information but also rich three-dimensional spatial data, making it widely applicable in various fields including three-dimensional modeling, distance measurement, and tracking.
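The depth computation can be sketched in a few lines. The following Python example uses OpenCV's block-matching stereo matcher on a rectified image pair and applies the triangulation relation Z = f·B/d; the focal length and baseline values are illustrative assumptions rather than parameters of any specific weeding robot.

```python
# Sketch of depth recovery from binocular parallax (triangulation).
# Assumed, illustrative camera parameters: focal length in pixels, baseline in metres.
import numpy as np
import cv2

FOCAL_PX = 700.0      # focal length of the rectified cameras, in pixels (assumed)
BASELINE_M = 0.12     # distance between the two camera centres, in metres (assumed)

def depth_from_disparity(left_gray, right_gray):
    """Per-pixel depth map (metres) from a rectified 8-bit grayscale stereo pair."""
    matcher = cv2.StereoBM_create(numDisparities=64, blockSize=15)
    disparity = matcher.compute(left_gray, right_gray).astype(np.float32) / 16.0
    disparity[disparity <= 0] = np.nan            # mark invalid matches
    # Triangulation for rectified cameras: depth Z = f * B / disparity
    return FOCAL_PX * BASELINE_M / disparity
```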

3.2. Decision-Making Layer

In [14], a dual-stream fusion network based on spiking neural units is introduced for classification and recognition, thereby achieving real-time interaction between users and sensors. That system also integrates a 3D human visualization function, which can visualize sensor data and present the recognized human movements in a 3D model in real time, providing accurate and comprehensive visual feedback to help users better understand and analyze the details and characteristics of human movements. In the system considered in this study, the recognition model for weed detection simulates the human brain’s ability to understand and learn from data by building a multi-layer neural network structure [15]. In deep learning, neurons are the basic units of neural networks [16]: neurons receive input signals and generate outputs through activation functions. Neural networks are composed of multiple layers, including input layers, hidden layers, and output layers. Input data are subjected to nonlinear transformations by neurons in the hidden layers to extract higher-level features.
Weights [17] represent the strength of connections between neurons and are adjusted through training to optimize model performance. Activation functions [18] introduce nonlinear characteristics; common choices include ReLU, Sigmoid, and Tanh. The loss function measures the difference between the model prediction and the actual value, and the goal of training is to minimize it. Back propagation is the algorithm that calculates the gradient of the loss function with respect to the weights in order to update them. During training, the data are usually divided into a training set, a validation set, and a test set: the training set is used to adjust the model parameters, the validation set is used to evaluate performance and tune hyperparameters, and the test set is used to finally evaluate the generalization ability of the model on new data.
A Convolutional Neural Network (CNN) is a neural network specially designed for processing two-dimensional data such as images. It includes convolutional layers, pooling layers, and related components. Batch normalization is usually used to accelerate training and reduce internal covariate shift, and dropout regularization randomly discards neurons to prevent overfitting. Deep learning target-detection algorithms are usually divided into three parts, as shown in Figure 2.
Depending on the application, detection models [19] are usually divided into two types. Anchor-based detection pre-defines a set of anchor boxes (usually rectangular boxes of different sizes and aspect ratios), slides them over the image, determines whether each anchor box contains a target, and then classifies and refines it; this approach involves a large amount of computation and is usually suitable for small-target detection. Anchor-free detection [20] does not require pre-defined anchor boxes and involves less computation; it directly predicts the center point or boundary of the target, or segments the image to detect the target [21].

3.3. Execution Layer

The execution layer [22] constitutes a pivotal segment of the entire weed-control system [23]. Its primary function is to receive instructions dispatched from the decision layer and oversee the operation of relevant equipment in strict accordance with these directives [24]. Specifically, the execution layer first acquires the instructions generated by the decision layer. These instructions are the outcome of deep learning algorithm processing, which analyzes the image acquisition and location data furnished by the perception layer. Subsequently, the execution layer exerts control over the equipment to execute corresponding operations. In this system, the core equipment is a laser-emitting device [25]. Upon receiving a laser emission instruction [26], the execution layer promptly controls the laser-emitting equipment to target and irradiate weeds.

3.3.1. Laser Weeding Device—Working Principle

The operation of the laser weeding device is rooted in the unique interaction between lasers and plant tissues. When a laser, typically a 1064 nm continuous-wave laser, irradiates weeds [27], the photon energy of the laser is absorbed by the weed. Given that weed tissues can be regarded as porous media, the absorbed laser energy is converted into heat within the tissues. According to the theory of porous media heat transfer, this heat causes the temperature inside the weed to rise rapidly. As the temperature reaches a certain threshold, the water within the weed undergoes a phase change from liquid to vapor, resulting in tissue dehydration. Continued exposure to laser energy further disrupts the cell structure of the weed, denaturing proteins and destroying biological membranes. Eventually, the weed loses its physiological functions, achieving the goal of weed control.
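To make the dose relationship concrete, a back-of-the-envelope sketch is given below: the delivered energy is laser power multiplied by dwell time, and it must exceed a lethal dose for the weed. The lethal energy dose, laser power, and absorption fraction are illustrative assumptions, not measured values from the reviewed studies.

```python
# Back-of-the-envelope sketch of the laser dose relationship described above:
# delivered energy = power x dwell time, compared against an assumed lethal dose.
# All numeric values are illustrative assumptions only.
def dwell_time_seconds(lethal_dose_joules, laser_power_watts, absorption=0.8):
    """Time the beam must stay on a weed to deposit the lethal energy dose."""
    return lethal_dose_joules / (laser_power_watts * absorption)

# Example: an assumed 8 J lethal dose with a 25 W laser and 80% absorption.
print(f"{dwell_time_seconds(8.0, 25.0):.2f} s per weed")
```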

3.3.2. Laser Control Mechanism

Direction Control: To accurately direct the laser towards target weeds, the execution layer employs a sophisticated control mechanism for laser direction. This mechanism usually consists of a combination of mechanical and optical components. A servo-motor-driven mirror system is commonly used, where the servo motor adjusts the angle of the reflective mirror according to the position information of the target weed sent by the decision-making layer. High-precision sensors, such as encoders, are integrated into the system to ensure accurate mirror angle adjustment, enabling the laser beam to be precisely directed at the target, even in uneven terrains or complex crop-growth environments.
Power Control: For laser power control, the execution layer utilizes an intelligent control strategy. It considers various factors, including the type of weed, its growth stage, and environmental conditions. Based on pre-established algorithms and real-time feedback from temperature sensors attached to the laser-irradiation area, the execution layer adjusts the power output of the laser source. For example, for thick-stemmed and robust weeds, a higher power density is required, while for young and tender weeds, a lower power density can achieve the desired weeding effect. This power adjustment not only ensures effective weed removal but also optimizes energy consumption and reduces potential damage to surrounding crops.
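The direction- and power-control logic described above can be sketched as follows; the mirror geometry, stem-diameter thresholds, and power levels are purely illustrative assumptions, not parameters of any reviewed system.

```python
# Illustrative sketch of the execution-layer logic described above: steering a
# servo-driven mirror toward a weed coordinate and choosing a laser power level.
# All geometry, thresholds, and power values are assumptions for illustration only.
import math

MIRROR_HEIGHT_M = 0.5          # height of the mirror above the ground plane (assumed)

def mirror_angles(weed_x, weed_y):
    """Pan/tilt angles (degrees) pointing the beam at ground coordinates (x, y),
    measured relative to the point directly below the mirror."""
    pan = math.degrees(math.atan2(weed_y, weed_x))
    ground_dist = math.hypot(weed_x, weed_y)
    tilt = math.degrees(math.atan2(ground_dist, MIRROR_HEIGHT_M))
    return pan, tilt

def laser_power_watts(stem_diameter_mm):
    """Pick a power level from the weed's stem thickness (illustrative values)."""
    if stem_diameter_mm < 2.0:      # young, tender weed
        return 10.0
    elif stem_diameter_mm < 5.0:
        return 25.0
    else:                           # thick-stemmed, robust weed
        return 50.0
```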

3.3.3. Weeding Performance of Existing Laser Weeding Robots

Existing laser weeding robots have shown promising potential in weed control. In field trials, a number of advanced laser weeding robots have achieved a relatively high weeding accuracy rate, reaching up to 95% in certain controlled agricultural scenarios [4]. These robots demonstrate the ability to adapt to various crop-planting patterns, including row-planted crops and wide-area sown crops, although the performance may vary under different environmental conditions.
For row-planted crops, the robots have the capacity to identify and remove weeds between rows with decent efficiency. Regarding processing speed, some models can manage to handle several hundred square meters of weeding area per hour, which is an improvement compared to traditional manual weeding methods. However, this speed is often achieved under ideal conditions, and in practical, complex farming environments, the efficiency may be affected by factors such as uneven terrain, variable crop densities, and diverse weed types.
With the continuous optimization of the laser-weeding algorithm and control system, there is an ongoing effort to reduce the weed omission rate and achieve more comprehensive weed control. While the current performance of laser weeding robots has shown positive signs, it is important to note that there is still room for improvement in terms of overall reliability, adaptability to different farming conditions, and consistent high-precision operation across a wide range of agricultural scenarios.

3.3.4. Robot Body Structure

The body structure of laser weeding robots is designed to facilitate stable movement and efficient operation in agricultural fields. Most laser weeding robots adopt a four-wheel or six-wheel drive structure, which provides strong traction and stability on various terrains. The chassis is usually made of lightweight and high-strength materials, such as aluminum alloy or carbon-fiber composites, to reduce the overall weight of the robot while ensuring structural strength. The laser-emitting device is mounted on an adjustable arm, which can be raised or lowered, and rotated horizontally and vertically to adapt to different crop heights and weeding angles. Additionally, the robot is equipped with multiple sensors, including ultrasonic sensors, LiDAR sensors, and cameras, which are used for obstacle detection, terrain mapping, and crop–weed identification. These sensors, combined with the control system, enable the robot to navigate autonomously in the field, avoiding collisions with crops and obstacles, and ensuring the safe and efficient operation of the weeding process. Through the precise control of the execution layer, the laser weeding robot can accurately irradiate the target weeds, improving the efficiency and accuracy of weeding while minimizing the impact on the surrounding environment [28].

3.4. Weeding Process

3.4.1. Weed-Control Standards

A complete laser weeding system should meet numerous standards. The perception layer ought to possess high-resolution, wide-field image acquisition capabilities and be capable of accurately obtaining clear field image data under varying lighting and complex environments to ensure that no potential weed areas are overlooked. The weed recognition accuracy of the decision-making layer should reach an exceedingly high level and be able to swiftly and precisely distinguish weeds from crops. The laser emission device of the execution layer must be able to accurately control the power, wavelength, and emission angle of the laser. The power has to be flexibly adjusted in accordance with the type of weeds and their growth conditions, so as to effectively remove weeds while averting adverse impacts on soil fertility and the surrounding ecological environment.

3.4.2. Weed Target Detection

In laser weeding systems, target detection plays a critical role. This function mainly relies on advanced machine vision technology. When faced with complex and changing farmland scenes, traditional image-processing methods have obvious limitations in the accuracy and efficiency of weed identification. The emergence of deep learning has completely revolutionized the application of machine vision in weeding systems. With its powerful self-learning ability and deep neural network architecture, deep learning can deeply mine and analyze massive amounts of farmland image data. It can automatically learn the various characteristics of weeds at different growth stages and under different environmental conditions, whether shape, color, texture, or spatial distribution, and accurately extract and identify them, thereby achieving efficient detection of weed targets.

4. Deep Learning Algorithms

4.1. Introduction to Deep Learning Detection Algorithms

Deep learning has revolutionized agricultural laser weed-control systems by enabling precise and automated weed detection and management. This technology leverages advanced neural networks to process and analyze images of crops and weeds with high accuracy. Table 2 shows various deep learning techniques in the intelligent laser robot weeding system.

4.1.1. Convolutional Neural Network

CNNs are mainly used to process data with grid structures such as images [38]. The algorithm has been widely studied and applied to recognition systems. Its design is inspired by the biological visual system. Features are automatically learned from data through multiple layers of convolution [39] and pooling operations to achieve tasks such as efficient image recognition and classification.
The input layer of a CNN [40] receives image data, and the image enters the network in the form of a matrix. Next is the convolution layer, the core part of the CNN. In the convolution layer, the input image is subjected to sliding convolution operations using multiple learnable convolution kernels; each kernel performs an inner product with a local region of the image to produce feature maps that capture specific patterns and features at different locations. Then comes the pooling layer [41], usually located after the convolution layer [42], which downsamples the feature map to reduce the data dimension while retaining the main features, improving computational efficiency and robustness to small displacements. After alternating multiple convolutional and pooling layers, the resulting feature map is flattened into a one-dimensional vector and input to the fully connected layer, which integrates and classifies the learned features [43]. Finally, the prediction results, such as the category of the image, are output through the output layer. Throughout this process, the network parameters are adjusted using the back-propagation algorithm [44] according to the error between the predictions and the true labels, continuously optimizing model performance so that it can better adapt to different image data.
CNN first processes the farmland images obtained by the perception layer. Its unique convolution layer can automatically extract local features in the image, such as the edge contours of weed leaves, texture details, and subtle differences in color and shape from crops. These features are reduced in dimension and compressed through the pooling layer, effectively reducing the amount of data while retaining key information, allowing the network to quickly process large-scale farmland image data.
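To make the convolution–pooling–flatten–classification pipeline concrete, the following PyTorch sketch defines a small two-class (crop vs. weed) patch classifier; the layer sizes and the 64 × 64 input resolution are illustrative assumptions rather than an architecture reported in the reviewed works.

```python
# Minimal PyTorch sketch of the convolution -> pooling -> flatten -> fully connected
# pipeline described above, applied to a two-class (crop vs. weed) image patch.
# Layer sizes and the 64x64 input resolution are illustrative assumptions.
import torch
import torch.nn as nn

class WeedCNN(nn.Module):
    def __init__(self, num_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),   # learnable convolution kernels
            nn.BatchNorm2d(16),                            # batch normalization
            nn.ReLU(),
            nn.MaxPool2d(2),                               # pooling: downsample feature maps
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.BatchNorm2d(32),
            nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),                                  # flatten to a 1-D vector
            nn.Dropout(0.5),                               # dropout against overfitting
            nn.Linear(32 * 16 * 16, num_classes),          # fully connected output layer
        )

    def forward(self, x):                                  # x: (N, 3, 64, 64)
        return self.classifier(self.features(x))

logits = WeedCNN()(torch.randn(1, 3, 64, 64))              # -> shape (1, 2)
```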
Faster R-CNN detection algorithm: Faster R-CNN [19] is an object-detection algorithm with a wide range of applications in computer vision. Faster R-CNN [45] improves on the traditional region-based convolutional neural network with the aim of increasing both the speed and accuracy of object detection. In traditional CNN-based object-detection methods, candidate regions are usually generated by methods such as selective search, which run on the CPU, are slow, do not share features with the detection network, and involve many redundant calculations. Faster R-CNN introduces an RPN, which generates candidate regions directly by sliding windows over the convolutional feature map [46]. The RPN [47] shares convolutional layers with the subsequent detection network, greatly improving the speed and quality of candidate-region generation.
Mask R-CNN detection algorithm: Mask R-CNN [48] excels in complex scenes that require detecting and identifying multiple categories at the same time. Like Faster R-CNN, Mask R-CNN also uses an RPN to generate candidate regions that may contain target objects. The RPN slides a window over the feature map with small convolution kernels, generates a large number of anchor boxes [49], and classifies and regresses these anchor boxes to screen out high-quality candidate regions. The difference is that Mask R-CNN uses the RoIAlign [50] operation, which aligns candidate regions and feature maps more accurately through bilinear interpolation, avoiding quantization errors and improving the accuracy of detection and segmentation. For each candidate region, Mask R-CNN applies a detection head [51] that performs target classification and bounding-box regression to determine the category and location of the target, together with a segmentation head that generates an instance segmentation mask for each target. The segmentation head usually adopts a fully convolutional network structure and can output a mask map of the same size as the input image.

4.1.2. Transformer-Based Convolutional Neural Network

In [52], positional encodings specialized for visual transformers that work for patches of arbitrary dimension and length are proposed. A Transformer [53] consists of two parts: an encoder and a decoder [54]. The encoder converts the input sequence into a series of high-dimensional feature representations, while the decoder gradually generates the target sequence based on the encoder output and the partial output sequence generated so far. The input to the Transformer is sequence data; after input, a Position Encoding (PE) is added to each token vector. The calculation formulas are as follows:
$PE_{(pos,\,2i)} = \sin\!\left(\dfrac{pos}{10000^{2i/d_{\mathrm{model}}}}\right)$

$PE_{(pos,\,2i+1)} = \cos\!\left(\dfrac{pos}{10000^{2i/d_{\mathrm{model}}}}\right)$
The encoder block is composed of six stacked encoders (N_x = 6). Each encoder consists of Multi-Head Attention and a fully connected Feed-Forward Network. In the self-attention mechanism, the embedded patch vector is mapped to three vectors: query (Q), key (K), and value (V). The similarity between the Q and K vectors is calculated by a dot product; after scaling and softmax normalization, the resulting similarity values are multiplied by the V vectors to obtain semantic weights, and the weighted sum of these weights yields the self-attention feature. Finally, an MLP is applied to obtain a feature map with rich global information. The self-attention mechanism is:
$Z = \mathrm{Attention}(Q, K, V) = \mathrm{softmax}\!\left(\dfrac{QK^{T}}{\sqrt{d_k}}\right)V$
where Z is the self-attention feature, Q is the query vector, K is the key vector, V is the value vector, and $\sqrt{d_k}$ is the scaling factor. The Transformer architecture is shown in Figure 3.
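For reference, the positional encoding and scaled dot-product attention defined by the formulas above can be implemented directly; the following NumPy sketch uses illustrative dimensions (8 patches, d_model = 16), which are assumptions for demonstration only.

```python
# Sketch of the sinusoidal positional encoding and scaled dot-product attention
# given by the formulas above (NumPy, illustrative dimensions).
import numpy as np

def positional_encoding(seq_len, d_model):
    """PE[pos, 2i] = sin(pos / 10000^(2i/d_model)), PE[pos, 2i+1] = cos(...)."""
    pos = np.arange(seq_len)[:, None]
    i = np.arange(0, d_model, 2)[None, :]
    angle = pos / np.power(10000.0, i / d_model)
    pe = np.zeros((seq_len, d_model))
    pe[:, 0::2] = np.sin(angle)
    pe[:, 1::2] = np.cos(angle)
    return pe

def attention(Q, K, V):
    """Z = softmax(Q K^T / sqrt(d_k)) V."""
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V

x = np.random.randn(8, 16) + positional_encoding(8, 16)   # 8 patches, d_model = 16
Z = attention(x, x, x)                                     # self-attention features
```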

4.1.3. YOLO Target-Detection Algorithm

You Only Look Once (YOLO) [55] is an efficient real-time target-detection algorithm [56]. Its core idea is to treat target detection as a regression problem, detecting targets in images or videos in real time and quickly and accurately identifying and locating multiple targets. It is widely used in autonomous driving, intelligent monitoring, agricultural production, and other fields. Through the YOLO algorithm, objects in images can be discovered in a timely manner and their categories and location information obtained, providing an important basis for subsequent decision making and actions.
YOLO divides the input image into multiple grid cells. Each grid cell predicts multiple bounding boxes together with their confidences and category probabilities. The confidence represents the probability that the bounding box contains a target and the accuracy of the predicted box, while the category probability represents the probability of each category being present in the grid cell. During training, YOLO uses the features of the entire image to predict the bounding boxes and categories of all targets, calculates the loss function by comparing with the true labels, and updates the model parameters using the back-propagation algorithm. During inference, YOLO multiplies the confidence [57] and category probability of each bounding box to obtain the class confidence, selects the boxes whose confidence exceeds a threshold, removes redundant boxes using the non-maximum suppression algorithm, and finally obtains the detection results for the targets in the image. The advantages of YOLO are its fast speed, its ability to use the global information of the entire image for prediction, and its simple and intuitive model structure. However, it also has limitations, such as a poor detection effect on small targets and sensitivity to changes in target shape and scale [56]. The workflow of the YOLO target-detection algorithm is shown in Figure 4.
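The inference-time post-processing described above (class confidence, confidence thresholding, and non-maximum suppression) can be sketched as follows; the threshold values are illustrative assumptions rather than settings from any specific YOLO variant.

```python
# Sketch of YOLO-style post-processing: class confidence, a confidence threshold,
# and non-maximum suppression. Threshold values are illustrative assumptions.
import numpy as np

def iou(a, b):
    """IoU of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter + 1e-9)

def postprocess(boxes, objectness, class_probs, conf_thr=0.25, nms_thr=0.5):
    """Keep boxes whose class confidence passes the threshold, then apply NMS."""
    scores = objectness * class_probs.max(axis=1)          # class confidence
    order = np.argsort(-scores)                            # highest score first
    keep = []
    for i in order:
        if scores[i] < conf_thr:
            break
        if all(iou(boxes[i], boxes[j]) < nms_thr for j in keep):
            keep.append(i)
    return [(boxes[i], scores[i], class_probs[i].argmax()) for i in keep]
```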
When the YOLO algorithm is applied to the laser weeding system, its unique network architecture enables it to quickly and efficiently process farmland images transmitted by the perception layer. Unlike traditional target-detection algorithms, YOLO uses a single forward pass: it divides the entire image into multiple grids and simultaneously predicts the bounding boxes and category probabilities of multiple targets in one computation. When faced with vast farmland scenes, it can rapidly scan images and locate weeds, and its speed advantage is particularly prominent. This high-speed detection capability enables the execution layer of the weeding system to respond quickly and launch lasers in time to remove weeds precisely, preventing weeds from growing further, spreading, and damaging crops. It brings an efficient and intelligent solution to the weeding process in agricultural production and strongly promotes the rapid development of agricultural automation.

4.2. Other Object-Detection Algorithms Related to Deep Learning

OpenCV [58] is an open-source computer vision and machine learning software library. It covers a wide range of functions and performs well in image and video processing. Images can be read, saved, and displayed with it [59], and a variety of filtering and enhancement algorithms can be applied to remove noise and improve image quality; operations such as geometric transformation and color-space transformation can also be performed. In the field of feature extraction and description, OpenCV provides corner detection, edge detection, and a variety of powerful feature descriptors such as SIFT, SURF, and ORB, which play a key role in tasks such as image matching, target recognition, and 3D reconstruction. For target detection and recognition, it offers traditional Haar features and cascade classifiers [60] for tasks such as face detection, supports HOG features combined with SVM classifiers [61], and can also be combined with deep learning models such as YOLO and SSD for efficient target detection [62]. In addition, OpenCV plays an important role in camera calibration and 3D reconstruction: camera parameters can be determined and the 3D structure of a scene reconstructed by analyzing multiple images. It is not only open source and free but also cross-platform, running on multiple operating systems and hardware platforms. With its efficient performance, rich functionality, and large user and developer communities, OpenCV is widely used in many fields, such as robotic vision, intelligent transportation, medical imaging, industrial inspection, and agricultural production, providing a solid foundation for development and innovation in the field of computer vision.
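As an example of the classical, non-deep-learning image processing that OpenCV supports, the following sketch segments green vegetation with the excess-green (ExG) index as a pre-processing step; the threshold value and input file name are illustrative assumptions.

```python
# Illustrative OpenCV/NumPy sketch: segmenting green vegetation with the
# excess-green (ExG) index before any deep model runs.
# The threshold and file name are assumptions for illustration.
import cv2
import numpy as np

def vegetation_mask(bgr_image, threshold=0.1):
    """Return a binary mask of likely plant pixels using ExG = 2g - r - b."""
    img = bgr_image.astype(np.float32) / 255.0
    b, g, r = cv2.split(img)
    exg = 2.0 * g - r - b
    mask = (exg > threshold).astype(np.uint8) * 255
    # Morphological opening removes small noise specks in the mask.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    return cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)

image = cv2.imread("field_frame.png")          # image from the perception layer
if image is not None:
    contours, _ = cv2.findContours(vegetation_mask(image),
                                   cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    print(f"{len(contours)} candidate plant regions found")
```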

4.3. Object-Detection Algorithm Evaluation Method

4.3.1. Comparative Analysis of Popular Deep Learning Algorithms

Different algorithms have their own unique advantages and disadvantages [63], which stem from different design concepts, technical architectures, and implementation methods, and they show different effects when dealing with problems and needs in different fields [64]. When YOLO deals with small targets [65], it is difficult for the neural network to fully learn the characteristics of small targets, so the detection accuracy for small targets is low. In improved neural networks based on CNNs, a large amount of memory is required to store model parameters and intermediate calculation results, which affects the inference speed of the model.

4.3.2. Evaluation Metrics for Deep Learning Algorithm

In today's computer-vision field, deep learning target-detection algorithms have become a focus of research and one of the core technologies, showing excellent performance and great potential in many practical application scenarios. In order to scientifically and accurately evaluate different deep learning target-detection algorithms and determine their applicability and advantages in specific tasks, a reliable performance evaluation index system is particularly critical [56]. This paper examines several key indicators for deep learning target-detection algorithms, covering [66] accuracy, precision, recall, mean average precision (mAP), intersection over union (IoU) [67], and detection speed (FPS). The systematic analysis of these indicators aims to give researchers and practitioners a deeper understanding so that they can select appropriate target-detection algorithms in practical applications and effectively improve their performance. As shown in Table 3, the calculation formulas for these six indicators are as follows:
$\mathrm{Accuracy} = \dfrac{TP + TN}{TP + TN + FP + FN}$
where True Positives (TP) denote the number of samples correctly detected as positive; True Negatives (TN) denote the number of samples correctly detected as negative; False Positives (FP) denote the number of negative samples mistakenly detected as positive; and False Negatives (FN) denote the number of positive samples mistakenly detected as negative.
$\mathrm{Precision} = \dfrac{TP}{TP + FP}$

$\mathrm{Recall} = \dfrac{TP}{TP + FN}$

$AP = \int_{0}^{1} P(r)\, dr$

$mAP = \dfrac{1}{C} \sum_{i=1}^{C} AP_i$
By changing the confidence threshold of the detection results, different precision and recall values can be obtained. With recall on the horizontal axis and precision on the vertical axis, a precision-recall curve (PR curve) is drawn, from which the AP value is obtained.
Table 3. Evaluation metrics for deep learning object-detection algorithms.
Evaluation Metric | Meaning | Application Scenarios
Accuracy | The proportion of correct predictions made by the model among all samples, i.e., the ratio of correctly predicted samples to the total number of samples. | A basic indicator of model performance that measures the classification accuracy of the model on the overall data.
Precision | The proportion of samples that are actually positive among all samples predicted to be positive. | Mainly used to assess how accurately the model predicts the positive class.
Recall | The proportion of samples correctly predicted as positive among all samples that are actually positive. | Measures the model's ability to detect positive samples.
mean Average Precision (mAP) | A commonly used indicator in target-detection tasks; the average of the average precision (AP) over multiple categories. | Mainly used in target-detection tasks to evaluate the detection accuracy of the model for different target categories.
Intersection over Union (IoU) | A metric measuring the degree of overlap between two bounding boxes; the ratio of the intersection to the union of the two boxes. | Used in object detection to determine how well the predicted bounding box matches the true bounding box, thereby evaluating the positioning accuracy of the model.
Frames Per Second (FPS) | The number of frames processed per second. | Reflects the processing speed of the model.
$IoU = \dfrac{|\mathrm{Prediction} \cap \mathrm{GroundTruth}|}{|\mathrm{Prediction} \cup \mathrm{GroundTruth}|}$

where GroundTruth is the ground-truth box area and Prediction is the predicted box area.

$FPS = \dfrac{\mathrm{frames}}{\mathrm{time}}$

where frames is the total number of frames and time is the total processing time.
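The formulas above can be turned into a short evaluation routine. The following sketch computes per-class AP by integrating the precision-recall curve over detection confidences and averages it into mAP; the detection scores and counts used in the example are illustrative.

```python
# Sketch implementing the formulas above: precision, recall, and AP obtained by
# integrating the precision-recall curve over confidence-ranked detections.
import numpy as np

def average_precision(scores, is_true_positive, num_gt):
    """AP = area under the precision-recall curve for one class.
    scores: detection confidences; is_true_positive: 1/0 per detection;
    num_gt: number of ground-truth objects of this class."""
    order = np.argsort(-np.asarray(scores))
    tp = np.asarray(is_true_positive, dtype=float)[order]
    cum_tp = np.cumsum(tp)
    cum_fp = np.cumsum(1.0 - tp)
    precision = cum_tp / (cum_tp + cum_fp)
    recall = cum_tp / num_gt
    # Integrate P(r) dr with a step-wise sum over the recall axis.
    return float(np.sum(np.diff(np.concatenate(([0.0], recall))) * precision))

# mAP is the mean of per-class APs (two illustrative classes here).
ap_weed = average_precision([0.9, 0.8, 0.4], [1, 1, 0], num_gt=2)
ap_crop = average_precision([0.95, 0.6], [1, 1], num_gt=2)
print("mAP =", (ap_weed + ap_crop) / 2)
```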

4.4. Key Issues to Be Addressed for Deep Learning Detection System

Deep learning detection algorithms have achieved remarkable results in the field of computer vision, greatly promoting the development of tasks such as target detection and image recognition [68]. However, deep learning technology still faces a series of challenges in practical applications, such as small-target detection, the balance between speed and accuracy, the detection of rare categories, and positioning accuracy. Analyzing the problems of deep learning target-detection algorithms in depth is of vital importance for further improving and optimizing the algorithms and expanding their applications to more complex scenarios. In this paper, we discuss the small-target detection problem and the balance between speed and accuracy, and present solutions to them, as shown in Table 4.

4.4.1. Small Object-Detection Problem

Since the input image is partitioned into fixed-size grids for detection, small targets often occupy only a tiny portion of a single grid. This makes it challenging for the network to fully extract and learn the features of small targets, ultimately leading to relatively low detection accuracy for them. Because small targets account for a small proportion of the image and their features are not obvious, traditional deep learning detection algorithms often have difficulty detecting them accurately. An effective solution to this problem has been proposed in [74]: the YOLOv5 algorithm was improved by introducing a Convolutional Block Attention Module (CBAM) and a Transformer encoder module into the backbone feature-extraction network. CBAM enhances the network's focus on vegetable targets and suppresses the interference of irrelevant information, thereby improving the salience of vegetables against complex farmland backgrounds. The Transformer encoder [75] module improves the model's ability to capture global information and rich contextual information [76], and enhances the feature-extraction ability for targets in complex backgrounds. Through these improvements, the algorithm detects targets more accurately, thereby indirectly improving the detection accuracy for small target weeds.

4.4.2. The Balance Between Speed and Accuracy

In the process of model inference, detection accuracy is often sacrificed to a certain extent to improve detection speed, which may not meet the requirements of application scenarios with high accuracy demands. When both speed and accuracy need to be taken into account, it can be difficult to adjust the network parameters and structure, and a trade-off between the two must be made. An effective solution to this balance has been proposed in [72]: the network structure is improved by introducing GSConv and VoV-GSCSP to replace part of the Neck layer [77] of the model, achieving a slim-neck design. Through hybrid convolution, GSConv constructs a new convolution layer that yields a lightweight model while maintaining accuracy, achieving an excellent balance between accuracy and speed. VoV-GSCSP minimizes computational complexity while preserving accuracy; it has a simple structure, fast inference speed, and is more hardware-friendly. Through these improvements, the system in that work achieves higher accuracy (an mAP of 0.959) together with a faster response speed, showing that the scheme effectively addresses the speed-accuracy balance and makes the detector more suitable for scenarios with high real-time requirements, such as agricultural production.

5. Applications of Deep Learning Algorithms in Weed Control

With the continuous development of "smart agriculture", advanced technology has gradually been applied to all aspects of agricultural production. The intelligent laser robot weeding system brings intelligent technology to the agricultural weeding process. New technologies such as deep learning, computer vision, and automated control are therefore expected to ride this development trend and introduce more efficient weeding technology into agriculture, continuously improving the removal of weeds in farmland. A good intelligent laser robot weeding system can achieve precise and efficient weeding operations, and deep learning algorithms play a vital role in it [78].
Table 5 shows the application scenarios of deep learning algorithms in the laser weeding system. Table 6 shows how deep learning algorithms solve problems in the laser weeding system.

5.1. Application of YOLO-Based Target-Detection Algorithm in Weed Control

The YOLO detection algorithm [89] plays a vital role in the field of weed control. With its efficient real-time detection capability and accurate target recognition, it provides strong support for weed control: through rapid analysis of farmland images, the YOLO algorithm can quickly locate the positions of weeds, providing key information for subsequent removal. One innovative solution performs weed detection with the YOLOv8 algorithm. In this solution, a separate camera installed on the front pole of the robot identifies the stems of the crop rows, thereby accurately detecting weeds in the path and weeds close to the crops. The YOLOv8 algorithm is deeply integrated into the robot's weed-detection system, allowing weeds to be detected quickly and accurately: as the robot runs through the farmland, weeds are detected in real time, and once a weed is detected, its coordinates are quickly marked by the system [107]. In [108], the authors use the YOLOv4 algorithm to implement weed detection. Robots equipped with cameras and sensors are deployed in the fields to collect crop images in real time, and these image data are transmitted to a big data system for storage and analysis. The big data system uses tools such as Apache Kafka and Apache Spark Streaming to process and optimize the image data in real time, improving data quality and availability. The processed images are then analyzed by a YOLOv4-based deep learning algorithm to detect the presence of weeds.

5.2. Application of Object-Detection Algorithm Based on Faster R-CNN in Weed Control

Ref. [82] proposed an improved algorithm for reverse recognition of farmland weeds based on Faster R-CNN. The algorithm uses the image-generation ability of Cycle-GAN to solve the problem of scarce training samples, and mixes Cycle-GAN with Faster R-CNN to improve the weed-recognition ability. In the implementation of the weed-recognition algorithm, the collected images are input into Cycle-GAN [109] for amplification, and the synthesized samples are used as a data set for image classification. Then, the convolution layer is input to extract feature maps, and then the candidate regions are generated through the RPN network. The output is converted to a fixed size through the pooling layer, and finally a full connection operation is performed to train the classification probability and bounding-box regression, and the class and precise position of the candidate region are output. The experimental results show that the recognition rate of this method in the normal test-set pictures can reach 95.06%, which is better than the 87.59% of the traditional Faster R-CNN. It has the advantages of fast recognition speed and good real-time performance, and has application value in orchard and garden weeding.

5.3. Application of Object-Detection Algorithm Based on Convolutional Neural Network in Weed Control

CNN deep learning algorithms are widely used in the field of image recognition. An application scheme for weed detection using a CNN has been proposed. Specifically, image data of crops and weeds are collected and augmented to increase their diversity and robustness. A convolutional neural network model is then constructed, comprising ReLU layers, 2D convolution layers, 2D max-pooling layers, Dropout layers, a Flatten layer, and fully connected layers. Through training and optimization, the model can accurately classify crops and weeds. In practical applications, the model can be combined with an automatic weeding robot to achieve the automatic detection and removal of weeds [82].

5.4. Application of Other Detection Algorithms in Weed Control

A system to track the behaviour or pattern of a mobile robot through an RNN technique proposed a hybrid RNN-based model to optimize the tracking performance of mobile autonomous robots. The model handles various trajectories in agricultural environments, including curves and straight lines, by integrating spiral, lateral, and linear speed control, and was field tested on a tracked mobile platform [83]. The computer-vision-based robotic weed-control system (WCS) for precision agriculture in [99] performs real-time weed control in onion fields. The system identifies weeds through steps such as image enhancement, feature extraction, and classification. The WCS consists of an ATMEGA8 microcontroller, a Node MCU microcontroller, a Raspberry Pi, a camera, and an ultrasonic sensor, and can be remotely controlled and monitored through network services. Ref. [110] introduces an automatic weeding robot for organic farming fields. The robot consists of an Arduino Uno, a motor driver, a servo motor, a DC motor, wheels, a chassis, a blade, an ultrasonic sensor, and other components; the ultrasonic sensor detects and avoids obstacles, thereby achieving the weeding function. In the future, LIDAR can be used for obstacle detection and avoidance, and image-processing technology can be used to identify and remove weeds. Ref. [102] developed an automatic weed-detection and -killing robot that uses computer-vision algorithms to detect and classify weeds: it collects images through a camera, processes them on a Raspberry Pi, and uses image processing and OpenCV libraries to identify weeds. The automated computer-vision-based weed-removal Bot44 is an automatic weed-removal robot based on computer vision: it collects images through a Raspberry Pi camera, uses Mask R-CNN for image segmentation [111], extracts features, and uses VGG16 for image classification to distinguish weeds from crops. If a plant is identified as a weed, a Delta robot arm removes it with a high-speed rotating blade. The robot's recognition accuracy reached 99.5%.

6. Future Trends

Traditional weed-control methods have a series of drawbacks, such as damage to soil structure and impact on the ecological environment. The laser weed-control system based on deep learning is an effective solution to these drawbacks. In recent years, research on deep learning technology has drawn increasing attention from academia and industry. This section discusses the future development directions of deep learning target-detection algorithms in the field of weed control.

6.1. Weeding System Based on Multimodal Data Fusion

Various types of data, such as image data, spectral data, geographic location data, and environmental data, will be used in a comprehensive manner. The farmland images obtained by the camera can show the color, shape, texture and other information of crops and weeds; the spectral data collected by the spectral sensor can reflect their physiological characteristics and chemical composition; the geographic location data helps to understand the soil conditions and climate characteristics of different regions; environmental data such as temperature, humidity, and light intensity have an important impact on the growth of crops and weeds. The fusion of these different modal data can achieve more comprehensive information perception, thereby more accurately identifying weeds and crops. At the same time, multimodal data fusion can enable the weeding system to better adapt to complex farmland environments, such as different soil types, climate conditions, and weed species, and improve the robustness and generalization ability of the system. In practical applications, these data can be input into a multimodal deep learning model, which can simultaneously learn and fuse the features of multiple data, thereby achieving more accurate weed identification and weeding decisions. In addition, combined with geographic location and environmental data, it can also provide a personalized weeding solution for the weeding system, adjust the laser power and weeding timing according to the characteristics of different farmlands, improve weeding effects and reduce energy consumption. In short, multimodal data fusion will bring great potential to the development of deep learning laser weeding technology, enabling it to serve agricultural production more efficiently and accurately.
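As a hedged sketch of this late-fusion idea (not an architecture from the reviewed literature), the following PyTorch example concatenates CNN image features with spectral/environmental measurements before the weed/crop decision; all dimensions and feature counts are illustrative assumptions.

```python
# Sketch of the multimodal late-fusion idea described above: image features from a
# CNN branch are concatenated with spectral/environmental measurements before the
# weed/crop decision. All dimensions are illustrative assumptions.
import torch
import torch.nn as nn

class MultimodalWeedNet(nn.Module):
    def __init__(self, num_env_features=8, num_classes=2):
        super().__init__()
        self.image_branch = nn.Sequential(          # image modality
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
        )
        self.env_branch = nn.Sequential(            # spectral / environmental modality
            nn.Linear(num_env_features, 16), nn.ReLU(),
        )
        self.head = nn.Linear(16 + 16, num_classes)  # fused weed/crop decision

    def forward(self, image, env):
        return self.head(torch.cat([self.image_branch(image),
                                     self.env_branch(env)], dim=1))

model = MultimodalWeedNet()
logits = model(torch.randn(1, 3, 64, 64), torch.randn(1, 8))   # -> shape (1, 2)
```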

6.2. Intelligent Decision Making

Intelligent decision making enables the weeding system to make more accurate, context-aware decisions based on the real-time situation of the farmland. In this process, the weeding system makes full use of deep learning algorithms to analyze and process multimodal data. First, the system acquires multimodal data from the field in real time, including images, spectra, geographic locations, and environmental measurements; together these data reflect the current state of the field, such as the growth status of the crops and the distribution and growth stage of the weeds. The deep learning algorithms then analyze these data in depth to extract valuable features and information; through training on large amounts of data, they learn to identify different types of weeds and to judge their growth status and the degree of threat they pose to crops. Based on these results, the weeding system automatically formulates an optimal weeding plan: it adjusts the laser power and irradiation range according to the density and distribution of the weeds so that weeds are removed effectively while damage to crops is minimized, and it schedules the weeding operation with the growth stage and needs of the crops in mind to avoid adverse effects on crop growth. During weeding, the system continuously monitors changes in the field and adjusts its strategy in response to real-time data; if weed growth changes, the plan is automatically re-evaluated and optimized to maintain the best weeding effect. Intelligent decision making can also be integrated with other agricultural technologies, such as precision irrigation and fertilization, so that the various operations are coordinated according to the actual needs of the field, improving the efficiency and quality of agricultural production.
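A minimal, rule-based sketch of such a decision step is given below. The `Detection` fields, the confidence and power thresholds, and the seedling-stage derating factor are hypothetical values chosen for illustration; a real system would learn or calibrate these parameters from field trials.

```python
# Hypothetical rule-based decision step mapping detections to per-target laser settings.
from dataclasses import dataclass

@dataclass
class Detection:
    label: str          # "weed" or "crop"
    confidence: float   # detector confidence in [0, 1]
    area_cm2: float     # estimated target area from the bounding box and camera geometry

def plan_laser_actions(detections, weed_density_per_m2: float,
                       crop_stage: str = "seedling", max_power_w: float = 50.0):
    """Return a laser power and dwell time for each confirmed weed (assumed heuristics)."""
    actions = []
    # Denser infestations get a higher base power to keep the pass time reasonable.
    base_power = 20.0 if weed_density_per_m2 < 10 else 30.0
    for det in detections:
        if det.label != "weed" or det.confidence < 0.6:
            continue  # skip crops and low-confidence detections
        power = min(max_power_w, base_power + 0.5 * det.area_cm2)
        if crop_stage == "seedling":
            power *= 0.8  # be more conservative near vulnerable crops
        dwell_s = 0.5 + 0.02 * det.area_cm2  # larger weeds need longer irradiation
        actions.append({"power_w": round(power, 1), "dwell_s": round(dwell_s, 2)})
    return actions

# Example: two weeds and one crop detected in a moderately infested seedling field.
dets = [Detection("weed", 0.9, 12.0), Detection("crop", 0.95, 40.0), Detection("weed", 0.7, 4.0)]
print(plan_laser_actions(dets, weed_density_per_m2=15))
```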

7. Conclusions

The laser weeding system based on deep learning has brought new breakthroughs in addressing the drawbacks of traditional weeding methods. Traditional approaches such as manual weeding are time consuming and labor intensive, while chemical weeding can lead to pesticide residues and environmental pollution. A deep-learning-based laser weeding system can perform automated, precise weeding operations, improving weeding efficiency and reducing the impact on the environment. In such systems, YOLO and CNN are the most common detection methods: the YOLO algorithm detects targets quickly and accurately, while CNNs excel at extracting image features, and together they provide strong technical support for laser weeding. This review enables readers to understand the performance indicators of deep learning target-detection algorithms and their practical application in the field of weed control. We hope that this investigation will help improve agricultural production efficiency, reduce costs, and protect the ecological environment. In addition, it provides useful references for researchers and practitioners in related fields and promotes the further application and development of deep learning in agriculture.

Author Contributions

C.W., T.X., and C.S. wrote the main manuscript text. R.J. prepared all the figures and supervised the writing of the entire manuscript. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported in part by the Ministry of Education Industry-University Cooperation Collaborative Education Project (No. 202002284010), in part by the Shandong Province Technology Innovation Guidance Plan (No. YDZX2024018), in part by the Qingdao Science and Technology Demonstration project—New modern agriculture project in 2024 (No. 24-2-8-xdny-11-nsh), and in part by the Shandong Province College Students Innovation and Entrepreneurship Training Program (No. S202310435001, No. S202310435163, No. S202310435151, No. S202310435149, No. S202310435032, No. S202410435034, No. S202410435101, No. S202410435069, No. S202410435066 and No. S202410435065).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data sharing not applicable.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Ahmadi, A.; Halstead, M.; Smitt, C.; McCool, C. BonnBot-I Plus: A Bio-Diversity Aware Precise Weed Management Robotic Platform. IEEE Robot. Autom. Lett. 2024, 9, 6560–6567. [Google Scholar] [CrossRef]
  2. Alcantara, A.; Magwili, G.V. CROPBot: Customized Rigid Organic Plantation Robot. In Proceedings of the 2022 International Conference on Emerging Technologies in Electronics, Computing and Communication (ICETECC), Jamshoro, Sindh, Pakistan, 7–9 December 2022; pp. 1–7. [Google Scholar]
  3. Andreasen, C.; Scholle, K.; Saberi, M. Laser Weeding With Small Autonomous Vehicles: Friends or Foes? Front. Agron. 2022, 4, 841086. [Google Scholar] [CrossRef]
  4. Krupanek, J.; de Santos, P.G.; Emmi, L.; Wollweber, M.; Sandmann, H.; Scholle, K.; Di Minh Tran, D.; Schouteten, J.J.; Andreasen, C. Environmental performance of an autonomous laser weeding robot—A case study. Int. J. Life Cycle Assess. 2024, 29, 1021–1052. [Google Scholar] [CrossRef]
  5. Arakeri, M.P.; Vijaya Kumar, B.P.; Barsaiya, S.; Sairam, H.V. Computer vision based robotic weed control system for precision agriculture. In Proceedings of the 2017 International Conference on Advances in Computing, Communications and Informatics (ICACCI), Udupi, India, 13–16 September 2017; pp. 1201–1205. [Google Scholar]
  6. Aoki, T.; Inada, S.; Shimizu, D. Development of a Snake Robot to Weed in Rice Paddy Field and Trial of Field Test. In Proceedings of the 2024 IEEE/SICE International Symposium on System Integration (SII), Ha Long, Vietnam, 8–11 January 2024; pp. 1537–1542. [Google Scholar]
  7. Zhao, P.; Chen, J.; Li, J.; Ning, J.; Chang, Y.; Yang, S. Design and Testing of an autonomous laser weeding robot for strawberry fields based on DIN-LW-YOLO. Comput. Electron. Agric. 2025, 229, 109808. [Google Scholar] [CrossRef]
  8. Banu, E.A.; Chidambaranathan, S.; Jose, N.N.; Kadiri, P.; Abed, R.E.; Al-Hilali, A. A System to Track the Behaviour or Pattern of Mobile Robot Through RNN Technique. In Proceedings of the 2024 4th International Conference on Advance Computing and Innovative Technologies in Engineering (ICACITE), Greater Noida, India, 14–15 May 2024; pp. 2003–2005. [Google Scholar]
  9. Fossum, E.R. CMOS image sensors: Electronic camera-on-a-chip. IEEE Trans. Electron. Dev. 1997, 44, 1689–1698. [Google Scholar] [CrossRef]
  10. Litwiller, D. CCD vs. CMOS. Photonics Spectra 2001, 35, 154–158. [Google Scholar]
  11. Shung, K.K.; Smith, M.; Tsui, B.M. Principles of Medical Imaging; Academic Press: Cambridge, MA, USA, 2012. [Google Scholar]
  12. Mur-Artal, R.; Tardós, J.D. ORB-SLAM2: An open-source SLAM system for monocular, stereo, and RGB-D cameras. IEEE Trans. Robot. 2017, 33, 1255–1262. [Google Scholar] [CrossRef]
  13. Hu, X.; Chang, Y.; Qin, H.; Xiao, J.; Cheng, H. Binocular ranging method based on improved YOLOv8 and GMM image point set matching. J. Graph. 2024, 45, 714. [Google Scholar]
  14. Jiang, M.; Tian, Z.; Yu, C.; Shi, Y.; Liu, L.; Peng, T.; Hu, X.; Yu, F. Intelligent 3D garment system of the human body based on deep spiking neural network. Virtual Real. Intell. Hardw. 2024, 6, 43–55. [Google Scholar] [CrossRef]
  15. Moldvai, L.; Mesterhazi, P.A.; Teschner, G.; Nyeeki, A. Weed Detection and Classification with Computer Vision Using a Limited Image Dataset. Appl. Sci. 2024, 14, 4839. [Google Scholar] [CrossRef]
  16. Verma, A.; Al-Jawahry, H.M.; Alsailawi, H.A.; Kirubanantham, P.; SherinEliyas; Saadoun, O.N.; Manalmoradkarim; Zaidan, D.T. The Gm System for the use of AGMR System. In Proceedings of the 2024 4th International Conference on Advance Computing and Innovative Technologies in Engineering (ICACITE), Greater Noida, India, 14–15 May 2024; pp. 1114–1119. [Google Scholar]
  17. LeCun, Y.; Bottou, L.; Bengio, Y.; Haffner, P. Gradient-based learning applied to document recognition. Proc. IEEE 1998, 86, 2278–2324. [Google Scholar] [CrossRef]
  18. Glorot, X.; Bengio, Y. Understanding the difficulty of training deep feedforward neural networks. In Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, JMLR Workshop and Conference Proceedings, Sardinia, Italy, 13–15 May 2010; pp. 249–256. [Google Scholar]
  19. Ren, S.; He, K.; Girshick, R.; Sun, J. Faster R-CNN: Towards real-time object detection with region proposal networks. IEEE Trans. Pattern Anal. 2016, 39, 1137–1149. [Google Scholar] [CrossRef] [PubMed]
  20. Tian, Z.; Shen, C.; Chen, H.; He, T. FCOS: Fully convolutional one-stage object detection. arXiv 2019, arXiv:1904.01355. [Google Scholar]
  21. Duan, K.; Bai, S.; Xie, L.; Qi, H.; Huang, Q.; Tian, Q. Centernet: Keypoint triplets for object detection. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Seoul, Republic of Korea, 27 October–2 November 2019; pp. 6569–6578. [Google Scholar]
  22. Sirikunkitti, S.; Chongcharoen, K.; Yoongsuntia, P.; Ratanavis, A. Progress in a Development of a Laser-Based Weed Control System. In Proceedings of the 2019 Research, Invention, and Innovation Congress (RI2C), Bangkok, Thailand, 11–13 December 2019; pp. 1–4. [Google Scholar]
  23. Zhang, H.; Cao, D.; Zhou, W.; Currie, K. Laser and optical radiation weed control: A critical review. Precis. Agric. 2024, 25, 2033–2057. [Google Scholar] [CrossRef]
  24. Kashyap, P.K.; Kumar, S.; Jaiswal, A.; Prasad, M.; Gandomi, A.H. Towards precision agriculture: IoT-enabled intelligent irrigation systems using deep learning neural network. IEEE Sens. J. 2021, 21, 17479–17491. [Google Scholar] [CrossRef]
  25. Yaseen, M.U.; Long, J.M. Laser Weeding Technology in Cropping Systems: A Comprehensive Review. Agronomy 2024, 14, 2253. [Google Scholar] [CrossRef]
  26. Andreasen, C.; Vlassi, E.; Salehan, N. Laser weeding of common weed species. Front. Plant Sci. 2024, 15, 1375164. [Google Scholar] [CrossRef]
  27. Ji, Y. Theoretical and Experimental Research on Weed Removal by 1064 nm Laser. Master’s Thesis, Changchun University of Science and Technology, Changchun, China, 2024. [Google Scholar]
  28. Lameski, P.; Zdravevski, E.; Kulakov, A. Review of automated weed control approaches: An environmental impact perspective. In Proceedings of the ICT Innovations 2018. Engineering and Life Sciences: 10th International Conference, ICT Innovations 2018, Ohrid, Macedonia, 17–19 September 2018; Proceedings 10. Springer: Berlin/Heidelberg, Germany, 2018; pp. 132–147. [Google Scholar]
  29. Ju, M.R.; Luo, H.B.; Wang, Z.B.; He, M.; Chang, Z.; Hui, B. Improved YOLOV3 Algorithm and Its Application in Small Target Detection. Acta Opt. Sin. 2019, 39, 0715004. [Google Scholar]
  30. Xu, Y.; Jiang, M.; Li, Y.; Wu, Y.; Lu, G. Fruit target detection based on improved YOLO and NMS. J. Electron. Meas. Instrum. 2023, 36, 114–123. [Google Scholar]
  31. Li, C.Y.; Yao, J.M.; Lin, Z.X.; Yan, Q.; Fan, B.Q. Object Detection Method Based on Improved YOLO Light weight Network. Laser Optoelectron. Prog. 2020, 57, 141003. [Google Scholar]
  32. Jia, S.; Dabo, G.; Yang, T. Real Time Object Detection Based on Improved YOLOv3 Network. Laser Optoelectron. Prog. 2020, 57, 221505. [Google Scholar]
  33. Chen, X.; Gupta, A. An implementation of faster rcnn with study for region sampling. arXiv 2017, arXiv:1702.02138. [Google Scholar]
  34. Johnson, J.W. Adapting mask-rcnn for automatic nucleus segmentation. arXiv 2018, arXiv:1805.00500. [Google Scholar]
  35. Pei, W.; Xu, Y.; Zhu, Y.; Wang, P.; Lu, M.; Li, F. The Target Detection Method of Aerial Photography Images with Improved SSD. J. Softw. 2019, 30, 738–758. [Google Scholar]
  36. Zhen, Z.; Li, M.; Liu, H.; Ma, J. Improved SSD algorithm and its application in target detection. Comput. Appl. Softw. 2021, 38, 226–231. [Google Scholar]
  37. Wu, T.; Zhang, Z.; Liu, Y.; Pei, W.; Chen, H. A Lightweight small target detection algorithm based on improved SSD. Infrared Laser Eng. 2018, 47, 37–43. [Google Scholar] [CrossRef]
  38. Ali, M.A.M.; Aly, T.; Raslan, A.T.; Gheith, M.; Amin, E.A. Advancing Crowd Object Detection: A Review of YOLO, CNN and ViTs Hybrid Approach. J. Intell. Learn. Syst. Appl. 2024, 16, 175–221. [Google Scholar] [CrossRef]
  39. Rashid, Y.; Bhat, J.I. OlapGN: A multi-layered graph convolution network-based model for locating influential nodes in graph networks. Knowl.-Based Syst. 2024, 283, 111163. [Google Scholar] [CrossRef]
  40. Liu, L.; Ouyang, W.; Wang, X.; Fieguth, P.; Chen, J.; Liu, X.; Pietikäinen, M. Deep learning for generic object detection: A survey. Int. J. Comput. Vision 2020, 128, 261–318. [Google Scholar] [CrossRef]
  41. Zhao, L.; Zhang, Z. A improved pooling method for convolutional neural networks. Sci. Rep. 2024, 14, 1589. [Google Scholar] [CrossRef]
  42. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778. [Google Scholar]
  43. Katona, T.; Tóth, G.; Petró, M.; Harangi, B. Developing New Fully Connected Layers for Convolutional Neural Networks with Hyperparameter Optimization for Improved Multi-Label Image Classification. Mathematics 2024, 12, 806. [Google Scholar] [CrossRef]
  44. Dalm, S.; Offergeld, J.; Ahmad, N.; van Gerven, M. Efficient deep learning with decorrelated backpropagation. arXiv 2024, arXiv:2405.02385. [Google Scholar]
  45. Qin, H.; Wang, J.; Mao, X.; Zhao, Z.; Gao, X.; Lu, W. An improved faster R-CNN method for landslide detection in remote sensing images. J. Geovis. Spat. Anal. 2024, 8, 2. [Google Scholar] [CrossRef]
  46. Yao, X.; Chen, H.; Li, Y.; Sun, J.; Wei, J. Lightweight image super-resolution based on stepwise feedback mechanism and multi-feature maps fusion. Multimed. Syst. 2024, 30, 39. [Google Scholar] [CrossRef]
  47. Zhang, J. Rpn: Reconciled polynomial network towards unifying pgms, kernel svms, mlp and kan. arXiv 2024, arXiv:2407.04819. [Google Scholar]
  48. He, K.; Gkioxari, G.; Dollár, P.; Girshick, R. Mask r-cnn. In Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy, 22–29 October 2017; pp. 2961–2969. [Google Scholar]
  49. Jiao, L.; Abdullah, M.I. YOLO series algorithms in object detection of unmanned aerial vehicles: A survey. Serv. Oriented. Comput. Appl. 2024, 18, 269–298. [Google Scholar] [CrossRef]
  50. Parfenov, A.; Parfenov, D. Creating an Image Recognition Model to Optimize Technological Processes. In Proceedings of the 2024 IEEE International Multi-Conference on Engineering, Computer and Information Sciences (SIBIRCON), Novosibirsk, Russia, 30 September–2 October 2024; pp. 421–424. [Google Scholar]
  51. Mohammed, K. Reviewing Mask R-CNN: An In-depth Analysis of Models and Applications. Easychair Prepr. 2024, 2024, 11838. [Google Scholar]
  52. Michael, J.J.; Thenmozhi, M. Survey on Weeding Tools, Equipment, AI-IoT Robots with Recent Advancements. In Proceedings of the 2023 International Conference on Integration of Computational Intelligent System (ICICIS), Pune, India, 1–4 November 2023; pp. 1–6. [Google Scholar]
  53. Islam, S.; Elmekki, H.; Elsebai, A.; Bentahar, J.; Drawel, N.; Rjoub, G.; Pedrycz, W. A comprehensive survey on applications of transformers for deep learning tasks. Expert Syst. Appl. 2024, 241, 122666. [Google Scholar] [CrossRef]
  54. Tang, Y.; Wang, Y.; Guo, J.; Tu, Z.; Han, K.; Hu, H.; Tao, D. A survey on transformer compression. arXiv 2024, arXiv:2402.05964. [Google Scholar]
  55. Han, X.; Chang, J.; Wang, K. You only look once: Unified, real-time object detection. Procedia Comput. Sci. 2021, 183, 61–72. [Google Scholar] [CrossRef]
  56. Zou, Z.; Chen, K.; Shi, Z.; Guo, Y.; Ye, J. Object detection in 20 years: A survey. Proc. IEEE 2023, 111, 257–276. [Google Scholar] [CrossRef]
  57. Savinainen, O. Uncertainty Estimation and Confidence Calibration in YOLO5Face. Master’s Thesis, Department of Electrical Engineering, Linköping University, Linköping, Sweden, 2024. Available online: https://www.diva-portal.org/smash/get/diva2:1871866/FULLTEXT01.pdf (accessed on 26 May 2025).
  58. Russell, M.; Fischaber, S. OpenCV based road sign recognition on Zynq. In Proceedings of the 2013 11th IEEE International Conference on Industrial Informatics (INDIN), Bochum, Germany, 29–31 July 2013; pp. 596–601. [Google Scholar]
  59. Chandrika, R.R.; Vanitha, K.; Thahaseen, A.; Chandramma, R.; Neerugatti, V.; Mahesh, T. Number Plate Recognition Using OpenCV. In Proceedings of the 2024 International Conference on Emerging Smart Computing and Informatics (ESCI), Pune, India, 5–7 March 2024; pp. 1–4. [Google Scholar]
  60. Mentzingen, H.; Antonio, N.; Lobo, V. Joining metadata and textual features to advise administrative courts decisions: A cascading classifier approach. Artif. Intell. Law 2024, 32, 201–230. [Google Scholar] [CrossRef]
  61. Lai, Z.; Liang, G.; Zhou, J.; Kong, H.; Lu, Y. A joint learning framework for optimal feature extraction and multi-class SVM. Inf. Sci. 2024, 671, 120656. [Google Scholar] [CrossRef]
  62. Chandan, G.; Jain, A.; Jain, H. Real time object detection and tracking using Deep Learning and OpenCV. In Proceedings of the 2018 International Conference on inventive research in computing applications (ICIRCA), Coimbatore, India, 11–12 July 2018; pp. 1305–1308. [Google Scholar]
  63. Jiao, L.; Zhang, F.; Liu, F.; Yang, S.; Li, L.; Feng, Z.; Qu, R. A survey of deep learning-based object detection. IEEE Access 2019, 7, 128837–128868. [Google Scholar] [CrossRef]
  64. Wu, X.; Sahoo, D.; Hoi, S.C. Recent advances in deep learning for object detection. Neurocomputing 2020, 396, 39–64. [Google Scholar] [CrossRef]
  65. Hui, Y.; You, S.; Hu, X.; Yang, P.; Zhao, J. SEB-YOLO: An Improved YOLOv5 Model for Remote Sensing Small Target Detection. Sensors 2024, 24, 2193. [Google Scholar] [CrossRef]
  66. Hanley, J.A.; McNeil, B.J. The meaning and use of the area under a receiver operating characteristic (ROC) curve. Radiology 1982, 143, 29–36. [Google Scholar] [CrossRef] [PubMed]
  67. Rosebrock, A. Intersection over Union (IoU) for Object Detection. 2016. Available online: https://www.pyimagesearch.com/2016/11/07/intersection-over-union-iou-for-object-detection/ (accessed on 18 May 2021).
  68. Rane, N.; Kaya, O.; Rane, J. Advancing the Sustainable Development Goals (SDGs) through artificial intelligence, machine learning, and deep learning. Artif. Intell. Mach. Learn. Deep. Learn. Sustain. Ind. 2024, 5, 2–74. [Google Scholar]
  69. Zhang, W.; Sun, H.; Chen, X.; Li, X.; Yao, L.; Dong, H. Research on weed detection in vegetable seedling fields based on the improved YOLOv5 intelligent weeding robot. J. Graph. 2023, 44, 346. [Google Scholar]
  70. Yang, Z.; Li, H. Improved weed recognition algorithm based on YOLOv5-SPD. J. Shanghai Univ. Eng. Sci. Gongcheng Jishu Daxue Xuebao 2024, 38, 75–82. [Google Scholar]
  71. Jiawei, H.; Xia, L.; Fangtao, D.; Mengchao, H.; Xiwang, D.; Tengfei, T. Research on weed recognition and positioning method of laser weeding robot based on deep learning. J. Tianjin Univ. Technol. 2024, 40, 1–9. [Google Scholar]
  72. Chu, B.; Shao, R.; Fang, Y.; Lu, Y. Weed Detection Method Based on Improved YOLOv8 with Neck-Slim. In Proceedings of the 2023 China Automation Congress (CAC), Chongqing, China, 17–19 November 2023; pp. 9378–9382. [Google Scholar]
  73. Tesema, S.N.; Bourennane, E.B. Denseyolo: Yet faster, lighter and more accurate yolo. In Proceedings of the 2020 11th IEEE Annual Information Technology, Electronics and Mobile Communication Conference (IEMCON), Vancouver, BC, Canada, 4–7 November 2020; pp. 0534–0539. [Google Scholar]
  74. Chen, H.; Jin, H.; Lv, S. YOLO-DSD: A YOLO-Based Detector Optimized for Better Balance between Accuracy, Deployability and Inference Time in Optical Remote Sensing Object Detection. Appl. Sci. 2022, 12, 7622. [Google Scholar] [CrossRef]
  75. Hammad, A.; Moretti, S.; Nojiri, M. Multi-scale cross-attention transformer encoder for event classification. J. High Energy Phys. 2024, 2024, 144. [Google Scholar] [CrossRef]
  76. Li, R.; Li, Y.; Qin, W.; Abbas, A.; Li, S.; Ji, R.; Wu, Y.; He, Y.; Yang, J. Lightweight Network for Corn Leaf Disease Identification Based on Improved YOLO v8s. Agriculture 2024, 14, 220. [Google Scholar] [CrossRef]
  77. Min, X.; Zhou, W.; Hu, R.; Wu, Y.; Pang, Y.; Yi, J. Lwuavdet: A lightweight uav object detection network on edge devices. IEEE Internet Things J. 2024, 11, 24013–24023. [Google Scholar] [CrossRef]
  78. Selmy, H.A.; Mohamed, H.K.; Medhat, W. Big data analytics deep learning techniques and applications: A survey. Inf. Syst. 2024, 120, 102318. [Google Scholar] [CrossRef]
  79. Raja, R.; Nguyen, T.T.; Slaughter, D.C.; Fennimore, S.A. Real-time weed-crop classification and localisation technique for robotic weed control in lettuce. Biosyst. Eng. 2020, 192, 257–274. [Google Scholar] [CrossRef]
  80. Qin, L.; Xu, Z.; Wang, W.; Wu, X. YOLOv7-Based Intelligent Weed Detection and Laser Weeding System Research: Targeting Veronica didyma in Winter Rapeseed Fields. Agriculture 2024, 14, 910. [Google Scholar] [CrossRef]
  81. Rakhmatulin, I.; Andreasen, C. A Concept of a Compact and Inexpensive Device for Controlling Weeds with Laser Beams. Agronomy 2020, 10, 1616. [Google Scholar] [CrossRef]
  82. Yu, Z.; He, X.; Qi, P.; Wang, Z.; Liu, L.; Han, L.; Huang, Z.; Wang, C. A Static Laser Weeding Device and System Based on Fiber Laser: Development, Experimentation, and Evaluation. Agronomy 2024, 14, 1426. [Google Scholar] [CrossRef]
  83. Xiong, Y.; Ge, Y.; Liang, Y.; Blackmore, S. Development of a prototype robot and fast path-planning algorithm for static laser weeding. Comput. Electron. Agric. 2017, 142, 494–503. [Google Scholar] [CrossRef]
  84. Chen, B.; Tojo, S.; Watanabe, K. Machine Vision for a Micro Weeding Robot in a Paddy Field. Biosyst. Eng. 2003, 85, 393–404. [Google Scholar] [CrossRef]
  85. Coleman, G.; Betters, C.; Squires, C.; Leon-Saval, S.; Walsh, M. Low Energy Laser Treatments Control Annual Ryegrass (Lolium rigidum). Front. Agron. 2021, 2, 601542. [Google Scholar] [CrossRef]
  86. Wangwang, W.; Zhenyang, G.; Yingjie, Y.; Huaifeng, Y.; Haoran, Z. Research on the application of laser weed control technology in upland rice fields. Agric. Eng. 2013, 3, 5–7. [Google Scholar]
  87. Kim, G.H.; Kim, S.C.; Hong, Y.K.; Han, K.S.; Lee, S.G. A robot platform for unmanned weeding in a paddy field using sensor fusion. In Proceedings of the 2012 IEEE International Conference on Automation Science and Engineering (CASE), Seoul, Republic of Korea, 20–24 August 2012; pp. 904–907. [Google Scholar]
  88. Kameyama, K.; Umeda, Y.; Hashimoto, Y. Simulation and experiment study for the navigation of the small autonomous weeding robot in paddy fields. In Proceedings of the The Society of Instrument and Control Engineers-SICE, Nagoya, Japan, 14–17 September 2013; pp. 1612–1617. [Google Scholar]
  89. Zhu, H.; Zhang, Y.; Mu, D.; Bai, L.; Zhuang, H.; Li, H. YOLOX-based blue laser weeding robot in corn field. Front. Plant Sci. 2022, 13, 1017803. [Google Scholar] [CrossRef]
  90. Xingye, M.; Bing, Y.; Jiayi, R.; Zhouchao, L.; He, Y. Intelligent laser weeding device based on STM32 microcontroller. J. Tianjin Univ. Technol. 2023, 39, 1–9. [Google Scholar]
  91. Danlei, M. Research on the Actuator and Recognition Algorithm of Laser Weeding Robot. Master’s Thesis, Kunming University of Science and Technology, Kunming, China, 2022. [Google Scholar]
  92. Chen, D.; Xiangping, F.; Yongke, L.; Shihao, Z.; Qin, S.; Junjie, Z. Research on cotton seedling and weed recognition method in Xinjiang based on improved YOLOv5. Comput. Digit. Eng. 2023, 51, 1144–1149. [Google Scholar]
  93. Panboonyuen, T.; Thongbai, S.; Wongweeranimit, W.; Santitamnont, P.; Suphan, K.; Charoenphon, C. Object detection of road assets using transformer-based YOLOX with feature pyramid decoder on thai highway panorama. Information 2021, 13, 5. [Google Scholar] [CrossRef]
  94. Debiao, M.; Dengyong, T.; Pan, L.; Yun, W.; Mingyu, H. Intelligent perception and precise control system design of laser weeding robot. China Sci. Technol. Inf. 2023, 29, 81–85. [Google Scholar]
  95. Maram, B.; Das, S.; Daniya, T.; Cristin, R. A Framework for Weed Detection in Agricultural Fields Using Image Processing and Machine Learning Algorithms. In Proceedings of the 2022 International Conference on Intelligent Controller and Computing for Smart Power (ICICCSP), Hyderabad, India, 21–23 July 2022; pp. 1–6. [Google Scholar]
  96. Ge, Z.Y.; Wu, W.W.; Yu, Y.J.; Zhang, R.Q. Design of mechanical arm for laser weeding robot. Appl. Mech. Mater. 2013, 347, 834–838. [Google Scholar] [CrossRef]
  97. Wangwang, W. Research on the Actuator of Laser Weeding Robot. Ph.D. Thesis, Kunming University of Science and Technology, Kunming, China, 2014. [Google Scholar]
  98. Andreasen, C.; Vlassi, E.; Salehan, N. Laser weeding: Opportunities and challenges for couch grass (Elymus repens (L.) Gould) control. Sci. Rep. 2024, 14, 11173. [Google Scholar] [CrossRef]
  99. Sethia, G.; Guragol, H.K.S.; Sandhya, S.; Shruthi, J.; Rashmi, N. Automated Computer Vision based Weed Removal Bot. In Proceedings of the 2020 IEEE International Conference on Electronics, Computing and Communication Technologies (CONECCT), Bangalore, India, 2–4 July 2020; pp. 1–6. [Google Scholar]
  100. Fatima, H.S.; Ul Hassan, I.; Hasan, S.; Khurram, M.; Stricker, D.; Afzal, M.Z. Formation of a Lightweight, Deep Learning-Based Weed Detection System for a Commercial Autonomous Laser Weeding Robot. Appl. Sci. 2023, 13, 3997. [Google Scholar] [CrossRef]
  101. Dhinesh, S.; Nagarajan, P.; Raghunath, M.; Sundar, S.; Dhanushree, N.; Pugazharasu, S. AI Based Weed Locating and Deweeding using Agri-Bot. In Proceedings of the 2023 Third International Conference on Smart Technologies, Communication and Robotics (STCR), Suryamangalam, India, 9–10 December 2023; Volume 1, pp. 1–6. [Google Scholar]
  102. Raffik, R.; Mayukha, S.; Hemchander, J.; Abishek, D.; Tharun, R.; Kumar, S.D. Autonomous weeding robot for organic farming fields. In Proceedings of the 2021 International Conference on Advancements in Electrical, Electronics, Communication, Computing and Automation (ICAECA), Coimbatore, India, 8–9 October 2021; pp. 1–4. [Google Scholar]
  103. Patnaik, A.; Narayanamoorthi, R. Weed removal in cultivated field by autonomous robot using LABVIEW. In Proceedings of the 2015 International Conference on Innovations in Information, Embedded and Communication Systems (ICIIECS), Coimbatore, India, 19–20 March 2015; pp. 1–5. [Google Scholar]
  104. Zhang, H.; Zhong, J.X.; Zhou, W. Precision optical weed removal evaluation with laser. In Proceedings of the CLEO: Applications and Technology, San Jose, CA, USA, 7–12 May 2023; Optica Publishing Group: Washington, DC, USA, 2023; p. JW2A.145. [Google Scholar]
  105. Jian, W.; Yuguang, Q. Farmland grass seedling detection method based on improved YOLO v5. Jiangsu Agric. Sci. 2024, 52, 197–204. [Google Scholar]
  106. Zatari, A.; Hoor, B.; Nasereddin, N. Intelligent Weeding Robot Using Deep-Learning. 2022. Available online: https://scholar.ppu.edu/handle/123456789/8955 (accessed on 1 June 2022).
  107. Zhang, J.L.; Su, W.H.; Zhang, H.Y.; Peng, Y. SE-YOLOv5x: An Optimized Model Based on Transfer Learning and Visual Attention Mechanism for Identifying and Localizing Weeds and Vegetables. Agronomy 2022, 12, 2061. [Google Scholar] [CrossRef]
  108. Khalid, N.; Elkhiri, H.; Oumaima, E.; ElFahsi, N.; Zahra, Z.F.; Abdellatif, K. Revolutionizing Weed Detection in Agriculture through the Integration of IoT, Big Data, and Deep Learning with Robotic Technology. In Proceedings of the 2023 3rd International Conference on Electrical, Computer, Communications and Mechatronics Engineering (ICECCME), Tenerife, Canary Islands, Spain, 19–21 July 2023. [Google Scholar]
  109. Wang, Z.; Yang, Z.; Song, X.; Zhang, H.; Sun, B.; Zhai, J.; Yang, S.; Xie, Y.; Liang, P. Raman spectrum model transfer method based on Cycle-GAN. Spectrochim. Acta A 2024, 304, 123416. [Google Scholar] [CrossRef]
  110. Aravind, R.; Daman, M.; Kariyappa, B.S. Design and development of automatic weed detection and smart herbicide sprayer robot. In Proceedings of the 2015 IEEE Recent Advances in Intelligent Computational Systems (RAICS), Trivandrum, India, 10–12 December 2015; pp. 257–261. [Google Scholar]
  111. Qiao, H.; Yang, X.; Liang, Z.; Liu, Y.; Ge, Z.; Zhou, J. A Method for Extracting Joints on Mountain Tunnel Faces Based on Mask R-CNN Image Segmentation Algorithm. Appl. Sci. 2024, 14, 6403. [Google Scholar] [CrossRef]
Figure 1. Intelligent laser weeding robot system architecture based on deep learning.
Figure 2. General model of deep learning object-detection algorithm.
Figure 3. Transformer network structure.
Figure 4. The workflow of the YOLO target-detection algorithm.
Table 1. The processes of a typical intelligent weed-control system.

Step | Description | Methods
Perception layer | Responsible for obtaining image data of fields and other areas to provide basic material for subsequent weed identification. | Binocular camera, USB camera
Decision layer | Analyzes and processes the image data transmitted from the perception layer to accurately identify the location and characteristics of weeds. | BP, CNN, ANN, ResNet, GoogLeNet, Transformer, YOLO
Execution layer | Based on the weed information identified by the decision layer, lasers are emitted in a targeted manner to achieve precise weeding operations. | Semiconductor laser, carbon dioxide laser
Table 2. Algorithms using deep learning techniques in weed-control systems.

Model | Deep Learning Technology | Method | Advantage
YOLO + ResNet [29] | YOLO | Add two residual units to the second block; modify six DBL units before the detection layer | Improve feature reuse; enhance small-target understanding and accuracy
YOLO + SPP5 [30] | YOLO | Design SPP5 module with refined pooling kernels; create YOLOv4-SPP2-5 model | Replace the first SPP of YOLOv4 with SPP5 and add a second SPP for multi-scale feature capture
YOLO + DenseNet [31] | YOLO | Merge channels across layers; add DenseNet layer | Reduce calculations, speed up training, optimize resource use
YOLO network branch [32] | YOLO | Add network branch, adjust anchor box, filter samples by mask | Balance positive and negative samples, improve sample learning
Faster R-CNN [33] | CNN | Migrate Faster R-CNN to TensorFlow and simplify it | Optimize training and testing speeds on the COCO dataset
Mask R-CNN [34] | CNN | Replace ROI-Pooling with ROIAlign, add FCN head | Create accurate masks, avoid feature misalignment
SSD + anti-residual module [35] | SSD | Use a series of convolutions and a sum layer | Enhance image perception, improve detection accuracy
SSD + hierarchical feature fusion [36] | SSD | Sum and concatenate hole-convolution outputs | Utilize feature-scale differences, complement features
Efnet-1 [37] | EfficientNet | Comprise normal and MB convolution modules, connect to classifier | Extract multi-scale features, enhance feature representation
Table 4. The solutions to key issues related to small target detection and the balance between speed and accuracy.

Key Issue | Solution | Reference
Small object detection | (1) Introduce the Convolutional Block Attention Module (CBAM) into the backbone feature-extraction network of the YOLOv5 target-detection algorithm and add the Transformer module. | [69]
 | (2) Combine the collaborative attention (CA) and the receptive field block (RFB) module to improve the backbone network, introduce the CA attention mechanism, use the CARAFE upsampling method, and adopt WIoU v3 to replace the CIoU loss function. | [70]
 | (3) Optimize the YOLOX algorithm by adding lightweight attention modules, adding deconvolution layers, and using GIoU instead of IoU to improve detection accuracy, enhance the extraction of small-size features, and improve the accuracy of the predicted box position. | [71]
Balance between speed and accuracy | (1) Optimize the Neck layer of YOLOv8 using GSConv and VoV-GSCSP to improve the accuracy and inference speed of the model. | [72]
 | (2) Reshape the subsequent layers so that the new output tensor corresponds to an 8 × 8 pixel grid cell instead of 32 × 32 as in YOLOv2; add two blocks consisting of a convolutional layer, a batch normalization layer, and a leaky ReLU activation layer after the reshape, and finally add an output convolutional layer. | [73]
 | (3) Propose a new feature-extraction module, DenseRes Block, to replace the CSP Block in the YOLOv4 backbone CSPDarkNet; it consists of several series residual structures and shortcut connections with the same topology, extracting features better while reducing computation and inference time. | [74]
Table 5. Application scenarios of deep learning algorithms in the laser weeding system.

Application | Goal | Method | Reference
Vegetable plot | Avoid the intensive labor of manual weeding and reduce food production costs | (1) Intelligent weed detection and laser weeding system to achieve the accurate positioning and removal of weeds | [79]
 | Achieve the accurate detection of weeds in vegetable seedling fields, with potential practical value in the research and development of smart agricultural equipment | (2) Use crop-marking technology and a machine-vision system | [80]
Laboratory environment | Demonstrate the feasibility of laser weeding equipment; provide technical support for the realization of automated agricultural machinery precision fertilization, pesticide application, and weeding | Designed and tested a laser-based weed-control device that controls weeds by irradiating the weed stems with lasers | [81]
Orchard | Improve weed-control efficiency and reduce environmental impact | (1) Developed a static, movable, liftable, and adjustable enclosed fiber-laser weeding equipment and system | [82]
 | Improve weed-control efficiency and accuracy and reduce costs | (2) Designed and produced a prototype of a laser weeding robot based on STM32, using color training and morphological feature-recognition algorithms to improve weed-recognition accuracy | [83]
Rice fields | Increase rice yield and reduce labor input; reduce herbicide use and environmental impact; improve the accuracy of crop and associated weed identification and detection under complex backgrounds | (1) Use machine-vision systems to identify weeds and crops in rice fields and guide robots to perform precise weeding | [84]
 |  | (2) Use low-energy laser processing to control the growth of weeds by irradiating specific parts of the weeds | [85]
 |  | (3) Use image processing to identify weeds, determine the amount of laser required for weeding, and use a fixed step length to perform weeding operations | [86]
 |  | (4) Develop an unmanned weeding robot platform and achieve stable autonomous navigation and weeding operations through sensor fusion | [87]
 |  | (5) Develop a small two-wheeled autonomous weeding robot that uses GPS and directional sensors for navigation, taking into account the effects of soil and GPS errors to achieve precise weeding operations | [88]
Cornfield | Increase corn yield, reduce the impact of weeds on corn, and reduce labor intensity; achieve the rapid identification and positioning of weed meristems based on laser weeding; improve the accuracy and efficiency of weed identification and provide accurate targeting for laser weeding | (1) Identify crops and weeds through the YOLOX network and calculate the coordinates of weeds using the triangular similarity principle | [89]
 |  | (2) Upload images and send control signals through the WiFi module | [90]
 |  | (3) Optimize the YOLOX algorithm and use self-made datasets for training and testing | [87]
 |  | (4) Test a new static weeding path-planning algorithm, using an image-processing algorithm based on the color and size differences of crops and weeds to separate them from the field background, detect the type of foreground, and output the location information of weeds | [91]
 |  | (5) Design and trial-produce a laser weeding robot prototype, conduct field trials, and optimize the weed-recognition algorithm | [92]
 |  | (6) Introduce a feature-extraction method based on wavelet transform to classify and identify weeds, and accurately control the sprayer to spray herbicides according to the weed location | [86]
Cotton field | Reduce the use of chemical herbicides | Use the YOLOX network to identify weeds, calculate the weed coordinates through monocular ranging, and control the end of the robotic arm to emit a laser to remove the weeds | [93]
Lawn | Achieve efficient, accurate, and environmentally friendly weed-control operations; reduce dependence on chemical agents and reduce pollution of the environment | (1) Designed and manufactured a laser weeding device based on a single-chip microcomputer, which senses and identifies weeds through sensors or cameras | [94]
 |  | (2) Designed and trained a CNN model for weed detection and classification, combining a laser range finder (LRF) and an inertial measurement unit (IMU) to detect rice seedlings and obstacles and achieve automatic weeding | [95]
Table 6. Deep learning algorithms solve the problems in laser weeding systems.

Problem | Solution | Reference
Traditional weed-control methods are labor intensive | (1) Designed and manufactured a laser weeding robot based on a robotic arm, which achieves weeding through the cooperation of the robotic arm and laser. | [96]
 | (2) Used a blue laser as the weeding actuator to design an intelligent laser weeding device. | [90]
 | (3) Designed and studied the actuator of the laser weeding robot, including laser control, weed recognition, and robot field positioning and navigation. | [97]
Mechanical weeding may damage crop roots, harm beneficial organisms, and affect soil structure; chemical weeding is harmful to humans and the environment | (1) Use low-energy laser treatment to control the growth of weeds by irradiating specific parts of the weeds. | [85]
 | (2) Experimentally study the effects of the laser on the growth and development of weeds at different growth stages and determine the optimal timing and dosage for weed control. | [26]
 | (3) Study the effect of the laser on weed control and create a weed damage model. | [84]
 | (4) Study the effect of the laser on Elymus repens and experimentally determine the laser dose and number of irradiations required to kill this weed. | [98]
Traditional herbicides have resistance problems | (1) Use small autonomous laser weeding vehicles to reduce the use of chemical herbicides and lessen the impact on the environment and organisms. | [3]
 | (2) Use CO2 laser cutting systems and laser-pointer triangulation systems to replace pesticides for weeding. | [22]
 | (3) Develop an automatic weeding robot that detects the location of weeds in real time and removes them through image recognition and processing, avoiding the use of harmful chemicals. | [99]
Traditional computer-vision methods have difficulty detecting weeds in natural scenes | (1) Develop a deep learning-based weed-detection model. | [100]
 | (2) Design and manufacture Agri-Bot, which uses image processing and AI technology to identify and locate weeds. | [101]
 | (3) Use advanced sensors and AI technology to accurately identify and remove weeds. | [102]
Large weeding robots are not suitable for the farmland environment in southwest China | (1) Design a small blue-laser weeding robot. | [89]
 | (2) Use a small autonomous robot to automatically detect and remove weeds in farmland. | [103]
Traditional detection algorithms have low recognition accuracy for small-sized and obscured weeds | (1) Use laser reflection to identify changes in weeds, and use the Cascade R-CNN deep learning method to detect and locate weeds after laser irradiation. | [104]
 | (2) Use GSConv and VoV-GSCSP to optimize the Neck layer of YOLOv8 to improve the accuracy and inference speed of the model and achieve weed detection. | [72]
 | (3) Combine the collaborative attention (CA) and the receptive field block (RFB) module to improve the backbone network and introduce the CA attention mechanism. | [105]
Wheeled mobile robots are susceptible to uncertainty and interference during operation | (1) Control the robot's movement and weeding operations through a smartphone to achieve automated weeding. | [106]
 | (2) Use a dual-servo system to adjust the laser emission angle to achieve precise weeding. | [98]
 | (3) Propose an RNN-based tracking system that coordinates multiple controllers to achieve predetermined path tracking. | [8]
 | (4) Improve the planning method and use a rolling-view observation model and a biodiversity-aware weeding method. | [1]
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
