Drone-Based Smart Weed Localization from Limited Training Data and Radiometric Calibration Parameters †

Abstract: Small object localization from drone images is presently one of the most practical tools for tasks such as weed monitoring in smart farming. While most object detection models only localize well when trained on large datasets, a few-shot learning technique can enhance scene comprehension even when provided with limited training data. This investigation introduces a few-shot model for localizing weed grasses in multispectral drone images. The model incorporates a reflectance calibration factor, enabling it to perform well on tasks on which it has not been specifically trained. An inductive transfer system enhances the model's ability to generalize and accurately localize weeds. The results demonstrate the potential of the suggested approach, combining drone-based multispectral images with the reflectance calibration factor, to detect weed grasses with an mIoU score of 71.45% and an accuracy of 84.3%, despite several difficulties in practical implementation.


Introduction
Continued world population growth will demand more high-quality food production, which can only be achieved through sustainable methods for increasing crop yields. The FAO has pointed out that weed grasses raise the environmental and economic costs of pesticide use by spreading across farm boundaries, and that their competition with agricultural crops reduces both the quantity and quality of output [1]. Among pests, weed grasses are considered a crucial biotic constraint on food production [1,2]. In traditional pest control, most farm fields show some degree of spatial variability in grass weed infestation, yet general weed management methods apply herbicides under the assumption that grass weeds are distributed uniformly across agricultural fields [3]. A smart weed localization system that supports optimized herbicide dosing in the agricultural field is therefore a crucial step for smart farming and remains an open problem in pest control [4].
A drone-based smart weed localization system is an effective method for real-time, precise grass weed control and optimized herbicide dosage selection [5]. Because weed grasses closely resemble crops, visual grass weed localization in drone-based multispectral images is an important task in precision farming [6]. At the same time, designing separate approaches for different target types at different scales is ineffective, since spectral and spatial similarities between them keep increasing. For practical applications, including weed monitoring in smart farming, small object localization from drone images has emerged as a distinctive technique [7]. Although numerous object detection models only learn localization from large amounts of training data, a few-shot learning method can enhance scene comprehension with minimal or no training data. The diversity of geospatial data from agricultural regions can make it challenging to identify the most effective technique for dataset generation and processing. Real-world applications of deep learning with few training images are often challenging; however, the resulting high-value output is valued both commercially and technologically. Farmers and researchers can therefore benefit substantially from a drone-based smart weed localization system trained on limited data.

Method
This research combines a reflectance calibration factor with weed grass localization in drone-based multispectral images to pinpoint weeds in large-scale imagery. While several weed detector models claim to handle single-time tracking with extensive training data, weed grass localization based on few-shot learning for drone-based multispectral images can improve multispectral scene understanding with little training data. Few-shot learning, a transfer approach whose main objective is to enhance generalization across numerous tasks, can perform unseen tasks after training on a small set of annotated data and considers various tasks when developing a predictive model. Although a trained model's localization of weed grasses may appear accurate, this alone is not sufficient for decision making. For instance, timely weed grass localization in agricultural fields is essential for producing high-quality crops, and the presence of weed grasses significantly decreases the area available for growing crops. Despite recent advances in deep learning and drone imaging, weed grass localization remains a challenge for smart farming. The suggested approach uses three sets of multispectral images: a training set $D_{\mathrm{train}} = \{(l_j, m_j)\}_{j=1}^{K}$ of multispectral inputs, a support set $D_{\mathrm{support}} = \{(l_j, m_j)\}_{j=1}^{K_{\mathrm{support}}}$, and a test set $D_{\mathrm{test}} = \{l_j\}_{j=1}^{k_{\mathrm{test}}}$ [8]. A limited training set of 80 image patches, each 480 × 360 pixels, was used to train the proposed network. Our network differs from comparable models in its meta-feature extraction, which relies on a convolutional neural network, and thereby overcomes the limitations of plain convolution blocks and binary mask creation for weed grass localization. The presented model consists of a set of encoders and decoders [9]. The encoder comprises five attention modules [10], while the decoder comprises a single transpose convolution module and an interpolation step, which enables the learning of spatial-spectral representations. We apply five attention layers to the multi-scale outputs of the network to aggregate multistep representations and improve the boundaries of weed grasses. The proposed layer ($a_{\mathrm{out}}$) for meta-feature extraction of the input image is defined as illustrated in Figure 1, where $\mathrm{relu}$ is the rectified linear activation function, $a_j$ is an attention function with three convolutional layers, $l_j$ is the observed data, and $m_j$ is the predicted map.
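To make the described design more concrete, the PyTorch-style sketch below shows one way an encoder of five attention modules (each built from three convolutional layers, as in $a_j$) and a decoder with a single transpose convolution followed by interpolation could be assembled for 480 × 360 multispectral patches. The channel widths, pooling, gating scheme, and sigmoid output head are assumptions made purely for illustration; this is not the authors' exact implementation.

```python
# Minimal sketch of the described encoder-decoder (assumed layout, not the
# authors' implementation): five attention modules in the encoder, one
# transpose-convolution plus interpolation in the decoder.
import torch
import torch.nn as nn
import torch.nn.functional as F


class AttentionModule(nn.Module):
    """One attention block a_j: three conv layers producing gated features."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        )
        self.gate = nn.Conv2d(out_ch, out_ch, 1)  # third conv: attention weights

    def forward(self, x):
        f = self.features(x)
        a = torch.sigmoid(self.gate(f))            # spatial attention map
        return F.relu(f * a)                       # relu of attention-weighted features


class FewShotWeedNet(nn.Module):
    """Encoder of five attention modules, decoder of transpose conv + interpolation."""
    def __init__(self, in_channels=5, n_classes=1):  # e.g. 5 RedEdge-M bands (assumed)
        super().__init__()
        chans = [in_channels, 32, 64, 128, 256, 256]
        self.encoder = nn.ModuleList(
            [AttentionModule(chans[i], chans[i + 1]) for i in range(5)]
        )
        self.pool = nn.MaxPool2d(2)
        self.decoder = nn.ConvTranspose2d(chans[-1], 64, kernel_size=2, stride=2)
        self.head = nn.Conv2d(64, n_classes, 1)

    def forward(self, x):
        h, w = x.shape[-2:]
        for i, block in enumerate(self.encoder):
            x = block(x)
            if i < 4:                              # keep the last scale unpooled
                x = self.pool(x)
        x = F.relu(self.decoder(x))
        x = F.interpolate(x, size=(h, w), mode="bilinear", align_corners=False)
        return torch.sigmoid(self.head(x))         # predicted weed map m_j


# Example: one 480 x 360 multispectral patch with 5 bands
net = FewShotWeedNet()
mask = net(torch.randn(1, 5, 360, 480))            # -> (1, 1, 360, 480)
```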

The empirical line model [11] is a frequently used method for converting digital numbers into surface reflectance on the basis of drone images captured before and after calibration. The technique assumes a linear correlation between the digital number of each pixel in an image and the surface reflectance. Typically, one or more reflectance calibration panels with known reflectance values are used to estimate this correlation [12]. Following [13,14], the reflectance calibration factor for band $j$ is computed from $\rho_j$, the calibrated reflectance value for the $j$th spectral channel, and $L_j$, the radiance value measured over the calibrated reflectance panel.
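As an illustration of how such a per-band calibration factor might be estimated and applied, the following minimal NumPy sketch assumes the panel region has already been segmented and that the factor takes the commonly used form $F_j = \rho_j / L_j$ (panel reflectance divided by mean panel radiance); the function names and numeric values are illustrative only, not the exact procedure used in this study.

```python
import numpy as np


def reflectance_calibration_factor(panel_radiance, panel_reflectance):
    """Assumed per-band factor F_j = rho_j / L_j from a calibration-panel region.

    panel_radiance    : 2-D array of radiance values over the panel for band j
    panel_reflectance : known panel reflectance rho_j in band j (e.g. 0.49)
    """
    return panel_reflectance / np.mean(panel_radiance)


def to_surface_reflectance(band_radiance, factor):
    """Empirical-line style conversion: scale each pixel's radiance by F_j."""
    return np.clip(band_radiance * factor, 0.0, 1.0)


# Illustrative usage with synthetic values (not real sensor data)
panel_region = np.full((50, 50), 0.032)                 # radiance over the panel
F_band = reflectance_calibration_factor(panel_region, panel_reflectance=0.49)
reflectance = to_surface_reflectance(np.random.uniform(0.0, 0.05, (360, 480)), F_band)
```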

Results and Discussion
The performance of the proposed technique was assessed with a single effective measure for pixel-based object localization: the ratio of pixels correctly classified as weed grasses to the total count of ground reference pixels, commonly referred to as the intersection over union (IoU) [15]. We use the WeedMap dataset [13] to train and evaluate the suggested network. The drone-based multispectral images, acquired with a RedEdge-M sensor at a size of 480 × 360 pixels and assigned to weed detection, contain considerable crop and weed variation across a variety of fields located in Rheinbach, Germany. These drone-based images were captured on 18 September 2017. At the time of data collection, the plants were at an estimated one-month stage of development, with crops measuring 15 to 20 cm and weeds 5 to 10 cm, respectively.
Figure 2 illustrates the findings of the test stage for the localization of weed grasses. For the tested images, the suggested model achieves a mean IoU and a corresponding kappa value of 69.7% and 76.4% for the 1-shot setting and 73.2% and 80.7% for the 10-shot setting, respectively.
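A minimal sketch of how the per-image IoU and Cohen's kappa reported above could be computed from binary ground-truth and predicted weed masks is given below; it illustrates the standard definitions of the two metrics rather than the exact evaluation code used in this study.

```python
import numpy as np


def iou(pred, gt):
    """Intersection over union between binary weed masks (1 = weed pixel)."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return inter / union if union else 1.0


def cohen_kappa(pred, gt):
    """Cohen's kappa for a binary pixel-wise classification."""
    pred, gt = pred.astype(bool).ravel(), gt.astype(bool).ravel()
    po = np.mean(pred == gt)                         # observed agreement
    pe = (pred.mean() * gt.mean()                    # chance agreement
          + (1 - pred.mean()) * (1 - gt.mean()))
    return (po - pe) / (1 - pe) if pe < 1.0 else 1.0


# Mean IoU over a set of test patches (preds, gts: lists of 360 x 480 masks)
# miou = np.mean([iou(p, g) for p, g in zip(preds, gts)])
```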


Figure 2. The proposed model's predictions for weed grass localization. Image samples 1 to 5 belong to different patches in four fields.
