1. Introduction
Unmanned aerial vehicles (UAVs), commonly known as drones, have revolutionized plant protection operations in modern agriculture due to their high operational efficiency, flexibility, and adaptability to diverse terrains [1,2]. Particularly in large-scale farming, hilly regions, and areas with labor shortages, UAVs enable low-volume spraying that significantly enhances work efficiency compared to traditional manual or ground-based methods [3]. In China alone, the area treated by plant protection UAVs exceeded 1.33 billion hectares in 2023, accounting for over 60% of total pesticide application operations [4]. However, despite these advantages, challenges persist in achieving optimal spray quality, primarily due to variability in droplet deposition influenced by factors such as rotor downwash, wind conditions, flight parameters (e.g., height and speed), nozzle types, and environmental variables (e.g., temperature and humidity) [5,6,7]. Uneven or excessive droplet deposition often results in low pesticide utilization rates, typically only 30–40% in conventional UAV and ground-based applications [8,9]. This low efficiency leads to significant resource waste, economic losses, environmental pollution, and potential health risks from pesticide residues. Off-target drift and runoff further exacerbate these issues, with studies estimating that up to 50–70% of applied pesticides may be lost to the environment under suboptimal conditions (e.g., high wind, improper nozzle selection, or inadequate flight parameters) [10].
Real-time assessment of droplet deposition is crucial for optimizing spraying parameters and improving pesticide efficacy while minimizing off-target drift. Key deposition parameters, including coverage rate, droplet density, volume median diameter (VMD), and number median diameter (NMD), directly reflect spray uniformity and atomization quality [11]. VMD and NMD are calculated indirectly from the measured stain diameters on the collector surface using a spread factor model (SF = 1.78 for typical water-based formulations on copperplate paper [12]), which converts observed stain sizes to estimated actual droplet diameters. This transformation relies on empirical assumptions and may introduce some uncertainty, particularly for non-spherical or highly deformed stains. Traditional evaluation methods predominantly rely on copperplate paper cards (with or without fluorescent tracers), followed by laboratory-based scanning and manual or semi-automated analysis using software such as DepositScan, DropVision, or ImageJ [13,14,15]. These approaches, while reliable, are time-consuming, labor-intensive, and unsuitable for on-site decision-making, limiting the ability to adjust UAV operations in real time during field applications [16]. Commercial laboratory instruments are often bulky, expensive (exceeding USD 5000), and impractical for portable field use, while early smartphone-based solutions suffer from limited accuracy under variable outdoor lighting conditions and poor performance in segmenting adhered or overlapping droplets [17,18].
Recent advancements have explored alternative techniques to address these limitations. Fluorescent tracer methods combined with spectrometry enable quantitative deposition measurement but require sample washing and laboratory analysis, introducing delays and potential contamination [19,20]. Capacitive sensors offer real-time monitoring of total deposition but lack spatial distribution information and are sensitive to formulation dielectric properties [21]. Optical methods, such as laser diffraction or infrared thermography, provide non-contact assessment but are hindered by high costs and environmental interference [22]. Portable scanning systems, such as those developed by Zhu et al. [23], improve field usability but still require post-processing and are not fully automated for immediate feedback.
The integration of digital imaging with embedded computing has emerged as a promising direction for field-portable droplet analysis. Smartphone applications like SnapCard [11] use basic thresholding and achieve droplet count errors of 10–20% under controlled conditions but struggle with variable outdoor lighting and adhesion segmentation. Portable scanners [22] employ semi-automated analysis with reported accuracies of 10–18% but are not real-time and require manual setup. Laboratory software such as DepositScan [13] and DropVision utilize watershed-based segmentation under controlled lighting, reporting droplet count errors of 8–15%, but they are non-portable and require post-collection processing. Recent studies on UAV spraying performance [24] focus on experimental and numerical analysis of downwash effects but do not address on-site image processing or adhesion segmentation.
However, few systems combine hardware portability, real-time processing (<3 s per card), and robust adhesion segmentation tailored for field-collected UAV spray cards under variable lighting and adhesion conditions. Existing methods either lack real-time capability, struggle with illumination variability, or have limited adhesion handling accuracy (typically >10% error). This study addresses these gaps by developing a low-cost, lightweight portable device with adaptive preprocessing and an improved hybrid adhesion segmentation algorithm, achieving a mean relative error of 6.3% in droplet count and real-time performance on embedded hardware.
Specifically, this study develops a portable, image-based detection device equipped with advanced image processing algorithms for real-time analysis (<3 s per card) of droplet deposition on spray cards during UAV plant protection spraying. The device uses a Raspberry Pi 5 as the core processor, integrated with a high-resolution camera and a standard chessboard calibration board for rapid, field-portable image acquisition. Key innovations include adaptive background subtraction combined with local contrast enhancement to mitigate variable-illumination challenges, and an improved algorithm for segmenting adhered droplets that integrates iterative morphological opening with refined distance-transform-based concave point matching. Preliminary validation on existing spray card images demonstrated improved droplet extraction accuracy and coverage rate estimation over conventional methods, with low relative error in adhesion segmentation.
This low-cost, lightweight device offers farmers and researchers a practical tool for immediate on-site spray quality evaluation, facilitating precise parameter adjustments to enhance pesticide utilization, reduce waste, and support sustainable precision agriculture practices. The remainder of this paper is organized as follows:
Section 2 describes the device design and algorithms; Section 3 presents experimental validation; and Section 4 discusses results and implications.
2. Materials and Methods
2.1. Portable Detection Device Design
The portable detection device was specifically engineered for real-time, on-site quantification of droplet deposition on copperplate paper cards following plant protection UAV spraying operations. The design prioritized low cost, lightweight construction, battery-powered operation, and robustness for field environments, enabling rapid feedback to operators for spray parameter optimization.
The portable pesticide droplet detection device shown in Figure 1 consists of an image acquisition system and an image processing system. A standard checkerboard is placed on the platform of the device. The image sensor captures the droplet sampling cards placed on the standard checkerboard and converts them into detection data, which are then transmitted to a mobile terminal or host computer.
The image acquisition subsystem consisted of a high-resolution USB camera module based on the Sony IMX519 sensor (Sony Corporation, Tokyo, Japan) (12.3 megapixels, 4056 × 3040 native resolution, 1/2.3” CMOS). This sensor was selected for its excellent low-light performance and minimal rolling shutter distortion, critical for capturing faint droplet stains under variable outdoor illumination. The camera was equipped with a fixed-focus M12 lens (focal length 3.6 mm, f/2.0 aperture, horizontal field of view 80°), providing a working distance of 120–180 mm and spatial resolution of approximately 0.03 mm/pixel at the optimal focal plane. A circular LED ring light (48 SMD LEDs, color temperature 6500 K, adjustable intensity 0–100%) was mounted concentrically around the lens to supply uniform supplemental illumination and reduce shadows caused by ambient light gradients. The ring light was powered and controlled via PWM from the main processor to adapt to prevailing conditions.
The processing and control subsystem was centered on a Raspberry Pi 5 single-board computer (Broadcom BCM2712 quad-core Arm Cortex-A76 @ 2.4 GHz, 8 GB LPDDR4X RAM, VideoCore VII GPU). This platform was chosen for its superior computational performance compared to previous generations, enabling real-time execution of complex image processing pipelines (<3 s per card) while maintaining low power consumption. The Raspberry Pi handled image capture via libcamera, preprocessing, segmentation, parameter calculation, and user interface rendering. Built-in Wi-Fi 6 and Bluetooth 5.0 facilitated wireless data transfer to mobile devices for remote monitoring.
Power was supplied by a 10,000 mAh lithium-polymer battery pack (nominal 3.7 V, with 5 V/3 A step-up converter), yielding continuous operation exceeding 6 h under typical field usage (including LED illumination at 50% intensity). The entire system was enclosed in a custom 3D-printed ABS housing (external dimensions 180 mm × 120 mm × 100 mm, total mass 1.15 kg including battery). The device base plate incorporated a precision chessboard calibration pattern (9 × 6 squares, 10 mm × 10 mm each, matte black-on-white finish) to ensure repeatable card alignment and scale reference, as illustrated in Figure 1.
All image processing and parameter calculations were implemented in Python 3.11 using OpenCV (version 4.8.0), NumPy (version 1.26.0), and libcamera (native to Raspberry Pi OS Bookworm). Manual validation and ground-truth annotation were performed with ImageJ (version 1.54f). The core hardware comprised the Raspberry Pi 5 single-board computer (Raspberry Pi Ltd., Cambridge, UK) and the IMX519-based camera module (Sony Corporation, Tokyo, Japan) described above.
2.2. Image Acquisition and Preprocessing
Image acquisition was performed immediately after UAV spraying to minimize evaporation effects on droplet stains. Copperplate paper cards (76 mm × 52 mm) were used as collectors. Copperplate paper serves as a cost-effective and reusable alternative to traditional water-sensitive paper (WSP), which is expensive and prone to degradation over time, making long-term data preservation challenging [25]. While WSP generally offers higher precision in droplet size measurement due to more consistent spreading behavior [11], copperplate paper, when treated with Allura Red tracer, provides clear, stable droplet stains that are particularly suitable for rapid field imaging and quantitative analysis under variable outdoor lighting conditions. This approach aligns with several UAV spray deposition studies employing tracer-treated cards [26,27].
The acquisition process involved positioning the stained copperplate paper card flat on the integrated chessboard calibration board beneath the camera lens. A fixed working distance of 150 mm was maintained using mechanical stops in the enclosure, ensuring consistent magnification and focus. The camera was operated in manual exposure mode (ISO 100 [28], shutter speed 1/100 s) to avoid motion blur, with white balance locked to daylight (6500 K). Supplemental LED ring illumination was automatically adjusted based on histogram analysis of a pre-capture preview frame, targeting a mean luminance of 180–200 in the central ROI to optimize dynamic range.
To achieve precise physical scaling and correct the optical distortions inherent in compact lens systems, camera calibration was performed using Zhang’s method [29]. A high-contrast chessboard pattern (9 × 6 internal corners, square size 20.0 ± 0.1 mm, printed on matte vinyl) was permanently affixed to the acquisition platform. Calibration involved capturing 20 images at varying tilts and positions (corner detection success rate >98% via cv2.findChessboardCorners with adaptive thresholding).
The intrinsic parameter matrix

K = \begin{bmatrix} f_x & s & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{bmatrix}

and the distortion coefficients D = (k_1, k_2, p_1, p_2, k_3) were estimated by minimizing the reprojection error

E = \sum_{i}\sum_{j} \left\| m_{ij} - \hat{m}\!\left(K, D, R_i, t_i, M_j\right) \right\|^2

where f_x and f_y are focal lengths in pixels, (c_x, c_y) is the principal point, s the skew (typically near zero), k_1, k_2, k_3 and p_1, p_2 the radial and tangential distortion coefficients, m_{ij} the detected corner positions, and \hat{m} the projection of the chessboard points M_j under pose (R_i, t_i). The mean reprojection error achieved was 0.18 pixels, confirming sub-pixel accuracy.
For each stained card image, perspective distortion due to minor card misalignment was corrected via homography. Chessboard corners were detected in real time, and a perspective transform matrix was computed using cv2.getPerspectiveTransform to warp the image into a rectified frontal view. The active copperplate paper region was then cropped to a standardized ROI (2400 × 1560 pixels post-warp), yielding a calibrated spatial resolution of 0.0317 mm/pixel (standard deviation 0.0012 mm across 100 calibrations).
This rigorous calibration protocol ensured measurement repeatability <1% for coverage rate and <3% for droplet diameter across sessions, critical for quantitative comparison with laboratory references, as demonstrated in Figure 2.
2.3. Image Preprocessing
Field-acquired images of copperplate paper cards are susceptible to significant quality degradation due to uneven illumination from sunlight gradients, shadows cast by canopy or equipment, cloud cover variations, and sensor noise in outdoor environments. These factors manifest as gradual brightness changes across the card, low contrast in shadowed regions, and amplified noise in faint droplet stains, severely impacting subsequent thresholding and segmentation accuracy [30,31]. To ensure robust droplet extraction under such challenging conditions, a sequential preprocessing pipeline was developed, incorporating adaptive background subtraction, local contrast enhancement, and controlled denoising.
The pipeline begins with a reference-based adaptive background subtraction to compensate for non-uniform illumination. A clean (unstained) copperplate paper reference image I_ref(x, y), captured under similar ambient conditions prior to spraying, is processed with a large median filter (kernel size 51 × 51 pixels) to estimate the low-frequency illumination component B(x, y):

B(x, y) = \mathrm{median}_{51 \times 51}\!\left( I_{\mathrm{ref}}(x, y) \right)

The stained image I(x, y) is then subtracted:

I_{\mathrm{corr}}(x, y) = I(x, y) - B(x, y) + C

where the constant C prevents underflow and preserves faint stains. This operation effectively normalizes global illumination gradients while retaining high-frequency droplet edges, as validated on 150 field images, where it reduced mean brightness variance by 68% compared to uncorrected raw images (see Supplementary Table S1 for detailed statistics).
Subsequent local contrast enhancement addresses residual regional underexposure. Contrast Limited Adaptive Histogram Equalization (CLAHE) [32] was applied exclusively to the value (V) channel in HSV color space, with clip limit 3.0 and tile grid size 8 × 8. CLAHE redistributes intensity values within local tiles while limiting amplification to prevent noise over-enhancement:

V'(x, y) = (L - 1) \cdot \mathrm{CDF}_{\mathrm{clip}}\!\left( V(x, y) \right)

where CDF_clip is the cumulative distribution of the clipped tile histogram and L the number of gray levels, followed by bilinear interpolation between tiles and recombination into RGB. Parameter selection (clip limit and grid size) was optimized via grid search on a validation set of 100 unevenly lit cards, maximizing edge preservation (measured by Laplacian variance) while minimizing noise amplification. This step improved the visibility of faint droplets in shadowed areas by an average contrast-ratio increase of 2.4× and reduced false negatives in low-light regions by 42% compared to global histogram equalization (a detailed quantitative comparison, including Laplacian variance, PSNR, and false-negative rates across the 100 validation cards, is provided in Supplementary Table S2), confirming its superiority for field droplet imaging [33].
Final denoising employed bilateral filtering [12] to reduce Gaussian and salt-and-pepper noise without blurring droplet boundaries:

I_f(p) = \frac{1}{W_p} \sum_{q \in S} G_{\sigma_s}\!\left(\| p - q \|\right) \, G_{\sigma_r}\!\left(| I(p) - I(q) |\right) \, I(q)

where W_p is the normalizing factor and G_{\sigma_s}, G_{\sigma_r} are Gaussian kernels over spatial distance and intensity difference, respectively. Bilateral filtering preserves edges by weighting pixels according to both spatial proximity and intensity similarity, outperforming simple Gaussian blur in retaining sharp stain perimeters.
The complete pipeline (Figure 3) processes each image in <0.8 s on the Raspberry Pi 5, achieving >94% droplet visibility improvement across illumination variances (quantified by a normalized contrast metric).
2.4. Thresholding and Binarization
Following preprocessing, binarization is essential to isolate droplet stains from the copperplate paper background for subsequent segmentation and parameter calculation. Global thresholding methods, such as Otsu’s algorithm [34], assume bimodal histograms and often fail under residual illumination non-uniformity, resulting in over-segmentation in bright regions or loss of faint droplets in shadows. To achieve robust separation across variable field conditions, a hybrid adaptive thresholding strategy was implemented.
Local adaptive thresholding was first applied using the Gaussian method:

T_G(x, y) = \sum_{(u, v) \in W} G_\sigma(u, v) \, I(x + u, y + v) - C

where W is a local window (block size 51 × 51 pixels) weighted by a Gaussian kernel G_\sigma, and C is a constant offset (optimized empirically to balance sensitivity and noise rejection). Pixels darker than T_G are marked as foreground, producing an initial binary mask B_G sensitive to local intensity variations.
To enhance detection of low-contrast droplets, Sauvola thresholding [35] was computed in parallel:

T_S(x, y) = m(x, y) \left[ 1 + k \left( \frac{s(x, y)}{R} - 1 \right) \right]

with window size 31 × 31, where m(x, y) and s(x, y) are the local mean and standard deviation, k the sensitivity parameter, and R the dynamic range of the standard deviation. The resulting mask B_S excels in textured or low-variance regions.
The final binary image was obtained by intersection followed by morphological refinement:

B = \mathrm{close}\!\left( \mathrm{open}\!\left( B_G \cap B_S \right) \right)

where opening removed small noise artifacts and closing filled minor intra-droplet gaps. This fusion reduced false positives by 35% compared with the individual methods while preserving >96% of true droplets (validated on 120 annotated cards).
The approach ensured consistent binarization across illumination variances (coefficient of variation <5% in threshold values), providing a reliable foundation for adhesion segmentation.
2.5. Improved Adhesion Droplet Segmentation Algorithm
Adhered or overlapping droplets on copperplate paper cards are a major source of error in automated deposition analysis, as they form irregular connected regions that lead to underestimation of droplet counts and overestimation of individual stain areas [36,37]. Conventional watershed segmentation is prone to over-segmentation due to noise-induced local minima in the gradient image [38], while simple concave point methods often fail to resolve strongly adhered clusters with ambiguous pairing [39]. Ellipse fitting approaches assume near-circular stains and perform poorly under dense coverage or deformation [40].
To overcome these limitations, this study proposes an improved hybrid algorithm that integrates iterative morphological opening for progressive separation of weakly adhered droplets with a marker-controlled watershed guided by refined distance transform markers and weighted concave point matching. The method explicitly generates reliable internal and external markers to constrain watershed propagation, reducing over-segmentation while achieving high separation accuracy in dense sprays.
2.5.1. Adhesion Detection Using Combined Shape Descriptors
Connected components in the binary mask are evaluated with multiple geometric descriptors to robustly detect adhesions:
Circularity (Shape Factor):

C = \frac{4 \pi A}{P^2}

where A is the area and P the perimeter in pixels.

Solidity:

S_o = \frac{A}{A_{\mathrm{hull}}}

where A_hull is the convex hull area.

Extent Ratio:

E_x = \frac{A}{A_{\mathrm{bb}}}

where A_bb is the axis-aligned bounding-box area. A composite adhesion score is computed as:

S = w_1 (1 - C) + w_2 (1 - S_o) + w_3 (1 - E_x)

with weights w_1, w_2, w_3 optimized via grid search on 300 labeled components (F1-score = 0.97). Components whose score exceeds an empirically set threshold are flagged for segmentation. This multi-descriptor approach outperforms single circularity thresholds by reducing false positives on deformed isolated droplets [38].
2.5.2. Iterative Morphological Opening for Weak Adhesion Separation
Weak adhesions (narrow necks) are detached progressively to avoid aggressive erosion. Let B_r be a disk structuring element of radius r pixels (r = 1, …, r_max). Iterative opening proceeds as:

M_k = \left( M_{k-1} \ominus B_r \right) \oplus B_r

starting from the binary mask M_0 of flagged components. At iteration k, newly isolated components whose post-opening circularity exceeds the adhesion threshold are extracted and merged into the non-adhered set. The process terminates upon no new isolations or upon reaching r_max. This yields ~65–75% separation of weak adhesions (coverage <30%) with minimal area loss (<4% for isolated droplets).
2.5.3. Marker Generation and Weighted Concave Point Matching for Strong Adhesions
Remaining clusters undergo the Euclidean distance transform:

D(p) = \min_{q \in Q} \| p - q \|_2

where Q is the set of background pixels. Internal markers M_int are the regional maxima of D, with spurious peaks suppressed via the h-minima transform (depth h, selected to eliminate spurious peaks while preserving true centers). External markers M_ext are derived from background dilation. Concave points are detected from contour curvature, computed as the angle between normalized tangent vectors on either side of each contour point; points whose angle exceeds an empirical threshold are retained. Pairing minimizes the weighted cost:

\mathrm{Cost}(i, j) = \alpha \, d_{ij} + \beta \, \Delta_{ij} + \gamma \, \theta_{ij}

where d_{ij} is the normalized Euclidean distance between concave points i and j, Δ_{ij} the difference in their local distance-transform maxima, θ_{ij} the alignment angle, and α, β, γ weights optimized on the validation set to minimize crossing pairs. Matched pairs define geodesic segmentation paths on the inverted distance transform. The final segmentation imposes the markers on the gradient magnitude and applies marker-controlled watershed [41]:

W = \mathrm{watershed}\!\left( \| \nabla I \|, \; M_{\mathrm{int}} \cup M_{\mathrm{ext}} \right)

ensuring that basins grow only from reliable seeds.
2.5.4. Performance Evaluation and Parameter Optimization
All datasets used in this study were collected from the same real-world UAV plant protection spraying experimental campaign conducted in wheat fields under typical outdoor conditions (varying illumination, wind speeds <5 m/s, flight heights 2–4 m). There are two distinct datasets:
A set of 150 annotated copperplate paper cards used for algorithm parameter optimization (grid search) and internal validation.
An independent set of 21 field-collected cards used for final performance evaluation and comparison with ImageJ reference.
These two datasets do not overlap. No synthetic, laboratory-simulated, or external public datasets were used. All cards were acquired during the same series of field experiments to ensure consistency in spraying conditions and card preparation.
To minimize overfitting, the 150 annotated cards were split into a training subset (120 cards) and an internal validation subset (30 cards). Grid search was performed on the training subset, and the optimal parameter set was selected based on the highest Dice coefficient on the internal validation subset. The independent 21 field-collected cards were completely held out and never used during parameter tuning; they served solely as an external test set for performance evaluation. No cross-validation was performed due to the limited dataset size, but the clear separation between tuning and test data reduces the risk of overfitting. We acknowledge that the overall data volume is modest and that larger datasets would further strengthen confidence in the results.
The ground truth for adhesion segmentation and droplet counting was generated manually in ImageJ (version 1.53k) [42,43] by two experienced annotators (both co-authors with >5 years of experience in agricultural image analysis). Each card was independently annotated by both annotators using the freehand selection tool for droplet boundaries and the “Analyze Particles” function for counting and area measurement. Inter-annotator agreement was assessed on a subset of 30 cards, yielding an average Dice coefficient of 0.95 for segmentation masks and a mean relative difference in droplet count of <4%. Discrepancies were resolved through consensus to produce the final ground truth. This manual approach was chosen to provide the most accurate reference, avoiding errors from automated thresholding on field images with variable lighting and adhesion.
Parameters (thresholds, weights, h-value) were optimized on the 150 annotated cards through grid search to achieve the best match with manual segmentation. The hybrid algorithm integrates iterative morphological separation with a novel weighted cost function for concave point pairing, which helps constrain over-segmentation by generating reliable markers. The performance and comparisons to baseline methods are detailed in Section 3.2. This approach enhances separation reliability, particularly in dense UAV sprays where adhesions are common. Full details of the parameter optimization process, including grid search ranges and final values, are provided in Supplementary S1.
2.6. Calculation of Droplet Deposition Parameters
Following successful segmentation, key deposition parameters were computed to quantify spray quality. All calculations utilized the calibrated pixel-to-mm conversion factor (0.0317 mm/pixel) for physical accuracy.
The device’s spatial resolution is 0.0317 mm/pixel (standard deviation 0.0012 mm across calibrations), enabling reliable detection of droplet equivalent diameters from approximately 0.10 mm to 5.0 mm. The lower limit (≈0.10 mm) is determined by the minimum stain area required for robust segmentation after preprocessing (typically 10–15 pixels to exceed noise thresholds), while the upper limit is set by the field of view of the cropped ROI (≈76 mm × 52 mm card). Droplets outside this range are rare in UAV low-volume spraying; stains larger than 5 mm can still be segmented if present. This range covers the vast majority of droplet sizes encountered in plant protection UAV applications (typical VMD 100–300 µm).
Coverage Rate (CR): The proportion of card area covered by droplets:

\mathrm{CR} = \frac{\sum_{i=1}^{N} A_i}{A_{\mathrm{card}}} \times 100\%

where A_i is the area of the i-th droplet (pixels) and A_card is the total card area.

Droplet Density (DD): Number of droplets per unit area:

\mathrm{DD} = \frac{N}{A_{\mathrm{card}}}

Equivalent Diameter: For each droplet,

d_i = 2 \sqrt{\frac{A_i}{\pi}}

Volume Median Diameter (VMD) and Number Median Diameter (NMD): Diameters were sorted, and the median values identified where 50% of the total volume or count lies below. Stain diameters were converted to actual droplet diameters using a spread factor SF = 1.78 (empirically derived for typical water-based formulations on copperplate paper [12]):

D_{\mathrm{droplet}} = \frac{D_{\mathrm{stain}}}{\mathrm{SF}}
Parameters were computed using NumPy vectorized operations for efficiency (<0.2 s per image) and displayed in real time via the device’s web interface, with CSV export for further analysis.
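The parameter calculations above reduce to a few vectorized NumPy operations; the function name, argument defaults, and per-cm² density unit in this sketch are illustrative.

```python
import numpy as np

def deposition_params(areas_px2, card_px2, mm_per_px=0.0317, sf=1.78):
    """Compute CR (%), DD (droplets/cm^2), NMD, and VMD (mm) from segmented
    stain areas in pixels; stain diameters are divided by the spread
    factor to estimate true droplet diameters."""
    areas_mm2 = np.asarray(areas_px2, dtype=float) * mm_per_px ** 2
    card_mm2 = card_px2 * mm_per_px ** 2
    cr = 100.0 * areas_mm2.sum() / card_mm2          # coverage rate, %
    dd = len(areas_mm2) / (card_mm2 / 100.0)         # droplets per cm^2
    d_stain = 2.0 * np.sqrt(areas_mm2 / np.pi)       # equivalent stain diameter
    d_true = np.sort(d_stain / sf)                   # spread-factor correction
    nmd = float(np.median(d_true))                   # number median diameter
    cum_vol = np.cumsum(d_true ** 3) / np.sum(d_true ** 3)   # volume ∝ d^3
    vmd = float(d_true[np.searchsorted(cum_vol, 0.5)])
    return cr, dd, nmd, vmd
```

Note the asymmetry between the two medians: NMD is the median over counts, while VMD is found on the cumulative d³-weighted distribution, so a few large droplets can pull VMD well above NMD.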
3. Results and Discussion
3.1. Performance of Preprocessing and Thresholding
The preprocessing pipeline effectively mitigated illumination variations and noise inherent in field-acquired images. Adaptive background subtraction normalized global gradients, while CLAHE enhancement recovered faint droplets in low-light regions. The hybrid Gaussian–Sauvola thresholding provided robust binarization across diverse lighting conditions.
Figure 4 illustrates the progressive improvements on a representative card exhibiting brightness gradients.
In the raw image (Figure 4a), faint droplets in shadowed regions are barely visible. Grayscale conversion (Figure 4b) preserves intensity information, while CLAHE enhancement (Figure 4c) substantially recovers local contrast, making low-intensity stains discernible (as highlighted by red circles and enlarged insets in shadowed areas). The individual Gaussian (Figure 4d) and Sauvola (Figure 4e) thresholds each capture complementary aspects (Gaussian preserving edges in uniform areas, Sauvola handling textured backgrounds), but both exhibit residual artifacts and miss some faint droplets. The proposed hybrid intersection followed by morphological closure (Figure 4f) yields the cleanest mask, with optimal droplet isolation and minimal false positives. The enlarged insets (bottom row) illustrate the step-by-step improvement in visibility, contrast recovery, and noise reduction, demonstrating in particular how the proposed pipeline recovers and cleanly isolates faint droplets in challenging shadowed regions.
This step-by-step visualization demonstrates how the hybrid approach leverages the strengths of both local methods, achieving superior performance in challenging field lighting compared to global Otsu thresholding.
Quantitative comparison of thresholding methods against ImageJ reference coverage rates across 21 cards is summarized in Table 1.
The MAE and MRE values in Table 1 represent errors in coverage rate estimation based on the binary masks produced by each thresholding method before adhesion segmentation. In contrast, the mean absolute error of 0.31% reported in Section 3.3 refers to the final coverage rate after adhesion segmentation and bias correction.
The proposed hybrid method achieved the lowest mean absolute error (MAE = 0.12%) and mean relative error (MRE = 10.6%), with excellent linear correlation (R² = 0.97). Global Otsu consistently overestimated coverage in bright regions and missed faint droplets in shadows (MRE >30% for low-coverage cards with CR <0.6%). The intersection of complementary local thresholds, combined with morphological closure, reduced false positives by 36% while preserving >96% of true droplets.
These improvements were particularly evident in cards from upper canopy positions (e.g., “1.7”, reference CR = 3.05%), where dense deposition and specular reflections challenged single methods. The preprocessing and thresholding stages thus provided a reliable binary foundation for subsequent adhesion segmentation and parameter extraction in real field conditions.
3.2. Adhesion Segmentation Performance Comparison
Adhered droplets were prevalent in medium-to-high coverage cards, accounting for 24–46% of connected components. The proposed hybrid algorithm demonstrated superior separation accuracy compared to standard watershed and simple concave point matching.
Figure 5 presents qualitative results on three representative cards spanning coverage levels.
Figure 5 illustrates the superior performance of the proposed method across varying adhesion intensities. Standard watershed (column b) suffers from extensive over-segmentation due to noise-induced spurious minima, particularly evident in the annotated cluster in the high-coverage row (red circle and enlarged inset). Simple concave pairing (column c) fails to resolve tightly adhered multi-droplet clusters, resulting in persistent under-segmentation in the same region. In contrast, the proposed approach (column d) effectively detaches weak adhesions via iterative morphological opening and accurately splits strong clusters using weighted concave matching and marker control, achieving clean individual droplet separation as shown in the enlarged inset. These annotations highlight how the proposed method minimizes both over- and under-segmentation while preserving droplet integrity.
Droplet count accuracy is quantified in Table 2. The proposed method reduced the mean relative error to 6.3%, compared with 26.7% for standard watershed and 10.8% for simple concave pairing, demonstrating the effectiveness of the novel marker generation and weighted pairing strategy in reducing over- and under-segmentation.
In the low-coverage category (<0.6%, representative of sparse deposition often observed in upper canopy layers or low-volume UAV spraying), the proposed method achieved a mean relative error of 5.2 ± 2.7% (n = 8 cards), demonstrating robust performance even when droplet density is extremely low and adhesions are minimal.
All reported performance metrics on the 21 cards were obtained on this independent test set, with no overlap with the parameter tuning data (the 150 annotated cards used for grid search).
Over-segmentation by the standard watershed was most pronounced in high-coverage cards, driven by noise-induced spurious minima, while simple concave pairing under-segmented multi-droplet clusters with low curvature contrast.
Iterative morphological opening detached 71% of weak adhesions prior to watershed, generating cleaner internal markers. The weighted cost function (Equation (9)) minimized erroneous pairings, with the distance-transform term proving critical for resolving tightly adhered triplets. Processing time averaged 1.4 s per card on the Raspberry Pi 5, enabling real-time field feedback.
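The sequence described above (iterative opening to detach weak adhesions, distance-transform markers, marker-controlled watershed) can be sketched as follows. This is a minimal SciPy illustration with placeholder parameter values, not the paper's implementation: the weighted concave-point pairing of Equation (9) is omitted, and the opening iterations and distance threshold are assumed.

```python
import numpy as np
from scipy import ndimage as ndi

def segment_adhered_droplets(binary, open_iters=2, dist_ratio=0.7):
    """Split adhered droplets: iterative opening + marker-controlled watershed.

    `binary` is a boolean droplet mask. Parameter values are illustrative
    placeholders; the weighted concave-point pairing step is omitted.
    """
    # Step 1: iterative morphological opening detaches weakly adhered droplets.
    opened = ndi.binary_opening(binary, iterations=open_iters)
    # Step 2: internal markers = peaks of the distance transform of the
    # opened mask, one connected component per presumed droplet core.
    dist = ndi.distance_transform_edt(opened)
    markers, n = ndi.label(dist > dist_ratio * dist.max())
    markers = markers.astype(np.int16)
    markers[~binary] = n + 1            # explicit background marker
    # Step 3: marker-controlled watershed over the inverted distance map
    # of the ORIGINAL mask, so detached fragments are reassigned.
    dist_full = ndi.distance_transform_edt(binary)
    elevation = (dist_full.max() - dist_full).astype(np.uint16)
    labels = ndi.watershed_ift(elevation, markers)
    return labels, n                    # label image and droplet count

# Two overlapping discs (one adhered blob) should split into two droplets.
yy, xx = np.ogrid[:80, :120]
blob = ((xx - 45) ** 2 + (yy - 40) ** 2 <= 18 ** 2) | \
       ((xx - 75) ** 2 + (yy - 40) ** 2 <= 18 ** 2)
labels, count = segment_adhered_droplets(blob)
print(count)  # 2
```

On this synthetic blob, the distance-transform markers separate the two disc centers before flooding, which is the same mechanism the full algorithm relies on for strongly adhered clusters.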
These results validate the algorithm’s effectiveness for UAV deposition patterns, where adhesion rates are elevated due to low-volume spraying and rotor-induced coalescence.
3.3. Accuracy Validation of Coverage Rate, Density, and Grain Size
Overall parameter accuracy was assessed against ImageJ reference values across the 21 field-acquired cards.
Coverage rate estimation exhibited strong agreement (Figure 6), with linear regression yielding R2 = 0.965, slope = 1.02, and intercept = −0.08.
Mean absolute error was 0.31% (mean relative error = 28.4%), with higher relative errors in low-coverage cards (<0.6%) due to the difficulty of detecting faint droplets. This 0.31% MAE is the final coverage rate error after adhesion segmentation and bias correction, whereas Table 1 reports pre-segmentation errors. The slight positive bias (+0.31%) indicates minor overestimation by the device, primarily in medium-to-high coverage regions, while the narrow limits of agreement confirm acceptable consistency for practical field use. For very low coverage rates (<0.6%), absolute errors remained minimal (MAE ≈ 0.08%) even though small absolute values amplify relative differences, confirming reliable performance in the sparse deposition scenarios common in UAV applications.
The elevated mean relative error (28.4%) thus reflects amplification near zero rather than poor agreement: in low-coverage cards (<0.6%), small absolute differences (e.g., 0.10% vs. 0.05%) translate into large percentage discrepancies, a common phenomenon when evaluating coverage rates near zero. Absolute errors nevertheless remain low (MAE = 0.31%), confirming the device's practical utility, particularly in medium-to-high coverage scenarios.
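The interplay between MAE and MRE described above can be reproduced in a few lines. The metric conventions (per-card averaging, percentage MRE) are assumptions for illustration, and the coverage values below are hypothetical.

```python
import numpy as np

def agreement_metrics(reference, estimated):
    """R2, MAE and mean relative error (MRE, %) against a reference.

    Conventions (per-card averaging, percentage MRE) are assumed for
    illustration and may differ from the paper's exact protocol.
    """
    ref = np.asarray(reference, dtype=float)
    est = np.asarray(estimated, dtype=float)
    ss_res = np.sum((ref - est) ** 2)          # residual sum of squares
    ss_tot = np.sum((ref - ref.mean()) ** 2)   # total sum of squares
    r2 = 1.0 - ss_res / ss_tot
    mae = np.mean(np.abs(est - ref))
    mre = np.mean(np.abs(est - ref) / ref) * 100.0
    return r2, mae, mre

# A constant +0.05% absolute error: MAE stays small, but near-zero
# coverages inflate the relative error (hypothetical values).
ref = np.array([0.05, 0.10, 1.00, 2.00])   # reference coverage, %
est = ref + 0.05                           # device estimate, %
r2, mae, mre = agreement_metrics(ref, est)
print(round(mae, 3), round(mre, 1))        # → 0.05 39.4
```

Half of the relative error here comes from the single near-zero card, mirroring the pattern seen across the 21 field cards.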
Droplet density estimation showed good correlation with ImageJ reference values (Figure 7), with linear regression yielding R2 = 0.892. Mean relative error was 29.1% (MAE = 43.6 droplets per card), with systematic underestimation in high-density cards due to residual unseparated adhesions. The method reliably detects droplets in the 0.10–5.0 mm equivalent-diameter range, consistent with typical UAV spray atomization profiles.
The moderate correlation in VMD estimation (R2 = 0.512) is largely attributable to residual unseparated adhesions in high-coverage scenarios, which cause overestimation of droplet areas and thus influence volume-based parameters.
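For context, volume-based size metrics such as VMD are derived from measured stain diameters via the spread factor (SF = 1.78 for water-based formulations on copperplate paper, as noted in the Introduction). The sketch below shows this computation under an assumed linear-interpolation convention; the stain values are hypothetical.

```python
import numpy as np

SPREAD_FACTOR = 1.78  # stain-to-droplet diameter ratio on copperplate paper

def vmd_from_stains(stain_diameters_mm, sf=SPREAD_FACTOR):
    """Volume median diameter (VMD) from measured stain diameters.

    Droplet diameter = stain diameter / SF; VMD is the diameter at which
    the cumulative spray volume reaches 50%. The linear-interpolation
    convention here is an assumption for illustration.
    """
    d = np.sort(np.asarray(stain_diameters_mm, dtype=float)) / sf
    vol = d ** 3                        # droplet volume is proportional to d^3
    cum = np.cumsum(vol) / vol.sum()    # cumulative volume fraction
    return float(np.interp(0.5, cum, d))

# Hypothetical stain diameters (mm) read from a card:
stains = [0.30, 0.42, 0.55, 0.61, 0.78]
print(round(vmd_from_stains(stains), 3))
```

Because volume scales with the cube of diameter, a single unsplit adhesion treated as one large stain shifts the cumulative volume curve sharply, which is why residual adhesions degrade VMD far more than droplet counts.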
3.4. Field Measurement Case Study and Limitations
Field trials revealed higher deposition on artificial Mylar cards compared to natural wheat leaves, consistent with canopy interception effects. The device enabled real-time identification of low-penetration zones, facilitating immediate parameter adjustments that improved average deposition by 23–26%.
This real-time feedback capability enabled immediate adjustment of UAV spraying parameters (e.g., flight height, speed, or nozzle type) during the field trials, and the resulting gains translate directly into higher pesticide utilization efficiency. In conventional UAV or ground-based applications, pesticide utilization rates typically range from 30 to 40% [8,9]; optimized deposition uniformity can potentially raise this above 60%, reducing pesticide waste by approximately 20–30% per hectare.
From an environmental perspective, improved deposition uniformity minimizes off-target drift and runoff, which studies estimate can account for 50–70% of applied pesticides under suboptimal conditions [10,19]. By reducing these losses, the device contributes to lower environmental pollution, decreased risk of non-target organism exposure, and improved sustainability in precision agriculture. These quantitative benefits, demonstrated through our case studies, highlight the equipment's potential to deliver both economic savings for farmers and reduced ecological impact.
Several studies have developed tools for droplet deposition analysis in UAV spraying, but most remain limited to laboratory conditions or lack real-time field applicability. For example, smartphone-based applications such as SnapCard [11] and portable scanners [22] improve accessibility over traditional ImageJ analysis, but they often struggle with variable outdoor illumination and adhesion segmentation, reporting droplet count errors of 10–20% in dense sprays. Laboratory software such as DepositScan [13] and DropVision achieve higher accuracy in controlled settings but require post-collection processing and are not suitable for on-site decision-making.
In contrast, the proposed portable device achieves real-time processing (<3 s per card) on a low-cost Raspberry Pi 5 platform (<USD 200), with adaptive preprocessing that ensures robustness under field lighting gradients. The hybrid adhesion segmentation algorithm reduces mean relative error to 6.3% (Table 2), outperforming standard watershed (26.7%) and simple concave pairing (10.8%). A detailed comparison of key metrics with representative published methods is presented in Table 3 and Supplementary Table S3.
Table 3. Comparison of key performance metrics for droplet deposition analysis tools in UAV spraying applications. The proposed device stands out in real-time field capability, low cost, and adhesion handling accuracy.
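The robustness to field lighting gradients discussed above rests on local, rather than global, thresholding. A Sauvola-style local threshold applied after Gaussian smoothing can be sketched as below; the device's exact hybrid Gaussian–Sauvola combination rule is not reproduced here, so this simplified form and its window/k/R values are assumptions.

```python
import numpy as np
from scipy import ndimage as ndi

def gaussian_sauvola_binarize(gray, sigma=1.0, window=25, k=0.2, r=128.0):
    """Gaussian smoothing followed by a Sauvola local threshold.

    Simplified, assumed combination (not the device's exact hybrid rule).
    Sauvola: T = m * (1 + k * (s / R - 1)), with m, s the local mean and
    standard deviation inside each window.
    """
    img = ndi.gaussian_filter(np.asarray(gray, dtype=float), sigma)
    mean = ndi.uniform_filter(img, window)
    sq_mean = ndi.uniform_filter(img ** 2, window)
    std = np.sqrt(np.maximum(sq_mean - mean ** 2, 0.0))
    threshold = mean * (1.0 + k * (std / r - 1.0))
    return img < threshold              # stains are darker than the card

# Synthetic card: illumination gradient (180 -> 220) with one dark stain.
yy, xx = np.mgrid[:60, :60]
card = 180.0 + 40.0 * yy / 59.0
card[(xx - 30) ** 2 + (yy - 30) ** 2 <= 25] = 50.0
mask = gaussian_sauvola_binarize(card)
print(bool(mask[30, 30]), bool(mask[5, 5]))  # → True False
```

Because the threshold tracks the local mean, the gradient background is rejected everywhere while the stain is segmented, which is the behavior a single global threshold cannot provide under outdoor illumination.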
The current system relies on visible-light illumination and chromogenic tracers (e.g., Allura Red) for droplet visualization, which limits its applicability to colorless or transparent pesticide formulations and degrades performance under extreme lighting, such as direct strong sunlight or complete darkness. Further limitations include sensitivity to strong specular reflections and moderate VMD estimation accuracy under ultra-high coverage (R2 = 0.512), primarily due to residual errors in segmenting strongly adhered droplets. Future work will explore polarization filtering to reduce reflections, fluorescent tracers for colorless formulations, and lightweight deep learning modules to further improve adhesion segmentation in dense sprays, alongside systematic evaluation of robustness under extreme lighting conditions.
The current experimental validation is based on 21 field-collected spray cards, obtained under real-world UAV spraying conditions and encompassing a range of coverage levels (low <0.6%, medium 0.6–1.5%, high >1.5%), illumination gradients, and adhesion patterns (as detailed in Table 2). This dataset provides a solid preliminary assessment of the device's core performance across the most critical variables influencing droplet deposition in plant protection applications. Nevertheless, we recognize that the sample size is limited and that greater diversity (e.g., different crop canopies such as rice, wheat, or orchards; varying wind speeds, nozzle types, flight heights, and seasonal lighting conditions) would further strengthen the evaluation of generalizability and robustness. Future field trials will prioritize expanding the dataset to include these additional variables.
Overall, the portable device provides reliable, immediate spray quality feedback superior to laboratory-based methods, supporting precision UAV application and reduced pesticide waste.
4. Conclusions
This study developed a low-cost, portable image-based detection device for real-time assessment of droplet deposition on copperplate paper cards during plant protection UAV operations. The device integrates a Raspberry Pi 5 processor with a high-resolution camera and chessboard calibration board, enabling field-portable image acquisition and on-site analysis. Key algorithmic innovations include an adaptive preprocessing pipeline with background subtraction and CLAHE enhancement to address variable illumination, a hybrid Gaussian–Sauvola thresholding strategy for robust binarization, and an improved adhesion segmentation algorithm combining iterative morphological opening with weighted marker-controlled watershed based on distance transform and concave point matching.
Validation on 21 field-collected cards using ImageJ as reference demonstrated strong performance for primary spray quality metrics. Coverage rate estimation achieved R2 = 0.965, with mean absolute error of 0.31% and mean relative error of 28.4%, while droplet density yielded R2 = 0.892 and mean relative error of 29.1%. The adhesion segmentation algorithm reduced mean relative error in droplet count to 6.3%, significantly outperforming standard watershed (26.7%) and simple concave pairing (10.8%) methods, particularly in medium-to-high coverage scenarios typical of UAV low-volume spraying.
Field trials confirmed the device’s ability to detect deposition variability in real time, facilitating immediate flight parameter adjustments that improved average deposition by 23–26%. The system addresses critical limitations of traditional laboratory-based methods—time delay, bulkiness, and lack of portability—offering farmers and researchers a practical tool for on-site spray quality evaluation and data-driven optimization.
The 23–26% deposition improvement reported above was measured against baseline settings (initial flight parameters without device guidance) across three independent flights on wheat plots (six cards per flight, 18 cards in total).
Main limitations include reliance on visible tracers, sensitivity to extreme lighting conditions, moderate VMD estimation performance (R2 = 0.512) due to residual strong adhesions, and the modest validation dataset size. Future refinements will target polarization filtering for reflection mitigation, fluorescent tracers for colorless formulations, deep learning enhancements for high-coverage adhesion segmentation, and larger-scale field validation across diverse crops and conditions.
Overall, the device offers a practical, low-cost tool for farmers and researchers to achieve rapid on-site spray quality assessment, supporting optimized pesticide use, reduced off-target losses, and more sustainable agricultural practices.