1. Introduction
Pollination plays a pivotal role in fruit cultivation by contributing to stable yields and maintaining high fruit quality. However, many fruit tree species encounter difficulties in achieving adequate fruit set through natural pollination because of the decline in insect populations caused by global warming [1,2]. In natural pollination, honeybees are widely relied on because of their ability to pollinate large areas efficiently with minimal human intervention. Despite these advantages, honeybee activity is highly susceptible to environmental conditions. Elevated temperatures and the broader impacts of climate change can lead to decreased foraging behavior, which results in unstable fruit set and deformities in fruit shape. Additionally, the ongoing decline in honeybee populations caused by environmental degradation and habitat loss has further undermined the reliability of natural pollination [3,4]. To address these limitations, artificial pollination methods have been adopted. Manual pollination by human workers is the most prevalent. This technique enables relatively stable fruit set outcomes by ensuring that each flower is carefully pollinated. However, it is extremely labor-intensive and increasingly impractical for application in large-scale orchards. The persistent labor shortage in the agricultural sector exacerbates this issue, particularly for commercial fruit farms that require scalable and sustainable pollination methods [5,6,7]. Given the shortcomings of both natural and manual pollination, there is an increasing need for alternative solutions that combine stability with reduced labor dependency. In response to this demand, drone-based pollination systems have recently emerged as a promising approach because they have the potential to achieve efficient and consistent pollination across wide orchard areas with minimal human intervention.
Recent advancements in drone technology, particularly regarding flight stability, operability, and precise control, have led to its rapid adoption across various industries, including agriculture [8]. In farming, drones equipped with high-precision cameras and advanced sensors are being used to monitor crop growth and detect pest infestations. These sensing technologies have greatly improved efficiency in agricultural monitoring by reducing the reliance on manual labor and enhancing productivity [9]. Beyond agriculture, drones are being used in logistics [10,11], infrastructure maintenance [12,13], disaster management [14,15], and environmental monitoring [16,17], thereby catering to a wide range of societal needs. Given these technological advancements and the increasing demand for automation, a drone-based pollination system is a promising solution to improve the efficiency of pollination in agriculture [18,19,20].
In this study, we focus on the development of a drone-based pollination system that detects the blooming status of pear flowers and autonomously performs pollination. The proposed system consists of two drones that work together to ensure an efficient pollination process (Figure 1). The first, called the observation drone, is responsible for monitoring the pear trees, whereas the second, called the pollination drone, performs the actual pollination. The observation drone monitors the pear trees by flying between them and capturing both color and depth images of the blooming flowers. These images are then transmitted to a cloud-based server, where both flower detection and spatial clustering are performed using computer vision algorithms. A flight path for the pollination drone is then generated based on the processed data. The pollination drone navigates to the identified flower clusters autonomously and performs targeted pollen spraying.
By assigning distinct roles to the observation drone and pollination drone, the system aims to automate the pollination process, reduce labor burden, and enhance operational efficiency in farming. Additionally, the integration of machine learning with drone technology allows for adaptive real-time decision making, which improves the consistency and accuracy of pollination. Despite these advantages, the implementation of drone-based pollination systems presents three critical technical challenges that must be addressed: (i) the accurate identification of the optimal pollination period, (ii) the realization of precise drone flight, and (iii) the determination of efficient pollination flight paths. The first challenge relates to the determination of optimal pollination timing, which is a key factor in maximizing fruit set rates. In our previous study, we proposed a method that leverages DeepSort [21], a deep-learning-based object tracking algorithm, to monitor and count the number of blooming flowers and buds. This method processes video footage captured by the observation drone and uses machine learning techniques to automatically detect, track, and quantify individual floral structures throughout the blooming phase of pear trees. In experimental evaluations, we demonstrated that this approach accurately identified peak blooming periods, which enabled timely and effective pollination intervention. The second challenge involves achieving high-precision drone flight, which is essential for accurately targeting pollination sites. To address this, we developed a method based on RTK (Real-Time Kinematic) positioning [22], as detailed in our previous study [23]. Through field experiments, we confirmed that the RTK-based approach significantly improved navigational accuracy and enabled stable and reliable flight control, even in complex orchard environments. The third challenge is the determination of the pollination drone's flight path. To execute effective and efficient pollination, it is essential to first detect the precise spatial coordinates of individual flowers. Once the observation drone has estimated the flower positions using integrated RGB and depth data, a flight path must be constructed that allows the pollination drone to visit the target flowers in an optimized sequence. This third challenge remains unresolved; hence, it is necessary to investigate effective methods for determining the flight path of the pollination drone.
In this study, we propose a cluster-based flight path construction method for use with autonomous drone pollination in orchard environments. The proposed system transmits RGB-D data that have been captured by the observation drone to a cloud-based platform, where the three-dimensional coordinates of the blooming pear flowers are estimated. Pear flowers typically bloom in dense clusters along each branch, which means that it is reasonable to define the spraying targets at the level of flower clusters rather than at the individual flower locations, as shown in Figure 2. A clustering algorithm is therefore applied to the estimated spatial data to group the flowers based on their proximity and thus effectively identify the natural cluster formations on branches. Using the centroids of these clusters, the system then generates a structured flight path for the pollination drone to enable efficient navigation and precise pollen spraying while also minimizing unnecessary movements.
To validate the effectiveness of the proposed method, we conducted field experiments using video data recorded in an actual orchard environment. The results confirmed that the observation drone successfully detected and tracked flowers, estimated spatial coordinates with high accuracy, and provided reliable navigation data for the pollination drone. The experimental validation demonstrated that the system autonomously performed pollination with precision, thereby improving efficiency in large-scale fruit orchards.
The contributions of this paper are summarized as follows:
Integrated Flower Detection and 3D Localization: The proposed system uses an observation drone equipped with RGB and depth cameras to detect pear flowers and estimate their 3D coordinates with high accuracy, thereby enabling precise guidance for autonomous pollination.
Clustering-Based Flight Path Planning: We introduce a novel flight path construction method that applies the OPTICS clustering algorithm to group spatially proximate flowers and determine efficient, branch-level waypoints. This approach reduces unnecessary drone movement and improves overall pollination efficiency.
Experimental Validation with RTK-Based Control: We demonstrated the system’s effectiveness through real-world experiments using RTK-GNSS (Global Navigation Satellite System)-based flight control to confirm the accuracy of flower detection, position estimation, and autonomous pollination in orchard environments.
Fruit Set Rate Evaluation: The effectiveness of the proposed method was further validated by comparing the fruit set rate with that of natural and manual pollination. The results confirmed that our drone-based method achieves a higher fruit set rate than natural pollination and a comparable performance to manual pollination, demonstrating its practical utility in real-world agricultural settings.
The remainder of this paper is organized as follows. Section 2 reviews the related work. Section 3 details the proposed flight path construction method for the pollination drone. Section 4 presents the experimental setup and provides a performance evaluation of the proposed system. Section 5 discusses the implications and limitations of the proposed approach. Finally, Section 6 concludes the paper.
3. Proposed Flight Path Construction Method for the Pollination Drone
3.1. Overview
Figure 3 shows the flowchart for the proposed method. In the proposed drone pollination system, pollination is conducted through a coordinated operation involving the observation drone and the pollination drone, with each fulfilling a designated role. The observation drone begins by flying between the pear trees and capturing real-time images of pear flowers using both a conventional RGB camera and a depth camera. These images are then uploaded to a cloud-based system, where the machine learning algorithms immediately detect the flowers and extract their spatial information. The RGB camera captures high-resolution images for flower detection, whereas the depth camera provides distance information, thus allowing the system to estimate the 3D coordinates of the flowers. The pollination drone is then equipped with a pollen sprayer and navigates to the detected flower positions using the coordinate data provided by the observation drone. Using the analyzed video data, the pollination drone approaches each flower precisely and then sprays pollen at the optimal location. This structured approach ensures efficient, accurate pollination while also eliminating any need for manual labor.
To illustrate how the RGB and depth data are jointly used for accurate 3D flower localization, Figure 4 compares a conventional RGB image and its corresponding depth heatmap. In the RGB image, the pear flowers are clearly visible, whereas the heatmap reveals the shapes and the relative depths of the flowers through the various color gradients. By analyzing the RGB and depth images in combination, the system estimates the 3D coordinates for each flower accurately. These estimated positions then serve as the basis for construction of an efficient flight path for the pollination drone.
The overall procedure for the proposed drone path construction method, which is based on RGB-D processing, is shown in Figure 5. Initially, the system performs flower detection using images captured by an RGB camera. Subsequently, it estimates the distance from the camera to each detected flower based on the corresponding depth data. By further calculating the distance from the drone's takeoff location to the image capture position, the system can estimate the absolute coordinates of the flowers. Then, it clusters the detected flower positions using a suitable clustering algorithm and determines the centroid of each cluster. The system plans the drone's flight path to sequentially visit these centroids. By hovering in front of each centroid, the drone can spray pollen in a manner that simultaneously targets multiple flowers within the cluster, thereby enhancing pollination efficiency. The detailed procedure for constructing the flight path is organized into the following steps:
- Step 1. Train the flower detection algorithm.
- Step 2. Capture video footage with the observation drone.
- Step 3. Apply the flower detection algorithm to the video.
- Step 4. Extract flower coordinates.
- Step 5. Remove unnecessary points.
- Step 6. Determine the flight path.
In Step 1, the flower detection algorithm is trained in advance of the blooming period. This training is performed on a cloud-based computer using a labeled dataset to enable the accurate identification of pear flowers during actual operation.
During the blooming season, Step 2 involves capturing video footage of the orchard using the observation drone. Once the recording is complete, the footage is transferred to the cloud-based computer for further processing.
Steps 3–6 are performed on the cloud-based computer, where the system detects flowers, estimates their coordinates, filters out unnecessary points, and generates the flight path for the pollination drone.
Once Step 6 is complete, the system transmits the computed flight path to the pollination drone. The pollination drone follows this flight path to perform pollination. The effectiveness of pollen spraying depends significantly on the performance of the sprayer, which makes it a subject for further study. The following sections provide a detailed explanation of each step in the process.
3.2. Training the Flower Detection Algorithm
For pear flower detection, YOLO (You Only Look Once) [37,38,39] is employed. YOLO is a state-of-the-art, deep-learning-based object detection algorithm widely used in computer vision applications, particularly known for its high detection accuracy and real-time performance. Unlike traditional region-based methods, YOLO processes the entire image in a single forward pass through a neural network, thereby directly predicting bounding boxes and associated class probabilities for multiple objects.
YOLO uses a CNN (convolutional neural network) for object detection. A CNN is a type of neural network designed to extract features from input images and classify them accordingly. Within a CNN, the convolutional layer plays a crucial role. This layer applies filters to the image to extract features such as edges, colors, textures, and shapes. These low-level features are progressively transformed into higher-level representations, which ultimately enables the prediction of object positions and categories.
Compared with conventional R-CNNs, YOLO has significant advantages in object detection. In an R-CNN, candidate regions are first generated and each region is processed individually, which results in high computational costs and slower processing speeds. By contrast, YOLO divides the image into a grid, and each grid cell predicts the location of objects. This approach enables simultaneous object detection and classification, which significantly improves processing speed. In fact, YOLO can detect objects at high speed, which makes it suitable for real-time applications.
When an image is input into the YOLO network, it generates the following outputs:
Bounding Box: A bounding box is a rectangle that indicates the potential region where an object is detected. It represents the position of the detected object within the image (Figure 6). The bounding box is typically defined using four parameters: the central coordinates $(x, y)$, width $w$, and height $h$. For example, if an object is detected in an image at position $(x, y)$ with width $w$ and height $h$, the bounding box can be represented as:

$$b = (x, y, w, h)$$
Confidence Score: Each bounding box is assigned a confidence score, which represents both the probability that an object is present within the box and the accuracy of the prediction. This confidence score $C$ is calculated as follows:

$$C = \Pr(\mathrm{Object}) \times \mathrm{IoU}$$

where $\Pr(\mathrm{Object})$ is the probability that an object is present inside the bounding box. IoU (intersection over union) represents the overlap ratio between the predicted bounding box and the ground truth bounding box. The higher the confidence score, the higher the probability that the bounding box contains an object and that the predicted box closely matches the actual object's position.
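To make the IoU term concrete, the following minimal Python sketch computes the IoU of two axis-aligned boxes given in the center-based $(x, y, w, h)$ format described above (the function is illustrative and is not part of the YOLO implementation itself):

```python
def iou(box_a, box_b):
    """IoU of two axis-aligned boxes given as (x_center, y_center, w, h)."""
    # Convert center format to corner format.
    ax1, ay1 = box_a[0] - box_a[2] / 2, box_a[1] - box_a[3] / 2
    ax2, ay2 = box_a[0] + box_a[2] / 2, box_a[1] + box_a[3] / 2
    bx1, by1 = box_b[0] - box_b[2] / 2, box_b[1] - box_b[3] / 2
    bx2, by2 = box_b[0] + box_b[2] / 2, box_b[1] + box_b[3] / 2
    # Intersection rectangle (zero area if the boxes do not overlap).
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = box_a[2] * box_a[3] + box_b[2] * box_b[3] - inter
    return inter / union if union > 0 else 0.0

# Example: a predicted box shifted 5 px relative to the ground truth.
print(iou((50, 50, 20, 20), (55, 50, 20, 20)))  # -> 0.6, moderate overlap
```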
To detect pear flowers that are suitable for pollination, flower states are classified using two labels: bud and flower. The bud label indicates that the pear flower has not yet bloomed and is not ready for pollination. Because the bud remains closed at this stage, artificial pollination cannot be performed; hence, these flowers are excluded from the pollination process. By contrast, the flower label indicates that the pear flower has fully bloomed and has reached a state where artificial pollination is possible. Flowers in this state are the targets for drone-assisted pollination. A dataset labeled with bud and flower is used to train the YOLO object detection algorithm. This classification is a crucial step in the pollination process because accurately distinguishing between buds and flowers ensures efficient and reliable pollination.
Figure 7 provides examples of images labeled as bud and flower. The images illustrate the two different states of pear flowers, which are the focus of this study. The preparation of a labeled dataset with these classifications enables YOLO to be trained to distinguish between bud and flower states. YOLO is a highly efficient and accurate algorithm for object detection. Because it processes the entire image in a single pass and predicts object locations simultaneously, it is particularly suited for tasks that require real-time processing, such as automated pollination.
This classification and training process establishes the foundation for an automated pollination system using drones. Accurate detection of flower states is essential for improving the efficiency of pollination operations and reducing the workload of agricultural labor. Proper labeling ensures that only flowers that are ready for pollination receive targeted action, which enhances the precision and effectiveness of the pollination system.
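For reference, training such a two-class detector requires only a few lines with an off-the-shelf YOLO implementation. The sketch below assumes the Ultralytics package as one possible choice (the text does not prescribe a specific YOLO version); the checkpoint name, dataset YAML path, and hyperparameters are placeholders:

```python
from ultralytics import YOLO

# Fine-tune a pretrained checkpoint on the bud/flower dataset.
# "pear_flowers.yaml" is a hypothetical dataset file listing image paths
# and the two class names: ["bud", "flower"].
model = YOLO("yolov8n.pt")
model.train(data="pear_flowers.yaml", epochs=100, imgsz=640)

# Run inference on a frame captured by the observation drone.
results = model("orchard_frame.jpg")
for box in results[0].boxes:
    cls_name = results[0].names[int(box.cls)]   # "bud" or "flower"
    conf = float(box.conf)                      # confidence score C
    x, y, w, h = box.xywh[0].tolist()           # center-format bounding box
    print(cls_name, conf, (x, y, w, h))
```

Only detections labeled flower with a sufficiently high confidence score would be passed on to the coordinate estimation stage.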
It is important to note that flowers in the intermediate stages between bud and full blossom were excluded from the training dataset in this study. This is because such flowers are generally not considered suitable for effective pollination and may introduce ambiguity during classification. As a result, the detection accuracy during these intermediate stages is expected to be low. There is also a risk that some of these flowers may be misclassified as blossoms, potentially leading to unintended pollination attempts. In future work, we intend to include intermediate-stage samples explicitly in the training data and train the model to distinguish and exclude them, thus improving the overall robustness of the detection system.
3.3. Capture of the Observation Drone Video
To monitor the pear trees, the observation drone is programmed for daily flights through the orchard. The flight path is predefined using the ROS (Robot Operating System) (Figure 8), ensuring that it follows a predetermined route at regular intervals. This systematic flight pattern allows the drone to operate stably within the orchard while efficiently observing different areas.
During its flight between trees, the drone orients its camera toward target pear trees to capture images and videos of flowers in specific regions. This setup enables the collection of essential visual data needed to detect flowers suitable for pollination. However, this method also introduces a challenge: unintended flowers may appear in the captured footage. For instance, flowers on the opposite side of a tree or from neighboring rows may be recorded because of the camera’s orientation. This issue arises because the field of view is determined by the drone’s flight direction, which potentially leads to the unintended detection of non-target flowers.
Attempting to eliminate such false detections at the time of image capture by dynamically adjusting the camera view or the recorded footage would significantly increase system complexity and processing time. Additionally, in real-world agricultural environments, pear flowers are distributed irregularly, which makes it difficult to completely avoid false detections through real-time adjustments. Given these constraints, the proposed method does not incorporate specific measures to filter out unwanted flowers at the time of image capture.
Instead, after the system computes the flower coordinates in Step 4, a filtering process is introduced in Step 5 to remove mistakenly detected points. By allowing some degree of false detection in the early stages and eliminating unnecessary data in later processing steps, the system ensures that only flowers relevant for pollination are accurately identified. This approach effectively minimizes the effect of false detection while maintaining an efficient and reliable method for detecting pollination-ready pear flowers.
3.4. Estimation of Flower Coordinates
To accurately estimate the coordinates of flowers, the observation drone uses a depth camera to measure the distance between the drone and the flowers. Additionally, by integrating the drone's GNSS data and attitude information, the system can determine the flower coordinates in 3D space.
Figure 9 shows the procedure for flower coordinate estimation. First, the observation drone applies the YOLO algorithm, as described in the previous section, to the captured color image (with a resolution of $W \times H$ pixels) to detect the positions of individual flowers. The system localizes each detected flower in the image as a pixel coordinate $(u, v)$. Subsequently, using both the color and corresponding depth images, the system projects the detected pixel positions into a point cloud aligned with the world coordinate system. This coordinate system is defined such that the image center serves as the origin and the positive $Z$-axis is aligned with the direction of depth. The system obtains the 3D position $(X, Y, Z)$ of a flower relative to the drone by accessing the $(u, v)$-th element of the point cloud.

Additionally, the drone's flight position is recorded in the ENU (East-North-Up) coordinate system as $\boldsymbol{p} = (p_E, p_N, p_U)$, where the origin corresponds to the takeoff location, and the positive directions are defined as eastward, northward, and upward, respectively. The drone's orientation is represented by a quaternion $\boldsymbol{q} = (q_w, q_x, q_y, q_z)$, which satisfies the unit-norm constraint:

$$q_w^2 + q_x^2 + q_y^2 + q_z^2 = 1$$

To transform the flower's position from the drone-relative coordinate system to the ENU coordinate system, we use the drone's position $\boldsymbol{p}$ and attitude $\boldsymbol{q}$ corresponding to the timestamp closest to the image capture time. The drone-relative position $(X, Y, Z)$ is rotated by $\boldsymbol{q}$ and translated by $\boldsymbol{p}$ to obtain the flower's ENU coordinates. This transformation enables the representation of flower locations in a consistent geographic reference frame, which is essential for subsequent flight path planning. Let the coordinates of the $n$-th detected flower be denoted by $\boldsymbol{f}_n$ and let the total number of detected flowers be $N$. The set $F$ of all detected flower positions is then represented as

$$F = \{\boldsymbol{f}_1, \boldsymbol{f}_2, \ldots, \boldsymbol{f}_N\}$$
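As a minimal illustration of this transformation, the sketch below rotates a drone-relative flower position by the attitude quaternion and translates it by the drone's ENU position using SciPy. It assumes, for simplicity, that the camera frame is already aligned with the drone body frame; a full implementation would also apply the fixed camera-to-body extrinsics:

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

def flower_to_enu(flower_xyz, drone_pos_enu, drone_quat_xyzw):
    """Transform a drone-relative flower position into ENU coordinates.

    flower_xyz:      (X, Y, Z) from the depth camera's point cloud
    drone_pos_enu:   (p_E, p_N, p_U) at the image capture time
    drone_quat_xyzw: attitude quaternion in SciPy's (x, y, z, w) order
    """
    rot = R.from_quat(drone_quat_xyzw)           # unit quaternion -> rotation
    return np.asarray(drone_pos_enu) + rot.apply(flower_xyz)

# Example: drone at 1.5 m altitude, yawed 90 degrees; flower 2 m ahead.
q = R.from_euler("z", 90, degrees=True).as_quat()
print(flower_to_enu([2.0, 0.0, 0.0], [10.0, 5.0, 1.5], q))
# -> [10. 7. 1.5]: the flower lies 2 m north of the drone.
```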
3.5. Removal of Unnecessary Points
The set $F$ of detected flowers may include flowers located on trees in the background or on the backside, which are not relevant for determining the spraying points. Figure 10 shows the detection of unnecessary flowers; it contains examples of detected flowers on background trees or on the backside. Such points must be removed from $F$ because they are unnecessary for planning the pollen spraying locations. If flowers beyond the designated flight path are incorrectly identified, the pollen-spraying drone may attempt to fly across trees during pollination, which increases the risk of a collision or crash. To address this issue, we propose a method to limit the target flowers for pollen spraying using a depth camera. Specifically, the average distance of pear branches is measured in advance and set as a threshold. Among the flowers detected by the observation drone's depth camera, we consider those located beyond this threshold distance to be on the opposite side of the tree and exclude them from the pollination process.
In the proposed method, unnecessary points are removed by ignoring flower positions that are located too far from the observation drone. Specifically, for each detected flower, if the distance between its coordinates and the position of the drone at the time of capture exceeds a predefined threshold $d_{\mathrm{th}}$, we consider the flower to be on the backside and exclude it. The threshold $d_{\mathrm{th}}$ is determined in advance based on the measurement of branch lengths. Let $F'$ denote the set of flower positions after unnecessary points are removed:

$$F' = \{\boldsymbol{f}_n \in F \mid \|\boldsymbol{f}_n - \boldsymbol{p}_n\| \le d_{\mathrm{th}}\}$$

where $\boldsymbol{p}_n$ denotes the drone's position at the time when flower $\boldsymbol{f}_n$ was captured.
The implementation of this method enables unnecessary flowers to be filtered out at a lower cost and with less effort compared with the installation of additional sensors or measurement of the GNSS coordinates of each pear tree. This approach ensures efficient and safe pollination by preventing unnecessary flights that could lead to obstacles and operational risks.
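A compact, vectorized version of this filtering rule might look as follows, assuming each detection is stored together with the drone position at its capture time (the array names are illustrative):

```python
import numpy as np

def remove_background_flowers(flowers, drone_positions, d_th):
    """Keep only flowers within d_th meters of the drone at capture time.

    flowers:         (N, 3) ENU flower coordinates f_n
    drone_positions: (N, 3) drone ENU position p_n when flower n was captured
    d_th:            distance threshold derived from measured branch lengths
    """
    dists = np.linalg.norm(flowers - drone_positions, axis=1)
    return flowers[dists <= d_th]

# Example: with d_th = 2.0 m, the 5 m-away detection is discarded as backside.
f = np.array([[1.0, 1.0, 1.5], [4.0, 3.0, 1.5]])
p = np.array([[0.0, 0.0, 1.5], [0.0, 0.0, 1.5]])
print(remove_background_flowers(f, p, d_th=2.0))  # only the first flower remains
```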
3.6. Clustering of Flower Coordinates Using OPTICS
For the flower coordinate set $F'$, clustering using the OPTICS (Ordering Points To Identify the Clustering Structure) algorithm is performed. The OPTICS algorithm is a density-based clustering method designed to identify data clusters with varying densities, making it well-suited to the irregular distributions of pear flowers in natural environments. In pear orchards, the flowers tend to grow in groups along individual branches, creating spatial gaps between the clusters and varying cluster densities that depend on the branch structure. OPTICS is particularly effective in these settings because it does not require the number of clusters to be specified in advance and it can detect clusters across a wide range of densities while also handling spatial separation and noise.
OPTICS determines the clustering structure based on two parameters: the minimum number of points $\mathit{MinPts}$ and the maximum neighborhood radius $\varepsilon$. Unlike representative clustering methods [40], OPTICS does not directly produce a clustering. Instead, it generates an augmented ordering of the data points that captures the density-based clustering structure at multiple scales.
The clustering process of OPTICS consists of the following steps:
- Step 1. Mark each element of $F'$ as unprocessed. Subsequently, initialize the set of pending points $D$ as $D = F'$.
- Step 2. If $D = \emptyset$, terminate the algorithm; otherwise, select a specific element $\boldsymbol{p}$ from $D$. Update $D$ as $D \setminus \{\boldsymbol{p}\}$.
- Step 3. If $\boldsymbol{p}$ is processed, return to Step 2; otherwise, compute the neighborhood $N_\varepsilon(\boldsymbol{p})$ as the set of points within a distance $\varepsilon$ of $\boldsymbol{p}$. If the number of points in $N_\varepsilon(\boldsymbol{p})$ is at least $\mathit{MinPts}$, consider the point $\boldsymbol{p}$ a core point. Define the core distance $\mathrm{cd}(\boldsymbol{p})$ as the distance from $\boldsymbol{p}$ to its $\mathit{MinPts}$-th nearest neighbor within $N_\varepsilon(\boldsymbol{p})$:

$$\mathrm{cd}(\boldsymbol{p}) = \begin{cases} \mathrm{dist}(\boldsymbol{p}, \boldsymbol{p}_{\mathit{MinPts}}) & \text{if } |N_\varepsilon(\boldsymbol{p})| \ge \mathit{MinPts} \\ \text{undefined} & \text{otherwise} \end{cases}$$

where $\boldsymbol{p}_{\mathit{MinPts}}$ denotes the $\mathit{MinPts}$-th nearest neighbor of $\boldsymbol{p}$. When the computation of the neighborhood is complete, mark the point as processed and append its entry to the ordered list to preserve the density-based traversal order.
- Step 4. If the core distance $\mathrm{cd}(\boldsymbol{p})$ is defined, initialize a priority queue $Q$. Evaluate each unprocessed point $\boldsymbol{o} \in N_\varepsilon(\boldsymbol{p})$ and compute its reachability distance from $\boldsymbol{p}$ as

$$\mathrm{rd}(\boldsymbol{o}, \boldsymbol{p}) = \max\{\mathrm{cd}(\boldsymbol{p}),\ \mathrm{dist}(\boldsymbol{p}, \boldsymbol{o})\}$$

If $\boldsymbol{o}$ does not yet have a reachability distance assigned to it, set the computed value and insert $\boldsymbol{o}$ into the priority queue $Q$. If $\boldsymbol{o}$ is already in the queue with a larger reachability distance, update its value and adjust its position in the queue accordingly.
- Step 5. While $Q$ is not empty, remove the point $\boldsymbol{o}$ with the smallest reachability distance from the queue. If $\boldsymbol{o}$ is unprocessed, mark it as processed and add it to the ordered list. Then, compute its $\varepsilon$-neighborhood and evaluate its core distance $\mathrm{cd}(\boldsymbol{o})$. If $\boldsymbol{o}$ is a core point, process its neighbors in the same manner as described in Step 4 and add them to $Q$.
- Step 6. When the processing of all points reachable from $\boldsymbol{p}$ is complete, return to Step 2.
Clusters are extracted from the output of the OPTICS algorithm by identifying changes in the density structure observed in the reachability plot. Specifically, after the reachability-ordered list of data points is constructed, the algorithm analyzes the plot to detect regions where the reachability distance exhibits significant local variations. It initiates a new cluster when the reachability distance decreases sharply, which indicates a transition into a denser region. The cluster ends when the reachability distance increases beyond a specified ratio. This behavior is controlled by the parameter $\xi$, which defines the minimum relative change in reachability distance required to signal a cluster boundary. In essence, the algorithm detects valleys in the reachability plot whose slopes exceed the threshold defined by $\xi$, which allows the identification of clusters based on density variations rather than a fixed global distance threshold.
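In practice, this ordering and $\xi$-based extraction need not be implemented from scratch; the sketch below applies scikit-learn's OPTICS implementation to synthetic flower positions (the parameter values are placeholders rather than the values tuned for the experiments):

```python
import numpy as np
from sklearn.cluster import OPTICS

rng = np.random.default_rng(0)
# Two synthetic "branches" of flowers plus one isolated (noise) detection.
flowers = np.vstack([
    rng.normal([0.0, 0.0, 1.5], 0.05, size=(12, 3)),  # cluster on branch 1
    rng.normal([1.2, 0.3, 1.6], 0.05, size=(10, 3)),  # cluster on branch 2
    [[5.0, 5.0, 1.5]],                                # stray background point
])

optics = OPTICS(min_samples=5, max_eps=0.5, cluster_method="xi", xi=0.05)
labels = optics.fit_predict(flowers)  # label -1 marks noise points

# The centroid of each extracted cluster becomes a candidate spraying point.
for k in sorted(set(labels) - {-1}):
    print(k, flowers[labels == k].mean(axis=0))
```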
Let $K$ denote the number of clusters obtained by the OPTICS algorithm and let each cluster be represented by $C_k$ ($k = 1, \ldots, K$). Furthermore, let $\boldsymbol{g}_k$ denote the centroid of cluster $C_k$, which is defined as

$$\boldsymbol{g}_k = \frac{1}{|C_k|} \sum_{\boldsymbol{f} \in C_k} \boldsymbol{f}$$

where $|C_k|$ is the number of points in cluster $C_k$ and $\boldsymbol{f}$ represents the coordinate of each flower in the cluster. The centroids $\boldsymbol{g}_k$ ($k = 1, \ldots, K$) are rearranged in ascending order of their distance from the drone's starting position $\boldsymbol{s}$, which results in an ordered sequence denoted by $\boldsymbol{g}'_k$ ($k = 1, \ldots, K$).
Because flying the drone directly to the position of $\boldsymbol{g}'_k$ may result in a collision with branches, it is necessary to apply an offset $\boldsymbol{o}$ to each $\boldsymbol{g}'_k$ to ensure a safe flight path. Because the vertical position should remain unchanged, the offset vector is defined as

$$\boldsymbol{o} = (o_E, o_N, 0)$$

The offset vector $\boldsymbol{o}$ is determined as follows. First, using the coordinates of the starting point $\boldsymbol{s}$ and ending point $\boldsymbol{e}$, the unit vector $\boldsymbol{u}$ is defined as

$$\boldsymbol{u} = \frac{\boldsymbol{e} - \boldsymbol{s}}{\|\boldsymbol{e} - \boldsymbol{s}\|}$$

Let $\boldsymbol{v}$ denote a vector orthogonal to $\boldsymbol{u}$ in the horizontal plane. Then, $\boldsymbol{v}$ is given by

$$\boldsymbol{v} = (-u_N, u_E, 0)$$

The offset vector $\boldsymbol{o}$ is defined using $\boldsymbol{v}$ as follows:

$$\boldsymbol{o} = d_{\mathrm{off}}\, \boldsymbol{v}$$

where $d_{\mathrm{off}}$ is a predefined offset distance used to avoid collisions with branches. The parameter $d_{\mathrm{off}}$ is determined based on the length of the nozzle mounted on the pollination drone. In this study, we assume that the nozzle length is 0.8 m, and by adding a margin of 0.5 m, $d_{\mathrm{off}}$ is set to 1.3 m.
The drone's flight path is defined using the centroids $\boldsymbol{g}'_k$ ($k = 1, \ldots, K$) and the offset vector $\boldsymbol{o}$, which results in the following trajectory:

$$P = \left(\boldsymbol{g}'_1 + \boldsymbol{o},\ \boldsymbol{g}'_2 + \boldsymbol{o},\ \ldots,\ \boldsymbol{g}'_K + \boldsymbol{o}\right)$$

where each waypoint $\boldsymbol{g}'_k + \boldsymbol{o}$ is visited in order. Figure 11 illustrates the relationship between the flight path and the mathematical notations used in the formulation. The path $P$ represents the planned flight trajectory. This path ensures that the drone will approach each flower cluster from a safe offset position defined by $\boldsymbol{o}$ to perform pollination effectively.
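Combining the ordering, offset, and waypoint definitions above, a minimal sketch of the path construction is given below (variable names follow the notation in this section; the 1.3 m default matches the nozzle-plus-margin value stated above, and the flight line is assumed to be roughly horizontal):

```python
import numpy as np

def build_flight_path(centroids, start, end, d_off=1.3):
    """Order cluster centroids by distance from start and offset them sideways.

    centroids: (K, 3) cluster centroids g_k in ENU coordinates
    start/end: (3,) start point s and end point e of the flight line
    d_off:     horizontal offset distance in meters (0.8 m nozzle + 0.5 m margin)
    """
    start, end = np.asarray(start, float), np.asarray(end, float)
    order = np.argsort(np.linalg.norm(centroids - start, axis=1))
    ordered = np.asarray(centroids, float)[order]     # g'_1, ..., g'_K

    u = (end - start) / np.linalg.norm(end - start)   # unit flight direction
    v = np.array([-u[1], u[0], 0.0])                  # horizontal orthogonal vector
    offset = d_off * v                                # altitude stays unchanged
    return ordered + offset                           # waypoints g'_k + o

centroids = np.array([[4.0, 0.2, 1.6], [1.0, 0.1, 1.5]])
path = build_flight_path(centroids, start=[0, 0, 1.5], end=[10, 0, 1.5])
print(path)  # nearest cluster first, each waypoint shifted 1.3 m to the side
```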
5. Discussion
The proposed drone-based pollination system in this work integrates RGB-D image processing, clustering-based flight path construction, and an RTK-GNSS-based control system to automate pollination in pear orchards. The overall system realized accurate detection of blooming flowers and generated efficient flight paths, thus demonstrating its potential for real-world agricultural applications. The blossom detection component reached a high F1 score of 0.880, which indicated that the YOLO-based model effectively identifies flowers that are suitable for pollination. This level of accuracy is comparable with those reported in previous studies of vision-based flower detection using machine learning [24,25,26]. However, recent advances in deep learning, including transformer-based architectures, have demonstrated improved detection capabilities in a variety of complex agricultural environments. Incorporating these advanced models into our system could further enhance detection accuracy, particularly under conditions involving occlusions, uneven lighting, or dense flower clusters.
One notable contribution of this study is the use of the OPTICS clustering algorithm, which enabled robust grouping of the flower positions based on spatial proximity. In pear orchards, the flowers tend to appear in localized clusters along the branches, with a combination of natural gaps and non-uniform densities. The OPTICS method was particularly effective in adapting to this structure and distinguishing flower clusters from noise. In comparison with the X-means, DBSCAN, and HDBSCAN methods, the OPTICS-based method yielded more accurate and more efficient pollination paths by identifying cluster centers appropriately. However, the performance of the OPTICS technique depends on its parameter settings, including the choice of $\xi$. Future research should, therefore, investigate automated or adaptive parameter selection strategies to reduce manual tuning and improve consistency.
Previous drone path planning studies for agricultural tasks have often assumed open-field conditions that allow for relatively unconstrained aerial movement [31,33]. Many such works have primarily focused on simulation environments or have assumed sparsely distributed targets within flat terrains. In contrast, the clustered arrangements of the flowers along narrow tree branches in orchard environments impose unique drone navigation constraints. The cluster-based path planning approach proposed in this study is suited to such structured environments because it minimizes unnecessary flight maneuvers while also ensuring targeted pollination at meaningful spatial resolutions. By aligning the path planning strategy with the physical distribution of the flowers, the proposed method is effective in real-world orchard settings.
Despite the positive results observed in the overall fruit set rate, as shown in Table 6, the proportion of high-quality fruits obtained using the drone-based system was lower than that achieved by manual pollination. This discrepancy can likely be attributed to the coarse spraying mechanism, which targets the flower clusters as single units. In contrast, human pollinators can address individual flowers selectively. Enhancing the nozzle design and refining the spray control mechanisms will be essential to improving the selectivity and effectiveness of drone-mediated pollination.
The current experiments were conducted under favorable weather conditions; the system's robustness against environmental variations, including wind, lighting changes, and occlusion from leaves or branches, was not explicitly examined. Evaluating the system's performance under such conditions will be essential for practical deployment. Future work will include additional field experiments under diverse environments to assess the system's stability and adaptability.
The filtering mechanism used to exclude background flowers relies on distance-based heuristics. Although this approach has been effective to a degree, incorporating more advanced filtering techniques that consider the orchard geometry, the tree structure, and the camera viewpoint could further enhance detection precision.
Another potential direction for improvement involves real-time path adjustment. Although the current system generates flight paths based on pre-detected flower positions, orchard environments may be dynamic. Adaptive clustering and online path re-planning approaches could allow the system to respond to unexpected obstacles or changes in flower distribution during operation. This flexibility is particularly important for large-scale deployments in unstructured and heterogeneous orchard layouts.
Finally, although this study has focused on spatial detection and navigation, the physical behavior of the pollen spray is a critical factor influencing pollination success. Parameters including spray particle dispersion, directionality, and adhesion to floral surfaces will require more detailed modeling and empirical validation. Future studies will investigate spray dynamics along with improved hardware to enhance the system's precision and pollination efficiency.