Article

Optimal Flight Speed and Height Parameters for Computer Vision Detection in UAV Search

Luka Lanča, Matej Mališa, Karlo Jakac and Stefan Ivić
Department of Fluid Mechanics, Faculty of Engineering, University of Rijeka, Vukovarska 58, 51000 Rijeka, Croatia
* Author to whom correspondence should be addressed.
Drones 2025, 9(9), 595; https://doi.org/10.3390/drones9090595
Submission received: 14 July 2025 / Revised: 14 August 2025 / Accepted: 20 August 2025 / Published: 23 August 2025

Abstract

Unmanned Aerial Vehicles (UAVs) equipped with onboard cameras and deep-learning-based object detection algorithms are increasingly used in search operations. This study investigates the optimal flight parameters, specifically flight speed and ground sampling distance (GSD), to maximize a search efficiency metric called effective coverage. A custom dataset of 4468 aerial images with 35,410 annotated cardboard targets was collected and used to evaluate the influence of flight conditions on detection accuracy. The effects of flight speed and GSD were analyzed using regression modeling, revealing a trade-off between the area coverage and detection confidence of trained YOLOv8 and YOLOv11 models. Area coverage was modeled based on flight speed and camera specifications, enabling an estimation of the effective coverage. The results provide insights into how the detection performance varies across different operating conditions and demonstrate that a balance point exists where the combination of the detection reliability and coverage efficiency is optimized. Our table of the optimal flight regimes and metrics for the most commonly used cameras in UAV operations offers practical guidelines for efficient and reliable mission planning.

1. Introduction

UAVs equipped with camera sensors are a powerful tool for performing various inspection and surveillance tasks, as they can provide a bird’s-eye view of an area at relatively low cost. When coupled with machine learning detection algorithms, they can automatically detect and localize various objects. By improving the accuracy and speed of detection tasks, this integration significantly enhances operational efficiency and reduces the need for human intervention. As a result, UAVs equipped with intelligent vision systems can function as autonomous aerial observers, capable of supporting a wide range of applications such as search and rescue missions [1,2], wildlife monitoring [3], and border surveillance [4].
In area coverage tasks, the efficiency generally increases with the UAV’s height and speed, as the area captured in each image increases quadratically with height, and a higher velocity allows more new ground to be covered per image. However, the performance of the detection algorithm tends to decrease under these conditions. Specifically, as speed increases while height and image exposure time remain constant, the relative motion between the camera and the scene becomes more pronounced, causing motion blur and leading to a reduced detection accuracy [5,6]. Similarly, with a fixed camera configuration (image resolution and zoom level), the detection performance declines as the image capture height increases [7,8].
As UAV technology has developed, the number of UAV platforms and onboard sensors has grown significantly, each with different specifications. As a result, it is important to consider not only the height at which an image was captured but also the specific characteristics of the camera in order to standardize and compare the detection results across different systems. A useful metric for achieving this is the GSD, which represents the real-world distance between the centers of two adjacent pixels in an image. The GSD provides a standardized way to relate the image resolution to the actual ground dimensions, making it easier to compare the detection performance across varying heights and camera configurations. In this research, GSD is used as a key parameter to account for differences in camera specifications and flight heights, ensuring a consistent evaluation of the detection results.
Achieving the optimal efficiency in search operations requires careful determination of the UAV flight regime to balance the trade-off between area coverage and detection performance. In pursuit of these objectives, the main contributions of this article can be summarized as follows:
  • A dataset for object detection in orthophoto aerial images containing 35,410 target objects in 4468 images taken at multiple locations. It includes labeled images captured at various heights and speeds, covering the UAV operational range for both parameters.
  • YOLOv8 and YOLOv11 detection models trained and validated on the collected dataset.
  • Performance comparison of the YOLOv8 and YOLOv11 models on the test set.
  • Guidelines for selecting the flight regime for specific sensor configurations for detection in aerial images. The optimal regime considers two of the most important objectives in UAV searches: the detection performance considering image degradation and the area covered by the UAV’s field of view. Both objectives are dominantly influenced by the UAV’s flight speed and height.
  • We analyzed the collected image dataset and trained the YOLO models using the proposed methodology. This allowed us to determine the optimal flight parameters for various commercial UAVs and cameras.

2. Related Work

Based on the number of papers published in the UAV domain [9], the most popular deep learning object recognition approaches are Faster Region-based Convolutional Neural Networks (R-CNN) [10] and You Only Look Once (YOLO) [11]. Faster R-CNN uses region proposal networks to generate bounding boxes for potential objects, followed by the classification stage and post-processing. Although Faster R-CNN typically achieves a strong detection performance on aerial images [12,13], its multi-stage architecture is difficult to optimize, as each component often requires separate training and fine-tuning. In contrast, YOLO is a single-stage detection algorithm that performs object classification and bounding box prediction simultaneously using a single neural network applied to the entire image in one pass. Although it can quickly detect the objects in an image, it often struggles with precise localization, particularly for small objects. While it generally exhibits higher localization errors compared to Faster R-CNN, it tends to produce fewer false positives in background regions [11]. Generally, single-shot detection methods require less memory and are faster [14], making them more suitable for onboard UAV detection systems.
Detection algorithms face numerous challenges when dealing with images captured by UAV cameras. These challenges include high object density due to wide camera angles, small object sizes resulting from high-height imaging, image degradation caused by camera motion, and the need for efficiency to enable real-time operation. An empirical study on the performance of CNN-based image classification on degraded images was conducted in [15]. It included six degradation levels across various methods, such as motion blur, low resolution, Gaussian blur, and fish-eye distortion, all of which led to a reduced classification performance. Another limitation is that objects viewed from a bird’s-eye perspective typically exhibit fewer distinguishing features compared to those from side or front views, which hinders the algorithm’s ability to accurately differentiate between targets [16].
Focusing on the effects of motion blur, the study in [6] evaluated the performance of the YOLOv5 algorithm on images captured by a camera under high-speed motion. The images were collected at various speeds using a camera mounted onto a four-wheeled experimental platform capable of moving at up to 10 m/s, resulting in different levels of motion blur. With a constant camera shutter speed, the only factor affecting the blur was the speed. As the speed increased, the motion blur intensified, leading to a decrease in the detection performance of the algorithm. The effects of blur and deblurring were studied in [5], where it was concluded that light motion blur can actually improve the detection performance, while heavy motion blur negatively impacts the tracking performance. Additionally, deblurring improves the detection in heavily blurred images but can degrade the performance for lightly blurred ones. One way to improve the detection performance is by including blurred images or images with other types of degradation in the training set, as demonstrated in the study by [17]. Two YOLO models were compared: one trained only on original images and the other trained with both original and degraded images (underexposure, rotation, blur, noise). Improved average precision, a greater generalization ability, and enhanced robustness were observed in the model trained with degraded images. Blur can also occur when a moving object passes in front of a stationary camera during the exposure time. Since blur intensity is related to the object’s speed, the speed of the object can be estimated from the amount of blur, as demonstrated in [18,19].
Image capture height is another important parameter when working with UAV imagery, as it has a significant impact on the detection performance. As demonstrated by [8] in the context of animal identification, the detection performance drops off with increasing height. This study provides detailed performance metrics across the tested height range, highlighting the trade-off between the coverage area and detection accuracy. Another study exploring human detection, trajectory, and pose estimation using a region-based convolutional neural network is presented in [20]. They conducted experiments at various heights and discussed the issue of perspective distortion, which becomes more pronounced at higher heights due to the camera’s viewing angle. The authors assumed that the UAV dynamics had a negligible impact on the performance due to the low flight velocity used. The YOLO detection models’ performance has also been studied in the context of object detection in infrared images [21] and human detection in water environments [22]. The study in [21] further examined the effect of height on the detection performance by testing different YOLO model sizes and comparing their results.
A key challenge with high-height imaging is that the target objects appear very small in the captured images. To address the problem of small object detection using the YOLO algorithm, researchers have proposed modifications to its backbone [23] or the neck component [24]. When the height is not so high that objects become too small to detect (e.g., between 10 and 30 m for detecting a person), the best detection performance is generally achieved at the height at which the training images were collected [25].
When considering the detection performance, height is not the only factor that has an impact. Image quality, or how many pixels are used to represent an object, is also important and is influenced by factors such as resolution and the sensor’s field of view. The number of pixels required to obtain a meaningful representation of an image and successfully identify the objects it contains is explored in [26]. Regarding the detection performance on images with varying levels of quality, ref. [27] investigates image enhancements beyond the native resolution and evaluates the detection performance on satellite images of different qualities. This study demonstrates an improved detection performance for objects with larger GSD values. The study in [28] investigated the trade-off between mapping quality and GSD. It provides guidelines based on the DORI (Detection, Observation, Recognition, Identification) standard to calculate GSD values corresponding to different levels of detail. The study in [29] explores the influence of the sensor resolution, image overlap, and UAV height on forest image reconstruction. It highlights the trade-off between efficiency and accuracy, aiming to achieve the optimal reconstruction quality without excessively increasing the UAV flight time or processing time.

3. Data Collection and the Detection Model

The focus of this study is the performance of YOLO, a state-of-the-art object detection framework known for its accuracy and computational efficiency. Two versions of the framework are analyzed: the more mature and stable YOLOv8, and the most recent release, YOLOv11. Although the methodology presented here can be applied to other object detection approaches, YOLO was chosen because it is capable of real-time detection and is widely used for object detection in UAV aerial imagery [30].
As part of this study, a custom dataset of aerial images was collected to train the detection model and validate its performance. Images were taken using three UAVs, each equipped with a different camera sensor. The UAVs used for data collection and their resolution and field of view (FOV) are outlined in Table 1. Each recorded image includes information about the UAV’s state at the moment of capture, with its velocity and height being most relevant to this study. Height is recorded as two values. The first is the absolute height—relative to sea level. The second is the relative height, where the UAV’s height is measured from its home point, defined as zero. Based on our experience, the absolute height was often inaccurate. Therefore, the image capture height was calculated using the relative height—recorded using a barometric sensor—in combination with the terrain’s Digital Elevation Model (DEM).
The detection targets are 50 × 50 cm cardboard sheets. There are 100 unique targets, each painted with two different colors randomly chosen from a palette of seven, with the painting pattern applied arbitrarily. Some colors were chosen to be easily distinguishable by humans in the natural environment, such as red or orange, which mimic man-made objects, while others were selected to blend with the surroundings, like black and white tones resembling rocks and green to mimic grass or trees. Examples of the cardboard targets are shown in Figure 1.
The complete dataset consists of 4468 images containing 35,410 cardboard targets corresponding to object detection instances. The image capture heights ranged from 9.31 to 101.37 m, and the flight speeds ranged from 0 to 10.69 m/s. A subset of 1166 images, containing 27,600 object instances, was used for model training and validation with an 80–20 split, while the remaining data served as the test set to evaluate the model’s performance. Images were collected over multiple flight missions, with variations in the location or target placement. For the training set, the targets were positioned relatively close together, typically 10–20 m apart. Since images were captured at varying heights and velocities, higher-height flights contained a greater number of targets within a single frame. In contrast, the test set images were primarily obtained from separate missions where the targets were more widely dispersed to mimic realistic search scenarios, typically resulting in only one or a few targets per image, although this set also included some images with closely packed targets. This accounts for the relatively high number of object instances in the training set compared to that in the total dataset.
For both YOLOv8 and YOLOv11, training was initialized using the respective extra-large weights (yolov8x.pt and yolo11x.pt), pre-trained on the COCO dataset. Training was performed for 500 epochs using an input image size of 1280 pixels, which was automatically scaled from the original resolution by the algorithm. All unique cardboard targets were grouped into a single class, resulting in a binary classification problem (target vs. background).
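This training setup can be reproduced, as a rough sketch, with the Ultralytics Python API; the dataset configuration file targets.yaml and the printed metric are illustrative assumptions, not the authors’ actual scripts.

```python
# Minimal training sketch using the Ultralytics API (assumed setup).
# "targets.yaml" is a hypothetical dataset configuration describing the
# single "target" class and the train/validation image paths.
from ultralytics import YOLO

for weights in ("yolov8x.pt", "yolo11x.pt"):
    model = YOLO(weights)           # COCO-pretrained extra-large weights
    model.train(
        data="targets.yaml",        # hypothetical dataset definition
        epochs=500,                 # as reported above
        imgsz=1280,                 # input image size used for training
        single_cls=True,            # all cardboard targets form one class
    )
    metrics = model.val()           # evaluate on the validation split
    print(weights, metrics.box.map50)
```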
The performance metrics of the trained models were evaluated on the test set. YOLOv8 achieved an average precision (AP) of 0.85, while YOLOv11 performed slightly better with an AP of 0.851. The precision, recall, and F1 scores at various confidence thresholds for YOLOv8 and YOLOv11 are summarized in Table 2 and Table 3, respectively.
We observed a relatively strong influence of UAV flight speed and height—more precisely, the corresponding GSD—on image quality. This degradation in image quality, in turn, impacted detection success. A couple of example images where the image degradation is noticeable are shown in Figure 2. Additionally, Figure 3 shows a comparison of the detections between the YOLOv8 and YOLOv11 models on four pairs of images.
This pattern was analyzed by dividing the test dataset into bins corresponding to specific ranges of speed and GSD values and computing the AP metric for each bin. As shown in Figure 4, both models achieve their highest performance at lower speeds and GSD values, and the performance gradually decreases as these values increase. At low speeds and GSDs, both models perform comparably, but as the speed and GSD increase, YOLOv11 generally outperforms YOLOv8. This indicates that the more recent YOLO model is less sensitive to image degradation caused by higher speeds and larger GSDs.
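The binned analysis described above can be sketched in code as follows, assuming per-image records of speed, GSD, and matched detections are already available; the record fields and the ap() callback are illustrative placeholders, not the authors’ implementation.

```python
import numpy as np

def binned_performance(records, speed_edges, gsd_edges, ap):
    """Group detection records into (speed, GSD) bins and evaluate a
    performance metric (e.g., AP) for each bin. `records` is a list of
    dicts with hypothetical keys "speed" and "gsd"; `ap` is a callback
    computing the metric from the records falling into one bin."""
    grid = np.full((len(speed_edges) - 1, len(gsd_edges) - 1), np.nan)
    for i in range(len(speed_edges) - 1):
        for j in range(len(gsd_edges) - 1):
            subset = [r for r in records
                      if speed_edges[i] <= r["speed"] < speed_edges[i + 1]
                      and gsd_edges[j] <= r["gsd"] < gsd_edges[j + 1]]
            if subset:
                grid[i, j] = ap(subset)   # metric computed from this bin only
    return grid
```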

4. Coverage Area Estimation

Coverage area estimation is a key step in planning and analyzing UAV-based image acquisition. It allows for the calculation of the covered area over time using flight parameters and camera specifications. This section outlines a method based on geometric relations and the GSD.
Below are the values that will be used in the calculation:
  • v—flight speed [m/s];
  • h—flight height [m];
  • Δt—time between two successive images [s];
  • a—image width at height h [m];
  • b—image length at height h [m];
  • γ_d—diagonal field of view of the camera [°];
  • γ_a—horizontal field of view of the camera [°];
  • γ_b—vertical field of view of the camera [°];
  • r_a—number of pixels along image width;
  • r_b—number of pixels along image length;
  • GSD—ground sampling distance [cm/px];
  • GSD_0—camera constant describing pixel size at a height of 1 m [cm/px];
  • ΔA—newly covered area by shooting each image [m²];
  • Ȧ—area covered by the drone per unit time [m²/s];
  • η—product of covered area and confidence [m²/s].
A UAV operates at a flight height (h), capturing orthophotos at fixed time intervals (Δt). The area of terrain imaged during each capture depends on several parameters: the characteristics of the onboard camera, the flight height (h), and the flight speed (v). These factors collectively determine the area covered per unit time (Ȧ) during the UAV flight.
To calculate the covered area (Ȧ) during UAV flight, it is necessary to consider the camera’s technical characteristics. The manufacturer usually provides the image’s horizontal resolution (r_a) and vertical resolution (r_b), as well as the diagonal FOV (γ_d). These parameters are used in the following calculation to determine the new area covered per image (ΔA).
For the purposes of the calculation, the horizontal FOV (γ_a) and the vertical FOV (γ_b) are derived from the diagonal FOV (γ_d) and the image resolution (r_a and r_b) using
\gamma_a = 2 \cdot \arctan\left( \tan\frac{\gamma_d}{2} \cdot \frac{r_a}{\sqrt{r_a^2 + r_b^2}} \right), \quad (1)
and
\gamma_b = 2 \cdot \arctan\left( \tan\frac{\gamma_d}{2} \cdot \frac{r_b}{\sqrt{r_a^2 + r_b^2}} \right). \quad (2)
Information about the camera’s characteristics is usually provided by the camera manufacturer but can often be found in the image metadata as well.
Given the camera’s FOV and the flight height, the actual image width (a) and image length (b) of the captured image can be calculated using
a = 2 \cdot h \cdot \tan\frac{\gamma_a}{2}, \quad (3)
and
b = 2 \cdot h \cdot \tan\frac{\gamma_b}{2}. \quad (4)
Figure 5 illustrates the camera’s FOVs, image dimensions (width and length), and flight height. It also presents two possible imaging modes during UAV flight. In the first mode, successive images partially overlap, resulting in repeated coverage of parts of the ground. In the second mode, there is no overlap between consecutive images, and each frame captures a new section of the terrain. For the purpose of calculating the covered area, any overlapping regions are counted only once.
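As a sketch, Equations (1)–(4) translate directly into code; the function below uses the same symbols, and the example call uses the DJI Matrice 30T wide camera from Table 1 with an arbitrarily chosen 50 m height.

```python
import math

def image_footprint(h, gamma_d_deg, r_a, r_b):
    """Ground footprint (a, b) in meters of one image taken at height h [m],
    given the diagonal FOV [deg] and the image resolution [px].
    Implements Equations (1)-(4)."""
    gamma_d = math.radians(gamma_d_deg)
    diag = math.hypot(r_a, r_b)
    gamma_a = 2.0 * math.atan(math.tan(gamma_d / 2.0) * r_a / diag)  # Eq. (1)
    gamma_b = 2.0 * math.atan(math.tan(gamma_d / 2.0) * r_b / diag)  # Eq. (2)
    a = 2.0 * h * math.tan(gamma_a / 2.0)                            # Eq. (3)
    b = 2.0 * h * math.tan(gamma_b / 2.0)                            # Eq. (4)
    return a, b

# 4000 x 3000 px, 84 deg diagonal FOV (Table 1), at 50 m height
print(image_footprint(50.0, 84.0, 4000, 3000))   # roughly (72 m, 54 m)
```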
The area covered during the flight is calculated using two expressions. Which expression is used depends on the flight speed and the flight height. When the distance traveled between the creation of two images is less than the image length, then the newly covered area ( Δ A ) by shooting each image, i.e., the area which is recorded for the first time, can be calculated using
\Delta A = \begin{cases} a \cdot v \cdot \Delta t, & \text{if } \Delta t \cdot v < b \\ a \cdot b, & \text{otherwise.} \end{cases} \quad (5)
When (5) is divided by Δt, the expression for calculating the area covered (Ȧ) by the drone per unit time is obtained:
\dot{A} = \begin{cases} a \cdot v, & \text{if } \Delta t \cdot v < b \\ a \cdot b \cdot \Delta t^{-1}, & \text{otherwise.} \end{cases} \quad (6)
If the distance traveled by the UAV between two consecutive images equals or exceeds the image length, this operating condition is undesirable, as increasing the flight speed under these circumstances does not increase the coverage area, which remains constant, but instead degrades the detection quality. To maintain effective coverage, either Δ t should be shortened or the flight speed reduced.
Each camera has its own specific characteristics (resolution and FOV). To standardize the calculation and enable comparisons across different camera models, the GSD is used. The GSD represents the real-world distance between two adjacent pixels in the image. This study is limited to square pixels, for which GSD = GSD_a = GSD_b, so the GSD is the same regardless of whether it is defined along the a or the b axis. The horizontal image dimension and pixel count define the GSD as
GSD = \frac{a \cdot 100}{r_a}, \quad (7)
where 100 is a unit conversion factor.
Plugging (3) and (4) into (7), a GSD calculation is obtained based on the camera’s characteristics and flight height:
GSD = \frac{200 \cdot h \cdot \tan\frac{\gamma_a}{2}}{r_a}. \quad (8)
The GSD depends on the flight height and the camera characteristics. All camera characteristics can be consolidated into a single constant, GSD_0, which is defined as the pixel size at a flight height of 1 m. The actual GSD is then calculated by
GSD = GSD_0 \cdot h. \quad (9)
Utilizing (8) and (9), we can calculate GSD_0 from the camera’s characteristics:
GSD_0 = \frac{200 \cdot \tan\frac{\gamma_a}{2}}{r_a}. \quad (10)
By applying Equations (6) and (7), the expression for the covered area as a function of the GSD and flight speed is defined as
\dot{A}(v, GSD) = \begin{cases} GSD \cdot v \cdot r_a \cdot 10^{-2}, & \text{if } \Delta t \cdot v < GSD \cdot r_b \cdot 10^{-2} \\ GSD^2 \cdot r_a \cdot r_b \cdot \Delta t^{-1} \cdot 10^{-4}, & \text{otherwise.} \end{cases} \quad (11)
From Equation (11), it is evident that the covered area per unit of time depends on the camera resolution. For this reason, it is not possible to create a single coverage area graph that is universal for all cameras. Figure 6 shows four graphs for four different camera resolutions. The area on the graph above the dashed red line corresponds to the flight regime where image overlap occurs, while the area below the dashed red line is where there is no image overlap.
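For completeness, a small sketch evaluating Equations (10) and (11) is given below; it reuses the symbols defined above and is an illustration, not the authors’ code. The example call corresponds to the 3840 × 2160 case in Table 5.

```python
import math

def gsd0(gamma_a_deg, r_a):
    """Camera constant GSD_0 [cm/px at 1 m height], Equation (10)."""
    return 200.0 * math.tan(math.radians(gamma_a_deg) / 2.0) / r_a

def coverage_rate(v, gsd, r_a, r_b, dt):
    """Covered area per unit time [m^2/s], Equation (11).
    v: flight speed [m/s], gsd: ground sampling distance [cm/px],
    r_a, r_b: horizontal/vertical resolution [px], dt: capture interval [s]."""
    b = gsd * r_b * 1e-2                        # image length on the ground [m]
    if dt * v < b:                              # consecutive images overlap
        return gsd * v * r_a * 1e-2
    return gsd ** 2 * r_a * r_b / dt * 1e-4     # no overlap: footprint per interval

# 3840 x 2160 camera, dt = 3 s, at the YOLOv8 optimum from Table 5
print(coverage_rate(5.49, 1.38, 3840, 2160, 3.0))   # ~291 m^2/s
```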

5. Confidence Estimation

Accurate object detection is essential in UAV missions employing deep learning models such as YOLO. This section outlines a methodical analysis of the detection confidence by correlating the image metadata with ground-truth markers. The influence of flight speed and the GSD on the detection reliability is quantified using regression modelling.

Database Analysis

To enable a quantitative evaluation of the detection performance, a structured database was created by processing the results of YOLO detections on UAV-acquired images. This section describes the procedure used to extract, organize, and interpret the metadata and detection outputs, forming the basis for the subsequent statistical analysis.
The data was processed as follows. Metadata was extracted from each image, including the flight speed and flight height at the time the image was captured. Each YOLO detection in an image has a confidence value between 0 and 1. The confidence, flight speed, and flight height of every marker are saved in a final table covering all flights; for universality of the data display, all flight height values are converted into the GSD. The confidence value is then adjusted so that the model’s errors are reflected in the analysis. The confidence of all markers correctly detected by YOLO (Figure 7a) keeps its original value, the confidence of false YOLO detections (Figure 7b) is made negative, and all markers not detected by YOLO (Figure 7c) are assigned a confidence of −1. Table 4 shows a summary of this calculation.
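A minimal sketch of this adjusted-confidence rule, assuming the detections have already been matched to the ground-truth markers (the matching step is not shown), is:

```python
def adjusted_confidence(yolo_conf, is_true_detection, is_missed_marker):
    """Adjusted confidence following Table 4:
    matched detection -> keep the YOLO confidence,
    false detection   -> negate the YOLO confidence,
    missed marker     -> assign -1."""
    if is_missed_marker:
        return -1.0
    return yolo_conf if is_true_detection else -yolo_conf

# Examples: a correct detection, a false detection, and a missed marker
print(adjusted_confidence(0.83, True, False))    #  0.83
print(adjusted_confidence(0.42, False, False))   # -0.42
print(adjusted_confidence(0.0, False, True))     # -1.0
```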
Figure 8 presents the results of the detection confidence analysis for the YOLOv8 (a–d) and YOLOv11 (e–h) models. Subplots (a) and (e) show the distribution of all recorded detections in the parameter space defined by the flight speed and GSD. The broad spread of points in both cases indicates that the datasets adequately cover the investigated range, providing a solid basis for model fitting. It must be noted that the same target and scene will result in nearly identical detection rates (confidence) in two consecutive images. Subplots (b) and (f) display the quadratic regression surfaces, which show the smooth variation in confidence with speed and GSD, ensuring a single, well-defined maximum that can be directly associated with the optimal operating conditions. The bilinear regression surfaces, capturing the general trends but offering less flexibility than the quadratic model, are shown in subplots (c) and (g). Finally, subplots (d) and (h) in Figure 8 present the piecewise bilinear surfaces, which divide the parameter space into multiple regions and fit a separate bilinear model to each. This improves the local accuracy but introduces discontinuities between regions.
Although the piecewise bilinear regression surfaces produced the lowest SSE for both the YOLOv8 and YOLOv11 models, quadratic regression was chosen for further analysis because it is simpler, provides a smooth surface, and has a single well-defined maximum, which makes it more appropriate for identifying the optimal UAV operating conditions. This model is subsequently applied to quantitatively evaluating the detection performance and determining flight regimes that maximize the search effectiveness.
Based on the database, quadratic regression functions were created for YOLOv8 and YOLOv11 to describe the detection reliability as a function of UAV flight speed and image GSD. These functions are general and can be applied across different cameras, and their predicted values for both models are shown in Figure 9. The quadratic regression function for YOLOv8 is given by
C(v, GSD) = 0.8136 + 0.0636 \cdot v + 0.0474 \cdot GSD - 0.0104 \cdot v^2 - 0.0307 \cdot v \cdot GSD - 0.0895 \cdot GSD^2, \quad (12)
and the corresponding function for YOLOv11 is
C(v, GSD) = 0.7374 - 0.0352 \cdot v + 0.2935 \cdot GSD - 0.0101 \cdot v^2 + 0.0332 \cdot v \cdot GSD - 0.2046 \cdot GSD^2. \quad (13)
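Quadratic surfaces of this form can be fitted by ordinary least squares over the (speed, GSD, adjusted confidence) records; the sketch below assumes the records are available as NumPy arrays and is not the authors’ exact fitting procedure. Evaluating the published YOLOv8 coefficients at the optimal regime reported later reproduces the confidence listed in Table 5.

```python
import numpy as np

def fit_quadratic_surface(v, gsd, conf):
    """Least-squares fit of C(v, GSD) = c0 + c1*v + c2*GSD + c3*v^2
    + c4*v*GSD + c5*GSD^2, the form of Equations (12) and (13)."""
    X = np.column_stack([np.ones_like(v), v, gsd, v**2, v * gsd, gsd**2])
    coeffs, *_ = np.linalg.lstsq(X, conf, rcond=None)
    return coeffs

def eval_surface(coeffs, v, gsd):
    """Evaluate the fitted confidence surface at (v, GSD)."""
    c0, c1, c2, c3, c4, c5 = coeffs
    return c0 + c1*v + c2*gsd + c3*v**2 + c4*v*gsd + c5*gsd**2

# Published YOLOv8 coefficients, Equation (12)
yolov8_coeffs = np.array([0.8136, 0.0636, 0.0474, -0.0104, -0.0307, -0.0895])
print(eval_surface(yolov8_coeffs, 5.49, 1.38))   # ~0.51, cf. Table 5
```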

6. Effective Coverage

To achieve the best possible search performance, a balance between the covered area and detection performance (or detection model confidence) should be considered. The best-performing configuration is achieved by maximizing the effective coverage:
\eta(v, GSD) = \dot{A}(v, GSD) \cdot C(v, GSD). \quad (14)
In the calculation of η, the coverage area depends on the properties of the camera used and the image capture frequency (Δt⁻¹), while the confidence metric depends only on the detection model. The effective coverage can therefore be evaluated for any combination of GSD and flight speed, accounting for both the equipment and the detection model characteristics. Figure 10 shows the η obtained across different resolutions, with the corresponding optimal GSD and flight speed for both YOLOv8 and YOLOv11.
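The optima reported below can be reproduced, in sketch form, by a brute-force grid search over Equation (14); the snippet repeats the coverage-rate and confidence helpers from the earlier sketches so that it runs on its own, and the search ranges are arbitrary illustrative choices.

```python
import numpy as np

def coverage_rate(v, gsd, r_a, r_b, dt):
    """Covered area per unit time [m^2/s], Equation (11)."""
    b = gsd * r_b * 1e-2                         # image length on the ground [m]
    if dt * v < b:                               # consecutive images overlap
        return gsd * v * r_a * 1e-2
    return gsd ** 2 * r_a * r_b / dt * 1e-4

def confidence(coeffs, v, gsd):
    """Quadratic confidence surface, the form of Equations (12) and (13)."""
    c0, c1, c2, c3, c4, c5 = coeffs
    return c0 + c1*v + c2*gsd + c3*v**2 + c4*v*gsd + c5*gsd**2

def optimal_regime(coeffs, r_a, r_b, dt):
    """Grid search maximizing eta(v, GSD) = A_dot * C, Equation (14)."""
    best = (-np.inf, None, None)
    for v in np.linspace(0.1, 10.0, 400):
        for gsd in np.linspace(0.5, 3.0, 400):
            eta = coverage_rate(v, gsd, r_a, r_b, dt) * confidence(coeffs, v, gsd)
            if eta > best[0]:
                best = (eta, v, gsd)
    return best                                  # (eta [m^2/s], v [m/s], GSD [cm/px])

yolov8 = (0.8136, 0.0636, 0.0474, -0.0104, -0.0307, -0.0895)   # Equation (12)
# 1920 x 1080 camera, dt = 3 s -> roughly (74 m^2/s, 5.2 m/s, 1.45 cm/px), cf. Table 5
print(optimal_regime(yolov8, 1920, 1080, 3.0))
```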
The results on the optimal flight speed and GSD for the four camera resolutions are shown in Table 5 for YOLOv8 and in Table 6 for YOLOv11. For both models, cameras with higher resolutions reach the same optimal values within the respective model, while the optimal values differ between YOLOv8 and YOLOv11.
For all cameras that achieve an image overlap at 5.49 m/s and 1.38 cm/px for YOLOv8 or at 5.35 m/s and 1.77 cm/px for YOLOv11, these flight speeds and GSD values are considered optimal. Overlap exists if the following condition holds:
\Delta t \cdot v < b. \quad (15)
When this condition is not met, the flight speed is reduced to the point where two consecutive images just touch during the UAV’s flight: there is no image overlap, but there is also no uncovered terrain between the two images. Reducing the flight speed allows the GSD to increase, moving back towards the optimal conditions for the maximum covered area. Figure 11 shows the optimal flight speed and GSD values for both models, corresponding to lower-resolution cameras that do not achieve an image overlap at 5.49 m/s and 1.38 cm/px for YOLOv8 and 5.35 m/s and 1.77 cm/px for YOLOv11.
The frequency of taking images obviously affects the covered area and consequently the effective coverage η . This is demonstrated in Figure 12, where higher Δ t values start to shift the optimal configuration towards a lower speed and higher GSD (Table 7 and Table 8).
For each vertical image resolution, there is a maximum value of the time interval between two consecutive images at which full ground coverage is achieved during UAV flight, without image overlap and without gaps between successive frames. This scenario represents the limiting case in which adjacent images precisely cover the terrain without redundancy or missing areas. Such a condition is satisfied when Δ t · v = b , which defines the relationship between the image dimensions and flight speed. By combining Equations (7) and (15) and incorporating the optimal flight speed along with the GSD, Equation (16) is derived. This equation enables the calculation of the maximum allowable time interval between images as a function of the vertical resolution of the image (Figure 13), thus providing a basis for precise aerial survey planning that ensures seamless terrain coverage.
\Delta t_{max}(r_b) = \frac{GSD_{opt} \cdot r_b}{100 \cdot v_{opt}} \quad (16)
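For example, for a camera with a vertical resolution of 1080 px and the YOLOv11 optimal parameters, Equation (16) gives
\Delta t_{max} = \frac{1.77 \cdot 1080}{100 \cdot 5.35} \approx 3.57 \ \mathrm{s},
which is consistent with Table 8, where the optimum begins to shift away from 5.35 m/s and 1.77 cm/px once Δt reaches 4 s.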
Table 9 presents the optimal flight parameters for UAV-based aerial terrain imaging, tailored to the most commonly used camera and lens combinations in contemporary operational practice. Despite variations in the image resolution and FOV among the listed imaging systems, all cameras possess a vertical resolution exceeding the minimum required to achieve a sufficient image overlap while maintaining the optimal GSD value during flight. Consequently, uniform optimal values for the flight speed and GSD are applicable across all configurations, while the flight height and coverage area vary depending on the specific system.

7. Discussion

Even though the performance of YOLOv8 and YOLOv11 was compared using the standard detection model metrics in Section 3, a more practical, application-oriented comparison can be achieved by applying the presented methodology. We determined the optimal speed and GSD for the two detection models: 5.49 m/s and 1.38 cm/px for YOLOv8 and 5.35 m/s and 1.77 cm/px for YOLOv11. Using these optimal parameters and assuming a sufficient image overlap based on the camera specifications and image capture frequency (as is the case for most modern cameras), we can assess the expected search performance. Under these conditions, a search conducted with the YOLOv11 model would cover a 24.99% greater area according to Equation (11) and achieve 10.63% greater effective coverage. More importantly, if the optimal parameters for YOLOv11 were used to conduct a search with YOLOv8, and vice versa, the resulting performance would still be near-optimal. This conclusion is supported by Figure 10 and Figure 12, which show that the peak of the maximal effective coverage is distributed similarly for both models.
Although the proposed methodology was developed under mathematically ideal conditions, it is important to note that such conditions rarely fully correspond to real-world UAV search operations. In practice, UAVs are unable to maintain a constant flight height due to uneven terrain, resulting in variations in the distance between the camera and the ground. Additionally, external factors such as wind can cause deviations from the planned trajectory and speed, further affecting the area coverage and the quality of the collected data.
These factors can lead to deviations from the ideal values for the GSD and total covered area, which may reduce the accuracy and reliability of object detection. It is also important to emphasize that the methodology presented in this paper was designed for straight and stable flight. In real operations, during drone rotations or tilting, an irregular image overlap may occur, leading to suboptimal ground coverage.
However, most of these issues can be mitigated by increasing the image overlap, which helps to ensure that no gaps occur in the ground coverage despite flight instabilities or deviations.
Although the exact optimal values are difficult to achieve in real-world applications, the operator should still aim for them, as doing so provides a near-optimal flight regime for UAV searches and computer vision detection.

8. Conclusions

Detecting targets in images collected during UAV search missions is one of the most common applications of computer vision detection. However, the success of the detection model greatly depends on the quality of the collected images, while maximization of the covered area accelerates the search. Both the detection and coverage rate metrics are crucial for a successful search.
We observed that these metrics are greatly influenced by the UAV flight regime parameters chosen by the search operator, with flight height and speed being the most important. In order to analyze the influence of flight height and speed, we designed suitable cardboard targets and conducted numerous flights to collect a considerable number of orthophoto images. The collected database is provided in a supplementary public repository and consists of 35,410 labeled target objects (Supplementary Materials). The images were taken at different heights (providing different GSD values) and speeds and used for training and validation of the YOLOv8 and YOLOv11 detection models, which we have also provided in the repository.
We developed a procedure for calculating the effective coverage η , which appropriately combines the covered area and the confidence of the detection model. The covered area is calculated from the geometrical properties of the FOV while being dependent on the speed and height of the UAV. The confidence function is estimated using quadratic regression over the confidences for each record and is also speed- and height-dependent. The proposed concept of effective coverage is validated on the collected image dataset, with the available height, speed, and detection confidence reported by the YOLO model for each record. This allowed us to establish and analyze the effective coverage as a function of the height (represented by the GSD) and velocity of the UAV.
Practical guidelines are provided for achieving the optimal search performance by selecting the flight parameters in accordance with the specifications of the camera equipment used and the image capture frequency. For higher resolutions (modern HQ cameras), the optimal GSD and speed values are constant regardless of the other properties of the camera and considering that Δ t is sufficiently small. For the cardboard targets, the optimal operational parameters were determined as a flight speed of 5.49 m/s and a GSD of 1.38 cm/px for YOLOv8 and 5.35 m/s and 1.77 cm/px for YOLOv11. However, depending on the equipment specifications, these optimal values yield different optimal flight heights and results with different effective coverage values.
Although the presented results are specific to 50 × 50 cm cardboard targets, they are suitable for validation of the proposed methodology. The cardboard sheets are very practical for testing purposes and provide a high degree of reproducibility. It is reasonable to expect that the results for more useful applications, such as searches for lost humans, could be similar to those presented here due to the selected target dimensions. The resulting metrics are useful for making decisions in UAV search operations, and they could be improved through additional enhancements of the detection model. The presented results could be included in approaches to autonomous UAV search control that consider effective coverage, such as [31]. The proposed model could be extended by considering additional sources of image disturbances, such as yaw velocity, which can cause circular motion blur. Moreover, it would be valuable for future research to investigate the impact of varying environmental conditions, such as variable terrain, UAV rotations, and lighting variations, on the detection performance.

Supplementary Materials

The supplementary materials, including the complete labeled dataset and the trained YOLOv8 and YOLOv11 models used in this study, are available for download from the Open Science Framework repository https://osf.io/zq82s/ accessed on 13 August 2025.

Author Contributions

Conceptualization: S.I. and L.L.; methodology: S.I.; software: M.M.; validation: M.M. and L.L.; formal analysis: M.M.; investigation: M.M.; resources: K.J., L.L. and M.M.; data curation: M.M.; writing—original draft preparation: M.M.; writing—review and editing: S.I., M.M., K.J. and L.L.; visualization: M.M.; supervision: S.I.; project administration: S.I.; funding acquisition: S.I. All authors have read and agreed to the published version of the manuscript.

Funding

This publication was supported by the Croatian Science Foundation under project UIP-2020-02-5090.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Martinez-Alpiste, I.; Golcarenarenji, G.; Wang, Q.; Alcaraz-Calero, J.M. Search and rescue operation using UAVs: A case study. Expert Syst. Appl. 2021, 178, 114937. [Google Scholar] [CrossRef]
  2. Niedzielski, T.; Jurecka, M.; Miziński, B.; Pawul, W.; Motyl, T. First successful rescue of a lost person using the human detection system: A case study from Beskid Niski (SE Poland). Remote Sens. 2021, 13, 4903. [Google Scholar] [CrossRef]
  3. Lee, S.; Song, Y.; Kil, S.H. Feasibility analyses of real-time detection of wildlife using UAV-derived thermal and RGB images. Remote Sens. 2021, 13, 2169. [Google Scholar] [CrossRef]
  4. Iqbal, M.J.; Iqbal, M.M.; Ahmad, I.; Alassafi, M.O.; Alfakeeh, A.S.; Alhomoud, A. Real-Time Surveillance Using Deep Learning. Secur. Commun. Netw. 2021, 2021, 6184756. [Google Scholar] [CrossRef]
  5. Guo, Q.; Feng, W.; Gao, R.; Liu, Y.; Wang, S. Exploring the effects of blur and deblurring to visual object tracking. IEEE Trans. Image Process. 2021, 30, 1812–1824. [Google Scholar] [CrossRef]
  6. Yang, X.; Sang, F.; Wang, T.; Pei, X.; Wang, H.; Hou, T. Research on the influence of camera velocity on image blur and a method to improve object detection precision. In Proceedings of the 2021 International Conference on Cyber-Physical Social Intelligence (ICCSI), Beijing, China, 18–20 December 2021; IEEE: New York, NY, USA, 2021; pp. 1–6. [Google Scholar]
  7. Niu, S.; Nie, Z.; Li, G.; Zhu, W. Multi-Altitude Corn Tassel Detection and Counting Based on UAV RGB Imagery and Deep Learning. Drones 2024, 8, 198. [Google Scholar] [CrossRef]
  8. Petso, T.; Jamisola, R.S.; Mpoeleng, D.; Mmereki, W. Individual animal and herd identification using custom YOLO v3 and v4 with images taken from a UAV camera at different altitudes. In Proceedings of the 2021 IEEE 6th International Conference on Signal and Image Processing (ICSIP), Nanjing, China, 22–24 October 2021; IEEE: New York, NY, USA, 2021; pp. 33–39. [Google Scholar]
  9. Mittal, P.; Singh, R.; Sharma, A. Deep learning-based object detection in low-altitude UAV datasets: A survey. Image Vis. Comput. 2020, 104, 104046. [Google Scholar] [CrossRef]
  10. Ren, S.; He, K.; Girshick, R.; Sun, J. Faster R-CNN: Towards real-time object detection with region proposal networks. IEEE Trans. Pattern Anal. Mach. Intell. 2016, 39, 1137–1149. [Google Scholar] [CrossRef]
  11. Redmon, J.; Divvala, S.; Girshick, R.; Farhadi, A. You only look once: Unified, real-time object detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 779–788. [Google Scholar]
  12. Seo, D.M.; Woo, H.J.; Kim, M.S.; Hong, W.H.; Kim, I.H.; Baek, S.C. Identification of asbestos slates in buildings based on faster region-based convolutional neural network (faster R-CNN) and drone-based aerial imagery. Drones 2022, 6, 194. [Google Scholar] [CrossRef]
  13. Lou, X.; Huang, Y.; Fang, L.; Huang, S.; Gao, H.; Yang, L.; Weng, Y.; Hung, I.K. Measuring loblolly pine crowns with drone imagery through deep learning. J. For. Res. 2022, 33, 227–238. [Google Scholar] [CrossRef]
  14. Ramachandran, A.; Sangaiah, A.K. A review on object detection in unmanned aerial vehicle surveillance. Int. J. Cogn. Comput. Eng. 2021, 2, 215–228. [Google Scholar] [CrossRef]
  15. Pei, Y.; Huang, Y.; Zou, Q.; Zhang, X.; Wang, S. Effects of image degradation and degradation removal to CNN-based image classification. IEEE Trans. Pattern Anal. Mach. Intell. 2019, 43, 1239–1253. [Google Scholar] [CrossRef] [PubMed]
  16. Du, D.; Qi, Y.; Yu, H.; Yang, Y.; Duan, K.; Li, G.; Zhang, W.; Huang, Q.; Tian, Q. The unmanned aerial vehicle benchmark: Object detection and tracking. In Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018; pp. 370–386. [Google Scholar]
  17. Liu, C.; Tao, Y.; Liang, J.; Li, K.; Chen, Y. Object detection based on YOLO network. In Proceedings of the 2018 IEEE 4th Information Technology and Mechatronics Engineering Conference (ITOEC), Chongqing, China, 14–16 December 2018; IEEE: New York, NY, USA, 2018; pp. 799–803. [Google Scholar]
  18. Chen, Y.; Zhao, D.; Er, M.J.; Zhuang, Y.; Hu, H. A novel vehicle tracking and speed estimation with varying UAV altitude and video resolution. Int. J. Remote Sens. 2021, 42, 4441–4466. [Google Scholar] [CrossRef]
  19. Lin, H.Y.; Li, K.J.; Chang, C.H. Vehicle speed detection from a single motion blurred image. Image Vis. Comput. 2008, 26, 1327–1337. [Google Scholar] [CrossRef]
  20. Perera, A.G.; Al-Naji, A.; Law, Y.W.; Chahl, J. Human detection and motion analysis from a quadrotor UAV. In Proceedings of the IOP Conference Series: Materials Science and Engineering, Melbourne, Australia, 15–16 September 2018; IOP Publishing: Bristol, UK, 2018; Volume 405, p. 012003. [Google Scholar]
  21. Suo, J.; Wang, T.; Zhang, X.; Chen, H.; Zhou, W.; Shi, W. HIT-UAV: A high-altitude infrared thermal dataset for Unmanned Aerial Vehicle-based object detection. Sci. Data 2023, 10, 227. [Google Scholar] [CrossRef] [PubMed]
  22. Li, Q.; Taipalmaa, J.; Queralta, J.P.; Gia, T.N.; Gabbouj, M.; Tenhunen, H.; Raitoharju, J.; Westerlund, T. Towards active vision with UAVs in marine search and rescue: Analyzing human detection at variable altitudes. In Proceedings of the 2020 IEEE International Symposium on Safety, Security, and Rescue Robotics (SSRR), Abu Dhabi, United Arab Emirates, 4–6 November 2020; IEEE: New York, NY, USA, 2020; pp. 65–70. [Google Scholar]
  23. Zhou, H.; Ma, A.; Niu, Y.; Ma, Z. Small-object detection for UAV-based images using a distance metric method. Drones 2022, 6, 308. [Google Scholar] [CrossRef]
  24. Zhang, Z. Drone-YOLO: An efficient neural network method for target detection in drone images. Drones 2023, 7, 526. [Google Scholar] [CrossRef]
  25. Salem, M.S.H.; Zaman, F.H.K.; Tahir, N.M. Effectiveness of human detection from aerial images taken from different heights. TEM J. 2021, 10, 522. [Google Scholar] [CrossRef]
  26. Torralba, A. How many pixels make an image? Vis. Neurosci. 2009, 26, 123–131. [Google Scholar] [CrossRef]
  27. Shermeyer, J.; Van Etten, A. The effects of super-resolution on object detection performance in satellite imagery. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, Long Beach, CA, USA, 16–17 June 2019; pp. 1432–1441. [Google Scholar]
  28. Farah, M.; Alruwaili, A. Optimizing Ground Sampling Distance for Drone-Based GIS Mapping: A Case Study in Riyadh, Saudi Arabia. In Proceedings of the 2024 9th International Conference on Robotics and Automation Engineering (ICRAE), Singapore, 15–17 November 2024; IEEE: New York, NY, USA, 2024; pp. 1–5. [Google Scholar]
  29. Seifert, E.; Seifert, S.; Vogt, H.; Drew, D.; Van Aardt, J.; Kunneke, A.; Seifert, T. Influence of drone altitude, image overlap, and optical sensor resolution on multi-view reconstruction of forest images. Remote Sens. 2019, 11, 1252. [Google Scholar] [CrossRef]
  30. Chen, C.; Zheng, Z.; Xu, T.; Guo, S.; Feng, S.; Yao, W.; Lan, Y. Yolo-based uav technology: A review of the research and its applications. Drones 2023, 7, 190. [Google Scholar] [CrossRef]
  31. Lanča, L.; Jakac, K.; Ivić, S. Model predictive altitude and velocity control in ergodic potential field directed multi-UAV search. arXiv 2024, arXiv:2401.02899. [Google Scholar] [CrossRef]
Figure 1. Variations in cardboard targets placed on the environmental background typically encountered in the performed database collection.
Figure 2. Examples of photographed targets at different GSDs and flight speeds. (a) The GSD is 0.83 cm/px, and the flight speed is 1.20 m/s; (b) the GSD is 2.19 cm/px, and the flight speed is 1.10 m/s; and (c) the GSD is 1.59 cm/px, and the flight speed is 9.00 m/s. The size of each image is 100 × 100 pixels.
Figure 3. Comparison of YOLOv8 and YOLOv11 detection results. The top row (a–d) shows the detections obtained using YOLOv8, while the bottom row (e–h) presents the detections obtained using YOLOv11. The blue rectangles show the YOLO model detections, while the red rectangles show the manually marked real targets.
Figure 4. AP analysis across bins of speed and GSD values for YOLOv8 (a) and YOLOv11 (b).
Figure 5. The geometry of the camera’s fields of view and the coverage area with and without overlapping sequential images taken by the UAV.
Figure 6. Covered area per unit of time achieved with four different types of cameras: (a) a camera with a resolution = 1280 × 720 pixels, (b) a camera with a resolution = 1920 × 1080 pixels, (c) a camera with a resolution = 3840 × 2160 pixels, and (d) a camera with a resolution = 7680 × 4320 pixels. The graphs were created for Δ t = 3 s because such settings were used during the UAV search from which the data were extracted.
Figure 7. Three possible detection scenarios are shown: (a) YOLO detected a marker, and there is a marker tag at that location; (b) YOLO detects something that is not a marker; and (c) there is a marker tag that is not detected by YOLO.
Figure 8. An analysis of the confidence data for the YOLOv8 (a) and YOLOv11 (e) detection models. The quadratic fit captures the overall trends with smooth curvature (b,f). The bilinear fit (c,g) and the piecewise bilinear fit (d,h) confirm similar trends, but the former brings slightly greater regression errors and the latter yields non-smooth regression functions without a clear maximum.
Figure 9. A graphical representation of the quadratic regression functions for the YOLOv8 (a) and YOLOv11 (b) models.
Figure 10. Graphs of the product function of the covered area and confidence for four different camera resolutions. The plots in the left column (a,c,e,g) correspond to the YOLOv8 model, while those in the right column (b,d,f,h) correspond to YOLOv11. The analyzed camera resolutions are 1280 × 720 (a,b), 1920 × 1080 (c,d), 3840 × 2160 (e,f), and 7680 × 4320 pixels (g,h). The graphs were created for Δ t of 3 s because such settings were used during the UAV search from which the images were extracted. The red cross shows the maximum value reached on the graph.
Figure 11. The optimal flight speed and GSD when the image overlap condition is not satisfied. The graph shows the results for a Δ t value of 3 s. Graph (a) shows the results for the YOLOv8 model. Graph (b) shows the results for the YOLOv11 model.
Figure 12. The graphs show the product of the covered area and detection confidence in relation to the time interval between consecutive images for both models. All graphs show the situation for a camera with a resolution of 1920 × 1080 pixels. The graphs in the left column (a,c,e,g) correspond to the YOLOv8 model, while those in the right column (b,d,f,h) correspond to YOLOv11. Different image capture intervals are shown in rows: Δ t = 1 s (a,b), Δ t = 2 s (c,d), Δ t = 3 s (e,f), Δ t = 4 s (g,h).
Figure 13. The maximum allowed time interval between two images as a function of the vertical image resolution to ensure no uncovered area remains between images. Graph (a) shows the results for the YOLOv8 model, with the optimal values for a GSD = 1.38 cm/px and flight speed = 5.49 m/s. Graph (b) shows the results for the YOLOv11 model, with the optimal values for a GSD = 1.77 cm/px and flight speed = 5.35 m/s.
Table 1. Specifications of UAVs and cameras used for image collection.
| UAV Model | DJI Matrice 210 V2 | DJI Matrice 30T |
|---|---|---|
| Max speed [m/s] | 22.5 | 23 |
| Camera | DJI Zenmuse X5S | Integrated wide camera |
| Resolution [px] | [5280, 3956] | [4000, 3000] |
| Diagonal FOV angle [°] | 76.7 | 84 |
Table 2. Performance metrics for YOLOv8.
| Confidence Threshold | YOLOv8 Precision | YOLOv8 Recall | YOLOv8 F1 |
|---|---|---|---|
| 0.25 | 0.978 | 0.854 | 0.912 |
| 0.50 | 0.988 | 0.808 | 0.889 |
| 0.75 | 0.994 | 0.715 | 0.831 |
| 0.90 | 0.998 | 0.241 | 0.389 |
Table 3. Performance metrics for YOLOv11.
| Confidence Threshold | YOLOv11 Precision | YOLOv11 Recall | YOLOv11 F1 |
|---|---|---|---|
| 0.25 | 0.902 | 0.862 | 0.881 |
| 0.50 | 0.943 | 0.824 | 0.880 |
| 0.75 | 0.977 | 0.752 | 0.850 |
| 0.90 | 0.998 | 0.326 | 0.492 |
Table 4. Adjusted confidence calculations for three possible scenarios.
| Case | YOLO Confidence | Adjusted Confidence |
|---|---|---|
| Marker tag and YOLO detection | n | n |
| Only YOLO detection | n | n · (−1) |
| Only marker tag | – | −1 |
Table 5. YOLOv8—optimal flight speed and GSD for different camera resolutions.
| r_a × r_b [px] | Δt [s] | v [m/s] | GSD [cm/px] | Ȧ [m²/s] | C [-] | η [m²/s] |
|---|---|---|---|---|---|---|
| 1280 × 720 | 3 | 4.15 | 1.73 | 91.72 | 0.49 | 45.24 |
| 1920 × 1080 | 3 | 5.23 | 1.45 | 145.68 | 0.51 | 74.18 |
| 3840 × 2160 | 3 | 5.49 | 1.38 | 291.63 | 0.51 | 148.86 |
| 7680 × 4320 | 3 | 5.49 | 1.38 | 583.27 | 0.51 | 297.73 |
Table 6. YOLOv11—optimal flight speed and GSD for different camera resolutions.
| r_a × r_b [px] | Δt [s] | v [m/s] | GSD [cm/px] | Ȧ [m²/s] | C [-] | η [m²/s] |
|---|---|---|---|---|---|---|
| 1280 × 720 | 3 | 4.59 | 1.91 | 112.16 | 0.47 | 52.52 |
| 1920 × 1080 | 3 | 5.35 | 1.77 | 182.27 | 0.45 | 82.34 |
| 3840 × 2160 | 3 | 5.35 | 1.77 | 364.54 | 0.45 | 164.68 |
| 7680 × 4320 | 3 | 5.35 | 1.77 | 729.07 | 0.45 | 329.37 |
Table 7. YOLOv8—optimal flight speed and GSD during UAV flight for different time intervals between images.
| r_a × r_b [px] | Δt [s] | v [m/s] | GSD [cm/px] | Ȧ [m²/s] | C [-] | η [m²/s] |
|---|---|---|---|---|---|---|
| 1920 × 1080 | 1 | 5.49 | 1.38 | 145.82 | 0.51 | 74.43 |
| 1920 × 1080 | 2 | 5.49 | 1.38 | 145.82 | 0.51 | 74.43 |
| 1920 × 1080 | 3 | 5.23 | 1.45 | 145.68 | 0.51 | 74.18 |
| 1920 × 1080 | 4 | 4.46 | 1.65 | 141.68 | 0.50 | 70.48 |
Table 8. YOLOv11—optimal flight speed and GSD during UAV flight for different time intervals between images.
| r_a × r_b [px] | Δt [s] | v [m/s] | GSD [cm/px] | Ȧ [m²/s] | C [-] | η [m²/s] |
|---|---|---|---|---|---|---|
| 1920 × 1080 | 1 | 5.35 | 1.77 | 182.27 | 0.45 | 82.34 |
| 1920 × 1080 | 2 | 5.35 | 1.77 | 182.27 | 0.45 | 82.34 |
| 1920 × 1080 | 3 | 5.35 | 1.77 | 182.27 | 0.45 | 82.34 |
| 1920 × 1080 | 4 | 4.99 | 1.85 | 177.03 | 0.46 | 81.46 |
Table 9. Optimal flight speeds and heights, and the resulting metrics, for the most commonly used cameras in UAV terrain search. The calculated values are valid for the YOLOv11 detection model.
| Camera and Lens | r_a × r_b [px] | γ_a [°] | γ_d [°] | v [m/s] | GSD [cm/px] | h [m] | Δt_max [s] | Ȧ [m²/s] | C [-] | η [m²/s] |
|---|---|---|---|---|---|---|---|---|---|---|
| Sony Alpha 7R IV (24 mm) | 9504 × 6336 | 73.7 | 84 | 5.35 | 1.77 | 112.23 | 20.96 | 899.98 | 0.45 | 407.59 |
| Sony Alpha 7R IV (35 mm) | 9504 × 6336 | 54 | 63 | 5.35 | 1.77 | 165.08 | 20.96 | 899.98 | 0.45 | 407.59 |
| Sony Alpha 7R IV (55 mm) | 9504 × 6336 | 36.3 | 43 | 5.35 | 1.77 | 256.58 | 20.96 | 899.98 | 0.45 | 407.59 |
| Sony Alpha 6000 (20 mm) | 6000 × 4000 | 83.5 | 94 | 5.35 | 1.77 | 59.49 | 13.23 | 568.17 | 0.45 | 257.31 |
| Sony Alpha 6000 (24 mm) | 6000 × 4000 | 73.7 | 84 | 5.35 | 1.77 | 70.85 | 13.23 | 568.17 | 0.45 | 257.31 |
| DJI Zenmuse P1 (24 mm) | 8192 × 5460 | 73.7 | 84 | 5.35 | 1.77 | 96.74 | 18.06 | 775.74 | 0.45 | 351.32 |
| DJI Zenmuse P1 (35 mm) | 8192 × 5460 | 54.5 | 63.5 | 5.35 | 1.77 | 140.77 | 18.06 | 775.74 | 0.45 | 351.32 |
| DJI Zenmuse X5S (15 mm) | 5280 × 3956 | 64.7 | 76.7 | 5.35 | 1.77 | 73.77 | 13.09 | 499.99 | 0.45 | 226.44 |
| DJI Zenmuse X7 (24 mm) | 6016 × 4008 | 52.2 | 61 | 5.35 | 1.77 | 108.68 | 13.26 | 569.69 | 0.45 | 258.00 |
| DJI Zenmuse X7 (35 mm) | 6016 × 4008 | 37.1 | 43.9 | 5.35 | 1.77 | 158.66 | 13.26 | 569.69 | 0.45 | 258.00 |
| DJI Mavic 2 Zoom (24 mm) | 4000 × 3000 | 70.6 | 83 | 5.35 | 1.77 | 50.00 | 9.93 | 378.78 | 0.45 | 171.54 |
| DJI Mavic 3 Pro (24 mm) | 5280 × 3956 | 71.6 | 84 | 5.35 | 1.77 | 64.79 | 13.09 | 499.99 | 0.45 | 226.44 |
| DJI Matrice 30T (Wide) | 4000 × 3000 | 71.6 | 84 | 5.35 | 1.77 | 49.08 | 9.93 | 378.78 | 0.45 | 171.54 |
| Phase One iXM-100 (80 mm) | 11,664 × 8750 | 30.7 | 37.9 | 5.35 | 1.77 | 376.04 | 28.95 | 1104.52 | 0.45 | 500.22 |
| Phase One iXM-100 (150 mm) | 11,664 × 8750 | 16.6 | 20.7 | 5.35 | 1.77 | 707.59 | 28.95 | 1104.52 | 0.45 | 500.22 |
| Canon EOS R (24 mm) | 6720 × 4480 | 73.7 | 84 | 5.35 | 1.77 | 79.35 | 14.82 | 636.35 | 0.45 | 288.19 |
| Canon EOS R (35 mm) | 6720 × 4480 | 54 | 63 | 5.35 | 1.77 | 116.72 | 14.82 | 636.35 | 0.45 | 288.19 |
| MAPIR Survey3W RGB (19 mm) | 4000 × 3000 | 87 | 99.7 | 5.35 | 1.77 | 37.30 | 9.93 | 378.78 | 0.45 | 171.54 |
| MAPIR Survey3N RGB (47 mm) | 4000 × 3000 | 41 | 50.1 | 5.35 | 1.77 | 94.68 | 9.93 | 378.78 | 0.45 | 171.54 |
| Parrot Anafi (Wide) | 5344 × 4016 | 84 | 96.8 | 5.35 | 1.77 | 52.53 | 13.29 | 506.05 | 0.45 | 229.18 |
| Parrot Anafi (Rectilinear) | 4608 × 3456 | 75.5 | 88.1 | 5.35 | 1.77 | 52.67 | 11.43 | 436.35 | 0.45 | 197.62 |