Article

Aerial Autonomy Under Adversity: Advances in Obstacle and Aircraft Detection Techniques for Unmanned Aerial Vehicles

by Cristian Randieri 1,2,*, Sai Venkata Ganesh 3, Rayappa David Amar Raj 3, Rama Muni Reddy Yanamala 4, Archana Pallakonda 5 and Christian Napoli 2,6

1 Department of Theoretical and Applied Sciences, eCampus University, Via Isimbardi 10, 22060 Novedrate, Italy
2 Department of Computer, Control, and Management Engineering “Antonio Ruberti”, Sapienza University of Rome, 00185 Rome, Italy
3 Amrita School of Artificial Intelligence, Amrita Vishwa Vidyapeetham, Coimbatore 641112, India
4 Department of Electronics and Communication Engineering, Indian Institute of Information Technology Design and Manufacturing (IIITD&M) Kancheepuram, Chennai 600127, India
5 Department of Computer Science and Engineering, National Institute of Technology Warangal, Telangana 506004, India
6 Department of Artificial Intelligence, Czestochowa University of Technology, ul. Dąbrowskiego 69, 42-201 Czestochowa, Poland
* Author to whom correspondence should be addressed.
Drones 2025, 9(8), 549; https://doi.org/10.3390/drones9080549
Submission received: 19 June 2025 / Revised: 31 July 2025 / Accepted: 1 August 2025 / Published: 4 August 2025

Abstract

Unmanned Aerial Vehicles (UAVs) have rapidly expanded into essential applications, including surveillance, disaster response, agriculture, and urban monitoring. However, for UAVs to navigate safely and autonomously, the ability to detect obstacles and nearby aircraft remains crucial, especially under harsh environmental conditions. This study comprehensively analyzes the recent landscape of obstacle and aircraft detection techniques tailored for UAVs operating in difficult scenarios such as fog, rain, smoke, low light, motion blur, and cluttered environments. It starts with a detailed discussion of key detection challenges and continues with an evaluation of different sensor types, from RGB and infrared cameras to LiDAR, radar, sonar, and event-based vision sensors. Both classical computer vision methods and deep learning-based detection techniques are examined in detail, highlighting their performance strengths and limitations under degraded sensing conditions. The paper additionally offers an overview of suitable UAV-specific datasets and the evaluation metrics generally used to assess detection systems. Finally, the paper examines open problems and upcoming research directions, emphasizing the demand for lightweight, adaptive, and weather-resilient detection systems appropriate for real-time onboard processing. This study aims to guide students and engineers towards developing more robust and intelligent detection systems for next-generation UAV operations.

1. Introduction

Obstacle and aircraft detection in Unmanned Aerial Vehicles (UAVs) [1,2] refers to the process of sensing, identifying, and localizing potential risks along the UAV’s flight route [3]. These risks may include static obstacles such as buildings, trees, electric poles, and cables, as well as dynamic obstacles like moving vehicles, other UAVs, birds, or manned aircraft. Effective obstacle and aircraft detection is essential to ensure the UAV’s ability to navigate safely, avoid collisions, and plan alternative routes when required [4]. It forms a critical component of autonomous UAV operations, allowing drones to operate reliably in difficult, unpredictable environments [5]. Accurate detection of obstacles and aircraft is required to prevent mid-air collisions and accidents. Failures in detection can result in UAV collisions, causing damage to property, economic loss, or fatalities in populated areas. For UAVs operating without direct human control, reliable environmental understanding is essential. Missions such as package delivery, infrastructure inspection, surveillance, and search-and-rescue rely heavily on real-time obstacle and aircraft detection to ensure safe and effective operation. Aviation authorities, including the Federal Aviation Administration (FAA) and the European Union Aviation Safety Agency (EASA), require the integration of detect-and-avoid capabilities for UAVs, especially in urban environments and shared airspace [6]. Detection systems are therefore a basic requirement for legal and safe UAV deployment. The ability to detect and respond to obstacles in real time allows UAVs to plan safer and more efficient flight paths. This reduces energy consumption, shortens mission duration, and improves the overall success rate of UAV operations [7]. A total of 108 references were included and cited in the preparation of this review, ensuring comprehensive coverage of recent advances and challenges in UAV obstacle and aircraft detection. The significant contributions of this work are as follows: The paper offers an in-depth survey of UAV obstacle and aircraft detection under challenging conditions such as fog, rain, smoke, low light, motion blur, and high dynamic range (HDR) scenes, emphasizing how these factors impair visibility and evaluating the usefulness of detection methods and sensor robustness in each case. It delivers a precise taxonomy of the core challenges in UAV navigation, including detecting thin objects like wires, tracking dynamic obstacles, navigating cluttered environments, coping with motion-blurred images, and working within computational limitations, thus framing the hurdles faced in real-world deployments. The study thoroughly compares key sensor technologies, such as RGB and infrared cameras, LiDAR, radar, sonar, and event-based vision sensors, discussing their capabilities, limitations, and suitability under different environmental conditions to assist in optimal sensor selection for UAV systems. It further contrasts classical computer vision techniques (such as optical flow, stereo vision, and SIFT/SURF) with deep learning models (including YOLO, Faster R-CNN, and DETR), tracing their individual strengths, limitations, and real-time feasibility on UAV hardware, particularly in degraded environments.
The paper also reviews significant benchmark datasets (UAVid, VisDrone, DOTA, and Foggy Cityscapes), concentrating on their environmental diversity, annotation classes, and relevance for training and testing detection algorithms under real-world and synthetic adverse conditions. Standardized performance metrics such as mean average precision (mAP), precision, recall, F1-score, latency, and robustness are summarized as important tools for assessing detection methods in safety-critical UAV operations. A complete comparative analysis is delivered as a summary table. It characterizes detection methods by sensors used, conditions addressed, and their respective strengths and disadvantages, offering a practical decision-making aid for researchers and developers. The paper also identifies persistent limitations, including sensor degradation in adverse weather, computational demands of deep learning models, sensor fusion challenges, and poor perception of unknown environments. Finally, it discusses future research directions, highlighting the need for lightweight and power-efficient deep learning models, event-based processing, adaptive multi-sensor systems, and zero-shot detection to enhance robustness and autonomy in complex real-world UAV scenarios. In addition to conventional camera, LiDAR, and radar-based detection systems, laser-based visual identification techniques have emerged as effective solutions for multi-UAV detection and coordination, particularly through the use of laser spots or coded beacons as visual markers [8]. The overall structure of the paper is illustrated in Figure 1, which outlines the key sections, including the introduction, datasets, challenging conditions, detection methods, sensor technologies, and future directions.

2. Environmental Effects on Sensor Reliability

This section focuses on how adverse environmental factors such as fog, rain, smoke, low light, motion blur, and high dynamic range conditions impair sensor input quality for UAV detection systems. The ability of UAVs to perceive and avoid obstacles and aircraft in real time is crucial for their safe operation, especially in complex conditions like rain, fog, smoke, and low light. Although sensor technology and obstacle recognition algorithms have come a long way, there is still a need for more robust systems capable of handling dynamic environments with varying visibility. Research continues to focus on enhancing sensor fusion, developing more powerful algorithms, and employing fog computing to guarantee that, even in bad weather or lighting conditions, UAVs can operate autonomously and securely. The following subsections describe the specific effects of each condition on UAV detection.

2.1. Impact of Rain on UAV Detection

Rain significantly affects UAV obstacle and aircraft detection, causing light diffusion, water droplets on camera lenses, and reduced visibility. This challenge pushes UAV systems to enhance their robustness through sensor fusion, such as combining optical cameras with radar or LiDAR, which are less affected by rain [9]. Rainy conditions have also driven the development of advanced image processing methods like raindrop removal algorithms and contrast enhancement, allowing UAVs to maintain accurate obstacle detection even during heavy rainfall. Therefore, rain acts as a useful test condition for designing UAVs capable of operating reliably in real-world outdoor environments [10].

2.2. Fog Induced Detection Challenges

Fog creates a dense scattering medium that heavily attenuates optical signals, making classical vision-based detection unreliable. This challenge is driving the adoption of non-visual sensors like millimetre-wave radar and the development of deep learning-based fog-robust perception algorithms. Handling fog effectively allows UAVs to detect obstacles and other aircraft even when visibility drops dramatically [11]. The presence of fog also motivates UAV detection systems to implement early-warning mechanisms and path-planning methods that account for degraded sensing, improving overall system safety in urban and mountainous areas where fog is common [12].

2.3. Smoke as a Detection Barrier

Smoke is recognized as a major challenge for UAV-based detection techniques, as it decreases visibility and hampers accurate detection. With varying density and movement patterns, smoke creates dynamic and unpredictable visual disturbances. For UAVs, identifying obstacles through smoke challenges the reliability of traditional image-processing methods. It motivates research on multispectral vision and thermal imaging, which penetrate smoke more effectively [13]. Applications like firefighting missions and disaster response depend on systems that can perform obstacle and aircraft detection in smoky situations, where UAVs must traverse dangerous, smoke-filled locations while preserving precise situational awareness [14].

2.4. Low-Light and Nighttime Detection

Nighttime and low-light conditions significantly degrade the performance of traditional cameras, causing poor contrast, noise, and missed detections. This challenge motivates UAV systems to incorporate low-light enhancement algorithms, near-infrared sensors, and thermal imaging [15]. Extending the operational window of UAVs by enabling them to consistently identify obstacles and aircraft at night allows them to sustain essential activities such as delivery, search-and-rescue, and nighttime surveillance. Hence, low-light techniques are necessary for making UAV obstacle detection systems truly 24/7 operational [16].
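As a concrete illustration of the enhancement step, the sketch below applies contrast-limited adaptive histogram equalization (CLAHE) to a dark frame before it is passed to a detector. It assumes OpenCV is available; the file names are placeholders, not part of any surveyed pipeline.

```python
# Minimal low-light enhancement sketch using CLAHE; input path is a placeholder.
import cv2

frame = cv2.imread("night_frame.jpg")                       # hypothetical UAV frame
lab = cv2.cvtColor(frame, cv2.COLOR_BGR2LAB)                # operate on lightness only
l, a, b = cv2.split(lab)
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
l_eq = clahe.apply(l)                                       # boost local contrast
enhanced = cv2.cvtColor(cv2.merge((l_eq, a, b)), cv2.COLOR_LAB2BGR)
cv2.imwrite("night_frame_enhanced.jpg", enhanced)
```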

2.5. Motion Blur in High-Speed UAVs

Motion blur in UAV obstacle detection refers to the distortion in captured images when the UAV moves quickly or experiences sudden directional shifts. During agile or high-speed flight, UAVs regularly suffer from motion blur, which degrades the captured images and hinders obstacle detection precision. This challenge motivates the design of fast-shutter imaging, inertial measurement unit (IMU)-based stabilisation, and robust motion-deblurring algorithms [17]. Overcoming motion blur is essential for UAVs used in dynamic applications like high-speed inspections, racing, or emergency response, where fast reactions are required and obstacle avoidance must remain accurate despite the UAV’s speed [18].

2.6. High Dynamic Range Scenes

Scenes containing both extremely bright and very dark regions pose significant difficulties for obstacle recognition, as ordinary cameras are unable to capture all features at once. This challenge requires adaptive exposure control and High Dynamic Range (HDR) imaging techniques. By effectively managing such scenarios, UAVs can identify obstacles hidden in shadows or against bright backgrounds such as the sun [19]. Resolving the HDR issue allows UAVs to work consistently in tunnels, at dawn and dusk, and close to highly reflective surfaces, thus enhancing their adaptability across many settings. These environmental conditions primarily affect sensor input quality and image acquisition [20].
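One simple way to handle such scenes, assuming the camera can bracket exposures, is exposure fusion. The sketch below uses OpenCV's Mertens fusion to merge three bracketed frames into a single well-exposed image; the file names and bracket count are illustrative assumptions.

```python
# Exposure-fusion sketch for high dynamic range scenes (Mertens fusion in OpenCV).
import cv2
import numpy as np

exposures = [cv2.imread(f) for f in ("under.jpg", "normal.jpg", "over.jpg")]
fusion = cv2.createMergeMertens().process(exposures)        # float image in [0, 1]
ldr = np.clip(fusion * 255, 0, 255).astype(np.uint8)        # back to 8-bit for detection
cv2.imwrite("hdr_fused.jpg", ldr)
```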

3. Real-Time Perception and Obstacle Detection Challenges

This section focuses on real-time perception challenges during UAV flight, including dynamic object detection, thin obstacle recognition, motion-induced blur, and onboard computational constraints. Unmanned Aerial Vehicles (UAVs) are increasingly employed in varied environments, ranging from urban infrastructure monitoring to disaster response and agricultural surveillance. Yet reliable obstacle detection remains a crucial challenge for safe and autonomous navigation. Various environmental and hardware-related factors hinder reliable perception, particularly in difficult or dynamic settings. Below are the key challenges that UAVs face in detecting obstacles during flight, which must be managed to improve their operational safety and efficiency [21]. These challenges reflect the UAV’s need to process visual and spatial information accurately during real-time navigation. UAVs face multiple perception challenges during flight under difficult environmental conditions, including fog, rain, smoke, and motion blur, which severely affect detection performance. The major challenging conditions for UAV obstacle detection are summarized in Figure 2.

3.1. Detecting Wires, Cables, and Thin Branches

UAV navigation encounters problems detecting small objects such as thin branches, cables, and wires. These objects are often extremely thin, difficult to distinguish from the background, and not reliably picked up by conventional sensors [22]. Overlooking them could result in substantial damage, especially during low-altitude flight in cities, forests, or utility inspection operations. Enhancing the capability to detect thin structures enables reliable and safe UAV operation in such demanding settings [23].

3.2. Real-Time Tracking of Moving Objects

Dynamic obstacles pose greater problems because their positions change continuously [24]. Vehicles, people, animals, and other UAVs require a drone to perceive and anticipate their motion in real time [25]. Safe flight relies on rapid and accurate identification of dynamic obstacles, mainly in crowded urban areas, disaster zones, or airports [26]. Tracking moving objects helps UAVs avoid collisions and enables smooth, intelligent navigation in dynamic settings [27].

3.3. Handling Dense Stationary Environments

Urban and natural environments are filled with static clutter such as trees, buildings, signposts, and poles [28]. Although these objects are fixed, they can create dense scenes that make obstacle detection more challenging. Insufficient handling of static clutter may result in elevated collision risks or overly conservative flight paths [29]. A robust detection method must distinguish significant obstacles from background clutter and plan efficient routes, particularly in city missions, mapping, and inspection tasks [30].

3.4. Robust Detection in Harsh Weather Conditions

Weather conditions like fog, rain, smoke, and dust can severely degrade the performance of UAV sensors [31]. Visibility decreases, sensor noise grows, and the accuracy of obstacle detection may drop significantly. In many applications, UAVs must continue to operate even in less-than-ideal weather [13]. Thus, developing detection systems that remain robust to degraded data is important, ensuring that UAVs can continue their missions safely even when environmental conditions are challenging [32].

3.5. Fast Motion Causing Blur and Image Distortion

When drones fly at high speeds or perform rapid maneuvers, the resulting motion blur and image distortion can make it difficult for cameras and detection algorithms to capture clear information [5]. Blurred images reduce the accuracy of obstacle detection, increasing the risk of accidents. Building systems that can tolerate motion blur allows UAVs to maintain accurate situational awareness during high-speed operations, which is essential for urgent deliveries, surveillance, and aerial sports [33].

3.6. Limited Computing Power on UAVs

Due to size, weight, and energy limits, UAVs cannot carry large, powerful computers onboard. They must perform obstacle detection quickly using limited hardware. This makes it important to develop lightweight, efficient algorithms that do not overload the system but still provide reliable obstacle and aircraft detection. Real-time processing with minimal resources helps UAVs maintain responsiveness and flight safety while preserving battery life, which is important for completing long missions [34]. Table 1 summarizes how each challenge impacts UAV operations and outlines the corresponding detection requirements.

4. Sensors Used for Detection

To navigate safely and avoid obstacles, UAVs rely on a variety of sensors that capture and interpret information from their surroundings. Each sensor type offers unique advantages and limitations depending on environmental conditions, required range, resolution, and processing capabilities. The following subsections describe the most commonly used sensors in UAV obstacle detection systems, highlighting their working principles and suitability for different applications.

4.1. Visual and Thermal Imaging for Obstacle Detection

Cameras, including RGB, infrared (IR), and event-based types, are widely employed for detecting obstacles in UAVs. RGB cameras capture images using visible light, producing high-resolution images rich in detail. Infrared cameras, which capture heat signatures, perform well in low-light circumstances and are particularly helpful for detecting obstacles in full darkness [41]. Event-based cameras differ from traditional cameras by detecting changes in a scene rather than capturing full frames. These cameras help track moving objects at high speeds and in high dynamic range scenarios [42]. Despite their benefits, cameras are limited in bad weather conditions such as rain, fog, and smoke, and may suffer from motion blur in fast-moving scenarios [43].

4.2. High-Resolution 3D Mapping with Laser Pulses

LiDAR operates by sending laser pulses and calculating the time it takes for the light to reflect off an object [44]. This method permits the creation of detailed 3D maps of the environment and provides accurate depth information about surrounding objects. LiDAR performs well in low-light situations and does not depend on ambient light, unlike cameras [45]. Yet LiDAR can be affected by weather conditions such as heavy rain, fog, and smoke, which scatter the laser pulses and decrease accuracy [46]. While LiDAR delivers high-precision data, it is more expensive and bulkier than other sensors, making it less suitable for smaller UAVs [47].
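For reference, the underlying range measurement follows the time-of-flight relation $d = \frac{c\,\Delta t}{2}$, where $c$ is the speed of light and $\Delta t$ the measured round-trip time of the pulse; a round trip of roughly 66.7 ns, for example, corresponds to a range of about 10 m.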

4.3. Robust Detection in Adverse Weather Conditions

Radar uses radio waves to detect objects by emitting waves and measuring the time taken for the signal to reflect back [48]. Because atmospheric particles scatter radio waves less than light, radar is useful in harsh weather conditions including rain, fog, and smoke [49]. Radar systems are valuable for UAVs because they provide vital information such as an object’s distance, speed, and direction, and they can function in low-visibility conditions. Radar typically has a lower resolution than LiDAR and cameras, though, and its usefulness can be affected by clutter and interference, especially in urban settings [50].

4.4. Ultrasonic Sensing for Short-Range Detection

Sonar works by emitting sound waves and measuring how long the echo takes to return after striking an object. While sonar is primarily used underwater, it is also used in air for short-range obstacle detection. It is useful in environments with limited visual information, such as low-light conditions, and can detect nearby obstacles effectively. Yet its major limitations are its short detection range and susceptibility to environmental noise, making it unsuitable for long-range detection [51]. UAVs rely on multiple sensing modalities to perceive their environment effectively. Different sensors, such as cameras, LiDAR, radar, and sonar, each have unique strengths and limitations under varying conditions. A comparative overview of these sensors and their fusion for robust perception is illustrated in Figure 3. To enhance obstacle and aircraft detection, UAVs utilize different sensor types, each with unique operational principles and performance trade-offs. A detailed comparison of these commonly used sensors, including their advantages and limitations under adverse conditions, is shown in Table 2.

5. Obstacle and Aircraft Detection Methods

This section presents a comprehensive overview of the key methodologies employed for obstacle and aircraft detection in UAV systems. These methods are broadly categorized into classical computer vision techniques and modern deep learning approaches. Classical techniques—including optical flow, stereo vision, feature tracking, and motion detection—have historically formed the foundation of UAV perception but face challenges under adverse environmental conditions. In contrast, recent advances in deep learning, such as transformer-based models, event-based vision, and sensor fusion networks, have significantly improved detection accuracy and robustness in complex and dynamic scenarios. The subsections below detail these approaches, highlighting their fundamental principles, advantages, and limitations in practical UAV applications.

5.1. Classical Methods

Traditional computer vision techniques laid the groundwork for early UAV obstacle and aircraft detection systems. Optical flow estimates the motion of objects between two successive image frames based on the apparent movement of brightness patterns [56]. It has been widely used for detecting moving obstacles and inferring depth from image sequences. However, optical flow is sensitive to environmental conditions like fog, rain, or motion blur [57]. Rain droplets and fog can produce pixel-level inconsistencies, leading to errors in motion analysis, while fast UAV motion can distort the flow field and result in unreliable obstacle detection [58].
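A minimal dense optical-flow sketch using OpenCV's Farneback method is shown below; the frame files and the simple thresholding rule for flagging candidate obstacles are illustrative assumptions rather than a method from the cited works.

```python
# Dense optical flow between two consecutive UAV frames (Farneback method).
import cv2
import numpy as np

prev = cv2.imread("frame_t0.png", cv2.IMREAD_GRAYSCALE)
curr = cv2.imread("frame_t1.png", cv2.IMREAD_GRAYSCALE)

# Arguments: pyr_scale, levels, winsize, iterations, poly_n, poly_sigma, flags.
flow = cv2.calcOpticalFlowFarneback(prev, curr, None, 0.5, 3, 15, 3, 5, 1.2, 0)
magnitude, _ = cv2.cartToPolar(flow[..., 0], flow[..., 1])

# Pixels whose apparent motion deviates strongly from the scene average
# are flagged as candidate obstacles (a deliberately crude heuristic).
obstacle_mask = magnitude > (magnitude.mean() + 2 * magnitude.std())
print("candidate obstacle pixels:", int(obstacle_mask.sum()))
```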
Stereo vision uses two spatially separated cameras to simulate human binocular vision and estimate the depth of objects. It helps determine the distance between the UAV and potential obstacles [59]. While stereo vision works well in structured environments with good texture and lighting, it struggles in low-light or foggy scenes, where finding corresponding points between the two images is hard. Moreover, rain and smoke can cause incorrect matches due to occluded or noisy image input [60].
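The sketch below illustrates the basic stereo pipeline with OpenCV's block matcher, converting disparity to metric depth via depth = f·B/d; the focal length and baseline values are assumed for illustration.

```python
# Stereo depth sketch: disparity map from a rectified image pair, then depth.
import cv2
import numpy as np

left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = stereo.compute(left, right).astype(np.float32) / 16.0  # fixed-point to pixels

focal_px, baseline_m = 700.0, 0.12        # assumed focal length (px) and baseline (m)
valid = disparity > 0
depth_m = np.zeros_like(disparity)
depth_m[valid] = focal_px * baseline_m / disparity[valid]          # depth = f * B / d
print("nearest valid depth (m):", depth_m[valid].min() if valid.any() else None)
```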
Feature tracking using algorithms like SIFT (Scale-Invariant Feature Transform) and SURF (Speeded-Up Robust Features) detects and matches distinctive features across frames for object localization and tracking [61]. These methods perform well on textured and static objects but have difficulty when features are washed out by rain, fog, or fast movement. In high-speed flights, feature blurring and occlusions degrade the matching accuracy, making tracking unstable and error-prone [62,63].
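A minimal feature-matching sketch is given below using ORB, a freely available alternative to SIFT/SURF; the frame files are placeholders.

```python
# Feature matching sketch: detect ORB keypoints in two frames and match them.
import cv2

img1 = cv2.imread("frame_t0.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("frame_t1.png", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(nfeatures=1000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

# Hamming distance suits ORB's binary descriptors; cross-check filters weak matches.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
print("tracked features:", len(matches))
```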
Motion detection, such as background subtraction, identifies moving objects in a video sequence by comparing the current frame to a reference background model. It is computationally efficient and well suited for simple surveillance tasks [64]. Yet background subtraction performs poorly under dynamic lighting, fog, or rain. Continuous environmental changes, moving shadows, and sensor noise introduce artifacts that compromise the reliability of motion-based detection in real-world UAV applications [43].
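The following sketch shows a typical background-subtraction loop with OpenCV's MOG2 model; the video path and the minimum-area threshold are illustrative assumptions.

```python
# Background-subtraction sketch (MOG2) for detecting moving objects in a clip.
import cv2

cap = cv2.VideoCapture("uav_clip.mp4")
subtractor = cv2.createBackgroundSubtractorMOG2(history=300, varThreshold=25)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame)                      # foreground = moving pixels
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN,       # suppress isolated noise
                            cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5)))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    moving = [c for c in contours if cv2.contourArea(c) > 100]  # keep sizeable blobs
cap.release()
```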

5.2. Modern Deep Learning Methods

With the advancement of deep learning, obstacle and aircraft detection in UAVs has seen remarkable improvements in accuracy and robustness [65]. CNN-based object detectors, such as YOLO (You Only Look Once), Faster R-CNN, and SSD (Single Shot Detector), have become widely used due to their ability to learn rich hierarchical features directly from large annotated datasets [66]. These models detect and classify multiple objects within a single image in real time [67]. While CNN-based models outperform classical methods in accuracy, their performance in challenging environments, such as rain, fog, and night, still relies heavily on the quality and variety of training data [68]. For example, these models may fail to generalise to adverse conditions without exposure to weather-distorted inputs during training [69].
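As an example of how such a detector is typically applied onboard, the sketch below runs a pretrained lightweight YOLO model on a single frame using the ultralytics package; the chosen weights, image path, and confidence threshold are assumptions, not the configuration of any surveyed system.

```python
# Inference sketch with an off-the-shelf YOLO detector (ultralytics package).
from ultralytics import YOLO

model = YOLO("yolov8n.pt")                  # lightweight variant suited to edge devices
results = model("uav_frame.jpg", conf=0.25) # run detection on one frame

for box in results[0].boxes:                # each detection: class, confidence, box
    cls_id = int(box.cls[0])
    print(model.names[cls_id], float(box.conf[0]), box.xyxy[0].tolist())
```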
Transformer-based detectors, such as DETR (Detection Transformer) and Deformable DETR, leverage attention mechanisms to better model spatial relationships between objects. These models deliver strong performance on cluttered scenes and can detect small or overlapping objects more reliably than CNNs [70]. Yet they are computationally heavy, which limits their deployment on resource-constrained UAVs. Their robustness in degraded conditions is promising, but still depends on the availability of specialised training datasets [71].
Event-based vision networks are designed to process data from event cameras, which capture changes in the scene at microsecond resolution rather than recording full frames. This technology is particularly helpful in high-speed UAV applications, as it mitigates motion blur effectively. Unlike traditional cameras, event cameras do not suffer from frame-based exposure issues and function well in low light or fast motion [72]. However, event-based networks are still emerging and require further research and hardware development before they become mainstream in UAV systems [73].
Sensor fusion networks aim to overcome individual sensor limitations by integrating data from multiple sources, such as combining RGB cameras with LiDAR. This approach enhances obstacle detection under difficult conditions—cameras provide texture and color information, while LiDAR contributes precise depth data [74]. Deep learning models designed for fusion can handle foggy or rainy conditions better than single-sensor systems, but synchronization issues, calibration complexity, and increased computational demands remain challenges to be addressed [75]. Table 3 provides a detailed comparison of obstacle and aircraft detection methods, outlining their strengths, limitations in degraded conditions, and typical use cases across both classical and deep learning paradigms.
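A common first step in camera-LiDAR fusion is projecting the point cloud into the image plane so that depth and appearance can be combined per pixel. The sketch below illustrates this projection under assumed intrinsic and extrinsic calibration values; the point cloud is a random placeholder.

```python
# Early-fusion sketch: project LiDAR points into the camera image.
import numpy as np

K = np.array([[700.0, 0.0, 320.0],           # assumed camera intrinsics
              [0.0, 700.0, 240.0],
              [0.0, 0.0, 1.0]])
R = np.eye(3)                                 # assumed LiDAR-to-camera rotation
t = np.array([0.0, 0.0, 0.1])                 # assumed LiDAR-to-camera translation (m)

points_lidar = np.random.rand(1000, 3) * 20   # placeholder point cloud (x, y, z in m)
points_cam = points_lidar @ R.T + t           # transform into the camera frame
in_front = points_cam[:, 2] > 0.5             # keep points in front of the camera

uvw = points_cam[in_front] @ K.T              # perspective projection
uv = uvw[:, :2] / uvw[:, 2:3]                 # pixel coordinates (u, v)
depth = points_cam[in_front, 2]               # per-point depth usable alongside RGB
print("projected points:", len(uv))
```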

5.3. Obstacle Detection Methods

Obstacle detection refers to the identification of static or semi-static physical objects in the UAV’s flight environment, such as buildings, poles, trees, terrain features, and power lines [84]. These obstacles pose a threat to UAV navigation and require reliable, often short-to-mid-range sensing solutions. Commonly used sensors for obstacle detection include RGB cameras, stereo vision, LiDAR, ultrasonic sensors, and event-based cameras. Conventional techniques depend on optical flow, edge detection, or SLAM-based algorithms. In recent years, deep learning techniques such as YOLO, Faster R-CNN, and semantic segmentation have significantly enhanced the robustness and speed of obstacle detection [79].
Sensor fusion methods, incorporating data from multiple sources (e.g., RGB + LiDAR), have demonstrated effectiveness in harsh conditions like fog or low light. For close-range hazard avoidance, monocular depth estimation and structure-from-motion methods are also used [80].

5.4. Aircraft Detection Methods

Aircraft detection concentrates on identifying and tracking other airborne objects that may intersect the UAV’s flight path [85]. This includes crewed aircraft, other drones, and even flocks of birds [86]. Unlike static obstacle detection, aircraft detection needs high-speed, long-range sensing and prediction mechanisms. Techniques typically involve radar, infrared imaging, thermal sensors, ADS-B transponders, and long-range visual tracking using object detection models [87].
Detection of non-cooperative aerial targets that do not broadcast transponder signals depends heavily on computer vision and machine learning methods. Deep learning models trained on aerial imagery (e.g., DOTA, VEDAI) are used for detection and trajectory estimation [88]. Temporal models such as Kalman filters or LSTMs are also used for predicting the motion of moving targets [89]. Table 4 highlights the key differences between obstacle and aircraft detection, focusing on detection range, response time, motion patterns, and sensing technologies involved.
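As an illustration of such temporal prediction, the sketch below implements a constant-velocity Kalman filter that updates an intruder's estimated state from noisy position detections and predicts its next position; all matrices and measurements are illustrative assumptions.

```python
# Constant-velocity Kalman filter sketch for predicting an intruder's motion.
import numpy as np

dt = 0.1                                          # time between detections (s)
F = np.array([[1, 0, dt, 0],                      # state transition for [x, y, vx, vy]
              [0, 1, 0, dt],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], dtype=float)
H = np.array([[1, 0, 0, 0],                       # only position is measured
              [0, 1, 0, 0]], dtype=float)
Q = np.eye(4) * 0.01                              # process noise (assumed)
R = np.eye(2) * 1.0                               # measurement noise (assumed)
x = np.zeros(4)                                   # initial state
P = np.eye(4) * 10.0                              # initial uncertainty

def kf_step(x, P, z):
    # Predict the next state, then correct it with the detection z = [x_meas, y_meas].
    x_pred, P_pred = F @ x, F @ P @ F.T + Q
    y = z - H @ x_pred
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    return x_pred + K @ y, (np.eye(4) - K @ H) @ P_pred

for z in [np.array([1.0, 2.0]), np.array([1.5, 2.4]), np.array([2.1, 2.9])]:
    x, P = kf_step(x, P, z)
print("predicted next position:", (F @ x)[:2])
```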

5.5. Laser-Based Visual Identification Techniques for Multi-UAV Detection

Laser-based identification techniques offer precise and scalable solutions for detecting and localizing multiple UAVs in real time, especially under GPS-denied or visually degraded conditions [91]. These systems typically integrate a laser range finder (LRF) with a camera-telescope setup, co-aligned using a dichroic mirror. A fast steering mirror (FSM) dynamically adjusts the alignment based on the angular position of UAVs within the field of view, enabling accurate distance estimation at up to 14 Hz per target. To improve autonomous landing precision, a hybrid approach combining LiDAR and visual cameras has been proposed [92]. This method relates the color of landing markers to laser point cloud intensity, enabling UAVs to locate landing zones even in low light or adverse weather. At closer distances, visual corner detection is used to refine position estimates and guide precise landings [93]. Another solution focuses on emergency landing in unstructured environments using LiDAR point clouds and Principal Component Analysis (PCA) to identify flat regions based on slope, roughness, and obstacle proximity. The system continuously evaluates potential landing sites and selects the safest option in real time [94].
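A minimal sketch of the PCA-based flatness test described above is given below: the smallest principal component of a local patch approximates the surface normal, from which slope and roughness can be thresholded. The thresholds and the synthetic patch are illustrative assumptions.

```python
# PCA-based flatness check for candidate landing patches in a LiDAR point cloud.
import numpy as np

def patch_is_flat(points, max_slope_deg=10.0, max_roughness=0.05):
    """points: (N, 3) array of a local patch from the point cloud."""
    centered = points - points.mean(axis=0)
    cov = np.cov(centered.T)
    eigvals, eigvecs = np.linalg.eigh(cov)           # eigenvalues in ascending order
    normal = eigvecs[:, 0]                           # smallest-variance direction ~ normal
    slope = np.degrees(np.arccos(abs(normal[2])))    # angle between normal and vertical
    roughness = np.sqrt(eigvals[0])                  # spread along the normal (m)
    return slope <= max_slope_deg and roughness <= max_roughness

patch = np.random.rand(200, 3) * [2.0, 2.0, 0.02]    # placeholder near-planar patch
print("safe to land:", patch_is_flat(patch))
```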

6. Standard Benchmark Datasets

High-quality datasets are essential for developing and evaluating obstacle and aircraft detection techniques for UAVs under challenging conditions. These datasets must cover diverse real-world scenarios, including crowded environments, changing weather conditions, low-light scenes, and motion-induced blur. A well-curated dataset allows researchers to train robust detection models and benchmark their performance under realistic conditions. Below are some of the most widely used and relevant datasets in this domain.

6.1. High-Resolution Urban UAV Video Dataset

UAVid is a video dataset captured by UAVs in urban environments. It concentrates on dense, real-world scenes with moving vehicles, buildings, trees, and pedestrians. The videos contain high-resolution frames with pixel-level semantic annotations, making this dataset suitable for evaluating UAV-based scene understanding and object detection models. UAVid represents real-world conditions but is limited in terms of adverse weather variations [95,96].
The UAVid dataset provides high-resolution UAV images of complex urban scenes, making it ideal for evaluating obstacle and object detection methods in realistic environments. Figure 4 shows a sample image from this dataset, illustrating the challenges of urban UAV perception.

6.2. Diverse Urban Object Detection Dataset

VisDrone is another popular UAV dataset collected from various cities in China. It includes thousands of images and videos captured in crowded urban areas, containing a wide range of object classes such as pedestrians, cars, buses, and bicycles. It features different times of the day, weather conditions, and scene complexities. The dataset provides bounding box annotations, making it ideal for training object detection and tracking algorithms. VisDrone is based on real-world data and is highly valued for its diversity and scale [98].
The VisDrone dataset provides diverse aerial images captured by drones in urban environments (Figure 5), featuring detailed annotations for vehicles, pedestrians, and other objects. It is widely used to train and evaluate object detection and tracking algorithms for UAV-based surveillance and traffic monitoring [99].

6.3. Large-Scale Aerial Object Detection Dataset

DOTA (Dataset for Object Detection in Aerial Images) is a large-scale dataset specifically designed for detecting objects in aerial views [88]. Unlike traditional datasets, DOTA includes objects at multiple orientations and scales, such as airplanes, ships, storage tanks, and vehicles [100]. It is particularly valuable for evaluating detection systems that must operate in aerial surveillance and UAV-based monitoring tasks. The dataset consists of high-resolution satellite and aerial images and is based on real-world scenes [101]. Figure 6 shows objects at multiple orientations and scales under different conditions.

6.4. Synthetic Fog for Adverse Weather Evaluation

Foggy Cityscapes is an extension of the Cityscapes dataset with synthetic fog added to simulate real-world foggy weather [102]. While not captured from UAVs, it is used to train models that need to operate in low-visibility environments [103]. The dataset provides pixel-wise semantic segmentation labels, helping researchers assess how detection algorithms perform under degraded visibility [104]. Although synthetic, it allows controlled experimentation for adverse weather training [105]. Figure 7 shows a variety of city scenes under different fog conditions.
A summary of widely used benchmark datasets for UAV and aerial image-based detection tasks, highlighting their characteristics and application scenarios, is presented in Figure 8.
A detailed comparison of commonly used UAV detection datasets, including their environments, conditions, annotation types, and key applications, is presented in Table 5.

7. Performance Evaluation Metrics

  • Mean Average Precision (mAP): Mean average precision is one of the most widely used metrics in object detection. It estimates how well a detection model recognizes objects with both high precision and recall. For each class, the Average Precision (AP) is calculated as the area under the precision-recall curve, and mAP is the mean of the APs across all classes [107]. In UAV applications, mAP delivers a complete view of the model’s ability to correctly detect obstacles like wires, vehicles, or aircraft under different conditions, such as rain or fog. A high mAP indicates consistent and accurate detection across multiple object types and scenarios [108].
    $\mathrm{mAP} = \frac{1}{N} \sum_{i=1}^{N} \mathrm{AP}_i$
    where $N$ is the number of classes or recall levels, and $\mathrm{AP}_i$ is the Average Precision for class $i$.
  • Precision and Recall: Precision measures how many detected objects are relevant, while recall measures how many relevant objects are detected. In UAV obstacle detection, high precision means fewer false alarms (e.g., detecting a shadow as an obstacle), and high recall means fewer missed detections (e.g., failing to detect a thin wire) [109]. These metrics are especially critical in safety-critical applications like mid-air collision avoidance, where missing an obstacle (low recall) or detecting a non-existent one (low precision) could lead to serious consequences [110]. A minimal computation sketch is given after this list.
    $\mathrm{Precision} = \frac{TP}{TP + FP}$
    where $TP$ is the number of true positives and $FP$ is the number of false positives.
    $\mathrm{Recall} = \frac{TP}{TP + FN}$
    where $FN$ is the number of false negatives.
  • F1 Score: The F1 score is the harmonic mean of precision and recall. It delivers a single metric that balances both values, making it particularly useful when there is an uneven trade-off between precision and recall [111]. In UAV detection systems, the F1 score is a helpful indicator of how well the technique performs overall, especially in cluttered environments or low-visibility conditions where false positives and false negatives are both likely [112].
    $F_1 = \frac{2 \cdot \mathrm{Precision} \cdot \mathrm{Recall}}{\mathrm{Precision} + \mathrm{Recall}}$
    This balances both precision and recall.
  • Latency: Latency refers to the time taken from the moment an image is captured to the moment an obstacle is detected and a reaction is triggered. In UAVs, low latency is essential because decisions must be made in real time to avoid collisions [113,114].
  • Robustness under Degraded Conditions: Many models that perform well in ideal scenarios often degrade significantly in real-world environments. Robustness is evaluated by testing the model on datasets specifically designed to simulate or contain real-world degradation. It is a key requirement for UAVs operating in unpredictable outdoor conditions [115]. To evaluate the effectiveness of object detection systems in UAV applications, several key performance metrics are employed. These include precision, recall, F1 score, latency, and robustness, which collectively help assess detection accuracy and real-time responsiveness. An overview of these performance metrics is shown in Figure 9.
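The sketch below shows a minimal computation of precision, recall, and F1 for a single class, assuming detections are greedily matched to ground-truth boxes at an IoU threshold of 0.5; it is a simplified illustration rather than a full mAP evaluation protocol.

```python
# Minimal precision / recall / F1 sketch with IoU-based matching (single class).
def iou(a, b):
    """Boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def prf1(detections, ground_truth, thr=0.5):
    matched, tp = set(), 0
    for det in detections:
        best = max(range(len(ground_truth)),
                   key=lambda i: iou(det, ground_truth[i]), default=None)
        if best is not None and best not in matched and iou(det, ground_truth[best]) >= thr:
            matched.add(best)
            tp += 1
    fp = len(detections) - tp                      # unmatched detections
    fn = len(ground_truth) - tp                    # missed ground-truth objects
    precision = tp / (tp + fp + 1e-9)
    recall = tp / (tp + fn + 1e-9)
    f1 = 2 * precision * recall / (precision + recall + 1e-9)
    return precision, recall, f1

print(prf1([(0, 0, 10, 10), (20, 20, 30, 30)], [(1, 1, 10, 10)]))
```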
A wide range of detection methods has been developed for UAV obstacle and aircraft detection, each with different sensor requirements, environmental handling capabilities, strengths, and limitations. A comparative analysis of these methods is presented in Table 6.

8. Limitations of Current Detection Approaches

This section highlights the limitations of current detection systems, including deep learning generalization issues, sensor fusion complexity, dataset gaps, and reduced performance under adverse conditions. Despite notable improvements in obstacle and aircraft detection systems for UAVs, several key hurdles remain. One of the most persistent issues is achieving high detection accuracy in unfavourable weather conditions such as rain, fog, smoke, and snow [5]. These environments decrease visibility, distort sensor data, and introduce noise, resulting in increased false positives and missed detections. Cameras struggle due to reduced contrast and lens obstructions; LiDAR pulses are scattered by rain droplets; and radar, while more robust, lacks the resolution for small or thin obstacles. Current detection methods, especially deep learning-based models, still fail in real-world dynamic scenarios, particularly when exposed to motion blur from high-speed flights, changing lighting, and small or fast-moving objects. This problem is further exacerbated by the strong dependence of CNN-based models on the specific characteristics of their training datasets. As shown in [118], convolutional architectures often struggle to generalize across datasets with differing levels of class imbalance, manipulation complexity, or annotation granularity, issues which may similarly affect aerial perception tasks in adverse environments. These models often require large, annotated datasets for training, and when such conditions are not well represented in the dataset, performance drops sharply. Moreover, they demand high computational resources, which are often not feasible for lightweight, power-limited UAV platforms. Comparable concerns regarding model complexity and inference speed have been addressed in other domains, such as medical imaging, where lightweight CNN architectures have demonstrated exceptional accuracy while ensuring real-time performance on embedded systems [119]. Sensor fusion, the combination of multiple sensors (e.g., RGB camera + LiDAR or radar), is a promising direction to mitigate individual sensor limitations. While it enhances reliability and environmental adaptability, it presents challenges in calibration, synchronization, and real-time data fusion, mainly under hardware constraints [31]. Each sensor also has its own failure modes, and inadequate performance in one can degrade the fused output. Event-based cameras, which capture changes in brightness asynchronously, offer notable advantages against motion blur and in low-light scenarios. Yet their high cost, limited commercial availability, and the complexity of data processing and algorithm design pose practical challenges. Their output differs fundamentally from conventional image frames, demanding novel methods for detection tasks.
Despite the rapid advancements in detection systems for UAVs, some tough challenges still linger. Most methods struggle when the environment becomes complicated, such as in fog, heavy rain, or low-light conditions, all of which can affect accuracy. Even though combining sensors like LiDAR and cameras can improve results, it is not a perfect solution: fused systems tend to become bulky, costly, and more difficult to run in real time. Event-based cameras, while promising in motion-heavy scenes, are still niche and come with their own technical and economic hurdles. The key takeaway is that existing solutions are good but not yet fully dependable across all conditions. There is still room for improvement in creating UAV detection systems that are smarter, faster, and more flexible, particularly in the messy conditions found beyond controlled environments. This section has discussed the systemic limitations of existing technical solutions. Real-time multi-sensor systems often involve trade-offs between latency, detection accuracy, and robustness. Studies such as [120] have evaluated fusion frameworks integrating LiDAR and vision, demonstrating detection latency under 50 ms and F1-scores above 0.85 in controlled environments [121]. These improvements, however, come with increased computational demands, often requiring specialized edge hardware like NVIDIA Jetson Xavier or Intel Movidius to meet real-time constraints in UAV applications. A detailed overview of the key limitations affecting UAV obstacle and aircraft detection systems, including their causes and operational constraints, is illustrated in Figure 10.

9. Future Directions

As UAV technology continues to develop, several open challenges remain in achieving robust, autonomous aerial navigation under unfavorable conditions. Future work could focus on deploying ultra-lightweight deep learning models such as YOLOv8n, Mobile-DETR, or EfficientDet-D0, which offer fast inference with fewer parameters, suitable for real-time UAV tasks. These models offer reduced parameter counts (<5M) and fast inference times on edge devices, though often at the cost of slightly reduced accuracy on complex backgrounds. The following future research directions are recognized as particularly crucial:
  • Real-time Processing under Low Visibility: Adverse weather, such as fog, smoke, and heavy rain, significantly degrades sensor inputs. Future systems must improve computational efficiency to allow quick and dependable detection under such visibility conditions.
  • Lightweight Deep Learning Models: Deep neural networks deliver high accuracy but are usually computationally expensive. There is a need for optimized, resource-efficient models that maintain high detection accuracy while working within the limited memory and power capabilities of UAV platforms.
  • Event-based Vision Processing: Event cameras show important benefits in high-speed or low-light scenarios by capturing only changes in a scene. However, existing algorithms for processing such data are immature. Further work is required to design robust algorithms and integrate them effectively with UAV systems for dynamic obstacle and aircraft detection.
  • Adaptive Multi-Sensor Fusion: Combining inputs from RGB, LiDAR, radar, and event cameras can enhance detection robustness. Future strategies should dynamically adjust sensor weighting based on the environment (e.g., increasing radar weight in fog); however, challenges in sensor calibration and data synchronization remain.
  • Zero-Shot Detection: This approach enables UAVs to detect previously unseen objects during deployment without prior training. It carries enormous promise for unpredictable scenarios, such as natural disasters or emerging threats. Future work must concentrate on improving generalization capabilities and semantic reasoning.
  • Human-Machine Interfaces for UAV Control: Intuitive control systems, such as gesture-based interaction (e.g., wearable gloves), can improve responsiveness in real time, particularly in field operations. Such systems lower cognitive load and permit seamless human-drone collaboration.
These directions collectively aim to make UAV detection systems more intelligent, flexible, and resilient, ensuring reliable operations regardless of environmental or hardware limitations.

10. Conclusions

Obstacle and aircraft detection play a critical role in the safe and autonomous navigation of UAVs, especially as they become more prominent in areas like delivery, surveillance, and disaster response. Ensuring that UAVs can reliably detect and avoid both static and dynamic obstacles is vital to prevent collisions and enable dependable performance in real-world environments. Throughout this study, we examined a broad range of detection techniques, beginning with classic methods such as optical flow, stereo vision, and feature tracking. These classical techniques served as the basis for early systems, but they struggle under poor visibility or high-speed flight. In contrast, deep learning models, especially those based on CNNs and Transformers, have achieved important advancements in detection accuracy, particularly when supported by sensor fusion systems incorporating multiple modalities like LiDAR, radar, and cameras. Despite these improvements, significant challenges remain. Most existing systems struggle under degraded conditions such as fog, rain, and low light. Many are also computationally heavy and difficult to deploy on lightweight UAV platforms with limited onboard resources. Additionally, few models generalize well to unseen obstacles or rapidly changing environments. As a result, the field now requires a shift towards more robust, weather-resilient, and real-time solutions. This includes the adoption of lightweight models, efficient event-based processing, adaptive multi-sensor frameworks, and methods capable of handling unknown obstacles via zero-shot learning. Continued research in these areas will be required to build the next generation of UAV systems that can navigate safely and autonomously, no matter the conditions.

Author Contributions

Conceptualization, C.R., S.V.G. and C.N.; methodology, R.D.A.R. and C.N.; software, S.V.G., R.D.A.R. and R.M.R.Y.; formal analysis, R.M.R.Y. and A.P.; investigation, C.R., S.V.G. and R.D.A.R.; resources, A.P.; supervision, C.N.; project administration, A.P. and C.N. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable. This study did not involve humans or animals and therefore did not require ethical approval.

Informed Consent Statement

Not applicable. This study did not involve human participants.

Data Availability Statement

The datasets used in this study are publicly available and can be accessed through the original source cited in the manuscript. No new data were generated in this study.

Acknowledgments

The authors thank their respective institutions for providing computational resources and infrastructure. No external administrative or technical support was involved.

Conflicts of Interest

The authors declare no conflicts of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

References

  1. Priya, S.S.; Sanjana, P.S.; Yanamala, R.M.R.; Raj, R.D.A.; Pallakonda, A.; Napoli, C.; Randieri, C. Flight-Safe Inference: SVD-Compressed LSTM Acceleration for Real-Time UAV Engine Monitoring Using Custom FPGA Hardware Architecture. Drones 2025, 9, 494. [Google Scholar] [CrossRef]
  2. Yeddula, L.R.; Pallakonda, A.; Raj, R.D.A.; Yanamala, R.M.R.; Prakasha, K.K.; Kumar, M.S. YOLOv8n-GBE: A Hybrid YOLOv8n Model with Ghost Convolutions and BiFPN-ECA Attention for Solar PV Defect Localization. IEEE Access 2025, 13, 114012–114028. [Google Scholar] [CrossRef]
  3. Fiorio, M.; Galatolo, R.; Di Rito, G. Development and Experimental Validation of a Sense-and-Avoid System for a Mini-UAV. Drones 2025, 9, 96. [Google Scholar] [CrossRef]
  4. Fasano, G.; Opromolla, R. Analytical Framework for Sensing Requirements Definition in Non-Cooperative UAS Sense and Avoid. Drones 2023, 7, 621. [Google Scholar] [CrossRef]
  5. Gandhi, T.; Yang, M.-T.; Kasturi, R.; Camps, O.; Coraor, L.; McCandless, J. Detection of obstacles in the flight path of an aircraft. IEEE Trans. Aerosp. Electron. Syst. 2003, 39, 176–191. [Google Scholar] [CrossRef]
  6. Ameli, Z.; Aremanda, Y.; Friess, W.A.; Landis, E.N. Impact of UAV Hardware Options on Bridge Inspection Mission Capabilities. Drones 2022, 6, 64. [Google Scholar] [CrossRef]
  7. Merei, A.; Mcheick, H.; Ghaddar, A.; Rebaine, D. A Survey on Obstacle Detection and Avoidance Methods for UAVs. Drones 2025, 9, 203. [Google Scholar] [CrossRef]
  8. Ojdanić, D.; Gräf, B.; Sinn, A.; Yoo, H.W.; Schitter, G. Camera-guided real-time laser ranging for multi-UAV distance measurement. Appl. Opt. 2022, 61, 9233–9240. [Google Scholar] [CrossRef]
  9. Sumi, Y.; Kim, B.K.; Ogure, T.; Kodama, M.; Sakai, N.; Kobayashi, M. Impact of Rainfall on the Detection Performance of Non-Contact Safety Sensors for UAVs/UGVs. Sensors 2024, 24, 2713. [Google Scholar] [CrossRef]
  10. Singh, P.; Gupta, K.; Jain, A.K.; Jain, A.; Jain, A. Vision-based UAV detection in complex backgrounds and rainy conditions. In Proceedings of the 2024 2nd International Conference on Disruptive Technologies (ICDT), Greater Noida, India, 15–16 March 2024; IEEE: Piscataway, NJ, USA, 2024; pp. 1097–1102. [Google Scholar]
  11. Aswini, N.; Kumar, E.K.; Uma, S.V. UAV and obstacle sensing techniques—A perspective. Int. J. Intell. Unmanned Syst. 2018, 6, 32–46. [Google Scholar] [CrossRef]
  12. Fasano, G.; Accardo, D.; Tirri, A.E.; Moccia, A.; De Lellis, E. Sky region obstacle detection and tracking for vision-based UAS sense and avoid. J. Intell. Robot. Syst. 2016, 84, 121–144. [Google Scholar] [CrossRef]
  13. Shabnam Sadeghi, E. Mixed reality and remote sensing application of unmanned aerial vehicle in fire and smoke detection. J. Ind. Inf. Integr. 2019, 15, 42–49. [Google Scholar] [CrossRef]
  14. Chao, P.-Y.; Hsu, W.-C.; Chen, W.-Y. Design of Automatic Correction System for UAV’s Smoke Trajectory Angle Based on KNN Algorithm. Electronics 2022, 11, 3587. [Google Scholar] [CrossRef]
  15. Loffi, J.M.; Wallace, R.J.; Vance, S.M.; Jacob, J.; Dunlap, J.C.; Mitchell, T.A.; Johnson, D.C. Pilot Visual Detection of Small Unmanned Aircraft on Final Approach during Nighttime Conditions. Int. J. Aviat. Aeronaut. Aerosp. 2021, 8, 11. [Google Scholar] [CrossRef]
  16. Wang, Z.; Zhao, D.; Cao, Y. Visual navigation algorithm for night landing of fixed-wing unmanned aerial vehicle. Aerospace 2022, 9, 615. [Google Scholar] [CrossRef]
  17. Zhan, Q.; Zhou, Y.; Zhang, J.; Sun, C.; Shen, R.; He, B. A novel method for measuring center-axis velocity of unmanned aerial vehicles through synthetic motion blur images. Auton. Intell. Syst. 2024, 4, 16. [Google Scholar] [CrossRef]
  18. Ruf, B.; Monka, S.; Kollmann, M.; Grinberg, M. Real-Time on-Board Obstacle Avoidance for UAVS Based on Embedded Stereo Vision. ISPRS—Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2018, XLII-1, 363–370. [Google Scholar] [CrossRef]
  19. Wang, Z.; Zhao, D.; Cao, Y. Image Quality enhancement with applications to unmanned aerial vehicle obstacle detection. Aerospace 2022, 9, 829. [Google Scholar] [CrossRef]
  20. Li, J.; Xiong, X.; Yan, Y.; Yang, Y. A survey of indoor uav obstacle avoidance research. IEEE Access 2023, 11, 51861–51891. [Google Scholar] [CrossRef]
  21. Stambler, A.; Sherwin, G.; Rowe, P. Detection and reconstruction of wires using cameras for aircraft safety systems. In Proceedings of the 2019 International Conference on Robotics and Automation (ICRA), Montreal, QC, Canada, 20–24 May 2019; IEEE: Piscataway, NJ, USA, 2019; pp. 697–703. [Google Scholar]
  22. Junior, V.A.A.; Cugnasca, P.S. Detecting cables and power lines in Small-UAS (Unmanned Aircraft Systems) images through deep learning. In Proceedings of the 2021 IEEE/AIAA 40th Digital Avionics Systems Conference (DASC), San Antonio, TX, USA, 3–7 October 2021; IEEE: Piscataway, NJ, USA, 2021; pp. 1–7. [Google Scholar]
  23. Zhou, C.; Yang, J.; Zhao, C.; Hua, G. Accurate thin-structure obstacle detection for autonomous mobile robots. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, Honolulu, HI, USA, 21–26 July 2017; pp. 1–10. [Google Scholar]
  24. Shin, H.-S.; Tsourdos, A.; White, B.; Shanmugavel, M.; Tahk, M.-J. UAV conflict detection and resolution for static and dynamic obstacles. In Proceedings of the AIAA Guidance, Navigation and Control Conference and Exhibit, Honolulu, HI, USA, 18–21 August 2008; p. 6521. [Google Scholar]
  25. Chen, H.; Lu, P. Real-time identification and avoidance of simultaneous static and dynamic obstacles on point cloud for UAVs navigation. Robot. Auton. Syst. 2022, 154, 104124. [Google Scholar] [CrossRef]
  26. Lim, C.; Li, B.; Ng, E.M.; Liu, X.; Low, K.H. Three-dimensional (3D) dynamic obstacle perception in a detect-and-avoid framework for unmanned aerial vehicles. In Proceedings of the 2019 International Conference on Unmanned Aircraft Systems (ICUAS), Atlanta, GA, USA, 11–14 June 2019; IEEE: Piscataway, NJ, USA, 2019; pp. 996–1004. [Google Scholar]
  27. Saunders, J.; Call, B.; Curtis, A.; Beard, R.; McLain, T. Static and dynamic obstacle avoidance in miniature air vehicles. In Proceedings of the Infotech@ Aerospace, Arlington, VA, USA, 26–29 September 2005; p. 6950. [Google Scholar]
  28. Aldao, E.; González-deSantos, L.M.; Michinel, H.; Higinio, G.-J. UAV obstacle avoidance algorithm to navigate in dynamic building environments. Drones 2022, 6, 16. [Google Scholar] [CrossRef]
  29. Sandino, J.; Vanegas, F.; Maire, F.; Caccetta, P.; Sanderson, C.; Gonzalez, F. UAV framework for autonomous onboard navigation and people/object detection in cluttered indoor environments. Remote Sens. 2020, 12, 3386. [Google Scholar] [CrossRef]
  30. Govindaraju, V.; Leng, G.; Qian, Z. Visibility-based UAV path planning for surveillance in cluttered environments. In Proceedings of the 2014 IEEE International Symposium on Safety, Security, Rescue Robotics, Toyako, Japan, 27–30 October 2014; IEEE: Piscataway, NJ, USA, 2014; pp. 1–6. [Google Scholar]
  31. Gageik, N.; Benz, P.; Montenegro, S. Obstacle detection and collision avoidance for a UAV with complementary low-cost sensors. IEEE Access 2015, 3, 599–609. [Google Scholar] [CrossRef]
  32. Stambler, A.; Spiker, S.; Bergerman, M.; Singh, S. Toward autonomous rotorcraft flight in degraded visual environments: Experiments and lessons learned. In Proceedings of the Degraded Visual Environments: Enhanced, Synthetic, and External Vision Solutions, Baltimore, MD, USA, 19–20 April 2016; SPIE: Bellingham, WA, USA, 2016; Volume 9839, pp. 19–30. [Google Scholar]
  33. Sieberth, T.; Wackrow, R.; Chandler, J.H. Automatic detection of blurred images in UAV image sets. ISPRS J. Photogramm. Remote Sens. 2016, 122, 1–16. [Google Scholar] [CrossRef]
  34. Mao, Y.; Chen, M.; Wei, X.; Chen, B. Obstacle recognition and avoidance for UAVs under resource-constrained environments. IEEE Access 2020, 8, 169408–169422. [Google Scholar] [CrossRef]
  35. Jaffari, R.; Hashmani, M.A.; Reyes-Aldasoro, C.C.; Aziz, N.; Rizvi, S.S.H. Deep learning object detection techniques for thin objects in computer vision: An experimental investigation. In Proceedings of the 2021 7th International Conference on Control, Automation and Robotics (ICCAR), Singapore, 23–26 April 2021; IEEE: Piscataway, NJ, USA, 2021; pp. 295–302. [Google Scholar]
  36. Baykara, H.C.; Bıyık, E.; Gül, G.; Onural, D.; Öztürk, A.S.; Yıldız, I. Tracking and classification of multiple moving objects in UAV videos. In Proceedings of the 2017 IEEE 29th International Conference on Tools with Artificial Intelligence (ICTAI), Boston, MA, USA, 6–8 November 2017; IEEE: Piscataway, NJ, USA, 2017; pp. 945–950. [Google Scholar]
  37. Petrlík, M.; Báča, T.; Heřt, D.; Vrba, M.; Krajník, T.; Saska, M. A robust UAV system for operations in a constrained environment. IEEE Robot. Autom. Lett. 2020, 5, 2169–2176. [Google Scholar] [CrossRef]
  38. Bello, A.B.; Navarro, F.; Raposo, J.; Miranda, M.; Zazo, A.; Álvarez, M. Fixed-wing UAV flight operation under harsh weather conditions: A case study in Livingston island glaciers, Antarctica. Drones 2022, 6, 384. [Google Scholar] [CrossRef]
  39. Oktay, T.; Celik, H.; Turkmen, I. Maximizing autonomous performance of fixed-wing unmanned aerial vehicle to reduce motion blur in taken images. Proc. Inst. Mech. Eng. Part I J. Syst. Control Eng. 2018, 232, 857–868. [Google Scholar] [CrossRef]
  40. Eom, S.; Lee, H.; Park, J.; Lee, I. UAV-aided wireless communication designs with propulsion energy limitations. IEEE Trans. Veh. Technol. 2019, 69, 651–662. [Google Scholar] [CrossRef]
  41. Ben Miled, M.; Zeng, Q.; Liu, Y. Discussion on event-based cameras for dynamic obstacles recognition and detection for UAVs in outdoor environments. In UKRAS22 Conference “Robotics for Unconstrained Environments” Proceedings; EPSRC UK-RAS Network: Manchester, UK, 2022. [Google Scholar]
  42. Gallego, G.; Delbruck, T.; Orchard, G.; Bartolozzi, C.; Taba, B.; Censi, A.; Leutenegger, S.; Davison, A.; Conradt, J.; Daniilidis, K.; et al. Event-based Vision: A Survey. IEEE Trans. Pattern Anal. Mach. Intell. 2020, 43, 169–190. [Google Scholar] [CrossRef]
  43. Dinaux, R.M. Obstacle Detection and Avoidance onboard an MAV using a Monocular Event-Based Camera. Master’s Thesis, Delft University of Technology, Delft, The Netherlands, 2021. [Google Scholar]
  44. Moffatt, A.; Platt, E.; Mondragon, B.; Kwok, A.; Uryeu, D.; Bhandari, S. Obstacle detection and avoidance system for small UAVs using a LiDAR. In Proceedings of the 2020 International Conference on Unmanned Aircraft Systems (ICUAS), Athens, Greece, 1–4 September 2020; IEEE: Piscataway, NJ, USA, 2020; pp. 633–640. [Google Scholar]
  45. Aldao, E.; Santos, L.M.G.-D.; González-Jorge, H. LiDAR based detect and avoid system for UAV navigation in UAM corridors. Drones 2022, 6, 185. [Google Scholar] [CrossRef]
  46. Zheng, L.; Zhang, P.; Tan, J.; Li, F. The obstacle detection method of UAV based on 2D LiDAR. IEEE Access 2019, 7, 163437–163448. [Google Scholar] [CrossRef]
  47. Pao, W.Y.; Howorth, J.; Li, L.; Agelin-Chaab, M.; Roy, L.; Knutzen, J.; Baltazar-y-Jimenez, A.; Muenker, K. Investigation of automotive LiDAR vision in rain from material and optical perspectives. Sensors 2024, 24, 2997. [Google Scholar] [CrossRef] [PubMed]
  48. Forlenza, L.; Fasano, G.; Accardo, D.; Moccia, A.; Rispoli, A. Integrated obstacle detection system based on radar and optical sensors. In Proceedings of the AIAA Infotech@ Aerospace 2010, Atlanta, GA, USA, 20–22 April 2010; AIAA 2010-3421. [Google Scholar]
  49. Papa, U.; Del Core, G.; Giordano, G.; Ponte, S. Obstacle detection and ranging sensor integration for a small unmanned aircraft system. In Proceedings of the 2017 IEEE International Workshop on Metrology for AeroSpace (MetroAeroSpace), Padua, Italy, 21–23 June 2017; IEEE: Piscataway, NJ, USA, 2017; pp. 571–577. [Google Scholar]
  50. Poitevin, P.; Pelletier, M.; Lamontagne, P. Challenges in detecting UAS with radar. In Proceedings of the 2017 International Carnahan Conference on Security Technology (ICCST), Madrid, Spain, 23–26 October 2017; IEEE: Piscataway, NJ, USA, 2017; pp. 1–6. [Google Scholar]
  51. Gibbs, G.; Jia, H.; Madani, I. Obstacle detection with ultrasonic sensors and signal analysis metrics. Transp. Res. Procedia 2017, 28, 173–182. [Google Scholar] [CrossRef]
  52. Cazzato, D.; Cimarelli, C.; Sanchez-Lopez, J.L.; Voos, H.; Leo, M. A survey of computer vision methods for 2d object detection from unmanned aerial vehicles. J. Imaging 2020, 6, 78. [Google Scholar] [CrossRef] [PubMed]
  53. Vrba, M.; Walter, V.; Pritzl, V.; Pliska, M.; Báča, T.; Spurný, V.; Heřt, D.; Saska, M. On onboard LiDAR-based flying object detection. IEEE Trans. Robot. 2024, 41, 593–611. [Google Scholar] [CrossRef]
  54. Moses, A.; Rutherford, M.J.; Valavanis, K.P. Radar-based detection and identification for miniature air vehicles. In Proceedings of the 2011 IEEE International Conference on Control Applications (CCA), Denver, CO, USA, 28–30 September 2011; IEEE: Piscataway, NJ, USA, 2011; pp. 933–940. [Google Scholar]
  55. Papa, U.; Core, G.D. Design of sonar sensor model for safe landing of an UAV. In Proceedings of the 2015 IEEE Metrology for Aerospace (MetroAeroSpace), Benevento, Italy, 4–5 June 2015; IEEE: Piscataway, NJ, USA, 2015; pp. 346–350. [Google Scholar]
  56. Vera-Yanez, D.; Pereira, A.; Rodrigues, N.; Molina, J.P.; García, A.S.; Fernández-Caballero, A. Optical flow-based obstacle detection for mid-air collision avoidance. Sensors 2024, 24, 3016. [Google Scholar] [CrossRef]
  57. Thomas, B. Optical flow: Traditional approaches. In Computer Vision: A Reference Guide; Springer: Berlin/Heidelberg, Germany, 2021; pp. 921–925. [Google Scholar]
  58. Braillon, C.; Pradalier, C.; Crowley, J.L.; Laugier, C. Real-time moving obstacle detection using optical flow models. In Proceedings of the 2006 IEEE Intelligent Vehicles Symposium, Meguro-Ku, Japan, 13–15 June 2006; IEEE: Piscataway, NJ, USA, 2006; pp. 466–471. [Google Scholar]
  59. Collins, B.M.; Kornhauser, A.L. Stereo Vision for Obstacle Detection in Autonomous Navigation; DARPA Grand Challenge; Princeton University Technical Paper; Princeton University: Princeton, NJ, USA, 2006; pp. 255–264. [Google Scholar]
  60. Bernini, N.; Bertozzi, M.; Castangia, L.; Patander, M.; Sabbatelli, M. Real-time obstacle detection using stereo vision for autonomous ground vehicles: A survey. In Proceedings of the 17th International IEEE Conference on Intelligent Transportation Systems (ITSC), Qingdao, China, 8–11 October 2014; IEEE: Piscataway, NJ, USA, 2014; pp. 873–878. [Google Scholar]
  61. Riabko, A.; Averyanova, Y. Comparative analysis of SIFT and SURF methods for local feature detection in satellite imagery. In Proceedings of the CEUR Workshop Proceedings, CMSE 2024, Kyiv, Ukraine, 17 June 2024; Volume 3732, pp. 21–31. [Google Scholar]
  62. Hussey, T.B. Surface Defect Detection in Aircraft Skin & Visual Navigation based on Forced Feature Selection through Segmentation. 2021. Available online: https://apps.dtic.mil/sti/trecms/pdf/AD1132718.pdf (accessed on 16 June 2025).
  63. Tsapparellas, K.; Jelev, N.; Waters, J.; Brunswicker, S.; Mihaylova, L.S. Vision-based runway detection and landing for unmanned aerial vehicle enhanced autonomy. In Proceedings of the 2023 IEEE International Conference on Mechatronics and Automation (ICMA), Harbin, China, 6–9 August 2023; IEEE: Piscataway, NJ, USA, 2023; pp. 239–246. [Google Scholar]
  64. Kalsotra, R.; Arora, S. Background subtraction for moving object detection: Explorations of recent developments and challenges. Vis. Comput. 2022, 38, 4151–4178. [Google Scholar] [CrossRef]
  65. Bilous, N.; Malko, V.; Frohme, M.; Nechyporenko, A. Comparison of CNN-Based Architectures for Detection of Different Object Classes. AI 2024, 5, 113. [Google Scholar] [CrossRef]
  66. Tian, Y. Effective Image Enhancement and Fast Object Detection for Improved UAV Applications. Master’s Thesis, University of Strathclyde, Glasgow, UK, 2023. [Google Scholar]
  67. Kim, J.; Cho, J. RGDiNet: Efficient onboard object detection with Faster R-CNN for air-to-ground surveillance. Sensors 2021, 21, 1677. [Google Scholar] [CrossRef]
  68. Ramesh, G.; Jeswin, Y.; Divith, R.R.; Suhaag, B.R.; Kiran Raj, K.M. Real Time Object Detection and Tracking Using SSD Mobilenetv2 on Jetbot GPU. In Proceedings of the 2024 IEEE International Conference on Distributed Computing, VLSI, Electrical Circuits and Robotics (DISCOVER), Mangalore, India, 18–19 October 2024; IEEE: Piscataway, NJ, USA, 2024; pp. 255–260. [Google Scholar]
  69. Turay, T.; Vladimirova, T. Toward performing image classification and object detection with convolutional neural networks in autonomous driving systems: A survey. IEEE Access 2022, 10, 14076–14119. [Google Scholar] [CrossRef]
  70. Liu, S.; Li, Y.; Qu, J.; Wu, R. Airport UAV and birds detection based on deformable DETR. J. Phys. Conf. Ser. 2022, 2253, 012024. [Google Scholar]
  71. Şengül, F.; Adem, K. Detection of Military Aircraft Using YOLO and Transformer-Based Object Detection Models in Complex Environments. Bilişim Teknol. Derg. 2025, 18, 85–97. [Google Scholar] [CrossRef]
  72. Zheng, X.; Liu, Y.; Lu, Y.; Hua, T.; Pan, T.; Zhang, W.; Tao, D.; Wang, L. Deep learning for event-based vision: A comprehensive survey and benchmarks. arXiv 2023, arXiv:2302.08890. [Google Scholar]
  73. Sun, H. Moving Objects Detection and Tracking Using Hybrid Event-Based and Frame-Based Vision for Autonomous Driving. Ph.D. Dissertation, École centrale de Nantes, Nantes, France, 2023. [Google Scholar]
  74. Thakur, A.; Mishra, S.K. An in-depth evaluation of deep learning-enabled adaptive approaches for detecting obstacles using sensor-fused data in autonomous vehicles. Eng. Appl. Artif. Intell. 2024, 133, 108550. [Google Scholar] [CrossRef]
  75. Alaba, S.; Gurbuz, A.; Ball, J. A comprehensive survey of deep learning multisensor fusion-based 3d object detection for autonomous driving: Methods, challenges, open issues, and future directions. TechRxiv 2022. [Google Scholar] [CrossRef]
  76. Zhang, J. An aircraft image detection and tracking method based on improved optical flow method. In Proceedings of the 2020 Chinese Automation Congress (CAC), Shanghai, China, 6–8 November 2020; IEEE: Piscataway, NJ, USA, 2020; pp. 2512–2516. [Google Scholar]
  77. Bota, S.; Nedevschi, S.; Konig, M. A framework for object detection, tracking and classification in urban traffic scenarios using stereovision. In Proceedings of the 2009 IEEE 5th International Conference on Intelligent Computer Communication and Processing, Cluj-Napoca, Romania, 27–29 August 2009; IEEE: Piscataway, NJ, USA, 2009; pp. 153–156. [Google Scholar]
  78. Garcia-Garcia, B.; Bouwmans, T.; Silva, A.J.R. Background subtraction in real applications: Challenges, current models and future directions. Comput. Sci. Rev. 2020, 35, 100204. [Google Scholar] [CrossRef]
  79. Azam, B.; Khan, M.J.; Bhatti, F.A.; Maud, A.R.M.; Hussain, S.F.; Hashmi, A.J.; Khurshid, K. Aircraft detection in satellite imagery using deep learning-based object detectors. Microprocess. Microsyst. 2022, 94, 104630. [Google Scholar] [CrossRef]
  80. Alganci, U.; Soydas, M.; Sertel, E. Comparative research on deep learning approaches for airplane detection from very high-resolution satellite images. Remote Sens. 2020, 12, 458. [Google Scholar] [CrossRef]
  81. Jiang, Z.P.; Wang, Z.Q.; Zhang, Y.S.; Yu, Y.; Cheng, B.B.; Zhao, L.H.; Zhang, M.W. A vehicle object detection algorithm in UAV video stream based on improved Deformable DETR. Comput. Eng. Sci. 2024, 46, 91. [Google Scholar]
  82. Mitrokhin, A.; Fermüller, C.; Parameshwara, C.; Aloimonos, Y. Event-based moving object detection and tracking. In Proceedings of the 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Madrid, Spain, 1–5 October 2018; IEEE: Piscataway, NJ, USA, 2018; pp. 1–9. [Google Scholar]
  83. Lee, S.; Seo, S.-W. Sensor fusion for aircraft detection at airport ramps using conditional random fields. IEEE Trans. Intell. Transp. Syst. 2022, 23, 18100–18112. [Google Scholar] [CrossRef]
  84. Ruz, J.J.; Pajares, G.; de la Cruz, J.M.; Arevalo, O. UAV Trajectory Planning for Static and Dynamic Environments; INTECH Open Access Publisher: London, UK, 2009. [Google Scholar]
  85. Fasano, G.; Accardo, D.; Moccia, A.; Moroney, D. Sense and avoid for unmanned aircraft systems. IEEE Aerosp. Electron. Syst. Mag. 2016, 31, 82–110. [Google Scholar] [CrossRef]
  86. Li, X.; Li, X.; Pan, H. Multi-scale vehicle detection in high-resolution aerial images with context information. IEEE Access 2020, 8, 208643–208657. [Google Scholar] [CrossRef]
  87. Hwang, S.; Lee, J.; Shin, H.; Cho, S.; Shim, D.H. Aircraft detection using deep convolutional neural network in small unmanned aircraft systems. In Proceedings of the 2018 AIAA Information Systems-AIAA Infotech@ Aerospace, Kissimmee, FL, USA, 8–12 January 2018; p. 2137. [Google Scholar]
  88. Xia, G.-S.; Bai, X.; Ding, J.; Zhu, Z.; Belongie, S.; Luo, J.; Datcu, M.; Pelillo, M.; Zhang, L. DOTA: A large-scale dataset for object detection in aerial images. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 3974–3983. [Google Scholar]
  89. Al-Absi, M.A.; Fu, R.; Kim, K.-H.; Lee, Y.-S.; Al-Absi, A.A.; Lee, H.-J. Tracking unmanned aerial vehicles based on the Kalman filter considering uncertainty and error aware. Electronics 2021, 10, 3067. [Google Scholar] [CrossRef]
  90. Wang, T.; Chen, B.; Wang, N.; Ji, Y.; Li, H.; Zhang, M. Zero-shot obstacle detection using panoramic vision in farmland. J. Field Robot. 2024, 41, 2169–2183. [Google Scholar] [CrossRef]
  91. Ming, R.; Zhou, Z.; Lyu, Z.; Luo, X.; Zi, L.; Song, C.; Zang, Y.; Liu, W.; Jiang, R. Laser tracking leader-follower automatic cooperative navigation system for UAVs. Int. J. Agric. Biol. Eng. 2022, 15, 165–176. [Google Scholar] [CrossRef]
  92. Zou, C.; Li, L.; Cai, G.; Lin, R. Fixed-point landing method for unmanned aerial vehicles using multi-sensor pattern detection. Unmanned Syst. 2024, 12, 173–182. [Google Scholar] [CrossRef]
  93. Loureiro, G.; Dias, A.; Martins, A.; Almeida, J. Emergency landing spot detection algorithm for unmanned aerial vehicles. Remote Sens. 2021, 13, 1930. [Google Scholar] [CrossRef]
  94. Abdulridha, J.; Ampatzidis, Y.; Qureshi, J.; Roberts, P. Laboratory and UAV-based identification and classification of tomato yellow leaf curl, bacterial spot, and target spot diseases in tomato utilizing hyperspectral imaging and machine learning. Remote Sens. 2020, 12, 2732. [Google Scholar] [CrossRef]
  95. Lyu, Y.; Vosselman, G.; Xia, G.S.; Yilmaz, A.; Yang, M.Y. UAVid: A semantic segmentation dataset for UAV imagery. ISPRS J. Photogramm. Remote Sens. 2020, 165, 108–119. [Google Scholar] [CrossRef]
  96. Lyu, Y.; Vosselman, G.; Xia, G.; Yilmaz, A.; Yang, M.Y. The UAVid dataset for video semantic segmentation. arXiv 2018, arXiv:1810.10438. [Google Scholar]
  97. Zeng, Y.; Duan, Q.; Chen, X.; Peng, D.; Mao, Y.; Yang, K. UAVData: A dataset for unmanned aerial vehicle detection. Soft Comput. 2021, 25, 5385–5393. [Google Scholar] [CrossRef]
  98. Cao, Y.; He, Z.; Wang, L.; Wang, W.; Yuan, Y.; Zhang, D.; Zhang, J.; Zhu, P.; Van Gool, L.; Han, J.; et al. VisDrone-DET2021: The vision meets drone object detection challenge results. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Montreal, BC, Canada, 11–17 October 2021; pp. 2847–2854. [Google Scholar]
  99. Muzammul, M.; Algarni, A.; Ghadi, Y.Y.; Assam, M. Enhancing UAV aerial image analysis: Integrating advanced SAHI techniques with real-time detection models on the VisDrone dataset. IEEE Access 2024, 12, 21621–21633. [Google Scholar] [CrossRef]
  100. Vo, N.D.; Ngo, H.G.; Le, T.; Nguyen, Q.; Doan, D.D.; Nguyen, B.; Le-Huu, D.; Le-Duy, N.; Tran, H.; Nguyen, K.D.; et al. Aerial Data Exploration: An in-Depth Study From Horizontal to Oriented Viewpoint. IEEE Access 2024, 12, 37799–37824. [Google Scholar] [CrossRef]
  101. Ding, J.; Xue, N.; Xia, G.S.; Bai, X.; Yang, W.; Yang, M.Y.; Belongie, S.; Luo, J.; Datcu, M.; Pelillo, M.; et al. Object detection in aerial images: A large-scale benchmark and challenges. IEEE Trans. Pattern Anal. Mach. Intell. 2021, 44, 7778–7796. [Google Scholar] [CrossRef]
  102. Gao, Q.; Shen, X.; Niu, W. Large-scale synthetic urban dataset for aerial scene understanding. IEEE Access 2020, 8, 42131–42140. [Google Scholar] [CrossRef]
  103. Tran, M.T.; Tran, B.V.; Vo, N.D.; Nguyen, K. Uit-dronefog: Toward high-performance object detection via high-quality aerial foggy dataset. In Proceedings of the 2021 8th NAFOSTED Conference on Information and Computer Science (NICS), Hanoi, Vietnam, 21–22 December 2021; IEEE: Piscataway, NJ, USA, 2021; pp. 290–295. [Google Scholar]
  104. Sakaridis, C.; Dai, D.; Van Gool, L. Semantic foggy scene understanding with synthetic data. Int. J. Comput. Vis. 2018, 126, 973–992. [Google Scholar] [CrossRef]
  105. Binas, J.; Neil, D.; Liu, S.C.; Delbruck, T. DDD17: End-to-end DAVIS driving dataset. arXiv 2017, arXiv:1711.01458. [Google Scholar]
  106. Fang, W.; Zhang, G.; Zheng, Y.; Chen, Y. Multi-task learning for uav aerial object detection in foggy weather condition. Remote Sens. 2023, 15, 4617. [Google Scholar] [CrossRef]
  107. Henderson, P.; Ferrari, V. End-to-end training of object class detectors for mean average precision. In Proceedings of the Computer Vision–ACCV 2016: 13th Asian Conference on Computer Vision, Taipei, Taiwan, 20–24 November 2016; Revised Selected Papers, Part V 13. Springer: Berlin/Heidelberg, Germany, 2017; pp. 198–213. [Google Scholar]
  108. Sobti, A.; Arora, C.; Balakrishnan, M. Object detection in real-time systems: Going beyond precision. In Proceedings of the 2018 IEEE Winter Conference on Applications of Computer Vision (WACV), Lake Tahoe, NV, USA, 12–15 March 2018; IEEE: Piscataway, NJ, USA, 2018; pp. 1020–1028. [Google Scholar]
  109. Carrio, A.; Lin, Y.; Saripalli, S.; Campoy, P. Obstacle detection system for small UAVs using ADS-B and thermal imaging. J. Intell. Robot. Syst. 2017, 88, 583–595. [Google Scholar] [CrossRef]
  110. Ali, W. Deep Learning-Based Obstacle-Avoiding Autonomous UAV for GPS-Denied Structures. Master’s Thesis, University of Manitoba, Winnipeg, MB, Canada, 2023. [Google Scholar]
  111. Li, Y.; Li, M.; Qi, J.; Zhou, D.; Zou, Z.; Liu, K. Detection of typical obstacles in orchards based on deep convolutional neural network. Comput. Electron. Agric. 2021, 181, 105932. [Google Scholar] [CrossRef]
  112. He, Y.; Liu, Z. A feature fusion method to improve the driving obstacle detection under foggy weather. IEEE Trans. Transp. Electrif. 2021, 7, 2505–2515. [Google Scholar] [CrossRef]
  113. Falanga, D.; Kim, S.; Scaramuzza, D. How fast is too fast? The role of perception latency in high-speed sense and avoid. IEEE Robot. Autom. Lett. 2019, 4, 1884–1891. [Google Scholar] [CrossRef]
  114. Garnett, N.; Silberstein, S.; Oron, S.; Fetaya, E.; Verner, U.; Ayash, A.; Goldner, V.; Cohen, R.; Horn, K.; Levi, D. Real-time category-based and general obstacle detection for autonomous driving. In Proceedings of the IEEE International Conference on Computer Vision Workshops, Venice, Italy, 22–29 October 2017; pp. 198–205. [Google Scholar]
  115. Fang, Z.; Yang, S.; Jain, S.; Dubey, G.; Roth, S.; Maeta, S.; Nuske, S.; Zhang, Y.; Scherer, S. Robust autonomous flight in constrained and visually degraded shipboard environments. J. Field Robot. 2017, 34, 25–52. [Google Scholar] [CrossRef]
  116. Martínez-Martín, E.; del Pobil, A.P. Motion detection in static backgrounds. In Robust Motion Detection in Real-Life Scenarios; Springer: London, UK, 2012; pp. 5–42. [Google Scholar]
  117. Paek, D.-H.; Kong, S.-H.; Wijaya, K.T. K-radar: 4d radar object detection for autonomous driving in various weather conditions. Adv. Neural Inf. Process. Syst. 2022, 35, 3819–3829. [Google Scholar]
  118. Dell’Olmo, P.V.; Kuznetsov, O.; Frontoni, E.; Arnesano, M.; Napoli, C.; Randieri, C. Dataset Dependency in CNN-Based Copy-Move Forgery Detection: A Multi-Dataset Comparative Analysis. Mach. Learn. Knowl. Extr. 2025, 7, 54. [Google Scholar] [CrossRef]
  119. Randieri, C.; Perrotta, A.; Puglisi, A.; Bocci, M.G.; Napoli, C. CNN-Based Framework for Classifying COVID-19, Pneumonia, and Normal Chest X-Rays. Big Data Cogn. Comput. 2025, 9, 186. [Google Scholar] [CrossRef]
  120. Lee, J.; Wang, J.; Crandall, D.; Šabanović, S.; Fox, G. Real-time, cloud-based object detection for unmanned aerial vehicles. In Proceedings of the 2017 First IEEE International Conference on Robotic Computing (IRC), Taichung, Taiwan, 10–12 April 2017; IEEE: Piscataway, NJ, USA, 2017; pp. 36–43. [Google Scholar]
  121. Shi, C.; Lai, G.; Yu, Y.; Bellone, M.; Lippiello, V. Real-time multi-modal active vision for object detection on UAVs equipped with limited field of view LiDAR and camera. IEEE Robot. Autom. Lett. 2023, 8, 6571–6578. [Google Scholar] [CrossRef]
Figure 1. Overview of the paper.
Figure 2. Major Challenging Conditions.
Figure 3. Five key sensor technologies used in UAV systems for detecting obstacles.
Figure 4. Sample Images from the UAVid dataset [97] (UAVid dataset, https://www.kaggle.com/datasets/giavuongnguyen/modified-uavid-dataset, accessed on 28 June 2025).
Figure 5. Sample Images from the VisDrone dataset [98] (VisDrone dataset, https://www.kaggle.com/datasets/abhimanyubhowmik1/visdrone, accessed on 28 June 2025).
Figure 6. Sample Images from the DOTA dataset [88] (DOTA dataset, https://www.kaggle.com/datasets/chandlertimm/dota-data, accessed on 28 June 2025).
Figure 7. Sample Images from the Foggy Cityscapes dataset [106] (Foggy Cityscapes dataset, https://www.kaggle.com/datasets/yessicatuteja/foggy-cityscapes-image-dataset, accessed on 28 June 2025).
Figure 8. Overview of standard benchmark UAV Detection Datasets.
Figure 9. Key performance metrics used to assess object detection systems.
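To ground the metrics summarised in Figure 9, the short sketch below computes IoU and per-image precision/recall using greedy matching at a fixed IoU threshold. The helper names `iou` and `precision_recall` are illustrative; benchmark toolkits (e.g., COCO-style mAP evaluation) use a more elaborate protocol.

```python
# Minimal sketch of the detection metrics summarised in Figure 9.
# iou() and precision_recall() are illustrative helpers, not a
# reference implementation of any particular benchmark toolkit.

def iou(box_a, box_b):
    """Intersection-over-Union of two boxes given as (x1, y1, x2, y2)."""
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def precision_recall(pred_boxes, gt_boxes, iou_thr=0.5):
    """Greedy one-to-one matching of predictions to ground truth."""
    matched = set()
    tp = 0
    for p in pred_boxes:
        best, best_iou = None, 0.0
        for i, g in enumerate(gt_boxes):
            if i in matched:
                continue
            v = iou(p, g)
            if v > best_iou:
                best, best_iou = i, v
        if best is not None and best_iou >= iou_thr:
            matched.add(best)
            tp += 1
    fp = len(pred_boxes) - tp
    fn = len(gt_boxes) - tp
    return tp / (tp + fp + 1e-9), tp / (tp + fn + 1e-9)

# Example: one correct detection, one false positive, one missed object.
preds = [(10, 10, 50, 50), (200, 200, 240, 240)]
gts = [(12, 8, 48, 52), (300, 300, 340, 340)]
print(precision_recall(preds, gts))  # -> (~0.5, ~0.5)
```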
Figure 10. Key limitations of UAV obstacle and aircraft detection systems.
Table 1. Key Challenges for UAV Obstacle and Aircraft Detection in Challenging Conditions.
Challenge | Description | Impact on UAV Operation | Detection Requirements
Thin Object Detection [35] | Difficulty identifying small, low-contrast objects such as wires, cables, and branches, especially against complex backgrounds. | High risk of collisions during low-altitude or urban flights. | High-resolution sensing, edge-enhancement algorithms, adaptive thresholding.
Real-Time Tracking of Moving Objects [36] | Obstacles such as vehicles, humans, animals, and other UAVs constantly change position. | Requires fast decision-making in dynamic environments such as disaster zones or airports. | Temporal tracking, motion prediction, multi-frame data fusion.
Dense Stationary Environments [37] | Urban and forested areas present cluttered scenes with many static obstacles. | Poor detection may lead to inefficient routes or mid-air collisions. | Context-aware detection, semantic segmentation, spatial filtering.
Harsh Weather Conditions [38] | Rain, fog, smoke, and dust reduce visibility and sensor performance. | Degraded perception, increased false detections or failures. | Sensor fusion (e.g., radar + camera), robust feature extraction, weather-resilient models.
Motion Blur at High Speed [39] | Sudden movements or high-speed flight cause image distortion and blur. | Reduces detection accuracy, especially in fast missions (e.g., surveillance, delivery). | Fast-shutter sensors, motion-deblurring algorithms, event-based cameras.
Limited Onboard Computing Power [40] | UAVs are constrained in weight, power, and processing hardware. | Real-time detection becomes difficult without lightweight models. | Efficient algorithms, model pruning, hardware acceleration (e.g., TPU/FPGA).
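To illustrate the "edge-enhancement algorithms, adaptive thresholding" requirement listed for thin-object detection in Table 1, the sketch below uses OpenCV to emphasise thin, low-contrast structures such as wires. The function name `thin_structure_mask` and all parameter values are illustrative assumptions, not a published detection pipeline.

```python
# Illustrative sketch of the "edge enhancement + thresholding" idea
# listed for thin-object (wire/cable) detection in Table 1. Parameter
# values are placeholders, not tuned recommendations.
import cv2
import numpy as np

def thin_structure_mask(gray):
    """Return a binary mask emphasising thin, high-gradient structures."""
    # Contrast-limited histogram equalisation to lift low-contrast wires.
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    enhanced = clahe.apply(gray)
    # Edge response; thin wires appear as long, weak edges.
    edges = cv2.Canny(enhanced, 50, 150)
    # Close small gaps along the wire with a morphological dilation.
    kernel = np.ones((3, 3), np.uint8)
    return cv2.dilate(edges, kernel, iterations=1)

# Usage (assuming "frame.png" is a UAV camera frame):
# gray = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)
# mask = thin_structure_mask(gray)
```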
Table 2. Comparison of Commonly Used Sensors in UAV Obstacle and Aircraft Detection.
Sensor Type | Working Principle | Advantages in Adverse Conditions | Disadvantages in Adverse Conditions | Best for Challenging Conditions
RGB Camera [52] | Uses visible light; IR variants capture heat signatures and event-based variants detect per-pixel brightness changes. | High-resolution imagery at low cost; IR variants work in low light and detect heat. | Struggles in fog, rain, smoke, and low light; performance degrades further in heavy rain, fog, and smoke. | Night-time or thermal detection when paired with IR.
LiDAR [53] | Emits laser pulses and builds a 3D map of the surroundings from light return times. | True depth measurement; operates in low light. | Affected by rain, fog, and smoke. | 3D environment mapping.
Radar [54] | Emits radio waves and detects objects from the reflected signals. | Works well in rain, fog, and smoke. | Low resolution; prone to clutter in urban areas. | Long-range detection in poor visibility.
Sonar [55] | Uses sound waves and detects objects from their reflections. | Short-range detection, independent of lighting. | Limited range; affected by ambient noise. | Close-range obstacle detection.
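As a minimal illustration of how the range-based sensors in Table 2 (LiDAR, radar, sonar) can feed an avoidance decision, the sketch below checks whether any return falls inside a forward safety bubble. The body-frame convention (x pointing forward), the 60° cone, and the 5 m threshold are assumptions chosen only for the example.

```python
# Sensor-agnostic proximity check in the spirit of Table 2: given range
# returns expressed as 3D points in the body frame, flag anything inside
# a forward safety bubble. Thresholds and frame conventions are assumed.
import numpy as np

def nearest_obstacle(points_xyz, forward_cone_deg=60.0):
    """Return the distance of the closest return inside a forward cone."""
    pts = np.asarray(points_xyz, dtype=float)
    dists = np.linalg.norm(pts, axis=1)
    # Angle between each return and the body x-axis (assumed "forward").
    forward = pts[:, 0] / np.maximum(dists, 1e-9)
    in_cone = forward > np.cos(np.radians(forward_cone_deg / 2.0))
    return dists[in_cone].min() if np.any(in_cone) else np.inf

# Example: one return 2.5 m straight ahead, one far off to the side.
scan = [[2.5, 0.0, 0.0], [1.0, 8.0, 0.0]]
if nearest_obstacle(scan) < 5.0:
    print("obstacle inside 5 m safety bubble")
```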
Table 3. Comparison of Obstacle and Aircraft Detection Methods for UAVs in Challenging Conditions.
Method | Type | Strengths | Limitations in Challenging Conditions | Typical Use Cases
Optical Flow [76] | Classical | Estimates motion; useful for dynamic obstacle detection. | Sensitive to rain, fog, and motion blur; fails in poor lighting. | Motion tracking, obstacle avoidance.
Stereo Vision [77] | Classical | Depth estimation; emulates human binocular vision. | Poor performance in fog/smoke; low accuracy in textureless scenes. | 3D mapping, distance measurement.
Feature Tracking (SIFT, SURF) [61] | Classical | Effective on textured surfaces; rotation/scale invariant. | Fails under blur, low contrast, and rain/fog distortion. | Object tracking, navigation.
Background Subtraction [78] | Classical | Simple and fast for detecting moving objects. | Unreliable under lighting changes, fog, and rain; weak in dynamic scenes. | Surveillance, simple obstacle detection.
YOLO, Faster R-CNN, SSD [79,80] | Deep Learning | Real-time, high accuracy; multiple object detection. | Needs diverse training data; performance drops in unseen adverse conditions. | Aircraft and obstacle detection, real-time UAV vision.
DETR, Deformable DETR [81] | Deep Learning | Good for cluttered scenes; models long-range dependencies. | Computationally costly; limited feasibility on UAV hardware. | Cluttered-scene object detection, multi-target tracking.
Event-based Vision Networks [82] | Deep Learning | Effective under rapid motion and low light; low latency. | Needs event cameras; limited datasets and hardware support. | High-speed UAV navigation, blur-resistant detection.
Sensor Fusion (Camera + LiDAR) [83] | Deep Learning | Combines depth and texture; improves detection in fog/rain. | Complex calibration; high computational load. | Robust UAV detection in bad weather.
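To make the classical optical-flow entry in Table 3 concrete, the following sketch computes dense Farneback flow between two consecutive frames and treats regions of large flow magnitude as an obstacle cue. The parameter values and the 95th-percentile threshold are illustrative choices, not tuned recommendations.

```python
# Sketch of the classical optical-flow idea from Table 3: dense flow
# between consecutive frames, with large flow magnitude used as a cue
# for nearby or fast-moving obstacles. Parameters are illustrative.
import cv2
import numpy as np

def flow_magnitude(prev_gray, curr_gray):
    flow = cv2.calcOpticalFlowFarneback(
        prev_gray, curr_gray, None,
        pyr_scale=0.5, levels=3, winsize=15,
        iterations=3, poly_n=5, poly_sigma=1.2, flags=0)
    return np.linalg.norm(flow, axis=2)

# Usage with two consecutive UAV frames (hypothetical file names):
# prev = cv2.imread("frame_000.png", cv2.IMREAD_GRAYSCALE)
# curr = cv2.imread("frame_001.png", cv2.IMREAD_GRAYSCALE)
# mag = flow_magnitude(prev, curr)
# obstacle_cue = mag > np.percentile(mag, 95)  # fastest-expanding regions
```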
Table 4. Comparison of Obstacle Detection and Aircraft Detection for UAVs in Challenging Conditions.
Detection Type | Sensors Used | Algorithms/Models | Limitations in Challenging Conditions | Typical Use Cases
Obstacle Detection | LiDAR, RGB/stereo cameras, ultrasonic, event cameras | SLAM, YOLO, optical flow, semantic segmentation [76,77,79] | Sensitive to fog, rain, thin objects, motion blur; short-range limitations | Urban navigation, low-altitude flight, indoor mapping, precision landing
Aircraft Detection | Radar, infrared, thermal, long-range vision, ADS-B | Kalman filter, tracking networks, CNN/DETR models [90] | Issues with long-range accuracy, small-object visibility, dynamic flight paths | Airspace monitoring, collision avoidance, multi-UAV coordination
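The aircraft-detection row of Table 4 lists Kalman filtering for tracking; the sketch below shows a minimal constant-velocity Kalman filter over noisy (x, y) detections. The time step and noise covariances are illustrative assumptions, not values from any cited system.

```python
# Minimal constant-velocity Kalman filter of the kind listed for aircraft
# tracking in Table 4. State is (x, y, vx, vy); measurements are noisy
# (x, y) detections. Noise magnitudes are illustrative only.
import numpy as np

dt = 0.1                                     # frame interval [s]
F = np.array([[1, 0, dt, 0],
              [0, 1, 0, dt],
              [0, 0, 1,  0],
              [0, 0, 0,  1]], dtype=float)   # state transition
H = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0]], dtype=float)    # only position is observed
Q = np.eye(4) * 0.01                         # process noise
R = np.eye(2) * 1.0                          # measurement noise

x = np.zeros(4)          # initial state
P = np.eye(4) * 10.0     # initial covariance

def kf_step(x, P, z):
    # Predict.
    x = F @ x
    P = F @ P @ F.T + Q
    # Update with measurement z = (x, y).
    y = z - H @ x
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ y
    P = (np.eye(4) - K @ H) @ P
    return x, P

for z in [np.array([1.0, 0.5]), np.array([1.2, 0.6]), np.array([1.4, 0.7])]:
    x, P = kf_step(x, P, z)
print(x[:2], x[2:])   # filtered position and velocity estimate
```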
Table 5. Summary of Common UAV Detection Datasets.
Dataset | Environment | Weather/Lighting | Type | Annotations | Key Applications
UAVid [97] | Urban (real-world) | Mostly clear, daylight | Real-world | Semantic segmentation | Urban scene understanding, object detection
VisDrone [98] | Urban, crowded scenes | Varies (clear, overcast, etc.) | Real-world | Bounding boxes, object class, tracking | Object detection, tracking, crowd analysis
DOTA [88] | Aerial/satellite images | Clear (daylight scenes) | Real-world | Oriented bounding boxes | Aircraft/vehicle detection from aerial views
Foggy Cityscapes [106] | Urban (synthetic fog) | Foggy conditions (simulated) | Synthetic | Pixel-level segmentation | Adverse weather evaluation, semantic segmentation
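As a small usage example for the benchmarks in Table 5, the sketch below parses VisDrone-style detection annotations. It assumes the commonly documented line layout x,y,w,h,score,category,truncation,occlusion; verify this against the specific dataset release before relying on it.

```python
# Hedged sketch of reading VisDrone-DET style annotations (Table 5).
# The assumed line layout is
#   x,y,w,h,score,category,truncation,occlusion
# which matches the commonly published VisDrone format; check it against
# the dataset version you actually download.
from pathlib import Path

def load_visdrone_annotations(txt_path):
    boxes = []
    for line in Path(txt_path).read_text().strip().splitlines():
        fields = [int(v) for v in line.split(",")[:8]]
        x, y, w, h, score, category, truncation, occlusion = fields
        boxes.append({"bbox": (x, y, x + w, y + h),
                      "category": category,
                      "occluded": occlusion > 0})
    return boxes

# Hypothetical file name, for illustration only:
# boxes = load_visdrone_annotations("annotations/0000001_00000_d_0000001.txt")
```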
Table 6. Comparative Analysis of Detection Methods.
Method | Sensors Used | Conditions Handled | Strengths | Weaknesses
Optical Flow [76] | RGB Camera | Limited fog, moderate motion blur | Simple, lightweight | Fails in poor lighting, sensitive to noise
Stereo Vision [77] | Dual RGB Cameras | Good in daylight, not in fog/rain/night | Depth estimation | High computation, unreliable in bad weather
Feature Tracking (SIFT, SURF) [61] | RGB Camera | Clear, moderately cluttered scenes | Robust to scale/rotation | Fails in low-texture scenes, fog, smoke
Motion Detection [116] | RGB Camera | Static backgrounds, low clutter | Fast, low resource use | Not effective under UAV motion/dynamic scenes
YOLO/Faster R-CNN/SSD [79,80] | RGB/IR Camera | Some fog, night with IR | High accuracy, fast inference | Needs large datasets, struggles in dense fog/rain
Deformable DETR [81] | RGB Camera | Clutter, blur, partial occlusion | Captures global context | Heavy model, higher latency
Event-based Neural Networks [82] | Event Camera | Fast motion, low light, blur | Excellent for motion blur, low latency | Expensive hardware, limited datasets
Sensor Fusion (Camera + LiDAR) [83] | RGB/IR Camera + LiDAR | Fog, clutter, occlusion | Combines depth/image, robust | Complex calibration, degraded in heavy rain
Radar-based Detection [117] | Radar (with camera) | Rain, fog, smoke | Works in harsh weather, long range | Low spatial resolution
Sonar-based Detection [55] | Sonar | Short-range, indoor/low-speed | Lightweight, noise resistant | Limited range/accuracy
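The sensor-fusion row of Table 6 hinges on projecting LiDAR points into the camera image so depth can be attached to visual detections; the sketch below shows that geometric step. The intrinsic matrix K and the extrinsics R, t are placeholder values standing in for a real camera/LiDAR calibration.

```python
# Sketch of the geometric core of camera + LiDAR fusion (Table 6):
# project LiDAR points into the image so their depths can be attached
# to camera detections. K, R and t are illustrative calibration values;
# real systems obtain them from intrinsic/extrinsic calibration.
import numpy as np

K = np.array([[800, 0, 320],
              [0, 800, 240],
              [0,   0,   1]], dtype=float)   # pinhole intrinsics
R = np.eye(3)                                 # LiDAR -> camera rotation
t = np.array([0.0, 0.0, 0.1])                 # LiDAR -> camera translation [m]

def project_lidar_to_image(points_lidar):
    """Return pixel coordinates and depths for points in front of the camera."""
    pts_cam = (R @ np.asarray(points_lidar, dtype=float).T).T + t
    in_front = pts_cam[:, 2] > 0.1
    pts_cam = pts_cam[in_front]
    uvw = (K @ pts_cam.T).T
    uv = uvw[:, :2] / uvw[:, 2:3]
    return uv, pts_cam[:, 2]

uv, depth = project_lidar_to_image([[1.0, 0.0, 10.0], [-2.0, 1.0, 15.0]])
print(uv, depth)
```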
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
