Search Results (78)

Search Parameters:
Keywords = UAV-based rescue system

22 pages, 6496 KiB  
Article
Real-Time Search and Rescue with Drones: A Deep Learning Approach for Small-Object Detection Based on YOLO
by Francesco Ciccone and Alessandro Ceruti
Drones 2025, 9(8), 514; https://doi.org/10.3390/drones9080514 - 22 Jul 2025
Viewed by 53
Abstract
Unmanned aerial vehicles are increasingly used in civil Search and Rescue operations due to their rapid deployment and wide-area coverage capabilities. However, detecting missing persons from aerial imagery remains challenging due to small object sizes, cluttered backgrounds, and limited onboard computational resources, especially when managed by civil agencies. In this work, we present a comprehensive methodology for optimizing YOLO-based object detection models for real-time Search and Rescue scenarios. A two-stage transfer learning strategy was employed using VisDrone for general aerial object detection and Heridal for Search and Rescue-specific fine-tuning. We explored various architectural modifications, including enhanced feature fusion (FPN, BiFPN, PB-FPN), additional detection heads (P2), and modules such as CBAM, Transformers, and deconvolution, analyzing their impact on performance and computational efficiency. The best-performing configuration (YOLOv5s-PBfpn-Deconv) achieved a mAP@50 of 0.802 on the Heridal dataset while maintaining real-time inference on embedded hardware (Jetson Nano). Further tests at different flight altitudes and explainability analyses using EigenCAM confirmed the robustness and interpretability of the model in real-world conditions. The proposed solution offers a viable framework for deploying lightweight, interpretable AI systems for UAV-based Search and Rescue operations managed by civil protection authorities. Limitations and future directions include the integration of multimodal sensors and adaptation to broader environmental conditions. Full article
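The two-stage transfer-learning recipe described in the abstract (general aerial pretraining on VisDrone, then SAR-specific fine-tuning on Heridal) can be sketched with the Ultralytics training API; this is only an illustrative stand-in for the authors' pipeline, and the weight file, dataset YAML names, and hyperparameters below are assumptions.

```python
# Minimal sketch of a two-stage transfer-learning workflow, assuming the
# Ultralytics API as a stand-in for the paper's YOLOv5-based pipeline.
from ultralytics import YOLO

# Stage 1: general aerial object detection on VisDrone.
model = YOLO("yolov5su.pt")  # Ultralytics YOLOv5-small variant, standing in for YOLOv5s
model.train(data="VisDrone.yaml", epochs=100, imgsz=1280)

# Stage 2: SAR-specific fine-tuning on Heridal with a lower learning rate.
# "heridal.yaml" is a placeholder dataset config, not an official file.
model.train(data="heridal.yaml", epochs=50, imgsz=1280, lr0=0.001)

# Export for embedded inference (e.g., TensorRT on a Jetson-class device).
model.export(format="engine", half=True)
```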

19 pages, 3520 KiB  
Article
Vision-Guided Maritime UAV Rescue System with Optimized GPS Path Planning and Dual-Target Tracking
by Suli Wang, Yang Zhao, Chang Zhou, Xiaodong Ma, Zijun Jiao, Zesheng Zhou, Xiaolu Liu, Tianhai Peng and Changxing Shao
Drones 2025, 9(7), 502; https://doi.org/10.3390/drones9070502 - 16 Jul 2025
Viewed by 367
Abstract
With the global increase in maritime activities, the frequency of maritime accidents has risen, underscoring the urgent need for faster and more efficient search and rescue (SAR) solutions. This study presents an intelligent unmanned aerial vehicle (UAV)-based maritime rescue system that combines GPS-driven dynamic path planning with vision-based dual-target detection and tracking. Developed within the Gazebo simulation environment and built on a modular ROS architecture, the system supports stable takeoff and smooth transitions between multi-rotor and fixed-wing flight modes. An external command module enables real-time waypoint updates. This study proposes three path-planning schemes based on the characteristics of drones. Comparative experiments have demonstrated that the triangular path is the optimal route; compared with the other schemes, it reduces the flight distance by 30–40%. Robust target recognition is achieved using a darknet-ROS implementation of the YOLOv4 model, enhanced with data augmentation to improve performance in complex maritime conditions. A monocular vision-based ranging algorithm ensures accurate distance estimation and continuous tracking of rescue vessels. Furthermore, a dual-target-tracking algorithm—integrating motion prediction with color-based landing zone recognition—achieves a 96% success rate in precision landings under dynamic conditions. Experimental results show a 4% increase in the overall mission success rate compared to traditional SAR methods, along with significant gains in responsiveness and reliability. This research delivers a technically innovative and cost-effective UAV solution, offering strong potential for real-world maritime emergency response applications. Full article
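The abstract does not give the ranging formula, but monocular distance estimation is commonly done with a pinhole-camera model in which range scales with a known target dimension divided by its apparent pixel size; the sketch below illustrates that assumption and is not the authors' algorithm.

```python
def estimate_distance(focal_length_px: float,
                      known_width_m: float,
                      bbox_width_px: float) -> float:
    """Pinhole-camera range estimate: distance = f * W_real / w_pixels."""
    if bbox_width_px <= 0:
        raise ValueError("bounding box width must be positive")
    return focal_length_px * known_width_m / bbox_width_px

# Example: a 6 m rescue vessel spanning 120 px with an 800 px focal length
# is roughly 40 m away.
print(estimate_distance(800.0, 6.0, 120.0))  # -> 40.0
```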

26 pages, 14110 KiB  
Article
Gemini: A Cascaded Dual-Agent DRL Framework for Task Chain Planning in UAV-UGV Collaborative Disaster Rescue
by Mengxuan Wen, Yunxiao Guo, Changhao Qiu, Bangbang Ren, Mengmeng Zhang and Xueshan Luo
Drones 2025, 9(7), 492; https://doi.org/10.3390/drones9070492 - 11 Jul 2025
Viewed by 430
Abstract
In recent years, UAV (unmanned aerial vehicle)-UGV (unmanned ground vehicle) collaborative systems have played a crucial role in emergency disaster rescue. To improve rescue efficiency, heterogeneous network and task chain methods have been introduced so that collaborative systems can develop rescue sequences within a short time. However, current methods overlook resource overload on heterogeneous units and limit planning to a single task chain in cross-platform rescue scenarios, resulting in low robustness and limited flexibility. To this end, this paper proposes Gemini, a cascaded dual-agent deep reinforcement learning (DRL) framework based on the Heterogeneous Service Network (HSN) for planning multiple task chains in UAV-UGV collaboration. Specifically, the framework comprises a chain selection agent and a resource allocation agent: the chain selection agent plans paths for task chains, and the resource allocation agent distributes platform loads along the generated paths. For each mission, a well-trained Gemini not only allocates resources in a load-balanced manner but also plans multiple task chains simultaneously, which enhances robustness in cross-platform rescue. Simulation results show that Gemini can increase rescue effectiveness by approximately 60% and improve load balancing by approximately 80% compared to the baseline algorithm. Additionally, Gemini’s performance is stable and better than the baseline across various disaster scenarios, which verifies its generalization ability. Full article
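A minimal structural sketch of a cascaded dual-agent decision step is given below: a chain-selection stage produces task chain paths and a resource-allocation stage distributes load along them. The policies here are trivial placeholders (random choice, even split), not the trained DRL agents or the HSN model from the paper.

```python
import random

class ChainSelectionAgent:
    """Placeholder policy: picks relay nodes for each task chain path."""
    def plan_chains(self, mission, network):
        # In the paper this is a trained DRL policy over the Heterogeneous
        # Service Network; here we just pick random feasible relays.
        return [[mission["source"], random.choice(network["relays"]), mission["sink"]]
                for _ in range(mission["num_chains"])]

class ResourceAllocationAgent:
    """Placeholder policy: spreads load evenly over the planned chains."""
    def allocate(self, chains, total_load):
        per_chain = total_load / len(chains)
        return {tuple(chain): per_chain for chain in chains}

# Cascaded inference: chain paths first, then load balancing along them.
network = {"relays": ["uav_1", "uav_2", "ugv_1"]}
mission = {"source": "sensor_uav", "sink": "rescue_ugv", "num_chains": 2}
chains = ChainSelectionAgent().plan_chains(mission, network)
loads = ResourceAllocationAgent().allocate(chains, total_load=100.0)
print(chains, loads)
```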

28 pages, 47806 KiB  
Article
Experimental Validation of UAV Search and Detection System in Real Wilderness Environment
by Stella Dumenčić, Luka Lanča, Karlo Jakac and Stefan Ivić
Drones 2025, 9(7), 473; https://doi.org/10.3390/drones9070473 - 3 Jul 2025
Cited by 1 | Viewed by 280
Abstract
Search and rescue (SAR) missions require reliable search methods to locate survivors, especially in challenging environments. Introducing unmanned aerial vehicles (UAVs) can enhance the efficiency of SAR missions while simultaneously increasing the safety of everyone involved. Motivated by this, we experiment with autonomous UAV search for humans in a Mediterranean karst environment. The UAVs are directed using the Heat equation-driven area coverage (HEDAC) ergodic control method based on a known probability density and detection function. The sensing framework consists of a probabilistic search model, a motion control system, and object detection, which together enable calculation of the target’s detection probability. This paper focuses on the experimental validation of the proposed sensing framework. A uniform probability density, achieved by assigning suitable tasks to 78 volunteers, ensures an even probability of finding targets. The detection model is based on the You Only Look Once (YOLO) model trained on a previously collected orthophoto image database. The experimental search is carefully planned and conducted while recording as many parameters as possible. The thorough analysis includes the motion control system, object detection, and search validation. The assessment of the detection and search performance strongly indicates that the detection model in the UAV control algorithm is aligned with real-world results. Full article
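The abstract mentions a known probability density and a detection function; one common way to combine the two during a search is a Bayesian non-detection update that down-weights already covered cells. The sketch below illustrates that generic update, which may differ from the authors' exact HEDAC formulation.

```python
import numpy as np

def non_detection_update(prior: np.ndarray, detection_prob: np.ndarray) -> np.ndarray:
    """Down-weight cells already covered by the sensor and renormalize.

    prior          : grid of target-presence probabilities (sums to 1)
    detection_prob : per-cell probability that the sensor would have seen
                     the target while covering that cell on this pass
    """
    posterior = prior * (1.0 - detection_prob)
    return posterior / posterior.sum()

prior = np.full((4, 4), 1.0 / 16)   # uniform density, as in the experiment
coverage = np.zeros((4, 4))
coverage[1:3, 1:3] = 0.8            # area just overflown with 80% detection
print(non_detection_update(prior, coverage).round(3))
```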

20 pages, 741 KiB  
Article
Long-Endurance Collaborative Search and Rescue Based on Maritime Unmanned Systems and Deep-Reinforcement Learning
by Pengyan Dong, Jiahong Liu, Hang Tao, Yang Zhao, Zhijie Feng and Hanjiang Luo
Sensors 2025, 25(13), 4025; https://doi.org/10.3390/s25134025 - 27 Jun 2025
Viewed by 282
Abstract
Maritime vision sensing can be applied to maritime unmanned systems to perform search and rescue (SAR) missions in complex marine environments, as multiple unmanned aerial vehicles (UAVs) and unmanned surface vehicles (USVs) are able to conduct vision sensing through the air, on the water surface, and underwater. However, in these vision-based maritime SAR systems, collaboration between UAVs and USVs is a critical issue for successful SAR operations. To address this challenge, in this paper we propose a long-endurance collaborative SAR scheme that exploits the complementary strengths of maritime unmanned systems. In this scheme, a swarm of UAVs leverages a multi-agent reinforcement learning (MARL) method and probability maps to perform a cooperative first-phase search, exploiting the UAVs’ high altitude and wide field of view. Then, multiple USVs conduct precise real-time second-phase operations by refining the probabilistic map. To deal with the energy constraints of UAVs and perform long-endurance collaborative SAR missions, a multi-USV charging scheduling method based on MARL is proposed to prolong the UAVs’ flight time. Extensive simulation results verify the effectiveness of the proposed scheme and its long-endurance search capability. Full article
(This article belongs to the Special Issue Underwater Vision Sensing System: 2nd Edition)
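To make the multi-USV charging scheduling problem concrete, the sketch below uses a greedy rule that sends available USVs to the UAVs with the lowest remaining battery; the paper solves this with MARL, so the heuristic is only a simple baseline, and all identifiers are invented for illustration.

```python
def greedy_charging_schedule(uav_batteries: dict, usv_ids: list) -> dict:
    """Assign each available USV to the UAV with the lowest remaining battery.

    The paper schedules charging with multi-agent RL; this greedy rule is
    only a simple baseline to make the scheduling problem concrete.
    """
    needy_uavs = sorted(uav_batteries, key=uav_batteries.get)  # lowest charge first
    return {usv: uav for usv, uav in zip(usv_ids, needy_uavs)}

batteries = {"uav_a": 0.15, "uav_b": 0.60, "uav_c": 0.08}
print(greedy_charging_schedule(batteries, ["usv_1", "usv_2"]))
# -> {'usv_1': 'uav_c', 'usv_2': 'uav_a'}
```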

30 pages, 16390 KiB  
Article
Model-Based RL Decision-Making for UAVs Operating in GNSS-Denied, Degraded Visibility Conditions with Limited Sensor Capabilities
by Sebastien Boiteau, Fernando Vanegas, Julian Galvez-Serna and Felipe Gonzalez
Drones 2025, 9(6), 410; https://doi.org/10.3390/drones9060410 - 4 Jun 2025
Viewed by 1572
Abstract
Autonomy in Unmanned Aerial Vehicle (UAV) navigation has enabled applications in diverse fields such as mining, precision agriculture, and planetary exploration. However, challenging applications in complex environments complicate the interaction between the agent and its surroundings. Conditions such as the absence of a Global Navigation Satellite System (GNSS), low visibility, and cluttered environments significantly increase uncertainty levels and cause partial observability. These challenges grow when compact, low-cost, entry-level sensors are employed. This study proposes a model-based reinforcement learning (RL) approach to enable UAVs to navigate and make decisions autonomously in environments where the GNSS is unavailable and visibility is limited. Designed for search and rescue operations, the system enables UAVs to navigate cluttered indoor environments, detect targets, and avoid obstacles under low-visibility conditions. The architecture integrates onboard sensors, including a thermal camera to detect a collapsed person (the target), and a 2D LiDAR and an IMU for localization. The decision-making module employs the ABT solver for real-time policy computation. The framework presented in this work relies on low-cost, entry-level sensors, making it suitable for lightweight UAV platforms. Experimental results demonstrate high success rates in target detection and robust performance in obstacle avoidance and navigation despite uncertainties in pose estimation and detection. The framework was first assessed in simulation against a baseline algorithm and then through real-life testing across several scenarios. The proposed system represents a step forward in UAV autonomy for critical applications, with potential extensions to unknown and fully stochastic environments. Full article
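The abstract frames the problem as decision-making under partial observability, which the ABT solver handles by planning over a POMDP; the discrete Bayes belief update below illustrates the kind of bookkeeping such a formulation relies on. The state space, motion model, and detector likelihood are toy assumptions, not the paper's model.

```python
import numpy as np

def belief_update(belief, transition, likelihood):
    """One step of a discrete Bayes filter over grid cells.

    belief     : (N,) current belief over the hidden state
    transition : (N, N) motion model, transition[i, j] = P(next=j | current=i)
    likelihood : (N,) P(observation | state), e.g. from the thermal detector
    """
    predicted = belief @ transition          # prediction step
    updated = predicted * likelihood         # measurement update
    return updated / updated.sum()

belief = np.array([0.25, 0.25, 0.25, 0.25])
transition = np.array([[0.7, 0.3, 0.0, 0.0],
                       [0.0, 0.7, 0.3, 0.0],
                       [0.0, 0.0, 0.7, 0.3],
                       [0.0, 0.0, 0.0, 1.0]])
likelihood = np.array([0.1, 0.1, 0.9, 0.1])  # detector fires near cell 2
print(belief_update(belief, transition, likelihood).round(3))
```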

26 pages, 1272 KiB  
Article
Distributed Relative Pose Estimation for Multi-UAV Systems Based on Inertial Navigation and Data Link Fusion
by Kun Li, Shuhui Bu, Jiapeng Li, Zhenyv Xia, Jvboxi Wang and Xiaohan Li
Drones 2025, 9(6), 405; https://doi.org/10.3390/drones9060405 - 30 May 2025
Viewed by 585
Abstract
Accurate self-localization and mutual state estimation are essential for autonomous aerial swarm operations in cooperative exploration, target tracking, and search-and-rescue missions. However, achieving reliable formation positioning in GNSS-denied environments remains a significant challenge. This paper proposes a UAV formation positioning system that integrates inertial navigation with data link-based relative measurements to improve positioning accuracy. Each UAV independently estimates its flight state in real time from onboard IMU data through an inertial navigation fusion method. The estimated states are then transmitted to the other UAVs in the formation via a data link, which also provides relative position measurements. Upon receiving data link information, each UAV filters erroneous measurements, time-aligns them with its state estimates, and constructs a relative pose optimization factor graph for real-time state estimation. Furthermore, a data selection strategy and a sliding window algorithm are implemented to control data accumulation and mitigate inertial navigation drift. The proposed method is validated through both simulations and real-world two-UAV formation flight experiments. The experimental results demonstrate that the system achieves a 76% reduction in positioning error compared to using data link measurements alone. This approach provides a robust and reliable solution for maintaining precise relative positioning in formation flight without reliance on GNSS. Full article
(This article belongs to the Special Issue Advances in Guidance, Navigation, and Control)
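The abstract states that incoming data link measurements are time-aligned with local state estimates before optimization; the exact scheme is not given, so the sketch below assumes a common approach, linear interpolation over a timestamped state buffer.

```python
from bisect import bisect_left

def interpolate_state(state_buffer, t_meas):
    """Linearly interpolate a buffered (timestamp, position) history to the
    timestamp of an incoming data-link measurement.

    state_buffer : list of (t, [x, y, z]) sorted by time
    t_meas       : measurement timestamp to align to
    """
    times = [t for t, _ in state_buffer]
    i = bisect_left(times, t_meas)
    if i == 0:
        return state_buffer[0][1]
    if i == len(times):
        return state_buffer[-1][1]
    (t0, p0), (t1, p1) = state_buffer[i - 1], state_buffer[i]
    w = (t_meas - t0) / (t1 - t0)
    return [a + w * (b - a) for a, b in zip(p0, p1)]

buffer = [(0.00, [0.0, 0.0, 10.0]), (0.10, [1.0, 0.2, 10.1]), (0.20, [2.1, 0.5, 10.1])]
print(interpolate_state(buffer, 0.15))  # -> position halfway between the last two states
```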

21 pages, 2504 KiB  
Article
A Distributed Low-Degree-of-Freedom Aerial Target Localization Method Based on Hybrid Measurements
by Xiaoshuang Jiao, Jinming Chen, Lifeng Jiang, Weiping Li, Xiaochao Yang, Weiwei Wang and Jun Zhang
Remote Sens. 2025, 17(10), 1705; https://doi.org/10.3390/rs17101705 - 13 May 2025
Viewed by 418
Abstract
For real-time detection scenarios such as battlefield reconnaissance and surveillance, where high positioning accuracy is required and receiving station resources are limited, we propose an innovative distributed aerial target localization method with low degrees of freedom. This method is based on a hybrid measurement approach. First, a measurement model is established using the spatial geometric relationship between the distributed node network configuration and the target, with angle of arrival (AOA) and time difference of arrival (TDOA) measurements employed to estimate partial target parameters. Then, frequency difference of arrival (FDOA) measurements are utilized to enhance the accuracy of parameter estimation. Finally, using inter-node measurements, a pseudo-linear system of equations is constructed to complete the three-node aerial target localization. The method uses satellites as radiation sources to transmit signals, with unmanned aerial vehicles (UAVs) acting as receiving station nodes to capture the signals. It effectively utilizes hybrid measurement information, enabling aerial target localization with only three receiving stations. Simulation results validate the significant advantages of the proposed algorithm in enhancing localization accuracy, reducing system costs, and optimizing resource allocation. This technology not only provides an efficient and practical localization solution for battlefield reconnaissance and surveillance systems but also offers robust technical support and broad application prospects for the future development of unmanned systems, intelligent surveillance, and emergency rescue. Full article
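Only the AOA part of the hybrid AOA/TDOA/FDOA scheme is easy to illustrate compactly; the sketch below builds pseudo-linear bearing equations from three receiving stations and solves them by least squares. It is a generic textbook construction, not the paper's full estimator.

```python
import numpy as np

def aoa_least_squares(stations, bearings_rad):
    """Pseudo-linear least-squares fix from angle-of-arrival measurements.

    stations     : (N, 2) receiver positions
    bearings_rad : (N,) bearing from each receiver to the target
    Only the AOA part of the hybrid AOA/TDOA/FDOA scheme is illustrated here.
    """
    s, c = np.sin(bearings_rad), np.cos(bearings_rad)
    A = np.column_stack((s, -c))                 # sin(t)*x - cos(t)*y = ...
    b = s * stations[:, 0] - c * stations[:, 1]
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos

stations = np.array([[0.0, 0.0], [100.0, 0.0], [0.0, 100.0]])
target = np.array([40.0, 30.0])
bearings = np.arctan2(target[1] - stations[:, 1], target[0] - stations[:, 0])
print(aoa_least_squares(stations, bearings))  # -> approximately [40., 30.]
```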

24 pages, 10940 KiB  
Article
LSTM-DQN-APF Path Planning Algorithm Empowered by Twins in Complex Scenarios
by Ying Lu, Xiaodan Wang, Yang Yang, Man Ding, Shaochun Qu and Yanfang Fu
Appl. Sci. 2025, 15(8), 4565; https://doi.org/10.3390/app15084565 - 21 Apr 2025
Cited by 1 | Viewed by 566
Abstract
In response to the issues of unreachable targets, local minima, and insufficient real-time performance in drone path planning in urban low-altitude complex scenarios, this paper proposes a fusion algorithm based on a digital twin, integrating LSTM (long short-term memory), DQN (Deep Q-Network), and APF (artificial potential field). The algorithm relies on a twin system that integrates multi-sensor fusion and Kalman filtering to feed obstacle information and UAV trajectory predictions into the DQN, which outputs action decisions for intelligent obstacle avoidance. Additionally, to address the blind search problem in trajectory planning, the algorithm introduces exploration rewards and heuristic reward components and adds velocity and acceleration compensation terms to the attraction and repulsion functions, reducing the path deviation of UAVs during dynamic obstacle avoidance. Finally, to tackle the issues of insufficient training sample size and simulation accuracy, this paper leverages a digital twin platform, utilizing a dual feedback mechanism between virtual and physical environments to generate a large number of complex urban scenario samples. This approach effectively enhances the diversity and accuracy of training samples while significantly reducing the experimental costs of the algorithm. The results demonstrate that the LSTM-DQN-APF algorithm, combined with the digital twin platform, can significantly alleviate the issues of unreachable goals and local optima while improving real-time performance for UAVs operating in complex environments. Compared to traditional algorithms, it notably enhances path planning speed and obstacle avoidance success rates. After thorough training, the proposed improved algorithm can be applied to real-world UAV systems, providing reliable technical support for applications such as smart city inspections and emergency rescue operations. Full article
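The abstract says velocity and acceleration compensation terms are added to the attraction and repulsion functions; the exact potentials are not given, so the sketch below shows a conventional APF step with a simple velocity-damping term on the attraction, with gains chosen arbitrarily for illustration.

```python
import numpy as np

def apf_step(pos, vel, goal, obstacles, k_att=1.0, k_rep=100.0, k_vel=0.5, d0=5.0):
    """One artificial-potential-field step with a velocity compensation term.

    The attraction pulls toward the goal and is damped by the current velocity
    (the paper adds velocity/acceleration terms; the gains here are assumptions).
    The repulsion pushes away from obstacles inside the influence radius d0.
    """
    force = k_att * (goal - pos) - k_vel * vel
    for obs in obstacles:
        diff = pos - obs
        d = np.linalg.norm(diff)
        if 0 < d < d0:
            force += k_rep * (1.0 / d - 1.0 / d0) / d**2 * (diff / d)
    return force

pos, vel = np.array([0.0, 0.0]), np.array([1.0, 0.0])
goal = np.array([10.0, 0.0])
obstacles = [np.array([4.0, 0.5])]
print(apf_step(pos, vel, goal, obstacles))
```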

26 pages, 4783 KiB  
Article
A Hybrid Decision-Making Framework for UAV-Assisted MEC Systems: Integrating a Dynamic Adaptive Genetic Optimization Algorithm and Soft Actor–Critic Algorithm with Hierarchical Action Decomposition and Uncertainty-Quantified Critic Ensemble
by Yu Yang, Yanjun Shi, Xing Cui, Jiajian Li and Xijun Zhao
Drones 2025, 9(3), 206; https://doi.org/10.3390/drones9030206 - 13 Mar 2025
Viewed by 1087
Abstract
With the continuous progress of UAV technology and the rapid development of mobile edge computing (MEC), UAV-assisted MEC systems have shown great application potential in special fields such as disaster rescue and emergency response. However, traditional deep reinforcement learning (DRL) decision-making methods suffer from limitations such as difficulty in balancing multiple objectives and achieving training convergence when making mixed action-space decisions for UAV path planning and task offloading. This article proposes a hybrid decision framework based on an improved Dynamic Adaptive Genetic Optimization Algorithm (DAGOA) and a soft actor–critic (SAC) with hierarchical action decomposition, an uncertainty-quantified critic ensemble, and adaptive entropy temperature. DAGOA performs an effective search and optimization in the discrete action space, while SAC performs fine control and adjustment in the continuous action space. By combining the two algorithms, joint optimization of drone path planning and task offloading can be achieved, improving the overall performance of the system. The experimental results show that the framework offers significant advantages in improving system performance, reducing energy consumption, and enhancing task completion efficiency. When the system adopts the hybrid decision framework, the reward score increases by up to 153.53% compared to pure deep reinforcement learning decision-making. Moreover, it achieves an average improvement of 61.09% over various reinforcement learning algorithms such as the proposed SAC, proximal policy optimization (PPO), deep deterministic policy gradient (DDPG), and twin delayed deep deterministic policy gradient (TD3). Full article
(This article belongs to the Special Issue Unmanned Aerial Vehicles for Enhanced Emergency Response)
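A structural sketch of the hybrid decision split is given below: a discrete search chooses the task-offloading assignment and a continuous policy then adjusts the UAV motion. Both components are trivial stand-ins (random search and a proportional rule) for DAGOA and the trained SAC actor, and all identifiers are invented for illustration.

```python
import random

def discrete_search(candidates, score_fn, iterations=50):
    """Stand-in for the genetic search over task-offloading decisions:
    a simple random search that keeps the best-scoring assignment."""
    return max((random.choice(candidates) for _ in range(iterations)), key=score_fn)

def continuous_policy(offload_choice, state):
    """Stand-in for the SAC actor: returns a UAV velocity command.
    A trained network would map (state, offload_choice) to this action."""
    return [0.5 * (state["goal_x"] - state["x"]), 0.5 * (state["goal_y"] - state["y"])]

# Hybrid decision: discrete offloading first, then continuous path adjustment.
candidates = [("task_1", "uav"), ("task_1", "edge"), ("task_1", "local")]
score = {"uav": 0.7, "edge": 0.9, "local": 0.4}
choice = discrete_search(candidates, lambda c: score[c[1]])
action = continuous_policy(choice, {"x": 0.0, "y": 0.0, "goal_x": 10.0, "goal_y": 4.0})
print(choice, action)
```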

23 pages, 5215 KiB  
Article
A Feature-Enhanced Small Object Detection Algorithm Based on Attention Mechanism
by Zhe Quan and Jun Sun
Sensors 2025, 25(2), 589; https://doi.org/10.3390/s25020589 - 20 Jan 2025
Viewed by 2420
Abstract
With the rapid development of AI algorithms and computational power, object recognition based on deep learning frameworks has become a major research direction in computer vision. UAVs equipped with object detection systems are increasingly used in fields like smart transportation, disaster warning, and emergency rescue. However, due to factors such as the environment, lighting, altitude, and angle, UAV images face challenges like small object sizes, high object density, and significant background interference, making object detection tasks difficult. To address these issues, we use YOLOv8s as the basic framework and introduce a multi-level feature fusion algorithm. Additionally, we design an attention mechanism that links distant pixels to improve small object feature extraction. To address missed detections and inaccurate localization, we replace the detection head with a dynamic head, allowing the model to route objects to the appropriate head for final output. We also introduce Slideloss to improve the model’s learning of difficult samples and ShapeIoU to better account for the shape and scale of bounding boxes. Experiments on datasets like VisDrone2019 show that our method improves accuracy by nearly 10% and recall by about 11% compared to the baseline. Additionally, on the AI-TODv1.5 dataset, our method improves the mAP50 from 38.8 to 45.2. Full article
(This article belongs to the Section Remote Sensors)
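As context for the bounding-box regression changes mentioned above, the helper below computes plain IoU; ShapeIoU, as used in the paper, additionally accounts for box shape and scale, which is not reproduced here.

```python
def iou(box_a, box_b):
    """Plain IoU between two (x1, y1, x2, y2) boxes; ShapeIoU extends this
    overlap measure with shape- and scale-aware terms."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

print(iou((0, 0, 10, 10), (5, 5, 15, 15)))  # -> 25 / 175 ≈ 0.143
```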

24 pages, 9714 KiB  
Article
A Simultaneous Control, Localization, and Mapping System for UAVs in GPS-Denied Environments
by Rodrigo Munguia, Antoni Grau, Yolanda Bolea and Guillermo Obregón-Pulido
Drones 2025, 9(1), 69; https://doi.org/10.3390/drones9010069 - 18 Jan 2025
Cited by 2 | Viewed by 2023
Abstract
Unmanned Aerial Vehicles (UAVs) have gained significant attention due to their versatility in applications such as surveillance, reconnaissance, and search-and-rescue operations. In GPS-denied environments, where traditional navigation systems fail, the need for alternative solutions is critical. This paper presents a novel visual-based Simultaneous Control, Localization, and Mapping (SCLAM) system tailored for UAVs operating in GPS-denied environments. The proposed system integrates monocular-based SLAM and high-level control strategies, enabling autonomous navigation, real-time mapping, and robust localization. The experimental results demonstrate the system’s effectiveness in allowing UAVs to autonomously explore, return to a home position, and maintain consistent mapping in virtual GPS-denied scenarios. This work contributes a flexible architecture capable of addressing the challenges of autonomous UAV navigation and mapping, with potential for further development and real-world application. Full article
(This article belongs to the Collection Feature Papers of Drones Volume II)

23 pages, 22602 KiB  
Article
Enhancing Human Detection in Occlusion-Heavy Disaster Scenarios: A Visibility-Enhanced DINO (VE-DINO) Model with Reassembled Occlusion Dataset
by Zi-An Zhao, Shidan Wang, Min-Xin Chen, Ye-Jiao Mao, Andy Chi-Ho Chan, Derek Ka-Hei Lai, Duo Wai-Chi Wong and James Chung-Wai Cheung
Smart Cities 2025, 8(1), 12; https://doi.org/10.3390/smartcities8010012 - 16 Jan 2025
Cited by 2 | Viewed by 2179
Abstract
Natural disasters create complex environments where effective human detection is both critical and challenging, especially when individuals are partially occluded. While recent advancements in computer vision have improved detection capabilities, there remains a significant need for efficient solutions that can enhance search-and-rescue (SAR) operations in resource-constrained disaster scenarios. This study modified the original DINO (Detection Transformer with Improved Denoising Anchor Boxes) model and introduced the visibility-enhanced DINO (VE-DINO) model, designed for robust human detection in occlusion-heavy environments, with potential integration into SAR systems. VE-DINO enhances detection accuracy by incorporating body part keypoint information and employing a specialized loss function. The model was trained and validated using the COCO2017 dataset, with additional external testing conducted on the Disaster Occlusion Detection Dataset (DODD), which we developed by meticulously compiling relevant images from existing public datasets to represent occlusion scenarios in disaster contexts. VE-DINO achieved an average precision of 0.615 at IoU 0.50:0.90 on all bounding boxes, outperforming the original DINO model (0.491) on the testing set. The external testing of VE-DINO achieved an average precision of 0.500. An ablation study demonstrated the robustness of the model when confronted with varying degrees of body occlusion. Furthermore, to illustrate its practicality, we conducted a case study demonstrating the usability of the model when integrated into an unmanned aerial vehicle (UAV)-based SAR system, showcasing its potential in real-world scenarios. Full article
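The abstract says VE-DINO incorporates body part keypoint information through a specialized loss; the exact loss is not spelled out, so the sketch below only illustrates the general idea of weighting per-instance losses by keypoint visibility, with the weighting scheme invented for illustration.

```python
def visibility_weighted_loss(box_losses, keypoint_visibility):
    """Scale each instance's box loss by its fraction of visible keypoints.

    box_losses          : list of per-instance localization losses
    keypoint_visibility : list of 0/1 visibility flags per instance
    This mirrors the general idea of visibility-aware training; the actual
    loss used by VE-DINO is not given in the abstract.
    """
    weighted = []
    for loss, flags in zip(box_losses, keypoint_visibility):
        visible_fraction = sum(flags) / len(flags)
        weighted.append(loss * (1.0 + (1.0 - visible_fraction)))  # emphasize occluded cases
    return sum(weighted) / len(weighted)

print(visibility_weighted_loss([1.2, 0.8], [[1, 1, 1, 1], [1, 0, 0, 0]]))  # -> 1.3
```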

30 pages, 578 KiB  
Review
Recent Research Progress on Ground-to-Air Vision-Based Anti-UAV Detection and Tracking Methodologies: A Review
by Arowa Yasmeen and Ovidiu Daescu
Drones 2025, 9(1), 58; https://doi.org/10.3390/drones9010058 - 15 Jan 2025
Cited by 1 | Viewed by 2503
Abstract
Unmanned Aerial Vehicles (UAVs) are increasingly gaining popularity, and their consistent prevalence in various applications such as surveillance, search and rescue, and environmental monitoring requires the development of specialized policies for UAV traffic management. Integrating this novel aerial traffic into existing airspace frameworks presents unique challenges, particularly regarding safety and security. Consequently, there is an urgent need for robust contingency management systems, such as Anti-UAV technologies, to ensure safe air traffic. This survey paper critically examines the recent advancements in ground-to-air vision-based Anti-UAV detection and tracking methodologies, addressing the many challenges inherent in UAV detection and tracking. Our study examines recent UAV detection and tracking algorithms, outlining their operational principles, advantages, and disadvantages. Publicly available datasets specifically designed for Anti-UAV research are also thoroughly reviewed, providing insights into their characteristics and suitability. Furthermore, this survey explores the various Anti-UAV systems being developed and deployed globally, evaluating their effectiveness in facilitating the integration of small UAVs into low-altitude airspace. The study aims to provide researchers with a well-rounded understanding of the field by synthesizing current research trends, identifying key technological gaps, and highlighting promising directions for future research and development in Anti-UAV technologies. Full article
(This article belongs to the Special Issue Unmanned Traffic Management Systems)

24 pages, 9850 KiB  
Article
RTAPM: A Robust Top-View Absolute Positioning Method with Visual–Inertial Assisted Joint Optimization
by Pengfei Tong, Xuerong Yang, Xuanzhi Peng and Longfei Wang
Drones 2025, 9(1), 37; https://doi.org/10.3390/drones9010037 - 7 Jan 2025
Viewed by 1085
Abstract
In challenging environments such as disaster aid or forest rescue, unmanned aerial vehicles (UAVs) are hampered by inconsistent or even denied global navigation satellite system (GNSS) signals, which leaves them unable to operate normally. Currently, there is no UAV positioning method capable of substituting for GNSS positioning, even temporarily. This study proposes a robust top-view absolute positioning method (RTAPM) based on a monocular RGB camera that employs joint optimization and visual–inertial assistance. The proposed method uses a bird’s-eye-view monocular RGB camera to estimate the UAV’s moving position. By comparing real-time aerial images with pre-existing satellite images of the flight area and utilizing components such as template geo-registration, UAV motion constraints, point–line image matching, and joint state estimation, the method can substitute for satellite navigation and obtain short-term absolute positioning information for UAVs in challenging and dynamic environments. Tests on two open-source datasets and real-time flight experiments show that the proposed method has significant advantages in positioning accuracy and system robustness over existing UAV absolute positioning methods, and that it can temporarily replace GNSS in challenging environments such as disaster aid or forest rescue. Full article
(This article belongs to the Special Issue Autonomous Drone Navigation in GPS-Denied Environments)
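The template geo-registration component can be illustrated with OpenCV's normalized cross-correlation template matching, which locates the downward-facing frame inside a reference satellite tile; scale and rotation handling, motion constraints, and the joint state estimation from the paper are omitted.

```python
import cv2
import numpy as np

def geo_register(aerial_gray: np.ndarray, satellite_gray: np.ndarray):
    """Locate a downward-facing aerial frame inside a reference satellite tile
    using normalized cross-correlation. The paper's pipeline adds motion
    constraints, point-line matching, and joint state estimation on top of a
    registration step like this; scale/rotation handling is omitted here.
    """
    result = cv2.matchTemplate(satellite_gray, aerial_gray, cv2.TM_CCOEFF_NORMED)
    _, score, _, top_left = cv2.minMaxLoc(result)
    return top_left, score   # pixel offset of the frame in the satellite tile

# Toy example: a 64x64 crop taken out of a synthetic 512x512 "satellite" image.
satellite = np.random.randint(0, 255, (512, 512), dtype=np.uint8)
aerial = satellite[200:264, 300:364].copy()
offset, score = geo_register(aerial, satellite)
print(offset, round(score, 3))   # -> (300, 200) with a score close to 1.0
```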
