Search Results (273)

Search Parameters:
Keywords = drone surveillance

21 pages, 12997 KiB  
Article
Aerial-Ground Cross-View Vehicle Re-Identification: A Benchmark Dataset and Baseline
by Linzhi Shang, Chen Min, Juan Wang, Liang Xiao, Dawei Zhao and Yiming Nie
Remote Sens. 2025, 17(15), 2653; https://doi.org/10.3390/rs17152653 - 31 Jul 2025
Viewed by 220
Abstract
Vehicle re-identification (Re-ID) is a critical computer vision task that aims to match the same vehicle across spatially distributed cameras, especially in the context of remote sensing imagery. While prior research has primarily focused on Re-ID using remote sensing images captured from similar, typically elevated viewpoints, these settings do not fully reflect complex aerial-ground collaborative remote sensing scenarios. In this work, we introduce a novel and challenging task: aerial-ground cross-view vehicle Re-ID, which involves retrieving vehicles in ground-view image galleries using query images captured from aerial (top-down) perspectives. This task is increasingly relevant due to the integration of drone-based surveillance and ground-level monitoring in multi-source remote sensing systems, yet it poses substantial challenges due to significant appearance variations between aerial and ground views. To support this task, we present AGID (Aerial-Ground Vehicle Re-Identification), the first benchmark dataset specifically designed for aerial-ground cross-view vehicle Re-ID. AGID comprises 20,785 remote sensing images of 834 vehicle identities, collected using drones and fixed ground cameras. We further propose a novel method, Enhanced Self-Correlation Feature Computation (ESFC), which enhances spatial relationships between semantically similar regions and incorporates shape information to improve feature discrimination. Extensive experiments on the AGID dataset and three widely used vehicle Re-ID benchmarks validate the effectiveness of our method, which achieves a Rank-1 accuracy of 69.0% on AGID, surpassing state-of-the-art approaches by 2.1%.
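
The abstract does not detail ESFC itself; as a rough sketch of the underlying idea, the snippet below computes a self-correlation map over a backbone feature map and uses it to aggregate semantically similar spatial regions. All shapes and names are illustrative, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def self_correlation_reweight(feat: torch.Tensor) -> torch.Tensor:
    """Reweight a CNN feature map by the similarity between spatial positions.

    feat: (B, C, H, W) feature map from any backbone.
    Returns a same-shape map in which each position is a similarity-weighted
    aggregate of semantically similar regions.
    """
    b, c, h, w = feat.shape
    x = feat.flatten(2)                               # (B, C, H*W)
    x_norm = F.normalize(x, dim=1)                    # cosine-normalise channels
    corr = torch.bmm(x_norm.transpose(1, 2), x_norm)  # (B, HW, HW) self-correlation
    attn = corr.softmax(dim=-1)                       # soft spatial relations
    out = torch.bmm(x, attn.transpose(1, 2))          # aggregate similar regions
    return out.view(b, c, h, w)

feats = torch.randn(2, 256, 16, 16)                   # dummy backbone output
print(self_correlation_reweight(feats).shape)         # torch.Size([2, 256, 16, 16])
```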

27 pages, 405 KiB  
Article
Comparative Analysis of Centralized and Distributed Multi-UAV Task Allocation Algorithms: A Unified Evaluation Framework
by Yunze Song, Zhexuan Ma, Nuo Chen, Shenghao Zhou and Sutthiphong Srigrarom
Drones 2025, 9(8), 530; https://doi.org/10.3390/drones9080530 - 28 Jul 2025
Viewed by 337
Abstract
Unmanned aerial vehicles (UAVs), commonly known as drones, offer unprecedented flexibility for complex missions such as area surveillance, search and rescue, and cooperative inspection. This paper presents a unified evaluation framework for the comparison of centralized and distributed task allocation algorithms specifically tailored to multi-UAV operations. We first contextualize the classical assignment problem (AP) under UAV mission constraints, including the flight time, propulsion energy capacity, and communication range, and evaluate optimal one-to-one solvers including the Hungarian algorithm, the Bertsekas ϵ-auction algorithm, and a minimum cost maximum flow formulation. To reflect the dynamic, uncertain environments that UAV fleets encounter, we extend our analysis to distributed multi-UAV task allocation (MUTA) methods. In particular, we examine the consensus-based bundle algorithm (CBBA) and a distributed auction 2-opt refinement strategy, both of which iteratively negotiate task bundles across UAVs to accommodate real-time task arrivals and intermittent connectivity. Finally, we outline how reinforcement learning (RL) can be incorporated to learn adaptive policies that balance energy efficiency and mission success under varying wind conditions and obstacle fields. Through simulations incorporating UAV-specific cost models and communication topologies, we assess each algorithm's mission completion time, total energy expenditure, communication overhead, and resilience to UAV failures. Our results highlight the trade-off between strict optimality, which is suitable for small fleets in static scenarios, and scalable, robust coordination, necessary for large, dynamic multi-UAV deployments.
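
As a minimal illustration of the classical assignment problem at the heart of this comparison, the sketch below solves an optimal one-to-one UAV-task assignment with SciPy's Hungarian-style solver. The distance-based cost matrix is a stand-in for the paper's richer flight-time and energy cost models.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(0)
uav_pos = rng.uniform(0, 1000, size=(4, 2))    # 4 UAV positions (metres)
task_pos = rng.uniform(0, 1000, size=(4, 2))   # 4 task locations

# Toy cost: flight distance as a proxy for time/energy; a realistic model
# would add propulsion-energy and communication-range penalties.
cost = np.linalg.norm(uav_pos[:, None, :] - task_pos[None, :, :], axis=-1)

rows, cols = linear_sum_assignment(cost)       # optimal one-to-one assignment
for u, t in zip(rows, cols):
    print(f"UAV {u} -> task {t}  (cost {cost[u, t]:.1f} m)")
print("total cost:", round(cost[rows, cols].sum(), 1))
```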

35 pages, 2590 KiB  
Review
Advanced Chemometric Techniques for Environmental Pollution Monitoring and Assessment: A Review
by Shaikh Manirul Haque, Yunusa Umar and Abuzar Kabir
Chemosensors 2025, 13(7), 268; https://doi.org/10.3390/chemosensors13070268 - 21 Jul 2025
Viewed by 399
Abstract
Chemometrics has emerged as a powerful approach for deciphering complex environmental systems, enabling the identification of pollution sources through the integration of faunal community structures with physicochemical parameters and in situ analytical data. Leveraging advanced technologies—including satellite imaging, drone surveillance, sensor networks, and Internet of Things platforms—chemometric methods facilitate real-time and longitudinal monitoring of both pristine and anthropogenically influenced ecosystems. This review provides a critical and comprehensive overview of the foundational principles underpinning chemometric applications in environmental science. Emphasis is placed on identifying pollution sources, their ecological distribution, and potential impacts on human health. Furthermore, the study highlights the role of chemometrics in interpreting multidimensional datasets, thereby enhancing the accuracy and efficiency of modern environmental monitoring systems across diverse geographic and industrial contexts. A comparative analysis of analytical techniques, target analytes, application domains, and the strengths and limitations of selected in situ and remote sensing-based chemometric approaches is also presented.
(This article belongs to the Special Issue Chemometrics Tools Used in Chemical Detection and Analysis)
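
As a toy illustration of the multivariate workflow such reviews cover, the sketch below applies autoscaling and principal component analysis (a staple chemometric technique) to a hypothetical site-by-parameter monitoring matrix. The parameter names and the injected cluster are invented for illustration only.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Hypothetical monitoring matrix: 100 sampling sites x 6 physicochemical
# parameters (e.g., pH, conductivity, nitrate, phosphate, Pb, Cd).
rng = np.random.default_rng(1)
X = rng.normal(size=(100, 6))
X[:30, 3:] += 2.0        # a cluster of sites with elevated nutrient/metal load

X_std = StandardScaler().fit_transform(X)   # autoscaling, standard in chemometrics
pca = PCA(n_components=2).fit(X_std)
scores = pca.transform(X_std)               # site scores for source apportionment

print("explained variance:", pca.explained_variance_ratio_.round(2))
print("PC1 loadings:", pca.components_[0].round(2))  # parameters driving PC1
```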

15 pages, 2325 KiB  
Article
Evaluation of Additive Manufacturing Techniques to Increase Drone Radar Detection Range
by Christian Maya, Joaquin Vico Navarro and Juan V. Balbastre Tejedor
Drones 2025, 9(7), 499; https://doi.org/10.3390/drones9070499 - 15 Jul 2025
Viewed by 365
Abstract
The deployment of drones in civil airspaces has increased rapidly in recent years. However, their small radar cross-section hinders detectability by existing surveillance systems. This paper evaluates the feasibility of using additive manufacturing techniques and materials with enhanced electromagnetic properties to increase drone radar detectability. The examined methods include the production of propellers and landing gear with additive techniques and metal-doped plastics. Field experiments were performed measuring the radar detection range of an off-the-shelf drone with a 24 GHz radar module, and findings indicate that the use of specialized materials for drone manufacturing greatly improves drone detectability, increasing the detection range by a factor of 3.9 without changes to the drone design or inclusion of external electromagnetic reflectors.
(This article belongs to the Section Innovative Urban Mobility)
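
A quick consistency check on the headline number: under the radar range equation, detection range grows with the fourth root of the radar cross-section (RCS), so a 3.9x range gain implies an RCS increase of roughly 230x. A back-of-envelope sketch (an editorial illustration, not from the paper):

```python
# Radar range equation: R_max = (P_t * G**2 * wl**2 * rcs / ((4*pi)**3 * P_min))**0.25,
# so R_max scales as rcs**0.25 when the radar parameters are held fixed.
range_gain = 3.9
rcs_factor = range_gain ** 4      # RCS increase implied by a 3.9x range gain
print(f"implied RCS increase: ~{rcs_factor:.0f}x")   # ~231x
```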

21 pages, 3826 KiB  
Article
UAV-OVD: Open-Vocabulary Object Detection in UAV Imagery via Multi-Level Text-Guided Decoding
by Lijie Tao, Guoting Wei, Zhuo Wang, Zhaoshuai Qi, Ying Li and Haokui Zhang
Drones 2025, 9(7), 495; https://doi.org/10.3390/drones9070495 - 14 Jul 2025
Viewed by 499
Abstract
Object detection in drone-captured imagery has attracted significant attention due to its wide range of real-world applications, including surveillance, disaster response, and environmental monitoring. The majority of existing methods are developed under closed-set assumptions; although some recent studies have begun to explore open-vocabulary or open-world detection, their application to UAV imagery remains limited and underexplored. In this paper, we address this limitation by exploring the relationship between images and textual semantics to extend object detection in UAV imagery to an open-vocabulary setting. We propose a novel and efficient detector named Unmanned Aerial Vehicle Open-Vocabulary Detector (UAV-OVD), specifically designed for drone-captured scenes. To facilitate open-vocabulary object detection, we propose improvements from three complementary perspectives. First, at the training level, we design a region–text contrastive loss to replace conventional classification loss, allowing the model to align visual regions with textual descriptions beyond fixed category sets. Structurally, building on this, we introduce a multi-level text-guided fusion decoder that integrates visual features across multiple spatial scales under language guidance, thereby improving overall detection performance and enhancing the representation and perception of small objects. Finally, from the data perspective, we enrich the original dataset with synonym-augmented category labels, enabling more flexible and semantically expressive supervision. Experiments conducted on two widely used benchmark datasets demonstrate that our approach achieves significant improvements in both mAP and recall. For instance, for Zero-Shot Detection on xView, UAV-OVD achieves 9.9 mAP and 67.3 recall, 1.1 and 25.6 points higher than YOLO-World. In terms of speed, UAV-OVD achieves 53.8 FPS, nearly twice as fast as YOLO-World and five times faster than DetrReg, demonstrating its strong potential for real-time open-vocabulary detection in UAV imagery.
(This article belongs to the Special Issue Applications of UVs in Digital Photogrammetry and Image Processing)
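
The exact loss is not specified beyond the abstract; a common way to realise a region-text contrastive objective is an InfoNCE-style alignment between region and phrase embeddings, sketched below with illustrative shapes and a hypothetical temperature.

```python
import torch
import torch.nn.functional as F

def region_text_contrastive_loss(region_emb, text_emb, labels, tau=0.07):
    """InfoNCE-style region-text alignment (illustrative sketch).

    region_emb: (N, D) embeddings of N detected regions.
    text_emb:   (K, D) embeddings of K category phrases (incl. synonyms).
    labels:     (N,) index of the matching phrase for each region.
    """
    region_emb = F.normalize(region_emb, dim=-1)
    text_emb = F.normalize(text_emb, dim=-1)
    logits = region_emb @ text_emb.t() / tau   # (N, K) similarity scores
    return F.cross_entropy(logits, labels)

loss = region_text_contrastive_loss(
    torch.randn(8, 512), torch.randn(20, 512), torch.randint(0, 20, (8,)))
print(loss.item())
```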

28 pages, 19790 KiB  
Article
HSF-DETR: A Special Vehicle Detection Algorithm Based on Hypergraph Spatial Features and Bipolar Attention
by Kaipeng Wang, Guanglin He and Xinmin Li
Sensors 2025, 25(14), 4381; https://doi.org/10.3390/s25144381 - 13 Jul 2025
Viewed by 469
Abstract
Special vehicle detection in intelligent surveillance, emergency rescue, and reconnaissance faces significant challenges in accuracy and robustness under complex environments, necessitating advanced detection algorithms for critical applications. This paper proposes HSF-DETR (Hypergraph Spatial Feature DETR), integrating four innovative modules: a Cascaded Spatial Feature Network (CSFNet) backbone with Cross-Efficient Convolutional Gating (CECG) for enhanced long-range detection through hybrid state-space modeling; a Hypergraph-Enhanced Spatial Feature Modulation (HyperSFM) network utilizing hypergraph structures for high-order feature correlations and adaptive multi-scale fusion; a Dual-Domain Feature Encoder (DDFE) combining Bipolar Efficient Attention (BEA) and Frequency-Enhanced Feed-Forward Network (FEFFN) for precise feature weight allocation; and a Spatial-Channel Fusion Upsampling Block (SCFUB) improving feature fidelity through depth-wise separable convolution and channel shift mixing. Experiments conducted on a self-built special vehicle dataset containing 2388 images demonstrate that HSF-DETR achieves mAP50 and mAP50-95 of 96.6% and 70.6%, respectively, representing improvements of 3.1% and 4.6% over baseline RT-DETR while maintaining computational efficiency at 59.7 GFLOPs and 18.07 M parameters. Cross-domain validation on VisDrone2019 and BDD100K datasets confirms the method's generalization capability and robustness across diverse scenarios, establishing HSF-DETR as an effective solution for special vehicle detection in complex environments.
(This article belongs to the Section Sensing and Imaging)
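
As a rough sketch of the SCFUB ingredients named above, the module below combines upsampling, a depth-wise separable convolution, and a simple channel-shift mixing step. It is an illustrative reconstruction under stated assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class SeparableUpsample(nn.Module):
    """Upsampling with depth-wise separable convolution and channel-shift mixing."""
    def __init__(self, channels: int, shift: int = 8):
        super().__init__()
        self.shift = shift
        self.up = nn.Upsample(scale_factor=2, mode="nearest")
        self.depthwise = nn.Conv2d(channels, channels, 3, padding=1, groups=channels)
        self.pointwise = nn.Conv2d(channels, channels, 1)

    def forward(self, x):
        x = self.up(x)
        x = self.pointwise(self.depthwise(x))      # depth-wise separable convolution
        return torch.roll(x, self.shift, dims=1)   # cheap cross-channel mixing

print(SeparableUpsample(64)(torch.randn(1, 64, 20, 20)).shape)  # (1, 64, 40, 40)
```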

23 pages, 17084 KiB  
Article
Training First Responders Through VR-Based Situated Digital Twins
by Nikolaos Partarakis, Theodoros Evdaimon, Menelaos Katsantonis and Xenophon Zabulis
Computers 2025, 14(7), 274; https://doi.org/10.3390/computers14070274 - 11 Jul 2025
Viewed by 538
Abstract
This study examines first responder training to deliver realistic, adaptable, and scalable solutions aimed at equipping personnel to handle high-risk, rapidly developing scenarios. The proposed method leverages Virtual Reality, Augmented Reality, and digital twins to enable immersive and situationally relevant training for security-critical incidents. The method is structured into three distinct phases: definition, digitization, and implementation. The outcome of this approach is the creation of virtual training scenarios that simulate real situations and incident dynamics. The methodology employs photogrammetric reconstruction, simulation of human behavior through locomotion, and virtual security systems, such as surveillance and drone technology. Alongside the methodology, a case study of a large public event is presented to illustrate its feasibility in real-world applications. This study offers a comprehensive and adaptive structure for the design and deployment of digitally augmented training systems. This provides a practical basis for enhancing readiness in a range of operational domains.

29 pages, 16466 KiB  
Article
DMF-YOLO: Dynamic Multi-Scale Feature Fusion Network-Driven Small Target Detection in UAV Aerial Images
by Xiaojia Yan, Shiyan Sun, Huimin Zhu, Qingping Hu, Wenjian Ying and Yinglei Li
Remote Sens. 2025, 17(14), 2385; https://doi.org/10.3390/rs17142385 - 10 Jul 2025
Viewed by 542
Abstract
Target detection in UAV aerial images has found increasingly widespread applications in emergency rescue, maritime monitoring, and environmental surveillance. However, traditional detection models suffer significant performance degradation due to challenges including substantial scale variations, high proportions of small targets, and dense occlusions in UAV-captured images. To address these issues, this paper proposes DMF-YOLO, a high-precision detection network based on YOLOv10 improvements. First, we design Dynamic Dilated Snake Convolution (DDSConv) to adaptively adjust the receptive field and dilation rate of convolution kernels, enhancing local feature extraction for small targets with weak textures. Second, we construct a Multi-scale Feature Aggregation Module (MFAM) that integrates dual-branch spatial attention mechanisms to achieve efficient cross-layer feature fusion, mitigating information conflicts between shallow details and deep semantics. Finally, we propose an Expanded Window-based Bounding Box Regression Loss Function (EW-BBRLF), which optimizes localization accuracy through dynamic auxiliary bounding boxes, effectively reducing missed detections of small targets. Experiments on the VisDrone2019 and HIT-UAV datasets demonstrate that DMF-YOLOv10 achieves 50.1% and 81.4% mAP50, respectively, significantly outperforming the baseline YOLOv10s by 27.1% and 2.6%, with parameter increases limited to 24.4% and 11.9%. The method exhibits superior robustness in dense scenarios, complex backgrounds, and long-range target detection. This approach provides an efficient solution for UAV real-time perception tasks and offers novel insights for multi-scale object detection algorithm design.
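
The precise form of the EW-BBRLF is not given in the abstract; one plausible reading, sketched below, augments a standard IoU loss with IoU computed on expanded auxiliary boxes so that small, barely overlapping targets still produce informative gradients. The expansion ratio is a hypothetical parameter.

```python
import torch

def iou_xyxy(a, b, eps=1e-7):
    """Elementwise IoU for (N, 4) boxes in (x1, y1, x2, y2) format."""
    lt = torch.maximum(a[:, :2], b[:, :2])
    rb = torch.minimum(a[:, 2:], b[:, 2:])
    inter = (rb - lt).clamp(min=0).prod(dim=1)
    area = lambda x: (x[:, 2] - x[:, 0]) * (x[:, 3] - x[:, 1])
    return inter / (area(a) + area(b) - inter + eps)

def expand(boxes, ratio):
    """Auxiliary box: same centre, width/height scaled by `ratio`."""
    c = (boxes[:, :2] + boxes[:, 2:]) / 2
    half_wh = (boxes[:, 2:] - boxes[:, :2]) * ratio / 2
    return torch.cat([c - half_wh, c + half_wh], dim=1)

def expanded_window_loss(pred, target, ratio=1.5):
    # Small targets often have zero IoU with the prediction; the expanded
    # auxiliary boxes still overlap and keep the gradient informative.
    return ((1 - iou_xyxy(pred, target)) +
            (1 - iou_xyxy(expand(pred, ratio), expand(target, ratio)))).mean() / 2
```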

22 pages, 3045 KiB  
Article
Optimization of RIS-Assisted 6G NTN Architectures for High-Mobility UAV Communication Scenarios
by Muhammad Shoaib Ayub, Muhammad Saadi and Insoo Koo
Drones 2025, 9(7), 486; https://doi.org/10.3390/drones9070486 - 10 Jul 2025
Viewed by 491
Abstract
The integration of reconfigurable intelligent surfaces (RISs) with non-terrestrial networks (NTNs), particularly those enabled by unmanned aerial vehicles (UAVs) or drone-based platforms, has emerged as a transformative approach to enhance 6G connectivity in high-mobility scenarios. UAV-assisted NTNs offer flexible deployment, dynamic altitude control, and rapid network reconfiguration, making them ideal candidates for RIS-based signal optimization. However, the high mobility of UAVs and their three-dimensional trajectory dynamics introduce unique challenges in maintaining robust, low-latency links and seamless handovers. This paper presents a comprehensive performance analysis of RIS-assisted UAV-based NTNs, focusing on optimizing RIS phase shifts to maximize the signal-to-interference-plus-noise ratio (SINR), throughput, energy efficiency, and reliability under UAV mobility constraints. A joint optimization framework is proposed that accounts for UAV path loss, aerial shadowing, interference, and user mobility patterns, tailored specifically for aerial communication networks. Extensive simulations are conducted across various UAV operation scenarios, including urban air corridors, rural surveillance routes, drone swarms, emergency response, and aerial delivery systems. The results reveal that RIS deployment significantly enhances the SINR and throughput while navigating energy and latency trade-offs in real time. These findings offer vital insights for deploying RIS-enhanced aerial networks in 6G, supporting mission-critical drone applications and next-generation autonomous systems.
(This article belongs to the Section Drone Communications)
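
For a single reflected link, the SINR-maximising RIS phase shifts have a well-known closed form: each element cancels the phase of its cascaded channel so that all reflected paths add coherently. A minimal sketch under a toy Rayleigh channel model (all parameters illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
N = 64                                   # RIS elements
h = (rng.normal(size=N) + 1j * rng.normal(size=N)) / np.sqrt(2)  # BS -> RIS
g = (rng.normal(size=N) + 1j * rng.normal(size=N)) / np.sqrt(2)  # RIS -> UAV
noise_power = 1e-2

# Closed-form optimum for a single user: each element cancels the cascaded
# channel phase, so all N reflected paths combine coherently.
theta = -np.angle(h * g)
gain_opt = np.abs(np.sum(h * g * np.exp(1j * theta))) ** 2
gain_rand = np.abs(np.sum(h * g * np.exp(1j * rng.uniform(0, 2*np.pi, N)))) ** 2

print(f"SNR optimised: {10*np.log10(gain_opt / noise_power):.1f} dB")
print(f"SNR random:    {10*np.log10(gain_rand / noise_power):.1f} dB")
```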

19 pages, 598 KiB  
Article
Trajectory Planning and Optimisation for Following Drone to Rendezvous Leading Drone by State Estimation with Adaptive Time Horizon
by Javier Lee Hongrui and Sutthiphong Srigrarom
Aerospace 2025, 12(7), 606; https://doi.org/10.3390/aerospace12070606 - 4 Jul 2025
Viewed by 347
Abstract
With the increasing proliferation of drones for many purposes, counter-drone technology has become crucial. This rapid expansion has introduced significant opportunities and applications, such as aerial surveillance, delivery services, agriculture monitoring, and, most importantly, security operations. Due to the relative simplicity of learning and operating a small-scale UAV, malicious organizations can field and use UAVs (drones) to form substantial threats. Their interception may then be hindered by evasive manoeuvres performed by the malicious UAV (mUAV). Novice operators may also unintentionally fly UAVs into restricted airspace such as civilian airports, posing a hazard to other air operations. This paper explores predictive trajectory methods for the neutralisation of mUAVs by following drones, using state estimation techniques such as the extended Kalman filter (EKF) and particle filter (PF). Interception strategies and optimization techniques are analysed to improve interception efficiency and robustness. The novelty introduced by this paper is the implementation of adaptive time horizon (ATH) and velocity control (VC) in the predictive process. Simulations in MATLAB were used to evaluate the effectiveness of trajectory prediction models and interception strategies against evasive manoeuvres. The tests demonstrated that the EKF predictive method achieved a significantly higher neutralisation rate (41%) than the PF method (30%) in linear trajectory scenarios, while both methods achieved a similar neutralisation rate of 5% in stochastic trajectory scenarios. After incorporating the adaptive time horizon (ATH) and velocity control (VC) measures, the EKF method achieved a 98% neutralisation rate, demonstrating a significant improvement in performance.
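
As a minimal sketch of the prediction-based rendezvous idea (with a constant-velocity model, for which the EKF reduces to a linear Kalman filter), the code below propagates the target state over a bounded horizon and picks the earliest predicted position the follower can reach in time. All dynamics and parameters are illustrative, not the paper's.

```python
import numpy as np

def cv_predict(x, P, dt, q=0.5):
    """One constant-velocity prediction step; state x = [px, py, vx, vy]."""
    F = np.eye(4); F[0, 2] = F[1, 3] = dt
    Q = q * np.diag([dt**3 / 3, dt**3 / 3, dt, dt])   # simple process noise
    return F @ x, F @ P @ F.T + Q

def rendezvous_point(x, P, follower_pos, follower_speed, horizon=5.0, dt=0.1):
    """Earliest predicted target position the follower can reach in time."""
    t = 0.0
    while t < horizon:              # an adaptive time horizon would tune this bound
        x, P = cv_predict(x, P, dt)
        t += dt
        if np.linalg.norm(x[:2] - follower_pos) <= follower_speed * t:
            return x[:2], t
    return x[:2], t                 # fall back to the horizon endpoint

pt, t = rendezvous_point(np.array([0., 0., 5., 2.]), np.eye(4),
                         np.array([30., -10.]), follower_speed=12.0)
print(f"aim point {pt.round(1)} reachable in {t:.1f} s")
```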

19 pages, 11127 KiB  
Article
Drone State Estimation Based on Frame-to-Frame Template Matching with Optimal Windows
by Seokwon Yeom
Drones 2025, 9(7), 457; https://doi.org/10.3390/drones9070457 - 24 Jun 2025
Viewed by 411
Abstract
The flight capability of drones expands the surveillance area and allows drones to be mobile platforms. Therefore, it is important to estimate the kinematic state of drones. In this paper, the kinematic state of a mini drone in flight is estimated based on the video captured by its camera. A novel frame-to-frame template-matching technique is proposed. The instantaneous velocity of the drone is measured through image-to-position conversion and frame-to-frame template matching using optimal windows. Multiple templates are defined by their corresponding windows in a frame. The size and location of the windows are obtained by minimizing the sum of the least square errors between the piecewise linear regression model and the nonlinear image-to-position conversion function. The displacement between two consecutive frames is obtained via frame-to-frame template matching that minimizes the sum of normalized squared differences. The kinematic state of the drone is estimated by a Kalman filter based on the velocity computed from the displacement. The Kalman filter is augmented to simultaneously estimate the state and velocity bias of the drone. For faster processing, a zero-order hold scheme is adopted to reuse the measurement. In the experiments, two 150 m long roadways were tested; one road is in an urban environment and the other in a suburban environment. A mini drone starts from a hovering state, reaches top speed, and then continues to fly at a nearly constant speed. The drone captures video 10 times on each road from a height of 40 m at a 60-degree camera tilt angle. The experiments show that the proposed method achieves average distance errors at the low-meter level after the flight.
(This article belongs to the Special Issue Intelligent Image Processing and Sensing for Drones, 2nd Edition)
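
The normalized-squared-difference matching step maps directly onto OpenCV's TM_SQDIFF_NORMED criterion. The sketch below estimates the pixel displacement of one template window between consecutive frames; the window placement is illustrative, and the image-to-position conversion to metres is omitted.

```python
import cv2
import numpy as np

def frame_displacement(prev, curr, window):
    """Displacement of one template window between consecutive frames.

    prev, curr: consecutive grayscale frames (H, W), dtype uint8.
    window:     (x, y, w, h) template window in the previous frame.
    """
    x, y, w, h = window
    template = prev[y:y + h, x:x + w]
    # TM_SQDIFF_NORMED is the normalised-squared-difference criterion;
    # its minimum marks the best match in the current frame.
    result = cv2.matchTemplate(curr, template, cv2.TM_SQDIFF_NORMED)
    min_loc = cv2.minMaxLoc(result)[2]
    return min_loc[0] - x, min_loc[1] - y

prev = np.random.randint(0, 255, (480, 640), dtype=np.uint8)
curr = np.roll(prev, (4, 7), axis=(0, 1))      # simulate a 7 px right, 4 px down shift
print(frame_displacement(prev, curr, (300, 200, 64, 64)))   # -> (7, 4)
```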

23 pages, 551 KiB  
Review
Drones and AI-Driven Solutions for Wildlife Monitoring
by Nourdine Aliane
Drones 2025, 9(7), 455; https://doi.org/10.3390/drones9070455 - 24 Jun 2025
Viewed by 2126
Abstract
Wildlife monitoring has entered a transformative era with the convergence of drone technology and artificial intelligence (AI). Drones provide access to remote and dangerous habitats, while AI unlocks the potential to process vast amounts of wildlife data. This synergy is reshaping wildlife monitoring, offering novel solutions to tackle challenges in species identification, animal tracking, anti-poaching, population estimation, and habitat analysis. This paper conducts a comprehensive literature review to examine the recent advancements in drone and AI systems for wildlife monitoring, focusing on two critical dimensions: (1) Methodologies, algorithms, and applications, analyzing the AI techniques employed in wildlife monitoring, including their operational frameworks and real-world implementations. (2) Challenges and opportunities, identifying current limitations, including technical hurdles and regulatory constraints, as well as exploring the untapped potential in drone and AI integration to enhance wildlife monitoring and conservation efforts. By synthesizing these insights, this paper provides researchers with a structured framework for leveraging drone and AI systems in wildlife monitoring, identifying best practices and outlining actionable pathways for future innovation in the field.

17 pages, 1647 KiB  
Proceeding Paper
Enhanced Drone Detection Model for Edge Devices Using Knowledge Distillation and Bayesian Optimization
by Maryam Lawan Salisu, Farouk Lawan Gambo, Aminu Musa and Aminu Aliyu Abdullahi
Eng. Proc. 2025, 87(1), 71; https://doi.org/10.3390/engproc2025087071 - 4 Jun 2025
Viewed by 630
Abstract
The emergence of Unmanned Aerial Vehicles (UAVs), commonly known as drones, has presented numerous transformative opportunities across sectors such as agriculture, commerce, and security surveillance systems. However, the proliferation of these technologies raises significant concerns regarding security and privacy, as they could potentially be exploited for unauthorized surveillance or even targeted attacks. Various research endeavors have proposed drone detection models for security purposes. Yet, deploying these models on edge devices proves challenging due to resource constraints, which limit the feasibility of complex deep learning models. The need for lightweight models capable of efficient deployment on edge devices becomes evident, particularly for the anonymous detection of drones in various disguises to prevent potential intrusions. This study introduces a lightweight deep learning-based drone detection model (LDDm-CNN) by fusing knowledge distillation with Bayesian optimization. Knowledge distillation (KD) is utilized to transfer knowledge from a complex model (teacher) to a simpler one (student), preserving performance while reducing computational complexity, thereby achieving a lightweight model. However, selecting optimal hyper-parameters for knowledge distillation is challenging due to the large search space and complexity requirements. Therefore, through the integration of Bayesian optimization with knowledge distillation, we present an enhanced CNN-KD model. This novel approach employs an optimization algorithm to determine the most suitable hyper-parameters, enhancing the efficiency and effectiveness of the drone detection model. Validation on a dedicated drone detection dataset illustrates the model's efficacy, achieving a remarkable accuracy of 96% while significantly reducing computational and memory requirements. With just 102,000 parameters, the proposed model is five times smaller than the teacher model, underscoring its potential for practical deployment in real-world scenarios.
(This article belongs to the Proceedings of The 5th International Electronic Conference on Applied Sciences)
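
A minimal sketch of the distillation objective, assuming the standard Hinton-style formulation with softened teacher targets; the temperature T and mixing weight alpha are exactly the kind of hyper-parameters a Bayesian optimiser would tune.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.5):
    """Blend soft teacher targets with the hard-label loss (illustrative values)."""
    soft = F.kl_div(F.log_softmax(student_logits / T, dim=-1),
                    F.softmax(teacher_logits / T, dim=-1),
                    reduction="batchmean") * T * T   # T^2 keeps gradient scale
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard

# Toy drone / no-drone example with random logits:
loss = distillation_loss(torch.randn(16, 2), torch.randn(16, 2),
                         torch.randint(0, 2, (16,)))
print(loss.item())
```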

29 pages, 43709 KiB  
Article
Outdoor Dataset for Flying a UAV at an Appropriate Altitude
by Theyab Alotaibi, Kamal Jambi, Maher Khemakhem, Fathy Eassa and Farid Bourennani
Drones 2025, 9(6), 406; https://doi.org/10.3390/drones9060406 - 31 May 2025
Viewed by 787
Abstract
The increasing popularity of drones for Internet of Things (IoT) applications has led to significant research interest in autonomous navigation within unknown and dynamic environments. Researchers are utilizing supervised learning techniques that rely on image datasets to train drones for autonomous navigation; such drones are typically used for rescue, surveillance, and medical aid delivery. Current datasets lack data that allow drones to navigate in a 3D environment; most of these data are dedicated to self-driving cars or navigation inside buildings. Therefore, this study presents an image dataset for training drones for 3D navigation. We developed an algorithm to capture these data from multiple worlds on the Gazebo simulator using a quadcopter. This dataset includes images of obstacles at various flight altitudes and images of the horizon to assist a drone in flying at an appropriate altitude, which allows it to avoid obstacles and prevents it from flying unnecessarily high. We used deep learning (DL) to develop a model to classify and predict the image types. Eleven experiments performed with the Gazebo simulator using a drone and a convolutional neural network (CNN) demonstrated the dataset's effectiveness in avoiding different types of obstacles while maintaining an appropriate altitude and the drone's ability to navigate in a 3D environment.
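
As an illustrative sketch (class names, image size, and architecture are assumptions, not the paper's model), a compact CNN of this kind could classify each frame into obstacle, clear-horizon, or flying-too-high categories:

```python
import torch
import torch.nn as nn

class AltitudeCNN(nn.Module):
    """Tiny image classifier for altitude-keeping cues (hypothetical design)."""
    def __init__(self, n_classes: int = 3):   # obstacle / horizon OK / too high
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, n_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

logits = AltitudeCNN()(torch.randn(4, 3, 128, 128))   # batch of 4 RGB frames
print(logits.shape)                                   # torch.Size([4, 3])
```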

39 pages, 3695 KiB  
Article
Fast Identification and Detection Algorithm for Maneuverable Unmanned Aircraft Based on Multimodal Data Fusion
by Tian Luan, Shixiong Zhou, Yicheng Zhang and Weijun Pan
Mathematics 2025, 13(11), 1825; https://doi.org/10.3390/math13111825 - 30 May 2025
Viewed by 822
Abstract
To address the critical challenges of insufficient monitoring capabilities and vulnerable defense systems against drones in regional airports, this study proposes a multi-source data fusion framework for rapid UAV detection. Building upon the YOLO v11 architecture, we develop an enhanced model incorporating four key innovations: (1) A dual-path RGB-IR fusion architecture that exploits complementary multi-modal data; (2) C3k2-DATB dynamic attention modules for enhanced feature extraction and semantic perception; (3) A bilevel routing attention mechanism with agent queries (BRSA) for precise target localization; (4) A semantic-detail injection (SDI) module coupled with windmill-shaped convolutional detection heads (PCHead) and Wasserstein Distance loss to expand receptive fields and accelerate convergence. Experimental results demonstrate superior performance with 99.3% mAP@50 (17.4% improvement over baseline YOLOv11), while maintaining lightweight characteristics (2.54M parameters, 7.8 GFLOPS). For practical deployment, we further enhance tracking robustness through an improved BoT-SORT algorithm within an interactive multiple model framework, achieving 91.3% MOTA and 93.0% IDF1 under low-light conditions. This integrated solution provides cost-effective, high-precision drone surveillance for resource-constrained airports.
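
The abstract does not spell out the Wasserstein loss; a common choice for tiny objects models each box as a 2-D Gaussian and uses the closed-form 2-Wasserstein distance between the two Gaussians. The sketch below follows that formulation; the normalising constant C is dataset-dependent and assumed here.

```python
import torch

def wasserstein_box_loss(pred, target, C=12.0):
    """Wasserstein-distance loss for (N, 4) boxes in (cx, cy, w, h) format.

    Each box is modelled as a 2-D Gaussian N([cx, cy], diag(w/2, h/2)^2);
    the squared 2-Wasserstein distance between such Gaussians is the squared
    Euclidean distance between the vectors [cx, cy, w/2, h/2].
    """
    pa = torch.cat([pred[:, :2], pred[:, 2:] / 2], dim=1)
    pb = torch.cat([target[:, :2], target[:, 2:] / 2], dim=1)
    w2 = torch.sum((pa - pb) ** 2, dim=1)                 # squared W2 distance
    nwd = torch.exp(-torch.sqrt(w2.clamp(min=1e-7)) / C)  # normalised similarity
    return (1 - nwd).mean()

pred = torch.tensor([[50., 50., 10., 8.]])
target = torch.tensor([[52., 49., 12., 8.]])
print(wasserstein_box_loss(pred, target).item())
```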
