Search Results (1,652)

Search Parameters:
Keywords = aerial drone

20 pages, 589 KiB  
Article
Intelligent Queue Scheduling Method for SPMA-Based UAV Networks
by Kui Yang, Chenyang Xu, Guanhua Qiao, Jinke Zhong and Xiaoning Zhang
Drones 2025, 9(8), 552; https://doi.org/10.3390/drones9080552 - 6 Aug 2025
Abstract
Static Priority-based Multiple Access (SPMA) is an emerging and promising wireless MAC protocol widely used in Unmanned Aerial Vehicle (UAV) networks. UAV networks, also known as drone networks, are systems of interconnected UAVs that communicate and collaborate to perform tasks autonomously or semi-autonomously, leveraging wireless communication technologies to share data, coordinate movements, and optimize mission execution. In SPMA, traffic arriving at a UAV network node is divided into multiple priorities according to information timeliness, and the packets of each priority are stored in corresponding queues with different transmission thresholds, guaranteeing a high success rate and low latency for the highest-priority traffic. Unfortunately, the multi-priority queue scheduling of SPMA deprives low-priority traffic of transmission opportunities, resulting in unfairness among traffic of different priorities. To address this problem, this paper proposes an Adaptive Credit-Based Shaper with Reinforcement Learning (ACBS-RL) to balance the performance of traffic across all priorities. In ACBS-RL, the Credit-Based Shaper (CBS) is introduced into SPMA to provide relatively fair transmission opportunities among multiple traffic queues by limiting the transmission rate. Because the wireless environment is dynamic, a Q-learning-based reinforcement learning method is leveraged to adaptively adjust the CBS parameters (i.e., idleslope and sendslope) to achieve better performance across all priority queues. Extensive simulation results show that, compared with the traditional SPMA protocol, the proposed ACBS-RL increases UAV network throughput while guaranteeing the Quality of Service (QoS) requirements of all priority traffic.
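As a rough illustration of the adaptation loop the abstract describes (tabular Q-learning nudging the Credit-Based Shaper's idleslope), here is a minimal Python sketch; the state discretization, action set, reward, and all numeric values are hypothetical placeholders, not parameters from the paper.

```python
import random

ACTIONS = [-10, 0, +10]          # hypothetical idleslope adjustments (kbit/s)

class CBSQLearner:
    """Toy tabular Q-learning agent that tunes a CBS idleslope."""
    def __init__(self, alpha=0.1, gamma=0.9, epsilon=0.1):
        self.q = {}                      # (state, action) -> value
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon

    def choose(self, state):
        if random.random() < self.epsilon:
            return random.choice(ACTIONS)
        return max(ACTIONS, key=lambda a: self.q.get((state, a), 0.0))

    def update(self, state, action, reward, next_state):
        best_next = max(self.q.get((next_state, a), 0.0) for a in ACTIONS)
        old = self.q.get((state, action), 0.0)
        self.q[(state, action)] = old + self.alpha * (reward + self.gamma * best_next - old)

def discretize(queue_delay_ms):
    """Hypothetical state: bucketized queuing delay of the low-priority queue."""
    return min(int(queue_delay_ms // 5), 10)

# Usage sketch: one interaction step with a simulated network (not shown here).
agent = CBSQLearner()
idleslope = 500.0                          # kbit/s, assumed starting value
state = discretize(queue_delay_ms=12.0)
action = agent.choose(state)
idleslope = max(0.0, idleslope + action)   # sendslope would be idleslope minus the port rate
# the reward could combine low-priority throughput with high-priority latency margins
agent.update(state, action, reward=0.3, next_state=discretize(queue_delay_ms=9.0))
```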

21 pages, 4331 KiB  
Article
Research on Lightweight Tracking of Small-Sized UAVs Based on the Improved YOLOv8N-Drone Architecture
by Yongjuan Zhao, Qiang Ma, Guannan Lei, Lijin Wang and Chaozhe Guo
Drones 2025, 9(8), 551; https://doi.org/10.3390/drones9080551 - 5 Aug 2025
Abstract
Traditional unmanned aerial vehicle (UAV) detection and tracking methods have long faced the twin challenges of high cost and poor efficiency. In real-world battlefield environments with complex backgrounds, occlusions, and varying speeds, existing techniques struggle to track small UAVs accurately and stably. To tackle these issues, this paper presents an enhanced YOLOv8N-Drone-based algorithm for improved tracking of small UAV targets. Firstly, a novel module named C2f-DSFEM (Depthwise-Separable and Sobel Feature Enhancement Module) is designed, integrating Sobel convolution with depthwise separable convolution across layers. Edge detail extraction and multi-scale feature representation are synchronized through a bidirectional feature enhancement mechanism, significantly enhancing the discriminability of target features against complex backgrounds. To address feature confusion, an improved lightweight Context Anchored Attention (CAA) mechanism is integrated into the Neck network, which effectively improves the system's adaptability to complex scenes. By employing a position-aware weight allocation strategy, this approach adaptively suppresses background interference and focuses precisely on the target region, thereby improving localization accuracy. At the level of loss function optimization, the traditional classification loss is replaced by focal loss, which suppresses the contribution of easy-to-classify samples through a dynamic weight adjustment strategy while increasing the priority of difficult samples during training; the class imbalance between positive and negative samples is thus significantly mitigated. Experimental results show the enhanced YOLOv8 boosts mean average precision (mAP@0.5) by 12.3%, reaching 99.2%. In terms of tracking performance, the proposed YOLOv8N-Drone algorithm achieves a 19.2% improvement in Multiple Object Tracking Accuracy (MOTA) under complex multi-scenario conditions. Additionally, the IDF1 score increases by 6.8%, and the number of ID switches is reduced by 85.2%, indicating significant improvements in both the accuracy and stability of UAV tracking. Compared to other mainstream algorithms, the proposed method demonstrates significant advantages in tracking performance, offering a more effective and reliable solution for small-target tracking tasks in UAV applications.
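A minimal sketch of the focal loss that the abstract says replaces the classification loss, written in plain NumPy for a binary case; alpha and gamma are the commonly used defaults rather than values reported by the authors.

```python
import numpy as np

def focal_loss(probs, targets, alpha=0.25, gamma=2.0, eps=1e-7):
    """Binary focal loss: down-weights easy examples and emphasizes hard ones.

    probs   -- predicted probabilities of the positive class, shape (N,)
    targets -- ground-truth labels in {0, 1}, shape (N,)
    """
    probs = np.clip(probs, eps, 1.0 - eps)
    # p_t is the probability assigned to the true class
    p_t = np.where(targets == 1, probs, 1.0 - probs)
    alpha_t = np.where(targets == 1, alpha, 1.0 - alpha)
    # (1 - p_t)^gamma shrinks the loss of well-classified samples toward zero
    return np.mean(-alpha_t * (1.0 - p_t) ** gamma * np.log(p_t))

# Confident correct predictions contribute far less than hard mistakes
y_true = np.array([1, 0, 1, 0])
y_prob = np.array([0.95, 0.05, 0.40, 0.60])
print(focal_loss(y_prob, y_true))
```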

26 pages, 2933 KiB  
Article
Comparative Analysis of Object Detection Models for Edge Devices in UAV Swarms
by Dimitrios Meimetis, Ioannis Daramouskas, Niki Patrinopoulou, Vaios Lappas and Vassilis Kostopoulos
Machines 2025, 13(8), 684; https://doi.org/10.3390/machines13080684 - 4 Aug 2025
Abstract
This study presents a comprehensive investigation into the performance of object detection models tailored for edge devices, particularly in the context of Unmanned Aerial Vehicle (UAV) swarms. Object detection plays a pivotal role in enhancing autonomous navigation, situational awareness, and target-tracking capabilities within UAV swarms, where computing resources are constrained by the onboard low-cost computers. Initially, a thorough review of the existing literature was conducted to identify state-of-the-art object detection models suitable for deployment on edge devices. These models were evaluated based on their speed, accuracy, and efficiency, with a focus on the real-time inference capabilities crucial for UAV applications. Following the literature review, selected models underwent empirical validation through custom training on the Vision Meets Drone dataset, a widely recognized dataset for UAV-based object detection tasks. Performance metrics such as mean average precision, inference speed, and resource utilization were measured and compared across the models. Lastly, the study extended its analysis beyond traditional object detection to explore the efficacy of instance segmentation and proposed an optimization to an object tracking technique in the UAV context. Instance segmentation offers finer-grained object delineation, enabling more precise target or landmark identification and tracking, albeit at higher resource usage and inference time.
(This article belongs to the Section Automation and Control Systems)

26 pages, 2560 KiB  
Article
Benchmarking YOLO Models for Marine Search and Rescue in Variable Weather Conditions
by Aysha Alshibli and Qurban Memon
Automation 2025, 6(3), 35; https://doi.org/10.3390/automation6030035 - 2 Aug 2025
Abstract
Deep learning with unmanned aerial vehicles (UAVs) is transforming maritime search and rescue (SAR) by enabling rapid object identification in challenging marine environments. This study benchmarks the performance of YOLO models for maritime SAR under diverse weather conditions using the SeaDronesSee and AFO datasets. The results show that while YOLOv7 achieved the highest mAP@50, it struggled to detect small objects. In contrast, YOLOv10 and YOLOv11 delivered faster inference speeds at a slight cost in precision. Key challenges discussed include environmental variability, sensor limitations, and scarce annotated data, which can be addressed by techniques such as attention modules and multimodal data fusion. Overall, the results provide practical guidance for deploying efficient deep learning models in SAR, emphasizing specialized datasets and lightweight architectures for edge devices.
(This article belongs to the Section Intelligent Control and Machine Learning)

41 pages, 86958 KiB  
Article
An Efficient Aerial Image Detection with Variable Receptive Fields
by Wenbin Liu, Liangren Shi and Guocheng An
Remote Sens. 2025, 17(15), 2672; https://doi.org/10.3390/rs17152672 - 2 Aug 2025
Abstract
This article presents VRF-DETR, a lightweight real-time object detection framework for aerial remote sensing images, aimed at addressing the insufficient receptive fields for easily confused categories caused by differences in height and perspective. Based on the RT-DETR architecture, our approach introduces three key innovations: the multi-scale receptive field adaptive fusion (MSRF2) module replaces the Transformer encoder with parallel dilated convolutions and spatial-channel attention to dynamically adjust receptive fields for confusable objects; the gated multi-scale context (GMSC) block reconstructs the backbone using Gated Multi-Scale Context units with attention-gated convolution (AGConv), reducing parameters while enhancing multi-scale feature extraction; and the context-guided fusion (CGF) module optimizes feature fusion via context-guided weighting to resolve multi-scale semantic conflicts. Evaluations were conducted on the VisDrone2019 and UAVDT datasets, where VRF-DETR achieved an mAP50 of 52.1% and an mAP50-95 of 32.2% on the VisDrone2019 validation set, surpassing RT-DETR by 4.9% and 3.5%, respectively, while reducing parameters by 32% and FLOPs by 22%. It maintains real-time performance (62.1 FPS) and generalizes effectively, outperforming state-of-the-art methods in accuracy-efficiency trade-offs for aerial object detection.
(This article belongs to the Special Issue Deep Learning Innovations in Remote Sensing)
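To make the idea of widening receptive fields with parallel dilated convolutions concrete, the following PyTorch sketch shows a generic multi-branch fusion block in that spirit; it illustrates the general technique only and is not the paper's MSRF2 module, and the layer sizes are arbitrary.

```python
import torch
import torch.nn as nn

class ParallelDilatedFusion(nn.Module):
    """Toy multi-branch block: same 3x3 kernel, increasing dilation rates."""
    def __init__(self, channels, dilations=(1, 2, 3)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Conv2d(channels, channels, kernel_size=3,
                      padding=d, dilation=d, bias=False)
            for d in dilations
        ])
        # 1x1 convolution fuses the concatenated branches back to `channels`
        self.fuse = nn.Conv2d(channels * len(dilations), channels, kernel_size=1)
        self.act = nn.SiLU()

    def forward(self, x):
        feats = [branch(x) for branch in self.branches]
        return self.act(self.fuse(torch.cat(feats, dim=1)))

# Usage: the feature map keeps its shape while mixing several receptive fields
x = torch.randn(1, 64, 40, 40)
print(ParallelDilatedFusion(64)(x).shape)   # torch.Size([1, 64, 40, 40])
```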

21 pages, 4657 KiB  
Article
A Semi-Automated RGB-Based Method for Wildlife Crop Damage Detection Using QGIS-Integrated UAV Workflow
by Sebastian Banaszek and Michał Szota
Sensors 2025, 25(15), 4734; https://doi.org/10.3390/s25154734 - 31 Jul 2025
Abstract
Monitoring crop damage caused by wildlife remains a significant challenge in agricultural management, particularly in large-scale monocultures such as maize. This study presents a semi-automated process for detecting wildlife-induced damage using RGB imagery acquired from unmanned aerial vehicles (UAVs). The method is designed for non-specialist users and is fully integrated within the QGIS platform. The proposed approach involves calculating three vegetation indices, Excess Green (ExG), Green Leaf Index (GLI), and Modified Green-Red Vegetation Index (MGRVI), from a standardized orthomosaic generated from RGB images collected via UAV. Subsequently, an unsupervised k-means clustering algorithm is applied to divide the field into five vegetation vigor classes. Within each class, the 25% of pixels with the lowest average index values are preliminarily classified as damaged. A dedicated QGIS plugin enables drone data analysts (DDAs) to interactively adjust index thresholds based on visual interpretation. The method was validated on a 50-hectare maize field, where 7 hectares of damage (15% of the area) were identified. The results indicate a high level of agreement between the automated and manual classifications, with an overall accuracy of 81%. The highest concentration of damage occurred in the "moderate" and "low" vigor zones. Final products include vigor classification maps, binary damage masks, and summary reports in HTML and DOCX formats with visualizations and statistical data. The results confirm the effectiveness and scalability of the proposed RGB-based procedure for crop damage assessment. The method offers a repeatable, cost-effective, and field-operable alternative to multispectral or AI-based approaches, making it suitable for integration with precision agriculture practices and wildlife population management.
(This article belongs to the Section Remote Sensors)
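The three RGB indices named in the abstract have standard definitions (ExG = 2G - R - B, GLI = (2G - R - B)/(2G + R + B), MGRVI = (G^2 - R^2)/(G^2 + R^2)). The sketch below computes them for a normalized RGB array and reproduces the described clustering-and-thresholding step in a simplified, QGIS-free form; the five-class k-means and the per-class 25% cutoff follow the abstract, while the synthetic tile and all other details are illustrative.

```python
import numpy as np
from sklearn.cluster import KMeans

def vegetation_indices(rgb):
    """rgb: float array (H, W, 3) with values scaled to [0, 1]."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    eps = 1e-6
    exg = 2 * g - r - b
    gli = (2 * g - r - b) / (2 * g + r + b + eps)
    mgrvi = (g**2 - r**2) / (g**2 + r**2 + eps)
    return np.stack([exg, gli, mgrvi], axis=-1)

def vigor_classes(rgb, n_classes=5, damaged_fraction=0.25, seed=0):
    """Cluster pixels into vigor classes and flag the weakest pixels per class."""
    flat = vegetation_indices(rgb).reshape(-1, 3)
    labels = KMeans(n_clusters=n_classes, random_state=seed, n_init=10).fit_predict(flat)
    mean_index = flat.mean(axis=1)
    damaged = np.zeros(len(flat), dtype=bool)
    for c in range(n_classes):
        members = np.where(labels == c)[0]
        cutoff = np.quantile(mean_index[members], damaged_fraction)
        damaged[members] = mean_index[members] <= cutoff   # lowest 25% per class
    shape = rgb.shape[:2]
    return labels.reshape(shape), damaged.reshape(shape)

# Usage on a small synthetic tile standing in for an orthomosaic
tile = np.random.rand(64, 64, 3)
classes, damage_mask = vigor_classes(tile)
print(classes.shape, damage_mask.mean())
```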

29 pages, 5503 KiB  
Article
Feature Selection Framework for Improved UAV-Based Detection of Solenopsis invicta Mounds in Agricultural Landscapes
by Chun-Han Shih, Cheng-En Song, Su-Fen Wang and Chung-Chi Lin
Insects 2025, 16(8), 793; https://doi.org/10.3390/insects16080793 - 31 Jul 2025
Abstract
The red imported fire ant (RIFA; Solenopsis invicta) is an invasive species that severely threatens ecology, agriculture, and public health in Taiwan. In this study, the feasibility of applying multispectral imagery captured by unmanned aerial vehicles (UAVs) to detect red fire ant mounds was evaluated in Fenlin Township, Hualien, Taiwan. A DJI Phantom 4 Multispectral drone collected reflectance in five bands (blue, green, red, red-edge, and near-infrared), from which vegetation indices (normalized difference vegetation index, NDVI; soil-adjusted vegetation index, SAVI; and photochemical pigment reflectance index, PPR) and textural features were derived. According to analysis-of-variance F-scores and random forest recursive feature elimination, vegetation indices and spectral features (e.g., NDVI, NIR, SAVI, and PPR) were the most significant predictors of ecological characteristics such as vegetation density and soil visibility. Texture features exhibited moderate importance and the potential to capture intricate spatial patterns in nonlinear models. Despite limitations of the analysis, including trade-offs related to flight height and environmental variability, the findings suggest that UAVs are an inexpensive, high-precision means of obtaining multispectral data for RIFA monitoring. These findings can be used to develop efficient mass-detection protocols for integrated pest control, with broader implications for invasive species monitoring.
(This article belongs to the Special Issue Surveillance and Management of Invasive Insects)
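As a generic illustration of the feature-ranking step the abstract describes (ANOVA F-scores plus random forest recursive feature elimination), the following scikit-learn sketch ranks a set of hypothetical band, index, and texture features; the feature names and data are placeholders, not the study's dataset.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import RFE, f_classif

# Placeholder table: rows are image samples, columns are candidate predictors
rng = np.random.default_rng(0)
features = ["blue", "green", "red", "red_edge", "nir",
            "NDVI", "SAVI", "PPR", "texture_contrast"]
X = pd.DataFrame(rng.normal(size=(200, len(features))), columns=features)
y = rng.integers(0, 2, size=200)            # 1 = mound present, 0 = absent (synthetic)

# ANOVA F-scores: univariate relevance of each predictor
f_scores, _ = f_classif(X, y)
print(dict(zip(features, np.round(f_scores, 2))))

# Recursive feature elimination with a random forest as the estimator
rfe = RFE(RandomForestClassifier(n_estimators=200, random_state=0), n_features_to_select=4)
rfe.fit(X, y)
print("selected:", [f for f, keep in zip(features, rfe.support_) if keep])
```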

22 pages, 1007 KiB  
Systematic Review
Mapping Drone Applications in Rural and Regional Cities: A Scoping Review of the Australian State of Practice
by Christine Steinmetz-Weiss, Nancy Marshall, Kate Bishop and Yuan Wei
Appl. Sci. 2025, 15(15), 8519; https://doi.org/10.3390/app15158519 - 31 Jul 2025
Abstract
Consumer-accessible and user-friendly smart products such as unmanned aerial vehicles (UAVs), or drones, have become widely used, adaptable, and acceptable devices to observe, assess, measure, and explore urban and natural environments. A drone's relatively low cost, and the flexibility in the level of expertise required to operate one, have enabled users from novices to industry professionals to adapt a malleable technology to various disciplines. This review examines the academic literature and maps how drones are currently being used in 93 rural and regional city councils in New South Wales, Australia. Through a systematic review of the academic literature and scrutiny of current drone use in these councils, based on publicly available information on council websites, the findings reveal potential uses of drone technology for local governments that want to engage with smart technology devices. We looked at how drones were being used in the management of the council's environment; health and safety initiatives; infrastructure; planning; social and community programmes; and waste and recycling. These findings suggest that drone technology is increasingly being utilised in rural and regional areas. While the focus is on rural and regional New South Wales, the review of the academic literature and local council websites provides a snapshot of drone use examples with global relevance for local councils in urban and remote areas seeking to incorporate drone technology into their daily practice of city, town, or region governance.

21 pages, 12997 KiB  
Article
Aerial-Ground Cross-View Vehicle Re-Identification: A Benchmark Dataset and Baseline
by Linzhi Shang, Chen Min, Juan Wang, Liang Xiao, Dawei Zhao and Yiming Nie
Remote Sens. 2025, 17(15), 2653; https://doi.org/10.3390/rs17152653 - 31 Jul 2025
Abstract
Vehicle re-identification (Re-ID) is a critical computer vision task that aims to match the same vehicle across spatially distributed cameras, especially in the context of remote sensing imagery. While prior research has primarily focused on Re-ID using remote sensing images captured from similar, typically elevated viewpoints, these settings do not fully reflect complex aerial-ground collaborative remote sensing scenarios. In this work, we introduce a novel and challenging task: aerial-ground cross-view vehicle Re-ID, which involves retrieving vehicles in ground-view image galleries using query images captured from aerial (top-down) perspectives. This task is increasingly relevant due to the integration of drone-based surveillance and ground-level monitoring in multi-source remote sensing systems, yet it poses substantial challenges due to significant appearance variations between aerial and ground views. To support this task, we present AGID (Aerial-Ground Vehicle Re-Identification), the first benchmark dataset specifically designed for aerial-ground cross-view vehicle Re-ID. AGID comprises 20,785 remote sensing images of 834 vehicle identities, collected using drones and fixed ground cameras. We further propose a novel method, Enhanced Self-Correlation Feature Computation (ESFC), which enhances spatial relationships between semantically similar regions and incorporates shape information to improve feature discrimination. Extensive experiments on the AGID dataset and three widely used vehicle Re-ID benchmarks validate the effectiveness of our method, which achieves a Rank-1 accuracy of 69.0% on AGID, surpassing state-of-the-art approaches by 2.1%.

19 pages, 9284 KiB  
Article
UAV-YOLO12: A Multi-Scale Road Segmentation Model for UAV Remote Sensing Imagery
by Bingyan Cui, Zhen Liu and Qifeng Yang
Drones 2025, 9(8), 533; https://doi.org/10.3390/drones9080533 - 29 Jul 2025
Abstract
Unmanned aerial vehicles (UAVs) are increasingly used for road infrastructure inspection and monitoring. However, challenges such as scale variation, complex background interference, and the scarcity of annotated UAV datasets limit the performance of traditional segmentation models. To address these challenges, this study proposes UAV-YOLOv12, a multi-scale segmentation model specifically designed for UAV-based road imagery analysis. The proposed model builds on the YOLOv12 architecture by adding two key modules: a Selective Kernel Network (SKNet) to adjust receptive fields dynamically and a Partial Convolution (PConv) module to improve spatial focus and robustness in occluded regions. These enhancements help the model better detect small and irregular road features in complex aerial scenes. Experimental results on a custom UAV dataset collected from national highways in Wuxi, China, show that UAV-YOLOv12 achieves F1-scores of 0.902 for highways (road-H) and 0.825 for paths (road-P), outperforming the original YOLOv12 by 5% and 3.2%, respectively. Inference speed is maintained at 11.1 ms per image, supporting near real-time performance. Moreover, comparative evaluations against U-Net show improvements of 7.1% and 9.5%, respectively. The model also exhibits strong generalization ability, achieving F1-scores above 0.87 on public datasets such as VHR-10 and the Drone Vehicle dataset. These results demonstrate that the proposed UAV-YOLOv12 achieves high accuracy and robustness across diverse road environments and object scales.

19 pages, 11455 KiB  
Article
Characterizing Tracer Flux Ratio Methods for Methane Emission Quantification Using Small Unmanned Aerial System
by Ezekiel Alaba, Bryan Rainwater, Ethan Emerson, Ezra Levin, Michael Moy, Ryan Brouwer and Daniel Zimmerle
Methane 2025, 4(3), 18; https://doi.org/10.3390/methane4030018 - 29 Jul 2025
Abstract
Accurate methane emission estimates are essential for climate policy, yet current field methods often struggle with spatial constraints and source complexity. Ground-based mobile approaches frequently miss key plume features, introducing bias and uncertainty in emission rate estimates. This study addresses these limitations by using small unmanned aerial systems equipped with precision gas sensors to measure methane alongside co-released tracers. We tested whether arc-shaped flight paths and alternative ratio estimation methods could improve the accuracy of tracer-based emission quantification under real-world constraints. Controlled releases using ethane and nitrous oxide tracers showed that (1) arc flights provided stronger plume capture and higher correlation between methane and tracer concentrations than traditional flight paths; (2) the cumulative sum method yielded the lowest relative error (as low as 3.3%) under ideal mixing conditions; and (3) the arc flight pattern yielded the lowest relative error and uncertainty across all experimental configurations, demonstrating its robustness for quantifying methane emissions from downwind plume measurements. These findings demonstrate a practical and scalable approach to reducing uncertainty in methane quantification. The method is well-suited for challenging environments and lays the groundwork for future applications at the facility scale.
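The tracer flux ratio principle behind this work is that, with a tracer co-released at a known rate, the unknown methane rate equals the known tracer rate scaled by the ratio of the two gases' background-subtracted plume enhancements (converted to a mass basis). Below is a minimal sketch of the cumulative sum variant mentioned in the abstract, using made-up transect values and simplifying assumptions (collocated release, full plume capture).

```python
import numpy as np

M_CH4, M_N2O = 16.04, 44.01   # g/mol, to convert a molar ratio to a mass ratio

def cumulative_sum_ratio(ch4_ppb, tracer_ppb, ch4_bg, tracer_bg):
    """Ratio of cumulative background-subtracted enhancements along a transect."""
    ch4_excess = np.clip(np.asarray(ch4_ppb, dtype=float) - ch4_bg, 0.0, None)
    tracer_excess = np.clip(np.asarray(tracer_ppb, dtype=float) - tracer_bg, 0.0, None)
    return np.cumsum(ch4_excess)[-1] / np.cumsum(tracer_excess)[-1]

# Synthetic downwind transect (ppb); tracer (here N2O) released at a known rate
ch4 = [2010, 2050, 2200, 2120, 2015]
n2o = [330, 338, 360, 348, 331]
molar_ratio = cumulative_sum_ratio(ch4, n2o, ch4_bg=2000, tracer_bg=330)

tracer_release_kg_h = 0.5                      # assumed known co-release rate
ch4_emission_kg_h = tracer_release_kg_h * molar_ratio * (M_CH4 / M_N2O)
print(round(ch4_emission_kg_h, 3))
```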

27 pages, 405 KiB  
Article
Comparative Analysis of Centralized and Distributed Multi-UAV Task Allocation Algorithms: A Unified Evaluation Framework
by Yunze Song, Zhexuan Ma, Nuo Chen, Shenghao Zhou and Sutthiphong Srigrarom
Drones 2025, 9(8), 530; https://doi.org/10.3390/drones9080530 - 28 Jul 2025
Abstract
Unmanned aerial vehicles (UAVs), commonly known as drones, offer unprecedented flexibility for complex missions such as area surveillance, search and rescue, and cooperative inspection. This paper presents a unified evaluation framework for comparing centralized and distributed task allocation algorithms specifically tailored to multi-UAV operations. We first contextualize the classical assignment problem (AP) under UAV mission constraints, including flight time, propulsion energy capacity, and communication range, and evaluate optimal one-to-one solvers including the Hungarian algorithm, the Bertsekas ϵ-auction algorithm, and a minimum-cost maximum-flow formulation. To reflect the dynamic, uncertain environments that UAV fleets encounter, we extend our analysis to distributed multi-UAV task allocation (MUTA) methods. In particular, we examine the consensus-based bundle algorithm (CBBA) and a distributed auction 2-opt refinement strategy, both of which iteratively negotiate task bundles across UAVs to accommodate real-time task arrivals and intermittent connectivity. Finally, we outline how reinforcement learning (RL) can be incorporated to learn adaptive policies that balance energy efficiency and mission success under varying wind conditions and obstacle fields. Through simulations incorporating UAV-specific cost models and communication topologies, we assess each algorithm's mission completion time, total energy expenditure, communication overhead, and resilience to UAV failures. Our results highlight the trade-off between strict optimality, which is suitable for small fleets in static scenarios, and the scalable, robust coordination necessary for large, dynamic multi-UAV deployments.
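For the centralized baseline, the one-to-one assignment problem can be solved optimally with the Hungarian method; the sketch below builds a toy UAV-to-task cost matrix from flight distances and solves it with SciPy. The distance-based cost model is an assumption for illustration, not the cost model used in the paper.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Toy positions: 3 UAVs and 3 tasks on a plane (km)
uavs = np.array([[0.0, 0.0], [5.0, 1.0], [2.0, 6.0]])
tasks = np.array([[4.0, 4.0], [0.5, 5.0], [6.0, 0.0]])

# Cost matrix: Euclidean flight distance from each UAV to each task
cost = np.linalg.norm(uavs[:, None, :] - tasks[None, :, :], axis=-1)

# Hungarian algorithm: minimum-total-cost one-to-one assignment
rows, cols = linear_sum_assignment(cost)
for u, t in zip(rows, cols):
    print(f"UAV {u} -> task {t} (distance {cost[u, t]:.2f} km)")
print("total distance:", cost[rows, cols].sum().round(2), "km")
```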

29 pages, 1659 KiB  
Article
A Mixed-Integer Programming Framework for Drone Routing and Scheduling with Flexible Multiple Visits in Highway Traffic Monitoring
by Nasrin Mohabbati-Kalejahi, Sepideh Alavi and Oguz Toragay
Mathematics 2025, 13(15), 2427; https://doi.org/10.3390/math13152427 - 28 Jul 2025
Abstract
Traffic crashes and congestion generate high social and economic costs, yet traditional traffic monitoring methods, such as police patrols, fixed cameras, and helicopters, are costly, labor-intensive, and limited in spatial coverage. This paper presents a novel Drone Routing and Scheduling with Flexible Multiple Visits (DRSFMV) framework, an optimization model for planning drone-based highway monitoring under realistic operational constraints, including battery limits, variable monitoring durations, recharging at a depot, and target-specific inter-visit time limits. A mixed-integer nonlinear programming (MINLP) model and a linearized version (MILP) are presented to solve the problem. Due to the NP-hard nature of the underlying problem structure, a heuristic solver, Hexaly, is also used. A case study using real traffic census data from three Southern California counties tests the models across various network sizes and configurations. The MILP solves small and medium instances efficiently, and Hexaly produces high-quality solutions for large-scale networks. Results show clear trade-offs between drone availability and time-slot flexibility and demonstrate that stricter revisit constraints raise operational costs.
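To ground the routing-and-scheduling discussion, here is a deliberately stripped-down MILP in PuLP: drones are assigned to highway segments under a simple endurance budget while total travel time is minimized. It is a toy single-visit simplification for illustration only, not the DRSFMV model with recharging and inter-visit time limits; all data are invented.

```python
import pulp

# Toy data: 2 drones, 3 highway segments to monitor (all values hypothetical, minutes)
drones = ["d1", "d2"]
targets = ["s1", "s2", "s3"]
travel_min = {("d1", "s1"): 6, ("d1", "s2"): 9, ("d1", "s3"): 14,
              ("d2", "s1"): 11, ("d2", "s2"): 7, ("d2", "s3"): 5}
monitor_min = {"s1": 10, "s2": 8, "s3": 12}
battery_min = {"d1": 40, "d2": 35}           # usable flight time per sortie

prob = pulp.LpProblem("toy_drone_monitoring", pulp.LpMinimize)
x = pulp.LpVariable.dicts("assign", (drones, targets), cat="Binary")

# Objective: minimize total travel time
prob += pulp.lpSum(travel_min[d, s] * x[d][s] for d in drones for s in targets)

# Each segment is monitored exactly once
for s in targets:
    prob += pulp.lpSum(x[d][s] for d in drones) == 1

# Endurance: travel plus monitoring time must fit within each drone's battery
for d in drones:
    prob += pulp.lpSum((travel_min[d, s] + monitor_min[s]) * x[d][s] for s in targets) <= battery_min[d]

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print(pulp.LpStatus[prob.status])
print([(d, s) for d in drones for s in targets if x[d][s].value() == 1])
```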

17 pages, 6208 KiB  
Article
A Low-Cost Experimental Quadcopter Drone Design for Autonomous Search-and-Rescue Missions in GNSS-Denied Environments
by Shane Allan and Martin Barczyk
Drones 2025, 9(8), 523; https://doi.org/10.3390/drones9080523 - 25 Jul 2025
Abstract
Autonomous drones may be called on to perform search-and-rescue operations in environments without access to signals from the global navigation satellite system (GNSS), such as underground mines, subterranean caverns, or confined tunnels. While the technology to perform such missions has been demonstrated at events such as DARPA's Subterranean (SubT) Challenge, the hardware deployed for these missions relies on heavy and expensive sensors, such as LiDAR, carried by costly mobile platforms, such as legged robots and heavy-lift multicopters, creating barriers to deployment and training with this technology for all but the wealthiest search-and-rescue organizations. To address this issue, we have developed a custom four-rotor aerial drone platform built around low-cost, lightweight sensors in order to minimize cost and maximize flight time for search-and-rescue operations in GNSS-denied environments. We document the various issues we encountered while building and testing the vehicle and how they were solved, for instance, a novel redesign of the airframe to handle the aggressive yaw maneuvers commanded by the FUEL exploration framework running onboard the drone. The resulting system is successfully validated through an autonomous flight experiment performed on hardware in an underground environment without access to GNSS signals. The contribution of the article is to share our experiences with other groups interested in low-cost search-and-rescue drones and help them advance their own programs.

20 pages, 21323 KiB  
Article
C Band 360° Triangular Phase Shift Detector for Precise Vertical Landing RF System
by Víctor Araña-Pulido, B. Pablo Dorta-Naranjo, Francisco Cabrera-Almeida and Eugenio Jiménez-Yguácel
Appl. Sci. 2025, 15(15), 8236; https://doi.org/10.3390/app15158236 - 24 Jul 2025
Abstract
This paper presents a novel design for the precise vertical landing of drones based on the detection of three phase shifts in the range of ±180°. The design has three inputs at which the signal transmitted from an oscillator located at the landing point arrives with different delays. The circuit increases the aerial tracking volume relative to that achieved by detectors with theoretical unambiguous detection ranges of ±90°. The phase-shift measurement circuit uses an analog phase detector (mixer) with a maximum detection range of ±90° and a double multiplication of the input signals, in-phase and phase-shifted, without the need to fulfill the quadrature condition. The calibration procedure, phase detector curve modeling, and calculation of the input signal phase shift are significantly simplified by the use of an automatic gain control (AGC) on each branch, which keeps the input amplitudes to the analog phase detectors constant. A simple program to determine phase shifts and guidance instructions is proposed, which could be integrated into the same flight control platform, avoiding the need for additional processing components. A prototype has been manufactured in C band to illustrate the details of the design procedure. The circuit uses commercial components and microstrip technology and avoids line crossings by means of switches, which allows the design topology to be extrapolated to much higher frequencies. Calibration and measurements at 5.3 GHz show a dynamic range greater than 50 dB and a non-ambiguous detection range of ±180°. These specifications would allow the drone to be tracked during the landing maneuver within an inverted cone formed by the landing point and a surface with an 11 m radius at a height of 10 m, when a 4 cm spacing between RF inputs is considered. The errors of the phase shifts used in the landing maneuver are less than ±3°, which translates into 1.7% losses over the detector's theoretical range in the worst case. The circuit has a frequency bandwidth of 4.8 GHz to 5.6 GHz, considering a 3 dB variation in the input power when the AGC is limiting the output signal to 0 dBm at the circuit reference point of each branch. In addition, the evolution of the phases during the landing maneuver is shown by means of a small simulation program in which the drone trajectory moves inside and outside the tracking range of ±180°.
(This article belongs to the Section Applied Physics General)
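The key idea the abstract describes, multiplying each input by the reference both in-phase and phase-shifted so that the sign ambiguity of a single ±90° detector disappears, can be pictured as recovering an angle from a cosine-like and a sine-like reading. The NumPy sketch below assumes an ideal 90° shift and equal (AGC-limited) amplitudes for simplicity, even though the abstract notes the actual design does not require the quadrature condition.

```python
import numpy as np

def unambiguous_phase(v_inphase, v_quadrature):
    """Recover a phase shift in (-180°, 180°] from two mixer outputs.

    v_inphase    -- detector output proportional to cos(delta_phi)
    v_quadrature -- output of the same detector with the reference shifted 90°,
                    proportional to sin(delta_phi)
    Equal amplitudes are assumed (the role of the AGC described in the abstract).
    """
    return np.degrees(np.arctan2(v_quadrature, v_inphase))

# A single ±90° detector cannot tell +120° from -120°; two readings can.
for true_phase in (-120.0, -30.0, 45.0, 120.0, 179.0):
    i = np.cos(np.radians(true_phase))
    q = np.sin(np.radians(true_phase))
    print(true_phase, "->", round(unambiguous_phase(i, q), 1))
```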
