Search Results (366)

Search Parameters:
Keywords = aerial videos

23 pages, 13739 KiB  
Article
Traffic Accident Rescue Action Recognition Method Based on Real-Time UAV Video
by Bo Yang, Jianan Lu, Tao Liu, Bixing Zhang, Chen Geng, Yan Tian and Siyu Zhang
Drones 2025, 9(8), 519; https://doi.org/10.3390/drones9080519 - 24 Jul 2025
Viewed by 427
Abstract
Low-altitude drones, which are unimpeded by traffic congestion or urban terrain, have become a critical asset in emergency rescue missions. To address the current lack of emergency rescue data, UAV aerial videos were collected to create an experimental dataset for action classification and localization annotation. A total of 5082 keyframes were labeled with 1–5 targets each, and 14,412 instances of data were prepared (including flight altitude and camera angles) for action classification and position annotation. To mitigate the challenges posed by high-resolution drone footage with excessive redundant information, we propose the SlowFast-Traffic (SF-T) framework, a spatio-temporal sequence-based algorithm for recognizing traffic accident rescue actions. For more efficient extraction of target–background correlation features, we introduce the Actor-Centric Relation Network (ACRN) module, which employs temporal max pooling to enhance the time-dimensional features of static backgrounds, significantly reducing redundancy-induced interference. Additionally, smaller ROI feature map outputs are adopted to boost computational speed. To tackle class imbalance in incident samples, we integrate a Class-Balanced Focal Loss (CB-Focal Loss) function, effectively resolving rare-action recognition in specific rescue scenarios. We replace the original Faster R-CNN with YOLOX-s to improve the target detection rate. On our proposed dataset, the SF-T model achieves a mean average precision (mAP) of 83.9%, which is 8.5% higher than that of the standard SlowFast architecture while maintaining a processing speed of 34.9 tasks/s. Both accuracy-related metrics and computational efficiency are substantially improved. The proposed method demonstrates strong robustness and real-time analysis capabilities for modern traffic rescue action recognition. Full article
(This article belongs to the Special Issue Cooperative Perception for Modern Transportation)
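The Class-Balanced Focal Loss this abstract integrates has a standard published formulation (effective-number class weights combined with the focal modulating factor). The sketch below illustrates that general formulation in plain Python, assuming per-sample class probabilities are already computed; it is a sketch of the loss family, not the SF-T implementation.

```python
import math

def cb_focal_loss(probs, labels, samples_per_class, beta=0.999, gamma=2.0):
    """Class-Balanced Focal Loss on a batch. probs[i][c] is the predicted
    probability of class c for sample i; labels[i] is the true class index;
    samples_per_class[c] is the training count n_c for class c."""
    # Effective-number class weights: (1 - beta) / (1 - beta^n_c)
    weights = [(1.0 - beta) / (1.0 - beta ** n) for n in samples_per_class]
    # Normalize so the weights sum to the number of classes
    s = sum(weights)
    weights = [w * len(weights) / s for w in weights]

    total = 0.0
    for p_row, y in zip(probs, labels):
        p_t = p_row[y]                    # probability of the true class
        focal = (1.0 - p_t) ** gamma      # down-weights easy examples
        total += -weights[y] * focal * math.log(p_t)
    return total / len(labels)
```

With `beta` close to 1 the weights approach inverse class frequency, so rare rescue actions contribute more to the loss; the focal term additionally suppresses well-classified samples.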

18 pages, 5137 KiB  
Article
Comparative Analysis of Energy Efficiency and Position Stability of Sub-250 g Quadcopter and Bicopter with Similar Mass Under Varying Conditions
by Artur Kierzkowski, Mateusz Woźniak and Paweł Bury
Energies 2025, 18(14), 3728; https://doi.org/10.3390/en18143728 - 14 Jul 2025
Viewed by 339
Abstract
This paper investigates the energy efficiency and positional stability of two types of ultralight unmanned aerial vehicles (UAVs)—bicopter and quadcopter—both with mass below 250 g, under varying flight conditions. The study is motivated by increasing interest in low-weight drones due to their regulatory flexibility and application potential in constrained environments. A comparative methodology was adopted, involving the construction of both UAV types using identical components where possible, including motors, sensors, and power supply, differing only in propulsion configuration. Experimental tests were conducted in wind-free and wind-induced environments to assess power consumption and stability. The data were collected through onboard blackbox logging, and positional deviation was tracked via video analysis. Results show that while the quadcopter consistently demonstrated lower energy consumption (by 6–22%) and higher positional stability, the bicopter offered advantages in simplicity of frame design and reduced component count. However, the bicopter required extensive manual tuning of PID parameters due to the inherent instability introduced by servo-based control. The findings highlight the potential of bicopters in constrained applications, though they emphasize the need for precise control strategies and high-performance servos. The study fills a gap in empirical analysis of energy consumption in lightweight bicopter UAVs. Full article
(This article belongs to the Section B: Energy and Environment)

11 pages, 1733 KiB  
Article
PV Panels Fault Detection Video Method Based on Mini-Patterns
by Codrin Donciu, Marinel Costel Temneanu and Elena Serea
AppliedMath 2025, 5(3), 89; https://doi.org/10.3390/appliedmath5030089 - 10 Jul 2025
Viewed by 240
Abstract
The development of solar technologies and the widespread adoption of photovoltaic (PV) panels have significantly transformed the global energy landscape. PV panels have evolved from niche applications to become a primary source of electricity generation, driven by their environmental benefits and declining costs. However, the performance and operational lifespan of PV systems are often compromised by various faults, which can lead to efficiency losses and increased maintenance costs. Consequently, effective and timely fault detection methods have become a critical focus of current research in the field. This work proposes an innovative video-based method for the dimensional evaluation and detection of malfunctions in solar panels, utilizing processing techniques applied to aerial images captured by unmanned aerial vehicles (drones). The method is based on a novel mini-pattern matching algorithm designed to identify specific defect features despite challenging environmental conditions such as strong gradients of non-uniform lighting, partial shading effects, or the presence of accidental deposits that obscure panel surfaces. The proposed approach aims to enhance the accuracy and reliability of fault detection, enabling more efficient monitoring and maintenance of PV installations. Full article

26 pages, 3670 KiB  
Article
Video Instance Segmentation Through Hierarchical Offset Compensation and Temporal Memory Update for UAV Aerial Images
by Ying Huang, Yinhui Zhang, Zifen He and Yunnan Deng
Sensors 2025, 25(14), 4274; https://doi.org/10.3390/s25144274 - 9 Jul 2025
Viewed by 285
Abstract
Despite the pivotal role of unmanned aerial vehicles (UAVs) in intelligent inspection tasks, existing video instance segmentation methods struggle with irregular deforming targets, leading to inconsistent segmentation results due to ineffective feature offset capture and temporal correlation modeling. To address this issue, we propose a hierarchical offset compensation and temporal memory update method for video instance segmentation (HT-VIS) with a high generalization ability. Firstly, a hierarchical offset compensation (HOC) module in the form of a sequential and parallel connection is designed to perform deformable offset for the same flexible target across frames, which benefits from compensating for spatial motion features at the time sequence. Next, the temporal memory update (TMU) module is developed by employing convolutional long-short-term memory (ConvLSTM) between the current and adjacent frames to establish the temporal dynamic context correlation and update the current frame feature effectively. Finally, extensive experimental results demonstrate the superiority of the proposed HDNet method when applied to the public YouTubeVIS-2019 dataset and a self-built UAV-Seg segmentation dataset. On four typical datasets (i.e., Zoo, Street, Vehicle, and Sport) extracted from YoutubeVIS-2019 according to category characteristics, the proposed HT-VIS outperforms the state-of-the-art CNN-based VIS methods CrossVIS by 3.9%, 2.0%, 0.3%, and 3.8% in average segmentation accuracy, respectively. On the self-built UAV-VIS dataset, our HT-VIS with PHOC surpasses the baseline SipMask by 2.1% and achieves the highest average segmentation accuracy of 37.4% in the CNN-based methods, demonstrating the effectiveness and robustness of our proposed framework. Full article
(This article belongs to the Section Sensing and Imaging)

25 pages, 4232 KiB  
Article
Multimodal Fusion Image Stabilization Algorithm for Bio-Inspired Flapping-Wing Aircraft
by Zhikai Wang, Sen Wang, Yiwen Hu, Yangfan Zhou, Na Li and Xiaofeng Zhang
Biomimetics 2025, 10(7), 448; https://doi.org/10.3390/biomimetics10070448 - 7 Jul 2025
Viewed by 474
Abstract
This paper presents FWStab, a specialized video stabilization dataset tailored for flapping-wing platforms. The dataset encompasses five typical flight scenarios, featuring 48 video clips with intense dynamic jitter. The corresponding Inertial Measurement Unit (IMU) sensor data are synchronously collected, which jointly provide reliable support for multimodal modeling. Based on this, to address the issue of poor image acquisition quality due to severe vibrations in aerial vehicles, this paper proposes a multi-modal signal fusion video stabilization framework. This framework effectively integrates image features and inertial sensor features to predict smooth and stable camera poses. During the video stabilization process, the true camera motion originally estimated based on sensors is warped to the smooth trajectory predicted by the network, thereby optimizing the inter-frame stability. This approach maintains the global rigidity of scene motion, avoids visual artifacts caused by traditional dense optical flow-based spatiotemporal warping, and rectifies rolling shutter-induced distortions. Furthermore, the network is trained in an unsupervised manner by leveraging a joint loss function that integrates camera pose smoothness and optical flow residuals. When coupled with a multi-stage training strategy, this framework demonstrates remarkable stabilization adaptability across a wide range of scenarios. The entire framework employs Long Short-Term Memory (LSTM) to model the temporal characteristics of camera trajectories, enabling high-precision prediction of smooth trajectories. Full article
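The joint loss described here balances camera-pose smoothness against fidelity to the true motion. The sketch below shows a minimal 1-D version of such a smoothness objective: a data term plus penalties on first- and second-order differences of the smoothed trajectory. The weights are illustrative, and the optical-flow residual term from the paper is omitted.

```python
def smoothness_loss(poses, smoothed):
    """Penalize jitter in a predicted camera trajectory. `poses` is the
    real (shaky) 1-D pose sequence, `smoothed` the candidate stabilized
    one. Returns data term + weighted velocity and acceleration terms."""
    n = len(smoothed)
    # Data term: stay close to the real trajectory
    data = sum((s - p) ** 2 for s, p in zip(smoothed, poses)) / n
    # First-order differences: penalize abrupt motion
    vel = sum((smoothed[i + 1] - smoothed[i]) ** 2 for i in range(n - 1)) / (n - 1)
    # Second-order differences: penalize acceleration (jitter)
    acc = sum((smoothed[i + 1] - 2 * smoothed[i] + smoothed[i - 1]) ** 2
              for i in range(1, n - 1)) / (n - 2)
    return data + 10.0 * vel + 100.0 * acc
```

A network trained against such a loss learns to output trajectories that hug the real camera path while removing high-frequency shake.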

24 pages, 8079 KiB  
Article
Enhancing the Scale Adaptation of Global Trackers for Infrared UAV Tracking
by Zicheng Feng, Wenlong Zhang, Erting Pan, Donghui Liu and Qifeng Yu
Drones 2025, 9(7), 469; https://doi.org/10.3390/drones9070469 - 1 Jul 2025
Viewed by 360
Abstract
Tracking unmanned aerial vehicles (UAVs) in infrared video is an essential technology for the anti-UAV task. Given frequent UAV target disappearances caused by occlusion or moving out of view, global trackers, which have the unique ability to recapture targets, are widely used in infrared UAV tracking. However, global trackers perform poorly when dealing with large target scale variation because they cannot maintain approximate consistency between target sizes in the template and the search region. To enhance the scale adaptation of global trackers, we propose a plug-and-play scale adaptation enhancement module (SAEM). This can generate a scale adaptation enhancement kernel according to the target size in the previous frame, and then perform implicit scale adaptation enhancement on the extracted target template features. To optimize training, we introduce an auxiliary branch to supervise the learning of SAEM and add Gaussian noise to the input size to improve its robustness. In addition, we propose a one-stage anchor-free global tracker (OSGT), which has a more concise structure than other global trackers to meet the real-time requirement. Extensive experiments on three Anti-UAV Challenge datasets and the Anti-UAV410 dataset demonstrate the superior performance of our method and verify that our proposed SAEM can effectively enhance the scale adaptation of existing global trackers. Full article
(This article belongs to the Special Issue UAV Detection, Classification, and Tracking)

25 pages, 5064 KiB  
Article
Enhancing Drone Detection via Transformer Neural Network and Positive–Negative Momentum Optimizers
by Pavel Lyakhov, Denis Butusov, Vadim Pismennyy, Ruslan Abdulkadirov, Nikolay Nagornov, Valerii Ostrovskii and Diana Kalita
Big Data Cogn. Comput. 2025, 9(7), 167; https://doi.org/10.3390/bdcc9070167 - 26 Jun 2025
Viewed by 523
Abstract
The rapid development of unmanned aerial vehicles (UAVs) has had a significant impact on the growth of the economic, industrial, and social welfare of society. The possibility of reaching places that are difficult and dangerous for humans to access with minimal use of third-party resources increases the efficiency and quality of maintenance of construction structures, agriculture, and exploration, which are carried out with the help of drones with a predetermined trajectory. The widespread use of UAVs has caused problems with the control of the drones’ correctness following a given route, which leads to emergencies and accidents. Therefore, UAV monitoring with video cameras is of great importance. In this paper, we propose a Yolov12 architecture with positive–negative pulse-based optimization algorithms to solve the problem of drone detection on video data. Self-attention-based mechanisms in transformer neural networks (NNs) improved the quality of drone detection on video. The developed algorithms for training NN architectures improved the accuracy of drone detection by achieving the global extremum of the loss function in fewer epochs using positive–negative pulse-based optimization algorithms. The proposed approach improved object detection accuracy by 2.8 percentage points compared to known state-of-the-art analogs. Full article
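The positive–negative momentum idea referenced here maintains two momentum buffers and applies an update that over-weights the fresher buffer while subtracting the staler one. The sketch below is an illustrative reconstruction of that general scheme, not the paper's optimizer; the buffer alternation, `beta0` weighting, and hyperparameters are assumptions.

```python
def pnm_step(params, grads, state, lr=0.01, beta=0.9, beta0=1.0):
    """One sketch step of a positive-negative momentum update. Two
    momentum buffers are refreshed on alternating steps; the applied
    update is (1 + beta0) * fresh - beta0 * stale, which injects
    controlled stochasticity that can help escape sharp minima."""
    m_even, m_odd, t = state
    active, stale = (m_even, m_odd) if t % 2 == 0 else (m_odd, m_even)
    for i, g in enumerate(grads):
        active[i] = beta * active[i] + (1 - beta) * g  # refresh one buffer
        update = (1 + beta0) * active[i] - beta0 * stale[i]
        params[i] -= lr * update
    return params, (m_even, m_odd, t + 1)
```

With `beta0 = 0` this degenerates to plain momentum SGD; larger `beta0` increases the positive–negative contrast between the two buffers.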

18 pages, 29416 KiB  
Article
Novel Matching Algorithm for Effective Drone Detection and Identification by Radio Feature Extraction
by Teng Wu, Yan Du, Runze Mao, Hui Xie, Shengjun Wei and Changzhen Hu
Information 2025, 16(7), 541; https://doi.org/10.3390/info16070541 - 25 Jun 2025
Viewed by 540
Abstract
With the rapid advancement of drone technology, the demand for the precise detection and identification of drones has been steadily increasing. Existing detection methods, such as radio frequency (RF), radar, optical, and acoustic technologies, often fail to meet the accuracy and speed requirements of real-world counter-drone scenarios. To address this challenge, this paper proposes a novel drone detection and identification algorithm based on transmission signal analysis. The proposed algorithm introduces an innovative feature extraction method that enhances signal analysis by extracting key characteristics from the signals, including bandwidth, power, duration, and interval time. Furthermore, we developed a signal processing algorithm that achieves efficient and accurate drone identification through bandwidth filtering and the matching of duration and interval time sequences. The effectiveness of the proposed approach is validated using the DroneRF820 dataset, which is specifically designed for drone identification and counter-drone applications. The experimental results demonstrate that the proposed method enables highly accurate and rapid drone detection. Full article
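The matching of duration and interval-time sequences described in this abstract can be pictured as comparing an observed burst pattern against a library of known signatures under a tolerance. The sketch below is a simplified stand-in: the signature names, the (duration, interval) pairing, and the 15% relative tolerance are invented for illustration, and the bandwidth-filtering stage is omitted.

```python
def match_signature(observed, library, tol=0.15):
    """Match an observed RF burst pattern against known drone signatures.
    Each signature is a list of (duration_ms, interval_ms) pairs; a
    candidate matches when every pair agrees within relative tolerance."""
    def close(a, b):
        return abs(a - b) <= tol * b

    best = None
    for name, sig in library.items():
        if len(sig) != len(observed):
            continue  # burst counts must agree before element-wise matching
        if all(close(d, sd) and close(i, si)
               for (d, i), (sd, si) in zip(observed, sig)):
            best = name
    return best
```

In practice a real matcher would also score partial matches and resolve ties, but the tolerance-band comparison above captures the core of sequence-based identification.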

24 pages, 3610 KiB  
Article
Safety Evaluation of Highways with Sharp Curves in Highland Mountainous Areas Using an Enhanced Stacking and Low-Cost Dataset Production Method
by Xu Gong, Wu Bo, Fei Chen, Xinhang Wu, Xue Zhang, Delu Li, Fengying Gou and Haisheng Ren
Sustainability 2025, 17(13), 5857; https://doi.org/10.3390/su17135857 - 25 Jun 2025
Viewed by 347
Abstract
This paper proposes an integrated tree model architecture and a low-cost data construction method based on an improved Stacking strategy. It systematically analyzes the importance of safety indicators for mountainous sharp bends in plateau regions and conducts safety evaluation and optimization-strategy research for ten typical sharp-bend road segments in Tibet. In response to the challenges of traditional data collection in Tibet’s unique geographical and policy constraints, we innovatively use drone aerial video as the data source, integrating Tracker motion trajectory analysis, SegFormer road segmentation, and CAD annotation techniques to construct a dataset covering multi-dimensional features of “human–vehicle–road–environment” for mountainous plateau sharp-bend highways. Compared with similar studies, the cost of this dataset is significantly lower. Based on the strong interpretability of tree models and the excellent generalization ability of ensemble learning, we propose an improved Stacking strategy tree model structure to interpret the importance of each indicator. The Spearman correlation coefficient and TOPSIS algorithm are used to conduct safety evaluation for ten sharp-bend roads in Tibet. The results show that the output of the improved Stacking strategy and the sensitivity analysis of the three tree models indicate that curvature variation rate and acceleration are the most significant factors influencing safety, while speed and road width are secondary factors. The study also provides a safety ranking for the ten selected sharp-bend roads, offering a reference for the 318 Quality Improvement Project. From the perspective of indicator importance, curvature variation rate, acceleration, vehicle speed, and road width are crucial for the safety of mountainous plateau sharp-bend roads. It is recommended to implement speed limits for vehicles and widen the road-bend radius. The technical framework constructed in this study provides a reusable methodology for safety assessment of high-altitude roads in complex terrains. Full article
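The TOPSIS ranking used in this abstract is a well-defined multi-criteria procedure: normalize the decision matrix, weight it, locate the ideal and anti-ideal points, and score each alternative by relative closeness to the ideal. The sketch below shows that standard procedure; the weights and benefit/cost flags are illustrative, whereas the paper derives criteria importances from its tree models.

```python
import math

def topsis(matrix, weights, benefit):
    """Score alternatives (rows of `matrix`) with TOPSIS. benefit[j] is
    True when a higher value of criterion j is better (safer)."""
    ncols = len(matrix[0])
    # Vector-normalize each column, then apply the criterion weights
    norms = [math.sqrt(sum(row[j] ** 2 for row in matrix)) for j in range(ncols)]
    v = [[weights[j] * row[j] / norms[j] for j in range(ncols)] for row in matrix]
    # Ideal and anti-ideal points per criterion
    ideal = [max(col) if benefit[j] else min(col) for j, col in enumerate(zip(*v))]
    anti = [min(col) if benefit[j] else max(col) for j, col in enumerate(zip(*v))]
    scores = []
    for row in v:
        d_pos = math.sqrt(sum((x - i) ** 2 for x, i in zip(row, ideal)))
        d_neg = math.sqrt(sum((x - a) ** 2 for x, a in zip(row, anti)))
        scores.append(d_neg / (d_pos + d_neg))  # relative closeness in [0, 1]
    return scores
```

Ranking the road segments then amounts to sorting them by descending closeness score.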

40 pages, 3342 KiB  
Article
Enhancing Infotainment Services in Integrated Aerial–Ground Mobility Networks
by Chenn-Jung Huang, Liang-Chun Chen, Yu-Sen Cheng, Ken-Wen Hu and Mei-En Jian
Sensors 2025, 25(13), 3891; https://doi.org/10.3390/s25133891 - 22 Jun 2025
Viewed by 366
Abstract
The growing demand for bandwidth-intensive vehicular applications—particularly ultra-high-definition streaming and immersive panoramic video—is pushing current network infrastructures beyond their limits, especially in urban areas with severe congestion and degraded user experience. To address these challenges, we propose an aerial-assisted vehicular network architecture that integrates 6G base stations, distributed massive MIMO networks, visible light communication (VLC), and a heterogeneous aerial network of high-altitude platforms (HAPs) and drones. At its core is a context-aware dynamic bandwidth allocation algorithm that intelligently routes infotainment data through optimal aerial relays, bridging connectivity gaps in coverage-challenged areas. Simulation results show a 47% increase in average available bandwidth over conventional first-come-first-served schemes. Our system also satisfies the stringent latency and reliability requirements of emergency and live infotainment services, creating a sustainable ecosystem that enhances user experience, service delivery, and network efficiency. This work marks a key step toward enabling high-bandwidth, low-latency smart mobility in next-generation urban networks. Full article
(This article belongs to the Special Issue Sensing and Machine Learning Control: Progress and Applications)

27 pages, 1880 KiB  
Article
UAV-Enabled Video Streaming Architecture for Urban Air Mobility: A 6G-Based Approach Toward Low-Altitude 3D Transportation
by Liang-Chun Chen, Chenn-Jung Huang, Yu-Sen Cheng, Ken-Wen Hu and Mei-En Jian
Drones 2025, 9(6), 448; https://doi.org/10.3390/drones9060448 - 18 Jun 2025
Viewed by 693
Abstract
As urban populations expand and congestion intensifies, traditional ground transportation struggles to satisfy escalating mobility demands. Unmanned Electric Vertical Take-Off and Landing (eVTOL) aircraft, as a key enabler of Urban Air Mobility (UAM), leverage low-altitude airspace to alleviate ground traffic while offering environmentally sustainable solutions. However, supporting high bandwidth, real-time video applications, such as Virtual Reality (VR), Augmented Reality (AR), and 360° streaming, remains a major challenge, particularly within bandwidth-constrained metropolitan regions. This study proposes a novel Unmanned Aerial Vehicle (UAV)-enabled video streaming architecture that integrates 6G wireless technologies with intelligent routing strategies across cooperative airborne nodes, including unmanned eVTOLs and High-Altitude Platform Systems (HAPS). By relaying video data from low-congestion ground base stations to high-demand urban zones via autonomous aerial relays, the proposed system enhances spectrum utilization and improves streaming stability. Simulation results validate the framework’s capability to support immersive media applications in next-generation autonomous air mobility systems, aligning with the vision of scalable, resilient 3D transportation infrastructure. Full article

28 pages, 1707 KiB  
Review
Video Stabilization: A Comprehensive Survey from Classical Mechanics to Deep Learning Paradigms
by Qian Xu, Qian Huang, Chuanxu Jiang, Xin Li and Yiming Wang
Modelling 2025, 6(2), 49; https://doi.org/10.3390/modelling6020049 - 17 Jun 2025
Viewed by 971
Abstract
Video stabilization is a critical technology for enhancing video quality by eliminating or reducing image instability caused by camera shake, thereby improving the visual viewing experience. It has deeply integrated into diverse applications—including handheld recording, UAV aerial photography, and vehicle-mounted surveillance. Propelled by advances in deep learning, data-driven stabilization methods have emerged as prominent solutions, demonstrating superior efficacy in handling jitter while achieving enhanced processing efficiency. This review systematically examines the field of video stabilization. First, this paper delineates the paradigm shift from classical to deep learning-based approaches. Subsequently, it elucidates conventional digital stabilization frameworks and their deep learning counterparts along with establishing standardized assessment metrics and benchmark datasets for comparative analysis. Finally, this review addresses critical challenges such as robustness limitations in complex motion scenarios and latency constraints in real-time processing. By integrating interdisciplinary perspectives, this work provides scholars with academically rigorous and practically relevant insights to advance video stabilization research. Full article

35 pages, 21267 KiB  
Article
Unmanned Aerial Vehicle–Unmanned Ground Vehicle Centric Visual Semantic Simultaneous Localization and Mapping Framework with Remote Interaction for Dynamic Scenarios
by Chang Liu, Yang Zhang, Liqun Ma, Yong Huang, Keyan Liu and Guangwei Wang
Drones 2025, 9(6), 424; https://doi.org/10.3390/drones9060424 - 10 Jun 2025
Viewed by 1266
Abstract
In this study, we introduce an Unmanned Aerial Vehicle (UAV) centric visual semantic simultaneous localization and mapping (SLAM) framework that integrates RGB–D cameras, inertial measurement units (IMUs), and a 5G–enabled remote interaction module. Our system addresses three critical limitations in existing approaches: (1) Distance constraints in remote operations; (2) Static map assumptions in dynamic environments; and (3) High–dimensional perception requirements for UAV–based applications. By combining YOLO–based object detection with epipolar–constraint-based dynamic feature removal, our method achieves real-time semantic mapping while rejecting motion artifacts. The framework further incorporates a dual–channel communication architecture to enable seamless human–in–the–loop control over UAV–Unmanned Ground Vehicle (UGV) teams in large–scale scenarios. Experimental validation across indoor and outdoor environments indicates that the system can achieve a detection rate of up to 75 frames per second (FPS) on an NVIDIA Jetson AGX Xavier using YOLO–FASTEST, ensuring the rapid identification of dynamic objects. In dynamic scenarios, the localization accuracy attains an average absolute pose error (APE) of 0.1275 m. This outperforms state–of–the–art methods like Dynamic–VINS (0.211 m) and ORB–SLAM3 (0.148 m) on the EuRoC MAV Dataset. The dual-channel communication architecture (Web Real–Time Communication (WebRTC) for video and Message Queuing Telemetry Transport (MQTT) for telemetry) reduces bandwidth consumption by 65% compared to traditional TCP–based protocols. Moreover, our hybrid dynamic feature filtering can reject 89% of dynamic features in occluded scenarios, guaranteeing accurate mapping in complex environments. Our framework represents a significant advancement in enabling intelligent UAVs/UGVs to navigate and interact in complex, dynamic environments, offering real-time semantic understanding and accurate localization. Full article
(This article belongs to the Special Issue Advances in Perception, Communications, and Control for Drones)
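The epipolar-constraint-based dynamic-feature removal mentioned here rests on a simple geometric test: for a static point, its match in the next frame must lie on the epipolar line induced by the fundamental matrix, so a large point-to-line distance flags a moving object. The sketch below shows that test in plain Python; the fundamental matrix in the usage example corresponds to a contrived pure-sideways-translation case, and the threshold is illustrative.

```python
def is_dynamic(p1, p2, F, thresh=1.0):
    """Flag a feature match as dynamic when p2 lies too far from the
    epipolar line F @ p1 of its match in the previous frame. Points are
    homogeneous (x, y, 1); F is a 3x3 fundamental matrix."""
    line = [sum(F[i][k] * p1[k] for k in range(3)) for i in range(3)]  # l = F p1
    num = abs(sum(line[i] * p2[i] for i in range(3)))                  # |p2^T l|
    den = (line[0] ** 2 + line[1] ** 2) ** 0.5                        # line norm
    return num / den > thresh
```

Features failing the test are discarded before pose estimation, which is what keeps moving people and vehicles from corrupting the SLAM map.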

24 pages, 2229 KiB  
Article
Mathematical Modeling of Optimal Drone Flight Trajectories for Enhanced Object Detection in Video Streams Using Kolmogorov–Arnold Networks
by Aida Issembayeva, Oleksandr Kuznetsov, Anargul Shaushenova, Ardak Nurpeisova, Gabit Shuitenov and Maral Ongarbayeva
Technologies 2025, 13(6), 235; https://doi.org/10.3390/technologies13060235 - 6 Jun 2025
Viewed by 937
Abstract
This study addresses the critical challenge of optimizing drone flight parameters for enhanced object detection in video streams. While most research focuses on improving detection algorithms, the relationship between flight parameters and detection performance remains poorly understood. We present a novel approach using Kolmogorov–Arnold Networks (KANs) to model complex, non-linear relationships between altitude, pitch angle, speed, and object detection performance. Our main contributions include the following: (1) the systematic analysis of flight parameters’ effects on detection performance using the AU-AIR dataset, (2) development of a KAN-based mathematical model achieving R2 = 0.99, (3) identification of optimal flight parameters through multi-start optimization, and (4) creation of a flexible implementation framework adaptable to different UAV platforms. Sensitivity analysis confirms the solution’s robustness with only 7.3% performance degradation under ±10% parameter variations. This research bridges flight operations and detection algorithms, offering practical guidelines that enhance the detection capability by optimizing image acquisition rather than modifying detection algorithms. Full article
(This article belongs to the Special Issue AI Robotics Technologies and Their Applications)
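The multi-start optimization this abstract mentions is a generic remedy for non-convex objectives: run a local search from several random initializations and keep the best result. The sketch below shows one minimal variant (random restarts around a hill climber); the objective and bounds are placeholders for a fitted detection-performance model such as the paper's KAN, and all hyperparameters are illustrative.

```python
import random

def multi_start_maximize(f, bounds, starts=20, iters=200, step=0.1, seed=0):
    """Maximize f over box `bounds` via multi-start hill climbing.
    Each start draws a random point, then repeatedly proposes Gaussian
    perturbations and keeps only improvements; the best of all starts
    is returned, reducing the risk of a poor local optimum."""
    rng = random.Random(seed)
    best_x, best_v = None, float("-inf")
    for _ in range(starts):
        x = [rng.uniform(lo, hi) for lo, hi in bounds]
        v = f(x)
        for _ in range(iters):
            cand = [min(max(xi + rng.gauss(0, step * (hi - lo)), lo), hi)
                    for xi, (lo, hi) in zip(x, bounds)]
            cv = f(cand)
            if cv > v:
                x, v = cand, cv
        if v > best_v:
            best_x, best_v = x, v
    return best_x, best_v
```

Applied to flight parameters, each coordinate of `x` would be one parameter (altitude, pitch angle, speed) and `f` the modeled detection score.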

28 pages, 10289 KiB  
Article
Synchronized Multi-Point UAV-Based Traffic Monitoring for Urban Infrastructure Decision Support
by Igor Kabashkin, Alua Kulmurzina, Batyrlan Nadimov, Gulnar Tlepiyeva, Zura Sansyzbayeva and Timur Sultanov
Drones 2025, 9(5), 370; https://doi.org/10.3390/drones9050370 - 14 May 2025
Viewed by 976
Abstract
This study presents a comprehensive methodology for urban traffic monitoring and infrastructure decision making, centered on synchronous, simultaneous aerial data collection through a distributed multi-point UAV deployment. Conducted in the GreenLine district of Astana, Kazakhstan, the research utilized a coordinated fleet of UAVs to capture real-time video footage at 30 critical observation points during peak traffic periods, enabling a network-wide view of traffic dynamics. The collected data were processed to extract key traffic parameters, such as flow rates, vehicle speeds, and delays, which informed the calibration of a detailed traffic simulation model. Based on this model, six infrastructure development scenarios were evaluated using a multi-criteria decision-making framework to identify the most effective intervention strategies. This study introduces a replicable, data-driven approach that links synchronized UAV sensing with simulation-based evaluation, offering a practical decision support tool for improving urban infrastructure performance within the context of smart and rapidly evolving cities. Full article
(This article belongs to the Section Innovative Urban Mobility)
