Search Results (1,640)

Search Parameters:
Keywords = drone vehicle

24 pages, 7306 KB  
Article
Drone-Based Maritime Anomaly Detection with YOLO and Motion/Appearance Fusion
by Nutchanon Suvittawat, De Wen Soh and Sutthiphong Srigrarom
Remote Sens. 2026, 18(3), 412; https://doi.org/10.3390/rs18030412 - 26 Jan 2026
Abstract
Maritime surveillance is critical for ensuring the safety and continuity of sea logistics, port operations, and coastal activities in the presence of anomalies such as unlawful maritime activities, security-related incidents, and anomalous events (e.g., tsunamis or aggressive marine wildlife). Recent advances in unmanned aerial vehicles (UAVs)/drones and computer vision enable automated, wide-area monitoring that can reduce dependence on continuous human observation and mitigate the limitations of traditional methods in complex maritime environments (e.g., waves, ship clutter, and marine animal movement). This study proposes a hybrid anomaly detection and tracking pipeline that integrates YOLOv12, as the primary object detector, with two auxiliary modules: (i) motion assistance for tracking moving anomalies and (ii) stillness (appearance) assistance for tracking slow-moving or stationary anomalies. The system is trained and evaluated on a custom maritime dataset captured using a DJI Mini 2 drone operating around a port area near Bayshore MRT Station (TE29), Singapore. Windsurfers are used as proxy (dummy) anomalies because real anomaly footage is restricted for security reasons. On the held-out test set, the trained model achieves over 90% in Precision, Recall, and mAP50 across all classes. When deployed on real maritime video sequences, the pipeline attains a mean Precision of 92.89% (SD 13.31), a mean Recall of 90.44% (SD 15.24), and a mean Accuracy of 98.50% (SD 2.00), indicating strong potential for real-world maritime anomaly detection. This proof of concept provides a basis for future deployment and retraining on genuine anomaly footage obtained from relevant authorities to further enhance operational readiness for maritime and coastal security.
27 pages, 49730 KB  
Article
AMSRDet: An Adaptive Multi-Scale UAV Infrared-Visible Remote Sensing Vehicle Detection Network
by Zekai Yan and Yuheng Li
Sensors 2026, 26(3), 817; https://doi.org/10.3390/s26030817 - 26 Jan 2026
Abstract
Unmanned Aerial Vehicle (UAV) platforms enable flexible and cost-effective vehicle detection for intelligent transportation systems, yet small-scale vehicles in complex aerial scenes pose substantial challenges from extreme scale variations, environmental interference, and single-sensor limitations. We present AMSRDet (Adaptive Multi-Scale Remote Sensing Detector), an adaptive multi-scale detection network fusing infrared (IR) and visible (RGB) modalities for robust UAV-based vehicle detection. Our framework comprises four novel components: (1) a MobileMamba-based dual-stream encoder extracting complementary features via Selective State-Space 2D (SS2D) blocks with linear complexity O(HWC), achieving 2.1× efficiency improvement over standard Transformers; (2) a Cross-Modal Global Fusion (CMGF) module capturing global dependencies through spatial-channel attention while suppressing modality-specific noise via adaptive gating; (3) a Scale-Coordinate Attention Fusion (SCAF) module integrating multi-scale features via coordinate attention and learned scale-aware weighting, improving small object detection by 2.5 percentage points; and (4) a Separable Dynamic Decoder generating scale-adaptive predictions through content-aware dynamic convolution, reducing computational cost by 48.9% compared to standard DETR decoders. On the DroneVehicle dataset, AMSRDet achieves 45.8% mAP@0.5:0.95 (81.2% mAP@0.5) at 68.3 Frames Per Second (FPS) with 28.6 million (M) parameters and 47.2 Giga Floating Point Operations (GFLOPs), outperforming twenty state-of-the-art detectors including YOLOv12 (+0.7% mAP), DEIM (+0.8% mAP), and Mamba-YOLO (+1.5% mAP). Cross-dataset evaluation on Camera-vehicle yields 52.3% mAP without fine-tuning, demonstrating strong generalization across viewpoints and scenarios.
(This article belongs to the Special Issue AI and Smart Sensors for Intelligent Transportation Systems)
25 pages, 4936 KB  
Article
Drone-Enabled Non-Invasive Ultrasound Method for Rodent Deterrence
by Marija Ratković, Vasilije Kovačević, Matija Marijan, Maksim Kostadinov, Tatjana Miljković and Miloš Bjelić
Drones 2026, 10(2), 84; https://doi.org/10.3390/drones10020084 - 25 Jan 2026
Abstract
Unmanned aerial vehicles open new possibilities for developing technologies that support more sustainable and efficient agriculture. This paper presents a non-invasive method for repelling rodents from crop fields using ultrasound. The proposed system is implemented as a spherical-cap ultrasound loudspeaker array consisting of eight transducers, mounted on a drone that overflies the field while emitting sound in the 20–70 kHz range. The hardware design includes both the loudspeaker array and a custom printed circuit board hosting power amplifiers and a signal generator tailored to drive multiple ultrasonic transducers. In parallel, a genetic algorithm is used to compute flight paths that maximize coverage and increase the probability of driving rodents away from the protected area. As part of the validation phase, artificial intelligence models for rodent detection using a thermal camera are developed to provide quantitative feedback on system performance. The complete prototype is evaluated through a series of experiments conducted both in controlled laboratory conditions and in the field. Field trials highlight which parts of the concept are already effective and identify open challenges that need to be addressed in future work to move from a research prototype toward a deployable product.
(This article belongs to the Special Issue Advances of UAV in Precision Agriculture—2nd Edition)
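The flight-path optimization the abstract describes can be sketched with a toy genetic algorithm. Everything below (grid size, ensonified radius, waypoint count, GA hyperparameters) is an illustrative assumption, not the authors' implementation:

```python
import random

GRID = 12        # field modeled as a GRID x GRID cell raster (assumed size)
RADIUS = 2       # cells ensonified around each hover point (assumed)
N_WAYPOINTS = 8  # waypoints per flight path (assumed)

random.seed(42)  # reproducible toy run

def coverage(path):
    """Fraction of field cells within RADIUS (Chebyshev) of any waypoint."""
    covered = set()
    for x, y in path:
        for dx in range(-RADIUS, RADIUS + 1):
            for dy in range(-RADIUS, RADIUS + 1):
                cx, cy = x + dx, y + dy
                if 0 <= cx < GRID and 0 <= cy < GRID:
                    covered.add((cx, cy))
    return len(covered) / (GRID * GRID)

def random_path():
    return [(random.randrange(GRID), random.randrange(GRID))
            for _ in range(N_WAYPOINTS)]

def evolve(pop_size=30, generations=40, mut_rate=0.2):
    """Maximize coverage with tournament selection, one-point crossover,
    point mutation, and elitism (the best path always survives)."""
    pop = [random_path() for _ in range(pop_size)]
    best = max(pop, key=coverage)
    for _ in range(generations):
        parents = [max(random.sample(pop, 3), key=coverage)
                   for _ in range(pop_size)]
        children = []
        for i in range(pop_size):
            a, b = parents[i], parents[(i + 1) % pop_size]
            cut = random.randrange(1, N_WAYPOINTS)
            child = a[:cut] + b[cut:]
            if random.random() < mut_rate:
                child[random.randrange(N_WAYPOINTS)] = (
                    random.randrange(GRID), random.randrange(GRID))
            children.append(child)
        pop = children[:-1] + [best]  # elitism keeps the best path
        best = max(pop, key=coverage)
    return best

best_path = evolve()
```

A real planner would also penalize path length and battery use; the fitness here scores acoustic coverage only.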
18 pages, 6362 KB  
Article
From Human Teams to Autonomous Swarms: A Reinforcement Learning-Based Benchmarking Framework for Unmanned Aerial Vehicle Search and Rescue Missions
by Julian Bialas, Mohammad Reza Mohebbi, Michiel J. van Veelen, Abraham Mejia-Aguilar, Robert Kathrein and Mario Döller
Drones 2026, 10(2), 79; https://doi.org/10.3390/drones10020079 - 23 Jan 2026
Abstract
The adoption of novel technologies such as Unmanned Aerial Vehicles (UAVs) in Search and Rescue (SAR) operations remains limited. As a result, their full potential is not yet realized. Although UAVs have been deployed on an ad hoc basis, typically under manual control by dedicated operators, assisted and fully autonomous configurations remain largely unexplored. In this study, three SAR frameworks are systematically evaluated within a unified benchmarking framework: conventional ground missions, UAV-assisted missions, and fully autonomous UAV operations. Target localization time was quantified as the key performance indicator and used to compare the frameworks. The conventional and assisted frameworks were tested with physical hardware in a controlled outdoor setting, in which rescue teams responded to simulated callouts. The autonomous swarm framework was simulated as a multi-agent Reinforcement Learning (RL) method using the Proximal Policy Optimization (PPO) algorithm, enabling the optimization of decentralized cooperative actions for efficient exploration of a partially observed three-dimensional environment. Our results demonstrate that the autonomous swarm significantly outperformed the conventional and assisted approaches in terms of speed and coverage. Finally, a detailed depiction of the framework's integration into an operational system is provided.
23 pages, 53605 KB  
Article
Multispectral Sparse Cross-Attention Guided Mamba Network for Small Object Detection in Remote Sensing
by Wen Xiang, Yamin Li, Liu Duan, Qifeng Wu, Jiaqi Ruan, Yucheng Wan and Sihan Wu
Remote Sens. 2026, 18(3), 381; https://doi.org/10.3390/rs18030381 - 23 Jan 2026
Abstract
Remote sensing small object detection remains a challenging task due to limited feature representation and interference from complex backgrounds. Existing methods that rely exclusively on either visible or infrared modalities often fail to achieve both accuracy and robustness in detection. Effectively integrating cross-modal information to enhance detection performance remains a critical challenge. To address this issue, we propose a novel Multispectral Sparse Cross-Attention Guided Mamba Network (MSCGMN) for small object detection in remote sensing. The proposed MSCGMN architecture comprises three key components: Multispectral Sparse Cross-Attention Guidance Module (MSCAG), Dynamic Grouped Mamba Block (DGMB), and Gated Enhanced Attention Module (GEAM). Specifically, the MSCAG module selectively fuses RGB and infrared (IR) features using sparse cross-modal attention, effectively capturing complementary information across modalities while suppressing redundancy. The DGMB introduces a dynamic grouping strategy to improve the computational efficiency of Mamba, enabling effective global context modeling. In remote sensing images, small objects occupy limited areas, making it difficult to capture their critical features. We design the GEAM module to enhance both global and local feature representations for small object detection. Experiments on the VEDAI and DroneVehicle datasets show that MSCGMN achieves mAP50 scores of 83.9% and 84.4%, outperforming existing state-of-the-art methods and demonstrating strong competitiveness in small object detection tasks.
21 pages, 6553 KB  
Article
Analyzing Key Factors for Warehouse UAV Integration Through Complex Network Modeling
by Chommaphat Malang and Ratapol Wudhikarn
Logistics 2026, 10(2), 28; https://doi.org/10.3390/logistics10020028 - 23 Jan 2026
Abstract
Background: The integration of unmanned aerial vehicles (UAVs) into warehouse management is shaped by a broad spectrum of influencing factors, yet practical adoption has lagged behind its potential due to the scarcity of quantitative models of factor interdependencies. Methods: This study systematically reviewed the academic literature to identify key factors affecting UAV adoption and explored their interrelationships using complex network and social network analysis. Results: Sixty-six distinct factors were identified and mapped into a weighted network with 527 connections, highlighting the multifaceted nature of UAV integration. Notably, two factors, Disturbance Prediction and System Resilience, were found to be isolated, suggesting they have received little research attention. The overall network is characterized by low density but includes a set of 25 core factors that strongly influence the system. Significant interconnections were uncovered among factors such as drone design, societal factors, rack characteristics, environmental influences, and simulation software. Conclusions: These findings provide a comprehensive understanding of the dynamics shaping UAV adoption in warehouse management. Furthermore, the open-access dataset and network model developed in this research offer valuable resources to support future studies and practical decision-making in the field.
(This article belongs to the Topic Decision Science Applications and Models (DSAM))
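The reported network statistics (density, isolated factors, core factors) follow from standard weighted-graph definitions. A minimal sketch, with hypothetical factor names standing in for the paper's sixty-six factors:

```python
def network_stats(nodes, edges):
    """Density, isolated nodes, and nodes ranked by weighted degree
    (strength) for an undirected weighted factor network.
    edges: dict mapping (u, v) pairs to positive weights."""
    n = len(nodes)
    density = len(edges) / (n * (n - 1) / 2)  # realized / possible links
    strength = {v: 0.0 for v in nodes}
    for (u, v), w in edges.items():
        strength[u] += w
        strength[v] += w
    isolated = [v for v in nodes if strength[v] == 0.0]
    ranked = sorted(nodes, key=lambda v: strength[v], reverse=True)
    return density, isolated, ranked

# With the paper's figures (66 factors, 527 links), the density would be
# 527 / (66 * 65 / 2) ≈ 0.246, consistent with the "low density" finding.
```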
30 pages, 7812 KB  
Article
Drone-Based Road Marking Condition Mapping: A Drone Imaging and Geospatial Pipeline for Asset Management
by Minh Dinh Bui, Jubin Lee, Kanghyeok Choi, HyunSoo Kim and Changjae Kim
Drones 2026, 10(2), 77; https://doi.org/10.3390/drones10020077 - 23 Jan 2026
Abstract
This study presents a drone-based method for assessing the condition of road markings from high-resolution imagery acquired by a UAV. A DJI Matrice 300 RTK (Real-Time Kinematic) equipped with a Zenmuse P1 camera (DJI, China) is flown over urban road corridors to capture images with centimeter-level ground sampling distance. In contrast to common approaches that rely on vehicle-mounted or street-view cameras, using a UAV reduces survey time and deployment effort while still providing views suitable for marking assessment. The flight altitude, overlap, and corridor pattern are chosen to limit occlusions from traffic and building shadows while preserving the resolution required for condition assessment. From these images, the method locates individual markings, assigns a class to each marking, and estimates its level of deterioration. Candidate markings are first detected with YOLOv9 on the UAV imagery. The detections are cropped and segmented, which refines marking boundaries and thin structures. The condition is then estimated at the pixel level by modeling gray-level statistics with kernel density estimation (KDE) and a two-component Gaussian mixture model (GMM) to separate intact and distressed material. Subsequently, we compute a per-instance damage ratio that summarizes the proportion of degraded pixels within each marking. All results are georeferenced to map coordinates using a 3D reference model, allowing visualization on base maps and integration into road asset inventories. Experiments on unseen urban areas report detection performance (precision, recall, mean average precision) and segmentation performance (intersection over union), and analyze the stability of the damage ratio and processing time. The findings indicate that the drone-based method can identify road markings, estimate their condition, and attach each record to geographic space in a way that is useful for inspection scheduling and maintenance planning.
(This article belongs to the Special Issue Urban Traffic Monitoring and Analysis Using UAVs)
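The per-instance damage ratio described in the abstract, separating intact from distressed pixels with a two-component Gaussian mixture, can be sketched with a small EM fit. The synthetic gray levels and the hard-assignment rule below are illustrative assumptions, not the authors' exact pipeline (which also uses KDE):

```python
import math

def _pdf(v, mu, var):
    """1-D Gaussian density."""
    return math.exp(-(v - mu) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

def fit_gmm2(x, iters=50):
    """Fit a two-component 1-D Gaussian mixture with plain EM.
    Components are initialized at the extremes of the gray range."""
    mu = [min(x), max(x)]
    var = [((max(x) - min(x)) / 4.0) ** 2 + 1e-6] * 2
    pi = [0.5, 0.5]
    for _ in range(iters):
        # E-step: responsibility of each component for each pixel
        resp = []
        for v in x:
            p = [pi[k] * _pdf(v, mu[k], var[k]) for k in (0, 1)]
            s = (p[0] + p[1]) or 1e-12
            resp.append([p[0] / s, p[1] / s])
        # M-step: update mixing weights, means, variances
        for k in (0, 1):
            nk = sum(r[k] for r in resp) + 1e-12
            pi[k] = nk / len(x)
            mu[k] = sum(r[k] * v for r, v in zip(resp, x)) / nk
            var[k] = sum(r[k] * (v - mu[k]) ** 2
                         for r, v in zip(resp, x)) / nk + 1e-6
    return pi, mu, var

def damage_ratio(pixels):
    """Fraction of pixels more likely under the darker (distressed)
    component -- one pixel-level damage score per marking instance."""
    pi, mu, var = fit_gmm2(pixels)
    dark = 0 if mu[0] < mu[1] else 1
    n_dark = sum(1 for v in pixels
                 if pi[dark] * _pdf(v, mu[dark], var[dark]) >
                    pi[1 - dark] * _pdf(v, mu[1 - dark], var[1 - dark]))
    return n_dark / len(pixels)
```

For a marking whose gray values are clearly bimodal (bright intact paint, darker worn paint), the ratio approximates the proportion of degraded pixels.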
36 pages, 3544 KB  
Article
Distinguishing a Drone from Birds Based on Trajectory Movement and Deep Learning
by Andrii Nesteruk, Valerii Nikitin, Yosyp Albrekht, Łukasz Ścisło, Damian Grela and Paweł Król
Sensors 2026, 26(3), 755; https://doi.org/10.3390/s26030755 - 23 Jan 2026
Abstract
Unmanned aerial vehicles (UAVs) increasingly share low-altitude airspace with birds, making early discrimination between drones and biological targets critical for safety and security. This work addresses long-range scenarios where objects occupy only a few pixels and appearance-based recognition becomes unreliable. We develop a model-driven simulation pipeline that generates synthetic data with a controlled camera model, atmospheric background, and realistic motion of three aerial target types: multicopter, fixed-wing UAV, and bird. From these sequences, each track is encoded as a time series of image-plane coordinates and apparent size, and a bidirectional long short-term memory (LSTM) network is trained to classify trajectories as drone-like or bird-like. The model learns characteristic differences in smoothness, turning behavior, and velocity fluctuations, achieving reliable separation between drone and bird motion patterns on synthetic test data. Motion-trajectory cues alone can thus support early discrimination of drones from birds when visual details are scarce, providing a complementary signal to conventional image-based detection. The proposed synthetic-data and sequence-classification pipeline forms a reproducible testbed that can be extended with real trajectories from radar or video tracking systems and used to prototype and benchmark trajectory-based recognizers for integrated surveillance solutions. The proposed method is designed to generalize naturally to real surveillance systems, as it relies on trajectory-level motion patterns rather than appearance-based features that are sensitive to sensor quality, illumination, or weather conditions.
(This article belongs to the Section Industrial Sensors)
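The smoothness and turning-behavior cues the LSTM exploits can be illustrated with hand-computed trajectory features. This is a feature-level sketch only; the paper classifies raw coordinate/size sequences with a bidirectional LSTM, and the toy tracks below are invented:

```python
import math

def motion_features(track):
    """track: list of (x, y) image-plane positions at uniform time steps.
    Returns (mean absolute turning angle in radians, coefficient of
    variation of speed) -- the kinds of smoothness and velocity-fluctuation
    cues that separate steady drone flight from erratic bird flight."""
    vel = [(b[0] - a[0], b[1] - a[1]) for a, b in zip(track, track[1:])]
    speeds = [math.hypot(dx, dy) for dx, dy in vel]
    turns = []
    for (ax, ay), (bx, by) in zip(vel, vel[1:]):
        na, nb = math.hypot(ax, ay), math.hypot(bx, by)
        if na and nb:
            c = max(-1.0, min(1.0, (ax * bx + ay * by) / (na * nb)))
            turns.append(math.acos(c))  # angle between successive headings
    mean_turn = sum(turns) / len(turns) if turns else 0.0
    mean_speed = sum(speeds) / len(speeds)
    var = sum((s - mean_speed) ** 2 for s in speeds) / len(speeds)
    cv = math.sqrt(var) / mean_speed if mean_speed else 0.0
    return mean_turn, cv

# Invented examples: a straight constant-speed "drone" track versus an
# oscillating "bird" track.
drone = [(t, 0.5 * t) for t in range(20)]
bird = [(t, 5 * math.sin(1.3 * t)) for t in range(20)]
```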
23 pages, 38941 KB  
Article
Fusion Framework of Remote Sensing and Electromagnetic Scattering Features of Drones for Monitoring Freighters
by Zeyang Zhou and Jun Huang
Drones 2026, 10(1), 74; https://doi.org/10.3390/drones10010074 - 22 Jan 2026
Abstract
Certain types of unmanned aerial vehicles (UAVs) represent convenient platforms for remote sensing observation as well as low-altitude targets that are themselves monitored by other devices. In order to study remote sensing grayscale and radar cross-section (RCS) in an example drone, we present a fusion framework based on remote sensing imaging and electromagnetic scattering calculations. The results indicate that the quadcopter drone shows weak visual effects in remote sensing grayscale images while exhibiting strong dynamic electromagnetic scattering features with fluctuations that can exceed 29.6815 dBm². The average and peak RCS of the example UAV are higher than those of the quadcopter in the given cases. The example freighter exhibits the most intuitive grayscale features and the largest RCS mean under the given observation conditions, with a peak of 51.6186 dBm². Compared to the UAV, the small boat with a sharp bow design has similar dimensions while exhibiting lower RCS features and intuitive remote sensing grayscale. Under cross-scale conditions, grayscale imaging is beneficial for monitoring UAVs, freighters, and other nearby boats. Dynamic RCS features and grayscale local magnification are suitable for locating and recognizing drones. The established approach is effective in learning remote sensing grayscale and electromagnetic scattering features of drones used for observing freighters.
21 pages, 2142 KB  
Article
Real-Life ISO 15189 Qualification of Long-Range Drone Transportation of Medical Biological Samples: Results from a Clinical Trial
by Baptiste Demey, Olivier Bury, Morgane Choquet, Julie Fontaine, Myriam Dollerschell, Hugo Thorel, Charlotte Durand-Maugard, Olivier Leroy, Mathieu Pecquet, Annelise Voyer, Gautier Dhaussy and Sandrine Castelain
Drones 2026, 10(1), 71; https://doi.org/10.3390/drones10010071 - 21 Jan 2026
Abstract
Controlling pre-analytical conditions for medical biology tests, particularly during transport, is crucial for complying with the ISO 15189 standard and ensuring high-quality medical services. The use of drones, also known as unmanned aerial vehicles, to transport clinical samples is growing in scale, but requires prior validation to verify that there is no negative impact on the test results provided to doctors. This study aimed to establish a secure, high-quality solution for transporting biological samples by drone in a coastal region of France. The 80 km routes passed over several densely populated urban areas, with take-off and landing points within hospital grounds. The analytical and clinical impact of this mode of transport was compared according to two protocols: an interventional clinical trial on 30 volunteers compared to the reference transport by car, and an observational study on samples from 126 hospitalized patients compared to no transport. The system enabled samples to be transported without damage by maintaining freezing, refrigerated, and room temperatures throughout the flight, without any significant gain in travel time. Analytical variations were observed for sodium, folate, GGT, and platelet levels, with no clinical impact on the interpretation of the results. There is a risk of time-dependent alterations of blood glucose measurements in heparin tubes, which can be corrected by using fluoride tubes. This demonstrated the feasibility and security of transporting biological samples over long distances in line with the ISO 15189 standard. Controlling transport times remains crucial to assessing the quality of analyses. It is imperative to devise contingency plans for backup solutions to ensure the continuity of transportation in the event of inclement weather.
(This article belongs to the Special Issue Recent Advances in Healthcare Applications of Drones)
17 pages, 5027 KB  
Article
Symmetry-Enhanced YOLOv8s Algorithm for Small-Target Detection in UAV Aerial Photography
by Zhiyi Zhou, Chengyun Wei, Lubin Wang and Qiang Yu
Symmetry 2026, 18(1), 197; https://doi.org/10.3390/sym18010197 - 20 Jan 2026
Abstract
In order to solve the problems of small-target detection in UAV aerial photography, such as small scale, blurred features, and complex background interference, this article proposes the ACS-YOLOv8s method to optimize the YOLOv8s network. Notably, most small man-made targets in UAV aerial scenes (e.g., small vehicles, micro-drones) inherently possess symmetry, a key geometric attribute that can significantly enhance the discriminability of blurred or incomplete target features; symmetry-aware mechanisms are therefore integrated into the improved modules to further boost detection performance. The backbone network introduces an adaptive feature enhancement module: the edge and detail representation of small targets is enhanced by dynamically modulating the receptive field with deformable attention, while symmetric contour features are captured to strengthen the perception of target geometric structures. A cascaded multi-receptive field module is embedded at the end of the trunk to integrate multi-scale features hierarchically, balancing expressive ability and computational efficiency with a focus on fusing symmetric multi-scale features to optimize feature representation. The neck is integrated with a spatially adaptive feature modulation network to achieve dynamic weighting of cross-layer features and detail fidelity while modeling symmetric feature dependencies across channels to reduce the loss of discriminative information. Experimental results on the VisDrone2019 dataset show that ACS-YOLOv8s is superior to the baseline model in precision, recall, and mAP: mAP50 increased by 2.8% to 41.6% and mAP50:90 increased by 1.9% to 25.0%, verifying its effectiveness and robustness for small-target detection in complex drone aerial-photography scenarios.
(This article belongs to the Section Computer)
23 pages, 21878 KB  
Article
STC-SORT: A Dynamic Spatio-Temporal Consistency Framework for Multi-Object Tracking in UAV Videos
by Ziang Ma, Chuanzhi Chen, Jinbao Chen and Yuhan Jiang
Appl. Sci. 2026, 16(2), 1062; https://doi.org/10.3390/app16021062 - 20 Jan 2026
Abstract
Multi-object tracking (MOT) in videos captured by Unmanned Aerial Vehicles (UAVs) is critically challenged by significant camera ego-motion, frequent occlusions, and complex object interactions. To address the limitations of conventional trackers that depend on static, rule-based association strategies, this paper introduces STC-SORT, a novel tracking framework whose core is a two-level reasoning architecture for data association. First, a Spatio-Temporal Consistency Graph Network (STC-GN) models inter-object relationships via graph attention to learn adaptive weights for fusing motion, appearance, and geometric cues. Second, these dynamic weights are integrated into a 4D association cost volume, enabling globally optimal matching across a temporal window. When integrated with an enhanced AEE-YOLO detector, STC-SORT achieves significant and statistically robust improvements on major UAV tracking benchmarks. It elevates MOTA by 13.0% on UAVDT and 6.5% on VisDrone, while boosting IDF1 by 9.7% and 9.9%, respectively. The framework also maintains real-time inference speed (75.5 FPS) and demonstrates substantial reductions in identity switches. These results validate STC-SORT as having strong potential for robust multi-object tracking in challenging UAV scenarios.
(This article belongs to the Section Aerospace Science and Engineering)
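The weighted fusion of motion, appearance, and geometric cues can be sketched as follows. Note the simplifications: STC-SORT learns the weights with a graph network and matches over a 4D cost volume across a temporal window, whereas this toy uses fixed weights and per-frame greedy matching; all matrices and weights below are invented:

```python
def fuse_costs(motion, appearance, geometry, weights):
    """Weighted sum of per-pair cost matrices (lists of lists),
    rows = tracks, columns = detections."""
    wm, wa, wg = weights
    n, m = len(motion), len(motion[0])
    return [[wm * motion[i][j] + wa * appearance[i][j] + wg * geometry[i][j]
             for j in range(m)] for i in range(n)]

def greedy_match(cost, max_cost=1.0):
    """Greedily pair tracks with detections by ascending fused cost;
    pairs above max_cost are left unmatched."""
    pairs, used_t, used_d = [], set(), set()
    cand = sorted((cost[i][j], i, j)
                  for i in range(len(cost)) for j in range(len(cost[0])))
    for c, i, j in cand:
        if c <= max_cost and i not in used_t and j not in used_d:
            pairs.append((i, j))
            used_t.add(i)
            used_d.add(j)
    return pairs
```

A production tracker would replace the greedy loop with an optimal assignment solver (e.g., the Hungarian algorithm); greedy matching keeps the sketch self-contained.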
32 pages, 21400 KB  
Article
Assessment of a Weathering-Induced Rockfall Event and Development of Minimal-Intervention Mitigation Strategies in an Urban Environment
by Ömer Ündül, Mohammad Manzoor Nasery, Mehmet Mert Doğu and Enes Zengin
Appl. Sci. 2026, 16(2), 1045; https://doi.org/10.3390/app16021045 - 20 Jan 2026
Abstract
The increase in population and the demand for the various needs of citizens increase interaction with the geo-environment, and the rate of natural events affecting daily human life rises accordingly. Such an event occurred on a rock cliff in a densely populated area of İstanbul (Türkiye). More than four rock blocks (approximately 3–5 m³ each), belonging to the Paleozoic sequence of İstanbul and composed of nodular limestone with sandy-clay interlayers, detached and fell. The blocks traveled along a path of approximately 60 m and came to rest after crushing a couple of buildings downslope. The path was rough and contained various surface conditions (e.g., bedrock, talus, and plants). This study began with an examination of the dimensions of the failed rock blocks, their paths, and the topographic conditions. Unmanned vehicles (drones) facilitated the generation of 3D numerical models of topographic changes at the site. The second stage comprised quantifying discontinuity properties (such as persistence, spacing, and roughness) and defining weathering properties, along with sampling. Based on digital topographic data and field observations, cross-sections were defined through possible rockfall areas within the zone of potentially unstable blocks, and numerical and rockfall analyses were conducted along these critical sections. Interpretation of the laboratory data and the numerical results clarifies the mechanism of the recent rockfall event and identifies the most critical areas to be considered and reinforced. The research also proposes appropriate reinforcement techniques that comply with the strict Turkish regulations governing the "Bosphorus Waterfront Protected Zone". The study advises pre-cleaning of potentially unstable blocks after fences are installed on the paths where rocks could fall, together with rock anchors of varying lengths in some localities. The final part of the research covers the re-assessment of the mitigation measures with numerical models, which shows that the factor of safety increased to the desired levels. The reinforcement applications at the site match the proposed prevention methods well.
(This article belongs to the Section Earth Sciences)
23 pages, 54360 KB  
Article
ATM-Net: A Lightweight Multimodal Fusion Network for Real-Time UAV-Based Object Detection
by Jiawei Chen, Junyu Huang, Zuye Zhang, Jinxin Yang, Zhifeng Wu and Renbo Luo
Drones 2026, 10(1), 67; https://doi.org/10.3390/drones10010067 - 20 Jan 2026
Abstract
UAV-based object detection faces critical challenges including extreme scale variations (targets occupy 0.1–2% of the image area), bird's-eye view complexities, and all-weather operational demands. Single RGB sensors degrade under poor illumination while infrared sensors lack spatial details. We propose ATM-Net, a lightweight multimodal RGB–infrared fusion network for robust UAV vehicle detection. ATM-Net integrates three innovations: (1) an Asymmetric Recurrent Fusion Module (ARFM) performs "extraction→fusion→separation" cycles across pyramid levels, balancing cross-modal collaboration and modality independence; (2) a Tri-Dimensional Attention (TDA) module recalibrates features through orthogonal Channel-Width, Height-Channel, and Height-Width branches, enabling comprehensive multi-dimensional feature enhancement; and (3) a Multi-scale Adaptive Feature Pyramid Network (MAFPN) constructs enhanced representations via bidirectional flow and multi-path aggregation. Experiments on the VEDAI and DroneVehicle datasets demonstrate superior performance (92.4% mAP50 and 64.7% mAP50-95 on VEDAI, 83.7% mAP on DroneVehicle) with only 4.83M parameters. ATM-Net achieves an optimal accuracy–efficiency balance for resource-constrained UAV edge platforms.
23 pages, 40307 KB  
Article
EFPNet: An Efficient Feature Perception Network for Real-Time Detection of Small UAV Targets
by Jiahao Huang, Wei Jin, Huifeng Tao, Yunsong Feng, Yuanxin Shang, Siyu Wang and Aibing Liu
Remote Sens. 2026, 18(2), 340; https://doi.org/10.3390/rs18020340 - 20 Jan 2026
Abstract
In recent years, unmanned aerial vehicles (UAVs) have become increasingly prevalent across diverse application scenarios due to their high maneuverability, compact size, and cost-effectiveness. However, these advantages also introduce significant challenges for UAV detection in complex environments. This paper proposes an efficient feature perception network (EFPNet) for UAV detection, developed on the foundation of the RT-DETR framework. Specifically, a dual-branch HiLo-ConvMix attention (HCM-Attn) mechanism and a pyramid sparse feature transformer network (PSFT-Net) are introduced, along with the integration of a DySample dynamic upsampling module. The HCM-Attn module facilitates interaction between high- and low-frequency information, effectively suppressing background noise interference. The PSFT-Net is designed to leverage deep-level features to guide the encoding and fusion of shallow features, thereby enhancing the model's capability to perceive UAV texture characteristics. Furthermore, the integrated DySample dynamic upsampling module ensures efficient reconstruction and restoration of feature representations. On the TIB and Drone-vs-Bird datasets, the proposed EFPNet achieves mAP50 scores of 94.1% and 98.1%, representing improvements of 3.2% and 1.9% over the baseline models, respectively. Our experimental results demonstrate the effectiveness of the proposed method for small UAV detection.