Search Results (2,623)

Search Parameters:
Keywords = unmanned aerial vehicle (UAV) image

32 pages, 5410 KB  
Review
Ambrosia artemisiifolia in Hungary: A Review of Challenges, Impacts, and Precision Agriculture Approaches for Sustainable Site-Specific Weed Management Using UAV Technologies
by Sherwan Yassin Hammad, Gergő Péter Kovács and Gábor Milics
AgriEngineering 2026, 8(1), 30; https://doi.org/10.3390/agriengineering8010030 - 15 Jan 2026
Abstract
Weed management has become a critical agricultural practice, as weeds compete with crops for nutrients, host pests and diseases, and cause major economic losses. The invasive weed Ambrosia artemisiifolia (common ragweed) is particularly problematic in Hungary, endangering crop productivity and public health through its fast proliferation and allergenic pollen. This review examines the current challenges and impacts of A. artemisiifolia while exploring sustainable approaches to its management through precision agriculture. Recent advancements in unmanned aerial vehicles (UAVs) equipped with advanced imaging systems, remote sensing, and artificial intelligence, particularly machine learning models such as convolutional neural networks (CNNs) and Support Vector Machines (SVMs), enable accurate detection, mapping, and classification of weed infestations. These technologies facilitate site-specific weed management (SSWM) by optimizing herbicide application, reducing chemical inputs, and minimizing environmental impacts. The results of recent studies demonstrate the high potential of UAV-based monitoring for real-time, data-driven weed management. The review concludes that integrating UAV and AI technologies into weed management offers a sustainable, cost-effective, and environmentally responsible solution that also mitigates socioeconomic impacts, emphasizing the need for collaboration between agricultural researchers and technology developers to enhance precision agriculture practices in Hungary.

22 pages, 6609 KB  
Article
CAMS-AI: A Coarse-to-Fine Framework for Efficient Small Object Detection in High-Resolution Images
by Zhanqi Chen, Zhao Chen, Baohui Yang, Qian Guo, Haoran Wang and Xiangquan Zeng
Remote Sens. 2026, 18(2), 259; https://doi.org/10.3390/rs18020259 - 14 Jan 2026
Abstract
Automated livestock monitoring in wide-area grasslands is a critical component of smart agriculture development. Devices such as Unmanned Aerial Vehicles (UAVs), remote sensing, and high-mounted cameras provide unique monitoring perspectives for this purpose. The high-resolution images they capture cover vast grassland backgrounds, where targets often appear as small, distant objects and are extremely unevenly distributed. Applying standard detectors directly to such images yields poor results and extremely high miss rates. To improve the detection accuracy of small targets in high-resolution images, methods represented by Slicing Aided Hyper Inference (SAHI) have been widely adopted. However, in specific scenarios, SAHI’s drawbacks are dramatically amplified. Its strategy of uniform global slicing divides each original image into a fixed number of sub-images, many of which may be pure background (negative samples) containing no targets. This results in a significant waste of computational resources and a precipitous drop in inference speed, falling far short of practical application requirements. To resolve this conflict between accuracy and efficiency, this paper proposes an efficient detection framework named CAMS-AI (Clustering and Adaptive Multi-level Slicing for Aided Inference). CAMS-AI adopts a “coarse-to-fine” intelligent focusing strategy: First, a Region Proposal Network (RPN) is used to rapidly locate all potential target areas. Next, a clustering algorithm is employed to generate precise Regions of Interest (ROIs), effectively focusing computational resources on target-dense areas. Finally, an innovative multi-level slicing strategy and a high-precision model are applied only to these high-quality ROIs for fine-grained detection. Experimental results demonstrate that the CAMS-AI framework achieves a mean Average Precision (mAP) comparable to SAHI while significantly increasing inference speed. Taking the RT-DETR detector as an example, while achieving 96% of the mAP50–95 accuracy level of the SAHI method, CAMS-AI’s end-to-end frames per second (FPS) is 10.3 times that of SAHI, showcasing its immense application potential in real-world, high-resolution monitoring scenarios.
(This article belongs to the Section Remote Sensing Image Processing)
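The coarse-to-fine pipeline described above (coarse proposals, clustering into ROIs, fine detection only inside those ROIs) can be sketched in a few lines. The sketch below is illustrative rather than the authors' implementation: the coarse boxes stand in for RPN output, and the DBSCAN parameters (eps, min_samples) and ROI margin are arbitrary placeholders:

import numpy as np
from sklearn.cluster import DBSCAN

def rois_from_coarse_boxes(boxes, img_w, img_h, eps=300, min_samples=1, margin=64):
    """Group coarse detections (x1, y1, x2, y2) into regions of interest.

    Each DBSCAN cluster of box centres becomes one ROI that encloses its
    member boxes plus a safety margin, so fine-grained slicing is run only
    on target-dense areas instead of the whole high-resolution frame.
    """
    boxes = np.asarray(boxes, dtype=float)
    centers = np.stack([(boxes[:, 0] + boxes[:, 2]) / 2,
                        (boxes[:, 1] + boxes[:, 3]) / 2], axis=1)
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(centers)

    rois = []
    for lab in np.unique(labels):
        member = boxes[labels == lab]
        x1 = max(member[:, 0].min() - margin, 0)
        y1 = max(member[:, 1].min() - margin, 0)
        x2 = min(member[:, 2].max() + margin, img_w)
        y2 = min(member[:, 3].max() + margin, img_h)
        rois.append((x1, y1, x2, y2))
    return rois

# Toy example: two groups of coarse detections in an 8000 x 6000 px frame.
coarse = [(100, 120, 140, 160), (180, 130, 220, 170),          # group A
          (5000, 4000, 5040, 4040), (5100, 4050, 5140, 4090)]  # group B
print(rois_from_coarse_boxes(coarse, img_w=8000, img_h=6000))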

24 pages, 6383 KB  
Article
FF-Mamba-YOLO: An SSM-Based Benchmark for Forest Fire Detection in UAV Remote Sensing Images
by Binhua Guo, Dinghui Liu, Zhou Shen and Tiebin Wang
J. Imaging 2026, 12(1), 43; https://doi.org/10.3390/jimaging12010043 - 13 Jan 2026
Abstract
Timely and accurate detection of forest fires through unmanned aerial vehicle (UAV) remote sensing target detection technology is of paramount importance. However, multiscale targets and complex environmental interference in UAV remote sensing images pose significant challenges during detection tasks. To address these obstacles, this paper presents FF-Mamba-YOLO, a novel framework based on the principles of Mamba and YOLO (You Only Look Once) that leverages innovative modules and architectures to overcome these limitations. Specifically, we introduce MFEBlock and MFFBlock based on state space models (SSMs) in the backbone and neck parts of the network, respectively, enabling the model to effectively capture global dependencies. Second, we construct CFEBlock, a module that performs feature enhancement before SSM processing, improving local feature processing capabilities. Furthermore, we propose MGBlock, which adopts a dynamic gating mechanism, enhancing the model’s adaptive processing capabilities and robustness. Finally, we enhance the structure of Path Aggregation Feature Pyramid Network (PAFPN) to improve feature fusion quality and introduce DySample to enhance image resolution without significantly increasing computational costs. Experimental results on our self-constructed forest fire image dataset demonstrate that the model achieves 67.4% mAP@50, 36.3% mAP@50:95, and 64.8% precision, outperforming previous state-of-the-art methods. These results highlight the potential of FF-Mamba-YOLO in forest fire monitoring.
(This article belongs to the Section Computer Vision and Pattern Recognition)
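As one way to picture the dynamic gating idea attributed to MGBlock, the PyTorch module below applies a learned, input-dependent channel gate to a residual branch. It is a generic illustration with assumed layer sizes, not the paper's MGBlock or its SSM-based components:

import torch
import torch.nn as nn

class GatedBlock(nn.Module):
    """Illustrative dynamic gating: a learned gate in [0, 1], predicted from
    globally pooled context, decides how much of the transformed features
    to pass through."""

    def __init__(self, channels: int):
        super().__init__()
        self.transform = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.BatchNorm2d(channels),
            nn.SiLU(),
        )
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        g = self.gate(x)                  # (N, C, 1, 1), one gate value per channel
        return x + g * self.transform(x)  # gated residual update

x = torch.randn(2, 64, 80, 80)
print(GatedBlock(64)(x).shape)            # torch.Size([2, 64, 80, 80])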

22 pages, 3834 KB  
Article
Image-Based Spatio-Temporal Graph Learning for Diffusion Forecasting in Digital Management Systems
by Chenxi Du, Zhengjie Fu, Yifan Hu, Yibin Liu, Jingwen Cao, Siyuan Liu and Yan Zhan
Electronics 2026, 15(2), 356; https://doi.org/10.3390/electronics15020356 - 13 Jan 2026
Abstract
With the widespread application of high-resolution remote sensing imagery and unmanned aerial vehicle technologies in agricultural scenarios, accurately characterizing spatial pest diffusion from multi-temporal images has become a critical issue in intelligent agricultural management. To overcome the limitations of existing machine learning approaches that focus mainly on static recognition and lack effective spatio-temporal diffusion modeling, a UAV-based pest diffusion prediction and simulation framework is proposed. Multi-temporal UAV RGB and multispectral imagery are jointly modeled using a graph-based representation of farmland parcels, while temporal modeling and environmental embedding mechanisms are incorporated to enable simultaneous prediction of diffusion intensity and propagation paths. Experiments conducted on two real agricultural regions, Bayan Nur and Tangshan, demonstrate that the proposed method consistently outperforms representative spatio-temporal baselines. Compared with ST-GCN, the proposed framework achieves approximately 17–22% reductions in MAE and MSE, together with 8–12% improvements in PMR, while maintaining robust classification performance with precision, recall, and F1-score exceeding 0.82. These results indicate that the proposed approach can provide reliable support for agricultural information systems and diffusion-aware decision generation.
(This article belongs to the Special Issue Application of Machine Learning in Graphics and Images, 2nd Edition)
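A minimal sketch of the graph-based parcel representation described above: parcels whose centroids fall within an assumed radius are connected, the adjacency matrix (with self-loops) is symmetrically normalised, and one propagation step mixes pest-intensity features between neighbouring parcels. The coordinates, radius, and feature values are invented for illustration; the actual model adds temporal modeling and environmental embeddings on top of this kind of structure:

import numpy as np

def normalized_adjacency(coords, radius):
    """Connect parcels whose centroids lie within `radius` of each other and
    return the symmetrically normalised adjacency D^-1/2 A_hat D^-1/2,
    where A_hat already contains self-loops."""
    coords = np.asarray(coords, dtype=float)
    dist = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    A_hat = (dist <= radius).astype(float)      # dist == 0 gives the self-loops
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return D_inv_sqrt @ A_hat @ D_inv_sqrt

# Toy parcels: (x, y) centroids in metres; features = per-parcel pest intensity
# at two past time steps (what the real model would learn to extrapolate).
coords = [(0, 0), (120, 0), (0, 130), (800, 800)]
X = np.array([[0.9, 1.0], [0.2, 0.4], [0.1, 0.2], [0.0, 0.0]])

A_norm = normalized_adjacency(coords, radius=200)
H = A_norm @ X        # one propagation step: neighbours exchange intensity
print(H.round(2))     # the isolated parcel (last row) keeps its own values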

26 pages, 19685 KB  
Article
UAV NDVI-Based Vigor Zoning Predicts PR-Protein Accumulation and Protein Instability in Chardonnay and Sauvignon Blanc Wines
by Adrián Vera-Esmeraldas, Mauricio Galleguillos, Mariela Labbé, Alejandro Cáceres-Mella, Francisco Rojo and Fernando Salazar
Plants 2026, 15(2), 243; https://doi.org/10.3390/plants15020243 - 13 Jan 2026
Abstract
Protein instability in white wines is mainly caused by pathogenesis-related (PR) proteins that survive winemaking and can form haze in bottle. Because PR-protein synthesis is modulated by vine stress, this study evaluated whether unmanned aerial vehicle (UAV) multispectral imagery and NDVI-based vigor zoning can be used as early predictors of protein instability in commercial Chardonnay and Sauvignon Blanc wines. High-resolution multispectral images were acquired over two seasons (2023–2024) in two vineyards, and three vigor zones (high, medium, low) were delineated from the NDVI at the individual vine scale. A total of 180 georeferenced vines were sampled, and musts were analyzed for thaumatin-like proteins and chitinases via RP-HPLC. Separate microvinifications were carried out for each vigor zone and cultivar, and the resulting wines were evaluated for protein instability (heat test) and bentonite requirements. Low-vigor vines consistently produced musts with higher PR-protein concentrations, greater turbidity after heating, and higher bentonite demand than high-vigor vines, with stronger effects in Sauvignon Blanc. These vigor-dependent patterns were stable across vintages, despite contrasting seasonal conditions. Linear discriminant analysis using NDVI, PR-protein content, turbidity, and bentonite dosage correctly separated vigor classes. Overall, UAV NDVI–based vigor zoning provided a robust, non-destructive tool for identifying vineyard zones with increased risk of protein instability. This approach supports precision enology by enabling site-specific stabilization strategies that reduce overtreatment with bentonite and preserve white wine quality.
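The NDVI computation behind the vigor zoning is standard, NDVI = (NIR − Red) / (NIR + Red); how the three zones are delineated from it is study-specific, so the tercile split below is only an assumed illustration with made-up per-vine reflectances:

import numpy as np

def ndvi(nir, red, eps=1e-9):
    """NDVI = (NIR - Red) / (NIR + Red), per pixel or per vine."""
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    return (nir - red) / (nir + red + eps)

def vigor_zones(ndvi_per_vine):
    """Split vines into low / medium / high vigor by NDVI terciles
    (an illustrative rule; the zoning criterion in the study may differ)."""
    q1, q2 = np.quantile(ndvi_per_vine, [1 / 3, 2 / 3])
    return np.digitize(ndvi_per_vine, [q1, q2])   # 0 = low, 1 = medium, 2 = high

nir = np.array([0.52, 0.46, 0.61, 0.35, 0.58])
red = np.array([0.10, 0.14, 0.07, 0.20, 0.09])
v = ndvi(nir, red)
print(v.round(3), vigor_zones(v))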

28 pages, 2897 KB  
Review
Integrating UAVs and Deep Learning for Plant Disease Detection: A Review of Techniques, Datasets, and Field Challenges with Examples from Cassava
by Wasiu Akande Ahmed, Olayinka Ademola Abiola, Dongkai Yang, Seyi Festus Olatoyinbo and Guifei Jing
Horticulturae 2026, 12(1), 87; https://doi.org/10.3390/horticulturae12010087 - 12 Jan 2026
Abstract
Cassava remains a critical food-security crop across Africa and Southeast Asia but is highly vulnerable to diseases such as cassava mosaic disease (CMD) and cassava brown streak disease (CBSD). Traditional diagnostic approaches are slow, labor-intensive, and inconsistent under field conditions. This review synthesizes current advances in combining unmanned aerial vehicles (UAVs) with deep learning (DL) to enable scalable, data-driven cassava disease detection. It examines UAV platforms, sensor technologies, flight protocols, image preprocessing pipelines, DL architectures, and existing datasets, and it evaluates how these components interact within UAV–DL disease-monitoring frameworks. The review also compares model performance across convolutional neural network-based and Transformer-based architectures, highlighting metrics such as accuracy, recall, F1-score, inference speed, and deployment feasibility. Persistent challenges—such as limited UAV-acquired datasets, annotation inconsistencies, geographic model bias, and inadequate real-time deployment—are identified and discussed. Finally, the paper proposes a structured research agenda including lightweight edge-deployable models, UAV-ready benchmarking protocols, and multimodal data fusion. This review provides a consolidated reference for researchers and practitioners seeking to develop practical and scalable cassava-disease detection systems.

55 pages, 1599 KB  
Review
The Survey of Evolutionary Deep Learning-Based UAV Intelligent Power Inspection
by Shanshan Fan and Bin Cao
Drones 2026, 10(1), 55; https://doi.org/10.3390/drones10010055 - 12 Jan 2026
Abstract
With the rapid development of the power Internet of Things (IoT), the traditional manual inspection mode can no longer meet the growing demand for power equipment inspection. Unmanned aerial vehicle (UAV) intelligent inspection technology, with its efficient and flexible features, has become the mainstream solution. The rapid development of computer vision and deep learning (DL) has significantly improved the accuracy and efficiency of UAV intelligent inspection systems for power equipment. However, mainstream deep learning models have complex structures, and manual design is time-consuming and labor-intensive. In addition, the images collected during the power inspection process by UAVs have problems such as complex backgrounds, uneven lighting, and significant differences in object sizes, which require expert DL domain knowledge and many trial-and-error experiments to design models suitable for application scenarios involving power inspection with UAVs. In response to these difficult problems, evolutionary computation (EC) technology has demonstrated unique advantages in simulating the natural evolutionary process. This technology can independently design lightweight and high-precision deep learning models by automatically optimizing the network structure and hyperparameters. Therefore, this review summarizes the development of evolutionary deep learning (EDL) technology and provides a reference for applying EDL in object detection models used in UAV intelligent power inspection systems. First, the application status of DL-based object detection models in power inspection is reviewed. Then, how EDL technology improves the performance of the models in challenging scenarios such as complex terrain and extreme weather is analyzed by optimizing the network architecture. Finally, the challenges and future research directions of EDL technology in the field of UAV power inspection are discussed, including key issues such as improving the environmental adaptability of the model and reducing computing energy consumption, providing theoretical references for promoting the development of UAV power inspection technology to a higher level.
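To illustrate the evolutionary-computation principle the survey builds on, the toy loop below mutates and selects detector hyperparameters against a stand-in fitness function. The search space, operators, and fitness are invented; a real EDL pipeline would train and evaluate each candidate network instead of scoring it analytically:

import random

# Toy search space: depth / width / learning rate of a hypothetical detector.
SPACE = {"depth": [2, 3, 4, 5], "width": [32, 64, 128], "lr": [1e-4, 3e-4, 1e-3]}

def random_genome():
    return {k: random.choice(v) for k, v in SPACE.items()}

def mutate(genome):
    child = dict(genome)
    key = random.choice(list(SPACE))
    child[key] = random.choice(SPACE[key])
    return child

def fitness(genome):
    # Stand-in for "train briefly, measure mAP, penalise model size".
    return genome["depth"] * 0.1 + genome["width"] / 256 - genome["lr"] * 50

def evolve(pop_size=8, generations=10):
    population = [random_genome() for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        parents = population[: pop_size // 2]          # truncation selection
        population = parents + [mutate(random.choice(parents))
                                for _ in range(pop_size - len(parents))]
    return max(population, key=fitness)

print(evolve())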

23 pages, 5292 KB  
Article
Research on Rapid 3D Model Reconstruction Based on 3D Gaussian Splatting for Power Scenarios
by Huanruo Qi, Yi Zhou, Chen Chen, Lu Zhang, Peipei He, Xiangyang Yan and Mengqi Zhai
Sustainability 2026, 18(2), 726; https://doi.org/10.3390/su18020726 - 10 Jan 2026
Abstract
As core infrastructure of power transmission networks, power towers require high-precision 3D models, which are critical for intelligent inspection and digital twin applications of power transmission lines. Traditional reconstruction methods, such as LiDAR scanning and oblique photogrammetry, suffer from issues including high operational risks, low modeling efficiency, and loss of fine details. To address these limitations, this paper proposes a 3D Gaussian Splatting (3DGS)-based method for power tower 3D reconstruction to enhance reconstruction efficiency and detail preservation capability. First, a multi-view data acquisition scheme combining “unmanned aerial vehicle + oblique photogrammetry” was designed to capture RGB images acquired by Unmanned Aerial Vehicle (UAV) platforms, which are used as the primary input for 3D reconstruction. Second, a sparse point cloud was generated via Structure from Motion. Finally, based on 3DGS, Gaussian model initialization, differentiable rendering, and adaptive density control were performed to produce high-precision 3D models of power towers. Taking two typical power tower types as experimental subjects, comparisons were made with the oblique photogrammetry + ContextCapture method. Experimental results demonstrate that 3DGS not only achieves high model completeness (with the reconstructed model nearly indistinguishable from the original images) but also excels in preserving fine details such as angle steels and cables. Additionally, the final modeling time is reduced by over 70% compared to traditional oblique photogrammetry. 3DGS enables efficient and high-precision reconstruction of power tower 3D models, providing a reliable technical foundation for digital twin applications in power transmission lines. By significantly improving reconstruction efficiency and reducing operational costs, the proposed method supports sustainable power infrastructure inspection, asset lifecycle management, and energy-efficient digital twin applications.
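A common way to seed 3DGS optimisation is to initialise one Gaussian per SfM point, with per-point scale taken from local point spacing. The sketch below shows only that initialisation step under assumed heuristics (k-nearest-neighbour scale, identity rotations, uniform low opacity); the differentiable rendering and adaptive density control stages are not shown, and the input cloud is random stand-in data rather than a real SfM result:

import numpy as np
from scipy.spatial import cKDTree

def init_gaussians(points_xyz, points_rgb, k=3):
    """Initialise per-point Gaussian parameters from an SfM sparse cloud:
    position = point, isotropic scale = mean distance to k nearest neighbours,
    identity rotation, colour = point colour, low starting opacity."""
    pts = np.asarray(points_xyz, dtype=float)
    dists, _ = cKDTree(pts).query(pts, k=k + 1)   # first column is the point itself
    scale = dists[:, 1:].mean(axis=1, keepdims=True).repeat(3, axis=1)
    n = len(pts)
    return {
        "xyz": pts,                                          # (n, 3) means
        "scale": scale,                                      # (n, 3) isotropic extents
        "rotation": np.tile([1.0, 0.0, 0.0, 0.0], (n, 1)),   # (n, 4) unit quaternions
        "rgb": np.asarray(points_rgb, dtype=float),          # (n, 3)
        "opacity": np.full((n, 1), 0.1),
    }

pts = np.random.rand(100, 3) * 10.0   # stand-in for an SfM point cloud
rgb = np.random.rand(100, 3)
g = init_gaussians(pts, rgb)
print({k: v.shape for k, v in g.items()})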

18 pages, 4523 KB  
Article
Remote Sensing of Nematode Stress in Coffee: UAV-Based Multispectral and Thermal Imaging Approaches
by Daniele de Brum, Gabriel Araújo e Silva Ferraz, Luana Mendes dos Santos, Felipe Augusto Fernandes, Marco Antonio Zanella, Patrícia Ferreira Ponciano Ferraz, Willian César Terra, Vicente Paulo Campos, Thieres George Freire da Silva, Ênio Farias de França e Silva and Alexsandro Oliveira da Silva
AgriEngineering 2026, 8(1), 22; https://doi.org/10.3390/agriengineering8010022 - 8 Jan 2026
Abstract
Early and non-destructive detection of plant-parasitic nematodes is critical for implementing site-specific management in coffee production systems. This study evaluated the potential of unmanned aerial vehicle (UAV) multispectral and thermal imaging, combined with textural analysis, to detect Meloidogyne exigua infestation in Coffea arabica (Topázio variety). Field surveys were conducted in two contrasting seasons (dry and rainy), and nematode incidence was identified and quantified by counting root galls. Vegetation indices (NDVI, GNDVI, NGRDI, NDRE, OSAVI), individual spectral bands, canopy temperature, and Haralick texture features were extracted from UAV-derived imagery and correlated with gall counts. Under the conditions of this experiment, strong correlations were observed between gall number and the red spectral band in both seasons (R > 0.60), while GNDVI (dry season) and NGRDI (rainy season) showed strong negative correlations with gall density. Thermal imaging revealed moderate positive correlations with infestation levels during the dry season, indicating potential for early stress detection when foliar symptoms were absent. Texture metrics from the red and green bands further improved detection capacity, particularly with a 3 × 3 pixel window at 135°. These results demonstrate that UAV-based multispectral and thermal imaging, enhanced by texture analysis, can provide reliable early indicators of nematode infestation in coffee.
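As a rough illustration of the texture-plus-correlation analysis, the snippet below computes a GLCM contrast feature at a 135° offset with scikit-image and correlates it with gall counts. The patch size, grey levels, and all data are placeholders; the study's 3 × 3 window and its exact set of Haralick features are not reproduced:

import numpy as np
from scipy.stats import pearsonr
from skimage.feature import graycomatrix, graycoprops

def texture_contrast_135(patch_8bit):
    """GLCM contrast at a 135-degree offset for one uint8 image patch."""
    glcm = graycomatrix(patch_8bit, distances=[1], angles=[3 * np.pi / 4],
                        levels=256, symmetric=True, normed=True)
    return graycoprops(glcm, "contrast")[0, 0]

rng = np.random.default_rng(0)
# Stand-ins: one red-band patch per sampled coffee plant and its gall count.
patches = [rng.integers(0, 256, size=(32, 32), dtype=np.uint8) for _ in range(20)]
gall_counts = rng.integers(0, 120, size=20)

contrast = np.array([texture_contrast_135(p) for p in patches])
r, p_value = pearsonr(contrast, gall_counts)
print(f"Pearson r = {r:.2f} (p = {p_value:.2f})")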

24 pages, 10131 KB  
Article
A Cooperative UAV Hyperspectral Imaging and USV In Situ Sampling Framework for Rapid Chlorophyll-a Retrieval
by Zixiang Ye, Xuewen Chen, Lvxin Qian, Chaojun Lin and Wenbin Pan
Drones 2026, 10(1), 39; https://doi.org/10.3390/drones10010039 - 7 Jan 2026
Abstract
Traditional water quality monitoring methods are limited in providing timely chlorophyll-a (Chl-a) assessments in small inland reservoirs. This study presents a rapid Chl-a retrieval approach based on a cooperative unmanned aerial vehicle–uncrewed surface vessel (UAV–USV) framework that integrates UAV hyperspectral imaging, machine learning algorithms, and synchronized USV in situ sampling. We carried out a three-day cooperative monitoring campaign in the Longhu Reservoir of Fujian Province, during which high-frequency hyperspectral imagery and water samples were collected. An innovative median-based correction method was developed to suppress striping noise in UAV hyperspectral data, and a two-step band selection strategy combining correlation analysis and variance inflation factor screening was used to determine the input features for the subsequent inversion models. Four commonly used machine-learning-based inversion models were constructed and evaluated, with the random forest model achieving the highest accuracy and stability across both training and testing datasets. The generated Chl-a maps revealed overall good water quality, with localized higher concentrations in weakly hydrodynamic zones. Overall, the cooperative UAV–USV framework enables synchronized data acquisition, rapid processing, and fine-scale mapping, demonstrating strong potential for fast-response and emergency water-quality monitoring in small inland drinking-water reservoirs.
(This article belongs to the Section Drones in Ecology)
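The two-step band screening (correlation filter, then variance-inflation-factor pruning) followed by a random forest fit can be sketched as below. The thresholds (|r| > 0.3, VIF < 10) and the synthetic reflectance and Chl-a data are assumptions for illustration, not the values or data used in the study:

import numpy as np
import pandas as pd
import statsmodels.api as sm
from sklearn.ensemble import RandomForestRegressor
from statsmodels.stats.outliers_influence import variance_inflation_factor

rng = np.random.default_rng(1)
# Stand-in data: reflectance of 10 hyperspectral bands at 60 sample points
# and matching in-situ chlorophyll-a concentrations from the USV.
bands = pd.DataFrame(rng.random((60, 10)), columns=[f"b{i}" for i in range(10)])
chla = 5 * bands["b2"] - 3 * bands["b7"] + rng.normal(0, 0.2, 60)

# Step 1: keep bands with a reasonably strong correlation to Chl-a.
corr = bands.corrwith(chla).abs()
selected = corr[corr > 0.3].index.tolist()

# Step 2: drop the band with the highest VIF until all VIFs are acceptable.
def prune_by_vif(df, threshold=10.0):
    cols = list(df.columns)
    while len(cols) > 1:
        X = sm.add_constant(df[cols])
        vifs = [variance_inflation_factor(X.values, i + 1) for i in range(len(cols))]
        worst = int(np.argmax(vifs))
        if vifs[worst] < threshold:
            break
        cols.pop(worst)
    return cols

features = prune_by_vif(bands[selected])
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(bands[features], chla)
print(features, f"R^2 = {model.score(bands[features], chla):.2f}")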

13 pages, 4494 KB  
Article
Direct UAV-Based Detection of Botrytis cinerea in Vineyards Using Chlorophyll-Absorption Indices and YOLO Deep Learning
by Guillem Montalban-Faet, Enrique Pérez-Mateo, Rafael Fayos-Jordan, Pablo Benlloch-Caballero, Aleksandr Lada, Jaume Segura-Garcia and Miguel Garcia-Pineda
Sensors 2026, 26(2), 374; https://doi.org/10.3390/s26020374 - 6 Jan 2026
Abstract
The transition toward Agriculture 5.0 requires intelligent and autonomous monitoring systems capable of providing early, accurate, and scalable crop health assessment. This study presents the design and field evaluation of an artificial intelligence (AI)–based unmanned aerial vehicle (UAV) system for the detection of Botrytis cinerea in vineyards using multispectral imagery and deep learning. The proposed system integrates calibrated multispectral data with vegetation indices and a YOLOv8 object detection model to enable automated, geolocated disease detection. Experimental results obtained under real vineyard conditions show that training the model using the Chlorophyll Absorption Ratio Index (CARI) significantly improves detection performance compared to RGB imagery, achieving a precision of 92.6%, a recall of 89.6%, an F1-score of 91.1%, and a mean Average Precision (mAP@50) of 93.9%. In contrast, the RGB-based configuration yielded an F1-score of 68.1% and an mAP@50 of 68.5%. The system achieved an average inference time below 50 ms per image, supporting near real-time UAV operation. These results demonstrate that physiologically informed spectral feature selection substantially enhances early Botrytis cinerea detection and confirm the suitability of the proposed UAV–AI framework for precision viticulture within the Agriculture 5.0 paradigm.
(This article belongs to the Special Issue AI-IoT for New Challenges in Smart Cities)
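For reference, one commonly cited formulation of CARI measures how far the 670 nm reflectance sits from the straight line joining the 550 nm and 700 nm reflectances, scaled by R700 / R670. The function below implements that formulation with toy inputs; the paper's exact band processing, calibration, and any variant of the index it uses may differ:

import numpy as np

def cari(r550, r670, r700):
    """Chlorophyll Absorption Ratio Index, following a commonly cited
    formulation: the distance of the 670 nm reflectance from the baseline
    through the 550 nm and 700 nm points, scaled by R700 / R670."""
    r550, r670, r700 = (np.asarray(x, dtype=float) for x in (r550, r670, r700))
    a = (r700 - r550) / 150.0   # slope of the 550-700 nm baseline
    b = r550 - a * 550.0        # intercept of that baseline
    return (r700 / r670) * np.abs(a * 670.0 + b - r670) / np.sqrt(a**2 + 1.0)

# Toy per-pixel reflectances (healthy-looking vs. chlorotic-looking canopy).
print(cari(r550=[0.12, 0.18], r670=[0.04, 0.10], r700=[0.30, 0.22]).round(3))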

14 pages, 9038 KB  
Article
BSGNet: Vehicle Detection in UAV Imagery of Construction Scenes via Biomimetic Edge Awareness and Global Receptive Field Modeling
by Yongwei Wang, Yuan Chen, Yakun Xie, Jun Zhu, Chao Dang and Hao Zhu
Drones 2026, 10(1), 32; https://doi.org/10.3390/drones10010032 - 5 Jan 2026
Abstract
Detecting vehicles in remote sensing images of construction sites captured by Unmanned Aerial Vehicles (UAVs) faces severe challenges, including extremely small target scales, high inter-class visual similarity, cluttered backgrounds, and highly variable imaging conditions. To address these issues, we propose BSGNet (Biomimetic Sharpening and Global Receptive Field Network)—a novel detection architecture that synergistically fuses biologically inspired visual mechanisms with global receptive field modeling. Inspired by the Sustained Contrast Detection (SCD) mechanism in frog retinal ganglion cells, we design a Perceptual Sharpening Module (PSM). This module combines dual-path contrast enhancement with spatial attention mechanisms to significantly improve sensitivity to the high-frequency edge structures of small targets while effectively suppressing interfering backgrounds. To overcome the inherent limitation of such biomimetic mechanisms—specifically their restricted local receptive fields—we further introduce a Global Heterogeneous Receptive Field Learning Module (GRM). This module employs parallel multi-branch dilated convolutions and local detail enhancement paths to achieve joint modeling of long-range semantic context and fine-grained local features. Extensive experiments on our newly constructed UAV Construction Vehicle (UCV) dataset demonstrate that BSGNet achieves state-of-the-art performance: obtaining 64.9% APs on small targets and 81.2% on the overall mAP@0.5 metric, with an inference latency of only 31.4 milliseconds, outperforming existing mainstream detection frameworks in multiple metrics. Furthermore, the model demonstrates robust generalization performance on public datasets.
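The global receptive-field idea can be pictured with a parallel multi-branch dilated-convolution block: each branch sees a wider context, and a 1 × 1 convolution fuses them onto a residual path that keeps local detail. This is a generic PyTorch sketch with assumed channel counts and dilation rates, not the paper's GRM:

import torch
import torch.nn as nn

class MultiDilationBlock(nn.Module):
    """Parallel 3x3 convolutions with increasing dilation rates enlarge the
    receptive field without downsampling; a 1x1 convolution fuses the branches."""

    def __init__(self, channels: int, dilations=(1, 2, 4, 8)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv2d(channels, channels, kernel_size=3, padding=d, dilation=d)
            for d in dilations
        )
        self.fuse = nn.Conv2d(channels * len(dilations), channels, kernel_size=1)
        self.act = nn.SiLU()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        feats = torch.cat([branch(x) for branch in self.branches], dim=1)
        return self.act(x + self.fuse(feats))   # residual keeps fine-grained detail

x = torch.randn(1, 64, 96, 96)
print(MultiDilationBlock(64)(x).shape)          # torch.Size([1, 64, 96, 96])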

30 pages, 5730 KB  
Article
Indoor UAV 3D Localization Using 5G CSI Fingerprinting
by Mohsen Shahraki, Ahmed Elamin and Ahmed El-Rabbany
ISPRS Int. J. Geo-Inf. 2026, 15(1), 24; https://doi.org/10.3390/ijgi15010024 - 5 Jan 2026
Abstract
Fifth-generation (5G) wireless networks have been widely deployed across various applications, including indoor positioning. This paper presents a model for 3D indoor localization of an unmanned aerial vehicle (UAV) using 5G millimeter-wave technology. Wireless InSite software is used to simulate a real-world environment and extract channel state information from multiple 5G next-generation NodeBs (gNBs), which is then used to generate channel frequency response (CFR) images. These images are employed in a fingerprinting method, where a deep convolutional neural network is trained for accurate position prediction. The model is trained across multiple scenarios involving changes in the number of gNBs, receiver positions, and spacing. In all scenarios, the model is tested using a UAV flying along a trajectory at variable speed. It is shown that a mean positioning error (MPE) of 0.36 m in 2D and 0.43 m in 3D is achieved when twelve gNBs with receivers spaced at 0.25 m are used. In addition, the corresponding root mean square error (RMSE) values of 0.32 m (2D) and 0.33 m (3D) further confirm the stability of the localization performance by indicating a low dispersion of positioning errors. This demonstrates that high positioning accuracy is feasible, even when synchronization errors and hardware imperfections exist.
(This article belongs to the Special Issue Indoor Mobile Mapping and Location-Based Knowledge Services)
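The summary metrics quoted above can be related with a short sketch: the mean positioning error is the average Euclidean error over trajectory epochs, and the sketch takes RMSE as the root mean square of those same errors (RMSE conventions vary between studies, e.g. per-axis or about the mean error, so this is only one common reading). The trajectory values are made up:

import numpy as np

def localization_errors(pred, truth):
    """Per-epoch Euclidean errors plus two common summary metrics:
    mean positioning error (MPE) and root-mean-square error (RMSE)."""
    err = np.linalg.norm(np.asarray(pred) - np.asarray(truth), axis=1)
    return err.mean(), np.sqrt((err ** 2).mean())

# Made-up predicted vs. true 3D positions (metres) along a short trajectory.
truth = np.array([[0.0, 0.0, 1.5], [1.0, 0.5, 1.6], [2.0, 1.0, 1.8]])
pred = truth + np.array([[0.2, -0.1, 0.1], [-0.3, 0.2, 0.0], [0.1, 0.1, -0.2]])
mpe, rmse = localization_errors(pred, truth)
print(f"MPE = {mpe:.2f} m, RMSE = {rmse:.2f} m")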

23 pages, 17044 KB  
Article
BEHAVE-UAV: A Behaviour-Aware Synthetic Data Pipeline for Wildlife Detection from UAV Imagery
by Larisa Taskina, Kirill Vorobyev, Leonid Abakumov and Timofey Kazarkin
Drones 2026, 10(1), 29; https://doi.org/10.3390/drones10010029 - 4 Jan 2026
Abstract
Unmanned aerial vehicles (UAVs) are increasingly used to monitor wildlife, but training robust detectors still requires large, consistently annotated datasets collected across seasons, habitats and flight altitudes. In practice, such data are scarce and expensive to label, especially when animals occupy only a few pixels in high-altitude imagery. We present a behaviour-aware synthetic data pipeline, implemented in Unreal Engine 5, that combines parameterised animal agents, procedurally varied environments and UAV-accurate camera trajectories to generate large volumes of labelled UAV imagery without manual annotation. Each frame is exported with instance masks, YOLO-format bounding boxes and tracking metadata, enabling both object detection and downstream behavioural analysis. Using this pipeline, we study YOLOv8s trained under six regimes that vary by data source (synthetic versus real) and input resolution, including a fractional fine-tuning sweep on a public deer dataset. High-resolution synthetic pre-training at 1280 px substantially improves small-object detection and, after fine-tuning on only 50% of the real images, recovers nearly all performance achieved with the fully labelled real set. At lower resolution (640 px), synthetic initialisation matches real-only training after fine-tuning, indicating that synthetic data do not harm and can accelerate convergence. These results show that behaviour-aware synthetic data can make UAV wildlife monitoring more sample-efficient while reducing annotation cost.
(This article belongs to the Section Drones in Ecology)
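The YOLO-format export mentioned in the abstract boils down to writing one normalised line per rendered instance. A minimal conversion from a pixel-space box, with illustrative numbers (the function name and values are not taken from the pipeline itself):

def to_yolo_line(class_id, x1, y1, x2, y2, img_w, img_h):
    """Convert a pixel-space box (x1, y1, x2, y2) to the YOLO label format:
    'class x_center y_center width height', all normalised to [0, 1]."""
    cx = (x1 + x2) / 2.0 / img_w
    cy = (y1 + y2) / 2.0 / img_h
    w = (x2 - x1) / img_w
    h = (y2 - y1) / img_h
    return f"{class_id} {cx:.6f} {cy:.6f} {w:.6f} {h:.6f}"

# A synthetic deer instance rendered at 1280 x 1280 px (illustrative numbers).
print(to_yolo_line(0, x1=612, y1=588, x2=668, y2=642, img_w=1280, img_h=1280))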

22 pages, 31354 KB  
Article
Heritage Conservation and Management of Traditional Anhui Dwellings Using 3D Digitization: A Case Study of the Architectural Heritage Clusters in Huangshan City
by Jianfu Chen, Jie Zhong, Qingqian Ning, Zhengjia Xu and Hiroatsu Fukuda
Buildings 2026, 16(1), 211; https://doi.org/10.3390/buildings16010211 - 2 Jan 2026
Abstract
Traditional villages stand as irreplaceable treasures of global cultural heritage, embodying profound historical, cultural, and esthetic values. However, the accelerating pace of urbanization has exposed them to unprecedented threats, including structural degradation, loss of intangible cultural practices, and the homogenization of rural landscapes. In recent years, three-dimensional (3D) laser scanning, unmanned aerial vehicles (UAVs), and other advanced geospatial technologies have been increasingly applied in the conservation and restoration of architectural heritage. The digital documentation of traditional dwellings not only ensures the accuracy and efficiency of conservation efforts but also minimizes physical intervention, thereby safeguarding the authenticity and integrity of heritage sites. This study examines the architectural characteristics and conservation challenges of traditional Huizhou dwellings in Huangshan City, Anhui Province, by integrating oblique photogrammetry, terrestrial laser scanning (TLS), and 3D modeling. Close-range photogrammetry, combined with image matching algorithms and computer vision techniques, was used to produce highly detailed 3D models of historical structures. UAV-based data acquisition was further employed to generate Heritage Building Information Modeling (HBIM) from point cloud datasets, which were subsequently pre-processed and denoised for restoration simulations. In addition, HBIM was utilized to conduct quantitative analyses of architectural components, providing critical support for heritage management and decision-making in conservation planning. The findings demonstrate that 3D digitization offers a sustainable and replicable model for the protection, revitalization, and adaptive reuse of traditional villages, contributing to the long-term preservation of their cultural and architectural legacy.
(This article belongs to the Section Construction Management, and Computers & Digitization)
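Point-cloud pre-processing and denoising of the kind mentioned above is often done with voxel downsampling plus statistical outlier removal. The sketch below shows one standard option via Open3D; the file names are placeholders, and the voxel size and outlier parameters are illustrative choices rather than values taken from the study:

import open3d as o3d

# Placeholder file names; any UAV/TLS point cloud in a format Open3D reads works.
pcd = o3d.io.read_point_cloud("huizhou_dwelling_raw.ply")

# Thin the cloud to a manageable density, then drop points whose mean distance
# to their neighbours deviates strongly from the local average (sensor noise,
# vegetation speckle, stray returns).
pcd = pcd.voxel_down_sample(voxel_size=0.02)   # 2 cm grid
pcd, kept = pcd.remove_statistical_outlier(nb_neighbors=20, std_ratio=2.0)

o3d.io.write_point_cloud("huizhou_dwelling_clean.ply", pcd)
print(f"{len(kept)} points kept after denoising")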
