Search Results (3,979)

Search Parameters:
Keywords = UAV imaging

23 pages, 2951 KB  
Article
Multi-View Camera-Based UAV 3D Trajectory Reconstruction Using an Optical Imaging Geometric Model
by Chen Ji, Yiyue Wang, Junfan Yi, Xiangtian Zheng, Wanxuan Geng and Liang Cheng
Electronics 2026, 15(7), 1425; https://doi.org/10.3390/electronics15071425 (registering DOI) - 30 Mar 2026
Abstract
In low-altitude complex environments, accurately reconstructing the three-dimensional (3D) flight trajectories of small unmanned aerial vehicles (UAVs) without onboard positioning modules remains challenging. To address this issue, this paper proposes a multi-view ground camera-based UAV 3D trajectory detection method founded on an optical imaging geometric model. Multiple ground cameras are used to synchronously observe UAV flight, enabling stable 3D trajectory reconstruction without relying on an onboard Global Navigation Satellite System (GNSS). At the two-dimensional (2D) observation level, a lightweight object detection model is employed for rapid UAV detection. Foreground segmentation is further introduced to extract accurate UAV contours, and geometric centroids are computed to obtain precise image-plane coordinates. At the 3D reconstruction stage, camera extrinsic parameters are estimated using a back-intersection method with ground control points, and the UAV's spatial position in the world coordinate system is recovered via multi-view forward intersection. Field experiments demonstrate that the proposed method achieves stable 3D trajectory reconstruction in real urban environments, with a median error of 4.93 m and a mean error of 5.83 m. The mean errors along the X, Y, and Z axes are 2.28 m, 4.58 m, and 1.09 m, respectively, confirming its effectiveness for low-cost UAV trajectory monitoring.
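The multi-view forward intersection described in this abstract reduces to linear triangulation: each camera contributes two equations relating its 3×4 projection matrix to the observed image point, and the 3D position is the least-squares solution of the stacked system. A minimal numpy sketch with synthetic cameras (the matrices, baseline, and point are illustrative, not from the paper):

```python
import numpy as np

def forward_intersection(proj_mats, img_pts):
    """Triangulate one 3D point from >= 2 views by linear least squares (DLT).
    proj_mats: list of 3x4 camera projection matrices.
    img_pts:   list of (u, v) image-plane coordinates, one per view."""
    rows = []
    for P, (u, v) in zip(proj_mats, img_pts):
        rows.append(u * P[2] - P[0])   # u * (row 3) - (row 1) = 0
        rows.append(v * P[2] - P[1])   # v * (row 3) - (row 2) = 0
    A = np.asarray(rows)
    _, _, vt = np.linalg.svd(A)        # nullspace of A = homogeneous solution
    X = vt[-1]
    return X[:3] / X[3]                # dehomogenize

# Two synthetic cameras observing a known world point
X_true = np.array([10.0, 5.0, 30.0, 1.0])
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])               # camera at origin
P2 = np.hstack([np.eye(3), np.array([[-20.0], [0.0], [0.0]])])  # 20 m baseline
pts = [P @ X_true for P in (P1, P2)]
pts = [(p[0] / p[2], p[1] / p[2]) for p in pts]             # project to pixels
X_hat = forward_intersection([P1, P2], pts)
```

With noisy detections from more than two cameras the same system simply gains rows, and the SVD solution becomes a least-squares estimate.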

28 pages, 3135 KB  
Article
Zoom Long-Wave Infrared Constant Ground Resolution Imaging Optical System Design
by Zhiqiang Yang, Wenna Zhang, Bohan Wu, Liguo Wang, Yao Li, Lihong Yang and Lei Gong
Photonics 2026, 13(4), 332; https://doi.org/10.3390/photonics13040332 (registering DOI) - 29 Mar 2026
Abstract
Long-wave infrared (LWIR) airborne optical systems for ground imaging are widely used in ground reconnaissance, agricultural monitoring, counterterrorism, and other fields. Traditional oblique-view ground-imaging optical systems suffer from a critical drawback compared to nadir-view systems: the significant variation in object distance between distant and nearby targets. This disparity leads to inconsistent ground resolution (GR), producing images in which distant targets exhibit significantly lower resolution than nearby ones, which is highly detrimental to information acquisition and three-dimensional modeling. Furthermore, the limited field of view of fixed-focal-length systems prevents the unmanned aerial vehicle (UAV) from acquiring target information effectively across varying flight altitudes. To address these issues, this paper designs an oblique imaging optical system that achieves both constant GR and zoom functionality in the LWIR band. By controlling the ground resolution, an LWIR continuous-zoom optical system was designed that maintains constant GR over the entire field of view. Its modulation transfer function (MTF) approaches the diffraction limit across the full field of view, and the spot diagram remains within the Airy disk at every field angle, indicating that the geometric aberrations of the system are well corrected. The imaging performance is primarily determined by the wavelength and the F-number; in the LWIR, the longer wavelength results in a larger Airy disk radius. The system meets imaging quality requirements and is suitable for air-to-ground target reconnaissance imaging.
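Constant ground resolution in an oblique system follows from a simple geometric relation: GR ≈ slant range × pixel pitch / focal length, so the focal length must grow with slant range. A toy calculation under assumed values (the 12 µm pixel pitch, 3 km altitude, and 0.5 m target GR are illustrative, not from the paper):

```python
import numpy as np

# Assumed illustrative parameters (not from the paper)
PIXEL_PITCH = 12e-6   # LWIR detector pixel pitch, m
TARGET_GR = 0.5       # desired ground resolution, m per pixel
ALTITUDE = 3000.0     # flight altitude, m

def focal_length_for_constant_gr(oblique_deg):
    """Focal length (m) that holds ground resolution at TARGET_GR for a
    target viewed at the given angle from nadir."""
    slant_range = ALTITUDE / np.cos(np.radians(oblique_deg))
    # GR ~= slant_range * pixel_pitch / f   =>   f = slant_range * pitch / GR
    return slant_range * PIXEL_PITCH / TARGET_GR

f30 = focal_length_for_constant_gr(30.0)   # near-field view angle
f60 = focal_length_for_constant_gr(60.0)   # far-field view angle
```

The far-field focal length comes out longer, which is exactly the zoom behavior the constant-GR design must provide across the field of view.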
31 pages, 11688 KB  
Article
RShDet: An Adaptive Spectral-Aware Network for Remote Sensing Object Detection Under Haze Corruption
by Wei Zhang, Yuantao Wang, Haowei Yang and Xuerui Mao
Remote Sens. 2026, 18(7), 1020; https://doi.org/10.3390/rs18071020 (registering DOI) - 29 Mar 2026
Abstract
Remote sensing (RS) object detection faces intrinsic challenges arising from the overhead imaging paradigm and the diversity of climatic conditions. In particular, atmospheric phenomena such as clouds and haze cause severe visual degradation, making reliable object detection difficult. However, most existing detectors are developed under clear-weather conditions, which limits their generalization in realistic haze-degraded RS scenarios. To alleviate this issue, an adaptive spectral-aware network for RS object detection under haze interference, termed RShDet, is proposed; it is designed to handle both high-altitude RS imagery and low-altitude unmanned aerial vehicle (UAV) scenarios. First, the Object-Centered Dynamic Enhancement (OCDE) module dynamically adjusts the spatial positions of key-value pairs through query-agnostic offsets, enabling the network to emphasize object-relevant regions while suppressing haze-induced background interference. Second, the Dynamic Multi-Spectral Perception and Filtering (DSPF) module introduces a multi-spectral attention mechanism that adaptively selects informative frequency components, thereby enhancing discriminative feature representations in hazy environments. Third, the Frequency-Domain Multi-Feature Fusion (FDMF) module employs learnable weights to complementarily integrate amplitude and phase information in the frequency domain, enabling effective cross-task feature interaction between the enhancement and detection branches. Extensive experiments demonstrate that RShDet consistently achieves superior detection performance under hazy conditions on both synthetic and real-world benchmarks: it improves mAP50 by 2.4% on Hazy-DOTA, mAP by 1.9% on HazyDet, and mAP by 2.33% on the real-world foggy dataset RTTS, surpassing existing state-of-the-art methods.
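The FDMF idea of combining amplitude and phase separately in the frequency domain can be illustrated with a fixed-weight numpy sketch (the paper's learnable weights are replaced by constants here, and the feature maps are synthetic):

```python
import numpy as np

def frequency_fusion(feat_a, feat_b, w_amp=0.5, w_phase=0.5):
    """Fuse two 2D feature maps in the frequency domain by mixing their
    amplitude and phase spectra separately, then inverting the transform.
    Fixed weights stand in for the learnable weights in the abstract."""
    Fa, Fb = np.fft.fft2(feat_a), np.fft.fft2(feat_b)
    amp = w_amp * np.abs(Fa) + (1 - w_amp) * np.abs(Fb)
    phase = w_phase * np.angle(Fa) + (1 - w_phase) * np.angle(Fb)
    fused = amp * np.exp(1j * phase)          # recombine into a spectrum
    return np.real(np.fft.ifft2(fused))

a = np.random.default_rng(0).random((8, 8))   # synthetic feature map
out = frequency_fusion(a, a)                  # fusing a map with itself
```

Fusing a map with itself is a useful sanity check: the amplitude and phase mixes reproduce the original spectrum, so the output equals the input.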
(This article belongs to the Special Issue Advances in Remote Sensing Image Target Detection and Recognition)
18 pages, 5072 KB  
Article
Overwintering Peat Fires in Russia’s Boreal Forests: Persistence, Detection, and Suppression
by Grigory Kuksin, Ilia Sekerin, Linda See and Dmitry Schepaschenko
Fire 2026, 9(4), 144; https://doi.org/10.3390/fire9040144 (registering DOI) - 28 Mar 2026
Abstract
Overwintering peat fires are increasingly reported in the boreal regions, where they persist underground through winter and reignite in spring, intensifying greenhouse gas emissions and landscape degradation. This study investigates the conditions that enable peat fires to survive freezing and snow cover, and presents practical methods for their winter detection and suppression. We combined satellite data, UAV-based thermal imaging, time-lapse photography, and ground measurements of temperature, groundwater depth, and peat moisture to identify active overwintering hotspots. Our results show that these fires persist primarily where groundwater levels remain below 60 cm, particularly under tree roots, compacted soil, or elevated terrain that limits moisture recharge. UAV thermal imaging proved the most reliable detection tool, identifying 98% of hotspots. We developed and successfully applied a winter extinguishing method that involves mechanical disruption and dispersion of smoldering peat over frozen ground, allowing rapid cooling without re-ignition. These findings clarify the mechanisms sustaining overwintering fires and provide an effective approach for their mitigation, contributing to reduced emissions and improved management of boreal peatlands vulnerable to climate change.
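Hotspot detection in a radiometric thermal frame is, at its simplest, a threshold against the ambient background. A toy stand-in for the UAV thermal mapping step (the 15 °C margin and the temperatures are invented for illustration, not the study's parameters):

```python
import numpy as np

def detect_hotspots(thermal, ambient_c, delta_c=15.0):
    """Flag pixels whose temperature exceeds the ambient background by
    delta_c degrees Celsius. Returns a boolean mask and the pixel count."""
    mask = thermal > ambient_c + delta_c
    return mask, int(mask.sum())

frame = np.full((6, 6), -10.0)   # frozen ground around -10 C
frame[2, 3] = 40.0               # smoldering peat vent
frame[4, 1] = 25.0               # weaker subsurface hotspot
mask, n = detect_hotspots(frame, ambient_c=-10.0)
```

Real workflows add georeferencing and cluster the flagged pixels into discrete hotspots, but the thresholding core is this simple.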

50 pages, 7780 KB  
Systematic Review
Intelligent Eyes on Buildings: A Scientometric Mapping and Systematic Review of AI-Based Crack Detection and Predictive Diagnostics of Building Structures
by Mehdi Mohagheghi, Ali Bahadori-Jahromi and Shah Room
Encyclopedia 2026, 6(4), 75; https://doi.org/10.3390/encyclopedia6040075 - 27 Mar 2026
Abstract
Artificial Intelligence (AI)-based crack detection in buildings uses computer vision and deep learning to automatically identify structural cracks from inspection images. In recent years, many studies have explored this topic, but the overall development of the field, its methodological practices, and the remaining challenges are still not fully clear. Unlike most previous reviews that focus mainly on technical methods, this study combines a large-scale scientometric mapping of the research field with a focused technical analysis of recent AI-based crack detection methods specifically applied to building structures. This study therefore provides a dual-layer review covering research published between 2015 and 2025. A total of 146 Scopus-indexed publications were analysed using the Visualization of Similarities viewer (VOSviewer) to examine publication growth, thematic evolution, collaboration patterns, and citation structures. In addition, a focused technical review of 36 highly relevant studies was carried out to analyse task formulations, model families, datasets, evaluation protocols, and methodological practices. The results show a rapid increase in research activity after 2020, largely driven by advances in deep learning and Unmanned Aerial Vehicle (UAV)-based inspections. At the same time, collaboration networks remain uneven, and citation influence is concentrated in a limited number of research communities. The technical review further shows that most studies focus on detection-level tasks, particularly You Only Look Once (YOLO)-based models, while predictive diagnostics, automated inspection reporting, and decision-oriented Structural Health Monitoring (SHM) are still rarely addressed. Current datasets and evaluation protocols also remain mostly perception-oriented, which makes it difficult to assess robustness, generalisability, and long-term predictive capability.

24 pages, 15151 KB  
Article
SG-YOLO: A Multispectral Small-Object Detector for UAV Imagery Based on YOLO
by Binjie Zhang, Lin Wang, Quanwei Yao, Keyang Li and Qinyan Tan
Remote Sens. 2026, 18(7), 1003; https://doi.org/10.3390/rs18071003 - 27 Mar 2026
Abstract
Object detection in unmanned aerial vehicle (UAV) imagery remains a crucial yet challenging task due to complex backgrounds, large scale variations, and the prevalence of small objects. Visible-spectrum images lack robustness under all-weather and all-illumination conditions; by contrast, multispectral sensing provides complementary cues (e.g., thermal signatures) that improve detection robustness. However, existing multispectral solutions often incur high computational costs and are therefore difficult to deploy on resource-constrained UAV platforms. To address these issues, SG-YOLO is proposed, a lightweight and efficient multispectral object detection framework that aims to balance accuracy and efficiency. First, a Spectral Gated Downsampling Stem (SGDS) is designed, in which grouped convolutions and a gating mechanism are employed at the early stage of the network to extract band-specific features, thereby maximizing spectral complementarity while minimizing redundancy. Second, a Spectral–Spatial Iterative Attention Fusion (SSIAF) module is introduced, in which spectral-wise (channel) attention and spatial-wise attention are iteratively coupled and cascaded in a multi-scale manner to jointly model cross-band dependencies and spatial saliency, thereby aggregating high-level semantic information while suppressing redundant spectral responses. Finally, a Spatial–Channel Synergistic Fusion (SCSF) module is designed to enhance multi-scale and cross-channel feature integration in the neck. Experiments on the MODA dataset show that SG-YOLO achieves 72.4% mAP50, outperforming the baseline by 3.2%. Moreover, compared with a range of mainstream one-stage detectors and multispectral detection methods, SG-YOLO delivers the best overall performance, providing an effective solution for UAV object detection while maintaining a favorable trade-off between model size and detection accuracy.
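The iterative coupling of channel and spatial attention described for SSIAF can be sketched parameter-free in numpy. This toy version replaces the module's learned gating with plain sigmoid gates over pooled statistics, so it only conveys the cascading structure, not the trained behavior:

```python
import numpy as np

def channel_attention(x):
    """One sigmoid gate per spectral band, from global average pooling."""
    gap = x.mean(axis=(1, 2))                  # (C,) per-band statistic
    w = 1.0 / (1.0 + np.exp(-gap))             # sigmoid gate
    return x * w[:, None, None]

def spatial_attention(x):
    """One sigmoid gate per pixel, from the band-wise mean map."""
    m = x.mean(axis=0)                         # (H, W) spatial statistic
    w = 1.0 / (1.0 + np.exp(-m))
    return x * w[None, :, :]

def iterative_attention(x, steps=2):
    """Cascade channel then spatial attention for a few iterations."""
    for _ in range(steps):
        x = spatial_attention(channel_attention(x))
    return x

x = np.ones((3, 4, 4))        # 3 spectral bands, 4x4 feature map
y = iterative_attention(x)
```

Each pass rescales features toward the bands and locations the gates consider salient; in the real module the statistics feed small learned layers instead of raw sigmoids.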

37 pages, 4825 KB  
Article
Effects of Cane Density on Primocane Raspberry Assessed Using UAV-Based Multispectral Imaging
by Kamil Buczyński, Magdalena Kapłan and Zbigniew Jarosz
Agriculture 2026, 16(7), 742; https://doi.org/10.3390/agriculture16070742 - 27 Mar 2026
Abstract
Cane density is a key management factor in raspberry production, directly affecting yield formation and canopy structure. However, most previous studies have focused on floricane cultivars and relied on conventional field measurements, while the response of primocane raspberries and their canopy level dynamics remain less explored. The objective of this study was to evaluate how cane density influences yield components, cane growth, and canopy structure in primocane raspberry cultivars, and to assess whether these effects can be captured using UAV-based multispectral imaging. Field experiments were conducted over two growing seasons using two primocane cultivars grown under different cane density treatments. Yield components and cane growth parameters were measured, and repeated drone multispectral surveys were performed during the production period to quantify the spatial and temporal variability of vegetation indices. Increasing cane density led to higher total yield per unit area in both cultivars, mainly through an increase in fruit number rather than fruit weight, indicating a compensatory yield response. Cane density significantly modified canopy architecture, with responses varying between cultivars and seasons. Multispectral vegetation indices revealed predominantly consistent density-dependent gradients, characterized by higher mean values and reduced spatial and temporal variability at higher cane densities. Denser cane configurations were associated with lower total temporal amplitude and smoother seasonal trajectories, indicating a stabilization of canopy reflectance dynamics. Although this overall pattern was preserved across indices, the magnitude and regularity of temporal responses were index-specific and cultivar-dependent. The results demonstrate that cane density management in primocane raspberries affects both yield formation and canopy structure, and that these effects can be effectively monitored using UAV-based multispectral imaging. Integrating remote sensing with field measurements offers a valuable approach for supporting data-driven optimization of raspberry production systems.
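Two quantities from this abstract are easy to make concrete: a vegetation index computed per survey, and the "total temporal amplitude" of its seasonal trajectory. A minimal sketch using NDVI and invented band values (the index choice and numbers are illustrative, not the study's):

```python
import numpy as np

def ndvi(nir, red):
    """Normalized Difference Vegetation Index from NIR and red bands."""
    return (nir - red) / (nir + red + 1e-9)   # epsilon avoids divide-by-zero

def temporal_amplitude(index_series):
    """Total temporal amplitude of a plot's vegetation index: the
    max-minus-min of its seasonal mean trajectory."""
    means = np.array([np.mean(s) for s in index_series])
    return float(means.max() - means.min())

# Illustrative NDVI maps for one plot at three survey dates (values invented)
season = [ndvi(np.full((4, 4), n), np.full((4, 4), r))
          for n, r in [(0.5, 0.3), (0.8, 0.2), (0.6, 0.25)]]
amp = temporal_amplitude(season)
```

A denser-cane plot in the study would show a smaller `amp`, i.e., a flatter, more stable seasonal reflectance trajectory.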

25 pages, 9555 KB  
Article
EFSL-YOLO: An Improved Model for Small Object Detection in UAV Vision
by Meng Zhou, Shuke He, Chang Wang and Jing Wang
Drones 2026, 10(4), 243; https://doi.org/10.3390/drones10040243 - 27 Mar 2026
Abstract
To address the challenges in UAV remote sensing imagery, such as small object size, dense occlusion and complex background interference, this paper proposes an enhanced small object detection algorithm based on an improved YOLOv13 model for drone applications in complex weather environments. First, an enhanced feature fusion attention network (EFFA-Net) is designed in the preprocessing stage to reduce image degradation and suppress the interference caused by smoke and haze. Then, in the backbone, a swish-gated convolution (SwiGLUConv) module is designed to adaptively expand the receptive field and enhance multi-scale feature extraction, which strengthens the representation of small targets while maintaining efficient computation. Furthermore, a locally enhanced multi-scale context fusion (LF-MSCF) module is integrated into the feature fusion neck of YOLO, combining multi-head self-attention, channel attention, and spatial attention to suppress background noise and redundant responses, thereby improving detection accuracy. Extensive experiments on the VisDrone-DET2019 dataset, UAVDT dataset, and HazyDet dataset demonstrate that the proposed algorithm outperforms other mainstream methods, showcasing excellent detection accuracy and robustness in complex UAV aerial scenarios.
33 pages, 172200 KB  
Article
HDCGAN+: A Low-Illumination UAV Remote Sensing Image Enhancement and Evaluation Method Based on WPID
by Kelly Chen Ke, Min Sun, Xinyi Wang, Dong Liu and Hanjun Yang
Remote Sens. 2026, 18(7), 999; https://doi.org/10.3390/rs18070999 - 26 Mar 2026
Abstract
Remote sensing images acquired by UAVs under nighttime or low-illumination conditions suffer from insufficient illumination, leading to degraded image quality, detail loss, and noise, which restrict their application in public security and disaster emergency scenarios. Although existing machine learning-based enhancement methods can recover part of the missing information, they often cause color distortion and texture inconsistency. This study proposes an improved low-illumination image enhancement method based on a Weakly Paired Image Dataset (WPID), combining the Hierarchical Deep Convolutional Generative Adversarial Network (HDCGAN) with a low-rank image fusion strategy to enhance the quality of low-illumination UAV remote sensing images. First, YCbCr color channel separation is applied to preserve color information from visible images. Then, a Low-Rank Representation Fusion Network (LRRNet) is employed to perform structure-aware fusion between thermal infrared (TIR) and visible images, thereby enabling effective preservation of structural details and realistic color appearance. Furthermore, a weakly paired training mechanism is incorporated into HDCGAN to enhance detail restoration and structural fidelity. To achieve objective evaluation, a structural consistency assessment framework is constructed based on semantic segmentation results from the Segment Anything Model (SAM). Experimental results demonstrate that the proposed method outperforms state-of-the-art approaches in both visual quality and application-oriented evaluation metrics.
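The YCbCr separation step that preserves color while the luminance channel is enhanced can be written out directly; this sketch uses the standard ITU-R BT.601 full-range conversion (the paper does not specify which YCbCr variant it uses, so this is an assumption):

```python
import numpy as np

def rgb_to_ycbcr(rgb):
    """ITU-R BT.601 full-range RGB -> YCbCr, inputs in [0, 1].
    Y carries luminance (the channel an enhancement network would edit);
    Cb/Cr carry chrominance, preserving the original colors."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y  = 0.299 * r + 0.587 * g + 0.114 * b
    cb = 0.5 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 0.5 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return np.stack([y, cb, cr], axis=-1)

gray = np.full((2, 2, 3), 0.5)     # a neutral mid-gray patch
ycc = rgb_to_ycbcr(gray)           # neutral gray: Y=0.5, Cb=Cr=0.5
```

After enhancement, the edited Y channel is recombined with the untouched Cb/Cr planes and converted back to RGB, which is what keeps the color appearance realistic.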
(This article belongs to the Section Remote Sensing Image Processing)

20 pages, 1782 KB  
Article
Comparing Machine Learning Using UAVs to Ground Survey Methods to Quantify Milkweed Stem Density and Habitat Characteristics in ROWs
by Adam M. Baker, Greg Emerick, Christie Bahlai and Scott Eikenbary
Insects 2026, 17(4), 359; https://doi.org/10.3390/insects17040359 - 25 Mar 2026
Abstract
Monarch butterflies have declined in both eastern and western populations. Conservation initiatives that support this imperiled species are being implemented in lands managed by the energy and transportation sectors. Vegetation management strategies that encourage the presence of milkweed (Asclepias spp.), the larval host of monarch butterflies (Danaus plexippus), or floral resources to support pollinators are being practiced across North America; however, survey methods to evaluate the success of these strategies vary in accuracy and scalability. In this study, we compared five methods to quantify milkweed stem density and land cover estimates: (1) Site al, (2) Transect plot, (3) Square plot, (4) Large transect (informed by the Monarch CCAA methodology), and (5) Machine learning of images collected by UAVs. These methods encompass full coverage ground counts, partial ground counts, and aerial imagery using object-based image analysis. Sites included distribution, transmission, and gas line ROWs, solar arrays, and transportation easements. We found that Site al and Machine learning most consistently quantified milkweed stem density across sites. Partial ground count methods were likely to over- or underestimate milkweed populations. Estimates of habitat characteristics (woody, broadleaf, grass, and bare ground) were inconsistent across methods and sites. The intent of this study was to provide land managers with insight into the most accurate, efficient, and economical approach to quantifying milkweed populations and habitat characteristics.
(This article belongs to the Special Issue Ecology, Diversity and Conservation of Butterflies)

19 pages, 3682 KB  
Article
Estimation of Cotton Above-Ground Biomass Based on Fusion of UAV Spectral and Texture Features
by Guldana Sarsen, Qiuxiang Tang, Yabin Li, Longlong Bao, Yuhang Xu, Guangyun Sun, Jianwen Wu, Yierxiati Abulaiti, Qingqing Lv, Fubin Liang, Na Zhang, Rensong Guo, Liang Wang, Jianping Cui and Tao Lin
Agronomy 2026, 16(6), 668; https://doi.org/10.3390/agronomy16060668 - 22 Mar 2026
Abstract
Cotton above-ground biomass (AGB) is a key indicator of crop growth and yield potential. Traditional monitoring methods are labor-intensive and destructive, limiting their suitability for precision agriculture. This study developed a high-precision, non-destructive model for estimating cotton AGB by integrating spectral and texture features derived from UAV multispectral and RGB images. UAV data were collected at major growth stages in 2024. Eight vegetation indices (VIs) and eight texture features (TFs) were extracted. Four machine learning algorithms—support vector regression (SVR), random forest regression (RFR), partial least squares regression (PLSR), and extreme gradient boosting (XGB)—were evaluated using independent validation data. Models based on fused spectral and texture features outperformed single-feature models. RFR achieved the best performance (R² = 0.811; RMSE = 2.931 t ha⁻¹). Texture features alone also showed strong predictive capability (R² = 0.789), highlighting their value in capturing canopy structural information. These results demonstrate that spectral–texture fusion significantly improves cotton AGB estimation and that RFR provides a robust modeling framework for UAV-based crop monitoring.
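The validation statistics quoted above (R², RMSE) are straightforward to reproduce. A self-contained sketch with invented AGB values, not the study's data:

```python
import numpy as np

def r2_score(y_true, y_pred):
    """Coefficient of determination: 1 - residual SS / total SS."""
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

def rmse(y_true, y_pred):
    """Root mean squared error, in the units of y (here t/ha)."""
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

y = np.array([4.0, 6.0, 8.0, 10.0])      # measured AGB, t/ha (made up)
yhat = np.array([4.5, 5.5, 8.5, 9.5])    # model predictions (made up)
r2, err = r2_score(y, yhat), rmse(y, yhat)
```

In the study's workflow these metrics would be computed on the held-out validation set for each of the four regressors and each feature set (spectral, texture, fused).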

20 pages, 39023 KB  
Article
Lightweight Insulator Defect Detection in High-Resolution UAV Imagery via System-Level Co-Design
by Yujie Zhu, Guanhua Chen, Linghao Zhang, Jiajun Zhou, Junwei Kuang and Jiangxiong Zhu
Remote Sens. 2026, 18(6), 953; https://doi.org/10.3390/rs18060953 - 21 Mar 2026
Abstract
The inspection of minuscule insulator defects from high-resolution (HR) UAV imagery presents a significant algorithmic challenge. The severe scale mismatch between HR images and low-resolution model inputs often leads to feature distortion for sparsely distributed targets. To address these issues, this paper proposes an integrated data–model collaborative framework. At the data level, an offline label-guided optimal tiling (LGOT) strategy is introduced to alleviate scale mismatch by curating information-dense training tiles. At the model level, we design the semi-decoupled prior-driven detection head (SDPD-Head), which leverages evolutionary priors to stabilize the learning of microscopic spatial features. During inference, an online inference-time adaptive tiling (ITAT) strategy is used to match the spatial scale distribution between training and inference and to reduce feature loss caused by direct downscaling. Experiments on a real-world inspection dataset show that the proposed framework achieves an mAP@50 of 92.9% with 2.17 M parameters and 4.7 GFLOPs.
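The tiling idea behind both LGOT and ITAT, splitting a high-resolution image into overlapping fixed-size tiles rather than downscaling it, can be sketched in a few lines (tile size and overlap here are illustrative, not the paper's settings):

```python
import numpy as np

def tile_image(img, tile=4, overlap=1):
    """Split a 2D image into fixed-size tiles with the given overlap,
    the kind of pre-inference tiling used to avoid downscaling HR imagery.
    Returns a list of (row, col, tile_array) entries."""
    stride = tile - overlap
    h, w = img.shape
    tiles = []
    for r in range(0, h - tile + 1, stride):
        for c in range(0, w - tile + 1, stride):
            tiles.append((r, c, img[r:r + tile, c:c + tile]))
    return tiles

img = np.arange(100).reshape(10, 10)     # stand-in for an HR inspection image
tiles = tile_image(img, tile=4, overlap=1)
```

Each tile is fed to the detector at native resolution, and the (row, col) offsets map per-tile detections back into full-image coordinates; the overlap keeps objects on tile borders from being cut in half.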

17 pages, 4872 KB  
Article
Aerial Thermography Using UAV Platforms: Modernization of Critical Energy Infrastructure Diagnostics
by Matej Ščerba, Marek Kišš, Robert Wieszala, Jacek Mendala and Adam Tomaszewski
Appl. Sci. 2026, 16(6), 3014; https://doi.org/10.3390/app16063014 - 20 Mar 2026
Abstract
Unmanned aerial vehicles (UAVs) are increasingly being used as diagnostic platforms in electricity transmission and distribution, enabling safer and faster inspections compared to manual climbing operations or manned aerial support. This article presents an implementation-oriented inspection process that integrates RGB imaging, infrared (IR) thermography and (optionally) LiDAR documentation for critical energy infrastructure and photovoltaic (PV) installations. The survey consists of two stages: a preliminary stage under controlled conditions and an operational stage in a real-world environment, limited only by UAV flight restrictions. Thermal measurements are recorded in radiometric formats and analyzed using polygon- and profile-based tools to identify temperature anomalies (hot spots) and support maintenance escalation decisions. This manuscript presents standardized sample templates for mission logs, QA/QC activities, and anomaly lists, intended to support reproducible data collection in future studies. The proposed process supports predictive maintenance by enabling repeatable inspections, archive-based trend analysis, and integration with asset management processes, while minimizing operational risk and avoiding power outages when technically feasible.

21 pages, 4335 KB  
Article
Real-Time Small UAV Detection in Complex Airspace Using YOLOv11 with Residual Attention and High-Resolution Feature Enhancement
by Chuang Han, Md Redwan Ullah, Amrul Kayes, Khalid Hasan, Md Abdur Rouf, Md Rakib Hasan, Shen Tao, Guo Gengli and Mohammad Masum Billah
J. Imaging 2026, 12(3), 140; https://doi.org/10.3390/jimaging12030140 - 20 Mar 2026
Abstract
Detecting small unmanned aerial vehicles (UAVs) in complex airspace presents significant challenges due to their minimal pixel footprint, resemblance to birds, and frequent occlusion. To address these issues, we propose YOLOv11-ResCBAM, a novel real-time detection framework that integrates a Residual Convolutional Block Attention Module (ResCBAM) and a high-resolution P2 detection head into the YOLOv11 architecture. ResCBAM enhances channel and spatial feature refinement while preserving original feature contexts through residual connections, and the P2 head maintains fine spatial details crucial for small-object localization. Evaluated on a custom dataset of 4917 images (11,733 after augmentation) across three classes (drone, bird, airplane), our model achieves a mean average precision at the 0.5–0.95 IoU thresholds (mAP@0.5–0.95) of 0.845, a 7.9% improvement over the baseline YOLOv11n, while maintaining real-time inference at 50.51 FPS. Cross-dataset validation on the VisDrone2019-DET and UAVDT benchmarks shows promising generalization trends. This work demonstrates the effectiveness of the proposed approach for UAV surveillance systems, balancing detection accuracy with computational efficiency for deployment in security-critical environments.
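The mAP@0.5–0.95 metric quoted above averages precision over IoU thresholds from 0.5 to 0.95; the IoU at its core is a short function. A minimal sketch (the boxes are invented examples):

```python
def iou(box_a, box_b):
    """Intersection-over-Union of two axis-aligned boxes (x1, y1, x2, y2),
    the overlap measure behind the mAP@0.5-0.95 detection metric."""
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)   # zero if no overlap
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

score = iou((0, 0, 10, 10), (5, 5, 15, 15))   # two partially overlapping boxes
```

A detection counts as a true positive at threshold t only when its IoU with a ground-truth box reaches t, which is why small objects, whose boxes shift IoU sharply with a few pixels of error, make mAP@0.5–0.95 so demanding.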
(This article belongs to the Topic Computer Vision and Image Processing, 3rd Edition)

29 pages, 6237 KB  
Article
Development of a Multi-Scale Spectrum Phenotyping Framework for High-Throughput Screening of Salt-Tolerant Rice Varieties
by Xiaorui Li, Jiahao Han, Dongdong Han, Shibo Fang, Zhanhao Zhang, Li Yang, Chunyan Zhou, Chengming Jin and Xuejian Zhang
Agronomy 2026, 16(6), 658; https://doi.org/10.3390/agronomy16060658 - 20 Mar 2026
Abstract
Soil salinization severely threatens agricultural sustainability in saline–alkali regions, and high-throughput, efficient screening of salt-tolerant rice varieties is critical to mitigating this threat. Traditional evaluation methods are constrained by low throughput, limited spatiotemporal resolution, and the lack of standardized indicators. To address these gaps, this study established a multi-scale spectral phenotyping framework integrating ground-based hyperspectral, UAV-borne multispectral, and Sentinel-2 satellite remote sensing data for high-throughput screening of salt-tolerant rice. Field experiments were conducted with 12 rice lines at five key growth stages in Ningxia, China, with synchronous ground spectral measurements and UAV image acquisition on the same day for each stage. Five feature selection methods were employed to screen salt stress-sensitive hyperspectral bands, with classification accuracy validated via a Support Vector Machine (SVM) model. The results showed that: (1) rice spectral characteristics varied dynamically across growth stages, and first-order differential transformation effectively amplified subtle spectral variations in stress-sensitive regions; (2) the Minimum Redundancy–Maximum Relevance (mRMR) method outperformed other methods, achieving 100% classification accuracy at key growth stages, with sensitive bands dominated by red edge bands (58.33%); (3) the constructed Salt Stress Index (SIR) showed strong correlations with classical vegetation indices and rice yield, and could clearly distinguish salt-tolerant and salt-sensitive rice varieties, with stable performance against field environmental noise; and (4) band matching between UAV and Sentinel-2 data enabled multi-scale data fusion and regional-scale salt stress monitoring. This framework realizes the transformation from qualitative spectral description to quantitative salt tolerance evaluation, providing standardized technical support for salt-tolerant rice breeding and precision management of saline–alkali lands.
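The mRMR selection step named in the abstract can be sketched with a greedy loop: pick the band most relevant to the target, then repeatedly add the band maximizing relevance minus mean redundancy with the already-selected set. This toy uses absolute Pearson correlation as a surrogate for the mutual-information criteria mRMR normally uses, on synthetic band data:

```python
import numpy as np

def mrmr_select(X, y, k=2):
    """Greedy minimum-Redundancy Maximum-Relevance band selection, with
    |Pearson correlation| standing in for mutual information."""
    n_feat = X.shape[1]
    # Relevance: correlation of each band with the target
    corr_y = np.array([abs(np.corrcoef(X[:, j], y)[0, 1]) for j in range(n_feat)])
    selected = [int(np.argmax(corr_y))]
    while len(selected) < k:
        best, best_score = None, -np.inf
        for j in range(n_feat):
            if j in selected:
                continue
            # Redundancy: mean correlation with bands already selected
            red = np.mean([abs(np.corrcoef(X[:, j], X[:, s])[0, 1])
                           for s in selected])
            score = corr_y[j] - red
            if score > best_score:
                best, best_score = j, score
        selected.append(best)
    return selected

rng = np.random.default_rng(1)
y = rng.random(50)                                   # synthetic target trait
X = np.column_stack([y + 0.05 * rng.random(50),      # band 0: relevant
                     y + 0.05 * rng.random(50),      # band 1: redundant with 0
                     rng.random(50)])                # band 2: noise
bands = mrmr_select(X, y, k=2)
```

The first pick is always one of the two relevant bands; whether the second pick favors the redundant twin or the noise band depends on the relevance-redundancy trade-off, which is exactly the tension mRMR is designed to balance.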
(This article belongs to the Section Precision and Digital Agriculture)
