Search Results (1,483)

Search Parameters:
Keywords = aerial remote sensing image

25 pages, 5186 KB  
Article
UAV-Based Remote Sensing Methods in the Structural Assessment of Remediated Landfills
by Grzegorz Pasternak, Łukasz Wodzyński, Jacek Jóźwiak, Eugeniusz Koda, Janina Zaczek-Peplinska and Anna Podlasek
Remote Sens. 2026, 18(1), 57; https://doi.org/10.3390/rs18010057 - 24 Dec 2025
Abstract
Remediated landfills require long-term monitoring due to ongoing processes such as settlement, water infiltration, leachate migration, and biogas emissions, which may lead to cover degradation and environmental risks. Traditional ground-based inspections are often time-consuming, costly, and limited in terms of spatial coverage. This study presents the application of Unmanned Aerial Vehicle (UAV)-based remote sensing methods for the structural assessment of a remediated landfill. A multi-sensor approach was employed, combining geometric data (Light Detection and Ranging (LiDAR) and photogrammetry), hydrological modeling (surface water accumulation and runoff), multispectral imaging, and thermal data. The results showed that subsidence-induced depressions modified surface drainage, leading to water accumulation, concentrated runoff, and vegetation stress. Multispectral imaging successfully identified zones of persistent instability, while UAV thermal imaging detected a distinct leachate-related anomaly that was not visible in red–green–blue (RGB) or multispectral data. By integrating geometric, hydrological, spectral, and thermal information, this paper demonstrates practical applications of remote sensing data in detecting cover degradation on remediated landfills. Compared to traditional methods, UAV-based monitoring is a low-cost and repeatable approach that can cover large areas with high spatial and temporal resolution. The proposed approach provides an effective tool for post-closure landfill management and can be applied to other engineered earth structures.

28 pages, 6632 KB  
Article
Reliable Crack Evolution Monitoring from UAV Remote Sensing: Bridging Detection and Temporal Dynamics
by Canwei Wang and Jin Tang
Remote Sens. 2026, 18(1), 51; https://doi.org/10.3390/rs18010051 - 24 Dec 2025
Abstract
Surface crack detection and temporal evolution analysis are fundamental tasks in remote sensing and photogrammetry, providing critical information for slope stability assessment, infrastructure safety inspection, and long-term geohazard monitoring. However, current unmanned aerial vehicle (UAV)-based crack detection pipelines typically treat spatial detection and temporal change analysis as separate processes, leading to weak geometric consistency across time and limiting the interpretability of crack evolution patterns. To overcome these limitations, we propose the Longitudinal Crack Fitting Network (LCFNet), a unified and physically interpretable framework that achieves, for the first time, integrated time-series crack detection and evolution analysis from UAV remote sensing imagery. At its core, the Longitudinal Crack Fitting Convolution (LCFConv) integrates Fourier-series decomposition with affine Lie group convolution, enabling anisotropic feature representation that preserves equivariance to translation, rotation, and scale. This design effectively captures the elongated and oscillatory morphology of surface cracks while suppressing background interference under complex aerial viewpoints. Beyond detection, a Lie-group-based Temporal Crack Change Detection (LTCCD) module is introduced to perform geometrically consistent matching between bi-temporal UAV images, guided by a partial differential equation (PDE) formulation that models the continuous propagation of surface fractures, providing a bridge between discrete perception and physical dynamics. Extensive experiments on the constructed UAV-Filiform Crack Dataset (10,588 remote sensing images) demonstrate that LCFNet surpasses advanced detection frameworks such as You only look once v12 (YOLOv12), RT-DETR, and RS-Mamba, achieving superior performance (mAP50:95 = 75.3%, F1 = 85.5%, and CDR = 85.6%) while maintaining real-time inference speed (88.9 FPS). 
Field deployment on a UAV–IoT monitoring platform further confirms the robustness of LCFNet in multi-temporal remote sensing applications, accurately identifying newly formed and extended cracks under varying illumination and terrain conditions. This work establishes the first end-to-end paradigm that unifies spatial crack detection and temporal evolution modeling in UAV remote sensing, bridging discrete deep learning inference with continuous physical dynamics. The proposed LCFNet provides both algorithmic robustness and physical interpretability, offering a new foundation for intelligent remote sensing-based structural health assessment and high-precision photogrammetric monitoring.
(This article belongs to the Special Issue Advances in Remote Sensing Technology for Ground Deformation)

21 pages, 10337 KB  
Article
A Spatial Consistency-Guided Sampling Algorithm for UAV Remote Sensing Heterogeneous Image Matching
by Runjing Chen, Haozhe Lv, Jiaxing Zhou, Zhigao Chen, Taohong Li, Xinping Zhang, Yunpeng Li and Zhibin Zhan
Sensors 2026, 26(1), 102; https://doi.org/10.3390/s26010102 - 23 Dec 2025
Abstract
In UAV visual localization applications, the quality of image matching directly affects both the precision and reliability of the visual localization task. In UAV visual localization tasks, high-resolution remote sensing images are typically used as reference maps, whereas UAV-acquired aerial images serve as real-time inputs, enabling the estimation of the UAV’s spatial position through image matching. However, due to the substantial difference in imaging mechanisms and acquisition conditions between reference and real-time images, heterogeneous image pairs often contain numerous outliers, which significantly hinder the direct application of traditional matching algorithms such as RANSAC. To address these challenges, a spatial consistency-guided sampling algorithm is proposed. First, the initial correspondences are constructed based on triplet relationships, and their structural features are subsequently extracted. Then, a minimal subset sampling strategy is developed to improve sampling efficiency. Next, a data subset refinement strategy is introduced to further improve the robustness of sampling. Finally, extensive comparative experiments are conducted on the University-1652 and DenseUAV public datasets against several state-of-the-art feature matching algorithms. The experimental results demonstrate that the proposed algorithm achieves superior performance in correct matching rate, substantially enhancing the matching performance in heterogeneous image matching. Moreover, the proposed algorithm requires approximately 0.15 s per matching on average, and while maintaining the highest matching accuracy, it exhibits significantly higher computational efficiency than advanced sampling algorithms such as TRESAC and RANSAC, demonstrating strong potential for real-time applications in UAV visual localization tasks.
(This article belongs to the Section Sensing and Imaging)

21 pages, 5277 KB  
Article
Estimation of Leaf Nitrogen Content in Rice Coupling Feature Fusion and Deep Learning with Multi-Sensor Images from UAV
by Xinlei Xu, Xingang Xu, Sizhe Xu, Yang Meng, Guijun Yang, Bo Xu, Xiaodong Yang, Xiaoyu Song, Hanyu Xue, Yuekun Song and Tuo Wang
Agronomy 2025, 15(12), 2915; https://doi.org/10.3390/agronomy15122915 - 18 Dec 2025
Abstract
Assessing Leaf Nitrogen Content (LNC) is critical for evaluating crop nutritional status and monitoring growth. While Unmanned Aerial Vehicle (UAV) remote sensing has become a pivotal tool for nitrogen monitoring at the field scale, current research predominantly relies on uni-modal feature variables. Consequently, the integration of multidimensional feature information for nitrogen assessment remains largely underutilized in existing literature. In this study, the four types of feature variables (two kinds of spectral indices, color space parameters and texture features from UAV images of RGB and multispectral sensors) were extracted from three dimensions, and crop nitrogen-sensitive feature variables were selected by GCA (Gray Correlation Analysis), followed by one fused deep neural network (DNN-F2) for remote sensing monitoring of rice nitrogen and a comparative analysis with five common machine learning algorithms (RF, GPR, PLSR, SVM and ANN). Experimental results indicate that the DNN-F2 model consistently outperformed conventional machine learning algorithms across all three growth stages. Notably, the model achieved an average R2 improvement of 40%, peaking at the rice jointing stage with R2 of 0.72, RMSE of 0.08, and NRMSE of 0.019. The study shows that the fusion of multidimensional feature information from UAVs combined with deep learning algorithms has great potential for nitrogen nutrient monitoring in rice crops, and can also provide technical support to guide decisions on fertilizer application in rice fields.
(This article belongs to the Section Precision and Digital Agriculture)

35 pages, 18467 KB  
Article
Monitoring Rubber Plantation Distribution and Biomass with Sentinel-2 Using Deep Learning and Machine Learning Algorithm (2019–2024)
by Yingtan Chen, Jialong Duanmu, Zhongke Feng, Jun Qian, Zhikuan Liu, Huiqing Pei, Pietro Grimaldi and Zixuan Qiu
Remote Sens. 2025, 17(24), 4042; https://doi.org/10.3390/rs17244042 - 16 Dec 2025
Abstract
The number of rubber plantations has increased significantly since 2000, especially in Southeast Asia and China, and their ecological impacts are becoming more evident. A robust rubber supply monitoring system is currently required at both the production and ecological levels. This study used Sentinel-2 multi-rule remote sensing images and a deep learning method to construct a deep learning model that could generate a distribution map of rubber plantations in Danzhou City, Hainan Province, from 2019 to 2024. For biomass modeling, 52 sample plots (27 of which were historical plots) were integrated, and the canopy structure was extracted as an auxiliary variable from the point cloud data generated by an unmanned aerial vehicle survey. Five algorithms, namely Random Forest (RF), Gradient Boosting Decision Tree, Convolutional Neural Network, Back Propagation Neural Network, and Extreme Gradient Boosting, were used to characterize the spatiotemporal changes in rubber plantation biomass and analyze the driving mechanisms. The developed deep learning model was exceptional at identifying rubber plantations (overall accuracy = 91.63%, Kappa = 0.83). The RF model performed the best in terms of biomass prediction (R2 = 0.72, RRMSE = 21.48 Mg/ha). Research shows that canopy height as a characteristic factor enhances the explanatory power and stability of the biomass model. However, due to limitations such as sample plot size, image differences, canopy closure degree, and point cloud density, uncertainties in its generalization across years and regions remain. In summary, the proposed framework effectively captures the spatial and temporal dynamics of rubber plantations and estimates their biomass with high accuracy. This study provides a crucial reference for the refined management and ongoing monitoring of rubber plantations.

25 pages, 12181 KB  
Article
Characterizing Growth and Estimating Yield in Winter Wheat Breeding Lines and Registered Varieties Using Multi-Temporal UAV Data
by Liwei Liu, Xinxing Zhou, Tao Liu, Dongtao Liu, Jing Liu, Jing Wang, Yuan Yi, Xuecheng Zhu, Na Zhang, Huiyun Zhang, Guohua Feng and Hongbo Ma
Agriculture 2025, 15(24), 2554; https://doi.org/10.3390/agriculture15242554 - 10 Dec 2025
Abstract
Grain yield is one of the most critical indicators for evaluating the performance of wheat breeding. However, the assessment process, from early-stage breeding lines to officially registered varieties that have passed the DUS (Distinctness, Uniformity, and Stability) test, is often time-consuming and labor-intensive. Multispectral remote sensing based on unmanned aerial vehicles (UAVs) has demonstrated significant potential in crop phenotyping and yield estimation due to its high throughput, non-destructive nature, and ability to rapidly collect large-scale, multi-temporal data. In this study, multi-temporal UAV-based multispectral imagery, RGB images, and canopy height data were collected throughout the entire wheat growth stage (2023–2024) in Xuzhou, Jiangsu Province, China, to characterize the dynamic growth patterns of both breeding lines and registered cultivars. Vegetation indices (VIs), texture parameters (Tes), and a time-series crop height model (CHM), including the logistic-derived growth rate (GR) and the projected area (PA), were extracted to construct a comprehensive multi-source feature set. Four machine learning algorithms, namely a random forest (RF), support vector machine regression (SVR), extreme gradient boosting (XGBoost), and partial least squares regression (PLSR), were employed to model and estimate yield. The results demonstrated that spectral, texture, and canopy height features derived from multi-temporal UAV data effectively captured phenotypic differences among wheat types and contributed to yield estimation. Features obtained from later growth stages generally led to higher estimation accuracy. The integration of vegetation indices and texture features outperformed models using single-feature types. Furthermore, the integration of time-series features and feature selection further improved predictive accuracy, with XGBoost incorporating VIs, Tes, GR, and PA yielding the best performance (R2 = 0.714, RMSE = 0.516 t/ha, rRMSE = 5.96%). 
Overall, the proposed multi-source modeling framework offers a practical and efficient solution for yield estimation in early-stage wheat breeding and can support breeders and growers by enabling earlier, more accurate selection and management decisions in real-world production environments.

23 pages, 3401 KB  
Article
Remote Sensing Applied to Dynamic Landscape: Seventy Years of Change Along the Southern Adriatic Coast
by Federica Pontieri, Michele Innangi, Mirko Di Febbraro and Maria Laura Carranza
Remote Sens. 2025, 17(24), 3961; https://doi.org/10.3390/rs17243961 - 8 Dec 2025
Abstract
Coastal landscapes are complex socio-ecological systems that undergo rapid transformations driven by both natural dynamics and human pressures. Their sustainable management depends on robust, cost-effective remote sensing methodologies for long-term monitoring and quantitative assessment of spatiotemporal change. In this study, we developed an integrated remote-sensing-based framework that combines historical aerial photograph interpretation, transition matrix analysis, and machine learning to assess coastal dune landscape dynamics over a seventy-year period. Georeferenced orthorectified and preprocessed aerial imagery freely available from the Italian Ministry of the Environment for the years 1954, 1986, and Google Satellite Images for 2022 were used to generate detailed land-cover maps, enabling the analysis of two temporal intervals (1954–1986 and 1986–2022). Transition matrices quantified land-cover conversions and identified sixteen dynamic processes, while a Random Forest (RF) classifier, optimized through parameter tuning and cross-validation, modeled and compared landscape dynamics within protected Long-Term Ecological Research (LTER) sites and adjacent unprotected areas. Model performance was evaluated using Balanced Accuracy (BA) to ensure robustness and to interpret the relative importance of change-driving variables. The RF model achieved high accuracy in distinguishing change processes inside and outside LTER sites, effectively capturing subtle yet ecologically relevant transitions. Results reveal non-random, contrasting landscape trajectories between management regimes: protected sites tend toward naturalization, whereas unprotected sites exhibit persistent urban influence. Overall, this research demonstrates the potential of integrating multi-temporal remote sensing, spatial statistics, and machine learning as a scalable and transferable framework for long-term coastal landscape monitoring and conservation planning.
(This article belongs to the Special Issue Emerging Remote Sensing Technologies in Coastal Observation)

24 pages, 5626 KB  
Article
Radar Coincidence Imaging Based on Dual-Frequency Dual-Phase-Center Dual-Polarized Antenna
by Shu-Yang Wan, Chen Miao, Shi-Shan Qi and Wen Wu
Electronics 2025, 14(24), 4820; https://doi.org/10.3390/electronics14244820 - 7 Dec 2025
Abstract
Radar coincidence imaging (RCI) is widely used in military reconnaissance, hovering unmanned aerial vehicles (UAVs), and non-local Earth observation due to its superior super-resolution imaging performance. However, in portable radar exploration or UAV remote sensing scenarios, the imaging resolution may be limited by the size constraints of the radar’s aperture. Moreover, although the resolution of RCI depends on the randomness of the signal, an excessively random signal setup may be difficult to implement in engineering applications due to rapid frequency jumps and related issues. Therefore, it is essential to achieve super-resolution imaging while maintaining a small aperture and an effectively random signal. In this paper, an amplitude-random linear frequency modulation (AR-LFM) waveform is employed in RCI using a dual-frequency, dual-phase-center, and dual-polarized antenna (DDPA). A multi-channel structure is introduced, and different frequencies and polarization modes are combined using the proposed method, which provides more independent signal information while maintaining a small aperture and effectively reducing signal coherence. This approach increases the singularity between grid points in the target area, thereby enhancing the effective rank of the reference matrix. The simulation results show that the angular resolution of the proposed imaging method is 15 times higher than that of conventional radar imaging. Furthermore, the proposed structure can improve the resolution improvement factor (RIF) by more than two times compared with the traditional RCI method using a conventional antenna and random signals.

16 pages, 17447 KB  
Article
AI-Powered Aerial Multispectral Imaging for Forage Crop Maturity Assessment: A Case Study in Northern Kazakhstan
by Marden Baidalin, Tomiris Rakhimzhanova, Akhama Akhet, Saltanat Baidalina, Abylaikhan Myrzakhanov, Ildar Bogapov, Zhanat Salikova and Huseyin Atakan Varol
Agronomy 2025, 15(12), 2807; https://doi.org/10.3390/agronomy15122807 - 6 Dec 2025
Abstract
Forage crops play a vital role in ensuring livestock productivity and food security in Northern Kazakhstan, a region characterized by highly variable weather conditions. However, traditional methods for assessing crop maturity remain time-consuming and labor-intensive, underscoring the need for automated monitoring solutions. Recent advances in remote sensing and artificial intelligence (AI) offer new opportunities to address this challenge. In this study, unmanned aerial vehicle (UAV)-based multispectral imaging was used to monitor the development of forage crops—pea, sudangrass, common vetch, oat—and their mixtures under field conditions in Northern Kazakhstan. A multispectral dataset consisting of five spectral bands was collected and processed to generate vegetation indices. Using a ResNet-based neural network model, the study achieved a high predictive accuracy (R2 = 0.985) for estimating the continuous maturity index. The trained model was further integrated into a web-based platform to enable real-time visualization and analysis, providing a practical tool for automated crop maturity assessment and long-term agricultural monitoring.
(This article belongs to the Section Precision and Digital Agriculture)

25 pages, 8947 KB  
Article
Advancing Real-Time Aerial Wildfire Detection Through Plume Recognition and Knowledge Distillation
by Pirunthan Keerthinathan, Juan Sandino, Sutharsan Mahendren, Anuraj Uthayasooriyan, Julian Galvez, Grant Hamilton and Felipe Gonzalez
Drones 2025, 9(12), 827; https://doi.org/10.3390/drones9120827 - 28 Nov 2025
Abstract
Uncrewed aerial systems (UAS)-based remote sensing and artificial intelligence (AI) analysis enable real-time wildfire or bushfire detection, facilitating early response to minimize damage and protect lives and property. However, their effectiveness is limited by three issues: distinguishing smoke from fog, the high cost of manual annotation, and the computational demands of large models. This study addresses the three key challenges by introducing plume as a new indicator to better distinguish smoke from similar visual elements, and by employing a hybrid annotation method using knowledge distillation (KD) to reduce expert labour and accelerate labelling. Additionally, it leverages lightweight YOLO Nano models trained with pseudo-labels generated from a fine-tuned teacher network to lower computational demands while maintaining high detection accuracy for real-time wildfire monitoring. Controlled pile burns in Canungra, QLD, Australia, were conducted to collect UAS-captured images over deciduous vegetation, which were subsequently augmented with the Flame2 dataset, which contains wildfire images of coniferous vegetation. A Grounding DINO model, fine-tuned using few-shot learning, served as the teacher network to generate pseudo-labels for a significant portion of the Flame2 dataset. These pseudo-labels were then used to train student networks consisting of YOLO Nano architectures, specifically versions 5, 8, and 11 (YOLOv5n, YOLOv8n, YOLOv11n). The experimental results show that YOLOv8n and YOLOv5n achieved an mAP@0.5 of 0.721. Plume detection outperforms smoke indicators (F1: 76.1–85.7% vs. 70%) in fog and wildfire scenarios. These findings underscore the value of incorporating plume as a distinct class and utilizing KD, both of which enhance detection accuracy and scalability, ultimately supporting more reliable and timelier wildfire monitoring and response.

20 pages, 15632 KB  
Article
Investigating an Earthquake Surface Rupture Along the Kumysh Fault (Eastern Tianshan, Central Asia) from High-Resolution Topographic Data
by Jiahui Han, Haiyun Bi, Wenjun Zheng, Hui Qiu, Fuer Yang, Xinyuan Chen and Jiaoyan Yang
Remote Sens. 2025, 17(23), 3847; https://doi.org/10.3390/rs17233847 - 27 Nov 2025
Abstract
As direct geomorphic evidence and records of earthquakes on the surface, coseismic surface ruptures have long been a key focus in earthquake research. However, compared with strike-slip and normal faults, studies on reverse-fault surface ruptures remain relatively scarce. In this study, surface rupture characteristics of the most recent earthquake on the Kumysh thrust fault in eastern Tianshan were investigated using high-resolution topographic data, including 0.5 m- and 5 cm-resolution Digital Elevation Models (DEMs) generated from the WorldView-2 satellite stereo image pairs and Unmanned Aerial Vehicle (UAV) images, respectively. We carefully mapped the spatial geometry of the surface rupture and measured 120 vertical displacements along the rupture strike. Using the moving-window method and statistical analysis, both moving-mean and moving-maximum coseismic displacement curves were obtained for the entire rupture zone. Results show that the most recent rupture on the Kumysh Fault extends ~25 km with an overall NWW strike, exhibits complex spatial geometry, and can be subdivided into five secondary segments, which are discontinuously distributed in arcuate shapes across both piedmont alluvial fans and mountain fronts. Reverse fault scarps dominate the rupture pattern. The along-strike coseismic displacements generally form three asymmetric triangles, with an average displacement of 0.9–1.1 m and a maximum displacement of 2.8–3.2 m, yielding an estimated earthquake magnitude of Mw 6.6–6.7. This study not only highlights the strong potential of high-resolution remote sensing data for investigating surface earthquake ruptures, but also provides an additional example to the relatively underexplored reverse-fault surface ruptures.
(This article belongs to the Section Remote Sensing in Geology, Geomorphology and Hydrology)

20 pages, 8646 KB  
Article
Fine-Grained Multispectral Fusion for Oriented Object Detection in Remote Sensing
by Xin Lan, Shaolin Zhang, Yuhao Bai and Xiaolin Qin
Remote Sens. 2025, 17(22), 3769; https://doi.org/10.3390/rs17223769 - 20 Nov 2025
Abstract
Infrared–visible-oriented object detection aims to combine the strengths of both infrared and visible images, overcoming the limitations of a single imaging modality to achieve more robust detection with oriented bounding boxes under diverse environmental conditions. However, current methods often suffer from two issues: (1) modality misalignment caused by hardware and annotation errors, leading to inaccurate feature fusion that degrades downstream task performance; and (2) insufficient directional priors in square convolutional kernels, impeding robust object detection with diverse directions, especially in densely packed scenes. To tackle these challenges, in this paper, we propose a novel method, Fine-Grained Multispectral Fusion (FGMF), for oriented object detection in the paired aerial images. Specifically, we design a dual-enhancement and fusion module (DEFM) to obtain the calibrated and complementary features through weighted addition and subtraction-based attention mechanisms. Furthermore, we propose an orientation aggregation module (OAM) that employs large rotated strip convolutions to capture directional context and long-range dependencies. Extensive experiments on the DroneVehicle and VEDAI datasets demonstrate the effectiveness of our proposed method, yielding impressive results with accuracies of 80.2% and 66.3%, respectively. These results highlight the effectiveness of FGMF in oriented object detection within complex remote sensing scenarios.

23 pages, 14455 KB  
Article
Analysis of LightGlue Matching for Robust TIN-Based UAV Image Mosaicking
by Sunghyeon Kim, Seunghwan Ban, Hongjin Kim and Taejung Kim
Remote Sens. 2025, 17(22), 3767; https://doi.org/10.3390/rs17223767 - 19 Nov 2025
Abstract
Recent advances in UAV (Unmanned Aerial Vehicle)-based remote sensing have significantly enhanced the efficiency of monitoring and managing agricultural and forested areas. However, the low-altitude and narrow-field-of-view characteristics of UAVs make robust image mosaicking essential for generating large-area composites. A TIN (triangulated irregular network)-based mosaicking framework is proposed herein to address this challenge. The framework constructs a TIN from extracted tiepoints and the sparse point clouds generated by bundle adjustment, enabling rapid mosaic generation; its performance depends strongly on the quality of tiepoint extraction. Traditional matching combinations, such as SIFT with Brute-Force and SIFT with FLANN, have been widely used for their robustness in texture-rich areas, yet they often struggle in homogeneous or repetitive-pattern regions, leading to insufficient tiepoints and reduced mosaic quality. More recently, deep learning-based matchers such as LightGlue have emerged with strong matching capabilities, but their robustness under UAV conditions involving large rotational variations remains insufficiently validated. In this study, we applied the publicly available LightGlue matcher to a TIN-based UAV mosaicking pipeline and compared its performance with traditional approaches to determine the most effective tiepoint extraction strategy. The evaluation encompassed three major stages—tiepoint extraction, bundle adjustment, and mosaic generation—using UAV datasets acquired over diverse terrains, including agricultural fields and forested areas. Both qualitative and quantitative assessments were conducted to analyze tiepoint distribution, geometric adjustment accuracy, and mosaic completeness. The experimental results demonstrated that the hybrid combination of SIFT and LightGlue consistently achieved stable and reliable performance across all datasets. Compared with traditional matching methods, this combination detected a greater number of tiepoints with a more uniform spatial distribution while maintaining competitive reprojection accuracy. It also improved the continuity of the TIN structure in low-texture regions and reduced mosaic voids, effectively mitigating the limitations of conventional approaches. These results demonstrate that integrating LightGlue enhances the robustness of TIN-based UAV mosaicking without compromising geometric accuracy. Furthermore, this study provides a practical improvement to the photogrammetric TIN-based UAV mosaicking pipeline by incorporating the LightGlue matching technique, enabling more stable and continuous mosaicking even in challenging low-texture environments. Full article
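The reprojection accuracy compared across matchers is typically reported as the root-mean-square error of tiepoint reprojection residuals after bundle adjustment. A minimal sketch of that standard photogrammetric metric, using made-up residuals rather than the study's data or evaluation code:

```python
import numpy as np

def reprojection_rmse(projected, observed):
    """RMSE (in pixels) of the residuals between tiepoints reprojected
    through the adjusted camera model and their observed image positions."""
    residuals = np.asarray(projected, float) - np.asarray(observed, float)
    # per-point squared Euclidean residual, averaged, then rooted
    return float(np.sqrt(np.mean(np.sum(residuals ** 2, axis=1))))

# hypothetical observed tiepoints and their reprojected positions (pixels)
observed = np.array([[100.0, 200.0], [150.0, 120.0], [80.0, 95.0]])
projected = observed + np.array([[0.5, -0.5], [0.3, 0.4], [-0.2, 0.1]])
print(round(reprojection_rmse(projected, observed), 3))  # 0.516
```

A sub-pixel RMSE of this kind, together with tiepoint count and spatial distribution, is what the abstract's "competitive reprojection accuracy" refers to.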
24 pages, 39644 KB  
Article
Locate then Calibrate: A Synergistic Framework for Small Object Detection from Aerial Imagery to Ground-Level Views
by Kaiye Lin, Zhexiang Zhao and Na Niu
Remote Sens. 2025, 17(22), 3750; https://doi.org/10.3390/rs17223750 - 18 Nov 2025
Abstract
Detection of small objects in aerial images captured by Unmanned Aerial Vehicles (UAVs) is a critical task in remote sensing. It is vital for applications like urban monitoring and disaster assessment. This task, however, is challenged by unique viewpoints, diminutive target sizes, and dense scenes. To surmount these challenges, this paper introduces the Locate then Calibrate (LTC) framework. It is a deep learning architecture designed to enhance visual perception systems, specifically for the accurate and robust detection of small objects. Our model builds upon the YOLOv8 architecture and incorporates three synergistic innovations. (1) An Efficient Multi-Scale Attention (EMA) mechanism is employed to ‘Locate’ salient targets by capturing critical cross-dimensional dependencies. (2) We propose a novel Adaptive Multi-Scale (AMS) convolution module to ‘Calibrate’ features, using dynamically learned weights to optimally fuse multi-scale information. (3) An additional high-resolution P2 detection head preserves the fine-grained details essential for localizing diminutive targets. Extensive experimental evaluations demonstrate that the proposed model substantially outperforms the YOLOv8n baseline. Notably, it achieves significant performance gains on the challenging VisDrone aerial dataset. On this dataset, the model achieves a remarkable 11.7% relative increase in mean Average Precision (mAP50). The framework also shows strong generalization. Considerable improvements are recorded on ground-level autonomous driving benchmarks such as KITTI and TT100K_mini. This validated effectiveness proves that LTC is a robust solution for high-accuracy detection: it achieves significant accuracy gains at the cost of a deliberate increase in computational GFLOPs, while maintaining a lightweight parameter count. This design choice positions LTC as a solution for edge applications where accuracy is prioritized over minimal computational cost. Full article
(This article belongs to the Section Remote Sensing Image Processing)
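The 'Calibrate' step fuses multi-scale features with dynamically learned weights. A minimal NumPy sketch of such softmax-weighted fusion follows; the weighting scores here are placeholders for the output of the AMS module's learned weighting branch, which the abstract does not specify, and the feature maps are assumed to be already resized to a common resolution.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - np.max(x))  # subtract max for numerical stability
    return e / e.sum()

def adaptive_multiscale_fuse(features, scores):
    """Combine same-resolution feature maps from several pyramid levels
    using softmax-normalized weights, so the fusion adapts per input
    instead of using fixed averaging."""
    w = softmax(np.asarray(scores, dtype=float))
    return sum(wi * f for wi, f in zip(w, features))

# three 4x4 'feature maps' standing in for different pyramid levels
feats = [np.full((4, 4), v) for v in (1.0, 2.0, 3.0)]
fused = adaptive_multiscale_fuse(feats, scores=[0.0, 0.0, 0.0])
print(fused[0, 0])  # equal scores -> equal weights, ~2.0 (mean of 1, 2, 3)
```

Raising one score shifts weight toward that scale, which is the behavior a learned weighting branch exploits to emphasize the resolution most informative for small targets.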
13 pages, 2928 KB  
Article
Application Research on General Technology for Safety Appraisal of Existing Buildings Based on Unmanned Aerial Vehicles and Stair-Climbing Robots
by Zizhen Shen, Rui Wang, Lianbo Wang, Wenhao Lu and Wei Wang
Buildings 2025, 15(22), 4145; https://doi.org/10.3390/buildings15224145 - 17 Nov 2025
Abstract
Structure detection (SD) has emerged as a critical technology for ensuring the safety and longevity of infrastructure, particularly in housing and civil engineering. Traditional SD methods often rely on manual inspections, which are time-consuming, labor-intensive, and prone to human error, especially in complex environments such as dense urban settings or aging buildings with deteriorated materials. Recent advances in autonomous systems—such as Unmanned Aerial Vehicles (UAVs) and climbing robots—have shown promise in addressing these limitations by enabling efficient, real-time data collection. However, challenges persist in accurately detecting and analyzing structural defects (e.g., masonry cracks, concrete spalling) amidst cluttered backgrounds, hardware constraints, and the need for multi-scale feature integration. The integration of machine learning (ML) and deep learning (DL) has revolutionized SD by enabling automated feature extraction and robust defect recognition. For instance, RepConv architectures have been widely adopted for multi-scale object detection, while attention mechanisms such as TAM have improved spatial feature fusion in complex scenes. Nevertheless, existing works often focus on a single sensing modality (e.g., UAVs alone) or neglect the fusion of complementary data streams (e.g., ground-based robot imagery) that could enhance detection accuracy. Furthermore, computational redundancy in multi-scale processing and inconsistent bounding box regression in detection frameworks remain underexplored. This study addresses these gaps by proposing a generalized safety inspection system that synergizes UAV and stair-climbing robot data. We introduce a novel multi-scale targeted feature extraction path (Rep-FasterNet TAM block) that unifies automated RepConv-based feature refinement with dynamic-scale fusion, reducing computational overhead while preserving critical structural details. For detection, we combine traditional methods with remote sensor fusion to mitigate feature loss during image upsampling/downsampling, supported by a structural GIoU model (GIoU = IoU − (C − U)/C, where U is the union area and C the area of the smallest enclosing box) that enhances bounding box regression through shape/scale-aware constraints and real-time analysis. By situating our work within the context of recent reviews on ML/DL for SD, we demonstrate how our hybrid approach bridges the gap between autonomous inspection hardware and AI-driven defect analysis. Experiments demonstrate that our system increases masonry and concrete assessment accuracy by 11.6% and 20.9%, respectively, while reducing manual drawing restoration workload by 16.54%. This validates the effectiveness of our hybrid approach in unifying autonomous inspection hardware with AI-driven analysis, offering a scalable solution for SD in housing infrastructure. Full article
(This article belongs to the Special Issue AI-Powered Structural Health Monitoring: Innovations and Applications)
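The GIoU definition quoted in the abstract, GIoU = IoU − (C − U)/C with U the union area and C the area of the smallest enclosing box, can be implemented directly for axis-aligned boxes. The sketch below shows the metric itself; the paper applies it inside a detector's bounding-box regression with additional shape/scale constraints, which is beyond this illustration.

```python
def giou(box_a, box_b):
    """Generalized IoU for axis-aligned boxes (x1, y1, x2, y2) with
    positive area, following GIoU = IoU - (C - U) / C."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # intersection area (zero when the boxes do not overlap)
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    union = area_a + area_b - inter
    iou = inter / union
    # smallest axis-aligned box enclosing both inputs
    cw = max(ax2, bx2) - min(ax1, bx1)
    ch = max(ay2, by2) - min(ay1, by1)
    c = cw * ch
    return iou - (c - union) / c

print(giou((0, 0, 2, 2), (1, 0, 3, 2)))  # ~0.333: enclosing box equals the union
print(giou((0, 0, 1, 1), (2, 2, 3, 3)))  # negative for disjoint boxes
```

Unlike plain IoU, which is zero for all disjoint box pairs, GIoU stays informative (more negative as the boxes move apart), which is what makes it useful as a regression signal.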