Search Results (595)

Search Parameters:
Keywords = drone remote sensing

30 pages, 125846 KB  
Article
Optimizing Plant Production Through Drone-Based Remote Sensing and Label-Free Instance Segmentation for Individual Plant Phenotyping
by Ruth Hofman, Joris Mattheijssens, Johan Van Huylenbroeck, Jan Verwaeren and Peter Lootens
Horticulturae 2025, 11(9), 1043; https://doi.org/10.3390/horticulturae11091043 - 2 Sep 2025
Abstract
A crucial initial step for the automatic extraction of plant traits from imagery is the segmentation of individual plants. This is typically performed using supervised deep learning (DL) models, which require the creation of an annotated dataset for training, a time-consuming and labor-intensive process. In addition, the models are often only applicable to the conditions represented in the training data. In this study, we propose a pipeline for the automatic extraction of plant traits from high-resolution unmanned aerial vehicle (UAV)-based RGB imagery, applying Segment Anything Model 2.1 (SAM 2.1) for label-free segmentation. To prevent the segmentation of irrelevant objects such as soil or weeds, the model is guided using point prompts, which correspond to local maxima in the canopy height model (CHM). The pipeline was used to measure the crown diameter of approximately 15,000 ball-shaped chrysanthemums (Chrysanthemum morifolium Ramat.) in a 6158 m² field on two dates. Nearly all plants were successfully segmented, resulting in a recall of 96.86%, a precision of 99.96%, and an F1 score of 98.38%. The estimated diameters showed strong agreement with manual measurements. The results demonstrate the potential of the proposed pipeline for accurate plant trait extraction across varying field conditions without the need for model training or data annotation.
(This article belongs to the Special Issue Emerging Technologies in Smart Agriculture)
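
The prompt-generation step described above is compact enough to sketch. Below is a minimal illustration, assuming the CHM is available as a 2D NumPy array; the function name, window size, and height threshold are illustrative choices, not values from the paper.

```python
import numpy as np
from scipy import ndimage

def chm_point_prompts(chm: np.ndarray, window: int = 25, min_height: float = 0.1) -> np.ndarray:
    """Derive SAM-style point prompts from local maxima of a canopy height model.

    chm: 2D array of canopy heights (m). A pixel is a local maximum if it
    equals the maximum of its window-sized neighborhood; maxima below
    min_height are discarded so soil and low weeds are never prompted.
    Returns an (N, 2) array of (x, y) pixel coordinates.
    """
    local_max = chm == ndimage.maximum_filter(chm, size=window)
    local_max &= chm > min_height
    ys, xs = np.nonzero(local_max)
    return np.stack([xs, ys], axis=1)  # SAM takes point prompts as (x, y)
```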

19 pages, 13244 KB  
Article
MWR-Net: An Edge-Oriented Lightweight Framework for Image Restoration in Single-Lens Infrared Computational Imaging
by Xuanyu Qian, Xuquan Wang, Yujie Xing, Guishuo Yang, Xiong Dun, Zhanshan Wang and Xinbin Cheng
Remote Sens. 2025, 17(17), 3005; https://doi.org/10.3390/rs17173005 - 29 Aug 2025
Abstract
Infrared video imaging is a cornerstone technology for environmental perception, particularly in drone-based remote sensing applications such as disaster assessment and infrastructure inspection. Conventional systems, however, rely on bulky optical architectures that limit deployment on lightweight aerial platforms. Computational imaging offers a promising alternative by integrating optical encoding with algorithmic reconstruction, enabling compact hardware while maintaining imaging performance comparable to sophisticated multi-lens systems. Nonetheless, achieving real-time video-rate computational image restoration on resource-constrained unmanned aerial vehicles (UAVs) remains a critical challenge. To address this, we propose Mobile Wavelet Restoration-Net (MWR-Net), a lightweight deep learning framework tailored for real-time infrared image restoration. Built on a MobileNetV4 backbone, MWR-Net leverages depthwise separable convolutions and an optimized downsampling scheme to minimize parameters and computational overhead. A novel wavelet-domain loss enhances high-frequency detail recovery, while the modulation transfer function (MTF) is adopted as an optics-aware evaluation metric. With only 666.37 K parameters and 6.17 G MACs, MWR-Net achieves a PSNR of 37.10 dB and an SSIM of 0.964 on a custom dataset, outperforming a pruned U-Net baseline. Deployed on an RK3588 chip, it runs at 42 FPS. These results demonstrate MWR-Net’s potential as an efficient and practical solution for UAV-based infrared sensing applications.
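
A sketch of the wavelet-domain loss idea follows. The paper's exact formulation is not given in the abstract, so the variant below is an assumption: an L1 distance over Haar DWT subbands, computed with PyWavelets, which penalizes exactly the high-frequency detail that plain pixel losses underweight.

```python
import numpy as np
import pywt

def wavelet_l1_loss(pred: np.ndarray, target: np.ndarray, wavelet: str = "haar") -> float:
    """L1 distance between the DWT subbands of a restored image and its target.

    The detail subbands (cH, cV, cD) carry edges and fine texture, so
    mismatches there are penalized on equal footing with the approximation.
    """
    cA_p, (cH_p, cV_p, cD_p) = pywt.dwt2(pred, wavelet)
    cA_t, (cH_t, cV_t, cD_t) = pywt.dwt2(target, wavelet)
    pairs = [(cA_p, cA_t), (cH_p, cH_t), (cV_p, cV_t), (cD_p, cD_t)]
    return float(sum(np.abs(a - b).mean() for a, b in pairs))
```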

26 pages, 29132 KB  
Article
DCS-YOLOv8: A Lightweight Context-Aware Network for Small Object Detection in UAV Remote Sensing Imagery
by Xiaozheng Zhao, Zhongjun Yang and Huaici Zhao
Remote Sens. 2025, 17(17), 2989; https://doi.org/10.3390/rs17172989 - 28 Aug 2025
Abstract
Small object detection in UAV-based remote sensing imagery is crucial for applications such as traffic monitoring, emergency response, and urban management. However, aerial images often suffer from low object resolution, complex backgrounds, and varying lighting conditions, leading to missed or false detections. To address these challenges, we propose DCS-YOLOv8, an enhanced object detection framework tailored for small target detection in UAV scenarios. The proposed model integrates a Dynamic Convolution Attention Mixture (DCAM) module to improve global feature representation and combines it with the C2f module to form the C2f-DCAM block. The C2f-DCAM block, together with a lightweight SCDown module for efficient downsampling, constitutes the backbone DCS-Net. In addition, a dedicated P2 detection layer is introduced to better capture high-resolution spatial features of small objects. To further enhance detection accuracy and robustness, we replace the conventional CIoU loss with a novel Scale-based Dynamic Balanced IoU (SDBIoU) loss, which dynamically adjusts loss weights based on object scale. Extensive experiments on the VisDrone2019 dataset demonstrate that the proposed DCS-YOLOv8 significantly improves small object detection performance while maintaining efficiency. Compared to the baseline YOLOv8s, our model increases precision from 51.8% to 54.2%, recall from 39.4% to 42.1%, mAP0.5 from 40.6% to 44.5%, and mAP0.5:0.95 from 24.3% to 26.9%, while reducing parameters from 11.1 M to 9.9 M. Moreover, real-time inference on RK3588 embedded hardware validates the model’s suitability for onboard UAV deployment in remote sensing applications.
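
The abstract does not spell out the SDBIoU formulation, so the sketch below illustrates only the underlying idea of scale-dependent loss weighting: smaller ground-truth boxes receive a larger IoU penalty. The weighting function and the `alpha` parameter are assumptions, not the authors' definition.

```python
import math

def iou(a, b):
    """IoU of two axis-aligned boxes in (x1, y1, x2, y2) format."""
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    union = (a[2] - a[0]) * (a[3] - a[1]) + (b[2] - b[0]) * (b[3] - b[1]) - inter
    return inter / (union + 1e-9)

def scale_weighted_iou_loss(pred, gt, img_area, alpha=2.0):
    """Up-weight the IoU loss for small ground-truth objects (assumed weighting)."""
    gt_area = (gt[2] - gt[0]) * (gt[3] - gt[1])
    weight = 1.0 + alpha * (1.0 - math.sqrt(gt_area / img_area))  # small box -> big weight
    return weight * (1.0 - iou(pred, gt))
```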

55 pages, 5431 KB  
Review
Integration of Drones in Landscape Research: Technological Approaches and Applications
by Ayşe Karahan, Neslihan Demircan, Mustafa Özgeriş, Oğuz Gökçe and Faris Karahan
Drones 2025, 9(9), 603; https://doi.org/10.3390/drones9090603 - 26 Aug 2025
Abstract
Drones have rapidly emerged as transformative tools in landscape research, enabling high-resolution spatial data acquisition, real-time environmental monitoring, and advanced modelling that surpass the limitations of traditional methodologies. This scoping review systematically explores and synthesises the technological applications of drones within the context of landscape studies, addressing a significant gap in the integration of Uncrewed Aerial Systems (UASs) into environmental and spatial planning disciplines. The study investigates the typologies of drone platforms—including fixed-wing, rotary-wing, and hybrid systems—alongside a detailed examination of sensor technologies such as RGB, LiDAR, multispectral, and hyperspectral imaging. Following the PRISMA-ScR (Preferred Reporting Items for Systematic Reviews and Meta-Analyses extension for Scoping Reviews) guidelines, a comprehensive literature search was conducted across Scopus, Web of Science, and Google Scholar, utilising predefined inclusion and exclusion criteria. The findings reveal that drone technologies are predominantly applied in mapping and modelling, vegetation and biodiversity analysis, water resource management, urban planning, cultural heritage documentation, and sustainable tourism development. Notably, vegetation analysis and water management have shown a remarkable surge in application over the past five years, highlighting global shifts towards sustainability-focused landscape interventions. These applications are critically evaluated in terms of spatial efficiency, operational flexibility, and interdisciplinary relevance. This review concludes that integrating drones with Geographic Information Systems (GISs), artificial intelligence (AI), and remote sensing frameworks substantially enhances analytical capacity, supports climate-resilient landscape planning, and offers novel pathways for multi-scalar environmental research and practice.
(This article belongs to the Special Issue Drones for Green Areas, Green Infrastructure and Landscape Monitoring)

10 pages, 1376 KB  
Proceeding Paper
Mapping Soil Moisture Using Drones: Challenges and Opportunities
by Ricardo Díaz-Delgado, Pauline Buysse, Thibaut Peres, Thomas Houet, Yannick Hamon, Mikaël Faucheux and Ophelie Fovert
Eng. Proc. 2025, 94(1), 18; https://doi.org/10.3390/engproc2025094018 - 25 Aug 2025
Abstract
Droughts are becoming more frequent, severe, and impactful across the globe. Agroecosystems, which are human-made ecosystems with high water demand that provide essential ecosystem services, are vulnerable to extreme droughts. Although water use efficiency in agriculture has increased in recent decades, drought management should be based on long-term, proactive strategies rather than crisis management. The AgrHyS network of sites in French Brittany collects high-resolution soil moisture data from agronomic stations and catchments to improve understanding of temporal soil moisture dynamics and enhance water use efficiency. Frequent mapping of soil moisture and plant water stress is crucial for assessing water stress risk in the context of global warming. Although satellite remote sensing provides reliable, periodic global data on surface soil moisture, it does so at a very coarse spatial resolution. The intrinsic spatial heterogeneity of surface soil moisture requires a higher spatial resolution in order to address upcoming challenges on a local scale. Drones are an excellent tool for upscaling point measurements to catchment level using different onboard cameras. In this study, we evaluated the potential of multispectral images, thermal images and LiDAR data captured in several concurrent drone flights for high-resolution mapping of soil moisture spatial variability, using in situ point measurements of soil water content and plant water stress in both agricultural areas and natural ecosystems. Statistical models were fitted to map soil water content in two areas: a natural marshland and a grassland-covered agricultural field. Our results demonstrate the statistical significance of topography, land surface temperature and red band reflectance in the natural area for retrieving soil water content. In contrast, the grasslands were best predicted by the transformed normalised difference vegetation index (TNDVI).
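
The TNDVI named above has a standard definition, sqrt(NDVI + 0.5). A minimal implementation for reflectance band arrays might look as follows; the epsilon guard and clipping are defensive additions, not from the paper.

```python
import numpy as np

def tndvi(nir: np.ndarray, red: np.ndarray) -> np.ndarray:
    """Transformed NDVI: sqrt((NIR - R) / (NIR + R) + 0.5).

    The +0.5 offset keeps the radicand positive over most vegetated pixels;
    clipping at zero guards the square root on dark or bare surfaces.
    """
    ndvi = (nir - red) / (nir + red + 1e-9)
    return np.sqrt(np.clip(ndvi + 0.5, 0.0, None))
```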

22 pages, 17156 KB  
Article
Adaptive Clustering-Guided Multi-Scale Integration for Traffic Density Estimation in Remote Sensing Images
by Xin Liu, Qiao Meng, Xiangqing Zhang, Xinli Li and Shihao Li
Remote Sens. 2025, 17(16), 2796; https://doi.org/10.3390/rs17162796 - 12 Aug 2025
Abstract
Grading and providing early warning of traffic congestion density is crucial for the timely coordination and optimization of traffic management. However, current traffic density detection methods primarily rely on historical traffic flow data, resulting in ambiguous thresholds for congestion classification. To overcome these challenges, this paper proposes a traffic density grading algorithm for remote sensing images that integrates adaptive clustering and multi-scale fusion. A dynamic neighborhood radius adjustment mechanism guided by spatial distribution characteristics is introduced to ensure consistency between the density clustering parameter space and the decision domain for image cropping, thereby addressing the issues of large errors and low efficiency in existing cropping techniques. Furthermore, a hierarchical detection framework is developed by incorporating a dynamic background suppression strategy to fuse multi-scale spatiotemporal features, thereby enhancing the detection accuracy of small objects in remote sensing imagery. Additionally, we propose a novel method that combines density analysis with pixel-level gradient quantification to construct a traffic state evaluation model featuring a dual optimization strategy. This enables precise detection and grading of traffic congestion areas while maintaining low computational overhead. Experimental results demonstrate that the proposed approach achieves average precision (AP) scores of 32.6% on the VisDrone dataset and 16.2% on the UAVDT dataset.
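
The abstract does not specify the dynamic neighborhood-radius mechanism, so the sketch below shows one common way to let a density-clustering radius adapt to the data: deriving DBSCAN's `eps` from k-nearest-neighbor distances. This heuristic is an assumption, not the authors' algorithm.

```python
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.neighbors import NearestNeighbors

def adaptive_dbscan(points: np.ndarray, k: int = 4) -> np.ndarray:
    """Cluster detections with a neighborhood radius derived from the data.

    eps is set to the median distance to the k-th nearest neighbor, so the
    clustering radius tracks the local density of vehicle detections.
    """
    dists, _ = NearestNeighbors(n_neighbors=k + 1).fit(points).kneighbors(points)
    eps = float(np.median(dists[:, k]))          # column 0 is the point itself
    return DBSCAN(eps=eps, min_samples=k).fit_predict(points)
```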

21 pages, 5690 KB  
Article
Machine Learning-Based Soil Moisture Inversion from Drone-Borne X-Band Microwave Radiometry
by Xiangkun Wan, Xiaofeng Li, Tao Jiang, Xingming Zheng and Lei Li
Remote Sens. 2025, 17(16), 2781; https://doi.org/10.3390/rs17162781 - 11 Aug 2025
Abstract
Surface soil moisture (SSM) is a critical land surface parameter affecting a wide variety of economically and environmentally important processes. Spaceborne microwave remote sensing has been extensively employed for monitoring SSM. Active microwave sensors offering high spatial resolution are typically utilized to capture dynamic fluctuations in soil moisture, albeit with low temporal resolution, whereas passive sensors are typically used to monitor the absolute values of large-scale soil moisture, but offer coarser spatial resolutions (~10 km). In this study, a passive microwave observation system using an X-band microwave radiometer mounted on a drone was established to obtain high-resolution (~1 m) radiative brightness temperature within the observation region. The region was a control experimental field established to validate the proposed approach. Additionally, machine learning models were employed to invert the soil moisture. Based on the site-based validation, the trained inversion model performed well, with estimation accuracies of 0.74 and 2.47% in terms of the coefficient of determination and the root mean square error, respectively. This study introduces a methodology for generating high-spatial resolution and high-accuracy soil moisture maps in the context of precision agriculture at the field scale.
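
The inversion step follows a familiar supervised-regression pattern: radiometer-derived features in, in situ soil moisture out, scored with the coefficient of determination and RMSE as in the abstract. The sketch below stands in synthetic placeholder data and a random forest; the paper's actual features and model may differ.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error, r2_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))      # e.g., H/V brightness temperatures (placeholder)
y = 20 + 3 * X[:, 0] - 2 * X[:, 1] + rng.normal(scale=1.0, size=200)  # SSM (vol. %)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
model = RandomForestRegressor(n_estimators=300, random_state=0).fit(X_tr, y_tr)
pred = model.predict(X_te)
print(f"R2 = {r2_score(y_te, pred):.2f}, "
      f"RMSE = {mean_squared_error(y_te, pred) ** 0.5:.2f}")
```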

22 pages, 15242 KB  
Article
A Modality Alignment and Fusion-Based Method for Around-the-Clock Remote Sensing Object Detection
by Yongjun Qi, Shaohua Yang, Jiahao Chen, Meng Zhang, Jie Zhu, Xin Liu and Hongxing Zheng
Sensors 2025, 25(16), 4964; https://doi.org/10.3390/s25164964 - 11 Aug 2025
Abstract
Cross-modal remote sensing object detection holds significant potential for around-the-clock applications. However, the modality differences between cross-modal data and the degradation of feature quality under adverse weather conditions limit detection performance. To address these challenges, this paper presents a novel cross-modal remote sensing object detection framework designed to overcome two critical challenges in around-the-clock applications: (1) significant modality disparities between visible light, infrared, and synthetic aperture radar data, and (2) severe feature degradation under adverse weather conditions, including fog and nighttime scenarios. Our primary contributions are as follows: First, we develop a multi-scale feature extraction module that employs a hierarchical convolutional architecture to capture both fine-grained details and contextual information, effectively compensating for missing or blurred features in degraded visible-light images. Second, we introduce an innovative feature interaction module that utilizes cross-attention mechanisms to establish long-range dependencies across modalities while dynamically suppressing noise interference through adaptive feature selection. Third, we propose a feature correction fusion module that performs spatial alignment of object boundaries and channel-wise optimization of global feature consistency, enabling robust fusion of complementary information from different modalities. The proposed framework is validated on visible light, infrared, and SAR modalities. Extensive experiments on three challenging datasets (LLVIP, OGSOD, and Drone Vehicle) demonstrate our framework’s superior performance, achieving state-of-the-art mean average precision scores of 66.3%, 58.6%, and 71.7%, respectively, representing significant improvements over existing methods in scenarios with modality differences or extreme weather conditions. The proposed solution not only advances the technical frontier of cross-modal object detection but also provides practical value for mission-critical applications such as 24/7 surveillance systems, military reconnaissance, and emergency response operations where reliable around-the-clock detection is essential.
(This article belongs to the Section Remote Sensors)
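
The cross-attention idea in the feature interaction module can be illustrated with off-the-shelf PyTorch primitives. In the assumed simplification below, degraded visible-light tokens query infrared tokens; this is a sketch of the mechanism, not the authors' module.

```python
import torch
import torch.nn as nn

class CrossModalAttention(nn.Module):
    """Visible tokens attend to infrared tokens (illustrative sketch)."""

    def __init__(self, dim: int = 256, heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, vis_tokens, ir_tokens):
        # Query: degraded visible features; key/value: infrared features.
        fused, _ = self.attn(vis_tokens, ir_tokens, ir_tokens)
        return self.norm(vis_tokens + fused)      # residual keeps visible content

vis, ir = torch.randn(2, 196, 256), torch.randn(2, 196, 256)  # B x N x C tokens
out = CrossModalAttention()(vis, ir)              # -> (2, 196, 256)
```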

32 pages, 19346 KB  
Article
Three-Dimensional Intelligent Understanding and Preventive Conservation Prediction for Linear Cultural Heritage
by Ruoxin Wang, Ming Guo, Yaru Zhang, Jiangjihong Chen, Yaxuan Wei and Li Zhu
Buildings 2025, 15(16), 2827; https://doi.org/10.3390/buildings15162827 - 8 Aug 2025
Abstract
This study proposes an innovative method that integrates multi-source remote sensing technologies and artificial intelligence to meet the urgent needs of deformation monitoring and ecohydrological environment analysis in Great Wall heritage protection. By integrating interferometric synthetic aperture radar (InSAR) technology, low-altitude oblique photogrammetry models, and the three-dimensional Gaussian splatting model, an integrated air–space–ground system for monitoring and understanding the Great Wall is constructed. Low-altitude oblique photogrammetry combined with the Gaussian splatting model, through drone images and intelligent generation algorithms (e.g., generative adversarial networks), quickly constructs high-precision 3D models, significantly improving texture details and reconstruction efficiency. Based on the 3D Gaussian splatting model of the AHLLM-3D network, the integration of point cloud data and the large language model achieves multimodal semantic understanding and spatial analysis of the Great Wall’s architectural structure. The results show that the multi-source data fusion method can effectively identify high-risk deformation zones (with annual subsidence reaching −25 mm) and optimize modeling accuracy through intelligent algorithms (reducing detail error by 30%), providing accurate deformation warnings and repair bases for Great Wall protection. Future studies will further combine the concept of ecological water wisdom to explore heritage protection strategies under multi-hazard coupling, promoting the digital transformation of cultural heritage preservation.

27 pages, 5688 KB  
Review
Tree Biomass Estimation in Agroforestry for Carbon Farming: A Comparative Analysis of Timing, Costs, and Methods
by Niccolò Conti, Gianni Della Rocca, Federico Franciamore, Elena Marra, Francesco Nigro, Emanuele Nigrone, Ramadhan Ramadhan, Pierluigi Paris, Gema Tárraga-Martínez, José Belenguer-Ballester, Lorenzo Scatena, Eleonora Lombardi and Cesare Garosi
Forests 2025, 16(8), 1287; https://doi.org/10.3390/f16081287 - 7 Aug 2025
Abstract
Agroforestry systems (AFSs) enhance long-term carbon sequestration through tree biomass accumulation. As the European Union’s Carbon Farming Certification (CRCF) Regulation now recognizes AFSs in carbon farming (CF) schemes, accurate tree biomass estimation becomes essential for certification. This review examines field-destructive and remote sensing methods for estimating tree aboveground biomass (AGB) in AFSs, with a specific focus on their advantages, limitations, timing, and associated costs. Destructive methods, although accurate and necessary for developing and validating allometric equations, are time-consuming, costly, and labour-intensive. Conversely, satellite- and drone-based remote sensing offer scalable and non-invasive alternatives, increasingly supported by advances in machine learning and high-resolution imagery. Using data from the INNO4CFIs project, which conducted parallel destructive and remote measurements in an AFS in Tuscany (Italy), this study provides a novel quantitative comparison of the resources each method requires. The findings highlight that while destructive measurements remain indispensable for model calibration and new species assessment, their feasibility is limited by practical constraints. Meanwhile, remote sensing approaches, despite some accuracy challenges in heterogeneous AFSs, offer a promising path forward for cost-effective, repeatable biomass monitoring but in turn require reliable field data. The integration of both approaches might represent a valid strategy to optimize precision and resource efficiency in carbon farming applications.
(This article belongs to the Section Forest Inventory, Modeling and Remote Sensing)
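
The allometric equations this review discusses are typically simple power laws of stem diameter. A generic sketch follows, with placeholder coefficients that in practice must be fitted from destructive harvests for each species and site (which is exactly why the review deems destructive sampling indispensable).

```python
def agb_allometric(dbh_cm: float, a: float = 0.12, b: float = 2.4) -> float:
    """Generic power-law allometry AGB = a * DBH**b, in kg dry mass.

    a and b are placeholder coefficients, not values from the paper; they
    are species- and site-specific and calibrated against harvested trees.
    """
    return a * dbh_cm ** b

print(f"{agb_allometric(25.0):.1f} kg")  # e.g., a tree with 25 cm DBH
```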

41 pages, 86958 KB  
Article
An Efficient Aerial Image Detection with Variable Receptive Fields
by Wenbin Liu, Liangren Shi and Guocheng An
Remote Sens. 2025, 17(15), 2672; https://doi.org/10.3390/rs17152672 - 2 Aug 2025
Abstract
This article presents VRF-DETR, a lightweight real-time object detection framework for aerial remote sensing images, aimed at addressing the challenge of insufficient receptive fields for easily confused categories due to differences in height and perspective. Based on the RT-DETR architecture, our approach introduces three key innovations: the multi-scale receptive field adaptive fusion (MSRF2) module replaces the Transformer encoder with parallel dilated convolutions and spatial-channel attention to adjust receptive fields for confusing objects dynamically; the gated multi-scale context (GMSC) block reconstructs the backbone using Gated Multi-Scale Context units with attention-gated convolution (AGConv), reducing parameters while enhancing multi-scale feature extraction; and the context-guided fusion (CGF) module optimizes feature fusion via context-guided weighting to resolve multi-scale semantic conflicts. Evaluations were conducted on both the VisDrone2019 and UAVDT datasets, where VRF-DETR achieved an mAP50 of 52.1% and an mAP50-95 of 32.2% on the VisDrone2019 validation set, surpassing RT-DETR by 4.9% and 3.5%, respectively, while reducing parameters by 32% and FLOPs by 22%. It maintains real-time performance (62.1 FPS) and generalizes effectively, outperforming state-of-the-art methods in accuracy-efficiency trade-offs for aerial object detection.
(This article belongs to the Special Issue Deep Learning Innovations in Remote Sensing)
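
The parallel-dilated-convolution idea behind the MSRF2 module is straightforward to sketch. The block below is an assumed simplification: branch count, dilation rates, and fusion by 1x1 convolution are illustrative, and the attention components of the real module are omitted.

```python
import torch
import torch.nn as nn

class ParallelDilatedBlock(nn.Module):
    """Fuse conv branches with different dilation rates -> varied receptive fields."""

    def __init__(self, channels: int = 256, rates=(1, 2, 4)):
        super().__init__()
        # padding == dilation keeps spatial size constant for 3x3 kernels.
        self.branches = nn.ModuleList(
            nn.Conv2d(channels, channels, 3, padding=r, dilation=r) for r in rates
        )
        self.fuse = nn.Conv2d(channels * len(rates), channels, 1)

    def forward(self, x):
        return self.fuse(torch.cat([branch(x) for branch in self.branches], dim=1))

x = torch.randn(1, 256, 40, 40)
print(ParallelDilatedBlock()(x).shape)  # torch.Size([1, 256, 40, 40])
```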

17 pages, 2404 KB  
Article
Geographically Weighted Regression Enhances Spectral Diversity–Biodiversity Relationships in Inner Mongolian Grasslands
by Yu Dai, Huawei Wan, Longhui Lu, Fengming Wan, Haowei Duan, Cui Xiao, Yusha Zhang, Zhiru Zhang, Yongcai Wang, Peirong Shi and Xuwei Sun
Diversity 2025, 17(8), 541; https://doi.org/10.3390/d17080541 - 1 Aug 2025
Abstract
The spectral variation hypothesis (SVH) posits that the complexity of spectral information in remote sensing imagery can serve as a proxy for regional biodiversity. However, the relationship between spectral diversity (SD) and biodiversity differs under different environmental conditions. Previous SVH studies often overlooked these differences. We utilized species data from field surveys in Inner Mongolia and drone-derived multispectral imagery to establish a quantitative relationship between SD and biodiversity. A geographically weighted regression (GWR) model was used to describe the SD–biodiversity relationship and map the biodiversity indices in different experimental areas in Inner Mongolia, China. Spatial autocorrelation analysis revealed that both SD and biodiversity indices exhibited strong and statistically significant spatial autocorrelation in their distribution patterns. Among all spectral diversity indices, the convex hull area exhibited the best model fit with the Margalef richness index (Margalef), the coefficient of variation showed the strongest predictive performance for species richness (Richness), and the convex hull volume provided the highest explanatory power for Shannon diversity (Shannon). Predictions for Shannon achieved the lowest relative root mean square error (RRMSE = 0.17), indicating the highest predictive accuracy, whereas Richness exhibited systematic underestimation with a higher RRMSE (0.23). Compared to the commonly used linear regression model in SVH studies, the GWR model exhibited a 4.7- to 26.5-fold improvement in goodness-of-fit. Despite the relatively low R² value (≤0.59), the model yields biodiversity predictions that are broadly aligned with field observations. Our approach explicitly considers the spatial heterogeneity of the SD–biodiversity relationship. The GWR model had significantly higher fitting accuracy than the linear regression model, indicating its potential for remote sensing-based biodiversity assessments.
(This article belongs to the Special Issue Ecology and Restoration of Grassland—2nd Edition)
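
Two of the spectral diversity indices named above, the convex hull area/volume and the coefficient of variation, can be computed directly from pixel spectra; the GWR fit itself would typically use a dedicated package such as mgwr. A minimal sketch, assuming spectra as an (N, d) array with stand-in random data:

```python
import numpy as np
from scipy.spatial import ConvexHull

def spectral_diversity(pixels: np.ndarray):
    """Convex hull measures and coefficient of variation of pixel spectra.

    pixels: (N, d) spectra in d bands (here d = 3, e.g., selected or
    PCA-reduced bands). hull.volume is the d-dimensional hull volume.
    """
    hull = ConvexHull(pixels)
    cv = float(np.mean(pixels.std(axis=0) / (np.abs(pixels.mean(axis=0)) + 1e-9)))
    return hull.volume, hull.area, cv

pixels = np.random.default_rng(0).normal(loc=0.3, scale=0.05, size=(500, 3))
volume, area, cv = spectral_diversity(pixels)     # stand-in plot spectra
```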

21 pages, 12997 KB  
Article
Aerial-Ground Cross-View Vehicle Re-Identification: A Benchmark Dataset and Baseline
by Linzhi Shang, Chen Min, Juan Wang, Liang Xiao, Dawei Zhao and Yiming Nie
Remote Sens. 2025, 17(15), 2653; https://doi.org/10.3390/rs17152653 - 31 Jul 2025
Abstract
Vehicle re-identification (Re-ID) is a critical computer vision task that aims to match the same vehicle across spatially distributed cameras, especially in the context of remote sensing imagery. While prior research has primarily focused on Re-ID using remote sensing images captured from similar, typically elevated viewpoints, these settings do not fully reflect complex aerial-ground collaborative remote sensing scenarios. In this work, we introduce a novel and challenging task: aerial-ground cross-view vehicle Re-ID, which involves retrieving vehicles in ground-view image galleries using query images captured from aerial (top-down) perspectives. This task is increasingly relevant due to the integration of drone-based surveillance and ground-level monitoring in multi-source remote sensing systems, yet it poses substantial challenges due to significant appearance variations between aerial and ground views. To support this task, we present AGID (Aerial-Ground Vehicle Re-Identification), the first benchmark dataset specifically designed for aerial-ground cross-view vehicle Re-ID. AGID comprises 20,785 remote sensing images of 834 vehicle identities, collected using drones and fixed ground cameras. We further propose a novel method, Enhanced Self-Correlation Feature Computation (ESFC), which enhances spatial relationships between semantically similar regions and incorporates shape information to improve feature discrimination. Extensive experiments on the AGID dataset and three widely used vehicle Re-ID benchmarks validate the effectiveness of our method, which achieves a Rank-1 accuracy of 69.0% on AGID, surpassing state-of-the-art approaches by 2.1%.
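
The abstract gives no detail on ESFC beyond enhancing "spatial relationships between semantically similar regions", so the sketch below shows only a plain spatial self-correlation, cosine similarity between every pair of positions in a feature map, as a starting point rather than the proposed method.

```python
import torch
import torch.nn.functional as F

def spatial_self_correlation(feat: torch.Tensor) -> torch.Tensor:
    """Pairwise cosine similarity between all spatial positions of a feature map.

    feat: (B, C, H, W). Returns (B, H*W, H*W), where entry (i, j) measures
    how similar the features at positions i and j are.
    """
    tokens = F.normalize(feat.flatten(2).transpose(1, 2), dim=-1)  # B x HW x C
    return tokens @ tokens.transpose(1, 2)

corr = spatial_self_correlation(torch.randn(2, 256, 16, 16))  # -> (2, 256, 256)
```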

26 pages, 11912 KB  
Article
Multi-Dimensional Estimation of Leaf Loss Rate from Larch Caterpillar Under Insect Pest Stress Using UAV-Based Multi-Source Remote Sensing
by He-Ya Sa, Xiaojun Huang, Li Ling, Debao Zhou, Junsheng Zhang, Gang Bao, Siqin Tong, Yuhai Bao, Dashzebeg Ganbat, Mungunkhuyag Ariunaa, Dorjsuren Altanchimeg and Davaadorj Enkhnasan
Drones 2025, 9(8), 529; https://doi.org/10.3390/drones9080529 - 28 Jul 2025
Abstract
Leaf loss caused by pest infestations poses a serious threat to forest health. The leaf loss rate (LLR) refers to the percentage of the overall tree-crown leaf loss per unit area and is an important indicator for evaluating forest health. Therefore, rapid and accurate acquisition of the LLR via remote sensing monitoring is crucial. This study is based on drone hyperspectral and LiDAR data as well as ground survey data, calculating hyperspectral indices (HSI), multispectral indices (MSI), and LiDAR indices (LI). It employs Savitzky–Golay (S–G) smoothing with different window sizes (W) and polynomial orders (P) combined with recursive feature elimination (RFE) to select sensitive features. Random Forest Regression (RFR) and Convolutional Neural Network Regression (CNNR) were used to construct multidimensional (horizontal and vertical) estimation models for the LLR, which, combined with LiDAR point cloud data, achieved a three-dimensional visualization of the leaf loss rate of trees. The results of the study showed: (1) The optimal combination for the HSI and MSI was determined to be W11P3, and for the LI, W5P2. (2) The optimal numbers of sensitive features extracted by the RFE algorithm were 13 HSI, 16 MSI, and hierarchical LI (2 in layer I, 9 in layer II, and 11 in layer III). (3) In terms of the horizontal estimation of the defoliation rate, the model performance index of the CNNR-HSI model (MPI = 0.9383) was significantly better than that of RFR-MSI (MPI = 0.8817), indicating that the continuous bands of hyperspectral data can better capture subtle changes in the LLR. (4) The I-CNNR-HSI+LI, II-CNNR-HSI+LI, and III-CNNR-HSI+LI vertical estimation models were constructed by combining the most accurate model, CNNR-HSI, with the LI sensitive to each vertical level; their MPIs all exceeded 0.8, indicating high accuracy in LLR estimation at different vertical levels. Based on these models, the pixel-level LLR of sample trees was estimated, and a three-dimensional display of the LLR for forest trees under larch caterpillar stress was generated, providing a high-precision research scheme for LLR estimation under pest stress.
(This article belongs to the Section Drones in Agriculture and Forestry)
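
The preprocessing and feature-selection pair named above maps directly onto SciPy and scikit-learn. Below is a sketch using the reported W11P3 combination and the 13-feature HSI target; the data is synthetic and the random-forest estimator inside RFE is an assumption.

```python
import numpy as np
from scipy.signal import savgol_filter
from sklearn.ensemble import RandomForestRegressor
from sklearn.feature_selection import RFE

rng = np.random.default_rng(0)
spectra = rng.normal(size=(100, 200))             # 100 samples x 200 spectral features
llr = rng.uniform(0, 1, size=100)                 # leaf loss rate targets (placeholder)

# Savitzky-Golay smoothing with window 11 and polynomial order 3 (W11P3).
smoothed = savgol_filter(spectra, window_length=11, polyorder=3, axis=1)

# Recursive feature elimination down to 13 sensitive features, as reported for HSI.
selector = RFE(RandomForestRegressor(n_estimators=100, random_state=0),
               n_features_to_select=13).fit(smoothed, llr)
print(np.flatnonzero(selector.support_))          # indices of the retained features
```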

22 pages, 6010 KB  
Article
Mapping Waterbird Habitats with UAV-Derived 2D Orthomosaic Along Belgium’s Lieve Canal
by Xingzhen Liu, Andrée De Cock, Long Ho, Kim Pham, Diego Panique-Casso, Marie Anne Eurie Forio, Wouter H. Maes and Peter L. M. Goethals
Remote Sens. 2025, 17(15), 2602; https://doi.org/10.3390/rs17152602 - 26 Jul 2025
Abstract
The accurate monitoring of waterbird abundance and their habitat preferences is essential for effective ecological management and conservation planning in aquatic ecosystems. This study explores the efficacy of unmanned aerial vehicle (UAV)-based high-resolution orthomosaics for waterbird monitoring and mapping along the Lieve Canal, Belgium. We systematically classified habitats into residential, industrial, riparian tree, and herbaceous vegetation zones, examining their influence on the spatial distribution of three focal waterbird species: Eurasian coot (Fulica atra), common moorhen (Gallinula chloropus), and wild duck (Anas platyrhynchos). Herbaceous vegetation zones consistently supported the highest waterbird densities, attributed to abundant nesting substrates and minimal human disturbance. UAV-based waterbird counts correlated strongly with ground-based surveys (R² = 0.668), though species-specific detectability varied significantly due to morphological visibility and ecological behaviors. Detection accuracy was highest for coots, intermediate for ducks, and lowest for moorhens, highlighting the crucial role of image resolution (ground sampling distance, GSD) in aerial monitoring. Operational challenges, including image occlusion and habitat complexity, underline the need for tailored survey protocols and advanced sensing techniques. Our findings demonstrate that UAV imagery provides a reliable and scalable method for monitoring waterbird habitats, offering critical insights for biodiversity conservation and sustainable management practices in aquatic landscapes.
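
Ground sampling distance, whose role the abstract highlights, follows a standard formula: GSD = (sensor width x flight height) / (focal length x image width in pixels). A small worked example follows; the camera parameters are illustrative, not from the survey.

```python
def ground_sampling_distance(sensor_width_mm: float, flight_height_m: float,
                             focal_length_mm: float, image_width_px: int) -> float:
    """GSD in cm/pixel for a nadir-pointing camera."""
    return (sensor_width_mm * flight_height_m * 100) / (focal_length_mm * image_width_px)

# e.g., a 13.2 mm sensor, 8.8 mm lens, 5472 px images flown at 60 m:
print(f"{ground_sampling_distance(13.2, 60, 8.8, 5472):.2f} cm/px")  # ~1.64 cm/px
```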