Search Results (274)

Search Parameters:
Keywords = UAV and satellite imagery

19 pages, 5891 KiB  
Article
Potential of Multi-Source Multispectral vs. Hyperspectral Remote Sensing for Winter Wheat Nitrogen Monitoring
by Xiaokai Chen, Yuxin Miao, Krzysztof Kusnierek, Fenling Li, Chao Wang, Botai Shi, Fei Wu, Qingrui Chang and Kang Yu
Remote Sens. 2025, 17(15), 2666; https://doi.org/10.3390/rs17152666 - 1 Aug 2025
Viewed by 139
Abstract
Timely and accurate monitoring of crop nitrogen (N) status is essential for precision agriculture. UAV-based hyperspectral remote sensing offers high-resolution data for estimating plant nitrogen concentration (PNC), but its cost and complexity limit large-scale application. This study compares the performance of UAV hyperspectral data (S185 sensor) with simulated multispectral data from DJI Phantom 4 Multispectral (P4M), PlanetScope (PS), and Sentinel-2A (S2) in estimating winter wheat PNC. Spectral data were collected across six growth stages over two seasons and resampled to match the spectral characteristics of the three multispectral sensors. Three variable selection strategies (one-dimensional (1D) spectral reflectance, optimized two-dimensional (2D), and three-dimensional (3D) spectral indices) were combined with Random Forest Regression (RFR), Support Vector Machine Regression (SVMR), and Partial Least Squares Regression (PLSR) to build PNC prediction models. Results showed that, while hyperspectral data yielded slightly higher accuracy, optimized multispectral indices, particularly from PS and S2, achieved comparable performance. Among models, SVMR and RFR showed consistent effectiveness across strategies. These findings highlight the potential of low-cost multispectral platforms for practical crop N monitoring. Future work should validate these models using real satellite imagery and explore multi-source data fusion with advanced learning algorithms. Full article
(This article belongs to the Special Issue Perspectives of Remote Sensing for Precision Agriculture)
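The optimized two-dimensional spectral index strategy pairs bands exhaustively and keeps the combination most correlated with PNC. A minimal sketch of that search, using synthetic reflectances and a hypothetical PNC relationship (the band count, coefficients, and data are illustrative, not from the study):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: 100 plot-level spectra resampled to 10 multispectral
# bands, plus measured plant nitrogen concentration (PNC, %).
spectra = rng.uniform(0.05, 0.5, size=(100, 10))
pnc = 2.0 + 3.0 * spectra[:, 7] - 2.5 * spectra[:, 3] + rng.normal(0, 0.05, 100)

best = (None, 0.0)
n_bands = spectra.shape[1]
# Exhaustive search over all band pairs for the 2D normalized difference
# index ND(i, j) = (b_i - b_j) / (b_i + b_j) most correlated with PNC.
for i in range(n_bands):
    for j in range(i + 1, n_bands):
        nd = (spectra[:, i] - spectra[:, j]) / (spectra[:, i] + spectra[:, j])
        r = abs(np.corrcoef(nd, pnc)[0, 1])
        if r > best[1]:
            best = ((i, j), r)

print(f"best band pair {best[0]}, |r| = {best[1]:.3f}")
```

The winning pair would then feed a regressor such as RFR or SVMR in place of the full spectrum.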

27 pages, 2978 KiB  
Article
Dynamic Monitoring and Precision Fertilization Decision System for Agricultural Soil Nutrients Using UAV Remote Sensing and GIS
by Xiaolong Chen, Hongfeng Zhang and Cora Un In Wong
Agriculture 2025, 15(15), 1627; https://doi.org/10.3390/agriculture15151627 - 27 Jul 2025
Viewed by 382
Abstract
We propose a dynamic monitoring and precision fertilization decision system for agricultural soil nutrients, integrating UAV remote sensing and GIS technologies to address the limitations of traditional soil nutrient assessment methods. The proposed method combines multi-source data fusion, including hyperspectral and multispectral UAV imagery with ground sensor data, to achieve high-resolution spatial and spectral analysis of soil nutrients. Real-time data processing algorithms enable rapid updates of soil nutrient status, while a time-series dynamic model captures seasonal variations and crop growth stage influences, improving prediction accuracy (RMSE reductions of 43–70% for nitrogen, phosphorus, and potassium compared to conventional laboratory-based methods and satellite NDVI approaches). The experimental validation compared the proposed system against two conventional approaches: (1) laboratory soil testing with standardized fertilization recommendations and (2) satellite NDVI-based fertilization. Field trials across three distinct agroecological zones demonstrated that the proposed system reduced fertilizer inputs by 18–27% while increasing crop yields by 4–11%, outperforming both conventional methods. Furthermore, an intelligent fertilization decision model generates tailored fertilization plans by analyzing real-time soil conditions, crop demands, and climate factors, with continuous learning enhancing its precision over time. The system also incorporates GIS-based visualization tools, providing intuitive spatial representations of nutrient distributions and interactive functionalities for detailed insights. Our approach significantly advances precision agriculture by automating the entire workflow from data collection to decision-making, reducing resource waste and optimizing crop yields. 
The integration of UAV remote sensing, dynamic modeling, and machine learning distinguishes this work from conventional static systems, offering a scalable and adaptive framework for sustainable farming practices. Full article
(This article belongs to the Section Agricultural Soils)

24 pages, 5039 KiB  
Article
Advanced Estimation of Winter Wheat Leaf’s Relative Chlorophyll Content Across Growth Stages Using Satellite-Derived Texture Indices in a Region with Various Sowing Dates
by Jingyun Chen, Quan Yin, Jianjun Wang, Weilong Li, Zhi Ding, Pei Sun Loh, Guisheng Zhou and Zhongyang Huo
Plants 2025, 14(15), 2297; https://doi.org/10.3390/plants14152297 - 25 Jul 2025
Viewed by 275
Abstract
Accurately estimating leaves’ relative chlorophyll contents (widely represented by Soil and Plant Analysis Development (SPAD) values) across growth stages is crucial for assessing crop health, particularly in regions characterized by varying sowing dates. Unlike previous studies focusing on high-resolution UAV imagery or specific growth stages, this research incorporates satellite-derived texture indices (TIs) into a SPAD value estimation model applicable across multiple growth stages (from tillering to grain-filling). Field experiments were conducted in Jiangsu Province, China, where winter wheat sowing dates varied significantly from field to field. Sentinel-2 imagery was employed to extract vegetation indices (VIs) and TIs. Following a two-step variable selection method, Random Forest (RF)-LassoCV, five machine learning algorithms were applied to develop estimation models. The newly developed model (SVR-RBFVIs+TIs) exhibited robust estimation performance (R2 = 0.8131, RMSE = 3.2333, RRMSE = 0.0710, and RPD = 2.3424) when validated against independent SPAD value datasets collected from fields with varying sowing dates. Moreover, this optimal model also exhibited a notable level of transferability at another location with different sowing times, wheat varieties, and soil types from the modeling area. In addition, this research revealed that despite the lower resolution of satellite imagery compared to UAV imagery, the incorporation of TIs significantly improved estimation accuracies compared to the sole use of VIs typical in previous studies. Full article
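Satellite-derived texture indices of this kind are typically computed from a gray-level co-occurrence matrix (GLCM). A minimal NumPy sketch under that assumption (the quantization level, pixel offset, and random patch are illustrative; the paper's exact TI definitions may differ):

```python
import numpy as np

def glcm(img, levels=8, dx=1, dy=0):
    """Gray-level co-occurrence matrix for one pixel offset (dx, dy)."""
    q = np.minimum((img * levels).astype(int), levels - 1)  # quantize [0, 1]
    m = np.zeros((levels, levels))
    h, w = q.shape
    for y in range(h - dy):
        for x in range(w - dx):
            m[q[y, x], q[y + dy, x + dx]] += 1
    return m / m.sum()

def texture_indices(p):
    """Common GLCM-based texture indices from a normalized co-occurrence matrix."""
    i, j = np.indices(p.shape)
    return {
        "contrast":    np.sum(p * (i - j) ** 2),
        "homogeneity": np.sum(p / (1.0 + (i - j) ** 2)),
        "entropy":     -np.sum(p[p > 0] * np.log(p[p > 0])),
    }

rng = np.random.default_rng(1)
patch = rng.uniform(0, 1, size=(32, 32))   # stand-in for a Sentinel-2 band patch
ti = texture_indices(glcm(patch))
print(ti)
```

In practice these TIs are computed per band and per offset, then screened alongside VIs by the RF-LassoCV selection step.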

23 pages, 16886 KiB  
Article
SAVL: Scene-Adaptive UAV Visual Localization Using Sparse Feature Extraction and Incremental Descriptor Mapping
by Ganchao Liu, Zhengxi Li, Qiang Gao and Yuan Yuan
Remote Sens. 2025, 17(14), 2408; https://doi.org/10.3390/rs17142408 - 12 Jul 2025
Viewed by 418
Abstract
In recent years, the use of UAVs has become widespread. Long-distance flight of UAVs requires obtaining precise geographic coordinates. Global Navigation Satellite Systems (GNSS) are the most common positioning solution, but their signals are susceptible to interference from obstacles and complex electromagnetic environments. In such cases, vision-based technology can serve as an alternative solution to ensure the self-positioning capability of UAVs. Therefore, a scene-adaptive UAV visual localization framework (SAVL) is proposed. In the proposed framework, UAV images are mapped to satellite images with geographic coordinates through pixel-level matching to locate UAVs. Firstly, to tackle the challenge of inaccurate localization resulting from sparse terrain features, this work proposes a novel feature extraction network grounded in a general visual model, leveraging the robust zero-shot generalization capability of the pre-trained model and extracting sparse features from UAV and satellite imagery. Secondly, to overcome the problem of weak generalization ability in unknown scenarios, a descriptor incremental mapping module is designed, which reduces multi-source image differences at the semantic level through UAV-to-satellite image descriptor mapping and constructs a confidence-based incremental strategy to dynamically adapt to the scene. Finally, due to the lack of annotated public datasets, a scene-rich UAV dataset (RealUAV) was constructed to study UAV visual localization in real-world environments. To evaluate the localization performance of the proposed framework, several related methods were compared and analyzed in detail. The results on the dataset indicate that the proposed method achieves excellent positioning accuracy, with an average error of only 8.71 m. Full article
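Once pixel-level matches between a UAV frame and a georeferenced satellite tile are available, geolocation reduces to fitting a transform to the correspondences. A sketch assuming a simple least-squares affine model (the coordinates below are invented; SAVL's actual feature extraction and descriptor mapping are far richer):

```python
import numpy as np

# Hypothetical matches: UAV-image pixel coords -> satellite-map coords (m).
uav = np.array([[100, 200], [400, 210], [120, 500], [390, 480]], float)
sat = np.array([[1010, 2020], [1310, 2035], [1025, 2320], [1298, 2310]], float)

# Solve sat ~= [u, v, 1] @ A by least squares (6-parameter affine).
X = np.hstack([uav, np.ones((len(uav), 1))])
A, *_ = np.linalg.lstsq(X, sat, rcond=None)

def locate(px):
    """Map a UAV pixel (e.g., the image center) to map coordinates."""
    return np.array([px[0], px[1], 1.0]) @ A

center = locate([256, 384])
resid = np.linalg.norm(X @ A - sat, axis=1)
print(center, "mean residual (m):", resid.mean())
```

The residuals give a rough per-frame localization error, analogous in spirit to the 8.71 m average reported above.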

17 pages, 36560 KiB  
Article
Comparative Calculation of Spectral Indices for Post-Fire Changes Using UAV Visible/Thermal Infrared and JL1 Imagery in Jinyun Mountain, Chongqing, China
by Juncheng Zhu, Yijun Liu, Xiaocui Liang and Falin Liu
Forests 2025, 16(7), 1147; https://doi.org/10.3390/f16071147 - 11 Jul 2025
Viewed by 222
Abstract
This study used Jilin-1 satellite data and unmanned aerial vehicle (UAV)-collected visible-thermal infrared imagery to calculate twelve spectral indices and evaluate their effectiveness in distinguishing post-fire forest areas and identifying human-altered land-cover changes in Jinyun Mountain, Chongqing. The research goals included mapping wildfire impacts with M-statistic separability, measuring land-cover distinguishability through Jeffries–Matusita (JM) distance analysis, classifying land-cover types using the random forest (RF) algorithm, and verifying classification accuracy. Cumulative human disturbances—such as land clearing, replanting, and road construction—significantly blocked the natural recovery of burn scars, and during long-term human-assisted recovery periods over one year, the Red Green Blue Index (RGBI), Green Leaf Index (GLI), and Excess Green Index (EXG) showed high classification accuracy for six land-cover types: road, bare soil, deadwood, bamboo, broadleaf, and grass. Key accuracy measures showed producer accuracy (PA) > 0.8, user accuracy (UA) > 0.8, overall accuracy (OA) > 90%, and a kappa coefficient > 0.85. Validation results confirmed that visible-spectrum indices are good at distinguishing photosynthetic vegetation, thermal bands help identify artificial surfaces, and combined thermal-visible indices solve spectral confusion in deadwood recognition. Spectral indices provide high-precision quantitative evidence for monitoring post-fire land-cover changes, especially under human intervention, thus offering important data support for time-based modeling of post-fire forest recovery and improvement of ecological restoration plans. Full article
(This article belongs to the Special Issue Wildfire Behavior and the Effects of Climate Change in Forests)
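The M-statistic and Jeffries-Matusita distance used here for separability can be written compactly. A sketch assuming univariate Gaussian class distributions (the index values are synthetic, not from the Jinyun Mountain data):

```python
import numpy as np

def m_statistic(a, b):
    """Separability M = |mu1 - mu2| / (sigma1 + sigma2); M > 1 => well separated."""
    return abs(a.mean() - b.mean()) / (a.std() + b.std())

def jm_distance(a, b):
    """Jeffries-Matusita distance via the Bhattacharyya distance (univariate Gaussian case)."""
    m1, m2, v1, v2 = a.mean(), b.mean(), a.var(), b.var()
    bh = 0.125 * (m1 - m2) ** 2 / ((v1 + v2) / 2) \
         + 0.5 * np.log(((v1 + v2) / 2) / np.sqrt(v1 * v2))
    return 2 * (1 - np.exp(-bh))   # JM ranges 0..2; > 1.8 usually means separable

rng = np.random.default_rng(2)
burned   = rng.normal(0.15, 0.05, 500)   # hypothetical index values, burned area
unburned = rng.normal(0.45, 0.06, 500)   # ... and healthy vegetation
print(m_statistic(burned, unburned), jm_distance(burned, unburned))
```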

24 pages, 5310 KiB  
Article
Deep Learning-Driven Multi-Temporal Detection: Leveraging DeeplabV3+/Efficientnet-B08 Semantic Segmentation for Deforestation and Forest Fire Detection
by Joe Soundararajan, Andrew Kalukin, Jordan Malof and Dong Xu
Remote Sens. 2025, 17(14), 2333; https://doi.org/10.3390/rs17142333 - 8 Jul 2025
Viewed by 598
Abstract
Deforestation and forest fires are escalating global threats that require timely, scalable, and cost-effective monitoring systems. While UAV and ground-based solutions offer fine-grained data, they are often constrained by limited spatial coverage, high operational costs, and logistical challenges. In contrast, satellite imagery provides broad, repeatable, and economically feasible coverage. This study presents a deep learning framework that combines the DeepLabV3+ architecture with an EfficientNet-B08 backbone to address both deforestation and wildfire detection using satellite imagery. The system utilizes advanced multi-scale feature extraction and Group Normalization to enable robust semantic segmentation under challenging atmospheric conditions and complex forest structures. It is evaluated on two benchmark datasets. In the Amazon forest segmentation dataset, the model achieves a validation Intersection over Union (IoU) of 0.9100 and a pixel accuracy of 0.9605, demonstrating strong performance in delineating forest boundaries. In FireDataset_20m, which presents a severe class imbalance between fire and non-fire pixels, the framework achieves 99.95% accuracy, 93.16% precision, and 91.47% recall. A qualitative analysis confirms the model’s ability to accurately localize fire hotspots and deforested areas. These results highlight the model’s dual-purpose utility for high-resolution, multi-temporal environmental monitoring. Its balanced performance across metrics and adaptability to complex terrain conditions make it a promising tool for supporting forest conservation, early fire detection, and evidence-based policy interventions. Full article
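The reported IoU and pixel-accuracy metrics are straightforward to compute from binary masks. A toy example (the masks are invented; the paper evaluates full Amazon and FireDataset_20m scenes):

```python
import numpy as np

def iou_and_accuracy(pred, truth):
    """Binary IoU and pixel accuracy for a segmentation mask pair."""
    inter = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    iou = inter / union if union else 1.0
    acc = (pred == truth).mean()
    return iou, acc

# Toy 8x8 forest masks (True = forest); prediction misses one pixel, adds one.
truth = np.zeros((8, 8), bool)
truth[2:6, 2:6] = True                 # 16 forest pixels
pred = truth.copy()
pred[2, 2] = False                     # one missed forest pixel
pred[6, 6] = True                      # one false detection
iou, acc = iou_and_accuracy(pred, truth)
print(f"IoU={iou:.3f}  accuracy={acc:.3f}")
```

With heavy class imbalance (as in the fire dataset), accuracy stays near 1 even for poor masks, which is why precision and recall are reported alongside it.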

21 pages, 41092 KiB  
Article
UAV as a Bridge: Mapping Key Rice Growth Stage with Sentinel-2 Imagery and Novel Vegetation Indices
by Jianping Zhang, Rundong Zhang, Qi Meng, Yanying Chen, Jie Deng and Bingtai Chen
Remote Sens. 2025, 17(13), 2180; https://doi.org/10.3390/rs17132180 - 25 Jun 2025
Viewed by 440
Abstract
Rice is one of the three primary staple crops worldwide. The accurate monitoring of its key growth stages is crucial for agricultural management, disaster early warning, and ensuring food security. The effective collection of ground reference data is a critical step for monitoring rice growth stages using satellite imagery, traditionally achieved through labor-intensive field surveys. Here, we propose utilizing UAVs as an alternative means to collect spatially continuous ground reference data across larger areas, thereby enhancing the efficiency and scalability of training and validation processes for rice growth stage mapping products. The UAV data collection involved the Nanchuan, Yongchuan, Tongnan, and Kaizhou districts of Chongqing City, encompassing a total area of 377.5 hectares. After visual interpretation, centimeter-level high-resolution labels of the key rice growth stages were constructed. These labels were then mapped to Sentinel-2 imagery through spatiotemporal matching and scale conversion, resulting in a reference dataset of Sentinel-2 data that covered growth stages such as jointing and heading. Furthermore, we employed 30 vegetation index calculation methods to explore 48,600 spectral band combinations derived from 10 Sentinel-2 spectral bands, thereby constructing a series of novel vegetation indices. Based on the maximum relevance minimum redundancy (mRMR) algorithm, we identified an optimal subset of features that were both highly correlated with rice growth stages and mutually complementary. The results demonstrate that multi-feature modeling significantly enhanced classification performance. The optimal model, incorporating 300 features, achieved an F1 score of 0.864, representing a 2.5% improvement over models based on original spectral bands and a 38.8% improvement over models using a single feature. Notably, a model utilizing only 12 features maintained high classification accuracy (F1 = 0.855) while substantially reducing computational costs. Compared with existing methods, this study constructed a large-scale ground-truth reference dataset for satellite imagery based on UAV observations, providing an effective technical framework for the large-scale mapping of rice growth stages using satellite data. Full article
(This article belongs to the Special Issue Recent Progress in UAV-AI Remote Sensing II)
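The mRMR criterion ranks features by relevance to the target minus redundancy with the features already selected. A greedy correlation-based sketch (synthetic features; the paper applies mRMR to 48,600 index combinations):

```python
import numpy as np

def mrmr(features, target, k):
    """Greedy mRMR: maximize |corr(f, target)| minus mean |corr(f, chosen)|."""
    n = features.shape[1]
    rel = np.array([abs(np.corrcoef(features[:, i], target)[0, 1]) for i in range(n)])
    chosen = [int(np.argmax(rel))]            # start from the most relevant feature
    while len(chosen) < k:
        scores = []
        for i in range(n):
            if i in chosen:
                scores.append(-np.inf)
                continue
            red = np.mean([abs(np.corrcoef(features[:, i], features[:, j])[0, 1])
                           for j in chosen])  # redundancy with selected set
            scores.append(rel[i] - red)
        chosen.append(int(np.argmax(scores)))
    return chosen

rng = np.random.default_rng(3)
X = rng.normal(size=(200, 6))
X[:, 1] = X[:, 0] + rng.normal(0, 0.05, 200)      # band 1 nearly duplicates band 0
y = X[:, 0] + X[:, 2] + rng.normal(0, 0.1, 200)   # proxy for a growth-stage signal
print(mrmr(X, y, 3))
```

The redundancy penalty is what keeps near-duplicate indices from crowding the selected subset.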

26 pages, 9416 KiB  
Article
Multi-Component Remote Sensing for Mapping Buried Water Pipelines
by John Lioumbas, Thomas Spahos, Aikaterini Christodoulou, Ioannis Mitzias, Panagiota Stournara, Ioannis Kavouras, Alexandros Mentes, Nopi Theodoridou and Agis Papadopoulos
Remote Sens. 2025, 17(12), 2109; https://doi.org/10.3390/rs17122109 - 19 Jun 2025
Viewed by 567
Abstract
Accurate localization of buried water pipelines in rural areas is crucial for maintenance and leak management but is often hindered by outdated maps and the limitations of traditional geophysical methods. This study aimed to develop and validate a multi-source remote-sensing workflow, integrating UAV (unmanned aerial vehicle)-borne near-infrared (NIR) surveys, multi-temporal Sentinel-2 imagery, and historical Google Earth orthophotos to precisely map pipeline locations and establish a surface baseline for future monitoring. Each dataset was processed within a unified least-squares framework to delineate pipeline axes from surface anomalies (vegetation stress, soil discoloration, and proxies) and rigorously quantify positional uncertainty, with findings validated against RTK-GNSS (Real-Time Kinematic—Global Navigation Satellite System) surveys of an excavated trench. The combined approach yielded sub-meter accuracy (±0.3 m) with UAV data, meter-scale precision (≈±1 m) with Google Earth, and precision up to several meters (±13.0 m) with Sentinel-2, significantly improving upon inaccurate legacy maps (up to a 300 m divergence) and successfully guiding excavation to locate a pipeline segment. The methodology demonstrated seasonal variability in detection capabilities, with optimal UAV-based identification occurring during early-vegetation growth phases (NDVI, Normalized Difference Vegetation Index ≈ 0.30–0.45) and post-harvest periods. A Sentinel-2 analysis of 221 cloud-free scenes revealed persistent soil discoloration patterns spanning 15–30 m in width, while Google Earth historical imagery provided crucial bridging data with intermediate spatial and temporal resolution. Ground-truth validation confirmed the pipeline location within 0.4 m of the Google Earth-derived position. 
This integrated, cost-effective workflow provides a transferable methodology for enhanced pipeline mapping and establishes a vital baseline of surface signatures, enabling more effective future monitoring and proactive maintenance to detect leaks or structural failures. This methodology is particularly valuable for water utility companies, municipal infrastructure managers, consulting engineers specializing in buried utilities, and remote-sensing practitioners working in pipeline detection and monitoring applications. Full article
(This article belongs to the Special Issue Remote Sensing Applications for Infrastructures)
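Delineating a pipeline axis from surface-anomaly detections within a least-squares framework, with positional uncertainty taken from perpendicular residuals, can be sketched as follows (the anomaly coordinates and noise level are hypothetical):

```python
import numpy as np

# Hypothetical anomaly centroids (vegetation-stress pixels) in local meters.
rng = np.random.default_rng(4)
x = np.linspace(0, 300, 40)
y = 0.25 * x + 12.0 + rng.normal(0, 0.4, x.size)   # true axis: y = 0.25x + 12

# Fit the pipeline axis by least squares; quantify positional uncertainty
# as the standard deviation of perpendicular (point-to-line) residuals.
slope, intercept = np.polyfit(x, y, 1)
resid = (y - (slope * x + intercept)) / np.sqrt(1 + slope ** 2)
print(f"axis: y = {slope:.3f}x + {intercept:.2f}, 1-sigma = {resid.std():.2f} m")
```

The same fit applied to UAV-NIR, Google Earth, and Sentinel-2 anomalies would yield the sub-meter, meter, and multi-meter uncertainty tiers reported above.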

26 pages, 4992 KiB  
Article
NDVI and Beyond: Vegetation Indices as Features for Crop Recognition and Segmentation in Hyperspectral Data
by Andreea Nițu, Corneliu Florea, Mihai Ivanovici and Andrei Racoviteanu
Sensors 2025, 25(12), 3817; https://doi.org/10.3390/s25123817 - 18 Jun 2025
Viewed by 590
Abstract
Vegetation indices have long been central to vegetation monitoring through remote sensing. The most popular one is the Normalized Difference Vegetation Index (NDVI), yet many vegetation indices (VIs) exist. In this paper, we investigate their distinctiveness and discriminative power in the context of applications for agriculture based on hyperspectral data. More precisely, this paper merges two complementary perspectives: an unsupervised analysis with PRISMA satellite imagery to explore whether these indices are truly distinct in practice and a supervised classification over UAV hyperspectral data. We assess their discriminative power, statistical correlations, and perceptual similarities. Our findings suggest that while many VIs have a certain correlation with the NDVI, meaningful differences emerge depending on landscape and application context, thus supporting their effectiveness as discriminative features usable in remote crop segmentation and recognition applications. Full article
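The correlation analysis between NDVI and other VIs can be reproduced in miniature. A sketch with synthetic red/NIR reflectances (SAVI is a standard index; the second index is an illustrative variant, not one from the paper):

```python
import numpy as np

rng = np.random.default_rng(5)
red = rng.uniform(0.02, 0.25, 1000)    # hypothetical per-pixel reflectances
nir = rng.uniform(0.25, 0.60, 1000)

ndvi = (nir - red) / (nir + red)
savi = 1.5 * (nir - red) / (nir + red + 0.5)          # soil-adjusted VI (L = 0.5)
variant = (nir - 0.8 * red) / (nir + 0.8 * red)       # illustrative re-weighted index

for name, vi in [("SAVI", savi), ("variant", variant)]:
    r = np.corrcoef(ndvi, vi)[0, 1]
    print(f"corr(NDVI, {name}) = {r:.3f}")
```

High pairwise correlation is exactly the redundancy the paper probes; the interesting cases are where an index decorrelates from NDVI for a given landscape.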

15 pages, 9753 KiB  
Article
Integrating UAV-RGB Spectral Indices by Deep Learning Model Enables High-Precision Olive Tree Segmentation Under Small Sample
by Yuqi Zhang, Lili Wei, Yuling Zhou, Weili Kou and Shukor Sanim Mohd Fauzi
Forests 2025, 16(6), 924; https://doi.org/10.3390/f16060924 - 31 May 2025
Viewed by 478
Abstract
Accurate maps of olive plantations are essential for monitoring and managing the rapid expansion of olive cultivation. Nevertheless, in situations where data samples are limited and the study area is relatively small, the low spatial resolution of satellite imagery poses challenges in accurately distinguishing olive trees from surrounding vegetation. This study presents an automated extraction model for the rapid and accurate identification of olive plantations using unmanned aerial vehicle RGB (UAV-RGB) imagery, multi-index combinations, and a deep learning algorithm based on ENVI-Net5. The combined use of the Lightness, Normalized Green-Blue Difference Index (NGBDI), and Modified Green-Blue Vegetation Index (MGBVI) indices effectively captures subtle spectral differences between olive trees and surrounding vegetation, enabling more precise classification. The results indicate that the proposed model minimizes omission and misclassification errors by incorporating ENVI-Net5 and the three spectral indices, especially in differentiating olive trees from other vegetation. Compared to conventional models such as Random Forest (RF) and Support Vector Machine (SVM), the proposed method yields the highest metrics—overall accuracy (OA) of 0.98, kappa coefficient of 0.96, producer's accuracy (PA) of 0.95, and user's accuracy (UA) of 0.92. These values represent an improvement of 7%–8% in OA and 15%–17% in the kappa coefficient over baseline models. Additionally, the study highlights the sensitivity of ENVI-Net5 performance to the number of iterations, underlining the importance of selecting an optimal iteration count for achieving peak model accuracy. This research provides a valuable technical foundation for the effective monitoring of olive plantations. Full article
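The visible-band indices combined here are simple functions of the RGB channels. A sketch under assumed formulas (NGBDI is standard; the MGBVI form and the HSL-style lightness below are assumptions for illustration, not taken from the paper):

```python
import numpy as np

def rgb_indices(img):
    """Visible-band indices of the kind used to separate olive canopy from other greens."""
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    eps = 1e-9
    ngbdi = (g - b) / (g + b + eps)                    # Normalized Green-Blue Difference
    mgbvi = (g**2 - b**2) / (g**2 + b**2 + eps)        # Modified Green-Blue (assumed form)
    lightness = img.max(-1) / 2 + img.min(-1) / 2      # HSL-style lightness (assumed)
    return ngbdi, mgbvi, lightness

rng = np.random.default_rng(6)
tile = rng.uniform(0, 1, size=(4, 4, 3))    # stand-in UAV-RGB tile, values in [0, 1]
ngbdi, mgbvi, light = rgb_indices(tile)
print(ngbdi.shape, float(ngbdi.mean()), float(light.mean()))
```

Stacking such index layers with the raw RGB bands is a common way to feed subtle chromatic differences to a segmentation network.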

21 pages, 10875 KiB  
Article
FIM-JFF: Lightweight and Fine-Grained Visual UAV Localization Algorithms in Complex Urban Electromagnetic Environments
by Faming Gong, Junjie Hao, Chengze Du, Hao Wang, Yanpu Zhao, Yi Yu and Xiaofeng Ji
Information 2025, 16(6), 452; https://doi.org/10.3390/info16060452 - 27 May 2025
Viewed by 447
Abstract
Unmanned aerial vehicles (UAVs) are a key driver of the low-altitude economy, where precise localization is critical for autonomous flight and complex task execution. However, conventional global positioning system (GPS) methods suffer from signal instability and degraded accuracy in dense urban areas. This paper proposes a lightweight and fine-grained visual UAV localization algorithm (FIM-JFF) suitable for complex electromagnetic environments. FIM-JFF integrates both shallow and global image features to leverage contextual information from satellite and UAV imagery. Specifically, a local feature extraction module (LFE) is designed to capture rotation, scale, and illumination-invariant features. Additionally, an environment-adaptive lightweight network (EnvNet-Lite) is developed to extract global semantic features while adapting to lighting, texture, and contrast variations. Finally, UAV geolocation is determined by matching feature points and their spatial distributions across multi-source images. To validate the proposed method, a real-world dataset UAVs-1100 was constructed in complex urban electromagnetic environments. The experimental results demonstrate that FIM-JFF achieves an average localization error of 4.03 m with a processing time of 2.89 s, outperforming state-of-the-art methods by improving localization accuracy by 14.9% while reducing processing time by 0.76 s. Full article

25 pages, 11085 KiB  
Article
Quantitative Vulnerability Assessment of Buildings Exposed to Landslides Under Extreme Rainfall Scenarios
by Guangming Li, Dong Liu, Mengjiao Ruan, Yuhua Zhang, Jun He, Zizheng Guo, Haojie Wang and Mengchen Cheng
Buildings 2025, 15(11), 1838; https://doi.org/10.3390/buildings15111838 - 27 May 2025
Viewed by 444
Abstract
Landslides triggered by extreme rainfall often cause severe casualties and property losses. Therefore, it is essential to accurately assess and predict building vulnerability under landslide scenarios for effective risk mitigation. This study proposed a quantitative framework for vulnerability assessments of structures. It integrated extreme rainfall analysis, landslide kinematic assessment, and the dynamic response of structures. The study area is located in the northern mountainous region of Tianjin, China. It lies within the Yanshan Mountains, serving as a key transportation corridor linking North and Northeast China. Sentinel-1A satellite imagery comprising 77 SLC scenes (October 2014 to November 2023) was processed with the SBAS-InSAR technique to identify a slow-moving landslide in the region. High-resolution topographic data of the slope were first acquired through UAV-based remote sensing. Next, historical rainfall data from 1980 to 2017 were analyzed. The Gumbel distribution was used to determine the return periods of extreme rainfall events. The potential slope failure range and kinematic processes of the landslide were then simulated numerically. The dynamic responses of buildings impacted by the landslide were modeled in ABAQUS. These simulations allowed for the estimation of building vulnerability and the generation of vulnerability maps. Results showed that increased rainfall intensity significantly enlarged the plastic zone within the slope. This raised the likelihood of landslide occurrence and led to more severe building damage. When the rainfall return period increased from 50 to 100 years, the number of damaged buildings rose by about 10%. The vulnerability of individual buildings increased by 10% to 15%, and the maximum vulnerability value increased from 0.87 to 1.0. This model offers a valuable addition to current quantitative landslide risk assessment frameworks and is especially suitable for areas where landslides have not yet occurred. Full article
(This article belongs to the Special Issue Buildings and Infrastructures under Natural Hazards)
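The Gumbel return-period analysis maps an annual-maximum rainfall series to a design depth for a given return period via the inverse CDF. A method-of-moments sketch with hypothetical annual maxima (the 1980-2017 Tianjin series is not reproduced here):

```python
import math

# Hypothetical annual-maximum daily rainfall, in mm.
annual_max = [78, 102, 95, 130, 88, 115, 99, 142, 107, 121]
n = len(annual_max)
mean = sum(annual_max) / n
std = (sum((v - mean) ** 2 for v in annual_max) / (n - 1)) ** 0.5

# Method-of-moments Gumbel parameters.
beta = std * math.sqrt(6) / math.pi          # scale
mu = mean - 0.5772 * beta                    # location (Euler-Mascheroni constant)

def design_rainfall(T):
    """Rainfall depth with return period T years: inverse Gumbel CDF at 1 - 1/T."""
    return mu - beta * math.log(-math.log(1 - 1 / T))

print(f"50-yr: {design_rainfall(50):.1f} mm, 100-yr: {design_rainfall(100):.1f} mm")
```

The 50- and 100-year depths produced this way are the rainfall scenarios driving the slope-stability simulations above.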

23 pages, 3461 KiB  
Article
High-Resolution Water Quality Monitoring of Small Reservoirs Using UAV-Based Multispectral Imaging
by Changyu Long, Jingyu Zhang, Xiaolin Xia, Dandan Liu, Lei Chen and Xiqin Yan
Water 2025, 17(11), 1566; https://doi.org/10.3390/w17111566 - 22 May 2025
Viewed by 701
Abstract
Multispectral satellite imagery has been widely applied in water quality monitoring, but limitations in spatial–temporal resolution and acquisition delays often hinder accurate assessments in small water bodies. In this study, a DJI M600PRO UAV equipped with a Sequoia multispectral sensor was used to assess the water quality in Zhangshan Reservoir, a small inland reservoir in Chuzhou, Anhui, China. Two regression approaches—the Window Averaging Method (WAM) and the Matching Pixel-by-Pixel Method (MPP)—were used to link UAV-derived spectral indices with in situ measurements of total nitrogen (TN), total phosphorus (TP), and chemical oxygen demand (COD). Despite a limited sample size (n = 60) and single-day sampling, MPP outperformed WAM, achieving higher predictive accuracy (R2 = 0.970 for TN, 0.902 for TP, and 0.695 for COD). The findings demonstrate that UAV-based MPP effectively captures fine-scale spatial heterogeneity and offers a promising solution for monitoring water quality in small and turbid reservoirs, overcoming key limitations of satellite-based remote sensing. However, the study is constrained by the temporal coverage and sample density, and future work should integrate multi-temporal UAV observations and expand the dataset to improve the model robustness and generalizability. Full article
(This article belongs to the Special Issue Applications of Remote Sensing and GISs in River Basin Ecosystems)
28 pages, 2816 KiB  
Article
Enhancing Urban Understanding Through Fine-Grained Segmentation of Very-High-Resolution Aerial Imagery
by Umamaheswaran Raman Kumar, Toon Goedemé and Patrick Vandewalle
Remote Sens. 2025, 17(10), 1771; https://doi.org/10.3390/rs17101771 - 19 May 2025
Abstract
Despite the growing availability of very-high-resolution (VHR) remote sensing imagery, extracting fine-grained urban features and materials remains a complex task. Land use/land cover (LULC) maps generated from satellite imagery often fall short in providing the resolution needed for detailed urban studies. While hyperspectral imagery offers rich spectral information ideal for material classification, its complex acquisition process limits its use on aerial platforms such as manned aircraft and unmanned aerial vehicles (UAVs), reducing its feasibility for large-scale urban mapping. This study explores the potential of using only RGB and LiDAR data from VHR aerial imagery as an alternative for urban material classification. We introduce an end-to-end workflow that leverages a multi-head segmentation network to jointly classify roof and ground materials while also segmenting individual roof components. The workflow includes a multi-offset self-ensemble inference strategy optimized for aerial data and a post-processing step based on digital elevation models (DEMs). In addition, we present a systematic method for extracting roof parts as polygons enriched with material attributes. The study is conducted on six cities in Flanders, Belgium, covering 18 material classes—including rare categories such as green roofs, wood, and glass. The results show a 9.88% improvement in mean intersection over union (mIOU) for building and ground segmentation, and a 3.66% increase in mIOU for material segmentation compared to a baseline pyramid attention network (PAN). These findings demonstrate the potential of RGB and LiDAR data for high-resolution material segmentation in urban analysis.
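The mIOU metric used to report these gains is straightforward to compute from a predicted and a reference label map: per-class intersection over union, averaged over the classes that occur. The toy 2×3 label maps below are illustrative, not the paper's data.

```python
import numpy as np

def mean_iou(pred, truth, n_classes):
    """Per-class intersection-over-union, averaged over classes
    that appear in either map (classes with empty union are skipped)."""
    ious = []
    for c in range(n_classes):
        inter = np.logical_and(pred == c, truth == c).sum()
        union = np.logical_or(pred == c, truth == c).sum()
        if union:
            ious.append(inter / union)
    return float(np.mean(ious))

truth = np.array([[0, 0, 1],
                  [1, 2, 2]])
pred  = np.array([[0, 1, 1],
                  [1, 2, 2]])
# Class 0: IoU 1/2; class 1: IoU 2/3; class 2: IoU 1 -> mean ~0.722
print(round(mean_iou(pred, truth, 3), 3))
```

Reported improvements (e.g., +9.88% mIOU) are differences of this quantity between the proposed network and the PAN baseline.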
(This article belongs to the Special Issue Applications of AI and Remote Sensing in Urban Systems II)
21 pages, 5836 KiB  
Article
Application of Remote Sensing Floodplain Vegetation Data in a Dynamic Roughness Distributed Runoff Model
by Andre A. Fortes, Masakazu Hashimoto and Keiko Udo
Remote Sens. 2025, 17(10), 1672; https://doi.org/10.3390/rs17101672 - 9 May 2025
Abstract
Riparian vegetation reduces the conveyance capacity of rivers and increases the likelihood of floods. Studies that consider vegetation in flow modeling typically rely on unmanned aerial vehicle (UAV) data, which restrict the covered area. In contrast, this study explores advances in remote sensing and machine learning to obtain vegetation data for an entire river from satellite data alone, which surpass UAV data in spatial coverage, temporal frequency, and cost effectiveness. This study proposes a machine learning method to obtain key vegetation parameters at a resolution of 10 m. The goal was to evaluate the applicability of remotely sensed vegetation data, derived with the proposed method, in a dynamic roughness distributed runoff model of the Abukuma River, in order to assess the effect of vegetation on the Typhoon Hagibis flood (12 October 2019). Two machine learning models were trained to estimate vegetation height and density from different satellite sources, and the parameters were mapped over the river floodplains at 10 m resolution based on Sentinel-2 imagery. The models for density and height achieved fair results, with R2 values of 0.62 and 0.55, respectively, although vegetation height was overestimated in urban areas, particularly in the downstream part of the river. The estimated parameters were then integrated into a dynamic roughness calculation routine and patched into the RRI model, and simulations with and without vegetation were compared. The results showed a considerable increase in water depth (up to 17.7% at the Fushiguro station) and a decrease in discharge (28.1% at the Tateyama station) when vegetation was considered.
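A dynamic roughness routine of this kind typically raises Manning's n with flow depth until the canopy is overtopped. The drag-superposition form below is one common, dimensionally consistent choice sketched here for illustration; the paper's exact formulation is not given, so `n_base`, `drag_coef`, and the density parameter (frontal area per unit volume, 1/m) are assumptions.

```python
import math

def dynamic_manning_n(depth, veg_height, veg_density,
                      n_base=0.03, drag_coef=1.0, g=9.81):
    """Effective Manning n from an assumed drag-superposition form:
        n_eff^2 = n_base^2 + Cd * a * min(h, h_v) * h^(1/3) / (2 g)
    where h is flow depth (m), h_v vegetation height (m), and
    a frontal vegetation area per unit volume (1/m)."""
    if depth <= 0:
        return n_base
    submerged = min(depth, veg_height)      # drag only over the wetted canopy
    veg_term = (drag_coef * veg_density * submerged
                * depth ** (1.0 / 3.0) / (2.0 * g))
    return math.sqrt(n_base ** 2 + veg_term)

# Roughness grows with depth; growth slows once the 1 m canopy is overtopped
for h in (0.2, 0.5, 1.0, 2.0):
    print(h, round(dynamic_manning_n(h, veg_height=1.0, veg_density=0.5), 4))
```

Mapping the satellite-derived height and density grids through such a function at every time step is what makes the roughness "dynamic" in the runoff model.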