Search Results (288)

Search Parameters:
Keywords = very high resolution remote sensing imagery

22 pages, 61181 KiB  
Article
Stepwise Building Damage Estimation Through Time-Scaled Multi-Sensor Integration: A Case Study of the 2024 Noto Peninsula Earthquake
by Satomi Kimijima, Chun Ping, Shono Fujita, Makoto Hanashima, Shingo Toride and Hitoshi Taguchi
Remote Sens. 2025, 17(15), 2638; https://doi.org/10.3390/rs17152638 - 30 Jul 2025
Abstract
Rapid and comprehensive assessment of building damage caused by earthquakes is essential for effective emergency response and rescue efforts in the immediate aftermath. Advanced technologies, including real-time simulations, remote sensing, and multi-sensor systems, can effectively enhance situational awareness and structural damage evaluations. However, most existing methods rely on isolated time snapshots, and few studies have systematically explored the continuous, time-scaled integration and update of building damage estimates from multiple data sources. This study proposes a stepwise framework that continuously updates time-scaled, single-damage estimation outputs using the best available multi-sensor data for estimating earthquake-induced building damage. We demonstrated the framework using the 2024 Noto Peninsula Earthquake as a case study and incorporated official damage reports from the Ishikawa Prefectural Government, real-time earthquake building damage estimation (REBDE) data, and satellite-based damage estimation data (ALOS-2-building damage estimation (BDE)). By integrating the REBDE and ALOS-2-BDE datasets, we created a composite damage estimation product (integrated-BDE). These datasets were statistically validated against official damage records. Our framework showed significant improvements in accuracy, as demonstrated by the mean absolute percentage error, when the datasets were integrated and updated over time: 177.2% for REBDE, 58.1% for ALOS-2-BDE, and 25.0% for integrated-BDE. Finally, for stepwise damage estimation, we proposed a methodological framework that incorporates social media content to further confirm the accuracy of damage assessments. Potential supplementary datasets, including data from Internet of Things-enabled home appliances, real-time traffic data, very-high-resolution optical imagery, and structural health monitoring systems, can also be integrated to improve accuracy. 
The proposed framework is expected to improve the timeliness and accuracy of building damage assessments, foster shared understanding of disaster impacts across stakeholders, and support more effective emergency response planning, resource allocation, and decision-making in the early stages of disaster management in the future, particularly when comprehensive official damage reports are unavailable. Full article
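The accuracy figures above are mean absolute percentage errors; as a point of reference, the metric can be computed in a few lines of Python. The municipality counts below are invented for illustration, not taken from the study:

```python
def mape(estimated, observed):
    """Mean absolute percentage error, in percent; skips zero observations."""
    pairs = [(e, o) for e, o in zip(estimated, observed) if o != 0]
    return 100.0 * sum(abs(e - o) / o for e, o in pairs) / len(pairs)

# Illustrative damaged-building counts per municipality (not the study's data):
official = [120, 450, 80, 300]    # official damage reports
estimated = [150, 400, 100, 270]  # a hypothetical estimation product
print(round(mape(estimated, official), 1))  # → 17.8
```

Lower values indicate estimates closer to the official records, which is the sense in which the integrated product (25.0%) improves on the individual sources.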

8 pages, 4452 KiB  
Proceeding Paper
Synthetic Aperture Radar Imagery Modelling and Simulation for Investigating the Composite Scattering Between Targets and the Environment
by Raphaël Valeri, Fabrice Comblet, Ali Khenchaf, Jacques Petit-Frère and Philippe Pouliguen
Eng. Proc. 2025, 94(1), 11; https://doi.org/10.3390/engproc2025094011 - 25 Jul 2025
Abstract
The high resolution of Synthetic Aperture Radar (SAR) imagery, in addition to its capability to see through clouds and rain, makes it a crucial remote sensing technique. However, SAR images are very sensitive to the radar parameters, the observation geometry, and the scene’s characteristics. Moreover, for a complex scene of interest with targets located on rough soil, composite scattering between the target and the surface occurs and creates distortions in the SAR image. These characteristics can make SAR images difficult to analyse and process. To better understand the complex EM phenomena and their signatures in the SAR image, we propose a methodology to generate raw SAR signals and SAR images for scenes of interest with a target located on a rough surface. To this end, the entire radar acquisition chain is considered: the sensor parameters, the atmospheric attenuation, the interactions between the incident EM field and the scene, and the SAR image formation. Simulation results are presented for a rough dielectric soil and a canonical target modelled as a Perfect Electric Conductor (PEC). These results highlight the importance of the composite scattering signature between the target and the soil: its power is 21 dB higher than that of the target for the target–soil configuration considered. Finally, these simulations allow for the retrieval of characteristics present in actual SAR images and show the potential of the presented model for investigating EM phenomena and their signatures in SAR images. Full article
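For readers converting the quoted figure: a 21 dB difference corresponds to a linear power ratio of roughly 126. The standard conversion (not code from the paper) is:

```python
import math

def db_to_power_ratio(db):
    """Convert a decibel difference to a linear power ratio: 10**(dB/10)."""
    return 10 ** (db / 10.0)

def power_ratio_to_db(ratio):
    """Inverse conversion: dB = 10 * log10(ratio)."""
    return 10.0 * math.log10(ratio)

# The composite target-soil return is reported as 21 dB above the
# target-only return, i.e. roughly 126 times the power.
print(round(db_to_power_ratio(21)))  # → 126
```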

20 pages, 6074 KiB  
Article
Remote Sensing Archaeology of the Xixia Imperial Tombs: Analyzing Burial Landscapes and Geomantic Layouts
by Wei Ji, Li Li, Jia Yang, Yuqi Hao and Lei Luo
Remote Sens. 2025, 17(14), 2395; https://doi.org/10.3390/rs17142395 - 11 Jul 2025
Abstract
The Xixia Imperial Tombs (XITs) represent a crucial, yet still largely mysterious, component of the Tangut civilization’s legacy. Located in northwestern China, this extensive necropolis offers invaluable insights into the Tangut state, culture, and burial practices. This study employs an integrated approach utilizing multi-resolution and multi-temporal satellite remote sensing data, including Gaofen-2 (GF-2), Landsat-8 OLI, declassified GAMBIT imagery, and Google Earth, combined with deep learning techniques, to conduct a comprehensive archaeological investigation of the XITs’ burial landscape. We performed geomorphological analysis of the surrounding environment and automated identification and mapping of burial mounds and mausoleum features using YOLOv5, complemented by manual interpretation of very-high-resolution (VHR) satellite imagery. Spectral indices and image fusion techniques were applied to enhance the detection of archaeological features. Our findings demonstrated the efficacy of this combined methodology for archaeological prospection, providing valuable insights into the spatial layout, geomantic considerations, and preservation status of the XITs. Notably, the analysis of declassified GAMBIT imagery facilitated the identification of a suspected true location for the ninth imperial tomb (M9), a significant contribution to understanding Xixia history through remote sensing archaeology. This research provides a replicable framework for the detection and preservation of archaeological sites using readily available satellite data, underscoring the power of advanced remote sensing and machine learning in heritage studies. Full article

27 pages, 20285 KiB  
Article
Using an Area-Weighted Loss Function to Address Class Imbalance in Deep Learning-Based Mapping of Small Water Bodies in a Low-Latitude Region
by Pu Zhou, Giles Foody, Yihang Zhang, Yalan Wang, Xia Wang, Sisi Li, Laiyin Shen, Yun Du and Xiaodong Li
Remote Sens. 2025, 17(11), 1868; https://doi.org/10.3390/rs17111868 - 28 May 2025
Abstract
Recent advances in very high resolution PlanetScope imagery and deep-learning techniques have enabled effective mapping of small water bodies (SWBs), including ponds and ditches. SWBs typically occupy a minor proportion of remote-sensing imagery, creating a significant class imbalance that introduces bias in trained models. Most existing deep-learning approaches fail to address the inter-class (water vs. non-water) and intra-class (SWBs vs. large water bodies) imbalances simultaneously, and consequently show poor detection of SWBs. To address these challenges, we propose an area-based weighted binary cross-entropy (AWBCE) loss function. AWBCE dynamically weights water bodies according to their size during model training. We evaluated our approach through large-scale SWB mapping in the middle and east of Hubei Province, China. The models were trained on 14,509 manually annotated PlanetScope image patches (512 × 512 pixels each). We implemented the AWBCE loss function in state-of-the-art segmentation models (UNet, DeepLabV3+, HRNet, LANet, UNetFormer, and LETNet) and evaluated them using overall accuracy, F1-score, intersection over union, and Matthews correlation coefficient as accuracy metrics. The AWBCE loss function consistently improved performance, achieving better boundary accuracy and higher scores across all metrics. Quantitative and visual comparisons demonstrated AWBCE’s superiority over other imbalance-focused loss functions (weighted BCE, Dice, and focal losses). These findings emphasize the importance of specialized approaches for comprehensive SWB mapping using high-resolution PlanetScope imagery in low-latitude regions. Full article
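The abstract does not give the AWBCE formula, but its core idea — weighting each water pixel by the size of the body it belongs to — can be sketched in NumPy. The inverse-area weighting rule and the flood-fill labeling below are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def label_components(mask):
    """4-connected component labeling via flood fill (label 0 = background)."""
    labels = np.zeros(mask.shape, dtype=int)
    h, w = mask.shape
    current = 0
    for i in range(h):
        for j in range(w):
            if mask[i, j] and labels[i, j] == 0:
                current += 1
                stack = [(i, j)]
                while stack:
                    y, x = stack.pop()
                    if 0 <= y < h and 0 <= x < w and mask[y, x] and labels[y, x] == 0:
                        labels[y, x] = current
                        stack += [(y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)]
    return labels, current

def awbce_loss(pred, target, eps=1e-7):
    """Area-weighted binary cross-entropy: pixels of small water bodies get
    larger weights (an illustrative inverse-area rule, assumed here)."""
    labels, n = label_components(target.astype(bool))
    weights = np.ones(pred.shape, dtype=float)
    for k in range(1, n + 1):
        region = labels == k
        weights[region] = target.size / region.sum()  # smaller body -> larger weight
    p = np.clip(pred, eps, 1 - eps)
    bce = -(target * np.log(p) + (1 - target) * np.log(1 - p))
    return float((weights * bce).mean())

# A one-pixel pond is penalized more heavily than a two-pixel ditch.
toy_target = np.array([[1, 0, 0, 0], [0, 0, 1, 1]])
toy_pred = np.where(toy_target == 1, 0.9, 0.1)
print(round(awbce_loss(toy_pred, toy_target), 3))  # ≈ 0.277
```

Under ordinary (unweighted) BCE, both water bodies would contribute equally per pixel; here the loss gradient is dominated by the smallest bodies, which is the imbalance the paper targets.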

28 pages, 2816 KiB  
Article
Enhancing Urban Understanding Through Fine-Grained Segmentation of Very-High-Resolution Aerial Imagery
by Umamaheswaran Raman Kumar, Toon Goedemé and Patrick Vandewalle
Remote Sens. 2025, 17(10), 1771; https://doi.org/10.3390/rs17101771 - 19 May 2025
Abstract
Despite the growing availability of very-high-resolution (VHR) remote sensing imagery, extracting fine-grained urban features and materials remains a complex task. Land use/land cover (LULC) maps generated from satellite imagery often fall short in providing the resolution needed for detailed urban studies. While hyperspectral imagery offers rich spectral information ideal for material classification, its complex acquisition process limits its use on aerial platforms such as manned aircraft and unmanned aerial vehicles (UAVs), reducing its feasibility for large-scale urban mapping. This study explores the potential of using only RGB and LiDAR data from VHR aerial imagery as an alternative for urban material classification. We introduce an end-to-end workflow that leverages a multi-head segmentation network to jointly classify roof and ground materials while also segmenting individual roof components. The workflow includes a multi-offset self-ensemble inference strategy optimized for aerial data and a post-processing step based on digital elevation models (DEMs). In addition, we present a systematic method for extracting roof parts as polygons enriched with material attributes. The study is conducted on six cities in Flanders, Belgium, covering 18 material classes—including rare categories such as green roofs, wood, and glass. The results show a 9.88% improvement in mean intersection over union (mIOU) for building and ground segmentation, and a 3.66% increase in mIOU for material segmentation compared to a baseline pyramid attention network (PAN). These findings demonstrate the potential of RGB and LiDAR data for high-resolution material segmentation in urban analysis. Full article
(This article belongs to the Special Issue Applications of AI and Remote Sensing in Urban Systems II)

18 pages, 3261 KiB  
Article
Exploring Burnt Area Delineation with Cross-Resolution Mapping: A Case Study of Very High and Medium-Resolution Data
by Sai Balakavi, Vineet Vadrevu and Kristofer Lasko
Sensors 2025, 25(10), 3009; https://doi.org/10.3390/s25103009 - 10 May 2025
Abstract
Remote sensing is essential for mapping and monitoring burnt areas. Integrating Very High-Resolution (VHR) data with medium-resolution datasets like Landsat and deep learning algorithms can enhance mapping accuracy. This study employs two deep learning algorithms, UNET and Gated Recurrent Unit (GRU), to classify burnt areas in the Bandipur Forest, Karnataka, India. We explore using VHR imagery with limited samples to train models on Landsat imagery for burnt area delineation. Four models were analyzed: (a) custom UNET with Landsat labels, (b) custom UNET with PlanetScope-labeled data on Landsat, (c) custom UNET-GRU with Landsat labels, and (d) custom UNET-GRU with PlanetScope-labeled data on Landsat. The custom UNET with Landsat labels achieved the best performance, excelling in precision (0.89), accuracy (0.98), and segmentation quality (Mean IOU: 0.65, Dice Coefficient: 0.78). Using PlanetScope labels resulted in slightly lower performance, but the high recall (0.87 for UNET-GRU) demonstrates the potential of this approach for identifying positive instances. In this study, we highlight the potential and limitations of integrating VHR with medium-resolution satellite data for burnt area delineation using deep learning. Full article
(This article belongs to the Section Environmental Sensing)
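The scores reported above (precision, accuracy, IoU, Dice) all derive from the same four confusion-matrix counts; a minimal sketch with invented pixel counts, not the paper's data:

```python
def seg_metrics(tp, fp, fn, tn):
    """Standard binary segmentation metrics from confusion-matrix counts."""
    return {
        "precision": tp / (tp + fp),
        "recall": tp / (tp + fn),
        "accuracy": (tp + tn) / (tp + fp + fn + tn),
        "iou": tp / (tp + fp + fn),           # intersection over union
        "dice": 2 * tp / (2 * tp + fp + fn),  # Dice coefficient
    }

# Illustrative pixel counts only, not the paper's confusion matrix
m = seg_metrics(tp=780, fp=96, fn=120, tn=9004)
print({k: round(v, 2) for k, v in m.items()})
```

Note how accuracy stays near 1.0 whenever the "unburnt" class dominates, which is why IoU and Dice are the more informative numbers for a rare class like burnt area.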

28 pages, 27039 KiB  
Article
Deep Learning-Based Urban Tree Species Mapping with High-Resolution Pléiades Imagery in Nanjing, China
by Xiaolei Cui, Min Sun, Zhili Chen, Mingshi Li and Xiaowei Zhang
Forests 2025, 16(5), 783; https://doi.org/10.3390/f16050783 - 7 May 2025
Cited by 1
Abstract
In rapidly urbanizing regions, encroachment on native green spaces has exacerbated ecological issues such as urban heat islands and flooding. Accurate mapping of tree species distribution is therefore vital for sustainable urban management. However, the high heterogeneity of urban landscapes, resulting from the coexistence of diverse land covers, built infrastructure, and anthropogenic activities, often leads to reduced robustness and transferability of remote sensing classification methods across different images and regions. In this study, we used very high-resolution Pléiades imagery and field-verified samples of eight common urban trees and background land covers. By employing transfer learning with advanced segmentation networks, we evaluated each model’s accuracy, robustness, and efficiency. The best-performing network delivered markedly superior classification consistency and required substantially less training time than a model trained from scratch. These findings offer concise, practical guidance for selecting and deploying deep learning methods in urban tree species mapping, supporting improved ecological monitoring and planning. Full article
(This article belongs to the Section Forest Inventory, Modeling and Remote Sensing)

25 pages, 14670 KiB  
Article
Urban Functional Zone Classification Based on High-Resolution Remote Sensing Imagery and Nighttime Light Imagery
by Junyu Chen, Yingbiao Chen, Zihao Zheng, Zhenxiang Ling, Xianxin Meng, Junyu Kuang, Xianghua Shi, Yifan Yang, Wentao Chen and Zhifeng Wu
Remote Sens. 2025, 17(9), 1588; https://doi.org/10.3390/rs17091588 - 30 Apr 2025
Cited by 1
Abstract
Urbanization has led to rapid changes in the landscapes of cities, making the quick and accurate identification of urban functional zones crucial for urban development. Identifying urban functional zones requires understanding not only the physical characteristics of a city but also its social attributes. However, traditional methods relying on single-modal features for classification struggle to ensure accuracy, posing challenges for subsequent fine-grained urban studies. To address the limitations of single-modal models, this study proposes an end-to-end Cross-modal Spatial Alignment Gated Fusion Deep Neural Network (CSAGFNet). This model extracts information from high-resolution remote sensing imagery and nighttime light imagery to classify urban functional zones. The CSAGFNet aligns features from different modalities using a cross-modal spatial alignment module, ensuring consistency in the same spatial dimension. Following this, a gated fusion mechanism dynamically controls the weighted integration of modal features, optimizing their interaction. In tests, CSAGFNet achieved a mean intersection over union (mIoU) value of 0.853, outperforming single-modal models by at least 5% and significantly demonstrating its superiority. Extensive ablation experiments validated the effectiveness of the core components of CSAGFNet. Full article
(This article belongs to the Special Issue Nighttime Light Remote Sensing Products for Urban Applications)
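The abstract does not specify CSAGFNet's layers, but a gated fusion mechanism of the kind described — a sigmoid gate that weights the two modalities per pixel and channel — can be sketched in NumPy. The single-layer gate and the toy shapes below are assumptions for illustration:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def gated_fusion(feat_rs, feat_ntl, w, b):
    """Fuse two spatially aligned feature maps with a sigmoid gate.

    The gate g in (0, 1) decides, per pixel and channel, how much the
    optical branch contributes versus the nighttime-light branch.
    """
    stacked = np.concatenate([feat_rs, feat_ntl], axis=-1)  # (H, W, 2C)
    gate = sigmoid(stacked @ w + b)                         # (H, W, C)
    return gate * feat_rs + (1.0 - gate) * feat_ntl

rng = np.random.default_rng(0)
feat_rs = rng.normal(size=(4, 4, 8))   # toy high-resolution optical features
feat_ntl = rng.normal(size=(4, 4, 8))  # toy nighttime-light features
w = 0.1 * rng.normal(size=(16, 8))     # toy gate weights (2C -> C)
b = np.zeros(8)
fused = gated_fusion(feat_rs, feat_ntl, w, b)
print(fused.shape)  # (4, 4, 8)
```

Because the gate is a convex weight, every fused value lies between the two modality values at that position, so neither branch can be silently amplified beyond its input.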

19 pages, 51492 KiB  
Article
Detection of Photovoltaic Arrays in High-Spatial-Resolution Remote Sensing Images Using a Weight-Adaptive YOLO Model
by Zhumao Lu, Xiaokai Meng, Jinsong Li, Hua Yu, Shuai Wang, Zeng Qu and Jiayun Wang
Energies 2025, 18(8), 1916; https://doi.org/10.3390/en18081916 - 9 Apr 2025
Abstract
This study addresses the issue of inadequate remote sensing monitoring accuracy for photovoltaic (PV) arrays in complex geographical environments against the backdrop of rapid global expansion in PV power generation. In particular, to handle the complex spatial distribution characteristics formed by multiple types of PV power stations within China, this study moves beyond traditional approaches that rely on very high-resolution (0.3–0.8 m) aerial imagery and manual annotation templates. Instead, it proposes an intelligent recognition method for PV arrays based on satellite remote sensing imagery. By enhancing the C3 feature extraction module of the YOLOv5 object detection model and introducing a weight-adaptive adjustment mechanism, the model’s ability to represent features of PV components across multiple scenarios is significantly improved. Experimental results demonstrate that the improved model achieves gains of 6.13% in recall, 3.06% in precision, 5% in F1 score, and 4.6% in mean Average Precision (mAP). Notably, the false detection rate in low-resolution (<5 m) panchromatic imagery is significantly reduced. Comparative analysis reveals that the optimized model reduces the error rate for small object detection in black-and-white imagery and complex scenarios by 19.8% compared to the baseline model. The technical solution proposed in this study provides a feasible technical pathway for constructing a dynamic monitoring system for large-scale PV facilities. Full article
(This article belongs to the Section A2: Solar Energy and Photovoltaic Systems)

26 pages, 7339 KiB  
Article
Remote Sensing Reveals Multidecadal Trends in Coral Cover at Heron Reef, Australia
by David E. Carrasco Rivera, Faye F. Diederiks, Nicholas M. Hammerman, Timothy Staples, Eva Kovacs, Kathryn Markey and Chris M. Roelfsema
Remote Sens. 2025, 17(7), 1286; https://doi.org/10.3390/rs17071286 - 3 Apr 2025
Abstract
Coral reefs are experiencing increasing disturbance regimes. The influence these disturbances have on coral reef health is traditionally captured through field-based monitoring, representing a very small reef area (<1%). Satellite-based observations offer the ability to up-scale the spatial extent of monitoring efforts to larger reef areas, providing valuable insights into benthic trajectories through time. Our aim was to demonstrate a repeatable benthic habitat mapping approach integrating field and satellite data acquired annually over 21 years. With this dataset, we analyzed the trends in benthic composition for a shallow platform reef: Heron Reef, Australia. Annual benthic habitat maps were created for the period of 2002 to 2023, using a random forest classifier and object-based contextual editing, with annual in situ benthic data derived from geolocated photoquadrats and coincident high-spatial-resolution (2–5 m pixel size) multi-spectral satellite imagery. Field data that were not used for calibration were used to conduct accuracy assessments. The results demonstrated the capability of remote sensing to map the time series of benthic habitats with overall accuracies between 59 and 81%. We identified various ecological trajectories for the benthic types, such as decline and recovery, over time and space. These trajectories were derived from satellite data and compared with those from the field data. Remote sensing offered valuable insights at both reef and within-reef scales (i.e., geomorphic zones), complementing percentage cover data with precise surface area metrics. We demonstrated that monitoring benthic trajectories at the reef scale every 2 to 3 years effectively captured ecological trends, which is crucial for balancing resource allocation. Full article
(This article belongs to the Section Ocean Remote Sensing)

22 pages, 11865 KiB  
Article
Detection and Optimization of Photovoltaic Arrays’ Tilt Angles Using Remote Sensing Data
by Niko Lukač, Sebastijan Seme, Klemen Sredenšek, Gorazd Štumberger, Domen Mongus, Borut Žalik and Marko Bizjak
Appl. Sci. 2025, 15(7), 3598; https://doi.org/10.3390/app15073598 - 25 Mar 2025
Abstract
Maximizing the energy output of photovoltaic (PV) systems is becoming increasingly important. Consequently, numerous approaches have been developed over the past few years that utilize remote sensing data to predict or map solar potential. However, they primarily address hypothetical scenarios, and few focus on improving existing installations. This paper presents a novel method for optimizing the tilt angles of existing PV arrays by integrating Very High Resolution (VHR) satellite imagery and airborne Light Detection and Ranging (LiDAR) data. At first, semantic segmentation of VHR imagery using a deep learning model is performed in order to detect PV modules. The segmentation is refined using a Fine Optimization Module (FOM). LiDAR data are used to construct a 2.5D grid to estimate the modules’ tilt (inclination) and aspect (orientation) angles. The modules are grouped into arrays, and tilt angles are optimized using a Simulated Annealing (SA) algorithm, which maximizes simulated solar irradiance while accounting for shadowing, direct, and anisotropic diffuse irradiances. The method was validated using PV systems in Maribor, Slovenia, achieving a 0.952 F1-score for module detection (using FT-UnetFormer with SwinTransformer backbone) and an estimated electricity production error of below 6.7%. Optimization results showed potential energy gains of up to 4.9%. Full article
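The tilt-optimization step can be illustrated with a compact simulated annealing loop. The cosine irradiance proxy below is a deliberately simplified stand-in for the paper's shadow-aware, anisotropic-diffuse irradiance simulation; the latitude constant and step sizes are assumptions:

```python
import math
import random

def annual_irradiance(tilt_deg, latitude_deg=46.55):
    """Toy clear-sky proxy: irradiance peaks when tilt is near the site
    latitude (roughly Maribor's). A stand-in for the paper's simulation."""
    return math.cos(math.radians(tilt_deg - latitude_deg))

def optimize_tilt(start_tilt, steps=5000, t0=1.0, seed=42):
    """Simulated annealing over the tilt angle in [0, 90] degrees."""
    rng = random.Random(seed)
    tilt = start_tilt
    e = annual_irradiance(tilt)
    best, e_best = tilt, e
    for k in range(steps):
        temp = t0 * (1 - k / steps) + 1e-9  # linear cooling schedule
        cand = min(90.0, max(0.0, tilt + rng.uniform(-2.0, 2.0)))
        e_cand = annual_irradiance(cand)
        # Accept improvements always; accept worse moves with Boltzmann probability
        if e_cand > e or rng.random() < math.exp((e_cand - e) / temp):
            tilt, e = cand, e_cand
            if e > e_best:
                best, e_best = tilt, e
    return best

print(40 < optimize_tilt(10.0) < 53)  # settles near the site latitude
```

In the paper the objective is far richer (per-array shadowing, direct and diffuse components), but the accept/reject structure of SA is unchanged: only the objective function swaps out.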

16 pages, 2587 KiB  
Article
In-Season Estimation of Japanese Squash Using High-Spatial-Resolution Time-Series Satellite Imagery
by Nan Li, Todd H. Skaggs and Elia Scudiero
Sensors 2025, 25(7), 1999; https://doi.org/10.3390/s25071999 - 22 Mar 2025
Abstract
Yield maps and in-season forecasts help optimize agricultural practices. The traditional approaches to predicting yield during the growing season often rely on ground-based observations, which are time-consuming and labor-intensive. Remote sensing offers a promising alternative by providing frequent and spatially extensive information on crop development. In this study, we evaluated the feasibility of high-resolution satellite imagery for the early yield prediction of an under-investigated crop, Japanese squash (Cucurbita maxima), in a small farm in Hollister, California, over the growing seasons of 2022 and 2023 using vegetation indices, including the Normalized Difference Vegetation Index (NDVI) and the Soil-Adjusted Vegetation Index (SAVI). We identified the optimal time for yield prediction and compared the performances across satellite platforms (Sentinel-2: 10 m; PlanetScope: 3 m; SkySat: 0.5 m). Pearson’s correlation coefficient (r) was employed to determine the dependencies between the yield and vegetation indices measured at various stages throughout the squash growing season. The results showed that SkySat-derived vegetation indices outperformed those of Sentinel-2 and PlanetScope in explaining the squash yields (R2 = 0.75–0.76; RMSE = 0.8–1.9 tons/ha). Remote sensing showed very strong correlations with yield as early as 29 days after planting in 2022 and 37 and 76 days in 2023 for the NDVI and the SAVI, respectively. These early dates corresponded with the vegetative stages when the crop canopy became denser before fruit development. These findings highlight the utility of high-resolution imagery for in-season yield estimation and within-field variability detection. Detecting yield variability early enables timely management interventions to optimize crop productivity and resource efficiency, a critical advantage for small-scale farms, where marginal yield changes impact economic outcomes. Full article
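Both vegetation indices and the correlation measure used here are standard formulas; a self-contained sketch with invented reflectance and yield values, not the study's measurements:

```python
def ndvi(nir, red):
    """Normalized Difference Vegetation Index."""
    return (nir - red) / (nir + red)

def savi(nir, red, L=0.5):
    """Soil-Adjusted Vegetation Index; L = 0.5 is the common default."""
    return (nir - red) / (nir + red + L) * (1 + L)

def pearson_r(xs, ys):
    """Pearson's correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Illustrative reflectances and squash yields (tons/ha), not the study's data
ndvi_vals = [ndvi(0.45, 0.08), ndvi(0.52, 0.07), ndvi(0.38, 0.10), ndvi(0.60, 0.05)]
yields = [18.0, 21.5, 15.2, 24.1]
print(round(pearson_r(ndvi_vals, yields), 2))  # → 0.99
```

The study's workflow amounts to computing such an r at successive dates after planting and choosing the earliest date at which it becomes strong.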

32 pages, 5922 KiB  
Review
Potential of Earth Observation for the German North Sea Coast—A Review
by Karina Raquel Alvarez, Felix Bachofer and Claudia Kuenzer
Remote Sens. 2025, 17(6), 1073; https://doi.org/10.3390/rs17061073 - 18 Mar 2025
Abstract
Rising sea levels, warming ocean temperatures, and other climate change impacts threaten the German North Sea coast, making monitoring of this system even more critical. This study reviews the potential of remote sensing for the German North Sea coast, analyzing 97 publications from 2000 to 2024. Publications fell into four main research topics: coastal morphology (33), water quality (34), ecology (22), and sediment (8). More than two-thirds of these papers (69%) used satellite platforms, whereas about one third (29%) used aircraft and very few (4%) used uncrewed aerial vehicles (UAVs). Multispectral data were the most used data type in these studies (59%), followed by synthetic aperture radar (SAR) data (23%). Studies on intertidal topography were the most numerous overall, making up one-fifth (21%) of the articles. Research gaps identified in this review include coastal morphology and ecology studies over large areas, especially at scales that align with administrative or management areas such as the German Wadden Sea National Parks. Additionally, few studies utilized free, publicly available high-spatial-resolution imagery, such as that from Sentinel-2, or newly available very high-spatial-resolution satellite imagery. This review finds that remote sensing plays a notable role in monitoring the German North Sea coast at local scales, but fewer studies investigated large areas at sub-annual temporal resolution, especially for coastal morphology and ecology topics. Earth Observation, however, has the potential to fill this gap and provide critical information about the impacts of coastal hazards on this region. Full article

26 pages, 17384 KiB  
Article
Adversarial Positive-Unlabeled Learning-Based Invasive Plant Detection in Alpine Wetland Using Jilin-1 and Sentinel-2 Imageries
by Enzhao Zhu, Alim Samat, Erzhu Li, Ren Xu, Wei Li and Wenbo Li
Remote Sens. 2025, 17(6), 1041; https://doi.org/10.3390/rs17061041 - 16 Mar 2025
Abstract
Invasive plants (IPs) pose a significant threat to local ecosystems. Recent advances in remote sensing (RS) and deep learning (DL) significantly improved the accuracy of IP detection. However, mainstream DL methods often require large, high-quality labeled data, leading to resource inefficiencies. In this study, a deep learning framework called adversarial positive-unlabeled learning (APUL) was proposed to achieve high-precision IP detection using a limited number of target plant samples. APUL employs a dual-branch discriminator to constrain the class prior-free classifier, effectively harnessing information from positive-unlabeled data through the adversarial process and enhancing the accuracy of IP detection. The framework was tested on very high-resolution Jilin-1 and Sentinel-2 imagery of Bayinbuluke grasslands in Xinjiang, where the invasion of Pedicularis kansuensis has caused serious ecological and livestock damage. Results indicate that the adversarial structure can significantly improve the performance of positive-unlabeled learning (PUL) methods, and the class prior-free approach outperforms traditional PUL methods in IP detection. APUL achieved an overall accuracy of 92.2% and an F1-score of 0.80, revealing that Pedicularis kansuensis has invaded 4.43% of the local plant population in the Bayinbuluke grasslands, underscoring the urgent need for timely control measures. Full article
(This article belongs to the Special Issue Remote Sensing for Management of Invasive Species)
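The APUL abstract above builds on classical positive-unlabeled (PU) risk estimation. As a point of reference only, the sketch below shows a standard non-negative PU risk estimator (in the style of nnPU) with a sigmoid surrogate loss; it is not the authors' APUL method, which additionally uses an adversarial dual-branch discriminator precisely to avoid the assumed class prior `pi` that this classical formulation requires.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def nn_pu_risk(scores_pos, scores_unl, pi):
    """Non-negative PU risk with a sigmoid surrogate loss.

    scores_pos: classifier scores on labeled positive samples
    scores_unl: classifier scores on unlabeled samples
    pi: assumed class prior P(y = +1) -- the quantity APUL's
        class prior-free design avoids having to estimate
    """
    loss = lambda s, y: sigmoid(-y * s)  # surrogate 0-1 loss
    # Risk on positives, weighted by the class prior.
    r_pos = pi * loss(scores_pos, +1).mean()
    # Negative-class risk estimated from unlabeled data,
    # with the positive contamination subtracted out.
    r_neg = loss(scores_unl, -1).mean() - pi * loss(scores_pos, -1).mean()
    # Clamp at zero so the estimator cannot go negative (nnPU correction).
    return r_pos + max(r_neg, 0.0)
```

Minimizing this risk over classifier parameters trains a binary detector from positive and unlabeled pixels alone, which is the setting the IP-detection problem above operates in.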
19 pages, 5267 KiB  
Article
Remote-Sensed Spatio-Temporal Study of the Tropical Cyclone Freddy Exceptional Case
by Giuseppe Ciardullo, Leonardo Primavera, Fabrizio Ferrucci, Fabio Lepreti and Vincenzo Carbone
Remote Sens. 2025, 17(6), 981; https://doi.org/10.3390/rs17060981 - 11 Mar 2025
Abstract
Dynamical processes during the different stages of evolution of tropical cyclones play crucial roles in their development and intensification, making them one of the most powerful natural forces on Earth. Given their classification as extreme atmospheric events resulting from multiple interacting factors, it is important to study their dynamical behavior and the nonlinear effects generated by emerging structures during scale and intensity transitions, correlating them with the surrounding environment. This study investigates the extraordinary, record-breaking case of Tropical Cyclone Freddy (2023 Indian Ocean tropical season) from a purely dynamical perspective, examining the superposition of energetic structures at different spatio-temporal scales by mainly considering thermal fluctuations over 12 days of its evolution. The tool used for this investigation is the Proper Orthogonal Decomposition (POD), in which a set of empirical basis functions is built to retain the maximum energetic content of the turbulent flow. The method is applied to a satellite imagery dataset acquired from the SEVIRI radiometer onboard the Meteosat Second Generation-8 (MSG-8) geostationary platform, from which the cloud-top temperature scalar field is remotely sensed from the cyclone's associated cloud system. For this application, given Freddy's very long lifetime and exceptionally wide evolution path, reanalysis and tracking data archives are used to create an appropriately dynamic spatial grid. Freddy's eye is followed after its first shape formation with very high temporal resolution snapshots of the temperature field. The energy content in three different characteristic scale ranges is analyzed through the associated spatial and temporal component spectra, focusing both on the total period and on the transitions between different categories.
The results of the analysis outline several interesting aspects of Freddy's dynamics, related both to its transition stages and to its total period. Reconstructions of the temperature field show that the most coherent vortices are found in the outermost cyclonic regions and in proximity to the eyewall. Additionally, the spatio-temporal characteristics of the dynamics during Freddy's maximum-intensity phase are consistent with those found in an analogous case study of the Faraji tropical cyclone. Full article
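The POD described in the Freddy abstract is commonly computed via the singular value decomposition of a snapshot matrix. The minimal sketch below illustrates that generic snapshot-POD computation; the variable names and mean-removal choice are illustrative assumptions, not the authors' specific pipeline for the SEVIRI cloud-top temperature field.

```python
import numpy as np

def pod_modes(snapshots, n_modes):
    """Snapshot POD of a field via SVD.

    snapshots: array of shape (n_space, n_time), one column per
               time snapshot of the scalar field (e.g. cloud-top
               temperature flattened over the spatial grid)
    Returns (spatial modes, temporal coefficients, energy fractions),
    with modes ordered by decreasing energy content.
    """
    # Remove the temporal mean so modes capture fluctuations.
    X = snapshots - snapshots.mean(axis=1, keepdims=True)
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    energy = s**2 / np.sum(s**2)  # fraction of variance per mode
    modes = U[:, :n_modes]                      # empirical spatial basis
    coeffs = s[:n_modes, None] * Vt[:n_modes]   # temporal amplitudes
    return modes, coeffs, energy[:n_modes]
```

Summing `modes @ coeffs` over a truncated set of leading modes gives a low-order reconstruction of the fluctuation field, which is how energy content at different characteristic scales can be isolated and compared across the cyclone's intensity transitions.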
