
Remote Sensing

Remote Sensing is an international, peer-reviewed, open access journal about the science and application of remote sensing technology, published semimonthly online by MDPI.
The Remote Sensing Society of Japan (RSSJ) and Japan Society of Photogrammetry and Remote Sensing (JSPRS) are affiliated with Remote Sensing and their members receive discounts on the article processing charge.
Quartile Ranking JCR - Q1 (Geosciences, Multidisciplinary)

All Articles (40,909)

High-precision inversion of shallow-water depth is crucial to marine resource development, ecological protection, and national defense security. Traditional acoustic detection, LiDAR, and empirical models are limited by high cost, low efficiency, or dependence on water quality, and struggle to meet the growing demand for shallow-water depth data. With the rapid development of remote sensing, computer science, and artificial intelligence, bathymetric inversion based on remote sensing images and deep learning models has become a research hotspot. In this study, journal articles and conference papers were searched in the Web of Science (WOS) and Google Scholar databases using keywords such as “remote sensing image”, “bathymetry”, and “deep learning model”. The publication dates of the papers range from January 2021 to September 2025. A total of 309 relevant studies were retrieved and, after screening and quality control, 132 core studies were selected as the objects of this review. These studies were classified by deep learning model, including CNN, U-Net, MLP, and RNN. The review analyzes and summarizes the characteristics of the different deep learning models in bathymetric inversion, as well as their data source selection, inversion accuracy, and limitations. Additionally, future development trends are discussed in light of the latest research results.

27 February 2026

Statistics of the number of published papers from 2001 to 2025.

Rapid urbanization in coastal regions presents complex challenges for environmental management and public safety. Accurate, high-resolution wind field monitoring is critical for urban disaster mitigation, infrastructure resilience, and pollutant dispersion analysis in these densely populated areas. However, utilizing massive multi-source satellite remote sensing data for precise prediction remains difficult due to the spatiotemporal heterogeneity caused by the land–sea interface. To address this, we propose a novel lightweight Geospatial Artificial Intelligence (GeoAI) framework (DA-DSC-UNet) designed to predict wind fields in coastal urban environments (e.g., Fujian, China). We constructed a dataset by integrating multi-source satellite scatterometer products (including Advanced Scatterometer (ASCAT), Fengyun-3E (FY-3E), and Quick Scatterometer (QuikSCAT)) and buoy observations. The framework employs a UNet architecture enhanced with dual attention mechanisms (Efficient Channel Attention (ECA) and Convolutional Block Attention Module (CBAM)) to adaptively extract features from remote sensing signals, focusing on critical spatial regions such as urban coastlines. Additionally, depthwise separable convolutions (DSCs) are introduced to ensure the model is lightweight and efficient for potential deployment in urban monitoring systems. Results demonstrate that our approach significantly outperforms existing deep learning models (reducing Mean Absolute Error (MAE) by 14–25.8%) and exhibits exceptional robustness against observational noise. This work demonstrates the potential of deep learning in enhancing the value of remote sensing data for urban resilience, sustainable development (SDG 11), and environmental monitoring in complex coastal zones.
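A depthwise separable convolution replaces one standard convolution with a per-channel (depthwise) convolution followed by a 1 × 1 pointwise convolution, which is what makes the model lightweight. A minimal sketch of the resulting parameter savings, with hypothetical channel counts and kernel size (not taken from the paper):

```python
# Illustrative sketch, not the paper's code: weight counts for a standard
# convolution versus a depthwise separable convolution (DSC).

def conv_params(c_in: int, c_out: int, k: int) -> int:
    """Weights in a standard k x k convolution (bias omitted)."""
    return k * k * c_in * c_out

def dsc_params(c_in: int, c_out: int, k: int) -> int:
    """Depthwise k x k convolution (one filter per input channel)
    followed by a 1 x 1 pointwise convolution."""
    return k * k * c_in + c_in * c_out

# Hypothetical example layer: 64 input channels, 128 output channels, 3 x 3 kernel.
c_in, c_out, k = 64, 128, 3
standard = conv_params(c_in, c_out, k)   # 73,728 weights
separable = dsc_params(c_in, c_out, k)   # 8,768 weights
print(f"parameter reduction: {1 - separable / standard:.1%}")
```

For this example layer the DSC needs roughly an eighth of the weights, which is the kind of saving that motivates its use on resource-limited monitoring hardware.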

27 February 2026

Overall research framework of the proposed DA-DSC-UNet model. The workflow consists of four major modules: data preprocessing, model training and inference, validation and evaluation, and robustness evaluation. The process forms a closed-loop system from multi-source data integration to model optimization and stability verification.

Machine Learning (ML) modeling for disaster management is a growing field, but existing works focus largely on mapping the extent of floods or broad categories of damage and lack explainability methods to help users understand model outputs. In this study, we propose Flood Assessment using Dempster–Shafer Fusion (FADS-Fusion), a tool for post-flood damage assessment that uses Dempster–Shafer fusion to combine the outputs of multiple deep learning models. FADS-Fusion generalizes to any pretrained models, once their outputs are post-processed for consistency, making it applicable to other disaster management or change detection applications. The novelty of our work lies in applying Dempster–Shafer theory to multi-model fusion and uncertainty quantification on a flood dataset for segmenting both buildings and roads. We trained and evaluated models on the SpaceNet 8 challenge dataset and demonstrated that fusing the SpaceNet 8 Baseline (SN8) and Siamese Nested UNet (SNUNet) models yields a modest overall improvement of +1.93% in mAP, while a +12.3% increase in Precision and a −15.0% decrease in Recall are statistically significant compared to the baseline. FADS-Fusion also quantifies uncertainty from the conflict of evidence in Dempster–Shafer fusion, applying a discount factor, which serves as both a quantitative and qualitative explainability method. While uncertainty correlates with a drop in performance, this relationship depends on the class-weighted uncertainty values and location. Mapping uncertainty back onto the original image allows visual inspection of fusion quality and indicates areas a human will need to reassess. Our work demonstrates that FADS-Fusion improves post-flood segmentation performance and adds uncertainty quantification for explainability, an aspect important for reliability and user decision-making but understudied in the ML-for-disaster-management literature.
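The core fusion step is Dempster's rule of combination: masses from two sources are multiplied, intersecting hypotheses accumulate, and the mass assigned to empty intersections becomes the conflict K used here for uncertainty. A hedged sketch of that mechanism with a discount factor (the class labels, mass values, and discount rate are hypothetical; FADS-Fusion's actual post-processing of model outputs is not shown):

```python
# Sketch of Dempster's rule with Shafer discounting. Hypotheses are
# frozensets of class labels; masses are dicts {hypothesis: mass}.
from itertools import product

def discount(m: dict, alpha: float, frame: frozenset) -> dict:
    """Shafer discounting: scale masses by alpha, move the remainder
    to the full frame of discernment (total ignorance)."""
    d = {A: alpha * v for A, v in m.items()}
    d[frame] = d.get(frame, 0.0) + (1.0 - alpha)
    return d

def combine(m1: dict, m2: dict):
    """Dempster's rule of combination; returns fused masses and conflict K."""
    fused, conflict = {}, 0.0
    for (A, vA), (B, vB) in product(m1.items(), m2.items()):
        inter = A & B
        if inter:
            fused[inter] = fused.get(inter, 0.0) + vA * vB
        else:
            conflict += vA * vB          # evidence pointing at disjoint classes
    norm = 1.0 - conflict                # renormalize surviving mass
    return {A: v / norm for A, v in fused.items()}, conflict

FRAME = frozenset({"building", "road", "background"})
# Hypothetical per-pixel masses from two segmentation models:
m_a = {frozenset({"building"}): 0.7, FRAME: 0.3}
m_b = {frozenset({"road"}): 0.4, frozenset({"building"}): 0.4, FRAME: 0.2}
fused, K = combine(discount(m_a, 0.9, FRAME), m_b)
print(f"conflict K = {K:.3f}")
```

High K at a pixel marks disagreement between the models, which is exactly the quantity the abstract maps back onto the image to flag areas for human reassessment.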

27 February 2026

Side-by-side comparison of a pre-flooding event image (left) and a post-flooding event image (right). The images are from the Germany region of the dataset, and both have the post-flooding ground-truth labels added. Lines in blue are flooded roads, while lines in green are non-flooded roads; likewise, polygons in blue are flooded buildings, while those in green are not flooded.

Ku-band scatterometers lose extensive Sea Surface Vector Wind (SSVW) observations under extreme winds, heavy precipitation, or instrument anomalies, degrading forecast and assimilation skill. Traditional interpolation fails to reconstruct non-linear wind structures, whereas existing deep learning inpainting is hampered by scarce public datasets, high computational cost, and insufficient continuity modeling. We propose WMamba, an Attention-Structured State Space Duality (ASSD)-based framework that exploits wind continuity to encode global dependencies with O(N) complexity for accurate SSVW inpainting. A Grouped Multiscale Attention Block (GMAB) ensures accurate fine-scale wind detail reconstruction by mitigating local pixel degradation. We also introduce L-WMamba, a lightweight 0.36 M-parameter variant suitable for resource-limited devices. Moreover, we release the SSVW Inpainting Dataset (WID), comprising 123,841 high-wind HY-2B HSCAT samples (2018–2022), as an open benchmark. Experiments demonstrate that WMamba outperforms GRL, the previous state of the art, decreasing the RMSE for wind speed and direction by 11.4% and 6.3%, respectively, while achieving a 94.7% reduction in parameters. In particular, WMamba effectively inpaints wind details, as evidenced by the highest MS-SSIM and RAPSD scores. This framework and dataset establish a robust baseline for extreme-weather SSVW recovery.
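Inpainting metrics such as the RMSE figures above are meaningful only over the gap pixels, not the whole swath. A minimal sketch of that masked evaluation, with made-up toy wind speeds (this is an assumption about the evaluation setup, not code from the WID benchmark, and it handles speed only; direction would need circular differences):

```python
# Illustrative sketch: RMSE restricted to the missing-data mask, i.e. the
# cells the model actually had to reconstruct. Toy values, not real data.
import math

def masked_rmse(truth, pred, mask):
    """RMSE over cells where mask is True (the inpainted gaps)."""
    errs = [(t - p) ** 2 for t, p, m in zip(truth, pred, mask) if m]
    return math.sqrt(sum(errs) / len(errs))

truth = [12.0, 15.5, 18.2, 20.1]   # observed wind speed (m/s)
pred  = [12.4, 15.0, 19.0, 20.1]   # model reconstruction
mask  = [True, True, True, False]  # True = pixel was lost and inpainted
print(round(masked_rmse(truth, pred, mask), 3))
```

Scoring only the masked cells keeps untouched observations from diluting the error, which matters when, as in the figure below, the loss rate is a modest fraction of the swath.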

27 February 2026

HY-2B HSCAT observations on 7 March 2019, with regions of data loss marked in black; the loss rate exceeds 14.20%.


Reprints of Collections

Remote Sensing of Vegetation Function and Traits
Editors: Tawanda W. Gara, Cletah Shoko, Timothy Dube

Remote Sensing of Vegetation: Mapping, Trend Analysis, and Drivers of Change
Editors: Sadegh Jamali, Torbern Tagesson, Feng Tian, Meisam Amani, Per-Ola Olsson, Arsalan Ghorbanian

Remote Sens. - ISSN 2072-4292