Search Results (1,575)

Search Parameters:
Journal = Remote Sensing
Section = Engineering Remote Sensing

21 pages, 11722 KB  
Article
Simultaneous Hyperspectral and Radar Satellite Measurements of Soil Moisture for Hydrogeological Risk Monitoring
by Kalliopi Karadima, Andrea Massi, Alessandro Patacchini, Federica Verde, Claudia Masciulli, Carlo Esposito, Paolo Mazzanti, Valeria Giliberti and Michele Ortolani
Remote Sens. 2026, 18(3), 393; https://doi.org/10.3390/rs18030393 - 24 Jan 2026
Abstract
Emerging landslides and severe floods highlight the urgent need to analyse and support predictive models and early warning systems. Soil moisture is a crucial parameter, and it can now be determined from space with a resolution of a few tens of meters, potentially leading to continuous global monitoring of landslide risk. We address this issue by determining the volumetric water content (VWC) of a testbed in Southern Italy (bare soil with significant flood and landslide hazard) through the comparison of two different satellite observations on the same day. In the first observation (Sentinel-1 mission of the European Space Agency, C-band Synthetic Aperture Radar (SAR)), the back-scattered radar signal is used to determine the VWC from the dielectric constant in the microwave range, using a time-series approach to calibrate the algorithm. In the second observation (hyperspectral PRISMA mission of the Italian Space Agency), the short-wave infrared (SWIR) reflectance spectra are used to calculate the VWC from the spectral weight of a vibrational absorption line of liquid water (wavelengths 1800–1950 nm). As the main result, we obtained a Pearson’s correlation coefficient of 0.4 between the VWC values measured with the two techniques and a separate ground-truth confirmation of absolute VWC values in the range of 0.10–0.30 within ±0.05. This overlap validates that both SAR and hyperspectral data can be well calibrated and mapped with 30 m ground resolution, given the absence of artifacts or anomalies in this particular testbed (e.g., vegetation canopy or cloud presence). If hyperspectral data in the SWIR range become more broadly available in the future, our systematic procedure to synchronise these two technologies in both space and time can be further adapted to cross-validate global high-resolution soil moisture datasets. Ultimately, multi-mission data integration could lead to quasi-real-time hydrogeological risk monitoring from space.
(This article belongs to the Special Issue Remote Sensing in Geomatics (Second Edition))
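The headline figure here is a pixel-wise Pearson correlation between two co-registered 30 m VWC maps. A minimal sketch of that comparison, using toy arrays rather than the authors' Sentinel-1/PRISMA pipeline (names and data are illustrative):

```python
import numpy as np

def pearson_r(vwc_a, vwc_b, valid=None):
    """Pearson correlation between two co-registered VWC maps.

    vwc_a, vwc_b : 2-D arrays of volumetric water content (m^3/m^3)
    valid        : optional boolean mask (e.g. cloud- and vegetation-free pixels)
    """
    a = np.asarray(vwc_a, float).ravel()
    b = np.asarray(vwc_b, float).ravel()
    if valid is not None:
        m = np.asarray(valid, bool).ravel()
        a, b = a[m], b[m]
    a, b = a - a.mean(), b - b.mean()
    return float((a * b).sum() / np.sqrt((a * a).sum() * (b * b).sum()))

# Toy 30 m grids: a noisy copy of a map correlates positively with it.
rng = np.random.default_rng(0)
sar_vwc = rng.uniform(0.10, 0.30, size=(20, 20))        # stand-in for SAR-derived VWC
hsi_vwc = sar_vwc + rng.normal(0, 0.05, size=(20, 20))  # stand-in for SWIR-derived VWC
r = pearson_r(sar_vwc, hsi_vwc)
```

The optional `valid` mask mirrors the paper's requirement that pixels be free of vegetation and cloud artifacts before the two products are compared.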

19 pages, 5567 KB  
Article
Quantitative Analysis of Lightning Rod Impacts on the Radiation Pattern and Polarimetric Characteristics of S-Band Weather Radar
by Xiaopeng Wang, Jiazhi Yin, Fei Ye, Ting Yang, Yi Xie, Haifeng Yu and Dongming Hu
Remote Sens. 2026, 18(3), 392; https://doi.org/10.3390/rs18030392 - 23 Jan 2026
Abstract
Lightning rods, while essential for protecting weather radars from direct lightning strikes, act as persistent non-meteorological scatterers that can interfere with signal transmission and reception and thereby degrade detection accuracy and product quality. Existing studies have mainly focused on X-band and C-band systems, and robust, measurement-based quantitative assessments for S-band dual-polarization radars remain scarce. In this study, a controllable tilting lightning rod, a high-precision Far-field Antenna Measurement System (FAMS), and an S-band dual-polarization weather radar (SAD radar) are jointly employed to systematically quantify lightning-rod impacts on antenna electromagnetic parameters under different rod elevation angles and azimuth configurations. Typical precipitation events were analyzed to evaluate the influence of the lightning rods on dual-polarization parameters. The results show that the lightning rod substantially elevates sidelobe levels, with a maximum enhancement of 4.55 dB, while producing only limited changes in the antenna main-beam azimuth and beamwidth. Differential reflectivity (ZDR) is the most sensitive polarimetric parameter, exhibiting a persistent positive bias of about 0.24–0.25 dB in snowfall and mixed-phase precipitation, while no persistent azimuthal anomaly is evident during freezing rain; the co-polar correlation coefficient (ρhv) is only marginally affected. Collectively, these results provide quantitative, far-field evidence of lightning-rod interference in S-band dual-polarization radars and offer practical guidance for more reasonable lightning-rod placement and configuration, as well as useful references for ZDR-oriented polarimetric quality-control and correction strategies.
(This article belongs to the Section Engineering Remote Sensing)
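The ZDR bias the authors report is azimuthally localized behind the rod. One hedged way to surface such a persistent sector offset from radial data is a per-sector median anomaly; the sector width and toy data below are illustrative, not the paper's method:

```python
import numpy as np

def zdr_azimuth_bias(azimuth_deg, zdr_db, sector_width=10.0):
    """Median ZDR anomaly per azimuth sector relative to the all-azimuth median.

    A persistent positive offset in the sectors shadowed by a lightning rod
    would show up as sectors whose anomaly sits near the ~0.25 dB level
    reported above.
    """
    azimuth_deg = np.mod(np.asarray(azimuth_deg, float), 360.0)
    zdr_db = np.asarray(zdr_db, float)
    edges = np.arange(0.0, 360.0 + sector_width, sector_width)
    overall = np.median(zdr_db)
    idx = np.digitize(azimuth_deg, edges) - 1
    return np.array([np.median(zdr_db[idx == k]) - overall if np.any(idx == k)
                     else np.nan for k in range(len(edges) - 1)])

# Toy example: a 0.25 dB offset injected between 90 and 100 degrees.
az = np.arange(0.0, 360.0, 1.0)
zdr = np.zeros_like(az)
zdr[(az >= 90) & (az < 100)] += 0.25
anom = zdr_azimuth_bias(az, zdr)
```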

24 pages, 10940 KB  
Article
A Few-Shot Object Detection Framework for Remote Sensing Images Based on Adaptive Decision Boundary and Multi-Scale Feature Enhancement
by Lijiale Yang, Bangjie Li, Dongdong Guan and Deliang Xiang
Remote Sens. 2026, 18(3), 388; https://doi.org/10.3390/rs18030388 - 23 Jan 2026
Abstract
Given the high cost of acquiring large-scale annotated datasets, few-shot object detection (FSOD) has emerged as an increasingly important research direction. However, existing FSOD methods face two critical challenges in remote sensing images (RSIs): (1) features of small targets within remote sensing images are incompletely represented due to extremely small-scale and cluttered backgrounds, which weakens discriminability and leads to significant detection degradation; (2) unified classification boundaries fail to handle the distinct confidence distributions between well-sampled base classes and sparsely sampled novel classes, leading to ineffective knowledge transfer. To address these issues, we propose TS-FSOD, a Transfer-Stable FSOD framework with two key innovations. First, the proposed detector integrates a Feature Enhancement Module (FEM) leveraging hierarchical attention mechanisms to alleviate small target feature attenuation, and an Adaptive Fusion Unit (AFU) utilizing spatial-channel selection to strengthen target feature representations while mitigating background interference. Second, a Dynamic Temperature-scaling Learnable Classifier (DTLC) employs separate learnable temperature parameters for base and novel classes, combined with difficulty-aware weighting and dynamic adjustment, to adaptively calibrate decision boundaries for stable knowledge transfer. Experiments on DIOR and NWPU VHR-10 datasets show that TS-FSOD achieves competitive or superior performance compared to state-of-the-art methods, with improvements up to 4.30% mAP, particularly excelling in 3-shot and 5-shot scenarios.
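The DTLC idea, separate temperatures for base and novel classes, can be sketched in its simplest static form. This is only the core temperature-scaling mechanism with fixed values; the paper's module also learns the temperatures and adds difficulty-aware weighting:

```python
import numpy as np

def grouped_temperature_softmax(logits, is_novel, t_base=1.0, t_novel=0.5):
    """Softmax with separate temperature scaling for base and novel classes.

    logits   : (n_classes,) raw scores
    is_novel : boolean mask marking novel-class entries
    A lower temperature sharpens novel-class scores, one simple way to
    recalibrate decision boundaries between well-sampled and few-shot classes.
    """
    logits = np.asarray(logits, float).copy()
    is_novel = np.asarray(is_novel, bool)
    logits[~is_novel] /= t_base
    logits[is_novel] /= t_novel
    z = logits - logits.max()            # subtract max for numerical stability
    p = np.exp(z)
    return p / p.sum()

# A tied raw score: the lower novel-class temperature boosts the novel class.
p = grouped_temperature_softmax([2.0, 2.0, 1.0], is_novel=[False, True, False])
```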

24 pages, 5216 KB  
Article
Characterizing L-Band Backscatter in Inundated and Non-Inundated Rice Paddies for Water Management Monitoring
by Go Segami, Kei Oyoshi, Shinichi Sobue and Wataru Takeuchi
Remote Sens. 2026, 18(2), 370; https://doi.org/10.3390/rs18020370 - 22 Jan 2026
Abstract
Methane emissions from rice paddies account for over 11% of global atmospheric CH4, making water management practices such as Alternate Wetting and Drying (AWD) critical for climate change mitigation. Remote sensing offers an objective approach to monitoring AWD implementation and improving greenhouse gas estimation accuracy. This study investigates the backscattering mechanisms of L-band SAR for inundation/non-inundation classification in paddy fields using full-polarimetric ALOS-2 PALSAR-2 data. Field surveys and satellite observations were conducted in Ryugasaki (Ibaraki) and Sekikawa (Niigata), Japan, collecting 1360 ground samples during the 2024 growing season. Freeman–Durden decomposition was applied, and relationships with plant height and water level were analyzed. The results indicate that plant height strongly influences backscatter, with backscattering contributions from the surface decreasing beyond 70 cm, reducing classification accuracy. Random forest models can classify inundated and non-inundated fields with up to 88% accuracy when plant height is below 70 cm. However, this method requires knowledge of the plant height. Volume scattering proved robust to incidence angle and observation direction, suggesting its potential for phenological monitoring. These findings highlight the effectiveness of L-band SAR for water management monitoring and the need for integrating crop height estimation and regional adaptation to enhance classification performance.
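The practical consequence of the 70 cm finding is a height-gated classifier: only classify inundation where surface scattering still reaches the radar. A toy version with illustrative thresholds (the study itself trains random forests on Freeman–Durden components; nothing here is a calibrated value):

```python
import numpy as np

def classify_inundation(surface_db, plant_height_cm,
                        surface_thresh_db=-12.0, height_limit_cm=70.0):
    """Toy height-gated inundation labelling for paddy pixels.

    Returns 1 (inundated), 0 (non-inundated), or -1 (unreliable: plants taller
    than the ~70 cm limit beyond which surface scattering fades).
    Thresholds are illustrative, not calibrated values from the study.
    """
    surface_db = np.asarray(surface_db, float)
    plant_height_cm = np.asarray(plant_height_cm, float)
    label = np.where(surface_db < surface_thresh_db, 1, 0)  # smooth water: weak surface return
    return np.where(plant_height_cm > height_limit_cm, -1, label)

labels = classify_inundation(surface_db=[-18.0, -6.0, -20.0],
                             plant_height_cm=[40.0, 40.0, 90.0])
```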

24 pages, 3748 KB  
Article
Automated Recognition of Rock Mass Discontinuities on Vegetated High Slopes Using UAV Photogrammetry and an Improved Superpoint Transformer
by Peng Wan, Xianquan Han, Ruoming Zhai and Xiaoqing Gan
Remote Sens. 2026, 18(2), 357; https://doi.org/10.3390/rs18020357 - 21 Jan 2026
Abstract
Automated recognition of rock mass discontinuities in vegetated high-slope terrains remains a challenging task critical to geohazard assessment and slope stability analysis. This study presents an integrated framework combining close-range UAV photogrammetry with an Improved Superpoint Transformer (ISPT) for semantic segmentation and structural characterization. High-resolution UAV imagery was processed using an SfM–MVS photogrammetric workflow to generate dense point clouds, followed by a three-stage filtering workflow comprising cloth simulation filtering, volumetric density analysis, and vegetation discrimination based on the Visible-Band Difference Vegetation Index (VDVI). Feature augmentation using volumetric density and VDVI, together with connected-component segmentation, enhanced robustness under vegetation occlusion. Validation on four vegetated slopes in Buyun Mountain, China, achieved an overall classification accuracy of 89.5%, exceeding CANUPO (78.2%) and the baseline SPT (85.8%), with a 25-fold improvement in computational efficiency. In total, 4918 structural planes were extracted, and their orientations, dip angles, and trace lengths were automatically derived. The proposed ISPT-based framework provides an efficient and reliable approach for high-precision geotechnical characterization in complex, vegetation-covered rock mass environments.
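The VDVI used for vegetation discrimination has a standard closed form, (2G - R - B) / (2G + R + B), computable from RGB alone, which is what makes it usable on photogrammetric point clouds without multispectral data. A small sketch with toy pixel values:

```python
import numpy as np

def vdvi(r, g, b, eps=1e-12):
    """Visible-Band Difference Vegetation Index: (2G - R - B) / (2G + R + B).

    Green-dominated (vegetation) pixels give VDVI > 0, while grey rock tends
    toward VDVI <= 0; eps guards against division by zero on black pixels.
    """
    r, g, b = (np.asarray(x, float) for x in (r, g, b))
    return (2 * g - r - b) / (2 * g + r + b + eps)

veg = vdvi(60, 140, 50)     # green-dominated pixel
rock = vdvi(120, 105, 100)  # grey rock pixel
```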

25 pages, 10321 KB  
Article
Improving the Accuracy of Optical Satellite-Derived Bathymetry Through High Spatial, Spectral, and Temporal Resolutions
by Giovanni Andrea Nocera, Valeria Lo Presti, Attilio Sulli and Antonino Maltese
Remote Sens. 2026, 18(2), 270; https://doi.org/10.3390/rs18020270 - 14 Jan 2026
Abstract
Accurate nearshore bathymetry is essential for various marine applications, including navigation, resource management, and the protection of coastal ecosystems and the services they provide. This study presents an approach to enhance the accuracy of bathymetric estimates derived from high-spatial- and high-temporal-resolution optical satellite imagery. The proposed technique is particularly suited for multispectral sensors that acquire spectral bands sequentially rather than simultaneously. PlanetScope SuperDove imagery was employed and validated against bathymetric data collected using a multibeam echosounder. The study area is the Gulf of Sciacca, located along the southwestern coast of Sicily in the Mediterranean Sea. Here, multibeam data were acquired along transects that are subparallel to the shoreline, covering depths ranging from approximately 7 m to 50 m. Satellite imagery was radiometrically and atmospherically corrected and then processed using a simplified radiative transfer transformation to generate a continuous bathymetric map extending over the entire gulf. The resulting satellite-derived bathymetry achieved reliable accuracy between approximately 5 m and 25 m depth. Beyond these limits, excessive signal attenuation at greater depths and increased water turbidity close to shore introduced significant uncertainties. The innovative aspect of this approach lies in the combined use of spectral averaging among the most water-penetrating bands, temporal averaging across multiple acquisitions, and a liquid-facets noise reduction technique. The integration of these multi-layer inputs led to improved accuracy compared to using single-date or single-band imagery alone. Results show a strong correlation between the satellite-derived bathymetry and multibeam measurements over sandy substrates, with an estimated error of ±6% at a 95% confidence interval. Some discrepancies, however, were observed in the presence of mixed pixels (e.g., submerged vegetation or rocky substrates) or surface artifacts.
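The abstract does not spell out the simplified radiative transfer transformation; a commonly used stand-in for optically shallow water is the Stumpf-style log-ratio model, sketched here with toy reflectances and depths (the paper's averaging scheme is in the same spirit but not identical):

```python
import numpy as np

def band_ratio_depth(blue, green, m0, m1, n=1000.0):
    """Stumpf-style band-ratio bathymetry: depth = m1 * ln(n*B) / ln(n*G) + m0.

    blue, green : water-leaving reflectances of two water-penetrating bands;
    m0, m1 are calibrated against echosounder depths; n keeps the logs positive.
    """
    blue, green = np.asarray(blue, float), np.asarray(green, float)
    ratio = np.log(n * blue) / np.log(n * green)
    return m1 * ratio + m0

# Fit m0, m1 against a handful of toy multibeam depths, then predict back.
blue = np.array([0.030, 0.020, 0.012, 0.008])
green = np.array([0.040, 0.032, 0.025, 0.021])
depth = np.array([8.0, 15.0, 24.0, 30.0])       # toy echosounder truth (m)
ratio = np.log(1000 * blue) / np.log(1000 * green)
m1, m0 = np.polyfit(ratio, depth, 1)            # linear calibration
pred = band_ratio_depth(blue, green, m0, m1)
```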

19 pages, 7228 KB  
Article
Trace Modelling: A Quantitative Approach to the Interpretation of Ground-Penetrating Radar Profiles
by Antonio Schettino, Annalisa Ghezzi, Luca Tassi, Ilaria Catapano and Raffaele Persico
Remote Sens. 2026, 18(2), 208; https://doi.org/10.3390/rs18020208 - 8 Jan 2026
Abstract
The analysis of ground-penetrating radar data generally relies on the visual identification of structures on selected profiles and their interpretation in terms of buried features. In simple cases, inverse modelling of the acquired data set can facilitate interpretation and reduce subjectivity. These methods suffer from severe restrictions due to antenna resolution limits, which prevent the identification of tiny structures, particularly in forensic, stratigraphic, and engineering applications. Here, we describe a technique to obtain a high-resolution characterization of the subsurface, based on the forward modelling of individual traces (A-scans) of selected radar profiles. The model traces are built by superposition of Ricker wavelets with different polarities, amplitudes, and arrival times and are used to create reflectivity diagrams that plot reflection amplitudes and polarities versus depth. A thin bed is defined as a layer of higher or lower permittivity relative to the surrounding material, such that the top and bottom reflections are subject to constructive interference, determining the formation of an anomalous peak in the trace (tuning effect). The proposed method allows the detection of ultra-thin layers, well beyond the Rayleigh vertical resolution of GPR antennas. This approach requires a preliminary estimation of the instrumental uncertainty of common monostatic antennas and takes into account the frequency-dependent attenuation, which causes a spectral shift of the dominant frequency acquired by the receiver antenna. Such a quantitative approach to analyzing radar data can be used in several fields, notably stratigraphy, forensics, paleontology, civil engineering, and heritage protection.
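The trace-modelling core, superposing Ricker wavelets with chosen polarities, amplitudes, and arrival times, is easy to sketch. The thin-bed example below places two opposite-polarity reflections 1 ns apart at 500 MHz, below the Rayleigh resolution, so their interference lifts the peak above the single-reflection amplitude (the tuning effect); all numbers are illustrative:

```python
import numpy as np

def ricker(t, f_dom):
    """Ricker wavelet with dominant frequency f_dom (Hz), unit peak at t = 0."""
    x = (np.pi * f_dom * t) ** 2
    return (1.0 - 2.0 * x) * np.exp(-x)

def model_trace(t, events, f_dom):
    """Superpose Ricker wavelets: events = [(arrival_time_s, amplitude), ...].

    Negative amplitudes encode reversed polarity (reflection off a
    lower-permittivity contrast), as in the reflectivity diagrams above.
    """
    trace = np.zeros_like(t)
    for t0, amp in events:
        trace += amp * ricker(t - t0, f_dom)
    return trace

t = np.arange(0.0, 40e-9, 0.1e-9)             # 40 ns window, 0.1 ns step
thin_bed = [(15e-9, 1.0), (16e-9, -1.0)]      # top/bottom reflections 1 ns apart
trace = model_trace(t, thin_bed, f_dom=500e6) # 500 MHz antenna
```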

21 pages, 8693 KB  
Article
Integration of InSAR and GNSS Data: Improved Precision and Spatial Resolution of 3D Deformation
by Xiaoyong Wu, Yun Shao, Zimeng Yang, Lihua Lan, Xiaolin Bian and Ming Liu
Remote Sens. 2026, 18(1), 142; https://doi.org/10.3390/rs18010142 - 1 Jan 2026
Abstract
High-precision and high-resolution surface deformation provide crucial constraints for studying the kinematic characteristics and dynamic mechanisms of crustal movement. Considering the limitations of existing geodetic observations, we used Sentinel-1 SAR images and accurate GNSS velocity to obtain a high-resolution three-dimensional (3D) surface velocity map across the Laohushan segment and the 1920 Haiyuan earthquake rupture zone of the Haiyuan Fault on the northeastern Tibetan Plateau. We tied the InSAR LOS (Line of Sight) velocity to the stable Eurasian reference frame adopted by GNSS. Using Kriging interpolation constrained by GNSS north–south components, we decomposed the ascending and descending InSAR velocities into east–west and vertical components to derive a high-resolution 3D deformation. We found that a sharp velocity gradient extending ~45 km along the strike of the Laohushan segment, with a differential movement of ~3 mm/a across the fault, manifests in the east–west velocity component, suggesting that shallow creep has propagated to the surface. However, the east–west velocity component did not exhibit an abrupt discontinuity in the rupture zone of the Haiyuan earthquake. Subsidence caused by anthropogenic and hydrological processes in the region, such as groundwater extraction and coal mining, exhibited distinct distribution characteristics in the vertical velocity component. Our study provides valuable insights into the crustal movement in this region.
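The decomposition step reduces, per pixel, to a 2x2 linear solve once the GNSS-interpolated north component is removed from the two LOS observations. A sketch with illustrative LOS coefficients (the exact values and sign conventions depend on each track's incidence angle and heading, and vary between processors):

```python
import numpy as np

def decompose_3d(v_asc, v_dsc, v_north, c_asc, c_dsc):
    """Solve per-pixel east and up velocities from two LOS observations.

    v_asc, v_dsc : ascending / descending LOS velocities (mm/a)
    v_north      : north velocity interpolated from GNSS (mm/a)
    c_asc, c_dsc : (e, n, u) LOS unit-vector coefficients for each track
    """
    A = np.array([[c_asc[0], c_asc[2]],
                  [c_dsc[0], c_dsc[2]]])
    b = np.array([v_asc - c_asc[1] * v_north,
                  v_dsc - c_dsc[1] * v_north])
    v_east, v_up = np.linalg.solve(A, b)
    return v_east, v_up

# Illustrative geometry (incidence ~39 deg): forward-model, then recover.
c_asc = (-0.61, -0.11, 0.78)   # ascending (e, n, u) coefficients
c_dsc = ( 0.61, -0.11, 0.78)   # descending
true_e, true_n, true_u = 3.0, 1.0, -2.0
v_asc = c_asc[0] * true_e + c_asc[1] * true_n + c_asc[2] * true_u
v_dsc = c_dsc[0] * true_e + c_dsc[1] * true_n + c_dsc[2] * true_u
v_e, v_u = decompose_3d(v_asc, v_dsc, true_n, c_asc, c_dsc)
```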

36 pages, 35595 KB  
Article
Robust ISAR Autofocus for Maneuvering Ships Using Centerline-Driven Adaptive Partitioning and Resampling
by Wenao Ruan, Chang Liu and Dahu Wang
Remote Sens. 2026, 18(1), 105; https://doi.org/10.3390/rs18010105 - 27 Dec 2025
Abstract
Synthetic aperture radar (SAR) is a critical enabling technology for maritime surveillance. However, maneuvering ships often appear defocused in SAR images, posing significant challenges for subsequent ship detection and recognition. To address this problem, this study proposes an improved iteration phase gradient resampling autofocus (IIPGRA) method. First, we extract the defocused ships from SAR images, followed by azimuth decompression and translational motion compensation. Subsequently, a centerline-driven adaptive azimuth partitioning strategy is proposed: the geometric centerline of the vessel is extracted from coarsely focused images using an enhanced RANSAC algorithm, and the target is partitioned into upper and lower sub-blocks along the azimuth direction to maximize the separation of rotational centers between sub-blocks, establishing a foundation for the accurate estimation of spatially variant phase errors. Next, phase gradient autofocus (PGA) is employed to estimate the phase errors of each sub-block and compute their differential. Then, resampling the original echoes based on this differential phase error linearizes non-uniform rotational motion. Furthermore, this study introduces the Rotational Uniformity Coefficient (β) as the convergence criterion. This coefficient can stably and reliably quantify the linearity of the rotational phase, thereby ensuring robust termination of the iterative process. Simulation and real airborne SAR data validate the effectiveness of the proposed algorithm.
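The PGA building block the method iterates on can be sketched in 1-D: center the dominant scatterer of each range bin, accumulate a phase-gradient estimate across bins, integrate, and correct. This is the textbook kernel only, with none of the paper's centerline partitioning or echo resampling:

```python
import numpy as np

def pga(phase_history, n_iter=10):
    """Minimal 1-D phase gradient autofocus kernel (textbook form).

    phase_history : (n_range, n_az) complex data whose azimuth axis shares one
    unknown phase error. Returns corrected data and the estimated error (rad).
    """
    data = np.asarray(phase_history, complex).copy()
    n_az = data.shape[1]
    phi_total = np.zeros(n_az)
    for _ in range(n_iter):
        img = np.fft.ifft(data, axis=1)
        # Circularly shift the brightest scatterer of each range bin to index 0,
        # removing its linear phase so only the common error remains.
        for row in range(img.shape[0]):
            img[row] = np.roll(img[row], -int(np.argmax(np.abs(img[row]))))
        g = np.fft.fft(img, axis=1)
        dg = np.diff(g, axis=1)
        # Phase-gradient estimate accumulated over range bins, then integrated.
        num = np.sum(np.imag(np.conj(g[:, :-1]) * dg), axis=0)
        den = np.sum(np.abs(g[:, :-1]) ** 2, axis=0)
        phi = np.concatenate(([0.0], np.cumsum(num / den)))
        phi -= phi.mean()
        data *= np.exp(-1j * phi)
        phi_total += phi
    return data, phi_total

# Point targets blurred by a quadratic phase error, then refocused.
n_az = 128
k = np.arange(n_az)
err = 8.0 * ((k - 64) / 64.0) ** 2              # up to 8 rad of defocus
rng = np.random.default_rng(1)
targets = np.zeros((4, n_az), complex)
for row in range(4):
    targets[row, rng.integers(20, 108)] = 1.0   # one scatterer per range bin
blurred = np.fft.fft(targets, axis=1) * np.exp(1j * err)
focused, phi_hat = pga(blurred)
peak_before = np.abs(np.fft.ifft(blurred, axis=1)).max()
peak_after = np.abs(np.fft.ifft(focused, axis=1)).max()
```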

20 pages, 3382 KB  
Article
CFFCNet: Center-Guided Feature Fusion Completion for Accurate Vehicle Localization and Dimension Estimation from Lidar Point Clouds
by Xiaoyi Chen, Xiao Feng, Shichen Zhang, Wen Xiao, Miao Tang and Kun Sun
Remote Sens. 2026, 18(1), 39; https://doi.org/10.3390/rs18010039 - 23 Dec 2025
Abstract
Accurate scene understanding from 3D point cloud data is fundamental to intelligent transportation systems and geospatial digital twins. However, point clouds acquired from lidar sensors in urban environments suffer from incompleteness due to occlusions and limited sensor resolution, presenting significant challenges for precise object localization and geometric reconstruction—critical requirements for traffic safety monitoring and autonomous navigation. To address these point cloud processing challenges, we propose a Center-guided Feature Fusion Completion Network (CFFCNet) that enhances vehicle representation through geometry-aware point cloud completion. The network incorporates a Branch-assisted Center Perception (BCP) module that learns to predict geometric centers while extracting multi-scale spatial features, generating initial coarse completions that account for the misalignment between detection centers and true geometric centers in real-world data. Subsequently, a Multi-scale Feature Blending Upsampling (MFBU) module progressively refines these completions by fusing hierarchical features across multiple stages, producing accurate and complete vehicle point clouds. Comprehensive evaluations on the KITTI dataset demonstrate substantial improvements in geometric accuracy, with localization mean absolute error (MAE) reduced to 0.0928 m and length MAE to 0.085 m. The method’s generalization capability is further validated on a real-world roadside lidar dataset (CUG-Roadside) without fine-tuning, achieving localization MAE of 0.051 m and length MAE of 0.051 m. These results demonstrate the effectiveness of geometry-guided completion for point cloud scene understanding in infrastructure-based traffic monitoring applications, contributing to the development of robust 3D perception systems for urban geospatial environments.
(This article belongs to the Special Issue Point Cloud Data Analysis and Applications)
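The reported figures are mean absolute errors over vehicle centers and dimensions. For concreteness, here is how localization MAE (mean Euclidean center error) and length MAE are typically computed; the values are toy numbers, not from the paper:

```python
import numpy as np

def mae(pred, truth):
    """Mean absolute error between predicted and reference values."""
    return float(np.mean(np.abs(np.asarray(pred, float) - np.asarray(truth, float))))

# Toy vehicle center estimates (m) against ground truth: the per-vehicle
# position error is the Euclidean distance, and localization MAE is its mean.
pred_centers = np.array([[10.05, 4.98], [20.10, 7.95]])
true_centers = np.array([[10.00, 5.00], [20.00, 8.00]])
loc_mae = mae(np.linalg.norm(pred_centers - true_centers, axis=1), 0.0)
length_mae = mae([4.52, 4.31], [4.60, 4.25])   # toy length estimates (m)
```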

27 pages, 3305 KB  
Article
SatViT-Seg: A Transformer-Only Lightweight Semantic Segmentation Model for Real-Time Land Cover Mapping of High-Resolution Remote Sensing Imagery on Satellites
by Daoyu Shu, Zhan Zhang, Fang Wan, Wang Ru, Bingnan Yang, Yan Zhang, Jianzhong Lu and Xiaoling Chen
Remote Sens. 2026, 18(1), 1; https://doi.org/10.3390/rs18010001 - 19 Dec 2025
Abstract
The demand for real-time land cover mapping from high-resolution remote sensing (HR-RS) imagery motivates lightweight segmentation models running directly on satellites. By processing on-board and transmitting only fine-grained semantic products instead of massive raw imagery, these models provide timely support for disaster response, environmental monitoring, and precision agriculture. Many recent methods combine convolutional neural networks (CNNs) with Transformers to balance local and global feature modeling, with convolutions as explicit information aggregation modules. Such heterogeneous hybrids may be unnecessary for lightweight models if similar aggregation can be achieved homogeneously, and operator inconsistency complicates optimization and hinders deployment on resource-constrained satellites. Meanwhile, lightweight Transformer components in these architectures often adopt aggressive channel compression and shallow contextual interaction to meet compute budgets, impairing boundary delineation and recognition of small or rare classes. To address this, we propose SatViT-Seg, a lightweight semantic segmentation model with a pure Vision Transformer (ViT) backbone. Unlike CNN-Transformer hybrids, SatViT-Seg adopts a homogeneous two-module design: a Local-Global Aggregation and Distribution (LGAD) module that uses window self-attention for local modeling and dynamically pooled global tokens with linear attention for long-range interaction, and a Bi-dimensional Attentive Feed-Forward Network (FFN) that enhances representation learning by modulating channel and spatial attention. This unified design overcomes common lightweight ViT issues such as channel compression and weak spatial correlation modeling. SatViT-Seg is implemented and evaluated in LuoJiaNET and PyTorch; comparative experiments with existing methods are run in PyTorch with unified training and data preprocessing for fairness, while the LuoJiaNET implementation highlights deployment-oriented efficiency on a graph-compiled runtime. Compared with the strongest baseline, SatViT-Seg improves mIoU by up to 1.81% while maintaining the lowest FLOPs among all methods. These results indicate that homogeneous Transformers offer strong potential for resource-constrained, on-board real-time land cover mapping in satellite missions.
(This article belongs to the Special Issue Geospatial Artificial Intelligence (GeoAI) in Remote Sensing)
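The local half of an LGAD-style block is window self-attention: each query attends only within its own window, which is what keeps the cost linear in sequence length. A single-head numpy sketch with identity projections (a far cry from the full module, purely to show the windowing):

```python
import numpy as np

def softmax(x, axis=-1):
    z = x - x.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def window_self_attention(tokens, window):
    """Single-head self-attention applied independently per window.

    tokens : (n, d) sequence split into contiguous windows of length `window`;
    queries attend only within their own window (identity Q/K/V projections
    for brevity, so this is the windowing pattern, not a trained layer).
    """
    n, d = tokens.shape
    out = np.empty_like(tokens)
    for start in range(0, n, window):
        w = tokens[start:start + window]
        attn = softmax(w @ w.T / np.sqrt(d))   # (window, window) scores
        out[start:start + window] = attn @ w
    return out

rng = np.random.default_rng(0)
x = rng.normal(size=(8, 4))
y = window_self_attention(x, window=4)
```

Because windows are independent, changing tokens in one window leaves the outputs of the other windows untouched; the paper's pooled global tokens exist precisely to restore cross-window interaction.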

22 pages, 10061 KB  
Article
Precipitable Water Vapor from PPP Estimation with Multi-Analysis-Center Real-Time Products
by Wei Li, Heng Gong, Bo Deng, Liangchun Hua, Fei Ye, Hongliang Lian and Lingzhi Cao
Remote Sens. 2025, 17(24), 4055; https://doi.org/10.3390/rs17244055 - 18 Dec 2025
Abstract
Precipitable water vapor (PWV) is an important component of atmospheric spatial parameters and plays a vital role in meteorological studies. In this study, PWV retrieval by the real-time precise point positioning (PPP) technique is validated using global navigation satellite system (GNSS) observations and four real-time products from different analysis centers: Centre National d’Etudes Spatiales (CNES), International GNSS Service (IGS), Japan Aerospace Exploration Agency (JAXA), and Wuhan University (WHU). To comparatively analyze the performance of each scenario, single-system (GPS/Galileo/BDS3) and multi-system (GPS + Galileo + BDS) PPP techniques are applied for zenith tropospheric delay (ZTD) and PWV retrieval. Then, the ZTD and PWV are evaluated by comparison with the IGS final ZTD product, the European Centre for Medium-Range Weather Forecasts (ECMWF) Reanalysis v5 (ERA5) data, and radiosonde observations provided by the University of Wyoming. Experimental results demonstrate that the root mean square (RMS) errors of ZTD differences from multi-system solutions are below 11 mm with respect to the four product series, and the RMS errors of PWV differences are below 3.5 mm. As for single-system solutions, the IGS real-time products lead to the worst accuracy compared with the other products. Except for the scenario of BDS3 observations with IGS real-time products, the RMS errors of ZTD differences from the GPS-only and Galileo-only solutions are all less than 15 mm compared to the four product series, and the RMS errors of PWV differences are under 5 mm, which meets the accuracy requirement for GNSS atmosphere sounding.
(This article belongs to the Special Issue BDS/GNSS for Earth Observation (Third Edition))
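The ZTD-to-PWV conversion behind these comparisons is standard: subtract a Saastamoinen zenith hydrostatic delay, then scale the wet remainder by the Bevis conversion factor. A sketch with the commonly used constants (the paper's exact processing settings are not given in the abstract):

```python
import math

def zhd_saastamoinen(p_hpa, lat_deg, h_m):
    """Zenith hydrostatic delay (m), Saastamoinen model."""
    return 0.0022768 * p_hpa / (
        1.0 - 0.00266 * math.cos(2.0 * math.radians(lat_deg)) - 0.00000028 * h_m)

def pwv_from_ztd(ztd_m, p_hpa, tm_k, lat_deg, h_m):
    """Convert zenith total delay to precipitable water vapor (both in m).

    PWV = Pi * (ZTD - ZHD), with Pi = 1e6 / (rho_w * Rv * (k3/Tm + k2')),
    constants in SI units; tm_k is the weighted mean temperature.
    """
    rho_w, rv = 1000.0, 461.5      # kg/m^3, J/(kg K)
    k3, k2p = 3.739e3, 0.221       # K^2/Pa, K/Pa
    zwd = ztd_m - zhd_saastamoinen(p_hpa, lat_deg, h_m)
    pi = 1.0e6 / (rho_w * rv * (k3 / tm_k + k2p))
    return pi * zwd

# A ZTD of 2.45 m at sea level with Tm = 270 K gives a PWV of roughly 2 cm.
pwv_m = pwv_from_ztd(ztd_m=2.45, p_hpa=1013.0, tm_k=270.0, lat_deg=45.0, h_m=0.0)
```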

25 pages, 22959 KB  
Article
A Semi-Automatic Framework for Dry Beach Extraction in Tailings Ponds Using Photogrammetry and Deep Learning
by Bei Cao, Yinsheng Wang, Yani Li, Xudong Zhu, Zicheng Yang, Xinlong Liu and Guangyin Lu
Remote Sens. 2025, 17(24), 4022; https://doi.org/10.3390/rs17244022 - 13 Dec 2025
Abstract
The spatial characteristics of the dry beach in tailings ponds are critical indicators for the safety assessment of tailings dams. This study presents a method for dry beach extraction that combines deep learning-based semantic segmentation with 3D reconstruction, overcoming the limitations of 2D methods in spatial analysis. The workflow includes four steps: (1) High-resolution 3D point clouds are reconstructed from UAV images, and the projection matrix of each image is derived to link 2D pixels with 3D points. (2) AlexNet and GoogLeNet are employed to extract image features and automatically select images containing the dry beach boundary. (3) A DeepLabv3+ network is trained on manually labeled samples to perform semantic segmentation of the dry beach, with a lightweight incremental training strategy for enhanced adaptability. (4) Boundary pixels are detected and back-projected into 3D space to generate consistent point cloud boundaries. The method was validated on two-phase UAV datasets from a tailings pond in Yunnan Province, China. In phase I, the model achieved high segmentation performance, with a mean Accuracy and IoU of approximately 0.95 and a BF of 0.8267. When applied to phase II without retraining, the model maintained stable performance on dam boundaries, while slight performance degradation was observed on hillside and water boundaries. The 3D back-projection converted 2D boundary pixels into 3D coordinates, enabling the extraction of dry beach point clouds and supporting reliable dry beach length monitoring and deposition morphology analysis.
(This article belongs to the Section Engineering Remote Sensing)
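Step (4), the 2D-to-3D back-projection, can be sketched in a few lines: given the 3×4 projection matrix derived per image in step (1), every reconstructed 3D point is projected into the image, and points that land on a detected boundary pixel are kept. This is a minimal NumPy sketch under those assumptions; the function and variable names are hypothetical, not taken from the paper.

```python
import numpy as np

def backproject_boundary(points_3d, P, boundary_mask):
    """Select 3D points whose image projections fall on boundary pixels.

    points_3d     : (N, 3) reconstructed point cloud (world coordinates)
    P             : (3, 4) camera projection matrix from the reconstruction
    boundary_mask : (H, W) boolean mask of detected boundary pixels
    """
    h, w = boundary_mask.shape
    # Homogeneous coordinates and pinhole projection: x = P @ [X, Y, Z, 1]^T
    homo = np.hstack([points_3d, np.ones((len(points_3d), 1))])
    proj = homo @ P.T                      # (N, 3)
    uv = proj[:, :2] / proj[:, 2:3]        # perspective divide -> pixel coords
    u = np.round(uv[:, 0]).astype(int)
    v = np.round(uv[:, 1]).astype(int)
    # Keep points that project inside the image onto a boundary pixel
    inside = (u >= 0) & (u < w) & (v >= 0) & (v < h)
    keep = np.zeros(len(points_3d), dtype=bool)
    keep[inside] = boundary_mask[v[inside], u[inside]]
    return points_3d[keep]
```

The sketch uses nearest-pixel rounding and filters projections rather than casting rays; the paper's pipeline links pixels and points through the per-image projection matrix, so any equivalent correspondence test would serve here.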
25 pages, 4675 KB  
Article
DLiteNet: A Dual-Branch Lightweight Framework for Efficient and Precise Building Extraction from Visible and SAR Imagery
by Zhe Zhao, Boya Zhao, Ruitong Du, Yuanfeng Wu, Jiaen Chen and Yuchen Zheng
Remote Sens. 2025, 17(24), 3939; https://doi.org/10.3390/rs17243939 - 5 Dec 2025
Abstract
High-precision and efficient building extraction by fusing visible and synthetic aperture radar (SAR) imagery is critical for applications such as smart cities, disaster response, and UAV navigation. However, existing approaches often rely on complex multimodal feature extraction and deep fusion mechanisms, resulting in over-parameterized models and excessive computation, which makes it challenging to balance accuracy and efficiency. To address this issue, we propose a dual-branch lightweight architecture, DLiteNet, which functionally decouples the multimodal building extraction task into two sub-tasks: global context modeling and spatial detail capturing. Accordingly, we design a lightweight context branch and spatial branch to achieve an optimal trade-off between semantic accuracy and computational efficiency. The context branch jointly processes visible and SAR images, leveraging our proposed Multi-scale Context Attention Module (MCAM) to adaptively fuse multimodal contextual information, followed by a lightweight Short-Term Dense Atrous Concatenate (STDAC) module for extracting high-level semantics. The spatial branch focuses on capturing textures and edge structures from visible imagery and employs a Context-Detail Aggregation Module (CDAM) to fuse contextual priors and refine building contours. Experiments on the MSAW and DFC23 Track2 datasets demonstrate that DLiteNet achieves strong performance with only 5.6 M parameters and extremely low computational costs (51.7/5.8 GFLOPs), significantly outperforming state-of-the-art models such as CMGFNet (85.2 M, 490.9/150.3 GFLOPs) and MCANet (71.2 M, 874.5/375.9 GFLOPs). On the MSAW dataset, DLiteNet achieves the highest accuracy (83.6% IoU, 91.1% F1-score), exceeding the best MCANet baseline by 1.0% IoU and 0.6% F1-score. Furthermore, deployment tests on the Jetson Orin NX edge device show that DLiteNet achieves a low inference latency of 14.97 ms per frame under FP32 precision, highlighting its real-time capability and deployment potential in edge computing scenarios. Full article
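The IoU and F1-score figures reported above are standard pixel-wise segmentation metrics. A minimal sketch of how they are typically computed from binary building masks (`iou_f1` is an illustrative helper, not DLiteNet code):

```python
import numpy as np

def iou_f1(pred, gt):
    """Pixel-wise IoU and F1 for binary segmentation masks (illustrative)."""
    pred = pred.astype(bool)
    gt = gt.astype(bool)
    tp = np.logical_and(pred, gt).sum()    # correctly predicted building pixels
    fp = np.logical_and(pred, ~gt).sum()   # false alarms
    fn = np.logical_and(~pred, gt).sum()   # missed building pixels
    iou = tp / (tp + fp + fn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return iou, f1
```

For binary masks the two metrics are monotonically related (F1 = 2·IoU / (1 + IoU)), which is why papers commonly report both alongside each other.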
28 pages, 3284 KB  
Article
Diffusion-Enhanced Underwater Debris Detection via Improved YOLOv12n Framework
by Jianghan Tao, Fan Zhao, Yijia Chen, Yongying Liu, Feng Xue, Jian Song, Hao Wu, Jundong Chen, Peiran Li and Nan Xu
Remote Sens. 2025, 17(23), 3910; https://doi.org/10.3390/rs17233910 - 2 Dec 2025
Abstract
Detecting underwater debris is important for monitoring the marine environment but remains challenging due to poor image quality, visual noise, object occlusions, and diverse debris appearances in underwater scenes. This study proposes UDD-YOLO, a novel detection framework that, for the first time, applies a diffusion-based model to underwater image enhancement, introducing a new paradigm for improving perceptual quality in marine vision tasks. Specifically, the proposed framework integrates three key components: (1) a Cold Diffusion module that acts as a pre-processing stage to restore image clarity and contrast by reversing deterministic degradation such as blur and occlusion—without injecting stochastic noise—making it the first diffusion-based enhancement applied to underwater object detection; (2) an AMC2f feature extraction module that combines multi-scale separable convolutions and learnable normalization to improve representation for targets with complex morphology and scale variation; and (3) a Unified-IoU (UIoU) loss function designed to dynamically balance localization learning between high- and low-quality predictions, thereby reducing errors caused by occlusion or boundary ambiguity. Extensive experiments are conducted on the public underwater plastic pollution detection dataset, which includes 15 categories of underwater debris. The proposed method achieves a mAP50 of 81.8%, with 87.3% precision and 75.1% recall, surpassing eleven advanced detection models such as Faster R-CNN, RT-DETR-L, YOLOv8n, and YOLOv12n. Ablation studies verify the contribution of each module. These findings show that diffusion-driven enhancement, when coupled with feature extraction and localization optimization, offers a promising direction for accurate, robust underwater perception, opening new opportunities for environmental monitoring and autonomous marine systems. Full article
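The UIoU weighting scheme is not detailed in the abstract, but like other IoU-family losses it builds on the plain intersection-over-union between predicted and target boxes. A minimal pure-Python sketch of that underlying quantity and the basic 1 − IoU loss such variants generalize (hypothetical helper names, assuming boxes given as `(x1, y1, x2, y2)`):

```python
def box_iou(a, b):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    # Intersection rectangle (empty if the boxes do not overlap)
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    iw, ih = max(0.0, ix2 - ix1), max(0.0, iy2 - iy1)
    inter = iw * ih
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def iou_loss(pred, target):
    """Basic IoU regression loss; IoU-family losses add terms on top of this."""
    return 1.0 - box_iou(pred, target)
```

Variants such as GIoU, CIoU, or the UIoU used here typically add penalty or weighting terms to this base loss to handle non-overlapping or low-quality predictions.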