Search Results (972)

Search Parameters:
Keywords = Operational Land Imager

19 pages, 51503 KiB  
Article
LSANet: Lightweight Super Resolution via Large Separable Kernel Attention for Edge Remote Sensing
by Tingting Yong and Xiaofang Liu
Appl. Sci. 2025, 15(13), 7497; https://doi.org/10.3390/app15137497 - 3 Jul 2025
Viewed by 286
Abstract
In recent years, remote sensing imagery has become indispensable for applications such as environmental monitoring, land use classification, and urban planning. However, the physical constraints of satellite imaging systems frequently limit the spatial resolution of these images, impeding the extraction of fine-grained information critical to downstream tasks. Super-resolution (SR) techniques thus emerge as a pivotal solution to enhance the spatial fidelity of remote sensing images via computational approaches. While deep learning-based SR methods have advanced reconstruction accuracy, their high computational complexity and large parameter counts restrict practical deployment in real-world remote sensing scenarios, particularly on edge or low-power devices. To address this gap, we propose LSANet, a lightweight SR network customized for remote sensing imagery. The core of LSANet is the large separable kernel attention mechanism, which efficiently expands the receptive field while retaining low computational overhead. By integrating this mechanism into an enhanced residual feature distillation module, the network captures long-range dependencies more effectively than traditional shallow residual blocks. Additionally, a residual feature enhancement module, leveraging contrast-aware channel attention and hierarchical skip connections, strengthens the extraction and integration of multi-level discriminative features. This design preserves fine textures and ensures smooth information propagation across the network. Extensive experiments on public datasets such as UC Merced Land Use and NWPU-RESISC45 demonstrate LSANet’s competitive or superior performance compared to state-of-the-art methods. On the UC Merced Land Use dataset, LSANet achieves a PSNR of 34.33 dB, outperforming the best baseline, HSENet (34.23 dB), by 0.1 dB. For SSIM, LSANet reaches 0.9328, closely matching HSENet’s 0.9332 while remaining well balanced across metrics. On the NWPU-RESISC45 dataset, LSANet attains a PSNR of 35.02, marking a significant improvement over prior methods, and an SSIM of 0.9305, maintaining strong competitiveness. These results, combined with the notable reduction in parameters and floating-point operations, highlight the superiority of LSANet in remote sensing image super-resolution tasks. Full article
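A minimal sketch of how a large separable kernel attention block of the kind described here can be put together, assuming PyTorch; the kernel size, dilation, and class name below are illustrative choices, not the authors' exact LSANet configuration.

```python
import torch
import torch.nn as nn

class LargeSeparableKernelAttention(nn.Module):
    """Sketch: a large depthwise convolution decomposed into 1-D horizontal and
    vertical depthwise convolutions (plus dilated counterparts); the result is
    used as an attention map that gates the input features."""
    def __init__(self, channels: int, kernel: int = 23, dilation: int = 3):
        super().__init__()
        local = 2 * dilation - 1                  # short 1-D kernel length
        remote = kernel // dilation               # dilated 1-D kernel length
        self.conv_h = nn.Conv2d(channels, channels, (1, local),
                                padding=(0, local // 2), groups=channels)
        self.conv_v = nn.Conv2d(channels, channels, (local, 1),
                                padding=(local // 2, 0), groups=channels)
        self.conv_h_d = nn.Conv2d(channels, channels, (1, remote), dilation=dilation,
                                  padding=(0, (remote // 2) * dilation), groups=channels)
        self.conv_v_d = nn.Conv2d(channels, channels, (remote, 1), dilation=dilation,
                                  padding=((remote // 2) * dilation, 0), groups=channels)
        self.proj = nn.Conv2d(channels, channels, 1)   # pointwise channel mixing

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        attn = self.conv_v(self.conv_h(x))
        attn = self.conv_v_d(self.conv_h_d(attn))
        attn = self.proj(attn)
        return x * attn                           # attention map gates the input
```

The separable decomposition is what keeps the parameter count and FLOPs low while still covering a large receptive field, which is the trade-off the abstract emphasizes.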

18 pages, 3896 KiB  
Article
The Contribution of Meteosat Third Generation–Flexible Combined Imager (MTG-FCI) Observations to the Monitoring of Thermal Volcanic Activity: The Mount Etna (Italy) February–March 2025 Eruption
by Carolina Filizzola, Giuseppe Mazzeo, Francesco Marchese, Carla Pietrapertosa and Nicola Pergola
Remote Sens. 2025, 17(12), 2102; https://doi.org/10.3390/rs17122102 - 19 Jun 2025
Viewed by 476
Abstract
The Flexible Combined Imager (FCI) aboard the Meteosat Third Generation (MTG-I) geostationary satellite, launched in December 2022 and operational since September 2024, provides shortwave infrared (SWIR), medium infrared (MIR) and thermal infrared (TIR) data with an image refresh time of 10 min and a spatial resolution ranging from 500 m in the high-resolution (HR) mode to 1–2 km in the normal-resolution (NR) mode, making it a very promising instrument for monitoring thermal volcanic activity from space, including in operational contexts. In this work, we assess this potential by investigating the recent Mount Etna (Sicily, Italy) eruption of February–March 2025 through the analysis of daytime and night-time SWIR observations in the NR mode. The time series of a normalized hotspot index retrieved over Mt. Etna indicates that the effusive eruption started on 8 February at 13:40 UTC (14:40 LT), i.e., earlier than reported by independent sources. This observation is corroborated by an analysis of the MIR signal performed using an adapted Robust Satellite Technique (RST) approach, which also reveals less intense thermal activity over the Mt. Etna area a few hours (10:50 UTC) before the possible start of lava effusion. By analyzing changes in total SWIR radiance (TSR), calculated from hot pixels detected using the preliminary NHI algorithm configuration tailored to FCI data, we inferred information about variations in thermal volcanic activity. The results show that the Mt. Etna eruption was particularly intense during 17–19 February, when the radiative power was estimated at around 1–3 GW by other sensors. These outcomes, which are consistent with higher-spatial-resolution Multispectral Instrument (MSI) and Operational Land Imager (OLI) observations that accurately delineate the lava-inundated areas, demonstrate that the FCI can make a relevant contribution to the near-real-time monitoring of Mt. Etna activity. The use of FCI data in the HR mode may further improve the timely identification of high-temperature features in early-warning contexts aimed at mitigating the social, environmental and economic impacts of effusive eruptions, especially over less well-monitored volcanic areas. Full article
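For orientation, normalized hotspot indices of the kind used in this line of work contrast radiances in two shortwave-infrared channels; a minimal NumPy sketch follows. The band pairing and the zero threshold are stated here as assumptions for illustration, not the exact FCI-tailored NHI configuration the authors describe.

```python
import numpy as np

def normalized_hotspot_index(l_swir2, l_swir1):
    """Sketch of a normalized hotspot index on SWIR radiances: hot pixels push
    the longer-wavelength SWIR radiance above the shorter one, driving the
    normalized difference positive."""
    l_swir2 = np.asarray(l_swir2, dtype=float)
    l_swir1 = np.asarray(l_swir1, dtype=float)
    return (l_swir2 - l_swir1) / (l_swir2 + l_swir1)

# Flag candidate thermal anomalies where the index exceeds zero
# (the threshold here is only illustrative).
nhi = normalized_hotspot_index(l_swir2=[5.2, 0.8], l_swir1=[1.1, 0.9])
hot_pixels = nhi > 0.0
```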

42 pages, 29424 KiB  
Article
Mapping of Flood Impacts Caused by the September 2023 Storm Daniel in Thessaly’s Plain (Greece) with the Use of Remote Sensing Satellite Data
by Triantafyllos Falaras, Anna Dosiou, Stamatina Tounta, Michalis Diakakis, Efthymios Lekkas and Issaak Parcharidis
Remote Sens. 2025, 17(10), 1750; https://doi.org/10.3390/rs17101750 - 16 May 2025
Viewed by 1691
Abstract
Floods caused by extreme weather events critically impact human and natural systems. Remote sensing can be a very useful tool in mapping these impacts. However, processing and analyzing satellite imagery covering extensive periods is computationally intensive and time-consuming, especially when data from different sensors need to be integrated, hampering its operational use. To address this issue, the present study focuses on mapping flooded areas and analyzing the impacts of the 2023 Storm Daniel flood in the Thessaly region (Greece), utilizing Earth Observation and GIS methods. The study uses multiple Sentinel-1, Sentinel-2, and Landsat 8/9 satellite images based on backscatter histogram statistics thresholding for SAR and Modified Normalized Difference Water Index (MNDWI) for multispectral images to delineate the extent of flooded areas triggered by the 2023 Storm Daniel in Thessaly region (Greece). Cloud computing on the Google Earth Engine (GEE) platform is utilized to process satellite image acquisitions and track floodwater evolution dynamics until the complete drainage of the area, making the process significantly faster. The study examines the usability and transferability of the approach to evaluate flood impact through land cover, linear infrastructure, buildings, and population-related geospatial datasets. The results highlight the vital role of the proposed approach of integrating remote sensing and geospatial analysis for effective emergency response, disaster management, and recovery planning. Full article
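The MNDWI used for the multispectral delineation is a standard normalized difference of green and SWIR reflectance; a minimal NumPy sketch follows. The example reflectance values and the zero threshold are illustrative; the study derives its thresholds per scene.

```python
import numpy as np

def mndwi(green, swir):
    """Modified Normalized Difference Water Index: (Green - SWIR) / (Green + SWIR)."""
    green = np.asarray(green, dtype=float)
    swir = np.asarray(swir, dtype=float)
    return (green - swir) / (green + swir)

# Water/flood pixels are typically taken where MNDWI exceeds a scene-specific
# threshold; 0.0 is a common starting point.
reflectance_green = np.array([[0.12, 0.08], [0.30, 0.05]])
reflectance_swir  = np.array([[0.20, 0.22], [0.05, 0.18]])
water_mask = mndwi(reflectance_green, reflectance_swir) > 0.0
```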

20 pages, 4617 KiB  
Article
Rapid Probabilistic Inundation Mapping Using Local Thresholds and Sentinel-1 SAR Data on Google Earth Engine
by Jiayong Liang, Desheng Liu, Lihan Feng and Kangning Huang
Remote Sens. 2025, 17(10), 1747; https://doi.org/10.3390/rs17101747 - 16 May 2025
Viewed by 560
Abstract
Traditional inundation mapping often relies on deterministic methods that offer only binary outcomes (inundated or not) based on satellite imagery analysis. While widely used, these methods do not convey the level of confidence in inundation classifications to account for ambiguity or uncertainty, limiting their utility in operational decision-making and rapid response contexts. To address these limitations, we propose a rapid probabilistic inundation mapping method that integrates local thresholds derived from Sentinel-1 SAR images and land cover data to estimate surface water probabilities. Tested on different flood events across five continents, this approach proved both efficient and effective, particularly when deployed via the Google Earth Engine (GEE) platform. The performance metrics—Brier Scores (0.05–0.07), Logarithmic Loss (0.1–0.2), Expected Calibration Error (0.03–0.04), and Reliability Diagrams—demonstrated reliable accuracy. VV (vertical transmit and vertical receive) polarization, given appropriate samples, yielded strong results. Additionally, the influence of different land cover types on the performance was also observed. Unlike conventional deterministic methods, this probabilistic framework allows for the estimation of inundation likelihood while accounting for variations in SAR signal characteristics across different land cover types. Moreover, it enables users to refine local thresholds or integrate on-the-ground knowledge, providing enhanced adaptability over traditional methods. Full article
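The calibration metrics reported here can be reproduced directly from predicted water probabilities and reference labels; a minimal NumPy sketch of the Brier score and expected calibration error follows, with illustrative values (the binning choice is an assumption).

```python
import numpy as np

def brier_score(p, y):
    """Mean squared difference between predicted probability and binary outcome."""
    p, y = np.asarray(p, float), np.asarray(y, float)
    return np.mean((p - y) ** 2)

def expected_calibration_error(p, y, bins=10):
    """Weighted gap between mean confidence and observed frequency per bin."""
    p, y = np.asarray(p, float), np.asarray(y, float)
    edges = np.linspace(0.0, 1.0, bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (p >= lo) & (p <= hi) if hi == 1.0 else (p >= lo) & (p < hi)
        if mask.any():
            ece += mask.mean() * abs(p[mask].mean() - y[mask].mean())
    return ece

probs  = np.array([0.9, 0.2, 0.7, 0.05, 0.6])   # predicted inundation probabilities
labels = np.array([1,   0,   1,   0,    0  ])   # reference flood mask values
print(brier_score(probs, labels), expected_calibration_error(probs, labels))
```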

30 pages, 3489 KiB  
Article
Assessing the Robustness of Multispectral Satellite Imagery with LiDAR Topographic Attributes and Ancillary Data to Predict Vertical Structure in a Wet Eucalypt Forest
by Bechu K. V. Yadav, Arko Lucieer, Gregory J. Jordan and Susan C. Baker
Remote Sens. 2025, 17(10), 1733; https://doi.org/10.3390/rs17101733 - 15 May 2025
Viewed by 603
Abstract
Remote sensing approaches can be cost-effective for estimating forest structural attributes. This study aims to use airborne LiDAR data to assess the robustness of multispectral satellite imagery and topographic attributes derived from DEMs to predict the density of three vegetation layers in a wet eucalypt forest in Tasmania, Australia. We compared the predictive capacity of medium-resolution Landsat-8 Operational Land Imager (OLI) surface reflectance and three pixel sizes from high-resolution WorldView-3 satellite imagery. These datasets were combined with topographic attributes extracted from resampled LiDAR-derived DEMs and a geology layer and validated with vegetation density layers extracted from high-density LiDAR. Using spectral bands, indices, texture features, a geology layer, and topographic attributes as predictor variables, we evaluated the predictive power of 13 data schemes at three different pixel sizes (1.6 m, 7.5 m, and 30 m). The schemes of the 30 m Landsat-8 (OLI) dataset provided better model accuracy than the WorldView-3 dataset across all three pixel sizes (R2 values from 0.15 to 0.65) and all three vegetation layers. The model accuracies increased with an increase in the number of predictor variables. For predicting the density of the overstorey vegetation, spectral indices (R2 = 0.48) and texture features (R2 = 0.47) were useful, and when both were combined, they produced higher model accuracy (R2 = 0.56) than either dataset alone. Model prediction improved further when all five data sources were included (R2 = 0.65). The best models for mid-storey (R2 = 0.46) and understorey (R2 = 0.44) vegetation had lower predictive capacity than for the overstorey. The models validated using an independent dataset confirmed the robustness. The spectral indices and texture features derived from the Landsat data products integrated with the low-density LiDAR data can provide valuable information on the forest structure of larger geographical areas for sustainable management and monitoring of the forest landscape. Full article
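The abstract does not name the regression model used to score the data schemes, so the sketch below uses a random forest purely as a placeholder to show how a predictor stack (spectral indices, texture features, topographic attributes) can be evaluated with R² against a LiDAR-derived vegetation density layer; the features and response are synthetic.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical predictor stack per pixel: indices, texture, topography (12 columns).
X = rng.normal(size=(500, 12))
# Hypothetical LiDAR-derived overstorey density (response variable).
y = X[:, 0] * 0.6 + X[:, 3] * 0.3 + rng.normal(scale=0.5, size=500)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_train, y_train)
print("R2 on held-out pixels:", r2_score(y_test, model.predict(X_test)))
```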

21 pages, 6759 KiB  
Article
Changes in Land Use and Land Cover Patterns in Two Desert Basins Using Remote Sensing Data
by Abdullah F. Alqurashi and Omar A. Alharbi
Geosciences 2025, 15(5), 178; https://doi.org/10.3390/geosciences15050178 - 15 May 2025
Viewed by 496
Abstract
Land use and land cover (LULC) changes can potentially impact natural ecosystems and are considered key components of global environmental change. The majority of LULC changes are related to human activities. Anthropogenic modifications have resulted in significant changes in the structure and fragmentation of landscapes. This research aimed to analyze LULC changes using satellite images in the following two main basins in the Makkah region: the Wadi Fatimah and Wadi Uranah fluvial systems. First, image classification was conducted using remote sensing data from different satellite platforms, namely the Multispectral Scanner, the Landsat Thematic Mapper, the Enhanced Thematic Mapper Plus, and the Operational Land Imager. Images from these platforms were acquired for the years 1972, 1985, 1990, 2000, 2014, and 2022. A combination of object-based image analysis and a support vector machine classifier was used to produce LULC thematic maps. The obtained results were then used to calculate landscape metrics to quantify landscape patterns and fragmentation. The results showed that the landscape has undergone remarkable changes over the past 46 years. Built-up areas exhibited the most significant increase, while vegetation cover was the most dynamic land cover type. This was attributed mainly to the dry climatic conditions in the study area. These results suggest that LULC changes have influenced the natural environment in the studied area and are likely to contribute to further environmental impacts in the future. Measuring the spatial LULC distribution will help planners and ecologists to develop sustainable management strategies to mitigate future environmental consequences. Full article

28 pages, 32576 KiB  
Article
Machine Learning Algorithms of Remote Sensing Data Processing for Mapping Changes in Land Cover Types over Central Apennines, Italy
by Polina Lemenkova
J. Imaging 2025, 11(5), 153; https://doi.org/10.3390/jimaging11050153 - 12 May 2025
Viewed by 1051
Abstract
This work presents the use of remote sensing data for land cover mapping, with the Central Apennines, Italy, as a case study. The data comprise eight Landsat 8–9 Operational Land Imager/Thermal Infrared Sensor (OLI/TIRS) satellite images covering the period 2018–2024. The operational workflow processed the satellite images and classified them into raster maps with ten automatically detected land cover classes over the study area. The approach was implemented using a set of modules in the Geographic Resources Analysis Support System (GRASS) Geographic Information System (GIS). Two types of approaches were applied to classify the remote sensing (RS) data: the first is unsupervised classification based on clustering and the maximum-likelihood (MaxLike) approach, which groups pixels by the spectral reflectance of their Digital Numbers (DN); the second is supervised classification performed with several Machine Learning (ML) methods, technically realised through GRASS GIS scripting and four ML algorithms from Python’s Scikit-Learn library. These classifiers were used to detect subtle changes in land cover types derived from satellite images acquired under different vegetation conditions in spring and autumn in the Central Apennines. Full article
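Since the supervised branch of the workflow drives scikit-learn classifiers from GRASS GIS, a minimal stand-alone sketch of that idea, classifying pixels from their band spectra with one such algorithm, is given below; the classifier choice, band count, and training data are placeholders, not the exact GRASS module configuration used in the paper.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)

# Hypothetical training samples: per-pixel band values with land cover labels.
train_spectra = rng.random((300, 7))            # 7 OLI/TIRS-like bands
train_labels = rng.integers(0, 10, size=300)    # 10 land cover classes

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(train_spectra, train_labels)

# Classify a full scene flattened to (n_pixels, n_bands), then reshape to a map.
scene = rng.random((200, 200, 7))
land_cover_map = clf.predict(scene.reshape(-1, 7)).reshape(200, 200)
```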

17 pages, 8295 KiB  
Article
CGLCS-Net: Addressing Multi-Temporal and Multi-Angle Challenges in Remote Sensing Change Detection
by Ke Liu, Hang Xue, Caiyi Huang, Jiaqi Huo and Guoxuan Chen
Sensors 2025, 25(9), 2836; https://doi.org/10.3390/s25092836 - 30 Apr 2025
Viewed by 373
Abstract
Currently, deep learning networks based on architectures such as CNN and Transformer have achieved significant advances in remote sensing image change detection, effectively addressing the issue of false changes due to spectral and radiometric discrepancies. However, when handling remote sensing image data from multiple sensors, different viewing angles, and extended periods, these models show limitations in modelling dynamic interactions and feature representations in change regions, restricting their ability to model the integrity and precision of irregular change areas. We propose the Context-Aware Global-Local Subspace Attention Change Detection Network (CGLCS-Net) to resolve these issues and introduce the Global-Local Context-Aware Selector (GLCAS) and the Subspace-based Self-Attention Fusion (SSAF) module. GLCAS dynamically selects receptive fields at different feature extraction stages through a joint pooling attention mechanism and depthwise separable convolution, enhancing global context and local feature extraction capabilities and improving feature representation for multi-scale and irregular change regions. The SSAF module establishes dynamic interactions between dual-temporal features via feature decomposition and self-attention mechanisms, focusing on semantic change areas to address challenges such as sensor viewpoint variations and the texture and spectral inconsistencies caused by long periods. Compared to ChangeFormer, CGLCS-Net achieved improvements in the IoU metric of 0.95%, 9.23%, and 13.16% on the three public datasets, i.e., LEVIR-CD, SYSU-CD, and S2Looking, respectively. Additionally, it reduced model parameters by 70.05%, floating-point operations by 7.5%, and inference time by 11.5%. These improvements enhance its applicability for continuous land use and land cover change monitoring. Full article
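The IoU figures quoted for the change detection comparison follow the usual definition of intersection over union between predicted and reference change masks; a minimal NumPy sketch, assuming binary masks, is shown below.

```python
import numpy as np

def iou(pred: np.ndarray, ref: np.ndarray) -> float:
    """Intersection over union of two binary change masks."""
    pred, ref = pred.astype(bool), ref.astype(bool)
    intersection = np.logical_and(pred, ref).sum()
    union = np.logical_or(pred, ref).sum()
    return intersection / union if union else 1.0

pred = np.array([[1, 0], [1, 1]])
ref  = np.array([[1, 0], [0, 1]])
print(iou(pred, ref))   # 2 / 3
```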
(This article belongs to the Section Sensing and Imaging)

16 pages, 5514 KiB  
Article
Crop-Free-Ridge Navigation Line Recognition Based on the Lightweight Structure Improvement of YOLOv8
by Runyi Lv, Jianping Hu, Tengfei Zhang, Xinxin Chen and Wei Liu
Agriculture 2025, 15(9), 942; https://doi.org/10.3390/agriculture15090942 - 26 Apr 2025
Cited by 3 | Viewed by 540
Abstract
This study is set against a background of agricultural labor shortages and limited cultivated land. To improve the intelligence and operational efficiency of agricultural machinery, and to address the difficulty of recognizing navigation lines and the lack of real-time performance of transplanters in crop-free ridge environments, we propose a crop-free-ridge navigation line recognition method based on an improved YOLOv8 segmentation algorithm. First, the method reduces the parameter count and computational complexity of the model by replacing the YOLOv8 backbone network with MobileNetV4 and the C2f feature extraction module with ShuffleNetV2, thereby improving the real-time segmentation of crop-free ridges. Second, the least-squares method is used to fit the resulting point set and accurately obtain navigation lines. Finally, the method is applied to field experimental ridges for testing and analysis. The results showed that the improved model achieved an average precision of 90.4% with 1.8 M parameters, 8.8 G FLOPs, and 49.5 FPS, indicating that it maintains high accuracy while significantly outperforming Mask-RCNN, YOLACT++, YOLOv8, and YOLO11 in computational speed. The detection frame rate increased markedly, improving real-time performance. The method fits the ridge contour feature points in the lower 55% of the image with the least-squares method, and the fitted navigation line shows no large deviation from the image ridge centerline; the result is better than that of RANSAC fitting. These findings indicate that the method significantly reduces model size and improves recognition speed, providing a more efficient solution for the autonomous navigation of intelligent agricultural machinery. Full article
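The navigation-line step, fitting a straight line to segmented ridge contour points by least squares, can be sketched as follows, assuming NumPy; the example points are hypothetical, and restricting the fit to the lower part of the frame mirrors the 55% image region the authors describe.

```python
import numpy as np

# Hypothetical ridge contour points (x, y) from the segmentation mask,
# with y measured downward as in image coordinates.
points = np.array([[320, 480], [322, 440], [318, 400], [325, 360], [321, 320]])

image_height = 480
lower_region = points[points[:, 1] >= image_height * 0.45]   # lower 55% of the image

# Fit x as a linear function of y, since ridges run roughly vertically in the frame.
slope, intercept = np.polyfit(lower_region[:, 1], lower_region[:, 0], deg=1)

# Navigation line endpoints at the top and bottom of the fitted region.
y_top, y_bottom = lower_region[:, 1].min(), lower_region[:, 1].max()
line = [(slope * y_top + intercept, y_top), (slope * y_bottom + intercept, y_bottom)]
```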
(This article belongs to the Section Digital Agriculture)

27 pages, 10030 KiB  
Article
Enhancing Deforestation Detection Through Multi-Domain Adaptation with Uncertainty Estimation
by Luiz Fernando de Moura, Pedro Juan Soto Vega, Gilson Alexandre Ostwald Pedro da Costa and Guilherme Lucio Abelha Mota
Forests 2025, 16(5), 742; https://doi.org/10.3390/f16050742 - 26 Apr 2025
Viewed by 458
Abstract
Deep learning models have shown great potential in scientific research, particularly in remote sensing for monitoring natural resources, environmental changes, land cover, and land use. Deep semantic segmentation techniques enable land cover classification, change detection, object identification, and vegetation health assessment, among other applications. However, their effectiveness relies on large labeled datasets, which are costly and time-consuming to obtain. Domain adaptation (DA) techniques address this challenge by transferring knowledge from a labeled source domain to one or more unlabeled target domains. While most DA research focuses on single-target single-source problems, multi-target and multi-source scenarios remain underexplored. This work proposes a deep learning approach that uses Domain Adversarial Neural Networks (DANNs) for deforestation detection in multi-domain settings. Additionally, an uncertainty estimation phase is introduced to guide human review in high-uncertainty areas. Our approach is evaluated on a set of Landsat-8 images from the Amazon and Brazilian Cerrado biomes. In the multi-target experiments, a single source domain contains labeled data, while samples from the target domains are unlabeled. In multi-source scenarios, labeled samples from multiple source domains are used to train the deep learning models, later evaluated on a single target domain. The results show significant accuracy improvements over lower-bound baselines, as indicated by F1-Score values, and the uncertainty-based review showed a further potential to enhance performance, reaching upper-bound baselines in certain domain combinations. As our approach is independent of the semantic segmentation network architecture, we believe it opens new perspectives for improving the generalization capacity of deep learning-based deforestation detection methods. Furthermore, from an operational point of view, it has the potential to enable deforestation detection in areas around the world that lack accurate reference data to adequately train deep learning models for the task. Full article
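The core mechanism of a DANN, a gradient reversal layer that makes the feature extractor adversarial to the domain classifier, is compact enough to sketch, assuming PyTorch; this is a generic illustration, not the authors' segmentation architecture.

```python
import torch
from torch.autograd import Function

class GradReverse(Function):
    """Identity in the forward pass; flips (and scales) the gradient in backward,
    so shared features are trained to confuse the domain classifier."""
    @staticmethod
    def forward(ctx, x, lambd: float):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None

def grad_reverse(x: torch.Tensor, lambd: float = 1.0) -> torch.Tensor:
    return GradReverse.apply(x, lambd)

# Usage: features -> grad_reverse -> domain classifier; the segmentation head
# consumes the same features directly, without reversal.
features = torch.randn(4, 64, requires_grad=True)
reversed_features = grad_reverse(features, lambd=0.5)
```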
(This article belongs to the Special Issue Modeling Forest Dynamics)

17 pages, 7946 KiB  
Article
Optical Camera Characterization for Feature-Based Navigation in Lunar Orbit
by Pierluigi Federici, Antonio Genova, Simone Andolfo, Martina Ciambellini, Riccardo Teodori and Tommaso Torrini
Aerospace 2025, 12(5), 374; https://doi.org/10.3390/aerospace12050374 - 26 Apr 2025
Viewed by 518
Abstract
Accurate localization is a key requirement for deep-space exploration, enabling spacecraft operations with limited ground support. Upcoming commercial and scientific missions to the Moon are designed to extensively use optical measurements during low-altitude orbital phases, descent and landing, and high-risk operations, due to the versatility and suitability of these data for onboard processing. Navigation frameworks based on optical data analysis have been developed to support semi- or fully-autonomous onboard systems, enabling precise relative localization. To achieve high-accuracy navigation, optical data have been combined with complementary measurements using sensor fusion techniques. Absolute localization is further supported by integrating onboard maps of cataloged surface features, enabling position estimation in an inertial reference frame. This study presents a navigation framework for optical image processing aimed at supporting the autonomous operations of lunar orbiters. The primary objective is a comprehensive characterization of the navigation camera’s properties and performance to ensure orbit determination uncertainties remain below 1% of the spacecraft altitude. In addition to an analysis of measurement noise, which accounts for both hardware and software contributions and is evaluated across multiple levels consistent with prior literature, this study emphasizes the impact of process noise on orbit determination accuracy. The mismodeling of orbital dynamics significantly degrades orbit estimation performance, even in scenarios involving high-performing navigation cameras. To evaluate the trade-off between measurement and process noise, representing the relative accuracy of the navigation camera and the onboard orbit propagator, numerical simulations were carried out in a synthetic lunar environment using a near-polar, low-altitude orbital configuration. Under nominal conditions, the optical measurement noise was set to 2.5 px, corresponding to a ground resolution of approximately 160 m based on the focal length, pixel pitch, and altitude of the modeled camera. With a conservative process noise model, position errors of about 200 m are observed in both transverse and normal directions. The results demonstrate the estimation framework’s robustness to modeling uncertainties, adaptability to varying measurement conditions, and potential to support increased onboard autonomy for small spacecraft in deep-space missions. Full article
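The relation between pixel-level measurement noise and ground-level error used here is the pinhole scaling of pixel pitch by altitude over focal length; a short sketch with hypothetical camera values chosen so that 2.5 px maps to roughly 160 m is given below. The actual camera parameters are not stated in the abstract.

```python
# Pinhole-camera scaling from pixel-level noise to ground-level error.
# All parameter values below are hypothetical; the abstract gives only the
# 2.5 px noise level and the ~160 m ground resolution it corresponds to.
altitude_m = 100_000.0      # spacecraft altitude above the surface
pixel_pitch_m = 5.5e-6      # detector pixel pitch
focal_length_m = 8.6e-3     # camera focal length

ground_sample_distance = altitude_m * pixel_pitch_m / focal_length_m   # ~64 m per pixel
measurement_noise_px = 2.5
ground_error_m = measurement_noise_px * ground_sample_distance          # ~160 m
print(f"GSD = {ground_sample_distance:.1f} m/px, noise = {ground_error_m:.0f} m")
```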
(This article belongs to the Special Issue Planetary Exploration)

23 pages, 14157 KiB  
Article
A Spatial–Frequency Combined Transformer for Cloud Removal of Optical Remote Sensing Images
by Fulian Zhao, Chenlong Ding, Xin Li, Runliang Xia, Caifeng Wu and Xin Lyu
Remote Sens. 2025, 17(9), 1499; https://doi.org/10.3390/rs17091499 - 23 Apr 2025
Viewed by 705
Abstract
Cloud removal is a vital preprocessing step in optical remote sensing images (RSIs), directly enhancing image quality and providing a high-quality data foundation for downstream tasks, such as water body extraction and land cover classification. Existing methods attempt to combine spatial and frequency features for cloud removal, but they rely on shallow feature concatenation or simplistic addition operations, which fail to establish effective cross-domain synergistic mechanisms. These approaches lead to edge blurring and noticeable color distortions. To address this issue, we propose a spatial–frequency collaborative enhancement Transformer network named SFCRFormer, which significantly improves cloud removal performance. The core of SFCRFormer is the spatial–frequency combined Transformer (SFCT) block, which implements cross-domain feature reinforcement through a dual-branch spatial attention (DBSA) module and frequency self-attention (FreSA) module to effectively capture global context information. The DBSA module enhances the representation of spatial features by decoupling spatial-channel dependencies via parallelized feature refinement paths, surpassing the performance of traditional single-branch attention mechanisms in maintaining the overall structure of the image. FreSA leverages fast Fourier transform to convert features into the frequency domain, using frequency differences between object and cloud regions to achieve precise cloud detection and fine-grained removal. In order to further enhance the features extracted by DBSA and FreSA, we design the dual-domain feed-forward network (DDFFN), which effectively improves the detail fidelity of the restored image by multi-scale convolution for local refinement and frequency transformation for global structural optimization. A composite loss function, incorporating Charbonnier loss and Structural Similarity Index (SSIM) loss, is employed to optimize model training and balance pixel-level accuracy with structural fidelity. Experimental evaluations on the public datasets demonstrate that SFCRFormer outperforms state-of-the-art methods across various quantitative metrics, including PSNR and SSIM, while delivering superior visual results. Full article
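The composite training objective combines a Charbonnier term with an SSIM term; a minimal sketch follows, assuming PyTorch, with a simplified single-window (global) SSIM rather than the usual windowed implementation and an illustrative weighting.

```python
import torch

def charbonnier(pred: torch.Tensor, target: torch.Tensor, eps: float = 1e-3) -> torch.Tensor:
    """Smooth L1-like penalty: sqrt((x - y)^2 + eps^2), averaged over all pixels."""
    return torch.sqrt((pred - target) ** 2 + eps ** 2).mean()

def global_ssim(pred: torch.Tensor, target: torch.Tensor,
                c1: float = 0.01 ** 2, c2: float = 0.03 ** 2) -> torch.Tensor:
    """Single-window SSIM over the whole image (a simplification of windowed SSIM)."""
    mu_x, mu_y = pred.mean(), target.mean()
    var_x, var_y = pred.var(unbiased=False), target.var(unbiased=False)
    cov = ((pred - mu_x) * (target - mu_y)).mean()
    return ((2 * mu_x * mu_y + c1) * (2 * cov + c2)) / \
           ((mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2))

def composite_loss(pred, target, ssim_weight: float = 0.2):
    # Illustrative weighting; the paper's actual balance is not given in the abstract.
    return charbonnier(pred, target) + ssim_weight * (1.0 - global_ssim(pred, target))

pred, target = torch.rand(1, 3, 64, 64), torch.rand(1, 3, 64, 64)
loss = composite_loss(pred, target)
```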

28 pages, 5449 KiB  
Review
The Evolution and Development Trends of LNG Loading and Unloading Arms
by Mingqin Liu, Jiachao Wang, Han Zhang, Yuming Zhang, Jingquan Zhu and Kun Zhu
Appl. Sci. 2025, 15(8), 4316; https://doi.org/10.3390/app15084316 - 14 Apr 2025
Viewed by 948
Abstract
In recent years, the rapid growth in demand for liquefied natural gas (LNG) has brought significant challenges and opportunities to LNG storage and transportation technologies. As critical equipment for LNG loading operations, marine and land-based LNG loading and unloading arms play a vital role in improving LNG storage and transportation efficiency and ensuring safety performance. By extensively collecting relevant domestic and international literature, technical standards, and engineering cases, systematically reviewing and analyzing existing achievements, and engaging with technical personnel from related enterprises, the current development status of marine and land-based LNG loading and unloading arms is introduced from multiple perspectives, including overall structure, sealing technology, safety protection devices, and intelligent and automated development. This paper highlights trajectory planning and image processing involved in the automatic docking technology. Marine loading/unloading arms need to operate in high-humidity, high-corrosion, and even extreme weather conditions. In the future, they should further enhance stability in marine high-corrosion environments and improve anti-overturning capability under extreme conditions by simplifying mechanical structures, developing new balancing systems, and using low-temperature-resistant alloy materials. Land-based loading and unloading arms focus on multi-vehicle parallel operations, improving operational efficiency through simplified mechanical structures, integrated intelligent positioning systems, and adaptive control algorithms. Full article

29 pages, 6481 KiB  
Article
MDFFN: Multi-Scale Dual-Aggregated Feature Fusion Network for Hyperspectral Image Classification
by Ge Song, Xiaoqi Luo, Yuqiao Deng, Fei Zhao, Xiaofei Yang, Jiaxin Chen and Jinjie Chen
Electronics 2025, 14(7), 1477; https://doi.org/10.3390/electronics14071477 - 7 Apr 2025
Viewed by 492
Abstract
Employing the multi-scale strategy in hyperspectral image (HSI) classification enables the exploration of complex land-cover structures with diverse shapes. However, existing multi-scale methods still have limitations for fine feature extraction and deep feature fusion, which hinder the further improvement of classification performance. In this paper, we propose a multi-scale dual-aggregated feature fusion network (MDFFN) for both balanced and imbalanced environments. The network comprises two main core modules: a multi-scale convolutional information embedding (MCIE) module and a dual aggregated cross-attention (DACA) module. The proposed MCIE module introduces a multi-scale pooling operation to aggregate local features, which efficiently highlights discriminative spectral–spatial information and especially learns key features in small target samples in the imbalanced environment. Furthermore, the proposed DACA module employs a cross-scale interaction strategy to realize the deep fusion of multi-scale features and designs a dual aggregation mechanism to mitigate the loss of information, which facilitates further spatial–spectral feature enhancement. The experimental results demonstrate that the proposed method outperforms state-of-the-art methods on three classical HSI datasets, proving the superiority of the proposed MDFFN. Full article
(This article belongs to the Section Artificial Intelligence)

20 pages, 3028 KiB  
Article
Multitemporal Analysis Using Remote Sensing and GIS to Monitor Wetlands Changes and Degradation in the Central Andes of Ecuador (Period 1986–2022)
by Juan Carlos Carrasco Baquero, Daisy Carolina Carrasco López, Jorge Daniel Córdova Lliquín, Adriana Catalina Guzmán Guaraca, David Alejandro León Gualán, Vicente Javier Parra León and Verónica Lucía Caballero Serrano
Resources 2025, 14(4), 61; https://doi.org/10.3390/resources14040061 - 4 Apr 2025
Viewed by 1384
Abstract
Wetlands are transitional lands between terrestrial and aquatic systems that provide various ecosystem services. The objective of this study was to evaluate the change in wetlands in the Chimborazo Wildlife Reserve (CR) in the period 1986–2022 using geographic information systems (GISs), multitemporal satellite data, and field data from the 16 wetlands of the reserve. Images from Landsat satellite collections (five from Thematic Mapper, seven from Enhanced Thematic Mapper, and eight from Operational Land Imager and Thermal Infrared Sensor) were used. Image analysis and processing was performed, and the resulting maps were evaluated in a GIS environment to determine the land cover change and growth rate of hydrophilic opportunistic vegetation (HOV) according to hillside orientation. The results show that there are negative annual anomalies in the water-covered areas, which coincide with the increase in HOV. This shows that the constancy or increase in the rate of increase in HOV, which varies between 0.0018 and 0.0028, causes the disappearance of these ecosystems. The importance of the study lies in its potential contribution to the decision-making process in the management of the CR. Full article
