Search Results (41)

Search Parameters:
Keywords = super critical extraction

18 pages, 1481 KiB  
Article
Ambiguities, Built-In Biases, and Flaws in Big Data Insight Extraction
by Serge Galam
Information 2025, 16(8), 661; https://doi.org/10.3390/info16080661 - 2 Aug 2025
Viewed by 39
Abstract
I address the challenge of extracting reliable insights from large datasets using a simplified model that illustrates how hierarchical classification can distort outcomes. The model consists of discrete pixels labeled red, blue, or white. Red and blue indicate distinct properties, while white represents unclassified or ambiguous data. A macro-color is assigned only if one color holds a strict majority among the pixels. Otherwise, the aggregate is labeled white, reflecting uncertainty. This setup mimics a percolation threshold at fifty percent. Assuming that the color proportions cannot be accessed directly from the data, I implement a hierarchical coarse-graining procedure. Elements (first pixels, then aggregates) are recursively grouped and reclassified via local majority rules, ultimately producing a single super-aggregate whose color represents the inferred macro-property of the collection of pixels as a whole. Analytical results supported by simulations show that the process introduces additional white aggregates beyond any white pixels present initially; these arise from groups lacking a clear majority, which require arbitrary symmetry-breaking decisions to attribute a color to them. While each local resolution may appear minor and inconsequential, their repetition introduces a growing systematic bias. Even with complete data, unavoidable asymmetries in local rules are shown to skew outcomes. This study highlights a critical limitation of recursive data reduction: insight extraction is shaped not only by data quality but also by how local ambiguity is handled, resulting in built-in biases. Thus, the related flaws are due not to the data but to structural choices made during local aggregations. Although based on a simple model, these findings expose a high likelihood of inherent flaws in widely used hierarchical classification techniques. Full article
(This article belongs to the Section Artificial Intelligence)
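The voting scheme the abstract describes is easy to reproduce. Below is a minimal sketch, assuming groups of four cells per coarse-graining step (the paper's exact group size and tie rules may differ): strict majorities propagate a color, undecided groups become white, and an arbitrary tie-breaking rule visibly biases the final super-aggregate.

```python
# Recursive majority-rule coarse-graining over red/blue/white cells.
import random
from collections import Counter

def coarse_grain(cells, group=4, tie_break=None):
    """One renormalization step: majority rule per group of `group` cells.
    A color is assigned only on a strict majority; ties become 'W' unless
    `tie_break` names a color used to break red-blue ties."""
    out = []
    for i in range(0, len(cells) - len(cells) % group, group):
        counts = Counter(cells[i:i + group])
        need = group // 2 + 1              # strict majority threshold
        if counts['R'] >= need:
            out.append('R')
        elif counts['B'] >= need:
            out.append('B')
        elif tie_break and counts['R'] == counts['B'] > 0:
            out.append(tie_break)          # arbitrary symmetry breaking
        else:
            out.append('W')                # undecided group stays white
    return out

random.seed(0)
pixels = random.choices('RB', weights=[0.48, 0.52], k=4**6)
for rule in (None, 'R'):
    cells = pixels
    while len(cells) > 1:
        cells = coarse_grain(cells, tie_break=rule)
    print(f"tie_break={rule!r}: super-aggregate = {cells[0]}")
```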

19 pages, 7161 KiB  
Article
Dynamic Snake Convolution Neural Network for Enhanced Image Super-Resolution
by Weiqiang Xin, Ziang Wu, Qi Zhu, Tingting Bi, Bing Li and Chunwei Tian
Mathematics 2025, 13(15), 2457; https://doi.org/10.3390/math13152457 - 30 Jul 2025
Viewed by 203
Abstract
Image super-resolution (SR) is essential for enhancing image quality in critical applications, such as medical imaging and satellite remote sensing. However, existing methods are often limited in their ability to effectively process and integrate multi-scale information from fine textures to global structures. To address these limitations, this paper proposes DSCNN, a dynamic snake convolution neural network for enhanced image super-resolution. DSCNN optimizes feature extraction and network architecture to enhance both performance and efficiency. The core innovation is a feature extraction and enhancement module built on dynamic snake convolution, which dynamically adjusts the convolution kernel’s shape and position to better fit the image’s geometric structures. To optimize the network’s structure, DSCNN employs an enhanced residual network framework that utilizes parallel convolutional layers and a global feature fusion mechanism to further strengthen feature extraction capability and gradient flow efficiency. Additionally, the network incorporates a SwishReLU-based activation function and a multi-scale convolutional concatenation structure. This multi-scale design effectively captures both local details and global image structure, enhancing SR reconstruction. In summary, the proposed DSCNN outperforms existing methods in both objective metrics and visual perception (e.g., our method achieved optimal PSNR and SSIM results on the Set5 ×4 dataset). Full article
(This article belongs to the Special Issue Structural Networks for Image Application)
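Dynamic snake convolution itself is specialized, but its key ingredient, learned sampling offsets, can be sketched with the standard deformable convolution available in torchvision. A minimal stand-in, assuming illustrative channel and kernel sizes rather than the authors' configuration:

```python
# Offset-learned (deformable) convolution as a stand-in for dynamic snake conv.
import torch
import torch.nn as nn
from torchvision.ops import deform_conv2d

class OffsetConv(nn.Module):
    def __init__(self, in_ch=64, out_ch=64, k=3):
        super().__init__()
        self.k = k
        # predicts (dy, dx) for each of the k*k kernel taps at every position
        self.offset = nn.Conv2d(in_ch, 2 * k * k, 3, padding=1)
        nn.init.zeros_(self.offset.weight)    # start as an ordinary conv
        nn.init.zeros_(self.offset.bias)
        self.weight = nn.Parameter(torch.randn(out_ch, in_ch, k, k) * 0.05)

    def forward(self, x):
        off = self.offset(x)                  # (N, 2*k*k, H, W)
        return deform_conv2d(x, off, self.weight, padding=self.k // 2)

x = torch.randn(1, 64, 32, 32)
print(OffsetConv()(x).shape)                  # torch.Size([1, 64, 32, 32])
```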

22 pages, 5937 KiB  
Article
CSAN: A Channel–Spatial Attention-Based Network for Meteorological Satellite Image Super-Resolution
by Weiliang Liang and Yuan Liu
Remote Sens. 2025, 17(14), 2513; https://doi.org/10.3390/rs17142513 - 19 Jul 2025
Viewed by 411
Abstract
Meteorological satellites play a critical role in weather forecasting, climate monitoring, water resource management, and more. These satellites feature an array of radiative imaging bands, capturing dozens of spectral images that span from visible to infrared. However, the spatial resolution of these bands varies, with images at longer wavelengths typically exhibiting lower spatial resolutions, which limits the accuracy and reliability of subsequent applications. To alleviate this issue, we propose a channel–spatial attention-based network, named CSAN, designed to super-resolve all low-resolution (LR) bands to the available maximal high-resolution (HR) scale. The CSAN consists of an information fusion unit, a feature extraction module, and an image restoration unit. The information fusion unit adaptively fuses LR and HR images, effectively capturing inter-band spectral relationships and spatial details to enhance the input representation. The feature extraction module integrates channel and spatial attention into the residual network, enabling the extraction of informative spectral and spatial features from the fused inputs. Using these deep features, the image restoration unit reconstructs the missing spatial details in LR images. Extensive experiments demonstrate that the proposed network outperforms other state-of-the-art approaches quantitatively and visually. Full article
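The channel and spatial attention at the core of CSAN follows a widely used pattern (CBAM-style; whether CSAN matches it exactly is an assumption): channel attention reweights spectral bands, and spatial attention reweights locations. A minimal sketch:

```python
# CBAM-style channel + spatial attention block.
import torch
import torch.nn as nn

class ChannelSpatialAttention(nn.Module):
    def __init__(self, ch=64, reduction=16):
        super().__init__()
        self.mlp = nn.Sequential(                 # channel attention
            nn.AdaptiveAvgPool2d(1), nn.Conv2d(ch, ch // reduction, 1),
            nn.ReLU(inplace=True), nn.Conv2d(ch // reduction, ch, 1), nn.Sigmoid())
        self.spatial = nn.Sequential(             # spatial attention
            nn.Conv2d(2, 1, 7, padding=3), nn.Sigmoid())

    def forward(self, x):
        x = x * self.mlp(x)                       # reweight channels (spectral bands)
        pooled = torch.cat([x.mean(1, keepdim=True),
                            x.amax(1, keepdim=True)], dim=1)
        return x * self.spatial(pooled)           # reweight spatial positions

x = torch.randn(1, 64, 48, 48)
print(ChannelSpatialAttention()(x).shape)         # torch.Size([1, 64, 48, 48])
```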

14 pages, 16969 KiB  
Article
FTT: A Frequency-Aware Texture Matching Transformer for Digital Bathymetry Model Super-Resolution
by Peikun Xiao, Jianping Wu and Yingjie Wang
J. Mar. Sci. Eng. 2025, 13(7), 1365; https://doi.org/10.3390/jmse13071365 - 17 Jul 2025
Viewed by 175
Abstract
Deep learning has shown significant advantages over traditional spatial interpolation methods in single image super-resolution (SISR). Recently, many studies have applied super-resolution (SR) methods to generate high-resolution (HR) digital bathymetry models (DBMs), but the substantial differences between DBM and natural images have been ignored, which leads to serious distortions and inaccuracies. Given the critical role of HR DBM in marine resource exploitation, economic development, and scientific innovation, we propose a frequency-aware texture matching transformer (FTT) for DBM SR, incorporating global terrain feature extraction (GTFE), high-frequency feature extraction (HFFE), and a terrain matching block (TMB). GTFE can perceive spatial heterogeneity and spatial locations, allowing it to accurately capture large-scale terrain features. HFFE explicitly extracts high-frequency priors beneficial for DBM SR and implicitly refines the representation of high-frequency information in the global terrain feature. TMB improves the fidelity of the generated HR DBM by generating position offsets to restore warped textures in deep features. Experimental results demonstrate that the proposed FTT achieves superior performance in terms of elevation, slope, aspect, and fidelity of the generated HR DBM. Notably, the root mean square error (RMSE) of elevation in steep terrain is reduced by 4.89 m, a significant improvement in the accuracy and precision of the reconstruction. This research holds significant implications for improving the accuracy of DBM SR methods and the usefulness of HR bathymetry products for future marine research. Full article
(This article belongs to the Section Ocean Engineering)
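The HFFE module's explicit high-frequency prior can be approximated in a few lines with a Fourier high-pass filter. A minimal sketch, assuming a simple radial cutoff rather than the paper's learned extraction:

```python
# FFT high-pass as a crude high-frequency prior for a bathymetry grid.
import torch

def high_frequency_prior(x, cutoff=0.1):
    """x: (N, C, H, W) grid; zero out frequencies below `cutoff` * Nyquist."""
    X = torch.fft.fft2(x)
    H, W = x.shape[-2:]
    fy = torch.fft.fftfreq(H).abs().view(-1, 1)   # cycles per sample
    fx = torch.fft.fftfreq(W).abs().view(1, -1)
    mask = ((fy ** 2 + fx ** 2).sqrt() >= cutoff * 0.5).to(x.dtype)
    return torch.fft.ifft2(X * mask).real

dbm = torch.randn(1, 1, 64, 64)                   # stand-in depth grid
print(high_frequency_prior(dbm).shape)            # torch.Size([1, 1, 64, 64])
```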

19 pages, 51503 KiB  
Article
LSANet: Lightweight Super Resolution via Large Separable Kernel Attention for Edge Remote Sensing
by Tingting Yong and Xiaofang Liu
Appl. Sci. 2025, 15(13), 7497; https://doi.org/10.3390/app15137497 - 3 Jul 2025
Viewed by 332
Abstract
In recent years, remote sensing imagery has become indispensable for applications such as environmental monitoring, land use classification, and urban planning. However, the physical constraints of satellite imaging systems frequently limit the spatial resolution of these images, impeding the extraction of fine-grained information critical to downstream tasks. Super-resolution (SR) techniques thus emerge as a pivotal solution to enhance the spatial fidelity of remote sensing images via computational approaches. While deep learning-based SR methods have advanced reconstruction accuracy, their high computational complexity and large parameter counts restrict practical deployment in real-world remote sensing scenarios, particularly on edge or low-power devices. To address this gap, we propose LSANet, a lightweight SR network customized for remote sensing imagery. The core of LSANet is the large separable kernel attention mechanism, which efficiently expands the receptive field while retaining low computational overhead. By integrating this mechanism into an enhanced residual feature distillation module, the network captures long-range dependencies more effectively than traditional shallow residual blocks. Additionally, a residual feature enhancement module, leveraging contrast-aware channel attention and hierarchical skip connections, strengthens the extraction and integration of multi-level discriminative features. This design preserves fine textures and ensures smooth information propagation across the network. Extensive experiments on public datasets such as UC Merced Land Use and NWPU-RESISC45 demonstrate LSANet’s competitive or superior performance compared to state-of-the-art methods. On the UC Merced Land Use dataset, LSANet achieves a PSNR of 34.33 dB, outperforming the best baseline, HSENet (34.23 dB), by 0.1 dB. For SSIM, LSANet reaches 0.9328, closely matching HSENet’s 0.9332 while balancing the two metrics well. On the NWPU-RESISC45 dataset, LSANet attains a PSNR of 35.02 dB, a significant improvement over prior methods, and an SSIM of 0.9305, maintaining strong competitiveness. These results, combined with a notable reduction in parameters and floating-point operations, highlight the superiority of LSANet in remote sensing image super-resolution tasks. Full article
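Large separable kernel attention factorizes a big depthwise kernel into cheap 1-D convolutions. A minimal sketch, assuming the commonly used kernel and dilation split (the paper's exact sizes may differ):

```python
# Large separable kernel attention: 1-D depthwise convs build a large
# receptive field cheaply, then a 1x1 conv forms a gating map.
import torch
import torch.nn as nn

class LSKA(nn.Module):
    def __init__(self, ch=48):
        super().__init__()
        dw = dict(groups=ch)                  # depthwise convolutions
        self.h0 = nn.Conv2d(ch, ch, (1, 5), padding=(0, 2), **dw)
        self.v0 = nn.Conv2d(ch, ch, (5, 1), padding=(2, 0), **dw)
        self.h1 = nn.Conv2d(ch, ch, (1, 7), padding=(0, 9), dilation=(1, 3), **dw)
        self.v1 = nn.Conv2d(ch, ch, (7, 1), padding=(9, 0), dilation=(3, 1), **dw)
        self.pw = nn.Conv2d(ch, ch, 1)

    def forward(self, x):
        attn = self.pw(self.v1(self.h1(self.v0(self.h0(x)))))
        return x * attn                       # large-receptive-field gating

x = torch.randn(1, 48, 32, 32)
print(LSKA()(x).shape)                        # torch.Size([1, 48, 32, 32])
```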

40 pages, 4919 KiB  
Article
NGSTGAN: N-Gram Swin Transformer and Multi-Attention U-Net Discriminator for Efficient Multi-Spectral Remote Sensing Image Super-Resolution
by Chao Zhan, Chunyang Wang, Bibo Lu, Wei Yang, Xian Zhang and Gaige Wang
Remote Sens. 2025, 17(12), 2079; https://doi.org/10.3390/rs17122079 - 17 Jun 2025
Viewed by 548
Abstract
The reconstruction of high-resolution (HR) remote sensing images (RSIs) from low-resolution (LR) counterparts is a critical task in remote sensing image super-resolution (RSISR). Recent advancements in convolutional neural networks (CNNs) and Transformers have significantly improved RSISR performance due to their capabilities in local feature extraction and global modeling. However, several limitations remain, including the underutilization of multi-scale features in RSIs, the limited receptive field of Swin Transformer’s window self-attention (WSA), and the computational complexity of existing methods. To address these issues, this paper introduces the NGSTGAN model, which employs an N-Gram Swin Transformer as the generator and a multi-attention U-Net as the discriminator. The discriminator enhances attention to multi-scale key features through the addition of channel, spatial, and pixel attention (CSPA) modules, while the generator utilizes an improved shallow feature extraction (ISFE) module to extract multi-scale and multi-directional features, enhancing the capture of complex textures and details. The N-Gram concept is introduced to expand the receptive field of Swin Transformer, and sliding window self-attention (S-WSA) is employed to facilitate interaction between neighboring windows. Additionally, channel-reducing group convolution (CRGC) is used to reduce the number of parameters and computational complexity. A cross-sensor multispectral dataset combining Landsat-8 (L8) and Sentinel-2 (S2) is constructed for the resolution enhancement of L8’s blue (B), green (G), red (R), and near-infrared (NIR) bands from 30 m to 10 m. Experiments show that NGSTGAN outperforms the state-of-the-art (SOTA) method, achieving improvements of 0.5180 dB in the peak signal-to-noise ratio (PSNR) and 0.0153 in the structural similarity index measure (SSIM) over the second best method, offering a more effective solution to the task. Full article
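Among the efficiency measures above, the parameter saving from grouped convolution (the basis of CRGC) is easy to verify directly; the channel counts below are illustrative, not the paper's:

```python
# Parameter counts: dense conv vs. grouped conv with 8 groups.
import torch.nn as nn

def n_params(m):
    return sum(p.numel() for p in m.parameters())

dense = nn.Conv2d(64, 64, 3, padding=1)              # 64*64*9 + 64 weights
grouped = nn.Conv2d(64, 64, 3, padding=1, groups=8)  # 64*(64/8)*9 + 64

print(n_params(dense), n_params(grouped))            # 36928 4672
```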

23 pages, 5084 KiB  
Article
A Hybrid Dropout Method for High-Precision Seafloor Topography Reconstruction and Uncertainty Quantification
by Xinye Cui, Houpu Li, Yanting Yu, Shaofeng Bian and Guojun Zhai
Appl. Sci. 2025, 15(11), 6113; https://doi.org/10.3390/app15116113 - 29 May 2025
Viewed by 339
Abstract
Seafloor topography super-resolution reconstruction is critical for marine resource exploration, geological monitoring, and navigation safety. However, sparse acoustic data frequently result in the loss of high-frequency details, and traditional deep learning models exhibit limitations in uncertainty quantification, impeding their practical application. To address these challenges, this study systematically investigates the combined effects of various regularization strategies and uncertainty quantification modules. It proposes a hybrid dropout model that jointly optimizes high-precision reconstruction and uncertainty estimation. The model integrates residual blocks, squeeze-and-excitation (SE) modules, and a multi-scale feature extraction network while employing Monte Carlo Dropout (MC-Dropout) alongside heteroscedastic noise modeling to dynamically gate the uncertainty quantification process. By adaptively modulating the regularization strength based on feature activations, the model preserves high-frequency information and accurately estimates predictive uncertainty. The experimental results demonstrate significant improvements in the Root Mean Square Error (RMSE), Mean Absolute Error (MAE), and Peak Signal-to-Noise Ratio (PSNR). Compared to conventional dropout architectures, the proposed method achieves a PSNR increase of 46.5% to 60.5% in test regions with a marked reduction in artifacts. Overall, the synergistic effect of employed regularization strategies and uncertainty quantification modules substantially enhances detail recovery and robustness in complex seafloor topography reconstruction, offering valuable theoretical insights and practical guidance for further optimization of deep learning models in challenging applications. Full article
(This article belongs to the Section Marine Science and Engineering)
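The two uncertainty ingredients named above are standard and compact to sketch: MC-Dropout samples stochastic forward passes for epistemic uncertainty, while a heteroscedastic head predicts per-pixel noise variance. A minimal sketch with an illustrative stand-in network, not the paper's architecture:

```python
# MC-Dropout + heteroscedastic (learned log-variance) uncertainty.
import torch
import torch.nn as nn

class HeteroNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
                                  nn.Dropout2d(0.2),
                                  nn.Conv2d(32, 2, 3, padding=1))

    def forward(self, x):
        out = self.body(x)
        return out[:, :1], out[:, 1:]        # predicted mean, log-variance

net, x = HeteroNet(), torch.randn(4, 1, 32, 32)
net.train()                                  # keep dropout stochastic (MC-Dropout)
with torch.no_grad():
    samples = torch.stack([net(x)[0] for _ in range(20)])
    epistemic = samples.var(0)               # spread across dropout masks
    aleatoric = net(x)[1].exp()              # predicted noise variance
print(epistemic.shape, aleatoric.shape)
```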

21 pages, 10091 KiB  
Article
Scalable Hyperspectral Enhancement via Patch-Wise Sparse Residual Learning: Insights from Super-Resolved EnMAP Data
by Parth Naik, Rupsa Chakraborty, Sam Thiele and Richard Gloaguen
Remote Sens. 2025, 17(11), 1878; https://doi.org/10.3390/rs17111878 - 28 May 2025
Viewed by 725
Abstract
A majority of hyperspectral super-resolution methods aim to enhance the spatial resolution of hyperspectral imaging data (HSI) by integrating high-resolution multispectral imaging data (MSI), leveraging rich spectral information for various geospatial applications. Key challenges include spectral distortions from high-frequency spatial data, high computational complexity, and limited training data, particularly for new-generation sensors with unique noise patterns. In this contribution, we propose a novel parallel patch-wise sparse residual learning (P2SR) algorithm for resolution enhancement based on the fusion of HSI and MSI. The proposed method uses multi-decomposition techniques (independent component analysis, non-negative matrix factorization, and 3D wavelet transforms) to extract spatial and spectral features that form a sparse dictionary. The spectral and spatial characteristics of the scene encoded in the dictionary enable reconstruction through a first-order optimization algorithm that ensures an efficient sparse representation. The final spatially enhanced HSI is reconstructed by combining the learned features from the low-resolution HSI and applying an MSI-regulated guided filter to enhance spatial fidelity while minimizing artifacts. P2SR is deployable on a high-performance computing (HPC) system with parallel processing, ensuring scalability and computational efficiency for large HSI datasets. Extensive evaluations on three diverse study sites demonstrate that P2SR consistently outperforms traditional and state-of-the-art (SOA) methods in both quantitative metrics and qualitative spatial assessments. Specifically, P2SR achieved the best average PSNR (25.2100) and SAM (12.4542) scores, indicating superior spatio-spectral reconstruction contributing to sharper spatial features, reduced mixed pixels, and enhanced geological features. P2SR also achieved the best average ERGAS (8.9295) and Q2n (0.5156), which suggests better overall fidelity across all bands and perceptual accuracy with the least spectral distortion. Importantly, we show that P2SR preserves critical spectral signatures, such as Fe²⁺ absorption, and improves the detection of fine-scale environmental and geological structures. P2SR’s ability to maintain spectral fidelity while enhancing spatial detail makes it a powerful tool for high-precision remote sensing applications, including mineral mapping, land-use analysis, and environmental monitoring. Full article
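The "first-order optimization algorithm" for sparse representation can be illustrated with ISTA, the classic first-order sparse-coding solver; whether P2SR uses ISTA specifically is an assumption, and the dictionary below is random rather than learned from HSI/MSI features:

```python
# ISTA: solves min_a 0.5*||x - D a||^2 + lam*||a||_1 for a patch x.
import numpy as np

def ista(D, x, lam=0.1, iters=200):
    L = np.linalg.norm(D, 2) ** 2            # Lipschitz constant of the gradient
    a = np.zeros(D.shape[1])
    for _ in range(iters):
        g = a - (D.T @ (D @ a - x)) / L      # gradient step
        a = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0)  # soft threshold
    return a

rng = np.random.default_rng(0)
D = rng.standard_normal((64, 256))           # patch dictionary (e.g., from NMF/ICA)
D /= np.linalg.norm(D, axis=0)               # unit-norm atoms
x = D @ (rng.standard_normal(256) * (rng.random(256) < 0.05))  # sparse synthesis
a = ista(D, x)
print(f"nonzeros: {(np.abs(a) > 1e-6).sum()}, residual: {np.linalg.norm(D @ a - x):.3f}")
```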

19 pages, 1649 KiB  
Article
SFSIN: A Lightweight Model for Remote Sensing Image Super-Resolution with Strip-like Feature Superpixel Interaction Network
by Yanxia Lyu, Yuhang Liu, Qianqian Zhao, Ziwen Hao and Xin Song
Mathematics 2025, 13(11), 1720; https://doi.org/10.3390/math13111720 - 23 May 2025
Viewed by 461
Abstract
Remote sensing image (RSI) super-resolution plays a critical role in improving image details and reducing the costs associated with physical imaging devices. However, existing super-resolution methods are not applicable to resource-constrained edge devices because they are hampered by large parameter counts and significant computational complexity. To address these challenges, we propose a novel lightweight super-resolution model for remote sensing images, a strip-like feature superpixel interaction network (SFSIN), which combines the flexibility of convolutional neural networks (CNNs) with the long-range learning capabilities of a Transformer. Specifically, the Transformer captures global context through long-range dependencies, while the CNN performs shape-adaptive convolutions. By stacking strip-like feature superpixel interaction (SFSI) modules, we aggregate strip-like features to enable deep feature extraction from local and global perspectives. Unlike traditional methods that rely solely on direct upsampling for reconstruction, our model uses the convolutional block attention module with upsampling convolution (CBAMUpConv), which integrates deep features across spatial and channel dimensions to improve reconstruction performance. Extensive experiments on the AID dataset show that SFSIN outperforms ten state-of-the-art lightweight models. SFSIN achieves a PSNR of 33.10 dB and an SSIM of 0.8715 at the ×2 scale, outperforming competing models both quantitatively and qualitatively, while also excelling at higher scales. Full article
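The CBAMUpConv idea, attention-gated features followed by sub-pixel upsampling, can be sketched compactly. A minimal stand-in using channel attention plus PixelShuffle, with the exact CBAM placement assumed rather than taken from the paper:

```python
# Attention-gated sub-pixel (PixelShuffle) upsampling.
import torch
import torch.nn as nn

class AttnUp(nn.Module):
    def __init__(self, ch=64, scale=2):
        super().__init__()
        self.ca = nn.Sequential(nn.AdaptiveAvgPool2d(1),
                                nn.Conv2d(ch, ch, 1), nn.Sigmoid())
        self.up = nn.Sequential(nn.Conv2d(ch, ch * scale * scale, 3, padding=1),
                                nn.PixelShuffle(scale))

    def forward(self, x):
        return self.up(x * self.ca(x))       # gate channels, then sub-pixel upsample

x = torch.randn(1, 64, 24, 24)
print(AttnUp()(x).shape)                     # torch.Size([1, 64, 48, 48])
```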

22 pages, 4976 KiB  
Article
MambaOSR: Leveraging Spatial-Frequency Mamba for Distortion-Guided Omnidirectional Image Super-Resolution
by Weilei Wen, Qianqian Zhao and Xiuli Shao
Entropy 2025, 27(4), 446; https://doi.org/10.3390/e27040446 - 20 Apr 2025
Viewed by 696
Abstract
Omnidirectional image super-resolution (ODISR) is critical for VR/AR applications, as high-quality 360° visual content significantly enhances immersive experiences. However, existing ODISR methods suffer from limited receptive fields and high computational complexity, which restricts their ability to model long-range dependencies and extract global structural features. Consequently, these limitations hinder the effective reconstruction of high-frequency details. To address these issues, we propose a novel Mamba-based ODISR network, termed MambaOSR, which consists of three key modules working collaboratively for accurate reconstruction. Specifically, we first introduce a spatial-frequency visual state space model (SF-VSSM) to capture global contextual information via dual-domain representation learning, thereby enhancing the preservation of high-frequency details. Subsequently, we design a distortion-guided module (DGM) that leverages distortion map priors to adaptively model geometric distortions, effectively suppressing artifacts resulting from equirectangular projections. Finally, we develop a multi-scale feature fusion module (MFFM) that integrates complementary features across multiple scales, further improving reconstruction quality. Extensive experiments conducted on the SUN360 dataset demonstrate that our proposed MambaOSR achieves a 0.16 dB improvement in WS-PSNR and increases the mutual information by 1.99% compared with state-of-the-art methods, significantly enhancing both visual quality and the information richness of omnidirectional images. Full article
(This article belongs to the Section Information Theory, Probability and Statistics)
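WS-PSNR, the metric reported above, differs from plain PSNR only in its latitude weighting for equirectangular projections. A minimal sketch, assuming images normalized to [0, 1] and random stand-in data:

```python
# WS-PSNR: rows are weighted by cos(latitude), so oversampled polar
# pixels of an equirectangular image count less.
import numpy as np

def ws_psnr(ref, test, max_val=1.0):
    H = ref.shape[0]
    lat = (np.arange(H) + 0.5 - H / 2) * np.pi / H
    w = np.cos(lat).reshape(-1, *([1] * (ref.ndim - 1)))  # row-wise weights
    w = np.broadcast_to(w, ref.shape)
    wmse = (w * (ref - test) ** 2).sum() / w.sum()
    return 10 * np.log10(max_val ** 2 / wmse)

ref = np.random.rand(128, 256, 3)                  # stand-in equirectangular pair
test = np.clip(ref + np.random.normal(0, 0.02, ref.shape), 0, 1)
print(f"WS-PSNR: {ws_psnr(ref, test):.2f} dB")
```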

23 pages, 7802 KiB  
Article
Can Separation Enhance Fusion? An Efficient Framework for Target Detection in Multimodal Remote Sensing Imagery
by Yong Wang, Jiexuan Jia, Rui Liu, Qiusheng Cao, Jie Feng, Danping Li and Lei Wang
Remote Sens. 2025, 17(8), 1350; https://doi.org/10.3390/rs17081350 - 10 Apr 2025
Viewed by 606
Abstract
Target detection in remote sensing images has garnered significant attention due to its wide range of applications. Many traditional methods rely primarily on unimodal data, which often struggles to address the complexities of remote sensing environments. Furthermore, small-target detection remains a critical challenge in remote sensing image analysis, as small targets occupy only a few pixels, making feature extraction difficult and prone to errors. To address these challenges, this paper revisits existing multimodal fusion methodologies and proposes a novel separation-before-fusion (SBF) framework. Leveraging this framework, we present Sep-Fusion, an efficient target detection approach tailored for multimodal remote sensing aerial imagery. Within the modality separation module (MSM), the method separates the three RGB channels of visible light images into independent modalities aligned with infrared image channels. Each channel undergoes independent feature extraction through the unimodal block (UB) to effectively capture modality-specific features. The extracted features are then fused using the feature attention fusion (FAF) module, which integrates channel attention and spatial attention mechanisms to enhance multimodal feature interaction. To improve the detection of small targets, an image regeneration module is used during the training stage, incorporating a super-resolution strategy with attention mechanisms to further optimize high-resolution feature representations for subsequent localization and detection. Sep-Fusion is built on the YOLO series, making it a potential real-time detector. Its lightweight architecture enables the model to achieve high computational efficiency while maintaining the desired detection accuracy. Experimental results on the multimodal VEDAI dataset show that Sep-Fusion achieves 77.9% mAP50, surpassing many state-of-the-art models. Ablation experiments further illustrate the respective contributions of modality separation and attention fusion. The adaptation of our multimodal method to unimodal target detection is also verified on the NWPU VHR-10 and DIOR datasets, which shows Sep-Fusion to be a suitable alternative to current detectors in various remote sensing scenarios. Full article
(This article belongs to the Special Issue Remote Sensing Image Thorough Analysis by Advanced Machine Learning)
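The separation-before-fusion flow maps naturally onto a few lines of PyTorch: split the modalities, extract features per channel, gate them with attention, and fuse. A minimal sketch with illustrative layer sizes, not the paper's UB/FAF internals:

```python
# Separation-before-fusion: per-channel extraction, then attention fusion.
import torch
import torch.nn as nn

class SepFusionStub(nn.Module):
    def __init__(self, feat=16):
        super().__init__()
        # one unimodal feature extractor per single-channel modality (R, G, B, IR)
        self.unimodal = nn.ModuleList(
            nn.Sequential(nn.Conv2d(1, feat, 3, padding=1), nn.ReLU())
            for _ in range(4))
        self.gate = nn.Sequential(nn.AdaptiveAvgPool2d(1),
                                  nn.Conv2d(4 * feat, 4 * feat, 1), nn.Sigmoid())
        self.fuse = nn.Conv2d(4 * feat, feat, 1)

    def forward(self, rgb, ir):
        chans = list(rgb.split(1, dim=1)) + [ir]       # modality separation
        feats = torch.cat([m(c) for m, c in zip(self.unimodal, chans)], dim=1)
        return self.fuse(feats * self.gate(feats))     # attention-weighted fusion

rgb, ir = torch.randn(1, 3, 64, 64), torch.randn(1, 1, 64, 64)
print(SepFusionStub()(rgb, ir).shape)                  # torch.Size([1, 16, 64, 64])
```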

19 pages, 12848 KiB  
Article
Research on Super-Resolution Reconstruction Algorithms for Remote Sensing Images of Coastal Zone Based on Deep Learning
by Dong Lei, Xiaowen Luo, Zefei Zhang, Xiaoming Qin and Jiaxin Cui
Land 2025, 14(4), 733; https://doi.org/10.3390/land14040733 - 29 Mar 2025
Viewed by 785
Abstract
High-resolution multispectral remote sensing imagery is widely used in critical fields such as coastal zone management and marine engineering. However, obtaining such images at a low cost remains a significant challenge. To address this issue, we propose the MRSRGAN method (multi-scale residual super-resolution generative adversarial network). The method leverages Sentinel-2 and GF-2 imagery, selecting nine typical land cover types in coastal zones, and constructs a small-sample dataset containing 5210 images. MRSRGAN extracts the differential features between high-resolution (HR) and low-resolution (LR) images to generate super-resolution images. In our MRSRGAN approach, we design three key modules: the fusion attention-enhanced residual module (FAERM), multi-scale attention fusion (MSAF), and multi-scale feature extraction (MSFE). These modules mitigate gradient vanishing and extract image features at different scales to enhance super-resolution reconstruction, and we conducted experiments to verify their effectiveness. The results demonstrate that our approach reduces the Learned Perceptual Image Patch Similarity (LPIPS) by 14.34% and improves the Structural Similarity Index (SSIM) by 11.85%. It addresses the problem that the wide range of ground-object scales in remote sensing images makes single-scale convolution insufficient for capturing multi-scale detail, thereby improving the restoration of image details and significantly enhancing the sharpness of ground-object edges. Full article
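The LPIPS and SSIM numbers quoted above come from standard metric implementations. A minimal sketch of computing both for an SR/ground-truth pair, assuming the scikit-image and lpips packages and random stand-in arrays (not GF-2/Sentinel-2 data):

```python
# SSIM via scikit-image, LPIPS via the lpips package.
import numpy as np
import torch
import lpips
from skimage.metrics import structural_similarity

hr = np.random.rand(128, 128, 3).astype(np.float32)
sr = np.clip(hr + np.random.normal(0, 0.05, hr.shape), 0, 1).astype(np.float32)

ssim = structural_similarity(hr, sr, channel_axis=2, data_range=1.0)
loss_fn = lpips.LPIPS(net='alex')                  # perceptual distance network
to_t = lambda a: torch.from_numpy(a).permute(2, 0, 1)[None] * 2 - 1  # to [-1, 1]
lp = loss_fn(to_t(hr), to_t(sr)).item()
print(f"SSIM: {ssim:.4f}  LPIPS: {lp:.4f}")
```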

30 pages, 3294 KiB  
Review
Recent Advancements in Na Super Ionic Conductor-Incorporated Composite Polymer Electrolytes for Sodium-Ion Battery Application
by Kanya Koothanatham Senthilkumar, Rajagopalan Thiruvengadathan and Ramanujam Brahmadesam Thoopul Srinivasa Raghava
Electrochem 2025, 6(1), 6; https://doi.org/10.3390/electrochem6010006 - 3 Mar 2025
Cited by 2 | Viewed by 4092
Abstract
Sodium-ion batteries (SIBs) have garnered significant attention as a cost-effective and sustainable alternative to lithium-ion batteries (LIBs) due to the abundance and eco-friendly extraction of sodium. Despite the larger ionic radius and heavier mass of sodium ions, SIBs are ideal for large-scale applications, such as grid energy storage and electric vehicles, where cost and resource availability outweigh the constraints of size and weight. A critical component in SIBs is the electrolyte, which governs specific capacity, energy density, and battery lifespan by enabling ion transport between electrodes. Among various electrolytes, composite polymer electrolytes (CPEs) stand out for their non-leaking, non-flammable nature and tunable physicochemical properties. The incorporation of NASICON (Na Super Ionic CONductor) fillers into polymer matrices has shown transformative potential in enhancing SIB performance. NASICON fillers improve ionic conductivity by forming continuous ion conduction pathways and reducing polymer matrix crystallinity, thereby facilitating higher sodium-ion mobility. Additionally, these fillers enhance the mechanical properties and electrochemical performance of CPEs. Hence, this review focuses on the pivotal roles of NASICON fillers in optimizing the properties of CPEs, including ionic conductivity, structural integrity, and electrochemical stability. The mechanisms underlying sodium-ion transport facilitated by NASICON fillers in CPEs are explored, with emphasis on the influence of filler morphology and composition on electrochemical properties. By scrutinizing recent findings, this review underscores the potential of NASICON-based composite polymer electrolytes as promising materials for the development of advanced sodium-ion batteries. Full article

23 pages, 2118 KiB  
Article
MBGPIN: Multi-Branch Generative Prior Integration Network for Super-Resolution Satellite Imagery
by Furkat Safarov, Ugiloy Khojamuratova, Misirov Komoliddin, Furkat Bolikulov, Shakhnoza Muksimova and Young-Im Cho
Remote Sens. 2025, 17(5), 805; https://doi.org/10.3390/rs17050805 - 25 Feb 2025
Cited by 4 | Viewed by 931
Abstract
Achieving super-resolution with satellite images is a critical task for enhancing the utility of remote sensing data across various applications, including urban planning, disaster management, and environmental monitoring. Traditional interpolation methods often fail to recover fine details, while deep-learning-based approaches, including convolutional neural networks (CNNs) and generative adversarial networks (GANs), have significantly advanced super-resolution performance. Recent studies have explored large-scale models, such as Transformer-based architectures and diffusion models, demonstrating improved texture realism and generalization across diverse datasets. However, these methods frequently have high computational costs and require extensive datasets for training, making real-world deployment challenging. We propose the multi-branch generative prior integration network (MBGPIN) to address these limitations. This novel framework integrates multiscale feature extraction, hybrid attention mechanisms, and generative priors derived from pretrained VQGAN models. The dual-pathway architecture of the MBGPIN includes a feature extraction pathway for spatial features and a generative prior pathway for external guidance, dynamically fused using an adaptive generative prior fusion (AGPF) module. Extensive experiments on benchmark datasets such as UC Merced, NWPU-RESISC45, and RSSCN7 demonstrate that the MBGPIN achieves superior performance compared to state-of-the-art methods, including large-scale super-resolution models. The MBGPIN delivers higher peak signal-to-noise ratio (PSNR) and structural similarity index measure (SSIM) scores while preserving high-frequency details and complex textures. The model also achieves significant computational efficiency, with reduced floating-point operations (FLOPs) and faster inference times, making it scalable for real-world applications. Full article
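The AGPF module's dynamic fusion of the two pathways can be sketched as a learned per-pixel gate mixing spatial features with generative-prior features. A minimal sketch; the gate's exact form is an assumption:

```python
# Gated fusion of a feature pathway and a generative-prior pathway.
import torch
import torch.nn as nn

class AdaptiveFusion(nn.Module):
    def __init__(self, ch=64):
        super().__init__()
        self.gate = nn.Sequential(nn.Conv2d(2 * ch, ch, 3, padding=1), nn.Sigmoid())

    def forward(self, feat, prior):
        g = self.gate(torch.cat([feat, prior], dim=1))  # per-pixel mixing weights
        return g * feat + (1 - g) * prior               # convex combination

feat, prior = torch.randn(1, 64, 32, 32), torch.randn(1, 64, 32, 32)
print(AdaptiveFusion()(feat, prior).shape)              # torch.Size([1, 64, 32, 32])
```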

18 pages, 20822 KiB  
Article
DSpix2pix: A New Dual-Style Controlled Reconstruction Network for Remote Sensing Image Super-Resolution
by Zhouyi Wang and Changcheng Wang
Appl. Sci. 2025, 15(3), 1179; https://doi.org/10.3390/app15031179 - 24 Jan 2025
Cited by 2 | Viewed by 934
Abstract
Super-resolution reconstruction is a critical task in remote sensing image classification, and generative adversarial networks (GANs) have emerged as a dominant approach in this field. Traditional generative networks often produce low-quality images at resolutions like 256 × 256, and current research on single-image super-resolution typically focuses on resolution enhancement factors of two to four (2×–4×), which do not meet practical application demands. Building upon the framework of StyleGAN, this study introduces a dual-style controlled super-resolution reconstruction network referred to as DSpix2pix. It uses a fixed style vector (Style 1) from StyleGAN-v2, generated through its mapping network and applied to each layer of the generator. An additional style vector (Style 2) is extracted from example images and injected into the decoder using AdaIN, improving the balance of styles in the generated images. DSpix2pix is capable of generating high-quality, smoother, noise-reduced, and more realistic super-resolution remote sensing images at 512 × 512 and 1024 × 1024 resolutions. In terms of visual metrics such as RMSE, PSNR, SSIM, and LPIPS, it outperforms traditional super-resolution networks like SRGAN and UNIT, with RMSE consistently exceeding 10. The network excels in 2× and 4× super-resolution tasks, demonstrating potential for remote sensing image interpretation, and shows promising results in 8× super-resolution tasks. Full article
(This article belongs to the Special Issue Advanced Remote Sensing Technologies and Their Applications)
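AdaIN, the injection mechanism named above, is a fixed formula: normalize content features per channel, then apply the style features' statistics. A minimal sketch with random stand-in feature maps:

```python
# Adaptive instance normalization (AdaIN).
import torch

def adain(content, style, eps=1e-5):
    """content, style: (N, C, H, W); re-statisticize content to match style."""
    c_mean = content.mean(dim=(2, 3), keepdim=True)
    c_std = content.std(dim=(2, 3), keepdim=True) + eps
    s_mean = style.mean(dim=(2, 3), keepdim=True)
    s_std = style.std(dim=(2, 3), keepdim=True)
    return s_std * (content - c_mean) / c_std + s_mean

content, style = torch.randn(1, 64, 32, 32), torch.randn(1, 64, 32, 32)
print(adain(content, style).shape)                      # torch.Size([1, 64, 32, 32])
```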
