Search Results (310)

Search Parameters:
Keywords = single image super-resolution

14 pages, 16969 KiB  
Article
FTT: A Frequency-Aware Texture Matching Transformer for Digital Bathymetry Model Super-Resolution
by Peikun Xiao, Jianping Wu and Yingjie Wang
J. Mar. Sci. Eng. 2025, 13(7), 1365; https://doi.org/10.3390/jmse13071365 (registering DOI) - 17 Jul 2025
Abstract
Deep learning has shown significant advantages over traditional spatial interpolation methods in single image super-resolution (SISR). Recently, many studies have applied super-resolution (SR) methods to generate high-resolution (HR) digital bathymetry models (DBMs), but substantial differences between DBM and natural images have been ignored, which leads to serious distortions and inaccuracies. Given the critical role of HR DBM in marine resource exploitation, economic development, and scientific innovation, we propose a frequency-aware texture matching transformer (FTT) for DBM SR, incorporating global terrain feature extraction (GTFE), high-frequency feature extraction (HFFE), and a terrain matching block (TMB). GTFE has the capability to perceive spatial heterogeneity and spatial locations, allowing it to accurately capture large-scale terrain features. HFFE can explicitly extract high-frequency priors beneficial for DBM SR and implicitly refine the representation of high-frequency information in the global terrain feature. TMB improves fidelity of generated HR DBM by generating position offsets to restore warped textures in deep features. Experimental results have demonstrated that the proposed FTT has superior performance in terms of elevation, slope, aspect, and fidelity of generated HR DBM. Notably, the root mean square error (RMSE) of elevation in steep terrain has been reduced by 4.89 m, which is a significant improvement in the accuracy and precision of the reconstruction. This research holds significant implications for improving the accuracy of DBM SR methods and the usefulness of HR bathymetry products for future marine research. Full article
(This article belongs to the Section Ocean Engineering)
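
The HFFE block described above explicitly extracts high-frequency priors from the bathymetry grid. As a rough illustration of that general idea (not the paper's learned module), the sketch below separates a depth grid into a smoothed trend and a high-frequency residual and computes the elevation RMSE metric quoted in the abstract; the helper names are hypothetical.

```python
# Rough illustration only: expose a high-frequency residual from a bathymetry grid and
# compute the elevation RMSE metric quoted above. The paper's HFFE is a learned module;
# these helper names are hypothetical.
import numpy as np
from scipy.ndimage import gaussian_filter

def high_frequency_prior(dbm: np.ndarray, sigma: float = 2.0) -> np.ndarray:
    """High-frequency residual of a depth grid: grid minus its smoothed trend."""
    return dbm - gaussian_filter(dbm, sigma=sigma)

def elevation_rmse(pred: np.ndarray, truth: np.ndarray) -> float:
    """Root mean square error of elevation."""
    return float(np.sqrt(np.mean((pred - truth) ** 2)))

if __name__ == "__main__":
    dbm = np.random.rand(128, 128) * 100.0        # stand-in depth grid (metres)
    print(high_frequency_prior(dbm).shape)
    print(elevation_rmse(dbm + 1.0, dbm))         # a uniform 1 m offset -> RMSE = 1.0
```
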
12 pages, 5633 KiB  
Article
Study on Joint Intensity in Real-Space and k-Space of SFS Super-Resolution Imaging via Multiplex Illumination Modulation
by Xiaoyu Yang, Haonan Zhang, Feihong Lin, Xu Liu and Qing Yang
Photonics 2025, 12(7), 717; https://doi.org/10.3390/photonics12070717 - 16 Jul 2025
Abstract
This paper studied the general mechanism of spatial-frequency-shift (SFS) super-resolution imaging based on multiplex illumination modulation. The theory of SFS joint intensity was first proposed. Experiments on parallel slots with a discrete spatial frequency (SF) distribution and on V-shape slots with a continuous SF distribution were carried out, and their real-space and k-space images were obtained. The influence of single illumination with different SFSs, and of mixed illumination with various combinations, on SFS super-resolution imaging was analyzed, and the resulting coverage of sample SFs was discussed. The SFS super-resolution imaging characteristics of low-coherence illumination and highly localized light fields were identified, and the image magnification that occurs during the SFS super-resolution imaging process was examined. The differences and connections between the SF spectrum of an object and the k-space images obtained during SFS super-resolution imaging were also explained. These results support the optimization of high-throughput SFS super-resolution imaging. Full article
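
For readers unfamiliar with the real-space/k-space terminology used above, the minimal sketch below computes a k-space (spatial-frequency) magnitude image from a real-space image with a 2D FFT; it illustrates only that standard relationship, not the authors' SFS illumination or reconstruction pipeline.

```python
# Minimal sketch of the real-space/k-space relationship: the k-space (spatial-frequency)
# magnitude of an image via a 2D FFT. It does not model the authors' SFS illumination
# or reconstruction pipeline.
import numpy as np

def k_space_magnitude(image: np.ndarray) -> np.ndarray:
    spectrum = np.fft.fftshift(np.fft.fft2(image))   # centre the zero frequency
    return np.log1p(np.abs(spectrum))                # log scale for display

if __name__ == "__main__":
    img = np.zeros((256, 256))
    img[:, ::8] = 1.0                                # parallel slots -> discrete SF peaks
    print(k_space_magnitude(img).shape)
```
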

25 pages, 5162 KiB  
Perspective
The Emerging Role of Omics-Based Approaches in Plant Virology
by Viktoriya Samarskaya, Nadezhda Spechenkova, Natalia O. Kalinina, Andrew J. Love and Michael Taliansky
Viruses 2025, 17(7), 986; https://doi.org/10.3390/v17070986 (registering DOI) - 15 Jul 2025
Viewed by 58
Abstract
Virus infections in plants are a major threat to crop production and sustainable agriculture, which results in significant yield losses globally. The past decade has seen the development and deployment of sophisticated high-throughput omics technologies including genomics, transcriptomics, proteomics, and metabolomics, in order to try to understand the mechanisms underlying plant–virus interactions and implement strategies to ameliorate crop losses. In this review, we discuss the current state-of-the-art applications of such key omics techniques, their challenges, future, and combinatorial use (e.g., single cell and spatial omics coupled with super-resolution high-throughput imaging methods and artificial intelligence-based predictive models) to obtain new mechanistic insights into plant–virus interactions, which could be exploited for more effective plant disease management and monitoring. Full article
(This article belongs to the Section Viruses of Plants, Fungi and Protozoa)

23 pages, 4237 KiB  
Article
Debris-Flow Erosion Volume Estimation Using a Single High-Resolution Optical Satellite Image
by Peng Zhang, Shang Wang, Guangyao Zhou, Yueze Zheng, Kexin Li and Luyan Ji
Remote Sens. 2025, 17(14), 2413; https://doi.org/10.3390/rs17142413 - 12 Jul 2025
Viewed by 204
Abstract
Debris flows pose significant risks to mountainous regions, and quick, accurate volume estimation is crucial for hazard assessment and post-disaster response. Traditional volume estimation methods, such as ground surveys and aerial photogrammetry, are often limited by cost, accessibility, and timeliness. While remote sensing offers wide coverage, existing optical and Synthetic Aperture Radar (SAR)-based techniques face challenges in direct volume estimation due to resolution constraints and rapid terrain changes. This study proposes a Super-Resolution Shape from Shading (SRSFS) approach enhanced by a Non-local Piecewise-smooth albedo Constraint (NPC), hereafter referred to as NPC SRSFS, to estimate debris-flow erosion volume using single high-resolution optical satellite imagery. By integrating publicly available global Digital Elevation Model (DEM) data as prior terrain reference, the method enables accurate post-disaster topography reconstruction from a single optical image, thereby reducing reliance on stereo imagery. The NPC constraint improves the robustness of albedo estimation under heterogeneous surface conditions, enhancing depth recovery accuracy. The methodology is evaluated using Gaofen-6 satellite imagery, with quantitative comparisons to aerial Light Detection and Ranging (LiDAR) data. Results show that the proposed method achieves reliable terrain reconstruction and erosion volume estimates, with accuracy comparable to airborne LiDAR. This study demonstrates the potential of NPC SRSFS as a rapid, cost-effective alternative for post-disaster debris-flow assessment. Full article
(This article belongs to the Section Remote Sensing in Geology, Geomorphology and Hydrology)
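
Once a post-event terrain grid has been reconstructed and co-registered with the prior DEM, the erosion volume itself reduces to integrating the negative elevation change. The sketch below shows only that final step, under those assumptions; it does not reproduce the NPC SRSFS reconstruction.

```python
# Hedged sketch of the final volume-integration step: given pre-event and post-event
# terrain grids on the same raster, erosion volume is the summed elevation loss times
# the cell area. The SRSFS depth reconstruction itself is not shown.
import numpy as np

def erosion_volume(dem_pre: np.ndarray, dem_post: np.ndarray, cell_size_m: float) -> float:
    """Volume (m^3) of material removed, counting only cells that lost elevation."""
    dz = dem_pre - dem_post                 # positive where the surface was lowered
    eroded = np.clip(dz, 0.0, None)         # ignore deposition cells
    return float(eroded.sum() * cell_size_m ** 2)

if __name__ == "__main__":
    pre = np.full((100, 100), 1200.0)       # synthetic grid at 2 m resolution
    post = pre.copy()
    post[40:60, 40:60] -= 3.0               # a 20 x 20-cell scour, 3 m deep
    print(erosion_volume(pre, post, cell_size_m=2.0))  # 400 cells * 4 m^2 * 3 m = 4800 m^3
```
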

16 pages, 2376 KiB  
Article
Nested U-Net-Based GAN Model for Super-Resolution of Stained Light Microscopy Images
by Seong-Hyeon Kang and Ji-Youn Kim
Photonics 2025, 12(7), 665; https://doi.org/10.3390/photonics12070665 - 1 Jul 2025
Viewed by 283
Abstract
The purpose of this study was to propose a deep learning-based model for the super-resolution reconstruction of stained light microscopy images. To achieve this, perceptual loss was applied to the generator to reflect multichannel signal intensity, distribution, and structural similarity. A nested U-Net architecture was employed to address the representational limitations of the conventional U-Net. For quantitative evaluation, the peak signal-to-noise ratio (PSNR), structural similarity index (SSIM), and correlation coefficient (CC) were calculated. In addition, intensity profile analysis was performed to assess the model’s ability to restore boundary signals precisely. The experimental results demonstrated that the proposed model outperformed both the single U-Net and the U-Net-based generative adversarial network (GAN) models in signal and structural restoration. The PSNR, SSIM, and CC values showed relative improvements of approximately 1.017, 1.023, and 1.010 times, respectively, compared to the input images. In particular, the intensity profile analysis confirmed the effectiveness of the nested U-Net-based generator in restoring cellular boundaries and structures in the stained microscopy images. In conclusion, the proposed model effectively enhanced the resolution of stained light microscopy images acquired in a multichannel format. Full article
(This article belongs to the Special Issue Recent Advances in Biomedical Optics and Biophotonics)
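
A brief sketch of the three quantitative metrics named in the abstract (PSNR, SSIM, and the correlation coefficient), here computed with scikit-image and NumPy; the paper's exact data ranges and window settings may differ.

```python
# Sketch of the three metrics quoted above. Data range and SSIM window settings are
# assumptions and may differ from the paper's evaluation protocol.
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def evaluate(sr: np.ndarray, hr: np.ndarray) -> dict:
    return {
        "PSNR": peak_signal_noise_ratio(hr, sr, data_range=1.0),
        "SSIM": structural_similarity(hr, sr, data_range=1.0),
        "CC": float(np.corrcoef(hr.ravel(), sr.ravel())[0, 1]),
    }

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    hr = rng.random((64, 64))
    sr = np.clip(hr + rng.normal(0, 0.05, hr.shape), 0, 1)   # toy "reconstruction"
    print(evaluate(sr, hr))
```
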

20 pages, 3406 KiB  
Article
Single-Image Super-Resolution via Cascaded Non-Local Mean Network and Dual-Path Multi-Branch Fusion
by Yu Xu and Yi Wang
Sensors 2025, 25(13), 4044; https://doi.org/10.3390/s25134044 - 28 Jun 2025
Viewed by 482
Abstract
Image super-resolution (SR) aims to reconstruct high-resolution (HR) images from low-resolution (LR) inputs. It plays a crucial role in applications such as medical imaging, surveillance, and remote sensing. However, due to the ill-posed nature of the task and the inherent limitations of imaging sensors, obtaining accurate HR images remains challenging. While numerous methods have been proposed, the traditional approaches suffer from oversmoothing and limited generalization; CNN-based models lack the ability to capture long-range dependencies; and Transformer-based solutions, although effective in modeling global context, are computationally intensive and prone to texture loss. To address these issues, we propose a hybrid CNN–Transformer architecture that cascades a pixel-wise self-attention non-local means module (PSNLM) and an adaptive dual-path multi-scale fusion block (ADMFB). The PSNLM is inspired by the non-local means (NLM) algorithm: it uses weighted patches to estimate the similarity between the pixels at the patch centers, limits the search region, and constructs a communication mechanism across search ranges. The ADMFB enhances texture reconstruction by adaptively aggregating multi-scale features through dual attention paths. The experimental results demonstrate that our method achieves superior performance on multiple benchmarks. For instance, in challenging ×4 super-resolution, our method outperforms the second-best method by 0.0201 in Structural Similarity Index (SSIM) on the BSD100 dataset. On the texture-rich Urban100 dataset, our method achieves a 26.56 dB Peak Signal-to-Noise Ratio (PSNR) and 0.8133 SSIM. Full article
(This article belongs to the Section Sensing and Imaging)
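
The PSNLM module is described as being inspired by non-local means with weighted patches and a limited search region. The sketch below shows the classic (non-learned) form of that idea for a single pixel; the paper's module is a learned pixel-wise self-attention variant, so this is only the underlying principle.

```python
# Hedged sketch of classic non-local means: weight neighbours inside a limited search
# window by patch similarity. This is the principle behind PSNLM, not its implementation.
import numpy as np

def nlm_pixel(img, y, x, patch=3, search=7, h=0.1):
    """Denoise one pixel by weighting neighbours by patch similarity."""
    r, s = patch // 2, search // 2
    pad = np.pad(img, r + s, mode="reflect")
    yc, xc = y + r + s, x + r + s
    ref = pad[yc - r:yc + r + 1, xc - r:xc + r + 1]
    weights, values = [], []
    for dy in range(-s, s + 1):
        for dx in range(-s, s + 1):
            cand = pad[yc + dy - r:yc + dy + r + 1, xc + dx - r:xc + dx + r + 1]
            d2 = np.mean((ref - cand) ** 2)          # patch distance
            weights.append(np.exp(-d2 / (h * h)))    # similarity weight
            values.append(pad[yc + dy, xc + dx])
    w = np.array(weights)
    return float(np.dot(w, values) / w.sum())

if __name__ == "__main__":
    img = np.random.rand(32, 32)
    print(nlm_pixel(img, 16, 16))
```
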

20 pages, 2149 KiB  
Article
Accelerating Facial Image Super-Resolution via Sparse Momentum and Encoder State Reuse
by Kerang Cao, Na Bao, Shuai Zheng, Ye Liu and Xing Wang
Electronics 2025, 14(13), 2616; https://doi.org/10.3390/electronics14132616 - 28 Jun 2025
Viewed by 361
Abstract
Single image super-resolution (SISR) aims to reconstruct high-quality images from low-resolution inputs, a persistent challenge in computer vision with critical applications in medical imaging, satellite imagery, and video enhancement. Traditional diffusion model-based (DM-based) methods, while effective in restoring fine details, suffer from computational inefficiency due to their iterative denoising process. To address this, we introduce the Sparse Momentum-based Faster Diffusion Model (SMFDM), designed for rapid and high-fidelity super-resolution. SMFDM integrates a novel encoder state reuse mechanism that selectively omits non-critical time steps during the denoising phase, significantly reducing computational redundancy. Additionally, the model employs a sparse momentum mechanism, enabling robust representation capabilities while utilizing only a fraction of the original model weights. Experiments demonstrate that SMFDM achieves an impressive 71.04% acceleration in the diffusion process, requiring only 15% of the original weights, while maintaining high-quality outputs with effective preservation of image details and textures. Our work highlights the potential of combining sparse learning and efficient sampling strategies to enhance the practical applicability of diffusion models for super-resolution tasks. Full article
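
The encoder state reuse mechanism amounts to running the expensive UNet encoder only at selected "key" timesteps of the reverse diffusion and reusing its cached features elsewhere. A hedged sketch of that control flow follows; the module names, step schedule, and update rule are placeholders, not the SMFDM implementation.

```python
# Hedged sketch of encoder-state reuse in a reverse-diffusion loop: the encoder is
# recomputed only at key timesteps; otherwise its cached features are reused by the
# decoder. All names and the update rule are placeholders, not SMFDM itself.
import torch

def sample_with_state_reuse(encoder, decoder, x_T, timesteps, key_steps):
    x, cached = x_T, None
    for t in reversed(timesteps):
        if t in key_steps or cached is None:
            cached = encoder(x, t)            # expensive pass, run sparsely
        eps = decoder(x, t, cached)           # decoder always runs, reusing features
        x = x - 0.01 * eps                    # placeholder update (not a real scheduler)
    return x

if __name__ == "__main__":
    enc = lambda x, t: x.mean()               # toy stand-ins for the UNet halves
    dec = lambda x, t, feats: x * 0 + feats
    out = sample_with_state_reuse(enc, dec, torch.randn(1, 3, 8, 8),
                                  timesteps=range(50), key_steps=set(range(0, 50, 5)))
    print(out.shape)
```
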

14 pages, 6476 KiB  
Article
Evaluating Second-Generation Deep Learning Technique for Noise Reduction in Myocardial T1-Mapping Magnetic Resonance Imaging
by Shungo Sawamura, Shingo Kato, Naofumi Yasuda, Takumi Iwahashi, Takamasa Hirano, Taiga Kato and Daisuke Utsunomiya
Diseases 2025, 13(5), 157; https://doi.org/10.3390/diseases13050157 - 18 May 2025
Viewed by 508
Abstract
Background: T1 mapping has become a valuable technique in cardiac magnetic resonance imaging (CMR) for evaluating myocardial tissue properties. However, its quantitative accuracy remains limited by noise-related variability. Super-resolution deep learning-based reconstruction (SR-DLR) has shown potential in enhancing image quality across various MRI applications, yet its effectiveness in myocardial T1 mapping has not been thoroughly investigated. This study aimed to evaluate the impact of SR-DLR on noise reduction and measurement consistency in myocardial T1 mapping. Methods: This single-center retrospective observational study included 36 patients who underwent CMR between July and December 2023. T1 mapping was performed using a modified Look-Locker inversion recovery (MOLLI) sequence before and after contrast administration. Images were reconstructed with and without SR-DLR using identical scan data. Phantom studies using seven homemade phantoms with different Gd-DOTA dilution ratios were also conducted. Quantitative evaluation included mean T1 values, standard deviation (SD), and coefficient of variation (CV). Intraclass correlation coefficients (ICCs) were calculated to assess inter-observer agreement. Results: SR-DLR had no significant effect on mean native or post-contrast T1 values but significantly reduced SD and CV in both patient and phantom studies. SD decreased from 44.0 to 31.8 ms (native) and 20.0 to 14.1 ms (post-contrast), and CV also improved. ICCs indicated excellent inter-observer reproducibility (native: 0.822; post-contrast: 0.955). Conclusions: SR-DLR effectively reduces measurement variability while preserving T1 accuracy, enhancing the reliability of myocardial T1 mapping in both clinical and research settings. Full article
(This article belongs to the Section Cardiology)
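
For reference, the variability statistics reported above (SD and CV of ROI T1 values) can be computed as in the short sketch below; the ICC analysis would additionally require the paired measurements from both observers.

```python
# Sketch of per-ROI variability statistics: standard deviation and coefficient of
# variation of T1 values inside a myocardial region of interest. Values are synthetic.
import numpy as np

def t1_variability(t1_roi_ms: np.ndarray) -> dict:
    mean = float(np.mean(t1_roi_ms))
    sd = float(np.std(t1_roi_ms, ddof=1))
    return {"mean_ms": mean, "SD_ms": sd, "CV_percent": 100.0 * sd / mean}

if __name__ == "__main__":
    roi = np.random.normal(loc=1200.0, scale=44.0, size=500)   # native-T1-like values
    print(t1_variability(roi))
```
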

13 pages, 3165 KiB  
Article
Self-Supervised Infrared Video Super-Resolution Based on Deformable Convolution
by Jian Chen, Yan Zhao, Mo Chen, Yuwei Wang and Xin Ye
Electronics 2025, 14(10), 1995; https://doi.org/10.3390/electronics14101995 - 14 May 2025
Viewed by 423
Abstract
Infrared video often suffers from low resolution, which makes target detection and recognition difficult. Super-resolution (SR) is an effective technology for enhancing the resolution of infrared video. However, most existing SR methods for infrared imagery operate on single images, which limits SR performance because the strong inter-frame correlation in video is ignored. We propose a self-supervised SR method for infrared video that can estimate the blur kernel and generate paired data from the raw low-resolution infrared video itself, without the need for additional high-resolution videos for supervision. Furthermore, to overcome the limitations of optical flow prediction in handling complex motion, a deformable convolutional network is introduced to adaptively learn motion information and capture more accurate, subtle motion changes between adjacent frames in an infrared video. Experimental results show that the proposed method restores images with outstanding quality in terms of both visual effect and quantitative metrics. Full article
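
Deformable convolution of the kind used here for inter-frame alignment is available in torchvision. The sketch below shows one common pattern: offsets are predicted from a concatenated reference/neighbour feature pair and used to warp the neighbour features; the offset-prediction head and channel sizes are toy stand-ins, not the paper's network.

```python
# Hedged sketch of frame alignment with deformable convolution (torchvision.ops).
# The offset-prediction and channel choices here are toy assumptions.
import torch
import torch.nn as nn
from torchvision.ops import DeformConv2d

class DeformAlign(nn.Module):
    def __init__(self, channels: int = 16, k: int = 3):
        super().__init__()
        # offsets (2 * k * k channels) predicted from the concatenated frame pair
        self.offset_pred = nn.Conv2d(2 * channels, 2 * k * k, 3, padding=1)
        self.deform = DeformConv2d(channels, channels, k, padding=k // 2)

    def forward(self, ref_feat: torch.Tensor, nbr_feat: torch.Tensor) -> torch.Tensor:
        offsets = self.offset_pred(torch.cat([ref_feat, nbr_feat], dim=1))
        return self.deform(nbr_feat, offsets)        # neighbour features warped to ref

if __name__ == "__main__":
    ref, nbr = torch.randn(1, 16, 32, 32), torch.randn(1, 16, 32, 32)
    print(DeformAlign()(ref, nbr).shape)             # torch.Size([1, 16, 32, 32])
```
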

10 pages, 2448 KiB  
Article
Image Generation and Super-Resolution Reconstruction of Synthetic Aperture Radar Images Based on an Improved Single-Image Generative Adversarial Network
by Xuguang Yang, Lixia Nie, Yun Zhang and Ling Zhang
Information 2025, 16(5), 370; https://doi.org/10.3390/info16050370 - 30 Apr 2025
Viewed by 493
Abstract
This paper presents a novel method for the super-resolution reconstruction and generation of synthetic aperture radar (SAR) images with an improved single-image generative adversarial network (ISinGAN). Unlike traditional machine learning methods, which typically require large datasets, SinGAN needs only a single input image to extract internal structural details and generate high-quality samples. To improve this framework further, we augmented SinGAN with a self-attention module and incorporated noise specific to SAR images. These enhancements ensure that the generated images are more aligned with real-world SAR scenarios while also improving the robustness of the SinGAN framework. Experimental results demonstrate that ISinGAN significantly enhances SAR image resolution and target recognition performance. Full article
(This article belongs to the Section Artificial Intelligence)
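
The abstract states that a self-attention module was added to SinGAN but does not specify its design; the sketch below shows a standard SAGAN-style self-attention block of the kind commonly inserted into GAN generators, as one plausible form.

```python
# Hedged sketch of a generic self-attention block for a GAN generator (SAGAN-style).
# The ISinGAN paper's exact module design is not specified here; this is an assumption.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SelfAttention2d(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        self.q = nn.Conv2d(channels, channels // 8, 1)
        self.k = nn.Conv2d(channels, channels // 8, 1)
        self.v = nn.Conv2d(channels, channels, 1)
        self.gamma = nn.Parameter(torch.zeros(1))    # learnable residual weight

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        q = self.q(x).flatten(2).transpose(1, 2)     # (b, hw, c/8)
        k = self.k(x).flatten(2)                     # (b, c/8, hw)
        attn = F.softmax(q @ k, dim=-1)              # (b, hw, hw) pairwise similarity
        v = self.v(x).flatten(2)                     # (b, c, hw)
        out = (v @ attn.transpose(1, 2)).view(b, c, h, w)
        return self.gamma * out + x                  # residual connection

if __name__ == "__main__":
    print(SelfAttention2d(32)(torch.randn(1, 32, 16, 16)).shape)
```
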

19 pages, 2806 KiB  
Article
SP-IGAN: An Improved GAN Framework for Effective Utilization of Semantic Priors in Real-World Image Super-Resolution
by Meng Wang, Zhengnan Li, Haipeng Liu, Zhaoyu Chen and Kewei Cai
Entropy 2025, 27(4), 414; https://doi.org/10.3390/e27040414 - 11 Apr 2025
Cited by 1 | Viewed by 518
Abstract
Single-image super-resolution (SISR) based on GANs has achieved significant progress. However, these methods still face challenges when reconstructing locally consistent textures due to a lack of semantic understanding of image categories. This highlights the necessity of focusing on contextual information comprehension and the acquisition of high-frequency details in model design. To address this issue, we propose the Semantic Prior-Improved GAN (SP-IGAN) framework, which incorporates additional contextual semantic information into the Real-ESRGAN model. The framework consists of two branches. The main branch introduces a Graph Convolutional Channel Attention (GCCA) module to transform channel dependencies into adjacency relationships between feature vertices, thereby enhancing pixel associations. The auxiliary branch strengthens the correlation between semantic category information and regional textures in the Residual-in-Residual Dense Block (RRDB) module. The auxiliary branch employs a pretrained segmentation model to accurately extract regional semantic information from the input low-resolution image. This information is injected into the RRDB module through Spatial Feature Transform (SFT) layers, generating more accurate and semantically consistent texture details. Additionally, a wavelet loss is incorporated into the loss function to capture high-frequency details that are often overlooked. The experimental results demonstrate that the proposed SP-IGAN outperforms state-of-the-art (SOTA) super-resolution models across multiple public datasets. For the X4 super-resolution task, SP-IGAN achieves a 0.55 dB improvement in Peak Signal-to-Noise Ratio (PSNR) and a 0.0363 increase in Structural Similarity Index (SSIM) compared to the baseline model Real-ESRGAN. Full article
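
A wavelet loss of the kind mentioned above can be realized as an L1 penalty on the high-frequency sub-bands of a Haar decomposition of the SR and HR images. The sketch below shows that generic formulation; the paper's exact wavelet basis and weighting are not specified here.

```python
# Hedged sketch of a generic wavelet loss: L1 distance between the high-frequency Haar
# sub-bands of the SR output and the HR target. Not SP-IGAN's exact formulation.
import torch
import torch.nn.functional as F

def haar_subbands(x: torch.Tensor):
    """Single-level Haar decomposition of an NCHW tensor (H and W must be even)."""
    a, b = x[..., 0::2, :], x[..., 1::2, :]          # split rows
    lo, hi = (a + b) / 2, (a - b) / 2
    ll, lh = (lo[..., 0::2] + lo[..., 1::2]) / 2, (lo[..., 0::2] - lo[..., 1::2]) / 2
    hl, hh = (hi[..., 0::2] + hi[..., 1::2]) / 2, (hi[..., 0::2] - hi[..., 1::2]) / 2
    return ll, lh, hl, hh

def wavelet_loss(sr: torch.Tensor, hr: torch.Tensor) -> torch.Tensor:
    loss = 0.0
    for s, h in zip(haar_subbands(sr)[1:], haar_subbands(hr)[1:]):  # high-freq bands only
        loss = loss + F.l1_loss(s, h)
    return loss

if __name__ == "__main__":
    sr, hr = torch.rand(1, 3, 64, 64), torch.rand(1, 3, 64, 64)
    print(wavelet_loss(sr, hr).item())
```
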

28 pages, 33565 KiB  
Article
Taming a Diffusion Model to Revitalize Remote Sensing Image Super-Resolution
by Chao Zhu, Yong Liu, Shan Huang and Fei Wang
Remote Sens. 2025, 17(8), 1348; https://doi.org/10.3390/rs17081348 - 10 Apr 2025
Cited by 1 | Viewed by 1503
Abstract
Conventional neural network-based approaches for single remote sensing image super-resolution (SRSISR) have made remarkable progress. However, the super-resolution outputs produced by these methods often fall short in terms of visual quality. Recent advances in diffusion models for image generation have demonstrated remarkable potential for enhancing the visual content of super-resolved images. Despite this promise, existing large diffusion models are predominantly trained on natural images, whose data distribution differs substantially from that of remote sensing images (RSIs); this disparity makes it difficult to apply such models to RSIs directly. Moreover, while diffusion models possess powerful generative capabilities, their output must be carefully controlled to generate accurate details, as the objects in RSIs are small and blurry. In this paper, we introduce RSDiffSR, a novel SRSISR method based on a conditional diffusion model. This framework ensures the high-quality super-resolution of RSIs through three key contributions. First, it leverages a large diffusion model as a generative prior, which substantially enhances the visual quality of super-resolved RSIs. Second, it incorporates low-rank adaptation into the diffusion UNet and multi-stage training process to address the domain gap caused by differences in data distributions. Third, an enhanced control mechanism is designed to process the content and edge information of RSIs, providing effective guidance during the diffusion process. Experimental results demonstrate that the proposed RSDiffSR achieves state-of-the-art performance in both quantitative and qualitative evaluations across multiple benchmarks. Full article
(This article belongs to the Special Issue Remote Sensing Cross-Modal Research: Algorithms and Practices)
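
Low-rank adaptation as used here keeps the pretrained UNet weights frozen and trains only a small low-rank update per adapted layer. The sketch below shows the generic LoRA mechanism for a linear layer; how RSDiffSR wires it into the diffusion UNet and its multi-stage training is not shown.

```python
# Hedged sketch of generic low-rank adaptation (LoRA) for a linear layer: the base
# weight stays frozen and only the low-rank update B @ A is trained. Rank and alpha
# are illustrative assumptions.
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad_(False)                       # frozen pretrained weight
        self.A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, rank))
        self.scale = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)

if __name__ == "__main__":
    layer = LoRALinear(nn.Linear(320, 320))
    print(layer(torch.randn(2, 77, 320)).shape)           # torch.Size([2, 77, 320])
```
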

19 pages, 2647 KiB  
Article
FDI-VSR: Video Super-Resolution Through Frequency-Domain Integration and Dynamic Offset Estimation
by Donghun Lim and Janghoon Choi
Sensors 2025, 25(8), 2402; https://doi.org/10.3390/s25082402 - 10 Apr 2025
Viewed by 779
Abstract
The increasing adoption of high-resolution imaging sensors across various fields has led to a growing demand for techniques to enhance video quality. Video super-resolution (VSR) addresses this need by reconstructing high-resolution videos from lower-resolution inputs; however, directly applying single-image super-resolution (SISR) methods to video sequences neglects temporal information, resulting in inconsistent and unnatural outputs. In this paper, we propose FDI-VSR, a novel framework that integrates spatiotemporal dynamics and frequency-domain analysis into conventional SISR models without extensive modifications. We introduce two key modules: the Spatiotemporal Feature Extraction Module (STFEM), which employs dynamic offset estimation, spatial alignment, and multi-stage temporal aggregation using residual channel attention blocks (RCABs); and the Frequency–Spatial Integration Module (FSIM), which transforms deep features into the frequency domain to effectively capture global context beyond the limited receptive field of standard convolutions. Extensive experiments on the Vid4, SPMCs, REDS4, and UDM10 benchmarks, supported by detailed ablation studies, demonstrate that FDI-VSR not only surpasses conventional VSR methods but also achieves competitive results compared to recent state-of-the-art methods, with improvements of up to 0.82 dB in PSNR on the SPMCs benchmark and notable reductions in visual artifacts, all while maintaining lower computational complexity and faster inference. Full article
(This article belongs to the Section Sensing and Imaging)
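
The frequency-domain idea behind the FSIM module (transform features with an FFT so that a per-frequency operation acts globally across the whole image, then transform back) can be sketched as below; the learnable per-frequency weights and residual connection are assumptions, not the paper's exact module.

```python
# Hedged sketch of a frequency-domain feature mixer: FFT the feature maps, modulate
# each frequency bin with a learnable complex weight (a globally acting operation),
# then inverse-FFT. Not the FSIM module itself.
import torch
import torch.nn as nn

class FourierMix(nn.Module):
    def __init__(self, channels: int, height: int, width: int):
        super().__init__()
        w = torch.zeros(channels, height, width // 2 + 1, 2)
        w[..., 0] = 1.0                                   # start as an identity filter
        self.weight = nn.Parameter(w)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        spec = torch.fft.rfft2(x, norm="ortho")           # (N, C, H, W//2+1), complex
        spec = spec * torch.view_as_complex(self.weight)
        return torch.fft.irfft2(spec, s=x.shape[-2:], norm="ortho") + x  # residual

if __name__ == "__main__":
    x = torch.randn(1, 32, 48, 48)
    print(FourierMix(32, 48, 48)(x).shape)                # torch.Size([1, 32, 48, 48])
```
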

19 pages, 13012 KiB  
Article
Neural Network-Based Temporal Ensembling of Water Depth Estimates Derived from SuperDove Images
by Milad Niroumand-Jadidi, Carl J. Legleiter and Francesca Bovolo
Remote Sens. 2025, 17(7), 1309; https://doi.org/10.3390/rs17071309 - 6 Apr 2025
Viewed by 596
Abstract
CubeSats provide a wealth of high-frequency observations at a meter-scale spatial resolution. However, most current methods of inferring water depth from satellite data consider only a single image. This approach is sensitive to the radiometric quality of the data acquired at that particular instant in time, which could be degraded by various confounding factors, such as sun glint or atmospheric effects. Moreover, using single images in isolation fails to exploit recent improvements in the frequency of satellite image acquisition. This study aims to leverage the dense image time series from the SuperDove constellation via an ensembling framework that helps to improve empirical (regression-based) bathymetry retrieval. Unlike previous studies that only ensembled the original spectral data, we introduce a neural network-based method that instead ensembles the water depths derived from multi-temporal imagery, provided the data are acquired under steady flow conditions. We refer to this new approach as NN-depth ensembling. First, every image is treated individually to derive multi-temporal depth estimates. Then, we use another NN regressor to ensemble the temporal water depths. This step serves to automatically weight the contribution of the bathymetric estimates from each time instance to the final bathymetry product. Unlike methods that ensemble spectral data, NN-depth ensembling mitigates the propagation of uncertainties in the spectral data (e.g., noise due to sun glint) to the final bathymetric product. The proposed NN-depth ensembling is applied to temporal SuperDove imagery of reaches from the American, Potomac, and Colorado rivers with depths of up to 10 m and evaluated against in situ measurements. The proposed method provided more accurate and robust bathymetry retrieval than single-image analyses and other ensembling approaches. Full article
(This article belongs to the Special Issue Advances in Remote Sensing of the Inland and Coastal Water Zones II)
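
NN-depth ensembling, as described, first derives a depth estimate from each image and then trains a second regressor on those per-image depths at calibration points. The synthetic sketch below illustrates that two-stage arrangement with scikit-learn; the feature layout, network sizes, and noise model are assumptions.

```python
# Hedged, synthetic sketch of NN-depth ensembling: per-image depth retrievals at the
# calibration points become the inputs of a second regressor that predicts field depth.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
n_points, n_images = 200, 6
true_depth = rng.uniform(0.2, 10.0, n_points)                 # in situ depths (m)
# single-image retrievals: true depth plus image-specific noise/bias (e.g., glint)
per_image_depths = np.stack(
    [true_depth + rng.normal(0.2 * i, 0.3 + 0.1 * i, n_points) for i in range(n_images)],
    axis=1,
)                                                             # shape (n_points, n_images)

ensembler = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=3000, random_state=0)
ensembler.fit(per_image_depths[:150], true_depth[:150])       # train on calibration split
pred = ensembler.predict(per_image_depths[150:])
rmse = np.sqrt(np.mean((pred - true_depth[150:]) ** 2))
single = np.sqrt(np.mean((per_image_depths[150:, 0] - true_depth[150:]) ** 2))
print(f"ensembled RMSE: {rmse:.2f} m vs. single-image RMSE: {single:.2f} m")
```
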

20 pages, 2203 KiB  
Article
PixelCraftSR: Efficient Super-Resolution with Multi-Agent Reinforcement for Edge Devices
by M. J. Aashik Rasool, Shabir Ahmed, S. M. A. Sharif, Mardieva Sevara and Taeg Keun Whangbo
Sensors 2025, 25(7), 2242; https://doi.org/10.3390/s25072242 - 2 Apr 2025
Viewed by 839
Abstract
Single-image super-resolution imaging methods are increasingly being employed owing to their immense applicability in numerous domains, such as medical imaging, display manufacturing, and digital zooming. Despite their widespread usability, the existing learning-based super-resolution (SR) methods are computationally expensive and inefficient for resource-constrained IoT devices. In this study, we propose a lightweight model based on a multi-agent reinforcement-learning approach that employs multiple agents at the pixel level to construct super-resolution images by following the asynchronous actor–critic policy. The agents iteratively select a predefined set of actions to be executed within five time steps based on the new image state, followed by the action that maximizes the cumulative reward. We thoroughly evaluate and compare our proposed method with existing super-resolution methods. Experimental results illustrate that the proposed method can outperform the existing models in both qualitative and quantitative scores despite having significantly less computational complexity. The practicability of the proposed method is confirmed further by evaluating it on numerous IoT platforms, including edge devices. Full article
(This article belongs to the Section Internet of Things)
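
The core loop described above, in which each pixel-agent chooses one action from a predefined set at each of five steps and reward accumulates over the episode, is sketched below with a toy action set and a random policy standing in for the trained asynchronous actor–critic policy.

```python
# Hedged sketch of a pixel-wise action loop: every pixel picks one action per step and
# reward is the per-pixel improvement toward the target. The random policy and toy
# action set are placeholders for the trained actor-critic agents.
import numpy as np

ACTIONS = [
    lambda p: p,                # keep value
    lambda p: p + 1.0 / 255,    # brighten slightly
    lambda p: p - 1.0 / 255,    # darken slightly
]

def run_episode(img: np.ndarray, target: np.ndarray, steps: int = 5, seed: int = 0):
    rng = np.random.default_rng(seed)
    state, total_reward = img.copy(), 0.0
    for _ in range(steps):
        choice = rng.integers(0, len(ACTIONS), size=state.shape)   # one action per pixel
        new_state = np.select(
            [choice == i for i in range(len(ACTIONS))],
            [a(state) for a in ACTIONS],
        )
        # reward: reduction in squared error toward the target, per pixel
        reward = (state - target) ** 2 - (new_state - target) ** 2
        total_reward += float(reward.sum())
        state = new_state
    return state, total_reward

if __name__ == "__main__":
    lr = np.random.rand(16, 16)
    hr = np.clip(lr + 0.01, 0, 1)
    _, r = run_episode(lr, hr)
    print(f"cumulative reward: {r:.4f}")
```
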
