Search Results (401)

Search Parameters:
Keywords = image reconstruction-restoration

11 pages, 60623 KiB  
Article
Super Resolution for Mangrove UAV Remote Sensing Images
by Qin Qin, Wenlong Dai and Xin Wang
Symmetry 2025, 17(8), 1250; https://doi.org/10.3390/sym17081250 - 6 Aug 2025
Abstract
Mangroves play a crucial role in ecosystems, and the accurate classification and real-time monitoring of mangrove species are essential for their protection and restoration. To improve the segmentation performance of mangrove UAV remote sensing images, this study performs species segmentation after the super-resolution (SR) reconstruction of images. Therefore, we propose SwinNET, an SR reconstruction network. We design a convolutional enhanced channel attention (CEA) module within the network to enhance feature reconstruction through channel attention. Additionally, the Neighborhood Attention Transformer (NAT) is introduced to help the model better focus on neighborhood features, aiming to improve the reconstruction of leaf details. These two attention mechanisms are symmetrically integrated within the network to jointly capture complementary information from spatial and channel dimensions. The experimental results demonstrate that SwinNET not only achieves superior performance in SR tasks but also significantly enhances the segmentation accuracy of mangrove species. Full article
(This article belongs to the Section Computer)
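The abstract describes the CEA module only at a high level. As a generic illustration of channel attention — a squeeze-and-excitation-style gate, not the paper's actual CEA design (the weights and shapes below are made up) — the mechanism can be sketched in NumPy:

```python
import numpy as np

def channel_attention(x, w1, w2):
    """Squeeze-and-excitation-style channel attention (generic sketch).
    x: (C, H, W) feature map; w1: (C, C//r); w2: (C//r, C)."""
    squeeze = x.mean(axis=(1, 2))                # global average pool -> (C,)
    hidden = np.maximum(squeeze @ w1, 0.0)       # ReLU bottleneck
    gate = 1.0 / (1.0 + np.exp(-(hidden @ w2)))  # sigmoid -> per-channel weight
    return x * gate[:, None, None]               # reweight each channel

rng = np.random.default_rng(0)
feat = rng.standard_normal((8, 4, 4))
w1 = rng.standard_normal((8, 2)) * 0.1  # hypothetical learned weights
w2 = rng.standard_normal((2, 8)) * 0.1
out = channel_attention(feat, w1, w2)
print(out.shape)  # (8, 4, 4)
```

Each channel is rescaled by a gate in (0, 1), so informative channels can be emphasized without altering the spatial layout.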

18 pages, 28832 KiB  
Article
Mars-On-Orbit Color Image Spectrum Model and Color Restoration
by Hongfeng Long, Sainan Liu, Yuebo Ma, Junzhe Zeng, Kaili Lu and Rujin Zhao
Aerospace 2025, 12(8), 696; https://doi.org/10.3390/aerospace12080696 - 4 Aug 2025
Abstract
Deep space Color Remote Sensing Images (DCRSIs) are of great significance in reconstructing the three-dimensional appearance of celestial bodies. Among them, deep space color restoration, as a means to ensure the authenticity of deep space image colors, has significant research value. The existing deep space color restoration methods have gradually evolved into a joint restoration mode that integrates color images and spectrometers to overcome the limitations of on-orbit calibration plates; however, there is limited research on theoretical models for this type of method. Therefore, this article begins with the physical process of deep space color imaging, gradually establishes a color imaging spectral model, and proposes a new color restoration method for the color restoration of Mars remote sensing images. The experiment verifies that our proposed method can significantly reduce color deviation, achieving an average of 8.43 CIE DE 2000 color deviation units, a decrease of 2.63 (23.78%) compared to the least squares method. The color deviation decreased by 21.47 (71.81%) compared to before restoration. Hence, our method can improve the accuracy of color restoration of DCRSIs in space orbit. Full article
(This article belongs to the Section Astronautics & Space Science)
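The least-squares baseline the authors compare against typically fits a linear color-correction matrix between measured and reference colors. A minimal sketch of that baseline (the calibration triplets below are synthetic, not the paper's Mars data):

```python
import numpy as np

# Hypothetical data: rows are measured vs. reference RGB triplets for a
# handful of calibration targets (values are invented for illustration).
measured = np.array([[0.9, 0.2, 0.1],
                     [0.2, 0.8, 0.2],
                     [0.1, 0.2, 0.9],
                     [0.5, 0.5, 0.5]])
true_M = np.array([[1.10, 0.00, 0.05],
                   [0.02, 0.95, 0.00],
                   [0.00, 0.03, 1.20]])
reference = measured @ true_M  # synthetic "ground truth" colors

# Least-squares fit of a 3x3 correction matrix: reference ~= measured @ M
M, *_ = np.linalg.lstsq(measured, reference, rcond=None)
corrected = measured @ M
print(np.abs(corrected - reference).max())
```

With noise-free synthetic data the fit recovers the matrix exactly; on real images the residual after correction is what a metric like CIE DE 2000 quantifies.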

28 pages, 3794 KiB  
Article
A Robust System for Super-Resolution Imaging in Remote Sensing via Attention-Based Residual Learning
by Rogelio Reyes-Reyes, Yeredith G. Mora-Martinez, Beatriz P. Garcia-Salgado, Volodymyr Ponomaryov, Jose A. Almaraz-Damian, Clara Cruz-Ramos and Sergiy Sadovnychiy
Mathematics 2025, 13(15), 2400; https://doi.org/10.3390/math13152400 - 25 Jul 2025
Abstract
Deep learning-based super-resolution (SR) frameworks are widely used in remote sensing applications. However, existing SR models still face limitations, particularly in recovering contours, fine features, and textures, as well as in effectively integrating channel information. To address these challenges, this study introduces a novel residual model named OARN (Optimized Attention Residual Network) specifically designed to enhance the visual quality of low-resolution images. The network operates on the Y channel of the YCbCr color space and integrates LKA (Large Kernel Attention) and OCM (Optimized Convolutional Module) blocks. These components can restore large-scale spatial relationships and refine textures and contours, improving feature reconstruction without significantly increasing computational complexity. The performance of OARN was evaluated using satellite images from WorldView-2, GaoFen-2, and Microsoft Virtual Earth. Evaluation was conducted using objective quality metrics, such as Peak Signal-to-Noise Ratio (PSNR), Structural Similarity Index Measure (SSIM), Edge Preservation Index (EPI), and Learned Perceptual Image Patch Similarity (LPIPS), demonstrating superior results compared to state-of-the-art methods in both objective measurements and subjective visual perception. Moreover, OARN achieves this performance while maintaining computational efficiency, offering a balanced trade-off between processing time and reconstruction quality. Full article
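Since OARN operates on the Y channel of YCbCr and reports PSNR, that part of the evaluation can be sketched generically (BT.601 luma weights; the images below are synthetic stand-ins, not WorldView-2 or GaoFen-2 data):

```python
import numpy as np

def rgb_to_y(rgb):
    """BT.601 luma from an RGB image with values in [0, 1]."""
    return 0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]

def psnr(ref, test, peak=1.0):
    """Peak Signal-to-Noise Ratio in dB; infinite for identical images."""
    mse = np.mean((ref - test) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

rng = np.random.default_rng(1)
clean = rng.random((16, 16, 3))
noisy = np.clip(clean + rng.normal(0, 0.01, clean.shape), 0, 1)
score = psnr(rgb_to_y(clean), rgb_to_y(noisy))
print(round(score, 2))
```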

21 pages, 4388 KiB  
Article
An Omni-Dimensional Dynamic Convolutional Network for Single-Image Super-Resolution Tasks
by Xi Chen, Ziang Wu, Weiping Zhang, Tingting Bi and Chunwei Tian
Mathematics 2025, 13(15), 2388; https://doi.org/10.3390/math13152388 - 25 Jul 2025
Abstract
The goal of single-image super-resolution (SISR) tasks is to generate high-definition images from low-quality inputs, with practical uses spanning healthcare diagnostics, aerial imaging, and surveillance systems. Although CNNs have considerably improved image reconstruction quality, existing methods still face limitations, including inadequate restoration of high-frequency details, high computational complexity, and insufficient adaptability to complex scenes. To address these challenges, we propose an Omni-dimensional Dynamic Convolutional Network (ODConvNet) tailored for SISR tasks. Specifically, ODConvNet comprises four key components: a Feature Extraction Block (FEB) that captures low-level spatial features; an Omni-dimensional Dynamic Convolution Block (DCB), which utilizes a multidimensional attention mechanism to dynamically reweight convolution kernels across spatial, channel, and kernel dimensions, thereby enhancing feature expressiveness and context modeling; a Deep Feature Extraction Block (DFEB) that stacks multiple convolutional layers with residual connections to progressively extract and fuse high-level features; and a Reconstruction Block (RB) that employs subpixel convolution to upscale features and refine the final HR output. This mechanism significantly enhances feature extraction and effectively captures rich contextual information. Additionally, we employ an improved residual network structure combined with a refined Charbonnier loss function to alleviate gradient vanishing and exploding and to enhance the robustness of model training. Extensive experiments conducted on widely used benchmark datasets, including DIV2K, Set5, Set14, B100, and Urban100, demonstrate that, compared with existing deep learning-based SR methods, our ODConvNet method improves Peak Signal-to-Noise Ratio (PSNR) and Structural Similarity Index (SSIM), and the visual quality of SR images is also improved. Ablation studies further validate the effectiveness and contribution of each component in our network. The proposed ODConvNet offers an effective, flexible, and efficient solution for the SISR task and provides promising directions for future research. Full article
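The abstract does not specify how the Charbonnier loss is refined; the standard Charbonnier loss it builds on is a smooth L1 surrogate that avoids the non-differentiable kink at zero, sketched here:

```python
import numpy as np

def charbonnier(pred, target, eps=1e-3):
    """Charbonnier loss: sqrt((pred - target)^2 + eps^2), averaged over
    all pixels. Behaves like L1 for large errors, like L2 near zero."""
    return np.mean(np.sqrt((pred - target) ** 2 + eps ** 2))

pred = np.zeros((4, 4))
target = np.full((4, 4), 3.0)
loss = charbonnier(pred, target)
print(loss)  # ~3.0 for a uniform error of 3 (eps is negligible here)
```

Because the gradient stays bounded and well-defined everywhere, this loss is a common choice for stabilizing SR training against outlier pixels.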

25 pages, 6911 KiB  
Article
Image Inpainting Algorithm Based on Structure-Guided Generative Adversarial Network
by Li Zhao, Tongyang Zhu, Chuang Wang, Feng Tian and Hongge Yao
Mathematics 2025, 13(15), 2370; https://doi.org/10.3390/math13152370 - 24 Jul 2025
Abstract
To address the challenges of image inpainting in scenarios with extensive or irregular missing regions—particularly detail oversmoothing, structural ambiguity, and textural incoherence—this paper proposes an Image Structure-Guided (ISG) framework that hierarchically integrates structural priors with semantic-aware texture synthesis. The proposed methodology advances a two-stage restoration paradigm: (1) Structural Prior Extraction, where adaptive edge detection algorithms identify residual contours in corrupted regions, and a transformer-enhanced network reconstructs globally consistent structural maps through contextual feature propagation; (2) Structure-Constrained Texture Synthesis, wherein a multi-scale generator with hybrid dilated convolutions and channel attention mechanisms iteratively refines high-fidelity textures under explicit structural guidance. The framework introduces three innovations: (1) a hierarchical feature fusion architecture that synergizes multi-scale receptive fields with spatial-channel attention to preserve long-range dependencies and local details simultaneously; (2) spectral-normalized Markovian discriminator with gradient-penalty regularization, enabling adversarial training stability while enforcing patch-level structural consistency; and (3) dual-branch loss formulation combining perceptual similarity metrics with edge-aware constraints to align synthesized content with both semantic coherence and geometric fidelity. Our experiments on the two benchmark datasets (Places2 and CelebA) have demonstrated that our framework achieves more unified textures and structures, bringing the restored images closer to their original semantic content. Full article

27 pages, 8957 KiB  
Article
DFAN: Single Image Super-Resolution Using Stationary Wavelet-Based Dual Frequency Adaptation Network
by Gyu-Il Kim and Jaesung Lee
Symmetry 2025, 17(8), 1175; https://doi.org/10.3390/sym17081175 - 23 Jul 2025
Abstract
Single image super-resolution is the inverse problem of reconstructing a high-resolution image from its low-resolution counterpart. Although recent Transformer-based architectures leverage global context integration to improve reconstruction quality, they often overlook frequency-specific characteristics, resulting in the loss of high-frequency information. To address this limitation, we propose the Dual Frequency Adaptive Network (DFAN). DFAN first decomposes the input into low- and high-frequency components via the Stationary Wavelet Transform. In the low-frequency branch, Swin Transformer layers restore global structures and color consistency. In contrast, the high-frequency branch features a dedicated module that combines Directional Convolution with Residual Dense Blocks, precisely reinforcing edges and textures. A frequency fusion module then adaptively merges these complementary features using depthwise and pointwise convolutions, achieving a balanced reconstruction. During training, we introduce a frequency-aware multi-term loss alongside the standard pixel-wise loss to explicitly encourage high-frequency preservation. Extensive experiments on the Set5, Set14, BSD100, Urban100, and Manga109 benchmarks show that DFAN achieves gains of up to +0.64 dB peak signal-to-noise ratio, +0.01 structural similarity index measure, and −0.01 learned perceptual image patch similarity over the strongest frequency-domain baselines, while also delivering visibly sharper textures and cleaner edges. By unifying spatial and frequency-domain advantages, DFAN effectively mitigates high-frequency degradation and enhances SISR performance. Full article
(This article belongs to the Section Computer)
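The Stationary Wavelet Transform keeps every subband at the input size (no downsampling), which is what lets DFAN process low- and high-frequency branches on aligned grids. A one-level undecimated Haar sketch — a simplification standing in for whatever wavelet the paper actually uses — with its trivial reconstruction:

```python
import numpy as np

def swt_haar_level1(x):
    """One level of an undecimated (stationary) 2-D Haar transform.
    Returns (LL, LH, HL, HH), each the same size as x (circular boundary)."""
    lo = lambda a, ax: (a + np.roll(a, -1, axis=ax)) / 2.0  # averaging filter
    hi = lambda a, ax: (a - np.roll(a, -1, axis=ax)) / 2.0  # differencing filter
    L, H = lo(x, 0), hi(x, 0)                      # filter rows
    return lo(L, 1), hi(L, 1), lo(H, 1), hi(H, 1)  # then columns

rng = np.random.default_rng(2)
img = rng.random((8, 8))
LL, LH, HL, HH = swt_haar_level1(img)
recon = LL + LH + HL + HH  # with this normalization, summing reconstructs x
print(np.allclose(recon, img))  # True
```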

24 pages, 5200 KiB  
Article
DRFAN: A Lightweight Hybrid Attention Network for High-Fidelity Image Super-Resolution in Visual Inspection Applications
by Ze-Long Li, Bai Jiang, Liang Xu, Zhe Lu, Zi-Teng Wang, Bin Liu, Si-Ye Jia, Hong-Dan Liu and Bing Li
Algorithms 2025, 18(8), 454; https://doi.org/10.3390/a18080454 - 22 Jul 2025
Abstract
Single-image super-resolution (SISR) plays a critical role in enhancing visual quality for real-world applications, including industrial inspection and embedded vision systems. While deep learning-based approaches have made significant progress in SR, existing lightweight SR models often fail to accurately reconstruct high-frequency textures, especially under complex degradation scenarios, resulting in blurry edges and structural artifacts. To address this challenge, we propose a Dense Residual Fused Attention Network (DRFAN), a novel lightweight hybrid architecture designed to enhance high-frequency texture recovery in challenging degradation conditions. Moreover, by coupling convolutional layers and attention mechanisms through gated interaction modules, the DRFAN enhances local details and global dependencies with linear computational complexity, enabling the efficient utilization of multi-level spatial information while effectively alleviating the loss of high-frequency texture details. To evaluate its effectiveness, we conducted ×4 super-resolution experiments on five public benchmarks. The DRFAN achieves the best performance among all compared lightweight models. Visual comparisons show that the DRFAN restores more accurate geometric structures, with up to +1.2 dB/+0.0281 SSIM gain over SwinIR-S on Urban100 samples. Additionally, on a domain-specific rice grain dataset, the DRFAN outperforms SwinIR-S by +0.19 dB in PSNR and +0.0015 in SSIM, restoring clearer textures and grain boundaries essential for industrial quality inspection. The proposed method provides a compelling balance between model complexity and image reconstruction fidelity, making it well-suited for deployment in resource-constrained visual systems and industrial applications. Full article

13 pages, 2438 KiB  
Article
The Integration of Micro-CT Imaging and Finite Element Simulations for Modelling Tooth-Inlay Systems for Mechanical Stress Analysis: A Preliminary Study
by Nikoleta Nikolova, Miryana Raykovska, Nikolay Petkov, Martin Tsvetkov, Ivan Georgiev, Eugeni Koytchev, Roumen Iankov, Mariana Dimova-Gabrovska and Angela Gusiyska
J. Funct. Biomater. 2025, 16(7), 267; https://doi.org/10.3390/jfb16070267 - 21 Jul 2025
Abstract
This study presents a methodology for developing and validating digital models of tooth-inlay systems, aiming to trace the complete workflow from clinical procedures to simulation by involving dental professionals—dentists for manual cavity preparation and dental technicians for restoration modelling—while integrating micro-computed tomography (micro-CT) imaging with finite element analysis (FEA). The proposed workflow includes (1) the acquisition of high-resolution 3D micro-CT scans of a non-restored tooth, (2) image segmentation and reconstruction to create anatomically accurate digital twins and mesh generation, (3) the selection of proper resin and the 3D printing of four typodonts, (4) the manual preparation of cavities on the typodonts, (5) the acquisition of high-resolution 3D micro-CT scans of the typodonts, (6) mesh generation, digital inlay and onlay modelling and material property assignment, and (7) nonlinear FEA simulations under representative masticatory loading. The approach enables the visualisation of stress and deformation patterns, with preliminary results indicating stress concentrations at the tooth-restoration interface integrating different cavity alternatives and restorations on the same tooth. Quantitative outputs include von Mises stress, strain energy density, and displacement distribution. This study demonstrates the feasibility of using image-based, tooth-specific digital twins for biomechanical modelling in dentistry. The developed framework lays the groundwork for future investigations into the optimisation of restoration design and material selection in clinical applications. Full article
(This article belongs to the Section Dental Biomaterials)

22 pages, 5937 KiB  
Article
CSAN: A Channel–Spatial Attention-Based Network for Meteorological Satellite Image Super-Resolution
by Weiliang Liang and Yuan Liu
Remote Sens. 2025, 17(14), 2513; https://doi.org/10.3390/rs17142513 - 19 Jul 2025
Abstract
Meteorological satellites play a critical role in weather forecasting, climate monitoring, water resource management, and more. These satellites feature an array of radiative imaging bands, capturing dozens of spectral images that span from visible to infrared. However, the spatial resolution of these bands varies, with images at longer wavelengths typically exhibiting lower spatial resolutions, which limits the accuracy and reliability of subsequent applications. To alleviate this issue, we propose a channel–spatial attention-based network, named CSAN, designed to super-resolve all low-resolution (LR) bands to the available maximal high-resolution (HR) scale. The CSAN consists of an information fusion unit, a feature extraction module, and an image restoration unit. The information fusion unit adaptively fuses LR and HR images, effectively capturing inter-band spectral relationships and spatial details to enhance the input representation. The feature extraction module integrates channel and spatial attention into the residual network, enabling the extraction of informative spectral and spatial features from the fused inputs. Using these deep features, the image restoration unit reconstructs the missing spatial details in LR images. Extensive experiments demonstrate that the proposed network outperforms other state-of-the-art approaches quantitatively and visually. Full article
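The information fusion unit's input preparation can be illustrated generically: upsample a low-resolution band to the high-resolution grid and stack the two as channels. Nearest-neighbour repetition is used here purely for illustration (the band names, scales, and fusion details are hypothetical, not CSAN's actual adaptive fusion):

```python
import numpy as np

def fuse_bands(lr_band, hr_band):
    """Upsample an LR band to the HR grid (nearest neighbour via repeat)
    and stack it with the HR band as a 2-channel input."""
    sy = hr_band.shape[0] // lr_band.shape[0]
    sx = hr_band.shape[1] // lr_band.shape[1]
    up = np.repeat(np.repeat(lr_band, sy, axis=0), sx, axis=1)
    return np.stack([up, hr_band], axis=0)

lr = np.arange(4.0).reshape(2, 2)  # e.g. a coarse infrared band (hypothetical)
hr = np.zeros((4, 4))              # e.g. a fine visible band (hypothetical)
fused = fuse_bands(lr, hr)
print(fused.shape)  # (2, 4, 4)
```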

14 pages, 16969 KiB  
Article
FTT: A Frequency-Aware Texture Matching Transformer for Digital Bathymetry Model Super-Resolution
by Peikun Xiao, Jianping Wu and Yingjie Wang
J. Mar. Sci. Eng. 2025, 13(7), 1365; https://doi.org/10.3390/jmse13071365 - 17 Jul 2025
Abstract
Deep learning has shown significant advantages over traditional spatial interpolation methods in single image super-resolution (SISR). Recently, many studies have applied super-resolution (SR) methods to generate high-resolution (HR) digital bathymetry models (DBMs), but substantial differences between DBM and natural images have been ignored, which leads to serious distortions and inaccuracies. Given the critical role of HR DBM in marine resource exploitation, economic development, and scientific innovation, we propose a frequency-aware texture matching transformer (FTT) for DBM SR, incorporating global terrain feature extraction (GTFE), high-frequency feature extraction (HFFE), and a terrain matching block (TMB). GTFE has the capability to perceive spatial heterogeneity and spatial locations, allowing it to accurately capture large-scale terrain features. HFFE can explicitly extract high-frequency priors beneficial for DBM SR and implicitly refine the representation of high-frequency information in the global terrain feature. TMB improves fidelity of generated HR DBM by generating position offsets to restore warped textures in deep features. Experimental results have demonstrated that the proposed FTT has superior performance in terms of elevation, slope, aspect, and fidelity of generated HR DBM. Notably, the root mean square error (RMSE) of elevation in steep terrain has been reduced by 4.89 m, which is a significant improvement in the accuracy and precision of the reconstruction. This research holds significant implications for improving the accuracy of DBM SR methods and the usefulness of HR bathymetry products for future marine research. Full article
(This article belongs to the Section Ocean Engineering)
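Evaluating a reconstructed bathymetry model in terms of slope and aspect uses standard derivatives of the elevation grid. A minimal version with central differences (the aspect sign convention varies between tools; this is one common choice, not necessarily the paper's):

```python
import numpy as np

def slope_aspect(dem, cell=1.0):
    """Slope (degrees) and aspect (radians) of an elevation/bathymetry grid.
    `cell` is the grid spacing in metres."""
    dzdy, dzdx = np.gradient(dem, cell)                  # central differences
    slope = np.degrees(np.arctan(np.hypot(dzdx, dzdy)))  # steepest descent angle
    aspect = np.arctan2(-dzdy, dzdx)                     # one common convention
    return slope, aspect

# An inclined plane rising 1 m per cell in x has a 45-degree slope everywhere.
dem = np.tile(np.arange(8.0), (8, 1))
slope, aspect = slope_aspect(dem)
print(round(float(slope[4, 4]), 1))  # 45.0
```

An RMSE over such slope grids (predicted vs. reference) is then an ordinary root mean square of their difference.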

26 pages, 7178 KiB  
Article
Super-Resolution Reconstruction of Formation MicroScanner Images Based on the SRGAN Algorithm
by Changqiang Ma, Xinghua Qi, Liangyu Chen, Yonggui Li, Jianwei Fu and Zejun Liu
Processes 2025, 13(7), 2284; https://doi.org/10.3390/pr13072284 - 17 Jul 2025
Abstract
Formation MicroScanner Image (FMI) technology is a key method for identifying fractured reservoirs and optimizing oil and gas exploration, but its inherently insufficient resolution severely constrains the fine characterization of geological features. This study innovatively applies a Super-Resolution Generative Adversarial Network (SRGAN) to the super-resolution reconstruction of FMI logging images to address this bottleneck problem. By collecting FMI logging images of glutenite from a well in Xinjiang, a training set containing 24,275 images was constructed, and preprocessing strategies such as grayscale conversion and binarization were employed to optimize input features. Leveraging SRGAN’s generator–discriminator adversarial mechanism and perceptual loss function, high-quality mapping from low-resolution FMI logging images to high-resolution images was achieved. This study yields significant results: in RGB image reconstruction, SRGAN achieved a Peak Signal-to-Noise Ratio (PSNR) of 41.39 dB, surpassing the optimal traditional method (bicubic interpolation) by 61.6%; its Structural Similarity Index (SSIM) reached 0.992, representing a 34.1% improvement; in grayscale image processing, SRGAN effectively eliminated edge blurring, with the PSNR (40.15 dB) and SSIM (0.990) exceeding the suboptimal method (bilinear interpolation) by 36.6% and 9.9%, respectively. These results fully confirm that SRGAN can significantly restore edge contours and structural details in FMI logging images, with performance far exceeding traditional interpolation methods. This study not only systematically verifies, for the first time, SRGAN’s exceptional capability in enhancing FMI resolution, but also provides a high-precision data foundation for reservoir parameter inversion and geological modeling, holding significant application value for advancing the intelligent exploration of complex hydrocarbon reservoirs. Full article
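The preprocessing the authors mention — grayscale conversion and binarization — can be sketched generically. A plain channel mean and a mean threshold stand in here for whatever conversion and threshold the paper actually uses:

```python
import numpy as np

def preprocess(rgb, threshold=None):
    """Grayscale conversion followed by binarization. With no threshold
    given, the image mean is used (a simple stand-in for e.g. Otsu)."""
    gray = rgb.mean(axis=-1)                 # simple per-pixel channel average
    t = gray.mean() if threshold is None else threshold
    return gray, (gray >= t).astype(np.uint8)

rng = np.random.default_rng(3)
img = rng.random((8, 8, 3))  # synthetic stand-in for an FMI logging image
gray, binary = preprocess(img)
print(gray.shape, binary.dtype)  # (8, 8) uint8
```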

17 pages, 7786 KiB  
Article
Video Coding Based on Ladder Subband Recovery and ResGroup Module
by Libo Wei, Aolin Zhang, Lei Liu, Jun Wang and Shuai Wang
Entropy 2025, 27(7), 734; https://doi.org/10.3390/e27070734 - 8 Jul 2025
Abstract
With the rapid development of video encoding technology in the field of computer vision, the demand for tasks such as video frame reconstruction, denoising, and super-resolution has been continuously increasing. However, traditional video encoding methods typically focus on extracting spatial or temporal domain information, often facing challenges of insufficient accuracy and information loss when reconstructing high-frequency details, edges, and textures of images. To address this issue, this paper proposes an innovative LadderConv framework, which combines discrete wavelet transform (DWT) with spatial and channel attention mechanisms. By progressively recovering wavelet subbands, it effectively enhances the video frame encoding quality. Specifically, the LadderConv framework adopts a stepwise recovery approach for wavelet subbands, first processing high-frequency detail subbands with relatively less information, then enhancing the interaction between these subbands, and ultimately synthesizing a high-quality reconstructed image through inverse wavelet transform. Moreover, the framework introduces spatial and channel attention mechanisms, which further strengthen the focus on key regions and channel features, leading to notable improvements in detail restoration and image reconstruction accuracy. To optimize the performance of the LadderConv framework, particularly in detail recovery and high-frequency information extraction tasks, this paper designs an innovative ResGroup module. By using multi-layer convolution operations along with feature map compression and recovery, the ResGroup module enhances the network’s expressive capability and effectively reduces computational complexity. The ResGroup module captures multi-level features from low level to high level and retains rich feature information through residual connections, thus improving the overall reconstruction performance of the model. 
In experiments, the combination of the LadderConv framework and the ResGroup module demonstrates superior performance in video frame reconstruction tasks, particularly in recovering high-frequency information, image clarity, and detail representation. Full article
(This article belongs to the Special Issue Rethinking Representation Learning in the Age of Large Models)
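The subband split and the final inverse wavelet transform in the LadderConv pipeline can be illustrated with a one-level decimated Haar transform (orthonormal; a stand-in for the paper's DWT, not its exact filters), including the perfect-reconstruction check:

```python
import numpy as np

def haar_dwt2(x):
    """One level of a decimated 2-D Haar transform (orthonormal filters)."""
    a, b = x[0::2, :], x[1::2, :]
    L, H = (a + b) / np.sqrt(2), (a - b) / np.sqrt(2)  # filter + decimate rows
    def cols(m):
        c, d = m[:, 0::2], m[:, 1::2]
        return (c + d) / np.sqrt(2), (c - d) / np.sqrt(2)
    (LL, LH), (HL, HH) = cols(L), cols(H)              # then columns
    return LL, LH, HL, HH

def haar_idwt2(LL, LH, HL, HH):
    """Inverse of haar_dwt2: interleave columns, then rows."""
    def icols(lo, hi):
        m = np.empty((lo.shape[0], lo.shape[1] * 2))
        m[:, 0::2] = (lo + hi) / np.sqrt(2)
        m[:, 1::2] = (lo - hi) / np.sqrt(2)
        return m
    L, H = icols(LL, LH), icols(HL, HH)
    x = np.empty((L.shape[0] * 2, L.shape[1]))
    x[0::2, :] = (L + H) / np.sqrt(2)
    x[1::2, :] = (L - H) / np.sqrt(2)
    return x

rng = np.random.default_rng(4)
frame = rng.random((8, 8))
recon = haar_idwt2(*haar_dwt2(frame))
print(np.allclose(recon, frame))  # True
```

A subband-recovery network would refine the four subbands between these two calls before the inverse transform synthesizes the frame.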

16 pages, 2134 KiB  
Article
Research on Field-of-View Reconstruction Technology of Specific Bands for Spatial Integral Field Spectrographs
by Jie Song, Yuyu Tang, Jun Wei and Xiaoxian Huang
Photonics 2025, 12(7), 682; https://doi.org/10.3390/photonics12070682 - 7 Jul 2025
Abstract
Integral field technology, as an advanced spectroscopic imaging technique, can be used to acquire the spatial and spectral information of the target area simultaneously. In this paper, we propose a method for the field reconstruction of characteristic wavelength bands of a space integral field spectrograph. The precise positioning of the image slicer is crucial to ensure that the spectrograph can accurately capture the position of each slicer in space. Firstly, the line spread function information and the characteristic location coordinates are obtained. Next, the positioning points of each group of image slicers under a specific spectral band are determined by quintic spline interpolation and a double-closed-loop optimization framework, thus establishing connection points for the responses of different image slicers. Then, the accuracy and reliability of the data are further improved by fitting the signal intensity of pixel points. Finally, the data of all image slicers are aligned to complete the field reconstruction of the characteristic wavelength bands of the space integral field spectrograph. This provides new ideas for the two-dimensional spatial reconstruction of spectrographs using image slicers as integral field units in specific spectral bands and accurately restores the two-dimensional spatial field observations of spatial integral field spectrographs. Full article
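The quintic spline interpolation step can be sketched with SciPy, assuming `scipy.interpolate.make_interp_spline` with `k=5`; the sampled response below is synthetic, not actual slicer data, and the double-closed-loop optimization is not modelled here:

```python
import numpy as np
from scipy.interpolate import make_interp_spline

# Hypothetical response samples along a slit position axis (invented values).
x = np.linspace(0.0, 1.0, 9)
y = np.sin(2 * np.pi * x)

# A quintic (k=5) spline interpolant passes through every sample exactly
# and can be evaluated on a dense grid to locate positioning points.
spline = make_interp_spline(x, y, k=5)
dense = np.linspace(0.0, 1.0, 101)
approx = spline(dense)
print(float(np.max(np.abs(spline(x) - y))))  # ~0: interpolates the knots
```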

20 pages, 20508 KiB  
Article
MSRGAN: A Multi-Scale Residual GAN for High-Resolution Precipitation Downscaling
by Yida Liu, Zhuang Li, Guangzhen Cao, Qiong Wang, Yizhe Li and Zhenyu Lu
Remote Sens. 2025, 17(13), 2281; https://doi.org/10.3390/rs17132281 - 3 Jul 2025
Abstract
To address the challenge of insufficient spatial resolution in remote sensing precipitation data, this paper proposes a novel Multi-Scale Residual Generative Adversarial Network (MSRGAN) for reconstructing high-resolution precipitation images. The model integrates multi-source meteorological information and topographic priors, and it employs a Deep Multi-Scale Perception Module (DeepInception), a Multi-Scale Feature Modulation Module (MSFM), and a Spatial-Channel Attention Network (SCAN) to achieve high-fidelity restoration of complex precipitation structures. Experiments conducted using Weather Research and Forecasting (WRF) simulation data over the continental United States demonstrate that MSRGAN outperforms traditional interpolation methods and state-of-the-art deep learning models across various metrics, including Critical Success Index (CSI), Heidke Skill Score (HSS), False Alarm Rate (FAR), and Jensen–Shannon divergence. Notably, it exhibits significant advantages in detecting heavy precipitation events. Ablation studies further validate the effectiveness of each module. The results indicate that MSRGAN not only improves the accuracy of precipitation downscaling but also preserves spatial structural consistency and physical plausibility, offering a novel technological approach for urban flood warning, weather forecasting, and regional hydrological modeling. Full article
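The categorical skill scores used above (CSI, FAR, HSS) come from a standard 2×2 contingency table over binary precipitation masks. A minimal sketch with invented masks (not the paper's WRF data):

```python
import numpy as np

def forecast_scores(pred, obs):
    """CSI, FAR and HSS from binary event masks (1 = precipitation above
    threshold), using standard contingency-table definitions."""
    hits = np.sum((pred == 1) & (obs == 1))
    misses = np.sum((pred == 0) & (obs == 1))
    false_alarms = np.sum((pred == 1) & (obs == 0))
    correct_neg = np.sum((pred == 0) & (obs == 0))
    csi = hits / (hits + misses + false_alarms)
    far = false_alarms / (hits + false_alarms)
    n = hits + misses + false_alarms + correct_neg
    expected = ((hits + misses) * (hits + false_alarms)
                + (correct_neg + misses) * (correct_neg + false_alarms)) / n
    hss = (hits + correct_neg - expected) / (n - expected)
    return float(csi), float(far), float(hss)

obs = np.array([1, 1, 1, 0, 0, 0, 0, 0])
pred = np.array([1, 1, 0, 1, 0, 0, 0, 0])  # 2 hits, 1 miss, 1 false alarm
csi, far, hss = forecast_scores(pred, obs)
print(csi, far)  # 0.5 0.333...
```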

16 pages, 2376 KiB  
Article
Nested U-Net-Based GAN Model for Super-Resolution of Stained Light Microscopy Images
by Seong-Hyeon Kang and Ji-Youn Kim
Photonics 2025, 12(7), 665; https://doi.org/10.3390/photonics12070665 - 1 Jul 2025
Abstract
The purpose of this study was to propose a deep learning-based model for the super-resolution reconstruction of stained light microscopy images. To achieve this, perceptual loss was applied to the generator to reflect multichannel signal intensity, distribution, and structural similarity. A nested U-Net architecture was employed to address the representational limitations of the conventional U-Net. For quantitative evaluation, the peak signal-to-noise ratio (PSNR), structural similarity index (SSIM), and correlation coefficient (CC) were calculated. In addition, intensity profile analysis was performed to assess the model’s ability to restore boundary signals more precisely. The experimental results demonstrated that the proposed model outperformed both single U-Net and U-Net-based generative adversarial network (GAN) models in signal and structural restoration. Consequently, the PSNR, SSIM, and CC values showed relative improvements of approximately 1.017, 1.023, and 1.010 times, respectively, compared to the input images. In particular, the intensity profile analysis confirmed the effectiveness of the nested U-Net-based generator in restoring cellular boundaries and structures in the stained microscopy images. In conclusion, the proposed model effectively enhanced the resolution of stained light microscopy images acquired in a multichannel format. Full article
(This article belongs to the Special Issue Recent Advances in Biomedical Optics and Biophotonics)
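Of the three reported metrics, the correlation coefficient (CC) is the one least often spelled out; the usual Pearson definition over flattened images (synthetic data below, not microscopy images) is:

```python
import numpy as np

def correlation_coefficient(a, b):
    """Pearson correlation between two images, flattened to vectors."""
    return float(np.corrcoef(a.ravel(), b.ravel())[0, 1])

rng = np.random.default_rng(5)
ref = rng.random((16, 16))
# A degraded stand-in: attenuated signal plus noise.
degraded = 0.5 * ref + rng.normal(0, 0.05, ref.shape)
cc = correlation_coefficient(ref, degraded)
print(cc > 0.9)  # strong but imperfect correlation
```

A CC improvement factor such as the reported 1.010× is then the ratio of this value for the reconstructed image to that of the input.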
