Search Results (6)

Search Parameters:
Keywords = very-deep super-resolution (VDSR)

24 pages, 23269 KiB  
Article
Improving Medical Image Quality Using a Super-Resolution Technique with Attention Mechanism
by Dong Yun Lee, Jang Yeop Kim and Soo Young Cho
Appl. Sci. 2025, 15(2), 867; https://doi.org/10.3390/app15020867 - 17 Jan 2025
Cited by 2 | Viewed by 2688
Abstract
Image quality plays a critical role in medical image analysis, significantly impacting diagnostic outcomes. Sharp and detailed images are essential for accurate diagnoses, but acquiring high-resolution medical images often demands sophisticated and costly equipment. To address this challenge, this study proposes a convolutional neural network (CNN)-based super-resolution architecture, utilizing a melanoma dataset to enhance image resolution through deep learning techniques. The proposed model incorporates a convolutional self-attention block that combines channel and spatial attention to emphasize important image features. Channel attention uses global average pooling and fully connected layers to enhance high-frequency features within channels. Meanwhile, spatial attention applies a single-channel convolution to emphasize high-frequency features in the spatial domain. By integrating various attention blocks, feature extraction is optimized and further expanded through subpixel convolution to produce high-quality super-resolution images. The model uses L1 loss to generate realistic and smooth outputs, outperforming existing deep learning methods in capturing contours and textures. Evaluations with the ISIC 2020 dataset—containing 33126 training and 10982 test images for skin lesion analysis—showed a 1–2% improvement in peak signal-to-noise ratio (PSNR) compared to very deep super-resolution (VDSR) and enhanced deep super-resolution (EDSR) architectures. Full article
(This article belongs to the Special Issue Exploring AI: Methods and Applications for Data Mining)
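
The attention design summarized in the abstract (channel attention built from global average pooling and fully connected layers, spatial attention from a single-channel convolution, and subpixel convolution for upscaling) can be sketched roughly as follows. This is an illustrative PyTorch reconstruction, not the authors' published code; the reduction ratio, kernel sizes, and channel counts are assumptions.

```python
import torch
import torch.nn as nn

class ConvSelfAttentionBlock(nn.Module):
    """Illustrative channel + spatial attention block (layer sizes assumed)."""
    def __init__(self, channels=64, reduction=16):
        super().__init__()
        # Channel attention: global average pooling followed by fully connected layers
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )
        # Spatial attention: a single-channel convolution over the feature map
        self.spatial = nn.Sequential(
            nn.Conv2d(channels, 1, kernel_size=7, padding=3),
            nn.Sigmoid(),
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        ca = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)  # per-channel weights
        x = x * ca
        return x * self.spatial(x)                              # per-pixel weights

# Upscaling by subpixel convolution (2x), as mentioned in the abstract
upscale = nn.Sequential(nn.Conv2d(64, 64 * 4, 3, padding=1), nn.PixelShuffle(2))

x = torch.rand(1, 64, 32, 32)
print(upscale(ConvSelfAttentionBlock()(x)).shape)  # torch.Size([1, 64, 64, 64])
```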

20 pages, 11204 KiB  
Article
Estimating the Spectral Response of Eight-Band MSFA One-Shot Cameras Using Deep Learning
by Pierre Gouton, Kacoutchy Jean Ayikpa and Diarra Mamadou
Algorithms 2024, 17(11), 473; https://doi.org/10.3390/a17110473 - 22 Oct 2024
Viewed by 1268
Abstract
Eight-band one-shot MSFA (multispectral filter array) cameras are innovative technologies used to capture multispectral images by recording multiple spectral bands simultaneously. They thus make it possible to collect detailed information on the spectral properties of the observed scenes economically. These cameras are widely used for object detection, material analysis, and agronomy. The evolution of one-shot MSFA cameras from 8 to 32 bands makes obtaining much more detailed spectral data possible, which is crucial for applications requiring delicate and precise analysis of the spectral properties of the observed scenes. Our study aims to develop models based on deep learning to estimate the spectral response of this type of camera and provide images close to the spectral properties of objects. First, we prepare our experimental data by projecting them to reflect the characteristics of our camera. Next, we harness the power of deep super-resolution neural networks, such as very deep super-resolution (VDSR), Laplacian pyramid super-resolution networks (LapSRN), and deeply recursive convolutional networks (DRCN), which we adapt to approximate the spectral response. These models learn the complex relationship between 8-band multispectral data from the camera and 31-band multispectral data from the multi-object database, enabling accurate and efficient conversion. Finally, we evaluate image quality using metrics such as the loss function, PSNR, and SSIM. The evaluation revealed that DRCN outperforms the others on the key performance metrics. DRCN achieved the lowest loss (0.0047) and stood out on the image-quality metrics, with a PSNR of 25.5059, an SSIM of 0.8355, and a SAM of 0.13215, indicating better preservation of details and textures. Additionally, DRCN showed the lowest RMSE (0.05849) and MAE (0.0415) values, confirming its ability to minimize reconstruction errors more effectively than VDSR and LapSRN. Full article
(This article belongs to the Special Issue Machine Learning for Pattern Recognition (2nd Edition))
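
A VDSR-style stack adapted to map the camera's 8 bands to the 31 reference bands, as described above, might look like the sketch below; the depth, width, and omission of a global residual skip (the input and output band counts differ) are assumptions rather than the authors' configuration.

```python
import torch
import torch.nn as nn

class SpectralVDSR(nn.Module):
    """Rough VDSR-like conv stack mapping 8 camera bands to 31 reference bands."""
    def __init__(self, in_bands=8, out_bands=31, features=64, depth=10):
        super().__init__()
        layers = [nn.Conv2d(in_bands, features, 3, padding=1), nn.ReLU(inplace=True)]
        for _ in range(depth - 2):
            layers += [nn.Conv2d(features, features, 3, padding=1), nn.ReLU(inplace=True)]
        layers += [nn.Conv2d(features, out_bands, 3, padding=1)]
        self.body = nn.Sequential(*layers)

    def forward(self, x):       # x: (N, 8, H, W) patch from the MSFA camera
        return self.body(x)     # (N, 31, H, W) estimated spectral bands

model = SpectralVDSR()
print(model(torch.rand(1, 8, 64, 64)).shape)  # torch.Size([1, 31, 64, 64])
```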

26 pages, 8436 KiB  
Article
A DEM Super-Resolution Reconstruction Network Combining Internal and External Learning
by Xu Lin, Qingqing Zhang, Hongyue Wang, Chaolong Yao, Changxin Chen, Lin Cheng and Zhaoxiong Li
Remote Sens. 2022, 14(9), 2181; https://doi.org/10.3390/rs14092181 - 2 May 2022
Cited by 18 | Viewed by 3415
Abstract
The study of digital elevation model (DEM) super-resolution reconstruction algorithms addresses the need for high-resolution DEMs. However, DEM super-resolution reconstruction is itself an inverse problem, and making full use of the DEM's a priori information is an effective way to solve it. In our work, a new DEM super-resolution reconstruction method is proposed based on the complementary relationship between internally learned and externally learned super-resolution reconstruction methods. The method exploits the presence of a large amount of repetitive information within the DEM. An internal learning approach learns the internal prior of the DEM and generates a low-resolution DEM dataset rich in detailed features; these discrepancy data pairs are then used to train a constrained external learning network. Finally, residual learning is introduced into the network model to accelerate the network and to address the degradation problem brought about by deepening it. This enables the better transfer of learned detailed features in deeper network mappings, which in turn ensures accurate learning of the DEM prior information. The network utilizes both the internal prior of the specific DEM and the external prior of the DEM dataset, and it achieves better super-resolution reconstruction results in the experiments. The super-resolution reconstruction results of the Bicubic method, Super-Resolution Convolutional Neural Networks (SRCNN), very deep super-resolution networks (VDSR), “Zero-Shot” Super-Resolution networks (ZSSR), and the new method in this paper were compared; the average RMSEs of the five methods were 8.48 m, 8.30 m, 8.09 m, 7.02 m, and 6.65 m, respectively. At the same resolution, the mean elevation error of the proposed method is 21.6% better than that of the Bicubic method, 19.9% better than that of SRCNN, 17.8% better than that of VDSR, and 5.3% better than that of ZSSR. Full article
(This article belongs to the Special Issue Perspectives on Digital Elevation Model Applications)
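
The percentage gains quoted at the end of the abstract follow directly from the reported average RMSE values; a quick check in plain Python (numbers copied from the abstract):

```python
# Average RMSE (m) reported for Bicubic, SRCNN, VDSR, ZSSR and the proposed method
rmse = {"Bicubic": 8.48, "SRCNN": 8.30, "VDSR": 8.09, "ZSSR": 7.02, "Proposed": 6.65}

for name, value in rmse.items():
    if name != "Proposed":
        gain = 100 * (value - rmse["Proposed"]) / value
        print(f"{name}: {gain:.1f}% higher error than the proposed method")
# Bicubic: 21.6%, SRCNN: 19.9%, VDSR: 17.8%, ZSSR: 5.3%
```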

19 pages, 13577 KiB  
Article
Fast Target Localization Method for FMCW MIMO Radar via VDSR Neural Network
by Jingyu Cong, Xianpeng Wang, Xiang Lan, Mengxing Huang and Liangtian Wan
Remote Sens. 2021, 13(10), 1956; https://doi.org/10.3390/rs13101956 - 17 May 2021
Cited by 28 | Viewed by 4373
Abstract
The traditional frequency-modulated continuous wave (FMCW) multiple-input multiple-output (MIMO) radar two-dimensional (2D) super-resolution (SR) estimation algorithm for target localization has high computational complexity, which runs counter to the increasing demand for real-time radar imaging. In this paper, a fast joint direction-of-arrival (DOA) and range estimation framework for target localization is proposed; it utilizes a very deep super-resolution (VDSR) neural network (NN) framework to accelerate the imaging process while ensuring estimation accuracy. Firstly, we propose a fast low-resolution imaging algorithm based on the Nystrom method. The approximate signal subspace matrix is obtained from partial data, and low-resolution imaging is performed on a low-density grid. Then, the bicubic interpolation algorithm is used to expand the low-resolution image to the desired dimensions. Next, the deep SR network is used to obtain the high-resolution image, and the final joint DOA and range estimation is achieved based on the reconstructed image. Simulations and experiments were carried out to validate the computational efficiency and effectiveness of the proposed framework. Full article
(This article belongs to the Special Issue Radar Signal Processing for Target Tracking)
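
The final stages of the framework (expand the low-resolution DOA-range image with bicubic interpolation, refine it with the trained VDSR network, then read the joint estimate off the reconstructed image) can be outlined as below. The grid sizes are placeholders and `sr_net` stands in for the trained network; this is a sketch of the processing order, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def localize(low_res_map, sr_net, grid_hw=(256, 256)):
    """Bicubic expansion of a coarse DOA-range image, SR refinement, peak picking."""
    x = low_res_map.unsqueeze(0).unsqueeze(0)                       # (1, 1, h, w)
    x = F.interpolate(x, size=grid_hw, mode="bicubic", align_corners=False)
    with torch.no_grad():
        hi_res = sr_net(x).squeeze()                                # refined image
    peak = torch.argmax(hi_res).item()
    return divmod(peak, hi_res.shape[1])   # (row, col) of the peak, i.e. the DOA/range bins

# Smoke test with an identity module standing in for the trained VDSR network
print(localize(torch.rand(32, 32), torch.nn.Identity()))
```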

20 pages, 7068 KiB  
Article
Spatiotemporal Fusion of Formosat-2 and Landsat-8 Satellite Images: A Comparison of “Super Resolution-Then-Blend” and “Blend-Then-Super Resolution” Approaches
by Tee-Ann Teo and Yu-Ju Fu
Remote Sens. 2021, 13(4), 606; https://doi.org/10.3390/rs13040606 - 8 Feb 2021
Cited by 9 | Viewed by 3632
Abstract
The spatiotemporal fusion technique has the advantage of generating time-series images with high spatial and high temporal resolution from coarse- and fine-resolution images. A hybrid fusion method that integrates image blending (i.e., the spatial and temporal adaptive reflectance fusion model, STARFM) and super-resolution (i.e., very deep super-resolution, VDSR) techniques is proposed for the spatiotemporal fusion of 8 m Formosat-2 and 30 m Landsat-8 satellite images. Two different fusion approaches, namely Blend-then-Super-Resolution and Super-Resolution (SR)-then-Blend, were developed to improve the results of spatiotemporal fusion. The SR-then-Blend approach performs SR before image blending; the SR step refines the image-resampling stage that brings the coarse- and fine-resolution images to the same pixel size. The Blend-then-SR approach aims to refine the spatial details after image blending. Several quality indices were used to analyze the quality of the different fusion approaches. Experimental results showed that the performance of the hybrid method is slightly better than that of the traditional approach. Images obtained using SR-then-Blend are more similar to the real observed images than those acquired using Blend-then-SR. The overall mean bias of SR-then-Blend was 4% lower than that of Blend-then-SR, and its overall standard deviation improved by nearly 3%. The VDSR technique reduces the systematic deviation in the spectral bands between the Formosat-2 and Landsat-8 satellite images. The integration of STARFM and the VDSR model is useful for improving the quality of spatiotemporal fusion. Full article
(This article belongs to the Special Issue Fusion of High-Level Remote Sensing Products)
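
The difference between the two approaches is only where the super-resolution step sits relative to STARFM blending; the sketch below makes the ordering explicit with placeholder functions (neither the STARFM nor the VDSR implementation is given here, and resampling to a common grid is assumed to happen inside or before these calls).

```python
import numpy as np

def sr_then_blend(coarse_t1, fine_t1, coarse_t2, super_resolve, blend):
    # Super-resolve both coarse (Landsat-8-like) images first,
    # then blend to predict the fine image at time t2.
    return blend(super_resolve(coarse_t1), fine_t1, super_resolve(coarse_t2))

def blend_then_sr(coarse_t1, fine_t1, coarse_t2, super_resolve, blend):
    # Blend first, then restore spatial detail with super-resolution.
    return super_resolve(blend(coarse_t1, fine_t1, coarse_t2))

# Smoke test with trivial stand-ins: identity "SR" and a crude STARFM-like update
identity = lambda img: img
crude_blend = lambda c1, f1, c2: f1 + (c2 - c1)
img = np.zeros((8, 8))
print(sr_then_blend(img, img, img, identity, crude_blend).shape)  # (8, 8)
```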

18 pages, 6501 KiB  
Article
Retrieval of Ocean Wind Speed Using Super-Resolution Delay-Doppler Maps
by Hao-Yu Wang and Jyh-Ching Juang
Remote Sens. 2020, 12(6), 916; https://doi.org/10.3390/rs12060916 - 12 Mar 2020
Cited by 5 | Viewed by 4497
Abstract
The use of reflected Global Navigation Satellite System (GNSS) signals has been shown to be effective for some remote sensing applications. In a GNSS Reflectometry (GNSS-R) system, a set of delay-Doppler maps (DDMs) related to scattered GNSS signals is formed and serves as a measurement of ocean wind speed and roughness. The design of the DDM receiver involves a trade-off between computation/communication complexity and the effectiveness of data retrieval. A fine-resolution DDM reveals more information for data retrieval while consuming more resources in terms of onboard processing and downlinking. As a result, existing missions typically use a compressed or low-resolution DDM as a data product, and a high-resolution DDM is processed only for special purposes such as calibration. In this paper, a deep-learning super-resolution algorithm is developed to construct a high-resolution DDM from a low-resolution DDM. This may potentially enhance the data retrieval results with no impact on the instrument design. The proposed method is applied to process the DDM products disseminated by the Cyclone GNSS (CYGNSS) mission, and the effectiveness of wind speed retrieval is demonstrated. Full article
(This article belongs to the Special Issue GPS/GNSS for Earth Science and Applications)
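
A minimal sketch of the idea of learning to reconstruct a high-resolution DDM from a low-resolution one is given below, assuming a small residual CNN trained on bicubic-upsampled DDMs with an MSE loss; the DDM bin sizes and the network itself are illustrative assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Small residual CNN standing in for the SR model (architecture assumed)
net = nn.Sequential(
    nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 1, 3, padding=1),
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

low  = torch.rand(16, 1, 5, 10)    # batch of compressed (low-resolution) DDMs
high = torch.rand(16, 1, 20, 50)   # corresponding high-resolution DDMs

# One training step: bicubic upsampling, residual prediction, MSE loss
opt.zero_grad()
x = F.interpolate(low, size=high.shape[-2:], mode="bicubic", align_corners=False)
loss = F.mse_loss(x + net(x), high)
loss.backward()
opt.step()
print(loss.item())
```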
