Special Issue "Remote Sensing Image Restoration and Reconstruction"

A special issue of Remote Sensing (ISSN 2072-4292). This special issue belongs to the section "Remote Sensing Image Processing".

Deadline for manuscript submissions: 31 December 2019.

Special Issue Editors

Prof. Liangpei Zhang
Guest Editor
State Key Lab. LIESMARS, Wuhan University, Wuhan 430072, China
Interests: pattern analysis and machine learning; image processing engineering; computational intelligence and its application in remote sensing image processing; applications of remote sensing
Prof. Huanfeng Shen
Guest Editor
School of Resource and Environmental Science, Wuhan University, Wuhan 430079, China
Tel. +8613163235536
Interests: image quality improvement; remote sensing mapping and application; data fusion and assimilation; regional and global environmental changes
Prof. Qiangqiang Yuan
Guest Editor
School of Geodesy and Geomatics, Wuhan University, Wuhan 430079, China
Tel. 86-27-68758427
Interests: image reconstruction; image denoising; image super-resolution; remote sensing image processing; data fusion and application

Special Issue Information

Dear Colleagues,

In real cases, remote sensing images usually suffer from noise (Gaussian noise, stripe noise, impulse noise, spectral noise, speckle noise, temporal noise, mixed noise, etc.); missing data (thick/thin cloud, shadow, sensor malfunction, etc.); and spatial resolution degradation caused by equipment limitations, working conditions, limited radiance energy, and generally narrow bandwidth. These phenomena severely degrade the quality of remote sensing images and limit the performance of subsequent processing, e.g., classification, unmixing, and target detection. Improving the quality of remote sensing images is therefore a critical preprocessing step. Remote sensing image restoration and reconstruction provide solutions to the above degradation problems.

This Special Issue concerns restoration and reconstruction methods and their applications in remote sensing image processing. It will present the latest advances and trends in restoration and reconstruction algorithms, addressing novel ideas and practical solutions to the above problems, with the aim of increasing the usability and quality of remote sensing data. Moreover, authors are encouraged to present hybrid methods, which might include the use of machine learning approaches. Topics of interest include, but are not limited to, the following:

  • Remote sensing image denoising;
  • Remote sensing image fusion;
  • Remote sensing image super resolution;
  • Remote sensing image missing data reconstruction;
  • Remote sensing image radiation correction;
  • Remote sensing image geometric correction;
  • Remote sensing image restoration.

Prof. Liangpei Zhang
Prof. Huanfeng Shen
Prof. Qiangqiang Yuan
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All papers will be peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Remote Sensing is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1800 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Multispectral image denoising
  • Hyperspectral image denoising
  • SAR image despeckling
  • Remote sensing image destriping
  • Remote sensing image restoration
  • Missing data reconstruction
  • Remote sensing image super-resolution
  • Pansharpening
  • Spatiotemporal fusion
  • Cloud/shadow removal

Published Papers (6 papers)


Research

Open Access Article
Deep Self-Learning Network for Adaptive Pansharpening
Remote Sens. 2019, 11(20), 2395; https://doi.org/10.3390/rs11202395 - 16 Oct 2019
Abstract
Deep learning (DL)-based paradigms have recently made many advances in image pansharpening. However, most existing methods directly downscale the multispectral (MSI) and panchromatic (PAN) images with a default blur kernel to construct the training set, which leads to deteriorated results when the real image does not obey this degradation model. In this paper, a deep self-learning (DSL) network is proposed for adaptive image pansharpening. First, rather than using a fixed blur kernel, a point spread function (PSF) estimation algorithm is proposed to obtain the blur kernel of the MSI. Second, an edge-detection-based pixel-to-pixel image registration method is designed to recover the local misalignments between the MSI and PAN. Third, the original data are downscaled by the estimated PSF, and the pansharpening network is trained in the down-sampled domain. The high-resolution result can finally be predicted by the trained DSL network using the original MSI and PAN. Extensive experiments on three images collected by different satellites demonstrate the superiority of the DSL technique over several state-of-the-art approaches.
(This article belongs to the Special Issue Remote Sensing Image Restoration and Reconstruction)
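The PSF-aware downscaling at the heart of the self-learning scheme can be sketched in a few lines. This is a minimal NumPy illustration that assumes the estimated PSF is a Gaussian of width `sigma`; the function names and the separable-blur implementation are illustrative, not the paper's:

```python
import numpy as np

def gaussian_kernel_1d(sigma, radius):
    """Normalized 1-D Gaussian kernel, used as a separable PSF."""
    x = np.arange(-radius, radius + 1, dtype=np.float64)
    k = np.exp(-0.5 * (x / sigma) ** 2)
    return k / k.sum()

def downscale_with_psf(band, sigma, ratio=4):
    """Blur a band with the estimated Gaussian PSF, then decimate.

    `sigma` stands in for the output of a PSF-estimation step; here it
    is simply a given parameter.
    """
    k = gaussian_kernel_1d(sigma, radius=int(3 * sigma))
    # Separable blur: convolve every row, then every column.
    blurred = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, band)
    blurred = np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, blurred)
    return blurred[::ratio, ::ratio]

# Toy example: a 64x64 synthetic band reduced to 16x16 in the down-sampled domain.
band = np.random.rand(64, 64)
low = downscale_with_psf(band, sigma=1.5, ratio=4)
print(low.shape)  # (16, 16)
```

Training pairs built this way match the sensor's actual degradation more closely than pairs built with a default kernel, which is the point of the self-learning step.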

Open Access Article
Spatial–Spectral Fusion in Different Swath Widths by a Recurrent Expanding Residual Convolutional Neural Network
Remote Sens. 2019, 11(19), 2203; https://doi.org/10.3390/rs11192203 - 20 Sep 2019
Abstract
The quality of remotely sensed images is usually determined by their spatial resolution, spectral resolution, and coverage. However, due to limitations in the sensor hardware, the spectral resolution, spatial resolution, and swath width of the coverage are mutually constrained. Remote sensing image fusion aims at overcoming these constraints in order to combine the useful information in the different images. However, the traditional spatial–spectral fusion approach uses data with the same swath width covering the same area and only considers the mutual constraint between the spectral resolution and spatial resolution. To simultaneously solve the image fusion problems of swath width, spatial resolution, and spectral resolution, this paper introduces a method based on multi-scale feature extraction and residual learning with recurrent expanding. To assess the sensitivity of the convolution operation to images of different swath widths, sensitivity experiments were conducted on the coverage ratio and offset position. Simulated and real experiments with Sentinel-2 data, in which the different widths were simulated, verify the effectiveness of the proposed framework.

Open Access Article
Underwater Image Restoration Based on a Parallel Convolutional Neural Network
Remote Sens. 2019, 11(13), 1591; https://doi.org/10.3390/rs11131591 - 4 Jul 2019
Abstract
Restoring degraded underwater images is a challenging ill-posed problem. The existing prior-based approaches have limited performance in many situations due to their reliance on handcrafted features. In this paper, we propose an effective convolutional neural network (CNN) for underwater image restoration. The proposed network consists of two parallel branches: a transmission estimation network (T-network) and a global ambient light estimation network (A-network); in particular, the T-network employs cross-layer connections and multi-scale estimation to prevent halo artifacts and to preserve edge features. The estimates produced by these two branches are leveraged to restore the clear image according to the underwater optical imaging model. Moreover, we develop a new underwater image synthesis method for building the training datasets, which can simulate images captured in various underwater environments. Experimental results on synthetic and real images demonstrate that our restored underwater images exhibit more natural color correction and better visibility improvement than several state-of-the-art methods.
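Once the two branches have produced their estimates, the restoration itself is an inversion of the simplified underwater imaging model. A toy NumPy sketch, with the transmission `t` and ambient light `A` given directly rather than predicted by the T-network and A-network:

```python
import numpy as np

def restore(I, t, A, t_min=0.1):
    """Invert the simplified imaging model I = J*t + A*(1 - t),
    i.e. J = (I - A) / t + A, flooring t to avoid division blow-up."""
    t = np.maximum(t, t_min)[..., None]   # broadcast over color channels
    J = (I - A) / t + A
    return np.clip(J, 0.0, 1.0)

# Round-trip check on synthetic data: degrade a clean image, then restore it.
J_true = np.random.rand(8, 8, 3)
t = np.full((8, 8), 0.6)                  # uniform transmission
A = np.array([0.1, 0.3, 0.4])             # bluish-green ambient light
I = J_true * t[..., None] + A * (1 - t[..., None])
print(np.allclose(restore(I, t, A), J_true))  # True
```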

Open Access Article
Virtual Restoration of Stained Chinese Paintings Using Patch-Based Color Constrained Poisson Editing with Selected Hyperspectral Feature Bands
Remote Sens. 2019, 11(11), 1384; https://doi.org/10.3390/rs11111384 - 10 Jun 2019
Abstract
Stains, one of the most common degradations of paper cultural relics, not only affect a painting's appearance but sometimes even cover the text, patterns, and colors contained in the relic. Virtual restoration based on common red–green–blue (RGB) images, which removes the degradations and then fills the lacuna regions from the image's known parts using inpainting technology, can produce a visually plausible result. However, due to the lack of information inside the degradations, it tends to yield inconsistent structures when stains cover several color materials. To effectively remove the stains and restore the covered original content of Chinese paintings, a novel method based on Poisson editing is proposed, which exploits the information inside the degradations of three selected feature bands as auxiliary information to guide the restoration, since the selected feature bands capture fewer stains and can expose the covered information. To make Poisson editing suitable for stain removal, the feature bands are also exploited to search for the optimal patch for the pixels in the stain region, and the searched patch is used to construct a color constraint on the original Poisson editing to ensure restoration of the painting's original colors. Specifically, the method consists of two steps: feature band selection from hyperspectral data by established rules, and reconstruction of the stain-contaminated regions of the RGB image with color-constrained Poisson editing. Four Chinese paintings ('Fishing', 'Crane and Banana', 'the Hui Nationality Painting', and 'Lotus Pond and Wild Goose') with different color materials were used to test the performance of the proposed method. Visual results show that the method can effectively remove or dilute the stains while restoring a painting's original colors. By comparing the values of restored pixels with non-stained pixels of the same color materials as reference, images processed by the proposed method had the lowest average root mean square error (RMSE), normalized absolute error (NAE), and average difference (AD), indicating that it is an effective method for restoring stained Chinese paintings.
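The gradient-domain machinery behind Poisson editing is easiest to see in one dimension: solve for values whose successive differences match a guide gradient (in the paper, taken from the cleaner feature bands) while the endpoints stay pinned to the surrounding non-stained pixels. A toy sketch of that idea, not the paper's 2-D color-constrained solver:

```python
import numpy as np

def poisson_blend_1d(left, right, guide_grad):
    """Find f whose successive differences follow `guide_grad` while
    f[0] = left and f[-1] = right (Dirichlet boundary conditions)."""
    n = len(guide_grad) + 1          # number of samples
    m = n - 2                        # interior unknowns
    # Tridiagonal discrete Laplacian for the interior nodes.
    A = (np.diag(np.full(m, -2.0))
         + np.diag(np.ones(m - 1), 1)
         + np.diag(np.ones(m - 1), -1))
    b = np.diff(guide_grad).astype(float)  # divergence of the guide field
    b[0] -= left                           # fold the known boundary values
    b[-1] -= right                         # into the right-hand side
    f = np.empty(n)
    f[0], f[-1] = left, right
    f[1:-1] = np.linalg.solve(A, b)
    return f

# With a constant guide gradient, the solution is a straight ramp.
f = poisson_blend_1d(0.0, 1.0, np.full(4, 0.25))
print(np.allclose(f, [0.0, 0.25, 0.5, 0.75, 1.0]))  # True
```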

Open Access Article
Reconstructing Cloud Contaminated Pixels Using Spatiotemporal Covariance Functions and Multitemporal Hyperspectral Imagery
Remote Sens. 2019, 11(10), 1145; https://doi.org/10.3390/rs11101145 - 14 May 2019
Cited by 1
Abstract
One of the major challenges in optical remote sensing is the presence of clouds, which imposes a hard constraint on the use of multispectral or hyperspectral satellite imagery for earth observation. While some studies have used interpolation models to remove cloud-affected data, relatively few aim at restoration via the use of multi-temporal reference images. This paper proposes not only the use of image time-series, but also the implementation of a geostatistical model that considers the spatiotemporal correlation between them to fill the cloud-related gaps. Using Hyperion hyperspectral images, we demonstrate a capacity to reconstruct cloud-affected pixels and predict their underlying surface reflectance values. To do this, cloudy pixels were masked and a parametric family of non-separable covariance functions was automatically fitted using a composite likelihood estimator. A subset of cloud-free pixels per scene was used to perform a kriging interpolation and to predict the spectral reflectance for each cloud-affected pixel. The approach was evaluated using a benchmark dataset of cloud-free pixels, with a synthetic cloud superimposed upon these data. An overall root mean square error (RMSE) of between 0.5% and 16% of the reflectance was achieved, representing a relative root mean square error (rRMSE) of between 0.2% and 7.5%. The spectral similarity between the predicted and reference reflectance signatures was described by a mean spectral angle (MSA) of between 1° and 11°, demonstrating the spatial and spectral coherence of the predictions. The approach provides an efficient spatiotemporal interpolation framework for cloud removal, gap-filling, and denoising in remotely sensed datasets.
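The kriging step can be illustrated with a spatial-only toy: simple kriging under an exponential covariance. The covariance model, its parameters, and the function names below are illustrative; the paper fits a non-separable spatiotemporal covariance family by composite likelihood:

```python
import numpy as np

def simple_kriging(obs_xy, obs_val, query_xy, mean, sill=1.0, rng=10.0):
    """Predict values at `query_xy` from observations, using the
    exponential covariance C(h) = sill * exp(-h / rng)."""
    def cov(a, b):
        d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
        return sill * np.exp(-d / rng)

    K = cov(obs_xy, obs_xy) + 1e-9 * np.eye(len(obs_xy))  # jitter for stability
    w = np.linalg.solve(K, cov(obs_xy, query_xy))          # kriging weights
    return mean + w.T @ (obs_val - mean)

obs_xy = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0]])
obs_val = np.array([0.20, 0.40, 0.30])
# Kriging is an exact interpolator: predicting at an observed location
# reproduces the observed reflectance.
pred = simple_kriging(obs_xy, obs_val, np.array([[0.0, 0.0]]), mean=0.30)
print(float(np.round(pred[0], 3)))  # 0.2
```

In the paper's setting, the covariance would additionally depend on the time lag between scenes, so cloud-free pixels from neighboring dates contribute to the prediction as well.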

Open Access Article
Domain Transfer Learning for Hyperspectral Image Super-Resolution
Remote Sens. 2019, 11(6), 694; https://doi.org/10.3390/rs11060694 - 22 Mar 2019
Abstract
A Hyperspectral Image (HSI) contains a great number of spectral bands for each pixel; however, the spatial resolution of an HSI is low. Hyperspectral image super-resolution enhances the spatial resolution while preserving the high spectral resolution by software techniques. Recently, methods have been presented that fuse the HSI with a Multispectral Image (MSI), assuming that an MSI of the same scene is available along with the observed HSI, which limits the super-resolution reconstruction quality. In this paper, a new framework based on domain transfer learning for HSI super-resolution is proposed to enhance the spatial resolution of the HSI by learning knowledge from general-purpose optical images (natural scene images) and exploiting the cross-correlation between the observed low-resolution HSI and the high-resolution MSI. First, the relationship between low- and high-resolution images is learned by a single convolutional super-resolution network and then transferred to the HSI by the idea of transfer learning. Second, the obtained pre-high-resolution HSI (pre-HSI), the observed low-resolution HSI, and the high-resolution MSI are simultaneously considered to estimate the endmember matrix and the abundance code for learning the spectral characteristics. Experimental results on ground-based and remote sensing datasets demonstrate that the proposed method achieves competitive performance and outperforms the existing HSI super-resolution methods.
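The endmember/abundance idea in the second step can be sketched with a purely linear toy: given an endmember matrix and the sensor's spectral response, estimate per-pixel abundances from the high-resolution MSI by least squares and reconstruct high-resolution HSI spectra. All names and the least-squares solver are illustrative; the paper estimates these quantities jointly with the transferred network:

```python
import numpy as np

def fuse_by_unmixing(E_hsi, msi_pixels, srf):
    """E_hsi: endmembers (hsi_bands x p); srf: spectral response
    (msi_bands x hsi_bands); msi_pixels: (msi_bands x n)."""
    E_msi = srf @ E_hsi                                     # endmembers as seen by the MSI sensor
    A, *_ = np.linalg.lstsq(E_msi, msi_pixels, rcond=None)  # abundances (p x n)
    return E_hsi @ A                                        # reconstructed HSI spectra (hsi_bands x n)

# Synthetic check: 30 HSI bands, 4 MSI bands, 3 endmembers, 5 pixels.
gen = np.random.default_rng(0)
E = gen.random((30, 3))
srf = gen.random((4, 30))
A_true = gen.random((3, 5))
msi = srf @ (E @ A_true)                 # noise-free high-resolution MSI
hsi_hat = fuse_by_unmixing(E, msi, srf)
print(np.allclose(hsi_hat, E @ A_true))  # True
```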
