Special Issue "Multispectral Image Acquisition, Processing and Analysis"

A special issue of Remote Sensing (ISSN 2072-4292). This special issue belongs to the section "Remote Sensing Image Processing".

Deadline for manuscript submissions: closed (30 June 2019).

Special Issue Editors

Guest Editor
Dr. Benoit Vozel

University of Rennes 1, Institute of Electronics and Telecommunications, UMR CNRS 6164, Department Wave and Signals, SHINE (SAR & Hyperspectral multi-modal Imaging and sigNal processing, Electromagnetic modeling) Research Team, 6 Rue de Kerampont, CS 80518, 22305 Lannion, France
Phone: +33-296469071
Interests: multichannel and hyperspectral image processing
Guest Editor
Prof. Vladimir Lukin

National Aerospace University, Department of Information-Communication Technologies, 17 Chkalova Street, Kharkov 61070, Ukraine
Interests: multichannel image processing
Guest Editor
Dr. Yakoub Bazi

King Saud University, College of Computer and Information Sciences, Computer Engineering Department, P.O. Box 51178, Riyadh 11543, Kingdom of Saudi Arabia
Interests: remote sensing image processing and analysis

Special Issue Information

Dear Colleagues,

Thanks to recent advances in lightweight, lower-cost multispectral sensors and in remote sensing platform technology, end-users now have a multitude of timely observational capabilities for better sensing and monitoring of the Earth's surface.

To exploit the full potential of these ever-advancing systems flexibly and intelligently across their many fields of application, we must continue to improve our analysis and processing capabilities accordingly. Joint efforts toward fully automated, easy-to-use, and efficient systems are a key direction for facilitating mature, operational use of remote sensing.

This Special Issue is thus intended to cover the latest advances in the following primary topics of interest (but not limited to them) related to Multispectral Image Acquisition, Processing and Analysis:

  • State-of-the-art and emerging multispectral technologies, including new platforms (satellite, aerial, Unmanned Aerial Vehicles) and sensors with:
    • spatial, spectral, temporal sensing abilities
    • georeferencing and navigation abilities
    • cooperative sensing
  • Advanced multispectral image/data analysis and processing:
    • lossless/lossy compression, denoising
    • geometric correction, registration, georeferencing
    • feature extraction, classification, object recognition, change detection, domain adaptation
  • Multisource data fusion
    • optical-radar fusion, pan-sharpening
    • field sensing
    • crowd sensing

A wide spectrum of recent and emerging applications of Multispectral Image Acquisition, Processing and Analysis is also targeted, including biodiversity assessment; vegetation and environmental monitoring (identification of diversity in grassland species, invasive plants, biomass estimation, wetlands); precision agriculture in agricultural ecosystems and crop management; water resource and quality management in nearshore coastal waters (mapping near-surface water constituents, benthic habitats) and inland waters (analysis and surveying of rivers and lakes); sustainable forestry and agroforestry (forest preservation and mapping of forest species, wildfire detection); mapping of archaeological areas; urban development and management; and hazard monitoring.

Dr. Benoit Vozel
Prof. Vladimir Lukin
Dr. Yakoub Bazi
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website. Once registered, authors can go to the submission form. Manuscripts can be submitted until the deadline. All papers will be peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Remote Sensing is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1800 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Imaging sensors and platforms
  • Cooperative sensing
  • Multispectral data analysis
  • Multispectral data processing
  • Multisource data fusion
  • Deep learning strategies

Published Papers (11 papers)


Research


Open Access Article
Spectral Super-Resolution with Optimized Bands
Remote Sens. 2019, 11(14), 1648; https://doi.org/10.3390/rs11141648
Received: 7 May 2019 / Revised: 24 June 2019 / Accepted: 5 July 2019 / Published: 11 July 2019
Abstract
Hyperspectral (HS) sensors sample the reflectance spectrum at very high resolution, which allows us to examine material properties in very fine detail. However, their widespread adoption has been hindered because they are very expensive. Reflectance spectra of real materials are high-dimensional but sparse signals. By utilizing prior information about the statistics of real HS spectra, many previous studies have reconstructed HS spectra from multispectral (MS) signals (which can be obtained from cheaper, lower spectral resolution sensors). However, most of these techniques assume that the MS bands are known a priori and do not optimize the MS bands to produce more accurate reconstructions. In this paper, we propose a new end-to-end fully convolutional residual neural network architecture that simultaneously learns both the MS bands and the transformation to reconstruct HS spectra from MS signals by analyzing a large quantity of HS data. The learned bands can be implemented in hardware to obtain an MS sensor that collects the data best suited to reconstructing HS spectra with the learned transformation. Using a diverse set of real-world datasets, we show how the proposed approach of optimizing MS bands along with the transformation can drastically increase the reconstruction accuracy. Additionally, we also investigate the prospects of using reconstructed HS spectra for land cover classification.
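The paper's convolutional residual network is not reproduced here, but the core measure-then-reconstruct idea can be sketched with plain linear algebra. Everything below is illustrative: the spectra are synthetic low-dimensional stand-ins, the band responses B are random rather than learned, and only the reconstruction operator is fitted by least squares.

```python
import numpy as np

rng = np.random.default_rng(0)
n_pix, n_hs, n_ms = 500, 31, 5      # pixels, HS channels, MS bands

# Synthetic stand-in for HS training spectra: high-dimensional but
# effectively low-dimensional, as the abstract notes for real materials.
basis = rng.normal(size=(5, n_hs))
X = rng.normal(size=(n_pix, 5)) @ basis          # (n_pix, n_hs)

# Stand-in "learned" spectral responses of the MS bands (the paper
# learns these jointly with a network; here they are simply random).
B = rng.normal(size=(n_hs, n_ms))
M = X @ B                                        # simulated MS signals

# Linear reconstruction operator fitted by least squares.
W, *_ = np.linalg.lstsq(M, X, rcond=None)
X_hat = M @ W

rel_err = np.linalg.norm(X - X_hat) / np.linalg.norm(X)
print(f"relative reconstruction error: {rel_err:.2e}")
```

When the bands happen to span the intrinsic subspace of the spectra, linear reconstruction is near-exact, which is what makes optimizing the bands, rather than fixing them a priori, attractive.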
(This article belongs to the Special Issue Multispectral Image Acquisition, Processing and Analysis)

Open Access Article
A Local Feature Descriptor Based on Oriented Structure Maps with Guided Filtering for Multispectral Remote Sensing Image Matching
Remote Sens. 2019, 11(8), 951; https://doi.org/10.3390/rs11080951
Received: 22 March 2019 / Revised: 15 April 2019 / Accepted: 18 April 2019 / Published: 20 April 2019
Abstract
Multispectral image matching plays a very important role in remote sensing image processing and can be applied to registering the complementary information captured by different sensors. Due to the nonlinear intensity difference in multispectral images, many classic descriptors designed for images of the same spectrum are unable to work well. To cope with this problem, this paper proposes a new local feature descriptor termed histogram of oriented structure maps (HOSM) for multispectral image matching tasks. The proposed method consists of three steps. First, we propose a new method based on local contrast to construct the structure guidance images from the multispectral images by transferring the significant contours from source images to results, respectively. Second, we calculate oriented structure maps with guided image filtering. In detail, we first construct edge maps with progressive Sobel filters to extract the common structure characteristics from the multispectral images, and then we compute the oriented structure maps by performing guided filtering on the edge maps with the structure guidance images constructed in the first step. Finally, we build the HOSM descriptor by calculating the histogram of oriented structure maps in a local region around each interest point and normalizing the feature vector. The proposed HOSM descriptor was evaluated on three commonly used datasets and was compared with several state-of-the-art methods. The experimental results demonstrate that the HOSM descriptor is robust to the nonlinear intensity difference in multispectral images and outperforms other methods.
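The full HOSM pipeline (contrast-based guidance images plus guided filtering) is beyond a short sketch, but the final descriptor step, a normalized histogram of oriented Sobel responses, can be illustrated. This is a simplified stand-in, not the authors' implementation; folding orientations to [0, π) gives the sign-invariance that helps when intensities invert between spectral bands.

```python
import numpy as np

def filt3(img, k):
    """Apply a 3x3 filter (cross-correlation) with edge padding."""
    p = np.pad(img.astype(float), 1, mode='edge')
    out = np.zeros(img.shape, float)
    for i in range(3):
        for j in range(3):
            out += k[i, j] * p[i:i + img.shape[0], j:j + img.shape[1]]
    return out

def orientation_hist(img, bins=8):
    """Magnitude-weighted histogram of Sobel orientations, folded to [0, pi)."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)
    gx, gy = filt3(img, kx), filt3(img, kx.T)
    ang = np.arctan2(gy, gx) % np.pi          # orientation, sign-invariant
    mag = np.hypot(gx, gy)
    h, _ = np.histogram(ang, bins=bins, range=(0, np.pi), weights=mag)
    return h / (np.linalg.norm(h) + 1e-12)

rng = np.random.default_rng(1)
img = rng.random((32, 32))
d1 = orientation_hist(img)
d2 = orientation_hist(1.0 - img)              # intensity-inverted image
```

Because orientations are taken modulo π, the descriptor of an image and of its intensity-inverted version coincide, a crude proxy for robustness to nonlinear intensity differences.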

Open Access Article
Pansharpening Using Guided Filtering to Improve the Spatial Clarity of VHR Satellite Imagery
Remote Sens. 2019, 11(6), 633; https://doi.org/10.3390/rs11060633
Received: 18 February 2019 / Revised: 10 March 2019 / Accepted: 12 March 2019 / Published: 15 March 2019
Cited by 1
Abstract
Pansharpening algorithms are designed to enhance the spatial resolution of multispectral images using panchromatic images with high spatial resolutions. Panchromatic and multispectral images acquired from very high resolution (VHR) satellite sensors used as input data in the pansharpening process are characterized by spatial dissimilarities due to differences in their spectral/spatial characteristics and time lags between panchromatic and multispectral sensors. In this manuscript, a new pansharpening framework is proposed to improve the spatial clarity of VHR satellite imagery. This algorithm aims to remove the spatial dissimilarity between panchromatic and multispectral images using guided filtering (GF) and to generate the optimal local injection gains for pansharpening. First, we generate optimal multispectral images with spatial characteristics similar to those of panchromatic images using GF. Then, multiresolution analysis (MRA)-based pansharpening is applied using normalized difference vegetation index (NDVI)-based optimal injection gains and spatial details obtained through GF. The algorithm is applied to Korea multipurpose satellite (KOMPSAT)-3/3A satellite sensor data, and the experimental results show that the pansharpened images obtained with the proposed algorithm exhibit superior spatial quality and preserve spectral information better than those based on existing algorithms.
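A minimal sketch of the MRA-style detail injection at the heart of such pansharpening methods, with a box filter standing in for the guided filter and a constant gain standing in for the paper's local NDVI-based gains. The images are synthetic stand-ins.

```python
import numpy as np

def box_blur(a, r=2):
    """Box low-pass filter with edge padding (stand-in for guided filtering)."""
    k = 2 * r + 1
    p = np.pad(a.astype(float), r, mode='edge')
    out = np.zeros(a.shape, float)
    for i in range(k):
        for j in range(k):
            out += p[i:i + a.shape[0], j:j + a.shape[1]]
    return out / k**2

rng = np.random.default_rng(2)
pan = rng.random((64, 64))             # stand-in PAN image
detail = pan - box_blur(pan)           # high-frequency spatial detail of PAN
ms_up = box_blur(pan)                  # stand-in upsampled (lowpass) MS band
gain = 1.0                             # the paper derives local, NDVI-based gains
sharpened = ms_up + gain * detail      # MRA detail injection
```

In this idealized case the MS band is exactly the lowpass of PAN, so injecting the missing PAN detail with unit gain recovers the PAN image; real data require the spatially varying gains the paper estimates.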

Open Access Article
Enhancement of Component Images of Multispectral Data by Denoising with Reference
Remote Sens. 2019, 11(6), 611; https://doi.org/10.3390/rs11060611
Received: 12 January 2019 / Revised: 25 February 2019 / Accepted: 10 March 2019 / Published: 13 March 2019
Cited by 1
Abstract
Multispectral remote sensing data may contain component images that are heavily corrupted by noise, and a pre-filtering (denoising) procedure is often applied to enhance these component images. To do this, one can use reference images—component images having relatively high quality that are similar to the image subject to pre-filtering. Here, we study the following problems: how to select component images that can be used as references (e.g., for the Sentinel multispectral remote sensing data) and how to perform the actual denoising. We demonstrate that component images of the same resolution as well as component images of a better resolution can be used as references. To provide high efficiency of denoising, reference images have to be transformed using linear or nonlinear transformations. This paper proposes a practical approach to doing this. Examples for denoising tests and real-life images demonstrate the high efficiency of the proposed approach.
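The reference-transformation idea can be sketched in a few lines: fit a linear map from the high-quality reference channel to the noisy channel and use the mapped reference as the denoised estimate. This idealized example assumes the clean signal lies exactly in the span of the reference, which the paper's linear/nonlinear transforms only approximate.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 10_000
clean = np.sin(np.linspace(0, 20, n))          # unknown true component image
ref = 2.0 * clean + 1.0                        # high-quality reference channel
noisy = clean + rng.normal(scale=0.5, size=n)  # heavily corrupted channel

# Fit an affine transform of the reference to the noisy channel
# (the paper also considers nonlinear transformations).
A = np.column_stack([ref, np.ones(n)])
coef, *_ = np.linalg.lstsq(A, noisy, rcond=None)
denoised = A @ coef

mse_noisy = np.mean((noisy - clean) ** 2)
mse_denoised = np.mean((denoised - clean) ** 2)
```

The fit projects the noisy channel onto the two-dimensional span of the reference and a constant, so almost all of the independent noise is rejected.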

Open Access Article
Fusion of Multispectral and Panchromatic Images via Spatial Weighted Neighbor Embedding
Remote Sens. 2019, 11(5), 557; https://doi.org/10.3390/rs11050557
Received: 20 January 2019 / Revised: 22 February 2019 / Accepted: 1 March 2019 / Published: 7 March 2019
Abstract
Fusing the panchromatic (PAN) image and low spatial-resolution multispectral (LR MS) images is an effective technology for generating high spatial-resolution MS (HR MS) images. Some image-fusion methods inspired by neighbor embedding (NE) have been proposed and produce competitive results. These methods generally adopt the Euclidean distance to determine the neighbors. However, a closer Euclidean distance does not imply greater similarity in spatial structure. In this paper, we propose a spatial weighted neighbor embedding (SWNE) approach for PAN and MS image fusion, which exploits the similarity between the manifold structures of the observed LR MS images and those of the HR MS images. In SWNE, the spatial neighbors of the LR patch are found first. Second, the weights of these neighbors are estimated by the alternating direction method of multipliers (ADMM), in which the neighbors and their weights are determined simultaneously. Finally, the HR patches are reconstructed as the sum of the HR patches corresponding to the LR patches, multiplied by their weights. Due to the introduction of spatial structures in the objective function, outlier patches can be eliminated effectively by ADMM. Compared with other NE-based methods, more reasonable neighbor patches and weights are estimated simultaneously. Experiments are conducted on datasets collected by the QuickBird and GeoEye-1 satellites to validate the effectiveness of SWNE, and the results demonstrate a better performance of SWNE in spatial and spectral information preservation.
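A rough sketch of the neighbor-embedding step, without the spatial weighting or the ADMM solver that are the paper's contributions: compute LLE-style sum-to-one reconstruction weights for an LR patch over its neighbors and transfer them to the corresponding HR patches. All patches here are synthetic.

```python
import numpy as np

def lle_weights(query, neighbors, ridge=1e-6):
    """Sum-to-one reconstruction weights of a query over its neighbors."""
    D = neighbors - query                      # (k, d) centered neighbors
    G = D @ D.T                                # local Gram matrix
    G = G + (ridge * np.trace(G) + ridge) * np.eye(len(G))
    w = np.linalg.solve(G, np.ones(len(G)))
    return w / w.sum()

rng = np.random.default_rng(4)
k, d_lr, d_hr = 2, 16, 64
hr_atoms = rng.random((k, d_hr))               # HR dictionary patches
P = rng.random((d_lr, d_hr))                   # stand-in linear HR -> LR degradation
lr_atoms = hr_atoms @ P.T                      # corresponding LR patches

# A query LR patch lying midway between the two LR atoms.
query = 0.5 * lr_atoms[0] + 0.5 * lr_atoms[1]
w = lle_weights(query, lr_atoms)
hr_hat = w @ hr_atoms                          # transfer weights to HR patches
```

Under the shared-manifold assumption, the weights computed in LR space reconstruct the corresponding HR patch; SWNE improves on this by choosing neighbors and weights jointly with spatial structure taken into account.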

Open Access Article
Hyperspectral Image Classification with Multi-Scale Feature Extraction
Remote Sens. 2019, 11(5), 534; https://doi.org/10.3390/rs11050534
Received: 20 January 2019 / Revised: 26 February 2019 / Accepted: 27 February 2019 / Published: 5 March 2019
Abstract
Spectral features alone cannot effectively reflect the differences among ground objects or distinguish their boundaries in hyperspectral image (HSI) classification. Multi-scale feature extraction can solve this problem and improve the accuracy of HSI classification. The Gaussian pyramid can effectively decompose the HSI into multi-scale structures and efficiently extract features of different scales by stepwise filtering and downsampling. Therefore, this paper proposes a Gaussian pyramid based multi-scale feature extraction (MSFE) classification method for HSI. First, the HSI is decomposed into several Gaussian pyramids to extract multi-scale features. Second, we construct probability maps in each layer of the Gaussian pyramid and employ edge-preserving filtering (EPF) algorithms to further optimize the details. Finally, the final classification map is acquired by a majority voting method. Compared with other spectral-spatial classification methods, the proposed method can not only extract the characteristics of different scales but can also better preserve detailed structures and the edge regions of the image. Experiments performed on three real hyperspectral datasets show that the proposed method can achieve competitive classification accuracy.
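The smooth-then-downsample construction of a Gaussian pyramid is easy to sketch; the probability maps, EPF optimization, and voting steps are omitted. The 5-tap binomial kernel below is a common Gaussian approximation and an assumption, not necessarily the kernel used in the paper.

```python
import numpy as np

KERNEL = np.array([1, 4, 6, 4, 1], float) / 16.0   # binomial Gaussian approx

def blur(img):
    """Separable Gaussian-like smoothing with edge padding."""
    out = img.astype(float)
    for axis in (0, 1):
        pad = [(2, 2) if a == axis else (0, 0) for a in (0, 1)]
        p = np.pad(out, pad, mode='edge')
        acc = np.zeros_like(out)
        for t, wgt in enumerate(KERNEL):
            sl = [slice(None), slice(None)]
            sl[axis] = slice(t, t + out.shape[axis])
            acc += wgt * p[tuple(sl)]
        out = acc
    return out

def gaussian_pyramid(img, levels=3):
    """Repeated smooth-then-downsample, as used for multi-scale features."""
    pyr = [img.astype(float)]
    for _ in range(levels - 1):
        pyr.append(blur(pyr[-1])[::2, ::2])
    return pyr

rng = np.random.default_rng(5)
pyr = gaussian_pyramid(rng.random((64, 64)), levels=3)
```

Each level halves the spatial size and suppresses fine-scale variation, which is what lets features computed per level capture structures of different scales.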

Open Access Article
A Multiscale Hierarchical Model for Sparse Hyperspectral Unmixing
Remote Sens. 2019, 11(5), 500; https://doi.org/10.3390/rs11050500
Received: 26 January 2019 / Revised: 15 February 2019 / Accepted: 23 February 2019 / Published: 1 March 2019
Cited by 1
Abstract
Due to the complex background and low spatial resolution of the hyperspectral sensor, observed ground reflectance is often mixed at the pixel level. Hyperspectral unmixing (HU) is a hot issue in the remote sensing area because it can decompose the observed mixed pixel reflectance. Traditional sparse hyperspectral unmixing often leads to an ill-posed inverse problem, which can be circumvented by spatial regularization approaches. However, their adoption has come at the expense of a massive increase in computational cost. In this paper, a novel multiscale hierarchical model for sparse hyperspectral unmixing is proposed. The paper decomposes HU into two domain problems: one in an approximation-scale representation obtained by resampling the domain, and the other in the original domain. The use of multiscale spatial resampling methods for HU leads to an effective strategy that deals with spectral variability and computational cost. Furthermore, the hierarchical strategy with abundance sparsity representation in each layer aims to obtain the global optimal solution. Both simulations and real hyperspectral data experiments show that the proposed method outperforms previous methods in endmember extraction and abundance fraction estimation, and promotes piecewise homogeneity in the estimated abundance without compromising sharp discontinuities among neighboring pixels. Additionally, compared with total variation regularization, the proposed method reduces the computational time effectively.
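Per pixel, sparse unmixing reduces to a non-negative least-squares problem; a simple projected-gradient solver (a stand-in for the paper's multiscale hierarchical scheme, without its spatial resampling) illustrates the abundance-estimation step. The endmember matrix and abundances here are synthetic.

```python
import numpy as np

def unmix_pixel(E, y, iters=3000):
    """Abundance estimation: min ||E a - y||^2 s.t. a >= 0,
    solved by projected gradient descent (a stand-in for the
    paper's multiscale hierarchical solver)."""
    L = np.linalg.norm(E, 2) ** 2          # Lipschitz constant of the gradient
    a = np.zeros(E.shape[1])
    for _ in range(iters):
        a = np.maximum(0.0, a - (E.T @ (E @ a - y)) / L)
    return a

rng = np.random.default_rng(6)
n_bands, n_end = 20, 4
E = rng.random((n_bands, n_end))           # endmember spectra (columns)
a_true = np.array([0.7, 0.3, 0.0, 0.0])    # sparse, non-negative abundances
y = E @ a_true                             # noise-free mixed-pixel spectrum
a_hat = unmix_pixel(E, y)
```

With a noise-free pixel and a full-column-rank endmember matrix, the non-negative solution is unique, so the iteration recovers the sparse abundances; the paper's contribution is making such estimation spatially regularized yet cheap via the multiscale hierarchy.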

Open Access Article
FCM Approach of Similarity and Dissimilarity Measures with α-Cut for Handling Mixed Pixels
Remote Sens. 2018, 10(11), 1707; https://doi.org/10.3390/rs10111707
Received: 7 September 2018 / Revised: 6 October 2018 / Accepted: 16 October 2018 / Published: 29 October 2018
Abstract
In this paper, the fuzzy c-means (FCM) classifier is studied with 12 similarity and dissimilarity measures: Manhattan distance, chessboard distance, Bray–Curtis distance, Canberra distance, cosine distance, correlation distance, mean absolute difference, median absolute difference, Euclidean distance, Mahalanobis distance, diagonal Mahalanobis distance and normalised squared Euclidean distance. Both single and composite modes were used with a varying weight constant (m*) and also at different α-cuts. The two best single measures obtained were combined to study the effect of composite measures on the datasets used. An image-to-image accuracy check was conducted to assess the accuracy of the classified images. A fuzzy error matrix (FERM) was applied to measure the accuracy assessment outcomes for a Landsat-8 dataset with respect to the Formosat-2 dataset. To conclude, the FCM classifier with the cosine measure performed better than the conventional Euclidean measure. However, due to the inability of the FCM classifier to handle noise properly, the classification accuracy was around 75%.
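A few of the listed measures, plugged into the standard FCM membership update, can be sketched as follows. The cluster centers and pixel are toy values, and only four of the twelve measures are shown.

```python
import numpy as np

def distances(x, centers, kind):
    """A handful of the 12 similarity/dissimilarity measures studied."""
    diff = centers - x
    if kind == 'manhattan':
        return np.abs(diff).sum(axis=1)
    if kind == 'chessboard':
        return np.abs(diff).max(axis=1)
    if kind == 'braycurtis':
        return np.abs(diff).sum(axis=1) / (centers + x).sum(axis=1)
    if kind == 'cosine':
        num = centers @ x
        den = np.linalg.norm(centers, axis=1) * np.linalg.norm(x)
        return 1.0 - num / den
    raise ValueError(kind)

def fcm_memberships(x, centers, kind='cosine', m=2.0):
    """Standard fuzzy c-means membership update for one pixel."""
    d = np.maximum(distances(x, centers, kind), 1e-12)   # guard zero distance
    inv = d ** (-2.0 / (m - 1.0))
    return inv / inv.sum()

centers = np.array([[1.0, 0.1], [0.1, 1.0], [0.7, 0.7]])  # toy class means
x = np.array([0.9, 0.2])                                  # mixed-pixel spectrum
u = fcm_memberships(x, centers)
```

The memberships sum to one per pixel, which is how FCM represents mixed pixels; swapping `kind` changes only the distance, exactly the comparison the paper carries out.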

Open Access Article
Diffuse Skylight as a Surrogate for Shadow Detection in High-Resolution Imagery Acquired Under Clear Sky Conditions
Remote Sens. 2018, 10(8), 1185; https://doi.org/10.3390/rs10081185
Received: 1 July 2018 / Revised: 24 July 2018 / Accepted: 25 July 2018 / Published: 27 July 2018
Abstract
An alternative technique for shadow detection and abundance estimation is presented for high spatial resolution imagery acquired under clear sky conditions from airborne/spaceborne sensors. The method, termed the Scattering Index (SI), uses Rayleigh scattering principles to create a diffuse skylight vector as a shadow reference. Using linear algebra, the proportion of diffuse skylight in each image pixel provides a per-pixel measure of shadow extent and abundance. We performed a comparative evaluation against two other methods: first valley detection thresholding (extent) and physics-based unmixing (extent and abundance). Overall accuracy and F-score measures are used to evaluate shadow extent on both WorldView-3 and ADS40 images captured over a common scene. Image subsets are selected to capture objects well documented as shadow detection anomalies, e.g., dark water bodies. Results showed improved accuracies and F-scores for shadow extent, and qualitative evaluation of abundance shows the method is invariant to scene and sensor characteristics. SI avoids shadow misclassifications by avoiding the use of pixel intensity and the associated limitations of binary thresholding. The method negates the need for complex sun-object-sensor corrections, is simple to apply, and is invariant to the exponential increase in scene complexity associated with higher-resolution imagery.
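The skylight-vector construction can be sketched directly from the Rayleigh λ⁻⁴ law: build a reference vector at the sensor's band centers and measure, per pixel, how much of the spectrum it explains. The band centers and pixel spectra below are hypothetical, and cosine similarity stands in for the paper's linear-algebra formulation.

```python
import numpy as np

# Band centers (nm) of a hypothetical 4-band VNIR sensor.
wavelengths = np.array([480.0, 560.0, 660.0, 830.0])

# Rayleigh scattering ~ lambda^-4: the diffuse-skylight reference vector.
skylight = wavelengths ** -4
skylight /= np.linalg.norm(skylight)

def scattering_index(pixel):
    """Proportion of the pixel spectrum explained by diffuse skylight
    (cosine of the angle to the skylight reference vector)."""
    return pixel @ skylight / (np.linalg.norm(pixel) + 1e-12)

shadow_pixel = 0.2 * skylight                 # lit only by blue-rich skylight
sunlit_pixel = skylight + 3.0 * np.ones(4)    # skylight plus flat direct sunlight
```

A fully shadowed pixel, illuminated only by skylight, aligns with the reference vector regardless of its brightness, which is why the measure does not depend on pixel intensity.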

Open Access Article
A Cloud Detection Method for Landsat 8 Images Based on PCANet
Remote Sens. 2018, 10(6), 877; https://doi.org/10.3390/rs10060877
Received: 13 April 2018 / Revised: 29 May 2018 / Accepted: 1 June 2018 / Published: 5 June 2018
Cited by 7
Abstract
Cloud detection for remote sensing images is often a necessary process, because cloud cover is widespread in optical remote sensing images and causes considerable difficulty for many remote sensing activities, such as land cover monitoring, environmental monitoring and target recognition. In this paper, a novel cloud detection method is proposed for multispectral remote sensing images from Landsat 8. Firstly, the color composite image of Bands 6, 3 and 2 is divided into superpixel sub-regions through the Simple Linear Iterative Clustering (SLIC) method. Then, a two-step superpixel classification strategy is used to predict each superpixel as cloud or non-cloud. Thirdly, a fully connected Conditional Random Field (CRF) model is used to refine the cloud detection result, and accurate cloud borders are obtained. In the two-step superpixel classification strategy, the bright and thick cloud superpixels, as well as the obvious non-cloud superpixels, are first separated from potential cloud superpixels through a threshold function, which greatly speeds up the detection. The designed double-branch PCA Network (PCANet) architecture can extract the high-level information of cloud, and, combined with a Support Vector Machine (SVM) classifier, it correctly classifies the potential superpixels. Visual and quantitative comparison experiments are conducted on the Landsat 8 Cloud Cover Assessment (L8 CCA) dataset; the results indicate that our proposed method can accurately detect clouds under different conditions, and is more effective and robust than the compared state-of-the-art methods.
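The two-step strategy, settling obvious superpixels cheaply by thresholding and sending only ambiguous ones to the expensive classifier, can be sketched with a toy brightness rule standing in for the PCANet+SVM branch. Thresholds and data are hypothetical.

```python
import numpy as np

def two_step_labels(brightness, low=0.2, high=0.8):
    """Step 1: thresholds settle obvious superpixels; step 2: only the
    ambiguous ones go to the (expensive) classifier. A trivial
    brightness rule stands in for PCANet features + SVM here."""
    labels = np.full(brightness.shape, -1)         # -1 = potential / ambiguous
    labels[brightness >= high] = 1                 # bright, thick cloud
    labels[brightness <= low] = 0                  # obvious non-cloud
    ambiguous = labels == -1
    # stand-in classifier applied only to the remaining superpixels
    labels[ambiguous] = (brightness[ambiguous] > 0.5).astype(int)
    return labels, ambiguous.mean()

rng = np.random.default_rng(7)
b = rng.random(1000)                               # mean brightness per superpixel
labels, frac_ambiguous = two_step_labels(b)
```

Only the ambiguous fraction ever reaches the heavy classifier, which is the source of the speed-up the abstract describes.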

Other


Open Access Technical Note
Thermal Airborne Optical Sectioning
Remote Sens. 2019, 11(14), 1668; https://doi.org/10.3390/rs11141668
Received: 4 June 2019 / Revised: 26 June 2019 / Accepted: 12 July 2019 / Published: 13 July 2019
Abstract
We apply a multi-spectral (RGB and thermal) camera drone for synthetic aperture imaging to computationally remove occluding vegetation and reveal hidden objects, as required in archeology, search-and-rescue, animal inspection, and border control applications. The radiated heat signal of strongly occluded targets, such as a human body hidden in dense shrub, can be made visible by integrating multiple thermal recordings from slightly different perspectives, even though the target is entirely invisible in RGB recordings or unidentifiable in single thermal images. We collect bits of heat radiation through the occluder volume over a wide synthetic aperture range and computationally combine them into a clear image. This requires precise estimation of the drone's position and orientation for each capturing pose, which is supported by applying computer vision algorithms to the high-resolution RGB images.
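The benefit of integrating many registered thermal views can be simulated: if each frame sees the target through independently placed occluders, averaging the stack recovers the target pattern far better than any single frame. This toy model skips the pose estimation and depth-dependent registration the real system performs; the occlusion rate and temperatures are assumptions.

```python
import numpy as np

rng = np.random.default_rng(8)
h, w, n_frames = 32, 32, 64
target = rng.random((h, w))                   # hidden heat pattern on the ground

# Each registered frame sees the target through random occluding
# vegetation: ~30% of pixels are blocked (cold occluder, value 0).
frames = np.where(rng.random((n_frames, h, w)) < 0.3, 0.0, target)

integral = frames.mean(axis=0)                # synthetic-aperture integration

def corr(a, b):
    """Pearson correlation between two images."""
    return np.corrcoef(a.ravel(), b.ravel())[0, 1]
```

Averaging replaces the all-or-nothing occlusion of a single frame with a roughly uniform attenuation, so the integral image correlates strongly with the hidden target while any single frame does not.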

Remote Sens. (EISSN 2072-4292) is published by MDPI AG, Basel, Switzerland.