Special Issue "Advanced Multisensor Image Analysis Techniques for Land-Cover Mapping"

A special issue of Remote Sensing (ISSN 2072-4292). This special issue belongs to the section "Remote Sensing Image Processing".

Deadline for manuscript submissions: closed (31 March 2021).

Special Issue Editors

Dr. Behnood Rasti
Guest Editor
Faculty of Electrical and Computer Engineering, University of Iceland, Iceland
Interests: signal and image processing; hyperspectral image analysis; remote sensing; multisensor data fusion; machine learning and control systems
Prof. Dr. Magnus Ulfarsson
Guest Editor
Faculty of Electrical and Computer Engineering, University of Iceland, 102 Reykjavík, Iceland
Interests: image/signal processing; machine learning; remote sensing; hyperspectral imaging; hyperspectral unmixing; data fusion
Prof. Dr. Jocelyn Chanussot
Guest Editor
Grenoble Institute of Technology, GIPSA-lab, 11 rue des Mathématiques, Grenoble Campus BP46, CEDEX, F-38402 Saint Martin d'Hères, France
Interests: image processing; machine learning; mathematical morphology; hyperspectral imaging; data fusion

Special Issue Information

Dear Colleagues,

Recent advances in remote sensing technologies, and the growing variety of sensors, provide complementary information for target detection, analysis, and observation of the Earth. However, the complexity and variety of remote sensing imaging technologies make the simultaneous interpretation of different data sources, from ground measurements to aerial and spaceborne measurements, very challenging. First, the sheer volume of multisource data makes the analysis cumbersome for end-users. Second, integrating and interpreting multisource data requires dedicated image analysis techniques, because the data have different characteristics that often stem from differences in the measurement techniques. As a result, conventional image processing techniques often either fail or are not efficient enough for multisensor data analysis.

The main aim of this Special Issue is to present the most recent image processing and machine learning techniques for land-cover mapping and tracking using multisensor data, such as hyperspectral, multispectral, light detection and ranging (LiDAR), and synthetic aperture radar (SAR) data. We invite both analytical and application-oriented land-cover mapping techniques and procedures that integrate two or more remote sensing measurements for submission to this Special Issue.

Dr. Behnood Rasti
Prof. Magnus Ulfarsson
Dr. Pedram Ghamisi
Prof. Jon Atli Benediktsson
Prof. Jocelyn Chanussot
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website; once registered, proceed to the submission form. Manuscripts can be submitted until the deadline. All papers will be peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Remote Sensing is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and written in good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Multisensor fusion
  • Multitemporal fusion
  • Remote sensing
  • Land-cover mapping
  • Advanced image analysis
  • Machine learning
  • Deep learning

Published Papers (4 papers)


Research

Article
Multilevel Structure Extraction-Based Multi-Sensor Data Fusion
Remote Sens. 2020, 12(24), 4034; https://doi.org/10.3390/rs12244034 - 09 Dec 2020
Viewed by 610
Abstract
Multi-sensor data on the same area provide complementary information, which is helpful for improving the discrimination capability of classifiers. In this work, a novel multilevel structure extraction method is proposed to fuse multi-sensor data. The method comprises three steps: first, multilevel structure extraction is constructed by cascading morphological profiles and structure features, and is used to extract spatial information from the original images. Then, a low-rank model is adopted to integrate the extracted spatial information. Finally, a spectral classifier is employed to calculate class probabilities, and a maximum a posteriori estimation model is used to decide the final labels. Experiments on three datasets, covering rural and urban scenes, validate that the proposed approach produces promising performance with regard to both subjective and objective qualities.
(This article belongs to the Special Issue Advanced Multisensor Image Analysis Techniques for Land-Cover Mapping)
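The three-step pipeline described in the abstract can be sketched roughly as follows. This is an illustrative approximation, not the authors' implementation: simple grey-scale openings and closings stand in for the paper's multilevel structure extraction, a truncated SVD plays the low-rank model, and a uniform class prior reduces the maximum a posteriori step to an argmax over class probabilities.

```python
import numpy as np
from scipy import ndimage

def multilevel_profiles(image, sizes=(3, 5, 7)):
    """Step 1 (stand-in): stack grey openings and closings at several
    structuring-element sizes to mimic morphological profiles."""
    layers = [image]
    for s in sizes:
        layers.append(ndimage.grey_opening(image, size=s))
        layers.append(ndimage.grey_closing(image, size=s))
    return np.stack(layers, axis=-1)  # H x W x L

def low_rank_fuse(features, rank=2):
    """Step 2: integrate the stacked features with a truncated SVD,
    a simple low-rank model."""
    h, w, l = features.shape
    X = features.reshape(h * w, l)
    U, S, Vt = np.linalg.svd(X, full_matrices=False)
    X_lr = U[:, :rank] @ np.diag(S[:rank]) @ Vt[:rank]
    return X_lr.reshape(h, w, l)

def map_labels(class_probs):
    """Step 3: with a uniform prior, the maximum a posteriori label is
    just the argmax over the classifier's class probabilities."""
    return np.argmax(class_probs, axis=-1)

rng = np.random.default_rng(0)
img = rng.random((16, 16))
fused = low_rank_fuse(multilevel_profiles(img))
probs = rng.dirichlet(np.ones(4), size=(16, 16))  # stand-in classifier output
labels = map_labels(probs)
print(fused.shape, labels.shape)  # (16, 16, 7) (16, 16)
```

In the paper, the spatial features come from cascaded morphological profiles over multiple sensor images and the class probabilities from a spectral classifier; here both are replaced by placeholders to keep the sketch self-contained.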

Article
Data Fusion Using a Multi-Sensor Sparse-Based Clustering Algorithm
Remote Sens. 2020, 12(23), 4007; https://doi.org/10.3390/rs12234007 - 07 Dec 2020
Cited by 1 | Viewed by 1285
Abstract
The increasing amount of information acquired by imaging sensors in the Earth sciences results in the availability of a multitude of complementary data (e.g., spectral, spatial, elevation) for monitoring the Earth's surface. Many studies have investigated the use of multi-sensor data sets in supervised learning-based approaches for various tasks (e.g., classification and regression), while unsupervised learning-based approaches have received less attention. In this paper, we propose a new approach to fuse multiple data sets from imaging sensors using a multi-sensor sparse-based clustering algorithm (Multi-SSC). A technique for the extraction of spatial features (i.e., morphological profiles (MPs) and invariant attribute profiles (IAPs)) is applied to high-spatial-resolution data to derive spatial and contextual information. This information is then fused with spectrally rich data such as multi- or hyperspectral data. To fuse the multi-sensor data sets, a hierarchical sparse subspace clustering approach is employed; more specifically, a lasso-based binary algorithm is used to fuse the spectral and spatial information prior to automatic clustering. The proposed framework ensures that the generated clustering map is smooth and preserves the spatial structures of the scene. To evaluate the generalization capability of the proposed approach, we investigate its performance not only on diverse scenes but also on different sensors and data types. The first two data sets are geological data sets consisting of hyperspectral and RGB data. The third data set is the well-known benchmark Trento data set, including hyperspectral and LiDAR data. Experimental results indicate that this novel multi-sensor clustering algorithm can provide an accurate clustering map compared with state-of-the-art sparse subspace-based clustering algorithms.
(This article belongs to the Special Issue Advanced Multisensor Image Analysis Techniques for Land-Cover Mapping)
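As a rough illustration of the sparse subspace clustering machinery that Multi-SSC builds on, the generic SSC recipe below regresses each feature vector on all the others with an l1 penalty and spectrally clusters the resulting affinity matrix. The synthetic data, parameter values, and lasso solver are all illustrative stand-ins, not the paper's Multi-SSC algorithm.

```python
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.cluster import SpectralClustering

def ssc(X, n_clusters, alpha=0.01):
    """Generic sparse subspace clustering: represent each sample as a
    sparse combination of the others, then spectrally cluster the
    affinity built from the sparse codes."""
    n = X.shape[0]
    C = np.zeros((n, n))
    for i in range(n):
        mask = np.arange(n) != i
        reg = Lasso(alpha=alpha, fit_intercept=False, max_iter=5000)
        # Design matrix: the other samples as columns (features x samples)
        reg.fit(X[mask].T, X[i])
        C[i, mask] = reg.coef_
    W = np.abs(C) + np.abs(C).T  # symmetric affinity
    sc = SpectralClustering(n_clusters=n_clusters,
                            affinity="precomputed", random_state=0)
    return sc.fit_predict(W)

# Two synthetic rank-1 "subspaces" standing in for fused
# spectral+spatial features of two land-cover classes
rng = np.random.default_rng(1)
u = np.array([1.0, 1.0, 1.0, 0.0, 0.0, 0.0])
v = np.array([0.0, 0.0, 0.0, 1.0, 1.0, 1.0])
X = np.vstack([np.outer(rng.uniform(1, 2, 15), u),
               np.outer(rng.uniform(1, 2, 15), v)])
labels = ssc(X, n_clusters=2)
```

Because samples from the same subspace can reconstruct each other sparsely while orthogonal-subspace samples cannot, the affinity matrix is block-structured and the spectral step recovers the two groups.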

Article
A Hybrid Attention-Aware Fusion Network (HAFNet) for Building Extraction from High-Resolution Imagery and LiDAR Data
Remote Sens. 2020, 12(22), 3764; https://doi.org/10.3390/rs12223764 - 16 Nov 2020
Cited by 1 | Viewed by 708
Abstract
Automated extraction of buildings from earth observation (EO) data has long been a fundamental but challenging research topic. Combining data from different modalities (e.g., high-resolution imagery (HRI) and light detection and ranging (LiDAR) data) has shown great potential in building extraction. Recent studies have examined the role that deep learning (DL) could play in both multimodal data fusion and urban object extraction. However, DL-based multimodal fusion networks may encounter the following limitations: (1) the individual modal and cross-modal features, which we consider both useful and important for final prediction, cannot be sufficiently learned and utilized and (2) the multimodal features are fused by a simple summation or concatenation, which appears ambiguous in selecting cross-modal complementary information. In this paper, we address these two limitations by proposing a hybrid attention-aware fusion network (HAFNet) for building extraction. It consists of RGB-specific, digital surface model (DSM)-specific, and cross-modal streams to sufficiently learn and utilize both individual modal and cross-modal features. Furthermore, an attention-aware multimodal fusion block (Att-MFBlock) was introduced to overcome the fusion problem by adaptively selecting and combining complementary features from each modality. Extensive experiments conducted on two publicly available datasets demonstrated the effectiveness of the proposed HAFNet for building extraction.
(This article belongs to the Special Issue Advanced Multisensor Image Analysis Techniques for Land-Cover Mapping)
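The attention-aware fusion idea, weighting each modality's features adaptively instead of simply summing or concatenating them, can be sketched in plain NumPy. The pooling and softmax choices below are hypothetical stand-ins for the actual Att-MFBlock; only the overall pattern (per-channel attention weights over modality streams) follows the abstract.

```python
import numpy as np

def softmax(x, axis=0):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention_fuse(rgb_feat, dsm_feat, cross_feat):
    """Each input: a (C, H, W) feature map. Returns a fused (C, H, W)
    map that is a per-channel convex combination of the modalities."""
    stack = np.stack([rgb_feat, dsm_feat, cross_feat])  # (3, C, H, W)
    gap = stack.mean(axis=(2, 3))   # global average pool -> (3, C)
    att = softmax(gap, axis=0)      # attention over the modality axis
    # Broadcast the weights over spatial dims and sum the modalities
    return (att[:, :, None, None] * stack).sum(axis=0)

rng = np.random.default_rng(0)
rgb = rng.random((8, 4, 4))
dsm = rng.random((8, 4, 4))
cross = rng.random((8, 4, 4))
fused = attention_fuse(rgb, dsm, cross)
print(fused.shape)  # (8, 4, 4)
```

In a trained network the attention weights would come from learned layers rather than a fixed softmax over pooled statistics, but the fused output remains a weighted selection of complementary features per channel.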

Article
A Multi-Sensor Fusion Framework Based on Coupled Residual Convolutional Neural Networks
Remote Sens. 2020, 12(12), 2067; https://doi.org/10.3390/rs12122067 - 26 Jun 2020
Cited by 2 | Viewed by 1745
Abstract
Multi-sensor remote sensing image classification has been considerably improved by deep learning feature extraction and classification networks. In this paper, we propose a novel multi-sensor fusion framework for the fusion of diverse remote sensing data sources. The novelty of this paper is grounded in three important design innovations: (1) a unique adaptation of the coupled residual networks to address multi-sensor data classification; (2) a smart auxiliary training via adjusting the loss function to address classifications with limited samples; and (3) a unique design of the residual blocks to reduce the computational complexity while preserving the discriminative characteristics of multi-sensor features. The proposed classification framework is evaluated using three different remote sensing datasets: the urban Houston University datasets (including Houston 2013 and the training portion of Houston 2018) and the rural Trento dataset. The proposed framework achieves high overall accuracies of 93.57%, 81.20%, and 98.81% on Houston 2013, the training portion of Houston 2018, and the Trento dataset, respectively. Additionally, the experimental results demonstrate considerable improvements in classification accuracies compared with existing state-of-the-art methods.
(This article belongs to the Special Issue Advanced Multisensor Image Analysis Techniques for Land-Cover Mapping)
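The coupling idea, sensor-specific streams passing through residual blocks with shared weights, can be illustrated with a toy NumPy forward pass. The dimensions, the single shared matrix, and the concatenation at the end are made up for illustration; they are not the paper's architecture.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

class CoupledResBlock:
    """A residual block whose weight matrix is shared by every sensor
    stream that calls it; the sharing is the 'coupling'."""
    def __init__(self, dim, rng):
        self.W = rng.normal(0.0, 0.1, (dim, dim))  # shared parameters

    def forward(self, x):
        return relu(x + x @ self.W)  # identity shortcut + transform

rng = np.random.default_rng(0)
block = CoupledResBlock(16, rng)
hsi = rng.random((5, 16))    # e.g. hyperspectral feature vectors
lidar = rng.random((5, 16))  # e.g. LiDAR feature vectors
# Both streams pass through the SAME block, then the outputs are fused
fused = np.concatenate([block.forward(hsi), block.forward(lidar)], axis=1)
print(fused.shape)  # (5, 32)
```

Sharing one set of residual-block weights across streams forces both sensors into a common feature space and keeps the parameter count (and thus computational complexity) low, which matches the motivation stated in the abstract.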
