Special Issue "Multi-Modality Data Classification: Algorithms and Applications"

A special issue of Remote Sensing (ISSN 2072-4292). This special issue belongs to the section "Remote Sensing Image Processing".

Deadline for manuscript submissions: closed (31 December 2020).

Special Issue Editors

Dr. Junshi Xia
Guest Editor
Geoinformatics Unit, RIKEN Center for Advanced Intelligence Project, Tokyo 103-0027, Japan
Interests: high-performance geo-computation; big earth data; data science
Dr. Nicola Falco
Guest Editor
Lawrence Berkeley National Laboratory, Climate and Ecosystem Sciences Division, Building 085B, M/S 74R316C, Berkeley, CA, USA
Interests: signal and image processing; machine learning for remote sensing; multimodal data integration; hyperspectral data analysis; remote sensing for precision agriculture
Dr. Lionel Bombrun
Guest Editor
Université de Bordeaux, IMS, UMR 5218, Groupe Signal et Image, Bordeaux, France
Interests: signal and image processing; pattern recognition; texture modeling; hyperspectral image classification; SAR image processing; high-resolution remote sensing image analysis

Special Issue Information

Dear Colleagues,

Thanks to the rapid development of sensor technology, multi-modality remotely sensed datasets (e.g., optical, SAR, and LiDAR), which may differ in imaging mechanism, spatial resolution, and coverage, are now widely available. Classification is one of the most important techniques for exploiting these multi-modality datasets to map land cover/land use and dynamic changes in various applications, e.g., precision agriculture, urban planning, and disaster response.

The utilization of multi-modality datasets has been an active topic in recent years because such datasets provide complementary information on the same scene and can thus boost classification performance. The availability of big remote sensing multi-modality data platforms, e.g., ESA’s Copernicus program, the Landsat series, and the China GaoFen series, is likely to reinforce this trend.

However, unsolved problems still remain with multi-modality datasets, such as spectral/spatial variations, gaps in imaging mechanisms, and sensor-specific characteristics of applications, which should be addressed further. This Special Issue, “Multi-Modality Data Classification: Algorithms and Applications”, will collect original manuscripts that address the above-mentioned challenges of multi-modality data classification, not only in the algorithm domain but also in the application domain. We kindly invite you to contribute on the following (non-exhaustive) topics that fit this Special Issue: multi-modality feature extraction, multi-modality data fusion, deep learning and transfer learning using multi-modality datasets, and classification and change detection with multi-modality datasets for any thematic application (urban, agricultural, ecological, disaster-related, etc.) from local to global scales.

Dr. Junshi Xia
Dr. Nicola Falco
Dr. Lionel Bombrun
Prof. Jon Atli Benediktsson
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website. Once registered, go to the submission form. Manuscripts can be submitted until the deadline. All papers will be peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Remote Sensing is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • classification
  • multi-modality data
  • applications
  • data fusion
  • machine learning

Published Papers (9 papers)


Research

Article
Multisensor and Multiresolution Remote Sensing Image Classification through a Causal Hierarchical Markov Framework and Decision Tree Ensembles
Remote Sens. 2021, 13(5), 849; https://doi.org/10.3390/rs13050849 - 25 Feb 2021
Abstract
In this paper, a hierarchical probabilistic graphical model is proposed to tackle joint classification of multiresolution and multisensor remote sensing images of the same scene. This problem is crucial in the study of satellite imagery and jointly involves multiresolution and multisensor image fusion. The proposed framework consists of a hierarchical Markov model with a quadtree structure to model information contained in different spatial scales, a planar Markov model to account for contextual spatial information at each resolution, and decision tree ensembles for pixelwise modeling. This probabilistic graphical model and its topology are especially fit for application to very high resolution (VHR) image data. The theoretical properties of the proposed model are analyzed: the causality of the whole framework is mathematically proved, granting the use of time-efficient inference algorithms such as the marginal posterior mode criterion, which is non-iterative when applied to quadtree structures. This is mostly advantageous for classification methods linked to multiresolution tasks formulated on hierarchical Markov models. Within the proposed framework, two multimodal classification algorithms are developed that incorporate Markov mesh and spatial Markov chain concepts. The results obtained in the experimental validation, conducted with two datasets containing VHR multispectral, panchromatic, and radar satellite images, verify the effectiveness of the proposed framework. The proposed approach is also compared to previous methods that are based on alternate strategies for multimodal fusion.
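As a rough, self-contained illustration of the quadtree structure on which such a hierarchical Markov model is defined (a toy sketch assuming a square, power-of-two image; not the paper's actual model), each level of the multiresolution pyramid can be built by 2x2 aggregation of the level below:

```python
def quadtree_levels(image):
    """Multiresolution pyramid underlying a quadtree-structured hierarchical model.

    Each coarser level is the 2x2 average of the finer one, so every coarse
    pixel is the parent of four child pixels, exactly a quadtree.
    Assumes a square image whose side is a power of two.
    """
    levels = [image]
    while len(levels[-1]) > 1:
        prev = levels[-1]
        h, w = len(prev) // 2, len(prev[0]) // 2
        levels.append([[(prev[2 * i][2 * j] + prev[2 * i][2 * j + 1]
                         + prev[2 * i + 1][2 * j] + prev[2 * i + 1][2 * j + 1]) / 4.0
                        for j in range(w)] for i in range(h)])
    return levels

img = [[1.0, 3.0, 5.0, 7.0],
       [1.0, 3.0, 5.0, 7.0],
       [2.0, 2.0, 6.0, 6.0],
       [2.0, 2.0, 6.0, 6.0]]
print(len(quadtree_levels(img)))  # 3 levels: 4x4, 2x2, 1x1
```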
(This article belongs to the Special Issue Multi-Modality Data Classification: Algorithms and Applications)

Article
Probabilistic Mangrove Species Mapping with Multiple-Source Remote-Sensing Datasets Using Label Distribution Learning in Xuan Thuy National Park, Vietnam
Remote Sens. 2020, 12(22), 3834; https://doi.org/10.3390/rs12223834 - 22 Nov 2020
Abstract
Mangrove forests play an important role in maintaining water quality, mitigating climate change impacts, and providing a wide range of ecosystem services. Effective identification of mangrove species using remote-sensing images remains a challenge. Combinations of multi-source remote-sensing datasets (with different spectral/spatial resolution) are beneficial to the improvement of mangrove tree species discrimination. In this paper, various combinations of remote-sensing datasets including Sentinel-1 dual-polarimetric synthetic aperture radar (SAR), Sentinel-2 multispectral, and Gaofen-3 full-polarimetric SAR data were used to classify the mangrove communities in Xuan Thuy National Park, Vietnam. The mixture of mangrove communities consisting of small and shrub mangrove patches is generally difficult to separate using low/medium spatial resolution data. To alleviate this problem, we propose to use label distribution learning (LDL) to provide the probabilistic mapping of tree species, including Sonneratia caseolaris (SC), Kandelia obovata (KO), Aegiceras corniculatum (AC), Rhizophora stylosa (RS), and Avicennia marina (AM). The experimental results show that the best classification performance was achieved by an integration of Sentinel-2 and Gaofen-3 datasets, demonstrating that full-polarimetric Gaofen-3 data is superior to the dual-polarimetric Sentinel-1 data for mapping mangrove tree species in the tropics.
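The core idea of label distribution learning, assigning each pixel a probability distribution over species rather than a single hard label, can be sketched minimally as follows. The per-pixel scores and the softmax/KL-divergence choices are illustrative assumptions; only the species abbreviations come from the abstract, and the paper's actual model is not reproduced here.

```python
import math

SPECIES = ["SC", "KO", "AC", "RS", "AM"]  # species abbreviations from the abstract

def softmax(scores):
    """Turn per-class scores into a probability distribution (a label distribution)."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def kl_divergence(true_dist, pred_dist, eps=1e-12):
    """KL divergence, a loss commonly minimized in label distribution learning."""
    return sum(t * math.log((t + eps) / (p + eps))
               for t, p in zip(true_dist, pred_dist))

# Hypothetical scores for one pixel from a classifier fed with fused features.
scores = [2.0, 0.5, 0.1, -1.0, 0.3]
dist = softmax(scores)
print(dict(zip(SPECIES, [round(p, 3) for p in dist])))  # probabilistic species map
```

The output is a per-pixel species probability map rather than a hard label, which is what makes the mapping "probabilistic" in the title.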
(This article belongs to the Special Issue Multi-Modality Data Classification: Algorithms and Applications)

Article
Examining the Roles of Spectral, Spatial, and Topographic Features in Improving Land-Cover and Forest Classifications in a Subtropical Region
Remote Sens. 2020, 12(18), 2907; https://doi.org/10.3390/rs12182907 - 08 Sep 2020
Abstract
Many studies have investigated the effects of spectral and spatial features of remotely sensed data and topographic characteristics on land-cover and forest classification results, but they are mainly based on individual sensor data. How these features from different kinds of remotely sensed data with various spatial resolutions influence classification results is unclear. We conducted a comprehensive comparative analysis of spectral and spatial features from ZiYuan-3 (ZY-3), Sentinel-2, and Landsat and their fused datasets, with spatial resolutions of 2 m, 6 m, 10 m, 15 m, and 30 m, together with topographic factors, in influencing land-cover classification results in a subtropical forest ecosystem using the random forest approach. The results indicated that the combined spectral (fused data based on ZY-3 and Sentinel-2), spatial, and topographic data with 2-m spatial resolution provided the highest overall classification accuracy of 83.5% for 11 land-cover classes, as well as the highest accuracies for almost all individual classes. Increasing the number of spectral bands from 4 to 10 through fusion of ZY-3 and Sentinel-2 data improved overall accuracy by 14.2% at 2-m spatial resolution and by 11.1% at 6-m spatial resolution. Textures from high spatial resolution imagery play more important roles than textures from medium spatial resolution images: incorporating textural images into spectral data improved overall accuracy by 6.0–7.7% in the 2-m spatial resolution imagery, compared to 1.1–1.7% in the 10-m to 30-m spatial resolution images. Incorporating topographic factors into spectral and textural imagery further improved overall accuracy by 1.2–5.5%. The classification accuracies for coniferous forest, eucalyptus, other broadleaf forests, and bamboo forest reached 85.3–91.1%. This research provides new insights into using proper combinations of spectral bands and textures for images of specific spatial resolutions to improve land-cover and forest classification in subtropical regions.
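The feature stacking compared in this study (spectral bands plus texture and topographic layers) can be sketched as follows. The local-variance texture is a simple illustrative stand-in; the study's actual texture measures are not detailed in the abstract.

```python
def local_variance(image, r=1):
    """Local variance in a (2r+1)x(2r+1) window: a simple texture band."""
    h, w = len(image), len(image[0])
    out = [[0.0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            vals = [image[ii][jj]
                    for ii in range(max(0, i - r), min(h, i + r + 1))
                    for jj in range(max(0, j - r), min(w, j + r + 1))]
            mean = sum(vals) / len(vals)
            out[i][j] = sum((v - mean) ** 2 for v in vals) / len(vals)
    return out

def stack_features(spectral_bands, texture_bands, topo_bands):
    """Per-pixel feature vectors combining spectral, texture, and topographic layers."""
    bands = spectral_bands + texture_bands + topo_bands
    h, w = len(bands[0]), len(bands[0][0])
    return [[tuple(b[i][j] for b in bands) for j in range(w)] for i in range(h)]

band = [[1.0, 2.0], [3.0, 4.0]]   # hypothetical spectral band
elev = [[5.0, 5.0], [5.0, 5.0]]   # hypothetical elevation layer
features = stack_features([band], [local_variance(band)], [elev])
print(features[0][0])  # (spectral, texture, elevation) for the top-left pixel
```

Each per-pixel tuple is what a classifier such as random forest would consume; the study compares which layers in the stack actually help.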
(This article belongs to the Special Issue Multi-Modality Data Classification: Algorithms and Applications)

Article
Impairing Land Registry: Social, Demographic, and Economic Determinants of Forest Classification Errors
Remote Sens. 2020, 12(16), 2628; https://doi.org/10.3390/rs12162628 - 14 Aug 2020
Abstract
This paper investigates the social, demographic, and economic factors determining differences between forest identification based on remote sensing techniques and land registry. The Database of Topographic Objects and Sentinel-2 satellite imagery data from 2018 were used to train a supervised machine learning model for forest detection. Results aggregated to communes (NUTS-5 units) were compared to land registry data delivered in the Local Data Bank by Statistics Poland. The differences identified between the above-mentioned sources were defined as errors of the land registry. Then, geographically weighted regression was applied to explain the spatially varying impact of the investigated errors’ determinants: urbanization processes, civic society development, education, land ownership, and the culture and quality of spatial planning. The research area covers the entirety of Poland. It was confirmed that in less developed areas, local development policy stimulating urbanization processes does not respect land use planning principles, including the accuracy of the land registry. A high education level of society leads to protective measures against further overestimation of forest cover in the land registry in substantially urbanized areas. Finally, higher coverage by valid local spatial development plans stimulates protection against forest classification errors in the land registry.
(This article belongs to the Special Issue Multi-Modality Data Classification: Algorithms and Applications)

Article
A Multi-Scale Superpixel-Guided Filter Feature Extraction and Selection Approach for Classification of Very-High-Resolution Remotely Sensed Imagery
Remote Sens. 2020, 12(5), 862; https://doi.org/10.3390/rs12050862 - 07 Mar 2020
Abstract
In this article, a novel feature selection-based multi-scale superpixel-based guided filter (FS-MSGF) method for classification of very-high-resolution (VHR) remotely sensed imagery is proposed. Improving on the original guided filter (GF) algorithm used in classification, the guidance image in the proposed approach is constructed based on superpixel-level segmentation. By taking into account object boundaries and inner homogeneity, the superpixel-level guidance image allows the geometrical information of land-cover objects in VHR images to be better depicted. High-dimensional multi-scale guided filter (MSGF) features are then generated, in which the multi-scale information of those land-cover classes is better modelled. In addition, to improve computational efficiency without loss of accuracy, a subset of the MSGF features containing the most distinctive information is then automatically selected using an unsupervised feature selection method. Quantitative and qualitative classification results obtained on two QuickBird remotely sensed imagery datasets covering the Zurich urban scene are provided and analyzed, demonstrating that the proposed methods outperform state-of-the-art reference techniques in terms of both classification accuracy and computational efficiency.
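The guided filter at the core of the method fits, per window, a linear model q = a*I + b between the guidance image I and the input p. A minimal sketch of those coefficients follows, using one global 1-D window (a toy simplification: the full algorithm computes coefficients per box window around each pixel, and here per superpixel-derived guidance image):

```python
def guided_filter_coeffs(guide, src, eps=1e-2):
    """Linear guided-filter coefficients for one window:
    a = cov(I, p) / (var(I) + eps), b = mean(p) - a * mean(I)."""
    n = len(guide)
    mean_i = sum(guide) / n
    mean_p = sum(src) / n
    cov_ip = sum((i - mean_i) * (p - mean_p) for i, p in zip(guide, src)) / n
    var_i = sum((i - mean_i) ** 2 for i in guide) / n
    a = cov_ip / (var_i + eps)
    b = mean_p - a * mean_i
    return a, b

def guided_filter(guide, src, eps=1e-2):
    """Filter src using guide over a single global window (toy version of GF)."""
    a, b = guided_filter_coeffs(guide, src, eps)
    return [a * i + b for i in guide]

guide = [0.0, 0.0, 1.0, 1.0]   # an edge in the guidance image
src = [0.1, -0.1, 1.1, 0.9]    # noisy input aligned with the edge
print([round(q, 3) for q in guided_filter(guide, src)])
```

Because the output is linear in the guidance image, edges present in the guidance (here, superpixel boundaries) survive filtering while noise within homogeneous regions is smoothed away.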
(This article belongs to the Special Issue Multi-Modality Data Classification: Algorithms and Applications)

Article
A Deep Siamese Network with Hybrid Convolutional Feature Extraction Module for Change Detection Based on Multi-sensor Remote Sensing Images
Remote Sens. 2020, 12(2), 205; https://doi.org/10.3390/rs12020205 - 07 Jan 2020
Abstract
Information extraction from multi-sensor remote sensing images has increasingly attracted attention with the development of remote sensing sensors. In this study, a supervised change detection method based on a deep Siamese convolutional network with a hybrid convolutional feature extraction module (OB-DSCNH) is proposed for multi-sensor images. The proposed architecture, which is based on dilated convolution, can extract deep change features effectively, and its “network in network” character increases the depth and width of the network while keeping the computational budget constant. A change decision model is utilized to detect changes from the difference of the extracted features. Finally, a change detection map is obtained via an uncertainty analysis, which combines multi-resolution segmentation with the output of the Siamese network. To validate the effectiveness of the proposed approach, we conducted experiments on multispectral images collected by the ZY-3 and GF-2 satellites. Experimental results demonstrate that the proposed method achieves comparable or better performance than mainstream methods in multi-sensor image change detection.
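The change decision step, thresholding the difference between features extracted from the two acquisitions, can be sketched as below. The feature vectors, the Euclidean distance, and the threshold are all illustrative assumptions; the paper's learned decision model and uncertainty analysis are not reproduced.

```python
import math

def feature_distance(feat_a, feat_b):
    """Euclidean distance between feature vectors from the two acquisitions."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(feat_a, feat_b)))

def change_map(features_t1, features_t2, threshold):
    """Binary change map: 1 where the feature difference exceeds the threshold."""
    return [[1 if feature_distance(f1, f2) > threshold else 0
             for f1, f2 in zip(row1, row2)]
            for row1, row2 in zip(features_t1, features_t2)]

# Hypothetical 2-D feature maps (one feature vector per pixel) from the two branches.
t1 = [[(0.1, 0.2), (0.9, 0.8)]]
t2 = [[(0.1, 0.2), (0.2, 0.1)]]
print(change_map(t1, t2, threshold=0.5))  # [[0, 1]]
```

In the Siamese setting, the two branches share weights, so distances between their outputs are comparable across pixels and a single threshold suffices for the toy case.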
(This article belongs to the Special Issue Multi-Modality Data Classification: Algorithms and Applications)

Article
Accurate Building Extraction from Fused DSM and UAV Images Using a Chain Fully Convolutional Neural Network
Remote Sens. 2019, 11(24), 2912; https://doi.org/10.3390/rs11242912 - 05 Dec 2019
Abstract
Accurate extraction of buildings using high spatial resolution imagery is essential to a wide range of urban applications. However, it is difficult to extract semantic features from a variety of complex scenes (e.g., suburban, urban, and urban village areas) because various complex man-made objects usually appear heterogeneous, with large intra-class and low inter-class variations. The automatic extraction of buildings is thus extremely challenging. The fully convolutional neural networks (FCNs) developed in recent years have performed well in the extraction of urban man-made objects due to their ability to learn state-of-the-art features and to label pixels end-to-end. One of the most successful FCNs used in building extraction is U-net. However, the commonly used skip connection and feature fusion refinement modules in U-net often ignore the problem of feature selection, and the ability to extract smaller buildings and refine building boundaries needs to be improved. In this paper, we propose a trainable chain fully convolutional neural network (CFCN), which fuses high spatial resolution unmanned aerial vehicle (UAV) images and the digital surface model (DSM) for building extraction. Multilevel features are obtained from the fused data, and an improved U-net is used for coarse extraction of buildings. To address incomplete extraction of building boundaries, a second U-net is chained on, introducing a coarse building boundary constraint, hole filling, and "speckle" removal. Typical suburban, urban, and urban village areas were selected for building extraction experiments. The results show that the CFCN achieved recall of 98.67%, 98.62%, and 99.52% and intersection over union (IoU) of 96.23%, 96.43%, and 95.76% in suburban, urban, and urban village areas, respectively. In terms of IoU, the CFCN improved on U-net by 6.61%, 5.31%, and 6.45% in these three areas, respectively. The proposed method can extract buildings with higher accuracy and with clearer and more complete boundaries.
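The two metrics reported above have straightforward definitions on binary building masks; a minimal sketch (the example masks are hypothetical):

```python
def iou(pred, truth):
    """Intersection over union of two binary masks (flattened lists of 0/1)."""
    inter = sum(1 for p, t in zip(pred, truth) if p == 1 and t == 1)
    union = sum(1 for p, t in zip(pred, truth) if p == 1 or t == 1)
    return inter / union if union else 1.0

def recall(pred, truth):
    """Fraction of true building pixels that were recovered."""
    tp = sum(1 for p, t in zip(pred, truth) if p == 1 and t == 1)
    positives = sum(truth)
    return tp / positives if positives else 1.0

pred = [1, 1, 0, 1, 0, 0]   # predicted building mask
truth = [1, 1, 1, 1, 0, 0]  # reference building mask
print(iou(pred, truth), recall(pred, truth))  # 0.75 0.75
```

IoU penalizes both missed building pixels and false alarms, which is why it is the stricter of the two metrics for boundary quality.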
(This article belongs to the Special Issue Multi-Modality Data Classification: Algorithms and Applications)

Article
An Integrated Land Cover Mapping Method Suitable for Low-Accuracy Areas in Global Land Cover Maps
Remote Sens. 2019, 11(15), 1777; https://doi.org/10.3390/rs11151777 - 29 Jul 2019
Abstract
In land cover mapping, an area with complex topography or heterogeneous land covers is usually poorly classified and is therefore defined as a low-accuracy area. Low-accuracy areas are important because they restrict the overall accuracy (OA) of the global land cover classification (LCC) data generated. In this paper, low-accuracy areas in China (extracted from the MODIS global LCC maps) were taken as examples, identified as the regions having lower accuracy than the average OA of China. An integrated land cover mapping method targeting low-accuracy regions was developed and tested in eight representative low-accuracy regions of China. The method optimized the procedures of image choosing and sample selection based on an existing visually interpreted regional LCC dataset with high accuracies. Five algorithms and 16 groups of classification features were compared to achieve the highest OA. The support vector machine (SVM) achieved the highest mean OA (81.5%) when only spectral bands were classified. As a classification feature, aspect tended to reduce OA. The optimal classification features for different regions largely depend on the topographic features of the vegetation. The mean OA for the eight low-accuracy regions was 84.4% with the proposed method, exceeding the mean OA of most precedent global land cover datasets. The new method can be applied worldwide to improve land cover mapping of low-accuracy areas in global land cover maps.
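The overall accuracy (OA) used throughout this abstract is simply the trace of the confusion matrix divided by its total; a minimal sketch with a hypothetical two-class matrix:

```python
def overall_accuracy(confusion):
    """Overall accuracy: correctly classified samples (diagonal) over all samples."""
    correct = sum(confusion[i][i] for i in range(len(confusion)))
    total = sum(sum(row) for row in confusion)
    return correct / total

# Hypothetical confusion matrix: rows = reference classes, columns = predictions.
cm = [[50, 5],
      [10, 35]]
print(overall_accuracy(cm))  # 0.85
```

A "low-accuracy area" in the paper's sense is a region whose OA, computed this way against reference data, falls below the country-wide average.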
(This article belongs to the Special Issue Multi-Modality Data Classification: Algorithms and Applications)

Article
A Comparative Review of Manifold Learning Techniques for Hyperspectral and Polarimetric SAR Image Fusion
Remote Sens. 2019, 11(6), 681; https://doi.org/10.3390/rs11060681 - 21 Mar 2019
Abstract
In remote sensing, hyperspectral and polarimetric synthetic aperture radar (PolSAR) images are the two most versatile data sources for a wide range of applications such as land use land cover classification. However, the fusion of these two data sources receives less attention than many others because of scarce data availability and the relatively challenging fusion task caused by their distinct imaging geometries. Among the existing fusion methods, including manifold learning-based, kernel-based, ensemble-based, and matrix factorization approaches, manifold learning is one of the most celebrated techniques for the fusion of heterogeneous data. This paper therefore aims to promote research in hyperspectral and PolSAR data fusion by providing a comprehensive comparison of existing manifold learning-based fusion algorithms. We conducted experiments on 16 state-of-the-art manifold learning algorithms that embrace two important research questions in manifold learning-based fusion of hyperspectral and PolSAR data: (1) in which domain should the data be aligned, the data domain or the manifold domain; and (2) how to make use of existing labeled data when formulating a graph to represent a manifold: supervised, semi-supervised, or unsupervised. The performance of the algorithms was evaluated via multiple accuracy metrics of land use land cover classification over two datasets. Results show that the algorithms based on manifold alignment generally outperform those based on data alignment (data concatenation). Semi-supervised manifold alignment fusion algorithms perform best among all. Experiments using multiple classifiers show that they outperform the benchmark data alignment-based algorithms by ca. 3% in terms of overall classification accuracy.
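A first step shared by the manifold learning methods compared here is building a graph to represent the data manifold; a minimal k-nearest-neighbour sketch follows (the points are hypothetical, and the compared algorithms differ precisely in how such graphs are built, labeled, and aligned across the two modalities):

```python
import math

def knn_graph(points, k):
    """Adjacency lists of a k-nearest-neighbour graph: the usual starting
    point for representing a data manifold in alignment-based fusion."""
    graph = []
    for i, p in enumerate(points):
        dists = sorted((math.dist(p, q), j)
                       for j, q in enumerate(points) if j != i)
        graph.append([j for _, j in dists[:k]])
    return graph

# Hypothetical 2-D features, e.g., after projecting two modalities into a shared space.
points = [(0.0, 0.0), (0.1, 0.0), (5.0, 5.0), (5.1, 5.0)]
print(knn_graph(points, k=1))  # [[1], [0], [3], [2]]
```

Data alignment, by contrast, simply concatenates the per-pixel feature vectors of the two modalities before building any such graph, which is the benchmark the manifold alignment methods outperform in this review.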
(This article belongs to the Special Issue Multi-Modality Data Classification: Algorithms and Applications)
