
Advances in Representation Learning for Remote Sensing Analytics (RLRSA)

A special issue of Remote Sensing (ISSN 2072-4292). This special issue belongs to the section "Remote Sensing Image Processing".

Deadline for manuscript submissions: closed (31 December 2018) | Viewed by 46970

Special Issue Editors

College of Information and Control Engineering, China University of Petroleum (East China), #66 Changjiang West Road, Huangdao District, Qingdao 266580, China
Interests: machine learning; manifold learning; representation learning; multiview learning; image classification; remote sensing image analysis
Special Issues, Collections and Topics in MDPI journals
School of Information Technologies, Faculty of Engineering and Information Technologies, The University of Sydney, Room 315, Level 3, J12, Cleveland St, Darlington, NSW 2008, Australia
Interests: matrix factorization; transfer learning; multi-task learning; deep learning; remote sensing image understanding
Faculty of Science and Technology, University of Macau, E11 Avenida da Universidade, Taipa, Macau, China
Interests: chaotic systems; multimedia security; computer vision; pattern recognition; machine learning; hyperspectral image classification
School of Automation, Xi’an University of Posts and Telecommunications, Xi’an 710121, China
Interests: remote sensing; band selection; image classification; image segmentation; target detection

Special Issue Information

Dear Colleagues,

With the explosive growth of multi-source, multi-temporal, and multi-scale remote sensing data frequently delivered by remote sensors, it is becoming increasingly critical to develop efficient and effective representation learning methodologies for massive remote sensing analytics. Although many promising achievements of representation learning have been reported in signal processing applications, developing distinctive representation learning algorithms for remote sensing analytics remains a great challenge.

This Special Issue aims to demonstrate the contribution of representation learning algorithms to research in remote sensing analytics. It is not difficult to enumerate many examples in this area. For instance, unsupervised learning has been widely applied to remote sensing image segmentation; semi-supervised learning algorithms significantly boost the performance of multispectral image classification; and deep learning methods obtain state-of-the-art results in many remote sensing applications, including geo-localization, semantic labeling, and target detection in satellite imagery.

The editors expect to collect a set of recent advances in the related topics, to provide a platform for researchers to exchange their innovative ideas on representation learning solutions for remote sensing analytics, and to showcase interesting applications of learning algorithms to particular remote sensing problems.

To summarize, this Special Issue welcomes a broad range of submissions that report novel representation learning techniques for remote sensing analytics. We are especially interested in (1) theoretical advances as well as algorithm developments in representation learning for specific remote sensing analytics problems, (2) reports of practical applications and system innovations in remote sensing analytics, and (3) novel data sets serving as test beds for new developments, preferably with implemented standard benchmarks. Topics of interest include (but are not limited to):

  • Multiview learning for multi-spectral remote sensing image analysis
  • Feature extraction/learning for remote sensing images
  • Representation learning for hyperspectral image analysis
  • Metric learning for remote sensing image classification
  • Deep learning algorithms for remote sensing
  • Benchmark algorithms and databases for remote sensing
  • Other applications of representation learning for remote sensing

Prof. Weifeng Liu
Dr. Tongliang Liu
Dr. Yicong Zhou
Prof. Shuying Li
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Remote Sensing is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2700 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Remote Sensing Image Analysis
  • Representation Learning
  • Feature Extraction
  • Deep Learning
  • Multiview Learning
  • Hypergraph Image Classification

Published Papers (11 papers)


Research

18 pages, 3128 KiB  
Article
Weighted Spatial Pyramid Matching Collaborative Representation for Remote-Sensing-Image Scene Classification
by Bao-Di Liu, Jie Meng, Wen-Yang Xie, Shuai Shao, Ye Li and Yanjiang Wang
Remote Sens. 2019, 11(5), 518; https://doi.org/10.3390/rs11050518 - 04 Mar 2019
Cited by 54 | Viewed by 3352
Abstract
At present, nonparametric subspace classifiers, such as collaborative representation-based classification (CRC) and sparse representation-based classification (SRC), are widely used in many pattern-classification and -recognition tasks. Meanwhile, the spatial pyramid matching (SPM) scheme, which considers spatial information in representing the image, is efficient for image classification. However, for SPM, the weights used to evaluate the representation of different subregions are fixed. In this paper, we first introduce the spatial pyramid matching scheme to remote-sensing (RS)-image scene-classification tasks to improve performance. Then, we propose a weighted spatial pyramid matching collaborative-representation-based classification method, combining the CRC method with the weighted spatial pyramid matching scheme. The proposed method is capable of learning the weights of different subregions in representing an image. Finally, extensive experiments on several benchmark remote-sensing-image datasets were conducted, and the results clearly demonstrate the superior performance of our proposed algorithm compared with state-of-the-art approaches. Full article
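For readers new to CRC, its core is a ridge-regularized coding step with a closed-form solution, followed by class-wise residual comparison. The sketch below shows plain CRC only (the paper's weighted spatial pyramid is not reproduced; the toy data and the λ value are illustrative):

```python
import numpy as np

def crc_classify(X_train, y_train, x_test, lam=1e-3):
    """Collaborative representation-based classification (CRC):
    code the test sample over ALL training samples, then assign
    the class whose samples best reconstruct it."""
    # Closed-form ridge coding: alpha = (X^T X + lam*I)^{-1} X^T y
    n = X_train.shape[1]
    alpha = np.linalg.solve(X_train.T @ X_train + lam * np.eye(n),
                            X_train.T @ x_test)
    best_class, best_res = None, np.inf
    for c in np.unique(y_train):
        mask = y_train == c
        # Residual when reconstructing with this class's samples only
        res = np.linalg.norm(x_test - X_train[:, mask] @ alpha[mask])
        if res < best_res:
            best_class, best_res = c, res
    return best_class
```

Columns of `X_train` are training feature vectors (typically L2-normalized); the paper's weighted SPM variant additionally learns per-subregion weights on top of such a coding step.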

21 pages, 2521 KiB  
Article
A Novel Analysis Dictionary Learning Model Based Hyperspectral Image Classification Method
by Wei Wei, Mengting Ma, Cong Wang, Lei Zhang, Peng Zhang and Yanning Zhang
Remote Sens. 2019, 11(4), 397; https://doi.org/10.3390/rs11040397 - 15 Feb 2019
Cited by 5 | Viewed by 2855
Abstract
Supervised hyperspectral image (HSI) classification has been acknowledged as one of the fundamental tasks of hyperspectral data analysis. Witnessing the success of analysis dictionary learning (ADL)-based methods in recent years, we propose an ADL-based supervised HSI classification method in this paper. In the proposed method, the dictionary is modeled considering both the characteristics within each spectrum and those among the spectra. Specifically, to reduce the influence of strong nonlinearity within each spectrum on classification, we divide the spectrum into several segments and propose a segment-based HSI classification strategy accordingly. To preserve the relationships among spectra, similarities among pixels are introduced as constraints. Experimental results on several benchmark hyperspectral datasets demonstrate the effectiveness of the proposed method for HSI classification. Full article

16 pages, 3293 KiB  
Article
Fusion of Multiscale Convolutional Neural Networks for Building Extraction in Very High-Resolution Images
by Genyun Sun, Hui Huang, Aizhu Zhang, Feng Li, Huimin Zhao and Hang Fu
Remote Sens. 2019, 11(3), 227; https://doi.org/10.3390/rs11030227 - 22 Jan 2019
Cited by 67 | Viewed by 5280
Abstract
Extracting buildings from very high resolution (VHR) images has attracted much attention but remains challenging due to the large variety of building appearances and scales. Convolutional neural networks (CNNs) have shown effective and superior performance in automatically learning high-level, discriminative features for building extraction. However, their fixed receptive fields make conventional CNNs unable to tolerate large scale changes. The multiscale CNN (MCNN) is a promising structure to meet this challenge. Unfortunately, the multiscale features extracted by an MCNN are usually stacked and fed into a single classifier, which makes it difficult to recognize objects at different scales. Besides, the repeated sub-sampling processes lead to blurred boundaries in the extracted features. In this study, we propose a novel parallel support vector machine (SVM)-based fusion strategy to make full use of deep features at different scales as extracted by the MCNN structure. We first designed an MCNN structure with different sizes of input patches and kernels to learn multiscale deep features. After that, features at each scale were individually fed into a separate SVM classifier to produce rule images for pre-classification. A decision fusion strategy was then applied to the pre-classification results based on another SVM classifier. Finally, superpixels were applied to refine the boundaries of the fused results using region-based maximum voting. For performance evaluation, the well-known International Society for Photogrammetry and Remote Sensing (ISPRS) Potsdam dataset was used in comparison with several state-of-the-art algorithms. Experimental results demonstrate the superior performance of the proposed methodology in extracting complex buildings in urban districts. Full article

19 pages, 3819 KiB  
Article
A Patch-Based Light Convolutional Neural Network for Land-Cover Mapping Using Landsat-8 Images
by Hunsoo Song, Yonghyun Kim and Yongil Kim
Remote Sens. 2019, 11(2), 114; https://doi.org/10.3390/rs11020114 - 09 Jan 2019
Cited by 26 | Viewed by 5358
Abstract
This study proposes a light convolutional neural network (LCNN) well-fitted for medium-resolution (30-m) land-cover classification. The LCNN attains high accuracy without overfitting, even with a small number of training samples, and has lower computational costs due to its much lighter design compared to typical convolutional neural networks for high-resolution or hyperspectral image classification tasks. The performance of the LCNN was compared to that of a deep convolutional neural network, support vector machine (SVM), k-nearest neighbors (KNN), and random forest (RF). SVM, KNN, and RF were tested with both patch-based and pixel-based systems. Three 30 km × 30 km test sites of the Level II National Land Cover Database were used for reference maps to embrace a wide range of land-cover types, and a single-date Landsat-8 image was used for each test site. To evaluate the performance of the LCNN according to the sample sizes, we varied the sample size to include 20, 40, 80, 160, and 320 samples per class. The proposed LCNN achieved the highest accuracy in 13 out of 15 cases (i.e., at three test sites with five different sample sizes), and the LCNN with a patch size of three produced the highest overall accuracy of 61.94% from 10 repetitions, followed by SVM (61.51%) and RF (61.15%) with a patch size of three. Also, the statistical significance of the differences between LCNN and the other classifiers was reported. Moreover, by introducing the heterogeneity value (from 0 to 8) representing the complexity of the map, we demonstrated the advantage of patch-based LCNN over pixel-based classifiers, particularly at moderately heterogeneous pixels (from 1 to 4), with respect to accuracy (LCNN is 5.5% and 6.3% more accurate for a training sample size of 20 and 320 samples per class, respectively). Finally, the computation times of the classifiers were calculated, and the LCNN was confirmed to have an advantage in large-area mapping. Full article

20 pages, 7491 KiB  
Article
Hybrid Collaborative Representation for Remote-Sensing Image Scene Classification
by Bao-Di Liu, Wen-Yang Xie, Jie Meng, Ye Li and Yanjiang Wang
Remote Sens. 2018, 10(12), 1934; https://doi.org/10.3390/rs10121934 - 01 Dec 2018
Cited by 22 | Viewed by 2701
Abstract
In recent years, the collaborative representation-based classification (CRC) method has achieved great success in visual recognition by directly utilizing training images as dictionary bases. However, it describes a test sample with all training samples to extract shared attributes and does not consider representing the test sample with the training samples of a specific class to extract class-specific attributes. For remote-sensing images, both the shared attributes and the class-specific attributes are important for classification. In this paper, we propose a hybrid collaborative representation-based classification approach. The proposed method improves the performance of classifying remote-sensing images by embedding class-specific collaborative representation into conventional collaborative representation-based classification. Moreover, we extend the proposed method to arbitrary kernel spaces to explore the nonlinear characteristics hidden in remote-sensing image features and further enhance classification performance. Extensive experiments on several benchmark remote-sensing image datasets were conducted, and the results clearly demonstrate the superior performance of our proposed algorithm over state-of-the-art approaches. Full article

17 pages, 2338 KiB  
Article
A Noise-Resilient Online Learning Algorithm for Scene Classification
by Ling Jian, Fuhao Gao, Peng Ren, Yunquan Song and Shihua Luo
Remote Sens. 2018, 10(11), 1836; https://doi.org/10.3390/rs10111836 - 20 Nov 2018
Cited by 23 | Viewed by 3454
Abstract
The proliferation of remote sensing imagery motivates a surge of research interest in image processing tasks such as feature extraction and scene recognition. Among them, scene recognition (classification) is a typical learning task that focuses on exploiting annotated images to infer the category of an unlabeled image. Existing scene classification algorithms predominantly focus on static data and are designed to learn discriminant information from clean data. They, however, suffer from two major shortcomings: noisy labels may negatively affect the learning procedure, and learning from scratch may lead to a huge computational burden. Thus, they are unable to handle large-scale remote sensing images, in terms of both recognition accuracy and computational cost. To address this problem, in this paper we propose a noise-resilient online classification algorithm that is scalable and robust to noisy labels. Specifically, the ramp loss is employed as the loss function to alleviate the negative effect of noisy labels, and we iteratively optimize the decision function in a Reproducing Kernel Hilbert Space under the framework of Online Gradient Descent (OGD). Experiments on both synthetic and real-world data sets demonstrate that the proposed noise-resilient online classification algorithm is more robust and sparser than state-of-the-art online classification algorithms. Full article
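As a rough illustration of the idea (a linear model rather than the paper's RKHS formulation; the toy parameters are illustrative), the ramp loss clips the hinge gradient so that grossly mislabeled samples stop influencing the model:

```python
import numpy as np

def ramp_ogd(stream, dim, s=-1.0, lam=0.01, eta=0.1):
    """Online gradient descent with the ramp loss
    R(z) = min(1 - s, max(0, 1 - z)),  z = y * <w, x>.
    Samples with margin z <= s contribute zero gradient, so badly
    mislabeled points cannot drag the model arbitrarily far."""
    w = np.zeros(dim)
    for t, (x, y) in enumerate(stream, start=1):
        z = y * (w @ x)
        # Hinge-like gradient, active only on the slope region s < z < 1
        g = -y * x if s < z < 1 else np.zeros(dim)
        w -= (eta / np.sqrt(t)) * (g + lam * w)
    return w
```

Once a confidently classified point has its label flipped by noise, its margin z falls below s and it is simply ignored; this clipping is the source of the robustness (and sparsity) the abstract refers to.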

18 pages, 7610 KiB  
Article
Hyperspectral Image Restoration under Complex Multi-Band Noises
by Zongsheng Yue, Deyu Meng, Yongqing Sun and Qian Zhao
Remote Sens. 2018, 10(10), 1631; https://doi.org/10.3390/rs10101631 - 14 Oct 2018
Cited by 8 | Viewed by 2611
Abstract
Hyperspectral images (HSIs) are always corrupted by complicated forms of noise during the acquisition process, such as Gaussian noise, impulse noise, stripes, and deadlines. Specifically, different bands of practical HSIs generally contain noises of evidently distinct types and extents. While current HSI restoration methods give little consideration to such band-noise-distinctness issues, this study constructs a new HSI restoration technique aimed at taking such noise characteristics into account more faithfully and comprehensively. Particularly, through a two-level hierarchical Dirichlet process (HDP) modeling the HSI noise structure, the noise of each band is depicted by a Dirichlet process Gaussian mixture model (DP-GMM), whose complexity can be flexibly adapted in an automatic manner. Besides, the DP-GMM of each band is drawn from a higher-level DP-GMM that relates the noise of different bands. A variational Bayes algorithm is also designed to solve this model, and closed-form updating equations for all involved parameters are deduced. The experiments indicate that, in terms of the mean peak signal-to-noise ratio (MPSNR), the proposed method is on average 1 dB higher than existing state-of-the-art methods, as well as performing better in terms of the mean structural similarity index (MSSIM) and Erreur Relative Globale Adimensionnelle de Synthèse (ERGAS). Full article

16 pages, 1654 KiB  
Article
Hyperspectral Image Classification Based on Two-Stage Subspace Projection
by Xiaoyan Li, Lefei Zhang and Jane You
Remote Sens. 2018, 10(10), 1565; https://doi.org/10.3390/rs10101565 - 30 Sep 2018
Cited by 11 | Viewed by 2921
Abstract
Hyperspectral image (HSI) classification is a widely used application that provides important information about land covers. Each pixel of an HSI has hundreds of spectral bands, which are often considered as features. However, some features are highly correlated and nonlinear. To address these problems, we propose a new discriminant analysis framework for HSI classification based on Two-stage Subspace Projection (TwoSP) in this paper. First, the proposed framework projects the original feature data into a higher-dimensional feature subspace by exploiting kernel principal component analysis (KPCA). Then, a novel discrimination-information-based locality preserving projection (DLPP) method is applied to the KPCA feature data. Finally, an optimal low-dimensional feature space is constructed for the subsequent HSI classification. The main contributions of the proposed TwoSP method are twofold: (1) the discrimination information is utilized to minimize the within-class distance in a small neighborhood, and (2) the subspace found by TwoSP separates the samples better than if DLPP were applied directly to the original HSI data. Experimental results on two real-world HSI datasets demonstrate the effectiveness of the proposed TwoSP method in terms of classification accuracy. Full article
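The first of the two stages, KPCA, can be sketched in a few lines; the RBF kernel and parameters below are illustrative assumptions, and the DLPP second stage is not reproduced here:

```python
import numpy as np

def rbf_kpca(X, n_components, gamma=1.0):
    """Kernel PCA with an RBF kernel: project the data onto the
    leading principal directions of the (implicit) feature space."""
    # Pairwise squared distances -> RBF kernel matrix
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-gamma * sq)
    # Center the kernel matrix in feature space
    n = len(X)
    one = np.full((n, n), 1.0 / n)
    Kc = K - one @ K - K @ one + one @ K @ one
    # Leading eigenvectors give the projection coefficients
    evals, evecs = np.linalg.eigh(Kc)
    idx = np.argsort(evals)[::-1][:n_components]
    alphas = evecs[:, idx] / np.sqrt(np.maximum(evals[idx], 1e-12))
    return Kc @ alphas  # projected training samples
```

Note that `n_components` may exceed the input dimensionality, which is how KPCA yields the "higher-dimensional feature subspace" the abstract mentions.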

18 pages, 4483 KiB  
Article
Urban Change Detection Based on Dempster–Shafer Theory for Multitemporal Very High-Resolution Imagery
by Hui Luo, Chong Liu, Chen Wu and Xian Guo
Remote Sens. 2018, 10(7), 980; https://doi.org/10.3390/rs10070980 - 21 Jun 2018
Cited by 95 | Viewed by 7050
Abstract
Fusing multiple change detection results has great potential for dealing with the spectral variability in multitemporal very high-resolution (VHR) remote sensing images. However, it is difficult to solve the problem of uncertainty, which mainly includes the inaccuracy of each candidate change map and the conflicts between different results. Dempster–Shafer theory (D–S) is an effective method to model uncertainties and combine multiple pieces of evidence. Therefore, in this paper, we propose an urban change detection method for VHR images that fuses multiple change detection methods with D–S evidence theory. Change vector analysis (CVA), iteratively reweighted multivariate alteration detection (IRMAD), and iterative slow feature analysis (ISFA) were utilized to obtain the candidate change maps. The final change detection result is generated by fusing the three evidences with D–S evidence theory and a segmentation object map. The experiments indicate that the proposed method obtains the best performance in detection rate, false alarm rate, and comprehensive indicators. Full article
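The fusion step rests on Dempster's rule of combination. A minimal sketch over a two-hypothesis frame {change, no_change} follows; the mass values are made-up examples, not the paper's outputs:

```python
from itertools import product

def combine(m1, m2):
    """Dempster's rule: combine two mass functions given as dicts
    mapping frozenset hypotheses to mass."""
    raw, conflict = {}, 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            raw[inter] = raw.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb  # mass on incompatible hypothesis pairs
    # Renormalize by the non-conflicting mass (1 - K)
    return {h: w / (1.0 - conflict) for h, w in raw.items()}

C, N = frozenset({"change"}), frozenset({"no_change"})
U = C | N  # mass on the whole frame expresses uncertainty
m_cva = {C: 0.6, N: 0.2, U: 0.2}  # hypothetical evidence from CVA
m_sfa = {C: 0.5, N: 0.3, U: 0.2}  # hypothetical evidence from ISFA
fused = combine(m_cva, m_sfa)     # fused belief, sums to 1
```

Fusing a third evidence source (e.g., IRMAD) is just another call to `combine` on the running result.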

14 pages, 2173 KiB  
Article
Region-Wise Deep Feature Representation for Remote Sensing Images
by Peng Li, Peng Ren, Xiaoyu Zhang, Qian Wang, Xiaobin Zhu and Lei Wang
Remote Sens. 2018, 10(6), 871; https://doi.org/10.3390/rs10060871 - 05 Jun 2018
Cited by 42 | Viewed by 5602
Abstract
Effective feature representations play an important role in remote sensing image analysis tasks. With the rapid progress of deep learning techniques, deep features have been widely applied to remote sensing image understanding in recent years and have shown a powerful ability in image representation. The existing deep feature extraction approaches are usually carried out on the whole image directly. However, such deep feature representation strategies may not effectively capture the local geometric invariance of target regions in remote sensing images. In this paper, we propose a novel region-wise deep feature extraction framework for remote sensing images. First, regions that may contain the target information are extracted from the whole image. Then, these regions are fed into a pre-trained convolutional neural network (CNN) model to extract regional deep features. Finally, the regional deep features are encoded by an improved Vector of Locally Aggregated Descriptors (VLAD) algorithm to generate the feature representation for the image. We conducted extensive experiments on remote sensing image classification and retrieval tasks based on the proposed region-wise deep feature extraction framework. The comparison results show that the proposed approach is superior to the existing CNN feature extraction methods. Full article
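Plain VLAD encoding (the standard baseline, not the paper's improved variant; in practice the codebook comes from k-means over local descriptors) can be sketched as:

```python
import numpy as np

def vlad(descriptors, centroids):
    """Encode a set of local descriptors against a codebook by
    accumulating residuals to each descriptor's nearest centroid."""
    d2 = ((descriptors[:, None, :] - centroids[None, :, :]) ** 2).sum(-1)
    assign = d2.argmin(axis=1)                  # nearest-centroid assignment
    k, dim = centroids.shape
    v = np.zeros((k, dim))
    for i, c in enumerate(assign):
        v[c] += descriptors[i] - centroids[c]   # residual accumulation
    v = np.sign(v) * np.sqrt(np.abs(v))         # signed square-root normalization
    v = v.ravel()
    norm = np.linalg.norm(v)
    return v / norm if norm > 0 else v          # global L2 normalization
```

In the region-wise framework described above, `descriptors` would be the regional CNN features extracted from one image, and the returned vector is that image's fixed-length representation.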

15 pages, 395 KiB  
Article
Discriminant Analysis with Graph Learning for Hyperspectral Image Classification
by Mulin Chen, Qi Wang and Xuelong Li
Remote Sens. 2018, 10(6), 836; https://doi.org/10.3390/rs10060836 - 27 May 2018
Cited by 41 | Viewed by 4976
Abstract
Linear Discriminant Analysis (LDA) is a widely used technique for dimensionality reduction and has been applied in many practical applications, such as hyperspectral image classification. Traditional LDA assumes that the data obey a Gaussian distribution. However, in real-world situations, high-dimensional data may follow various kinds of distributions, which restricts the performance of LDA. To alleviate this problem, we propose the Discriminant Analysis with Graph Learning (DAGL) method in this paper. Without any assumption on the data distribution, the proposed method learns the local data relationships adaptively during the optimization. The main contributions of this research are threefold: (1) the local data manifold is captured by learning the data graph adaptively in the subspace; (2) the spatial information within the hyperspectral image is utilized via a regularization term; and (3) an efficient algorithm is designed to optimize the proposed problem with proved convergence. Experimental results on hyperspectral image datasets show the promising performance of the proposed method and validate its superiority over the state-of-the-art. Full article
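For reference, the classical LDA baseline that DAGL improves upon solves a generalized eigenproblem on between- and within-class scatter matrices. A compact sketch (this is not the authors' DAGL; the small ridge term is a standard numerical safeguard, and the toy data are illustrative):

```python
import numpy as np

def lda_projection(X, y, n_dims):
    """Classical LDA: find directions maximizing between-class
    scatter relative to within-class scatter."""
    d = X.shape[1]
    mean_all = X.mean(axis=0)
    Sw, Sb = np.zeros((d, d)), np.zeros((d, d))
    for c in np.unique(y):
        Xc = X[y == c]
        mc = Xc.mean(axis=0)
        Sw += (Xc - mc).T @ (Xc - mc)       # within-class scatter
        diff = (mc - mean_all)[:, None]
        Sb += len(Xc) * (diff @ diff.T)     # between-class scatter
    # Solve Sw^{-1} Sb w = lambda w; the ridge keeps Sw invertible
    evals, evecs = np.linalg.eig(np.linalg.solve(Sw + 1e-6 * np.eye(d), Sb))
    order = np.argsort(evals.real)[::-1]
    return evecs.real[:, order[:n_dims]]
```

The Gaussian assumption enters through the scatter matrices above; DAGL instead learns a data graph adaptively so that no such distributional assumption is needed.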
