Region Based Classification (RBC), Object Based Image Analysis (OBIA) and Deep Learning (DL) for Remote Sensing Applications

A special issue of Remote Sensing (ISSN 2072-4292). This special issue belongs to the section "Remote Sensing Image Processing".

Deadline for manuscript submissions: closed (30 April 2019) | Viewed by 31816

Special Issue Editors


Dr. Luciano Vieira Dutra
Guest Editor
Image Processing Division, National Institute for Space Research, Av. dos Astronautas, 1758, São José dos Campos, SP 12227-010, Brazil
Interests: pattern recognition for remote sensing; image processing; SAR data processing; remote sensing applications

Dr. Raul Queiroz Feitosa
Guest Editor
Department of Electrical Engineering, Pontifical Catholic University of Rio de Janeiro (PUC-Rio), Rio de Janeiro 22451-040, RJ, Brazil
Interests: pattern recognition for remote sensing; image analysis; remote sensing applications; change detection

Dr. Rogério Galante Negri
Guest Editor
Department of Environmental Engineering, São Paulo State University, Rod. Presidente Dutra, km 137.8, São José dos Campos, SP 12247-004, Brazil
Interests: pattern recognition; digital image processing; Kernel-based methods; synthetic aperture radar; remote sensing change detection

Special Issue Information

Dear Colleagues,

The large amount of remote sensing (RS) data, with a variety of source types, spectral characteristics, and spatial and temporal resolutions, together with a plethora of analysis algorithms, has opened up new perspectives in many application fields, but it has also made choosing the best set of resources more difficult.

Region-Based Classification (RBC), also known as Object-Based Image Analysis (OBIA), has attracted substantial attention for land cover mapping. Basically, RBC comprises three main steps: segmentation, feature extraction, and classification, each executed and configured separately. In this processing chain, segmentation is the critical step. Typically, it relies solely on the image data and ignores semantics, which are only brought in when the user manually defines the parameter values of the segmentation algorithm. Deep Learning (DL) provides methods to jointly learn, from raw input data, a set of features tailored to the task as well as the optimum parameter values of the underlying classifier. However, DL-based solutions normally do not rely on image segmentation and demand a huge amount of labeled training data, which is not available in most RS applications. This Special Issue focuses on the RBC steps for land use mapping under restricted availability of labeled training data, especially with DL methods. It also addresses how to specify segmentation parameters and features, coupled with the configuration of standard classifiers (Random Forests, Support Vector Machines, Maximum Likelihood, and others), to improve RBC of remote sensing data.
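
As a concrete, purely illustrative picture of this processing chain, the sketch below implements a generic RBC pipeline under simple assumptions: scikit-image SLIC segmentation, per-segment mean spectral features, and a Random Forest classifier. None of these choices are prescribed by this call; they only make the three steps above tangible.

```python
# A minimal, generic RBC/OBIA sketch: segment, describe each region, classify.
# All algorithm and parameter choices here are illustrative assumptions.
import numpy as np
from skimage.segmentation import slic
from sklearn.ensemble import RandomForestClassifier

def segment_and_describe(image, n_segments=500):
    """Segment the image and compute per-segment mean band values."""
    segments = slic(image, n_segments=n_segments, compactness=10.0,
                    start_label=0, channel_axis=-1)
    n_regions = segments.max() + 1
    features = np.zeros((n_regions, image.shape[-1]))
    for region in range(n_regions):
        features[region] = image[segments == region].mean(axis=0)  # mean per band
    return segments, features

def region_based_classification(image, labeled_regions, labels):
    """Train a standard classifier on labeled regions and map every region."""
    segments, features = segment_and_describe(image)
    classifier = RandomForestClassifier(n_estimators=200, random_state=0)
    classifier.fit(features[labeled_regions], labels)
    region_classes = classifier.predict(features)   # one class per region
    return region_classes[segments]                 # rasterize back to a pixel-wise map
```

Choosing the number of segments, the feature set, and the classifier configuration in such a sketch is exactly the kind of decision the questions below address.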

Submissions may relate to, but are not limited to, the following scientific questions:

  • How to specify the best segmentation parameters as a function of the classifier to be used and the set of classes of interest?
  • How to design a system to resolve hard-to-separate land cover classes?
  • How to use DL methods for Region-Based Classification?
  • How to take semantics into account in RBC?
  • How to take source data characteristics, such as SAR, hyperspectral, and/or multi-temporal data, into account in RBC processes?

Dr. Luciano Vieira Dutra
Dr. Raul Queiroz Feitosa
Dr. Rogério Galante Negri
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Remote Sensing is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2700 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Design of classifier systems
  • OBIA optimization
  • Deep learning and remote sensing
  • Feature extraction and selection
  • Classifier selection and optimization
  • Land use/land cover classification
  • Image semantics

Published Papers (5 papers)


Research


20 pages, 5758 KiB  
Article
Classification of PolSAR Image Using Neural Nonlocal Stacked Sparse Autoencoders with Virtual Adversarial Regularization
by Ruichuan Wang and Yanfei Wang
Remote Sens. 2019, 11(9), 1038; https://doi.org/10.3390/rs11091038 - 01 May 2019
Cited by 10 | Viewed by 3571
Abstract
Polarimetric synthetic aperture radar (PolSAR) has become increasingly popular in the past two decades, as it can derive multichannel features of ground objects that contain more discriminative information than traditional SAR. In this paper, a neural nonlocal stacked sparse autoencoder with virtual adversarial regularization (NNSSAE-VAT) is proposed for PolSAR image classification. The NNSSAE first extracts nonlocal features by calculating the pairwise similarity between each pixel and its surrounding pixels using a neural network that contains a multiscale feature extractor and a linear embedding layer. This feature extraction process relieves the negative influence of speckle noise and extracts discriminative nonlocal spatial information without carefully designed parameters. Then, the SSAE maps the center pixel and the extracted nonlocal features into a deep latent space, in which a Softmax classifier is used to conduct classification. Virtual adversarial training is introduced to regularize the network and keep it from overfitting. Experimental results from three real PolSAR images show that the proposed NNSSAE-VAT method is robust and effective and achieves competitive performance compared with related methods.
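
As a rough, hypothetical illustration of the nonlocal aggregation idea only (in the paper the pairwise similarity is learned by a neural network with a multiscale feature extractor; here a fixed Gaussian kernel stands in), a pixel's neighborhood can be averaged with weights given by its similarity to the center pixel:

```python
# Similarity-weighted neighborhood averaging: an illustrative stand-in for the
# learned nonlocal feature extraction described in the abstract.
import numpy as np

def nonlocal_feature(patch, sigma=1.0):
    """patch: (h, w, c) neighborhood; returns a similarity-weighted mean vector."""
    h, w, c = patch.shape
    center = patch[h // 2, w // 2]
    pixels = patch.reshape(-1, c)
    d2 = np.sum((pixels - center) ** 2, axis=1)
    weights = np.exp(-d2 / (2.0 * sigma ** 2))     # similarity to the center pixel
    return (weights[:, None] * pixels).sum(axis=0) / weights.sum()
```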

21 pages, 5559 KiB  
Article
Evaluation of Sampling and Cross-Validation Tuning Strategies for Regional-Scale Machine Learning Classification
by Christopher A. Ramezan, Timothy A. Warner and Aaron E. Maxwell
Remote Sens. 2019, 11(2), 185; https://doi.org/10.3390/rs11020185 - 18 Jan 2019
Cited by 157 | Viewed by 15270
Abstract
High spatial resolution (1–5 m) remotely sensed datasets are increasingly being used to map land covers over large geographic areas using supervised machine learning algorithms. Although many studies have compared machine learning classification methods, sample selection methods for acquiring training and validation data for machine learning, and cross-validation techniques for tuning classifier parameters are rarely investigated, particularly on large, high spatial resolution datasets. This work, therefore, examines four sample selection methods (simple random, proportional stratified random, disproportional stratified random, and deliberative sampling) as well as three cross-validation tuning approaches (k-fold, leave-one-out, and Monte Carlo methods). In addition, the effect on accuracy of localizing sample selection to a small geographic subset of the entire area, an approach that is sometimes used to reduce the costs of training data collection, is investigated. These methods are investigated in the context of support vector machine (SVM) classification and geographic object-based image analysis (GEOBIA), using high spatial resolution National Agricultural Imagery Program (NAIP) orthoimagery and LIDAR-derived rasters covering a 2609 km² regional-scale area in northeastern West Virginia, USA. Stratified-statistical-based sampling methods were found to generate the highest classification accuracy. Using a small number of training samples collected from only a subset of the study area provided a similar level of overall accuracy to a sample of equivalent size collected in a dispersed manner across the entire regional-scale dataset. There were minimal differences in accuracy among the cross-validation tuning methods. The processing times for Monte Carlo and leave-one-out cross-validation were high, especially with large training sets; for this reason, k-fold cross-validation appears to be a good choice. Classifications trained with samples collected deliberately (i.e., not randomly) were less accurate than classifiers trained from statistical-based samples. This may be due to the high positive spatial autocorrelation in the deliberative training set. Thus, if possible, samples for training should be selected randomly; deliberative samples should be avoided.
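
For context on the tuning step compared in this study, the snippet below is a minimal scikit-learn sketch of k-fold cross-validation tuning for an RBF SVM; the parameter grid, fold count, and random seed are illustrative assumptions, not the settings used by the authors.

```python
# Stratified k-fold cross-validation tuning of an RBF SVM (illustrative only).
from sklearn.model_selection import GridSearchCV, StratifiedKFold
from sklearn.svm import SVC

def tune_svm(X_train, y_train, n_folds=5):
    """Tune SVM hyperparameters with stratified k-fold cross-validation."""
    param_grid = {"C": [1, 10, 100], "gamma": [0.01, 0.1, 1.0]}
    cv = StratifiedKFold(n_splits=n_folds, shuffle=True, random_state=0)
    search = GridSearchCV(SVC(kernel="rbf"), param_grid, cv=cv)
    search.fit(X_train, y_train)
    return search.best_estimator_, search.best_params_
```

In scikit-learn, leave-one-out or Monte Carlo tuning would amount to swapping the cv argument for LeaveOneOut() or ShuffleSplit(), at a much higher computational cost on large training sets.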

24 pages, 17524 KiB  
Article
A New Method for Region-Based Majority Voting CNNs for Very High Resolution Image Classification
by Xianwei Lv, Dongping Ming, Tingting Lu, Keqi Zhou, Min Wang and Hanqing Bao
Remote Sens. 2018, 10(12), 1946; https://doi.org/10.3390/rs10121946 - 04 Dec 2018
Cited by 55 | Viewed by 5992
Abstract
Conventional geographic object-based image analysis (GEOBIA) land cover classification methods using very high resolution images are hardly applicable due to complex ground truth and manually selected features, while convolutional neural networks (CNNs) with many hidden layers provide the possibility of extracting deep features from very high resolution images. Compared with pixel-based CNNs, superpixel-based CNN classification, carrying on the idea of GEOBIA, is more efficient. However, superpixel-based CNNs are still problematic in terms of their processing units and accuracies. Firstly, the limitations of salt-and-pepper errors and low boundary adherence caused by superpixel segmentation still exist; secondly, this method uses the central point of the superpixel as the classification benchmark in identifying the category of the superpixel, which does not ensure classification accuracy. To solve these problems, this paper proposes a region-based majority voting CNN which combines the idea of GEOBIA and the deep learning technique. Firstly, training data were manually labeled and trained; secondly, images were segmented with multiresolution segmentation and the segmented regions were taken as basic processing units; then, point voters were generated within each segmented region and the perceptive fields of the point voters were fed into the multi-scale CNN to determine their categories. Finally, the category of each region was determined by the region majority voting system. The experiments and analyses indicate the following: 1. region-based majority voting CNNs can fully utilize their exclusive nature to extract abstract deep features from images; 2. compared with the pixel-based CNN and superpixel-based CNN, the region-based majority voting CNN is not only efficient but also better at keeping segmentation accuracy and boundary fit; 3. to a certain extent, region-based majority voting CNNs reduce the impact of the scale effect on large objects; and 4. multi-scales containing small scales are more applicable for very high resolution image classification than a single scale.
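
The voting step can be pictured with the sketch below, which is not the authors' implementation: points are sampled inside each segmented region, a CNN (represented here by a generic cnn_predict callable returning an integer class label, an assumption of this sketch) classifies the patch around each point, and the region takes the majority class.

```python
# Region-based majority voting over CNN predictions (illustrative sketch only).
import numpy as np

def region_majority_vote(image, segments, cnn_predict, patch_size=32,
                         votes_per_region=20, seed=0):
    """Assign each segmented region the majority class of its CNN-classified point voters."""
    rng = np.random.default_rng(seed)
    half = patch_size // 2
    region_class = {}
    for region_id in np.unique(segments):
        rows, cols = np.nonzero(segments == region_id)
        n_votes = min(votes_per_region, len(rows))
        idx = rng.choice(len(rows), size=n_votes, replace=False)
        ballots = []
        for r, c in zip(rows[idx], cols[idx]):
            # the voter's perceptive field: a patch centered on the sampled point
            window = image[max(r - half, 0):r + half, max(c - half, 0):c + half]
            ballots.append(int(cnn_predict(window)))   # CNN returns an integer class label
        region_class[region_id] = np.bincount(ballots).argmax()  # majority vote
    return region_class
```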

20 pages, 5530 KiB  
Article
Superpixel Segmentation of Polarimetric Synthetic Aperture Radar (SAR) Images Based on Generalized Mean Shift
by Fengkai Lang, Jie Yang, Shiyong Yan and Fachao Qin
Remote Sens. 2018, 10(10), 1592; https://doi.org/10.3390/rs10101592 - 05 Oct 2018
Cited by 50 | Viewed by 3367
Abstract
The mean shift algorithm has been shown to perform well in optical image segmentation. However, the conventional mean shift algorithm performs poorly if it is directly used with Synthetic Aperture Radar (SAR) images due to the large dynamic range and strong speckle noise. Recently, the Generalized Mean Shift (GMS) algorithm with an adaptive variable asymmetric bandwidth has been proposed for Polarimetric SAR (PolSAR) image filtering. In this paper, the GMS algorithm is further developed for PolSAR image segmentation. A new merging predicate that is defined in the joint spatial-range domain is derived based on the GMS algorithm. A pre-sorting strategy and a post-processing step are also introduced into the GMS segmentation algorithm. The proposed algorithm can be directly used for PolSAR image superpixel segmentation without any pre-processing steps. Experiments using Airborne SAR (AirSAR) and Experimental SAR (ESAR) L-band PolSAR data demonstrate the effectiveness of the proposed superpixel segmentation algorithm. The parameter settings, stability, quality, and efficiency of the GMS algorithm are also discussed at the end of this paper.
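
For readers less familiar with mean shift, the generic fixed-bandwidth, Gaussian-kernel iteration that GMS generalizes can be sketched as follows; this is background only, not the paper's adaptive asymmetric-bandwidth procedure for PolSAR data.

```python
# Plain mean shift mode seeking with a fixed Gaussian kernel (background sketch).
import numpy as np

def mean_shift_point(features, x, bandwidth=1.0, n_iter=20):
    """Shift one feature vector x toward the nearest density mode of `features`."""
    for _ in range(n_iter):
        d2 = np.sum((features - x) ** 2, axis=1)
        weights = np.exp(-d2 / (2.0 * bandwidth ** 2))   # Gaussian kernel weights
        x_new = (weights[:, None] * features).sum(axis=0) / weights.sum()
        if np.linalg.norm(x_new - x) < 1e-4:              # converged to a mode
            break
        x = x_new
    return x
```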

Other


17 pages, 5300 KiB  
Technical Note
Spectral–Spatial Discriminant Feature Learning for Hyperspectral Image Classification
by Chunhua Dong, Masoud Naghedolfeizi, Dawit Aberra and Xiangyan Zeng
Remote Sens. 2019, 11(13), 1552; https://doi.org/10.3390/rs11131552 - 29 Jun 2019
Cited by 9 | Viewed by 2828
Abstract
Sparse representation classification (SRC) is being widely applied to target detection in hyperspectral images (HSI). However, because high-dimensional HSI data contain redundant information, SRC methods may fail to achieve high classification performance, even with a large number of spectral bands. Selecting a subset of predictive features in a high-dimensional space is an important and challenging problem for hyperspectral image classification. In this paper, we propose a novel discriminant feature learning (DFL) method, which combines spectral and spatial information into a hypergraph Laplacian. First, a subset of discriminative features is selected, which preserves the spectral structure of the data and the inter- and intra-class constraints on labeled training samples. A feature evaluator is obtained by semi-supervised learning with the hypergraph Laplacian. Second, the selected features are mapped into a lower-dimensional eigenspace through a generalized eigendecomposition of the Laplacian matrix. The finally extracted discriminative features are used in a joint sparsity-model algorithm. Experiments conducted with benchmark data sets and different experimental settings show that our proposed method increases classification accuracy and outperforms state-of-the-art HSI classification methods.
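
To give a flavor of the Laplacian-based embedding step (a generic sketch under simple assumptions: an ordinary similarity graph instead of the authors' hypergraph, and no class constraints), a low-dimensional eigenspace can be obtained from a generalized eigendecomposition as follows:

```python
# Generic graph-Laplacian embedding via a generalized eigendecomposition
# (illustrative stand-in for the DFL embedding described in the abstract).
import numpy as np
from scipy.linalg import eigh
from scipy.spatial.distance import cdist

def laplacian_embedding(X, n_components=10, sigma=1.0):
    """Project samples X (n x d) onto a low-dimensional Laplacian eigenspace."""
    W = np.exp(-cdist(X, X, "sqeuclidean") / (2.0 * sigma ** 2))  # affinity matrix
    D = np.diag(W.sum(axis=1))
    L = D - W                                                     # graph Laplacian
    # generalized eigenproblem L v = lambda D v; eigenvalues in ascending order
    vals, vecs = eigh(L, D)
    return vecs[:, 1:n_components + 1]                            # skip the trivial eigenvector
```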
