Special Issue "Unsupervised and Supervised Image Classification in Remote Sensing"

A special issue of Remote Sensing (ISSN 2072-4292). This special issue belongs to the section "AI Remote Sensing".

Deadline for manuscript submissions: closed (31 January 2022) | Viewed by 2441

Special Issue Editors

Dr. Ruben Fernandez-Beltran
Guest Editor
Institute of New Imaging Technologies, University Jaume I, Castelló de la Plana, Spain
Interests: pattern recognition; image analysis; data fusion and their applications in remote sensing; land-cover visual understanding; image classification and retrieval; spectral unmixing and image super-resolution
Dr. Jian Kang
Guest Editor
School of Electronic and Information Engineering, Soochow University, Suzhou 215006, China
Interests: machine learning; signal processing and their applications in remote sensing; radar imaging; SAR interferometry and denoising; geophysical parameter estimation; semantic segmentation; scene classification and image retrieval
Dr. Renlong Hang
Guest Editor
School of Computer & Software, Nanjing University of Information Science and Technology, Nanjing 210044, China
Interests: machine learning; pattern recognition; multisource fusion; semantic segmentation and their applications in remote sensing
Dr. Jingen Ni
Guest Editor
School of Electronic and Information Engineering, Soochow University, Suzhou 215006, China
Interests: adaptive signal processing; distributed optimization and learning; remote sensing image processing; artificial neural networks

Special Issue Information

Dear Colleagues,

The unprecedented availability of remote sensing data provides widespread opportunities to cover current and future societal needs. In this context, the accurate classification of remotely sensed images is key to advancing the understanding of anthropogenic changes and their environmental impact. Over the past years, machine learning techniques, especially deep learning-based ones, have shown prominent results in classifying remote sensing data. Nonetheless, the increasing visual complexity and data volume still raise important challenges for both supervised and unsupervised classification paradigms. In response, we present this Special Issue, whose scope covers cutting-edge supervised and unsupervised technologies for the accurate classification of remote sensing data.

Potential topics for this Special Issue include, but are not limited to, the following:

  • Pattern recognition, machine learning and deep learning techniques for remote sensing.
  • Intelligent methods for classifying remote sensing images, from the scale of landscapes to ground validation data.
  • Advanced remote sensing scene interpretation methods based on supervised, semi-supervised and unsupervised learning paradigms.
  • New techniques for the accurate quantification of terrestrial biodiversity from remotely sensed data.
  • Innovative classification models for any thematic application (urban, agricultural, ecological...) using multi-source or multi-temporal remote sensing data.

Dr. Ruben Fernandez-Beltran
Dr. Jian Kang
Dr. Renlong Hang
Dr. Jingen Ni
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Remote Sensing is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2500 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • image classification
  • supervised
  • unsupervised
  • semi-supervised
  • remote sensing
  • machine learning

Published Papers (3 papers)


Research

Article
Multi-View Structural Feature Extraction for Hyperspectral Image Classification
Remote Sens. 2022, 14(9), 1971; https://doi.org/10.3390/rs14091971 - 20 Apr 2022
Viewed by 375
Abstract
Hyperspectral feature extraction is one of the most popular topics in the remote sensing community. However, most hyperspectral feature extraction methods are based on region-based local information descriptors and neglect the correlations and dependencies among different homogeneous regions. To alleviate this issue, this paper proposes a multi-view structural feature extraction method that furnishes a complete characterization of the spectral–spatial structures of different objects and mainly consists of the following key steps. First, the spectral dimensionality of the original image is reduced with the minimum noise fraction (MNF) method, and relative total variation is exploited to extract the local structural feature from the dimension-reduced data. Then, with the help of a superpixel segmentation technique, nonlocal structural features from the intra-view and inter-view are constructed by considering the intra- and inter-similarities of superpixels. Finally, the local and nonlocal structural features are merged to form the final image features for classification. Experiments on several real hyperspectral datasets indicate that the proposed method outperforms other state-of-the-art classification methods in terms of visual performance and objective results, especially when the number of training samples is limited.
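The nonlocal intra-view idea in this abstract (descriptors built from similarities between superpixels rather than from local regions alone) can be illustrated with a minimal pure-Python sketch. This is not the authors' implementation — it omits MNF, relative total variation, and the inter-view features, and the Gaussian similarity weighting and helper names are our own assumptions.

```python
import math

def superpixel_means(pixels, labels, n_superpixels):
    """Mean spectral vector per superpixel (hypothetical helper, not the paper's code)."""
    dim = len(pixels[0])
    sums = [[0.0] * dim for _ in range(n_superpixels)]
    counts = [0] * n_superpixels
    for vec, lab in zip(pixels, labels):
        counts[lab] += 1
        for d in range(dim):
            sums[lab][d] += vec[d]
    return [[s / max(c, 1) for s in row] for row, c in zip(sums, counts)]

def nonlocal_feature(means, i, sigma=1.0):
    """Similarity-weighted average over all superpixel means: a nonlocal
    descriptor for superpixel i that depends on its similarity to every
    other homogeneous region, not just its local neighborhood."""
    weights = []
    for m in means:
        dist2 = sum((a - b) ** 2 for a, b in zip(means[i], m))
        weights.append(math.exp(-dist2 / (2 * sigma ** 2)))
    z = sum(weights)
    dim = len(means[0])
    return [sum(w * m[d] for w, m in zip(weights, means)) / z for d in range(dim)]
```

In this sketch, dissimilar superpixels contribute almost nothing to the descriptor, so each superpixel's nonlocal feature is dominated by regions that look like it, wherever they sit in the image.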
(This article belongs to the Special Issue Unsupervised and Supervised Image Classification in Remote Sensing)

Article
Automated Parts-Based Model for Recognizing Human–Object Interactions from Aerial Imagery with Fully Convolutional Network
Remote Sens. 2022, 14(6), 1492; https://doi.org/10.3390/rs14061492 - 19 Mar 2022
Viewed by 556
Abstract
Advanced aerial images have led to the development of improved human–object interaction recognition (HOI) methods for use in surveillance, security, and public monitoring systems. Despite the ever-increasing rate of research being conducted in the field of HOI, the existing challenges of occlusion, scale variation, fast motion, and illumination variation continue to attract more researchers. In particular, accurate identification of human body parts, the involved objects, and robust features is the key to effective HOI recognition systems. However, identifying different human body parts and extracting their features is a tedious and rather ineffective task. Based on the assumption that only a few body parts are usually involved in a particular interaction, this article proposes a novel parts-based model for recognizing complex human–object interactions in videos and images captured using ground and aerial cameras. Gamma correction and non-local means denoising techniques have been used for pre-processing the video frames, and Felzenszwalb's algorithm has been utilized for image segmentation. After segmentation, twelve human body parts have been detected and five of them have been shortlisted based on their involvement in the interactions. Four kinds of features have been extracted and concatenated into a large feature vector, which has been optimized using the t-distributed stochastic neighbor embedding (t-SNE) technique. Finally, the interactions have been classified using a fully convolutional network (FCN). The proposed system has been validated on ground and aerial videos of the VIRAT Video, YouTube Aerial, and SYSU 3D HOI datasets, achieving average accuracies of 82.55%, 86.63%, and 91.68% on these datasets, respectively.
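The gamma-correction pre-processing step mentioned in this abstract can be sketched generically. This illustrates standard gamma correction on 8-bit intensities, not the paper's code; the exponent convention and the gamma value are assumptions.

```python
def gamma_correct(values, gamma=1.5):
    """Apply gamma correction to 8-bit intensities: out = 255 * (in/255)^(1/gamma).
    With gamma > 1 this brightens dark regions, a common choice when
    pre-processing low-light aerial frames (parameter value is illustrative)."""
    inv = 1.0 / gamma
    return [round(255.0 * (v / 255.0) ** inv) for v in values]
```

Applied per pixel before denoising and segmentation, this flattens the effect of illumination variation so that later body-part detection sees more uniform contrast.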
(This article belongs to the Special Issue Unsupervised and Supervised Image Classification in Remote Sensing)

Article
SSDAN: Multi-Source Semi-Supervised Domain Adaptation Network for Remote Sensing Scene Classification
Remote Sens. 2021, 13(19), 3861; https://doi.org/10.3390/rs13193861 - 27 Sep 2021
Cited by 4 | Viewed by 875
Abstract
We present a new method for multi-source semi-supervised domain adaptation in remote sensing scene classification. The method consists of a pre-trained convolutional neural network (CNN) model, namely EfficientNet-B3, for the extraction of highly discriminative features, followed by a classification module that learns feature prototypes for each class. Then, the classification module computes a cosine distance between the feature vectors of target data samples and the feature prototypes. Finally, the proposed method ends with a Softmax activation function that converts the distances into class probabilities. The distances are also divided by a temperature parameter to normalize and control the classification module. The whole model is trained on both the unlabeled and labeled target samples. It is trained to predict the correct classes using the standard cross-entropy loss computed over the labeled source and target samples. At the same time, the model is trained to learn domain-invariant features using another loss function based on entropy computed over the unlabeled target samples. Unlike the standard cross-entropy loss, the new entropy loss function is computed on the model's predicted probabilities and does not need the true labels. This entropy loss, called minimax loss, needs to be maximized with respect to the classification module to learn features that are domain-invariant (hence removing the data shift), and at the same time, it should be minimized with respect to the CNN feature extractor to learn discriminative features that are clustered around the class prototypes (in other words, reducing intra-class variance). To accomplish these maximization and minimization processes at the same time, we use an adversarial training approach, where we alternate between the two processes. The model combines the standard cross-entropy loss and the new minimax entropy loss and optimizes them jointly.
The proposed method is tested on four remote sensing scene datasets, namely UC Merced, AID, RESISC45, and PatternNet, using two-source and three-source domain adaptation scenarios. The experimental results demonstrate the strong capability of the proposed method to achieve impressive performance despite using only a few (six in our case) labeled target samples per class. Its performance is already better than several state-of-the-art methods, including RevGrad, ADDA, Siamese-GAN, and MSCN.
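The classification module described in this abstract (cosine similarity to class prototypes, temperature scaling, Softmax, and an entropy term over unlabeled predictions) can be sketched in pure Python. This is not the SSDAN implementation: the adversarial alternation between feature extractor and classifier is omitted, and the function names and temperature value are illustrative.

```python
import math

def cosine(u, v):
    """Cosine similarity between two feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def class_probabilities(feature, prototypes, temperature=0.05):
    """Softmax over temperature-scaled cosine similarities to the class
    prototypes; a small temperature sharpens the distribution."""
    logits = [cosine(feature, p) / temperature for p in prototypes]
    m = max(logits)                      # subtract max for numerical stability
    exps = [math.exp(l - m) for l in logits]
    z = sum(exps)
    return [e / z for e in exps]

def entropy_loss(probs):
    """Entropy of the predicted distribution over an unlabeled sample; in the
    minimax scheme this is maximized w.r.t. the classifier and minimized
    w.r.t. the feature extractor."""
    return -sum(p * math.log(p) for p in probs if p > 0)
```

A target feature close to one prototype yields a low-entropy (confident) prediction, while a feature equidistant from all prototypes yields the maximum entropy, which is exactly the quantity the two modules push in opposite directions during adversarial training.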
(This article belongs to the Special Issue Unsupervised and Supervised Image Classification in Remote Sensing)
