Special Issue "Analysis of Big Data in Remote Sensing"

A special issue of Remote Sensing (ISSN 2072-4292). This special issue belongs to the section "Remote Sensing Image Processing".

Deadline for manuscript submissions: closed (30 September 2019).

Special Issue Editors

Prof. Mingmin Chi
Guest Editor
School of Computer Science, Shanghai Key Laboratory of Data Science, Fudan University, 825 Zhangheng Road, Shanghai, China
Interests: big data; remote sensing; data science; machine learning; high performance computing

Special Issue Information

Dear Colleagues,

Big data is an important topic in many research areas. Every day, a large number of Earth observation (EO) spaceborne and airborne sensors from many different countries provide a massive amount of remotely sensed data. These data sets differ in spectral bandwidth (dimensionality), spatial resolution, and radiometric resolution. Current estimates indicate that remotely sensed data are being collected at a petabyte-per-day rate worldwide. Combined with data on human activities and from social science, these massive remotely sensed data (which constitute big data in remote sensing) have been successfully used in applications such as natural hazard monitoring, global climate change, and urban planning.

This Special Issue on “Analysis of Big Data in Remote Sensing” introduces the latest techniques for analyzing big data in remote sensing applications. It is expected to bring together experts from different research areas to discover and realize the value of big data across remote sensing. The analysis techniques collected here provide a first necessary step toward incorporating big data technology into the remote sensing field, and will help academia, governments, and industry gain insight into the potential of big data techniques and concepts in remote sensing applications.

High-quality contributions are solicited, with emphasis on (but not limited to) the analysis of big data in remote sensing using:

  • Active learning
  • Cloud computing
  • Crowdsourcing
  • Deep ensemble learning
  • Deep fusion learning
  • Deep reinforcement learning
  • Fusion of deep and shallow machine learning
  • High performance computing
  • Representation learning
  • Semi-supervised deep learning
  • Supercomputing
  • Supervised deep learning
  • Transfer deep learning
  • Unsupervised deep learning

Prof. Jon Atli Benediktsson
Prof. Mingmin Chi
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website. Once registered, go to the submission form. Manuscripts can be submitted until the deadline. All papers will be peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Remote Sensing is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2000 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Big data
  • Deep learning
  • Remote sensing
  • Supercomputing
  • Image processing
  • Machine learning
  • High performance computing

Published Papers (8 papers)

Research

Open Access Feature Paper Article
Deep TEC: Deep Transfer Learning with Ensemble Classifier for Road Extraction from UAV Imagery
Remote Sens. 2020, 12(2), 245; https://doi.org/10.3390/rs12020245 - 10 Jan 2020
Abstract
Unmanned aerial vehicle (UAV) remote sensing has a wide range of applications, and in this paper we address one such problem: road extraction from UAV-captured RGB images. The key challenge is to solve the road extraction problem using UAV remote sensing datasets acquired with different sensors over different locations. We aim to extract knowledge from a dataset available in the literature and apply it to our own dataset. The paper presents a novel method, deep TEC (deep transfer learning with ensemble classifier), for road extraction from UAV imagery. Deep TEC performs road extraction in two stages: deep transfer learning and ensemble classification. In the first stage, three deep learning models (a conditional generative adversarial network, a cycle generative adversarial network, and a fully convolutional network) are pre-trained on a benchmark UAV road extraction dataset from the literature. Using this extracted knowledge, road regions are then extracted from our UAV-acquired images. Finally, ensemble classification is carried out on the road-classified images. Deep TEC achieves an average quality of 71%, which is 10% higher than the next-best standard deep learning method, and also scores higher on completeness, correctness, and F1 measures. The results show that deep TEC is effective at extracting road networks in urban regions.
(This article belongs to the Special Issue Analysis of Big Data in Remote Sensing)
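The abstract leaves the ensemble rule unspecified; one common way to combine the masks produced by several pre-trained models is a per-pixel majority vote. The sketch below assumes binary road masks and a simple vote (the actual deep TEC combiner may differ):

```python
def majority_vote(masks):
    """Combine binary road masks from several models by per-pixel majority vote."""
    n = len(masks)
    h, w = len(masks[0]), len(masks[0][0])
    fused = [[0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            votes = sum(m[i][j] for m in masks)  # number of models voting "road"
            fused[i][j] = 1 if votes * 2 > n else 0
    return fused

# Hypothetical predictions from three pre-trained models for a 2x3 tile
cgan     = [[1, 1, 0], [0, 1, 0]]
cyclegan = [[1, 0, 0], [0, 1, 1]]
fcn      = [[1, 1, 0], [1, 1, 0]]
print(majority_vote([cgan, cyclegan, fcn]))  # [[1, 1, 0], [0, 1, 0]]
```

A pixel is kept as road only when more than half of the models agree, which suppresses isolated per-model false positives.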

Open Access Feature Paper Article
Remote Sensing Big Data Classification with High Performance Distributed Deep Learning
Remote Sens. 2019, 11(24), 3056; https://doi.org/10.3390/rs11243056 - 17 Dec 2019
Abstract
High-Performance Computing (HPC) has recently attracted more attention in remote sensing applications due to the challenges posed by the increasing amount of open data produced daily by Earth Observation (EO) programs. The parallel computing environments and programming techniques integrated into HPC systems can solve large-scale problems such as training classification algorithms on large amounts of Remote Sensing (RS) data. This paper shows that training state-of-the-art deep Convolutional Neural Networks (CNNs) can be performed efficiently in a distributed fashion using parallel implementation techniques on HPC machines containing a large number of Graphics Processing Units (GPUs). The experimental results confirm that distributed training can drastically reduce training time, yielding near-linear scaling without loss of test accuracy.
(This article belongs to the Special Issue Analysis of Big Data in Remote Sensing)
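The core of data-parallel distributed training is that each GPU computes gradients on its own shard of the data, after which the gradients are averaged (an allreduce) before the shared weights are updated. A minimal sketch of that averaging step, with plain lists standing in for gradient tensors:

```python
def allreduce_average(worker_grads):
    """Average per-worker gradient vectors, as a data-parallel allreduce would."""
    n = len(worker_grads)
    dim = len(worker_grads[0])
    return [sum(g[i] for g in worker_grads) / n for i in range(dim)]

# Three hypothetical workers, each with gradients from its own data shard
grads = [[1.0, -0.5], [2.0, -0.5], [3.0, -0.5]]
print(allreduce_average(grads))  # [2.0, -0.5]
```

Because every worker applies the same averaged gradient, the update is mathematically equivalent to a single large batch, which is why scaling can stay near-linear without hurting accuracy.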

Open Access Article
Automatic Detection of Track and Fields in China from High-Resolution Satellite Images Using Multi-Scale-Fused Single Shot MultiBox Detector
Remote Sens. 2019, 11(11), 1377; https://doi.org/10.3390/rs11111377 - 10 Jun 2019
Abstract
Object detection faces various challenges as an important task in remote sensing, especially in large scenes, due to increasing satellite image resolution and the complexity of land covers. Because of the diverse appearance of track and fields, complex backgrounds, and variation between satellite images, even strong deep learning methods have difficulty extracting accurate characteristics of track and fields from large, complex scenes such as the whole of China. Taking track and fields as a case study, we propose a stable and accurate target detection method. First, we add “deconvolution” and “concat” modules to the structure of the original Single Shot MultiBox Detector (SSD), in which Visual Geometry Group 16 (VGG16) serves as the base network, followed by multiple convolution layers. The two modules upsample the high-level feature map and connect it with the low-level feature map, forming a new network structure, multi-scale-fused SSD (MSF_SSD). MSF_SSD enriches the semantic information of low-level features, which is especially effective for small targets in large scenes. In addition, a large number of track and fields across China are collected as samples, and a series of parameters is designed to optimize the MSF_SSD network through deep analysis of sample characteristics. Using the MSF_SSD network, we achieve rapid, automatic detection of meter-level track and fields across the country for the first time. The proposed MSF_SSD model achieves 97.9% mean average precision (mAP) on the validation set, superior to the 88.4% mAP of the original SSD. It also achieves 94.3% accuracy while keeping the recall rate high (98.8%) on the nationally distributed test set, outperforming the original SSD method.
(This article belongs to the Special Issue Analysis of Big Data in Remote Sensing)
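The “deconvolution + concat” fusion described above amounts to upsampling the coarse, semantically rich feature map to the resolution of a shallow map and stacking the two channel-wise. A toy sketch with nested lists (nearest-neighbour upsampling stands in for the learned deconvolution; real SSD feature maps have many channels):

```python
def upsample2x(fm):
    """Nearest-neighbour 2x upsampling of a 2-D feature map
    (a stand-in for the learned 'deconvolution' module)."""
    out = []
    for row in fm:
        wide = [v for v in row for _ in (0, 1)]  # duplicate each column
        out.append(wide)
        out.append(list(wide))                   # duplicate each row
    return out

def concat(a, b):
    """Channel-wise 'concat' of two equally sized maps (stack as 2 channels)."""
    return [a, b]

high = [[5, 6], [7, 8]]                    # coarse, semantically rich map
low = [[1, 1, 1, 1], [2, 2, 2, 2],
       [3, 3, 3, 3], [4, 4, 4, 4]]         # fine, low-level map
fused = concat(upsample2x(high), low)      # low-level map enriched with semantics
print(len(fused), len(fused[0]))           # 2 channels, 4 rows each
```

Detection heads run on the fused map, so small objects keep their fine localization while borrowing high-level semantics.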

Open Access Feature Paper Article
Domain Adversarial Neural Networks for Large-Scale Land Cover Classification
Remote Sens. 2019, 11(10), 1153; https://doi.org/10.3390/rs11101153 - 14 May 2019
Cited by 1
Abstract
Learning classification models requires sufficient labeled training samples; however, collecting labeled samples for every new problem is time-consuming and costly. An alternative is to transfer knowledge from one problem to another, which is called transfer learning. Domain adaptation (DA) is a type of transfer learning that aims to find a new latent space in which the discrepancy between the source and target domains is negligible. In this work, we propose an unsupervised DA technique, domain adversarial neural networks (DANNs), composed of feature extractor, class predictor, and domain classifier blocks, for large-scale land cover classification. Contrary to traditional methods that perform representation and classifier learning in separate stages, DANNs combine them into a single stage, learning a representation of the input data that is both domain-invariant and discriminative. Once trained, the classifier of a DANN can predict both source and target domain labels. We also modify the domain classifier of the DANN to evaluate its suitability for multi-target domain adaptation problems. Experimental results for both single- and multi-target DA problems show that the proposed method provides a performance gain of up to 40%.
(This article belongs to the Special Issue Analysis of Big Data in Remote Sensing)
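The adversarial coupling in a DANN is usually implemented with a gradient reversal layer between the feature extractor and the domain classifier: the identity in the forward pass, but sign-flipped (and scaled) gradients in the backward pass, so the features are pushed to confuse the domain classifier. A minimal framework-free sketch of that layer (real implementations hook into an autograd engine):

```python
class GradientReversal:
    """Identity in the forward pass; scales gradients by -lambda in the
    backward pass, so the feature extractor learns domain-invariant features."""

    def __init__(self, lam=1.0):
        self.lam = lam

    def forward(self, x):
        return x  # features pass through unchanged

    def backward(self, grad):
        # Flip the domain-classifier gradient before it reaches the extractor
        return [-self.lam * g for g in grad]

grl = GradientReversal(lam=0.5)
features = [0.2, -1.3]
print(grl.forward(features))        # [0.2, -1.3] (unchanged)
print(grl.backward([1.0, -2.0]))    # [-0.5, 1.0]
```

The scale `lam` is typically annealed during training so adversarial pressure grows as the class predictor stabilizes.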

Open Access Article
A SAR Dataset of Ship Detection for Deep Learning under Complex Backgrounds
Remote Sens. 2019, 11(7), 765; https://doi.org/10.3390/rs11070765 - 29 Mar 2019
Cited by 4
Abstract
With the launch of spaceborne satellites, more synthetic aperture radar (SAR) images are available than ever before, making dynamic ship monitoring possible. Object detectors in deep learning achieve top performance by benefitting from large free public datasets. Unfortunately, due to the lack of large labeled datasets, object detectors for SAR ship detection have developed slowly. To boost their development, we construct a SAR dataset, labeled by SAR experts, from 102 Chinese Gaofen-3 images and 108 Sentinel-1 images. It consists of 43,819 ship chips of 256 pixels in both range and azimuth, spanning distinct scales and backgrounds. Moreover, modified state-of-the-art object detectors from natural images are trained and can serve as baselines. Experimental results reveal that the object detectors achieve high mean average precision (mAP) on the test dataset and generalize well to new SAR imagery without land-ocean segmentation, demonstrating the benefits of the constructed dataset.
(This article belongs to the Special Issue Analysis of Big Data in Remote Sensing)
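Datasets like this are built by cutting each large SAR scene into fixed-size chips (here 256 x 256 in range and azimuth). A small illustrative sketch of non-overlapping tiling (the paper does not describe its exact chipping procedure, which may overlap tiles or center them on ships):

```python
def chip_corners(height, width, chip=256):
    """Top-left corners of non-overlapping chip x chip tiles
    that fit entirely inside a scene of the given size."""
    return [(r, c)
            for r in range(0, height - chip + 1, chip)
            for c in range(0, width - chip + 1, chip)]

corners = chip_corners(512, 768)
print(len(corners))   # 6 tiles: 2 rows x 3 columns
print(corners[:3])    # [(0, 0), (0, 256), (0, 512)]
```

Each corner then indexes a crop of the scene array, and any ship annotations falling inside the crop become that chip's labels.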

Open Access Article
Description Generation for Remote Sensing Images Using Attribute Attention Mechanism
Remote Sens. 2019, 11(6), 612; https://doi.org/10.3390/rs11060612 - 13 Mar 2019
Cited by 7
Abstract
Image captioning generates a semantic description of an image. It combines image understanding and text mining, and has made great progress in recent years. However, bridging the “semantic gap” between low-level features and high-level semantics in remote sensing images remains a great challenge, despite improved image resolutions. In this paper, we present a new model with an attribute attention mechanism for generating descriptions of remote sensing images, and we explore the impact of attributes extracted from remote sensing images on the attention mechanism. Our experiments demonstrate the validity of the proposed model: it obtains six higher scores and one slightly lower score compared with several state-of-the-art techniques on the Sydney Dataset and the Remote Sensing Image Caption Dataset (RSICD), and all seven higher scores on the UCM Dataset, indicating robust performance for semantic description of high-resolution remote sensing images.
(This article belongs to the Special Issue Analysis of Big Data in Remote Sensing)
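At its core, an attention mechanism over attributes softmax-normalizes relevance scores and forms a weighted sum of attribute representations to condition each generated word. A toy sketch with scalar "embeddings" (the paper's actual scoring network and embedding dimensions are not specified here):

```python
import math

def attribute_attention(scores, attribute_values):
    """Softmax over attribute relevance scores, then a weighted sum of
    attribute representations (scalars here, vectors in practice)."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    weights = [e / total for e in exps]                      # attention weights
    context = sum(w * v for w, v in zip(weights, attribute_values))
    return weights, context

# Two hypothetical attributes with equal relevance scores
weights, context = attribute_attention([2.0, 2.0], [1.0, 3.0])
print(weights)   # [0.5, 0.5] -> equal scores give equal attention
print(context)   # 2.0
```

Raising one attribute's score shifts weight (and the context vector) toward that attribute, which is what lets the decoder "look at" different attributes per word.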

Open Access Article
Utilizing Multilevel Features for Cloud Detection on Satellite Imagery
Remote Sens. 2018, 10(11), 1853; https://doi.org/10.3390/rs10111853 - 21 Nov 2018
Cited by 2
Abstract
Cloud detection, defined here as pixel-wise binary classification, is significant in satellite imagery processing. In the current remote sensing literature, cloud detection methods rely on relationships between imagery bands or on simple image feature analysis. These methods, which focus only on low-level features, are not robust enough on images with difficult land covers, since clouds share image features such as color and texture with those land covers. To solve this problem, we propose a novel deep learning method for cloud detection on satellite imagery that utilizes multilevel image features in two major steps. The first obtains a cloud probability map from a designed deep convolutional neural network that concatenates features from low level to high level. The second refines the cloud masks with a composite image filter that captures multilevel features of cloud structures and their surroundings in the input imagery. In the experiments, the proposed method achieves 85.38% intersection over union for cloud on a test set of 100 Gaofen-1 wide-field-of-view images and produces satisfactory visual cloud masks, especially for hard images. The results show that combining the feature-concatenating network with the particular filter improves the cloud masks.
(This article belongs to the Special Issue Analysis of Big Data in Remote Sensing)
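The 85.38% figure above is intersection over union (IoU), the standard metric for pixel-wise binary masks: overlapping cloud pixels divided by the union of predicted and true cloud pixels. A minimal sketch on flat binary masks:

```python
def iou(pred, truth):
    """Intersection over union of two flat binary cloud masks."""
    inter = sum(1 for p, t in zip(pred, truth) if p and t)
    union = sum(1 for p, t in zip(pred, truth) if p or t)
    return inter / union if union else 1.0  # two empty masks agree perfectly

print(iou([1, 1, 0, 1], [1, 0, 0, 1]))  # 2/3
```

Unlike per-pixel accuracy, IoU is insensitive to the large number of easy non-cloud pixels, so it better reflects mask quality.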

Open Access Article
SAR Automatic Target Recognition Using a Roto-Translational Invariant Wavelet-Scattering Convolution Network
Remote Sens. 2018, 10(4), 501; https://doi.org/10.3390/rs10040501 - 22 Mar 2018
Cited by 3
Abstract
Synthetic aperture radar (SAR) automatic target recognition consists of two stages: feature extraction and classification. The quality of the extracted features has a significant impact on the final classification performance. This paper presents a SAR automatic target classification method based on a wavelet-scattering convolution network. By introducing a deep scattering convolution network with complex wavelet filters over spatial and angular variables, robust feature representations can be extracted across various scales and angles without training data. Conventional dimension reduction and a support vector machine classifier are then applied to complete the classification task. The proposed method is tested on the moving and stationary target acquisition and recognition (MSTAR) benchmark dataset and achieves an average accuracy of 97.63% on ten-class target classification without data augmentation.
(This article belongs to the Special Issue Analysis of Big Data in Remote Sensing)
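A scattering coefficient is built from three steps: a wavelet filter, a modulus nonlinearity, and an averaging (pooling) that yields invariance. The 1-D toy below uses a Haar-like difference as the wavelet and global averaging, which makes the coefficient invariant to circular shifts; the paper's network uses 2-D complex wavelets over scale and rotation, so this is only an illustration of the principle:

```python
def first_order_scattering(x):
    """First-order scattering coefficient with a Haar-like wavelet:
    circular differences (wavelet), modulus (nonlinearity),
    global average (invariant pooling)."""
    n = len(x)
    detail = [abs(x[(i + 1) % n] - x[i]) for i in range(n)]
    return sum(detail) / n

sig = [0.0, 1.0, 3.0, 2.0]
shifted = sig[1:] + sig[:1]             # circularly shifted copy of the signal
print(first_order_scattering(sig))      # 1.5
print(first_order_scattering(shifted))  # 1.5 -> translation invariance
```

Because the filters are fixed wavelets rather than learned weights, such features need no training data, which matches the abstract's claim.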
