Big Remotely Sensed Data

A special issue of Remote Sensing (ISSN 2072-4292). This special issue belongs to the section "Remote Sensing Image Processing".

Deadline for manuscript submissions: closed (28 February 2022) | Viewed by 26007

Special Issue Editors


Guest Editor
Aerospace Information Research Institute, Chinese Academy of Sciences, Beijing 100190, China
Interests: remote sensing image interpretation; artificial intelligence; machine learning; computer vision

Guest Editor
School of Electronic and Information Engineering, Beihang University, Beijing 100191, China
Interests: signal processing; remote sensing image processing; deep learning

Guest Editor
School of Electronic and Information Engineering, Soochow University, Suzhou 215006, China
Interests: machine learning, signal processing, and their applications in remote sensing; radar imaging; SAR interferometry and denoising; scene classification and image retrieval

Guest Editor
Aerospace Information Research Institute, Chinese Academy of Sciences, Beijing 100190, China
Interests: remote sensing image interpretation; artificial intelligence; machine learning; computer vision

Guest Editor
Karlsruhe Institute of Technology (KIT), Institute of Photogrammetry and Remote Sensing (IPF), Englerstr. 7, D-76131 Karlsruhe, Germany
Interests: semantic and statistical scene understanding and monitoring; image-based automatic navigation and 3D reconstruction; physical parameter retrieval from multi- and hyperspectral remote sensing; radar and SAR processing for object motion analysis; GI methods in augmented reality

Special Issue Information

Dear Colleagues,

With the rapid development of aerospace science and technology, the coverage, spatial resolution, and acquisition frequency of remote sensing Earth observation systems have improved significantly, giving the public an ever more convenient way to understand the world. At the same time, remotely sensed data increasingly exhibit the 4-V characteristics of big data (volume, variety, velocity, and veracity), accelerating the entry of remote sensing into the big data era and posing enormous research challenges in processing, modeling, and analyzing such massive data.

In recent years, the great success of deep learning in computer vision has created an important opportunity for intelligent information extraction from remotely sensed big data. Many scholars have introduced deep learning methods developed for natural images into remote sensing, where they significantly outperform traditional methods, especially in applications such as image classification and object detection. Even so, the distinctive characteristics of remotely sensed data, such as large spatial scale, complex spectral properties, and cluttered backgrounds, make it difficult for existing deep learning methods to improve performance further. Constructing models, methods, and system tools suited to remote sensing, grounded in an understanding of these characteristics, is therefore essential for the effective use of remotely sensed big data.

This Special Issue will report cutting-edge models, methods, and system tools tailored to specific tasks involving remotely sensed big data, with the aim of making its interpretation more accurate, autonomous, and cost-effective.

The Special Issue invites authors to submit contributions on (but not limited to) the following topics:

  • Deep neural network design incorporating characteristics of remotely sensed big data, including dedicated network design for remote sensing data, lightweight model design, neural architecture search, etc.
  • Advanced models, methods, and system tools for typical applications of remotely sensed big data, including rapid detection and recognition of typical objects in large-scale areas, fine-grained classification and semantic segmentation of remote sensing imagery, etc.
  • Multi-modal learning for semantic analysis of remotely sensed big data, including remote sensing image acquisition, multi-modal data classification and retrieval, etc.
  • Fusion and analysis of multi-source data, including optical imagery, synthetic aperture radar (SAR) data, LiDAR data, etc.
  • Other related artificial intelligence techniques for remotely sensed big data, including continual learning/life-long learning, meta-learning, transfer learning, etc.

Prof. Dr. Xian Sun
Dr. Martin Weinmann
Prof. Dr. Wei Yang
Prof. Dr. Jian Kang
Dr. Wenhui Diao
Prof. Dr. Stefan Hinz
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Remote Sensing is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2700 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • big remotely sensed data
  • deep neural network design
  • object detection and recognition
  • semantic segmentation
  • multi-modal learning
  • change detection
  • multi-task learning
  • machine learning
  • computer vision

Published Papers (8 papers)


Research


22 pages, 7742 KiB  
Article
Mapping Sugarcane in Central India with Smartphone Crowdsourcing
by Ju Young Lee, Sherrie Wang, Anjuli Jain Figueroa, Rob Strey, David B. Lobell, Rosamond L. Naylor and Steven M. Gorelick
Remote Sens. 2022, 14(3), 703; https://doi.org/10.3390/rs14030703 - 02 Feb 2022
Cited by 8 | Viewed by 4379
Abstract
In India, the second-largest sugarcane producing country in the world, accurate mapping of sugarcane land is a key to designing targeted agricultural policies. Such a map is not available, however, as it is challenging to reliably identify sugarcane areas using remote sensing due to sugarcane’s phenological characteristics, coupled with a range of cultivation periods for different varieties. To produce a modern sugarcane map for the Bhima Basin in central India, we utilized crowdsourced data and applied supervised machine learning (neural network) and unsupervised classification methods individually and in combination. We highlight four points. First, smartphone crowdsourced data can be used as an alternative ground truth for sugarcane mapping but requires careful correction of potential errors. Second, although the supervised machine learning method performs best for sugarcane mapping, the combined use of both classification methods improves sugarcane mapping precision at the cost of worsening sugarcane recall and missing some actual sugarcane area. Third, machine learning image classification using high-resolution satellite imagery showed significant potential for sugarcane mapping. Fourth, our best estimate of the sugarcane area in the Bhima Basin is twice that shown in government statistics. This study provides useful insights into sugarcane mapping that can improve the approaches taken in other regions.
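The precision/recall trade-off noted in the second point arises whenever the positive predictions of two classifiers are intersected: agreement removes false positives but also drops some true positives. A minimal sketch with synthetic masks (not the paper's data or pipeline):

```python
import numpy as np

def precision_recall(pred, truth):
    """Precision and recall for binary masks."""
    tp = np.sum(pred & truth)
    return tp / max(np.sum(pred), 1), tp / max(np.sum(truth), 1)

rng = np.random.default_rng(0)
truth = rng.random((100, 100)) < 0.3                    # ground-truth mask
supervised = truth ^ (rng.random((100, 100)) < 0.10)    # noisy classifier A
unsupervised = truth ^ (rng.random((100, 100)) < 0.15)  # noisier classifier B

# Keeping only pixels both methods agree on raises precision
# (a false positive must be shared by both) but lowers recall.
combined = supervised & unsupervised
for name, pred in [("supervised", supervised),
                   ("unsupervised", unsupervised),
                   ("combined", combined)]:
    p, r = precision_recall(pred, truth)
    print(f"{name:12s} precision={p:.3f} recall={r:.3f}")
```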

17 pages, 14136 KiB  
Article
Effects of Assimilating Clear-Sky FY-3D MWHS2 Radiance on the Numerical Simulation of Tropical Storm Ampil
by Dongmei Xu, Aiqing Shu, Hong Li, Feifei Shen, Qiang Li and Hang Su
Remote Sens. 2021, 13(15), 2873; https://doi.org/10.3390/rs13152873 - 22 Jul 2021
Cited by 6 | Viewed by 1893
Abstract
Radiance data from the new advanced microwave humidity sounder FY-3D MWHS2 have been assimilated under clear-sky conditions by implementing a data assimilation interface for the instrument. The case of tropical storm Ampil in 2018 is selected to assess the effectiveness of the newly built module in typhoon initialization and forecasting. Apart from the experiment assimilating both Global Telecommunications System (GTS) data and FY-3D MWHS2 radiance data, an experiment with only GTS data is conducted for comparison. The results show that the bias correction for this humidity sounder is effective, and the analysis field after assimilating its radiance data matches the observations well. The increment of specific humidity below the middle layers is evident after assimilation of the radiance data. In addition, the geopotential height and specific humidity increments at 500 hPa and 850 hPa, respectively, are favorable, resulting in a more accurate rain-belt distribution and a higher fraction skill score (FSS). In the deterministic forecast, the track error of the FY-3D MWHS2 experiment remains consistently below 90 km.
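The fraction skill score used above compares neighbourhood rain fractions rather than exact pixel matches, so near-misses are rewarded. A compact sketch of the standard formulation (Roberts and Lean, 2008); the field names and window size are illustrative:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def fss(forecast, observed, threshold, window):
    """Fraction skill score: 1 = perfect match, 0 = no skill.

    forecast, observed: 2-D precipitation fields
    threshold: rain rate defining an 'event'
    window: neighbourhood size in grid points
    """
    # Binary event masks -> neighbourhood event fractions
    f = uniform_filter((forecast >= threshold).astype(float), size=window)
    o = uniform_filter((observed >= threshold).astype(float), size=window)
    mse = np.mean((f - o) ** 2)
    mse_ref = np.mean(f ** 2) + np.mean(o ** 2)
    return 1.0 - mse / mse_ref if mse_ref > 0 else np.nan
```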

23 pages, 13339 KiB  
Article
Multiscale Semantic Feature Optimization and Fusion Network for Building Extraction Using High-Resolution Aerial Images and LiDAR Data
by Qinglie Yuan, Helmi Zulhaidi Mohd Shafri, Aidi Hizami Alias and Shaiful Jahari bin Hashim
Remote Sens. 2021, 13(13), 2473; https://doi.org/10.3390/rs13132473 - 24 Jun 2021
Cited by 12 | Viewed by 2317
Abstract
Automatic building extraction has been applied in many domains, yet it remains challenging because of complex scenes and the multiscale nature of buildings. Deep learning algorithms, especially fully convolutional networks (FCNs), have shown more robust feature extraction ability than traditional remote sensing data processing methods. However, hierarchical features from encoders with a fixed receptive field are weak at capturing global semantic information. Local features in multiscale subregions cannot establish contextual interdependence and correlation, especially for large building areas, which can cause fragmentary extraction results due to intra-class feature variability. In addition, low-level features carry accurate, fine-grained spatial information for tiny building structures but lack refinement and selection, and the semantic gap across feature levels hinders feature fusion. To address these problems, this paper proposes an FCN framework based on the residual network and provides a training pattern for multi-modal data that combines the advantages of high-resolution aerial images and LiDAR data for building extraction. Two novel modules are proposed for the optimization and integration of multiscale and cross-level features. In particular, a multiscale context optimization module is designed to adaptively generate feature representations for different subregions and effectively aggregate global context. A semantic-guided spatial attention mechanism is introduced to refine shallow features and alleviate the semantic gap. Finally, hierarchical features are fused via a feature pyramid network. Compared with other state-of-the-art methods, experimental results demonstrate superior performance, with 93.19 IoU and 97.56 OA on the WHU dataset and 94.72 IoU and 97.84 OA on the Boston dataset, showing that the proposed network improves accuracy and achieves better performance for building extraction.
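As one plausible reading of the semantic-guided spatial attention described above, deep semantic features can predict a spatial mask that reweights the shallow features. The PyTorch sketch below uses assumed layer choices, not the authors' exact design:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SemanticGuidedSpatialAttention(nn.Module):
    """Refine shallow features with a mask predicted from deep features
    (an assumed design, for illustration only)."""

    def __init__(self, low_ch, high_ch):
        super().__init__()
        self.mask = nn.Sequential(
            nn.Conv2d(high_ch, low_ch, kernel_size=1),
            nn.Sigmoid(),                  # per-pixel weights in (0, 1)
        )

    def forward(self, low, high):
        # Upsample the semantic mask to the shallow feature resolution
        attn = F.interpolate(self.mask(high), size=low.shape[2:],
                             mode="bilinear", align_corners=False)
        return low + low * attn            # residual keeps original detail

low = torch.randn(1, 64, 128, 128)         # fine-grained shallow features
high = torch.randn(1, 256, 16, 16)         # coarse semantic features
print(SemanticGuidedSpatialAttention(64, 256)(low, high).shape)
```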

15 pages, 2193 KiB  
Article
Dynamic Pseudo-Label Generation for Weakly Supervised Object Detection in Remote Sensing Images
by Hui Wang, Hao Li, Wanli Qian, Wenhui Diao, Liangjin Zhao, Jinghua Zhang and Daobing Zhang
Remote Sens. 2021, 13(8), 1461; https://doi.org/10.3390/rs13081461 - 10 Apr 2021
Cited by 20 | Viewed by 3154
Abstract
In recent years, fully supervised object detection methods with good performance have been developed for remote sensing images. However, this approach requires a large number of instance-level annotated samples that are relatively expensive to acquire. Therefore, weakly supervised learning using only image-level annotations has attracted much attention. Most weakly supervised object detection methods are based on multiple-instance learning, and their performance depends on the process of scoring candidate region proposals during training. In this process, supervision from image-level labels alone usually cannot produce optimal results due to the lack of object location information. To address this problem, a dynamic pseudo-label generation framework is proposed to generate pseudo-labels for each proposal without additional annotations. First, we propose a pseudo-label generation algorithm (PLG) that generates category labels for proposals using the localization information of the object. Specifically, the pixel average of the object’s localization map within the proposal serves as the proposal category confidence, and the pseudo-label is calculated by comparing this confidence with a preset threshold. In addition, an effective adaptive threshold selection strategy is designed to eliminate the effect of shape differences between categories when computing sample pseudo-labels. Comparative experiments on the NWPU VHR-10 dataset demonstrate that our method significantly improves detection performance compared to existing methods.
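The PLG scoring rule described above (mean localization-map activation inside a proposal, thresholded into a label) is straightforward to sketch; the fixed threshold here stands in for the paper's adaptive selection strategy:

```python
import numpy as np

def proposal_pseudo_labels(loc_map, proposals, threshold=0.5):
    """Pseudo-label proposals from a single-category localization map.

    loc_map: (H, W) activation map, values in [0, 1]
    proposals: iterable of (x1, y1, x2, y2) pixel boxes
    threshold: fixed cut-off (the paper selects it adaptively per category)
    """
    labels = []
    for x1, y1, x2, y2 in proposals:
        conf = loc_map[y1:y2, x1:x2].mean()  # proposal category confidence
        labels.append(1 if conf >= threshold else 0)
    return labels

toy_map = np.zeros((100, 100))
toy_map[20:60, 20:60] = 0.9                  # toy object activation
print(proposal_pseudo_labels(toy_map, [(25, 25, 55, 55), (70, 70, 95, 95)]))
```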

15 pages, 10522 KiB  
Article
PCAN—Part-Based Context Attention Network for Thermal Power Plant Detection in Remote Sensing Imagery
by Wenxin Yin, Wenhui Diao, Peijin Wang, Xin Gao, Ya Li and Xian Sun
Remote Sens. 2021, 13(7), 1243; https://doi.org/10.3390/rs13071243 - 24 Mar 2021
Cited by 9 | Viewed by 4423
Abstract
The detection of thermal power plants (TPPs) is a meaningful task for remote sensing image interpretation. It is also a challenging one because, as facility objects, TPPs are composed of various distinctive and irregular components. In this paper, we propose a novel end-to-end detection framework for TPPs based on deep convolutional neural networks. Specifically, building on the one-stage RetinaNet detector, a context attention multi-scale feature extraction network is proposed that fuses global spatial attention to strengthen the representation of irregular objects. In addition, we design a part-based attention module adapted to TPPs containing distinctive components. Experiments show that the proposed method outperforms state-of-the-art methods, achieving 68.15% mean average precision.
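Global spatial attention in this setting usually means a non-local-style block in which every location attends to every other. The PyTorch sketch below is a generic stand-in, not the paper's exact module:

```python
import torch
import torch.nn as nn

class GlobalSpatialAttention(nn.Module):
    """Non-local-style attention over all spatial positions
    (generic illustration, not the paper's block)."""

    def __init__(self, ch, reduced=64):
        super().__init__()
        self.q = nn.Conv2d(ch, reduced, 1)
        self.k = nn.Conv2d(ch, reduced, 1)
        self.v = nn.Conv2d(ch, ch, 1)

    def forward(self, x):
        b, c, h, w = x.shape
        q = self.q(x).flatten(2).transpose(1, 2)           # (b, hw, r)
        k = self.k(x).flatten(2)                           # (b, r, hw)
        v = self.v(x).flatten(2).transpose(1, 2)           # (b, hw, c)
        attn = torch.softmax(q @ k / k.shape[1] ** 0.5, dim=-1)
        out = (attn @ v).transpose(1, 2).reshape(b, c, h, w)
        return x + out                                     # residual connection

print(GlobalSpatialAttention(256)(torch.randn(1, 256, 32, 32)).shape)
```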

18 pages, 3907 KiB  
Article
C3Net: Cross-Modal Feature Recalibrated, Cross-Scale Semantic Aggregated and Compact Network for Semantic Segmentation of Multi-Modal High-Resolution Aerial Images
by Zhiying Cao, Wenhui Diao, Xian Sun, Xiaode Lyu, Menglong Yan and Kun Fu
Remote Sens. 2021, 13(3), 528; https://doi.org/10.3390/rs13030528 - 02 Feb 2021
Cited by 17 | Viewed by 3570
Abstract
Semantic segmentation of multi-modal remote sensing images is an important branch of remote sensing image interpretation. Multi-modal data have been proven to provide rich complementary information for dealing with complex scenes. In recent years, semantic segmentation based on deep learning has made remarkable achievements. It is common to simply concatenate multi-modal data or to use parallel branches to extract multi-modal features separately. However, most existing works ignore the effects of noise and redundant features from different modalities, which may not lead to satisfactory results. On the one hand, existing networks do not learn the complementary information of different modalities or suppress their mutual interference, which may decrease segmentation accuracy. On the other hand, the introduction of multi-modal data greatly increases the running time of pixel-level dense prediction. In this work, we propose an efficient network, C3Net, that strikes a balance between speed and accuracy. More specifically, C3Net contains several backbones for extracting features of different modalities. A plug-and-play module is then designed to effectively recalibrate and aggregate multi-modal features. To reduce the number of model parameters while retaining model performance, we redesign the semantic context extraction module based on lightweight convolutional groups. In addition, a multi-level knowledge distillation strategy is proposed to improve the performance of the compact model. Experiments on the ISPRS Vaihingen dataset demonstrate the superior performance of C3Net, with 15× fewer FLOPs than the state-of-the-art baseline network while providing comparable overall accuracy.
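A generic form of the multi-level knowledge distillation mentioned above matches intermediate features and temperature-softened outputs between teacher and student; the equal loss weighting here is an assumption:

```python
import torch.nn.functional as F

def multilevel_distill_loss(student_feats, teacher_feats,
                            student_logits, teacher_logits, T=4.0):
    """Feature-level L2 plus output-level KL distillation.

    Assumes each student feature map already matches the shape of the
    corresponding teacher map (e.g., via 1x1 adaptation convolutions).
    """
    # Match intermediate representations at every chosen level
    feat_loss = sum(F.mse_loss(s, t.detach())
                    for s, t in zip(student_feats, teacher_feats))
    # Match temperature-softened output distributions
    kd_loss = F.kl_div(F.log_softmax(student_logits / T, dim=1),
                       F.softmax(teacher_logits / T, dim=1),
                       reduction="batchmean") * T * T
    return feat_loss + kd_loss
```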

22 pages, 11378 KiB  
Article
A Novel Unsupervised Classification Method for Sandy Land Using Fully Polarimetric SAR Data
by Weixian Tan, Borong Sun, Chenyu Xiao, Pingping Huang, Wei Xu and Wen Yang
Remote Sens. 2021, 13(3), 355; https://doi.org/10.3390/rs13030355 - 21 Jan 2021
Cited by 10 | Viewed by 2212
Abstract
Classification based on polarimetric synthetic aperture radar (PolSAR) images is an emerging technology, and recent years have seen the introduction of various classification methods proven effective at identifying typical features of many terrain types. Among study regions, the Hunshandake Sandy Land in Inner Mongolia, China stands out for its vast sandy area, variety of ground objects, and intricate structure, with more irregular characteristics than conventional land cover. Accounting for the particular surface features of the Hunshandake Sandy Land, this study proposes an unsupervised classification method based on a new decomposition and large-scale spectral clustering with superpixels (ND-LSC). First, polarization scattering parameters are extracted through the new decomposition rather than other decomposition approaches, yielding more accurate feature vector estimates. Second, large-scale spectral clustering is applied to handle the vast area and complex terrain. More specifically, superpixels are first generated via the Adaptive Simple Linear Iterative Clustering (ASLIC) algorithm, with feature vectors combined with spatial coordinate information as input; representative points are then selected and a bipartite graph is formed, after which the spectral clustering algorithm completes the classification task. Finally, testing and analysis are conducted on a RADARSAT-2 fully polarimetric SAR dataset acquired over the Hunshandake Sandy Land in 2016. Both qualitative and quantitative comparisons with several classification methods show that the proposed method significantly improves classification performance.
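The representative-point idea behind large-scale spectral clustering can be sketched briefly: cluster only a small set of representatives, then propagate labels to all pixels. This simplified version (k-means centroids as representatives, nearest-centroid propagation via scikit-learn) stands in for the paper's bipartite-graph construction:

```python
import numpy as np
from sklearn.cluster import KMeans, SpectralClustering

def large_scale_spectral(features, n_classes, n_reps=500, seed=0):
    """Cluster many pixels via a small representative set.

    features: (n_pixels, n_dims) array, e.g., polarimetric parameters
              concatenated with spatial coordinates
    """
    # Step 1: representatives = k-means centroids of the full point set
    km = KMeans(n_clusters=n_reps, n_init=3, random_state=seed).fit(features)
    # Step 2: spectral clustering only on the representatives
    rep_labels = SpectralClustering(n_clusters=n_classes,
                                    random_state=seed).fit_predict(
        km.cluster_centers_)
    # Step 3: every pixel inherits its nearest representative's label
    return rep_labels[km.labels_]
```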

Other


17 pages, 16855 KiB  
Technical Note
Self-Supervised Monocular Depth Learning in Low-Texture Areas
by Wanpeng Xu, Ling Zou, Lingda Wu and Zhipeng Fu
Remote Sens. 2021, 13(9), 1673; https://doi.org/10.3390/rs13091673 - 26 Apr 2021
Viewed by 2386
Abstract
For the task of monocular depth estimation, self-supervised learning supervises training by calculating the pixel difference between the target image and the warped reference image, obtaining results comparable to those of full supervision. However, problematic pixels in low-texture regions are ignored, since most researchers assume that no pixels violate the camera-motion assumption when stereo pairs are taken as input in self-supervised learning, which leads to an optimization problem in these regions. To tackle this problem, we instead compute the photometric loss on the lowest-level feature maps and apply first- and second-order smoothing to the depth, ensuring consistent gradients during optimization. Given the shortcomings of ResNet as a backbone, we propose a new depth estimation network architecture to improve edge localization accuracy and obtain clear outline information even at smoothed low-texture boundaries. To acquire more stable and reliable quantitative evaluation results, we introduce a virtual dataset into the self-supervised task because it provides dense, pixel-aligned ground-truth depth maps. Taking stereo pairs as input, we achieve performance exceeding that of prior methods on both the Eigen split of KITTI and the VKITTI2 dataset.
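The first- and second-order depth smoothing mentioned above is commonly implemented as edge-aware gradient penalties on the disparity map; a sketch under that assumption (the paper's exact terms and weights may differ):

```python
import torch

def depth_smoothness(disp, img):
    """Edge-aware first- and second-order smoothness.

    disp: (B, 1, H, W) predicted disparity; img: (B, 3, H, W) input image
    """
    def gx(t): return t[..., :, :-1] - t[..., :, 1:]
    def gy(t): return t[..., :-1, :] - t[..., 1:, :]

    # Down-weight the penalty across image edges (large intensity gradients)
    wx = torch.exp(-gx(img).abs().mean(1, keepdim=True))
    wy = torch.exp(-gy(img).abs().mean(1, keepdim=True))

    dx, dy = gx(disp), gy(disp)
    first = (dx.abs() * wx).mean() + (dy.abs() * wy).mean()
    second = (gx(dx).abs() * wx[..., :, 1:]).mean() \
           + (gy(dy).abs() * wy[..., 1:, :]).mean()
    return first + second
```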
