Special Issue "Knowledge Graph-Guided Deep Learning for Remote Sensing Image Understanding"

A special issue of Remote Sensing (ISSN 2072-4292). This special issue belongs to the section "AI Remote Sensing".

Deadline for manuscript submissions: 31 December 2021.

Special Issue Editors

Dr. Yansheng Li
Guest Editor
School of Remote Sensing and Information Engineering, Wuhan University, Wuhan 430079, China
Interests: knowledge discovery from remote sensing big data; deep learning; knowledge graph
Prof. Dr. Yongjun Zhang
Guest Editor
School of Remote Sensing and Information Engineering, Wuhan University, Wuhan 430079, China
Interests: digital photogrammetry and remote sensing; computer vision; geometric processing of aerial and space optical imagery; multi-source spatial data integration; integrated sensor calibration and orientation; low-altitude UAV photogrammetry; combined bundle block adjustment of multi-source datasets; LiDAR and image integration; digital city modeling; visual inspection of industrial parts; intelligent extraction of remote sensing information and knowledge modeling
Dr. Chen Chen
Guest Editor
Department of Electrical & Computer Engineering, University of North Carolina at Charlotte, Charlotte, NC 28223, USA
Interests: compressed sensing; signal and image processing; pattern recognition; computer vision; hyperspectral image analysis

Special Issue Information

Dear Colleagues,

As one of the most significant achievements in the artificial intelligence (AI) domain, deep learning has achieved tremendous success in remote sensing image understanding, including scene classification, semantic segmentation, and object detection. Deep learning is a classic data-driven technique that can often be trained in an end-to-end manner. Its inherent disadvantage, however, is that it is extremely difficult for deep learning to leverage prior domain knowledge, that is, the common-sense knowledge about remote sensing image interpretation that naturally generalizes across scenes. Despite the aforementioned success, deep learning-based methods remain susceptible to noise interference and lack basic but vital cognition and inference abilities. Consequently, deep learning still cannot fully meet the high-reliability demands of remote sensing image interpretation.

As another research hotspot in the field of AI, knowledge graphs explicitly represent domain concepts and their relationships as collections of (head, relation, tail) triples, which gives them strong knowledge representation and semantic reasoning capabilities. To fully reflect the characteristics of the remote sensing domain, how to collaboratively construct a remote sensing knowledge graph with the aid of domain experts and data-driven methods deserves much more exploration. Built on such a knowledge graph, semantic reasoning becomes a promising way to fully leverage the rich semantic information in remote sensing images. Therefore, combining knowledge-driven knowledge graph reasoning with data-driven deep learning is a promising research avenue toward intelligent remote sensing image interpretation. The joint technique assimilates the complementary advantages of the two: it makes full use of the low- and mid-level information mining ability of deep learning while exerting the high-level semantic reasoning ability of knowledge graph reasoning.
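The triple-based representation mentioned above can be sketched in a few lines of Python. The relations and class names below are hypothetical illustrations, not part of any published remote sensing ontology:

```python
# Minimal sketch: encode domain knowledge as (head, relation, tail)
# triples and answer a simple query over them. All entities and
# relations here are invented for illustration.
from collections import defaultdict

triples = [
    ("airport", "contains", "runway"),
    ("runway", "adjacent_to", "taxiway"),
    ("harbor", "contains", "ship"),
    ("airport", "is_a", "transport_facility"),
    ("harbor", "is_a", "transport_facility"),
]

# Index triples by relation for quick lookup.
by_relation = defaultdict(list)
for h, r, t in triples:
    by_relation[r].append((h, t))

def siblings_via(relation, entity):
    """Entities sharing a tail with `entity` under `relation`."""
    tails = {t for h, t in by_relation[relation] if h == entity}
    return sorted({h for h, t in by_relation[relation]
                   if t in tails and h != entity})

# "airport" and "harbor" share the parent "transport_facility", so a
# scene classifier confusing the two could be flagged by this query.
print(siblings_via("is_a", "airport"))  # ['harbor']
```

Even this toy lookup hints at the reasoning a real remote sensing knowledge graph could contribute, e.g. constraining or post-checking a classifier's predictions.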

This Special Issue calls for innovative remote sensing image understanding theory and methods by combining deep learning and knowledge graph reasoning. The topics include but are not limited to the following:

  • Interpretive deep learning methods for remote sensing image understanding, including scene classification, semantic segmentation, and object detection;
  • Domain knowledge modeling and reasoning for remote sensing image understanding;
  • Data-driven remote sensing scene graph generation methods;
  • Crowd-sourced remote sensing knowledge graph construction methods;
  • High-reliability remote sensing image understanding methods by combining deep learning and knowledge reasoning;
  • Representation learning of remote sensing knowledge graphs for advanced remote sensing image understanding tasks: fine-grained image classification, few-shot image classification, zero-shot image classification, and so forth;
  • Language-level remote sensing image understanding: image caption, visual question answering, cross-modal retrieval between text and images, and so forth;
  • Large-scale and long-term remote sensing image understanding methods based on knowledge-guided deep learning.

Dr. Yansheng Li
Prof. Yongjun Zhang
Dr. Chen Chen
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All papers will be peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Remote Sensing is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • high-reliability remote sensing image understanding
  • knowledge graph construction and reasoning
  • interpretive deep learning
  • knowledge graph-guided deep learning

Published Papers (3 papers)


Research

Article
Subtask Attention Based Object Detection in Remote Sensing Images
Remote Sens. 2021, 13(10), 1925; https://doi.org/10.3390/rs13101925 - 14 May 2021
Abstract
Object detection in remote sensing images (RSIs) is one of the basic tasks in automatic remote sensing image interpretation. In recent years, deep object detection frameworks developed for natural scene images (NSIs) have been introduced to RSIs, and detection performance has improved significantly thanks to their powerful feature representations. However, many challenges remain owing to the particularities of remote sensing objects. One of the main challenges is the missed detection of small objects, which occupy fewer than five percent of the pixels of large objects. Existing algorithms generally address this problem with multi-scale feature fusion based on a feature pyramid, but the benefit of this strategy is limited because the locations of small objects vanish from the feature map by the time the detection task is processed at the end of the network. In this study, we propose a subtask attention network (StAN) that handles the detection task directly on the shallow layers of the network. First, StAN contains one shared feature branch and two subtask attention branches, a semantic auxiliary subtask and a detection subtask, based on the multi-task attention network (MTAN). Second, the detection branch uses only low-level features, in consideration of small objects. Third, an attention map guidance mechanism is introduced to optimize the network while preserving its identification ability. Fourth, a multi-dimensional sampling module (MdS), global multi-view channel weights (GMulW), and target-guided pixel attention (TPA) are designed to further improve detection accuracy in complex scenes. Experimental results on the NWPU VHR-10 and DOTA datasets demonstrate that the proposed algorithm achieves state-of-the-art performance and reduces the missed detection of small objects. Ablation experiments further verify the effects of MdS, GMulW, and TPA.
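As a rough illustration of the attention-guidance idea in this abstract (not the authors' implementation), the NumPy sketch below has an auxiliary semantic branch produce a spatial attention map that re-weights shallow detection features so small objects are not washed out; all shapes and function names are invented for the example.

```python
# Toy sketch of attention-guided shallow features for detection.
import numpy as np

rng = np.random.default_rng(0)

# Shared shallow feature map: (channels, height, width).
shared = rng.standard_normal((8, 16, 16))

def semantic_attention(features):
    """Auxiliary branch: collapse channels, squash to an (H, W) map in (0, 1)."""
    logits = features.mean(axis=0)
    return 1.0 / (1.0 + np.exp(-logits))      # sigmoid

def guided_detection_features(features):
    """Detection-branch input: shallow features re-weighted by attention."""
    attn = semantic_attention(features)       # (H, W)
    return features * attn[None, :, :]        # broadcast over channels

out = guided_detection_features(shared)
print(out.shape)  # (8, 16, 16)
```

The key point the sketch mirrors is that the re-weighting happens on shallow, high-resolution features, where small-object locations still exist.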

Article
Hyperspectral Image Classification across Different Datasets: A Generalization to Unseen Categories
Remote Sens. 2021, 13(9), 1672; https://doi.org/10.3390/rs13091672 - 26 Apr 2021
Abstract
With the rapid development of hyperspectral imaging, the cost of collecting hyperspectral data has fallen, while the demand for reliable and detailed hyperspectral annotations has grown substantially. However, limited by the difficulty of labelling, most existing hyperspectral image (HSI) classification methods are trained and evaluated on a single hyperspectral data cube. This brings two significant challenges. On the one hand, many algorithms reach nearly perfect classification accuracy, but their trained models are hard to generalize to other datasets. On the other hand, since different hyperspectral datasets are usually not collected in the same scene, they contain different classes. To address these issues, we propose a new paradigm for HSI classification in which training and evaluation are performed on different hyperspectral datasets. This is of great help for labelling hyperspectral data, yet it has rarely been studied in the hyperspectral community. In this work, we use a three-phase scheme comprising feature embedding, feature mapping, and label reasoning. More specifically, we select a pair of datasets acquired by the same hyperspectral sensor; the classifier learns from one dataset and is then evaluated on the other. Inspired by the latest advances in zero-shot learning, we introduce label semantic representations to establish associations between seen categories in the training set and unseen categories in the testing set. Extensive experiments on two pairs of datasets with different comparative methods show the effectiveness and potential of zero-shot learning in HSI classification.
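The zero-shot mechanism this abstract describes can be caricatured in a few lines: visual features are mapped into a label-embedding space, and an unseen class is predicted by nearest label embedding. The sketch below uses random vectors as stand-ins for real semantic embeddings and a random matrix for the learned mapping; it shows the wiring, not the authors' method.

```python
# Hedged sketch of zero-shot classification via label embeddings.
import numpy as np

rng = np.random.default_rng(1)

# Semantic embeddings for two seen classes and one unseen class.
# (Placeholders; real systems would use word embeddings or attributes.)
label_emb = {
    "grass":   rng.standard_normal(16),
    "asphalt": rng.standard_normal(16),
    "water":   rng.standard_normal(16),   # unseen at training time
}

# A pretend learned linear map from 32-d spectral features to the
# 16-d semantic space; in practice this is fit on the seen classes.
W = rng.standard_normal((16, 32))

def predict(spectrum, candidates):
    """Embed a pixel spectrum and pick the closest label embedding."""
    z = W @ spectrum
    def cos(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    return max(candidates, key=lambda c: cos(z, label_emb[c]))

pixel = rng.standard_normal(32)
pred = predict(pixel, ["grass", "asphalt", "water"])
```

Because "water" has an embedding even though no training pixel carried that label, the classifier can in principle assign it, which is the point of the label-reasoning phase.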

Article
Combining Deep Semantic Segmentation Network and Graph Convolutional Neural Network for Semantic Segmentation of Remote Sensing Imagery
Remote Sens. 2021, 13(1), 119; https://doi.org/10.3390/rs13010119 - 31 Dec 2020
Cited by 2
Abstract
Although deep semantic segmentation networks (DSSNs) have been widely used for remote sensing (RS) image semantic segmentation, they still fail to fully exploit the spatial-relationship cues between objects when extracting deep visual features through convolutional filters and pooling layers. In fact, the spatial distribution of objects from different classes is strongly correlated; for example, buildings tend to be close to roads. In view of the strong appearance extraction ability of DSSNs and the powerful topological relationship modeling capability of the graph convolutional neural network (GCN), a DSSN-GCN framework that combines the advantages of both is proposed in this paper for RS image semantic segmentation. To strengthen appearance extraction, this paper proposes a new DSSN called the attention residual U-shaped network (AttResUNet), which leverages residual blocks to encode feature maps and an attention module to refine the features. For the GCN, a graph is built whose nodes are superpixels and whose edge weights are calculated from the spectral and spatial information of the nodes. The AttResUNet is trained to extract high-level features that initialize the graph nodes; the GCN then combines node features and spatial relationships between nodes to perform classification. Notably, the use of spatial-relationship knowledge boosts the performance and robustness of the classification module. In addition, because the GCN is modeled at the superpixel level, object boundaries are restored to a certain extent and there is less pixel-level noise in the final classification result. Extensive experiments on two publicly available datasets show that the DSSN-GCN model outperforms the competitive baseline (i.e., the DSSN model), and DSSN-GCN with AttResUNet achieves the best performance, demonstrating the advantage of our method.
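The graph side of such a framework reduces, at its core, to propagating node features over a superpixel adjacency graph. The sketch below runs one standard symmetric-normalized graph-convolution step, D^{-1/2}(A+I)D^{-1/2}XW, on a toy 4-node graph; the adjacency, feature sizes, and weights are all invented for illustration and do not reproduce the paper's model.

```python
# Minimal one-layer GCN propagation over a toy superpixel graph.
import numpy as np

rng = np.random.default_rng(2)

# Adjacency of 4 superpixels (e.g. a building node adjacent to a road node).
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)

X = rng.standard_normal((4, 8))   # per-node features (e.g. pooled DSSN features)
W = rng.standard_normal((8, 3))   # learnable weights, 3 output classes

def gcn_layer(A, X, W):
    """Symmetric-normalized graph convolution: D^-1/2 (A+I) D^-1/2 X W."""
    A_hat = A + np.eye(A.shape[0])            # add self-loops
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(d ** -0.5)
    return D_inv_sqrt @ A_hat @ D_inv_sqrt @ X @ W

logits = gcn_layer(A, X, W)       # shape (4, 3): class scores per superpixel
```

Each node's output mixes its own features with its neighbors', which is precisely how spatial-relationship cues (buildings near roads) can influence the per-superpixel classification.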
