Special Issue "Image Retrieval in Remote Sensing"

A special issue of Remote Sensing (ISSN 2072-4292). This special issue belongs to the section "Remote Sensing Image Processing".

Deadline for manuscript submissions: closed (31 December 2018).

Special Issue Editors

Prof. Dr. Sébastien Lefèvre
Guest Editor
IRISA, Université Bretagne Sud, Campus de Tohannic, 56017 Vannes, France
Interests: remote sensing; image analysis; image processing; computer vision; machine learning; pattern recognition
Dr. Alexandre Benoit
Guest Editor
LISTIC, Polytech Annecy-Chambéry, 5 chemin de Bellevue, 74940 Annecy-le-Vieux, France
Interests: image and video analysis; computer vision; remote sensing; machine learning; pattern recognition
Dr. Erchan Aptoula
Guest Editor
Institute of Information Technologies, Gebze Technical University, Cayirova Campus, 41400 Kocaeli, Turkey
Interests: image analysis; mathematical morphology; content based image retrieval; hyperspectral imaging; deep learning

Special Issue Information

Dear Colleagues,

The continuous proliferation of Earth Observation satellites, along with their ever-increasing acquisition capabilities, has led to the formation of geospatial data warehouses that are rapidly growing in both size and complexity, some of which are publicly available (e.g., Landsat, Sentinels). The analysis and exploration of such massive amounts of data have paved the way for various new applications, ranging from agricultural monitoring to crisis management and global security.

However, the rapid accumulation of gigabytes or terabytes' worth of remote sensing data on a daily basis has rendered robust and automated tools, designed for their management, search, and retrieval, essential for their effective exploitation. Of course, a variety of questions needs to be addressed, from the design of consistent and transferable data representations to user-friendly querying and retrieval systems dealing with satellite images or mosaics. To this end, multiple methods have already been developed, mostly inspired by the multimedia context, by adapting the existing large body of knowledge in that domain.

Nevertheless, it has quickly become clear that, due to its much wider variety of sensors and resolutions, as well as the availability of rich prior knowledge, remote sensing retrieval encourages, and often in fact requires, going beyond mere adaptations to instead design original methods that address these issues effectively and efficiently. The purpose of this Special Issue is to enable researchers from both multimedia retrieval and remote sensing to meet and share their experiences in order to build the remote sensing retrieval systems of tomorrow.

Topics of interest:

  • Content- and context-based indexing, search and retrieval of RS data
  • Search and browsing on RS Web repositories to cope with the peta/zettabyte scale
  • Advanced descriptors and similarity metrics dedicated to RS data
  • Usage of knowledge and semantic information for retrieval in RS
  • Machine learning for image retrieval in remote sensing
  • Query models, paradigms, and languages dedicated to RS
  • Multimodal/multi-observations (sensors, dates, resolutions) analysis of RS data
  • HCI issues in RS retrieval and browsing
  • Evaluation of RS retrieval systems
  • High performance indexing algorithms for RS data
  • Real-time information retrieval techniques and applications
  • Summarization and visualization of very large satellite image datasets
  • Applications of image retrieval in remote sensing

The availability of various public remote sensing datasets, produced especially for content-based retrieval and scene classification, has enabled objective and reproducible benchmarks among the plethora of published description, retrieval, and classification methods. From the now relatively small UC Merced Land Use Dataset, with 2100 aerial images, to the large-scale high-resolution EuroSAT or very-high-resolution SpaceNet databases covering millions of square meters, researchers are provided with sufficient means to propose and validate methods able to address a rich variety of use cases.

Prof. Sébastien Lefèvre
Dr. Alexandre Benoit
Dr. Erchan Aptoula
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All papers will be peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Remote Sensing is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • EO archives
  • Content-based image retrieval
  • Information querying and retrieval
  • Remote sensing image indexing
  • Big Data

Published Papers (8 papers)


Research

Open Access Article
Exploring Weighted Dual Graph Regularized Non-Negative Matrix Tri-Factorization Based Collaborative Filtering Framework for Multi-Label Annotation of Remote Sensing Images
Remote Sens. 2019, 11(8), 922; https://doi.org/10.3390/rs11080922 - 16 Apr 2019
Cited by 17
Abstract
Manually annotating remote sensing images is laborious work, especially on large-scale datasets. To improve the efficiency of this work, we propose an automatic annotation method for remote sensing images. The proposed method formulates the multi-label annotation task as a recommendation problem, based on non-negative matrix tri-factorization (NMTF). The labels of remote sensing images can be recommended directly by recovering the image–label matrix. To learn more efficient latent feature matrices, two graph regularization terms are added to NMTF that explore the affiliated relationships on the image graph and label graph simultaneously. In order to reduce the gap between semantic concepts and visual content, both low-level visual features and high-level semantic features are exploited to construct the image graph. Meanwhile, label co-occurrence information is used to build the label graph, which discovers the semantic meaning to enhance the label prediction for unlabeled images. By employing the information from images and labels, the proposed method can efficiently deal with the sparsity and cold-start problems brought by limited image–label pairs. Experimental results on the UCMerced and Corel5k datasets show that our model outperforms most baseline algorithms for multi-label annotation of remote sensing images and performs efficiently on large-scale unlabeled datasets. Full article
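The tri-factorization core of such an approach can be sketched with the standard multiplicative update rules; the graph-regularization terms the paper adds are omitted here, and all names and toy data are illustrative:

```python
import numpy as np

def nmtf(R, k1=4, k2=4, iters=200, eps=1e-9, seed=0):
    """Non-negative matrix tri-factorization R ~ U @ S @ V.T via standard
    multiplicative updates (minimal sketch; the paper's method additionally
    regularizes U and V with image/label graph Laplacians)."""
    rng = np.random.default_rng(seed)
    n, m = R.shape
    U = rng.random((n, k1)) + eps
    S = rng.random((k1, k2)) + eps
    V = rng.random((m, k2)) + eps
    for _ in range(iters):
        U *= (R @ V @ S.T) / (U @ S @ V.T @ V @ S.T + eps)
        S *= (U.T @ R @ V) / (U.T @ U @ S @ V.T @ V + eps)
        V *= (R.T @ U @ S) / (V @ S.T @ U.T @ U @ S + eps)
    return U, S, V

# Toy image-label matrix: 6 images x 5 labels, partially observed.
R = np.array([[1, 0, 1, 0, 0], [1, 1, 0, 0, 0], [0, 0, 1, 1, 0],
              [0, 0, 1, 1, 1], [1, 0, 0, 0, 1], [0, 1, 1, 0, 0]], float)
U, S, V = nmtf(R)
R_hat = U @ S @ V.T  # recovered scores: recommend top-scoring labels per image
```

Labels for an unannotated image would then be ranked by their scores in the corresponding row of the recovered matrix.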
(This article belongs to the Special Issue Image Retrieval in Remote Sensing)
Show Figures

Graphical abstract

Open Access Article
Toward Content-Based Hyperspectral Remote Sensing Image Retrieval (CB-HRSIR): A Preliminary Study Based on Spectral Sensitivity Functions
Remote Sens. 2019, 11(5), 600; https://doi.org/10.3390/rs11050600 - 12 Mar 2019
Cited by 2
Abstract
With the emergence of huge volumes of high-resolution Hyperspectral Images (HSI) produced by different types of imaging sensors, analyzing and retrieving these images requires effective image description and quantification techniques. Compared to remote sensing RGB images, HSI data contain hundreds of spectral bands (varying from the visible to the infrared range), allowing the profiling of materials and organisms in a way that only hyperspectral sensors can. In this article, we study the importance of spectral sensitivity functions in constructing discriminative representations of hyperspectral images. The main goal of such representations is to improve image content recognition by focusing the processing on only the most relevant spectral channels. The underlying hypothesis is that, for a given category, the content of each image is better extracted through a specific set of spectral sensitivity functions. Those spectral sensitivity functions are evaluated in a Content-Based Image Retrieval (CBIR) framework. In this work, we propose a new HSI dataset for the remote sensing community, specifically designed for hyperspectral remote sensing retrieval and classification. Exhaustive experiments have been conducted on this dataset and on a literature dataset. The obtained retrieval results prove that the physical measurements and optical properties of the scene contained in the HSI contribute to a more accurate image content description than the information provided by the RGB representation. Full article
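A minimal sketch of what a spectral sensitivity function does, assuming Gaussian-shaped functions over the band centers (all function names, wavelengths, and widths here are hypothetical illustrations, not the paper's actual functions):

```python
import numpy as np

def gaussian_ssf(wavelengths, center, width):
    """A hypothetical Gaussian spectral sensitivity function evaluated
    at each band-center wavelength."""
    return np.exp(-0.5 * ((wavelengths - center) / width) ** 2)

def project_hsi(cube, ssfs):
    """Project an (H, W, B) hyperspectral cube onto a small set of
    sensitivity functions, yielding an (H, W, C) response image that
    emphasizes only the most relevant spectral channels."""
    weights = np.stack([f / f.sum() for f in ssfs], axis=1)  # (B, C)
    return cube @ weights

H, W, B = 8, 8, 100
wl = np.linspace(400.0, 1000.0, B)  # band centers in nm (toy values)
cube = np.random.default_rng(0).random((H, W, B))
ssfs = [gaussian_ssf(wl, c, 40.0) for c in (450.0, 550.0, 800.0)]
resp = project_hsi(cube, ssfs)                 # category-specific "view"
descriptor = resp.reshape(-1, 3).mean(axis=0)  # simple pooled descriptor
```

In a CBIR setting, descriptors computed under different sensitivity-function sets would then be compared to find the set that best separates a given category.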
(This article belongs to the Special Issue Image Retrieval in Remote Sensing)
Show Figures

Graphical abstract

Open Access Article
Aggregated Deep Local Features for Remote Sensing Image Retrieval
Remote Sens. 2019, 11(5), 493; https://doi.org/10.3390/rs11050493 - 28 Feb 2019
Cited by 16
Abstract
Remote Sensing Image Retrieval remains a challenging topic due to the special nature of Remote Sensing imagery. Such images contain many different semantic objects, which clearly complicates the retrieval task. In this paper, we present an image retrieval pipeline that uses attentive, local convolutional features and aggregates them using the Vector of Locally Aggregated Descriptors (VLAD) to produce a global descriptor. We study various system parameters, such as the multiplicative and additive attention mechanisms and descriptor dimensionality. We propose a query expansion method that requires no external inputs. Experiments demonstrate that even without training, the local convolutional features and global representation outperform other systems. After system tuning, we can achieve state-of-the-art or competitive results. Furthermore, we observe that our query expansion method increases overall system performance by about 3%, using only the top three retrieved images. Finally, we show how dimensionality reduction produces compact descriptors with increased retrieval performance and fast retrieval computation times, e.g., 50% faster than current systems. Full article
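The VLAD aggregation step can be sketched as follows: a minimal numpy version of the standard formulation (the normalization choices and toy sizes are illustrative, not necessarily those used in the paper):

```python
import numpy as np

def vlad(descriptors, centers):
    """Vector of Locally Aggregated Descriptors: sum the residuals of the
    local descriptors to their nearest codebook center, then apply signed
    square-root (power) and L2 normalization."""
    k, d = centers.shape
    # Hard-assign each local descriptor to its nearest center.
    dists = ((descriptors[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    assign = dists.argmin(axis=1)
    v = np.zeros((k, d))
    for i in range(k):
        sel = descriptors[assign == i]
        if len(sel):
            v[i] = (sel - centers[i]).sum(axis=0)
    v = np.sign(v) * np.sqrt(np.abs(v))  # power normalization
    v = v.ravel()
    n = np.linalg.norm(v)
    return v / n if n > 0 else v

rng = np.random.default_rng(0)
local = rng.normal(size=(200, 16))   # e.g. attentive CNN features per location
centers = rng.normal(size=(8, 16))   # visual codebook (k-means in practice)
g = vlad(local, centers)             # global descriptor of size k*d = 128
```

Two images can then be compared by the dot product of their (unit-norm) global descriptors.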
(This article belongs to the Special Issue Image Retrieval in Remote Sensing)
Show Figures

Graphical abstract

Open Access Article
A Discriminative Feature Learning Approach for Remote Sensing Image Retrieval
Remote Sens. 2019, 11(3), 281; https://doi.org/10.3390/rs11030281 - 1 Feb 2019
Cited by 22
Abstract
Effective feature representations play a decisive role in content-based remote sensing image retrieval (CBRSIR). Recently, learning-based features have been widely used in CBRSIR, and they show a powerful ability for feature representation. In addition, a significant effort has been made to improve learning-based features from the perspective of the network structure. However, these learning-based features are not sufficiently discriminative for CBRSIR. In this paper, we propose two effective schemes for generating discriminative features for CBRSIR. In the first scheme, the attention mechanism and a new attention module are introduced to the Convolutional Neural Network (CNN) structure, causing more attention towards salient features and the suppression of other features. In the second scheme, a multi-task learning network structure is proposed to force learning-based features to be more discriminative, with inter-class dispersion and intra-class compaction, by penalizing the distances between the feature representations and their corresponding class centers. Then, a new method for constructing more challenging datasets is introduced for remote sensing image retrieval, to better validate our schemes. Extensive experiments on challenging datasets are conducted to evaluate the effectiveness of our two schemes, and the comparison of the results demonstrates that our proposed schemes, especially the fusion of the two, can improve on the baseline methods by a significant margin. Full article
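The intra-class compaction penalty of the second scheme resembles a center loss; a minimal numpy sketch with hypothetical names and toy values (in training, the centers would be updated jointly with the network):

```python
import numpy as np

def center_loss(features, labels, centers):
    """Mean squared distance between each feature vector and its class
    center: small values mean compact classes. The inter-class dispersion
    term of the paper's multi-task objective is not modeled here."""
    diffs = features - centers[labels]
    return 0.5 * (diffs ** 2).sum() / len(features)

feats = np.array([[1.0, 0.0], [0.8, 0.2], [0.0, 1.0]])
labels = np.array([0, 0, 1])
centers = np.array([[0.9, 0.1], [0.0, 1.0]])
loss = center_loss(feats, labels, centers)
```

Minimizing this term pulls features of the same class toward a shared center, which is what yields more discriminative retrieval descriptors.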
(This article belongs to the Special Issue Image Retrieval in Remote Sensing)
Show Figures

Graphical abstract

Open Access Article
A Two-Branch CNN Architecture for Land Cover Classification of PAN and MS Imagery
Remote Sens. 2018, 10(11), 1746; https://doi.org/10.3390/rs10111746 - 6 Nov 2018
Cited by 19
Abstract
The use of Very High Spatial Resolution (VHSR) imagery in remote sensing applications is nowadays current practice whenever fine-scale monitoring of the Earth's surface is concerned. VHSR land cover classification, in particular, is currently a well-established tool to support decisions in several domains, including urban monitoring, agriculture, biodiversity, and environmental assessment. Additionally, land cover classification can be employed to annotate VHSR imagery with the aim of retrieving spatial statistics or areas with similar land cover. Modern VHSR sensors provide data at multiple spatial and spectral resolutions, most commonly as a pair of a higher-resolution single-band panchromatic (PAN) image and a coarser multispectral (MS) image. In the typical land cover classification workflow, the multi-resolution input is preprocessed to generate a single multispectral image at the highest available resolution by means of a pan-sharpening process. Recently, deep learning approaches have shown the advantages of avoiding data preprocessing by letting machine learning algorithms automatically transform input data to best fit the classification task. Following this rationale, we here propose a new deep learning architecture to jointly use PAN and MS imagery for direct classification, without any prior image sharpening or resampling process. Our method, namely MultiResoLCC, consists of a two-branch end-to-end network that extracts features from each source at its native resolution and later combines them to perform land cover classification at the PAN resolution. Experiments are carried out on two real-world scenarios over large areas with contrasting land cover characteristics. The experimental results underline the quality of our method, while the characteristics of the proposed scenarios underline the applicability and generality of our strategy in operational settings. Full article
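The fusion step of such a two-branch design, combining MS-branch features with PAN-branch features at the PAN resolution, can be sketched without any deep learning framework (the nearest-neighbor upsampling, shapes, and names are illustrative assumptions, not the paper's exact layers):

```python
import numpy as np

def fuse_pan_ms(pan_feat, ms_feat, ratio):
    """Upsample MS-branch feature maps to the PAN resolution (nearest
    neighbor) and concatenate along channels; a classifier head would
    then predict a land cover label per PAN pixel."""
    up = ms_feat.repeat(ratio, axis=0).repeat(ratio, axis=1)
    return np.concatenate([pan_feat, up], axis=2)

pan_feat = np.zeros((16, 16, 8))  # PAN branch output at full resolution
ms_feat = np.ones((4, 4, 12))     # MS branch output at 1/4 resolution
fused = fuse_pan_ms(pan_feat, ms_feat, ratio=4)  # (16, 16, 20)
```

The point of the design is that each branch sees its source at native resolution, so no pan-sharpening or resampling of the raw imagery is required.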
(This article belongs to the Special Issue Image Retrieval in Remote Sensing)
Show Figures

Figure 1

Open Access Article
Integrating Aerial and Street View Images for Urban Land Use Classification
Remote Sens. 2018, 10(10), 1553; https://doi.org/10.3390/rs10101553 - 27 Sep 2018
Cited by 30
Abstract
Urban land use is key to rational urban planning and management. Traditional land use classification methods rely heavily on domain experts, which is both expensive and inefficient. In this paper, deep neural network-based approaches are presented to label urban land use at the pixel level using high-resolution aerial images and ground-level street view images. We use a deep neural network to extract semantic features from sparsely distributed street view images and interpolate them in the spatial domain to match the spatial resolution of the aerial images, which are then fused together through a deep neural network for classifying land use categories. Our methods are tested on a large, publicly available dataset of aerial and street view images of New York City, and the results show that using aerial images alone can achieve relatively high classification accuracy, that ground-level street view images contain useful information for urban land use classification, and that fusing street image features with aerial images can improve classification accuracy. Moreover, we present experimental studies showing that street view images add more value when the resolution of the aerial images is lower, and case studies illustrating how street view images provide useful auxiliary information to aerial images to boost performance. Full article
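The spatial-interpolation step can be sketched with generic inverse-distance weighting, which spreads each sparsely located street-view feature vector over a dense grid so it can be stacked with aerial-image channels (the paper's exact interpolation scheme may differ; all names and values here are illustrative):

```python
import numpy as np

def idw_interpolate(points, feats, grid_hw, power=2.0, eps=1e-6):
    """Inverse-distance-weighted interpolation of feature vectors located
    at sparse (row, col) points onto a dense (h, w) grid."""
    h, w = grid_hw
    ys, xs = np.mgrid[0:h, 0:w]
    grid = np.stack([ys.ravel(), xs.ravel()], axis=1).astype(float)
    d = np.linalg.norm(grid[:, None, :] - points[None, :, :], axis=2)
    wgt = 1.0 / (d ** power + eps)          # closer panoramas weigh more
    wgt /= wgt.sum(axis=1, keepdims=True)   # convex combination per pixel
    return (wgt @ feats).reshape(h, w, feats.shape[1])

pts = np.array([[1.0, 1.0], [6.0, 6.0]])  # street-view panorama locations
fv = np.array([[1.0, 0.0], [0.0, 1.0]])   # their semantic feature vectors
dense = idw_interpolate(pts, fv, (8, 8))  # (8, 8, 2) feature plane
```

The resulting dense feature planes have the same spatial layout as the aerial image, so the two sources can be concatenated channel-wise before the fusion network.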
(This article belongs to the Special Issue Image Retrieval in Remote Sensing)
Show Figures

Figure 1

Open Access Article
RDCRMG: A Raster Dataset Clean & Reconstitution Multi-Grid Architecture for Remote Sensing Monitoring of Vegetation Dryness
Remote Sens. 2018, 10(9), 1376; https://doi.org/10.3390/rs10091376 - 30 Aug 2018
Cited by 11
Abstract
In recent years, remote sensing (RS) research on crop growth status monitoring has gradually turned from static spectrum information retrieval at a large scale to meso-scale or micro-scale, timely multi-source data cooperative analysis; this change has imposed higher requirements on RS data acquisition and analysis efficiency. How to implement rapid and stable extraction and analysis of massive RS data has become a serious problem. This paper reports on a Raster Dataset Clean & Reconstitution Multi-Grid (RDCRMG) architecture for remote sensing monitoring of vegetation dryness, in which different types of raster datasets have been partitioned, organized, and systematically applied. First, raster images are subdivided into several independent blocks and distributed for storage across different data nodes, using the multi-grid as a consistent partition unit. Second, the "no metadata model" ideology is adopted, so that target raster data can be extracted rapidly by directly calculating the data storage path, without retrieving metadata records. Third, grids that cover the query range can be easily accessed, which allows the query task to be split into several sub-tasks and executed in parallel by grouping these grids. Our RDCRMG-based change detection test of the spectral reflectance information and our comparative test of data extraction efficiency show that the RDCRMG is reliable for vegetation dryness monitoring, with only slight reflectance information distortion and consistent percentage histograms. Furthermore, RDCRMG-based data extraction in parallel circumstances has the advantages of high efficiency and excellent stability compared to RDCRMG-based extraction in serial circumstances and traditional data extraction. Finally, an RDCRMG-based vegetation dryness monitoring platform (VDMP) has been constructed to apply RS data inversion to vegetation dryness monitoring. Through actual applications, the RDCRMG architecture has proven to be appropriate for timely, automatic vegetation dryness RS monitoring, with better performance, more reliability, and higher extensibility. Our future work will focus on integrating more kinds of continuously updated RS data into the RDCRMG-based VDMP and on integrating more multi-source-dataset-based collaborative analysis models for agricultural monitoring. Full article
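The "no metadata model" idea, deriving a block's storage path arithmetically from its grid index instead of querying a catalogue, can be sketched as follows (the directory layout, cell size, and all parameter names are hypothetical, not the paper's actual scheme):

```python
def grid_storage_path(lon, lat, level, cell_deg, root="/rsdata"):
    """Compute a raster block's storage path directly from the grid cell
    that contains (lon, lat): no metadata lookup is needed, so extraction
    cost stays constant regardless of archive size."""
    col = int((lon + 180.0) // cell_deg)  # grid column from longitude
    row = int((lat + 90.0) // cell_deg)   # grid row from latitude
    return f"{root}/L{level}/{col:05d}/{row:05d}.tif"

p = grid_storage_path(120.35, 31.20, level=3, cell_deg=0.25)
```

Because neighboring query cells map to independent paths, a range query decomposes naturally into per-grid sub-tasks that can run in parallel.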
(This article belongs to the Special Issue Image Retrieval in Remote Sensing)
Show Figures

Graphical abstract

Open Access Article
Unsupervised Deep Feature Learning for Remote Sensing Image Retrieval
Remote Sens. 2018, 10(8), 1243; https://doi.org/10.3390/rs10081243 - 7 Aug 2018
Cited by 31
Abstract
Due to the specific characteristics and complicated contents of remote sensing (RS) images, remote sensing image retrieval (RSIR) has always been an open and tough research topic in the RS community. There are two basic blocks in RSIR: feature learning and similarity matching. In this paper, we focus on developing an effective feature learning method for RSIR. With the help of deep learning techniques, the proposed feature learning method is designed under the bag-of-words (BOW) paradigm; thus, we name the obtained feature deep BOW (DBOW). The learning process consists of two parts: image descriptor learning and feature construction. First, to explore the complex contents within the RS image, we extract the image descriptor at the image patch level rather than for the whole image. In addition, instead of using handcrafted features to describe the patches, we propose a deep convolutional auto-encoder (DCAE) model to learn a discriminative descriptor for the RS image. Second, the k-means algorithm is used to generate the codebook from the obtained deep descriptors. The final histogram-based DBOW features are then acquired by counting the frequency of each code word. Once we obtain the DBOW features from the RS images, the similarities between RS images are measured using the L1-norm distance, and the retrieval results are obtained according to the similarity order. The encouraging experimental results obtained on four public RS image archives demonstrate that our DBOW feature is effective for the RSIR task. Compared with existing RS image features, our DBOW achieves improved performance on RSIR. Full article
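The feature-construction and matching steps can be sketched as follows; the codebook here is random for illustration, whereas the paper builds it with k-means over DCAE patch descriptors:

```python
import numpy as np

def bow_histogram(descriptors, codebook):
    """Assign each patch descriptor to its nearest code word and count
    word frequencies, yielding an L1-normalized bag-of-words feature."""
    d = ((descriptors[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    words = d.argmin(axis=1)
    hist = np.bincount(words, minlength=len(codebook)).astype(float)
    return hist / hist.sum()

def l1_distance(a, b):
    """Similarity matching: smaller L1 distance means a better match."""
    return np.abs(a - b).sum()

rng = np.random.default_rng(0)
codebook = rng.normal(size=(32, 16))  # k-means centers in practice
img_a = bow_histogram(rng.normal(size=(100, 16)), codebook)
img_b = bow_histogram(rng.normal(size=(100, 16)), codebook)
dist = l1_distance(img_a, img_b)      # rank archive images by this value
```

Retrieval then amounts to sorting the archive by L1 distance to the query's DBOW histogram.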
(This article belongs to the Special Issue Image Retrieval in Remote Sensing)
Show Figures

Graphical abstract
