Special Issue "Image Retrieval in Remote Sensing"

A special issue of Remote Sensing (ISSN 2072-4292). This special issue belongs to the section "Remote Sensing Image Processing".

Deadline for manuscript submissions: 31 December 2018

Special Issue Editors

Guest Editor
Prof. Sébastien Lefèvre

IRISA – Université Bretagne Sud, Campus de Tohannic, BP 573, 56017 Vannes Cedex, France
Interests: remote sensing; image analysis; image processing; computer vision; machine learning; pattern recognition
Guest Editor
Dr. Alexandre Benoit

LISTIC – Université Savoie Mont Blanc, Polytech Annecy-Chambéry, 5 chemin de Bellevue, Annecy-le-Vieux, 74940 Annecy, France
Interests: image and video analysis; computer vision; remote sensing; machine learning; pattern recognition
Guest Editor
Dr. Erchan Aptoula

Institute of Information Technologies, Gebze Technical University, Cayirova Campus, 41400 Kocaeli, Turkey
Interests: image analysis; mathematical morphology; content based image retrieval; hyperspectral imaging; deep learning

Special Issue Information

Dear Colleagues,

The continuous proliferation of Earth Observation satellites, along with their ever-increasing acquisition performances, has led to the formation of rapidly-growing geospatial data warehouses, in terms of both size and complexity, some of which are publicly available (e.g., Landsat, Sentinels). The analysis and exploration of such massive amounts of data has paved the way for various new applications, ranging from agricultural monitoring to crisis management and global security.

However, the daily accumulation of gigabytes or even terabytes of remote sensing data has made robust, automated tools for data management, search, and retrieval essential for effective exploitation. A variety of questions need to be addressed, from the design of consistent and transferable data representations to user-friendly querying and retrieval systems dealing with satellite images or mosaics. To this end, multiple methods have already been developed, mostly inspired by the multimedia context, by adapting the existing large body of knowledge in that domain.

Nevertheless, it has quickly become clear that, owing to its much wider variety of sensors and resolutions, as well as the availability of rich prior knowledge, remote sensing retrieval encourages, and often requires, going beyond mere adaptations to design original methods that address these issues effectively and efficiently. The purpose of this Special Issue is to bring together researchers from both multimedia retrieval and remote sensing so they can share their experiences and build the remote sensing retrieval systems of tomorrow.

Topics of interest:

  • Content- and context-based indexing, search and retrieval of RS data
  • Search and browsing on RS Web repositories to face the Peta/Zettabyte scale
  • Advanced descriptors and similarity metrics dedicated to RS data
  • Usage of knowledge and semantic information for retrieval in RS
  • Machine learning for image retrieval in remote sensing
  • Query models, paradigms, and languages dedicated to RS
  • Multimodal/multi-observations (sensors, dates, resolutions) analysis of RS data
  • HCI issues in RS retrieval and browsing
  • Evaluation of RS retrieval systems
  • High performance indexing algorithms for RS data
  • Real-time information retrieval techniques and applications
  • Summarization and visualization of very large satellite image datasets
  • Applications of image retrieval in remote sensing

The availability of various public remote sensing datasets, produced especially for content-based retrieval and scene classification, has enabled objective and reproducible benchmarks among the plethora of published description, retrieval, and classification methods. From the relatively small UC Merced Land Use Dataset, with 2100 aerial images, to the large-scale, high-resolution EuroSAT and very-high-resolution SpaceNet databases covering millions of square meters, researchers are provided with sufficient means to propose and validate methods able to address a rich variety of use cases.

Prof. Sébastien Lefèvre
Dr. Alexandre Benoit
Dr. Erchan Aptoula
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All papers will be peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Remote Sensing is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1800 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • EO archives
  • Content-based image retrieval
  • Information querying and retrieval
  • Remote sensing image indexing
  • Big Data

Published Papers (4 papers)


Research

Open Access Article: A Two-Branch CNN Architecture for Land Cover Classification of PAN and MS Imagery
Remote Sens. 2018, 10(11), 1746; https://doi.org/10.3390/rs10111746
Received: 3 October 2018 / Revised: 1 November 2018 / Accepted: 2 November 2018 / Published: 6 November 2018
Abstract
The use of Very High Spatial Resolution (VHSR) imagery in remote sensing applications is nowadays common practice whenever fine-scale monitoring of the Earth's surface is concerned. VHSR land cover classification, in particular, is currently a well-established tool to support decisions in several domains, including urban monitoring, agriculture, biodiversity, and environmental assessment. Additionally, land cover classification can be employed to annotate VHSR imagery with the aim of retrieving spatial statistics or areas with similar land cover. Modern VHSR sensors provide data at multiple spatial and spectral resolutions, most commonly as a pair of a higher-resolution single-band panchromatic (PAN) image and a coarser multispectral (MS) image. In the typical land cover classification workflow, the multi-resolution input is preprocessed to generate a single multispectral image at the highest available resolution by means of pan-sharpening. Recently, deep learning approaches have shown the advantage of avoiding such preprocessing by letting machine learning algorithms automatically transform input data to best fit the classification task. Following this rationale, we propose a new deep learning architecture to jointly use PAN and MS imagery for direct classification, without any prior image sharpening or resampling. Our method, named MultiResoLCC, consists of a two-branch end-to-end network which extracts features from each source at its native resolution and later combines them to perform land cover classification at the PAN resolution. Experiments are carried out on two real-world scenarios over large areas with contrasting land cover characteristics. The experimental results underline the quality of our method, while the characteristics of the proposed scenarios underline the applicability and generality of our strategy in operational settings.
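The core idea, processing each source in its own branch at its native resolution before fusing, can be illustrated with a short sketch. The following PyTorch snippet is a minimal, hypothetical two-branch model, not the published MultiResoLCC network: the layer widths, depths, and the assumed 4:1 PAN/MS resolution ratio are placeholders.

import torch
import torch.nn as nn

class TwoBranchLCC(nn.Module):
    """Toy two-branch network: PAN and MS enter at native resolution (assumed 4:1 ratio)."""
    def __init__(self, ms_bands=4, n_classes=8):
        super().__init__()
        # PAN branch: single-band input at full spatial resolution
        self.pan_branch = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU())
        # MS branch: multi-band input at one quarter of the PAN resolution
        self.ms_branch = nn.Sequential(
            nn.Conv2d(ms_bands, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU())
        # Fusion and per-pixel classification at the PAN resolution
        self.classifier = nn.Sequential(
            nn.Conv2d(128, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, n_classes, 1))

    def forward(self, pan, ms):
        f_pan = self.pan_branch(pan)                      # (B, 64, H, W)
        f_ms = self.ms_branch(ms)                         # (B, 64, H/4, W/4)
        f_ms = nn.functional.interpolate(                 # bring MS features to PAN size
            f_ms, size=f_pan.shape[-2:], mode="bilinear", align_corners=False)
        fused = torch.cat([f_pan, f_ms], dim=1)           # channel-wise fusion
        return self.classifier(fused)                     # per-pixel class scores

# Example: a 256x256 PAN tile paired with a 64x64 four-band MS tile
logits = TwoBranchLCC()(torch.randn(1, 1, 256, 256), torch.randn(1, 4, 64, 64))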

Open Access Article: Integrating Aerial and Street View Images for Urban Land Use Classification
Remote Sens. 2018, 10(10), 1553; https://doi.org/10.3390/rs10101553
Received: 29 August 2018 / Revised: 20 September 2018 / Accepted: 25 September 2018 / Published: 27 September 2018
Abstract
Urban land use is key to rational urban planning and management. Traditional land use classification methods rely heavily on domain experts, which is both expensive and inefficient. In this paper, deep neural network-based approaches are presented to label urban land use at the pixel level using high-resolution aerial images and ground-level street view images. We use a deep neural network to extract semantic features from sparsely distributed street view images and interpolate them in the spatial domain to match the spatial resolution of the aerial images; the two sources are then fused through a deep neural network for classifying land use categories. Our methods are tested on a large publicly available dataset of aerial and street view images of New York City. The results show that using aerial images alone can achieve relatively high classification accuracy, that ground-level street view images contain useful information for urban land use classification, and that fusing street image features with aerial images can improve classification accuracy. Moreover, we present experimental studies showing that street view images add more value when the resolution of the aerial images is lower, as well as case studies illustrating how street view images provide useful auxiliary information to aerial images to boost performance.
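A rough sketch of the fusion step described above: sparse street-view feature vectors are interpolated over the aerial grid and concatenated with per-pixel aerial features. The feature dimensions, random stand-in features, and nearest-neighbour interpolation below are assumptions made for illustration, not the paper's exact pipeline.

import numpy as np
from scipy.interpolate import griddata

H, W, D = 512, 512, 16                        # aerial tile size and street-view feature dimension
rng = np.random.default_rng(0)
aerial_feat = rng.random((H, W, 32))          # stand-in for CNN features of the aerial tile

# Sparse street-view samples: pixel coordinates plus a D-dimensional semantic feature each
sv_xy = rng.integers(0, W, size=(200, 2)).astype(float)
sv_feat = rng.random((200, D))

# Interpolate each street-view feature channel onto the dense aerial grid
grid_y, grid_x = np.mgrid[0:H, 0:W]
sv_dense = np.stack(
    [griddata(sv_xy, sv_feat[:, d], (grid_x, grid_y), method="nearest") for d in range(D)],
    axis=-1)                                  # (H, W, D) dense street-view feature map

# Channel-wise fusion; a classifier head would map this tensor to land use labels
fused = np.concatenate([aerial_feat, sv_dense], axis=-1)   # (H, W, 32 + D)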

Open Access Article: RDCRMG: A Raster Dataset Clean & Reconstitution Multi-Grid Architecture for Remote Sensing Monitoring of Vegetation Dryness
Remote Sens. 2018, 10(9), 1376; https://doi.org/10.3390/rs10091376
Received: 23 July 2018 / Revised: 20 August 2018 / Accepted: 26 August 2018 / Published: 30 August 2018
Abstract
In recent years, remote sensing (RS) research on crop growth status monitoring has gradually shifted from large-scale, static spectrum information retrieval to timely, meso-scale or micro-scale, multi-source cooperative data analysis; this change has placed higher requirements on RS data acquisition and analysis efficiency. How to implement rapid and stable extraction and analysis of massive RS data thus becomes a serious problem. This paper reports on a Raster Dataset Clean & Reconstitution Multi-Grid (RDCRMG) architecture for remote sensing monitoring of vegetation dryness in which different types of raster datasets are partitioned, organized, and systematically applied. First, raster images are subdivided into several independent blocks and distributed for storage across different data nodes, using the multi-grid as a consistent partition unit. Second, the "no metadata model" ideology is adopted so that target raster data can be extracted quickly by directly calculating the data storage path, without retrieving metadata records. Third, grids that cover the query range can be easily assessed, which allows the query task to be split into several sub-tasks and executed in parallel by grouping these grids. Our RDCRMG-based change detection test of spectral reflectance information and our comparative test of data extraction efficiency show that the RDCRMG is reliable for vegetation dryness monitoring, with only slight reflectance information distortion and consistent percentage histograms. Furthermore, RDCRMG-based data extraction in parallel has the advantages of high efficiency and excellent stability compared with RDCRMG-based serial extraction and traditional data extraction. Finally, an RDCRMG-based vegetation dryness monitoring platform (VDMP) has been constructed to apply RS data inversion to vegetation dryness monitoring. Through actual applications, the RDCRMG architecture is shown to be appropriate for timely, automatic RS monitoring of vegetation dryness, with better performance, more reliability, and higher extensibility. Our future work will focus on integrating more kinds of continuously updated RS data into the RDCRMG-based VDMP and on integrating more multi-source collaborative analysis models for agricultural monitoring.
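The "no metadata model" idea, computing a block's storage path directly from its grid indices and splitting a query window into one sub-task per covering grid cell, can be sketched in a few lines of Python. The path layout and the 10 km grid size below are illustrative assumptions, not the RDCRMG implementation.

from math import floor

GRID_SIZE = 10_000.0  # assumed grid cell edge length in metres

def block_path(sensor: str, date: str, easting: float, northing: float) -> str:
    """Derive the storage path of the grid block covering a point; no catalogue lookup."""
    col, row = floor(easting / GRID_SIZE), floor(northing / GRID_SIZE)
    return f"/rs_store/{sensor}/{date}/r{row:05d}_c{col:05d}.tif"

def covering_blocks(sensor, date, xmin, ymin, xmax, ymax):
    """Enumerate grid blocks intersecting a query window; each becomes one parallel sub-task."""
    c0, c1 = floor(xmin / GRID_SIZE), floor(xmax / GRID_SIZE)
    r0, r1 = floor(ymin / GRID_SIZE), floor(ymax / GRID_SIZE)
    return [f"/rs_store/{sensor}/{date}/r{r:05d}_c{c:05d}.tif"
            for r in range(r0, r1 + 1) for c in range(c0, c1 + 1)]

# Example: a 25 km x 15 km query window maps onto 3 x 2 = 6 blocks, i.e. 6 parallel sub-tasks
paths = covering_blocks("S2", "2018-08-01", 500_000, 4_000_000, 525_000, 4_015_000)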

Open Access Article: Unsupervised Deep Feature Learning for Remote Sensing Image Retrieval
Remote Sens. 2018, 10(8), 1243; https://doi.org/10.3390/rs10081243
Received: 6 June 2018 / Revised: 1 August 2018 / Accepted: 2 August 2018 / Published: 7 August 2018
Abstract
Due to the specific characteristics and complicated contents of remote sensing (RS) images, remote sensing image retrieval (RSIR) remains an open and challenging research topic in the RS community. There are two basic blocks in RSIR: feature learning and similarity matching. In this paper, we focus on developing an effective feature learning method for RSIR. With the help of deep learning techniques, the proposed feature learning method is designed under the bag-of-words (BOW) paradigm; we therefore name the resulting feature deep BOW (DBOW). The learning process consists of two parts: image descriptor learning and feature construction. First, to explore the complex contents within the RS image, we extract the image descriptor at the image patch level rather than over the whole image. In addition, instead of using handcrafted features to describe the patches, we propose a deep convolutional auto-encoder (DCAE) model to learn a discriminative descriptor for the RS image. Second, the k-means algorithm is used to generate the codebook from the obtained deep descriptors. The final histogram-based DBOW features are then acquired by counting the frequency of each code word. Once the DBOW features are extracted from the RS images, the similarities between RS images are measured using the L1-norm distance, and the retrieval results are obtained according to the similarity order. The encouraging experimental results obtained on four public RS image archives demonstrate that our DBOW feature is effective for the RSIR task. Compared with existing RS image features, DBOW achieves improved performance on RSIR.
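The bag-of-words stage of this pipeline, quantising patch descriptors with k-means, counting code words, and ranking archive images by L1 distance, is illustrated below. Random arrays stand in for the DCAE patch descriptors, and the codebook size and descriptor dimension are arbitrary assumptions rather than the paper's settings.

import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
n_images, patches_per_image, desc_dim, n_words = 50, 100, 64, 32

# Stand-in for DCAE descriptors: one (patches, dim) matrix per archive image
descriptors = [rng.random((patches_per_image, desc_dim)) for _ in range(n_images)]

# 1) Learn the codebook on all patch descriptors pooled together
codebook = KMeans(n_clusters=n_words, n_init=10, random_state=0).fit(np.vstack(descriptors))

# 2) Encode each image as a normalised code-word histogram (the DBOW feature)
def dbow(desc):
    words = codebook.predict(desc)
    hist = np.bincount(words, minlength=n_words).astype(float)
    return hist / hist.sum()

archive = np.stack([dbow(d) for d in descriptors])

# 3) Retrieval: rank archive images by L1 distance to the query's DBOW feature
query = dbow(rng.random((patches_per_image, desc_dim)))
ranking = np.argsort(np.abs(archive - query).sum(axis=1))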
