Special Issue "Advances in Earth Observations Analytics: Leveraging Radar and Optical Together"

A special issue of Remote Sensing (ISSN 2072-4292). This special issue belongs to the section "Remote Sensing Image Processing".

Deadline for manuscript submissions: 31 March 2019

Special Issue Editors

Guest Editor
Dr. Saeid Homayouni

Department of Geography, Environment, and Geomatics, University of Ottawa, Simard Building, 60 University Private, Ottawa, ON K1N 5N6, Canada
Interests: satellite/airborne/UAV; optical; SAR; image analysis
Guest Editor
Dr. H. Peter White

Canada Centre for Mapping and Earth Observation, Natural Resources Canada, 560 Rochester St., Ottawa, ON K1A 0E4, Canada
Interests: hyperspectral; multi-spectral; environmental monitoring; remediation
Guest Editor
Dr. Alireza Tabatabaeenejad

University of Southern California, 3737 Watt Way, PHE 621, Los Angeles, CA 90089, USA
Phone: (213) 740-2574
Interests: microwave remote sensing; electromagnetics
Guest Editor
Dr. Pedram Ghamisi

Helmholtz-Zentrum Dresden-Rossendorf, Helmholtz Institute Freiberg for Resource Technology, Exploration, Chemnitzer Str. 40, D-09599 Freiberg, Germany
Interests: hyperspectral/remote sensing image classification; multisensor data fusion; deep learning; machine learning

Special Issue Information

Dear Colleagues,

Recent advances in Earth Observation (EO) technologies have provided a unique opportunity for an increasingly detailed understanding of various features of the Earth system. In particular, radar and optical remote sensing systems are collecting multitemporal, multispectral, and multifrequency imagery and data with ever-increasing spatial resolution. These exceptional data bring both opportunities and challenges for the detection, identification, classification, and mapping of the Earth's surface. Consequently, now is the time to leverage these advanced technologies to strengthen our capacity to monitor our dynamic planet and environment.

EO analytics, based on today's open-source technology, artificial intelligence and machine learning, and high-performance computing, can benefit from these opportunities and address these challenges. It can provide accurate, up-to-date, and diversified geospatial information for a wide range of natural resource, environmental, and societal applications. These applications range from land use/land cover mapping (e.g., monitoring urbanization, croplands, desertification, deforestation and forest health, glaciers, and sea ice) to the detection and monitoring of air pollution and oil spills, to geological mapping.

This Special Issue of Remote Sensing, entitled “Advances in Earth Observations Analytics: Leveraging Radar and Optical Together”, aims to present state-of-the-art and original analytical methods for converting diverse advanced remote sensing data into information relevant to a variety of Earth science applications. Research papers that examine the latest developments in concepts, methods, techniques, and case-study applications are welcome. These analytical methods may be developed for individual or integrated remotely sensed data, e.g., optical (multispectral and hyperspectral), radar (polarimetric and interferometric), LiDAR (terrestrial and airborne), and thermal imagery acquired by satellite, airborne, and UAV sensors.

Dr. Saeid Homayouni
Dr. H. Peter White
Dr. Alireza Tabatabaeenejad
Dr. Pedram Ghamisi
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All papers will be peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Remote Sensing is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1800 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Machine Learning
  • Artificial Intelligence
  • Pattern recognition
  • Remote Sensing
  • Multispectral and Hyperspectral
  • Synthetic Aperture Radar
  • Super-Resolution Imagery
  • Earth Observations Time Series
  • Image/Data Quality Enhancement
  • Classification
  • Clustering
  • Object-Based Image Analysis
  • Data Fusion
  • Geo Big Data

Published Papers (2 papers)


Research

Open Access Feature Paper Article: Hyperspectral and LiDAR Fusion Using Deep Three-Stream Convolutional Neural Networks
Remote Sens. 2018, 10(10), 1649; https://doi.org/10.3390/rs10101649
Received: 5 September 2018 / Revised: 8 October 2018 / Accepted: 15 October 2018 / Published: 16 October 2018
Abstract
Recently, convolutional neural networks (CNNs) have been intensively investigated for the classification of remote sensing data by extracting invariant and abstract features suitable for classification. In this paper, a novel framework is proposed for the fusion of hyperspectral images and LiDAR-derived elevation data based on CNNs and composite kernels. First, extinction profiles are applied to both data sources in order to extract spatial and elevation features from the hyperspectral and LiDAR-derived data, respectively. Second, a three-stream CNN is designed to extract informative spectral, spatial, and elevation features individually from both available sources. The combination of extinction profiles and CNN features enables us to jointly benefit from low-level and high-level features to improve classification performance. To fuse the heterogeneous spectral, spatial, and elevation features extracted by the CNN, instead of a simple stacking strategy, a multi-sensor composite kernels (MCK) scheme is designed. This scheme helps us to achieve higher spectral, spatial, and elevation separability of the extracted features and effectively perform multi-sensor data fusion in kernel space. In this context, a support vector machine and an extreme learning machine with their composite-kernel versions are employed to produce the final classification result. The proposed framework is evaluated on two widely used data sets with different characteristics: an urban data set captured over Houston, USA, and a rural data set captured over Trento, Italy. The proposed framework yields the highest overall accuracies of 92.57% and 97.91% for the Houston and Trento data sets, respectively. Experimental results confirm that the proposed fusion framework can produce competitive results in both urban and rural areas in terms of classification accuracy, and significantly mitigate the salt-and-pepper noise in classification maps.
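The composite-kernel idea in this abstract can be illustrated with a minimal sketch: per-sensor kernels are computed separately and then combined as a weighted sum before being fed to a precomputed-kernel SVM. The feature arrays, weights, and gamma value below are illustrative stand-ins, not the paper's actual CNN-derived features or MCK configuration.

```python
# Minimal sketch of a multi-sensor composite-kernel SVM.
# The spectral/spatial/elevation arrays are random stand-ins for
# CNN-extracted features; weights and gamma are illustrative only.
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics.pairwise import rbf_kernel

rng = np.random.default_rng(0)
n = 60
spectral = rng.normal(size=(n, 16))   # stand-in for the spectral stream
spatial = rng.normal(size=(n, 8))     # stand-in for the spatial stream
elevation = rng.normal(size=(n, 4))   # stand-in for the elevation stream
y = (spectral[:, 0] + elevation[:, 0] > 0).astype(int)  # toy labels

def composite_kernel(feats_a, feats_b, weights, gamma=0.5):
    """Weighted sum of per-sensor RBF kernels (one simple composite scheme)."""
    return sum(w * rbf_kernel(a, b, gamma=gamma)
               for (a, b), w in zip(zip(feats_a, feats_b), weights))

weights = [0.5, 0.3, 0.2]                     # per-stream weights (sum to 1)
streams = [spectral, spatial, elevation]
K = composite_kernel(streams, streams, weights)

clf = SVC(kernel="precomputed").fit(K, y)
print(f"training accuracy: {clf.score(K, y):.3f}")
```

Combining kernels, rather than stacking raw feature vectors, keeps each sensor's similarity structure intact and lets the per-stream weights control its contribution.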

Open Access Feature Paper Article: Very Deep Convolutional Neural Networks for Complex Land Cover Mapping Using Multispectral Remote Sensing Imagery
Remote Sens. 2018, 10(7), 1119; https://doi.org/10.3390/rs10071119
Received: 24 May 2018 / Revised: 30 June 2018 / Accepted: 12 July 2018 / Published: 14 July 2018
Abstract
Despite recent advances of deep Convolutional Neural Networks (CNNs) in various computer vision tasks, their potential for the classification of multispectral remote sensing images has not been thoroughly explored. In particular, applications of deep CNNs using optical remote sensing data have focused on the classification of very high-resolution aerial and satellite data, owing to the similarity of these data to the large datasets in computer vision. Accordingly, this study presents a detailed investigation of state-of-the-art deep learning tools for the classification of complex wetland classes using multispectral RapidEye optical imagery. Specifically, we examine the capacity of seven well-known deep convnets, namely DenseNet121, InceptionV3, VGG16, VGG19, Xception, ResNet50, and InceptionResNetV2, for wetland mapping in Canada. In addition, the classification results obtained from the deep CNNs are compared with those based on conventional machine learning tools, including Random Forest and Support Vector Machine, to further evaluate the efficiency of the former for classifying wetlands. The results illustrate that full training of convnets using five spectral bands outperforms the other strategies for all convnets. InceptionResNetV2, ResNet50, and Xception are distinguished as the top three convnets, providing state-of-the-art classification accuracies of 96.17%, 94.81%, and 93.57%, respectively. The classification accuracies obtained using Support Vector Machine (SVM) and Random Forest (RF) are 74.89% and 76.08%, respectively, considerably inferior to the CNNs. Importantly, InceptionResNetV2 is consistently found to be superior to all other convnets, suggesting that the integration of Inception and ResNet modules is an efficient architecture for classifying complex remote sensing scenes such as wetlands.
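The conventional baselines this abstract compares against (Random Forest and SVM on per-pixel band values) can be sketched in a few lines. The data below are synthetic stand-ins for the five-band RapidEye pixels and the wetland labels, so the accuracies printed here are not the paper's reported figures.

```python
# Sketch of the RF and SVM per-pixel baselines on synthetic stand-in data.
# Five "bands" per pixel mirror the five-band RapidEye setup; labels are toy.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(1)
n = 300
X = rng.normal(size=(n, 5))                      # five spectral bands per pixel
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)    # toy two-class label

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
svm = SVC(kernel="rbf").fit(X_tr, y_tr)

rf_acc = rf.score(X_te, y_te)
svm_acc = svm.score(X_te, y_te)
print(f"RF accuracy:  {rf_acc:.3f}")
print(f"SVM accuracy: {svm_acc:.3f}")
```

Such per-pixel classifiers see each pixel in isolation, which is one reason the convnets, with their learned spatial context, outperform them on complex wetland scenes.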
