Special Issue "Advances in Earth Observations Analytics: Leveraging Radar and Optical Together"

A special issue of Remote Sensing (ISSN 2072-4292). This special issue belongs to the section "Remote Sensing Image Processing".

Deadline for manuscript submissions: closed (30 September 2019).

Special Issue Editors

Dr. Saeid Homayouni
Guest Editor
Institut National de la Recherche Scientifique (INRS), Centre Eau Terre Environnement (CETE) 490, rue de la Couronne, Québec, QC G1K 9A9, Canada
Interests: analysis of optical, hyperspectral, and radar Earth observations through artificial intelligence and machine-learning approaches for urban and agro-environmental applications
Dr. H. Peter White
Guest Editor
Canada Centre for Mapping and Earth Observation, Natural Resources Canada, 560 Rochester St., Ottawa, ON K1A 0E4, Canada
Interests: hyperspectral; multi-spectral; environmental monitoring; remediation
Dr. Alireza Tabatabaeenejad
Guest Editor
Research Assistant Professor, University of Southern California, Department of Electrical and Computer Engineering—Electrophysics
Interests: microwave remote sensing; applied and computational electromagnetics; radar remote sensing; inverse models and algorithms for environmental and Earth science applications
Dr. Pedram Ghamisi
Guest Editor

Special Issue Information

Dear Colleagues,

Recent advances in Earth Observation (EO) technologies have provided a unique opportunity to build a more detailed understanding of the various components of the Earth system. In particular, radar and optical remote sensing systems are collecting multitemporal, multispectral, and multifrequency imagery and data at increasing spatial resolution. These exceptional data bring both opportunities and challenges for the detection, identification, classification, and mapping of Earth surface features. Consequently, now is the time to leverage these technologies to strengthen our capacity to monitor our dynamic planet and environment.

EO analytics, built on today's open-source technology, artificial intelligence and machine learning, and high-performance computing, can seize these opportunities and address these challenges. Such analytics can provide accurate, up-to-date, and diversified geospatial information for a wide range of natural-resource, environmental, and societal applications. These applications range from land use/land cover mapping (e.g., monitoring urbanization, croplands, desertification, deforestation and forest health, glaciers, and sea ice) to detecting and monitoring air pollution and oil spills, to geological mapping.

This Special Issue of Remote Sensing, entitled “Advances in Earth Observations Analytics: Leveraging Radar and Optical Together”, aims to present state-of-the-art, original analytical methods for converting diverse advanced remote sensing data into information relevant to various Earth science applications. Research papers that examine the latest developments in concepts, methods, techniques, and case-study applications are welcome. These analytical methods may be developed for individual or integrated remotely sensed data, e.g., optical (multispectral and hyperspectral), radar (polarimetric and interferometric), LiDAR (terrestrial and airborne), and thermal imagery acquired by satellite, airborne, and UAV sensors.

Dr. Saeid Homayouni
Dr. H. Peter White
Dr. Alireza Tabatabaeenejad
Dr. Pedram Ghamisi
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All papers will be peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Remote Sensing is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Machine Learning
  • Artificial Intelligence
  • Pattern Recognition
  • Remote Sensing
  • Multispectral and Hyperspectral
  • Synthetic Aperture Radar
  • Super-Resolution Imagery
  • Earth Observations Time Series
  • Image/Data Quality Enhancement
  • Classification
  • Clustering
  • Object-Based Image Analysis
  • Data Fusion
  • Geo Big Data

Published Papers (7 papers)


Research

Open Access Article
Integrating Imaging Spectrometer and Synthetic Aperture Radar Data for Estimating Wetland Vegetation Aboveground Biomass in Coastal Louisiana
Remote Sens. 2019, 11(21), 2533; https://doi.org/10.3390/rs11212533 - 29 Oct 2019
Cited by 5
Abstract
Aboveground biomass (AGB) plays a critical functional role in coastal wetland ecosystem stability, with high-biomass vegetation contributing to organic matter production, sediment accretion potential, and the surface elevation’s ability to keep pace with relative sea level rise. Many remote sensing studies have employed either imaging spectrometer or synthetic aperture radar (SAR) data for AGB estimation in various environments for assessing ecosystem health and carbon storage. This study leverages airborne data from NASA’s Airborne Visible/Infrared Imaging Spectrometer-Next Generation (AVIRIS-NG) and Uninhabited Aerial Vehicle Synthetic Aperture Radar (UAVSAR) to assess their unique capabilities in combination to estimate AGB in coastal deltaic wetlands. Here we develop AGB models for emergent herbaceous and forested wetland vegetation in coastal Louisiana. In addition to horizontally transmitted, vertically received (HV) backscatter, SAR parameters are expressed by the Freeman–Durden polarimetric decomposition components representing volume and double-bounce scattering. The imaging spectrometer parameters include the normalized difference vegetation index (NDVI), reflectance from 290 visible-shortwave infrared (VSWIR) bands, the first derivatives from those bands, or partial least squares (PLS) x-scores derived from those data. Model metrics and cross-validation indicate that the integrated models using the Freeman–Durden components and PLS x-scores improve AGB estimates for both wetland vegetation types. In our study domain over Louisiana’s Wax Lake Delta (WLD), we estimated a mean herbaceous wetland AGB of 3.58 megagrams/hectare (Mg/ha) and a total of 3551.31 Mg over 9.92 km2, and a mean forested wetland AGB of 294.78 Mg/ha and a total of 27,499.14 Mg over 0.93 km2. While the addition of SAR-derived values to imaging spectrometer data provides a nominal error decrease for herbaceous wetland AGB, this combination significantly improves forested wetland AGB prediction. This integrative approach is particularly effective in forested wetlands, as canopy-level biochemical characteristics are captured by the imaging spectrometer in addition to the variable structural information measured by the SAR.

Open Access Article
Multi-Source Data Fusion Based on Ensemble Learning for Rapid Building Damage Mapping during the 2018 Sulawesi Earthquake and Tsunami in Palu, Indonesia
Remote Sens. 2019, 11(7), 886; https://doi.org/10.3390/rs11070886 - 11 Apr 2019
Cited by 14
Abstract
This work presents a detailed analysis of building damage recognition, employing multi-source data fusion and ensemble learning algorithms for rapid damage mapping tasks. A damage classification framework is introduced and tested to categorize the building damage following the recent 2018 Sulawesi earthquake and tsunami. Three robust ensemble learning classifiers were investigated for recognizing building damage from Synthetic Aperture Radar (SAR) and optical remote sensing datasets and their derived features. The contribution of each feature dataset was also explored, considering different combinations of sensors as well as their temporal information. SAR scenes acquired by the ALOS-2 PALSAR-2 and Sentinel-1 sensors were used. The optical Sentinel-2 and PlanetScope sensors were also included in this study. A non-local filter was used in the preprocessing phase to enhance the SAR features. Our results demonstrate that the canonical correlation forests classifier performs better than the other classifiers. In the data fusion analysis, Digital Elevation Model (DEM)- and SAR-derived features contributed the most to the overall damage classification. Our proposed mapping framework successfully classifies four levels of building damage (with overall accuracy >90%, average accuracy >67%). The proposed framework learned the damage patterns from the limited available human-interpreted building damage annotations and expanded this information to map a larger affected area. This process, including the pre- and post-processing phases, was completed in about 3 h after acquiring all raw datasets.
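The multi-source fusion workflow above can be sketched as follows. Canonical correlation forests are not available in scikit-learn, so a random forest stands in; all data are synthetic and the feature blocks, sizes, and label rule are illustrative assumptions, not values from the paper.

```python
# Sketch: fusing pre/post-event SAR change features, optical bands, and DEM
# derivatives for four-level building damage classification. A random forest
# stands in for the canonical correlation forests used in the paper.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 600
sar_pre = rng.normal(size=(n, 4))                    # pre-event backscatter
sar_post = sar_pre + rng.normal(scale=0.5, size=(n, 4))
sar_diff = sar_post - sar_pre                        # temporal change features
optical = rng.normal(size=(n, 6))                    # post-event optical bands
dem = rng.normal(size=(n, 2))                        # elevation and slope

# Four synthetic damage levels from quartiles of the SAR change signal.
change = sar_diff.sum(axis=1)
y = np.digitize(change, np.quantile(change, [0.25, 0.5, 0.75]))

X = np.hstack([sar_pre, sar_post, sar_diff, optical, dem])
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_tr, y_tr)
acc = clf.score(X_te, y_te)
```

Stacking per-sensor and per-date feature blocks into one design matrix, as above, is what lets the ensemble's feature importances reveal which source (here the SAR change block) drives the classification.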

Open Access Article
Deep Convolutional Capsule Network for Hyperspectral Image Spectral and Spectral-Spatial Classification
Remote Sens. 2019, 11(3), 223; https://doi.org/10.3390/rs11030223 - 22 Jan 2019
Cited by 21
Abstract
Capsule networks can be considered the next era of deep learning and have recently shown their advantages in supervised classification. Instead of using scalar values to represent features, capsule networks use vectors, which enriches the feature representation capability. This paper introduces a deep capsule network for hyperspectral image (HSI) classification to improve on the performance of conventional convolutional neural networks (CNNs). Furthermore, a modification of the capsule network named Conv-Capsule is proposed. Instead of full connections, local connections and shared transform matrices, which are the core ideas of CNNs, are used in the Conv-Capsule network architecture. In Conv-Capsule, the number of trainable parameters is reduced compared to the original capsule network, which potentially mitigates overfitting when the number of available training samples is limited. Specifically, we propose two schemes: (1) a 1D deep capsule network is designed for spectral classification, as a combination of principal component analysis, a CNN, and the Conv-Capsule network, and (2) a 3D deep capsule network is designed for spectral-spatial classification, as a combination of extended multi-attribute profiles, a CNN, and the Conv-Capsule network. The proposed classifiers are tested on three widely used hyperspectral data sets. The obtained results reveal that the proposed models provide competitive results compared to state-of-the-art methods, including kernel support vector machines, CNNs, and recurrent neural networks.
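The vector-valued features mentioned above rest on the standard capsule "squash" nonlinearity. A minimal NumPy sketch (not the paper's Conv-Capsule implementation) of how a capsule's output length is mapped into [0, 1) while its direction is preserved:

```python
# Sketch: the "squash" nonlinearity at the core of capsule networks. Each
# capsule outputs a vector; squash rescales it so its length lies in [0, 1)
# and can be read as an existence probability.
import numpy as np

def squash(s, axis=-1, eps=1e-9):
    """Scale vector s by ||s||^2 / (1 + ||s||^2) / ||s||, keeping direction."""
    sq_norm = np.sum(s * s, axis=axis, keepdims=True)
    return (sq_norm / (1.0 + sq_norm)) * s / np.sqrt(sq_norm + eps)

caps = np.array([[3.0, 4.0],    # a confident capsule (long vector)
                 [0.1, 0.0]])   # a weak capsule (short vector)
out = squash(caps)
# long vectors approach unit length; short ones shrink toward zero
```

This is the reason a capsule can encode both the presence of a feature (vector length) and its properties (vector orientation) in one activation.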

Open Access Article
Hyperspectral Feature Extraction Using Sparse and Smooth Low-Rank Analysis
Remote Sens. 2019, 11(2), 121; https://doi.org/10.3390/rs11020121 - 10 Jan 2019
Cited by 12
Abstract
In this paper, we develop a hyperspectral feature extraction method called sparse and smooth low-rank analysis (SSLRA). First, we propose a new low-rank model for hyperspectral images (HSIs) in which we decompose the HSI into smooth and sparse components. Then, these components are simultaneously estimated using a nonconvex constrained penalized cost function (CPCF). The proposed CPCF exploits a total variation penalty, an ℓ1 penalty, and an orthogonality constraint. The total variation penalty is used to promote piecewise smoothness and therefore extracts spatial (local neighborhood) information. The ℓ1 penalty encourages sparse and spatial structures. Additionally, we show that this new type of decomposition improves the classification of HSIs. In the experiments, SSLRA was applied to the Houston (urban) and Trento (rural) datasets. The extracted features were used as input to a classifier (either support vector machines (SVM) or random forest (RF)) to produce the final classification map. The results confirm an improvement in classification accuracy over state-of-the-art feature extraction approaches.
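The two penalties named in this abstract have simple, standard building blocks. A NumPy sketch (not the authors' nonconvex solver) of the ℓ1 proximal operator and an anisotropic total-variation measure:

```python
# Sketch: soft-thresholding (the proximal operator of the l1 penalty, which
# promotes sparsity) and an anisotropic total-variation measure (which
# penalizes differences between neighboring pixels, promoting smoothness).
import numpy as np

def soft_threshold(x, lam):
    """Prox of lam * ||x||_1: shrink entries toward zero, zeroing small ones."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def total_variation(img):
    """Sum of absolute differences between neighboring pixels."""
    return np.abs(np.diff(img, axis=0)).sum() + np.abs(np.diff(img, axis=1)).sum()

x = np.array([-2.0, -0.3, 0.0, 0.4, 1.5])
sparse_x = soft_threshold(x, 0.5)               # -> [-1.5, 0., 0., 0., 1.]
smoothness = total_variation(np.ones((4, 4)))   # constant image -> 0.0
```

In a proximal-splitting solver, a step like `soft_threshold` handles the sparse component while a TV-prox step handles the smooth component, which matches the decomposition described above.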

Open Access Feature Paper Article
MsRi-CCF: Multi-Scale and Rotation-Insensitive Convolutional Channel Features for Geospatial Object Detection
Remote Sens. 2018, 10(12), 1990; https://doi.org/10.3390/rs10121990 - 08 Dec 2018
Cited by 14
Abstract
Geospatial object detection is a fundamental but challenging problem in the remote sensing community. Although deep learning has shown its power in extracting discriminative features, there is still room for improvement in its detection performance, particularly for objects with large variations in scale and direction. To this end, a novel approach, entitled multi-scale and rotation-insensitive convolutional channel features (MsRi-CCF), is proposed for geospatial object detection by integrating robust low-level feature generation, classifier generation with outlier removal, and detection with a power law. The low-level feature generation step consists of rotation-insensitive and multi-scale convolutional channel features, obtained by learning a regularized convolutional neural network (CNN) and integrating multi-scale convolutional feature maps, followed by fine-tuning of the high-level connections in the CNN. These generated features were then fed into AdaBoost (chosen for its lower computation and storage costs) with outlier removal to construct an object detection framework that facilitates robust classifier training. In the test phase, we adopted a log-space sampling approach instead of fine-scale sampling, using the fast feature pyramid strategy based on a computable power law. Extensive experimental results demonstrate that, compared with several state-of-the-art baselines, the proposed MsRi-CCF approach yields better detection results, with 90.19% precision on the satellite dataset and 81.44% average precision on the NWPU VHR-10 dataset. Importantly, MsRi-CCF incurs only a small computational cost of 0.92 s and 0.7 s per test image on the two datasets. Furthermore, we determined that most previous methods fail to achieve acceptable detection performance when they face obstacles such as deformations of objects (e.g., rotation, illumination, and scaling). These factors are effectively addressed by MsRi-CCF, yielding a robust geospatial object detection method.
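The log-space sampling and power-law idea mentioned in the test phase can be sketched in a few lines. The exponent and feature value below are illustrative assumptions; fast feature pyramids compute features once per octave and extrapolate the intermediate scales rather than recomputing them.

```python
# Sketch: log-space scale sampling with a power-law approximation, in the
# spirit of fast feature pyramids. A feature statistic measured at scale 1
# is extrapolated to nearby scales via f(s) ~ f(1) * s**(-lam).
import numpy as np

per_octave, n_octaves = 8, 3
scales = 2.0 ** (-np.arange(per_octave * n_octaves) / per_octave)
lam = 0.1                      # channel-specific power-law exponent (assumed)
f_at_scale_1 = 10.0            # feature statistic at full resolution (assumed)
approx = f_at_scale_1 * scales ** (-lam)   # extrapolated, not recomputed
```

The saving is that only one expensive feature computation per octave is needed; everything between octaves is filled in by the power law, which is why detection time stays low.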

Open Access Feature Paper Article
Hyperspectral and LiDAR Fusion Using Deep Three-Stream Convolutional Neural Networks
Remote Sens. 2018, 10(10), 1649; https://doi.org/10.3390/rs10101649 - 16 Oct 2018
Cited by 14
Abstract
Recently, convolutional neural networks (CNNs) have been intensively investigated for the classification of remote sensing data by extracting invariant and abstract features suitable for classification. In this paper, a novel framework is proposed for the fusion of hyperspectral images and LiDAR-derived elevation data based on CNNs and composite kernels. First, extinction profiles are applied to both data sources in order to extract spatial and elevation features from the hyperspectral and LiDAR-derived data, respectively. Second, a three-stream CNN is designed to extract informative spectral, spatial, and elevation features individually from both available sources. The combination of extinction profiles and CNN features enables us to jointly benefit from low-level and high-level features to improve classification performance. To fuse the heterogeneous spectral, spatial, and elevation features extracted by the CNN, instead of a simple stacking strategy, a multi-sensor composite kernels (MCK) scheme is designed. This scheme helps us to achieve higher spectral, spatial, and elevation separability of the extracted features and effectively perform multi-sensor data fusion in kernel space. In this context, a support vector machine and an extreme learning machine, each with its composite-kernel version, are employed to produce the final classification result. The proposed framework is evaluated on two widely used data sets with different characteristics: an urban data set captured over Houston, USA, and a rural data set captured over Trento, Italy. The proposed framework yields the highest overall accuracies of 92.57% and 97.91% for the Houston and Trento data sets, respectively. Experimental results confirm that the proposed fusion framework can produce competitive results in both urban and rural areas in terms of classification accuracy, and significantly mitigates the salt-and-pepper noise in classification maps.
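The composite-kernel fusion described above can be sketched with scikit-learn. The feature blocks, weights, and gamma values are illustrative assumptions (the paper combines spectral, spatial, and elevation features; only two blocks are shown here), and the data are synthetic.

```python
# Sketch: a multi-sensor composite kernel, i.e., a convex combination of
# per-source RBF kernels passed to an SVM as a precomputed Gram matrix.
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel
from sklearn.svm import SVC

rng = np.random.default_rng(2)
n = 200
spectral = rng.normal(size=(n, 10))     # e.g., hyperspectral-derived features
elevation = rng.normal(size=(n, 2))     # e.g., LiDAR-derived features
y = (spectral[:, 0] + elevation[:, 0] > 0).astype(int)

# A weighted sum of valid kernels is itself a valid kernel, so each source
# keeps its own similarity measure before fusion in kernel space.
K = 0.7 * rbf_kernel(spectral, gamma=0.1) + 0.3 * rbf_kernel(elevation, gamma=0.5)
svm = SVC(kernel="precomputed").fit(K, y)
train_acc = svm.score(K, y)
```

The design point is that fusion happens in kernel space rather than by stacking raw features, so each sensor's features can use a bandwidth suited to their own scale.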

Open Access Feature Paper Article
Very Deep Convolutional Neural Networks for Complex Land Cover Mapping Using Multispectral Remote Sensing Imagery
Remote Sens. 2018, 10(7), 1119; https://doi.org/10.3390/rs10071119 - 14 Jul 2018
Cited by 100
Abstract
Despite recent advances in deep Convolutional Neural Networks (CNNs) on various computer vision tasks, their potential for the classification of multispectral remote sensing images has not been thoroughly explored. In particular, applications of deep CNNs to optical remote sensing data have focused on the classification of very high-resolution aerial and satellite data, owing to the similarity of these data to the large datasets used in computer vision. Accordingly, this study presents a detailed investigation of state-of-the-art deep learning tools for the classification of complex wetland classes using multispectral RapidEye optical imagery. Specifically, we examine the capacity of seven well-known deep convnets, namely DenseNet121, InceptionV3, VGG16, VGG19, Xception, ResNet50, and InceptionResNetV2, for wetland mapping in Canada. In addition, the classification results obtained from the deep CNNs are compared with those of conventional machine learning tools, including Random Forest and Support Vector Machine, to further evaluate the efficiency of the former for classifying wetlands. The results illustrate that full training of the convnets using five spectral bands outperforms the other strategies for all convnets. InceptionResNetV2, ResNet50, and Xception are distinguished as the top three convnets, providing state-of-the-art classification accuracies of 96.17%, 94.81%, and 93.57%, respectively. The classification accuracies obtained using Support Vector Machine (SVM) and Random Forest (RF) are 74.89% and 76.08%, respectively, considerably inferior to those of the CNNs. Importantly, InceptionResNetV2 is consistently found to be superior to all other convnets, suggesting that the integration of Inception and ResNet modules is an efficient architecture for classifying complex remote sensing scenes such as wetlands.
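One practical detail behind training ImageNet-style convnets on five-band imagery such as RapidEye is adapting a three-channel (RGB) pretrained stem convolution to five input bands. A common trick (not necessarily the one used in the paper) is to tile and rescale the pretrained channel weights; a NumPy sketch with illustrative shapes:

```python
# Sketch: adapting pretrained RGB first-layer convolution weights to a
# 5-band multispectral input by tiling channels and rescaling so the
# expected activation magnitude is preserved. Shapes follow a hypothetical
# 7x7 stem convolution with 64 output channels.
import numpy as np

rng = np.random.default_rng(3)
w_rgb = rng.normal(size=(64, 3, 7, 7))      # (out_ch, in_ch=3, k, k)

n_bands = 5
reps = -(-n_bands // 3)                     # ceil(5 / 3) = 2 copies of RGB
w_ms = np.tile(w_rgb, (1, reps, 1, 1))[:, :n_bands] * (3.0 / n_bands)
```

The 3/5 rescaling keeps the sum of channel contributions roughly comparable to the original three-channel layer, so the pretrained downstream weights still see inputs of the expected magnitude.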
