Special Issue "Advanced Machine Learning for Time Series Remote Sensing Data Analysis"

A special issue of Remote Sensing (ISSN 2072-4292). This special issue belongs to the section "Remote Sensing Image Processing".

Deadline for manuscript submissions: closed (31 May 2020).

Special Issue Editors

Prof. Dr. Gwanggil Jeon
Guest Editor
Department of Embedded Systems Engineering, Incheon National University, 119 Academy-ro, Yeonsu-gu, Incheon, 22012, Korea
Interests: image/signal processing; entropy coding; artificial intelligence; color image processing; machine learning; remote sensing; hyperspectral imaging; data fusion
Dr. Valerio Bellandi
Guest Editor
Dipartimento di Informatica (DI), Università degli Studi di Milano, Via Celoria 18, Milano (MI) 20133, Italy
Interests: remote sensing; image analysis; computer vision; pattern recognition; machine learning
Dr. Abdellah Chehri
Guest Editor
Département des Sciences Appliquées, Université du Québec à Chicoutimi, 555, boul. de l’Université, Chicoutimi, Québec G7H 2B1, Canada
Interests: big data; smart and sustainable cities; urban innovation system; urban knowledge and innovation spaces; knowledge-based development

Special Issue Information

Dear Colleagues,

Remote sensing is a fundamental tool for understanding the Earth and supporting human-Earth interactions. In recent years, advanced machine learning techniques for time series remote sensing data processing have been applied to real-life problems with great success. Remote sensing and Earth observation, for example, make it possible to monitor natural resources and environments. Improved temporal resolution yields big data (such as enormous collections of images) for a given location, making time series analysis, and eventually real-time estimation of scene dynamics, feasible. Three research directions are suggested: (1) techniques for generating time series image datasets, (2) feature extraction techniques for time series imagery, and (3) applications of time series image processing to real-world domains such as land cover, climate, disturbance attribution, vegetation dynamics, and urbanization. This Special Issue aims to report the latest advances and trends in applying advanced machine learning techniques to time series remote sensing data processing. Papers of both a theoretical and an applied nature are welcome, as are contributions presenting new advanced machine learning techniques to the remote sensing research community. Major topics of interest, by no means exclusive, are:

  • Time series remote sensing data processing
  • Machine learning techniques for data science and remote sensing
  • Image processing techniques for big data remote sensing
  • Large-scale datasets for training and testing machine learning solutions to remote sensing issues
  • Time series machine learning with scarce or low-quality remote sensing data, transfer learning, cross-sensor learning

Prof. Dr. Gwanggil Jeon
Dr. Valerio Bellandi
Dr. Abdellah Chehri
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once registered, go to the submission form. Manuscripts can be submitted until the deadline. All papers will be peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for the submission of manuscripts are available on the Instructions for Authors page. Remote Sensing is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2200 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

Time series remote sensing

Time series transfer learning

Time series cross-sensor learning

Machine learning for remote sensing

Big data processing for remote sensing

Large-scale datasets

Published Papers (9 papers)


Editorial


Open Access Editorial
Editorial for the Special Issue “Advanced Machine Learning for Time Series Remote Sensing Data Analysis”
Remote Sens. 2020, 12(17), 2815; https://doi.org/10.3390/rs12172815 - 31 Aug 2020
Abstract
This Special Issue intended to probe the impact of the adoption of advanced machine learning methods in remote sensing applications, including those considering recent big data analysis, compression, multichannel, sensor, and prediction techniques. In principle, this edition of the Special Issue is focused on time series data processing for remote sensing applications, with special emphasis on advanced machine learning platforms. This issue is intended to provide a highly recognized international forum to present recent advances in time series remote sensing. After review, a total of eight papers were accepted for publication in this issue.

Research


Open Access Article
Atmospheric Correction Thresholds for Ground-Based Radar Interferometry Deformation Monitoring Estimated Using Time Series Analyses
Remote Sens. 2020, 12(14), 2236; https://doi.org/10.3390/rs12142236 - 12 Jul 2020
Cited by 1
Abstract
Ground-based radar interferometry (GBSAR) is a useful method to monitor the stability of engineering objects and elements of geographical spaces at risk of deformation or displacement. To secure accurate and credible measurement results, it is crucial to consider atmospheric conditions, as they influence the corrections to distance measurements. These conditions are especially important considering the radar bandwidth used. Measurements of the stability of engineering objects are not always performed in locations where meteorological monitoring is prevalent; however, information about the range of variability in atmospheric corrections is always welcome. The authors present a hybrid method to estimate the probable need for atmospheric corrections, which allows partly eliminating false-positive deformation alarms caused by atmospheric fluctuations. Unlike the numerous publications on atmospheric reductions focused on the current state of the atmosphere, the proposed solution applies a classic machine learning algorithm built on the SARIMAX (Seasonal Autoregressive Integrated Moving Average with eXogenous variables) time series model to satellite data shared by NASA (National Aeronautics and Space Administration) from the MODIS (Moderate Resolution Imaging Spectroradiometer) mission before performing residual estimation during the monitoring phase. Example calculations (proof of concept) were made for ten years of satellite data covering a region of experimental flood bank stability observations performed using the IBIS-L (Image by Interferometric Survey—Landslide) radar and for target monitoring data (ground measurements).
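The residual-based alarm idea in this abstract can be sketched with a deliberately simplified model. The snippet below is not the authors' SARIMAX pipeline: it fits a plain AR(1) model with one exogenous covariate by least squares (the names `fit_arx` and `residual_alarm` are illustrative) and flags epochs whose residual exceeds a k-sigma threshold, the same logic used to separate atmospheric fluctuation from candidate deformation.

```python
import numpy as np

def fit_arx(y, x):
    """Fit y[t] = c + phi*y[t-1] + beta*x[t] by least squares.

    A toy stand-in for the paper's SARIMAX model: one autoregressive
    lag plus one exogenous (atmospheric) covariate.
    """
    Y = y[1:]
    A = np.column_stack([np.ones(len(Y)), y[:-1], x[1:]])
    coef, *_ = np.linalg.lstsq(A, Y, rcond=None)
    return coef  # (c, phi, beta)

def residual_alarm(y, x, coef, k=3.0):
    """Flag epochs whose residual exceeds k sigma: candidate
    deformation signals the fitted model cannot explain."""
    c, phi, beta = coef
    pred = c + phi * y[:-1] + beta * x[1:]
    res = y[1:] - pred
    return np.abs(res) > k * res.std()
```

In the monitoring phase, an epoch that trips the alarm would then be inspected for genuine deformation rather than being reported immediately.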

Open Access Article
Unsupervised Satellite Image Time Series Clustering Using Object-Based Approaches and 3D Convolutional Autoencoder
Remote Sens. 2020, 12(11), 1816; https://doi.org/10.3390/rs12111816 - 04 Jun 2020
Cited by 1
Abstract
Nowadays, satellite image time series (SITS) analysis has become an indispensable part of many research projects, as the quantity of freely available remotely sensed data increases every day. However, with growing image resolution, pixel-level SITS analysis approaches have been replaced by more efficient ones leveraging object-based data representations. Unfortunately, the segmentation of a full time series can be a complicated task, as some objects undergo important variations from one image to another and can also appear and disappear. In this paper, we propose an algorithm that performs both segmentation and clustering of SITS. It is achieved by using a compressed SITS representation obtained with a multi-view 3D convolutional autoencoder. First, a unique segmentation map is computed for the whole SITS. Then, the extracted spatio-temporal objects are clustered using their encoded descriptors. The proposed approach was evaluated on two real-life datasets and outperformed the state-of-the-art methods.
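The clustering stage described above (grouping spatio-temporal objects by their encoded descriptors) can be illustrated with plain k-means; the autoencoder itself is not reproduced here, and `desc` simply stands in for the compressed object codes.

```python
import numpy as np

def kmeans(desc, k, iters=50, seed=0):
    """Plain k-means over per-object latent descriptors.

    `desc` (n_objects x code_dim) plays the role of the autoencoder's
    compressed spatio-temporal object descriptors.
    """
    rng = np.random.default_rng(seed)
    centers = desc[rng.choice(len(desc), k, replace=False)]
    for _ in range(iters):
        # Distance of every descriptor to every center, then assign.
        d = np.linalg.norm(desc[:, None] - centers[None], axis=2)
        labels = d.argmin(axis=1)
        # Move each center to the mean of its assigned descriptors.
        for j in range(k):
            if np.any(labels == j):
                centers[j] = desc[labels == j].mean(axis=0)
    return labels, centers
```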

Open Access Article
Two-Path Network with Feedback Connections for Pan-Sharpening in Remote Sensing
Remote Sens. 2020, 12(10), 1674; https://doi.org/10.3390/rs12101674 - 23 May 2020
Cited by 1
Abstract
High-resolution multi-spectral images are desired for applications in remote sensing. However, optical remote sensing satellites can provide multi-spectral images only at low resolution. Pan-sharpening aims to generate high-resolution multi-spectral (MS) images from a panchromatic (PAN) image and the low-resolution counterpart. Conventional deep-learning-based pan-sharpening methods process the panchromatic and the low-resolution image in a feedforward manner, where shallow layers fail to access useful information from deep layers. To make full use of the powerful deep features, which have strong representation ability, we propose a two-path network with feedback connections, through which the deep features can be rerouted to refine the shallow features in a feedback manner. Specifically, we leverage the structure of a recurrent neural network to pass the feedback information. In addition, a powerful feature extraction block with multiple projection pairs is designed to handle the feedback information and to produce powerful deep features. Extensive experimental results show the effectiveness of the proposed method.
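For contrast with the learned two-path network, the snippet below sketches the classical component-substitution baseline that deep pan-sharpening methods aim to improve on: upsample the MS bands and rescale them so their per-pixel intensity matches the PAN image. The function name and the pixel-replication upsampling are illustrative choices, not the paper's method.

```python
import numpy as np

def pan_sharpen_cs(ms_lr, pan):
    """Toy component-substitution pan-sharpening.

    ms_lr: (h, w, bands) low-resolution multi-spectral image.
    pan:   (H, W) high-resolution panchromatic image, H = scale * h.
    """
    scale = pan.shape[0] // ms_lr.shape[0]
    # Upsample by pixel replication (nearest neighbour).
    ms_up = ms_lr.repeat(scale, axis=0).repeat(scale, axis=1)
    # Approximate intensity as the band mean, then substitute PAN.
    intensity = ms_up.mean(axis=2, keepdims=True)
    eps = 1e-6  # avoid division by zero on dark pixels
    return ms_up * (pan[..., None] / (intensity + eps))
```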

Open Access Article
Establishing an Empirical Model for Surface Soil Moisture Retrieval at the U.S. Climate Reference Network Using Sentinel-1 Backscatter and Ancillary Data
Remote Sens. 2020, 12(8), 1242; https://doi.org/10.3390/rs12081242 - 13 Apr 2020
Cited by 2
Abstract
Progress in sensor technologies has allowed real-time monitoring of soil water, but modelling soil water content from remote sensing data remains a challenge. Here, we retrieved and modelled surface soil moisture (SSM) at the U.S. Climate Reference Network (USCRN) stations using Sentinel-1 backscatter data from 2016 to 2018 and ancillary data. Empirical machine learning models were established between soil water content measured at the USCRN stations and Sentinel-1 data from 2016 to 2017, the National Land Cover Dataset, terrain parameters, and Polaris soil data, and were evaluated in 2018 at the same USCRN stations. The Cubist model performed better than multiple linear regression (MLR) and the Random Forest (RF) model (R² = 0.68 and RMSE = 0.06 m³ m⁻³ for validation). The Cubist model performed best in Shrub/Scrub, followed by Herbaceous and Cultivated Crops, but poorly in Hay/Pasture. The success of SSM retrieval was mostly attributed to soil properties, followed by Sentinel-1 backscatter data, terrain parameters, and land cover. The approach shows the potential for retrieving SSM using Sentinel-1 data in combination with high-resolution ancillary data across the conterminous United States (CONUS). Future work is required to improve model performance by including more SSM network measurements and assimilating Sentinel-1 data with other microwave, optical, and thermal remote sensing products. There is also a need to improve the spatial resolution and accuracy of land surface parameter products (e.g., soil properties and terrain parameters) at the regional and global scales.
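The MLR baseline mentioned in the abstract is easy to sketch (Cubist and Random Forest are not reproduced here); the feature columns, such as backscatter, slope, or clay fraction, are placeholders, and the function names are illustrative.

```python
import numpy as np

def fit_mlr(features, ssm):
    """Ordinary least squares fit of SSM on predictor columns
    (e.g. Sentinel-1 sigma0, terrain slope, soil texture)."""
    A = np.column_stack([np.ones(len(features)), features])
    coef, *_ = np.linalg.lstsq(A, ssm, rcond=None)
    return coef  # intercept first, then one slope per feature

def predict_mlr(features, coef):
    return coef[0] + features @ coef[1:]

def rmse(y, yhat):
    """Root mean square error, the validation metric the paper reports."""
    return float(np.sqrt(np.mean((y - yhat) ** 2)))
```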

Open Access Article
A Sequential Autoencoder for Teleconnection Analysis
Remote Sens. 2020, 12(5), 851; https://doi.org/10.3390/rs12050851 - 06 Mar 2020
Cited by 1
Abstract
Many aspects of the Earth system are known to have preferred patterns of variability, variously known in the atmospheric sciences as modes or teleconnections. Approaches to discovering these patterns have included principal components analysis and empirical orthogonal teleconnection (EOT) analysis. The latter is very effective but computationally intensive. Here, we present a sequential autoencoder for teleconnection analysis (SATA). Like EOT, it discovers teleconnections sequentially, with subsequent analyses being based on residual series. However, unlike EOT, SATA uses a basic linear autoencoder as the primary tool for analysis. An autoencoder is an unsupervised neural network that learns an efficient neural representation of input data. With SATA, the input is an image time series and the neural representation is a unidimensional time series. SATA then locates the 0.5% of locations most strongly correlated with the neural representation and averages their temporal vectors to characterize the teleconnection. Evaluation of the procedure showed that it is several orders of magnitude faster than other approaches to EOT, produces teleconnection patterns that are more strongly correlated with well-known teleconnections, and is particularly effective in finding teleconnections with multiple centers of action (such as dipoles).
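Because a linear autoencoder with a one-unit bottleneck learns, up to scaling, the leading principal component, the procedure described above can be approximated in a few lines: extract the dominant temporal mode, then average the temporal vectors of the most correlated locations. This is a sketch of the idea under that assumption, not the authors' implementation.

```python
import numpy as np

def leading_mode(X, iters=200):
    """Dominant temporal mode of X (time x locations) by power
    iteration; the linear-autoencoder bottleneck reduces to this."""
    Xc = X - X.mean(axis=0)
    v = np.ones(Xc.shape[1]) / np.sqrt(Xc.shape[1])
    for _ in range(iters):
        v = Xc.T @ (Xc @ v)
        v /= np.linalg.norm(v)
    return Xc @ v  # the unidimensional "neural representation"

def teleconnection(X, z, top_frac=0.005):
    """Average the temporal vectors of the locations most correlated
    (in absolute value) with z, as the abstract describes."""
    Xc = X - X.mean(axis=0)
    zc = z - z.mean()
    corr = (Xc * zc[:, None]).sum(0) / (
        np.linalg.norm(Xc, axis=0) * np.linalg.norm(zc) + 1e-12)
    k = max(1, int(top_frac * X.shape[1]))
    idx = np.argsort(-np.abs(corr))[:k]
    return X[:, idx].mean(axis=1)
```

A full SATA-style analysis would subtract the fitted pattern and repeat on the residual series to find the next teleconnection.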

Open Access Article
Simulated Data to Estimate Real Sensor Events—A Poisson-Regression-Based Modelling
Remote Sens. 2020, 12(5), 771; https://doi.org/10.3390/rs12050771 - 28 Feb 2020
Cited by 1
Abstract
Automatic detection and recognition of Activities of Daily Living (ADL) are crucial for providing effective care to frail older adults living alone. A step forward in addressing this challenge is the deployment of smart home sensors capturing the intrinsic nature of the ADLs these people perform. As the real-life scenario is characterized by a comprehensive range of ADLs and smart home layouts, deviations are expected in the number of sensor events per activity (SEPA), a variable often used for training activity recognition models. Such models, however, rely on the availability of suitable and representative data, whose collection is habitually expensive and resource-intensive. Simulation tools are an alternative for tackling these barriers; nonetheless, an ongoing challenge is their ability to generate synthetic data representing the real SEPA. Hence, this paper proposes the use of Poisson regression modelling to transform simulated data into a better approximation of real SEPA. First, synthetic and real data were compared to verify the equivalence hypothesis. Then, several Poisson regression models were formulated for estimating real SEPA using simulated data. The outcomes revealed that real SEPA can be better approximated (R²_pred = 92.72%) if synthetic data are post-processed through Poisson regression incorporating dummy variables.
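A minimal Poisson regression (log link) fitted by iteratively reweighted least squares shows the kind of model the paper applies to map simulated SEPA counts onto real ones; dummy variables would simply be extra 0/1 columns of `X`. Function names are illustrative.

```python
import numpy as np

def poisson_irls(X, y, iters=25):
    """Poisson regression with log link, fitted by IRLS.

    X: (n, p) predictors (e.g. simulated SEPA plus dummy columns).
    y: (n,) observed counts (real SEPA).
    """
    A = np.column_stack([np.ones(len(X)), X])
    beta = np.zeros(A.shape[1])
    for _ in range(iters):
        mu = np.exp(A @ beta)
        W = mu                      # Poisson variance equals the mean
        z = A @ beta + (y - mu) / mu  # working response
        beta = np.linalg.solve(A.T @ (A * W[:, None]), A.T @ (W * z))
    return beta

def poisson_predict(X, beta):
    A = np.column_stack([np.ones(len(X)), X])
    return np.exp(A @ beta)
```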

Open Access Article
An Optimized Faster R-CNN Method Based on DRNet and RoI Align for Building Detection in Remote Sensing Images
Remote Sens. 2020, 12(5), 762; https://doi.org/10.3390/rs12050762 - 26 Feb 2020
Cited by 4
Abstract
In recent years, the growing number of satellites and UAVs (unmanned aerial vehicles) has multiplied the amount of remote sensing data available, but only a small part of it has been properly used; problems such as land planning, disaster management, and resource monitoring still need to be solved. Buildings in remote sensing images have distinctive positioning characteristics; thus, building detection can not only help the mapping and automatic updating of geographic information systems but also guide the detection of other types of ground objects in remote sensing images. To address the deficiencies of traditional building detection in remote sensing, an improved Faster R-CNN (region-based Convolutional Neural Network) algorithm is proposed in this paper, which adopts DRNet (Dense Residual Network) and RoI (Region of Interest) Align to utilize texture information and to solve region mismatch problems. The experimental results showed that this method reaches 82.1% mAP (mean average precision) for the detection of landmark buildings, with relatively accurate building bounding-box coordinates, improving the building detection results. Moreover, recognition of buildings in complex environments was also excellent.
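RoI Align, one of the two components the paper adopts, can be shown compactly: unlike RoI Pooling, it never quantizes box coordinates, instead bilinearly sampling the feature map at fractional positions. The sketch below uses one sample per output bin, a simplification of real implementations, which average several samples per bin.

```python
import numpy as np

def bilinear(feat, y, x):
    """Bilinearly sample feature map feat (H x W) at continuous (y, x)."""
    y0, x0 = int(np.floor(y)), int(np.floor(x))
    y1 = min(y0 + 1, feat.shape[0] - 1)
    x1 = min(x0 + 1, feat.shape[1] - 1)
    dy, dx = y - y0, x - x0
    return ((1 - dy) * (1 - dx) * feat[y0, x0]
            + (1 - dy) * dx * feat[y0, x1]
            + dy * (1 - dx) * feat[y1, x0]
            + dy * dx * feat[y1, x1])

def roi_align(feat, box, out=2):
    """RoI Align with one sample at the center of each output bin:
    box coordinates (y1, x1, y2, x2) stay continuous throughout."""
    y1, x1, y2, x2 = box
    hb, wb = (y2 - y1) / out, (x2 - x1) / out
    res = np.empty((out, out))
    for i in range(out):
        for j in range(out):
            res[i, j] = bilinear(feat, y1 + (i + 0.5) * hb,
                                 x1 + (j + 0.5) * wb)
    return res
```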

Open Access Article
Deep Learning-Based Drivers Emotion Classification System in Time Series Data for Remote Applications
Remote Sens. 2020, 12(3), 587; https://doi.org/10.3390/rs12030587 - 10 Feb 2020
Cited by 6
Abstract
Aggressive driving emotion is one of the major causes of traffic accidents throughout the world. Real-time classification of abnormal and normal driving in time series data is a keystone to avoiding road accidents. Existing work on driving behaviour in time series data has limitations and causes discomfort for users, which need to be addressed. We propose a multimodal method to remotely detect driver aggressiveness in order to deal with these issues. The proposed method is based on changes in the gaze and facial emotions of drivers while driving, using near-infrared (NIR) camera sensors and an illuminator installed in the vehicle. Drivers' aggressive and normal time series data are collected while playing car racing and truck driving computer games, respectively, on a driving game simulator. The Dlib program is used to obtain driver image data and to extract face, left-eye, and right-eye images for finding the change in gaze based on a convolutional neural network (CNN). Similarly, CNN-based facial emotions are obtained through lip, left-eye, and right-eye images extracted with the Dlib program. Finally, score-level fusion is applied to the scores obtained from the change in gaze and from facial emotions to classify aggressive and normal driving. The accuracy of the proposed method is measured through experiments using a self-constructed large-scale testing database, which show that the classification accuracy for the driver's change in gaze and facial emotions for aggressive and normal driving is high, and the performance is superior to that of previous methods.
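The final score-level fusion step is straightforward to illustrate; the sketch below uses a weighted sum with an illustrative weight and threshold, since the abstract does not specify these values.

```python
import numpy as np

def fuse_scores(gaze_score, emotion_score, w=0.5):
    """Weighted-sum score-level fusion of the two classifier outputs;
    the weight w is an illustrative choice, not the paper's value."""
    return w * np.asarray(gaze_score) + (1 - w) * np.asarray(emotion_score)

def classify(fused, threshold=0.5):
    """Label a drive as aggressive when the fused score crosses threshold."""
    return fused >= threshold
```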
