Editorial for the Special Issue “Advanced Machine Learning for Time Series Remote Sensing Data Analysis”

This Special Issue probes the impact of adopting advanced machine learning methods in remote sensing applications, including those that build on recent big data analysis, compression, multichannel, sensor, and prediction techniques. In principle, this edition of the Special Issue focuses on time series data processing for remote sensing applications, with special emphasis on advanced machine learning platforms. The issue is intended to provide a highly recognized international forum for presenting recent advances in time series remote sensing. After review, a total of eight papers were accepted for publication in this issue.


Introduction
Remote sensing is a fundamental tool for understanding the Earth and supporting human-Earth interactions. In recent years, advanced machine learning techniques for time series remote sensing data processing have been applied to real-life problems with great success. For example, remote sensing and Earth observation are essential for monitoring natural resources and environments. Improvements in temporal resolution have made big data (such as enormous collections of images) available for a given location, making time series data analysis, and eventually real-time estimation of scene dynamics, feasible. Three research directions are suggested: (1) techniques for generating time series image datasets, (2) extraction techniques for time series imagery, and (3) applications of time series image processing to real-world domains such as land, climate, disturbance attribution, vegetation dynamics, and urbanization.
In light of these and many other challenges, this Special Issue, "Advanced Machine Learning for Time Series Remote Sensing Data Analysis," has been dedicated to addressing the current status, challenges, and future research priorities of the remote sensing community.

Themes of This Special Issue
Starting from the above considerations, this Special Issue aims to report the latest advances and trends in advanced machine learning techniques for time series remote sensing data processing. It also investigates the impact of adopting advanced machine learning techniques in remote sensing applications, including those that take advantage of recent big data, compression, multichannel, sensor, and prediction techniques. This edition of the Special Issue focuses primarily on time series data processing for remote sensing applications, with special emphasis on advanced machine learning platforms, and is intended to provide a highly recognized international forum for presenting recent advances in time series remote sensing. We welcomed both theoretical contributions and papers describing interesting applications. Papers were invited on aspects of this problem including:
- Time series remote sensing data processing
- Machine learning techniques for data science and remote sensing
- Image processing techniques for big data remote sensing
- Large-scale datasets for training and testing machine learning solutions to remote sensing
- Time series machine learning with scarce or low-quality remote sensing data
- Transfer learning
- Cross-sensor learning
After review, a total of eight papers were accepted for publication in this issue.

Models
Progress in sensor technologies has enabled real-time monitoring of soil water, but modelling soil water content from remote sensing data remains a challenge. In the contribution by Chatterjee et al. [1], "Establishing an Empirical Model for Surface Soil Moisture Retrieval at the U.S. Climate Reference Network Using Sentinel-1 Backscatter and Ancillary Data," the authors retrieved and modeled surface soil moisture (SSM) at the U.S. Climate Reference Network (USCRN) stations using Sentinel-1 backscatter data from 2016 to 2018 and ancillary data. Empirical machine learning models were established between soil water content measured at the USCRN stations and Sentinel-1 data from 2016 to 2017, the National Land Cover Dataset, terrain parameters, and Polaris soil data, and were evaluated in 2018 at the same USCRN stations. The Cubist model performed better than multiple linear regression (MLR) and the Random Forest (RF) model; it performed best in shrub/scrub, followed by herbaceous and cultivated crops, but poorly in hay/pasture. The success of SSM retrieval was mostly attributable to soil properties, followed by Sentinel-1 backscatter data, terrain parameters, and land cover. The approach shows the potential for retrieving SSM using Sentinel-1 data in combination with high-resolution ancillary data across the conterminous United States (CONUS).
Automatic detection and recognition of Activities of Daily Living (ADL) are crucial for providing effective care to frail older adults living alone. A step forward in addressing this challenge is the deployment of smart home sensors capturing the intrinsic nature of the ADLs these people perform. As real-life scenarios span a comprehensive range of ADLs and smart home layouts, deviations are expected in the number of sensor events per activity (SEPA), a variable often used for training activity recognition models. Such models, however, rely on the availability of suitable and representative data collections, which are often expensive and resource-intensive to obtain. Simulation tools are an alternative for overcoming these barriers; nonetheless, an ongoing challenge is their ability to generate synthetic data representing the real SEPA. Hence, the contribution by Ortíz-Barrios et al. [2], "Simulated Data to Estimate Real Sensor Events-A Poisson-Regression-Based Modelling," proposes using Poisson regression modelling to transform simulated data into a better approximation of real SEPA. First, synthetic and real data were compared to verify the equivalence hypothesis. Then, several Poisson regression models were formulated for estimating real SEPA from simulated data. The outcomes revealed that real SEPA can be better approximated if synthetic data are post-processed through Poisson regression incorporating dummy variables.
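The core idea, fitting a Poisson model with activity dummy variables and using it to rescale simulated counts toward real SEPA, can be sketched as follows. This is a minimal illustration under invented data, not the authors' model: with purely categorical predictors, the Poisson maximum-likelihood fit has a closed form (the fitted rate for each activity group is simply its sample mean), so no iterative solver is needed; the activity names and rates here are hypothetical.

```python
import numpy as np

def fit_poisson_dummy(counts, groups):
    """Poisson regression log(E[y]) = b_g with one dummy per activity group.
    For purely categorical predictors the MLE is closed-form: the fitted
    rate per group equals the group's sample mean."""
    levels = np.unique(groups)
    rates = {g: counts[groups == g].mean() for g in levels}
    coefs = {g: np.log(rates[g]) for g in levels}
    return rates, coefs

def correct_simulated(sim_counts, sim_groups, rates_real, rates_sim):
    """Rescale simulated SEPA counts by the ratio of real to simulated
    fitted rates for each activity group."""
    factor = np.array([rates_real[g] / rates_sim[g] for g in sim_groups])
    return sim_counts * factor

# hypothetical real vs simulated sensor-event counts for two activities
rng = np.random.default_rng(0)
groups = np.repeat(["cooking", "sleeping"], 50)
real = rng.poisson(lam=np.where(groups == "cooking", 20, 5))
sim = rng.poisson(lam=np.where(groups == "cooking", 12, 8))

rates_real, _ = fit_poisson_dummy(real, groups)
rates_sim, _ = fit_poisson_dummy(sim, groups)
adjusted = correct_simulated(sim, groups, rates_real, rates_sim)
```

By construction, the adjusted simulated counts match the real per-activity mean SEPA, which is exactly the information a dummy-variable Poisson model encodes.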

Performance Improvement
Many aspects of the Earth's system are known to have preferred patterns of variability, variously known in the atmospheric sciences as modes or teleconnections. Approaches to discovering these patterns have included principal components analysis and empirical orthogonal teleconnection (EOT) analysis. The latter is very effective but is computationally intensive. In the contribution by He and Eastman [3] "A Sequential Autoencoder for Teleconnection Analysis," authors present a sequential autoencoder for teleconnection analysis (SATA). Like EOT, it discovers teleconnections sequentially, with subsequent analyses being based on residual series. However, unlike EOT, SATA uses a basic linear autoencoder as the primary tool for analysis. An autoencoder is an unsupervised neural network that learns an efficient neural representation of input data. With SATA, the input is an image time series and the neural representation is a unidimensional time series. SATA then locates the 0.5% of locations with the strongest correlation with the neural representation and averages their temporal vectors to characterize the teleconnection. Their evaluation of the procedure showed that it is several orders of magnitude faster than other approaches to EOT, produces teleconnection patterns that are more strongly correlated to well-known teleconnections, and is particularly effective in finding teleconnections with multiple centers of action.
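The SATA procedure described above, encode the image time series into a unidimensional series, take the 0.5% of locations most strongly correlated with it, average their temporal vectors, and pass residuals to the next round, can be sketched compactly. A hedged note: rather than training a linear autoencoder by gradient descent, this sketch uses power iteration for the leading principal component (a one-unit linear autoencoder learns the same subspace); the synthetic data and the regression-based residualization are illustrative choices, not the authors' exact implementation.

```python
import numpy as np

def first_component_scores(X, iters=100):
    """1-D linear 'autoencoder' bottleneck: power iteration for the leading
    principal component of the (time x pixels) matrix X."""
    Xc = X - X.mean(axis=0)
    C = Xc.T @ Xc
    v = np.ones(Xc.shape[1])
    for _ in range(iters):
        v = C @ v
        v /= np.linalg.norm(v)
    return Xc @ v  # unidimensional time series

def sata_step(X, frac=0.005):
    """One SATA iteration: encode, find the strongest-correlated locations,
    average their temporal vectors, and return residuals for the next step."""
    scores = first_component_scores(X)
    Xc = X - X.mean(axis=0)
    sc = (scores - scores.mean()) / scores.std()
    corr = (Xc / Xc.std(axis=0)).T @ sc / len(sc)  # per-pixel correlation
    k = max(1, int(frac * X.shape[1]))
    top = np.argsort(-np.abs(corr))[:k]
    tele = X[:, top].mean(axis=1)  # teleconnection index
    # remove the teleconnection by per-pixel regression; keep residuals
    t = tele - tele.mean()
    beta = Xc.T @ t / (t @ t)
    resid = X - np.outer(t, beta)
    return tele, top, resid

# synthetic image time series with a 20-pixel planted teleconnection
rng = np.random.default_rng(1)
steps = np.arange(240)
signal = np.sin(2 * np.pi * steps / 24)
X = rng.normal(size=(240, 400)) * 0.3
X[:, :20] += signal[:, None]
tele, top, resid = sata_step(X)
```

Subsequent teleconnections would be found by calling `sata_step` again on `resid`, mirroring the sequential, residual-based design the authors describe.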
In recent years, the growing number of satellites and UAVs (unmanned aerial vehicles) has multiplied the amount of remote sensing data available, but only a small part of it has been properly used; problems such as land planning, disaster management, and resource monitoring still need to be solved. Buildings in remote sensing images have distinctive positioning characteristics; thus, building detection can not only support the mapping and automatic updating of geographic information systems but also guide the detection of other types of ground objects in remote sensing images. To address the deficiencies of traditional building detection in remote sensing, an improved Faster R-CNN (region-based convolutional neural network) algorithm is proposed in the contribution by Bai et al. [4], "An Optimized Faster R-CNN Method Based on DRNet and RoI Align for Building Detection in Remote Sensing Images." This work adopts DRNet (dense residual network) and RoI (region of interest) Align to exploit texture information and to solve region mismatch problems. Their experimental results showed that the method reaches 82.1% mAP (mean average precision) for the detection of landmark buildings, with relatively accurate building-coordinate prediction boxes, improving the building detection results. Moreover, recognition of buildings in complex environments was also excellent.

Applications
High-resolution multi-spectral images are desired for applications in remote sensing. However, optical remote sensing satellites can only provide multi-spectral images at low resolution. In the contribution by Fu et al. [5], "Two-Path Network with Feedback Connections for Pan-Sharpening in Remote Sensing," the authors present a new pan-sharpening technique that generates high-resolution multi-spectral images from a panchromatic image and the low-resolution counterpart. Conventional deep-learning-based pan-sharpening methods process the panchromatic and low-resolution images in a feedforward manner, where shallow layers fail to access useful information from deep layers. To make full use of powerful deep features with strong representation ability, the authors propose a two-path network with feedback connections, through which deep features can be rerouted to refine the shallow features in a feedback manner. Specifically, the authors leverage the structure of a recurrent neural network to pass the feedback information. In addition, a powerful feature extraction block with multiple projection pairs is designed to handle the feedback information and to produce powerful deep features. Their extensive experimental results show the effectiveness of the proposed method.
Aggressive driving is one of the major causes of traffic accidents throughout the world. Real-time classification of abnormal and normal driving in time series data is a keystone of avoiding road accidents. Existing work on driving behaviors in time series data has limitations and causes user discomfort that need to be addressed. In the contribution by Naqvi et al. [6], "Deep Learning-Based Drivers Emotion Classification System in Time Series Data for Remote Applications," the authors propose a multimodal method to remotely detect driver aggressiveness in order to deal with these issues. The method is based on changes in the gaze and facial emotions of drivers while driving, captured using near-infrared camera sensors and an illuminator installed in the vehicle. Aggressive and normal driving time series data were collected from drivers playing car racing and truck driving computer games, respectively, on a driving game simulator. The Dlib library is used to process the driver's image data and extract face, left-eye, and right-eye images for detecting changes in gaze with a convolutional neural network. Similarly, facial emotions are obtained with a convolutional neural network from lip, left-eye, and right-eye images extracted with Dlib. Finally, score-level fusion is applied to the scores obtained from gaze changes and facial emotions to classify aggressive and normal driving. Measured in experiments on a self-constructed large-scale testing database, the classification accuracy for aggressive and normal driving based on the driver's gaze changes and facial emotions is high, and the performance is superior to that of previous methods.
Nowadays, satellite image time series (SITS) analysis has become an indispensable part of many research projects, as the quantity of freely available remotely sensed data increases every day. However, with growing image resolution, pixel-level SITS analysis approaches have been replaced by more efficient ones leveraging object-based data representations. Unfortunately, segmenting a whole time series may be a complicated task, as some objects undergo important variations from one image to another and can also appear and disappear. In the contribution by Kalinicheva et al. [7], "Unsupervised Satellite Image Time Series Clustering using Object-Based Approaches and 3D Convolutional Autoencoder," the authors propose an algorithm able to perform segmentation and clustering of SITS using a compressed SITS representation obtained with a multi-view 3D convolutional autoencoder. First, a unique segmentation map is computed for the whole SITS. Then, the extracted spatio-temporal objects are clustered using their encoded descriptors. The proposed approach was evaluated on two real-life datasets and outperformed state-of-the-art methods.
Ground-based radar interferometry is a useful method for monitoring the stability of engineering objects and elements of geographical space at risk of deformation or displacement. To secure accurate and credible measurement results, it is crucial to account for atmospheric conditions, as they influence the corrections applied to distance measurements; these conditions are especially important given the radar bandwidth used. Stability measurements of engineering objects are not always performed in locations where meteorological monitoring is prevalent; however, information about the range of variability in atmospheric corrections is always welcome. In the contribution by Owerko et al. [8], "Atmospheric Correction Thresholds for Ground-Based Radar Interferometry Deformation Monitoring Estimated Using Time Series Analyses," the authors present a hybrid method for estimating the probable need for atmospheric corrections, which partly eliminates false-positive deformation alarms caused by atmospheric fluctuations. Unlike the numerous publications on atmospheric reductions focused on the current state of the atmosphere, the proposed solution applies a classic machine learning algorithm designed for the SARIMAX time series model to satellite data shared by NASA during the Landsat MODIS mission, before performing residual estimation during the monitoring phase. Example calculations were made for ten years of satellite data covering a region of experimental flood bank stability observations performed with the IBIS-L radar, and for the target monitoring data.
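The residual-estimation idea, fit a time series model to historical data, then flag monitoring-phase residuals that exceed a learned threshold as probable atmospheric effects rather than deformations, can be sketched as follows. This is a deliberately simplified stand-in, not the authors' method: a least-squares AR(2) fit replaces the full SARIMAX model (which in practice would be fitted with a package such as statsmodels), and the historical series, the 3-sigma threshold, and the injected jump are all synthetic.

```python
import numpy as np

def fit_ar(series, p=2):
    """Least-squares AR(p) fit: a simplified stand-in for a SARIMAX model
    fitted to a historical atmospheric-proxy series."""
    Y = series[p:]
    X = np.column_stack([series[p - i - 1 : len(series) - i - 1] for i in range(p)])
    X = np.column_stack([np.ones(len(Y)), X])
    coef, *_ = np.linalg.lstsq(X, Y, rcond=None)
    return coef

def residuals(series, coef, p=2):
    """One-step-ahead residuals of the fitted AR(p) model on a series."""
    Y = series[p:]
    X = np.column_stack([series[p - i - 1 : len(series) - i - 1] for i in range(p)])
    X = np.column_stack([np.ones(len(Y)), X])
    return Y - X @ coef

# historical phase: learn the model and a residual threshold (3 sigma)
rng = np.random.default_rng(2)
hist = np.cumsum(rng.normal(0, 0.1, 500))  # synthetic atmospheric proxy
coef = fit_ar(hist)
thresh = 3 * residuals(hist, coef).std()

# monitoring phase: residuals above the threshold suggest an atmospheric
# correction is needed before declaring a deformation alarm
monitor = np.cumsum(rng.normal(0, 0.1, 100))
monitor[60:] += 2.0  # inject an abrupt atmospheric-like shift
flags = np.abs(residuals(monitor, coef)) > thresh
```

The design choice mirrored here is that the threshold is learned from the historical series alone, so the monitoring phase needs no concurrent meteorological observations, which is the practical constraint the contribution addresses.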

Conclusions
The articles presented in this Special Issue provide insights into fields related to time series remote sensing data analysis using advanced machine learning, including models, performance evaluation and improvement, and application development. We hope that readers can benefit from the insights provided by these papers and contribute to these rapidly growing areas. We also hope that this Special Issue sheds light on major developments in remote sensing and attracts the attention of the scientific community, prompting further investigations and the rapid implementation of these technologies.
Author Contributions: The authors contributed equally to this editorial. All authors have read and agreed to the published version of the manuscript.
Funding: This research received no external funding.