A Hybrid Deep Learning Framework for Unsupervised Anomaly Detection in Multivariate Spatio-Temporal Data

Abstract: Multivariate time-series data with a contextual spatial attribute are widely used for finding anomalous patterns in a variety of application domains such as earth science, hurricane tracking, fraud, and disease outbreak detection. In most settings, the spatial context is expressed in terms of ZIP code or region coordinates such as latitude and longitude. However, traditional anomaly detection techniques cannot handle more than one contextual attribute in a unified way. In this paper, a new hybrid approach based on deep learning is proposed to solve the anomaly detection problem in multivariate spatio-temporal datasets. It works under the assumption that no prior knowledge about the dataset and its anomalies is available. The architecture of the proposed hybrid framework is based on an autoencoder scheme, and it is more efficient at extracting features from spatio-temporal multivariate datasets than traditional spatio-temporal anomaly detection techniques. We conducted extensive experiments using 2005 buoy data from the National Data Buoy Center with Hurricane Katrina as ground truth. The experiments demonstrate that the proposed model, which jointly processes the spatial and temporal dimensions of the contextual data to extract features for anomaly detection, achieves more than a 10% improvement in accuracy over the methods used in the comparison.


Introduction
An anomaly is an observation whose properties are significantly different from those of the majority of other observations under consideration, which are called the normal data. Anomaly detection refers to the problem of finding these observations that do not adhere to expected or normal behavior. A spatio-temporal outlier (ST-Outlier) is an object with behavioral (non-contextual) attributes that are notably different from other objects in the neighborhood defined by its contextual (spatial and temporal) attributes [1]. Spatio-temporal data have become extremely common in many application domains because of the advancement of hardware technology for data collection and generation. Collecting data from various spatial locations at regular time intervals is very important for some problem settings. In such settings, detection of spatio-temporal outliers can lead to the discovery of interesting and generally unexpected patterns such as local instability [2]. Examples of such spatio-temporal datasets include meteorological data, traffic data, earth science data, and disease outbreak data. Events that generate spatio-temporal data are evolving events, such as hurricanes and disease outbreaks, and both spatial and temporal continuity are important in modelling such events [3].
For a given problem domain, obtaining labelled training data for all types of anomalies is often too expensive, if not impossible [4]. This highlights the need for unsupervised techniques to find spatio-temporal anomalies. Moreover, spatio-temporal datasets are generally multivariate and contain many contextual structures (spatial and temporal regularities), which makes them particularly difficult to label and well suited for unsupervised learning models. In unsupervised scenarios, the type of anomalies and the ratio of anomalous events within the given dataset are generally not known. In such scenarios, we need to model the normal behavior of the underlying system in the presence of anomalies, which poses extra difficulty.
In the past few years, several density- and clustering-based unsupervised spatio-temporal anomaly detection algorithms have been developed. The most prominent are the ST-Outlier detection algorithm [5] and ST-BDBCAN [6], which are essentially extensions of DBSCAN [7], a density-based spatial clustering algorithm. LDBSCAN [8], a locality-based outlier detection algorithm, merges DBSCAN and the Local Outlier Factor (LOF) [9]. IsolationForest [10,11] is a powerful approach for anomaly detection in multivariate data that does not rely on any distance or density measure. In particular, it is an unsupervised, tree-based ensemble method that applies the novel concept of isolation to anomaly detection. It detects anomalies based on a fundamentally different model-based approach: an anomalous point is isolated via recursive partitioning by randomly selecting a feature and then randomly selecting a split value for that feature. The output is an anomaly score for each data point. Although it establishes a powerful multivariate non-parametric approach, it works on continuous-valued data only. Despite their inherently unsupervised setting, these methods may still fail to detect anomalies effectively for the following reasons:

•
In multivariate time series data, strong temporal dependency exists between time steps. For this reason, distance-/clustering-based methods may not perform well, since they cannot properly capture temporal dependencies across different time steps.

•
On the other hand, for the case of high-dimensional data, even the data locality may be ill-defined because of data sparsity [3].

•
Despite the effectiveness of the IsolationForest technique, it isolates data points based on one random feature at a time, which may not properly represent the spatial and temporal dependencies between data points in a spatio-temporal dataset.

•
Defining a distance between data points in multivariate spatio-temporal data with mixed attributes is often challenging. This difficulty may have an adverse effect on the outlier detection performance of distance-based clustering algorithms.
To address these challenges, we propose a hybrid deep autoencoder framework composed of a multi-channel convolutional neural network (CNN) and a Long Short-Term Memory (LSTM) network, named MC-CNN-LSTM. The architecture of the proposed hybrid framework is based on the autoencoder scheme, which has greater power in representing arbitrary data distributions for anomaly detection than commonly used alternatives such as PCA or matrix factorization [12]. The multi-channel CNN is deployed on the encoder side to learn the spatial structure of each spatio-temporal multivariate data frame, and the LSTM network is deployed on the decoder side to learn temporal patterns from the encoded spatial structures. The framework is designed to solve the unsupervised anomaly detection problem in non-image multivariate spatio-temporal data. Our major contributions in this paper can be summarized as follows:

•
In the aforementioned neighborhood methods, the biggest challenge is to combine the contextual attributes in a meaningful way. In the proposed approach, spatial and temporal contexts are handled by different deep learning components, as these contextual variables refer to different types of dependencies.

•
All distance-/clustering-based algorithms are proximity-based and depend on some notion of distance. Many distance functions used for proximity quantification (such as Euclidean and Manhattan) are not meaningful for all datatypes.
• The proposed hybrid framework is designed to be trained in a truly unsupervised fashion without any labels indicating normal or abnormal data. The architecture is robust enough to learn the underlying dynamics even if the training dataset contains noise and anomalies.

•
In the proposed hybrid architecture, CNN and LSTM layers are combined into one unified framework that is jointly trained. Unlike proximity-based methods, the framework can be easily implemented on data with mixed-type attributes and scales well to multivariate datasets with high number of features.
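To make the multi-channel idea concrete, the sketch below (NumPy only, with hypothetical dimensions and randomly initialized, untrained filters, not the model weights from the paper) applies a separate 1D convolution over the time axis of each spatial neighbor's multivariate window and concatenates the resulting feature maps into a single sequence that a temporal decoder such as an LSTM could consume:

```python
import numpy as np

rng = np.random.default_rng(1)
m, d, T = 3, 4, 16   # neighbor regions, features per region, window length (hypothetical)
F, K = 8, 3          # filters per channel, kernel width along the time axis

# One toy multivariate window per spatial neighbor: shape (m, d, T).
X = rng.normal(size=(m, d, T))

def conv1d_relu(x, w):
    """Valid 1D convolution over time for one channel.
    x: (d, T) window, w: (F, d, K) filter bank -> (F, T-K+1) feature maps."""
    T_out = x.shape[1] - w.shape[2] + 1
    out = np.empty((w.shape[0], T_out))
    for f in range(w.shape[0]):
        for t in range(T_out):
            out[f, t] = np.sum(w[f] * x[:, t:t + w.shape[2]])
    return np.maximum(out, 0.0)  # ReLU activation

# Each spatial neighbor gets its own filter bank (one CNN "channel" per region).
filters = [rng.normal(scale=0.1, size=(F, d, K)) for _ in range(m)]
encoded = np.concatenate([conv1d_relu(X[c], filters[c]) for c in range(m)], axis=0)
print(encoded.shape)  # (m*F, T-K+1) = (24, 14): a sequence for the temporal decoder
```

In the actual framework the convolutional filters are learned jointly with the LSTM decoder; this sketch only illustrates how per-region channels are encoded and fused.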
We compared the proposed model with state-of-the-art spatial and spatio-temporal anomaly detection algorithms: the ST-Outlier detection algorithm [5], ST-BDBCAN [6], LDBSCAN [8], and LOF [9], as well as a powerful model-based anomaly detection method, IsolationForest [10]. We conducted extensive experiments using the 2005 Gulf of Mexico (West) buoy dataset from the National Data Buoy Center (https://www.ndbc.noaa.gov/) and Hurricane Katrina best track path data [13] as ground truth. Our results demonstrate that the proposed deep learning framework achieves superior results, with at least a 10% improvement in accuracy over the methods used in the comparison.
The remainder of this paper is organized as follows. Section 2 provides an overview of related work in the literature. Section 3 gives theoretical background on autoencoders, autoencoder-based anomaly detection, and CNN and LSTM networks. Section 4 explains the methodology, spatio-temporal data pre-processing, and the proposed hybrid deep learning framework. Section 5 gives detailed information on the experimental setup, the Hurricane Katrina dataset, and the results of the experiments. Section 6 discusses the results, summarizes contributions, and gives future research directions.

Related Work
The task of detecting outliers or anomalous events in data has been studied extensively in the context of time series and spatial data separately. Outlier detection studies for time-series data find outliers considering only the temporal context [14,15]. For data with spatial context, several context-based anomaly detection techniques have been proposed [16][17][18][19][20]. Spatio-temporal outlier detection methods are significantly more challenging because of the additional difficulty of jointly modeling the spatial and temporal continuity in the dataset [2,3].
Birant and Kut [5] propose a spatio-temporal outlier detection algorithm using a modified version of DBSCAN. It is a neighborhood-based approach which first finds spatial outliers based on the spatial neighborhood, then checks the temporal context of those spatial outliers using their temporal neighbors. However, their algorithm does not generate a score for data points. Cheng and Li [2] propose a four-step sequential approach to identify spatio-temporal outliers using classification, aggregation, comparison, and verification. Gupta et al. [20] introduce the notion of context-aware anomaly detection by integrating the system logs and time series measurement data in distributed systems. They propose a two-stage clustering methodology using a principal component analysis (PCA) [21]-based method and the K-Means [22] algorithm to extract context and metric patterns.
The methods mentioned above have something in common: They do not use the spatial and temporal components jointly to detect anomalies. They first find spatial outliers using spatial context. Then, they use temporal context to define temporal neighborhood of spatial outliers to detect if they are temporal outliers too. They all use either a modified version of DBSCAN [7], which is a spatial clustering-based outlier detection algorithm, or LOF [9], which is a locality-based outlier detection algorithm, to find neighborhoods. The main challenge with distance-based methods is that they are computationally too expensive for large multivariate datasets.
The idea of isolation and the IsolationForest algorithm have been successfully applied to anomaly detection and rare-class classification problems in different data domains. Alonso-Sarría et al. [23] apply IsolationForest analysis to a landcover image dataset to measure the representativeness of training areas in the image. Chen et al. [24] propose an isolation-based online anomalous trajectory detection system to detect abnormal or malicious events using trajectories obtained from Global Positioning System (GPS)-enabled taxis. Stripling et al. [25] propose an IsolationForest-based conditional anomaly detection approach for fraud detection.
Besides traditional distance- and isolation-based methods, deep learning-based anomaly detection approaches have recently gained a lot of attention. LSTM networks have been used for anomaly detection on time series data as they achieve better generalization capability than traditional methods. An LSTM network-based encoder-decoder scheme for anomaly detection was recently proposed in which the model learns to reconstruct 'normal' time series data and uses the reconstruction error to detect anomalies [26]. A deep learning anomaly detection approach combining a convolutional neural network (CNN), LSTM, and deep neural network (DNN) was designed to detect traffic anomalies in web servers [27]; it uses a fully labeled dataset for supervised learning and solves a univariate time series classification problem. Another deep learning framework combining a convolutional encoder, an attention-based convolutional LSTM network, and a convolutional decoder was proposed for detecting spatio-temporal anomalies in multivariate time series data [28]. In addition to network complexity, that approach requires the calculation of multiscale system signature and residual signature matrices to perform anomaly detection, increasing the overall complexity.
Christiansen et al. [29] propose a convolutional neural network (CNN)-based anomaly detection system to detect obstacles in an agricultural field. Their anomaly detection framework uses features extracted by a CNN detector. Oh and Yun [30] propose an autoencoder-based approach to detect abnormal operations of a complex machine using operational sound. They adopt a general convolutional autoencoder architecture and use residual errors to detect anomalies. Munir et al. [31] propose a novel approach for anomaly detection in streaming sensor data which takes advantage of both statistical and deep learning-based techniques. They use a statistical ARIMA model and a convolutional neural network (CNN) model to forecast the next timestamp in a given time series. The forecasted value is then passed to an anomaly detector module that marks a data point as normal or anomalous based on the distance between the predicted and actual values.
Despite the effectiveness of those abovementioned deep learning approaches, they are either supervised or semi-supervised models which need labelled normal or abnormal data for training. The proposed hybrid framework is designed to be trained in a truly unsupervised fashion without any labels indicating normal or abnormal data. The architecture is robust enough to learn the underlying dynamics even if the training dataset contains noise and anomalies. The main distinction between other deep learning-based methods and the proposed approach is that they perform on either univariate or multivariate time series data, and none of them is actually designed for spatio-temporal multivariate datasets with both spatial and temporal contextual attributes.

Autoencoders
Autoencoders are commonly used for dimensionality reduction of multidimensional data as a powerful non-linear alternative to PCA or matrix factorization [32][33][34]. If a linear activation function is used, the autoencoder becomes virtually identical to a simple linear regression model or a PCA/matrix factorization model. When a non-linear activation function is used, such as a rectified linear unit (ReLU) or a sigmoid function, the autoencoder goes beyond PCA, capturing multi-modal aspects of the input distribution [12,35]. It has been shown that carefully designed autoencoders with tuned hyperparameters outperform PCA or K-Means methods in dimension reduction and in characterizing the data distribution [36,37]. They are also more effective at detecting subtle anomalies than linear PCA and computationally cheaper than kernel PCA [32,38].
A traditional autoencoder is a feed-forward multi-layer neural network which is trained to copy its input to its output. To prevent identity mapping, deep autoencoders are built with low-dimensional hidden layers, creating a non-linear representation of the input data [32]. An autoencoder is trained to encode the input x into some latent representation z so that the input can be reconstructed from that lower-dimensional representation. It is trained by unsupervised learning to rebuild its original input with minimum reconstruction error. Figure 1 depicts a typical autoencoder network structure with one hidden layer, composed of two parts: an encoder and a decoder. Deep autoencoders learn a non-linear mapping from the input to the output through multiple encoding and decoding steps. An autoencoder takes an input vector x ∈ R^d and first maps it to a latent representation z ∈ R^d' through the mapping

z = f_θ(x) = σ(Wx + b),

where the function f_θ represents the encoding steps and is parameterized by θ = {W, b}; W is a d' × d weight matrix and b is a bias vector. The lower-dimensional latent representation of the input is then mapped back to a reconstructed vector x' ∈ R^d in the input space through

x' = g_θ'(z) = σ(W'z + b'),

where the function g_θ' represents the decoding steps and is parameterized by θ' = {W', b'}. The autoencoder training procedure consists of finding a set of parameters {W, b, W', b'} that make the reconstructed vector x' as close as possible to the original input x. The parameters of the autoencoder are optimized by minimizing a loss function that measures the quality of the reconstructions: the sum of squared differences between the input and the output. Therefore, an autoencoder tries to minimize the following loss function during training:

L(θ, θ') = Σ_{x ∈ ϕ} ‖x − x'‖² = Σ_{x ∈ ϕ} ‖x − g_θ'(f_θ(x))‖²,

where ϕ is the training dataset.
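The encode/decode mappings and the reconstruction loss can be sketched as follows, using a minimal NumPy forward pass with arbitrary small dimensions, random untrained weights, and an assumed linear decoder output (no training loop is shown):

```python
import numpy as np

rng = np.random.default_rng(0)
d, d_latent = 8, 3                              # input and latent dimensions (hypothetical)

W  = rng.normal(scale=0.1, size=(d_latent, d))  # encoder weight matrix (d' x d)
b  = np.zeros(d_latent)                         # encoder bias
W2 = rng.normal(scale=0.1, size=(d, d_latent))  # decoder weight matrix
b2 = np.zeros(d)                                # decoder bias

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def encode(x):
    return sigmoid(W @ x + b)                   # z = f_theta(x) = sigma(Wx + b)

def decode(z):
    return W2 @ z + b2                          # x' = g_theta'(z), linear output layer

def reconstruction_loss(dataset):
    # Sum of squared differences between inputs and their reconstructions.
    return sum(float(np.sum((x - decode(encode(x))) ** 2)) for x in dataset)

phi = rng.normal(size=(100, d))                 # toy training set
print(reconstruction_loss(phi) > 0)             # untrained weights reconstruct poorly
```

Training would adjust {W, b, W2, b2} by gradient descent on this loss; the sketch only shows the quantities being minimized.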

Anomaly Detection with Autoencoders
The main idea behind autoencoder-based anomaly detection is to measure how much the reconstructed data deviates from the original data. An autoencoder has an unsupervised learning objective whose primary task is to copy the input to the output [39]. Therefore, an autoencoder is trained to reconstruct data by minimizing this objective function, or loss function. For anomaly detection, the reconstruction error is used as the anomaly score: data points which generate high reconstruction errors can be categorized as anomalous based on a threshold value. When autoencoders are used for anomaly detection, they are typically trained using semi-supervised learning, in which only normal data instances are used in the training process. The training dataset should be cleaned of anomalous data points and outliers as much as possible for successful model generation. Therefore, for the anomaly detection model to be successful, it is critical to design a pre-processing phase that can filter out or correct such noise in the data where possible. This can be achieved using a variety of unsupervised domain-specific methods, such as statistical and distance-based methods. For example, autoregressive models can be used for noise removal in the context of time series data, and distance- and density-based models in the context of spatial data. Techniques such as Principal Component Analysis (PCA) and Singular Value Decomposition (SVD) can also be used to improve the representation quality of the data and remove discordant data points [1,3]. After the training process, the autoencoder will generally reconstruct normal data with very small reconstruction error. Since the autoencoder has not encountered abnormal data during training, it will fail to reconstruct such data and will generate high reconstruction errors, which can be used as anomaly scores.
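The score-and-threshold step can be sketched as follows. Since no trained autoencoder is available here, a deliberately crude stand-in "reconstruction" (the training mean) is assumed; the point is only to show how reconstruction errors become anomaly scores and how a threshold flags anomalies:

```python
import numpy as np

rng = np.random.default_rng(2)

# Mostly "normal" data plus a few injected anomalies far from the normal cluster.
normal = rng.normal(loc=0.0, scale=1.0, size=(500, 4))
anomalies = rng.normal(loc=6.0, scale=1.0, size=(5, 4))
data = np.vstack([normal, anomalies])

# Stand-in for the autoencoder: "reconstruct" every point as the data mean.
# A trained autoencoder would reconstruct normal points far more accurately.
reconstruction = data.mean(axis=0)

# Per-point reconstruction error serves as the anomaly score.
scores = np.linalg.norm(data - reconstruction, axis=1)

# Flag points whose score exceeds a percentile-based threshold.
threshold = np.percentile(scores, 99)
flagged = np.where(scores > threshold)[0]
print(flagged)  # the injected anomalies (indices 500-504) are among the flagged points
```

In practice the threshold is chosen per domain (e.g., from a validation set or an assumed contamination rate) rather than a fixed percentile.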
There are some practical issues in using autoencoders with contaminated training data (a dataset with both normal and anomalous data points). Since anomalies are treated as normal data points during the training phase, the model will inevitably contain more errors than one trained on only normal data points. If we try to overcome these errors by tuning the network with more layers and neurons, we may face the problem of overfitting, which is a significant problem for deep neural networks. A sufficiently complex deep autoencoder may, with sufficient training, even learn to represent each anomaly with a low reconstruction error, which would be a problem for anomaly detection [3].

Convolutional Neural Networks (CNNs)
Fully convolutional networks have shown compelling quality and efficiency for image classification and segmentation. It has been shown that a fully convolutional network trained end-to-end, pixels-to-pixels on category-wise semantic segmentation annotations exceeds the state of the art without hand-crafted features [40,41].
The layers of a convolutional neural network are arranged to represent spatial information within data in a grid structure. These spatial relationships are inherited from one layer to the next, with each layer creating representative features based on a small local spatial region in the previous layer. This spatial transformation is achieved by three types of layers commonly present in a convolutional neural network: convolution, pooling, and activation function layers. Each layer in the convolutional network represents a three-dimensional grid structure of size h × w × d, where the height (h) and width (w) are spatial dimensions, and the depth (d) refers to the number of channels, or the number of feature maps, in the hidden layers. In addition to the hidden layers, a final set of fully connected layers is often used in an application-specific way to generate the desired output [42]. Let x^l = [x^l_ijk] be a three-dimensional tensor representing the feature maps in the l-th layer, where the indices i, j, k indicate positions along the height, width, and depth of the feature maps. The weights of the p-th feature map in the l-th layer are represented by the three-dimensional tensor W^(p,l) = [w^(p,l)_rsk]. The transformation from the l-th to the (l+1)-st convolutional layer, after the activation function is applied, can be written as:

x^(l+1)_ijp = f( b^l_p + Σ_{r=1}^{M} Σ_{s=1}^{M} Σ_{k=1}^{d_l} w^(p,l)_rsk · x^l_{i+r−1, j+s−1, k} ),

where f is the activation function, such as tanh or ReLU, b^l_p is the bias for the p-th feature map, w^(p,l)_rsk represents the parameters of the p-th filter, and M is the size of the filter. The convolutional operation is a simple dot product over the entire volume of the filter, which is repeated over the spatial dimensions (i, j) and the filters (indexed by p). d_{l+1} is the number of feature maps, or number of channels, in the (l+1)-st layer, where p ∈ {1, ..., d_{l+1}}.
The pooling operation works on small spatial regions and produces another layer with the same depth. Pooling layers decrease the spatial dimensions of each activation map and effectively reduce the number of parameters and the computational complexity of the network. Max pooling and average pooling are the most commonly used pooling techniques in CNNs. A max pooling operation with a pooling size of R × R returns the maximum of the values in each square region of that size in each layer. The stride value defines how far the pooling window moves in each spatial dimension, which is another way that convolutional networks can reduce the spatial footprint of the feature maps. Previous studies [43] show that max pooling is superior to average pooling for image classification; it also leads to a faster convergence rate and improves generalization performance by preventing overfitting. Hence, the max pooling strategy is applied in the convolutional neural network component of the proposed framework.
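The max pooling operation described above can be sketched in a few lines of NumPy (pool size R and stride are parameters; the 4 × 4 input values are arbitrary):

```python
import numpy as np

def max_pool2d(x, R=2, stride=2):
    """Max pooling over R x R regions of a single 2D activation map."""
    h, w = x.shape
    out_h = (h - R) // stride + 1
    out_w = (w - R) // stride + 1
    out = np.empty((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            # Maximum of the values inside the current pooling window.
            out[i, j] = x[i*stride:i*stride+R, j*stride:j*stride+R].max()
    return out

x = np.array([[1., 3., 2., 0.],
              [4., 2., 1., 5.],
              [0., 1., 6., 2.],
              [3., 2., 1., 4.]])
print(max_pool2d(x))  # [[4. 5.]
                      #  [3. 6.]]
```

Note how the 4 × 4 map shrinks to 2 × 2: each output value summarizes one non-overlapping 2 × 2 region, which is exactly the parameter and footprint reduction discussed above.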

Long Short-Term Memory (LSTM) Networks
An LSTM network is a type of recurrent neural network (RNN) with controllable memory units that enable the network to learn when to forget previous hidden states and when to update hidden states with new information [44]. LSTMs have proven very effective in many long-range sequence modeling applications such as machine translation, speech recognition, and action recognition [45][46][47][48]. Generally, an LSTM recursively maps the input data at the current time step to a sequence of hidden states, so the learning process of an LSTM proceeds in a sequential manner.
LSTM networks address the vanishing gradient problem commonly found in ordinary RNNs by incorporating the concept of gating into their state dynamics. The inputs provided to the LSTM memory unit are fed through gates: the input, output, and forget gates. At each time step, an LSTM unit updates a hidden state vector h_t and a cell state vector c_t, responsible for controlling state updates and outputs, using the current input and the previous cell state and hidden state values. Specifically, given sequential data of T timestamps (x_1, x_2, ..., x_T), an LSTM memory unit maps the inputs to an output sequence (y_1, y_2, ..., y_T) by computing the gate functions recursively for each step from t = 1 to t = T with the following equations:

i_t = σ(W_xi x_t + W_hi h_{t−1} + b_i)
f_t = σ(W_xf x_t + W_hf h_{t−1} + b_f)
o_t = σ(W_xo x_t + W_ho h_{t−1} + b_o)
c_t = f_t ⊙ c_{t−1} + i_t ⊙ tanh(W_xc x_t + W_hc h_{t−1} + b_c)
h_t = o_t ⊙ tanh(c_t)

where x_t and h_t are the input and hidden state vectors; i_t, f_t, o_t, and c_t are the activation vectors of the input gate, forget gate, output gate, and memory cell, respectively; W_ab is the weight matrix between a and b; b_a is the bias term of a; σ is the sigmoid function; and ⊙ denotes the element-wise product. The memory cell unit c_t is the sum of two terms: the previous memory cell unit modulated by the forget gate, and a function of the current input and previous hidden state. The input and forget gates can be thought of as knobs that control the learning process: the LSTM learns to selectively forget its previous memory through f_t and considers its current input and previous hidden state through i_t. The output gate o_t learns how much of the current memory cell to transfer to the hidden state. This gated structure enables the LSTM to learn complex and long-term temporal dynamics for contextual data.
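A single LSTM step following these gate equations can be sketched directly in NumPy (random, untrained weights and hypothetical input/hidden sizes; real implementations learn these weights and fuse the four gate matrices for efficiency):

```python
import numpy as np

rng = np.random.default_rng(3)
d_in, d_h = 5, 4                       # input and hidden sizes (hypothetical)

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

# One (W_xg, W_hg, b_g) triple per gate: input (i), forget (f), output (o), cell (c).
P = {g: (rng.normal(scale=0.1, size=(d_h, d_in)),   # W_xg
         rng.normal(scale=0.1, size=(d_h, d_h)),    # W_hg
         np.zeros(d_h))                             # b_g
     for g in "ifoc"}

def lstm_step(x_t, h_prev, c_prev):
    def gate(g, act):
        Wx, Wh, b = P[g]
        return act(Wx @ x_t + Wh @ h_prev + b)
    i_t = gate("i", sigmoid)
    f_t = gate("f", sigmoid)
    o_t = gate("o", sigmoid)
    c_t = f_t * c_prev + i_t * gate("c", np.tanh)   # element-wise products
    h_t = o_t * np.tanh(c_t)
    return h_t, c_t

h, c = np.zeros(d_h), np.zeros(d_h)
for x_t in rng.normal(size=(6, d_in)):              # run a short input sequence
    h, c = lstm_step(x_t, h, c)
print(h.shape, bool(np.abs(h).max() < 1.0))         # hidden state stays bounded
```

Because h_t = o_t ⊙ tanh(c_t) with o_t ∈ (0, 1), every component of the hidden state stays in (−1, 1) regardless of sequence length, which the final check confirms.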

Spatio-Temporal Data Pre-Processing
A univariate time series is a sequence of data points with timestamps representing the contextual dimension. A multivariate time series is a set of univariate time series with the same timestamps; equivalently, it can be thought of as a dataset composed of many univariate time series measured at successive points in time and spaced at uniform time intervals. A spatio-temporal multivariate data point is a multivariate time series observation with a spatial attribute (a contextual attribute) attached to it.
Let X = {x^(n)}_{n=1}^N denote a multivariate time series dataset composed of N subsequences of multivariate observations (data points). Each subsequence has the same number of time steps, or window size, and is generated from a long multivariate time series. Let each subsequence x^(n) have T time steps, where each observation at time step t is a d-dimensional vector. The dataset X has dimensions (N, d, T), where x^(n) ∈ R^{d×T}. Each sample from the multivariate time series dataset can be represented as:

x^(n) = (x^(n)_1, x^(n)_2, ..., x^(n)_T), where x^(n)_t ∈ R^d.

In a spatio-temporal dataset, each multivariate data point x^(n) comes from a different spatial location, or region, which has different spatial attributes (such as latitude and longitude). Specifically, each multivariate data point has a spatial attribute s^(n) attached to it. We denote the multivariate spatio-temporal dataset as D_ST = {(x^(n)_i, s_i)}, which contains N^(i) multivariate data points from each of S different spatial regions. The multivariate spatio-temporal data matrix X_ST can be represented as a three-dimensional matrix, as shown in Figure 2. It is considered a group of many multivariate subsequences related to multiple spatial regions, representing observations from the same time window with the same timestamps. T, which is called the "input window-size", represents the number of timestamps in the multivariate subsequence, and d represents the number of observed features (the number of univariate time series). m represents the number of spatial neighborhoods to include in the anomaly detection process; the best m can be found empirically for each problem domain. The m neighboring regions are selected from the S spatial regions by a distance-based nearest neighbor algorithm using pairwise spatial distances between regions.
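The two pre-processing steps described above, slicing a long series into fixed-size windows and selecting the m nearest spatial regions, can be sketched as follows (hypothetical series length, window parameters, and region coordinates; plain Euclidean distance on (lat, lon) pairs is assumed as the pairwise spatial distance):

```python
import numpy as np

rng = np.random.default_rng(4)

def make_windows(series, T, step=1):
    """Slice a long (d, L) multivariate series into an (N, d, T) window array."""
    d, L = series.shape
    starts = range(0, L - T + 1, step)
    return np.stack([series[:, s:s + T] for s in starts])

series = rng.normal(size=(3, 50))       # d=3 features, L=50 timestamps
X = make_windows(series, T=10, step=5)
print(X.shape)                          # (9, 3, 10): N windows of shape (d, T)

# Hypothetical (lat, lon) coordinates for S regions; region 0 is the region of interest.
coords = np.array([[29.2, -88.2],
                   [28.9, -88.8],
                   [30.1, -87.6],
                   [26.0, -85.0]])
m = 2
dists = np.linalg.norm(coords - coords[0], axis=1)  # distance from region 0 to each region
neighbors = np.argsort(dists)[1:m + 1]              # m nearest regions, excluding itself
print(neighbors)                                    # [1 2]
```

Stacking the windows from the region of interest and its m neighbors then yields the three-dimensional X_ST matrix fed to the model.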

Anomaly Detection Procedure
The spatio-temporal anomaly detection problem is formulated as detecting anomalous multivariate observations in the dataset D_ST that differ significantly from their spatial and temporal neighbors. Given the spatio-temporal three-dimensional data matrix X_ST, the goal is to reconstruct the multivariate subsequences from the spatial region S_i, which is the region of interest. Anomalous data points have large reconstruction errors because they do not conform to the subspace patterns in the data. Therefore, for the proposed encoder-decoder-based anomaly detection framework, the aggregate reconstruction error over all d dimensions can be used as the anomaly score. All multivariate data points x_i^(n), or subsequences, with high reconstruction errors from the spatial region S_i are considered to be anomalies.
The anomaly detection process is performed using the reconstruction error as the anomaly score. Let x = x^(1), x^(2), ..., x^(T) be a multivariate time series data point from the multivariate time series dataset X, where x ∈ ℝ^(d×T), T is the length of the time series window, and d is the number of features (univariate time series). Each point x^(i) is a d-dimensional vector of readings for d variables at time instance t_i. The root mean squared error (RMSE) is used to calculate the reconstruction error for a given time step t_i:

e_i = sqrt( (1/d) Σ_{j=1..d} (x_{i,j} − x̂_{i,j})² )

where x_{i,j} is the real value and x̂_{i,j} is the reconstructed value of feature j at time step t_i. The reconstruction error for each time step in the output data is calculated, and the result is the aggregated error over the output window of size T. These aggregated errors can be used as anomaly scores, and high-scoring data points are regarded as spatio-temporal anomalies.

The proposed framework uses unlabeled spatio-temporal data points for training. The training data might be contaminated by anomalous data, which is the case in most real-world scenarios. The multi-channel CNN architecture helps to overcome the problem of overfitting: it processes data that come from m spatial neighborhoods simultaneously and extracts representative features from them. This approach helps to build a model that represents the great majority of data points, not the anomalous points, which are rare. Anomalous observations can be thought of as noise, and the framework is designed to be robust enough not to learn a representation of those points. The framework is trained to reconstruct data by minimizing a loss function (mean squared error) that measures the quality of the reconstructions.
Our main hypothesis is that after training, the model is able to reconstruct the input data that conform to expected or normal behavior within the dataset, while it fails to reconstruct anomalous data, since it has not learned any features to represent them.
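The scoring procedure above can be sketched in a few lines of NumPy. The function name and the use of a sum to aggregate the per-step RMSE values over the window are our own assumptions, since the text only specifies that the per-step errors are aggregated.

```python
import numpy as np

def anomaly_scores(x, x_hat):
    """Per-window anomaly scores from reconstruction error.

    x, x_hat: arrays of shape (N, d, T) holding the original and the
    reconstructed multivariate subsequences (d features, T time steps).
    The RMSE over the d features gives the error of each time step; the
    per-step errors are then aggregated (summed here) over the window to
    score each of the N subsequences.
    """
    per_step_rmse = np.sqrt(((x - x_hat) ** 2).mean(axis=1))  # shape (N, T)
    return per_step_rmse.sum(axis=1)                          # shape (N,)

# Windows whose score falls above a high quantile of the training scores
# can be flagged as spatio-temporal anomalies.
scores = anomaly_scores(np.zeros((5, 3, 4)), np.ones((5, 3, 4)))
threshold = np.quantile(scores, 0.9)
```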

Proposed Hybrid Framework
The proposed hybrid framework consists of two major components: a multi-channel convolutional neural network-based encoder (MC-CNN-Encoder) and a Long Short-Term Memory-based decoder (LSTM-Decoder). The components are connected in a linear structure and trained jointly to minimize the reconstruction error. The CNN layers are an important part of the framework and work as an encoder that extracts spatial features; the multi-channel structure helps to capture the spatial dependencies between neighboring spatial regions. The reconstruction error of the framework for a given data point is used as the anomaly score. Our model design is inspired by the network combining a CNN and an LSTM into one unified architecture for a text prediction problem [49], which shows that LSTM performance can be improved by providing better features. The overall architecture of the framework is illustrated in Figure 3.
The performance of the designed framework depends on various components and training parameters. The number of layers and the number of units in each layer, the number of kernels used in each convolutional layer, the dropout ratio, and the type and size of the pooling operators are all important parameters in the anomaly detection process, as they affect learning performance and speed. They play an important role in extracting more representative features of the training data. To determine the optimal architecture and the best training parameters for the overall network, including the optimum parameters used to preprocess the raw data, such as the window size and step size, the characteristics of the input data have to be analyzed carefully. In this study, the number of layers and the number of neurons in each layer were selected according to the best empirical performance in a grid search on the training dataset.


MC-CNN-Encoder
For the spatial encoder component, an effective multi-channel convolutional encoder is proposed. Each channel of the encoder takes multivariate time series data from a different spatial neighbor as input. Using this multi-channel approach, spatio-temporal features from the spatial neighborhood of data points can be extracted. As with image data, the multivariate spatio-temporal data are processed by 2D kernels and 2D pooling operations. In contrast to other deep learning approaches, which apply CNNs to time series data with 1D kernels to extract temporal features [50][51][52], 2D kernels are applied here to extract spatial features.
The MC-CNN-Encoder component is composed of two convolutional blocks, each made up of four cascading layers: a 2D convolution layer, followed by an activation layer, a 2D max-pooling layer, and a channel-wise batch normalization layer. The first convolutional layer has 16 filters of size (2 × 2) with zero-padding and no striding (i.e., strides of (1 × 1)). The second convolutional block has a convolutional layer with 32 filters of size (3 × 3), also with zero-padding and no striding. A max-pooling operation with pool size (2 × 2) and strides (2 × 2) cuts the temporal and feature dimensions in half at each max-pooling layer.
The MC-CNN-Encoder takes the three-dimensional multivariate spatio-temporal input data and generates a higher-order representation of it. The dimension of the MC-CNN-Encoder output is (timestamps × features × feature maps), so a layer is added to reshape the output before passing it to the LSTM layer: the output of the MC-CNN-Encoder is flattened over the spatial dimensions into a feature vector and sent to the LSTM-Decoder. As we go deeper in the network, the width and height dimensions (the temporal and feature dimensions representing the multivariate time series) shrink as a result of the max-pooling operations, while the number of feature maps (the spatial dimension) of each Conv2D layer, controlled by the number of filters, increases. This architecture helps to extract complex spatio-temporal dependencies for each data point in the dataset.
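The dimension bookkeeping described above can be traced without any deep learning library. The sketch below is illustrative only: it assumes zero-padded, stride-1 convolutions that preserve the (time × feature) plane, (2 × 2) max pooling with stride 2, and a hypothetical input of T = 12 time steps and d = 6 features.

```python
def conv_block(shape, n_filters, pool=2):
    """Shape of (time, features, maps) after one encoder block.

    A zero-padded, stride-1 convolution keeps the (time, features) plane
    unchanged, the (pool x pool) max pool with stride `pool` halves it
    (floor division), and the number of feature maps becomes n_filters.
    """
    t, f, _ = shape
    return (t // pool, f // pool, n_filters)

# Hypothetical input: a window of T = 12 time steps and d = 6 features,
# passed through the two blocks described above (16 then 32 filters).
shape = (12, 6, 1)
shape = conv_block(shape, 16)              # after block 1
shape = conv_block(shape, 32)              # after block 2
flat = shape[0] * shape[1] * shape[2]      # length of the flattened feature vector
```

The trace shows the stated behavior: the (time × feature) plane shrinks at each pooling layer while the number of feature maps grows, and the final tensor is flattened before the LSTM-Decoder.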

LSTM-Decoder
The spatial feature maps generated by the MC-CNN-Encoder are temporally dependent on previous time steps. An LSTM-Decoder is designed to represent those temporal dependencies. It decodes the MC-CNN-Encoder output and reconstructs the sequence data using hidden states (feature maps). It is composed of an LSTM block and a fully connected neural network (FCNN) layer. The LSTM block comprises two LSTM layers followed by a dropout layer. The output of the LSTM block is fed to the FCNN layer, where it is mapped onto the feature set we want to reconstruct.
During the anomaly detection process, the MC-CNN-Encoder learns a three-dimensional vector representation of the multivariate spatio-temporal input data. The LSTM-Decoder then uses the reshaped representation to extract temporal features and reconstruct the input data. The LSTM block is trained to learn higher-level temporal features of the input data, whereas the FCNN layer is trained to reconstruct the target univariate time series. In the LSTM block, two hidden layers with 64 and 16 hidden units are used, followed by a dropout layer to prevent overfitting.
For the final dense (FCNN) layer, the number of units is set to d′, the number of univariate time series (features) that we want to reconstruct. The number of hidden units in the dense layer can be adjusted according to the problem context at hand. For an anomaly detection problem, we are only interested in the reconstruction of a subset of the original spatio-temporal multivariate dataset, not a fully reconstructed version of it. The overall framework is trained to produce the target multivariate time series x̂ = {x̂_1, ..., x̂_{T′}} of length T′, the size of the reconstruction window. T′ can be equal to or smaller than the input window size and should be tuned for each problem. Each x̂_i ∈ ℝ^(d′) is a d′-dimensional vector, where d′ ≤ d. The detailed architectures of the MC-CNN-Encoder and LSTM-Decoder are illustrated in Figure 4.

Experiments
To verify and evaluate the proposed hybrid deep learning framework for spatio-temporal anomaly detection, a set of experiments was conducted on a buoy dataset. We used the year-2005 Gulf of Mexico (West) buoy dataset from the National Data Buoy Center. A buoy is a floating object in water used to mark a location and to record oceanographic data. Our main motivations in selecting this dataset and conducting this case study are as follows:

• Due to the difficulty of finding a labelled spatio-temporal multivariate real-world dataset for anomaly detection, we decided to use the buoy dataset, which is filled with well-known natural phenomena (anomalies) such as hurricanes.

• Associating the spatio-temporal outliers detected between August 25th and August 30th with the path of Hurricane Katrina helps us to compare our approach with other algorithms.

• To get a better understanding of the performance of the algorithms, we used the OAHII (Outliers Association with Hurricane Intensity Index) measures developed by Duggimpudi et al. [6] on the same buoy dataset. Using these measures, we were able to quantitatively compare the anomaly scores generated by the algorithms.

Dataset Description
The buoy dataset has many meteorological measurements, such as wind direction, wind speed, gust speed, wave height, air temperature, water temperature, and tide. However, buoy measurements from the year 2005 have many missing values or missing features. We selected buoys which have sufficient historical data from the 2005 hurricane season for comparison purposes. For each buoy data reading, there are two contextual attributes: a time stamp (temporal context) attribute and a latitude-longitude (spatial context) attribute, which is static for each buoy. Besides these contextual attributes, there are six behavioral attributes which are mostly available for each buoy used in this experiment: wind direction, wind speed, gust speed, barometric pressure, air temperature, and water temperature. After eliminating buoys with many missing features and/or values and those that were damaged during the hurricane season, we selected 21 buoys whose data were included in our experiment: 41001, 42001, 42003, 42036, 42038, 42039, 42040, FWYF1, BURL1, BYGL1, CDRF1, GDIL1, MLRF1, SANF1, SGOF1, SMKF1, SPGF1, VCAF1, KYWF1, NPSF1, and LONF1. Their locations are depicted in Figure 5. The Python programming language and Geopandas (http://geopandas.org/), an open source project for working with geospatial data in Python, were used to depict the buoy locations on the map.



Data Preparation
Most buoys have hourly readings, while some have readings every 30 or even every 6 minutes. In such cases, we downsampled the data by selecting only hourly readings and discarding the rest, generating at most 24 readings for each day. If a buoy has no reading corresponding to an hour, we fill the missing reading with a data point that was read less than 30 min earlier.
If a reading was missing for any of the attributes in the historical data, the National Data Buoy Center imputed it with one of the following values, depending on the attribute: 99.00, 999.0, or 9999.0. We imputed these missing observations using a piecewise linear interpolation technique, with a limit of 24 (one day of data) on the maximum number of consecutive missing values that could be filled. Even after interpolation, a huge number of attributes are still missing for some of the buoys, as can be seen in Figure 6.

The wind direction attribute is defined as "the direction the wind is coming from in degrees clockwise from true N" by the National Data Buoy Center, and it takes values between 0° and 360°. In this study, we divided the full 360° range into eight equally sized slices of 45° and label-encoded each slice. The wind direction attribute of each observation was then one-hot encoded using these labels, based on the slice its value falls in. Meanwhile, we applied the min-max scaling technique to five attributes, water temperature (WTMP), air temperature (ATMP), wind speed (WSPD), gust speed (GST), and barometric pressure (BAR), to map all values to the range [0, 1].
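These preprocessing steps can be sketched as follows. This is an illustrative sketch, not the authors' code: the function names are hypothetical, and the sentinel check is simplified (the NDBC sentinel differs per attribute, e.g. 999 for wind direction).

```python
import math

SENTINELS = {99.00, 999.0, 9999.0}  # NDBC codes used to mark missing readings

def clean(value):
    """Map an NDBC missing-value sentinel to NaN; pass real readings through."""
    return math.nan if value in SENTINELS else value

def direction_one_hot(degrees):
    """One-hot encode a wind direction into eight 45-degree slices.

    Slice 0 covers [0, 45), slice 1 covers [45, 90), and so on;
    360 wraps back to slice 0.
    """
    slice_idx = int(degrees // 45) % 8
    return [1 if i == slice_idx else 0 for i in range(8)]

def min_max(values):
    """Min-max scale a list of readings to the range [0, 1]."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]
```

For example, a wind direction of 190° falls in the fifth slice ([180°, 225°)), and `min_max` maps the smallest reading to 0 and the largest to 1, matching the scaling applied to the five numeric attributes.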
We consider all data readings from January 1st, 2005 00:00 until August 31st, 2005 23:59, taking the data from January to the end of June as the training set, the data from July as the validation set, and the data from August as the test set. All selected attribute values must be available for every data point used to train the models, so we discarded any observations with missing attributes, resulting in a total of 93,982 readings from 21 different buoys. In the final dataset, the total number of data points differs for each buoy, as the number of missing attributes differs significantly; the number of data readings from each buoy is shown in the "# Observations" column of Table 1.

Training is executed using overlapping sequences obtained with a sliding window algorithm with window size T and step size s. The sliding window technique applied to the pre-processed buoy dataset to extract subsequences is given in Algorithm 1. The length of each subsequence is equal to the window size. Using the sliding window technique on a long sequence of length L, the maximum number of extracted subsequences is ⌊(L − T)/s⌋ + 1. As shown in Figure 6, each buoy has a different number of missing readings, which results in data sequences of different lengths. Due to the high number of missing readings, we tolerated missing data points at some time stamps by limiting the time difference between the end points of a subsequence to 2 × T. After applying Algorithm 1 with the window size set to 12 and the step size set to 1, we generated training, validation and test sets with a different number of subsequences for each buoy, as shown in Table 1.
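Algorithm 1 is essentially a sliding window pass over each buoy's sequence. A minimal sketch (without the 2 × T gap-tolerance check on timestamps described above, and with a hypothetical function name) could look like this:

```python
def sliding_windows(series, T, s):
    """Extract overlapping subsequences of length T with step size s.

    series: list of per-timestep observations (each a d-dimensional reading).
    For a sequence of length L >= T this yields floor((L - T) / s) + 1
    windows; the timestamp gap-tolerance check is omitted for brevity.
    """
    L = len(series)
    return [series[i:i + T] for i in range(0, L - T + 1, s)]

# A sequence of length 10 with T = 4 and s = 2 yields 4 windows.
windows = sliding_windows(list(range(10)), T=4, s=2)
```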

Algorithms Used for Comparison
To validate the effectiveness of our approach, we examined the spatio-temporal outliers detected by our model and by the following state-of-the-art anomaly detection techniques: the ST-Outlier detection algorithm [5], ST-BDBCAN [6], LDBSCAN [8], LOF [9], and Isolation Forest [10]. We implemented the ST-Outlier detection and LDBSCAN algorithms in Python 3.6.8. For the LOF (https://scikit-learn.org/stable/modules/generated/sklearn.neighbors.LocalOutlierFactor.html) and IsolationForest (https://scikit-learn.org/stable/modules/generated/sklearn.ensemble.IsolationForest.html) methods, we used the implementations available in scikit-learn (https://scikit-learn.org/stable/), a free machine learning library for Python. We used Euclidean distance for all proximity-based clustering algorithms, since it generated better results than other distance metrics. As the ST-Outlier detection algorithm does not return a score, it is not included in the quantitative evaluation tests.
Model parameters were selected through randomized search over a predefined list for each parameter. For LDBSCAN algorithm, following values were assigned to the parameters since they gave the best result, as stated in [6]: LOFUB = 4, MinPts LOF = 180, MinPts LDBCAN = 225, and pct = 0.27. For the LOF algorithm, we used 50 for number of neighbors to use for k-nearest neighbor calculations, and 0.1 for contamination ratio. For the ST-Outlier detection algorithm, the following values were assigned to the parameters: Eps1 = 100 (distance parameter for spatial attribute), Eps2 = 0.2 (distance parameter for non-spatial attribute), MinPts = 5, which is the minimum number of points within Eps1 and Eps2 distance of a point. For IsolationForest, we have used 100 base estimators (number of trees), and contamination was set to 0.1.
Besides the above baseline methods, we also evaluate the depth of the components of the framework for the sample dataset: Number of layers in MC-CNN-Encoder component and number of layers in LSTM-decoder component. The quantitative evaluation of variants of the framework is given in the Section 5.5.3.
In order to quantitatively compare our approach with models that return an outlierness score, we used the three OAHII (Outliers Association with Hurricane Intensity Index) measures developed by Duggimpudi et al. [6]. The OAHII measures show how strongly the scores generated for each observation are correlated with the impact of the ground-truth spatio-temporal anomaly, which for the test data is Hurricane Katrina. OAHII1 measures the Pearson correlation between the outlierness score and the actual impact of the hurricane on each data point; OAHII2 measures the recall power of models by using the impact vector as the weight factor in a weighted correlation (a derivative of the Pearson correlation); and OAHII3 measures the precision power of models by using the score vector as the weight vector in the weighted correlation.
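A minimal sketch of the three measures, following the description above, is given below; the exact formulation published in [6] may differ in details such as weight normalization, so this should be read as an interpretation rather than the reference implementation.

```python
import numpy as np

def weighted_pearson(x, y, w):
    """Weighted Pearson correlation between x and y with non-negative weights w."""
    w = np.asarray(w, dtype=float) / np.sum(w)
    mx, my = np.sum(w * x), np.sum(w * y)         # weighted means
    cov = np.sum(w * (x - mx) * (y - my))         # weighted covariance
    vx = np.sum(w * (x - mx) ** 2)
    vy = np.sum(w * (y - my) ** 2)
    return cov / np.sqrt(vx * vy)

def oahii(scores, impact):
    """Sketch of the three OAHII measures as described in the text.

    OAHII1: plain Pearson correlation (uniform weights).
    OAHII2: weighted correlation with the impact vector as weights (recall).
    OAHII3: weighted correlation with the score vector as weights (precision).
    """
    scores = np.asarray(scores, dtype=float)
    impact = np.asarray(impact, dtype=float)
    uniform = np.ones_like(scores)
    return (weighted_pearson(scores, impact, uniform),
            weighted_pearson(scores, impact, impact),
            weighted_pearson(scores, impact, scores))
```

Scores that track the hurricane impact perfectly yield a value of 1 for all three measures, regardless of the weighting.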

Framework Tuning
The proposed framework is implemented in Python using the Keras (https://keras.io/) deep learning library, running on top of the Tensorflow (https://www.tensorflow.org/) backend. All hidden layers are equipped with the Rectified Linear Unit (ReLU) non-linearity, which allows networks to converge faster [53]. For the final fully connected neural network layer, the sigmoid activation function is used. Optimization is performed by the ADAM optimizer [54]. The Adam optimization algorithm is a powerful extension of stochastic gradient descent and has been successfully applied to deep learning applications in various domains, such as computer vision and natural language processing. Layer weights are initialized with the Glorot uniform initializer. The framework is trained in mini-batches of size 32 for 20 epochs, with learning rate = 0.001, beta_1 = 0.9, beta_2 = 0.999, and epsilon = 1e-07. The final framework has 35,092 parameters to optimize. The optimum values for the training epochs, batch size, learning rate and activation function were selected through a grid search. Although the window size is an important hyperparameter for time series analysis, our framework is not very sensitive to it, as explained in the next section. We tested the performance of the proposed framework using window sizes of 12, 6, and 3. The anomaly detection results given in Section 5.5.1 were generated by the framework with the window size set to 12 (T = 12) and the sliding step size set to 1.
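The grid search mentioned above can be sketched generically. The search space below is hypothetical, and the evaluator is a stand-in for one full training run that returns the validation reconstruction loss for a given configuration.

```python
import itertools

# Hypothetical search space mirroring the tuned quantities in the text.
grid = {
    "batch_size": [16, 32, 64],
    "learning_rate": [1e-2, 1e-3, 1e-4],
    "epochs": [10, 20, 40],
}

def grid_search(evaluate, grid):
    """Return the configuration with the lowest validation loss.

    `evaluate` stands in for training the framework once with a given
    configuration and returning its validation reconstruction loss.
    """
    keys = list(grid)
    best_cfg, best_loss = None, float("inf")
    for values in itertools.product(*grid.values()):
        cfg = dict(zip(keys, values))
        loss = evaluate(cfg)
        if loss < best_loss:
            best_cfg, best_loss = cfg, loss
    return best_cfg, best_loss

# Dummy evaluator for illustration: pretend the settings reported in the
# text (batch size 32, learning rate 0.001, 20 epochs) minimize the loss.
chosen = {"batch_size": 32, "learning_rate": 1e-3, "epochs": 20}
best, _ = grid_search(lambda cfg: 0.0 if cfg == chosen else 1.0, grid)
```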

Anomaly Detection Results
In Figure 7, the locations of the buoys and the path of Hurricane Katrina, along with the time stamps, are shown. The track of Katrina is plotted according to the "best track" positions and intensities measured every six hours, starting at 00:00 UTC 25 August and ending at 18:00 UTC 29 August [13]. The cyclone became Katrina, the 11th tropical storm of the 2005 Atlantic hurricane season, at 12:00 UTC 24 August, when it was over the central Bahamas. Katrina reached hurricane status near 21:00 UTC 25 August, less than two hours before its center made landfall on the southeastern coast of Florida. After entering the Gulf of Mexico on 26 August, Katrina went through two rapid intensification periods between 26 and 28 August, strengthening from a low-end Category 3 hurricane to a Category 5 in less than 12 hours and reaching its peak by 12:00 UTC 28 August. The large wind field on 28 August, with hurricane-force winds extending out to about 90 nautical miles from the center, made Katrina extremely intense and exceptionally large [13].
The timestamps of the anomalies detected by each algorithm are listed in Table 2, where the anomalies in the space-time vicinity of Hurricane Katrina are given in blue for performance comparison. The proposed framework (MC-CNN-LSTM) with window size 12 detected all spatio-temporal anomalies caused by Hurricane Katrina as it passed by the buoys.
When we examine the results, LDBSCAN seems to give the highest score to data points which are the most related with Hurricane Katrina path after our model, followed by the ST-Outlier detection algorithm, LOF and IsolationForest.
Although this heuristic analysis gives a high-level understanding of the performance of the algorithms, it is better to take all data points and their scores into account for the duration of Hurricane Katrina. The actual Katrina path reported by Knabb et al. [13] gives the latitude and longitude of the hurricane center every six hours. We applied linear interpolation to transform it to hourly data matching the buoys' hourly readings. For each buoy whose location is highly relevant to Katrina's path, we plotted the anomaly score of each data point against the distance to the center of Katrina for the time frame starting at 00:00 UTC 25 August and ending at 23:59 UTC 29 August. To show the comparison in detail, the anomaly scores of the proposed framework (MC-CNN-LSTM) and of the LDBSCAN and IsolationForest algorithms are plotted against the distance between the buoys and the center of Katrina in Figure 8. Anomaly score values and distances are min-max normalized to the range [0, 1] before plotting.
Table 2. Timestamps of detected anomalies. Anomalous data points which are in the spatio-temporal neighborhood of Hurricane Katrina are given in blue.

In Figure 8, we can observe that the anomaly scores of LDBSCAN and IsolationForest are not stable, and their results contain many false positives and false negatives. For the LDBSCAN algorithm, false alarms based on the generated anomaly scores are quite visible for buoys 42038, LONF1, and SANF1. For buoy 42038, LDBSCAN generates its three highest scores for data points at which the center of Katrina is not the closest. For buoys LONF1 and SANF1, LDBSCAN exhibits high anomaly scores followed by sharp drops, which are in turn followed by high scores again, for data points at which the center of Katrina is very close. On the other hand, IsolationForest, the next best performing algorithm according to the results of the quantitative evaluation given in Section 5.5.2, shows weaker performance than the proposed framework in capturing the anomalous trend in the test data. Its performance degrades significantly for buoys FWYF1, SANF1, and SMKF1. Although these buoys are on the direct path of Katrina, the anomaly scores generated by IsolationForest are not stable and exhibit high values for data points at which the center of Katrina is not very close. Meanwhile, the anomaly scores generated by MC-CNN-LSTM are smoother than those of LDBSCAN and IsolationForest; in other words, they do not exhibit sharp drops and high peaks in consecutive data points. The proposed framework outperforms LDBSCAN and IsolationForest in capturing the overall anomalous trend in the test data. In summary, MC-CNN-LSTM generates fewer false alarms and does not miss real alarms, as its anomaly scores show a higher correlation with the distance between the buoys and the center of Katrina.


Quantitative Evaluation of Models
To provide a further quantitative evaluation, we have used the measure OAHII (Outliers Association with Hurricane Intensity Index) developed by Duggimpudi et al. [6]. OAHII shows how much correlation exists between the outlierness scores generated by an algorithm and the actual impact of the hurricane (which is the ground truth for the given dataset). The ground truth is represented by the actual Katrina path reported in Knabb et al. [13], which contains the best track positions and intensities reported once every 6 h. It contains the latitude, longitude, time, wind speed and pressure information around a set of hurricane eye locations during the period from August 23rd to August 31st. We have applied linear interpolation to the ground truth data to fill missing hurricane readings and to transform it to an hourly basis like the buoy dataset.
The anomaly scores of the proposed hybrid deep learning framework (HDLF) and of the IsolationForest (IF), LDBSCAN, LOF, and ST-BDBCAN algorithms are compared; the ST-Outlier detection algorithm is excluded as it does not generate a score. The proposed framework is tested against data prepared with different window sizes to demonstrate that it is the highest performing model in each configuration. We used Euclidean distance for the LOF and LDBSCAN algorithms, as it gives better results. However, we report the ST-BDBCAN result generated using the Mahalanobis distance, since it yielded better results, as indicated by Duggimpudi et al. [6]. The results of the three OAHII measures, OAHII1, OAHII2, and OAHII3, are shown in Table 3. The OAHII measures show the effectiveness of our model over the state-of-the-art spatio-temporal anomaly detection algorithms. It is superior to the other approaches in terms of both anomaly detection accuracy, which is given in Table 2, and the correlation of its scores with the ground truth, which is given in Table 3. According to the results in Table 3, our model outperformed the next best algorithm by more than 10% on all OAHII measures.

Quantitative Evaluation of Framework Variants
To further investigate the effects of hyperparameters, we tested the framework under different configurations by varying the number of CNN and LSTM layers. The results are summarized in Table 4.
The best OAHII1 measure for each framework variant is given in blue. The overall results show that, for the given problem domain, the network is not very sensitive to the number of layers, and increasing the number of layers does not improve performance significantly. The boxplot of each framework variant is given in Figure 9. According to the mean score of each model variant, represented by the blue line in the box plot, the accuracy (OAHII1 measure) does not deviate much between models with different network parameters. The average accuracy calculated over all framework variants with different step sizes is 0.836, more than 10% better than the next best scoring algorithm, IsolationForest, according to the quantitative evaluation results given in Table 3. The core architecture of the network and the way we use the spatial and temporal context prove to be a powerful approach to the spatio-temporal anomaly detection problem.
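The variant sweep can be sketched as a simple grid over layer counts; the layer ranges and per-variant scores below are illustrative assumptions, not the actual configurations or values from Table 4:

```python
from itertools import product

# Hypothetical layer-count grid for the framework variants.
cnn_layers  = [1, 2, 3]
lstm_layers = [1, 2]
variants = [{"cnn": c, "lstm": l} for c, l in product(cnn_layers, lstm_layers)]

# Stand-in OAHII1 scores per variant (illustrative values only).
oahii1 = [0.84, 0.83, 0.85, 0.82, 0.84, 0.83]

avg = sum(oahii1) / len(oahii1)
spread = max(oahii1) - min(oahii1)  # small spread => low layer sensitivity
```

A small `spread` relative to `avg` is what the boxplot in Figure 9 conveys: the layer counts barely move the OAHII1 score.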

Discussions
In this study, a new hybrid approach is proposed for the problem of unsupervised anomaly detection in multivariate spatio-temporal data. In unsupervised scenarios, the type of anomalies and the ratio of anomalous events within the given dataset are generally not known, and the training data might be contaminated by anomalous data, which is the case in most real-world scenarios. In the proposed unsupervised setting, no labeled dataset is required to train the framework for the anomaly detection problem: the hybrid deep learning framework is designed to be trained in a truly unsupervised fashion, in the full absence of labels for either normal or abnormal data. It is composed of a multi-channel convolutional neural network (CNN) and a Long Short-Term Memory (LSTM) network. The architecture of the framework is based on an autoencoder scheme: the multi-channel CNN is deployed on the encoder side to learn the spatial structure of each spatio-temporal multivariate data frame, and the LSTM network is deployed on the decoder side to learn temporal patterns from the encoded spatial structures. The architecture is robust enough to learn the important structure of the data even if the training dataset contains noise and anomalous data points. To the best of our knowledge, this is the first deep learning-based approach to the unsupervised anomaly detection problem in non-image multivariate spatio-temporal datasets.
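The reconstruction-error principle behind this autoencoder scheme can be sketched as follows; the `reconstruct` argument is a hypothetical stand-in for the trained CNN-LSTM encoder-decoder, reduced here to a trivial mean predictor for illustration:

```python
def mse(frame, recon):
    """Mean squared error between a data frame and its reconstruction."""
    return sum((a - b) ** 2 for a, b in zip(frame, recon)) / len(frame)

def anomaly_scores(frames, reconstruct):
    """Score each frame by how poorly the model reconstructs it.

    Frames resembling the learned normal behaviour reconstruct with low
    error; anomalous frames yield high error, hence a high score.
    """
    return [mse(f, reconstruct(f)) for f in frames]

# Hypothetical stand-in "model" that reconstructs everything as the
# training mean; the actual framework uses a multi-channel CNN encoder
# and an LSTM decoder instead.
train_mean = [10.0, 20.0]
frames = [[10.0, 20.0], [10.5, 19.5], [50.0, 80.0]]  # last frame anomalous
scores = anomaly_scores(frames, lambda f: train_mean)
```

Because the model is trained on mostly normal data, anomalies are exactly the frames it reconstructs worst, which is why no labels are needed.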
A set of experiments was carried out using the buoy dataset from the National Data Buoy Center, which is contaminated with noise and anomalies such as tropical storms and hurricanes. We demonstrated that the proposed encoder-decoder-based architecture achieves better results than state-of-the-art spatio-temporal anomaly detection methods, providing at least 10% improvement in quantitative scores. The proposed hybrid model stands out from other traditional and deep learning-based models as it is capable of handling noise and missing values better. Quantitative evaluation of the framework variants shows that the core architecture of the network and the way we use the spatial and temporal context form a powerful and robust approach to the spatio-temporal anomaly detection problem.
The major contributions of this study can be summarized as follows:

• In the proposed hybrid model, we handle the spatial and temporal contexts with different deep learning components, as these contextual variables refer to different types of dependencies.

• Unlike proximity-based methods, the proposed model can be easily implemented on datasets with mixed-type behavioral attributes, and it scales well to multivariate datasets with a high number of features.

• The crafting of the multivariate spatio-temporal dataset for unsupervised anomaly detection is also novel. The multi-channel CNN encoder network provides better representative spatial features for the anomaly detection process by using spatial neighborhood data.
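The dataset-crafting idea mentioned above, slicing the multivariate series into overlapping windows of time steps over a spatial neighborhood of sensors, can be sketched as follows; the toy series and its dimensions are hypothetical, not the actual buoy data:

```python
def craft_windows(series, window, step):
    """Slice a multivariate time series into overlapping windows.

    series -- list of per-time-step frames; each frame is a list of
              per-location feature vectors (the spatial neighborhood)
    window -- number of consecutive time steps per sample
    step   -- stride between consecutive windows
    """
    return [series[i:i + window]
            for i in range(0, len(series) - window + 1, step)]

# Hypothetical toy series: 5 time steps, 2 buoys, 1 feature each.
series = [[[t + 0.0], [t + 0.5]] for t in range(5)]
samples = craft_windows(series, window=3, step=1)
```

Each resulting sample carries both contexts at once: its inner frames hold the spatial neighborhood for the CNN encoder, and the sequence of frames holds the temporal order for the LSTM decoder.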
We believe that this approach is an important step towards applying deep learning to unsupervised learning problems. In the future, we plan to broaden this study in several directions. First, we plan to enhance the framework by adding an attention mechanism to tackle its weaknesses in analyzing long sequences; by letting the decoder component attend over the encoded sequence, we believe the framework can better detect collective anomalies in long-range multivariate spatio-temporal data. Second, we plan to explore optimization-based techniques for selecting the framework's hyperparameters to further enhance performance. Finally, we would like to investigate more efficient data imputation techniques to improve the anomaly detection process on real-life datasets.