Adaptive Online Learning for the Autoregressive Integrated Moving Average Models

Abstract: This paper addresses the problem of predicting time series data using the autoregressive integrated moving average (ARIMA) model in an online manner. Existing algorithms require model selection, which is time consuming and unsuitable for the setting of online learning. Using adaptive online learning techniques, we develop algorithms for fitting ARIMA models without hyperparameters. The regret analysis and experiments on both synthetic and real-world datasets show that the performance of the proposed algorithms can be guaranteed in both theory and practice.


Introduction
The autoregressive integrated moving average (ARIMA) model is an important tool for time series analysis [1] and has been successfully applied to a wide range of domains, including the forecasting of household electric consumption [2], scheduling in smart grids [3], finance [4], and environmental protection [5]. It specifies that the values of a time series depend linearly on their previous values and error terms. In recent years, online learning (OL) methods have been applied to estimate univariate [6,7] and multivariate [8,9] ARIMA models for their efficiency and scalability. These methods are based on the fact that any ARIMA model can be approximated by a finite dimensional autoregressive (AR) model, which can be fitted incrementally using online convex optimization algorithms. However, to guarantee accurate predictions, these methods require a proper configuration of hyperparameters, such as the diameter of the decision set, the learning rate, the order of differencing, and the lag of the AR model. Theoretically, these hyperparameters need to be set according to prior knowledge about the data generation, which is practically impossible to obtain. In practice, the hyperparameters are usually tuned to optimize the goodness of fit on unseen data, which requires numerical simulation (e.g., cross-validation) on a previously collected dataset. The numerical simulation is notoriously expensive, since it requires multiple training runs for each candidate hyperparameter configuration. Furthermore, a previously collected dataset containing ground truth is needed to validate the fitted model, which is unsuitable for the online setting. Worse still, the expensive tuning process needs to be repeated regularly if the statistical properties of the time series change over time in an unforeseen way.
Given a new problem of predicting time series values, tuning the hyperparameters of the online algorithms can thus negate the benefits of the online setting. This paper addresses this problem in the online learning framework by proposing new parameter-free algorithms for learning ARIMA models whose performance can still be guaranteed in both theory and practice. A naive approach would be to directly apply parameter-free online convex optimization (PF-OCO) algorithms to the AR approximation. However, the theoretical guarantees of both the AR approximation and the parameter-free algorithms rely on bounded gradient vectors of the loss function, which is an unreasonable assumption for the widely used squared error on an unbounded domain.
The key contribution of this paper is the design of online learning algorithms for ARIMA models that avoid regular and expensive hyperparameter tuning without damaging the power of the models. Our algorithms update the model incrementally, with a per-iteration computational complexity linear in the size of the model parameters and the number of candidate models. To obtain a solid theoretical foundation, we first show that, for any locally Lipschitz-continuous loss function, ARIMA models with a fixed order of differencing can be approximated using an AR model of the same order of differencing for a large enough lag. Based on this, new algorithms are proposed for learning the AR model adaptively without requiring any prior knowledge about the model parameters. For Lipschitz-continuous loss functions, we apply a new algorithm based on the adaptive follow the regularized leader (FTRL) framework [10] and show that it achieves a sublinear regret bound depending on the data sequence and the Lipschitz constant. The commonly used squared error requires special treatment, since it is not Lipschitz continuous; to obtain a data-dependent regret bound, we combine a polynomial regularizer [11] with the adaptive FTRL framework. Finally, to find the proper order and lag of the AR model in an online manner, multiple AR models are maintained simultaneously, and an adaptive hedge algorithm is applied to aggregate their predictions. In previous attempts [12,13] to solve this online model selection (OMS) problem, the exponentiated gradient (EG) algorithm was applied directly to aggregate the predictions, which not only requires tuning the learning rate, but also yields a regret bound depending on the loss incurred by the worst model. Our adaptive hedge algorithm is parameter-free and guarantees a regret bound depending on the time series sequence. Table 1 provides a comparison of the online learning algorithms applied to the learning of ARIMA models.
In addition to the theoretical analysis, we also demonstrate the performance of the proposed algorithms using both synthetic and real-world datasets. For loss functions that are not Lipschitz continuous, the gradient norm can be unbounded, so algorithms whose guarantees depend on the gradient norm can fail without further assumptions on the data generation. For OGD, the learning rate and the diameter of the decision set need to be tuned in practice; ONS has an additional hyperparameter controlling numerical stability; applying SF-MD to ARIMA, the diameter of the set of model parameters has to be tuned; and to obtain optimal performance, the learning rate of EG has to be tuned.
The rest of the paper is organized as follows. Section 2 reviews the existing work on the subject. The notation, learning model, and formal description of the problem are introduced in Section 3. Next, we present and analyze our algorithms in Section 4. Section 5 demonstrates the empirical performance of the proposed methods. Finally, we conclude our work with some future research directions in Section 6.

Related Work
An ARIMA model can be fitted using statistical methods such as recursive least squares and maximum likelihood estimation, which are not only based on strong assumptions such as Gaussian-distributed noise terms [18], linear dependencies [19], and data generated by a stationary process [20], but also require solving non-convex optimization problems [21]. Although these assumptions can be relaxed by considering non-Gaussian noise [22,23], non-stationary processes [24], or a convex relaxation [21], the pre-trained models still cannot deal with concept drift [7]. Moreover, retraining is time consuming and memory intensive, especially for large-scale datasets. The idea of applying regret minimization techniques to autoregressive moving average (ARMA) prediction was first introduced in [6]. The authors propose online algorithms that incrementally produce predictions close to the values generated by the best ARMA model. This idea was extended to ARIMA(p, q, d) models in [7] by learning an AR(m) model of the higher-order differencing of the time series. Further extensions to multiple time series can be found in [8,9], while the problem of predicting time series with missing data was addressed in [25].
In order to obtain accurate predictions, the lag of the AR model and the order of differencing have to be tuned, which has been well studied in the offline setting. In some textbooks [20,26,27], Akaike's Information Criterion (AIC) and the Bayesian Information Criterion (BIC) are recommended for this task. Both require prior knowledge and strong assumptions about the variance of the noise [20], and both are time and space consuming, as they require numerical simulation such as cross-validation on previously collected datasets. Nevertheless, given a properly selected lag m and order d, online convex optimization techniques such as the online Newton step (ONS) or online gradient descent (OGD) can be applied to fit the model in the regret minimization framework [6-9]. However, both algorithms introduce additional hyperparameters to control the learning rate and numerical stability.
The idea of selecting hyperparameters for online time series prediction was proposed in [12,13]. Treating online AR predictors with different lags as experts, the authors aggregate over the predictors by applying a multiplicative weights algorithm for prediction with expert advice. The proposed algorithm is not optimal for time series prediction, since the regret bound of the chosen algorithm depends on the largest loss incurred by the experts [28]. Furthermore, each individual expert still requires that the parameters be taken from a compact decision set, the diameter of which needs to be tuned in practice. A series of recent works on parameter-free online learning has opened up the possibility of achieving sublinear regret without prior information on the decision set. In [14], the unconstrained online learning problem is modeled as a betting game, based on which a parameter-free algorithm is developed. The algorithm was further extended in [15], so that a better regret bound can be achieved for strongly convex loss functions. However, the coin betting algorithm requires that the gradient vectors be normalized, which is unrealistic for unbounded time series and the squared error loss. In [16,17], the authors introduced parameter-free algorithms that do not require normalized gradient vectors. Unfortunately, the regret upper bounds of the proposed algorithms depend on the norm of the gradient vectors, which could be extremely large in our setting.
The main idea of the current work is the combination of the adaptive FTRL framework [10] with the idea of handling relatively Lipschitz-continuous functions [11], which makes it possible to devise an online algorithm with a data-dependent regret upper bound. To aggregate the results, an adaptive optimistic hedge algorithm is proposed, such that the overall regret depends on the data sequence instead of the worst-case loss.

Preliminary and Learning Model
Let $X_t$ denote the value observed at time $t$ of a time series. We assume that $X_t$ is taken from a finite dimensional real vector space $\mathbb{X}$ with norm $\|\cdot\|$. We denote by $L(\mathbb{X}, \mathbb{X})$ the vector space of bounded linear operators from $\mathbb{X}$ to $\mathbb{X}$ and by $\|\alpha\|_{op} = \sup_{x \in \mathbb{X}, x \neq 0} \frac{\|\alpha x\|}{\|x\|}$ the corresponding operator norm. An AR(p) model is given by
$$X_t = \sum_{i=1}^{p} \alpha_i X_{t-i} + \epsilon_t,$$
where $\alpha_i \in L(\mathbb{X}, \mathbb{X})$ is a linear operator and $\epsilon_t \in \mathbb{X}$ is an error term. The ARMA(p, q) model extends the AR(p) model by adding a moving average (MA) component as follows:
$$X_t = \sum_{i=1}^{p} \alpha_i X_{t-i} + \sum_{i=1}^{q} \beta_i \epsilon_{t-i} + \epsilon_t,$$
where $\epsilon_t \in \mathbb{X}$ is the error term and $\beta_i \in L(\mathbb{X}, \mathbb{X})$. We define the d-th order differencing of the time series as $\nabla^d X_t = \nabla^{d-1} X_t - \nabla^{d-1} X_{t-1}$ for $d \geq 1$ and $\nabla^0 X_t = X_t$. The ARIMA(p, q, d) model assumes that the d-th order differencing of the time series follows an ARMA(p, q) model. In this section, this general setting suffices for introducing the learning model. In the following sections, we fix the basis of $\mathbb{X}$ to obtain implementable algorithms, for which different kinds of norms and inner products for vectors and matrices are needed. We provide a table of required notation in Appendix C.
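To make the preliminaries concrete, here is a minimal NumPy sketch of the differencing operator $\nabla^d$ and a one-step AR(m) prediction on the differenced series. The function names and the array layout (rows are time steps) are ours, not the paper's.

```python
import numpy as np

def difference(x, d):
    """Apply the d-th order differencing to a series of vectors.

    x: array of shape (T, n); returns an array of shape (T - d, n),
    where row t holds the d-th order difference ending at time t."""
    for _ in range(d):
        x = x[1:] - x[:-1]
    return x

def ar_predict(gamma, history):
    """One-step AR(m) prediction: sum_i gamma_i applied to the i-th most
    recent value.

    gamma: array of shape (m, n, n) holding the matrix coefficients;
    history: the last m values, newest first, shape (m, n)."""
    return sum(gamma[i] @ history[i] for i in range(len(gamma)))
```

Predicting $\nabla^d X_t$ this way and adding back the lower-order differences of the previous step recovers a prediction of $X_t$ itself, which is how the integrated AR model is used later in the paper.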
In this paper, we consider the setting of online learning, which can be described as an iterative game between a player and an adversary. In each round t of the game, the player makes a prediction $\hat X_t$. Next, the adversary chooses some $X_t$ and reveals it to the player, who then suffers the loss $l(X_t, \hat X_t)$ for some convex loss function $l : \mathbb{X} \times \mathbb{X} \to \mathbb{R}$. The ultimate goal is to design a strategy for the player to minimize the cumulative loss $\sum_{t=1}^{T} l(X_t, \hat X_t)$ of T rounds. For simplicity, we define $l_t(\hat X_t) = l(X_t, \hat X_t)$. In classical textbooks about time series analysis, the signal is assumed to be generated by a model, based on which the predictions are made. In this paper, we make no assumptions on the data generation. Therefore, minimizing the cumulative loss is generally impossible. An achievable objective is to keep a possibly small regret of not having chosen some ARIMA(p, q, d) model to generate the prediction $\hat X_t$. Formally, we denote by $\hat X_t(\alpha, \beta)$ the prediction using the ARIMA(p, q, d) model parameterized by $\alpha$ and $\beta$, given by
$$\hat X_t(\alpha, \beta) = \sum_{i=1}^{p} \alpha_i \nabla^d X_{t-i} + \sum_{i=1}^{q} \beta_i \epsilon_{t-i} + \sum_{i=0}^{d-1} \nabla^i X_{t-1}$$
(in this paper, we do not directly address the problem of cointegration, where the third term would be applied to a low-rank linear operator). The cumulative regret of T rounds is then given by
$$R_T = \sum_{t=1}^{T} l_t(\hat X_t) - \min_{\alpha, \beta} \sum_{t=1}^{T} l_t(\hat X_t(\alpha, \beta)).$$
The goal of this paper is to design a strategy for the player such that the cumulative regret grows sublinearly in T. In the ideal case, in which the data are actually generated by an ARIMA process, the prediction generated by the player yields a small loss. Otherwise, the predictions are always close to those produced by the best ARIMA model, independent of the data generation. Following the adversarial setting in [6], we allow the sequences $\{X_t\}$, $\{\epsilon_t\}$ and the parameters $\alpha$, $\beta$ to be selected by the adversary. Without any restrictions on the model, this is no different from the impossible task of minimizing the cumulative loss, since $\epsilon_{t-1}$ can always be selected such that $X_t = \hat X_t(\alpha, \beta)$ holds for all t.
Therefore, we make the following assumptions throughout this paper. Since we are interested in competing against predictions generated by ARIMA models, we assume that $\epsilon_t$ is selected as if $X_t$ were generated by the ARIMA process. Furthermore, we assume that the norm $\|\epsilon_t\|$ is upper bounded within T iterations.

Assumption 1. There is some $R > 0$ such that $\|\epsilon_t\| \leq R$ holds for all $t \leq T$.

Assumption 2. There is some $\varepsilon \in (0, 1)$ such that $\sum_{i=1}^{q} \|\beta_i\|_{op} \leq 1 - \varepsilon$.

Assumption 2 is a sufficient condition for the MA component to be invertible, which prevents it from going to infinity as $t \to \infty$ [27].
Our work is based on the fact that we can compete against an ARIMA(p, q, d) model by taking predictions from an AR(m) model of the d-th order differencing for large enough m, as shown in the following lemma, the proof of which can be found in Appendix A.
Lemma 1. Let $\alpha$ and $\beta$ be as assumed in Assumptions 1 and 2. Then there is some $\gamma \in L(\mathbb{X}, \mathbb{X})^m$ with $m \geq q \log T / \log \frac{1}{1-\varepsilon} + p$ such that $\|\hat X_t(\gamma) - \hat X_t(\alpha, \beta)\|$ is small for all $t = 1, \ldots, T$.

As can be seen from the lemma, a prediction $\hat X_t(\gamma)$ generated by the AR process is close to the prediction $\hat X_t(\alpha, \beta)$ generated by the ARIMA process. In previous works [6,7], the loss function $l_t$ is assumed to be Lipschitz continuous to control the difference of the losses incurred by the approximation. In general, this does not hold for the squared error. However, from Assumption 1 and Lemma 1, it follows that both $\hat X_t(\alpha, \beta)$ and $\hat X_t(\gamma)$ lie in a compact set around $X_t$ with a bounded diameter. Given the convexity of $l$, which is locally Lipschitz continuous in a compact convex domain, we obtain a similar property:
$$|l_t(\hat X_t(\gamma)) - l_t(\hat X_t(\alpha, \beta))| \leq L(X_t) \|\hat X_t(\gamma) - \hat X_t(\alpha, \beta)\|,$$
where $L(X_t)$ is some constant depending on $X_t$. For the squared error, it is easy to verify that the Lipschitz constant depends on $\nabla^d X_t$, the boundedness of which can be reasonably assumed. To avoid extraneous details, we simply add a third assumption.

Assumption 3. The local Lipschitz constants are uniformly bounded, i.e., there is some $L > 0$ such that $L(X_t) \leq L$ holds for all $t = 1, \ldots, T$.

The next corollary shows that the losses incurred by the ARIMA model and its approximation are close, which allows us to take predictions from the approximation.
Corollary 1. Let $\alpha$, $\beta$, and $l$ be as assumed in Assumptions 1-3. Then there is some $\gamma \in L(\mathbb{X}, \mathbb{X})^m$ such that $|l_t(\hat X_t(\gamma)) - l_t(\hat X_t(\alpha, \beta))|$ is upper bounded by a constant for all $t = 1, \ldots, T$.

Algorithms and Analysis
From Corollary 1, it follows that an ARIMA(p, q, d) model can be approximated by an integrated AR model with large enough lag m. However, neither the order of differencing d nor the lag m is known. To avoid tuning them on a previously collected dataset, we propose a framework with a two-level hierarchical construction, described in Algorithm 4.

Algorithm 4 Two-level framework.
Input: K instances of the slave algorithm $A_1, \ldots, A_K$ and an instance of the master algorithm M.
The idea is to maintain a master algorithm M and a set of slave algorithms $\{A_k \mid k = 1, \ldots, K\}$. At each step t, the master algorithm receives predictions $\hat X_t^k$ from $A_k$ for $k = 1, \ldots, K$. Then it comes up with a convex combination $\hat X_t = \sum_{k=1}^{K} w_t^k \hat X_t^k$ for some $w_t \in \Delta$ in the simplex. Next, it observes $X_t$ and computes the loss $l_t(\hat X_t^k)$ for each slave $A_k$, which is then used to update $A_k$ and $w_{t+1}$. Let $\{\hat X_t^k\}$ be the sequence generated by some slave k. We define the regret of not having chosen the predictions generated by slave k as
$$R_T^M(k) = \sum_{t=1}^{T} l_t(\hat X_t) - \sum_{t=1}^{T} l_t(\hat X_t^k)$$
and the regret of slave k as
$$R_T^k = \sum_{t=1}^{T} l_t(\hat X_t^k) - \sum_{t=1}^{T} l_t(\hat X_t(\gamma^k)),$$
where $\hat X_t(\gamma^k)$ is the prediction generated by an integrated AR model parameterized by $\gamma^k$. Let $A_k$ be some slave. Then the regret of this two-level framework can be decomposed as
$$R_T = R_T^M(k) + R_T^k + \underbrace{\sum_{t=1}^{T} l_t(\hat X_t(\gamma^k)) - \min_{\alpha, \beta} \sum_{t=1}^{T} l_t(\hat X_t(\alpha, \beta))}_{\text{approximation error}}.$$
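The loop of the two-level framework can be sketched as follows. The interfaces (`predict`, `weights`, `update`) are illustrative names of ours, not the paper's exact pseudocode for Algorithm 4.

```python
def run_two_level(master, slaves, stream, loss):
    """Sketch of the two-level (master/slave) online prediction loop.

    master: exposes weights() -> list summing to 1 and update(per_slave_losses);
    slaves: each exposes predict() -> prediction and update(observation);
    stream: iterable of observations X_t; loss: loss(X_t, prediction).
    Returns the cumulative loss of the aggregated predictions."""
    total = 0.0
    for X_t in stream:
        preds = [A.predict() for A in slaves]           # slave predictions
        w = master.weights()                            # point in the simplex
        X_hat = sum(w_k * p for w_k, p in zip(w, preds))  # convex combination
        total += loss(X_t, X_hat)
        master.update([loss(X_t, p) for p in preds])    # per-slave losses
        for A in slaves:                                # reveal X_t to slaves
            A.update(X_t)
    return total
```

The master only ever sees the K per-slave losses, which is why its per-iteration cost is linear in the number of candidate models.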

For $\gamma^k$, $\alpha$, and $\beta$ satisfying the condition in Corollary 1 (this is not a condition for the correctness of the algorithm: with more slaves, there are more $\alpha$, $\beta$ satisfying the condition, i.e., we increase the freedom of the model by increasing the number of slaves), the marked term above is upper bounded by a constant. If the regrets of the master and the slaves grow sublinearly in T, we achieve an overall sublinear regret upper bound, which is formally described in the following corollary.

Corollary 2. Let $A_i$ be an online learning algorithm competing against an AR($m_i$) model parameterized by $\gamma^i$ for $i = 1, \ldots, K$. For any ARIMA model parameterized by $\alpha$ and $\beta$, if there is a $k \in \{1, \ldots, K\}$ such that $\hat X_t(\gamma^k)$, $\hat X_t(\alpha, \beta)$, and $\{X_t\}$ satisfy Assumptions 1-3, then running Algorithm 4 with M and $A_1, \ldots, A_K$ guarantees a regret bounded by $R_T^M(k) + R_T^k$ up to an additive constant.

Next, we design and analyze parameter-free algorithms for the slaves and the master.

Algorithms for Lipschitz Loss
Given fixed m and d, an integrated AR(m) model can be treated as an ordinary linear regression model. In each iteration t, we select $\gamma_t = (\gamma_{1,t}, \ldots, \gamma_{m,t}) \in L(\mathbb{X}, \mathbb{X})^m$ and make the prediction
$$\hat X_t(\gamma_t) = \sum_{i=1}^{m} \gamma_{i,t} \nabla^d X_{t-i} + \sum_{i=0}^{d-1} \nabla^i X_{t-1}.$$
Since $l_t$ is convex, there is some subdifferential $g_t \in \partial l_t(\hat X_t(\gamma_t))$ such that
$$l_t(\hat X_t(\gamma_t)) - l_t(\hat X_t(\gamma)) \leq \langle g_t, \gamma_t - \gamma \rangle. \quad (2)$$
Thus, we can cast the online linear regression problem to an online linear optimization problem. Unlike the previous work, we focus on the unconstrained setting, where $\gamma_t$ is not picked from a compact decision set. In this setting, we can apply an FTRL algorithm with an adaptive regularizer. To obtain an efficient implementation, we fix a basis for both $\mathbb{X}$ and $\mathbb{X}^*$. Now we can assume $\mathbb{X} = \mathbb{X}^* = \mathbb{R}^n$ and work with the matrix representation of $\gamma \in L(\mathbb{X}, \mathbb{X})$. It is easy to verify that (2) can be rewritten in terms of the Frobenius inner product. It is well known that the Frobenius inner product can be considered as a dot product of vectorized matrices, with which we obtain a simple first-order algorithm (the computational complexity per iteration depends linearly on the dimension of the parameter, i.e., $O(n^2 m)$), described in Algorithm 1.
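As an illustration of adaptive FTRL in the unconstrained setting, the following sketch plays the FTRL minimizer for a simple adaptive quadratic regularizer whose strength grows with the observed gradients. It is a stand-in of ours for Algorithm 1, whose exact regularizer and constants are specified in the paper; the role of the initial guess G0 mirrors the discussion of $L_0$ below.

```python
import numpy as np

class AdaFTRLSketch:
    """Illustrative adaptive FTRL for unconstrained online linear losses.

    Plays gamma_t = argmin_g <theta_t, g> + (eta_t / 2) * ||g||_F^2
                  = -theta_t / eta_t,
    with eta_t = sqrt(G0^2 + sum_s ||g_s||_F^2) adapted to past gradients."""

    def __init__(self, shape, G0=1.0):
        self.theta = np.zeros(shape)   # running sum of observed gradients
        self.G2 = G0 ** 2              # G0: initial guess of the gradient scale

    def play(self):
        return -self.theta / np.sqrt(self.G2)

    def update(self, g):
        self.theta = self.theta + g
        self.G2 += float(np.sum(np.asarray(g) ** 2))
```

Because the regularizer scales with the accumulated gradient norms, no diameter of a decision set and no fixed learning rate need to be supplied.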
The cumulative regret of Algorithm 1 can be upper bounded using the following theorem.
For an L-Lipschitz loss function $l_t$, for which $L_{T+1}$ is upper bounded by L, we obtain a sublinear regret upper bound depending on the sequence of d-th order differences $\{\nabla^d X_t\}$. In case L is known, we can set $L_0 = L$; otherwise, picking $L_0$ arbitrarily from a reasonable range (e.g., $L_0 = 1$) does not have a devastating impact on the performance of the algorithms.

Algorithms for Squared Errors
For the commonly used squared error given by $l_t(\hat X_t) = \frac{1}{2} \|X_t - \hat X_t\|_2^2$, it can be verified that $g_t$ can be represented as a vector for all t. Existing algorithms with a regret upper bound depending on $\|g_t\|_2$ could fail, since $\|g_t\|_2$ can be made arbitrarily large by the adversarially selected data sequence $X_1, \ldots, X_t$. To design a parameter-free algorithm for the squared error, we equip FTRL with a time-varying polynomial regularizer, described in Algorithm 2. Define $x_t = (\nabla^d X_{t-1}, \ldots, \nabla^d X_{t-m})$ and consider the matrix representation $\gamma_t = (\gamma_{1,t} \cdots \gamma_{m,t})$. Then we have $g_t = \gamma_t x_t - \nabla^d X_t$, and the upper bound of the regret can be rewritten accordingly. The idea of Algorithm 2 is to run the FTRL algorithm with a polynomial regularizer for increasing sequences $\{\lambda_t\}$ and $\{\eta_t\}$, which leads to the updating rule
$$\gamma_t = \frac{\theta_t}{\lambda_t c^2 + \eta_t}$$
for c satisfying $\lambda_t c^3 + \eta_t c = \|\theta_t\|_F$. Since we have $\lambda_t \geq 0$ and $\eta_t > 0$ for $\theta_1 = 0$, c exists and has a closed-form expression. The computational complexity per iteration has a linear dependency on the dimension of $L(\mathbb{X}, \mathbb{X})^m$. The following theorem provides a regret upper bound of Algorithm 2.
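The cubic equation $\lambda_t c^3 + \eta_t c = \|\theta_t\|_F$ always has a unique nonnegative root, since the left-hand side is strictly increasing in c for $\eta_t > 0$. A NumPy sketch of one such update step (the function name is ours, and the schedules for $\lambda_t$ and $\eta_t$ prescribed by Algorithm 2 are taken as given inputs here; we assume the regularizer has the polynomial form $\frac{\lambda_t}{4}\|\gamma\|_F^4 + \frac{\eta_t}{2}\|\gamma\|_F^2$):

```python
import numpy as np

def poly_ftrl_update(theta, lam, eta):
    """FTRL step for the regularizer lam/4 * ||g||^4 + eta/2 * ||g||^2.

    Returns gamma = theta / (lam * c**2 + eta), where c = ||gamma||_F is the
    unique nonnegative root of lam * c**3 + eta * c = ||theta||_F
    (requires eta > 0, lam >= 0)."""
    norm = np.linalg.norm(theta)
    if norm == 0.0:
        return np.zeros_like(theta)
    # np.roots strips a leading zero, so lam = 0 degenerates to c = norm / eta.
    roots = np.roots([lam, 0.0, eta, -norm])
    c = max(r.real for r in roots if abs(r.imag) < 1e-9 and r.real >= 0)
    return theta / (lam * c ** 2 + eta)
```

Note that $\|\gamma\| = \|\theta\| / (\lambda c^2 + \eta) = c$ by construction, so the returned point indeed has Frobenius norm c.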
Theorem 2. Let $\{X_t\}$ be any sequence of vectors taken from $\mathbb{X}$ and let $l_t$ be the squared error. We define $x_t = (\nabla^d X_{t-1}, \ldots, \nabla^d X_{t-m})$ and $\gamma = (\gamma_1 \cdots \gamma_m)$, the matrix representation of $\gamma_1, \ldots, \gamma_m \in L(\mathbb{X}, \mathbb{X})$. Then, Algorithm 2 guarantees a regret upper bound depending on the data sequence for all $\gamma \in L(\mathbb{X}, \mathbb{X})^m$.
For the squared error, Algorithm 2 does not require a compact decision set and ensures a sublinear regret bound depending on the data sequence. Similar to Algorithm 1, one can set $G_0$ according to prior knowledge about the bounds of the time series. Alternatively, we can simply set $G_0 = 1$ to obtain reasonable performance.

Online Model Selection Using Master Algorithms
The straightforward choice for the master algorithm would be the exponentiated gradient algorithm for prediction with expert advice. However, this algorithm requires tuning the learning rate and losses bounded by a small quantity, neither of which can be assumed in our case. The AdaHedge algorithm [29] avoids these problems; however, it yields a worst-case regret bound depending on the largest observed loss, which can be much worse than a data-dependent regret bound.
Our idea is based on the adaptive optimistic follow the regularized leader (AO-FTRL) framework [10]. Given a sequence of hints $\{h_t\}$ and loss vectors $\{z_t\}$, AO-FTRL guarantees a regret bound related to $\sum_{t=1}^{T} \|z_t - h_t\|_t^2$ for some time-varying norm $\|\cdot\|_t$. In our case, where the loss incurred by slave k at iteration t is given by $l(X_t, \hat X_t^k)$, we simply choose $h_{k,t} = l(\sum_{i=0}^{d-1} \nabla^i X_{t-1}, \hat X_t^k)$. If l is L-Lipschitz in its first argument, then we have $|z_{k,t} - h_{k,t}| \leq L \|\nabla^d X_t\|$, which leads to a data-dependent regret. The obtained algorithm is described in Algorithm 3. Its regret is upper bounded by the following theorem, the proof of which is provided in Appendix B.
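One step of an optimistic hedge master over the simplex can be sketched as follows. The step-size schedule below is a common adaptive choice of ours for illustration, not necessarily the one prescribed by Algorithm 3.

```python
import numpy as np

def ao_hedge_weights(cum_loss, hint, cum_hint_err, K):
    """One step of an optimistic hedge master (illustrative sketch).

    cum_loss: cumulative loss vector of the K slaves up to round t-1;
    hint: the loss hint h_t for round t (one entry per slave);
    cum_hint_err: running sum of squared hint errors from past rounds.
    Plays w_t proportional to exp(-eta_t * (cum_loss + hint))."""
    eta = np.sqrt(np.log(K) / (1.0 + cum_hint_err))  # adaptive step size
    logits = -eta * (cum_loss + hint)
    logits -= logits.max()                           # numerical stability
    w = np.exp(logits)
    return w / w.sum()
```

Good hints shrink the hint errors, keep the step size large, and let the master track the best slave quickly; this is the mechanism behind the data-dependent bound.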
Theorem 3. Let $\{w_t\}$ be as generated in Algorithm 3. Assume l is L-Lipschitz in its first argument and convex in its second argument. Then, for any sequence $\{X_t\}$ and slave algorithm $A_k$, Algorithm 3 guarantees a bound on the master regret depending on the data sequence.

By Corollary 2, combining Algorithm 3 with Algorithm 1 or 2 guarantees a data-dependent regret upper bound sublinear in T. Note that Algorithm 3 has an input parameter d, which can be adjusted according to prior knowledge of the dataset such that $\|\nabla^d X_t\|_2^2$ is bounded by a small quantity. In case no prior knowledge is available, we can set d to the maximal order of differencing used in the slave algorithms. Arguably, Lipschitz continuity is not a reasonable assumption for the squared error on an unbounded domain. With bounded $\|\nabla^d X_t\|_2^2$, we can still assume that the loss function is locally Lipschitz, with a Lipschitz constant depending on the prediction. In the next section, we show the performance of Algorithm 3 in combination with Algorithms 1 and 2 in different experimental settings.

Experiments and Results
In this section, we carry out experiments on both synthetic and real-world data to show that the proposed algorithms can generate promising predictions without tuning hyperparameters.

Experiment Settings
The synthetic data was generated randomly. We run 20 trials for each synthetic experiment and average the results. For numerical stability, we scale the real-world data down so that the values are between 0 and 10. Note that the range of the data is not assumed or used by the algorithms.
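One way to carry out the scaling step is a min-max map onto the target range; the paper only states the range [0, 10], so the particular affine map and the function name below are our assumption. This is pure preprocessing: the algorithms themselves never see or use the original range.

```python
import numpy as np

def scale_down(series, hi=10.0):
    """Affinely map a real-valued series into [0, hi] for numerical stability.

    Assumes min-max scaling; the paper only specifies the target range."""
    lo_val, hi_val = series.min(), series.max()
    return hi * (series - lo_val) / (hi_val - lo_val)
```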

Setting 3: Time-Varying Models
To get more adversarially selected time series values, we generate the first half of the values using a stationary 10-dimensional ARIMA(5, 2, 1) model and the second half of the values using a stationary 10-dimensional ARIMA(5, 2, 0) model. The model parameters are drawn randomly.

Stock Data: Time Series with Trend
Following the experiments in [8], we collect the daily stock prices of seven technology companies from Yahoo Finance, together with the S&P 500 index, covering more than twenty years. The series have an obvious increasing trend and are believed to exhibit integration.

Google Flu Data: Time Series with Seasonality
We collect estimates of influenza activity of the northern hemisphere countries, which has an obvious seasonal pattern. In the experiment, we examine the performance of the algorithms for handling regular and predictable changes that occur over a fixed period.

Electricity Demand: Trend and Seasonality
In this setting, we collect monthly load, gross electricity production, net electricity consumption, and gross demand in Turkey from 1976 to 2010. The dataset contains both trend and seasonality.

Experiments for the Slave Algorithms
We first fix d = 1 and m = 16 and compare our slave algorithms with ONS and OGD from [9] for the squared error $l_t(\hat X_t) = \frac{1}{2} \|X_t - \hat X_t\|_2^2$ and the Euclidean distance $l_t(\hat X_t) = \|X_t - \hat X_t\|_2$. ONS and OGD stack and vectorize the parameter matrices and incrementally update the vectorized parameter $u_t$ using the rules
$$u_{t+1} = \Pi_W(u_t - \eta_t g_t) \quad \text{(OGD)}, \qquad u_{t+1} = \Pi_W^{A_t}(u_t - \eta A_t^{-1} g_t) \quad \text{(ONS)},$$
where $g_t$ is the vectorized gradient at step t, $A_t$ accumulates the outer products of past gradients, W is the decision set satisfying $\sup_{u \in W} \|u\|_2 \leq c$, and the operator $\Pi_W(v)$ projects v into W. We select a list of candidate values for each hyperparameter, evaluate their performance on the whole dataset, and select the configuration with the best performance for comparison. Since the synthetic data are generated randomly, we average the results over 20 trials for stability. The corresponding results are shown in Figures 1-6 (to amplify the differences between the algorithms, we use log plots for the y-axis in all settings; for the synthetic datasets, we also use a log plot for the x-axis, so that the behavior of the algorithms in the first 1000 steps can be better observed).
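For reference, a single projected OGD step of the kind used by the baselines can be sketched as follows (a standard sketch of ours; the baselines' exact learning-rate schedules come from the cited works). The step makes explicit where the two tuned hyperparameters, the learning rate and the radius c of the decision set, enter.

```python
import numpy as np

def ogd_step(u, g, lr, c):
    """Projected online gradient descent step:
    u_{t+1} = Pi_W(u_t - lr * g_t), with W = {u : ||u||_2 <= c}.

    lr (learning rate) and c (decision-set radius) are exactly the
    hyperparameters that must be tuned for the baselines."""
    v = u - lr * g
    norm = np.linalg.norm(v)
    return v if norm <= c else v * (c / norm)
```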
To show the impact of the hyperparameters on the performance of the baseline algorithms, we also plot their performance using sub-optimal configurations. Note that since the error term $\epsilon_t$ cannot be predicted, an ideal predictor would suffer an average error of at least $\|\epsilon_t\|_2^2$ and $\|\epsilon_t\|_2$ for the two kinds of loss functions, respectively. This quantity is known for the synthetic datasets and is plotted in the figures.
In all settings, both AdaFTRL and AdaFTRL-Poly perform on par with well-tuned OGD and ONS, which can perform extremely badly under sub-optimal hyperparameter configurations. In the experiments using synthetic datasets, AdaFTRL suffers large losses at the beginning but generates accurate predictions after 1000 iterations. The relative performance of the proposed algorithms after the first 1000 iterations, compared to the best-tuned baseline algorithms, is plotted in Appendix D. AdaFTRL-Poly has more stable performance than AdaFTRL. In the experiment with the Google Flu data, all algorithms suffer huge losses around iteration 300 due to an abrupt change in the dataset. OGD and ONS with sub-optimal hyperparameter configurations, despite good performance on the first half of the data, generate very inaccurate predictions after the abrupt change. This could lead to a catastrophic failure in practice when certain patterns do not appear in the dataset collected for hyperparameter tuning. Our algorithms are more robust against this change and perform similarly to OGD and ONS with optimal hyperparameter configurations.

Experiments for Online Model Selection
The performance of the two-level framework with Algorithm 3 for online model selection is demonstrated in Figures 7-12. We simultaneously maintain 96 AR(m) models of d-th order differencing for m = 1, . . . , 32 and d = 0, . . . , 2, which are updated by Algorithm 2 for the squared error and Algorithm 1 for the Euclidean distance, respectively. The predictions generated by the AR models are aggregated using Algorithm 3 and the aggregation algorithm (AA) introduced in [13] with the learning rate set to √T. We compare the average losses incurred by the aggregated predictions with those incurred by the best AR model. To show the impact of m and d, we also plot the average losses of some sub-optimal AR models.
In all settings, AO-Hedge outperforms AA, although the differences are very slight in some of the experiments. We would like to stress again that the choice of the hyperparameters has a great impact on the performance of the AR model. In settings 1-3, the AR model with 0-th order differencing has the best performance, although the data are generated using d = 1, which suggests that prior knowledge about the data generation may not always be helpful for model selection. The experimental results also show that AO-Hedge performs similarly to the best AR model.

Conclusions
We proposed algorithms for fitting ARIMA models in an online manner without requiring prior knowledge or tuning hyperparameters. We showed that the cumulative regret of our method grows sublinearly with the number of iterations and depends on the values of the time series. The comparison study on both synthetic and real-world datasets suggests that the proposed algorithms have a performance on par with the well-tuned state-of-the-art algorithms.
There are still several remaining issues that we want to address in future research. Firstly, it would be interesting to develop a parameter-free algorithm for the cointegrated vector ARMA model. Secondly, we believe that the strong assumption on the β coefficients can be relaxed for multi-dimensional time series by generalizing Lemma 2 in [7]. Furthermore, we are also interested in applying online learning to other time series models such as the (generalized) ARCH model [30]. Finally, the proposed algorithms need to be empirically analyzed using more real-world datasets and loss functions, and compared with more recent predictive models such as recurrent neural networks and models combining neural networks with ARIMA models [31].

Funding: We acknowledge support by the German Research Foundation and the Open Access Publication Fund of TU Berlin.

Informed Consent Statement: Not applicable
Data Availability Statement: The source code for generating the synthetic data set, the implementation of the algorithms, and the detailed information about our experiments are available on GitHub: https://github.com/OnlinePredictorTS/AOLForTimeSeries (accessed on March 2021). The stock data are collected from https://finance.yahoo.com/ (accessed on March 2021). The Google Flu data are available at https://github.com/datalit/googleflutrends/ (accessed on March 2021). The detailed information about the electricity demand can be found in [32].

Conflicts of Interest:
The authors declare no conflicts of interest.

Appendix A
We prove Lemma 1 in this section. Let $\hat X_t^\infty(\alpha, \beta)$ be the t-th value generated by the ARIMA process. To prove Lemma 1, we generalize the proof provided in [6]. To remove the MA component, we first recursively define a growing process of the d-th order differencing.
The next lemma shows that it approximates an ARIMA(p, q, d) process.
Lemma A1. For any $\alpha$, $\beta$, and $\{\epsilon_t\}$ satisfying Assumptions 1 and 2, the growing process approximates the ARIMA(p, q, d) process for $t = 1, \ldots, T$.

Proof. First of all, we can assume $\|\epsilon_t\| \leq R$ for $t \leq 0$. Next, we prove by induction on t that $\|Y_\tau\| \leq (1 - \varepsilon)^{\lceil \tau / q \rceil} R$ holds for all $\tau \leq t$. For the induction basis, the claim holds for all $\tau \leq 0$. Assuming the claim holds for some t, applying Assumption 2 to the recursion concludes the induction. Finally, summing up the bounds yields the claimed result.
Next, we recursively define the process $\{\nabla^d X_t^m(\alpha, \beta)\}$, where $\nabla^d X_t^m(\alpha, \beta) = \nabla^d X_t$ for $m \leq 0$, and let $\{\hat X_t^m(\alpha, \beta)\}$ be the sequence generated by it. We show in the next lemma that it is close to $\{\hat X_t^\infty(\alpha, \beta)\}$.
Lemma A2. For any $\alpha$, $\beta$, $\{l_t\}$, and $\{\epsilon_t\}$ satisfying Assumptions 1 and 2, the sequences $\{\hat X_t^m(\alpha, \beta)\}$ and $\{\hat X_t^\infty(\alpha, \beta)\}$ are close for large enough m.

Proof. We prove by induction on m that the distance $Z_t^{\bar m}$ is bounded for all $t = 1, \ldots, T$ and $0 \leq \bar m \leq m$. For $\bar m = 0$ and $t = 1, \ldots, T$, the definition of the process $\{\nabla^d X_t^\infty(\alpha, \beta)\}$ involves the terms $Y_{t-i}$ defined as in the proof of Lemma A1. From the assumption, we have $\|\nabla^d \hat X_t(\alpha, \beta) - \nabla^d X_t\| = \|\epsilon_t\| \leq R$, and, as proved in Lemma A1, $\|Y_t\| \leq R$ holds. Therefore, we obtain $Z_t^0 \leq 2R$, which is the induction basis. Next, assume the claim holds for all $0, \ldots, \bar m - 1$. Combining the induction hypothesis with the bound from the induction basis, $Z_t^{\bar m}$ can be further bounded. Choosing $m \geq q \log T / \log \frac{1}{1-\varepsilon}$ yields the claimed result.
The $d$-th-order differenced process defined above is in fact an integrated AR($m + p$) process of order $d$, as the following lemma shows.
Lemma A3. For any data sequence $\{X_t^{m}(\alpha, \beta)\}$ generated by a process of the $d$-th-order differencing given by (A1) and (A2), there is a $\gamma \in L(X, X)^{m+p}$ such that the AR($m+p$) representation holds for all $t$.
Proof. Let $\{\nabla^d X_t^{m}(\alpha, \beta)\}$ be the sequence generated by (A1). We prove by induction on $\tilde m$ that, for all $\tilde m \le m$, there is a $\gamma \in L(X, X)^{\tilde m + p}$ such that the representation holds for all $\alpha$ and $\beta$. The induction basis follows directly from the definition. Assume that the claim holds for some $\tilde m$. Let $\alpha_i$ be the zero operator for $i > p$ and $\beta_i$ be the zero operator for $i > q$. Substituting the recursion and collecting the coefficients $\gamma_i$ for $i = 1, \ldots, \tilde m + p + 1$, the claimed result follows.
Finally, we prove Lemma 1 by combining the results.
Proof of Lemma 1. From Lemmas A1, A2, and A3, there is some $\gamma \in L(X, X)^{m}$ with $m \ge q \frac{\log T}{\log \frac{1}{1-\varepsilon}} + p$ such that the AR($m$) predictor incurs losses close to those of the ARIMA predictor. Combining the inequalities above, rearranging, and adding $\sum_{t=1}^{T} \langle g_t, w_t \rangle$ to both sides, we obtain the claimed result.
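To make the statement of Lemma 1 concrete, the following sketch fits the approximating AR(m) model to the d-th differences of a series with plain online gradient descent. The squared loss, the fixed step size, and the d = 1 undifferencing are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def online_ar_predict(x, m=8, d=1, lr=0.01):
    """Online AR(m) prediction on the d-th differences of x.

    Returns the one-step-ahead predictions for the original series.
    Plain online gradient descent on the squared loss; the ARIMA
    structure is absorbed into the AR(m) coefficients, as in Lemma 1.
    """
    z = np.diff(x, n=d)                # d-th-order differencing
    gamma = np.zeros(m)                # AR coefficients, initialised at 0
    preds = []
    for t in range(m, len(z)):
        window = z[t - m:t][::-1]      # most recent difference first
        z_hat = gamma @ window         # predicted next difference
        # undo the differencing to predict the original series
        preds.append(z_hat + x[t + d - 1])   # valid for d = 1
        grad = 2.0 * (z_hat - z[t]) * window  # gradient of squared loss
        gamma -= lr * grad             # online gradient step
    return np.array(preds)
```

The lag m would be chosen as in Lemma 1, growing logarithmically with the horizon T.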
Appendix B

Proof of Theorem 1
Proof. First of all, the overall regret can be written as the sum of the per-coordinate regrets $\sum_{t=1}^{T} \langle g_{i,t}, \gamma_{i,t} - \gamma_i \rangle$. Next, we analyse the regret of each coordinate. From the updating rule of $G_{i,t}$, we have $g_{i,t} = 0$ whenever $G_{i,t} = 0$. Let $t_0$ be the smallest index such that $G_{i,t_0} > 0$; no regret is incurred before $t_0$. For $G_{i,t} > 0$, $\psi_{i,t}$ is $\eta_{i,t}$-strongly convex with respect to $\|\cdot\|_F$. From the duality of strong convexity and strong smoothness (see Proposition 2 in [17]) and the definition of the Frobenius norm, we obtain the per-round stability bound, where the second inequality uses Lemma 4 in [17] and the last inequality follows from the fact that $\|g_{i,t}\|_F \le L_t \|\nabla^d X_{t-i}\|_2 \le L_{T+1} \|\nabla^d X_{t-i}\|_2$. Furthermore, it is easy to verify that $\gamma_t \in \partial \psi_t^*(\theta_t)$. Applying Lemma A4 with $h_t = 0$ yields the generic bound. Define $v_t \in \partial \psi_{t+1}^*(\theta_t)$, and write $\bar\psi_t(\gamma) = \frac{\lambda_t}{4}\|\gamma\|_F^4$ and $\hat\psi_t(\gamma) = \frac{\eta_t}{2}\|\gamma\|_F^2$. Using the properties of the Frobenius norm and following the idea of [33], we can upper bound $\|\gamma_t\|_F^2 \, \|\gamma_t - v_t\|_F^2$, where the second inequality uses the fact that $2ab - b^2 \le a^2$. Let $t_0$ be the smallest index such that $\lambda_{t_0} > 0$; then the corresponding sum can be bounded, where the last inequality again uses Lemma 4 in [17]. Similarly, let $t_1$ be the smallest index such that $\eta_{t_1} > 0$; we obtain the analogous upper bound. Combining (A3)–(A6) gives the overall bound. For $\theta_1 = 0$, it is easy to verify that $\psi_1^*(\theta_1) \le \langle w_1, \theta_1 \rangle_F \le \frac{\|\theta_1\|_F^2}{\eta_1} \le \|\theta_1\|_F$. Substituting this into the inequality above, we obtain the claimed result.
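The per-coordinate adaptivity used in the proof (a coordinate is inactive until its accumulated gradient statistic $G_{i,t}$ becomes positive, and its effective step size shrinks with the accumulated squared gradients) can be mimicked with an AdaGrad-style update. This is an illustrative sketch under assumed names, not the paper's exact algorithm.

```python
import numpy as np

def adaptive_coordinate_step(gamma, grad, G):
    """One adaptive per-coordinate update.

    gamma : current parameter vector
    grad  : current gradient
    G     : accumulated squared gradients per coordinate
    Coordinates with G_new = 0 (no gradient observed yet, cf. the
    index t_0 in the proof) are left unchanged.
    """
    G_new = G + grad ** 2
    active = G_new > 0
    gamma = gamma.copy()
    # step size proportional to 1 / sqrt(accumulated squared gradients)
    gamma[active] -= grad[active] / np.sqrt(G_new[active])
    return gamma, G_new
```

Because the step size is data-driven, no learning-rate hyperparameter has to be tuned, which is the point of the adaptive analysis.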

Proof of Theorem 3
Proof. Define $I_w = \{ i = 1, \ldots, K \mid w_i = 0 \}$. It can be verified that $w_t \in \partial \psi_t^*(\theta_t)$. Applying Lemma A4, we obtain the generic regret bound. From the definition of $\psi_t$, it follows that $\psi_{T+1}(u) \le \frac{\log K}{2} \sqrt{\sum_{t=1}^{T} \|z_t - h_t\|_\infty^2}$ and $\psi_1^*(\theta_1) = 0$ hold. Define $v_t \in \partial \psi_t^*(\theta_{t+1})$. Next, we bound the third term, where the first inequality uses the fact that $\psi_t$ is $2\eta_t$-strongly convex w.r.t. $\|\cdot\|_1$. Summing from $1$ to $T$ and combining the inequalities, we obtain a bound on $\sum_{t=1}^{T} l(X_t, \cdot)$, where the first inequality follows from Jensen's inequality. Furthermore, if $l$ is $L$-Lipschitz in its first argument, the losses of the aggregated predictions can be related to those of the experts. Finally, we obtain the claimed regret upper bound.
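The aggregation analysed here can be illustrated with an exponentiated-weights update over K experts whose learning rate adapts to the observed losses. The concrete rate schedule and function names below are assumptions for illustration, not the paper's exact algorithm.

```python
import numpy as np

def hedge_step(log_w, losses, cum_sq, eps=1e-12):
    """One exponentiated-weights step over K experts.

    log_w  : current unnormalised log-weights
    losses : loss of each expert in this round
    cum_sq : running sum of squared loss ranges, drives the adaptive rate
    Returns updated log-weights, the normalised weights, and cum_sq.
    """
    K = log_w.size
    cum_sq = cum_sq + np.max(np.abs(losses)) ** 2
    eta = np.sqrt(np.log(K) / (cum_sq + eps))  # adaptive learning rate
    log_w = log_w - eta * losses               # multiplicative update
    w = np.exp(log_w - log_w.max())            # normalise stably
    return log_w, w / w.sum(), cum_sq
```

As in Theorem 3, the adaptive rate removes the need to know the loss range or the horizon T in advance.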

Appendix C
We summarize the main notations used throughout the article in Table A1.

Table A1. Main notations.

$(X, \|\cdot\|)$	finite-dimensional normed space
$(X^*, \|\cdot\|_*)$	the dual space of $(X, \|\cdot\|)$ with the dual norm
$L(X, X)$	vector space of bounded linear operators on $X$
$\|\alpha\|$	the operator norm of $\alpha \in L(X, X)$
$\langle \cdot, \cdot \rangle_F$	Frobenius inner product
$\partial \psi(w)$	the set of subdifferentials of $\psi$ at $w$
$\psi^*$	the convex conjugate of $\psi$
$B_\psi(\cdot, \cdot)$	the Bregman divergence induced by $\psi$

Appendix D
For the synthetic data, the relative performance of the proposed algorithms after the first 1000 iterations is plotted in Figures A1–A3. For each setting, we calculate the average loss after the first 1000 iterations and plot the difference between the proposed algorithms and the average loss incurred by the best baseline algorithm.
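The plotted quantity can be computed as in the sketch below; the 1000-iteration burn-in matches the text, while the function and array names are illustrative.

```python
import numpy as np

def relative_average_loss(losses, baseline_losses, burn_in=1000):
    """Average loss after the burn-in, minus the best baseline's average.

    losses          : (T,) per-round losses of a proposed algorithm
    baseline_losses : (B, T) per-round losses of the baseline algorithms
    Negative values mean the proposed algorithm beats every baseline.
    """
    avg = losses[burn_in:].mean()
    best_baseline = baseline_losses[:, burn_in:].mean(axis=1).min()
    return avg - best_baseline
```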