Robust Sequential Change-Point Detection by Convex Optimization

We address the computational challenge of finding robust sequential change-point detection procedures when the pre- and post-change distributions are not completely specified. Earlier works [Veeravalli 1994] and [Unnikrishnan 2011] establish general conditions for robust procedures, which include finding a pair of least favorable distributions (LFDs). However, in the multi-dimensional setting, it is computationally hard to find such LFDs. We present a method based on convex optimization that addresses this issue when the distributions are Gaussian with unknown parameters from pre-specified uncertainty sets. We also establish theoretical properties of our robust procedures, and numerical examples demonstrate their good performance.


Introduction
Sequential analysis is a classic topic in statistics concerning online inference from a sequence of observations. The goal is to make statistical inference as quickly as possible, while controlling the false-alarm rate. An important sequential analysis problem commonly studied is sequential change-point detection [1]. It arises from various applications including online anomaly detection, statistical quality control, biosurveillance, financial arbitrage detection and network security monitoring (see, e.g., [2][3][4]).
We are interested in the sequential change-point detection problem with known pre-change parameters but unknown post-change parameters. Specifically, given a sequence of samples X 1 , X 2 , . . ., we assume that they are independent and identically distributed (i.i.d.) with certain distribution f θ parameterized by θ, and the values of θ are different before and after some unknown time called the change-point. We further assume that the parameters before the change-point are known. This is reasonable since usually it is relatively easy to obtain the reference data for the normal state, so that the parameters in the normal state can be estimated with good accuracy. After the change-point, however, the values of the parameters switch to some unknown values, which represent anomalies or novelties that need to be discovered.

Motivation: Dilemma of CUSUM and Generalized Likelihood Ratio (GLR) Statistics
Consider change-point detection with unknown post-change parameters. A commonly used method is the CUSUM procedure [4], which can be derived from likelihood ratios. Assume that before the change the samples X_i follow a distribution f_{θ_0}, and after the change they follow another distribution f_{θ_1}. The CUSUM procedure has a recursive structure: initialized with W_0 = 0, the likelihood-ratio statistic is computed according to W_{t+1} = max{W_t + log(f_{θ_1}(X_{t+1})/f_{θ_0}(X_{t+1})), 0}, and a change is detected whenever W_t exceeds a pre-specified threshold. Due to this recursive structure, CUSUM is memory and computation efficient: it does not need to store historical data and only needs to record the value of W_t. The performance of CUSUM depends on the choice of the post-change parameter θ_1; in particular, there must be a well-defined notion of "distance" between θ_0 and θ_1. However, the choice of θ_1 is somewhat subjective. Although in practice a reasonable choice of θ_1 is the "smallest" change of interest, in the multi-dimensional setting it is hard to define what the "smallest" change would mean. Moreover, when the assumed parameter θ_1 deviates significantly from the true parameter value, CUSUM may suffer severe performance degradation [5].
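To make the recursion concrete, here is a minimal sketch of the CUSUM update for a unit-variance Gaussian mean shift; the stream values and the assumed post-change mean θ_1 = 1 are hypothetical.

```python
def cusum_update(W, x, theta0=0.0, theta1=1.0, sigma=1.0):
    """One CUSUM step: W_{t+1} = max{W_t + log[f_theta1(x)/f_theta0(x)], 0}.

    For N(theta, sigma^2) observations, the log-likelihood ratio of x is
    (theta1 - theta0) * (x - (theta0 + theta1)/2) / sigma^2.
    """
    llr = (theta1 - theta0) * (x - 0.5 * (theta0 + theta1)) / sigma**2
    return max(W + llr, 0.0)

# Run CUSUM on a hypothetical stream; threshold b controls false alarms.
W, b, alarm_at = 0.0, 5.0, None
for t, x in enumerate([0.1, -0.3, 1.2, 0.9, 1.5, 1.1, 0.8, 1.4], start=1):
    W = cusum_update(W, x)
    if W > b and alarm_at is None:
        alarm_at = t  # first time the statistic exceeds the threshold
```

Note that the reset-at-zero in the update is what spares us from tracking every candidate change-point explicitly.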
An alternative approach is the Generalized Likelihood Ratio (GLR) procedure [6]. The GLR statistic finds the maximum likelihood estimate (MLE) of the post-change parameter and plugs it back into the likelihood ratio to form the detection statistic. More precisely, for each hypothetical change-point location k, the corresponding post-change samples are {X_{k+1}, . . . , X_t}. Using these samples, one can form the MLE, denoted θ̂_{k+1,t}. Since we know neither whether the change occurs nor where it occurs when forming the GLR statistic, we have to maximize over all possible change-point locations k. The GLR statistic is given by max_{k<t} Σ_{i=k+1}^t log(f_{θ̂_{k+1,t}}(X_i)/f_{θ_0}(X_i)), and a change is announced whenever it exceeds a pre-specified threshold. The GLR statistic is more robust than CUSUM [7], and it is particularly useful when the post-change parameter may vary from one situation to another. In simple cases, the MLE θ̂_{k+1,t} may have a closed-form expression and may be evaluated recursively. For instance, when the post-change distribution is Gaussian with mean θ [8], the MLE is the sample mean θ̂_{k+1,t} = (t − k)^{-1} Σ_{i=k+1}^t X_i, which can be updated recursively as θ̂_{k+1,t+1} = (t − k)/(t − k + 1) · θ̂_{k+1,t} + X_{t+1}/(t − k + 1). However, in more complex situations, the MLE θ̂_{k+1,t} generally has no recursive form and cannot be evaluated using simple summary statistics. One such instance is given in Section 1.2. Another is when there is a constraint on the MLE, such as sparsity. In these cases, one has to store the historical data and recompute the MLE θ̂_{k+1,t} whenever new data arrive, which is neither memory nor computationally efficient. As a remedy, the window-limited GLR is usually considered, where only the past w samples are stored and the maximization is restricted to k ∈ (t − w, t]. However, even with the window-limited GLR, one still has to recompute θ̂_{k+1,t} from historical data whenever new data are added.
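The recursive update in the Gaussian case is just the running sample mean; a quick sketch (values hypothetical) confirms it matches the batch mean.

```python
def mle_update(theta_hat, x_new, n):
    """Recursive sample mean: given the mean theta_hat of n samples, the
    MLE after one more sample is n/(n+1) * theta_hat + x_new/(n+1)."""
    return n / (n + 1) * theta_hat + x_new / (n + 1)

xs = [1.0, 2.0, 3.0, 4.0]
theta = 0.0
for n, x in enumerate(xs):
    theta = mle_update(theta, x, n)
# theta now equals the batch sample mean sum(xs)/len(xs) = 2.5
```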
Besides CUSUM and GLR, various online change-point detection procedures using one-sample updates have been considered, which replace the MLE with a simple recursive estimator. The one-sample update takes the form θ̂_{k,t} = h(X_t, θ̂_{k,t−1}) for some function h that uses only the most recent sample and the previous estimate. The estimates are then plugged into the likelihood-ratio statistic to perform detection. Online convex optimization algorithms (such as online mirror descent) are a natural approach to constructing such estimators (see, e.g., [9,10]). Such a scheme provides a more versatile approach to developing detection procedures for complex situations where the exact MLE has no recursive form or even no closed-form expression. The one-sample update is computationally efficient, as information from new data can be incorporated at low computational cost, and it is memory efficient since the update only needs the most recent sample. One-sample update estimators may not correspond to the exact MLE, but they tend to yield good detection performance. However, in general there are no performance guarantees for such an approach. This is the question we aim to address in this paper.
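A minimal sketch of the one-sample update scheme θ̂_t = h(X_t, θ̂_{t−1}) for a unit-variance Gaussian mean, where h is taken (as an illustrative assumption) to be a single gradient step on the negative log-likelihood; note that the likelihood-ratio increment uses the previous estimate, which keeps the statistic non-anticipating.

```python
def one_sample_update(theta_prev, x, eta=0.1):
    """h(X_t, theta_{t-1}): one gradient step on -log f_theta(x) for
    N(theta, 1), whose gradient in theta is (theta - x)."""
    return theta_prev - eta * (theta_prev - x)

def llr_increment(theta_prev, x, theta0=0.0):
    """log f_{theta_{t-1}}(x) - log f_{theta0}(x) for unit-variance Gaussians."""
    return (theta_prev - theta0) * (x - 0.5 * (theta_prev + theta0))

theta, stat = 0.0, 0.0
for x in [1.2, 0.9, 1.5, 1.1]:           # hypothetical post-change stream
    stat += llr_increment(theta, x)      # uses theta_{t-1} (non-anticipating)
    theta = one_sample_update(theta, x)  # then update to theta_t
```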

Application Scenario: Social Network Change-Point Detection
The widespread use of social networks (such as Twitter) continuously generates a large amount of user data. One important task is to detect change-points in streaming social network data. These change-points may represent the collective anticipation of, or response to, external events or system "shocks" [11]. Detecting such changes can provide a better understanding of patterns of social life. In social networks, a common form of data is discrete events over continuous time. As a simplification, each event contains a time label and a user label in the network. In our prior work [12], we model discrete events using network point processes, which capture the influence between users through an influence matrix. We then cast the problem as detecting changes in the influence matrix, assuming that the influence matrix in the normal state (before the change) can be estimated from reference data. After the change, the influence matrix is unknown (since it represents an anomaly) and has to be estimated online. Since the scale of the network tends to be large, computational and memory constraints mean we do not want to store the entire history of the data; rather, we wish to compute the statistic in real time. A simulated example illustrating this case is shown later in Section 4.4.

Contributions
This paper has two main contributions. First, we present a general approach based on online convex optimization (OCO) for constructing the estimators in the one-sided sequential hypothesis test and in sequential change-point detection, following the non-anticipating approach of [8], for cases where the MLE cannot be computed in a convenient recursive form.
Second, we prove the near second-order asymptotic optimality of this approach when a "logarithmic regret property" is satisfied and the distributions belong to an exponential family. Nearly second-order asymptotic optimality [4] means that the upper bound on performance matches the lower bound up to a log-log factor as the false-alarm rate tends to zero. Inspired by the existing connection between sequential analysis and online convex optimization in [13,14], we prove the near optimality by leveraging the logarithmic regret property of online mirror descent (OMD) and the lower bounds established in the statistical change-point detection literature [4,15]. More precisely, we provide a general upper bound for one-sided sequential hypothesis tests and change-point detection procedures with one-sample update schemes. The upper bound explicitly captures the impact of estimation on detection through an estimation-algorithm-dependent factor. This factor shows up as an additional term in the upper bound on the expected detection delay, and it corresponds to the regret incurred by the one-sample update estimators. This establishes an interesting linkage between sequential change-point detection and online convex optimization. Although both fields study sequential data, the precise connection between them had been unclear, partly because the performance metrics differ: the former is concerned with the tradeoff between average run length and detection delay, whereas the latter focuses on bounding the cumulative loss incurred by the sequence of estimators through a regret bound [14,16]. Synthetic examples validate the performance of one-sample update schemes. Here we focus on OMD estimators, but the results can be generalized to other OCO schemes such as online gradient descent.

Literature and Related Work
Sequential change-point detection is a classic subject with an extensive literature. Much success has been achieved when the pre-change and post-change distributions are exactly specified: for example, the CUSUM procedure [17], which enjoys first-order asymptotic optimality [18] and exact minimax optimality [19], and the Shiryaev-Roberts (SR) procedure [20], derived from Bayesian principles, which also enjoys various optimality properties. Both CUSUM and SR procedures rely on likelihood ratios between the specified pre-change and post-change distributions.
There are two main approaches to dealing with unknown post-change parameters. The first is the GLR approach [7,21-24], and the second is a mixture approach [15,25]. The GLR statistic enjoys certain optimality properties, but it cannot be computed recursively in many cases [23]. To address the infinite-memory issue, [7,21] studied the window-limited GLR procedure. The main advantage of the mixture approach is that it allows an easy evaluation of a threshold that guarantees the desired false-alarm constraint. A disadvantage is that there may be no natural way of selecting the weight function, in particular when there is no conjugate prior. This motivated a third approach, proposed first by Robbins and Siegmund in the context of hypothesis testing, and then by Lorden and Pollak [8] for sequential change detection. This approach replaces the unknown parameter with a non-anticipating estimator, which can be easier to find even when there is no conjugate prior, as in the Gamma example considered in [8,25]; the latter work developed a modified SR procedure by introducing a prior distribution on the unknown parameters. While the non-anticipating estimator approach [8,24] enjoys recursive, and thus efficient, computation of the likelihood-ratio detection statistics, its recursive estimators (based on the MLE or the method of moments) cannot be easily extended to more complex cases (for instance, multi-dimensional parameters with constraints). Here, we consider a general and convenient approach for constructing non-anticipating estimators based on online convex optimization, which is particularly useful in these complex cases. Our work also provides an alternative proof of the nearly second-order asymptotic optimality by building a connection to online convex optimization and leveraging regret-bound results [14].
For one-dimensional Gaussian mean shift without any constraint, we replicate the second-order asymptotic optimality, namely, Theorem 3.3 in [24]. Recent work [26] also treats the problem when the pre-change distribution has unknown parameters.
Another related problem is sequential joint estimation and detection, but the goal is different in that one aims to achieve both good detection and good estimation performance, whereas in our setting, estimation is only needed for computing the detection statistics. These works include [27] and [28], which study the joint detection and estimation problem of a specific form that arises from many applications such as spectrum sensing [29], image observations [30], and MIMO radar [31]: a linear scalar observation model with Gaussian noise, and under the alternative hypothesis there is an unknown multiplicative parameter. The paper of [27] demonstrates that solving the joint problem by treating detection and estimation separately with the corresponding optimal procedure does not yield an overall optimum performance, and provides an elegant closed-form optimal detector. Later on [28] generalizes the results. There are also other approaches solving the joint detection-estimation problem using multiple hypotheses testing [30,32] and Bayesian formulations [33].
Related work using online convex optimization for anomaly detection includes [9], which develops an efficient detector for the exponential family using online mirror descent and proves a logarithmic regret bound, and [10], which dynamically adjusts the detection threshold to incorporate feedback about decision outcomes. However, these works consider a different setting in which the change is a transient outlier rather than a persistent change, as assumed by the classic statistical change-point detection literature. When the change is persistent, it is important to accumulate "evidence" by pooling the post-change samples (our work considers the persistent change).
Extensive work has been done on parameter estimation in the online setting. This includes online density estimation over the exponential family by regret minimization [9,10,16], sequential prediction of individual sequences with the logarithmic loss [13,34], online prediction for time series [35], and sequential NML (SNML) prediction [34], which achieves the optimal regret bound. Our problem differs from the above in that estimation is not the end goal; one performs parameter estimation only to plug the estimates back into the likelihood function for detection. Moreover, a subtle but important difference is that the loss function for online density estimation is −log f_{θ̂_i}(X_i), whereas our loss function is −log f_{θ̂_{i−1}}(X_i), in order to retain the martingale property, which is essential for establishing the nearly second-order asymptotic optimality.

Preliminaries
Assume a sequence of i.i.d. random variables X_1, X_2, . . . with a probability density function of parametric form f_θ. The parameter θ may be unknown. We consider two related problems: the one-sided sequential hypothesis test and sequential change-point detection. The detection statistic relies on a sequence of estimators {θ̂_t} constructed using online mirror descent (OMD). The OMD uses a simple one-sample update: the update from θ̂_{t−1} to θ̂_t only uses the current sample X_t. This is the main difference from the traditional generalized likelihood ratio (GLR) statistic [7], where each θ̂_t is estimated using all historical samples. In the following, we present detailed descriptions of the two problems. We consider exponential-family distributions and present our non-anticipating estimator based on the one-sample update.

One-Sided Sequential Hypothesis Test
First, we consider a one-sided sequential hypothesis test where the goal is only to reject the null hypothesis. This is a special case of the change-detection problem where the change-point is either 0 or ∞ (meaning it never occurs). Studying this special case gives us an important intermediate step towards solving the sequential change-detection problem.
Consider the null hypothesis H_0: θ = θ_0 versus the alternative H_1: θ ≠ θ_0. Hence, the parameter under the alternative distribution is unknown. The classic approach to this problem is the one-sided sequential probability ratio test (SPRT) [36]: at each time, given samples {X_1, X_2, . . . , X_t}, the decision is either to reject H_0 or to take more samples if the rejection decision cannot be made confidently. Here, we introduce a modified one-sided SPRT with a sequence of non-anticipating plug-in estimators: θ̂_t := θ̂_t(X_1, . . . , X_t), t = 1, 2, . . . .
Define the test statistic at time t as Λ_t = ∏_{i=1}^t [f_{θ̂_{i−1}}(X_i)/f_{θ_0}(X_i)], with θ̂_0 = θ_0. The test statistic has a simple recursive implementation: Λ_t = Λ_{t−1} · f_{θ̂_{t−1}}(X_t)/f_{θ_0}(X_t), with Λ_0 = 1. Define a sequence of σ-algebras {F_t}_{t≥1}, where F_t = σ(X_1, . . . , X_t). The test statistic has the martingale property due to its non-anticipating nature: E[Λ_t | F_{t−1}] = Λ_{t−1}, where the expectation is taken when X_1, X_2, . . . are i.i.d. random variables drawn from f_{θ_0}. The decision rule is the stopping time τ(b) = inf{t ≥ 1 : log Λ_t ≥ b}, where b > 0 is a pre-specified threshold. We reject the null hypothesis whenever the statistic exceeds the threshold. The goal is to reject the null hypothesis using as few samples as possible under the false-alarm rate (or Type-I error) constraint.
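The recursion for log Λ_t can be sketched as follows for the unit-variance Gaussian case; the estimator `update` passed in is a placeholder for any non-anticipating rule (here a simple averaging step, an assumption for illustration).

```python
def sprt_step(log_lam, theta_prev, x, theta0=0.0):
    """log Lambda_t = log Lambda_{t-1} + log[f_{theta_{t-1}}(x)/f_{theta0}(x)]
    for unit-variance Gaussian densities."""
    return log_lam + (theta_prev - theta0) * (x - 0.5 * (theta_prev + theta0))

def run_sprt(xs, b, update):
    """Reject H0 at the first t with log Lambda_t >= b; return t (or None)."""
    log_lam, theta = 0.0, 0.0  # Lambda_0 = 1; estimator initialized at theta0
    for t, x in enumerate(xs, start=1):
        log_lam = sprt_step(log_lam, theta, x)  # uses theta_{t-1}
        theta = update(theta, x)                # then form theta_t
        if log_lam >= b:
            return t
    return None

# Hypothetical stream; exponential-moving-average estimator as `update`.
stop = run_sprt([1.3, 0.8, 1.6, 1.2, 0.9, 1.4, 1.1], b=2.0,
                update=lambda th, x: 0.5 * (th + x))
```

Updating the estimator only after the likelihood-ratio increment is what preserves the martingale property under f_{θ_0}.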

Sequential Change-Point Detection
Now we consider the sequential change-point detection problem. A change may occur at an unknown time ν and alter the underlying distribution of the data. One would like to detect such a change as quickly as possible. Formally, change-point detection can be cast as the following hypothesis test: under H_0, the samples X_1, X_2, . . . are i.i.d. with density f_{θ_0}; under H_1, there is a change-point ν such that X_1, . . . , X_ν are i.i.d. with density f_{θ_0} and X_{ν+1}, X_{ν+2}, . . . are i.i.d. with density f_θ. Here we assume θ is unknown, representing the anomaly. The goal is to detect the change as quickly as possible after it occurs, subject to the false-alarm rate constraint. We consider likelihood-ratio-based detection procedures adapted from two existing types, which we call the adaptive CUSUM (ACM) and the adaptive SRRS (ASR) procedures.
For change-point detection, the post-change parameter is estimated using the post-change samples. This means that, for each putative change-point location k < t before the current time, the post-change samples are {X_k, . . . , X_t}; with a slight abuse of notation, the post-change parameter is estimated as θ̂_{k,t} := θ̂_{k,t}(X_k, . . . , X_t). Therefore, for k = 1, θ̂_{k,i} becomes θ̂_i defined in (2) for the one-sided SPRT. Initialize with θ̂_{k,k−1} = θ_0. The likelihood ratio at time t for a hypothetical change-point location k is given by Λ_{k,t} = ∏_{i=k}^t [f_{θ̂_{k,i−1}}(X_i)/f_{θ_0}(X_i)], where Λ_{k,t} can be computed recursively similar to (2). Since we do not know the change-point location ν, following the maximum likelihood principle, we take the maximum of the statistics over all possible values of k. This gives the ACM procedure: T_ACM(b_1) = inf{t ≥ 1 : max_{1≤k≤t} log Λ_{k,t} ≥ b_1}, where b_1 is a pre-specified threshold. Similarly, by replacing the maximization over k in (7) with a summation, we obtain the ASR procedure [8], which can be interpreted as a Bayesian statistic similar to the Shiryaev-Roberts procedure: T_ASR(b_2) = inf{t ≥ 1 : log Σ_{k=1}^t Λ_{k,t} ≥ b_2}, where b_2 is a pre-specified threshold. The computation of Λ_{k,t} and of the estimators {θ̂_t}, {θ̂_{k,t}} is discussed later in Section 2.4. For a fixed k, the comparison between our methods and the GLR is illustrated in Figure 1.
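As an illustration of how both statistics can be maintained jointly, the sketch below keeps one (log Λ_{k,t}, θ̂_{k,t}) pair per candidate change-point k for the unit-variance Gaussian case, with a hypothetical gradient-step estimator; a window-limited variant would simply cap the number of pairs kept.

```python
import math

def detect_step(states, x, theta0=0.0, eta=0.2):
    """Advance every (log Lambda_{k,t}, theta_{k,t}) pair by one sample,
    open a new hypothesis k = t, and return the ACM and ASR statistics."""
    states = states + [(0.0, theta0)]  # new hypothesis: Lambda = 1, theta = theta0
    out = []
    for log_lam, th in states:
        log_lam += (th - theta0) * (x - 0.5 * (th + theta0))  # Gaussian log-LR
        th -= eta * (th - x)  # one-sample (gradient) update of the estimate
        out.append((log_lam, th))
    acm = max(l for l, _ in out)                      # max_k log Lambda_{k,t}
    asr = math.log(sum(math.exp(l) for l, _ in out))  # log sum_k Lambda_{k,t}
    return out, acm, asr

states, acm, asr = [], 0.0, 0.0
for x in [0.1, -0.2, 1.4, 1.0, 1.3]:  # hypothetical: change after t = 2
    states, acm, asr = detect_step(states, x)
```

Since log-sum-exp dominates the maximum, the ASR statistic is always at least as large as the ACM statistic at the same time t.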

Remark 1.
In practice, to prevent the memory and computational complexity from growing as time t goes to infinity, we can use window-limited versions of the detection procedures in (7) and (8), obtained by replacing max_{1≤k≤t} with max_{t−w≤k≤t} in (7) and Σ_{k=1}^t with Σ_{k=t−w}^t in (8), where w is a prescribed window size. Although we do not provide a theoretical analysis of the window-limited versions, we refer the reader to [7] for the choice of w in window-limited GLR procedures.

Exponential Family
In this paper, we focus on f_θ in the exponential family for the following reasons: (i) the exponential family [10] represents a very rich class of parametric models and covers many nonparametric statistical models [37]; (ii) the negative log-likelihood of the exponential family, −log f_θ(x), is convex in θ, which allows us to perform online convex optimization. Some useful properties of the exponential family are briefly summarized below; full proofs can be found in [10,38].
Consider an observation space X equipped with a σ-algebra B and a σ-finite measure H on (X, B). Assume the number of parameters is d, and let x^T denote the transpose of a vector or matrix x. Let Θ denote the parameter space in R^d, and let {P_θ, θ ∈ Θ} be a set of probability distributions with respect to the measure H. Then {P_θ, θ ∈ Θ} is said to be a multivariate exponential family with natural parameter θ if the probability density function of each f_θ ∈ P_θ with respect to H can be expressed as f_θ(x) = exp{θ^T φ(x) − Φ(θ)}. Here φ(x) corresponds to the sufficient statistic for θ, and the so-called log-partition function is given by Φ(θ) = log ∫_X exp{θ^T φ(x)} dH(x). To make f_θ(x) a well-defined probability density, we restrict attention to parameters in Θ = {θ ∈ R^d : Φ(θ) < ∞} and to a subset Θ_σ ⊆ Θ over which Φ is strongly convex. The gradient ∇Φ(θ) equals the mean E_θ[φ(X)], and the Hessian ∇²Φ(θ) corresponds to the covariance matrix of the vector φ(X); therefore ∇²Φ(θ) is positive semidefinite and Φ(θ) is convex. Moreover, Φ is a Legendre function, which means that it is strongly convex, continuously differentiable, and essentially smooth [38]. The Legendre-Fenchel dual Φ* is defined as Φ*(η) = sup_{θ∈Θ} {⟨θ, η⟩ − Φ(θ)}. The mapping ∇Φ* is an inverse mapping of ∇Φ [39]; moreover, if Φ is a strongly convex function, then ∇Φ* = (∇Φ)^{−1}.
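As a concrete instance of these definitions, the univariate Gaussian with unit variance fits the form directly:

```latex
% N(\theta, 1) as a natural exponential family with base measure
% dH/dx = (2\pi)^{-1/2} e^{-x^2/2}, sufficient statistic \phi(x) = x,
% and log-partition \Phi(\theta) = \theta^2/2:
f_\theta(x) = \frac{1}{\sqrt{2\pi}}\, e^{-(x-\theta)^2/2}
            = \frac{1}{\sqrt{2\pi}}\, e^{-x^2/2}\,
              \exp\Bigl\{\theta x - \tfrac{\theta^2}{2}\Bigr\},
\qquad
\nabla\Phi(\theta) = \theta = \mathbb{E}_\theta[\phi(X)], \quad
\nabla^2\Phi(\theta) = 1 = \operatorname{Var}_\theta[\phi(X)].
```

Here Φ is 1-strongly convex on all of R, so Θ_σ can be taken to be R, and the KL divergence reduces to I(θ_1, θ_2) = (θ_1 − θ_2)²/2.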
A general measure of proximity used in OMD is the Bregman divergence B_F, a nonnegative function induced by a Legendre function F (see, e.g., [10,38]), defined as B_F(θ_1, θ_2) = F(θ_1) − F(θ_2) − ⟨∇F(θ_2), θ_1 − θ_2⟩. For the exponential family, a natural choice of Bregman divergence is the Kullback-Leibler (KL) divergence. Define E_θ as the expectation when X is a random variable with density f_θ, and I(θ_1, θ_2) as the KL divergence between the distributions with densities f_{θ_1} and f_{θ_2}, for any θ_1, θ_2 ∈ Θ. Then I(θ_1, θ_2) = E_{θ_1}[log(f_{θ_1}(X)/f_{θ_2}(X))]. It can be shown that, for the exponential family, I(θ_1, θ_2) = B_Φ(θ_2, θ_1), so the KL divergence is a Bregman divergence. This property is useful for constructing mirror descent estimators for the exponential family [39,40].

Online Convex Optimization (OCO) Algorithms for Non-Anticipating Estimators
Online convex optimization (OCO) algorithms [14] can be interpreted as a player making sequential decisions. At the time of each decision, the outcome is unknown to the player. After committing to a decision, the player suffers a loss that can be adversarially chosen. An OCO algorithm makes decisions that, based on the observed outcomes, minimize the regret: the difference between the total loss incurred and that of the best fixed decision in hindsight. To design non-anticipating estimators, we consider OCO algorithms with likelihood-based loss functions. We iteratively update the parameter estimate as each new observation becomes available, based on the maximum likelihood principle; hence the loss incurred is the negative log-likelihood of the new sample, ℓ_t(θ) := −log f_θ(X_t), which corresponds to the log-loss in [13]. Given samples X_1, . . . , X_t, the regret of a sequence of estimators {θ̂_i} produced by an algorithm a is R_t^a := Σ_{i=1}^t ℓ_i(θ̂_{i−1}) − min_θ Σ_{i=1}^t ℓ_i(θ). Below we omit the superscript a occasionally for notational simplicity.
In this paper, we consider a generic OCO procedure, the online mirror descent (OMD) algorithm [14,41]. Next, we discuss how to construct the non-anticipating estimators {θ̂_t}_{t≥1} in (1) and {θ̂_{k,t}}, k = 1, 2, . . . , t − 1, in (5) using OMD. The main idea of OMD is the following. At each time step, the estimator θ̂_{t−1} is updated using the new sample X_t, balancing the tendency to stay close to the previous estimate against the tendency to move in the direction of the greatest local decrease of the loss function. For the loss function defined above, the sequence of OMD estimators is constructed by θ̂_t = argmin_{θ∈Γ} {η_t ⟨∇ℓ_t(θ̂_{t−1}), θ⟩ + B_Φ(θ, θ̂_{t−1})}, where η_t > 0 is a step size and B_Φ is defined in (11). Here Γ ⊂ Θ_σ is a closed convex set, which is problem-specific and can encourage certain parameter structure such as sparsity.
Remark 2. Similar to (13), for any fixed k, we can compute {θ̂_{k,t}}_{t≥1} via OMD for sequential change-point detection. The only difference is that {θ̂_{k,t}}_{t≥1} is computed by taking X_k as the first sample and then applying the recursive update (13) to X_{k+1}, . . .; for θ̂_t, we take X_1 as the first sample.
There is an equivalent form of OMD, presented as the original formulation in [40]. This equivalent form is sometimes easier to use for algorithm development, and it consists of four steps: (1) compute the dual variable μ_{t−1} = ∇Φ(θ̂_{t−1}); (2) take a gradient step in the dual space, μ̃_t = μ_{t−1} − η_t ∇ℓ_t(θ̂_{t−1}); (3) map back to the primal space, θ̃_t = ∇Φ*(μ̃_t); and (4) project onto the feasible set via the Bregman projection θ̂_t = argmin_{θ∈Γ} B_Φ(θ, θ̃_t). The equivalence between this form of OMD and the nonlinear projected-subgradient form in (13) is proved in [39]. We adopt this approach when deriving our algorithm and follow the same strategy as [9]. Algorithm 1 summarizes the steps [42].
For strongly convex loss functions, the regret of many OCO algorithms, including OMD, satisfies R_n ≤ C log n for some constant C (depending on f_θ and Θ_σ) and any positive integer n [10,43]. For the exponential family, the loss function is the negative log-likelihood, which is strongly convex over Θ_σ; hence the logarithmic regret property holds.
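To illustrate the four-step dual form in a case where the mirror map is nontrivial, the sketch below runs one OMD step for Bernoulli observations in the natural parameter, with Φ(θ) = log(1 + e^θ), so that ∇Φ is the sigmoid and ∇Φ* the logit; the step size η and the box constraint Γ = [lo, hi] are illustrative assumptions (for a one-dimensional interval, the Bregman projection reduces to clipping).

```python
import math

def sigmoid(t):  # grad Phi for Phi(theta) = log(1 + e^theta)
    return 1.0 / (1.0 + math.exp(-t))

def logit(m):    # grad Phi* = (grad Phi)^{-1}
    return math.log(m / (1.0 - m))

def omd_step(theta, x, eta=0.3, lo=-3.0, hi=3.0):
    """One OMD step for a Bernoulli sample x in {0, 1}: the loss is
    l_t(theta) = Phi(theta) - theta * x, so grad l_t = sigmoid(theta) - x."""
    mu = sigmoid(theta)               # (1) dual variable mu = grad Phi(theta)
    mu -= eta * (sigmoid(theta) - x)  # (2) gradient step in the dual space
    theta = logit(mu)                 # (3) map back via grad Phi*
    return min(max(theta, lo), hi)    # (4) Bregman projection onto [lo, hi]
```

With η ∈ (0, 1), step (2) keeps the dual variable inside (0, 1), so the logit in step (3) is always well defined.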
[Algorithm 1 pseudocode omitted: at each time t, acquire a new observation X_t and apply the four OMD steps.]

Nearly Second-Order Asymptotic Optimality of One-Sample Update Schemes
Below we prove the nearly second-order asymptotic optimality of the one-sample update schemes. More precisely, nearly second-order asymptotic optimality means that the procedure attains the lower performance bound asymptotically, up to a log-log factor in the false-alarm rate, as the false-alarm rate tends to zero (in many cases this log-log factor is a small number).
We first introduce some necessary notations. Denote P θ,ν and E θ,ν as the probability measure and the expectation when the change occurs at time ν and the post-change parameter is θ, i.e., when X 1 , . . . , X ν are i.i.d. random variables with density f θ 0 and X ν+1 , X ν+2 , . . . are i.i.d. random variables with density f θ . Moreover, let P ∞ and E ∞ denote the probability measure when there is no change, i.e., X 1 , X 2 , . . . are i.i.d. random variables with density f θ 0 . Finally, let F t denote the σ-algebra generated by X 1 , . . . , X t for t ≥ 1.

"One-Sided" Sequential Hypothesis Test
Recall that the decision rule for the sequential hypothesis test is the stopping time τ(b) defined in (3). The two standard performance metrics are the false-alarm rate, denoted P_∞(τ(b) < ∞), and the expected detection delay (i.e., the expected number of samples needed to reject the null), denoted E_{θ,0}[τ(b)]. Usually, one adjusts the threshold b to control the false-alarm rate below a certain level.
Our main result is the following. As observed by [23], there is a loss in statistical efficiency from using one-sample update estimators relative to the GLR approach, which uses all past samples X_1, . . . , X_t. The theorem below shows that this loss corresponds to the expected regret given in (12).
Theorem 1. Let {θ̂_t}_{t≥1} be a sequence of non-anticipating estimators generated by an OCO algorithm a. Then, for any θ ∈ Θ_σ − {θ_0}, I(θ, θ_0) E_{θ,0}[τ(b)] ≤ b + E_{θ,0}[R^a_{τ(b)}] + O(1). (14) Here O(1) is a term upper-bounded by an absolute constant as b → ∞.
The main idea of the proof is to decompose the statistic log Λ_t defining τ(b) into a few terms that form martingales, and then invoke Wald's theorem for the stopped process.

Remark 3. The inequality (14) is valid for any sequence of non-anticipating estimators generated by an OCO algorithm. Moreover, (14) gives an explicit connection between the expected detection delay for the one-sided sequential hypothesis test (the left-hand side of (14)) and the regret for the OCO (the second term on the right-hand side of (14)). This clearly illustrates the impact of estimation on detection through an estimation-algorithm-dependent factor. Note that in the statement of Theorem 1, the stopping time τ(b) appears on the right-hand side of the inequality (14). For OMD, the expected sample size is usually small, and by comparing with a specific regret bound R_{τ(b)} we can bound E_{θ,0}[τ(b)], as discussed in Section 4. The most important case is when the estimation algorithm has a logarithmic expected regret: for the exponential family, as shown in Section 3.3, Algorithm 1 achieves E_{θ,0}[R_n] ≤ C log n for any positive integer n.
To obtain a more specific order of the upper bound for E_{θ,0}[τ(b)] as b grows, we establish the following corollary. Corollary 1. Assume that E_{θ,0}[R^a_n] ≤ C log n for any positive integer n and some constant C > 0. Then E_{θ,0}[τ(b)] ≤ (b + C log b)/I(θ, θ_0) + o(1), where o(1) is a vanishing term as b → ∞.
Corollary 1 shows that, beyond the well-known first-order approximation b/I(θ, θ_0) [8,18], the expected detection delay E_{θ,0}[τ(b)] is bounded by an additional term on the order of log b when the estimation algorithm has logarithmic regret. This log b term plays an important role in establishing the optimality properties later. To show the optimality of the detection procedures, we first select a set of detection procedures with false-alarm rates below a prescribed value, and then show that, among all procedures in this set, the expected detection delays of our proposed procedures are nearly the smallest. The following lemma shows that we can choose the threshold b to uniformly control the false-alarm rate of τ(b).
Lemma 1 (false-alarm rate of τ(b)). Let {θ̂_t}_{t≥1} be any sequence of non-anticipating estimators. For any b > 0, P_∞(τ(b) < ∞) ≤ exp(−b). Lemma 1 shows that as b increases, the false-alarm rate of τ(b) decays exponentially fast; we can set b = log(1/α) to make the false-alarm rate of τ(b) at most α for any α > 0. Next, leveraging an existing lower bound for general SPRTs presented in Section 5.5.1.1 of [4], we establish the nearly second-order asymptotic optimality of OMD-based SPRT as follows. Corollary 2 (Nearly second-order optimality of OCO-based SPRT). Let {θ̂_t}_{t≥1} be a sequence of non-anticipating estimators generated by an OCO algorithm a. Assume that E_{θ,0}[R^a_n] ≤ C log n for any positive integer n and some constant C > 0. Define the set C(α) = {T : P_∞(T < ∞) ≤ α}. For b = log(1/α), Lemma 1 gives τ(b) ∈ C(α). For this choice, τ(b) is nearly second-order asymptotically optimal in the sense that for any θ ∈ Θ_σ − {θ_0}, as α → 0, E_{θ,0}[τ(b)] − inf_{T∈C(α)} E_{θ,0}[T] = O(log log(1/α)). This means that, compared with any procedure (including the optimal one) calibrated to have false-alarm rate at most α, our procedure incurs at most a log(log(1/α)) increase in the expected detection delay, which is usually a small number. For instance, even in the conservative case α = 10^{−5}, we have log(log(1/α)) = 2.44.

Sequential Change-Point Detection
We now leverage the close connection [18] between sequential change-point detection and the one-sided hypothesis test. For sequential change-point detection, the two commonly used performance metrics [4] are the average run length (ARL), denoted E_∞[T], and the maximal conditional average detection delay (CADD), denoted sup_{ν≥0} E_{θ,ν}[T − ν | T > ν]. The ARL is the expected number of samples between two successive false alarms, and the CADD is the expected number of samples needed to detect the change after it occurs. A good procedure should have a large ARL and a small CADD. As in the one-sided hypothesis test, one usually chooses the threshold large enough so that the ARL exceeds a pre-specified level.
Similar to Theorem 1, we provide an upper bound for the CADD of our ASR and ACM procedures.

Theorem 2.
Consider the change-point detection procedure T ACM (b 1 ) in (7) and T ASR (b 2 ) in (8). For any fixed k, let {θ k,t } t≥1 be a sequence of non-anticipating estimators generated by an OCO algorithm a.
To prove Theorem 2, we relate the ASR and ACM procedures to the one-sided hypothesis test and use the fact that when the measure P ∞ is known, sup ν≥0 E θ,ν [T − ν | T > ν] is attained at ν = 0 for both the ASR and the ACM procedures. Above, we may apply an argument similar to that of Corollary 1 to remove the dependence on τ(b) on the right-hand side of the inequality. We also establish the following lower bound for the ARL of the detection procedures, which is needed for proving Corollary 3.

Lemma 2 (ARL). Consider the change-point detection procedures T ACM (b 1 ) in (7) and T ASR (b 2 ) in (8). For any fixed k, let {θ k,t } t≥1 be any sequence of non-anticipating estimators, and let b 1 = b 2 = b. Given a prescribed lower bound γ > 0 for the ARL, we have E ∞ [T ASR (b)] ≥ exp(b) and E ∞ [T ACM (b)] ≥ exp(b).

Lemma 2 shows that, given a required lower bound γ for the ARL, we can choose b = log γ to make the ARL greater than γ. This is consistent with earlier works [8,25], which show that the smallest threshold b such that E ∞ [T ACM (b)] ≥ γ is approximately log γ. The bound in Lemma 2 is not tight, however: in practice one can set b = ρ log γ for some ρ ∈ (0, 1) and still ensure that the ARL exceeds γ.
Combining the upper bound in Theorem 2 with an existing lower bound for the CADD of the SRRS procedure in [15], we obtain the following optimality properties.

Corollary 3 (Nearly second-order asymptotic optimality of ACM and ASR).
Consider the change-point detection procedure T ACM (b 1 ) in (7) and T ASR (b 2 ) in (8). For any fixed k, let {θ k,t } t≥1 be a sequence of non-anticipating estimators generated by an OCO algorithm a. Assume that E θ,0 [R a n ] ≤ C log n for any positive integer n and some constant C > 0.
For b = log γ, due to Lemma 2, both T ASR (b) and T ACM (b) belong to S(γ). For such b, both T ASR (b) and T ACM (b) are nearly second-order asymptotically optimal in the sense that, for any θ ∈ Θ σ − {θ 0 }, the bound (18) holds as γ → ∞.

A similar expression holds for T ACM (b).
The result means that, compared with any procedure (including the optimal procedure) calibrated to have a fixed ARL larger than γ, our procedure incurs at most a log(log γ) increase in the CADD. Comparing (18) with (16), we note that the ARL γ plays the same role as 1/α, because 1/γ is roughly the false-alarm rate for sequential change-point detection [18].

Example: Regret Bound for Specific Cases
In this subsection, we show that the regret R t can be expressed as a weighted sum of Bregman divergences between two consecutive estimators. This form of R t is useful for establishing the logarithmic regret of OMD. The following result is a modification of [16].
Corollary 4 (Upper bound for the expected regret, Gaussian). Assume X 1 , X 2 , . . . are i.i.d. following N (θ, I d ) for some θ ∈ R d . Assume that {θ i } i≥1 , {μ i } i≥1 are obtained using Algorithm 1 with η i = 1/i and Γ = R d . Then for any t > 0, we have E θ,0 [R t ] ≤ C 1 log t for some constant C 1 > 0 that depends on θ.

The following calculations justify Corollary 4 and also serve as an example of how to use the regret bound. First, the assumption θ̂ t = θ̃ t in Theorem 3 is satisfied for the following reason: since Γ = R d is the full space, Algorithm 1 together with the non-negativity of the Bregman divergence gives θ̂ t = arg min u∈Γ B Φ (u, θ̃ t ) = θ̃ t . The regret can then be written as a weighted sum of Bregman divergences between consecutive estimators. Since the step-size is η i = 1/i, the second term in that expression simplifies accordingly. Combining the above for every i ≥ 1, we obtain the desired result. Thus, with i.i.d. multivariate normal samples, the expected regret grows logarithmically with the number of samples.
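The logarithmic growth of the regret is easy to observe empirically. Below is a minimal sketch (not the paper's Algorithm 1 itself): for N (θ, I d ) with Φ(θ) = ‖θ‖²/2 and η i = 1/i, the OMD estimator reduces to the running sample mean, and we compare its cumulative one-step-ahead loss to that of the offline MLE.

```python
import numpy as np

rng = np.random.default_rng(0)
d, t_max = 5, 5000
theta_true = np.ones(d)
X = rng.normal(theta_true, 1.0, size=(t_max, d))

def loss(th, x):
    # -log f_theta(x) for N(theta, I_d), up to an additive constant
    return 0.5 * np.sum((x - th) ** 2)

# With Phi(theta) = ||theta||^2/2 and eta_i = 1/i, OMD reduces to the
# running sample mean, updated one sample at a time.
theta_hat = np.zeros(d)
online_loss, cum = 0.0, np.zeros(d)
for i in range(t_max):
    online_loss += loss(theta_hat, X[i])   # predict with current estimate
    cum += X[i]
    theta_hat = cum / (i + 1)              # one-sample online update

batch_loss = sum(loss(X.mean(axis=0), x) for x in X)  # offline MLE loss
regret = online_loss - batch_loss
print(bool(0 <= regret < 10 * d * np.log(t_max)))  # regret = O(d log t)
```

The regret is non-negative by construction (the batch MLE minimizes the total loss) and, in this Gaussian case, stays well below a multiple of d log t.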
Using similar calculations, we can also bound the expected regret in the general case. As shown in the proof of Corollary 4, the dominating term of R t can be rewritten as a weighted sum involving μ̄ i , where μ̄ i is a convex combination of μ i−1 and μ i . For an arbitrary distribution, this term can be viewed as a local normal approximation with the changing curvature ∇ 2 Φ * (μ̄ i ). Thus, it is possible to prove O(log t)-style bounds case-by-case by making more assumptions about the distributions. Recall the notation Θ σ from Section 2.3, under which − log f θ (x) is σ-strongly convex over Θ σ , and let ‖·‖ 2 denote the ℓ 2 norm. Moreover, we assume that the true parameter belongs to a set Γ that is a closed and convex subset of Θ σ such that sup θ∈Γ ‖∇ 2 Φ(θ)‖ 2 ≤ M for some constant M. One can then show that − log f θ (x) is not only σ-strongly convex but also M-strongly smooth over Γ. Theorem 3 in [10] shows that if {θ i } i≥1 is obtained by OMD, then the regret is logarithmic in n for all θ ∈ Γ and n ≥ 1. Therefore, for any bounded distribution within the exponential family, we achieve a logarithmic regret. This logarithmic regret is valid for the Bernoulli distribution, the Beta distribution, and truncated versions of classic distributions (e.g., the truncated Gaussian, truncated Gamma, and truncated Geometric distributions analyzed in [44]).

Numerical Examples
In this section, we present synthetic examples to demonstrate the good performance of our methods, focusing on ACM and ASR for sequential change-point detection. In the following, we consider the window-limited versions (see Remark 1) of ACM and ASR with window size w = 100. Recall that when the measure P ∞ is known, sup ν≥0 E θ,ν [T − ν | T > ν] is attained at ν = 0 for both the ASR and ACM procedures (a proof can be found in the proof of Theorem 2). Therefore, in the following experiments we define the expected detection delay (EDD) as E θ,0 [T] for a stopping time T. To compare different detection procedures, we determine the threshold for each procedure by Monte-Carlo simulation such that its ARL is about 10,000. Below, we denote by ‖·‖ 2 , ‖·‖ 1 and ‖·‖ 0 the ℓ 2 norm, the ℓ 1 norm, and the ℓ 0 norm (defined as the number of non-zero entries), respectively. All experiments are run on the same MacBook Air with an Intel Core i7 CPU.
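The Monte-Carlo threshold calibration used throughout these experiments can be sketched as follows. This is an illustrative standalone example, not the paper's code: we use a simple one-dimensional CUSUM as the detector and bisect on the threshold until the simulated ARL matches a (smaller, for speed) target.

```python
import numpy as np

def cusum_update(stat, x, mu1=1.0):
    # One-step CUSUM recursion for a N(0,1) -> N(mu1,1) mean shift.
    return max(0.0, stat + mu1 * x - mu1 ** 2 / 2)

def run_length(b, rng, n_max=5000):
    """Sample one run length under the pre-change measure P_inf."""
    stat = 0.0
    for t in range(1, n_max + 1):
        stat = cusum_update(stat, rng.normal())
        if stat >= b:
            return t
    return n_max                      # censored run

def calibrate_threshold(target_arl, n_reps=100, seed=0):
    """Bisect on b until the simulated ARL is close to target_arl."""
    rng = np.random.default_rng(seed)
    lo, hi = 0.5, 8.0
    for _ in range(12):
        b = (lo + hi) / 2
        arl = np.mean([run_length(b, rng) for _ in range(n_reps)])
        lo, hi = (b, hi) if arl < target_arl else (lo, b)
    return (lo + hi) / 2

b = calibrate_threshold(target_arl=500.0)
print(0.5 < b < 8.0)
```

The same recipe applies to ASR and ACM: simulate run lengths under P ∞ for a candidate threshold, average them, and adjust the threshold until the ARL hits the prescribed level.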

Detecting Sparse Mean-Shift of Multivariate Normal Distribution
We consider detecting a sparse mean shift of a multivariate normal distribution. Specifically, we assume that the pre-change distribution is N (0, I d ) and the post-change distribution is N (θ, I d ) for some unknown θ ∈ {θ ∈ R d : ‖θ‖ 0 ≤ s}, where s is called the sparsity of the mean shift. Sparse mean-shift detection is of particular interest in sensor networks [45,46]. In this Gaussian case, the Bregman divergence is given by B Φ (θ 1 , θ 2 ) = I(θ 2 , θ 1 ) = ‖θ 1 − θ 2 ‖ 2 2 /2. Therefore, the projection onto Γ in Algorithm 1 is a Euclidean projection onto a convex set, which in many cases can be implemented efficiently. As a frequently used convex relaxation of the ℓ 0 -norm ball, we set Γ = {θ : ‖θ‖ 1 ≤ s} (it is known that imposing an ℓ 1 constraint leads to sparse solutions; see, e.g., [47]). The projection onto the ℓ 1 ball can then be computed very efficiently via a simple soft-thresholding technique [48].
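A minimal sketch of this soft-thresholding projection (one standard sort-based variant; the function name is ours):

```python
import numpy as np

def project_l1_ball(v, s=1.0):
    """Euclidean projection of v onto {theta : ||theta||_1 <= s}.

    Finds the soft-threshold tau such that shrinking |v| by tau lands
    exactly on the l1 ball, in O(d log d) time via sorting.
    """
    v = np.asarray(v, dtype=float)
    if np.abs(v).sum() <= s:
        return v.copy()                      # already inside the ball
    u = np.sort(np.abs(v))[::-1]             # magnitudes, descending
    css = np.cumsum(u)
    # largest index rho with u_rho > (css_rho - s) / rho
    rho = np.nonzero(u - (css - s) / np.arange(1, len(u) + 1) > 0)[0][-1]
    tau = (css[rho] - s) / (rho + 1.0)
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

theta = project_l1_ball(np.array([3.0, -1.0, 0.5]), s=2.0)
print(np.round(np.abs(theta).sum(), 6))   # the result lies on the ball
```

Note how the projection both shrinks and sparsifies: the example input [3, −1, 0.5] projects to [2, 0, 0], which is why the ℓ 1 constraint helps when the true post-change mean is sparse.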
Two benchmark procedures are the CUSUM and the GLR. For the CUSUM procedure, we specify a nominal post-change mean, which is an all-one vector. If we know that the post-change mean is sparse, we can also use the shrinkage estimator presented in [49], which performs hard or soft thresholding of the estimated post-change mean parameter. Our procedures are T ASR (b) and T ACM (b) with Γ = R d and Γ = {θ : ‖θ‖ 1 ≤ 5}. In the following experiments, we run 10,000 Monte Carlo trials to obtain each simulated EDD.
In the experiments, we set d = 20. The post-change distributions are N (θ, I d ), where 100p% of the entries of θ are 1 and the rest are 0, with the locations of the nonzero entries chosen at random. Table 1 shows the EDDs versus the proportion p. Note that our procedures incur little performance loss compared with the GLR procedure and the CUSUM procedure. Notably, T ACM (b) with Γ = {θ : ‖θ‖ 1 ≤ 5} performs almost the same as the GLR procedure and much better than the CUSUM procedure when p is small. This shows the advantage of the projection when the true parameter is sparse. Table 1. Comparison of the EDDs in detecting the sparse mean shift of a multivariate Gaussian distribution. Below, "CUSUM": CUSUM procedure with a pre-specified all-one vector as the post-change parameter; "Shrinkage": component-wise shrinkage estimator in [49]; "GLR": GLR procedure; "ASR"/"ACM": the proposed procedures T ASR (b) and T ACM (b); p is the proportion of non-zero entries in θ. We run 10,000 Monte Carlo trials to obtain each value. For each value, the standard deviation is less than one half of the value.

Detecting the Scale Change in Gamma Distribution
We consider an example of detecting a scale change in Gamma distributions. Assume that we observe a sequence X 1 , X 2 , . . . of samples drawn from Gamma(α, β) for some α, β > 0, with the probability density function f α,β (x) = exp(−xβ)x α−1 β α /Γ̄(α) (to avoid confusion with the set Γ in Algorithm 1, we use Γ̄(·) to denote the Gamma function). The parameter α −1 is called the dispersion parameter, which scales the loss and the divergences. For simplicity, we fix α = 1, just as we fix the variance in the Gaussian case. The specification of Algorithm 1 in this case uses the natural parameter θ = −β. Assume that the pre-change distribution is Gamma(1, 1) and the post-change distribution is Gamma(1, β) for some unknown β > 0. We compare our algorithms with CUSUM, GLR, and the non-anticipating estimator based on the method-of-moments (MOM) estimator in [8]. For the CUSUM procedure, we specify the post-change β to be 2. The results are shown in Table 2. CUSUM fails to detect the change when β = 0.1, which is far from the pre-specified post-change parameter β = 2. We can see that the performance loss of the proposed ACM method compared with GLR and MOM is very small. Table 2. Comparison of the EDDs in detecting the scale change in Gamma distribution. Below, "CUSUM": CUSUM procedure with pre-specified post-change parameter β = 2; "MOM": method-of-moments estimator method; "GLR": GLR procedure; "ASR": T ASR (b) with Γ = (−∞, 0); "ACM": T ACM (b) with Γ = (−∞, 0). We run 10,000 Monte Carlo trials to obtain each value. For each value, the standard deviation is less than one half of the value.
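For intuition on the CUSUM benchmark in this Gamma(1, β) (i.e., exponential) setting, here is a hedged standalone sketch, not the paper's implementation: the per-sample log-likelihood ratio between Exp(β 1 ) and Exp(1) drives a CUSUM recursion, with all constants (β 1 = 2, threshold b = 8, change at sample 200) chosen for illustration only.

```python
import math
import random

# Per-sample log-likelihood ratio for a scale change Exp(1) -> Exp(beta1),
# using the density f_beta(x) = beta * exp(-beta * x):
#   log f_{beta1}(x) - log f_1(x) = log(beta1) - (beta1 - 1) * x.
def llr(x, beta1):
    return math.log(beta1) - (beta1 - 1.0) * x

def cusum_detect(xs, beta1=2.0, b=8.0):
    """Return the first time the CUSUM statistic crosses b, or None."""
    stat = 0.0
    for t, x in enumerate(xs, start=1):
        stat = max(0.0, stat + llr(x, beta1))
        if stat >= b:
            return t
    return None

random.seed(1)
pre = [random.expovariate(1.0) for _ in range(200)]    # Gamma(1, 1)
post = [random.expovariate(2.0) for _ in range(400)]   # Gamma(1, 2)
t = cusum_detect(pre + post)
print(t is not None)   # the statistic drifts up after the change and fires
```

The same drift argument explains the failure mode reported above: when the true post-change β is far from the nominal β 1 used in the recursion (e.g., β = 0.1 versus β 1 = 2), the expected increment can stay negative and CUSUM never accumulates evidence.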

Communication-Rate Change Detection with Erdős-Rényi Model
Next, we consider detecting a communication-rate change in a network, a model for social network data. Suppose we observe communications between nodes in a network over time, represented as a sequence of (symmetric) adjacency matrices. At time t, if node i and node j communicate, then the adjacency matrix has 1 in its ijth and jith entries (thus it forms an undirected graph); nodes that do not communicate have 0 in the corresponding entries. We model such communication patterns using the Erdős–Rényi random graph model: each edge is present or absent with a fixed probability, independently of the other edges. Under the null hypothesis, each edge is a Bernoulli random variable that takes value 1 with known probability p and value 0 with probability 1 − p. Under the alternative hypothesis, there exists an unknown time κ after which a small subset of edges occur with an unknown and different probability p′ ≠ p.
In the experiments, we set N = 20 and d = 190. For the pre-change parameters, we set p i = 0.2 for all i = 1, . . . , d. For the post-change parameters, we randomly select n out of the 190 edges, denoted by E′, and set p i = 0.8 for i ∈ E′ and p i = 0.2 for i ∉ E′. As before, we let the change happen at time ν = 0 (since the upper bound for the EDD is achieved at ν = 0, as argued in the proof of Theorem 2).
To implement CUSUM, we specify the post-change parameters p i = 0.8 for all i = 1, . . . , d.
The results are shown in Table 3. Our procedures outperform the CUSUM procedure when n is small, since the post-change parameters used by CUSUM are then far from the true parameters. Compared with the GLR procedure, our methods have a small performance loss, and the loss becomes almost negligible as n approaches d = 190. Table 3. Comparison of the EDDs in detecting changes of the communication rates in a network, for n = 78, 100, 120, 150, 170, 190. Below, "CUSUM": CUSUM procedure with pre-specified post-change parameters p = 0.8; "GLR": GLR procedure; "ASR": T ASR (b) with Γ = R; "ACM": T ACM (b) with Γ = R. We run 10,000 Monte Carlo trials to obtain each value. For each value, the standard deviation is less than one half of the value. The specifications of Algorithm 1 in this case are as follows. For the Bernoulli distribution with unknown parameter p, the natural parameter is θ = log(p/(1 − p)). Thus, we have Θ = R, φ(x) = x, dH(x) = 1, Φ(θ) = log(1 + exp(θ)), µ = exp(θ)/(1 + exp(θ)), and Φ * (µ) = µ log µ + (1 − µ) log(1 − µ).
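These Bernoulli exponential-family maps are easy to verify numerically. The sketch below (function names are ours) checks that µ = ∇Φ(θ) recovers p and that the Legendre duality Φ * (µ) = θµ − Φ(θ) holds at µ = ∇Φ(θ):

```python
import math

# Exponential-family maps for the Bernoulli distribution used by Algorithm 1:
# natural parameter theta = log(p / (1 - p)), log-partition
# Phi(theta) = log(1 + exp(theta)), mean mu = Phi'(theta) = sigmoid(theta),
# dual Phi*(mu) = mu log mu + (1 - mu) log(1 - mu)  (negative entropy).
def phi_fn(theta):
    return math.log1p(math.exp(theta))

def mean_fn(theta):
    return math.exp(theta) / (1.0 + math.exp(theta))

def phi_star(mu):
    return mu * math.log(mu) + (1.0 - mu) * math.log(1.0 - mu)

p = 0.8
theta = math.log(p / (1.0 - p))
mu = mean_fn(theta)
# Legendre duality check at mu = grad Phi(theta):  Phi*(mu) = theta*mu - Phi(theta)
ok_mean = abs(mu - p) < 1e-12
ok_dual = abs(phi_star(mu) - (theta * mu - phi_fn(theta))) < 1e-9
print(ok_mean, ok_dual)
```

These are exactly the quantities Algorithm 1 needs for the Bernoulli case: Φ for the loss, ∇Φ for the mean update, and Φ * for the Bregman divergence.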

Point Process Change-Point Detection: Poisson to Hawkes Processes
In this example, to illustrate the situation in Section 1.2, we consider a case where a homogeneous Poisson process switches to a Hawkes process (see, e.g., [12]); this can be viewed as the simplest case of Section 1.2 with one node. We construct the ACM and ASR procedures. In this case, the MLE for the unknown post-change parameter cannot be found in closed form, yet ACM and ASR can be easily constructed and give reasonably good performance, although our theory no longer holds here due to the lack of i.i.d. samples.
The Hawkes process can be viewed as a non-homogeneous Poisson process whose intensity is influenced by historical events. The data consist of a sequence of events occurring at times {t 1 , t 2 , . . . , t n } before a time horizon T: t i ≤ T. Assume the intensity of the process is λ s , s ∈ (0, T), and that there may exist a change-point κ ∈ (0, T) at which the process changes. Under the null hypothesis, λ s = µ for all s; under the alternative, λ s = µ + θ ∑ t i <s ϕ(s − t i ) for s > κ. Here µ is a known baseline intensity, θ > 0 is the unknown magnitude of the change, and ϕ(s) = βe −βs is the normalized kernel function with pre-specified parameter β > 0, which captures the influence of past events. We treat the post-change influence parameter θ as unknown since it represents an anomaly.
We first use a sliding window to convert the event times into a sequence of vectors with overlapping events. Assume the size of the sliding window is L. For a given scanning time T i ≤ T, we map all the events in [T i − L, T i ] to a vector X i = [t (1) , . . . , t (m i ) ], t (j) ∈ [T i − L, T i ], where m i is the number of events falling into the window. Note that X i can have different lengths for different i. Consider a set of scanning times T 1 , T 2 , . . . , T t . This maps the event times into a sequence of vectors X 1 , X 2 , . . . , X t of lengths m 1 , m 2 , . . . , m t . These scanning times can be arbitrary; here we set them to be the event times themselves so that there is at least one event per sliding window.
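The windowing step above can be sketched in a few lines (a standalone illustration; the function name and the toy event times are ours):

```python
# Sliding-window mapping of event times into a sequence of variable-length
# vectors X_1, X_2, ...: for each scanning time T_i we collect all events
# in the trailing window of length L. Scanning at the event times themselves
# guarantees every window holds at least one event.
def sliding_windows(event_times, window_length):
    """Return one list of in-window events per scanning time."""
    windows = []
    for T_i in event_times:                  # scan at each event time
        x_i = [t for t in event_times if T_i - window_length < t <= T_i]
        windows.append(x_i)
    return windows

events = [0.5, 1.2, 1.9, 4.0, 4.1]
for w in sliding_windows(events, window_length=2.0):
    print(len(w))
```

Note the window lengths m i genuinely vary: for the toy events above they are 1, 2, 3, 1, 2, which is why the downstream likelihood computations must accept variable-length inputs.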
For a hypothetical change-point location k, it can be shown that the log-likelihood ratio (between the Hawkes process and the Poisson process), as a function of θ, is given by (19). Based on this sliding-window approach, we approximate the original change-point detection problem as follows. Without a change, X 1 , . . . , X t are sampled from a Poisson process. Under the alternative, the change occurs at some time such that X 1 , . . . , X κ are sampled from a Poisson process, and X κ+1 , . . . , X t are sampled from a Hawkes process with parameter θ. We define the estimator of θ for an assumed change-point location κ = k accordingly. Now, consider k ∈ [i − w, i − 1] and keep w estimators θ̂ i−w,i , . . . , θ̂ i−1,i . The update of each estimator is based on stochastic gradient descent: taking the derivative of (19) with respect to θ yields the first-order condition, but there is no closed-form expression for the MLE that solves it, so we instead perform stochastic gradient descent with step-size γ > 0. We can then apply the ACM and ASR procedures, using the fact that the likelihood ratio f θ̂ k,t (X t+1 )/ f θ 0 (X t+1 ) can be calculated using (19). Table 4 shows the EDD for different values of the post-change θ. Here we choose the threshold such that the ARL is 5000. We see that the scheme has reasonably good performance: the detection delay decreases as the true signal strength θ increases.
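A hedged sketch of one such stochastic-gradient update follows. We assume the intensity λ s = µ + θ ∑ t l <s ϕ(s − t l ) with ϕ(s) = βe −βs as above, so the window log-likelihood ratio (Hawkes versus Poisson(µ)) is ∑ j log(1 + (θ/µ)A j ) − θ ∑ l (1 − e −β(T−t l ) ) with A j = ∑ l<j ϕ(t j − t l ); the step size γ and all numeric values are illustrative, not from the paper.

```python
import math

def grad_loglik_ratio(theta, events, T, mu=1.0, beta=1.0):
    """Gradient in theta of the window log-likelihood ratio sketched above."""
    g = 0.0
    for j, t_j in enumerate(events):
        # Excitation felt at t_j from earlier events in the window.
        A_j = sum(beta * math.exp(-beta * (t_j - t_l)) for t_l in events[:j])
        g += A_j / (mu + theta * A_j)        # point-process (jump) term
    # Compensator term: integral of the excitation kernel up to horizon T.
    g -= sum(1.0 - math.exp(-beta * (T - t_l)) for t_l in events)
    return g

def sgd_step(theta, events, T, gamma=0.05):
    """One projected ascent step, keeping theta in (0, inf)."""
    return max(1e-6, theta + gamma * grad_loglik_ratio(theta, events, T))

theta = 0.5
theta = sgd_step(theta, events=[0.2, 0.6, 0.7, 1.4], T=2.0)
print(theta > 0)
```

Each of the w kept estimators θ̂ k,t is advanced by one such step as a new window arrives, which is what keeps the per-sample cost constant despite the lack of a closed-form MLE.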

Conclusions
In this paper, we consider sequential hypothesis testing and change-point detection with computationally efficient one-sample update schemes obtained from online mirror descent. We show that the loss of statistical efficiency caused by the online mirror descent estimator (which replaces the exact maximum likelihood estimator computed from the complete history) is related to the regret incurred by the online convex optimization procedure. The result can be generalized to any estimation method with a logarithmic regret bound, and it sheds light on the relationship between statistical detection procedures and online convex optimization.
Then, under the measure P θ,0 , S t is a random walk with i.i.d. increments, so Wald's identity (e.g., [1]) gives (A1). On the other hand, let θ * N denote the MLE based on (X 1 , . . . , X N ). The key to the proof is to decompose the stopped process S N as a summation of three terms, as in (A2). The first term of the decomposition on the right-hand side of (A2) is always non-positive. Now consider the third term in the decomposition (A2). Similar to the proof of equation (5.109) in [4], we claim that its expectation under the measure P θ,0 is upper bounded by b/I(θ, θ 0 ) + O(1) as b → ∞. Next, we prove the claim. For any positive integer n, we further decompose the third term in (A2) as in (A3). The decomposition (A3) consists of the stochastic processes {M n (θ)} and {m n (θ, θ 0 )}, which are both P θ,0 -martingales with zero expectation, i.e., E θ,0 [M n (θ)] = E θ,0 [m n (θ, θ 0 )] = 0 for any positive integer n. Since, for the exponential family, the log-partition function Φ(θ) is bounded, the inequalities for martingales [47] yield bounds with two absolute constants C 1 and C 2 that do not depend on n. Moreover, the relevant quantities are bounded for all θ ∈ Θ. Therefore, applying (A4), we have that n −1 G n (θ), n −1 M n (θ) and n −1 m n (θ, θ 0 ) converge to 0 almost surely; moreover, the convergence is P θ,0 -r-quick for r = 1. We say that n −1 A n converges P θ,0 -r-quickly to a constant I if E θ,0 [G(ε)] r < ∞ for all ε > 0, where G(ε) = sup{n ≥ 1 : |n −1 A n − I| > ε} is the last time n −1 A n leaves the interval [I − ε, I + ε] (for more details, we refer the reader to Section 2.4.3 of [4]). Therefore, dividing both sides of (A3) by n, we obtain that n −1 ∑ n i=1 log( f θ̂ i−1 (X i )/ f θ 0 (X i )) converges P θ,0 -1-quickly to I(θ, θ 0 ). For ε > 0, we now define the last entry time L(ε) = sup{n ≥ 1 : |n −1 ∑ n i=1 log( f θ̂ i−1 (X i )/ f θ 0 (X i )) − I(θ, θ 0 )| > ε}. By the definition of P θ,0 -1-quick convergence and the finiteness of I(θ, θ 0 ), we have E θ,0 [L(ε)] < +∞ for all ε > 0.
In the following, define the scaled threshold b̄ = b/I(θ, θ 0 ). Observe that, conditioning on the event {L(ε) + 1 < N < +∞}, we have N < 1 + b̄/(1 − ε). Hence, for any 0 < ε < 1, the bound (A5) holds. Since E θ,0 [L(ε)] < ∞ for any ε > 0, it follows from (A5) that the third term in (A2) is upper bounded by b̄ + O(1). Finally, the second term in (A2) is exactly the regret defined in (12) for the online estimators, R t , when the loss function is the negative log-likelihood. The theorem is then proven by combining the above analysis of the three terms in (A2) with (A1).
Using this argument, we obtain the stated bound; note that a similar argument can be found in [49].
Next, we establish a few lemmas useful for proving Theorem 2 for the sequential detection procedures. Define a measure Q on (X ∞ , B ∞ ) under which the probability density of X i conditional on F i−1 is f θ̂ i−1 . Then for any event A ∈ F i , we have Q(A) = ∫ A Λ i dP ∞ . The following lemma shows that the restriction of Q to F i is well defined. Lemma A1. Let Q i be the restriction of Q to F i . Then for any A ∈ F k and any i ≥ k, Q i (A) = Q k (A).
Proof of Lemma 1. To bound the term P ∞ (τ(b) < ∞), we need to take advantage of the martingale property of Λ t in (2). The main technique is a combination of a change of measure and Wald's likelihood ratio identity [1]. The proof combines results from [23] and [8], and the reader can find a complete proof in [23]; for completeness, we reproduce it here.
Define L i = dP i /dQ i as the Radon–Nikodym derivative, where P i and Q i are the restrictions of P ∞ and Q to F i , respectively. Then L i = (Λ i ) −1 for any i ≥ 1 (note that Λ i is defined in (2)). Combining Lemma A1 and Wald's likelihood ratio identity yields the desired bound, where I(E) denotes the indicator function that equals 1 for any ω ∈ E and 0 otherwise.
Proof of Corollary 2. We use (5.180) and (5.188) in [4], which concern the asymptotic performance of open-ended tests. Since our problem is a special case of the problem in [4], we obtain inf T∈C(α) E θ,0 [T] ≥ (log(1/α)/I(θ, θ 0 ))(1 + o(1)).
Combining the above result with the right-hand side of (15) proves the corollary.
Proof of Theorem 2. From (A10), we have a bound for any ν ≥ 1. Therefore, to prove the theorem using Theorem 1, it suffices to show (A9). Using an argument similar to the remarks in [8], the supremum of the detection delay over all change locations is achieved when the change occurs at the first instant. This is a slight modification (a small change in the subscripts) of the remarks in [8], but for completeness and clarity we spell out the details. Notice that since θ 0 is known, for any j ≥ 1, the distribution of {max j+1≤k≤t Λ k,t } ∞ t=j+1 under P θ,j conditional on F j is the same as the distribution of {max 1≤k≤t Λ k,t } ∞ t=1 under P θ,0 . Below, we use a renewal property of the ACM procedure. Since max 1≤k≤t Λ k,t ≥ max j+1≤k≤t Λ k,t for any t > j, the corresponding stopping times are ordered accordingly. Thus, to prove (A9), it suffices to show that E θ,0 [T ACM (b)] ≤ E θ,0 [τ(b)]. To show this, define τ(b) (t) as the stopping time obtained by applying the one-sided sequential hypothesis testing procedure τ(b) to the data {X i } ∞ i=t . Then in fact T ACM (b) = min t≥1 {τ(b) (t) + t − 1}; this relationship was developed in [18]. Thus, T ACM (b) ≤ τ(b) (1) = τ(b), which completes the proof.

Proof of Lemma 2. This is a classic result proved using the martingale property, and the proof routine can be found in many textbooks such as [4]. First, rewrite T ASR (b) in terms of the statistic R t . Then {R t − t} t≥1 is a (P ∞ , F t )-martingale with zero mean. Suppose that E ∞ [T ASR (b)] < ∞ (otherwise the statement of the proposition is trivial). Equation (A11) then implies that P ∞ (T ASR (b) ≥ t) = o(t −1 ), and combining this with the fact that 0 ≤ R t ≤ exp(b) conditional on the event {T ASR (b) > t}, the boundary term vanishes in the limit. Therefore, we can apply the optional stopping theorem for martingales to obtain E ∞ [R T ASR (b) ] = E ∞ [T ASR (b)]. By the definition of T ASR (b), R T ASR (b) ≥ exp(b), which gives the stated lower bound on the ARL.

Proof of Corollary 3. Our Theorem 1 and the remarks in [15] show that the minimum worst-case detection delay, given a fixed ARL level γ, is given by (A12). It can be shown that the infimum is attained by choosing T(b) as a weighted Shiryaev–Roberts detection procedure, with a careful choice of the weight over the parameter space Θ. Combining (A12) with the right-hand side of (15) proves the corollary.
The following derivation borrows ideas from [16]. First, we derive concise forms of the two terms in the definition of R t in (12).
Lemma A2. Assume that X 1 , X 2 , . . . are i.i.d. random variables with density function f θ (x), and assume the decreasing step-size η i = 1/i in Algorithm 1. Let {θ i } i≥1 , {μ i } i≥1 be generated by Algorithm 1. If θ̂ i = θ̃ i for every i ≥ 1, then for any null distribution parameter θ 0 ∈ Θ and any t ≥ 1, (A13) holds; moreover, for any t ≥ 1, (A14) holds. By subtracting (A14) from (A13), we obtain the following result, which shows that the regret can be represented as a weighted sum of the Bregman divergences between two consecutive estimators.
Proof of Lemma A2. By the definition of the Legendre–Fenchel dual function, Φ * (µ) = θ µ − Φ(θ) whenever µ = ∇Φ(θ). Using this definition and choosing η i = 1/i, we obtain (A15) for any i ≥ 1, where the third equality uses the update rule in Line 6 of Algorithm 1 and the assumption θ̂ i = θ̃ i , and we define 1/η 0 = 0 in the last equality. Now, summing the terms in (A15) over i from 1 to t, where the second term forms a telescoping series, yields the first claim. Moreover, taking the first derivative of ∑ t i=1 {− log f θ (X i )} with respect to θ and setting it to 0, we find the stationary point μ̂ = t −1 ∑ t i=1 φ(X i ). Similarly, using the expression of the dual function and plugging μ̂ back into the equation, we have ∑ t i=1 {−θ̂ φ(X i ) + Φ(θ̂)} = −tΦ * (μ̂).
Proof of Theorem 3. Choosing the step-size η i = 1/i for every i ≥ 1 in Algorithm 1, and assuming θ̂ i = θ̃ i for every i ≥ 1, we obtain the claimed identity by induction. Subtracting (A14) from (A13) then yields the stated bound; the final equality follows from a Taylor expansion.