
Sequential Learning of Principal Curves: Summarizing Data Streams on the Fly

by ¹ and ²,*
¹ Department of Statistics, Central China Normal University, Wuhan 430079, China
² Inria, Lille-Nord Europe Research Centre and Inria London, France; Centre for Artificial Intelligence, Department of Computer Science, University College London, London WC1V 6LJ, UK
* Author to whom correspondence should be addressed.
Entropy 2021, 23(11), 1534; https://doi.org/10.3390/e23111534
Received: 22 August 2021 / Revised: 23 October 2021 / Accepted: 1 November 2021 / Published: 18 November 2021
(This article belongs to the Special Issue Approximate Bayesian Inference)

Abstract

When confronted with massive data streams, summarizing data with dimension reduction methods such as PCA raises theoretical and algorithmic pitfalls. A principal curve acts as a nonlinear generalization of PCA, and the present paper proposes a novel algorithm to automatically and sequentially learn principal curves from data streams. We show that our procedure is supported by regret bounds with optimal sublinear remainder terms. A greedy local search implementation (called slpc, for sequential learning principal curves) that incorporates both sleeping experts and multi-armed bandit ingredients is presented, along with its regret computation and performance on synthetic and real-life data.

1. Introduction

Numerous methods have been proposed in the statistics and machine learning literature to sum up information and represent data by condensed, simpler-to-understand quantities. Among these methods, principal component analysis (PCA) aims at identifying the maximal-variance axes of data. This serves as a way to represent data in a more compact fashion and, hopefully, to reveal their variability as well as possible. PCA was introduced by [1,2] and further developed by [3]. It is one of the most widely used procedures in multivariate exploratory analysis targeting dimension reduction or feature extraction. Nonetheless, PCA is a linear procedure, and the need for more sophisticated nonlinear techniques has led to the notion of principal curve. Principal curves may be seen as a nonlinear generalization of the first principal component. The goal is to obtain a curve which passes "in the middle" of the data, as illustrated by Figure 1. This notion of skeletonization of data clouds has been at the heart of numerous applications in many different domains, such as physics [4,5], character and speech recognition [6,7], and mapping and geology [5,8,9], to name but a few.

1.1. Earlier Works on Principal Curves

The original definition of principal curve dates back to [10]. A principal curve is a smooth ($\mathcal{C}^\infty$) parameterized curve $\mathbf{f}(s) = \left(f_1(s), \ldots, f_d(s)\right)$ in $\mathbb{R}^d$ which does not intersect itself, has finite length inside any bounded subset of $\mathbb{R}^d$ and is self-consistent. This last requirement means that $\mathbf{f}(s) = \mathbb{E}\left[X \mid s_{\mathbf{f}}(X) = s\right]$, where $X \in \mathbb{R}^d$ is a random vector and the so-called projection index $s_{\mathbf{f}}(x)$ is the largest real number $s$ minimizing the squared Euclidean distance between $\mathbf{f}(s)$ and $x$, defined by
$$s_{\mathbf{f}}(x) = \sup\left\{ s : \left\|x - \mathbf{f}(s)\right\|_2^2 = \inf_{\tau} \left\|x - \mathbf{f}(\tau)\right\|_2^2 \right\}.$$
Self-consistency means that each point of f is the average (under the distribution of X) of all data points projected on f , as illustrated by Figure 2.
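For a polygonal line (the class of curves used throughout this paper), the infimum in the projection index is attained on one of the segments, so both a discrete analogue of $s_{\mathbf{f}}(x)$ and the squared distance can be computed by projecting $x$ onto each segment and keeping the best. A minimal Python sketch (the paper's own implementation is in R; function names here are ours, for illustration only):

```python
import numpy as np

def project_to_segment(x, a, b):
    """Squared distance from point x to the segment [a, b], together with
    the parameter tau in [0, 1] of the closest point on that segment."""
    d = b - a
    denom = np.dot(d, d)
    tau = 0.0 if denom == 0 else float(np.clip(np.dot(x - a, d) / denom, 0.0, 1.0))
    closest = a + tau * d
    return float(np.sum((x - closest) ** 2)), tau

def distance_to_polyline(x, vertices):
    """Squared distance from x to the polygonal line with the given
    vertices, plus the (segment index, tau) pair locating the projection:
    a discrete analogue of the projection index s_f(x)."""
    best = (np.inf, None)
    for i in range(len(vertices) - 1):
        d2, tau = project_to_segment(x, vertices[i], vertices[i + 1])
        if d2 < best[0]:
            best = (d2, (i, tau))
    return best

# A two-segment polygonal line through (0,0), (1,0), (1,1).
V = [np.array([0.0, 0.0]), np.array([1.0, 0.0]), np.array([1.0, 1.0])]
d2, (seg, tau) = distance_to_polyline(np.array([0.5, 0.2]), V)
```

Here the projection is encoded by the pair (segment index, within-segment parameter $\tau$), which is enough for all the computations used later.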
However, an unfortunate consequence of this definition is that existence is not guaranteed for a general distribution, let alone for an online sequence for which no probabilistic assumption is made. In order to handle complex data structures, Ref. [11] proposed principal curves of oriented points (PCOPs), built from principal oriented points (POPs), which are defined as the fixed points of an expectation function of points projected onto a hyperplane minimizing the total variance. To obtain POPs, a cluster analysis is performed on the hyperplane and only data in the local cluster are considered. Ref. [12] introduced the local principal curve (LPC), whose concept is similar to that of [11], but which accelerates the computation of POPs by calculating local centers of mass instead of performing cluster analysis, and local principal components instead of principal directions. Later, Ref. [13] also considered LPCs in data compression and regression, reducing the dimension of the predictor space to a low-dimensional manifold. Ref. [14] extended the idea of localization to independent component analysis (ICA) by proposing a local-to-global nonlinear ICA framework for visual and auditory signals. Ref. [15] considered principal curves from a different perspective: as the ridges of the smooth probability density function (PDF) generating the dataset, where the ridges are the collections of all points at which the local gradient of the PDF is an eigenvector of the local Hessian and the eigenvalues corresponding to the remaining eigenvectors are negative. To estimate principal curves based on this definition, the subspace constrained mean shift (SCMS) algorithm was proposed. All the local methods above require strong assumptions on the PDF, such as twice continuous differentiability, which may be challenging to satisfy in the setting of online sequential data. Ref. [16] proposed a new concept of principal curves which ensures existence for a large class of distributions.
Principal curves $\mathbf{f}^\star$ are defined as the curves minimizing the expected squared distance over a class $\mathcal{F}_L$ of curves whose length is smaller than $L > 0$; namely,
$$\mathbf{f}^\star \in \operatorname*{arg\,inf}_{\mathbf{f} \in \mathcal{F}_L} \Delta(\mathbf{f}),$$
where
$$\Delta(\mathbf{f}) = \mathbb{E}\left[\Delta(\mathbf{f}, X)\right] = \mathbb{E}\left[\inf_s \left\|\mathbf{f}(s) - X\right\|_2^2\right].$$
If $\mathbb{E}\|X\|_2^2 < \infty$, $\mathbf{f}^\star$ always exists but may not be unique. In practical situations where only i.i.d. copies $X_1, \ldots, X_n$ of $X$ are observed, the method of [16] considers classes $\mathcal{F}_{k,L}$ of all polygonal lines with $k$ segments and length not exceeding $L$, and chooses an estimator $\hat{\mathbf{f}}_{k,n}$ of $\mathbf{f}^\star$ as the one within $\mathcal{F}_{k,L}$ which minimizes the empirical counterpart
$$\Delta_n(\mathbf{f}) = \frac{1}{n} \sum_{i=1}^n \Delta(\mathbf{f}, X_i)$$
of $\Delta(\mathbf{f})$. It is proved in [17] that if $X$ is almost surely bounded and $k \propto n^{1/3}$, then
$$\Delta\left(\hat{\mathbf{f}}_{k,n}\right) - \Delta\left(\mathbf{f}^\star\right) = \mathcal{O}\left(n^{-1/3}\right).$$
As the task of finding a polygonal line with k segments and length of at most L that minimizes Δ n ( f ) is computationally costly, Ref. [17] proposed a polygonal line algorithm. This iterative algorithm proceeds by fitting a polygonal line with k segments and considerably speeds up the exploration part by resorting to gradient descent. The two steps (projection and optimization) are similar to what is done by the k-means algorithm. However, the polygonal line algorithm is not supported by theoretical bounds and leads to variable performance depending on the distribution of the observations.
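The alternating (projection/optimization) flavour of such algorithms can be sketched as follows. This is a deliberately simplified, k-means-like illustration over vertices, not Kégl's actual gradient-based solver, and all names are illustrative:

```python
import numpy as np

def fit_polyline(points, k, n_iter=20):
    """Simplified two-step sketch: initialise k+1 vertices along the first
    principal component, then alternate a projection step (assign every
    point to its nearest vertex) and an optimisation step (move each
    vertex to the mean of its assigned points)."""
    X = np.asarray(points, dtype=float)
    center = X.mean(axis=0)
    # First principal component via SVD of the centred data.
    _, _, vt = np.linalg.svd(X - center, full_matrices=False)
    pc1 = vt[0]
    proj = (X - center) @ pc1
    ts = np.linspace(proj.min(), proj.max(), k + 1)
    V = center + np.outer(ts, pc1)              # initial vertices on the PC line
    for _ in range(n_iter):
        # Projection step: nearest vertex for every point.
        d2 = ((X[:, None, :] - V[None, :, :]) ** 2).sum(-1)
        labels = d2.argmin(axis=1)
        # Optimisation step: move each vertex to its local mean.
        for j in range(k + 1):
            if np.any(labels == j):
                V[j] = X[labels == j].mean(axis=0)
    return V
```

The real polygonal line algorithm refines this picture by projecting onto segments rather than vertices and by optimizing a penalized objective with gradient descent.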
As the number $k$ of segments plays a crucial role (too small a $k$ leads to a poor summary of data, whereas too large a $k$ yields overfitting; see Figure 3), Ref. [18] aimed to fill this gap by selecting an optimal $k$ from both theoretical and practical perspectives.
Their approach relies strongly on the theory of model selection by penalization introduced by [19] and further developed by [20]. By considering countable classes $\{\mathcal{F}_{k,\ell}\}_{k,\ell}$ of polygonal lines with $k$ segments, total length $\ell \leq L$, and vertices on a lattice, the optimal $(\hat{k}, \hat{\ell})$ is obtained as the minimizer of the criterion
$$\operatorname{crit}(k, \ell) = \Delta_n\left(\hat{\mathbf{f}}_{k,\ell}\right) + \operatorname{pen}(k, \ell),$$
where
$$\operatorname{pen}(k, \ell) = c_0 \sqrt{\frac{k}{n}} + c_1 \frac{\ell}{\sqrt{n}} + c_2 \frac{1}{\sqrt{n}} + \delta^2 \sqrt{\frac{w_{k,\ell}}{2n}}$$
is a penalty function, where $\delta$ stands for the diameter of the observations, $w_{k,\ell}$ denotes the weight attached to the class $\mathcal{F}_{k,\ell}$, and the constants $c_0, c_1, c_2$ depend on $\delta$, the maximum length $L$ and the dimension of the observations. Ref. [18] then proved that
$$\mathbb{E}\left[\Delta\left(\hat{\mathbf{f}}_{\hat{k},\hat{\ell}}\right)\right] - \Delta\left(\mathbf{f}^\star\right) \leq \inf_{k,\ell} \left\{ \mathbb{E}\left[\Delta\left(\hat{\mathbf{f}}_{k,\ell}\right)\right] - \Delta\left(\mathbf{f}^\star\right) + \operatorname{pen}(k,\ell) \right\} + \delta^2 \Sigma\, 2^{3/2} \sqrt{\frac{\pi}{n}}, \tag{1}$$
where $\Sigma$ is a numerical constant. The expected loss of the final polygonal line $\hat{\mathbf{f}}_{\hat{k},\hat{\ell}}$ is close to the minimal loss achievable over $\mathcal{F}_{k,\ell}$, up to a remainder term decaying as $1/\sqrt{n}$.
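In code, this model selection step amounts to minimizing the penalized criterion over candidate pairs $(k, \ell)$. The sketch below assumes precomputed empirical losses and uses placeholder constants $c_0, c_1, c_2$ (in the cited work they depend on $\delta$, $L$ and the dimension); all names are illustrative:

```python
import math

def penalty(k, ell, n, delta, w, c0=1.0, c1=1.0, c2=1.0):
    """Penalty of the displayed form; c0, c1, c2 are illustrative
    placeholders, not the paper's actual constants."""
    return (c0 * math.sqrt(k / n) + c1 * ell / math.sqrt(n)
            + c2 / math.sqrt(n) + delta ** 2 * math.sqrt(w / (2 * n)))

def select_model(emp_losses, n, delta, weights):
    """Return the (k, ell) minimising crit = empirical loss + penalty.
    `emp_losses` maps (k, ell) -> Delta_n of the fitted line;
    `weights` maps (k, ell) -> w_{k,ell}."""
    return min(emp_losses,
               key=lambda ke: emp_losses[ke] + penalty(*ke, n, delta, weights[ke]))
```

With larger $k$, the empirical loss shrinks but the penalty grows, so the minimizer trades the two off, exactly the overfitting/underfitting tension illustrated in Figure 3.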

1.2. Motivation

The big data paradigm, where collecting, storing and analyzing massive amounts of large and complex data becomes the new standard, commands us to revisit some of the classical statistical and machine learning techniques. Tremendous improvements in data acquisition infrastructures generate new continuous streams of data rather than batch datasets, which has drawn great interest to sequential learning. Extending the notion of principal curves to the sequential setting opens up immediate practical application possibilities. As an example, path planning from passengers' locations can help taxi companies better optimize their fleets, and online algorithms yielding instantaneous path summarization would be adapted to the sequential nature of geolocalized data. Existing theoretical works and practical implementations of principal curves are designed for the batch setting [7,16,17,18,21], and their adaptation to the sequential setting is not a smooth process. As an example, consider the algorithm in [18]. It assumes that the vertices of principal curves are located on a lattice, and its computational complexity is of order $\mathcal{O}(nN^p)$, where $n$ is the number of observations, $N$ the number of points on the lattice and $p$ the maximum number of vertices. When $p$ is large, running this algorithm at each epoch yields a monumental computational cost. In general, if data are not identically distributed or are even adversarial, algorithms that work well in the batch setting may not be ideal when cast into the online setting (see [22], Chapter 4). To the best of our knowledge, little effort has been put so far into extending principal curve algorithms to the sequential context.
Ref. [23] provided an incremental version of the SCMS algorithm [15], which is based on the definition of a principal curve as the ridge of the smooth probability density function generating the observations. The SCMS algorithm is applied to the input points associated with output points close to the new incoming sample, while the remaining outputs are left unchanged. Hence, this algorithm can deal with sequential data. As presented in the next section, our algorithm for sequentially updating the principal curve vertices that are close to new data is similar in spirit to incremental SCMS. A difference, however, is that our algorithm outputs polygonal lines. In addition, the computational complexity of our method is $\mathcal{O}(n^2)$, whereas incremental SCMS has $\mathcal{O}(n^3)$ complexity, where $n$ is the number of observations. Ref. [24] considered sequential principal curve analysis in a fairly different setting, in which the goal was to derive, in an adaptive fashion, a set of nonlinear sensors by using a set of preliminary principal curves; unfolding principal curves sequentially and a sequential path for Jacobian integration were considered. "Sequential" in that setting refers to the generalization of principal curves to principal surfaces or even principal manifolds of higher dimensions. This way of sequentially exploiting principal curves was first proposed by [11] and later extended by [14,25,26] to give curvilinear representations using sequences of local-to-global curves. In addition, Refs. [15,27,28] presented, respectively, principal polynomial and non-parametric regressions to capture the nonlinear nature of data. However, these methods were not originally designed for treating sequential data. The present paper aims at filling this gap: our goal is to propose an online perspective on principal curves by automatically and sequentially learning the best principal curve summarizing a data stream.
Sequential learning takes advantage of the latest collected (set of) observations and therefore suffers a much smaller computational cost.
Sequential learning operates as follows: a blackbox reveals at each time $t$ some deterministic value $x_t$, $t = 1, 2, \ldots$, and a forecaster attempts to predict sequentially the next value based on past observations (and possibly other available information). The performance of the forecaster is no longer evaluated by its generalization error (as in the batch setting) but rather by a regret bound which quantifies the cumulative loss of the forecaster over the first $T$ rounds with respect to some reference minimal loss. In sequential learning, the velocity of algorithms may be favored over statistical precision. An immediate use of the aforementioned techniques [17,18,21] at each time round $t$ (treating data collected until $t$ as a batch dataset) would result in a monumental algorithmic cost. Rather, we propose a novel algorithm which adapts to the sequential nature of data, i.e., which takes advantage of previous computations.
The contributions of the present paper are twofold. We first propose a sequential principal curve algorithm, for which we derive regret bounds. We then present an implementation, illustrated on a toy dataset and a real-life dataset (seismic data). The sketch of our procedure is as follows. At each time round $t$, the number $k_t$ of segments is chosen automatically, and the number $k_{t+1}$ of segments for the next round is obtained using only information about $k_t$ and a small number of past observations. The core of our procedure relies on computing a quantity which is linked to the mode of the so-called Gibbs quasi-posterior and is inspired by quasi-Bayesian learning. The use of quasi-Bayesian estimators is especially advocated by the PAC-Bayesian theory, which originated in the machine learning community in the late 1990s, in the seminal works of [29] and McAllester [30,31]. The PAC-Bayesian theory has been successfully adapted to sequential learning problems; see, for example, Ref. [32] for online clustering. We refer to [33,34] for a recent overview of the field.
The paper is organized as follows. Section 2 presents our notation and our online principal curve algorithm, for which we provide regret bounds with sublinear remainder terms in Section 3. A practical implementation is proposed in Section 4, and we illustrate its performance on synthetic and real-life datasets in Section 5. Proofs of all original results claimed in the paper are collected in Section 6.

2. Notation

A parameterized curve in $\mathbb{R}^d$ is a continuous function $\mathbf{f} : I \to \mathbb{R}^d$, where $I = [a, b]$ is a closed interval of the real line. The length of $\mathbf{f}$ is given by
$$\mathcal{L}(\mathbf{f}) = \lim_{M \to \infty} \; \sup_{a = s_0 < s_1 < \cdots < s_M = b} \; \sum_{i=1}^{M} \left\| \mathbf{f}(s_i) - \mathbf{f}(s_{i-1}) \right\|_2.$$
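For a polygonal line, this supremum is attained at the partition given by the vertices, so the length reduces to the sum of the segment lengths. A one-function Python sketch (illustrative; the paper's implementation is in R):

```python
import numpy as np

def polyline_length(vertices):
    """Length of a polygonal line: for piecewise-linear curves the
    supremum in the definition is attained by the vertex partition, so
    the length is simply the sum of the segment lengths."""
    V = np.asarray(vertices, dtype=float)
    return float(np.sum(np.linalg.norm(np.diff(V, axis=0), axis=1)))
```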
Let $x_1, x_2, \ldots, x_T \in B(0, \sqrt{d}R) \subset \mathbb{R}^d$ be a sequence of data, where $B(c, R)$ stands for the $\ell_2$-ball centered at $c \in \mathbb{R}^d$ with radius $R > 0$. Let $\mathcal{Q}_\delta$ be a grid over $B(0, \sqrt{d}R)$, i.e., $\mathcal{Q}_\delta = B(0, \sqrt{d}R) \cap \Gamma_\delta$, where $\Gamma_\delta$ is a lattice in $\mathbb{R}^d$ with spacing $\delta > 0$. Let $L > 0$ and define, for each $k \in \llbracket 1, p \rrbracket$, the collection $\mathcal{F}_{k,L}$ of polygonal lines $\mathbf{f}$ with $k$ segments whose vertices are in $\mathcal{Q}_\delta$ and such that $\mathcal{L}(\mathbf{f}) \leq L$. Denote by $\mathcal{F}_p = \bigcup_{k=1}^p \mathcal{F}_{k,L}$ the set of all polygonal lines with a number of segments at most $p$, whose vertices are in $\mathcal{Q}_\delta$ and whose length is at most $L$. Finally, let $K(\mathbf{f})$ denote the number of segments of $\mathbf{f} \in \mathcal{F}_p$. This strategy is illustrated by Figure 4.
Our goal is to learn a time-dependent polygonal line which passes through the "middle" of data and gives a summary of all available observations $x_1, \ldots, x_{t-1}$ (denoted by $(x_s)_{1:(t-1)}$ hereafter) before time $t$. Our output at time $t$ is a polygonal line $\hat{\mathbf{f}}_t \in \mathcal{F}_p$ depending on past information $(x_s)_{1:(t-1)}$ and past predictions $(\hat{\mathbf{f}}_s)_{1:(t-1)}$. When $x_t$ is revealed, the instantaneous loss at time $t$ is computed as
$$\Delta\left(\hat{\mathbf{f}}_t, x_t\right) = \inf_{s \in I} \left\| \hat{\mathbf{f}}_t(s) - x_t \right\|_2^2. \tag{2}$$
In what follows, we investigate regret bounds for the cumulative loss based on (2). Given a measurable space $\Theta$ (equipped with its Borel $\sigma$-algebra), we let $\mathcal{P}(\Theta)$ denote the set of probability distributions on $\Theta$, and for some reference measure $\pi$, we let $\mathcal{P}_\pi(\Theta)$ be the set of probability distributions absolutely continuous with respect to $\pi$.
For any $k \in \llbracket 1, p \rrbracket$, let $\pi_k$ denote a probability distribution on $\mathcal{F}_{k,L}$. We define the prior $\pi$ on $\mathcal{F}_p = \bigcup_{k=1}^p \mathcal{F}_{k,L}$ as
$$\pi(\mathbf{f}) = \sum_{k \in \llbracket 1, p \rrbracket} w_k\, \pi_k(\mathbf{f})\, \mathbb{1}_{\{\mathbf{f} \in \mathcal{F}_{k,L}\}}, \qquad \mathbf{f} \in \mathcal{F}_p,$$
where $w_1, \ldots, w_p \geq 0$ and $\sum_{k \in \llbracket 1, p \rrbracket} w_k = 1$.
We adopt a quasi-Bayesian-flavored procedure: consider the Gibbs quasi-posterior (note that this is not a proper posterior in all generality, hence the term "quasi")
$$\hat{\rho}_t(\cdot) \propto \exp\left(-\lambda S_t(\cdot)\right) \pi(\cdot),$$
where
$$S_t(\mathbf{f}) = S_{t-1}(\mathbf{f}) + \Delta(\mathbf{f}, x_t) + \frac{\lambda}{2} \left( \Delta(\mathbf{f}, x_t) - \Delta(\hat{\mathbf{f}}_t, x_t) \right)^2,$$
as advocated by [32,35], who then considered realizations from this quasi-posterior. In the present paper, we will rather focus on a quantity linked to the mode of this quasi-posterior. Indeed, the mode of the quasi-posterior $\hat{\rho}_{t+1}$ is
$$\operatorname*{arg\,min}_{\mathbf{f} \in \mathcal{F}_p} \left\{ \underbrace{\sum_{s=1}^t \Delta(\mathbf{f}, x_s)}_{(i)} + \underbrace{\frac{\lambda}{2} \sum_{s=1}^t \left( \Delta(\mathbf{f}, x_s) - \Delta(\hat{\mathbf{f}}_s, x_s) \right)^2}_{(ii)} + \underbrace{\frac{-\ln \pi(\mathbf{f})}{\lambda}}_{(iii)} \right\},$$
where (i) is a cumulative loss term, (ii) is a term controlling the variance of the prediction $\mathbf{f}$ with respect to past predictions $\hat{\mathbf{f}}_s$, $s \leq t$, and (iii) can be regarded as a penalty on the complexity of $\mathbf{f}$ if $\pi$ is well chosen. This mode hence has a similar flavor to follow the best expert or follow the perturbed leader in the setting of prediction with experts (see [22,36], Chapters 3 and 4) if we consider each $\mathbf{f} \in \mathcal{F}_p$ as an expert which always delivers constant advice. These remarks yield Algorithm 1.
Algorithm 1 Sequentially learning principal curves.
1: Input parameters: $p > 0$, $\eta > 0$, $\pi(z) = e^{-z} \mathbb{1}_{\{z > 0\}}$ and penalty function $h : \mathcal{F}_p \to \mathbb{R}_+$
2: Initialization: for each $\mathbf{f} \in \mathcal{F}_p$, draw $z_{\mathbf{f}} \sim \pi$ and set $\Delta(\mathbf{f}, 0) = \frac{1}{\eta}\left(h(\mathbf{f}) - z_{\mathbf{f}}\right)$
3: For $t = 1, \ldots, T$
4:   Get the data $x_t$
5:   Obtain
$$\hat{\mathbf{f}}_t = \operatorname*{arg\,inf}_{\mathbf{f} \in \mathcal{F}_p} \sum_{s=0}^{t-1} \Delta(\mathbf{f}, s),$$
  where $\Delta(\mathbf{f}, s) = \Delta(\mathbf{f}, x_s)$ for $s \geq 1$.
6: End for
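Algorithm 1 is a follow-the-perturbed-leader scheme over the finite class $\mathcal{F}_p$. Over a small, explicitly enumerated candidate set it can be sketched as below (illustrative Python, whereas the paper's implementation is in R; all names are ours):

```python
import numpy as np

def loss(f, x):
    """Delta(f, x): squared distance from point x to the polygonal line
    whose vertices are the rows of f."""
    V = np.asarray(f, dtype=float)
    best = np.inf
    for a, b in zip(V[:-1], V[1:]):
        d = b - a
        tau = np.clip(np.dot(x - a, d) / max(np.dot(d, d), 1e-12), 0.0, 1.0)
        best = min(best, float(np.sum((x - a - tau * d) ** 2)))
    return best

def algorithm1(candidates, stream, h, eta, rng):
    """Sketch of Algorithm 1: initialise the cumulated losses with
    (h(f) - z_f)/eta, then at each round pick the arg-inf and add the
    newly revealed losses."""
    z = rng.exponential(scale=1.0, size=len(candidates))      # z_f ~ pi
    cum = np.array([(h(f) - z[i]) / eta
                    for i, f in enumerate(candidates)])       # Delta(f, 0)
    picks = []
    for x in stream:
        picks.append(int(np.argmin(cum)))                     # f_hat_t
        cum += np.array([loss(f, x) for f in candidates])     # add Delta(f, t)
    return picks
```

In the real setting, $\mathcal{F}_p$ is far too large to enumerate; Section 4 shows how a local search avoids this.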

3. Regret Bounds for Sequential Learning of Principal Curves

We now present our main theoretical results.
Theorem 1.
For any sequence $(x_t)_{1:T} \in B(0, \sqrt{d}R)$, $R \geq 0$, and any penalty function $h : \mathcal{F}_p \to \mathbb{R}_+$, let $\pi(z) = e^{-z} \mathbb{1}_{\{z > 0\}}$. Let $0 < \eta \leq \frac{1}{d(2R+\delta)^2}$; then the procedure described in Algorithm 1 satisfies
$$\sum_{t=1}^T \mathbb{E}_{\pi}\left[\Delta\left(\hat{\mathbf{f}}_t, x_t\right)\right] \leq \left(1 + c_0(e-1)\eta\right) S_{T,h,\eta} + \frac{1}{\eta}\left(1 + \ln \sum_{\mathbf{f} \in \mathcal{F}_p} e^{-h(\mathbf{f})}\right),$$
where $c_0 = d(2R+\delta)^2$ and
$$S_{T,h,\eta} = \inf_{k \in \llbracket 1, p \rrbracket} \; \inf_{\substack{\mathbf{f} \in \mathcal{F}_p \\ K(\mathbf{f}) = k}} \left\{ \sum_{t=1}^T \Delta(\mathbf{f}, x_t) + \frac{h(\mathbf{f})}{\eta} \right\}.$$
The expectation of the cumulative loss of the polygonal lines $\hat{\mathbf{f}}_1, \ldots, \hat{\mathbf{f}}_T$ is upper-bounded by the smallest penalized cumulative loss over all $k \in \{1, \ldots, p\}$, up to a multiplicative term $\left(1 + c_0(e-1)\eta\right)$ which can be made arbitrarily close to 1 by choosing a small enough $\eta$. However, this will lead to both a large $h(\mathbf{f})/\eta$ in $S_{T,h,\eta}$ and a large $\frac{1}{\eta}\left(1 + \ln \sum_{\mathbf{f} \in \mathcal{F}_p} e^{-h(\mathbf{f})}\right)$. Another important issue is the choice of the penalty function $h$. For each $\mathbf{f} \in \mathcal{F}_p$, $h(\mathbf{f})$ should be large enough to ensure a small $\sum_{\mathbf{f} \in \mathcal{F}_p} e^{-h(\mathbf{f})}$, but not too large, to avoid overpenalization and a larger value of $S_{T,h,\eta}$. We therefore set
$$h(\mathbf{f}) \geq \ln(pe) + \ln \left| \left\{ \mathbf{f} \in \mathcal{F}_p, K(\mathbf{f}) = k \right\} \right| \tag{3}$$
for each $\mathbf{f}$ with $k$ segments (where $|M|$ denotes the cardinality of a set $M$), since it leads to
$$\sum_{\mathbf{f} \in \mathcal{F}_p} e^{-h(\mathbf{f})} = \sum_{k \in \llbracket 1, p \rrbracket} \; \sum_{\substack{\mathbf{f} \in \mathcal{F}_p \\ K(\mathbf{f}) = k}} e^{-h(\mathbf{f})} \leq \sum_{k \in \llbracket 1, p \rrbracket} \frac{1}{pe} \leq \frac{1}{e}.$$
The penalty function $h(\mathbf{f}) = c_1 K(\mathbf{f}) + c_2 L + c_3$ satisfies (3), where $c_1, c_2, c_3$ are constants depending on $R$, $d$, $\delta$, $p$ (this is proven in Lemma 3, in Section 6). We therefore obtain the following corollary.
Corollary 1.
Under the assumptions of Theorem 1, let
$$\eta = \min\left\{ \frac{1}{d(2R+\delta)^2},\ \sqrt{\frac{c_1 p + c_2 L + c_3}{c_0(e-1) \inf_{\mathbf{f} \in \mathcal{F}_p} \sum_{t=1}^T \Delta(\mathbf{f}, x_t)}} \right\}.$$
Then
$$\sum_{t=1}^T \mathbb{E}\left[\Delta\left(\hat{\mathbf{f}}_t, x_t\right)\right] \leq \inf_{k \in \llbracket 1, p \rrbracket} \; \inf_{\substack{\mathbf{f} \in \mathcal{F}_p \\ K(\mathbf{f}) = k}} \left\{ \sum_{t=1}^T \Delta(\mathbf{f}, x_t) + \sqrt{c_0(e-1)}\, r_{T,k,L} \right\} + \sqrt{c_0(e-1)}\, r_{T,p,L} + c_0(e-1)\left(c_1 p + c_2 L + c_3\right),$$
where $r_{T,k,L} = \sqrt{\inf_{\mathbf{f} \in \mathcal{F}_p} \sum_{t=1}^T \Delta(\mathbf{f}, x_t) \left( c_1 k + c_2 L + c_3 \right)}$.
Proof. 
Note that
$$\sum_{t=1}^T \mathbb{E}\left[\Delta\left(\hat{\mathbf{f}}_t, x_t\right)\right] \leq S_{T,h,\eta} + \eta\, c_0(e-1) \inf_{\mathbf{f} \in \mathcal{F}_p} \sum_{t=1}^T \Delta(\mathbf{f}, x_t) + c_0(e-1)\left(c_1 p + c_2 L + c_3\right),$$
and we conclude by setting
$$\eta = \sqrt{\frac{c_1 p + c_2 L + c_3}{c_0(e-1) \inf_{\mathbf{f} \in \mathcal{F}_p} \sum_{t=1}^T \Delta(\mathbf{f}, x_t)}}. \qquad \square$$
Sadly, Corollary 1 is not of much practical use, since the optimal value for $\eta$ depends on $\inf_{\mathbf{f} \in \mathcal{F}_p} \sum_{t=1}^T \Delta(\mathbf{f}, x_t)$, which is obviously unknown, even more so at time $t = 0$. We therefore provide an adaptive refinement of Algorithm 1 in the following Algorithm 2.
Algorithm 2 Sequentially and adaptively learning principal curves.
1: Input parameters: $p > 0$, $L > 0$, $\pi$, $h$ and $\eta_0 = \sqrt{\frac{c_1 p + c_2 L + c_3}{c_0(e-1)}}$
2: Initialization: for each $\mathbf{f} \in \mathcal{F}_p$, draw $z_{\mathbf{f}} \sim \pi$, set $\Delta(\mathbf{f}, 0) = \frac{1}{\eta_0}\left(h(\mathbf{f}) - z_{\mathbf{f}}\right)$ and $\hat{\mathbf{f}}_0 = \operatorname*{arg\,inf}_{\mathbf{f} \in \mathcal{F}_p} \Delta(\mathbf{f}, 0)$
3: For $t = 1, \ldots, T$
4:   Compute $\eta_t = \sqrt{\frac{c_1 p + c_2 L + c_3}{c_0(e-1)t}}$
5:   Get data $x_t$ and compute $\Delta(\mathbf{f}, t) = \Delta(\mathbf{f}, x_t) + \left(\frac{1}{\eta_t} - \frac{1}{\eta_{t-1}}\right)\left(h(\mathbf{f}) - z_{\mathbf{f}}\right)$
6:   Obtain
$$\hat{\mathbf{f}}_t = \operatorname*{arg\,inf}_{\mathbf{f} \in \mathcal{F}_p} \sum_{s=0}^{t-1} \Delta(\mathbf{f}, s).$$
7: End for
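The adaptive variant only changes how the penalty and perturbation are rescaled across rounds. A sketch over an abstract candidate set, with `c` standing for $c_1 p + c_2 L + c_3$ (illustrative Python; `loss_fn` is any instantaneous loss, e.g. the squared distance to a polygonal line):

```python
import math
import numpy as np

def algorithm2(candidates, stream, loss_fn, h, c, c0, rng):
    """Sketch of Algorithm 2: the same perturbed-leader rule as
    Algorithm 1, but with the decreasing rate
    eta_t = sqrt(c / (c0*(e-1)*t)); the rate change is folded into the
    cumulated losses through the (1/eta_t - 1/eta_{t-1}) correction."""
    z = rng.exponential(size=len(candidates))
    pen = np.array([h(f) for f in candidates]) - z      # h(f) - z_f
    eta = math.sqrt(c / (c0 * (math.e - 1.0)))          # eta_0
    cum = pen / eta                                     # Delta(f, 0)
    picks = []
    for t, x in enumerate(stream, start=1):
        eta_t = math.sqrt(c / (c0 * (math.e - 1.0) * t))
        picks.append(int(np.argmin(cum)))               # f_hat_t
        cum += np.array([loss_fn(f, x) for f in candidates]) \
               + (1.0 / eta_t - 1.0 / eta) * pen        # add Delta(f, t)
        eta = eta_t
    return picks
```

Since $1/\eta_t$ grows like $\sqrt{t}$, the influence of the penalty and of the random perturbation grows slowly, which is what yields the sublinear remainder in Theorem 2.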
Theorem 2.
For any sequence $(x_t)_{1:T} \in B(0, \sqrt{d}R)$, $R \geq 0$, let $h(\mathbf{f}) = c_1 K(\mathbf{f}) + c_2 L + c_3$, where $c_1, c_2, c_3$ are constants depending on $R$, $d$, $\delta$, $\ln p$. Let $\pi(z) = e^{-z} \mathbb{1}_{\{z > 0\}}$ and
$$\eta_0 = \sqrt{\frac{c_1 p + c_2 L + c_3}{c_0(e-1)}}, \qquad \eta_t = \sqrt{\frac{c_1 p + c_2 L + c_3}{c_0(e-1)t}},$$
where $t \geq 1$ and $c_0 = d(2R+\delta)^2$. Then the procedure described in Algorithm 2 satisfies
$$\sum_{t=1}^T \mathbb{E}\left[\Delta\left(\hat{\mathbf{f}}_t, x_t\right)\right] \leq \inf_{k \in \llbracket 1, p \rrbracket} \; \inf_{\substack{\mathbf{f} \in \mathcal{F}_p \\ K(\mathbf{f}) = k}} \left\{ \sum_{t=1}^T \Delta(\mathbf{f}, x_t) + \sqrt{c_0(e-1)T\left(c_1 k + c_2 L + c_3\right)} \right\} + 2\sqrt{c_0(e-1)T\left(c_1 p + c_2 L + c_3\right)}.$$
The message of this regret bound is that the expected cumulative loss of the polygonal lines $\hat{\mathbf{f}}_1, \ldots, \hat{\mathbf{f}}_T$ is upper-bounded by the minimal cumulative loss over all $k \in \{1, \ldots, p\}$, up to an additive term which is sublinear in $T$. The actual magnitude of this remainder term is $\sqrt{kT}$. When $L$ is fixed, the number $k$ of segments is a measure of the complexity of the retained polygonal line. This bound therefore has the same magnitude as (1), which is the most refined bound in the literature so far ([18], where the optimal values for $k$ and $L$ were obtained in a model selection fashion).

4. Implementation

The argument of the infimum in Algorithm 2 is taken over $\mathcal{F}_p = \bigcup_{k=1}^p \mathcal{F}_{k,L}$, which has a cardinality of order $|\mathcal{Q}_\delta|^p$, making any greedy search prohibitively time-consuming. We instead turn to the following strategy: given a polygonal line $\hat{\mathbf{f}}_t \in \mathcal{F}_{k_t,L}$ with $k_t$ segments, we restrict, with a certain probability, the search for $\hat{\mathbf{f}}_{t+1}$ to a neighborhood $\mathcal{U}(\hat{\mathbf{f}}_t)$ (see the formal definition below) of $\hat{\mathbf{f}}_t$. This strategy is well suited to the principal curve setting, since if observation $x_t$ is close to $\hat{\mathbf{f}}_t$, one can expect that a polygonal line which fits the observations $x_s$, $s = 1, \ldots, t$, well lies in a neighborhood of $\hat{\mathbf{f}}_t$. In addition, if each polygonal line $\mathbf{f}$ is regarded as an action, we no longer assume that all actions are available at all times, and allow the set of available actions to vary at each time. This is a model known as "sleeping experts (or actions)" in prior work [37,38]. In this setting, defining the regret with respect to the best action of the whole set in hindsight remains difficult, since that action might sometimes be unavailable. Hence, it is natural to define the regret with respect to the best ranking of all actions in hindsight according to their losses or rewards, where at each round one chooses, among the available actions, the one ranked highest. Ref. [38] introduced this notion of regret and studied both the full-information (best action) and partial-information (multi-armed bandit) settings with stochastic and adversarial rewards and adversarial action availability. They pointed out that the EXP4 algorithm [37] attains the optimal regret in the adversarial rewards case but has a runtime exponential in the number of actions. Ref. [39] considered the full- and partial-information settings with stochastic action availability and proposed an algorithm that runs in polynomial time.
In what follows, we materialize our implementation by resorting to “sleeping experts”, i.e., a special set of available actions that adapts to the setting of principal curves.
Let $\sigma$ denote an ordering of the $|\mathcal{F}_p|$ actions, and $A_t$ a subset of the actions available at round $t$. We let $\sigma(A_t)$ denote the highest-ranked action in $A_t$. In addition, for any action $\mathbf{f} \in \mathcal{F}_p$, we define the reward $r_{\mathbf{f},t}$ of $\mathbf{f}$ at round $t$, $t \geq 0$, by
$$r_{\mathbf{f},t} = c_0 - \Delta(\mathbf{f}, x_t).$$
It is clear that $r_{\mathbf{f},t} \in (0, c_0)$. This conversion from losses to gains is done in order to facilitate the subsequent performance analysis. The reward of an ordering $\sigma$ is the cumulative reward of the selected action at each time:
$$\sum_{t=1}^T r_{\sigma(A_t),t},$$
and the reward of the best ordering is $\max_\sigma \sum_{t=1}^T r_{\sigma(A_t),t}$ (respectively, $\mathbb{E}\left[\max_\sigma \sum_{t=1}^T r_{\sigma(A_t),t}\right]$ when $A_t$ is stochastic).
Our procedure starts with a partition step which aims at identifying the “relevant” neighborhood of an observation x R d with respect to a given polygonal line, and then proceeds with the definition of the neighborhood of an action f . We then provide the full implementation and prove a regret bound.
Partition. For any polygonal line $\mathbf{f}$ with $k$ segments, we denote by $V = (v_1, \ldots, v_{k+1})$ its vertices and by $s_i$, $i = 1, \ldots, k$, the line segments connecting $v_i$ and $v_{i+1}$. In the sequel, we use $\mathbf{f}(V)$ to represent the polygonal line formed by connecting consecutive vertices in $V$ if no confusion arises. Let $V_i$, $i = 1, \ldots, k+1$, and $S_i$, $i = 1, \ldots, k$, be the Voronoi partitions of $\mathbb{R}^d$ with respect to $\mathbf{f}$, i.e., the regions consisting of all points closest to vertex $v_i$ or to segment $s_i$, respectively. Figure 5 shows an example of a Voronoi partition with respect to an $\mathbf{f}$ with three segments.
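Deciding which Voronoi cell ($V_i$ or $S_i$) a point falls into only requires its distances to the vertices and to the interiors of the segments. A Python sketch, with 0-based indices (names ours, for illustration):

```python
import numpy as np

def voronoi_cell(x, vertices):
    """Return ('V', i) if x is closest to vertex v_i, or ('S', i) if x is
    closest to the interior of segment s_i joining v_i and v_{i+1}
    (indices starting at 0)."""
    V = np.asarray(vertices, dtype=float)
    best, cell = np.inf, None
    for i, v in enumerate(V):
        d2 = float(np.sum((x - v) ** 2))
        if d2 < best:
            best, cell = d2, ('V', i)
    for i, (a, b) in enumerate(zip(V[:-1], V[1:])):
        d = b - a
        tau = np.dot(x - a, d) / np.dot(d, d)
        if 0 < tau < 1:                        # interior projections only
            d2 = float(np.sum((x - a - tau * d) ** 2))
            if d2 < best:
                best, cell = d2, ('S', i)
    return cell
```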
Neighborhood. For any $x \in \mathbb{R}^d$, we define the neighborhood $\mathcal{N}(x)$ with respect to $\mathbf{f}$ as the union of all Voronoi partitions whose closure intersects the two vertices connecting the projection $\mathbf{f}(s_{\mathbf{f}}(x))$ of $x$ onto $\mathbf{f}$. For example, for the point $x$ in Figure 5, its neighborhood $\mathcal{N}(x)$ is the union of $S_2$, $V_3$, $S_3$ and $V_4$. In addition, let $\mathcal{N}_t(x) = \{x_s \in \mathcal{N}(x), s = 1, \ldots, t\}$ be the set of observations $x_{1:t}$ belonging to $\mathcal{N}(x)$, and let $\bar{\mathcal{N}}_t(x)$ be its average. Let $D(M) = \sup_{x,y \in M} \|x - y\|_2$ denote the diameter of a set $M \subset \mathbb{R}^d$. We finally define the local grid $\mathcal{Q}_{\delta,t}(x)$ of $x \in \mathbb{R}^d$ at time $t$ as
$$\mathcal{Q}_{\delta,t}(x) = B\left(\bar{\mathcal{N}}_t(x),\ D\left(\mathcal{N}_t(x)\right)\right) \cap \mathcal{Q}_\delta.$$
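A sketch of this local grid in dimension 2 (plain enumeration of lattice points; `neighborhood_pts` plays the role of $\mathcal{N}_t(x)$, and the function name is ours):

```python
import numpy as np

def local_grid(neighborhood_pts, delta):
    """Sketch (in dimension 2) of the local grid: lattice points of
    spacing delta lying in the ball centred at the average of the
    observations in the neighborhood, with radius the diameter of
    that set."""
    P = np.asarray(neighborhood_pts, dtype=float)
    center = P.mean(axis=0)                                   # average of N_t(x)
    diam = max(np.linalg.norm(p - q) for p in P for q in P)   # D(N_t(x))
    lo = np.floor((center - diam) / delta).astype(int)
    hi = np.ceil((center + diam) / delta).astype(int)
    pts = []
    for i in range(lo[0], hi[0] + 1):
        for j in range(lo[1], hi[1] + 1):
            g = np.array([i * delta, j * delta])
            if np.linalg.norm(g - center) <= diam:            # keep points in the ball
                pts.append(g)
    return pts
```

The size of this grid, not the size of the full lattice, is what drives the cost of the local search below.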
We can finally proceed to the definition of the neighborhood $\mathcal{U}(\hat{\mathbf{f}}_t)$ of $\hat{\mathbf{f}}_t$. Assume $\hat{\mathbf{f}}_t$ has $k_t + 1$ vertices $V = \left( \underbrace{v_{1:(i_t-1)}}_{(i)}, \underbrace{v_{i_t:(j_t-1)}}_{(ii)}, \underbrace{v_{j_t:(k_t+1)}}_{(iii)} \right)$, where the vertices of $(ii)$ belong to $\mathcal{Q}_{\delta,t}(x_t)$ while those of $(i)$ and $(iii)$ do not. The neighborhood $\mathcal{U}(\hat{\mathbf{f}}_t)$ consists of those $\mathbf{f}$ sharing the vertices $(i)$ and $(iii)$ with $\hat{\mathbf{f}}_t$, but possibly equipped with different vertices $(ii)$ in $\mathcal{Q}_{\delta,t}(x_t)$; i.e.,
$$\mathcal{U}\left(\hat{\mathbf{f}}_t\right) = \left\{ \mathbf{f}(V'),\ V' = \left( v_{1:(i_t-1)},\ v'_{1:m},\ v_{j_t:(k_t+1)} \right) \right\},$$
where $v'_{1:m} \subset \mathcal{Q}_{\delta,t}(x_t)$ and $m$ is given by
$$m = \begin{cases} j_t - i_t - 1 & \text{(reduce the number of segments by 1)}, \\ j_t - i_t & \text{(same number of segments)}, \\ j_t - i_t + 1 & \text{(increase the number of segments by 1)}. \end{cases}$$
In Algorithm 3, we initiate the principal curve $\hat{\mathbf{f}}_1$ as the first-component line segment whose vertices are the two farthest projections of the data $x_{1:t_0}$ ($t_0$ can be set to 20 in practice) onto the first principal component line. The reward of $\mathbf{f}$ at round $t$ in this setting is therefore $r_{\mathbf{f},t} = c_0 - \Delta(\mathbf{f}, x_{t_0+t})$. Algorithm 3 has an exploration phase (when $I_t = 1$) and an exploitation phase ($I_t = 0$). In the exploration phase, it is allowed to observe the rewards of all actions and to choose an optimal perturbed action from the set $\mathcal{F}_p$ of all actions. In the exploitation phase, only the rewards of a subset of actions can be accessed, the rewards of the others being estimated by a constant, and we update our action within the neighborhood $\mathcal{U}(\hat{\mathbf{f}}_{t-1})$ of the previous action $\hat{\mathbf{f}}_{t-1}$. This local update (or search) greatly reduces the computational complexity, since $|\mathcal{U}(\hat{\mathbf{f}}_{t-1})| \ll |\mathcal{F}_p|$ when $p$ is large. In addition, this local search is enough to account for the case when $x_t$ is located within the neighborhood of $\hat{\mathbf{f}}_{t-1}$. The parameter $\beta$ needs to be carefully calibrated. It should not be too large, to ensure that the set $\mathrm{cond}(t)$ is non-empty; otherwise, all rewards are estimated by the same constant and thus lead to the same descending ordering of the tuples $\left(\sum_{s=1}^{t-1} \hat{r}_{\mathbf{f},s}\right)_{\mathbf{f} \in \mathcal{F}_p}$ and $\left(\sum_{s=1}^{t} \hat{r}_{\mathbf{f},s}\right)_{\mathbf{f} \in \mathcal{F}_p}$, and we would then face the risk of having $\hat{\mathbf{f}}_{t+1}$ in the neighborhood of $\hat{\mathbf{f}}_t$ even if we are in the exploration phase at time $t+1$. Conversely, a very small $\beta$ could result in a large bias in the estimate $\hat{r}_{\mathbf{f},t} = r_{\mathbf{f},t} / \mathbb{P}(\hat{\mathbf{f}}_t = \mathbf{f} \mid \mathcal{H}_t)$ of $r_{\mathbf{f},t}$. Note that the exploitation phase is close to, yet different from, label efficient prediction ([40], Remark 1.1), since we allow the action at time $t$ to differ from the previous one. Ref. [41] proposed the geometric resampling method to estimate the conditional probability $\mathbb{P}(\hat{\mathbf{f}}_t = \mathbf{f} \mid \mathcal{H}_t)$, since this quantity often does not have an explicit form. However, due to the simple exponential distribution of $z_{\mathbf{f}}$ chosen in our case, an explicit form of $\mathbb{P}(\hat{\mathbf{f}}_t = \mathbf{f} \mid \mathcal{H}_t)$ is available.
Algorithm 3 A locally greedy algorithm for sequentially learning principal curves.
1: Input parameters: $p > 0$, $R > 0$, $L > 0$, $\epsilon > 0$, $\alpha > 0$, $0 < \beta < 1$ and any penalty function $h$
2: Initialization: given $(x_t)_{1:t_0}$, obtain $\hat{\mathbf{f}}_1$ from the first principal component
3: For $t = 2, \ldots, T$
4:   Draw $I_t \sim \mathrm{Bernoulli}(\epsilon)$ and $z_{\mathbf{f}} \sim \pi$
5:   Let
$$\hat{\sigma}_t = \mathrm{sort}\left( \mathbf{f},\ \sum_{s=1}^{t-1} \hat{r}_{\mathbf{f},s} - \frac{1}{\eta_{t-1}} h(\mathbf{f}) + \frac{1}{\eta_{t-1}} z_{\mathbf{f}} \right),$$
  i.e., sort all $\mathbf{f} \in \mathcal{F}_p$ in descending order according to their perturbed cumulative reward up to $t-1$.
6:   If $I_t = 1$, set $A_t = \mathcal{F}_p$, $\hat{\mathbf{f}}_t = \hat{\sigma}_t(A_t)$, observe $r_{\hat{\mathbf{f}}_t,t}$ and set
$$\hat{r}_{\mathbf{f},t} = r_{\mathbf{f},t} \quad \text{for all } \mathbf{f} \in \mathcal{F}_p.$$
7:   If $I_t = 0$, set $A_t = \mathcal{U}(\hat{\mathbf{f}}_{t-1})$, $\hat{\mathbf{f}}_t = \hat{\sigma}_t(A_t)$, observe $r_{\hat{\mathbf{f}}_t,t}$ and set
$$\hat{r}_{\mathbf{f},t} = \begin{cases} \dfrac{r_{\mathbf{f},t}}{\mathbb{P}\left(\hat{\mathbf{f}}_t = \mathbf{f} \mid \mathcal{H}_t\right)} & \text{if } \mathbf{f} \in \mathcal{U}(\hat{\mathbf{f}}_{t-1}) \cap \mathrm{cond}(t) \text{ and } \hat{\mathbf{f}}_t = \mathbf{f}, \\ \alpha & \text{otherwise}, \end{cases}$$
  where $\mathcal{H}_t$ denotes all the randomness before time $t$ and $\mathrm{cond}(t) = \left\{ \mathbf{f} \in \mathcal{F}_p : \mathbb{P}\left(\hat{\mathbf{f}}_t = \mathbf{f} \mid \mathcal{H}_t\right) > \beta \right\}$. In particular, when $t = 1$, we set $\hat{r}_{\mathbf{f},1} = r_{\mathbf{f},1}$ for all $\mathbf{f} \in \mathcal{F}_p$, $\mathcal{U}(\hat{\mathbf{f}}_0) = \emptyset$ and $\hat{r}_{\hat{\sigma}_1(\mathcal{U}(\hat{\mathbf{f}}_0)),1} \equiv 0$.
8: End for
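The exploration/exploitation choice at the heart of Algorithm 3 reduces to a few lines. The sketch below only covers the action-selection step, with rewards, their importance-weighted estimates and the neighborhood construction left out (names ours, for illustration):

```python
import numpy as np

def locally_greedy_step(cum_reward, available, eps, rng):
    """One round of the exploration/exploitation choice: with probability
    eps rank all actions (exploration), otherwise rank only the
    neighbourhood `available` of the previous pick (exploitation).
    `cum_reward[i]` is the perturbed cumulative reward of action i."""
    explore = bool(rng.random() < eps)              # I_t ~ Bernoulli(eps)
    pool = range(len(cum_reward)) if explore else available
    pick = max(pool, key=lambda i: cum_reward[i])   # highest-ranked in the pool
    return pick, explore
```

In the exploitation branch, only the (typically small) neighborhood is scanned, which is where the computational savings come from.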
Theorem 3.
Assume that $p > 6$ and $T \geq 2|\mathcal{F}_p|^2$, and let $\beta = |\mathcal{F}_p|^{-1/2} T^{-1/4}$, $\alpha = c_0 \beta$, $\hat{c}_0 = \frac{2c_0}{\beta}$, $\epsilon = 1 - |\mathcal{F}_p|^{1/2} T^{-1/4}$ and
$$\eta_1 = \eta_2 = \cdots = \eta_T = \sqrt{\frac{c_1 p + c_2 L + c_3}{T(e-1)\hat{c}_0}}.$$
Then the procedure described in Algorithm 3 satisfies the regret bound
$$\sum_{t=1}^T \mathbb{E}\left[\Delta\left(\hat{\mathbf{f}}_t, x_t\right)\right] \leq \inf_{\mathbf{f} \in \mathcal{F}_p} \mathbb{E}\left[\sum_{t=1}^T \Delta\left(\mathbf{f}, x_t\right)\right] + \mathcal{O}\left(T^{3/4}\right).$$
The proof of Theorem 3 is presented in Section 6. The regret is upper-bounded by a term of order $|\mathcal{F}_p|^{1/2} T^{3/4}$, sublinear in $T$. The term $(1-\epsilon) c_0 T = c_0 |\mathcal{F}_p|^{1/2} T^{3/4}$ is the price to pay for the local search (with proportion $1-\epsilon$) of the polygonal line $\hat{\mathbf{f}}_t$ in the neighborhood of the previous $\hat{\mathbf{f}}_{t-1}$. If $\epsilon = 1$, we would have $\hat{c}_0 = c_0$, the remainder terms due to exploitation would vanish, and the upper bound would reduce to that of Theorem 2. In addition, our algorithm achieves a rate that is smaller (in terms of both the number $|\mathcal{F}_p|$ of actions and the number of rounds $T$) than that of [39], since at each time the set of available actions for our algorithm is either the whole action set or a neighborhood of the previous action, whereas [39] considers at each time only a partial and independent stochastic set of available actions generated from a predefined distribution.

5. Numerical Experiments

We illustrate the performance of Algorithm 3 on synthetic and real-life data. Our implementation (hereafter denoted by slpc, for Sequential Learning of Principal Curves) is conducted in the R language; our most natural competitors are therefore the R package princurve, which implements the algorithm from [10], and incremental SCMS, the algorithm from [23]. We let $p = 50$, $R = \max_{t=1,\ldots,T} \|x_t\|_2 / \sqrt{d}$ and $L = 0.1 p \sqrt{d} R$. The spacing $\delta$ of the lattice is adjusted to the scale of the data.
Synthetic data. We generate a dataset $x_t \in \mathbb{R}^2$, $t = 1, \dots, 500$, uniformly along the curve $y = 0.05 \times (x-5)^3$, $x \in [0, 10]$. Table 1 shows the regret (first row) for
  • the ground truth (sum of squared distances of all points to the true curve),
  • princurve and incremental SCMS (sum of squared distances between each new observation $x_{t+1}$ and the curve fitted on observations $x_{1:t}$),
  • slpc (the regret being equal to $\sum_{t=0}^{T-1} \mathbb{E}\left[\Delta\left(\hat{f}_{t+1}, x_{t+1}\right)\right]$).
The mean computation times for different values of the time horizon $T$ are also reported.
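The evaluation protocol behind these numbers is a standard predict-then-observe loop: fit on the data seen so far, then pay the loss on the next, unseen point. A minimal sketch (in Python rather than the R of our actual implementation; `fit` and `sq_dist` are placeholder functions, not part of any package):

```python
def cumulative_regret(stream, fit, sq_dist):
    """Predict-then-observe evaluation: fit a curve on x_{1:t}, then pay
    the squared distance from the unseen point x_{t+1} to that curve."""
    total = 0.0
    history = []
    for x in stream:
        if history:                      # need at least one point to fit
            curve = fit(history)
            total += sq_dist(curve, x)   # loss on the not-yet-seen point
        history.append(x)
    return total

# Toy check: the "curve" is just the mean of past points, with the
# squared Euclidean distance as the loss.
points = [(0.0, 0.0), (2.0, 0.0), (1.0, 0.0)]
mean_fit = lambda h: (sum(p[0] for p in h) / len(h), sum(p[1] for p in h) / len(h))
sq = lambda c, x: (c[0] - x[0]) ** 2 + (c[1] - x[1]) ** 2
print(cumulative_regret(points, mean_fit, sq))  # 4.0
```

Any of the three principal-curve methods can be plugged in for `fit`; only the fitting step differs between competitors.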
Table 1 demonstrates the advantages of our method slpc, which achieves the best tradeoff between performance (in terms of regret) and runtime. Although princurve outperforms the other two algorithms in terms of computation time, it yields the largest regret, since it outputs a curve which does not pass "in the middle" of the data but rather bends towards the curvature of the data cloud, as shown in Figure 6, where the predicted principal curves $\hat{f}_{t+1}$ for princurve, incremental SCMS and slpc are presented. incremental SCMS and slpc both yield satisfactory results, although the mean computation time of slpc is significantly smaller than that of incremental SCMS (the reason being that incremental SCMS must compute eigenvectors of the Hessian of the estimated probability density function). Figure 7 shows the estimated regret of slpc and its per-round value (i.e., the cumulative loss divided by the number of rounds), both with respect to the round $t$. The jumps in the per-round curve occur at the beginning, due to the initialization from a first principal component and to the collection of new data. As data accumulate, the vanishing pattern of the per-round curve illustrates that the regret is sublinear in $t$, which matches the theoretical results above.
In addition, to better illustrate how slpc proceeds between two epochs, Figure 8 focuses on the impact of collecting a new data point on the principal curve. Only a local vertex is impacted, whereas the rest of the principal curve remains unaltered. This cutdown in algorithmic complexity is one of the key assets of slpc.
Synthetic data in high dimension. We also apply our algorithm to a dataset $\{x_t \in \mathbb{R}^6,\ t = 1, 2, \dots, 200\}$ in higher dimension, generated uniformly along the parametric curve whose coordinates are
$$t \mapsto \left(0.5\, t \cos(t),\ 0.5\, t \sin(t),\ 0.5\, t,\ t,\ t^2,\ \ln(t+1)\right),$$
where $t$ takes 100 equidistant values in $[0, 2\pi]$. To the best of our knowledge, the algorithms of [10,16,18] were only tested on two-dimensional data; this example illustrates that our algorithm also works on higher-dimensional data. Table 2 shows the regret for the ground truth, princurve, incremental SCMS and slpc.
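For reproducibility, the generating curve above is easy to sample; a possible sketch (our own Python illustration, with hypothetical helper names):

```python
import math

def curve_point(t):
    """One point of the 6-dimensional parametric curve used in this experiment."""
    return (0.5 * t * math.cos(t), 0.5 * t * math.sin(t),
            0.5 * t, t, t ** 2, math.log(t + 1.0))

def make_dataset(n):
    """n points for n equidistant parameter values in [0, 2*pi]."""
    return [curve_point(2.0 * math.pi * i / (n - 1)) for i in range(n)]

data = make_dataset(100)
print(len(data), len(data[0]))  # 100 6
```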
In addition, Figure 9 shows the behaviour of slpc (green) on each dimension.
Seismic data. Seismic data spanning long periods of time are essential for a thorough understanding of earthquakes. The "Centennial Earthquake Catalog" [42] aims at providing a realistic picture of the seismicity distribution on Earth: it consists of a global catalog of locations and magnitudes of instrumentally recorded earthquakes from 1900 to 2008. We focus on a particularly representative active seismic zone (a lithospheric border close to Australia), with longitude between 130°E and 180°E and latitude between 70°S and 30°N, comprising $T = 218$ seismic recordings. As shown in Figure 10, slpc nicely recovers the tectonic plate boundary, whereas both princurve and incremental SCMS (even with a well-calibrated bandwidth) fail to do so.
Lastly, since no ground truth is available, we used the $R^2$ coefficient to assess performance (residuals being replaced by the squared distances between data points and their projections onto the principal curve). The average over 10 trials was 0.990.
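This pseudo-$R^2$ is one minus the ratio of the sum of squared projection distances to the total sum of squares around the mean; a possible sketch (our own formulation, assuming the squared projection distances have already been computed):

```python
def pseudo_r2(points, sq_proj_dists):
    """R^2-style score in which regression residuals are replaced by the
    squared distances from data points to their projections on the curve."""
    d, n = len(points[0]), len(points)
    mean = [sum(p[k] for p in points) / n for k in range(d)]
    ss_tot = sum(sum((p[k] - mean[k]) ** 2 for k in range(d)) for p in points)
    ss_res = sum(sq_proj_dists)
    return 1.0 - ss_res / ss_tot

# Zero residuals (points exactly on the curve) give a perfect score.
pts = [(0.0, 0.0), (1.0, 1.0), (2.0, 2.0)]
print(pseudo_r2(pts, [0.0, 0.0, 0.0]))  # 1.0
```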
Back to Seismic Data. Figure 11 is taken from the USGS website (https://earthquake.usgs.gov/data/centennial/) and gives the global locations of earthquakes for the period 1900–1999. The seismic data (latitude, longitude, magnitude of earthquakes, etc.) used in the present paper may be downloaded from this website.
Daily Commute Data. The identification of segments of personal daily commuting trajectories can help taxi or bus companies optimize their fleets and increase frequencies on segments with high commuting activity. Sequential principal curves appear to be an ideal tool to address this learning problem: we tested our algorithm on trajectory data from the University of Illinois at Chicago (https://www.cs.uic.edu/~boxu/mp2p/gps_data.html). The data were obtained from the GPS reading systems carried by two laboratory members during their daily commute, over 6 months, in Cook County and DuPage County, Illinois. Figure 12 presents the curves learned by princurve and slpc on geolocalization data for the first person on May 30. A particularly remarkable asset of slpc is that abrupt changes of curvature in the data sequence are captured well, whereas princurve does not enjoy the same flexibility. Again, we used the $R^2$ coefficient to assess performance (residuals being replaced by the squared distances between data points and their projections onto the principal curve). The average over 10 trials was 0.998.

6. Proofs

This section contains the proof of Theorem 2 (note that Theorem 1 is a straightforward consequence, obtained with $\eta_t = \eta$, $t = 0, \dots, T$) and the proof of Theorem 3 (which involves intermediary lemmas). Let us first define, for each $t = 0, \dots, T$, the following forecaster sequence $(\hat{f}_t^\star)_t$:
$$\hat{f}_0^\star = \underset{f \in \mathcal{F}_p}{\arg\inf}\, \Delta(f, 0) = \underset{f \in \mathcal{F}_p}{\arg\inf} \left\{ \frac{1}{\eta_0} h(f) - \frac{1}{\eta_0} z_f \right\}, \qquad \hat{f}_t^\star = \underset{f \in \mathcal{F}_p}{\arg\inf}\, \sum_{s=0}^{t} \Delta(f, s) = \underset{f \in \mathcal{F}_p}{\arg\inf} \left\{ \sum_{s=1}^{t} \Delta(f, x_s) + \frac{1}{\eta_{t-1}} h(f) - \frac{1}{\eta_{t-1}} z_f \right\}, \quad t \geq 1,$$
where $\Delta(f, s)$ stands for the $s$-th increment of the penalized and perturbed cumulative loss above (in particular, $\Delta(f, 0) = \frac{1}{\eta_0}\left(h(f) - z_f\right)$). Note that $\hat{f}_t^\star$ is an "illegal" forecaster since it peeks into the future. In addition, denote by
$$f^\star = \underset{f \in \mathcal{F}_p}{\arg\inf} \left\{ \sum_{t=1}^{T} \Delta(f, x_t) + \frac{1}{\eta_T} h(f) \right\}$$
the polygonal line in $\mathcal{F}_p$ which minimizes the cumulative loss over the first $T$ rounds plus a penalty term. Note that $f^\star$ is deterministic, while $\hat{f}_t^\star$ is a random quantity (since it depends on the $z_f$, $f \in \mathcal{F}_p$, drawn from $\pi$). If several $f$ attain the infimum, we choose $f^\star$ as one with the smallest complexity. We now state the first of three intermediary technical results.
Lemma 1.
For any sequence $x_1, \dots, x_T$ in $\mathcal{B}(0, \sqrt{d}R)$,
$$\sum_{t=0}^{T} \Delta\left(\hat{f}_t^\star, t\right) \leq \sum_{t=0}^{T} \Delta\left(\hat{f}_T^\star, t\right), \quad \pi\text{-almost surely}. \tag{5}$$
Proof. 
We proceed by induction on $T$. Clearly, (5) holds for $T = 0$. Assume that (5) holds for $T-1$:
$$\sum_{t=0}^{T-1} \Delta\left(\hat{f}_t^\star, t\right) \leq \sum_{t=0}^{T-1} \Delta\left(\hat{f}_{T-1}^\star, t\right).$$
Adding $\Delta(\hat{f}_T^\star, T)$ to both sides, and noting that $\hat{f}_{T-1}^\star$ minimizes $\sum_{t=0}^{T-1} \Delta(\cdot, t)$ (so that $\sum_{t=0}^{T-1} \Delta(\hat{f}_{T-1}^\star, t) \leq \sum_{t=0}^{T-1} \Delta(\hat{f}_T^\star, t)$), concludes the proof. □
By (5) and the definitions of $\hat{f}_T^\star$ and $f^\star$, we have, $\pi$-almost surely,
$$\begin{aligned}
\sum_{t=1}^{T} \Delta\left(\hat{f}_t^\star, x_t\right) &\leq \sum_{t=1}^{T} \Delta\left(\hat{f}_T^\star, x_t\right) + \frac{1}{\eta_T} h\left(\hat{f}_T^\star\right) - \frac{1}{\eta_T} z_{\hat{f}_T^\star} + \sum_{t=0}^{T} \left(\frac{1}{\eta_{t-1}} - \frac{1}{\eta_t}\right) \left(h\left(\hat{f}_t^\star\right) - z_{\hat{f}_t^\star}\right) \\
&\leq \sum_{t=1}^{T} \Delta\left(f^\star, x_t\right) + \frac{1}{\eta_T} h\left(f^\star\right) - \frac{1}{\eta_T} z_{f^\star} + \sum_{t=0}^{T} \left(\frac{1}{\eta_{t-1}} - \frac{1}{\eta_t}\right) \left(h\left(\hat{f}_t^\star\right) - z_{\hat{f}_t^\star}\right) \\
&= \inf_{f \in \mathcal{F}_p} \left\{ \sum_{t=1}^{T} \Delta(f, x_t) + \frac{1}{\eta_T} h(f) \right\} - \frac{1}{\eta_T} z_{f^\star} + \sum_{t=0}^{T} \left(\frac{1}{\eta_{t-1}} - \frac{1}{\eta_t}\right) \left(h\left(\hat{f}_t^\star\right) - z_{\hat{f}_t^\star}\right),
\end{aligned}$$
where $1/\eta_{-1} := 0$ by convention. The second inequality and the final equality are due, respectively, to the definitions of $\hat{f}_T^\star$ and of $f^\star$. Hence,
$$\begin{aligned}
\mathbb{E}\left[\sum_{t=1}^{T} \Delta\left(\hat{f}_t^\star, x_t\right)\right] &\leq \inf_{f \in \mathcal{F}_p} \left\{ \sum_{t=1}^{T} \Delta(f, x_t) + \frac{1}{\eta_T} h(f) \right\} - \frac{1}{\eta_T} \mathbb{E}\left[z_{f^\star}\right] + \sum_{t=0}^{T} \left(\frac{1}{\eta_t} - \frac{1}{\eta_{t-1}}\right) \mathbb{E}\left[-h\left(\hat{f}_t^\star\right) + z_{\hat{f}_t^\star}\right] \\
&\leq \inf_{f \in \mathcal{F}_p} \left\{ \sum_{t=1}^{T} \Delta(f, x_t) + \frac{1}{\eta_T} h(f) \right\} + \sum_{t=0}^{T} \left(\frac{1}{\eta_t} - \frac{1}{\eta_{t-1}}\right) \mathbb{E}\left[\sup_{f \in \mathcal{F}_p} \left\{-h(f) + z_f\right\}\right] \\
&= \inf_{f \in \mathcal{F}_p} \left\{ \sum_{t=1}^{T} \Delta(f, x_t) + \frac{1}{\eta_T} h(f) \right\} + \frac{1}{\eta_T} \mathbb{E}\left[\sup_{f \in \mathcal{F}_p} \left\{-h(f) + z_f\right\}\right],
\end{aligned}$$
where the second inequality is due to $\mathbb{E}\left[z_{f^\star}\right] \geq 0$ and to $\frac{1}{\eta_t} - \frac{1}{\eta_{t-1}} \geq 0$ for $t = 0, 1, \dots, T$, since $(\eta_t)_t$ is decreasing in $t$ in Theorem 2. In addition, for $y \geq 0$, one has
$$\mathbb{P}\left(-h(f) + z_f > y\right) = e^{-h(f) - y}.$$
Hence, for any $y \geq 0$,
$$\mathbb{P}\left(\sup_{f \in \mathcal{F}_p} \left\{-h(f) + z_f\right\} > y\right) \leq \sum_{f \in \mathcal{F}_p} \mathbb{P}\left(z_f \geq h(f) + y\right) = \sum_{f \in \mathcal{F}_p} e^{-h(f)} e^{-y} = u\, e^{-y},$$
where $u = \sum_{f \in \mathcal{F}_p} e^{-h(f)}$. Therefore, we have
$$\begin{aligned}
\mathbb{E}\left[\sup_{f \in \mathcal{F}_p} \left\{-h(f) + z_f\right\}\right] - \ln u &\leq \mathbb{E}\left[\max\left\{0,\ \sup_{f \in \mathcal{F}_p} \left\{-h(f) + z_f\right\} - \ln u\right\}\right] \\
&\leq \int_0^{+\infty} \mathbb{P}\left(\max\left\{0,\ \sup_{f \in \mathcal{F}_p} \left\{-h(f) + z_f\right\} - \ln u\right\} > y\right) \mathrm{d}y \\
&= \int_0^{+\infty} \mathbb{P}\left(\sup_{f \in \mathcal{F}_p} \left\{-h(f) + z_f\right\} > y + \ln u\right) \mathrm{d}y \\
&\leq \int_0^{+\infty} u\, e^{-(y + \ln u)}\, \mathrm{d}y = 1.
\end{aligned}$$
We thus obtain
$$\mathbb{E}\left[\sum_{t=1}^{T} \Delta\left(\hat{f}_t^\star, x_t\right)\right] \leq \inf_{f \in \mathcal{F}_p} \left\{ \sum_{t=1}^{T} \Delta(f, x_t) + \frac{1}{\eta_T} h(f) \right\} + \frac{1}{\eta_T} \left(1 + \ln \sum_{f \in \mathcal{F}_p} e^{-h(f)}\right). \tag{6}$$
Next, we control the regret of Algorithm 2.
Lemma 2.
Assume that the $(z_f)_{f \in \mathcal{F}_p}$ are i.i.d. from the exponential distribution, i.e., with density $\pi(z) = e^{-z} \mathbb{1}_{\{z > 0\}}$. Assume that $\sup_{t=1,\dots,T} \eta_{t-1} \leq \frac{1}{d(2R+\delta)^2}$, and define $c_0 = d(2R+\delta)^2$. Then, for any sequence $(x_t)_{t=1,\dots,T}$ in $\mathcal{B}(0, \sqrt{d}R)$,
$$\sum_{t=1}^{T} \mathbb{E}\left[\Delta\left(\hat{f}_t, x_t\right)\right] \leq \sum_{t=1}^{T} \left(1 + \eta_{t-1} c_0 (e-1)\right) \mathbb{E}\left[\Delta\left(\hat{f}_t^\star, x_t\right)\right]. \tag{7}$$
Proof. 
Let us denote by
$$F_t\left(z_f\right) = \Delta\left(\hat{f}_t, x_t\right) = \Delta\left(\underset{f \in \mathcal{F}_p}{\arg\inf} \left\{ \sum_{s=1}^{t-1} \Delta(f, x_s) + \frac{1}{\eta_{t-1}} h(f) - \frac{1}{\eta_{t-1}} z_f \right\},\ x_t\right)$$
the instantaneous loss suffered by the polygonal line $\hat{f}_t$ when $x_t$ is revealed. We have
$$\begin{aligned}
\mathbb{E}\left[\Delta\left(\hat{f}_t^\star, x_t\right)\right] &= \int F_t\left(z - \eta_{t-1} \Delta(f, x_t)\right) \pi(z)\, \mathrm{d}z = \int F_t(z)\, \pi\left(z + \eta_{t-1} \Delta(f, x_t)\right) \mathrm{d}z = \int F_t(z)\, e^{-z - \eta_{t-1} \Delta(f, x_t)}\, \mathrm{d}z \\
&\geq e^{-\eta_{t-1} d(2R+\delta)^2} \int F_t(z)\, e^{-z}\, \mathrm{d}z = e^{-\eta_{t-1} d(2R+\delta)^2}\, \mathbb{E}\left[\Delta\left(\hat{f}_t, x_t\right)\right],
\end{aligned}$$
where the inequality is due to the fact that $0 \leq \Delta(f, x) \leq d(2R+\delta)^2$ holds uniformly for any $f \in \mathcal{F}_p$ and $x \in \mathcal{B}(0, \sqrt{d}R)$. Rearranging, summing over $t$ on both sides, and using the elementary inequality $e^x \leq 1 + (e-1)x$ for $x \in (0, 1]$ concludes the proof. □
Lemma 3.
For $k \in \llbracket 1, p \rrbracket$, we control the cardinality of the set $\left\{f \in \mathcal{F}_p,\ \mathcal{K}(f) = k\right\}$ as
$$\ln \left|\left\{f \in \mathcal{F}_p,\ \mathcal{K}(f) = k\right\}\right| \leq \left(\ln\left(8 p e V_d\right) + 3 d^{\frac{3}{2}}\right) k + \frac{\sqrt{d}}{\delta} L + d \ln \frac{2}{\delta} + d \ln \frac{\sqrt{d}(2R+\delta)}{\delta} \overset{\Delta}{=} c_1 k + c_2 L + c_3,$$
where V d denotes the volume of the unit ball in R d .
Proof. 
First, let $\mathcal{N}_{k,\delta}$ denote the set of polygonal lines with $k$ segments and whose vertices are in $Q_\delta$. Notice that $\mathcal{N}_{k,\delta}$ is different from $\left\{f \in \mathcal{F}_p,\ \mathcal{K}(f) = k\right\}$ and that
$$\left|\left\{f \in \mathcal{F}_p,\ \mathcal{K}(f) = k\right\}\right| \leq \binom{p}{k} \left|\mathcal{N}_{k,\delta}\right|.$$
Hence,
$$\begin{aligned}
\ln \left|\left\{f \in \mathcal{F}_p,\ \mathcal{K}(f) = k\right\}\right| &\leq \ln \binom{p}{k} + \ln \left|\mathcal{N}_{k,\delta}\right| \\
&\leq k \ln \frac{pe}{k} + k \left(\ln\left(8 V_d\right) + 3 d^{\frac{3}{2}}\right) + \frac{\sqrt{d}}{\delta} L + d \ln \frac{2}{\delta} + d \ln \frac{\sqrt{d}(2R+\delta)}{\delta} \\
&\leq k \ln(pe) + k \left(\ln\left(8 V_d\right) + 3 d^{\frac{3}{2}}\right) + \frac{\sqrt{d}}{\delta} L + d \ln \frac{2}{\delta} + d \ln \frac{\sqrt{d}(2R+\delta)}{\delta},
\end{aligned}$$
where the second inequality is a consequence of the elementary inequality $\binom{p}{k} \leq \left(\frac{pe}{k}\right)^k$ combined with Lemma 2 in [16]. □
We now have all the ingredients to prove Theorem 1 and Theorem 2.
First, combining (6) and (7) yields
$$\begin{aligned}
\sum_{t=1}^{T} \mathbb{E}\left[\Delta\left(\hat{f}_t, x_t\right)\right] &\leq \inf_{f \in \mathcal{F}_p} \left\{ \sum_{t=1}^{T} \Delta(f, x_t) + \frac{1}{\eta_T} h(f) \right\} + \frac{1}{\eta_T} \left(1 + \ln \sum_{f \in \mathcal{F}_p} e^{-h(f)}\right) + c_0(e-1) \sum_{t=1}^{T} \eta_{t-1} \mathbb{E}\left[\Delta\left(\hat{f}_t^\star, x_t\right)\right] \\
&= \inf_{k \in \llbracket 1, p \rrbracket}\ \inf_{\substack{f \in \mathcal{F}_p \\ \mathcal{K}(f) = k}} \left\{ \sum_{t=1}^{T} \Delta(f, x_t) + \frac{h(f)}{\eta_T} \right\} + \frac{1}{\eta_T} \left(1 + \ln \sum_{f \in \mathcal{F}_p} e^{-h(f)}\right) + c_0(e-1) \sum_{t=1}^{T} \eta_{t-1} \mathbb{E}\left[\Delta\left(\hat{f}_t^\star, x_t\right)\right].
\end{aligned}$$
Assume now that $\eta_t = \eta$ for every $t = 0, \dots, T$, and that $h(f) = c_1 \mathcal{K}(f) + c_2 L + c_3$ for $f \in \mathcal{F}_p$. Lemma 3 then yields $\sum_{f \in \mathcal{F}_p} e^{-h(f)} \leq \sum_{k=1}^{p} k^{-k} \leq e^{1/2}$, so that $-\frac{1}{2} + \ln \sum_{f \in \mathcal{F}_p} e^{-h(f)} \leq 0$, and moreover
$$\begin{aligned}
\sum_{t=1}^{T} \mathbb{E}\left[\Delta\left(\hat{f}_t, x_t\right)\right] &\leq S_{T,h,\eta} + \frac{1}{\eta} \left(1 + \ln \sum_{f \in \mathcal{F}_p} e^{-h(f)}\right) + c_0(e-1)\eta \sum_{t=1}^{T} \mathbb{E}\left[\Delta\left(\hat{f}_t^\star, x_t\right)\right] \\
&\leq S_{T,h,\eta} + \frac{3}{2\eta} + c_0(e-1)\eta\, S_{T,h,\eta} \\
&\leq S_{T,h,\eta} + \frac{3}{2\eta} + \eta\, c_0(e-1) \inf_{f \in \mathcal{F}_p} \sum_{t=1}^{T} \Delta(f, x_t) + c_0(e-1)\left(c_1 p + c_2 L + c_3\right),
\end{aligned}$$
where
$$S_{T,h,\eta} = \inf_{k \in \llbracket 1, p \rrbracket}\ \inf_{\substack{f \in \mathcal{F}_p \\ \mathcal{K}(f) = k}} \left\{ \sum_{t=1}^{T} \Delta(f, x_t) + \frac{h(f)}{\eta} \right\}$$
and the second inequality is obtained with Lemma 1 and (6). By setting
$$\eta = \sqrt{\frac{c_1 p + c_2 L + c_3}{c_0(e-1) \inf_{f \in \mathcal{F}_p} \sum_{t=1}^{T} \Delta(f, x_t)}},$$
we obtain
$$\sum_{t=1}^{T} \mathbb{E}\left[\Delta\left(\hat{f}_t, x_t\right)\right] \leq \inf_{k \in \llbracket 1, p \rrbracket}\ \inf_{\substack{f \in \mathcal{F}_p \\ \mathcal{K}(f) = k}} \left\{ \sum_{t=1}^{T} \Delta(f, x_t) + \sqrt{c_0(e-1)}\, r_{T,k,L} \right\} + \sqrt{c_0(e-1)}\, r_{T,p,L} + c_0(e-1)\left(c_1 p + c_2 L + c_3\right),$$
where $r_{T,k,L} = \sqrt{\inf_{f \in \mathcal{F}_p} \sum_{t=1}^{T} \Delta(f, x_t)\, \left(c_1 k + c_2 L + c_3\right)}$. This proves Theorem 1.
Finally, assume that
$$\eta_0 = \sqrt{\frac{c_1 p + c_2 L + c_3}{c_0(e-1)}} \quad \text{and} \quad \eta_t = \sqrt{\frac{c_1 p + c_2 L + c_3}{c_0(e-1)\, t}}, \quad t = 1, \dots, T.$$
Since $\mathbb{E}\left[\Delta\left(\hat{f}_t^\star, x_t\right)\right] \leq c_0$ for any $t = 1, \dots, T$, we have
$$\begin{aligned}
\sum_{t=1}^{T} \mathbb{E}\left[\Delta\left(\hat{f}_t, x_t\right)\right] &\leq \inf_{k \in \llbracket 1, p \rrbracket}\ \inf_{\substack{f \in \mathcal{F}_p \\ \mathcal{K}(f) = k}} \left\{ \sum_{t=1}^{T} \Delta(f, x_t) + \frac{h(f)}{\eta_T} \right\} + \frac{1}{\eta_T} \left(1 + \ln \sum_{f \in \mathcal{F}_p} e^{-h(f)}\right) + c_0^2(e-1) \sum_{t=1}^{T} \eta_{t-1} \\
&\leq \inf_{k \in \llbracket 1, p \rrbracket}\ \inf_{\substack{f \in \mathcal{F}_p \\ \mathcal{K}(f) = k}} \left\{ \sum_{t=1}^{T} \Delta(f, x_t) + \sqrt{c_0(e-1) T \left(c_1 k + c_2 L + c_3\right)} \right\} + 2\sqrt{c_0(e-1) T \left(c_1 p + c_2 L + c_3\right)},
\end{aligned}$$
which concludes the proof of Theorem 2. □
Lemma 4.
Using Algorithm 3, if $0 < \epsilon \leq 1$, $0 < \beta < 1$, $\alpha \geq \frac{(1-\beta) c_0}{\beta}$ and $|\mathcal{U}(\hat{f}_{t-1})| \geq 2$ for all $t \geq 2$, where $|\mathcal{U}(\hat{f}_{t-1})|$ is the cardinality of $\mathcal{U}(\hat{f}_{t-1})$, then we have
$$\sum_{t=1}^{T} \mathbb{E}\left[r_{\hat{f}_t, t}\right] \geq \sum_{t=1}^{T} \mathbb{E}\left[\hat{r}_{\hat{\sigma}_t(A_t), t}\right] - 2(1-\epsilon)\, \alpha \beta \sum_{t=1}^{T} \left|\mathcal{U}\left(\hat{f}_{t-1}\right)\right|.$$
Proof. 
First notice that $A_t = \mathcal{U}(\hat{f}_{t-1})$ if $I_t = 0$, and that for $t \geq 2$,
$$\begin{aligned}
\mathbb{E}\left[r_{\hat{f}_t, t} \,\middle|\, \mathcal{H}_t, I_t = 0\right] &= \mathbb{E}\left[r_{\hat{\sigma}_t(A_t), t} \,\middle|\, \mathcal{H}_t, I_t = 0\right] \\
&= \sum_{f \in A_t \cap \mathrm{cond}(t)} r_{f,t}\, \mathbb{P}\left(\hat{\sigma}_t(A_t) = f \,\middle|\, \mathcal{H}_t\right) + \sum_{f \in A_t \cap \mathrm{cond}(t)^c} r_{f,t}\, \mathbb{P}\left(\hat{\sigma}_t(A_t) = f \,\middle|\, \mathcal{H}_t\right) \\
&\geq \left[\sum_{f \in A_t \cap \mathrm{cond}(t)} r_{f,t} + \sum_{f \in A_t \cap \mathrm{cond}(t)^c} \alpha\, \mathbb{P}\left(\hat{\sigma}_t(A_t) = f \,\middle|\, \mathcal{H}_t\right)\right] - (1-\beta) \sum_{f \in A_t \cap \mathrm{cond}(t)} r_{f,t} - \sum_{f \in A_t \cap \mathrm{cond}(t)^c} \left(\alpha - r_{f,t}\right) \mathbb{P}\left(\hat{\sigma}_t(A_t) = f \,\middle|\, \mathcal{H}_t\right) \\
&= \mathbb{E}\left[\hat{r}_{\hat{\sigma}_t(A_t), t} \,\middle|\, \mathcal{H}_t, I_t = 0\right] - (1-\beta) \sum_{f \in A_t \cap \mathrm{cond}(t)} r_{f,t} - \sum_{f \in A_t \cap \mathrm{cond}(t)^c} \left(\alpha - r_{f,t}\right) \mathbb{P}\left(\hat{\sigma}_t(A_t) = f \,\middle|\, \mathcal{H}_t\right) \\
&\geq \mathbb{E}\left[\hat{r}_{\hat{\sigma}_t(A_t), t} \,\middle|\, \mathcal{H}_t, I_t = 0\right] - (1-\beta) c_0 \left|A_t\right| - \alpha \beta \left|A_t\right| \\
&\geq \mathbb{E}\left[\hat{r}_{\hat{\sigma}_t(A_t), t} \,\middle|\, \mathcal{H}_t, I_t = 0\right] - 2 \alpha \beta \left|A_t\right|,
\end{aligned}$$
where $\mathrm{cond}(t)^c$ denotes the complement of the set $\mathrm{cond}(t)$. The first inequality above is due to the fact that, for all $f \in A_t \cap \mathrm{cond}(t)$, we have $\mathbb{P}(\hat{\sigma}_t(A_t) = f \mid \mathcal{H}_t) > \beta$, and the last inequality uses the assumption $\alpha \geq \frac{(1-\beta) c_0}{\beta}$. For $t = 1$, the above inequality is trivial since $\hat{r}_{\hat{\sigma}_1(\mathcal{U}(\hat{f}_0)), 1} \equiv 0$ by definition. Hence, for $t \geq 1$, one has
$$\mathbb{E}\left[r_{\hat{f}_t, t} \,\middle|\, \mathcal{H}_t\right] = \epsilon\, \mathbb{E}\left[r_{\hat{\sigma}_t(\mathcal{F}_p), t} \,\middle|\, \mathcal{H}_t, I_t = 1\right] + (1-\epsilon)\, \mathbb{E}\left[r_{\hat{\sigma}_t(A_t), t} \,\middle|\, \mathcal{H}_t, I_t = 0\right] \geq \mathbb{E}\left[\hat{r}_{\hat{f}_t, t} \,\middle|\, \mathcal{H}_t\right] - 2(1-\epsilon)\, \alpha \beta \left|A_t\right|. \tag{8}$$
Summing on both sides of inequality (8) over t terminates the proof of Lemma 4. □
Lemma 5.
Let $\hat{c}_0 = \frac{c_0}{\beta} + \alpha$. If $0 < \eta_1 = \eta_2 = \cdots = \eta_T = \eta < \frac{1}{\hat{c}_0}$, then we have
$$\mathbb{E}\left[\max_{\hat{\sigma}} \left\{ \sum_{t=1}^{T} \hat{r}_{\hat{\sigma}(A_t), t} - \frac{1}{\eta} h\left(\hat{\sigma}(A_t)\right) \right\}\right] - \sum_{t=1}^{T} \mathbb{E}\left[\hat{r}_{\hat{\sigma}_t(A_t), t}\right] \leq \hat{c}_0^2 (e-1)\, \eta T + \frac{c_1 p + c_2 L + c_3}{\eta}.$$
Proof. 
By the definition of $\hat{r}_{f,t}$ in Algorithm 3, for any $f \in \mathcal{F}_p$ and $t \geq 1$, we have
$$\hat{r}_{f,t} \leq \max\left\{ \frac{r_{f,t}}{\mathbb{P}\left(\hat{f}_t = f \mid \mathcal{H}_t\right)},\ \alpha,\ r_{f,t} \right\} \leq \max\left\{ \frac{c_0}{\beta},\ \alpha \right\} \leq \hat{c}_0,$$
where the second inequality uses that $r_{f,t} \leq c_0$ for all $f$ and $t$, and that $\mathbb{P}(\hat{f}_t = f \mid \mathcal{H}_t) > \beta$ whenever $f \in \mathcal{U}(\hat{f}_{t-1}) \cap \mathrm{cond}(t)$. The rest of the proof is similar to those of Lemmas 1 and 2. In fact, if we define $\hat{\Delta}(f, x_t) := \hat{c}_0 - \hat{r}_{f,t}$, then one easily obtains the following relation when $I_t = 1$ (a similar relation holds in the case $I_t = 0$):
$$\hat{f}_t = \hat{\sigma}_t\left(\mathcal{F}_p\right) = \underset{f \in \mathcal{F}_p}{\arg\max} \left\{ \sum_{s=1}^{t-1} \hat{r}_{f,s} + \frac{1}{\eta}\left(z_f - h(f)\right) \right\} = \underset{f \in \mathcal{F}_p}{\arg\min} \left\{ \sum_{s=1}^{t-1} \hat{\Delta}(f, x_s) + \frac{1}{\eta}\left(h(f) - z_f\right) \right\}.$$
Applying Lemmas 1 and 2 to the newly defined sequence $\left(\hat{\Delta}(\hat{f}_t, x_t)\right)_{t=1,\dots,T}$ leads to the result of Lemma 5. □
The proof of the upcoming Lemma 6 requires the following submartingale inequality: let $Y_0, \dots, Y_T$ be a sequence of random variables adapted to random events $\mathcal{H}_0, \dots, \mathcal{H}_T$, such that for $1 \leq t \leq T$ the following three conditions hold:
$$\mathbb{E}\left[Y_t \,\middle|\, \mathcal{H}_t\right] \leq 0, \qquad \mathrm{Var}\left(Y_t \,\middle|\, \mathcal{H}_t\right) \leq a^2, \qquad Y_t - \mathbb{E}\left[Y_t \,\middle|\, \mathcal{H}_t\right] \leq b.$$
Then, for any $\lambda > 0$,
$$\mathbb{P}\left(\sum_{t=1}^{T} Y_t > Y_0 + \lambda\right) \leq \exp\left(-\frac{\lambda^2}{2T\left(a^2 + b^2\right)}\right).$$
The proof can be found in Chung and Lu [43] (Theorem 7.3).
Lemma 6.
Assume that $0 < \beta < \frac{1}{|\mathcal{F}_p|}$, $\alpha \geq \frac{c_0}{\beta}$ and $\eta > 0$. Then we have
$$\mathbb{E}\left[\max_{\sigma} \left\{ \sum_{t=1}^{T} r_{\sigma(A_t), t} - \frac{1}{\eta} h\left(\sigma(A_t)\right) \right\}\right] - \mathbb{E}\left[\max_{\hat{\sigma}} \left\{ \sum_{t=1}^{T} \hat{r}_{\hat{\sigma}(A_t), t} - \frac{1}{\eta} h\left(\hat{\sigma}(A_t)\right) \right\}\right] \leq \left(1 - \left|\mathcal{F}_p\right| \beta\right) \sqrt{2T\left(\frac{c_0^2}{\beta} + \alpha^2(1-\beta) + \left(c_0 + 2\alpha\right)^2\right) \ln \frac{1}{\beta}} + \left|\mathcal{F}_p\right| \beta\, c_0 T.$$
Proof. 
First, we have almost surely that
$$\max_{\sigma} \left\{ \sum_{t=1}^{T} r_{\sigma(A_t), t} - \frac{1}{\eta} h\left(\sigma(A_t)\right) \right\} - \max_{\hat{\sigma}} \left\{ \sum_{t=1}^{T} \hat{r}_{\hat{\sigma}(A_t), t} - \frac{1}{\eta} h\left(\hat{\sigma}(A_t)\right) \right\} \leq \max_{f \in \mathcal{F}_p} \sum_{t=1}^{T} \left(r_{f,t} - \hat{r}_{f,t}\right).$$
Denote $Y_{f,t} = r_{f,t} - \hat{r}_{f,t}$. Since
$$\mathbb{E}\left[\hat{r}_{f,t} \,\middle|\, \mathcal{H}_t\right] = \begin{cases} r_{f,t} + (1-\epsilon)\, \alpha \left(1 - \mathbb{P}\left(\hat{f}_t = f \,\middle|\, \mathcal{H}_t\right)\right) & \text{if } f \in \mathcal{U}\left(\hat{f}_{t-1}\right) \cap \mathrm{cond}(t), \\ \epsilon\, r_{f,t} + (1-\epsilon)\, \alpha & \text{otherwise}, \end{cases}$$
and $\alpha \geq c_0 \geq r_{f,t}$ uniformly over $f$ and $t$, we have $\mathbb{E}\left[Y_{f,t} \,\middle|\, \mathcal{H}_t\right] \leq 0$ uniformly, so the first condition is satisfied.
For the second condition, if $f \in \mathcal{U}(\hat{f}_{t-1}) \cap \mathrm{cond}(t)$, then
$$\begin{aligned}
\mathrm{Var}\left(Y_{f,t} \,\middle|\, \mathcal{H}_t\right) &= \mathbb{E}\left[\hat{r}_{f,t}^2 \,\middle|\, \mathcal{H}_t\right] - \left(\mathbb{E}\left[\hat{r}_{f,t} \,\middle|\, \mathcal{H}_t\right]\right)^2 \\
&\leq \epsilon\, r_{f,t}^2 + (1-\epsilon)\left(\frac{r_{f,t}^2}{\mathbb{P}\left(\hat{f}_t = f \,\middle|\, \mathcal{H}_t\right)} + \alpha^2 \left(1 - \mathbb{P}\left(\hat{f}_t = f \,\middle|\, \mathcal{H}_t\right)\right)\right) \\
&\leq \frac{r_{f,t}^2}{\beta} + \alpha^2(1-\beta) \leq \frac{c_0^2}{\beta} + \alpha^2(1-\beta).
\end{aligned}$$
Similarly, for $f \notin \mathcal{U}(\hat{f}_{t-1}) \cap \mathrm{cond}(t)$, one obtains $\mathrm{Var}(Y_{f,t} \mid \mathcal{H}_t) \leq \alpha^2$. Moreover, for the third condition, since
$$\mathbb{E}\left[Y_{f,t} \,\middle|\, \mathcal{H}_t\right] \geq -2\alpha,$$
we have
$$Y_{f,t} - \mathbb{E}\left[Y_{f,t} \,\middle|\, \mathcal{H}_t\right] \leq r_{f,t} + 2\alpha \leq c_0 + 2\alpha.$$
Setting $\lambda = \sqrt{2T\left(\frac{c_0^2}{\beta} + \alpha^2(1-\beta) + \left(c_0 + 2\alpha\right)^2\right) \ln \frac{1}{\beta}}$ in the submartingale inequality above leads to
$$\mathbb{P}\left(\sum_{t=1}^{T} Y_{f,t} > \lambda\right) \leq \beta.$$
Hence, by a union bound over $f \in \mathcal{F}_p$, the following inequality holds with probability at least $1 - |\mathcal{F}_p| \beta$:
$$\max_{f \in \mathcal{F}_p} \sum_{t=1}^{T} \left(r_{f,t} - \hat{r}_{f,t}\right) \leq \sqrt{2T\left(\frac{c_0^2}{\beta} + \alpha^2(1-\beta) + \left(c_0 + 2\alpha\right)^2\right) \ln \frac{1}{\beta}}.$$
Finally, noticing that $\max_{f \in \mathcal{F}_p} \sum_{t=1}^{T} \left(r_{f,t} - \hat{r}_{f,t}\right) \leq c_0 T$ almost surely, we terminate the proof of Lemma 6. □
Proof of Theorem 3.
Assume that $p > 6$, $T \geq 2|\mathcal{F}_p|^2$ and let
$$\beta = \frac{1}{|\mathcal{F}_p|^{1/2} T^{1/4}}, \quad \alpha = \frac{c_0}{\beta}, \quad \hat{c}_0 = \frac{2 c_0}{\beta}, \quad \eta_1 = \eta_2 = \cdots = \eta_T = \frac{1}{\hat{c}_0} \sqrt{\frac{c_1 p + c_2 L + c_3}{T(e-1)}}, \quad \epsilon = 1 - \frac{|\mathcal{F}_p|^{1/2}}{3^p T^{1/4}}.$$
With these values, the assumptions of Lemmas 4, 5 and 6 are satisfied. Combining their results leads to the following:
$$\begin{aligned}
\sum_{t=1}^{T} \mathbb{E}\left[r_{\hat{f}_t, t}\right] &\geq \mathbb{E}\left[\max_{\sigma} \left\{ \sum_{t=1}^{T} r_{\sigma(A_t), t} - \frac{1}{\eta} h\left(\sigma(A_t)\right) \right\}\right] - 2\alpha\beta(1-\epsilon) \sum_{t=1}^{T} \left|\mathcal{U}\left(\hat{f}_{t-1}\right)\right| - \hat{c}_0^2(e-1)\eta T - \frac{c_1 p + c_2 L + c_3}{\eta} \\
&\quad - \left(1 - \left|\mathcal{F}_p\right|\beta\right) \sqrt{2T\left(\frac{c_0^2}{\beta} + \alpha^2(1-\beta) + \left(c_0 + 2\alpha\right)^2\right) \ln \frac{1}{\beta}} - \left|\mathcal{F}_p\right|\beta\, c_0 T \\
&\geq \mathbb{E}\left[\max_{\sigma} \left\{ \sum_{t=1}^{T} r_{\sigma(A_t), t} - \frac{1}{\eta} h\left(\sigma(A_t)\right) \right\}\right] - 2(1-\epsilon)\, 3^p c_0 T - \hat{c}_0^2(e-1)\eta T - \frac{c_1 p + c_2 L + c_3}{\eta} \\
&\quad - \left(1 - \left|\mathcal{F}_p\right|\beta\right) \sqrt{2T\left(\frac{c_0^2}{\beta} + \alpha^2(1-\beta) + \left(c_0 + 2\alpha\right)^2\right) \ln \frac{1}{\beta}} - \left|\mathcal{F}_p\right|\beta\, c_0 T \\
&\geq \mathbb{E}\left[\max_{\sigma} \left\{ \sum_{t=1}^{T} r_{\sigma(A_t), t} - \frac{1}{\eta} h\left(\sigma(A_t)\right) \right\}\right] - \mathcal{O}\left(\left|\mathcal{F}_p\right|^{\frac{1}{2}} T^{\frac{3}{4}}\right),
\end{aligned}$$
where the second inequality is due to $\alpha\beta = c_0$ and to the fact that the cardinality $|\mathcal{U}(\hat{f}_{t-1})|$ is upper bounded by $3^p$ for $t \geq 1$. In addition, using the definition of $r_{f,t}$, i.e., $r_{f,t} = c_0 - \Delta(f, x_t)$, terminates the proof of Theorem 3. □

Author Contributions

Conceptualization, L.L. and B.G.; Formal analysis, L.L. and B.G.; Methodology, B.G.; Project administration, B.G.; Software, L.L.; Supervision, B.G.; Writing—original draft, L.L. and B.G.; Writing—review and editing, L.L. and B.G. All authors have read and agreed to the published version of the manuscript.

Funding

LL is funded and supported by the Fundamental Research Funds for the Central Universities (Grant No. 30106210158, CCNU19TD009) and the National Natural Science Foundation of China (Grant No. 61877023). BG is supported in part by the U.S. Army Research Laboratory and the U.S. Army Research Office, and by the U.K. Ministry of Defence and the U.K. Engineering and Physical Sciences Research Council (EPSRC) under grant number EP/R013616/1. BG acknowledges partial support from the French National Agency for Research, grants ANR-18-CE40-0016-01 and ANR-18-CE23-0015-02.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Pearson, K. On lines and planes of closest fit to systems of points in space. Philos. Mag. 1901, 2, 559–572.
  2. Spearman, C. "General Intelligence", Objectively Determined and Measured. Am. J. Psychol. 1904, 15, 201–292.
  3. Hotelling, H. Analysis of a complex of statistical variables into principal components. J. Educ. Psychol. 1933, 24, 417–441.
  4. Friedsam, H.; Oren, W.A. The application of the principal curve analysis technique to smooth beamlines. In Proceedings of the 1st International Workshop on Accelerator Alignment, Stanford, CA, USA, 31 July–2 August 1989.
  5. Brunsdon, C. Path estimation from GPS tracks. In Proceedings of the 9th International Conference on GeoComputation, Maynooth, Ireland, 3–5 September 2007.
  6. Reinhard, K.; Niranjan, M. Parametric Subspace Modeling of Speech Transitions. Speech Commun. 1999, 27, 19–42.
  7. Kégl, B.; Krzyżak, A. Piecewise linear skeletonization using principal curves. IEEE Trans. Pattern Anal. Mach. Intell. 2002, 24, 59–74.
  8. Banfield, J.D.; Raftery, A.E. Ice floe identification in satellite images using mathematical morphology and clustering about principal curves. J. Am. Stat. Assoc. 1992, 87, 7–16.
  9. Stanford, D.C.; Raftery, A.E. Finding curvilinear features in spatial point patterns: Principal curve clustering with noise. IEEE Trans. Pattern Anal. Mach. Intell. 2000, 22, 601–609.
  10. Hastie, T.; Stuetzle, W. Principal curves. J. Am. Stat. Assoc. 1989, 84, 502–516.
  11. Delicado, P. Another Look at Principal Curves and Surfaces. J. Multivar. Anal. 2001, 77, 84–116.
  12. Einbeck, J.; Tutz, G.; Evers, L. Local principal curves. Stat. Comput. 2005, 15, 301–313.
  13. Einbeck, J.; Tutz, G.; Evers, L. Data Compression and Regression through Local Principal Curves and Surfaces. Int. J. Neural Syst. 2010, 20, 177–192.
  14. Malo, J.; Gutiérrez, J. V1 non-linear properties emerge from local-to-global non-linear ICA. Netw. Comput. Neural Syst. 2006, 17, 85–102.
  15. Ozertem, U.; Erdogmus, D. Locally Defined Principal Curves and Surfaces. J. Mach. Learn. Res. 2011, 12, 1249–1286.
  16. Kégl, B. Principal Curves: Learning, Design, and Applications. Ph.D. Thesis, Concordia University, Montreal, QC, Canada, 1999.
  17. Kégl, B.; Krzyżak, A.; Linder, T.; Zeger, K. Learning and design of principal curves. IEEE Trans. Pattern Anal. Mach. Intell. 2000, 22, 281–297.
  18. Biau, G.; Fischer, A. Parameter selection for principal curves. IEEE Trans. Inf. Theory 2012, 58, 1924–1939.
  19. Barron, A.; Birgé, L.; Massart, P. Risk bounds for model selection via penalization. Probab. Theory Relat. Fields 1999, 113, 301–413.
  20. Birgé, L.; Massart, P. Minimal penalties for Gaussian model selection. Probab. Theory Relat. Fields 2007, 138, 33–73.
  21. Sandilya, S.; Kulkarni, S.R. Principal curves with bounded turn. IEEE Trans. Inf. Theory 2002, 48, 2789–2793.
  22. Cesa-Bianchi, N.; Lugosi, G. Prediction, Learning and Games; Cambridge University Press: New York, NY, USA, 2006.
  23. Rudzicz, F.; Ghassabeh, Y.A. Incremental algorithm for finding principal curves. IET Signal Process. 2015, 9, 521–528.
  24. Laparra, V.; Malo, J. Sequential Principal Curves Analysis. arXiv 2016, arXiv:1606.00856.
  25. Laparra, V.; Jiménez, S.; Camps-Valls, G.; Malo, J. Nonlinearities and Adaptation of Color Vision from Sequential Principal Curves Analysis. Neural Comput. 2012, 24, 2751–2788.
  26. Laparra, V.; Malo, J. Visual Aftereffects and Sensory Nonlinearities from a single Statistical Framework. Front. Hum. Neurosci. 2015, 9.
  27. Laparra, V.; Jiménez, S.; Tuia, D.; Camps-Valls, G.; Malo, J. Principal Polynomial Analysis. Int. J. Neural Syst. 2014, 24, 1440007.
  28. Laparra, V.; Malo, J.; Camps-Valls, G. Dimensionality Reduction via Regression in Hyperspectral Imagery. IEEE J. Sel. Top. Signal Process. 2015, 9, 1026–1036.
  29. Shawe-Taylor, J.; Williamson, R.C. A PAC analysis of a Bayes estimator. In Proceedings of the 10th Annual Conference on Computational Learning Theory, Nashville, TN, USA, 6–9 July 1997; pp. 2–9.
  30. McAllester, D.A. Some PAC-Bayesian Theorems. Mach. Learn. 1999, 37, 355–363.
  31. McAllester, D.A. PAC-Bayesian Model Averaging. In Proceedings of the 12th Annual Conference on Computational Learning Theory, Santa Cruz, CA, USA, 7–9 July 1999; pp. 164–170.
  32. Li, L.; Guedj, B.; Loustau, S. A quasi-Bayesian perspective to online clustering. Electron. J. Stat. 2018, 12, 3071–3113.
  33. Guedj, B. A Primer on PAC-Bayesian Learning. In Proceedings of the Second Congress of the French Mathematical Society, 2019; pp. 391–414.
  34. Alquier, P. User-friendly introduction to PAC-Bayes bounds. arXiv 2021, arXiv:2110.11216.
  35. Audibert, J.-Y. Fast Learning Rates in Statistical Inference through Aggregation. Ann. Stat. 2009, 37, 1591–1646.
  36. Hutter, M.; Poland, J. Adaptive Online Prediction by Following the Perturbed Leader. J. Mach. Learn. Res. 2005, 6, 639–660.
  37. Auer, P.; Cesa-Bianchi, N.; Freund, Y.; Schapire, R.E. The Nonstochastic Multiarmed Bandit Problem. SIAM J. Comput. 2003, 32, 48–77.
  38. Kleinberg, R.D.; Niculescu-Mizil, A.; Sharma, Y. Regret Bounds for Sleeping Experts and Bandits. In Proceedings of COLT, 2008.
  39. Kanade, V.; McMahan, B.; Bryan, B. Sleeping Experts and Bandits with Stochastic Action Availability and Adversarial Rewards. Artif. Intell. Stat. 2009, 3, 1137–1155.
  40. Cesa-Bianchi, N.; Lugosi, G.; Stoltz, G. Minimizing regret with label-efficient prediction. IEEE Trans. Inf. Theory 2005, 51, 2152–2162.
  41. Neu, G.; Bartók, G. An Efficient Algorithm for Learning with Semi-Bandit Feedback; Lecture Notes in Computer Science; Springer: Berlin/Heidelberg, Germany, 2013; Volume 8139, pp. 234–248.
  42. Engdahl, E.R.; Villaseñor, A. Global seismicity: 1900–1999. Int. Geophys. 2002, 81, 665–690.
  43. Chung, F.; Lu, L. Concentration Inequalities and Martingale Inequalities: A Survey. Internet Math. 2006, 3, 79–127.
Figure 1. A principal curve.
Figure 2. A principal curve and projections of data onto it.
Figure 3. Principal curves with different numbers (k) of segments. (a) Too small a k. (b) An appropriate k. (c) Too large a k.
Figure 4. An example of a lattice $\Gamma_\delta$ in $\mathbb{R}^2$ with $\delta = 1$ (spacing between blue points) and $\mathcal{B}(0, 10)$ (black circle). The red polygonal line is composed of vertices in $Q_\delta = \mathcal{B}(0, 10) \cap \Gamma_\delta$.
Figure 5. An example of a Voronoi partition.
Figure 6. Synthetic data. Black dots represent data $x_{1:t}$. The red point is the new observation $x_{t+1}$. princurve (solid red) and slpc (solid green). (a) t = 150, princurve. (b) t = 450, princurve. (c) t = 150, incremental SCMS. (d) t = 450, incremental SCMS. (e) t = 150, slpc. (f) t = 450, slpc.
Figure 7. Mean estimation of regret and per-round regret of slpc with respect to time round t, for the horizon T = 500. (a) Mean estimate of the regret of slpc over 20 trials (black line) and a bisection line (green) with respect to time round t. (b) Per-round estimated regret of slpc with respect to t.
Figure 8. Synthetic data. Zooming in: how a new data point impacts the principal curve only locally. (a) At time t = 97. (b) At time t = 98.
Figure 9. slpc (green line) on synthetic high-dimensional data, from different perspectives. Black dots represent recordings $x_{1:199}$; the red dot is the new recording $x_{200}$. (a) slpc, t = 199, 1st and 2nd coordinates. (b) slpc, t = 199, 3rd and 5th coordinates. (c) slpc, t = 199, 4th and 6th coordinates.
Figure 10. Seismic data. Black dots represent seismic recordings $x_{1:t}$; the red dot is the new recording $x_{t+1}$. (a) princurve, t = 100. (b) princurve, t = 125. (c) incremental SCMS, t = 100. (d) incremental SCMS, t = 125. (e) slpc, t = 100. (f) slpc, t = 125.
Figure 11. Seismic data from https://earthquake.usgs.gov/data/centennial/.
Figure 12. Daily commute data. Black dots represent collected locations $x_{1:t}$. The red point is the new observation $x_{t+1}$. princurve (solid red) and slpc (solid green). (a) t = 10, princurve. (b) t = 127, princurve. (c) t = 10, slpc. (d) t = 127, slpc.
Table 1. The first line is the regret (cumulative loss) on synthetic data (average over 10 trials, with standard deviation in brackets). The second and third lines are the average computation times for two values of the time horizon T. princurve and incremental SCMS are deterministic, hence the zero standard deviation for the regret.

                   Ground Truth   princurve            incremental SCMS    slpc
Regret             2.48 (0)       26.02 (0)            19.09 (0)           20.83 (3.23)
Time (T = 500)     --             0.029 s (0.0001 s)   18.79 s (0.007 s)   1.44 s (0.030 s)
Time (T = 5000)    --             0.35 s (0.006 s)     >60 s (NA)          4.13 s (0.807 s)
Table 2. Regret (cumulative loss) on synthetic high-dimensional data (average over 10 trials, with standard deviation in brackets). princurve and incremental SCMS are deterministic, hence the zero standard deviation.

                   Ground Truth   princurve    incremental SCMS   slpc
Regret             3.290 (0)      14.204 (0)   5.38 (0)           6.797 (0.409)