Is the Finite-Time Lyapunov Exponent Field a Koopman Eigenfunction?

This work serves as a bridge between two approaches to the analysis of dynamical systems: the local, geometric analysis and the global, operator-theoretic Koopman analysis. We explicitly construct vector fields in which the instantaneous Lyapunov exponent field is a Koopman eigenfunction. Restricting ourselves to polynomial vector fields to make this construction easier, we find that such vector fields do exist, and we explore whether they have a special structure, thus making a link between the geometric theory and the transfer operator theory.


Significance
Two approaches to analyzing dynamical systems are the geometric approach and the operator-theoretic approach, exemplified in recent years by invariant manifolds and Koopman operators, respectively. The geometric invariant-manifold approach is closely related to the instantaneous version of the finite-time Lyapunov exponent (FTLE) field. The very different, spectral and measure-based operator-theoretic approach of evolution operators, a.k.a. "Koopmanism," involves Koopman eigenfunctions (KEIGs).
In this paper, we ask a simple question, "Is the FTLE field a KEIG?" The answer is: in general, no. This motivates the explicit construction of vector fields where the answer is yes, in the sense that the FTLE field in the infinitesimal time limit, i.e., the instantaneous Lyapunov exponent (iLE) field, is a KEIG. Restricting ourselves to polynomial vector fields to make this construction easier, we indeed find that such vector fields do exist, and we explore whether such vector fields have a special structure.

Instantaneous Lyapunov Exponent Analysis
Solutions of non-autonomous vector fields, such as the motion of a fluid element x(t) in the time-dependent fluid velocity field v(x, t), can be challenging to analyze. The method of Lagrangian coherent structures (LCS) has become a popular tool to analyze structures in the phase space of low-dimensional non-autonomous vector fields [1,2,3]. A common computational framework to obtain LCS is to consider a scalar field derived from numerically integrated trajectories (i.e., numerical approximations of solutions of (1), called the "Lagrangian" point of view in the fluid literature), in particular the finite-time Lyapunov exponent (FTLE) field [2,4]. New tools have recently been developed which use vector field gradients, instead of integrated trajectories; these provide the instantaneous limit of FTLE and LCS. The vector field gradients are assembled into the Eulerian rate-of-strain tensor, so called because of its relationship to the space-fixed, or "Eulerian," point of view in the fluid literature. The Eulerian rate-of-strain tensor was shown to provide an instantaneous approximation of LCS in two-dimensional fluid flows [5]. Further work extended these ideas to n dimensions [6], showing that the minimum and maximum eigenvalues of the Eulerian rate-of-strain tensor, s_1 and s_n, are the limits of the backward-time and forward-time FTLE fields, respectively, as the integration time goes to zero. Trenches of the minimum eigenvalue field can be identified as instantaneous attracting LCSs, whereas ridges of the maximum eigenvalue field can be identified as instantaneous repelling LCSs. For the remainder of the paper, we shall refer to the minimum and maximum eigenvalues of S as the attraction and repulsion rates, respectively, and both can be considered as instantaneous, rather than finite-time, Lyapunov exponents. For brevity, we refer to this as the iLE (instantaneous Lyapunov exponent) field, following [6].
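The attraction and repulsion rates described above can be computed pointwise from the velocity gradient alone. A minimal sketch follows; the velocity field used here is an illustrative assumption, not one from this paper:

```python
import numpy as np

def rate_of_strain_rates(grad_v):
    """Attraction/repulsion rates: min/max eigenvalues of S = (grad_v + grad_v.T)/2."""
    S = 0.5 * (grad_v + grad_v.T)
    w = np.linalg.eigvalsh(S)      # eigenvalues of the symmetric S, ascending
    return w[0], w[-1]             # (s_1, s_n)

# Illustrative planar velocity field v = (x, -y + y**3) (an assumption),
# whose gradient happens to be diagonal:
def grad_v(x, y):
    return np.array([[1.0,  0.0],
                     [0.0, -1.0 + 3.0 * y**2]])

s1, sn = rate_of_strain_rates(grad_v(0.2, 0.1))   # attraction and repulsion rates
```

Evaluating s_1 on a grid and extracting trenches of the resulting scalar field gives instantaneous attracting LCS candidates with no trajectory integration.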
Broadly speaking, the trenches of the attraction rate field reveal where phase space regions will congregate under the flow, as shown in the schematic of Figure 1. These are instantaneous attracting LCSs, that is, the instantaneously most attracting codimension-1 surfaces in M, also referred to as instantaneous Lyapunov exponent structures (iLES) [6]. In practical applications, the primary focus has been on the attraction rate, given its importance for prediction [7], as opposed to repelling features, from which particles diverge before evolving independently of those features.
In the remainder of this paper, we will restrict ourselves to autonomous systems, where the vector field (1) is independent of time, that is,

dx/dt = v(x),  x ∈ M.  (3)

Instantaneous Attraction and Repulsion Rates
For ease of exposition, we limit our discussion in this section to autonomous two-dimensional vector fields. To calculate the attraction rate, we first need to compute the gradient of the vector field, ∇v(x), for the vector field v(x) = (u(x, y), v(x, y)), where u is the first component and v is the second component. The two-dimensional state is written as x = (x, y).
The gradient of the velocity vector is,

∇v(x) = [ ∂u/∂x  ∂u/∂y ; ∂v/∂x  ∂v/∂y ],  (4)

and the Eulerian rate-of-strain tensor (2), S = ½(∇v + (∇v)^T), is explicitly,

S(x) = [ ∂u/∂x  ½(∂u/∂y + ∂v/∂x) ; ½(∂u/∂y + ∂v/∂x)  ∂v/∂y ].

The attraction rate, s_1(x), which is the minimum eigenvalue of S(x) at the location x, is given analytically by,

s_1(x) = ½(u_x + v_y) − ½ √((u_x − v_y)² + (u_y + v_x)²),  (6)

where subscripts on u and v denote partial derivatives and the dependence of u and v on x is understood. Similarly, the repulsion rate, the maximum eigenvalue, is given analytically by,

s_2(x) = ½(u_x + v_y) + ½ √((u_x − v_y)² + (u_y + v_x)²).  (7)
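The closed-form rates can be checked symbolically against the eigenvalues of S; the components u and v below are illustrative assumptions chosen only for the check:

```python
import sympy as sp

x, y = sp.symbols('x y')
u = x + sp.sin(y)        # illustrative first component (an assumption)
v = x*y - y**3           # illustrative second component (an assumption)

ux, uy = sp.diff(u, x), sp.diff(u, y)
vx, vy = sp.diff(v, x), sp.diff(v, y)
S = sp.Matrix([[ux, (uy + vx)/2],
               [(uy + vx)/2, vy]])

# Closed-form attraction (s1) and repulsion (s2) rates
root = sp.sqrt((ux - vy)**2 + (uy + vx)**2)
s1 = (ux + vy)/2 - root/2
s2 = (ux + vy)/2 + root/2

# Both must be roots of det(S - s*I), i.e., eigenvalues of S
res1 = sp.simplify(sp.expand((S - s1*sp.eye(2)).det()))
res2 = sp.simplify(sp.expand((S - s2*sp.eye(2)).det()))
```

Both residuals vanish identically, confirming that s_1 and s_2 are the eigenvalues of the symmetric tensor S.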

Finite-Time Lyapunov Exponent
Based on the ODE (3), we can calculate the flow map x(t) = F^t(x_0), which is typically obtained numerically [2,8,9,10]. Taking the gradient of the flow map F^t(x_0) with respect to the initial condition x_0, ∇F^t(x_0), the right Cauchy-Green strain tensor over a time interval of interest can be calculated,

C^t(x_0) = (∇F^t(x_0))^T ∇F^t(x_0),

which is positive-definite, giving eigenvalues which are all positive. In a two-dimensional flow, the two eigenvalues are ordered λ_1 < λ_2. From the maximum eigenvalue, λ_2, the FTLE [2,11] can be defined as,

σ_t(x_0) = (1/(2|t|)) log λ_2(x_0),  (10)

where t is the (signed) elapsed time, often referred to as the integration time or evolution time in the FTLE literature. The FTLE measures the rate of separation of two nearby fluid parcels in a flow over the time horizon t. Ridges of the σ_t(x_0) field for t < 0 identify regions of the flow which are the most attracting over the time interval [t, 0]. Furthermore, in [6] it was demonstrated that the −s_1(x_0) field (with the minus sign, where s_1 is from (6)) is the limit of the σ_t(x_0) field as t → 0⁻. Thus, the attraction rate field, s_1(x_0), provides a computationally inexpensive instantaneous approximation of the main attracting curves, as it is based on a single velocity snapshot; no trajectory integration is necessary.
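For a concrete check of both the FTLE definition and its relationship to the instantaneous rates, a small numerical sketch; the linear saddle field and all numerical parameters are illustrative assumptions:

```python
import numpy as np

A = np.array([[0.5, 0.0],
              [0.0, -1.0]])          # illustrative linear saddle: v(x) = A x

def vfield(x):
    return A @ x

def flow_map(x0, t, n=200):
    """RK4 integration of dx/dt = v(x) from time 0 to t (t may be negative)."""
    h, x = t / n, np.asarray(x0, float)
    for _ in range(n):
        k1 = vfield(x); k2 = vfield(x + h*k1/2)
        k3 = vfield(x + h*k2/2); k4 = vfield(x + h*k3)
        x = x + h*(k1 + 2*k2 + 2*k3 + k4)/6
    return x

def ftle(x0, t, eps=1e-5):
    """sigma_t(x0) = (1/(2|t|)) * log(max eigenvalue of Cauchy-Green tensor)."""
    dF = np.zeros((2, 2))
    for j in range(2):                 # finite-difference flow map gradient
        dx = np.zeros(2); dx[j] = eps
        dF[:, j] = (flow_map(x0 + dx, t) - flow_map(x0 - dx, t)) / (2*eps)
    C = dF.T @ dF                      # right Cauchy-Green strain tensor
    return np.log(np.linalg.eigvalsh(C)[-1]) / (2*abs(t))

sigma_fwd = ftle([0.3, 0.2], t=2.0)    # forward-time FTLE
sigma_bwd = ftle([0.3, 0.2], t=-2.0)   # backward-time FTLE
```

For this linear field, S = A is already symmetric with eigenvalues s_1 = −1 and s_2 = 0.5, so the backward-time FTLE equals −s_1 = 1 and the forward-time FTLE equals s_2 = 0.5 for any |t|, consistent with the limit statement above.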

On Evolution of Observations, the Koopman Operator and its Eigenfunction PDE
Koopman spectral analysis has become extensively popular in science and engineering [12,13,14], especially from a data-driven perspective for analyzing dynamical systems. The idea is that even a nonlinear dynamical system can be reinterpreted as a linear dynamical system, but in a function space, which may be infinite dimensional. This is perhaps not a bad trade-off, infinite dimensions for linearity, and from there, computational schemes proceed by finite-dimensional truncation of the infinite-dimensional embedding, approximating the linear dynamics on a linear subspace.
We first review Koopman spectral analysis in brief. Consider the autonomous differential equation in (3). As above, write the flow for each t ∈ R (or semi-flow for t ≥ 0) as a function, x(t) ≡ F^t(x_0), for a trajectory starting at x(0) = x_0 ∈ M. Associated with the flow is the Koopman operator, often called a composition operator, which describes the evolution of "observables," meaning "measurements," along the flow [12,15]. Rather than classically analyzing individual trajectories in the phase space, observations measured as functions over the space are analyzed. These "observation functions" are elements of a space of observation functions F. For example, F = L²(M) is commonly used, since it is particularly convenient for numerical applications that utilize the inner product associated with the Hilbert space structure [12,13,14,16,17]. We will assume scalar observation functions, but multiple scalar observation functions can just as well be considered together, "stacked" as a composite vector-valued observation function.
The dynamics of how observation functions change over time as sampled along orbits is what the Koopman operator defines. See Figure 2. The Koopman operator (composition operator) [18,15,19,20], K^t, associated with F^t, is a (semi-)flow on the function space F, stated as the following composition,

[K^t g](x) = g(F^t(x)),

for each t ∈ R (or as a semi-flow if the relation only holds for t ≥ 0). That is, for each x, we observe the value of an observable g not at x, but "downstream" by time t, at F^t(x). See Figure 2. Notice that for brevity we have suppressed the starting time t_0 in the Koopman operator semi-flow notation, K^t. An important feature of the Koopman operator is that it is a linear operator on its domain, the function space F, but at the cost of possibly being infinite dimensional, even though it may be associated with a flow F^t that evolves on a finite-dimensional space, and indeed even one due to a nonlinear vector field. The spectral theory of Koopman operators [20,12,15,21] concerns eigenfunctions and eigenvalues of the operator K^t. An eigenvalue-eigenfunction pair (λ, φ_λ(x)) of the Koopman operator must satisfy the equation,

[K^t φ_λ](x) = e^{λt} φ_λ(x).

See Fig. 2. For convenience, we will say "KEIGs" when referring to a Koopman eigenvalue and eigenfunction pair, (λ, φ_λ(x)). It may seem surprising if one tries to make an analogy to the spectrum of matrices (finite-rank operators), but for each λ not only is the eigenfunction φ_λ not unique, there are in fact uncountably many functions associated with each λ [22,23]. This is true even when allowing only unit-normalized eigenfunctions, to remove the trivial fact that constant multiples of eigenfunctions are eigenfunctions.
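The defining relations can be made concrete with a one-line flow; the scalar ODE below is an illustrative assumption:

```python
import numpy as np

# Flow of the illustrative scalar ODE dx/dt = -x:  F^t(x) = e^{-t} x
F = lambda t, x: np.exp(-t) * x

# Koopman operator: [K^t g](x) = g(F^t(x)), i.e., observe g "downstream" by t
def K(t, g):
    return lambda x: g(F(t, x))

# phi(x) = x is a Koopman eigenfunction with eigenvalue lambda = -1,
# and phi2(x) = x**2 is one with eigenvalue -2:
phi  = lambda x: x
phi2 = lambda x: x**2

t0, x0 = 0.7, 1.3
lhs1 = K(t0, phi)(x0)                  # = e^{-t0} * x0
rhs1 = np.exp(-1.0 * t0) * phi(x0)     # = e^{lambda*t0} * phi(x0)
lhs2 = K(t0, phi2)(x0)
rhs2 = np.exp(-2.0 * t0) * phi2(x0)
```

That both x and x² are eigenfunctions (with different eigenvalues) already hints at the nonuniqueness discussed above.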
A trending concept in the empirical study of dynamical systems is the spectral decomposition of observables into eigenfunctions of the Koopman operator. Let a D-dimensional vector-valued set of observables be written as a linear combination of eigenfunctions,

g(x) = Σ_j v_j φ_{λ_j}(x),

where the vectors v_j ∈ C^D are called Koopman modes. The power of this concept lies in the following expression, which describes the dynamics of observations in terms reminiscent of linear Fourier analysis, but now in terms of the Koopman modes:

[K^t g](x) = Σ_j e^{λ_j t} v_j φ_{λ_j}(x).

We considered the nonuniqueness of such a decomposition, and furthermore the nature and even the cardinality of these eigenfunctions, in [22]. In the current work, this property leads to our goal of contrasting such a Koopman spectral decomposition with other notions of coherence, such as the FTLE and iLE [6]. We will highlight that other interesting global observations, such as the FTLE field, can be usefully presented as series in Koopman eigenfunctions.
A fast-growing literature exists concerning data-driven approaches to construct eigenfunctions for an observed flow, given data as trajectories, namely dynamic mode decomposition (DMD), extended dynamic mode decomposition (EDMD), and variants [24,25,26,27]. However, an analytical description in terms of a PDE follows from the infinitesimal generator of the Koopman operator. Corresponding to the statement of K^t as a semi-group of compositions follows the action of the infinitesimal generator [17,28,29], and so we recall [17,28,30] that a smooth exact eigenfunction φ_λ : M → C of the Koopman operator of a given flow, for a given eigenvalue λ ∈ C, must satisfy the PDE

∇φ_λ(x) · v(x) = λ φ_λ(x).  (18)

Here, we will use solutions constructed directly from (18) to compare the Koopman perspective to the LCS and iLE perspective. This PDE is of a quasilinear form [31], and therefore solvable by the method of characteristics [22,23]. An initial data function h : Λ → C on a data surface Λ propagates throughout an open domain U in which the flow of the differential equation is defined, while respecting (18). This yields a KEIG pair on U in terms of r*(x), the "time"-of-flight such that, for a point x ∈ U, the pull-back of x along the flow first intersects the data surface Λ; for each x ∈ U, that first intersection point gives the parameterization on Λ. This solution is valid when the orbit is non-recurrent in the open domain during the period of interest. See Fig. 3, where we illustrate that a general solution of the eigenfunction PDE is simply a pull-back along the flow through x to read the data on Λ, scaled according to the linear action of (e^λ)^r for the "time" r = r*(x) it takes to pull the point back.
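The pull-back construction can be illustrated on a scalar example; the flow, domain, and data surface below are assumptions made only for illustration:

```python
import sympy as sp

x, lam = sp.symbols('x lambda', positive=True)

# Illustrative 1-D flow dx/dt = x on U = (0, oo), with data surface
# Lambda = {x = 1} carrying data h(1) = 1.
f = x

# "Time"-of-flight r*(x): pull back until F^{-r}(x) = x*exp(-r) hits Lambda,
# giving r*(x) = log(x).
r_star = sp.log(x)

# Pull-back solution: phi_lam(x) = e^{lam * r*(x)} * h(1) = x**lam
phi = sp.exp(lam * r_star)

# Check the eigenfunction PDE (18): grad(phi) . v = lam * phi
residual = sp.simplify(f * sp.diff(phi, x) - lam * phi)
```

The residual is identically zero for every λ, illustrating how a single data function generates a KEIG for each eigenvalue on a non-recurrent domain.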

One-Dimensional Vector Fields
Consider autonomous real vector fields ẋ = f(x) for x ∈ R. The Eulerian rate-of-strain tensor reduces to the single rate,

s_1(x) = f'(x).

So if s_1(x) is a Koopman eigenfunction, it satisfies (18), which is,

f(x) s_1'(x) = λ s_1(x),

for some λ ∈ C and f. Let g(x) = s_1(x) be a real scalar function; then, since f is an antiderivative of g, we can state the condition as: find a function g(x) such that its derivative times its integral is proportional to the function itself. We claim that the only real function g(x) that satisfies this for non-zero λ is the zero function, g(x) = 0. Therefore, for 1-dimensional flows, there is no non-trivial instantaneous Lyapunov exponent field which is a Koopman eigenfunction.
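The condition can be checked for any concrete f; here f(x) = x − x³ is an illustrative assumption:

```python
import sympy as sp

x, lam = sp.symbols('x lam')

f = x - x**3                     # illustrative 1-D vector field (an assumption)
g = sp.diff(f, x)                # iLE field: s1(x) = f'(x) = 1 - 3*x**2

# KEIG condition (18) for g: f(x)*g'(x) - lam*g(x) must vanish identically
residual = sp.expand(f * sp.diff(g, x) - lam * g)

# The x**4 coefficient is 6, independent of lam, so no choice of lam
# can make the residual vanish identically: g is not a KEIG here.
p = sp.Poly(residual, x)
c4 = p.coeff_monomial(x**4)
c0 = p.coeff_monomial(1)         # constant term, = -lam
```

The λ-independent x⁴ coefficient obstructs the identity for this f, consistent with the claim that only g = 0 works in one dimension.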

Two-Dimensional Nonlinear Saddle Flow
While in 1 dimension the iLE field is not a Koopman eigenfunction, perhaps the situation is different in 2 dimensions. As an initial 2-dimensional vector field to motivate our study, consider the following nonlinear saddle flow with a cubic term,

ẋ = x,  ẏ = −y + y³.

These two uncoupled ordinary differential equations admit the explicit solutions,

x(t) = x e^t,  y(t) = y e^{−t} / (1 − y² + y² e^{−2t})^{1/2},  (27)

where the initial condition at time 0 is x = (x, y). The right Cauchy-Green deformation tensor for a backward integration time t < 0 yields, from (10), a backward-time FTLE (t < 0). Using Taylor series approximations for small |t|, the backward FTLE can be written as an expansion in t,

σ_t(x) = 1 − 3y² + O(|t|),

so the backward-time FTLE is approximated to leading order, O(1), by the negative of the minimum eigenvalue of S(x). The matrix S(x) is,

S(x) = diag(1, −1 + 3y²).

The minimum eigenvalue is the iLE, the attraction rate (6), s_1(x) = −1 + 3y². Using g(x) = s_1(x) as our candidate function, we find,

∇g(x) · v(x) = 6y(−y + y³) = −6y² + 6y⁴,

which is not of the form λ g(x) (as in (18)), so s_1(x) is not a Koopman eigenfunction. But it can be written as a sum of Koopman eigenfunctions. Following the prescription given above for constructing Koopman eigenfunctions using the explicit solution (27) to the ODE, we find that the Koopman eigenfunctions are of the form,

φ_λ(x) = x^λ h(s),  (33)

where h : R → R is any scalar function of s = x²y²/(1 − y²), and λ ∈ C is a constant. One can verify directly that a function of the form (33) satisfies (18), the infinitesimal form of the eigenvalue equation. Details are in Appendix A.
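A symbolic check of both claims, assuming the uncoupled saddle ẋ = x, ẏ = −y + y³ (a form consistent with the stated s_1 = −1 + 3y²):

```python
import sympy as sp

x, y = sp.symbols('x y')

# Assumed nonlinear saddle consistent with s1 = -1 + 3*y**2:
u, v = x, -y + y**3

s1 = -1 + 3*y**2
# Apply the generator in (18) to s1:
Ls1 = sp.expand(u*sp.diff(s1, x) + v*sp.diff(s1, y))   # -> -6*y**2 + 6*y**4

# Ls1 is not lam*s1 for any constant lam, so s1 is not a KEIG.  But
# phi = y**2/(1 - y**2) IS a KEIG, with eigenvalue -2:
phi = y**2 / (1 - y**2)
res = sp.simplify(u*sp.diff(phi, x) + v*sp.diff(phi, y) - (-2)*phi)
```

The vanishing residual for φ is the seed of the series construction that follows: powers of φ supply KEIGs with eigenvalues −2k.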
The eigenfunction φ_{−2k}(x) = (y²/(1 − y²))^k, obtained from (33) with λ = −2k and h(s) = s^k, has leading-order term y^{2k}. Therefore, an appropriate choice of h(s) produces an eigenfunction whose leading term cancels the −3y⁶ remainder of 3φ_{−2}(x) − 3φ_{−4}(x). Following in a similar manner, we can remove the leading-order remainder term of 3φ_{−2}(x) − 3φ_{−4}(x) + 3φ_{−6}(x), which is 3y⁸, and subsequently all higher-order terms, since the leading-order term of φ_{−2k}(x) is of order y^{2k}. Thus we can write the term 3y² as the sum of an infinite series of Koopman eigenfunctions,

3y² = Σ_{k≥1} c_k φ_{−2k}(x),  where c_k = 3(−1)^{k+1},

for integer k ≥ 1. Defining φ_0(x) as −1, a KEIG with eigenvalue 0, s_1(x), the instantaneous attraction rate, can be written exclusively in terms of these Koopman eigenfunctions,

s_1(x) = φ_0(x) + Σ_{k≥1} c_k φ_{−2k}(x).  (41)

We have shown that while certain special-case vector fields have the property that the dominant eigenfunction is also the infinitesimal FTLE, that is, the instantaneous Lyapunov exponent (iLE), this is not the general scenario. However, when this occurs, it represents a strong relationship between the geometric theory and the theory of evolution operators. On the other hand, for a given vector field in the general scenario, if the corresponding eigenfunctions are dense in a space of functions that includes the iLE, then clearly the iLE can be written as a superposition of Koopman eigenfunctions, as described by the example of (41). Thus the geometric theory can still be interpreted in terms of the spectral theory. In the intermediate scenario, homogeneous polynomials (see Appendix B) offer an enticing class of problems with a general relationship. In some vector fields, then, the iLE may be a finite sum of Koopman eigenfunctions.

General Two-Dimensional Vector Fields
Consider a general two-dimensional autonomous vector field of the form,

ẋ = u(x, y),  ẏ = v(x, y),  (42)

where the right-hand-side functions u and v are as smooth as necessary in their arguments. The gradient matrix is given by (4) and the attraction and repulsion rates by (6) and (7), respectively. We will focus on the repulsion rate, s_2(x, y), but similar arguments apply equally to the attraction rate. We proceed by seeking the conditions under which s_2(x, y) is also a Koopman eigenfunction for some non-trivial eigenvalue λ, which by (18) is,

u ∂s_2/∂x + v ∂s_2/∂y = λ s_2.  (43)

To simplify the calculation, we will consider only vector fields such that,

∂u/∂y + ∂v/∂x = 0,  (44)

which, in fluid terms, would mean the vector field has zero shear component of the strain rate [6], in which case the repulsion rate, s_2, simplifies to,

s_2 = ∂u/∂x,

with partial derivatives ∂s_2/∂x = ∂²u/∂x² and ∂s_2/∂y = ∂²u/∂x∂y, and so the condition for the repulsion rate to be a KEIG, (43), is the following condition on the vector field functions u and v,

u u_xx + v u_xy = λ u_x,  (47)

where subscripts denote partial derivatives and the (x, y) dependence of the vector field components is understood. Furthermore, under the assumption (44), the attraction rate, s_1, also simplifies to,

s_1 = ∂v/∂y,

with partial derivatives ∂s_1/∂x = ∂²v/∂x∂y and ∂s_1/∂y = ∂²v/∂y², and so the condition for the attraction rate to be a KEIG is the following condition on the vector field functions u and v,

u v_xy + v v_yy = λ v_y.  (50)
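The shear-free assumption and the resulting KEIG condition can be set up symbolically; the generic quadratic coefficients below are an illustrative sketch, not the paper's solution:

```python
import sympy as sp

x, y, lam = sp.symbols('x y lam')
a10, a11, a20, a21, a22, b11, b22 = sp.symbols('a10 a11 a20 a21 a22 b11 b22')

# Generic quadratic u; then choose v so the shear u_y + v_x vanishes (44):
u = a10*x + a11*y + a20*x**2 + a21*x*y + a22*y**2
v = -a11*x + b11*y - sp.Rational(1, 2)*a21*x**2 - 2*a22*x*y + b22*y**2

shear = sp.expand(sp.diff(u, y) + sp.diff(v, x))        # identically zero

# Under (44) the repulsion rate is s2 = u_x, and the KEIG condition (47)
# becomes a polynomial identity whose coefficients must all vanish:
s2 = sp.diff(u, x)
keig_residual = sp.expand(u*sp.diff(s2, x) + v*sp.diff(s2, y) - lam*s2)
deg = sp.Poly(keig_residual, x, y).total_degree()        # quadratic terms appear
```

Collecting the coefficients of `keig_residual` by monomial is exactly the kind of coefficient-matching used in the polynomial constructions of the following sections.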

Polynomial Vector Fields
It is not immediately obvious if (47) or (50) admit any solutions. We seek to construct solutions, if they exist. To simplify this search, we will look initially at a special class of vector fields, polynomial vector fields.
Adopting the common notation of the polynomial vector field literature (see, e.g., [32]), we let u(x, y) = P(x, y) and v(x, y) = Q(x, y) be polynomials of degree m. We consider a system of the form (42) with an equilibrium point which we may place at the origin,

ẋ = P(x, y) = Σ_{k=1}^{m} P_k(x, y),  ẏ = Q(x, y) = Σ_{k=1}^{m} Q_k(x, y),  (51)

where P_k(x, y) and Q_k(x, y) are both members of R_k, the homogeneous polynomials of degree k (understood to be in the 2 variables x and y). The dimension of the vector space R_k is equal to the number of monomials of degree k in x and y, and is dim(R_k) = k + 1. The number of coefficients in (51) is therefore,

2 Σ_{k=1}^{m} (k + 1) = m(m + 3).

We will denote the polynomials in terms of monomials,

P_k(x, y) = Σ_{j=0}^{k} a_{kj} x^{k−j} y^{j},  Q_k(x, y) = Σ_{j=0}^{k} b_{kj} x^{k−j} y^{j},

where a_{kj} and b_{kj} are real scalar coefficients. For (51), the coefficients form a space of dimension m(m + 3). Based on the nonlinear saddle flow example, we expect the vector fields which satisfy (47) or (50) to form a subspace of dimension d < m(m + 3).

Quadratic vector fields
Consider (51) with m = 2,

P(x, y) = a_{10} x + a_{11} y + a_{20} x² + a_{21} xy + a_{22} y²,
Q(x, y) = b_{10} x + b_{11} y + b_{20} x² + b_{21} xy + b_{22} y².

Our assumption (44), ∂P/∂y + ∂Q/∂x = 0, implies,

b_{10} = −a_{11},  b_{20} = −a_{21}/2,  b_{21} = −2a_{22},

which leaves a 7-dimensional subspace of the original 10-dimensional space of quadratic vector fields. Furthermore, for this vector field, we have,

s_2 = ∂P/∂x = a_{10} + 2a_{20} x + a_{21} y,

with ∂s_2/∂x = 2a_{20} and ∂s_2/∂y = a_{21}. Since s_2 has no second-order terms, the left-hand side of (47), 2a_{20} P + a_{21} Q, must have all coefficients of second-order terms identically zero. Matching coefficients then leads to conditions on a_{10} and a_{11}, which further reduces the space of vector fields to only a 2-dimensional subspace with free parameters (λ, a_{20}).
By construction, the repulsion rate is now a Koopman eigenfunction with eigenvalue λ, where both λ and a_{20} are free parameters. The quadratic vector field to which this Koopman eigenfunction corresponds is given in terms of the free parameters (λ, a_{20}) ∈ R². Notice that the linear part of the vector field is skew-symmetric and that the quadratic terms in P and Q have opposite sign. We note that while the repulsion rate s_2 = ∂P/∂x is a KEIG, there is no guarantee that the attraction rate s_1 = ∂Q/∂y will be.

Cubic vector fields
For polynomial vector fields of degree m, the s_1 or s_2 fields will be polynomials of degree m − 1, since they are based on gradients of the vector field. Thus quadratic vector fields produced a linear s_2 above. By the definition of an instantaneous Lyapunov exponent structure (iLES) [6], we need a ridge of s_2 for a repelling iLES or a trench of s_1 for an attracting iLES. To have this, we need an s_1 or s_2 which is at least quadratic. Thus, we must consider polynomial vector fields of at least cubic degree.
We perform the same procedure using the same assumption (44), but now for the attraction rate, s_1, and for vector fields satisfying (50) which are polynomials in x and y through degree m = 3. We augment polynomial fields of the form (51) by allowing constant terms, P_0 = a_{00} and Q_0 = b_{00}, which leads to m(m + 3) + 2 coefficients. Out of this 20-dimensional space of planar cubic vector fields, we follow an approach analogous to the quadratic example above and obtain,

P(x, y) = a_{00} + a_{10} x + a_{11} y + a_{20} (x + y)² + a_{20} k (x + y)³,

where, assuming k ≠ 0,

a_{00} = a_{10}/(6k) − a_{20}/(3k) − b_{00},  a_{11} = a_{10}/2 + a_{20}/(3k),

and (a_{10}, k, a_{20}, b_{00}) ∈ R⁴ are free parameters. Notice that the linear part of the vector field is skew-symmetric and that the quadratic and cubic terms in P and Q have opposite sign. For this cubic vector field, the attraction rate function is a Koopman eigenfunction with eigenvalue λ. Because the attraction rate field is quadratic, it can potentially have a ridge. We re-write the vector field in terms of λ and c = −2k a_{20}, where (λ, c, k, a_{00}) ∈ R⁴ are free parameters. The attraction rate is then a Koopman eigenfunction with eigenvalue λ for the vector field (69).

Cubic vector field example
As an explicit example of (69), consider the parameter choice (λ, c, k, a_{00}) = (2, 2/3, −1/3, −2). The attraction rate for this vector field is a Koopman eigenfunction with eigenvalue λ = 2. We note that if we perform a linear transformation to new variables, we get a vector field that does not have the same symmetries, but nonetheless has the same attraction rate, which is a Koopman eigenfunction of the new velocity field with the same eigenvalue λ = 2.

Cubic vector field transformation to simplify
For general parameters, the same linear transformation T leads to the vector field (69) in the (r, s) variables. As the ṙ equation is uncoupled from s, it can be solved in closed form for an initial condition r_0 at time t_0, where a = λ/2 and ā = λ/(6k). Therefore the s ODE can be considered a time-dependent ODE in 1 dimension.
Notice that r = −ā/a = −1/(3k) is an invariant manifold, i.e., ṙ = 0 (and this is also the location of the attraction rate iLE ridge for k = ±1/3). It turns out we can transform the nonlinear ODE (76) into a form where it can be solved analytically. We translate the vector field so the new origin is the single equilibrium point of (76), by defining new coordinates (x_1, x_2). Notice the transformed system (80) depends on only 2 parameters, (λ, c). Furthermore, the vector field (80) does not satisfy the simplifying assumption (44) which held in the original variables. We note that the origin of (80) is the only equilibrium point and is either a stable or unstable node, depending on whether λ is negative or positive, respectively. The system has an s_1 field that is a KEIG, φ_λ, with eigenvalue λ, which has a trench at x_1 = 0 for c < 0, a necessary condition for an attracting iLES [6]. From (80), we see that the set {x_1 = 0}, the x_2-axis, is also an invariant manifold. For λ < 0 (λ > 0), the x_2-axis is the fastest direction along the stable (unstable) invariant manifold of the origin.

Properties of KEIGs of 2-dimensional cubic vector fields
We now discuss some of the properties of KEIGs of cubic vector fields in two dimensions. Although nonlinear, the ODE (80) is in a form which admits Carleman linearization [33].
Augment the system with a third nonlinear variable, x_1³. Then we have a linear ODE in y = (x_1, x_2, x_1³), where the y space can be interpreted as a three-dimensional Koopman observable vector space [34]. The linear ODE in y admits an analytical solution, which implies x_1(t) = e^{λt/2} x_1(0).
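A numerical sketch of the finite Carleman closure, assuming the representative form ẋ_1 = (λ/2)x_1, ẋ_2 = λx_2 + c x_1³ (an assumption consistent with x_1(t) = e^{λt/2} x_1(0); the closure mechanism works the same way for any system of this shape):

```python
import numpy as np

lam, c = -1.0, -1.0          # illustrative parameters

# Assumed planar cubic system consistent with x1(t) = exp(lam*t/2)*x1(0):
def f(z):
    x1, x2 = z
    return np.array([0.5*lam*x1, lam*x2 + c*x1**3])

# Carleman augmentation y = (x1, x2, x1**3): since d(x1**3)/dt =
# 3*x1**2 * (lam/2)*x1 = (3*lam/2)*x1**3, the augmented system is LINEAR
# and truncates exactly at order 3:
B = np.array([[0.5*lam, 0.0, 0.0],
              [0.0,     lam, c  ],
              [0.0,     0.0, 1.5*lam]])

def rk4(rhs, z0, t, n=400):
    h, z = t/n, np.asarray(z0, float)
    for _ in range(n):
        k1 = rhs(z); k2 = rhs(z + h*k1/2)
        k3 = rhs(z + h*k2/2); k4 = rhs(z + h*k3)
        z = z + h*(k1 + 2*k2 + 2*k3 + k4)/6
    return z

z0 = np.array([0.8, 0.3])
zT = rk4(f, z0, 2.0)                                        # nonlinear flow
y0 = np.array([z0[0], z0[1], z0[0]**3])
yT = rk4(lambda w: B @ w, y0, 2.0)                          # linear flow
```

The linear and nonlinear integrations agree, and the third component stays slaved to x_1³, so the exact flow map is available from the matrix exponential of B alone.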
Using s_1 in (81) as the observable function in (13), we can explicitly find that, following individual trajectories, the s_1 field at time t is the same as the s_1 field at the initial time 0 multiplied by the factor e^{λt}, as we expect from an observable that is also a KEIG; see (14).
We show an example phase portrait for (λ, c) = (−1, −1) in Figure 4. For these parameters, the x_2-axis, {x_1 = 0}, is an attracting iLES. Note that s_1 is not unique as a Koopman eigenfunction with eigenvalue λ. We can verify that x_1 is itself a KEIG with eigenvalue λ/2, and due to the theorem of [22], any power of x_1 is also a KEIG; in particular, x_1² is a KEIG with eigenvalue λ, as is γx_1² for any constant γ.
A family of polynomial vector fields with a KEIG attraction rate

Based on the form of (80), we can construct a family of vector fields which all have the attraction rate as a KEIG, where we include in ẋ_2 only additional powers of x_1, with additional parameters c_4, c_5, etc. The vector field (85) has an attraction rate which is a KEIG with eigenvalue λ. As before, the x_2-axis is (i) an attracting iLES for c < 0 and (ii) the fastest direction along the stable (unstable) invariant manifold of the origin for λ < 0 (λ > 0).

Conclusion
In the present work, we construct vector fields with the property that either the attraction rate or the repulsion rate is a Koopman eigenfunction. We find that in 1 dimension this is not possible, but in 2 dimensions it is. We consider cubic 2-dimensional vector fields and find a 2-parameter family of systems which have an attraction rate that is a KEIG. It turns out that these systems, while nonlinear, have the unusual property that Carleman linearization truncates at finite order, allowing us to find the exact analytical solution of the flow map using linear methods. It is not obvious why the search for a vector field where the attraction rate is a KEIG would lead to this property.
The further investigation of vector fields which have attraction and repulsion rate fields as KEIGs is left as future work. Returning to the title question of this paper, "Is the finite-time Lyapunov exponent field a Koopman eigenfunction?": while the answer is yes in some special cases, in general, the answer is no.

A. Verification that (33) satisfies the eigenfunction PDE (18)

Take the gradient of the candidate eigenfunction (33), then take its dot product with v(x) from (26).

B. Arbitrary functions g(x) can be written as an infinite series of Koopman eigenfunctions
For the nonlinear saddle system in section 5, let h(s) = s^n; then the corresponding eigenfunction with λ = −2n is φ_{−2n}(x) = (y²/(1 − y²))^n. In a similar way, we can write y^n, for an integer n either even or odd, as an infinite series of Koopman eigenfunctions, and the same for x^n y^m. So any function g(x) which is a homogeneous polynomial of degree n in x and y can be written in terms of an infinite series of Koopman eigenfunctions. This means any function which admits a Taylor series expansion can be written in terms of an infinite series of Koopman eigenfunctions.