Article

Kernel-Based Approximation of the Koopman Generator and Schrödinger Operator

by 1,*,†, 2,† and 3
1 Department of Mathematics and Computer Science, Freie Universität Berlin, 14195 Berlin, Germany
2 Department of Mathematics, Paderborn University, 33098 Paderborn, Germany
3 Department of Mathematics, Imperial College London, London SW7 2AZ, UK
* Author to whom correspondence should be addressed.
† These authors contributed equally to this work.
Entropy 2020, 22(7), 722; https://doi.org/10.3390/e22070722
Received: 27 May 2020 / Revised: 25 June 2020 / Accepted: 26 June 2020 / Published: 30 June 2020

Abstract

Many dimensionality and model reduction techniques rely on estimating dominant eigenfunctions of associated dynamical operators from data. Important examples include the Koopman operator and its generator, but also the Schrödinger operator. We propose a kernel-based method for the approximation of differential operators in reproducing kernel Hilbert spaces and show how eigenfunctions can be estimated by solving auxiliary matrix eigenvalue problems. The resulting algorithms are applied to molecular dynamics and quantum chemistry examples. Furthermore, we exploit that, under certain conditions, the Schrödinger operator can be transformed into a Kolmogorov backward operator corresponding to a drift-diffusion process and vice versa. This allows us to apply methods developed for the analysis of high-dimensional stochastic differential equations to quantum mechanical systems.
Keywords: Koopman generator; Schrödinger operator; reproducing kernel Hilbert space

1. Introduction

The Koopman operator [1,2,3,4] plays a central role in the global analysis of complex dynamical systems. It is, for instance, used to find conformations of molecules and coherent patterns in fluid flows, but also for prediction, stability analysis, and control [5,6,7,8,9,10]. Instead of analyzing a given finite-dimensional, but highly nonlinear system directly, the underlying idea is to compute an associated infinite-dimensional, but linear operator [4]. By computing an approximation of this operator from measurement or simulation data, it is possible to extract Koopman eigenvalues, eigenfunctions, and modes. The most frequently used techniques are based on variants or generalizations of extended dynamic mode decomposition (EDMD) [11,12]. A reformulation of EDMD for the generator of the Koopman operator, called gEDMD, was recently proposed in [13]. It was shown that in addition to the previously mentioned applications, the generator contains valuable information about the governing equations of a system; see also [7,14]. System identification aims at learning a preferably parsimonious model from data. That is, the learned model should comprise as few terms as possible and still have predictive power, which is typically accomplished by utilizing sparse regression techniques. One drawback of gEDMD is that it requires a set of explicitly chosen basis functions and their first- and—if the system is non-deterministic and non-reversible—second-order derivatives. Moreover, the size of the resulting matrix eigenvalue problem that needs to be solved to compute eigenvalues, eigenfunctions, and modes of the generator depends on the size of the dictionary. The goal of this paper is to derive a kernel-based method to approximate the Koopman generator from data. A kernel-based variant of EDMD was proposed in [12] and generalized in [15]. We derive a kernel-based variant of gEDMD. 
Employing the well-known kernel trick, a dual eigenvalue problem whose size depends on the number of snapshots can be constructed. The resulting methods allow for implicitly infinite-dimensional feature spaces and only require partial derivatives of the kernel function. This enables us to apply the methods to high-dimensional systems for which conventional techniques would be prohibitively expensive due to the curse of dimensionality, provided the number of snapshots is such that the eigenvalue problem can still be solved numerically or can be downsampled without losing essential information. Since we aim at approximating differential operators, we need to be able to represent derivatives in reproducing kernel Hilbert spaces. This requires the notion of derivative reproducing properties. Derivative reproducing kernels [16] were used to approximate Lyapunov functions for ordinary differential equations in [17] and to approximate center manifolds for ordinary differential equations in [18]. Reproducing kernel Hilbert spaces with derivative reproducing properties are related to the native spaces introduced in a different context in [19].
Similar operators are also used for manifold learning and understanding the geometry of high-dimensional data [20,21,22,23]. Methods like diffusion maps construct graph Laplacians with the aid of diffusion kernels, effectively approximating transition probabilities between data points. In the infinite data limit and letting the kernel bandwidth go to zero, it has been shown that these methods, depending on the normalization, essentially compute eigenfunctions of certain differential operators, e.g., the Laplace–Beltrami operator, the Kolmogorov backward operator, or the Fokker–Planck operator.
Another related differential operator that is of utmost importance in quantum mechanics is the Schrödinger operator. Solutions of the time-independent Schrödinger equation describe stationary states and associated energy levels. We will illustrate how kernel-based methods developed for the Koopman generator can be applied to these related problems. The main contributions of this paper are:
  • We show how the derivative reproducing properties of kernels can be used to approximate differential operators such as the Koopman generator and the Schrödinger operator, as well as their eigenvalues and eigenfunctions from data. Additionally, we derive a kernel-based method tailored to reversible dynamics, which does not require estimating drift and diffusion terms, but only an equilibrated trajectory.
  • Furthermore, we exploit the fact that, under certain conditions, the Schrödinger operator can be turned into a Kolmogorov backward operator (see, e.g., [24]), which allows for the interpretation of a quantum-mechanical system as a drift-diffusion process and, as a consequence, the application of methods developed for the analysis of stochastic differential equations or their generators.
  • We demonstrate potential applications in molecular dynamics, using the example of a quadruple-well problem, and quantum mechanics, describing how to apply the proposed methods directly to the Schrödinger equation or the associated stochastic process. This will be illustrated with two well-known examples, the quantum harmonic oscillator and the hydrogen atom.
The remainder of the manuscript is structured as follows: We first introduce the necessary tools, namely the Koopman operator, its generator, and (derivative) reproducing kernel Hilbert spaces in Section 2. Additionally, relationships with the Schrödinger equation will be explored. The derivation of the kernel-based formulation of gEDMD will be detailed in Section 3. In Section 4, we will show how the derived methods can be applied to molecular dynamics and quantum mechanics problems. Concluding remarks and future work will be discussed in Section 5.

2. Koopman Theory and Reproducing Kernel Hilbert Spaces

We start directly with the non-deterministic setting; the Koopman operator and its generator for ordinary differential equations can then be regarded as a special case; see also [13] for a detailed comparison. The notation used below is summarized in Table 1.

2.1. The Koopman Operator and Its Generator

In what follows, let $\mathbb{X} \subseteq \mathbb{R}^d$ be the state space and $f \colon \mathbb{X} \to \mathbb{R}$ a real-valued observable of the system. Furthermore, let $\mathbb{E}[\,\cdot\,]$ denote the expected value and $\Theta^t$ the flow map associated with a dynamical system, i.e., $\Theta^t(X_0) = X_t$. Given a stochastic differential equation of the form:
$\mathrm{d}X_t = b(X_t)\,\mathrm{d}t + \sigma(X_t)\,\mathrm{d}B_t,$
where $b \colon \mathbb{R}^d \to \mathbb{R}^d$ is called the drift term, $\sigma \colon \mathbb{R}^d \to \mathbb{R}^{d \times d}$ the diffusion term, and $B_t$ is d-dimensional Brownian motion, the stochastic Koopman operator is defined by:
$(\mathcal{K}^t f)(x) = \mathbb{E}[f(\Theta^t(x))].$
The infinitesimal generator $\mathcal{L}$ of the semigroup of Koopman operators is given by:
$\mathcal{L} f = \sum_{i=1}^{d} b_i \frac{\partial f}{\partial x_i} + \frac{1}{2} \sum_{i=1}^{d} \sum_{j=1}^{d} a_{ij} \frac{\partial^2 f}{\partial x_i \partial x_j} \qquad (2)$
and its adjoint, the generator of the Perron–Frobenius operator, by:
$\mathcal{L}^* f = -\sum_{i=1}^{d} \frac{\partial (b_i f)}{\partial x_i} + \frac{1}{2} \sum_{i=1}^{d} \sum_{j=1}^{d} \frac{\partial^2 (a_{ij} f)}{\partial x_i \partial x_j},$
with $a = \sigma \sigma^\top$. We assume from now on that $a$ is uniformly positive definite on $\mathbb{X}$. The second-order partial differential equation $\frac{\partial u}{\partial t} = \mathcal{L} u$ is also called the Kolmogorov backward equation and $\frac{\partial u}{\partial t} = \mathcal{L}^* u$ the Fokker–Planck equation [2].
Remark 1.
As in [13], we will often consider systems of the form:
$\mathrm{d}X_t = -\nabla V(X_t)\,\mathrm{d}t + \sqrt{2 \beta^{-1}}\,\mathrm{d}B_t,$
where $V$ is a given potential and $\beta$ the inverse temperature. In this case, the operators can be written as:
$\mathcal{L} f = -\nabla V \cdot \nabla f + \beta^{-1} \Delta f \quad \text{and} \quad \mathcal{L}^* f = \nabla V \cdot \nabla f + \Delta V\, f + \beta^{-1} \Delta f.$
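Trajectory data for such drift-diffusion processes can be generated with a simple Euler–Maruyama scheme. The sketch below uses the hypothetical choice $V(x) = x^2/2$ with $\beta = 1$ (so the invariant density $\rho_0 \propto e^{-\beta V}$ is the standard normal); step size, trajectory length, and burn-in are arbitrary illustrative parameters, not values from the text.

```python
import numpy as np

# Euler--Maruyama discretization of dX_t = -grad V(X_t) dt + sqrt(2/beta) dB_t
# (the SDE of Remark 1). V(x) = x^2/2 and beta = 1 are illustrative choices;
# the invariant density is then the standard normal distribution.
def euler_maruyama(grad_V, beta, x0, dt, n_steps, rng):
    x = np.empty(n_steps + 1)
    x[0] = x0
    noise = np.sqrt(2.0 / beta * dt) * rng.standard_normal(n_steps)
    for k in range(n_steps):
        x[k + 1] = x[k] - grad_V(x[k]) * dt + noise[k]
    return x

rng = np.random.default_rng(42)
traj = euler_maruyama(lambda x: x, beta=1.0, x0=0.0, dt=1e-2, n_steps=200_000, rng=rng)
burn = traj[10_000:]  # discard the transient towards equilibrium
print(burn.mean(), burn.var())  # should roughly match the N(0, 1) statistics
```

An equilibrated trajectory of this kind is exactly the input assumed by the reversible kernel method developed later in the paper.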

2.2. Generator EDMD

A data-driven method for the approximation of the generator of the Koopman operator and Perron–Frobenius operator called generator extended dynamic mode decomposition (gEDMD) was derived in [13]. While standard EDMD requires a training dataset $\{x_m\}_{m=1}^{M}$ and the corresponding data points $\{y_m\}_{m=1}^{M}$, where $y_m = \Theta^\tau(x_m)$ for a fixed lag time $\tau$, gEDMD assumes that we can evaluate or estimate (using, for instance, Kramers–Moyal formulae) $\{b(x_m)\}_{m=1}^{M}$ and $\{\sigma(x_m)\}_{m=1}^{M}$. Choosing a dictionary of basis functions $\{\phi_n\}_{n=1}^{N}$, where $\phi_n \colon \mathbb{R}^d \to \mathbb{R}$, and defining $\phi(x) = [\phi_1(x), \dots, \phi_N(x)]^\top$, we compute the matrices $\Phi_X, \mathrm{d}\Phi_X \in \mathbb{R}^{N \times M}$, with:
$\Phi_X = \begin{bmatrix} \phi_1(x_1) & \cdots & \phi_1(x_M) \\ \vdots & \ddots & \vdots \\ \phi_N(x_1) & \cdots & \phi_N(x_M) \end{bmatrix} \quad \text{and} \quad \mathrm{d}\Phi_X = \begin{bmatrix} \mathrm{d}\phi_1(x_1) & \cdots & \mathrm{d}\phi_1(x_M) \\ \vdots & \ddots & \vdots \\ \mathrm{d}\phi_N(x_1) & \cdots & \mathrm{d}\phi_N(x_M) \end{bmatrix},$
where:
$\mathrm{d}\phi_n(x) = \sum_{i=1}^{d} b_i(x)\, \frac{\partial \phi_n}{\partial x_i}(x) + \frac{1}{2} \sum_{i=1}^{d} \sum_{j=1}^{d} a_{ij}(x)\, \frac{\partial^2 \phi_n}{\partial x_i \partial x_j}(x).$
The matrix representation of the least-squares approximation of the Koopman generator $\mathcal{L}$ is then given by:
$\widehat{L} = \mathrm{d}\Phi_X\, \Phi_X^{+} = \widehat{A}\, \widehat{G}^{+},$
with:
$\widehat{A} = \frac{1}{M} \sum_{m=1}^{M} \mathrm{d}\phi(x_m)\, \phi(x_m)^\top \quad \text{and} \quad \widehat{G} = \frac{1}{M} \sum_{m=1}^{M} \phi(x_m)\, \phi(x_m)^\top.$
It was shown that gEDMD, in the infinite data limit, converges to a Galerkin projection of the generator onto the space spanned by the basis functions $\{\phi_n\}_{n=1}^{N}$ and that $\widehat{L}$ is an empirical estimate of the projected generator [13]. Approximations of eigenfunctions of $\mathcal{L}$ are then given by:
$\varphi(x) = \langle \xi, \phi(x) \rangle,$
where $\xi$ is an eigenvector of $\widehat{L}$ corresponding to the eigenvalue $\lambda$ and $\langle \cdot, \cdot \rangle$ denotes the standard Euclidean inner product. Analogously, the matrix representation of the generator of the Perron–Frobenius operator is given by $\widehat{L}^{*} = \widehat{A}^{\top} \widehat{G}^{+}$. Further details, examples, and different applications including system identification, coarse graining, and control can also be found in [13].

2.3. Second-Order Differential Operators

Consider the generator $\mathcal{L}$ in (2), and assume there is a unique strictly positive invariant density $\rho_0$, which we can write as $\rho_0(x) \propto \exp(-F(x))$. The function $F$ is called a generalized potential (with $F = \beta V$ for the stochastic differential equation in Remark 1). The measure corresponding to $\rho_0$ is denoted by $\mathrm{d}\mu = \rho_0\, \mathrm{d}x$. The negative generator can be decomposed into a symmetric and an anti-symmetric part as:
$-\mathcal{L} = -\frac{1}{2}\, e^{F} \nabla \cdot \left( e^{-F} a \nabla\, \cdot \right) + J \cdot \nabla = \mathcal{S} + \mathcal{A}, \qquad (3)$
$J = \frac{1}{2}\, e^{F} \nabla \cdot \left( e^{-F} a \right) - b; \qquad (4)$
see [24]. The vector field $J$ is called the stationary probability flow. In the form of (3), $-\mathcal{L}$ is a special case of an elliptic second-order differential operator on $L^2_\mu$, given by:
$\mathcal{T} = -\frac{1}{2}\, e^{F} \nabla \cdot \left( e^{-F} a \nabla\, \cdot \right) + J \cdot \nabla + W, \qquad (5)$
for scalar functions $F, W$, a uniformly positive definite matrix field $a$, and a vector field $J$.
Remark 2.
Because of the general form of (5), we avoid making too many assumptions about the coefficients of T or its domain of definition. The goal is to derive numerical algorithms using a minimal set of assumptions. A detailed analysis of the interplay between the domains and properties of the reproducing kernel Hilbert space (RKHS) will be carried out in future publications.
If $F \equiv 0$, we obtain generalized Schrödinger operators as another special case, i.e.,
$\mathcal{H} = -\frac{1}{2} \nabla \cdot \left( a \nabla\, \cdot \right) + J \cdot \nabla + W, \qquad (6)$
with $W$ called the potential energy in quantum mechanics. In particular, with the reduced Planck constant $\hbar$ and the mass $m$, setting $a \equiv \frac{\hbar^2}{m} I$ and $J \equiv 0$ leads to the Hamiltonian $\mathcal{H} = -\frac{\hbar^2}{2m} \Delta + W$ of the time-independent Schrödinger equation in quantum mechanics:
$\mathcal{H} \psi = E \psi. \qquad (7)$
We note for later use that, under certain conditions, Schrödinger operators and Koopman generators are equivalent; see, e.g., ([24] Chapter 4.9). For the sake of completeness, the proof is shown in Appendix A.
Lemma 1.
The ergodic generator $\mathcal{L}$ with unique positive invariant density $\rho_0 \propto \exp(-F)$ is unitarily equivalent to the Schrödinger operator $\mathcal{H}$ in (6) on $L^2$, with $J$ remaining unchanged and $W$ given by:
$W = -\frac{1}{4} \nabla \cdot (a \nabla F) + \frac{1}{8} \nabla F \cdot a \nabla F + \frac{1}{2} J \cdot \nabla F.$
The function $e^{-\frac{1}{2} F}$ is an eigenfunction of $\mathcal{H}$ with eigenvalue zero. Conversely, let $\mathcal{H}$ be as in (6), and assume there is a non-degenerate smallest eigenvalue $E_0$ with strictly positive real eigenfunction $\psi_0 = \exp(-\eta)$. Then, $\mathcal{H}$ is unitarily equivalent to a negative ergodic generator $-\mathcal{L}$ on $L^2_\mu$, where $\rho_0 \propto \exp(-2\eta)$ is the density associated with $\mu$ and $\rho_0$ is invariant for the corresponding SDE. The explicit form of $-\mathcal{L}$ is given by:
$-\mathcal{L} = e^{\eta} \left( \mathcal{H} - E_0 \right) \left( e^{-\eta}\, \cdot \right) = -\frac{1}{2}\, e^{2\eta} \nabla \cdot \left( e^{-2\eta} a \nabla\, \cdot \right) + J \cdot \nabla.$
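As a short consistency check (our own computation, not part of the original text), substituting the drift-diffusion setting of Remark 1, i.e., $a = 2\beta^{-1} I$, $F = \beta V$, and $J \equiv 0$, into the expression for $W$ in Lemma 1 recovers the familiar potential of the associated Schrödinger operator:

```latex
W = -\tfrac{1}{4}\,\nabla\cdot(a\nabla F) + \tfrac{1}{8}\,\nabla F\cdot a\,\nabla F
  = -\tfrac{1}{4}\cdot 2\beta^{-1}\beta\,\Delta V + \tfrac{1}{8}\cdot 2\beta^{-1}\beta^{2}\,\lvert\nabla V\rvert^{2}
  = -\tfrac{1}{2}\,\Delta V + \tfrac{\beta}{4}\,\lvert\nabla V\rvert^{2}.
```

In this case, $e^{-\frac{1}{2}\beta V}$ is the zero-energy ground state, in agreement with the invariant density $\rho_0 \propto e^{-\beta V}$.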
Corollary 1.
Applying Lemma 1 to (7), we have:
$\frac{1}{\psi_0} \left( \mathcal{H} - E_0 \right) (\psi_0 f) = \frac{\hbar^2}{m}\, \nabla \eta \cdot \nabla f - \frac{\hbar^2}{2m}\, \Delta f = -\mathcal{L} f,$
where $\mathcal{L}$ is the Koopman generator of a drift-diffusion process (see Remark 1) with potential (up to an additive constant):
$V(x) = \frac{\hbar^2}{m}\, \eta(x),$
and temperature $\beta^{-1} = \frac{\hbar^2}{2m}$.
We will exploit this duality below to apply methods developed for the Koopman operator or generator to the Schrödinger operator. More details on quantum chemistry in general and also the quantum harmonic oscillator and the hydrogen atom studied in Section 4 can be found, e.g., in [25].
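The identity of Corollary 1 can be checked numerically with finite differences. The sketch below does so for the harmonic oscillator with $\hbar = m = \omega = 1$ (so $\mathcal{H} = -\frac{1}{2}\frac{\mathrm{d}^2}{\mathrm{d}x^2} + \frac{1}{2}x^2$, $\psi_0 = e^{-x^2/2}$, $E_0 = \frac{1}{2}$, and $\eta(x) = x^2/2$); the test function $f(x) = \sin(x)$ and the evaluation point are arbitrary choices:

```python
import numpy as np

# Finite-difference check of Corollary 1 for the harmonic oscillator with
# hbar = m = omega = 1: H = -0.5 d^2/dx^2 + 0.5 x^2, psi_0(x) = exp(-x^2/2),
# E_0 = 1/2, eta(x) = x^2/2. The test function f(x) = sin(x) is arbitrary.
f    = np.sin
psi0 = lambda x: np.exp(-0.5 * x**2)
g    = lambda x: psi0(x) * f(x)

x0, h = 0.7, 1e-4
H_g = -0.5 * (g(x0 + h) - 2*g(x0) + g(x0 - h)) / h**2 + 0.5 * x0**2 * g(x0)
lhs = (H_g - 0.5 * g(x0)) / psi0(x0)          # (1/psi_0)(H - E_0)(psi_0 f)
rhs = x0 * np.cos(x0) + 0.5 * np.sin(x0)      # eta'(x) f'(x) - 0.5 f''(x) = -L f
print(lhs, rhs)  # the two values agree up to finite-difference error
```

Here the right-hand side is the generator of a drift-diffusion process with potential $V(x) = x^2/2$ and temperature $\beta^{-1} = \frac{1}{2}$, as predicted by the corollary.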

2.4. Reproducing Kernel Hilbert Spaces and Derivative Reproducing Properties

We aim at representing the differential operators introduced above in reproducing kernel Hilbert spaces.
Definition 1.
Let $\mathbb{X}$ be a set and $\mathbb{H}$ a space of functions $f \colon \mathbb{X} \to \mathbb{R}$. Then, $\mathbb{H}$ is called an RKHS with inner product $\langle \cdot, \cdot \rangle_{\mathbb{H}}$ if a function $k \colon \mathbb{X} \times \mathbb{X} \to \mathbb{R}$ exists such that:
(i)
$\langle f, k(x, \cdot) \rangle_{\mathbb{H}} = f(x)$ for all $f \in \mathbb{H}$ and
(ii)
$\mathbb{H} = \overline{\operatorname{span}\{ k(x, \cdot) \mid x \in \mathbb{X} \}}$.
The function $k$ is called a kernel. It was shown that every RKHS has a unique symmetric positive definite reproducing kernel and that, conversely, every symmetric positive definite kernel spans a unique RKHS; see [26,27,28]. Here, we use the terms positive definite and strictly positive definite, i.e., positive definite means that $\sum_{r=1}^{M} \sum_{s=1}^{M} \gamma_r \gamma_s\, k(x_r, x_s) \ge 0$ for all $M \in \mathbb{N}$, $\gamma_1, \dots, \gamma_M \in \mathbb{R}$, and $x_1, \dots, x_M \in \mathbb{X}$. Frequently used kernels include the polynomial kernel and the Gaussian kernel, given by:
$k(x, x') = (c + x^\top x')^q \quad \text{and} \quad k(x, x') = \exp\left( -\frac{\| x - x' \|^2}{2 \varsigma^2} \right),$
respectively. Here, $q \in \mathbb{N}$ is the degree of the polynomial kernel, $c \ge 0$ a parameter, and $\varsigma$ the bandwidth of the Gaussian kernel. We now introduce the partial derivative reproducing properties of RKHSs [16]. Let $\alpha = (\alpha_1, \dots, \alpha_d) \in \mathbb{N}_0^d$ be a multi-index and $|\alpha| = \sum_{i=1}^{d} \alpha_i$. Furthermore, for a fixed $p \in \mathbb{N}_0$, we define the index set $I_p = \{ \alpha \in \mathbb{N}_0^d : |\alpha| \le p \}$. Given $f \colon \mathbb{X} \to \mathbb{R}$, let $D^\alpha$ denote the partial derivative (assuming it exists):
$D^\alpha f = \frac{\partial^{|\alpha|}}{\partial x_1^{\alpha_1} \cdots \partial x_d^{\alpha_d}}\, f.$
Thus, the $i$-th entry of the gradient is given by $D^{e_i} f$ and the $(i,j)$-th entry of the Hessian by $D^{e_i + e_j} f$, where $e_i$ and $e_j$ are the $i$-th and $j$-th unit vectors, respectively. When we apply the differential operator $D^\alpha$ to the kernel $k$, the multi-index $\alpha$ is assumed to be embedded into $\mathbb{N}_0^{2d}$ by adding zeros, i.e., the derivatives are computed with respect to the first argument of the kernel. Furthermore, when we write $\nabla k(x, x')$, the gradient is computed with respect to $x$. In what follows, let $\phi(x) = k(x, \cdot)$, where $\phi$ is the canonical feature space mapping.
Theorem 1
([16]). Given $p \in \mathbb{N}_0$ and a positive definite kernel $k \colon \mathbb{X} \times \mathbb{X} \to \mathbb{R}$ with $k \in C^{2p}(\mathbb{X} \times \mathbb{X})$, the following holds:
(i)
$D^\alpha k(x, \cdot) \in \mathbb{H}$ for any $x \in \mathbb{X}$ and $\alpha \in I_p$.
(ii)
$(D^\alpha f)(x) = \langle D^\alpha k(x, \cdot), f \rangle_{\mathbb{H}}$ for any $x \in \mathbb{X}$, $f \in \mathbb{H}$, and $\alpha \in I_p$.
The second property is called the derivative reproducing property. For p = 0 , this reduces to the standard reproducing property of RKHSs.
Example 1.
Let us consider the two aforementioned kernels:
  • For the polynomial kernel, we obtain:
    $D^{e_i} k(x, x') = q\, x_i' \left( c + x^\top x' \right)^{q-1} \quad \text{and} \quad D^{e_i + e_j} k(x, x') = q (q-1)\, x_i' x_j' \left( c + x^\top x' \right)^{q-2}.$
    Thus, $\nabla k(x, x') = q\, x' (c + x^\top x')^{q-1}$ and $\nabla^2 k(x, x') = q (q-1)\, x' x'^\top (c + x^\top x')^{q-2}$.
  • Similarly, for the Gaussian kernel, this results in:
    $D^{e_i} k(x, x') = -\frac{1}{\varsigma^2} (x_i - x_i')\, k(x, x'), \qquad D^{e_i + e_j} k(x, x') = \begin{cases} \left( \frac{1}{\varsigma^4} (x_i - x_i')^2 - \frac{1}{\varsigma^2} \right) k(x, x'), & i = j, \\ \frac{1}{\varsigma^4} (x_i - x_i')(x_j - x_j')\, k(x, x'), & i \ne j, \end{cases}$
    $\nabla k(x, x') = -\frac{1}{\varsigma^2} (x - x')\, k(x, x')$, and $\nabla^2 k(x, x') = \left( \frac{1}{\varsigma^4} (x - x')(x - x')^\top - \frac{1}{\varsigma^2} I \right) k(x, x')$.
For the numerical experiments below, we will mainly use the Gaussian kernel. (To get error estimates, it might be more convenient to use Wendland functions [19]. We leave the formal analysis of the methods developed in this paper for future work.)
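The closed-form Gaussian-kernel derivatives above are easy to validate against finite differences. The following sketch does so in three dimensions; the bandwidth, evaluation points, and step size are arbitrary choices:

```python
import numpy as np

# Verify the closed-form Gaussian-kernel derivatives (Example 1) against
# central finite differences; all parameters here are arbitrary test choices.
sig = 0.8
k = lambda x, y: np.exp(-np.sum((x - y)**2) / (2 * sig**2))

def grad1_k(x, y):            # D^{e_i} k(x, y): gradient w.r.t. first argument
    return -(x - y) / sig**2 * k(x, y)

def hess1_k(x, y):            # D^{e_i + e_j} k(x, y): Hessian w.r.t. first argument
    d = x - y
    return (np.outer(d, d) / sig**4 - np.eye(x.size) / sig**2) * k(x, y)

rng = np.random.default_rng(1)
x, y = rng.normal(size=3), rng.normal(size=3)
h, I = 1e-5, np.eye(3)
fd_grad = np.array([(k(x + h*I[i], y) - k(x - h*I[i], y)) / (2*h) for i in range(3)])
fd_hess = np.array([[(grad1_k(x + h*I[j], y)[i] - grad1_k(x - h*I[j], y)[i]) / (2*h)
                     for j in range(3)] for i in range(3)])
print(np.abs(fd_grad - grad1_k(x, y)).max(), np.abs(fd_hess - hess1_k(x, y)).max())
```

These two functions are exactly the ingredients needed to assemble the Gram matrices of the kernel-based methods in the next section.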

3. Kernel-Based Representation of Differential Operators

In this section, we introduce the Galerkin projection of the differential operators discussed above onto the RKHS, including the Koopman generator and Schrödinger operator. We then move on to show how these projected operators can be estimated from data.

3.1. Galerkin Projection of Operators

Let $\mu$ denote a probability measure on the state space $\mathbb{X}$, with density $\rho_0 \propto e^{-F}$ for a generalized potential $F$.
Definition 2.
We define the covariance operator $C_{00} \colon \mathbb{H} \to \mathbb{H}$ by:
$C_{00} = \int \phi(x) \otimes \phi(x)\, \mathrm{d}\mu(x), \qquad (8)$
and an operator $\mathcal{T}_{\mathbb{H}} \colon \mathbb{H} \to \mathbb{H}$ by:
$\mathcal{T}_{\mathbb{H}} = -\int \phi(x) \otimes \left[ \frac{1}{2} \sum_{i=1}^{d} \sum_{j=1}^{d} a_{ij}(x)\, D^{e_i + e_j} \phi(x) \right] \mathrm{d}\mu(x) + \int \phi(x) \otimes \left[ \sum_{i=1}^{d} \left( J_i(x) - \frac{1}{2}\, e^{F(x)} \nabla \cdot \left( e^{-F(x)} a_{:,i}(x) \right) \right) D^{e_i} \phi(x) \right] \mathrm{d}\mu(x) + \int W(x)\, \phi(x) \otimes \phi(x)\, \mathrm{d}\mu(x). \qquad (9)$
If $J \equiv 0$, we define $\mathcal{T}_{\mathbb{H}}$ by:
$\mathcal{T}_{\mathbb{H}} = \int \left[ \frac{1}{2} \sum_{i=1}^{d} \sum_{j=1}^{d} a_{ij}(x)\, D^{e_i} \phi(x) \otimes D^{e_j} \phi(x) + W(x)\, \phi(x) \otimes \phi(x) \right] \mathrm{d}\mu(x). \qquad (10)$
The operator $C_{00}$ is the standard covariance operator $C_{XX}$; see [29,30]. The operator $\mathcal{T}_{\mathbb{H}}$ mimics the action of the bilinear form $\langle \mathcal{T} f, g \rangle_\mu$ on the RKHS. It plays the same role as the cross-covariance operator $C_{XY}$ for the Koopman operator in [15]. The form of the symmetric operator for $J \equiv 0$ is motivated by the symmetry of $\mathcal{T}$, and that, at least formally:
$\langle \mathcal{T} f, g \rangle_\mu = \int \left[ \frac{1}{2} \nabla f(x) \cdot a(x) \nabla g(x) + W(x)\, f(x)\, g(x) \right] \mathrm{d}\mu(x);$
see also [31].
Lemma 2.
Assume that $\mathbb{H} \subseteq \mathcal{D}(\mathcal{T})$ and that all terms appearing under the integral signs in (8) and (9) (or (10)) are in $L^1_\mu$ as bounded operators on $\mathbb{H}$, that is:
$\int | a_{ij}(x) |\, \| D^{e_i + e_j} \phi(x) \|_{\mathbb{H}}\, \| \phi(x) \|_{\mathbb{H}}\, \mathrm{d}\mu(x) < \infty, \qquad (11)$
$\int \left( | J_i(x) | + \frac{1}{2}\, e^{F(x)} \left| \nabla \cdot \left( e^{-F(x)} a_{:,i}(x) \right) \right| \right) \| D^{e_i} \phi(x) \|_{\mathbb{H}}\, \| \phi(x) \|_{\mathbb{H}}\, \mathrm{d}\mu(x) < \infty, \qquad (12)$
$\int | W(x) |\, \| \phi(x) \|_{\mathbb{H}}\, \| \phi(x) \|_{\mathbb{H}}\, \mathrm{d}\mu(x) < \infty, \qquad (13)$
$\int \| \phi(x) \|_{\mathbb{H}}\, \| \phi(x) \|_{\mathbb{H}}\, \mathrm{d}\mu(x) < \infty. \qquad (14)$
Then, for all $f, g \in \mathbb{H}$,
$\langle \mathcal{T} f, g \rangle_\mu = \langle \mathcal{T}_{\mathbb{H}} f, g \rangle_{\mathbb{H}}, \qquad \langle f, g \rangle_\mu = \langle C_{00} f, g \rangle_{\mathbb{H}}.$
The proof can be found in Appendix A. It uses the derivative reproducing properties and the definition of rank-one operators. Note that:
$D^{e_i} f(x)\, g(x) = \langle D^{e_i} \phi(x), f \rangle_{\mathbb{H}}\, \langle \phi(x), g \rangle_{\mathbb{H}} = \langle D^{e_i} \phi(x) \otimes \phi(x),\, f \otimes g \rangle_{\mathbb{H} \otimes \mathbb{H}} = \langle \left( \phi(x) \otimes D^{e_i} \phi(x) \right) f, g \rangle_{\mathbb{H}}.$
Lemma 3.
Assume that $\mathcal{T} f \in \mathbb{H}$ for all $f \in \mathbb{H}$; then $\mathcal{T}_{\mathbb{H}} f = C_{00}\, \mathcal{T} f$.
Proof. 
The proof is similar to the one for the corresponding result for kernel transfer operators; see [15]. With the previous lemma, we obtain:
$\langle C_{00}\, \mathcal{T} f, g \rangle_{\mathbb{H}} = \mathbb{E}_\mu\left[ (\mathcal{T} f)(x)\, g(x) \right] = \int (\mathcal{T} f)(x)\, g(x)\, \mathrm{d}\mu(x) = \langle \mathcal{T}_{\mathbb{H}} f, g \rangle_{\mathbb{H}}$
for arbitrary $g \in \mathbb{H}$. □
If the assumptions of Lemma 3 are satisfied and the operator $C_{00}$ is invertible, the RKHS operators defined above can be used to compute exact eigenfunctions of $\mathcal{T}$. Indeed, if $\varphi$ is a solution of:
$\mathcal{T}_{\mathbb{H}} \varphi = C_{00}\, \mathcal{T} \varphi = \lambda\, C_{00} \varphi,$
then multiplying this equation by $C_{00}^{-1}$ shows that $\varphi$ is also an eigenfunction of $\mathcal{T}$. A typical approach to circumvent the potential nonexistence of the inverse of the covariance operator is to consider a regularized version $\mathcal{T}_\varepsilon = (C_{00} + \varepsilon I)^{-1}\, \mathcal{T}_{\mathbb{H}}$ for a regularization parameter $\varepsilon$. The assumptions of Lemma 3 are strong, however, and may be hard to verify in practice. In any case, Lemma 2 shows that the operators defined in Definition 2 provide a Galerkin approximation of the full operator in the RKHS $\mathbb{H}$.

3.2. Empirical Estimates

The next step is to derive empirical estimates of the operators defined above. Given training data $\{x_m\}_{m=1}^{M}$ sampled from the probability distribution $\mu$, we define $\Phi = [\phi(x_1), \dots, \phi(x_M)]$ and $\mathrm{d}\Phi = [\mathrm{d}\phi(x_1), \dots, \mathrm{d}\phi(x_M)]$, where:
$\mathrm{d}\phi(x_m) = -\frac{1}{2} \sum_{i=1}^{d} \sum_{j=1}^{d} a_{ij}(x_m)\, D^{e_i + e_j} \phi(x_m) + \sum_{i=1}^{d} \left( J_i(x_m) - \frac{1}{2} \sum_{j=1}^{d} e^{F(x_m)} \frac{\partial}{\partial x_j} \left( e^{-F(x_m)} a_{ji}(x_m) \right) \right) D^{e_i} \phi(x_m) + W(x_m)\, \phi(x_m).$
If $\mathcal{T}$ is the generator of an SDE with invariant measure $\mu$, the data can also be obtained by integrating the stochastic dynamics with the initial condition drawn from $\mu$. We see that $\Phi$ is the standard feature map and $\mathrm{d}\Phi$ contains the action of the differential operator. The empirical estimates of the operators $C_{00}$ and $\mathcal{T}_{\mathbb{H}}$ are then given by:
$\widehat{C}_{00} = \frac{1}{M} \sum_{m=1}^{M} \phi(x_m) \otimes \phi(x_m) = \frac{1}{M}\, \Phi \Phi^\top, \qquad \widehat{\mathcal{T}}_{\mathbb{H}} = \frac{1}{M} \sum_{m=1}^{M} \phi(x_m) \otimes \mathrm{d}\phi(x_m) = \frac{1}{M}\, \Phi\, \mathrm{d}\Phi^\top.$
Note that these are still finite-rank operators on the full RKHS $\mathbb{H}$. For the symmetric RKHS operator $\mathcal{T}_{\mathbb{H}}$, we need to define the empirical estimate in a slightly different way. Decompose the positive definite matrix $a(x_m) = \sigma(x_m)\, \sigma(x_m)^\top$. With:
$\mathrm{d}\phi^{l}(x_m) = \sum_{i=1}^{d} \sigma_{il}(x_m)\, D^{e_i} \phi(x_m) = \nabla \phi(x_m)^\top \sigma_{l}(x_m),$
where $\sigma_l$ is the $l$-th column of $\sigma$, the empirical RKHS operator becomes:
$\widehat{\mathcal{T}}_{\mathbb{H}} = \frac{1}{2M} \sum_{m=1}^{M} \sum_{l=1}^{d} \mathrm{d}\phi^{l}(x_m) \otimes \mathrm{d}\phi^{l}(x_m) + \frac{1}{M} \sum_{m=1}^{M} W(x_m)\, \phi(x_m) \otimes \phi(x_m).$
Remark 3.
If the feature space associated with the kernel $k$ is finite-dimensional and known explicitly, i.e., $\phi(x) = [\phi_1(x), \dots, \phi_N(x)]^\top$ and $k(x, x') = \langle \phi(x), \phi(x') \rangle$, then for the Koopman generator, we obtain gEDMD as a special case, with $\widehat{C}_{00} = \widehat{G}$ and $\widehat{\mathcal{T}}_{\mathbb{H}} = \widehat{A}$. However, the goal is to rewrite gEDMD in such a way that only kernel evaluations are required since $\phi$ can potentially be infinite-dimensional and might only be defined implicitly.

3.3. Weak Formulation and Numerical Algorithm

With Lemma 2 in mind, we now proceed to the weak formulation of the eigenvalue problem for the operator $\mathcal{T}$. We then define the quadratic forms:
$Q(f, g) = \langle \mathcal{T} f, g \rangle_\mu, \quad f, g \in \mathcal{D}_Q, \qquad S(f, g) = \langle f, g \rangle_\mu, \quad f, g \in L^2_\mu,$
$Q_{\mathbb{H}}(f, g) = \langle \mathcal{T}_{\mathbb{H}} f, g \rangle_{\mathbb{H}}, \quad f, g \in \mathbb{H}, \qquad S_{\mathbb{H}}(f, g) = \langle C_{00} f, g \rangle_{\mathbb{H}}, \quad f, g \in \mathbb{H},$
$\widehat{Q}_{\mathbb{H}}(f, g) = \langle \widehat{\mathcal{T}}_{\mathbb{H}} f, g \rangle_{\mathbb{H}}, \quad f, g \in \mathbb{H}, \qquad \widehat{S}_{\mathbb{H}}(f, g) = \langle \widehat{C}_{00} f, g \rangle_{\mathbb{H}}, \quad f, g \in \mathbb{H},$
where $\mathcal{D}_Q$ is the domain of the quadratic form $Q$. We consider the weak eigenvalue problems:
$Q(f_n, g) = \lambda_n\, S(f_n, g) \quad \forall g \in \mathcal{D}_Q, \qquad (15)$
$Q_{\mathbb{H}}(\tilde{f}_n, g) = \tilde{\lambda}_n\, S_{\mathbb{H}}(\tilde{f}_n, g) \quad \forall g \in \mathbb{H}, \qquad (16)$
$\widehat{Q}_{\mathbb{H}}(\hat{f}_n, g) = \hat{\lambda}_n\, \widehat{S}_{\mathbb{H}}(\hat{f}_n, g) \quad \forall g \in \mathbb{H}. \qquad (17)$
We will now rewrite (17) in such a way that only kernel evaluations—in the form of Gram matrices—are required. The derivation is similar to the kernel transfer operator counterpart in [15], but we now need to consider derivatives at the training data points instead of the time-lagged variables. We start by restricting (17) to the finite-dimensional space $\mathbb{H}_M = \operatorname{span}\{ \phi(x_m) \}_{m=1}^{M}$, which we assume to be M-dimensional. Elements of this space are of the form $f = \Phi u$ for some vector $u \in \mathbb{R}^M$. We examine the quadratic forms $\widehat{Q}_{\mathbb{H}}$ and $\widehat{S}_{\mathbb{H}}$ on this space.
Lemma 4.
A solution of the problem $\widehat{Q}_{\mathbb{H}}(f, g) = \hat{\lambda}\, \widehat{S}_{\mathbb{H}}(f, g)$ is given by $f = \Phi u$, where $u$ is a solution of one of the following generalized eigenvalue problems:
(i)
In the general case, $u$ solves $G_2\, u = \hat{\lambda}\, G_0\, u$, where the entries of the matrices $G_2$ and $G_0$ are given by:
$[G_2]_{mr} = \mathrm{d}\phi(x_m)(x_r), \qquad [G_0]_{mr} = \phi(x_m)(x_r).$
(ii)
Analogously, for the symmetric case, we obtain $\frac{1}{2} \sum_{l=1}^{d} G_1^{(l)\top} G_1^{(l)}\, u = \hat{\lambda}\, G_0^\top G_0\, u$, where we define:
$[G_1^{(l)}]_{mr} = \sigma_l(x_m)^\top \nabla k(x_m, x_r)$
and $\sigma_l(x_m)$ is the $l$-th column of the matrix $\sigma(x_m)$.
The proofs are shown in Appendix A. Since $\phi(x_m)(x_r) = k(x_m, x_r)$, $G_0$ is the standard Gram matrix. The reversible case requires only first-order derivatives of the kernel. Furthermore, only trajectory data sampled from the invariant distribution $\mu$ and estimates of the diffusion term $\sigma$ are needed. For typical problems, $\sigma$ is constant and not position-dependent. As a result, the diffusion term needs to be estimated only once or might even be known. For molecular dynamics problems, for instance, it is proportional to the square root of the temperature. The overall approach is summarized in the following algorithm. Note that it is not a direct kernelization of gEDMD, but an extension that approximates the Koopman generator as a special case.
Algorithm 1.
The final numerical algorithm can be summarized as follows:
(1)
Choose a kernel k and compute all its required derivatives, either analytically or with the aid of automatic differentiation.
(2)
Assemble the Gram matrices G 2 and G 0 or, if the system is symmetric, G 1 ( l ) , for l = 1 , , d , and G 0 .
(3)
Solve the corresponding eigenvalue problem described in Lemma 4 to obtain an eigenvector u.
(4)
An eigenfunction is then given by φ = Φ u .
The two main steps of the algorithm are assembling the Gram matrices and solving the generalized eigenvalue problem. Since the size of the eigenvalue problem depends on the number of data points, the cost is cubic in M. This is a drawback of many kernel-based methods. The efficient approximation of solutions to this eigenvalue problem for large datasets will be considered in future work.

3.4. Analysis

In this section, we provide some preliminary analysis of the methods introduced above. The first result concerns the convergence of the empirical estimates.
Lemma 5.
As $M \to \infty$, the empirical estimates defined in Section 3.2 converge to the corresponding RKHS operators in Definition 2 with respect to the operator norm for almost all data sequences $\{x_m\}_{m=1}^{M}$, if the data were generated either as i.i.d. samples from $\mu$ or by integrating a stochastic dynamics that is ergodic with respect to $\mu$.
Proof. 
The statement follows from ergodicity of the underlying dynamics, the integrability conditions in Lemma 2, and the Birkhoff individual ergodic theorem for Banach space valued functions [32]. □
Next, we generalize ([33] Theorem 7) to obtain convergence rates on the empirical estimates for i.i.d. data:
Lemma 6.
Assume that the integrability conditions (11)–(14) hold. Then:
(i)
The operators $C_{00}$, $\widehat{C}_{00}$, $\mathcal{T}_{\mathbb{H}}$, and $\widehat{\mathcal{T}}_{\mathbb{H}}$ are Hilbert–Schmidt.
(ii)
Let $\delta \in (0, 1]$. Assume the coefficients of the operator $\mathcal{T}$ are all globally bounded, and let $\sup_{x \in \mathbb{X}} D^\alpha k(x, x) < \infty$ for all $|\alpha| \le 4$ ($|\alpha| \le 2$ in the symmetric case). If the data are drawn i.i.d. from the distribution $\mu$, then there are constants $\kappa_0, \kappa_1$ such that with probability at least $1 - \delta$,
$\| C_{00} - \widehat{C}_{00} \|_{\mathrm{HS}} \le 2 \kappa_0 \sqrt{\frac{2}{M}}\, \log^{1/2} \frac{2}{\delta}, \qquad \| \mathcal{T}_{\mathbb{H}} - \widehat{\mathcal{T}}_{\mathbb{H}} \|_{\mathrm{HS}} \le 2 \kappa_1 \sqrt{\frac{2}{M}}\, \log^{1/2} \frac{2}{\delta},$
where $\| \cdot \|_{\mathrm{HS}}$ denotes the Hilbert–Schmidt norm.
Proof. 
(i) The empirical estimates are all of finite rank and, therefore, Hilbert–Schmidt. For $C_{00}$ and $\mathcal{T}_{\mathbb{H}}$, this follows from the integrability conditions and the first part of the proof of Lemma 2; see Appendix A.
(ii) For $C_{00}$, the bound was already proven in [33], with $\kappa_0 = \sup_{x \in \mathbb{X}} k(x, x)$. We can employ the same strategy to obtain the bound for $\mathcal{T}_{\mathbb{H}}$. Consider the operator $\widehat{\mathcal{T}}_{\mathbb{H}}^{m} = \phi(x_m) \otimes \mathrm{d}\phi(x_m) - \mathcal{T}_{\mathbb{H}}$, which satisfies $\mathbb{E}_\mu[\widehat{\mathcal{T}}_{\mathbb{H}}^{m}] = 0$. By global boundedness of the coefficients of $\mathcal{T}$ and by:
$\| \phi(x) \otimes D^\alpha \phi(x) \|_{\mathrm{HS}} = \| \phi(x) \|_{\mathbb{H}}\, \| D^\alpha \phi(x) \|_{\mathbb{H}} = \langle k(x, \cdot), k(x, \cdot) \rangle_{\mathbb{H}}^{1/2}\, \langle D^\alpha k(x, \cdot), D^\alpha k(x, \cdot) \rangle_{\mathbb{H}}^{1/2} = k(x, x)^{1/2}\, D^{2\alpha} k(x, x)^{1/2},$
we can find a $\kappa_1$ such that $\| \phi(x) \otimes \mathrm{d}\phi(x) \|_{\mathrm{HS}} \le \kappa_1$ for all $x \in \mathbb{X}$. We then have $\| \widehat{\mathcal{T}}_{\mathbb{H}}^{m} \|_{\mathrm{HS}} \le 2 \kappa_1$, and the result follows from the concentration bound ([33] Equation (3)). □
Finally, we show that solutions of (16) are also eigenfunctions of the full operator $\mathcal{T}$ if the RKHS is dense in $\mathcal{D}_Q$:
Proposition 1.
Let $\mathbb{H}$ be dense in $\mathcal{D}_Q$ with respect to the norm in $L^2_\mu$. If $\tilde{\psi} \in \mathbb{H}$ is an eigenfunction of (16), it is also an eigenfunction of $\mathcal{T}$ with the same eigenvalue.
Proof. 
Let $\tilde{\psi}$ solve the variational problem (16). The definition of the operators $C_{00}$ and $\mathcal{T}_{\mathbb{H}}$ implies that for all $\phi \in \mathbb{H}$:
$\langle \mathcal{T} \tilde{\psi}, \phi \rangle_\mu = \langle \mathcal{T}_{\mathbb{H}} \tilde{\psi}, \phi \rangle_{\mathbb{H}} = \tilde{\lambda}\, \langle C_{00} \tilde{\psi}, \phi \rangle_{\mathbb{H}} = \tilde{\lambda}\, \langle \tilde{\psi}, \phi \rangle_\mu.$
By the density of the RKHS, this also holds for all $\phi \in \mathcal{D}_Q$, and consequently, $\tilde{\psi}$ is an eigenfunction of $\mathcal{T}$. □
Note that even if the RKHS is dense, there might be additional eigenfunctions that are not contained in H and that will not appear as solutions of (16).

4. Applications

The methods described above have important applications in molecular dynamics and quantum physics, which we will show in an exemplary way, but can in principle be applied to data generated by arbitrary dynamical systems and also other differential operators. The code and select examples are available online [34]. Note that this is just a proof-of-concept implementation and that the methods could be sped up significantly by vectorizing and parallelizing the code and by tailoring the implementation to specific kernels.

4.1. Molecular Dynamics

Eigenvalues and eigenfunctions of transfer operators associated with molecular dynamics problems are often used to understand protein folding or binding/unbinding processes and their implied time scales. Conformations correspond to metastable sets and transitions between different conformations to crossing energy barriers. The slowest dynamical processes are encoded in eigenfunctions whose eigenvalues are close to zero. Large-scale molecular dynamics examples, analyzed using kernel EDMD, can also be found in [35]. In this paper, we want to focus more on new applications.
Example 2.
Let us consider the simple quadruple-well problem whose potential V is visualized in Figure 1a; see also [13]. We first generate an equilibrated trajectory so that the training dataset of size $M = 5000$ is sampled from the invariant distribution and then apply kernel gEDMD for reversible processes, choosing a Gaussian kernel with bandwidth $\varsigma = 0.5$. The operator $\mathcal{L}$ has four dominant eigenvalues $\lambda_0 = -0.009$, $\lambda_1 = -0.400$, $\lambda_2 = -1.011$, and $\lambda_3 = -1.55$, followed by a spectral gap. We then apply SEBA (sparse eigenbasis approximation; see [36]) to cluster the dominant eigenfunctions into four metastable sets. The results are shown in Figure 1b. As expected, the sets correspond to the wells of the potential. The computation and clustering of the eigenfunctions took approximately four minutes on a standard laptop (8 cores, 1.80 GHz, 16 GB of RAM). For comparison, we estimated the generator eigenvalues using a Markov state model. Applying both methods to 20 different trajectories, we computed the average of the eigenvalues and the standard deviation; see Figure 1c. The results were in excellent agreement. Clearly, the standard deviation increased for higher eigenvalues.

4.2. Quantum Mechanics

The goal now is to apply data-driven methods to simple quantum mechanics problems of the form (7) with $\mathcal{H} = -\frac{\hbar^2}{2m} \Delta + W$.

4.2.1. Generator EDMD for the Schrödinger Equation

Let us consider two systems for which the eigenfunctions are well known.
Example 3.
For the quantum harmonic oscillator with angular frequency ω, the potential can be written as $W(x) = \frac{1}{2} m \omega^2 x^2$. The eigenfunctions $\psi_\ell$ and corresponding energy levels $E_\ell$ of this system can be computed analytically, and we obtain:

$$\psi_\ell(x) = \frac{1}{\sqrt{2^\ell\,\ell!}} \left(\frac{m\omega}{\pi\hbar}\right)^{1/4} \exp\!\left(-\frac{m\omega}{2\hbar}\,x^2\right) H_\ell\!\left(\sqrt{\frac{m\omega}{\hbar}}\,x\right)$$

and $E_\ell = \hbar\omega\left(\ell + \frac{1}{2}\right)$, for $\ell = 0, 1, 2, \dots$. Here, $H_\ell$ denotes the $\ell$th physicists' Hermite polynomial. For the numerical experiments, we set $\hbar = m = \omega = 1$. Furthermore, the bandwidth of the kernel is set to $\varsigma = 1$. Computing the Gram matrices $G_2$ and $G_0$ for 100 uniformly distributed points in $[-5, 5]$ and solving the corresponding eigenvalue problem yields the eigenfunctions shown in Figure 2. The probability densities $p_\ell$ are defined by $p_\ell(x) = |\psi_\ell(x)|^2$.
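The construction of the Gram matrices can be sketched in a few lines. The following is our own minimal illustration (not the authors' implementation), using a uniform grid for reproducibility: we apply $\mathcal{H} = -\frac{1}{2}\frac{d^2}{dx^2} + \frac{1}{2}x^2$ to the Gaussian kernel in its first argument and regularize the generalized eigenvalue problem by projecting onto the dominant eigenspace of $G_0$.

```python
import numpy as np

M, sigma = 100, 1.0
x = np.linspace(-5.0, 5.0, M)                   # evaluation points in [-5, 5]
D = x[:, None] - x[None, :]
G0 = np.exp(-D**2 / (2 * sigma**2))             # [G0]_ij = k(x_i, x_j)

# Apply H to k(., x_j) in the first argument and evaluate at x_i, using
# d^2/dx^2 k(x, y) = ((x - y)^2 / sigma^4 - 1 / sigma^2) k(x, y).
G2 = -0.5 * G0 * (D**2 / sigma**4 - 1 / sigma**2) + 0.5 * x[:, None]**2 * G0

# G0 is numerically rank deficient, so we project the generalized eigenvalue
# problem G2 u = E G0 u onto the dominant eigenspace of G0 before solving.
lam, Q = np.linalg.eigh(G0)
keep = lam > 1e-8 * lam[-1]
Qr, lr = Q[:, keep], lam[keep]
E = np.sort(np.linalg.eigvals((Qr / lr).T @ G2 @ Qr).real)
# The smallest values approximate the energy levels 0.5, 1.5, 2.5, ...
```

The eigenvectors of the reduced problem, mapped back through $Q_r$, give the coefficient vectors $u$ whose associated functions $\sum_r u_r\, k(\cdot, x_r)$ approximate the eigenfunctions.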
Example 4.
As a second example, let us analyze the Schrödinger equation for the hydrogen atom, where $W(x) = -\frac{e^2}{4\pi\varepsilon_0 \|x\|}$, with $x \in \mathbb{R}^3$. Here, e is the electron charge and $\varepsilon_0$ the vacuum permittivity. Note that the parameter m in front of the Laplacian is the reduced mass of the system. As before, we set the physical constants to one and use the Gaussian kernel, now with bandwidth $\varsigma = 2$. We then generate 5000 uniformly distributed test points in the ball of radius 20 and compute the Gram matrices $G_2$ and $G_0$. Solving the resulting eigenvalue problem, we obtain the eigenfunctions shown in Figure 3. As expected, there are several repeated eigenvalues (up to small perturbations due to the randomly sampled test points and numerical errors) for the higher energy states.

4.2.2. SDE Formulation of the Schrödinger Equation

In order to derive gEDMD, we went from the stochastic differential equation to the Kolmogorov backward equation, which is associated with the generator of the Koopman operator, or the Fokker–Planck equation, which is associated with its adjoint, the generator of the Perron–Frobenius operator. Exploiting the resemblance between these two equations and the Schrödinger equation, we illustrated how data-driven methods can, in the same way, be used to compute wavefunctions. We now want to go in the opposite direction and find a stochastic differential equation whose generator has eigenfunctions corresponding to the wavefunctions. Formal similarities between quantum mechanics and the theory of stochastic processes have been investigated since the beginning of quantum mechanics by Schrödinger and others (see, for example, [37] and the references therein). The necessary transformations were already introduced in Section 2.3; we now want to exploit these relationships. Let us consider the two aforementioned examples again.
Example 5.
Using Corollary 1, the quantum harmonic oscillator can be transformed into an Ornstein–Uhlenbeck process:

$$dX_t = -\alpha X_t \, dt + \sqrt{2\beta^{-1}} \, dB_t,$$

with friction coefficient $\alpha = \omega$ and temperature $\beta^{-1} = \frac{\hbar^2}{2m}$. Since the eigenvalues of the Ornstein–Uhlenbeck process are $\lambda_\ell = -\alpha\ell = -\omega\ell$, the resulting eigenvalues of the quantum harmonic oscillator are $E_\ell = E_0 - \hbar\lambda_\ell = \hbar\omega\left(\ell + \frac{1}{2}\right)$. Correspondingly, the (unnormalized) eigenfunctions of the Ornstein–Uhlenbeck process are $\varphi_\ell(x) = \tilde{H}_\ell(\sqrt{\alpha\beta}\,x)$, where $\tilde{H}_\ell$ is the $\ell$th probabilists' Hermite polynomial. Thus,

$$\psi_\ell(x) = \psi_0(x)\, \tilde{H}_\ell\!\left(\frac{\sqrt{2m\omega}}{\hbar}\,x\right) = \exp\!\left(-\frac{m\omega}{2\hbar}\,x^2\right) H_\ell\!\left(\sqrt{\frac{m\omega}{\hbar}}\,x\right),$$

which is consistent with the results obtained above. In the last step, we transformed the probabilists' Hermite polynomials into the physicists' Hermite polynomials using the identity $H_\ell(x) = 2^{\ell/2}\,\tilde{H}_\ell(\sqrt{2}\,x)$, absorbing the constant factor into the normalization.
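The conversion between the two Hermite conventions, $H_\ell(x) = 2^{\ell/2}\,\tilde{H}_\ell(\sqrt{2}\,x)$, can be checked numerically; the following is our own sketch for $\hbar = m = \omega = 1$, using NumPy's polynomial modules:

```python
import numpy as np
from numpy.polynomial.hermite import hermval      # physicists' H_l
from numpy.polynomial.hermite_e import hermeval   # probabilists' He_l

x = np.linspace(-3.0, 3.0, 101)
psi0 = np.exp(-x**2 / 2)                          # harmonic oscillator ground state
for l in range(6):
    c = np.zeros(l + 1)
    c[l] = 1.0                                    # coefficient vector selecting degree l
    lhs = psi0 * hermeval(np.sqrt(2.0) * x, c)    # psi0(x) * He_l(sqrt(2) x)
    rhs = 2.0**(-l / 2) * psi0 * hermval(x, c)    # 2^{-l/2} * psi0(x) * H_l(x)
    assert np.allclose(lhs, rhs)
```

Up to the constant factor $2^{-\ell/2}$, the transformed Ornstein–Uhlenbeck eigenfunctions thus agree with the Hermite-Gaussian wavefunctions.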
Example 6.
Similarly, for the hydrogen atom, whose ground state is given by:

$$\psi_0(x) = \frac{1}{\sqrt{\pi a_0^3}} \exp\!\left(-\frac{\|x\|}{a_0}\right),$$

where $a_0 = \frac{4\pi\varepsilon_0\hbar^2}{m e^2}$, we obtain $V(x) = \frac{\hbar^2}{m a_0}\,\|x\|$, and thus:

$$\nabla V(x) = \frac{\hbar^2}{m a_0}\,\frac{x}{\|x\|}.$$
There are now two options to compute the eigenfunctions numerically: we can either directly apply kernel gEDMD to the Koopman generator or generate time-series data by integrating the stochastic differential equation and then apply kernel EDMD or simply Ulam's method. We proceed with the former, but the latter leads to comparable results (although typically more data points are required to achieve the same accuracy due to the stochasticity). We again generate uniformly distributed test points $x_m$ in the ball of radius 20, this time $M = 10{,}000$, and use the Gaussian kernel with bandwidth $\varsigma = 2$. This results in the same eigenfunctions as the ones shown in Figure 3. Due to the larger number of test points, even higher energy states can be well approximated. Two additional eigenfunctions are shown in Figure 4.
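The second option, integrating the stochastic differential equation, can be sketched with a simple Euler–Maruyama scheme. As a self-contained check (our example and parameter choices, not the hydrogen setup above), we integrate the Ornstein–Uhlenbeck process from Example 5 with $\alpha = 1$ and $\beta^{-1} = \frac{1}{2}$ and compare the empirical variance with the known stationary value $\beta^{-1}/\alpha$:

```python
import numpy as np

rng = np.random.default_rng(0)
alpha, beta_inv, dt, n = 1.0, 0.5, 1e-2, 200_000
x = np.empty(n)
x[0] = 0.0
noise = rng.standard_normal(n - 1)
for i in range(n - 1):
    # Euler-Maruyama step for dX = -alpha X dt + sqrt(2 beta^{-1}) dB
    x[i + 1] = x[i] - alpha * x[i] * dt + np.sqrt(2 * beta_inv * dt) * noise[i]
var = x[n // 10:].var()   # discard a burn-in, then estimate the stationary variance
# var should be close to beta_inv / alpha = 0.5
```

The resulting trajectory can then be fed to kernel EDMD or Ulam's method; the variance check above merely confirms that the integrator samples the correct invariant distribution.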
The examples illustrate that instead of solving partial differential equations, we can also compute eigenfunctions by approximating the Koopman operator from time-series data. The question under which conditions a non-degenerate strictly positive ground state exists needs to be addressed separately. One important theorem can be found in [38]:
Theorem 2.
Let $L^2_{\mathrm{loc}}(\mathbb{X})$ be the space of locally square-integrable functions and let $W \in L^2_{\mathrm{loc}}(\mathbb{X})$ be positive. Suppose $\lim_{\|x\| \to \infty} W(x) = \infty$; then $-\Delta + W$ has a non-degenerate strictly positive ground state.
There are other results concerning the existence of such states; see [38] for details. Furthermore, diffusion Monte Carlo methods, which simultaneously compute the ground state energy and wavefunction, rely on similar assumptions [39]. However, in many cases of interest, the ground state of fermionic systems will have nodes so that these methods are not applicable [39]. The work presented here aims mainly at linking different operators describing the evolution of dynamical systems; more detailed relationships—in particular with the aforementioned diffusion Monte Carlo methods—and practical implications will be studied in future work.

4.3. Manifold Learning

So far, we assumed that the data were generated by a dynamical system. There is, however, a second scenario without any notion of time, where the Kolmogorov backward equation and Fokker–Planck equation are used for dimensionality reduction and manifold learning [21]; see also [20,22,23] and the references therein.
Let the data points $\{x_m\}_{m=1}^M$ be sampled from an arbitrary probability density $\rho$; then, we can define the associated potential by:
$$U(x) = -\log \rho(x).$$
It was shown in [21] that, depending on some normalization parameter α , anisotropic diffusion maps approximate operators of the form:
$$\mathcal{L}_\alpha f = -2(1-\alpha)\,\nabla U \cdot \nabla f + \Delta f.$$
That is, for $\alpha = \frac{1}{2}$, we obtain the standard Kolmogorov backward equation with $\beta = 1$. Thus, the algorithms described above could also potentially be used for manifold learning purposes. We will illustrate this with a simple example.
Example 7.
We consider the well-known Swiss roll; see, for instance, [23]. The goal is to parametrize the two-dimensional manifold. We use kernel density estimation, cf. [40], with a Gaussian kernel with bandwidth $\varsigma = 0.22$ to learn $U(x)$, i.e.,

$$U(x) = -\log\!\left(\frac{1}{M\left(\sqrt{2\pi}\,\varsigma\right)^{d}} \sum_{m=1}^{M} k(x, x_m)\right),$$

and approximate the Kolmogorov backward operator by applying kernel gEDMD. Here, $M = 2000$ and $d = 2$. The results are shown in Figure 5. The first eigenfunction parametrizes the angular direction, followed by higher-order modes, and only the sixth eigenfunction corresponds to the $x_3$ direction. Considering these eigenfunctions as new coordinates, we obtain an unfolding of the roll. Note that diffusion maps also do not yield perfect rectangles in the embedded space due to the non-uniform density of points on the manifold [23].
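A compact sketch of such a kernel density estimate (the function name and all parameter choices below are ours, purely for illustration):

```python
import numpy as np

def kde_potential(xs, data, bandwidth):
    """U(x) = -log rho(x), with rho a Gaussian kernel density estimate."""
    d = data.shape[1]
    sq = ((xs[:, None, :] - data[None, :, :])**2).sum(-1)
    rho = np.exp(-sq / (2 * bandwidth**2)).mean(1) / (np.sqrt(2 * np.pi) * bandwidth)**d
    return -np.log(rho)

# Sanity check on data with a known density: for standard normal samples in 2D,
# the smoothed density at the origin is N(0, (1 + bandwidth^2) I), so
# U(0) should be close to log(2 pi (1 + bandwidth^2)).
rng = np.random.default_rng(0)
data = rng.standard_normal((5000, 2))
U0 = kde_potential(np.zeros((1, 2)), data, 0.3)[0]
```

The gradient of this potential, needed for the drift of the associated diffusion, can be obtained analytically from the same kernel sums.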
These results demonstrate that the eigenfunctions of certain differential operators capture geometrical properties of the data. However, the assumption that a strictly positive density in the ambient space exists will in general not be satisfied if the data are supported only on a lower-dimensional manifold. This problem was circumvented here by using kernel density estimation and a kernel with global support. Carrying over the definitions of the differential operators involved and of their kernel-based analogues to the manifold case is beyond the scope of this paper and will be studied in future work. The same applies to the investigation of detailed relationships with diffusion maps or other manifold learning techniques. Concepts like neighborhood and sparsity will then need to be carried over to gEDMD to make this method amenable to large datasets. Furthermore, heuristics to find the optimal bandwidth $\varsigma$ are required since the results often depend strongly on the kernel hyperparameters.

5. Conclusions

Using the theory of derivative reproducing kernel Hilbert spaces, we derived a kernel-based formulation of gEDMD for approximating the Koopman generator, which allowed for the computation of eigenfunctions of potentially high-dimensional stochastic dynamical systems. If the system is reversible, the generator can be approximated from equilibrated time-series data, without having to estimate the drift and diffusion terms at the training data points. Furthermore, we showed that data-driven methods developed for the analysis of stochastic dynamical systems (kernel EDMD) can be carried over to their generators (kernel gEDMD) and, in turn, to the Schrödinger operator. Conversely, under certain assumptions on the ground state, the Schrödinger equation can be turned into a Kolmogorov backward equation corresponding to a drift-diffusion process. These results are summarized in Figure 6. Similar transformations also exist for the Fokker–Planck operator; see [24]. All derived approaches were illustrated with numerical results ranging from molecular dynamics to quantum mechanics.
Although we focused mainly on the Kolmogorov backward equation, the Fokker–Planck equation, and the Schrödinger equation, these methods can be applied to approximate other differential operators as well. An interesting open question is whether such algorithms can also be used for manifold learning. Some preliminary results were presented in Section 4, but a rigorous mathematical justification would require significant additional research. Analyzing connections with diffusion maps [20] or generalizations thereof in detail could be a potential direction for future work.
Another interesting avenue for future research could be to improve the efficiency and stability of the presented algorithms. Exploiting the properties of the given kernels, it might be possible to speed up computations significantly. Defining a cutoff radius for the kernel or considering only a certain number of neighbors of each data point, for instance, would, for suitable problems, result in sparse matrices. Moreover, the results depend sensitively on hyperparameters such as the bandwidth of the Gaussian kernel. If the bandwidth is too small, this leads to overfitting and noisy eigenfunctions. If, on the other hand, it is too large, then the kernel is no longer able to capture the properties of the dynamical system accurately. As a result, the Gram matrix $G_0$ is then numerically of low rank, and we obtain many zero eigenvalues. The question is then how to compute the smallest nonzero eigenvalues and corresponding eigenvectors efficiently.
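The low-rank effect is easy to observe numerically. The following sketch (our own illustration, with arbitrary parameter choices) compares the numerical rank of $G_0$ for three bandwidths on the same set of points:

```python
import numpy as np

x = np.linspace(-5.0, 5.0, 100)
ranks = []
for s in (0.1, 1.0, 10.0):
    G0 = np.exp(-(x[:, None] - x[None, :])**2 / (2 * s**2))
    ranks.append(np.linalg.matrix_rank(G0, tol=1e-8))
# Larger bandwidths lead to a (numerically) lower-rank Gram matrix.
```

For the largest bandwidth, almost all eigenvalues of $G_0$ are numerically zero, which is precisely the regime in which extracting the smallest nonzero generalized eigenvalues becomes delicate.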
Potential solutions for the hyperparameter tuning problem are techniques based on cross-validation [41] or so-called kernel flows [42]. By defining an optimization problem for the parameters of the kernel, e.g., based on a variational principle [43], gradient descent methods can help find suitable parameter values.   

Author Contributions

Conceptualization, S.K., F.N., and B.H.; methodology, S.K. and F.N.; software, S.K. and F.N.; formal analysis, S.K. and F.N.; writing, original draft preparation, S.K., F.N., and B.H. All authors read and agreed to the published version of the manuscript.

Funding

S.K. was funded by Deutsche Forschungsgemeinschaft (DFG) through the grant CRC 1114 (Scaling Cascades in Complex Systems, project ID: 235221301) and through Germany’s Excellence Strategy (MATH+: The Berlin Mathematics Research Center, EXC-2046/1, project ID: 390685689). B.H. thanks the European Commission for funding through the Marie Curie fellowships scheme.

Acknowledgments

The publication of this article was funded by Freie Universität Berlin. We would like to thank Luigi Delle Site for many helpful discussions regarding quantum chemistry and Amel Durakovic for useful discussions on the correspondence between the Schrödinger equation and complex Langevin dynamics.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A. Proofs

Proof of Lemma 1. 
The unitary transformation is $\mathcal{H} = e^{-\frac{1}{2}F}\,\mathcal{L}\,\big(e^{\frac{1}{2}F}\,\cdot\,\big)$. We obtain:

$$\mathcal{H} f = e^{-\frac{1}{2}F} S\big(e^{\frac{1}{2}F} f\big) + e^{-\frac{1}{2}F} J \cdot \Big(e^{\frac{1}{2}F}\,\nabla f + \tfrac{1}{2}\, e^{\frac{1}{2}F} f\,\nabla F\Big) = e^{-\frac{1}{2}F} S\big(e^{\frac{1}{2}F} f\big) + J \cdot \nabla f + \tfrac{1}{2}\,(J \cdot \nabla F)\, f,$$

which establishes the first-order term in $\mathcal{H}$ and the third term in the definition of W. For the symmetric part, we find that:

$$e^{-\frac{1}{2}F} S\big(e^{\frac{1}{2}F} f\big) = -\frac{1}{2}\, e^{\frac{1}{2}F}\, \nabla\cdot\Big(e^{-F} a\, \nabla\big(e^{\frac{1}{2}F} f\big)\Big) = -\frac{1}{2}\, e^{\frac{1}{2}F}\, \nabla\cdot\Big(e^{-\frac{1}{2}F} a\,\big[\nabla f + \tfrac{1}{2} f\,\nabla F\big]\Big) = -\frac{1}{2}\,\nabla\cdot(a\,\nabla f) + \frac{1}{4}\,\nabla F \cdot a\,\nabla f - \frac{1}{4}\, e^{\frac{1}{2}F}\, \nabla\cdot\big(e^{-\frac{1}{2}F} f\, a\,\nabla F\big),$$

which establishes the second-order term in the definition of $\mathcal{H}$. Expanding the third term above, we get:

$$-\frac{1}{4}\, e^{\frac{1}{2}F}\, \nabla\cdot\big(e^{-\frac{1}{2}F} f\, a\,\nabla F\big) = -\frac{1}{4}\,\nabla\cdot(a\,\nabla F)\, f - \frac{1}{4}\,\nabla f \cdot a\,\nabla F + \frac{1}{8}\,\big(\nabla F \cdot a\,\nabla F\big)\, f,$$

whose second term cancels the second term of the previous equation, which establishes the remaining terms for W.
For the converse direction, we first translate the eigenvalue equation for $\psi_0 = e^{-\eta}$ into an equation for η:

$$0 = (\mathcal{H} - E_0)\,\psi_0 = -\frac{1}{2}\,\nabla\cdot\big(a\,\nabla e^{-\eta}\big) + J\cdot\nabla e^{-\eta} + (W - E_0)\, e^{-\eta} = \frac{1}{2}\,\nabla\cdot\big(e^{-\eta} a\,\nabla\eta\big) - (J\cdot\nabla\eta)\, e^{-\eta} + (W - E_0)\, e^{-\eta} = e^{-\eta}\Big(\frac{1}{2}\,\nabla\cdot(a\,\nabla\eta) - \frac{1}{2}\,\nabla\eta\cdot a\,\nabla\eta - J\cdot\nabla\eta + W - E_0\Big),$$

implying that the term in brackets also vanishes. Now, we define the negative generator by the transformation $\mathcal{L} = e^{\eta}\,(\mathcal{H} - E_0)\,(e^{-\eta}\,\cdot\,)$. Expanding the action of $\mathcal{L}$, we find:

$$\begin{aligned} \mathcal{L} f &= e^{\eta}\Big(-\frac{1}{2}\,\nabla\cdot\big(a\,\nabla(e^{-\eta} f)\big) + J\cdot\nabla(e^{-\eta} f) + (W - E_0)\,(e^{-\eta} f)\Big)\\ &= e^{\eta}\Big(-\frac{1}{2}\,\nabla\cdot\big(e^{-\eta} a\,[\nabla f - f\,\nabla\eta]\big) + e^{-\eta} J\cdot\nabla f + (W - E_0 - J\cdot\nabla\eta)\,(e^{-\eta} f)\Big)\\ &= -\frac{1}{2}\,\nabla\cdot(a\,\nabla f) + \frac{1}{2}\,\nabla\eta\cdot a\,\nabla f + \frac{1}{2}\,\nabla\cdot(a\,\nabla\eta)\, f + \frac{1}{2}\,\nabla\eta\cdot a\,\nabla f - \frac{1}{2}\,(\nabla\eta\cdot a\,\nabla\eta)\, f + J\cdot\nabla f + (W - E_0 - J\cdot\nabla\eta)\, f\\ &= -\frac{1}{2}\,\nabla\cdot(a\,\nabla f) + \nabla\eta\cdot a\,\nabla f + J\cdot\nabla f = -\frac{1}{2}\, e^{2\eta}\,\nabla\cdot\big(e^{-2\eta} a\,\nabla f\big) + J\cdot\nabla f, \end{aligned}$$

where the zero-order terms cancel due to the identity established above. □
Proof of Lemma 2. 
We only show the proof for $T_{\mathbb{H}}$. Similar to the argument in [44], $T_{\mathbb{H}}$ is a bounded linear operator on $\mathbb{H}$ because of:

$$\begin{aligned} \Big\| \int \phi(x) \otimes \Big( \frac{1}{2} \sum_{i=1}^{d} \sum_{j=1}^{d} a_{ij}(x)\, D^{e_i+e_j}\phi(x) \Big)\, d\mu(x) &+ \int \phi(x) \otimes \Big( \sum_{i=1}^{d} \Big( J_i(x) - \frac{1}{2}\, e^{F(x)}\, \nabla \cdot \big(e^{-F(x)} a_{:,i}(x)\big) \Big) D^{e_i}\phi(x) \Big)\, d\mu(x) + \int W(x)\, \phi(x) \otimes \phi(x)\, d\mu(x) \Big\|_{HS} \\ \le \frac{1}{2} \sum_{i=1}^{d} \sum_{j=1}^{d} \int |a_{ij}(x)|\, \big\| D^{e_i+e_j}\phi(x) \big\|_{\mathbb{H}}\, \| \phi(x) \|_{\mathbb{H}}\, d\mu(x) &+ \sum_{i=1}^{d} \int \Big( |J_i(x)| + \frac{1}{2}\, e^{F(x)}\, \big| \nabla \cdot \big(e^{-F(x)} a_{:,i}(x)\big) \big| \Big)\, \big\| D^{e_i}\phi(x) \big\|_{\mathbb{H}}\, \| \phi(x) \|_{\mathbb{H}}\, d\mu(x) + \int |W(x)|\, \| \phi(x) \|_{\mathbb{H}}^{2}\, d\mu(x) < \infty. \end{aligned}$$
Using the derivative reproducing property, we obtain:
$$\begin{aligned} \langle T f, g\rangle_\mu &= \int (Tf)(x)\, g(x)\, d\mu(x)\\ &= \frac{1}{2}\sum_{i=1}^{d}\sum_{j=1}^{d} \int a_{ij}(x)\, \frac{\partial^2 f}{\partial x_i \partial x_j}(x)\, g(x)\, d\mu(x) + \sum_{i=1}^{d} \int \Big(J_i(x) - \frac{1}{2}\, e^{F(x)}\,\nabla\cdot\big(e^{-F(x)} a_{:,i}(x)\big)\Big)\, \frac{\partial f}{\partial x_i}(x)\, g(x)\, d\mu(x) + \int W(x)\, f(x)\, g(x)\, d\mu(x)\\ &= \frac{1}{2}\sum_{i=1}^{d}\sum_{j=1}^{d} \int a_{ij}(x)\, \big\langle D^{e_i+e_j}\phi(x), f\big\rangle_{\mathbb{H}}\, \langle \phi(x), g\rangle_{\mathbb{H}}\, d\mu(x) + \sum_{i=1}^{d} \int \Big(J_i(x) - \frac{1}{2}\, e^{F(x)}\,\nabla\cdot\big(e^{-F(x)} a_{:,i}(x)\big)\Big)\, \big\langle D^{e_i}\phi(x), f\big\rangle_{\mathbb{H}}\, \langle \phi(x), g\rangle_{\mathbb{H}}\, d\mu(x) + \int W(x)\, \langle \phi(x), f\rangle_{\mathbb{H}}\, \langle \phi(x), g\rangle_{\mathbb{H}}\, d\mu(x)\\ &= \frac{1}{2}\sum_{i=1}^{d}\sum_{j=1}^{d} \int a_{ij}(x)\, \big\langle D^{e_i+e_j}\phi(x)\otimes\phi(x),\, f\otimes g\big\rangle_{\mathbb{H}\otimes\mathbb{H}}\, d\mu(x) + \sum_{i=1}^{d} \int \Big(J_i(x) - \frac{1}{2}\, e^{F(x)}\,\nabla\cdot\big(e^{-F(x)} a_{:,i}(x)\big)\Big)\, \big\langle D^{e_i}\phi(x)\otimes\phi(x),\, f\otimes g\big\rangle_{\mathbb{H}\otimes\mathbb{H}}\, d\mu(x) + \int W(x)\, \big\langle \phi(x)\otimes\phi(x),\, f\otimes g\big\rangle_{\mathbb{H}\otimes\mathbb{H}}\, d\mu(x)\\ &= \big\langle T_{\mathbb{H}} f, g\big\rangle_{\mathbb{H}}. \end{aligned}$$
The same argument can be used to prove the statement about the symmetric case. □
Proof of Lemma 4. 
Let f = Φ u and g = Φ v . Then:
$$\widehat{S}_{\mathbb{H}}(f, g) = \Big\langle \frac{1}{M}\sum_{m=1}^{M} \big(\phi(x_m)\otimes\phi(x_m)\big) \sum_{r=1}^{M} u_r\, \phi(x_r),\ \sum_{s=1}^{M} v_s\, \phi(x_s) \Big\rangle_{\mathbb{H}} = \frac{1}{M}\sum_{m=1}^{M}\sum_{r=1}^{M}\sum_{s=1}^{M} u_r v_s\, \langle \phi(x_m), \phi(x_r)\rangle_{\mathbb{H}}\, \langle \phi(x_m), \phi(x_s)\rangle_{\mathbb{H}} = \frac{1}{M}\sum_{m=1}^{M}\sum_{r=1}^{M}\sum_{s=1}^{M} u_r v_s\, k(x_m, x_r)\, k(x_m, x_s) = \frac{1}{M}\, \langle G_0 u, G_0 v\rangle.$$
Similarly,
$$\widehat{Q}_{\mathbb{H}}(f, g) = \Big\langle \frac{1}{M}\sum_{m=1}^{M} \big(\phi(x_m)\otimes d\phi(x_m)\big) \sum_{r=1}^{M} u_r\, \phi(x_r),\ \sum_{s=1}^{M} v_s\, \phi(x_s) \Big\rangle_{\mathbb{H}} = \frac{1}{M}\sum_{m=1}^{M}\sum_{r=1}^{M}\sum_{s=1}^{M} u_r v_s\, \langle d\phi(x_m), \phi(x_r)\rangle_{\mathbb{H}}\, \langle \phi(x_m), \phi(x_s)\rangle_{\mathbb{H}} = \frac{1}{M}\, \langle G_2 u, G_0 v\rangle.$$
If the kernel functions at the training points are linearly independent, then $G_0$ is invertible, and it suffices to compute eigenvectors u of the generalized matrix eigenvalue problem $G_2 u = \lambda G_0 u$. In the symmetric case, the expression for the quadratic form $\widehat{Q}_{\mathbb{H}}(f, g)$ changes to:
$$\widehat{Q}_{\mathbb{H}}(f, g) = -\frac{1}{2M}\sum_{l=1}^{d}\sum_{m=1}^{M}\sum_{r=1}^{M}\sum_{s=1}^{M} u_r v_s\, \big(\sigma_l(x_m)\cdot\nabla_{x_m} k(x_m, x_r)\big)\, \big(\sigma_l(x_m)\cdot\nabla_{x_m} k(x_m, x_s)\big) = -\frac{1}{2M}\sum_{l=1}^{d} \big\langle G_1^{(l)} u,\ G_1^{(l)} v\big\rangle. \ \square$$

References

  1. Koopman, B. Hamiltonian systems and transformations in Hilbert space. Proc. Natl. Acad. Sci. USA 1931, 17, 315. [Google Scholar] [CrossRef]
  2. Lasota, A.; Mackey, M.C. Chaos, Fractals, and Noise: Stochastic Aspects of Dynamics, 2nd ed. Probab. Eng. Inf. Sci. 1994, 10, 311. [Google Scholar]
  3. Mezić, I. Spectral Properties of Dynamical Systems, Model Reduction and Decompositions. Nonlinear Dyn. 2005, 41, 309–325. [Google Scholar] [CrossRef]
  4. Budišić, M.; Mohr, R.; Mezić, I. Applied Koopmanism. Chaos Interdiscip. J. Nonlinear Sci. 2012, 22. [Google Scholar] [CrossRef]
  5. Mauroy, A.; Mezić, I. Global stability analysis using the eigenfunctions of the Koopman operator. IEEE Trans. Autom. Control 2016, 61, 3356–3369. [Google Scholar] [CrossRef]
  6. Klus, S.; Koltai, P.; Schütte, C. On the numerical approximation of the Perron–Frobenius and Koopman operator. J. Comput. Dyn. 2016, 3, 51–79. [Google Scholar] [CrossRef]
  7. Kaiser, E.; Kutz, J.N.; Brunton, S.L. Data-driven discovery of Koopman eigenfunctions for control. arXiv 2017, arXiv:1707.01146. [Google Scholar]
  8. Korda, M.; Mezić, I. Linear predictors for nonlinear dynamical systems: Koopman operator meets model predictive control. Automatica 2018, 93, 149–160. [Google Scholar] [CrossRef]
  9. Peitz, S.; Klus, S. Koopman operator-based model reduction for switched-system control of PDEs. Automatica 2019, 106, 184–191. [Google Scholar] [CrossRef]
  10. Klus, S.; Husic, B.E.; Mollenhauer, M.; Noé, F. Kernel methods for detecting coherent structures in dynamical data. Chaos 2019. [Google Scholar] [CrossRef]
  11. Williams, M.O.; Kevrekidis, I.G.; Rowley, C.W. A Data-Driven Approximation of the Koopman Operator: Extending Dynamic Mode Decomposition. J. Nonlinear Sci. 2015, 25, 1307–1346. [Google Scholar] [CrossRef]
  12. Williams, M.O.; Rowley, C.W.; Kevrekidis, I.G. A Kernel-Based Method for Data-Driven Koopman Spectral Analysis. J. Comput. Dyn. 2015, 2, 247–265. [Google Scholar] [CrossRef]
  13. Klus, S.; Nüske, F.; Peitz, S.; Niemann, J.H.; Clementi, C.; Schütte, C. Data-driven approximation of the Koopman generator: Model reduction, system identification, and control. Physica D 2020, 406, 132416. [Google Scholar] [CrossRef]
  14. Mauroy, A.; Goncalves, J. Linear identification of nonlinear systems: A lifting technique based on the Koopman operator. In Proceedings of the 2016 IEEE 55th Conference on Decision and Control (CDC), Las Vegas, NV, USA, 12–14 December 2016; pp. 6500–6505. [Google Scholar]
  15. Klus, S.; Schuster, I.; Muandet, K. Eigendecompositions of Transfer Operators in Reproducing Kernel Hilbert Spaces. J. Nonlinear Sci. 2019. [Google Scholar] [CrossRef]
  16. Zhou, D.X. Derivative reproducing properties for kernel methods in learning theory. J. Comput. Appl. Math. 2008, 220, 456–463. [Google Scholar] [CrossRef]
  17. Giesl, P.; Hamzi, B.; Rasmussen, M.; Webster, K. Approximation of Lyapunov functions from noisy data. J. Comput. Dyn. 2019. [Google Scholar] [CrossRef]
  18. Haasdonk, B.; Hamzi, B.; Santin, G.; Witwar, D. Greedy Kernel Methods for Center Manifold Approximation. arXiv 2018, arXiv:1810.11329. [Google Scholar]
  19. Wendland, H. Scattered Data Approximation; Cambridge University Press: Cambridge, UK, 2004. [Google Scholar] [CrossRef]
  20. Coifman, R.R.; Lafon, S. Diffusion maps. Appl. Comput. Harmon. Anal. 2006, 21, 5–30. [Google Scholar] [CrossRef]
  21. Nadler, B.; Lafon, S.; Coifman, R.R.; Kevrekidis, I.G. Diffusion maps, spectral clustering and reaction coordinates of dynamical systems. Appl. Comput. Harmon. Anal. 2006, 21, 113–127. [Google Scholar] [CrossRef]
  22. Coifman, R.R.; Kevrekidis, I.G.; Lafon, S.; Maggioni, M.; Nadler, B. Diffusion Maps, Reduction Coordinates, and Low Dimensional Representation of Stochastic Systems. Multiscale Model. Simul. 2008, 7, 842–864. [Google Scholar] [CrossRef]
  23. Nadler, B.; Lafon, S.; Coifman, R.R.; Kevrekidis, I.G. Diffusion Maps—A Probabilistic Interpretation for Spectral Embedding and Clustering Algorithms. In Principal Manifolds for Data Visualization and Dimension Reduction; Gorban, A., Kégl, B., Wunsch, D., Zinovyev, A., Eds.; Springer: Heidelberg, Germany, 2008; pp. 238–260. [Google Scholar]
  24. Pavliotis, G.A. Stochastic Processes and Applications: Diffusion Processes, the Fokker–Planck and Langevin Equations; Springer: New York, NY, USA, 2014. [Google Scholar]
  25. Levine, I.N. Quantum Chemistry; Prentice Hall: Upper Saddle River, NJ, USA, 2000. [Google Scholar]
  26. Aronszajn, N. Theory of Reproducing Kernels. Trans. Am. Math. Soc. 1950, 68, 337–404. [Google Scholar] [CrossRef]
  27. Schölkopf, B.; Smola, A.J. Learning with Kernels: Support Vector Machines, Regularization, Optimization and Beyond; MIT press: Cambridge, MA, USA, 2001. [Google Scholar]
  28. Steinwart, I.; Christmann, A. Support Vector Machines, 1st ed.; Springer: New York, NY, USA, 2008. [Google Scholar]
  29. Baker, C. Mutual Information for Gaussian Processes. SIAM J. Appl. Math. 1970, 19, 451–458. [Google Scholar] [CrossRef]
  30. Baker, C. Joint Measures and Cross-Covariance Operators. Trans. Am. Math. Soc. 1973, 186, 273–289. [Google Scholar] [CrossRef]
  31. Davies, E.B. Spectral Theory and Differential Operators; Cambridge University Press: Cambridge, UK, 1996; Volume 42. [Google Scholar]
  32. Chacon, R.V. An ergodic theorem for operators satisfying norm conditions. J. Math. Mech. 1962, 11, 165–172. [Google Scholar]
  33. Rosasco, L.; Belkin, M.; Vito, E.D. On Learning with Integral Operators. J. Mach. Learn. Res. 2010, 11, 905–934. [Google Scholar]
  34. Klus, S. Data-Driven Dynamical Systems Toolbox. Available online: https://github.com/sklus/d3s/ (accessed on 1 May 2020).
  35. Klus, S.; Bittracher, A.; Schuster, I.; Schütte, C. A kernel-based approach to molecular conformation analysis. J. Chem. Phys. 2018, 149, 244109. [Google Scholar] [CrossRef]
  36. Froyland, G.; Rock, C.P.; Sakellariou, K. Sparse eigenbasis approximation: Multiple feature extraction across spatiotemporal scales with application to coherent set identification. Commun. Nonlinear Sci. Numer. Simul. 2019, 77, 81–107. [Google Scholar] [CrossRef]
  37. Okamoto, H. Stochastic formulation of quantum mechanics based on a complex Langevin equation. J. Phys. A Math. Gen. 1990, 23, 5535–5545. [Google Scholar] [CrossRef]
  38. Reed, M.; Simon, B. Methods of Modern Mathematical Physics. IV Analysis of Operators; Academic Press: San Diego, CA, USA, 1978. [Google Scholar]
  39. Kosztin, I.; Faber, B.; Schulten, K. Introduction to the diffusion Monte Carlo method. Am. J. Phys. 1996, 64, 633–644. [Google Scholar] [CrossRef]
  40. Parzen, E. On Estimation of a Probability Density Function and Mode. Ann. Math. Stat. 1962, 33, 1065–1076. [Google Scholar] [CrossRef]
  41. McGibbon, R.T.; Pande, V.S. Variational cross-validation of slow dynamical modes in molecular kinetics. J. Chem. Phys. 2015, 142, 03B621_1. [Google Scholar] [CrossRef]
  42. Owhadi, H.; Yoo, G.R. Kernel Flows: From learning kernels from data into the abyss. J. Comput. Phys. 2019, 389, 22–47. [Google Scholar] [CrossRef]
  43. Wu, H.; Noé, F. Variational approach for learning Markov processes from time series data. arXiv 2017, arXiv:1707.04659. [Google Scholar] [CrossRef]
  44. Muandet, K.; Fukumizu, K.; Sriperumbudur, B.; Schölkopf, B. Kernel mean embedding of distributions: A review and beyond. Found. Trends Mach. Learn. 2017, 10, 1–141. [Google Scholar] [CrossRef]
Figure 1. (a) Quadruple-well potential. The color blue corresponds to small values and yellow to large values. (b) Clustering into four metastable sets based on sparse eigenbasis approximation (SEBA). (c) Eigenvalues computed using kernel generator extended dynamic mode decomposition (gEDMD) and a Markov state model. The bars indicate the estimated standard deviation.
Figure 2. (a) Numerically computed eigenfunctions $\psi_\ell$ and associated energy levels $E_\ell$ of the quantum harmonic oscillator. The results are virtually indistinguishable from the analytical results. (b) Corresponding probability densities $p_\ell$.
Figure 3. Numerically computed eigenfunctions of the Schrödinger equation associated with the hydrogen atom. Only points where the absolute value of the eigenfunction is larger than a given threshold are plotted. The shapes clearly resemble the well-known hydrogen atom orbitals shown next to the scatter plots. The eigenfunctions (or rotations thereof) correspond to the following quantum numbers $(\underline{n}, \underline{\ell}, \underline{m})$: (a) $(1, 0, 0)$, (b) $(2, 1, 1)$, (c) $(3, 2, 1)$, and (d) $(4, 3, 1)$.
Figure 4. Eigenfunctions of the Schrödinger equation associated with the hydrogen atom computed by applying kernel gEDMD to the corresponding Koopman generator. The quantum numbers $(\underline{n}, \underline{\ell}, \underline{m})$ are: (a) $(3, 2, 0)$ and (b) $(4, 3, 2)$.
Figure 5. Swiss roll colored with respect to the eigenfunctions (a) $\varphi_0$ and (b) $\varphi_5$, which parametrize the angular and vertical direction, respectively. (c) Resulting two-dimensional embedding.
Figure 6. Relationships between the Koopman, Kolmogorov, and Schrödinger operators for a drift-diffusion process of the form $dX_t = -\nabla V(X_t)\, dt + \sqrt{2\beta^{-1}}\, dB_t$. Here, $\rho_0$ denotes the invariant density, i.e., $\mathcal{L}^* \rho_0 = 0$. In our setting, the transformation of the Schrödinger operator requires a strictly positive real-valued ground state $\psi_0$.
Table 1. Overview of notation.

$X_t$: stochastic process
$\mathbb{X}$: state space
$k, \phi$: kernel and associated feature map
$\mathbb{H}$: reproducing kernel Hilbert space induced by k
$\mathcal{K}^t$: Koopman operator with lag time t
$\mathcal{L}$: generator of the Koopman operator
$\mathcal{H}$: Schrödinger operator
$T$: general differential operator
$T_{\mathbb{H}}$: kernel-based differential operator
$C_{00}$: covariance operator
$\widehat{A}$: empirical estimate of operator A
$G_0, G_1, G_2$: (generalizations of) Gram matrices