Article

Parameter Estimation with Data-Driven Nonparametric Likelihood Functions

1 Department of Mathematics, the Pennsylvania State University, 109 McAllister Building, University Park, PA 16802-6400, USA
2 Department of Meteorology and Atmospheric Science, the Pennsylvania State University, 503 Walker Building, University Park, PA 16802-5013, USA
3 Institute for CyberScience, the Pennsylvania State University, 224B Computer Building, University Park, PA 16802, USA
* Author to whom correspondence should be addressed.
These authors contributed equally to this work.
Entropy 2019, 21(6), 559; https://doi.org/10.3390/e21060559
Submission received: 24 April 2019 / Revised: 30 May 2019 / Accepted: 1 June 2019 / Published: 3 June 2019
(This article belongs to the Special Issue Information Theory and Stochastics for Multiscale Nonlinear Systems)

Abstract:
In this paper, we consider a surrogate modeling approach using a data-driven nonparametric likelihood function constructed on a manifold on which the data lie (or to which they are close). The proposed method represents the likelihood function using a spectral expansion formulation known as the kernel embedding of the conditional distribution. To respect the geometry of the data, we employ this spectral expansion using a set of data-driven basis functions obtained from the diffusion maps algorithm. The theoretical error estimate suggests that the error bound of the approximate data-driven likelihood function is independent of the variance of the basis functions, which allows us to determine the amount of training data needed for accurate likelihood function estimations. Numerical results demonstrating the robustness of the data-driven likelihood functions for parameter estimation are given for instructive examples involving stochastic and deterministic differential equations. When the dimension of the data manifold is strictly less than the dimension of the ambient space, we found that the proposed approach (which does not require knowledge of the data manifold) is superior to likelihood functions constructed using standard parametric basis functions defined on the ambient coordinates. In an example where the data manifold is not smooth and unknown, the proposed method is more robust than an existing polynomial chaos surrogate model which assumes a parametric likelihood, the non-intrusive spectral projection. In fact, the estimation accuracy is comparable to direct MCMC estimates with only eight likelihood function evaluations that can be done offline, as opposed to 4000 sequential function evaluations, whenever direct MCMC can be performed. A robust, accurate estimation is also found using a likelihood function trained on statistical averages of the chaotic 40-dimensional Lorenz-96 model on a wide parameter domain.

1. Introduction

Bayesian inference is a popular approach for solving inverse problems with far-reaching applications, such as parameter estimation and uncertainty quantification (see for example [1,2,3]). In this article, we will focus on a classical Bayesian inference problem of estimating the conditional distribution of hidden parameters of dynamical systems from a given set of noisy observations. In particular, let x ( t ; θ ) be a time-dependent state variable, which implicitly depends on the parameter θ through the following initial value problem,
$$\dot{x} = f(x,\theta), \qquad x(0) = x_0. \tag{1}$$
Here, for any fixed θ, f can be either deterministic or stochastic. Our goal is to estimate the conditional distribution of θ, given discrete-time noisy observations y = {y_1, …, y_T}, where:
$$y_i = g(x_i, \xi_i), \qquad i = 1, \ldots, T. \tag{2}$$
Here, x_i ≡ x(t_i; θ) are the solutions of Equation (1) for a specific hidden parameter θ, g is the observation function, and ξ_i are unbiased noises representing the measurement or model error. Although the proposed approach can also estimate the conditional density of the initial condition x_0, we will not explore this inference problem in this article.
Given a prior density, p 0 ( θ ) , Bayes’ theorem states that the conditional distribution of the parameter θ can be estimated as,
$$p(\theta\,|\,\mathbf{y}) \propto p(\mathbf{y}\,|\,\theta)\, p_0(\theta), \tag{3}$$
where p ( y | θ ) denotes the likelihood function of θ given the measurements y that depend on a hidden parameter value θ through (2). In most applications, the statistics of the conditional distribution p ( θ | y ) are the quantity of interest. For example, one can use the mean statistic as a point estimator of θ and the higher order moments for uncertainty quantification. To realize this goal, one draws samples of p ( θ | y ) and estimates these statistics via Monte Carlo averages over these samples. In this application, Markov Chain Monte Carlo (MCMC) is a natural sampling method that plays a central role in the computational statistics behind most Bayesian inference techniques [4].
In our setup, we assume that for any θ , one can simulate:
$$y_i(\theta) = g(x_i(\theta), \xi_i), \qquad i = 1, \ldots, T, \tag{4}$$
where x_i(θ) ≡ x(t_i; θ) denote the solutions to the initial value problem in Equation (1). If the observation function has the following form,
$$g(x_i(\theta), \xi_i) = h(x_i(\theta)) + \xi_i, \tag{5}$$
where ξ i are i.i.d. noises, then one can define the likelihood function of θ , p ( y | θ ) , as a product of the density functions of the noises ξ i ,
$$p(\mathbf{y}\,|\,\theta) = \prod_{i=1}^{T} p(\xi_i) = \prod_{i=1}^{T} p\big(y_i - h(x_i(\theta))\big). \tag{6}$$
When the observations are noise-free, ξ_i = 0, and the underlying system is an Itô diffusion process with additive or multiplicative noise, one can use Bayesian imputation to approximate the likelihood function [5]. In both parametric approaches, it is worth noting that the dependence of the likelihood function on the parameter is implicit through the solutions x_i(θ). Practically, this implicit dependence is the source of the computational burden in evaluating the likelihood function, since it requires solving the dynamical model in (1) for every proposal in the MCMC chain. When simulating y_i(θ) is computationally feasible but the likelihood function is intractable, one can use, e.g., the Approximate Bayesian Computation (ABC) rejection algorithm [6,7] for Bayesian inference. Basically, the ABC rejection scheme generates samples of p(θ|y) by comparing the simulated y_i(θ) to the observed data, y_i, under an appropriate metric for each proposal θ ∼ p_0(θ). In general, however, repetitive evaluation of (4) can be expensive when the dynamics in (1) are high-dimensional and/or stiff, when T is large, or when the function g is an average over a long time series. Our goal is to address this situation, as well as the case in which even an approximate likelihood function is not available.
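To make the cost concrete, the short Python sketch below evaluates a log-likelihood of the form (6) for the common special case of additive i.i.d. Gaussian noise; the function names f, h, and the noise scale sigma are illustrative stand-ins rather than quantities defined in the text, and the key point is that every evaluation re-solves the model (1) for the proposed θ.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.stats import norm

def log_likelihood(theta, y_obs, t_obs, f, h, x0, sigma):
    """Log of the parametric likelihood (6), assuming additive i.i.d. Gaussian noise
    (an illustrative choice for the noise density p(xi)).

    f(x, theta) is the vector field in (1); h maps a batch of states to observables.
    Note that each call re-integrates the dynamical model (1) for the proposed theta.
    """
    sol = solve_ivp(lambda t, x: f(x, theta), (0.0, t_obs[-1]), x0,
                    t_eval=t_obs, rtol=1e-8)
    residuals = y_obs - h(sol.y.T)          # y_i - h(x_i(theta)), i = 1, ..., T
    return norm.logpdf(residuals, scale=sigma).sum()
```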
Broadly speaking, the existing approaches to overcome the repetitive evaluation of (4) require knowledge of an approximate likelihood function such as (6). They can be grouped into two classes. The first class consists of methods that improve/accelerate the sampling strategy; for example, Hamiltonian Monte Carlo [8], adaptive MCMC [9], and delayed rejection adaptive Metropolis [10], just to name a few. The second class consists of methods that avoid solving the dynamical model in (1) when running the MCMC chain by replacing it with a computationally more efficient model on a known parameter domain. This class of approaches, also known as surrogate modeling, includes Gaussian process models [11], polynomial chaos [12,13], and enhanced model error [14]; for example, the non-intrusive spectral projection [13] approximates x_i(θ) in (6) with a polynomial chaos expansion. Another related approach, which also avoids MCMC on top of integrating (1), is to employ a polynomial expansion of the likelihood function [15]. This method represents the parametric likelihood function in (6) with orthonormal basis functions of a Hilbert space weighted by the prior measure. This choice of basis functions makes the computation of the statistics of the posterior density straightforward, and thus, MCMC is not needed.
In this paper, we consider a surrogate modeling approach where a nonparametric likelihood function is constructed using a data-driven spectral expansion. By nonparametric, we mean that our approach does not require any parametric form or assume any distribution as in (6). Instead, we approximate the likelihood function using the kernel embedding of conditional distribution formulation introduced in [16,17]. In our application, we will extend their formulation onto a Hilbert space weighted by the sampling measure of the training dataset as in [18]. We will rigorously demonstrate that using orthonormal basis functions of this data-driven weighted Hilbert space, the error bound is independent of the variance of the basis functions, which allows us to determine the amount of training data for accurate likelihood function estimations.
Computationally, assuming that the observations lie on (or close to) a Riemannian manifold N embedded in R n with sampling density q ( y ) , we apply the diffusion maps algorithm [19,20] to approximate orthonormal basis functions φ k L 2 ( N , q ) using the training dataset. Subsequently, a nonparametric likelihood function is represented as a weighted sum of these data-driven basis functions, where the coefficients are precomputed using the kernel embedding formulation. In this fashion, our approach respects the geometry of the data manifold. Using this nonparametric likelihood function, we then generate the MCMC chain for estimating the conditional distribution of hidden parameters. For the present work, our aim is to demonstrate that one can obtain accurate and robust parameter estimation by implementing a simple Bayesian inference algorithm, the Metropolis scheme, with the data-driven nonparametric likelihood function. We should also point out that the present method is computationally feasible on low-dimensional parameter space, like any other surrogate modeling approach. Possible ways to overcome this dimensionality issue will be discussed.
This paper is organized as follows: In Section 2, we review the formulation of the reproducing kernel Hilbert space to estimate conditional density functions. In Section 3, we discuss the error estimate of the likelihood function approximation. In Section 4, we discuss the construction of the analytic basis functions for a Euclidean data manifold, as well as the data-driven basis functions obtained with the diffusion maps algorithm for data that lie on an embedded Riemannian geometry. In Section 5, we provide numerical results for parameter estimation on instructive examples. In one of the examples, where the dynamical model is low-dimensional and the observation is in the form of (5), we compare the proposed approach with the direct MCMC and the non-intrusive spectral projection method (both schemes use a likelihood of the form (6)). In addition, we will also demonstrate the robustness of the proposed approach on an example where g is a statistical average of a long-time trajectory (in which case the likelihood is intractable) and the dynamical model has relatively high-dimensional chaotic dynamics, such that repetitive evaluation of (4) is numerically expensive. In Section 6, we conclude this paper with a short summary. We accompany this paper with Appendices on the treatment of large amounts of data and with additional numerical results.

2. Conditional Density Estimation via Reproducing Kernel Weighted Hilbert Spaces

Let y ∈ N ⊂ R^n, where N is a smooth manifold with intrinsic dimension d ≤ n. In practice, we measure the observations in the ambient coordinates and denote their components as y = {y^1, …, y^n}. The parameter space M has a Euclidean structure with components θ = {θ^1, …, θ^m}, so M is assumed to be either an m-dimensional hyperrectangle or R^m. For training, we are given M training parameters {θ_j}_{j=1,…,M} = {θ_j^1, …, θ_j^m}_{j=1,…,M}. For each training parameter θ_j, we generate a discrete time series of length N of noisy observation data y_{i,j} = {y_{i,j}^1, …, y_{i,j}^n} ∈ R^n for i = 1, …, N and j = 1, …, M. Here, the sub-indices i and j of y_{i,j} correspond to the i-th observation datum for the j-th training parameter θ_j. Our goal for training is to learn the conditional density p(y|θ) from the training dataset {θ_j}_{j=1,…,M} and {y_{i,j}}_{i=1,…,N}^{j=1,…,M} for arbitrary y and θ within the range of {θ_j}_{j=1,…,M}.
The construction of the conditional density p ( y | θ ) is based on a machine learning tool known as the kernel embedding of the conditional distribution formulation introduced in [16,17]. In their formulation, the representation of conditional distributions is an element of a Reproducing Kernel Hilbert Space (RKHS).
Recently, a representation using a Reproducing Kernel Weighted Hilbert Space (RKWHS) was introduced in [18]. That is, let Ψ_k := ψ_k q be the orthonormal basis of L²(N, q⁻¹), which are eigenfunctions of the integral operator,
$$\mathcal{K}f(\mathbf{y}) = \int_{N} K(\mathbf{y}, \mathbf{y}')\, f(\mathbf{y}')\, q^{-1}(\mathbf{y}')\, dV(\mathbf{y}'), \qquad f \in L^2(N, q^{-1}), \tag{7}$$
that is, K Ψ k = λ k Ψ k .
In the case where N is compact and K is Hilbert–Schmidt, the kernel can be written as,
$$K(\mathbf{y}, \mathbf{y}') = \sum_{k=1}^{\infty} \lambda_k\, \Psi_k(\mathbf{y})\, \Psi_k(\mathbf{y}'), \tag{8}$$
which converges in L²(N, q⁻¹). Define the feature map Φ : N → ℓ² as,
$$\Phi(\mathbf{y}) := \big\{ \Phi_k(\mathbf{y}) = \sqrt{\lambda_k}\, \Psi_k(\mathbf{y}) : k \in \mathbb{Z}^+ \big\}, \qquad \mathbf{y} \in N. \tag{9}$$
Therefore, any f ∈ L²(N, q⁻¹) can be represented as f = Σ_{k=1}^∞ f̂_k Ψ_k = Σ_{k=1}^∞ (f̂_k/√λ_k) Φ_k, where f̂_k = ⟨f, Ψ_k⟩_{q⁻¹} = ⟨f, ψ_k⟩ := ∫_N f(y) ψ_k(y) dV, provided that Σ_k |f̂_k|²/λ_k < ∞. If we define ⟨f, g⟩_{H_{q⁻¹}} := Σ_{k=1}^∞ f̂_k ĝ_k / λ_k, then we can write the kernel in (8) as K(y, y′) = ⟨Φ(y), Φ(y′)⟩_{H_{q⁻¹}}. Throughout this manuscript, we denote the RKHS H_{q⁻¹}(N) generated by the feature map Φ in (9) as the space of square-integrable functions with the reproducing property,
$$f(\mathbf{y}) = \langle f, K(\cdot,\mathbf{y})\rangle_{H_{q^{-1}}} := \sum_{k=1}^{\infty} \frac{\hat{f}_k\, \langle K(\cdot,\mathbf{y}), \Psi_k\rangle_{q^{-1}}}{\lambda_k} = \sum_{k=1}^{\infty} \frac{\hat{f}_k}{\sqrt{\lambda_k}}\, \Phi_k(\mathbf{y}) = \langle f, \Phi(\mathbf{y})\rangle_{H_{q^{-1}}}, \qquad \mathbf{y} \in N,$$
induced by the basis Ψ_k ∈ L²(N, q⁻¹). While this definition deceptively suggests that H_{q⁻¹}(N) is similar to L²(N, q⁻¹), we should also point out that the RKHS requires the Dirac functional δ_x : H_{q⁻¹}(N) → R, defined as δ_x f = f(x), to be continuous. Since L² contains equivalence classes of functions, it is not an RKHS, and H_{q⁻¹}(N) ⊂ L²(N, q⁻¹). See, e.g., Chapter 4 of [21] for more details. Using the same definition, we denote by H_{q̃⁻¹}(M) the RKHS induced by an orthonormal basis of L²(M, q̃⁻¹) of functions of the parameter θ.
In this work, we will represent conditional density functions using the RKWHS induced by the data, where the bases will be constructed using the diffusion maps algorithm. The outcome of the training is an estimate of the conditional density, p ^ ( y | θ ) , for arbitrary y and θ within the range of θ j j = 1 , , M .

2.1. Review of Nonparametric RKWHS Representation of Conditional Density Functions

We first review the RKWHS representation of conditional density functions deduced in [18]. Let ψ_k(y) be the orthonormal basis functions of L²(N, q), where N contains the domain of the training data y_{i,j}, and the weight function q(y) is defined with respect to the volume form inherited by N from the ambient space R^n. Let φ_l(θ) ∈ L²(M, q̃) be the orthonormal basis functions in the parameter θ space, where the training parameters are θ_j ∈ M, with weight function q̃(θ). For finite modes, k = 1, …, K_1 and l = 1, …, K_2, a nonparametric RKWHS representation of the conditional density can be written as follows [18]:
$$\hat{p}(\mathbf{y}\,|\,\theta) = \sum_{k=1}^{K_1} \hat{c}_{Y|\theta,k}\, \psi_k(\mathbf{y})\, q(\mathbf{y}), \tag{10}$$
where p̂(y|θ) denotes an estimate of the conditional density p(y|θ) ∈ H_{q⁻¹}(N), and the expansion coefficients are defined as:
$$\hat{c}_{Y|\theta,k} = \sum_{l=1}^{K_2} \big[ C_{Y\Theta}\, C_{\Theta\Theta}^{-1} \big]_{kl}\, \varphi_l(\theta). \tag{11}$$
Here, the matrix C Y Θ is K 1 × K 2 , and the matrix C Θ Θ is K 2 × K 2 , whose components can be approximated by Monte Carlo averages [18]:
$$[C_{Y\Theta}]_{ks} = \mathbb{E}_{Y\Theta}[\psi_k \varphi_s] \approx \frac{1}{MN} \sum_{j=1}^{M} \sum_{i=1}^{N} \psi_k(\mathbf{y}_{i,j})\, \varphi_s(\theta_j), \tag{12}$$
$$[C_{\Theta\Theta}]_{sl} = \mathbb{E}_{\Theta\Theta}[\varphi_s \varphi_l] \approx \frac{1}{M} \sum_{j=1}^{M} \varphi_s(\theta_j)\, \varphi_l(\theta_j), \tag{13}$$
where the expectations E are taken with respect to the sampling densities of the training dataset {y_{i,j}}_{i=1,…,N}^{j=1,…,M} and {θ_j}_{j=1,…,M}. The equation for the expansion coefficients in Equation (11) is based on the theory of the kernel embedding of conditional distributions [16,17,18]. See [18] for the detailed proof of Equations (11)–(13). Note that for the RKWHS representation, the weight functions q and q̃ can be different from the sampling densities of the training dataset {y_{i,j}}_{i=1,…,N}^{j=1,…,M} and {θ_j}_{j=1,…,M}, respectively. This generalizes the representation in [18], which sets the weights q and q̃ to be the sampling densities of the training dataset {y_{i,j}} and {θ_j}, respectively. If the assumption p(y|θ) ∈ H_{q⁻¹}(N) is not satisfied, then C_ΘΘ can be singular. In such a case, one can follow the suggestion in [16,17] to regularize the linear regression in (11) by replacing C_ΘΘ⁻¹ with (C_ΘΘ + λ I_{K_2})⁻¹, where λ ∈ R is an empirically-chosen regularization parameter and I_{K_2} denotes the identity matrix of size K_2 × K_2.
Incidentally, it is worth mentioning that the conditional density in (10) and (11) is represented as a regression in infinite-dimensional spaces with basis functions ψ_k(y) and φ_l(θ). The expression (10) is a nonparametric representation in the sense that we do not assume any particular distribution for the density function p(y|θ). In this representation, only the training dataset {y_{i,j}}_{i=1,…,N}^{j=1,…,M} and {θ_j}_{j=1,…,M}, together with appropriate basis functions, is used to specify the coefficients ĉ_{Y|θ,k} and the densities p̂(y|θ). In Section 4, we will demonstrate how to construct appropriate basis functions entirely from the training data, motivated by the theoretical result in Section 3 below.

2.2. Simplification of the Expansion Coefficients (11)

If the weight function q ˜ θ is the sampling density of the training parameters θ j j = 1 , , M , the matrix C Θ Θ in (13) can be simplified to a K 2 × K 2 identity matrix,
$$[C_{\Theta\Theta}]_{sl} = \mathbb{E}_{\Theta\Theta}[\varphi_s \varphi_l] = \int_{M} \varphi_s(\theta)\, \varphi_l(\theta)\, \tilde{q}(\theta)\, d\theta = \delta_{sl}, \tag{14}$$
where δ s l is the Kronecker delta function. Here, the second equality follows from the weight q ˜ θ being the sampling density, and the third equality follows from the orthonormality of φ l θ L 2 M , q ˜ with respect to the weight function q ˜ . Then, the expansion coefficients c ^ Y | θ , k in (11) can be simplified to,
$$\hat{c}_{Y|\theta,k} = \sum_{l=1}^{K_2} [C_{Y\Theta}]_{kl}\, \varphi_l(\theta), \tag{15}$$
with the K 1 × K 2 matrix C Y Θ still given by (12). In this work, we always take the weight function q ˜ θ to be the sampling density of the training parameters θ j j = 1 , , M for the simplification of the expansion coefficients c ^ Y | θ , k in (15). This assumption is not too restrictive since the training parameters are specified by the users.
Finally, the formula in (10) combined with the expansion coefficients c ^ Y | θ , k in (15) and the matrix C Y Θ in (12) forms an RKWHS representation of the conditional density p y | θ for arbitrary y and θ . Numerically, the training outcome is the matrix C Y Θ in (12), and then, the conditional density p ^ y | θ can be represented by (10) with coefficients (15) using the basis functions ψ k y k = 1 K 1 and φ l θ l = 1 K 2 . From above, one can see that two important questions naturally arise as a consequence of the usage of RKWHS representation: first, whether the representation p ^ y | θ in (10) is valid in estimating the conditional density p y | θ ; second, how to construct the orthonormal basis functions ψ k y L 2 N , q and φ l θ L 2 M , q ˜ . We will address these two important questions in the next two sections.
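To illustrate the training and evaluation steps, the following Python sketch assembles the Monte Carlo estimate of C_YΘ in (12) and evaluates the representation (10) with the simplified coefficients (15). The array layout, the names of the basis callables (psi_fns, phi_fns), and the clipping of small negative values produced by the truncated expansion are our illustrative choices, not part of the original formulation; the callables stand for whichever orthonormal bases (analytic or data-driven, see Section 4) are used.

```python
import numpy as np

def train_C_Ytheta(Y, Theta, psi_fns, phi_fns):
    """Monte Carlo estimate of the K1 x K2 matrix C_YTheta in (12).

    Y     : (M, N, n) array of training observations y_{i,j}
    Theta : (M, m) array of training parameters theta_j
    psi_fns, phi_fns : lists of callables evaluating the basis functions on batches.
    """
    M, N, _ = Y.shape
    Yflat = Y.reshape(M * N, -1)                      # j-major ordering
    Psi = np.stack([psi(Yflat) for psi in psi_fns])   # (K1, M*N)
    Phi = np.stack([phi(Theta) for phi in phi_fns])   # (K2, M)
    Phi_rep = np.repeat(Phi, N, axis=1)               # repeat each theta_j for its N samples
    return Psi @ Phi_rep.T / (M * N)                  # (K1, K2)

def conditional_density(y, theta, C, psi_fns, phi_fns, q_fn):
    """Evaluate the RKWHS estimate p_hat(y | theta) in (10) with coefficients (15)."""
    psi_y = np.array([psi(y[None, :])[0] for psi in psi_fns])      # psi_k(y)
    phi_t = np.array([phi(theta[None, :])[0] for phi in phi_fns])  # phi_l(theta)
    c = C @ phi_t                                                  # coefficients (15)
    return max(float(psi_y @ c) * q_fn(y), 0.0)   # clip negative truncation artifacts
```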

3. Error Estimation

In this section, we focus on the error estimation of the expansion coefficient ĉ_{Y|θ_j,k} and, later, of the conditional density p̂(y|θ_j) at the training parameter θ_j. The notation ĉ_{Y|θ_j,k} denotes the expansion coefficient ĉ_{Y|θ,k} in (15), evaluated at the training parameter θ_j. Let the total number of basis functions in parameter space, K_2, be equal to the total number of training parameters, M, that is, K_2 = M. Denoting Φ = [φ_1, …, φ_M] ∈ R^{M×M}, where the j-th component of the column φ_l approximates the basis function evaluated at the training parameter, φ_l(θ_j), we can write the last equality in (14) in compact form as M⁻¹ Φ^⊤ Φ = I_M. This also implies M⁻¹ Φ Φ^⊤ = I_M, the components of which are,
$$\frac{1}{M} \sum_{l=1}^{M} \varphi_l(\theta_s)\, \varphi_l(\theta_j) = \delta_{sj}. \tag{16}$$
For the training parameter θ j , we can simplify the expansion coefficient c ^ Y | θ j , k by substituting Equation (12) into Equation (15),
$$\hat{c}_{Y|\theta_j,k} = \sum_{l=1}^{M} [C_{Y\Theta}]_{kl}\, \varphi_l(\theta_j) \approx \sum_{l=1}^{M} \frac{1}{MN} \sum_{s=1}^{M} \sum_{i=1}^{N} \psi_k(\mathbf{y}_{i,s})\, \varphi_l(\theta_s)\, \varphi_l(\theta_j) = \frac{1}{N} \sum_{i=1}^{N} \psi_k(\mathbf{y}_{i,j}), \tag{17}$$
where the last equality follows from (16).

3.1. Error Estimation Using Arbitrary Bases

We first study the error estimation for the expansion coefficient c ^ Y | θ j , k . For each training parameter θ j , the conditional density function p ( y | θ j ) H q 1 N can be analytically represented in the form,
$$p(\mathbf{y}\,|\,\theta_j) = \sum_{k=1}^{\infty} c_{Y|\theta_j,k}\, \psi_k(\mathbf{y})\, q(\mathbf{y}), \tag{18}$$
due to the completeness of L 2 N , q . Here, the analytic expansion coefficient c Y | θ j , k is given by,
$$c_{Y|\theta_j,k} = \langle p(\cdot\,|\,\theta_j),\, \psi_k \rangle. \tag{19}$$
Note that the estimator c ^ Y | θ j , k in (17) is a Monte Carlo approximation of the expansion coefficient c Y | θ j , k in (19), i.e.,
$$c_{Y|\theta_j,k} = \langle p(\cdot\,|\,\theta_j),\, \psi_k \rangle = \mathbb{E}_{Y|\theta_j}\big[\psi_k(Y)\big] \approx \frac{1}{N} \sum_{i=1}^{N} \psi_k(\mathbf{y}_{i,j}), \tag{20}$$
where the last (approximate) equality is the Monte Carlo average over the training dataset {y_{i,j}}_{i=1,…,N}, which is sampled from the conditional density p(y|θ_j). Note also that in the following theorems and propositions, the condition p(y|θ_j) ∈ H_{q⁻¹}(N) is required. In Section 5.2 and Appendix B, we will provide an example to discuss this condition in detail. Next, we establish the unbiasedness and consistency of the estimator ĉ_{Y|θ_j,k}.
Proposition 1.
Let y i , j i = 1 , , N be i.i.d. samples of Y | θ j with density p ( y | θ j ) . Let p ( y | θ j ) H q 1 N and ψ k ( y ) form a complete orthonormal basis of L 2 N , q . Assume that Var Y | θ j ψ k Y is finite, then c ^ Y | θ j , k defined in (17) is an unbiased and consistent estimator for c Y | θ j , k in (19).
Proof. 
The estimator c ^ Y | θ j , k is unbiased,
$$\mathbb{E}\big[\hat{c}_{Y|\theta_j,k}\big] = \frac{1}{N} \sum_{i=1}^{N} \mathbb{E}_{Y|\theta_j}\big[\psi_k(Y_{i,j})\big] = c_{Y|\theta_j,k}, \tag{21}$$
where the expectation is taken with respect to the conditional density p y | θ j . If the variance, Var Y | θ j ψ k Y , is finite, then the variance of c ^ Y | θ j , k converges to zero as the number of training data N ,
$$\mathrm{Var}\big[\hat{c}_{Y|\theta_j,k}\big] = \frac{1}{N}\, \mathrm{Var}_{Y|\theta_j}\big[\psi_k(Y)\big] \to 0, \quad \text{as } N \to \infty. \tag{22}$$
Then, we can obtain that the estimator c ^ Y | θ j , k is consistent,
$$\mathrm{Pr}\Big( \big| \hat{c}_{Y|\theta_j,k} - c_{Y|\theta_j,k} \big| > \varepsilon \Big) \leq \frac{\mathrm{Var}\big[\hat{c}_{Y|\theta_j,k}\big]}{\varepsilon^2} \to 0, \quad \text{as } N \to \infty, \quad \text{for all } \varepsilon > 0,$$
where Chebyshev’s inequality has been used. □
If the estimator of p(y|θ_j) is given by the representation with an infinite number of basis functions, p̃(y|θ_j) = Σ_{k=1}^∞ ĉ_{Y|θ_j,k} ψ_k(y) q(y), then the estimator p̃(y|θ_j) is pointwise unbiased for every observation y. However, in a numerical implementation, only a finite number of basis functions can be used in the representation (10). Numerically, the estimator of p(y|θ_j) is given by the representation (10) at the training parameter θ_j,
$$\hat{p}(\mathbf{y}\,|\,\theta_j) = \sum_{k=1}^{K_1} \hat{c}_{Y|\theta_j,k}\, \psi_k(\mathbf{y})\, q(\mathbf{y}).$$
Then, the pointwise error of the estimator, e ^ ( y | θ j ) , can be defined as:
$$\hat{e}(\mathbf{y}\,|\,\theta_j) \equiv p(\mathbf{y}\,|\,\theta_j) - \hat{p}(\mathbf{y}\,|\,\theta_j) = \sum_{k=K_1+1}^{\infty} c_{Y|\theta_j,k}\, \psi_k(\mathbf{y})\, q(\mathbf{y}) + \sum_{k=1}^{K_1} \big( c_{Y|\theta_j,k} - \hat{c}_{Y|\theta_j,k} \big)\, \psi_k(\mathbf{y})\, q(\mathbf{y}). \tag{23}$$
It can be seen that the estimator p ^ ( y | θ j ) is no longer unbiased or consistent due to the first error term in (23) induced by modes k > K 1 . Next, we estimate the expectation and the variance of an L 2 -norm error of p ^ ( y | θ j ) for all training parameters θ j .
Theorem 1.
Let the condition in Proposition 1 be satisfied for all θ j j = 1 , , M , and Var Y | θ j ψ k Y be finite for all k N + . Define the L 2 -norm error,
$$\|\hat{e}\|_{L^2} = \left( \sum_{j=1}^{M} \int_{N} \hat{e}(\mathbf{y}\,|\,\theta_j)^2\, q^{-1}(\mathbf{y})\, dV \right)^{1/2}, \tag{24}$$
where e ^ ( y | θ j ) is the pointwise error in (23), and d V is the volume form inherited by the manifold N from the ambient space R n [18,20]. Then,
$$\mathbb{E}\big[\|\hat{e}\|_{L^2}\big] \leq \left( \sum_{j=1}^{M} \sum_{k=K_1+1}^{\infty} c_{Y|\theta_j,k}^2 + \frac{1}{N} \sum_{j=1}^{M} \sum_{k=1}^{K_1} \mathrm{Var}_{Y|\theta_j}\big[\psi_k(Y)\big] \right)^{1/2}, \tag{25}$$
$$\mathrm{Var}\big[\|\hat{e}\|_{L^2}\big] \leq \sum_{j=1}^{M} \sum_{k=K_1+1}^{\infty} c_{Y|\theta_j,k}^2 + \frac{1}{N} \sum_{j=1}^{M} \sum_{k=1}^{K_1} \mathrm{Var}_{Y|\theta_j}\big[\psi_k(Y)\big], \tag{26}$$
where E and Var are defined with respect to the joint distribution of p(y|θ_j) for all {θ_j}_{j=1,…,M}. Moreover, E[‖ê‖_{L²}] and Var[‖ê‖_{L²}] converge to zero as K_1 → ∞ and then N → ∞, where the limiting operations K_1 → ∞ and N → ∞ do not commute.
Proof. 
The expectation of e ^ L 2 can be estimated as,
$$\mathbb{E}\big[\|\hat{e}\|_{L^2}\big]^2 \leq \mathbb{E}\left[ \sum_{j=1}^{M} \int_{N} \hat{e}(\mathbf{y}\,|\,\theta_j)^2\, q^{-1}(\mathbf{y})\, dV \right] = \mathbb{E}\left[ \sum_{j=1}^{M} \int_{N} \left( \sum_{k=K_1+1}^{\infty} c_{Y|\theta_j,k}\, \psi_k(\mathbf{y}) + \sum_{k=1}^{K_1} \big( c_{Y|\theta_j,k} - \hat{c}_{Y|\theta_j,k} \big)\, \psi_k(\mathbf{y}) \right)^{2} q(\mathbf{y})\, dV \right], \tag{27}$$
where the first inequality follows from Jensen’s inequality. Here, the randomness comes from the estimators c ^ Y | θ j , k . Due to the orthonormality of basis functions, ψ k L 2 N , q , the error estimation in (27) can be simplified as,
$$\mathbb{E}\big[\|\hat{e}\|_{L^2}\big]^2 \leq \sum_{j=1}^{M} \sum_{k=K_1+1}^{\infty} c_{Y|\theta_j,k}^2 + \sum_{j=1}^{M} \sum_{k=1}^{K_1} \mathbb{E}_{Y|\theta_j}\Big[ \big( c_{Y|\theta_j,k} - \hat{c}_{Y|\theta_j,k} \big)^2 \Big] = \sum_{j=1}^{M} \sum_{k=K_1+1}^{\infty} c_{Y|\theta_j,k}^2 + \frac{1}{N} \sum_{j=1}^{M} \sum_{k=1}^{K_1} \mathrm{Var}_{Y|\theta_j}\big[\psi_k(Y)\big], \tag{28}$$
where the inequality follows from the linearity of expectation, and the equality follows from E c ^ Y | θ j , k = c Y | θ j , k in (21) and Var c ^ Y | θ j , k = 1 N Var Y | θ j ψ k Y in (22). In error estimation (28), the first term is deterministic, and the second term is random. We have so far proven that the expectation E e ^ L 2 is bounded by (25). Similarly, we can prove that the variance Var e ^ L 2 is bounded by (26).
Next, we prove that the expectation E[‖ê‖_{L²}] converges to zero as K_1 → ∞ and then N → ∞. Parseval's theorem states that:
$$\sum_{k=1}^{\infty} c_{Y|\theta_j,k}^2 = \int_{N} p(\mathbf{y}\,|\,\theta_j)^2\, q^{-1}(\mathbf{y})\, dV < +\infty, \quad \text{for all } \theta_j, \tag{29}$$
where the inequality follows from p(y|θ_j) ∈ H_{q⁻¹}(N) ⊂ L²(N, q⁻¹) for all θ_j. For any ε > 0, there exists an integer K̃_1(θ_j) for each θ_j such that:
$$\sum_{k=\tilde{K}_1(\theta_j)}^{\infty} c_{Y|\theta_j,k}^2 < \frac{\varepsilon}{2M}. \tag{30}$$
Let:
$$K_1 = \max\big\{ \tilde{K}_1(\theta_1), \ldots, \tilde{K}_1(\theta_M) \big\}, \tag{31}$$
then the first term in (28) can be bounded by ε/2,
$$\sum_{j=1}^{M} \sum_{k=K_1+1}^{\infty} c_{Y|\theta_j,k}^2 < \frac{\varepsilon}{2}. \tag{32}$$
Since the variance Var Y | θ j ψ k ( Y ) is assumed to be finite for all k and j, there exists a constant D > 0 such that Var Y | θ j ψ k ( Y ) can be bounded above by this constant D,
$$\mathrm{Var}_{Y|\theta_j}\big[\psi_k(Y)\big] \leq D, \quad \text{for all } k = 1, \ldots, K_1 \text{ and } j = 1, \ldots, M. \tag{33}$$
Then, for ε > 0 , there exists a sufficiently large number of training data:
$$N_{\min} = \frac{2 M K_1 D}{\varepsilon}, \tag{34}$$
such that whenever N > N min , then:
$$\frac{1}{N} \sum_{j=1}^{M} \sum_{k=1}^{K_1} \mathrm{Var}_{Y|\theta_j}\big[\psi_k(Y)\big] < \frac{\varepsilon}{2}. \tag{35}$$
Since ε > 0 is arbitrary, by substituting Equations (32) and (35) into the error estimation (28), we obtain that E[‖ê‖_{L²}] converges to zero as K_1 → ∞ and then N → ∞. Note that we first take K_1 → ∞ to ensure that the first error term in (28) vanishes and then take N → ∞ to ensure that the second error term in (28) vanishes. Thus, the limiting operations K_1 → ∞ and N → ∞ do not commute. Similarly, we can prove that the variance Var[‖ê‖_{L²}] converges to zero as K_1 → ∞ and then N → ∞. □
Theorem 1 provides the intuition for specifying the number of training observation data N to achieve any desired accuracy ε > 0 given fixed M-parameters and sufficiently large K 1 . It can be seen from Theorem 1 that numerically, the expectation E e ^ L 2 in (25) and the variance Var e ^ L 2 in (26) can be bounded within arbitrarily small ε by choosing sufficiently large K 1 and N. Specifically, there are two error terms in Equations (25) and (26), the first being deterministic, induced by modes k > K 1 , and the second random, induced by modes k K 1 . For the deterministic term ( k > K 1 ), the error can be bounded by ε / 2 by choosing sufficiently large K 1 satisfying (31). In our implementation, the number of basis functions K 1 is empirically chosen to be large enough in order to make the first error term in Equations (25) and (26) for k > K 1 as small as possible.
For the random term ( k K 1 ), the error can be bounded by ε / 2 by choosing sufficiently large N satisfying N > N min = 2 M K 1 D / ε (Equation (34)). The minimum number of training data, N min , depends on the upper bound of Var Y | θ j ψ k ( Y ) , D. However, the upper bound D may not exist for some problems. This means that for some problems, the assumption for finite Var Y | θ j ψ k ( Y ) in Theorem 1 may not be satisfied. Even if the upper bound D exists, it is typically not easy to evaluate its value given an arbitrary basis ψ k L 2 N , q since one needs to evaluate Var Y | θ j ψ k ( Y ) for all k = 1 , , K 1 and j = 1 , , M . Note that Theorem 1 holds true for representing p ^ ( y | θ j ) with an arbitrary basis ψ k L 2 N , q as long as p ( y | θ j ) H q 1 N for all θ j and Var Y | θ j ψ k ( Y ) is finite for k K 1 and j = 1 , , M . Next, we provide several cases in which Var Y | θ j ψ k ( Y ) is finite for k and j.
Remark 1.
If the weighted Hilbert space L 2 N , q is defined on a compact manifold N and has smooth basis functions ψ k , then Var Y | θ j ψ k Y is finite for a fixed k N + and j = 1 , , M . This assertion follows from the fact that continuous functions on a compact manifold are bounded. The smoothness assumption is not unreasonable in many applications since the orthonormal basis functions are obtained as solutions of an eigenvalue problem of a self-adjoint second-order elliptic differential operator. Note that the bound here is not necessarily a uniform bound of ψ k Y for all k N + and j = 1 , , M . As long as Var Y | θ j ψ k Y is finite for k K 1 and j = 1 , , M , the upper bound D is finite, and then, Theorem 1 holds.
Remark 2.
If the manifold N is a hyperrectangle in R n and the weight q is a uniform distribution on N , then Var Y | θ j ψ k Y is finite for a fixed k N + and j = 1 , , M . This assertion is an immediate consequence of Remark 1.
In Theorem 1, N_min depends on the upper bound D of Var_{Y|θ_j}[ψ_k(Y)], as shown in (34). In the following, we will specify a Hilbert space, referred to as a data-driven Hilbert space, such that N_min is independent of D and depends only on M, K_1, and ε. As a consequence, we can easily determine how many training data N are needed to bound the second error term in Equations (25) and (26).

3.2. Error Estimation Using a Data-Driven Hilbert Space

We now turn to the discussion of a specific data-driven Hilbert space L 2 N , q ¯ with orthonormal basis functions ψ ¯ k . Our goal is to specify the weight function q ¯ such that the minimum number of training data, N ¯ min , only depends on M, K 1 , and ε . Here, the overline · ¯ corresponds to the specific data-driven Hilbert space. The second error term in (28) can be further estimated as,
$$\frac{1}{N} \sum_{j=1}^{M} \sum_{k=1}^{K_1} \mathrm{Var}_{Y|\theta_j}\big[\bar{\psi}_k(Y)\big] \leq \frac{1}{N} \sum_{k=1}^{K_1} \sum_{j=1}^{M} \mathbb{E}_{Y|\theta_j}\big[\bar{\psi}_k^2(Y)\big] = \frac{M}{N} \sum_{k=1}^{K_1} \int_{N} \bar{\psi}_k^2(\mathbf{y}) \left( \frac{1}{M} \sum_{j=1}^{M} p(\mathbf{y}\,|\,\theta_j) \right) dV, \tag{36}$$
where the basis functions are substituted with the specific ψ ¯ k . Notice that ψ ¯ k ( y ) are orthonormal basis functions with respect to the weight q ¯ in L 2 N , q ¯ . One specific choice of the weight function q ¯ y is:
$$\bar{q}(\mathbf{y}) = \frac{1}{M} \sum_{j=1}^{M} p(\mathbf{y}\,|\,\theta_j), \tag{37}$$
where q ¯ y has been normalized, i.e.,
$$\int_{N} \bar{q}(\mathbf{y})\, dV = \int_{N} \frac{1}{M} \sum_{j=1}^{M} p(\mathbf{y}\,|\,\theta_j)\, dV = 1. \tag{38}$$
For the data-driven Hilbert space, we always use a normalized weight function q ¯ y . Note that the weight function q ¯ y in (37) is a discretization of the marginal density function of Y with Θ marginalized out,
$$\bar{q}(\mathbf{y}) = \frac{1}{M} \sum_{j=1}^{M} p(\mathbf{y}\,|\,\theta_j) \approx \int_{M} p(\mathbf{y}\,|\,\theta)\, \tilde{q}(\theta)\, d\theta = \int_{M} p(\mathbf{y}, \theta)\, d\theta, \tag{39}$$
where p ( y , θ ) denotes the joint density of ( Y , Θ ) . Essentially, the weight function q ¯ y in (37) is the sampling density of all the training data y i , j j = 1 , , M i = 1 , , N , which motivates us to refer to L 2 N , q ¯ as a data-driven Hilbert space.
Next, we prove that by specifying the data-driven basis functions ψ ¯ k L 2 N , q ¯ , the variance Var Y | θ j ψ ¯ k ( Y ) is finite for all k N + and j = 1 , , M . Subsequently, we can obtain the minimum number of training data, N ¯ min , to only depend on M, K 1 , and ε , such that the expectation E e ^ L 2 in (25) and the variance Var e ^ L 2 in (26) are bounded above by any ε > 0 .
Proposition 2.
Let {y_{i,j}}_{i=1,…,N} be i.i.d. samples of Y|θ_j with density p(y|θ_j). Let p(y|θ_j) ∈ H_{q̄⁻¹}(N) for all {θ_j}_{j=1,…,M}, with weight q̄ specified in (37), and let ψ̄_k be the complete orthonormal basis of L²(N, q̄). Then, Var_{Y|θ_j}[ψ̄_k(Y)] is finite for all k ∈ N⁺ and j = 1, …, M.
Proof. 
Notice that for all k N + , we have:
$$\frac{1}{M} \sum_{j=1}^{M} \mathrm{Var}_{Y|\theta_j}\big[\bar{\psi}_k(Y)\big] \leq \frac{1}{M} \sum_{j=1}^{M} \mathbb{E}_{Y|\theta_j}\big[\bar{\psi}_k^2(Y)\big] = \int_{N} \bar{\psi}_k^2(\mathbf{y}) \left( \frac{1}{M} \sum_{j=1}^{M} p(\mathbf{y}\,|\,\theta_j) \right) dV = \int_{N} \bar{\psi}_k^2(\mathbf{y})\, \bar{q}(\mathbf{y})\, dV = 1, \tag{40}$$
where the last equality follows directly from the orthonormality of basis functions ψ ¯ k ( y ) L 2 N , q ¯ . From Equation (40), we can obtain that for all k N + and j = 1 , , M , the variance Var Y | θ j ψ ¯ k Y is finite. □
Theorem 2.
Given the same hypothesis as in Proposition 2, then:
$$\mathbb{E}\big[\|\hat{e}\|_{L^2}\big] \leq \left( \sum_{j=1}^{M} \sum_{k=K_1+1}^{\infty} c_{Y|\theta_j,k}^2 + \frac{M K_1}{N} \right)^{1/2}, \tag{41}$$
$$\mathrm{Var}\big[\|\hat{e}\|_{L^2}\big] \leq \sum_{j=1}^{M} \sum_{k=K_1+1}^{\infty} c_{Y|\theta_j,k}^2 + \frac{M K_1}{N}, \tag{42}$$
where ‖ê‖_{L²} is defined by (24) and c_{Y|θ_j,k} is given by (19). Moreover, E[‖ê‖_{L²}] and Var[‖ê‖_{L²}] converge to zero as K_1 → ∞ and then N → ∞, where the limiting operations K_1 → ∞ and N → ∞ do not commute.
Proof. 
According to Proposition 2, we have that the variance Var Y | θ j ψ ¯ k Y is finite for all k N + and j = 1 , , M . According to Proposition 1, since Var Y | θ j ψ ¯ k Y is finite, we have that the estimator c ^ Y | θ j , k is both unbiased and consistent for c Y | θ j , k . All conditions in Theorem 1 are satisfied, so that we can obtain the error estimation of the expectation E e ^ L 2 in (25) and the error estimation of the variance Var e ^ L 2 in (26). Moreover, the second error term in E e ^ L 2 (25) and Var e ^ L 2 (26) can be both bounded by Equation (40), so that we can obtain our error estimations (41) and (42).
Choose K 1 as in (31) such that the first term in (41) and (42) is bounded by ε / 2 . The second term M K 1 / N in (41) and (42) can be bounded by an arbitrarily small ε / 2 if the number of training data N satisfies:
$$N > \bar{N}_{\min} \equiv \frac{2 M K_1}{\varepsilon}. \tag{43}$$
Then, both the expectation E e ^ L 2 and the variance Var e ^ L 2 can be bounded by ε . Since ε > 0 is arbitrary, the proof is complete. □
Recall that when applying arbitrary basis functions to represent p̂(y|θ) in (10), it is typically not easy to evaluate the upper bound D in (33), which implies that it is not easy to determine how many observation data, N_min (Equation (34)), should be used for training. However, when applying the data-driven basis functions ψ̄_k to represent p̂(y|θ) in (10), the minimum number of training data, N̄_min (Equation (43)), becomes independent of D and depends only on M, K_1, and ε, as can be seen from Theorem 2. To make the error induced by the modes k ≤ K_1 smaller than a desired ε/2, we can easily determine how many observation data, N̄_min (Equation (43)), should be used for training. In this sense, the specific data-driven Hilbert space L²(N, q̄) with the corresponding basis functions ψ̄_k is a good choice for representing (10).
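As an illustration of how (43) guides the choice of training sizes (a back-of-the-envelope calculation of ours, using the training sizes reported for Example I in Section 5.1): with M = 64 training parameters, K_1 = 400 basis functions, and N = 640,000 samples per parameter, the random error contribution in (41) and (42) is

$$\frac{M K_1}{N} = \frac{64 \times 400}{640{,}000} = 0.04, \qquad \text{i.e., this choice of } N \text{ corresponds to } \varepsilon = \frac{2 M K_1}{N} = 0.08 \text{ in (43)}.$$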
We have so far theoretically verified the validity of the representation (10) in estimating the conditional density p ( y | θ j ) (Theorem 1). In particular, using the data-driven basis ψ ¯ k L 2 N , q ¯ , we can easily control the error of conditional density estimation by specifying the number of training data N (Theorem 2). To summarize, the training procedures can be outlined as follows:
(1-A)
Generate the training dataset, including training parameters θ j j = 1 , , M and observations y i , j j = 1 , , M i = 1 , , N . The length of training data N is empirically determined based on the criteria (34) or (43).
(1-B)
Construct the basis functions for parameter θ space and for observation y space by using the training dataset. For y space, we need to empirically choose the number of basis functions K 1 to let the error induced by modes k > K 1 be as small as possible. In particular, for the data-driven Hilbert space, we will provide a detailed discussion on how to estimate the data-driven basis functions of L 2 ( N , q ¯ ) with the sampling density q ¯ from the training data in the following Section 4. Note that this basis estimation will introduce additional errors beyond the results in this section, which assumed the data-driven basis functions to be given.
(1-C)
Train the matrix C Y Θ in (12) and then estimate the conditional density p ^ y | θ by using the nonparametric RKWHS representation (10) with the expansion coefficients c ^ Y | θ , k (15).
(1-D)
Finally, for new observations y = {y_1, …, y_T}, define the likelihood function as a product of the conditional densities of the new observations y given any θ (a code sketch of this step is given after the list),
$$p(\mathbf{y}\,|\,\theta) = \prod_{t=1}^{T} \hat{p}(y_t\,|\,\theta). \tag{44}$$
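A minimal sketch of Procedure (1-D), reusing the conditional_density function from the sketch in Section 2; working in log space and flooring nonpositive values of the truncated expansion are numerical safeguards of ours, not part of the formulation.

```python
import numpy as np

def log_likelihood_surrogate(theta, y_new, C, psi_fns, phi_fns, q_fn, floor=1e-12):
    """Log of the likelihood (44): the product of surrogate conditional densities
    p_hat(y_t | theta) over the new observations y_new (a sequence of y_t)."""
    dens = [conditional_density(np.asarray(y_t), np.asarray(theta),
                                C, psi_fns, phi_fns, q_fn) for y_t in y_new]
    return float(np.sum(np.log(np.maximum(dens, floor))))
```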
Next, we address the second important question for the RKWHS representation (Procedure (1-B)): how to construct basis functions for θ and y . Especially, we focus on how to construct the data-driven basis functions for y .

4. Basis Functions

This section will be organized as follows. In Section 4.1, we discuss how to employ analytical basis functions for parameter θ and for observation y as in the usual polynomial chaos expansion. In Section 4.2, we discuss how to construct the data-driven basis functions ψ ¯ k L 2 N , q ¯ with N being the manifold of the training dataset y i , j j = 1 , , M i = 1 , , N and the weight q ¯ by (37) being the sampling density of y i , j j = 1 , , M i = 1 , , N .

4.1. Analytic Basis Functions

If no prior information about the parameter space other than its domain is known, we can assume that the training parameters are uniformly distributed on the parameter θ space. In particular, we choose M well-sampled training parameters {θ_j}_{j=1,…,M} = {θ_j^1, …, θ_j^m}_{j=1,…,M} in an m-dimensional box M ⊂ R^m,
$$M \equiv [\theta_{\min}^1, \theta_{\max}^1] \times \cdots \times [\theta_{\min}^m, \theta_{\max}^m], \tag{45}$$
where × denotes a Cartesian product and the two parameters θ min s and θ max s are the minimum and maximum values of the uniform distribution for the s th coordinate of θ space. Here, the well-sampled uniform distribution corresponds to a regular grid, which is a tessellation of m-dimensional Euclidean space R m by congruent parallelotopes. Two parameters θ min s and θ max s are determined by:
$$\theta_{\min}^s = \min_{j=1,\ldots,M} \theta_j^s - \gamma \Big( \max_{j=1,\ldots,M} \theta_j^s - \min_{j=1,\ldots,M} \theta_j^s \Big), \qquad \theta_{\max}^s = \max_{j=1,\ldots,M} \theta_j^s + \gamma \Big( \max_{j=1,\ldots,M} \theta_j^s - \min_{j=1,\ldots,M} \theta_j^s \Big). \tag{46}$$
For regularly-spaced grid points θ_j^s, we set γ = 0.5(M_s − 1)⁻¹ in all of our numerical examples below, where M_s is the number of training parameters in the s-th coordinate. For example, see Figure 1 for the 2D well-sampled uniformly-distributed data {(5, 5), (6, 5), …, (12, 12)} (blue circles). In this case, the two-dimensional box M is [4.5, 12.5]² (red square).
On this simple geometry, we will choose φ k to be the tensor product of the basis functions on each coordinate. Notice that we have taken the weight function q ˜ to be the sampling density of the training parameters in order to simplify the expansion coefficient c ^ Y | θ , k in (15). In this case, the weight q ˜ is a uniform distribution on M . Then, for the s th coordinate of the parameter, θ s , the weight function q ˜ s θ s is a uniform distribution on the interval [ θ min s , θ max s ] , and one can choose the following cosine basis functions,
$$\Phi_{k^s}(\theta^s) = \begin{cases} 1, & \text{if } k^s = 0, \\ \sqrt{2}\, \cos\!\left( k^s \pi\, \dfrac{\theta^s - \theta_{\min}^s}{\theta_{\max}^s - \theta_{\min}^s} \right), & \text{else}, \end{cases} \tag{47}$$
where Φ_{k^s}(θ^s) form a complete orthonormal basis of L²([θ_min^s, θ_max^s], q̃^s). This choice of basis functions corresponds exactly to the data-driven basis functions produced by the diffusion maps algorithm on a uniformly-distributed dataset on a compact interval, which will be discussed in Section 4.2. Although other choices, such as the Legendre polynomials, can be used, such a choice would lead to a larger value of the constant D in (34) that controls the minimum number of training data for accurate estimation.
Subsequently, we set L²(M, q̃) = ⊗_{s=1}^m L²([θ_min^s, θ_max^s], q̃^s), where ⊗ denotes the Hilbert tensor product, and q̃(θ) = ∏_{s=1}^m q̃^s(θ^s) is the uniform distribution on the m-dimensional box M. Correspondingly, the basis functions φ_k(θ) are a tensor product of Φ_{k^s}(θ^s) for s = 1, …, m,
$$\varphi_{\mathbf{k}}(\theta) = \prod_{s=1}^{m} \Phi_{k^s}(\theta^s) = \Phi_{k^1}(\theta^1) \cdots \Phi_{k^m}(\theta^m), \tag{48}$$
where k = (k^1, …, k^m) and θ = (θ^1, …, θ^m). Based on the properties of the tensor product of Hilbert spaces, φ_k(θ) forms a complete orthonormal basis of L²(M, q̃).
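The sketch below constructs the one-dimensional cosine basis (47) on a domain estimated as in (46) and assembles the tensor-product basis (48); the helper names and vectorization conventions are illustrative assumptions.

```python
import numpy as np
from itertools import product

def domain(theta_train, gamma):
    """Per-coordinate domain [theta_min^s, theta_max^s] as in (46); theta_train is (M, m)."""
    lo, hi = theta_train.min(axis=0), theta_train.max(axis=0)
    return lo - gamma * (hi - lo), hi + gamma * (hi - lo)

def cosine_basis_1d(k, lo, hi):
    """The k-th cosine basis function (47), orthonormal w.r.t. the uniform weight on [lo, hi]."""
    if k == 0:
        return lambda t: np.ones_like(t)
    return lambda t: np.sqrt(2.0) * np.cos(k * np.pi * (t - lo) / (hi - lo))

def tensor_cosine_basis(multi_indices, lo, hi):
    """Tensor-product basis functions (48) for multi-indices k = (k^1, ..., k^m);
    each returned callable acts on an array of shape (..., m)."""
    fns = []
    for k in multi_indices:
        parts = [cosine_basis_1d(ks, lo[s], hi[s]) for s, ks in enumerate(k)]
        fns.append(lambda th, parts=parts: np.prod(
            [p(th[..., s]) for s, p in enumerate(parts)], axis=0))
    return fns

# e.g., all multi-indices with k^s < 20 in m = 2 dimensions gives 400 basis functions:
# phi_fns = tensor_cosine_basis(list(product(range(20), repeat=2)), lo, hi)
```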
We now turn to the discussion of how to construct analytic basis functions for y . The approach is similar to the one for parameter θ , except that the domain of the data is specified empirically and the weight function is chosen to correspond to some well-known analytical basis functions, independent of the sampling distribution of the data y . That is, we assume the geometry of the data has the following tensor structure, N = N 1 × × N n , where N s will be specified empirically based on the ambient space coordinate of y . Let y s be the s th ambient component of y ; we can choose a weighted Hilbert space L 2 N s , q s ( y s ; α s ) with the weight q s depending on the parameters α s and being normalized to satisfy R q s ( y s ; α s ) d y s = 1 . For each coordinate, let Ψ k s ( y s ; α s ) be the corresponding orthonormal basis functions, which possess analytic expressions. Subsequently, we can obtain a set of complete orthonormal basis functions ψ k L 2 N , q for y by taking the tensor product of these Ψ k s as in (48).
For example, if the weight q s is uniform, N s R is simply a one-dimensional interval. In this case, we can choose the cosine basis functions Ψ k s for y as in (47) such that the parameters α s correspond to the boundaries of the domain N s , which can be estimated as in (46). In our numerical experiments below, we will set γ = 0.1 . Another choice is to set the weight q s ( y s ; α s ) to be Gaussian. In this case, the domain is assumed to be the real line, N s = R . For this choice, the corresponding orthonormal basis functions Ψ k s are Hermite polynomials, and the parameters α s , corresponding to the mean and variance of the Gaussian distribution, can be empirically estimated from the training data.
In the remainder of this paper, we will always use the cosine basis functions for θ . The application of (10) using cosine basis functions for y is referred to as the cosine representation. The application of (10) using Hermite basis functions for y is referred to as the Hermite representation.

4.2. Data-Driven Basis Functions

In this section, we discuss how to construct a set of data-driven basis functions ψ ¯ k L 2 N , q ¯ with N being the manifold of the training dataset y i , j j = 1 , , M i = 1 , , N and weight q ¯ in (37) being the sampling density of y i , j for all i = 1 , , N , and j = 1 , , M . The issues here are that the analytical expression of the sampling density q ¯ is unknown and the Riemannian metric inherited by the data manifold N from the ambient space R n is also unknown. Fortunately, these issues can be overcome by the diffusion maps algorithm [18,19,20].

4.2.1. Learning the Data-Driven Basis Functions

Given a dataset y_{i,j} ∈ N ⊂ R^n with the sampling density q̄(y) in (37), defined with respect to the volume form inherited by the manifold N from the ambient space R^n, one can use the kernel-based diffusion maps algorithm to construct an MN × MN matrix L that approximates a weighted Laplacian operator, L = ∇ log q̄ · ∇ + Δ, acting on functions that satisfy Neumann boundary conditions if the compact manifold N has a boundary. The eigenvectors ψ̄_k of the matrix L are discrete approximations of the eigenfunctions ψ̄_k(y) of the operator L, which form an orthonormal basis of the weighted Hilbert space L²(N, q̄). Connecting to the discussion of the RKWHS in Section 2, the eigenfunctions of L* = −div(· ∇ log q̄) + Δ, that is, {Ψ_k := ψ̄_k q̄}, can be approximated using an integral operator as in (7) with the appropriate kernel constructed by the diffusion maps algorithm, up to a diagonal conjugation. Basically, H_{q̄⁻¹}(N) is the data-driven reproducing kernel Hilbert space defined with the feature map in (9), induced by the eigenfunctions of L*.
Each component of the eigenvector ψ ¯ k R M N is a discrete estimate of the eigenfunction ψ ¯ k y i , j , evaluated at the training data point y i , j . The sampling density q ¯ defined in (37) is estimated using a kernel density estimation method [22]. In contrast to the analytic continuous basis functions in the above Section 4.1, the data-driven basis functions ψ ¯ k L 2 N , q ¯ are represented nonparametrically by the discrete eigenvectors ψ ¯ k R M N using the diffusion maps algorithm. The outcome of the training is a discrete estimate of the conditional density, p ^ y i , j | θ , which estimates the representation p ^ y | θ (10) on each training data point y i , j .
In our implementation, we use the Variable-Bandwidth Diffusion Maps (VBDM) algorithm introduced in [20], which extends the diffusion maps to non-compact manifolds without a boundary. See the supplementary material of [23] for the MATLAB code of this algorithm. We should point out that this discrete approximation induces errors in the basis function, which are estimated in detail in [24]. These errors are in addition to the error estimations in Section 3.
We note that if the data are uniformly distributed on a one-dimensional bounded interval, then the VBDM solutions are the cosine basis functions, which are eigenfunctions of the Laplacian operator on bounded interval with Neumann boundary conditions. This means that the cosine functions in (47) that are used to represent each component of θ are analogous to the data-driven basis functions. The difference is that with the parametric choice in (47), one avoids VBDM at the expense of specifying the boundaries of the domain, [ θ min s , θ max s ] . In the remainder of this paper, we refer to an application of (10) with cosine basis functions for θ and VBDM basis functions for y as the VBDM representation.
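For readers unfamiliar with the construction, the sketch below is a minimal fixed-bandwidth diffusion maps implementation; the paper itself uses the variable-bandwidth (VBDM) algorithm of [20] together with a kernel density estimate [22], so the kernel, the α = 1/2 density normalization, the bandwidth ε, and the normalization conventions here are simplified, illustrative choices rather than the actual VBDM implementation.

```python
import numpy as np
from scipy.spatial.distance import cdist

def diffusion_maps_basis(Y, eps, K1):
    """Fixed-bandwidth diffusion maps sketch on data Y of shape (B, n); returns the
    first K1 eigenvectors, which discretely approximate eigenfunctions of
    L = grad(log q_bar) . grad + Laplacian, and approximate eigenvalues of -L."""
    W = np.exp(-cdist(Y, Y, 'sqeuclidean') / (4.0 * eps))
    q = W.sum(axis=1)                                # unnormalized kernel density estimate
    W = W / np.outer(np.sqrt(q), np.sqrt(q))         # alpha = 1/2 normalization
    d = W.sum(axis=1)
    S = W / np.outer(np.sqrt(d), np.sqrt(d))         # symmetric conjugate of D^{-1} W
    vals, U = np.linalg.eigh(S)
    idx = np.argsort(-vals)[:K1]                     # largest eigenvalues first
    psi_bar = U[:, idx] / np.sqrt(d)[:, None]        # right eigenvectors of D^{-1} W
    # approximately enforce (1/B) sum_i psi_j(y_i) psi_k(y_i) = delta_jk
    psi_bar *= np.sqrt(Y.shape[0]) / np.linalg.norm(psi_bar, axis=0)
    return psi_bar, (1.0 - vals[idx]) / eps          # eigenvalues of -L, up to the bandwidth convention
```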
However, a direct application of the VBDM algorithm suffers from an expensive computational cost when the training dataset is large. Basically, we need an algorithm that allows us to subsample the training dataset while preserving the sampling distribution of the full dataset. In Appendix A, we provide a simple box-averaging method to achieve this goal. In the remainder of this paper, we will denote the reduced data obtained via the box-averaging method in Appendix A by {ȳ_b}_{b=1,…,B}, where B ≪ MN. We refer to them as the box-averaged data points. When the number of training data is too large, we apply the VBDM algorithm to these box-averaged data to obtain the discrete estimate of the eigenfunctions ψ̄_k(ȳ_b).
The second issue arises from the discrete representation of the conditional density in the observation y space using the VBDM algorithm. Notice that the VBDM representation, p̂(y_{i,j}|θ), is only estimated at each training data point y_{i,j}. A natural problem is to extend the representation onto new observations y_t that are not part of the training dataset {y_{i,j}}_{i=1,…,N}^{j=1,…,M} (Procedure (1-D)). Next, we address this issue.

4.2.2. Nyström Extension

We now discuss an extension method to evaluate the basis functions ψ̄_k at a new data point that does not belong to the training dataset. Given such an extension method, we can proceed with Procedure (1-D) by evaluating ψ̄_k(y_t) at new observations y_t ∉ {y_{i,j}}_{i=1,…,N}^{j=1,…,M}, which in turn gives p̂(y_t|θ). In addition, this extension is also needed in the training Procedure (1-C) when MN is large. More specifically, for training the matrix C_YΘ in (12), we need to know the estimate of the eigenfunction ψ̄_k(y_{i,j}) for all the original training data y_{i,j}. Computationally, however, we can only construct the discrete estimate of the eigenfunction ψ̄_k(ȳ_b) at the reduced box-averaged data points ȳ_b. This suggests that we need to extend the eigenfunctions ψ̄_k(ȳ_b) onto all the original training data {y_{i,j}}_{i=1,…,N}^{j=1,…,M}.
For the convenience of discussion, the training data that are used to construct the eigenfunctions are denoted by {y_r^old}_{r=1,…,R}, and a data point that is not part of {y_r^old}_{r=1,…,R} is denoted by y^new. To extend the eigenfunctions ψ̄_k(y_r^old) onto the data point y^new ∉ {y_r^old}_{r=1,…,R}, one approach is to use the Nyström extension [25], which is based on the basic theory of RKHS [26]. Let H_q̄(N) be the RKWHS with a symmetric positive kernel T̂ : N × N → R defined as,
$$\hat{T}(\mathbf{y}, \mathbf{y}') = \sum_{k=1}^{\infty} \lambda_k\, \bar{\psi}_k(\mathbf{y})\, \bar{\psi}_k(\mathbf{y}'),$$
where λ_k is the eigenvalue of L associated with the eigenfunction ψ̄_k. Then, for any function f ∈ H_q̄(N), the Moore–Aronszajn theorem states that one can evaluate f at a ∈ N with the following inner product, f(a) = ⟨f, T̂(a, ·)⟩_{H_q̄}. In our application, this amounts to evaluating,
$$\bar{\psi}_k(\mathbf{y}^{\mathrm{new}}) = \frac{1}{R} \sum_{r=1}^{R} T(\mathbf{y}^{\mathrm{new}}, \mathbf{y}_r^{\mathrm{old}})\, \bar{\psi}_k(\mathbf{y}_r^{\mathrm{old}}),$$
where the non-symmetric kernel function T : N × N R (constructed by the diffusion maps algorithm) is related to the symmetric kernel T ^ by,
$$T(\mathbf{y}_i, \mathbf{y}_j) = \bar{q}^{-1/2}(\mathbf{y}_i)\, \hat{T}(\mathbf{y}_i, \mathbf{y}_j)\, \bar{q}^{1/2}(\mathbf{y}_j),$$
with q ¯ y i being the sampling density of y r old r = 1 , , R at y i . See the detailed evaluation of the kernels T ^ and T for the Nyström extension in [27]. After obtaining the estimate of the eigenfunction ψ ¯ k y new using the Nyström extension, we can train the matrix C Y Θ in (12) for large M N and then obtain the representation of the conditional density on arbitrary new observation y t , p ^ y t | θ .
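For intuition, the sketch below implements the textbook Nyström extension for a generic Mercer kernel, in which the extension is rescaled by the corresponding eigenvalue; the paper's kernel T above already absorbs the diffusion-maps normalizations, so the exact weighting differs, and the kernel K_fn and the eigenvalue convention here are illustrative assumptions rather than the evaluation detailed in [27].

```python
import numpy as np

def nystrom_extend(y_new, Y_old, psi_old, lam, K_fn):
    """Textbook Nystrom extension for a Mercer kernel.

    Y_old   : (R, n) points on which the eigenvectors were computed
    psi_old : (R, K1) eigenvector values at Y_old
    lam     : (K1,) corresponding kernel eigenvalues (assumed nonzero)
    K_fn    : callable, K_fn(y_new, Y_old) -> (R,) row of kernel values
    """
    k_row = K_fn(y_new, Y_old)                 # K(y_new, y_r), r = 1, ..., R
    R = Y_old.shape[0]
    return (k_row @ psi_old) / (R * lam)       # psi_k(y_new) ~ (1/(R lam_k)) sum_r K psi_k(y_r)
```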
To summarize this section, we have constructed two different sets of basis functions for y , the analytic basis functions of L 2 N , q such as the Hermite and cosine basis functions, which assume that the manifold is R n or hyperrectangle, respectively, and the data-driven basis functions of L 2 N , q ¯ , with N being the data manifold and q ¯ being the sampling density that are computed using the VBDM algorithm.

5. Parameter Estimation Using the Metropolis Scheme

First, we briefly review the Metropolis scheme for estimating the posterior density p ( θ | y ) given new observations y = { y 1 , , y T } for a specific parameter θ . The key idea of the Metropolis scheme is to construct a Markov chain such that it converges to samples of conditional density p ( θ | y ) as the target density. In our application, the parameter estimation procedures can be outlined as follows:
(2-A)
Suppose we have θ_0 with p(θ_0|y) > 0; then for i ≥ 1, we sample θ* ∼ κ(θ_{i−1}, θ*). Here, κ is the proposal kernel density. For example, the random walk Metropolis algorithm generates proposals with κ(θ_{i−1}, θ*) = N(θ_{i−1}, C), where C, the proposal covariance, is a tunable nuisance parameter.
(2-B)
Accept the proposal, θ_i = θ*, with probability min(p(θ*|y)/p(θ_{i−1}|y), 1); otherwise, set θ_i = θ_{i−1}. Repeat Procedures (2-A) and (2-B) above. Notice that the posterior p(θ|y) can be determined from the prior p_0(θ) and the likelihood p(y|θ) based on Bayes' theorem (3). The likelihood function p(y|θ) is defined as a product of the conditional densities of the new observations y = {y_1, …, y_T} in (44) (Procedure (1-D)). The conditional densities of the new observations y given θ are obtained from the training Procedure (1-C).
(2-C)
Generate a sufficiently long chain and use the chain's statistics as an estimator of the true parameter θ (a code sketch of this scheme is given after the list). Take multiple runs of the chain started at different initial conditions θ_0, and examine whether all of these runs converge to the same distribution. The convergence of all the examples below has been validated using 10 randomly-chosen different initial conditions.
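A generic random-walk Metropolis sketch implementing Procedures (2-A)–(2-C); working with the log-posterior and drawing proposals via a Cholesky factor of C are implementation conveniences not specified in the text, and log_post stands for log p_0(θ) plus the log of the data-driven likelihood (44).

```python
import numpy as np

def metropolis(log_post, theta0, C, n_iter, rng=None):
    """Random-walk Metropolis chain targeting p(theta | y) via Bayes' theorem (3)."""
    rng = np.random.default_rng() if rng is None else rng
    theta = np.asarray(theta0, dtype=float)
    m = theta.size
    L = np.linalg.cholesky(C)                     # proposal ~ N(theta, C)
    chain = np.empty((n_iter, m))
    lp = log_post(theta)
    for i in range(n_iter):
        prop = theta + L @ rng.standard_normal(m)
        lp_prop = log_post(prop)
        if np.log(rng.random()) < lp_prop - lp:   # accept with prob min(1, ratio)
            theta, lp = prop, lp_prop
        chain[i] = theta
    return chain
```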
In the remainder of this section, we present numerical results of the Metropolis scheme using the proposed data-driven likelihood function on various instructive examples, where the likelihood function is either explicitly known, or can be approximated as in (6), or is intractable. In an example where the explicit likelihood is known, our goal is to show that the approach numerically converges to the true posterior estimate. In the second example, where the dimension of the data manifold is strictly less than the ambient dimension, we will show that the RKHS framework with the knowledge of the intrinsic geometry is superior. When the intrinsic geometrical information is unknown, the proposed data-driven likelihood function is competitive. In the third example with a low-dimensional dynamic and observation model of the form (5), we compare the proposed approach with standard methods, including the direct MCMC and nonintrusive spectral projection (both use the likelihood function of the form (6)). In our last example, we consider an observation model where the likelihood function is intractable and the cost of evaluating the observation model in (4) is numerically expensive.

5.1. Example I: Two-Dimensional Ornstein–Uhlenbeck Process

Consider an Ornstein–Uhlenbeck (OU) process as follows:
$$dX = -\frac{1}{2} X\, dt + \Sigma^{1/2}\, dW_t, \tag{50}$$
where X ≡ (X_1, X_2) denotes the state variable, W_t = (W_1, W_2) denotes a two-dimensional Wiener process, and Σ ∈ R^{2×2} is a diagonal matrix with diagonal components σ_{X_1}² and σ_{X_2}² to be estimated. At stationarity, the solution X = (X_1, X_2) of Equation (50) admits a Gaussian distribution, X ∼ N(0, Σ),
$$p(X\,|\,\Sigma) = \det(2\pi\Sigma)^{-\frac{1}{2}} \exp\left( -\frac{1}{2} X^{\top} \Sigma^{-1} X \right). \tag{51}$$
Our goal here is to estimate the posterior density and the posterior mean of the parameters (σ_{X_1}², σ_{X_2}²), given a finite number, T, of observations, X ≡ (X_1, …, X_T), generated with hidden true parameter values (σ_{X_1}², σ_{X_2}²) = (6.5, 6.3), where each X_t is an i.i.d. sample of (51) with Σ set to these true values. This example is included to verify the validity of the framework of our RKWHS representations for parameter estimation.
One can show that the likelihood function for this problem is the inverse matrix gamma distribution, Σ ∼ IMG(T/2 − 3/2, 2, Ψ), where Ψ = X X^⊤ ∈ R^{2×2}. If the prior is also an inverse matrix gamma distribution, Σ ∼ IMG(α_0, 2, 0), for some value of α_0, then the posterior density p(Σ|X) can be obtained by applying Bayes' theorem,
$$p(\Sigma\,|\,X) \sim \mathrm{IMG}\left( \alpha_0 + \frac{T}{2},\; 2,\; \Psi \right). \tag{52}$$
The posterior mean can thereafter be obtained as,
$$\Sigma_{PM} = \begin{pmatrix} (\sigma_{X_1}^2)_{PM} & 0 \\ 0 & (\sigma_{X_2}^2)_{PM} \end{pmatrix} = \frac{\Psi}{T + 2\alpha_0 - 3}. \tag{53}$$
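For reference, the formula (53) can be transcribed directly; the snippet below is our illustrative transcription (with the diagonal of the returned matrix giving the point estimates of the two variances), together with a commented usage example based on the setup reported below.

```python
import numpy as np

def ou_posterior_mean(X, alpha0=1.0):
    """Analytic posterior mean (53) for Example I; X has shape (2, T), whose columns
    are i.i.d. samples of the stationary Gaussian distribution (51)."""
    T = X.shape[1]
    Psi = X @ X.T                              # Psi = X X^T in R^{2x2}
    return Psi / (T + 2.0 * alpha0 - 3.0)

# Usage (illustrative, with the true variances 6.5 and 6.3 and T = 400 used below):
# rng = np.random.default_rng(0)
# X = rng.multivariate_normal([0.0, 0.0], np.diag([6.5, 6.3]), size=400).T
# print(ou_posterior_mean(X))                  # diagonal close to (6.5, 6.3) up to sampling error
```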
To compare with the analytic conditional density p(X|Σ) in (51), we trained three RKWHS representations of the conditional density function, p̂(X|Σ), using the same training dataset. For training, we used M = 64 well-sampled uniformly-distributed training parameters (shown in Figure 1), (σ_{X_1}², σ_{X_2}²), where σ_{X_j}² ∈ {5, 6, …, 12}, which are denoted by {Σ_j}_{j=1}^M. For each training parameter Σ_j, we generated N = 640,000 well-sampled normally-distributed observation data from the density in (51) with Σ = Σ_j. For the Hermite and cosine representations, we used 20 basis functions for each coordinate, and then we could construct K_1 = 400 basis functions of the two-dimensional observation, X, by taking the tensor product. For the VBDM representation, we first reduced the data from MN = 8² × 640,000 to B = B_1 × B_2 = 100 × 100 by the box-averaging method (Appendix A). Subsequently, we trained K_1 = 400 data-driven basis functions from the B box-averaged data using the VBDM algorithm [20].
Figure 2a displays the analytic conditional density (51), and Figure 2b–d display the pointwise errors of the conditional densities, ê(X|Σ) ≡ p(X|Σ) − p̂(X|Σ), for the training parameter (σ_{X_1}², σ_{X_2}²) = (5, 5). It can be seen from Figure 2b–d that all the pointwise errors are small compared to the analytic p(X|Σ) in Figure 2a, so that all representations of the conditional densities p̂(X|Σ) are in excellent agreement with the analytic p(X|Σ) (Figure 2a). This suggests that for the Hermite representation, the upper bound D in (33) in Theorem 1 is finite, so that the representation is valid in estimating the conditional density, as can be seen from Figure 2b. On the other hand, the upper bounds D in (33) for the cosine and the VBDM representations are always finite, as mentioned in Remark 2 and Proposition 2, respectively. We should also point out that for this example, the VBDM representation performed the worst, with errors of order 10⁻⁴, compared to the Hermite and cosine representations, whose errors were on the order of 10⁻⁶. This larger error in the VBDM representation arose because the data-driven basis functions were estimated by discrete eigenvectors ψ̄_k ∈ R^B, so additional errors [20] were introduced through this discrete approximation (especially in the high modes) on the box-averaged data, {ȳ_b}_{b=1,…,B}, B = 10,000. On the other hand, for the Hermite and cosine representations, the analytic basis functions are known, so that the errors could be approximated by (25) in Theorem 1.
We now estimate the posterior density (52) and mean (53) using the MCMC method (Procedures (2-A)–(2-C)). We generated T = 400 well-sampled, normally distributed data points as the observations from the true values of the variances, (σ_{X_1}^2, σ_{X_2}^2) = (6.5, 6.3). From the analytical formula (53), we obtained the posterior mean (σ_{X_1}^2, σ_{X_2}^2)^{PM} = (6.03, 5.84). Here, the posterior mean deviates noticeably from the true values since we only used T = 400 normally distributed data points as new observations; with more observations, the analytical posterior mean (53) approaches the true values. In our simulation, we set the parameter in the prior to α_0 = 1 and the proposal covariance to C = 0.01 I. For each chain, the initial condition (σ_{X_1,0}^2, σ_{X_2,0}^2) was drawn randomly from U([5, 12]^2), and 800,000 iterations were generated for the chain.
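The random-walk Metropolis step used in Procedures (2-A)–(2-C) is standard; the sketch below is a minimal version in which `log_post` stands in for the surrogate log-posterior (the log of the trained likelihood representation summed over the T observations, plus the log-prior), which we do not reproduce here. The Gaussian stand-in target is for illustration only.

```python
import numpy as np

def random_walk_metropolis(log_post, theta0, n_iter, prop_cov, rng):
    """Minimal random-walk Metropolis sampler with a Gaussian proposal."""
    d = len(theta0)
    chol = np.linalg.cholesky(prop_cov)
    chain = np.empty((n_iter, d))
    theta, lp = np.asarray(theta0, dtype=float), log_post(theta0)
    for i in range(n_iter):
        prop = theta + chol @ rng.standard_normal(d)
        lp_prop = log_post(prop)
        if np.log(rng.uniform()) < lp_prop - lp:    # accept/reject step
            theta, lp = prop, lp_prop
        chain[i] = theta
    return chain

# Stand-in target for illustration only; in the actual computation this would be the
# surrogate log-posterior built from the trained RKWHS likelihood and the prior.
def log_post(theta):
    if np.any(theta < 4.5) or np.any(theta > 12.5):
        return -np.inf
    return -0.5 * np.sum((theta - np.array([6.5, 6.3])) ** 2)

rng = np.random.default_rng(1)
theta0 = rng.uniform(5.0, 12.0, size=2)             # initial condition drawn from U[5,12]^2
chain = random_walk_metropolis(log_post, theta0, n_iter=10_000,
                               prop_cov=0.01 * np.eye(2), rng=rng)
```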
Figure 3b–d display the posterior densities of the chains obtained using the Hermite, cosine, and VBDM representations, respectively. The densities are plotted using a kernel density estimate on each chain, ignoring the first 10,000 iterations. For comparison, Figure 3a displays the analytic posterior density (52). It can be seen from Figure 3 that the posterior densities from the three representations are in excellent agreement with each other and with the analytic posterior density (52). Figure 3 also shows the comparison between the posterior mean (53) and the MCMC mean estimates. In our numerical results, the MCMC mean estimates from all representations and the analytic posterior mean (53) were identical within numerical accuracy. Therefore, for this 2D OU-process example, all representations were valid for estimating the posterior density and posterior mean of the parameter Σ.
Next, we investigate a system for which the intrinsic dimension d of the data manifold on which the observations lie is smaller than the dimension n of the ambient space.

5.2. Example II: Three-Dimensional System of SDEs on a Torus

Consider a system of SDEs on a torus defined in the intrinsic coordinates (θ, ϕ) ∈ [0, 2π)^2:
d\begin{pmatrix} \theta \\ \phi \end{pmatrix} = a(\theta, \phi)\, dt + b(\theta, \phi)\, d\begin{pmatrix} W_1 \\ W_2 \end{pmatrix}, \qquad (54)
where W 1 and W 2 are two independent Wiener processes, and the drift and diffusion coefficients are:
a(\theta,\phi) = \begin{pmatrix} \tfrac{1}{2} + \tfrac{1}{8}\cos\theta\cos 2\phi + \tfrac{1}{2}\cos(\theta + \pi/2) \\[4pt] 10 + \tfrac{1}{2}\cos(\theta + \phi/2) + \cos(\theta + \pi/2) \end{pmatrix}, \qquad b(\theta,\phi) = \begin{pmatrix} D + D\sin\theta & \tfrac{1}{4}\cos(\theta + \phi) \\[4pt] \tfrac{1}{4}\cos(\theta + \phi) & \tfrac{1}{40} + \tfrac{1}{40}\sin\phi\cos\theta \end{pmatrix}.
The initial condition is (θ, ϕ) = (π, π). Here, D is a parameter to be estimated. This example exhibits non-gradient drift, anisotropic diffusion, and multiple time scales. Both the observations and the training dataset were generated by numerically solving the SDE in (54) for the appropriate values of the parameter D with a time step Δt = 0.1 and then mapping the data into the ambient space, R^3, via the standard embedding of the torus given by:
\mathbf{x} \equiv (x, y, z) = \big( (2 + \sin\theta)\cos\phi,\; (2 + \sin\theta)\sin\phi,\; \cos\theta \big). \qquad (55)
Here, x ≡ (x, y, z) are the observations. This system on a torus satisfies d < n, where d = 2 is the intrinsic dimension of x and n = 3 is the dimension of the ambient space R^n. Our goal is to estimate the posterior density and the posterior mean of the parameter D given discrete-time observations of x, which are solutions of (54) for a specific parameter value D.
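To make the data-generation step concrete, here is a minimal sketch of an Euler–Maruyama discretization of (54) in the intrinsic coordinates followed by the embedding (55). The drift and diffusion coefficients are passed in as callables (we do not restate them here), and the step size and initial condition follow the text; this is an illustration of the procedure, not the code used in the paper.

```python
import numpy as np

def embed_torus(theta, phi):
    """Standard embedding (55) of the torus into the ambient space R^3."""
    return np.stack([(2.0 + np.sin(theta)) * np.cos(phi),
                     (2.0 + np.sin(theta)) * np.sin(phi),
                     np.cos(theta)], axis=-1)

def simulate_torus_sde(drift, diffusion, D, n_steps, dt=0.1, rng=None):
    """Euler-Maruyama discretization of (54) in the intrinsic coordinates (theta, phi)."""
    rng = rng if rng is not None else np.random.default_rng()
    z = np.array([np.pi, np.pi])                   # initial condition (theta, phi) = (pi, pi)
    path = np.empty((n_steps, 2))
    for i in range(n_steps):
        dW = np.sqrt(dt) * rng.standard_normal(2)
        z = z + drift(z[0], z[1], D) * dt + diffusion(z[0], z[1], D) @ dW
        z = np.mod(z, 2.0 * np.pi)                 # keep the angles in [0, 2*pi)
        path[i] = z
    return embed_torus(path[:, 0], path[:, 1])     # observations x in R^3

# Usage (hypothetical callables a_coeff and b_coeff implementing a and b in (54)):
# x_obs = simulate_torus_sde(a_coeff, b_coeff, D=0.9, n_steps=54_000)
```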
For training, we used M = 8 well-sampled, uniformly distributed training parameters, {D_j = j/4}_{j=1}^{8}. For each training parameter D_j, we generated N = 54,000 observations of x by solving the SDE in (54) with parameter D_j. For the Hermite and cosine representations, we constructed 10 basis functions for each of the x, y, z coordinates in Euclidean space. After taking the tensor product of these basis functions, we obtained K_1 = 1000 basis functions on the ambient space R^3. For the VBDM representation, we first computed B = B_1 × B_2 × B_3 = 30^3 box-averaged data points with the data reduction method in Appendix A. However, we found that some of the B box-averaged data points were far away from the torus. After discarding these points, we kept B̃ = 26,020 box-averaged data points that were close enough to the torus for training. Then, we trained K_1 = 1000 data-driven basis functions on the manifold N from these 26,020 box-averaged data points using the VBDM algorithm.
Unlike the previous example, the derivation of an analytical expression for the likelihood function p(x | D_j) is not trivial. This difficulty is due to the fact that the diffusion coefficient, b(θ, ϕ), is state dependent. While direct MCMC with an approximate likelihood function constructed using Bayesian imputation [5] can be done in principle, we did not pursue this computation since generating the path {x_i}, i = 1, …, T, at each sampling step was too costly in our setup below (where T = 10,000 and we would generate a chain of length 400,000 samples). For diagnostic comparisons, we constructed another representation of p̂(x | D_j), named the intrinsic Fourier representation, which can be regarded as an accurate approximation of p(x | D_j), as it uses basis functions defined on the intrinsic coordinates (θ, ϕ) instead of on x ∈ R^3. See Appendix B for the construction and the convergence of the intrinsic Fourier representation in detail. We should point out that this intrinsic representation is not available in general, since one may not know the embedding of the data manifold.
Figure 4 displays the comparison of the density estimates. It can be observed from Figure 4 that the VBDM representation is in good agreement with the intrinsic Fourier representation, whereas the Hermite and cosine representations of p̂(x | D_j) deviate significantly from it. The reason, in short, is that if the density p(θ, ϕ | D) in the (θ, ϕ) coordinates is in H([0, 2π)^2) ⊂ L^2([0, 2π)^2), then the corresponding VBDM representation with respect to dV(x) is in H_{q̄^{-1}}(N); however, the Hermite and cosine representations with respect to dx, x ∈ R^3, are not in H_{q̄^{-1}}(R^3). A more detailed explanation of this assertion is presented in Appendix B.
We now compare the MCMC estimates with the true value, D = 0.9, from T = 10,000 observations. For this simulation, we set the prior to be uniformly distributed and empirically chose C = 0.01 for the proposal. Figure 5 displays the posterior densities of the chains for all representations (each density estimate was constructed using a KDE on a chain of length 400,000). Also displayed is the comparison between the true value of D and the MCMC mean estimates for all representations. The mean estimate by the intrinsic Fourier representation nearly overlaps with the true value D = 0.9, as shown in Figure 5, and the mean estimate by the VBDM representation is closer to the true value than the estimates by the Hermite and cosine representations. Moreover, it can be seen from Figure 5 that the posterior from the VBDM representation is close to the posterior from the intrinsic Fourier representation, whereas the posterior densities from the Hermite and cosine representations are not. We should point out that this result is encouraging considering that the training parameter domain is rather wide, D_j ∈ [1/4, 2]. This result suggests that when the intrinsic dimension is less than the ambient space dimension, d < n, the VBDM representation (which does not require knowledge of the embedding function in (55)) with data-driven basis functions in L^2(N, q̄) is superior compared to the representations with analytic basis functions defined on the ambient coordinates R^3.

5.3. Example III: Five-Dimensional Lorenz-96 Model

Consider the Lorenz-96 model [28]:
\frac{dx_j}{dt} = x_{j-1}\,(x_{j+1} - x_{j-2}) - x_j + F, \qquad j = 1, \ldots, J, \qquad (56)
with periodic boundary, x_{j+J} = x_j. For the example in this section, we set J = 5. The initial condition was x_j(0) = sin(2πj/5). Our goal here was to estimate the posterior density and posterior mean of the hidden parameter F given a time series of noisy observations y = (y_1, y_2, y_3, y_4, y_5), where:
y_j(t_m) = x_j(t_m) + \epsilon_{m,j}, \qquad \epsilon_{m,j} \sim \mathcal{N}(0, \sigma^2), \qquad m = 1, \ldots, T,
with noise variance σ^2 = 0.01. Here, x_j(t_m) denotes the approximate solution (with the Runge–Kutta method) with a specific parameter value F at discrete times t_m = m s Δt, where Δt = 0.05 is the integration time step and s is the observation interval. Since the embedding function of the observation data is unknown, we do not have a parametric analog to the intrinsic Fourier representation as in the previous example.
In this low-dimensional setting, we can compare the proposed method with basic techniques, including the direct MCMC and the Non-Intrusive Spectral Projection (NISP) method [13]. By direct MCMC, we refer to employing the random walk Metropolis scheme directly on the following likelihood function,
p(\mathbf{y}\,|\,F) \propto \exp\!\left( -\sum_{m=1}^{T} \sum_{j=1}^{5} \frac{\big( y_j(t_m) - x_j(t_m; F) \big)^2}{2\sigma^2} \right), \qquad (57)
where σ 2 is the noise variance and x j t m ; F is the solution of the initial value problem in Equation (56) with the parameter F at time t m . Note that evaluating x j t m ; F is time consuming if the model time T s Δ t is long or the MCMC chain has many iterations. In our implementation, we generated the chain for 4000 iterations. This amounts to 4000 sequential evaluations of the likelihood function in (57), where each evaluation requires integrating the model in (56) with the proposal parameter value F * until model unit time T s Δ t . We used a uniform prior distribution and C = 0.1 for the proposal.
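For concreteness, a minimal sketch of what one likelihood evaluation in (57) entails (our own implementation, assuming a classical fourth-order Runge–Kutta step, which the text does not specify beyond "Runge–Kutta"):

```python
import numpy as np

def l96_rhs(x, F):
    """Right-hand side of the Lorenz-96 model (56) with periodic indexing."""
    return np.roll(x, 1) * (np.roll(x, -1) - np.roll(x, 2)) - x + F

def rk4_step(x, F, dt):
    k1 = l96_rhs(x, F)
    k2 = l96_rhs(x + 0.5 * dt * k1, F)
    k3 = l96_rhs(x + 0.5 * dt * k2, F)
    k4 = l96_rhs(x + dt * k3, F)
    return x + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

def log_likelihood(F, y_obs, x0, s, dt=0.05, sigma2=0.01):
    """log p(y|F) in (57): integrate (56) with parameter F and read off x_j(t_m), t_m = m s dt."""
    x, loglik = x0.copy(), 0.0
    for m in range(y_obs.shape[0]):        # y_obs has shape (T, J)
        for _ in range(s):                 # advance s integration steps between observations
            x = rk4_step(x, F, dt)
        loglik -= np.sum((y_obs[m] - x) ** 2) / (2.0 * sigma2)
    return loglik

x0 = np.sin(2.0 * np.pi * np.arange(1, 6) / 5.0)   # initial condition x_j(0) = sin(2*pi*j/5)
```

Each proposal in the direct MCMC chain calls this likelihood with a new F, which is exactly the repeated model integration that the surrogate representations avoid.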
For the NISP method [13], we used the same Gaussian likelihood function (57) with an approximation of x_j. In particular, we approximated the solutions x_j by x̃_j(t, F), for j = 1, …, 5, of the form:
\tilde{x}_j(t, F) = \sum_{k=1}^{K} \hat{x}_{j,k}(t)\, \varphi_k(F), \qquad (58)
where φ_k(F) are chosen to be orthonormal cosine basis functions, x̂_{j,k}(t) are the expansion coefficients, and K is the number of basis functions. Subsequently, we prescribed a fixed set of nodes, {F_j = 7.55 + 0.1 j}_{j=1}^{8}, to be used for training x̂_{j,k}(t). Practically, this training procedure only requires eight model evaluations, which can be done in parallel, where each evaluation involves integrating the model with the specified F_j until model unit time T s Δt. The number of basis functions is K = 8. After specifying the coefficients x̂_{j,k}(t) such that x̃_j(t, F) = x_j(t; F) at each of the training nodes, we obtain an approximation x̃_j(t, F) of the solutions for all parameters F. Using these approximations, x̃_j(t, F), in place of x_j(t_m; F) in (57), we can generate the Markov chain using the Metropolis scheme. Again, we used a uniform prior distribution and C = 0.1 for the proposal. In our MCMC implementation, we generated the chain for 40,000 iterations; this involved only evaluating (58) instead of integrating the true dynamical model in (56) at the proposal parameter value F*.
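Under the setup above (K = 8 cosine basis functions and 8 nodes), fixing the coefficients x̂_{j,k}(t) amounts to an 8 × 8 linear solve at each stored time, after which (58) can be evaluated at any F. A sketch follows; the interval on which the cosine basis is defined is our assumption, as it is not restated in the text.

```python
import numpy as np

def cosine_basis(F, k, a=7.6, b=8.5):
    """Orthonormal cosine basis on [a, b]; the interval is our assumption."""
    F, L = np.asarray(F, dtype=float), b - a
    if k == 0:
        return np.ones_like(F) / np.sqrt(L)
    return np.sqrt(2.0 / L) * np.cos(k * np.pi * (F - a) / L)

nodes = 7.55 + 0.1 * np.arange(1, 9)         # training nodes F_j = 7.55 + 0.1 j, j = 1,...,8
K = len(nodes)
Phi = np.stack([cosine_basis(nodes, k) for k in range(K)], axis=1)   # 8 x 8 collocation matrix

def fit_nisp_coefficients(X_nodes):
    """X_nodes[i] = model solution at node F_i on a time grid, shape (8, n_times, 5).
    Returns the coefficients x_hat_{j,k}(t) with shape (K, n_times, 5)."""
    return np.linalg.solve(Phi, X_nodes.reshape(K, -1)).reshape(K, *X_nodes.shape[1:])

def nisp_surrogate(F, x_hat):
    """Evaluate the expansion (58) at an arbitrary parameter value F."""
    weights = np.array([float(cosine_basis(F, k)) for k in range(K)])
    return np.tensordot(weights, x_hat, axes=(0, 0))    # approximate trajectory, (n_times, 5)
```

With these coefficients in hand, each MCMC iteration only evaluates the surrogate at the proposed F instead of integrating (56).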
For the RKWHS representations, we also used M = 8 uniformly distributed training parameters, {F_j = 7.55 + 0.1 j}_{j=1}^{8}. As in the NISP, this training procedure required only eight model integrations with parameter value F_j until the model unit time T s Δt, resulting in a total of MN = 8 T s training data. In this example, we did not reduce the data using the box-averaging method in Appendix A. In fact, for some cases, such as s = 1 and T = 50, the total amount of training data was only MN = 400, which was too few for estimating the eigenfunctions. Of course, one could consider more training parameters to increase this training dataset, but for a fair comparison with the NISP, we chose to simply add 10 i.i.d. Gaussian noise realizations to each data point, resulting in a total of MN = 4000 training data. This configuration (with a small dataset) is a tough setting for the VBDM, since the nonparametric method is advantageous in the limit of a large dataset. When 8 T s is sufficiently large, we do not need to augment the dataset by adding multiple i.i.d. Gaussian noise realizations.
For the Hermite and cosine representations, we constructed five Hermite and cosine basis functions for each coordinate, yielding a total of K_1 = 5^5 = 3125 basis functions in R^5. For the VBDM representation, we directly applied the VBDM algorithm to train K_1 = 3125 data-driven basis functions on the manifold N from the MN = 4000 training data. From the VBDM algorithm, the estimated intrinsic dimension was d ≈ 2, which is smaller than the dimension of the ambient space, n = 5. We then applied a uniform prior distribution and C = 0.01 for the proposal. As in the NISP, we generated the chain for 40,000 iterations, which amounted to evaluating (44) instead of integrating the true dynamical model in (56) at each iteration.
We now compare the posterior densities and mean estimates for the case of s = 1 and T = 50 noisy observations y(t_m) corresponding to the true parameter value F = 8. Figure 6 displays the posterior densities of the chains and the mean estimates for the direct MCMC method, the NISP method, and all representations. It can be seen from Figure 6 that the mean estimate by the VBDM representation was in good agreement with the true value of F. In contrast, the mean estimates by the Hermite and cosine representations deviated substantially from the true value. Based on this numerical result, where the estimated intrinsic dimension d ≈ 2 of the observations is lower than the ambient space dimension n = 5, the data-driven VBDM representation was superior compared to the Hermite and cosine representations. It can be further observed that the direct MCMC, the NISP, and the VBDM representation all provide good mean estimates of the true value. However, notice that we only ran the model M = 8 times for the NISP method and the VBDM representation, whereas we ran the model 4000 times for the direct MCMC method.
In real applications where the observations are not simulated by the model, we expect the observation configuration to be pre-determined. Therefore, it is important to have an algorithm that is robust under various observation configurations. In our next numerical experiment, we checked this robustness by comparing the direct MCMC method, the NISP method, and the VBDM representation for different choices of s and T (Figure 7a). It can be observed from Figure 7a that both the direct MCMC method and the VBDM representation provide reasonably accurate mean estimates for all cases of s and T. However, again notice that we need to run the model many more times for the direct MCMC method than for the VBDM representation. It can be further observed that the NISP method can only provide a good mean estimate for observation times up to T s Δt = 200 Δt when the eight uniform nodes {F_j = 7.55 + 0.1 j}_{j=1}^{8} are used. The reason is that the approximate solution produced by the NISP method is only accurate for observation times up to 200 Δt (see the green and red curves in Figure 7b). This result suggests that our surrogate modeling approach using the VBDM representation can provide accurate and robust mean estimates under various observation configurations.

5.4. Example IV: The 40-Dimensional Lorenz-96 Model

In this section, we consider estimating the parameter F in the Lorenz-96 model in (56), but for a J = 40 dimensional system. We now consider observing the autocorrelation functions of several energetic Fourier modes of the phase-space variables. In particular, let {x̂_k(t_m; F)}_{k=-J/2+1,…,J/2} be the discrete Fourier modes of {x_j(t_m; F)}_{j=1,…,J}, where t_m = m Δt with Δt = 0.05. Let the observation function be defined as in (4) with the four-dimensional {y_m(F)}_{m=0,…,T}, whose components are the autocorrelation functions of the Fourier modes k_j,
y_{m,j}(F) = \mathbb{E}\big[ \hat{x}_{k_j}(t_m; F)\, \hat{x}_{k_j}(t_0; F) \big], \qquad m = 0, \ldots, T, \quad j = 1, \ldots, 4, \qquad (59)
of the energetic Fourier modes, k_j ∈ {7, 8, 9, 14}. See [29] for a detailed discussion of the statistical equilibrium behavior of this model for various values of F. Such observations arise naturally, since some of the model parameters can be identified from non-equilibrium statistical information via linear response statistics [30,31]. In our numerics, we approximate the correlation function by averaging over a long trajectory,
\mathbb{E}\big[ \hat{x}_{k_j}(t_m; F)\, \hat{x}_{k_j}(t_0; F) \big] \approx \frac{1}{L} \sum_{\ell=1}^{L} \hat{x}_{k_j}(t_{m+\ell}; F)\, \hat{x}_{k_j}(t_{\ell}; F),
with L = 10^6. Here, each of these Fourier modes is assumed to have zero empirical mean. We will consider observing the autocorrelation function up to time index T = 50 (corresponding to 2.5 model time units).
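A sketch of how the training observables in (59) can be assembled from a single long run (our own minimal implementation; the FFT convention, the complex conjugate in the lagged product, and taking the real part are our assumptions, chosen to be consistent with the zero-mean statement above):

```python
import numpy as np

def mode_autocorrelations(x_traj, modes=(7, 8, 9, 14), T=50):
    """Empirical autocorrelations (59) of selected Fourier modes from a trajectory of the
    40-dimensional state; x_traj has shape (L + T, 40) and the lag average runs over L terms."""
    xhat = np.fft.fft(x_traj, axis=1)[:, list(modes)]   # selected Fourier modes, shape (L+T, 4)
    xhat -= xhat.mean(axis=0)                           # enforce zero empirical mean
    L = x_traj.shape[0] - T
    y = np.empty((T + 1, len(modes)))
    for m in range(T + 1):
        y[m] = np.mean(xhat[m:m + L] * np.conj(xhat[:L]), axis=0).real
    return y       # y[m, j] approximates E[ x_hat_{k_j}(t_m) x_hat_{k_j}(t_0) ]
```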
With this setup, the corresponding likelihood function p(y_m | F) is not easily approximated (since it is not of the form (6)), and it is computationally demanding to generate y_{m,j}(F), since each evaluation requires integrating the 40-dimensional Lorenz-96 model up to time index L = 10^6. This expensive computational cost makes either direct MCMC or approximate Bayesian computation infeasible. We should also point out that a long trajectory is needed in the evaluation of (59), making this problem intractable with the NISP even if a parametric likelihood function were available; this is because the trajectory approximated by the polynomial chaos expansion in the NISP is only accurate for short times, as shown in the previous example. We will consider constructing the likelihood function from a wide range of training parameter values, F_i = 6 + 0.1(i − 1), i = 1, …, M = 31. This parameter domain is rather wide and includes the weakly chaotic regime (F = 6) and the strongly chaotic regime (F = 8). See [32] for a complete list of chaotic measures in these regimes, including the largest Lyapunov exponent and the Kolmogorov–Sinai entropy.
In this setup, we had a total of MN = M(T + 1) = 31 × 51 = 1581 training data points y_m(F_i) ∈ R^4. We will consider an RKHS representation with K_1 = 500 basis functions. We will demonstrate the performance on 30 sets of observations y_m(F_s), where in each case F_s does not belong to the training parameter set, namely F_s = 6.05 + 0.1(s − 1), s = 1, …, 30. In each simulation, the initial condition of the MCMC chain is drawn randomly, F ∼ U(6.5, 8.5); the prior is uniform; and C = 0.01 for the proposal. In Figure 8, we show the mean estimates and error bars (based on one standard deviation) computed from averaging the MCMC chain of length 40,000 in each case. Notice the robustness of these estimates over a wide range of true parameter values F, using a likelihood function constructed from a single set of training parameter values on [6, 9].

6. Conclusions

We have developed a parameter estimation framework in which MCMC is employed with a nonparametric likelihood function. Our approach approximates the likelihood function using the kernel embedding of conditional distributions, formulated in an RKWHS. By analyzing the error estimate in Theorem 1, we have verified the validity of our RKWHS representation of the conditional density as long as p(y | θ_j) lies in the space H_{q^{-1}}(N) induced by the basis in L^2(N, q) and Var_{Y|θ_j}[ψ_k(Y)] is finite. Furthermore, the analysis suggests that if the weight q is chosen to be the sampling density of the data, then Var_{Y|θ_j}[ψ_k(Y)] is always finite. This justifies the use of Variable Bandwidth Diffusion Maps (VBDM) for estimating the data-driven basis functions of the Hilbert space weighted by the sampling density on the data manifold.
We have demonstrated the proposed approach with four numerical examples. In the first example, where the dimension of the data manifold equals the dimension of the ambient space, d = n, the RKHS representation with the VBDM basis yielded parameter estimates as accurate as those obtained with the analytic basis representations. However, in the examples where the dimension of the data manifold was strictly less than the dimension of the ambient space, d < n, only the VBDM representation provided accurate estimates of the true parameter value. We also found that the VBDM representation produced mean estimates that were robustly accurate (with accuracies comparable to the direct MCMC) for various observation configurations in which the NISP was not accurate. This numerical comparison was based on using only eight model evaluations, which can be done in parallel for both the VBDM and the NISP, whereas the direct MCMC involved 4000 sequential model evaluations. Finally, we demonstrated robust, accurate parameter estimation on an example where the analytic likelihood function was not available and would be computationally demanding to evaluate even if it were available. Most importantly, this result was based on training on a wide parameter domain that included different chaotic dynamical behaviors.
From our numerical experiments, we conclude that the proposed nonparametric representation is advantageous in any of the following configurations: (1) when the parametric likelihood function is not known, as in Example IV; (2) when the observation time is long (as in Example II, or for large s T in Examples III and IV). Ultimately, the only real advantage of this method (as a surrogate model) arises when direct MCMC or ABC, which require sequential model evaluations, are computationally infeasible.
While the theoretical and numerical results are encouraging as a proof of concept for using the VBDM representation in many other parameter estimation applications, there are still practical limitations that need to be overcome. As in other surrogate modeling approaches, one needs to know the feasible domain for the parameters. Even when the parameter domain is given and wide, it is practically infeasible to generate a training dataset by evaluating the model on specified training grid points on this domain when the dimension of the parameter space is large (e.g., of order 10), even if a Smolyak sparse grid is used. One possible way to simultaneously overcome these two issues is to use "crude" methods, such as ensemble Kalman filtering or smoothing, to obtain the training parameters. We refer to such a method as "crude" since parameter estimation with ensemble Kalman filtering is sensitive to the initial conditions, especially when the persistent model is used as the dynamical model for the parameters [23]. However, with such crude methods, we can at least obtain a set of parameters that reflect the observational data, instead of specifying training parameters uniformly or in a random fashion, which can lead to unphysical training parameters. Another issue that arises in the VBDM representation is the expensive computational cost when the amount of data, MN, is large. When the dimension of the observations is low (as in the examples in this paper), the data reduction technique described in Appendix A is sufficient. For higher dimensional problems, a more sophisticated data reduction is needed. Alternatively, one can explore representations using other orthonormal data-driven bases, such as the QR-factorized basis functions, as a less expensive alternative to the eigenbasis [27].

Author Contributions

Both authors contributed equally. Conceptualization, J.H.; methodology, J.H. and S.W.J.; software, S.W.J.; validation, S.W.J. and J.H.; formal analysis, S.W.J. and J.H.; investigation, S.W.J. and J.H.; resources, J.H.; data curation, S.W.J.; writing, original draft preparation, S.W.J. and J.H.; writing, review and editing, S.W.J. and J.H.; visualization, S.W.J.; supervision, J.H.; project administration, J.H.; funding acquisition, J.H.

Funding

This research was funded by the Office of Naval Research Grant Number N00014-16-1-2888. J.H. would also like to acknowledge support from the NSF Grant DMS-1619661.

Conflicts of Interest

The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; nor in the decision to publish the results.

Abbreviations

The following abbreviations are used in this manuscript:
VBDM    Variable Bandwidth Diffusion Maps
RKHS    Reproducing Kernel Hilbert Space
RKWHS   Reproducing Kernel Weighted Hilbert Space
MCMC    Markov Chain Monte Carlo
ABC     Approximate Bayesian Computation
NISP    Non-Intrusive Spectral Projection

Appendix A. Data Reduction

When MN is very large, the VBDM algorithm becomes numerically expensive, since it involves solving an eigenvalue problem for a matrix of size MN × MN. Notice that the number of training parameters, M, grows exponentially as a function of the dimension of the parameter, m, if well-sampled, uniformly distributed training parameters are used. To overcome this large-training-data problem, we employ an empirical data reduction method that reduces the original MN training data points {y_{i,j}}_{i=1,…,N}^{j=1,…,M} to a much smaller number, B ≪ MN, of training data points while preserving the sampling density q̄(y) in (37). Subsequently, we apply the VBDM algorithm to these reduced training data points. It is worthwhile to mention that this data reduction method is numerically practical for low-dimensional datasets, although in the following we introduce it for a general n-dimensional dataset.
The basic idea of our method is to first cluster the training dataset {y_{i,j}}_{i=1,…,N}^{j=1,…,M} into B boxes and then take the average of the data points in each box as a reduced training data point. First, we cluster the training data y_{i,j}, based on the ascending order of the first coordinate, y_{i,j}^1, into B_1 groups such that each group has the same number, MN/B_1, of data points. After this first clustering, we obtain B_1 groups, each denoted by G^1_{k_1} for k_1 = 1, …, B_1; the superscript 1 denotes the first clustering and the subscript k_1 denotes the k_1-th group. Second, for each group G^1_{k_1}, we cluster the training data y_{i,j} inside G^1_{k_1}, based on the ascending order of the second coordinate, y_{i,j}^2, into B_2 groups such that each group has the same number, MN/(B_1 B_2), of data points. After this second clustering, we obtain a total of B_1 B_2 groups, each denoted by G^2_{k_1 k_2} for k_1 = 1, …, B_1 and k_2 = 1, …, B_2. We repeat this clustering n times, where n is the dimension of the ambient space of the observation y. After n clusterings, we obtain B ≡ ∏_{s=1}^{n} B_s groups, each denoted by G^n_{k_1 k_2 ⋯ k_n} with k_s = 1, …, B_s for all s = 1, …, n. Each group is a box (see Figure A1 for an example). After taking the average of the data points in each box G^n_{k_1 k_2 ⋯ k_n}, we obtain B reduced training data points. In the remainder of this paper, we denote these B reduced training data points by {ȳ_b}_{b=1,…,B} and refer to them as the box-averaged data points. Intuitively, this algorithm partitions the domain into hyperrectangles such that Pr(ȳ ∈ G^n_{k_1⋯k_n}) ≈ 1/B. Note that the idea of our data reduction method is analogous to that of multivariate k-nearest-neighbor density estimates [33,34]; the error estimates can be found in Refs. [33,34].
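A minimal recursive sketch of this clustering (our own illustration, assuming for simplicity that MN is nearly divisible by the product of the B_s; `np.array_split` handles the general case with almost equal group sizes):

```python
import numpy as np

def box_average(Y, boxes):
    """Equal-count clustering of Y (shape MN x n) along successive coordinates, followed by
    averaging within each box; `boxes` = (B_1, ..., B_n)."""
    if len(boxes) == 0:
        return Y.mean(axis=0, keepdims=True)        # one box-averaged point per box
    coord = Y.shape[1] - len(boxes)                 # coordinate used at this clustering level
    order = np.argsort(Y[:, coord])
    groups = np.array_split(Y[order], boxes[0])     # B_s groups of (nearly) equal size
    return np.vstack([box_average(G, boxes[1:]) for G in groups])

# e.g., reduce 640,000 two-dimensional samples to B = 100 x 100 = 10,000 box-averaged points
Y = np.random.default_rng(0).standard_normal((640_000, 2))
Y_bar = box_average(Y, (100, 100))
print(Y_bar.shape)    # (10000, 2)
```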
To examine whether the distribution of these box-averaged data points is close to the sampling density q̄(y) of the original dataset, we apply our reduction method to several numerical examples. Figure A1a shows the reduction result for a small number (64) of uniformly distributed data points in [0, 1] × [0, 1]. Here, B_1 = 4 and B_2 = 4, so there are B = 16 boxes in total, and inside each box there are 4 uniformly distributed data points (blue circles). It can be seen that the box-averaged data points (red circles) are far away from the well-sampled uniform data points (cyan crosses). However, when the number of uniformly distributed data points (blue circles) increases to 6400, the box-averaged data points (red circles) are very close to the well-sampled uniform data points (cyan crosses), as shown in Figure A1b. This suggests that the box-averaged data points nearly admit the uniform distribution when there is a large number of original uniform data points. Figure A1c,d show the comparison of the kernel density estimates applied to the box-averaged data for different B, for the standard normal distribution and the distribution proportional to exp[−(X_1^2 + X_1^3 + X_1^4)], respectively. It can be seen that the reduced box-averaged data points nearly preserve the distribution of the original large dataset, N = 640,000.
Figure A1. (Color online) Data reduction for (a) a small number (64) of uniformly distributed data points, and (b) a large number (6400) of uniformly distributed data points. The blue circles correspond to uniformly distributed data, the 16 cyan crosses correspond to well-sampled uniformly distributed data, and the 16 red circles correspond to box-averaged data. Boxes are partitioned by horizontal and vertical black lines; the vertical black lines correspond to the first clustering and the horizontal lines to the second clustering. Panels (c) and (d) display the comparison of kernel density estimates on the box-averaged data for different numbers B, for (c) the standard normal distribution and (d) the distribution proportional to exp[−(X_1^2 + X_1^3 + X_1^4)], respectively. For comparison, the analytic probability density of each distribution is also plotted. The total number of points is 640,000. It can be seen that the reduced box-averaged data points nearly preserve the distribution of the original dataset.
When MN is very large, the VBDM algorithm for the construction of the data-driven basis functions [Procedure (1-B)] can be outlined as follows. We first use our data reduction method to obtain B ≪ MN box-averaged data points {ȳ_b}_{b=1,…,B} ⊂ N ⊂ R^n with sampling density q̄(y) ≈ (1/M) Σ_{j=1}^{M} p(y | θ_j) in (37). The sampling density q̄ is estimated at the box-averaged data points {ȳ_b}_{b=1,…,B} by a kernel density estimation method. Implementing the VBDM algorithm, we obtain orthonormal eigenvectors ψ̄_k ∈ R^B, which are discrete estimates of the eigenfunctions ψ̄_k(y) ∈ L^2(N, q̄); the b-th component of the eigenvector ψ̄_k is a discrete estimate of the eigenfunction ψ̄_k(ȳ_b), evaluated at the box-averaged data point ȳ_b. Due to the dramatic reduction of the training data, the computation of these eigenvectors ψ̄_k ∈ R^B is much cheaper than the computation of the eigenvectors ψ̄_k ∈ R^{MN} using the original training dataset {y_{i,j}}_{i=1,…,N}^{j=1,…,M}. We then obtain a discrete representation (10) of the conditional density at the box-averaged data points ȳ_b, namely p̂(ȳ_b | θ).

Appendix B. Additional Results on Example II

In this section, we discuss the intrinsic Fourier representation constructed for the numerical comparisons in Example II and provide a more detailed discussion of the numerical results.
We first discuss the construction of the intrinsic Fourier representation of the true conditional density, p(x | D), defined with respect to the volume form inherited by N from the ambient space R^n, for the system on the torus (54) in Example II. Using the embedding (55) of (θ, ϕ) into x ≡ (x, y, z), we obtain the following equality,
1 = \int_{N} p(\mathbf{x}\,|\,D)\, dV(\mathbf{x}) = \int_{[0,2\pi)^2} p\big( \mathbf{x}(\theta,\phi)\,|\,D \big) \left\| \frac{\partial \mathbf{x}}{\partial \theta} \times \frac{\partial \mathbf{x}}{\partial \phi} \right\| d\theta\, d\phi \equiv \int_{[0,2\pi)^2} p_{\mathrm{IC}}(\theta, \phi\,|\,D)\, d\theta\, d\phi, \qquad (A1)
where dV(x) = ‖∂x/∂θ × ∂x/∂ϕ‖ dθ dϕ is the volume form and p_IC denotes the true conditional density as a function of the intrinsic coordinates, (θ, ϕ). Assuming that p_IC(θ, ϕ | D) ∈ H([0, 2π)^2) ⊂ L^2([0, 2π)^2) and using the relation in (A1), we can construct the intrinsic Fourier representation as follows,
\hat{p}(\mathbf{x}\,|\,D) = \frac{ \hat{p}_{\mathrm{IC}}(\theta, \phi\,|\,D) }{ \left\| \partial\mathbf{x}/\partial\theta \times \partial\mathbf{x}/\partial\phi \right\| }, \qquad (A2)
where p̂_IC(θ, ϕ | D) is an RKWHS representation (10) of the conditional density p_IC(θ, ϕ | D) with a set of orthonormal Fourier basis functions ψ_k(θ, ϕ) ∈ L^2([0, 2π)^2). Here, the ψ_k(θ, ϕ) are formed by the tensor product of two sets of orthonormal Fourier basis functions, {1, √2 cos(mθ), √2 sin(mθ)} and {1, √2 cos(mϕ), √2 sin(mϕ)} for m ∈ N^+. Note that for the intrinsic Fourier representation, we need to know the embedding (55) and the data (θ, ϕ) in the intrinsic coordinates for training, which are available for this example. In contrast, for the Hermite, cosine, and VBDM representations, we only need the observation data x for training.
The convergence of p̂(x | D) to the true density can be explained as follows. For the system (54) in the intrinsic coordinates (θ, ϕ), where p_IC(θ, ϕ | D) ∈ H([0, 2π)^2) for all parameters D, the statistics Var_{θ,ϕ|D}[ψ_k(θ, ϕ)] are bounded for all D and all k ∈ N^+, by the compactness of [0, 2π)^2 and the uniform boundedness of ψ_k(θ, ϕ) for all k. According to Theorem 1, we obtain the convergence of the representation p̂_IC(θ, ϕ | D). Then, by the smoothness of ‖∂x/∂θ × ∂x/∂ϕ‖ on the torus, we obtain the convergence of the intrinsic Fourier representation p̂(x | D) in (A2).
Next, we give an intuitive explanation of why, in the regime d < n, the VBDM representation can provide a good approximation whereas the Hermite and cosine representations cannot. Essentially, the VBDM representation uses basis functions of a weighted Hilbert space of functions defined with respect to a volume form Ṽ that is conformally equivalent to the volume form V inherited by the data manifold N from the ambient space, R^n. That is, the weighted Hilbert space, L^2(N, q̄^{-1}), is defined as,
L^2(N, \bar{q}^{-1}) = \left\{ f : N \to \mathbb{R} \;:\; \int_{N} |f(\mathbf{x})|^2\, d\tilde{V}(\mathbf{x}) < \infty \right\},
where dṼ(x) = q̄(x)^{-1} dV(x) denotes the volume form that is conformally rescaled by the sampling density q̄. We should point out that a key point of the diffusion maps algorithm [19] is to introduce an appropriate normalization to avoid a bias in the geometry induced by the sampling density q̄ when the data are not sampled according to the Riemannian metric inherited by N from the ambient space R^n. Furthermore, the orthonormal basis functions of the Hilbert space L^2(N, q̄^{-1}) are the eigenfunctions of the adjoint (with respect to L^2(N)) of the operator L = ∇ log(q̄) · ∇ + Δ that is constructed by the VBDM algorithm. Incidentally, the adjoint operator L* is the Fokker–Planck operator of a gradient system forced by stochastic noise. The point is that this adjoint operator acts on density functions in the weighted Hilbert space L^2(N, q̄^{-1}). Since the Hilbert space L^2(N, q̄^{-1}) is the function space of a Fokker–Planck operator that acts on densities defined with respect to the geometry of the data, representing the conditional density with basis functions of the weighted Hilbert space L^2(N, q̄^{-1}) is a natural choice. Thus, the error estimate in Theorem 2 is valid for controlling the error of the estimate.
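To make the last statements concrete, a short computation of our own (integration by parts on the closed manifold N) identifies the adjoint explicitly:

\mathcal{L} f = \Delta f + \nabla \log\bar{q} \cdot \nabla f \qquad \Longrightarrow \qquad \mathcal{L}^{*} f = \Delta f - \nabla \cdot \big( f\, \nabla \log\bar{q} \big).

The right-hand side is the Fokker–Planck operator of the gradient system dX_t = ∇ log q̄(X_t) dt + √2 dW_t, whose equilibrium density is q̄; densities relative to this equilibrium are naturally measured in the q̄^{-1}-weighted norm, which is the sense in which L* acts on L^2(N, q̄^{-1}).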
Next, we show that the representation of the true density p_EX in the ambient space R^3 is not a function in H_{q^{-1}}(R) ≡ L^2(R, q^{-1}), where for the Hermite representation R is R^3 and q is a normal distribution, and for the cosine representation R is a hyperrectangle containing the torus and q is a uniform distribution. Recall that the torus is parametrized by:
\mathbf{x} \equiv (x, y, z) = \big( (2 + r\sin\theta)\cos\phi,\; (2 + r\sin\theta)\sin\phi,\; r\cos\theta \big),
where θ and ϕ are angles that each traverse a full circle, and r is the radius of the tube, known as the minor radius. All observation data are located on the torus with r = 1. The generalized conditional density p(r, θ, ϕ | D) in the (r, θ, ϕ) coordinates can then be defined using the Dirac delta function as follows,
p(r, \theta, \phi\,|\,D) = p_{\mathrm{IC}}(\theta, \phi\,|\,D)\, \delta(r - 1),
where p_IC, defined in (A1), denotes the conditional density function in the intrinsic coordinates, (θ, ϕ), and δ is the Dirac delta function. After a coordinate transformation, the density p_EX : R^3 → R can be obtained as
p_{\mathrm{EX}}(\mathbf{x}\,|\,D) = \frac{ p(r, \theta, \phi\,|\,D) }{ J } = \frac{ p_{\mathrm{IC}}(\theta, \phi\,|\,D)\, \delta(r - 1) }{ J }, \qquad (A5)
where J is the Jacobian determinant, det[∂(x, y, z)/∂(r, θ, ϕ)]. One can verify that p_EX(x | D) is a generalized conditional density, that is, ∫_{R^3} p_EX(x | D) dx = 1. It can now be clearly seen that, due to the Dirac delta function δ(r − 1) in (A5), the density p_EX(x | D) is no longer in the weighted Hilbert space, p_EX(x | D) ∉ H_{q^{-1}}(R). Consequently, the error estimate in Theorem 1 is no longer valid for controlling the error of the conditional density.
Here, the key point is that for the cosine and Hermite representations, the volume integral is with respect to dx, and the complete set of basis functions is obtained from the tensor product of three sets of basis functions in the (x, y, z) coordinates. To represent a conditional density function p_EX(x | D) defined only on an intrinsically two-dimensional torus, an infinite number of such basis functions is needed in theory, whereas numerically only a finite number can be used. The density p_EX(x | D) in (A5) therefore cannot be well approximated by the Hermite and cosine representations (10). Moreover, if only a finite number of Hermite or cosine basis functions is used, the Gibbs phenomenon is typically observed; i.e., the Dirac delta function δ(r − 1) in (A5) is approximated by a function with a single tall spike at r = 1 and oscillations on both sides along the r direction. On the other hand, the data-driven basis functions obtained via the diffusion maps algorithm are smooth functions defined on the data manifold N. Therefore, while the Gibbs phenomenon still occurs in this spectral expansion, it is due to the finite truncation in representing positive smooth functions (densities) on the data manifold, and not due to the singularity in the ambient direction as in (A5).
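The Gibbs effect described above is easy to reproduce in one dimension: the sketch below (a generic illustration of our own, not the paper's setup) expands δ(r − 1) in a truncated orthonormal cosine basis on an interval containing r = 1, which produces a tall spike flanked by oscillatory, partly negative side lobes:

```python
import numpy as np

def cosine_basis(r, k, a=0.0, b=2.0):
    """Orthonormal cosine basis on [a, b]."""
    L = b - a
    if k == 0:
        return np.ones_like(r) / np.sqrt(L)
    return np.sqrt(2.0 / L) * np.cos(k * np.pi * (r - a) / L)

K = 30
r = np.linspace(0.0, 2.0, 2001)
# Truncated expansion of delta(r - 1): the expansion coefficients are simply phi_k(1)
delta_K = sum(cosine_basis(np.array(1.0), k) * cosine_basis(r, k) for k in range(K))

idx = np.abs(r - 1.0).argmin()
print(delta_K[idx])     # tall spike at r = 1, height of order K / (b - a)
print(delta_K.min())    # negative side lobes: the Gibbs-type oscillations
```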

References

1. Kaipio, J.; Somersalo, E. Statistical and Computational Inverse Problems; Springer: New York, NY, USA, 2005.
2. Sullivan, T.J. Introduction to Uncertainty Quantification; Springer: Cham, Switzerland, 2015; Volume 63.
3. Dashti, M.; Stuart, A.M. The Bayesian Approach to Inverse Problems. In Handbook of Uncertainty Quantification; Ghanem, R., Higdon, D., Owhadi, H., Eds.; Springer International Publishing: Cham, Switzerland, 2017; pp. 311–428.
4. Brooks, S.; Gelman, A.; Jones, G.; Meng, X.L. Handbook of Markov Chain Monte Carlo; CRC Press: London, UK, 2011.
5. Golightly, A.; Wilkinson, D. Markov Chain Monte Carlo Algorithms for SDE Parameter Estimation. 2010, pp. 253–276. Available online: http://www.mas.ncl.ac.uk/ nag48/diffchap.pdf (accessed on 2 June 2019).
6. Tavaré, S.; Balding, D.J.; Griffiths, R.C.; Donnelly, P. Inferring coalescence times from DNA sequence data. Genetics 1997, 145, 505–518.
7. Turner, B.M.; Van Zandt, T. A tutorial on approximate Bayesian computation. J. Math. Psychol. 2012, 56, 69–85.
8. Neal, R.M. MCMC using Hamiltonian dynamics. In Handbook of Markov Chain Monte Carlo; Brooks, S., Gelman, A., Jones, G., Meng, X.-L., Eds.; CRC Press: London, UK, 2011; Chapter 5; pp. 113–167.
9. Beck, J.L.; Au, S.K. Bayesian updating of structural models and reliability using Markov Chain Monte Carlo simulation. J. Eng. Mech. 2002, 128, 380–391.
10. Haario, H.; Laine, M.; Mira, A.; Saksman, E. DRAM: Efficient adaptive MCMC. Stat. Comput. 2006, 16, 339–354.
11. Higdon, D.; Kennedy, M.; Cavendish, J.C.; Cafeo, J.A.; Ryne, R.D. Combining field data and computer simulations for calibration and prediction. SIAM J. Sci. Comput. 2004, 26, 448–466.
12. Marzouk, Y.; Najm, H.; Rahn, L. Stochastic spectral methods for efficient Bayesian solution of inverse problems. J. Comput. Phys. 2007, 224, 560–586.
13. Marzouk, Y.; Xiu, D. A stochastic collocation approach to Bayesian inference in inverse problems. Commun. Comput. Phys. 2009, 6, 826–847.
14. Huttunen, J.M.; Kaipio, J.P.; Somersalo, E. Approximation errors in nonstationary inverse problems. Inverse Probl. Imag. 2007, 1, 77–93.
15. Nagel, J.B.; Sudret, B. Spectral likelihood expansions for Bayesian inference. J. Comput. Phys. 2016, 309, 267–294.
16. Song, L.; Huang, J.; Smola, A.; Fukumizu, K. Hilbert space embeddings of conditional distributions with applications to dynamical systems. In Proceedings of the 26th Annual International Conference on Machine Learning, Montreal, QC, Canada, 14–18 June 2009; ACM: New York, NY, USA, 2009; pp. 961–968.
17. Song, L.; Fukumizu, K.; Gretton, A. Kernel embeddings of conditional distributions: A unified kernel framework for nonparametric inference in graphical models. IEEE Signal Process. Mag. 2013, 30, 98–111.
18. Berry, T.; Harlim, J. Correcting biased observation model error in data assimilation. Mon. Weather Rev. 2017, 145, 2833–2853.
19. Coifman, R.R.; Lafon, S. Diffusion maps. Appl. Comput. Harmon. Anal. 2006, 21, 5–30.
20. Berry, T.; Harlim, J. Variable bandwidth diffusion kernels. Appl. Comput. Harmon. Anal. 2016, 40, 68–96.
21. Steinwart, I.; Christmann, A. Support Vector Machines; Springer: New York, NY, USA, 2008.
22. Berry, T.; Harlim, J. Forecasting turbulent modes with nonparametric diffusion models: Learning from noisy data. Physica D 2016, 320, 57–76.
23. Harlim, J. Data-Driven Computational Methods: Parameter and Operator Estimations; Cambridge University Press: Cambridge, UK, 2018.
24. Berry, T.; Sauer, T. Consistent manifold representation for topological data analysis. Found. Data Sci. 2019, 1, 1–38.
25. Nyström, E.J. Über die praktische Auflösung von Integralgleichungen mit Anwendungen auf Randwertaufgaben. Acta Math. 1930, 54, 185–204. (In German)
26. Aronszajn, N. Theory of reproducing kernels. Trans. Am. Math. Soc. 1950, 68, 337–404.
27. Harlim, J.; Yang, H. Diffusion Forecasting Model with Basis Functions from QR-Decomposition. J. Nonlinear Sci. 2018, 28, 847–872.
28. Lorenz, E. Predictability: A problem partly solved. In Proceedings of the Seminar on Predictability, Shinfield Park, Reading, UK, 4–8 September 1995; ECMWF: Reading, UK, 1995; Volume 1, pp. 1–18.
29. Majda, A.; Abramov, R.; Grote, M. Information Theory and Stochastics for Multiscale Nonlinear Systems; CRM Monograph Series; American Mathematical Society: Providence, RI, USA, 2005.
30. Harlim, J.; Li, X.; Zhang, H. A Parameter Estimation Method Using Linear Response Statistics. J. Stat. Phys. 2017, 168, 146–170.
31. Zhang, H.; Li, X.; Harlim, J. A Parameter Estimation Method Using Linear Response Statistics: Numerical Scheme. Chaos 2019, 29, 033101.
32. Abramov, R.; Majda, A. Blended response algorithm for linear fluctuation-dissipation for complex nonlinear dynamical systems. Nonlinearity 2007, 20, 2793–2821.
33. Loftsgaarden, D.O.; Quesenberry, C.P. A nonparametric estimate of a multivariate density function. Ann. Math. Stat. 1965, 36, 1049–1051.
34. Mack, Y.; Rosenblatt, M. Multivariate k-nearest neighbor density estimates. J. Multivar. Anal. 1979, 9, 1–15.
Figure 1. (Color online) An example of well-sampled 2D uniformly distributed data points (blue circles). The boundary of the uniform distribution is depicted with a red square. These well-sampled data points correspond to the training parameters in Example I in Section 5. In this example, the well-sampled, uniformly distributed training parameters are (σ_{X_1}^2, σ_{X_2}^2) ∈ {(i, j)}_{i,j=5,…,12} (blue circles). The equal spacing distance in both coordinates is one. The two-dimensional box M is [4.5, 12.5]^2 (red square).
Figure 2. (Color online) (a) The analytic conditional density p(X | Σ) in (51). For comparison, plotted are the pointwise errors of the conditional density functions, ê(X | Σ) ≡ p(X | Σ) − p̂(X | Σ), for the (b) Hermite, (c) cosine, and (d) VBDM representations. The density and all the error functions are plotted on the B = 10,000 box-averaged data points. The training parameter is Σ ≡ (σ_{X_1}^2, σ_{X_2}^2) = (5, 5).
Figure 3. (Color online) Comparison of the posterior density functions p(Σ | X). (a) Analytical posterior density p(Σ | X) in (52). (b) Hermite representation. (c) Cosine representation. (d) VBDM representation. The true value is Σ ≡ (σ_{X_1}^2, σ_{X_2}^2) = (6.5, 6.3) (blue plus). The analytic posterior mean is (σ_{X_1}^2, σ_{X_2}^2)^{PM} = (6.03, 5.84) (green cross). The MCMC mean estimate using the Hermite representation is (σ_{X_1}^2, σ_{X_2}^2) = (6.05, 5.87) (black square). The MCMC mean estimate using the cosine representation is (σ_{X_1}^2, σ_{X_2}^2) = (6.05, 5.87) (black triangle). The MCMC mean estimate using the VBDM representation is (σ_{X_1}^2, σ_{X_2}^2) = (6.04, 5.86) (black circle).
Figure 4. (Color online) Comparison of the conditional densities p̂(x | D_j) estimated using the Hermite representation (first row), cosine representation (second row), VBDM representation (third row), and intrinsic Fourier representation (fourth row). The left (a,d,g,j), middle (b,e,h,k), and right (c,f,i,l) columns correspond to the densities at the training parameters D_1 = 0.25, D_4 = 1.00, and D_7 = 1.75, respectively. K_1 = 1000 basis functions are used for all representations. For a fair visual comparison, all conditional densities are plotted on the same box-averaged data points and normalized to satisfy (1/B̃) Σ_{b=1}^{B̃} p̂(x_b | D_j)/q̄(x_b) = 1, with q̄ being the estimated sampling density of the box-averaged data {x_b}_{b=1}^{B̃}.
Figure 5. (Color online) Comparison of the posterior density functions by all representations. Plotted also are mean estimates by Hermite representation D ^ = 0.78 (blue triangle), cosine representation D ^ = 0.79 (red square), VBDM representation D ^ = 0.88 (black circle), the intrinsic Fourier representation D ^ = 0.90 (green circle), and the true parameter value D = 0.9 (magenta asterisk).
Figure 6. (Color online) Comparison of the posterior density functions among the direct MCMC method, the NISP method, and all RKWHS representations. Plotted also are the true parameter value F = 8 (black cross), the mean estimate by the direct MCMC method, F̂ = 8.00 (green circle), the mean estimate by the NISP method, F̂ = 8.00 (magenta square), and the mean estimates by the Hermite representation, F̂ = 8.21 (blue triangle), the cosine representation, F̂ = 8.10 (red square), and the VBDM representation, F̂ = 7.99 (black circle). The noisy observations are y_j(t_m) for s = 1, T = 50.
Figure 7. (Color online) (a) Comparison of the mean estimates among the direct MCMC method, NISP method, and VBDM representation for different cases of s and T. Plotted also is the true parameter value F = 8 (green curve). (b) Comparison of the exact solution by numerical integration and the approximated solution by the NISP method at the training parameter F = 7.65 and at the parameter value F = 8 , which is not in the training parameter.
Figure 8. (Color online) Mean error estimates and error bars for various true values of F that are not in the training parameters.