Malliavin weight sampling: a practical guide

Malliavin weight sampling (MWS) is a stochastic calculus technique for computing the derivatives of averaged system properties with respect to parameters in stochastic simulations, without perturbing the system's dynamics. It applies to systems in or out of equilibrium, in steady state or time-dependent situations, and has applications in the calculation of response coefficients, parameter sensitivities and Jacobian matrices for gradient-based parameter optimisation algorithms. The implementation of MWS has been described in the specific contexts of kinetic Monte Carlo and Brownian dynamics simulation algorithms. Here, we present a general theoretical framework for deriving the appropriate MWS update rule for any stochastic simulation algorithm. We also provide pedagogical information on its practical implementation.


I. INTRODUCTION
Malliavin weight sampling (MWS) is a method for computing derivatives of averaged system properties with respect to parameters in stochastic simulations [1,2]. The method has been used in quantitative financial modelling to obtain the "Greeks" (price sensitivities) [3]; and, as the Girsanov transform, in kinetic Monte Carlo simulations for systems biology [4]. Similar ideas have been used to study fluctuation-dissipation relations in supercooled liquids [5]. However, MWS appears to be relatively unknown in the fields of soft matter, chemical and biological physics, perhaps because the theory is relatively impenetrable for non-specialists, being couched in the language of abstract mathematics (e.g., martingales, Girsanov transform, Malliavin calculus, etc.); an exception in financial modelling is Ref. [6].
MWS works by introducing an auxiliary stochastic quantity, the Malliavin weight, for each parameter of interest. The Malliavin weights are updated alongside the system's usual (unperturbed) dynamics, according to a set of rules. The derivative of any system function, A, with respect to a parameter of interest is then given by the average of the product of A with the relevant Malliavin weight, or in other words by a weighted average of A, in which the weight function is given by the Malliavin weight. Importantly, MWS works for non-equilibrium situations, such as time-dependent processes or driven steady states. It thus complements existing methods based on equilibrium statistical mechanics, which are widely used in soft matter and chemical physics.
MWS has so far been discussed only in the context of specific simulation algorithms. In this paper, we present a pedagogical and generic approach to the construction of Malliavin weights, which can be applied to any stochastic simulation scheme. We further describe its practical implementation in some detail, using as our example one-dimensional Brownian motion in a force field.

II. THE CONSTRUCTION OF MALLIAVIN WEIGHTS
The rules for the propagation of Malliavin weights have been derived for the kinetic Monte Carlo algorithm [4,7], for the Metropolis Monte Carlo scheme [5], and for both underdamped and overdamped Brownian dynamics [8].
Here, we present a generic theoretical framework, which encompasses these algorithms and also allows extension to other stochastic simulation schemes.
We suppose that our system evolves in some state space, and a point in this state space is denoted as S.
Here, we assume that the state space is continuous, but our approach can easily be translated to discrete or mixed discrete-continuous state spaces. Since the system is stochastic, its state at time t is described by a probability distribution, P(S). In each simulation step, the state of the system changes according to a propagator, W(S → S′), which gives the probability that the system moves from point S to point S′ during an application of the update algorithm. The propagator has the property that

P′(S′) = ∫ dS W(S → S′) P(S),   (1)

where P′(S′) is the probability distribution after the update step has been applied and the integral is over the whole state space. We shall write this in a shorthand notation as

P′ = W ∘ P.   (2)

Integrating Eq. (1) over S′, we see that the propagator must obey ∫ dS′ W(S → S′) = 1. It is important to note, however, that we do not assume the detailed balance condition, P_eq(S) W(S → S′) = P_eq(S′) W(S′ → S), for some equilibrium distribution, P_eq(S). Thus, our results apply to systems whose dynamical rules do not obey detailed balance (such as chemical models of gene regulatory networks [9]), as well as to systems out of steady state. We observe that the (finite) product

𝒲 = ∏_{i=1}^{n−1} W(S_i → S_{i+1})   (3)

is proportional to the probability of occurrence of a trajectory of states, {S_1, …, S_n}, and can be interpreted as a trajectory weight.
Let us now consider the average of some quantity, A(S), over the state space, written in shorthand as

⟨A⟩ = ∫ dS A(S) P(S).   (4)

The quantity, A, might well be a complicated function of the state of the system: for example, the extent of crystalline order in a particle-based simulation, or a combination of the concentrations of various chemical species in a simulation of a biochemical network. We suppose that we are interested in the sensitivity of A to variations in some parameter of the simulation, which we denote as λ. This might be one of the force field parameters (or the temperature) in a particle-based simulation, or a rate constant in a kinetic Monte Carlo simulation. We are interested in computing ∂⟨A⟩/∂λ. This quantity can be written as

∂⟨A⟩/∂λ = ∫ dS A(S) ∂P(S)/∂λ = ∫ dS A(S) Q_λ(S) P(S),   (5)

where

Q_λ(S) = ∂ ln P(S)/∂λ.   (6)

Let us now suppose that we track in our simulation not only the physical state of the system, but also an auxiliary stochastic variable, which we term q_λ. At each simulation step, q_λ is updated according to a rule that depends on the system state; this does not perturb the system's dynamics, but merely acts as a "readout". By tracking q_λ, we extend the state space, so that S becomes {S, q_λ}. We can then define the average, ⟨q_λ⟩_S, which is an average of the value of q_λ in the extended state space, with the constraint that the original (physical) state space point is fixed at S (see further below). Our aim is to define a set of rules for updating q_λ, such that ⟨q_λ⟩_S = Q_λ(S), i.e., such that the average of the auxiliary variable, for a particular state space point, measures the derivative of the probability distribution with respect to the parameter of interest, λ. If this is the case then, from Eq. (5),

∂⟨A⟩/∂λ = ⟨A q_λ⟩.   (7)

The auxiliary variable, q_λ, is the Malliavin weight corresponding to the parameter, λ.
How do we go about finding the correct updating rule? If the Malliavin weight exists, we should be able to derive its updating rule from the system's underlying stochastic equations of motion. We obtain an important clue by differentiating Eq. (1) with respect to λ. Extending the shorthand notation, one finds

∂P′/∂λ = W ∘ (∂P/∂λ) + (∂W/∂λ) ∘ P.   (8)

This strongly suggests that the rule for updating the Malliavin weight should be

q_λ → q′_λ = q_λ + ∂ ln W(S → S′)/∂λ.   (9)

In fact, this is correct. The proof is not difficult and, for the case of Brownian dynamics, can be found in the supplementary material for Ref. [8]. It involves averaging Eq. (9) in the extended state space, {S, q_λ}. From a practical point of view, for each time step, we implement the following procedure:
• propagate the system from its current state, S, to a new state, S′, using the algorithm that implements the stochastic equations of motion (Brownian dynamics, kinetic Monte Carlo, etc.);
• with knowledge of S and S′, and the propagator, W(S → S′), calculate the change in the Malliavin weight, Δq_λ = ∂ ln W(S → S′)/∂λ;
• update the Malliavin weight according to q_λ → q′_λ = q_λ + Δq_λ.
At the start of the simulation, the Malliavin weight is usually initialised to q λ = 0.
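The three-step procedure above can be illustrated in a few lines of code. The Python sketch below is not part of the paper's matlab listing; it assumes the simplest possible stochastic rule, a Brownian step with constant drift λ, for which the propagator is Gaussian and ∂ ln W/∂λ = (x′ − x − λδt)/(2T). Since ⟨x(t)⟩ = x₀ + λt for this rule, the weighted average ⟨x q_λ⟩ should recover ∂⟨x(t)⟩/∂λ = t.

```python
import random

def mws_trajectory(lam, dt, nsteps, T=1.0, x0=0.0, rng=random):
    """One stochastic trajectory together with its Malliavin weight.

    Assumed dynamics (for illustration): x -> x' = x + lam*dt + xi, with
    xi Gaussian, zero mean, variance 2*T*dt.  For this Gaussian propagator,
    d ln W / d lam = (x' - x - lam*dt) / (2*T).
    """
    x, q = x0, 0.0
    for _ in range(nsteps):
        xi = rng.gauss(0.0, (2.0 * T * dt) ** 0.5)
        xp = x + lam * dt + xi              # step 1: propagate S -> S'
        q += (xp - x - lam * dt) / (2 * T)  # steps 2-3: q += d ln W / d lam
        x = xp
    return x, q

def sensitivity(nruns=10000, lam=0.3, dt=0.01, nsteps=100):
    """Estimate d<x(t)>/d lam as <x q_lam>; for this rule <x(t)> = x0 + lam*t,
    so the exact answer is t = nsteps*dt."""
    acc = 0.0
    for _ in range(nruns):
        x, q = mws_trajectory(lam, dt, nsteps)
        acc += x * q
    return acc / nruns

if __name__ == "__main__":
    random.seed(1)
    print(sensitivity())   # close to t = 1.0
```

Note that the system update and the weight update use the same random displacement, exactly as in the bullet-point procedure above.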
Let us first suppose that our system is not in steady state, but rather that the quantity, A, in which we are interested is changing in time, and likewise ∂⟨A(t)⟩/∂λ is a time-dependent quantity. To compute ∂⟨A(t)⟩/∂λ, we run N independent simulations, in each one tracking as a function of time A(t), q_λ(t) and the product, A(t) q_λ(t). The quantities ⟨A(t)⟩ and ∂⟨A(t)⟩/∂λ are then given by

⟨A(t)⟩ ≈ (1/N) ∑_{i=1}^{N} A_i(t),  ∂⟨A(t)⟩/∂λ ≈ (1/N) ∑_{i=1}^{N} A_i(t) q_{λ,i}(t),   (10)

where A_i(t) is the value of A(t) recorded in the ith simulation run (and likewise for q_{λ,i}(t)). Error estimates can be obtained from replicate simulations. If, instead, our system is in steady state, the procedure needs to be modified slightly. This is because the variance in the values of q_λ(t) across replicate simulations increases linearly in time (this point is discussed further below). For long times, computation of ∂⟨A⟩/∂λ using Eq. (10) therefore incurs a large statistical error. Fortunately, this problem can easily be solved by computing the correlation function

C(t, t′) = ⟨A(t) [q_λ(t) − q_λ(t′)]⟩.   (11)

In steady state, C(t, t′) = C(t − t′), with the property that C(Δt) → ∂⟨A⟩/∂λ as Δt → ∞. In a single simulation run, we simply measure q_λ(t) and A(t) at time intervals separated by Δt (which is typically multiple simulation steps). At each measurement, we compute A(t) [q_λ(t) − q_λ(t − Δt)]. We then average this latter quantity over the whole simulation run to obtain an estimate of ∂⟨A⟩/∂λ. For this estimate to be accurate, we require that Δt is long enough that C(Δt) has reached its plateau value; this typically means that Δt should be longer than the typical relaxation time of the system's dynamics. The correlation function approach is discussed in more detail in Refs. [7,8].
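The steady-state recipe amounts to accumulating A(t)[q_λ(t) − q_λ(t − Δt)] along one long run. The Python sketch below is an illustration, not the paper's implementation; it assumes an overdamped particle in a harmonic trap, f = −κx, with T = 1 and increment Δq_h = ξ/2T (anticipating the worked example later in the paper), for which the exact steady-state response is ∂⟨x⟩/∂h = 1/κ.

```python
import random

def steady_state_sensitivity(kappa=1.0, T=1.0, dt=0.01, block=400,
                             nblocks=3000, burn=2000, rng=random):
    """Estimate the steady-state response d<x>/dh from one long run via
    C(Dt) = < x(t) [q_h(t) - q_h(t - Dt)] >, with Dt = block*dt chosen to
    exceed the relaxation time 1/kappa.  Assumed dynamics (harmonic trap,
    h = 0): x -> x - kappa*x*dt + xi; assumed increment: dq_h = xi/(2T)."""
    x, q = 0.0, 0.0
    for _ in range(burn):                       # relax into steady state
        x += -kappa * x * dt + rng.gauss(0.0, (2 * T * dt) ** 0.5)
    acc = 0.0
    for _ in range(nblocks):
        q_old = q
        for _ in range(block):
            xi = rng.gauss(0.0, (2 * T * dt) ** 0.5)
            x += -kappa * x * dt + xi
            q += xi / (2 * T)
        acc += x * (q - q_old)                  # one sample of C(Dt)
    return acc / nblocks

if __name__ == "__main__":
    random.seed(2)
    print(steady_state_sensitivity())   # close to 1/kappa = 1.0
```

Here Δt = 4 is several times the relaxation time 1/κ = 1, so C(Δt) has reached its plateau; only weight *differences* over the interval Δt enter, which is what tames the growing variance of q_λ itself.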
Returning to a more theoretical perspective, it is interesting to note that the rule for updating the Malliavin weight, Eq. (9), depends deterministically on S and S′. This implies that the value of the Malliavin weight at time t is completely determined by the trajectory of system states during the time interval, 0 → t. In fact, it is easy to show that

q_λ = ∂ ln 𝒲/∂λ,   (12)

where 𝒲 is the trajectory weight defined in Eq. (3). Similar expressions are given in Refs. [5,7]. Thus, the Malliavin weight, q_λ, is not fixed by the state point, S, but by the entire trajectory of states that have led to state point S. Since many different trajectories can lead to S, many values of q_λ are possible for the same state point, S. The average, ⟨q_λ(t)⟩_S, is actually the expectation value of the Malliavin weight, averaged over all trajectories that reach state point S at time t. This can be used to obtain an alternative proof that ⟨q_λ⟩_S = ∂ ln P/∂λ. Suppose we sample N trajectories, of which N_S end up at state point S (or a suitably defined vicinity thereof, in a continuous state space). We have P(S) = N_S/N. Then, the Malliavin property implies ∂P/∂λ = N_S ⟨q_λ⟩_S/N, and hence, ∂ ln P/∂λ = (∂P/∂λ)/P = ⟨q_λ⟩_S.
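The trajectory-level identity, Eq. (12), can be checked directly in a small numerical experiment: accumulate the increments ∂ ln W/∂λ step by step, then differentiate the total log trajectory weight by central finite differences at fixed noise history. The sketch below is an illustration with an assumed Gaussian propagator and constant drift λ; for this choice the log-weight is quadratic in λ, so the central difference is essentially exact.

```python
import random

def log_traj_weight(traj, lam, dt, T=1.0):
    """ln W of a fixed trajectory under drift parameter lam, up to a
    lam-independent normalisation.  Assumed Gaussian propagator:
    W ~ exp[-(x' - x - lam*dt)^2 / (4*T*dt)]."""
    return sum(-(xp - x - lam * dt) ** 2 / (4 * T * dt)
               for x, xp in zip(traj, traj[1:]))

def check_identity(lam=0.7, dt=0.01, nsteps=200, T=1.0, eps=1e-4, rng=random):
    """Accumulate q_lam step by step, then compare with the central finite
    difference of the total log trajectory weight at fixed noise history."""
    traj, q = [0.0], 0.0
    for _ in range(nsteps):
        x = traj[-1]
        xp = x + lam * dt + rng.gauss(0.0, (2 * T * dt) ** 0.5)
        q += (xp - x - lam * dt) / (2 * T)   # incremental Malliavin update
        traj.append(xp)
    dlogW = (log_traj_weight(traj, lam + eps, dt)
             - log_traj_weight(traj, lam - eps, dt)) / (2 * eps)
    return q, dlogW

if __name__ == "__main__":
    random.seed(3)
    q, dlogW = check_identity()
    print(q, dlogW)   # the two values agree
```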

III. MULTIPLE VARIABLES, SECOND DERIVATIVES AND THE ALGEBRA OF MALLIAVIN WEIGHTS
Up to now, we have assumed that the quantity, A, does not depend on the parameter, λ. There may be cases, however, when A does have an explicit λ-dependence. In these cases, Eq. (7) should be replaced by

∂⟨A⟩/∂λ = ⟨∂A/∂λ⟩ + ⟨A q_λ⟩.   (13)

This reveals a kind of 'algebra' for Malliavin weights: we see that the operations of taking an expectation value and taking a derivative can be commuted, provided the Malliavin weight is introduced as the commutator. We can also extend our analysis further to allow us to compute higher derivatives with respect to the parameters. These may be useful, for example, for increasing the efficiency of gradient-based parameter optimisation algorithms. Taking the derivative of Eq. (13) with respect to a second parameter, μ, gives

∂²⟨A⟩/∂λ∂μ = (∂/∂μ)⟨∂A/∂λ⟩ + (∂/∂μ)⟨A q_λ⟩
           = ⟨∂²A/∂λ∂μ⟩ + ⟨(∂A/∂λ) q_μ⟩ + ⟨∂(A q_λ)/∂μ⟩ + ⟨A q_λ q_μ⟩
           = ⟨∂²A/∂λ∂μ⟩ + ⟨(∂A/∂λ) q_μ⟩ + ⟨(∂A/∂μ) q_λ⟩ + ⟨A (q_λ q_μ + q_λμ)⟩.   (14)

In the second line, we iterate the commutation relation and, in the third line, we collect like terms and introduce

q_λμ = ∂q_λ/∂μ.   (15)

In the case where A is independent of the parameters, this result simplifies to

∂²⟨A⟩/∂λ∂μ = ⟨A (q_λ q_μ + q_λμ)⟩.   (16)

The quantity, q_λμ, here is a new, second order Malliavin weight, which, from Eqs. (12) and (15), satisfies

q_λμ = ∂² ln 𝒲/∂λ∂μ.   (17)

To compute second derivatives with respect to the parameters, we should therefore track these second order Malliavin weights in our simulation, updating them alongside the existing Malliavin weights by the rule

q_λμ → q′_λμ = q_λμ + ∂² ln W(S → S′)/∂λ∂μ.   (18)

A corollary, obtained by taking A to be a constant in Eqs. (13) and (16), respectively, is that quite generally ⟨q_λ⟩ = 0 and ⟨q_λμ⟩ = −⟨q_λ q_μ⟩.
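These corollaries are easy to check numerically. The Python sketch below is an illustration not taken from the paper; it assumes the simplest Gaussian propagator with a single constant drift parameter λ (and T = 1), for which every increment is Δq_λ = ξ/(2T) and Δq_λλ = ∂² ln W/∂λ² = −δt/(2T). The second order weight is then deterministic, and the corollary ⟨q_λλ⟩ = −⟨q_λ²⟩ reduces to ⟨q_λ²⟩ = t/(2T).

```python
import random

def weights_after_time(lam=0.5, dt=0.01, nsteps=100, T=1.0, rng=random):
    """Return (q_lam, q_lamlam) after time t = nsteps*dt for one run.
    Assumed Gaussian propagator with constant drift lam, so that
    dq_lam = xi/(2T) and dq_lamlam = d^2 ln W/d lam^2 = -dt/(2T)."""
    q, q2 = 0.0, 0.0
    for _ in range(nsteps):
        xi = rng.gauss(0.0, (2 * T * dt) ** 0.5)
        q += xi / (2 * T)       # first order weight
        q2 += -dt / (2 * T)     # second order weight (deterministic here)
    return q, q2

if __name__ == "__main__":
    random.seed(4)
    runs = [weights_after_time() for _ in range(10000)]
    mean_q = sum(q for q, _ in runs) / len(runs)
    mean_qq = sum(q * q for q, _ in runs) / len(runs)
    q2 = runs[0][1]             # identical in every run: -t/2 = -0.5
    print(mean_q, mean_qq, q2)  # ~0.0, ~0.5, exactly -0.5
```

The run confirms ⟨q_λ⟩ ≈ 0 and ⟨q_λλ⟩ + ⟨q_λ²⟩ ≈ 0, as the algebra requires.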
Steady state problems can be approached by extending the correlation function method to second order weights. Define, cf. Eq. (11),

C(t, t′) = ⟨A(t) {[q_λ(t) − q_λ(t′)][q_μ(t) − q_μ(t′)] + q_λμ(t) − q_λμ(t′)}⟩.   (19)

As in the first order case, in steady state, we expect C(t, t′) = C(t − t′), with the property that C(Δt) → ∂²⟨A⟩/∂λ∂μ as Δt → ∞.

IV. BROWNIAN MOTION IN ONE DIMENSION
We now demonstrate this machinery by way of a practical but very simple example, namely one-dimensional (overdamped) Brownian motion in a force field. In this case, the state space is specified by the particle position, x, which evolves according to the Langevin equation

dx/dt = f(x) + η.   (20)

In this, f(x) is the force field and η is Gaussian white noise of amplitude 2T, where T is the temperature. Without loss of generality, we have chosen units so that there is no prefactor multiplying the force field. We discretise the Langevin equation to the following updating rule,

x → x′ = x + f(x) δt + ξ,   (21)

where δt is the time step and ξ is a Gaussian random variate with zero mean and variance 2T δt. Corresponding to this updating rule is an explicit expression for the propagator,

W(x → x′) = (4πT δt)^{−1/2} exp{−[x′ − x − f(x) δt]²/(4T δt)}.   (22)

This follows from the statistical distribution of ξ. Let us suppose that the parameter of interest, λ, enters into the force field (the temperature, T, could also be chosen as a parameter). Making this assumption,

∂ ln W/∂λ = [(x′ − x − f δt)/(2T)] ∂f/∂λ.   (23)

We can simplify this result by noting that, from Eq. (21), x′ − x − f δt = ξ. Making use of this, the final updating rule for the Malliavin weight is

q_λ → q′_λ = q_λ + (ξ/2T) ∂f/∂λ,   (24)

where ξ is the exact same value that was used for updating the position in Eq. (21). Because the value of ξ is the same for the updates of position and of q_λ, the change in q_λ is completely determined by the end points, x and x′. The derivative, ∂f/∂λ, should be evaluated at x, since that is the position at which the force is computed in Eq. (21). Since ξ in Eq. (21) is a random variate uncorrelated with x, averaging Eq. (24) shows that ⟨q′_λ⟩ = ⟨q_λ⟩. As the initial condition is q_λ = 0, this means that ⟨q_λ⟩ = 0, as predicted in the previous section. Eq. (24) is essentially the same as the rule derived in Ref. [8].
If we differentiate Eq. (23) with respect to a second parameter, μ, we get

∂² ln W/∂λ∂μ = (ξ/2T) ∂²f/∂λ∂μ − (δt/2T)(∂f/∂λ)(∂f/∂μ).   (25)

Hence, the updating rule for the second order Malliavin weight can be written as

q_λμ → q′_λμ = q_λμ + (ξ/2T) ∂²f/∂λ∂μ − (δt/2T)(∂f/∂λ)(∂f/∂μ),   (26)

where, again, ξ is the exact same value as that used for updating the position in Eq. (21). If we average Eq. (26) over replicate simulation runs, we find ⟨q′_λμ⟩ = ⟨q_λμ⟩ − (δt/2T)⟨(∂f/∂λ)(∂f/∂μ)⟩. Hence, the mean value, ⟨q_λμ⟩, drifts in time, unlike ⟨q_λ⟩ or ⟨q_μ⟩. However, one can show that the mean value of the sum, ⟨q_λμ + q_λ q_μ⟩, is constant in time and equal to zero, as long as, initially, q_λ = q_μ = 0. Now, let us consider the simplest case of a particle in a linear force field, f = −κx + h (also discussed in Ref. [8]). This corresponds to a harmonic trap with the potential U = ½κx² − hx. We let the particle start from x₀ at t = 0 and track its time-dependent relaxation to the steady state. We shall set T = 1 for simplicity. The Langevin equation can be solved exactly for this case, and the mean position evolves according to

⟨x(t)⟩ = h/κ + (x₀ − h/κ) e^{−κt}.   (27)

We suppose that we are interested in derivatives with respect to both h and κ, for a "baseline" parameter set in which κ is finite, but h = 0. Taking derivatives of Eq. (27) and setting h = 0, we find

∂⟨x(t)⟩/∂h = (1 − e^{−κt})/κ,  ∂⟨x(t)⟩/∂κ = −x₀ t e^{−κt}.   (28)

We now show how to compute these derivatives using Malliavin weight sampling. Applying the definitions in Eqs. (24) and (26), the Malliavin weight increments are

Δq_h = ξ/2,  Δq_κ = −x ξ/2,  Δq_hκ = x δt/2,   (29)

and the position update itself is

x → x′ = x − κx δt + ξ.   (30)

We track these Malliavin weights in our simulation and use them to calculate derivatives according to

∂⟨x(t)⟩/∂h = ⟨x(t) q_h(t)⟩,  ∂⟨x(t)⟩/∂κ = ⟨x(t) q_κ(t)⟩,  ∂²⟨x(t)⟩/∂h∂κ = ⟨x(t) [q_h(t) q_κ(t) + q_hκ(t)]⟩.   (31)

Eqs. (29)-(31) have been coded up as a matlab script, described in Appendix B. A typical result generated by running this script is shown in Fig. 1. The dashed lines are theoretical predictions for the time-dependent derivatives from Eqs. (28). As can be seen, the agreement between the time-dependent derivatives and the Malliavin weight averages is very good.
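The update rules of Eqs. (29)-(31) are compact enough to sketch in a few lines. The published implementation is the matlab script of Appendix B; the Python version below is an independent illustration, with assumed parameters κ = 1, x₀ = 1, T = 1, compared against Eq. (28) and its κ-derivative at t = 1.

```python
import math, random

def run_mws(kappa=1.0, dt=0.01, nsteps=100, x0=1.0, rng=random):
    """One trajectory of Eq. (30) carrying the weights of Eq. (29), T = 1."""
    x, qh, qk, qhk = x0, 0.0, 0.0, 0.0
    for _ in range(nsteps):
        xi = rng.gauss(0.0, math.sqrt(2 * dt))
        qh += xi / 2                  # Delta q_h       = xi/2
        qk += -x * xi / 2             # Delta q_kappa   = -x xi/2
        qhk += x * dt / 2             # Delta q_{h,kappa} = x dt/2
        x += -kappa * x * dt + xi     # position update, h = 0
    return x, qh, qk, qhk

if __name__ == "__main__":
    random.seed(5)
    N, t = 10000, 1.0
    dh = dk = dhk = 0.0
    for _ in range(N):
        x, qh, qk, qhk = run_mws()
        dh += x * qh
        dk += x * qk
        dhk += x * (qh * qk + qhk)
    # compare with Eq. (28), and its kappa-derivative, at kappa = 1, x0 = 1
    print(dh / N, 1 - math.exp(-t))                        # ~0.632
    print(dk / N, -t * math.exp(-t))                       # ~-0.368
    print(dhk / N, t * math.exp(-t) - (1 - math.exp(-t)))  # ~-0.264
```

Note that the weight increments use the position x *before* the step, and the same random displacement ξ as the position update, exactly as prescribed below Eq. (24).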
As discussed briefly above, in this procedure, the sampling error in the computation of ∂⟨A(t)⟩/∂λ is expected to grow with time. Fig. 2 shows the mean square Malliavin weights as a function of time for the same problem. For the first order weights, q_h and q_κ, the growth is typically linear in time. Indeed, from Eqs. (29), one can prove that in the limit δt → 0 (see Appendix A)

⟨q_h²⟩ = t/2,  d⟨q_κ²⟩/dt = ⟨x²⟩/2.   (32)

Thus, q_h behaves exactly as a random walk, as should be obvious from the updating rule. The other weight, q_κ, also ultimately behaves as a random walk, since ⟨x²⟩ = 1/κ in steady state (from equipartition). Fig. 2 also shows that the second order weight, q_hκ, grows superdiffusively; one can show that, eventually, ⟨(q_hκ + q_h q_κ)²⟩ ∼ t², although the transient behaviour is complicated. Full expressions are given in Appendix A. This suggests that the computation of second order derivatives is likely to suffer more severely from statistical sampling problems than the computation of first order derivatives.
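The diffusive growth of the first order weights can be checked directly. In the sketch below (an illustration with assumed parameters κ = 1, T = 1, and x₀ drawn from the steady-state distribution so that ⟨x²⟩ = 1/κ throughout the run), both ⟨q_h²⟩ and ⟨q_κ²⟩ should grow as t/2.

```python
import math, random

def msq_weights(t=1.0, dt=0.01, nruns=5000, kappa=1.0, rng=random):
    """Mean square first order weights after time t (T = 1), with the
    particle started from the steady state, x0 ~ N(0, 1/kappa), so that
    <x^2> = 1/kappa for the whole run."""
    nsteps = int(round(t / dt))
    sh = sk = 0.0
    for _ in range(nruns):
        x = rng.gauss(0.0, 1.0 / math.sqrt(kappa))
        qh = qk = 0.0
        for _ in range(nsteps):
            xi = rng.gauss(0.0, math.sqrt(2 * dt))
            qh += xi / 2                 # Delta q_h     = xi/2
            qk += -x * xi / 2            # Delta q_kappa = -x xi/2
            x += -kappa * x * dt + xi
        sh += qh * qh
        sk += qk * qk
    return sh / nruns, sk / nruns

if __name__ == "__main__":
    random.seed(6)
    print(msq_weights(t=1.0))   # both close to t/2 = 0.5
    print(msq_weights(t=2.0))   # both close to t/2 = 1.0
```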

V. CONCLUSIONS
In this paper, we have provided an outline of the generic use of Malliavin weights for sampling derivatives in stochastic simulations, with an emphasis on practical aspects. The usefulness of MWS for a particular simulation scheme hinges on the simplicity, or otherwise, of constructing the propagator, W(S → S′), which fixes the updating rule for the Malliavin weights according to Eq. (9). The propagator is determined by the algorithm used to implement the stochastic equations of motion; MWS may be easier to implement for some algorithms than for others. We note, however, that there is often some freedom of choice in the algorithm, such as the choice of a stochastic thermostat in molecular dynamics, or the order in which update steps are implemented. In these cases, a suitable choice may simplify the construction of the propagator and facilitate the use of Malliavin weights.
ACKNOWLEDGMENTS
Rosalind J. Allen is supported by a Royal Society University Research Fellowship.

Appendix A: Growth of the mean square Malliavin weights
The mean square Malliavin weights for the harmonic trap problem obey a hierarchy of ordinary differential equations (ODEs). Some of these have already been encountered in the main text. The last one is for the desired mean square second order weight. The ODEs can be solved with the initial conditions that, at t = 0, all averages involving Malliavin weights vanish, but ⟨x²⟩ = x₀². The results include, inter alia,

⟨q_h²⟩ = t/2,  ⟨q_κ²⟩ = t/(2κ) + (x₀² − 1/κ)(1 − e^{−2κt})/(4κ).

These are shown as the dashed lines in Fig. 2. The leading behaviour of the last quantity, the mean square second order weight, as t → ∞ is ⟨(q_hκ + q_h q_κ)²⟩ ∼ t². However, the approach to this limit is slow.

Appendix B: MATLAB script
The matlab script in Listing 1 was used to generate the results shown in Fig. 1. It implements Eqs. (29)-(31) above, making extensive use of the compact matlab syntax for array operations, for instance, invoking '.*' for element-by-element multiplication of arrays.
Here is a brief explanation of the script. Lines 1-3 initialise the problem and the parameter values. Lines 4 and 5 calculate the number of points in a trajectory and initialise a vector containing the time coordinate of each point. Lines 6-9 set aside storage for the actual trajectory, Malliavin weights and cumulative statistics. Lines 10-23 implement a pair of nested loops, which are the kernel of the simulation. Within the outer (trajectory sampling) loop, Line 11 initialises the particle position and Malliavin weights, Line 12 precomputes a vector of random displacements (Gaussian random variates) and Lines 13-18 generate the actual trajectory. Within the inner (trajectory generating) loop, Lines 14-17 are a direct implementation of Eqs. (29) and (30). After each individual trajectory has been generated, the cumulative sampling step implied by Eq. (31) is done in Lines 19-22; after all the trajectories have been generated, these quantities are normalised in Lines 24 and 25. Finally, Lines 26-32 generate a plot similar to Fig. 1 (albeit with the addition of ⟨x⟩), and Lines 33 and 34 show how the data can be exported in tabular format for replotting using an external package. Listing 1 is complete and self-contained. It will run in either matlab or octave. One minor comment is perhaps in order. The choice was made to precompute a vector of Gaussian random variates, which are used as random displacements to generate the trajectory and update the Malliavin weights. One could equally well generate random displacements on-the-fly, in the inner loop. For this one-dimensional problem, storage is not an issue, and it seems more elegant and efficient to exploit the vectorisation capabilities of matlab. For a more realistic three-dimensional problem, with many particles (and a different programming language), it would obviously be preferable to use an on-the-fly approach.