Exact Response Theory for Time-Dependent and Stochastic Perturbations

The exact, non-perturbative response theory developed within the field of non-equilibrium molecular dynamics, also known as TTCF (transient time correlation function) theory, applies to quite general dynamical systems. Its key element is called the dissipation function, because it represents the power dissipated by external fields acting on the particle system of interest, whose coupling with the environment is provided by deterministic thermostats. This theory was initially developed for time-independent external perturbations, and was later extended to time-dependent perturbations. It has also been applied to dynamical systems of various kinds, and to oscillator models undergoing phase transitions, which cannot be treated with, e.g., linear response theory. The present work incorporates time-dependent stochastic perturbations in the theory by means of the Karhunen–Loève theorem. This leads to three different investigations of a given process. In the first, a single realization of the stochastic coefficients is fixed, and averages are taken only over the initial conditions, as in a deterministic process. In the second, the initial condition is fixed, and averages are taken with respect to the distribution of stochastic coefficients. In the third, one averages over both initial conditions and stochastic coefficients. We conclude by illustrating the applicability of the resulting exact response theory with simple examples.


Introduction
Linear response theory successfully describes the behavior of macroscopic systems that are close to thermodynamic equilibrium, obtaining the corresponding transport coefficients solely from the equilibrium correlation functions of the microscopic fluctuating currents, computed with the equilibrium dynamics, cf. Refs. [1-3]. While the range of applicability of the linear theory covers most phenomena occurring on the scale of our daily life, extending well beyond that scale, contemporary science and technology concern scales that are much smaller, or that involve extremely large driving forces, which often exceed the linear regime. Indeed, anomalous phenomena, for which the linear transport coefficients do not exist or vanish, are common at the nanometric scale. Strong drivings, such as high voltages, produce large currents that alter the physical properties of the system at hand, leading to non-linear effects. Under external drivings, microscopic motions may be impaired to the point of producing localization or, in any case, drastic changes of state, like phase transitions. In many of these situations, dynamical correlations persist in time and give rise to behaviors that are not fully understood.
Within the field of non-equilibrium molecular dynamics [4], particularly following the discovery of the fluctuation relation, response theory has been generalized so that small systems as well as large drivings can be treated. A general exact (non-perturbative) response theory, also known as TTCF (transient time correlation function) theory, has been derived [4], which has proven very effective. In particular, it has recently allowed the treatment of hard non-equilibrium problems at low drivings [5,6], drastically improving the signal-to-noise ratio and providing a superior method with respect to direct averaging for such calculations. Moreover, this theory has allowed the derivation of a host of relations concerning non-equilibrium systems; cf., e.g., Ref. [7]. The key ingredient of this exact response theory is known as the dissipation function, first explicitly defined by Evans and Searles as the energy dissipation rate that verifies the fluctuation relation [8,9]. Through this, quantities previously used singularly in specific instances of non-equilibrium molecular dynamics studies have been unified into a single general concept, which can be used in the analysis of any dynamical system. The original result, denoted as the dissipation theorem in Ref. [10], concerned systems subjected to time-independent perturbations, such as an external constant field. Periodic perturbations were then considered by Petravic and Evans, in Refs. [11,12], for time-periodic planar shear flow, and by Todd, in Ref. [13], for time-periodic planar elongational flow.
Inspired by the original works, and profiting from the general applicability of the dissipation function previously evidenced within the framework of the fluctuation relation [14], an extension to general dynamical systems perturbed by time-dependent vector fields, expressed as Fourier series, was developed in Ref. [15]. This was done in an enlarged phase space, in which the equations of motion are autonomous. Apart from the generality that allows the use of the TTCF outside the realm of molecular dynamics, wherever dynamical systems are used, our approach preserves the time reversibility of the unperturbed dynamics, which is a fundamental ingredient in many statistical mechanics calculations, including the Onsager reciprocal relations and the fluctuation relations [2,16,17]. Moreover, expressing a signal in terms of Fourier modes makes the contributions of the various harmonics of the forcing to the response directly computable, and provides important information on both the forcing and the response in terms of power spectra, as often required in statistical physics [4,18-27]. Now, physics applications often deal with periodic functions whose periods exceed any relevant time scale, as in standard approaches to the fluctuation-dissipation relation [1,2,28]. However, the Fourier series formalism naturally lends itself to the analysis of non-periodic time-dependent driving, by letting the period of periodic signals grow without bounds. In that case, the Fourier series turns, under relatively mild conditions, into a Fourier transform, which is mathematically suited to treat non-periodic signals [29-31].
In the present paper, we take that approach one step further, allowing stochastic time-dependent perturbations [32,33], which can be represented as Karhunen-Loève expansions [34,35]. In Section 2, we recall the Karhunen-Loève theorem, from which we obtain the particular representation of the stochastic perturbation of our dynamical systems. Section 3 briefly illustrates the linear response theory, highlighting the role of the initial perturbation, and then illustrates the time-independent exact response formalism. Section 4 starts from the equations of motion of the deterministic system of interest, to which we add the time-dependent stochastic perturbation discussed in Section 2. The first step is to make the system autonomous by enlarging the phase space, to accommodate a (Hamiltonian) harmonic oscillator, which embeds in itself the time dependence of the vector field. Then, we redefine the quantities of interest in the enlarged phase space and distinguish three cases: (a) the case in which a single realization of the stochastic process is fixed [36], which yields a deterministic process in which only the initial conditions are random; (b) the stochastic case in which the initial condition is fixed and one only averages over the realizations of the stochastic coefficients of the Karhunen-Loève expansion; (c) the stochastic case in which averages are taken over both the initial conditions in the phase space and the stochastic coefficients of the process. Finally, we compute the generalized exact response formula for any observable of our system. Section 5 illustrates the theory using simple examples, such as the harmonic oscillator perturbed by a single-frequency force with stochastic amplitude, and compares the linear and the exact response results. Section 6 reports expressions for a simple non-equilibrium system: one particle in a viscous medium, perturbed by stochastic forcing with infinitely many frequencies and a non-vanishing mean. The various terms of the response theory are analytically expressed as functions of the stochastic coefficients; quantitative results, however, generally require a numerical treatment. In Section 7, we summarize our work, discuss our results, and note how even the standard linear response theory may benefit from the extended phase space approach. Indeed, the latter allows one to properly treat the initial condition of the time-dependent perturbation, when only finite evolution times are considered, as required in many modern investigations of fluctuating observables. Appendix A computes some integrals used in the main text.

The Karhunen-Loève Theorem
The Karhunen-Loève theorem allows a stochastic process to be represented as an infinite linear combination of orthogonal functions, analogous to a Fourier series, with stochastic rather than deterministic coefficients. If $\{X_t\}_{t\in[a,b]}$ is a centered stochastic process satisfying certain continuity conditions, one can decompose it as a sum of pairwise orthogonal functions multiplied by random coefficients that are pairwise uncorrelated and, hence, orthogonal in probability space. More precisely, the following holds [34]: let $\{X_t\}_{t\in[a,b]}$ be a square-integrable stochastic process with zero mean, defined on a probability space $(\Theta, \mathcal{F}, P)$, with a continuous covariance function $K_X(s,t)$. Letting $\{e_k\}$ be an orthonormal basis of $L^2([a,b])$ formed by the eigenfunctions of the integral operator $T_{K_X}$, with respective eigenvalues $\lambda_k$, $X_t$ admits the representation

$$X_t = \sum_{k=1}^{\infty} Z_k\, e_k(t), \qquad Z_k = \int_a^b X_t\, e_k(t)\, dt,$$

where the convergence is in $L^2$ and uniform in $t$. Furthermore, the random variables $Z_k$ have zero mean, are uncorrelated, and have variance $\lambda_k$:

$$E[Z_k] = 0, \qquad E[Z_j Z_k] = \delta_{jk}\, \lambda_k.$$

As an example, consider the particular case of a Wiener process $w_t$, with $t \in [0,T]$ and covariance function $K_w(s,t) = \min(s,t)$. To find the corresponding eigenvalues and eigenfunctions, we need to solve the integral equation

$$\int_0^T \min(s,t)\, e(s)\, ds = \lambda\, e(t).$$

Differentiating once with respect to $t$ yields

$$\int_t^T e(s)\, ds = \lambda\, e'(t),$$

and a second differentiation produces $-e(t) = \lambda\, e''(t)$, whose general solution takes the form

$$e(t) = A \sin(t/\sqrt{\lambda}) + B \cos(t/\sqrt{\lambda}),$$

where $A$ and $B$ are determined by the boundary conditions. Setting $t = 0$ in the integral equation gives $e(0) = 0$, which implies $B = 0$; setting $t = T$ in the once-differentiated equation yields $e'(T) = 0$, whence

$$\cos(T/\sqrt{\lambda}) = 0, \qquad \lambda_k = \left(\frac{T}{(k - \frac{1}{2})\pi}\right)^2, \quad k = 1, 2, \dots$$

The corresponding eigenfunctions are thus given by

$$e_k(t) = A \sin\!\left(\frac{(k - \frac{1}{2})\pi t}{T}\right),$$

and $A$ is finally determined by the normalization condition $\int_0^T e_k(t)^2\, dt = 1$, which gives $A = \sqrt{2/T}$. Finally, we obtain the following representation of the Wiener process:

$$w_t = \sqrt{2} \sum_{k=1}^{\infty} Z_k\, \frac{\sin\!\big((k - \frac{1}{2})\pi t / T\big)}{(k - \frac{1}{2})\pi / \sqrt{T}},$$

where $\{Z_k\}_{k=1}^{\infty}$ is a sequence of independent Gaussian random variables, each having zero mean and unit variance. This representation is valid for $t \in [0,T]$ for any $T > 0$ and, as stated in the theorem, convergence is in the $L^2$ norm and uniform in $t$.
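The representation above lends itself to a direct numerical check: sampling the truncated expansion and verifying that the empirical covariance approaches $K_w(s,t) = \min(s,t)$. The following sketch (NumPy, with illustrative truncation order and sample size) implements the eigenfunctions and eigenvalues exactly as derived above:

```python
import numpy as np

def kl_wiener(t, Z, T=1.0):
    """Sample Wiener paths on the grid t from the truncated KL expansion:
    w_t ~ sum_k sqrt(lambda_k) Z_k e_k(t), with e_k(t) = sqrt(2/T) sin(nu_k t),
    nu_k = (k - 1/2) pi / T and lambda_k = 1 / nu_k**2."""
    k = np.arange(1, Z.shape[-1] + 1)
    nu = (k - 0.5) * np.pi / T
    basis = np.sqrt(2.0 / T) * np.sin(np.outer(t, nu))   # e_k(t), shape (len(t), K)
    return (Z / nu) @ basis.T                            # sqrt(lambda_k) = 1 / nu_k

rng = np.random.default_rng(0)
T, n_modes, n_paths = 1.0, 200, 20000
t = np.array([0.3, 0.7])
Z = rng.standard_normal((n_paths, n_modes))              # i.i.d. standard normals
paths = kl_wiener(t, Z, T)                               # shape (n_paths, 2)
cov = np.mean(paths[:, 0] * paths[:, 1])                 # should approach min(0.3, 0.7)
var = np.mean(paths[:, 1] ** 2)                          # should approach 0.7
```

With a few hundred modes the truncation error is already well below the Monte Carlo noise, consistent with the $L^2$ convergence stated in the theorem.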

Response Theory
Linear response theory is suited to describe the evolution of observables of systems subjected to small perturbations, while the exact response applies regardless of the magnitude of the perturbation. To set our notation, let us recall the main aspects of the time-dependent linear theory [1]. Given a Hamiltonian dynamical system, with Hamiltonian $H_0$, consider a Hamiltonian perturbation $H_p(\Gamma, t) = -F(t) A(\Gamma)$, whose small intensity is $F$, while $A$ has the dimensions of energy. The perturbed Hamiltonian is then

$$H(\Gamma, t) = H_0(\Gamma) - F(t) A(\Gamma),$$

and the equations of motion take the canonical form $\dot q_i = \partial H / \partial p_i$, $\dot p_i = -\partial H / \partial q_i$. Denoting by $G$ the corresponding vector field, the Liouville operator allows us to write the continuity equation for the probability distribution in phase space as

$$\frac{\partial f}{\partial t} = -iL(t) f, \qquad iL(t) = iL_0 + iL_{\rm ext}(t),$$

where $iL_0$ is generated by the unperturbed dynamics, $iL_{\rm ext}(t)$, proportional to $F(t)$, by the perturbation, and whose solution can be expressed formally as a time-ordered exponential. As is well known, this is an exact, but not directly useful, expression. However, the fact that $L_{\rm ext}$ is proportional to the small intensity of the perturbation $F$ justifies the following linear approximation of $f_t$:

$$f_t \simeq f_0 + \int_0^t ds\; e^{-i(t-s)L_0} \left(-i L_{\rm ext}(s) f_0\right) + \text{H.O.},$$

where H.O. stands for negligible higher-order terms in $F$. More than the probabilities, though, we are interested in the evolution of observables, which are identified with their evolving ensemble averages. We denote an observable by $O : \mathcal{M} \to \mathbb{R}$, and its ensemble average at time $t$ by

$$\langle O \rangle_t = \int_{\mathcal{M}} O(\Gamma)\, f_t(\Gamma)\, d\Gamma.$$

In the linear regime, for a canonical initial density $f_0 \propto e^{-\beta H_0}$, we can then write

$$\langle O \rangle_t = \langle O \rangle_0 + \int_0^t ds\; F(s)\, R(t-s), \qquad (23)$$

where

$$R(t) = \beta \left\langle \dot A\; (O \circ S_0^t) \right\rangle_0 \qquad (24)$$

is called the response function, and the subscript 0 in $S_0^t$ indicates the unperturbed evolution. This is a very important result: the response to a small perturbation is determined by the equilibrium dynamics and by the time correlation function of the perturbation and the observable of interest, computed with respect to the known equilibrium ensemble. For thermodynamic systems, this description is fully satisfactory, because the observables of interest in such systems fluctuate only negligibly, and the observed signal concerning each single object practically equals the ensemble-averaged signal. This is no longer guaranteed when non-thermodynamic systems, or large time-dependent perturbations, are considered. Therefore, an extension of the linear response theory to small systems or large perturbations is required to address these two issues.
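As a concrete check of the linear formulae, the response function of the canonical harmonic oscillator can be estimated by Monte Carlo from equilibrium correlations alone. The sketch below assumes $H_0 = p^2/2 + \omega^2 q^2/2$ with unit mass, $A = q$ and $O = q$, for which $R(t) = \beta \langle p(0)\, q(t) \rangle_0 = \sin(\omega t)/\omega$; the values of $\omega$, $\beta$ and $t$ are illustrative:

```python
import numpy as np

# Classical Kubo response function R(t) = beta <Adot(0) O(t)>_0 for the
# canonical harmonic oscillator, with A = q, O = q, so Adot = {q, H0} = p.
# The exact answer is R(t) = sin(w t)/w; we estimate it from the equilibrium
# ensemble (a sketch with unit mass and illustrative w, beta, t).
rng = np.random.default_rng(1)
w, beta, t, n = 2.0, 1.0, 0.9, 400000
q0 = rng.normal(0.0, 1.0 / (w * np.sqrt(beta)), n)   # canonical: var(q) = 1/(beta w^2)
p0 = rng.normal(0.0, 1.0 / np.sqrt(beta), n)         # canonical: var(p) = 1/beta
qt = q0 * np.cos(w * t) + (p0 / w) * np.sin(w * t)   # unperturbed evolution of q
R_mc = beta * np.mean(p0 * qt)                       # equilibrium correlation
R_exact = np.sin(w * t) / w
```

Note that only unperturbed trajectories appear, exactly as in formula (24): the perturbed dynamics is never integrated.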
The (time-independent) exact response theory initially developed in Ref. [10] may be adapted to describe a generic dynamical system, $S^t : \mathcal{M} \to \mathcal{M}$, on a phase space $\mathcal{M}$, where the time $t$ may be continuous or discrete [37]. Let us focus on the case in which $S^t\Gamma$, with $\Gamma \in \mathcal{M}$, represents the solution at time $t$ of the differential equation

$$\dot\Gamma = G(\Gamma).$$

For any such system, the phase space variation rate, $\Lambda : \mathcal{M} \to \mathbb{R}$, is the divergence in $\mathcal{M}$ of the vector field,

$$\Lambda(\Gamma) = \nabla \cdot G(\Gamma),$$

and we denote its time integral along the phase space trajectory segment delimited by the points reached at time $s$ and time $t$ by

$$\Lambda_{s,t}(\Gamma) = \int_s^t \Lambda(S^u \Gamma)\, du,$$

whose exponential, $e^{\Lambda_{s,t}(\Gamma)}$, gives the variation factor of the phase space volume element from $S^s\Gamma$ to $S^t\Gamma$.
Assume $\mathcal{M}$ is endowed with a probability measure $d\mu(\Gamma) = f_0(\Gamma)\, d\Gamma$, of density $f_0$, which evolves as prescribed by the Liouville equation:

$$\frac{\partial f_t}{\partial t} = -\nabla \cdot (f_t\, G). \qquad (28)$$

We denote by $f_t$ the probability density at time $t$, obtained as the solution of Equation (28) with the initial condition $f_0$. This can be rewritten in terms of the dissipation function [38], defined for a density $f$ by

$$\Omega^{f} = -\Lambda - G \cdot \nabla \ln f, \qquad (29)$$

as

$$\frac{\partial f_t}{\partial t} = \Omega^{f_t} f_t.$$

Starting at a point $\Gamma \in \mathcal{M}$, the corresponding integral of $\Omega^{f_0}$ along the trajectory segment delimited by $S^s\Gamma$ and $S^t\Gamma$ is given by

$$\Omega^{f_0}_{s,t}(\Gamma) = \int_s^t \Omega^{f_0}(S^u \Gamma)\, du,$$

and the solution of the Liouville equation can be equivalently expressed as

$$f_t(\Gamma) = e^{\Omega^{f_0}_{-t,0}(\Gamma)}\, f_0(\Gamma).$$

Note that the dynamics are assumed to be invertible, which does not mean that they have to be time-reversal invariant; it suffices that each point in a trajectory has a unique pre-image under the dynamics. The ensemble average of an observable $O$ at time $t$ can eventually be expressed as

$$\langle O \rangle_t = \langle O \rangle_0 + \int_0^t ds\, \left\langle (O \circ S^s)\, \Omega^{f_0} \right\rangle_0. \qquad (35)$$

Interestingly, Equation (35) only requires the unperturbed initial distribution $f_0$, a striking analogy with the linear response formulae (23) and (24), although it is an exact, not an approximate or perturbative, response formula. The difference lies in the fact that, unlike in the linear response formulae, here the dynamics $S^s$ is the perturbed one. See, e.g., Refs. [4,10,38] for detailed derivations.
For properly chosen $f_0$, $\Omega^{f_0}$ represents the dissipative flux, like $\dot A = \{A, H\} = \{A, H_0\}$, which appears in Equation (24). In particular, non-equilibrium molecular dynamics models require $f_0$ to be the equilibrium distribution of the unperturbed dynamics (drivings set to 0), subjected to the same constraints as the perturbed dynamics. For instance, if the perturbed dynamics preserves the kinetic energy thanks to isokinetic thermostats, the unperturbed dynamics must also preserve the kinetic energy, and $f_0$ must be invariant under the resulting (generally non-Hamiltonian) dynamics. In this case, analogously to the linear formula (23), Equation (35) concerns the correlation function of the evolving observable of interest with the dissipative flux, and such a correlation function is an average computed with respect to the initial distribution $f_0$. The difference is that the dynamics followed by the observable is the exact perturbed dynamics, not the approximate equilibrium dynamics. However, the formalism can now be used more generally, without the need for $f_0$ to be the equilibrium distribution, or for the system of interest to be a particle system.
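The exact response formula can be verified numerically on a solvable toy flow. The sketch below uses $\dot x = -x + F$ with a standard Gaussian $f_0$ (illustrative choices, not a system from the paper), for which both sides of the dissipation theorem can be computed explicitly:

```python
import numpy as np

# Numerical check of the dissipation theorem
#     <O>_t = <O>_0 + int_0^t <(O o S^s) Omega^{f0}>_0 ds,
# with Omega^{f0} = -Lambda - G d(ln f0)/dx, for the solvable toy flow
# x' = G(x) = -x + F and f0 = N(0,1); here Lambda = dG/dx = -1.
rng = np.random.default_rng(2)
F, t_max, n = 0.8, 2.0, 200000
x0 = rng.standard_normal(n)
Omega = x0 * (-x0 + F) + 1.0                 # -G*(-x) - (-1) = x*G + 1
s_grid = np.linspace(0.0, t_max, 401)
corr = np.empty_like(s_grid)
for i, s in enumerate(s_grid):
    xs = x0 * np.exp(-s) + F * (1.0 - np.exp(-s))   # exact flow S^s x
    corr[i] = np.mean(xs * Omega)                    # <(O o S^s) Omega>_0, O(x) = x
ttcf = np.sum(0.5 * (corr[1:] + corr[:-1]) * np.diff(s_grid))  # trapezoid rule
exact = F * (1.0 - np.exp(-t_max))           # analytic <x>_t (initial mean is 0)
```

Both the TTCF estimate and the direct solution give $\langle x \rangle_t = F(1 - e^{-t})$, and only the initial ensemble and the perturbed flow enter the computation, as in Equation (35).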

Exact Response of Stochastic Processes
In this section, we first adapt the exact response theory illustrated above to a generic abstract dynamical system perturbed by a stochastic, time-dependent vector field, extending the approach for deterministic time-dependent perturbations developed in Ref. [15]. Then, we distinguish three cases: (a) the deterministic case in which a single realization of the perturbation is fixed (as, e.g., in the Green-Kubo linear theory); thus, observables are only averaged with respect to the initial conditions of phase space trajectories; (b) the stochastic case in which the initial condition in the phase space is fixed, while the realizations of the perturbation vary; hence, observables are averaged over the distribution of such realizations; (c) the stochastic case in which both the phase space initial conditions and the perturbation realizations vary, and observables are averaged over both.

Wiener Process Perturbation
Given a dynamical system $\dot\Gamma = G(\Gamma)$ on a phase space $\mathcal{M}$, consider the following stochastic equation:

$$\dot\Gamma = G(\Gamma) + F\, w(t),$$

where $F$ is a constant that gives the strength of the perturbation, $T > 0$ can be much larger than the scale of observation, as in [1], and $w(t)$ is a Wiener process of the same dimension as $\mathcal{M}$. This can be represented by the Karhunen-Loève expansion [34]:

$$w(t) = \sqrt{2} \sum_{n=1}^{\infty} Z_n\, \frac{\sin\!\big((n - \frac{1}{2})\pi t / T\big)}{(n - \frac{1}{2})\pi / \sqrt{T}},$$

where the details of the process are determined by the vectors $Z_n = (Z_{1n}, \dots, Z_{Nn})$ or, equivalently, $\alpha_n = (\alpha_{1n}, \dots, \alpha_{Nn})$, which collect the stochastic coefficients of the modes. We recall that this expansion holds for all perturbations in $L^2[0,T]$, for any $T > 0$, so it is a very general result. As this is a zero-mean process, it does not include a net force acting on the system. However, such a force, constituting a systematic action about which the Wiener process fluctuates, can be included as a deterministic term in $G$.
Rather than directly computing the evolution of probability densities, we follow Ref. [15] in order to obtain a more flexible tool and to preserve the notion of time reversal invariance. Therefore, we first eliminate the explicit time dependence of the perturbation, introducing two new variables, $\theta$ and $\phi$, that evolve as a harmonic pair:

$$\dot\theta = \phi, \qquad \dot\phi = -\theta.$$

These new variables live on a closed curve whose size depends on the independently chosen initial conditions, $\theta_0$ and $\phi_0$. Then, we can write the perturbation as a combination of deterministic functions $w_n(\theta, \phi)$ multiplied by the stochastic coefficients $\alpha_n$. When $F$ is given, the initial conditions can be chosen so that $\theta(t) - i\phi(t) = F \exp(it)$; alternatively, the initial condition determines $F$ through the same equality. The difference between the two situations lies in the initial distribution of the points $(\theta, \phi)$, which we are now free to choose as the problem of interest requires.
In both cases, the explicit time dependence of the perturbation $w$ is turned into an implicit dependence, mediated by $\theta$ and $\phi$. The new dynamical system,

$$\dot{\tilde\Gamma} = \tilde G(\tilde\Gamma), \qquad \tilde\Gamma = (\Gamma, \theta, \phi),$$

is an autonomous system of differential equations, whose phases $\tilde\Gamma$ live in the new phase space $\tilde{\mathcal{M}} = \mathcal{M}_\Gamma \times \mathcal{M}_{\theta\phi}$, where $\mathcal{M}_\Gamma$ coincides with the original phase space of the unperturbed dynamics, $\mathcal{M}$, while $\mathcal{M}_{\theta\phi} \subset \mathbb{R}^2$ is the space of the points $(\theta, \phi)$, which depends on the problem at hand and may be bounded or not. In particular, a given initial condition confines $(\theta, \phi)$ to a circle of given radius $F$, since the auxiliary pair is a harmonic oscillator in one dimension, whose energy is fixed by the initial conditions. The corresponding new phase space variation rate equals the old one,

$$\tilde\Lambda(\tilde\Gamma) = \Lambda(\Gamma),$$

because the new coordinates describe a Hamiltonian system. Introducing a new probability density, $\tilde f_0$, the corresponding dissipation function, $\tilde\Omega^{\tilde f_0}$, can also be obtained by applying the general rule (29) to the quantities with a tilde. The density $\tilde f_0$ can be given in various guises. However, the distribution of $\theta$ and $\phi$ is not affected by the phase space coordinates $\Gamma$, and should not affect the distribution of $\Gamma$; therefore, it can be factorized as

$$\tilde f_0(\Gamma, \theta, \phi) = f_0(\Gamma)\, g_0(\theta, \phi).$$

In the case where the realization of the perturbation is fixed, and its initial value is given by $w_0 = w(\theta_0, \phi_0)$, one can take

$$g_0(\theta, \phi) = \delta(\theta - \theta_0)\, \delta(\phi - \phi_0),$$

where $f_0$ is invariant under the unperturbed dynamics, as in the usual molecular dynamics applications of the exact response formula, and $\delta$ is the Dirac delta function. On the other hand, finite accuracy in setting the initial value of $w$ may be described by replacing the $\delta$-function with a smooth, non-negative and normalized function, centered on $\theta_0$ and $\phi_0$, which vanishes outside a suitably narrow or wide neighborhood.
Physically, this is more meaningful; mathematically, it is useful, since it preserves the differentiability of $\tilde f_0$. In this case, we would write $g_0 = g_0^{\theta_0 \phi_0}$, where the relation to $\theta_0$ and $\phi_0$ is stressed. If, on the other hand, the magnitude of the perturbation is fixed, one may take $\mathcal{M}_{\theta\phi} = S_F$, the circle of radius $F$. Then, because this circle is traversed with uniform speed by the point $(\theta(t), \phi(t))$, and the origin of times is arbitrary, a natural form of the phase space distribution is the uniform density on $S_F$:

$$\tilde f_0(\Gamma, \theta, \phi) = \frac{f_0(\Gamma)}{2\pi F}, \qquad (\theta, \phi) \in S_F.$$

Let us rewrite the equations of motion componentwise as

$$\dot\Gamma_i = G_i(\Gamma) + F \sum_n \alpha_{in}\, w_n(\theta, \phi), \quad i = 1, \dots, N, \qquad \dot\theta = \phi, \qquad \dot\phi = -\theta,$$

where $(\Gamma_1, \dots, \Gamma_N) = \Gamma \in \mathcal{M}$ and $(G_1, \dots, G_N) = G$. Given the initial distribution, the new dissipation function follows from rule (29):

$$\tilde\Omega^{\tilde f_0}(\tilde\Gamma) = \Omega^{f_0}(\Gamma) - F \sum_n w_n(\theta, \phi)\; \alpha_n \cdot \nabla_\Gamma \ln f_0(\Gamma) - \left(\phi\, \partial_\theta - \theta\, \partial_\phi\right) \ln g_0(\theta, \phi).$$

Concerning the observables $\tilde O$ defined on $\tilde{\mathcal{M}}$, we only need those that do not depend on $\theta$ and $\phi$, i.e., such that $\tilde O(\Gamma, \theta, \phi) = O(\Gamma)$, where $O$ is an observable defined on $\mathcal{M}$. Nevertheless, the correlation of $\tilde\Omega^{\tilde f_0}$ with the time evolution of one such observable must be written in the enlarged phase space: although $\tilde O$ does not depend on the last two components of $\tilde\Gamma$ and can be replaced by $O$, the time evolution of $\Gamma$ and, hence, of the phases $\tilde\Gamma \in \tilde{\mathcal{M}}$, depends on $\Gamma$, $\theta$, and $\phi$. In the case where the time-dependent perturbation is fixed at time 0, i.e., the distribution $g_0$ is a delta function centered on the required initial values of $\theta$ and $\phi$, the dependence of $\tilde S^s \tilde\Gamma$ on such initial conditions can be absorbed in the deterministic part of the dynamics, and the different contributions to the above expressions take the following form. The third addend of $\tilde\Omega^{\tilde f_0}$ does not depend on $\Gamma$, and $\tilde O(\tilde S^s \tilde\Gamma) = O(S^s \Gamma)$; therefore, it contributes the product of an average of $O \circ S^s$ with an integral over $(\theta, \phi)$ alone. Because the variables $\theta$ and $\phi$ are only introduced as auxiliary quantities, and have no effect on the evolution in $\mathcal{M}$ and on the evolution of observables, their distribution $g_0$ can be chosen quite freely. Moreover, their roles are fully interchangeable, so we can take symmetric distributions, $g_0(\theta, \phi) = g_0(\phi, \theta)$, which make that integral vanish. The first addend of $\tilde\Omega^{\tilde f_0}$ yields the standard correlation with the unperturbed dissipation function, because $g_0$ is normalized. The remaining term, concerning the time-dependent perturbation, finally yields the contribution of the stochastic forcing, proportional to $F$. At this point, it remains to decide how to deal with the different realizations of the perturbation, namely, how to treat the stochastic vectors $\alpha_n$.
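The auxiliary-variable construction can be checked numerically: integrating the enlarged autonomous system must reproduce the trajectory of the original non-autonomous one. A minimal sketch, for a driven oscillator with illustrative parameters (a hand-rolled RK4 integrator keeps it self-contained):

```python
import numpy as np

# The explicit time dependence cos(t) of a forcing is traded for the auxiliary
# pair (theta, phi) with theta' = phi, phi' = -theta, theta(0) = 1, phi(0) = 0,
# so that theta(t) = cos(t) and the enlarged system is autonomous. Checked here
# on a driven oscillator q'' = -w^2 q + eps*theta (illustrative w, eps).

def rk4(f, y, t0, t1, steps=20000):
    """Classical fixed-step Runge-Kutta 4 integrator."""
    h = (t1 - t0) / steps
    t = t0
    for _ in range(steps):
        k1 = f(t, y)
        k2 = f(t + h / 2, y + h / 2 * k1)
        k3 = f(t + h / 2, y + h / 2 * k2)
        k4 = f(t + h, y + h * k3)
        y = y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        t += h
    return y

w, eps = 2.0, 0.3

def augmented(t, y):          # autonomous system: (q, p, theta, phi)
    q, p, th, ph = y
    return np.array([p, -w**2 * q + eps * th, ph, -th])

def driven(t, y):             # non-autonomous reference system
    q, p = y
    return np.array([p, -w**2 * q + eps * np.cos(t)])

ya = rk4(augmented, np.array([1.0, 0.0, 1.0, 0.0]), 0.0, 10.0)
yd = rk4(driven, np.array([1.0, 0.0]), 0.0, 10.0)
# ya[:2] agrees with yd, and ya[2] tracks cos(10)
```

The auxiliary pair is a Hamiltonian oscillator, so it contributes nothing to the phase space variation rate, in agreement with the identity $\tilde\Lambda = \Lambda$ above.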

Single Perturbation Realization
Here, we treat our system as deterministic, subjected to the time-dependent perturbation that corresponds to a single realization of the stochastic perturbation, labeled by $(j)$. In this case, the equations of motion read as in the previous section, with the expansion coefficients $\alpha^{(j)}_n = (\alpha^{(j)}_{n1}, \dots, \alpha^{(j)}_{nN})$ characterizing the chosen perturbation $(j)$, while the initial condition and the magnitude of the perturbation are fixed by $\theta$ and $\phi$. The phase space expansion rate is given by the unperturbed phase space volume variation rate, $\Lambda$, and the new dissipation function, $\Omega^{(j)}_{f_0}$, is obtained as in the previous section, with $g_0$ the distribution of the initial condition $(\theta_0, \phi_0)$ of the auxiliary variables. Choosing $g_0$ as in the previous section, the last addend of $\Omega^{(j)}_{f_0}$ does not affect the response of observables. Therefore, given an observable $O$ (which depends on $\Gamma$ only), and denoting by $S^{(j)}_s$ the evolution operator of the perturbed dynamics in $\mathcal{M}$, we can write the response formula for this realization, whose deterministic part does not depend on $F$, cf. Equation (41). Here, the integral concerning the auxiliary variables gives the mean of $w_n$ with respect to the initial distribution of $(\theta, \phi)$. If this is fixed, the integral simply yields a constant $K_{\theta_0 \phi_0}$, which depends on the initial condition and on the mode frequency $(n - 1/2)\pi/T$. The situation is analogous if the initial values of $\theta$ and $\phi$ are distributed with any other density $g_0$. Naturally, letting $F$ vanish makes the stochastic contribution vanish as well, thus recovering the response formula for time-independent perturbations. So far, we have considered the coefficients $\alpha^{(j)}_{nk}$ as fixed, which makes the dynamics deterministic, with averages taken only over the initial conditions in the phase space.

Stochastic Coefficients with Fixed Initial Condition
Let us now average over the distribution of the stochastic coefficients, considering the first and second cumulants of the fluctuations generated by the stochastic perturbation, in relation to the response due to the deterministic term $G$. The higher-order cumulants vanish, since the stochastic coefficients are Gaussian random variables. Keeping the initial condition $(\Gamma, \theta, \phi) \in \tilde{\mathcal{M}}$ fixed, we denote by $\langle \cdot \rangle^{(st)}$ the averages taken with respect to all realizations of the stochastic perturbation. Consider the equation of motion (42): since the stochastic coefficients have zero mean, the first cumulant of $\dot\Gamma$ equals that of the deterministic part of the equation of motion,

$$\big\langle \dot\Gamma \big\rangle^{(st)} = G(\Gamma),$$

and the same holds after integrating in time. For the second cumulant, consider, for instance, the component $\Gamma_l$ of $\Gamma$. Recalling that the random coefficients $Z_k$ are uncorrelated, the cross terms average to zero, and the second cumulant of $\dot\Gamma_l$ at times $t$ and $t'$ reduces to a single sum over the modes, weighted by the variances $\lambda_k$; setting $t = t'$ then gives the equal-time fluctuation. Integrating expression (70) in time, and using Equations (40) and (41), one obtains the second cumulant of $\int_0^t \dot\Gamma_l(s)\, ds$, together with the corresponding autocorrelation function and its equal-time value. For the dissipation function, the first cumulant is trivial, since the stochastic coefficients have zero mean; the second cumulant then follows from the same statistical properties of the stochastic coefficients.
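A simple consistency check of these second-cumulant expressions is available for the Wiener process itself: the variance of its time integral, computed mode by mode from the KL eigenvalues, must reproduce the known value $t^3/3$. A minimal sketch:

```python
import numpy as np

# Second-cumulant consistency check for the KL expansion of the Wiener process:
# since the Z_k are uncorrelated with variance lambda_k, the time integral
# I_t = int_0^t w_s ds has variance sum_k lambda_k (int_0^t e_k(s) ds)^2,
# which must reproduce Var(I_t) = int_0^t int_0^t min(s, s') ds ds' = t^3/3.
T, t = 1.0, 0.6
k = np.arange(1, 2001)
nu = (k - 0.5) * np.pi / T                                # nu_k = 1/sqrt(lambda_k)
lam = 1.0 / nu**2                                         # eigenvalues lambda_k
int_e = np.sqrt(2.0 / T) * (1.0 - np.cos(nu * t)) / nu    # int_0^t e_k(s) ds
var_kl = np.sum(lam * int_e**2)
var_exact = t**3 / 3.0
```

Since the terms decay like $\nu_k^{-4}$, a few thousand modes already give agreement to many digits, confirming that only the diagonal (delta-correlated) terms survive in the second cumulant.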

Averaging Over Initial Conditions and Stochastic Coefficients
Given a single realization of the stochastic perturbation, the response formula for an observable $O$ takes the form of Equation (35), computed with the dissipation function and the dynamics of that realization. If we further average over the realizations of the stochastic process, denoting such averages by $\langle \cdot \rangle^{(st)}$, we can write

$$\big\langle \langle O \rangle_t \big\rangle^{(st)} = \langle O \rangle_0 + \int_0^t ds\; \Big\langle \big\langle (O \circ \tilde S^s)\; \tilde\Omega^{\tilde f_0} \big\rangle_0 \Big\rangle^{(st)}.$$

Stochastic Single Frequency Periodic Force
To illustrate the theory developed above, let us consider a simple example that can be treated analytically; more complex situations can be dealt with by numerical integration. Consider a harmonic oscillator in one dimension, perturbed by a time-dependent force, for which we compare the linear and the exact response. Let

$$H_0 = \frac{p^2}{2} + \frac{\omega^2 q^2}{2}$$

be the unperturbed Hamiltonian, and consider a single-frequency perturbation of amplitude $\epsilon Z$, where $\epsilon$ is a pure number and $Z$ has the dimensions of a force; $Z$ could be deterministic as a special case, but is stochastic in general. The perturbed energy follows accordingly. We make the system autonomous by introducing the two new variables, $\theta$ and $\phi$, so that the dynamics is given by

$$\dot q = p, \qquad \dot p = -\omega^2 q + \epsilon Z \theta, \qquad \dot\theta = \phi, \qquad \dot\phi = -\theta,$$

with the perturbation expressed in terms of $\theta$. The motion is then obtained by superposing the free oscillation determined by the initial condition and the particular solution driven by the forcing. Let us compare the linear and exact responses of our system to the perturbation. First, suppose that $\tilde f_0$ factorizes into the equilibrium density for $(q,p)$ and the uniform density on $\sigma$, the circle of the variables $\theta$ and $\phi$, whose radius depends on the initial conditions. The exact response theory yields the ensemble average of the observable along the perturbed dynamics $S^s_w$ in the augmented phase space, while the linear response theory yields the corresponding expression along the unperturbed dynamics $S^s_0$. Let us consider, for instance, $O = q$. Then, the linear response for a single realization of the perturbation, i.e., for a given value of $Z$, yields a non-trivial time-dependent average, while the exact response gives

$$\langle q \rangle_t = 0,$$

because the integral in $\theta$ and $\phi$ vanishes (see Appendix A for details). Therefore, the linear and exact responses differ under the chosen conditions. If, on the other hand, the initial condition of the auxiliary variables is fixed to a single point, with $\theta_0 = 1$ and $\phi_0 = 0$, we have $F = 1$, and the exact response contains an integral on the curve in $d\theta$ and $d\phi$ that gives the length of the curve, which is $\sigma$. This is the same as the linear response with $(\theta_0, \phi_0)$ uniformly distributed on the unit circle. If we take $O = p$ as an observable, we obtain similar results: again, the same response is obtained only for the fixed initial condition $(\theta_0 = 1, \phi_0 = 0)$. This is not strange; it happens when the exact response amounts to a first-order perturbation, which is the case in special situations [39]. However, in general, this condition is not verified.
For instance, consider the observable $O = qp$. The linear theory yields

$$\langle qp \rangle_t = 0,$$

because integrating the terms $pq^2$, $qp^2$, $p^3$ over $dq\, dp$ yields zero. For the exact response, taking into account all the terms that cancel under integration over $dq\, dp$, and those that cancel by integrating over $d\theta\, d\phi$, what remains are elliptic integrals over $d\theta\, d\phi$, which can be solved numerically, and integrals over $t$ of sine and cosine terms. The result does not vanish, as can be seen by inspection. In this example, even fixing the initial conditions as $(\theta_0 = 1, \phi_0 = 0)$ and using the Dirac delta distribution, we obtain a non-vanishing result. Here, the linear response yields zero because the exact response is second order in the perturbation magnitude $F$. When $F$ is sufficiently small, the terms of order $O(F^2)$ can be neglected, but not in general. Furthermore, forced oscillators may undergo resonance phenomena, greatly amplifying the amplitude of their oscillations even when driven by very weak forces. This is captured by the resonant denominators of expression (115), which are missing in the linear response. Near the resonances, the linear response is bound to fail, even for small driving forces, and the exact formula is necessary.
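The non-vanishing second-order response for $O = qp$ can be observed directly in a Monte Carlo experiment. The sketch below uses a canonical $f_0$ with a fixed forcing phase and a deterministic amplitude (illustrative parameter values): the homogeneous part averages to zero, so $\langle qp \rangle_t$ reduces to the product of the particular solution and its derivative, which is invisible to linear response:

```python
import numpy as np

# For O = qp the linear response vanishes while the full response is O(eps^2).
# With canonical f0 (beta = 1) one has <q_h p_h>_t = 0 for the homogeneous
# motion, so <qp>_t = Q(t) Q'(t), where Q(t) = eps (cos t - cos(w t))/(w^2 - 1)
# is the particular solution of q'' = -w^2 q + eps cos t with Q(0) = Q'(0) = 0.
# (Illustrative w, eps, t; deterministic amplitude, fixed forcing phase.)
rng = np.random.default_rng(3)
w, eps, t, n = 2.0, 1.0, 2.0, 400000
q0 = rng.normal(0.0, 1.0 / w, n)                 # canonical: var(q) = 1/w^2
p0 = rng.normal(0.0, 1.0, n)                     # canonical: var(p) = 1
Q = eps * (np.cos(t) - np.cos(w * t)) / (w**2 - 1)
Qd = eps * (-np.sin(t) + w * np.sin(w * t)) / (w**2 - 1)
q = q0 * np.cos(w * t) + (p0 / w) * np.sin(w * t) + Q    # full driven motion
p = -q0 * w * np.sin(w * t) + p0 * np.cos(w * t) + Qd
qp_mc = np.mean(q * p)        # ensemble average of qp under the exact dynamics
qp_exact = Q * Qd             # O(eps^2) term, missed by linear response
```

The resonant denominator $w^2 - 1$ also makes explicit how the response grows near resonance, as discussed above.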

Viscous Medium and General Stochastic Forcing
As the unperturbed dynamics, consider the one-dimensional motion of a single particle in a viscous fluid, with constant drag coefficient $\gamma$:

$$\dot x = v, \qquad \dot v = -\gamma v.$$

It is irrelevant that this is not a Hamiltonian system, since the theory illustrated above applies to general dynamical systems. As the distribution of the initial conditions, let us take, for simplicity, a density $f_0$ that is not invariant under the unperturbed dynamics. As $f_0$ is not invariant, and the dynamics is, in turn, not Hamiltonian, the dissipation function need not represent the dissipated power; however, the mathematics can still be carried out.
Let us perturb the dynamics with stochastic forcing of mean $F > 0$, written as the constant $F$ plus a zero-mean Karhunen-Loève expansion, so that the mean of each expansion coefficient vanishes. If the initial distribution of the phases $(\theta, \phi)$ is uniform on the unit circle, $g_0(\theta, \phi) = 1/2\pi$, the initial distribution on the enlarged phase space takes the factorized form $\tilde f_0 = f_0(x,v)/2\pi$. In this example, the original phase space has two dimensions, $\Gamma = (x, v)$, and the perturbation only appears in the equation for $v$; therefore, we have $\alpha_n = (\alpha_{nx}, \alpha_{nv}) = (0, \alpha_{nv})$. Moreover, $w_n$ only depends on $\theta$ because, on the unit circle, $\phi = \phi(\theta)$. We can then express the forcing in terms of functions $W_n(\theta)$, representing $w_n(\theta, \phi)$ on the unit circle. Consequently, the average at time $t$ of an observable $O$ can be computed once the distribution of the vectors $\alpha_n$ is given, considering that the evolution $S^s(x, v)$ also depends on $\theta$, while $\alpha_n$ and $(x, v)$ do not. The performance of this formula, compared to direct averaging, must be assessed numerically; it is expected to be superior, as in the time-independent deterministic case, because the expected convergence issues are analogous.
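As a direct-averaging baseline for this example, one can simulate the stochastically forced particle and check that the mean velocity relaxes to $F/\gamma$, since the zero-mean part of the forcing averages out. An Euler-Maruyama sketch with illustrative parameters (white-noise forcing standing in for the KL-expanded one):

```python
import numpy as np

# Direct-average baseline for the driven viscous particle x' = v,
# v' = -gamma v + F + noise: averaged over realizations, the zero-mean noise
# drops out and <v>_t relaxes to the stationary value F/gamma.
# (Euler-Maruyama sketch; gamma, F, sigma are illustrative, and white noise
# replaces the KL-expanded forcing of the text.)
rng = np.random.default_rng(4)
gamma, F, sigma = 1.0, 0.5, 0.3
dt, n_steps, n_paths = 1e-3, 6000, 5000
v = np.zeros(n_paths)
for _ in range(n_steps):
    noise = sigma * np.sqrt(dt) * rng.standard_normal(n_paths)
    v += (-gamma * v + F) * dt + noise
v_mean = v.mean()     # close to (F/gamma)(1 - exp(-gamma t)), i.e. near F/gamma
```

The TTCF formula above targets the same limit; comparing its statistical error against this direct estimate is precisely the numerical assessment mentioned in the text.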

Concluding Remarks
In this paper, we provided a generalization to stochastic processes of the exact response theory developed for deterministic molecular dynamics models, also known as TTCF in the field of molecular dynamics, where it mostly concerns time-independent perturbations, apart from a few seminal papers on periodic perturbations. This was achieved by starting from the previously developed time-dependent response theory of Ref. [15], expressing the stochastic perturbation as a Karhunen-Loève expansion, and then adding two auxiliary variables that make the corresponding time-dependent system of ordinary differential equations autonomous. As in Ref. [15], which is based on the Fourier expansion of the perturbing field, information on the spectral properties of the forcing and of the response is directly available. The main ingredient of the exact response theory, the dissipation function, has then been derived in the enlarged phase space. This allows us to treat processes whose initial conditions are variously distributed, adding flexibility to the general response theory, which considers fixed initial perturbations. For instance, we can now compute different kinds of averages, which describe different experimental situations, i.e., time averages over single realizations of stochastic processes, ensemble averages over the initial conditions in the enlarged phase space, and combinations of the two.
We illustrated the theory by applying it to a one-dimensional harmonic oscillator, perturbed by a sinusoidal force whose amplitude can be either deterministic or stochastic. This is enough to compare the exact response with the linear response. Although in special situations the two responses may coincide to a certain degree, thus delimiting the range of applicability of the linear theory, they differ in general, even in such simple settings. The exact formula is necessary not only in the presence of large perturbations, as is obvious, but also for small perturbations in the presence of resonance phenomena. Furthermore, we have shown that different results are obtained for different distributions of the initial perturbation, something that is not obvious within the standard linear response theory. Indeed, enlarging the phase space to make the system autonomous allows us to choose the distribution of the initial condition of the perturbation, referring to different kinds of experiments, in which the initial perturbation can either be deterministically fixed, as commonly assumed, or distributed in various ways. As a second simple example, we considered one particle moving in a one-dimensional viscous medium, under the action of stochastic driving with a non-vanishing mean, so that a non-equilibrium stationary state may eventually be reached. Note that, even in this case of perturbations that exert a net action on the system, we only needed to explicitly develop the theory for zero-mean perturbations. Indeed, the possible net perturbation may be included in the autonomous part of the dynamics, for which the exact theory was developed in the past; the results apply to generic dynamical systems, extending beyond the field of molecular dynamics where the theory was born.
This is in addition to the fact that many theoretical results can be derived from the exact response formula before they are numerically computed in simulations, e.g., see [7,40,41]. Moreover, the linear response cannot treat phase transitions [42], and various studies show an impressively better performance of the TTCF formula compared to other averaging methods for time-independent perturbations [5,6]. We expect this to be the case with our approach when dealing with time-dependent (deterministic or stochastic) perturbations. This will constitute the basis of future works.