Proceeding Paper

Analysis of Dynamical Field Inference in a Supersymmetric Theory †

by Margret Westerkamp 1,2,*, Igor V. Ovchinnikov 3,*, Philipp Frank 1,2,* and Torsten Enßlin 1,2,3,*
1 Max Planck Institute for Astrophysics, Karl-Schwarzschildstraße 1, 85748 Garching, Germany
2 Physics Department, Ludwig-Maximilians-Universität, Geschwister-Scholl-Platz 1, 80539 Munich, Germany
3 Excellence Cluster Universe, Technische Universität München, Boltzmannstr. 2, 85748 Garching, Germany
* Authors to whom correspondence should be addressed.
Presented at the 41st International Workshop on Bayesian Inference and Maximum Entropy Methods in Science and Engineering, Paris, France, 18–22 July 2022.
Phys. Sci. Forum 2022, 5(1), 27; https://doi.org/10.3390/psf2022005027
Published: 12 December 2022

Abstract:
The inference of dynamical fields is of paramount importance in science, technology, and economics. Dynamical field inference can be based on information field theory and used to infer the evolution of fields in dynamical systems from finite data. Here, the partition function, as the central mathematical object of our investigation, invokes a Dirac delta function as well as a field-dependent functional determinant, which impede the inference. To tackle this problem, Faddeev–Popov ghosts and a Lagrange multiplier are introduced to represent the partition function by an integral over those fields. According to the supersymmetric theory of stochastics, the action associated with the partition function has a supersymmetry for those ghost and signal fields. In this context, the spontaneous breakdown of supersymmetry leads to chaotic behavior of the system. To demonstrate the impact of chaos, characterized by positive Lyapunov exponents, on the predictability of a system’s evolution, we show for the case of idealized linear dynamics that the dynamical growth rates of the fermionic ghost fields impact the uncertainty of the field inference. Finally, by establishing perturbative solutions to the inference problem associated with an idealized nonlinear system, using a Feynman diagrammatic expansion, we show that the fermionic contributions, implementing the functional determinant, are key to obtaining the correct posterior of the system.

1. Introduction

The evolution of some field of interest, called the signal field, in a dynamical system described by a stochastic differential equation is not entirely determined by initial and boundary conditions, but is excited by some stochastic field. These kinds of dynamics play an important role not just in physics [1], but also in biology [2], chemistry [3], and economics [4,5], and can arise from imperfections in the model [6] or from intrinsically stochastic behavior [7]. Here, we consider a measurement in the dynamical system, from which we can infer the current state and the evolution of a signal field using dynamical field inference (DFI). The basis of DFI is information field theory (IFT), which is an information theory for fields within a Bayesian framework. DFI allows one, in principle, to work with arbitrary stochastic differential equations to reconstruct a field, i.e., an infinite-dimensional signal, from some finite-dimensional measurement or other data. The reconstruction of a signal field in DFI exploits the prior knowledge on the signal properties, which is encoded in the stochastic differential equation. Nonlinearities in the stochastic differential equation lead to a complicated structure of the prior probability, which impedes the reconstruction of the corresponding signal field. Here, the central object of the investigation is the partition function. To cope with the problematic terms of the prior probability in the partition function, the supersymmetric theory of stochastics (STS) [8,9,10,11] introduces bosonic and fermionic ghost fields, which prove to obey a supersymmetry. In accordance with STS, we analyze this supersymmetry in the system from an IFT perspective. One of the major takeaways of STS is that the spontaneous breakdown of the observed supersymmetry leads to chaotic behavior of the system.
We show for two instructive examples of dynamical systems that chaos impacts the uncertainty of the field inference on a level that depends on the relevant system’s Lyapunov exponent.

2. Information Field Theory

In many areas of science, technology, and economics, the task of interpreting incomplete, noisy, and finite datasets arises [12,13]. Especially if the quantity of interest is a field, there is an infinite number of possible signal field realizations that meet the constraints of a finite number of measurements. For such problems, called field inference problems, IFT was developed.
A physical field $\varphi : \Omega \to \mathbb{R}$ is a function that assigns a value to each point in time and $u$-dimensional position space. Here, space and time are handled in the same manner as in [14,15] and are denoted by the space-time location $x = (\vec{x}, t) \in \Omega = \mathbb{R}^u \times \mathbb{R}_0^+$, $u \in \mathbb{N}$. For definiteness, we let the time axis start at $t_0 = 0$. As the field $\varphi_x = \varphi(x)$ has an infinite number of degrees of freedom, integration over field configurations is represented by path integrals with the integration measure $\mathcal{D}\varphi = \prod_{x \in \Omega} \mathrm{d}\varphi_x$ [16]. In the following, these space-time-coordinate-dependent fields are denoted as abstract vectors in a Hilbert space, such that the scalar product can be written as $\gamma^\dagger \varphi := \int \mathrm{d}x\, \gamma^*(x)\, \varphi(x)$, where $*$ denotes complex conjugation.
In order to learn about a field $\varphi$, one has to measure it. Bayes’ theorem states how to update any existing knowledge given a finite number of measurement constraints. In this probabilistic logic, knowledge states are described by probability distributions. Accordingly, the prior knowledge $P(\varphi)$ is updated given a data vector $d$ to the posterior probability [17]
$$ P(\varphi|d) = \frac{P(d|\varphi)\, P(\varphi)}{P(d)} = \frac{P(d|\varphi)\, P(\varphi)}{\int \mathcal{D}\varphi'\; P(d|\varphi')\, P(\varphi')} . \tag{1} $$
To construct the posterior, we need the prior $P(\varphi)$ and the likelihood $P(d|\varphi)$. The evidence $P(d)$ is the normalization of the posterior and can be calculated from the prior and the likelihood. The prior probability of the signal $\varphi$ specifies the knowledge on the signal before any measurement, whereas the likelihood describes the measurement process. Any measurement process can be described by the response $R$ of an instrument, which maps the continuous signal to a finite, discrete dataset, and by some additive noise $n$, $d = R[\varphi] + n$. The statistics of the noise then determine the likelihood. In particular, if we assume Gaussian noise with known covariance $N$, we get
$$ P(d|\varphi) = \int \mathcal{D}n\; P(d|n, \varphi)\, P(n|\varphi) = \int \mathcal{D}n\; \delta\big(d - R[\varphi] - n\big)\, \mathcal{G}(n, N) = \mathcal{G}\big(d - R[\varphi], N\big) . \tag{2} $$
Here, we can also define initial conditions via initial data $d_0 = \varphi_0 = \varphi(\cdot, t_0)$. In this case, the response is defined as $R_0[\varphi] = \varphi(\cdot, t_0)$ and the noise vanishes, $P(n) = \delta(n)$. Thus, the initial-data likelihood is represented by $P(d_0|\varphi) = \delta(\varphi(\cdot, t_0) - \varphi_0)$ and can be combined with any data on the later evolution, $\tilde{d}$, via $P(d|\varphi) = P(d_0|\varphi)\, P(\tilde{d}|\varphi)$, with $d = (d_0, \tilde{d})$.
For the reconstruction, particularly the cumulants of first and second order are of interest, as they describe the mean, $m = \langle \varphi \rangle^{\mathrm{c}}_{(\varphi|d)} = \langle \varphi \rangle_{(\varphi|d)}$, and the uncertainty dispersion, $D = \langle \varphi \varphi^\dagger \rangle^{\mathrm{c}}_{(\varphi|d)} := \langle (\varphi - m)(\varphi - m)^\dagger \rangle_{(\varphi|d)}$. These posterior-connected correlation functions, or cumulants, can be obtained via the moment-generating partition function $Z_d[J]$,
$$ Z_d[J] = \int \mathcal{D}\varphi\; e^{-H(d, \varphi) + J^\dagger \varphi}, \qquad \langle \varphi(x_1) \cdots \varphi(x_n) \rangle^{\mathrm{c}}_{(\varphi|d)} := \left. \frac{\delta^n \ln Z_d[J]}{\delta J^*(x_1) \cdots \delta J^*(x_n)} \right|_{J=0} , \tag{3} $$
where $H(d, \varphi) := -\ln P(d, \varphi)$ is the so-called information Hamiltonian.
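A one-dimensional toy version of Equation (3) makes this concrete. Assuming a single Gaussian degree of freedom with hypothetical prior mean $m_0$ and variance $s_2$ (our illustrative choice, not a construction from the text), differentiating $\ln Z[J]$ at $J = 0$ recovers mean and variance as the first two cumulants; a sketch in SymPy:

```python
import sympy as sp

# Single Gaussian "pixel": information Hamiltonian H = (phi - m0)^2 / (2 s2).
# The moment-generating partition function Z[J] = Int dphi exp(-H + J phi)
# yields the cumulants via derivatives of log Z at J = 0, as in Eq. (3).
phi, J = sp.symbols('phi J', real=True)
m0 = sp.symbols('m0', real=True)
s2 = sp.symbols('s2', positive=True)

H = (phi - m0)**2 / (2 * s2)
Z = sp.integrate(sp.exp(-H + J * phi), (phi, -sp.oo, sp.oo))  # closed-form Gaussian integral

first_cumulant = sp.simplify(sp.diff(sp.log(Z), J)).subs(J, 0)   # the mean m
second_cumulant = sp.simplify(sp.diff(sp.log(Z), J, 2))          # the dispersion D
```

Both cumulants reduce to the Gaussian parameters, illustrating that the first cumulant is the mean and the second the uncertainty dispersion.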

3. Dynamical Field Inference

In DFI, a signal is inferred for which we have prior information on the dynamics of the system. In particular, we consider a signal that obeys a stochastic differential equation,
$$ \partial_t \varphi(x) = F[\varphi](x) + \xi(x) . \tag{4} $$
The first part of the stochastic differential equation, $\partial_t \varphi(x) = F[\varphi](x)$, describes the deterministic dynamics of the field, and the excitation field $\xi$ turns this deterministic evolution into a stochastic one. Here, the signal field $\varphi$ is defined for all $x \in \Omega = \mathbb{R}^u \times \mathbb{R}_0^+$, while $\partial_t \varphi$ and $\xi$ live on $\mathbb{R}^u \times \mathbb{R}^+$, i.e., excluding the initial time. Therefore, Equation (4) only makes statements about fields at times $t > t_0$. Equation (4) can be summarized by an operator $\tilde{G}[\varphi]$, $\tilde{G}: C^{n,1}(\Omega) \to C(\mathbb{R}^u \times \mathbb{R}^+)$, which contains all the time and space derivatives of the stochastic differential equation up to order $n$ in space,
$$ \tilde{G}[\varphi](x) = \xi(x) \quad \text{with} \quad \tilde{G}[\varphi](x) := \partial_t \varphi(x) - F[\varphi](x) , \tag{5} $$
where $\tilde{\varphi} = \varphi(\cdot, t > t_0)$ and $\varphi = (\varphi_0, \tilde{\varphi})$. In order to define the prior information given by the stochastic differential equation, we assume Gaussian statistics for the excitation field, $P(\xi) = \mathcal{G}(\xi, \Xi)$, with known covariance $\Xi$. We define the combined vector of fields $\eta = (\varphi_0, \xi)$ that determines the dynamics, with $P(\eta) = P(\varphi_0)\, \mathcal{G}(\xi, \Xi)$, and extend the operator $\tilde{G}$ by the initial conditions to $G[\varphi] = (\varphi_0, \tilde{G}[\varphi])$, $G: C^{n,1}(\Omega) \to C(\Omega)$, such that $G[\varphi] = \eta$. Assuming that there is a unique solution to the stochastic differential equation in Equation (5), the prior probability can be calculated via $P(\varphi|\eta) = \delta(\eta - G[\varphi])\, \big| \delta \tilde{G}[\varphi] / \delta \tilde{\varphi} \big|$ as follows:
$$ P(\varphi) = \int \mathcal{D}\eta\; P(\varphi|\eta)\, P(\eta) = \underbrace{\frac{e^{-\frac{1}{2} \tilde{G}[\varphi]^\dagger \Xi^{-1} \tilde{G}[\varphi]}}{|2\pi\Xi|^{1/2}}}_{=:\, \mathcal{B}(\varphi)}\; \underbrace{\left| \frac{\delta \tilde{G}[\varphi]}{\delta \tilde{\varphi}} \right|}_{=:\, \mathcal{J}(\varphi)}\; P(\varphi_0) , \tag{6} $$
where $\delta \tilde{G}[\varphi] / \delta \tilde{\varphi} : C(\mathbb{R}^u \times \mathbb{R}^+) \to C(\mathbb{R}^u \times \mathbb{R}^+)$.
From this, we see that the prior contains a signal-dependent term $\mathcal{B}(\varphi)$ from the excitation statistics as well as a functional determinant, the Jacobian $\mathcal{J}(\varphi)$. For nonlinear dynamics, i.e., $\tilde{G}[\varphi] \neq L \varphi$ for any linear operator $L$, the Jacobian becomes field-dependent and the term $\mathcal{B}(\varphi)$ becomes highly non-Gaussian. To represent these terms conveniently, we introduce Faddeev–Popov ghost fields in the following.
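To build intuition for the forward problem before introducing the ghost-field representation, the stochastic differential Equation (4) can be integrated numerically. The following Euler–Maruyama sketch assumes, purely for illustration, a single mode with linear drift $F[\varphi] = \lambda \varphi$ and white excitation with $\Xi = \mathbb{1}$; neither choice is prescribed by the text:

```python
import numpy as np

# Euler--Maruyama discretization of d(phi)/dt = lam * phi + xi with white
# Gaussian excitation of unit covariance (Xi = 1), i.e. dW ~ N(0, dt).
def euler_maruyama(phi0, lam, dt, n_steps, rng):
    phi = np.empty(n_steps + 1)
    phi[0] = phi0
    for k in range(n_steps):
        dW = rng.normal(0.0, np.sqrt(dt))        # Wiener increment
        phi[k + 1] = phi[k] + dt * lam * phi[k] + dW
    return phi

rng = np.random.default_rng(0)
paths = np.array([euler_maruyama(0.0, -1.0, 1e-3, 2000, rng)
                  for _ in range(200)])
# For lam < 0 (an Ornstein--Uhlenbeck mode) the ensemble variance relaxes
# towards the stationary value 1 / (2 |lam|).
final_variance = paths[:, -1].var()
```

For $\lambda > 0$ the same integrator produces exponentially diverging ensembles, the situation associated with positive Lyapunov exponents later in the text.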
The Jacobian in DFI can be represented via an integral over independent Grassmann fields $\chi$ and $\bar{\chi}$, which are scalar fields that obey the Pauli principle,
$$ \mathcal{J} = \left| \frac{\delta \tilde{G}[\varphi]}{\delta \tilde{\varphi}} \right| = \int \mathcal{D}\chi\, \mathcal{D}\bar{\chi}\;\, e^{\, i \bar{\chi}^\dagger \frac{\delta \tilde{G}[\varphi]}{\delta \tilde{\varphi}} \chi} . \tag{7} $$
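For a single Grassmann degree of freedom, this representation can be checked directly; since $\chi^2 = \bar{\chi}^2 = 0$, the exponential series truncates after the linear term (a sketch using the Berezin conventions $\int \mathrm{d}\chi\, \chi = 1$ and $\int \mathrm{d}\chi\, 1 = 0$):
$$ \int \mathrm{d}\bar{\chi}\, \mathrm{d}\chi\; e^{\, i \bar{\chi}\, a\, \chi} = \int \mathrm{d}\bar{\chi}\, \mathrm{d}\chi\; \big( 1 + i \bar{\chi}\, a\, \chi \big) \propto a . $$
For an $n \times n$ operator, the analogous integral produces the determinant term by term, which is precisely the mechanism by which the fermionic ghost fields implement $\mathcal{J}(\varphi)$.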
For the representation of the term $\mathcal{B}(\varphi)$, we step back to the initial formulation including the excitation field and introduce a Lagrange multiplier field $\beta$ to substitute the $\delta$-function,
$$ \mathcal{B}(\varphi) = \int \frac{\mathcal{D}\xi}{|2\pi\Xi|^{1/2}}\; \delta\big(\xi - \tilde{G}[\varphi]\big)\, e^{-\frac{1}{2} \xi^\dagger \Xi^{-1} \xi} = \int \frac{\mathcal{D}\xi\, \mathcal{D}\beta}{|2\pi\Xi|^{1/2}\, |2\pi\mathbb{1}|}\; e^{-\frac{1}{2} \xi^\dagger \Xi^{-1} \xi\, -\, i \beta^\dagger \left( \tilde{G}[\varphi] - \xi \right)} . \tag{8} $$
This leads to the prior,
$$ P(\varphi) \propto \int \frac{\mathcal{D}\xi\, \mathcal{D}\beta\, \mathcal{D}\chi\, \mathcal{D}\bar{\chi}}{|2\pi\Xi|^{1/2}\, |2\pi\mathbb{1}|}\; e^{-\frac{1}{2} \xi^\dagger \Xi^{-1} \xi\, +\, i \bar{\chi}^\dagger \frac{\delta \tilde{G}[\varphi]}{\delta \tilde{\varphi}} \chi\, -\, i \beta^\dagger \left( \tilde{G}[\varphi] - \xi \right)\, -\, H(\varphi_0)} , \tag{9} $$
where $H(\varphi_0)$ is the information on the initial conditions. Finally, integrating over the excitation $\xi$, we can write the data-free partition function as an integral over the tuple of fields $\psi = (\varphi, \beta, \chi, \bar{\chi})$,
$$ Z = \int \mathcal{D}\varphi\; P(\varphi) = \int \mathcal{D}\varphi\; e^{-H(\varphi)} = \int \mathcal{D}\psi\; e^{-H(\psi|\varphi_0) - H(\varphi_0)} . \tag{10} $$
Here, $H(\psi|\varphi_0) = -\ln P(\psi|\varphi_0)$ is the information Hamiltonian for the field tuple $\psi$, given some initial position $\varphi_0$, defined by Equation (9),
$$ H(\psi|\varphi_0) = \frac{1}{2} \beta^\dagger \Xi \beta - i \bar{\chi}^\dagger \frac{\delta \tilde{G}[\varphi]}{\delta \tilde{\varphi}} \chi + i \beta^\dagger \tilde{G}[\varphi] = \underbrace{-\, i \bar{\chi}^\dagger \partial_t \chi + i \beta^\dagger \partial_t \varphi}_{=:\, H_{\mathrm{dyn}}(\psi|\varphi_0)}\; \underbrace{-\, i \beta^\dagger F[\varphi] + i \bar{\chi}^\dagger \frac{\delta F[\varphi]}{\delta \tilde{\varphi}} \chi + \frac{1}{2} \beta^\dagger \Xi \beta}_{=:\, H_{\mathrm{stat}}(\psi|\varphi_0)} . $$
The dynamic information, H dyn , contains the time derivatives of the fermionic and bosonic fields, whereas the meaning of the static information H stat shall be analyzed in the following section.

4. Ghost Fields in Dynamical Field Inference

In the previous section, we introduced auxiliary fields to substitute the Jacobian and the $\delta$-function. In this section, we analyze the meaning of those fermionic and bosonic fields and of the corresponding terms in the information Hamiltonian. The auxiliary fields are called ghost fields, as they are only part of the formalism and cannot be measured. It can be shown that the corresponding information Hamiltonian is only well defined if these ghost fields do not exist at the initial time $t_0$ [18].
In the case of a white excitation field, the partition function of DFI for a bosonic and a fermionic field can be derived via the Markov property, i.e.,
$$ Z = \int \mathcal{D}\varphi\, \mathcal{D}\chi\; P(\varphi, \chi) = \int \prod_{n=0}^{N} \mathcal{D}\varphi_n\, \mathcal{D}\chi_n\; P(\varphi_N, \chi_N | \varphi_{N-1}, \chi_{N-1}) \times \cdots \times P(\varphi_1, \chi_1 | \varphi_0)\, P(\varphi_0) , $$
where $\varphi_0 = \varphi(\cdot, t_0)$ is the field at the initial time $t_0 = 0$, while there is no $\chi_0 = \chi(\cdot, t_0)$. By assigning field operators to the fermionic and bosonic fields $\chi$ and $\varphi$, as well as to their momenta $\nu$ and $\omega$, respectively, this partition function can be rewritten in terms of the generalized Fokker–Planck operator $\hat{H}$ of the states according to [16,19,20,21], $P(\varphi_k, \chi_k | \varphi_{k-1}, \chi_{k-1}) = \langle \varphi_k, \chi_k, t_k | \varphi_{k-1}, \chi_{k-1}, t_{k-1} \rangle = \langle \varphi_k, \chi_k |\, e^{-\hat{H} \Delta t}\, | \varphi_{k-1}, \chi_{k-1} \rangle$. At this stage, these are just formal definitions, with time-localized states $\langle \varphi_k, \chi_k, t_k | := \delta(\varphi(\cdot, t_k) - \varphi_k)\, \delta(\chi(\cdot, t_k) - \chi_k)$ and not-time-localized states $\langle \varphi_k, \chi_k | := \delta(\varphi(\cdot, t) - \varphi_k)\, \delta(\chi(\cdot, t) - \chi_k)$. The transfer operator $M(t_k, t_{k-1}) = e^{-\hat{H} \Delta t}$ describes the mapping between states at times $t_{k-1}$ and $t_k$. Here, $\hat{H}$ is not to be confused with the information Hamiltonian $H(\psi|\varphi_0)$; the precise relation between the two will be established in the following. Taking the limit $\Delta t \to 0$ and $N \to \infty$, we define the function $H(\varphi_k, \chi_k, \omega_k, \nu_k) := -\frac{1}{\Delta t} \ln \langle \varphi_k, \chi_k |\, e^{-\hat{H} \Delta t}\, | \omega_k, \nu_k \rangle$. With this in mind, and with the definition of the field tuple $\tilde{\psi} = (\varphi, \chi, \omega, \nu)$, the partition function can be rewritten as
$$ Z \propto \int \mathcal{D}\tilde{\psi}\;\, e^{-\int \mathrm{d}t\; H(\varphi_t, \chi_t, \omega_t, \nu_t)\, -\, i \omega^\dagger \partial_t \varphi\, +\, i \nu^\dagger \partial_t \chi\, -\, H(\varphi_0)} . \tag{11} $$
In the end, the partition functions in Equations (10) and (11) need to be the same to guarantee the consistency of the theory. This permits the identifications $\nu = \bar{\chi}$, $\omega = \beta$, and $\int \mathrm{d}t\; H(\tilde{\psi}_t) = H_{\mathrm{stat}}(\psi|\varphi_0)$. To sum up, the auxiliary fields $\bar{\chi}$ and $\beta$ are simply the momenta of the fields $\chi$ and $\varphi$, whereas the time evolution is governed by the static information $H_{\mathrm{stat}}(\psi|\varphi_0)$. If we introduce the functional $\{Q, X\}[\psi] = \left( \beta^\dagger \frac{\delta}{\delta \bar{\chi}} + \chi^\dagger \frac{\delta}{\delta \tilde{\varphi}} \right) X[\psi]$, for some $X$, we can bring the static information into a Q-exact form. This means that $H_{\mathrm{stat}}$ only depends on the introduced functional $\{Q, \bar{Q}\}$, for a specific $\bar{Q}$,
$$ H_{\mathrm{stat}}(\psi|\varphi_0) = i \{Q, \bar{Q}\} , \qquad \bar{Q}(\psi) = \bar{\chi}^\dagger F[\tilde{\varphi}] - \frac{i}{2}\, \bar{\chi}^\dagger \Xi \beta . $$
The important finding is that the time evolution is governed by a Q-exact static information, i.e., $\int \mathrm{d}t\, H(t) = i \{Q, \bar{Q}\}$. In [9], it was shown that the defined functional $\{Q, \cdot\}$ is the path-integral version of the exterior derivative $\hat{d}$ of the exterior algebra. Thus, the time evolution is $\hat{d}$-exact and commutes with the exterior derivative, which is nilpotent. The exterior derivative, as the operator representative of $\{Q, \cdot\}$, interchanges fermions and bosons. As a physical system is symmetric with respect to an operator if that operator commutes with the time evolution, we conclude that the field dynamics is supersymmetric. The supersymmetry of a system can be broken spontaneously, which coincides with the appearance of chaos, as derived in [8,9,10,11]. The corresponding separation of infinitesimally close trajectories in a dynamical system is then characterized by the Lyapunov exponents. In the following, we investigate the impact of measurements on the supersymmetry of the field knowledge encoded in the partition function. Furthermore, it is intuitively clear that the occurrence of chaos should reduce the predictability of the system. To show the exact impact of chaos on the predictability of a system, we analyze two instructive scenarios.
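The nilpotency invoked here can be sketched directly from the definition of $\{Q, \cdot\}$ (schematically, suppressing daggers and component indices):
$$ \{Q, \{Q, X\}\} = \left( \beta\, \frac{\delta}{\delta \bar{\chi}} + \chi\, \frac{\delta}{\delta \varphi} \right)^{2} X = 0 , $$
since the repeated fermionic derivative $(\delta/\delta\bar{\chi})^2$ and the product $\chi \chi$ both vanish by antisymmetry of the Grassmann quantities, while the two mixed terms cancel pairwise. Hence $\{Q, \cdot\}$ squares to zero, matching the nilpotency $\hat{d}^2 = 0$ of the exterior derivative.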

5. SUSY and Measurements

First, we make some abstract considerations before looking at linear and nonlinear examples. Taking into account Section 4, we can rewrite the moment-generating function in Equation (3),
$$ Z_d[J] = \int \mathcal{D}\varphi\; e^{-H(d, \varphi) + J^\dagger \varphi} \propto \int \mathcal{D}\psi\; e^{-H_{\mathrm{stat}}(\psi|\varphi_0) - H_{\mathrm{dyn}}(\psi|\varphi_0) - H(\varphi_0) - H(d|\varphi) + J^\dagger \varphi} . \tag{12} $$
From Equation (12), we see that the combined information $H(d, \varphi)$, representing the knowledge from the measurement data $d$ and from the dynamics, consists of several parts. The first part, $H_{\mathrm{stat}}(\psi|\varphi_0) + H_{\mathrm{dyn}}(\psi|\varphi_0) = -i \bar{\chi}^\dagger \partial_t \chi + i \beta^\dagger \partial_t \varphi + i \{Q(\psi), \bar{Q}(\psi)\}$, describes the dynamics of the field $\tilde{\varphi}$ and of the ghost fields $\chi$ and $\bar{\chi}$ for times after the initial moment. The evolution of the dynamics can be described by a Q-exact term, meaning that supersymmetry is conserved for non-initial times $t > t_0$. The middle term, $H(\varphi_0)$, describes the knowledge on the initial conditions instead. The last term, $H(d|\varphi)$, contains the knowledge gained by the measurement. Thus, if the measurement addresses non-initial times, the information becomes non-Q-exact through the inclusion of the measurement information. The measurement effectively introduces “external forces” into the system, which are neither stationary nor necessarily Gaussian, and which guide the system’s evolution through the constraints set by the measurement. Precisely, the posterior knowledge on the excitation is described by $P(\xi|d)$, which has an explicitly time-dependent mean and correlation structure in $\xi$.
Let us consider idealized linear dynamics to illustrate the impact of chaos on the predictability of a system. As in the previous sections, we assume $\varphi$ to be initially $\varphi(\cdot, 0) = \varphi_0$ at $t = 0$ and to obey Equation (4) afterwards, with $\xi \sim \mathcal{G}(\xi, \mathbb{1})$, i.e., $\Xi = \mathbb{1}$. We define the classical field $\varphi_{\mathrm{cl}}$, which follows the excitation-free dynamics $\partial_t \varphi_{\mathrm{cl}}(x) = F[\varphi_{\mathrm{cl}}](x)$, and a deviation, $\epsilon := \varphi - \varphi_{\mathrm{cl}}$, between the actual signal $\varphi$ and the classical field $\varphi_{\mathrm{cl}}$. Here, we assume that only a short period after $t = 0$ is considered and perform a first-order expansion of the dynamics in the deviation field, with the initial deviation $\epsilon(\cdot, 0) = 0$,
$$ \partial_t \epsilon = F[\varphi] - F[\varphi_{\mathrm{cl}}] + \xi = \underbrace{\frac{\delta F[\varphi_{\mathrm{cl}}]}{\delta \varphi_{\mathrm{cl}}}}_{=:\, A}\, \epsilon + \xi + \mathcal{O}(\epsilon^2) \quad \Rightarrow \quad \epsilon_t = \int_0^t \mathrm{d}\tau\; e^{A (t - \tau)}\, \xi_\tau , \tag{13} $$
where we can assume that a time dependence in $A$ can be ignored for short periods after $t = 0$. Further, we assume that $A$ can be fully expressed in terms of a set of orthonormal eigenmodes, $A = \sum_a \lambda_a\, a\, \hat{a}^\dagger$ with $\hat{a}^\dagger a' = \delta_{aa'}$, where the hat denotes the adjoint with respect to the spatial coordinates only. In the linear case, the real parts of the eigenvalues of the operator $A$ are the Lyapunov exponents. We imagine a system measurement at time $t = t_o$ that perfectly probes a normalized eigendirection $b$ of $A$, such that we get noiseless data according to $d = \hat{b}^\dagger \epsilon(\cdot, t_o) = \epsilon_b(t_o)$ with $R := \hat{b}^\dagger\, \delta(t - t_o)$. The eigenmode $b$ then fulfills $A b = \lambda_b b$, which corresponds to a stable mode if $\lambda_b < 0$ and an unstable mode if $\lambda_b > 0$. In the linear case, we get a Gaussian prior for the deviation, as $\tilde{G}[\epsilon] = (\partial_t - A)\, \epsilon = L^{-1} \epsilon$ can be represented by a linear operator, $P(\epsilon) = \mathcal{G}(\epsilon, L \Xi L^\dagger) = \mathcal{G}(\epsilon, E)$. For some Gaussian measurement noise $n$ with a vanishing covariance $N$, we get an information Hamiltonian that contains no field interactions:
$$ H(d, \epsilon) = -\ln\big( \mathcal{G}(R\epsilon - d, N)\; \mathcal{G}(\epsilon, E) \big) \tag{14} $$
$$ = \frac{1}{2}\, \epsilon^\dagger D^{-1} \epsilon - j^\dagger \epsilon + H_0 \tag{15} $$
$$ = \frac{1}{2}\, (\epsilon - m)^\dagger D^{-1} (\epsilon - m) + \tilde{H}_0 , \tag{16} $$
where the information source $j = R^\dagger N^{-1} d$, the information propagator $D = (E^{-1} + R^\dagger N^{-1} R)^{-1}$, and $H_0$ or $\tilde{H}_0$ were introduced. The latter two contain all the terms of the information that are constant in the deviation $\epsilon$. The information can be expressed in terms of the field $m = D j$ by completing the square in Equation (16), which is also known as the generalized Wiener filter solution [22]. As all cumulants of order higher than $n = 2$ vanish, the posterior distribution can be represented by a Gaussian with mean $m$ and uncertainty covariance $D$, $P(\epsilon|d) = \mathcal{G}(\epsilon - m, D)$. The prior dispersion in the eigenbasis can be calculated to be
$$ E_{(a,t)(a',t')} := \big\langle\, \hat{a}^\dagger \epsilon_t\; \epsilon_{t'}^\dagger\, \hat{a}' \,\big\rangle_{(\xi)} \tag{17} $$
$$ = \delta_{aa'}\; f_a(t, t') \quad \text{with} \quad f_a(t, t') = \Big( e^{\lambda_a (t + t')} - e^{\lambda_a |t - t'|} \Big)\, (2 \lambda_a)^{-1} , \tag{18} $$
where $f_a(t, t') := \langle \hat{a}^\dagger \epsilon_t\, \epsilon_{t'}^\dagger\, \hat{a} \rangle_{(\epsilon)}$ is the a priori temporal correlation function of a field eigenmode $a$. Expressing the mean and the posterior uncertainty in the eigenbasis of $A$ then leads to
$$ m_a(t) := \hat{a}^\dagger m(\cdot, t) = \delta_{ab}\; \frac{f_b(t, t_o)}{f_b(t_o, t_o)}\; d \tag{19} $$
$$ D_{(a,t)(a',t')} = \delta_{aa'}\, f_a(t, t') - \delta_{ab}\, \delta_{a'b}\; \frac{f_b(t, t_o)\, f_b(t', t_o)}{f_b(t_o, t_o)} . \tag{20} $$
Figure 1 shows the prior and posterior mean and uncertainty for three different Lyapunov exponents $\lambda_b$ and a perfect measurement at $t_o$. As we can see from Equation (20), the correlation between different modes $a \neq a'$ vanishes, and therefore any mode $a \neq b$ behaves like the prior mode shown in gray in Figure 1. Moreover, the propagator for the measured mode $b$ is in general non-zero, but it can be shown to vanish for times separated by the measurement, i.e., $D_{(b,t)(b,t')} = 0$ for $t < t_o < t'$. In other words, the measurement introduces a Markov blanket, which separates the periods before and after the measurement from each other.
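The Markov-blanket property can be verified numerically from the explicit prior correlation function $f_b$ and the posterior propagator in Equation (20); $\lambda_b = 1$ and $t_o = 1$ below are arbitrary illustrative choices:

```python
import numpy as np

# Posterior propagator of the measured mode b after a perfect measurement
# at t_o: D_(b,t)(b,t') = f(t,t') - f(t,t_o) f(t',t_o) / f(t_o,t_o),
# with the prior correlation f(t,t') = (e^{lam(t+t')} - e^{lam|t-t'|}) / (2 lam).
def f(t, tp, lam):
    return (np.exp(lam * (t + tp)) - np.exp(lam * abs(t - tp))) / (2 * lam)

def D_post(t, tp, lam, t_o):
    return f(t, tp, lam) - f(t, t_o, lam) * f(tp, t_o, lam) / f(t_o, t_o, lam)

lam, t_o = 1.0, 1.0
across = D_post(0.4, 1.7, lam, t_o)     # t < t_o < t': decorrelated
same_side = D_post(0.4, 0.8, lam, t_o)  # both before t_o: still correlated
```

The cross-blanket value vanishes to machine precision for any $t < t_o < t'$, while correlations on the same side of the measurement remain finite.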
The impact of the Lyapunov exponents on the predictability of the system is clearly visible in Figure 1. The illustration shows that positive Lyapunov exponents lead to uncertainties that grow without bound. In other words, chaos makes the inference more difficult on an absolute scale.
In Figure 2, we can see that the uncertainty grows faster for a larger Lyapunov exponent. The relative uncertainty, however, grows more slowly with increasing Lyapunov exponent, which mirrors the memory effect of a chaotic system: small initial disturbances can be remembered for long times.
In the case of nonlinear dynamics, the posterior knowledge becomes non-Gaussian; in other words, the theory becomes interacting, i.e., it does not only incorporate the propagator and the source term, but also interaction terms between more than two fields. At first, we consider the case $\lambda_b = 0$ and extend it to the next higher order in $\epsilon_b = \hat{b}^\dagger \epsilon(\cdot, t)$,
$$ \partial_t \epsilon_b = \frac{1}{2} \mu_b\, \epsilon_b^2 + \xi + \mathcal{O}(\epsilon_b^3) \quad \Rightarrow \quad \tilde{G}[\epsilon_b] = \partial_t \epsilon_b - \frac{1}{2} \mu_b\, \epsilon_b^2 . $$
This leads to an information Hamiltonian that contains a free and an interaction part, $H(d, \epsilon, \chi, \bar{\chi}) = H_{\mathrm{free}}(d, \epsilon, \chi, \bar{\chi}) + H_{\mathrm{int}}(d, \epsilon, \chi, \bar{\chi})$,
$$ H_{\mathrm{free}}(d, \epsilon, \chi, \bar{\chi}) = \frac{1}{2}\, \epsilon_b^\dagger\, \partial_t^\dagger \partial_t\, \epsilon_b + \bar{\chi}^\dagger \partial_t \chi + H(d|\varphi) + H(\varphi_0) , $$
$$ H_{\mathrm{int}}(d, \epsilon, \chi, \bar{\chi}) = -\frac{\mu_b}{2}\, (\epsilon_b^2)^\dagger\, \partial_t \epsilon_b + \frac{\mu_b^2}{8}\, (\epsilon_b^2)^\dagger (\epsilon_b^2) - \mu_b\, \bar{\chi}^\dagger (\epsilon_b \chi) . $$
The free information Hamiltonian $H_{\mathrm{free}}(d, \epsilon, \chi, \bar{\chi})$ defines the Wiener-process field inference problem we addressed before and yields the classical field as well as the bosonic and fermionic propagators, with interactions between the fields for $t < t_o$. It can be shown that all first-order diagrams with a bosonic three-vertex vanish for $\lambda_b = 0$ [18]. Thus, the posterior mean and uncertainty dispersion for $0 \le t, t' \le t_o$ only contain corrections due to the fermion loop:
[Feynman diagram expansion for the posterior mean and uncertainty with fermion-loop corrections; rendered as an image in the published article.]
Now, we consider the case $\lambda_b \neq 0$, i.e., $\tilde{G}[\epsilon_b] = \partial_t \epsilon_b - \lambda_b \epsilon_b - \frac{1}{2} \mu_b\, \epsilon_b^2$, such that
$$ H_{\mathrm{free}}(d, \epsilon, \chi, \bar{\chi}) = \frac{1}{2}\, \epsilon_b^\dagger\, (\partial_t - \lambda_b)^\dagger (\partial_t - \lambda_b)\, \epsilon_b + \bar{\chi}^\dagger (\partial_t - \lambda_b)\, \chi + H(d|\varphi) + H(\varphi_0) , $$
$$ H_{\mathrm{int}}(d, \epsilon, \chi, \bar{\chi}) = -\frac{\mu_b}{2}\, (\epsilon_b^2)^\dagger\, (\partial_t - \lambda_b)\, \epsilon_b + \frac{\mu_b^2}{8}\, (\epsilon_b^2)^\dagger (\epsilon_b^2) - \mu_b\, \bar{\chi}^\dagger (\epsilon_b \chi) . $$
The only changed vertex for this information Hamiltonian is the bosonic three-vertex, which does not vanish for λ b 0 , leading to the following Feynman diagram representation of the posterior mean and uncertainty:
[Feynman diagram representation of the posterior mean and uncertainty for $\lambda_b \neq 0$, including the bosonic three-vertex; rendered as an image in the published article.]
In any case, we see that the fermionic loop correction appears independently of the eigenvalue $\lambda_b$ of the measured mode in the nonlinear case. Thus, taking into account the functional determinant, represented here by the fermionic fields, is important to arrive at the correct posterior values. The posterior values were calculated with the computer algebra system SymPy [23] and are shown in Figure 3.
Note that the a priori mean and uncertainty dispersion are both infinite for any time $t > 0$: without the measurement, trajectories reaching positive infinity within finite time are not excluded from the ensemble of permitted possibilities. For times $t \in [0, t_o]$, the posterior mean as well as its uncertainty stay finite. The reason is that any diverging trajectory in this region is excluded by the measurement, as the dynamics do not allow trajectories to return from infinity; this would require an infinite excitation. The figure shows that in all displayed cases ($\lambda_b \in \{-1, 0, 1\}$), the posterior trajectories avoid getting close to easily diverging regimes, and the more unstable the dynamics (larger $\lambda_b$), the more strongly they avoid such regimes. Interestingly though, the posterior uncertainty of unstable systems with $\lambda_b > 0$ in the period before the measurement is reduced in comparison with that of the stable system. In the end, this is in accordance with the statement that chaotic systems are harder to predict: for chaotic systems the trajectories diverge, and thus the variety of trajectories that pass through both the initial and the measurement conditions is smaller. Hence, for a chaotic system, the measurement provides more information on the period before the measurement, but less on the period after it.

6. Conclusions

Two of the challenges in DFI are the incorporation of the delta function and of the Jacobian, which appear in the path integral over all possible solutions, into the inference. Here, advanced DFI schemes were developed, which introduce bosonic and fermionic auxiliary fields to circumvent the direct inference of the Jacobian. Using the introduced fields in perturbative calculations, such as Feynman diagrams, may allow future research to develop DFI schemes that tackle the difficulties of inference in nonlinear systems. We showed that in the case of white Gaussian noise and no measurement constraint, the dynamical system in the language of DFI has a supersymmetry. For chaotic systems, signaled by positive Lyapunov exponents, this supersymmetry is broken spontaneously, leading to a lower predictability of the system. In order to illustrate the impact of chaos on the inference in linear and nonlinear systems, we considered two simplified scenarios, such that the inference problem becomes one-dimensional for the measured mode. It turns out that the fermionic contributions are key to obtaining the correct posterior mean, for any eigenvalue of the measured mode. From the considered examples, we saw that, as expected, the prediction of a system’s absolute evolution from measurements is harder for a chaotic system. However, the relative uncertainty grows more slowly in a chaotic system, mirroring the memory effect of chaotic systems. Finally, predicting a nonlinear chaotic system is even harder than predicting a linear one, but the information obtained from a measurement is enhanced, as more a priori possible trajectories are excluded by the measurement in a chaotic system due to its larger sensitivity to perturbations. For future research, the analysis of more complex systems, including the insights from STS, would be interesting. For those, the inclusion of the fermionic corrections, as seen from the simplified scenarios, is likely to be essential to obtain the correct solution.

Author Contributions

Conceptualization: I.V.O., P.F., T.E. and M.W.; Methodology: I.V.O., P.F., T.E. and M.W.; Formal analysis: I.V.O., P.F., T.E. and M.W.; Investigation: I.V.O., P.F., T.E. and M.W.; Writing—original draft preparation: M.W.; Writing—review and editing: I.V.O., P.F., and T.E. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Krülls, W.; Achterberg, A. Computation of cosmic-ray acceleration by Ito’s stochastic differential equations. Astron. Astrophys. 1994, 286, 314–327.
  2. Allen, L.J. An Introduction to Stochastic Processes with Applications to Biology; CRC Press: Boca Raton, FL, USA, 2010.
  3. Gardiner, C.W. Handbook of Stochastic Methods; Springer: Berlin, Germany, 1985; Volume 3.
  4. Mao, X. Stochastic Differential Equations and Applications; Elsevier: Amsterdam, The Netherlands, 2007.
  5. Black, F.; Scholes, M. The pricing of options and corporate liabilities. In World Scientific Reference on Contingent Claims Analysis in Corporate Finance: Volume 1: Foundations of CCA and Equity Valuation; World Scientific: Singapore, 2019; pp. 3–21.
  6. Galenko, P.; Kharchenko, D.; Lysenko, I. Stochastic generalization for a hyperbolic model of spinodal decomposition. Phys. A Stat. Mech. Its Appl. 2010, 389, 3443–3455.
  7. Uhlenbeck, G.E.; Ornstein, L.S. On the theory of the Brownian motion. Phys. Rev. 1930, 36, 823.
  8. Ovchinnikov, I.V. Topological field theory of dynamical systems. Chaos 2012, 22, 033134.
  9. Ovchinnikov, I.V. Introduction to Supersymmetric Theory of Stochastics. Entropy 2016, 18, 108.
  10. Ovchinnikov, I.V.; Schwartz, R.N.; Wang, K.L. Topological supersymmetry breaking: Definition and stochastic generalization of chaos and the limit of applicability of statistics. Mod. Phys. Lett. B 2016, 30, 1650086.
  11. Ovchinnikov, I.V.; Di Ventra, M. Chaos or Order? Mod. Phys. Lett. B 2019, 33, 1950287.
  12. Box, G.E.; Tiao, G.C. Bayesian Inference in Statistical Analysis; John Wiley & Sons: Hoboken, NJ, USA, 2011; Volume 40.
  13. Jaynes, E.T. Probability Theory: The Logic of Science; Cambridge University Press: Cambridge, UK, 2003.
  14. Doering, C.R. A stochastic partial differential equation with multiplicative noise. Phys. Lett. A 1987, 122, 133–139.
  15. Roberts, A.J. A step towards holistic discretisation of stochastic partial differential equations. ANZIAM J. 2003, 45, 1–15.
  16. Mukhanov, V.; Winitzki, S. Introduction to Quantum Effects in Gravity; Cambridge University Press: Cambridge, UK, 2007.
  17. Enßlin, T.A. Information Theory for Fields. Ann. Phys. 2019, 531, 1800127.
  18. Westerkamp, M.; Ovchinnikov, I.; Frank, P.; Enßlin, T. Dynamical Field Inference and Supersymmetry. Entropy 2021, 23, 1652.
  19. Das, A. Field Theory: A Path Integral Approach, 2nd ed.; World Scientific Lecture Notes in Physics; World Scientific: Singapore, 2006.
  20. Münster, G. Quantentheorie, 2nd ed.; De Gruyter: Berlin, Germany, 2010.
  21. Bartelmann, M.; Feuerbacher, B.; Krüger, T.; Lüst, D.; Rebhan, A.; Wipf, A. Theoretische Physik; Springer Spektrum: Heidelberg, Germany, 2015.
  22. Wiener, N. Generalized harmonic analysis. Acta Math. 1930, 55, 117–258.
  23. Meurer, A.; Smith, C.P.; Paprocki, M.; Čertík, O.; Kirpichev, S.B.; Rocklin, M.; Kumar, A.; Ivanov, S.; Moore, J.K.; Singh, S.; et al. SymPy: Symbolic computing in Python. PeerJ Comput. Sci. 2017, 3, e103.
Figure 1. Illustration of the knowledge on a measured system mode $b$. A priori (gray) and a posteriori (magenta) field mean (lines) and one-sigma uncertainty (shaded) for an Ornstein–Uhlenbeck process (left, $\lambda_b = -1$), a Wiener process (middle, $\lambda_b = 0$), and a chaotic process (right, $\lambda_b = 1$) of a system eigenmode $b$ after one perfect measurement at $t_o = 1$.
Figure 2. Illustration of the mean and uncertainty of the prior and posterior, as well as the relative posterior uncertainty, on a measured system mode $b$ on a logarithmic scale for the Lyapunov exponents $\lambda_b = -3, -2, -1, 0, 1, 2$, and $3$, displayed in colors ranging from light to dark gray in this order (i.e., the strongest chaos is shown in black). Left: posterior mean. Middle: uncertainty of prior (dotted) and posterior (dashed). Right: relative posterior uncertainty.
Figure 3. Illustration of the knowledge on a measured system mode $b$ as in Figure 1 for a nonlinear system within the period $t \in [0, 1]$, with first-order bosonic and fermionic perturbation corrections for $\mu_b = 0.3$ in green, without such nonlinear corrections in magenta, and with only bosonic corrections in blue (dotted, displayed without uncertainty). The three panels display the cases $\lambda_b = -1$ (left), $\lambda_b = 0$ (middle), and $\lambda_b = 1$ (right).

