Open Access
Entropy 2019, 21(7), 681; https://doi.org/10.3390/e21070681
Information Geometry of Spatially Periodic Stochastic Systems
Department of Applied Mathematics, University of Leeds, Leeds LS2 9JT, UK
School of Mathematics and Statistics, University of Sheffield, Sheffield S3 7RH, UK
Author to whom correspondence should be addressed.
Received: 8 June 2019 / Accepted: 10 July 2019 / Published: 12 July 2019
We explore the effect of different spatially periodic, deterministic forces on the information geometry of stochastic processes. The three forces considered are denoted f0, f1 and f2, with f1 chosen to be particularly flat (locally cubic) at the equilibrium point x = 0, and f2 particularly flat at the unstable fixed point x = 1. We numerically solve the Fokker–Planck equation with an initial condition consisting of a periodically repeated Gaussian peak centred at x = x0, with x0 in the range 0 ≤ x0 ≤ 1. The strength D of the stochastic noise is varied over a broad range. We study the details of how these initial conditions evolve toward the final equilibrium solutions and elucidate the important consequences of the interplay between an initial PDF and a force. For initial positions close to the equilibrium point x = 0, the peaks largely maintain their shape while moving. In contrast, for initial positions sufficiently close to the unstable point x = 1, there is a tendency for the peak to slump in place and broaden considerably before reconstituting itself at the equilibrium point. A consequence of this is that the information length L∞, the total number of statistically distinguishable states that the system evolves through, is smaller for initial positions closer to the unstable point than for more intermediate values. We find that L∞ as a function of the initial position x0 is qualitatively similar to the force, including the differences between f1 and f2, illustrating the value of information length as a useful diagnostic of the underlying force in the system.
Keywords: stochastic processes; Fokker–Planck equation; information length
It is of interest to apply the idea of a metric to problems involving stochastic processes, e.g., [1,2,3,4,5,6]. Given a metric, the differences between different Probability Density Functions (PDFs) can be quantified, with different metrics focusing on different aspects, and hence being most suitable for different applications. Fisher information [7] yields a metric where distance is measured in units of the PDF’s width. The distance in the Fisher metric is thus dimensionless, and represents the number of statistically different states [8].
By extending the statistical distance of [8] to time-dependent situations, we recently introduced a way of quantifying information changes associated with time-varying PDFs [9,10,11,12,13,14,15,16]. We first compare two PDFs separated by an infinitesimal increment in time, and consider the corresponding infinitesimal distance. Integrating in time gives the total number of statistically distinguishable states that the system passes through, called the information length L(t), e.g., [6,7,8,14]. Another interpretation of L that can be useful is as a measure of the total elapsed time in units of an ‘information-change’ dynamical timescale.
We start by defining the dynamical time τ(t) as

E(t) ≡ 1/[τ(t)]² = ∫ dx [∂p(x,t)/∂t]² / p(x,t). (1)

That is, τ(t) is the characteristic timescale over which the information changes, and quantifies the PDF’s correlation time. Alternatively, 1/τ quantifies the (average) rate of change of information in time. A PDF that evolves such that τ is constant in time is referred to as a geodesic, along which the information propagates at a uniform rate. The information length is then defined by

L(t) = ∫₀ᵗ dt₁/τ(t₁) = ∫₀ᵗ √E(t₁) dt₁, (2)

which can be interpreted as measuring time in units of τ. It is important to note that L has no dimension (unlike entropy) and represents the total number of statistically different states that a system passes through in time between 0 and t. If we know the parameters that determine the PDF p(x,t), E and L in Equations (1) and (2) can be written in terms of the Fisher metric tensor defined in the statistical space spanned by those parameters. However, it is not always possible to have access to the parameters that govern PDFs, for instance, in the case of PDFs calculated from data. The merit of Equations (1) and (2) is thus that E and L can be directly calculated from PDFs even without knowing the parameters governing the PDFs, or the Fisher metric. For instance, L was calculated from PDFs of music data in [10]. In the work here, we first compute time-dependent PDFs by solving the Fokker–Planck equation numerically, and then calculate E and L from these PDFs as additional diagnostics.
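To make the definitions concrete, E and L in Equations (1) and (2) can be estimated directly from any numerically sampled sequence of PDFs. The following Python sketch is our illustration only (not code from the paper); it assumes uniform x and t grids, and checks itself against a uniformly translating Gaussian of fixed width σ and speed v, for which E = (v/σ)² and L(t) = vt/σ exactly:

```python
import numpy as np

def information_diagnostics(p, dx, dt):
    """Given p[i, j] = p(x_j, t_i) on uniform grids, return
    E(t) = int (1/p) (dp/dt)^2 dx and the cumulative
    information length L(t) = int_0^t sqrt(E) dt'."""
    dpdt = np.gradient(p, dt, axis=0)                    # time derivative of the PDF
    E = np.sum(dpdt**2 / np.maximum(p, 1e-300), axis=1) * dx
    sqrtE = np.sqrt(E)
    # trapezoidal cumulative integral of sqrt(E)
    L = np.concatenate(([0.0], np.cumsum(0.5 * (sqrtE[1:] + sqrtE[:-1]) * dt)))
    return E, L

# Exact test case: Gaussian of fixed width sigma, mean moving at speed v,
# so that E = (v/sigma)^2 = 0.25 and L(2) = v*t/sigma = 1.0 here.
x = np.arange(-8.0, 8.0, 0.01)
t = np.arange(0.0, 2.0 + 1e-12, 0.01)
v, sigma = 0.5, 1.0
mu = v * t
p = np.exp(-(x[None, :] - mu[:, None])**2 / (2 * sigma**2)) / np.sqrt(2 * np.pi * sigma**2)
E, L = information_diagnostics(p, dx=0.01, dt=0.01)
```

Because only p and its time increments enter, the same routine applies whether the PDFs come from a Fokker–Planck solver or from binned data.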
Unlike quantities such as entropy, relative entropy, Kullback–Leibler divergence, or Jensen divergence, information length is a Lagrangian measure; that is, it includes the full details of the PDF’s evolution, and not just the initial and final states. L∞, the total information length over the entire evolution, is then particularly useful for quantifying the proximity of any initial PDF to a final attractor of a dynamical system. In previous work [12,15] we explored these aspects of L∞ for restoring forces that were power-laws in the distance to the attractor. For instance, for the Ornstein–Uhlenbeck process, which is a linear relaxation process, we showed that L∞ consists of two parts: the first is due to the movement of the mean position measured in units of the width of the PDF, and the second is due to the entropy change. Thus, the total entropy change that is often discussed in previous works (e.g., [17]) contributes only partially to L∞. Importantly, for the Ornstein–Uhlenbeck process, L∞ increases linearly with the mean position of the initial PDFs (with its minimum value at the stable equilibrium point), regardless of the strength of the stochastic noise and the width of the initial PDFs. This linear relation indicates that the information length preserves the linearity of the underlying process. Heseltine & Kim [18] show that this linear relation is lost for other metrics (e.g., Kullback–Leibler divergence, Jensen divergence). Note that L is related to the integral of the square root of the infinitesimal relative entropy (see Appendix A). In comparison, for a chaotic attractor, L∞ varies sensitively with the mean position of a narrow initial PDF, taking its minimum value at the most unstable point. This sensitive dependence of L∞ on the initial PDF is similar to that of a Lyapunov exponent.
These results highlight L∞ as an alternative diagnostic for understanding the attractor structures of dynamical systems. It is this attractor structure that interests us in this paper. We thus focus on the relaxation problem as in [9,12,15,18], considering periodic deterministic forces, and elucidate the importance of the initial condition and its interplay with the deterministic force in the relaxation, and thus in the attractor structure.
We consider the following nonlinear Langevin equation:

dx/dt = f(x) + ξ(t). (3)

Here x is a random variable; f(x) is a deterministic force; ξ(t) is a stochastic forcing, which for simplicity can be taken as a short-correlated Gaussian random forcing as follows:

⟨ξ(t)ξ(t′)⟩ = 2D δ(t − t′), (4)

where the angular brackets represent the average over ξ, ⟨ξ⟩ = 0, and D is the strength of the forcing.
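For intuition, Equations (3) and (4) can be sampled directly with an Euler–Maruyama scheme. The sketch below is our illustration only; the force f(x) = −sin(πx) is an assumed stand-in for a periodic force with an attractor at x = 0, not necessarily the exact profile used here:

```python
import numpy as np

def euler_maruyama(f, x0, D, dt, nsteps, nsamples, seed=0):
    """Integrate dx/dt = f(x) + xi(t), with <xi(t)xi(t')> = 2D delta(t-t'),
    for an ensemble of nsamples independent trajectories."""
    rng = np.random.default_rng(seed)
    x = np.full(nsamples, x0, dtype=float)
    for _ in range(nsteps):
        # noise increment has variance 2*D*dt, matching Equation (4)
        x += f(x) * dt + np.sqrt(2 * D * dt) * rng.standard_normal(nsamples)
    return x

f = lambda x: -np.sin(np.pi * x)   # assumed illustrative force, attractor at x = 0
x_final = euler_maruyama(f, x0=0.5, D=0.01, dt=1e-3, nsteps=10_000, nsamples=500)
# by t = 10 the ensemble should have relaxed to a narrow peak around x = 0
```

The histogram of such an ensemble approximates the PDF whose exact evolution is governed by the Fokker–Planck equation below.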
In [15] we considered power-law forces and investigated how varying the degree of nonlinearity affects the system. In this work we take f(x) to be periodic in x, and explore some of the new effects this can create. The three choices of f(x) we consider are combinations of the first two harmonics,

f0(x) = −sin(πx),
f1(x) = −sin(πx) + (1/2) sin(2πx),
f2(x) = −sin(πx) − (1/2) sin(2πx).
Figure 1 shows these profiles, which are all anti-symmetric in x, and periodic on the interval −1 ≤ x ≤ 1. All three choices have x = 0 as an attractor, and x = ±1 as an unstable fixed point. The particular combinations of harmonics for f1 and f2 were chosen so that they are locally cubic rather than linear at either x = 0 (for f1) or x = ±1 (for f2). In applications such as Brownian motors, many specific choices of f are considered to model particular physics. However, as noted in the introduction, we are here more interested in attractor structures in the relaxation problem; in particular, how initial conditions and stochastic noise interact with deterministic forces, and the role of the asymmetry of the deterministic force and of the stable and unstable fixed points in the local dynamics.
Comparing these three periodic functions with the previous power-law choices, two significant differences stand out. First, for the power-law forces all initial conditions are pushed directly toward the origin, and there are no unstable fixed points. It is therefore of particular interest to see how the choices here behave for initial conditions near the unstable point x = 1. Second, the power-law forces all curve upward (that is, their magnitude increases faster than linearly for all x), whereas the choices here have different combinations of curvatures, which will turn out to have clearly identifiable effects.
The Fokker–Planck equation [19,20] corresponding to Equation (3) is

∂p(x,t)/∂t = −∂[f(x) p(x,t)]/∂x + D ∂²p(x,t)/∂x². (5)

In [15] we solved the corresponding equation by finite-differencing in x. For the periodic systems considered here, it is more convenient to start with the Fourier expansion

p(x,t) = ½ a₀(t) + Σₙ [aₙ(t) cos(nπx) + bₙ(t) sin(nπx)]. (6)

The coefficients aₙ and bₙ are then time-stepped using second-order Runge–Kutta. The term f(x)p is separated out into the relevant Fourier components using a fast Fourier transform. (For the very simple choices of f considered here, consisting of at most two Fourier modes, it would be straightforward to do this separation analytically, and thereby do the entire calculation purely in Fourier space, but the code was developed with more general choices of f in mind, where this approach becomes increasingly cumbersome as the number of harmonics in f increases. For such more general choices of f the FFT approach is most convenient.)
A range of spatial resolutions was used and carefully checked to ensure fully resolved solutions. Time-steps were likewise varied to ensure proper accuracy. Another useful test of the numerical implementation is to monitor the coefficient a₀: this is time-stepped along with the others, but must in fact remain constant if the total probability is to remain constant. It was found that if the initial condition is correctly set to have a₀ = 1, then this was maintained throughout the entire subsequent evolution.
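A minimal sketch of such a pseudo-spectral solver is given below. This is our illustration only, with an assumed force f(x) = −sin(πx) and arbitrary parameter values rather than the production code; the nonlinear term f·p is formed in real space via the FFT, and the k = 0 mode (total probability) is exactly conserved by construction:

```python
import numpy as np

# Pseudo-spectral Fokker-Planck solver: dp/dt = -d(f p)/dx + D d2p/dx2
N, D, dt = 256, 0.01, 1e-4
x = -1.0 + 2.0 * np.arange(N) / N             # periodic grid on [-1, 1)
k = 2 * np.pi * np.fft.fftfreq(N, d=2.0 / N)  # wavenumbers k_n = n*pi for period 2
f = -np.sin(np.pi * x)                        # assumed illustrative force

def rhs(phat):
    """Right-hand side in Fourier space; f*p is formed in real space."""
    p = np.fft.ifft(phat).real
    return -1j * k * np.fft.fft(f * p) - D * k**2 * phat

# Gaussian initial peak at x0 = 0.5, normalised so that int p dx = 1
p = np.exp(-(x - 0.5)**2 / (2 * 0.07**2))
p /= p.sum() * (2.0 / N)
phat = np.fft.fft(p)

for _ in range(10_000):                       # second-order Runge-Kutta to t = 1
    mid = phat + 0.5 * dt * rhs(phat)
    phat = phat + dt * rhs(mid)

p = np.fft.ifft(phat).real
mass = p.sum() * (2.0 / N)                    # total probability (the a0 check)
mean = (x * p).sum() * (2.0 / N)              # peak should have drifted toward 0
```

The k = 0 component of rhs vanishes identically, so the conservation check described above holds to machine precision.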
The initial conditions are of the form

p(x,0) ∝ exp[−(x − x₀)²/(2σ₀²)], (8)

that is, Gaussians centred at x = x₀ and having a prescribed half-width σ₀. We are interested in the range 0 ≤ x₀ ≤ 1; by symmetry, the range −1 ≤ x₀ ≤ 0 would behave the same, simply approaching x = 0 from the other direction.
This initial condition is also periodic, on the same interval as the entire problem. For the purposes of actually implementing Equation (8), it was most convenient to consider the range as being x₀ − 1 ≤ x ≤ x₀ + 1, centred on the peak. In particular, for the half-widths and values of x₀ considered here, Equation (8) then yields results at the two endpoints x₀ − 1 and x₀ + 1 that are different, but both are so vanishingly small that the discrepancy does not need to be smoothed out in defining the initial condition. If instead Equation (8) were implemented on either −1 ≤ x ≤ 1 or 0 ≤ x ≤ 2, then values of x₀ near either 0 or 1 would be more awkward to handle correctly.
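A periodically repeated Gaussian of this kind can equivalently be constructed by summing periodic images of the peak and normalising over one period. The sketch below is illustrative only; the grid, centre, and width values are arbitrary choices:

```python
import numpy as np

def periodic_gaussian(x, x0, sigma, period=2.0, nimages=5):
    """Gaussian peak at x0, periodically repeated with the given period,
    normalised so that the integral over one period is 1."""
    p = np.zeros_like(x)
    for m in range(-nimages, nimages + 1):
        p += np.exp(-(x - x0 - m * period)**2 / (2 * sigma**2))
    dx = x[1] - x[0]
    return p / (p.sum() * dx)

x = np.linspace(-1.0, 1.0, 400, endpoint=False)
p0 = periodic_gaussian(x, x0=0.95, sigma=0.07)
# a peak this close to x = 1 has a visible wrapped tail near x = -1
```

For peaks near x₀ = 1 this makes the wrap-around portion of the initial condition explicit, which matters for the behaviour near the unstable point discussed below.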
In [15] we also used a Gaussian initial condition, and explored a broad range of initial widths. Here we are again interested in the regime where the initial peak is not too narrow compared with the final equilibrium peak, which allows at least the initial parts of the evolution to be nondiffusive. Having the initial peak be so narrow that diffusive broadening dominates immediately can also be interesting in other contexts, but diffusive effects are then necessarily important from the outset, which would obscure some of the dynamics of interest here. We therefore restrict attention to this regime for the initial widths and values of D considered below.
Figure 2 shows how the peak amplitudes evolve in time for the three choices f0, f1 and f2. Starting with the initial position in the top row, we see that the solutions for f0 and f2 equilibrate to their final values on very rapid timescales, involving relatively little variation with D. In contrast, the timescales for f1 are much longer, and vary substantially with D. Comparing the results here with the corresponding figure in [15], we see that f1 is exactly analogous to the previous cubic force. This is because over the relevant range the shape of f1 is very close to a cubic. Similarly, near the origin the shape of f0 is still reasonably close to linear, and the evolution is therefore essentially like the linear Ornstein–Uhlenbeck process, for which an exact analytic solution exists.
It is only f2 whose shape is already substantially different from either linear or cubic over the relevant interval, being close to linear near the origin but strongly curved closer to x = 1. Correspondingly, f2 also shows a new effect, namely an initial reduction in the peak amplitudes. This effect becomes even more pronounced for f1 and f2 with the initial position closer to the unstable point, in the bottom row of Figure 2. This reduction in the peak amplitudes is not caused by diffusive spreading but is a consequence of the non-diffusive (D-independent) evolution resulting from the interplay between an initial PDF and the deterministic force. We note in particular how all values of D yield identical reductions in amplitudes here. It is worth comparing this with the non-diffusive evolution in [15], where the opposite behaviour—an initial increase in peak amplitudes—was observed. The interplay between the initial PDF and the deterministic force is elaborated below.
If f is such that its magnitude increases more rapidly than linearly with distance from the origin, i.e., curves upward, then those parts of any initial condition furthest from the origin are pushed toward it fastest, whereas those parts closest move more slowly. The result is that an initial Gaussian peak bunches up on itself, causing the amplitude to increase. In contrast, if f curves downward the opposite effect occurs, and an initial Gaussian peak is spread out, even before diffusion starts to play a role. Eventually, of course, the peak moves sufficiently close to the origin that the behaviour is as before, explaining why the behaviour at later times is similar to the previous results.
Finally, the behaviour for f2 in the bottom row is yet again different, namely an initial reduction in amplitude, followed by an abrupt increase. This is caused by a fundamentally new peak forming at the origin, rather than the initial peak moving toward it. Note also that time here is on a logarithmic scale, corresponding to a very slow equilibration process, unlike the previous case in the top row.
Figure 3, Figure 4 and Figure 5 illustrate these various behaviours in more detail, showing the actual PDFs at different times for f0, f1 and f2, respectively. Starting with f0, we see how the peak initially located close to x = 1 becomes broader as it moves toward the origin, an effect again not caused by diffusion, but rather by the curvature of f0 at these values of x. Note for example how the solutions at t = 3 or 4 have much steeper leading edges (nearer to the origin) than trailing edges, caused by the trailing edges moving so much slower. Another feature to note is how parts of the solution reach the origin coming from the ‘other’ direction. That is, if the initial condition is a peak centred close to x = 1, and having half-width 0.07, then a small but non-negligible portion of the initial condition is in the range x > 1, as seen also in Figure 3. For this part of the initial condition the nearest attractor is x = 2 rather than x = 0. Viewed on the interval −1 ≤ x ≤ 1, this part therefore approaches the origin from negative x values, as seen at times t = 5 and 6. (The remaining portion of the periodic interval is not shown in these figures because the amplitudes are rather small there, due to the PDFs being very spread out as they traverse this range.) Finally, between t = 8 and 9 we see how the two peaks coming from negative and positive x values combine to form the single final equilibrium consisting of a Gaussian centred at the origin.
Figure 4 shows the corresponding solutions for f1. For small times the behaviour is very similar to that seen in Figure 3, except that it happens roughly twice as fast (compare corresponding times in the two figures). This is readily understandable by noting that the slope of f1 near x = 1 is roughly twice that of f0, yielding faster evolution. The later evolution is much slower though, with the merging of the two peaks only occurring by t = 100, and even then still displaying some asymmetry, and hence not yet the final quartic profile. This is the same very slow final adjustment process previously analysed in detail in [21], and is caused by f1 being cubic rather than linear near the origin.
Figure 5 shows the solutions for f2. We see the behaviour alluded to above, of an abrupt transition from one peak to another. Because f2 is so flat near x = 1, there is hardly any tendency to push the initial peak away. Instead, it simply broadens out, slumping as it spreads. A new peak then forms at the origin, overtaking the original one in amplitude, as previously noted in Figure 2. Note though that long after this time a significant portion of the original peak still remains near x = 1, and this portion only fades away on very long timescales; x = 1 is an unstable fixed point, but f2 is so small everywhere near it that there is very little tendency to push the solutions away from there.
As noted in the introduction, we are particularly interested in the effects that these various different types of behaviour have on the information length quantities E and L. Figure 6 shows E for the same solutions as before in Figure 2. We see that E is initially uniform, and independent of D (provided D is sufficiently small), corresponding to the ‘geodesic’ behaviour first identified in [6]. For some configurations, E then immediately transitions to an exponential decay, whereas for others it first has a power-law decay before ultimately decaying exponentially. Correspondingly, the timescales to achieve equilibrium also vary dramatically, as seen by the various linear and logarithmic scales for t. Different scaling regimes signify fundamentally different dynamics.
Figure 7 shows how L∞ varies with the initial position x0; within each panel the three curves correspond to the three values of D. It is interesting to note how the shapes generally mimic the corresponding force profiles f0, f1 and f2. The largest values always occur for intermediate values of x0, even though larger values of x0 correspond to initial conditions that have farther to travel to reach the origin. Such initial conditions also spread out much more though, as seen above, and according to the interpretation of information length, this should indeed reduce L∞. Very close to x0 = 1 the values are particularly small, because having peaks collapse in place and reform at the new location is an informationally very efficient way to move, as seen also in other contexts [13,22,23].
Finally, Figure 8 shows the time, call it t_c, needed for E to drop below a small cutoff. The precise cutoff is of course somewhat arbitrary, but as seen in Figure 6 it is sufficiently small to be in the exponential decay regime in all cases. This is therefore a convenient measure of the time taken to reach equilibrium, and any even smaller cutoff would only add small increments to t_c (and essentially nothing to L∞).
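Extracting such a settling time from a computed E(t) series is straightforward. A small sketch follows, using a toy exponential decay and an arbitrary cutoff (the actual cutoff value used for Figure 8 is not reproduced here):

```python
import numpy as np

def settling_time(t, E, cutoff):
    """First time after which E(t) remains below the cutoff, or None."""
    above = np.nonzero(E >= cutoff)[0]
    if len(above) == 0:
        return t[0]                          # below the cutoff from the start
    last = above[-1]                         # last sample still above the cutoff
    return t[last + 1] if last + 1 < len(t) else None

t = np.linspace(0.0, 10.0, 1001)
E = np.exp(-t)                               # toy exponential decay
tc = settling_time(t, E, cutoff=1e-3)        # analytically ln(1000) ~ 6.91
```

Taking the *last* crossing rather than the first makes the measure robust against E(t) dipping briefly below the cutoff during a power-law phase before its final exponential decay.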
Starting with f0, we note first that t_c is on a linear scale, meaning that each reduction of D by a factor of 10 only adds a constant amount to t_c. This is the same effect already seen in Figure 2, where smaller D requires slightly longer to settle in to the final states. Equivalently, smaller D in Figure 6 remains in the flat, geodesic regime for slightly longer times. The other feature to note for f0 is the behaviour near x0 = 1, where t_c increases strongly, and increasingly abruptly for narrower initial peaks. This can be understood by noting that if x0 is not too close to 1, the initial condition Equation (8) is essentially zero at x = 1, whereas if x0 is within a few half-widths of 1, Equation (8) does have a non-negligible component at x = 1. Therefore, in the former case the initial peak will simply move monotonically toward the origin, which occurs on a rapid timescale, whereas in the latter case the evolution will include a significant component of the slumping-in-place behaviour, which we saw only happens on slower timescales.
For f1, the scale for t_c is logarithmic, so that each reduction of D by a factor of 10 increases t_c by a constant factor. For intermediate values of x0, t_c is also essentially independent of x0. The equilibration time is completely dominated by the final settling-in time, just as in the cubic case in [21], and the initial motion of the peak toward the origin is negligible in comparison. For very small values of x0 the behaviour is different, with much smaller values of t_c. If x0 ≈ 0, the peak is essentially at the origin already, making the adjustment quicker. Finally, there is a similar end-effect for x0 sufficiently close to 1; if x0 ≈ 1, the initial peak is essentially at the unstable fixed point, and the evolution is the slumping-in-place behaviour, which has a faster final adjustment than if the peak moves toward the origin and then adjusts its shape there (but with the same scaling in D).
Finally, f2 is qualitatively similar to f0, in the sense that t_c is a monotonically increasing function of x0. Indeed, for intermediate values of x0 the behaviour is virtually identical to f0, with t_c increasing by a constant amount every time D is decreased by a constant factor. (This is simply not visible because t_c is on a logarithmic rather than linear scale here.) Because f0 and f2 are both linear near the origin, the extremely slow final adjustment that happens for f1 does not apply to either of them, leaving only this much weaker dependence on D. The behaviour near x0 = 1, with the very strong increase in t_c, again more abrupt for narrower initial peaks, is once more because this is the regime where the slumping-in-place behaviour occurs. Also, because f2 is so much flatter near x = 1 than either f0 or f1, this slumping-in-place behaviour is much slower for f2 than for the other choices (recall again how long the peak at x = 1 lasts in Figure 5). This explains why t_c is on a logarithmic scale for f2 but on a linear scale for f0, even though for intermediate values of x0 they exhibit the same (weak) scaling with D.
The results presented here extend our previous work [12,15] to deterministic forces that are periodic in space. This naturally allows for forces that curve in opposite directions in different regions, as well as for unstable fixed points. The deterministic force can also be adjusted to be particularly flat at either the stable equilibrium or the unstable fixed points, both of which turn out to have important consequences, with either choice yielding particularly long timescales that increase as D is decreased. The interesting consequences of the interplay between an initial PDF and a deterministic force for the PDF evolution and the scalings of different quantities were discussed in detail by considering three types of periodic deterministic forces and comparing the results.
In particular, we computed how the rate of information change and the resulting total information length L∞ depend on the position x0 of an initial Gaussian peak. We found that for all choices of f, the unstable fixed points yield comparatively small L∞, even though they are farthest away from the final equilibrium points. It is particularly interesting that L∞ as a function of initial position qualitatively follows f, indicating the close connection between the information geometry and the underlying forcing.
Finally, we note that this work can be extended in many different directions, including: (1) If the initial condition is not one Gaussian peak for every period of f, but only one peak for some much larger (tending to infinity) number of periods of f, then one can study how this initial peak gradually spreads out, how that compares with pure diffusion, and what effect the precise shape of f might have [24,25,26]. This is also related to so-called anomalous diffusion [27,28,29], which can be considerably more general though, involving fractional derivatives. (2) If the force also includes a constant component tending to push the initial condition in a particular direction, it is of interest to study situations where the constant force is comparable to the periodic component of the force [30,31,32]. (3) Allowing the force to vary in time as well as space is relevant to so-called Brownian motors [33,34,35,36]. Work in some of these areas is currently ongoing.
The underlying ideas were developed by E.K. and R.H. Numerical calculations were done by R.H. The paper was written by R.H. and E.K.; both authors have read and approved the final manuscript.
This research was funded by Leverhulme Trust Research Fellowship RF-2018-142\9 awarded to E.K.
Conflicts of Interest
The authors declare no conflict of interest.
We now recall how E and L in Equations (1) and (2) are related to the relative entropy (Kullback–Leibler divergence) [6,11]. We consider two nearby PDFs p1 = p(x, t) and p2 = p(x, t + dt) at times t and t + dt, and use the limit of a very small dt to Taylor expand p2 as

p2 = p1 + dt ∂p1/∂t + O(dt²). (A1)

The relative entropy of the two PDFs is

D[p2, p1] = ∫ dx p2 ln(p2/p1). (A2)

In the limit dt → 0, substituting Equation (A1) into Equation (A2), expanding the logarithm, and using ∫ dx ∂p1/∂t = 0 (conservation of total probability) give us, up to O(dt²),

D[p2, p1] = (dt²/2) ∫ dx (1/p1)(∂p1/∂t)² = ½ E dt², (A3)

and thus the infinitesimal distance dl between p1 and p2 as

dl = √(2 D[p2, p1]) = dt/τ(t). (A4)

By summing dl over successive intervals tᵢ₋₁ ≤ t′ ≤ tᵢ for i = 1, …, n (where t₀ = 0 and tₙ = t) in the limit dt → 0, we have

L(t) = lim Σᵢ √(2 D[p(x, tᵢ), p(x, tᵢ₋₁)]) = ∫₀ᵗ dt₁/τ(t₁), (A5)

where L is the information length. Thus, L is related to the sum of the square roots of infinitesimal relative entropies. Note that L is a Lagrangian distance between the PDFs at times 0 and t, and depends sensitively on the particular path that the system passes through in reaching the final state. In contrast, the relative entropy depends only on the PDFs at times 0 and t, and thus tells us nothing about the intermediate states between the initial and final states.
- Jordan, R.; Kinderlehrer, D.; Otto, F. The variational formulation of the Fokker–Planck equation. SIAM J. Math. Anal. 1998, 29, 1–17. [Google Scholar] [CrossRef]
- Gibbs, A.L.; Su, F.E. On choosing and bounding probability metrics. Int. Stat. Rev. 2002, 70, 419–435. [Google Scholar] [CrossRef]
- Lott, J. Some geometric calculations on Wasserstein space. Commun. Math. Phys. 2008, 277, 423–437. [Google Scholar] [CrossRef]
- Takatsu, A. Wasserstein geometry of Gaussian measures. Osaka J. Math. 2011, 48, 1005–1026. [Google Scholar]
- Costa, S.; Santos, S.; Strapasson, J. Fisher information distance. Discrete Appl. Math. 2015, 197, 59–69. [Google Scholar] [CrossRef]
- Kim, E.; Lee, U.; Heseltine, J.; Hollerbach, R. Geometric structure and geodesic in a solvable model of nonequilibrium process. Phys. Rev. E 2016, 93, 062127. [Google Scholar] [CrossRef] [PubMed]
- Frieden, B.R. Science from Fisher Information; Cambridge University Press: Cambridge, UK, 2004. [Google Scholar]
- Wootters, W.K. Statistical distance and Hilbert space. Phys. Rev. D 1981, 23, 357–362. [Google Scholar] [CrossRef]
- Nicholson, S.B.; Kim, E. Investigation of the statistical distance to reach stationary distributions. Phys. Lett. A 2015, 379, 83–88. [Google Scholar] [CrossRef]
- Nicholson, S.B.; Kim, E. Structures in sound: Analysis of classical music using the information length. Entropy 2016, 18, 258. [Google Scholar] [CrossRef]
- Heseltine, J.; Kim, E. Novel mapping in non-equilibrium stochastic processes. J. Phys. A 2016, 49, 175002. [Google Scholar] [CrossRef]
- Kim, E.; Hollerbach, R. Signature of nonlinear damping in geometric structure of a nonequilibrium process. Phys. Rev. E 2017, 95, 022137. [Google Scholar] [CrossRef] [PubMed]
- Hollerbach, R.; Kim, E. Information geometry of non-equilibrium processes in a bistable system with a cubic damping. Entropy 2017, 19, 268. [Google Scholar] [CrossRef]
- Kim, E.; Lewis, P. Information length in quantum systems. J. Stat. Mech. 2018, 043106. [Google Scholar] [CrossRef]
- Hollerbach, R.; Dimanche, D.; Kim, E. Information geometry of nonlinear stochastic systems. Entropy 2018, 20, 550. [Google Scholar] [CrossRef]
- Hollerbach, R.; Kim, E.; Mahi, Y. Information length as a new diagnostic in the periodically modulated double-well model of stochastic resonance. Phys. A 2019, 525, 1313–1322. [Google Scholar] [CrossRef]
- Van den Broeck, C.; Esposito, M. Three faces of the second law. II. Fokker–Planck formulation. Phys. Rev. E 2010, 82, 011144. [Google Scholar] [CrossRef] [PubMed]
- Heseltine, J.; Kim, E. Comparing information metrics for a coupled Ornstein–Uhlenbeck process. 2019; in preparation. [Google Scholar]
- Risken, H. The Fokker–Planck Equation: Methods of Solution and Applications; Springer: Berlin, Germany, 1996. [Google Scholar]
- Klebaner, F. Introduction to Stochastic Calculus with Applications; Imperial College Press: London, UK, 2012. [Google Scholar]
- Kim, E.; Hollerbach, R. Time-dependent probability density function in cubic stochastic processes. Phys. Rev. E 2016, 94, 052118. [Google Scholar] [CrossRef]
- Kim, E.; Tenkès, L.-M.; Hollerbach, R.; Radulescu, O. Far-from-equilibrium time evolution between two gamma distributions. Entropy 2017, 19, 511. [Google Scholar] [CrossRef]
- Kim, E.; Hollerbach, R. Geometric structure and information change in phase transitions. Phys. Rev. E 2017, 95, 062107. [Google Scholar] [CrossRef]
- Dean, D.S.; Gupta, S.; Oshanin, G.; Rosso, A.; Schehr, G. Diffusion in periodic, correlated random forcing landscapes. J. Phys. A 2014, 47, 372001. [Google Scholar] [CrossRef]
- Dean, D.S.; Oshanin, G. Approach to asymptotically diffusive behavior for Brownian particles in periodic potentials: Extracting information from transients. Phys. Rev. E 2014, 90, 022112. [Google Scholar] [CrossRef] [PubMed]
- Sivan, M.; Farago, O. Probability distribution of Brownian motion in periodic potentials. Phys. Rev. E 2018, 98, 052117. [Google Scholar] [CrossRef]
- Metzler, R.; Jeon, J.-H.; Cherstvy, A.G.; Barkai, E. Anomalous diffusion models and their properties: Non-stationarity, non-ergodicity, and ageing at the centenary of single particle tracking. Phys. Chem. Chem. Phys. 2014, 16, 24128–24164. [Google Scholar] [CrossRef] [PubMed]
- Zaburdaev, V.; Denisov, S.; Klafter, J. Lévy walks. Rev. Mod. Phys. 2015, 87, 483–530. [Google Scholar] [CrossRef]
- Evangelista, L.R.; Lenzi, E.K. Fractional Diffusion Equations and Anomalous Diffusion; Cambridge University Press: Cambridge, UK, 2018. [Google Scholar]
- Risken, H.; Vollmer, H.D. Brownian motion in periodic potentials; nonlinear response to an external force. Z. Phys. B 1979, 33, 297–305. [Google Scholar] [CrossRef]
- Lindner, B.; Schimansky-Geier, L.; Reimann, P.; Hänggi, P.; Nagaoka, M. Inertia ratchets: A numerical study versus theory. Phys. Rev. E 1999, 59, 1417–1424. [Google Scholar] [CrossRef]
- Guérin, T.; Dean, D.S. Universal time-dependent dispersion properties for diffusion in a one-dimensional critically tilted potential. Phys. Rev. E 2017, 95, 012109. [Google Scholar] [CrossRef]
- Reimann, P. Brownian motors: Noisy transport far from equilibrium. Phys. Rep. 2002, 361, 57–265. [Google Scholar] [CrossRef]
- Vorotnikov, D. Analytical aspects of the Brownian motor effect in randomly flashing ratchets. J. Math. Biol. 2014, 68, 1677–1705. [Google Scholar] [CrossRef]
- Frezzato, D. Dissipation, lag, and drift in driven fluctuating systems. Phys. Rev. E 2017, 96, 062113. [Google Scholar] [CrossRef] [PubMed]
- Ethier, S.N.; Lee, J. The tilted flashing Brownian ratchet. Fluct. Noise Lett. 2019, 18, 1950005. [Google Scholar] [CrossRef]
Figure 1. The three profiles f0, f1 and f2. Note how f1 is chosen to be flat at the attractor x = 0, and f2 at the unstable fixed point x = ±1. All three choices are anti-symmetric in x, and periodic with period 2.
Figure 2. Peak amplitudes as functions of time, for the three choices f0, f1 and f2 as labelled. The initial positions x0 are as indicated for the top and bottom rows, with the same initial widths in all cases. The three values of D are as labelled, also colour-coded as blue, red, black. Note also that some panels have t on a linear scale, indicating a very rapid adjustment process, whereas others have t on a logarithmic scale, corresponding to much slower dynamics.
Figure 3. The solutions for f0, for the initial condition and noise strength indicated. The numbers beside individual curves indicate the times, from t = 0 to 9. The different colours are for clarity only, and do not indicate a specific colour ↔ time relationship.
Figure 4. The solutions for f1, for the initial condition and noise strength indicated. The numbers beside individual curves indicate the times, from t = 0 to 300. Different colours are again only to help distinguish between the different lines.
Figure 5. The solutions for f2, for the initial condition and noise strength indicated. The numbers beside individual curves indicate the times, from t = 0 to 300.
Figure 6. E as a function of time, for the six configurations as in Figure 2. Note again how the scale for t is sometimes linear and sometimes logarithmic.
Figure 7. The total information length L∞ as a function of initial position x0. The labels indicate the particular profile f0, f1 or f2, and the initial width. Within each panel the three curves correspond to the three values of D (lowest, blue; middle, red; top, black).
Figure 8. As in Figure 7, but now showing the time t_c that it takes for E to drop below the cutoff, that is, a measure of the time it takes to reach equilibrium. Note how t_c has a linear scale for f0, but logarithmic scales for f1 and f2. The colour-coding is again blue, red, black.
© 2019 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).