Entropy Methods for Stochastic Dynamical Systems and Evolution Equations

A special issue of Entropy (ISSN 1099-4300). This special issue belongs to the section "Statistical Physics".

Deadline for manuscript submissions: closed (31 October 2021) | Viewed by 16119

Special Issue Editors


Prof. Dr. Michael Dellnitz
Guest Editor
Chair of Applied Mathematics, University of Paderborn, 33098 Paderborn, Germany
Interests: dynamical systems; multiobjective optimization

Prof. Dr. Carsten Hartmann
Guest Editor
Institute of Mathematics, Brandenburgische Technische Universität Cottbus-Senftenberg, Konrad-Wachsmann-Allee 1, 03046 Cottbus, Germany
Interests: statistical inference; rare events; large deviations; stochastic control; nonequilibrium statistical mechanics; multiscale methods; diffusion in confined geometries

Dr. Feliks Nüske
Guest Editor
Department of Mathematics, University of Paderborn, 33098 Paderborn, Germany
Interests: stochastic analysis; Markov processes; numerical linear algebra

Special Issue Information

Dear colleagues,

We are pleased to invite contributions to a Special Issue on “Entropy Methods for Stochastic Dynamical Systems and Evolution Equations”.

Entropy has become a key tool in the analysis and simulation of dynamical systems, well beyond its traditional role in statistical physics and statistics. Entropy methods appear in connection with functional inequalities in the analysis of evolution equations and gradient flows, in the modeling of inverse problems for uncertainty quantification (UQ) and data assimilation, and as a probabilistic tool for solving high-dimensional and nonconvex problems in combinatorial optimization and machine learning.

The focus of this Special Issue is on original research and review papers that deal with all aspects of entropy in dynamical systems. Possible topics include, but are not limited to:

  • Gradient flows and their geometric structures;
  • Functional inequalities, optimal transportation;
  • Information-theoretic aspects of coarse-graining;
  • Sensitivity analysis of dynamical systems;
  • Nonequilibrium statistical mechanics and large deviations;
  • Machine learning and statistics (e.g., Stein’s method);
  • Duality methods in control and estimation;
  • Maximum entropy methods for prediction and inference;
  • Applications to large-scale systems, e.g., molecular dynamics, fluid dynamics, and materials science.

Prof. Dr. Michael Dellnitz
Prof. Dr. Carsten Hartmann
Dr. Feliks Nüske
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once registered, you can go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Entropy is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Published Papers (4 papers)


Research

17 pages, 790 KiB  
Article
Learn Quasi-Stationary Distributions of Finite State Markov Chain
by Zhiqiang Cai, Ling Lin and Xiang Zhou
Entropy 2022, 24(1), 133; https://doi.org/10.3390/e24010133 - 17 Jan 2022
Cited by 1 | Viewed by 1775
Abstract
We propose a reinforcement learning (RL) approach to compute the quasi-stationary distribution of a finite-state Markov chain. Based on the fixed-point formulation of the quasi-stationary distribution, we minimize the KL divergence between the two Markovian path distributions induced by the candidate distribution and the true target distribution. To solve this challenging minimization problem by gradient descent, we apply a reinforcement learning technique by introducing reward and value functions. We derive the corresponding policy gradient theorem and design an actor-critic algorithm to learn the optimal solution and the value function. Numerical examples on finite-state Markov chains are presented to demonstrate the new method.
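As a point of reference (not the authors' actor-critic method), the fixed-point formulation the paper builds on can be illustrated directly: for a finite-state chain with an absorbing state, the quasi-stationary distribution is the normalized left Perron eigenvector of the transition matrix restricted to the transient states, and it can be computed by power iteration. The chain below is a hypothetical toy example.

```python
import numpy as np

# Hypothetical 3-state chain whose third state is absorbing;
# P_sub is the transition matrix restricted to the transient states {0, 1}
# (row sums < 1, the deficit being the per-step absorption probability).
P_sub = np.array([[0.5, 0.3],
                  [0.2, 0.6]])

def qsd_power_iteration(P_sub, tol=1e-12, max_iter=10_000):
    """Quasi-stationary distribution as the fixed point nu P = lambda * nu,
    i.e. the normalized left Perron eigenvector of the substochastic matrix."""
    nu = np.ones(P_sub.shape[0]) / P_sub.shape[0]
    for _ in range(max_iter):
        nu_next = nu @ P_sub
        nu_next /= nu_next.sum()  # renormalization = conditioning on survival
        if np.abs(nu_next - nu).max() < tol:
            break
        nu = nu_next
    return nu_next

nu = qsd_power_iteration(P_sub)
# nu satisfies nu @ P_sub = lambda * nu, with lambda = (nu @ P_sub).sum()
# the one-step survival probability under the quasi-stationary distribution
```

The RL approach of the paper replaces this explicit linear-algebra fixed point with a KL-divergence minimization that also works when the chain is only accessible through sampled paths.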

19 pages, 4142 KiB  
Article
A Novel Hybrid Monte Carlo Algorithm for Sampling Path Space
by Francis J. Pinski
Entropy 2021, 23(5), 499; https://doi.org/10.3390/e23050499 - 22 Apr 2021
Cited by 2 | Viewed by 1731
Abstract
To sample from complex, high-dimensional distributions, one may choose algorithms based on the Hybrid Monte Carlo (HMC) method. HMC-based algorithms generate nonlocal moves, alleviating diffusive behavior. Here, I build on an already defined HMC framework, hybrid Monte Carlo on Hilbert spaces (Beskos et al., Stoch. Proc. Appl., 2011), that provides finite-dimensional approximations of measures π which have density with respect to a Gaussian measure on an infinite-dimensional Hilbert (path) space. In all HMC algorithms, one has some freedom to choose the mass operator. The novel feature of the algorithm described in this article lies in the choice of this operator. This new choice defines a Markov Chain Monte Carlo (MCMC) method that is well defined on the Hilbert space itself. As before, the algorithm described herein uses an enlarged phase space Π having the target π as a marginal, together with a Hamiltonian flow that preserves Π. In the previous work, the authors explored a method where the target π was augmented with Brownian bridges. With this new choice, π is augmented by Ornstein–Uhlenbeck (OU) bridges. The covariance of Brownian bridges grows with the path length, which has negative effects on the acceptance rate in the MCMC method. This contrasts with the covariance of OU bridges, which is independent of the path length. The ingredients of the new algorithm include the definition of the mass operator, the equations for the Hamiltonian flow, the (approximate) numerical integration of the evolution equations, and finally, the Metropolis–Hastings acceptance rule. Taken together, these constitute a robust method for sampling the target distribution in an almost dimension-free manner. The behavior of this novel algorithm is demonstrated by computer experiments for a particle moving in two dimensions, between two free-energy basins separated by an entropic barrier.
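For readers unfamiliar with HMC, a minimal finite-dimensional sketch (not the Hilbert-space algorithm of the paper) shows where the mass operator enters: momenta are drawn from N(0, M), and M also appears in the kinetic energy and position updates. The Gaussian target and all parameters below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def hmc_step(x, log_prob, grad_log_prob, mass_diag, eps=0.1, n_leapfrog=20):
    """One HMC step with a diagonal mass matrix; the choice of mass matrix
    is the tunable ingredient the abstract highlights."""
    p = rng.normal(size=x.shape) * np.sqrt(mass_diag)   # momentum ~ N(0, M)
    x_new, p_new = x.copy(), p.copy()
    # Leapfrog integration of the Hamiltonian flow
    p_new += 0.5 * eps * grad_log_prob(x_new)
    for _ in range(n_leapfrog - 1):
        x_new += eps * p_new / mass_diag
        p_new += eps * grad_log_prob(x_new)
    x_new += eps * p_new / mass_diag
    p_new += 0.5 * eps * grad_log_prob(x_new)
    # Metropolis-Hastings accept/reject based on the Hamiltonian
    H = lambda q, r: -log_prob(q) + 0.5 * np.sum(r**2 / mass_diag)
    if np.log(rng.uniform()) < H(x, p) - H(x_new, p_new):
        return x_new, True
    return x, False

# Illustrative target: standard 2D Gaussian (a stand-in for the
# path-space measure pi of the paper)
log_prob = lambda x: -0.5 * np.sum(x**2)
grad_log_prob = lambda x: -x
x = np.zeros(2)
samples = []
for _ in range(2000):
    x, _ = hmc_step(x, log_prob, grad_log_prob, mass_diag=np.ones(2))
    samples.append(x.copy())
```

The paper's contribution is, roughly, choosing the infinite-dimensional analogue of `mass_diag` (OU-bridge rather than Brownian-bridge covariance) so that the sampler remains well defined and efficient as the path discretization is refined.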

25 pages, 1135 KiB  
Article
Spectral Properties of Effective Dynamics from Conditional Expectations
by Feliks Nüske, Péter Koltai, Lorenzo Boninsegna and Cecilia Clementi
Entropy 2021, 23(2), 134; https://doi.org/10.3390/e23020134 - 21 Jan 2021
Cited by 5 | Viewed by 2600
Abstract
The reduction of high-dimensional systems to effective models on a smaller set of variables is an essential task in many areas of science. For stochastic dynamics governed by diffusion processes, a general procedure for finding effective equations is the conditioning approach. In this paper, we are interested in the spectrum of the generator of the resulting effective dynamics and how it compares to the spectrum of the full generator. We prove a new relative error bound in terms of the eigenfunction approximation error for reversible systems. We also present numerical examples indicating that, if Kramers–Moyal (KM) type approximations are used to compute the spectrum of the reduced generator, the result seems largely insensitive to the time window used for the KM estimators. We analyze the implications of these observations for systems driven by underdamped Langevin dynamics, and show how meaningful effective dynamics can be defined in this setting.
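The Kramers–Moyal estimation mentioned in the abstract can be sketched in a few lines: the drift is the conditional mean increment over a time window τ, estimated by binning a long trajectory. The Ornstein–Uhlenbeck data, bandwidth of the bins, and all parameters below are hypothetical choices, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulate an Ornstein-Uhlenbeck process dX = -theta * X dt + sigma dW
# (toy data; the paper treats general diffusions and their generators)
theta, sigma, dt, n = 1.0, 1.0, 1e-3, 200_000
x = np.empty(n)
x[0] = 0.0
for i in range(n - 1):
    x[i + 1] = x[i] - theta * x[i] * dt + sigma * np.sqrt(dt) * rng.normal()

def km_drift(x, dt, tau, bins):
    """First Kramers-Moyal coefficient over a time window tau = k * dt:
    b(y) ~ E[X(t + tau) - X(t) | X(t) = y] / tau, estimated by binning."""
    k = int(round(tau / dt))
    incr = x[k:] - x[:-k]
    idx = np.digitize(x[:-k], bins)
    centers, drift = [], []
    for j in range(1, len(bins)):
        mask = idx == j
        if mask.sum() > 100:               # skip poorly populated bins
            centers.append(0.5 * (bins[j - 1] + bins[j]))
            drift.append(incr[mask].mean() / tau)
    return np.array(centers), np.array(drift)

bins = np.linspace(-2, 2, 21)
y, b = km_drift(x, dt, tau=5 * dt, bins=bins)
# For the OU process the estimated drift should be close to -theta * y
```

The abstract's observation concerns what happens when such window-based estimates feed into the spectrum of the reduced generator: the computed eigenvalues appear largely insensitive to the choice of τ.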

35 pages, 3124 KiB  
Article
Interacting Particle Solutions of Fokker–Planck Equations Through Gradient–Log–Density Estimation
by Dimitra Maoutsa, Sebastian Reich and Manfred Opper
Entropy 2020, 22(8), 802; https://doi.org/10.3390/e22080802 - 22 Jul 2020
Cited by 15 | Viewed by 8902
Abstract
Fokker–Planck equations are extensively employed in various scientific fields, as they characterise the behaviour of stochastic systems at the level of probability density functions. Although broadly used, they allow for analytical treatment only in limited settings, and one must often resort to numerical solutions. Here, we develop a computational approach for simulating the time evolution of Fokker–Planck solutions in terms of a mean-field limit of an interacting particle system. The interactions between particles are determined by the gradient of the logarithm of the particle density, approximated here by a novel statistical estimator. Our method shows promising performance, with more accurate and less fluctuating statistics than direct stochastic simulations with a comparable number of particles. Taken together, our framework allows for effortless and reliable particle-based simulations of Fokker–Planck equations in low and moderate dimensions. The proposed gradient–log–density estimator is also of independent interest, for example, in the context of optimal control.
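A minimal sketch of the idea: the stochastic dynamics dX = f(X) dt + √(2D) dW is replaced by the deterministic particle flow dx/dt = f(x) − D ∇log ρ(x), with the gradient-log-density (score) estimated from the particles themselves. Below, a plain kernel-density score estimator stands in for the paper's more accurate estimator, and the Ornstein–Uhlenbeck drift and all parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)

def score_kde(x, h=0.1):
    """Gradient-log-density estimate from a Gaussian kernel density estimate.
    (A simple stand-in; the paper proposes a more accurate estimator.)"""
    diff = x[:, None] - x[None, :]            # pairwise differences x_i - x_j
    w = np.exp(-0.5 * (diff / h) ** 2)        # Gaussian kernel weights
    # grad log rho(x_i) ~ sum_j K'(x_i - x_j) / sum_j K(x_i - x_j)
    return (-diff / h**2 * w).sum(axis=1) / w.sum(axis=1)

# Deterministic particle flow for dX = f(X) dt + sqrt(2 D) dW:
#   dx_i/dt = f(x_i) - D * grad log rho(x_i)
f = lambda x: -x              # OU drift; stationary density is N(0, D)
D, dt, n_steps = 1.0, 0.01, 500
x = rng.normal(scale=2.0, size=500)           # particles start far from equilibrium
for _ in range(n_steps):
    x = x + dt * (f(x) - D * score_kde(x))
# The empirical particle variance relaxes toward the stationary value D = 1
```

Because the noise term is replaced by a deterministic interaction, the particle statistics fluctuate less than those of a direct stochastic simulation with the same number of particles, which is the effect reported in the abstract.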
