Search Results (4)

Search Parameters:
Keywords = Langevin–Markov chain model

27 pages, 536 KB  
Article
Convergence Rates for the Constrained Sampling via Langevin Monte Carlo
by Yuanzheng Zhu
Entropy 2023, 25(8), 1234; https://doi.org/10.3390/e25081234 - 18 Aug 2023
Viewed by 2902
Abstract
Sampling from constrained distributions, which arise frequently in statistical and machine-learning models, poses significant challenges for algorithmic design and non-asymptotic analysis. In this study, we propose three sampling algorithms based on Langevin Monte Carlo with Metropolis–Hastings steps to handle distributions constrained to a convex body. We present a rigorous analysis of the corresponding Markov chains and derive non-asymptotic upper bounds on the convergence rates of these algorithms in total variation distance. Our results demonstrate that the sampling algorithm, enhanced with Metropolis–Hastings steps, offers an effective solution for tackling certain constrained sampling problems. Numerical experiments are conducted to compare our methods with several competing algorithms that lack the Metropolis–Hastings steps, and the results further support our theoretical findings.
(This article belongs to the Collection Advances in Applied Statistical Mechanics)
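
As a rough illustration of the kind of scheme described in this abstract, the sketch below runs Langevin Monte Carlo with a Metropolis–Hastings correction on a Gaussian target restricted to the unit ball. It is a generic, minimal sketch only; the target, constraint set and step size are assumptions made for illustration, not the paper's three algorithms or their tuning.

```python
# Minimal sketch: Metropolis-adjusted Langevin sampling of a Gaussian target
# restricted to the unit ball (an illustrative convex body). Not the paper's
# specific algorithms; all tuning constants are arbitrary choices.
import numpy as np

def log_density(x):
    # Unnormalised log-density of a standard Gaussian restricted to the unit ball.
    return -0.5 * np.dot(x, x) if np.dot(x, x) <= 1.0 else -np.inf

def grad_log_density(x):
    return -x  # gradient of the Gaussian part (inside the ball)

def mala_step(x, step, rng):
    # Langevin proposal: gradient drift plus Gaussian noise.
    mean_fwd = x + step * grad_log_density(x)
    y = mean_fwd + np.sqrt(2.0 * step) * rng.standard_normal(x.shape)
    if not np.isfinite(log_density(y)):  # proposal left the convex body: reject
        return x
    mean_bwd = y + step * grad_log_density(y)
    # Metropolis-Hastings correction for the asymmetric Gaussian proposal.
    log_q_fwd = -np.sum((y - mean_fwd) ** 2) / (4.0 * step)
    log_q_bwd = -np.sum((x - mean_bwd) ** 2) / (4.0 * step)
    log_alpha = log_density(y) - log_density(x) + log_q_bwd - log_q_fwd
    return y if np.log(rng.uniform()) < log_alpha else x

rng = np.random.default_rng(0)
x = np.zeros(2)
samples = []
for _ in range(5000):
    x = mala_step(x, step=0.1, rng=rng)
    samples.append(x.copy())
```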

20 pages, 1250 KB  
Article
Locally Scaled and Stochastic Volatility Metropolis–Hastings Algorithms
by Wilson Tsakane Mongwe, Rendani Mbuvha and Tshilidzi Marwala
Algorithms 2021, 14(12), 351; https://doi.org/10.3390/a14120351 - 30 Nov 2021
Cited by 5 | Viewed by 3649
Abstract
Markov chain Monte Carlo (MCMC) techniques are usually used to infer model parameters when closed-form inference is not feasible, with one of the simplest MCMC methods being the random walk Metropolis–Hastings (MH) algorithm. The MH algorithm suffers from random walk behaviour, which results in inefficient exploration of the target posterior distribution. This method has been improved upon, with algorithms such as Metropolis Adjusted Langevin Monte Carlo (MALA) and Hamiltonian Monte Carlo being popular modifications of MH. In this work, we revisit the MH algorithm to reduce the autocorrelations in the generated samples without adding significant computational time. We present (1) the Stochastic Volatility Metropolis–Hastings (SVMH) algorithm, which is based on using a random scaling matrix in the MH algorithm, and (2) the Locally Scaled Metropolis–Hastings (LSMH) algorithm, in which the scaling matrix depends on the local geometry of the target distribution. For both algorithms, the proposal distribution remains Gaussian and centred at the current state. The empirical results show that these minor additions to the MH algorithm significantly improve the effective sample rates and predictive performance over the vanilla MH method. The SVMH algorithm produces effective sample sizes similar to those of the LSMH method, with SVMH outperforming LSMH on an execution-time-normalised effective sample size basis. The performance of the proposed methods is also compared to MALA and to the current state-of-the-art method, the No-U-Turn Sampler (NUTS). The analysis is performed using a simulation study based on Neal’s funnel and multivariate Gaussian distributions, and using real-world data modelled with jump diffusion processes and Bayesian logistic regression. Although both MALA and NUTS outperform the proposed algorithms on an effective sample size basis, the SVMH algorithm has similar or better predictive performance than MALA and NUTS across the various targets. In addition, the SVMH algorithm outperforms the other MCMC algorithms on a normalised effective sample size basis on the jump diffusion datasets. These results indicate the overall usefulness of the proposed algorithms.
(This article belongs to the Special Issue Monte Carlo Methods and Algorithms)
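
The sketch below illustrates the general idea of a random scaling matrix inside a random-walk Metropolis–Hastings proposal, in the spirit of the SVMH description above. The target density, the log-normal scale distribution and the tuning constants are assumptions made for illustration, not the authors' exact SVMH or LSMH specification.

```python
# Minimal sketch: random-walk Metropolis-Hastings with a randomly drawn
# diagonal scaling at each iteration. Illustrative target and tuning only.
import numpy as np

def log_target(x):
    # Illustrative target: a correlated 2-D Gaussian.
    cov_inv = np.linalg.inv(np.array([[1.0, 0.8], [0.8, 1.0]]))
    return -0.5 * x @ cov_inv @ x

def rwmh_random_scale(n_iter, dim, base_step, rng):
    x = np.zeros(dim)
    chain = np.empty((n_iter, dim))
    for i in range(n_iter):
        # Draw a random diagonal scaling independently of the current state;
        # the marginal proposal stays symmetric, so the plain MH ratio applies.
        scale = base_step * np.exp(rng.standard_normal(dim))
        y = x + scale * rng.standard_normal(dim)
        if np.log(rng.uniform()) < log_target(y) - log_target(x):
            x = y
        chain[i] = x
    return chain

chain = rwmh_random_scale(n_iter=5000, dim=2, base_step=0.5,
                          rng=np.random.default_rng(1))
```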

48 pages, 8203 KB  
Review
An Overview of the Lagrangian Dispersion Modeling of Heavy Particles in Homogeneous Isotropic Turbulence and Considerations on Related LES Simulations
by Daniel G. F. Huilier
Fluids 2021, 6(4), 145; https://doi.org/10.3390/fluids6040145 - 8 Apr 2021
Cited by 21 | Viewed by 7250
Abstract
Particle tracking is a competitive technique widely used in two-phase flows and best suited to simulating the dispersion of heavy particles in the atmosphere. Most Lagrangian models in the statistical approach to turbulence are based either on the eddy interaction model (EIM) and the Monte Carlo method or on random walk models (RWMs) that make use of Markov chains and a Langevin equation. In the present work, both discontinuous and continuous random walk techniques are used to model the dispersion of heavy spherical particles in homogeneous isotropic stationary turbulence (HIST). Their efficiency in predicting particle long-time dispersion, mean-square velocity and Lagrangian integral time scales is discussed. Computation results with zero and non-zero mean drift velocity are reported; they are intended to quantify the inertia, gravity, crossing-trajectory and continuity effects controlling the dispersion. The calculations concern dense monodisperse spheres in air, with the particle Stokes number ranging from 0.007 to 4. Due to the weaknesses of such models, a more sophisticated matrix method, able to simulate the true fluid turbulence experienced by the particle in long-time dispersion studies, is also explored. Advances in computer performance have since allowed the development of large eddy simulation (LES) and direct numerical simulation (DNS) of turbulence coupled to Generalized Langevin Models, instead of Reynolds-Averaged Navier-Stokes (RANS)-based studies. A short review of the progress of Lagrangian simulations based on LES is therefore also provided, highlighting preferential concentration. The theoretical framework for the fluid time correlation functions along the heavy particle path is that suggested by Wang and Stock.
(This article belongs to the Special Issue Numerical Methods and Physical Aspects of Multiphase Flow)
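
To make the continuous random walk (Langevin) modelling concrete, the sketch below couples a one-dimensional Ornstein–Uhlenbeck update of the fluid velocity seen by a particle to a Stokes-drag equation of motion with gravity. All parameter values (time scales, drag time, turbulence intensity) are placeholders, and the model is a generic simplification rather than the specific RWM or matrix formulations reviewed in the article.

```python
# Minimal 1-D sketch of a continuous random-walk (Langevin) model for the
# fluid velocity seen by a heavy particle, coupled to Stokes drag and gravity.
# All parameters are placeholder assumptions for illustration.
import numpy as np

def simulate(n_steps, dt, t_lagrangian, sigma_u, tau_p, g, rng):
    u_fluid = 0.0      # fluid velocity sampled along the particle path
    v_part = 0.0       # particle velocity
    x_part = 0.0       # particle position
    positions = np.empty(n_steps)
    for i in range(n_steps):
        # Ornstein-Uhlenbeck (Langevin) update of the seen fluid velocity.
        a = np.exp(-dt / t_lagrangian)
        u_fluid = a * u_fluid + sigma_u * np.sqrt(1.0 - a**2) * rng.standard_normal()
        # Particle momentum equation: Stokes drag toward the fluid velocity plus gravity.
        v_part += dt * ((u_fluid - v_part) / tau_p - g)
        x_part += dt * v_part
        positions[i] = x_part
    return positions

pos = simulate(n_steps=20000, dt=1e-3, t_lagrangian=0.1, sigma_u=0.3,
               tau_p=0.05, g=9.81, rng=np.random.default_rng(2))
```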

18 pages, 1871 KB  
Article
A Neural Network MCMC Sampler That Maximizes Proposal Entropy
by Zengyi Li, Yubei Chen and Friedrich T. Sommer
Entropy 2021, 23(3), 269; https://doi.org/10.3390/e23030269 - 25 Feb 2021
Cited by 6 | Viewed by 4283
Abstract
Markov Chain Monte Carlo (MCMC) methods sample from unnormalized probability distributions and offer guarantees of exact sampling. However, in the continuous case, unfavorable geometry of the target distribution can greatly limit the efficiency of MCMC methods. Augmenting samplers with neural networks can potentially improve their efficiency. Previous neural-network-based samplers were trained with objectives that either did not explicitly encourage exploration, or contained a term that encouraged exploration but only for well-structured distributions. Here we propose to maximize proposal entropy in order to adapt the proposal to distributions of any shape. To optimize proposal entropy directly, we devised a neural network MCMC sampler that has a flexible and tractable proposal distribution. Specifically, our network architecture utilizes the gradient of the target distribution for generating proposals. Our model achieved significantly higher efficiency than previous neural network MCMC techniques in a variety of sampling tasks, sometimes by more than an order of magnitude. Further, the sampler was demonstrated through the training of a convergent energy-based model of natural images. The adaptive sampler achieved unbiased sampling with significantly higher proposal entropy than a Langevin dynamics sampler. The trained sampler also achieved better sample quality.
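
The sketch below is only meant to make the notion of "proposal entropy" concrete: it computes the differential entropy of a fixed, gradient-informed Gaussian proposal in closed form. The paper instead parameterises the proposal with a neural network and maximises this entropy during training; the target, step size and dimension here are illustrative assumptions.

```python
# Minimal sketch: the differential entropy of a tractable, gradient-informed
# Gaussian proposal (the quantity the abstract proposes to maximise).
# Fixed Langevin-style proposal for illustration only, not the neural sampler.
import numpy as np

def grad_log_target(x):
    return -x  # standard Gaussian target, for illustration

def gaussian_proposal_entropy(dim, step):
    # Entropy of N(mean, 2*step*I): 0.5 * log((2*pi*e)^d * det(2*step*I)).
    return 0.5 * dim * np.log(2.0 * np.pi * np.e * 2.0 * step)

def propose(x, step, rng):
    # Gradient-informed Gaussian proposal centred at a Langevin drift step.
    mean = x + step * grad_log_target(x)
    return mean + np.sqrt(2.0 * step) * rng.standard_normal(x.shape)

rng = np.random.default_rng(3)
x = np.zeros(10)
step = 0.2
print("proposal entropy per step (nats):", gaussian_proposal_entropy(x.size, step))
y = propose(x, step, rng)
```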
