Search Results (25)

Search Parameters:
Keywords = Langevin Monte Carlo

24 pages, 2799 KiB  
Article
Efficiency Investigation of Langevin Monte Carlo Ray Tracing
by Sergey Ershov, Vladimir Frolov, Alexander Nikolaev, Vladimir Galaktionov and Alexey Voloboy
Mathematics 2024, 12(21), 3437; https://doi.org/10.3390/math12213437 - 3 Nov 2024
Viewed by 944
Abstract
The main computationally expensive task of realistic computer graphics is the calculation of global illumination. Currently, most lighting simulation methods are based on various types of Monte Carlo ray tracing. One of them, Langevin Monte Carlo ray tracing, generates samples using the time series of a system of Langevin dynamics. The method seems very promising for calculating global illumination. However, it remains poorly studied, while its analysis could significantly speed up the calculations without losing the quality of the result. In our work, we analyzed the most computationally expensive operations of this method and also conducted computational experiments demonstrating the contribution of each operation to the convergence speed. One of our main conclusions is that the computationally expensive drift term can be dropped because it does not improve convergence. Another important conclusion is that the preconditioning matrix makes the greatest contribution to the improvement of convergence. At the same time, calculating this matrix is not so expensive, because it does not require calculating the gradient of the potential. The results of our study allow the method to be significantly sped up. Full article
(This article belongs to the Special Issue Mathematical Applications in Computer Graphics)
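The trade-off described in this abstract (costly drift term versus cheap preconditioning matrix) can be illustrated with a generic preconditioned Metropolis-adjusted Langevin step. The sketch below is a minimal illustration under assumed placeholder callables (`log_target`, `grad_log_target`) and a constant preconditioner `A`; it is not the authors' ray-tracing implementation.

```python
import numpy as np

def preconditioned_mala_step(x, log_target, grad_log_target, A, eps, rng, use_drift=True):
    """One Metropolis-adjusted Langevin proposal with a constant preconditioner A.

    Proposal: x' = x + eps * A @ grad(log pi)(x)        (the drift term; optional)
                 + sqrt(2 * eps) * L @ noise, where A = L L^T.
    Dropping the drift turns the proposal into a preconditioned random walk;
    the Metropolis correction keeps the chain exact either way.
    """
    L = np.linalg.cholesky(A)
    noise = rng.standard_normal(x.shape)
    drift_at = lambda y: eps * A @ grad_log_target(y) if use_drift else np.zeros_like(y)
    x_prop = x + drift_at(x) + np.sqrt(2.0 * eps) * L @ noise

    def log_q(x_to, x_from):   # log proposal density q(x_to | x_from), up to a constant
        d = x_to - (x_from + drift_at(x_from))
        return -0.25 / eps * d @ np.linalg.solve(A, d)

    log_alpha = (log_target(x_prop) - log_target(x)
                 + log_q(x, x_prop) - log_q(x_prop, x))
    return x_prop if np.log(rng.uniform()) < log_alpha else x
```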

19 pages, 2523 KiB  
Article
Hyperspectral Image Denoising by Pixel-Wise Noise Modeling and TV-Oriented Deep Image Prior
by Lixuan Yi, Qian Zhao and Zongben Xu
Remote Sens. 2024, 16(15), 2694; https://doi.org/10.3390/rs16152694 - 23 Jul 2024
Cited by 4 | Viewed by 2290
Abstract
Model-based hyperspectral image (HSI) denoising methods have attracted continuous attention in the past decades due to their effectiveness and interpretability. In this work, we aim to advance model-based HSI denoising through a sophisticated investigation of both the fidelity and regularization terms, or, correspondingly, the noise and the prior, by virtue of several recently developed techniques. Specifically, we formulate a novel unified probabilistic model for the HSI denoising task, within which the noise is assumed to be pixel-wise non-independent and identically distributed (non-i.i.d.) Gaussian predicted by a pre-trained neural network, and the prior for the HSI is designed by incorporating the deep image prior (DIP) with total variation (TV) and spatio-spectral TV. To solve the resulting maximum a posteriori (MAP) estimation problem, we design a Monte Carlo Expectation–Maximization (MCEM) algorithm, in which the stochastic gradient Langevin dynamics (SGLD) method is used for computing the E-step, and the alternating direction method of multipliers (ADMM) is adopted for solving the optimization in the M-step. Experiments on both synthetic and real noisy HSI datasets have been conducted to verify the effectiveness of the proposed method. Full article
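For reference, the E-step sampler named here, stochastic gradient Langevin dynamics, amounts to a noisy gradient-ascent update. The sketch below is a generic SGLD step with a placeholder minibatch-gradient callable, not the paper's MCEM code.

```python
import numpy as np

def sgld_step(theta, grad_log_post_minibatch, step, rng):
    """One stochastic gradient Langevin dynamics (SGLD) update:

    theta_{t+1} = theta_t + (step / 2) * (unbiased estimate of grad log p(theta | data))
                          + N(0, step * I) noise.

    The gradient estimate is typically computed on a minibatch and rescaled so
    that it is unbiased for the full-data gradient.
    """
    noise = rng.standard_normal(theta.shape) * np.sqrt(step)
    return theta + 0.5 * step * grad_log_post_minibatch(theta) + noise
```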

16 pages, 3986 KiB  
Article
Accelerating Convergence of Langevin Dynamics via Adaptive Irreversible Perturbations
by Zhenqing Wu, Zhejun Huang, Sijin Wu, Ziying Yu, Liuxin Zhu and Lili Yang
Mathematics 2024, 12(1), 118; https://doi.org/10.3390/math12010118 - 29 Dec 2023
Viewed by 1671
Abstract
Irreversible perturbations in Langevin dynamics have been widely recognized for their role in accelerating convergence in simulations of multi-modal distributions π(θ). A commonly used and easily computed standard irreversible perturbation is J∇log π(θ), where J is a skew-symmetric matrix. However, Langevin dynamics employing a fixed-scale standard irreversible perturbation encounters a trade-off between local exploitation and global exploration, associated with small and large scales of the perturbation, respectively. To address this trade-off, we introduce adaptive irreversible perturbation Langevin dynamics, in which the scale of the standard irreversible perturbation changes adaptively. Through numerical examples, we demonstrate that adaptive irreversible perturbations in Langevin dynamics can enhance performance compared to fixed-scale irreversible perturbations. Full article
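The standard irreversible perturbation described in this abstract enters the drift as (I + c J)∇log π(θ). A minimal Euler–Maruyama sketch is shown below with the perturbation scale held fixed rather than adapted as in the paper; `grad_log_pi`, `J`, and `scale` are placeholder names. In the adaptive variant, `scale` would be updated along the trajectory.

```python
import numpy as np

def irreversible_langevin_step(theta, grad_log_pi, J, scale, step, rng):
    """Euler-Maruyama step of Langevin dynamics with an irreversible perturbation:

        d(theta) = (I + scale * J) grad(log pi)(theta) dt + sqrt(2) dW,

    where J is skew-symmetric (J = -J^T), so the perturbation leaves pi
    invariant while (for a suitable scale) accelerating mixing.
    """
    drift = (np.eye(len(theta)) + scale * J) @ grad_log_pi(theta)
    noise = np.sqrt(2.0 * step) * rng.standard_normal(theta.shape)
    return theta + step * drift + noise
```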

25 pages, 7834 KiB  
Review
Models for Simulation of Fractal-like Particle Clusters with Prescribed Fractal Dimension
by Oleksandr Tomchuk
Fractal Fract. 2023, 7(12), 866; https://doi.org/10.3390/fractalfract7120866 - 5 Dec 2023
Cited by 10 | Viewed by 4252
Abstract
This review article delves into the growing recognition of fractal structures in mesoscale phenomena. The article highlights the significance of realistic fractal-like aggregate models and efficient modeling codes for comparing data from diverse experimental findings and computational techniques. Specifically, the article discusses the current state of fractal aggregate modeling, with a focus on particle clusters that possess adjustable fractal dimensions (Df). The study emphasizes the suitability of different models for various Df–intervals, taking into account factors such as particle size, fractal prefactor, the polydispersity of structural units, and interaction potential. Through an analysis of existing models, this review aims to identify key similarities and differences and offer insights into future developments in colloidal science and related fields. Full article
(This article belongs to the Section Mathematical Physics)

27 pages, 536 KiB  
Article
Convergence Rates for the Constrained Sampling via Langevin Monte Carlo
by Yuanzheng Zhu
Entropy 2023, 25(8), 1234; https://doi.org/10.3390/e25081234 - 18 Aug 2023
Viewed by 2298
Abstract
Sampling from constrained distributions, which are frequently encountered in statistical and machine-learning models, has posed significant challenges in terms of algorithmic design and non-asymptotic analysis. In this study, we propose three sampling algorithms based on Langevin Monte Carlo with Metropolis–Hastings steps to handle distributions constrained within a convex body. We present a rigorous analysis of the corresponding Markov chains and derive non-asymptotic upper bounds on the convergence rates of these algorithms in total variation distance. Our results demonstrate that the sampling algorithm, enhanced with Metropolis–Hastings steps, offers an effective solution for tackling some constrained sampling problems. Numerical experiments are conducted to compare our methods with several competing algorithms without Metropolis–Hastings steps, and the results further support our theoretical findings. Full article
(This article belongs to the Collection Advances in Applied Statistical Mechanics)
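The sketch below is one generic way of combining a Langevin proposal with a Metropolis–Hastings accept/reject step for a target restricted to a convex body; it is not necessarily any of the three algorithms analyzed in the paper. `log_pi`, `grad_log_pi`, and the membership test `in_body` are placeholders.

```python
import numpy as np

def constrained_mala_step(x, log_pi, grad_log_pi, in_body, step, rng):
    """MALA step targeting pi restricted to a convex body.

    Proposals landing outside the body (in_body(prop) == False) are rejected
    outright, since the restricted target density is zero there; inside, the
    usual Metropolis-Hastings correction is applied.
    """
    mean = x + step * grad_log_pi(x)
    prop = mean + np.sqrt(2.0 * step) * rng.standard_normal(x.shape)
    if not in_body(prop):
        return x

    def log_q(x_to, x_from):   # Gaussian proposal density, up to a constant
        d = x_to - (x_from + step * grad_log_pi(x_from))
        return -np.dot(d, d) / (4.0 * step)

    log_alpha = log_pi(prop) - log_pi(x) + log_q(x, prop) - log_q(prop, x)
    return prop if np.log(rng.uniform()) < log_alpha else x
```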

28 pages, 1583 KiB  
Article
Ornstein–Uhlenbeck Process on Three-Dimensional Comb under Stochastic Resetting
by Pece Trajanovski, Petar Jolakoski, Ljupco Kocarev and Trifce Sandev
Mathematics 2023, 11(16), 3576; https://doi.org/10.3390/math11163576 - 18 Aug 2023
Cited by 5 | Viewed by 1797
Abstract
The Ornstein–Uhlenbeck (O-U) process with resetting is considered as anomalous transport taking place on a three-dimensional comb. The three-dimensional comb is a comb-inside-a-comb structure, consisting of backbones and fingers in the geometrical correspondence x–backbone → y–fingers–backbone → z–fingers. Realisation of the O-U process on the three-dimensional comb leads to anomalous (non-Markovian) diffusion. This specific anomalous transport in the presence of resets results in non-equilibrium stationary states. Explicit analytical expressions for the mean values and the mean squared displacements along all three directions of the comb are obtained and verified numerically. The marginal probability density functions for each direction are obtained numerically by Monte Carlo simulation of a random transport described by a system of coupled Langevin equations for the comb geometry. Full article
(This article belongs to the Section E4: Mathematical Physics)
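As a simplified point of reference, an Ornstein–Uhlenbeck process with stochastic resetting can be simulated with an Euler–Maruyama scheme plus Poissonian reset events. The one-dimensional sketch below does not reproduce the paper's coupled Langevin equations on the three-dimensional comb; it only illustrates the resetting mechanism.

```python
import numpy as np

def simulate_ou_with_resetting(x0, theta, sigma, reset_rate, dt, n_steps, rng):
    """Euler-Maruyama simulation of the 1D Ornstein-Uhlenbeck process
    dx = -theta * x dt + sigma dW, with Poissonian resetting to x0 at rate r.

    Deliberately simplified to one dimension; the comb geometry is omitted.
    """
    x = np.empty(n_steps + 1)
    x[0] = x0
    for t in range(n_steps):
        if rng.uniform() < reset_rate * dt:          # reset event
            x[t + 1] = x0
        else:                                        # O-U increment
            x[t + 1] = x[t] - theta * x[t] * dt + sigma * np.sqrt(dt) * rng.standard_normal()
    return x
```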

21 pages, 7864 KiB  
Article
Variational Hybrid Monte Carlo for Efficient Multi-Modal Data Sampling
by Shiliang Sun, Jing Zhao, Minghao Gu and Shanhu Wang
Entropy 2023, 25(4), 560; https://doi.org/10.3390/e25040560 - 24 Mar 2023
Cited by 4 | Viewed by 2300
Abstract
The Hamiltonian Monte Carlo (HMC) sampling algorithm exploits Hamiltonian dynamics to construct efficient Markov chain Monte Carlo (MCMC) methods, which have become increasingly popular in machine learning and statistics. Since HMC uses the gradient information of the target distribution, it can explore the state space much more efficiently than random-walk proposals, but it may suffer from high autocorrelation. In this paper, we propose Langevin Hamiltonian Monte Carlo (LHMC) to reduce the autocorrelation of the samples. Probabilistic inference involving multi-modal distributions is very difficult for dynamics-based MCMC samplers, which are easily trapped in a mode far away from other modes. To tackle this issue, we further propose a variational hybrid Monte Carlo (VHMC) method, which uses a variational distribution to explore the phase space and find new modes, and is capable of sampling from multi-modal distributions effectively. A formal proof is provided showing that the proposed method converges to target distributions. Both synthetic and real datasets are used to evaluate its properties and performance. The experimental results verify the theory and show superior performance in multi-modal sampling. Full article
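The dynamics-based samplers discussed here build on the standard HMC transition; a minimal sketch with an identity mass matrix is given below. The LHMC and VHMC modifications proposed in the paper (Langevin-style refreshment and the variational exploration component) are not included.

```python
import numpy as np

def hmc_step(q, log_pi, grad_log_pi, step, n_leapfrog, rng):
    """One standard Hamiltonian Monte Carlo transition with identity mass matrix."""
    p = rng.standard_normal(q.shape)                  # resample momentum
    current_h = -log_pi(q) + 0.5 * p @ p              # Hamiltonian = U(q) + K(p)
    q_new, p_new = q.copy(), p.copy()
    p_new += 0.5 * step * grad_log_pi(q_new)          # initial half kick
    for _ in range(n_leapfrog - 1):
        q_new += step * p_new                         # drift
        p_new += step * grad_log_pi(q_new)            # full kick
    q_new += step * p_new
    p_new += 0.5 * step * grad_log_pi(q_new)          # final half kick
    new_h = -log_pi(q_new) + 0.5 * p_new @ p_new
    return q_new if np.log(rng.uniform()) < current_h - new_h else q
```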

27 pages, 422 KiB  
Article
From Bilinear Regression to Inductive Matrix Completion: A Quasi-Bayesian Analysis
by The Tien Mai
Entropy 2023, 25(2), 333; https://doi.org/10.3390/e25020333 - 11 Feb 2023
Cited by 4 | Viewed by 1977
Abstract
In this paper, we study the problem of bilinear regression, a type of statistical modeling that deals with multiple variables and multiple responses. One of the main difficulties that arise in this problem is the presence of missing data in the response matrix, a problem known as inductive matrix completion. To address these issues, we propose a novel approach that combines elements of Bayesian statistics with a quasi-likelihood method. Our proposed method starts by addressing the problem of bilinear regression using a quasi-Bayesian approach. The quasi-likelihood method that we employ in this step allows us to handle the complex relationships between the variables in a more robust way. Next, we adapt our approach to the context of inductive matrix completion. We make use of a low-rankness assumption and leverage the powerful PAC-Bayes bound technique to provide statistical properties for our proposed estimators and for the quasi-posteriors. To compute the estimators, we propose a Langevin Monte Carlo method to obtain approximate solutions to the problem of inductive matrix completion in a computationally efficient manner. To demonstrate the effectiveness of our proposed methods, we conduct a series of numerical studies. These studies allow us to evaluate the performance of our estimators under different conditions and provide a clear illustration of the strengths and limitations of our approach. Full article
(This article belongs to the Special Issue Recent Advances in Statistical Theory and Applications)
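The Langevin Monte Carlo step used to approximate the quasi-posterior can be sketched generically as an unadjusted Langevin iteration; the gradient callable below is a placeholder, not the paper's quasi-likelihood.

```python
import numpy as np

def lmc_sampler(theta0, grad_log_quasi_post, step, n_samples, rng):
    """Unadjusted Langevin Monte Carlo:

        theta_{t+1} = theta_t + step * grad(log quasi-posterior)(theta_t)
                              + sqrt(2 * step) * N(0, I).
    """
    theta = np.array(theta0, dtype=float)
    samples = []
    for _ in range(n_samples):
        noise = np.sqrt(2.0 * step) * rng.standard_normal(theta.shape)
        theta = theta + step * grad_log_quasi_post(theta) + noise
        samples.append(theta.copy())
    return np.array(samples)
```
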
20 pages, 13011 KiB  
Article
Finite Iterative Forecasting Model Based on Fractional Generalized Pareto Motion
by Wanqing Song, Shouwu Duan, Dongdong Chen, Enrico Zio, Wenduan Yan and Fan Cai
Fractal Fract. 2022, 6(9), 471; https://doi.org/10.3390/fractalfract6090471 - 26 Aug 2022
Cited by 7 | Viewed by 1638
Abstract
In this paper, an efficient prediction model based on the fractional generalized Pareto motion (fGPm) with long-range dependent (LRD) and infinite variance characteristics is proposed. Firstly, we discuss the meaning of each parameter of the generalized Pareto distribution (GPD), and the LRD characteristics of the generalized Pareto motion are analyzed by taking into account the heavy-tailed characteristics of its distribution. Then, the mathematical relationship H = 1/α between the self-similar parameter H and the tail parameter α is obtained. Also, the generalized Pareto increment distribution is obtained using statistical methods, which enables the subsequent derivation of the iterative forecasting model based on the increment form. Secondly, the tail parameter α is introduced to generalize the integral expression of the fractional Brownian motion, and the integral expression of fGPm is obtained. Then, by discretizing the integral expression of fGPm, the statistical characteristics of infinite variance are shown. In addition, in order to study the LRD prediction characteristic of fGPm, LRD and self-similarity analyses are performed on fGPm, and the LRD prediction condition H > 1/α is obtained. Compared to the fractional Brownian motion, which describes LRD by a self-similar parameter H, fGPm introduces the tail parameter α, which increases the flexibility of the LRD description. However, the two parameters are not independent, because of the LRD condition H > 1/α. An iterative prediction model is obtained from the Langevin-type stochastic differential equation driven by fGPm. The prediction model inherits the LRD condition H > 1/α of fGPm, and the time series simulated by the Monte Carlo method shows the superiority of the prediction model in predicting data with high jumps. Finally, this paper uses power load data in two different situations (weekdays and weekends) to verify the validity and general applicability of the forecasting model, which is compared with the fractional Brownian motion prediction model, highlighting the “high jump data prediction advantage” of the fGPm prediction model. Full article
(This article belongs to the Special Issue New Trends in Fractional Stochastic Processes)

19 pages, 5449 KiB  
Article
Monte Carlo Simulation of Stochastic Differential Equation to Study Information Geometry
by Abhiram Anand Thiruthummal and Eun-jin Kim
Entropy 2022, 24(8), 1113; https://doi.org/10.3390/e24081113 - 12 Aug 2022
Cited by 10 | Viewed by 3159
Abstract
Information geometry is a useful tool to study and compare the solutions of stochastic differential equations (SDEs) for non-equilibrium systems. As an alternative to solving the Fokker–Planck equation, we propose a new method to calculate time-dependent probability density functions (PDFs) and to study information geometry using Monte Carlo (MC) simulation of SDEs. Specifically, we develop a new MC SDE method to overcome the challenges in calculating a time-dependent PDF and information geometric diagnostics and to speed up simulations by utilizing GPU computing. Using MC SDE simulations, we reproduce the information geometric scaling relations found from the Fokker–Planck method for the case of a stochastic process with linear and cubic damping terms. We showcase the advantage of MC SDE simulation over Fokker–Planck solvers by calculating unequal-time joint PDFs. For the linear process with a linear damping force, the joint PDF is found to be Gaussian. In contrast, for the cubic process with a cubic damping force, the joint PDF exhibits a bimodal structure, even in a stationary state. This suggests a finite memory time induced by a nonlinear force. Furthermore, several power-law scalings in the characteristics of the bimodal PDFs are identified and investigated. Full article
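A minimal CPU sketch of the MC SDE approach for the cubic-damping case is shown below: an ensemble of paths is advanced with Euler–Maruyama and the time-dependent PDF is estimated by a histogram. The information geometric diagnostics and the GPU implementation described in the paper are not included, and the drift/diffusion choices are illustrative assumptions.

```python
import numpy as np

def mc_sde_pdf(n_paths, n_steps, dt, D, rng, bins=100):
    """Monte Carlo simulation of the cubic process dx = -x**3 dt + sqrt(2 D) dW
    for an ensemble of paths, returning a histogram estimate of the PDF at the
    final time. A GPU version would vectorize the same update on the device.
    """
    x = rng.standard_normal(n_paths)                  # initial ensemble
    for _ in range(n_steps):
        x += -x**3 * dt + np.sqrt(2.0 * D * dt) * rng.standard_normal(n_paths)
    hist, edges = np.histogram(x, bins=bins, density=True)
    centers = 0.5 * (edges[:-1] + edges[1:])
    return centers, hist
```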

20 pages, 1250 KiB  
Article
Locally Scaled and Stochastic Volatility Metropolis–Hastings Algorithms
by Wilson Tsakane Mongwe, Rendani Mbuvha and Tshilidzi Marwala
Algorithms 2021, 14(12), 351; https://doi.org/10.3390/a14120351 - 30 Nov 2021
Cited by 5 | Viewed by 3367
Abstract
Markov chain Monte Carlo (MCMC) techniques are typically used to infer model parameters when closed-form inference is not feasible, with one of the simplest MCMC methods being the random walk Metropolis–Hastings (MH) algorithm. The MH algorithm suffers from random walk behaviour, which results in inefficient exploration of the target posterior distribution. This method has been improved upon, with algorithms such as the Metropolis-adjusted Langevin algorithm (MALA) and Hamiltonian Monte Carlo being popular modifications of MH. In this work, we revisit the MH algorithm to reduce the autocorrelations in the generated samples without adding significant computational time. We present (1) the Stochastic Volatility Metropolis–Hastings (SVMH) algorithm, which is based on using a random scaling matrix in the MH algorithm, and (2) the Locally Scaled Metropolis–Hastings (LSMH) algorithm, in which the scaling matrix depends on the local geometry of the target distribution. For both algorithms, the proposal distribution is still Gaussian, centred at the current state. The empirical results show that these minor additions to the MH algorithm significantly improve the effective sample rates and predictive performance over the vanilla MH method. The SVMH algorithm produces similar effective sample sizes to the LSMH method, with SVMH outperforming LSMH on an execution-time-normalised effective sample size basis. The performance of the proposed methods is also compared to MALA and the current state-of-the-art method, the No-U-Turn sampler (NUTS). The analysis is performed using a simulation study based on Neal’s funnel and multivariate Gaussian distributions, and using real-world data modelled with jump diffusion processes and Bayesian logistic regression. Although both MALA and NUTS outperform the proposed algorithms on an effective sample size basis, the SVMH algorithm has similar or better predictive performance when compared to MALA and NUTS across the various targets. In addition, the SVMH algorithm outperforms the other MCMC algorithms on a normalised effective sample size basis on the jump diffusion datasets. These results indicate the overall usefulness of the proposed algorithms. Full article
(This article belongs to the Special Issue Monte Carlo Methods and Algorithms)
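The stochastic-scaling idea can be sketched as a Gaussian random-walk proposal whose scale is multiplied by a random (here log-normal) factor at every iteration; because the scale is drawn independently of the current state, the mixture proposal stays symmetric and the simple acceptance ratio remains valid. This is an illustration of the general idea, not necessarily the exact SVMH construction of the paper.

```python
import numpy as np

def svmh_style_step(x, log_pi, base_scale, vol_sigma, rng):
    """Random-walk Metropolis-Hastings step with a randomly rescaled proposal.

    At each iteration the proposal standard deviation is multiplied by a
    log-normal "volatility" factor drawn independently of the current state.
    """
    scale = base_scale * rng.lognormal(mean=0.0, sigma=vol_sigma)
    prop = x + scale * rng.standard_normal(x.shape)
    # With a symmetric proposal, the acceptance ratio reduces to the target ratio.
    return prop if np.log(rng.uniform()) < log_pi(prop) - log_pi(x) else x
```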

19 pages, 6036 KiB  
Article
The X-ray Sensitivity of an Amorphous Lead Oxide Photoconductor
by Oleksandr Grynko, Tristen Thibault, Emma Pineau and Alla Reznik
Sensors 2021, 21(21), 7321; https://doi.org/10.3390/s21217321 - 3 Nov 2021
Cited by 12 | Viewed by 2921
Abstract
The photoconductor layer is an important component of direct conversion flat panel X-ray imagers (FPXIs); thus, it should be carefully selected to meet the requirements for the X-ray imaging detector, and its properties should be clearly understood to develop an optimal detector design. Currently, amorphous selenium (a-Se) is the only photoconductor utilized in commercial direct conversion FPXIs for low-energy mammographic imaging, but it is not practically feasible for higher-energy diagnostic imaging. The amorphous lead oxide (a-PbO) photoconductor is considered as a replacement for a-Se in radiography, fluoroscopy, and tomosynthesis applications. In this work, we investigated the X-ray sensitivity of a-PbO, one of the most important parameters for X-ray photoconductors, and examined the underlying mechanisms responsible for charge generation and recombination. The X-ray sensitivity, in terms of the electron–hole pair creation energy W±, was measured over a range of electric fields, X-ray energies, and exposure levels. W± decreases with the electric field and X-ray energy, saturating at 18–31 eV/ehp depending on the energy of the X-rays, but increases with the exposure rate. The peculiar dependencies of W± on these parameters lead to the conclusion that, at electric fields relevant to detector operation (~10 V/μm), the columnar recombination and bulk recombination mechanisms interplay in the a-PbO photoconductor. Full article
(This article belongs to the Special Issue Sensors and X-ray Detectors)

45 pages, 704 KiB  
Article
Accelerated Diffusion-Based Sampling by the Non-Reversible Dynamics with Skew-Symmetric Matrices
by Futoshi Futami, Tomoharu Iwata, Naonori Ueda and Issei Sato
Entropy 2021, 23(8), 993; https://doi.org/10.3390/e23080993 - 30 Jul 2021
Cited by 6 | Viewed by 4608
Abstract
Langevin dynamics (LD) has been extensively studied theoretically and practically as a basic sampling technique. Recently, the incorporation of non-reversible dynamics into LD has been attracting attention because it accelerates the mixing speed of LD. Popular choices for non-reversible dynamics include underdamped Langevin dynamics (ULD), which uses second-order dynamics, and perturbations with skew-symmetric matrices. Although ULD has been widely used in practice, the application of skew acceleration remains limited, even though it is theoretically expected to show superior performance. Current work lacks a theoretical understanding of issues that are important to practitioners, including the selection criteria for skew-symmetric matrices, quantitative evaluations of acceleration, and the large memory cost of storing skew matrices. In this study, we theoretically and numerically clarify these problems by analyzing acceleration, focusing on how the skew-symmetric matrix perturbs the Hessian matrix of potential functions. We also present a practical algorithm that accelerates the standard LD and ULD, using novel memory-efficient skew-symmetric matrices under parallel-chain Monte Carlo settings. Full article
(This article belongs to the Special Issue Approximate Bayesian Inference)
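For reference, underdamped Langevin dynamics, one of the non-reversible baselines mentioned here, can be discretized as follows. This is a simple Euler-type sketch with friction `gamma`, not the paper's skew-accelerated or memory-efficient scheme.

```python
import numpy as np

def uld_step(q, p, grad_log_pi, gamma, step, rng):
    """One Euler-type step of underdamped (kinetic) Langevin dynamics:

        dq = p dt
        dp = grad(log pi)(q) dt - gamma * p dt + sqrt(2 * gamma) dW.

    Second-order dynamics of this kind are non-reversible and are one of the
    schemes compared against skew-symmetric acceleration.
    """
    noise = np.sqrt(2.0 * gamma * step) * rng.standard_normal(p.shape)
    p_new = p + step * (grad_log_pi(q) - gamma * p) + noise
    q_new = q + step * p_new
    return q_new, p_new
```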

16 pages, 2230 KiB  
Article
Comparative Modeling of Frequency Mixing Measurements of Magnetic Nanoparticles Using Micromagnetic Simulations and Langevin Theory
by Ulrich M. Engelmann, Ahmed Shalaby, Carolyn Shasha, Kannan M. Krishnan and Hans-Joachim Krause
Nanomaterials 2021, 11(5), 1257; https://doi.org/10.3390/nano11051257 - 11 May 2021
Cited by 13 | Viewed by 3965
Abstract
Dual-frequency magnetic excitation of magnetic nanoparticles (MNP) enables enhanced biosensing applications. This was studied from an experimental and theoretical perspective: nonlinear sum-frequency components of MNP exposed to dual-frequency magnetic excitation were measured as a function of the static magnetic offset field. The Langevin model in thermodynamic equilibrium was fitted to the experimental data to derive parameters of the lognormal core size distribution. These parameters were subsequently used as inputs for micromagnetic Monte Carlo (MC) simulations. From the hysteresis loops obtained from the MC simulations, sum-frequency components were numerically demodulated and compared with both the experiment and the Langevin model predictions. From the latter, we derived that approximately 90% of the frequency mixing magnetic response signal is generated by the largest 10% of MNP. We therefore suggest that small particles do not contribute to the frequency mixing signal, which is supported by the MC simulation results. Both theoretical approaches describe the experimental signal shapes well, but with notable differences between experiment and micromagnetic simulations. These deviations could result from Brownian relaxation, which, although experimentally inhibited, is included in the MC simulations; from (as yet unconsidered) cluster effects of MNP; or from inaccurately derived inputs for the MC simulations, because the largest particles dominate the experimental signal but concurrently do not fulfill the precondition of thermodynamic equilibrium required by Langevin theory. Full article
(This article belongs to the Special Issue Applications and Properties of Magnetic Nanoparticles)
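The equilibrium Langevin model referred to here gives the magnetization of a monodisperse ensemble as M = Ms·L(ξ) with L(ξ) = coth(ξ) − 1/ξ; a small sketch is below. The lognormal size averaging and the frequency-mixing demodulation used in the paper are omitted, and the function signature is an assumption for illustration.

```python
import numpy as np

def langevin_magnetization(H, core_diam_m, Ms_bulk, T=300.0):
    """Equilibrium Langevin model of superparamagnetic magnetization:

        M = Ms * L(xi),  L(xi) = coth(xi) - 1/xi,
        xi = mu0 * m * H / (kB * T),  m = Ms_bulk * (pi / 6) * d**3.

    SI units: H in A/m, core diameter d in m, Ms_bulk in A/m.
    """
    mu0, kB = 4e-7 * np.pi, 1.380649e-23
    m = Ms_bulk * np.pi / 6.0 * core_diam_m**3           # particle moment [A m^2]
    xi = mu0 * m * np.asarray(H, dtype=float) / (kB * T)
    with np.errstate(divide="ignore", invalid="ignore"):
        L = 1.0 / np.tanh(xi) - 1.0 / xi
    # Use the small-argument limit L(xi) ~ xi / 3 near xi = 0.
    return Ms_bulk * np.where(np.abs(xi) < 1e-8, xi / 3.0, L)
```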

48 pages, 8203 KiB  
Review
An Overview of the Lagrangian Dispersion Modeling of Heavy Particles in Homogeneous Isotropic Turbulence and Considerations on Related LES Simulations
by Daniel G. F. Huilier
Fluids 2021, 6(4), 145; https://doi.org/10.3390/fluids6040145 - 8 Apr 2021
Cited by 20 | Viewed by 6005
Abstract
Particle tracking is a competitive technique widely used in two-phase flows and best suited to simulate the dispersion of heavy particles in the atmosphere. Most Lagrangian models in the statistical approach to turbulence are based either on the eddy interaction model (EIM) and the Monte Carlo method or on random walk models (RWMs) making use of Markov chains and a Langevin equation. In the present work, both discontinuous and continuous random walk techniques are used to model the dispersion of heavy spherical particles in homogeneous isotropic stationary turbulence (HIST). Their efficiency in predicting particle long-time dispersion, mean-square velocity, and Lagrangian integral time scales is discussed. Computation results with zero and non-zero mean drift velocity are reported; they are intended to quantify the inertia, gravity, crossing-trajectory, and continuity effects controlling the dispersion. The calculations concern dense monodisperse spheres in air, with the particle Stokes number ranging from 0.007 to 4. Due to the weaknesses of such models, a more sophisticated matrix method will also be explored, capable of simulating the true fluid turbulence experienced by the particle for long-time dispersion studies. Advances in computer performance have since allowed the development of large eddy simulation (LES) and direct numerical simulation (DNS) of turbulence coupled to generalized Langevin models, instead of Reynolds-averaged Navier–Stokes (RANS)-based studies. A short review of the progress of Lagrangian simulations based on LES will therefore also be provided, highlighting preferential concentration. The theoretical framework for the fluid time correlation functions along the heavy particle path is that suggested by Wang and Stock. Full article
(This article belongs to the Special Issue Numerical Methods and Physical Aspects of Multiphase Flow)
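The continuous random walk models mentioned here typically update the fluid velocity seen by the particle with a discrete Langevin (Ornstein–Uhlenbeck) scheme. A one-dimensional sketch is given below, with `T_L` the Lagrangian integral time scale and `sigma_u` the fluid velocity r.m.s.; the inertia, gravity, and crossing-trajectory effects treated in the review are not included.

```python
import numpy as np

def langevin_rwm_fluid_velocity(n_steps, dt, T_L, sigma_u, rng):
    """Continuous random walk (Langevin) model for the fluid velocity along a path:

        u_{n+1} = a * u_n + sigma_u * sqrt(1 - a**2) * xi_n,   a = exp(-dt / T_L),

    which preserves the velocity variance sigma_u**2 and gives an exponential
    Lagrangian autocorrelation with integral time scale T_L.
    """
    a = np.exp(-dt / T_L)
    u = np.empty(n_steps + 1)
    u[0] = sigma_u * rng.standard_normal()
    for n in range(n_steps):
        u[n + 1] = a * u[n] + sigma_u * np.sqrt(1.0 - a**2) * rng.standard_normal()
    return u
```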
