Phys. Sci. Forum, 2025, Volume 12, Issue 1: MaxEnt 2024

The 43rd International Workshop on Bayesian Inference and Maximum Entropy Methods in Science and Engineering

Ghent, Belgium | 1–7 July 2024

Volume Editors:
Geert Verdoolaege, Ghent University, Belgium

 

Number of Papers: 19
Cover Story: The 43rd International Workshop on Bayesian Inference and Maximum Entropy Methods in Science and Engineering (MaxEnt 2024) continued a long series of MaxEnt workshops lasting 43 years that explored [...]

Editorial


3 pages, 148 KB  
Editorial
Preface and Statement of Peer Review: 43rd International Workshop on Bayesian Inference and Maximum Entropy Methods in Science and Engineering (MaxEnt 2024)
by Geert Verdoolaege
Phys. Sci. Forum 2025, 12(1), 19; https://doi.org/10.3390/psf2025012019 - 10 Dec 2025
Viewed by 95

Proceeding Papers


13 pages, 597 KB  
Proceeding Paper
On Singular Bayesian Inference of Underdetermined Quantities—Part I: Invariant Discrete Ill-Posed Inverse Problems in Small and Large Dimensions
by Fabrice Pautot
Phys. Sci. Forum 2025, 12(1), 1; https://doi.org/10.3390/psf2025012001 - 19 Sep 2025
Viewed by 759
Abstract
When the quantities of interest remain underdetermined a posteriori, we would like to draw inferences for at least one particular solution. Can we do so in a Bayesian way? What is a probability distribution over an underdetermined quantity? How do we get a posterior for one particular solution from a posterior for infinitely many underdetermined solutions? Guided by discrete invariant underdetermined ill-posed inverse problems, we find that a probability distribution over an underdetermined quantity is non-absolutely continuous, partially improper with respect to the initial reference measure but proper with respect to its restriction to its support. Thus, it is necessary and sufficient to choose the prior restricted reference measure to assign partially improper priors using e.g., the principle of maximum entropy and the posterior restricted reference measure to obtain the proper posterior for one particular solution. We can then work with underdetermined models like Hoeffding–Sobol expansions seamlessly, especially to effectively counter the curse of dimensionality within discrete nonparametric inverse problems. We show Singular Bayesian Inference (SBI) at work in an advanced Bayesian optimization application: dynamic pricing. Such a nice generalization of Bayesian–maxentropic inference could motivate many theoretical and practical developments. Full article

10 pages, 761 KB  
Proceeding Paper
Nonparametric FBST for Validating Linear Models
by Rodrigo F. L. Lassance, Julio M. Stern and Rafael B. Stern
Phys. Sci. Forum 2025, 12(1), 2; https://doi.org/10.3390/psf2025012002 - 24 Sep 2025
Viewed by 385
Abstract
In Bayesian analysis, testing for linearity requires placing a prior on the entire space of potential regression functions. This poses a problem for many standard tests, as assigning positive prior probability to such a hypothesis is challenging. The Full Bayesian Significance Test (FBST) sidesteps this issue, standing out for also being logically coherent and offering a measure of evidence against H₀, although its application to nonparametric settings is still limited. In this work, we use Gaussian process priors to derive FBST procedures that evaluate general linearity assumptions, such as testing the adherence of data to linear models and performing variable selection. We also make use of pragmatic hypotheses to verify whether the data might be compatible with a linear model when factors such as measurement errors or utility judgments are accounted for. This contribution extends the theory of the FBST, allowing for its application in nonparametric settings and requiring, at most, simple optimization procedures to reach the desired conclusion. Full article
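The FBST evidence value is the posterior probability of the set of parameter values whose posterior density does not exceed the supremum of the density over the null set. A minimal sketch of that computation on a toy conjugate-normal problem (not the paper's Gaussian-process setting; the data and prior below are hypothetical) is:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
y = rng.normal(loc=0.3, scale=1.0, size=30)            # hypothetical data
# Conjugate posterior for the mean theta: N(0, 10^2) prior, known unit noise variance.
prior_var, sigma2, n = 10.0 ** 2, 1.0, len(y)
post_var = 1.0 / (1.0 / prior_var + n / sigma2)
post_mean = post_var * y.sum() / sigma2
posterior = stats.norm(post_mean, np.sqrt(post_var))

# Tangential set: parameter values whose posterior density exceeds the supremum over H0.
f_star = posterior.pdf(0.0)                             # H0: theta = 0, so the sup is pdf(0)
theta = posterior.rvs(size=100_000, random_state=1)
p_tangent = np.mean(posterior.pdf(theta) > f_star)
print("FBST e-value in favour of H0:", 1.0 - p_tangent)
```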

8 pages, 1008 KB  
Proceeding Paper
Combining Knowledge About Metabolic Networks and Single-Cell Data with Maximum Entropy
by Carola S. Heinzel, Johann F. Jadebeck, Elisabeth Zelle, Johannes Seiffarth and Katharina Nöh
Phys. Sci. Forum 2025, 12(1), 3; https://doi.org/10.3390/psf2025012003 - 24 Sep 2025
Viewed by 623
Abstract
A better understanding of the fitness and flexibility of microbial platform organisms is central to biotechnological process development. Live-cell experiments uncover the phenotypic heterogeneity of living cells, which emerges even within isogenic cell populations. However, how this observed heterogeneity in growth relates to the variability of the intracellular processes that drive cell growth and division is less well understood. Here we address the question of how the observed phenotypic variability in single-cell growth rates links to metabolic processes, specifically intracellular reaction rates (fluxes). To approach this question, we employ the Maximum Entropy (MaxEnt) principle, which allows us to bring together the phenotypic solution space, derived from metabolic network models, with single-cell growth rates observed in live-cell experiments. We apply the computational machinery to first-of-its-kind data on the microorganism Corynebacterium glutamicum, grown on different substrates under continuous medium supply. We compare the MaxEnt-based estimates of metabolic fluxes with estimates obtained by assuming that the average cell operates at its maximum growth rate, which is the current predominant practice in biotechnology. Full article
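As a rough illustration of the MaxEnt step described above, the distribution over sampled flux vectors that maximizes entropy subject to reproducing an observed mean growth rate is an exponential tilt of the uniform samples; the sketch below, with a hypothetical `growth` array and `target_mean`, solves for the corresponding Lagrange multiplier:

```python
import numpy as np
from scipy.optimize import brentq

rng = np.random.default_rng(0)
growth = rng.uniform(0.0, 0.6, size=5000)   # hypothetical growth-rate component of sampled fluxes
target_mean = 0.45                          # hypothetical observed mean single-cell growth rate

def tilted_mean(lam):
    # MaxEnt distribution under a mean constraint: w_i proportional to exp(lam * growth_i).
    w = np.exp(lam * (growth - growth.max()))          # shift by the max for numerical stability
    w /= w.sum()
    return np.sum(w * growth)

lam = brentq(lambda l: tilted_mean(l) - target_mean, -200.0, 200.0)
weights = np.exp(lam * (growth - growth.max()))
weights /= weights.sum()
# `weights` reweight every sampled flux vector; weighted averages give the MaxEnt flux estimates.
print("Lagrange multiplier:", lam, " reweighted mean growth:", np.sum(weights * growth))
```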

10 pages, 790 KB  
Proceeding Paper
A Comparison of MCMC Algorithms for an Inverse Squeeze Flow Problem
by Aricia Rinkens, Rodrigo L. S. Silva, Clemens V. Verhoosel, Nick O. Jaensson and Erik Quaeghebeur
Phys. Sci. Forum 2025, 12(1), 4; https://doi.org/10.3390/psf2025012004 - 22 Sep 2025
Viewed by 481
Abstract
Using Bayesian inference to calibrate constitutive model parameters has recently seen a rise in interest. The Markov chain Monte Carlo (MCMC) algorithm is one of the most commonly used methods to sample from the posterior. However, the choice of which MCMC algorithm to apply is typically pragmatic and based on considerations such as software availability and experience. We compare three commonly used MCMC algorithms: Metropolis-Hastings (MH), Affine Invariant Stretch Move (AISM) and No-U-Turn sampler (NUTS). For the comparison, we use the Kullback-Leibler (KL) divergence as a convergence criterion, which measures the statistical distance between the sampled and the ‘true’ posterior. We apply the Bayesian framework to a Newtonian squeeze flow problem, for which there exists an analytical model. Furthermore, we have collected experimental data using a tailored setup. The ground truth for the posterior is obtained by evaluating it on a uniform reference grid. We conclude that, for the same number of samples, the NUTS results in the lowest KL divergence, followed by the AISM sampler and last the MH sampler. Full article
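A minimal one-dimensional sketch of the convergence criterion, assuming a reference posterior tabulated on a uniform grid and a set of MCMC draws (both hypothetical here), could look like:

```python
import numpy as np

def kl_reference_vs_samples(samples, grid, ref_density):
    """D_KL(reference || sampled), both discretised on the cells around the grid points."""
    dx = grid[1] - grid[0]
    edges = np.concatenate([grid - dx / 2.0, [grid[-1] + dx / 2.0]])
    counts, _ = np.histogram(samples, bins=edges)
    p_samp = counts / counts.sum()
    p_ref = ref_density / ref_density.sum()
    mask = (p_ref > 0) & (p_samp > 0)        # skip empty cells in this rough sketch
    return np.sum(p_ref[mask] * np.log(p_ref[mask] / p_samp[mask]))

# Hypothetical usage: a standard-normal "true" posterior and MCMC-like draws from it.
grid = np.linspace(-5.0, 5.0, 200)
reference = np.exp(-0.5 * grid ** 2)
draws = np.random.default_rng(0).normal(size=20_000)
print(kl_reference_vs_samples(draws, grid, reference))
```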

8 pages, 1340 KB  
Proceeding Paper
Trans-Dimensional Diffusive Nested Sampling for Metabolic Network Inference
by Johann Fredrik Jadebeck, Wolfgang Wiechert and Katharina Nöh
Phys. Sci. Forum 2025, 12(1), 5; https://doi.org/10.3390/psf2025012005 - 24 Sep 2025
Viewed by 472
Abstract
Bayesian analysis is particularly useful for inferring models and their parameters given data. This is a common task in metabolic modeling, where models of varying complexity are used to interpret data. Nested sampling is a class of probabilistic inference algorithms that are particularly effective for estimating evidence and sampling the parameter posterior probability distributions. However, the practicality of nested sampling for metabolic network inference has yet to be studied. In this technical report, we explore the amalgamation of nested sampling, specifically diffusive nested sampling, with reversible jump Markov chain Monte Carlo. We apply the algorithm to two synthetic problems from the field of metabolic flux analysis. We present run times and share insights into hyperparameter choices, providing a useful point of reference for future applications of nested sampling to metabolic flux problems. Full article

11 pages, 1274 KB  
Proceeding Paper
The Value of Information in Economic Contexts
by Stefan Behringer and Roman V. Belavkin
Phys. Sci. Forum 2025, 12(1), 6; https://doi.org/10.3390/psf2025012006 - 23 Sep 2025
Viewed by 404
Abstract
This paper explores the application of the Value of Information (VoI), based on the Claude Shannon/Ruslan Stratonovich framework, within economic contexts. Unlike previous studies that examine circular settings and strategic interactions, we focus on a non-strategic linear setting. We employ standard economically motivated utility functions, including linear, quadratic, constant absolute risk aversion (CARA), and constant relative risk aversion (CRRA), across various priors of the stochastic environment, and analyse the resulting specific VoI forms. The curvature of these VoI functions plays a decisive role in determining whether acquiring additional costly information enhances the efficiency of the decision-making process. We also outline potential implications for broader decision-making frameworks. Full article
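As a numerical illustration of the Value of Information for a CARA decision maker, the sketch below (with hypothetical payoffs, probabilities, and risk aversion, not the paper's specification) compares the expected utility of acting under the prior with that of acting under perfect information and reports the difference as a certainty equivalent:

```python
import numpy as np
from scipy.optimize import minimize_scalar

alpha, w0 = 2.0, 1.0                       # hypothetical risk aversion and initial wealth
payoffs = np.array([0.8, -0.5])            # hypothetical net returns of the risky option
probs = np.array([0.6, 0.4])               # prior probabilities of the two states

def u(w):
    return -np.exp(-alpha * w)              # CARA utility

def expected_u(a, p):
    return np.sum(p * u(w0 + a * payoffs))  # expected utility of stake a under belief p

# Best single stake chosen under the prior (no information).
res = minimize_scalar(lambda a: -expected_u(a, probs), bounds=(0.0, 1.0), method="bounded")
eu_prior = -res.fun

# Perfect information: the best stake is chosen after the state is revealed.
eu_perfect = np.sum(probs * np.maximum(u(w0 + payoffs), u(w0)))

cert_equiv = lambda eu: -np.log(-eu) / alpha           # invert the CARA utility
print("VoI as a certainty equivalent:", cert_equiv(eu_perfect) - cert_equiv(eu_prior))
```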

14 pages, 300 KB  
Proceeding Paper
Exploring Quantized Entropy Production Strength in Mesoscopic Irreversible Thermodynamics
by Giorgio Sonnino
Phys. Sci. Forum 2025, 12(1), 7; https://doi.org/10.3390/psf2025012007 - 13 Oct 2025
Viewed by 380
Abstract
This letter aims to investigate thermodynamic processes in small systems in the Onsager region by showing that fundamental quantities such as the total entropy production can be discretized on the mesoscopic scale. Even the thermodynamic variables conjugate to the thermodynamic forces, and thus Glansdorff–Prigogine’s dissipative variable, may be discretized. The canonical commutation rules (CCRs) valid at the mesoscopic scale are postulated, and the measurement process consists of determining the eigenvalues of the operators associated with the thermodynamic quantities. The nature of the quantized quantity β entering the CCRs is investigated by a heuristic model for a nano-gas and analyzed through the tools of classical statistical physics. We conclude that, according to our model, the constant β does not appear to be a new fundamental constant but corresponds to the minimum value. Full article

10 pages, 1916 KB  
Proceeding Paper
Nested Sampling for Exploring Lennard-Jones Clusters
by Lune Maillard, Fabio Finocchi, César Godinho and Martino Trassinelli
Phys. Sci. Forum 2025, 12(1), 8; https://doi.org/10.3390/psf2025012008 - 13 Oct 2025
Cited by 1 | Viewed by 371
Abstract
Lennard-Jones clusters, while a simple system, have a significant number of non-equivalent configurations that increases rapidly with the number of atoms in the cluster. Here, we aim at determining the cluster partition function; we use the nested sampling algorithm, which transforms the multidimensional integral into a one-dimensional one, to perform this task. In particular, we use the nested_fit program, which implements slice sampling as its search algorithm. We study the 7-atom and 36-atom clusters to benchmark nested_fit for the exploration of potential energy surfaces. We find that nested_fit is able to recover phase transitions and find different stable configurations of the cluster. Furthermore, the implementation of the slice sampling algorithm has a clear impact on the computational cost. Full article
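Schematically, nested sampling reduces the configuration-space integral to a one-dimensional sum over shrinking prior-volume shells; a minimal sketch of how a sequence of discarded energies would be turned into a partition-function estimate (this is not nested_fit itself, and the energies below are fabricated placeholders) is:

```python
import numpy as np

def partition_function(energies, n_live, beta):
    """Z(beta) from the discarded-point energies of a nested-sampling run (in removal order)."""
    i = np.arange(1, len(energies) + 1)
    X = np.exp(-i / n_live)                                   # estimated remaining phase-space fraction
    widths = np.concatenate([[1.0 - X[0]], X[:-1] - X[1:]])   # volume of each shell
    return np.sum(np.exp(-beta * np.asarray(energies)) * widths)

# Hypothetical usage with fabricated energies standing in for a Lennard-Jones run:
rng = np.random.default_rng(0)
fake_energies = np.sort(rng.uniform(-12.0, 0.0, size=3000))[::-1]   # decreasing, as discarded
print(partition_function(fake_energies, n_live=100, beta=2.0))
```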

11 pages, 2705 KB  
Proceeding Paper
Understanding Exoplanet Habitability: A Bayesian ML Framework for Predicting Atmospheric Absorption Spectra
by Vasuda Trehan, Kevin H. Knuth and M. J. Way
Phys. Sci. Forum 2025, 12(1), 9; https://doi.org/10.3390/psf2025012009 - 13 Oct 2025
Viewed by 704
Abstract
The evolution of space technology in recent years, fueled by advancements in computing such as Artificial Intelligence (AI) and machine learning (ML), has profoundly transformed our capacity to explore the cosmos. Missions like the James Webb Space Telescope (JWST) have made information about distant objects more easily accessible, resulting in extensive amounts of valuable data. As part of this work-in-progress study, we are working to create an atmospheric absorption spectrum prediction model for exoplanets. The eventual model will be based on both collected observational spectra and synthetic spectral data generated by the ROCKE-3D general circulation model (GCM) developed by the climate modeling program at NASA’s Goddard Institute for Space Studies (GISS). In this initial study, spline curves are used to describe the bin heights of simulated atmospheric absorption spectra as a function of one of the planetary parameters. Bayesian Adaptive Exploration is then employed to identify areas of the planetary parameter space for which more data are needed to improve the model. The resulting system will be used as a forward model so that planetary parameters can be inferred given a planet’s atmospheric absorption spectrum. This work is expected to contribute to a better understanding of exoplanetary properties and general exoplanet climates and habitability. Full article
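A minimal sketch of the adaptive-exploration idea: fit a smooth model with quantified uncertainty to one spectral-bin height as a function of one planetary parameter, then propose the next simulation where the predictive uncertainty is largest. A Gaussian-process curve is used below purely for illustration (the paper describes spline models), and all data are hypothetical:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(1)
param = rng.uniform(0.0, 1.0, size=8)[:, None]                        # hypothetical parameter values
bin_height = np.sin(4.0 * param).ravel() + 0.05 * rng.normal(size=8)  # hypothetical bin heights

gp = GaussianProcessRegressor(kernel=RBF(0.2) + WhiteKernel(1e-3), normalize_y=True)
gp.fit(param, bin_height)

grid = np.linspace(0.0, 1.0, 500)[:, None]
mean, std = gp.predict(grid, return_std=True)
next_param = grid[np.argmax(std), 0]          # the parameter value where another simulation helps most
print("propose the next simulation at parameter value:", next_param)
```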

12 pages, 1558 KB  
Proceeding Paper
Model-Based and Physics-Informed Deep Learning Neural Network Structures
by Ali Mohammad-Djafari, Ning Chu, Li Wang, Caifang Cai and Liang Yu
Phys. Sci. Forum 2025, 12(1), 10; https://doi.org/10.3390/psf2025012010 - 20 Oct 2025
Viewed by 556
Abstract
Neural Networks (NNs) have been used in many areas with great success. When an NN’s structure (model) is given, during the training steps, the parameters of the model are determined using an appropriate criterion and an optimization algorithm (training). Then, the trained model can be used for the prediction or inference step (testing). As there are also many hyperparameters related to optimization criteria and optimization algorithms, a validation step is necessary before the NN’s final use. One of the great difficulties is the choice of NN structure. Even if there are many “off-the-shelf” networks, selecting or proposing an appropriate new network for a given data, signal, or image processing task is still an open problem. In this work, we consider this problem using model-based signal and image processing and inverse problems methods. We classify the methods into five classes: (i) explicit analytical solutions, (ii) transform domain decomposition, (iii) operator decomposition, (iv) unfolding optimization algorithms, and (v) physics-informed NN methods (PINNs). A few examples in each category are explained. Full article
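To make class (iv) concrete, the sketch below unrolls a few ISTA iterations for a sparse linear inverse problem y = Hx + noise into a layered structure; in a learned (LISTA-style) network, the per-layer weights and thresholds would become trainable parameters. This is an illustrative sketch, not taken from the paper:

```python
import numpy as np

def soft_threshold(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def unfolded_ista(y, H, n_layers, lam):
    """A fixed number of ISTA iterations written as stacked 'layers' with shared weights."""
    L = np.linalg.norm(H, 2) ** 2           # Lipschitz constant of the data-fit gradient
    W1 = np.eye(H.shape[1]) - H.T @ H / L   # layer-to-layer ("recurrent") weight
    W2 = H.T / L                            # input weight
    x = np.zeros(H.shape[1])
    for _ in range(n_layers):               # one loop pass = one network layer
        x = soft_threshold(W1 @ x + W2 @ y, lam / L)
    return x

# Hypothetical usage on a small random sparse-recovery problem:
rng = np.random.default_rng(0)
H = rng.normal(size=(30, 60))
x_true = np.zeros(60)
x_true[[3, 17, 42]] = [1.0, -2.0, 0.5]
y = H @ x_true + 0.01 * rng.normal(size=30)
x_hat = unfolded_ista(y, H, n_layers=200, lam=0.1)
print("recovered support:", np.flatnonzero(np.abs(x_hat) > 0.05))
```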

10 pages, 632 KB  
Proceeding Paper
Nonparametric Full Bayesian Significance Testing for Bayesian Histograms
by Fernando Corrêa, Julio Michael Stern and Rafael Bassi Stern
Phys. Sci. Forum 2025, 12(1), 11; https://doi.org/10.3390/psf2025012011 - 20 Oct 2025
Viewed by 281
Abstract
In this article, we present an extension of the Full Bayesian Significance Test (FBST) for nonparametric settings, termed NP-FBST, which is constructed using the limit of finite dimension histograms. The test statistics for NP-FBST are based on a plug-in estimate of the cross-entropy between the null hypothesis and a histogram. This method shares similarities with Kullback–Leibler and entropy-based goodness-of-fit tests, but it can be applied to a broader range of hypotheses and is generally less computationally intensive. We demonstrate that when the number of histogram bins increases slowly with the sample size, the NP-FBST is consistent for Lipschitz continuous data-generating densities. Additionally, we propose an algorithm to optimize the NP-FBST. Through simulations, we compare the performance of the NP-FBST to traditional methods for testing uniformity. Our results indicate that the NP-FBST is competitive in terms of power, even surpassing the most powerful likelihood-ratio-based procedures for very small sample sizes. Full article
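A rough sketch of the plug-in statistic for a uniformity null: build a histogram density estimate from the data and evaluate its cross-entropy against the null density over the bins. The direction of the cross-entropy, the light smoothing of empty bins, and the data below are illustrative assumptions, not the paper's exact construction:

```python
import numpy as np

def cross_entropy_stat(x, null_pdf, n_bins):
    counts, edges = np.histogram(x, bins=n_bins, range=(0.0, 1.0))
    widths = np.diff(edges)
    dens = (counts + 0.5) / ((counts + 0.5).sum() * widths)   # lightly smoothed histogram density
    centers = 0.5 * (edges[:-1] + edges[1:])
    # Plug-in cross-entropy of the histogram under the null, discretised over the bins.
    return -np.sum(null_pdf(centers) * widths * np.log(dens))

uniform_pdf = lambda t: np.ones_like(t)                       # null: uniform density on [0, 1]
x = np.random.default_rng(0).beta(2.0, 2.0, size=500)         # hypothetical (non-uniform) data
print(cross_entropy_stat(x, uniform_pdf, n_bins=20))
```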

10 pages, 2230 KB  
Proceeding Paper
Bayesian Functional Data Analysis in Astronomy
by Thomas Loredo, Tamás Budavári, David Kent and David Ruppert
Phys. Sci. Forum 2025, 12(1), 12; https://doi.org/10.3390/psf2025012012 - 4 Nov 2025
Viewed by 291
Abstract
Cosmic demographics—the statistical study of populations of astrophysical objects—has long relied on tools from multivariate statistics for analyzing data comprising fixed-length vectors of properties of objects, as might be compiled in a tabular astronomical catalog (say, with sky coordinates, and brightness measurements in a fixed number of spectral passbands). But beginning with the emergence of automated digital sky surveys, ca. 2000, astronomers began producing large collections of data with more complex structures: light curves (brightness time series) and spectra (brightness vs. wavelength). These comprise what statisticians call functional data—measurements of populations of functions. Upcoming automated sky surveys will soon provide astronomers with a flood of functional data. New methods are needed to accurately and optimally analyze large ensembles of light curves and spectra, accumulating information both along individual measured functions and across a population of such functions. Functional data analysis (FDA) provides tools for statistical modeling of functional data. Astronomical data presents several challenges for FDA methodology, e.g., sparse, irregular, and asynchronous sampling, and heteroscedastic measurement error. Bayesian FDA uses hierarchical Bayesian models for function populations, and is well suited to addressing these challenges. We provide an overview of astronomical functional data and some key Bayesian FDA modeling approaches, including functional mixed effects models, and stochastic process models. We briefly describe a Bayesian FDA framework combining FDA and machine learning methods to build low-dimensional parametric models for galaxy spectra. Full article

10 pages, 1742 KB  
Proceeding Paper
Bayesian Integrated Data Analysis and Experimental Design for External Magnetic Plasma Diagnostics in DEMO
by Jeffrey De Rycke, Alfredo Pironti, Marco Ariola, Antonio Quercia and Geert Verdoolaege
Phys. Sci. Forum 2025, 12(1), 13; https://doi.org/10.3390/psf2025012013 - 4 Nov 2025
Viewed by 320
Abstract
Magnetic confinement nuclear fusion offers a promising solution to the world’s growing energy demands. The DEMO reactor presented here aims to bridge the gap between laboratory fusion experiments and practical electricity generation, posing unique challenges for magnetic plasma diagnostics due to limited space for diagnostic equipment. This study employs Bayesian inference and Gaussian process modeling to integrate data from pick-up coils, flux loops, and saddle coils, enabling a qualitative estimation of the plasma current density distribution relying only on external magnetic measurements. The methodology successfully infers total plasma current, plasma centroid position, and six plasma–wall gap positions, while adhering to DEMO’s stringent accuracy standards. Additionally, the interchangeability between normal pick-up coils and saddle coils was assessed, revealing a clear preference for saddle coils. Initial steps were taken to utilize Bayesian experimental design for optimizing the orientation (normal or tangential) of pick-up coils within DEMO’s design constraints to improve the diagnostic setup’s inference precision. Our approach indicates the feasibility of Bayesian integrated data analysis in achieving precise and accurate probability distributions of plasma parameters crucial for the successful operation of DEMO. Full article

8 pages, 1080 KB  
Proceeding Paper
Inverse Bayesian Methods for Groundwater Vulnerability Assessment
by Nasrin Taghavi, Robert K. Niven, Matthias Kramer and David J. Paull
Phys. Sci. Forum 2025, 12(1), 14; https://doi.org/10.3390/psf2025012014 - 5 Nov 2025
Viewed by 195
Abstract
Groundwater vulnerability assessment (GVA) is critical for understanding contaminant migration into groundwater systems, yet conventional methods often overlook its probabilistic nature. Bayesian inference offers a robust framework using Bayes’ rule to enhance decision-making through posterior probability calculations. This study introduces inverse Bayesian methods for GVA using spatial-series data, focusing on nitrate concentrations in groundwater as an indicator of groundwater vulnerability in agricultural catchments. Using the joint maximum a-posteriori (JMAP) and variational Bayesian approximation (VBA) algorithms, the advantages of the Bayesian framework over traditional index-based methods are demonstrated for GVA of the Burdekin Basin, Queensland, Australia. This provides an evidence-based methodology for GVA which enables model ranking, parameter estimation, and uncertainty quantification. Full article

10 pages, 817 KB  
Proceeding Paper
Automatic Modeling and Object Identification in Radio Astronomy
by Richard Fuchs, Jakob Knollmüller and Lukas Heinrich
Phys. Sci. Forum 2025, 12(1), 15; https://doi.org/10.3390/psf2025012015 - 5 Nov 2025
Viewed by 169
Abstract
Building appropriate models is crucial for imaging tasks in many fields but often challenging due to the richness of the systems. In radio astronomy, for example, wide-field observations can contain various and superposed structures that require different descriptions, such as filaments, point sources or compact objects. This work presents an automatic pipeline that iteratively adapts probabilistic models for such complex systems in order to improve the reconstructed images. It uses the Bayesian imaging library NIFTy, which is formulated in the language of information field theory. Starting with a preliminary reconstruction using a simple and flexible model, the pipeline employs deep learning and clustering methods to identify and separate different objects. In a further step, these objects are described by adding new building blocks to the model, allowing for a component separation in the next reconstruction step. This procedure can be repeated several times for refinement to iteratively improve the overall reconstruction. In addition, the individual components can be modeled at different resolutions allowing us to focus on important parts of the emission field without getting computationally too expensive. Full article

10 pages, 291 KB  
Proceeding Paper
Maximum Entropy Production for Optimizing Carbon Catalysis: An Active-Matter-Inspired Approach
by Klaus Regenauer-Lieb, Manman Hu, Hui Tong Chua, Victor Calo, Boris Yakobson, Evgeny P. Zemskov et al.
Phys. Sci. Forum 2025, 12(1), 16; https://doi.org/10.3390/psf2025012016 - 15 Nov 2025
Abstract
The static topology of surface characteristics and active sites in catalysis overlooks a crucial element: the dynamic processes of optimal pattern formation over time and the creation of intermediate structures that enhance reactions. Nature’s principle of coupling reaction and motion in catalytic processes by enzymes or higher organisms offers a new perspective. This work explores a novel theoretical approach by adding the time dimension to optimise topological variations using the Maximum Entropy Production (MEP) assumption. This approach recognises that the catalyst surface is not an unchanging energy landscape but can change dynamically. The time-dependent transport problem of molecules is here interpreted by a non-equilibrium model used for modelling and predicting dynamic pattern formation in excitable media, a class of active matter requiring an activation threshold. We present a nonlocal reaction–cross-diffusion (RXD) formulation of catalytic reactions that can capture the catalyst’s interaction with the target molecule in space and time. The approach provides a theoretical basis for future deep learning models and multiphysics upscaling of catalysts and their support structures across multiphysics fields. The particular advantage of the RXD approach is that it allows each scale to investigate dynamic pattern-forming processes using linear and nonlinear stability analysis, thus establishing a rule base for developing new catalysts. Full article

10 pages, 5564 KB  
Proceeding Paper
Bayesian Regularization for Dynamical System Identification: Additive Noise Models
by Robert K. Niven, Laurent Cordier, Ali Mohammad-Djafari, Markus Abel and Markus Quade
Phys. Sci. Forum 2025, 12(1), 17; https://doi.org/10.3390/psf2025012017 - 14 Nov 2025
Viewed by 106
Abstract
Consider the dynamical system ẋ = f(x), where x ∈ ℝⁿ is the state vector, ẋ is the time or spatial derivative, and f is the system model. We wish to identify the unknown f from its time-series or spatial data. For this, we propose a Bayesian framework based on the maximum a posteriori (MAP) point estimate, to give a generalized Tikhonov regularization method with the residual and regularization terms identified, respectively, with the negative logarithms of the likelihood and prior distributions. As well as estimates of the model coefficients, the Bayesian interpretation provides access to the full Bayesian apparatus, including the ranking of models, the quantification of model uncertainties, and the estimation of unknown (nuisance) hyperparameters. For multivariate Gaussian likelihood and prior distributions, the Bayesian formulation gives a Gaussian posterior distribution, in which the numerator contains a Mahalanobis distance or “Gaussian norm”. In this study, two Bayesian algorithms for the estimation of hyperparameters—the joint maximum a posteriori (JMAP) and variational Bayesian approximation (VBA)—are compared to the popular SINDy, LASSO, and ridge regression algorithms for the analysis of several dynamical systems with additive noise. We consider two dynamical systems, the Lorenz convection system and the Shil’nikov cubic system, with four choices of noise model: symmetric Gaussian or Laplace noise and skewed Rayleigh or Erlang noise, with different magnitudes. The posterior Gaussian norm is found to provide a robust metric for quantitative model selection—with quantification of the model uncertainties—across all dynamical systems and noise models examined. Full article
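For fixed hyperparameters, the Gaussian MAP estimate described above has the familiar generalized-Tikhonov closed form; a minimal sketch for one state equation regressed on a candidate-term library (hypothetical library, coefficients, and noise levels; the paper's JMAP/VBA algorithms additionally estimate the hyperparameters) is:

```python
import numpy as np

def map_coefficients(Theta, dxdt, noise_var, prior_var):
    """Posterior mode of w for dxdt = Theta @ w + Gaussian noise, with a zero-mean Gaussian prior.

    Minimizes the "Gaussian norm" objective ||Theta w - dxdt||^2 / noise_var + ||w||^2 / prior_var.
    """
    A = Theta.T @ Theta / noise_var + np.eye(Theta.shape[1]) / prior_var
    b = Theta.T @ dxdt / noise_var
    return np.linalg.solve(A, b)

# Hypothetical usage on one Lorenz-like equation, xdot = 10 (y - x), observed with additive noise:
rng = np.random.default_rng(0)
x, y, z = rng.normal(size=(3, 400))
Theta = np.column_stack([x, y, z, x * y, x * z, np.ones_like(x)])   # candidate-term library
dxdt = 10.0 * (y - x) + 0.1 * rng.normal(size=400)
print(np.round(map_coefficients(Theta, dxdt, noise_var=0.01, prior_var=100.0), 2))
```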

7 pages, 2393 KB  
Proceeding Paper
Determination of Uncertainty Model of a Particle-Reflection-Distribution
by Roland Preuss and Udo von Toussaint
Phys. Sci. Forum 2025, 12(1), 18; https://doi.org/10.3390/psf2025012018 - 24 Nov 2025
Viewed by 80
Abstract
The modelling of plasma–wall interactions (PWIs) depends on distributions describing the angle and energy distribution of particles scattered at the first wall of fusion devices. Most PWI codes rely on extensive tables based on data from reflection simulations, employing a Monte Carlo method. At first glance, the uncertainty distribution of the data should be assumed Gaussian. However, in order to obtain the resulting particle distribution, the reflected ions are counted within angle sections of the upper hemisphere, which hints at a Poisson uncertainty distribution. In this paper, we let Bayesian model comparison decide which uncertainty model should be taken. Full article
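A minimal sketch of the comparison, assuming the predicted bin means are fixed so that the Bayes factor reduces to a likelihood ratio (the paper's full treatment also marginalizes over model parameters; all numbers below are hypothetical):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
mu = np.array([3.0, 7.0, 15.0, 40.0, 12.0, 5.0])   # hypothetical predicted counts per angular bin
counts = rng.poisson(mu)                           # hypothetical simulated reflection counts

log_like_poisson = stats.poisson.logpmf(counts, mu).sum()
log_like_gauss = stats.norm.logpdf(counts, loc=mu, scale=np.sqrt(mu)).sum()   # sigma^2 = mu

# With equal prior model probabilities and no free parameters left, the log Bayes factor
# is just the log-likelihood ratio between the two uncertainty models.
print("log Bayes factor (Poisson vs. Gaussian):", log_like_poisson - log_like_gauss)
```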
