
Phys. Sci. Forum, 2025, MaxEnt 2024

The 43rd International Workshop on Bayesian Inference and Maximum Entropy Methods in Science and Engineering
Ghent, Belgium | 1–7 July 2024

Volume Editors:
Geert Verdoolaege, Ghent University, Belgium

 

Number of Papers: 11
Cover Story: The 43rd International Workshop on Bayesian Inference and Maximum Entropy Methods in Science and Engineering (MaxEnt 2024) continued a long series of MaxEnt workshops lasting 43 years that explored [...]

13 pages, 597 KB  
Proceeding Paper
On Singular Bayesian Inference of Underdetermined Quantities—Part I: Invariant Discrete Ill-Posed Inverse Problems in Small and Large Dimensions
by Fabrice Pautot
Phys. Sci. Forum 2025, 12(1), 1; https://doi.org/10.3390/psf2025012001 - 19 Sep 2025
Abstract
When the quantities of interest remain underdetermined a posteriori, we would like to draw inferences for at least one particular solution. Can we do so in a Bayesian way? What is a probability distribution over an underdetermined quantity? How do we get a posterior for one particular solution from a posterior for infinitely many underdetermined solutions? Guided by discrete invariant underdetermined ill-posed inverse problems, we find that a probability distribution over an underdetermined quantity is non-absolutely continuous, partially improper with respect to the initial reference measure but proper with respect to its restriction to its support. Thus, it is necessary and sufficient to choose the prior restricted reference measure to assign partially improper priors, using, e.g., the principle of maximum entropy, and the posterior restricted reference measure to obtain the proper posterior for one particular solution. We can then work with underdetermined models like Hoeffding–Sobol expansions seamlessly, especially to effectively counter the curse of dimensionality within discrete nonparametric inverse problems. We show Singular Bayesian Inference (SBI) at work in an advanced Bayesian optimization application: dynamic pricing. Such a nice generalization of Bayesian–maxentropic inference could motivate many theoretical and practical developments.

10 pages, 761 KB  
Proceeding Paper
Nonparametric FBST for Validating Linear Models
by Rodrigo F. L. Lassance, Julio M. Stern and Rafael B. Stern
Phys. Sci. Forum 2025, 12(1), 2; https://doi.org/10.3390/psf2025012002 - 24 Sep 2025
Abstract
In Bayesian analysis, testing for linearity requires placing a prior on the entire space of potential regression functions. This poses a problem for many standard tests, as assigning positive prior probability to such a hypothesis is challenging. The Full Bayesian Significance Test (FBST) sidesteps this issue, standing out for also being logically coherent and offering a measure of evidence against H0, although its application to nonparametric settings is still limited. In this work, we use Gaussian process priors to derive FBST procedures that evaluate general linearity assumptions, such as testing the adherence of data to linear models and performing variable selection. We also make use of pragmatic hypotheses to verify whether the data might be compatible with a linear model when factors such as measurement errors or utility judgments are accounted for. This contribution extends the theory of the FBST, allowing for its application in nonparametric settings and requiring, at most, simple optimization procedures to reach the desired conclusion.
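The e-value at the heart of the FBST can be sketched on a toy parametric problem (not the Gaussian-process setting of this paper): estimate the posterior density from draws, find its supremum over the null set, and report the posterior mass of points whose density does not exceed that supremum. The posterior below is an illustrative assumption, not taken from the paper.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)

# Toy posterior for theta = (mu, sigma), approximated by draws;
# the sharp null hypothesis is H0: mu = 0 (all numbers illustrative).
mu = rng.normal(0.3, 0.2, size=4000)
sigma = rng.normal(1.0, 0.1, size=4000)
samples = np.vstack([mu, sigma])

# Kernel density estimate of the joint posterior.
kde = gaussian_kde(samples)

# Supremum of the posterior density over the null set {mu = 0} (grid search).
sig_grid = np.linspace(sigma.min(), sigma.max(), 400)
f_star = kde(np.vstack([np.zeros_like(sig_grid), sig_grid])).max()

# FBST e-value: posterior mass of points whose density does not exceed
# f_star, i.e. one minus the mass of the 'tangential' set.
ev = float(np.mean(kde(samples) <= f_star))
print(f"e-value in favour of H0 (mu = 0): {ev:.3f}")
```

A small e-value means the null set sits far below the posterior mode, counting as evidence against H0; the paper's contribution is making this computable when the prior lives on a function space.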

8 pages, 1008 KB  
Proceeding Paper
Combining Knowledge About Metabolic Networks and Single-Cell Data with Maximum Entropy
by Carola S. Heinzel, Johann F. Jadebeck, Elisabeth Zelle, Johannes Seiffarth and Katharina Nöh
Phys. Sci. Forum 2025, 12(1), 3; https://doi.org/10.3390/psf2025012003 - 24 Sep 2025
Abstract
A better understanding of the fitness and flexibility of microbial platform organisms is central to biotechnological process development. Live-cell experiments uncover the phenotypic heterogeneity of living cells, which emerges even within isogenic cell populations. However, how this observed heterogeneity in growth relates to the variability of the intracellular processes that drive cell growth and division is less well understood. Here we approach the question of how the observed phenotypic variability in single-cell growth rates links to metabolic processes, specifically intracellular reaction rates (fluxes). To this end, we employ the Maximum Entropy (MaxEnt) principle, which allows us to bring together the phenotypic solution space, derived from metabolic network models, with single-cell growth rates observed in live-cell experiments. We apply the computational machinery to first-of-its-kind data from the microorganism Corynebacterium glutamicum, grown on different substrates under continuous medium supply. We compare the MaxEnt-based estimates of metabolic fluxes with estimates obtained by assuming that the average cell operates at its maximum growth rate, which is the current predominant practice in biotechnology.
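The MaxEnt step described above — reweighting samples of the flux solution space so that a population-level observable matches its measured average — can be sketched with synthetic numbers. The flux samples, growth-rate map, and observed mean below are hypothetical placeholders, not the paper's network or data.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical stand-in for samples from the metabolic solution space:
# each row is a flux vector, mapped to a toy growth rate.
fluxes = rng.uniform(0.0, 2.0, size=(5000, 4))
growth = fluxes @ np.array([0.1, 0.2, 0.3, 0.4])

observed_mean_growth = 1.3   # hypothetical population-average growth rate

# MaxEnt over the sampled support: maximize the entropy of weights w_i
# subject to sum(w_i) = 1 and sum(w_i * growth_i) = observed mean.
# The solution has the Boltzmann form w_i ∝ exp(lam * growth_i);
# the multiplier lam is found by bisection on the monotone map lam -> mean.
def weighted_mean(lam):
    w = np.exp(lam * (growth - growth.max()))  # shift for numerical stability
    w /= w.sum()
    return w @ growth

lo, hi = -50.0, 50.0
for _ in range(200):
    mid = 0.5 * (lo + hi)
    if weighted_mean(mid) < observed_mean_growth:
        lo = mid
    else:
        hi = mid
lam = 0.5 * (lo + hi)
w = np.exp(lam * (growth - growth.max()))
w /= w.sum()
print(f"lambda = {lam:.3f}, reweighted mean growth = {w @ growth:.3f}")
```

The reweighted samples then serve as the MaxEnt estimate of the flux distribution consistent with the observed growth data.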

10 pages, 790 KB  
Proceeding Paper
A Comparison of MCMC Algorithms for an Inverse Squeeze Flow Problem
by Aricia Rinkens, Rodrigo L. S. Silva, Clemens V. Verhoosel, Nick O. Jaensson and Erik Quaeghebeur
Phys. Sci. Forum 2025, 12(1), 4; https://doi.org/10.3390/psf2025012004 - 22 Sep 2025
Abstract
Using Bayesian inference to calibrate constitutive model parameters has recently seen a rise in interest. Markov chain Monte Carlo (MCMC) algorithms are among the most commonly used methods to sample from the posterior. However, the choice of which MCMC algorithm to apply is typically pragmatic and based on considerations such as software availability and experience. We compare three commonly used MCMC algorithms: Metropolis–Hastings (MH), the Affine Invariant Stretch Move (AISM), and the No-U-Turn sampler (NUTS). For the comparison, we use the Kullback–Leibler (KL) divergence as a convergence criterion, which measures the statistical distance between the sampled and the ‘true’ posterior. We apply the Bayesian framework to a Newtonian squeeze flow problem, for which an analytical model exists. Furthermore, we have collected experimental data using a tailored setup. The ground truth for the posterior is obtained by evaluating it on a uniform reference grid. We conclude that, for the same number of samples, NUTS results in the lowest KL divergence, followed by the AISM sampler and, last, the MH sampler.
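The convergence criterion used above — the KL divergence between an MCMC sample and a posterior evaluated on a uniform reference grid — can be sketched in a few lines. The chain is replaced here by i.i.d. draws and the ‘true’ posterior by a standard normal; both are illustrative assumptions, not the paper's squeeze flow posterior.

```python
import numpy as np

rng = np.random.default_rng(2)

# Stand-in for MCMC output: slightly mis-calibrated draws for one parameter.
samples = rng.normal(0.05, 1.02, size=20000)

edges = np.linspace(-5.0, 5.0, 202)            # 201 uniform reference bins
centers = 0.5 * (edges[:-1] + edges[1:])

# Discretize both distributions on the same bins.
p_true = np.exp(-0.5 * centers**2)             # 'true' posterior: N(0, 1)
p_true /= p_true.sum()
counts, _ = np.histogram(samples, bins=edges)
p_mcmc = (counts + 1e-12) / (counts + 1e-12).sum()

# KL(p_mcmc || p_true): statistical distance of the sampled posterior
# from the reference-grid posterior (zero iff the two coincide).
kl = float(np.sum(p_mcmc * np.log(p_mcmc / p_true)))
print(f"KL divergence: {kl:.4f}")
```

With a finite sample the estimate carries an O(bins/2N) bias even for a perfect sampler, so in a comparison like the paper's the same number of samples must be used for every algorithm.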

8 pages, 1340 KB  
Proceeding Paper
Trans-Dimensional Diffusive Nested Sampling for Metabolic Network Inference
by Johann Fredrik Jadebeck, Wolfgang Wiechert and Katharina Nöh
Phys. Sci. Forum 2025, 12(1), 5; https://doi.org/10.3390/psf2025012005 - 24 Sep 2025
Abstract
Bayesian analysis is particularly useful for inferring models and their parameters given data. This is a common task in metabolic modeling, where models of varying complexity are used to interpret data. Nested sampling is a class of probabilistic inference algorithms that are particularly effective for estimating evidence and sampling the parameter posterior probability distributions. However, the practicality of nested sampling for metabolic network inference has yet to be studied. In this technical report, we explore the amalgamation of nested sampling, specifically diffusive nested sampling, with reversible jump Markov chain Monte Carlo. We apply the algorithm to two synthetic problems from the field of metabolic flux analysis. We present run times and share insights into hyperparameter choices, providing a useful point of reference for future applications of nested sampling to metabolic flux problems.

11 pages, 1274 KB  
Proceeding Paper
The Value of Information in Economic Contexts
by Stefan Behringer and Roman V. Belavkin
Phys. Sci. Forum 2025, 12(1), 6; https://doi.org/10.3390/psf2025012006 - 23 Sep 2025
Abstract
This paper explores the application of the Value of Information (VoI), based on the Claude Shannon/Ruslan Stratonovich framework, within economic contexts. Unlike previous studies that examine circular settings and strategic interactions, we focus on a non-strategic linear setting. We employ standard economically motivated utility functions, including linear, quadratic, constant absolute risk aversion (CARA), and constant relative risk aversion (CRRA), across various priors of the stochastic environment, and analyse the resulting specific VoI forms. The curvature of these VoI functions plays a decisive role in determining whether acquiring additional costly information enhances the efficiency of the decision-making process. We also outline potential implications for broader decision-making frameworks.
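The basic VoI quantity can be sketched for one of the utility classes named above: a non-strategic decision with CARA utility, where the value of (perfect) information is the gain in maximal expected utility from observing the state before acting. The binary-return setting and all numbers are illustrative assumptions, not the paper's model.

```python
import numpy as np

# Toy non-strategic decision: invest a fraction x of wealth in a risky asset
# with a binary return R; utility is CARA, u(w) = -exp(-a * w).
a = 1.0                                # absolute risk aversion
p = 0.5                                # prior P(R = high)
returns = {"high": 0.5, "low": -0.4}
actions = np.linspace(0.0, 1.0, 101)   # candidate investment fractions

def eu(action, prob_high):
    """Expected CARA utility of `action` given belief P(R = high)."""
    u = lambda w: -np.exp(-a * w)
    return prob_high * u(1 + action * returns["high"]) \
        + (1 - prob_high) * u(1 + action * returns["low"])

# Without information: optimize the action against the prior.
eu_prior = max(eu(x, p) for x in actions)

# With perfect information: observe R first, then act optimally per state.
eu_perfect = p * max(eu(x, 1.0) for x in actions) \
    + (1 - p) * max(eu(x, 0.0) for x in actions)

voi = eu_perfect - eu_prior            # nonnegative by construction
print(f"Value of perfect information (in utils): {voi:.4f}")
```

Sweeping the prior `p` or the risk aversion `a` traces out the VoI curve whose curvature, as the abstract notes, decides whether costly information is worth acquiring.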

14 pages, 300 KB  
Proceeding Paper
Exploring Quantized Entropy Production Strength in Mesoscopic Irreversible Thermodynamics
by Giorgio Sonnino
Phys. Sci. Forum 2025, 12(1), 7; https://doi.org/10.3390/psf2025012007 - 13 Oct 2025
Abstract
This letter aims to investigate thermodynamic processes in small systems in the Onsager region by showing that fundamental quantities such as the total entropy production can be discretized on the mesoscopic scale. Even thermodynamic variables conjugate to thermodynamic forces, and thus Glansdorff–Prigogine's dissipative variable, may be discretized. The canonical commutation rules (CCRs) valid at the mesoscopic scale are postulated, and the measurement process consists of determining the eigenvalues of the operators associated with the thermodynamic quantities. The nature of the quantized quantity β entering the CCRs is investigated by a heuristic model for a nano-gas and analyzed through the tools of classical statistical physics. We conclude that, according to our model, the constant β does not appear to be a new fundamental constant but corresponds to the minimum value.

10 pages, 1916 KB  
Proceeding Paper
Nested Sampling for Exploring Lennard-Jones Clusters
by Lune Maillard, Fabio Finocchi, César Godinho and Martino Trassinelli
Phys. Sci. Forum 2025, 12(1), 8; https://doi.org/10.3390/psf2025012008 - 13 Oct 2025
Cited by 1
Abstract
Lennard-Jones clusters, while a simple system, have a significant number of non-equivalent configurations, which increases rapidly with the number of atoms in the cluster. Here, we aim to determine the cluster partition function; to perform this task we use the nested sampling algorithm, which transforms the multidimensional integral into a one-dimensional one. In particular, we use the nested_fit program, which implements slice sampling as its search algorithm. We study the 7-atom and 36-atom clusters to benchmark nested_fit for the exploration of potential energy surfaces. We find that nested_fit is able to recover phase transitions and to find different stable configurations of the cluster. Furthermore, the implementation of the slice sampling algorithm has a clear impact on the computational cost.
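The way nested sampling turns the partition function into a one-dimensional integral can be sketched on a toy energy surface: repeatedly discard the highest-energy live point, credit it with a shrinking slice of prior volume, and accumulate Z as a weighted sum of Boltzmann factors. The double-well energy below is a minimal stand-in for a Lennard-Jones surface, and the naive rejection step replaces the slice sampling used by nested_fit; all parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)

def energy(x):
    # Toy double-well energy standing in for a Lennard-Jones
    # potential-energy surface.
    return (x[0]**2 - 1.0)**2 + x[1]**2

K = 200                                    # number of live points
live = rng.uniform(-3.0, 3.0, size=(K, 2))
E_live = np.array([energy(p) for p in live])

V0 = 6.0 * 6.0                             # configuration-space volume
beta = 2.0                                 # inverse temperature
Z, X_prev = 0.0, 1.0
for i in range(1, 1001):
    worst = int(np.argmax(E_live))         # discard the highest-energy point
    E_star = E_live[worst]
    X = np.exp(-i / K)                     # expected remaining volume fraction
    Z += (X_prev - X) * np.exp(-beta * E_star) * V0
    X_prev = X
    # Replace the discarded point by a new prior draw with E < E_star
    # (naive rejection sampling; nested_fit uses slice sampling instead).
    while True:
        x_new = rng.uniform(-3.0, 3.0, size=2)
        if energy(x_new) < E_star:
            break
    live[worst], E_live[worst] = x_new, energy(x_new)

Z += X_prev * np.mean(np.exp(-beta * E_live)) * V0   # leftover live points
print(f"Estimated partition function Z(beta={beta}) ~ {Z:.2f}")
```

Because the constrained resampling step dominates the cost, the abstract's remark that the slice sampling implementation has a clear impact on computational cost is visible even in this sketch: the rejection loop slows down exponentially as the energy constraint tightens.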

11 pages, 2705 KB  
Proceeding Paper
Understanding Exoplanet Habitability: A Bayesian ML Framework for Predicting Atmospheric Absorption Spectra
by Vasuda Trehan, Kevin H. Knuth and M. J. Way
Phys. Sci. Forum 2025, 12(1), 9; https://doi.org/10.3390/psf2025012009 - 13 Oct 2025
Abstract
The evolution of space technology in recent years, fueled by advancements in computing such as Artificial Intelligence (AI) and machine learning (ML), has profoundly transformed our capacity to explore the cosmos. Missions like the James Webb Space Telescope (JWST) have made information about distant objects more easily accessible, resulting in extensive amounts of valuable data. In this work-in-progress study, we are developing an atmospheric absorption spectrum prediction model for exoplanets. The eventual model will be based on both collected observational spectra and synthetic spectral data generated by the ROCKE-3D general circulation model (GCM) developed by the climate modeling program at NASA’s Goddard Institute for Space Studies (GISS). In this initial study, spline curves are used to describe the bin heights of simulated atmospheric absorption spectra as a function of one of the planetary parameters. Bayesian Adaptive Exploration is then employed to identify areas of the planetary parameter space for which more data are needed to improve the model. The resulting system will be used as a forward model so that planetary parameters can be inferred from a planet’s atmospheric absorption spectrum. This work is expected to contribute to a better understanding of exoplanetary properties and of general exoplanet climates and habitability.

12 pages, 1558 KB  
Proceeding Paper
Model-Based and Physics-Informed Deep Learning Neural Network Structures
by Ali Mohammad-Djafari, Ning Chu, Li Wang, Caifang Cai and Liang Yu
Phys. Sci. Forum 2025, 12(1), 10; https://doi.org/10.3390/psf2025012010 - 20 Oct 2025
Abstract
Neural Networks (NNs) have been used in many areas with great success. When an NN’s structure (model) is given, the parameters of the model are determined during the training step using an appropriate criterion and an optimization algorithm. The trained model can then be used for the prediction or inference step (testing). As there are also many hyperparameters related to the optimization criteria and optimization algorithms, a validation step is necessary before the NN’s final use. One of the great difficulties is the choice of the NN structure. Even though many off-the-shelf networks exist, selecting or proposing an appropriate network for a given data, signal, or image processing task is still an open problem. In this work, we consider this problem using model-based signal and image processing and inverse problems methods. We classify the methods into five classes: (i) explicit analytical solutions, (ii) transform domain decomposition, (iii) operator decomposition, (iv) unfolding optimization algorithms, and (v) physics-informed NN methods (PINNs). A few examples in each category are explained.
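Category (iv), unfolding optimization algorithms, can be sketched with the classic example: each ISTA iteration for the sparse inverse problem y = A x + noise becomes one network ‘layer’. In learned variants (LISTA-style networks) the matrices and thresholds below would be trainable; here they are fixed, and all problem sizes and values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)

# Sparse linear inverse problem: y = A x + noise, with x mostly zero.
m, n, k_layers = 30, 60, 50
A = rng.normal(size=(m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[[3, 17, 42]] = [1.0, -0.8, 0.5]          # hypothetical sparse signal
y = A @ x_true + 0.01 * rng.normal(size=m)

L = np.linalg.norm(A, 2) ** 2                   # Lipschitz const. of gradient
lam = 0.05                                      # sparsity penalty
soft = lambda v, t: np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

# Each loop pass is one unrolled 'layer':
#   x <- soft( x + (1/L) A^T (y - A x), lam / L )
x = np.zeros(n)
for _ in range(k_layers):
    x = soft(x + (A.T @ (y - A @ x)) / L, lam / L)

err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
print(f"relative reconstruction error after {k_layers} layers: {err:.3f}")
```

Reading the loop as a feed-forward network with shared weights is exactly the model-based route to a structure choice: the physics (A) and the prior (sparsity) dictate the architecture rather than an off-the-shelf template.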

10 pages, 632 KB  
Proceeding Paper
Nonparametric Full Bayesian Significance Testing for Bayesian Histograms
by Fernando Corrêa, Julio Michael Stern and Rafael Bassi Stern
Phys. Sci. Forum 2025, 12(1), 11; https://doi.org/10.3390/psf2025012011 - 20 Oct 2025
Abstract
In this article, we present an extension of the Full Bayesian Significance Test (FBST) to nonparametric settings, termed the NP-FBST, which is constructed using the limit of finite-dimensional histograms. The test statistics for the NP-FBST are based on a plug-in estimate of the cross-entropy between the null hypothesis and a histogram. This method shares similarities with Kullback–Leibler and entropy-based goodness-of-fit tests, but it can be applied to a broader range of hypotheses and is generally less computationally intensive. We demonstrate that when the number of histogram bins increases slowly with the sample size, the NP-FBST is consistent for Lipschitz-continuous data-generating densities. Additionally, we propose an algorithm to optimize the NP-FBST. Through simulations, we compare the performance of the NP-FBST to traditional methods for testing uniformity. Our results indicate that the NP-FBST is competitive in terms of power, even surpassing the most powerful likelihood-ratio-based procedures for very small sample sizes.
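The two ingredients named in the abstract — a Bayesian histogram and a plug-in cross-entropy against the null — can be sketched for the uniformity-testing case. The Dirichlet(1, …, 1) prior, the bin count, and the beta-distributed data are illustrative assumptions; the paper's actual prior, statistic, and decision rule may differ.

```python
import numpy as np

rng = np.random.default_rng(5)

# Data on [0, 1]; the null hypothesis H0 is the uniform density p0 = 1.
data = rng.beta(2.0, 2.0, size=300)        # non-uniform, so H0 is false here
bins = 10
counts, _ = np.histogram(data, bins=bins, range=(0.0, 1.0))

# Bayesian histogram: Dirichlet(1,...,1) prior over bin probabilities
# gives a Dirichlet(counts + 1) posterior; draw posterior densities.
post = rng.dirichlet(counts + 1.0, size=5000)
dens = post * bins                         # bin probabilities -> bin heights

# Plug-in cross-entropy of the uniform null against each posterior draw:
# H(p0, f) = -integral p0 log f = -(1/bins) * sum_j log f_j  (p0 uniform).
cross_ent = -np.mean(np.log(dens), axis=1)
print(f"posterior mean cross-entropy vs. uniform null: {cross_ent.mean():.3f}")
```

For data that really are uniform, the posterior draws concentrate near height 1 in every bin and the statistic concentrates near zero; departures from uniformity push it upward, which is what a test built on this quantity exploits.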
