Journal Description
Physical Sciences Forum
Physical Sciences Forum is an open access journal dedicated to publishing findings resulting from academic conferences, workshops, and similar events in the area of the physical sciences. Each conference proceeding can be individually indexed, is citable via a digital object identifier (DOI), and is freely available under an open access license. The conference organizers and proceedings editors are responsible for managing the peer-review process and selecting papers for conference proceedings.
Latest Articles
Preface and Statement of Peer Review: 43rd International Workshop on Bayesian Inference and Maximum Entropy Methods in Science and Engineering (MaxEnt 2024)
Phys. Sci. Forum 2025, 12(1), 19; https://doi.org/10.3390/psf2025012019 - 10 Dec 2025
Abstract
n/a
Full article
(This article belongs to the Proceedings of The 43rd International Workshop on Bayesian Inference and Maximum Entropy Methods in Science and Engineering)
Open Access Proceeding Paper
Approaching the Quantum Limit in Axion Detection at IBS-CAPP and IBS-DMAG
by
Sergey V. Uchaikin, Boris I. Ivanov, Arjan F. van Loo, Yasunobu Nakamura, MinSu Ko, Jinmyeong Kim, Saebyeok Ahn, Seonjeong Oh, Yannis K. Semertzidis and SungWoo Youn
Phys. Sci. Forum 2025, 11(1), 5; https://doi.org/10.3390/psf2025011005 - 26 Nov 2025
Abstract
We present the development of two complementary amplifier architectures for axion haloscope experiments, based on two types of Josephson Parametric Amplifiers (JPAs). The first employs a multi-chip module of flux-driven JPAs in a parallel–series configuration, enabling near quantum-limited amplification over an extended tunable range between 1.2 and 1.5 GHz. The second design features a lumped-element JPA, offering continuous tunability across a wide frequency range from 2.4 to 4 GHz. Both approaches demonstrate near-quantum-limited noise performance and are compatible with operation in cryogenic environments. These amplifiers significantly enhance the sensitivity and frequency coverage of axion search experiments, and also provide new opportunities for broadband quantum sensing applications.
Full article
(This article belongs to the Proceedings of The 19th Patras Workshop on Axions, WIMPs and WISPs)
Open Access Proceeding Paper
Determination of Uncertainty Model of a Particle-Reflection-Distribution
by
Roland Preuss and Udo von Toussaint
Phys. Sci. Forum 2025, 12(1), 18; https://doi.org/10.3390/psf2025012018 - 24 Nov 2025
Abstract
The modelling of plasma–wall interactions (PWIs) depends on distributions describing the angle and energy distribution of particles scattered at the first wall of fusion devices. Most PWI codes rely on extensive tables based on data from reflection simulations, employing a Monte Carlo method. At first glance, the uncertainty distribution of the data should be assumed Gaussian. However, in order to obtain the resulting particle distribution, the reflected ions are counted within angle sections of the upper hemisphere, which hints at a Poisson uncertainty distribution. In this paper, we let Bayesian model comparison decide which uncertainty model should be adopted.
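To make this kind of comparison concrete, here is a minimal, self-contained sketch (synthetic counts and a single shared bin rate with a flat prior; not the authors' implementation) computing the Bayes factor between a Poisson uncertainty model and a Gaussian one with matched mean and variance:

```python
# Bayesian model comparison for counted data: Poisson vs. Gaussian noise.
# Assumes one unknown rate shared by all angular bins and a flat prior;
# the evidence integrals are evaluated on a grid.
import numpy as np
from scipy import stats
from scipy.integrate import trapezoid

rng = np.random.default_rng(0)
counts = rng.poisson(lam=23.0, size=40)          # synthetic counts per angular bin

lam = np.linspace(1.0, 100.0, 2000)              # grid over the common bin rate
prior = np.full_like(lam, 1.0 / (lam[-1] - lam[0]))

# log-likelihood of all bins at each grid point
ll_pois = stats.poisson.logpmf(counts[:, None], lam[None, :]).sum(axis=0)
# Gaussian alternative with matched mean and variance (sigma^2 = lambda)
ll_gaus = stats.norm.logpdf(counts[:, None], loc=lam[None, :],
                            scale=np.sqrt(lam[None, :])).sum(axis=0)

def log_evidence(loglike):
    m = loglike.max()                            # stabilize the exponentials
    return m + np.log(trapezoid(np.exp(loglike - m) * prior, lam))

print(f"ln Bayes factor (Poisson vs. Gaussian): "
      f"{log_evidence(ll_pois) - log_evidence(ll_gaus):.2f}")
```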
Full article
(This article belongs to the Proceedings of The 43rd International Workshop on Bayesian Inference and Maximum Entropy Methods in Science and Engineering)
Open Access Proceeding Paper
Maximum Entropy Production for Optimizing Carbon Catalysis: An Active-Matter-Inspired Approach
by
Klaus Regenauer-Lieb, Manman Hu, Hui Tong Chua, Victor Calo, Boris Yakobson, Evgeny P. Zemskov and
Phys. Sci. Forum 2025, 12(1), 16; https://doi.org/10.3390/psf2025012016 - 15 Nov 2025
Abstract
The static topology of surface characteristics and active sites in catalysis overlooks a crucial element: the dynamic processes of optimal pattern formation over time and the creation of intermediate structures that enhance reactions. Nature's principle of coupling reaction and motion in catalytic processes, as realized by enzymes or higher organisms, offers a new perspective. This work explores a novel theoretical approach by adding the time dimension to optimise topological variations using the Maximum Entropy Production (MEP) assumption. This approach recognises that the catalyst surface is not an unchanging energy landscape but can change dynamically. The time-dependent transport problem of molecules is interpreted here through a non-equilibrium model used for modelling and predicting dynamic pattern formation in excitable media, a class of active matter requiring an activation threshold. We present a nonlocal reaction–cross-diffusion (RXD) formulation of catalytic reactions that can capture the catalyst's interaction with the target molecule in space and time. The approach provides a theoretical basis for future deep learning models and multiphysics upscaling of catalysts and their support structures across multiphysics fields. The particular advantage of the RXD approach is that it allows dynamic pattern-forming processes to be investigated at each scale using linear and nonlinear stability analysis, thus establishing a rule base for developing new catalysts.
Full article
(This article belongs to the Proceedings of The 43rd International Workshop on Bayesian Inference and Maximum Entropy Methods in Science and Engineering)
Open Access Proceeding Paper
Bayesian Regularization for Dynamical System Identification: Additive Noise Models
by
Robert K. Niven, Laurent Cordier, Ali Mohammad-Djafari, Markus Abel and Markus Quade
Phys. Sci. Forum 2025, 12(1), 17; https://doi.org/10.3390/psf2025012017 - 14 Nov 2025
Abstract
Consider the dynamical system $\dot{x} = f(x)$, where $x \in \mathbb{R}^n$ is the state vector, $\dot{x}$ is the time or spatial derivative, and f is the system model. We wish to identify unknown f from its time-series or spatial data. For this, we propose a Bayesian framework based on the maximum a posteriori (MAP) point estimate, to give a generalized Tikhonov regularization method with the residual and regularization terms identified, respectively, with the negative logarithms of the likelihood and prior distributions. As well as estimates of the model coefficients, the Bayesian interpretation provides access to the full Bayesian apparatus, including the ranking of models, the quantification of model uncertainties, and the estimation of unknown (nuisance) hyperparameters. For multivariate Gaussian likelihood and prior distributions, the Bayesian formulation gives a Gaussian posterior distribution, in which the numerator contains a Mahalanobis distance or "Gaussian norm". In this study, two Bayesian algorithms for the estimation of hyperparameters—the joint maximum a posteriori (JMAP) and variational Bayesian approximation (VBA)—are compared to the popular SINDy, LASSO, and ridge regression algorithms for the analysis of several dynamical systems with additive noise. We consider two dynamical systems, the Lorenz convection system and the Shil'nikov cubic system, with four choices of noise model: symmetric Gaussian or Laplace noise and skewed Rayleigh or Erlang noise, with different magnitudes. The posterior Gaussian norm is found to provide a robust metric for quantitative model selection—with quantification of the model uncertainties—across all dynamical systems and noise models examined.
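The Gaussian special case invoked above has a closed form worth seeing once. A minimal sketch (known noise variance, zero-mean isotropic prior; not the paper's JMAP/VBA algorithms, which also estimate the hyperparameters): the MAP estimate is exactly ridge regression with regularization parameter σ²/τ², and the Gaussian posterior delivers the coefficient uncertainties.

```python
# Sketch of the MAP <-> Tikhonov correspondence (illustrative only).
# Gaussian likelihood (variance sigma2) + zero-mean Gaussian prior (tau2):
#   -log posterior = ||y - Theta w||^2/(2 sigma2) + ||w||^2/(2 tau2) + const,
# i.e. ridge regression with lambda = sigma2 / tau2.
import numpy as np

rng = np.random.default_rng(1)
n, p = 200, 5
Theta = rng.normal(size=(n, p))               # library of candidate functions
w_true = np.array([1.5, 0.0, -2.0, 0.0, 0.7])
sigma2, tau2 = 0.25, 4.0
y = Theta @ w_true + rng.normal(scale=np.sqrt(sigma2), size=n)

lam = sigma2 / tau2                           # regularization strength
A = Theta.T @ Theta + lam * np.eye(p)
w_map = np.linalg.solve(A, Theta.T @ y)       # MAP = ridge estimate

# the Gaussian posterior also quantifies the coefficient uncertainties
post_cov = sigma2 * np.linalg.inv(A)
print("MAP coefficients:", np.round(w_map, 3))
print("posterior std:   ", np.round(np.sqrt(np.diag(post_cov)), 3))
```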
Full article
(This article belongs to the Proceedings of The 43rd International Workshop on Bayesian Inference and Maximum Entropy Methods in Science and Engineering)
Open Access Proceeding Paper
Automatic Modeling and Object Identification in Radio Astronomy
by
Richard Fuchs, Jakob Knollmüller and Lukas Heinrich
Phys. Sci. Forum 2025, 12(1), 15; https://doi.org/10.3390/psf2025012015 - 5 Nov 2025
Abstract
Building appropriate models is crucial for imaging tasks in many fields but often challenging due to the richness of the systems. In radio astronomy, for example, wide-field observations can contain various superposed structures that require different descriptions, such as filaments, point sources or compact objects. This work presents an automatic pipeline that iteratively adapts probabilistic models for such complex systems in order to improve the reconstructed images. It uses the Bayesian imaging library NIFTy, which is formulated in the language of information field theory. Starting with a preliminary reconstruction using a simple and flexible model, the pipeline employs deep learning and clustering methods to identify and separate different objects. In a further step, these objects are described by adding new building blocks to the model, allowing for a component separation in the next reconstruction step. This procedure can be repeated several times to iteratively refine and improve the overall reconstruction. In addition, the individual components can be modeled at different resolutions, allowing us to focus on important parts of the emission field without becoming too computationally expensive.
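The iterate-detect-refine loop described above can be caricatured in a few self-contained lines. This is a 1-D toy, with simple smoothing and connected-component labeling standing in for the paper's NIFTy reconstructions and deep-learning/clustering steps; only the control flow is faithful.

```python
# Toy illustration of iterative component separation (not the NIFTy pipeline):
# a flexible "diffuse" model first, residual object detection second,
# then a refined two-component model.
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(2)
x = np.linspace(0, 1, 512)
diffuse = 0.6 * np.exp(-((x - 0.5) / 0.25) ** 2)      # smooth emission
points = np.zeros_like(x)
points[[100, 330, 400]] = [2.0, 1.5, 2.5]             # point sources
data = diffuse + points + rng.normal(scale=0.05, size=x.size)

# Pass 1: a simple flexible model (heavy smoothing) captures the diffuse part
diffuse_fit = ndimage.gaussian_filter1d(data, sigma=25)

# Object identification: cluster significant residual pixels (5-sigma cut)
residual = data - diffuse_fit
labels, n_objects = ndimage.label(residual > 5 * 0.05)
print(f"identified {n_objects} point-like objects")

# Pass 2: refined model = smooth component + dedicated point-source component
point_fit = np.where(labels > 0, residual, 0.0)
diffuse_fit2 = ndimage.gaussian_filter1d(data - point_fit, sigma=25)
model = diffuse_fit2 + point_fit
print(f"rms residual after refinement: {np.std(data - model):.3f}")
```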
Full article
(This article belongs to the Proceedings of The 43rd International Workshop on Bayesian Inference and Maximum Entropy Methods in Science and Engineering)
Open Access Proceeding Paper
Inverse Bayesian Methods for Groundwater Vulnerability Assessment
by
Nasrin Taghavi, Robert K. Niven, Matthias Kramer and David J. Paull
Phys. Sci. Forum 2025, 12(1), 14; https://doi.org/10.3390/psf2025012014 - 5 Nov 2025
Abstract
Groundwater vulnerability assessment (GVA) is critical for understanding contaminant migration into groundwater systems, yet conventional methods often overlook its probabilistic nature. Bayesian inference offers a robust framework using Bayes' rule to enhance decision-making through posterior probability calculations. This study introduces inverse Bayesian methods for GVA using spatial-series data, focusing on nitrate concentrations in groundwater as an indicator of groundwater vulnerability in agricultural catchments. Using the joint maximum a posteriori (JMAP) and variational Bayesian approximation (VBA) algorithms, the advantages of the Bayesian framework over traditional index-based methods are demonstrated for GVA of the Burdekin Basin, Queensland, Australia. This provides an evidence-based methodology for GVA that enables model ranking, parameter estimation, and uncertainty quantification.
Full article
(This article belongs to the Proceedings of The 43rd International Workshop on Bayesian Inference and Maximum Entropy Methods in Science and Engineering)
Open Access Proceeding Paper
Bayesian Integrated Data Analysis and Experimental Design for External Magnetic Plasma Diagnostics in DEMO
by
Jeffrey De Rycke, Alfredo Pironti, Marco Ariola, Antonio Quercia and Geert Verdoolaege
Phys. Sci. Forum 2025, 12(1), 13; https://doi.org/10.3390/psf2025012013 - 4 Nov 2025
Abstract
Magnetic confinement nuclear fusion offers a promising solution to the world's growing energy demands. The DEMO reactor presented here aims to bridge the gap between laboratory fusion experiments and practical electricity generation, posing unique challenges for magnetic plasma diagnostics due to limited space for diagnostic equipment. This study employs Bayesian inference and Gaussian process modeling to integrate data from pick-up coils, flux loops, and saddle coils, enabling a qualitative estimation of the plasma current density distribution relying only on external magnetic measurements. The methodology successfully infers the total plasma current, plasma centroid position, and six plasma–wall gap positions, while adhering to DEMO's stringent accuracy standards. Additionally, the interchangeability between normal pick-up coils and saddle coils was assessed, revealing a clear preference for saddle coils. Initial steps were taken to utilize Bayesian experimental design for optimizing the orientation (normal or tangential) of pick-up coils within DEMO's design constraints to improve the diagnostic setup's inference precision. Our approach indicates the feasibility of Bayesian integrated data analysis in achieving precise and accurate probability distributions of the plasma parameters crucial for the successful operation of DEMO.
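A linear-Gaussian toy version of this kind of inference, with the geometry, response matrix, and noise level invented for illustration (the DEMO study uses realistic sensor models): external measurements m = G j + noise, a Gaussian-process prior enforcing smoothness of the discretized current density j, and a closed-form Gaussian posterior.

```python
# Toy linear inverse problem: infer a current-density profile j from a few
# external magnetic measurements m = G j + noise, under a GP smoothness prior.
import numpy as np

rng = np.random.default_rng(9)
n_j, n_m = 60, 12                             # current elements, sensors
s = np.linspace(0, 1, n_j)

# invented response matrix: each sensor sees a smooth weighting of elements
G = np.exp(-((np.linspace(0, 1, n_m)[:, None] - s[None, :]) ** 2) / 0.02)

# squared-exponential GP prior on the current-density profile
K = np.exp(-((s[:, None] - s[None, :]) ** 2) / (2 * 0.1 ** 2))
j_true = np.sin(2 * np.pi * s) ** 2
noise = 1e-3
m = G @ j_true + rng.normal(scale=np.sqrt(noise), size=n_m)

# Gaussian posterior: mean and covariance in closed form
S = G @ K @ G.T + noise * np.eye(n_m)
j_post = K @ G.T @ np.linalg.solve(S, m)
j_cov = K - K @ G.T @ np.linalg.solve(S, G @ K)

total_current = j_post.sum() / n_j            # proxy for the total current
total_std = np.sqrt(j_cov.sum()) / n_j
print(f"total current estimate: {total_current:.3f} +/- {total_std:.3f}")
```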
Full article
(This article belongs to the Proceedings of The 43rd International Workshop on Bayesian Inference and Maximum Entropy Methods in Science and Engineering)
Open Access Proceeding Paper
Bayesian Functional Data Analysis in Astronomy
by
Thomas Loredo, Tamás Budavári, David Kent and David Ruppert
Phys. Sci. Forum 2025, 12(1), 12; https://doi.org/10.3390/psf2025012012 - 4 Nov 2025
Abstract
Cosmic demographics—the statistical study of populations of astrophysical objects—has long relied on tools from multivariate statistics for analyzing data comprising fixed-length vectors of properties of objects, as might be compiled in a tabular astronomical catalog (say, with sky coordinates and brightness measurements in a fixed number of spectral passbands). But beginning with the emergence of automated digital sky surveys, ca. 2000, astronomers began producing large collections of data with more complex structures: light curves (brightness time series) and spectra (brightness vs. wavelength). These comprise what statisticians call functional data—measurements of populations of functions. Upcoming automated sky surveys will soon provide astronomers with a flood of functional data. New methods are needed to accurately and optimally analyze large ensembles of light curves and spectra, accumulating information both along individual measured functions and across a population of such functions. Functional data analysis (FDA) provides tools for statistical modeling of functional data. Astronomical data presents several challenges for FDA methodology, e.g., sparse, irregular, and asynchronous sampling, and heteroscedastic measurement error. Bayesian FDA uses hierarchical Bayesian models for function populations, and is well suited to addressing these challenges. We provide an overview of astronomical functional data and some key Bayesian FDA modeling approaches, including functional mixed effects models and stochastic process models. We briefly describe a Bayesian FDA framework combining FDA and machine learning methods to build low-dimensional parametric models for galaxy spectra.
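As a tiny illustration of the hierarchical ("mixed effects") idea mentioned above, reduced to its simplest case (synthetic numbers, and population parameters taken as known, whereas a full Bayesian FDA would infer them hierarchically): objects with sparse, heteroscedastic measurements are shrunk toward the population mean in proportion to their measurement uncertainty.

```python
# Empirical-Bayes shrinkage across a population of irregularly sampled,
# heteroscedastically measured objects (a minimal stand-in for hierarchical
# Bayesian FDA; population mean and spread assumed known here).
import numpy as np

rng = np.random.default_rng(3)
mu_pop, tau = 18.0, 0.5                      # population mean and spread
n_obj = 30
true_means = rng.normal(mu_pop, tau, n_obj)

obs, obs_var = [], []
for m in true_means:
    n_i = rng.integers(3, 40)                # very uneven numbers of epochs
    sig = rng.uniform(0.05, 0.5, n_i)        # per-epoch error bars
    obs.append(rng.normal(m, sig))
    obs_var.append(sig ** 2)

# per-object inverse-variance-weighted means and their standard errors
ybar = np.array([np.average(y, weights=1 / v) for y, v in zip(obs, obs_var)])
se2 = np.array([1.0 / np.sum(1 / v) for v in obs_var])

# shrinkage: weak for well-sampled objects, strong for sparse ones
w = tau**2 / (tau**2 + se2)
post_mean = w * ybar + (1 - w) * mu_pop
print(f"shrinkage weights range from {w.min():.2f} to {w.max():.2f}")
```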
Full article
(This article belongs to the Proceedings of The 43rd International Workshop on Bayesian Inference and Maximum Entropy Methods in Science and Engineering)
Open Access Proceeding Paper
WISPFI Experiment: Prototype Development
by
Josep Maria Batllori, Michael H. Frosz, Dieter Horns and Marios Maroudas
Phys. Sci. Forum 2025, 11(1), 4; https://doi.org/10.3390/psf2025011004 - 31 Oct 2025
Abstract
Axions and axion-like particles (ALPs) are well-motivated dark matter (DM) candidates that couple with photons in external magnetic fields. The parameter space around $m_a \sim 50$ meV remains largely unexplored by haloscope experiments. We present the first prototype of Weakly Interacting Sub-eV Particles (WISP) Searches on a Fiber Interferometer (WISPFI), a table-top, model-independent scheme based on resonant photon–axion conversion in a hollow-core photonic crystal fiber (HC-PCF) integrated into a Mach–Zehnder interferometer (MZI). Operating near a dark fringe with active phase-locking, combined with amplitude modulation, the interferometer converts axion-induced photon disappearance into a measurable signal. A 2 W, 1550 nm laser is coupled with a 1 m-long HC-PCF placed inside a ∼2 T permanent magnet array, probing a fixed axion mass of $m_a \simeq 49$ meV with a projected sensitivity of $g_{a\gamma\gamma} \gtrsim 1.3 \times 10^{-9}$ GeV$^{-1}$ for a measurement time of 30 days. Future upgrades, including pressure tuning of the effective refractive index and implementation of a Fabry–Pérot cavity, could extend the accessible mass range and improve sensitivity, establishing WISPFI as a scalable platform to explore previously inaccessible regions of the axion parameter space.
Full article
(This article belongs to the Proceedings of The 19th Patras Workshop on Axions, WIMPs and WISPs)
Open Access Proceeding Paper
Progress in GrAHal-CAPP/DMAG for Axion Dark Matter Search in the 1–3 μeV Range
by
Pierre Pugnat, Rafik Ballou, Philippe Camus, Guillaume Donnier-Valentin, Thierry Grenet, Ohjoon Kwon, Jérôme Lacipière, Mickaël Pelloux, Rolf Pfister, Yannis K. Semertzidis, Arthur Talarmin, Jérémy Vessaire and SungWoo Youn
Phys. Sci. Forum 2025, 11(1), 3; https://doi.org/10.3390/psf2025011003 - 24 Oct 2025
Abstract
Two outstanding problems of particle physics and cosmology, namely the strong-CP problem and the nature of dark matter, can be solved with the discovery of a single new particle, the axion. The modular high magnetic field and flux hybrid magnet platform of LNCMI-Grenoble, which was recently put into operation at up to 42 T, offers unique opportunities for axion/axion-like particle searches using Sikivie-type haloscopes. In this paper, the focus will be on the 350–600 MHz frequency range, corresponding to the 1–3 μeV axion mass range and requiring a large-bore RF-cavity. It will be built by DMAG and integrated within the large-bore superconducting hybrid magnet outsert, which provides a central magnetic field of up to 9 T in an 812 mm warm bore diameter. The progress achieved by the Néel Institute in the design of the complex cryostat, with its double dilution refrigerators to cool the ultra-light Cu RF-cavity of 650 mm inner diameter and the first stage of the RF measurement chain below 50 mK, is presented. Perspectives for the targeted sensitivity, assuming an integration time of less than 2 years, are recalled.
Full article
(This article belongs to the Proceedings of The 19th Patras Workshop on Axions, WIMPs and WISPs)
Open Access Proceeding Paper
Nonparametric Full Bayesian Significance Testing for Bayesian Histograms
by
Fernando Corrêa, Julio Michael Stern and Rafael Bassi Stern
Phys. Sci. Forum 2025, 12(1), 11; https://doi.org/10.3390/psf2025012011 - 20 Oct 2025
Abstract
In this article, we present an extension of the Full Bayesian Significance Test (FBST) for nonparametric settings, termed NP-FBST, which is constructed using the limit of finite-dimensional histograms. The test statistics for NP-FBST are based on a plug-in estimate of the cross-entropy between the null hypothesis and a histogram. This method shares similarities with Kullback–Leibler and entropy-based goodness-of-fit tests, but it can be applied to a broader range of hypotheses and is generally less computationally intensive. We demonstrate that when the number of histogram bins increases slowly with the sample size, the NP-FBST is consistent for Lipschitz continuous data-generating densities. Additionally, we propose an algorithm to optimize the NP-FBST. Through simulations, we compare the performance of the NP-FBST to traditional methods for testing uniformity. Our results indicate that the NP-FBST is competitive in terms of power, even surpassing the most powerful likelihood-ratio-based procedures for very small sample sizes.
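A rough sketch of the plug-in statistic for the uniformity case (a uniform null on [0,1], with bins growing slowly with sample size; the Monte Carlo calibration below is a stand-in for, not an implementation of, the FBST's e-value machinery):

```python
# Plug-in cross-entropy between a uniform null density f0 on [0,1] and a
# histogram density estimate f_hat:  -integral f0 log f_hat.
import numpy as np

def crossentropy_stat(x, k):
    counts, edges = np.histogram(x, bins=k, range=(0.0, 1.0))
    dens = np.maximum(counts / (len(x) * np.diff(edges)), 1e-12)  # avoid log 0
    return -np.sum(np.diff(edges) * np.log(dens))

rng = np.random.default_rng(4)
x = rng.beta(1.3, 1.0, size=200)              # mild departure from uniformity
k = max(2, int(len(x) ** (1 / 3)))            # bins grow slowly with n

t_obs = crossentropy_stat(x, k)
# Monte Carlo calibration under the null
t_null = [crossentropy_stat(rng.uniform(size=len(x)), k) for _ in range(2000)]
print("statistic:", round(t_obs, 4),
      "| null exceedance:", np.mean(np.array(t_null) >= t_obs))
```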
Full article
(This article belongs to the Proceedings of The 43rd International Workshop on Bayesian Inference and Maximum Entropy Methods in Science and Engineering)
Open Access Proceeding Paper
Model-Based and Physics-Informed Deep Learning Neural Network Structures
by
Ali Mohammad-Djafari, Ning Chu, Li Wang, Caifang Cai and Liang Yu
Phys. Sci. Forum 2025, 12(1), 10; https://doi.org/10.3390/psf2025012010 - 20 Oct 2025
Abstract
Neural Networks (NNs) have been used in many areas with great success. When an NN's structure (model) is given, the parameters of the model are determined during the training step using an appropriate criterion and an optimization algorithm. Then, the trained model can be used for the prediction or inference step (testing). As there are also many hyperparameters related to the optimization criteria and optimization algorithms, a validation step is necessary before the NN's final use. One of the great difficulties is the choice of NN structure. Even if there are many "off-the-shelf" networks, selecting or proposing a new appropriate network for a given data, signal, or image processing task is still an open problem. In this work, we consider this problem using model-based signal and image processing and inverse problems methods. We classify the methods into five classes: (i) explicit analytical solutions, (ii) transform domain decomposition, (iii) operator decomposition, (iv) unfolding optimization algorithms, (v) physics-informed NN methods (PINNs). A few examples in each category are explained.
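Category (iv) is perhaps the easiest to make concrete: one iteration of a classical solver becomes one network "layer". Below is a minimal sketch of unfolded ISTA for sparse recovery with fixed weights; in a learned (LISTA-style) variant, W1, W2, and the threshold would become trainable parameters.

```python
# Unfolding the ISTA iteration for the sparse inverse problem y = A x + noise
# into a fixed number of layers (weights fixed here, trainable in LISTA).
import numpy as np

def soft_threshold(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def unfolded_ista(y, A, n_layers=20, lam=0.1):
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of A^T A
    W1 = A.T / L                             # input weight of each layer
    W2 = np.eye(A.shape[1]) - (A.T @ A) / L  # recurrent weight of each layer
    x = np.zeros(A.shape[1])
    for _ in range(n_layers):                # one ISTA step == one layer
        x = soft_threshold(W2 @ x + W1 @ y, lam / L)
    return x

rng = np.random.default_rng(5)
A = rng.normal(size=(50, 100)) / np.sqrt(50)
x_true = np.zeros(100)
x_true[[7, 42, 90]] = [1.0, -2.0, 1.5]
y = A @ x_true + 0.01 * rng.normal(size=50)

x_hat = unfolded_ista(y, A, n_layers=200)
print("large recovered coefficients at:", np.nonzero(np.abs(x_hat) > 0.5)[0])
```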
Full article
(This article belongs to the Proceedings of The 43rd International Workshop on Bayesian Inference and Maximum Entropy Methods in Science and Engineering)
Open Access Proceeding Paper
Superconducting Quantum Sensors for Fundamental Physics Searches
by
Gulden Othman, Robert H. Hadfield, Katharina-Sophie Isleif, Friederike Januschek, Axel Lindner, Manuel Meyer, Dmitry Morozov, Devendra Kumar Namburi, Elmeri Rivasto, José Alejandro Rubiera Gimeno and Christina Schwemmbauer
Phys. Sci. Forum 2025, 11(1), 2; https://doi.org/10.3390/psf2025011002 - 20 Oct 2025
Abstract
Superconducting Transition Edge Sensors (TESs) are a promising technology for fundamental physics applications due to their low dark count rates, excellent energy resolution, and high detection efficiency. On the DESY campus, we have been developing a program to characterize cryogenic quantum sensors for fundamental physics applications, particularly focused on TESs. We currently have two fully equipped dilution refrigerators that enable simultaneous TES characterization and fundamental physics searches. In this paper, we summarize the current status of our TES characterization, including recent calibration efforts and efficiency measurements, as well as simulations to better understand TES behavior and its backgrounds. Additionally, we summarize some physics applications that we are already exploring or planning to explore. We will give preliminary projections on a direct dark matter search with our TES, where exploiting low-threshold electron scattering in superconducting materials allows us to search for sub-MeV-scale dark matter. We are also working toward performing a measurement of the even-number photon distribution (beyond one pair) of a quantum-squeezed light source. Finally, if it proves to meet the requirements, our TES detector may be used as a second, independent detection system to search for an axion signal at the ALPS II experiment.
Full article
(This article belongs to the Proceedings of The 19th Patras Workshop on Axions, WIMPs and WISPs)
Open Access Proceeding Paper
Understanding Exoplanet Habitability: A Bayesian ML Framework for Predicting Atmospheric Absorption Spectra
by
Vasuda Trehan, Kevin H. Knuth and M. J. Way
Phys. Sci. Forum 2025, 12(1), 9; https://doi.org/10.3390/psf2025012009 - 13 Oct 2025
Abstract
The evolution of space technology in recent years, fueled by advancements in computing such as Artificial Intelligence (AI) and machine learning (ML), has profoundly transformed our capacity to explore the cosmos. Missions like the James Webb Space Telescope (JWST) have made information about distant objects more easily accessible, resulting in extensive amounts of valuable data. As part of this work-in-progress study, we are working to create an atmospheric absorption spectrum prediction model for exoplanets. The eventual model will be based on both collected observational spectra and synthetic spectral data generated by the ROCKE-3D general circulation model (GCM) developed by the climate modeling program at NASA's Goddard Institute for Space Studies (GISS). In this initial study, spline curves are used to describe the bin heights of simulated atmospheric absorption spectra as a function of one of the planetary parameters. Bayesian Adaptive Exploration is then employed to identify areas of the planetary parameter space for which more data are needed to improve the model. The resulting system will be used as a forward model so that planetary parameters can be inferred given a planet's atmospheric absorption spectrum. This work is expected to contribute to a better understanding of exoplanetary properties and general exoplanet climates and habitability.
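The spline step can be illustrated in miniature (the parameter name, grid, and "bin heights" below are invented for illustration): fit a smoothing spline to one spectral bin's height as a function of a single planetary parameter so it can be evaluated off-grid. The Bayesian Adaptive Exploration step, choosing where new simulation runs are most informative, is not shown.

```python
# Smoothing-spline interpolation of one absorption-spectrum bin height
# versus a hypothetical planetary parameter (all numbers synthetic).
import numpy as np
from scipy.interpolate import UnivariateSpline

rng = np.random.default_rng(6)
surface_pressure = np.linspace(0.1, 5.0, 25)       # hypothetical parameter grid
bin_height = np.exp(-0.5 * surface_pressure) + 0.01 * rng.normal(size=25)

# smoothing factor s matched to the expected residual sum of squares
spline = UnivariateSpline(surface_pressure, bin_height, k=3, s=25 * 1e-4)
print("predicted bin height at p = 2.3:", float(spline(2.3)))
```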
Full article
(This article belongs to the Proceedings of The 43rd International Workshop on Bayesian Inference and Maximum Entropy Methods in Science and Engineering)
Open Access Proceeding Paper
Nested Sampling for Exploring Lennard-Jones Clusters
by
Lune Maillard, Fabio Finocchi, César Godinho and Martino Trassinelli
Phys. Sci. Forum 2025, 12(1), 8; https://doi.org/10.3390/psf2025012008 - 13 Oct 2025
Cited by 1
Abstract
Lennard-Jones clusters, while a simple system, have a significant number of non-equivalent configurations that increases rapidly with the number of atoms in the cluster. Here, we aim to determine the cluster partition function; we use the nested sampling algorithm, which transforms the multidimensional integral into a one-dimensional one, to perform this task. In particular, we use the nested_fit program, which implements slice sampling as the search algorithm. We study here the 7-atom and 36-atom clusters to benchmark nested_fit for the exploration of potential energy surfaces. We find that nested_fit is able to recover phase transitions and find different stable configurations of the cluster. Furthermore, the implementation of the slice sampling algorithm has a clear impact on the computational cost.
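The integral transformation mentioned above is easy to demonstrate on a toy problem (a 1-D double-well potential in a uniform prior box, not a Lennard-Jones cluster; plain rejection sampling stands in for nested_fit's slice sampling): nested sampling turns Z(β) = ∫ exp(-βE(x)) dx into a sum over a geometrically shrinking sequence of prior volumes.

```python
# Minimal nested sampling for Z(beta) = integral of exp(-beta E(x)) over a box.
import numpy as np

def E(x):
    return (x**2 - 1.0) ** 2                  # toy double-well "potential"

rng = np.random.default_rng(7)
lo, hi, beta = -2.0, 2.0, 5.0                 # prior box, inverse temperature
n_live, n_iter = 200, 1200

live = rng.uniform(lo, hi, n_live)
live_E = E(live)
log_terms = []
for i in range(n_iter):
    worst = np.argmax(live_E)                 # highest energy = lowest likelihood
    # prior volume shrinks geometrically: X_i ~ exp(-i / n_live)
    log_terms.append(-i / n_live + np.log((hi - lo) / n_live)
                     - beta * live_E[worst])
    # replace the worst point by a prior draw inside the energy constraint
    # (plain rejection here; nested_fit uses slice sampling instead)
    while True:
        x = rng.uniform(lo, hi)
        if E(x) < live_E[worst]:
            live[worst], live_E[worst] = x, E(x)
            break

# remaining live points contribute at the final prior volume
log_terms += list(-n_iter / n_live + np.log((hi - lo) / n_live) - beta * live_E)
m = max(log_terms)
logZ = m + np.log(np.sum(np.exp(np.array(log_terms) - m)))
print(f"nested-sampling estimate: ln Z(beta=5) = {logZ:.3f}")
```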
Full article
(This article belongs to the Proceedings of The 43rd International Workshop on Bayesian Inference and Maximum Entropy Methods in Science and Engineering)
Open Access Proceeding Paper
Exploring Quantized Entropy Production Strength in Mesoscopic Irreversible Thermodynamics
by
Giorgio Sonnino
Phys. Sci. Forum 2025, 12(1), 7; https://doi.org/10.3390/psf2025012007 - 13 Oct 2025
Abstract
This letter aims to investigate thermodynamic processes in small systems in the Onsager region by showing that fundamental quantities such as the total entropy production can be discretized on the mesoscopic scale. Even thermodynamic variables conjugate to thermodynamic forces, and thus Glansdorff–Prigogine's dissipative variable, may be discretized. The canonical commutation rules (CCRs) valid at the mesoscopic scale are postulated, and the measurement process consists of determining the eigenvalues of the operators associated with the thermodynamic quantities. The nature of the quantized quantity β entering the CCRs is investigated by a heuristic model for a nano-gas and analyzed through the tools of classical statistical physics. We conclude that, according to our model, the constant β does not appear to be a new fundamental constant but corresponds to the minimum value.
Full article
(This article belongs to the Proceedings of The 43rd International Workshop on Bayesian Inference and Maximum Entropy Methods in Science and Engineering)
Open Access Proceeding Paper
Trans-Dimensional Diffusive Nested Sampling for Metabolic Network Inference
by
Johann Fredrik Jadebeck, Wolfgang Wiechert and Katharina Nöh
Phys. Sci. Forum 2025, 12(1), 5; https://doi.org/10.3390/psf2025012005 - 24 Sep 2025
Abstract
Bayesian analysis is particularly useful for inferring models and their parameters given data. This is a common task in metabolic modeling, where models of varying complexity are used to interpret data. Nested sampling is a class of probabilistic inference algorithms that are particularly effective for estimating evidence and sampling the parameter posterior probability distributions. However, the practicality of nested sampling for metabolic network inference has yet to be studied. In this technical report, we explore the amalgamation of nested sampling, specifically diffusive nested sampling, with reversible jump Markov chain Monte Carlo. We apply the algorithm to two synthetic problems from the field of metabolic flux analysis. We present run times and share insights into hyperparameter choices, providing a useful point of reference for future applications of nested sampling to metabolic flux problems.
Full article
(This article belongs to the Proceedings of The 43rd International Workshop on Bayesian Inference and Maximum Entropy Methods in Science and Engineering)
Open Access Proceeding Paper
Combining Knowledge About Metabolic Networks and Single-Cell Data with Maximum Entropy
by
Carola S. Heinzel, Johann F. Jadebeck, Elisabeth Zelle, Johannes Seiffarth and Katharina Nöh
Phys. Sci. Forum 2025, 12(1), 3; https://doi.org/10.3390/psf2025012003 - 24 Sep 2025
Abstract
A better understanding of the fitness and flexibility of microbial platform organisms is central to biotechnological process development. Live-cell experiments uncover the phenotypic heterogeneity of living cells, emerging even within isogenic cell populations. However, how this observed heterogeneity in growth relates to the variability of the intracellular processes that drive cell growth and division is less well understood. We here approach the question of how the observed phenotypic variability in single-cell growth rates links to metabolic processes, specifically intracellular reaction rates (fluxes). To this end, we employ the Maximum Entropy (MaxEnt) principle, which allows us to connect the phenotypic solution space, derived from metabolic network models, with single-cell growth rates observed in live-cell experiments. We apply the computational machinery to first-of-its-kind data for the microorganism Corynebacterium glutamicum, grown on different substrates under continuous medium supply. We compare the MaxEnt-based estimates of metabolic fluxes with estimates obtained by assuming that the average cell operates at its maximum growth rate, which is the current predominant practice in biotechnology.
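The MaxEnt step can be sketched in a few lines (synthetic flux-state growth rates and an invented observed mean, not the C. glutamicum data): the maximum-entropy distribution over sampled flux states whose mean growth rate matches the observation is a Boltzmann-like reweighting p_k ∝ exp(λ g_k), with the multiplier λ fixed by root-finding.

```python
# MaxEnt reweighting of sampled flux states to match an observed mean
# growth rate (all numbers synthetic).
import numpy as np
from scipy.optimize import brentq

rng = np.random.default_rng(8)
growth = rng.uniform(0.0, 0.62, size=5000)   # growth rates of sampled flux states (1/h)
g_obs = 0.45                                 # invented observed mean growth rate (1/h)

def mean_growth(lam):
    w = np.exp(lam * (growth - growth.max()))   # numerically stabilized weights
    return np.sum(w * growth) / np.sum(w)

# Lagrange multiplier matching the observed mean: p_k proportional to exp(lam g_k)
lam = brentq(lambda l: mean_growth(l) - g_obs, -200.0, 200.0)
p = np.exp(lam * (growth - growth.max()))
p /= p.sum()
print(f"lambda = {lam:.2f}; reweighted mean growth = {mean_growth(lam):.3f} 1/h")
```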
Full article
(This article belongs to the Proceedings of The 43rd International Workshop on Bayesian Inference and Maximum Entropy Methods in Science and Engineering)
Open Access Proceeding Paper
Nonparametric FBST for Validating Linear Models
by
Rodrigo F. L. Lassance, Julio M. Stern and Rafael B. Stern
Phys. Sci. Forum 2025, 12(1), 2; https://doi.org/10.3390/psf2025012002 - 24 Sep 2025
Abstract
In Bayesian analysis, testing for linearity requires placing a prior on the entire space of potential regression functions. This poses a problem for many standard tests, as assigning positive prior probability to such a hypothesis is challenging. The Full Bayesian Significance Test (FBST) sidesteps this issue, standing out for also being logically coherent and offering a measure of evidence against $H_0$, although its application to nonparametric settings is still limited. In this work, we use Gaussian process priors to derive FBST procedures that evaluate general linearity assumptions, such as testing the adherence of data to linear models and performing variable selection. We also make use of pragmatic hypotheses to verify whether the data might be compatible with a linear model when factors such as measurement errors or utility judgments are accounted for. This contribution extends the theory of the FBST, allowing for its application in nonparametric settings and requiring, at most, simple optimization procedures to reach the desired conclusion.
Full article
(This article belongs to the Proceedings of The 43rd International Workshop on Bayesian Inference and Maximum Entropy Methods in Science and Engineering)