Phys. Sci. Forum, Volume 5, Issue 1 (2022) | MaxEnt 2022

The 41st International Workshop on Bayesian Inference and Maximum Entropy Methods in Science and Engineering

Paris, France | 18–22 July 2022

Volume Editors:
Frédéric Barbaresco, Thales Land and Air Systems, France
Ali Mohammad-Djafari, International Science Consulting and Training (ISCT), France
Frank Nielsen, Sony Computer Science Laboratories Inc., Japan
Martino Trassinelli, Sorbonne Université, France

Number of Papers: 53
Cover Story: The 41st International Workshop on Bayesian Inference and Maximum Entropy Methods in Science and Engineering (MaxEnt’22) was held at the Institut Henri Poincaré (IHP), Paris, 18–22 July 2022.

Editorial

3 pages, 861 KiB  
Editorial
Preface of the 41st International Workshop on Bayesian Inference and Maximum Entropy Methods in Science and Engineering
by Frédéric Barbaresco, Ali Mohammad-Djafari, Frank Nielsen and Martino Trassinelli
Phys. Sci. Forum 2022, 5(1), 43; https://doi.org/10.3390/psf2022005043 - 28 Mar 2023
Viewed by 1283
Abstract
The forty-first International Workshop on Bayesian Inference and Maximum Entropy Methods in Science and Engineering (41st MaxEnt’22) was held at the Institut Henri Poincaré (IHP), Paris, 18–22 July 2022 (https://maxent22 [...] Full article

Other

9 pages, 773 KiB  
Proceeding Paper
Marginal Bayesian Statistics Using Masked Autoregressive Flows and Kernel Density Estimators with Examples in Cosmology
by Harry Bevins, Will Handley, Pablo Lemos, Peter Sims, Eloy de Lera Acedo and Anastasia Fialkov
Phys. Sci. Forum 2022, 5(1), 1; https://doi.org/10.3390/psf2022005001 - 27 Oct 2022
Cited by 3 | Viewed by 1161
Abstract
Cosmological experiments often employ Bayesian workflows to derive constraints on cosmological and astrophysical parameters from their data. It has been shown that these constraints can be combined across different probes, such as Planck and the Dark Energy Survey, and that this can be a valuable exercise to improve our understanding of the universe and quantify tension between multiple experiments. However, these experiments are typically plagued by differing systematics, instrumental effects, and contaminating signals, which we collectively refer to as ‘nuisance’ components, which have to be modelled alongside target signals of interest. This leads to high dimensional parameter spaces, especially when combining data sets, with ≳20 dimensions of which only ∼5 correspond to key physical quantities. We present a means by which to combine constraints from different data sets in a computationally efficient manner by generating rapid, reusable, and reliable marginal probability density estimators, giving us access to nuisance-free likelihoods. This is possible through the unique combination of nested sampling, which gives us access to Bayesian evidence, and the marginal Bayesian statistics code margarine. Our method is lossless in the signal parameters, resulting in the same posterior distributions as would be found from a full nested sampling run over all nuisance parameters, and typically quicker than evaluating full likelihoods. We demonstrate our approach by applying it to the combination of posteriors from the Dark Energy Survey and Planck. Full article
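The core trick, building a reusable density estimator over the signal parameters alone, can be illustrated without the margarine package itself. Below is a minimal sketch that substitutes a Gaussian KDE for the paper's masked autoregressive flows; the dimensions and numbers are illustrative assumptions, not values from the paper.

```python
# Minimal sketch of a "nuisance-free likelihood": fit a density estimator to
# the signal parameters of a posterior chain only, so later inference never
# has to re-sample the nuisance dimensions. (margarine trains normalizing
# flows; a Gaussian KDE is used here purely as a stand-in.)
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)

# Mock posterior samples: 5 signal parameters + 20 nuisance parameters.
n_samples, n_signal, n_nuisance = 10_000, 5, 20
signal = rng.normal(0.0, 1.0, size=(n_samples, n_signal))
nuisance = rng.normal(0.0, 3.0, size=(n_samples, n_nuisance))

# Marginalisation is just dropping the nuisance columns of the chain...
marginal_samples = signal.T          # shape (n_signal, n_samples) for scipy

# ...and fitting a density estimator to what is left.
marginal_density = gaussian_kde(marginal_samples)

# The result is a cheap, reusable marginal posterior density:
theta = np.zeros(n_signal)
print("log marginal density at theta:", marginal_density.logpdf(theta))
```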

9 pages, 332 KiB  
Proceeding Paper
Comparing the Zeta Distributions with the Pareto Distributions from the Viewpoint of Information Theory and Information Geometry: Discrete versus Continuous Exponential Families of Power Laws
by Frank Nielsen
Phys. Sci. Forum 2022, 5(1), 2; https://doi.org/10.3390/psf2022005002 - 31 Oct 2022
Viewed by 1688
Abstract
We consider the zeta distributions, which are discrete power law distributions that can be interpreted as the counterparts of the continuous Pareto distributions with a unit scale. The family of zeta distributions forms a discrete exponential family with normalizing constants expressed using the Riemann zeta function. We present several information-theoretic measures between zeta distributions, study their underlying information geometry, and compare the results with their continuous counterparts, the Pareto distributions. Full article
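As a concrete anchor for the discrete side of the comparison, a divergence between two zeta distributions can be evaluated numerically straight from the definition p_s(k) = k^(-s)/ζ(s). The snippet below is a hedged numerical sketch (truncated summation, illustrative parameters), not code from the paper.

```python
# Kullback-Leibler divergence between two zeta distributions
# p_s(k) = k^(-s) / zeta(s), k = 1, 2, ..., by truncating the infinite sum.
import numpy as np
from scipy.special import zeta

def zeta_kl(s, t, k_max=100_000):
    """KL(p_s || p_t) for zeta distributions, by truncated summation."""
    k = np.arange(1, k_max + 1, dtype=float)
    p_s = k**(-s) / zeta(s)
    # log(p_s / p_t) = (t - s) log k + log zeta(t) - log zeta(s)
    log_ratio = (t - s) * np.log(k) + np.log(zeta(t)) - np.log(zeta(s))
    return float(np.sum(p_s * log_ratio))

print(zeta_kl(2.5, 3.5))   # the divergence is asymmetric: compare zeta_kl(3.5, 2.5)
```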

8 pages, 450 KiB  
Proceeding Paper
Bulk and Point Defect Properties in α-Zr: Uncertainty Quantification on a Semi-Empirical Potential
by Alessandra Del Masto
Phys. Sci. Forum 2022, 5(1), 3; https://doi.org/10.3390/psf2022005003 - 31 Oct 2022
Viewed by 1124
Abstract
Modelling studies of irradiation defects in α-Zr, such as point defects and their multiple clusters, often use semi-empirical potentials because of their higher computational efficiency as compared to ab initio approaches. Such potentials rely on a fixed number of parameters that need to be fitted to a reference dataset (ab initio and/or experimental), and their reliability is closely related to the uncertainty associated with their parameters, coming from both data inconsistency and model approximations. In this work, parametric uncertainties are quantified on a Second Moment Approximation (SMA) potential, focusing on bulk and point defect properties in α-Zr. A surrogate model, based on polynomial chaos expansion, is first built for properties of interest computed from atomistics, and simultaneously allows us to analytically compute the sensitivity indices of the observed properties to the potential parameters. This additional information is then used to select a limited number of material properties for the Bayesian inference. The posterior probability distributions of the parameters are estimated through two Markov Chain Monte Carlo (MCMC) sampling algorithms. The estimated posteriors of the model parameters are finally used to estimate material properties (not used for the inference): in every case considered, most of the properties are closer to the reference ab initio and experimental data than those obtained from the original potential. Full article

8 pages, 1724 KiB  
Proceeding Paper
Simulation-Based Inference of Bayesian Hierarchical Models While Checking for Model Misspecification
by Florent Leclercq
Phys. Sci. Forum 2022, 5(1), 4; https://doi.org/10.3390/psf2022005004 - 2 Nov 2022
Viewed by 1402
Abstract
This paper presents recent methodological advances for performing simulation-based inference (SBI) of a general class of Bayesian hierarchical models (BHMs) while checking for model misspecification. Our approach is based on a two-step framework. First, the latent function that appears as a second layer of the BHM is inferred and used to diagnose possible model misspecification. Second, target parameters of the trusted model are inferred via SBI. Simulations used in the first step are recycled for score compression, which is necessary for the second step. As a proof of concept, we apply our framework to a prey–predator model built upon the Lotka–Volterra equations and involving complex observational processes. Full article

9 pages, 1250 KiB  
Proceeding Paper
Nested Sampling of Materials’ Potential Energy Surfaces: Case Study of Zirconium
by George A. Marchant and Livia B. Pártay 
Phys. Sci. Forum 2022, 5(1), 5; https://doi.org/10.3390/psf2022005005 - 2 Nov 2022
Cited by 1 | Viewed by 1421
Abstract
The nested sampling (NS) method was originally proposed by John Skilling to calculate the evidence in Bayesian inference. The method has since been utilised in various research fields, and here we focus on how NS has been adapted to sample the Potential Energy Surface (PES) of atomistic systems, enabling the straightforward estimation of the partition function. Using two interatomic potential models of zirconium, we demonstrate the workflow and advantages of using nested sampling to calculate pressure-temperature phase diagrams. Without any prior knowledge of the stable phases or the phase transitions, we are able to identify the melting line, as well as the transition between the body-centred-cubic and hexagonal-close-packed structures. Full article
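For readers unfamiliar with the method, the skeleton of a nested sampling run is compact enough to sketch. The toy below estimates a Bayesian evidence with a uniform prior and naive rejection sampling to find each replacement point; production PES sampling replaces the rejection step with constrained random walks, and the final live-point contribution to the evidence is omitted for brevity. Everything here is an illustrative assumption, not the authors' implementation.

```python
# Bare-bones nested sampling loop in the spirit of Skilling's algorithm.
import numpy as np

rng = np.random.default_rng(1)

def log_likelihood(x):
    return -0.5 * np.sum(x**2)              # toy Gaussian "likelihood"

n_live, n_iter, dim = 100, 600, 2
live = rng.uniform(-5, 5, size=(n_live, dim))        # uniform prior on [-5, 5]^dim
live_logL = np.array([log_likelihood(x) for x in live])

log_Z, X_prev = -np.inf, 1.0                # evidence accumulator, prior volume
for i in range(n_iter):
    worst = int(np.argmin(live_logL))
    X_new = np.exp(-(i + 1) / n_live)       # mean geometric shrinkage of the volume
    log_w = np.log(X_prev - X_new) + live_logL[worst]   # shell weight (X_{i-1} - X_i) L_i
    log_Z = np.logaddexp(log_Z, log_w)
    while True:                             # naive rejection: resample above L_worst
        trial = rng.uniform(-5, 5, size=dim)
        if log_likelihood(trial) > live_logL[worst]:
            live[worst], live_logL[worst] = trial, log_likelihood(trial)
            break
    X_prev = X_new

print("log-evidence estimate:", log_Z)      # analytic value: log(2*pi/100) ~ -2.77
```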

10 pages, 2826 KiB  
Proceeding Paper
Geometric Variational Inference and Its Application to Bayesian Imaging
by Philipp Frank
Phys. Sci. Forum 2022, 5(1), 6; https://doi.org/10.3390/psf2022005006 - 2 Nov 2022
Cited by 1 | Viewed by 1542
Abstract
Modern-day Bayesian imaging problems in astrophysics as well as other scientific areas often result in non-Gaussian and very high-dimensional posterior probability distributions as their formal solution. Efficiently accessing the information contained in such distributions remains a core challenge in modern statistics as, on the one hand, point estimates such as Maximum a Posteriori (MAP) estimates are insufficient due to the nonlinear structure of these problems, while on the other hand, posterior sampling methods such as Markov Chain Monte Carlo (MCMC) techniques may become computationally prohibitively expensive in such high-dimensional settings. To nevertheless enable (approximate) inference in these cases, geometric Variational Inference (geoVI) has recently been introduced as an accurate Variational Inference (VI) technique for nonlinear unimodal probability distributions. It utilizes the Fisher–Rao information metric (FIM) related to the posterior probability distribution and the Riemannian manifold associated with the FIM to construct a set of normal coordinates in which the posterior metric is approximately the Euclidean metric. Transforming the posterior distribution into these coordinates results in a distribution that takes a particularly simple form, which ultimately allows for an accurate approximation with a normal distribution. A computationally efficient approximation of the associated coordinate transformation has been provided by geoVI, which now enables its application to real-world astrophysical imaging problems in millions of dimensions. Full article

10 pages, 522 KiB  
Proceeding Paper
Towards Moment-Constrained Causal Modeling
by Matteo Guardiani, Philipp Frank, Andrija Kostić and Torsten Enßlin
Phys. Sci. Forum 2022, 5(1), 7; https://doi.org/10.3390/psf2022005007 - 2 Nov 2022
Viewed by 1289
Abstract
The fundamental problem of causal inference involves discovering causal relations between variables used to describe observational data. We address this problem within the formalism of information field theory (IFT). Specifically, we focus on the problem of bivariate causal discovery (X→Y, Y→X) from an observational dataset (X,Y). The bivariate case is especially interesting because the methods of statistical independence testing are not applicable here. For this class of problems, we propose the moment-constrained causal model (MCM). The MCM goes beyond the additive noise model by exploiting Bayesian hierarchical modeling to provide non-parametric reconstructions of the observational distributions. In order to identify the correct causal direction, we compare the performance of our newly developed Bayesian inference algorithm for the different causal directions (X→Y, Y→X) by calculating the evidence lower bound (ELBO). To this end, we developed a new method for ELBO estimation that takes advantage of the adopted variational inference scheme for parameter inference. Full article

9 pages, 442 KiB  
Proceeding Paper
Value of Information in the Binary Case and Confusion Matrix
by Roman Belavkin, Panos Pardalos and Jose Principe
Phys. Sci. Forum 2022, 5(1), 8; https://doi.org/10.3390/psf2022005008 - 2 Nov 2022
Cited by 1 | Viewed by 1265
Abstract
The simplest Bayesian system used to illustrate ideas of probability theory is a coin and a boolean utility function. To illustrate ideas of hypothesis testing, estimation or optimal control, one needs to use at least two coins and a confusion matrix accounting for the utilities of four possible outcomes. Here we use such a system to illustrate the main ideas of Stratonovich’s value of information (VoI) theory in the context of a financial time-series forecast. We demonstrate how VoI can provide a theoretical upper bound on the accuracy of the forecasts facilitating the analysis and optimization of models. Full article
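The binary, symmetric special case of this setting can be computed in a few lines: a fair "coin" X must be predicted, the utility is 1 for a hit and 0 for a miss (a 2x2 confusion matrix), and the forecast channel may carry at most I bits about X. The optimal accuracy q then saturates 1 - H2(q) = I, which upper-bounds any forecaster's hit rate. The sketch below is an illustrative reduction of the paper's setting, not its general theory.

```python
# Upper bound on forecast accuracy given an information budget, binary
# symmetric case with 0/1 utility (a Fano-type bound, achieved by a BSC).
import numpy as np
from scipy.optimize import brentq

def H2(q):
    """Binary entropy in bits."""
    q = np.clip(q, 1e-12, 1 - 1e-12)
    return -(q * np.log2(q) + (1 - q) * np.log2(1 - q))

def max_accuracy(info_bits):
    """Largest hit probability q >= 1/2 compatible with I(X;Y) <= info_bits."""
    if info_bits >= 1.0:
        return 1.0
    return brentq(lambda q: 1.0 - H2(q) - info_bits, 0.5, 1.0 - 1e-12)

for I in (0.01, 0.1, 0.5, 1.0):
    q = max_accuracy(I)
    print(f"I = {I:4.2f} bit -> accuracy <= {q:.3f}, value of information = {q - 0.5:.3f}")
```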

8 pages, 338 KiB  
Proceeding Paper
Linear (h,φ)-Entropies for Quasi-Power Sequences with a Focus on the Logarithm of Taneja Entropy
by Valérie Girardin and Philippe Regnault
Phys. Sci. Forum 2022, 5(1), 9; https://doi.org/10.3390/psf2022005009 - 3 Nov 2022
Viewed by 962
Abstract
Conditions are highlighted for generalized entropies to allow for non-trivial time-averaged entropy rates for a large class of random sequences, including Markov chains and continued fractions. The axiomatic-free conditions arise from the behavior of the marginal entropy of the sequence. Apart from the well-known Shannon and Rényi cases, only logarithmic versions of Sharma–Taneja–Mittal entropies may fulfill these conditions. Their main properties are detailed. Full article
10 pages, 879 KiB  
Proceeding Paper
Geometric Learning of Hidden Markov Models via a Method of Moments Algorithm
by Berlin Chen, Cyrus Mostajeran and Salem Said
Phys. Sci. Forum 2022, 5(1), 10; https://doi.org/10.3390/psf2022005010 - 3 Nov 2022
Viewed by 1520
Abstract
We present a novel algorithm for learning the parameters of hidden Markov models (HMMs) in a geometric setting where the observations take values in Riemannian manifolds. In particular, we elevate a recent second-order method of moments algorithm that incorporates non-consecutive correlations to a more general setting where observations take place in a Riemannian symmetric space of non-positive curvature and the observation likelihoods are Riemannian Gaussians. The resulting algorithm decouples into a Riemannian Gaussian mixture model estimation algorithm followed by a sequence of convex optimization procedures. We demonstrate through examples that the learner can achieve significantly improved speed and numerical accuracy compared to existing learners. Full article

10 pages, 337 KiB  
Proceeding Paper
A Connection between Probability, Physics and Neural Networks
by Sascha Ranftl
Phys. Sci. Forum 2022, 5(1), 11; https://doi.org/10.3390/psf2022005011 - 7 Nov 2022
Cited by 3 | Viewed by 2790
Abstract
I illustrate an approach that can be exploited for constructing neural networks that a priori obey physical laws. We start with a simple single-layer neural network (NN) but refrain from choosing the activation functions yet. Under certain conditions and in the infinite-width limit, we may apply the central limit theorem, upon which the NN output becomes Gaussian. We may then investigate and manipulate the limit network by falling back on Gaussian process (GP) theory. It is observed that linear operators acting upon a GP again yield a GP. This also holds true for differential operators defining differential equations and describing physical laws. If we demand the GP, or equivalently the limit network, to obey the physical law, then this yields an equation for the covariance function or kernel of the GP, whose solution equivalently constrains the model to obey the physical law. The central limit theorem then suggests that NNs can be constructed to obey a physical law by choosing the activation functions such that they match a particular kernel in the infinite-width limit. The activation functions constructed in this way guarantee the NN to a priori obey the physics, up to the approximation error of non-infinite network width. Simple examples of the homogeneous 1D-Helmholtz equation are discussed and compared to naive kernels and activations. Full article
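The central-limit argument in the abstract is easy to verify numerically: sample many random single-hidden-layer networks and watch the output distribution become Gaussian as the width grows, with the empirical covariance playing the role of the GP kernel. The snippet below is a toy check with tanh activations and standard-normal weights (assumed choices; the paper's construction instead engineers the activations to match Helmholtz-compatible kernels).

```python
# Empirical check that a wide random NN tends to a Gaussian process.
import numpy as np

rng = np.random.default_rng(2)

def random_nn_outputs(xs, width, n_networks=1000):
    """Outputs f(x) = sum_j v_j tanh(w_j x + b_j) / sqrt(width) of many random NNs."""
    w = rng.normal(size=(n_networks, width, 1))
    b = rng.normal(size=(n_networks, width, 1))
    v = rng.normal(size=(n_networks, width, 1))
    xs = np.asarray(xs).reshape(1, 1, -1)
    return (v * np.tanh(w * xs + b)).sum(axis=1) / np.sqrt(width)

for width in (1, 10, 1000):
    f = random_nn_outputs([0.3, 0.7], width)          # shape (n_networks, 2)
    excess_kurt = ((f[:, 0] - f[:, 0].mean())**4).mean() / f[:, 0].var()**2 - 3
    print(f"width {width:5d}: empirical kernel k(x1, x2) = {np.cov(f.T)[0, 1]:+.3f}, "
          f"excess kurtosis = {excess_kurt:+.3f}")
# Excess kurtosis -> 0 as the width grows: the limit network is a GP whose
# kernel is the covariance estimated above, i.e. the object that the paper
# constrains with differential operators.
```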

9 pages, 512 KiB  
Proceeding Paper
Classification and Uncertainty Quantification of Corrupted Data Using Supervised Autoencoders
by Philipp Joppich, Sebastian Dorn, Oliver De Candido, Jakob Knollmüller and Wolfgang Utschick
Phys. Sci. Forum 2022, 5(1), 12; https://doi.org/10.3390/psf2022005012 - 7 Nov 2022
Cited by 2 | Viewed by 1521
Abstract
Parametric and non-parametric classifiers often have to deal with real-world data, where corruptions such as noise, occlusions, and blur are unavoidable. We present a probabilistic approach to classify strongly corrupted data and quantify uncertainty, even though the corrupted data do not have to be included in the training data. A supervised autoencoder is the underlying architecture. We used the decoding part as a generative model for realistic data and extended it by convolutions, masking, and additive Gaussian noise to describe imperfections. This constitutes a statistical inference task in terms of the optimal latent space activations of the underlying uncorrupted datum. We solved this problem approximately with Metric Gaussian Variational Inference (MGVI). The supervision of the autoencoder’s latent space allowed us to classify corrupted data directly under uncertainty with the statistically inferred latent space activations. We show that the derived model uncertainty can be used as a statistical “lie detector” of the classification. Independent of that, the generative model can optimally restore the corrupted datum by decoding the inferred latent space activations. Full article

10 pages, 942 KiB  
Proceeding Paper
Equivariant Neural Networks and Differential Invariants Theory for Solving Partial Differential Equations
by Pierre-Yves Lagrave and Eliot Tron
Phys. Sci. Forum 2022, 5(1), 13; https://doi.org/10.3390/psf2022005013 - 7 Nov 2022
Viewed by 2657
Abstract
This paper discusses the use of Equivariant Neural Networks (ENN) for solving Partial Differential Equations by exploiting their underlying symmetry groups. We first show that Group-Convolutional Neural Networks can be used to generalize Physics-Informed Neural Networks and then consider the use of ENN to approximate differential invariants of a given symmetry group, hence allowing one to build symmetry-preserving Finite Difference methods without the need to formally derive the corresponding numerical invariantizations. The benefit of our approach is illustrated on the 2D heat equation through the instantiation of an SE(2) symmetry-preserving discretization. Full article

9 pages, 339 KiB  
Proceeding Paper
Fluid Densities Defined from Probability Density Functions, and New Families of Conservation Laws
by Robert K. Niven
Phys. Sci. Forum 2022, 5(1), 14; https://doi.org/10.3390/psf2022005014 - 8 Nov 2022
Viewed by 1388
Abstract
The mass density, commonly denoted ρ(x,t) as a function of position x and time t, is considered an obvious concept in physics. It is, however, fundamentally dependent on the continuum assumption, the ability of the observer to downscale the mass of atoms present within a prescribed volume to the limit of an infinitesimal volume. In multiphase systems such as flow in porous media, the definition becomes critical, and has been addressed by taking the convolution [ρ](x,t) = ∫_{V(x,t)} w(r,t) ρ(x+r,t) dV(r,t), involving integration of a local density ρ(x+r,t) multiplied by a weighting function w(r,t) over the small local volume V(r,t), where [·] is an expectation and r is a local coordinate. This weighting function is here formally identified as the probability density function p(r|t), enabling the construction of densities from probabilities. This insight is extended to a family of five probability densities derived from p(u,x|t), applicable to fluid elements of velocity u and position x at time t in a fluid flow system. By convolution over a small geometric volume V and/or a small velocimetric domain U, these can be used to define five corresponding fluid densities. Three of these densities are functions of the fluid velocity, enabling a description of fluid flow of higher fidelity than that provided by ρ(x,t) alone. Applying this set of densities within an extended form of the Reynolds transport theorem, it is possible to derive new families of integral conservation laws applicable to different parameter spaces, for the seven common conserved quantities (fluid mass, species mass, linear momentum, angular momentum, energy, charge and entropy). The findings considerably expand the set of known conservation laws for the analysis of physical systems. Full article

9 pages, 1067 KiB  
Proceeding Paper
Reputation Communication from an Information Perspective
by Torsten Enßlin, Viktoria Kainz and Céline Bœhm
Phys. Sci. Forum 2022, 5(1), 15; https://doi.org/10.3390/psf2022005015 - 28 Nov 2022
Cited by 1 | Viewed by 1165
Abstract
Communication, the exchange of information between intelligent agents, whether human or artificial, is susceptible to deception and misinformation. Reputation systems can help agents decide how much to trust an information source that is not necessarily reliable. Consequently, the reputation of the agents themselves determines the influence of their communication on the beliefs of others. This makes reputation a valuable resource, and thus a natural target for manipulation. To investigate the vulnerability of reputation systems, we simulate the dynamics of communicating agents seeking high reputation within their social group using an agent-based model. The simulated agents are equipped with a cognitive model that is limited in mental capacity but otherwise follows information-theoretic principles. Various malicious strategies of the agents, such as sycophancy, egocentrism, pathological lying, and aggressiveness, are examined for their effects on group sociology. Phenomena resembling real social psychological effects are observed, such as echo chambers, self-deception, deceptive symbiosis, narcissistic supply, and freezing of group opinions. Here, the information-theoretical aspects of the reputation game simulation are discussed. Full article

9 pages, 651 KiB  
Proceeding Paper
Credit Risk Scoring Forecasting Using a Time Series Approach
by Ayoub El-Qadi, Maria Trocan, Thomas Frossard and Natalia Díaz-Rodríguez
Phys. Sci. Forum 2022, 5(1), 16; https://doi.org/10.3390/psf2022005016 - 1 Dec 2022
Cited by 1 | Viewed by 3773
Abstract
Credit risk assessments are vital to the operations of financial institutions. These activities depend on the availability of data. In many cases, the records of financial data processed by the credit risk models are frequently incomplete. Several methods have been proposed in the literature to address the problem of missing values. Yet, when assessing a company, there are some critical features that influence the final credit assessment. The availability of financial data also depends strongly on the country to which the company belongs. This is due to the fact that there are countries where the regulatory frameworks allow companies not to publish their financial statements. In this paper, we propose a framework that can process historical credit assessments of a large number of companies, which were performed between 2008 and 2019, in order to treat the data as time series. We then used these time series data in order to fit two different models: a traditional statistical model (an autoregressive moving average model) and a machine-learning-based model (a gradient boosting model). This approach allowed the generation of future credit assessments without the need for new financial data. Full article

9 pages, 401 KiB  
Proceeding Paper
Adaptive Importance Sampling for Equivariant Group-Convolution Computation
by Pierre-Yves Lagrave and Frédéric Barbaresco
Phys. Sci. Forum 2022, 5(1), 17; https://doi.org/10.3390/psf2022005017 - 5 Dec 2022
Viewed by 1890
Abstract
This paper introduces an adaptive importance sampling scheme for the computation of group-based convolutions, a key step in the implementation of equivariant neural networks. By leveraging information geometry to define the parameter update rule for inferring the optimal sampling distribution, we show promising results for our approach by working with the two-dimensional rotation group SO(2) and von Mises distributions. Finally, we position our AIS scheme with respect to quantum algorithms for computing Monte Carlo estimations. Full article
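A minimal version of such a scheme fits in a few lines: importance-sample the circle with a von Mises proposal and refit the proposal from the weighted sample after each round. The sketch below uses simple moment matching as the update rule, standing in for the paper's information-geometric update, and an illustrative integrand; it is not the authors' code.

```python
# Adaptive importance sampling on SO(2) with a von Mises proposal.
import numpy as np
from scipy.stats import vonmises

rng = np.random.default_rng(6)

f = lambda th: np.exp(2 * np.cos(th - 1.0))     # integrand on SO(2) ~ [-pi, pi)
mu, kappa = 0.0, 1.0                            # initial proposal parameters

for round_ in range(5):
    th = vonmises.rvs(kappa, loc=mu, size=5000, random_state=rng)
    w = f(th) / vonmises.pdf(th, kappa, loc=mu)       # importance weights
    estimate = w.mean()                               # estimate of the integral
    # Moment matching of the weighted sample to a new von Mises proposal:
    z = np.sum(w * np.exp(1j * th)) / w.sum()
    mu, R = np.angle(z), abs(z)
    kappa = R * (2 - R**2) / (1 - R**2)               # standard kappa approximation
    print(f"round {round_}: integral ~ {estimate:.4f}, mu = {mu:+.3f}, kappa = {kappa:.2f}")
# Exact value for comparison: 2*pi*I_0(2) ~ 14.32.
```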

8 pages, 474 KiB  
Proceeding Paper
SEIR Modeling, Simulation, Parameter Estimation, and Their Application for COVID-19 Epidemic Prediction
by Elham Taghizadeh and Ali Mohammad-Djafari
Phys. Sci. Forum 2022, 5(1), 18; https://doi.org/10.3390/psf2022005018 - 5 Dec 2022
Cited by 3 | Viewed by 5979
Abstract
In this paper, we consider the SEIR (Susceptible-Exposed-Infectious-Removed) model for studying COVID-19. The main contributions of this paper are: (i) a detailed explanation of the SEIR model, with the significance of its parameters; (ii) calibration and estimation of the parameters of the model using the observed data, for which we used nonlinear least squares (NLS) optimization and a Bayesian estimation method; (iii) once the parameters are estimated, use of the model to predict the spread of the virus and to compute the probable numbers of infections and deaths; (iv) demonstration of the performance of the proposed method on simulated and real data; (v) remarking that the fixed-parameter model could not give satisfactory results on real data, we propose a time-dependent parameter model, which is then implemented and used on real data. Full article
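For concreteness, a minimal fixed-parameter SEIR integration looks as follows; the rates and initial conditions are illustrative placeholders rather than the fitted COVID-19 values from the paper, and the calibration step would wrap this solver in an optimizer or posterior sampler.

```python
# Minimal SEIR integration (fixed-parameter variant).
import numpy as np
from scipy.integrate import solve_ivp

def seir(t, y, beta, sigma, gamma):
    S, E, I, R = y
    N = S + E + I + R
    dS = -beta * S * I / N               # new exposures
    dE = beta * S * I / N - sigma * E    # incubation: E -> I at rate sigma
    dI = sigma * E - gamma * I           # removal: I -> R at rate gamma
    dR = gamma * I
    return [dS, dE, dI, dR]

beta, sigma, gamma = 0.5, 1 / 5.2, 1 / 10     # assumed illustrative rates
y0 = [1e6 - 10, 0, 10, 0]                     # nearly fully susceptible population
sol = solve_ivp(seir, (0, 180), y0, args=(beta, sigma, gamma), dense_output=True)
print("peak infectious count:", sol.y[2].max())
# Fitting amounts to minimising the misfit between sol.y and observed counts
# over (beta, sigma, gamma), e.g. with scipy.optimize.least_squares, or to
# sampling their posterior for the Bayesian variant.
```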

9 pages, 439 KiB  
Proceeding Paper
Information Properties of a Random Variable Decomposition through Lattices
by Fábio C. C. Meneghetti, Henrique K. Miyamoto and Sueli I. R. Costa
Phys. Sci. Forum 2022, 5(1), 19; https://doi.org/10.3390/psf2022005019 - 5 Dec 2022
Viewed by 1285
Abstract
A full-rank lattice in the Euclidean space is a discrete set formed by all integer linear combinations of a basis. Given a probability distribution on R^n, two operations can be induced by considering the quotient of the space by such a lattice: wrapping and quantization. For a lattice Λ, and a fundamental domain D, which tiles R^n through Λ, the wrapped distribution over the quotient is obtained by summing the density over each coset, while the quantized distribution over the lattice is defined by integrating over each fundamental domain translation. These operations define wrapped and quantized random variables over D and Λ, respectively, which sum up to the original random variable. We investigate information-theoretic properties of this decomposition, such as entropy, mutual information and the Fisher information matrix, and show that it naturally generalizes to the more abstract context of locally compact topological groups. Full article
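A one-dimensional toy makes the two operations concrete: take the integer lattice Z with fundamental domain D = [0, 1), so a random variable X decomposes into its quantized part (which unit interval it falls in) and its wrapped part (where within that interval). The sketch below is an illustrative construction for a Gaussian, not code from the paper.

```python
# Wrap/quantize decomposition of a Gaussian over the lattice Z, D = [0, 1).
import numpy as np
from scipy.stats import norm
from scipy.integrate import trapezoid

mu, sd = 0.3, 0.8
X = norm(mu, sd)
ks = np.arange(-30, 31)                      # enough cosets for this Gaussian

# Quantized distribution on Z: integrate the density over each translate of D.
p_quant = X.cdf(ks + 1.0) - X.cdf(ks)

# Wrapped density on D: sum the density over the cosets x + Z.
def wrapped_pdf(x):
    return np.sum(X.pdf(x + ks[:, None]), axis=0)

x = np.linspace(0, 1, 1001)
print("quantized mass sums to", p_quant.sum())
print("wrapped density integrates to", trapezoid(wrapped_pdf(x), x))
# The entropies of the two parts and their mutual information combine to
# recover the differential entropy of X, the kind of identity the paper studies.
```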

9 pages, 281 KiB  
Proceeding Paper
Graphical Gaussian Models Associated to a Homogeneous Graph with Permutation Symmetries
by Piotr Graczyk, Hideyuki Ishi and Bartosz Kołodziejek
Phys. Sci. Forum 2022, 5(1), 20; https://doi.org/10.3390/psf2022005020 - 7 Dec 2022
Viewed by 1227
Abstract
We consider multivariate-centered Gaussian models for the random vector (Z_1, …, Z_p), whose conditional structure is described by a homogeneous graph and which is invariant under the action of a permutation subgroup. This paper is concerned with model selection within colored graphical Gaussian models, when the underlying conditional dependency graph is known. We derive an analytic expression of the normalizing constant of the Diaconis–Ylvisaker conjugate prior for the precision parameter and perform Bayesian model selection in the class of graphical Gaussian models invariant by the action of a permutation subgroup. We illustrate our results with a toy example of dimension 5. Full article

8 pages, 296 KiB  
Proceeding Paper
Dynamical Systems over Lie Groups Associated with Statistical Transformation Models
by Daisuke Tarama and Jean-Pierre Françoise
Phys. Sci. Forum 2022, 5(1), 21; https://doi.org/10.3390/psf2022005021 - 7 Dec 2022
Viewed by 1347
Abstract
A statistical transformation model consists of a smooth data manifold, on which a Lie group smoothly acts, together with a family of probability density functions on the data manifold parametrized by elements in the Lie group. For such a statistical transformation model, the Fisher–Rao semi-definite metric and the Amari–Chentsov cubic tensor are defined in the Lie group. If the family of probability density functions is invariant with respect to the Lie group action, the Fisher–Rao semi-definite metric and the Amari–Chentsov tensor are left-invariant, and hence we have a left-invariant structure of a statistical manifold. In the present work, the general framework of statistical transformation models is explained. Then, the left-invariant geodesic flow associated with the Fisher–Rao metric is considered for two specific families of probability density functions on the Lie group. The corresponding Euler–Poincaré and the Lie–Poisson equations are explicitly found in view of geometric mechanics. Related dynamical systems over Lie groups are also mentioned. A generalization in relation to the invariance of the family of probability density functions is further studied. Full article
24 pages, 1308 KiB  
Proceeding Paper
Kangaroos in Cambridge
by Romke Bontekoe and Barrie J. Stokes
Phys. Sci. Forum 2022, 5(1), 22; https://doi.org/10.3390/psf2022005022 - 8 Dec 2022
Viewed by 1636
Abstract
In this tutorial paper the Gull–Skilling kangaroo problem is revisited. The problem is used as an example of solving an under-determined system by variational principles, the maximum entropy principle (MEP), and Information Geometry. The relationship between correlation and information is demonstrated. The Kullback–Leibler divergence of two discrete probability distributions is shown to fail as a distance measure. However, an analogy with rigid body rotations in classical mechanics is motivated. A table of proper “geodesic” distances between probability distributions is presented. With this paper the authors pay tribute to their late friend David Blower. Full article
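The kangaroo problem itself is small enough to solve in a few lines. In the standard telling, a third of kangaroos are blue-eyed and a third are left-handed; the joint 2x2 table is then fixed by a single free parameter, and maximizing the Shannon entropy selects the independent table. A minimal numerical version (illustrative, not the authors' code):

```python
# Gull-Skilling kangaroo problem as a 1-parameter maximum-entropy search.
import numpy as np
from scipy.optimize import minimize_scalar

p_blue, p_left = 1/3, 1/3

def neg_entropy(t):
    # Joint table parametrised by t = P(blue & left); marginals are fixed.
    p = np.array([t, p_blue - t, p_left - t, 1 - p_blue - p_left + t])
    if np.any(p <= 0):
        return np.inf
    return np.sum(p * np.log(p))            # minimise -H  <=>  maximise H

res = minimize_scalar(neg_entropy, bounds=(1e-9, min(p_blue, p_left) - 1e-9),
                      method="bounded")
print("MaxEnt joint P(blue & left) =", res.x)   # ~ 1/9 = p_blue * p_left
```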

8 pages, 1127 KiB  
Proceeding Paper
Maxwell’s Demon and Information Theory in Market Efficiency: A Brillouin’s Perspective
by Xavier Brouty and Matthieu Garcin
Phys. Sci. Forum 2022, 5(1), 23; https://doi.org/10.3390/psf2022005023 - 12 Dec 2022
Viewed by 1746
Abstract
By using Brillouin’s perspective on Maxwell’s demon, we determine a new way to describe investor behaviors in financial markets. The efficient market hypothesis (EMH) in its strong form states that all information in the market, public or private, is accounted for in the stock price. By simulations in an agent-based model, we show that an informed investor using alternative data, correlated to the time series of prices of a financial asset, is able to act as a Maxwell’s demon on financial markets. They are then able to perform statistical arbitrage, consistently with the adaptive market hypothesis (AMH). A new statistical test of market efficiency provides some insight into the impact of the demon on the market. This test determines the amount of information contained in the series, using quantities which are widespread in information theory, such as Shannon’s entropy. As in Brillouin’s perspective, we observe a cycle: Negentropy → Information → Negentropy. This cycle demonstrates the involvement in the market of an investor, depicted as a Maxwell’s demon, with knowledge of alternative data. Full article

7 pages, 280 KiB  
Proceeding Paper
Homogeneous Symplectic Spaces and Central Extensions
by Andrew Beckett
Phys. Sci. Forum 2022, 5(1), 24; https://doi.org/10.3390/psf2022005024 - 12 Dec 2022
Cited by 1 | Viewed by 1554
Abstract
We summarise recent work on the classical result of Kirillov that any simply connected homogeneous symplectic space of a connected group G is a hamiltonian Ĝ-space for a one-dimensional central extension Ĝ of G, and is thus (by a result of Kostant) a cover of a coadjoint orbit of Ĝ. We emphasise that existing proofs in the literature assume that G is simply connected and that this assumption can be removed by application of a theorem of Neeb. We also interpret Neeb’s theorem as relating the integrability of one-dimensional central extensions of Lie algebras to the integrability of an associated Chevalley–Eilenberg 2-cocycle. Full article
12 pages, 746 KiB  
Proceeding Paper
Information Geometry Control under the Laplace Assumption
by Adrian-Josue Guel-Cortez and Eun-jin Kim
Phys. Sci. Forum 2022, 5(1), 25; https://doi.org/10.3390/psf2022005025 - 12 Dec 2022
Cited by 2 | Viewed by 2470
Abstract
By combining information science and differential geometry, information geometry provides a geometric method to measure the differences in the time evolution of the statistical states in a stochastic process. Specifically, the so-called information length (the time integral of the information rate) describes the total amount of statistical changes that a time-varying probability distribution takes through time. In this work, we outline how the application of information geometry may permit us to create energetically efficient and organised behaviour artificially. Specifically, we demonstrate how nonlinear stochastic systems can be analysed by utilising the Laplace assumption to speed up the numerical computation of the information rate of stochastic dynamics. Then, we explore a modern control engineering protocol to obtain the minimum statistical variability while analysing its effects on the closed-loop system’s stochastic thermodynamics. Full article
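To make the central quantity concrete: for a 1D Gaussian p(x,t) = N(mu(t), sigma(t)^2), the information rate reduces to Gamma^2 = (dmu/dt)^2/sigma^2 + 2(dsigma/dt)^2/sigma^2 (the squared speed under the Fisher metric), and the information length is the time integral of Gamma. The sketch below evaluates this for an illustrative relaxation process; the trajectories are assumptions, not the paper's controlled system.

```python
# Information rate and information length of a time-varying 1D Gaussian.
import numpy as np
from scipy.integrate import cumulative_trapezoid

t = np.linspace(0.0, 5.0, 2001)
mu = np.exp(-t)                         # mean relaxing to 0 (O-U-like toy process)
sigma = 1.0 - 0.5 * np.exp(-2 * t)      # spread settling to its stationary value

dmu = np.gradient(mu, t)
dsigma = np.gradient(sigma, t)
gamma = np.sqrt(dmu**2 / sigma**2 + 2 * dsigma**2 / sigma**2)   # information rate

L = cumulative_trapezoid(gamma, t, initial=0.0)   # information length L(t)
print("total information length:", L[-1])
```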

8 pages, 1162 KiB  
Proceeding Paper
Attention-Guided Multi-Scale CNN Network for Cervical Vertebral Maturation Assessment from Lateral Cephalometric Radiography
by Hamideh Manoochehri, Seyed Ahmad Motamedi, Ali Mohammad-Djafari, Masrour Makaremi and Alireza Vafaie Sadr
Phys. Sci. Forum 2022, 5(1), 26; https://doi.org/10.3390/psf2022005026 - 12 Dec 2022
Cited by 1 | Viewed by 1463
Abstract
Accurate determination of skeletal maturation indicators is crucial in the orthodontic process. Chronologic age is not a reliable skeletal maturation indicator; thus, physicians use bone age. In orthodontics, the treatment timing depends on Cervical Vertebral Maturation (CVM) assessment. Determination of the CVM degree remains challenging due to the limited annotated dataset, the existence of significant irrelevant areas in the image, the huge intra-class variances, and the high degree of inter-class similarities. To address this problem, researchers have started looking for external information beyond currently available medical datasets. This work utilizes domain knowledge from radiologists to train neural network models that can be utilized as a decision support system. We propose a novel supervised learning method with a multi-scale attention mechanism, and we incorporate the general diagnostic patterns of medical doctors to classify lateral X-ray images into six CVM classes. The proposed network highlights the important regions, suppresses the irrelevant parts of the image, and efficiently models long-range dependencies. Employing the attention mechanism improves both the performance and the interpretability. In this work, we used additive spatial and channel attention modules. Our proposed network consists of three branches: the first branch extracts local features and creates attention maps and related masks, the second branch uses the masks to extract discriminative features for classification, and the third branch fuses local and global features. The results show that the proposed method can represent more discriminative features; therefore, the accuracy of image classification is greater in comparison to the backbone and some attention-based state-of-the-art networks. Full article

10 pages, 877 KiB  
Proceeding Paper
Analysis of Dynamical Field Inference in a Supersymmetric Theory
by Margret Westerkamp, Igor V. Ovchinnikov, Philipp Frank and Torsten Enßlin
Phys. Sci. Forum 2022, 5(1), 27; https://doi.org/10.3390/psf2022005027 - 12 Dec 2022
Viewed by 1159
Abstract
The inference of dynamical fields is of paramount importance in science, technology, and economics. Dynamical field inference can be based on information field theory and used to infer the evolution of fields in dynamical systems from finite data. Here, the partition function, as the central mathematical object of our investigation, invokes a Dirac delta function as well as a field-dependent functional determinant, which impede the inference. To tackle this problem, Faddeev–Popov ghosts and a Lagrange multiplier are introduced to represent the partition function by an integral over those fields. According to the supersymmetric theory of stochastics, the action associated with the partition function has a supersymmetry for those ghost and signal fields. In this context, the spontaneous breakdown of supersymmetry leads to chaotic behavior of the system. To demonstrate the impact of chaos, characterized by positive Lyapunov exponents, on the predictability of a system’s evolution, we show for the case of idealized linear dynamics that the dynamical growth rates of the fermionic ghost fields impact the uncertainty of the field inference. Finally, by establishing perturbative solutions to the inference problem associated with an idealized nonlinear system, using a Feynman diagrammatic expansion, we expose that the fermionic contributions, implementing the functional determinant, are key to obtaining the correct posterior of the system. Full article

9 pages, 455 KiB  
Proceeding Paper
Model Selection in the World of Maximum Entropy
by Orestis Loukas and Ho-Ryun Chung
Phys. Sci. Forum 2022, 5(1), 28; https://doi.org/10.3390/psf2022005028 - 14 Dec 2022
Viewed by 1174
Abstract
Science aims at identifying suitable models that best describe a population based on a set of features. Lacking information about the relationships among features, there is no justification to a priori fix a certain model. Ideally, we want to incorporate only those relationships into the model which are supported by the observed data. To achieve this goal, the model that best balances goodness of fit with simplicity should be selected. However, parametric approaches to model selection encounter difficulties pertaining to the precise definition of the invariant content that enters the selection procedure and its interpretation. A naturally invariant formulation of any statistical model consists of the joint distribution of features, which provides all the information that is required to answer questions in classification tasks or the identification of feature relationships. The principle of Maximum Entropy (maxent) offers a framework to directly estimate a model for this joint distribution based on phenomenological constraints. Reformulating the inverse problem of obtaining a model distribution as an under-constrained linear system of equations, where the remaining degrees of freedom are fixed by entropy maximization, tremendously simplifies large-N expansions around the optimal distribution of Maximum Entropy. We have exploited this conceptual advancement to clarify the nature of prominent model-selection schemes, providing an approach to systematically select significant constraints evidenced by the data. To facilitate the treatment of higher-dimensional problems, we propose hypermaxent, a clustering method to efficiently tackle the maxent selection procedure. We demonstrate the utility of our approach by applying the advocated methodology to analyze long-range interactions in spin glasses and uncover three-point effects in COVID-19 data. Full article

9 pages, 366 KiB  
Proceeding Paper
Two Unitary Quantum Process Tomography Algorithms Robust to Systematic Errors
by François Verdeil and Yannick Deville
Phys. Sci. Forum 2022, 5(1), 29; https://doi.org/10.3390/psf2022005029 - 12 Dec 2022
Viewed by 977
Abstract
Quantum process tomography (QPT) methods aim at identifying a given quantum process. QPT is a major quantum information processing tool, since it especially allows one to characterize the actual behavior of quantum gates, which are the building blocks of quantum computers. The present paper focuses on the estimation of a unitary process. This class is of particular interest because quantum mechanics postulates that the evolution of any closed quantum system is described by a unitary transformation. Unitary processes have significantly fewer parameters than general quantum processes (2^(2n_qb) vs. 2^(4n_qb) − 2^(2n_qb) real independent parameters for n_qb qubits). By assuming that the process is unitary, we develop two methods that scale better with the size of the system. In the present paper, we stay as close as possible to the standard setup of QPT: the operator has to prepare copies of different input states. The properties those states have to satisfy in order for our method to achieve QPT are very mild. Therefore, we choose to operate with copies of 2^(n_qb) initially unknown pure input states. In order to perform QPT without knowing the input states, we perform measurements on half the copies of each state, and let the other half be transformed by the system before measuring them (each copy is only measured once). This setup has the advantage of removing the issue of systematic (i.e., the same on all the copies of a state) errors entirely, because it does not require the process input to take predefined values. We develop a straightforward analytical solution that first estimates the states from the averaged measurements and then finds the unitary matrix (representing the process) coherent with those estimates, by using our analytical solution to an extended version of Wahba’s problem. This estimate may then be used as an initial point for a fine-tuning algorithm that maximizes the likelihood of the measurements. Simulation results show the effectiveness of the proposed methods. Full article
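The last step, fitting a unitary to noisy input/output state estimates, is a complex-valued Procrustes problem of the Wahba type with a closed-form SVD solution. The toy below shows that step in isolation, with illustrative dimensions and noise, and with the states given directly rather than reconstructed from measurements as in the paper.

```python
# Closed-form unitary fit (orthogonal/unitary Procrustes, Wahba-style).
import numpy as np

rng = np.random.default_rng(3)
d = 4                                         # 2 qubits -> dimension 4

# Hidden unitary to recover (QR of a random complex matrix).
U_true, _ = np.linalg.qr(rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d)))

# Noisy input/output state estimates (columns are states).
A = rng.normal(size=(d, 2 * d)) + 1j * rng.normal(size=(d, 2 * d))
A /= np.linalg.norm(A, axis=0)
B = U_true @ A + 0.01 * (rng.normal(size=A.shape) + 1j * rng.normal(size=A.shape))

# Procrustes solution: maximise Re tr(U^H B A^H)  =>  U = W V^H from the SVD.
W, _, Vh = np.linalg.svd(B @ A.conj().T)
U_est = W @ Vh

# Compare up to a global phase (physically irrelevant).
phase = np.vdot(U_est.ravel(), U_true.ravel())
phase /= abs(phase)
print("recovery error:", np.linalg.norm(U_true - phase * U_est))
```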

9 pages, 960 KiB  
Proceeding Paper
What Is Randomness? The Interplay between Alpha Entropies, Total Variation and Guessing
by Olivier Rioul
Phys. Sci. Forum 2022, 5(1), 30; https://doi.org/10.3390/psf2022005030 - 13 Dec 2022
Cited by 1 | Viewed by 1137
Abstract
In many areas of computer science, it is of primary importance to assess the randomness of a certain variable X. Many different criteria can be used to evaluate randomness, possibly after observing some disclosed data. A “sufficiently random” X is often described as “entropic”. Indeed, Shannon’s entropy is known to provide a resistance criterion against modeling attacks. More generally, one may consider the Rényi α-entropy, where Shannon’s entropy, collision entropy and min-entropy are recovered as the particular cases α = 1, 2 and +∞, respectively. Guesswork, or guessing entropy, is also of great interest in relation to α-entropy. On the other hand, many applications rely instead on the “statistical distance”, also known as the “total variation” distance, to the uniform distribution. This criterion is particularly important because a very small distance ensures that no statistical test can effectively distinguish between the actual distribution and the uniform distribution. In this paper, we establish optimal lower and upper bounds between α-entropy and guessing entropy on one hand, and error probability and total variation distance to the uniform on the other hand. In this context, it turns out that the best known “Pinsker inequality” and recent “reverse Pinsker inequalities” are not necessarily optimal. We recover or improve previous Fano-type and Pinsker-type inequalities used in several applications. Full article
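For reference, the α-entropy underlying the abstract, with the three named special cases recovered as limits (standard definitions, restated here for convenience):

```latex
% Rényi alpha-entropy of p = (p_1, ..., p_M) and its named special cases.
\[
  H_\alpha(p) \;=\; \frac{1}{1-\alpha}\,\log \sum_{k=1}^{M} p_k^{\alpha},
  \qquad \alpha > 0,\ \alpha \neq 1,
\]
\[
  H_1 = \lim_{\alpha \to 1} H_\alpha = -\sum_k p_k \log p_k \ \text{(Shannon)},
  \quad
  H_2 = -\log \sum_k p_k^{2} \ \text{(collision)},
  \quad
  H_\infty = -\log \max_k p_k \ \text{(min-entropy)}.
\]
```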

9 pages, 4593 KiB  
Proceeding Paper
Time-Dependent Maximum Entropy Model for Populations of Retinal Ganglion Cells
by Geoffroy Delamare and Ulisse Ferrari
Phys. Sci. Forum 2022, 5(1), 31; https://doi.org/10.3390/psf2022005031 - 13 Dec 2022
Cited by 1 | Viewed by 1536
Abstract
The inverse Ising model is used in computational neuroscience to infer probability distributions of the synchronous activity of large neuronal populations. This method allows for finding the Boltzmann distribution with single-neuron biases and pairwise interactions that maximizes the entropy and reproduces the empirical statistics of the recorded neuronal activity. Here, we apply this strategy to large populations of retinal output neurons (ganglion cells) of different types, stimulated by multiple visual stimuli with their own statistics. The activity of retinal output neurons is driven by both the inputs from upstream neurons, which encode the visual information and reflect stimulus statistics, and the recurrent connections, which induce network effects. We first apply the standard inverse Ising model approach and show that it accounts well for the system’s collective behavior when the input visual stimulus has short-ranged spatial correlations, but fails for long-ranged ones. This happens because stimuli with long-ranged spatial correlations synchronize the activity of neurons over long distances. This effect cannot be accounted for by pairwise interactions, and hence not by the pairwise Ising model. To solve this issue, we apply a previously proposed framework that includes a temporal dependence in the single-neuron biases to model how neurons are driven in time by the stimulus. Thanks to this addition, the stimulus effects are taken into account by the biases, and the pairwise interactions allow for the characterization of the network effects in the population activity and for reproducing the structure of the recurrent functional connections in the retinal architecture. In particular, the inferred interactions are strong and positive only for nearby neurons of the same type. Inter-type connections are instead small and slightly negative. Therefore, the retinal architecture splits into weakly interacting subpopulations composed of strongly interacting neurons. Overall, this temporal framework fixes the problems of the standard, static inverse Ising model and accounts for the system’s collective behavior, for stimuli with either short- or long-range correlations. Full article
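The "standard inverse Ising" step the abstract starts from can be spelled out exactly for a handful of neurons, where the Boltzmann distribution can be enumerated: gradient ascent on the log-likelihood nudges the model's means and pairwise correlations onto the recorded ones ("Boltzmann learning"). The sketch below uses synthetic statistics and exact enumeration; real recordings require MCMC or mean-field approximations, and this is illustrative, not the authors' pipeline.

```python
# Exact Boltzmann learning for a tiny inverse Ising problem.
import numpy as np
from itertools import product

rng = np.random.default_rng(5)
n = 5                                           # 5 "neurons" -> 32 states, enumerable
states = np.array(list(product([-1, 1], repeat=n)), dtype=float)

# Synthetic "recorded" statistics generated from a hidden model.
h_true = rng.normal(0, 0.3, n)
J_true = np.triu(rng.normal(0, 0.3, (n, n)), 1)
J_true = J_true + J_true.T

def model_stats(h, J):
    E = states @ h + 0.5 * np.einsum('ki,ij,kj->k', states, J, states)
    p = np.exp(E - E.max()); p /= p.sum()       # Boltzmann distribution
    m = p @ states                              # <s_i>
    C = states.T @ (p[:, None] * states)        # <s_i s_j>
    return m, C

m_data, C_data = model_stats(h_true, J_true)

h, J = np.zeros(n), np.zeros((n, n))
for _ in range(2000):                           # gradient ascent on the log-likelihood
    m, C = model_stats(h, J)
    h += 0.1 * (m_data - m)
    J += 0.1 * (C_data - C)
    np.fill_diagonal(J, 0.0)

print("max parameter error:",
      max(np.abs(h - h_true).max(), np.abs(J - J_true).max()))
```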

9 pages, 650 KiB  
Proceeding Paper
Quantum Finite Automata and Quiver Algebras
by George Jeffreys and Siu-Cheong Lau
Phys. Sci. Forum 2022, 5(1), 32; https://doi.org/10.3390/psf2022005032 - 14 Dec 2022
Viewed by 1258
Abstract
We find an application in quantum finite automata for the ideas and results of [JL21] and [JL22]. We reformulate quantum finite automata with multiple-time measurements using the algebraic notion of a near-ring. This gives a unified understanding of quantum computing and deep learning. When the near-ring comes from a quiver, we have a nice moduli space of computing machines with a metric that can be optimized by gradient descent. Full article
10 pages, 343 KiB  
Proceeding Paper
Efficient Representations of Spatially Variant Point Spread Functions with Butterfly Transforms in Bayesian Imaging Algorithms
by Vincent Eberle, Philipp Frank, Julia Stadler, Silvan Streit and Torsten Enßlin
Phys. Sci. Forum 2022, 5(1), 33; https://doi.org/10.3390/psf2022005033 - 14 Dec 2022
Cited by 3 | Viewed by 1302
Abstract
Bayesian imaging algorithms are becoming increasingly important in, e.g., astronomy, medicine and biology. Given that many of these algorithms compute iterative solutions to high-dimensional inverse problems, the efficiency and accuracy of the instrument response representation are of high importance for the imaging process. For this reason, point spread functions, which make up a large fraction of the response functions of telescopes and microscopes, are usually assumed to be spatially invariant in a given field of view and can thus be represented by a convolution. For many instruments, however, this assumption does not hold and degrades the accuracy of the instrument representation. Here, we discuss the application of butterfly transforms, which are linear neural network structures whose sizes scale subquadratically with the number of data points. Butterfly transforms are efficient by design, since they are inspired by the structure of the Cooley–Tukey fast Fourier transform. In this work, we combine them in several ways into butterfly networks, compare the different architectures with respect to their performance, and identify an architecture that represents a synthetic spatially variant point spread function efficiently, up to a 1% error. Full article
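As a sketch of the structure involved, one butterfly layer mixes entries in pairs at a given stride with learnable 2x2 blocks, and log2(N) such layers compose to an N x N map with O(N log N) parameters, mirroring the Cooley–Tukey FFT; a minimal stand-in, not the authors' implementation:

```python
import numpy as np

def butterfly_layer(x, w, block):
    """x: (N,) input; w: (N//2, 2, 2) per-pair mixing weights; block: stride."""
    x = x.copy()
    N = x.size
    pair = 0
    for start in range(0, N, 2 * block):
        for i in range(start, start + block):
            a, b = x[i], x[i + block]
            x[i]         = w[pair, 0, 0] * a + w[pair, 0, 1] * b
            x[i + block] = w[pair, 1, 0] * a + w[pair, 1, 1] * b
            pair += 1
    return x

N = 8
rng = np.random.default_rng(0)
x = rng.normal(size=N)
for block in (1, 2, 4):                        # log2(N) layers, FFT-like strides
    x = butterfly_layer(x, rng.normal(size=(N // 2, 2, 2)), block)
```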
8 pages, 283 KiB  
Proceeding Paper
Unfolding of Relative g-Entropies and Monotone Metrics
by Fabio Di Nocera
Phys. Sci. Forum 2022, 5(1), 34; https://doi.org/10.3390/psf2022005034 - 15 Dec 2022
Viewed by 967
Abstract
We discuss the geometric aspects of a recently described unfolding procedure and show the form of objects relevant to the field of quantum information geometry in the unfolding space. In particular, we show the form of the quantum monotone metric tensors characterized by Petz, and retrace, in this unfolded perspective, a recently introduced procedure for extracting a covariant tensor from a relative g-entropy. Full article
8 pages, 586 KiB  
Proceeding Paper
Outlier-Robust Surrogate Modelling of Ion-Solid Interaction Simulations
by Roland Preuss and Udo von Toussaint
Phys. Sci. Forum 2022, 5(1), 35; https://doi.org/10.3390/psf2022005035 - 15 Dec 2022
Cited by 1 | Viewed by 1095
Abstract
Data for complex plasma–wall interactions require long-running and expensive computer simulations with codes like EIRENE or SOLPS. Furthermore, the number of input parameters is large, which results in a low coverage of the (physical) parameter space. The unpredictable occurrence of outliers creates a need to explore this multi-dimensional space with robust analysis tools. We restate the Gaussian-process (GP) method as a Bayesian adaptive exploration method for establishing surrogate surfaces in the variables of interest. On this basis, we complement the analysis with the Student-t process (TP) method in order to improve the robustness of the result with respect to outliers. The most obvious difference between the two methods shows up in the marginal likelihood for the hyperparameters of the covariance function, where the TP method features a broader marginal probability distribution in the presence of outliers. Full article
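The robustness mechanism can be made concrete by comparing the zero-mean log marginal likelihoods of the two models, since the Student-t form penalizes large residuals only logarithmically; a minimal sketch for a fixed covariance matrix K (hypothetical helper names, using the common (nu-2)-scaled TP parametrization):

```python
import numpy as np
from scipy.special import gammaln

def gp_log_ml(y, K):
    """Gaussian-process log marginal likelihood, zero mean."""
    n = y.size
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    return -0.5 * y @ alpha - np.log(np.diag(L)).sum() - 0.5 * n * np.log(2 * np.pi)

def tp_log_ml(y, K, nu):
    """Student-t process log marginal likelihood, zero mean, nu > 2."""
    n = y.size
    L = np.linalg.cholesky(K)
    beta = y @ np.linalg.solve(L.T, np.linalg.solve(L, y))   # y^T K^{-1} y
    return (gammaln((nu + n) / 2) - gammaln(nu / 2)
            - 0.5 * n * np.log((nu - 2) * np.pi)
            - np.log(np.diag(L)).sum()
            - 0.5 * (nu + n) * np.log1p(beta / (nu - 2)))
```

Optimizing these over covariance hyperparameters is where the broader TP marginal distribution reported above shows up.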
11 pages, 299 KiB  
Proceeding Paper
Entropic Dynamics and Quantum “Measurement”
by Ariel Caticha
Phys. Sci. Forum 2022, 5(1), 36; https://doi.org/10.3390/psf2022005036 - 15 Dec 2022
Cited by 1 | Viewed by 1081
Abstract
The entropic dynamics (ED) approach to quantum mechanics is ideally suited to address the problem of measurement because it is based on entropic and Bayesian methods of inference that have been designed to process information and data. The approach succeeds because ED achieves a clear-cut separation between ontic and epistemic elements: positions are ontic, while probabilities and wave functions are epistemic. Thus, ED is a viable realist ψ-epistemic model. Such models are widely assumed to be ruled out by various no-go theorems. We show that ED evades those theorems by adopting purely epistemic dynamics and denying the existence of an ontic dynamics at the subquantum level. Full article
9 pages, 284 KiB  
Proceeding Paper
Hamilton–Jacobi–Bellman Equations in Stochastic Geometric Mechanics
by Qiao Huang and Jean-Claude Zambrini
Phys. Sci. Forum 2022, 5(1), 37; https://doi.org/10.3390/psf2022005037 - 16 Dec 2022
Cited by 1 | Viewed by 1511
Abstract
This paper summarises a new framework of Stochastic Geometric Mechanics that attributes a fundamental role to Hamilton–Jacobi–Bellman (HJB) equations. These are associated with geometric versions of probabilistic Lagrangian and Hamiltonian mechanics. Our method uses the tools of “second-order differential geometry”, due to L. Schwartz and P.-A. Meyer, which may be interpreted as a probabilistic counterpart of the canonical quantization procedure for geometric structures of classical mechanics. The inspiration for our results comes from what is called “Schrödinger’s problem” in Stochastic Optimal Transport theory, as well as from the hydrodynamical interpretation of quantum mechanics. Our general framework, however, should also be relevant in Machine Learning and other fields where HJB equations play a key role. Full article
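For context, the generic HJB equation of stochastic optimal control, whose geometric and probabilistic versions the framework is built around, takes the standard form (a textbook statement, not the paper’s specific equation):

```latex
\partial_t V(t,x) + \min_{u}\Big\{ b(x,u)\cdot\nabla V(t,x)
  + \tfrac{1}{2}\,\mathrm{Tr}\big(\sigma\sigma^{\top}(x,u)\,\nabla^{2} V(t,x)\big)
  + L(x,u) \Big\} = 0 .
```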
9 pages, 231 KiB  
Proceeding Paper
Borel and the Emergence of Probability on the Mathematical Scene in France
by Matthias Cléry and Laurent Mazliak
Phys. Sci. Forum 2022, 5(1), 38; https://doi.org/10.3390/psf2022005038 - 19 Dec 2022
Viewed by 1781
Abstract
In 1928, the Henri Poincaré Institute opened in Paris thanks to the efforts of the mathematician Emile Borel and the support of the Rockefeller Foundation. Teaching and research on the mathematics of chance were placed by Borel at the center of the institute’s activity, a result imposed by the French mathematicians in the face of indifference and even hostility towards a discipline accused of a lack of seriousness. This historical account, based in large part on the results of Matthias Cléry’s thesis, presents how Borel became convinced of the importance of closing the gap between France and other countries regarding the place of probability and statistics in the educational system, describes the strategy that led to the creation of the IHP, and shows how its voluntarist functioning enabled it, within ten years, to become one of the main world centers of reflection on this subject. Full article
8 pages, 667 KiB  
Proceeding Paper
Upscaling Reputation Communication Simulations
by Viktoria Kainz, Céline Bœhm, Sonja Utz and Torsten Enßlin
Phys. Sci. Forum 2022, 5(1), 39; https://doi.org/10.3390/psf2022005039 - 26 Dec 2022
Viewed by 984
Abstract
Social communication is omnipresent and a fundamental basis of our daily lives. Especially due to the increasing popularity of social media, communication flows are becoming more complex, faster and more influential. It is therefore not surprising that in these highly dynamic communication structures, strategies are also developed to spread certain opinions, to deliberately steer discussions, or to inject misinformation. The reputation game is an agent-based simulation that uses information-theoretic principles to model the effect of such malicious behavior, taking reputation dynamics as an example. So far, only small groups of 3 to 5 agents have been studied; here, we extend the reputation game to larger groups of up to 50 agents, also including one-to-many conversations. In this setup, the resulting group dynamics are examined, with particular emphasis on the emerging network topology and the influence of agents’ personal characteristics on it. In the long term, the reputation game should thus help to determine relations between the arising communication network structure, the communication strategies used, and the recipients’ behavior, allowing us to identify potentially harmful communication patterns, e.g., in social media. Full article
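The paper’s update rules are information-theoretic and specific to the reputation game; purely as an illustration of the kind of one-to-many dynamic being scaled up, here is a heavily simplified toy in which every listener nudges its opinion of the discussed agent toward a speaker’s claim, weighted by the speaker’s own standing (all names and formulas hypothetical):

```python
import numpy as np

def step(rep, honesty, rng, lr=0.1):
    """rep[i, j]: agent i's current opinion of agent j, in [0, 1]."""
    n = rep.shape[0]
    speaker = rng.integers(n)                  # who broadcasts
    topic = rng.integers(n)                    # whom they talk about
    # Honest speakers report the topic's honesty; dishonest ones flatter.
    claim = honesty[speaker] * honesty[topic] + (1 - honesty[speaker])
    for listener in range(n):                  # one-to-many conversation
        if listener == speaker:
            continue
        trust = rep[listener, speaker]
        rep[listener, topic] += lr * trust * (claim - rep[listener, topic])

rng = np.random.default_rng(1)
n = 50                                         # group size studied above
rep = np.full((n, n), 0.5)
honesty = rng.uniform(size=n)
for _ in range(10_000):
    step(rep, honesty, rng)
```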
11 pages, 347 KiB  
Proceeding Paper
On Foundational Physics
by John Skilling and Kevin H. Knuth
Phys. Sci. Forum 2022, 5(1), 40; https://doi.org/10.3390/psf2022005040 - 3 Jan 2023
Viewed by 1094
Abstract
As physicists, we wish to make mental models of the world around us. For this to be useful, we need to be able to classify features of the world into symbols and develop a rational calculus for their manipulation. In seeking maximal generality, we aim for minimal restrictive assumptions. That inquiry starts by developing basic arithmetic and proceeds to develop the formalism of quantum theory and relativity. Full article
9 pages, 1615 KiB  
Proceeding Paper
Multi-Objective Optimization of the Nanocavities Diffusion in Irradiated Metals
by Andrée De Backer, Abdelkader Souidi, Etienne A. Hodille, Emmanuel Autissier, Cécile Genevois, Farah Haddad, Antonin Della Noce, Christophe Domain, Charlotte S. Becquart and Marie France Barthe
Phys. Sci. Forum 2022, 5(1), 41; https://doi.org/10.3390/psf2022005041 - 6 Jan 2023
Cited by 1 | Viewed by 2535
Abstract
Materials in fission reactors or fusion tokamaks are exposed to neutron irradiation, which creates defects in the microstructure. With time, depending on the temperature, defects diffuse and form, among others, nanocavities, altering the material performance. The goal of this work is to determine the diffusion properties of the nanocavities in tungsten. We combine (i) a systematic experimental study in irradiated samples annealed at different temperatures up to 1800 K (the created nanocavities diffuse, and their coalescence is studied by transmission electron microscopy); (ii) our object kinetic Monte Carlo model of the microstructure evolution, fed by a large collection of atomistic data; and (iii) a multi-objective optimization method (using model inversion) to obtain the diffusion of nanocavities, input parameters of our model, from the comparison with the experimental observations. We simplify the multi-objective function, proposing a projection into the parameter space. Non-dominated solutions are revealed: two “valleys” of minima corresponding to the nanocavity density and size objectives, respectively, which delimit the Pareto optimal solution. These “valleys” provide upper and lower uncertainties on the diffusion, beyond the uncertainties on the experimental and simulated results. The nanocavity diffusion can be split into three domains: monovacancies and small vacancy clusters, for which atomistic models are affordable; small nanocavities, for which our approach is decisive; and nanocavities larger than 1.5 nm, for which classical surface diffusion theory is valid. Full article
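The non-dominated solutions mentioned above can be extracted with a standard Pareto filter over the two objectives (here, nanocavity density and size); a minimal sketch for objectives to be minimized:

```python
import numpy as np

def non_dominated(F):
    """F: (n_points, 2) objective values; returns mask of Pareto-optimal points."""
    n = F.shape[0]
    mask = np.ones(n, dtype=bool)
    for i in range(n):
        # Point i is dominated if some j is no worse in both objectives
        # and strictly better in at least one.
        dominated = np.all(F <= F[i], axis=1) & np.any(F < F[i], axis=1)
        if dominated.any():
            mask[i] = False
    return mask

F = np.random.default_rng(2).uniform(size=(100, 2))
pareto_front = F[non_dominated(F)]
```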
9 pages, 300 KiB  
Proceeding Paper
The Geometry of Quivers
by Antoine Bourget
Phys. Sci. Forum 2022, 5(1), 42; https://doi.org/10.3390/psf2022005042 - 19 Jan 2023
Viewed by 3228
Abstract
Quivers are oriented graphs that have profound connections to various areas of mathematics, including representation theory and geometry. Quiver representations correspond to a vast generalization of classical linear algebra problems. The geometry of these representations can be described in the framework of Hamiltonian reduction and geometric invariant theory, giving rise to the concept of quiver variety. In parallel to these developments, quivers have appeared to naturally encode certain supersymmetric quantum field theories. The associated quiver variety then corresponds to a part of the moduli space of vacua of the theory. However, physics tells us that another natural geometric object associated with quivers exists, which can be seen as a magnetic analog of the (electric) quiver variety. When viewed from that angle, magnetic quivers are a new tool, developed in the past decade, that helps mathematicians and physicists alike to understand geometric spaces. This note is the write-up of a talk in which I review these developments from both the mathematical and physical perspectives, emphasizing the dialogue between the two communities. Full article
8 pages, 561 KiB  
Proceeding Paper
Reciprocity Relations for Quantum Systems Based on Fisher Information
by Mariela Portesi, Juan Manuel Pujol and Federico Holik
Phys. Sci. Forum 2022, 5(1), 44; https://doi.org/10.3390/psf2022005044 - 29 Jan 2023
Viewed by 1011
Abstract
We study reciprocity relations between fluctuations of the probability distributions corresponding to position and momentum, and other observables, in quantum theory. These kinds of relations have previously been studied in terms of quantifiers based on the Lipschitz constants of the concomitant distributions. However, it turned out that they were not valid for all states. Here, we ask the following question: can those relations be described using other quantifiers? By appealing to the Fisher information, we study reciprocity relations for different families of states. In particular, we look for a connection of this problem with previous work. Full article
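For reference, the quantifier appealed to above is the classical Fisher information of a probability density with respect to translations,

```latex
I[\rho] = \int \frac{\big(\rho'(x)\big)^{2}}{\rho(x)}\,dx ,
```

evaluated here for the position and momentum distributions of the states under study; the reciprocity relations investigated relate such quantities for conjugate observables.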
9 pages, 1792 KiB  
Proceeding Paper
A Computational Model to Determine Membrane Ionic Conductance Using Electroencephalography in Epilepsy
by Tahereh Najafi, Rosmina Jaafar, Rabani Remli, Wan Asyraf Wan Zaidi and Kalaivani Chellappan
Phys. Sci. Forum 2022, 5(1), 45; https://doi.org/10.3390/psf2022005045 - 1 Feb 2023
Viewed by 1187
Abstract
Epilepsy is a multiscale disease in which small alterations at the cellular scale affect the electroencephalogram (EEG). We use a computational model to bridge the cellular scale to the EEG by evaluating the ionic conductance of the Hodgkin–Huxley (HH) membrane model and comparing the EEG in response to intermittent photic stimulation (IPS) for epilepsy and normal subjects. Modeling is sectioned into IPS encoding, determination of an LTI system, and modification of the ionic conductance to generate epilepsy signals. Machine learning is employed, yielding an ionic conductance of 0.6 mS/cm² in epilepsy. This ionic conductance is lower than the unitary conductance for normal subjects. Full article
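For orientation, the ionic conductances being fitted enter the Hodgkin–Huxley membrane current in the standard way; a minimal sketch with the textbook squid-axon constants (the study’s fitted value plays the role of one such conductance):

```python
def hh_membrane_current(V, m, h, n, g_Na=120.0, g_K=36.0, g_L=0.3,
                        E_Na=50.0, E_K=-77.0, E_L=-54.4):
    """V in mV, conductances in mS/cm^2; returns ionic current in uA/cm^2.
    m, h, n are the usual HH gating variables in [0, 1]."""
    I_Na = g_Na * m**3 * h * (V - E_Na)   # sodium current
    I_K  = g_K * n**4 * (V - E_K)         # potassium current
    I_L  = g_L * (V - E_L)                # leak current
    return I_Na + I_K + I_L
```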
8 pages, 304 KiB  
Proceeding Paper
Comparison of Step Samplers for Nested Sampling
by Johannes Buchner
Phys. Sci. Forum 2022, 5(1), 46; https://doi.org/10.3390/psf2022005046 - 6 Feb 2023
Cited by 3 | Viewed by 1416
Abstract
Bayesian inference with nested sampling requires a likelihood-restricted prior sampling method, which draws samples from the prior distribution that exceed a likelihood threshold. For high-dimensional problems, Markov Chain Monte Carlo derivatives have been proposed. We numerically study ten algorithms based on slice sampling, hit-and-run and differential evolution algorithms in ellipsoidal, non-ellipsoidal and non-convex problems from 2 to 100 dimensions. Mixing capabilities are evaluated with the nested sampling shrinkage test. This makes our results valid independently of how heavy-tailed the posteriors are. Given the same number of steps, slice sampling is outperformed by hit-and-run and whitened slice sampling, while whitened hit-and-run does not provide results that are as good. Proposing along differential vectors of live-point pairs also leads to the highest efficiencies and appears promising for multi-modal problems. The tested proposals are implemented in the UltraNest nested sampling package, enabling efficient low- and high-dimensional inference for a large class of practical inference problems relevant to astronomy, cosmology and particle physics. Full article
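As a sketch of what a single step-sampler iteration does, here is a simplified likelihood-restricted slice-sampling step (step out, then shrink, along a random direction); this is a stand-in for the basic pattern the compared proposals elaborate on, not the UltraNest code:

```python
import numpy as np

def slice_step(x, loglike, threshold, rng, w=1.0):
    """One step from live point x (which satisfies loglike(x) > threshold)."""
    d = rng.normal(size=x.size)
    d /= np.linalg.norm(d)                     # random unit direction
    u = rng.uniform()
    lo, hi = -w * u, w * (1.0 - u)             # random interval containing 0
    while loglike(x + lo * d) > threshold:     # step out left
        lo -= w
    while loglike(x + hi * d) > threshold:     # step out right
        hi += w
    while True:                                # shrink until accepted
        t = rng.uniform(lo, hi)
        y = x + t * d
        if loglike(y) > threshold:
            return y
        if t < 0:
            lo = t
        else:
            hi = t
```

Whitening, hit-and-run and differential-evolution proposals differ mainly in how the direction d is chosen.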
9 pages, 747 KiB  
Proceeding Paper
Is Quantum Tomography a Difficult Problem for Machine Learning?
by Philippe Jacquet
Phys. Sci. Forum 2022, 5(1), 47; https://doi.org/10.3390/psf2022005047 - 7 Feb 2023
Viewed by 972
Abstract
One of the key issues in machine learning is the characterization of the learnability of a problem. Regret is a way to quantify learnability. Quantum tomography is a special case of machine learning where the training set is a set of quantum measurements and the ground truth is the result of these measurements, but nothing is known about the hidden quantum system. We show that in some cases quantum tomography is a hard problem to learn. We consider a problem related to optical fiber communication where information is encoded in photon polarizations. We show that the learning regret cannot decay faster than 1/T, where T is the size of the training dataset, and that incremental gradient descent may converge even more slowly. Full article
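To make the regret framing concrete, a toy stand-in (not the paper’s model): photons polarized at a hidden angle theta*, measurements at random basis angles succeeding with probability cos²(theta* − a) by Malus’ law, learned by incremental gradient descent on the per-sample negative log-likelihood while accumulating regret against the ground truth:

```python
import numpy as np

def nll(p, y):
    """Per-sample negative log-likelihood of a binary outcome y."""
    return -(y * np.log(p) + (1 - y) * np.log(1 - p))

rng = np.random.default_rng(3)
theta_star, theta, regret = 0.7, 0.0, 0.0
T = 10_000
for t in range(1, T + 1):
    a = rng.uniform(0, np.pi)                          # measurement basis
    p_star = np.clip(np.cos(theta_star - a) ** 2, 1e-6, 1 - 1e-6)
    y = float(rng.uniform() < p_star)                  # measurement outcome
    p = np.clip(np.cos(theta - a) ** 2, 1e-6, 1 - 1e-6)
    # d/dtheta of nll, using d/dtheta cos^2(theta - a) = -sin(2(theta - a))
    grad = -np.sin(2 * (theta - a)) * (p - y) / (p * (1 - p))
    theta -= grad / np.sqrt(t)                         # incremental step
    regret += nll(p, y) - nll(p_star, y)
print(theta, regret / T)
```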
10 pages, 3762 KiB  
Proceeding Paper
Variational Bayesian Approximation (VBA): A Comparison between Three Optimization Algorithms
by Seyedeh Azadeh Fallah Mortezanejad and Ali Mohammad-Djafari
Phys. Sci. Forum 2022, 5(1), 48; https://doi.org/10.3390/psf2022005048 - 8 Feb 2023
Viewed by 1176
Abstract
In many Bayesian computations, we first obtain the expression of the joint distribution of all the unknown variables given the observed data. In general, this expression is not separable in those variables. Thus, obtaining the marginals for each variable and computing the expectations is difficult and costly. This problem becomes even more difficult in high-dimensional settings, which is an important issue in inverse problems. We may then try to propose a surrogate expression with which we can carry out approximate computations. Often, a separable approximation can be useful enough. The variational Bayesian approximation (VBA) is a technique that approximates the joint distribution p with an easier, for example separable, distribution q by minimizing the Kullback–Leibler divergence KL(q|p). When q is separable in all the variables, the approximation is also called the mean-field approximation (MFA), and q is then the product of the approximated marginals. A first standard and general algorithm is the alternate optimization of KL(q|p) with respect to each q_i. A second general approach is its optimization on the Riemannian manifold. However, in this paper, for practical reasons, we consider the case where p is in the exponential family, and so is q. In this case, KL(q|p) becomes a function of the parameters θ of the exponential family, and we can then use any other optimization algorithm to obtain those parameters. In this paper, we compare three optimization algorithms, namely a standard alternate optimization, a gradient-based algorithm and a natural gradient algorithm, and study their relative performances in three examples. Full article
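As a minimal illustration of the alternate (mean-field) optimization, consider approximating a correlated 2-D Gaussian p with precision matrix Λ by a separable Gaussian q = q1·q2; the classical coordinate updates keep the conditional precisions Λ_ii and iterate the conditional means (a textbook special case, not one of the paper’s three examples):

```python
import numpy as np

Lam = np.array([[2.0, 0.8],
                [0.8, 1.5]])                 # precision matrix of p
m = np.array([1.0, -1.0])                    # mean of p
mu = np.zeros(2)                             # means of q1, q2
for _ in range(50):                          # alternate optimization of KL(q|p)
    mu[0] = m[0] - (Lam[0, 1] / Lam[0, 0]) * (mu[1] - m[1])
    mu[1] = m[1] - (Lam[1, 0] / Lam[1, 1]) * (mu[0] - m[0])
var = 1.0 / np.diag(Lam)                     # mean-field variances of q1, q2
```

Here the q means converge to the true means, while the mean-field variances 1/Λ_ii understate the true marginal variances, the well-known behavior of KL(q|p) minimization.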
9 pages, 1833 KiB  
Proceeding Paper
Switching Machine Improvisation Models by Latent Transfer Entropy Criteria
by Shlomo Dubnov, Vignesh Gokul and Gerard Assayag
Phys. Sci. Forum 2022, 5(1), 49; https://doi.org/10.3390/psf2022005049 - 8 Feb 2023
Cited by 1 | Viewed by 1238
Abstract
Machine improvisation is the ability of musical generative systems to interact with either another musical agent or a human improviser. This is a challenging task, as it is not trivial to define a quantitative measure that evaluates the creativity of the musical agent. It is also not feasible to create huge paired corpora of agents interacting with each other to train a critic system. In this paper, we consider the problem of controlling machine improvisation by switching between several pre-trained models to find the best match to an external control signal. We introduce a measure, SymTE, that searches for the best transfer entropy between representations of the generated and control signals across multiple generative models. Full article
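The paper defines SymTE precisely; as background, a minimal discrete lag-1 transfer entropy estimator TE(X→Y), the kind of quantity such a criterion compares across candidate models, can be sketched as:

```python
import numpy as np
from collections import Counter

def transfer_entropy(x, y):
    """Discrete lag-1 TE(X -> Y) in bits, from symbol sequences x, y."""
    triples = Counter(zip(y[1:], y[:-1], x[:-1]))   # (y_{t+1}, y_t, x_t)
    pairs_yx = Counter(zip(y[:-1], x[:-1]))
    pairs_yy = Counter(zip(y[1:], y[:-1]))
    singles = Counter(y[:-1])
    n = len(y) - 1
    te = 0.0
    for (y1, y0, x0), c in triples.items():
        p_joint = c / n
        p_cond_full = c / pairs_yx[(y0, x0)]        # p(y_{t+1} | y_t, x_t)
        p_cond_self = pairs_yy[(y1, y0)] / singles[y0]   # p(y_{t+1} | y_t)
        te += p_joint * np.log2(p_cond_full / p_cond_self)
    return te

rng = np.random.default_rng(4)
x = rng.integers(0, 2, 5000)
y = np.roll(x, 1)                                   # y copies x with lag 1
print(transfer_entropy(x, y))                       # approaches 1 bit
```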
9 pages, 8999 KiB  
Proceeding Paper
Bayesian and Machine Learning Methods in the Big Data Era for Astronomical Imaging
by Fabrizia Guglielmetti, Philipp Arras, Michele Delli Veneri, Torsten Enßlin, Giuseppe Longo, Lukasz Tychoniec and Eric Villard
Phys. Sci. Forum 2022, 5(1), 50; https://doi.org/10.3390/psf2022005050 - 15 Feb 2023
Viewed by 1466
Abstract
The Atacama Large Millimeter/submillimeter Array, with the planned electronic upgrades, will deliver an unprecedented number of deep and high-resolution observations. Wider fields of view are possible at the consequent cost of image reconstruction. Alternatives to commonly used applications in image processing have to be sought and tested. Advanced image-reconstruction methods are critical to meet the data requirements needed for operational purposes. Astrostatistics and astroinformatics techniques are employed. Evidence is given that these interdisciplinary fields of study, applied to synthesis imaging, meet the Big Data challenges and have the potential to enable new scientific discoveries in radio astronomy and astrophysics. Full article
8 pages, 614 KiB  
Proceeding Paper
SuperNest: Accelerated Nested Sampling Applied to Astrophysics and Cosmology
by Aleksandr Petrosyan and Will Handley
Phys. Sci. Forum 2022, 5(1), 51; https://doi.org/10.3390/psf2022005051 - 8 Mar 2023
Cited by 2 | Viewed by 1150
Abstract
We present a method for improving both the performance and the accuracy of nested sampling. Building on previous work, we show that posterior repartitioning may be used to reduce the time nested sampling spends compressing from prior to posterior if a suitable “proposal” distribution is supplied. We showcase this on a cosmological example with a Gaussian posterior, and release the code as an LGPL-licensed, extensible Python package, supernest. Full article
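The repartitioning identity behind this is simple to state: choose a proposal π̃ close to the posterior and rescale the likelihood so that the product, and hence the evidence and posterior, remain unchanged (standard posterior repartitioning):

```latex
\tilde{\pi}(\theta)\,\tilde{\mathcal{L}}(\theta) = \pi(\theta)\,\mathcal{L}(\theta),
\qquad
\tilde{\mathcal{L}}(\theta) = \mathcal{L}(\theta)\,\frac{\pi(\theta)}{\tilde{\pi}(\theta)} .
```

Nested sampling run on the pair (π̃, L̃) then starts much closer to the posterior, which is the source of the claimed acceleration.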
8 pages, 856 KiB  
Proceeding Paper
Bayesian Statistics Approach to Imaging of Aperture Synthesis Data: RESOLVE Meets ALMA
by Łukasz Tychoniec, Fabrizia Guglielmetti, Philipp Arras, Torsten Enßlin and Eric Villard
Phys. Sci. Forum 2022, 5(1), 52; https://doi.org/10.3390/psf2022005052 - 15 Mar 2023
Viewed by 1166
Abstract
The Atacama Large Millimeter/submillimeter Array (ALMA) is currently revolutionizing observational astrophysics. The aperture synthesis technique provides angular resolution otherwise unachievable with a conventional single-aperture telescope. However, recovering the image from inherently undersampled data is a challenging task. The clean algorithm has proven successful and reliable and is commonly used for imaging interferometric observations. It is not, however, free of limitations. The point-source assumption, central to clean, is not optimal for the extended structures of molecular gas recovered by ALMA. Additionally, negative fluxes recovered with clean are not physical. This motivates the search for alternatives better suited to specific scientific cases. We present recent developments in imaging ALMA data using Bayesian inference techniques, namely the resolve algorithm. This algorithm, based on information field theory, has already been successfully applied to imaging Very Large Array data. We compare the capability of clean and resolve to recover a known sky signal passed through a simulator of ALMA observations, and we investigate the problem with a set of actual ALMA observations. Full article
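For context, aperture synthesis measures visibilities, i.e., Fourier components of the sky brightness, on an incomplete set of (u,v) points (standard form, ignoring the primary beam and w-term):

```latex
V(u,v) = \iint I(l,m)\, e^{-2\pi i (u l + v m)}\, dl\, dm .
```

Imaging amounts to inverting this undersampled relation, which is precisely where the assumptions of clean and resolve diverge.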
8 pages, 284 KiB  
Proceeding Paper
A Foliation by Deformed Probability Simplexes for Transition of α-Parameters
by Keiko Uohashi
Phys. Sci. Forum 2022, 5(1), 53; https://doi.org/10.3390/psf2022005053 - 28 Mar 2023
Cited by 1 | Viewed by 912
Abstract
This study considers dualistic structures of the probability simplex from the information geometry perspective. We investigate a foliation by deformed probability simplexes for the transition of α-parameters, not for a fixed α-parameter. We also describe the properties of extended divergences on the foliation when different α-parameters are defined on each of the various leaves. Full article