

Table of Contents

Proceedings, 2019, MaxEnt 2019

The 39th International Workshop on Bayesian Inference and Maximum Entropy Methods in Science and Engineering

Garching, Germany | 30 June–5 July 2019

Volume Editors: Udo von Toussaint, Roland Preuss


Cover Story: This volume of Proceedings gathers papers presented at MaxEnt 2019, the 39th International Workshop [...]

Editorial


Open Access Editorial
Bayesian Inference and Maximum Entropy Methods in Science and Engineering—MaxEnt 2019
Proceedings 2019, 33(1), 8; https://doi.org/10.3390/proceedings2019033008 - 22 Nov 2019
Abstract
As key building blocks for modern data processing and analysis methods—ranging from AI, ML and UQ to model comparison, density estimation and parameter estimation—Bayesian inference and entropic concepts are at the center of this rapidly growing research area. [...]

Research


Open Access Proceedings
Bayesian Approach with Entropy Prior for Open Systems
Proceedings 2019, 33(1), 1; https://doi.org/10.3390/proceedings2019033001 - 12 Nov 2019
Abstract
The Bayesian Maximum a Posteriori (MAP) approach is discussed in the context of solving the image reconstruction problem in nuclear medicine: positron emission tomography (PET) and single-photon emission computed tomography (SPECT). Two standard probabilistic forms, Gibbs and entropy prior probabilities, are analyzed. It is shown that both the entropy-based and Gibbs priors in their standard formulations result in global regularization, in which a single parameter controls the solution. Global regularization leads to over-smoothed images and loss of fine structures. Over-smoothing is undesirable, especially in oncology for the diagnosis of cancer lesions of small size and low activity. To overcome the over-smoothing problem and improve the resolution of images, a new approach based on local statistical regularization is developed.
Open Access Proceedings
Effects of Neuronal Noise on Neural Communication
Proceedings 2019, 33(1), 2; https://doi.org/10.3390/proceedings2019033002 - 19 Nov 2019
Abstract
In this work, we propose an approach to better understand the effects of neuronal noise on neural communication systems. Here, we extend the fundamental Hodgkin-Huxley (HH) model by adding synaptic couplings to represent the statistical dependencies among different neurons under the effect of additional noise. We estimate directional information-theoretic quantities, such as the Transfer Entropy (TE), to infer the couplings between neurons under different noise levels. Based on our computational simulations, we demonstrate that these nonlinear systems can behave in ways beyond our predictions and that TE is an ideal tool to extract such dependencies from data.
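The transfer entropy used in the abstract above can be illustrated with a small plug-in estimator for discrete time series. This is a generic sketch, not the authors' code; the function name and the binary spike-train setting are our own illustrative choices.

```python
import numpy as np
from collections import Counter

def transfer_entropy(x, y):
    """Plug-in estimate of TE(X -> Y) in bits for two discrete time series:
    TE = sum p(y_next, y_now, x_now) * log2[ p(y_next|y_now, x_now) / p(y_next|y_now) ]."""
    triples = Counter(zip(y[1:], y[:-1], x[:-1]))   # (y_next, y_now, x_now)
    pairs_yx = Counter(zip(y[:-1], x[:-1]))         # (y_now, x_now)
    pairs_yy = Counter(zip(y[1:], y[:-1]))          # (y_next, y_now)
    singles_y = Counter(y[:-1])                     # y_now
    n = len(y) - 1
    te = 0.0
    for (y1, y0, x0), c in triples.items():
        p_joint = c / n                              # empirical p(y_next, y_now, x_now)
        p_cond_full = c / pairs_yx[(y0, x0)]         # p(y_next | y_now, x_now)
        p_cond_self = pairs_yy[(y1, y0)] / singles_y[y0]  # p(y_next | y_now)
        te += p_joint * np.log2(p_cond_full / p_cond_self)
    return te
```

For a target that is a delayed copy of the source, the estimate approaches one bit; for independent series it stays near zero, which is the kind of directional coupling signature the paper extracts from noisy HH simulations.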
Open Access Proceedings
Electromagnetic Induction and Relativistic Double Layer: Mechanism for Ball Lightning Formation
Proceedings 2019, 33(1), 3; https://doi.org/10.3390/proceedings2019033003 - 21 Nov 2019
Abstract
What is the probability that ball lightning (BL) is a real phenomenon of nature? The answer depends on your prior information. If you are one of those lucky men who had a close encounter with a BL and escaped unscathed, your probability that it is real equals, of course, unity. On the other hand, if you are a theoretical physicist deeply involved in the problem of controlled thermonuclear fusion, your probability is likely to be zero. In this study, an attempt is made to raise the likelihood of the reality of the BL phenomenon for everyone, plasma physicists included. BL is conceived here as a highly structured formation of air, at roughly atmospheric pressure, with a set of nested sheaths, each of which is a double electrical layer with a voltage drop on the order of 100 kV.
Open Access Proceedings
Variational Bayesian Approach in Model-Based Iterative Reconstruction for 3D X-Ray Computed Tomography with Gauss-Markov-Potts Prior
Proceedings 2019, 33(1), 4; https://doi.org/10.3390/proceedings2019033004 - 21 Nov 2019
Abstract
3D X-ray Computed Tomography (CT) is used in medicine and in non-destructive testing (NDT) for industry to visualize the interior of a volume and assess its soundness. Compared to analytical reconstruction methods, model-based iterative reconstruction (MBIR) methods obtain high-quality reconstructions while reducing the dose. Nevertheless, the usual Maximum-A-Posteriori (MAP) estimation does not enable quantification of the uncertainties on the reconstruction, which can be useful for the inspection performed afterwards. Herein, we propose to estimate these uncertainties jointly with the reconstruction by computing the Posterior Mean (PM) using the Variational Bayesian Approach (VBA). We present our reconstruction algorithm using a Gauss-Markov-Potts prior model on the volume to reconstruct. For PM calculation in VBA, the uncertainties on the reconstruction are given by the variances of the posterior distribution of the volume. To estimate these variances in our algorithm, we need to compute the diagonal coefficients of the posterior covariance matrix. Since this matrix is not available in 3D X-ray CT, we propose an efficient solution to tackle this difficulty, based on the use of a matched pair of projector and backprojector. In our simulations using the Separable Footprint (SF) pair, we compare our PM estimation with MAP estimation. Perspectives for this work are applications to real data as well as improvement of our GPU implementation of the SF pair.
Open Access Proceedings
Gaussian Processes for Data Fulfilling Linear Differential Equations
Proceedings 2019, 33(1), 5; https://doi.org/10.3390/proceedings2019033005 - 21 Nov 2019
Cited by 1
Abstract
A method to reconstruct fields, source strengths and physical parameters based on Gaussian process regression is presented for the case where data are known to fulfill a given linear differential equation with localized sources. The approach is applicable to a wide range of data from physical measurements and numerical simulations. It is based on the well-known invariance of Gaussians under linear operators, in particular differentiation. Instead of using a generic covariance function to represent data from an unknown field, the space of possible covariance functions is restricted to allow only Gaussian random fields that fulfill the homogeneous differential equation. The resulting tailored kernel functions lead to more reliable regression compared to using a generic kernel and make some hyperparameters directly interpretable. For differential equations representing laws of physics, such a choice limits realizations of random fields to physically possible solutions. Source terms are added by superposition and their strength is estimated in a probabilistic fashion, together with possibly unknown hyperparameters with physical meaning in the differential operator.
Open Access Proceedings
2D Deconvolution Using Adaptive Kernel
Proceedings 2019, 33(1), 6; https://doi.org/10.3390/proceedings2019033006 - 21 Nov 2019
Abstract
An analysis tool using an adaptive kernel to solve an ill-posed inverse problem on a 2D model space is introduced. It is applicable to linear and non-linear forward models, for example in tomography and image reconstruction. While an optimisation based on a Gaussian approximation is possible, it becomes intractable for more than a few hundred kernel functions, because the determinant of the Hessian of the system has to be evaluated. The SVD typically used for 1D problems fails with increasing problem size. Alternatively, stochastic trace estimation can be used, giving a reasonable approximation. An alternative to searching for the MAP solution is to integrate using Markov chain Monte Carlo, without the need to determine the determinant of the Hessian. This also allows treating problems where a linear approximation is not justified.
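The stochastic trace estimation mentioned above is commonly done with Hutchinson's estimator, which needs only matrix-vector products rather than the full Hessian. The following is a generic sketch of that idea (not the authors' code); extending it to log-determinants would additionally require, e.g., a Lanczos approximation of the matrix logarithm.

```python
import numpy as np

def hutchinson_trace(matvec, dim, n_probes=500, rng=None):
    """Estimate tr(A) using only products z -> A z.
    For Rademacher probes z, E[z^T A z] = tr(A)."""
    rng = rng or np.random.default_rng(0)
    total = 0.0
    for _ in range(n_probes):
        z = rng.choice([-1.0, 1.0], size=dim)   # Rademacher probe vector
        total += z @ matvec(z)
    return total / n_probes
```

Because only `matvec` is needed, the estimator scales to Hessians far too large to factorize, which is exactly the regime where the SVD approach in the abstract breaks down.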
Open Access Proceedings
Using Entropy to Forecast Bitcoin’s Daily Conditional Value at Risk
Proceedings 2019, 33(1), 7; https://doi.org/10.3390/proceedings2019033007 - 21 Nov 2019
Abstract
Conditional value at risk (CVaR), or expected shortfall, is a risk measure for investments according to Rockafellar and Uryasev. Yamai and Yoshiba define CVaR as the conditional expectation of loss given that the loss is beyond the value at risk (VaR) level. The VaR is a risk measure that represents how much an investment might lose during usual market conditions with a given probability in a time interval. In particular, Rockafellar and Uryasev show that CVaR is superior to VaR in applications related to investment portfolio optimization. On the other hand, the Shannon entropy has been used as an uncertainty measure in investments and, in particular, to forecast Bitcoin's daily VaR. In this paper, we estimate the entropy of the intraday distribution of Bitcoin's logreturns through symbolic time series analysis (STSA) and forecast Bitcoin's daily CVaR using the estimated entropy. Using a logistic regression model based on CVaR, we find that the entropy is positively correlated with the likelihood of extreme values of Bitcoin's daily logreturns, and that using the entropy to forecast the next day's CVaR performs better than the naive use of the historical CVaR.
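The "historical CVaR" benchmark referred to above is the empirical version of the VaR/CVaR definitions in the abstract. A minimal sketch, with an illustrative function name of our own:

```python
import numpy as np

def historical_var_cvar(returns, alpha=0.05):
    """Empirical VaR and CVaR (expected shortfall) at level alpha.
    Losses are the negatives of returns; VaR is the loss exceeded with
    probability alpha, and CVaR is the mean loss beyond the VaR level."""
    losses = -np.asarray(returns, dtype=float)
    var = np.quantile(losses, 1 - alpha)
    cvar = losses[losses >= var].mean()
    return var, cvar
```

By construction CVaR is never below VaR, which is the sense in which it captures tail severity rather than just a tail threshold.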
Open Access Proceedings
TI-Stan: Adaptively Annealed Thermodynamic Integration with HMC
Proceedings 2019, 33(1), 9; https://doi.org/10.3390/proceedings2019033009 - 22 Nov 2019
Abstract
We present a novel implementation of the adaptively annealed thermodynamic integration technique using Hamiltonian Monte Carlo (HMC). Thermodynamic integration with importance sampling and adaptive annealing is an especially useful method for estimating model evidence for problems that use physics-based mathematical models. Because it is based on importance sampling, this method requires an efficient way to refresh the ensemble of samples. Existing successful implementations use binary slice sampling on the Hilbert curve to accomplish this task. This implementation works well if the model has few parameters or if it can be broken into separate parts with identical parameter priors that can be refreshed separately. However, for models that are not separable and have many parameters, a different method for refreshing the samples is needed. HMC, in the form of the MC-Stan package, is effective for jointly refreshing the ensemble under a high-dimensional model. MC-Stan uses automatic differentiation to compute the gradients of the likelihood that HMC requires in about the same amount of time as it computes the likelihood function itself, easing the programming burden compared to implementations of HMC that require explicitly specified gradient functions. We present a description of the overall TI-Stan procedure and results for representative example problems.
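The thermodynamic integration identity underlying TI-Stan, log Z = ∫₀¹ E_β[log L] dβ over an inverse-temperature ladder, can be sanity-checked on a one-dimensional conjugate Gaussian toy problem where both the tempered expectations and the exact evidence are available in closed form. This sketch is unrelated to the TI-Stan code itself; all values are illustrative.

```python
import numpy as np

sigma2, tau2, y = 1.0, 4.0, 1.5        # likelihood variance, prior variance, single datum
betas = np.linspace(0.0, 1.0, 1001)    # inverse-temperature ladder

# E_beta[log L] under the tempered posterior N(m_b, v_b), known analytically here.
e_logL = []
for b in betas:
    prec = 1.0 / tau2 + b / sigma2     # tempered posterior precision
    v = 1.0 / prec
    m = (b * y / sigma2) / prec
    e_logL.append(-0.5 * np.log(2 * np.pi * sigma2) - ((y - m) ** 2 + v) / (2 * sigma2))
e_logL = np.array(e_logL)

# Trapezoidal rule over the ladder approximates the thermodynamic integral.
log_Z_ti = np.sum(0.5 * (e_logL[1:] + e_logL[:-1]) * np.diff(betas))

# Exact evidence for this model: y ~ N(0, sigma2 + tau2).
log_Z_exact = -0.5 * np.log(2 * np.pi * (sigma2 + tau2)) - y ** 2 / (2 * (sigma2 + tau2))
```

In TI-Stan the closed-form tempered expectations above are replaced by HMC-refreshed ensemble averages at each β, with the ladder spacing chosen adaptively.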
Open Access Proceedings
Entropic Dynamics for Learning in Neural Networks and the Renormalization Group
Proceedings 2019, 33(1), 10; https://doi.org/10.3390/proceedings2019033010 - 25 Nov 2019
Abstract
We study the dynamics of information processing in the continuous depth limit of deep feed-forward Neural Networks (NN) and find that it can be described in language similar to the Renormalization Group (RG). The association of concepts to patterns by NN is analogous to the identification of the few variables that characterize the thermodynamic state obtained by the RG from microstates. We encode the information about the weights of a NN in a Maxent family of distributions. The location hyper-parameters represent the weights estimates. Bayesian learning of new examples determines new constraints on the generators of the family, yielding a new pdf; in the ensuing entropic dynamics of learning, hyper-parameters change along the gradient of the evidence. For a feed-forward architecture, the evidence can be written recursively from the evidence up to the previous layer, convoluted with an aggregation kernel. The continuum limit leads to a diffusion-like PDE analogous to Wilson's RG, but with an aggregation kernel that depends on the weights of the NN, different from those that integrate out ultraviolet degrees of freedom. Approximations to the evidence can be obtained from solutions of the RG equation. Its derivatives with respect to the hyper-parameters generate examples of Entropic Dynamics in Neural Networks Architectures (EDNNA) learning algorithms. For simple architectures, these algorithms can be shown to yield optimal generalization in student-teacher scenarios.
Open Access Proceedings
Learning Model Discrepancy of an Electric Motor with Bayesian Inference
Proceedings 2019, 33(1), 11; https://doi.org/10.3390/proceedings2019033011 - 25 Nov 2019
Abstract
Uncertainty Quantification (UQ) is highly requested in computational modeling and simulation, especially in an industrial context. With the continuous evolution of modern complex systems, demands on the quality and reliability of simulation models increase. A main challenge is that the considered computational models are rarely able to represent the true physics perfectly and demonstrate a discrepancy compared to measurement data. Further, accurate knowledge of the considered model parameters is usually not available; e.g., fluctuations in manufacturing processes of hardware components or noise in sensors introduce uncertainties which must be quantified in an appropriate way. Mathematically, such UQ tasks are posed as inverse problems, requiring efficient methods to solve them. This work investigates the influence of model discrepancies on the calibration of physical model parameters and considers a Bayesian inference framework that includes an attempt to correct for model discrepancy. A polynomial expansion is used to approximate and learn the model discrepancy. This work adds a discussion and specification of a guideline on how to choose the complexity of the model discrepancy term based on the available data. Application to an electric motor model with synthetic measurements illustrates the importance and promising perspective of the method.
Open Access Proceedings
Haphazard Intentional Sampling Techniques in Network Design of Monitoring Stations
Proceedings 2019, 33(1), 12; https://doi.org/10.3390/proceedings2019033012 - 27 Nov 2019
Abstract
In empirical science, random sampling is the gold standard to ensure unbiased, impartial, or fair results, as it works as a technological barrier designed to prevent spurious communication or illegitimate interference between parties in the application of interest. However, the chance of at least one covariate showing a “significant difference” between two treatment groups increases exponentially with the number of covariates. In 2012, Morgan and Rubin proposed a coherent approach to solve this problem based on rerandomization, ensuring that the final allocation obtained is balanced, but with a computational cost that is exponential in the number of covariates. Haphazard Intentional Sampling is a statistical technique that combines intentional sampling, using goal optimization techniques, with random perturbations. On one hand, it has all the benefits of standard randomization; on the other hand, it avoids exponentially large (and costly) sample sizes. In this work, we compare the haphazard and rerandomization methods in a case study on the re-engineering of the network of measurement stations for atmospheric pollutants. In comparison with rerandomization, the haphazard method provided groups with better balance and consistently more powerful permutation tests.
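The rerandomization baseline of Morgan and Rubin can be sketched as redrawing random allocations until covariate balance passes a threshold; its cost comes from the acceptance probability shrinking as covariates are added, which is what the haphazard method avoids. An illustrative toy, not the production method compared in the paper:

```python
import numpy as np

def rerandomize(X, threshold=0.1, max_tries=100_000, rng=None):
    """Redraw a 50/50 treatment allocation for covariate matrix X (n x p)
    until the norm of the covariate mean difference is below threshold."""
    rng = rng or np.random.default_rng(0)
    n = len(X)
    for _ in range(max_tries):
        assign = rng.permutation(n) < n // 2       # boolean treatment indicator
        diff = X[assign].mean(axis=0) - X[~assign].mean(axis=0)
        if np.linalg.norm(diff) < threshold:       # balance criterion met
            return assign
    raise RuntimeError("no sufficiently balanced allocation found")
```

Haphazard intentional sampling instead optimizes the balance criterion directly and adds a small random perturbation, so it keeps the acceptance step from dominating the cost.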
Open Access Proceedings
An Entropic Dynamics Approach to Geometrodynamics
Proceedings 2019, 33(1), 13; https://doi.org/10.3390/proceedings2019033013 - 27 Nov 2019
Abstract
In the Entropic Dynamics (ED) framework, quantum theory is derived as an application of entropic methods of inference. The physics is introduced through appropriate choices of variables and of constraints that codify the relevant physical information. In previous work, a manifestly covariant ED of quantum scalar fields in a fixed background spacetime was developed. Manifest relativistic covariance was achieved by imposing constraints in the form of Poisson brackets and of initial conditions to be satisfied by a set of local Hamiltonian generators. Our approach succeeded in extending to the quantum domain the classical framework that originated with Dirac and was later developed by Teitelboim and Kuchar. In the present work, the ED of quantum fields is extended further by allowing the geometry of spacetime to fully partake in the dynamics. The result is a first-principles ED model that in one limit reproduces quantum mechanics and in another limit reproduces classical general relativity. Our model shares some formal features with the so-called “semi-classical” approach to gravity.
Open Access Proceedings
The Nested_fit Data Analysis Program
Proceedings 2019, 33(1), 14; https://doi.org/10.3390/proceedings2019033014 - 28 Nov 2019
Abstract
We present here Nested_fit, a Bayesian data analysis code developed for investigations of atomic spectra and other physical data. It is based on the nested sampling algorithm, with the implementation of an upgraded lawn mower robot method for finding new live points. For a given data set and a chosen model, the program provides the Bayesian evidence, for the comparison of different hypotheses/models, and the probability distributions of the different parameters. A large database of spectral profiles is already available (Gaussian, Lorentz, Voigt, log-normal, etc.) and additional ones can easily be added. It is written in Fortran, for optimized parallel computation, and is accompanied by a Python library for visualization of the results.
Open Access Proceedings
The Information Geometry of Space-Time
Proceedings 2019, 33(1), 15; https://doi.org/10.3390/proceedings2019033015 - 28 Nov 2019
Abstract
The method of maximum entropy is used to model curved physical space in terms of points defined with a finite resolution. Such a blurred space is automatically endowed with a metric given by information geometry. The corresponding space-time is such that the geometry of any embedded spacelike surface is given by its information geometry. The dynamics of blurred space, its geometrodynamics, is constructed by requiring that as space undergoes the deformations associated with evolution in local time, it sweeps a four-dimensional space-time. This reproduces Einstein’s equations for vacuum gravity. We conclude with brief comments on some of the peculiar properties of blurred space: There is a minimum length and blurred points have a finite volume. There is a relativistic “blur dilation”. The volume of space is a measure of its entropy.
Open Access Proceedings
Interaction between Model Based Signal and Image Processing, Machine Learning and Artificial Intelligence
Proceedings 2019, 33(1), 16; https://doi.org/10.3390/proceedings2019033016 - 28 Nov 2019
Abstract
Signal and image processing have always been among the main tools in many areas, in particular in medical and biomedical applications. Nowadays, there is a great number of toolboxes, general-purpose and very specialized, in which classical techniques are implemented and can be used: all the transformation-based methods (Fourier, wavelets, ...) as well as model-based and iterative regularization methods. Statistical methods have also shown their success in some areas where parametric models are available. Bayesian inference based methods have had great success, in particular when the data are noisy, uncertain, incomplete (missing values) or contain outliers, and where there is a need to quantify uncertainties. In some applications, nowadays, we have more and more data. To use these “Big Data” to extract more knowledge, Machine Learning and Artificial Intelligence tools have shown success and have become mandatory. However, even if in many domains of Machine Learning, such as classification and clustering, these methods have shown success, their use in real scientific problems is limited. The main reasons are twofold: first, the users of these tools cannot explain the reasons why they are successful and why they are not; second, in general, these tools cannot quantify the remaining uncertainties. Model-based and Bayesian inference approaches have been very successful in linear inverse problems. However, adjusting the hyperparameters is complex and the cost of the computation is high. Convolutional Neural Networks (CNN) and Deep Learning (DL) tools can be useful for pushing these limits further. On the other side, model-based methods can be helpful for selecting the structure of CNN and DL, which is crucial to ML success. In this work, I first provide an overview and then a survey of the aforementioned methods and explore the possible interactions between them.
Open Access Proceedings
Auditable Blockchain Randomization Tool
Proceedings 2019, 33(1), 17; https://doi.org/10.3390/proceedings2019033017 - 02 Dec 2019
Abstract
Randomization is an integral part of well-designed statistical trials, and is also a required procedure in legal systems. Implementation of honest, unbiased, understandable, secure, traceable, auditable and collusion-resistant randomization procedures is a matter of great legal, social and political importance. Given the juridical and social importance of randomization, it is important to develop procedures in full compliance with the following desiderata: (a) statistical soundness and computational efficiency; (b) procedural, cryptographical and computational security; (c) complete auditability and traceability; (d) any attempt by participating parties or coalitions to spuriously influence the procedure should be either unsuccessful or detected; (e) open-source programming; (f) multiple hardware platform and operating system implementation; (g) user friendliness and transparency; (h) flexibility and adaptability for the needs and requirements of multiple application areas (for example, clinical trials, selection of juries or judges in legal proceedings, and draft lotteries). This paper presents a simple and easy-to-implement randomization protocol that assures, in a formal mathematical setting, full compliance with the aforementioned desiderata for randomization procedures.
Open Access Proceedings
A Sequential Marginal Likelihood Approximation Using Stochastic Gradients
Proceedings 2019, 33(1), 18; https://doi.org/10.3390/proceedings2019033018 - 03 Dec 2019
Abstract
Existing algorithms like nested sampling and annealed importance sampling are able to produce accurate estimates of the marginal likelihood of a model, but tend to scale poorly to large data sets. This is because these algorithms need to recalculate the log-likelihood at each iteration by summing over the whole data set. Efficient scaling to large data sets requires that algorithms only visit small subsets (mini-batches) of data on each iteration. To this end, we estimate the marginal likelihood via a sequential decomposition into a product of predictive distributions p(y_n | y_{<n}). Predictive distributions can be approximated efficiently through Bayesian updating using stochastic gradient Hamiltonian Monte Carlo, which approximates likelihood gradients using mini-batches. Since each data point typically contains little information compared to the whole data set, the convergence to each successive posterior only requires a short burn-in phase. This approach can be viewed as a special case of sequential Monte Carlo (SMC) with a single particle, but differs from typical SMC methods in that it uses stochastic gradients. We illustrate how this approach scales favourably to large data sets with some simple models.
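The decomposition log p(y) = Σₙ log p(yₙ | y_{<n}) can be verified exactly on a conjugate Gaussian model, where each predictive distribution is available in closed form. The paper approximates these predictives with stochastic-gradient HMC; this sketch only checks the identity itself, with illustrative values throughout.

```python
import numpy as np

def norm_logpdf(x, mean, var):
    return -0.5 * np.log(2 * np.pi * var) - (x - mean) ** 2 / (2 * var)

sigma2, tau2 = 0.5, 2.0                 # noise variance; prior variance on the mean
y = np.array([0.3, -1.2, 0.7, 2.1])     # model: y_n = theta + eps_n, theta ~ N(0, tau2)

# Sequential pass: accumulate log p(y_n | y_<n) and update the posterior over theta.
mu, s2, log_ml_seq = 0.0, tau2, 0.0
for yn in y:
    log_ml_seq += norm_logpdf(yn, mu, s2 + sigma2)   # closed-form predictive
    post_prec = 1.0 / s2 + 1.0 / sigma2              # conjugate posterior update
    mu = (mu / s2 + yn / sigma2) / post_prec
    s2 = 1.0 / post_prec

# Direct marginal likelihood: y ~ N(0, sigma2*I + tau2*11^T).
n = len(y)
cov = sigma2 * np.eye(n) + tau2 * np.ones((n, n))
sign, logdet = np.linalg.slogdet(cov)
log_ml_direct = -0.5 * (n * np.log(2 * np.pi) + logdet + y @ np.linalg.solve(cov, y))
```

The two quantities agree to machine precision, which is what licenses estimating the marginal likelihood one predictive term (and one mini-batch-refreshed posterior) at a time.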
Open Access Proceedings
Galilean and Hamiltonian Monte Carlo
Proceedings 2019, 33(1), 19; https://doi.org/10.3390/proceedings2019033019 - 05 Dec 2019
Abstract
Galilean Monte Carlo (GMC) allows exploration in a big space along systematic trajectories, thus evading the square-root inefficiency of independent steps. Galilean Monte Carlo has greater generality and power than its historical precursor Hamiltonian Monte Carlo because it discards second-order propagation under forces in favour of elementary force-free motion. Nested sampling (for which GMC was originally designed) has similar dominance over simulated annealing, which loses power by imposing an unnecessary thermal blurring over energy.
Open Access Proceedings
Information Geometry Conflicts With Independence
Proceedings 2019, 33(1), 20; https://doi.org/10.3390/proceedings2019033020 - 05 Dec 2019
Abstract
Information Geometry conflicts with the independence that is required for science and for rational inference generally.
Open Access Proceedings
Bayesian Reconstruction through Adaptive Image Notion
Proceedings 2019, 33(1), 21; https://doi.org/10.3390/proceedings2019033021 - 05 Dec 2019
Abstract
A stable and unique solution to the ill-posed inverse problem in radio synthesis image analysis is sought by employing Bayesian probability theory combined with a probabilistic two-component mixture model. The solution of the ill-posed inverse problem is given by inferring the values of model parameters defined to completely describe the physical system underlying the data. The analysed data are calibrated visibilities, Fourier transformed from the (u, v) plane to the image plane. Adaptive splines are explored to model the cumbersome background corrupted by the largely varying dirty beam in the image plane. The deconvolution of the dirty image from the dirty beam is tackled in probability space. Probability maps of source detection at several resolution values quantify the knowledge acquired about the celestial source distribution from a given state of information. The information available consists of data constraints, prior knowledge and uncertain information. The novel algorithm aims to provide an alternative imaging task for the Atacama Large Millimeter/Submillimeter Array (ALMA) in support of the widely used Common Astronomy Software Applications (CASA), enhancing its capabilities in source detection.
Open AccessProceedings
Intracellular Background Estimation for Quantitative Fluorescence Microscopy
Proceedings 2019, 33(1), 22; https://doi.org/10.3390/proceedings2019033022 - 06 Dec 2019
Viewed by 232
Abstract
Fluorescently tagged proteins are widely used for studies of intracellular organelle dynamics. Peripheral proteins are transiently associated with organelles, and a significant fraction of them is located in the cytosol. Image analysis of peripheral proteins poses the problem of properly discriminating the membrane-associated signal from the cytosolic one. In most cases, signals from organelles are compact in comparison with the diffuse signal from the cytosol. Commonly used methods for background estimation rely on the assumption that background and foreground signals are separable by spatial frequency filters. However, large non-stained organelles (e.g., nuclei) result in abrupt changes in the cytosol intensity and lead to errors in the background estimate. Such mistakes result in artifacts in the reconstructed foreground signal. We developed a new algorithm that estimates the background intensity in fluorescence microscopy images and does not produce artifacts on the borders of nuclei. Full article
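As the abstract notes, spatial-frequency-based background estimation is the common baseline that the new algorithm improves upon. A minimal sketch of such a baseline (not the authors' method) using a local percentile filter, with illustrative parameter choices:

```python
import numpy as np
from scipy.ndimage import percentile_filter, gaussian_filter

def estimate_background(img, size=31, pct=20, smooth=4.0):
    """Estimate a smooth background as a low local percentile of intensity,
    then blur it to suppress blockiness.  Compact bright organelles are
    largely ignored by the low percentile; large dark regions such as
    nuclei still bias this kind of filter, which is exactly the failure
    mode the paper's algorithm is designed to avoid."""
    bg = percentile_filter(img.astype(float), percentile=pct, size=size)
    return gaussian_filter(bg, smooth)

# synthetic image: flat cytosol plus one bright punctate organelle
rng = np.random.default_rng(0)
img = 100 + rng.normal(0, 2, (128, 128))
img[40:44, 40:44] += 200
bg = estimate_background(img)
fg = img - bg  # reconstructed foreground signal
print(round(float(bg.mean()), 1))
```

The foreground at the punctum survives the subtraction, while the flat cytosol is removed; a non-stained nucleus added to this image would show the boundary artifacts the abstract describes.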
Open AccessProceedings
A Complete Classification and Clustering Model to Account for Continuous and Categorical Data in Presence of Missing Values and Outliers
Proceedings 2019, 33(1), 23; https://doi.org/10.3390/proceedings2019033023 - 09 Dec 2019
Viewed by 187
Abstract
Classification and clustering problems are closely connected with pattern recognition, where many general algorithms have been developed and used in various fields. Depending on the complexity of the patterns in the data, classification and clustering procedures should take into consideration both continuous and categorical data, which can be partially missing or erroneous due to mismeasurements and human errors. However, most algorithms cannot handle missing data, and imputation methods are required to complete the data before such algorithms can be used. Hence, the main objective of this work is to define a classification and clustering framework that handles both outliers and missing values. Here, an approach based on mixture models is preferred, since mixture models provide a mathematically grounded, flexible and meaningful framework for the wide variety of classification and clustering requirements. More precisely, a scale mixture of Normal distributions is adapted to handle outliers and missing data for any type of data. Then, variational Bayesian inference is used to find approximate posterior distributions of the parameters and to provide a lower bound on the model log-evidence, which is used as a criterion for selecting the number of clusters. Finally, experiments are carried out to demonstrate the effectiveness of the proposed model through an application in Electronic Warfare. Full article
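The variational-Bayes ingredient of such a framework, fitting a mixture with a deliberately generous component budget and letting the variational lower bound prune the surplus, can be illustrated with scikit-learn's BayesianGaussianMixture. Unlike the proposed model, this off-the-shelf class handles neither missing values nor categorical data; all parameter choices below are illustrative:

```python
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

rng = np.random.default_rng(1)
# three well-separated 2-D Gaussian clusters
X = np.vstack([rng.normal(m, 0.3, (100, 2))
               for m in ([0, 0], [4, 0], [0, 4])])

# Variational inference with 8 components; a small concentration prior
# drives superfluous components toward zero weight, so the number of
# clusters is effectively selected by the evidence lower bound.
vb = BayesianGaussianMixture(n_components=8,
                             weight_concentration_prior=0.01,
                             max_iter=500, random_state=0).fit(X)
effective = int(np.sum(vb.weights_ > 0.05))
print(effective)
```

The surviving components (weight above a small cutoff) recover the true cluster count without a separate model-comparison loop.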
Open AccessProceedings
On the Diagnosis of Aortic Dissection with Impedance Cardiography: A Bayesian Feasibility Study Framework with Multi-Fidelity Simulation Data
Proceedings 2019, 33(1), 24; https://doi.org/10.3390/proceedings2019033024 - 09 Dec 2019
Cited by 1 | Viewed by 191
Abstract
Aortic dissection is a cardiovascular disease with a disconcertingly high mortality. When it comes to diagnosis, medical imaging techniques such as Computed Tomography, Magnetic Resonance Tomography or Ultrasound certainly do the job, but also have their shortcomings. Impedance cardiography is a standard method to monitor a patient's heart function and circulatory system by injecting electric currents and measuring voltage drops between electrode pairs attached to the human body. If such measurements could distinguish healthy from dissected aortas, one could improve clinical procedures. Experiments are quite difficult, and thus we investigate the feasibility with finite element simulations beforehand. In these simulations, we encounter uncertain input parameters, e.g., the electrical conductivity of blood. Inference on the state of the aorta from impedance measurements defines an inverse problem in which forward uncertainty propagation through the simulation with vanilla Monte Carlo demands a prohibitively large computational effort. To overcome this limitation, we combine two simulations: one with high fidelity and high computational cost, and another with low fidelity and correspondingly low cost. We use the inexpensive low-fidelity simulation to learn about the expensive high-fidelity simulation. It all boils down to a regression problem, and reduces the total computational cost after all. Full article
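In its simplest form, the low-fidelity/high-fidelity combination described above is a control-variate Monte Carlo estimator: a few expensive runs anchor the mean, and many cheap correlated runs shrink the variance. A minimal sketch with toy analytic stand-ins for the two simulations (the paper itself uses finite element models):

```python
import numpy as np

rng = np.random.default_rng(2)

def high_fidelity(x):   # stand-in for the expensive FE simulation
    return np.sin(x) + 0.05 * x**2

def low_fidelity(x):    # cheap, strongly correlated surrogate
    return np.sin(x)

n_hi, n_lo = 50, 5000                    # few expensive, many cheap runs
x_hi = rng.normal(0, 1, n_hi)            # uncertain input, e.g. conductivity
x_lo = rng.normal(0, 1, n_lo)

y_hi, y_lo_paired = high_fidelity(x_hi), low_fidelity(x_hi)
c = np.cov(y_hi, y_lo_paired)
alpha = c[0, 1] / c[1, 1]                # optimal control-variate weight

# correct the small high-fidelity sample mean with the cheap large sample
estimate = y_hi.mean() + alpha * (low_fidelity(x_lo).mean()
                                  - y_lo_paired.mean())
print(round(float(estimate), 3))
```

Because the surrogate absorbs most of the variance, the 50 expensive evaluations achieve an accuracy that plain Monte Carlo would need far more high-fidelity runs to match (the true mean here is 0.05).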
Open AccessProceedings
Quantum Trajectories in Entropic Dynamics
Proceedings 2019, 33(1), 25; https://doi.org/10.3390/proceedings2019033025 - 13 Dec 2019
Viewed by 134
Abstract
Entropic Dynamics (ED) is a framework for deriving the laws of physics from entropic inference. In an ED of particles, the central assumption is that particles have definite yet unknown positions. By appealing to certain symmetries, one can derive a quantum mechanics of scalar particles and particles with spin, in which the trajectories of the particles are given by a stochastic equation. This is much like Nelson's stochastic mechanics, which also assumes a fluctuating particle as the basis of the microstates. The uniqueness of ED as an entropic inference of particles allows one to continuously transition between fluctuating particles and the smooth trajectories assumed in Bohmian mechanics. In this work we explore the consequences of the ED framework by studying the trajectories of particles in the continuum between the stochastic and Bohmian limits in the context of a few physical examples, which include the double slit and Stern-Gerlach experiments. Full article
Open AccessProceedings
Estimating Flight Characteristics of Anomalous Unidentified Aerial Vehicles in the 2004 Nimitz Encounter
Proceedings 2019, 33(1), 26; https://doi.org/10.3390/proceedings2019033026 - 16 Dec 2019
Viewed by 228
Abstract
A number of Unidentified Aerial Phenomena (UAP) encountered by military, commercial, and civilian aircraft have been reported to be structured craft that exhibit ‘impossible’ flight characteristics. We consider the 2004 UAP encounters with the Nimitz Carrier Group off the coast of California, and estimate lower bounds on the accelerations exhibited by the craft during the observed maneuvers. Estimated accelerations range from 75 g to more than 5000 g with no observed air disturbance, no sonic booms, and no evidence of excessive heat commensurate with even the minimal estimated energies. In accordance with observations, the estimated parameters describing the behavior of these craft are both anomalous and surprising. The extreme estimated flight characteristics reveal that these observations are either fabricated or seriously in error, or that these craft exhibit technology far more advanced than any known craft on Earth. In the case of the Nimitz encounters the number and quality of witnesses, the variety of roles they played in the encounters, and the equipment used to track and record the craft favor the latter hypothesis that these are technologically advanced craft. Full article
Open AccessProceedings
Bayesian Determination of Parameters for Plasma-Wall Interactions
Proceedings 2019, 33(1), 27; https://doi.org/10.3390/proceedings2019033027 - 18 Dec 2019
Viewed by 164
Abstract
Within a Bayesian framework we propose a non-intrusive reduced-order spectral approach (polynomial chaos expansion) to assess the uncertainty of ion-solid interaction simulations. The method not only reduces the number of function evaluations but simultaneously provides a quantitative measure of which combinations of inputs have the most important impact on the result. It is applied to the ion-solid simulation program SDTrimSP with several uncertain and Gaussian distributed input parameters, i.e., the angle α, the projectile energy E0 and the surface binding energy Esb. In combination with recently acquired experimental data, the otherwise hardly accessible model parameter Esb can now be estimated. Full article
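A non-intrusive polynomial chaos expansion of the kind described is, in essence, a least-squares fit of orthogonal polynomials in the standardized inputs, whose squared coefficients then decompose the output variance into sensitivities. A sketch for a toy two-parameter model (a hypothetical stand-in for SDTrimSP, with simplified inputs labelled after E0 and Esb):

```python
import numpy as np
from numpy.polynomial.hermite_e import hermeval
from itertools import product
from math import factorial

def model(e0, esb):                 # hypothetical stand-in for SDTrimSP
    return 2.0 * e0 + 0.3 * esb + 0.5 * e0 * esb

rng = np.random.default_rng(3)
n = 400
xi = rng.normal(0, 1, (n, 2))       # standardized Gaussian inputs
y = model(xi[:, 0], xi[:, 1])

# multivariate probabilists' Hermite basis up to total degree 2
idx = [(i, j) for i, j in product(range(3), range(3)) if 0 < i + j <= 2]
def he(deg, x):                     # He_deg(x)
    return hermeval(x, [0] * deg + [1])
A = np.column_stack([np.ones(n)] +
                    [he(i, xi[:, 0]) * he(j, xi[:, 1]) for i, j in idx])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

# variance decomposition: E[He_i^2] = i! for probabilists' Hermite
norms = np.array([1.0] + [factorial(i) * factorial(j) for i, j in idx])
var_terms = coef**2 * norms
total_var = var_terms[1:].sum()
# first-order Sobol index of E0: terms depending on xi_1 alone
s_e0 = sum(v for (i, j), v in zip(idx, var_terms[1:])
           if i > 0 and j == 0) / total_var
print(round(float(s_e0), 2))
```

The 400 model evaluations replace the much larger sampling budget a direct Monte Carlo sensitivity analysis would need, which is the point of the non-intrusive spectral approach.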
Open AccessProceedings
Carpets Color and Pattern Detection Based on Their Images
Proceedings 2019, 33(1), 28; https://doi.org/10.3390/proceedings2019033028 - 24 Dec 2019
Viewed by 196
Abstract
In these days of fast-paced business, accurate automatic color and pattern detection is a necessity for carpet retailers. Many well-known color detection algorithms have significant shortcomings. Apart from the color itself, neighboring colors, style, and pattern also affect how humans perceive color. Most, if not all, color detection algorithms do not take this into account. Furthermore, the algorithm needed should be invariant to changes in brightness, size, and contrast of the image. In a previous experiment, the accuracy of the algorithm was half that of its human counterpart. Therefore, we propose a supervised approach to reduce detection errors. We used more than 37,000 images from a retailer's database as the learning set to train a Convolutional Neural Network (CNN, or ConvNet) architecture. Full article
Open AccessProceedings
A New Approach to the Formant Measuring Problem
Proceedings 2019, 33(1), 29; https://doi.org/10.3390/proceedings2019033029 - 25 Dec 2019
Viewed by 206
Abstract
Formants are characteristic frequency components in human speech that are caused by resonances in the vocal tract during speech production. They are of primary concern in acoustic phonetics and speech recognition. Despite this, making accurate measurements of the formants, which we dub “the formant measurement problem” for convenience, is not yet considered fully resolved. One particular shortcoming is the lack of error bars on estimates of the formant frequencies. As a first step towards remedying this, we propose a new approach to the formant measurement problem in the particular case of steady-state vowels—a case which occurs quite abundantly in natural speech. The approach is to look at the formant measurement problem from the viewpoint of Bayesian spectrum analysis. We develop a pitch-synchronous linear model for steady-state vowels and apply it to the open-mid front unrounded vowel [ɛ] observed in a real speech utterance. Full article
Open AccessProceedings
Determination of the Cervical Vertebra Maturation Degree from Lateral Radiography
Proceedings 2019, 33(1), 30; https://doi.org/10.3390/proceedings2019033030 - 14 Jan 2020
Viewed by 211
Abstract
Many environmental and genetic conditions may modify jaw growth. In orthodontics, the right treatment timing is crucial. This timing is a function of the Cervical Vertebra Maturation (CVM) degree; thus, determining the CVM is important, and in orthodontics lateral X-ray radiography is used to determine it. Many classical methods require expertise and time to identify the relevant features. Nowadays, Machine Learning (ML) and Artificial Intelligence (AI) tools are used for many medical and biological image processing, clustering and classification tasks. This paper reports on the development of a Deep Learning (DL) method to determine, directly from the images, the degree of maturation of the CVM, classified into six degrees. Using 300 such images for training, 200 for evaluation and 100 for testing, we obtained 90% accuracy. The proposed model and method are validated by cross-validation. The implemented software is ready for use by orthodontists. Full article
Open AccessProceedings
On the Estimation of Mutual Information
Proceedings 2019, 33(1), 31; https://doi.org/10.3390/proceedings2019033031 - 15 Jan 2020
Viewed by 164
Abstract
In this paper we focus on the estimation of mutual information from finite samples (X × Y). The main concern with estimates of mutual information (MI) is their robustness under the class of transformations for which it remains invariant: i.e., type I (coordinate transformations), type III (marginalizations) and special cases of type IV (embeddings, products). Estimators which fail to meet these standards are not robust in their general applicability. Since most machine learning tasks employ transformations which belong to the classes referenced in part I, the mutual information can tell us which transformations are optimal. There are several classes of estimation methods in the literature, such as non-parametric estimators like the one developed by Kraskov et al. and its improved versions. These estimators are extremely useful, since they rely only on the geometry of the underlying sample and circumvent estimating the probability distribution itself. We explore the robustness of this family of estimators in the context of our design criteria. Full article
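The Kraskov et al. estimator mentioned above uses only nearest-neighbor distances, so a compact implementation is possible. A sketch of its first variant, including a check of the (approximate) invariance under a smooth coordinate transformation of one marginal:

```python
import numpy as np
from scipy.spatial import cKDTree
from scipy.special import digamma

def ksg_mi(x, y, k=4):
    """Kraskov-Stögbauer-Grassberger MI estimator (algorithm 1).
    Relies only on neighbor geometry, no explicit density estimate."""
    n = len(x)
    xy = np.column_stack([x, y])
    # distance to the k-th neighbor in the joint space (max-norm)
    eps = cKDTree(xy).query(xy, k + 1, p=np.inf)[0][:, -1]
    # marginal neighbor counts strictly inside that distance
    nx = np.array([np.sum(np.abs(x - x[i]) < eps[i]) - 1 for i in range(n)])
    ny = np.array([np.sum(np.abs(y - y[i]) < eps[i]) - 1 for i in range(n)])
    return digamma(k) + digamma(n) - np.mean(digamma(nx + 1)
                                             + digamma(ny + 1))

rng = np.random.default_rng(4)
n, rho = 1500, 0.8
x = rng.normal(size=n)
y = rho * x + np.sqrt(1 - rho**2) * rng.normal(size=n)
mi = ksg_mi(x, y)        # true MI for this pair is -0.5*log(1 - rho^2)
# robustness check: a smooth coordinate transformation of x
mi_t = ksg_mi(np.exp(x), y)
print(round(float(mi), 2), round(float(mi_t), 2))
```

The second estimate probes exactly the type-I (coordinate transformation) robustness criterion discussed in the abstract: the true MI is unchanged by the transformation, so any drift in the estimate is estimator error.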
Open AccessProceedings
Radiometric Scale Transfer Using Bayesian Model Selection
Proceedings 2019, 33(1), 32; https://doi.org/10.3390/proceedings2019033032 - 03 Feb 2020
Viewed by 181
Abstract
The key input quantity for climate modelling and weather forecasts is the solar beam irradiance, i.e., the primary amount of energy provided by the sun. Despite its importance, the absolute accuracy of the measurements is limited, which affects not only the modelling but also ground-truth tests of satellite observations. Here we focus on the problem of improving instrument calibration based on dedicated measurements. A Bayesian analysis reveals that the standard approach yields inferior results. An alternative method, based on a monomial-based choice of regression functions combined with model selection, is shown to yield superior estimates for a wide range of conditions. The approach is illustrated on selected data, and possible further enhancements are outlined. Full article
Open AccessArticle
Bayesian Identification of Dynamical Systems
Proceedings 2019, 33(1), 33; https://doi.org/10.3390/proceedings2019033033 - 12 Feb 2020
Viewed by 139
Abstract
Many inference problems relate to a dynamical system, as represented by dx/dt = f(x), where x ∈ ℝⁿ is the state vector and f is the (in general nonlinear) system function or model. Since the time of Newton, researchers have pondered the problem of system identification: how should the user accurately and efficiently identify the model f – including its functional family or parameter values – from discrete time-series data? For linear models, many methods are available, including linear regression, the Kalman filter and autoregressive moving averages. For nonlinear models, an assortment of machine learning tools have been developed in recent years, usually based on neural network methods, or various classification or order reduction schemes. The first group, while very useful, provide “black box” solutions which are not readily adaptable to new situations, while the second group necessarily involves sacrificing resolution to achieve order reduction. To address this problem, we propose the use of an inverse Bayesian method for system identification from time-series data. For a system represented by a set of basis functions, this is shown to be mathematically identical to Tikhonov regularization, albeit with a clear theoretical justification for the residual and regularization terms, respectively the negative logarithms of the likelihood and prior functions. This insight justifies the choice of regularization method, and can also be extended to access the full apparatus of the Bayesian inverse solution. Two Bayesian methods, based on the joint maximum a posteriori (JMAP) and variational Bayesian approximation (VBA), are demonstrated for the Lorenz equation system with added Gaussian noise, in comparison to the regularization method of least squares regression with thresholding (the SINDy algorithm).
The Bayesian methods are also used to estimate the variances of the inferred parameters, thereby giving the estimated model error, providing an important advantage of the Bayesian approach over traditional regularization methods. Full article
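The connection drawn above between MAP estimation and Tikhonov regularization can be made concrete with a ridge regression over a polynomial basis, followed by SINDy-style hard thresholding, for the Lorenz system. This is a minimal sketch of the baseline comparison method, not the authors' JMAP/VBA machinery:

```python
import numpy as np

# Lorenz right-hand side: the system to be identified
def lorenz(s, sigma=10.0, rho=28.0, beta=8 / 3):
    x, y, z = s
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

rng = np.random.default_rng(5)
# sample states and noisy derivative observations
S = rng.uniform(-20, 20, (500, 3))
dS = np.array([lorenz(s) for s in S]) + rng.normal(0, 0.1, (500, 3))

# candidate basis: polynomials up to degree 2
x, y, z = S.T
Theta = np.column_stack([np.ones(500), x, y, z,
                         x * x, y * y, z * z, x * y, x * z, y * z])

# Tikhonov-regularized least squares == MAP with a Gaussian prior;
# the penalty weight lam is the illustrative ratio of noise to prior variance
lam = 1e-3
W = np.linalg.solve(Theta.T @ Theta + lam * np.eye(10), Theta.T @ dS)
W[np.abs(W) < 0.5] = 0.0            # SINDy-style hard threshold
print(np.round(W[:, 0], 2))         # coefficients for dx/dt
```

With well-sampled states the surviving coefficients match the seven true Lorenz terms; what this sketch cannot deliver, and the Bayesian treatment can, is the posterior variance of each coefficient.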
Open AccessProceedings
The Spin Echo, Entropy, and Experimental Design
Proceedings 2019, 33(1), 34; https://doi.org/10.3390/proceedings2019033034 - 19 Feb 2020
Viewed by 116
Abstract
The spin echo experiment is an important tool in magnetic resonance for exploring the coupling of spin systems to their local environment. The strong couplings in a typical Electron Spin Resonance (ESR) experiment lead to rapid relaxation effects that put significant technical constraints on the kinds of time domain experiments that one can perform in ESR. Recent developments in high frequency ESR hardware have opened up new possibilities for utilizing phase-modulated or composite phase slice (CPS) pulses at 95 GHz and higher. In particular, we report preliminary results at 95 GHz on experiments performed with CPS pulses in studies of rapidly relaxing fluid state systems. In contemporary ESR, this has important consequences for the design of pulse sequences where, due to finite excitation bandwidths, contributions from the Hamiltonian dynamics and relaxation processes must be considered together in order to achieve a quantitative treatment of the effects of selective, finite bandwidth pulses on the spin system under study. The approach reported here is generic and may be expected to be of use for solid state and fluid systems. In particular we indicate how our approach may be extended to higher frequencies, e.g., 240 GHz. Full article