Special Issue "Bayesian Inference and Information Theory"

A special issue of Entropy (ISSN 1099-4300). This special issue belongs to the section "Information Theory, Probability and Statistics".

Deadline for manuscript submissions: closed (30 April 2019).

Special Issue Editors

Prof. Dr. Kevin H. Knuth
Guest Editor
Department of Physics, University at Albany, 1400 Washington Avenue, Albany, NY 12222, USA
Interests: entropy; probability theory; Bayesian; foundational issues; lattice theory; data analysis; maxent; machine learning; robotics; information theory; entropy-based experimental design
Dr. Brendon J. Brewer
Guest Editor
Department of Statistics, The University of Auckland, Private Bag 92019, Auckland 1142, New Zealand
Interests: Bayesian inference; Markov chain Monte Carlo; nested sampling; MaxEnt

Special Issue Information

Dear Colleagues,

In Bayesian inference, probabilities describe plausibility, or the degree to which one statement implies another. In a similar manner, the entropies of information theory describe relevance, or the degree to which resolving one question would resolve another. However, this latter understanding is relatively undeveloped and is not often used in practical Bayesian data analysis. In this Special Issue we invite contributions to the area of Bayesian inference and information theory. The following suggested subtopics are of particular interest:

- Foundations of Bayesian inference and information theory

- Applications of Bayesian inference involving well-motivated uses of information theoretic concepts

- Bayesian experimental design

- Maximum entropy and choice of prior distributions
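As a toy numerical illustration of the relevance reading of entropy described above (the joint distribution here is hypothetical), the mutual information between two binary questions measures the degree to which resolving one would resolve the other:

```python
import numpy as np

# Hypothetical joint distribution over two binary questions X and Y.
p_xy = np.array([[0.4, 0.1],
                 [0.1, 0.4]])
p_x = p_xy.sum(axis=1)
p_y = p_xy.sum(axis=0)

def entropy(p):
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

# I(X; Y) = H(X) + H(Y) - H(X, Y): how much resolving Y would resolve X.
mi = entropy(p_x) + entropy(p_y) - entropy(p_xy.ravel())
print(round(mi, 4))  # 0.2781 bits
```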

We look forward to receiving your contributions.

Prof. Dr. Kevin H. Knuth
Dr. Brendon J. Brewer
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, you can access the submission form. Manuscripts can be submitted until the deadline. All papers will be peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Entropy is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1800 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Bayesian inference
  • Information theory
  • Bayesian data analysis
  • Maximum entropy
  • Prior distributions
  • Kullback–Leibler divergence
  • Relevance

Published Papers (18 papers)


Research


Article
Sensor Control in Anti-Submarine Warfare—A Digital Twin and Random Finite Sets Based Approach
Entropy 2019, 21(8), 767; https://doi.org/10.3390/e21080767 - 06 Aug 2019
Cited by 6 | Viewed by 1702
Abstract
Since submarines have become a major threat to maritime security, there is an urgent need for more efficient methods of anti-submarine warfare (ASW). Digital twin theory is one of the most prominent information technologies and has become quite popular in recent years. The most influential change produced by the digital twin is the ability to enable real-time dynamic interactions between the simulation world and the real world. The digital twin can be regarded as a paradigm by means of which selected online measurements are dynamically assimilated into the simulation world, with the running simulation model guiding the real world adaptively in reverse. By closely combining digital twin theory and random finite sets (RFSs), a new framework for sensor control in ASW is proposed. Two key algorithms are proposed to support the digital twin-based framework. First, an RFS-based data-assimilation algorithm is proposed for online assimilation of the sequence of real-time measurements with detection uncertainty, data-association uncertainty, noise, and clutter. Second, the computation of the reward function using the results of the proposed data-assimilation algorithm is introduced to find the optimal control action. The results of three groups of experiments verify the feasibility and effectiveness of the proposed approach. Full article
(This article belongs to the Special Issue Bayesian Inference and Information Theory)

Article
Estimating the Mutual Information between Two Discrete, Asymmetric Variables with Limited Samples
Entropy 2019, 21(6), 623; https://doi.org/10.3390/e21060623 - 25 Jun 2019
Cited by 3 | Viewed by 2188
Abstract
Determining the strength of nonlinear, statistical dependencies between two variables is a crucial matter in many research fields. The established measure for quantifying such relations is the mutual information. However, estimating mutual information from limited samples is a challenging task. Since the mutual information is the difference of two entropies, the existing Bayesian estimators of entropy may be used to estimate information. This procedure, however, is still biased in the severely under-sampled regime. Here, we propose an alternative estimator that is applicable to those cases in which the marginal distribution of one of the two variables—the one with minimal entropy—is well sampled. The other variable, as well as the joint and conditional distributions, can be severely undersampled. We obtain a consistent estimator that presents very low bias, outperforming previous methods even when the sampled data contain few coincidences. As with other Bayesian estimators, our proposal focuses on the strength of the interaction between the two variables, without seeking to model the specific way in which they are related. A distinctive property of our method is that the main data statistic determining the amount of mutual information is the inhomogeneity of the conditional distribution of the low-entropy variable in those states in which the large-entropy variable registers coincidences. Full article
(This article belongs to the Special Issue Bayesian Inference and Information Theory)
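The under-sampling problem this abstract describes can be seen with a naive plug-in estimate (not the authors' Bayesian estimator): raw empirical frequencies systematically overestimate the information when samples are scarce. The alphabet sizes and sample count below are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def plugin_mi(x, y, kx, ky):
    # Empirical joint distribution from raw counts.
    joint = np.zeros((kx, ky))
    for xi, yi in zip(x, y):
        joint[xi, yi] += 1
    joint /= joint.sum()
    px = joint.sum(axis=1)
    py = joint.sum(axis=0)
    mi = 0.0
    for i in range(kx):
        for j in range(ky):
            if joint[i, j] > 0:
                mi += joint[i, j] * np.log2(joint[i, j] / (px[i] * py[j]))
    return mi

# X and Y are independent, so the true mutual information is 0 bits,
# yet the plug-in estimate from only 30 samples is typically well above 0.
x = rng.integers(0, 4, size=30)
y = rng.integers(0, 4, size=30)
print(round(plugin_mi(x, y, 4, 4), 3))
```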

Article
Universality of Logarithmic Loss in Fixed-Length Lossy Compression
Entropy 2019, 21(6), 580; https://doi.org/10.3390/e21060580 - 10 Jun 2019
Cited by 1 | Viewed by 1067
Abstract
We establish the universality of logarithmic loss over a finite alphabet as a distortion criterion in fixed-length lossy compression. For any fixed-length lossy-compression problem under an arbitrary distortion criterion, we show that there is an equivalent lossy-compression problem under logarithmic loss. The equivalence is strong in the sense that finding good schemes in the corresponding lossy-compression problem under logarithmic loss is essentially equivalent to finding good schemes in the original problem. This equivalence relation also provides an algebraic structure in the reconstruction alphabet, which allows us to use known techniques from the clustering literature. Furthermore, our result naturally suggests a new clustering algorithm for the categorical data-clustering problem. Full article
(This article belongs to the Special Issue Bayesian Inference and Information Theory)
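A minimal sketch of the logarithmic-loss distortion assumed throughout this line of work (the distribution q below is a hypothetical "soft" reconstruction over a three-letter alphabet):

```python
import numpy as np

# Under logarithmic loss, the reconstruction is a probability distribution q
# over the source alphabet, and reproducing symbol x costs -log2 q(x) bits.
def log_loss(x, q):
    return -np.log2(q[x])

q = np.array([0.7, 0.2, 0.1])  # soft reconstruction of one source symbol
print(round(log_loss(0, q), 4))  # 0.5146: a likely symbol is cheap
print(round(log_loss(2, q), 4))  # 3.3219: an unlikely symbol is expensive
```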
Article
Bayesian Inference for Acoustic Direction of Arrival Analysis Using Spherical Harmonics
Entropy 2019, 21(6), 579; https://doi.org/10.3390/e21060579 - 10 Jun 2019
Cited by 6 | Viewed by 2000
Abstract
This work applies two levels of inference within a Bayesian framework to estimate the directions of arrival (DoAs) of sound sources. The sensing modality is a spherical microphone array based on spherical harmonics beamforming. When estimating the DoA, the acoustic signals may contain one or multiple simultaneous sources. Using two levels of Bayesian inference, this work begins by estimating the correct number of sources via the higher level of inference, Bayesian model selection. It then estimates the directional information of each source via the lower level of inference, Bayesian parameter estimation. This work formulates signal models using spherical harmonic beamforming that encode the prior information on the sensor arrays in the form of analytical models with an unknown number of sound sources and their locations. Available information on differences between the model and the sound signals, as well as prior information on directions of arrival, is incorporated based on the principle of maximum entropy. Two and three simultaneous sound sources have been experimentally tested without prior information on the number of sources. Bayesian inference provides unambiguous estimation of the correct number of sources, followed by DoA estimation for each individual sound source. This paper presents the Bayesian formulation and analysis results to demonstrate the potential usefulness of model-based Bayesian inference for complex acoustic environments with potentially multiple simultaneous sources. Full article
(This article belongs to the Special Issue Bayesian Inference and Information Theory)

Article
Learning Coefficient of Vandermonde Matrix-Type Singularities in Model Selection
Entropy 2019, 21(6), 561; https://doi.org/10.3390/e21060561 - 04 Jun 2019
Viewed by 1915
Abstract
In recent years, selecting appropriate learning models has become more important with the increased need to analyze learning systems, and many model selection methods have been developed. The learning coefficient in Bayesian estimation, which serves to measure the learning efficiency in singular learning models, plays an important role in several information criteria. The learning coefficient in regular models is known to equal half the dimension of the parameter space, while that in singular models is smaller and varies across learning models. The learning coefficient is known mathematically as the log canonical threshold. In this paper, we provide a new rational blowing-up method for obtaining these coefficients. In the application to Vandermonde matrix-type singularities, we show the efficiency of the method. Full article
(This article belongs to the Special Issue Bayesian Inference and Information Theory)
Article
Discriminative Structure Learning of Bayesian Network Classifiers from Training Dataset and Testing Instance
Entropy 2019, 21(5), 489; https://doi.org/10.3390/e21050489 - 13 May 2019
Cited by 1 | Viewed by 1315
Abstract
Over recent decades, the rapid growth in data has made ever more urgent the quest for highly scalable Bayesian networks that have better classification performance and expressivity (that is, the capacity to describe dependence relationships between attributes in different situations). To reduce the search space of possible attribute orders, the k-dependence Bayesian classifier (KDB) simply applies mutual information to sort attributes. This sorting strategy is very efficient, but it neglects the conditional dependencies between attributes and is sub-optimal. In this paper, we propose a novel sorting strategy and extend KDB from a single restricted network to unrestricted ensemble networks, i.e., the unrestricted Bayesian classifier (UKDB), in terms of Markov blanket analysis and target learning. Target learning is a framework that takes each unlabeled testing instance P as a target and builds a specific Bayesian network classifier (BNC), BNC_P, to complement the classifier BNC_T learned from the training data T. UKDB respectively introduces UKDB_P and UKDB_T to flexibly describe the change in dependence relationships for different testing instances and the robust dependence relationships implicated in the training data. Both use UKDB as the base classifier, applying the same learning strategy while modeling different parts of the data space; thus, they are complementary in nature. Extensive experimental results on the Wisconsin breast cancer database as a case study and on 10 other datasets, involving classifiers with different structure complexities, such as naive Bayes (0-dependence), tree-augmented naive Bayes (1-dependence), and KDB (arbitrary k-dependence), demonstrate the effectiveness and robustness of the proposed approach. Full article
(This article belongs to the Special Issue Bayesian Inference and Information Theory)

Article
How the Choice of Distance Measure Influences the Detection of Prior-Data Conflict
Entropy 2019, 21(5), 446; https://doi.org/10.3390/e21050446 - 29 Apr 2019
Cited by 3 | Viewed by 1115
Abstract
The present paper contrasts two related criteria for the evaluation of prior-data conflict: the Data Agreement Criterion (DAC; Bousquet, 2008) and the criterion of Nott et al. (2016). One aspect that these criteria have in common is that they depend on a distance measure, of which dozens are available, but so far, only the Kullback–Leibler divergence has been used. We describe and compare both criteria to determine whether a different choice of distance measure might impact the results. By means of a simulation study, we investigate how the choice of a specific distance measure influences the detection of prior-data conflict. The DAC seems more susceptible to the choice of distance measure, while the criterion of Nott et al. seems to lead to reasonably comparable conclusions of prior-data conflict, regardless of the distance measure choice. We conclude with some practical suggestions for the user of the DAC and the criterion of Nott et al. Full article
(This article belongs to the Special Issue Bayesian Inference and Information Theory)
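For concreteness, the Kullback–Leibler divergence that both criteria have so far relied on has a closed form for two normal densities. The parameters below are hypothetical, and this is only the raw distance ingredient, not the DAC or the criterion of Nott et al. themselves:

```python
import math

def kl_normal(mu1, s1, mu2, s2):
    # KL( N(mu1, s1^2) || N(mu2, s2^2) ), in nats.
    return math.log(s2 / s1) + (s1**2 + (mu1 - mu2)**2) / (2 * s2**2) - 0.5

# A prior centred far from where the data concentrate diverges more strongly,
# which is the basic signal that prior-data conflict criteria build on.
print(kl_normal(0.0, 1.0, 0.0, 1.0))  # identical densities: 0.0
print(kl_normal(1.0, 1.0, 0.0, 1.0))  # 0.5
print(kl_normal(3.0, 1.0, 0.0, 1.0))  # 4.5
```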

Article
Bayesian Network Modelling of ATC Complexity Metrics for Future SESAR Demand and Capacity Balance Solutions
Entropy 2019, 21(4), 379; https://doi.org/10.3390/e21040379 - 08 Apr 2019
Cited by 2 | Viewed by 1713
Abstract
Demand & Capacity Management solutions are key SESAR (Single European Sky ATM Research) research projects to adapt future airspace to the expected high air traffic growth in a Trajectory Based Operations (TBO) environment. These solutions rely on processes, methods and metrics regarding the complexity assessment of traffic flows. However, current complexity methodologies and metrics do not properly take into account the impact of trajectory uncertainty on the quality of complexity predictions of air traffic demand. This paper proposes the development of several Bayesian network (BN) models to identify the impacts of TBO uncertainties on the quality of the predictions of complexity of air traffic demand for two particular Demand Capacity Balance (DCB) solutions developed by SESAR 2020, i.e., Dynamic Airspace Configuration (DAC) and Flight Centric Air Traffic Control (FCA). In total, seven BN models are elicited covering each concept at different time horizons. The models allow evaluating the influence of the “complexity generators” on the “complexity metrics”. Moreover, when the required level for the uncertainty of complexity is set, the networks allow identifying by how much the uncertainty of the input variables should improve. Full article
(This article belongs to the Special Issue Bayesian Inference and Information Theory)

Article
A Mesoscopic Traffic Data Assimilation Framework for Vehicle Density Estimation on Urban Traffic Networks Based on Particle Filters
Entropy 2019, 21(4), 358; https://doi.org/10.3390/e21040358 - 03 Apr 2019
Cited by 1 | Viewed by 1029
Abstract
Traffic conditions can be estimated more accurately using data assimilation techniques, since these methods combine an imperfect traffic simulation model with (partial) noisy measurement data. In this paper, we propose a data assimilation framework for vehicle density estimation on urban traffic networks. To compromise between computational efficiency and estimation accuracy, a mesoscopic traffic simulation model (we choose the platoon-based model) is employed in this framework. Vehicle passages from loop detectors are considered as the measurement data, which contain errors such as missed and false detections. Due to the nonlinear and non-Gaussian nature of the problem, particle filters are adopted to carry out the state estimation, since this method does not place any restrictions on the model dynamics or error assumptions. Simulation experiments are carried out to test the proposed data assimilation framework, and the results show that the proposed framework can provide good vehicle density estimates on relatively large urban traffic networks under moderate sensor quality. The sensitivity analysis proves that the proposed framework is robust to errors both in the model and in the measurements. Full article
(This article belongs to the Special Issue Bayesian Inference and Information Theory)

Article
PID Control as a Process of Active Inference with Linear Generative Models
Entropy 2019, 21(3), 257; https://doi.org/10.3390/e21030257 - 07 Mar 2019
Cited by 11 | Viewed by 3401
Abstract
In the past few decades, probabilistic interpretations of brain functions have become widespread in cognitive science and neuroscience. In particular, the free energy principle and active inference are increasingly popular theories of cognitive functions that claim to offer a unified understanding of life and cognition within a general mathematical framework derived from information and control theory, and statistical mechanics. However, we argue that if the active inference proposal is to be taken as a general process theory for biological systems, it is necessary to understand how it relates to existing control theoretical approaches routinely used to study and explain biological systems. For example, recently, PID (Proportional-Integral-Derivative) control has been shown to be implemented in simple molecular systems and is becoming a popular mechanistic explanation of behaviours such as chemotaxis in bacteria and amoebae, and robust adaptation in biochemical networks. In this work, we show how PID controllers can fit a more general theory of life and cognition under the principle of (variational) free energy minimisation when using approximate linear generative models of the world. This more general interpretation also provides a new perspective on traditional problems of PID controllers, such as parameter tuning and the need to balance the performance and robustness of a controller. We then show how these problems can be understood in terms of the optimisation of the precisions (inverse variances) modulating different prediction errors in the free energy functional. Full article
(This article belongs to the Special Issue Bayesian Inference and Information Theory)
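A textbook discrete PID loop shows the classical object that the paper reinterprets as active inference with a linear generative model. The gains, time step, and integrator plant below are arbitrary illustrative choices, not values from the paper:

```python
# One step of a standard discrete PID controller: state carries the
# accumulated integral and the previous error for the derivative term.
def pid_step(error, state, kp, ki, kd, dt):
    integral, prev_error = state
    integral += error * dt
    derivative = (error - prev_error) / dt
    u = kp * error + ki * integral + kd * derivative
    return u, (integral, error)

# Drive a simple integrator plant x' = u toward a setpoint of 1.0.
dt = 0.05
x, state = 0.0, (0.0, 0.0)
for _ in range(400):
    u, state = pid_step(1.0 - x, state, kp=2.0, ki=1.0, kd=0.1, dt=dt)
    x += u * dt
print(round(x, 3))
```

In the paper's reading, the three gains correspond to precisions weighting different prediction errors, which recasts tuning as precision optimisation.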

Article
Hidden Node Detection between Observable Nodes Based on Bayesian Clustering
Entropy 2019, 21(1), 32; https://doi.org/10.3390/e21010032 - 07 Jan 2019
Cited by 1 | Viewed by 1277
Abstract
Structure learning is one of the main concerns in studies of Bayesian networks. In the present paper, we consider networks consisting of both observable and hidden nodes, and propose a method to investigate the existence of a hidden node between observable nodes, where all nodes are discrete. This corresponds to the model selection problem between the networks with and without the middle hidden node. When the network includes a hidden node, it is known that there are singularities in the parameter space and the Fisher information matrix is not positive definite. Consequently, many of the conventional criteria for structure learning based on the Laplace approximation do not work. The proposed method is based on Bayesian clustering, and its asymptotic property justifies the result; the redundant labels are eliminated and the simplest structure is detected even if there are singularities. Full article
(This article belongs to the Special Issue Bayesian Inference and Information Theory)

Article
Application of Bayesian Networks and Information Theory to Estimate the Occurrence of Mid-Air Collisions Based on Accident Precursors
Entropy 2018, 20(12), 969; https://doi.org/10.3390/e20120969 - 14 Dec 2018
Cited by 10 | Viewed by 1494
Abstract
This paper combines Bayesian networks (BN) and information theory to model the likelihood of severe loss of separation (LOS) near accidents, which are considered mid-air collision (MAC) precursors. BN is used to analyze LOS contributing factors and the multi-dependent relationship of causal factors, while Information Theory is used to identify the LOS precursors that provide the most information. The combination of the two techniques allows us to use data on LOS causes and precursors to define warning scenarios that could forecast a major LOS with severity A or a near accident, and consequently the likelihood of a MAC. The methodology is illustrated with a case study that encompasses the analysis of LOS that have taken place within the Spanish airspace during a period of four years. Full article
(This article belongs to the Special Issue Bayesian Inference and Information Theory)

Article
Bayesian Inference in Auditing with Partial Prior Information Using Maximum Entropy Priors
Entropy 2018, 20(12), 919; https://doi.org/10.3390/e20120919 - 01 Dec 2018
Cited by 1 | Viewed by 1032
Abstract
Problems in statistical auditing are usually one-sided. In fact, the main interest for auditors is to determine the quantiles of the total amount of error, and then to compare these quantiles with a given materiality fixed by the auditor, so that the accounting statement can be accepted or rejected. Dollar unit sampling (DUS) is a useful procedure to collect sample information, whereby items are chosen with a probability proportional to book amounts and in which the relevant error amount distribution is the distribution of the taints weighted by the book value. The likelihood induced by DUS refers to a 201-variate parameter p, but the prior information concerns a subparameter θ, a linear function of p representing the total amount of error. This means that partial prior information must be processed. In this paper, two main proposals are made: (1) to modify the likelihood, to make it compatible with the prior information and thus obtain a Bayesian analysis for the hypotheses to be tested; (2) to use a maximum entropy prior to incorporate limited auditor information. To achieve these goals, we obtain a modified likelihood function inspired by the induced likelihood described by Zehna (1966) and then adapt Bayes' theorem to this likelihood in order to derive a posterior distribution for θ. This approach shows that the DUS methodology can be justified as a natural method of processing partial prior information in auditing and that a Bayesian analysis can be performed even when prior information is only available for a subparameter of the model. Finally, some numerical examples are presented. Full article
(This article belongs to the Special Issue Bayesian Inference and Information Theory)

Article
Efficient Heuristics for Structure Learning of k-Dependence Bayesian Classifier
Entropy 2018, 20(12), 897; https://doi.org/10.3390/e20120897 - 22 Nov 2018
Cited by 4 | Viewed by 1024
Abstract
The rapid growth in data makes the quest for highly scalable learners a popular one. To achieve a trade-off between structure complexity and classification accuracy, the k-dependence Bayesian classifier (KDB) can represent different numbers of interdependencies for different data sizes. In this paper, we propose two methods to improve the classification performance of KDB. First, we use minimal-redundancy-maximal-relevance analysis, which sorts the predictive features to identify redundant ones. Then, we propose an improved discriminative model selection to select an optimal sub-model by removing redundant features and arcs from the Bayesian network. Experimental results on 40 UCI datasets demonstrate that these two techniques are complementary and that the proposed algorithm achieves competitive classification performance and lower classification time than other state-of-the-art Bayesian network classifiers such as tree-augmented naive Bayes and averaged one-dependence estimators. Full article
(This article belongs to the Special Issue Bayesian Inference and Information Theory)

Article
Ranking the Impact of Different Tests on a Hypothesis in a Bayesian Network
Entropy 2018, 20(11), 856; https://doi.org/10.3390/e20110856 - 07 Nov 2018
Cited by 1 | Viewed by 1347
Abstract
Testing of evidence in criminal cases can be limited by temporal or financial constraints or by the fact that certain tests may be mutually exclusive, so choosing the tests that will have maximal impact on the final result is essential. In this paper, we assume that a main hypothesis, evidence for it and possible tests for existence of this evidence are represented in the form of a Bayesian network, and use three different methods to measure the impact of a test on the main hypothesis. We illustrate the methods by applying them to an actual digital crime case provided by the Hong Kong police. We conclude that the Kullback–Leibler divergence is the optimal method for selecting the tests with the highest impact. Full article
(This article belongs to the Special Issue Bayesian Inference and Information Theory)

Article
Using the Data Agreement Criterion to Rank Experts’ Beliefs
Entropy 2018, 20(8), 592; https://doi.org/10.3390/e20080592 - 09 Aug 2018
Cited by 7 | Viewed by 1700 | Correction
Abstract
Experts’ beliefs embody a present state of knowledge. It is desirable to take this knowledge into account when making decisions. However, ranking experts based on the merit of their beliefs is a difficult task. In this paper, we show how experts can be ranked based on their knowledge and their level of (un)certainty. By letting experts specify their knowledge in the form of a probability distribution, we can assess how accurately they can predict new data, and how appropriate their level of (un)certainty is. The expert’s specified probability distribution can be seen as a prior in a Bayesian statistical setting. We evaluate these priors by extending an existing prior-data (dis)agreement measure, the Data Agreement Criterion, and compare this approach to using Bayes factors to assess prior specification. We compare experts with each other and the data to evaluate their appropriateness. Using this method, new research questions can be asked and answered, for instance: Which expert predicts the new data best? Is there agreement between my experts and the data? Which experts’ representation is more valid or useful? Can we reach convergence between expert judgement and data? We provide an empirical example, ranking (regional) directors of a large financial institution based on their predictions of turnover. Full article
(This article belongs to the Special Issue Bayesian Inference and Information Theory)
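A stripped-down sketch of the ranking idea (hypothetical experts and data; the average log-density score below is a crude stand-in for the prior-predictive comparison the paper develops, not the DAC itself):

```python
import math

# Score each expert's prior belief (a normal density) by the average
# log-density it assigns to the observed data: accurate, appropriately
# confident beliefs score higher.
def avg_log_density(data, mu, sigma):
    return sum(
        -0.5 * math.log(2 * math.pi * sigma**2)
        - (x - mu) ** 2 / (2 * sigma**2)
        for x in data
    ) / len(data)

data = [9.8, 10.4, 10.1, 9.9]
score_a = avg_log_density(data, mu=10.0, sigma=0.5)  # well-calibrated expert
score_b = avg_log_density(data, mu=12.0, sigma=0.5)  # confident but off-target
print(score_a > score_b)  # True: expert A outranks expert B
```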

Article
A Definition of Conditional Probability with Non-Stochastic Information
Entropy 2018, 20(8), 572; https://doi.org/10.3390/e20080572 - 03 Aug 2018
Viewed by 1510
Abstract
The current definition of a conditional probability enables one to update probabilities only on the basis of stochastic information. This paper provides a definition of conditional probability with non-stochastic information. The definition is derived from a set of axioms, where the information is connected to the outcome of interest via a loss function. An illustration is presented. Full article
(This article belongs to the Special Issue Bayesian Inference and Information Theory)

Other


Correction
Correction: Veen, D.; Stoel, D.; Schalken, N.; Mulder, K.; Van de Schoot, R. Using the Data Agreement Criterion to Rank Experts’ Beliefs. Entropy 2018, 20, 592
Entropy 2019, 21(3), 307; https://doi.org/10.3390/e21030307 - 21 Mar 2019
Viewed by 1135
Abstract
Due to a coding error, the marginal likelihoods were not correctly calculated for the empirical example, and thus the Bayes factors following from these marginal likelihoods are incorrect. The corrections required occur in Section 3.2 and in two paragraphs of the discussion in which the results are referred to. The corrections have limited consequences for the paper, and the main conclusions hold. Additionally, typos in the equations and an error in the numbering of the equations are remedied. Full article
(This article belongs to the Special Issue Bayesian Inference and Information Theory)