Correction published on 21 March 2019, see Entropy 2019, 21(3), 307.
Open Access Article

Using the Data Agreement Criterion to Rank Experts’ Beliefs

1 Department of Methods and Statistics, Utrecht University, 3584 CH Utrecht, The Netherlands
2 ProfitWise International, 1054 HV Amsterdam, The Netherlands
3 Optentia Research Focus Area, North-West University, Vanderbijlpark 1900, South Africa
* Author to whom correspondence should be addressed.
Entropy 2018, 20(8), 592; https://doi.org/10.3390/e20080592
Received: 30 May 2018 / Revised: 7 August 2018 / Accepted: 7 August 2018 / Published: 9 August 2018
(This article belongs to the Special Issue Bayesian Inference and Information Theory)
Abstract: Experts’ beliefs embody a present state of knowledge. It is desirable to take this knowledge into account when making decisions. However, ranking experts based on the merit of their beliefs is a difficult task. In this paper, we show how experts can be ranked based on their knowledge and their level of (un)certainty. By letting experts specify their knowledge in the form of a probability distribution, we can assess how accurately they can predict new data, and how appropriate their level of (un)certainty is. The expert’s specified probability distribution can be seen as a prior in a Bayesian statistical setting. We evaluate these priors by extending an existing prior-data (dis)agreement measure, the Data Agreement Criterion, and compare this approach to using Bayes factors to assess prior specification. We compare experts with each other and with the data to evaluate their appropriateness. Using this method, new research questions can be asked and answered, for instance: Which expert predicts the new data best? Is there agreement between my experts and the data? Which expert’s representation is more valid or useful? Can we reach convergence between expert judgement and data? We provide an empirical example that ranks (regional) directors of a large financial institution based on their predictions of turnover.
Keywords: Bayes; Bayes factor; decision making; expert judgement; Kullback–Leibler divergence; prior-data (dis)agreement; ranking
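To give a concrete sense of the approach described in the abstract, the sketch below (not the authors' code) scores hypothetical expert priors for a normal mean with the Data Agreement Criterion: each expert's prior is compared against the posterior via Kullback–Leibler divergence and normalized by the divergence of a vague benchmark prior, with values above 1 indicating prior-data disagreement. All names, distributions, and data here are illustrative assumptions, and the posterior is computed under a simplified known-variance model rather than the paper's full setup.

```python
# Minimal DAC sketch: DAC_j = KL(posterior || expert_prior_j)
#                             / KL(posterior || benchmark_prior).
# Assumes a normal mean with known variance; all values are illustrative.
import numpy as np
from scipy import stats
from scipy.integrate import quad

def kl_divergence(p, q):
    # KL(p || q) for two frozen scipy densities, integrated numerically
    # over p's effective support (both pdfs are strictly positive there).
    lo, hi = p.ppf(1e-10), p.ppf(1.0 - 1e-10)
    integrand = lambda x: p.pdf(x) * (np.log(p.pdf(x)) - np.log(q.pdf(x)))
    return quad(integrand, lo, hi)[0]

# Observed data, e.g., turnover figures (simulated here for illustration).
rng = np.random.default_rng(1)
data = rng.normal(loc=10.0, scale=2.0, size=50)
sigma = 2.0  # assumed known

# Posterior for the mean under a (near-)flat benchmark prior.
post = stats.norm(loc=data.mean(), scale=sigma / np.sqrt(len(data)))

# Benchmark prior and hypothetical expert priors: each expert supplies a
# best guess (location) and an uncertainty (scale).
benchmark = stats.norm(loc=0.0, scale=100.0)
experts = {
    "expert_A": stats.norm(loc=10.5, scale=1.0),  # accurate, calibrated
    "expert_B": stats.norm(loc=10.0, scale=5.0),  # accurate, very uncertain
    "expert_C": stats.norm(loc=14.0, scale=0.5),  # off-target, overconfident
}

kl_benchmark = kl_divergence(post, benchmark)
for name, prior in experts.items():
    dac = kl_divergence(post, prior) / kl_benchmark
    verdict = "prior-data disagreement" if dac > 1.0 else "prior-data agreement"
    print(f"{name}: DAC = {dac:.2f} ({verdict})")
```

Under these assumptions, the accurate and calibrated expert obtains the lowest DAC and ranks first, the uncertain expert is penalized for vagueness, and the overconfident, off-target expert exceeds 1 and is flagged as disagreeing with the data.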
MDPI and ACS Style

Veen, D.; Stoel, D.; Schalken, N.; Mulder, K.; Van de Schoot, R. Using the Data Agreement Criterion to Rank Experts’ Beliefs. Entropy 2018, 20, 592.

Note that from the first issue of 2016, MDPI journals use article numbers instead of page numbers.
