An Adaption of the Jaynes Decision Algorithm

Str. Piscului 5, bl. 81, sc. A, ap. 1, sector 4, Bucharest, Romania
Entropy 2007, 9(1), 27-29; https://doi.org/10.3390/e9010027
Submission received: 7 September 2006 / Accepted: 23 January 2007 / Published: 31 January 2007

Abstract

There are two types of decisions: given the estimated state of affairs, one decides to change oneself in a certain way (that is best suited for the given conditions); given what one is, one decides to change the state of affairs in a certain way (that is best suited for what one wants for oneself). Jaynes' approach to decision theory accounts only for the first type of decisions, the case when one is just an observer of the external world and the decision doesn't change the world. However, many decisions involve the wish to transform the external environment. To account for this we need to add an additional step in Jaynes' proposed algorithm.

Jaynes [1] proposes the following algorithm for “finding the optimal decision of any kind”:
(1)
Enumerate the possible states of nature θj, discrete or continuous, as the case might be.
(2)
Assign prior probabilities (θj | X) which maximize the entropy subject to whatever prior information X you have.
(3)
Digest any additional evidence E by application of Bayes' theorem, thus obtaining the posterior probabilities (θj | EX).
(4)
Enumerate the possible decisions Di.
(5)
Specify the loss function L(Di, θj) that tells what you want to accomplish.
(6)
Make that decision Di which minimizes the expected loss:
⟨L⟩i = Σj L(Di, θj) (θj | EX)
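Steps (1)–(6) can be sketched numerically for the discrete case. All numbers below are hypothetical; the posterior is assumed to have already been obtained via steps (1)–(3):

```python
import numpy as np

# Hypothetical example: two states of nature theta_0, theta_1
# and three candidate decisions D_0, D_1, D_2.

# Posterior (theta_j | E X), assumed already computed from a maximum
# entropy prior and Bayes' theorem (steps 1-3).
posterior = np.array([0.7, 0.3])

# Step (5): loss matrix L(D_i, theta_j); rows = decisions, columns = states.
loss = np.array([
    [0.0, 10.0],   # D_0: ideal if theta_0 holds, costly if theta_1 holds
    [2.0,  2.0],   # D_1: a hedge, mediocre either way
    [8.0,  0.0],   # D_2: the mirror image of D_0
])

# Step (6): expected loss <L>_i = sum_j L(D_i, theta_j) (theta_j | E X),
# then pick the decision that minimizes it.
expected_loss = loss @ posterior
best = int(np.argmin(expected_loss))
# expected losses: 3.0, 2.0, 5.6 -> the hedge D_1 is chosen
```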
However, in spite of the claim, this algorithm is not general – it applies only to the case when the decision does not change the state of nature, θj. But, the aim of a decision might be exactly to change the state of nature – we usually want something because we are not satisfied with what we have.
The loss function has to define the value of the loss due to decision Di for the case when the state of nature happens to be θk after the decision has been enforced and its consequences have unfolded. Jaynes used a loss function which defined the value of the loss due to decision Di for the case when the state of nature happened to be θj when the decision was made.
Therefore, in general, we have to add an additional step, between (4) and (5), one which takes into consideration the possible consequences of decisions, i.e. that describes the transition from θj to θk.
Let Pj be the probability that the state of nature is θj before any decision is made:
Pj = (θj | EX)
and Qkj(Di) the probability that the state of nature is θk after a decision has been enforced and its consequences have unfolded, given that the initial state was θj. Qkj(Di) is a function of Di and of the initial state θj – this function gives the probability that the final state is θk if one takes the decision Di and the initial state happened to be θj:
Qkj(Di) = (θk | Di θj)
So, in the final step one has to take the decision that minimizes the following expected loss:
⟨L⟩i = Σj,k L(Di, θk) (θk | Di θj) (θj | EX)
(there is no summation over i)
The terms Pj are static terms which describe our probabilistic knowledge about the state of nature before the decision. The terms Qkj are dynamic terms which describe our probabilistic knowledge about the consequences of each decision.
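The extended rule combines the static terms Pj with the dynamic terms Qkj. A minimal numerical sketch, with hypothetical states, decisions, transition probabilities, and losses:

```python
import numpy as np

# Hypothetical example: two states of nature and two decisions.
posterior = np.array([0.7, 0.3])          # P_j = (theta_j | E X)

# Q[i, k, j] = (theta_k | D_i theta_j): probability that, once decision
# D_i has been enforced, the initial state theta_j ends up as theta_k.
# Each column of Q[i] sums to 1.
Q = np.array([
    [[1.0, 0.0],                          # D_0 leaves nature untouched
     [0.0, 1.0]],
    [[0.2, 0.1],                          # D_1 drives the world toward theta_1
     [0.8, 0.9]],
])

# The loss is now defined on the *final* state: L(D_i, theta_k).
loss = np.array([
    [0.0, 10.0],
    [4.0,  1.0],
])

# <L>_i = sum_{j,k} L(D_i, theta_k) (theta_k | D_i theta_j) (theta_j | E X)
expected_loss = np.einsum('ik,ikj,j->i', loss, Q, posterior)
best = int(np.argmin(expected_loss))
# expected losses: 3.0 for D_0, 1.51 for D_1 -> the world-changing D_1 wins
```

Here D_1 is preferred precisely because it transforms the state of nature toward the cheaper final state θ1, which the original formula cannot express.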
In case decision Di does not affect the state of nature we have:
(θk | Di θj) = δkj
This is the special case Jaynes described.
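The reduction can be checked directly with hypothetical numbers: when every transition matrix is the identity, i.e. (θk | Di θj) = δkj, the extended expected loss collapses to Jaynes' original formula:

```python
import numpy as np

posterior = np.array([0.7, 0.3])          # (theta_j | E X)
loss = np.array([[0.0, 10.0],
                 [2.0,  2.0]])            # L(D_i, theta_k), hypothetical values

# (theta_k | D_i theta_j) = delta_kj for every decision: identity matrices.
Q = np.stack([np.eye(2)] * len(loss))

extended = np.einsum('ik,ikj,j->i', loss, Q, posterior)
original = loss @ posterior               # sum_j L(D_i, theta_j) (theta_j | E X)

print(np.allclose(extended, original))    # True: the two rules agree
```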

References and Notes

1. Jaynes, E.T. Probability Theory with Applications in Science and Engineering; Chapter 13: “Introduction to Decision Theory”.

Tarko, V. An Adaption of the Jaynes Decision Algorithm. Entropy 2007, 9, 27-29. https://doi.org/10.3390/e9010027
