Derivation and Application of the Subjective–Objective Probability Relationship from Entropy: The Entropy Decision Risk Model (EDRM)

Abstract: The uncertainty, or entropy, of an atom of an ideal gas being in a certain energy state mirrors the way people perceive uncertainty in the making of decisions, uncertainty that is related to unmeasurable subjective probability. It is well established that subjects evaluate risk decisions involving uncertain choices using subjective probability rather than objective, which is usually calculated using empirically derived decision weights, such as those described in Prospect Theory; however, an exact objective–subjective probability relationship can be derived from statistical mechanics and information theory using Kullback–Leibler entropy divergence. The resulting Entropy Decision Risk Model (EDRM) is based upon proximity or nearness to a state and is predictive rather than descriptive. A priori EDRM, without factors or corrections, accurately aligns with the results of prior decision making under uncertainty (DMUU) studies, including Prospect Theory and others. This research is a first step towards the broader effort of quantifying financial, programmatic, and safety risk decisions in fungible terms, which applies proximity (i.e., subjective probability) with power utility to evaluate choice preference of gains, losses, and mixtures of the two in terms of a new parameter referred to as Prospect. To facilitate evaluation of the EDRM against prior studies reported in terms of the percentage of subjects selecting a choice, the Percentage Evaluation Model (PEM) is introduced to convert choice value results into subject response percentages, thereby permitting direct comparison of a utility model for the first time.


Introduction
An executive is presented with an engineering risk analysis for a critical decision that involves a potential for loss of life for a failure mode that is highly unlikely and has no history of prior failure, a high-consequence, low-probability event that would take years and tens of millions of dollars to mitigate; however, the system under consideration itself is a safety system that provides mitigation for other Black Swan events, so its unavailability adds to risk in other interconnected areas. The executive chooses to accept the risk in spite of the grave prediction by the system's engineers. In another example, an individual chooses to buy insurance for their property but at the same time buys lottery tickets despite the overwhelming odds against success, seemingly a contradiction. In yet another case, a financial manager is presented with the results of a value at risk analysis from the company's risk management team for a transaction and chooses to go against their recommendation and make the trade based upon instinct. Such situations easily lead one to conclude that subjects appear irrational when it comes to making probabilistic choices; however, there is a clear pattern to these decisions.
These scenarios illustrate that people make decisions contrary to normative risk theories that quantify risk purely in economic or monetary terms, such as expected utility and the expected value rule, making quantification of risk in terms consistent with actual decision making elusive. At the choice level, this has been well studied as positive decision theory (i.e., Prospect Theory) and is replete with descriptive, but not predictive, models based upon various studies, most of which involve measurable objective probabilities and nominal, narrow ranges of values. This proven difference between how subjects should and do make decisions must be reconciled before risk can be universally quantified into monetary terms, as the risk value is based upon the perception of a decision maker.

A New Approach
The uncertainty, or entropy, of a single atom of an ideal gas being in a certain energy state mirrors the way people perceive uncertainty in the making of decisions, uncertainty that is related to unmeasurable subjective probability. The sense that the workings of the physical world are replicated in the making of choices has been, and continues to be, investigated by many great minds. One such luminary is John von Neumann, who formalized quantum mechanics and game theory as he sought resolution of the contradiction between the perceived macroscopic world and unmeasurable parameters in microscopic quantum mechanics [1]. It is this premise that provides the starting point for the present research, for the difference between the macro and microscopic views provides the relationship of objective and subjective probabilities that helps resolve the conflict between how decisions are supposed to be made and how people actually make them. The results of this new approach are profound. Without factors or corrections, the proposed model nearly perfectly predicts Tversky and Kahneman's Cumulative Prospect Theory results. This approach also addresses a nagging question about the true nature of the decision weighting factor, which has been stated to not be a probability. This research shows that the decision weighting factor is subjective probability and that it does not necessarily need to sum to 1 for a system, as is the case for objective probabilities.
A second outcome of this new approach, which supports validation of the first, is a method to directly compare the model results for choices and the subject percent responses for the first time. All of the research reviewed to date merely evaluates if the predicted choices match the actual, but is unable to evaluate the degree to which the model matches the data beyond binary comparison. The new percentage evaluation model also provides a measure of the relative difficulty of a decision between two or more choices.

Objectives
This paper takes the first step towards expressing the prospect of choices in terms consistent with positive decision theories, rather than with the standard expected value definition of risk [2][3][4]. As a result of articulating choices in terms of prospect of an outcome, rather than probability of success or failure, the Entropy Decision Risk Model (EDRM) is essentially a translation of probabilities between the positive and normative domains, as shown in Figure 1. This ability to translate between domains permits the expression of risk consistent with decision making and allows for risk estimates to be translated back into probabilities and values from prospects. It has been asserted that subjects without training do not intuitively understand probabilities [5][6][7], but the value of expected utility theory is well established as the foundation of economic and risk analysis, so reconciliation is required. Other related research has shown that these two systems (normative and positive) are explained by dual process theory's system 1 (intuitive thinking) and system 2 (deliberate thinking) [5,7]. This research suggests that the more complex the choice (e.g., multi-state and mixed gains/losses versus single-state gains), the better the agreement with a priori EDRM's uncorrected models, ostensibly owing to intuitive system 1 processes. This concept of aligning positive decision theories to system 1 and normative utility theories to system 2 appears consistent with recent work in the field [8]. The results of this research show that EDRM effectively translates between the two domains and consistently predicts subject results in terms of state subjective probabilities when provided objective probabilities, which sets aside the long-held contention that people do not understand probability. There are two groups of decision theories: positive and normative. 
Normative theories are those applied in standard economic decisions and are tied with deliberate choices (i.e., system 2). In contrast, positive theories counter the normative to address how subjects make choices, often involving intuition (system 1). In other words, normative theories are viewed as how people should make decisions, whereas positive theories address how people actually make decisions. The Entropy Decision Risk Model (EDRM) provides a translation between the two domains. Subsequent research will report on the use of EDRM to apply Expected Utility Theory in the positive domain.

Definitions
It is necessary to state new working definitions for terms used within the model consistent with their origin and application.
Relative Certainty: Equivalent to redundancy (information theory), given as one minus the relative entropy as a function of the state probability, and denoted by a lower-case letter consistent with the classical notation for objective probability (see Section 4.3 and Appendix A). The term relative certainty is more descriptive of its use herein than redundancy and is lexically consistent with its derivation from relative entropy.
Proximity: Subjective probability representing the nearness to a state and a function of the relative certainty. Proximity increases monotonically from 0 to 1 with relative certainty as nearness to a given state, with 0 implying no relation to the state and 1 the achievement of the state. Proximity and relative certainty are related through a natural-logarithm expression derived in Appendix A.
Prospect (T): The product of magnitude and proximity as a function of relative certainty; it is an extensive property. Prospect can also be seen as a weighted uncertainty of an outcome (see Section 4.6.4).
Risk: As stated in ISO 31000, "the effect of uncertainty on objectives," [9]. In the context of this definition, prospect is a relative measure of risk; the greater the prospect of a choice, the lower the risk of achieving the desired objective, whether it is avoiding loss or achieving a gain. The ISO definition of risk is not widely applied, as most risk analyses are performed using expected values in a probabilistic risk assessment [10].
Reasonable Decision: Selection of a choice which increases the prospect of attaining an objective or end state; that is, selection of the choice with the greatest prospect. To clarify terminology, this paper will use the term reasonable, versus rational, to draw a distinction from normative decision theories, like VNM utility. Highlighting this distinction, Charles Tapiero suggests an alternative rationality, and Dan Ariely similarly offers the concept of predictable irrationality, for choices by otherwise rational individuals that follow clear patterns which do not align with the results predicted by homo economicus (i.e., utility theory) [11][12][13]. It is interesting that all the literature reviewed appears consistent on this point and is careful not to redefine rationality in positivist terms; therefore, this research will treat the concept similarly.
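The Prospect and Reasonable Decision definitions above can be sketched in code. This is a minimal illustration, not the paper's implementation: the proximity mapping is passed in as a function argument because its exact form is derived in Appendix A, and the choice values are hypothetical.

```python
# Sketch of the Prospect definition: T = magnitude x proximity.
# The proximity function chi(p), EDRM's subjective-probability mapping,
# is deliberately left as an argument so the sketch stays agnostic
# about its exact form (derived in the paper's Appendix A).

def prospect(magnitude, proximity):
    """Prospect T of a single-state choice: an extensive property,
    the product of magnitude and proximity (subjective probability)."""
    return magnitude * proximity

def reasonable_choice(choices, chi):
    """A 'reasonable decision' selects the choice with the greatest
    prospect. `choices` maps labels to (magnitude, objective probability);
    `chi` converts objective probability to proximity."""
    return max(choices, key=lambda c: prospect(choices[c][0], chi(choices[c][1])))

# Illustration with the identity mapping chi(p) = p, which reduces
# prospect to the classical expected value of each (hypothetical) choice.
choices = {"sure gain": (450.0, 1.0), "gamble": (1000.0, 0.5)}
print(reasonable_choice(choices, chi=lambda p: p))  # prints "gamble"
```

Substituting a nonlinear proximity mapping for the identity function is what separates EDRM's "reasonable" selection from the expected value rule.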

Literature Review
Prospect Theory and Cumulative Prospect Theory provide the basic behavior theory evaluated for comparison in this research. In Prospect Theory: An Analysis of Decision Under Risk, Daniel Kahneman and Amos Tversky built upon the work of Markowitz and Allais to firmly establish a theory that addresses weaknesses in the venerable expected utility theory; their hypothetical decision weight curve is shown in Figure 2. Prospect Theory (PT) is based upon a critique of Daniel Bernoulli's 1738 wealth-based utility theory, highlighting its contradictions and weaknesses in explaining discrete choices under risk, which are based upon changes in wealth rather than final wealth [14]. Markowitz recognizes that subjects do not necessarily perceive gains and losses referenced to initial wealth and, in his landmark paper The Utility of Wealth, he refers to a neutral reference point, the point of inflection, as the "customary wealth" [15]. Lacking a reference point, Kahneman considers Bernoulli's model overly simple [5,16]. Prospect Theory's most important finding is that people are risk averse in the presence of gains and risk seeking in the presence of loss. Kahneman and Tversky approached PT in two domains, positive and negative; seemingly, a model which naturally accounts for both domains would be preferable.
Figure 2. PT decision weight. Contrary to expected utility theory, Prospect Theory (1979) empirically determined that subjects make decisions based upon a weighting factor rather than objective probability. Kahneman and Tversky provided this plot as a notional relationship [14].
Thirteen years after Prospect Theory, Cumulative Prospect Theory (CPT) was introduced as "a new version of prospect theory that incorporates the cumulative function and extends the theory to uncertain as well as risky prospects with any number of outcomes" [17]. The updated model holds for a number of phenomena that violate expected utility and traditional von Neumann–Morgenstern (VNM) rationality, including framing effects, nonlinear preferences, source dependence, risk seeking, and loss aversion. The CPT decision weighting factor shown in Figure 3 varies between 0 and 1, but the authors state that it is not a probability; however, this research will demonstrate that it is a probability, specifically the probability of being in a specific state. CPT is initially modeled as positive (gain) and negative (loss) cumulative weighting functions that are empirically developed and then fit using regression to yield the following relationships ([17], Equation (6)):
w⁺(p) = p^γ / (p^γ + (1 − p)^γ)^(1/γ),  w⁻(p) = p^δ / (p^δ + (1 − p)^δ)^(1/δ)
Tversky and Kahneman are careful to critique the limitations and concerns within their model. They acknowledge that it provides greater generality than Prospect Theory, but they also express reservation over the accuracy and sensitivity of the decision weights based upon the data. They also recognize the challenges of maintaining simplicity in an empirically derived model while striving for better fit [17]. Therefore, it is clear that other mathematical models which fit the data and are within the constraints of CPT would be considered valid. In PT and CPT, Kahneman and Tversky assume an exponential value function (power utility), where the exponent α is positive and less than or equal to 1, and the loss-aversion coefficient λ is positive and greater than or equal to 1 to account for losses looming greater than gains; however, this research assumes that loss aversion, while present, is a secondary effect and will set λ = 1 for all analyses (the validity of this assumption is proven in Section 6).
This initial assumption is important in establishing the idea that gains and losses are contiguous on the same scale, rather than treated separately as they are under PT. In its original form, this relationship allows for different power utility exponents for positive and negative values; however, because CPT and several subsequent studies assign the same value to the gain and loss exponent, we will do so here [17]. Much of the literature reviewed discusses positive decision theory in terms of rank order utility and first- and second-order stochastic dominance; however, because this research approaches the modeling of decisions from another perspective, consistency with prior research results will be considered sufficient for generally aligning with these principles. Future research to axiomatically analyze the model is intended. Similar to the model proposed by Uday Karmarkar [18], Richard Gonzalez and George Wu provided a descriptive model, based upon that suggested by Tversky and Kahneman in Equations (1) and (2), which is founded on the logit, or logarithm of the odds (log-odds), which is actually the negative derivative of two-state information theory entropy. This relation to entropy is not discussed in their paper, but conceptually makes the convergence of the models all the more supportive of the underlying approach taken in developing EDRM. Their linear-in-log-odds form is ln(w(p)/(1 − w(p))) = γ ln(p/(1 − p)) + τ [18,19]. Solving for w(p), they obtain w(p) = δp^γ / (δp^γ + (1 − p)^γ), where δ = e^τ. This model of w(p) differs slightly from Tversky and Kahneman but achieves similar results [19]. Also noteworthy is that this equation is nearly identical to that used earlier for the weighting function by John Quiggin in his paper, A theory of anticipated utility ([20], Equation (1)).
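For concreteness, the two families of weighting functions discussed above can be sketched as follows. This is an illustrative sketch: γ = 0.61 is Tversky and Kahneman's reported gain-side estimate, while the linear-in-log-odds parameters below are illustrative placeholders rather than Gonzalez and Wu's fitted values.

```python
# Sketch of two published probability-weighting functions: the CPT
# weighting function of Tversky and Kahneman (1992) and the
# linear-in-log-odds (logit) form discussed by Gonzalez and Wu.

def cpt_weight(p, gamma=0.61):
    """CPT decision weight w(p) = p^g / (p^g + (1-p)^g)^(1/g)."""
    num = p ** gamma
    return num / (num + (1.0 - p) ** gamma) ** (1.0 / gamma)

def llo_weight(p, gamma=0.44, delta=0.77):
    """Linear-in-log-odds weight: logit(w) = gamma*logit(p) + ln(delta),
    i.e., w(p) = delta*p^g / (delta*p^g + (1-p)^g).
    Parameter values here are illustrative placeholders."""
    num = delta * p ** gamma
    return num / (num + (1.0 - p) ** gamma)

# Both curves overweight small probabilities and underweight large ones:
print(cpt_weight(0.01))  # greater than 0.01
print(cpt_weight(0.99))  # less than 0.99
```

Note the characteristic inverse-S shape: both functions cross the diagonal once, consistent with the certainty effect described in Prospect Theory.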
Additionally, for comparison, R. Duncan Luce et al., in Utility of Gambling II, presented the entropy-modified expected utility model given by their Equations (7) and (8) combined, in which an entropy term scaled by a constant is added to the expected utility [21].
If that constant were suitably chosen, this relationship would be in the same form as Equation (A5), which approximates EDRM, lending additional credence to the approach taken by the present research.

Method
As an answer to the questions of predictive versus descriptive behavioral models and subject understanding of probabilities, two hypotheses are evaluated. Starting with the assumption that subjects understand choice in terms of subjective, rather than objective, probabilities (Hypothesis 2), a qualitative research methodology is used to synthesize philosophical and foundational works in the fields of risk, entropy, and DMUU to develop the predictive EDRM. Performance of the EDRM against numerous prior studies will be used as model validation. Specifically, the EDRM will be evaluated against results reported in six studies by Allais, Kahneman and Tversky, and Wu and Markle. None of the studies involved actual financial loss/reward to subjects, except a small subset of one study, making them consistent with the risk decisions made in bureaucratic organizations where personal consequence is limited [22]. In addition to comparing the binary choice results (matching: yes or no), where appropriate, calculated prospect values are translated into percentages representing the fraction of subjects selecting a choice using the Percentage Evaluation Model (PEM) for direct comparison with prior research results. When applicable, statistical analysis will be performed by evaluating the coefficient of determination (R²) or Spearman's rank correlation coefficient (rho) and through a design of experiments methodology using ANOVA at a standard 5% significance level as algorithms in R without transformations. Assumptions of independence and constant variance can be presumed unless stated otherwise; normality will be confirmed by use of the Shapiro–Wilk test at 5% significance. In a departure from most prior studies in this field, and as supported by Wakker and Zank [23], it is initially assumed that there is no difference between gains and losses other than the sign of the magnitude; any differences are considered as higher-order effects. A flowchart illustrating the present research is provided in Figure 4.
1 The factors used in the equations by Gonzalez and Wu are not those used in EDRM but are quoted in their original form for accuracy. Additionally, this relationship is nearly identical to that stated by Karmarkar.
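The model-versus-percentage comparisons described in the Method section rest on a few standard statistics. A minimal sketch with illustrative data in place of actual study results (the paper performs these analyses in R; Python is used here, and the Shapiro–Wilk residual-normality test is left to a statistics library such as scipy.stats.shapiro):

```python
# Illustrative computation of the coefficient of determination (R^2) and
# Spearman's rank correlation (rho) for model-predicted versus observed
# subject response percentages. All data below are made up for the sketch.

def r_squared(predicted, observed):
    """Coefficient of determination of observed values against predictions."""
    mean_obs = sum(observed) / len(observed)
    ss_res = sum((o - p) ** 2 for p, o in zip(predicted, observed))
    ss_tot = sum((o - mean_obs) ** 2 for o in observed)
    return 1.0 - ss_res / ss_tot

def spearman_rho(x, y):
    """Spearman's rho via the classic rank-difference formula (no ties)."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0] * len(v)
        for rank, i in enumerate(order):
            r[i] = rank + 1
        return r
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1.0 - 6.0 * d2 / (n * (n * n - 1))

predicted = [0.72, 0.55, 0.83, 0.40, 0.65]  # hypothetical model (PEM) percentages
observed  = [0.70, 0.58, 0.80, 0.45, 0.60]  # hypothetical subject percentages
print(r_squared(predicted, observed), spearman_rho(predicted, observed))
```

A high R² with rho near 1 is the kind of agreement the validation studies look for; binary choice matching alone cannot express this degree of fit.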

Derivation of EDRM: Theoretical Framework
The derivation of EDRM consists of two major sections: philosophical and mathematical. Because EDRM is derived from basic theory to predict results of subject choice behavior rather than presenting a descriptive a posteriori model that best fits the data, a firm philosophical foundation is required to establish EDRM using behavior theory, statistical mechanics/information theory, and probability theory.

Foundation of Utility Theory
Jeremy Bentham (1748-1832) introduced the notion of utility, describing it as follows: "By utility, is meant that property of any object to produce benefit, advantage, pleasure, good, or happiness, (all this in the present case comes to the same thing) or (what comes again to the same thing) to prevent the happening of mischief, pain, evil, or unhappiness to the party whose interest is concerned" [24]. The goal of the principle of utility is that people seek to maximize happiness (pleasure) and minimize unhappiness (pain) [25]. As pleasure and pain are scaled together, so too can gain and loss be considered as regions of the same measure. Stated slightly differently, Aristotle uses the phrase "pleasure or not without pleasure," which is understood to be a framework wherein the greater value goes to the certainty of gains (pleasure) or the uncertainty of loss (not without pleasure) ([26], 1098b23), which is also consistent with the certainty effect from Prospect Theory [14]. It follows that a reasonable decision is one in which the prospect of happiness or pleasure is greatest.
In chapter 4 of An Introduction to the Principles of Morals and Legislation, Bentham identifies four primary factors, or circumstances, which define the value of utility, or Greatest Happiness Principle: intensity or magnitude, duration, certainty or uncertainty, proximity or remoteness [24]. In the context of this research, the first, third, and fourth factors are of greatest interest; time as a factor will be considered later. This research defines proximity as the nearness to a given state within a choice, which is also subjective probability. Daniel Ellsberg recognized the same three factors in the evaluation of a choice: the payoff, the relative likelihood, and the third, "the nature of one's information concerning the relative likelihood of events," which is understood here as knowledge of the proximity or nearness to a state [27]. A basic economic prospect model can be inferred as magnitude times proximity as a function of certainty, an idea further supported by Peter Wakker's separation of risk aversion into factors of magnitude (marginal utility represented by power utility or expected utility) and proximity (cumulative probability transformation) [28].
Over the past 150 years, utility theory has been increasingly reduced to the seeking of monetary gain or economic satisfaction (ophelimity) which forms the fundamental disjointedness between how people should objectively make decisions and how they subjectively select among various choices. In their paper, Back to Bentham? Explorations in Expected Utility, Kahneman et al. draw the distinction between these two notions of utility as experienced utility, which aligns with Bentham and Mill, and decision utility or expected utility [29]. Kahneman's conclusion is especially important in justifying this present research because it leaves room for cautiously reintroducing classical experienced utility into the field of economic decision utility, specifically in consumer rationality [29]. Kahneman and Thaler further explore the difference between decision utility and hedonistic experienced utility in their paper, Anomalies: Utility Maximization and Experienced Utility [30].

Entropy
Arieh Ben-Naim describes three definitions of entropy with different origins that all provide agreeable results: Clausius' macro state definition (thermodynamic), Boltzmann's micro state definition (statistical mechanics), and Shannon's measure of information (SMI or information theory) [31]. Within this research, Boltzmann's statistical mechanics entropy for the case of a non-equilibrium ideal gas and Shannon's entropy will be used interchangeably in the context of choices, an action supported by Ben-Naim's derivation of their equivalence and writings of the physicist Edwin Jaynes [31,32]. The remaining entropy definition, thermodynamic, will be introduced to draw out the relationship between subjective probabilities associated with Boltzmann/SMI and objective probabilities associated with the thermodynamic view where all states are equiprobable in thermal equilibrium, as shown in Boltzmann's derivation [33].
The concept that entropy and uncertainty are synonymous, as concluded by Jaynes and Ben-Naim, is crucial to this research because human decision making is so strongly influenced by the presence of certainty and decisions lead to actions that locally create order out of disorder (e.g., build a house or write a book) [4,14,32]. Therefore, for the purposes of this research, the idea that people have the ability to conceive ideas and then decide to put them into action in the information or physical realms is reflected in the fact that, in general, people choose certainty over uncertainty for gains and the opposite for losses.
Information theory (SMI) and statistical mechanics hold the answer to quantifying decision uncertainty, with roots in Ludwig Boltzmann's foundational paper on statistical mechanics [33]. John von Neumann, who also established Game Theory with Oskar Morgenstern, further tied together Boltzmann's work and the work of other physicists, such as J. Willard Gibbs, in his Mathematical Foundations of Quantum Mechanics. von Neumann identified that there exists a "thermodynamic value of knowledge which consists of an alternative of two cases": ln 2, the maximum entropy of a binary choice [1,34]. Claude Shannon, who reportedly consulted von Neumann while at Princeton, established information theory based upon this concept of entropy [35]. Shannon also defines two terms important for this research: relative entropy (entropy divided by the maximum entropy) and redundancy, which is one minus relative entropy.
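Shannon's two definitions, relative entropy and redundancy, can be made concrete for von Neumann's binary case. A minimal sketch (entropy in nats):

```python
# Sketch of Shannon's definitions used by EDRM: entropy of a two-state
# choice, its maximum (von Neumann's ln 2), relative entropy (entropy over
# maximum entropy), and redundancy (one minus relative entropy), which the
# paper terms relative certainty.
import math

def binary_entropy(p):
    """Shannon entropy (in nats) of a two-state choice with probability p."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log(p) - (1.0 - p) * math.log(1.0 - p)

MAX_ENTROPY = math.log(2)  # maximum entropy of a binary choice

def redundancy(p):
    """Relative certainty: one minus the relative entropy H(p)/H_max."""
    return 1.0 - binary_entropy(p) / MAX_ENTROPY

print(redundancy(0.5))  # 0.0: maximum uncertainty, no relative certainty
print(redundancy(1.0))  # 1.0: a certain state
```

Redundancy thus runs from 0 at maximum uncertainty to 1 at certainty, which is what motivates its reuse here as "relative certainty."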
Shannon considers information theory in terms of states and choices, forming a natural application to decision theory; however, only several of the numerous papers reviewed in the course of this research attempted to apply information theory to risk decisions. Nawrocki and Harding in their paper, State-value weighted entropy as a measure of investment risk, make use of entropy's extrinsic properties to weight the uncertainty of choices by their economic value or utility [36]. Yang and Qiu in Normalized Expected Utility-Entropy Measure of Risk, apply an additive entropy term in an attempt to model Prospect Theory and introduce the concept of redundancy, but do not subsequently make application [37]. Roman Belavkin, in Asymmetry of Risk and Value of Information, discusses many topics and even suggests application of entropy to Prospect Theory [38] and, in an earlier work, The Use of Entropy for Analysis and Control of Cognitive Models, Belavkin suggests the use of redundancy in estimating system accumulated information [39]. Even Tversky discussed information theory entropy as a measure of decision uncertainty in his paper, On the Optimal Number of Alternatives at a Choice Point, but this was not explored further [40]. Of all the relevant literature reviewed, none go so far as to directly apply redundancy as a measure of certainty to a decision model.
More recently, several papers work to apply various forms of entropy to decision making by individuals and organizations [41,42], but one is particularly interesting to the present research. In A Unified Theory of Human Judgement and Decision-Making under Uncertainty, Raffaele Pisano and Sandro Sozzo draw the conclusion that quantum theory (i.e., statistical mechanics) is representative of human cognition and that quantum state probability is subjective, which supports this research approach [43]. However, the authors avoid directly applying entropy and assume that the Born rule (or law) of quantum mechanics defines the relationship, with subjective probability as the square root of the objective probability. This research shows that the square root relationship between probabilities is a special case assuming very small state probabilities (see Section 4.6.2 and Appendix B).

Two Types of Probabilities
Throughout the literature, there appear two general categories of probabilities [44][45][46][47][48][49]: those which are objective and physically measurable and those which are subjective and not directly measurable and are often correlated to various degrees of psychologistics, to include beliefs, states of mind, logical proximity (from logical positivism), or judged probabilities [50]. In 1763, Rev Thomas Bayes' method of translating between probabilities of measurable events and their unmeasurable conditions was published posthumously and has since spawned an entire field of study [51]. Similarly, the goal of this research is to translate between what is directly measurable and what is not in the arena of behavior theory and risk, since the probabilities of risk events are usually posed in measurable objective terms, but positive behavior theory shows that subjects make risk choices differently. In all of the prior research reviewed, it was observed that probabilities provided to subjects were objective.
Building upon the prior discussion on entropy, there are two different, but related, types of probabilities contrasted by Roman Frigg based upon whether the problem is considered from a macro state (temperature, pressure, volume) or micro state (energy state of a single atom): macro probabilities and micro probabilities [52]. In Probability Theory, Jaynes makes a clear delineation between subjective and objective probabilities. While subjective probabilities are merely descriptive of the knowledge of a specific state and are not physically measurable, objective probabilities can be physically measured; they consider all states (ignoring none) and assume equivalent knowledge of each (i.e., equal probability for every possible state combination) [46]; for example, the probability of rolling any specific value on a fair six-sided die is objectively 1/6. This distinction precisely fits those of micro and macro probabilities of statistical mechanics and thermodynamics, respectively. When considered from a macroscopic or thermodynamic perspective, equilibrium entropy is based merely upon all possible combinations of all states and uses the classical definition of objective probability, where the individual micro-state probabilities are not known and are assumed equal, as calculated by the Boltzmann principle.
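The Boltzmann principle invoked here can be written explicitly; under thermal equilibrium every one of the W microstates is equiprobable, so the objective probability of each is 1/W, just as each face of a fair die receives 1/6:

```latex
S = k_B \ln W, \qquad p_i = \frac{1}{W} \quad \text{(equiprobable microstates at equilibrium)}
```

For the die, W = 6 gives p_i = 1/6, recovering the classical definition of objective probability from the equiprobability assumption.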
Micro states are subsets of the macro-region where a change in entropy is calculated for each state based upon knowledge of the micro-probability of being in that state and is not directly related to the state of other atoms or the measurable effects on the system; a neutral reference point, if you will, since it is only based upon knowledge of that state and not the system as a whole. Following an exhaustive comparison of these two types of probabilities, Frigg philosophically concludes, "There is no causal connection between knowledge and happenings in the world" [52]; an elegant contrast of micro (subjective) and macro (objective) probabilities. However, while not causal, Frigg proposes there exists a direct relationship where the macro state is a function of the system's micro state at a given time. Similarly, EDRM functionally relates subjective and objective probabilities. Now, the final step in aligning definitions is to match proximity with subjective knowledge and micro-probability, and relative certainty with objective or macro-probability, since the foundation of the EDRM derivation hinges upon the definitional connection between these terms, as shown in two generalized categories in Figure 5.
Figure 5. EDRM mathematically and philosophically relates subjective and objective probabilities, which are referred to as proximity and relative certainty, respectively. Proximity encompasses the group of unmeasurable subjective probabilities and relative certainty relates to those probabilities which are directly measurable, as listed in the respective boxes.
Likewise, logical positivism holds that there are two different types of probabilities: frequency and logical proximity. Frequency is definitionally objective probability, so it stands to reason that logical proximity is synonymous with subjective probability. Friedrich Waismann, who worked closely with Ludwig Wittgenstein, introduced the term as "the logical proximity or deductive connection between propositions" [translated] [47,53]. Waismann's terminology is especially helpful for this discussion because it is both a type of subjective probability and reinforces the use of the term proximity in this context. This assertion is further supported by Karl Popper, who ties logical proximity to psychologistic theory through Keynes' degrees of rational belief, and it appears synonymous with his logic of knowledge terminology, logical relation [44,54]. Popper continues regarding subjective probability: "It treats the degree of probability as a measure of the feelings of certainty or uncertainty, of belief or doubt, which may be aroused in us by certain assertions or conjectures" [54]. Interestingly, George Shackle's surprise-belief curves are largely founded upon Keynes' degrees of rational belief, from which he deduced a relationship between potential surprise and belief which closely approximates CPT, with belief being the subjective probability and surprise the objective [55].
Therefore, micro probabilities are definitionally subjective probabilities, with psychologistic connections to knowledge and beliefs, and macro probabilities are equivalent to objective probabilities. Ideally, a relationship that defines the difference between macro and micro probabilities would be effective in translating between these two contexts and would provide an isomorphic framework for contrasting normative and positive behavior theory. EDRM's relationship between proximity and relative certainty provides such a solution and offers an explanation of the differences between the neutral reference points observed in positive behavior theories, such as PT and CPT, and the wealth-based utilities found in normative theories. Similar to the development of entropy over the past 300 years through the differing perspectives of thermodynamics and statistical mechanics, which were brought together by Boltzmann's H-theorem, behavioral economics has long considered the same problem of decision making from two differing perspectives.
To finalize the philosophical foundation for derivation of EDRM as a translation between subjective and objective probabilities, it must be shown that relative certainty (i.e., redundancy, which is one minus relative entropy) is an objective probability. Shannon defines relative entropy as the entropy of a source, based upon knowledge of the probability of each state (subjective probabilities), divided by the maximum entropy, which assumes an equal probability for each state with no knowledge of a specific state (objective probability). Entropy itself contains no knowledge of a state and is ambiguous about probability, as illustrated by the state entropy plot in Figure A1, which shows two values of state probability for any value of state entropy, except at its maximum. Because entropy does not contain state knowledge and there are only two types of probabilities, relative certainty cannot be subjective and therefore is an objective probability.
Referring back to Bentham's identification of certainty and proximity as distinct factors in the definition of utility, this research takes his statement as a clear acknowledgement that both objective and subjective probabilities must be evaluated. EDRM accounts for these factors and provides translation between them.

Entropy Decision Risk Model (EDRM) Framework
The EDRM is developed from the following observations derived from the prior philosophical discussion:
1. Certainty of gains and the uncertainty of losses are more highly valued;
2. Gains and losses are considered contiguously as two regions of the same scale;
3. Relative certainty, or redundancy, is one minus the relative entropy;
4. Proximity is represented by the subjective probability of reaching a state;
5. Prospect can be stated as magnitude times proximity as a function of relative certainty;
6. The choice with the greatest prospect, positive or negative, is preferred.

Choices and States
Shannon says that choices are made up of individual states [35]. Employing Problem 1 of Prospect Theory as a classical example, Choice A (2500, 0.33; 2400, 0.66) has three states: 2500, 2400, and zero, although the zero state is implied by the remaining probability (0.01) and is usually omitted in the notation. Choice B (2400, 1.0) has two states, 2400 and zero, although both of these states are certain. However, there is a problem: all these probabilities are objective rather than subjective (micro) probabilities, which reveals the fundamental weakness of the current risk management paradigm. This can be easily seen in the first example problem from PT [14]:
A: 2500 with probability 0.33; 2400 with probability 0.66; 0 with probability 0.01.
B: 2400 with certainty.
Based upon expected value (probability times consequence), Choice A (2409) should be preferred over Choice B (2400); however, an overwhelming 82 percent of subjects selected Choice B, all because the wrong probability is used. This single example demonstrates the misalignment between risk modeling and human decision making, a discord that has ostensibly been accepted by the risk community to keep the calculation of risk simple for laypersons. Therefore, state probabilities must be subjective, with choices shown in Figure 6. According to information theory, the SMI for state i within choice j is [35] $h_{ij} = -\log_2 p_{ij}$.
The maximum possible entropy for any given choice occurs when $p_i = 1/m$ for all $i$, which results in the well-known equation of maximum entropy, $H_{max} = \log_2 m$.
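As a quick numerical check, the two information-theoretic quantities just cited can be computed directly; the function names below are ours, and the formulas are the standard Shannon forms assumed by the text:

```python
import math

def smi(p: float) -> float:
    """Self-information (SMI) of a state with probability p, in bits."""
    return -math.log2(p)

def max_entropy(m: int) -> float:
    """Maximum entropy of a choice of m equiprobable states, in bits."""
    return math.log2(m)

print(smi(0.5))        # 1.0 bit: a fair coin flip
print(max_entropy(3))  # log2(3) ≈ 1.585 bits, e.g., a three-state choice
```

A rarer state carries more information: halving the probability adds one bit.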
Although there is a recognition that uncertainty's effect (i.e., entropy) on outcomes may be used in a definition of risk, as stated in ISO 31000, there is no discussion of how to apply the uncertainty of approaching one of two states (failure or no failure) to a risk model [9,56]. To resolve the expected value inconsistency shown above in PT problem 1 and to incorporate the concept of uncertainty, the EDRM is proposed.

Prospect
The derivation of prospect requires application of statistical mechanics and information theory placed in the context of DMUU. To provide distinction between the two types of probabilities, x and y are used for proximity (subjective probability) and relative certainty (objective probability), respectively. Prospect is identified with the Greek letter τ (tau). Proximity and the CPT weighting factor are generally synonymous, except that Tversky and Kahneman explicitly state that the CPT weighting factor and the PT decision weight are not probabilities, ostensibly because individually they do not necessarily sum to 1 within a choice, as only objective probabilities are assumed; this is, however, indicative of additive subjective probabilities. The prospect of a given state is equivalent to its certainty equivalent (CE), which is the 100 percent probability (certainty) of the non-zero state. For example, under EDRM the CE for (USD 1000, 0.5) is USD 432, which will be shown to be consistent with the results of CPT.

Derivation of Proximity from Information Theory Entropy (SMI) and Statistical Mechanics
The basic relationship between proximity (x) and relative certainty (y) is the foundation of EDRM and is derived by taking the entropy divergence of a single state; the full derivation, culminating in Equation (8), which expresses y in terms of x, is presented in Appendix A.
The inverse of this equality, x(y), is much more useful; however, Equation (8) is not invertible in closed form, so numerical methods in R and Excel are used to apply the model.
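Since the text applies the model through numerical methods in R and Excel, the kind of inversion involved can be sketched generically. The bisection routine and its name are ours, demonstrated on the small-probability limiting form y = x², which the text later identifies as the Born-rule-consistent limit; Equation (8) itself would be inverted the same way, with its entropy-derived expression in place of the square:

```python
def invert_monotone(f, y, lo=0.0, hi=1.0, tol=1e-12):
    """Invert an increasing function on [lo, hi] by bisection: solve f(x) = y."""
    for _ in range(200):
        mid = (lo + hi) / 2.0
        if f(mid) < y:
            lo = mid
        else:
            hi = mid
        if hi - lo < tol:
            break
    return (lo + hi) / 2.0

# Demonstration on y = x**2, whose exact inverse is x = sqrt(y).
print(invert_monotone(lambda x: x * x, 0.25))  # ≈ 0.5
```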

Very Small Probabilities
Although extremely small probabilities are not part of most behavioral economics studies, because EDRM is derived from basic theory it should generally extend to these cases. For very small values of relative certainty and proximity, the relationship between the two converges to an exponential factor. For a priori EDRM, a simple relationship between objective and subjective probabilities results for very small probabilities, $x = \sqrt{y}$, which is consistent with the Born rule of quantum mechanics (see the full derivation in Appendix B).
To illustrate, given an objective probability of $3.3 \times 10^{-9}$, as could be found in the Powerball lottery grand prize, the proximity is the square root, $5.8 \times 10^{-5}$, an increase of greater than 10,000 times, which is perhaps consistent with the popularity of lotteries despite the poor odds of winning.
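The Powerball illustration can be reproduced in two lines; the only input is the odds figure quoted in the text:

```python
import math

objective = 3.3e-9                # roughly the Powerball grand-prize odds
proximity = math.sqrt(objective)  # small-probability limit of EDRM
print(proximity)                  # ≈ 5.7e-5 (the text rounds to 5.8e-5)
print(proximity / objective)      # ≈ 17,000: an increase of over 10,000-fold
```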

Inflection and Preference Reversal Points
Many studies compare the alignment of descriptive models to CPT based upon the point of inflection, where the shape shifts from concave to convex, and the crossover or preference reversal point. Drazen Prelec, in The Probability Weighting Function, developed a similar relationship that forms a curve like EDRM and has a combined inflection and crossover point. This is of particular interest because the basic entropy equation, $-p \log_2 p$, has its maximum of $1/(e \ln 2)$ at $p = 1/e$, which aligns with an inflection point at $3/e^2 \approx 0.4060$ and is highly consistent with the conclusions of Wu and Gonzalez, who validated prior studies to confirm that the inflection point of the weighting function is at about 0.40 [58].
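The maximum of the state entropy curve cited above can be confirmed numerically; the coarse grid search below is our sketch, not the paper's method:

```python
import math

def state_entropy(p: float) -> float:
    """State entropy -p*log2(p), in bits."""
    return -p * math.log2(p)

# Grid search for the maximum; theory places it at p = 1/e,
# with value 1/(e*ln 2) ≈ 0.531 bits.
best_p = max((i / 100000 for i in range(1, 100000)), key=state_entropy)
print(best_p, state_entropy(best_p))
```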
The EDRM preference reversal point naturally occurs at 0.2847, as shown in Figure 7, and appears to correspond more closely to Tversky and Kahneman's reported data than their proposed descriptive model and other follow-on studies; it is shown superimposed upon their actual plot (Figure 3) in Figure 8 [17,19]. To aid in visual assessment, including of the preference reversal point, a 5th-order polynomial trendline (orange dashed line) is shown nearly overlapping the predicted results (black line). Statistical analysis of the uncorrected model performance is provided in Appendix C.3. Lichtenstein and Slovic reported reversal in three experiments with the following results: 0.295, 0.315, and 0.270, which average to 0.293 [59]. In another preference reversal study, Tversky, Sattath, and Slovic reported a similar value of 0.28 [60]. These results are all consistent with the predicted EDRM preference reversal point. The base layer showing the CPT weighting factor curves, which includes the axes, is taken directly from the original CPT paper [17]. The second layer of blue dots represents the actual positive and negative data points from the original report. The next (orange) layer is a 5th-order linear regression trendline calculated from the original results. The final layer shows the uncorrected EDRM, which trends more closely with the original data than the reported weighting factor curves.

Calculating Prospect of a Choice
As defined, the prospect of state $i$ within choice $j$ is its magnitude times its proximity as a function of relative certainty, calculated from Equations (3) and (A4) and expressed as $\tau_{ij} = v(z_{ij})\, x(y_{ij})$, where $v$ is the power utility value of the state magnitude $z_{ij}$ and $x(y_{ij})$ is the proximity corresponding to the state's relative certainty.
The prospect of a choice of $m$ states is then the sum over its states, $\tau_j = \sum_{i=1}^{m} \tau_{ij}$.
The preferred choice is that with the greatest (most positive or least negative) value of $\tau_j$, whether the state values are all positive (gains), all negative (losses), or a mixture of the two. The default value function uses a standard exponent of 0.88 for the power utility.
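The prospect calculation just described (power-utility value times proximity, summed over a choice's states) can be sketched as follows. The function names are ours, and the identity function stands in for the entropy-derived proximity, which the paper obtains numerically; with the identity and a unit exponent, prospect collapses to expected value, reproducing the 2409 figure for Choice A of PT Problem 1:

```python
def value(z: float, alpha: float = 0.88) -> float:
    """Sign-preserving power utility |z|**alpha (0.88 per the text)."""
    return (abs(z) ** alpha) * (1 if z >= 0 else -1)

def prospect(states, proximity=lambda y: y, alpha=0.88):
    """Prospect of a choice: sum of value(z) * proximity(y) over its states,
    where `states` is a list of (magnitude, objective probability) pairs."""
    return sum(value(z, alpha) * proximity(y) for z, y in states)

# Identity proximity and alpha = 1 reduce prospect to expected value:
# PT Problem 1, Choice A, gives 2500*0.33 + 2400*0.66 = 2409.
print(prospect([(2500, 0.33), (2400, 0.66), (0, 0.01)], alpha=1.0))
```

Substituting the numerically inverted Equation (8) for the `proximity` argument yields the full a priori model.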
Indifference plots graphically represent all possible combinations of a three-state choice $(z_1, p_1; z_2, p_2; z_3, p_3)$ for a given decision curve. By convention, the objective probabilities $p_1$ and $p_3$ are placed on the axes; $p_2$ is inferred as $1 - p_1 - p_3$ and lies along the diagonal from the origin. Using Tversky and Kahneman's example, the corners represent the three outcomes (states): 0, 100, and 200 [17]. Other values can be used, including negative values and mixes of positive and negative. The contour lines depict equal prospects. Some authors portray indifference plots in equilateral triangles, but to remain consistent, this research will use the format reported in CPT. The uncorrected EDRM indifference plot is shown in Figure 9 in comparison with those originally reported by Tversky and Kahneman, indicating close alignment between EDRM and original CPT. Figures (a,b) are taken directly from the original text [17]. Figures (c,d) are calculated using EDRM. The dashed lines represent probability $p_2$. It is noteworthy that EDRM generally matches the original CPT indifference plots, except along the edges. This may be explained by the fact that proximities calculated from relative certainties are not required to sum to 1 (Section 4.6).
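A sketch of how such an indifference grid is tabulated for the three-outcome example (0, 100, 200); the identity proximity is again a stand-in, so these contours are the straight expected-value lines that the entropy-derived proximity would bend:

```python
def choice_prospect(p1, p3, outcomes=(0, 100, 200)):
    """Expected-value stand-in for the prospect of the three-state choice
    (0, p1; 100, p2; 200, p3), with p2 = 1 - p1 - p3."""
    p2 = 1.0 - p1 - p3
    return sum(z * p for z, p in zip(outcomes, (p1, p2, p3)))

# Tabulate a coarse (p1, p3) grid and pick out one indifference contour.
grid = [(p1 / 10, p3 / 10) for p1 in range(11) for p3 in range(11)
        if p1 + p3 <= 10]
equal_100 = [pt for pt in grid if abs(choice_prospect(*pt) - 100.0) < 1e-9]
print(equal_100)  # the diagonal p1 == p3: every such point has prospect 100
```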

Applying a Proximity Exponent (ν) to the Prospect of a Choice
Although the focus of this paper is on validating a priori EDRM without factors or corrections, it is appropriate to note how a factor would be applied and what effect it would have on the results. Equation (11) can be modified, as discussed in Appendix B, by expanding the application of the exponent ν to proximity in general for all values, not just the very small, and merging it with the power utility of Equation (3). To illustrate the ability to model a wide range of prospect curves, proximity for various values of ν is shown in Figure 10. For values of ν < 2, the preference reversal point shifts along the identity line; reversal is at 0.5 when ν = 0.8560. For ν ≥ 2, proximity is always less than relative certainty, so there is no preference reversal. At the extremes, proximity is 1 for ν = 0 and tends to 0 as ν → ∞. The loss aversion factor, λ, is assumed to be 1 throughout this research, which is validated in the analysis presented in Section 6. While this paper will only apply ν = 1 for comparison to prior studies to validate the a priori model, in Section 6 the proximity exponent is varied along with the value exponent to further validate the use of 0.88 and 1 across all prior studies as a system.
The studies used for comparison will assume a natural value of ν = 1 to validate the a priori relationship; comparison with other studies using varying values of ν will be considered in subsequent research.

EDRM Validation (Without Application of Any Factors or Corrections)
Validation of the various versions of EDRM is done using data reported in prior studies and assumes that all reported choice decisions are reasonable decisions, as previously defined. The consistency of the data varies based upon the specific study and the number of subjects, which fluctuates between ten and several hundred. As this research will not replicate prior studies, the specifics of how choices were presented to subjects will not be discussed unless necessary to explain results, such as in the CPT analysis which reports certainty equivalent values derived from subject responses rather than the responses themselves. None of the studies involve actual financial loss or reward to the subjects, except a subset of one study (Wu and Markle), making them generally consistent with the bureaucratic risk decision systems under consideration, although subjects were sometimes compensated for participating in the study.

The Percentage Evaluation Model (PEM)
Most studies reviewed report results in terms of the fraction of subjects selecting between alternatives, so results must be converted to enable direct performance comparison with prior works, beyond that of merely evaluating the binary results (i.e., do they match?); however, literature reviews did not identify any such method for directly comparing value results with frequency of subjects selecting an alternative. The PEM is presented as a tool for conducting this evaluation and may be useful for comparing values with subject percentages in other research. Additionally, the difference in percentages reported by PEM can be evaluated as the choice difficulty, where a small difference represents a difficult choice.
While a straightforward ratio of prospect values might appear to work for pairs of gains or losses, it does not suffice for mixed gambles nor does it capture subject perception. This research proposes use of the natural shape of inverse hyperbolic sine over the range of possible positive and negative values to compute a relative percentage that is consistent with subject responses based upon the calculated values of prospect.
The challenge is to develop a scale that both respects the difference between the prospects and is referenced to the absolute values of the minimum and maximum possible values from the two choices. The solution is to use the inverse hyperbolic sine of the difference of the prospects in the numerator and the difference of the asinh of the maximum and minimum values in the denominator. The maximum and minimum functions are both referenced to zero, such that the minimum value is never greater than zero and the maximum is never less than zero. Since the inverse hyperbolic sine is logarithmic, this approach is compatible with the Weber-Fechner law for human perception (psychophysics). To further support this approach, it is already well established that economic decision theory is closely related to the field of psychophysics, of which Daniel Bernoulli is considered the inventor [5,61]. This relationship is given by

$\%_A = 50\% + 50\% \times \dfrac{\operatorname{asinh}(\tau_A - \tau_B)}{\operatorname{asinh}(\operatorname{Max}(\mathrm{Value},0)) - \operatorname{asinh}(\operatorname{Min}(\mathrm{Value},0))}. \quad (15)$

Figure 11 graphically represents the development of Equation (15). To enable comparison of prospects to maximums and minimums in cases where power utility was applied to the state prospects, the inverse of the function (e.g., $\tau^{1/0.88}$) must be applied to calculate the corrected choice prospect and undo this effect, similar to that performed by Bernoulli in his discussion of expected utility. As PEM is calculated only from the prospects, it is independent of the binary matching results.
Figure 11. This illustration shows a new model for converting the prospect (τ) of two choices into the relative percentages of subject responses for direct comparison with prior studies, which universally report these percentages. No prior works reviewed attempt to compare results in this manner, making this the first to do so, to the authors' knowledge. This model is based upon the Weber-Fechner law of human perception, which is logarithmic, and scaled by the minimum and maximum values.
Asinh was chosen because it is likewise logarithmic and permits comparison of positive and negative prospects contiguously along a single scale.
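A minimal sketch of the PEM conversion, assuming our reading of the verbal description above (asinh of the prospect difference in the numerator, the difference of the asinh of the zero-referenced maximum and minimum values in the denominator); the function name is ours:

```python
import math

def pem_percent(t_a, t_b, values):
    """Predicted percentage of subjects selecting choice A.

    t_a, t_b : prospects of the two choices being compared
    values   : all state magnitudes appearing in either choice
    Max and min are referenced to zero, per the text."""
    hi = math.asinh(max(max(values), 0.0))
    lo = math.asinh(min(min(values), 0.0))
    return 50.0 + 50.0 * math.asinh(t_a - t_b) / (hi - lo)

print(pem_percent(100.0, 100.0, [0, 100, 200]))  # 50.0: equal prospects
print(pem_percent(150.0, 100.0, [0, 100, 200]))  # > 50: split shifts toward A
```

Because asinh is odd, the two choices' predicted percentages always sum to 100.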
One interesting special case must be considered in the evaluation model: that of dominance [62]. When comparing two choices with an equal number of states, each with an identical probability set, dominance exists when the value of every state of one choice is equal to or greater than that of the pairwise corresponding state of the other [62,63]. While the EDRM prospects will predict the correct binary result in this case, Equation (15) may not accurately predict percentages because the prospects are often nearly equal; however, if there is even a small difference in probabilities, then this effect is not present and the evaluation model proves quite accurate, as demonstrated in Section 5.6. Therefore, when dominance is present, the percentage of the choice with the greater prospect will be 100%; the lesser will be 0%. The proposed evaluation model used for validating EDRM itself requires assurance that it consistently and accurately translates between prospects and percentages. Since the evaluation model draws its validity from the very data it is used to evaluate, the following set of credible and objective criteria is established as a standard:
1. Varies monotonically with the difference in prospect between choices;
2. Is scaled by the range, positive and negative, of values being evaluated in a given choice;
3. Accounts for non-linearities of human perception;
4. Equitably reports subject percentages for choices involving gains, losses, or mixtures of the two;
5. Performs consistently across a range of studies (i.e., is not tuned to a specific set of research).
Criteria 1 through 3 are met by definition, as previously discussed. Criteria 4 and 5 are met through analysis of eight related studies conducted by different researchers, all of which have been analyzed using matching binary results and optimized values for the exponential parameters [14,58,62-67].
Table A1 in Appendix C.1 summarizes this analysis and affirms the consistency of PEM performance throughout this research (0.80). Specifically, despite the presence of gain, loss, and mixed choices (criterion 4) and the myriad sources of the surveys (criterion 5), there is no statistically significant effect, independently or in interaction. Therefore, it is reasonable to conclude that this evaluation model is adequate for translating between prospects and subject response percentages.

Allais Paradox
As a foundation of DMUU, agreement with the Allais Paradox is an imperative for validation of EDRM, as shown in Table 1. EDRM correctly predicts results for the paradox, as posed by Allais, as well as for other variants embedded within subsequent research. No actual results showing subject preference percentages were given in his paper; however, the calculated percentages predict that nearly all subjects would agree with the choices. Maurice Allais, in his 1988 Nobel Lecture, referred to the VNM utility as the "neo-Bernoullian utility index" and critically refuted it as "unacceptable because it amounts to neglecting the probability distribution of psychological values around their mean" [68], which was consistent with research by Harry Markowitz and points to the use of subjective probabilities. To demonstrate the fundamental weakness of utility theory in predicting subject choice, Allais offered the Allais Paradox in his paper, Le Comportement de l'Homme Rationnel devant le Risque: Critique des Postulats et Axiomes de l'Ecole Americaine [69]. The paradox, cited below from Mark Machina and differing slightly from Allais' original in currency and magnitude (1 USD = 100 Francs) for ease of transcription, consists of two pairs of gambles, $(a_1, a_2)$ and $(a_3, a_4)$. Subjects usually select $a_1$ and $a_3$, contrary to the results predicted by utility theory, which requires that subjects select choice $a_4$ after selecting $a_1$ [70]:
$a_1$: 1,000,000 with certainty, versus
$a_2$: 5,000,000 with probability 0.10; 1,000,000 with probability 0.89; 0 with probability 0.01;
and
$a_3$: 5,000,000 with probability 0.10; 0 with probability 0.90, versus
$a_4$: 1,000,000 with probability 0.11; 0 with probability 0.89.
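The paradox can be verified arithmetically: expected value favors the second gamble in each pair, so the modal subject pattern (the sure million and the 0.10 shot at five million) cannot be reconciled with any single expected-utility ranking. A short sketch (variable names ours):

```python
# Expected values of the four Allais gambles (Machina's transcription).
def ev(gamble):
    """Expected value of a gamble given as (payoff, probability) pairs."""
    return sum(z * p for z, p in gamble)

a1 = [(1_000_000, 1.00)]
a2 = [(5_000_000, 0.10), (1_000_000, 0.89), (0, 0.01)]
a3 = [(5_000_000, 0.10), (0, 0.90)]
a4 = [(1_000_000, 0.11), (0, 0.89)]

print(ev(a1), ev(a2))  # ≈ 1,000,000 vs 1,390,000: expected value picks a2
print(ev(a3), ev(a4))  # ≈   500,000 vs   110,000: expected value picks a3
```

Expected value alone would select a2 and a3; the commonly observed a1-and-a3 pattern is the paradox.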

Prospect Theory (Kahneman and Tversky)
As with the Allais Paradox, no positive decision model could make any claim to universality without predicting all results of Kahneman and Tversky's hallmark work, Prospect Theory. EDRM accurately predicts all PT results, including the lottery and insurance problems (14 and 14′), which are usually characterized as large gambles where people tend to evaluate choices based upon the value of potential winnings alone without considering the probability, as is normal for small gambles [71].
The correlation between actual and predicted results is shown in Figure 12. The detailed results comparing the performance of EDRM against reported PT results are shown in Table 2. As stated by Kahneman and Tversky in problem 10 of Prospect Theory, people tend to disregard the first stage of a two-stage gamble [14]; therefore, the first stage is not applied in this model.
The close alignment between EDRM and PT (0.86, with 100% binary matching; see Appendix C.2), as seen in Figure 12 and in the results reported by Kahneman and Tversky shown in Table 2, is striking, especially considering that no factors were applied to modify the shape of the proximity curve to match their results. The gamble type, whether gain or loss, has no statistical effect, which supports the assumption that there is no difference between the two and affirms Hypothesis 1.

Cumulative Prospect Theory
The weighting factor curve developed by Tversky and Kahneman serves as the foundation for many subsequent works seeking to apply it or to provide further validation. Therefore, for EDRM to be of value, it must accurately predict CPT results, beyond the general agreement between EDRM and CPT for shape and critical point agreement (inflection and preference reversal) demonstrated in Section 4.6.3.
By nature of the method employed by Tversky and Kahneman to derive the median certainty equivalent (CE) data from observed choices rather than portraying raw subject preference data, the use of a unity power utility factor is warranted; i.e., the inverse power utility correction has already been applied. Figure 13 displays the difference between the actual CEs and the calculated prospect as reported in Table 3. The consistency of the CE difference is tighter for losses than for gains, which can be seen in the increased dispersion of two-state gains. Consistent with the $w^+$ and $w^-$ curves of Figure 3, the linear trendline indicates that the calculated CE is slightly less than actual for gains, and slightly greater for losses.
Figure 13. This plot shows a high degree of alignment of EDRM compared with actual CPT data for one- and two-state choices. The plot scales are different on the horizontal and vertical axes to amplify the results. The dashed line represents a linear trendline using all the data, which shows excellent alignment with the positive and negative extremes. There is a slightly tighter correlation of the model for negative values. The negative slope of the trendline shows there is a very small difference between gains and losses (loss aversion), but this is considered a minor effect in this research.
Exhibiting excellent alignment between EDRM and CPT, with a near-perfect result of 0.9971 (see Appendix C.3), not to mention the tight agreement between its predicted preference reversal and inflection points shown from prior research, EDRM applied to CPT soundly affirms Hypothesis 1, along with Kahneman and Tversky's groundbreaking work. EDRM serves as the baseline relationship between objective probability and one's perception of the likelihood of an outcome (subjective probability).
The results shown in Table A3 indicate that the type of gamble (gain or loss) only has a secondary effect, affirming the assumption that gains and losses can be considered together within this research.

The Framing of Decisions and the Psychology of Choice (Tversky and Kahneman)
Beyond their works of PT and CPT, Tversky and Kahneman produced a volume of research on related topics that provide additional sources for EDRM validation. In their paper, The Framing of Decisions and the Psychology of Choice, they explored a wide range of problem types that involved gains, losses, and the mixture of the two [62]. Three of the problems posed (8, 9, and 10) are without probabilities presented and are akin to those offered by Richard Thaler in Mental Accounting and Consumer Choice. They will not be included here but will be considered in future studies applying EDRM to Thaler's works [72].
Due to the paucity of problems in this group and the 100% matching, statistical analysis was not conducted; however, the results were considered in the analysis of EDRM evaluation model performance. The results shown in Table 4 were produced using uncorrected EDRM with the default power utility exponent of 0.88. These results support Hypothesis 1. Notes: 1 . Problem 6 is the second stage of a two-stage version of problem 5 where there is only a 25% chance of proceeding past the first stage; however, as stated by Kahneman and Tversky in problem 10 of Prospect Theory, people tend to disregard the first stage [14]. Therefore, the first stage is not applied in this model; 2 . Dominance is present, so the evaluation model returns 100% for the choice with the greater prospect.

Rational Choice and the Framing of Decisions (Tversky and Kahneman)
While the paper, Rational Choice and the Framing of Decisions, includes problems that are identical to those in other papers, such as The Framing of Decisions and the Psychology of Choice, two of the problems presented (7 and 8) are of particular interest to this research because they contain mixes of gains and losses, more than three states, and dominance [63]. Additionally, both problems have the same expected values for their respective choices, which would otherwise incorrectly predict Choice B for both problems. As shown in Table 5, EDRM accurately predicts the results of both, noting that the percentage result in problem 7 applies the dominance special case. Normatively, problems 7 and 8 should be equivalent; however, subjects appear to intuitively evaluate differences in certainty consistent with CPT, as predicted by EDRM. This result supports Hypothesis 1.

Gain-Loss Separability (Wu and Markle)
George Wu and Alex Markle focused their research on the separability of gains and losses in mixed gambles, which provides data that can be used to validate EDRM's ability to model choices consisting of mixed gains and losses. Their study was made up of six different surveys of 59 to 81 participants, depending upon the test. Surveys 1, 2, and 3 were conducted using prepared booklets, which subjects were paid to complete, while surveys 4, 5, and 6 were performed by subjects using a computer with a randomized order of gambles in a format designed to replicate that of the booklets [67]. This variation in test method may have produced differing results, as observed when compared with EDRM predictions. Due to the generated mix of positive and negative prospects, this study also serves as a validation test for the evaluation model itself. Figure 14 graphically compares actual results with the EDRM prediction by survey number, showing reasonable alignment. EDRM predicted results that agree with 82.4% of the binary results. Assuming EDRM is accurate, the comparative statistical analysis shows that all of the non-conformities were contained within the first three booklet-based surveys, especially survey 2 with three negative results, which appears significant given the comparatively lower correlation (0.69 for correct binary results, 0.35 for all results including incorrect). The tests were designed to increase subject response to the "high" choice (H) with survey number; EDRM likewise shows an increasing trend with survey number, but with a lesser slope. Despite these concerns, based upon the statistical results in Table 6 and notwithstanding variability in the subject data, EDRM is shown to generally predict the results of mixed gambles with a Spearman rank correlation coefficient of 0.695 (see Appendix C.4), which supports Hypothesis 1.

Summary of Analyses
EDRM has been shown to effectively predict the results of the studies considered using a standard value of 0.88 for the power utility exponent and the neutral value of ν = 1, assuming no loss aversion (i.e., λ = 1). This section will show that these values naturally maximize valid binary results through comparison of plots of the results obtained by varying these factors over nominal ranges for the prior studies considered in this research. Specifically, the power utility exponent is varied from 0 to 1 and ν is varied from 0 to 2, holding λ constant at 1; λ is then varied from 1 to 3, holding ν constant at 1. Two types of plots are discussed; the first is a subset of the data included in the second. Figure 15 illustrates the results for a sample Wu and Markle problem (number 25) as the difference between the prospects of the two choices, which clearly shows a linear preference reversal relationship between the factors. The standard values are well within the range for selecting Choice A, which is consistent with reported results.

System-Level Analysis of Choices (Sensitivity)
Independent of the subject percentages and PEM results, by layering the binary results of each problem analyzed in this paper upon one another, the effect of varying the power utility exponent, ν, and λ, using the relationship in Equation (14), can be considered at a system level, where results from multiple studies are integrated. Figures 16 and 17 demonstrate the results of combining the 63 previously discussed problems from Prospect Theory, the Allais Paradox, Framing of Decisions and the Psychology of Choice, and Wu and Markle's Gain-Loss Separability (mixed). The standard values are shown using dashed lines and clearly fall within the white zone for the 57 problems correctly predicted by EDRM (90.5%). The remaining six negative binary results were discussed in Section 5.7.
Figure 17. Formatted similarly to Figure 16, this plot shows that as λ (the loss aversion factor) increases, slightly wider ranges of the value exponent will correctly predict the binary result, for a maximum of 57 of the 63 choices analyzed. This shows that loss aversion is present for negative state values but validates its consideration as a secondary effect, since the standard values of 0.88 and 1 are valid assuming loss aversion is not present (i.e., λ = 1).
Plotting ν versus λ yields nearly identical results.
This observation serves four purposes. First, it validates the value of the power utility exponent determined from prior studies as a standard for subject responses in selecting between choices, along with the neutral value of ν = 1; the plot is optimized at 0.88 and 1.07. Second, this result shows that there is a mostly linear relationship between the value exponent and λ (Figure 17). Third, this analysis further validates EDRM's universality and consistency when applied to differing sources and researchers, which supports Hypothesis 1. Lastly, the assumption that loss aversion is a secondary effect is validated, although some loss aversion is evident.

Discussion
The broad goal of this research is to provide a method for addressing the mismatch between standard expected utility risk analysis tools and decision makers, ultimately to enable quantification of risk in fungible terms. In pursuing this goal, the present research has developed the predictive EDRM decision model from utility theory, statistical mechanics, and information theory, a model that is highly consistent with myriad studies. Although derived independently, EDRM bears resemblance to several prior descriptive positive models from Kahneman and Tversky, Luce et al., Gonzalez and Wu, Prelec, and Quiggin, which lends significant credence to the validity of the approach and the result. This research also reinforces validation of the various studies used in the analyses, especially that of CPT.
This research demonstrates that entropy divergence from certainty can be used to develop a positive decision model from basic theory that accurately predicts prior study results and provides a translation between positive and normative decision theory domains by relating subjective and objective probabilities, respectively. Tversky and Kahneman introduced this technique of translation when they stated, "In expected utility theory the utility of an uncertain outcome is weighted by its probability; in prospect theory the value of an uncertain outcome is multiplied by a decision weight" [62]. Since the decision weight and proximity are synonymous, Equation (A4) provides a translation between the two domains.
The first hypothesis is proven through the validation demonstrated in Section 5; in the process, it was demonstrated that gains and losses can be accurately considered together without correction, i.e., the assumption that λ = 1 is valid. This conclusion establishes the basis for expressing risks with measurable objective probabilities in terms usable by decision makers. It also permits translation of subjective prospects based upon perception of an outcome into standard objective utility risk models. Additionally, the assumption that gains and losses can generally be considered contiguously is validated. The second hypothesis is also proven: as the prior studies used in this analysis are understood to accurately represent subject behavior, and that behavior has been shown to align with the EDRM prospect, which is by definition based upon subjective probability, it follows that people do understand probabilities, albeit as subjective probabilities. There is also some evidence that as choice complexity increases (a greater number of states and mixtures of gain and loss states within a choice), decisions align more closely with uncorrected EDRM, which is consistent with intuitive System 1 behavior.
With the PEM validated and demonstrating consistent performance within this research, there is clear potential for application to other related studies to permit comparison of decision model outputs and subject responses. Since the PEM quantifies relative choice difficulty as the difference between percentages, from an economics perspective it may be useful for engineering alternatives that are easier for subjects to choose between (i.e., making it easier to select one product over another). Additionally, there is an opportunity for further research into how choice difficulty relates to variance in subject responses, i.e., is there more variance in difficult decisions?
With both hypotheses proven, the initial step towards quantifying programmatic risk is addressed, namely, considering the mismatch between how decisions should be made and how they are made. Future research to evaluate the EDRM in greater depth is requisite, especially regarding the complex interactions of an increased number of states and mixtures of gains and losses within a choice, which are evident in many complex economic scenarios. Future research in this area will also consider the application of continuous probability distributions and the use of utility functions other than power utility (e.g., logarithmic utility) to understand perception of risk.

Funding: This research received no external funding.

Conflicts of Interest:
The authors declare no conflicts of interest.

Appendix A. Derivation of Proximity from Entropy
The EDRM is derived by calculating the Kullback–Leibler entropy divergence of the state probabilities from certainty, where the state probability is treated as a continuous distribution and certainty takes a value of 1 for all states; this is also an integration of information-theory entropy for a single state, shown in Equation (7) [11,35,73]. As with micro probabilities in statistical mechanics, one should note that proximity is a subjective probability, as it is not directly measurable. The derivation is illustrated in Figures A1 and A2.

Figure A1. Divergence, or relative entropy, is the distance between certainty and uncertainty for a given subjective probability. The arrow shows how the divergence curve is flipped when converted to Shannon's redundancy, which is referred to herein as relative certainty and is an objective probability.

Figure A2. Plot of relative certainty versus proximity that relates objective and subjective probabilities, flipped around the diagonal to place relative certainty on the horizontal axis. This plot graphically illustrates the steps of the mathematical derivation.
The relationship between relative entropy and proximity is illustrated in Figure A1, along with the Shannon entropy for a single state, which has a maximum at 1/e. Kullback–Leibler entropy divergence is also known as relative entropy, which can be used to express relative certainty in terms of Shannon redundancy, defined as one minus relative entropy, as shown in Equation (A4) and Figure A2. Figure 7 shows the inverted relationship for determining proximity as a function of relative certainty, which is more useful since subjects are usually provided relative certainty (objective probability) to evaluate a choice; the result is Equation (A4). Additionally, through the course of this research, an alternate equation based upon Shannon's redundancy and the entropy of a single state was derived which, with the right factors, closely approximates the ideal Equation (A4) for probabilities not near the extremes of 0 or 1. Because Equation (A4) is not analytically invertible, this approximation, with fitted constants 0.7276587, 0.401077, 2.664828, and 1.0, may be more convenient mathematically.
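Two standard information-theoretic facts used in this derivation can be checked numerically: the single-state entropy term −p ln p peaks at p = 1/e, and the Kullback–Leibler divergence of certainty from an uncertain state is positive. This is a generic sketch of those facts, not an implementation of Equation (A4):

```python
import math

# Numerical check of two facts used in the derivation: the single-state
# entropy term -p*ln(p) has its maximum at p = 1/e, and the KL divergence
# of certainty from a 50% state equals ln(2).

def single_state_entropy(p):
    """Entropy contribution of a single state with probability p (nats)."""
    return -p * math.log(p)

# Locate the peak of -p*ln(p) on a fine grid over (0, 1).
peak = max((single_state_entropy(i / 10000), i / 10000) for i in range(1, 10000))

def kl(p, q):
    """Kullback-Leibler divergence between discrete distributions (natural log)."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

print(round(peak[1], 4), round(kl([1.0], [0.5]), 4))  # → 0.3679 0.6931
```

The peak location 0.3679 matches 1/e ≈ 0.36788, the value noted above for the single-state Shannon entropy curve in Figure A1.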

Appendix B. Very Small Probabilities
The relationship for small probabilities is derived by introducing a factor into the proximity relationship expressed in Equation (A4) and assuming the probability is very small. The exponential factor for very small probabilities then converges to half the introduced factor.
For uncorrected EDRM, this factor is 1 by definition, so a profoundly simple relationship between objective and subjective probabilities results for very small probabilities, one consistent with the Born rule of quantum mechanics: the subjective probability converges to the square root of the objective probability (Equation (A9)). The broader application of this factor in Equation (A4) is considered in Section 4.6.5 and in future research, after validation of the ideal model in this report.
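Assuming the square-root limit stated above for Equation (A9), a small numeric illustration shows how strongly rare events are amplified when objective probability is mapped to subjective probability:

```python
import math

# Illustration of the assumed small-probability limit: if subjective
# probability converges to the square root of objective probability,
# rare events are strongly overweighted, and increasingly so as the
# objective probability shrinks.

for x in (1e-2, 1e-4, 1e-6):
    subjective = math.sqrt(x)
    print(f"objective {x:.0e} -> subjective {subjective:.0e} "
          f"(amplified {subjective / x:.0f}x)")
```

A one-in-a-million objective probability is perceived as roughly one-in-a-thousand, consistent with the well-documented overweighting of small probabilities in prospect theory.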

Appendix C. Statistical Analyses
Appendix C.1

Table A3. Statistical analysis of EDRM performance with Cumulative Prospect Theory, showing nearly perfect alignment between a priori EDRM and the data reported by Tversky and Kahneman. These results suggest that there is some difference between gains and losses, but as a second-order effect.

1. The number of states (1 or 2) has no effect. The ANOVA results for type of gamble do not reject the null hypothesis of no significant effect; however, the probability is very close to the 5% significance value, indicating there is some difference between gains and losses, but one that can be treated as a secondary effect in this research, given that the values for positive (0.9885) and negative (0.9980) problems are nearly identical. Using a value of 0.947 rather than 1 increases the type-of-gamble Prob(>F) to nearly 0.35 from 0.053.
2. The sign of the resulting choice prospects has no significant effect.
3. The survey number is significant. All of the non-matching problems come from Surveys 1 through 3, which were conducted differently than Surveys 4, 5, and 6; Survey 1 has a significantly higher difference mean than the other surveys.
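The survey-number finding rests on a one-way ANOVA. A minimal, stdlib-only sketch of the F-statistic on hypothetical difference data (not the study's data) illustrates the computation behind a Prob(>F) comparison:

```python
# A minimal one-way ANOVA F-statistic, stdlib only, applied to hypothetical
# |model - subject| difference data grouped by survey. The numbers below are
# illustrative, not the data behind Table A3.

from statistics import mean

def f_statistic(groups):
    """One-way ANOVA F = mean square between groups / mean square within."""
    all_vals = [v for g in groups for v in g]
    grand = mean(all_vals)
    k, n = len(groups), len(all_vals)
    ss_between = sum(len(g) * (mean(g) - grand) ** 2 for g in groups)
    ss_within = sum((v - mean(g)) ** 2 for g in groups for v in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

surveys = [                    # hypothetical differences per survey
    [0.12, 0.15, 0.11, 0.14],  # survey 1: noticeably larger mean difference
    [0.04, 0.05, 0.03, 0.06],
    [0.05, 0.04, 0.06, 0.05],
]
print(round(f_statistic(surveys), 1))  # → 48.2
```

A large F (here far above the ~4.3 critical value for 2 and 9 degrees of freedom at the 5% level) corresponds to the kind of significant survey effect reported in note 3 above.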