# Balanced Quantum-Like Bayesian Networks

## Abstract


## 1. Introduction

- A Law of Balance: a novel mathematical formalism for quantum-like probabilistic inferences that enables the cancellation of quantum interference terms upon the application of the Bayes normalisation factor. This way, the amplitudes of the probability waves become balanced.
- A Law of Maximum Uncertainty: which states that in order to predict disjunction effects, one should choose the amplitude of the wave that contains the maximum uncertainty, or the maximum information.

## 2. Probabilistic Inference in Bayesian Networks

- A burglary can set off the alarm.
- An earthquake can set off the alarm.
- The alarm can cause John to call.

## 3. Quantum Probabilities

## 4. The Two-Slit Experiment, Intensity Waves and Probabilistic Waves

#### 4.1. Intensity Waves

#### 4.2. Probability Waves

- They sum to one:$$p(y,\theta )+p(\neg y,{\theta}_{\neg})=p(y)+p(\neg y)=1;$$
- They are greater than or equal to zero and less than or equal to one:$$0\le p(y,\theta )\le 1,\quad 0\le p(\neg y,{\theta}_{\neg})\le 1.$$
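As an illustrative sketch (not part of the original derivation), the fragment below evaluates two waves using the conditionals of Table 1, row (a); the uniform prior $p(x)=p(\neg x)=0.5$ is an assumption of this illustration. For arbitrary phases the raw waves generally violate the first condition, which is what the normalisation of the next subsection addresses:

```python
import math

# Hypothetical values: Table 1, row (a) conditionals, with the uniform
# prior p(x) = p(~x) = 0.5 assumed for illustration.
p_y_x, p_y_nx = 0.16, 0.03          # p(y|x), p(y|~x)
p_x = p_nx = 0.5

p_y = p_y_x * p_x + p_y_nx * p_nx    # classical law of total probability
p_ny = 1 - p_y

def wave(prior, joint_a, joint_b, theta):
    """Probability wave: classical prior plus the interference term."""
    return prior + 2 * math.sqrt(joint_a * joint_b) * math.cos(theta)

w_y  = wave(p_y,  p_y_x * p_x,       p_y_nx * p_nx,       theta=0.0)
w_ny = wave(p_ny, (1 - p_y_x) * p_x, (1 - p_y_nx) * p_nx, theta=math.pi)

# For arbitrary (unbalanced) phases the two waves need not sum to one:
print(w_y + w_ny)
```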

#### 4.3. Normalisation

## 5. The Law of Balance

**Case 1: probability wave** $p(\neg y,{\theta}_{\neg})$ **dominates probability wave** $p(y,\theta )$. For the constraint
$$\frac{p(y|x)p(y|\neg x)}{p(\neg y|x)p(\neg y|\neg x)}<1,$$
the waves are balanced,
$$p(\neg y,{\theta}_{\neg})=1-p(y,\theta ),$$
by choosing
$${\theta}_{\neg}={\cos}^{-1}\left(-\sqrt{\frac{p(y|x)p(y|\neg x)}{p(\neg y|x)p(\neg y|\neg x)}}\cos (\theta )\right).$$

**Case 2: probability wave** $p(y,\theta )$ **dominates probability wave** $p(\neg y,{\theta}_{\neg})$. For the constraint
$$\frac{p(\neg y|x)p(\neg y|\neg x)}{p(y|x)p(y|\neg x)}\le 1,$$
the waves are balanced,
$$p(y,\theta )=1-p(\neg y,{\theta}_{\neg}),$$
by choosing
$$\theta ={\cos}^{-1}\left(-\sqrt{\frac{p(\neg y|x)p(\neg y|\neg x)}{p(y|x)p(y|\neg x)}}\cos ({\theta}_{\neg})\right).$$

**Case 3: none of the waves dominates the other.** For the constraint
$$\frac{p(y|x)p(y|\neg x)}{p(\neg y|x)p(\neg y|\neg x)}=\frac{p(\neg y|x)p(\neg y|\neg x)}{p(y|x)p(y|\neg x)}=1,$$
the balanced phases are
$${\theta}_{\neg}={\cos}^{-1}\left(-\cos (\theta )\right)={\cos}^{-1}\left(\cos (\theta \pm \pi )\right),$$
$$\theta ={\cos}^{-1}\left(-\cos ({\theta}_{\neg})\right)={\cos}^{-1}\left(\cos ({\theta}_{\neg}\pm \pi )\right).$$
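Case 1 can be checked numerically. In the sketch below, the conditionals are taken from Table 1, row (a), and the uniform prior $p(x)=p(\neg x)=0.5$ is an assumption of the illustration; for any phase $\theta$, the balanced phase ${\theta}_{\neg}$ makes the two waves sum to one:

```python
import math

# Numerical check of Case 1 of the Law of Balance. Conditionals are the
# hypothetical Table 1, row (a) values; p(x) = p(~x) = 0.5 is assumed.
p_y_x, p_y_nx   = 0.16, 0.03    # p(y|x), p(y|~x)
p_ny_x, p_ny_nx = 0.84, 0.97    # p(~y|x), p(~y|~x)
p_x = p_nx = 0.5

ratio = (p_y_x * p_y_nx) / (p_ny_x * p_ny_nx)
assert ratio < 1                # the constraint of Case 1 holds

def balanced_phase(theta):
    """Return theta_neg such that the two probability waves sum to one."""
    return math.acos(-math.sqrt(ratio) * math.cos(theta))

theta = 1.0                     # an arbitrary phase
theta_neg = balanced_phase(theta)

p_y  = p_y_x * p_x + p_y_nx * p_nx
p_ny = 1 - p_y
wave_y  = p_y  + 2 * math.sqrt(p_y_x * p_x * p_y_nx * p_nx) * math.cos(theta)
wave_ny = p_ny + 2 * math.sqrt(p_ny_x * p_x * p_ny_nx * p_nx) * math.cos(theta_neg)

# the interference terms cancel: p(y, theta) + p(~y, theta_neg) = 1
print(wave_y + wave_ny)
```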

**Probability Waves Are Non-Negative Real Numbers Smaller than or Equal to One.** We assume without loss of generality that $p(y)\le p(\neg y)$. It follows that $p(y)\le 0.5$. By the inequality of arithmetic and geometric means, we get the relationship
$$\sqrt{p(y,x)p(y,\neg x)}\le \frac{p(y,x)+p(y,\neg x)}{2},$$
$$2\sqrt{p(y,x)p(y,\neg x)}\le p(y,x)+p(y,\neg x)=p(y).$$
Since
$$p(y,\theta )=p(y)+2\sqrt{p(y|x)p(x)p(y|\neg x)p(\neg x)}\cos (\theta ),$$
it follows that
$$p(y,\theta )\le 2p(y)\le 1.$$

**Conformity with Doubly Stochastic Models.** Doubly stochastic models correspond to the situation in which Case 3 occurs, that is,
$${\theta}_{\neg}={\cos}^{-1}\left(\cos (\theta \pm \pi )\right),$$
for example
$${e}^{i\theta}={e}^{i0}=1\quad \mathrm{and}\quad {e}^{i{\theta}_{\neg}}={e}^{i\pi}=-1.$$
In this case the amplitude matrices are orthogonal,
$$\begin{pmatrix}\sqrt{p(y|x)} & \sqrt{p(y|\neg x)}\\ \sqrt{p(\neg y|x)} & -\sqrt{p(\neg y|\neg x)}\end{pmatrix}{\begin{pmatrix}\sqrt{p(y|x)} & \sqrt{p(y|\neg x)}\\ \sqrt{p(\neg y|x)} & -\sqrt{p(\neg y|\neg x)}\end{pmatrix}}^{T}=\begin{pmatrix}1 & 0\\ 0 & 1\end{pmatrix},$$
$$\begin{pmatrix}\sqrt{p(y|x)} & -\sqrt{p(y|\neg x)}\\ \sqrt{p(\neg y|x)} & \sqrt{p(\neg y|\neg x)}\end{pmatrix}{\begin{pmatrix}\sqrt{p(y|x)} & -\sqrt{p(y|\neg x)}\\ \sqrt{p(\neg y|x)} & \sqrt{p(\neg y|\neg x)}\end{pmatrix}}^{T}=\begin{pmatrix}1 & 0\\ 0 & 1\end{pmatrix}.$$

## 6. Balanced Probability Waves

- For $p(y)\le p(\neg y)$, the maximum interference is $$\pm \mathit{Interference}_{max}=\pm \sqrt{p(y,x)p(y,\neg x)}.$$
- For $p(\neg y)\le p(y)$, the maximum interference is $$\pm \mathit{Interference}_{max}=\pm \sqrt{p(\neg y,x)p(\neg y,\neg x)}.$$

#### 6.1. Principle of Entropy

The **Principle of Maximum Entropy** states that the probability distribution which best represents the current state of knowledge is the one with the largest entropy (see [36,37,38]). For a binary random variable, the highest entropy corresponds to the equal distribution,$$H=-p(y){\log}_{2}(p(y))-p(\neg y){\log}_{2}(p(\neg y))=-{\log}_{2}(0.5)=1\ \mathrm{bit}.$$
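As a quick illustration of the principle, the binary entropy is maximal exactly at the equal distribution:

```python
import math

def binary_entropy(p):
    """Shannon entropy of a binary random variable, in bits."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

print(binary_entropy(0.5))   # → 1.0, the maximum-entropy case above
```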

- For $p(y)\le p(\neg y)$, $${p}_{q}(y)=p(y)+2\sqrt{p(y,x)p(y,\neg x)}\quad \mathrm{and}\quad {p}_{q}(\neg y)=1-{p}_{q}(y).$$
- For $p(\neg y)\le p(y)$, $${p}_{q}(\neg y)=p(\neg y)+2\sqrt{p(\neg y,x)p(\neg y,\neg x)}\quad \mathrm{and}\quad {p}_{q}(y)=1-{p}_{q}(\neg y).$$
- For $p(y)=p(\neg y)$, $${p}_{q}(y)=2\sqrt{p(\neg y,x)p(\neg y,\neg x)}\approx p(\neg y)\quad \mathrm{and}\quad {p}_{q}(\neg y)=1-{p}_{q}(y).$$
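The first case above can be sketched numerically. The conditionals are taken from Table 1, row (a); the uniform prior $p(x)=p(\neg x)=0.5$ is an assumption of this illustration:

```python
import math

# Sketch of the entropy-based choice (first case above), with the
# hypothetical Table 1, row (a) conditionals; p(x) = p(~x) = 0.5 assumed.
p_y_x, p_y_nx = 0.16, 0.03           # p(y|x), p(y|~x)
p_x = p_nx = 0.5

p_y = p_y_x * p_x + p_y_nx * p_nx     # 0.095, so p(y) <= p(~y) here
interference = 2 * math.sqrt(p_y_x * p_x * p_y_nx * p_nx)

# maximal positive interference is applied to the smaller wave
p_q_y = p_y + interference
p_q_ny = 1 - p_q_y

print(round(p_q_ny, 2))   # close to the 0.84 reported for row (a) in Table 3
```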

#### 6.2. Mirror Principle

- For the case $p(y)\le p(\neg y)$, we define $${p}_{q}(\neg y)=2\sqrt{p(y,x)p(y,\neg x)}\approx p(y)\quad \mathrm{and}\quad {p}_{q}(y)=1-{p}_{q}(\neg y).$$
- For the case $p(\neg y)\le p(y)$, we define $${p}_{q}(y)=2\sqrt{p(\neg y,x)p(\neg y,\neg x)}\approx p(\neg y)\quad \mathrm{and}\quad {p}_{q}(\neg y)=1-{p}_{q}(y).$$
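The second case of the mirror principle can be sketched in the same way; the conditionals are taken from Table 2, row (i), and the uniform prior $p(x)=p(\neg x)=0.5$ is again an assumption of the illustration:

```python
import math

# Mirror-principle sketch with the hypothetical Table 2, row (i)
# conditionals; p(x) = p(~x) = 0.5 is assumed.
p_y_x, p_y_nx = 0.69, 0.58           # p(y|x), p(y|~x)
p_x = p_nx = 0.5

p_y = p_y_x * p_x + p_y_nx * p_nx     # 0.635, so p(~y) <= p(y) here
p_ny = 1 - p_y

# the mirrored amplitude approximates the smaller classical probability
p_q_y = 2 * math.sqrt((1 - p_y_x) * p_x * (1 - p_y_nx) * p_nx)
p_q_ny = 1 - p_q_y

print(round(p_q_y, 2))   # close to the 0.36 reported for row (i) in Table 3
```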

## 7. Empirical Validation

#### 7.1. Prisoner’s Dilemma Game and Probability Waves

- The participant was informed that the opponent chose to defect, $\neg x$.
- The participant was informed that the opponent chose to cooperate, $x$.
- The participant was not informed of the opponent's choice.

- The probability that prisoner y defects given that prisoner x defects, $p(\neg y|\neg x)$.
- The probability that prisoner y defects given that prisoner x cooperates, $p(\neg y|x)$.
- The probability that prisoner y defects when there is no information about whether prisoner x cooperates or defects. This can be expressed by$$p(\neg y)=p(\neg y,x)+p(\neg y,\neg x)=p(\neg y|x)p(x)+p(\neg y|\neg x)p(\neg x).$$
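The third case is the classical law of total probability. A one-line sketch, using the hypothetical Table 1, row (a) conditionals with $p(x)=p(\neg x)=0.5$ assumed:

```python
# Classical law of total probability for the unknown condition, using the
# hypothetical Table 1, row (a) conditionals; p(x) = p(~x) = 0.5 assumed.
p_ny_x, p_ny_nx = 0.84, 0.97    # p(~y|x), p(~y|~x)
p_x, p_nx = 0.5, 0.5

p_ny = p_ny_x * p_x + p_ny_nx * p_nx
print(round(p_ny, 4))   # → 0.905, the classical value in Table 1, row (a)
```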

#### 7.2. Two Stage Gambling Game

Imagine that you have just played a game of chance that gave you a 50% chance to win $200 and a 50% chance to lose $100. Imagine that you have already made such a bet. If you won this bet and were up $200, and were offered a chance to make the same bet a second time, would you take it? What if you lost the first bet and were down $100, would you make the same bet again? What if you do not yet know whether you won or lost the first bet and so do not yet know whether you are up or in debt, would you go ahead and make the same bet a second time?

- The participant was informed that he lost the first gamble, $\neg x$, and was asked if he wanted to play the second gamble y.
- The participant was informed that he won the first gamble, x, and was asked if he wanted to play the second gamble y.
- The participant was not informed about the outcome of the first gamble, and was asked if he wanted to play the second gamble y. By the law of total probability, this is$$p(y)=p(y|x)p(x)+p(y|\neg x)p(\neg x).$$

#### 7.3. Probability Waves in the Prisoner’s Dilemma and the Two Stage Gambling Game

## 8. Quantum-like Bayesian Network

#### 8.1. Generalisation

#### 8.2. Probability Waves Sum to One According to the Law of Balance

#### 8.3. Probability Waves Are Smaller than or Equal to One Only after Normalisation

#### 8.4. Example of Estimation of Balanced Phases

#### 8.5. Example of Application in the Burglar / Alarm Bayesian Network

- For the constraint $\frac{p(\neg {x}_{4}|{x}_{1})}{p({x}_{4}|{x}_{1})}\le 1$, $${\theta}_{i}-{\theta}_{ii}={\cos}^{-1}\left(-\frac{p(\neg {x}_{4}|{x}_{1})}{p({x}_{4}|{x}_{1})}\cos ({\theta}_{\neg i}-{\theta}_{\neg ii})\right).$$
- For the constraint $\frac{p({x}_{4}|{x}_{1})}{p(\neg {x}_{4}|{x}_{1})}\le 1$, $${\theta}_{\neg i}-{\theta}_{\neg ii}={\cos}^{-1}\left(-\frac{p({x}_{4}|{x}_{1})}{p(\neg {x}_{4}|{x}_{1})}\cos ({\theta}_{i}-{\theta}_{ii})\right).$$

- 1st Equation: $${\theta}_{i}-{\theta}_{ii}={\cos}^{-1}\left(-\frac{p(\neg {x}_{4}|{x}_{1})}{p({x}_{4}|{x}_{1})}\cos ({\theta}_{\neg i}-{\theta}_{\neg ii})\right).$$
- 2nd Equation: $${\theta}_{i}-{\theta}_{iii}={\cos}^{-1}\left(-\frac{p(\neg {x}_{4}|{x}_{1})p(\neg {x}_{4}|\neg {x}_{1})}{p({x}_{4}|{x}_{1})p({x}_{4}|\neg {x}_{1})}\cos ({\theta}_{\neg i}-{\theta}_{\neg iii})\right).$$
- 3rd Equation: $${\theta}_{iii}-{\theta}_{iv}={\cos}^{-1}\left(-\frac{p(\neg {x}_{4}|\neg {x}_{1})}{p({x}_{4}|\neg {x}_{1})}\cos ({\theta}_{\neg iii}-{\theta}_{\neg iv})\right).$$ The balanced phases are computed as before.
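One such balanced phase difference can be computed numerically. In the sketch below, the conditional $p({x}_{4}|{x}_{1})=0.9$ is a purely hypothetical value chosen to satisfy the constraint (the network's actual conditional tables accompany Figure 2), and the phase difference on the right-hand side is chosen freely:

```python
import math

# Hypothetical sketch of the 1st equation above. p(x4|x1) = 0.9 is an
# assumed conditional, not taken from the paper's network tables.
p_x4_x1 = 0.9
p_nx4_x1 = 1 - p_x4_x1

ratio = p_nx4_x1 / p_x4_x1
assert ratio <= 1               # the constraint of the first bullet holds

delta_neg = 0.7                 # theta_neg_i - theta_neg_ii, chosen freely
delta = math.acos(-ratio * math.cos(delta_neg))   # theta_i - theta_ii
```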

## 9. Interpretation

- For unknown events, the classical law of total probability is applied;
- For events of which we are unaware, we apply quantum-like models in which the phase information is related to ignorance. We determine the possible values of the wave using the law of maximum entropy of quantum-like systems.

- In the two-slit experiment, with the electron detectors showing which slit the electron goes through, the electron behaves as a particle. Assuming that the information about the detectors is unknown to us, we apply the law of total probability.
- When the detectors are removed, the electron is unobserved and is represented as a wave. In this case, we apply the quantum-like law of total probability.

## 10. Conclusions

- The Law of Balance: a novel mathematical formalism for quantum-like probabilistic inferences that enables the cancellation of the quantum interference terms upon the application of the Bayes normalisation factor. This way, the amplitudes of the probability waves become balanced.
- The Law of Maximum Uncertainty: which states that in order to predict disjunction effects, one should choose the amplitude of the wave that contains the maximum uncertainty, or the maximum information.

## Author Contributions

## Funding

## Conflicts of Interest

## References

- Moreira, C.; Wichert, A. Quantum-like bayesian networks for modeling decision making. Front. Psychol.
**2016**, 7. [Google Scholar] [CrossRef][Green Version] - Brighton, H.; Gigerenzer, G. Bayesian brains and cognitive mechanisms: Harmony or dissonance? In The Probabilistic Mind: Prospects for a Bayesian Cognitive Science; Oxford University Press: Oxford, UK, 2008; pp. 189–208. [Google Scholar]
- Gigerenzer, G.; Gaissmaier, W. Heuristic decision making. Annu. Rev. Psychol.
**2011**, 62, 451–482. [Google Scholar] [CrossRef] [PubMed][Green Version] - Kahneman, D.; Tversky, A. Subjective probability: A judgement of representativeness. Cognit. Psychol.
**1972**, 1, 430–454. [Google Scholar] [CrossRef] - Tversky, A.; Kahneman, D. Availability: A heuristic for judging frequency and probability. Cognit. Psychol.
**1973**, 5, 207–232. [Google Scholar] [CrossRef] - Tversky, A.; Kahneman, D. Judgment under uncertainty: Heuristics and Biases. Science
**1974**, 185, 1124–1131. [Google Scholar] [CrossRef] [PubMed] - Busemeyer, J.R.; Wang, Z.; Townsend, J.T. Quantum dynamics of human decision-making. J. Math. Psychol.
**2006**, 50, 220–241. [Google Scholar] [CrossRef] - Bowers, J.S.; Davis, C.J. More varieties of Bayesian theories, but no enlightenment. Behav. Brain Sci.
**2011**, 34, 193–194. [Google Scholar] [CrossRef] - Charness, G.; Levin, D. When optimal choices feel wrong: A laboratory study of Bayesian updating, complexity, and affect. Am. Econ. Rev.
**2005**, 95, 1300–1309. [Google Scholar] [CrossRef][Green Version] - Grether, D.M. Testing Bayes rule and the representativeness heuristic: Some experimental evidence. J. Econ. Behav. Organ.
**1992**, 17, 31–57. [Google Scholar] [CrossRef][Green Version] - Jones, M.; Love, B.C. Bayesian fundamentalism or enlightenment? On the explanatory status and theoretical contributions of Bayesian models of cognition. Behav. Brain Sci.
**2011**, 34, 169–188. [Google Scholar] [CrossRef][Green Version] - Peters, O. The ergodicity problem in economics. Nat. Phys.
**2019**, 15, 1216–1221. [Google Scholar] [CrossRef][Green Version] - Schwartenbeck, P.; FitzGerald, T.H.; Mathys, C.; Dolan, R.; Wurst, F.; Kronbichler, M.; Friston, K. Optimal inference with suboptimal models: Addiction and active Bayesian inference. Med. Hypotheses
**2015**, 84, 109–117. [Google Scholar] [CrossRef] [PubMed][Green Version] - Beck, J.M.; Ma, W.J.; Pitkow, X.; Latham, P.E.; Pouget, A. Not noisy, just wrong: The role of suboptimal inference in behavioral variability. Neuron
**2012**, 74, 30–39. [Google Scholar] [CrossRef] [PubMed][Green Version] - Busemeyer, J.R.; Wang, Z. Quantum cognition: Key issues and discussion. Top. Cognit. Sci.
**2014**, 6, 43–46. [Google Scholar] [CrossRef] [PubMed] - Busemeyer, J.; Wang, Z.; Trueblood, J. Hierarchical bayesian estimation of quantum decision model parameters. Int. Symp. Quantum Interact.
**2012**, 7620, 80–89. [Google Scholar] - Busemeyer, J.R.; Trueblood, J. Comparison of quantum and bayesian inference models. Int. Symp. Quantum Interact.
**2009**, 5494, 29–43. [Google Scholar] - Busemeyer, J.R.; Wang, Z.; Lambert-Mogiliansky, A. Empirical comparison of markov and quantum models of decision making. J. Math. Psychol.
**2009**, 53, 423–433. [Google Scholar] [CrossRef] - Pothos, E.; Busemeyer, J. A quantum probability model explanation for violations of rational decision theory. Proc. R. Soc. B
**2009**, 276, 2171–2178. [Google Scholar] [CrossRef][Green Version] - Khrennikov, A. Quantum-like model of cognitive decision making and information processing. J. BioSyst.
**2009**, 95, 179–187. [Google Scholar] [CrossRef] - Moreira, C.; Wichert, A. Exploring the Relations Between Quantum-Like Bayesian Networks and Decision-Making Tasks with Regard to Face Stimuli. J. Math. Psychol.
**2017**, 78, 86–95. [Google Scholar] [CrossRef][Green Version] - Aerts, D.; Aerts, S. Application of quantum statistics in psychological studies of decision processes. Found. Sci.
**1995**, 1, 85–97. [Google Scholar] [CrossRef] - Wichert, A.; Moreira, C. Balanced Quantum-Like Model for Decision-Making. Int. Symp. Quantum Interact.
**2018**, 11690, 79–90. [Google Scholar] - Busemeyer, J.; Matthew, M.; Wang, Z. A quantum information processing explanation of disjunction effects. Proc. Anal. Conf. Cognit. Sci. Soc.
**2006**, 28, 131–135. [Google Scholar] - Hristova, E.; Grinberg, M. Disjunction effect in prisoner’s dilemma: Evidences from an eye-tracking study. In Proceedings of the 30th Annual Conference of the Cognitive Science Society, Washington, DC, USA, 23–26 July 2008; pp. 1225–1230. [Google Scholar]
- Li, S.; Taplin, J. Examining whether there is a disjunction effect in prisoner’s dilemma game. Chin. J. Psychol.
**2002**, 44, 25–46. [Google Scholar] - Tversky, A.; Shafir, E. The disjunction effect in choice under uncertainty. J. Psychol. Sci.
**1992**, 3, 305–309. [Google Scholar] [CrossRef] - Kuhberger, A.; Komunska, D.; Josef, P. The disjunction effect: Does it exist for two-step gambles? Organ. Behav. Hum. Decis. Processes
**2001**, 85, 250–264. [Google Scholar] [CrossRef] - Lambdin, C.; Burdsal, C. The disjunction effect reexamined: Relevant methodological issues and the fallacy of unspecified percentage comparisons. Organ. Behav. Hum. Decis. Processes
**2007**, 103, 268–276. [Google Scholar] [CrossRef] - Koller, D.; Friedman, N. Probabilistic Graphical Models: Principles and Techniques; MIT Press: Cambridge, MA, USA, 2009. [Google Scholar]
- Pearl, J. Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference; Morgan Kaufmann: Palo Alto, CA, USA, 1988. [Google Scholar]
- Russell, S.J.; Norvig, P. Artificial Intelligence: A Modern Approach; Pearson Education Limited: Upper Saddle River, NJ, USA, 2010. [Google Scholar]
- Binney, J.; Skinner, D. The Physics of Quantum Mechanics; Oxford University Press: Oxford, UK, 2014. [Google Scholar]
- Moreira, C.; Wichert, A. Interference effects in quantum belief networks. Appl. Soft Comput.
**2014**, 25, 64–85. [Google Scholar] [CrossRef][Green Version] - Moreira, C.; Wichert, A. Quantum probabilistic models revisited: The case of disjunction effects in cognition. Front. Psychol.
**2016**, 4. [Google Scholar] [CrossRef][Green Version] - Jaynes, E.T. Information theory and statistical mechanics. Phys. Rev. Ser. II
**1957**, 106, 620–630. [Google Scholar] [CrossRef] - Jaynes, E.T. Information theory and statistical mechanics ii. Phys. Rev. Ser. II
**1957**, 108, 171–190. [Google Scholar] [CrossRef] - Jaynes, E.T. Prior probabilities. IEEE Trans. Syst. Sci. Cybern.
**1968**, 4, 227–241. [Google Scholar] [CrossRef] - Savage, L.J. The Foundations of Statistics; Courier Corporation: Chelmsford, MA, USA, 1972. [Google Scholar]
- Shafir, E.; Tversky, A. Thinking through uncertainty: Nonconsequential reasoning and choice. Cognit. Psychol.
**1992**, 24, 449–474. [Google Scholar] [CrossRef] - Yukalov, V.; Sornette, D. Decision theory with prospect interference and entanglement. Theory Decis.
**2011**, 70, 283–328. [Google Scholar] [CrossRef][Green Version] - Moreira, C.; Wichert, A. Are Quantum-like Bayesian Networks More Powerful than Classical Bayesian Networks? J. Math. Psychol.
**2018**, 28, 73–83. [Google Scholar] [CrossRef] - Vedral, V. Living in a quantum world. Sci. Am.
**2011**, 304, 38–43. [Google Scholar] [CrossRef] [PubMed] - Amico, L.; Osterloh, A.; Vedral, V. Entanglement in many-body systems. Rev. Mod. Phys.
**2008**, 80, 517–576. [Google Scholar] [CrossRef][Green Version] - Ghosh, S.; Rosenbaum, T.F.; Aeppli, G.; Coppersmith, S.N. Entangled quantum state of magnetic dipoles. Nature
**2003**, 425, 48–51. [Google Scholar] [CrossRef] [PubMed][Green Version] - Machina, M. Risk, ambiguity, and the rank-dependence axioms. Am. Econ. Rev.
**2009**, 99, 385–392. [Google Scholar] [CrossRef][Green Version] - Trueblood, J.S.; Yearsley, J.M.; Pothos, E.M. A quantum probability framework for human probabilistic inference. J. Exp. Psychol. Gen.
**2017**, 146, 1307–1341. [Google Scholar] [CrossRef] - Moreira, C.; Haven, E.; Sozzo, S.; Wichert, A. Process mining with Real World Financial Loan Applications: Improving Inference on Incomplete Event Logs. PLoS ONE
**2018**, 13, e0207806. [Google Scholar] [CrossRef] [PubMed]

**Figure 1.** The probabilistic influence of random variable X on Y represented by a Bayesian network. Note that each node is followed by a conditional probability table that specifies the probability distribution of how node Y is conditioned by node X.

**Figure 2.** A Bayesian network representing the causal relationship between events ${x}_{1}$, ${x}_{2}$, ${x}_{3}$ and ${x}_{4}$. The four variables can be associated with causal knowledge, in our example $Burglary(={x}_{2}$), $Earthquake(={x}_{3}$), $Alarm(={x}_{1}$) and $JohnCalls(={x}_{4}$).

**Figure 3.** (**a**) Two intensity waves $I(y,\theta )$, $I(\neg y,{\theta}_{\neg})$ in relation to the phase ($-2\pi ,2\pi $) with the parametrisation corresponding to the values of Figure 1. Note that the two waves oscillate around $p(y)=0.1950$ and $p(\neg y)=0.8050$ (the two lines). (**b**) Normalisation of the two intensity waves $I(y,\theta )$, $I(\neg y,{\theta}_{\neg})$. The two normalised waves do not oscillate around $p(y)$ and $p(\neg y)$. (**c**) The resulting probability waves as determined by the law of balance; the bigger wave is replaced by the negative smaller one.

**Figure 4.** Probability waves for the experiments described in Table 1 and Table 2. In plots (**a**–**d**) the waves $p(\neg y,{\theta}_{\neg})$ are around $p(\neg y)$ (for (**e**) see Figure 3c). In the plots (**i**–**iv**) the waves $p(y,\theta )$ are around $p(y)$. Additionally, the values ${p}_{sub}(\neg y)$ and ${p}_{sub}(y)$ are indicated by a line. Note that the curves in the plots (**i**–**iv**) overlap.

**Figure 5.** (**a**) With two different phases there is only one combination, and we can project the two-dimensional function onto a one-dimensional function: the cos function in relation to ${\theta}_{1}-{\theta}_{2}$. In (**b**–**d**) we assume that each of the three phases of $p(y,{\theta}_{1},{\theta}_{2},{\theta}_{3})$ and $p(\neg y,{\theta}_{\neg 1},{\theta}_{\neg 2},{\theta}_{\neg 3})$ is zero in turn and obtain three different plots which approximate the three-dimensional function by three projections onto two dimensions. In (**b**) we assume ${\theta}_{\neg 3}={\theta}_{3}=0$. In (**c**) we assume ${\theta}_{\neg 2}={\theta}_{2}=0$. In (**d**) we assume ${\theta}_{\neg 1}={\theta}_{1}=0$.

**Figure 6.** Probability waves. Since the parameter values are small, $Burglary(={x}_{2}=0.001$), $Earthquake(={x}_{3}=0.002$), the interference part is not noticeable.

**Figure 7.** Probability waves. Since the parameter values are increased, $Burglary(={x}_{2}=0.5$), $Earthquake(={x}_{3}=0.2$), the interference part is noticeable.

**Table 1.** Experimental results obtained in four different works of the literature for the prisoner’s dilemma game. The column $p(\neg y|\neg x)$ corresponds to the probability of $defecting$ given that it is known that the other participant chose to $defect$. The column $p(\neg y|x)$ corresponds to the probability of $defecting$ given that it is known that the other participant chose to $cooperate$. The column ${p}_{sub}(\neg y)$ corresponds to the subjective probability of the second participant choosing the $defect$ action when there is no information about whether prisoner x cooperates or defects. The column $p(\neg y)$ corresponds to the classical probability. Finally, the column Sample Size describes the number of participants used in each experiment of the Prisoner’s Dilemma game. ${}^{a}$ corresponds to the average results of all seven experiments reported.

| Experiment | $p(\neg y\vert \neg x)$ | $p(\neg y\vert x)$ | ${p}_{sub}(\neg y)$ | $p(\neg y)$ | Sample Size |
|---|---|---|---|---|---|
| (a) Tversky and Shafir [27] | 0.97 | 0.84 | 0.63 | 0.9050 | 80 |
| (b) Li and Taplin [26] ${}^{a}$ | 0.82 | 0.77 | 0.72 | 0.7950 | 30 |
| (c) Busemeyer et al. [24] | 0.91 | 0.84 | 0.66 | 0.8750 | 88 |
| (d) Hristova and Grinberg [25] | 0.97 | 0.93 | 0.88 | 0.9500 | 20 |
| (e) Average | 0.92 | 0.85 | 0.72 | 0.8813 | 54 |

**Table 2.** Experimental results obtained in three different works of the literature indicating the probability of a player choosing to make a second gamble in the two stage gambling game. The column $p(y|\neg x)$ corresponds to the probability when the outcome of the first gamble is known to be a loss. The column $p(y|x)$ corresponds to the probability when the outcome of the first gamble is known to be a win. Finally, the column ${p}_{sub}(y)$ corresponds to the subjective probability when the outcome of the first gamble is not known. The column $p(y)$ corresponds to the classical probability. ${}^{a}$ corresponds to the average results of all four experiments reported.

| Experiment | $p(y\vert \neg x)$ | $p(y\vert x)$ | ${p}_{sub}(y)$ | $p(y)$ | Sample Size |
|---|---|---|---|---|---|
| (i) Tversky and Shafir [27] | 0.58 | 0.69 | 0.37 | 0.6350 | 98 |
| (ii) Kuhberger et al. [28] ${}^{a}$ | 0.47 | 0.72 | 0.48 | 0.5950 | 135 |
| (iii) Lambdin and Burdsal [29] | 0.45 | 0.63 | 0.41 | 0.5400 | 57 |
| (iv) Average | 0.50 | 0.68 | 0.42 | 0.5900 | 96 |

**Table 3.** Probability waves, the resulting probabilities ${p}_{q}$ that are based on the law of maximal uncertainty, the subjective probability and the classical probability values. Entries (a)–(e) are based on the principle of entropy and entries (i)–(iv) are based on the mirror principle.

| Experiment: Prisoner’s Dilemma | ${I}_{\neg y}$ | ${p}_{sub}(\neg y)$ | ${p}_{q}(\neg y)$ | $p(\neg y)$ |
|---|---|---|---|---|
| (a) Tversky and Shafir [27] | [0.84, 0.97] | 0.63 | 0.84 | 0.91 |
| (b) Li and Taplin [26] | [0.59, 1.00] | 0.72 | 0.59 | 0.79 |
| (c) Busemeyer et al. [24] | [0.76, 1.00] | 0.66 | 0.76 | 0.88 |
| (d) Hristova and Grinberg [25] | [0.90, 1.00] | 0.88 | 0.90 | 0.95 |
| (e) Average | [0.77, 0.99] | 0.72 | 0.77 | 0.88 |

| Experiment: Two Stage Gamble | ${I}_{y}$ | ${p}_{sub}(y)$ | ${p}_{q}(y)$ | $p(y)$ |
|---|---|---|---|---|
| (i) Tversky and Shafir [27] | [0.27, 0.98] | 0.37 | 0.36 | 0.64 |
| (ii) Kuhberger et al. [28] | [0.20, 0.98] | 0.48 | 0.39 | 0.59 |
| (iii) Lambdin and Burdsal [29] | [0.09, 0.99] | 0.41 | 0.45 | 0.54 |
| (iv) Average | [0.19, 0.99] | 0.42 | 0.40 | 0.59 |

**Table 4.** Comparison between the Quantum Prospect Decision Theory (PDT), see Yukalov and Sornette [41], the dynamic heuristic (DH), see Moreira and Wichert [1], and the law of maximal uncertainty (MU) of the balanced quantum-like model. The results of the dynamic heuristic (DH) and the law of maximal uncertainty (MU) are similar; however, the law of maximal uncertainty (MU) was not adapted to a domain.

| Experiment: Prisoner’s Dilemma | Observed | PDT | DH | MU |
|---|---|---|---|---|
| (a) Tversky and Shafir [27] | 0.63 | 0.65 | 0.64 | 0.84 |
| (b) Li and Taplin [26] | 0.72 | 0.54 | 0.71 | 0.59 |
| (c) Busemeyer et al. [24] | 0.66 | 0.63 | 0.80 | 0.76 |
| (d) Hristova and Grinberg [25] | 0.88 | 0.70 | 0.90 | 0.90 |
| (e) Average | 0.72 | 0.63 | 0.76 | 0.77 |

| Experiment: Two-Stage Gamble | Observed | PDT | DH | MU |
|---|---|---|---|---|
| (i) Tversky and Shafir [27] | 0.37 | 0.39 | 0.36 | 0.36 |
| (ii) Kuhberger et al. [28] | 0.48 | 0.35 | 0.40 | 0.39 |
| (iii) Lambdin and Burdsal [29] | 0.41 | 0.29 | 0.41 | 0.45 |
| (iv) Average | 0.42 | 0.34 | 0.39 | 0.40 |

© 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).

## Share and Cite

**MDPI and ACS Style**

Wichert, A.; Moreira, C.; Bruza, P.
Balanced Quantum-Like Bayesian Networks. *Entropy* **2020**, *22*, 170.
https://doi.org/10.3390/e22020170
