# Game Theory-Inspired Evolutionary Algorithm for Global Optimization

## Abstract


## 1. Introduction

- A novel game evolutionary algorithm (GameEA) is introduced as a framework that simulates human game behavior. GameEA includes imitation and belief learning strategies and a payoff expectation mechanism. The learning strategies are used to improve the competitiveness of the players, while the payoff expectation formulation is employed to estimate and track the information of the players.
- We compared GameEA with the standard genetic algorithm (StGA) [24], island-model genetic algorithms (IMGA) [25], finding multimodal solutions using restricted tournament selection (RTS) [26], and the dual-population genetic algorithm (DPGA) [27]; GameEA outperforms the four compared algorithms in terms of stability and accuracy.

## 2. Related Works

## 3. Proposed Algorithm: GameEA

#### 3.1. Fundamentals of GameEA

- Stable payoffs are achieved by a player after winning against an opponent and another challenger.
- If a player accepts a game, then he/she can learn something from the opponent whether he/she loses or wins, which indirectly influences future competitions.
- If a player gives up a competition, then he/she can improve through self-training.

Payoffs w_{1} and w_{2} indicate the payoffs from fighting with nature’s choice and the gains from self-training, respectively. Nature chooses a weaker rival with probability p and provides a stronger opponent with probability 1 − p. Nature’s choice with probability p, which is characterized as weaker, exhibits different performances for various challengers, even for homogeneous challengers at different stages. Thus, payoffs w_{1} and w_{2} are undetermined. The participant chosen by nature can achieve gains, which is distinct from the traditional Harsanyi Transformation, in which nature’s choice is a virtual player who does not benefit from the game.

If the challenger accepts the game, then its expectation payoff can be calculated as (1 − p) × w_{1} + p × (w_{1} + 1). Otherwise, if the challenger gives up the competition, then its expectation payoff can be calculated as (1 − p) × w_{2} + p × w_{2} = w_{2}. Thus, the total expectation payoff E, the expected gain of accepting over giving up, is calculated using Equation (1):

$E = \left[(1 - p) \times w_{1} + p \times (w_{1} + 1)\right] - \left[(1 - p) \times w_{2} + p \times w_{2}\right] = w_{1} + p - w_{2}$ (1)

Let w = w_{2} − w_{1} be substituted into Equation (1). Hence, we can obtain Equation (2):

$E = p - w$ (2)
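The derivation above can be checked numerically. A minimal sketch follows; the helper names are illustrative (not from the paper), and E is taken as the accept-minus-give-up difference:

```python
def expected_payoff(p, w1, w2):
    """Equation (1): expected payoff of accepting the game minus giving up.

    Accepting yields (1 - p) * w1 + p * (w1 + 1); giving up yields
    (1 - p) * w2 + p * w2 = w2.
    """
    accept = (1 - p) * w1 + p * (w1 + 1)
    give_up = (1 - p) * w2 + p * w2
    return accept - give_up

def expected_payoff_simplified(p, w):
    """Equation (2): substituting w = w2 - w1 reduces Equation (1) to p - w."""
    return p - w
```

A positive value means the challenger expects to gain more by accepting the game than by self-training.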

#### 3.2. Framework of Proposed Algorithm

A player I_{i} selects an opponent I_{j} as his/her challenger, and the challenger makes a decision by carefully checking the selected player. At the beginning of the game evolution, considering that the players are very weak, namely I_{i}^{v} = 0, we let the player perform the imitation operation with probability P_{1} to produce a new individual (shown in step 7). As evolution proceeds, the challenger I_{i} makes a decision by carefully checking the selected player I_{j}, and then either the imitation or the belief learning procedure is employed for offspring generation. If player I_{j} is more competitive than player I_{i}, player I_{i} decides to imitate some genes from player I_{j} by applying the imitation operator. Otherwise, player I_{i} insists that it will become more competitive by applying the belief learning operator. GameEA has only one population (set of players) and generates new offspring through the imitation operator between the challenger and the opponent and through the belief learning operator via self-training strategies. In GameEA, the objective values are not used to calculate dominance among the players; instead, the total expectation payoffs based on historical information are employed to help each player become a rational decision-maker. The implementation details of each component of GameEA are described in the succeeding paragraphs.

Algorithm 1. GameEA.

Begin

1. t := 1; // iteration number

2. Initialize the player set I, with I_{i}^{a} := 0 and I_{i}^{p} := 0 (|I| = N, 1 ≤ i ≤ N); // initialize the player population

3. Evaluate f(I); // for each I_{i} of I, evaluate I_{i}^{obj}

4. while t ≤ T_{max} do

5. Select two different competitors I_{i} and I_{j} from I;

6. Refresh the payoffs of I_{i} and I_{j}: I_{i}^{v} := I_{i}^{a} + I_{i}^{p}, I_{j}^{v} := I_{j}^{a} + I_{j}^{p}; // the following steps reproduce a new player I_{i}

7. if I_{i}^{v} == 0 && random() < P_{1} then perform the imitation operator: I_{i} = imitation(I_{i}, I_{j});

8. else

9. Calculate the expectation payoff E(I_{i}) of I_{i} using Equation (3);

10. if E(I_{i}) > 0 then perform the imitation operator: I_{i} = imitation(I_{i}, I_{j});

11. else if random() < P_{2} then perform the belief learning operator: I_{i} = beliefLearning(I_{i});

12. t := t + 1;

13. end while

end
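The main loop of Algorithm 1 can be sketched compactly in Python. The population layout (a list of dicts) and the operator signatures are illustrative stand-ins, not the paper's implementation; minimization is assumed:

```python
import random

def game_ea(objective, init_pop, imitation, belief_learning,
            expectation, t_max, p1=0.9, p2=0.1):
    """Minimal sketch of the GameEA main loop (Algorithm 1)."""
    players = init_pop()                        # step 2: player set I
    for pl in players:
        pl["a"] = pl["p"] = 0                   # active / passive payoffs
        pl["obj"] = objective(pl["genes"])      # step 3: evaluate f(I)
    t = 1
    while t <= t_max:                           # step 4
        pi, pj = random.sample(players, 2)      # step 5
        pi["v"] = pi["a"] + pi["p"]             # step 6: refresh total payoffs
        pj["v"] = pj["a"] + pj["p"]
        if pi["v"] == 0 and random.random() < p1:
            imitation(pi, pj)                   # step 7: weak early player imitates
        elif expectation(pi) > 0:               # steps 9-10: E(I_i) > 0
            imitation(pi, pj)
        elif random.random() < p2:
            belief_learning(pi)                 # step 11: self-training
        t += 1
    return min(players, key=lambda pl: pl["obj"])
```

The operators are expected to update a player's genes, objective value, and payoff counters in place.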

#### 3.3. Initialization of the Players Population

The initialization involves the random generation of the player set **I** and the assignment of the passive and active payoffs of each player to zero. For optimization problems, the set of players **I** should be randomly sampled from the decision space **R**^{n} via a uniform distribution using a real-value representation. The objective value of each player is calculated using the objective function.
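This initialization can be sketched as follows; the data layout (a dict per player) is an assumption for illustration:

```python
import random

def init_players(n_players, bounds, objective):
    """Uniformly sample the player set I from the decision space R^n.

    `bounds` is a list of (low, high) pairs, one per dimension; the active
    and passive payoffs I_i^a and I_i^p start at zero (Section 3.3).
    """
    players = []
    for _ in range(n_players):
        genes = [random.uniform(lo, hi) for lo, hi in bounds]
        players.append({"genes": genes,
                        "obj": objective(genes),   # evaluate on creation
                        "a": 0, "p": 0})
    return players
```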

#### 3.4. Imitation Operator

The imitation operator is applied to the challenger I_{i} with probability P_{1}, which is a very high number, such as 0.9. The weak challenger I_{i} must compete with the selected I_{j} and imitate useful information from others by using the operator imitation(I_{i}, I_{j}). However, the challenger may do nothing in the current game so as to preserve some schema. This approach is a special strategy with varying conditions and unchanging genes when the player does not have substantial information about others.

The decision to learn speculatively depends on the total number of speculations (H_{a} + 1) and the total number of imitations (2H_{s} + 1). If random() × (H_{a} + 1)/(2H_{s} + 1) < P_{3}, then the conditions indicate that the player should improve by speculatively learning from others. For example, using a method that exhibits perfect performance on a problem in one field to address a problem in another field frequently yields good and unexpected results. This outcome is related to speculative learning because it is based on guesses or ideas about what may happen or be true rather than on what is factual. Strategically copying a segment of genes entails obtaining chromosomes from others, which may result in certain improvement. Some companies achieve substantial success by investing in new technology; other companies allocate resources to follow their investment strategies, and the followers are likely to benefit. Thus, strategic copying or learning is proposed.

Let r_{1} = rand(0, n − 1), r_{2} = rand(0, n − 1), and β = random(); if β < 0.5, then $\tau ={(2\beta )}^{1/16}$. The genes of I_{i}, which speculatively learns from I_{j}, are modified as follows (line 5, Algorithm 2):

- (1) I_{p}.gen[r_{2}] = 0.5[(1 − τ) I_{i}.gen[r_{2}] + (1 + τ) I_{j}.gen[r_{1}]].
- (2) If the value of I_{p}.gen[r_{2}] is out of range, then a random value must be assigned to I_{p}.gen[r_{2}].
- (3) I_{i} strategically copies a segment of genes from I_{j} via I_{p}.gen[r_{1}] = 0.5[(1 − τ) I_{i}.gen[r_{1}] + (1 + τ) I_{j}.gen[r_{1}]].
- (4) If the value of I_{p}.gen[r_{1}] is out of the decision space, then a random value must be assigned to I_{p}.gen[r_{1}].

Algorithm 2. Imitation(I_{i}, I_{j}).

Begin

1. if I_{i}^{obj} < I_{j}^{obj} then I_{i}^{a} := I_{i}^{a} + 1;

2. else I_{j}^{p} := I_{j}^{p} + 1;

3. Initialize temporary variable I_{p} := I_{i} and B := 0;

4. if (random() × (H_{a} + 1)/(2H_{s} + 1)) < P_{3} then

5. Modify the genes of I_{p} by speculatively learning from I_{j};

6. B := 1 and H_{a} := H_{a} + 1;

7. else change the genes of I_{p} by strategically copying a segment of genes from I_{j};

8. Update the objective value I_{p}^{obj};

9. if I_{p}^{obj} < I_{i}^{obj} then I_{i} := I_{p};

10. if B == 1 then H_{s} := H_{s} + 1;

11. return I_{i};

end
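Algorithm 2 can be sketched as below. Two points are assumptions of this sketch, not statements from the paper: the τ branch for β ≥ 0.5 is not given in the text, so a symmetric SBX-style form is used; and H_{s} is incremented only when a speculative offspring actually replaces its parent, matching its description as the count of *successful* speculations. The data layout (dicts, a shared counter dict) is illustrative:

```python
import random

def sbx_tau(beta):
    # beta < 0.5 follows the text: tau = (2*beta)**(1/16).
    # The beta >= 0.5 branch is an ASSUMPTION (symmetric SBX-style form).
    if beta < 0.5:
        return (2 * beta) ** (1 / 16)
    return (1.0 / (2 * (1 - beta))) ** (1 / 16)

def imitation(pi, pj, bounds, objective, counters, p3=0.1):
    """Sketch of Algorithm 2 (minimization assumed)."""
    if pi["obj"] < pj["obj"]:
        pi["a"] += 1                               # line 1: active payoff
    else:
        pj["p"] += 1                               # line 2: passive payoff
    child = dict(pi, genes=list(pi["genes"]))      # line 3: temporary I_p
    b = 0
    n = len(pi["genes"])
    r1, r2 = random.randrange(n), random.randrange(n)
    tau = sbx_tau(random.random())
    lo, hi = bounds
    if random.random() * (counters["Ha"] + 1) / (2 * counters["Hs"] + 1) < p3:
        # line 5: speculative learning from I_j
        child["genes"][r2] = 0.5 * ((1 - tau) * pi["genes"][r2]
                                    + (1 + tau) * pj["genes"][r1])
        b = 1
        counters["Ha"] += 1                        # line 6
    else:
        # line 7: strategically copy a gene segment from I_j
        child["genes"][r1] = 0.5 * ((1 - tau) * pi["genes"][r1]
                                    + (1 + tau) * pj["genes"][r1])
    child["genes"] = [g if lo <= g <= hi else random.uniform(lo, hi)
                      for g in child["genes"]]     # out-of-range repair
    child["obj"] = objective(child["genes"])       # line 8
    if child["obj"] < pi["obj"]:                   # line 9: elitist replacement
        pi.update(child)
        if b == 1:
            counters["Hs"] += 1                    # line 10: successful speculation
    return pi
```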

#### 3.5. Belief Learning Operator

The belief learning operator is applied with probability P_{2}. If belief learning is expected to run for a significant amount of time, then P_{2} is assigned a high value; otherwise, P_{2} is given a low value. If a characteristic of a solution must be emphasized, then special knowledge can be used to specify a belief learning algorithm. Algorithm 3 presents a belief learning procedure for real-value representation problems. An offspring replaces its parent only when the offspring is better than its parent; thus, an elitist conservation strategy is implemented. The belief learning operator differs from the belief space of a cultural algorithm [50], which is divided into distinct categories that represent different fields of knowledge that the population has regarding the search space; that belief space is updated after each iteration by the best individuals of the population.

Algorithm 3. BeliefLearning(I_{i}). // for real-valued representation

Begin

1. r_{1} := rand(0, n − 1), β := random();

2. Δ := difference between the maximum and minimum values of the r_{1}th dimension;

3. if β < 0.5 then τ = (2β)^{1/21} − 1;

4. else τ = 1 − (2 − 2β)^{1/21};

5. I_{i}.gen[r_{1}] = I_{i}.gen[r_{1}] + τ × Δ;

6. if the value of I_{i}.gen[r_{1}] is out of the given range

7. then assign a random value within the range to I_{i}.gen[r_{1}];

end
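A sketch of Algorithm 3 follows. The elitist check (keep the parent unless the offspring is better) follows the prose of Section 3.5 rather than the bare listing, and the data layout is illustrative:

```python
import random

def belief_learning(pi, bounds, objective):
    """Sketch of Algorithm 3 for the real-valued representation."""
    n = len(pi["genes"])
    r1 = random.randrange(n)                   # line 1
    beta = random.random()
    lo, hi = bounds
    delta = hi - lo                            # line 2: range of dimension r1
    if beta < 0.5:                             # lines 3-4: polynomial-style tau
        tau = (2 * beta) ** (1 / 21) - 1
    else:
        tau = 1 - (2 - 2 * beta) ** (1 / 21)
    child = dict(pi, genes=list(pi["genes"]))
    child["genes"][r1] += tau * delta          # line 5: perturb one gene
    if not lo <= child["genes"][r1] <= hi:     # lines 6-7: repair
        child["genes"][r1] = random.uniform(lo, hi)
    child["obj"] = objective(child["genes"])
    if child["obj"] < pi["obj"]:               # elitist conservation (Section 3.5)
        pi.update(child)
    return pi
```

Because the replacement is elitist, repeated applications can only improve (or keep) the parent's objective value.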

#### 3.6. Players Set Updating Strategy

## 4. Performance Comparison and Experimental Results

#### 4.1. Test Problems and Compared Algorithms

The benchmark functions f_{1}–f_{13} shown in Table 2 have been considered in our experiments on real-valued representations; they are widely adopted for comparing the capabilities of evolutionary algorithms. Functions f_{1}–f_{7} are unimodal functions with one peak within the entire given domain, while functions f_{8}–f_{13} are multimodal functions with a large number of local optima in the search space. Column n in Table 2 indicates the dimensions used.
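For concreteness, a few of the Table 2 benchmarks written out in Python (each attains its global minimum of 0 at the origin):

```python
import math

def sphere(x):
    """f1: Sphere, sum of squares."""
    return sum(v * v for v in x)

def rastrigin(x):
    """f10: Rastrigin, 10n + sum(x_i^2 - 10 cos(2 pi x_i))."""
    return 10 * len(x) + sum(v * v - 10 * math.cos(2 * math.pi * v) for v in x)

def ackley(x):
    """f13: Ackley, as given in Table 2."""
    n = len(x)
    root = math.sqrt(sum(v * v for v in x) / n)
    cosine = sum(math.cos(2 * math.pi * v) for v in x) / n
    return 20 + math.e - 20 * math.exp(-0.2 * root) - math.exp(cosine)
```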

#### 4.2. Experimental Setup

The payoff weight W_{1}, the losses weight W_{2}, the learning probability P_{1}, the mutation probability P_{2}, and the speculative probability P_{3} were set to 0.9, 0.01, 0.9, 0.1, and 0.1, respectively. For each function, the maximum number of iterations of GameEA was set equal to that of the compared algorithms. Table 3 summarizes the iterations used for each function in its iteration column. The following sections present the statistical experimental results obtained for the functions mentioned above.

#### 4.3. Results and Comparison Analysis

For functions f_{1}, f_{2}, and f_{4}, GameEA obtains the best results, with averages of 4.33 × 10^{−96}, 1.52 × 10^{−66}, and 0.0374, respectively. DPGA shows the second-best result, with an average of 1.47 × 10^{−52} for function f_{1}, and RTS is third on the list. The convergence accuracy of StGA is slightly worse than that of IMGA for functions f_{1} and f_{2}, but IMGA is worse than StGA for function f_{4}.

For function f_{4}, RTS, with an average of 1.391, is the silver medalist, but GameEA’s average is less than 3% of that of RTS.

For function f_{6}, GameEA is significantly better than StGA and DPGA; it is similar to IMGA and RTS but has a smaller standard deviation. For functions f_{5} and f_{7}, all of the above-mentioned algorithms converge to the global minimum.

Among functions f_{8}–f_{13}, which have a large number of local optima, all the algorithms converge to the global minimum on functions f_{8} and f_{9}. RTS, DPGA, and GameEA show no significant difference on function f_{10}, converging to the global optimum in each independent trial, which means that all three algorithms can solve such problems.

GameEA achieves the best performance on functions f_{11}–f_{13}, which demonstrates its search ability, stability, and robustness. It is remarkable that StGA and DPGA do not converge to the global optimal solution despite having a standard deviation of zero in all trials for function f_{13}, which indicates that those algorithms stall at local optimum points, whereas GameEA escapes and continually evolves toward the global minimum. We also examined the final solutions of GameEA for f_{13}: although its average optimum is not perfect, thirty-nine out of fifty trials obtain the global optimum.

For function f_{3}, GameEA is obviously worse than the other four algorithms. When we scrutinize the 50 experimental outputs, we find that the best solution is 4.43 × 10^{−12} and the worst optimum is 0.0122, which means that the final results span a wide range and, thus, have a poor average and standard deviation.

All the algorithms converge to the global minimum on functions f_{5}, f_{7}, f_{8}, and f_{9}, so we do not apply the Friedman test to those functions.

On the whole, the algorithms perform similarly on functions f_{6} and f_{13}. It is clear that GameEA gives the best performance among all the compared algorithms according to Table 3. For function f_{2}, the null hypothesis was rejected according to Table 4, and GameEA has the best performance according to Table 3, but the pairwise comparison was rejected only in the case of StGA. In addition, for function f_{3}, Table 3 indicates that GameEA gives the worst performance, but the Friedman test results show that the distributions of IMGA, RTS, DPGA, and GameEA are the same according to Table 4; the null hypothesis of the pairwise comparison was rejected only in the case of StGA. Additionally, for functions f_{6} and f_{13}, although the algorithms show different statistics in Table 3, their null hypotheses were not rejected, which means that all the algorithms could offer similar performance.

For function f_{1}, we checked the detailed Friedman test results; Figure 1 and Table 5 show the detailed statistics of the Friedman test.

## 5. Conclusions

## Acknowledgments

## Conflicts of Interest

## References

1. Dorigo, M.; Gambardella, L.M. Ant colony system: A cooperative learning approach to the traveling salesman problem. IEEE Trans. Evol. Comput. **1997**, 1, 53–66.
2. Yang, X. Firefly algorithm, stochastic test functions and design optimisation. Int. J. BioInspir. Comput. **2010**, 2, 78–84.
3. Li, X.; Shao, Z.; Qian, J. An optimizing method based on autonomous animats: Fish-swarm algorithm. Syst. Eng. Theory Pract. **2002**, 22, 32–38.
4. Neshat, M.; Sepidnam, G.; Sargolzaei, M.; Toosi, A.N. Artificial fish swarm algorithm: A survey of the state-of-the-art, hybridization, combinatorial and indicative applications. Artif. Intell. Rev. **2014**, 42, 965–997.
5. Karaboga, D.; Basturk, B. A powerful and efficient algorithm for numerical function optimization: Artificial bee colony (ABC) algorithm. J. Glob. Optim. **2007**, 39, 459–471.
6. Mernik, M.; Liu, S.; Karaboga, D.; Črepinšek, M. On clarifying misconceptions when comparing variants of the Artificial Bee Colony Algorithm by offering a new implementation. Inf. Sci. **2015**, 291, 115–127.
7. Yang, X.; Deb, S. Cuckoo search via Lévy flights. In Proceedings of the 2009 World Congress on Nature & Biologically Inspired Computing (NaBIC), Coimbatore, India, 9–11 December 2009.
8. Woldemariam, K.M.; Yen, G.G. Vaccine-Enhanced Artificial Immune System for Multimodal Function Optimization. IEEE Trans. Syst. Man Cybern. Part B Cybern. **2010**, 40, 218–228.
9. Maass, W.; Natschläger, T.; Markram, H. Real-time computing without stable states: A new framework for neural computation based on perturbations. Neural Comput. **2002**, 14, 2531–2560.
10. Shi, Y. Brain storm optimization algorithm. In Advances in Swarm Intelligence; Springer: Berlin, Germany, 2011; pp. 303–309.
11. Wikipedia. Game Theory. Available online: https://en.wikipedia.org/wiki/Game_theory (accessed on 3 March 2016).
12. Madani, K.; Hooshyar, M. A game theory-reinforcement learning (GT-RL) method to develop optimal operation policies for multi-operator reservoir systems. J. Hydrol. **2014**, 519, 732–742.
13. Spiliopoulos, L. Pattern recognition and subjective belief learning in a repeated constant-sum game. Games Econ. Behav. **2012**, 75, 921–935.
14. Friedman, D.; Huck, S.; Oprea, R.; Weidenholzer, S. From imitation to collusion: Long-run learning in a low-information environment. J. Econ. Theory **2015**, 155, 185–205.
15. Nax, H.H.; Perc, M. Directional learning and the provisioning of public goods. Sci. Rep. **2015**, 5, 8010.
16. Anderson, S.P.; Goeree, J.K.; Holt, C.A. Stochastic Game Theory: Adjustment to Equilibrium under Noisy Directional Learning; University of Virginia: Charlottesville, VA, USA, 1999.
17. Stahl, D.O. Rule learning in symmetric normal-form games: Theory and evidence. Games Econ. Behav. **2000**, 32, 105–138.
18. Nowak, M.A.; Sigmund, K. Evolutionary Dynamics of Biological Games. Science **2004**, 303, 793–799.
19. Gwak, J.; Sim, K.M. A novel method for coevolving PS-optimizing negotiation strategies using improved diversity controlling EDAs. Appl. Intell. **2013**, 38, 384–417.
20. Gwak, J.; Sim, K.M.; Jeon, M. Novel dynamic diversity controlling EAs for coevolving optimal negotiation strategies. Inf. Sci. **2014**, 273, 1–32.
21. Rosenstrom, T.; Jylha, P.; Pulkki-Raback, L.; Holma, M.; Raitakari, I.T.; Isometsa, E.; Keltikangas-Jarvinen, L. Long-term personality changes and predictive adaptive responses after depressive episodes. Evol. Hum. Behav. **2015**, 36, 337–344.
22. Szubert, M.; Jaskowski, W.; Krawiec, K. On Scalability, Generalization, and Hybridization of Coevolutionary Learning: A Case Study for Othello. IEEE Trans. Comput. Intell. AI Games **2013**, 5, 214–226.
23. Yang, G.C.; Wang, Y.; Li, S.B.; Xie, Q. Game evolutionary algorithm based on behavioral game theory. J. Huazhong Univ. Sci. Technol. (Nat. Sci. Ed.) **2016**, 7, 68–73.
24. Holland, J.H. Adaptation in Natural and Artificial Systems; University of Michigan Press: Ann Arbor, MI, USA, 1975.
25. Alba, E.; Tomassini, M. Parallelism and evolutionary algorithms. IEEE Trans. Evol. Comput. **2002**, 6, 443–462.
26. Harik, G.R. Finding Multimodal Solutions Using Restricted Tournament Selection. In Proceedings of the 6th International Conference on Genetic Algorithms, San Francisco, CA, USA, 15–19 July 1995.
27. Park, T.; Ryu, K.R. A Dual-Population Genetic Algorithm for Adaptive Diversity Control. IEEE Trans. Evol. Comput. **2010**, 14, 865–884.
28. Kontogiannis, S.; Spirakis, P. Evolutionary games: An algorithmic view. In Lecture Notes in Computer Science; Babaoglu, O., Jelasity, M., Montresor, A., Eds.; Springer: Berlin, Germany, 2005; pp. 97–111.
29. Ganesan, T.; Elamvazuthi, I.; Vasant, P. Multiobjective design optimization of a nano-CMOS voltage-controlled oscillator using game theoretic-differential evolution. Appl. Soft Comput. **2015**, 32, 293–299.
30. Liu, W.; Wang, X. An evolutionary game based particle swarm optimization algorithm. J. Comput. Appl. Math. **2008**, 214, 30–35.
31. Koh, A. An evolutionary algorithm based on Nash Dominance for Equilibrium Problems with Equilibrium Constraints. Appl. Soft Comput. **2012**, 12, 161–173.
32. He, J.; Yao, X. A game-theoretic approach for designing mixed mutation strategies. In Lecture Notes in Computer Science; Wang, L., Chen, K., Ong, Y.S., Eds.; Springer: Berlin, Germany, 2005; pp. 279–288.
33. Periaux, J.; Chen, H.Q.; Mantel, B.; Sefrioui, M.; Sui, H.T. Combining game theory and genetic algorithms with application to DDM-nozzle optimization problems. Finite Elem. Anal. Des. **2001**, 37, 417–429.
34. Lee, D.; Gonzalez, L.F.; Periaux, J.; Srinivas, K.; Onate, E. Hybrid-Game Strategies for multi-objective design optimization in engineering. Comput. Fluids **2011**, 47, 189–204.
35. Dorronsoro, B.; Burguillo, J.C.; Peleteiro, A.; Bouvry, P. Evolutionary Algorithms Based on Game Theory and Cellular Automata with Coalitions. In Handbook of Optimization: From Classical to Modern Approach; Zelinka, I., Snášel, V., Abraham, A., Eds.; Springer: Berlin/Heidelberg, Germany, 2013; pp. 481–503.
36. Greiner, D.; Periaux, J.; Emperador, J.M.; Galván, B.; Winter, G. Game Theory Based Evolutionary Algorithms: A Review with Nash Applications in Structural Engineering Optimization Problems. Arch. Comput. Methods Eng. **2016**.
37. Niyato, D.; Hossain, E.; Zhu, H. Dynamics of Multiple-Seller and Multiple-Buyer Spectrum Trading in Cognitive Radio Networks: A Game-Theoretic Modeling Approach. IEEE Trans. Mob. Comput. **2009**, 8, 1009–1022.
38. Wei, G.; Vasilakos, A.V.; Zheng, Y.; Xiong, N. A game-theoretic method of fair resource allocation for cloud computing services. J. Supercomput. **2010**, 54, 252–269.
39. Jiang, G.; Shen, S.; Hu, K.; Huang, L.; Li, H.; Han, R. Evolutionary game-based secrecy rate adaptation in wireless sensor networks. Int. J. Distrib. Sens. Netw. **2015**, 2015, 25.
40. Tembine, H.; Altman, E.; El-Azouzi, R.; Hayel, Y. Evolutionary Games in Wireless Networks. IEEE Trans. Syst. Man Cybern. Part B Cybern. **2010**, 40, 634–646.
41. Fontanari, J.F.; Perlovsky, L.I. A game theoretical approach to the evolution of structured communication codes. Theory Biosci. **2008**, 127, 205–214.
42. Mejia, M.; Pena, N.; Munoz, J.L.; Esparza, O.; Alzate, M.A. A game theoretic trust model for on-line distributed evolution of cooperation in MANETs. J. Netw. Comput. Appl. **2011**, 34, 39–51.
43. Bulo, S.R.; Torsello, A.; Pelillo, M. A game-theoretic approach to partial clique enumeration. Image Vis. Comput. **2009**, 27, 911–922.
44. Misra, S.; Sarkar, S. Priority-based time-slot allocation in wireless body area networks during medical emergency situations: An evolutionary game-theoretic perspective. IEEE J. Biomed. Health Inform. **2015**, 19, 541–548.
45. Qin, Z.; Wan, T.; Dong, Y.; Du, Y. Evolutionary collective behavior decomposition model for time series data mining. Appl. Soft Comput. **2015**, 26, 368–377.
46. Hausknecht, M.; Lehman, J.; Miikkulainen, R.; Stone, P. A neuroevolution approach to general atari game playing. IEEE Trans. Comput. Intell. AI Games **2014**, 6, 355–366.
47. Hu, H.; Stuart, H.W. An epistemic analysis of the Harsanyi transformation. Int. J. Game Theory **2002**, 30, 517–525.
48. Colman, A.M. Cooperation, psychological game theory, and limitations of rationality in social interaction. Behav. Brain Sci. **2003**, 26, 139.
49. Borgers, T.; Sarin, R. Learning through reinforcement and replicator dynamics. J. Econ. Theory **1997**, 77, 1–14.
50. Reynolds, R.G. Cultural algorithms: Theory and applications. In New Ideas in Optimization; Corne, D., Dorigo, M., Glover, F., Eds.; McGraw-Hill Ltd.: Maidenhead, UK, 1999; pp. 367–378.
51. Derrac, J.; García, S.; Molina, D.; Herrera, F. A practical tutorial on the use of nonparametric statistical tests as a methodology for comparing evolutionary and swarm intelligence algorithms. Swarm Evol. Comput. **2011**, 1, 3–18.
52. Draa, A. On the performances of the flower pollination algorithm—Qualitative and quantitative analyses. Appl. Soft Comput. **2015**, 34, 349–371.
53. Veček, N.; Mernik, M.; Črepinšek, M. A chess rating system for evolutionary algorithms: A new method for the comparison and ranking of evolutionary algorithms. Inf. Sci. **2014**, 277, 656–679.
54. GitHub, Inc. Available online: https://github.com/simonygc/GameEA.git (accessed on 14 July 2017).
55. Fernandes, F.E.; Guanci, Y.; Do, H.M. Detection of privacy-sensitive situations for social robots in smart homes. In Proceedings of the 2016 IEEE International Conference on Automation Science and Engineering (CASE), Fort Worth, TX, USA, 21–25 August 2016.

**Table 1.** Description of symbols.

| Symbol | Description | Symbol | Description |
|---|---|---|---|
| T_{max} | Maximum game iteration number | N | Size of players/population |
| W_{1} | Payoffs weight | W_{2} | Losses weight |
| P_{1} | Imitation probability | P_{2} | Learning probability |
| P_{3} | Speculative probability | n | Dimension of problem |
| t | tth game generation | H_{a} | Total number of speculations |
| H_{s} | Total number of successful speculations | I_{i} | ith player/individual |
| I_{i}^{a} | Active payoff of I_{i} | I_{i}^{p} | Passive payoff of I_{i} |
| I_{i}^{obj} | Objective value of I_{i} | I_{i}^{v} | Total payoffs of I_{i} |

**Table 2.** Benchmark functions used in the experiments.

| No. | Name | n | Function | Range |
|---|---|---|---|---|
| f_{1} | Sphere | 30 | $f(x)={\displaystyle {\sum}_{i=1}^{n}{x}_{i}^{2}}$ | x_{i} ∈ [−100, 100] |
| f_{2} | Schwefel 2.22 | 30 | $f(x)={\displaystyle {\sum}_{i=1}^{n}\left|{x}_{i}\right|}+{\displaystyle {\prod}_{i=1}^{n}\left|{x}_{i}\right|}$ | x_{i} ∈ [−10, 10] |
| f_{3} | Schwefel 2.21 | 30 | $f(x)=\mathrm{max}\{\left|{x}_{i}\right|,1\le i\le n\}$ | x_{i} ∈ [−100, 100] |
| f_{4} | Rosenbrock | 30 | $f(x)={\displaystyle {\sum}_{i=1}^{n-1}[100{({x}_{i+1}-{x}_{i}^{2})}^{2}+{(1-{x}_{i})}^{2}]}$ | x_{i} ∈ [−30, 30] |
| f_{5} | Step | 30 | $f(x)={\displaystyle {\sum}_{i=1}^{n}{\lfloor {x}_{i}+0.5\rfloor}^{2}}$ | x_{i} ∈ [−100, 100] |
| f_{6} | Noisy Quartic | 30 | $f(x)={\displaystyle {\sum}_{i=1}^{n}i{x}_{i}^{4}}+\mathrm{random}[0,1)$ | x_{i} ∈ [−1.28, 1.28] |
| f_{7} | Goldstein-Price | 2 | $\begin{array}{l}f(x)=(1+{({x}_{1}+{x}_{2}+1)}^{2}(19-14{x}_{1}+3{x}_{1}^{2}-14{x}_{2}+6{x}_{1}{x}_{2}+3{x}_{2}^{2}))g(x)\\ g\left(x\right)=30+{(2{x}_{1}-3{x}_{2})}^{2}(18-32{x}_{1}+12{x}_{1}^{2}+48{x}_{2}-36{x}_{1}{x}_{2}+27{x}_{2}^{2})\end{array}$ | x_{i} ∈ [−2, 2] |
| f_{8} | Branin | 2 | $f(x)={\left({x}_{2}-\frac{5.1{x}_{1}^{2}}{4{\mathsf{\pi}}^{2}}+\frac{5{x}_{1}}{\mathsf{\pi}}-6\right)}^{2}+10\left(1-\frac{1}{8\mathsf{\pi}}\right)\mathrm{cos}{x}_{1}+10$ | x_{1} ∈ [−5, 10], x_{2} ∈ [0, 15] |
| f_{9} | Six-hump camelback | 2 | $f(x)=4{x}_{1}^{2}-2.1{x}_{1}^{4}+{x}_{1}^{6}+{x}_{1}{x}_{2}-4{x}_{2}^{2}+4{x}_{2}^{4}$ | x_{i} ∈ [−5, 5] |
| f_{10} | Rastrigin | 30 | $f(x)=10n+{\displaystyle {\sum}_{i=1}^{n}({x}_{i}^{2}-10\mathrm{cos}(2\mathsf{\pi}{x}_{i}))}$ | x_{i} ∈ [−5.12, 5.12] |
| f_{11} | Griewank | 30 | $f(x)=1+\frac{{\displaystyle {\sum}_{i=1}^{n}({x}_{i}^{2})}}{4000}-{\displaystyle {\prod}_{i=1}^{n}\mathrm{cos}\left(\frac{{x}_{i}}{\sqrt{i}}\right)}$ | x_{i} ∈ [−600, 600] |
| f_{12} | Schwefel 2.26 | 30 | $f(x)=-{\displaystyle {\sum}_{i=1}^{n}({x}_{i}\mathrm{sin}\sqrt{\left|{x}_{i}\right|})}$ | x_{i} ∈ [−500, 500] |
| f_{13} | Ackley | 30 | $f(x)=20+\mathrm{e}-20\mathrm{exp}\left(-\frac{1}{5}\sqrt{\frac{1}{n}{\displaystyle {\sum}_{i=1}^{n}({x}_{i}^{2})}}\right)-\mathrm{exp}\left(\frac{1}{n}{\displaystyle {\sum}_{i=1}^{n}\mathrm{cos}(2\mathsf{\pi}{x}_{i})}\right)$ | x_{i} ∈ [−32, 32] |

**Table 3.** Fifty independent experimental statistics results based on StGA, IMGA, RTS, DPGA, and GameEA using real-valued representations of functions ${f}_{1}-{f}_{13}$.

| Function | Iteration | Optimum Solution | StGA Average | StGA Std. Dev. | IMGA Average | IMGA Std. Dev. | RTS Average | RTS Std. Dev. | DPGA Average | DPGA Std. Dev. | GameEA Average | GameEA Std. Dev. |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| f_{1} | 1.5 × 10^{5} | 0 | 6.12 × 10^{−34} | 1.27 × 10^{−38} | 2.04 × 10^{−34} | 5.23 × 10^{−34} | 7.90 × 10^{−43} | 1.74 × 10^{−42} | 1.47 × 10^{−52} | 4.17 × 10^{−52} | 4.33 × 10^{−96} | 2.79 × 10^{−95} |
| f_{2} | 2.0 × 10^{5} | 0 | 3.32 × 10^{−29} | 1.78 × 10^{−28} | 6.40 × 10^{−32} | 1.36 × 10^{−31} | 7.18 × 10^{−37} | 6.19 × 10^{−37} | 5.19 × 10^{−45} | 7.90 × 10^{−45} | 1.52 × 10^{−66} | 4.61 × 10^{−66} |
| f_{3} | 5.0 × 10^{5} | 0 | 7.00 × 10^{−15} | 1.37 × 10^{−14} | 4.28 × 10^{−6} | 3.64 × 10^{−6} | 1.54 × 10^{−5} | 1.91 × 10^{−5} | 3.12 × 10^{−9} | 1.18 × 10^{−8} | 7.85 × 10^{−4} | 2.26 × 10^{−3} |
| f_{4} | 2.0 × 10^{6} | 0 | 5.454 | 3.662 | 5.554 | 4.522 | 1.391 | 1.211 | 3.047 | 3.558 | 0.0374 | 0.264 |
| f_{5} | 1.5 × 10^{5} | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| f_{6} | 3.0 × 10^{5} | 0 | 1.37 × 10^{−2} | 3.24 × 10^{−3} | 7.52 × 10^{−3} | 2.24 × 10^{−3} | 1.82 × 10^{−3} | 4.56 × 10^{−4} | 1.46 × 10^{−2} | 3.93 × 10^{−3} | 7.66 × 10^{−3} | 1.82 × 10^{−3} |
| f_{7} | 1.0 × 10^{4} | 3 | 3 | 0 | 3 | 0 | 3 | 0 | 3 | 0 | 3 | 0 |
| f_{8} | 1.0 × 10^{4} | 0.398 | 0.398 | 0 | 0.398 | 0 | 0.398 | 0 | 0.398 | 0 | 0.398 | 0 |
| f_{9} | 1.0 × 10^{4} | −1.032 | −1.032 | 0 | −1.032 | 0 | −1.032 | 0 | −1.032 | 0 | −1.032 | 0 |
| f_{10} | 5.0 × 10^{5} | 0 | 11.809 | 2.369 | 0.358 | 0.746 | 0 | 0 | 0 | 0 | 0 | 0 |
| f_{11} | 2.0 × 10^{5} | 0 | 1.63 × 10^{−3} | 3.91 × 10^{−3} | 3.54 × 10^{−3} | 7.73 × 10^{−3} | 2.07 × 10^{−3} | 5.31 × 10^{−3} | 1.28 × 10^{−3} | 3.31 × 10^{−3} | 0 | 0 |
| f_{12} | 9.0 × 10^{5} | −12,569.4866 | −11,195.1 | 284.5 | −12,008.1 | 284.9 | −12,443.9 | 142.4 | −12,550.5 | 43.9 | −12,569.4866 | 0 |
| f_{13} | 1.5 × 10^{5} | 0 | 3.55 × 10^{−15} | 0 | 4.69 × 10^{−15} | 1.67 × 10^{−15} | 5.26 × 10^{−15} | 1.79 × 10^{−15} | 3.55 × 10^{−15} | 0 | 6.84 × 10^{−16} | 1.30 × 10^{−15} |

**Table 4.** Friedman test decisions and results of pairwise comparisons (GameEA versus the compared algorithms).

| Function | Null Hypothesis | Test | Decision | GameEA vs. StGA | GameEA vs. IMGA | GameEA vs. RTS | GameEA vs. DPGA |
|---|---|---|---|---|---|---|---|
| f_{1} | The distributions of StGA, IMGA, RTS, DPGA, and GameEA are the same. | Related-Samples Friedman’s Two-Way Analysis of Variance by Ranks | Reject the null hypothesis | Reject | Reject | Reject | Reject |
| f_{2} | | | Reject the null hypothesis | Reject | Retain | Retain | Retain |
| f_{3} | | | Reject the null hypothesis | Reject | Retain | Retain | Retain |
| f_{4} | | | Reject the null hypothesis | Reject | Reject | Reject | Reject |
| f_{6} | | | Retain the null hypothesis | Retain | Retain | Retain | Retain |
| f_{10} | | | Reject the null hypothesis | Reject | Reject | Retain | Retain |
| f_{11} | | | Reject the null hypothesis | Reject | Reject | Reject | Reject |
| f_{12} | | | Reject the null hypothesis | Reject | Reject | Reject | Reject |
| f_{13} | | | Retain the null hypothesis | Retain | Retain | Retain | Retain |

**Table 5.** Pairwise comparisons. Each row tests the null hypothesis that the Sample 1 and Sample 2 distributions are the same. Asymptotic significances (2-sided tests) are displayed. The significance level is 0.05.

| Sample 1-Sample 2 | Test Statistic | Standard Error | Standard Test Statistic | Significance | Adjusted Significance |
|---|---|---|---|---|---|
| GameEA-DPGA | 1.400 | 0.316 | 4.427 | 0.000 | 0.000 |
| GameEA-RTS | 2.300 | 0.316 | 7.273 | 0.000 | 0.000 |
| GameEA-IMGA | 2.480 | 0.316 | 7.842 | 0.000 | 0.000 |
| GameEA-StGA | 3.820 | 0.316 | 12.080 | 0.000 | 0.000 |
| DPGA-RTS | 0.900 | 0.316 | 2.846 | 0.004 | 0.044 |
| DPGA-IMGA | 1.080 | 0.316 | 3.415 | 0.001 | 0.006 |
| DPGA-StGA | 2.420 | 0.316 | 7.653 | 0.000 | 0.000 |
| RTS-IMGA | 0.180 | 0.316 | 0.569 | 0.569 | 1.000 |
| RTS-StGA | 1.520 | 0.316 | 4.807 | 0.000 | 0.000 |
| IMGA-StGA | 1.340 | 0.316 | 4.237 | 0.000 | 0.000 |

© 2017 by the author. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).

## Share and Cite

**MDPI and ACS Style**

Yang, G.
Game Theory-Inspired Evolutionary Algorithm for Global Optimization. *Algorithms* **2017**, *10*, 111.
https://doi.org/10.3390/a10040111
