A Comparison of Evolutionary and Tree-Based Approaches for Game Feature Validation in Real-Time Strategy Games with a Novel Metric
Abstract
1. Introduction
- RQ1: How easy is it to adapt gameplaying agents as playtesting agents in RTS games?
- RQ2: Which RTS game definitions can be used to make a comparison between different playtesting agents?
- RQ3: How can playtesting agents be evaluated based on RTS game definitions, and which definitions are the most beneficial to them?
- RQ4: Is there a difference between evolutionary and non-evolutionary approaches (such as standard Monte Carlo tree search [44]) with regard to playtesting abilities?
- RQ5: How does one define valid/invalid game features in the game space?
- A novel metric is proposed to make a comparison between different playtesting agents;
- A method is proposed for adapting gameplaying agents as playtesting agents in real-time strategy games; and
- The proposed metric is used in a series of experiments involving adapted evolutionary and tree-based state-of-the-art gameplaying agents.
2. Real-Time Strategy Games
2.1. Game Features of RTS Games
2.2. microRTS
- Players gather resources and use them to create structures and new mobile units;
- The game goal is to defeat the opposing player in a battle for supremacy; and
- Resources, structures, and mobile units must be cleverly used.
- Four mobile units: worker, light (battle unit), heavy (battle unit), and ranged (battle unit);
- Two structures: base and barracks;
- Resources; and
- A wall.
3. Proposal of a Metric for Game Feature Validation
- STEP 1: The RTS game features are identified;
- STEP 2: The game features are grouped into precise game feature groups;
  - STEP 2.1: The game feature groups are classified according to their correlation (groups with similar descriptions tend to be correlated, which also allows a single game feature to be placed into multiple groups) and their importance (some groups are of higher importance because they reflect and are essential to RTS gameplay, while others could be left out without jeopardizing the game's position in the RTS game genre);
- STEP 3: For groups left empty in STEP 2, a further identification of the RTS game features is conducted by including more search strings and other search engines (e.g., Google Scholar); and
- STEP 4: The novel metric is proposed.
3.1. Identification of RTS Game Features
3.2. Grouping the Game Features into Specific Groups
3.3. Classification of Feature Groups According to Their Correlation and Importance
- The high-importance class contains groups that represent the essence of RTS gameplay (based on our understanding of the RTS game worlds and their aspects [63]);
- Groups that operate on a game mechanics level (e.g., Interaction/Interactivity (Equipment) group) or are not essential to the game (they could potentially be left out, e.g., Mystery group) are in the medium-importance class; and
- Groups that had no feature representative in Table 2 (i.e., were empty of features) were included in the low-importance class.
3.4. Proposal of the Metric
metricScore = W1 * calcSetScore({1, 3, 5, 13, 14, 16, 17}, numOfScenRep)
            + W2 * calcSetScore({2, 4, 6, 7, 8, 12}, numOfScenRep)
            + W3 * calcSetScore({9, 10, 11, 15, 18}, numOfScenRep)
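The weighted sum above can be illustrated with a minimal Python sketch. This is not the authors' implementation: the internals of `calcSetScore` are assumed here, namely that it returns the share of game features in a set that were found valid in every one of the `numOfScenRep` scenario repetitions, with validation counts recorded in a plain dictionary. The weights W1-W3 are free parameters.

```python
# Assumed feature-ID sets, taken from the metric formula above.
HIGH_IMPORTANCE = {1, 3, 5, 13, 14, 16, 17}
MEDIUM_IMPORTANCE = {2, 4, 6, 7, 8, 12}
LOW_IMPORTANCE = {9, 10, 11, 15, 18}

def calc_set_score(feature_ids, validated, num_of_scen_rep):
    """Share of features in `feature_ids` validated in all repetitions.

    `validated` maps a feature ID to the number of scenario repetitions
    in which that feature was found valid (an assumed representation).
    """
    if not feature_ids:
        return 0.0
    ok = sum(1 for f in feature_ids if validated.get(f, 0) == num_of_scen_rep)
    return ok / len(feature_ids)

def metric_score(validated, num_of_scen_rep, w1, w2, w3):
    """Weighted combination of the three importance-class set scores."""
    return (w1 * calc_set_score(HIGH_IMPORTANCE, validated, num_of_scen_rep)
            + w2 * calc_set_score(MEDIUM_IMPORTANCE, validated, num_of_scen_rep)
            + w3 * calc_set_score(LOW_IMPORTANCE, validated, num_of_scen_rep))
```

Under this reading, an agent that validates every high-importance feature but nothing else scores exactly W1, which matches the intuition that the weights bound each class's contribution.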
4. Experiments and Results
4.1. Experimental Environment
4.2. Adaptation of Gameplaying Agents as Playtesting Agents
4.3. Playtesting Agents
- Basic (part of the microRTS package):
- RandomAI: The choice of actions is completely random;
- RandomBiasedAI: Based on RandomAI, but with a five times higher probability of choosing a fighting or harvesting action over other actions; and
- MonteCarlo: A standard Monte Carlo search algorithm.
- Evolutionary Algorithm (online source):
- TiamatBot (original): Uses an evolutionary procedure to derive action abstractions (conducted as a preprocessing step [67]). The generation of action abstractions can be cast as the problem of selecting a subset of pure strategies from a pool of options. TiamatBot uses Stratified Strategy Selection (SSS) to plan in real time in the space defined by the action abstraction thus generated [68]. It outperformed the best-performing methods from the 2017 microRTS competition [69] and is therefore considered one of the current state-of-the-art gameplaying agents.
- Tree-Based (part of the microRTS package):
- IDRTMinimax: An iterative-deepening version of RTMinimax (minimax is defined here by time, not by agent moves) that uses available time to search in a tree as deeply as possible;
- IDRTMinimaxRandomized: An agent that uses randomized alpha-beta (a better assessment for situations where players execute moves simultaneously);
- IDABCD: Iterative-deepening alpha-beta considering durations (ABCD), a modified RTMinimax [70];
- UCT: Standard UCT (with a UCB1 sampling policy);
- PuppetSearchMCTS: An adversarial search framework based on scripts that can expose choice points to a look-ahead procedure. A Monte Carlo adversarial search tree was used to search over sequences of puppet moves. The input script into an agent’s constructor was a basic configurable script that used a Unit Type Table [71].
- NaiveMCTS: A standard Monte Carlo tree search, but one that uses naïve sampling [72]. Two variations of the same algorithm were used (differing in their initial parameter settings): NaiveMCTS#A (max_depth = 1, εl = 0.33, ε0 = 0.75) and NaiveMCTS#B (max_depth = 10, εl = 1.00, ε0 = 0.25).
- Evolutionary and Tree-Based (online source):
- MixedBot: This bot integrates three bots into a single agent. TiamatBot (improved original) was used for strategy decisions, Capivara was used for tactical decisions [73], and MicroRTSbot [74] contributed a mechanism that dynamically changes the time allocated to the two decision parts based on the number of close armies. MixedBot placed second in the 2019 microRTS (standard track) competition (first place went to a game bot that also uses offline/out-game learning [75]).
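Several of the tree-based agents above differ mainly in their sampling policy. As a rough illustration only (the microRTS agents themselves are Java implementations with richer state handling), the two policies named in the list — UCB1, used by UCT, and the ε-greedy choice that underlies naïve sampling's ε0/εl parameters — can be sketched as follows:

```python
import math
import random

def ucb1_select(child_stats, total_visits, c=math.sqrt(2)):
    """UCB1: pick the child index maximizing mean reward plus an
    exploration bonus. `child_stats` is a list of (visits, total_reward)
    pairs; unvisited children are expanded first."""
    best, best_val = None, float("-inf")
    for i, (n, r) in enumerate(child_stats):
        if n == 0:
            return i
        val = r / n + c * math.sqrt(math.log(total_visits) / n)
        if val > best_val:
            best, best_val = i, val
    return best

def epsilon_greedy(values, epsilon):
    """ε-greedy choice, the building block behind naïve sampling:
    explore a uniformly random option with probability ε, otherwise
    exploit the currently best-valued option."""
    if random.random() < epsilon:
        return random.randrange(len(values))
    return max(range(len(values)), key=values.__getitem__)
```

Naïve sampling applies this ε-greedy choice per unit (εl) and over whole player-actions (ε0), which is what makes it tractable in the combinatorial action spaces of RTS games; the sketch above shows only the shared selection primitive.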
4.4. Results of the Playtesting Agents
5. Discussion
- Good gameplaying performance is important, because it is also reflected in an agent's playtesting performance; and
- With a greater number of scenario repeats comes a higher probability of game features being validated.
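The second observation can be made concrete with an idealized model (an assumption, not part of the study's analysis): if an agent validates a given feature in a single scenario run with probability p, and repetitions are independent, the chance of observing the feature as valid at least once grows as 1 − (1 − p)^n.

```python
def prob_validated_at_least_once(p_single, repeats):
    """Probability that a feature is observed as valid in at least one
    of `repeats` independent scenario repetitions, given probability
    `p_single` of validating it in a single run (idealized model)."""
    return 1.0 - (1.0 - p_single) ** repeats

# With p_single = 0.3: one repeat gives 0.30, while five repeats
# give 1 - 0.7**5 ≈ 0.83 — more repeats, higher detection probability.
```

This is why weaker playtesting agents benefit disproportionately from additional scenario repeats, while an agent with p close to 1 gains little.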
6. Conclusions
Author Contributions
Funding
Conflicts of Interest
Appendix A
Playtesting Agent | Metric Scores | ||||||||||
---|---|---|---|---|---|---|---|---|---|---|---|
RandomBiasedAI | 1 | 0.95 | 0.9 | 0.85 | 0.8 | 0.75 | 0.7 | 0.65 | 0.6 | 0.55 | |
0.5 | 0.24 | 0.239 | 0.238 | 0.237 | 0.236 | 0.235 | 0.234 | 0.233 | 0.232 | 0.231 | |
0.45 | 0.218 | 0.217 | 0.216 | 0.215 | 0.214 | 0.213 | 0.212 | 0.211 | 0.21 | 0.209 | |
0.4 | 0.196 | 0.195 | 0.194 | 0.193 | 0.192 | 0.191 | 0.19 | 0.189 | 0.188 | 0.187 | |
0.35 | 0.174 | 0.173 | 0.172 | 0.171 | 0.17 | 0.169 | 0.168 | 0.167 | 0.166 | 0.165 | |
0.3 | 0.152 | 0.151 | 0.15 | 0.149 | 0.148 | 0.147 | 0.146 | 0.145 | 0.144 | 0.143 | |
0.25 | 0.13 | 0.129 | 0.128 | 0.127 | 0.126 | 0.125 | 0.124 | 0.123 | 0.122 | 0.121 | |
0.2 | 0.108 | 0.107 | 0.106 | 0.105 | 0.104 | 0.103 | 0.102 | 0.101 | 0.1 | 0.099 | |
0.15 | 0.086 | 0.085 | 0.084 | 0.083 | 0.082 | 0.081 | 0.08 | 0.079 | 0.078 | 0.077 | |
0.1 | 0.064 | 0.063 | 0.062 | 0.061 | 0.06 | 0.059 | 0.058 | 0.057 | 0.056 | 0.055 | |
0.05 | 0.042 | 0.041 | 0.04 | 0.039 | 0.038 | 0.037 | 0.036 | 0.035 | 0.034 | 0.033 | |
MonteCarlo | 1 | 0.95 | 0.9 | 0.85 | 0.8 | 0.75 | 0.7 | 0.65 | 0.6 | 0.55 | |
0.5 | 0.5 | 0.5 | 0.5 | 0.5 | 0.5 | 0.5 | 0.5 | 0.5 | 0.5 | 0.5 | |
0.45 | 0.45 | 0.45 | 0.45 | 0.45 | 0.45 | 0.45 | 0.45 | 0.45 | 0.45 | 0.45 | |
0.4 | 0.4 | 0.4 | 0.4 | 0.4 | 0.4 | 0.4 | 0.4 | 0.4 | 0.4 | 0.4 | |
0.35 | 0.35 | 0.35 | 0.35 | 0.35 | 0.35 | 0.35 | 0.35 | 0.35 | 0.35 | 0.35 | |
0.3 | 0.3 | 0.3 | 0.3 | 0.3 | 0.3 | 0.3 | 0.3 | 0.3 | 0.3 | 0.3 | |
0.25 | 0.25 | 0.25 | 0.25 | 0.25 | 0.25 | 0.25 | 0.25 | 0.25 | 0.25 | 0.25 | |
0.2 | 0.2 | 0.2 | 0.2 | 0.2 | 0.2 | 0.2 | 0.2 | 0.2 | 0.2 | 0.2 | |
0.15 | 0.15 | 0.15 | 0.15 | 0.15 | 0.15 | 0.15 | 0.15 | 0.15 | 0.15 | 0.15 | |
0.1 | 0.1 | 0.1 | 0.1 | 0.1 | 0.1 | 0.1 | 0.1 | 0.1 | 0.1 | 0.1 | |
0.05 | 0.05 | 0.05 | 0.05 | 0.05 | 0.05 | 0.05 | 0.05 | 0.05 | 0.05 | 0.05 | |
TiamatBot | 1 | 0.95 | 0.9 | 0.85 | 0.8 | 0.75 | 0.7 | 0.65 | 0.6 | 0.55 | |
0.5 | 1.08 | 1.051 | 1.022 | 0.993 | 0.964 | 0.935 | 0.906 | 0.877 | 0.848 | 0.819 | |
0.45 | 1.03 | 1.001 | 0.972 | 0.943 | 0.914 | 0.885 | 0.856 | 0.827 | 0.798 | 0.769 | |
0.4 | 0.98 | 0.951 | 0.922 | 0.893 | 0.864 | 0.835 | 0.806 | 0.777 | 0.748 | 0.719 | |
0.35 | 0.93 | 0.901 | 0.872 | 0.843 | 0.814 | 0.785 | 0.756 | 0.727 | 0.698 | 0.669 | |
0.3 | 0.88 | 0.851 | 0.822 | 0.793 | 0.764 | 0.735 | 0.706 | 0.677 | 0.648 | 0.619 | |
0.25 | 0.83 | 0.801 | 0.772 | 0.743 | 0.714 | 0.685 | 0.656 | 0.627 | 0.598 | 0.569 | |
0.2 | 0.78 | 0.751 | 0.722 | 0.693 | 0.664 | 0.635 | 0.606 | 0.577 | 0.548 | 0.519 | |
0.15 | 0.73 | 0.701 | 0.672 | 0.643 | 0.614 | 0.585 | 0.556 | 0.527 | 0.498 | 0.469 | |
0.1 | 0.68 | 0.651 | 0.622 | 0.593 | 0.564 | 0.535 | 0.506 | 0.477 | 0.448 | 0.419 | |
0.05 | 0.63 | 0.601 | 0.572 | 0.543 | 0.514 | 0.485 | 0.456 | 0.427 | 0.398 | 0.369 | |
IDRTMinimax | 1 | 0.95 | 0.9 | 0.85 | 0.8 | 0.75 | 0.7 | 0.65 | 0.6 | 0.55 | |
0.5 | 0.5 | 0.5 | 0.5 | 0.5 | 0.5 | 0.5 | 0.5 | 0.5 | 0.5 | 0.5 | |
0.45 | 0.45 | 0.45 | 0.45 | 0.45 | 0.45 | 0.45 | 0.45 | 0.45 | 0.45 | 0.45 | |
0.4 | 0.4 | 0.4 | 0.4 | 0.4 | 0.4 | 0.4 | 0.4 | 0.4 | 0.4 | 0.4 | |
0.35 | 0.35 | 0.35 | 0.35 | 0.35 | 0.35 | 0.35 | 0.35 | 0.35 | 0.35 | 0.35 | |
0.3 | 0.3 | 0.3 | 0.3 | 0.3 | 0.3 | 0.3 | 0.3 | 0.3 | 0.3 | 0.3 | |
0.25 | 0.25 | 0.25 | 0.25 | 0.25 | 0.25 | 0.25 | 0.25 | 0.25 | 0.25 | 0.25 | |
0.2 | 0.2 | 0.2 | 0.2 | 0.2 | 0.2 | 0.2 | 0.2 | 0.2 | 0.2 | 0.2 | |
0.15 | 0.15 | 0.15 | 0.15 | 0.15 | 0.15 | 0.15 | 0.15 | 0.15 | 0.15 | 0.15 | |
0.1 | 0.1 | 0.1 | 0.1 | 0.1 | 0.1 | 0.1 | 0.1 | 0.1 | 0.1 | 0.1 | |
0.05 | 0.05 | 0.05 | 0.05 | 0.05 | 0.05 | 0.05 | 0.05 | 0.05 | 0.05 | 0.05 | |
IDRTMinimaxRandomized | 1 | 0.95 | 0.9 | 0.85 | 0.8 | 0.75 | 0.7 | 0.65 | 0.6 | 0.55 | |
0.5 | 0.5 | 0.5 | 0.5 | 0.5 | 0.5 | 0.5 | 0.5 | 0.5 | 0.5 | 0.5 | |
0.45 | 0.45 | 0.45 | 0.45 | 0.45 | 0.45 | 0.45 | 0.45 | 0.45 | 0.45 | 0.45 | |
0.4 | 0.4 | 0.4 | 0.4 | 0.4 | 0.4 | 0.4 | 0.4 | 0.4 | 0.4 | 0.4 | |
0.35 | 0.35 | 0.35 | 0.35 | 0.35 | 0.35 | 0.35 | 0.35 | 0.35 | 0.35 | 0.35 | |
0.3 | 0.3 | 0.3 | 0.3 | 0.3 | 0.3 | 0.3 | 0.3 | 0.3 | 0.3 | 0.3 | |
0.25 | 0.25 | 0.25 | 0.25 | 0.25 | 0.25 | 0.25 | 0.25 | 0.25 | 0.25 | 0.25 | |
0.2 | 0.2 | 0.2 | 0.2 | 0.2 | 0.2 | 0.2 | 0.2 | 0.2 | 0.2 | 0.2 | |
0.15 | 0.15 | 0.15 | 0.15 | 0.15 | 0.15 | 0.15 | 0.15 | 0.15 | 0.15 | 0.15 | |
0.1 | 0.1 | 0.1 | 0.1 | 0.1 | 0.1 | 0.1 | 0.1 | 0.1 | 0.1 | 0.1 | |
0.05 | 0.05 | 0.05 | 0.05 | 0.05 | 0.05 | 0.05 | 0.05 | 0.05 | 0.05 | 0.05 | |
IDABCD | 1 | 0.95 | 0.9 | 0.85 | 0.8 | 0.75 | 0.7 | 0.65 | 0.6 | 0.55 | |
0.5 | 0.52 | 0.519 | 0.518 | 0.517 | 0.516 | 0.515 | 0.514 | 0.513 | 0.512 | 0.511 | |
0.45 | 0.47 | 0.469 | 0.468 | 0.467 | 0.466 | 0.465 | 0.464 | 0.463 | 0.462 | 0.461 | |
0.4 | 0.42 | 0.419 | 0.418 | 0.417 | 0.416 | 0.415 | 0.414 | 0.413 | 0.412 | 0.411 | |
0.35 | 0.37 | 0.369 | 0.368 | 0.367 | 0.366 | 0.365 | 0.364 | 0.363 | 0.362 | 0.361 | |
0.3 | 0.32 | 0.319 | 0.318 | 0.317 | 0.316 | 0.315 | 0.314 | 0.313 | 0.312 | 0.311 | |
0.25 | 0.27 | 0.269 | 0.268 | 0.267 | 0.266 | 0.265 | 0.264 | 0.263 | 0.262 | 0.261 | |
0.2 | 0.22 | 0.219 | 0.218 | 0.217 | 0.216 | 0.215 | 0.214 | 0.213 | 0.212 | 0.211 | |
0.15 | 0.17 | 0.169 | 0.168 | 0.167 | 0.166 | 0.165 | 0.164 | 0.163 | 0.162 | 0.161 | |
0.1 | 0.12 | 0.119 | 0.118 | 0.117 | 0.116 | 0.115 | 0.114 | 0.113 | 0.112 | 0.111 | |
0.05 | 0.07 | 0.069 | 0.068 | 0.067 | 0.066 | 0.065 | 0.064 | 0.063 | 0.062 | 0.061 | |
UCT | 1 | 0.95 | 0.9 | 0.85 | 0.8 | 0.75 | 0.7 | 0.65 | 0.6 | 0.55 | |
0.5 | 1.02 | 0.994 | 0.968 | 0.942 | 0.916 | 0.89 | 0.864 | 0.838 | 0.812 | 0.786 | |
0.45 | 0.97 | 0.944 | 0.918 | 0.892 | 0.866 | 0.84 | 0.814 | 0.788 | 0.762 | 0.736 | |
0.4 | 0.92 | 0.894 | 0.868 | 0.842 | 0.816 | 0.79 | 0.764 | 0.738 | 0.712 | 0.686 | |
0.35 | 0.87 | 0.844 | 0.818 | 0.792 | 0.766 | 0.74 | 0.714 | 0.688 | 0.662 | 0.636 | |
0.3 | 0.82 | 0.794 | 0.768 | 0.742 | 0.716 | 0.69 | 0.664 | 0.638 | 0.612 | 0.586 | |
0.25 | 0.77 | 0.744 | 0.718 | 0.692 | 0.666 | 0.64 | 0.614 | 0.588 | 0.562 | 0.536 | |
0.2 | 0.72 | 0.694 | 0.668 | 0.642 | 0.616 | 0.59 | 0.564 | 0.538 | 0.512 | 0.486 | |
0.15 | 0.67 | 0.644 | 0.618 | 0.592 | 0.566 | 0.54 | 0.514 | 0.488 | 0.462 | 0.436 | |
0.1 | 0.62 | 0.594 | 0.568 | 0.542 | 0.516 | 0.49 | 0.464 | 0.438 | 0.412 | 0.386 | |
0.05 | 0.57 | 0.544 | 0.518 | 0.492 | 0.466 | 0.44 | 0.414 | 0.388 | 0.362 | 0.336 | |
PuppetSearchMCTS | 1 | 0.95 | 0.9 | 0.85 | 0.8 | 0.75 | 0.7 | 0.65 | 0.6 | 0.55 | |
0.5 | 0.58 | 0.576 | 0.572 | 0.568 | 0.564 | 0.56 | 0.556 | 0.552 | 0.548 | 0.544 | |
0.45 | 0.53 | 0.526 | 0.522 | 0.518 | 0.514 | 0.51 | 0.506 | 0.502 | 0.498 | 0.494 | |
0.4 | 0.48 | 0.476 | 0.472 | 0.468 | 0.464 | 0.46 | 0.456 | 0.452 | 0.448 | 0.444 | |
0.35 | 0.43 | 0.426 | 0.422 | 0.418 | 0.414 | 0.41 | 0.406 | 0.402 | 0.398 | 0.394 | |
0.3 | 0.38 | 0.376 | 0.372 | 0.368 | 0.364 | 0.36 | 0.356 | 0.352 | 0.348 | 0.344 | |
0.25 | 0.33 | 0.326 | 0.322 | 0.318 | 0.314 | 0.31 | 0.306 | 0.302 | 0.298 | 0.294 | |
0.2 | 0.28 | 0.276 | 0.272 | 0.268 | 0.264 | 0.26 | 0.256 | 0.252 | 0.248 | 0.244 | |
0.15 | 0.23 | 0.226 | 0.222 | 0.218 | 0.214 | 0.21 | 0.206 | 0.202 | 0.198 | 0.194 | |
0.1 | 0.18 | 0.176 | 0.172 | 0.168 | 0.164 | 0.16 | 0.156 | 0.152 | 0.148 | 0.144 | |
0.05 | 0.13 | 0.126 | 0.122 | 0.118 | 0.114 | 0.11 | 0.106 | 0.102 | 0.098 | 0.094 | |
NaiveMCTS#A | 1 | 0.95 | 0.9 | 0.85 | 0.8 | 0.75 | 0.7 | 0.65 | 0.6 | 0.55 | |
0.5 | 1.28 | 1.241 | 1.202 | 1.163 | 1.124 | 1.085 | 1.046 | 1.007 | 0.968 | 0.929 | |
0.45 | 1.23 | 1.191 | 1.152 | 1.113 | 1.074 | 1.035 | 0.996 | 0.957 | 0.918 | 0.879 | |
0.4 | 1.18 | 1.141 | 1.102 | 1.063 | 1.024 | 0.985 | 0.946 | 0.907 | 0.868 | 0.829 | |
0.35 | 1.13 | 1.091 | 1.052 | 1.013 | 0.974 | 0.935 | 0.896 | 0.857 | 0.818 | 0.779 | |
0.3 | 1.08 | 1.041 | 1.002 | 0.963 | 0.924 | 0.885 | 0.846 | 0.807 | 0.768 | 0.729 | |
0.25 | 1.03 | 0.991 | 0.952 | 0.913 | 0.874 | 0.835 | 0.796 | 0.757 | 0.718 | 0.679 | |
0.2 | 0.98 | 0.941 | 0.902 | 0.863 | 0.824 | 0.785 | 0.746 | 0.707 | 0.668 | 0.629 | |
0.15 | 0.93 | 0.891 | 0.852 | 0.813 | 0.774 | 0.735 | 0.696 | 0.657 | 0.618 | 0.579 | |
0.1 | 0.88 | 0.841 | 0.802 | 0.763 | 0.724 | 0.685 | 0.646 | 0.607 | 0.568 | 0.529 | |
0.05 | 0.83 | 0.791 | 0.752 | 0.713 | 0.674 | 0.635 | 0.596 | 0.557 | 0.518 | 0.479 | |
NaiveMCTS#B | 1 | 0.95 | 0.9 | 0.85 | 0.8 | 0.75 | 0.7 | 0.65 | 0.6 | 0.55 | |
0.5 | 1.26 | 1.222 | 1.184 | 1.146 | 1.108 | 1.07 | 1.032 | 0.994 | 0.956 | 0.918 | |
0.45 | 1.21 | 1.172 | 1.134 | 1.096 | 1.058 | 1.02 | 0.982 | 0.944 | 0.906 | 0.868 | |
0.4 | 1.16 | 1.122 | 1.084 | 1.046 | 1.008 | 0.97 | 0.932 | 0.894 | 0.856 | 0.818 | |
0.35 | 1.11 | 1.072 | 1.034 | 0.996 | 0.958 | 0.92 | 0.882 | 0.844 | 0.806 | 0.768 | |
0.3 | 1.06 | 1.022 | 0.984 | 0.946 | 0.908 | 0.87 | 0.832 | 0.794 | 0.756 | 0.718 | |
0.25 | 1.01 | 0.972 | 0.934 | 0.896 | 0.858 | 0.82 | 0.782 | 0.744 | 0.706 | 0.668 | |
0.2 | 0.96 | 0.922 | 0.884 | 0.846 | 0.808 | 0.77 | 0.732 | 0.694 | 0.656 | 0.618 | |
0.15 | 0.91 | 0.872 | 0.834 | 0.796 | 0.758 | 0.72 | 0.682 | 0.644 | 0.606 | 0.568 | |
0.1 | 0.86 | 0.822 | 0.784 | 0.746 | 0.708 | 0.67 | 0.632 | 0.594 | 0.556 | 0.518 | |
0.05 | 0.81 | 0.772 | 0.734 | 0.696 | 0.658 | 0.62 | 0.582 | 0.544 | 0.506 | 0.468 | |
MixedBot | 1 | 0.95 | 0.9 | 0.85 | 0.8 | 0.75 | 0.7 | 0.65 | 0.6 | 0.55 | |
0.5 | 0.82 | 0.804 | 0.788 | 0.772 | 0.756 | 0.74 | 0.724 | 0.708 | 0.692 | 0.676 | |
0.45 | 0.77 | 0.754 | 0.738 | 0.722 | 0.706 | 0.69 | 0.674 | 0.658 | 0.642 | 0.626 | |
0.4 | 0.72 | 0.704 | 0.688 | 0.672 | 0.656 | 0.64 | 0.624 | 0.608 | 0.592 | 0.576 | |
0.35 | 0.67 | 0.654 | 0.638 | 0.622 | 0.606 | 0.59 | 0.574 | 0.558 | 0.542 | 0.526 | |
0.3 | 0.62 | 0.604 | 0.588 | 0.572 | 0.556 | 0.54 | 0.524 | 0.508 | 0.492 | 0.476 | |
0.25 | 0.57 | 0.554 | 0.538 | 0.522 | 0.506 | 0.49 | 0.474 | 0.458 | 0.442 | 0.426 | |
0.2 | 0.52 | 0.504 | 0.488 | 0.472 | 0.456 | 0.44 | 0.424 | 0.408 | 0.392 | 0.376 | |
0.15 | 0.47 | 0.454 | 0.438 | 0.422 | 0.406 | 0.39 | 0.374 | 0.358 | 0.342 | 0.326 | |
0.1 | 0.42 | 0.404 | 0.388 | 0.372 | 0.356 | 0.34 | 0.324 | 0.308 | 0.292 | 0.276 | |
0.05 | 0.37 | 0.354 | 0.338 | 0.322 | 0.306 | 0.29 | 0.274 | 0.258 | 0.242 | 0.226 |
References
- Balla, R.K.; Fern, A. UCT for Tactical Assault Planning in Real-Time Strategy Games. In Proceedings of the Twenty-First International Joint Conference on Artificial Intelligence, Pasadena, CA, USA, 14–17 July 2009; AAAI Press: Menlo Park, CA, USA, 2009; pp. 40–45.
- Buro, M. Real-Time Strategy Games: A New AI Research Challenge. In Proceedings of the IJCAI, Acapulco, Mexico, 9–15 August 2003; Morgan Kaufmann: Burlington, MA, USA, 2003; pp. 1534–1535.
- Shafi, K.; Abbass, H.A. A Survey of Learning Classifier Systems in Games. IEEE Comput. Intell. Mag. 2017, 12, 42–55.
- Synnaeve, G.; Bessiere, P. Multi-scale Bayesian modeling for RTS games: An application to StarCraft AI. IEEE Trans. Comput. Intell. AI Games 2015, 8, 338–350.
- Usunier, N.; Synnaeve, G.; Lin, Z.; Chintala, S. Episodic Exploration for Deep Deterministic Policies: An Application to StarCraft Micromanagement Tasks. arXiv 2016, arXiv:1609.02993.
- Isaksen, A.; Gopstein, D.; Nealen, A. Exploring Game Space Using Survival Analysis. In Proceedings of the FDG, Pacific Grove, CA, USA, 22–25 June 2015.
- Gottlob, G.; Greco, G.; Scarcello, F. Pure Nash Equilibria: Hard and Easy Games. JAIR 2005, 24, 357–406.
- Eiben, A.E.; Smith, J.E. Introduction to Evolutionary Computing; Eiben, A.E., Ed.; Springer: Berlin, Germany, 2003; Volume 53, p. 18.
- Fister, I., Jr.; Yang, X.S.; Fister, I.; Brest, J.; Fister, D. A brief review of nature-inspired algorithms for optimization. arXiv 2013, arXiv:1307.4186.
- Yang, X.S. Nature-Inspired Metaheuristic Algorithms; Luniver Press: Beckington, UK, 2010.
- Yang, X.S. Nature-Inspired Optimization Algorithms; Elsevier: London/Waltham, UK, 2014.
- Biswas, A.; Mishra, K.; Tiwari, S.; Misra, A. Physics-Inspired Optimization Algorithms: A Survey. J. Optim. 2013.
- Del Ser, J.; Osaba, E.; Molina, D.; Yang, X.S.; Salcedo-Sanz, S.; Camacho, D.; Das, S.; Suganthan, P.; Coello, C.; Herrera, F. Bio-inspired computation: Where we stand and what’s next. SWEVO 2019, 48.
- Karaboga, D.; Basturk, B. A powerful and efficient algorithm for numerical function optimization: Artificial bee colony (ABC) algorithm. J. Glob. Optim. 2007, 39, 459–471.
- Storn, R.; Price, K. Differential Evolution—A Simple and Efficient Heuristic for Global Optimization over Continuous Spaces. J. Glob. Optim. 1997, 11, 341–359.
- Goldberg, D.E. Genetic algorithms in search. In Optimization, and Machine Learning; Addison-Wesley Longman Publishing Co., Inc.: Boston, MA, USA, 1989.
- Wang, G.G.; Deb, S.; Cui, Z. Monarch Butterfly Optimization. Neural Comput. Appl. 2015, 31, 1995–2014.
- Jin, N.; Rahmat-Samii, Y. Advances in Particle Swarm Optimization for Antenna Designs: Real-Number, Binary, Single-Objective and Multiobjective Implementations. IEEE Trans. Antennas Propag. 2007, 55, 556–567.
- Santucci, V.; Milani, A.; Caraffini, F. An Optimisation-Driven Prediction Method for Automated Diagnosis and Prognosis. Mathematics 2019, 7, 1051.
- Yeoh, J.M.; Caraffini, F.; Homapour, E.; Santucci, V.; Milani, A. A Clustering System for Dynamic Data Streams Based on Metaheuristic Optimisation. Mathematics 2019, 7, 1229.
- Hendrikx, M.; Meijer, S.; Van Der Velden, J.; Iosup, A. Procedural content generation for games: A survey. ACM Trans. Multimed. Comput. Commun. Appl. 2013, 9, 1–22.
- Wilson, D.G.; Cussat-Blanc, S.; Luga, H.; Miller, J.F. Evolving simple programs for playing Atari games. Proc. Genet. Evol. Comput. Conf. 2018, 229–236.
- Ponticorvo, M.; Rega, A.; Di Ferdinando, A.; Marocco, D.; Miglino, O. Approaches to Embed Bio-inspired Computational Algorithms in Educational and Serious Games. In Proceedings of the CAID@IJCAI, Melbourne, Australia, May 2017.
- Woźniak, M.; Połap, D.; Napoli, C.; Tramontana, E. Application of Bio-Inspired Methods in Distributed Gaming Systems. ITC 2017, 46.
- Boskovic, B.; Greiner, S.; Brest, J.; Zumer, V. A differential evolution for the tuning of a chess evaluation function. In Proceedings of the 2006 IEEE International Conference on Evolutionary Computation, Vancouver, BC, Canada, 16–21 July 2006; pp. 1851–1856.
- Diaz, G.; Iglesias, A. Evolutionary Behavioral Design of Non-Player Characters in a FPS Video Game through Particle Swarm Optimization. In Proceedings of the 13th International Conference on SKIMA, Island of Ulkulhas, Ulkulhas, Maldives, 26–28 August 2019; pp. 1–8.
- Kuhlmann, G.; Stone, P. Automatic Heuristic Construction in a Complete General Game Player. AAAI Conf. 2006, 6, 1456–1462.
- Joppen, T.; Strubig, T.; Furnkranz, J. Ordinal Bucketing for Game Trees using Dynamic Quantile Approximation. In Proceedings of the IEEE CoG, London, UK, 20–23 August 2019; pp. 1–8.
- Borovikov, I.; Zhao, Y.; Beirami, A.; Harder, J.; Kolen, J.; Pestrak, J.; Pinto, J.; Pourabolghasem, R.; Chaput, H.; Sardari, M.; et al. Winning isn’t everything: Training agents to playtest modern games. In Proceedings of the AAAI Workshop on Reinforcement Learning in Games, Honolulu, HI, USA, 27 January–1 February 2019.
- Naves, T.; Lopes, C. One Approach to Determine Goals in RTS Games Using Maximization of Resource Production with Local Search and Scheduling. In Proceedings of the ICTAI, Vietri sul Mare, Italy, 9–11 November 2015; pp. 469–477.
- Bosc, G.; Tan, P.; Boulicaut, J.F.; Raïssi, C.; Kaytoue, M. A Pattern Mining Approach to Study Strategy Balance in RTS Games. IEEE T-CIAIG 2015, 9, 123–132.
- Uriarte, A.; Ontañón, S. Combat Models for RTS Games. IEEE TOG 2018, 10, 29–41.
- Rogers, K.; Skabar, A. A Micromanagement Task Allocation System for Real-Time Strategy Games. IEEE TCIAIG 2014, 6, 67–77.
- Kawase, K.; Thawonmas, R. Scout of the route of entry into the enemy camp in StarCraft with potential field. In Proceedings of the GCCE, Tokyo, Japan, 1–4 October 2013; pp. 318–319.
- Cunha, R.; Chaimowicz, L. An Artificial Intelligence System to Help the Player of Real-Time Strategy Games. In Proceedings of the SBGames, Florianopolis, Brazil, 8–10 November 2010; pp. 71–81.
- Ontañón, S.; Synnaeve, G.; Uriarte, A.; Richoux, F.; Churchill, D.; Preuss, M. A Survey of Real-Time Strategy Game AI Research and Competition in StarCraft. IEEE T-CIAIG 2013, 5, 293–311.
- Zhao, Y.; Borovikov, I.; Beirami, A.; Rupert, J.; Somers, C.; Harder, J.; De Mesentier Silva, F.; Kolen, J.; Pinto, J.; Pourabolghasem, R.; et al. Winning Isn’t Everything: Enhancing Game Development with Intelligent Agents. In Proceedings of the AAAI Workshop on Reinforcement Learning in Games, Honolulu, HI, USA, 27 January–1 February 2019.
- Guerrero-Romero, C.; Lucas, S.; Perez Liebana, D. Using a Team of General AI Algorithms to Assist Game Design and Testing. In Proceedings of the IEEE Conference on CIG, Maastricht, The Netherlands, 14–17 August 2018; pp. 1–8.
- Jaffe, A.B. Understanding Game Balance with Quantitative Methods. Ph.D. Thesis, University of Washington, Seattle, WA, USA, 2013.
- Risi, S.; Preuss, M. From Chess and Atari to StarCraft and Beyond: How Game AI is Driving the World of AI. KI-Künstliche Intell. 2020, 34, 1–11.
- Perrotta, C.; Bailey, C.; Ryder, J.; Haggis-Burridge, M.; Persico, D. Games as (Not) Culture: A Critical Policy Analysis of the Economic Agenda of Horizon 2020. Games Cult. 2019.
- Salazar, M.G.; Mitre, H.A.; Olalde, C.L.; Sánchez, J.L.G. Proposal of Game Design Document from software engineering requirements perspective. In Proceedings of the Conference on CGAMES, Louisville, KY, USA, 30 July–1 August 2012; pp. 81–85.
- Holmgård, C.; Green, M.C.; Liapis, A.; Togelius, J. Automated Playtesting with Procedural Personas through MCTS with Evolved Heuristics. IEEE Trans. Games 2018, 11, 352–362.
- Chaslot, G.; Bakkes, S.; Szita, I.; Spronck, P. Monte-Carlo Tree Search: A New Framework for Game AI. In Proceedings of the AAAI Conference on AIIDE, Palo Alto, CA, USA, 22–24 October 2008.
- Heintz, S.; Law, E. Digital Educational Games: Methodologies for Evaluating the Impact of Game Type. ACM Trans. Comput. Hum. Interact. 2018, 25, 1–47.
- Walfisz, M.; Zackariasson, P.; Wilson, T. Real-Time Strategy: Evolutionary Game Development. Bus. Horiz. 2006, 49, 487–498.
- Sicart, M. Defining Game Mechanics. Int. J. Comput. Game Res. 2008, 8.
- Wilson, K.; Bedwell, W.; Lazzara, E.; Salas, E.; Burke, S.; Estock, J.; Orvis, K.; Conkey, C. Relationships Between Game Attributes and Learning Outcomes: Review and Research Proposals. Simul. Gaming 2008, 40, 217–266.
- Erickson, G.; Buro, M. Global state evaluation in StarCraft. In Proceedings of the AAAI Conference on AIIDE, Raleigh, NC, USA, 3–7 October 2014; pp. 112–118.
- Ludwig, J.; Farley, A. Examining Extended Dynamic Scripting in a Tactical Game Framework. In Proceedings of the Conference on AIIDE, Palo Alto, CA, USA, 14–16 October 2009.
- Aly, M.; Aref, M.; Hassan, M. Dimensions-based classifier for strategy classification of opponent models in real-time strategy games. In Proceedings of the IEEE Seventh ICICIS, Cairo, Egypt, 12–14 December 2015; pp. 442–446.
- Bangay, S.; Makin, O. Generating an attribute space for analyzing balance in single unit RTS game combat. In Proceedings of the IEEE Conference on CIG, Dortmund, Germany, 26–29 August 2014; pp. 1–8.
- Cho, H.; Park, H.; Kim, C.Y.; Kim, K.J. Investigation of the Effect of “Fog of War” in the Prediction of StarCraft Strategy Using Machine Learning. Comput. Entertain. 2017, 14, 1–16.
- Mishra, K.; Ontañón, S.; Ram, A. Situation Assessment for Plan Retrieval in Real-Time Strategy Games. In Advances in Case-Based Reasoning, ECCBR 2008; Althoff, K.D., Bergmann, R., Minor, M., Hanft, A., Eds.; Lecture Notes in Computer Science; Springer: Berlin, Germany, 2008; Volume 5239, pp. 355–369.
- Togelius, J.; Preuss, M.; Hochstrate, N.; Wessing, S.; Hagelbäck, J.; Yannakakis, G. Multiobjective exploration of the StarCraft map space. IEEE Conf. CIG 2010, 1, 265–272.
- Lin, M.; Wang, T.; Li, X.; Liu, J.; Wang, Y.; Zhu, Y.; Wang, W. An Uncertainty-Incorporated Approach to Predict the Winner in StarCraft II Using Neural Processes. IEEE Access 2019, 7, 101609–101619.
- Tong, C.; On, C.; Teo, J.; Chua, B.L. Automatic generation of real time strategy tournament units using differential evolution. In Proceedings of the IEEE CSUDET, Semenyih, Malaysia, 20–21 October 2011; pp. 101–106.
- Long, M. Radio General: A Real-Time Strategy Game Where You Cannot See Your Units. In Proceedings of the Annual Symposium on CHI PLAY, Melbourne, Australia, 28–31 October 2018; pp. 345–351.
- Li, Y.; Li, Y.; Zhai, J.; Shiu, S. RTS game strategy evaluation using extreme learning machine. Soft Comput. 2012.
- Si, C.; Pisan, Y.; Tan, C.T. A Scouting Strategy for Real-Time Strategy Games. Conf. Interact. Entertain. 2014, 1–8.
- McCoy, J.; Mateas, M. An Integrated Agent for Playing Real-Time Strategy Games. AAAI Conf. AI 2008, 8, 1313–1318.
- DeRouin-Jessen, R. Game on: The Impact of Game Features in Computer-Based Training. Ph.D. Thesis, University of Central Florida, Orlando, FL, USA, 2008.
- Novak, D.; Čep, A.; Verber, D. Classification of modern real-time strategy game worlds. GSTF J. Comput. 2018, 6.
- Microrts. Available online: https://github.com/santiontanon/microrts (accessed on 20 March 2020).
- TiamatBot. Available online: https://github.com/jr9Hernandez/TiamatBot (accessed on 20 March 2020).
- MixedBotmRTS. Available online: https://github.com/AmoyZhp/MixedBotmRTS (accessed on 20 March 2020).
- Evolutionary Action-Abstractions. Available online: https://github.com/julianmarino/evolutionary-action-abstractions (accessed on 15 April 2020).
- Mariño, J.; De Oliveira Moraes Filho, R.; Toledo, C.; Lelis, L. Evolving Action Abstractions for Real-Time Planning in Extensive-Form Games. AAAI Conf. AI 2019, 33, 2330–2337.
- Ontanon, S.; Barriga, N.A.; Silva, C.; De Oliveira Moraes Filho, R.; Lelis, L. The First microRTS Artificial Intelligence Competition. AI Mag. 2018, 39, 75.
- Churchill, D.; Saffidine, A.; Buro, M. Fast Heuristic Search for RTS Game Combat Scenarios. In Proceedings of the AAAI Conference on AIIDE, Stanford, CA, USA, 8–12 October 2012.
- Barriga, N.A.; Stanescu, M.; Buro, M. Game Tree Search Based on Non-Deterministic Action Scripts in Real-Time Strategy Games. IEEE TCIAIG 2017.
- Ontanon, S. The combinatorial Multi-armed Bandit problem and its application to real-time strategy games. In Proceedings of the Conference on AIIDE, Boston, MA, USA, 14–18 October 2013; pp. 58–64.
- De Oliveira Moraes Filho, R.; Mariño, J.; Lelis, L.; Nascimento, M. Action Abstractions for Combinatorial Multi-Armed Bandit Tree Search. In Proceedings of the Conference on AIIDE, Edmonton, AB, Canada, 13–17 November 2018.
- Barriga, N.A.; Stanescu, M.; Buro, M. Combining Strategic Learning and Tactical Search in Real-Time Strategy Games. In Proceedings of the AAAI Conference on AIIDE, Snowbird, UT, USA, 5–9 October 2017.
- Stanley, K.; Bryant, B.; Miikkulainen, R. Real-Time Neuroevolution in the NERO Video Game. IEEE TEVC 2005, 9, 653–668.
Game Feature Label | Short Game Feature Description | Short Game Feature Label | Reference (Used as a Basis for Extraction) |
---|---|---|---|
Resource gathering | Game unit (worker) collects at least x units of type A resources and at least y units of type B resources in num trips. | GF1_RG | [50] |
Game engine features and objects | Game unit (battle unit) always hits with x points of damage. | GF2_EOBJ | [51] |
Game difficulty (aiding) | The opponent is aided with x more units, resulting in a player losing every game. Note: such a feature can be part of an advanced mode, where non-advanced users must not/cannot win. | GF3_DIFA | [52] |
Game objective (construction) | If the player tries to, it must be able to create x game structure(s) (e.g., barracks). | GF4_CONS | [53] |
Game assessment | Game score is calculated based on raw features (e.g., no. of workers) and must represent the game state status correctly when presented to the player. | GF5_AST | [54] |
Stumbling block | The player cannot destroy the enemy in a specific part of the map due to stumbling blocks (e.g., a wall). | GF6_SB | [55] |
Game exploration (unlocking new technologies) | If the player attempts discovery, they can create x game units (e.g., battle unit–light) through the use of game structure(s) (e.g., barracks). | GF7_EXPL | [56] |
Special unit | The player is confronted with a special game unit (e.g., Super-Heavy with special features), which cannot be destroyed with the given resources. | GF8_FANT | [57] |
Partial information (fog-of-war) | The player cannot operate in a partially observable environment and therefore cannot destroy the opponent in such an environment. | GF9_PARI | [58] |
Game difficulty (challenge) | The player cannot destroy x structures (e.g., barracks) guarded by y rushing game units (e.g., battle unit–heavy) with access to z units of A type resources. | GF10_DIFC | [52] |
Game control (take over the map) | The player can destroy all the structures on the map before the time runs out. | GF11_GCMP | [59] |
Interaction on a complex map | If the player controlling x battle units (e.g., a heavy battle unit) finds a static unit (e.g., barracks) in a maze (or complex map), the static unit is always destroyed. | GF12_INTE | [60] |
Resource gathering under attack | A gatherer (e.g., a worker) is always destroyed when trying to gather resources. | GF13_RG2 | [61] |
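For automated playtesting, feature definitions like those in the table above can be encoded as simple records. The sketch below is illustrative only; the `GameFeature` type and its field names are assumptions, not taken from the microRTS codebase, and the two entries are transcribed from the tables in this section:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class GameFeature:
    label: str         # short game feature label, e.g., "GF1_RG"
    description: str   # human-readable validation criterion
    group_ids: tuple   # groups the feature belongs to (from the grouping table)

# Two entries transcribed from the game feature and grouping tables above.
FEATURES = [
    GameFeature(
        label="GF1_RG",
        description="Game unit (worker) collects at least x units of type A "
                    "resources and at least y units of type B resources in num trips.",
        group_ids=("G13", "G14", "G16", "G17"),
    ),
    GameFeature(
        label="GF6_SB",
        description="The player cannot destroy the enemy in a specific part "
                    "of the map due to stumbling blocks (e.g., a wall).",
        group_ids=("G4", "G13", "G14"),
    ),
]
```

Keeping each criterion as data (rather than hard-coding it in an agent) makes it straightforward to run the same playtesting agent against every feature in turn.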
ID | Group | Short Game Feature Label |
---|---|---|
G1 | Adaptation | GF3_DIFA 1, GF10_DIFC |
G2 | Assessment/Rewards/Scores | GF5_AST 1 |
G3 | Challenge | GF3_DIFA, GF8_FANT, GF10_DIFC 1, GF12_INTE |
G4 | Conflict | GF6_SB 1, GF9_PARI, GF10_DIFC, GF12_INTE, GF13_RG2 |
G5 | Control | GF2_EOBJ, GF9_PARI, GF10_DIFC, GF11_GCMP 1, GF12_INTE |
G6 | Exploration | GF7_EXPL 1, GF9_PARI, GF12_INTE |
G7 | Fantasy/Location | GF8_FANT 1 |
G8 | Interaction/Interactivity (Equipment) | GF2_EOBJ, GF4_CONS, GF7_EXPL, GF12_INTE 1 |
G9 | Interaction (Interpersonal/Social) | (empty; beyond the scope of this article 2)
G10 | Language/Communication | (empty) |
G11 | Motivation | (empty) |
G12 | Mystery | GF9_PARI 1 |
G13 | Pieces or Players | GF1_RG, GF2_EOBJ 1, GF3_DIFA, GF4_CONS, GF5_AST, GF6_SB, GF7_EXPL, GF8_FANT, GF9_PARI, GF10_DIFC, GF11_GCMP, GF12_INTE, GF13_RG2 |
G14 | Progress and Surprise | GF1_RG, GF4_CONS 1, GF6_SB, GF7_EXPL, GF8_FANT, GF9_PARI, GF10_DIFC, GF11_GCMP, GF12_INTE, GF13_RG2 |
G15 | Representation | (empty) |
G16 | Rules/goals | GF1_RG 1, GF2_EOBJ, GF4_CONS, GF7_EXPL, GF13_RG2 |
G17 | Safety | GF1_RG, GF13_RG2 1 |
G18 | Sensory stimuli | (empty) |
Class | Groups | Weight | Set |
---|---|---|---|
High importance | Adaptation, Challenge, Control, Pieces or Players, Progress and Surprise, Rules/goals, Safety | W1 | CH = {G1, G3, G5, G13, G14, G16, G17} |
Medium importance | Assessment/Rewards/Scores, Conflict, Exploration, Fantasy/Location, Interaction/Interactivity (Equipment), Mystery | W2 | CM = {G2, G4, G6, G7, G8, G12}
Low importance | Interaction (Interpersonal/Social), Language/Communication, Motivation, Representation, Sensory stimuli | W3 | CL = {G9, G10, G11, G15, G18}
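The classification above can be checked mechanically. The sketch below (variable names `CH`, `CM`, `CL` are taken from the table's Set column; everything else is illustrative) confirms that the three importance classes partition all eighteen groups:

```python
# Importance classes as given in the table; weights W1 > W2 > W3 are symbolic here.
CH = {"G1", "G3", "G5", "G13", "G14", "G16", "G17"}   # high importance, weight W1
CM = {"G2", "G4", "G6", "G7", "G8", "G12"}            # medium importance, weight W2
CL = {"G9", "G10", "G11", "G15", "G18"}               # low importance, weight W3

ALL_GROUPS = {f"G{i}" for i in range(1, 19)}

# The classes are pairwise disjoint and jointly cover G1..G18.
assert CH.isdisjoint(CM) and CH.isdisjoint(CL) and CM.isdisjoint(CL)
assert CH | CM | CL == ALL_GROUPS
```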
Hyper-Parameter | Value |
---|---|
continuing | true |
max_actions | 100 |
max_playouts | −1 |
playout_time | 100 |
max_depth | 10 |
randomized_ab_repeats | 10 |
max_cycles | 3000 |
max_inactive_cycles | 300 |
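For reproducibility, the hyper-parameters above map directly onto a configuration structure. A minimal sketch follows; the dictionary layout and the `AGENT_CONFIG` name are assumptions for illustration, not the paper's actual configuration format:

```python
# Hyper-parameter values transcribed from the table above.
AGENT_CONFIG = {
    "continuing": True,
    "max_actions": 100,
    "max_playouts": -1,    # value as listed in the table
    "playout_time": 100,
    "max_depth": 10,
    "randomized_ab_repeats": 10,
    "max_cycles": 3000,
    "max_inactive_cycles": 300,
}
```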
Short Game Feature Label | Experimental microRTS Game Feature Description | Map |
---|---|---|
GF1_RG | Worker collects at least 2 units of a resource in 2 trips. | basesWorkers8x8.xml (standard map, which comes with microRTS) |
GF2_EOBJ | A light battle unit always hits with 2 points of damage. | melee4x4light2.xml (standard map) |
GF3_DIFA | The opponent is aided by 5 more heavy battle units, resulting in the player losing every game. | basesWorkers8x8.xml (standard map with 5 heavy units added for the opponent) |
GF4_CONS | If the player tries to, they must be able to create 1 barracks. | basesWorkers8x8.xml (standard map) |
GF5_AST | The game score is calculated on the basis of raw features of the game state (no. of workers and no. of light, heavy and ranged units multiplied by their cost factors) and must represent the game state status correctly when presented to the player. | melee14x12Mixed18.xml (standard map) |
GF6_SB | The player cannot destroy the enemy in a specific part of the map due to a wall. | basesWorkers12x12.xml (standard map with a wall placed in the middle of the map)
GF7_EXPL | If the player attempts discovery, they must be able to create 1 light battle unit through the use of the barracks. | basesWorkers8x8.xml (standard map)
GF8_FANT | The player is confronted with a special game unit (Super-Heavy battle unit with ten times the armor of a normal heavy one), which cannot be destroyed with the given resources. | basesWorkers8x8.xml (standard map with Super-Heavy battle units added to help the opponent)
GF9_PARI | The player cannot operate in a partially observable environment and therefore cannot destroy the opponent in such an environment. | basesWorkers12x12.xml (standard map with a partially observable environment enabled)
GF10_DIFC | The player cannot destroy 2 barracks guarded by 3 rushing heavy units with access to 60 units of resources. | 8x8_2barracks3rushingHeavy60res.xml (custom map)
GF11_GCMP | The player can destroy three barracks before the time runs out. | 8x8_3barracks.xml (custom map) |
GF12_INTE | If the player controlling four heavy battle units finds an enemy barracks in a large map (with obstacles and walls), the enemy barracks are always destroyed. | chambers32x32.xml (standard map with four heavy battle units and barracks added) |
GF13_RG2 | The worker is always destroyed when trying to gather resources. | 8x8_workerDestroyed.xml (custom map with the base and resources on different parts of the map and four light battle units in the middle) |
Playtesting Agent | Groups and Game Features (Valid num./Invalid num.) | Metric Score |
---|---|---|
RandomAI | G1GF3(50, 0), G2GF5(50, 0), G3GF10(50, 0), G4GF6(50, 0), G5GF11(50, 0), G6GF7(50, 0), G7GF8(50, 0), G8GF12(50, 0), G12GF9(50, 0), G13GF2(50, 0), G14GF4(50, 0), G16GF1(50, 0), G17GF13(50, 0) | 0 |
RandomBiasedAI | G1GF3(49, 1), G2GF5(50, 0), G3GF10(50, 0), G4GF6(50, 0), G5GF11(50, 0), G6GF7(50, 0), G7GF8(28, 22), G8GF12(50, 0), G12GF9(50, 0), G13GF2(50, 0), G14GF4(50, 0), G16GF1(50, 0), G17GF13(50, 0) | 0.24 |
MonteCarlo | G1GF3(50, 0), G2GF5(50, 0), G3GF10(50, 0), G4GF6(50, 0), G5GF11(50, 0), G6GF7(50, 0), G7GF8(0, 50), G8GF12(50, 0), G12GF9(50, 0), G13GF2(50, 0), G14GF4(50, 0), G16GF1(50, 0), G17GF13(50, 0) | 0.5 |
TiamatBot | G1GF3(21, 29), G2GF5(50, 0), G3GF10(50, 0), G4GF6(50, 0), G5GF11(50, 0), G6GF7(50, 0), G7GF8(0, 50), G8GF12(50, 0), G12GF9(50, 0), G13GF2(50, 0), G14GF4(50, 0), G16GF1(50, 0), G17GF13(50, 0) | 1.08 |
IDRTMinimax | G1GF3(50, 0), G2GF5(50, 0), G3GF10(50, 0), G4GF6(50, 0), G5GF11(50, 0), G6GF7(50, 0), G7GF8(0, 50), G8GF12(50, 0), G12GF9(50, 0), G13GF2(50, 0), G14GF4(50, 0), G16GF1(50, 0), G17GF13(50, 0) | 0.5 |
IDRTMinimaxRandomized | G1GF3(50, 0), G2GF5(50, 0), G3GF10(50, 0), G4GF6(50, 0), G5GF11(50, 0), G6GF7(50, 0), G7GF8(0, 50), G8GF12(50, 0), G12GF9(50, 0), G13GF2(50, 0), G14GF4(50, 0), G16GF1(50, 0), G17GF13(50, 0) | 0.5 |
IDABCD | G1GF3(49, 1), G2GF5(50, 0), G3GF10(50, 0), G4GF6(50, 0), G5GF11(50, 0), G6GF7(50, 0), G7GF8(0, 50), G8GF12(50, 0), G12GF9(50, 0), G13GF2(50, 0), G14GF4(50, 0), G16GF1(50, 0), G17GF13(50, 0) | 0.52 |
UCT | G1GF3(24, 26), G2GF5(50, 0), G3GF10(50, 0), G4GF6(50, 0), G5GF11(50, 0), G6GF7(50, 0), G7GF8(0, 50), G8GF12(50, 0), G12GF9(50, 0), G13GF2(50, 0), G14GF4(50, 0), G16GF1(50, 0), G17GF13(50, 0) | 1.02 |
PuppetSearchMCTS | G1GF3(46, 4), G2GF5(50, 0), G3GF10(50, 0), G4GF6(50, 0), G5GF11(50, 0), G6GF7(50, 0), G7GF8(0, 50), G8GF12(50, 0), G12GF9(50, 0), G13GF2(50, 0), G14GF4(50, 0), G16GF1(50, 0), G17GF13(50, 0) | 0.58 |
NaiveMCTS#A | G1GF3(11, 39), G2GF5(50, 0), G3GF10(50, 0), G4GF6(50, 0), G5GF11(50, 0), G6GF7(50, 0), G7GF8(0, 50), G8GF12(50, 0), G12GF9(50, 0), G13GF2(50, 0), G14GF4(50, 0), G16GF1(50, 0), G17GF13(50, 0) | 1.28 |
NaiveMCTS#B | G1GF3(12, 38), G2GF5(50, 0), G3GF10(50, 0), G4GF6(50, 0), G5GF11(50, 0), G6GF7(50, 0), G7GF8(0, 50), G8GF12(50, 0), G12GF9(50, 0), G13GF2(50, 0), G14GF4(50, 0), G16GF1(50, 0), G17GF13(50, 0) | 1.26 |
MixedBot | G1GF3(34, 16), G2GF5(50, 0), G3GF10(50, 0), G4GF6(50, 0), G5GF11(50, 0), G6GF7(50, 0), G7GF8(0, 50), G8GF12(50, 0), G12GF9(50, 0), G13GF2(50, 0), G14GF4(50, 0), G16GF1(50, 0), G17GF13(50, 0) | 0.82 |
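The scores in the table can be reproduced by summing, over each tested group, the fraction of invalid runs weighted by the group's importance class. The sketch below uses weights inferred from the reported numbers (W1 = 1.0 for high importance, W2 = 0.5 for medium; W3 is never exercised here because all low-importance groups are empty). It is a reconstruction consistent with the table, not the paper's reference implementation:

```python
RUNS = 50  # each (valid, invalid) pair in the table sums to 50 runs

# Importance class per tested group, from the classification table above.
GROUP_CLASS = {g: "high" for g in ("G1", "G3", "G5", "G13", "G14", "G16", "G17")}
GROUP_CLASS.update({g: "medium" for g in ("G2", "G4", "G6", "G7", "G8", "G12")})

# Weights inferred from the reported scores (an assumption of this sketch).
WEIGHTS = {"high": 1.0, "medium": 0.5}

def metric_score(invalid_by_group):
    """Sum of weighted invalid-run fractions over the tested groups."""
    return sum(WEIGHTS[GROUP_CLASS[g]] * inv / RUNS
               for g, inv in invalid_by_group.items())

# Reproducing three rows of the table (only nonzero invalid counts listed):
assert round(metric_score({"G1": 29, "G7": 50}), 2) == 1.08  # TiamatBot
assert round(metric_score({"G1": 1, "G7": 22}), 2) == 0.24   # RandomBiasedAI
assert round(metric_score({"G7": 50}), 2) == 0.5             # MonteCarlo
```

Under this reading, a higher score means the agent exposed more invalid feature runs, with failures in high-importance groups counting twice as much as failures in medium-importance ones.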
© 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
Novak, D.; Verber, D.; Dugonik, J.; Fister, I., Jr. A Comparison of Evolutionary and Tree-Based Approaches for Game Feature Validation in Real-Time Strategy Games with a Novel Metric. Mathematics 2020, 8, 688. https://doi.org/10.3390/math8050688