# IORand: A Procedural Videogame Level Generator Based on a Hybrid PCG Algorithm


## Abstract


## 1. Introduction

- We introduce new metrics applicable to any game level: risk, obstruction, precision, reward, motivation, and distance. The motivation metric considers both the potential reward available in an interaction and the precision it requires.
- Through the designed reward function, our hybrid PCG algorithm generates a diversity of levels that meet a given gaming experience.

## 2. Problem Statement

## 3. Proposed Level Evaluation

## 4. Feature Selection

## 5. The IORand Algorithm

#### 5.1. Environment

#### 5.2. Reward

**Definition of the reward**. The reward function indicates how close the agent is to meeting the goal; it is defined from an evaluation of the state into which the agent brings the environment through its actions. Usually, the higher the reward, the better the agent’s performance.

**Rhythm accuracy**. For this part of the reward, a bell-shaped function is used, whose value is calculated with Equation (5). In this function, ‘${x}_{i}$’ is the rhythm of the i-th feature; ‘$\sigma$’ is the desired variance of the rhythm, which widens or narrows the bell, making the evaluation of the rhythm more relaxed (wider) or stricter (narrower); and ‘$\mu$’ is the target beat, at which the bell is centered and therefore reaches its maximum value.
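As an illustration, the bell-shaped evaluation can be sketched as a standard Gaussian bell; Equation (5) is not reproduced here, so the exact form used in the paper may differ:

```python
import math

def rhythm_accuracy(x_i, mu, sigma):
    """Bell-shaped rhythm reward (a sketch of Equation (5), assuming a
    standard Gaussian bell).

    x_i   -- measured rhythm of the i-th feature
    mu    -- target beat (center of the bell, where the reward peaks at 1)
    sigma -- desired variance; larger values widen the bell and relax
             the evaluation, smaller values tighten it
    """
    return math.exp(-((x_i - mu) ** 2) / (2 * sigma ** 2))
```

With this form, a measured rhythm equal to the target beat scores 1, and the score decays symmetrically as the rhythm drifts from $\mu$, faster for small $\sigma$.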

**Value range accuracy**. For this portion of the reward, the goal is to assess how close the range of measured values is to the range of target values. For this purpose, two different comparisons between the ranges are proposed. To define how equal two ranges are, we compare their amplitudes and their centers: the closer two ranges are in terms of their centers and their amplitudes, the more similar they are. Let $\gamma$ be the maximum measured value, $\delta$ the minimum measured value, $\lambda$ the maximum target value, and $\tau$ the minimum target value. The first of these comparisons, shown in Equation (7), is the amplitude similarity (${S}_{a}$), which is given by 1 minus the normalized distance between the amplitudes.
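A minimal sketch of the amplitude comparison follows; since Equation (7) only specifies "1 minus the normalized distance between the amplitudes", the normalizer chosen here (the larger of the two amplitudes) is an assumption:

```python
def amplitude_similarity(gamma, delta, lam, tau):
    """Amplitude similarity S_a (sketch of Equation (7)): 1 minus the
    distance between the two range amplitudes, normalized here by the
    larger amplitude (the paper's exact normalizer is an assumption).

    gamma, delta -- maximum and minimum measured values
    lam, tau     -- maximum and minimum target values
    """
    a_measured = gamma - delta   # amplitude of the measured range
    a_target = lam - tau         # amplitude of the target range
    norm = max(a_measured, a_target, 1e-9)  # guard against zero division
    return 1.0 - abs(a_measured - a_target) / norm
```

An analogous comparison on the range centers, $(\gamma+\delta)/2$ versus $(\lambda+\tau)/2$, would give the second similarity described in the text.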

**Prototype levels**. To calculate the reward value for each of the defined game experiences (simple, jumps, obstacles), their target values ($\mu$, $\sigma$, $\lambda$, $\tau$, $\alpha$, $\beta$, and w) must be set. The goal is that, when the reward in Equation (13) is adjusted with the target values of a specific gaming experience and used to evaluate a level of that same experience, a value close to 1 is obtained, indicating that the level meets the desired gaming experience; and that, when a level providing another gaming experience is evaluated, the calculated value is close to 0, indicating that the level does not meet the desired gaming experience. Therefore, an adequate combination of target values must be found for each game experience, maximizing the reward on levels of that same experience and minimizing it on levels of the rest.
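To make the role of the per-metric weights w concrete, the following sketch assumes Equation (13) aggregates the per-metric scores as a weighted sum with weights summing to 1 (consistent with the w column of Table 2); the paper's exact aggregation may differ:

```python
def level_reward(metric_scores, weights):
    """Overall level reward (a sketch; Equation (13) is assumed here to be
    a weighted sum of per-metric scores in [0, 1]).

    metric_scores -- dict mapping metric name to its score in [0, 1]
    weights       -- dict mapping metric name to its weight w, summing to 1
    """
    assert abs(sum(weights.values()) - 1.0) < 1e-6
    return sum(weights[m] * metric_scores[m] for m in weights)
```

For example, with the Simple-experience weights from Table 2 (level reward 0.29, level motivation 0.26, risk 0.22, bonus reward 0.15, distance 0.08), a level scoring 1 on every metric obtains a total reward of 1.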

#### 5.3. Agent

- The agent receives an observation of the environment. In our implementation, this translates to a 16 × 29 matrix with symbols representing game items or empty spaces in the level.
- If necessary, the observation is preprocessed before being propagated through the ANN; in our implementation, the symbols of the matrix are already in numerical format, so no preprocessing was necessary.
- This numerical matrix is propagated through the ANN and a vector whose components have values between 0 and 1, each associated with a possible action, is obtained as a network output. The closer this value is to 1, the greater the reward that the ANN has estimated for said action.
- The $\epsilon$-greedy policy is applied, in which the agent chooses an action at random with probability $\epsilon$ and, with the complementary probability $1-\epsilon$, chooses the most convenient action according to the ANN estimation.
- The chosen action (a) is executed on the observed state of the environment (s), causing a transition to the resulting state (s’) and the calculation of a reward (r).
- The agent receives, from the environment, the observation of the resulting state and the reward of its previous action.
- The agent stores the data of the previous transition, the observed state (s), the chosen action (a), the resulting state (s’), and the reward obtained (r).
- To adjust the weights and biases of the neural network, it is necessary to:
  - Store transitions (agent experience) in a replay memory.
  - Once there is a certain amount of data in memory, randomly sample the stored transitions to train the agent’s neural network. The weights and biases are adjusted only in the neurons associated with the chosen action in each transition.

- With each training step, the value of $\epsilon$ is reduced by a percentage of its current value (our implementation multiplies it by 0.996).
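The action-selection and decay steps above can be sketched as follows; the function names are illustrative, and the Q-value estimates would come from the ANN described below:

```python
import random

EPSILON_DECAY = 0.996  # decay factor from the implementation described above

def choose_action(q_values, epsilon):
    """Epsilon-greedy selection over the ANN's estimated action values:
    explore with probability epsilon, otherwise exploit the best estimate."""
    if random.random() < epsilon:
        return random.randrange(len(q_values))                   # explore
    return max(range(len(q_values)), key=q_values.__getitem__)   # exploit

def decay(epsilon):
    """Reduce epsilon by a fixed percentage after each training step."""
    return epsilon * EPSILON_DECAY
```

Starting from $\epsilon = 1$, repeated decay gradually shifts the agent from exploration toward exploitation of the ANN's reward estimates.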

**Replay memory**. The replay memory acts as the training set for the agent’s neural network, following the DQN algorithm. In our implementation, this memory stores 150,000 transitions in a queue: if an additional element arrives that overflows the queue, the first stored element is eliminated and the new one is stored, preserving the most recent experience. This memory is sampled at random so that the inputs to the neural network are uncorrelated.
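The queue-with-eviction behavior described above maps directly onto a bounded deque; this is a sketch, with the 150,000 capacity taken from the implementation:

```python
import random
from collections import deque

class ReplayMemory:
    """FIFO replay memory: a bounded queue that evicts the oldest
    transition when full and is sampled uniformly at random so that
    training inputs are uncorrelated."""

    def __init__(self, capacity=150_000):
        # deque with maxlen drops the oldest element on overflow
        self.buffer = deque(maxlen=capacity)

    def store(self, state, action, reward, next_state):
        """Store one transition (s, a, r, s')."""
        self.buffer.append((state, action, reward, next_state))

    def sample(self, batch_size):
        """Draw a random, uncorrelated batch of stored transitions."""
        return random.sample(self.buffer, batch_size)
```
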

**Implemented artificial neural network architecture**. For our implementation, a convolutional neural network (CNN) was used in conjunction with a vanilla (fully connected) network. The CNN processes the level matrix, detecting spatial relationships in it and extracting features from the input data. The vanilla network acts as the regressor, processing the features extracted by the CNN and calculating from them the expected reward for executing each of the actions on the observed state (network input). The architecture of the presented network is shown in Figure 5.

**Action space**. The action space is everything that the agent can choose to do. The actions alter the environment causing a transition of states in it.

#### 5.4. Semi-Random Content Generators

**Move**. The agent detects platforms from the map array, then creates a list of platforms of the selected type. From this list, the generator randomly chooses an element and moves all of its blocks to an adjacent position. This action is only executed if there are enough empty spaces to place the selected element in the adjacent position.

**Remove**. The agent detects platforms from the map array, then creates a list of platforms of the selected type. From this list, the generator chooses an element at random and changes all of the blocks that compose it to empty spaces.
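A minimal sketch of the "Remove" generator follows, assuming the level is a grid of symbols and each detected platform is a list of block coordinates (the platform-detection step is assumed to have run already; names and the empty-space symbol are illustrative):

```python
import random

EMPTY = 0  # symbol assumed to represent an empty space in the level grid

def remove_platform(level, platforms):
    """Sketch of the "Remove" generator: choose one detected platform at
    random and change all of the blocks that compose it to empty spaces.

    level     -- 2D list of symbols (mutated in place)
    platforms -- list of platforms, each a list of (row, col) coordinates
    """
    if not platforms:
        return level
    chosen = random.choice(platforms)
    for r, c in chosen:
        level[r][c] = EMPTY
    return level
```
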

**Change (type or direction)**. The agent detects platforms from the map matrix, then creates a list of platforms of the selected type. Of these platforms, the generating algorithm chooses one at random. Depending on the type of platform, there is a list of possible values that it can take. The algorithm selects a new value at random, making sure it is different from the old one.

**Insert**. The algorithm determines the space required to place the selected platform in the level slice. To place the platform, it picks random coordinates in the space between the starting and ending platforms until it finds one surrounded by as much empty space as required; once such a space is found, the label(s) that make up the platform are inserted.

**Create and insert**. In order to create platforms, the generator must be told the dimensions of the platform to be created, as well as the number of blocks to be used to build it. This information is provided by the user before executing the PCG algorithm. To create a platform, the algorithm creates a matrix of empty spaces with the specified dimensions and selects a random coordinate within it. At this coordinate, the generating algorithm inserts the label corresponding to the selected platform type. Then, in positions adjacent to the platform being created, it adds the label again. This process is repeated until as many labels as the defined number of blocks have been added or until the matrix is filled. Once the platform is created, the previously defined “insert” method is executed.
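The growth process just described can be sketched as follows (a simplified illustration; function and symbol names are assumptions):

```python
import random

EMPTY = 0  # symbol assumed to represent an empty space

def create_platform(height, width, label, n_blocks):
    """Sketch of the "Create" step: start from an empty matrix of the given
    dimensions, seed a random cell with the platform label, then repeatedly
    add the label at cells adjacent to the growing platform until n_blocks
    labels have been placed or the matrix is filled."""
    grid = [[EMPTY] * width for _ in range(height)]
    r, c = random.randrange(height), random.randrange(width)
    grid[r][c] = label
    placed = 1
    target = min(n_blocks, height * width)  # cannot exceed matrix size
    while placed < target:
        # collect empty cells adjacent to the platform built so far
        frontier = [
            (i + di, j + dj)
            for i in range(height) for j in range(width)
            if grid[i][j] == label
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1))
            if 0 <= i + di < height and 0 <= j + dj < width
            and grid[i + di][j + dj] == EMPTY
        ]
        i, j = random.choice(frontier)
        grid[i][j] = label
        placed += 1
    return grid
```

The resulting grid would then be handed to the “insert” method to place the new platform in the level slice.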

## 6. Metrics to Evaluate PCG Algorithm Performance

**Playability**. This measure indicates whether the generated level is playable; that is, whether the player will be able to finish the game. In this case, a level is playable if the player can take the penguin from the start point to the end point of the level. It is a binary metric: the characteristic is either fulfilled or not, so the possible values are {0, 1}.
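As an illustration only, a reachability check of this kind can be sketched with breadth-first search over traversable cells; this simplification ignores the jump physics that a real platformer playability check must model:

```python
from collections import deque

def is_playable(level, start, goal, walkable):
    """Illustrative playability check: breadth-first search from start
    to goal over cells whose symbol is in `walkable`. Returns 1 if the
    goal is reachable, 0 otherwise (the binary metric described above).
    Real platformer reachability also depends on jump physics, which
    this sketch deliberately omits."""
    rows, cols = len(level), len(level[0])
    seen, queue = {start}, deque([start])
    while queue:
        r, c = queue.popleft()
        if (r, c) == goal:
            return 1  # playable
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and level[nr][nc] in walkable
                    and (nr, nc) not in seen):
                seen.add((nr, nc))
                queue.append((nr, nc))
    return 0  # not playable
```
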

**Game experience**. This refers to the fulfillment of the objective task; that is, whether the levels provide the desired gaming experiences. To measure this feature, we use the reward evaluation obtained at each generated level. Our reward returns values in the range [0, 1].

**Novelty**. This measurement requires the calculation of the degree of difference between two slices A and B, which is obtained by calculating the number of operations necessary to transform slice A into slice B. This degree of difference is normalized with respect to the maximum number of operations to transform one string into another; therefore, their values are in the range [0, 1]. The novelty of a slice is the average degree of difference between it and the rest of the slices produced by the algorithm in one run.
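The novelty computation can be sketched as below; since the exact transformation operations are not enumerated here, the degree of difference is simplified to the fraction of differing cells between two equal-sized slices (a Hamming-style approximation of the operation count):

```python
def degree_of_difference(slice_a, slice_b):
    """Normalized degree of difference between two equal-sized slices,
    sketched as the fraction of cells that differ (the paper counts
    transformation operations; a cell-wise count is a simplification).
    The result is in [0, 1]."""
    cells = [(a, b) for ra, rb in zip(slice_a, slice_b)
             for a, b in zip(ra, rb)]
    return sum(a != b for a, b in cells) / len(cells)

def novelty(target, others):
    """Novelty of a slice: its average degree of difference from the
    rest of the slices produced by the algorithm in one run."""
    return sum(degree_of_difference(target, o) for o in others) / len(others)
```

The effort metric described next reuses `degree_of_difference`, applied between a produced slice and the initial slice it was generated from.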

**Effort**. For this measure, the degree of difference is also used. The effort of a slice is given by the degree of difference between it and the initial slice from which it was produced, indicating the degree of changes necessary to reach the final level from the initial point of content generation.

**Iterativity**. It relates to the number of iterations needed by the algorithm to produce content.

- High—500 actions (steps);
- Moderate—300 actions;
- Fair—150 actions;
- Low—75 actions.

## 7. Experiments and Results

## 8. Discussion

## 9. Conclusions

## Author Contributions

## Funding

## Institutional Review Board Statement

## Informed Consent Statement

## Data Availability Statement

## Conflicts of Interest

## References

1. Togelius, J.; Kastbjerg, E.; Schedl, D.; Yannakakis, G.N. What is procedural content generation? Mario on the borderline. PCGames **2010**, 11, 3:1–3:6.
2. De Kegel, B.; Haahr, M. Procedural puzzle generation: A survey. IEEE Trans. Games **2019**, 12, 21–40.
3. Risi, S.; Togelius, J. Increasing generality in machine learning through procedural content generation. Nat. Mach. Intell. **2020**, 2, 428–436.
4. Shaker, N.; Togelius, J.; Nelson, M.J. Procedural Content Generation in Games; Springer: Berlin/Heidelberg, Germany, 2016.
5. Summerville, A.; Snodgrass, S.; Guzdial, M.; Holmgård, C.; Hoover, A.K.; Isaksen, A.; Nealen, A.; Togelius, J. Procedural content generation via machine learning (PCGML). IEEE Trans. Games **2018**, 10, 257–270.
6. Liu, J.; Snodgrass, S.; Khalifa, A.; Risi, S.; Yannakakis, G.N.; Togelius, J. Deep learning for procedural content generation. Neural Comput. Appl. **2021**, 33, 19–37.
7. Togelius, J.; Yannakakis, G.N.; Stanley, K.O.; Browne, C. Search-based procedural content generation: A taxonomy and survey. IEEE Trans. Comput. Intell. Games **2011**, 3, 172–186.
8. Yannakakis, G.N.; Togelius, J. Artificial Intelligence and Games; Springer: Berlin/Heidelberg, Germany, 2018; Available online: gameaibook.org (accessed on 8 March 2022).
9. Alvarez, A.; Dahlskog, S.; Font, J.; Togelius, J. Empowering quality diversity in dungeon design with interactive constrained MAP-Elites. In Proceedings of the 2019 IEEE Conference on Games (CoG), London, UK, 20–23 August 2019; pp. 1–8.
10. Ashlock, D.; Lee, C.; McGuinness, C. Search-based procedural generation of maze-like levels. IEEE Trans. Comput. Intell. Games **2011**, 3, 260–273.
11. Frade, M.; de Vega, F.F.; Cotta, C. Evolution of artificial terrains for video games based on obstacles edge length. In Proceedings of the IEEE Congress on Evolutionary Computation, Barcelona, Spain, 18–23 July 2010; pp. 1–8.
12. Gravina, D.; Khalifa, A.; Liapis, A.; Togelius, J.; Yannakakis, G.N. Procedural content generation through quality diversity. In Proceedings of the 2019 IEEE Conference on Games (CoG), London, UK, 20–23 August 2019; pp. 1–8.
13. Togelius, J.; Preuss, M.; Beume, N.; Wessing, S.; Hagelbäck, J.; Yannakakis, G.N.; Grappiolo, C. Controllable procedural map generation via multiobjective evolution. Genet. Program. Evolvable Mach. **2013**, 14, 245–277.
14. Valtchanov, V.; Brown, J.A. Evolving dungeon crawler levels with relative placement. In Proceedings of the Fifth International C* Conference on Computer Science and Software Engineering, Montreal, QC, Canada, 27 June 2012; pp. 27–35.
15. Earle, S. Using Fractal Neural Networks to Play SimCity 1 and Conway’s Game of Life at Variable Scales. arXiv **2020**, arXiv:2002.03896.
16. Chen, Z.; Amato, C.; Nguyen, T.H.D.; Cooper, S.; Sun, Y.; El-Nasr, M.S. Q-DeckRec: A fast deck recommendation system for collectible card games. In Proceedings of the 2018 IEEE Conference on Computational Intelligence and Games (CIG), Reno, NV, USA, 14–17 August 2018; pp. 1–8.
17. Khalifa, A.; Bontrager, P.; Earle, S.; Togelius, J. PCGRL: Procedural content generation via reinforcement learning. In Proceedings of the AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment, Worchester, MA, USA, 19–23 October 2020; Volume 16, pp. 95–101.
18. Guzdial, M.; Liao, N.; Chen, J.; Chen, S.Y.; Shah, S.; Shah, V.; Reno, J.; Smith, G.; Riedl, M.O. Friend, collaborator, student, manager: How design of an AI-driven game level editor affects creators. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, Scotland, UK, 4–9 May 2019; pp. 1–13.
19. Delarosa, O.; Dong, H.; Ruan, M.; Khalifa, A.; Togelius, J. Mixed-Initiative Level Design with RL Brush. arXiv **2020**, arXiv:2008.02778.
20. Mott, J.; Nandi, S.; Zeller, L. Controllable and coherent level generation: A two-pronged approach. In Proceedings of the 6th Experimental AI in Games Workshop at AIIDE 2019, Honolulu, HI, USA, 27 January 2019.
21. Doran, J.; Parberry, I. A prototype quest generator based on a structural analysis of quests from four MMORPGs. In Proceedings of the 2nd International Workshop on Procedural Content Generation in Games, New York, NY, USA, 28 June 2011; pp. 1–8.
22. Dormans, J. Adventures in level design: Generating missions and spaces for action adventure games. In Proceedings of the 2010 Workshop on Procedural Content Generation in Games, Monterey, CA, USA, 18 June 2010; pp. 1–8.
23. Dormans, J. Level design as model transformation: A strategy for automated content generation. In Proceedings of the 2nd International Workshop on Procedural Content Generation in Games, New York, NY, USA, 28 June 2011; pp. 1–8.
24. Dormans, J.; Bakkes, S. Generating missions and spaces for adaptable play experiences. IEEE Trans. Comput. Intell. Games **2011**, 3, 216–228.
25. Johnson, L.; Yannakakis, G.N.; Togelius, J. Cellular automata for real-time generation of infinite cave levels. In Proceedings of the 2010 Workshop on Procedural Content Generation in Games, New York, NY, USA, 18 June 2010; pp. 1–4.
26. Karavolos, D.; Liapis, A.; Yannakakis, G.N. Pairing character classes in a deathmatch shooter game via a deep-learning surrogate model. In Proceedings of the 13th International Conference on the Foundations of Digital Games, Malmö, Sweden, 7–10 August 2018; pp. 1–10.
27. Guzdial, M.; Reno, J.; Chen, J.; Smith, G.; Riedl, M. Explainable PCGML via game design patterns. arXiv **2018**, arXiv:1809.09419.
28. Guzdial, M.; Liao, N.; Riedl, M. Co-creative level design via machine learning. arXiv **2018**, arXiv:1809.09420.
29. Summerville, A.; Guzdial, M.; Mateas, M.; Riedl, M. Learning player tailored content from observation: Platformer level generation from video traces using LSTMs. In Proceedings of the AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment, Burlingame, CA, USA, 8–9 October 2016; Volume 12.
30. Karavolos, D.; Liapis, A.; Yannakakis, G. Learning the patterns of balance in a multi-player shooter game. In Proceedings of the 12th International Conference on the Foundations of Digital Games, Hyannis, MA, USA, 14–17 August 2017; pp. 1–10.
31. Kamal, K.R.; Uddin, Y.S. Parametrically controlled terrain generation. In Proceedings of the 5th International Conference on Computer Graphics and Interactive Techniques in Australia and Southeast Asia, Perth, Australia, 1–4 December 2007; pp. 17–23.
32. Tobin, J.; Fong, R.; Ray, A.; Schneider, J.; Zaremba, W.; Abbeel, P. Domain randomization for transferring deep neural networks from simulation to the real world. In Proceedings of the 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Vancouver, BC, Canada, 24–28 September 2017; pp. 23–30.
33. Belhadj, F. Terrain modeling: A constrained fractal model. In Proceedings of the 5th International Conference on Computer Graphics, Virtual Reality, Visualisation and Interaction in Africa, Grahamstown, South Africa, 29–31 October 2007; pp. 197–204.
34. Guzdial, M.J.; Riedl, M.O. Combinatorial Creativity for Procedural Content Generation via Machine Learning; AAAI Workshops: New Orleans, LA, USA, 2018; pp. 557–564.
35. Petrovas, A.; Bausys, R. Procedural Video Game Scene Generation by Genetic and Neutrosophic WASPAS Algorithms. Appl. Sci. **2022**, 12, 772.
36. Di Liello, L.; Ardino, P.; Gobbi, J.; Morettin, P.; Teso, S.; Passerini, A. Efficient Generation of Structured Objects with Constrained Adversarial Networks. Adv. Neural Inf. Process. Syst. **2020**, 33, 14663–14674.
37. Fontaine, M.C.; Liu, R.; Togelius, J.; Hoover, A.K.; Nikolaidis, S. Illuminating Mario scenes in the latent space of a generative adversarial network. arXiv **2020**, arXiv:2007.05674.
38. Awiszus, M.; Schubert, F.; Rosenhahn, B. TOAD-GAN: Coherent style level generation from a single example. In Proceedings of the AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment, 19–23 October 2020; Volume 16, pp. 10–16.
39. On, C.K.; Foong, N.W.; Teo, J.; Ibrahim, A.A.A.; Guan, T.T. Rule-based procedural generation of item in role-playing game. Int. J. Adv. Sci. Eng. Inf. Technol. **2017**, 7, 1735.
40. Togelius, J.; Justinussen, T.; Hartzen, A. Compositional procedural content generation. In Proceedings of the Third Workshop on Procedural Content Generation in Games, Raleigh, NC, USA, 29 May–1 June 2012; pp. 1–4.
41. Gellel, A.; Sweetser, P. A Hybrid Approach to Procedural Generation of Roguelike Video Game Levels. In Proceedings of the International Conference on the Foundations of Digital Games, Bugibba, Malta, 15–18 September 2020; pp. 1–10.
42. Torres, J.A. Pingu Run GitHub Repository. Available online: https://github.com/JAlbertoTorres/Pingu-run (accessed on 23 February 2022).
43. Torres, J.A. Gameplay de Pingu Run. Available online: https://youtu.be/TZza1W5kSOI (accessed on 23 February 2022).
44. IJsselsteijn, W.A.; de Kort, Y.A.; Poels, K. The Game Experience Questionnaire; Eindhoven University of Technology: Eindhoven, The Netherlands, 2013.
45. Karpouzis, K.; Yannakakis, G.N.; Shaker, N.; Asteriadis, S. The platformer experience dataset. In Proceedings of the 2015 International Conference on Affective Computing and Intelligent Interaction (ACII), Xi’an, China, 21–24 September 2015; pp. 712–718.
46. Karavolos, D.; Bouwer, A.; Bidarra, R. Mixed-Initiative Design of Game Levels: Integrating Mission and Space into Level Generation. In Proceedings of the Foundations of Digital Games, Pacific Grove, CA, USA, 22–25 June 2015.
47. Forsyth, W. Globalized random procedural content for dungeon generation. J. Comput. Sci. Coll. **2016**, 32, 192–201.
48. Aponte, M.V.; Levieux, G.; Natkin, S. Difficulty in videogames: An experimental validation of a formal definition. In Proceedings of the 8th International Conference on Advances in Computer Entertainment Technology, Lisbon, Portugal, 8–11 November 2011; pp. 1–8.
49. Summerville, A.; Mariño, J.R.; Snodgrass, S.; Ontañón, S.; Lelis, L.H. Understanding Mario: An evaluation of design metrics for platformers. In Proceedings of the 12th International Conference on the Foundations of Digital Games, Hyannis, MA, USA, 14–17 August 2017; pp. 1–10.
50. Mourato, F.; Birra, F.; dos Santos, M.P. Difficulty in action based challenges: Success prediction, players’ strategies and profiling. In Proceedings of the 11th Conference on Advances in Computer Entertainment Technology, Madeira, Portugal, 11–14 November 2014; pp. 1–10.
51. Mourato, F.; Santos, M.P.D. Measuring difficulty in platform videogames. In Proceedings of the 4ª Conferência Nacional em Interação Pessoa-Máquina, Grupo Português de Computação Gráfica/Eurographics, Aveiro, Portugal, 13–15 October 2010; pp. 173–180.
52. Díaz, A.C. Procedural Generation Applied to a Video Game Level Design. Bachelor’s Thesis, Universitat Politècnica de Catalunya, Facultat d’Informàtica de Barcelona, Barcelona, Spain, 2015; pp. 41–44.
53. Cormen, T.H.; Leiserson, C.E.; Rivest, R.L.; Stein, C. Introduction to Algorithms; MIT Press: Cambridge, MA, USA, 2009.
54. Pingu-Run. Pingu-Run Code. Available online: https://github.com/JAlbertoTorres/ChunksCreator (accessed on 8 March 2022).

**Figure 2.** Pingu Run experience examples. (**a**) Simple experience. (**b**) Obstacle experience. (**c**) Jump experience.

**Figure 6.** Trained agents’ performance rate. (**a**) Jump agent. (**b**) Obstacle agent. (**c**) Simple agent.

| Technique | Generation Process | Determinism | Controllability | Interactivity | Novelty | Playability |
|---|---|---|---|---|---|---|
| Evol. Algorithms [9,10,11,12,13,14,35] | Search-based | Moderate | Moderate | High | Fair | High |
| Reinforcement Learning [15,16,17,18,19,20] | Search-based | Moderate | Moderate | High | Fair | High |
| Cellular Automata [25] | Gen. and test | Fair | Low | Fair | Moderate | Fair |
| Grammars [21,22,23,24] | Constructive | High | High | Low | Low | High |
| Machine Learning: Supervised Learning [26,27,28,29,30], Adversarial Learning [36,37,38] | Constructive | High | High | Low | Low | High |
| Random Generators [31,32] | Gen. and test | Low | Low | Moderate | High | Low |
| Rule-based [39] | Constructive | High | High | Low | Low | High |

**Table 2.** Target values for each game experience.

| Game Experience | Metric | $\mathit{\mu}$ | $\mathit{\sigma}$ | $\mathit{\lambda}$ | $\mathit{\tau}$ | $\mathit{\alpha}$ | $\mathit{\beta}$ | w |
|---|---|---|---|---|---|---|---|---|
| Simple | level reward | 2 | 3 | 916 | 0 | 0.95 | 0.05 | 0.29 |
| | level motivation | 0 | 3 | 1406 | 0 | 0.75 | 0.25 | 0.26 |
| | risk | 2 | 0.001 | 491 | 0 | 0.65 | 0.35 | 0.22 |
| | bonus reward | 2 | 2 | 265 | 0 | 0.85 | 0.15 | 0.15 |
| | distance | 4 | 4 | 5 | 1 | 0 | 1 | 0.08 |
| Obstacle | level reward | 4 | 3 | 800 | 0 | 0.95 | 0.01 | 0.25 |
| | level motivation | 2 | 3 | 1384 | −70 | 0.75 | 0.25 | 0.22 |
| | risk | 6 | 4 | 243 | 0 | 0.65 | 0.35 | 0.19 |
| | bonus motivation | 2 | 0.001 | 160 | −184 | 0.35 | 0.65 | 0.13 |
| | bonus reward | 4 | 4 | 90 | 0 | 0.85 | 0.15 | 0.11 |
| | distance | 4 | 4 | 11 | 1 | 0 | 1 | 0.1 |
| Jump | level reward | 1 | 0.001 | 1000 | 0 | 0.95 | 0.05 | 0.25 |
| | level motivation | 1 | 0.001 | 1805 | −56 | 0.75 | 0.25 | 0.22 |
| | risk | 5 | 2 | 155 | 0 | 0.65 | 0.35 | 0.19 |
| | bonus motivation | 1 | 2 | 530 | −68 | 0.35 | 0.65 | 0.13 |
| | bonus reward | 5 | 3 | 350 | 0 | 0.85 | 0.15 | 0.11 |
| | distance | 4 | 4 | 8 | 1 | 0 | 1 | 0.1 |

**Table 3.** Calculated rewards for example levels using objective values from Table 2.

| Level | Reward (Obstacle) | Reward (Simple) | Reward (Jump) |
|---|---|---|---|
| Obstacle 1 | 0.748515 | 0.216647 | 0.169015 |
| Obstacle 2 | 0.741509 | 0.33168 | 0.192525 |
| Obstacle 3 | 0.695833 | 0.253077 | 0.285230 |
| Simple 1 | 0.083789 | 0.843283 | 0.08732 |
| Simple 2 | 0.197562 | 0.765775 | 0.178005 |
| Simple 3 | 0.204252 | 0.829947 | 0.183384 |
| Jump 1 | 0.189115 | 0.255675 | 0.975981 |
| Jump 2 | 0.158059 | 0.268668 | 0.702896 |
| Jump 3 | 0.181520 | 0.261360 | 0.970456 |

| Game Element | Create and Insert | Insert | Move | Remove | Change Type | Change Direction |
|---|---|---|---|---|---|---|
| Platform: static | X | X | X | | | |
| Enemy: lava | X | X | X | | | |
| Bonuses: score (except golden fish) | X | X | X | X | | |
| Platform: mobile | X | X | X | X | X | |
| Enemies: Bear and Troll | X | X | X | X | X | |
| Enemy: Eagle | X | X | X | X | | |
| Bonus: sub-mission | X | X | X | | | |
| Marker: Mid-level (ring) | X | X | X | | | |
| Platform: bouncing | X | X | X | | | |
| Bonus: score, golden fish | X | X | X | | | |
| Bonus: lives | X | X | X | | | |

| PCG Metric | High | Moderate | Fair | Low |
|---|---|---|---|---|
| Playability | 1 | - | - | 0 |
| Game experience | [1, 0.72] | (0.72, 0.45] | (0.45, 0.225] | [0.225, 0] |
| Novelty | [1, 0.75] | (0.75, 0.5] | (0.5, 0.25] | (0.25, 0] |
| Effort | [1, 0.75] | (0.75, 0.5] | (0.5, 0.25] | (0.25, 0] |

| Agent | PCG Metric | High | Moderate | Fair | Low |
|---|---|---|---|---|---|
| Simple | Playability | 64% | - | - | 36% |
| | Game experience | 4% | 38% | 20% | 38% |
| | Novelty | 0% | 56% | 44% | 0% |
| | Effort | 0% | 66% | 34% | 0% |
| Jump | Playability | 70% | - | - | 30% |
| | Game experience | 14% | 20% | 24% | 42% |
| | Novelty | 0% | 60% | 40% | 0% |
| | Effort | 0% | 74% | 26% | 0% |
| Obstacles | Playability | 76% | - | - | 24% |
| | Game experience | 4% | 40% | 28% | 28% |
| | Novelty | 0% | 100% | 0% | 0% |
| | Effort | 4% | 90% | 6% | 0% |

| Technique | Generation Process | Determinism | Controllability | Iterativity | Novelty | Playability |
|---|---|---|---|---|---|---|
| IORand | Hybrid (search-based + generate and test) | Fair | Moderate | High | Moderate | High |

Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

© 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

## Share and Cite

**MDPI and ACS Style**

Moreno-Armendáriz, M.A.; Calvo, H.; Torres-León, J.A.; Duchanoy, C.A.
IORand: A Procedural Videogame Level Generator Based on a Hybrid PCG Algorithm. *Appl. Sci.* **2022**, *12*, 3792.
https://doi.org/10.3390/app12083792
