Virtual worlds are becoming increasingly important in games and simulations. For example, when we look at Bethesda’s Skyrim
], one can see that creating a credible and realistic 3D world takes a large team of professional designers several months. However, it is not only the overall visual quality that plays an important role during the design process but also the sheer number of different objects necessary to avoid a repetitive look in which items like furniture, plants or buildings frequently reoccur [2
]. As a result, game development studios are facing two challenges:
to create a realistic and credible environment in an adequate period of time,
to keep the project within budget so that the result remains affordable for the end user.
The famous game designer Will Wright (The Sims
], Sim City
]) has proven that these challenges not only exist in theory but also in practice. He named it The Mountain of Content Problem
]. Procedural generation
is one approach to addressing these issues. For years, developers have created methods and patterns to generate textures, 3D models or even entire game levels algorithmically, achieving a unique-looking environment that needs no or very little manual adjustment. Another reason for the growing interest in procedural content is replayability: by changing levels or quests, a game offers new stories and impressions in each session, while the game mechanics remain the same so that the player can make use of acquired skills and abilities. Examples of games that make use of these possibilities are Elite: Dangerous
] or Minecraft
], creating new worlds at the beginning of each game if the player wishes. The principle of procedural content generation (PCG) does not serve exclusively the domain of digital games. Board games like The Settlers of Catan
] require the players to create a world map by distributing raw material fields randomly on the table [9
], offering an entirely new experience to the players in each game. This approach has contributed to the game’s enormous success.
Furthermore, PCG has had an influence on the creation of serious games
, focusing on a learning experience in addition to entertainment as in classic games [10
]. In the area of education, changing environments have the effect that students not only repeat knowledge or motion sequences they have learned but also apply their abilities to new situations. These situations can be created randomly using procedural generation mechanisms. Taking a virtual driving school as an example, generated road networks might teach players to orient themselves in unknown areas of the real world. Other scenarios for randomly generated environments can easily be derived for serious games in diverse industries such as transportation, health and marketing.
This survey provides an overview of current research in the area of procedural content generation. It focuses on the creation of realistic modern virtual environments. After a short historical introduction and a definition of the term, each category of objects that form a virtual world will be presented and discussed regarding progress, usability and level of detail in research and application. The survey ends with a tabular comparison and evaluation of these classes of objects in terms of research status and practical application. We conclude with an outlook, also discussing features that can be expected in next-generation game engines and game development tools. The procedural generation of sounds and music is not part of this survey.
2. History of PCG in Digital Games
As early as 1978, Don D. Worth used simple algorithms in his game Beneath Apple Manor
] to create dungeons for this RPG (Role Playing Game) [12
]. An even greater sensation was caused by the game Rogue
], which also used algorithms to create levels for this very famous dungeon crawler [14
]. The game came out in 1980. The name Rogue
served as an eponym for the following generation of Rogue-like games that had the following characteristics:
Contrary to what one might expect, the developers neither chose the generative approach for the level creation in the interest of making the game worth playing again, nor did they face the Mountain of Content Problem
. In fact, in those days, the memory requirements of a game with many different levels were too high, so the decision was made to generate the levels on the fly instead of writing them to disk [15].
From a research perspective, Darwin R. Peachey’s paper Solid Texturing, published in 1985, was one of the first publications explicitly discussing procedural content generation [16
]. Similar to today’s normal mapping
, Peachey proposed a technique that enabled two-dimensional textures to look three-dimensional. This early paper was followed by various other publications discussing, e.g., terrain generation [17
] or design and animation of plants making use of fractals [18
] and hence established procedural content generation as a scientific research area.
The derived methods were not only used in the digital game industry but also in the animation movie sector. For instance, Pixar’s animation tool RenderMan
] offered procedural functions to define textures and materials algorithmically, or to generate a variety of simple primitives by handwritten subroutines. With a growing maturity in research and practice, the generative procedures were used more and more in AAA games (AAA game is a widely used term for games with a high development and promotion budget.) such as Command and Conquer: Red Alert 2
] or The Elder Scrolls II: Daggerfall
]. In those games, developers deliberately implemented PCG as a concrete game element, not because of a resource bottleneck as in former days. In 2000, EA Games (Redwood City, CA, USA) published Maxis’ The Sims
], introducing not only a completely new gameplay concept but also a highly innovative yet simple-to-use character editor. Players were able to create an in-game character in the form of a detailed 3D model composed of customizable body parts, face, clothing and personality. Using the editor did not mean moving single vertices and adapting a UV map (A UV map is the projection of a 3D model’s surface to a 2D image.), as is done in modeling tools like Blender
]; instead, Maxis designed the editor to offer a set of parameters for editing individual characteristics of a human, e.g., the waist, the size or the interocular distance. Reasonable limits guided the players while designing their characters and guaranteed a credible and believable result. Many 3D tools implemented a similar workflow to create humanoid models, like Autodesk’s (San Rafael, CA, USA) Character Generator
(formerly known as Autodesk Pinocchio) [24
] or the open source application Makehuman
]. We consider this approach to be procedural content generation with strong user interaction. In 2004, the group .theprodukkt published a simple shooter called .kkrieger
], having its origin in the demo scene (see Figure 1
). This scene’s goal is to create visually or aurally impressive applications, which are often regarded as digital art since they show simple, moving scenes, similar to a painting [28].
A particular category focuses on demos with a maximum file size of 4, 8 or 64 kb. This small footprint does not allow the usage of prefabricated assets such as graphics, music or models. Instead, those assets are created procedurally during runtime or before the demo starts. Keeping in mind that rogue-like games were also constrained by memory, one can find clear parallels here. Taking .kkrieger
as an example, the development team .theprodukkt generated animations, levels, textures, shaders and music based on a tool set offering procedural methods. Given the great interest in the final demo (which had a size of 96 kb), the developers decided to publish their tools in the form of the editor .werkkzeug
Compton, Osborn and Mateas identify computer graphics as the origin of procedural generation and state that the discipline was connected to computer games by a publication of Intel (Santa Clara, CA, USA) [30
]. This particular paper surveyed generative techniques such as L-systems, Perlin noise or fractals [31].
Smith dedicates an entire article to the history of PCG and describes a development from modularity in design to the algorithmic assembly of content. She furthermore summarizes the motivations for using PCG, such as replayability, PCG as an expressive medium or PCG as an assistive creative technology [32
]. Nowadays, most of the discussed utilities have found their way into modern game development tools and make procedural content generation easier. 3D models are constructed by deforming, cropping or merging primitives instead of assembling them triangle by triangle (e.g., in ZBrush
]. Thus, developers can focus on the procedural algorithms to generate content instead of tackling the technical challenges some tiers below. For example, the game engines Unreal Engine 4
], Unity 2017
] and Cry Engine V
] provide tools to easily manipulate terrains and paint
vegetation like trees or flowers onto the ground (see Figure 2
) and thus speed up the level creation workflow considerably. Hence, developers can focus on finding an algorithm to distribute vegetation objects over the level without having to worry about colliding flowers or repetitively shaped trees.
A remarkable achievement of today’s game development tools is the procedural generation of textures and materials. Similar to what .theprodukkt
introduced in .werkkzeug
some years ago, this functionality is now offered by most of the big players in the game engine market. Instead of writing shaders by hand (in GLSL or HLSL), graphic designers can use modular tools to construct materials by combining images, mathematical operations or shader functions, e.g., to create a normal mapping effect as shown in Figure 3
. The next game engine generation has begun to implement features to create even more complex objects. Epic
, for instance, included a simple tool to generate 3D buildings [37
Some very specific applications like Side Effects Software’s (Toronto, ON, Canada) Houdini
] or the Esri CityEngine
] have even longer experience in the area of building generation but, in the end, they still create non-random worlds without a lively simulated population.
3. Definition of PCG
Ruben M. Smelik et al. define procedural content generation as “any kind of automatically generated asset based on a limited set of user-defined input parameters” [40
]. They furthermore refer to Roden and Parberry [41
] who call these kinds of algorithms amplification algorithms
, taking a small set of input parameters and transforming it into a larger set of output data. Togelius formulates a definition by antithesis, saying that procedurally generated content does not correspond to content generated by users, even if the users make use of procedural algorithms, since those have to be manually parameterized [42
]. Hendrikx et al. see procedural generation as an alternative to manual design but stress the need for parameterization so that designers can influence the generated object [43
]. Shaker et al. are more concrete and define PCG by giving examples of what PCG is (e.g., a software tool that generates random dungeons without any user input) and what it is not (a map editor that lets users place items) [44
]. At this point, we would like to give our own definition of PCG:
Procedural content generation is the automatic creation of digital assets for games, simulations or movies based on predefined algorithms and patterns that require minimal user input.
PCG is not only a subject of research in computer science. Prusinkiewicz and Lindenmayer emphasize the growing interest in other communities caused by the topic’s interdisciplinarity, affecting natural sciences like biology [45
]. This strong interest from other fields of research is an indicator of the topic’s relevance. However, harmonizing all those disciplines, like biology, architecture, urban studies and psychology, and finding the right formalisms and data structures is a huge effort. Finkenzeller [46
] narrows the affected areas of computer science to:
He furthermore points out that programming is the most flexible but also the most error-prone method of automatically generating procedural content. Hendrikx et al. introduce the abbreviation PCG-G (procedural content generation for games) [43
] to distinguish PCG for games from other fields such as simulations or (animated) movies. This shows that PCG-based procedures, algorithms and tools can be applied to a large variety of fields, from urban planning (e.g., the Esri CityEngine) to the (animation) movie industry. Pixar has already been named as a company making use of procedural content generation in RenderMan
]; additionally, Disney Research (https://www.disneyresearch.com/
) provides publications mentioning procedural techniques, e.g., for virtual terrain editing [48
], showing that there is an awareness of the advantages of automated content generation in the movie industry.
3.1. Theoretical Considerations
In computer science, the efficiency and maturity of software or algorithms are frequently measured to assure their quality and applicability. A widely used metric for PCG is a simple subjective estimate of the degree to which the generated content looks realistic; content fails this test when a human observer can easily identify it as automatically generated. We propose balancing performance against fidelity in PCG.
When an algorithm is to return a more precise result (e.g., a natural-looking forest), it requires more processing power, more memory or more storage to generate more variations of trees, textures in higher resolution, more detailed meshes or denser planting. Depending on the desired outcome, the user has to trade off performance against realism to reach the optimum for the given system and the requirements of the virtual world. Often, the generated objects can be categorized as either handcrafted or made by nature. A central question is whether one of these categories can be created with less effort than the other. As an example, the complexity of generating 3D trees can be compared to that of generating 3D buildings. When much of the visible content is produced automatically, the order of its generation becomes relevant. Therefore, we compare a highly simplified projection of the natural creation of the earth (see Figure 4
b) to the procedural generation of a virtual world (see Figure 4
a). In many PCG developers’ conception of the natural creation of a world, the landscape (including water) of a planet serves as the basis, followed by vegetation such as trees and plants. Later on, humans construct buildings on this landscape and connect them via road networks. Then, a settlement grows or dies over the years. If vegetation or mountains are in the way, mankind tends to remove them in order to build roads or buildings.
Mapping the PCG of a virtual world onto this simplified nature-like creation of the earth results in some extra iterations in which forests are removed to build roads, terrain is flattened to place cities on it, or rivers are re-routed to let a city grow in the desired direction. As an alternative, we propose to use the order presented in Figure 4
b, where the terrain is created first, followed by the transport infrastructure, then buildings and finally vegetation. This order will most likely not comply with the idea of natural growth, but it can ease the computer-aided generation of virtual worlds.
3.2. Random Number Generators
Random Number Generators (RNGs) can be implemented either as a hardware device or as software [49
], producing a deterministic and periodic sequence of (pseudo) random numbers [50
]. Their existence and functionality are assumed in many publications dealing with PCG [32]. However, not all researchers agree with their omnipresence in PCG; some point out that pure random generation would result in chaos [42
]. We think that the application of randomness depends on the context of each generation process. Taking the road network in Section 4.2.1
as an example, the placement of streets and intersections can occur randomly, but the algorithm that generates the street meshes and textures has to be adequately implemented upfront.
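As a minimal sketch of this idea (the function name and coordinate ranges are illustrative, not taken from any cited work), a seeded pseudo-random number generator makes an otherwise random placement fully reproducible:

```python
import random

def generate_street_grid(seed, n):
    """Deterministically place n intersection points from a single seed."""
    rng = random.Random(seed)  # a local RNG instance keeps the process reproducible
    return [(rng.uniform(0, 100), rng.uniform(0, 100)) for _ in range(n)]

# The same seed always reproduces the same layout:
assert generate_street_grid(42, 5) == generate_street_grid(42, 5)
```

Storing only the seed is thus sufficient to recreate an entire generated layout, which is how Minecraft-style worlds can be shared compactly.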
We now provide a classification of the types of objects that are most frequently the subject of procedural content generation (see Figure 5
). We took the CityGML
specification (especially the CityGML Core
schema) as the basis and added living beings to it to fit the needs to describe objects in a procedurally generated world [51
]. In the remainder of this chapter, we will present vegetation, water, road networks, buildings, creatures, humans and stories as typical examples of these classes.
4.1. Vegetation and Landscape
The procedural creation of objects occurring in nature, like terrains or plants, is among the most explored areas of procedural content generation. Theories and software tools have existed for many years and have reached a high level of maturity.
4.1.1. Generation of Vegetation
Looking at plants in particular, one of the reasons for the strong interest in their structure comes from theoretical biology: inspired by their beauty, researchers have tried to find mathematical models for their growth. Prusinkiewicz and Lindenmayer were pioneers in this area. As early as 1968, Lindenmayer proposed a grammar called L-Systems (Lindenmayer Systems) to describe the structure of plants with mathematical methods.
Since we are interested in the graphical representation of trees and plants, we have to find a mapping of L-Systems to graphics. This interpretation is often called the turtle model, as it is based on the language LOGO and the turtle used there. Our system consists of a two-dimensional grid and the following grammar:
‘F’: move one step forward, drawing a line,
‘f’: move one step forward without drawing a line,
‘-’: turn right by δ degrees,
‘+’: turn left by δ degrees.
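A minimal interpreter for these turtle commands might look as follows (a sketch; the step size and turning angle are parameters, and the start position and heading are illustrative choices):

```python
import math

def turtle_interpret(word, step=1.0, angle_deg=90.0):
    """Interpret an L-System word as turtle commands; return the drawn line segments."""
    x, y, heading = 0.0, 0.0, 90.0   # start at the origin, facing "up"
    segments = []
    for ch in word:
        if ch in "Ff":
            nx = x + step * math.cos(math.radians(heading))
            ny = y + step * math.sin(math.radians(heading))
            if ch == "F":                 # 'F' draws a line, 'f' only moves
                segments.append(((x, y), (nx, ny)))
            x, y = nx, ny
        elif ch == "+":                   # turn left
            heading += angle_deg
        elif ch == "-":                   # turn right
            heading -= angle_deg
    return segments

# "F+F+F+F" with 90-degree turns draws a closed unit square:
print(len(turtle_interpret("F+F+F+F")))  # → 4
```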
Now, we introduce re-writing, the fundamental idea of L-Systems. A re-writing rule
or production rule
defines that the left side of the production can be replaced by the right side, and this replacement can be repeated as often as necessary. If we take characters as the elements of a language of words, a set of re-writing rules might be the following: a → ab, b → a.
The language created by our grammar consists of all the words that can be created from an initial character string by applying our re-writing rules. The attentive reader might notice that this principle was originally introduced by Chomsky to describe formal languages [52
]. In contrast to Chomsky’s grammars, L-Systems require every re-writing rule to be applied in every round. The reason is that the growth of plants is based on cell division, which happens in parallel for all cells. If we take our example from above with ‘a’ as the initial string, we can generate the following words in our language: a, ab, aba, abaab, abaababa, and so on.
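The parallel application of re-writing rules can be sketched in a few lines; the classic rules a → ab and b → a serve as an illustration here:

```python
def rewrite(word, rules, n):
    """Apply all re-writing rules in parallel, n times. Unlike a Chomsky grammar,
    every symbol is replaced in every round (symbols without a rule stay unchanged)."""
    for _ in range(n):
        word = "".join(rules.get(ch, ch) for ch in word)
    return word

rules = {"a": "ab", "b": "a"}   # classic example rules
words = [rewrite("a", rules, n) for n in range(5)]
print(words)  # → ['a', 'ab', 'aba', 'abaab', 'abaababa']
```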
In an L-System, this production process describes how a plant grows. In a more common notation, we mark branches with brackets around the corresponding substring. For example, we could define an L-System with re-writing as follows (n is the number of applications of the re-writing rule; ‘F’ in the second line is the initial string):
For the graphical representation, we again use the turtle model. A ‘[’ is interpreted as a push onto a stack, a ‘]’ as a pop from the stack. The above L-System then produces the plant shown in Figure 7a.
We now apply the re-writing rule twice, i.e., we change our production system as follows:
We then get the plant shown in Figure 7
b. If we use n
= 5, we get the realistic plant shown in Figure 7
c. If we use the deterministic L-Systems we have derived so far to produce a large number of plants for our virtual world, all of the plants will look the same. This seems unnatural. Thus, we need to introduce a stochastic component: each step in the construction of a plant is taken with a specified probability. Let us thus introduce probabilities into the L-System from above:
The attached values are the probabilities of the productions; they add up to 1. This stochastic L-System might produce the plants shown in Figure 8
. Note that they all seem to be of the same species, just variations at different levels of growth.
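A stochastic re-writing step can be sketched as follows; the three alternative productions for ‘F’ and their equal probabilities are illustrative assumptions, not taken from a specific cited system:

```python
import random

def stochastic_rewrite(word, rules, n, rng):
    """rules maps a symbol to a list of (production, probability) pairs;
    the probabilities per symbol are expected to add up to 1."""
    for _ in range(n):
        out = []
        for ch in word:
            if ch in rules:
                productions, probs = zip(*rules[ch])
                out.append(rng.choices(productions, weights=probs)[0])
            else:
                out.append(ch)  # symbols without a rule stay unchanged
        word = "".join(out)
    return word

# Three alternative productions for 'F', each chosen with probability 1/3:
rules = {"F": [("F[+F]F[-F]F", 1/3), ("F[+F]F", 1/3), ("F[-F]F", 1/3)]}
rng = random.Random(7)          # a fixed seed makes one specific plant reproducible
plant = stochastic_rewrite("F", rules, 3, rng)
```

Different seeds yield different plants that nevertheless look like variations of the same species, exactly the effect described above.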
Thus far, we have described how 2D plants can be generated. It is easy to extend our L-System grammar to 3D plants: we simply add operations to ‘pitch down’, ‘pitch up’, ‘roll left’ and ‘roll right’ to the initial ‘turn right’ and ‘turn left’ operations at every decision point. This allows us to describe a large variety of 3D plants with short rules. The reader can imagine that it is also easily possible to extend L-Systems with different widths and colors for the different branches. Similarly, L-Systems were adapted to bushes and other types of plants. Many papers have been published addressing the details, e.g., [45
]. To this day, variations of the early L-Systems are the most widely used methods for the procedural generation of plants.
Besides pure L-Systems, there exist various other systems. For example, Chen et al. propose a sketch-based tree modeling system that takes advantage of a tree database for the 3D layout [58
]. The user sketches the basic branch structure of the desired tree and optionally the contour of the crown with a few strokes in 2D. The system then looks up matching 3D tree structures in a database of 20 tree models. The best match is extracted and used to generate the desired tree. A similar approach is proposed in [59
]. In related work, other authors propose using photographs of trees to find the appropriate model in the database [60].
The self-similarity of plants is also often used to automatically generate them. The Mandelbrot set
], the Koch snowflake
or the Pythagoras tree
are examples of mathematical models that can be visualized as shapes resembling natural structures (see Figure 9
). When we take a look at the practical application of methods for tree and plant generation, one can see that the most commonly used tools Xfrog
] or Speedtree
] reach impressive results.
], the authors propose a procedural branch graph (PBG) approach that creates diverse trees with the same branch structure at different levels of detail (LODs). Further state-of-the-art research such as [58
] confirms the high maturity level of this research area.
4.1.2. Generation of Landscapes
Like the procedural generation of vegetation, the generation of landscapes in the form of height maps belongs to the more advanced topics in PCG. In most cases, height maps are grayscale bitmaps in which the elevation is represented by the shade of grey of each pixel (see Figure 10
a). The calculated height is then projected to a flat 3D mesh [69
]. Usually, the whiter the pixel, the higher the elevation. After the mesh has been created, the terrain needs to be colored. For this task, there are three established techniques:
manually drawing a texture,
using a manually created color map to project textures to specific regions,
generating a texture by analyzing slopes and heights of the terrain’s mesh.
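The height-map projection described above and the third coloring technique from this list can be sketched together; the grid size, height scale, sea level and all texture thresholds are illustrative assumptions:

```python
def heightmap_to_vertices(pixels, max_height=50.0):
    """Project a grayscale height map (0-255 per pixel) onto a flat grid mesh:
    the whiter the pixel, the higher the resulting vertex."""
    vertices = []
    for z, row in enumerate(pixels):
        for x, shade in enumerate(row):
            vertices.append((float(x), shade / 255.0 * max_height, float(z)))
    return vertices

def pick_texture(height, slope_deg, sea_level=5.0):
    """Choose a terrain texture from a vertex's height and slope."""
    if slope_deg > 45.0:
        return "stone"          # steep areas such as cliffs
    if height < sea_level:
        return "ocean_ground"   # very low areas are treated as ocean
    if height < 20.0:
        return "grass"          # average heights
    if height < 40.0:
        return "rock"           # mountains
    return "snow"               # very high regions

verts = heightmap_to_vertices([[0, 128], [255, 64]])  # tiny 2x2 grayscale bitmap
print(verts[2])                          # → (0.0, 50.0, 1.0)
print(pick_texture(verts[2][1], 10.0))   # → snow
```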
These three approaches differ in quality and applicability. Manually drawing a texture is simple, but it results in either a huge bitmap or an insufficient resolution. Drawing a color map (see Figure 10
b) that is used in the game or simulation to map a set of textures (see Figure 10
c) to specific colors (in general by making use of a shader) is a common practice and returns visually impressive results (see Figure 10
d) but is still a manual process.
The third method makes use of the mesh data and utilizes heights and slopes to calculate the appropriate texture mapping. Very low areas are typically seen as ocean and are hence textured with an ocean ground texture, average heights are textured using grass, areas with a high elevation are seen as mountains and receive a rock texture, and very high regions receive a snow texture. Slopes can be used to identify steep areas that are frequently represented by stone (e.g., cliffs) and hence receive a stone texture. Since the first two methods require the designer to create either the color map or the texture [70
], they are not considered to be applicable for a purely procedural approach. In contrast, the third procedure fits well since it relies completely on the terrain’s mesh. In recent games such as Minecraft [7
], biomes come into play and contain climate information like humidity and temperature [71
]. Biomes are regions of land and, depending on their characteristics, the terrain is formed with respect to elevation and textures. Adjacent biomes can either be blended or have abrupt borders, depending on design decisions. A question that arises again is how to procedurally generate a height map. Various techniques exist:
fractal noise, e.g., Perlin Noise [72],
midpoint displacement (mostly used to generate 2D landscapes) [73],
diamond-square algorithm (adapts midpoint displacement to generate 3D terrains) [75].
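The midpoint displacement technique from the list above can be sketched in its simplest, one-dimensional form (all parameters are illustrative): a height profile is repeatedly subdivided, and each new midpoint is displaced by a random offset that is halved in every iteration:

```python
import random

def midpoint_displacement(left, right, depth, roughness, rng):
    """1D midpoint displacement: returns a height profile of 2**depth + 1 points."""
    heights = [left, right]
    offset = roughness
    for _ in range(depth):
        new_heights = []
        for a, b in zip(heights, heights[1:]):
            mid = (a + b) / 2.0 + rng.uniform(-offset, offset)
            new_heights += [a, mid]       # keep the left point, insert the midpoint
        new_heights.append(heights[-1])   # keep the rightmost point
        heights = new_heights
        offset /= 2.0                     # halve the displacement each iteration
    return heights

rng = random.Random(1)
profile = midpoint_displacement(0.0, 0.0, depth=4, roughness=10.0, rng=rng)
print(len(profile))  # → 17
```

The diamond-square algorithm generalizes exactly this subdivision scheme to a 2D grid of heights.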
These are only three examples of a huge variety of algorithms. For a practical implementation, there exist open libraries like libnoise
, containing modules to produce noise or other patterns like checkerboard or Voronoi diagrams
]. The latter can be used to achieve a non-homogeneous appearance of a terrain shape, in contrast to noise algorithms [76
]. Keeping in mind that a virtual world comprises millions of square meters, it is indispensable to optimize the rendering process. The use of LODs is recommended [77
]. LODs should be used to add more terrain detail in important and frequently visited areas close to the user, and to reduce detail in less important regions, e.g., in distant mountain areas. As an enhancement to a static solution, a real-time optimization algorithm called Real-time Optimally Adapting Meshes
(ROAM) is proposed in [78
], optimizing the mesh’s triangulation during runtime depending on the player’s view frustum. Lee, Jeong and Kim created a maze terrain authoring system (MAVE) to calculate a finite maze terrain based on different maze patterns [79].
A limitation of common algorithms for terrain generation is the creation of caves [69
] or overhangs, which can either be addressed by layered terrains [80
] or by voxel terrains [81
]. Cui et al. not only propose a technique to create caves with different characteristics but also how to store their data efficiently in an octree data structure [82
]. Boggus and Crawfis make use of 3D models to generate pattern-based caves using prefabricated pattern images [83].
4.1.3. Placing Vegetation in a Landscape
Once a set of plants and trees has been created, the question of their proper placement in the virtual landscape has to be answered. In general, a differentiation should be made between placing plants in large numbers in a given area and placing them individually [54
]. Similar to the height maps described above, a technique based on a grayscale image can be used in which the density of the vegetation is defined by the shade of grey. Analogous to the approach of texturing a terrain by analyzing heights and slopes, Hammes [84
] proposes a procedure to place plants and trees based on the grayscale of the terrain beneath. Another idea is to use color maps where each color stands for a type of vegetation [70
]. An interesting alternative was presented by Alsweis and Deussen in 2006 [85
]. They propose to model the natural resources available for the plants and the competition between them to determine their density. In the FON model
(Field of Neighborhood, see [86
]), each plant has a circular zone of influence on the neighboring plants. The size of that zone can depend, for example, on the humidity of the ground, the fertility of the soil and the type and size of the plant. They compose a landscape of tiles (the Wang tiles
]), each representing a specified density of plants according to the FON model. Transitions between the tiles are smoothed by relaxation methods. The properties of the ground, and thus the Wang tiles chosen, depend on the elevation, nearby water, etc. In this way, very realistic areas of vegetation with different densities can be produced automatically.
An integrated system for modeling terrains and plants is described by Deussen et al. in [54
]. They describe an entire toolkit for the process. It allows both the manual editing of height maps and plants and their procedural generation. An interesting idea is to provide an initial distribution of plants manually and then model their growth and death algorithmically over some time, taking plant competition into account; the final result is then represented graphically. Another idea is to reduce the geometric complexity of the scene by approximate instancing
, replacing similar plants, groups of plants and parts of plants with representative objects before rendering. A number of impressive examples show the realism of their approach. Poisson distribution
is another approach for placing plants. Here, a probable number of plants is distributed in a partial area of a grid. Poisson disk distribution
avoids plants growing too close to each other by defining an outer radius within which no other distribution point can be placed [88].
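A naive dart-throwing variant of Poisson disk sampling can be sketched as follows (the area dimensions, radius and attempt count are illustrative; production systems typically use faster algorithms such as Bridson’s):

```python
import math
import random

def poisson_disk(width, height, radius, attempts, rng):
    """Naive dart-throwing Poisson disk sampling: a random candidate point is kept
    only if it lies at least `radius` away from every previously accepted point."""
    points = []
    for _ in range(attempts):
        p = (rng.uniform(0, width), rng.uniform(0, height))
        if all(math.dist(p, q) >= radius for q in points):
            points.append(p)
    return points

rng = random.Random(3)
plants = poisson_disk(100, 100, radius=10, attempts=500, rng=rng)
# No two plants end up closer than the disk radius:
assert all(math.dist(p, q) >= 10 for p in plants for q in plants if p is not q)
```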
4.1.4. Water
Although water as an element is always the same, the creation of rivers, oceans, lakes and waterfalls differs in many ways. While oceans and lakes are more or less calm, rivers and waterfalls move constantly. The creation of rivers is often discussed in two ways: either they are generated during the creation of the terrain, or river courses are placed later in the landscape in a separate step [40
]. Another option is to refer to the sea level and assume the presence of water everywhere in the virtual world beneath this predefined height [84
]. In contrast, Kahoun proposes a procedure of natural growth by the spreading of the flow of water [77
]. This flow then iteratively forms the river courses. Ebert refers to the use of dilation symmetry
to achieve realistic-looking rivers, in which each smaller river branch looks exactly like a larger branch on a smaller scale [19
]. The procedural creation of seas and rivers is rarely explored [40
], and it focuses mainly on the shape and course of riverbeds [89
]. Papers on river generation frequently differentiate between a grid-based and a mesh-based approach; the mesh-based approach achieves the more visually impressive results [90
] since a 3D mesh is generated individually along a river, whereas the grid-based approach focuses on an existing layout.
Doran and Parberry mention coast line agents [91
] to generate realistic island shapes. Concrete implementations of coast lines or of surf and wave action can be found in existing frameworks. The tool Mystymood
, for instance, generates shore lines, underwater caustics and shore break automatically, using a simple collision algorithm and color maps [92].
4.2. Road Networks
This section discusses the efforts made to procedurally create a traffic infrastructure. The focus lies on the generation of road networks, including pedestrian paths. Since air and sea traffic require only little physical infrastructure, both are excluded from our discussion.
4.2.1. Intersecting Streets
The creation of road networks can be done in several ways, for instance, by creating a set of intersections to which the roads are connected [93
]. A parameter limits the maximum number of roads connected to a single intersection. During creation, it makes sense to only allow roads that do not overlap with others (i.e., no bridges). Figure 11
shows an implementation from the game development framework TUST [94
] in which road networks are created by placing streets in a 3D environment.
illustrates the usage of a Manhattan-style road network that creates a grid of streets and randomly deletes a parameterized number of streets from the resulting grid. Each street contains at least a start point and an end point. An algorithm detects roads with coinciding start and end points and creates an intersection there. Each intersection counts the number of connected streets and will later be replaced with a corresponding prefabricated intersection model with one, two, three, or four connections; more than four connections form a roundabout. To achieve realistic yet simple road placement, several algorithms can be used. If a street is defined by more than two points and those points are not positioned on a straight line, a street mesh is generated by expanding a simple spline to both sides, as can be seen in Figure 12.
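The endpoint-matching step just described can be sketched as follows; the data layout and prefab names are our own illustrative assumptions, not taken from TUST:

```python
from collections import defaultdict

def place_intersections(streets):
    """Count how many streets meet at each endpoint and pick a prefab model.

    `streets` is a list of ((x1, y1), (x2, y2)) start/end point pairs.
    """
    connections = defaultdict(int)
    for start, end in streets:
        connections[start] += 1
        connections[end] += 1

    def prefab(n):
        # One to four connections map to prefabricated intersection models;
        # more than four connections form a roundabout.
        return "roundabout" if n > 4 else f"intersection_{n}"

    return {point: prefab(n) for point, n in connections.items()}
```

For example, three streets sharing the point (1, 0) would yield a three-way intersection model at that point.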
Pedestrian paths can be attached to streets and/or around a parcel of land. In the case that roads surround such a parcel, sidewalks can be calculated by creating secondary polygons with smaller sizes adjacent to the street polygons [95
]. Figure 13
shows a single sidewalk adjacent to a street.
We find many patterns in nature, math or computer science that can result in an interesting road network structure, like Voronoi diagrams
, tree maps, or binary trees (see Figure 14
Another approach is to break the road network generation down into a few recurring shapes like grids or radials [96
]. The authors focus more on the creation of primary and secondary streets where the first ones handle heavy traffic and the latter ones lead to buildings or other facilities. As introduced before, L-Systems play an important role in PCG; they can also be used for road network generation. Extended L-Systems [97
] are a hierarchical and adaptable method that allows for modifying the L-System modules during the road generation process.
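As a minimal illustration of the underlying L-System principle (the rewriting rule below is illustrative and not the extended road grammar of [97]):

```python
def expand(axiom, rules, iterations):
    """Rewrite every symbol of the axiom in parallel, `iterations` times."""
    for _ in range(iterations):
        axiom = "".join(rules.get(symbol, symbol) for symbol in axiom)
    return axiom

# In a road grammar, 'F' could mean "build a road segment" and '+' a turn;
# an extended L-System would additionally adapt the modules to the
# environment while expanding.
```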
The authors of [97
] point out that their own system is not only applicable to road network generation but also to e.g., buildings. A tile-based system, as shown in Figure 15
, receives less attention in current publications; this might be a consequence of the artificial look of the result. Nevertheless, tile-based systems are frequently implemented in games such as Trackmania Nation
], Ridge Racer
], or Re-Volt
]. Furthermore, more complex street forms such as highway exits, non-standard crossroads and interchange roads are not covered in PCG-related publications.
Traffic simulation is frequently discussed in the literature [101
] but is beyond the scope of this survey.
4.3. Buildings
Regarding the construction of virtual buildings, there exists an overwhelming number of research papers [104
] but only a few concrete implementations [108
]. In this context, we mainly discuss two topics:
room arrangement on a fixed floor,
shape and facade creation of the outer appearance of the buildings.
The synergy of both—namely the generation of entire accessible buildings—is only vaguely discussed, as well as further steps like the placement of doors or windows or the connection of floors using stairways. Furthermore, the goal to create buildings for a real-life simulation of a virtual world requires another factor: time. Time influences the outer appearance of buildings regarding
the age (construction time, inhabited, renovation, decay),
time of day or time of the year (illuminated windows and switched-on outer lights at night or in the winter, a smoking chimney in colder times of the year and open windows in the summer).
While rendering several buildings in one scene, the Level Of Detail (LOD) plays an important role in managing the details of the model. When the viewer is close, lamps, signs and window sills are visible, and the area behind the entrance is supposed to be visible through the windows from outside. If the player enters the house, the model of the interior has to be loaded in full detail; if he/she is three hundred feet away, the house is only visible as a coarse outer shape. An example is shown in Figure 16
Podevyn describes five different LOD stages within the CityGML (Geography Markup Language) schema [110
], which include definitions for entire cities, not only for buildings:
LOD0: regional, landscape,
LOD1: city, region,
LOD2: city districts,
LOD3: architectural models (exterior), landmarks,
LOD4: architectural models (including interior features).
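The stages above suggest a simple distance-based switch; the following sketch is our own illustration, and the thresholds are purely assumed values:

```python
def lod_for_distance(distance_m):
    """Pick a CityGML-style LOD stage from the viewer distance in meters."""
    if distance_m < 10:
        return "LOD4"   # interior fully loaded
    if distance_m < 50:
        return "LOD3"   # exterior details: lamps, signs, window sills
    if distance_m < 200:
        return "LOD2"   # city-district level
    if distance_m < 1000:
        return "LOD1"   # coarse city blocks
    return "LOD0"       # regional / landscape
```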
4.3.1. Residential Buildings
The general understanding of the term residential building
does not only include single-family houses but is much more flexible in its interpretation, so that row homes, apartment towers, semi-detached houses or student dormitories are included as well. Our survey discusses only the more general algorithms for floor planning and facade creation. Floor planning is seen as a sub-topic of the Spatial Allocation Model and has been present in research since the early 1970s [111
]. Mueller et al. attempt to develop a sequential grammar to create building shells [112
] by taking arbitrary volumetric shapes as a basis and applying certain rules to them (e.g., split, scope (rotation, position, scale), or repeat). There are several other approaches like the application of a Squarified Treemap Algorithm
] based on the original Treemap Algorithm
], which has been adapted to create rooms in a given rectangular building area. The original purpose of the treemap method was to display tree structures in a two-dimensional area.
As one can see in Figure 17
, the rooms displayed in the squarified treemap (c) have an improved aspect ratio between width and height compared to the treemap (b), which is a result of the Squarified Treemap Algorithm
. We assume a rectangular floor with a width of 6 and a length of 4 units. This floor is supposed to be split up into seven rooms with sizes of 6, 6, 4, 3, 2, 2 and 1 square units. The rooms should be placed so that each room achieves an aspect ratio (width/length) that is as close to 1 as possible, to avoid rooms that are very long but narrow. The Squarified Treemap Algorithm
first determines a start half of the floor. Since the width is larger than the length, the room is placed in the left half (see Figure 18
, step 1). Otherwise, it would be placed in the upper half. The aspect ratio of the first room is 8/3 (or 4/1.5). In a second step, the next room is placed above the first (step 2). Its aspect ratio is 3/2, which is nearer to 1 than 8/3, so we continue placing the third room above the second. The aspect ratio of the third room worsens to 4/1 (step 3), so the third room is moved to the right (free) half of the floor. The aspect ratio then improves to 9/4 (or 3/1.33) (step 4). The room also moves when the next room with a size of 3 is placed (step 5). Placing room 5 next to room 4 results in a worse aspect ratio (step 6), and it is hence placed above in the free top right corner (step 7). Rooms 6 and 7 reach the best aspect ratio if placed next to each other (steps 8, 9 and 10).
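The greedy core of this procedure can be sketched as follows. This follows the squarified-treemap formulation by Bruls et al. (rows of areas are accepted as long as the worst aspect ratio does not worsen); the exact placement order may differ slightly from the walkthrough above:

```python
def worst_ratio(row, side):
    """Worst aspect ratio when the areas in `row` fill a strip along `side`."""
    s = sum(row)
    return max(max(side * side * a / (s * s), s * s / (side * side * a))
               for a in row)

def squarify(areas, width, height):
    """Greedily partition `areas` into strips of a width x height rectangle."""
    rows = []
    areas = list(areas)
    while areas:
        side = min(width, height)          # lay the strip along the shorter side
        row = [areas.pop(0)]
        # keep adding areas while the worst aspect ratio does not worsen
        while areas and worst_ratio(row + [areas[0]], side) <= worst_ratio(row, side):
            row.append(areas.pop(0))
        rows.append(row)
        thickness = sum(row) / side        # how much of the rectangle is consumed
        if width >= height:
            width -= thickness
        else:
            height -= thickness
    return rows
```

For the example above, `squarify([6, 6, 4, 3, 2, 2, 1], 6, 4)` groups the rooms into strips of sizes [6, 6], [4, 3], [2], [2] and [1].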
At first sight, the result lacks practical usability since each room is only directly connected to another room, meaning that there is no corridor. Mirahmadi and Shami propose an improvement to the Squarified Treemap Algorithm, which is able to find a corridor path connecting all individual rooms in the building [115
]. Based on a set of rules (bathrooms and bedrooms may not be connected to a kitchen, bedrooms may not be connected to each other, etc.), it is first determined whether a corridor is needed. The authors then propose selecting all inner edges (practically speaking, walls) in the floor plan and connecting them using a shortest-path algorithm. This path is then shifted and extended to generate a corridor along the selected edges. Mirahmadi and Shami emphasize that any generated corridor needs intersection points with the rooms to allow the placement of doors.
Another known limitation of the treemap algorithm is that it can only handle rectangular areas. This does not hold for another approach, which can literally grow a room on any initial area [116
]. The only precondition is that this area is tiled before the algorithm is applied. In the grid of tiles, each room is initially assigned to one cell, which serves as a starting point. One by one, each room grows in one direction in turn until its predefined target size is reached (Figure 19
, top left and top right rooms in step e and f). In a second iteration, all cells that have not yet been filled are assigned to a connected room (Figure 19
, bottom room, step f).
Since there is no minimal cell size, the algorithm can also be used to fill, e.g., a circle or a triangle by reducing the cell size to a minimum. The smaller the cell size, the more accurate the result for non-rectangular areas.
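A simple version of this two-phase growth procedure might look as follows; the grid layout and the tie-breaking (always taking the first free neighbour) are our own assumptions:

```python
def grow_rooms(width, height, seeds, targets):
    """Grow each room from its seed cell, one cell per turn, up to its target size."""
    grid = [[None] * width for _ in range(height)]
    cells = {room: [seed] for room, seed in enumerate(seeds)}
    for room, (x, y) in enumerate(seeds):
        grid[y][x] = room

    def empty_neighbours(room):
        return [(nx, ny)
                for (x, y) in cells[room]
                for (nx, ny) in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1))
                if 0 <= nx < width and 0 <= ny < height and grid[ny][nx] is None]

    grown = True
    while grown:                       # first iteration: grow towards target sizes
        grown = False
        for room in cells:
            if len(cells[room]) < targets[room]:
                frontier = empty_neighbours(room)
                if frontier:
                    nx, ny = frontier[0]
                    grid[ny][nx] = room
                    cells[room].append((nx, ny))
                    grown = True

    assigned = True
    while assigned:                    # second iteration: assign leftover cells
        assigned = False
        for y in range(height):
            for x in range(width):
                if grid[y][x] is None:
                    for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
                        if 0 <= nx < width and 0 <= ny < height and grid[ny][nx] is not None:
                            grid[y][x] = grid[ny][nx]
                            assigned = True
                            break
    return grid
```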
A possible next step in the creation of houses would be to fill the created rooms with furniture and accessories. This could be done on the basis of interior design guidelines [117
], defining the placement of furniture depending on accessibility, symmetry or adaptability to the room’s structure.
We now briefly discuss the creation of facades, which seems quite simple at first sight since it only involves creating a wall and adding a texture to it. Finkenzeller might disagree with this point of view; he offers a very comprehensive overview of the diverse techniques to create facades algorithmically, like grammars, L-Systems or shape grammars [46
]. His work covers not only facades, windows and doors; he also develops a semantic model for cornices, ornaments and wall structures (e.g., made from brick). In this way, not only the modeling of facades but also the procedural generation of textures for them is addressed, another huge branch of computer graphics that also influences terrain, character and object creation.
Another approach to create believable outer facades was published under the name Split Grammar [118
]. It introduces a technique to hierarchically split a wall and label the resulting parts as, e.g., door, window or wall. In the next hierarchical layer, these parts can be split again, and they can also be described by a grammar to gain more and more detail the more often the splitting is applied. To let virtual buildings appear more realistic and aesthetic, an extensive use of ornaments and decoration is recommended [119].
4.3.2. Other Buildings
Other constructions, like industrial or office buildings, might appear to be similar to residential buildings with regard to their generation. This may be true for the basic requirements such as rooms, windows, doors and corridors; even some room types match, like office rooms, kitchens and rest rooms. There are mainly three differences: industrial or office buildings have larger rooms; the number of kitchens and toilets has to be higher than in a residential building since there are more people per square foot; and the shape is more regular, which makes the generation easier [111
]. As for residential buildings, shape grammar can be used to generate and place buildings in the virtual world [112
]. The details of the procedural generation of other buildings are beyond the scope of this paper.
4.4. Living Beings
The generation of living beings can be divided into two parts. On the one hand, there are humans, which are bipedal and walk upright. On the other hand, there are animals, which can be either vertebrates or invertebrates. These classes can be broken down further into, e.g., fish, birds, insects or arachnids. We address those beings as creatures here. Barreto and Roque specify that the PCG of creatures includes the generation of meshes or 3D models, animations, behaviors and sounds [120
]. We focus here on the creation of static meshes and skins for humans and creatures.
4.4.1. Humans
The generation of a flexible human model—in contrast to a static one—is characterized by some additional steps, including the generation of a skeleton, the rigging process and the animation [5
]. The rigging process assigns bones and joints to a model’s limbs. Each level of detail of a model requires a separate rigging. The advantage of skeletal animation compared to vertex animation is obvious: once an animation sequence (walking, running, talking, etc.) has been created, it can simply be assigned to hundreds of models. The motion only differs with the age and gender of the human. Minor changes (e.g., stride length, body tension, and corpulence) may provide an individual look and behavior for each human. At this point, one can already guess that there might be some limitations when it comes to the question of whether a virtual world can be created entirely procedurally. On the one hand, a detailed humanoid model formed by a realistic skeleton, skin tissue and muscles is difficult to create from scratch; on the other hand, it is equally difficult to find an algorithm that generates believable bipedal human motion.
As already mentioned in Section 2
, there exist tools to create and customize a generic character model, like Autodesk’s Character Generator
] (see Figure 20
) or Unity Multipurpose Avatar (UMA)
]. The use of a framework acting in the same way might be a good option to quickly create human models, and it is worth further research. To achieve a higher degree of realism, it is not only required to move the extremities in accordance with the real world but also to reflect facial expressions and lip movement during interactions. The latter topic is addressed by the proposal of a text-to-speech engine that is not only capable of translating written text to audible speech but also of automatically calculating the corresponding lip movements (see below).
4.4.2. Creatures
As already mentioned, research on the generation of creatures is scarce. An outstanding game called Spore
] was developed by Maxis
, designed by Will Wright
and published by EA Games
in 2008, introducing gameplay featuring the development of a microscopic organism into a highly intelligent and social creature. To create a huge number of different-looking creatures, designer Will Wright proposed procedural generation as a means to avoid the effort of hand-crafting thousands of assets that Maxis had faced during the development of Sims 2
. Based on the development for Spore
, Hecker et al. introduced a novel system to animate creatures with an unknown body shape [123
] in which generalized motion data could be applied during runtime to achieve an unexpected but realistic-looking animation. Hudson [124
] introduced a three-step system:
a user defines a set of variables for the creature generation (referred to as genes),
a tool then translates these genes into a visual model,
the model is rigged to have a skeleton ready for animation.
4.4.3. Simulated Motion
The primary aspect of motion simulation in a virtual living environment is the visual impression of life, which essentially comes down to inhabitants and traffic. Of course, many other simulations can run simultaneously, like economic development, entertainment (sports, television), weather or the evolution of beings. Controlling the behavior of hundreds or thousands of inhabitants is among the most challenging tasks. Some papers avoid talking about AI (Artificial Intelligence) in this context [125
] since the routines used in games are generally not meant to imitate a realistic human being. Crowd simulation, the coordinated movement and acting of multiple characters with and within a given environment, is a separate, well-researched topic [126].
Movement in and interactions with a virtual world are equally important. Early open world games—games in which the player can access the entire world from the beginning of the game—like Grand Theft Auto 3
] introduced virtual citizens to make the world look more lively, but, due to the lack of interaction with each other and their missing personality, they appear to be a bit dull [128
]. More recent games like Watch Dogs
prefer to give each NPC (Non-Player Character) a personality, a job, and special character traits.
Nevertheless, the basics like moving through a city, paying attention to traffic, using sidewalks, etc. work well in AAA games. These games have shown that a simple path-finding algorithm is not enough to make the behavior appear realistic. Curtis et al. name four disciplines a crowd simulator has to cover [126]:
In a continuous process, the goals of each NPC are set (Goal Selection), chances of action and movement are determined (Spatial Queries), and the path to achieve the selected goal is calculated (Plan Computation) and iteratively adapted after each action (Plan Adaptation). The resulting simulation can be enhanced by taking two other factors into account, which significantly improve the overall impression:
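As a toy illustration of how these phases interlock, consider the following grid-based sketch; this is our own simplification (Goal Selection is assumed done, Spatial Queries reduce to an occupancy check, Plan Computation to a straight-line step, and Plan Adaptation to waiting a tick), not the simulator of Curtis et al.:

```python
def sign(v):
    """Sign of v: -1, 0 or 1."""
    return (v > 0) - (v < 0)

def crowd_step(positions, goals):
    """One tick of a toy crowd loop on a grid: each NPC moves one cell
    towards its goal unless the target cell is already occupied."""
    occupied = set(positions)                       # Spatial Queries
    result = []
    for pos, goal in zip(positions, goals):
        step = (sign(goal[0] - pos[0]), sign(goal[1] - pos[1]))
        nxt = (pos[0] + step[0], pos[1] + step[1])  # Plan Computation
        if nxt in occupied and nxt != pos:          # Plan Adaptation: wait a tick
            nxt = pos
        occupied.discard(pos)
        occupied.add(nxt)
        result.append(nxt)
    return result
```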
The factor credibility
is seen as a superficial requirement. It can be split into several sub-conditions like personality, emotionality, determination and outer appearance [129
]. Some authors make careful steps towards a procedural animation. Horswill describes his motion framework Twig
, based on physical simulation, which is able to create motions like moving towards a target or hugging. However, its goal is not to generate realistic animations but just ones that make a character seem to be alive [130
]. Karim et al. present a locomotion system for multi-legged characters, based on an algorithm that places footprints along a path and then calculates the positions of the character’s feet accordingly [131
]. The authors stress in their conclusion that the character model—especially the motion apparatus, down to the shape of the feet—has to be very detailed to generate a believable motion.
4.4.4. NPC Interaction
The possibility to interact with other humans, creatures or NPCs is now discussed in detail. An interaction can be broken down into verbal and non-verbal communication. (Here, the classic axiom of the communication scientist Paul Watzlawick, that one cannot not communicate, is broken; not communicating is actually possible for an NPC if it is not explicitly programmed to communicate.) An interaction can include two or more actors. In this context, it is irrelevant whether a human player (represented by an avatar) is involved or not. In verbal communication, dialogue planning and management represent a central challenge [132
]. One of the simpler and more vivid ways to address it is by using Finite State Machines (FSM) [133
], known from classical computer science. Figure 21
shows such an FSM in which an NPC asks the player a simple question, namely What is 2 times 2?
and lets him pass when the player answers correctly. A set of two pre-defined choices is given—4
and other number
. If the answer is other number
, the NPC will repeat the question. If the answer is 4
, the NPC will tell the player to pass and the dialog ends.
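The FSM just described can be sketched as a small table-driven program; the state names and data layout are our own illustrative choices:

```python
# Dialogue FSM mirroring the "What is 2 times 2?" example: each state has a
# prompt and a mapping from player answers to follow-up states.
DIALOG = {
    "ask":  {"prompt": "What is 2 times 2?",
             "choices": {"4": "pass", "other number": "ask"}},
    "pass": {"prompt": "You may pass.", "choices": {}},  # terminal state
}

def run_dialog(answers, start="ask"):
    """Feed a sequence of player answers through the FSM; return visited states."""
    state, visited = start, [start]
    for answer in answers:
        if not DIALOG[state]["choices"]:       # terminal state: dialog is over
            break
        state = DIALOG[state]["choices"].get(answer, state)
        visited.append(state)
    return visited
```

A wrong answer loops back to the question, while the answer "4" leads to the terminal state.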
This idea can be enhanced by the aspects of personality, relations and moods [134
]. Having a look at tools like articy:draft
, one can see that visual editing of dialogs (and entire story lines) has reached a high level of maturity [135
]. Maxis’ The Sims
shows vividly how to simulate several NPCs with a strong focus on the interactional aspects. All characters in the game, including those controlled by the player, have a weighted relation to each other, and they have an individual personality. The limitation lies in the verbal interaction which still (since the first release of the game) takes place using a pseudo language called Simlish
. Nevertheless, emotions are very well expressed through intonation, facial expressions and gestures, allowing an immediate interpretation of moods (see Figure 22
for the projection of moods on an NPC’s model).
The simulation in The Sims is limited to a certain number of actors, so an investigation of how to scale the approach to many interacting NPCs is valuable. The term Level of Detail, as it has been used to describe a 3D model’s details, can also be applied to the area of AI and actor behavior [136
]. In the paper, a procedure is proposed in which the diverse behavior patterns are structured hierarchically. These patterns can be reduced or extended by certain layers, depending on the level of detail. A disadvantage of this approach is the additional cost of modeling the corresponding behavior tree. The author describes a tavern as an example in which the bartender and several guests are simulated at different levels of detail. If the player is absent, the scene in the tavern is only simulated coarsely: glasses empty in an instant, and guests do not move from table to table but instantly leave or enter the bar. If the player enters the tavern, the simulation level of detail rises: guests begin to interact with each other, and the bartender moves around and cleans tables.
Cassell et al. developed the BEAT toolkit (Behavior Expression Animation Toolkit) to generate nonverbal behavior and synthesized speech for a virtual character based on typed text. Their extensible rule-based system was derived from actual human conversations [137
] (see Figure 23
], Gratch et al. go into more detail and show how BEAT annotates text with hand gestures, eye and eyebrow movement, and intonation, and schedules these expressions along a speech timeline. De Carolis et al. present a conversational agent that imitates human-like behavior during a conversation. A so-called Mind–Body interface enriches a discourse plan with meaning and allows the projection of the corresponding emotions onto the character’s body [139
]. Bickmore and Picard speak of the usage of relational agents
to establish a long-term relation between a human (e.g., a player) and a virtual personality (e.g., an NPC) [140
]. They place their focus on situations (such as education or business) in which the human interacts with his or her counterpart, and they provide insights into their motivation.
4.5. Procedurally Generated Stories
Many games are based on stories. Stories provide a natural narrative structure, helping the players to understand the mindset and the goals to be reached, establishing the context and motivating the player. This is especially true for roleplaying games (RPGs). The player typically has a main storyline guiding them to the goal, and additional side stories that increase the fun of playing. Examples of such games are The Elder Scrolls V: Skyrim
] and The Legend of Zelda
The procedural generation of stories has many advantages [142
]. First of all, creating stories is expensive; many artists, content designers, programmers and audio engineers are usually involved. This effort can be reduced considerably if PCG for stories is employed [145
]. Second, when new story variations are created at runtime, this can keep players motivated when replaying the game [144
]. Third, it becomes possible to generate stories that automatically adapt to the players’ skills and preferences [147].
In narrative games, the notion of a quest
is widely used. Quests are pieces of the story. A quest has a well-defined beginning and ending, and a specified goal that must be fulfilled. The main story of a game can then be subdivided into many quests: main quests and side quests. The main story (i.e., the series of main quests) is harder to generate procedurally. The reason is that the creativity of the storyteller cannot yet be replaced entirely by an algorithm using computational creativity (CC) [148
]; good narration and thus exciting quests are an essential part of a good story-driven game. CC is actually seen as a remaining challenge in the field of generating games [149
]. Different side quests can be generated more easily. Different paths can then be taken through the game, even dependent on the performance and the personal taste of the player. Thus, we focus on the procedural generation of such side quests.
Any procedural generation must be based on a model. This model is always more abstract than the assets created. For example, the model for the generation of plants is often an L-System, and the model for a street system is a set of interconnected points, both with parameters. So, what would be a good model for a story? We look at three examples for such models: a planning language, a Petri net model and StoryTec.
4.5.1. Planning Algorithms
A first approach is to use a planning language
. Such languages are known from artificial intelligence, for example, to describe robot movements. An example is ADL, the Action Description Language
]. It is based on actions that have a pre-condition and a post-condition. The pre-condition has to be fulfilled in order for the action to be performed, and the post-condition reflects its effects. Also allowed are quantified variables, i.e., the ∃ and ∀ operators. Conditions can be formulated as Boolean formulas with the operators ∧, ∨ and ¬. Typed variables are supported. ADL assumes an open world
, i.e., what is not contained in the specification is undefined. An initial state
is defined as the basis, in our case the beginning of the story, and a goal state
as the final result of the story.
Let us now look at an example. A person named Bob is supposed to move from his house “Bob’s Shack” to the palace “Royal Palace”. The house and the palace are both locations. Three alternative storylines allow him to either walk, fly or teleport. The result of our random number generation is available at runtime in the variable random. The story can then be described as follows:
Initial State Representation:
Person(bob) ∧ location(bobsShack) ∧ location(royalPalace) ∧ At(bob, bobsShack),
Goal State Representation:
At(bob, royalPalace),
Action(walk(p:person, from:location, to:location)
Precondition: At(p, from) ∧ random = 1
Effect: ¬At(p, from) ∧ At(p, to)
Action(fly(p:person, from:location, to:location)
Precondition: At(p, from) ∧ random = 2
Effect: ¬At(p, from) ∧ At(p, to)
Action(telePort(p:person, from:location, to:location)
Precondition: At(p, from) ∧ random = 3
Effect: ¬At(p, from) ∧ At(p, to)
At runtime, this will lead to Bob moving from his shack to the royal palace in one of the three ways, depending on the current value of the random variable. In our example, one of the three paths is chosen at random, but other preconditions can also be defined easily, for example, to take the personality of the player or his current performance in the game into account. More details on ADL can be found in the original paper by Pednault [150].
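The runtime selection can be sketched as follows; the state representation and function names are our own illustrative choices, not part of ADL itself:

```python
import random

def run_story(rnd):
    """Apply the first action whose precondition holds (names follow the ADL example)."""
    state = {("At", "bob"): "bobsShack"}
    actions = [("walk", 1), ("fly", 2), ("telePort", 3)]
    for name, required in actions:
        # Precondition: At(bob, bobsShack) AND random = required
        if state[("At", "bob")] == "bobsShack" and rnd == required:
            # Effect: not At(bob, bobsShack) AND At(bob, royalPalace)
            state[("At", "bob")] = "royalPalace"
            return name, state
    return None, state

# One of the three storylines is chosen depending on the random value.
chosen, final_state = run_story(random.randint(1, 3))
```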
More recent work is presented, e.g., by Riedl and Young [151
]. The Intent-Driven Partial Order Causal Link
(IPOCL) planning algorithm used in [152
] simultaneously reasons about causality and character intentionality and motivation in order to produce narrative sequences that are causally coherent and have elements of character believability. The authors present a tool that generates a narrative plan meeting the outcome objective. It ensures that all character actions and goals are justified by events within the narrative itself. An overview of newer story planning concepts can be found in [153].
4.5.2. Petri Nets
Petri nets are a well-known extension of finite state machines; they allow an easy description of parallel activities. A standard Petri net is defined as a 4-tuple (S, T, W, M0) where
S is a set of places, marked graphically by circles,
T is a set of transitions, marked graphically by bars,
W is a multiset of arcs, i.e., W assigns to each arc a non-negative integer arc multiplicity (or weight); note that no arc may connect two places or two transitions. The elements of W are indicated graphically by arrows.
M0 is an initial marking, consisting of tokens, indicated graphically by dots.
A transition t can fire if and only if at least W(s,t) tokens are present at each input place s; firing consumes these tokens and produces W(t,s) tokens at each output place. A major advantage of Petri nets is the availability of tools to edit the model and to prove properties such as liveness (i.e., the Petri net does not lead to deadlocks) and the reachability of a specified marking M from the initial marking M0. A very useful enhancement for Petri nets is to allow colors
for tokens. In a standard Petri net, tokens are indistinguishable. In a colored Petri net, every token has a value (“color”). In popular tools for colored Petri nets such as the CPN tools [154
], the values of tokens are typed, and they can be tested (using guard expressions) and manipulated with a functional programming language.
An obvious way to employ Petri nets for game quests is to interpret the places as game locations and the tokens as players [155
]. The Petri net describing our example from above might look as in Figure 24
. Our example is simplified: it contains conditions for the transitions. For real Petri nets, such conditions (and their variables) are not allowed; they must be expressed in terms of tokens. A possibility, described in [155
], is to use colored Petri nets and have a colored token for each variable used in a condition. Thus, we do not only represent players by tokens but also variables. However, this (correct) notation makes the graph much more complex. The notation shown in Figure 24
can be translated 1:1 into the correct notation. Again, the random variable in the condition can be replaced by other variables, taking the context of the game and the player(s) into account.
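The token-as-player interpretation can be made concrete with a minimal Petri net interpreter; this is a sketch of the standard firing rule, with the place and transition names taken from our running example:

```python
def enabled(marking, pre):
    """A transition is enabled iff every input place s holds at least W(s,t) tokens."""
    return all(marking.get(place, 0) >= w for place, w in pre.items())

def fire(marking, pre, post):
    """Consume W(s,t) tokens from the inputs, produce W(t,s) tokens at the outputs."""
    assert enabled(marking, pre)
    m = dict(marking)
    for place, w in pre.items():
        m[place] -= w
    for place, w in post.items():
        m[place] = m.get(place, 0) + w
    return m

# Bob as a single token: places are game locations, 'walk' moves the token.
m0 = {"bobsShack": 1, "royalPalace": 0}
walk = ({"bobsShack": 1}, {"royalPalace": 1})
m1 = fire(m0, *walk)
```

After firing, the token has moved from the shack to the palace, and the walk transition is no longer enabled.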
A different use of Petri nets is described in [156
]. They extend the standard Petri Net model by three new constructs, conditions
, and they model stories with these modified Petri nets. They also describe how to infer the player type from his/her playing history, and they automatically generate game variations from their Petri net model for a real game (Neverwinter Nights
]). A major drawback of their approach is that, by introducing new constructs, they lose the power of formal verification available for standard Petri nets.
4.5.3. StoryTec
A less formal model is the basis of StoryTec, a system for the specification of story-based games [143
]. A StoryTec specification distinguishes the game structure model
and the game logic model.
Game Structure Model
The game structure model describes the data part of a story. A story consists of scenes, similar to a theater play. Each scene models a small part of the game. Scenes are interconnected by transitions. A scene consists of a set of objects, including physical elements, interaction elements such as buttons or text fields, and avatars. Thus, the overall game has the narrative structure of a theater play. Scenes and objects can be configured with parameters. They can have the types boolean, color, composite, enum, file, float, scene, skill, stimulus and string.
Game Logic Model
When the data part of a story has been defined, activities are added. This is done with the game logic model. Its basic constructs are actions. Similar to ADL, conditional actions can be specified. For example, a virtual character’s move from one location to another is defined as an action. Actions also have parameters. Typically, actions are at a high level of abstraction; for example, the details of animations are not part of their specification. A stimulus is an event that triggers an action. Stimuli are also specified at a high level of abstraction. Unlike Petri nets, StoryTec does not support parallel actions.
The StoryTec Editor and the StoryTec Runtime
A powerful StoryTec editor supports easy graphical editing of both the structure model and the logic model. Figure 25
shows the graphical interface of the editor. The main editing tools for the game structure and the game logic can be selected on the left side. The central part shows the currently edited scene in its upper half and the overall scene structure below it. Note that we have created extra scenes for the walking, flying and teleporting activities since we want to show them graphically and assign parameters to them. The scene Bob’s Shack
contains several objects shown as small squares. The right side has windows for the objects with their parameters. Extra windows are opened if we want to edit conditions, as in our case with the variable random
. When editing of the story is complete, we store the result in an ICML (INSCAPE Communication Markup Language) file, which is similar to an XML file. Listing 1 shows an extract from this file for our example.
A runtime system called the story engine is provided that connects to a game engine. The output of the editing process is fed into the story engine and executed. The overall architecture is shown in Figure 26.
4.5.4. The Procedural Generation of Game Content from a PCG Story
It is possible to combine the semi-automatic generation of stories or quests with the automatic generation of other content. For example, landscapes and buildings can be generated automatically from a procedurally generated story. When pre-specified location keywords such as shack
are found in a story, the corresponding game objects can be created without human intervention, as described in Section 4.1
and Section 4.3
above. Details can be found in [145
]. The locations are mapped to a space tree
in which each node specifies a portion of the world. That space tree is mapped to a 2D grid
, and the landscapes, plants and buildings are then procedurally generated. Care is taken that neighboring landscapes are plausible; for example, in a mountain area, caves are more probable than gardens. In order to generate nice-looking landscapes, the main features of each region in the grid (e.g., the rocks in a mountain area) follow a Gaussian distribution and spread over into neighboring regions. More details on the automatic derivation of content from a PCG story can be found in [142].
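The grid-based placement step can be illustrated with a small sketch. Feature counts per region, the sigma value and the clamping are our own simplification of the approach described above, not the published algorithm:

```python
import random

def generate_features(grid_size, regions, seed=0):
    """Scatter features over a 2D grid of regions.

    regions maps a (row, col) region center to a (feature, count) pair,
    e.g. {(2, 2): ("rock", 200)} for a mountain area. Each feature is
    placed at a Gaussian-distributed offset around its region center,
    so most land in the region itself and some spill into neighbors.
    """
    rng = random.Random(seed)
    grid = [[{} for _ in range(grid_size)] for _ in range(grid_size)]
    for (cr, cc), (feature, count) in regions.items():
        for _ in range(count):
            # Gaussian around the center; sigma = 1.0 gives a gentle spread.
            r = min(grid_size - 1, max(0, round(rng.gauss(cr, 1.0))))
            c = min(grid_size - 1, max(0, round(rng.gauss(cc, 1.0))))
            cell = grid[r][c]
            cell[feature] = cell.get(feature, 0) + 1
    return grid

grid = generate_features(5, {(2, 2): ("rock", 200)}, seed=42)
total = sum(cell.get("rock", 0) for row in grid for cell in row)
# All 200 rocks are placed; the density peaks near the mountain center (2, 2).
```

Because placement follows a distribution rather than hard region borders, adjacent cells blend into each other, which is what keeps neighboring landscapes plausible.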
We conclude this work with a table showing the most important contributions to the field of PCG in research and in practice (Table 1
), ranking their presence from very high (++) to very low (- -). As can be seen, there are many areas where practitioners have overtaken the research community, while in other fields theories exist that never made it into practice. The two worlds seem fairly separate. We therefore recommend that researchers and game developers talk to each other more often, to the benefit of both.
An analysis of the research papers and implementations in the field of PCG clearly shows its immense complexity and variety. Results have been contributed from many different areas: computer science, computer graphics, architecture, town planning, psychology, traffic planning and mathematics. We explicitly want to highlight the good classification of procedural modelling methods by Smelik et al. [164
]. It has also become obvious that practitioners have contributed a great deal to the progress of the field without publishing their work in research papers. The game development community’s interest in and awareness of PCG is demonstrated by the recent founding of diverse communities in the academic, hobbyist, commercial and independent sectors. Some of the most active and valuable are:
PCG Wiki—A wiki presenting PCG in theory and practice, collecting games that make use of content generation algorithms and listing related links,
procjam.com—An online contest inviting developers to create graphical demos explicitly making use of PCG,
Reddit Procedural Generation—A topic on Reddit that serves as a panel for discussions and presentations of projects.
Unfortunately, large companies reveal little about how they model or automatically generate their cities. Especially for open-world games like Grand Theft Auto 3-5
] or L.A. Noire
], one can assume that procedural techniques play an important role. For Assassin’s Creed Unity
, a technique to generate entire, flexible streets including buildings was recently presented [167
]. As impressive as this demonstration was, it is unfortunate that so little industry knowledge about PCG is shared at conferences like the GDC (Game Developers Conference
). This knowledge can be used not only to create games; there are many other areas of application, as Visitor Ville
] shows on its website, where web traffic is visualized abstractly in a virtual world. One can argue that the automatic generation of digital content is one of the next big things in game design. State-of-the-art engines already offer good workflows for creating games; development teams rarely lack programming skills, but rather the time and money to buy or create models, music and levels. Furthermore, generated worlds can stimulate game designers’ creativity and show them worlds they would never have created on their own [96].