Association of Game Events with Facial Animations of Computer-Controlled Virtual Characters Based on Probabilistic Human Reaction Modeling

Featured Application: The results of this work have been used in the development of a prototype computer game.


Introduction
In modern gaming environments, the development of effective virtual characters has the potential of augmenting in-game motivation, player sense of presence, as well as the believability and realism of the game [1][2][3]. This is where the principles of affective computing come into play by bridging the gap between research in computer science, Artificial Intelligence, and psychology. This interest in designing affective gameplay draws heavily on the already well-established premises of affective computing, which dictate that technology must be designed in such a way as to recognize and replicate human emotions [4,5]. In a gaming environment, both verbal and non-verbal methods of communication amongst players are implemented; nonetheless, the non-verbal aspects, such as facial expressiveness during gameplay, are either not communicated in some games (static virtual characters), or they are delivered via textual or audio messages. For example, affectivity could be achieved by designing game avatars that morph depending on the player's facial expressions, motions, or body language [6,7]. Moreover, the reactions of the computer-controlled characters (C-CCs) should be aligned with what is happening in the game at a given time. This research attempts to answer the question of how the C-CCs can be programmed to react as a human opponent would have reacted in a similar situation.
This work takes a step toward the modeling and development of C-CCs that react more realistically during gameplay, i.e., capturing the unpredictability of human emotions as they engage with games. This research aims to develop, evaluate, and apply a player behavior model by capturing and analyzing the reactions of human players during in-game events.
This research is based on the assumption that accurately predicted "expected" reactions of C-CCs based on in-game events can have a positive impact on non-functional game properties such as player enjoyment, game believability, replay value, and overall immersion. This research utilizes a player's facial motion capture and knowledge of in-game events in order to produce a realistic behavioral model and apply it to C-CCs.
The main challenge of this research lies in the development of a realistic facial expression model, which demonstrates that a player's engagement with a specific game event might produce several probable emotional reactions, not only one. This is highly dependent on the realistic nature of the dataset used to develop the model. Therefore, during this research, we generated a dataset with game events (such as a player receiving 2 points of damage) mapped to the facial expressions of the human players who played against the Artificial Intelligence (AI) of a single-player video game (developed as part of this research and described in Section 3). The aim is to model the probable set of facial expressions the human player will likely produce in a certain situation (e.g., winning the game, or dealing the most damage) and have the C-CC (e.g., the C-CC's portrait in Figure 1) play a similar expression (e.g., Laugh). In order to develop a dataset with realistic facial expressions, animations of a real human player (trainer) were captured using special hardware and software described in Section 3. Figure 1 shows a screenshot of the game in which the player's avatar, also referred to in this research as player 1, is on the left, while the C-CC (player 2) is on the right. In Figure 1, the C-CC's portrait is playing a sad animation because it received two points of damage. Prior to selecting this reaction, the game identified the game event and its possible reactions. Then, it generated a random number from 0 to 100, e.g., 28, which happened to be in the range of the "sad" reaction (as will be explained in Section 4). In other words, the C-CC reacted as the human player (trainer) would have reacted in this situation. This was made possible by recording and analyzing the reactions of a real human player, who played multiple games against the computer.
In total, we have identified and captured 10 facial expressions associated with 15 game events. The proposed approach is using different decision trees that associate game events with facial animations based on the probability of occurrence. For example, if the enemy character loses the game, it has a 24% chance of playing the upset animation, a 38% chance of playing the angry animation, and a 38% chance of playing the furious animation. This is because based on the dataset used to produce this model, the human player reacted in such a way, expressing the same facial expressions with the same probability of occurrence. This way, the proposed system achieves more realistic C-CC reactions.
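The range-based selection described above can be illustrated with a short Python sketch. This is a minimal, hypothetical implementation (the function and table names are not from the paper), using the probabilities reported for the "lose" event: 24% upset, 38% angry, and 38% furious.

```python
import random

# Hypothetical reaction table for the "lose" event, using the
# probabilities reported in the text (24% upset, 38% angry, 38% furious).
LOSE_REACTIONS = [("upset", 24), ("angry", 38), ("furious", 38)]

def pick_reaction(reactions, roll=None):
    """Map a roll in [0, 100) onto the cumulative probability ranges."""
    if roll is None:
        roll = random.uniform(0, 100)
    cumulative = 0
    for animation, probability in reactions:
        cumulative += probability
        if roll < cumulative:
            return animation
    return reactions[-1][0]  # guard against rounding at the top end

print(pick_reaction(LOSE_REACTIONS, roll=28))  # 28 falls in the "angry" range (24-62)
```

In this sketch, a roll of 28 lands in the second cumulative range (24 to 62), so the "angry" animation is played; a fresh random roll reproduces the trained probability distribution over many games.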
This paper is organized in the following manner. Section 2 presents related work. Section 3 explains the experiment in some detail, including how the data collection was conducted. Section 4 explains the dataset and analysis phases. Section 5 presents the evaluation based on user experience testing. Section 6 concludes this work.


Related Work
Research investigating the relationship between the expressivity of human emotions and game design has recognized, albeit in a holistic manner, how the human player's emotions can be influenced by and impact game design and logic [5]. Basically, it is assumed that gameplay is generally conducive to emotions, as it plays on the players' engagement with an exciting narrative, where they can experience horror, thrills, fantasy, and a host of other themes [8]. To increase the player's immersion in the game narrative, researchers proposed that game avatars replicate the human player's emotions [9]. It is suggested that a game with an optimal enjoyment experience is one that achieves a form of total immersion by allowing for an expression of emotions to the point where the player's intuition, reason, and perception are heightened [10].
In a gaming environment, facial expressions are perceived as instances of observable physical responses that give an abundance of data about an individual's emotions, expectations, and other interior states. Although facial expressions are culturally specific, they can be universal as well [11]. Ekman [12] developed a model of human emotions based on universal facial expressions (anger, disgust, fear, surprise, happiness, sadness, and contempt) and reported that these emotions are shared across diverse cultures. Not surprisingly, research showed how enhancing game avatars with interfaced human emotions demonstrated an increase in interaction and communication with the game environment and other players [13], as well as an improvement in decision-making skills [14].
The emphasis in research has typically been on identifying methods by which the facial expressions of a gamer (denoting emotions in most cases) can be reflected on a game's avatar. Mostly, research focuses on techniques that align the player's performance in in-game tasks with an expected set of emotions that logically correspond to the outcomes of these tasks. The unpredictability of human responses is usually a secondary concern in such research. Another shortcoming of this kind of research is the assumption that facial emotions can be predicted in all types of games from a fixed categorical model of emotions, whereby instances of "winning" or "losing", for example, are bound to elicit emotions of happiness and sadness, respectively. Moreover, studies almost neglect that it is nearly impossible to achieve total immersion in a game, and hence a complete identification of the human player with the in-game avatar, which can sometimes prevent the player from experiencing emotions relevant to gameplay.
Recent research in non-gaming-related contexts has recognized the significance of modeling facial expressions and emotions of humans for various applications, and the methods implemented in this kind of investigation have varied. The work in [15] is an example of template-based research in facial expression modeling, where the researchers used weighting schemes to achieve a reliable transfer of facial expressions collected from example poses to a high-resolution and customized blendshape. To make an animation lip-sync speech produced by a human in real time, the researcher in [16] developed a model based on a dataset of lower-face movements, which were prioritized using an algorithm to compute each motion segment's importance and its plausibility as an expression in case a specific speech pattern was produced. Facial expression modeling in [17] was part of the researchers' pipeline for creating a system that can transform facial expressions from one to another. Their model detected facial emotions, landmarks, and head poses and used a technique based on an image-to-image translation model that does not demand paired examples to map two facial expressions to each other. In [18], the researchers proposed a facial expression model based on hierarchical reconstruction with the purpose of capturing refined and finer details of human faces. Other researchers [19,20] modeled facial expressions as part of a virtual character animation pipeline, where the blendshapes are transformed from a reference face to a target one using regression models. According to the researchers, these models performed well in comparison to other cited methods in the field. Real-time facial animation has been the subject of [21][22][23], where the researchers start with real-time tracking of a human actor, then transfer the collected data on facial emotions, expressions, or poses to either the face of an animated character [21,22] or another human's face [23].
Overall, in these three papers, the researchers depended on the weighting of the resultant blendshape vectors to generate plausible facial expressions, which led to successful facial expression transfer. Although these studies do not address gaming environments, the implications of their reported success can be utilized or built upon to animate either player-controlled or computer-controlled gaming characters (avatars).
The accurate detection and recognition of the player's in-game, face-reflected emotions, and the interfacing of recognized expressions to game avatars with the purpose of making them imitate human expressions based on in-game variables, is one area of research that has received much attention in recent years. For example, the authors of [24] implemented a fuzzy logic model, FLAME, to replicate emotions in virtual characters depending on event appraisal, i.e., based on the user's emotional responses to a specific event. The researchers also introduced inductive learning algorithms to dynamically adapt the virtual character to users' actions and the environment. The study reported an overall positive evaluation of the model by the users, with a focus on the impact of implementing the learning algorithms, but emphasized the potential for extending it by considering personality attributes.
Zhou, Huang, and Wang [25], for instance, utilized an embedded Hidden Markov Model (HMM) for the real-time recognition of the players' facial emotions while engaging in an interactive game; then, a face alignment depending on the game role and context was performed. The study reports an 84% accuracy rate for the classification of unknown cases. In a similar manner, the HMM model developed for emotion recognition in [26] was intended for gaming environments, and the input of the model was six recorded videos, which were processed and codified as representing six emotions: joy, astonishment, rage, antipathy, fear, and sadness.
Summarizing the results of the literature review, a number of researchers have pointed out the positive impact of virtual characters' behavior on gamers, and especially the impact of facial expressions, as they can represent emotions. A number of researchers have automatically recognized emotions from facial expressions using various machine learning techniques. Some research has been done in the area of non-playable character (NPC) modeling, where standard behaviors are modeled and associated with certain in-game events. The main difference between all of the above and our work is that we propose the development and use of a probability model that governs the facial expressions of the computer-controlled character. The proposed model is trained using the facial expressions of a real human playing the game and reacting to various game events. The aim is to enable a more realistic behavior of the computer-controlled character.

The Experiment
The aim of this research is to link the facial expressions of the C-CCs with game events (such as win, lose, etc.), mimicking the reactions of human players who previously recorded their facial expressions while playing a video game. For data collection, demonstration, evaluation, and as a proof of concept, this work utilized Unity3D [27] with the asset [28] to develop a game that resembles "Rock-Paper-Scissors". This game was selected due to its simplicity and suitability for demonstrating the proposed approach.

The Game
The game used for this project is similar to "Rock-Paper-Scissors" with some additional game elements, including but not limited to fire, water, and ice spells. It is a single-player game in which the human player plays against the computer, referred to in this paper as the AI, which selects its actions randomly. Both the human player and the AI are confronted with a total of seven possible actions during the whole game: performing three attacks, using three shields, and experiencing one surprise event. The game is divided into sequential rounds, in which players have a limited time of 20 s to select their actions. Each player has five health points, represented as hearts. The aim of an attack is to reduce the health points of the opposing player; if a player's health points are reduced to zero or less, that player loses the game. A shield protects the caster from certain incoming attacks. There are three types of shields: water, fire, and ice. The water shield protects the caster from water or fire attacks, the fire shield from ice and fire attacks, and the ice shield from ice and water attacks. Finally, during the game, the players may experience a random "surprise event" that can heal or stun a player. Figure 2 presents an overview of the game, and Table 1 explains the rules of the game in some detail.

1. Each player can choose one ability out of seven each turn.
2. When either player's health reaches zero or less, the game ends.
• If the first player's health reaches zero or less, the second player wins.
• If the second player's health reaches zero or less, the first player wins.
3. Each player can use each ability three times, except for the stun ability, which can only be used once.
• If both players exhaust their available actions while the health points of both remain above zero, the game is considered a tie.
4. There are two surprise abilities that can occur only once per game.
• "Health Increase Surprise" increases the health points of a random player by one point.
• "Stun Surprise" randomly stuns either player, preventing them from dealing or receiving any damage.
5. Each player can receive 0-2 damage points according to the following situations:
• 0 points of damage:
- If both players choose shield abilities.
- If the "stun" is used.
- If a player uses an attack element and the other player uses a shield of the same element or of the counter element, e.g., player one uses the fire attack while player two uses either the fire or water shield. Note: water counters fire, ice counters water, and fire counters ice.
• 1 point of damage:
- If both players use attack abilities.
• 2 points of damage:
- If a player uses an attack element and the opponent uses a shield of the element that does not counter it. Note: water is weak to ice, ice is weak to fire, and fire is weak to water.

Table 2 explains the game damage system and covers all the possible actions by both players. Figure 3 shows both players conducting a "fire attack"; hence, both will receive 1 point of damage. Figure 4 shows player one conducting a "fire attack" while player two casts an "ice shield", so player 2 will receive two points of damage. Figure 5 shows both players casting the "stun" ability, so neither of them will receive any damage.
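The damage rules above can be condensed into a single resolution function. The following Python sketch is a hypothetical implementation (the action names and function are ours, not from the paper's codebase) that returns the damage dealt to each player for one turn.

```python
# Hypothetical action encoding; the paper's Unity3D implementation may differ.
ATTACKS = {"fire_attack": "fire", "water_attack": "water", "ice_attack": "ice"}
SHIELDS = {"fire_shield": "fire", "water_shield": "water", "ice_shield": "ice"}
# Element -> the element that counters it (water counters fire, etc.).
COUNTERS = {"fire": "water", "water": "ice", "ice": "fire"}

def resolve_damage(a1, a2):
    """Return (damage_to_player1, damage_to_player2) for one turn."""
    if a1 == "stun" or a2 == "stun":
        return (0, 0)                      # stun: nobody deals or takes damage
    if a1 in SHIELDS and a2 in SHIELDS:
        return (0, 0)                      # two shields: nothing happens
    if a1 in ATTACKS and a2 in ATTACKS:
        return (1, 1)                      # two attacks: both take 1 point
    # One attacks, the other shields: the defender takes 2 points unless
    # the shield matches the attack element or counters it.
    if a1 in ATTACKS:
        attack, shield, defender = ATTACKS[a1], SHIELDS[a2], 2
    else:
        attack, shield, defender = ATTACKS[a2], SHIELDS[a1], 1
    blocked = shield == attack or shield == COUNTERS[attack]
    damage = 0 if blocked else 2
    return (0, damage) if defender == 2 else (damage, 0)

print(resolve_damage("fire_attack", "ice_shield"))  # -> (0, 2), as in Figure 4
```

This reproduces the three figure scenarios: two fire attacks yield (1, 1), a fire attack against an ice shield yields (0, 2), and a stun yields (0, 0).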


Data Collection
In order to construct the dataset, the researchers played a total of 113 games and recorded their facial expressions using the Motion LIVE plugin in iClone 7 [29] with an iPhone 10 and the LIVE FACE application [30]. Then, the facial expression animations were manually grouped into 10 different types with corresponding IDs from zero to nine. While there are multiple ways to automatically detect facial expressions and even extract various types of emotions [31][32][33], the researchers decided that automatic extraction was not worth the significant overhead and was out of the scope of this project, so they extracted and grouped the various facial expressions manually from the recorded videos. Table 3 summarizes the grouped facial expressions.
During the experiment, we recorded all the game data per turn, such as turn number, player actions, health points, surprise events, and victory state. Table 4 presents all the actions and their corresponding IDs.
The dataset produced by the experiment (available in Supplementary Files) contains twelve columns. The first column, named "y", represents the dependent value and consists of numbers from zero to nine that correspond to the facial expressions of Table 3. The second column is the ID of each game, and the third is the turn ID. The fourth and fifth columns contain the action IDs (see Table 4) of player 1 and player 2, respectively. Columns six and seven present the remaining health points of player 1 and player 2. Columns eight and nine contain values from −1 to 2 representing the damage dealing events. Columns ten and eleven contain the ID of the player (1 or 2) who suffers a "stun surprise event" or a "heal surprise event", or zero if not applicable. Column twelve contains the ID of the player (1 or 2) who won the game, zero if not applicable, or three in case of a tie. The following section applies different machine learning algorithms to discover correlations between the dependent and independent columns.
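To make the column layout concrete, the following Python sketch parses one illustrative row. The column names and the sample values are hypothetical (chosen by us to mirror the description above, not taken from the real dataset file).

```python
import csv
from io import StringIO

# Hypothetical column names mirroring the twelve-column layout described above.
COLUMNS = ["y", "game_id", "turn_id", "p1_action", "p2_action",
           "p1_health", "p2_health", "p1_damage", "p2_damage",
           "stun_target", "heal_target", "winner"]

# One illustrative row (not from the real dataset): facial expression 5,
# game 3, turn 2, both players used action 1, both at 4 health, both took
# 1 damage, no surprises, and no winner yet.
sample = "5,3,2,1,1,4,4,1,1,0,0,0\n"

reader = csv.DictReader(StringIO(sample), fieldnames=COLUMNS)
row = {key: int(value) for key, value in next(reader).items()}
print(row["y"], row["winner"])  # -> 5 0
```

Reading the file this way yields one record per turn, with the "y" column serving as the label during the analysis in the next section.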

Evil Laugh
Appl. Sci. 2020, 10, x FOR PEER REVIEW 9 of 18 Table 3. Facial expression summary. During the experiment, we recorded all the game data per turn, such as turn number, player actions, health points, surprise events, and victory state. Table 4 presents all the actions and their corresponding ids. Ice Def. 7

Facial Expressions
Water Stun The dataset produced by the experiment (available in supplementary files) contains 10 columns. The first column named "y" represents the dependent value and consists of numbers from zero to nine that correspond to facial expressions as per Table 3. The second column is the ID of each game. The third column is the turn ID. The fourth and fifth columns contain the action IDs (see Table 4) of player 1 and player 2, respectively. Column six and seven present the remaining health points of ID:3

Extreme Laugh
Appl. Sci. 2020, 10, x FOR PEER REVIEW 9 of 18 Table 3. Facial expression summary. During the experiment, we recorded all the game data per turn, such as turn number, player actions, health points, surprise events, and victory state. Table 4 presents all the actions and their corresponding ids. Ice Def. 7

Facial Expressions
Appl. Sci. 2020, 10, x FOR PEER REVIEW 9 of 18

Table 3. Facial expression summary.

During the experiment, we recorded all the game data per turn, such as turn number, player actions, health points, surprise events, and victory state. Table 4 presents all the actions and their corresponding IDs.

The dataset produced by the experiment (available in the supplementary files) contains 12 columns. The first column, named "y", represents the dependent value and consists of numbers from zero to nine that correspond to the facial expressions of Table 3. The second column is the ID of each game. The third column is the turn ID. The fourth and fifth columns contain the action IDs (see Table 4) of player 1 and player 2, respectively. Columns six and seven present the remaining health points of player 1 and player 2, respectively. Columns eight and nine hold the points of damage received by player 1 and player 2, columns ten and eleven hold the "surprise" events ("stun" and "1 health bonus"), and the last column holds the "win, lose, or tie" state.
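The column layout described above might be modeled as follows. This is an illustrative sketch only: the field names, and the exact order of the columns past the health points, are assumptions inferred from the analysis below rather than a published schema.

```python
from dataclasses import dataclass

@dataclass
class TurnRecord:
    """One dataset row; field names are hypothetical."""
    y: int             # dependent value: facial expression ID (0-9, Table 3)
    game_id: int       # ID of the game
    turn_id: int       # ID of the turn
    action_p1: int     # action ID of player 1 (Table 4)
    action_p2: int     # action ID of player 2 (Table 4)
    health_p1: int     # remaining health points of player 1
    health_p2: int     # remaining health points of player 2
    damage_p1: int     # "damage dealing" events (columns eight and nine)
    damage_p2: int
    stun: int          # "surprise" events (columns ten and eleven)
    health_bonus: int
    win: int           # "win, lose, or tie" state (0 while the game continues)
```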

Analysis
The biggest challenge in analyzing the facial reactions and their association with game events was that the same combination of game events caused different facial expressions. To reflect this in our model, we linked each combination of game events with the probability of occurrence of the corresponding facial expressions. During this process, we identified that some game events have a more dominant effect than others. The "win, lose, or tie" game event (the last column of the dataset) was the most dominant, followed by the "surprise" (columns ten and eleven) and "damage dealing" (columns eight and nine) event combinations.
Therefore, we developed a three-step decision-making process that associates the facial animations with only one of these events instead of all game events. The system should first check if there is a "win, lose, or tie" game event and, if the check is positive, play one of the corresponding animations. Otherwise, it should check if a "surprise" game event is happening and, if so, play one of its associated animations. Finally, if none of the above are present, the system should decide based on the "damage dealing" event combinations. To accommodate this, we studied these three associations separately from each other.
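The three-step check above can be sketched as a small dispatcher. The record fields and the event encoding (0 meaning "no event") are illustrative assumptions, not the prototype's actual code:

```python
from dataclasses import dataclass

@dataclass
class Turn:
    win_state: int     # assumed encoding: 0 = game continues
    stun: int          # "surprise" columns
    health_bonus: int
    damage_p1: int     # "damage dealing" columns
    damage_p2: int

def select_event(turn: Turn):
    """Pick the dominant event type: win/lose/tie first, then surprise, then damage."""
    if turn.win_state != 0:
        return ("win_lose_tie", turn.win_state)
    if turn.stun != 0 or turn.health_bonus != 0:
        return ("surprise", f"{turn.stun}_{turn.health_bonus}")
    return ("damage", f"{turn.damage_p1}_{turn.damage_p2}")
```

The returned pair would then index the corresponding probability table before an animation is sampled.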
As can be seen in Table 5, when the player won, facial expression 4 (Extreme Laugh) occurred 14 out of 52 times, and facial expression 3 (Evil Laugh) occurred 12 times. In some cases, the dataset included unpredicted behaviors. For example, the player won the game but was furious (8) instead of happy. This was either the result of a misrecording, or the subject was disturbed by environmental factors during the data collection. However, such cases have a low number of occurrences and can be ignored as outliers during the modeling.

Table 5. Summary of the occurrence of the various facial expressions in association with win, lose, and tie game events.

Before the model can be produced, the dominant associations must be identified and their possibilities of occurrence calculated. To that end, for the win and lose cases from Table 5, all facial expressions that occurred fewer than five times were removed. That leaves four facial expressions (1, 2, 3, 4) for "Player Win" and three (6, 7, 8) for "Player Lose". To calculate the possibility of occurrence (P), we add up the total occurrences of the facial expressions that were not eliminated (T). Then, we use it as a divisor for the number of occurrences of each facial expression (F) and multiply the result by 100. Equation (1) presents the formula used for calculating the individual possibilities.

P = (F / T) * 100 (1)
The "Tie" column has only one occurrence (facial expression 5), so its possibility of occurrence is 100%. Table 6 presents the possibilities of occurrence of the various facial expressions, calculated using the formula in Equation (1). According to Table 5, in 19 of the cases where the player lost, the subject was furious (8). Since facial expressions 6, 7, and 8 were not eliminated, T is their total number of occurrences, which is 50. So, the possibility of facial expression 8 is (19/50) * 100 = 38%. The rest of the possibilities were calculated in the same way. For simplification purposes, we rounded and/or manually altered the values (by up to 1%) to ensure that all possibilities are whole numbers and that each column sums to 100%.

Table 7 lists all the possible combinations of "surprise" events and the number of occurrences of the various facial expressions for each combination. The two digits at the beginning and end of each column name hold the values of the "stun" (column ten) and "1 health bonus" (column eleven) events, respectively. So, "0_1" means that there was no "stun" event and that player 1 (the human) received one point of bonus health. The rest of the columns follow the same naming convention, based on all possible combinations of values from the dataset (explained in the previous section). It is worth noting that the occurrences in Table 7 were counted by excluding the rows where "Win" (the last column) had a value other than zero. This is because, as explained before, the "win, lose, or tie" events were analyzed separately and then temporarily removed from the dataset so that they would not affect the rest of the analysis.
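As a sketch, Equation (1) together with the elimination rule might be implemented as follows. The function name and rounding behavior are illustrative (the paper additionally alters some values by up to 1% manually), and the example counts for expressions 6 and 7 are reconstructed from the percentages quoted in the text; the entry with ID 0 is added purely to illustrate elimination.

```python
def occurrence_possibilities(counts, min_count=5):
    """Equation (1): P = (F / T) * 100 over the non-eliminated expressions.

    counts maps facial expression ID -> occurrences (F); expressions with
    fewer than min_count occurrences are eliminated first, and T is the
    total of the surviving occurrences.
    """
    kept = {fid: n for fid, n in counts.items() if n >= min_count}
    total = sum(kept.values())  # T
    return {fid: round(n / total * 100) for fid, n in kept.items()}

# "Player Lose" column: expressions 6, 7, and 8 survive with T = 50,
# so expression 8 gets (19 / 50) * 100 = 38%.
lose = occurrence_possibilities({6: 12, 7: 19, 8: 19, 0: 2})
```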
Before the possibilities were calculated, some facial expressions with low occurrence counts were eliminated. The possibilities for the remaining facial expressions were calculated using Equation (1) and are presented in Table 8.

During the analysis, it was found that the "damage dealing" events summarized the remaining columns in terms of contribution to the association. Hence, taking all their combinations, in a similar manner to the "surprise" events, was enough for determining the association with the facial expressions. There are four possible combinations: no player received any damage ("0_0"), player 1 received no damage but player 2 received two points of damage ("0_2"), both players received one point of damage ("1_1"), or player 1 received two points of damage ("2_0"). Table 9 presents the number of occurrences of the various facial expressions under each game event. The counting of occurrences of facial animations during "damage dealing" events excluded the rows where the "surprise" or "win" events (the last three columns) had values other than zero; if even one of these columns had a value other than zero, the row was excluded from the counting, as "surprise" events have priority over "damage dealing" events. The possibilities of occurrence of the non-eliminated facial expressions, calculated using Equation (1), are presented in Table 10.

With all the game events identified and their associated probabilities calculated, it is possible to produce a tree that translates these relationships. The researchers opted for diagramming a tree in which the branches represent the facial expression outcomes (dependent variable) of each game event being played (independent variable). The root of the tree is the game event. The children hold the possible facial expression IDs and their possibilities of occurrence. Figure 6 presents a probability tree for the "win", "lose", and "tie" events.
The possibilities taken from Table 6 were reversed, assuming that the computer-controlled player, referred to as the Artificial Intelligence (AI), is the human player. For example, when the human player was losing, it was assumed that the AI was losing, and hence it should react in a similar way.
Table 10. Summary of the possibility of occurrence of the various facial expressions in association with damage dealing game events.

Similarly, Figure 7 translates the results of Table 8 by converting 1 to 2 and 2 to 1, as these values represent player IDs. So, event "1_2" will use the values of column "2_1" from Table 8. The value 0 was not affected. Translating the results from Table 10 required swapping the order of the digits that represent points of damage. For example, "0_2" uses the associations of "2_0". The event "0_0" was not affected. The results are presented in Figure 8.

As can be seen, the possibilities of occurrence of the children of each event sum to 100%. The game will initially identify which game event occurred. Then, it will partition a space from zero to 100 based on the possibilities of occurrence, starting from left to right. For example, as can be seen in Figure 9, the AI lose condition is partitioned based on the possibilities of occurrence of its three facial expressions, which are extracted from Figure 6 for the AI lose condition. From zero to 24 is the space of facial expression 6, 25 to 62 is the space of facial expression 7, and 63 to 100 is the space of facial expression 8. The system will select which facial expression to play based on a randomly generated number from zero to 100. For example, if the generated number is 60, it will select facial expression 7. The same process is repeated for the other game events.
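The partition-and-draw selection just described might be sketched as follows, using the AI lose condition (24/38/38) as the example; the names are illustrative:

```python
import random

# AI lose condition: spaces 0-24 (expression 6), 25-62 (7), 63-100 (8)
AI_LOSE = {6: 24, 7: 38, 8: 38}

def expression_for(possibilities, draw):
    """Map a number in 0..100 onto left-to-right partitions of the space."""
    upper = 0
    fid = None
    for fid, p in possibilities.items():  # dict order = left to right
        upper += p
        if draw <= upper:
            return fid
    return fid  # fallback if rounding left the total slightly under 100

def pick_expression(possibilities):
    return expression_for(possibilities, random.randint(0, 100))
```

For instance, a draw of 60 falls in the 25-62 space and selects facial expression 7, matching the worked example in the text.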
While the proposed model focuses on facial expressions, the same approach can be used for modeling any type of virtual character reaction, leading to more immersive virtual worlds. The populated possibility model developed and used in this work cannot be reused for other games. However, the proposed approach can be generalized using the steps from Figure 10. To apply it in a different scenario, developers must first collect the animation data of real human subjects while they are experiencing various game events. Then, they can extract the possibilities of occurrence from the dataset. The next step is reverting the possibilities of occurrence to represent the AI instead of the player behavior (as explained earlier in this section). Finally, the possibility tree can be produced and used.
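The key reversals used in the "reverting" step (converting player IDs in "surprise" keys, and swapping the digit order in "damage dealing" keys, as described earlier for Figures 7 and 8) might be sketched as two small helpers; the function names are hypothetical:

```python
def revert_surprise_key(key: str) -> str:
    """Convert 1 to 2 and 2 to 1 in a "surprise" key; these digits are player IDs."""
    swap = {"1": "2", "2": "1"}
    return "_".join(swap.get(d, d) for d in key.split("_"))

def revert_damage_key(key: str) -> str:
    """Swap the order of the digits in a "damage dealing" key (points of damage)."""
    left, right = key.split("_")
    return f"{right}_{left}"
```

In both helpers the value 0 is left untouched, as in the paper.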

Evaluation
To evaluate this work, the researchers examined the trees from Figures 6-8 for reasonable results, and they conducted user experience testing. During the tree evaluation, it was discovered that the "win, lose, tie" game events had reasonable reactions. The "surprise" event "1_0" (stun) sometimes caused laughter (28%) and sometimes upset (24%). While this is contradictory, it is a result of the small number of occurrences, as "surprise" events were relatively rare. The rest of the "surprise" events produced expected behaviors. The "damage dealing" events had consistently happy and sad facial expressions when dealing damage to the enemy and when receiving damage, respectively. When nobody received damage, or both players received one point of damage, we got either negative or positive expressions, which is considered normal.
The user experience testing consisted of asking 24 students attending the "3D Game Development" class, who were not involved with this research, to play the prototype game and answer a questionnaire. The questionnaire and its responses are summarized in Table 11.

Table 11. User experience testing, questionnaire summary.

As can be seen from Table 11, all users noticed that the computer-controlled character portrait was linked to the game events. The students did not realize that these reactions were the result of our model, as they were not aware of this research, and this is reflected in the results. All students agreed that the interactive computer-controlled character portrait made the game more immersive and fun to play. Overall, the researchers are satisfied with the proposed approach.


Question 1 2 3
Was the computer-controlled character portrait reacting based on what was happening in the game? I don't know (1), yes (2), no (3) 0/24 0/24 24/24
The behavior was random (1), pre-programmed (2)

Conclusions
This work presented a new approach for developing interactive computer-controlled character behaviors that react based on what is happening in the game. The approach was based on modeling facial expressions using a dataset collected from human subjects playing a computer game developed as part of this project. The prototype computer game was developed for the purposes of data collection and proof of concept. During the research, it was found that "win, lose, tie" game events have a more dominant association with the facial expressions than the rest of the game events, followed by "surprise" game events, which occurred rarely, and finally "damage dealing" events. The associations of these three types of events were modeled separately. The models were in the form of probabilistic trees that include game events as roots and the possible facial expressions, with their possibilities of occurrence, as leaves. This work was evaluated by manually checking the trees for unexpected leaves, as well as by user experience testing. While manually inspecting the trees, it was found that only 1 out of 15 game events had a possibility of producing an unexpected facial expression. The user experience test was conducted with 24 test subjects, who confirmed that the computer-controlled character was responsive to the game events, which made the game more immersive and fun to play.
The main limitation of this work lies in the dataset, which was collected while one human subject was playing the game. The problem is that different people with different personalities may react differently to the same game event. Utilizing more people could result in a richer dataset that would allow the computer-controlled character not only to mimic facial expressions based on the various game events but also to react differently based on different personality profiles. This is the main focus of our future work, together with more testing. While this work focused on associating a computer-controlled character's facial expressions with game events, the same process can be used for modeling any type of avatar reaction, which can lead to more immersive virtual worlds and games that are more fun to play.