Article

What Constitutes Fairness in Games? A Case Study with Scrabble

by Htun Pa Pa Aung, Mohd Nor Akmal Khalid *,† and Hiroyuki Iida *
Research Center for Entertainment Science, School of Information Science, Japan Advanced Institute of Science and Technology, 1-1 Asahidai, Nomi 923-1211, Ishikawa, Japan
* Authors to whom correspondence should be addressed.
† These authors contributed equally to this work.
Information 2021, 12(9), 352; https://doi.org/10.3390/info12090352
Submission received: 19 July 2021 / Revised: 13 August 2021 / Accepted: 19 August 2021 / Published: 30 August 2021
(This article belongs to the Section Information Applications)

Abstract: The compensation system called komi has been used in scoring games such as Go. In Go, White (the second player) is at a disadvantage because Black moves first; indeed, the winning percentage for Black is higher. The value of komi has been re-evaluated over the years to maintain fairness, which implies that a static komi is not a sufficiently sophisticated solution. We leveraged existing komi methods in Go to study the evolution of fairness in board games and to generalize the concept of fairness to other contexts. This work revisits the notion of fairness and proposes the concept of dynamic komi for Scrabble. We introduce two approaches, static and dynamic komi, in Scrabble to mitigate the advantage-of-initiative (AoI) issue and to improve fairness. We found that implementing dynamic komi made the game attractive and provided direct real-time feedback, which is useful for training novice players and maintaining fairness for skilled players. A possible physics-in-mind interpretation is also discussed for enhancing game refinement theory concerning fairness in games.
Keywords: games; fairness; Scrabble; Go; komi

1. Introduction

Fairness is essential for many multi-agent systems and for human society, contributing to both stability and productivity. The concept of fairness emerges in various contexts, such as telecommunication networks, operating systems, and the economy [1,2], whenever a limited amount of resources must be shared concurrently among several individuals. Recent work has shown that fairness is becoming increasingly critical as machine learning software, with its black-box nature, is increasingly used for important decision making [3,4,5,6,7]. The potential for advanced machine learning systems to amplify social inequity and unfairness is receiving growing popular interest and academic attention [8]. Measuring the fairness of machine learning models has been studied from different perspectives, with the aims of mitigating bias in complex environments and supporting developers in building fairer models [9,10,11,12,13,14].
In the field of intelligent communication, throughput fairness was improved by a novel user cooperation method in a wireless powered communication network (WPCN) [15]. Fairness is one of the most important aspects of a good game, but it is rarely straightforward. It is also essential for attracting more people to play a game: if a game loses fairness and equality, it cannot survive for long [16]. Fairness is defined by the various stakeholders of society, and fair play gives games the characteristic of beauty [17]. The evolution of fairness was studied by Shirata [18] under an assortative matching rule in the ultimatum game. In the domain of two-player perfect-information board games such as chess and Go, a game has been defined as fair if and only if the winning ratios for White and Black are statistically equal or nearly so [19].
Artificial intelligence (AI) is typically achieved by a collection of techniques to simulate human decision-making skills. Since the 1950s, AI has played an essential role in the game industry as an ideal domain to evaluate the potential of AI applications. AI strives to build intelligent agents that can perceive and act rationally to accomplish goals. In recent years, AI researchers have developed programs capable of defeating the strongest human players in the world. Superhuman-performance programs exist for popular board games such as chess, shogi, Go (AlphaZero by [20]), checkers (Chinook by [21]), Othello (Logistello by [22]), and Scrabble (Maven by [23]).
Although superhuman-performance programs have been achieved, the question of what makes a game good and fair is still actively debated [24,25]. While a game’s rules might be balanced, the player may feel that the experience is not fair, which is a source of design tension. The concept of fairness in games was first studied by Van Den Herik et al. [26]. Meanwhile, Iida [27] discussed fairness in game evolution, revealing a glimpse of what human intelligence from different parts of the world sought throughout history in all games: thrilling and fair play.
Some board games have persisted in popularity despite the changing entertainment opportunities afforded to consumers by rapidly evolving technology. Scrabble is one such brilliantly engineered board game that remains unique in the contemporary game community. Scrabble is a popular crossword game and is interesting from an AI perspective because information is gradually revealed to the players during game play. Scrabble has been sold in 121 countries (approximately 150 million sets have been sold worldwide); it is available in 29 languages, is played in approximately 4000 Scrabble clubs, and is owned by roughly one-third of American households and half of British households [28,29,30]. Scrabble is a type of scoring game that is played on a physical or virtual board. Scrabble AI programs have achieved a level of performance that exceeds that of the strongest human players [23]. Furthermore, Scrabble, with its randomized initial position, provided an interesting case when the advantage of the initiative was reconsidered through self-play experiments [16].
A sophisticated game should have a draw as its game-theoretic value. In Scrabble, the definition of fairness should be: “two similarly leveled players have a statistically equal or nearly equal winning ratio”. As such, the nature of Scrabble, its fairness mechanism, and the evolution of its gameplay are investigated in this paper using game refinement theory to uncover the underlying process fairness.
The rest of this paper is organized as follows. Section 2 gives an overview of existing works and research questions with hypotheses. We also introduce some notions relating to Scrabble and komi in this section. Section 3 describes the proposed dynamic komi idea and how it can efficiently be used to give a fair game environment for different performance levels. The experiments and their results are described and analyzed in Section 4. Finally, Section 5 concludes and points to future research directions and opportunities.

2. Literature Review and Related Work

2.1. Artificial Intelligence (AI) and Fairness in Games

In the past, the Go board game was listed as one of the grand challenges of AI [31,32]. By 2006, the strength of computer Go programs was generally below 6-kyu [33,34], far from the strength of amateur dan players. With the adoption of Monte Carlo tree search (MCTS) in 2006, computer Go programs started to make significant progress, reaching up to 6-dan in 2015 [32,35,36,37,38,39,40]. In 2016, this grand challenge was met by the AlphaGo program [41] when it defeated Lee Sedol 4:1; Lee is a 9-dan grandmaster who had won many world Go championship titles in the preceding decade. Many thought at the time that such technology was a decade or more away from this AI milestone. AlphaGo introduced a new approach to computer Go: deep neural networks trained by a combination of supervised learning from human expert games and learning from games of self-play, used to evaluate board positions and select moves within a new search algorithm that combines Monte Carlo simulation with value and policy networks [41].
In traditional computer Go programs with MCTS, dynamic komi is a technique widely used to make the program play more aggressively, especially in handicap games [40]. With the growing availability of machine learning techniques and faster computer hardware, computer programs superior to human beings have emerged in the most popular perfect-information games, including checkers, chess, shogi, and Go [20]. Maintaining fairness when designing a game is challenging. Games have survived for a long time by changing their rules to seek fairness and become more attractive and engaging. Several board games maintain fairness in this way, including chess, which was studied in [19]. In the history of Chinese and Western chess, the rules have been changed many times; as a result, draw outcomes became prominent in competitive tournaments. In gomoku (Connect5), Allis [42] proved that the first player always wins with perfect play on a 15 × 15 board. As such, some of the game's rules were changed to ensure fairness. In the game of renju (a professional variant of gomoku), the first player is debarred from playing some moves, while the second player gets to swap colors after the first player's second move [43]. However, some advantage remains for one side under this rule. Hence, Connect6 was developed, which is much fairer than gomoku in some ways.
Scrabble is a unique game: it is simultaneously a scoring game (like Go), a board game (like checkers), and a game of imperfect information (like a card game). In Scrabble, the current player is unaware of the opponent's rack, making it difficult to guess the opponent's next move until the end of the game. There is also inherent randomness, as random letters are drawn from the bag to the current player's rack. The state space of Scrabble is also quite complicated because the tiles are marked with specific letters rather than just Black and White. Currently, Maven and Quackle are the leading Scrabble AI programs. Maven was created by Brian Sheppard [44], whereas Quackle is an open-source Scrabble AI developed by Jason Katz-Brown and John O'Laughlin in 2006 [45].

2.2. Game Refinement Theory

Game refinement theory has been investigated based on the uncertainty of the game outcome [46,47,48]. It has been studied not only in fun-game domains such as video games [49,50], board games [47], and sports [51,52], but also in non-game domains such as education and business [53,54]. The game refinement value typically converges towards a comfortable zone (GR ∈ [0.07, 0.08]), which is associated with measures of game entertainment and sophistication involving a balance between the level of skill and chance in the game [55,56].
The information on the game's result is an increasing function of time (i.e., the number of moves in board games) t, which corresponds to the amount of solved uncertainty (or information obtained) x(t), as given by (1). The parameter n (where 1 ≤ n ≤ N) is the number of possible options, with x(0) = 0 and x(T) = 1.
$x'(t) = \frac{n}{t}\, x(t) \qquad (1)$
Here x(T) stands for the normalized amount of solved uncertainty, with 0 ≤ t ≤ T and 0 ≤ x(t) ≤ 1. The rate of increase in the solved information, x′(t), is proportional to x(t) and inversely proportional to t, as expressed in (1). Solving (1) by separation of variables, with the boundary condition x(T) = 1, yields (2). It is assumed that the solved information x(t) is twice differentiable on t ∈ [0, T]. The acceleration of the solved uncertainty along the game progress is given by the second derivative of (2), shown in (3); this acceleration reflects the change in the rate of acquired information during game progress. Then, a measure of game refinement (GR) is obtained as the square root of the second derivative (Equation (4)).
$x(t) = \left(\frac{t}{T}\right)^{n} \qquad (2)$
$x''(t) = \frac{n(n-1)}{T^{n}}\, t^{\,n-2}\,\Big|_{\,t=T} = \frac{n(n-1)}{T^{2}} \qquad (3)$
$GR = \frac{\sqrt{n(n-1)}}{T} \qquad (4)$
A skillful player would consider a smaller set of plausible candidates (b) among all possible moves (B) to find a move to play. The core assumption for a stochastic game is that each of the b candidates may be selected with equal probability. Since the parameter n in (4) stands for the number of plausible moves b, and √(n(n−1)) ≈ n, taking n ≃ √B yields the approximation below. Thus, for a game with branching factor B and length D, GR can be approximated as in (5).
$GR \approx \frac{\sqrt{B}}{D} \qquad (5)$
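As a quick sanity check on (5), the board-game GR values in Table 3 can be reproduced in a few lines of Python; the (B, D) figures below are taken directly from that table:

```python
import math

def game_refinement(B: float, D: float) -> float:
    """GR ~ sqrt(B) / D, the board-game approximation in (5)."""
    return math.sqrt(B) / D

# (B, D) pairs from Table 3.
for game, B, D in [("Chess", 35.00, 80.00),
                   ("Go", 250.00, 208.00),
                   ("Shogi", 80.00, 115.00)]:
    print(f"{game}: GR = {game_refinement(B, D):.3f}")
# Chess: GR = 0.074, Go: GR = 0.076, Shogi: GR = 0.078 -- all in [0.07, 0.08].
```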

2.3. Gamified Experience and the Notion of Fairness

Let p be the probability of selecting the best move among n options, which implies p = 1/n. As such, a gamified experience can be defined based on the notion of the risk frequency ratio. The risk frequency ratio m (risk frequency over the whole game length) is defined as m = 1 − p = (n − 1)/n. Then, a gamified experience is gained if and only if the risk of failure occurs at least half the time (m ≥ 1/2), which implies n ≥ 2.

2.3.1. Definition of Outcome Fairness

Based on the gamified experience, one notion of fairness can be defined as outcome fairness, which focuses on the game outcome over the whole game. Fairness is gained if and only if the winning ratio is p = 1/2 for both the White and Black players.
Let t and y(t) be the time (or length) of a given game and the uncertainty solved at time t, respectively. A player who solves uncertainty at the average rate v is described by (6). The informational acceleration felt by the player is given by (7). Considering the cross point between (6) and (7), found at t = D (where vD = ½aD²), the relation (8) is obtained.
$y(t) = vt \qquad (6)$
$y(t) = \frac{1}{2} a t^{2}, \quad \text{where } a = GR^{2} \qquad (7)$
$a = \frac{2v}{D} \qquad (8)$
The cross point D indicates the correct balance between skill and chance concerning the gamified experience as well as comfortable thrill by the informational acceleration in the game under consideration. In other words, a sophisticated game postulates an appropriate game length to solve uncertainty while gaining the necessary information to identify the winner. Moreover, if the game length is too long (or too short) or the total score is too large (or small), the game would be boring (or unfair).

2.3.2. Definition of Process Fairness

In score-based games, game length is not a reliable measure, since it can vary between game sets. Thus, a different measure of p is utilized in this context. Assume that P1 is the player who gains an advantage first in the early stages of the game, and that P2 is the player who fails to do so. Let W be the number of advantages gained by P1 and L be the number of advantages gained by P2 (i.e., the number of disadvantages for P1); then another notion of fairness can be defined, namely, process fairness.
A game is competitive and fair if m = 1/2, based on (9) and (10). This notion leads to the interpretation of mass m in game playing, which is determined on the basis of the target domain: (1) board games, and (2) scoring board games.
(1) Board games:
Let B and D be the average number of possible moves and the game length, respectively. The score rate p is approximated as (9), which derives from the approximation in (5) (see the short derivation after this list).
$p \approx \frac{1}{2}\frac{B}{D} \quad \text{and} \quad m = 1 - p \qquad (9)$
(2) Scoring games:
Let W and L be the number of advantages and the number of disadvantages, respectively, in games that have an observable score pattern. The score rate p is given by (10), which implies that the number of advantages W and the number of disadvantages L are almost equal when a game meets fairness.
$p = \frac{W}{W+L} \quad \text{and} \quad m = 1 - p = \frac{L}{W+L} \qquad (10)$
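One way to see where the constant in (9) comes from, combining (5), (7), and (8) under the identification v = p (a step the text leaves implicit):

$a = GR^{2} \approx \left(\frac{\sqrt{B}}{D}\right)^{2} = \frac{B}{D^{2}}, \qquad a = \frac{2v}{D} \;\Rightarrow\; v = \frac{aD}{2} \approx \frac{1}{2}\frac{B}{D}$

so that p = v recovers (9). As a check, chess (B = 35, D = 80) gives p ≈ 0.219 and m = 1 − p ≈ 0.781, matching the chess row of Table 4.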
Figure 1 describes two types of game progression curves. A game is considered fair if each of an evenly matched group of players has a prior equal chance of winning for any given starting position. Each player in a regular fair game between two players should win about 50% of the time when both players are playing at the same proficiency level. Figure 1a shows the progression curve for games with a first-player advantage, where many games are won by the first player. A seesaw game with a good dynamic is one where the result is unpredictable to the very last moves of the endgame stage. This implies that the game is more exciting, fascinating, and entertaining. As such, it is expected that this ideal game progression is the most important for a well-refined game. It indicates that both players have a good chance of winning the game, as shown in Figure 1b.

2.3.3. Momentum, Force, and Potential Energy in Games

Various physical formulations have been established around the motion of an object in physics. The most fundamental such formulations are the measures of force, momentum, and potential energy. We adopt these measures in the context of games, where m = 1 − p is the mass, v = p is the velocity, and a = 2v/D = GR² is the acceleration (cf. (7) and (8)). Then, the force, momentum, and potential energy (by analogy with gravitational potential energy) are obtained as (11)–(13), respectively.
$F = ma \qquad (11)$
$\vec{p} = mv \qquad (12)$
$E_p = ma \cdot \frac{1}{2}at^{2} = \frac{1}{2}m a^{2} t^{2} = 2mv^{2} \qquad (13)$
where the final equality evaluates the energy at t = D using (8).
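Equations (10)–(13) are straightforward to put to work; below is a minimal sketch (assuming p comes from (9) or (10) and a from (8)), checked against the Go grandmaster row of Table 4:

```python
def mind_physics(p: float, D: float) -> dict:
    """Physics-in-mind measures from the score rate p and game length D:
    mass m = 1 - p, velocity v = p, acceleration a = 2v/D per (8),
    force F = ma per (11), momentum mv per (12), energy 2mv^2 per (13)."""
    m, v = 1.0 - p, p
    a = 2.0 * v / D
    return {"m": m, "momentum": m * v, "F": m * a, "Ep": 2.0 * m * v ** 2}

# Go, grandmaster data: p ~= B / (2D) per (9), with B = 250, D = 208.
go = mind_physics(p=250.0 / (2 * 208.0), D=208.0)
print({k: round(x, 4) for k, x in go.items()})
# {'m': 0.399, 'momentum': 0.2398, 'F': 0.0023, 'Ep': 0.2882}
# Close to the Table 4 row: m = 0.40, momentum = 0.24, Ep = 0.288.
```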

2.4. Evolution of Fair Komi in Go

Komi (or compensation) is a Japanese Go term that has been adopted in English. In a game of Go, Black has the advantage of the first move. In order to compensate for this, White can be given an agreed-upon set number of points before starting the game (the typical value is from 5 to 8 points). These points are called komi [57]. Before 1937, komi was rarely used in professional tournaments, and its gradual introduction into professional play was not without controversy. Now, almost all Go tournaments (amateur and professional) use komi.
Although there were some games played with compensation in the 19th century, more substantial experiments came in the first half of the 20th century. Several values were experimented with until a value of 4.5 became the standard from the 1940s onward. Game results from the next two decades showed that 4.5 komi still favored Black, so a change was made to 5.5 komi, which was mostly used for the rest of the 20th century in both Japan and China [57]. At the start of the 21st century, the komi was increased yet again to 6.5 (Korea and Japan) and 7.5 (China).
In theory, the perfect komi for a given ruleset is a clear concept: it is the number of points by which Black would win provided optimal play by both sides. Unless the ruleset allows fractional winning margins (which none of the common ones do), this is necessarily a whole number. Due to the absence of perfect players in Go, this number cannot be determined with certainty. However, it is possible to make a reasonable guess at it, at least for some rulesets. For this study, the current komi system is called static komi because it is determined based on the statistical scores of previously played games.

3. The Proposed Assessment Method

3.1. Dynamic versus Static Komi

Since perfect play is not yet possible in Go, statistical analysis was used to judge the fairness of a given komi value. To illustrate how komi is determined, the statistics of professional games played on a 19 × 19 board with a komi of 5.5 are given in Table 1. The data show that a komi of 5.5 slightly favors Black; the compensation is therefore not sufficient for White to overcome Black's first-move advantage.
Although it is tempting to use this as evidence to grant a komi of one point higher, the greater proportion of games would then be won by White, which is not entirely fair. The problem is that professional Go players play to win since winning by a little or winning by a lot is still winning (the same is true for losing). Thus, a change of strategy only happens when a player loses or gains the advantage. A player who is behind will try to get ahead by introducing complexities, even losing points in the process. The leading player may be willing to play sub-optimally in order to reduce complexities and give up a few points to maintain the lead.
The advent of AlphaGo and other AI bots creates the need for performance benchmarking through explicit probabilistic evaluation. With the standard komi value of 7.5, the bots evaluate their opponent as being ahead by roughly 55% to 45%. Similarly, KataGo shifts its evaluation of the opponent's score by half a point, suggesting a fair komi value of 7 instead. Thus, substantial evidence of the perfect komi (or an upper bound on it) is needed. A much more reliable statistic can be obtained from games won by gaining the advantage W (or disadvantage L) for a given komi value.
In the context of Scrabble, previous work adopted a similar komi concept by proposing a static komi method that corresponds to the players' levels, ensuring a fair game environment for players of all levels [58]. However, that approach depends on the board situation: the program adjusts the komi value internally, either granting the AI player a “virtual advantage” when it is losing or burdening it with a virtual disadvantage when it is winning by too much in the actual game. This approach may be limited, since the constant komi values are computed statistically from the expected score difference at the player's level over a specific number of simulations. It may also lead to a second-player advantage, since a player could hold back their best move (the highest-scoring word) until after receiving the komi in the earlier stages. This study proposes a new approach, called dynamic komi, that adjusts the score within each particular game match. Since a correct static komi depends only on the board size and the player's ability, the proposed dynamic komi method significantly enhances the fairness level over both the original AI programs and the static komi method.
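The adjustment rule itself is examined through the experiments later on; purely as a rough sketch, one way to realize "adjusting the score based on each particular game match" is to recompute the compensation each turn from the running score gap, tapering it off toward the endgame. Every name and the taper rule below are our own assumptions, not the authors' published implementation:

```python
def dynamic_komi(leader_score: int, trailer_score: int,
                 turns_played: int, expected_turns: int = 25,
                 damping: float = 0.5) -> float:
    """Hypothetical per-turn komi: credit the trailing side with a damped
    fraction of the current score gap, fading out late in the game.
    Illustrative only -- not the authors' published rule."""
    gap = leader_score - trailer_score
    remaining = max(expected_turns - turns_played, 0) / expected_turns
    return damping * gap * remaining

# Example: the leader is ahead 180-120 after 10 of ~25 turns, so the
# trailing side is credited 0.5 * 60 * 0.6 = 18 compensation points.
print(dynamic_komi(180, 120, 10))  # 18.0
```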

3.2. Scrabble AI and Play Strategies

Scrabble is a word anagram game in which two to four players competitively score points by placing tiles, each bearing a single letter, onto a 15 × 15 board. The tiles must form words accepted by the dictionary, running vertically or horizontally in crossword fashion. There are two general sets of acceptable words, named OTCWL and SOWPODS. These lists were developed especially for Scrabble and contain only words of 2 to 15 letters. OTCWL is generally used in the USA, Canada, and Thailand, while other countries use SOWPODS.
Scrabble has been played for a long time under various settings (Scrabble history—making of the classic American board game: https://scrabble.hasbro.com/en-us/history, accessed on 18 January 2020), for example, as a friendly match game among friends or family members, as a competitive match between professional players [59], and as a language-learning tool by students. Different players may have different vocabulary knowledge and are supposed to have distinct playing experiences [60].
Compared to typical board games, Scrabble requires both strategic skill and a sufficient vocabulary. Scrabble has also developed an advanced community of players. From previous work [16], it is known that Scrabble has an issue with the advantage of initiative (AoI), which is essential for fairness. Utilizing an AI player, the fairness of Scrabble can be determined based on AI feedback. To achieve this feedback, the gameplay of the Scrabble AI in this study is subdivided into three phases (encoded in the short helper after the list):
  • Mid-game: This phase lasts from the beginning until there are nine or fewer tiles left in the bag.
  • Pre-endgame: This phase starts when nine tiles are left in the bag. It is designed to attempt to yield a good end-game situation.
  • Endgame: During this phase, there are no tiles left in the bag, and the state of the Scrabble game becomes a perfect information game.
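These boundaries depend only on the number of tiles remaining in the bag, so they reduce to a small helper (a sketch; the phase names follow the list above):

```python
def game_phase(tiles_in_bag: int) -> str:
    """Map the bag count to the three Scrabble AI phases described above."""
    if tiles_in_bag == 0:
        return "endgame"      # bag empty: Scrabble becomes a perfect information game
    if tiles_in_bag <= 9:
        return "pre-endgame"  # nine or fewer tiles remain in the bag
    return "mid-game"         # from the opening until nine tiles remain
```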

4. Computational Experiment and Result Analysis

This section discusses the experiments conducted and the results obtained in this study. The Scrabble AI program was implemented in C#, and one hundred distinct match settings were simulated with five hundred iterations each. These experiments were executed on an ASUS PC with 16 GB RAM and an Intel Core i7-10700 processor, running the Windows 10 operating system. We collected the average results of the self-play games to analyze the likely winning rate of each game for each setting. Our program took 120–320 s to finish 100 simulations.
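The engine itself is the authors' C# program; purely as an outline of the protocol just described (100 match settings, 500 iterations each, averaged winning rates), a harness might be organized as follows, with `play_selfplay_game` standing in for the real engine:

```python
import random
from statistics import mean

def play_selfplay_game(setting: dict, rng: random.Random) -> int:
    """Stand-in for one self-play Scrabble game in the authors' C# engine.
    Returns 1 if the first player wins, 0 otherwise (dummy outcome here)."""
    return int(rng.random() < 0.55)  # placeholder outcome, not real data

def run_experiment(settings: list, iterations: int = 500, seed: int = 1) -> dict:
    """Average first-player winning rate per match setting."""
    rng = random.Random(seed)
    return {s["name"]: mean(play_selfplay_game(s, rng) for _ in range(iterations))
            for s in settings}

# One hundred distinct match settings, e.g., indexed by performance level.
settings = [{"name": f"setting-{i}", "level": i % 10 + 1} for i in range(100)]
winning_rates = run_experiment(settings)
```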
Following a previous work by [61], several experiments were performed to demonstrate and analyze the effect of the dynamic komi approach based on ten different performance levels in the game (ranging from 1 to 10). The first experiment focused on analyzing the impact of dynamic komi on the winning probability in the game of Go. The second experiment focused on analyzing the impact of dynamic komi on the winning probability in four different game phases of Scrabble. Subsequently, game refinement theory was applied to determine the optimal play strategy over various performance levels.

4.1. Analysis of Dynamic Komi in the Game of Go

The motivation for applying dynamic komi to the game of Go is that the use of komi originates in this game. The results of the winning probability for each komi value using both static and dynamic komi in Go are given in Table 2 and Figure 2. The results were collected from an analysis of 2650 games of Go, where the winning probability was computed based on different komi values.
Table 2 presents the experimental results of static komi and dynamic komi incorporated in Go. The compensation (komi) system was introduced into professional Go in Japan as a gradual process of innovation beginning in the 1930s. As professional opening strategy has evolved, the correct value of komi has been re-evaluated over the years. Initially, a static komi (compensation) is given to balance the initiative of playing first. Although 6.5 points was a common komi as of 2007, each country, association, and tournament may set its own specific komi. By far the most common type of komi is a fixed compensation-point system: a fixed number of points, determined by the Go organization or the tournament director, is given to the second player (White) in an even game (without handicaps) to make up for the first-player (Black) advantage.
In another work, dynamic komi was proposed based on the observation of different values (win rates), trained simultaneously for different komi settings to improve game-playing strength [62]. However, that study found no significant difference between dynamic komi and static komi in Go. This implies that the evolution of fairness might depend on the initial game condition: static komi is suitable for games with a fixed initial position, such as Go, while the random initial position in Scrabble calls for the dynamic komi approach to ensure the expected fairness.
Monte Carlo tree search (MCTS) tends to produce varying results when faced with an extreme advantage (or disadvantage) due to poor move selection in computer versions of Go. This variation is caused by the fact that MCTS maximizes the expected win probability, not the score margin. This situation arises in handicap games between players of different performance levels. The handicap consists of the weaker player (always taking Black) placing a given number of stones (between one and nine) on the board before the stronger player (White) makes their first move. Thus, when playing against a beginner, the program can find itself choosing a move on a board with nine Black stones already placed on strategic points.
In practice, a strong human player faced with an extreme disadvantage (a handicap game) tends to play patiently until the opponent makes mistakes, in order to catch up. Similarly, a strong human in an advantageous position will seek to solidify their position, defend the last remaining openings for an attack, and avoid complex sequences with unclear results in order to preserve their score margin. The dynamic komi technique was used as one possible way to tackle this extreme problem in Go. It has long been suggested, notably by non-programmer players in the computer Go community, that the dynamic komi approach should be used to balance the pure win-rate orientation of MCTS.

4.2. Analysis of Dynamic Komi in the Game of Scrabble

Figure 3a presents the komi variation over different game phases in Scrabble. While the expected winning rate remained within the limits of what is considered fair (≈50%), the primary concern is the application of the komi. Specifically, in what game phases should the komi be applied in order to provide the optimal impact on the winning rate or outcome of the game?
Hence, the attention of this study shifts towards the endgame phases. Scrabble endgames are crucial scenarios that can determine success or failure in Scrabble tournaments, and having the right approach may turn a losing player into a winner; a basic greedy strategy for gaining maximum points is insufficient [63]. As such, three variants of endgame strategies were implemented in the Scrabble AI (Figure 3b), where a general winning-percentage approach was applied. (“Q-sticking” is a strategy in which the opponent is left stuck with the “Q” and cannot play it, creating the possibility of gaining 20 points from the unplayed “Q” while leaving no open spot to dump it. The “slow endgame” strategy is used when a player is behind on score and wishes to maximize their point spread by slowly playing off the available tiles while preventing the opponent from playing out.) Based on the experimental results, dynamic komi flexibly overcame the AoI issue while maintaining fairness in each phase of the Scrabble game.
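For illustration, the dispatch among the endgame variants of Figure 3b can be read off the conditions in the text (slow endgame when behind on score; Q-sticking when the opponent may be left holding the “Q”); the predicate names below are our own, not from the paper:

```python
def choose_endgame_strategy(my_score: int, opp_score: int,
                            opp_may_hold_q: bool) -> str:
    """Sketch of selecting among the Figure 3b endgame variants."""
    behind = my_score < opp_score
    if opp_may_hold_q and behind:
        return "Q-sticking + slow endgame"  # the combined variant
    if opp_may_hold_q:
        return "Q-sticking"    # leave the opponent stuck with the unplayed 'Q'
    if behind:
        return "slow endgame"  # play off tiles slowly to maximize point spread
    return "greedy"            # baseline: maximize immediate points

print(choose_endgame_strategy(310, 342, opp_may_hold_q=True))
# -> Q-sticking + slow endgame
```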

4.3. Optimal Play Strategy in Scrabble

Strategies for simple games like tic-tac-toe can be determined just by playing and analyzing a few games. For more complicated games such as chess, the optimal strategy is too difficult even for modern computers to calculate. As such, it is interesting to determine the optimal strategy for Scrabble.
An analysis of the GR value and the strategy changes of different player levels was conducted to determine the optimal strategy for the Scrabble game. According to earlier studies on board games and time-limited games [47,51], a sophisticated game should be situated in the ideal range GR ∈ [0.07, 0.08] (Table 3).
The results in Figure 4 indicate that the change in the endgame strategies affected the progress of the game match. Relative to the ideal GR range (GR ∈ [0.07, 0.08]), all strategies trended proportionately with increasing player performance level. However, only two endgame strategies (Q-sticking and the combination of Q-sticking and slow endgame) provided the appropriate sophistication level to the Scrabble game when the player's performance level was LV ≥ 0.6.
From another perspective, all of the considered strategies were less beneficial for an inexperienced player. In a way, the player must first have better skill-based play (e.g., better vocabulary knowledge). Additionally, an inexperienced player may find the strategies without Q-sticking and slow endgame challenging to master, since they involve very little uncertainty; this may be why their GR values exceed 0.08. On the other hand, Q-sticking and the combination of Q-sticking and slow endgame were beneficial for experienced players. Nevertheless, regardless of the player's performance level, the strategies never resulted in a GR value of less than 0.07. This indicates that Scrabble shares features with board games, where skill is an essential element of play.

4.4. Link between Play Strategy and Fairness

From the entertainment perspective described in the previous section, the combined Q-sticking with slow endgame strategy was the most sophisticated setting across all Scrabble player performance levels. While this strategy may be sophisticated, investigating its expected fairness and justifying its optimality are the main interests of this study.
Figure 5a,b shows the relationship between the different endgame strategies and the proposed dynamic komi, before and after its application. It can be observed that the winning rate of a player at performance level LV ≤ 0.5 did not differ much before or after applying the dynamic komi. However, the application of dynamic komi substantially affected the winning rate of players with performance level LV > 0.5, particularly within the range [0.5, 0.6].
The application of dynamic komi showed that the slow endgame and the combined Q-sticking/slow endgame strategies provided the best fairness (≈50% winning rate). In line with the previous experimental results, the combination of Q-sticking and slow endgame is the optimal endgame strategy for Scrabble, leading to game play that is both entertaining and fair (Figure 4).

4.5. Interpretation of m with Respect to Fairness from Entertainment Perspective

With respect to (10), there are two possibilities (gaining an advantage or a disadvantage over the opponent) at each scoring chance (n = 2 holds), so the risk frequency ratio for Scrabble implies m = 1/2. Figure 6a illustrates that there is an AoI issue in a Scrabble match, where m < 0.2. This implies that only one player has a significant advantage, and the uncertainty of the game outcome is extremely low (m ≈ 0). Adopting dynamic komi shifted the m value to m ≈ 0.5, helping to improve fairness.
Meanwhile, Figure 6b illustrates the a values with respect to (8). It can be observed that Scrabble matches are highly skill-based, as a < 0.01. This indicates that Scrabble matches are less enjoyable for beginners and more advantageous for seasoned or professional players. By applying dynamic komi, the a values varied between game matches (0.01 ≤ a ≤ 0.07), demonstrating seesaw possibilities. This result indicates that dynamic komi made the game more “fair” in terms of both game outcomes and entertainment aspects.

4.6. Interpretation of m with Respect to Fairness from the Physics-in-Mind Perspective

To identify the AoI zone for one-sided games, an understanding of fairness, the risk frequency ratio (m), and the momentum value (p⃗) is important. Table 4 shows various physics-in-mind measures for Scrabble and two board games (shogi and Go), as well as chess and soccer. Since m is an objective measure and p⃗ is a subjective measure, different games may show different AoI zones. Therefore, m = 1/2 acts as a guideline for the relative closeness of a game to process fairness.
While an object is characterized by mass in Newtonian mechanics, an agent is characterized by its intelligence. A stronger (weaker) player has a higher (lower) skill for solving uncertainty in a given game. To determine the mass m in Scrabble, (10) was applied (Figure 6a), demonstrating that a stronger player in the original Scrabble environment with AoI meets m ≈ 0. Application of the dynamic komi made the process of the Scrabble game relatively fair (m ≈ 0.5), in such a way that the score turnover between players was frequent. On the other hand, shogi had the highest m for both grandmaster (GM)-level players and very high-performance AI players (i.e., an AI player like AlphaZero). The AI player in shogi had a greater m than the GM-level player due to the AI player's ability; hence, the game became one-sided (favoring the high-ability AI player), as demonstrated by the lower turnover frequency (lower p⃗).
Meanwhile, Go had a relatively balanced m for both levels (AI and GM), where m ≈ 0.5, in part due to its komi rule. However, the Go game also became one-sided, albeit with a higher score turnover frequency (p⃗ > 0.2) compared to shogi. Therefore, potential AoI was observable not only when m < 0.2 (the case of Scrabble) but also when m > 0.5 (the case of Go and shogi). Similarly, chess (human game-playing data) can be observed to be a highly skill-based game (m > 0.5 and high F), where the game was also one-sided and potentially faced AoI due to a lower p⃗ (in this context, a lower advantage turnover frequency).
From the physics-in-mind perspective, grandmaster-level players in shogi and Go both had a sufficiently large momentum (p⃗ ≥ 0.2). This implies that both games offer enough freedom of movement to enjoy the game. However, the p⃗ value for the shogi AI players was lower than for their GM counterparts, implying that the shogi AI has less freedom of movement, so the game became more one-sided (AoI). Additionally, the E_p value for shogi identifies it as a highly motivating game.
Go had a similar p⃗ value, which may explain the adoption of the komi rule. A high degree of fairness in Go is reflected in its m value and in its roughly similar values across the various physics-in-mind measures, regardless of the player's performance level. This is also consistent with previous findings (see Section 4.1), where the application of dynamic komi to the Go game yielded only a slight improvement in outcome fairness. Its E_p value indicates that the game is less motivating to play.
In chess, having the highest force among the board games considered makes the game demand high ability while being attractive to watch (feats of high-ability play). On the other hand, the low E_p value indicates that chess is less motivating to play (e.g., for beginner players). In the sports case, soccer was observed to have a large m with a sufficient p⃗ value, implying that the game is hard to move (i.e., engagement is maintained and scoring is challenging) and highly skill-based (the highest F overall). Meanwhile, its E_p value was low, indicating that the motivation to keep playing is low.
Hence, the regions m < 0.2 and m > 0.6 are where potential AoI occurs (Figure 7); there, a game falls out of the “fairness zone”. In addition, events with m up to about 0.9 are still considered games or gamified events (benchmarked by soccer, where p⃗ ≈ 0.09). A momentum of p⃗ ≥ 0.09 would be desirable for a (competitive) game, where the expected scoring turnover frequency demonstrates a frequent seesaw effect. People would feel fairness or equality when p⃗ is high, peaking at m = 1/2.

5. Conclusions

This study proposes a new mechanism for improving the fairness of the Scrabble game, referred to as dynamic komi, focusing on three factors: the score rate over different player performance levels, the application of komi in different game phases, and optimal endgame strategies. These three factors are equally essential for maintaining the expected fairness of a game. As a result, dynamic komi provided a much fairer game environment for games with a random initial position, such as Scrabble, and the experimental results demonstrated that it could be a viable solution for all performance levels in each variant. We also evaluated the effectiveness of static komi and found that it can introduce a second-player advantage; however, an evaluation in Go showed that static komi works well for games with a fixed initial position.
We also evaluated the effect of play strategy on the sophistication of Scrabble. Our results demonstrated that the mixed endgame strategy, Q-sticking with slow endgame, was sophisticated enough to make the game more interesting and attractive. Furthermore, the proposed dynamic komi provided a feedback mechanism that maintains the perceived fairness of the game in real time. This mechanism could be a valuable tool for training players and potentially improving engagement in skill learning or acquisition, since it provides timely feedback to players. For instance, Pokemon Unite (https://unite.pokemon.com, accessed on 25 July 2020), a multiplayer online battle arena, shows the performance gap of one team against another at a specific time. Nevertheless, such a mechanism would affect the psychological aspects of the game for human players, which could be explored in a future study.
In addition, game refinement theory was used as an assessment methodology focusing on fairness from two perspectives: entertainment and physics in mind. A possible interpretation of mass, momentum, and energy in the game context was given. A game with a very low or very large mass tended to have potential AoI, as verified in two games: shogi and Scrabble. Our proposed dynamic komi method keeps the game fair by balancing process fairness. From the entertainment perspective, the original Scrabble environment was less enjoyable for beginners and more advantageous for high-performance players. The evolution of komi in Go demonstrated a higher degree of fairness, as reflected in its m value. Another subjective measure, momentum in mind, enhanced the completeness of game refinement theory concerning fairness, as verified through the proposed dynamic komi. Future work could focus on generalizing the concept of fairness with the proposed dynamic komi method and extending it to different game genres and other domains, such as complex systems in society.

Author Contributions

Conceptualization, H.P.P.A. and H.I.; data curation, H.P.P.A.; formal analysis, H.P.P.A., M.N.A.K. and H.I.; funding acquisition, H.I.; investigation, H.P.P.A., M.N.A.K. and H.I.; methodology, H.P.P.A. and H.I.; project administration, H.I.; resources, H.I.; software, H.P.P.A.; supervision, H.I.; validation, H.P.P.A., M.N.A.K. and H.I.; visualization, H.P.P.A., M.N.A.K. and H.I.; writing—original draft, H.P.P.A., M.N.A.K. and H.I.; writing—review and editing, H.P.P.A., M.N.A.K. and H.I. All authors have read and agreed to the published version of the manuscript.

Funding

This research is funded by a grant from the Japan Society for the Promotion of Science, in the framework of the Grant-in-Aid for Challenging Exploratory Research (Grant Number: 19K22893).

Institutional Review Board Statement

Not Applicable.

Informed Consent Statement

Not Applicable.

Data Availability Statement

The data presented in this study are openly available. The summary of the analyzed data is reported in the manuscript, and detailed data used in the study are available upon request.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Goel, A.; Meyerson, A.; Plotkin, S. Combining fairness with throughput: Online routing with multiple objectives. In Proceedings of the Thirty-Second Annual ACM Symposium on Theory of Computing, Portland, OR, USA, 21–23 May 2000; pp. 670–679. [Google Scholar]
  2. Kumar, A.; Kleinberg, J. Fairness measures for resource allocation. In Proceedings of the 41st Annual Symposium on Foundations of Computer Science, Redondo Beach, CA, USA, 12–14 November 2000; pp. 75–85. [Google Scholar]
  3. Amershi, S.; Chickering, M.; Drucker, S.M.; Lee, B.; Simard, P.; Suh, J. Modeltracker: Redesigning performance analysis tools for machine learning. In Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems, Seoul, Korea, 18–23 April 2015; pp. 337–346. [Google Scholar]
  4. Chang, J.C.; Amershi, S.; Kamar, E. Revolt: Collaborative crowdsourcing for labeling machine learning datasets. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems, Denver, CO, USA, 6–11 May 2017; pp. 2334–2346. [Google Scholar]
  5. Dove, G.; Halskov, K.; Forlizzi, J.; Zimmerman, J. UX design innovation: Challenges for working with machine learning as a design material. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems, Denver, CO, USA, 6–11 May 2017; pp. 278–288. [Google Scholar]
  6. Agarwal, A.; Beygelzimer, A.; Dudík, M.; Langford, J.; Wallach, H. A reductions approach to fair classification. In Proceedings of the International Conference on Machine Learning, PMLR, Stockholm, Sweden, 10–15 July 2018; pp. 60–69. [Google Scholar]
  7. Cramer, H.; Garcia-Gathright, J.; Reddy, S.; Springer, A.; Takeo Bouyer, R. Translation, tracks & data: An algorithmic bias effort in practice. In Proceedings of the Extended Abstracts of the 2019 CHI Conference on Human Factors in Computing Systems, Glasgow, UK, 4–9 May 2019; pp. 1–8. [Google Scholar]
  8. Holstein, K.; Wortman Vaughan, J.; Daumé, H., III; Dudik, M.; Wallach, H. Improving fairness in machine learning systems: What do industry practitioners need? In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, Glasgow, UK, 4–9 May 2019; pp. 1–16. [Google Scholar]
  9. Binns, R. Fairness in machine learning: Lessons from political philosophy. In Proceedings of the Conference on Fairness, Accountability and Transparency, PMLR, New York, NY, USA, 23–24 February 2018; pp. 149–159. [Google Scholar]
  10. Chouldechova, A. Fair prediction with disparate impact: A study of bias in recidivism prediction instruments. Big Data 2017, 5, 153–163. [Google Scholar] [CrossRef] [PubMed]
  11. Dwork, C.; Hardt, M.; Pitassi, T.; Reingold, O.; Zemel, R. Fairness through awareness. In Proceedings of the 3rd Innovations in Theoretical Computer Science Conference, Cambridge, MA, USA, 8–10 January 2012; pp. 214–226. [Google Scholar]
  12. Lee, M.K.; Baykal, S. Algorithmic mediation in group decisions: Fairness perceptions of algorithmically mediated vs. discussion-based social division. In Proceedings of the 2017 ACM Conference on Computer Supported Cooperative Work and Social Computing, Portland, OR, USA, 25 February–1 March 2017; pp. 1035–1048. [Google Scholar]
  13. Díaz, M.; Johnson, I.; Lazar, A.; Piper, A.M.; Gergle, D. Addressing age-related bias in sentiment analysis. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, Montreal, QC, Canada, 21–26 April 2018; pp. 1–14. [Google Scholar]
  14. Liu, A.; Reyzin, L.; Ziebart, B. Shift-pessimistic active learning using robust bias-aware prediction. In Proceedings of the AAAI Conference on Artificial Intelligence, Austin, TX, USA, 25–30 January 2015; Volume 29. [Google Scholar]
  15. Huang, X.L.; Ma, X.; Hu, F. Machine learning and intelligent communications. Mob. Netw. Appl. 2018, 23, 68–70. [Google Scholar] [CrossRef] [Green Version]
  16. Aung, H.P.P.; Iida, H. Advantage of Initiative Revisited: A case study using Scrabble AI. In Proceedings of the 2018 International Conference on Advanced Information Technologies (ICAIT), Yangon, Myanmar, 1–2 November 2018; Volume 11. [Google Scholar]
  17. Borge, S. An agon aesthetics of football. Sport Ethics Philos. 2015, 9, 97–123. [Google Scholar] [CrossRef]
  18. Shirata, Y. The evolution of fairness under an assortative matching rule in the ultimatum game. Int. J. Game Theory 2012, 41, 1–21. [Google Scholar] [CrossRef] [Green Version]
  19. Iida, H. On games and fairness. In Proceedings of the 12th Game Programming Workshop, Kanagawa, Japan, 8–10 November 2007; pp. 17–22. [Google Scholar]
  20. Silver, D.; Hubert, T.; Schrittwieser, J.; Antonoglou, I.; Lai, M.; Guez, A.; Lanctot, M.; Sifre, L.; Kumaran, D.; Graepel, T.; et al. A general reinforcement learning algorithm that masters chess, shogi, and Go through self-play. Science 2018, 362, 1140–1144. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  21. Schaeffer, J. One Jump Ahead: Challenging Human Supremacy in Checkers; Springer Science & Business Media: Berlin/Heidelberg, Germany, 2013. [Google Scholar]
  22. Buro, M. How Machines Have Learned to Play Othello. IEEE Intell. Syst. Appl. 1999, 14, 12–14. [Google Scholar]
  23. Sheppard, B. World-championship-caliber Scrabble. Artif. Intell. 2002, 134, 241–275. [Google Scholar] [CrossRef] [Green Version]
  24. Canaan, R.; Salge, C.; Togelius, J.; Nealen, A. Leveling the playing field: Fairness in AI versus human game benchmarks. In Proceedings of the 14th International Conference on the Foundations of Digital Games, San Luis Obispo, CA, USA, 26–30 August 2019; pp. 1–8. [Google Scholar]
  25. Justesen, N.; Debus, M.S.; Risi, S. When Are We Done with Games? In Proceedings of the 2019 IEEE Conference on Games (CoG), London, UK, 20–23 August 2019; pp. 1–8. [Google Scholar]
  26. Van Den Herik, H.J.; Uiterwijk, J.W.; Van Rijswijck, J. Games solved: Now and in the future. Artif. Intell. 2002, 134, 277–311. [Google Scholar] [CrossRef] [Green Version]
  27. Iida, H. Games and Equilibriums; 12th IPSJ-SIG-GI-12; Totsugeki-Tohoku, Science: Kodansha, Japan, 2004. [Google Scholar]
  28. Toys|Toys and Games—Scrabble. 2008. Available online: https://web.archive.org/web/20080424165147/http://www.history.com/minisite.do?content_type=Minisite_Generic&content_type_id=57162&display_order=4&sub_display_order=4&mini_id=57124 (accessed on 13 January 2021).
  29. Staff, G. Oliver Burkeman on Scrabble. 2008. Available online: http://www.theguardian.com/lifeandstyle/2008/jun/28/healthandwellbeing.familyandrelationships (accessed on 13 January 2021).
  30. MindZine—Scrabble—History. 2011. Available online: https://web.archive.org/web/20110608001552/http://www.msoworld.com/mindzine/news/proprietary/scrabble/features/history.html (accessed on 13 January 2021).
  31. Müller, M. Computer go. Artif. Intell. 2002, 134, 145–179. [Google Scholar] [CrossRef] [Green Version]
  32. Gelly, S.; Kocsis, L.; Schoenauer, M.; Sebag, M.; Silver, D.; Szepesvári, C.; Teytaud, O. The grand challenge of computer Go: Monte Carlo tree search and extensions. Commun. ACM 2012, 55, 106–113. [Google Scholar] [CrossRef]
  33. Fotland, D. Knowledge representation in the Many Faces of Go. 1993. Available online: ftp://Basdserver.Ucsf.Edu/Go/Coomp/Mfg.Gz (accessed on 13 January 2021).
  34. Chen, K. The move decision process of Go Intellect. Comput. Go 1990, 14, 9–17. [Google Scholar]
  35. Kocsis, L.; Szepesvári, C. Bandit based monte-carlo planning. In Proceedings of the European Conference on Machine Learning, Berlin, Germany, 18–22 September 2006; Springer: Berlin/Heidelberg, Germany, 2006; pp. 282–293. [Google Scholar]
  36. Tesauro, G.; Galperin, G.R. On-line policy improvement using Monte-Carlo search. In Proceedings of the Advances in Neural Information Processing Systems, Denver, CO, USA, 2–5 December 1997; pp. 1068–1074. [Google Scholar]
  37. Gelly, S.; Silver, D. Combining online and offline knowledge in UCT. In Proceedings of the 24th International Conference on Machine Learning, San Juan, PR, USA, 21–24 March 2007; pp. 273–280. [Google Scholar]
  38. Browne, C.B.; Powley, E.; Whitehouse, D.; Lucas, S.M.; Cowling, P.I.; Rohlfshagen, P.; Tavener, S.; Perez, D.; Samothrakis, S.; Colton, S. A survey of monte carlo tree search methods. IEEE Trans. Comput. Intell. AI Games 2012, 4, 1–43. [Google Scholar] [CrossRef] [Green Version]
  39. Coulom, R. Efficient selectivity and backup operators in Monte-Carlo tree search. In Proceedings of the International Conference on Computers and Games, Turin, Italy, 29–31 May 2006; Springer: Berlin/Heidelberg, Germany, 2006; pp. 72–83. [Google Scholar]
  40. Baudiš, P. Balancing MCTS by dynamically adjusting the komi value. ICGA J. 2011, 34, 131–139. [Google Scholar] [CrossRef]
  41. Silver, D.; Huang, A.; Maddison, C.J.; Guez, A.; Sifre, L.; Van Den Driessche, G.; Schrittwieser, J.; Antonoglou, I.; Panneershelvam, V.; Lanctot, M.; et al. Mastering the game of Go with deep neural networks and tree search. Nature 2016, 529, 484. [Google Scholar] [CrossRef] [PubMed]
  42. Allis, L.V. Searching for Solutions in Games and Artificial Intelligence; Ponsen & Looijen: Wageningen, The Netherlands, 1994. [Google Scholar]
  43. The Renju International Federation Portal—RenjuNet. 2006. Available online: http://www.renju.net/study/rules.php (accessed on 13 January 2021).
  44. Russell, S.J.; Norvig, P. Artificial Intelligence: A Modern Approach; Pearson Education Limited: London, UK, 2016. [Google Scholar]
  45. Brown, J.; O’Laughlin, J. How Quackle Plays Scrabble. 2007. Available online: http://people.csail.mit.edu/jasonkb/quackle/doc/how_quackle_plays_scrabble.html (accessed on 15 January 2020).
  46. Májek, P.; Iida, H. Uncertainty of game outcome. In Proceedings of the 3rd International Conference on Global Research and Education in Intelligent Systems, Budapest, Hungary, 6–9 September 2004; pp. 171–180. [Google Scholar]
  47. Iida, H.; Takahara, K.; Nagashima, J.; Kajihara, Y.; Hashimoto, T. An application of game-refinement theory to Mah Jong. In Proceedings of the International Conference on Entertainment Computing, Eindhoven, The Netherlands, 1–3 September 2004; Springer: Berlin/Heidelberg, Germany, 2004; pp. 333–338. [Google Scholar]
  48. Iida, H.; Takeshita, N.; Yoshimura, J. A metric for entertainment of boardgames: Its implication for evolution of chess variants. In Entertainment Computing; Springer: Berlin/Heidelberg, Germany, 2003; pp. 65–72. [Google Scholar]
  49. Zuo, L.; Xiong, S.; Iida, H. An analysis of dota2 using game refinement measure. In Proceedings of the International Conference on Entertainment Computing, Tsukuba City, Japan, 18–21 September 2017; Springer: Berlin/Heidelberg, Germany, 2017; pp. 270–276. [Google Scholar]
  50. Xiong, S.; Zuo, L.; Iida, H. Quantifying engagement of electronic sports game. Adv. Soc. Behav. Sci. 2014, 5, 37–42. [Google Scholar]
  51. Takeuchi, J.; Ramadan, R.; Iida, H. Game refinement theory and its application to Volleyball. Inf. Process. Soc. Jpn. 2014, 2014, 1–6. [Google Scholar]
  52. Sutiono, A.P.; Ramadan, R.; Jarukasetporn, P.; Takeuchi, J.; Purwarianti, A.; Iida, H. A mathematical Model of Game Refinement and Its Applications to Sports Games. EAI Endorsed Trans. Creat. Technol. 2015, 15, e1. [Google Scholar] [CrossRef] [Green Version]
  53. Zuo, L.; Xiong, S.; Iida, H. An analysis of hotel loyalty program with a focus on the tiers and points system. In Proceedings of the 2017 4th International Conference on Systems and Informatics (ICSAI), Hangzhou, China, 11–13 November 2017; pp. 507–512. [Google Scholar]
  54. Huynh, D.; Zuo, L.; Iida, H. Analyzing gamification of “Duolingo” with focus on its course structure. In Proceedings of the International Conference on Games and Learning Alliance, Utrecht, The Netherlands, 5–7 December 2016; Springer: Berlin/Heidelberg, Germany, 2016; pp. 268–277. [Google Scholar]
  55. Iida, H. Where is a line between work and play? IPSJ Sig. Tech. Rep. 2018, 2018, 1–6. [Google Scholar]
  56. Xiong, S.; Zuo, L.; Iida, H. Possible interpretations for game refinement measure. In Proceedings of the International Conference on Entertainment Computing, Tsukuba City, Japan, 18–21 September 2017; Springer: Berlin/Heidelberg, Germany, 2017; pp. 322–334. [Google Scholar]
  57. Sensei’s Library. Available online: https://senseis.xmp.net/?Komi (accessed on 13 January 2021).
  58. Aung, H.P.P.; Khalid, M.N.A.; Iida, H. Towards a fairness solution of Scrabble with Komi system. In Proceedings of the 2019 International Conference on Advanced Information Technologies (ICAIT), Yangon, Myanmar, 6–7 November 2019; pp. 66–71. [Google Scholar] [CrossRef]
  59. Wespa Tournament Calendar. Available online: http://www.wespa.org/tournaments/index.shtml (accessed on 18 January 2021).
  60. Scrabble: An Entertaining Way to Improve Your Child’s Vocabulary and Spelling Skills. Available online: http://mathandreadinghelp.org/articles/Scrabble/3AAnEntertainingWaytoImproveYourChild/27sVocabularyandSpellingSkills.html (accessed on 18 January 2021).
  61. Aung, H.P.P.; Khalid, M.N.A. Can we save near-dying games? An approach using advantage of initiative and game refinement measures. In Proceedings of the 1st International Conference on Informatics, Engineering, Science and Technology, INCITEST 2019, Bandung, Indonesia, 18 July 2019. [Google Scholar] [CrossRef]
  62. Wu, T.R.; Wu, I.C.; Chen, G.W.; Wei, T.H.; Wu, H.C.; Lai, T.Y.; Lan, L.C. Multilabeled Value Networks for Computer Go. IEEE Trans. Games 2018, 10, 378–389. [Google Scholar] [CrossRef]
  63. Meyers, J. Importance of End Game in Scrabble. 2011. Available online: https://scrabble.wonderhowto.com/how-to/scrabble-challenge-9-can-you-win-losing-game-last-move-0130043/ (accessed on 15 January 2020).
  64. Wu, Y.; Khalid, M.N.A.; Iida, H. Informatical Analysis of Go, Part 1: Evolutionary Changes of Board Size. In Proceedings of the 2020 IEEE Conference on Games (CoG), Osaka, Japan, 24–27 August 2020; pp. 320–327. [Google Scholar]
  65. Iida, H.; Khalid, M.N.A. Using games to study law of motions in mind. IEEE Access 2020, 8, 138701–138709. [Google Scholar]
Figure 1. A rough illustration of the game progression curves for a game with advantage of initiative and for an ideal game.
Figure 2. Komi variation versus winning probability in Go.
Figure 3. The application of dynamic komi to different game phases, and the adopted endgame strategies. (a) Komi variation versus winning probability over different game phases in Scrabble. (b) Four variants of the Scrabble AI's endgame strategies.
Figure 4. Relationship between end-game strategies and the refinement measure.
Figure 5. Illustration of the winning rate obtained using different strategies before and after dynamic komi was applied.
Figure 6. Illustration of the trend in m and a values over a number of matches, comparing advantage of initiative and application of dynamic komi. (a) m dynamics: process fairness with the numbers of advantages and disadvantages, given by $m = 1 - \frac{W}{W+L} = \frac{L}{W+L}$. (b) a dynamics: acceleration, given by $a = \frac{2v}{D}$.
Figure 7. p⃗ as a function of mass (m). The peak at m = 1/2 is important for maintaining fairness in games; there the scoring turnover frequency of a game is the highest.
Table 1. Example Go game statistics with a komi of 5.5 (adopted from [20]).

         No. of Games    Winning Probability
Black    6701            53.15%
White    5906            46.85%
Total    12,607
Table 2. Winning rate analysis based on static komi versus dynamic komi in the game of Go.

Komi Value    Static Komi            Dynamic Komi
              Black      White       Black      White
3.5           53.30%     46.70%      53.10%     46.90%
4.5           55.00%     45.00%      52.90%     47.10%
5.5           53.15%     46.85%      52.50%     47.50%
6.5           50.58%     49.42%      51.40%     48.60%
7.5           49.51%     50.49%      50.15%     49.85%
Table 3. Correlative measures of legacy game refinement (GR) measures (ordered by game refinement value).

Game            G        T         B         D         GR
Xiangqi                            38.00     95        0.065
Soccer          2.64     22.00                         0.073
Basketball      36.38    82.01                         0.073
Chess                              35.00     80.00     0.074
Go                                 250.00    208.00    0.076
Table tennis    54.86    96.47                         0.077
UNO®            0.98     12.68                         0.078
DotA®           68.6     106.20                        0.078
Shogi                              80.00     115       0.078
Badminton       46.34    79.34                         0.086
Scrabble *      2.79     31.54                         0.053
Scrabble **     10.25    39.56                         0.080

* With advantage of initiative; ** with dynamic komi; G/B: scoring options/branching factor; T/D: total scores/game length; GR: game refinement value, where the comfortable zone is [0.07, 0.08].
Table 4. Comparison of various physics-in-mind measures of several games (adopted from [64,65]), including Scrabble.

Game           G        T        B         D         m         p⃗          F           E_p
Chess                            35.00     80.00     0.7813    0.1708     0.0152      0.07474
Go (GM)                          250.00    208.00    0.4000    0.2400     0.0015      0.2880
Go (AI)                          220.00    264.00    0.5830    0.2431     0.0026      0.2028
Shogi (GM)                       80.00     115.00    0.6500    0.2300     0.0073      0.1593
Shogi (AI)                       80.00     204.00    0.8040    0.1575     0.0063      0.0640
Soccer         2.64     22                           0.8800    0.1056     0.0704      0.0253
Scrabble *     2.79     31.54                        0.0091    0.00006    0.000004    0.000001
Scrabble **    10.25    39.56                        0.4824    0.2717     0.0137      0.3061

* With advantage of initiative; ** with dynamic komi; GM: grandmaster data; AI: artificial intelligence data; G/B: scoring options/branching factor; T/D: total scores/game length; m: scoring difficulty (mass); p⃗: momentum in mind given by (12); F: force in mind given by (11); E_p: potential energy in mind given by (13).
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
