Article

The Influence of the Labeling Effect on the Perception of Command Execution Delay in Gaming

Department of Networked Systems and Services, Faculty of Electrical Engineering and Informatics, Budapest University of Technology and Economics, Műegyetem rkp. 3, 1111 Budapest, Hungary
*
Author to whom correspondence should be addressed.
Multimodal Technol. Interact. 2025, 9(5), 47; https://doi.org/10.3390/mti9050047
Submission received: 9 March 2025 / Revised: 1 May 2025 / Accepted: 12 May 2025 / Published: 15 May 2025

Abstract

Gaming is one of the largest industries of digital entertainment. Modern gaming software may be susceptible to command execution delay, which may be caused by various factors, such as insufficient rendering capabilities or limited network resources. At the time of writing this paper, advances utilized in gaming are often accompanied by brief descriptions when they are communicated to users. While such descriptions may be compressed into a couple of words, even a single word may impact user experience. Due to the cognitive bias induced by the labeling effect, the impact of such a word may actually be more significant than what the user genuinely perceives. In this paper, we investigate the influence of the labeling effect on the perception of command execution delay in gaming. We carried out a series of subjective tests to measure how the word “optimized” affects gaming experience. The test variables of our experiment were the added delay between command and execution, the speed of the game, and the label that was assigned to gaming sequences. The test participants were tasked to directly compare gaming sequences with the different labels assigned: “optimized” and “not optimized”. In every comparison, both sequences had the same objective characteristics; only the label differed. The experiment was conducted on single-input and continuous-input computer games that we developed for this research. The obtained results indicate that for both of these input types, the labeling effect has a statistically significant impact on perceived delay. Overall, more than 70% of the subjective ratings were affected by the assigned labels. Moreover, there is a strong correlation between the amount of delay and the effect of cognitive bias. The speed of the game also affected the obtained results, yet statistically significant differences were only measured between the slowest and the fastest gameplay.

1. Introduction

In real-time interactive systems, particularly in video games, the perception of responsiveness is a crucial factor influencing the Quality of Experience (QoE). Any noticeable delay in command execution (i.e., the delay between issuing a specific command and perceiving its execution) can disrupt immersion and negatively impact gameplay satisfaction [1].
Latency in gaming environments—whether due to network conditions or hardware limitations—remains a problem when it comes to delivering an optimal gaming experience. While advancements in related technologies minimize the factual, objective extent of latency, human perception does not depend solely on the measured duration of delay, but also on psychological and cognitive factors, which should therefore be taken into consideration. One such cognitive factor influencing delay perception is the labeling effect—a phenomenon which may alter a person’s subjective experience based on descriptive labels [2].
Despite the growing number of studies on latency perception in human–computer interaction (HCI), the scientific literature is limited when it comes to addressing the impact of the labeling effect on the perception of command execution delay. Multiple studies focus on technical optimization, but fewer have examined the potential for cognitive mitigation that can enhance the perception of responsiveness. If the labeling effect can be used systematically to alter the perception of delay, developers can design user interfaces that reduce frustration whenever latency is inevitable.
Furthermore, it should be stated that often a single word is sufficient to serve as a label for cognitive bias [3,4,5]. In the context of gaming, optimization is becoming more and more pivotal on multiple fronts. For instance, performance optimization in general is essential for making the most out of the user’s gaming system, which is typically either a console, a handheld device, or a personal computer (PC). As it is common that video games are updated and patched regularly, the communication of the changes frequently emphasizes the fact of optimization (e.g., in the patch notes). Optimization is also of key importance when it comes to the gameplay itself. For example, at the time of writing this paper, online communities and video-sharing platforms propagate the significance of optimizing gameplay strategies (e.g., “build order” in real-time and turn-based strategy games) [6,7], which is not limited to competitive gaming. This form of optimization is also known as metagaming, often referred to by gaming communities simply as “meta”.
On the one hand, the scientific literature on the impact of cognitive bias on the perception of delay—and subjectively perceived delay in general—is lacking, particularly for gaming. On the other hand, the optimization-related terms are more relevant than ever in the context of video games.
In this paper, we address the impact of the labeling effect on the perceived command execution delay in video games. We conducted a subjective study in which test participants directly compared identical game sequences, but in each pair, one was labeled as “optimized”, while the other one was “not optimized”. The hypothesis was that such a single word may significantly affect the perception of command execution delay. A single-input and a continuous-input video game were developed to test the hypothesis. The examined game sequences differed in terms of added delay and game speed. The label order was also taken into consideration, as half of the test participants compared the “optimized” sequence to the “not optimized” one, and the other half compared the “not optimized” sequence to the “optimized” one.
The detailed hypothesis statements (i.e., research questions) of our work are the following:
(i) Do the assigned labels significantly impact the obtained results? Is there a statistically significant difference between the theoretical ratings based on the lack of objective differences (i.e., the sequence pairs in all of the comparisons are identical) and the obtained results that are influenced by the labels “optimized” and “not optimized”?
(ii) Does the order of the labels significantly impact the obtained results? Is there a statistically significant difference between the sequence pairs where “optimized” is the first label and “not optimized” is the second and the pairs with the opposite order?
(iii) Does the type of input significantly impact the influence of the labels? Is there a statistically significant difference between the results obtained for single-input and continuous-input games?
(iv) Does the game difficulty (i.e., game speed) significantly impact the influence of the labels? Are there statistically significant differences between the results obtained for easy, medium, and hard game instances?
(v) Does the added command execution delay significantly impact the influence of the labels? Are there statistically significant differences between the game sequences with different extents of added delay?
The null hypothesis for each of the above is that any measurable difference is due to random error. To the best of the authors’ knowledge, these hypotheses have not yet been addressed by the scientific literature, and thus, they are deemed novel within this field of research.
The remainder of this paper is structured as follows. The related research efforts are reviewed in Section 2. The experimental setup of our work is detailed in Section 3. The obtained results are presented in Section 4. The paper is concluded in Section 5.

2. Related Work

The published scientific achievements relevant to our study are discussed in this section. The research efforts related to command execution delay and to cognitive bias are analyzed separately.

2.1. Command Execution Delay

The classic study of Shneiderman [8] concludes that fast response times (i.e., less than a single second) maintain user engagement and minimize the level of frustration, and that any delay greater than that may disrupt the cognitive flow. Furthermore, delays over 10 s may result in task abandonment. However, note that these results were published over 40 years ago, and since then, humanity’s interaction with computers and devices has changed drastically. As a more recent take on the investigated topic, Raaen and Eg [9] found that certain test participants not only noticed a command execution delay of 66 ms, but also became frustrated by it. On the other hand, greater delays may be tolerated if certain technologies are involved, and the user is aware of the fact. For instance, in the case of space communication (i.e., communication via satellites), Wang [10] found that even 10 s may actually be tolerated.
The work of Metzger et al. [11] proposes a lag model for video games. The model approaches lag as the sum of network delay, processing delay, and playout delay (i.e., command execution delay). Additional factors that contribute to the model include command rate, server tick rate (in the case of online games), and codec delay (in the case of cloud gaming). The survey of Pantel and Wolf [12] addresses the delay in real-time multiplayer games. One of the conclusions of the research is that the tolerance towards delay in such games is not only affected by the extent of network delay, but by the genre of the video game as well. The same conclusion was drawn by Schmidt et al. [13].
The study of Claypool and Claypool [14] points out that during gameplay, not every phase of the game is equally sensitive to latency. The so-called “play phase”—where real-time player interaction occurs—is the most vulnerable to latency in online games. However, a carefully designed game can still provide tolerable gameplay even if the play phase is affected by high network latency. For example, Fritsch et al. [15] confirmed this in their study on the game Everquest2.
Among the research efforts on cloud gaming, Chen et al. [16] measured the command execution delay of PC gaming on different cloud services. The obtained results were between 135 ms and 500 ms. A more recent study [17] measured the latency in mobile cloud gaming. Depending on the type of network connection—which was either Wi-Fi or long-term evolution (LTE)—the latency varied between 70 ms and 177 ms.
In the context of virtual reality (VR), Allison et al. [18] addressed the most relevant type of delay for such a technology, which is the delay between head movement and the corresponding update of the visualization, and measured 200 ms to be the threshold of delay tolerance. Huang et al. [19] investigated the perception of weight in VR. The authors conducted an experiment in which test participants had to perform a balancing task, with various extents of delay added to the visualization. The results indicate that when the delay exceeds 25 ms, the task becomes virtually impossible. These works highlight the difference between general tolerance and task performance.
There are numerous research efforts that examine the impact of influence factors on QoE in gaming scenarios. Multiple studies [20,21,22] suggest that age, gender, personality, previous gaming experience, and user performance are some of the main factors that contribute to one’s perception of QoE. To measure the impact of these factors, Moller et al. [23] designed the Gaming Experience Questionnaire (GEQ) for measuring seven dimensions of the players’ experience, while Depping and Mandryk [24] introduced the Game-Specific Attributing Questionnaire (GSAQ) to measure the correlation between the players’ characteristics (e.g., emotion, motivation, and behavior) and QoE. Among the mentioned factors, user performance—which is of particular interest to game designers and developers—is heavily dependent on command execution delay. For networked games, a large number of studies on the impact of delay on a game’s difficulty and the player’s performance [5,25,26,27,28] were conducted. The results of these studies converge and indicate that games with delays of around 100 ms are acceptable; when reaching 200 ms, games are playable but annoying; and above 500 ms, they are reported to be extremely difficult and scored poorly in terms of QoE. Chanel et al. [29] concluded that performance is linearly affected by game difficulty. When the game is too easy, players are likely to feel bored. On the other hand, when the game is too challenging, players are inclined to feel frustrated. It is important to highlight that both of these cases result in the degradation of QoE. Numerous studies [30,31,32,33] proposed Dynamic Difficulty Adjustment (DDA) models to reduce the impact of the player’s performance on QoE. Sabet et al. [34,35] introduced an improved DDA model for cloud-based gaming which operates in real time and does not require the tracking of player performance. By modifying in-game mechanics, the model could mitigate the negative effect of delay on QoE [36]. Lee and Chang [37] published the Advance Lag Compensation (ALC) algorithm to improve the QoE of first-person shooter games. Liu et al. [38] proposed that another approach to improving QoE is player self-evaluation during gameplay. The results show that when players suffered from delay and poor performance, as long as their self-evaluated performance was fulfilled, the game still received positive QoE ratings. Only when these conditions were not met did the games become laggy and annoying to play. Delay-related disturbances to one’s QoE may also be due to the lack of synchronization between visuals and audio. Studies [39,40] suggest that visual–audio delays in video games can reach up to 250 ms, and the characteristics of the phenomenon and their perception depend on the content.
Most of the studies on the topic of command execution delay focus only on network delay, and local delay is often neglected. However, local delay is evidently an important component of the delay experienced by the player. Raaen and Petlund [41] measured local system delays and found that they can reach up to 100 ms—making them comparable to network delay. The study of Long and Gutwin [42] points out that local delay can vary between 50 ms and 500 ms, and proposes a predictive model called “Time to React”. The aim of the model is to help developers in assessing and mitigating the effect of local lag.
Claypool et al. [43] investigated players’ performance with a delayed mouse. The test participants of the study were asked to select a fast-moving target with a delayed mouse at different speeds and delays. The results show that user selection times and error rates increase linearly with local delay, which led to a significant decrease in user performance (dropping to approximately 25%). In a different work, Claypool et al. [44] measured QoE using the same methodology, and came to the conclusion that user experience depends mainly on delay and not on the speed of the target. Liu et al. [4] addressed the effect of local delay on the players’ performance in the first-person shooter game, Counter Strike: Global Offensive. Test participants were asked to fire different weapons at targets under different conditions of delay. The results indicate that in such first-person shooter games, players are sensitive to local delay. It was shown that players’ performance increased by 20% when latency was reduced from 125 ms to 25 ms. This conclusion is also supported by Ivkovic et al. [45]. The authors measured the local delay using a high-speed camera, frame-by-frame, from the moment the input was triggered to the corresponding screen change. The test participants took part in 3D target tracking and target acquisition tasks. The study reveals that real-world local latency ranges from 23 ms to 243 ms, which causes substantial degradation in the performance of first-person shooter games.
The scientific literature reports that first-person shooter video games are generally more sensitive to delay than games of other genres. The work of Quax et al. [46] suggests that action games and racing games—which require intense interactions—are more vulnerable to delay than puzzle games and strategy games. The results align with findings in cloud-based gaming [47], traditional online gaming [48], and mobile gaming [49]. Claypool [50] explored the impact of latency in real-time strategy games. The study indicates that such games focus more on long-term strategy rather than on real-time reaction, reducing the impact of delay on the player’s performance. Claypool’s finding was supported by Pedri and Hesketh [51]. The authors concluded that a fast-interaction computer task requires more attention, leaving less cognitive capacity to encode temporal information, resulting in the perception that time passes more quickly during the task. Therefore, occurrences of longer delays are more likely to be tolerated.
Sabet et al. [34] conducted an experiment in which each test participant played different games and a local delay was added periodically. The three games—developed by the authors—included Shooting Range (aim and shoot at a target via a mouse), T-Rex (jump over 2D obstacles on a platform), and Rocket Escape (move left or right to avoid randomly spawned obstacles). The authors came to the conclusion that players can compensate for the effect of local delay by predicting the next move of the game. Particularly in the case of Shooting Range and T-Rex, as game moves followed certain rules which could be predicted, adaptation was feasible for the players, leading to positive QoE ratings. On the other hand, Rocket Escape spawned obstacles randomly—rendering adaptation impossible—and thus QoE scores were negatively impacted. Normoyle et al. [1] found that players adapt well to constant delay even at 300 ms, but a small jitter (i.e., the variation of delay) can be detected by all users and causes annoyance.
These findings are supported by the work of Kohrs et al. [52], which investigated functional magnetic resonance imaging (fMRI) scans of test participants experiencing delay while completing a locally delayed computer task. The results indicate that frequent delays allow the brain to adapt and return neural activity to a baseline, while an unexpected delay distracts users from their task, increasing their cognitive load and making delays more noticeable.
Command execution delay may impact both single-player and multiplayer video games. However, delay in multiplayer games—which typically involve competition and/or cooperation—is more likely to degrade the QoE of the players [53] and, as such, may result in a sense of unfairness, as well as frustration towards other players [54]. Therefore, subjective ratings in the case of multiplayer gaming are more susceptible to degradation [55]. In addition, Siu et al. [56] concluded that single-player games are perceived as less challenging than multiplayer games, even when the task is completely identical. Among online multiplayer games, fighting games (FTG) are particularly sensitive to delay, as a lag of just a few frames may change the outcome of a match. Thus, the netcode of such games is carefully developed to provide compensation for delay [57,58,59].
The impact of local delay from other input devices was also studied. Claypool [60] measured user performance in a selection task with a delayed thumbstick controller. The results show that the time the user needs to complete the task grows exponentially at higher delays, and that higher player skill reduces the impact of delay on QoE. This confirms that player ability is a key factor in delay tolerance. The same conclusion was drawn by Long and Gutwin [61] for pointing game devices (e.g., drawing tablets).
Table 1 lists research efforts that address the perceptual and/or tolerance thresholds of delay in various gaming genres. Perceptual threshold in this context refers to the just noticeable difference (JND) [62]. According to the scientific literature, the perceptual threshold for first-person shooter (FPS) games, FTGs, multiplayer online battle arena (MOBA) games, puzzle platform games, and racing games is typically between 50 ms and 75 ms. The corresponding threshold for real-time strategy (RTS) games is measured to be 100 ms, but it can be up to 170 ms for survival role-playing games (RPGs), and even 500 ms for team sports games. Note that the latter was tested on an American football game in which gameplay was somewhat semi-automated, and thus, required less reactivity from the player. Tolerance thresholds may vary between 200 ms and 1250 ms; the latter value was measured for a massively multiplayer online role-playing game (MMORPG). The game genres covered by Table 1 rely on the reactivity of the player, at least to some extent. Other genres, such as simple puzzle games and certain forms of turn-based games (e.g., turn-based strategy games), may tolerate even greater delays, as the reactivity of the player does not directly influence the gameplay. One should also consider that in an online multiplayer scenario, the game types with low perceptual and tolerance thresholds are the ones most likely to be sensitive to network latency.
Table 1. Perceptual and tolerance thresholds of delay for each game genre in the scientific literature.
Scientific Work | Game Genre | Perceptual Threshold | Tolerance Threshold
Beigbeder et al. [28] | FPS | 75 ms | 200 ms
Quax et al. [63] | FPS | 60 ms | —
Xu et al. [64] | FTG | 67 ms | —
Fritsch et al. [15] | MMORPG | — | 1250 ms
Tan et al. [65] | MOBA | 50 ms | 200 ms
Beznosyk et al. [48] | Puzzle platform | 60 ms | 200 ms
Pantel et al. [12] | Racing | 50 ms | 500 ms
Claypool [50] | RTS | 100 ms | 500–800 ms
Hohlfeld et al. [66] | Survival RPG | 170 ms | —
Nichols and Claypool [67] | Team sports | — | 500 ms
Numerous research efforts highlight the impact of command execution delay on the experience of the players. The results indicate that constant extents of delay (i.e., without significant variation) can be tolerated up to 200 ms, as players may adapt to such gameplay; however, delays exceeding 200 ms often degrade the performance of the players and the overall QoE. Additionally, jitter, even at lower levels, is typically perceivable and commonly causes frustration. In our work, we address command execution delay at constant levels, without any extent of variation.

2.2. Cognitive Bias

The study of Wilke and Mata [2] concludes that one of the most common forms of cognitive bias is confirmation bias. According to the classic studies of Wason [68,69] and of Klayman [70], confirmation bias happens when people actively look for confirmation of their ideas, even when disconfirming evidence is available. The related reasoning errors arise from the interplay of cognitive limitations and motivational factors, rather than from a lack of intelligence or knowledge.
Darley and Gross [71] studied the correlation between confirmation bias and the labeling effect. The authors concluded that people often gather evidence and information before making their judgment. The issue arises when their preconceptions are based on invalid and biased evidence. Jones and Sugden [72] conducted an experiment studying this behavior. The results show that people are willing to pay for evidence that supports their earlier beliefs, even when that evidence has no information value.
The labeling effect changes how human beings perceive their environment. Sakai et al. [73] conducted an experiment to explore the impact of visual images on the sense of smell. Test participants were asked to evaluate the intensity of an odor while being shown a color. The intensity of the odors perceived by the test participants significantly increased when the shown color matched their expectations (e.g., a dark brown color and the odor of Coca-Cola). The results indicate the influence of confirmation bias, based on the preconceptions of the test participants. The color was used as a label to strengthen the test participants’ belief that the intensity of the odor was stronger, even though it was not. Bentler et al. [74] examined the impact of the labeling effect on human hearing. The test participants were asked to use a pair of identical hearing aids, which were labeled either as “conventional” or “digital”. The results indicate that most of the test participants were biased toward choosing the “digital” hearing aids. Some of the test participants genuinely experienced better performance from the “digital” hearing aids. Iglesias [75] conducted a survey on the preconception of a retail bank service. The results show that customers’ preconceptions did not directly affect the overall evaluation of the service encounter. However, they did influence the perception of quality dimensions.
The labeling effect influences the way people process information. The study of Gao et al. [76] explores the impact of stance labels on readers’ selection of news articles. The study concludes that human beings are not neutral information processors and that their existing beliefs, biases, and emotions affect how they interact with the news. De Graaf et al. [77] studied how experts search for and process data in documents. In the experiment, professionals were asked to answer a designed set of questions based on the provided documents. The results show that professionals tend to use their prior knowledge to find answers to questions that are not related to the document. Even when information is available, in some cases, it is ignored, which leads to an incomplete search.
The labeling effect may also have a significant impact on how people consume products. Chovanova et al. [78] concluded that product brands create biases in customers’ decision-making processes, regardless of age and gender. The study by Gao et al. [76] shows that consumers are willing to pay more for beef if more information on its attributes (e.g., the origin of the beef) is provided. The attribute information of a product is important and can be the decisive factor in the perceived quality of a product [79]. However, too much attribute information may be rather counter-productive. The study of Fitzgerald et al. [80] concludes that the overuse of labels and contradictory information confuses customers, degrading their ability to make decisions, and thus causing negative consequences for business. The study of Christandl et al. [81] shows that customers expect significant price hikes—after value-added tax (VAT) increases—even for products not affected by the VAT change. The expected price increases exceeded the actual changes, confirming a bias in price perception.
In the context of software and video game development, multiple studies were conducted on the impact of cognitive bias. Studies [82,83,84] conclude that cognitive bias in software development mainly occurs during the testing and debugging phases. During the testing phase, developers tend to be influenced by the expected behavior of the software. For example, as shown by the work of Calikli and Bener [85], the performance of the software was actively tested against an incomplete specification, confirming the completeness of the software even when it was not complete. This phenomenon is also known as “positive testing” [86,87,88]. Rainer and Beecham [89] conducted a survey using evidence-based software engineering (EBSE) to evaluate different requirement management tools (RMTs). The results show that the test participants tended to recommend the RMT they were familiar with over better tools supported by evidence. Cognitive bias also causes errors in the estimation of the effort required to develop software [90]. This is due to confirmation bias in selecting what is perceived as relevant project cost information.
Cognitive bias is also an important factor in the perception of video quality and virtual reality. Kara et al. [91,92] explored the impact of the labeling effect on the perceived quality of high-definition (HD) and ultra-high-definition (UHD) video streaming. The test participants were asked to compare videos with labels that were either true or misleading. The study suggests that the perception of visual quality is significantly influenced by labeling and expectations rather than actual resolution differences. In a more recent study, Kara et al. [93] examined the labeling effect in high dynamic range (HDR) video streaming. The study shows that the positive label (“Premium HDR”) assigned to visual stimuli increased the ratings of quality aspects such as luminance, color, and image quality. However, as a preconception of the “cost” of such an improvement, the frame rate was perceived to be worse. Geyer et al. [94] addressed the perceived quality of so-called “rugged” smartphones (i.e., highly durable smartphones) in a series of studies. One experiment compared a rugged phone to two conventional smartphones, while two other tests exhibited the visual stimuli on a single device—either a smartphone or a computer display. The obtained results were compared to ground truth data, excluding the triggers of cognitive bias, and statistically significant differences were found in every instance. Generally, the rugged smartphone was perceived to have worse capabilities as a price for its ruggedness—similar to the case of “Premium HDR” frame rate. Bouchard et al. [95] conducted a study on the impact of preconceptions on the sense of presence. In the experiment, test participants were misled into believing a virtual environment was real. Additionally, there were indicators to suggest that what they perceived was happening in real time. The results show an increase in the sense of presence of the test participants.
In summary, a great number of studies address the influence of cognitive bias on human perception and on decision-making processes. Confirmation bias generally occurs when individuals seek information that supports existing beliefs and preconceptions, ignoring contradictory evidence. The labeling effect may amplify this bias by altering perception through expectations induced by labels. Studies show that this phenomenon occurs on a daily basis in various domains of everyday life. This is particularly applicable to consumer behavior, which fundamentally builds on the perception of quality. In our work, quality corresponds to the responsiveness of video games (i.e., command execution delay). We investigated the influence of the labeling effect on the perception of this performance indicator. The methodology of the subjective tests—detailed in the next section—builds on the experimental setup of earlier works [92,93,94].

3. Experimental Setup

In this section, the experimental setup of our work is introduced. The chosen methodology is explained in terms of test environment (i.e., where the subjective study took place), apparatus and viewing conditions (i.e., the display used to visualize the test stimuli and the positioning of the test participants with respect to the display), test variables (i.e., the characteristics of the tests that changed between the different stimuli), test conditions (i.e., the selected combinations of the aforementioned characteristics), software (i.e., the games developed to test the hypothesis), and test protocol (i.e., how the subjective study was carried out with regards to data collection and stimulus order).

3.1. Test Environment

The subjective study was carried out in a laboratory environment. The test participants were shielded from external audiovisual distractions. During a test, only a single test participant and the test conductor were present within the test environment.

3.2. Apparatus and Viewing Conditions

The tests were carried out on a laptop with a screen size of 15.6 inches. The resolution of the display was set to 1920 × 1080 pixels. Based on the relevant recommendations [96], the viewing distance was set to 3.2 H (i.e., 3.2 times the height of the screen); as the height of the screen was 23.8 cm, the default viewing distance (i.e., the distance between the screen and the eyes of the test participant) was approximately 75 cm. However, as the study did not address any aspect of visualization quality, it was possible for the test participants to deviate from this value.
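For clarity, the default distance follows directly from the screen height and the recommended multiplier:

```latex
d = 3.2 \cdot H = 3.2 \times 23.8\,\mathrm{cm} \approx 76\,\mathrm{cm} \approx 75\,\mathrm{cm}
```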

3.3. Test Variables

The subjective study was designed to accommodate three test variables. The first one was the label assigned to the stimulus, in order to potentially induce the phenomenon of cognitive bias. The label was either “optimized” or “not optimized”. The second test variable was the extent of the added command execution delay, which refers to the time period between the user input (i.e., the test participant presses a button on the keyboard of the laptop) and its execution within the software (i.e., the corresponding action is carried out). In accordance with the scientific literature, the chosen values were 0 ms (i.e., no delay), 50 ms, 150 ms, and 250 ms. As shown in Table 1, since 50 ms is the JND for the most reactive games, we chose this value as the smallest non-zero added delay. The other two values were derived from the tolerance thresholds of such games (i.e., 200 ms minus and plus the aforementioned JND). Furthermore, the work of Ivkovic et al. [45] indicates that real-world local latency is typically up to approximately 250 ms. The third test variable was the difficulty setting of the game (i.e., the speed of game progression and events). Three difficulties were distinguished: easy, medium, and hard. These difficulties are detailed in the description of the software (Section 3.4).
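To illustrate how such a delay can be injected between input and execution, the following Python sketch buffers timestamped commands and releases them only after the configured delay has elapsed. It is a minimal conceptual sketch of the technique, not the Unity implementation used in the study.

```python
import time
from collections import deque

class DelayedCommandQueue:
    """Buffers commands and releases each one only after a fixed delay."""

    def __init__(self, delay_ms):
        self.delay = delay_ms / 1000.0
        self.pending = deque()  # (timestamp, command) pairs

    def push(self, command):
        # Record the moment the player issued the command.
        self.pending.append((time.monotonic(), command))

    def pop_ready(self):
        # Release every command whose delay period has expired.
        ready = []
        now = time.monotonic()
        while self.pending and now - self.pending[0][0] >= self.delay:
            ready.append(self.pending.popleft()[1])
        return ready

# Usage inside a game loop (hypothetical): push("LEFT") on key press,
# then execute every command returned by pop_ready() each frame.
queue = DelayedCommandQueue(delay_ms=150)
```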

3.4. Software

The study was carried out on two types of video games in order to address the research questions: a single-input and a continuous-input game. Both of these games were developed by the authors, using the Unity Real-Time Development Platform [97]. The test variables (i.e., the labels, the added delays, and the difficulty settings) were also implemented using Unity.
The single-input video game was a 2D rhythm game. The task of the test participant as the player was to press the correct arrow key when the arrow in the game—moving vertically from top to bottom—reached the box at the bottom of the screen. The more the arrow aligned with the box at the time of the input, the more points were awarded. If a successful input was provided, the arrow disappeared, meaning that it was not possible for the test participant to circumvent the task by repeatedly pressing the arrow key. This also nullified the potential for providing a continuous input. The difficulty setting of the game was implemented through the vertical movement speed of the arrows. This means that at harder difficulties, an arrow spent less time in the area covered by the box at the bottom. For the easy, medium, and hard difficulties, these time periods were 1500 ms, 750 ms, and 500 ms, respectively. However, each gaming sequence had a fixed 60-second duration. Therefore, harder difficulties involved more arrows. This decision was based on the fact that in this game, poor gaming performance (i.e., the player missed the timing of the input) did not shorten the sequence, as performance was reflected in points. Additionally, it should be noted that at an added delay of 250 ms, the game at hard difficulty was excessively challenging, since the arrow overlapped the bounding box for a total of 500 ms. Yet, as the extent of the delay was constant, it was actually possible to adapt to it by pressing the arrow key slightly sooner. A screenshot of the single-input video game is shown in Figure 1.
The continuous-input video game was a 3D driving game with two input keys: the left and right arrow keys, used to steer a vehicle that moves forward at a fixed pace. In this context, continuous input means that keeping the arrow key pressed continuously steers the vehicle in the given direction. The road on which the vehicle moved forward consisted of three portions: the left, the middle, and the right portion of the road. At uniform distances, two of the three portions were blocked by objects. The task of the test participant was to steer the vehicle in order to avoid collision. In each sequence, the player had five “lives”, meaning that the game was over at five collisions. Additionally, regardless of “lives”, the game was also over if the player over-steered the vehicle, making the vehicle face the wrong direction (i.e., the vehicle was rotated by more than 90 degrees either to the left or to the right). The difficulty settings differed in the speed of the vehicle; harder difficulty meant greater constant vehicle velocity. Since each gaming sequence had the same road length, the three game difficulties—easy, medium, and hard—lasted 20 s, 40 s, and 60 s, respectively. Regarding the input provided, the entire input was temporally shifted via the added delay. For instance, at 250 ms of delay, the steering continued for 250 ms after the arrow key was released. This is illustrated with a 500 ms continuous input in Figure 2. Adaptation to this phenomenon (i.e., releasing the key sooner) provided an extra layer of difficulty in addition to that of the single-input video game. A screenshot of the continuous-input video game is shown in Figure 3.
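The temporal shift of the continuous input can be expressed as delaying a sampled “key held” signal; the sketch below (plain Python, assuming a fixed frame interval) shifts the entire trace, so a key released at time t keeps steering until t plus the added delay, as in the example of Figure 2.

```python
def delay_input_trace(key_held, delay_ms, frame_ms=10):
    """Shift a sampled boolean 'key held' trace by a fixed delay.

    key_held: one boolean sample per frame (True while the key is pressed).
    Returns a trace of the same length, padded with False at the start.
    """
    shift = delay_ms // frame_ms
    return [False] * shift + key_held[:len(key_held) - shift]

# A 500 ms press (50 frames at 10 ms) delayed by 250 ms starts and
# ends 25 frames later, so steering continues after the key is released.
trace = [True] * 50 + [False] * 50
delayed = delay_input_trace(trace, delay_ms=250)
```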

3.5. Test Conditions

The approach of our work was to test every single combination. As there were two labels, four added delays, three difficulties, and two games, the total number of test stimuli was 48 (i.e., 24 for each game).
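This full-factorial design can be enumerated programmatically; the snippet below is a small Python sketch using the variable values reported above.

```python
from itertools import product

labels = ["optimized", "not optimized"]
delays_ms = [0, 50, 150, 250]
difficulties = ["easy", "medium", "hard"]
games = ["single-input", "continuous-input"]

# Every combination of the test variables yields one test stimulus.
stimuli = list(product(labels, delays_ms, difficulties, games))
assert len(stimuli) == 48  # 24 stimuli per game
```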

3.6. Test Protocol

The test was a series of direct comparisons. In each segment of the test, the test participant played the exact same game sequence twice, and compared the latter to the former. The only difference was the label: either the first one was labeled “optimized” and the second one was labeled “not optimized”, or vice versa. Prior to every single sequence, the label was shown on the screen. Additionally, these labels were integrated into the game sequences as well, as shown in Figure 1 and Figure 3. Labeling was consistent for each test participant, which means that either the “optimized” stimulus was always the first one, or always the second one throughout the entire test. The first order applied to half of the test participants, and the other order to the other half of the test participants. However, no such division was applied to input type, as every single test participant played both the single-input and the continuous-input games. In order to compare the two test stimuli within the scope of this double-stimulus method, a standardized seven-point comparison scale [96] was used: “Much worse” (−3), “Worse” (−2), “Slightly worse” (−1), “The same” (0), “Slightly better” (+1), “Better” (+2), and “Much better” (+3).
The temporal structure of the subjective tests is shown in Figure 4. Every single comparison consisted of five elements. First, the label of the first stimulus (i.e., stimulus A) was shown for 5 s, essentially a dark-gray screen exhibiting either the bold white text “optimized” or “not optimized” in the middle. This was followed by stimulus A itself. As detailed earlier, for single-input game sequences, the duration was always 60 s, and for continuous-input sequences, the game was either 20, 40, or 60 s long, based on the difficulty. These duration values assume that the game was not ended earlier due to gameplay; if that happened, then the sequence was evidently shorter. These two elements were then repeated for stimulus B, followed by a 30 s assessment period, during which the test participant had to select one of the seven rating options. If a test participant fully completed every single game sequence (i.e., no sequence was aborted), then the total duration of the test was approximately 1 h and 52 min. Due to this extended test duration, the possibility of a brief break was offered to the test participants. Prior to the test itself, the test participants were familiarized with the games at various difficulties.

4. Results

The study was completed by a total of 60 test participants. For half of them, the “optimized” stimulus was to be compared to the “not optimized” stimulus, and vice versa for the other half. Of the 60 test participants, 39 were male and 21 were female. The age of the test participants ranged from 20 to 28, with an average age of 23. According to The Entertainment Software Association [98], there are more than 190 million people in the age range from 5 to 90 playing video games, 75% of whom are Gen Z (i.e., 13 to 28 years old at the time of writing this paper). In five major European markets [99], there are an estimated 124 million people playing video games, and 78% of them are in the age range from 15 to 24. Therefore, having test participants in the range from 20 to 28 is a reasonable choice that produces representative results. Regarding the collected results and the demographic data, as age and gender were not investigated factors in the study, they were collected in separate datasets (i.e., the comparison scores were not matched with the demographic information). The experiment (i.e., the series of subjective tests that involved a total of 60 test participants) lasted over 43 days.
A total of 1440 results were collected, as there were four added delays, three difficulties, and two games, all the possible combinations of which were assessed by 60 test participants. Each test participant provided 24 comparison scores. The obtained results are separately investigated for all the test variable values, as well as the label order. In the analysis, the rating options of the comparison scale were represented by their numerical counterparts. These quantitative descriptors were also present during the experiment. Additionally, the numerical values for tests where the “not optimized” stimulus was compared to the “optimized” one are inverted, which means that all the results are approached from the perspective of the “optimized” stimulus (i.e., all the ratings express the comparison of the “optimized” stimulus to the “not optimized” stimulus).
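One way to express this normalization in code is shown below; it is a minimal pandas sketch with hypothetical column names (“rating” holding the scale label and “first_label” holding the label of the first stimulus), in which the scores from sessions that started with the “optimized” stimulus are negated.

```python
import pandas as pd

# Numerical counterparts of the seven-point comparison scale.
SCALE = {"Much worse": -3, "Worse": -2, "Slightly worse": -1, "The same": 0,
         "Slightly better": 1, "Better": 2, "Much better": 3}

def pooled_scores(df: pd.DataFrame) -> pd.Series:
    """Return all scores from the perspective of the "optimized" stimulus.

    When the first stimulus was "optimized", the rating expresses the
    "not optimized" stimulus relative to it, so the score is inverted.
    """
    scores = df["rating"].map(SCALE)
    flip = df["first_label"].eq("optimized")
    return scores.where(~flip, -scores)
```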

4.1. Overall Results

Figure 5 exhibits the theoretical rating distribution. As there were no objective differences between the stimuli in any given pair, theoretically, there should not have been any rating other than “The same” (0).
Figure 6 shows the actual rating distribution of the obtained subjective data. It is based on all 1440 ratings. First of all, it needs to be stated that less than 30% of the ratings report that the stimuli in the pairs were “the same”—in other words, more than 70% indicate a difference in perceived command execution delay. Of the ratings, 57.36% were positive (i.e., the “optimized” stimulus was deemed to perform better than the “not optimized” stimulus) and 13.61% were negative (i.e., the “optimized” stimulus was deemed to perform worse than the “not optimized” stimulus). The most frequent rating option was “Slightly better” (+1), while the least used option was “Much worse” (−3), with only three ratings in total. What is also interesting to see is that the “Better” (+2) option surpassed the number of “Slightly worse” (−1) ratings. It may be expected that when such a rating scale is used, the two slight options are the most frequent among the six differentiating options. However, distributions similar to this one are also possible if a strong preference in one direction emerges [92,93,94]. The overall results exhibit an overwhelmingly positive assessment of the “optimized” stimulus in contrast to the “not optimized” one—which, again, were completely the same. When compared to the theoretical all-zero ratings, Student’s t-test provides p < 0.01, meaning that the cognitive bias in the experiment resulted in statistically significant differences.
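The reported comparison against the theoretical all-zero ratings can be reproduced, for instance, with a one-sample t-test of the pooled scores against a mean of zero; the snippet below is a sketch assuming the pooled scores are available as a sequence of values in the range −3 to +3.

```python
from scipy import stats

def labeling_effect_test(pooled_scores):
    """One-sample Student's t-test of the comparison scores against zero."""
    t_stat, p_value = stats.ttest_1samp(pooled_scores, popmean=0.0)
    return t_stat, p_value  # p < 0.01 would indicate a significant labeling effect
```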
Comparing these obtained results to the results of similar studies in the scientific literature (i.e., experiments in which identical stimuli were compared via a 7-point comparison scale) enables a better understanding of the impact of cognitive bias. In this experiment, 70.97% of the ratings were differentiating, non-zero options. In the work on HDR [93], the corresponding value was 77.75% for quality aspects and 77.43% for stalling duration. For UHD [92], it was 72.4%. In the work on rugged smartphones [94], this value was 56.06% when multiple smartphones were involved, 66.91% when a single smartphone was used, and 38.83% when the content was displayed on a computer. It can thus be stated that the impact of the investigated instance of the labeling effect is among the stronger ones.
Figure 7 and Figure 8 exhibit data for each test participant. As the study does not address differences related to demographic information (e.g., age, gender, etc.), the characteristics of the test participants are not shown on the axis, since they are irrelevant to the analysis. Moreover, as stated earlier, demographic data were collected independently from the subjective ratings, and therefore, such analysis would not be possible. The aim of these figures is to visualize the most important general rating behaviors of the test population.
In Figure 7, each marker represents the mean rating of a test participant. It is shown that 2 test participants had negative rating means, another 2 had means between 0 and 0.1, while the other 56 had higher means. Among them, 10 had means of 1 or higher. The average of these individual means was 0.73.
Figure 8 shows the number of “The same” (0) ratings that each test participant provided out of the 24 ratings in total. One test participant had only one such rating (“not optimized” compared to “optimized”; continuous input; medium difficulty; no added delay), and the other extreme value was 14. For 48 out of 60 test participants, at least two-thirds of the reported ratings consisted of non-zero options. The average number of zero ratings per test participant was 6.97 out of the 24.

4.2. Impact of Label Order

Figure 9 shows the rating distribution of the tests where the first label was “not optimized” and the second was “optimized” (i.e., the “optimized” stimulus was compared to the “not optimized” stimulus), and Figure 10 shows the rating distribution for the opposite case. Note that in this analysis, the results of the latter are inverted, so that both express the perceived command execution delay of the “optimized” stimulus compared to the “not optimized” stimulus. The means for the obtained results are 0.77 and 0.69, respectively. For both, the order of rating frequency is the same as in the case of the overall results, yet there are apparent differences when “optimized” was the first label: there were fewer “The same” (0) and “Slightly better” (+1) ratings, while there were more of the others. In particular, the number of “Slightly worse” (−1) and “Much better” (+3) ratings increased by 5%. However, these apparent differences are not statistically significant, as p = 0.1.

4.3. Impact of Input Type

The rating distributions of the single-input and the continuous-input games are shown in Figure 11 and Figure 12, respectively. The means for this classification are 0.72 and 0.74, respectively. The most apparent difference is that there are more “Slightly better” (+1) and fewer “The same” (0) ratings in the case of the continuous-input game—roughly a 3% difference for each in contrast to the single-input game. However, these are far from being significantly different, as p = 0.41.

4.4. Impact of Game Difficulty

The rating distributions of games at easy, medium, and hard difficulties are shown in Figure 13, Figure 14, and Figure 15, respectively. The means for these difficulties are 0.79, 0.75, and 0.66, respectively. While there is an approximately 10% difference regarding “Slightly better” (+1) ratings between the easy and medium difficulties, the means are rather similar, and p = 0.26. However, both the means and distributions indicate more of a difference when compared to the hard difficulty. Medium and hard difficulties do not differ significantly, as p = 0.15, yet easy and hard do, as p = 0.04. It is noteworthy that while the amount of differentiating, non-zero rating options changes only 2.5% in total, the distribution of these ratings varies considerably. Previous studies on the impact of difficulty on the player experience [20,30] conclude that video games are only enjoyable when the difficulty is maintained within certain thresholds. Exceeding or dropping below these thresholds may degrade the player experience. Other studies highlight the cognitive demand of increased gaming difficulty [100,101,102], including educational games [103]. For instance, Large et al. [101] studied the players of a MOBA game through cognitive tasks, and found a correlation between various cognitive characteristics (e.g., speed of processing) and gaming performance (i.e., player ranking). Future research should address the correlation between cognitive abilities—and the load on these abilities via increased game difficulty—and rating behavior in the context of the labeling effect.

4.5. Impact of Added Command Execution Delay

The rating distributions at 0 ms, 50 ms, 150 ms, and 250 ms of added command execution delay are shown in Figure 16, Figure 17, Figure 18, and Figure 19, respectively. The means for these delay values are 0.37, 0.41, 0.91, and 1.23, respectively. When 0 ms and 50 ms of added delay are compared, p = 0.33, while for every other comparison, p < 0.01, indicating statistically significant differences. The distribution of the ratings shows that, as the added delay increases, the number of differentiating, non-zero rating options increases as well: it is 55.83% for 0 ms, 69.44% for 50 ms, 77.78% for 150 ms, and 80.83% for 250 ms. With such a tendency, it is not surprising that there is a strong correlation of 0.92 between the proportion of non-zero ratings and the amount of added command execution delay. It is also rather notable that at the highest delay, the “Much better” (+3) ratings reached 16.11%, nearly rivaling the “The same” (0) ratings at 19.17%. A straightforward explanation of this phenomenon is that the term “optimized” was quite likely directly associated with the amount of delay. Therefore, the more command execution delay was present, the more it could support a confirmation bias assuming that the “optimized” gameplay suffered less than the “not optimized” one. Still, even when no delay was added, more than half of the ratings indicated perceivable differences.
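Assuming a Pearson coefficient, the reported correlation can be reproduced directly from the percentages of non-zero ratings listed above; the sketch below computes it for the four delay levels.

```python
from scipy import stats

delays_ms = [0, 50, 150, 250]
nonzero_pct = [55.83, 69.44, 77.78, 80.83]  # differentiating (non-zero) ratings per delay level

r, p = stats.pearsonr(delays_ms, nonzero_pct)
print(round(r, 2))  # approximately 0.92
```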

4.6. Additional Discussion

In live service game environments, in order to meet the demands of players, developers usually make an effort to balance the game, fix issues and bugs, and also provide new content. These novel additions to the game are usually delivered to the player under the label “patch”, or even “update” or “downloadable content” if the novelty is more substantial. The study of Zhong and Xu [104] concludes that frequent updates keep the players engaged and satisfied. This finding is also supported by the work of Liu and Samiee [105]. In addition, the study of Claypool et al. [106] indicates how new patches are able to adjust the way players play, or may even encourage brand new methods of playing the game [107]. However, the work of Del Gallo [108] argues that the developers of the MOBA game League of Legends intentionally provide new “patches” in order to distract players from the unresolved issues of the game.
Patches do not always provide new content, as they may focus on technological improvements or even security enhancements. The study of Arora et al. [109] addresses the impact of software security patches, while the work of Lin et al. [110] proposes a model that automatically generates security patches. Security patches are often used in order to prevent cheating and other types of exploitation [111], and thus such patches are likely to be welcomed by players [112,113]. The study of Mertens [114] concludes that games with long lists of bugs and errors can be considered “broken”; however, such a negative reputation may vanish as the developer releases new patches, updates, and downloadable content. The author also argued that these “broken” games usually receive backlash at launch and that their player count eventually drops. Yet after receiving updates—the public notes of which usually describe the updated version of the game as having better performance and/or improved gameplay—players may be attracted to the game again.
In games where low network latency is essential to gameplay (e.g., FTGs), the improvement of the utilized netcode is crucial [14,57,58,59]. At the time of writing this paper, rollback netcode is widely used. Its main idea is to compensate for delay by predicting player input. Of course, if there is an actual difference between the predicted and the real player input, then the game session is reverted to an earlier state and the correct input is carried out. The relevance of such research is that for certain video games, even a few frames worth of delay may have a significant impact on QoE and player performance. Therefore, any change to the netcode communicated in patches or updates may have a significant preliminary influence on players who understand such mechanisms, and who may thus have preconceptions about the updated gameplay performance.
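To make this mechanism concrete, the following is a heavily simplified, hypothetical Python sketch of rollback: the local simulation predicts the remote input (here, simply by repeating the last confirmed one), saves a state snapshot every frame, and re-simulates from the affected frame when a late remote input contradicts the prediction. It is illustrative only and does not represent the netcode of any particular game.

```python
import copy

class RollbackSession:
    """Simplified rollback loop for a two-player game simulation."""

    def __init__(self, initial_state, step):
        self.step = step                  # step(state, local_input, remote_input) -> new state
        self.states = [copy.deepcopy(initial_state)]  # snapshot taken before each frame
        self.local_inputs = []
        self.remote_inputs = []           # predicted until confirmed
        self.last_confirmed_remote = None

    def advance(self, local_input):
        """Simulate one frame immediately, using a predicted remote input."""
        predicted = self.last_confirmed_remote  # naive prediction: repeat the last known input
        self.local_inputs.append(local_input)
        self.remote_inputs.append(predicted)
        self.states.append(self.step(self.states[-1], local_input, predicted))

    def confirm_remote(self, frame, actual_input):
        """A late remote input arrived; roll back and re-simulate if mispredicted."""
        self.last_confirmed_remote = actual_input
        if self.remote_inputs[frame] == actual_input:
            return  # prediction was correct, nothing to do
        self.remote_inputs[frame] = actual_input
        # Restore the snapshot taken before the mispredicted frame and replay.
        self.states = self.states[:frame + 1]
        for i in range(frame, len(self.local_inputs)):
            self.states.append(self.step(self.states[-1], self.local_inputs[i], self.remote_inputs[i]))
```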
The results of the experiment presented in this paper emphasize the impact of the labeling effect in the context of gaming. The methodology did not introduce objective differences within the pairs of the pair comparison tests (i.e., the two game sequences were always identical), only between pairs, in accordance with the test conditions. According to the best knowledge of the authors, no such study has been carried out so far that addresses any aspect of gaming QoE where a video game patch or update is communicated to have improved performance when there is actually zero difference. The results of such a subjective study could be highly relevant to the current practices of video game production and maintenance. One may hypothesize that for a portion of the test participants, the combination of the expectations and the lack of actual improvement may degrade certain aspects of QoE—as has already been measured in a prior experiment [115]—while it is also possible that the labeling effect may result in an overall QoE improvement for other test participants.
As stated earlier in the analysis of the overall results, 13.61% of the obtained results in our study are negative, which means that the “optimized” sequences were deemed to perform worse. Earlier works indicate that rating penalization may occur due to the mismatch of expectations and genuine differences [115], or as a form of compensation [93]. While the results are indeed overwhelmingly positive (57.36%), it is worth considering that certain yet-to-be-investigated labels and contexts may induce a greater extent of rating penalization.

5. Conclusions

In this paper, we presented a study on the impact of the labeling effect on the perception of command execution delay in video games. In the experiment, identical game sequences were compared, but one was labeled “optimized”, and the other one “not optimized”. The obtained results indicate that regardless of label order, input type, game difficulty, and added delay, the subjective ratings clearly exhibit the influence of the assigned labels. Generally, we can confidently reject the null hypothesis, as the measured differences are not due to random error. The ratio of differentiating scores in this study is competitive with prior results in the scientific literature (i.e., it matches the corresponding values or even surpasses them), particularly with works using similar methodologies [92,93,94]. For the two label orders and the two input types, the results do not differ significantly. However, for the slowest and fastest gameplay (i.e., easy and hard difficulty), and for the various extents of added command execution delay, the differences are statistically significant. Furthermore, there is a strong correlation between the achieved results and the added delay. More specifically, the more delay is added, the greater the impact of the labeling effect (i.e., more ratings differentiate the identical game sequences). However, the range of added delay in our experiment was limited by the tolerance levels measured by prior research efforts in the scientific literature. Therefore, greater extents of added delay may result in different tendencies of rating behavior.
One notable limitation of the study is that objective user performance was not recorded (i.e., sequence scores and the time at which a sequence ends). In future work, the correlation between objective performance and subjective ratings should be investigated. Additionally, such gameplay-related characteristics may provide better insight into user behavior. For instance, at higher difficulties, sequence abandonment may occur (i.e., the player forfeits the session either by intentionally providing the wrong input to end the sequence earlier in the case of the continuous-input game or by not providing further input at all), the detection of which may enable better data analysis. Another potential phenomenon is that a player may project operational errors (e.g., the personal mistake of pressing the wrong button at a given moment or pressing the right button too late) onto the game sequence itself, which may heavily influence subjective assessment. It is also worth highlighting that objective data related to reaction times may measure fatigue over time, and therefore, the inclusion of such data may map the connection between fatigue and the impact of the labeling effect. Fatigue may result in more instances of operational error, making the aforementioned phenomenon even more relevant. As for command execution delay in general, while our work completely neglected the variation in delay, investigating the perception of jitter in such a context could also be relevant. Regarding the extent of added delay, as implied earlier, delay values that exceed the level of tolerance could be included in future studies.
Beyond command execution delay, other game characteristics should be addressed as well, such as frame rate. Moreover, multiplayer scenarios should be studied, which may involve numerous considerations related to inter-user effects. Future work should also address the correlations between viewing conditions, display characteristics, immersion, and perceptual fatigue. For example, different combinations of display size, display resolution, and viewing distance may affect the influence of the labeling effect on perceived visual quality. The impact of various viewing distances on perceived quality is already well investigated [116,117,118], yet research on its connection to the labeling effect and other forms of cognitive bias is still lacking. On the level of methodology, physiological measurements could provide enhanced insight into the impact of such a form of cognitive bias. Physiological measurements are also utilized to determine the level of immersion [119,120,121], particularly in the context of novel immersive technologies such as VR [122,123,124]. In future works, the correlation between the level of immersion and the impact of cognitive bias should be investigated as well. Finally, as suggested earlier in this paper, it would be particularly relevant for the gaming industry to measure the impact of a performance patch or update that does not implement any objective difference, and thereby quantify its potential positive and negative influence on gaming QoE.

Author Contributions

Conceptualization, D.H.N. and P.A.K.; methodology, D.H.N. and P.A.K.; software, D.H.N.; validation, D.H.N. and P.A.K.; investigation, D.H.N. and P.A.K.; resources, D.H.N. and P.A.K.; data curation, D.H.N. and P.A.K.; writing—original draft preparation, D.H.N. and P.A.K.; writing—review and editing, D.H.N. and P.A.K.; visualization, D.H.N. and P.A.K.; supervision, P.A.K.; project administration, P.A.K. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

The study was conducted in accordance with the Declaration of Helsinki. Individual approval by the ethics committee of the institution was not required for the subjective study.

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author.

Acknowledgments

The authors would like to thank all the individuals who participated in the subjective study, and thus, made this research possible.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
ALC: Advance Lag Compensation
DDA: Dynamic Difficulty Adjustment
EBSE: evidence-based software engineering
fMRI: functional magnetic resonance imaging
FPS: first-person shooter
FTG: fighting game
GEQ: Gaming Experience Questionnaire
GSAQ: Gaming Specified Attributing Questionnaire
HCI: human–computer interaction
HD: high definition
HDR: high dynamic range
JND: just noticeable difference
LTE: long-term evolution
MMORPG: massively multiplayer online role-playing game
MOBA: multiplayer online battle arena
PC: personal computer
QoE: Quality of Experience
RMT: requirement management tool
RPG: role-playing game
RTS: real-time strategy
UHD: ultra-high-definition
VAT: value-added tax
VR: virtual reality

References

  1. Normoyle, A.; Guerrero, G.; Jörg, S. Player Perception of Delays and Jitter in Character Responsiveness. In Proceedings of the ACM Symposium on Applied Perception, Vancouver, BC, Canada, 8–9 August 2014; pp. 117–124. [Google Scholar]
  2. Wilke, A.; Mata, R. Cognitive Bias; Elsevier: San Diego, CA, USA, 2017. [Google Scholar]
  3. Claypool, M.; Claypool, K.; Damaa, F. The Effects of Frame Rate and Resolution on Users Playing First Person Shooter Games. In Proceedings of the Multimedia Computing and Networking, San Jose, CA, USA, 15–18 January 2006; SPIE: New York, NY, USA, 2006; Volume 6071, p. 607101. [Google Scholar]
  4. Liu, S.; Claypool, M.; Kuwahara, A.; Sherman, J.; Scovell, J.J. Lower is Better? The Effects of Local Latencies on Competitive First-person Shooter Game Players. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems, Yokohama, Japan, 8–13 May 2021; pp. 1–12. [Google Scholar]
  5. Chen, K.T.; Huang, P.; Lei, C.L. Effect of Network Quality on Player Departure Behavior in Online Games. IEEE Trans. Parallel Distrib. Syst. 2008, 20, 593–606. [Google Scholar] [CrossRef]
  6. Kokkinakis, A.; York, P.; Patra, M.S.; Robertson, J.; Kirman, B.; Coates, A.; Chitayat, A.P.P.; Demediuk, S.; Drachen, A.; Hook, J.; et al. Metagaming and Metagames in Esports. Int. J. Esports 2021, 1, 1–24. [Google Scholar]
  7. Boluk, S.; LeMieux, P. Metagaming: Videogames and the Practice of Play. Comput. Games New Media Cult. 2017, 53, 1–22. [Google Scholar]
  8. Shneiderman, B. Response time and display rate in human performance with computers. ACM Comput. Surv. (CSUR) 1984, 16, 265–285. [Google Scholar] [CrossRef]
  9. Raaen, K.; Eg, R. Instantaneous human-computer interactions: Button causes and screen effects. In Proceedings of the Human-Computer Interaction: Users and Contexts: 17th International Conference, HCI International 2015, Los Angeles, CA, USA, 2–7 August 2015; Proceedings, Part III 17. Springer: New York, NY, USA, 2015; pp. 492–502. [Google Scholar]
  10. Wang, E. User’s Delay Perception and Tolerance in Human-Computer Interaction. In Proceedings of the Human Factors and Ergonomics Society Annual Meeting, Baltimore, MD, USA, 29 September–4 October 2002; SAGE Publications: Los Angeles, CA, USA, 2002; Volume 46, pp. 651–655. [Google Scholar]
  11. Metzger, F.; Rafetseder, A.; Schwartz, C. A comprehensive end-to-end lag model for online and cloud video gaming. In Proceedings of the 5th ISCA/DEGA Workshop on Perceptual Quality of Systems (PQS 2016), Berlin, Germany, 29–31 August 2016; pp. 15–19. [Google Scholar]
  12. Pantel, L.; Wolf, L.C. On the impact of delay on real-time multiplayer games. In Proceedings of the 12th International Workshop on Network and Operating Systems Support for Digital Audio and Video, Miami, FL, USA, 12–14 May 2002; pp. 23–29. [Google Scholar]
  13. Schmidt, S.; Zadtootaghaj, S.; Möller, S. Towards the delay sensitivity of games: There is more than genres. In Proceedings of the 2017 Ninth International Conference on Quality of Multimedia Experience (QoMEX), Erfurt, Germany, 31 May–2 June 2017; IEEE: New York, NY, USA, 2017; pp. 1–6. [Google Scholar]
  14. Claypool, M.; Claypool, K. Latency can kill: Precision and deadline in online games. In Proceedings of the First Annual ACM SIGMM Conference on Multimedia Systems, Phoenix, AZ, USA, 22–23 February 2010; pp. 215–222. [Google Scholar]
  15. Fritsch, T.; Ritter, H.; Schiller, J. The effect of latency and network limitations on mmorpgs: A field study of everquest2. In Proceedings of the 4th ACM SIGCOMM Workshop on Network and System Support for Games, Hawthorne, NY, USA, 10–11 October 2005; pp. 1–9. [Google Scholar]
  16. Chen, K.T.; Chang, Y.C.; Tseng, P.H.; Huang, C.Y.; Lei, C.L. Measuring the latency of cloud gaming systems. In Proceedings of the 19th ACM International Conference on Multimedia, Scottsdale, AZ, USA, 28 November–1 December 2011; pp. 1269–1272. [Google Scholar]
  17. Kämäräinen, T.; Siekkinen, M.; Ylä-Jääski, A.; Zhang, W.; Hui, P. A Measurement Study on Achieving Imperceptible Latency in Mobile Cloud Gaming. In Proceedings of the 8th ACM on Multimedia Systems Conference, Taipei, Taiwan, 20–23 June 2017; pp. 88–99. [Google Scholar]
  18. Allison, R.S.; Harris, L.R.; Jenkin, M.; Jasiobedzka, U.; Zacher, J.E. Tolerance of Temporal Delay in Virtual Environments. In Proceedings of the IEEE Virtual Reality, Yokohama, Japan, 13–17 March 2001; IEEE: New York, NY, USA, 2001; pp. 247–254. [Google Scholar]
  19. Huang, P.; Arima, R.; Ishibashi, Y. Influence of Network Delay on Human Perception of Weight in Virtual Environment. In Proceedings of the 2017 3rd IEEE International Conference on Computer and Communications (ICCC), Chengdu, China, 13–16 December 2017; IEEE: New York, NY, USA, 2017; pp. 1221–1225. [Google Scholar]
  20. Klimmt, C.; Blake, C.; Hefner, D.; Vorderer, P.; Roth, C. Player Performance, Satisfaction, and Video Game Enjoyment. In Proceedings of the Entertainment Computing—ICEC 2009: 8th International Conference, Paris, France, 3–5 September 2009; Proceedings 8. Springer: New York, NY, USA, 2009; pp. 1–12. [Google Scholar]
  21. Erfani, M.; El-Nasr, M.S.; Milam, D.; Aghabeigi, B.; Lameman, B.A.; Riecke, B.E.; Maygoli, H.; Mah, S. The Effect of Age, Gender, and Previous Gaming Experience on Game Play Performance. In Proceedings of the Human-Computer Interaction: Second IFIP TC 13 Symposium, HCIS 2010, Held as Part of WCC 2010, Brisbane, Australia, 20–23 September 2010; Proceedings. Springer: New York, NY, USA, 2010; pp. 293–296. [Google Scholar]
  22. Hopp, T.; Fisher, J. Examination of the Relationship Between Gender, Performance, and Enjoyment of a First-Person Shooter Game. Simul. Gaming 2017, 48, 338–362. [Google Scholar] [CrossRef]
  23. Moller, S.; Schmidt, S.; Zadtootaghaj, S. New ITU-T Standards for Gaming QoE Evaluation and Management. In Proceedings of the 2018 Tenth International Conference on Quality of Multimedia Experience (QoMEX), Sardinia, Italy, 29–31 May 2018; IEEE: New York, NY, USA, 2018; pp. 1–6. [Google Scholar]
  24. Depping, A.E.; Mandryk, R.L. Why is This Happening to Me? How Player Attribution can Broaden our Understanding of Player Experience. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems, Denver, CO, USA, 6–11 May 2017; pp. 1040–1052. [Google Scholar]
  25. Claypool, M.; Finkel, D. The effects of latency on player performance in cloud-based games. In Proceedings of the 2014 13th Annual Workshop on Network and Systems Support for Games, Nagoya, Japan, 4–5 December 2014; IEEE: New York, NY, USA, 2014; pp. 1–6. [Google Scholar]
  26. Clincy, V.; Wilgor, B. Subjective evaluation of latency and packet loss in a cloud-based game. In Proceedings of the 2013 10th International Conference on Information Technology: New Generations, Las Vegas, NV, USA, 15–17 April 2013; IEEE: New York, NY, USA, 2013; pp. 473–476. [Google Scholar]
  27. Li, S.; Chen, C.; Li, L. Evaluating the Latency of Clients by Player Behaviors in Client-Server Based Network Games. In Proceedings of the 2008 3rd International Conference on Innovative Computing Information and Control, Dalian, China, 18–20 June 2008; IEEE: New York, NY, USA, 2008; p. 375. [Google Scholar]
  28. Beigbeder, T.; Coughlan, R.; Lusher, C.; Plunkett, J.; Agu, E.; Claypool, M. The effects of loss and latency on user performance in unreal tournament 2003®. In Proceedings of the 3rd ACM SIGCOMM Workshop on Network and System Support for Games, Portland, OR, USA, 30 August 2004; pp. 144–151. [Google Scholar]
  29. Chanel, G.; Rebetez, C.; Bétrancourt, M.; Pun, T. Emotion Assessment From Physiological Signals for Adaptation of Game Difficulty. IEEE Trans. Syst. Man, Cybern. Part A Syst. Humans 2011, 41, 1052–1063. [Google Scholar] [CrossRef]
  30. Alexander, J.T.; Sear, J.; Oikonomou, A. An investigation of the effects of game difficulty on player enjoyment. Entertain. Comput. 2013, 4, 53–62. [Google Scholar] [CrossRef]
  31. Liu, C.; Agrawal, P.; Sarkar, N.; Chen, S. Dynamic Difficulty Adjustment in Computer Games Through Real-Time Anxiety-Based Affective Feedback. Int. J. Hum.-Comput. Interact. 2009, 25, 506–529. [Google Scholar] [CrossRef]
  32. Andrade, G.; Ramalho, G.; Santana, H.; Corruble, V. Challenge-Sensitive Action Selection: An Application to Game Balancing. In Proceedings of the IEEE/WIC/ACM International Conference on Intelligent Agent Technology, Compiegne, France, 19–22 September 2005; IEEE: New York, NY, USA, 2005; pp. 194–200. [Google Scholar]
  33. Baldwin, A.; Johnson, D.; Wyeth, P.A. The Effect of Multiplayer Dynamic Difficulty Adjustment on the Player Experience of Video Games. In CHI’14 Extended Abstracts on Human Factors in Computing Systems; Association for Computing Machinery: New York, NY, USA, 2014; pp. 1489–1494. [Google Scholar]
  34. Sabet, S.S.; Schmidt, S.; Griwodz, C.; Möller, S. Towards the Impact of Gamers’ Adaptation to Delay Variation on Gaming Quality of Experience. In Proceedings of the 2019 Eleventh International Conference on Quality of Multimedia Experience (QoMEX), Berlin, Germany, 5–7 June 2019; IEEE: New York, NY, USA, 2019; pp. 1–6. [Google Scholar]
  35. Sabet, S.S.; Schmidt, S.; Zadtootaghaj, S.; Naderi, B.; Griwodz, C.; Möller, S. A Latency Compensation Technique Based on Game Characteristics to Mitigate the Influence of Delay on Cloud Gaming Quality of Experience. In Proceedings of the 11th ACM Multimedia Systems Conference, Istanbul, Turkey, 8–11 June 2020; pp. 15–25. [Google Scholar]
  36. Savery, C.; Graham, N.; Gutwin, C.; Brown, M. The Effects of Consistency Maintenance Methods on Player Experience and Performance in Networked Games. In Proceedings of the 17th ACM Conference on Computer Supported Cooperative Work & Social Computing, Baltimore, MD, USA, 15–19 February 2014; pp. 1344–1355. [Google Scholar]
  37. Lee, S.W.; Chang, R.K. Enhancing the Experience of Multiplayer Shooter Games via Advanced Lag Compensation. In Proceedings of the 9th ACM Multimedia Systems Conference, Amsterdam, The Netherlands, 12–15 June 2018; pp. 284–293. [Google Scholar]
  38. Liu, S.; Claypool, M.; Devigere, B.; Kuwahara, A.; Sherman, J. ‘Git Gud!’—Evaluation of Self-Rated Player Skill Compared to Actual Player Performance. In Proceedings of the Extended Abstracts of the 2020 Annual Symposium on Computer-Human Interaction in Play, Virtual, 2–4 November 2020; pp. 306–310. [Google Scholar]
  39. Lwin, H.M.M.M.; Ishibashi, Y.; Mya, K.T. Influence of Voice Delay on Human Perception of Group Synchronization Error for Remote Learning: One-way communication case. In Proceedings of the 2020 IEEE Conference on Computer Applications (ICCA), Yangon, Myanmar, 27–28 February 2020; IEEE: New York, NY, USA, 2020; pp. 1–5. [Google Scholar]
  40. Pornpongtechavanich, P.; Wuttidittachotti, P.; Daengsi, T. QoE modeling for audiovisual associated with MOBA game using subjective approach. Multimed. Tools Appl. 2022, 81, 37763–37779. [Google Scholar] [CrossRef]
  41. Raaen, K.; Petlund, A. How much delay is there really in current games? In Proceedings of the 6th ACM Multimedia Systems Conference, Portland, OR, USA, 18–20 March 2015; pp. 89–92. [Google Scholar]
  42. Long, M.; Gutwin, C. Characterizing and Modeling the Effects of Local Latency on Game Performance and Experience. In Proceedings of the 2018 Annual Symposium on Computer-Human Interaction in Play, Melbourne, VIC, Australia, 28–31 October 2018; pp. 285–297. [Google Scholar]
  43. Claypool, M.; Eg, R.; Raaen, K. Modeling User Performance for Moving Target Selection with a Delayed Mouse. In Proceedings of the International Conference on Multimedia Modeling, Miami, FL, USA, 4–6 January 2016; Springer: New York, NY, USA, 2016; pp. 226–237. [Google Scholar]
  44. Claypool, M.; Cockburn, A.; Gutwin, C. The Impact of Motion and Delay on Selecting Game Targets with a Mouse. ACM Trans. Multimed. Comput. Commun. Appl. (TOMM) 2020, 16, 1–24. [Google Scholar] [CrossRef]
  45. Ivkovic, Z.; Stavness, I.; Gutwin, C.; Sutcliffe, S. Quantifying and Mitigating the Negative Effects of Local Latencies on Aiming in 3D Shooter Games. In Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems, Seoul, Republic of Korea, 18–23 April 2015; pp. 135–144. [Google Scholar]
  46. Quax, P.; Beznosyk, A.; Vanmontfort, W.; Marx, R.; Lamotte, W. An Evaluation of the Impact of Game Genre on User Experience in Cloud Gaming. In Proceedings of the 2013 IEEE International Games Innovation Conference (IGIC), Vancouver, BC, Canada, 23–25 September 2013; IEEE: New York, NY, USA, 2013; pp. 216–221. [Google Scholar]
  47. Sabet, S.S.; Schmidt, S.; Zadtootaghaj, S.; Griwodz, C.; Möller, S. Delay Sensitivity Classification of Cloud Gaming Content. In Proceedings of the 12th ACM International Workshop on Immersive Mixed and Virtual Environment Systems, Istanbul, Turkey, 8 June 2020; pp. 25–30. [Google Scholar]
  48. Beznosyk, A.; Quax, P.; Coninx, K.; Lamotte, W. Influence of Network Delay and Jitter on Cooperation in Multiplayer Games. In Proceedings of the 10th International Conference on Virtual Reality Continuum and Its Applications in Industry, Hong Kong, China, 11–12 December 2011; pp. 351–354. [Google Scholar]
  49. Beyer, J.; Möller, S. Assessing the Impact of Game Type, Display Size and Network Delay on Mobile Gaming QoE. PIK-Prax. Der Informationsverarbeitung Und Kommun. 2014, 37, 287–295. [Google Scholar] [CrossRef]
  50. Claypool, M. The effect of latency on user performance in real-time strategy games. Comput. Netw. 2005, 49, 52–70. [Google Scholar] [CrossRef]
  51. Pedri, S.; Hesketh, B. Time Perception: Effects of Task Speed and Delay. Percept. Mot. Ski. 1993, 76, 599–608. [Google Scholar] [CrossRef] [PubMed]
  52. Kohrs, C.; Angenstein, N.; Brechmann, A. Delays in Human-Computer Interaction and Their Effects on Brain Activity. PLoS ONE 2016, 11, e0146250. [Google Scholar] [CrossRef] [PubMed]
  53. Ravindran, K.; Sabbir, A.; Ravindran, B. Impact of network loss/delay characteristics on consistency control in real-time multi-player games. In Proceedings of the 2008 5th IEEE Consumer Communications and Networking Conference, Las Vegas, NV, USA, 10–12 January 2008; IEEE: New York, NY, USA, 2008; pp. 1128–1133. [Google Scholar]
  54. Le, A.; Liu, Y.E. Fairness in Multi-Player Online Games on Deadline-Based Networks. In Proceedings of the CCNC, Washington, DC, USA, 1–13 January 2017; pp. 670–675. [Google Scholar]
  55. Lindström, S.F.; Wetterberg, M.; Carlsson, N. Cloud gaming: A QoE study of fast-paced single-player and multiplayer gaming. In Proceedings of the 2020 IEEE/ACM 13th International Conference on Utility and Cloud Computing (UCC), Leicester, UK, 7–10 December 2020; IEEE: New York, NY, USA, 2020; pp. 34–45. [Google Scholar]
  56. Siu, K.; Guzdial, M.; Riedl, M.O. Evaluating singleplayer and multiplayer in human computation games. In Proceedings of the 12th International Conference on the Foundations of Digital Games, Hyannis, MA, USA, 14–17 August 2017; pp. 1–10. [Google Scholar]
  57. Lioret, A.; Diler, L.; Dalil, S.; Mota, M. Hybrid Prediction for Games’ Rollback Netcode. In Proceedings of the ACM SIGGRAPH 2022 Posters, Vancouver, BC, Canada, 7–11 August 2022; Association for Computing Machinery: New York, NY, USA, 2022; pp. 1–2. [Google Scholar]
  58. Huynh, E.; Valarino, F. An Analysis of Continuous Consistency Models in Real Time Peer-to-Peer Fighting Games. 2019. Available online: https://www.diva-portal.org/smash/get/diva2:1322881/FULLTEXT01.pdf (accessed on 24 April 2025).
  59. Ehlert, A. Improving Input Prediction in Online Fighting Games. 2021. Available online: https://www.diva-portal.org/smash/get/diva2:1560069/FULLTEXT01.pdf (accessed on 24 April 2025).
  60. Claypool, M. Game Input with Delay—Moving Target Selection with a Game Controller Thumbstick. ACM Trans. Multimed. Comput. Commun. Appl. (TOMM) 2018, 14, 1–22. [Google Scholar] [CrossRef]
  61. Long, M.; Gutwin, C. Effects of Local Latency on Game Pointing Devices and Game Pointing Tasks. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, Glasgow, Scotland, UK, 4–9 May 2019; pp. 1–12. [Google Scholar]
  62. Stern, M.K.; Johnson, J.H. Just noticeable difference. In The Corsini Encyclopedia of Psychology; John Wiley & Sons, Inc.: New York, NY, USA, 2010; pp. 1–2. [Google Scholar]
  63. Quax, P.; Monsieurs, P.; Lamotte, W.; De Vleeschauwer, D.; Degrande, N. Objective and subjective evaluation of the influence of small amounts of delay and jitter on a recent first person shooter game. In Proceedings of the 3rd ACM SIGCOMM Workshop on Network and System Support for Games, Portland, OR, USA, 30 August 2004; pp. 152–156. [Google Scholar]
  64. Xu, J.; Wah, B.W. Concealing network delays in delay-sensitive online interactive games based on just-noticeable differences. In Proceedings of the 2013 IEEE International Conference on Multimedia and Expo (ICME), San Jose, CA, USA, 15–19 July 2013; IEEE: New York, NY, USA, 2013; pp. 1–6. [Google Scholar]
  65. Tan, C.I.; Tan, W.H.; binti Shamsudin, S.F.; Navaratnam, S.; Ng, Y.Y. Investigating the Impact of Latency in Mobile-Based Multiplayer Online Battle Arena (MOBA) Games. Int. J. Creat. Multimed. 2022, 3, 1–16. [Google Scholar]
  66. Hohlfeld, O.; Fiedler, H.; Pujol, E.; Guse, D. Insensitivity to Network Delay: Minecraft Gaming Experience of Casual Gamers. In Proceedings of the 2016 28th International Teletraffic Congress (ITC 28), Wurzburg, Germany, 12–16 September 2016; IEEE: New York, NY, USA, 2016; Volume 3, pp. 31–33. [Google Scholar]
  67. Nichols, J.; Claypool, M. The effects of latency on online madden NFL football. In Proceedings of the 14th International Workshop on Network and Operating Systems Support for Digital Audio and Video, Cork, Ireland, 16–18 June 2004; pp. 146–151. [Google Scholar]
  68. Wason, P.C. On the Failure to Eliminate Hypotheses in a Conceptual Task. Q. J. Exp. Psychol. 1960, 12, 129–140. [Google Scholar] [CrossRef]
  69. Wason, P.C. Reasoning about a Rule. Q. J. Exp. Psychol. 1968, 20, 273–281. [Google Scholar] [CrossRef]
  70. Klayman, J. Varieties of Confirmation Bias. Psychol. Learn. Motiv. 1995, 32, 385–418. [Google Scholar]
  71. Darley, J.M.; Gross, P.H. A Hypothesis-confirming Bias in Labeling Effects. J. Personal. Soc. Psychol. 1983, 44, 20. [Google Scholar] [CrossRef]
  72. Jones, M.; Sugden, R. Positive Confirmation bias in the acquisition of information. Theory Decis. 2001, 50, 59–99. [Google Scholar] [CrossRef]
  73. Sakai, N.; Imada, S.; Saito, S.; Kobayakawa, T.; Deguchi, Y. The Effect of Visual Images on Perception of Odors. Chem. Senses 2005, 30, i244–i245. [Google Scholar] [CrossRef]
  74. Bentler, R.A.; Niebuhr, D.P.; Johnson, T.A.; Flamme, G.A. Impact of Digital Labeling on Outcome Measures. Ear Hear. 2003, 24, 215–224. [Google Scholar] [CrossRef]
  75. Iglesias, V. Preconceptions About Service: How Much Do They Influence Quality Evaluations? J. Serv. Res. 2004, 7, 90–103. [Google Scholar] [CrossRef]
  76. Gao, Z.; Schroeder, T.C. Effects of Label Information on Consumer Willingness-to-pay for Food Attributes. Am. J. Agric. Econ. 2009, 91, 795–809. [Google Scholar] [CrossRef]
  77. De Graaf, K.A.; Liang, P.; Tang, A.; Van Vliet, H. The Impact of Prior Knowledge on Searching in Software Documentation. In Proceedings of the 2014 ACM Symposium on Document Engineering, Fort Collins, CO, USA, 16–19 September 2014; pp. 189–198. [Google Scholar]
  78. Chovanová, H.H.; Korshunov, A.I.; Babčanová, D. Impact of Brand on Consumer Behavior. Procedia Econ. Financ. 2015, 34, 615–621. [Google Scholar] [CrossRef]
  79. Stylidis, K.; Wickman, C.; Söderberg, R. Perceived Quality of Products: A Framework and Attributes Ranking Method. J. Eng. Des. 2020, 31, 37–67. [Google Scholar] [CrossRef]
  80. Fitzgerald, M.P.; Russo Donovan, K.; Kees, J.; Kozup, J. How Confusion Impacts Product Labeling Perceptions. J. Consum. Mark. 2019, 36, 306–316. [Google Scholar] [CrossRef]
  81. Christandl, F.; Fetchenhauer, D.; Hoelzl, E. Price Perception and Confirmation Bias in the Context of a VAT Increase. J. Econ. Psychol. 2011, 32, 131–141. [Google Scholar] [CrossRef]
  82. Stacy, W.; MacMillan, J. Cognitive Bias in Software Engineering. Commun. ACM 1995, 38, 57–63. [Google Scholar] [CrossRef]
  83. Mohanani, R.; Salman, I.; Turhan, B.; Rodríguez, P.; Ralph, P. Cognitive Biases in Software Engineering: A Systematic Mapping Study. IEEE Trans. Softw. Eng. 2018, 46, 1318–1339. [Google Scholar] [CrossRef]
  84. Jørgensen, M.; Papatheocharous, E. Believing is Seeing: Confirmation Bias Studies in Software Engineering. In Proceedings of the 2015 41st Euromicro Conference on Software Engineering and Advanced Applications, Madeira, Portugal, 26–28 August 2015; IEEE: New York, NY, USA, 2015; pp. 92–95. [Google Scholar]
  85. Calikli, G.; Bener, A. Empirical Analyses of the Factors Affecting Confirmation Bias and the Effects of Confirmation Bias on Software Developer/Tester Performance. In Proceedings of the 6th International Conference on Predictive Models in Software Engineering, Timisoara, Romania, 12–13 September 2010; pp. 1–11. [Google Scholar]
  86. Leventhal, L.M.; Teasley, B.E.; Rohlman, D.S. Analyses of Factors Related to Positive Test Bias in Software Testing. Int. J. Hum.-Comput. Stud. 1994, 41, 717–749. [Google Scholar] [CrossRef]
  87. Salman, I. Cognitive Biases in Software Quality and Testing. In Proceedings of the 38th International Conference on Software Engineering Companion, Austin, TX, USA, 14–22 May 2016; pp. 823–826. [Google Scholar]
  88. Çalıklı, G.; Bener, A.B. Influence of confirmation biases of developers on software quality: An empirical study. Softw. Qual. J. 2013, 21, 377–416. [Google Scholar] [CrossRef]
  89. Rainer, A.; Beecham, S. A follow-up empirical evaluation of evidence based software engineering by undergraduate students. In Proceedings of the 12th International Conference on Evaluation and Assessment in Software Engineering (EASE), Swindon, UK, 26–27 June 2008; BCS Learning & Development: Swindon, UK, 2008. [Google Scholar]
  90. Jørgensen, M.; Løhre, E. First Impressions in Software Development Effort Estimation: Easy to Create and Difficult to Neutralize. In Proceedings of the 16th International Conference on Evaluation & Assessment in Software Engineering (EASE 2012), Stevenage, UK, 14–15 May 2012; IET: London, UK, 2012; pp. 216–222. [Google Scholar]
  91. Kara, P.A.; Robitza, W.; Raake, A.; Martini, M.G. The Label Knows Better: The Impact of Labeling Effects on Perceived Quality of HD and UHD Video Streaming. In Proceedings of the 2017 Ninth International Conference on Quality of Multimedia Experience (QoMEX), Erfurt, Germany, 31 May–2 June 2017; IEEE: New York, NY, USA, 2017; pp. 1–6. [Google Scholar]
  92. Kara, P.A.; Robitza, W.; Pinter, N.; Martini, M.G.; Raake, A.; Simon, A. Comparison of HD and UHD video quality with and without the influence of the labeling effect. Qual. User Exp. 2019, 4, 4. [Google Scholar] [CrossRef]
  93. Kara, P.A.; Cserkaszky, A.; Martini, M.G.; Bokor, L.; Simon, A. The effect of labeling on the perceived quality of HDR video transmission. Cogn. Technol. Work. 2020, 22, 585–601. [Google Scholar] [CrossRef]
  94. Geyer, F.; Szakal, V.A.; Kara, P.A.; Simon, A. Cognitive-bias-induced differences in the perceived video quality of rugged and conventional smartphones. In Proceedings of the 2022 16th International Conference on Signal-Image Technology & Internet-Based Systems (SITIS), Dijon, France, 19–21 October 2022; IEEE: New York, NY, USA, 2022; pp. 592–599. [Google Scholar]
  95. Bouchard, S.; Dumoulin, S.; Talbot, J.; Ledoux, A.A.; Phillips, J.; Monthuy-Blanc, J.; Labonté-Chartrand, G.; Robillard, G.; Cantamesse, M.; Renaud, P. Manipulating subjective realism and its impact on presence: Preliminary results on feasibility and neuroanatomical correlates. Interact. Comput. 2012, 24, 227–236. [Google Scholar] [CrossRef]
  96. BT.500: Methodologies for the Subjective Assessment of the Quality of Television Images. 2023. Available online: https://www.itu.int/rec/R-REC-BT.500/en (accessed on 21 February 2025).
  97. Unity Real-Time Development Platform. Available online: https://unity.com/ (accessed on 21 February 2025).
  98. 2024 Essential Facts About the U.S. Video Game Industry. Available online: https://www.theesa.com/resources/essential-facts-about-the-us-video-game-industry/2024-data/ (accessed on 2 March 2025).
  99. Video Games Europe: Key Facts Report 2023. Available online: https://www.videogameseurope.eu/wp-content/uploads/2024/09/Video-Games-Europe-2023-Key-Facts-Report_FINAL.pdf (accessed on 2 March 2025).
  100. Wulf, T.; Rieger, D.; Kümpel, A.S.; Reinecke, L. Harder, better, faster, stronger? The relationship between cognitive task demands in video games and recovery experiences. Media Commun. 2019, 7, 166–175. [Google Scholar] [CrossRef]
  101. Large, A.M.; Bediou, B.; Cekic, S.; Hart, Y.; Bavelier, D.; Green, C.S. Cognitive and behavioral correlates of achievement in a complex multi-player video game. Media Commun. 2019, 7, 198–212. [Google Scholar] [CrossRef]
  102. Seyderhelm, A.J.; Blackmore, K.L. How hard is it really? Assessing game-task difficulty through real-time measures of performance and cognitive load. Simul. Gaming 2023, 54, 294–321. [Google Scholar] [CrossRef]
  103. Mitre-Hernandez, H.; Carrillo, R.C.; Lara-Alvarez, C. Pupillary responses for cognitive load measurement to classify difficulty levels in an educational video game: Empirical study. JMIR Serious Games 2021, 9, e21620. [Google Scholar] [CrossRef]
  104. Zhong, X.; Xu, J. Measuring the effect of game updates on player engagement: A cue from DOTA2. Entertain. Comput. 2022, 43, 100506. [Google Scholar] [CrossRef]
  105. Liu, K.; Samiee, S. Too much Patching? Protocological Control and Esports Player Counts. In Proceedings of the 58th Hawaii International Conference on System Sciences, Big Island, HI, USA, 7–10 January 2025; pp. 2635–2644. [Google Scholar]
  106. Claypool, M.; Kica, A.; La Manna, A.; O’Donnell, L.; Paolillo, T. On the Impact of Software Patching on Gameplay for the League of Legends Computer Game. Comput. Games J. 2017, 6, 33–61. [Google Scholar] [CrossRef]
  107. Anderson, K. Software Patches and Their Impacts on Online Gaming Communities; University of Colorado: Boulder, CO, USA, 2019. [Google Scholar]
  108. Del Gallo, R. The Politics of a Game Patch: Patch Note Documents and the Patching Processes in League of Legends. 2023. Available online: https://utd-ir.tdl.org/server/api/core/bitstreams/10cc834a-ff20-4187-86f0-f3ef71a3c196/content (accessed on 24 April 2025).
  109. Arora, A.; Krishnan, R.; Telang, R.; Yang, Y. An Empirical Analysis of Software Vendors’ Patch Release Behavior: Impact of Vulnerability Disclosure. Inf. Syst. Res. 2010, 21, 115–132. [Google Scholar] [CrossRef]
  110. Lin, Z.; Jiang, X.; Xu, D.; Mao, B.; Xie, L. AutoPaG: Towards Automated Software Patch Generation with Source Code Root Cause Identification and Repair. In Proceedings of the 2nd ACM Symposium on Information, Computer and Communications Security, Singapore, 20–22 March 2007; pp. 329–340. [Google Scholar]
  111. Webb, S.D.; Soh, S. Cheating in networked computer games: A review. In Proceedings of the 2nd International Conference on Digital Interactive Media in Entertainment and Arts, Perth, Australia, 19–21 September 2007; pp. 105–112. [Google Scholar]
  112. Jussila, A. Conceptualizing Video Game Updates. 2022. Available online: https://jyx.jyu.fi/bitstreams/b64f85e3-94c3-4544-99a8-e41c8c8f8991/download (accessed on 24 April 2025).
  113. Truelove, A.; de Almeida, E.S.; Ahmed, I. We’ll fix it in post: What do bug fixes in video game update notes tell us? In Proceedings of the 2021 IEEE/ACM 43rd International Conference on Software Engineering (ICSE), Virtual, 25–28 May 2021; IEEE: New York, NY, USA, 2021; pp. 736–747. [Google Scholar]
  114. Mertens, J. Broken Games and the Perpetual Update Culture: Revising Failure with Ubisoft’s Assassin’s Creed Unity. Games Cult. 2022, 17, 70–88. [Google Scholar] [CrossRef]
  115. Kara, P.A.; Bokor, L.; Sackl, A.; Mourão, M. What your phone makes you see: Investigation of the effect of end-user devices on the assessment of perceived multimedia quality. In Proceedings of the 2015 Seventh International Workshop on Quality of Multimedia Experience (QoMEX), Costa Navarino, Greece, 26–29 May 2015; IEEE: New York, NY, USA, 2015; pp. 1–6. [Google Scholar]
  116. Gu, K.; Liu, M.; Zhai, G.; Yang, X.; Zhang, W. Quality assessment considering viewing distance and image resolution. IEEE Trans. Broadcast. 2015, 61, 520–531. [Google Scholar] [CrossRef]
  117. Fang, R.; Wu, D.; Shen, L. Evaluation of image quality of experience in consideration of viewing distance. In Proceedings of the 2015 IEEE China Summit and International Conference on Signal and Information Processing (ChinaSIP), Chengdu, China, 12–15 July 2015; IEEE: New York, NY, USA, 2015; pp. 653–657. [Google Scholar]
  118. Amirpour, H.; Schatz, R.; Timmerer, C.; Ghanbari, M. On the impact of viewing distance on perceived video quality. In Proceedings of the 2021 International Conference on Visual Communications and Image Processing (VCIP), Munich, Germany, 5–8 December 2021; IEEE: New York, NY, USA, 2021; pp. 1–5. [Google Scholar]
  119. Harmat, L.; de Manzano, Ö.; Theorell, T.; Högman, L.; Fischer, H.; Ullén, F. Physiological correlates of the flow experience during computer game playing. Int. J. Psychophysiol. 2015, 97, 1–7. [Google Scholar] [CrossRef]
  120. Yeo, M.; Lim, S.; Yoon, G. Analysis of biosignals during immersion in computer games. J. Med. Syst. 2018, 42, 557608. [Google Scholar] [CrossRef]
  121. Schmidt, S.; Uhrig, S.; Reuschel, D. Investigating the relationship of mental immersion and physiological measures during cloud gaming. In Proceedings of the 2020 Twelfth International Conference on Quality of Multimedia Experience (QoMEX), Athlone, Ireland, 26–28 May 2020; IEEE: New York, NY, USA, 2020; pp. 1–6. [Google Scholar]
  122. Wiederhold, B.K.; Davis, R.; Wiederhold, M.D. The effects of immersiveness on physiology. In Virtual Environments in Clinical Psychology and Neuroscience; IOS Press: Amsterdam, The Netherlands, 1998; pp. 52–60. [Google Scholar]
  123. Wiederhold, B.K.; Jang, D.P.; Kaneda, M.; Cabral, I.; Lurie, Y.; May, T.; Kim, I.; Wiederhold, M.D.; Kim, S. An investigation into physiological responses in virtual environments: An objective measurement of presence. Towards Cyberpsychol. 2001, 2, 175–183. [Google Scholar]
  124. Halbig, A.; Latoschik, M.E. A systematic review of physiological measurements, factors, methods, and applications in virtual reality. Front. Virtual Real. 2021, 2, 694567. [Google Scholar] [CrossRef]
Figure 1. Screenshot of the single-input video game.
Figure 2. Implementation of a 250 ms delay for a 500 ms continuous input.
Figure 3. Screenshot of the continuous-input video game.
Figure 4. Temporal structure of the subjective tests.
Figure 5. Theoretical rating distribution based on the lack of objective differences.
Figure 6. Overall rating distribution of the subjective study.
Figure 7. Mean ratings of the test participants.
Figure 8. Number of “The same” (0) ratings provided by the test participants.
Figure 9. Rating distribution of tests where “not optimized” was the first label and “optimized” was the second label.
Figure 10. Rating distribution of tests where “optimized” was the first label and “not optimized” was the second label.
Figure 11. Rating distribution of the single-input game.
Figure 12. Rating distribution of the continuous-input game.
Figure 13. Rating distribution of games at easy difficulty.
Figure 14. Rating distribution of games at medium difficulty.
Figure 15. Rating distribution of games at hard difficulty.
Figure 16. Rating distribution at no added command execution delay.
Figure 17. Rating distribution at 50 ms added command execution delay.
Figure 18. Rating distribution at 150 ms added command execution delay.
Figure 19. Rating distribution at 250 ms added command execution delay.
