1. Introduction
The term game-based learning refers to the combination of learning content with the engaging aspects of games [
1]. ‘There is no general agreement among theorists on the definition of games, but many authors agree on their characteristics: they are rule based, following clearly defined rules of play; they are responsive, enabling player actions and providing system feedback and responses; they are challenging, often including an element of chance; the progress within a game is usually cumulative, reflecting previous actions; and finally games are inviting, motivating the player to engage’ Mayer, 2014, in [
2] (p. 3). Research in the last decade has focused on the impact of digital game-based learning on different learning outcomes [
3,
4,
5], with the cognitive and emotional involvement of the players at the forefront [
6].
The presence and design of reward systems (or positive reinforcement) in games aim to keep a student engaged [
Feedback monitors students' progress and provides them with an assessment of it, usually at the end of a learning section [
8]. Such elements incorporated into a digital game are part of the emerging design aspects of game-based learning, specifically the design of the incentive system [
9]. This aspect of system design can consist of elements aimed at extrinsic rewards, such as points, stars, badges, or trophies [
10,
11], or at intrinsic motivation with the game, directly linked to progress in it. As in Ref. [
7], we use the term ‘intrinsic’ to refer to the enjoyment of the game itself and ‘extrinsic’ to refer to the enjoyment caused by the actual reward. The authors suggest that a player enjoys the extrinsic reward, which has the potential to enhance intrinsic motivation with the game. The incentive systems are considered essential in game-based learning, as they provide performance feedback to reinforce students’ short-term goals and provide additional motivation to pursue tasks that would otherwise be less interesting [
9].
There is an extensive literature on the effects of traditional extrinsic rewards and praise in the classroom and school system, which shows that most extrinsic rewards can undermine the ultimate goal of learning [
12] and intrinsic motivation with the task [
13]. Tangible rewards such as stickers, stars, praise, and awards [
12] can provide external motivation to complete a task, but undermine long-term intrinsic motivation when used as a means of control, especially for children [
13].
To date, little is known about how young children interact with and respond to such aspects of incentive system design while playing a digital educational game. In their systematic review of eye-tracking research in multimedia learning, the authors of [
14] found that most studies have been conducted with university students, providing little empirical evidence for younger primary school children [
9,
15]. Eye-tracking technology provides more detailed information than verbal self-report, and gaze tracking is very useful in discovering how a person interacts with the visual stimuli [
16]. Accordingly, the current study aimed to investigate how elementary school pupils interact with the virtual incentive system during a digital mental math game called MathleTIC. We recorded and analyzed the eye tracking of eight students while they were playing the game. We also collected their perceptions of the experience with the incentive system design and the experience with the item in general, using questions within a semi-structured interview following the eye-tracking session. Although our findings are based on a small number of respondents, the results provide a detailed picture through eye tracking and indicate which incentive designs have an influence on pupils’ engagement with the digital mathematics item. This will contribute to the knowledge base on the potential effects of different types of extrinsic rewards on motivation in educational games [
9].
1.1. Incentive System Design in Digital Game-Based Learning
Digital game-based learning (DGBL) has an engaging and challenging character, while setting educational goals for students and promoting knowledge acquisition [
3,
17]. In game-based learning, it is important that players face challenges that are close to their skill level [
18]. Researchers and digital game designers consider which elements of the game contribute to processing the content that has to be learned, from a cognitive perspective [
11]. The use of games to achieve educational goals distinguishes DGBL from gamification, which is the use of game design elements in non-game contexts to promote engagement with specific tasks or assignments [
10,
Under the theoretical umbrella of DGBL, approaches such as ‘serious educational games’ [20], ‘rich digital game-based learning environments’ [21], and ‘kinesthetic educational games’ [22] all use game mechanics and game components to achieve educational goals.
Rewards or positive reinforcement include unlocking a new level, digital coins, treats, or an increased score. Penalties, punishments, or negative reinforcement include stopping the game, reducing strengths, or losing rewards [
23]. Penalties in games are usually small and temporary, rewarding replay and persistence [
24] and providing a balance between excitement and anxiety. As part of the incentive system, feedback is most often summative, represented by a performance chart provided to the student at the end of the game [
23,
25]. In a well-designed game, learning or assessment occurs naturally [
26], supported by rewards and penalties during the game, and reinforced by formative or summative feedback at the end of the game [
26].
The design of such incentive systems can take many forms, but most research has examined the use of badges, points, levels, and leaderboards [
9]. Badges are external rewards that represent achievements that meet the requirements for earning the badge. Points and levels provide performance feedback for self-assessment towards reaching learning goals, while leaderboards create competition between players.
Understanding how students respond to different types of rewards with the potential to sustain positive experiences can clarify how the design of the incentive system affects extrinsic motivation and longer-term learner engagement with the game.
The variation in the features of incentive system design motivates our research question: how do elementary school students interact with the design features of an incentive system in DGBL?
The present study is related to three research directions, as identified by [
2]. These are usability research, by identifying the elements in the overall game design that work or do not work as expected in how students make use of the game; design-based research, which aims to provide recommendations for refining the design of games by clarifying how different features of the incentive system design actually work; and affective research, to capture the affective functions of the game and their impact on pupils’ engagement, in order to further clarify the potential impact of games on learning.
1.2. The Role of Tangible Rewards in the Traditional Classroom Context
The work of [
12,
13,
27] and others has shown that rewards and praise can have a short-term positive extrinsic effect on motivating students to perform a task, but not a long-term positive effect on intrinsic motivation with the task, under certain circumstances.
External, tangible rewards can stimulate students’ interest and lay the foundation for certain study habits [
28]. This seems to be particularly true when students are engaged in memorizing facts [
12] or, more generally, in tasks that are boring or not intrinsically motivating [
13]. While rewards do not appear to undermine intrinsic motivation under these circumstances, they do seem to do so when the tasks are interesting, especially for young children [
13].
More specifically, students may be discouraged from taking risks and may change their willingness to act when not directly influenced by incentives [
29]. In addition, students may focus on the external rewards in test-based situations and fail to develop the ability to self-motivate and to focus intrinsically on the task [
12]. The long-term aim of rewards should be for extrinsic motivation to stimulate interest in a new area, which can later develop into intrinsic motivation [
29]. Furthermore, children will respond differently to the incentive depending on their ability to perform the task [
29] and the controlling nature of the feedback [
13]. For students who have the skills and knowledge, the reward may motivate them to try harder, whereas for the students who do not have the required skills and knowledge, the lack of the reward may cause them to give up [
29]. Finally, tangible rewards can undermine intrinsic motivation, especially if students would have completed the task without the reward [
29], as is the case with interesting tasks [
13].
Reward-driven competitive situations, such as leaderboards or rankings, can undermine intrinsic motivation, especially in controlling contexts with high pressure to win [
30]. Competitive conditions have a complex effect on intrinsic motivation, being mediated by perceived competence and interpersonal context, among other factors [
13,
30]. When studied in game-based applications, leaderboards appeared to increase motivation for those at the top, but also to cause disengagement for those at the bottom [
31].
In educational practice, rewards typically provide students with only verification feedback (whether the response was correct or not) and not elaborated feedback (providing cues to the correct response) [
32]. Elaborated feedback would allow students to understand why a behavior was correct or incorrect by providing guidance on how to improve it [
32,
33].
The present paper also reflects on how these positive and negative aspects of traditional rewards are reflected in the incentive system of the digital game-based item and the students’ reactions to the incentive system and its design.
2. Materials and Methods
2.1. Item Description
MathemaTIC is a multilingual (German, French, Portuguese, and English), personalized digital learning environment for mathematics launched in Luxembourg in 2015 as part of the national policy ‘Digital Luxembourg’ [
34]. In 2019–2020, more than 25,000 students and 2600 teachers were actively using MathemaTIC. National subject matter specialists developed the content of the learning environment, in line with the Luxembourg curriculum. For primary education, MathemaTIC covers all of the curriculum topics of the last two grades of primary education (ages 8–11), through learning, practice, and assessment items within the following modules: numbers and operations, geometry, data management, measurement, and problem solving.
The game-based item called MathleTIC (see
Figure 1) belongs to the “Numbers and Operations” module of the learning environment. This competitive game-based learning item focuses on practicing, reinforcing, and assessing existing knowledge and skills through mental computation, providing the opportunity to deepen and automate skills through repeated application. Students can choose from four operations: addition, subtraction, multiplication, and division, when not guided by the teachers.
Each operation is measured on 10 levels of increasing difficulty. The student must start at level 1 in each operation and cannot move to the next level without mastering the previous one. The teacher can unlock certain levels, if he/she feels that the student can start the game-based item at a higher level. There is no set limit to the number of consecutive attempts, successful or unsuccessful, that a student can complete before the item ends.
A robot, which appears on the screen only at certain moments of the activity, chases a figure/avatar while the student solves the mental computation tasks. There are 15 tasks per level to be solved under time pressure. If the student answers all 15 tasks correctly in a level, he/she receives the maximum of three stars. Two stars are awarded for 10 correct answers (66%), and one star is awarded for five correct answers (33%). No stars are awarded for more than 10 incorrect answers. One star is enough to unlock the next level of the game, while the level must be repeated if the student does not receive any stars. When the game ends at any point, students can see their own performance, the number of stars earned for the level, and the number of levels completed. Teachers can also see each student’s performance level and assign new tasks or unlock new levels.
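The star-awarding rule described above can be sketched as a small function (an illustrative sketch only: the function names, and the assumption that each threshold covers the range up to the next one, are ours, not part of MathleTIC’s actual implementation):

```python
def stars_earned(correct_answers: int) -> int:
    """Map the number of correct answers (out of 15 tasks per level)
    to the number of stars awarded, per the thresholds described above."""
    if correct_answers >= 15:   # all 15 tasks correct
        return 3
    if correct_answers >= 10:   # at least 10 correct (66%); assumed range 10-14
        return 2
    if correct_answers >= 5:    # at least 5 correct (33%); assumed range 5-9
        return 1
    return 0                    # more than 10 incorrect answers: no stars


def level_unlocked(correct_answers: int) -> bool:
    """One star is enough to unlock the next level."""
    return stars_earned(correct_answers) >= 1
```

Under this reading, a student must answer at least a third of the tasks correctly to progress; otherwise the level must be repeated.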
MathleTIC uses an incentive system design, combined with appealing colors and shapes. The incentive system is essential to achieve the educational goal of this item, given the repetitive nature of the tasks required in mental math. The goal is to use extrinsic rewards to keep players engaged throughout the game in order to create an intrinsically rewarding experience.
Figure 2 shows the activity screen, where the reward elements (the appearing star and the star bar) are marked with green boxes and the penalty elements (the figure area and the robot) are marked with red boxes. The game contains no written text or written instructions. In this respect, it is close to a gamified learning item.
The number of stars earned during the game represents the reward design. For every five correct answers, a star appears on the road and flickers as it flies into the star bar. The stars are collected in the star bar. A maximum of three stars can be earned per level, depending on the number of correct answers given within the allotted time.
Time pressure is introduced as a design element by the robot chasing the figure, which can end the game by catching the figure, if the answers given are wrong. The longer a student takes to give an answer, the closer the robot approaches the figure. Therefore, the area of the figure where the distance to the robot is indicated should be watched during the game, as the approaching robot indicates the possible end of the game. In other words, the robot is the most important link to the exciting game mechanics in terms of the limited time available to successfully complete the task and receive the three stars, or be penalized at the end of the game. The game also has a ‘training mode’ where the students can practice the skills without time pressure.
A summary of the reward and penalty features of the game, representing the incentive system in the activity, is presented in
Table 1.
Figure 3 shows the feedback screen for the end of a level in the game, with the feedback elements marked in blue. Minimal written information is provided while the feedback screen is displayed, as language is not an important element of this item.
The performance feedback provided is individual and summative, as shown in
Table 2, with the aim of informing the students about their performance in the task, as a source of self-regulation of short-term goals and the desire to continue. For one age group, the feedback is given in terms of the level achieved, and for the other age group, in terms of the number of stars.
There is a variation in how the performance feedback is presented for the different age groups: either by indicating the level achieved or by the number of stars. However, both screens provide verification feedback, without elaborating on which answers were correct or incorrect and how the score could be improved.
In addition, as an overview at the end of the level, the students can also see their performance in terms of the number of stars acquired for each specific level, for all their attempts in MathleTIC, as a tool for self-assessment and progression within the competency. See
Appendix A for a view of how students see their cumulative performance in the game-based item, for all levels accessed, in both age groups. The student can see only his or her performance in the game, and no information about the performance level of other students, no leaderboards or rankings.
2.2. Participants
Our study was conducted in two primary schools in Luxembourg. Both schools had already worked with MathemaTIC, so the students were familiar with its features and its functions. For methodological and ethical reasons, we did not analyze all students for whom we had valid data, but selected a sample of students for this study.
We used three different criteria for this selection. First, we selected only students who had at least minimal experience with MathleTIC, i.e., who had completed at least one level of the game. This ensured that students could work with the item in a natural way, with a minimum level of familiarity with its functioning. Secondly, considering the two different age groups participating in this study, we selected both 8- to 9-year-old students and 10- to 11-year-old students. These age groups correspond to the last two grades of primary school. The younger students were asked to work on the first level of multiplication, while the older students were asked to work on the fourth level of multiplication, so that the difficulty was comparable and adapted to their expected level of knowledge. Thirdly, we considered the amount of time spent on the item during the eye-tracking session, aiming for a minimum of one minute for each student where possible.
Using these three criteria, we obtained a sample of eight students for the study. Finally, we also marked the performance level that the students achieved in the game during the study. We classified the students’ performance into low (none or one star) (Low P) and high (two or three stars) (High P).
2.3. Methods of Data Collection and Analysis
We recorded eye movements with the Tobii X3-120 (120 Hz) screen-mounted eye tracker, in conjunction with the Tobii Studio software. The eye tracker was attached to the bottom of a laptop screen. The eye-tracking system was not integrated into the game, but was used as an external data collection tool for this study. We also conducted a semi-structured interview with each student at the end of the eye-tracking session, asking relevant retrospective questions about the design of the incentive system. Details of the data collection protocol can be found in
Appendix B of this paper. The study design took into account the age-specific characteristics of the students, such as the development of self-regulation and greater sensitivity to rewards in controlled contexts [
13]. Therefore, we conducted the study at their school and asked the students to play the game as usual. The researcher did not directly observe the students and did not judge their performance at the end of the session. Students had to follow a set order of the items to be accessed during the eye-tracking session.
We used Noldus ObserverXT software to analyze the data collected. With Observer XT, we analyzed the screen recordings (activity and feedback screens) containing the eye-tracking data. To ensure the comparability of the selected participants, we decided to code one minute and at least one trial for each participant. This was necessary because the students took different amounts of time to complete the tasks and had repeated trials, with only one of the selected students spending less than a minute on the task.
We created a specific coding scheme for the analysis software Noldus Observer XT. This was developed based on the Areas of Interest (AOI) presented in
Appendix C. Classifying AOIs is the most common approach to assessing attention to particular objects: every gaze or fixation falling within a defined distance of an object is counted as a contact with that object [
35]. Our defined AOIs focused on the elements of reward, penalty, and feedback. Because viewing these incentive elements does not require deep cognitive processing, we included all gaze lengths when calculating the duration (no cutoff). Later in the paper, when we use the term ‘gaze’, we refer to both gazes and fixations.
For each category of AOIs, we considered the time in seconds that each AOI was displayed (AOI visible duration) and the cumulated time in seconds that each student focused on each AOI (AOI gaze duration). For comparability, we calculated the proportion (expressed as a percentage) of each student’s total gaze/fixation time on each AOI relative to the total time that the AOI was displayed on the screen.
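The proportion described above can be expressed in a few lines (an illustrative sketch with hypothetical names; the actual coding and analysis were performed in Noldus Observer XT, not with this code):

```python
def gaze_proportion(gaze_durations_s, visible_duration_s):
    """Percentage of an AOI's on-screen time that a student spent gazing at it:
    cumulated gaze duration / visible duration * 100.

    gaze_durations_s   -- individual gaze/fixation durations on the AOI (seconds)
    visible_duration_s -- total time the AOI was displayed on screen (seconds)
    """
    if visible_duration_s <= 0:
        # AOI never shown, e.g., no star ever appeared for this student
        return None
    return round(100.0 * sum(gaze_durations_s) / visible_duration_s, 1)


# e.g., 1.3 s of cumulated gaze on a star bar visible for 60 s gives roughly 2%
```

A `None` result distinguishes students for whom an AOI never appeared from students who simply never looked at it (0%), a distinction the Results section relies on.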
The students’ responses to the semi-structured interviews were analyzed qualitatively without the use of software, following the general principles of content analysis. The responses were organized under the design elements of the incentive system, namely rewards, penalties, and performance feedback, using a deductive approach to these topics. Students’ general self-perceptions of their experience with the item during the eye-tracking session were characterized based on the degree to which the experience was enjoyable or stressful and the details involved. Additional impressions were coded and are also reported.
3. Results
The results section is organized around our research question and reports on the students’ interaction with the different elements of the incentive system design. The goal is to identify which elements of the incentive system design in a game-based item have the potential to trigger the extrinsic motivation of elementary school students to play a game. The use of eye tracking makes it possible to accurately capture the focus of visual attention during the item.
3.1. Rewards
Figure 4 (and
Table 3) shows the results of the eye tracking, showing the proportion (as a percentage) of the time (as duration) that students looked (as gaze) at a reward element out of the total time that a reward element was displayed on their screen during the task. In this case, the reward system is represented by stars that appear and can be ‘collected’ in the star bar. These stars signaled a possible successful completion of the specific level. We classified students’ performance as low (none or one star) (Low P) and high (two or three stars) (High P).
The star bar at the top right of the game screen is displayed throughout the game (again, see
Figure 2). Even though it is always visible, only one student looked at the star bar, for 2 percent (1.3 s in total) of the time it was displayed on the screen (60 s). The appearing and flickering stars are displayed very briefly, and only one student looked at them, for 11 percent (0.5 s in total) of the time they were visible on his/her screen (4.7 s in total). A star appears only when it is earned, which means that two of the low-performing students in our sample never saw this reward element, as indicated in
Table 3.
It is notable that although the reward elements were displayed for at least six of the eight students, hardly any of the students actually looked at them; only two of the high performers paid even brief attention to them.
The same minimal explicit focus on the reward design elements that appeared during the activity was also evident in the semi-structured interviews. Only two of the eight students explicitly referred to the star bar or the star that appeared. One student said that he/she liked the task because ‘you can get three stars’ and another student said ‘the stars make us better and more powerful’.
3.2. Penalty
Figure 5 (and
Table 4) shows the results of the eye tracking, showing the proportion (as a percentage) of the duration students gazed at a penalty element out of the total duration the penalty element was displayed on their activity screen.
The penalty elements were displayed only slightly longer than the reward elements. However, students gazed at the penalty elements more than at the reward elements, even if only briefly. The figure is visible for the entire duration of the activity, while the robot appears only briefly, and its appearance usually ends with failure to complete the level (again, see
Figure 2).
Figure 5 shows that all the students in our sample gazed at at least one penalty element. All the students gazed briefly at the figure (1–6 percent of the time it was visible), probably to monitor the robot and its distance from the figure.
The robot is the most expressive design element of the penalty incentive system, and its appearance also signals a possible quick end of the game. We can see that the robot appeared very often on the activity screen for the younger students, even within a single attempt at a level and even if they were high performers in the end (see
Table 4: Number of times the robot appeared). The appearance of the robot indicates a decrease in the time available to provide an answer. For the older group, the robot appeared infrequently, mainly for one low-performing student. As with the star, the younger and older students rarely gazed at the robot. Notably, one younger low-performing student spent up to 21 percent (3.3 s in total) of the time it was visible (15.9 s in total) gazing at the robot.
The design elements of the penalty were frequently mentioned during the semi-structured interview, with the students actually referring to the item as ‘the one with the robot’, ‘the calculator with the robot’, or ‘the challenge/competition against the robot’. One aspect mentioned by the students was that the task is ‘exciting’ because they have to calculate quickly, considering that the robot is trying to catch them. Another aspect mentioned was the design of the penalty incentive. One student said that ‘the robot should look more evil’, while another recommended giving more emphasis to the running figure through improved graphics or ‘decorations’. In general, students liked the graphics and the design of the game’s incentive system.
3.3. Feedback
Figure 6 (and
Table 5) shows the results of the eye tracking, showing the duration of the students’ gaze at the performance feedback as a proportion (as a percentage) of the total time the feedback element was displayed on their screen. Students decide how long to look at the feedback screen—there is no time limit for this performance feedback part of the game.
We present here only the eye-tracking coding results for the first feedback screen after the first trial of the game, considering that five out of eight students had only one trial of the game within the first minute selected for coding.
We see some very interesting results. In
Figure 6, we see that the younger students who received the performance feedback as an indication of the level, regardless of their performance level, did not spend most of their feedback time looking at the performance feedback information (see
Table 5). Only one high-performing student looked at the performance feedback, for 64% of the time the feedback screen was visible. In contrast, the older students, who received their performance feedback information in the form of the number of stars, spent most of their feedback time looking at it, regardless of their performance. More specifically, the amount of time spent looking was between 50 and 79 percent of the time for the low performers and around 55 percent for the high performers (see
Table 5 for the gaze duration in seconds). Furthermore, in general, the older students spent less time on the feedback screen when compared with the younger students (see
Table 5, Duration visible).
Comparing the percentages in
Figure 6 with those in
Figure 4 and
Figure 5, we can say that when students are not pressured by time or by sequential tasks, they pay more attention to the incentive elements while they are displayed on the screen. During the semi-structured interview, the students did not explicitly refer to the feedback screen and its design, but they did mention a few times that they liked the game because they could ‘collect’ as many as three stars.
In general, all eight students’ perceptions of the game-based item indicated that they enjoyed playing the game and that it was exciting; although some found it slightly stressful, it was always considered more fun than stressful. All students liked the game-based item, and most of them identified it as their favorite item in the MathemaTIC learning platform. Students stated that ‘it was really fun and nice’, that ‘I like it a lot, more than the other items’, and that it was ‘very cool’. When asked about the need to calculate quickly, most students said that ‘it was fast, but it should always be like that’, or ‘we have to be fast and that is good, and it is fun’, or that they ‘enjoy the competition and the tasks with time pressure’. In addition, some students indicated that they would ‘like to go further and compete with a friend and whoever finishes first wins’ or to create student groups ‘where you can see what the others in the group are doing’, in line with the leaderboard incentive aspect.
Students enjoyed the task because of the excitement created by the time pressure and the possibility of achieving the highest level of performance indicated by three stars. This implies that they do pay attention to their performance level, which is best explicitly indicated by the number of stars. The robot, as a penalty incentive design, is more visible to the students because it is associated with the time pressure and will determine their performance level. Students suggested various design ideas to support the running figure. This could also be associated with the competitive aspect of the game, but as a reward incentive. However, it is the robot, as a penalty incentive design, that is associated with the ‘exciting’ time pressure and ultimately with the whole game and its name.
4. Discussion
In this paper, we investigated the attention elementary school students pay to the incentive design of a digital game-based item, taking into account their age group and performance in the game. The incentive system design introduced during the activity includes rewards and penalties that, together with exciting game mechanisms, are intended to increase engagement [
2] and the practice of various educational skills [
22] to attain specific educational goals.
Regarding the design of the reward incentive system, represented here by the star bar and appearing stars, we found that students did not visually focus on these elements during the activity, except for one student who looked briefly at the appearing star. The reward elements are intended to provide positive reinforcement during the game [
11] and we see that students are aware of them without visually disrupting their play of the game. From the semi-structured interviews at the end of the eye-tracking session, we understand that the students are aware of the star bar and the incentive reward system, which induces a self-expressed extrinsic motivation to earn the stars and to continue playing the game. Both younger and older students reported similar positive perceptions of the game and of the design of the incentive rewards system through the stars, in line with the previous findings of Refs. [
10,
19].
We found similar results regarding the design of the game’s penalty incentive system, in this case represented by the running figure, and in particular by the robot chasing the figure, whose approach signals the possible end of the game. We see that during the activity, slightly more attention is paid to the design of the penalty incentive system, with limited attention to the area of the figure and the robot. From the semi-structured discussions with the students, we see that they are all aware of the robot chasing the figure and of the penalty role of the approaching robot. The students reported that they were excited to play the game because of the time pressure imposed by the running robot, which they eventually identified with the whole game. Both younger and older students reported similar positive perceptions of the game and of the incentive system design of penalties. The robot proves to be a very well designed and well-perceived emotional agent that engages but does not disrupt the game, which are key characteristics of an effective emotional agent in educational game design, as noted by Ref. [
36]. During the coded interval, we found that students appreciated the incentive system without focusing their attention on it during the game. This could also be related to their familiarity with the star system as a reward mechanism and to a level of mastery of the simple rules and game mechanics of this specific item. It is important to reiterate that the students were familiar with the game and needed to complete the tasks quickly in order to continue playing.
The elements on the feedback screen were treated differently in terms of attention, with both younger and older students paying a great deal of attention to them. The focus of their attention also seems to depend on the design of the incentive system. Performance feedback presented as the number of stars earned attracted more visual attention than feedback presented as the level achieved together with other relevant information. Furthermore, during the semi-structured interview, it was evident that both the younger and the older students referred to the accumulation of stars as an indication of their performance outcome, even though for the younger students this was ultimately presented through the level achieved. This may be because the overall performance across all levels was represented by the number of stars for both age groups, as shown in
Appendix A.
We can say that the incentive design of the game achieved its goal of supporting extrinsic motivation to play. However, we see some possible negative influences of the incentive design when we reflect on how traditional tangible rewards function. First, even though the game does not display ranking information between students, some students expected the rewards to introduce a competitive aspect. In the semi-structured interview, a few students expressed their expectation to compete with others and to see their performance ranked. In line with Refs. [
12,
33] and others, such an externally motivated need for competition can stifle intrinsic motivation for learning and create anxiety and competitive pressure. In this particular case, it could stem from the prominent presence of the incentive system combined with the weaker presence of game rules, a combination that is also characteristic of gamified items. Second, as is common in games, the feedback provided here is summative and does not delve into the reasons why students succeeded or failed at the task [
32]. We see that, within the one minute selected for coding, the low-performing students made more attempts at the game than the high-performing students, indicating the possible risk of students repeating the task without knowing how to improve their performance. Students are quick to repeat the game after receiving the summative feedback, possibly because no elaboration on their performance is provided.
On the positive side, a student can only start the game at level 1 unless the teacher unlocks the levels, and can only progress to the next level after mastering level 1, and so on. However, there is no limit to the number of consecutive attempts at a level, whether successful or not.
5. Conclusions
Returning to our research question, which is to identify the elements of the incentive system design that have the potential to trigger students’ extrinsic motivation, we can state that both the reward system and the penalty system have this potential. The reward system, expressed in terms of the number of stars ‘collected’, motivates students to perform and to try harder, especially when the number of stars is explicitly stated in the performance feedback. Its extrinsic role is so pronounced that two students suggested adding competitive elements akin to leaderboards, to allow students to compare their performance results. This student recommendation demonstrates the engaging role of competition in game-based learning and supports the findings of Ref. [
18], who emphasized the pedagogical potential of game-based math competitions. The penalty system, represented by the chasing robot, also has the potential to trigger students’ extrinsic motivation through the robot’s role in ending the game. Students associate the robot with the time limit and the pressure it represents, which for them is a positive aspect of the game. All students reported that they found the game more fun than stressful, with some appreciating the time pressure the game provides. This finding again shows the importance of designing incentive systems and emotional agents so that they are effective in providing engagement and enjoyment [
36], and ultimately learning through repeated engagement with the game.
Although research on game mechanics and the use of game-based learning in classroom practice has been popular in recent years, it is now necessary to bridge the gap between academic research and classroom practice [
37]. In terms of usability research [
2], we can suggest that the overall game design has the expected effect. Students stay engaged in the game as expected because of the ‘cool’ and ‘fun’ design of the incentive system and the game characters. Even if students do not look at the incentive system design during the activity, as shown by the eye-tracking data, we see that the game does achieve its goal of motivating students to play. Some authors [
7] have found that players can enjoy both the rewards and reward mechanics, implicitly responding to the motivation that such reward systems provide. In terms of design-based research, we believe that the design of the game is appropriate to keep students engaged. The ‘robot’, as an emotional agent and implicit symbol of time pressure, becomes an explicit symbol of the game, with the game being known as ‘the robot’ or ‘the calculator with the robot’. Finally, in terms of affective research, the semi-structured interviews illustrate that students find the game exciting and fun to play, especially through its game mechanics and incentive system design [
38]. Furthermore, as many other authors have found [
10,
18,
22,
36], all students in this study express that they enjoy playing the game because ‘it is a game’ on the computer. These findings are also relevant for teachers’ use of DGBL in the classroom, especially for educational games that can be integrated into practice [
39].
Reflecting on the positive and negative aspects of traditional rewards, we can say that the clear role of the incentive system and its design in triggering extrinsic motivation is welcome here, considering that mental math is a repetitive cognitive task that students might otherwise ignore [
13]. However, we do find that the incentive system triggers a competitive aspect, with students wanting to compete with others and see their performance ranked, which does not support intrinsic motivation and continuous learning [
30] in specific tasks. The incentive system and its design trigger extrinsic motivation, but also activate aspects that could hinder a smooth transition to intrinsic motivation for learning. However, the evidence supporting these findings is still under debate: numerous meta-analyses [
13,
40] have challenged the results of three different meta-analytic reviews of this literature, which themselves produced divergent results. In addition, recent studies show that young children still prefer tangible rewards (candy) to social rewards or harder tasks when rewarded for completing a mathematical task [
41].
Several limitations of this study need to be considered when discussing the results presented so far. The first relates to the sample size. Given the selection criteria used (e.g., equal numbers per age group and performance level), our sample was limited to eight students. We advise caution in generalizing the results to this and other age groups, although it seems unlikely that the results of this study would be radically different with a larger number of respondents. Still, we do not know what the results would be if more students were included in the study design, and this remains a valid limitation of the present study.
The second limitation relates to the specifics of this game design. The feedback screen provides the same type of information, but indicates performance first by the number of stars earned and second by the level achieved. Unfortunately, the groups that received the two different forms of performance outcome information also differed in age. Refs. [
13,
28] noted that younger children differ from older students in how they respond to external rewards and motivation, which may be related to the amount of time they spent on the feedback screen. However, given that we did not find any age-group-related differences in the way students responded to the other elements of the incentive system design or in their self-reported perceptions of the game, it seems unlikely that age determines the different lengths of time students spent looking at the feedback screen. Still, we cannot definitively rule out this possibility.
In addition, we would have liked to reflect on the emotions that students displayed while playing this game. Unfortunately, we encountered difficulties in validly detecting the facial emotions of all students in this sample for the entire coded section. Such difficulties were caused by specific facial features (e.g., glasses, color) or by disruptive actions during the coded period (e.g., getting too close to the eye tracker, covering the face). Based on the few facial analyses we were able to perform, we found that positive expressions were associated with the first successful trial of the game, and that negative expressions increased in intensity with each unsuccessful trial. A larger sample of eye-tracked students could mitigate this limitation in future research and provide more insight into the students’ associated emotions.
Finally, although the semi-structured interviews provided rich qualitative insights, the reliance on closed-ended questions and the small number of retrospective questions may have limited the quality of insights that could be obtained directly from the students.
In terms of recommendations for future research, as well as for game developers and practitioners, this study allows for reflections in terms of usability, design, and affective research.
First, accompanying the eye-tracking session with a semi-structured interview using open-ended questions and a retrospective approach is very useful for obtaining a more complete picture of the students’ experience with the item.
This combined approach shows that, although the learning game is time-limited and self-competitive, students reported enjoying this aspect of the game as a game mechanic.
Although the design of the incentive system does not receive explicit visual attention during the activity, students report being aware of it and of its role in the outcome of the game, which also suggests that students are not distracted or bothered by it during play. It may be that students experience the incentives even without visually acknowledging them because of their familiarity with this item and its specific game mechanics. Future research could also test how students visually interact with the incentive system design in a new item unfamiliar to them.
Accompanying such an item with a “training mode”, where students can also practice the skills without time pressure, could be a good additional option to reduce the possible negative effect of rewards on intrinsic motivation.
This brings us to summative feedback, which may be a key element in timed games. Students pay the most attention to the feedback screen. This leads us to conclude that the feedback screen is essential for providing immediate performance information with the potential to support self-regulation and sustained practice. Future research could extend the investigation to the use of elaboration feedback to enrich the educational aspect of the game, especially at the end of the entire game.
Finally, we found that the design of the incentive system triggered students’ extrinsic motivation to continue playing the game, which is a desired effect when using digital learning items. However, future research needs to investigate whether incentive systems in digital game-based learning have the same potentially stifling effect on intrinsic motivation as previously found for traditional tangible rewards under certain conditions.