1. Introduction
Lack of motivation and inability to engage learners to achieve desired learning objectives are among the top barriers to learning [1]. Improving and maintaining learner motivation has consequently emerged as a key challenge in education. Gamified learning refers to technologies that attempt to engender motivation toward learning activities, typically by employing design strategies found in games [2,3]. Usually, the reason for gamifying an activity is that learners’ motivation to engage in it is low, and gamification is seen as a way to strengthen it. Many empirical studies have demonstrated that gamification can inspire motivation for engagement [4]. However, it is difficult to draw definite conclusions about gamification’s motivating potential in education [5], as existing research yields contradictory findings [4,5,6]. Since the existing literature presents an ambivalent picture of the effect of gamifying learning environments, several researchers (e.g., [5,7,8]) argue that gamification practice would benefit from a better understanding of the effect of individual game design elements on learners’ experience and motivational outcomes.
Many learning activities are centered on skill development, where students are not merely memorizing concepts and principles but applying them to solve problems. Pivotal to skill-based learning is the notion of practice. To develop the needed skills, learners must spend considerable time practicing problem-solving and honing subject-related skills in a variety of hands-on scenarios. Many available educational tools for practicing are crafted to support skill development through self-study [9], which makes it hard to mandate and control their use. Self-study tools typically support recommended rather than required activities, so their use rarely counts towards the student’s course grade, which results in low usage [9]. Thus, although freely available, the practice problems that accompany self-assessment tools are rarely fully utilized. This calls for strategies to boost student interest in and engagement with such tools.
Gamification techniques are particularly popular in contexts where rewards are delayed, such as education; they attempt to provide short-term rewards that motivate users to stay engaged in the activity [10]. The use of virtual currency (VC) to promote engagement in practicing is an example of reward-based gamified learning. Although VC falls into the reward category, it offers a more complex motivational mechanism: earning the reward enables obtaining other desirable objects. It thus has the potential to appeal to both intrinsically and extrinsically motivated learners, as it can serve several functions. Students are rewarded with VC for successfully performing tasks, which creates a simple economy: work earns VC, and VC is spent on desirable benefits. In a learning–practicing environment gamified with VC, students can earn VC based on the number, level of difficulty, and correctness of completed practice quizzes. They can spend it on benefits, possibly course-related, such as deadline extensions or homework re-submission, as decided by the instructors. Thus, in such an environment, there are two separate but related numerical values: a score, which is a permanent indication of progress, and VC, which is earned and spent on (course) benefits. In this form, earning virtual currency evokes a perception of gaining benefits with a positive impact on course outcomes, and it is thus more extrinsic in nature. Still, extrinsic motivation can be beneficial where it serves as a basis for the gradual development of intrinsic motivation [11]. On the other hand, based on Ryan and Deci [12], it can be assumed that VC can enhance intrinsic motivation when it is awarded for the accomplishment of specific challenges. Depending on learners’ motivational drivers, VC can be perceived as feedback, an immediate reward, a progress indicator, an accomplishment, or an incentive for practicing.
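The two-value economy described above (a permanent score plus spendable VC) can be sketched in a few lines of code. This is an illustrative model only: the rule values, shop items, and prices below are invented for the example and are not those used in any of the case studies.

```python
# Minimal sketch of a virtual-currency economy for a practice environment.
# All thresholds, item names, and prices are illustrative assumptions;
# in practice, instructors define their own earning and spending rules.
from dataclasses import dataclass, field

@dataclass
class Student:
    score: int = 0   # permanent progress indicator
    vc: int = 0      # spendable virtual currency
    purchases: list = field(default_factory=list)

def complete_challenge(student, difficulty, percent_correct, threshold=90):
    """Example earning rule: score always grows with work done, but VC is
    awarded only for sufficiently correct attempts, scaled by difficulty."""
    student.score += difficulty * percent_correct // 10
    if percent_correct >= threshold:
        student.vc += difficulty * 5

# Example spending rule: course benefits priced by the instructor.
SHOP = {"deadline_extension": 40, "homework_resubmission": 60}

def buy(student, item):
    price = SHOP[item]
    if student.vc < price:
        return False                   # not enough VC; nothing changes
    student.vc -= price                # VC is spent; score stays permanent
    student.purchases.append(item)
    return True

s = Student()
complete_challenge(s, difficulty=2, percent_correct=95)  # earns score and VC
complete_challenge(s, difficulty=3, percent_correct=70)  # score only, no VC
```

The key design point the sketch captures is the separation of the two values: spending VC never reduces the score, so progress feedback and the reward economy do not interfere with each other.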
Virtual currency is an underexplored game element, especially in educational settings, so our goal was to study it systematically across different contexts to obtain potentially generalizable results. We chose a multiple-case, exploratory study design since it allows comparisons across several settings [13]. Many authors (e.g., [14,15]) emphasize that multiple-case designs are needed for creating a generalizable theory under the replication logic of positivist case research.
The multi-case study reported here was designed with the goal of deepening the knowledge of the individual effect of virtual currency on learners practicing in a gamified learning environment. It was designed as a sequence of three case studies utilizing a single gamification element, VC, and guided by the same research questions but conducted in different contexts regarding the subject, type of university, student population, and academic background, as well as using different schemes for earning and spending VC. The following questions provided a framework for our exploration of evidence present in all case studies and guided the research focus.
RQ1: Does virtual currency encourage more active engagement in voluntary out-of-class practice?
RQ2: Does virtual currency improve students’ academic performance?
RQ3: Do gamified activities using virtual currency improve intrinsic motivation?
The case studies were conducted between 2019 and 2021. While the results of each study were published upon its completion (see [16,17,18]), the goal of this paper is to cross-examine and aggregate all results and to perform some additional analyses. It thus consolidates the acquired knowledge on the individual effect of the game element virtual currency on learners, which has not been studied before in such scope and depth. This also improves our understanding of how the context influences the impact of VC on learners and how to better tailor its implementation to the specific context.
2. Related Work
A growing number of software systems supporting teaching and learning make use of game design elements to increase learner engagement. This practice is commonly known as gamified learning [5,19]. Many game elements, including points, badges, leaderboards, competitions, and avatars, are used to engage and motivate learners in various learning activities [20]. One element popular in games but less common in gamified learning is virtual currency (VC) [21]. It is used to reward players and create an in-game economy. Virtual currencies are most powerful when complemented by a virtual marketplace where learners can spend their earned virtual bucks. Learners earn VC by exhibiting robust performance in gamified activities and challenges; the stronger the performance, the more VC a learner earns [22]. Many games incorporate rewards that can be redeemed for unlocking or buying objects (e.g., new characters, tools, weapons, stages, etc.). Utilizing rewards in such a way can enhance players’ motivation and engagement due to the possibility of acquiring useful objects and tools and using them to progress and perform better in the game [23]. While this idea has been transferred to gamification in educational contexts, typically in the form of virtual currency [5,23], its effect on learners’ engagement and academic outcomes is underexplored.
O’Donovan’s gamified course [24] is among the first examples demonstrating the use of VC, together with a storyline, badges, and a leaderboard, in a university-level gamified course. Although the study concluded that the in-game currency was very well received, its effect was not statistically confirmed. Another early attempt at using VC was Vassileva et al.’s study of the effects of adding VC, along with some social motivators, to a university peer help system to incentivize students to help their peers [25]. Reports on the use of VC were favorable, but in general, when gamification is driven by several game elements, isolating the effect of individual elements is problematic.
Gamifying a Computer Science course with virtual currency (BitPoints), used together with levels and stars, was proposed by Lopes [26]. BitPoints could be earned by overcoming obstacles associated with challenges (practical assignment exercises), where the amount of VC earned was proportional to the level of difficulty of the challenge. The available BitPoints could be used for purchasing information or tools (for use in solving other tasks). An explicit evaluation of VC’s impact on student learning was not performed. An alternative kind of VC, in the form of coins, used for gamifying a Software Testing course, was studied by de Jesus et al. [27], but with inconclusive results. Outside of computing subjects, Duolingo [28] is a successful example of an app with an effective virtual currency implementation. Users are rewarded for each task they successfully complete in this language-learning app and can purchase bonus lessons, health points, and more with the in-app currency (lingots). Specifically, Munday [29] describes the use of Duolingo in college-level second language courses, where the Duolingo VC was used together with points, streaks, and crowns. Lingots were awarded for learning skills, going up levels, and long streaks (playing many days in a row) and could be used to unlock a bonus skill, a timed-practice option, a progress quiz, or a power-up. Another version of virtual currency, eCoins, was used in a Statistics course [30] in combination with levels, progress feedback, time pressure, and pathways. The number of eCoins awarded for a successful attempt was a function of the experience points and the difficulty level of the attempted task. The earned eCoins could be used to remove parts of a question, or an entire question, from an activity test set. Virtual currency, as a feature for enhancing engagement, has also been studied in a MOOC environment [31], where redeemable points were reported as the second most engaging gamification mechanism. The use of a similar version of VC, called in-course redeemable rewards, along with badges, was reported in [32]. Redeemable rewards were issued to students for completing predefined tasks and could be exchanged for various privileges (e.g., unlocking exclusive learning content, extra attempts and/or more time on quizzes, and extended assignment due dates). Nonetheless, subsequent studies [23] did not demonstrate a significant increase in student engagement.
A major limitation of the works above is that they report either preliminary studies with inconclusive results or informal observations without an explicit evaluation of the impact of VC on student engagement or specific learning outcomes. Therefore, they provide limited data for more diverse and fine-grained analysis.
With a more methodical approach, Snow et al. [33] studied how VC impacts in-system performance and learning outcomes in the context of an intelligent tutoring system (ITS). Students earn iBucks through their interactions with the ITS and use them to unlock game-based features. The study revealed that students who were more interested in spending their earned currency did not perform well and had lower scores on the learned skills. Still, gamifying an ITS is quite different from gamifying an academic course.
A more systematic exploration of the effect of VC on learners’ behavioral and psychological outcomes began with the work of Dicheva et al. [34]. In a Data Structures course gamified with badges, a leaderboard, and VC, students could earn and spend VC based on rules specified by the instructor. The earning rules were based on the amount, level of difficulty, and correctness of the solutions of completed problem-solving exercises. Students could spend their VC on purchases of deadline extensions, re-submission of homework, etc. The idea behind this form of gamification economy was to stimulate students to practice more (by incentivizing them with purchasable course-related benefits) in order to attain the intended learning outcomes. The reported results confirmed that the targeted motivational effect was achieved, but again without isolating the motivational effect of VC from the other elements used to gamify the course. This early work was followed by three consecutive studies focused on examining the effect of VC on learners enrolled in a Discrete Math course [16], a Computer Networking course [17], and a Discrete Structures course [18]. Unlike the previous studies, these empirically examined the individual effect of VC (the single gamification element used) in three different contexts (subject, student population, academic background, and earning and spending rules) as a step towards more generalizable results. The three studies showed that using VC to gamify practicing increased student engagement. The idea of the present work is to integrate the three studies in a framework providing ground for further multi-perspective analysis, leading to more generalizable knowledge about using VC for gamifying practicing. As the three cases were chosen to differ in their gamified environments, they contain unexplored data that can help improve the understanding of how the context influences the impact of VC on learners’ engagement in practicing and how to better tailor its implementation to the specific context to achieve the intended outcomes.
4. Results
Our main purpose here is not to present the complete original findings from each case study but rather to focus on a cross-case analysis. In this section, we compare how VC impacts student engagement, motivation, and academic performance in the three case studies. For student engagement, we look at the use of VC by the students in the experimental group and compare the number of warm-up challenges taken by the students in the experimental and comparison groups. We used these numbers as a measure of engagement, while students’ course grades were used as a measure of student academic performance. We used well-established motivational scales to measure student motivation (see Section 3.3).
4.1. The Use of Virtual Currency
We first looked at the use of VC in the three case studies. Table 2 presents information on the earning transactions (how students earned their VC) and spending transactions (how they spent their accumulated VC).
This information was extracted from the OneUp transaction log, where a VC earning transaction is recorded whenever a student satisfies a VC earning rule defined by the instructor. When considering the numbers in the table, note that some instructors opted to give generous quantities of VC for completing learning activities, but the corresponding “prices” in the course shop were also high. As the table shows, in each of the cases there is a group of students who either have no earning transactions at all or only one or two. For Case Studies A and B, the percentage of students in this group is very similar (16% and 17%), while it is higher for Case Study C, where the group also includes students who had two transactions.
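The per-student tallies behind this grouping can be reproduced from a transaction log with a simple aggregation. The sketch below assumes a hypothetical log format (a list of records with student, kind, and VC fields) rather than the actual OneUp schema:

```python
# Count earning transactions per student from a (hypothetical) transaction
# log and compute the share of the low-activity group, i.e., students with
# at most `cutoff` earning transactions, mirroring the grouping above.
from collections import Counter

def low_activity_share(log, roster, cutoff=2):
    """Fraction of enrolled students with <= cutoff earning transactions."""
    counts = Counter(entry["student"] for entry in log
                     if entry["kind"] == "earn")
    low = [s for s in roster if counts.get(s, 0) <= cutoff]
    return len(low) / len(roster)

# Invented sample data: s1 earns three times, s2 once, s3 never logs in.
log = [
    {"student": "s1", "kind": "earn",  "vc": 10},
    {"student": "s1", "kind": "earn",  "vc": 15},
    {"student": "s1", "kind": "earn",  "vc": 5},
    {"student": "s2", "kind": "earn",  "vc": 10},
    {"student": "s1", "kind": "spend", "vc": 20},
]
share = low_activity_share(log, roster=["s1", "s2", "s3"])
```

Note that the roster, not the log, drives the denominator: students who never logged in (and therefore never appear in the log) must still be counted in the low-activity group.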
The high percentages of students who earned VC for taking a warm-up challenge with a result of at least 90% correct (55% in Study A), or for taking at least five warm-up challenges in one topic with results of at least 85% correct (37% in Study B) or 90% correct (40% in Study C), show the students’ persistence in improving their challenge scores by re-taking challenges.
Interestingly, a substantial number of the students who earned virtual currency did not make any purchase in the course shop: approximately one-third of the students in Case Study A (33%) and Case Study C (37%), and 16% in Case Study B. The distribution of the spending transactions by category shows that students favored buying extra credit points for a test or homework, dropping the lowest lab or quiz score, and, in Study A, an extension of the homework deadline. To obtain insight into the reasons for a particular student to make a particular purchase, we asked them directly via a pop-up question in OneUp at the time of purchase (see Section 5).
4.2. Effect of VC on Student Engagement
We compared the number of warm-up challenges taken in the comparison and experimental groups for each individual study to assess how virtual currency impacted students’ engagement in out-of-class practicing. In Table 3 below, we present the total number of unique warm-up challenges completed, as well as the total number of attempts, which includes a student taking the same warm-up challenge multiple times to improve their score. The average number of taken challenges per student for the experimental and comparison groups is also included in the table. Details about the distribution of the students over different ranges of taken challenges for the individual cases are presented in [16,17,18]; these distributions provide better insight than the average number of taken challenges per student.
It is difficult to compare the numbers of completed challenges across the different courses since the challenges created by the instructors consist of different numbers of problems; in Study B, a challenge typically included 5 problems, while in Study A, 10 problems were included. Moreover, instructors in Study A and Study C used dynamic problems, where one problem generates a number of similar problems. However, the difference between the experimental and comparison groups in the same case study clearly shows that in all three studies, the students from the experimental groups completed more unique warm-up challenges, as well as more challenge attempts.
While the experimental group in Study A took close to 50% more warm-up challenges and challenge attempts than the comparison group, the increase in student engagement with OneUp in Study B and Study C is striking. For Study B, the number of warm-up challenges taken by the experimental group is close to four times (373%), and the number of challenge attempts close to five times (470%), those of the comparison group. For Study C, the increase in the number of warm-up challenges taken by the experimental group is close to 274%, and the number of challenge attempts is 300% higher than that of the comparison group. Thus, all three case studies answered RQ1, “Does virtual currency encourage more active engagement in voluntary out-of-class practice?”, positively.
4.3. Effect of VC on Student Motivation
Pre-test and post-test surveys, including the Basic Psychological Needs Satisfaction Scale, were conducted with the experimental groups in all case studies at the beginning and at the end of the corresponding semester. A paired-sample t-test was conducted to explore potential pre-test to post-test differences in the autonomy, competence, and relatedness of the students, to answer RQ3, “Do gamified activities using virtual currency improve intrinsic motivation?”. Table 4 presents the results of the t-test.
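For readers who wish to reproduce this kind of analysis, the paired-sample t statistic can be computed from scratch as follows (equivalent to `scipy.stats.ttest_rel` up to the p-value lookup). The scores below are invented for illustration and are not the data reported in Table 4:

```python
# Paired-sample t statistic for pre-test vs. post-test scores on the same
# students. The scale scores here are made-up illustrative values.
from math import sqrt
from statistics import mean, stdev

def paired_t(pre, post):
    """Return (t, df) for the mean pre-to-post difference."""
    diffs = [b - a for a, b in zip(pre, post)]
    n = len(diffs)
    # t = mean difference divided by its standard error
    return mean(diffs) / (stdev(diffs) / sqrt(n)), n - 1

pre  = [4.1, 3.8, 4.5, 3.9, 4.2, 4.0]   # e.g., a needs-satisfaction factor, pre-test
post = [4.4, 4.0, 4.6, 4.3, 4.5, 4.2]   # same students, post-test
t, df = paired_t(pre, post)
```

The resulting t is then compared against the t distribution with df degrees of freedom to obtain the p-values reported in Table 4.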
As the table shows, the pre-test to post-test effects for all dependent variables (autonomy, competence, and relatedness) for the participants in both Study A and Study C are not significant. The factor-level mean scores imply that participants came to the study with strong ratings for all three factors, and those strong ratings stayed stable from the pre-test to the post-test. These durable, rather than malleable, intrinsic motivation indicators suggest that using virtual currency as a gamification element did not significantly alter students’ basic psychological needs satisfaction.
In contrast, the t-test results for Study B indicate that the pre-test to post-test difference on the relatedness factor of the Basic Need Satisfaction scale is significant (p = 0.02). While this means that participants who took both the pre-test and post-test felt more positively about how they related to other students in class, due to the nature of our intervention (gamifying out-of-class practice with only virtual currency), we could not attribute the change in the relatedness factor to it.
Thus, the results of our multi-case study consistently showed that, contrary to some expectations, using virtual currency did not have significant effects on the students’ intrinsic motivation (RQ3).
4.4. Effect of VC on Student Performance
Our remaining research question (RQ2) was meant to explore whether the gamification intervention had any impact on student academic performance, measured by the students’ final course grades.
In Study A, in addition to the final course grades of the experimental and comparison groups, the students’ grades on Test 2 were also compared. This was carried out because Test 2 was conducted under the same condition (in person) for both groups, while, due to the COVID-19 pandemic, the course for the experimental group was moved online. The results showed that the mean Test 2 grade was 78.68 for the comparison group and 81.67 for the experimental group (a difference of 2.99 points), while the mean final grade was 82.37 for the comparison group and 85.53 for the experimental group (a difference of 3.16 points). While the improvement is not significant, in the experimental group the final grades increased, and the number of Ds and Fs decreased markedly in comparison to the Test 2 grades.
Study B compared the course quiz grade scores of the groups as well as the final course grades. The results showed that the mean score of the quizzes was 79.40 for the comparison group and 84.32 for the experimental group (t-test p-value of 0.13), while the mean score of the final grades was 85.85 for the comparison and 87.73 for the experimental group (t-test p-value of 0.48). Thus, the results show that the improvement in academic performance is not significant.
In Study C, although the comparison of course grades did not show a significant improvement, there were some improvements for the experimental group. The total percentage of As and Bs combined was 86% in the experimental group vs. 79% in the comparison group, and the total percentage of Ds and Fs combined was 5% in the experimental group vs. 11% in the comparison group. In the experimental group, there were no Ds and fewer Fs than in the comparison group.
The results of the three case studies show some, but not significant, improvement in the students’ academic performance. However, it is noticeable that the number of Ds and Fs decreased across the studies. This is important since one of the major goals of the instructors is to reduce course dropouts and failures.
After answering the initial research questions, we were interested in exploring relationships between students’ final course grades and their intrinsic motivation, as measured by three factors of the Intrinsic Motivation Inventory: value/usefulness, interest/enjoyment, and perceived choice [42]. For Case Study A, we conducted an exploratory stepwise regression analysis to determine which of these factors most strongly predicted participants’ final course grades; value/usefulness, interest/enjoyment, and perceived choice were the predictor variables, and the participant’s final grade was the outcome variable. This exploratory analysis illuminated value/usefulness as a significant predictor of final course grades above and beyond interest/enjoyment and perceived choice. Given this result, and considering issues with multicollinearity between the three predictor variables in our datasets, we ran a series of simple regressions for Case Studies B and C using the same predictor and outcome variables to determine whether value/usefulness would hold as a strong predictor of final course grades. The results are shown in Table 5.
As Table 5 shows, the only significant correlation found was for the factor value/usefulness in Study A. It emerged as a significant predictor of student course grades, explaining 32% of the variance in course grades. The other two variables, interest/enjoyment and perceived choice, were excluded from the final stepwise regression model in Study A. For the participants in Study B and Study C, neither interest/enjoyment, perceived choice, nor value/usefulness appeared as a significant predictor of their final course grades.
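A simple regression of the kind run for Case Studies B and C can be sketched as follows; the survey scores and grades below are invented for illustration and do not correspond to any study’s data:

```python
# Simple least-squares regression of final grade on a motivation factor,
# reporting R^2 (the share of grade variance explained), as in Table 5.
from statistics import mean

def simple_regression(x, y):
    """Return slope, intercept, and R^2 for the model y ~ a + b*x."""
    mx, my = mean(x), mean(y)
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    b = sxy / sxx                      # least-squares slope
    a = my - b * mx                    # intercept through the means
    ss_res = sum((yi - (a + b * xi)) ** 2 for xi, yi in zip(x, y))
    ss_tot = sum((yi - my) ** 2 for yi in y)
    return b, a, 1 - ss_res / ss_tot   # R^2 = explained variance share

value_usefulness = [3.0, 4.0, 5.0, 2.0, 4.5, 3.5]   # predictor (survey scale)
final_grade      = [78, 85, 92, 70, 88, 80]         # outcome (course grade)
slope, intercept, r2 = simple_regression(value_usefulness, final_grade)
```

An R^2 of 0.32, as found for value/usefulness in Study A, would mean that 32% of the variance in final grades is accounted for by that single predictor.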
5. Discussion
Comparing the results associated with the research questions in the three case studies, we found many commonalities but also some differences. This section elaborates on them.
Concerning students’ interaction with OneUp, in all cases there was a group of students who either did not log in to OneUp even once or only took one or two warm-ups at the beginning of the semester (possibly out of curiosity). Some of these students were even awarded VC by the instructor for class activities, but none of them spent the awarded VC. Interestingly, some of them had final course grades either between 85 and 89 or between 65 and 69 and might have benefited from using the awarded virtual currency; however, they never logged in to OneUp to take advantage of this opportunity. A possible explanation for the students in the first group (those who never logged in) is that they were confident in their knowledge and did not feel they needed additional practicing; thus, having never logged in to OneUp, they were also unaware that they had virtual currency to spend.
A possible explanation for the second group can be rooted in expectancy value theory [46]. The related expectancy–value–cost model postulates that achievement behavior is mostly affected by the expectancy of success (a learner’s task-specific expectations for success), subjective task values (the enjoyment obtained from a task, its usefulness, and its importance), and the cost of participating in the task (required effort and time, lost opportunities because of the participation, and emotional costs) [47]. In a follow-up study, we used the EVC scale [48] to examine whether expectancy-value beliefs predict engagement in gamified activities [11]. The corresponding findings indicate that the second group consists mostly of students who perceived the gamified activity as of low value, had low expectancy for success, or deemed the activity too difficult, resulting in unwillingness or inability to engage in the gamified activity [11].
Another interesting result was that in all case studies, a sizeable number of the students did not spend any of their earned virtual currency, and there was variation across the studies among those who made purchases. While the majority of students from Study B spent most of or even all their earned VC, in Study A and Study C, almost half of the earned VC remained unspent. Possible reasons for not spending VC include:
Students did not earn sufficient VC to buy desired items.
Desired items were time-sensitive, and students missed the deadline for buying them. For example, they might have been allowed to buy a 2-day lab assignment extension no later than one day before the assignment deadline.
Students collected VC with the intention of making purchases at the end of the course but then realized that they did not need any of the course benefits offered.
Students withdrew in the middle of the course when the course was converted to an online mode because of the COVID-19 pandemic (Study A).
Students were told by the instructor that all unspent VC would be credited toward their lowest-scoring lab, assignment, or project, so they knew that they were not actually losing it (Study C).
Concerning the reasons for purchases, the students from Study B (50%) and Study C (60%) gave “worry for their performance” as the main reason. These percentages are much higher than the one reported for the same reason in Study A (18%). In contrast, Study A reported the highest percentage for “need more time” (27%), compared to only 4% for the same reason in both Study B and Study C. In a search for possible moderators of the observed differences, we looked at differences in the demographics of the student populations that participated in the studies: in Study B and Study C, European American students were predominant and the universities were private, while in Study A, African American students were predominant and the university was public. Certain cultural and economic specifics and differences could explain some behavioral tendencies of the students. The students in Study B and Study C might have planned more carefully, so they did not need an extension for homework. While most of the students in both Study A and Study C worked full time (for varied reasons), those in Study C were more mature (~67% were in the age group 26–45, and ~17% were older than 46). This may explain why the students from Studies B and C did not favor buying deadline extensions for submitting homework and labs, while those from Study A did.
A good indicator of student engagement in a learning activity that is neither required nor graded, such as practicing (in our case, taking warm-up challenges), is how regularly and persistently students do it. Our analyses show that in all three case studies, the gamification intervention increased student engagement. The results of Study B and Study C confirmed our expectations by showing significant increases. A likely explanation for the non-significant increase in student engagement in Study A is the disruption of the normal instructional process by the COVID-19 pandemic: for the experimental group in Study A, the course was moved online in the middle of the semester, which negatively impacted the work of both students and instructors.
Beyond the significant increase in student engagement in practicing in a learning environment gamified with virtual currency, the results of this multi-case study do not demonstrate a significant change either in the intrinsic motivation of the students or in their final course grades. We hypothesized that by spending more time practicing and studying, encouraged by the use of virtual currency, students would significantly improve their performance [49]. However, the analysis does not support this hypothesis. While there could be different reasons for this result, such as the timing of practicing, knowledge retention, and the level of similarity of the practice problems offered in OneUp to the graded course quizzes and tests, one possibility is more closely related to the nature of the game element used in this study, virtual currency. It is possible that some of the students engaged in practicing just to earn VC. So, instead of studying before practicing and taking time to analyze and understand their errors when a solution is incorrect, they look at the correct solution and immediately re-take the challenge to obtain a higher score. Such behavior may signify that, for these students, practicing is instigated by the desire to gain course benefits rather than to learn. Gaming the system is a well-known problem in educational environments and is not an easy one to prevent. Our future agenda includes work on approaches to preventing such behaviors.
In addition to looking into the relationships between intrinsic motivation and students’ final course grades when using a practicing environment gamified with virtual currency, we were interested in how factors such as school type (HBCU vs. PWI), gender (male, female), or race (AA, EA, etc.) impact the relationship between distinct warm-ups/warm-up attempts taken and final course grades. We collected data related to these factors (see Section 3.3) in order to perform a cross-institution analysis of their impact. Since the results from the multi-case study did not indicate a strong correlation between students’ intrinsic motivation and their course grades, we did not analyze the impact of those factors on it.
A series of bivariate correlations were run to determine whether there was a significant relationship between grades and warm-up attempts, and between grades and distinct warm-ups, for each of the schools involved in Study A, Study B, and Study C. There was a moderate, positive relationship between the number of warm-up challenges students took and their final course grades for students from Study A, r(28) = 0.55, p = 0.00: participants in Study A who completed more warm-up challenges also earned better end-of-semester grades. The bivariate correlation between grades and the number of warm-up attempts was also significant, r(28) = 0.419; again, the positive valence and moderate magnitude indicated a moderate, positive relationship between participants’ grades and the number of warm-up attempts completed. Students who had better grades completed more warm-up attempts. However, the same relationship was not apparent for Study B or Study C. Next, a series of linear regressions (simple and multiple) was run to determine the intensity of the relationships between the predictor variables and the outcome variable (final course grades). The regression indicated a significant effect of warm-ups on grades, such that more warm-up attempts resulted in higher grades, R2 = 0.07, F(26) = 6.65, p = 0.01. We deduced that this relationship might be related to school type, considering the significant bivariate correlation between warm-up attempts, distinct warm-ups, and course grades for Study A’s participants; we thus split the data by institution type (HBCU vs. PWI) and re-ran the analyses. We found that the relationship between warm-ups and grades held for PWI students (b = 0.00, SE = 0.00, p = 0.03) but not for HBCU students (b = 0.01, SE = 0.00, p = 0.07).
There was also a significant effect of distinct warm-ups and final course grades such that taking more distinct warm-ups was related to higher end-of-course grades for both HBCU and PWI students, R2 = 0.07, F(26) = 6.47, and p = 0.01.
Next, a multiple regression for basic psychological needs (autonomy, competence, and relatedness) was run to determine the intensity of their relationship to final course grades. Only competence emerged as a significant predictor of final course grades (b = 0.48, SE = 0.19, p = 0.01). We then split the data by school type and found that school type (HBCU vs. PWI) does not impact the relationship between basic psychological needs and final course grades. However, when the data were split by gender, we found that for men, competence significantly predicted final course grades (b = 0.70, SE = 0.23, p = 0.00).
When split by race, an interesting finding emerged from the data. The number of warm-up challenges for European Americans predicted final course grades (R2 = 0.08, F(26) = 3.79, p = 0.05); however, this was not the case for the number of distinct warm-ups. Logically, engagement in the performance domain is a significant predictor of grades, R2 = 0.14, F(26) = 9.07, p = 0.00, which was significant for PWI students (b = 0.44, SE = 0.14, p = 0.00) but not for HBCU students (b = 0.41, SE = 0.40, p = 0.32).
Essentially, the results of this study shed further light on the situational factors that may impact the expected engagement and motivational effect of gamified learning activities.