1. Introduction
A critical concern for user researchers in Human–Computer Interaction (HCI) is how to recruit participants for studies—how many people to recruit, how much detail to specify about them during the study design phase, and how to find them. Even where authentic project work is built into curricula, there is limited opportunity for students to practise recruitment. Due to constraints of time and budget, they typically recruit a convenience sample of friends and family (
Bernstein et al., 2011;
Caine, 2016). This means that students do not have the opportunity to develop and implement a recruitment strategy, an essential skill for user researchers.
By situating learning tasks in contexts that reflect “the way the knowledge will be used in real life” (
Herrington et al., 2014, p. 403), authentic learning confronts students with “the same problem-solving challenges in the curriculum, as they [experience] in their daily endeavours” (
Herrington et al., 2014, p. 402). Games can be an accessible, low-risk, engaging and authentic tool for teaching (see, for example,
Khaldi et al., 2023;
Meriläinen & Piispanen, 2022) when they are carefully designed to model specific skills for a specific context. Recognising the gap in educational games for HCI, and specifically in games that teach about participant recruitment, we developed, produced and evaluated a card-based game for use in small group classes to teach students about recruiting study participants. The game introduces students to participant recruitment and provides an opportunity to participate in a simulated recruitment activity. It prepares students to design and implement participant recruitment plans that address participant sampling, budget, and the practical constraints of time and access.
Our Research Question asks whether a tabletop boardgame can authentically support HCI education about participant recruitment. This paper contributes the Recruitment Rush game as an artefact, together with reflections on the process and value of using a game for instruction. These are based on three key activities: (1) reflections by the design team; (2) evaluation by educators and experienced HCI researchers, comprising graduate students and academics; and (3) evaluation by currently enrolled students in graduate and undergraduate subjects that focus on evaluation methods. These highlight, in particular, the value that tabletop games can offer to HCI education in delivering authentic, game-based learning that simulates real-world methods.
In the following sections, we outline the state of research across three key areas: the importance of authenticity and accuracy in games for education; the use of games for education; and the use of games in HCI education.
1.1. Authenticity and Accuracy in Games for Education
Our work on Recruitment Rush reflects the authenticity offered by introducing ‘real-world’ scenarios that students will need to navigate in their professional work, within the safe (and engaging) space of a game-based environment. Such authenticity is particularly important, note
Ney et al. (
2014, p. 132), “in fields that are difficult to teach because students do not relate the learning goals to their personal experience or learning project.” They present a three-dimensional model of authenticity in simulation games which juxtaposes External or Real-World Relevance, Internal or Gameplay Relevance, and Learning Relevance. Here,
external relevance relates to how the model reflects reality, and whether learners feel that they are being “prepared to react adequately in real professional situations” (
Ney et al., 2014, p. 134).
Rogers et al. (
2022, p. 2) define realism in this context as occurring in settings where “the thing being signified in the reproduction process is the real world”. This dedication to realism contrasts with the approach taken by
Mochocki (
2021), who notes that it may be more important that a game
feel authentic (“authenticity-of-feelings”) than that it actually
be accurate in its depiction of events or processes (“authenticity-of-facts”) (p. 953). Together, these approaches suggest that an authentic simulation must at least
feel real or realistic to learners—through what
Petraglia (
1998, p. 11) describes as “preauthentication”.
Gameplay relevance requires that the game represent “a logical sequence of events” and provide a consistent and coherent experience (
Ney et al., 2014, p. 133).
Learning relevance requires that learners take on the setting of the game “with the feeling that it is relevant, or meaningful” in the learning context; the lessons must be transferable to similar problems and settings (
Ney et al., 2014, p. 133).
In practical implementations, an Extended Authentic Learning Framework (EALF) has been applied in the design of educational games. The EALF can be read against broad definitions of authenticity (providing meaningful opportunities for students that connect with their prior experiences or with ‘real-world’ problems) as well as against authenticity in terms of educational game development.
Safiena and Goh (
2022) used the nine principles of authentic learning outlined by Herrington (
Herrington, 2006;
Herrington et al., 2014) as design guidelines for a game about identifying hazards. They found that students identified game interaction and guidance as the most important game design and authentic learning factors when evaluating their game. This suggests that these elements are key for the delivery of authentic learning. Even when “actual history doesn’t take place” (
Stirling & Wood, 2021), games can authentically represent the setting of an historical event and promote students’ engagement with history.
Table 1 summarises and illustrates the connections between Ney’s model, the EALF, and Herrington’s principles of authentic learning.
More recently,
Levin et al. (
2025) examined how authenticity can be created in simulations used in teacher education. They highlight the value of physical (including setting, facilities, and props), contextual (representativeness), and experiential (structure, flexibility, reactions elicited) elements of a scenario developed for a form of role-play.
1.2. Games for Education
Games and other playful artefacts have been used in education settings for hundreds of years. As early as the 1700s, engravers like John Spilsbury were using jigsaws to “dissect” maps of the world which were then used as geography teaching tools, possibly based on early designs by French educator Madame de Beaumont (
Historic Royal Palaces, 2015;
Norgate, 2007). Similarly, wargames have a more than hundred-year history as tools for teaching military strategy (
Enstad, 2022;
Hagen, 2022;
Smith, 2010), and more recently computing topics (
Haggman, 2019;
López-Fernández et al., 2021;
North, 2016;
Papadakis, 2020) and modern history (
MacDougall & Faden, 2016;
Reynaud & Northcote, 2015).
A diverse range of games have been used in preschool, primary, secondary and tertiary education. Game-based learning can immerse students in authentic scenarios, developing knowledge and skills to support real-world activities (
Herrington et al., 2014). Researchers have examined the alignment between games and educational settings, exploring what makes them effective and how to enhance these effects (
Plass et al., 2015). A systematic review of the literature on the use of computer games in primary education, for example, found that game-based learning approaches were widely used across the curriculum (
Hainey et al., 2016). In secondary education, computer games are used across a diverse range of subjects including mathematics (
Fadda et al., 2022), history (
McCall, 2022), science (
Kara, 2021), and foreign language acquisition (
Acquah & Katz, 2020;
Meriläinen & Piispanen, 2022;
Peterson et al., 2022). These continue to be primarily subject-specific and focused on imparting explicit knowledge. A further application of games, however, is to develop skills such as computational thinking (
Sun et al., 2023) and problem-solving (
Adachi & Willoughby, 2013;
Araiza-Alba et al., 2021), as well as build student motivation both in specific subjects and more broadly (
Fadda et al., 2022;
Malouf, 1988). Many of these claimed skills align with graduate attributes, “an orienting statement of education outcomes used to inform curriculum design and the provision of learning experiences at a university” (
Barrie et al., 2009). This provides a meaningful connection to higher education policy as a justification for the use of these games in universities, even beyond the subject-specific pedagogical benefits (
Hill et al., 2016).
Definitions of game-based learning frequently explicitly mention “computer games” (e.g.,
Hainey et al., 2016;
Tang et al., 2009), excluding games that do not involve computers or other devices: “Usually it is assumed that the game is a digital game, but this is not always the case.” (
Plass et al., 2015). There has however also been significant work on the use of boardgames and tabletop role-playing games in education settings. In HCI, boardgame research has focused on the development of specific games to teach topic knowledge (e.g., robotics design (
Collins & Šabanović, 2021), gut health (
Pasumarthy et al., 2021) or quantum computing (
Weisz et al., 2018)) or for specific instrumental purposes (e.g., a boardgame for use in co-design workshops with children (
Gennari et al., 2019)), as well as on the user experience and cognitive work of boardgame play with or without hybrid components (
Farkas et al., 2020;
Rogerson et al., 2018;
Sidji et al., 2023). A systematic review on the use of boardgames as interventions in medical disciplines found that they can improve understanding (“educational knowledge”) when used for health education, improve cognitive function, enhance interpersonal interactions, and may contribute to improving players’ mental health by increasing motivation and moderating symptoms of depression and anxiety (
Noda et al., 2019). Another systematic literature review examines the use of simulation games, which model authentic situations and approaches, for education. To be effective, these must be matched to target groups in difficulty, familiarity with the topic, and cultural and linguistic understanding (
Alf, 2022). They should be structured to be easy to learn but offer unpredictability and complexity at a level that suits the learners, and should, essentially, be moderated by teaching staff. Meaningful learning, as well as other positive benefits, is found in many different types of game.
1.3. Human–Computer Interaction Education
Games are well-explored in HCI, but an early paper found that only 40% of surveyed research on games and play used games operatively—that is, “as an instrument or tool for achieving external (i.e., non play or non-fun) goals” (
Carter et al., 2014). This included 20 papers, or around 11% of the reviewed corpus of 178 papers, which used games for ‘Education and Learning’. Although there has been considerable research in this area since then, much of it has focused on the user experience of games in non-HCI education settings (
Zuo et al., 2020), on the use of game design features or gamified learning platforms to motivate student learning (
Silveira, 2020;
van Roy et al., 2018), on game design education (
Wyeth et al., 2018), and on games as the subject of project study or teaching (
Roldan et al., 2020;
Santana-Mancilla et al., 2019).
Despite this interest in games as pedagogical tools, little work has addressed the use of games for HCI education. Researchers have examined the use and design of a videogame (
Santana-Mancilla et al., 2019) as the subject of a usability engineering process (
Greenberg, 1996) and as the subject of a co-design project (
Roldan et al., 2020), and the use of badges to motivate students in an HCI course (
Silveira, 2020), but there is limited work that explicitly presents an educational game as a pedagogical tool to teach concepts specific to the HCI curriculum (de Souza Lima & Benitti, 2019). We have identified only three prior games that have been developed specifically to teach concepts from HCI, all of which primarily focus on heuristic evaluation.
The Usability Game (
F. B. V. Benitti & Sommariva, 2015) is a computer game which tells the story of a software development company and its attempts to improve the usability of its products. Student players are ‘employed’ as usability engineers and work through a series of tasks including requirements analysis, the design of a software prototype, and heuristic evaluation. The game was shown to improve student performance in requirements analysis and heuristic evaluation. The same authors also developed
UsabiliCity (
F. Benitti & Sommariva, 2012), a game which appears to focus on the usability life cycle.1
Like
The Usability Game,
Heureka focuses on the teaching of usability heuristics (
Sobrino-Duque et al., 2022). It appears to use a multiple-choice, quiz-like function where the user is invited to select an image that best represents a given heuristic. Unlike
The Usability Game, however,
Heureka showed no measurable effect on students’ learning. These three games focus on a very narrow aspect of the HCI curriculum, suggesting a significant gap for developing games that teach foundational HCI skills and concepts. While
The Usability Game appears to adopt an authentic learning approach,
Heureka is strictly a quiz game.
2. Materials and Methods
In this section, we contextualise our approach to game design and outline the design of Recruitment Rush, before we discuss the methods used to evaluate it.
2.1. Game Design
There are a number of models and methodologies for game design which, typically, can be applied to both boardgames and computer games. One that has been widely used is the MDA framework (
Hunicke et al., 2004). This short paper proposes that games can be interpreted formally across three dimensions: Mechanics—the algorithms and data that construct the game; Dynamics—the interactions between players and mechanics; and Aesthetics—the emotional responses that the game seeks to evoke in a player. One of the authors of the MDA framework, Robert Zubek, subsequently presented a revised and more detailed framework which reframes these concepts as Mechanics, Gameplay, and Player Experience (
Zubek, 2020). In both of these models, the different elements are closely imbricated.
While these models are useful for interpreting games, we find Jesse Schell’s Elemental Tetrad a more practical model to use in designing games (
Schell, 2015). This comprises four key elements: Mechanics, Story, Aesthetics, and Technology. In this model, Mechanics encompasses both the Mechanics and the Dynamics/Gameplay elements of the MDA model and Zubek’s revised model, while the Aesthetics/Player Experience element of those models is not explicitly represented in the Tetrad.
To Schell, the
Mechanics are the procedures and rules that comprise the game, aligning with the concept of
Gameplay relevance discussed by
Ney et al. (
2014) and with the game-related criteria in the EALF (
Safiena & Goh, 2022). At a more granular level, Engelstein and Shalev have catalogued over 200 distinct game mechanisms or “building blocks” for tabletop games (
Engelstein & Shalev, 2022). Schell’s
Story element reflects the theme or sequence of unfolding events, which delivers external relevance (
Ney et al., 2014) and authenticity (
Safiena & Goh, 2022).
Aesthetics describes the ways that the game appeals to the player’s senses, aligning with the user-friendliness element of the EALF (
Safiena & Goh, 2022), and
Technology comprises the materials that facilitate the game (
Schell, 2015). Each of these elements is essential and connected to each of the others, and the connections between these elements inform and drive the design of a game.
2.2. Design of Recruitment Rush
Before describing our study, we introduce the game
Recruitment Rush2 and its connection to core HCI curriculum and skills.
In Recruitment Rush, each player is tasked with recruiting participants for a usability study, to be conducted in one week’s time. A personal client card (see
Section 2.2.2) sets out the parameters for the study that each player is to conduct—the website to be studied, required number of participants, a budget for the study, and a profile for their ideal user.
The game is played over seven “days”, each comprising one round. The central board features a day/round tracker, draw and discard piles for Participant cards (see
Section 2.2.3), a central “Agency” of four face-up Participant cards, and a summary of available actions and possible attributes.
In each round, players undertake one recruitment activity, choosing from four possibilities of differing cost and efficacy (see Table 2).
Players recruit participants for their study over the course of the game, depending on the actions they choose. Recruited participants form a face-up tableau in front of the player. We designed the recruitment actions to reflect common activities during participant recruitment—the use of a convenience sample, often of family and friends (“Ask Around”); using local channels (posters, newsletters) (“Local Advertising”); advertising on social media and similar channels (“Online Advertising”); and commissioning an external recruiter to provide pre-qualified participants (“Use the Agency”). After seven rounds, players pay the indicated incentive to their participants and the game is scored, with players earning bonus points for matching participants to recruitment criteria.
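To make the turn structure concrete, the following sketch models one recruitment action in code. This is a minimal illustration, assuming Python; only the Local Advertising values (a cost of 3 to draw 5 cards, noted in Section 2.4.1) come from the game as described, while the other costs and draw counts are placeholders rather than the values from Table 2.

```python
# Minimal sketch of one Recruitment Rush turn (illustrative, not the rulebook).
# Only the Local Advertising values (cost 3, draw 5) appear in the text;
# the other numbers are placeholders standing in for Table 2.
import random
from dataclasses import dataclass, field

@dataclass
class Action:
    name: str
    cost: int          # budget spent to take the action
    cards_drawn: int   # Participant cards revealed to choose from

ACTIONS = [
    Action("Ask Around", 0, 2),          # placeholder values
    Action("Local Advertising", 3, 5),   # stated in Section 2.4.1
    Action("Online Advertising", 5, 7),  # placeholder values
    Action("Use the Agency", 8, 1),      # placeholder values
]

@dataclass
class Player:
    budget: int
    tableau: list = field(default_factory=list)  # recruited Participant cards

def take_turn(player: Player, action: Action, deck: list) -> list:
    """Pay the action's cost and draw cards for the player to assess."""
    if action.cost > player.budget:
        raise ValueError("insufficient budget for this action")
    player.budget -= action.cost
    return [deck.pop() for _ in range(min(action.cards_drawn, len(deck)))]

# Example: a player with a budget of 20 takes Local Advertising on day 1.
if __name__ == "__main__":
    deck = [f"participant-{i}" for i in range(100)]
    random.shuffle(deck)
    player = Player(budget=20)
    drawn = take_turn(player, ACTIONS[1], deck)
    player.tableau.extend(drawn)  # in play, the player keeps only suitable cards
```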
2.2.1. Design Brief and Constraints
As the game was developed on a limited budget and to be used in a (time-limited) tutorial class, there were some specific constraints on the design.
Our first, and significant, design constraint was the short timeframe in which the game was to be taught and played—a one-hour tutorial class, typically with 5 min to set up and pack up, leaving 50 min for class activities. This contributed to several design decisions, including the use of the game board to provide rules reminders. Research has consistently shown that people are not good at reading game rules; rule interpretation is error-prone and disorderly, with players finding rules to be “unintelligible” (
Liberman, 2011). Rather than relying on students reading the game rules in advance of the session, or reading and interpreting them during the class, therefore, we chose to introduce and teach the game to the whole group at the start of the tutorial class, leaving about 40 min for the gameplay itself.
The second design constraint was the need for simplicity, to cater to players with different experience of and interest in games. This aligns with the user-friendliness criterion from the EALF (Safiena & Goh, 2022). As we will discuss in
Section 3, there is a vast discrepancy in game interest and experience among our students. Accordingly, while we drew design inspiration from hobby games rather than mass-market games, we ensured that the mechanisms chosen were simple and easy to explain in the context of the game’s theme. For example, at the end of each player’s turn a participant was moved out of the Agency and replaced with a new one; this was explained as their being recruited by other organisations. This also informed later design decisions, which we will discuss in the context of the playtest sessions in
Section 3. Moreover, it precluded some of our initial design ideas including the use of variable player powers or abilities, which would have added complexity and planning overheads.
Finally, the game needed to be portable and storable. This limited the size of the participant deck and of other components in the game.
2.2.2. Client Cards
We developed 24 Client cards. Each represents a potential (imagined) Client for a usability or user experience (UX) study (see, for example,
Figure 1). Each Client card begins with the name and brief description of the client.
Additionally, each card describes a preferred study size, representing the number of participants to be recruited, a budget, and a profile of three “desired” participant attributes (see
Section 2.2.3), comprising a personal (purple) and a socio-economic (blue) trait as well as personal interests (green). The budget amount reflects, to some degree, how commonly those attributes appear in the deck.
2.2.3. Participant Cards
We designed 100 Participant cards (see
Figure 2), representing the people who participate in the study. In designing these cards, we considered the dimensions that are often used to describe participants in a study, with a focus on the industry setting of the majority of our students’ future UX careers.
Each Participant card has a name and an associated incentive cost. Cards with ‘scarcer’ attributes have a higher incentive cost, as do those from demographics that tend to be harder to recruit (e.g., high-income adult professionals). Additionally, each Participant card has three sets of attributes: personal attributes (shown in purple text), socio-economic attributes (shown in blue text) and hobbies (shown in green text).
Table 3 shows the different options available for each of these attributes.
While we aimed to make attributes broadly representative of the general population, and used a spread of personal and socio-economic attributes that align to those frequently used by practising UX specialists, we did not explicitly match them to Australian Bureau of Statistics data. Recruitment Rush is an educational game rather than a ‘true’ simulation, and interpreting population-level demographic data was beyond the scope of the project.
Recognising that gender is a complex and “messy” construct (
Taylor et al., 2024) but wanting to explicitly include non-binary people in our participant pool, we chose to use pronouns in place of a more overt statement of gender. We recognise that our use of just three pronoun pairs—she/her, they/them, and he/him—is also an oversimplification of this complex issue. Similarly, we simplified age to three categories—young (notionally, up to about 35), adult (to retirement age), and mature—and household status to either single or partnered.
As well as a personal identity, we also provided each participant with three socio-economic attributes. We classified professions as domestic, essential, professional, retired, student, or trade; income as low, middle, high, or wealthy; and English proficiency as beginner, competent, and proficient. We initially used IELTS levels as the measure of proficiency, but found that these levels were not familiar to some participants so switched to the more descriptive terms. The use of these terms reflects that different projects may target different levels of English language comfort. For example, a local organisation in an area with high numbers of community language speakers might explicitly seek to connect with people with beginner English proficiency. Categories were loosely connected—for example, a mature participant might be retired and a young participant might be a student, although not exclusively. Finally, we developed a list of 12 hobbies and allocated three to each participant, allowing each participant card a unique combination of hobbies.
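As an illustration of how these attribute vocabularies fit together, the sketch below renders the two card types as simple data structures. The field names and the matching helper are our own shorthand, assumed for illustration; they are not the game’s internal terminology.

```python
# Sketch of the Client and Participant card structures described above.
# Field names are illustrative assumptions, not the game's own terms.
from dataclasses import dataclass

# Attribute vocabularies, as described in Sections 2.2.2 and 2.2.3.
PRONOUNS = ("she/her", "they/them", "he/him")
AGES = ("young", "adult", "mature")
HOUSEHOLDS = ("single", "partnered")
PROFESSIONS = ("domestic", "essential", "professional", "retired", "student", "trade")
INCOMES = ("low", "middle", "high", "wealthy")
ENGLISH = ("beginner", "competent", "proficient")

@dataclass(frozen=True)
class ParticipantCard:
    name: str
    incentive: int                  # scarcer attributes carry a higher incentive cost
    personal: frozenset[str]        # purple text: pronouns, age, household status
    socio_economic: frozenset[str]  # blue text: profession, income, English proficiency
    hobbies: frozenset[str]         # green text: three of the twelve hobbies

@dataclass(frozen=True)
class ClientCard:
    name: str
    description: str
    study_size: int                 # preferred number of participants
    budget: int
    desired: frozenset[str]         # one purple, one blue, and one green attribute

def desired_matches(participant: ParticipantCard, client: ClientCard) -> int:
    """Count how many of the client's desired attributes a participant holds."""
    attributes = participant.personal | participant.socio_economic | participant.hobbies
    return len(client.desired & attributes)
```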
2.2.4. Scoring
We developed a scoring system (see
Figure 3) that rewarded players for recruiting to the desired profile and also for recruiting a diverse sample of participants.
As we discuss below, players found the scoring system to be fiddly. Additionally, and highlighting the value of playtesting with diverse groups, one group with very disparate play styles observed that the developed scoresheet strongly favoured a desire for larger participant numbers, rather than a small but carefully selected sample. To remediate this, we suggest increasing the score for Desired Attributes from one point to three points “for each Participant matching each of your Project’s desired attributes”. Another option would be to reduce a player’s score by two for each Participant who does not match
any of the desired attributes; however, we do not recommend this negative scoring as it can be experienced as hostile or punitive (
Engelstein, 2020).
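A minimal sketch of the suggested adjustment follows, assuming Python and ignoring the diversity bonus and incentive payments from the full scoresheet. It implements the three-points-per-match rule and shows, commented out, the penalty variant we advise against.

```python
# Sketch of the proposed scoring change: three points for each Participant
# matching each of the Project's desired attributes. Each participant is
# modelled as a set of attribute strings; the diversity bonus and incentive
# payments from the full scoresheet are omitted here.
def score_desired_attributes(tableau: list[set], desired: set) -> int:
    total = 0
    for attributes in tableau:
        matches = len(desired & attributes)
        total += 3 * matches  # was 1 point per match in the playtested version
        # Negative-scoring variant (not recommended; can feel punitive):
        # if matches == 0:
        #     total -= 2
    return total

# Example: two recruits, one matching two desired attributes, one matching none.
print(score_desired_attributes(
    [{"she/her", "student", "gardening"}, {"he/him", "trade", "cycling"}],
    desired={"she/her", "student", "cooking"},
))  # -> 6
```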
2.2.5. Connection to Game Design Literature
In designing Recruitment Rush, we followed the four key elements of Schell’s Tetrad (
Schell, 2015). We began with the Story of recruiting participants for a usability study, which represents a common, authentic task for user researchers. From the design brief, we knew that the Technology we used for the game would primarily be cards. Connecting Story and Technology led us to the dual decks of Client and Participant cards, which are at the heart of the game. Similarly, the connection between cards and Mechanics, or the goals and procedures of the game (building a set of participants, in the form of a tableau), led us to the understanding that participants would draw and select cards to form their tableau, and the connection between Mechanics and Story informed the various recruitment actions that are available in the game.
Although the game’s graphic design aesthetic remains very simple, it connects to the Mechanics in the way that the board provides information about how to play the game, including a representation of the different actions that are available, and in the coloured text on the cards, which connects the different types of attributes. Images and text are used to enhance the story, for example through the introductions to the different client cards, and the layout of the cards is similar to that of other card games.
In the same way as the game connects to Schell’s Elemental Tetrad, it also incorporates specific design principles for educational games, which support the authenticity of the player experience (
Laine & Lindberg, 2020). While these are too numerous to discuss in detail, the game pays particular attention to notions of Challenge (“DP2: Favor simple challenges over complex challenges”), Goals (“DP25: Create clear, meaningful, and achievable goals”), Relevance and Relatedness (“DP36: Relate gameplay to real-world contexts”), Storytelling and fantasy (“DP49: Create a meaningful story that the player can relate to”) and importantly, Learning (“DP28: Provide relevant and pedagogically grounded learning content and activities”).
2.3. Method
This project sought to address two core Research Questions. Firstly, we asked whether it was possible to design a game that taught participant recruitment within the boundaries of our design brief and constraints. Secondly, we wanted to understand students’ attitudes towards the designed game, to examine whether it made a valuable contribution to our HCI curriculum. The game’s learning objective was to teach students about the process of recruiting participants for a study, paying special attention to how to recruit participants, the types of information to include in a recruitment plan, and issues of the match between participants and a study’s audience, budgets, different recruitment methods, and the pragmatic trade-offs that may be made to achieve a target number of participants.
While data were generated through an online survey, our methods are mixed: we use descriptive statistics to report the surveyed measures and qualitative coding to analyse free-text comments. The first author grouped comments into broad themes for analysis and reporting; given the small size and brevity of our dataset we did not conduct a full thematic analysis.
2.3.1. Playtest Cycles
We playtested Recruitment Rush at three stages, which roughly correspond to the Concept, Preproduction, and Production stages of game design and development (
Fullerton, 2014).
Figure 4 shows the three cycles of playtesting.
Firstly, we playtested an early concept version of the game through an informal early playtest within the development team. This playtest focused on the “formal elements” of the game including the player actions, the game procedures, and the turn sequence or “core loop” (
Fullerton, 2014). Secondly, we playtested a working preproduction version of the game with three teaching staff from a subject focused on UX evaluation and with a group of teaching specialists from across the University. Finally, we implemented an early production version of the game in tutorials for our UX evaluation subject and simultaneously ran a session where two groups of HCI researchers, comprising academic staff and graduate researchers, played the game and engaged in a focus group discussion. Both tutorial participants and expert reviewers were invited to complete a survey about their experience of the game. Ethics approval was obtained from The University of Melbourne. Information about the participants and key outcomes of each stage of playtesting is included below, in
Section 2.4.
2.3.2. Survey
Building on prior research in HCI on player experience, we developed a survey to evaluate Recruitment Rush. The survey comprised a consent form, demographic questions including birth year, gender (using the response options presented by
Spiel et al. (
2019)), profession and gaming experience, 16 Likert scale questions, which were rated on a 7-point scale from “Strongly disagree” to “Strongly agree”, and three free-response questions exploring what the participant liked about the game, what they would change, and any additional information they wished to provide.
The table of Likert-scale questions included the 11 verbatim statements that comprise the mini-Player Experience Inventory (PXI) (
Haider et al., 2022) as well as five statements that related more specifically to the educational goals and context of the game. This is congruent with the use of the PXI and variants; the developers of the PXI anticipated that researchers might create additional modules to extend the base PXI in particular directions (
Abeele et al., 2020). These 16 statements were shown in random order (see
Table 4).
The lead researcher on this project was also the subject coordinator responsible for teaching this Evaluation subject. She did not attend the classes, to ensure that students did not feel pressured or judged in participating. Because we did not know which students had chosen to complete the survey, we pre-incentivised survey participation (
Müller et al., 2014) by providing cupcakes to all students who participated in the class.
2.4. Playtests
2.4.1. Concept Playtest
We used this playtest as part of the game development cycle, noting ideas for different elements of the game as well as for the flow of the game. This informal session was run at the home of one of the researchers; a family member with considerable experience playing boardgames spontaneously sat down to join the session and contributed her feedback and ideas as part of our discussion.3 We used a small number of prepared, handwritten Client and Participant cards (see
Figure 5 and
Figure 6), which we added to as play continued.
Outcomes
A key outcome from this session was the need for a central board to focus attention on a common playing environment. This was also the stage where we developed and explored the concept of different ways to recruit participants and simplified some early ideas. We explored a variety of mechanisms including card drafting, where Participant cards were passed from player to player, but settled on each player having access to a semi-independent pool of participants. Additionally, we considered whether there should be action selection restrictions, with each action available to only one player per round, discarding this idea as adding unnecessary complexity to the design.
At this stage, we also refined the implementation of the budget—not only would participants need to be paid an incentive, but there would also be costs to recruit in different ways. We used poker chips to represent the budget. Finally, we limited players’ options for recruitment. Our original design allowed players to draw extra cards when taking a recruitment action, at a cost of 1 per card. In this model, when taking the Local Advertising action (for example), which has a base cost of 3 to draw 5 cards, the player could choose to pay 4 to draw 6 cards, or 5 to draw 7 cards. Playing simulated turns with this option showed us that this was error-prone and fiddly, and made the game unnecessarily complex.
2.4.2. Preproduction Playtests
Playtesting with Subject Staff
We ran a playtest with three members of our Evaluation teaching team. Two are graduate researchers who have tutored UX Evaluation subjects over several semesters. One is a lecturer who was assisting with the subject’s delivery. The game was run with a handwritten set of cards and board. In the absence of a supply of “play money”, we created a grid that acted as a budget tracker; players moved a counter to indicate how much money they had left. Although we had intended this as a short-term stop-gap, player feedback was positive: players liked the sense that they could see their budget being “crossed off”, so the stop-gap became a core component of the design.
Playtesting with Education Specialists
At this stage of the project, we also ran a playtest with four education specialist staff members from different areas of the University. These participants enjoyed the game, and felt that it connected well to real-world settings and pedagogical goals. They were concerned, however, that the game might be over-long and suggested reframing it as a co-operative game where players work together to recruit participants for a single study. We considered this option but decided that it would conflict with our design constraint of simplicity (see
Section 2.2.1). Casual players may be unfamiliar with the notion of a cooperative game, as these tend to be aimed at hobbyists rather than the mass market (
Woods, 2012).
Outcomes
Key outcomes from these playtests were that the game was playable but that it might need scaffolding in the first couple of rounds, particularly for people who had had less exposure to boardgames. It was after this stage that we increased the number of hobbies or interests on each participant card from two to three, to increase the likelihood of a match without increasing the number of cards in the deck. This was driven by the design constraint of portability (see
Section 2.2.1).
An unexpected observation during these playtests was that the process of checking the cards drawn when taking the ‘Local’ or ‘Online’ advertising options gave a sense of qualifying participants through a screener survey, adding to the feeling of a “real” simulation.
2.4.3. Production Playtests
Tutorial Classes
We created multiple copies of the game and dedicated one week of tutorial classes (1 h of class time, up to 25 students in each of four classes) to playing it. Students were expected to play the game as part of their studies, and were invited to complete a survey at the conclusion of the class. A single reminder message to complete the survey was sent through the subject LMS after the end of the week. Students who chose not to complete the survey were invited to spend that time reflecting on what they had learned from playing the game. Of 87 students who attended the tutorial classes, 52 completed the survey.
Figure 7 shows an image from a tutorial class. All groups completed the game within the set time, although in a few cases tutors advised groups to end the game early, after five or six “days” rather than the full seven. We see this as a strength of the design as it allows flexibility even when play proceeds more slowly than anticipated.
Expert Review
Our final playtest, run during the same week as the tutorial classes, was an expert evaluation with eight HCI researchers split into two tables for play. Three were working academic staff at our university and the other five were current graduate researchers in the HCI research group.
Like the students, this group of participants also completed the survey. Additionally, several of them chose to stay and participate in a discussion after the game had finished. One suggested that he had been able to “maths out” a winning strategy, and that the game might be too easy. Comparing the result from his table with the second table, however, we noted that his raw score was considerably lower than that of the student who won at the other table. The random nature of drawing cards mitigates the impact of such calculations.
Outcomes
A particular focus of the discussion was the scoring system for the game and the way that it incentivised broad recruitment rather than narrow and highly specific recruitment, even where the characteristics of a player’s selected Participants bore little resemblance to the desired characteristics. This was an accidental and unexpected outcome of our scoring system which we had overlooked in earlier playtests. The discussion raised the possibility of having multiple scoring systems which the players could select at the start of the session, prioritising either reaching the required number of participants with some diversity in the population, or reaching a well-targeted but smaller group of participants with the potential to provide rich and detailed insights into the studied website. In response to this issue, we added conversation about the value of different types of recruitment to the post-play tutorial discussion, addressing the tension between quantity of participants and quality or fit to a recruitment profile. Further, the discussion identified the potential to use the game not only to teach about recruiting for a usability or UX study but also to teach graduate researchers about recruiting for their own projects, and to inspire similar conversations about recruitment priorities.
Further outcomes from these playtests are discussed in the following section.
3. Results
We received 60 responses to the survey: 52 from students and 8 from HCI researchers. An additional four participants consented to participate but did not respond to any of the extended PXI Likert scale statements or free-text response questions.
In total, of the 60 participants, 32 identified as women and 25 as men. One identified as non-binary, one preferred not to disclose, and one self-described as “He/They”. Fifty-two were current students in the Evaluation subject; one was an HCI/UX practitioner; four were academic HCI staff; and four were graduate researchers in HCI (participants were able to select more than one response option for this question). The median age of participants was 21 (year of birth 2002; IQR 2001–2003), which is congruent with a second-year subject in an undergraduate degree.
We also asked participants about their experience with games. All had at least some experience, although the time they spent playing games each week was variable. Nearly a third (19 participants) spent less than an hour a week playing games and only 12 reported playing for more than ten hours. Nevertheless, the majority of participants felt that they were non-novice game players. On a scale from 1 (novice) to 7 (expert), the average rating was 4.28. This was important because we had to ensure that the game was straightforward enough for inexperienced players, reflecting our design goal of simplicity, but had enough interest and complexity for more experienced groups.
3.1. PXI and Custom Module Findings
Table 5 shows the ratings from the mini PXI, which uses a 7-point Likert response scale, as well as from the five unvalidated survey items. In these scales, a response of −3 shows extreme disagreement, and 3 shows extreme agreement. The positive means all indicate some level of agreement with the statements. Full statements are shown in
Table 4.
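For transparency about how these figures are derived, the sketch below shows the mapping from the 7-point response scale onto −3..3 and the summary statistics we report. The intermediate anchor labels in the sketch are assumptions for illustration, as only the endpoints (“Strongly disagree” and “Strongly agree”) are named above.

```python
# Sketch of the Likert coding behind Table 5. Only the endpoint labels are
# given in the text; the intermediate labels here are illustrative.
from statistics import mean, stdev

SCALE = {
    "Strongly disagree": -3, "Disagree": -2, "Somewhat disagree": -1,
    "Neither agree nor disagree": 0,
    "Somewhat agree": 1, "Agree": 2, "Strongly agree": 3,
}

def summarise(responses: list[str]) -> tuple[float, float]:
    """Map one statement's responses to -3..3 and return (mean, stdev)."""
    values = [SCALE[r] for r in responses]
    return mean(values), stdev(values)

# Example with three responses to a single statement.
print(summarise(["Agree", "Strongly agree", "Somewhat agree"]))  # (2.0, 1.0)
```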
Overall, the responses to the Functional (1.446) and Psychosocial (1.454) elements measured by the mini PXI were markedly similar. The game rated most highly for Enjoyment (participants strongly agreed that “I had a good time playing this game”; x̄ = 2.13, stdev = 0.77) and Immersion (“I was fully focused on the game”; x̄ = 1.97, stdev = 0.76). The lowest ratings from the mini PXI were for Autonomy (x̄ = 1.27, stdev = 1.35), Mastery (x̄ = 1.13, stdev = 1.37), Progress Feedback (x̄ = 1.18, stdev = 1.24) and Audiovisual Appeal (x̄ = 1.23, stdev = 1.29).
The five custom questions that we appended to the mini PXI included three that related to the perceived educational value of the game, one to its value as a tutorial activity, and one to the transferability of this educational game to a social setting.
In our measures of the game’s educational value, respondents felt that playing the game had helped them to understand both the different ways to recruit people (x̄ = 1.8, stdev = 0.86) and how to recruit to a specific user brief (x̄ = 1.47, stdev = 1.05), and had improved their ability to create a recruitment plan (x̄ = 1.25, stdev = 0.99). These questions relate specifically to the learning goals for the tutorial activity, presented in Section 2.3. Overall, the average for these three questions was 1.5, which is broadly similar to the averages for the validated Functional and Psychosocial constructs. Participants felt strongly that playing the game would be an “effective” use of a tutorial (x̄ = 1.9, stdev = 0.92)—given that the subject’s explicit teaching comprises 12 two-hour lectures and 11 one-hour tutorials, this is a significant share of face-to-face teaching time. Although we did not explicitly measure the effectiveness of learning, these results suggest that playing the game improved students’ broad understanding of the recruitment process and their confidence in applying the in-game concepts to real-world scenarios. They were less certain that they would like to play the game with friends or family (x̄ = 0.9, stdev = 1.43), although we were surprised that this response was as positive as it was, given the specific and educative nature of the game.
3.2. What People Liked About the Game
Fifty-three participants added free-text information about what they liked about the game. Many commented that they found the game “fun” or enjoyable to play, and that they liked the variety of recruitment options (actions) and Participant cards. “I was really surprised by how fun and well-design the game was. It was really fun seeing how my real-world experience in recruiting was reflected in the game—like the expensiveness of recruiting through an agency, and the hit and miss nature of online advertising.” Others enjoyed the element of luck in the game—“It felt kinda like gambling in a fun way” even when it did not always go in their favour. They found the game “easy to follow” and noted that the highly structured turns made it easy to understand what to do during their turn. Several participants commented on enjoying the need to make decisions and think about their actions—“Thinking about the characters in turn and building a story about your project”—and others complimented the game’s design quality.
Free comments echoed these emotions, with several participants reiterating the fun that they had experienced playing the game—“I was pleasantly surprised by how fun it was”. One participant contrasted it with their broader experience of their studies—“Thanks for the fun class, it was a nice change lol”.
3.3. Opportunities to Improve the Game’s Rules and Mechanisms
Forty-five respondents provided suggestions as to how they would improve the game. Several participants responded “no” or even “no, I think it is perfect” to this question; we did not count these responses as suggestions.
3.3.1. Clarify the Rules
Sixteen participants commented on the rules of the game. They felt that elements of the rules could have been clearer, and that a printed rulebook would be preferable to the group teaching and illustrative round, which we presented at the start of the session to introduce the activity. Two participants felt that there was too much luck in the game, in the form of the Participant deck, one felt that there were too many elements to the game, and one that it should be more complex. One commented that it should have been a digital game, but did not provide any supporting information to explain this comment.
3.3.2. Refine and Simplify Scoring
One participant—who identified as a “novice” to games and who played in the session with the graduate researcher who felt that they had “solved” the game—commented that to improve the game we should “Make it clearer than this is just about probability and chance and not about engaging with human-centred design”; however, we feel that this reflects a misunderstanding of the relationship between a game’s theme and the mechanics or mechanisms that support gameplay. While the mechanisms reflect elements of probability and chance, the theme connects closely to human-centred design. We recognise, however, that this player’s experience was unsatisfactory due to poor luck in card draws and their desire to exactly match the preferred characteristics on the Client card. This reflects their recruitment practices as a qualitative researcher rather than the pragmatic considerations of an industry-based role. Providing more information about the game’s scoring would have alleviated this issue in this session; however, we reflect further on this issue in
Section 4.
Further, ten participants expressed concerns about the budget tracker and twelve about the scoresheet. Although our internal playtest group had liked the paper budget tracker sheet, these participants found it anywhere from “a little unclear” to “very confusing”. Several participants confused or conflated the budget tracker and scoring, and ten raised the complexity of the scoring as an issue.
3.3.3. Reflecting English Proficiency in the Game
The English proficiency attribute was the most problematic, and was discussed at all stages of playtesting. Participants questioned why these were not presented as a spectrum where a minimum—but not maximum—English proficiency could be specified. We feel that this could be addressed through teaching and by providing relevant and sensitive examples in rules explanation, for example:
Some clients are interested in knowing whether their website or app is easy to understand. They specifically want people who are fairly new to speaking English to make sure that they can understand the content.
3.4. Opportunities to Improve the Game’s Graphic Design
Twelve respondents commented on the graphic design of the game—“The aesthetics of the board were quite black and white and a bit text heavy so might be interesting to make a bit more colourful.” Although we explained that the game was a prototype, participants felt that the simple and functional graphics—particularly on the game board—detracted from their gameplay experience. A redevelopment of the game will focus on enhancing the visual elements and strengthening the connection to the attributes on the Participant cards.
4. Discussion
In this project, we sought to explore the use of a tabletop game for teaching the core Human–Computer Interaction skill of recruiting participants for an evaluation study. This authentic learning approach provides students with the opportunity to explore scenarios typical of the evaluation recruitment process within a game environment. Recruitment Rush is a novel and engaging teaching tool for use in face-to-face teaching, although its value extends beyond the specific HCI context. Our discussion reflects primarily on the value of the game and, more broadly, on the applicability of tabletop play to HCI education. In addition, we discuss the value of the interdisciplinary collaboration on this project, as well as opportunities to extend the game with hybrid tools that streamline the scoring process and to extend the evaluation framework that we used to understand the player experience.
Our study supports prior research that demonstrates the benefit of the use of computer and tabletop games for education (
Araiza-Alba et al., 2021;
Gennari et al., 2019), and contributes to the limited work to date on games for HCI education (
de Souza Lima & Benitti, 2019). It highlights opportunities to use a game to authentically simulate processes that would otherwise be too lengthy or complex to undertake in the context of an HCI subject. Moreover, it suggests further opportunities to use games to strengthen research teaching to other groups including graduate researchers. While we did not explicitly measure learning outcomes, both survey outcomes and open feedback demonstrate that students perceived the game as helpful for learning about different dimensions of recruitment, and as a valuable use of class time.
Recruitment Rush thus delivers against Ney’s relevance model of authenticity in simulation games (
Ney et al., 2014) as well as the EALF (
Safiena & Goh, 2022), incorporating
Herrington (
2006)’s nine Authentic Learning Principles, as illustrated in
Table 1. The game provides
external relevance through its careful design, incorporating the use of realistic client scenarios and clear
game objectives, varied participant cards, diverse recruitment strategies, and budgets for studies. These address the EALF requirements for both
authentic activities and
authentic context “that reflects the way the knowledge will be useful in real life” (
Herrington, 2006, p. 1).
Recruitment Rush provides
gameplay relevance through the sequencing of player actions and use of meaningful in-game
interactions with relevant in-game consequences,
coaching and scaffolding students in a low-risk environment, although we acknowledge that the prototype game could be more attractive and
user-friendly. Finally, the game provides
learning relevance in its close alignment with the subject curriculum and learning outcomes, which is further demonstrated through the student feedback. By
articulating and explaining their choices and manipulating cards and budget to reflect those choices, students realise and model their decisions and their ramifications in the game setting. This allows students to
collaborate in play groups of 4–5 in their construction of their knowledge about and understanding of recruitment strategies through gameplay and
reflect on this through subsequent structured and guided discussion. The different client scenarios ensure that the recruitment process is understood through
multiple roles and perspectives4. Our own input into the game’s design and the expert review sessions ensure that the game reflects
access to experts.
Our first Research Question was whether we could design a game to be played within a one-hour tutorial class, which would successfully mimic the challenges of recruiting participants for studies, within the design constraints we had identified. Recruitment Rush achieves these goals and was successfully implemented in a tutorial class. Additionally, we aimed to understand whether the game made a valuable contribution to our HCI curriculum. Specifically, we aimed to teach students about the benefits and disadvantages of different methods of participant recruitment, the costs of both recruitment and incentivising participants, and the importance of matching participants’ attributes with the client’s ideal user. Although we did not use explicit measures to assess student learning in this area, the very positive responses to our survey showed that students felt that the game had been valuable in teaching them about recruitment. Moreover, comments on the survey showed that the game inspired students to think about recruitment in different contexts.
This reflective response to the game was also seen among the “expert” group of staff and graduate researchers from our HCI research group, who spent over an hour discussing the game after playtesting it. This group highlighted the potential value of the game for teaching about recruitment for research studies to graduate researchers. They felt that there was value not only in playing the game but in discussing the values and philosophies that it embodied, particularly in the scoring system which inadvertently privileged those who recruited a large number of participants, even with poor matches to the target attributes, over those who preferred a close fit to recruitment profiles. As a result of these discussions, we extended the reflective post-gameplay discussion to incorporate these ideas.
Interestingly, during our “expert” playtest session, two colleagues from a non-HCI computing discipline walked past and stopped to ask what it was about. After observing the play, they enthusiastically concurred that it would be valuable for their own graduate researchers to deepen their understanding of recruitment for research studies.
We designed Recruitment Rush as a tabletop boardgame, because we wanted it to be simple to implement in tutorial classes, without the need for potentially complex underlying technical infrastructure to support multiplayer play. It was interesting to us that only one respondent suggested that a digital design would be preferable, and they did not provide any detail or suggestion as to how this might be accomplished. Nevertheless, our observations and reading of the issues raised by participants suggest that there are meaningful opportunities to use hybrid technologies to expedite the use of the game, particularly in the domains of
housekeeping and
calculating (
Rogerson et al., 2021b).
A final reflection, which echoes work by Crocco et al. (2016), is on the value of the interdisciplinary team for this project. Where Crocco et al. offered faculty members a scaffolded approach to game design which combined an introductory workshop on Game-Based Learning with a hands-on game design workshop, prototyping session and playtesting, our approach paired an HCI academic with some prior game design experience with a working game designer, with further contributions from a specialist student learning team. The first author developed the initial game concept in response to an identified gap in the teaching model and applied for funding to develop it. The second author took that game concept and led development of the game and components, and the third author provided advice and insight on student learning goals and outcomes. Regular meetings and playtests throughout the project ensured that ideas could be raised and challenged within the design team before being playtested with external participants.
Limitations and Future Work
Despite the positive response to Recruitment Rush, we note several key limitations of this work which point to future research opportunities. Firstly, although we developed Recruitment Rush as a tabletop boardgame, we see potential in redeveloping it to leverage hybrid components to expedite play. In particular, we noted that players found scoring to be a lengthy and error-prone process, and we propose a simple, web-based scoring tool to replace the paper scoresheets. This is a lightweight hybrid solution that avoids the cost and complexity of a central multiplayer-equipped server (
Rogerson et al., 2021a). Such a scoring tool would expedite the lengthy scoring process, allow teaching staff to collect additional data about game outcomes, and support the development of further refinements to the game’s scoring system.
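As one possible shape for such a tool, the sketch below uses Flask to expose a single scoring endpoint. The route, payload fields, and the three-points-per-match rule applied here are assumptions for illustration, not a specification of the tool we plan to build.

```python
# Minimal sketch of a web-based scoring helper (an assumed design, not a spec).
# Run this file, then POST JSON such as:
#   {"desired": ["she/her", "student", "cooking"],
#    "participants": [["she/her", "student", "gardening"], ["he/him", "trade"]]}
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.post("/score")
def score():
    data = request.get_json()
    desired = set(data["desired"])
    total = sum(
        3 * len(desired & set(attributes))  # proposed 3-points-per-match rule
        for attributes in data["participants"]
    )
    return jsonify({"score": total})

if __name__ == "__main__":
    app.run(debug=True)
```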
Secondly, we note the potential to develop and validate a further set of questions which could be used to extend the PXI instrument. The education-focused questions we used were not validated, yet they provided valuable insights into the student experience. A validated set of education-focused questions would present a valuable extension to the PXI. This would improve the transferability of knowledge between projects and allow for comparison of different sets of game data. We believe that this is congruent with the PXI’s explicit mention of potential modules that extend it for specific settings (
Abeele et al., 2020).
Thirdly, we note limitations relating to the context in which Recruitment Rush was developed and tested to integrate with our teaching: our evaluation is limited to a single institution and student cohort, and only 52 of the 87 participating students completed the survey. This may have induced a form of response bias. Future evaluations could extend to implementing the game at other institutions or in other programs. While we recognise that introducing a game to a classroom may have a novelty effect, we ultimately see that as a positive, applicable to each cohort of students who encounter the game. Future work should examine how to measure the value that the game contributes to student learning.
Finally, we note the tension (discussed in the Expert Review sessions) between the desire to recruit the required
number of participants and the desire to recruit participants that fit the target
user profile. While this presents a valuable opportunity for discussion and reflection in tutorials, we acknowledge that the score system presented here does not reflect the value of appropriately qualifying and targeting participants. We are undertaking further work to refine and simplify the scoring system and better reflect the value we place on the selection of suitable participants for any study. The alternative scoring approaches outlined in
Section 2.2.4 represent an interim solution for this issue.