The Relationship Between Game-Related Assessment and Traditional Measures of Cognitive Ability—A Meta-Analysis
Abstract
1. Introduction
2. The Relationship Between GRA and Cognitive Ability
2.1. Cognitive Ability: From Traditional Measures to Game-Related Assessment
2.2. Expected Relationship and Moderating Factors Between GRA and Traditional Measures of Cognitive Ability
2.3. Aims and Hypotheses
- Research Question 1: Are there any differences in the strength of the relationship between GRA and traditional tests of cognitive ability based on whether general cognitive ability/GMA vs. broad and narrow abilities were measured?
- Research Question 2: Are there any differences in the strength of the relationship between GRA and traditional tests of cognitive ability based on (A) whether the GRA was based on existing game(s) vs. specifically developed or adapted game(s) and (B) whether the GRA was specifically designed to assess cognitive ability vs. served other purposes?
3. Materials and Methods
3.1. Literature Search and Inclusion Criteria
3.2. Coding Procedure
3.3. Meta-Analytic Procedure
4. Results
4.1. Study Characteristics
4.2. Overall Relationships Between GRA and Cognitive Ability
4.3. Moderator Analysis
4.4. Additional Analysis
5. Discussion
6. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
Appendix A. Studies Used in Meta-Analysis (44 Papers)
1. Aalbers, T., M. A. Baars, M. G. Olde Rikkert, and R. P. Kessels. 2013. “Puzzling with online games (BAM-COG): reliability, validity, and feasibility of an online self-monitor for cognitive performance in aging adults.” Journal of Medical Internet Research 15 (12): e270. https://doi.org/10.2196/jmir.2860.
2. Adams, D., and R. Mayer. 2012. “Examining the Connection Between Dynamic and Static Spatial Skills and Video Game Performance.” Proceedings of the Annual Meeting of the Cognitive Science Society.
3. Atkins, S. M., A. M. Sprenger, G. J. H. Colflesh, T. L. Briner, J. B. Buchanan, S. E. Chavis, S. Chen, G. L. Iannuzzi, V. Kashtelyan, E. Dowling, J. I. Harbison, D. J. Bolger, M. F. Bunting, and M. R. Dougherty. 2014. “Measuring Working Memory Is All Fun and Games.” Experimental Psychology 61 (6): 417–438. https://doi.org/10.1027/1618-3169/a000262.
4. Auer, E. M., G. Mersy, S. Marin, J. Blaik, and R. N. Landers. 2022. “Using machine learning to model trace behavioral data from a game-based assessment.” International Journal of Selection and Assessment 30 (1): 82–102. https://doi.org/10.1111/ijsa.12363.
5. Baniqued, P. L., H. Lee, M. W. Voss, C. Basak, J. D. Cosman, S. DeSouza, J. Severson, T. A. Salthouse, and A. F. Kramer. 2013. “Selling points: What cognitive abilities are tapped by casual video games?” Acta Psychologica 142 (1): 74–86. https://doi.org/10.1016/j.actpsy.2012.11.009.
6. Bonny, J. W., and L. M. Castaneda. 2017. “Number processing ability is connected to longitudinal changes in multiplayer online battle arena skill.” Computers in Human Behavior 66: 377–387. https://doi.org/10.1016/j.chb.2016.10.005.
7. Bonny, J. W., L. M. Castaneda, and T. Swanson. 2016. “Using an International Gaming Tournament to Study Individual Differences in MOBA Expertise and Cognitive Skills.” Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems, San Jose, CA, USA.
8. Borghetti, D., C. Zanobini, I. Natola, S. Ottino, A. Parenti, V. Brugada-Ramentol, H. Jalali, and A. Bozorgzadeh. 2023. “Evaluating cognitive performance using virtual reality gamified exercises.” Frontiers in Virtual Reality 4. https://doi.org/10.3389/frvir.2023.1153145.
9. Buford, C. Colby, and B. J. O’Leary. 2015. “Assessment of Fluid Intelligence Utilizing a Computer Simulated Game.” International Journal of Gaming and Computer-Mediated Simulations 7 (4): 1–17. https://doi.org/10.4018/ijgcms.2015100101.
10. Cheng, X., G. C. Gilmore, A. J. Lerner, and K. Lee. 2023. “Computerized Block Games for Automated Cognitive Assessment: Development and Evaluation Study.” JMIR Serious Games 11: e40931. https://doi.org/10.2196/40931.
11. Chesham, A., S. M. Gerber, N. Schutz, H. Saner, K. Gutbrod, R. M. Muri, T. Nef, and P. Urwyler. 2019. “Search and Match Task: Development of a Taskified Match-3 Puzzle Game to Assess and Practice Visual Search.” JMIR Serious Games 7 (2): e13620. https://doi.org/10.2196/13620.
12. Chicchi Giglioli, I. A., C. de Juan Ripoll, E. Parra, and M. Alcañiz Raya. 2018. “EXPANSE: A novel narrative serious game for the behavioral assessment of cognitive abilities.” PLOS ONE 13 (11): e0206925. https://doi.org/10.1371/journal.pone.0206925.
13. Cretenoud, A. F., A. Barakat, A. Milliet, O. H. Choung, M. Bertamini, C. Constantin, and M. H. Herzog. 2021. “How do visual skills relate to action video game performance?” Journal of Vision 21 (7): 10. https://doi.org/10.1167/jov.21.7.10.
14. Denga Ndemera, R. A. 2022. “Effects of Playing the Video Game Tetris on Attention and Processing Speed.” Ph.D. dissertation, Rensselaer Polytechnic Institute. 29254213.
15. Foroughi, C. K., C. Serraino, R. Parasuraman, and D. A. Boehm-Davis. 2016. “Can we create a measure of fluid intelligence using Puzzle Creator within Portal 2?” Intelligence 56: 58–64. https://doi.org/10.1016/j.intell.2016.02.011.
16. Gagnon, D. 1985. “Videogames and spatial skills: An exploratory study.” ECTJ 33 (4): 263–275.
17. Gödöllei Lappalainen, F. A. 2017. “Game-Based Assessments of Cognitive Ability: Validity and Effects on Adverse Impact through Perceived Stereotype Threat, Test-Taking Motivation and Anxiety.” Master’s thesis, University of Calgary.
18. Jones, M. B., W. P. Dunlap, and I. M. Bilodeau. 1986. “Comparison of video game and conventional test performance.” Simulation & Games 17 (4): 435–446.
19. Kokkinakis, A. V., P. I. Cowling, A. Drachen, and A. R. Wade. 2017. “Exploring the relationship between video game expertise and fluid intelligence.” PLOS ONE 12 (11): e0186621. https://doi.org/10.1371/journal.pone.0186621.
20. Kranz, M. B., P. L. Baniqued, M. W. Voss, H. Lee, and A. F. Kramer. 2017. “Examining the Roles of Reasoning and Working Memory in Predicting Casual Game Performance across Extended Gameplay.” Frontiers in Psychology 8: 203. https://doi.org/10.3389/fpsyg.2017.00203.
21. Krebs, C., M. Falkner, J. Niklaus, L. Persello, S. Klöppel, T. Nef, and P. Urwyler. 2021. “Application of Eye Tracking in Puzzle Games for Adjunct Cognitive Markers: Pilot Observational Study in Older Adults.” JMIR Serious Games 9 (1): e24151. https://doi.org/10.2196/24151.
22. Kröner, S., and D. Leutner. 2002. “MultiFlux–Pilotstudie für die Entwicklung eines Verfahrens zur simulationsbasierten Intelligenzdiagnostik [MultiFlux—A pilot study concerning the development of a simulation-based tool for intelligence assessment].” Zeitschrift für Arbeits- und Organisationspsychologie 46: 84–88.
23. Landers, R. N., M. B. Armstrong, A. B. Collmus, S. Mujcic, and J. T. Blaik. 2021. “Theory-driven game-based assessment of general cognitive ability: Design theory, measurement, prediction of performance, and test fairness.” Journal of Applied Psychology. https://doi.org/10.1037/apl0000954.
24. Lee, K., D. Jeong, R. C. Schindler, and E. J. Short. 2016. “SIG-Blocks: Tangible game technology for automated cognitive assessment.” Computers in Human Behavior 65: 163–175. https://doi.org/10.1016/j.chb.2016.08.023.
25. Leutner, F., S. C. Codreanu, S. Brink, and T. Bitsakis. 2022. “Game based assessments of cognitive ability in recruitment: Validity, fairness and test-taking experience.” Frontiers in Psychology 13: 942662. https://doi.org/10.3389/fpsyg.2022.942662.
26. Lim, J., and A. Furnham. 2018. “Can Commercial Games Function as Intelligence Tests? A Pilot Study.” The Computer Games Journal 7 (1): 27–37. https://doi.org/10.1007/s40869-018-0053-z.
27. Martin, N., J. Capman, A. Boyce, K. Morgan, M. F. Gonzalez, and S. Adler. 2020. “New frontiers in cognitive ability testing: working memory.” Journal of Managerial Psychology 35 (4): 193–208. https://doi.org/10.1108/JMP-09-2018-0422.
28. McPherson, J., and N. R. Burns. 2007. “Gs invaders: Assessing a computer game-like test of processing speed.” Behavior Research Methods 39: 876–883.
29. McPherson, J., and N. R. Burns. 2008. “Assessing the validity of computer-game-like tests of processing speed and working memory.” Behavior Research Methods 40 (4): 969–981. https://doi.org/10.3758/BRM.40.4.969.
30. Nikolaou, I., K. Georgiou, and V. Kotsasarlidou. 2019. “Exploring the Relationship of a Gamified Assessment with Performance.” The Spanish Journal of Psychology 22: E6. https://doi.org/10.1017/sjp.2019.5.
31. Ohlms, M. L., K. G. Melchers, and U. P. Kanning. 2024. “Can we playfully measure cognitive ability? Construct-related validity and applicant reactions.” International Journal of Selection and Assessment 32: 91–107. https://doi.org/10.1111/ijsa.12450.
32. Ono, T., T. Sakurai, S. Kasuno, and T. Murai. 2022. “Novel 3-D action video game mechanics reveal differentiable cognitive constructs in young players, but not in old.” Scientific Reports 12 (1): 11751. https://doi.org/10.1038/s41598-022-15679-5.
33. Pontrelli, M. James. 1990. “A study of the relationship between practice in the use of a radar simulation game and ability to negotiate spatial orientation problems.” Ed.D. dissertation, Oklahoma State University. 9119893.
34. Quiroga, M. A., A. Diaz, F. J. Román, J. Privado, and R. Colom. 2019. “Intelligence and video games: Beyond ‘brain-games’.” Intelligence 75: 85–94. https://doi.org/10.1016/j.intell.2019.05.001.
35. Quiroga, M. A., M. Herranz, M. Gómez-Abad, M. Kebir, J. Ruiz, and R. Colom. 2009. “Video-games: Do they require general intelligence?” Computers & Education 53 (2): 414–418. https://doi.org/10.1016/j.compedu.2009.02.017.
36. Quiroga, M. Á., F. J. Román, A. Catalán, H. Rodríguez, J. Ruiz, M. Herranz, M. Gómez-Abad, and R. Colom. 2011. “Videogame performance (not always) requires intelligence.” International Journal of Online Pedagogy and Course Design (IJOPCD) 1 (3): 18–32.
37. Quiroga, M. A., F. J. Román, J. De La Fuente, J. Privado, and R. Colom. 2016. “The measurement of intelligence in the XXI century using video games.” The Spanish Journal of Psychology 19: E89.
38. Rabbitt, P., N. Banerji, and A. Szymanski. 1989. “Space Fortress as an IQ test? Predictions of learning and of practised performance in a complex interactive video-game.” Acta Psychologica 71 (1–3): 243–257.
39. Roman, F. J., P. Gutierrez, J. Ramos-Cejudo, P. A. Gonzalez-Calero, P. P. Gomez-Martin, C. Larroy, R. Martin-Brufau, C. Lopez-Cavada, and M. A. Quiroga. 2024. “Checking Different Video Game Mechanics to Assess Cognitive Abilities in Groups with and without Emotional Problems.” Journal of Intelligence 12 (1). https://doi.org/10.3390/jintelligence12010001.
40. Simons, A., I. Wohlgenannt, S. Zelt, M. Weinmann, J. Schneider, and J. vom Brocke. 2023. “Intelligence at play: game-based assessment using a virtual-reality application.” Virtual Reality 27 (3): 1827–1843. https://doi.org/10.1007/s10055-023-00752-9.
41. Thompson, O., S. Barrett, C. Patterson, and D. Craig. 2012. “Examining the Neurocognitive Validity of Commercially Available, Smartphone-Based Puzzle Games.” Psychology 3 (7): 525–526. https://doi.org/10.4236/psych.2012.37076.
42. Valls-Serrano, C., C. de Francisco, E. Caballero-López, and A. Caracuel. 2022. “Cognitive Flexibility and Decision Making Predicts Expertise in the MOBA Esport, League of Legends.” SAGE Open 12 (4): 21582440221142728. https://doi.org/10.1177/21582440221142728.
43. Ventura, M., V. Shute, T. Wright, and W. Zhao. 2013. “An investigation of the validity of the virtual spatial navigation assessment.” Frontiers in Psychology 4: 852. https://doi.org/10.3389/fpsyg.2013.00852.
44. Wang, P., Y. Fang, J.-Y. Qi, and H. J. Li. 2023. “FISHERMAN: A Serious Game for Executive Function Assessment of Older Adults.” Assessment 30 (5): 1499–1513. https://doi.org/10.1177/10731911221105648.
Appendix B. Result Table with Correlations Corrected for Unreliability
| | k | m | r (corrected) | 95% CI LB | 95% CI UB |
|---|---|---|---|---|---|
| H1: Overall relationship | 807 | 52 | 0.45 *** | 0.38 | 0.52 |
| **Test of a set of binary moderators according to hypotheses** | | | | | |
| H2A: Measurement scope cognitive ability | | | | | |
| Yes (multiple tests) | 102 | 17 | 0.56 *** | 0.48 | 0.63 |
| No (single test) | 705 | 43 | 0.43 *** | 0.36 | 0.50 |
| H2B: Measurement scope GRA | | | | | |
| Yes (multiple games) | 38 | 7 | 0.54 *** | 0.37 | 0.68 |
| No (single game) | 769 | 46 | 0.44 *** | 0.37 | 0.51 |
| H3: Computer-based measurement of cognitive ability | | | | | |
| Yes (computer-based) | 363 | 25 | 0.46 *** | 0.34 | 0.56 |
| No (paper-pencil) | 444 | 34 | 0.48 *** | 0.40 | 0.56 |
| **Test of a set of binary moderators according to research questions** | | | | | |
| RQ1: General cognitive ability/GMA measured in traditional test | | | | | |
| Yes (g/GMA) | 27 | 8 | 0.39 *** | 0.24 | 0.52 |
| No (broad or narrow abilities) | 780 | 46 | 0.46 *** | 0.39 | 0.53 |
| RQ2.A: Existing GRA | | | | | |
| Yes | 516 | 30 | 0.40 *** | 0.32 | 0.46 |
| No (specifically developed or adapted) | 291 | 22 | 0.52 *** | 0.41 | 0.62 |
| RQ2.B: GRA assessment of cognitive ability | | | | | |
| Yes (intend to assess cognitive ability) | 358 | 21 | 0.49 *** | 0.42 | 0.56 |
| No (other purpose) | 449 | 31 | 0.42 *** | 0.31 | 0.51 |

Note: k = number of correlations; m = number of samples; LB/UB = lower/upper bound of the 95% confidence interval.
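For orientation, the corrected estimates above relate to the observed correlations (reported in the table further below) through the standard correction for attenuation due to unreliability in both measures (Schmidt and Hunter 2015). A minimal worked illustration, in which the reliabilities r_xx (GRA) and r_yy (traditional test) are hypothetical values chosen only so that the numbers line up with the overall estimates:

$$
r_{c} = \frac{r_{\text{obs}}}{\sqrt{r_{xx}\, r_{yy}}}, \qquad \text{e.g.,} \quad r_{c} = \frac{0.30}{\sqrt{0.67 \times 0.67}} \approx 0.45.
$$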
Notes
1. It is important to distinguish GRAs from work simulations, as the two are easily conflated. Work simulations are contextualized assessment procedures designed to mimic key psychological and physical aspects of a job (Lievens and De Soete 2012). Both simulations and GRAs aim to assess job-relevant skills, but they differ in intent. GRAs are designed with playfulness in mind, using game mechanics to motivate and engage users. In contrast, work simulations are designed to replicate real-world work scenarios and experiences. In work simulations, tasks must closely resemble actual work behaviors (e.g., an air traffic control simulation involves directing aircraft to prevent flight conflicts). In contrast, GRAs need not resemble work scenarios: their gameplay might involve job-related, but also unrelated, activities such as bursting balloons or catching fish. The emphasis in GRAs is on using game mechanics to create an engaging experience rather than on mimicking specific job tasks. Although a GRA can include simulation elements (e.g., through the game mechanics of goals or immersion), simulation is not inherently part of a GRA (Landers and Sanchez 2022). Conversely, a work simulation that merely replicates job tasks (e.g., reading emails) without incorporating game mechanics that encourage deep immersion in the gameplay (e.g., story narratives and specified roles) would not constitute a GRA. In summary, while GRAs and work simulations overlap, the primary differentiator is the intentional use of game mechanics and the element of playfulness in GRAs, as opposed to the replication of job-related scenarios in work simulations, which need not involve game mechanics.
References
- Ackerman, Paul L. 2012. Intelligence-as-process, personality, interests, and intelligence-as-knowledge. In Contemporary Intellectual Assessment: Theories, Tests, and Issues. Edited by Dawn P. Flanagan and Erin M. McDonough. New York: Guilford Press, pp. 225–41.
- Adams, Deanne, and Rich Mayer. 2012. Examining the Connection Between Dynamic and Static Spatial Skills and Video Game Performance. Paper presented at the Annual Meeting of the Cognitive Science Society, Sapporo, Japan, August 1–4.
- Adler, Seymour, Anthony S. Boyce, and Pat M. Caputo. 2018. Employment testing. In Next Generation Technology-Enhanced Assessment. Edited by John C. Scott, Dave Bartram and Douglas H. Reynolds. Cambridge: Cambridge University Press, pp. 3–35.
- Arthur, Winfred, Jr., and Anton J. Villado. 2008. The importance of distinguishing between constructs and methods when comparing predictors in personnel selection research and practice. Journal of Applied Psychology 93: 435–42.
- Assink, Mark, and Carlijn J. M. Wibbelink. 2016. Fitting three-level meta-analytic models in R: A step-by-step tutorial. The Quantitative Methods for Psychology 12: 154–74.
- Atkins, Sharona M., Amber M. Sprenger, Gregory J. H. Colflesh, Timothy L. Briner, Jacob B. Buchanan, Sydnee E. Chavis, Sy-yu Chen, Gregory L. Iannuzzi, Vadim Kashtelyan, Eamon Dowling, and et al. 2014. Measuring Working Memory Is All Fun and Games. Experimental Psychology 61: 417–38.
- Auer, Elena M., Gabriel Mersy, Sebastian Marin, Jason Blaik, and R. N. Landers. 2022. Using machine learning to model trace behavioral data from a game-based assessment. International Journal of Selection and Assessment 30: 82–102.
- Baniqued, Pauline L., Hyunkyu Lee, Michelle W. Voss, Chandramallika Basak, Joshua D. Cosman, Shanna DeSouza, Joan Severson, Timothy A. Salthouse, and Arthur F. Kramer. 2013. Selling points: What cognitive abilities are tapped by casual video games? Acta Psychologica 142: 74–86.
- Borenstein, Michael, Larry V. Hedges, Julian P. T. Higgins, and Hannah R. Rothstein. 2009. Introduction to Meta-Analysis. Chichester: John Wiley & Sons.
- Borghetti, Davide, Carlotta Zanobini, Ilenia Natola, Saverio Ottino, Angela Parenti, Victòria Brugada-Ramentol, Hossein Jalali, and Amir Bozorgzadeh. 2023. Evaluating cognitive performance using virtual reality gamified exercises. Frontiers in Virtual Reality 4: 1153145.
- Brunswik, Egon. 1955. Representative design and probabilistic theory in a functional psychology. Psychological Review 62: 193–217.
- Buford, Charles Colby, and Brian J. O’Leary. 2015. Assessment of Fluid Intelligence Utilizing a Computer Simulated Game. International Journal of Gaming and Computer-Mediated Simulations 7: 1–17.
- Campbell, Donald T., and Donald W. Fiske. 1959. Convergent and discriminant validation by the multitrait-multimethod matrix. Psychological Bulletin 56: 81–105.
- Chicchi Giglioli, Irene Alice, Carla de Juan Ripoll, Elena Parra, and Mariano Alcañiz Raya. 2018. EXPANSE: A novel narrative serious game for the behavioral assessment of cognitive abilities. PLoS ONE 13: e0206925.
- Cretenoud, Aline F., Arthur Barakat, Alain Milliet, Oh-Hyeon Choung, Marco Bertamini, Christophe Constantin, and Michael H. Herzog. 2021. How do visual skills relate to action video game performance? Journal of Vision 21: 10.
- Deterding, Sebastian, Dan Dixon, Rilla Khaled, and Lennart Nacke. 2011. From game design elements to gamefulness: Defining “gamification”. Paper presented at the 15th International Academic MindTrek Conference: Envisioning Future Media Environments, Tampere, Finland, September 29–30; New York: Association for Computing Machinery, pp. 9–15.
- Dilchert, Stephen. 2018. Cognitive ability. In The SAGE Handbook of Industrial, Work & Organizational Psychology: Personnel Psychology and Employee Performance. Edited by Deniz S. Ones, Neal Anderson, Chockalingam Viswesvaran and Handan K. Sinangil. London: Sage Publications Ltd., pp. 248–76.
- Ellingsen, Victor J., and Randall W. Engle. 2020. Cognitive approaches to intelligence. In Intelligence. Edited by Robert J. Sternberg. Cambridge: Cambridge University Press, pp. 104–38.
- Fetzer, Michael, Jennifer McNamara, and Jennifer L. Geimer. 2017. Gamification, Serious Games and Personnel Selection. In The Wiley Blackwell Handbook of the Psychology of Recruitment, Selection and Employee Retention. Hoboken: Wiley Blackwell, pp. 293–309.
- Flanagan, Dawn P., and Erin M. McDonough, eds. 2018. Contemporary Intellectual Assessment. New York: Guilford Press.
- Foroughi, Cyrus K., Carolyn Serraino, Raja Parasuraman, and Deborah A. Boehm-Davis. 2016. Can we create a measure of fluid intelligence using Puzzle Creator within Portal 2? Intelligence 56: 58–64.
- Georgiou, Konstantina, Athanasios Gouras, and Ioannis Nikolaou. 2019. Gamification in employee selection: The development of a gamified assessment. International Journal of Selection and Assessment 27: 91–103.
- Harman, Jason L., and Kayla D. Brown. 2022. Illustrating a narrative: A test of game elements in game-like personality assessment. International Journal of Selection and Assessment 30: 157–66.
- Hawkes, Ben, Iva Cek, and Charles A. Handler. 2018. The gamification of employee selection tools. In Next Generation Technology-Enhanced Assessment. Edited by John C. Scott, Dave Bartram and Douglas H. Reynolds. Cambridge: Cambridge University Press, pp. 288–313.
- Hervas, Ramon, David Ruiz-Carrasco, Tania Mondejar, and Jose Bravo. 2017. Gamification mechanics for behavioral change: A systematic review and proposed taxonomy. Paper presented at the 11th EAI International Conference on Pervasive Computing Technologies for Healthcare, Barcelona, Spain, May 23–26.
- Holden, LaTasha R., and Gabriel J. Tanenbaum. 2023. Modern Assessments of Intelligence Must Be Fair and Equitable. Journal of Intelligence 11: 126.
- Hommel, Björn E., Regina Ruppel, and Hannes Zacher. 2022. Assessment of cognitive flexibility in personnel selection: Validity and acceptance of a gamified version of the Wisconsin Card Sorting Test. International Journal of Selection and Assessment 30: 126–44.
- Hox, Joop J. 2010. Multilevel Analysis: Techniques and Applications. New York: Routledge.
- Jones, Marshall B., William P. Dunlap, and Ina McD. Bilodeau. 1986. Comparison of video game and conventional test performance. Simulation & Games 17: 435–46.
- Kantrowitz, Tracy M., and Sara L. Gutierrez. 2018. The changing landscape of technology-enhanced test administration. In Next Generation Technology-Enhanced Assessment. Edited by John C. Scott, Dave Bartram and Douglas H. Reynolds. Cambridge: Cambridge University Press, pp. 193–216.
- Koch, Marco, Nicolas Becker, Frank M. Spinath, and Samuel Greiff. 2021. Assessing intelligence without intelligence tests. Future perspectives. Intelligence 89: 101596.
- Krebs, Christine, Michael Falkner, Joel Niklaus, Luca Persello, Stefan Klöppel, Tobias Nef, and Prabitha Urwyler. 2021. Application of Eye Tracking in Puzzle Games for Adjunct Cognitive Markers: Pilot Observational Study in Older Adults. JMIR Serious Games 9: e24151.
- Kuncel, Nathan R., Sarah A. Hezlett, and Deniz S. Ones. 2004. Academic performance, career potential, creativity, and job performance: Can one construct predict them all? Journal of Personality and Social Psychology 86: 148–61.
- Landers, Richard N., and Diana R. Sanchez. 2022. Game-based, gamified, and gamefully designed assessments for employee selection: Definitions, distinctions, design, and validation. International Journal of Selection and Assessment 30: 1–13.
- Landers, Richard N., Michael B. Armstrong, Andrew B. Collmus, Salih Mujcic, and Jason T. Blaik. 2021. Theory-driven game-based assessment of general cognitive ability: Design theory, measurement, prediction of performance, and test fairness. Journal of Applied Psychology 107: 1655.
- Landers, Richard N., Kristina N. Bauer, and Rachel C. Callan. 2017. Gamification of task performance with leaderboards: A goal setting experiment. Computers in Human Behavior 71: 508–15.
- Lang, Jonas W. B., and Harrison J. Kell. 2020. General mental ability and specific abilities: Their relative importance for extrinsic career success. Journal of Applied Psychology 105: 1047–61.
- Lievens, Filip, and Britt De Soete. 2012. Simulations. In Handbook of Assessment and Selection. Edited by N. Schmitt. Oxford: Oxford University Press, pp. 383–410.
- Lievens, Filip, and Charlie L. Reeve. 2015. Where I–O Psychology Should Really (Re)start Its Investigation of Intelligence Constructs and Their Measurement. Industrial and Organizational Psychology 5: 153–58.
- Lievens, Filip, and Paul R. Sackett. 2017. The effects of predictor method factors on selection outcomes: A modular approach to personnel selection procedures. Journal of Applied Psychology 102: 43–66.
- Lumsden, Jim, Elizabeth A. Edwards, Natalia S. Lawrence, David Coyle, and Marcus R. Munafò. 2016. Gamification of Cognitive Assessment and Cognitive Training: A Systematic Review of Applications and Efficacy. JMIR Serious Games 4: e11.
- Malanchini, Margherita, Kaili Rimfeld, Agnieszka Gidziela, Rosa Cheesman, Andrea G. Allegrini, Nicholas Shakeshaft, Kerry Schofield, Amy Packer, Rachel Ogden, Andrew McMillan, and et al. 2021. Pathfinder: A gamified measure to integrate general cognitive ability into the biological, medical, and behavioural sciences. Molecular Psychiatry 26: 7823–37.
- McGrew, Kevin S. 2009. CHC theory and the human cognitive abilities project: Standing on the shoulders of the giants of psychometric intelligence research. Intelligence 37: 1–10.
- Mead, Alan D., and Fritz Drasgow. 1993. Equivalence of computerized and paper-and-pencil cognitive ability tests: A meta-analysis. Psychological Bulletin 114: 449–58.
- Melchers, Klaus G., and Johannes M. Basch. 2022. Fair play? Sex-, age-, and job-related correlates of performance in a computer-based simulation game. International Journal of Selection and Assessment 30: 48–61.
- Nye, Christopher D., Jingjing Ma, and Serena Wee. 2022. Cognitive Ability and Job Performance: Meta-analytic Evidence for the Validity of Narrow Cognitive Abilities. Journal of Business and Psychology 37: 1119–39.
- Ohlms, Marie L., Klaus G. Melchers, and Uwe P. Kanning. 2024. Can we playfully measure cognitive ability? Construct-related validity and applicant reactions. International Journal of Selection and Assessment 32: 91–107.
- Ones, Deniz S., Stephen Dilchert, and Chockalingam Viswesvaran. 2012. Cognitive abilities. In The Oxford Handbook of Personnel Assessment and Selection. Edited by N. Schmitt. New York: Oxford University Press, pp. 179–224.
- Quiroga, Maria A., Sergio Escorial, Francisco J. Román, Daniel Morillo, Andrea Jarabo, Jesús Privado, Miguel Hernández, Borja Gallego, and Roberto Colom. 2015. Can we reliably measure the general factor of intelligence (g) through commercial video games? Yes, we can! Intelligence 53: 1–7.
- Quiroga, Maria A., Alice Diaz, Francisco J. Román, Jesús Privado, and Roberto Colom. 2019. Intelligence and video games: Beyond “brain-games”. Intelligence 75: 85–94.
- Quiroga, Maria A., Francisco J. Román, Javier De La Fuente, Jesús Privado, and Roberto Colom. 2016. The measurement of intelligence in the XXI century using video games. The Spanish Journal of Psychology 19: E89.
- Quiroga, Maria A., María Herranz, Marta Gómez-Abad, Muna Kebir, Javier Ruiz, and Roberto Colom. 2009. Video-games: Do they require general intelligence? Computers & Education 53: 414–18.
- Quiroga, Maria A., Francisco J. Román, Ana Catalán, Herman Rodríguez, Javier Ruiz, María Herranz, Marta Gómez-Abad, and Roberto Colom. 2011. Videogame Performance (Not Always) Requires Intelligence. International Journal of Online Pedagogy and Course Design 1: 18–32.
- Rabbitt, Patrick, Nicole Banerji, and Alex Szymanski. 1989. Space Fortress as an IQ test? Predictions of learning and of practised performance in a complex interactive video-game. Acta Psychologica 71: 243–57.
- Ramos-Villagrasa, Pedro J., Elena Fernández-del-Río, and Ángel Castro. 2022. Game-related assessments for personnel selection: A systematic review. Frontiers in Psychology 13: 952002.
- Roman, Francisco J., Pablo Gutierrez, Juan Ramos-Cejudo, Pedro A. Gonzalez-Calero, Pedro P. Gomez-Martin, Cristina Larroy, Ramon Martin-Brufau, Carlos Lopez-Cavada, and Maria A. Quiroga. 2024. Checking Different Video Game Mechanics to Assess Cognitive Abilities in Groups with and without Emotional Problems. Journal of Intelligence 12: 1.
- Sackett, Paul R., Charlene Zhang, Christopher M. Berry, and Filip Lievens. 2022. Revisiting meta-analytic estimates of validity in personnel selection: Addressing systematic overcorrection for restriction of range. Journal of Applied Psychology 107: 2040–68.
- Sackett, Paul R., Filip Lievens, Chad H. Van Iddekinge, and Nathan R. Kuncel. 2017. Individual differences and their measurement: A review of 100 years of research. Journal of Applied Psychology 102: 254–73.
- Sala, Giovanni, K. Semir Tatlidil, and Fernand Gobet. 2018. Video game training does not enhance cognitive ability: A comprehensive meta-analytic investigation. Psychological Bulletin 144: 111–39.
- Sanchez, Diana R., Erik Weiner, and Anand Van Zelderen. 2022. Virtual reality assessments (VRAs): Exploring the reliability and validity of evaluations in VR. International Journal of Selection and Assessment 30: 103–25.
- Schmidt, Frank L., and John E. Hunter. 2015. Methods of Meta-Analysis: Correcting Error and Bias in Research Findings. Thousand Oaks: SAGE.
- Schneider, W. Joel, and Kevin McGrew. 2012. The Cattell-Horn-Carroll model of intelligence. In Contemporary Intellectual Assessment: Theories, Tests, and Issues. Edited by Dawn P. Flanagan and Erin M. McDonough. New York: Guilford Press, pp. 99–144.
- Stanek, Kevin C., and Deniz S. Ones. 2018. Taxonomies and compendia of cognitive ability and personality constructs and measures relevant to industrial, work and organizational psychology. In The SAGE Handbook of Industrial, Work & Organizational Psychology: Personnel Psychology and Employee Performance. Edited by Deniz S. Ones, Neal Anderson, Chockalingam Viswesvaran and Handan K. Sinangil. London: Sage Publications Ltd., pp. 366–407.
- Sternberg, Robert J. 2020. Approaches to understanding human intelligence. In Intelligence. Edited by R. J. Sternberg. Cambridge: Cambridge University Press, pp. 22–46.
- Tippins, Nancy T., James Beaty, Fritz D. Drasgow, Wade M. Gibson, Kenneth Pearlman, Daniel Segall, and William J. Shepherd. 2006. Unproctored Internet testing in employment settings. Personnel Psychology 59: 189–225.
- Tippins, Nancy T., Frederick L. Oswald, and S. Morton McPhail. 2021. Scientific, Legal, and Ethical Concerns About AI-Based Personnel Selection Tools: A Call to Action. Personnel Assessment and Decisions 7: 1.
- Traylor, Zach, Ellen Hagen, Ashleigh Williams, and Winfred Arthur, Jr. 2021. The testing environment as an explanation for unproctored internet-based testing device-type effects. International Journal of Selection and Assessment 29: 65–80.
- van de Schoot, Rens, Jonathan de Bruin, Raoul Schram, Parisa Zahedi, Jan de Boer, Felix Weijdema, Bianca Kramer, Martijn Huijts, Maarten Hoogerwerf, Gerbrich Ferdinands, and et al. 2021. An open source machine learning framework for efficient and transparent systematic reviews. Nature Machine Intelligence 3: 125–33.
- van Lill, Xander, Laird McColl, and Matthew Neale. 2023. Cross-national applicability of a game-based cognitive assessment. International Journal of Selection and Assessment 31: 302–20.
- Viechtbauer, Wolfgang. 2010. Conducting meta-analyses in R with the metafor package. Journal of Statistical Software 36: 1–48.
- Wahlstrom, Dustin, Susan E. Raiford, Kristina C. Breaux, Jianjun Zhu, and Lawrence G. Weiss. 2012. The Wechsler Preschool and Primary Scale of Intelligence—Fourth Edition. In Contemporary Intellectual Assessment: Theories, Tests, and Issues. Edited by Dawn P. Flanagan and Erin M. McDonough. New York: Guilford Press, pp. 225–41.
- Weidner, Nathan, and Elisabeth Short. 2018. Playing with a purpose: The role of games and gamification in modern assessment practices. In The Cambridge Handbook of Technology and Employee Behavior. Edited by Richard N. Landers. Cambridge: Cambridge University Press, pp. 151–78.
- Weiner, Erik J., and Diana R. Sanchez. 2020. Cognitive ability in virtual reality: Validity evidence for VR game-based assessments. International Journal of Selection and Assessment 28: 215–35.
- Wu, Felix Y., Evan Mulfinger, Leo Alexander III, Andrea L. Sinclair, Rodney A. McCloy, and Frederick L. Oswald. 2022. Individual differences at play: An investigation into measuring Big Five personality facets with game-based assessments. International Journal of Selection and Assessment 30: 62–81.
| Criteria | Description & Examples |
|---|---|
| Type of study | Empirical study: reporting results of quantitative data collection; correlations between GRA and traditional cognitive ability measure. |
| GRA | Limited to technology-based GRAs. GRAs had to (i) include at least one game mechanism identified by Hervas et al. (2017) and (ii) report an objective individual performance measure unaffected by others (e.g., game score or rank). |
| Cognitive ability measure | Traditional assessment of cognitive ability that can be located in the CHC model, including paper-pencil and computer-based assessment. |
| Sample | Participants had to be healthy adults (aged 18 or older); the study had to report sample size. |
| Other criteria | Published studies in scientific journals, papers available in conference proceedings, or university repositories. Years: all available until December 2023. Language: German, English, or Dutch. |
| | k | N | m | r (observed) | 95% CI LB | 95% CI UB |
|---|---|---|---|---|---|---|
| H1: Overall relationship | 807 | 6139 | 52 | 0.30 *** | 0.26 | 0.34 |
| **Estimates based on a set of binary moderators according to hypotheses** | | | | | | |
| H2A: Measurement scope cognitive ability | | | | | | |
| Yes (multiple tests) | 102 | 4451 | 17 | 0.39 *** | 0.33 | 0.44 |
| No (single test) | 705 | 2521 | 43 | 0.28 *** | 0.24 | 0.32 |
| H2B: Measurement scope GRA | | | | | | |
| Yes (multiple games) | 38 | 1678 | 7 | 0.38 *** | 0.26 | 0.48 |
| No (single game) | 769 | 5026 | 46 | 0.29 *** | 0.25 | 0.33 |
| H3: Computer-based measurement of cognitive ability | | | | | | |
| Yes (computer-based) | 363 | 2544 | 25 | 0.30 *** | 0.24 | 0.35 |
| No (paper-pencil) | 444 | 4095 | 34 | 0.31 *** | 0.27 | 0.36 |
| **Estimates based on a set of binary moderators according to research questions** | | | | | | |
| RQ1: General cognitive ability/GMA measured in traditional test | | | | | | |
| Yes (g/GMA) | 27 | 1033 | 8 | 0.30 *** | 0.21 | 0.38 |
| No (broad or narrow abilities) | 780 | 5287 | 46 | 0.30 *** | 0.26 | 0.34 |
| RQ2.A: Existing GRA | | | | | | |
| Yes | 516 | 2288 | 30 | 0.27 *** | 0.23 | 0.34 |
| No (specifically developed/adapted) | 291 | 3851 | 22 | 0.31 *** | 0.27 | 0.36 |
| RQ2.B: GRA assessment of cognitive ability | | | | | | |
| Yes (intend to assess cognitive ability) | 358 | 3946 | 21 | 0.32 *** | 0.29 | 0.36 |
| No (other purposes) | 449 | 2193 | 31 | 0.28 *** | 0.22 | 0.33 |

Note: k = number of correlations; N = total sample size; m = number of samples; LB/UB = lower/upper bound of the 95% confidence interval.
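For readers who want to reproduce this kind of analysis: the dependency of many correlations (k = 807) on a smaller number of samples (m = 52) is typically handled with a three-level random-effects model, as described in the cited tutorial by Assink and Wibbelink (2016) and implemented in the metafor package (Viechtbauer 2010). The sketch below illustrates only that general modeling approach; the data frame `dat` and its column names (ri, ni, study, es_id) are illustrative assumptions, not the authors' actual analysis code.

```r
# Minimal sketch of a three-level meta-analysis of correlations with
# metafor (Viechtbauer 2010; Assink and Wibbelink 2016).
# Assumed (hypothetical) data frame `dat`: one row per effect size,
# with ri = observed correlation, ni = sample size, study = sample
# identifier, and es_id = effect-size identifier.
library(metafor)

# Convert correlations to Fisher's z and compute sampling variances.
dat <- escalc(measure = "ZCOR", ri = ri, ni = ni, data = dat)

# Level 1: sampling variance; level 2: effect sizes within samples;
# level 3: between-sample heterogeneity.
fit <- rma.mv(yi, vi, random = ~ 1 | study/es_id, data = dat)
summary(fit)

# Back-transform the pooled Fisher's z to the correlation metric.
predict(fit, transf = transf.ztor)
```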
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).