Article

Exploring Cognitive Variability in Interactive Museum Games

George E. Raptis

Human Opsis, 26500 Patras, Greece
Heritage 2025, 8(7), 267; https://doi.org/10.3390/heritage8070267
Submission received: 19 May 2025 / Revised: 21 June 2025 / Accepted: 30 June 2025 / Published: 7 July 2025
(This article belongs to the Section Digital Heritage)

Abstract

Understanding how cognitive differences shape visitor behavior in digital heritage experiences is essential for designing inclusive and engaging museum technologies. This study explores the relationship between cognitive level and interaction behavior, affective responses, and sensor-based engagement using a publicly available dataset from a digital museum game. Participants (N = 1000) were categorized into three cognitive levels (Early, Developing, and Advanced), and their data were analyzed across three domains: user interaction behavior, affective and performance states, and sensor-based interaction measures. Our findings suggest that sensor-level interactions are more sensitive indicators of cognitive differences than observable behavior or inferred affect. This work contributes to the heritage HCI field by highlighting the potential for cognitively adaptive systems that personalize the museum experience in real-time, enhancing accessibility, engagement, and learning in cultural settings.

1. Introduction

The cultural heritage domain has increasingly embraced digital technologies to enrich visitor experiences and promote interactive engagement with historical content. From mobile applications and gamified exhibits to immersive virtual environments, many digital interventions aim to support knowledge acquisition, engagement, and memory retention through exploratory or goal-oriented activities. These applications often draw on game-based learning principles, emphasizing active exploration, feedback, and task-based learning to enhance cultural comprehension and user immersion.
While there is growing evidence that game-based cultural heritage experiences can positively affect learning outcomes [1], affective engagement [2], and interaction behavior [3], the extent to which individual user characteristics, especially cognitive characteristics, influence such outcomes remains underexplored. Recent research has highlighted the need to account for individual cognitive characteristics, such as field dependence-independence or visualizer-verbalizer styles, when designing and evaluating cultural heritage systems [3]. These characteristics impact how users perceive, process, and act upon information embedded in digital heritage environments, often shaping their performance and overall experience.
Despite this progress, current cultural heritage personalization approaches have mainly focused on user preferences [4,5], interests [6], and behavior [7], with minimal attention to underlying cognitive attributes. Cognition-aware personalization frameworks have only recently begun to emerge, leveraging interaction traces, visual behavior, or task performance to infer cognitive traits in real-time and adjust the user experience accordingly [3]. However, existing frameworks (e.g., [3]) typically target dichotomous or well-studied cognitive styles (e.g., Field Dependence-Independence), leaving more generalized representations of cognitive development, such as cognitive level, insufficiently studied in interactive cultural heritage settings.
As an aggregate indicator of a user’s stage of cognitive development, the cognitive level can reflect a range of underlying competencies such as working memory, attention control, and processing speed. These attributes are known to modulate interaction patterns in educational technologies. However, their role within cultural heritage games remains under-investigated. In this context, assessing how users at different cognitive levels (e.g., early, developing, advanced) interact with, respond to, and engage with digital heritage applications may offer critical insights for inclusive design and interaction modeling.
To address this gap, we conducted a large-scale study (N = 1000) using the publicly available Museum Game Interaction Dataset1, which captures user behavior, affective and performance states, and sensor-based interaction metrics during gameplay. Participants were categorized into three cognitive levels (Early, Developing, and Advanced) based on their assessed abilities. This study investigated whether cognitive level significantly affects user behavior, emotional and performance states, and low-level sensor-derived interaction metrics. We examined these questions through a robust multi-method analytical approach, employing Multivariate Analysis of Variance (MANOVA), univariate ANOVAs, Chi-square tests, and post hoc comparisons.
While user interaction and affective-performance metrics showed no statistically significant differences across cognitive groups, we observed marginally significant multivariate effects in sensor-based metrics and statistically significant pairwise differences in key low-level indicators. Specifically, users in the Developing group exhibited significantly more touch interactions than those in the Early group (p = 0.042), and Advanced users responded significantly faster than Developing users (p = 0.037). These findings suggest that although global interaction behaviors may appear uniform across cognitive levels, more subtle cognitive effects manifest in micro-level engagement metrics, especially those involving real-time motor response and perceptual readiness.
The contributions of this work are threefold. First, we extend the growing body of work on cognition-aware design in cultural heritage by operationalizing cognitive level as a grouping variable and evaluating its effect on behavioral, affective, and sensor-based measures in a realistic and scalable game-based environment. Second, we demonstrate the value of fine-grained sensor-derived metrics (e.g., reaction time, touch frequency) in revealing hidden cognitive effects that conventional behavioral analysis may miss. Third, we offer empirical evidence that supports the case for micro-personalization in digital cultural heritage environments, highlighting that even marginal group-level differences can inform adaptive strategies to improve user experience across a cognitively diverse audience.
The remainder of this paper is organized as follows: Section 2 reviews relevant work on cognition-aware cultural heritage design; Section 3.1 describes the dataset, variables, and analysis procedures; Section 3.2 presents the empirical findings; Section 4 discusses the theoretical and practical implications; and Section 5 concludes with key takeaways and directions for future research.

2. Background, Related Work, and Research Objective

2.1. Theoretical Background

Cognitive theory provides a framework for understanding how individuals acquire, process, and apply knowledge in diverse contexts, including educational, cultural, and interactive digital environments. Central to cognitive theory are constructs such as working memory, attention control, processing speed, and executive functioning, which shape how users engage with information, make decisions, and perform tasks [8,9]. In Human-Computer Interaction (HCI), these cognitive characteristics play a critical role in tasks involving exploration, problem-solving, and multimodal information navigation, especially in domains such as cultural heritage, where users encounter complex visual and narrative stimuli.
A well-established area of individual difference within cognitive theory is cognitive styles. These refer to preferred ways of perceiving, organizing, and processing information. One of the most widely studied styles is Field Dependence–Independence (FD-I), which categorizes individuals as either field-dependent (FD), relying on holistic and contextual cues, or field-independent (FI), capable of analytically disembedding elements from their background [10]. In the cultural heritage domain, FI users tend to demonstrate more efficient visual search and navigation behaviors, such as fixating longer on target elements and following focused scan paths, while FD users engage more broadly and contextually [2]. These patterns have been associated with differences in task performance, engagement, and knowledge acquisition in physical and virtual museum settings.
Beyond style-based frameworks, the concept of cognitive level offers a complementary viewpoint. The cognitive level is a developmental indicator reflecting general cognitive functioning, encompassing attention regulation, mental flexibility, reasoning, and processing efficiency. Recent literature advocates for its use in interactive systems to group users according to early, developing, and advanced cognitive functioning [11]. Unlike binary style distinctions, cognitive level is better suited to modeling learning trajectories, enabling scalable adaptation in systems such as educational games or museum applications. Studies have shown that users at more advanced cognitive levels exhibit faster reaction times, fewer errors, and more strategic exploration patterns, particularly in complex visual environments [2].
Emerging work leverages multimodal data (e.g., eye gaze, touch interactions, reaction times) to infer cognitive characteristics in real-time and adapt system behavior accordingly [12]. These efforts support the development of cognition-aware technologies capable of personalizing content and interaction based on users’ cognitive profiles without intrusive testing (e.g., GEFT, VVT). Such models offer promise for digital cultural heritage, where cognitive variability often mediates users’ access to meaningful and engaging learning experiences.

2.2. Related Works

2.2.1. Games in the Cultural Heritage Domain

Games have become practical tools for enhancing cultural heritage experiences by transforming passive observation into interactive learning and emotional engagement. Over the past two decades, the integration of digital games into museums, archaeological sites, and cultural installations has evolved beyond static displays, embracing playful technologies to promote curiosity, knowledge acquisition, and retention among diverse user groups [13,14]. This evolution reflects a broader shift in HCI and museology toward participatory and user-centered design, with games playing a key role in personalizing cultural access.
Early applications of games in cultural heritage settings focused on simple quiz and puzzle-based interactions. These were later complemented by more sophisticated genres such as adventure games, treasure hunts, and serious games integrating historical narratives, spatial navigation, and problem-solving elements [13,15]. These genres often simulate real or imagined historical environments, allowing users to engage in tasks such as artifact identification, role-playing, or temporal exploration. Modern cultural heritage games increasingly adopt emerging technologies (e.g., augmented and virtual reality) to offer immersive experiences, leveraging mixed-reality environments to enable embodied interaction with historical reconstructions or holographic representations of artifacts and thus increasing user engagement through spatial presence and gesture-based interaction. These experiences have been shown to elicit stronger emotional responses and enhance memory retention compared to traditional media [16,17].
The design of cultural heritage games needs to consider a series of trade-offs. One major challenge is the tension between historical accuracy and entertainment value. Aiming for the right balance ensures educational effectiveness and sustained user motivation [14]. Another challenge concerns visitor diversity. As museum audiences include individuals of varying ages, technological familiarity, and cognitive characteristics, games must be designed to support accessibility and personalization. However, most current implementations adopt generic interaction paradigms, offering limited adaptability to individual needs or styles of exploration. Recent research emphasizes the need for cognition-aware and user-adaptive cultural heritage games. Studies increasingly rely on behavioral metrics, eye-tracking, and physiological data to assess how cognitive and perceptual differences influence gameplay and learning [3]. This trajectory aligns with broader HCI efforts to personalize user experiences based on preferences and real-time indicators of cognitive state and engagement.

2.2.2. Cognitive Differences in Games and Cultural Heritage

A growing body of literature on HCI, games, and cultural heritage has begun to explore the role of cognitive differences across diverse characteristics in shaping user behavior, engagement, and learning outcomes within games and cultural experiences.
  • Cognitive styles. The FD-I cognitive style has been one of the most frequently studied constructs, particularly due to its implications for visual exploration and information processing. FI visitors tend to engage in analytical and structured processing, while FD visitors typically adopt a more holistic, context-sensitive approach. These styles have been shown to influence interaction behavior, game navigation, and information recall in cultural heritage games [2]. Empirical studies reveal that FD and FI users demonstrate distinct gaze behaviors during gameplay: for example, FI individuals produce longer and more targeted fixations on salient areas, while FDs exhibit more dispersed and shorter scanpaths [2]. These visual behavior differences are mirrored in interaction patterns. FIs are more likely to explore detailed game elements and perform better in tasks requiring focused attention and cognitive restructuring. In contrast, FDs may benefit more from structured gameplay or socially driven scenarios [2]. Such distinctions are particularly evident in tasks involving visual search or item identification, which are common in heritage games.
  • Cognitive characteristics. Analytic reasoning and cognitive flexibility have been increasingly used to differentiate styles of game playing. For example, strategic players often display an analytic cognitive style and a high need for cognition, preferring games that require planning and deliberate decision-making. In contrast, non-strategic or impulsive players may rely more on intuitive thinking and seek immediate rewards, reflecting a faster, less reflective game approach [18]. Studies on collaborative games have shown that players’ performance and interaction styles vary significantly with their cognitive profiles, influencing how they process contextual and visual information [19].
  • Cognitive levels. A user's cognitive level influences how they interact with educational games and the benefits they derive. At the early cognitive level, educational games that focus on basic classification, color and shape recognition, and simple number concepts have been shown to enhance foundational thinking skills [20]. Games that require memory, sequencing, and logical thinking are more suitable at the developing stage, helping to deepen understanding of abstract concepts and improve critical thinking abilities [21]. At the advanced cognitive level, games that include strategic complexity, adaptive challenges, and narrative engagement provide optimal stimulation. Advanced systems that dynamically adjust game difficulty and content based on the learner’s performance (e.g., those using fuzzy reasoning models) could further enhance learning outcomes by tailoring content to individual cognitive profiles [22]. Moreover, research has shown that different cognitive levels shape how visitors interpret, interact with, and internalize digital museum content in game-based exploration (e.g., character-driven guides or VR-based tasks), influencing their ability to process narrative content, navigate interactive tasks, and sustain attention across multimodal experiences [23], and enhancing meaning-making and memory for specific cognitive levels (e.g., less cognitively advanced audiences) [24]. Recent research has expanded these investigations to XR and multimodal settings, emphasizing the need for inclusive design strategies that account for cognitive variability [25]. Emerging work highlights the need to bridge traditional user modeling approaches with real-time multimodal sensing to enable responsive and personalized learning pathways. As such, integrating cognitive characteristics into adaptive frameworks is expected to grow, fostering personalized engagement with cultural content across diverse audiences and platforms.
The aforementioned analysis highlights that recognizing cognitive differences is essential for designing games that accommodate diverse users, support inclusive educational experiences, and optimize interaction design for different cognitive profiles in the cultural heritage domain.

2.3. Research Objective

Drawing on the theoretical background and the related works, it is evident that cognitive differences shape user experience in digital games and cultural heritage environments. However, while prior studies have focused on cognitive styles and their impact on behavior, the broader construct of cognitive level remains underexplored in interactive cultural heritage contexts. Given the increasing adoption of serious games in museums and cultural institutions, understanding how users at varying cognitive development stages engage with these systems is timely and necessary. This study addresses this gap by empirically investigating whether and how cognitive level, operationalized as a categorical variable with three stages (early, developing, and advanced), affects user interaction, affective-performance states, and sensor-based behavioral metrics during gameplay. The research is guided by the assumption that, beyond preferences or demographics, latent cognitive characteristics may subtly influence how visitors navigate, respond to, and engage with complex heritage-related tasks.
To this end, we analyze data from a large-scale digital museum game dataset comprising behavioral logs, affective annotations, and sensor-derived indicators. Through a structured set of statistical analyses, we assess whether meaningful differences emerge across cognitive levels in three main domains: (i) observable interaction behavior (e.g., time spent, actions taken, hint usage), (ii) affective and performance states (e.g., engagement levels, task completion), and (iii) sensor-based interaction patterns (e.g., gaze focus, reaction time, touch activity). This study seeks to establish a core understanding of cognitive-level-driven variation in cultural heritage interaction by testing the presence or absence of statistically significant effects in these domains. This will inform the design of more adaptive, inclusive, and cognition-aware cultural heritage systems capable of delivering equitable and meaningful experiences across diverse visitor populations.

3. Study

3.1. Methodology

3.1.1. Null Hypotheses

The research objective is operationalized through the following null hypotheses, each evaluated with an appropriate statistical test:
H01. 
There is no significant difference in user interaction behavior across different cognitive levels during the museum game.
H02. 
There is no significant difference in affective and performance states across different cognitive levels during the museum game.
H03. 
There is no significant difference in sensor-based interaction measures across different cognitive levels during the museum game.

3.1.2. Dataset and Data

The Museum Game Interaction Dataset2 comprises data from 1000 participants, capturing detailed logs of user behavior during a digital museum game. The dataset includes a variety of continuous and categorical variables related to user interaction, affective and performance states, and sensor-based interaction. Each record corresponds to a unique participant-game session. The independent variable is cognitive level, a categorical factor with three levels (early, developing, and advanced) used to group participants based on assessed cognitive ability during gameplay. We note that, according to the dataset’s source publication [26], this classification was performed using a Convolutional Neural Network (CNN) model trained on gameplay performance data; however, the dataset does not include detailed information about the specific features, thresholds, or validation methods used to define these cognitive level categories. We grouped the dependent variables into three categories:
  • User interaction data capture aspects of player behavior during the game: Time_Spent represents the total time (in seconds) a participant engaged with the game and is stored as a continuous numeric variable, Total_Actions records the number of interactions performed, Correct_Responses_Ratio represents the ratio of correctly answered challenges, and Hint_Usage represents the number of hints used by the participant.
  • Affective and performance states represent high-level behavioral and emotional indicators during gameplay: Engagement_Level reflects the participant’s inferred engagement and is recorded as a nominal variable with categories such as Low, Medium, and High; Game_Completion_Status is a nominal variable indicating whether the participant completed the game session; Facial_Expression_Sentiment captures affective state using emotion recognition categories (e.g., Neutral, Frustrated); and Performance_Level is a nominal classification of overall task performance based on predefined success criteria. We note that the dataset does not provide technical documentation about the emotion recognition method used to derive these sentiment labels (e.g., model type, training data, or accuracy), which limits our ability to assess their validity within a digital museum context.
  • Sensor-based interaction measures include continuous metrics derived from real-time user interaction and physiological responses: Eye_Tracking_Focus_Duration measures the total duration (in seconds) of gaze focus on relevant game elements; Touch_Interactions counts the number of physical touch-based inputs recorded during the session; and Reaction_Time captures the average time (in seconds) the participant took to respond to in-game prompts or tasks.
We should also note that the dataset does not include metadata identifying the museum, exhibition content, or spatial configuration, limiting our ability to contextualize interactions within a specific curatorial environment.
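To make the structure of these variable groups concrete, the following is a minimal loading-and-grouping sketch in Python. The CSV file name and the exact column labels (e.g., a "Cognitive_Level" grouping column) are assumptions for illustration; the dataset documentation lists the variable names described above but not its raw schema.

```python
# Minimal sketch of how the dataset described above could be loaded and
# summarized by cognitive level. File name and the "Cognitive_Level" column
# label are hypothetical assumptions.
import pandas as pd

df = pd.read_csv("museum_game_interactions.csv")  # hypothetical file name

# Variable groups as defined in Section 3.1.2
interaction_vars = ["Time_Spent", "Total_Actions",
                    "Correct_Responses_Ratio", "Hint_Usage"]
affective_vars = ["Engagement_Level", "Game_Completion_Status",
                  "Facial_Expression_Sentiment", "Performance_Level"]
sensor_vars = ["Eye_Tracking_Focus_Duration",
               "Touch_Interactions", "Reaction_Time"]

# Descriptive statistics per cognitive level (Early, Developing, Advanced)
summary = df.groupby("Cognitive_Level")[interaction_vars + sensor_vars].agg(["mean", "std"])
print(summary)

# Frequencies of the categorical affective/performance states per level
for var in affective_vars:
    print(pd.crosstab(df["Cognitive_Level"], df[var]))
```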

3.1.3. Procedure

To evaluate the influence of cognitive level on user experience and behavior, we employed a stepwise analytical procedure tailored to the data types and structure of the dependent variables. Separate statistical procedures were applied to each group of dependent variables, as defined in the study design.

To test H01 regarding user interaction behavior, we first conducted a one-way Multivariate Analysis of Variance (MANOVA) to examine the overall effect of cognitive level on the combined continuous variables: Time_Spent, Total_Actions, Correct_Responses_Ratio, and Hint_Usage. MANOVA was selected because it allows simultaneous evaluation of multiple interrelated dependent variables, controlling for Type I error inflation and capturing shared variance across indicators of interaction behavior. Then, we followed up with univariate ANOVAs to assess the contribution of each individual dependent variable. ANOVAs were applied to identify which specific continuous variables significantly varied across cognitive levels, given a significant MANOVA result. Next, we performed post hoc pairwise comparisons (independent t-tests) between cognitive levels (Early vs. Developing, Developing vs. Advanced, Early vs. Advanced) to identify specific contrasts. We followed the same approach to test H03 related to sensor-based interaction measures, as these also consist of continuous variables suited for MANOVA and ANOVA.

To test H02 concerning affective and performance states, we applied a Chi-Square Test of Homogeneity for each categorical dependent variable (Engagement_Level, Game_Completion_Status, Facial_Expression_Sentiment, Performance_Level) to evaluate whether their distributions differed significantly across cognitive levels. Chi-square tests were appropriate for these analyses because the dependent variables were categorical, and the goal was to compare distribution frequencies across independent groups. Additional pairwise chi-square tests were conducted between each cognitive group pair to explore specific distributional differences.

Across all analyses, a significance level of α = 0.05 was used to identify statistically significant effects. In line with common practice, p-values between 0.05 and 0.075 were interpreted as marginally significant and considered indicative of potential trends requiring further investigation. Prior to conducting the tests, we assessed key statistical assumptions. For MANOVA, multivariate normality was evaluated using the Shapiro-Wilk test on each continuous dependent variable, and all variables demonstrated approximate normality (all p > 0.05). The homogeneity of variances across cognitive groups was assessed using Levene’s test; no severe violations were observed (all p > 0.05). Inter-variable correlations were also examined, with Pearson coefficients remaining below 0.75, indicating no multicollinearity. For Chi-square tests applied to categorical variables, all expected cell counts exceeded the threshold of 5, satisfying the assumption of sufficient sample size per cell.
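The sketch below illustrates this pipeline for the user interaction variables (assumption checks, one-way MANOVA, follow-up ANOVAs, and post hoc pairwise t-tests). It assumes the hypothetical data frame and column names from the previous snippet and is illustrative only, not the original analysis scripts.

```python
# Hedged sketch of the analysis pipeline for H01 (and, analogously, H03),
# assuming the hypothetical frame and column names used earlier.
import pandas as pd
from itertools import combinations
from scipy.stats import f_oneway, ttest_ind, shapiro, levene
from statsmodels.multivariate.manova import MANOVA

df = pd.read_csv("museum_game_interactions.csv")  # hypothetical file name
dvs = ["Time_Spent", "Total_Actions", "Correct_Responses_Ratio", "Hint_Usage"]
levels = ["Early", "Developing", "Advanced"]

# Assumption checks: approximate normality per DV and homogeneity of variance
for dv in dvs:
    groups = [df.loc[df["Cognitive_Level"] == g, dv] for g in levels]
    print(dv, "Shapiro p =", round(shapiro(df[dv]).pvalue, 3),
          "| Levene p =", round(levene(*groups).pvalue, 3))

# One-way MANOVA on the combined interaction variables
manova = MANOVA.from_formula(
    "Time_Spent + Total_Actions + Correct_Responses_Ratio + Hint_Usage ~ Cognitive_Level",
    data=df)
print(manova.mv_test())  # reports Wilks' lambda, Pillai's trace, etc.

# Univariate one-way ANOVAs per dependent variable
for dv in dvs:
    f, p = f_oneway(*[df.loc[df["Cognitive_Level"] == g, dv] for g in levels])
    print(f"ANOVA {dv}: F = {f:.3f}, p = {p:.3f}")

# Post hoc pairwise comparisons (independent t-tests) between cognitive levels
for g1, g2 in combinations(levels, 2):
    for dv in dvs:
        t, p = ttest_ind(df.loc[df["Cognitive_Level"] == g1, dv],
                         df.loc[df["Cognitive_Level"] == g2, dv])
        print(f"{dv} {g1} vs {g2}: t = {t:.3f}, p = {p:.3f}")
```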

3.2. Results

3.2.1. User Interaction

The results of the one-way MANOVA test showed that there was no statistically significant difference between the cognitive groups on the combined game interaction dependent variables. Moreover, the follow-up univariate one-way ANOVAs showed no statistically significant differences in any of the interaction dependent variables between the cognitive groups. No statistically significant differences were found in the multiple comparisons across the cognitive groups. Figure 1 shows the distribution of these metrics across cognitive groups.

3.2.2. Affective and Performance States

The results of the Chi-square test of homogeneity showed that no significant differences were found in the distributions of the combined dependent variables. Moreover, the results indicated no statistically significant association between cognitive level and the dependent variables. However, Game_Completion_Status showed a close to marginal group-level effect, χ²(2) = 5.04, p = 0.081, suggesting a possible trend in task completion outcomes across cognitive levels. Follow-up pairwise comparisons between cognitive groups did not yield any statistically significant differences for any of the categorical variables, although all observed p-values were consistent with the overall non-significant pattern. These results indicate that while cognitive level may have some influence on game completion behavior, the broader affective and performance-related states did not vary meaningfully between groups. Figure 2 shows the distribution of these metrics across cognitive levels.
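As an illustration only (not the authors' analysis code), the group-level test reported above for Game_Completion_Status could be reproduced from a contingency table as sketched below, under the same hypothetical file and column-name assumptions as the earlier snippets.

```python
# Hedged sketch: chi-square test of homogeneity for a categorical outcome,
# assuming the hypothetical frame and column names used in earlier snippets.
import pandas as pd
from scipy.stats import chi2_contingency

df = pd.read_csv("museum_game_interactions.csv")  # hypothetical file name
table = pd.crosstab(df["Cognitive_Level"], df["Game_Completion_Status"])
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2({dof}) = {chi2:.2f}, p = {p:.3f}")  # the paper reports chi2(2) = 5.04, p = 0.081
```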

3.2.3. Sensor-Based Interaction

The results of the one-way MANOVA test indicated a marginally significant multivariate effect of cognitive level on the dependent variables, Wilks’ Λ = 0.988, F(6, 1990) = 2.039, p = 0.057, with a small effect size (η² = 0.006). The follow-up univariate ANOVAs indicated that, while no dependent variable reached conventional significance (α = 0.05), two measures demonstrated close to marginal effects, specifically Touch_Interactions (F(2, 997) = 2.490, p = 0.083, η² = 0.005) and Reaction_Time (F(2, 997) = 2.520, p = 0.081, η² = 0.005). The multiple comparisons across the cognitive groups showed that, for Touch_Interactions, a statistically significant difference was observed between the Early and Developing groups (Mdiff = 2.09, p = 0.042), with the Developing group showing more frequent interactions, while the difference between the Developing and Advanced groups approached marginal significance (Mdiff = 1.85, p = 0.071). For Reaction_Time, a significant difference emerged between the Developing and Advanced groups (Mdiff = 13.52, p = 0.037), indicating that Advanced participants responded faster. No other statistically significant or marginal differences were observed. Figure 3 shows the distribution of these metrics across cognitive levels.
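The univariate F statistics and η² effect sizes reported here can be recovered from ANOVA sums of squares (η² = SS_effect / SS_total). The sketch below shows one way to do this with statsmodels, again under the hypothetical data-frame and column-name assumptions of the earlier snippets; it is illustrative rather than the original analysis code.

```python
# Hedged sketch: univariate ANOVAs with eta-squared effect sizes for the
# sensor-based measures, using assumed column names and a hypothetical CSV.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.read_csv("museum_game_interactions.csv")  # hypothetical file name

for dv in ["Eye_Tracking_Focus_Duration", "Touch_Interactions", "Reaction_Time"]:
    model = smf.ols(f"{dv} ~ C(Cognitive_Level)", data=df).fit()
    aov = sm.stats.anova_lm(model, typ=2)
    ss_effect = aov.loc["C(Cognitive_Level)", "sum_sq"]
    eta_sq = ss_effect / aov["sum_sq"].sum()  # eta^2 = SS_effect / SS_total
    f_val = aov.loc["C(Cognitive_Level)", "F"]
    p_val = aov.loc["C(Cognitive_Level)", "PR(>F)"]
    print(f"{dv}: F = {f_val:.3f}, p = {p_val:.3f}, eta^2 = {eta_sq:.3f}")
```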

4. Discussion

4.1. Contribution

This study offers a large-scale, data-driven exploration into the interplay between cognitive level and user behavior in digital cultural-heritage environments, contributing novel insights to the Human-Computer Interaction, Cultural Heritage, and Cognitive Science fields. By employing a structured analysis of 1000 participants interacting with a museum game, the study explored whether early, developing, and advanced cognitive levels influence three distinct dimensions of user experience: (a) behavioral interactions (e.g., time on task, number of actions), (b) affective and performance states (e.g., engagement level, completion rate), and (c) sensor-based physiological and interactional indicators (e.g., gaze focus, touch frequency, reaction time).
The main contribution of the statistical analysis lies in identifying statistically significant differences in low-level sensor-based interaction behaviors as a function of cognitive level, despite the absence of significant effects in higher-level behavioral and affective-performance indicators. While the one-way MANOVA revealed a marginal multivariate effect of cognitive level on sensor-based measures (p = 0.057), follow-up post hoc analyses showed statistically significant group differences: for Touch_Interactions, participants in the Developing group interacted significantly more frequently than those in the Early group (p = 0.042), and for Reaction_Time, the Advanced group responded significantly faster than the Developing group (p = 0.037).
These findings indicate that cognitive level subtly but meaningfully influences interaction dynamics, particularly in motor engagement and response efficiency, dimensions often hidden in aggregated behavioral metrics. Detecting such effects in continuous, sensor-derived data highlights the importance of analyzing fine-grained interaction logs when investigating cognitive diversity in user behavior. Moreover, this work contributes a methodologically rigorous process that combines statistical analyses based on MANOVA, univariate ANOVAs, and pairwise comparisons to disentangle complex cognitive effects on interaction in cultural-heritage contexts. This study establishes a foundation for future cognition-aware personalization efforts in museum applications and serious games by triangulating across data categories (interaction behavior, affective-performance states, and sensor-based metrics). Finally, this research demonstrates that while global user outcomes (e.g., engagement, game completion, total actions) may appear uniform across cognitive levels, micro-level differences in interaction behaviors do exist and can be statistically detected, providing a strong rationale for integrating cognitive modeling into the design and adaptation of cultural-heritage technologies.

4.2. Theoretical and Practical Implications

Our work introduces theoretical and practical implications. Regarding the first, this study challenges the assumption that cognitive level, as a coarse-grained static attribute, has a strong explanatory role in shaping digital behavior in cultural heritage applications. While previous work has demonstrated significant behavioral differences between cognitive styles such as field-dependence/independence or visualizer/verbalizer typologies, our findings suggest that grouping users solely by early/developing/advanced cognitive levels may overlook within-group heterogeneity or mask more complex interactions with task and system characteristics. For example, although advanced-level users exhibited faster reaction times and required fewer touch interactions in marginally significant ways, these patterns were neither consistent across all metrics nor robust under stricter significance thresholds. This implies that behavioral expression of cognitive processing in cultural-heritage games may be situational and mediated by gameplay mechanics, task complexity, and content layout. This is in line with related research, which has shown that diverse factors like game mechanics [2], team composition [19], interaction spaces [23], and game environment (e.g., VR or AR settings) [7] can influence cognitive processing by shaping users’ attention, decision-making strategies, and engagement levels during interactive gameful experiences. Therefore, the theoretical implications of our work support a shift from static categorization toward dynamic and multimodal user modeling approaches that integrate real-time behavioral signals (e.g., gaze duration on key objects, idle periods, or micro-interaction sequences) as indicators of cognitive engagement and processing efficiency.
From a practical perspective, our findings carry implications for the design of inclusive, cognitively adaptive digital cultural-heritage experiences. The absence of statistically significant differences across most metrics suggests that current game designs, such as the museum game used in this study, may already provide a balanced baseline experience for users across cognitive levels. However, the marginal and intra-cognitive-level differences observed in fine-grained measures (e.g., increased touch interaction frequency among developing-level users or slower reaction times among early-level users) may inform targeted enhancements in pacing, feedback, or scaffolding mechanisms. This practical implication suggests that adaptation and personalization efforts may be most effective when targeted toward specific user groups. For example, in our case, tailored support could be provided to early-level visitors to help improve their reaction times during cognitively demanding tasks. This approach aligns with evidence from related domains, where personalization strategies focus on FD users in information-seeking tasks, as FI users typically exhibit stronger performance in such scenarios [2,3], combine preferences with cognitive modeling to improve learning and engagement in heritage-rich games [27], and dynamically adjust game difficulty in real-time to identify struggling, optimal, and disengaged conditions aligned with specific cognitive levels to optimize engagement and learning outcomes [28]. Real-time cognitive-level adaptation can be operationalized using sensor-based indicators (e.g., prolonged reaction time and excessive touch interactions) to adjust game parameters dynamically. For example, task complexity could be scaled down for early-level users by simplifying instructions or reducing decision branches. At the same time, feedback mechanisms such as adaptive hint delivery or supportive audio prompts could be deployed in response to detected cognitive load. Conversely, for advanced users, task difficulty could be escalated and interruptions minimized to maintain optimal engagement and challenge. Furthermore, performance indicators such as marginal trends in game completion across cognitive levels suggest that more cognitively advanced users benefit from increased autonomy and reduced interruptive guidance, while those in lower cognitive groups require more scaffolded experiences with gradual content unfolding.
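To make this adaptation logic concrete, the sketch below illustrates one way such sensor-driven rules could be expressed in code. It is a hypothetical illustration: the threshold values, parameter names, and the SensorSnapshot/adapt_game abstractions are placeholders, not part of the studied game or dataset.

```python
# Illustrative sketch (not the authors' implementation) of sensor-driven,
# cognitive-level-aware adaptation. Thresholds and names are hypothetical.
from dataclasses import dataclass

@dataclass
class SensorSnapshot:
    reaction_time: float      # seconds, rolling average of recent responses
    touch_interactions: int   # touches recorded in the current task window

def adapt_game(snapshot: SensorSnapshot, level: str) -> dict:
    """Return adjusted game parameters for the current visitor."""
    params = {"difficulty": "medium", "hints": "on_request", "pacing": "normal"}
    # Prolonged reaction time or excessive touches suggest elevated load:
    # simplify the task and offer proactive scaffolding.
    if snapshot.reaction_time > 12.0 or snapshot.touch_interactions > 30:
        params.update(difficulty="low", hints="proactive", pacing="slow")
    # Advanced visitors responding quickly get more challenge and autonomy.
    elif level == "Advanced" and snapshot.reaction_time < 5.0:
        params.update(difficulty="high", hints="off", pacing="fast")
    return params

# Example: a Developing-level visitor showing signs of cognitive load
print(adapt_game(SensorSnapshot(reaction_time=14.2, touch_interactions=35), "Developing"))
```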
These insights underscore the potential value of real-time adaptation mechanisms that are unobtrusive yet responsive to latent cognitive needs and preferences, extending the growing discourse on personalization in cultural-heritage systems beyond preference-based or demographic-based tailoring to include cognition-aware design.

4.3. Limitations

This study is subject to several limitations that must be considered when interpreting the findings:
  • Cognitive level was treated as a categorical variable with three discrete levels, which, while aligned with pedagogical classifications, lacks the granularity of continuous cognitive metrics or multi-dimensional cognitive style models; this may have limited the ability to detect nuanced behavioral differences.
  • The dataset and task environment are restricted to a single museum-based game, limiting the generalizability of the findings to other cultural heritage activities (e.g., guided tours, AR/VR exhibits) or other game genres (e.g., puzzles, narrative quests) that may involve different interaction modalities and cognitive demands.
  • While gaze duration, reaction time, and interaction metrics provide dimensions along which to measure cognitive and affective engagement, other direct physiological indicators (e.g., EEG, skin conductance, heart rate variability) were not collected; such measures could enhance the interpretability and triangulation of the derived effects.
  • The analysis relies on a publicly available dataset obtained from Kaggle, which does not include documentation on ethics approval or informed consent procedures; as such, we acknowledge the limitations of secondary data use in ensuring that the original data collection adhered to formal ethical standards.
  • The classification of participants into cognitive levels was derived from the original dataset and implemented via a CNN-based model trained on gameplay data; however, no documentation is provided regarding how the classification was operationalized or validated, so the cognitive level groupings in this study should be interpreted cautiously.
  • The dataset lacks identifying information about the source museum or its exhibitions, preventing the analysis from being grounded in a specific real-world setting and limiting the ability to interpret the findings in relation to the curatorial design, spatial layout, or thematic content of the exhibition.

4.4. Future Research Directions

Building upon the insights and limitations of this study, future research could pursue the following directions:
  • Future studies should incorporate continuous cognitive metrics or validated cognitive style instruments (e.g., GEFT, VVT) to assess whether specific dimensions of cognition (e.g., perceptual speed, working memory capacity) can better predict visitor behavior and engagement in cultural heritage contexts.
  • Incorporating real-time multimodal data streams (e.g., gaze entropy, galvanic skin response, facial EMG) would allow for dynamic modeling of visitors’ states; such models could support personalized adaptations that respond to moment-to-moment fluctuations in cognitive load or emotional engagement.
  • To enhance external validity, future research should replicate this analysis across different cultural heritage applications (e.g., mobile AR tours, desktop museum simulations, immersive VR installations) and diverse populations (e.g., children, older adults, tourists with low digital literacy).
  • Future work should operationalize the observed patterns into adaptive features (e.g., dynamic pacing, content filtering, or adaptive hinting) and evaluate their impact on user experience, learning outcomes, and engagement metrics through controlled A/B testing or longitudinal field deployments.

5. Conclusions

This study examined how cognitive level, categorized as Early, Developing, and Advanced, influences user behavior in a digital museum game, analyzing data from 1000 participants across three domains: interaction behavior, affective-performance states, and sensor-based metrics. While high-level behaviors and affective states showed no significant differences across groups, sensor-level indicators revealed marginally and statistically significant differences. Specifically, Developing users interacted more frequently via touch, while Advanced users responded faster to in-game prompts. These findings suggest that cognitive variability is more evident in fine-grained interaction patterns than in aggregated behavioral outcomes. They highlight the importance of low-level metrics in identifying subtle cognitive effects that can inform adaptive cultural heritage systems. This work contributes to the development of cognition-aware design practices that personalize heritage experiences based on real-time behavioral and physiological signals. For example, game environments could use eye-tracking and reaction-time data to estimate users’ cognitive load and dynamically adapt the difficulty level, pacing, or hint delivery. Such real-time personalization would allow museums to create more inclusive and engaging digital experiences that respond to the diverse needs of visitors.

Funding

This research received no external funding.

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the author.

Conflicts of Interest

Author George E. Raptis was employed by the company Human Opsis.

Notes

1
2
See note 1 above

References

  1. Bellotti, F.; Berta, R.; De Gloria, A.; D’ursi, A.; Fiore, V. A serious game model for cultural heritage. J. Comput. Cult. Herit. 2013, 5, 17. [Google Scholar] [CrossRef]
  2. Raptis, G.E.; Fidas, C.A.; Avouris, N.M. Do Game Designers’ Decisions related to Visual Activities affect Knowledge Acquisition in Cultural Heritage Games? An Evaluation from a Human Cognitive Processing Perspective. ACM J. Comput. Cult. Herit. (JOCCH) 2019, 12, 4. [Google Scholar] [CrossRef]
  3. Raptis, G.E.; Fidas, C.; Katsini, C.; Avouris, N. A Cognition-Centered Personalization Framework for Cultural-Heritage Content. User Model. User-Adapt. Interact. 2019, 29, 9–65. [Google Scholar] [CrossRef]
  4. Basile, P.; de Gemmis, M.; Iaquinta, L.; Lops, P.; Musto, C.; Narducci, F.; Semeraro, G. SpIteR: A Module for Recommending Dynamic Personalized Museum Tours. In Proceedings of the 2009 IEEE/WIC/ACM International Joint Conference on Web Intelligence and Intelligent Agent Technology-Volume 01, Washington, DC, USA, 15–18 September 2009; WI-IAT ’09. pp. 584–587. [Google Scholar] [CrossRef]
  5. Javdani Rikhtehgar, D.; Wang, S.; Huitema, H.; Alvares, J.; Schlobach, S.; Rieffe, C.; Heylen, D. Personalizing cultural heritage access in a virtual reality exhibition: A user study on viewing behavior and content preferences. In Proceedings of the Adjunct Proceedings of the 31st ACM Conference on User Modeling, Adaptation and Personalization, Limassol, Cyprus, 26–29 June 2023; pp. 379–387. [Google Scholar]
  6. Roes, I.; Stash, N.; Wang, Y.; Aroyo, L. A Personalized Walk Through the Museum: The CHIP Interactive Tour Guide. In Proceedings of the CHI ’09 Extended Abstracts on Human Factors in Computing Systems, New York, NY, USA, 4–9 April 2009; CHI EA ’09. pp. 3317–3322. [Google Scholar] [CrossRef]
  7. Raptis, G.E.; Fidas, C.A.; Avouris, N.M. Effects of Mixed-Reality on Players’ Behaviour and Immersion in a Cultural Tourism Game: A Cognitive Processing Perspective. Int. J. Hum.-Comput. Stud. 2018, 114, 69–79. [Google Scholar] [CrossRef]
  8. Schmiedek, F.; Oberauer, K.; Wilhelm, O.; Süß, H.M.; Wittmann, W.W. Individual differences in components of reaction time distributions and their relations to working memory and intelligence. J. Exp. Psychol. Gen. 2007, 136, 414. [Google Scholar] [CrossRef] [PubMed]
  9. Wen, W.; Ishikawa, T.; Sato, T. Working memory in spatial knowledge acquisition: Differences in encoding processes and sense of direction. Appl. Cogn. Psychol. 2011, 25, 654–662. [Google Scholar] [CrossRef]
  10. Witkin, H.A.; Moore, C.A.; Goodenough, D.R.; Cox, P.W. Field-Dependent and Field-Independent Cognitive Styles and their Educational Implications. ETS Res. Bull. Ser. 1975, 1975, 1–64. [Google Scholar] [CrossRef]
  11. Hwang, G.J.; Sung, H.Y.; Hung, C.M.; Huang, I.; Tsai, C.C. Development of a Personalized Educational Computer Game based on Students’ Learning Styles. Educ. Technol. Res. Dev. 2012, 60, 623–638. [Google Scholar] [CrossRef]
  12. Steichen, B.; Wu, M.M.A.; Toker, D.; Conati, C.; Carenini, G. Te,Te,Hi,Hi: Eye Gaze Sequence Analysis for Informing User-Adaptive Information Visualizations. In User Modeling, Adaptation and Personalization, Proceedings of the 22nd International Conference, UMAP 2014, Aalborg, Denmark, 7–11 July 2014; Dimitrova, V., Kuflik, T., Chin, D., Ricci, F., Dolog, P., Houben, G.J., Eds.; Springer: Cham, Switzerland, 2014; pp. 183–194. [Google Scholar]
  13. Mortara, M.; Catalano, C.E.; Bellotti, F.; Fiucci, G.; Houry-Panchetti, M.; Petridis, P. Learning cultural heritage by serious games. J. Cult. Herit. 2014, 15, 318–325. [Google Scholar] [CrossRef]
  14. Antoniou, A.; Lepouras, G.; Bampatzia, S.; Almpanoudi, H. An Approach for Serious Game Development for Cultural Heritage: Case Study for an Archaeological Site and Museum. ACM J. Comput. Cult. Herit. (JOCCH) 2013, 6, 17. [Google Scholar] [CrossRef]
  15. Anderson, E.F.; McLoughlin, L.; Liarokapis, F.; Peters, C.; Petridis, P.; De Freitas, S. Developing serious games for cultural heritage: A state-of-the-art review. Virtual Real. 2010, 14, 255–275. [Google Scholar] [CrossRef]
  16. Pedersen, I.; Gale, N.; Mirza-Babaei, P.; Reid, S. More than meets the eye: The benefits of augmented reality and holographic displays for digital cultural heritage. J. Comput. Cult. Herit. (JOCCH) 2017, 10, 1–15. [Google Scholar] [CrossRef]
  17. Perry, S.; Roussou, M.; Economou, M.; Young, H.; Pujol, L. Moving Beyond the Virtual Museum: Engaging Visitors Emotionally. In Proceedings of the 23rd International Conference on Virtual System Multimedia (VSMM 2017), Dublin, Ireland, 31 October–4 November 2017; IEEE: Piscataway, NJ, USA, 2017; pp. 1–8. [Google Scholar] [CrossRef]
  18. Mouneyrac, A.; Lemercier, C.; Le Floch, V.; Challet-Bouju, G.; Moreau, A.; Jacques, C.; Giroux, I. Cognitive characteristics of strategic and non-strategic gamblers. J. Gambl. Stud. 2018, 34, 199–208. [Google Scholar] [CrossRef] [PubMed]
  19. Alharthi, S.A.; Raptis, G.E.; Katsini, C.; Dolgov, I.; Nacke, L.E.; Toups, Z.O. Investigating the Effects of Individual Cognitive Styles on Collaborative Gameplay. ACM Trans. Comput.-Hum. Interact. 2021, 28, 1–49. [Google Scholar] [CrossRef]
  20. Loka, N.; Diana, R.R. Improving cognitive ability through educational games in early childhood. JOYCED J. Early Child. Educ. 2022, 2, 50–59. [Google Scholar] [CrossRef]
  21. Zia, A.; Chaudhry, S.; Naz, I. Impact of computer based educational games on cognitive performance of school children in Lahore, Pakistan. Shield. Res. J. Phys. Educ. Sport. Sci. 2017, 12, 40–58. [Google Scholar]
  22. Chrysafiadi, K.; Papadimitriou, S.; Virvou, M. Cognitive-based adaptive scenarios in educational games using fuzzy reasoning. Knowl.-Based Syst. 2022, 250, 109111. [Google Scholar] [CrossRef]
  23. Lin, L.; Lu, L.; Lin, N. A Cognitive Psychoanalytic Perspective on Interaction Design in the Education of School-Age Children in Museums. In HCI International 2024 Posters; Springer: Cham, Switzerland, 2024; pp. 212–219. [Google Scholar] [CrossRef]
  24. Chang, C.W. The Cognitive Study of Immersive Experience in Science and Art Exhibition. In Augmented Cognition; Springer International Publishing: Berlin/Heidelberg, Germany, 2021; pp. 369–387. [Google Scholar] [CrossRef]
  25. Bekele, M.K.; Pierdicca, R.; Frontoni, E.; Malinverni, E.S.; Gain, J. A survey of augmented, virtual, and mixed reality for cultural heritage. J. Comput. Cult. Herit. (JOCCH) 2018, 11, 1–36. [Google Scholar] [CrossRef]
  26. Lyu, K. Research on the Interactive Design and Optimization of Museum Game-Oriented Cultural and Creative Products Based on Piaget’s Game Theory and Convolutional Neural Network. SSRN Scholarly Paper No. 5179507. 15 March 2025. [Google Scholar] [CrossRef]
  27. Naudet, Y.; Antoniou, A.; Lykourentzou, I.; Tobias, E.; Rompa, J.; Lepouras, G. Museum Personalization Based on Gaming and Cognitive Styles. Int. J. Virtual Communities Soc. Netw. 2015, 7, 1–30. [Google Scholar] [CrossRef]
  28. Qinghong, Y.; Dule, Y.; Junyu, Z. The research of personalized learning system based on learner interests and cognitive level. In Proceedings of the 2014 9th International Conference on Computer Science & Education, Vancouver, BC, Canada, 22–24 August 2014; IEEE: Piscataway, NJ, USA, 2014; pp. 522–526. [Google Scholar] [CrossRef]
Figure 1. Descriptive statistics for four continuous dependent variables across cognitive levels (Early, Developing, Advanced). Each subplot displays the mean values with 95% confidence intervals for Time Spent, Total Actions, Correct Response Ratio, and Hint Usage. Distinct colors and hatch patterns are used to differentiate cognitive levels consistently across plots: orange with forward slashes (Early), blue with backslashes (Developing), and green with crosshatch (Advanced).
Figure 2. Distributions of four nominal dependent variables across cognitive levels (Early, Developing, Advanced). Each subplot displays the frequency of participants per category for Engagement Level, Game Completion Status, Facial Expression Sentiment, and Performance Level, grouped by cognitive level. Bars are color-coded by response category, providing a visual summary of categorical trends and variations in affective and performance-related states.
Figure 3. Estimated marginal means for three sensor-based interaction measures across cognitive levels (Early, Developing, Advanced). Each subplot presents the adjusted mean values with 95% confidence intervals for Eye Tracking Focus Duration, Touch Interactions, and Reaction Time. Distinct colors and hatch patterns are used consistently to represent cognitive levels: orange with forward slashes (Early), blue with backslashes (Developing), and green with crosshatch (Advanced).
