Article

Integrating Bayesian Knowledge Tracing and Human Plausible Reasoning in an Adaptive Augmented Reality System for Spatial Skill Development

by Christos Papakostas *, Christos Troussas, Akrivi Krouska and Cleo Sgouropoulou
Department of Informatics and Computer Engineering, University of West Attica, 12243 Egaleo, Greece
* Author to whom correspondence should be addressed.
Information 2025, 16(6), 429; https://doi.org/10.3390/info16060429
Submission received: 15 April 2025 / Revised: 18 May 2025 / Accepted: 22 May 2025 / Published: 23 May 2025
(This article belongs to the Collection Augmented Reality Technologies, Systems and Applications)

Abstract:
Advanced adaptive algorithms in Augmented Reality (AR) systems can strengthen spatial skills, which are valuable across many professional domains, by providing personalized feedback in an immersive environment. This study combines Bayesian Knowledge Tracing (BKT) and Human Plausible Reasoning (HPR) to design an AR system that adapts dynamically, drawing on both quantitative and qualitative cognitive methodologies. The system records a broad range of user interactions, such as object rotations, changes in viewing perspective, and time spent on tasks, which are analyzed through probabilistic updates of skill mastery and rule-based reasoning that identifies behavioral patterns. Results from an in-depth case study show that the BKT module properly tracks improvement in spatial skills, while the HPR component highlights suboptimal strategies that obscure the learner's underlying conceptual understanding. The adaptive system then provides metacognitive hints and adjusts task difficulty, leading to improved student performance compared with standard non-adaptive AR approaches. The results show that combining BKT and HPR in an AR environment not only supports accurate task performance but also promotes greater insight into solution strategies, yielding stronger and more transferable spatial skills.

1. Introduction

Spatial skills are a core component of human cognitive functioning, significantly affecting performance across a wide range of jobs and professions [1,2,3,4,5]. Individuals with strong spatial abilities tend to be successful in professions such as engineering, architecture, medicine, and geoscience [6,7,8]. Developing spatial skills is therefore an important learning goal, with far-reaching consequences for education, employment, and general problem solving beyond formal schooling. Successive waves of technological advancement have provided instructors and researchers with research-grounded tools for improving specific cognitive skills, among which Augmented Reality (AR) has emerged as an effective means of promoting spatial skills within realistic problem contexts [9,10,11,12,13]. By overlaying digital content on everyday environments, AR offers learners an immersive and interactive experience that can be tailored to different learning styles and needs. This versatility is especially relevant for spatially based cognitive functions, including visual reorganization, object manipulation, and rotation skills.
Despite AR's considerable potential to increase learner engagement, the effectiveness of AR-driven interventions can be improved through integration with adaptive learning technologies [14,15]. Adaptation ensures that the type, intensity, and timing of instruction match learner understanding at any given moment. Bayesian Knowledge Tracing (BKT) is an established probabilistic framework for modeling how learners' skills develop over time, estimating each learner's mastery or non-mastery of specific skills [16,17,18]. Although BKT has been applied successfully in various e-learning environments, it often overlooks how learners engage with problem-solving activities. Human Plausible Reasoning (HPR) addresses this weakness by modeling learners' cognitive engagement, preferred methods, and decision-making processes during task solving [19,20]. Combining HPR with BKT promises an integrated system that not only models learner activity during AR training but also clarifies the critical components of that activity, paving the way for more targeted intervention.

1.1. Rationale for Using AR to Enhance Spatial Skills

Spatial skills include the ability to visualize, manipulate, and interpret objects in multi-dimensional spaces [21,22,23]. These competencies are essential for tasks such as reading technical diagrams, interpreting scientific data, and navigating through complex geometrical structures. Traditional approaches to developing spatial reasoning, such as paper-based activities or two-dimensional simulations, can help learners practice some aspects of spatial manipulation, but they often lack immersion and contextual relevance. AR directly addresses these limitations by adding virtual information to the real world, allowing learners to interact with 3D objects as if they were part of their immediate surroundings.
A major benefit of AR is its ability to provide instant feedback and enable individualized learning paths. Students can immediately see whether rotating virtual objects or assembling digital parts produces the correct geometric arrangement. This instant feedback invites an exploratory style, enabling users to develop spatial skills through haptic manipulation. Additionally, AR's ability to situate learning challenges in authentic contexts can increase student engagement and participation. For example, engineering students can manipulate mechanical parts in an AR environment that simulates real-world conditions, preparing them for practical work in their field. Overall, AR is well positioned to facilitate experiential learning; the system can track learner behavior, such as viewing position and manipulation patterns, and deliver targeted assistance or modify task complexity as needed.
Despite these advantages, effective learning in AR still depends on the design of the adaptive mechanisms behind the scenes. A static AR application designed for a broad audience may offer heightened interactivity but falls short of accommodating different levels of expertise; a task too simple for an advanced student could pose substantial difficulty for a novice. This limitation highlights the need for adaptive models that can differentiate between cognitive strategies on one hand and levels of expertise on the other. By integrating robust data modeling techniques such as BKT and HPR, AR environments can leverage the best of both worlds: immersive interaction and adaptive personalization. While prior studies (e.g., [16,17,18,19,20]) have separately explored BKT for learner modeling and HPR for cognitive interpretation, a comprehensive integration of both within an immersive AR environment has not been previously documented.

1.2. Role of BKT and HPR

BKT is a powerful framework for modeling student learning trajectories. Fundamentally, BKT estimates a learner's proficiency in a particular skill by analyzing performance data gathered as the learner works through different tasks. The underpinnings of BKT lie in Bayesian statistical theory, which facilitates the continuous updating of inferences about a learner's mastery as new performance data are obtained. For example, if a learner consistently produces correct answers on tasks requiring a spatial skill such as mental rotation, BKT will report a high probability of proficiency in that skill. Conversely, frequent incorrect responses yield a lower estimate of mastery, prompting the system to provide additional support. BKT is particularly beneficial in that it offers a formal mechanism for dealing with the uncertainties of learner behavior and is responsive to the dynamic process of skill acquisition.
However, BKT alone cannot capture all aspects of a learner's approach to difficult tasks. Students often arrive at correct answers through a wide variety of methods, and a correct response does not necessarily mean an efficient, conceptually sophisticated approach is being used. HPR meets this need by examining patterns of action that reflect underlying cognitive processes. For instance, if a student continually rotates a 3D model through different orientations before arriving at an answer, the HPR component interprets this as reliance on external cues rather than internal spatial visualization. Such qualitative understanding allows the system to provide forms of metacognitive scaffolding beyond what accuracy feedback alone can offer. By combining BKT's skill estimates with HPR's interpretive framework, the system constructs a richer picture of student strengths, abilities, and cognitive styles.
The complementary benefit of BKT and HPR lies in their ability to provide both quantitative and qualitative understanding. BKT is mainly directed at answering, "Has this student mastered skill X, Y, or Z?" In contrast, HPR addresses the far richer question, "What procedures does the student use to solve this problem, and what do these procedures suggest about conceptual understanding?" Combining these methods within an AR platform allows for dynamic, moment-by-moment intervention based on both skill learning and cognitive processes. As distinctive interaction patterns reveal difficulties or faulty strategies, the system can respond with immediate, specific remediation or adjust subsequent task difficulty.
To the best of our knowledge, this is the first study to integrate Bayesian Knowledge Tracing and Human Plausible Reasoning into a unified adaptive framework within an Augmented Reality learning environment for spatial skill development. This novel combination allows for both probabilistic mastery estimation and strategy-based metacognitive feedback, bridging a gap not yet addressed in the literature.

1.3. Research Questions and Objectives

Building on the theoretical and practical advantages described above, this research explores the effectiveness of combining BKT and HPR in an AR environment to enhance spatial skills. The study is guided by three main questions:
  • How accurately can the integrated BKT-HPR model diagnose skill mastery and cognitive strategies in real time during AR-based spatial tasks? (RQ1)
  • What impact does the integrated adaptive system have on learners’ overall spatial reasoning and problem-solving approaches compared to a non-adaptive AR environment? (RQ2)
  • What are learners’ perceptions regarding the adaptivity, feedback quality, and strategy support offered by the integrated BKT-HPR augmented reality system? (RQ3)
The first question refers to the technical effectiveness of the models, specifically their ability to evaluate learner interaction with low error and provide salient feedback. The second question concerns the pedagogical effectiveness of the combined framework: learner outcomes obtained with the adaptive features are compared with those obtained using a static version of the AR module, with the aim of identifying measurable improvements in accuracy, efficiency, and the diversity of strategies employed. The third question refers to learners' subjective experiences with the system, specifically their perceptions of its adaptivity, the usefulness of the feedback provided, and the extent to which it supported their development of effective problem-solving strategies.
In light of these questions, the main goal is to design and evaluate an AR application that employs BKT and HPR simultaneously. The system is designed to gather real-time data about user performance measures (such as successes, mistakes, and time to task completion) as well as behavioral measures (such as repeated tries and changes in viewing direction), which are subsequently used to generate adaptive support. A further goal is to support instructors and instructional designers by suggesting guidelines for incorporating similar adaptive systems in other domains that require spatial thinking. By enabling learner data to be generated and evaluated at runtime, the proposed system provides a systematic framework for incorporating immersive technology into adaptive pedagogical approaches.
In sum, this study aims to advance the field of AR-based learning by illustrating how BKT and HPR can be combined to deliver targeted, cognitive-level feedback. The hope is that these methods will support learners not just in achieving correct answers, but in developing deeper spatial insights and more flexible approaches to problem solving. The rest of the paper is organized as follows. Section 2 presents the relevant background literature, Section 3 the conceptual framework, Section 4 the system design, Section 5 presents the evaluation results and discussion, while Section 6 concludes the article.

2. Background and Related Work

2.1. Overview of Spatial Skill Development Strategies

Spatial abilities refer to skills related to visualizing, conceiving, and manipulating objects within two- or three-dimensional space [24,25,26]. These skills encompass a wide range of functions, including spatial visualization, object manipulation, perspective taking, and mental rotation [27,28]. Empirical research shows that spatial skills are instrumental for success across most fields of study, especially science, technology, engineering, and mathematics (STEM), where successful performance depends heavily on understanding geometric shapes and spatial manipulation [29]. Importantly, spatial skills are trainable: formal training programs can increase performance at both educational and occupational levels.
Traditional methods of improving spatial skills often include paper-based activities such as drawing or working on exercises using orthographic projection [30,31]. While such methods can promote a deeper comprehension of geometric relationships, they may not have the requisite engagement and richness to effectively address the unique requirements of varied learners. Modern approaches have leveraged digital simulations, computer-based learning modules, and interactive software to develop spatial reasoning skills [32,33]. In digital environments, students are given the ability to manipulate rotations, transformations, and scaling actions that realistically model real-world situations. In addition, such environments support the automatic gathering of data, thus enabling researchers to monitor user progress and adjust pedagogical approaches accordingly.
These interventions are based on a set of educational principles. First, a progressive ordering of tasks is required, with foundational configurations preparing learners for increasingly advanced visual representations. Second, prompt feedback is typically beneficial, helping learners recognize which cognitive or physical procedures are involved in visualizing and manipulating objects. Finally, motivational considerations are of great importance: emphasizing relevance to everyday experience and providing a variety of exercise types have been shown to increase both motivation and retention. Effective mechanisms exist for putting these principles into practice, but applying them at an individual, adaptive level remains a serious challenge. Many resources provide generic exercises that do not adapt dynamically to an individual's changing spatial abilities or strategic inclinations.

2.2. Prior Work on BKT for Adaptive Learning Systems

BKT is a probabilistic model that has been used extensively in intelligent tutoring systems to estimate a student's mastery of specific skills [34]. Originally applied in domains such as mathematics and programming, BKT gained wide adoption because it provides an interpretable and transparent model of how skills are learned. In essence, BKT relies on four parameters for each skill: the initial probability of mastery, the probability of learning the skill after a practice opportunity, and the probabilities of guessing correctly or slipping on a response [35]. As the mastery estimate is iteratively updated based on learners' performance across tasks, BKT allows for dynamic measurement of whether or not a learner has achieved mastery.
Within adaptive learning systems, BKT is often used to determine the appropriate degree of scaffolding or task difficulty for future exercises [16,36,37,38]. For example, once the model indicates a high probability that a student has understood a specific concept, the system can offer a more advanced task. Conversely, persistently low confidence ratings may lead to remediation material or extra hints being provided. Past research has indicated that pedagogical decisions based on BKT estimations can increase learning efficiency, optimize task engagement duration, and improve learner motivation [18,39,40].
Despite its effectiveness, BKT alone may not fully capture how learners arrive at a solution. For example, learners may repeatedly guess and end up with correct answers, or use suboptimal procedures that happen to lead to the correct solution. Such behavior can produce an inaccurate picture of skill acquisition if only accuracy measures are considered. To address this drawback, researchers have attempted to incorporate contextual cues that explain the cognitive paths learners follow. Several scholars have proposed new measures, or altogether different models, to account for factors such as hint usage, time spent, or shifts in approach. Including these richer data sources can significantly improve both the sensitivity and the reliability of mastery measurement, thus enabling timely and accurate intervention.

2.3. Applications of HPR in Educational Contexts

HPR attempts to bridge interpretative gaps that accuracy-focused methodologies often neglect. Informed by the assumption that human comprehension arises from probabilistic and situated thought, HPR models attempt to capture the qualitative subtleties inherent in student thinking [41,42]. Plausible reasoning subsumes heuristic methods, intuitive justifications, analogical reasoning, and various other cognitive approaches that shape student responses to new problems [43].
In educational research, HPR has been explored as a means of isolating misconceptions or distinctive but salient methods. For example, a student may habitually use a corner-first approach to building geometric shapes. While this technique is not necessarily the most effective, its repeated use indicates reliance on concrete cues available in the student's immediate situation. Through early identification of these methods, instructors or adaptive programs have a chance to intervene and help students generalize to more abstract methods [44]. Thus, HPR adds a dimension to analyses beyond simple right and wrong responses, revealing cognitive processes that might otherwise be lost.
Combining automated learning systems with HPR improves the capacity for informed intervention, discouraging learners from relying on guesswork or superficial problem-solving. For example, during a geometry problem, if a learner repeatedly attempts solutions through arbitrary manipulation, an HPR module can infer a lack of strategic planning and introduce a metacognitive cue such as, "Try visualizing the complete shape prior to reassembling pieces". It is important to stress that HPR does not replace quantitative models such as BKT. Instead, it complements them, enabling probability estimates to be adapted or refined and ensuring that adaptive feedback is consistent with the learner's observed patterns of reasoning. This is especially important in spatial domains, where successful manipulation may hide inefficient methods or weak conceptual understanding.

2.4. Existing AR Approaches for Spatial Training

AR has been explored for a variety of educational applications because it merges physical and digital realms, thus creating interactive learning experiences [45]. In spatial training, AR provides an immediate, visually rich environment where learners can rotate, assemble, and dissect 3D objects in real time [46]. These features are beneficial in domains such as geometry, engineering design, and medical training, where spatial visualization is crucial. In geometry education, for instance, students might use AR apps to examine cross-sections of three-dimensional solids, develop an intuition for volume, or experiment with angles and rotations.
In educational technology research, learners’ perceptions of adaptivity and feedback quality have been shown to significantly influence engagement, motivation, and strategic learning behavior (e.g., [9,10,11,12,13,14,15]). Incorporating user experience into the evaluation of intelligent systems has thus become essential for validating both usability and pedagogical effectiveness.
One of the key advantages of AR-based spatial tasks is the ability to overlay context-sensitive guides or step-by-step instructions on the learner's field of view [47,48,49,50]. This approach can highlight relevant features of a virtual 3D object, helping learners connect abstract concepts to concrete representations. However, many of these AR solutions remain non-adaptive; they present the same sequence of tasks and instructions to every user, regardless of individual progress or approach. Some more advanced AR platforms collect performance data, such as the time spent on each manipulation or the angle of device rotation, but do not necessarily use these inputs to update the system's understanding of the learner [51,52,53,54].
There is growing emphasis on integrating AR with data-driven models to personalize the user experience. With the inclusion of BKT, an AR system can track a user's mastery of skills almost in real time and adjust problem difficulty or provide targeted hints accordingly. Along similar lines, an HPR layer can recognize learners' strategies and intervene with help that targets deficiencies. While initial results indicate that adaptive AR can significantly enhance performance and motivation, large-scale studies and controlled methodologies are still under development. There is clear potential to leverage modern computing capabilities, particularly in mobile and wearable environments, to vary virtual components and guidance based on dynamic evaluations of learners.

3. Conceptual Framework

3.1. Explanation of the BKT Model for Skill Acquisition

BKT is fundamentally a probabilistic model of learners' development from a state of limited understanding to mastery of specific skills. It assumes that each skill can be characterized by a small set of parameters that predict future learning. Traditionally, the classical BKT model contains four parameters for every skill:
  • Initial knowledge (p_init): the probability that a learner already knows the skill before engaging in any tasks;
  • Learning rate (p_learn): the probability that a learner will acquire the skill after an instructional or practice opportunity;
  • Guess probability (p_guess): the likelihood that a learner responds correctly despite not having mastered the skill;
  • Slip probability (p_slip): the likelihood that a learner responds incorrectly despite having mastered the skill.
Once the four parameters are defined (p_init, p_learn, p_guess and p_slip), the BKT model updates the estimated probability that a learner has mastered a given skill after each task.
For a correct response observed at time t given the current mastery probability P(Lt), the conditional probability that the learner actually knew the skill is updated using Bayes’ rule as follows:
$$P(L_t \mid \mathrm{correct}) = \frac{P(L_t)\,(1 - p_{\mathrm{slip}})}{P(L_t)\,(1 - p_{\mathrm{slip}}) + (1 - P(L_t))\,p_{\mathrm{guess}}} \tag{1}$$
In contrast, if an incorrect response is observed, the updated probability becomes:
$$P(L_t \mid \mathrm{incorrect}) = \frac{P(L_t)\,p_{\mathrm{slip}}}{P(L_t)\,p_{\mathrm{slip}} + (1 - P(L_t))\,(1 - p_{\mathrm{guess}})} \tag{2}$$
After this immediate update based on the response, the overall learning process is modeled by incorporating the learning probability p_learn. The mastery probability is then updated for the next time step as follows:
$$P(L_{t+1}) = P(L_t \mid \mathrm{response}) + \big[1 - P(L_t \mid \mathrm{response})\big]\,p_{\mathrm{learn}} \tag{3}$$
where P(L_t ∣ response) is replaced by either P(L_t ∣ correct) or P(L_t ∣ incorrect), depending on the observed outcome.
These equations provide the quantitative backbone of the BKT framework, allowing the system to dynamically refine its estimate of a learner’s mastery after each interaction. This in turn supports more precise and adaptive interventions within the AR environment.
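For concreteness, the following minimal Python sketch implements Equations (1)-(3); the function name and variable names are illustrative rather than an excerpt of the system's actual code, and the example uses the parameter values reported in Section 4.1.

```python
# Minimal sketch of the BKT update in Equations (1)-(3).
# Function and variable names are illustrative, not taken from the system's code.

def bkt_update(p_mastery, correct, p_slip, p_guess, p_learn):
    """Return P(L_{t+1}) after observing one correct or incorrect response."""
    if correct:
        # Equation (1): posterior mastery given a correct response
        numerator = p_mastery * (1.0 - p_slip)
        denominator = numerator + (1.0 - p_mastery) * p_guess
    else:
        # Equation (2): posterior mastery given an incorrect response
        numerator = p_mastery * p_slip
        denominator = numerator + (1.0 - p_mastery) * (1.0 - p_guess)
    posterior = numerator / denominator
    # Equation (3): allow for the chance of learning during this practice opportunity
    return posterior + (1.0 - posterior) * p_learn


# Example: one correct answer, using the parameter values reported in Section 4.1
p = 0.25  # p_init
p = bkt_update(p, correct=True, p_slip=0.10, p_guess=0.20, p_learn=0.15)
print(round(p, 2))  # 0.66: the mastery estimate rises after a correct response
```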
Whenever a learner interacts with a task that targets a particular skill, BKT updates the estimated probability of mastery based on the learner’s performance. For example, a correct response may suggest that the student has a higher probability of knowing the skill, but the guess parameter remains part of the equation in case the success was due to chance. Likewise, an incorrect response might lower the probability of mastery, yet the slip parameter accounts for mistakes made by an otherwise knowledgeable learner. Over multiple trials, this iterative updating process forms a trajectory of skill mastery, revealing both short-term fluctuations and overall progress.
One strength of BKT is its dynamic nature. After each observed performance, the model recalculates the likelihood that the learner has or has not mastered the skill. Educators and adaptive systems rely on this real-time feedback to tailor instruction, adjusting task difficulty or providing additional support when mastery seems incomplete. In practice, BKT can be implemented in various ways, often with slight modifications to the underlying formulas. Regardless of these variations, the key idea remains the same: each performance instance reveals new information about the learner's proficiency, and that information reshapes the probability estimates about future performance. This continual process of inference helps in deciding how and when to intervene with targeted hints, scaffolding, or more advanced tasks.
BKT has enjoyed popularity in intelligent tutoring systems for mathematics, programming, and other structured domains, thanks in large part to its transparent approach to modeling. Educators can review the parameters that drive the model and link them to pedagogical decisions. For instance, a high slip rate might indicate that the content is error-prone or that instructions are unclear. A high guess rate might signal the need for additional checks to confirm genuine mastery, rather than simple guessing. Despite these advantages, BKT’s classical form primarily focuses on correctness data and does not always capture the pathways or strategies leading to that correctness. It offers valuable insight into what a student knows, but not necessarily how the student arrives at the answer.

3.2. Explanation of HPR for Interpreting Learner Strategies

HPR presents a model that outlines cognitive processes utilized by humans during learning situations. Rather than simply examining a response for correctness, HPR focuses on how a learner justifies solving a specific problem. It includes heuristic methods, empirical approaches, analogical reasoning based upon past experiences, and various signs of strategic methods. HPR, in essence, contains those qualitative features that are inherent to problem-solving activity.
Cognitive studies demonstrate that users apply different tactics to reach the same results. For instance, one learner may mentally rotate a virtual object before acting, whereas another may repeatedly manipulate the device or object to examine different perspectives, and a third may generate random possibilities until an effective pathway is found. HPR allows these patterns to be identified through an examination of complete histories of user interactions, the ordering of options tried, and the time devoted to different steps.
To quantitatively capture how these confidence weights influence the interpretation of learner behavior, we define an effective confidence threshold. Let θ denote the base confidence threshold. Suppose that when a particular rule i is triggered, it contributes a confidence weight c_i with an associated scaling factor α_i. Then, if multiple rules are activated, the effective threshold θ_effective is given by:
$$\theta_{\mathrm{effective}} = \theta - \sum_{i=1}^{n} \alpha_i\, c_i \tag{4}$$
This formulation shows that as the cumulative influence of triggered rules increases by summing their weighted contributions, then the system lowers the effective threshold. This adjustment makes the system more conservative in attributing mastery, thereby supporting more targeted interventions.
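A minimal sketch of Equation (4) follows, assuming the subtraction form implied by the surrounding text; the rule weights and scaling factors in the example are hypothetical.

```python
# Minimal sketch of Equation (4): the base threshold theta is lowered by the
# weighted contributions (alpha_i * c_i) of the HPR rules that fired.
# The example values below are hypothetical.

def effective_threshold(theta, triggered_rules):
    """triggered_rules: list of (alpha_i, c_i) pairs for the rules that fired."""
    return theta - sum(alpha * c for alpha, c in triggered_rules)


# Two rules fire with confidence weights 0.7 and 0.6, each scaled by alpha = 0.1
print(round(effective_threshold(0.80, [(0.1, 0.7), (0.1, 0.6)]), 2))  # 0.80 - 0.13 = 0.67
```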
From a learning point of view, these differences are of the utmost significance. Two students can give correct answers, yet one can use a more structured or conceptually based approach. Without in-depth analysis, these variations may go undetected, and an adaptive system may be misled into believing the two students have identical mastery levels. In contrast, HPR can clarify profound differences in approaches, thus enabling focused intervention. For example, if the system detects a proclivity for over-relying on guessing, it may prompt the student to formulate a plan or outline concepts ahead of time before proceeding to the next step. In addition, if a student is having trouble with repetitive sequences, HPR can prompt the investigation of different strategies.
Another advantage of HPR is its versatility. It can be used across a wide range of fields, as long as a mechanism is available for tracking learner performance. Furthermore, results obtained through HPR often map more directly onto actual problem-solving behavior in realistic situations. Researchers and instructors can relate these results to broader cognitive or metacognitive skills, such as planning, monitoring, and self-assessment. Whereas BKT provides a probabilistic estimate of mastery, HPR explains why the observed results occurred, a crucial distinction if the goal is to help learners build strong strategies rather than merely arrive at correct answers.

3.3. Illustration of How These Two Models Complement Each Other

Within an AR learning environment, BKT and HPR provide complementary information. BKT tracks a student's progress toward mastery of basic spatial skills, such as object assembly and mental rotation. If a student shows repeated proficiency at rotating virtual objects, BKT will update its mastery estimate for that skill upward accordingly. Conversely, repeated mistakes signal the need for more practice.
Nevertheless, correctness data alone may not reveal whether the learner is performing manipulations through repetitive physical movements of the tablet or phone rather than developing an internalized representation. HPR comes into play here through analysis of these behavioral patterns; heavy reliance on extrinsic cues can manifest as repeated switching of perspective, discontinuous dragging motions, or extended periods of watching between actions. While BKT could deem the learner "proficient" based only on correct results, HPR can detect poor or inefficient practices that affect the overall learning process.
This two-layer approach makes for a richer adaptation logic. Suppose the BKT layer deems the learner highly accurate in mental rotation tasks, which traditionally indicates readiness for more advanced challenges. However, HPR has found that most tasks were solved through repeated trial and error. Combining these signals, the system might decide to provide more advanced tasks but also introduce scaffolding that fosters internal visualization. For example, the system could temporarily restrict the ability to rotate the device, nudging learners to form internal mental images. Alternatively, it might offer hints such as, “Try to predict the final orientation before tapping on the object”, thereby encouraging metacognitive awareness.
Alternatively, imagine a student who seems tentative, with BKT implying relatively low mastery probabilities for several spatial subskills. HPR may show that this student is careful and methodical, reviewing each component individually before acting, an analytical approach that simply requires practice or coaching. Instead of assigning remedial exercises, the system may provide short cues or a quick demonstration to affirm the student's careful approach, reinforcing thoroughness and patience while avoiding frustration.
These examples illustrate that a combination of BKT and HPR offers a richer basis for adaptation. BKT produces a probability estimate for each skill and thus gives educators and systems an accurate estimate of a learner's abilities at a particular point in time. HPR supplements this view by making the learner's style of reasoning explicit. The system is not constrained to base feedback or intervention solely on accuracy, so interventions can be adapted to fit a learner's individual style. This pairing is especially beneficial for improving spatial skills, because short-term accuracy may hide guessing or reliance on extraneous aids. By attending to both skill level and underlying strategy, the AR environment can facilitate deep conceptual development.
Additionally, the integration of this framework has strong implications for data-informed educational research. BKT outputs provide quantitative measures of learner progress, while the qualitative profiles generated through HPR offer insight into the components key to mastery. Researchers can investigate connections between specific strategic styles and accelerated skill development. They can also investigate how specific interventions affect both accuracy rates and changes in strategy. Such a dual perspective offers avenues for optimizing AR tasks to serve a wide variety of learner profiles, potentially moving toward truly personalized education at scale.
In summary, BKT and HPR operate in tandem to create a comprehensive conceptual framework that tracks both the "what" and the "how" of learning. BKT offers a statistical understanding of skill mastery, informing when to challenge a learner with more complex tasks or provide focused practice. HPR adds the interpretive dimension, allowing the system to identify and guide the cognitive processes behind correctness. By merging these two approaches within AR, educators and system designers can promote not only accurate performance but also solid, enduring strategies that advance spatial competence. This synergy has the potential to transform educational technology in ways that build robust, transferable learning outcomes.

4. System Design and Implementation

4.1. System Architecture and Workflow

Formulating a robust framework for adaptive spatial training requires coordinating several components: the AR user interface, the data processor, the learner-modeling elements (Bayesian Knowledge Tracing and Human Plausible Reasoning), and the adaptation logic for providing corrective feedback and advancing tasks [55]. The overall system design typically follows a layered approach (Figure 1), with data and control signals passing between layers in an orderly fashion.
  • Client-Side AR Interface: The client application, running on a mobile device or a head-mounted display, captures the learner’s real-world environment via the camera. Virtual 3D elements are rendered over this feed, allowing users to manipulate digital objects in real time. The AR application was developed using the Unity 3D engine in combination with AR Foundation and ARCore (for Android devices), enabling real-time environment tracking, gesture-based manipulation, and dynamic object rendering within the learner’s physical surroundings. The interface supports device orientation sensing and allows seamless overlay of 3D models that respond to user interaction, such as touch-based rotation or drag-and-drop assembly. This technical foundation ensured that the AR environment could support fine-grained interaction monitoring necessary for adaptive reasoning.
  • Interaction Manager: On the client side, an interaction manager module logs every user action (object moves, rotations, success/failure messages, and other relevant events). When a user attempts to assemble a 3D object, the manager tracks each manipulation: how the user rotates or repositions the elements, how often they request hints, and the duration of each sub-step. These data points form the raw input for the adaptive engine. The interaction manager typically pre-processes these data (e.g., time-stamping events, compressing repeated actions) before sending them to the central server.
  • Server-Side Learning Analytics: While some AR applications can function in an entirely local mode, a server-side analytics component is advantageous when training multiple learners or collecting large-scale data. The server ingests the event streams and applies advanced algorithms, including BKT for estimating skill mastery and HPR for identifying strategy patterns. This centralized approach allows for integrated monitoring of user progress across multiple tasks and sessions.
  • BKT Module: Knowledge modeling in the system is Bayesian, realized through BKT. This approach enables probabilistic estimation of a learner's mastery level for each spatial skill in real time, based on observed correct and incorrect responses. The BKT module requires precise logs of correct and incorrect attempts, together with the slip and guess probabilities and the current mastery estimate for each individual spatial ability. For instance, "mental rotation", "object assembly", and "symmetry recognition" are modeled as separate abilities, each with its own set of parameters, which are adjusted after every user interaction with a relevant task. The updated mastery probabilities serve as key inputs to subsequent system decisions, for example, whether to increase task complexity or to provide a gentle nudge. BKT was selected as the learner modeling approach because it provides real-time, probabilistic estimates of student mastery, which suit structured, skill-based learning environments such as spatial reasoning tasks. Compared with fixed-rule systems or heuristic thresholds, BKT offers a more flexible and evidence-driven mechanism for adjusting task complexity, enabling the system to keep learners within their optimal zone of development. Its interpretability and proven success in prior educational applications make it a reliable and pedagogically grounded choice for adaptive learning environments. Standard parameter values were initially set as follows: initial knowledge probability p_init = 0.25, learning rate p_learn = 0.15, guess probability p_guess = 0.20, and slip probability p_slip = 0.10. These values were based on prior literature and calibrated during pilot testing to reflect the pace of skill acquisition in spatial tasks. Each skill (e.g., mental rotation, 3D assembly) had its own BKT tracker, updated after each task step based on success or error events.
  • HPR Module: Alongside the BKT module, the HPR module analyzes learners' behavioral tendencies. Whereas the BKT model treats a high number of correct answers as a strong indication of mastery, the HPR module can detect signs of over-reliance on trial-and-error procedures or external perspectives. The module performs contextual comparison of the user's actions, for example, patterns of object manipulation and the frequency of perspective changes. These patterns yield interpretative results such as "inadequate planning", "over-reliance on external perspectives", "goal-driven behavior", or "suboptimal approach". Based on these results, the module subtly adjusts the user's skill state or triggers an intervention to encourage better habits. The HPR module employed a rule-based inference engine with heuristics derived from cognitive strategy patterns. For example, a rule such as "IF object rotations > 5 AND viewpoint changes > 3 THEN classify as 'trial-and-error strategy'" carried a confidence weight of 0.7 (a minimal sketch of such a rule evaluator appears after this list). Other rules were similarly defined for detecting behaviors such as externalization bias, impulsive execution, or metacognitive hesitation. These rules were applied using a logic-based evaluator triggered every 10 s or at the end of each task. The integration of HPR and BKT scores was governed by an adaptive weighting factor β = 0.5, balancing quantitative mastery probability and qualitative strategic classification for decision making in the adaptation engine.
  • Adaptation Engine: Central to the system is the adaptation engine, which consolidates the inputs from BKT and HPR to make real-time decisions about intervention. For instance, if BKT shows a high probability of mastery for rotations while HPR indicates consistent reliance on device movement, the adaptation engine maintains the existing level of skill-linked tasks (since the user's accuracy is consistent) and provides metacognitive guidance aimed at stimulating internal spatial reasoning. On the other hand, if BKT indicates low confidence in mastery, the engine introduces step-by-step instructions or switches to easier exercises. The engine issues instructions or adjusts the AR interface, thereby shaping future learner actions. It is important to note that HPR does not replace BKT. Instead, it supplements it by adjusting interpretation thresholds, such as confidence levels or mastery thresholds, based on strategic behavior, leading to more precise and personalized interventions (Figure 2).
  • Data Storage and Reporting: All relevant interactions and model states are stored in either a relational or a document-oriented data store. Educators and researchers can access dashboards that show learners' skill development, classify strategies, and offer detailed analyses of progress. These reports are useful for optimizing tasks, detecting misconceptions, and enabling research into spatial cognition.
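The sketch below, referenced in the HPR Module description above, illustrates one way such a rule-based evaluator could be organized. The rule thresholds and the 0.7 and 0.8 confidence weights come from the text; the data structure, function names, and feature keys are assumptions made for illustration only.

```python
# Minimal sketch of a rule-based HPR evaluator of the kind described above.
# Rule thresholds and confidence weights follow the text; everything else
# (feature keys, class and function names) is an illustrative assumption.

from dataclasses import dataclass
from typing import Callable, Dict, List


@dataclass
class HPRRule:
    label: str                                   # strategy classification asserted by the rule
    condition: Callable[[Dict[str, float]], bool]
    confidence: float                            # confidence weight c_i


RULES: List[HPRRule] = [
    HPRRule("trial-and-error strategy",
            lambda f: f["object_rotations"] > 5 and f["viewpoint_changes"] > 3,
            confidence=0.7),
    HPRRule("impulsive execution",
            lambda f: f["completion_time_s"] < 30 and f["pause_count"] == 0,
            confidence=0.8),
]


def classify_strategies(features: Dict[str, float]) -> Dict[str, float]:
    """Return the label and confidence weight of every rule whose condition holds."""
    return {rule.label: rule.confidence for rule in RULES if rule.condition(features)}


# Example: features aggregated by the interaction manager for one task attempt
attempt = {"object_rotations": 6, "viewpoint_changes": 4,
           "completion_time_s": 41, "pause_count": 2}
print(classify_strategies(attempt))  # {'trial-and-error strategy': 0.7}
```

In a full deployment the returned labels and weights would feed the adaptation engine alongside the BKT mastery estimates, as described for the β-weighted integration above.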
The workflow typically proceeds as follows:
  • The learner opens the AR application, which initializes the device’s camera tracking and loads the relevant 3D models.
  • The user selects or is assigned a task (e.g., assemble a 3D shape).
  • As the user manipulates the virtual objects, the interaction manager logs each event (rotations, placements, errors).
  • These events are sent to the server or local analytics module. The BKT and HPR modules process the data to update mastery probabilities and strategy classifications.
  • The adaptation engine merges these signals to decide how to respond (e.g., offering a hint, adjusting difficulty, or encouraging strategic reflection).
  • Feedback is displayed in real time within the AR interface.
  • The learner continues until they complete the task or reach a predefined mastery threshold.
  • Data are stored for subsequent analysis, and the cycle repeats for the next task or session.

4.2. Integration of BKT and HPR Within the AR Environment

Developing an AR platform that accommodates BKT and HPR requires careful consideration of how data are collected, interpreted, and acted upon. The challenge arises because AR contexts often yield more complex interaction data than traditional learning tools. Instead of a simple “correct” or “incorrect” response, learners might engage in multiple sub-steps, partial rotations, or repeated placements, all within a single task attempt. Capturing these nuances is essential for accurate modeling of both mastery and strategy.
  • Defining Skills and Strategies: The first step is to identify the specific spatial skills to be tracked by BKT. Common examples include “mental rotation”, “perspective-taking”, “symmetry recognition”, and “3D visualization”. Each skill requires its own BKT parameters. For the HPR component, developers must define the potential strategy markers. These might include “frequent device rotation”, “corner-first assembly”, or “excessive reliance on visual prompts”. By enumerating typical patterns of behavior, the system can systematically scan for them in user logs.
  • Tagging Interactions to Skills: Not every user action is relevant to every skill. Therefore, the system architecture must map each type of user interaction to one or more specific skills. If a user is working on a mental rotation challenge, correct rotation sequences count toward updating the mastery probability of “mental rotation” in BKT. Similarly, frequent viewpoint changes might prompt the HPR module to label the user’s strategy as “external-based approach”.
  • Event Processing and State Updates: As tasks unfold, the BKT module is updated whenever the learner completes a measurable sub-step that can be classified as correct or incorrect. Each correct sub-step increases the probability that the learner has mastered the associated skill, weighted by the guess and slip parameters. Meanwhile, the HPR engine processes the raw logs for evidence of particular strategy markers. This analysis might run in short cycles (e.g., every 10 s) or once per task completion, depending on system design.
To quantitatively combine the information from both the BKT and HPR modules, we define an Adaptive Score that fuses the mastery probability from BKT with a qualitative score derived from HPR indicators. This combined measure can be expressed as:
$$\mathrm{Adaptive\ Score} = \beta \cdot P(L_t) + (1 - \beta) \cdot \sum_{i} \big(w_i \cdot I_i\big) \tag{5}$$
In Equation (5), P(L_t) represents the probability of mastery as determined by the BKT module, I_i represents the individual HPR indicators, w_i represents their weights, and β (with 0 ≤ β ≤ 1) is a weighting factor that balances the contributions of the two modules. This fusion mechanism enables the adaptive system to drive interventions based on both quantitative and qualitative assessments of learner performance.
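As a minimal sketch of Equation (5), the snippet below fuses the BKT mastery probability with weighted HPR indicators; the indicator names and weights are hypothetical, while β = 0.5 follows the value stated in Section 4.1.

```python
# Minimal sketch of Equation (5): Adaptive Score = beta * P(L_t) + (1 - beta) * sum(w_i * I_i).
# Indicator names and weights below are hypothetical; beta = 0.5 follows Section 4.1.

def adaptive_score(p_mastery, indicators, weights, beta=0.5):
    """indicators/weights: dicts keyed by HPR indicator name (I_i and w_i)."""
    qualitative = sum(weights[name] * value for name, value in indicators.items())
    return beta * p_mastery + (1.0 - beta) * qualitative


indicators = {"planned_before_acting": 1.0, "trial_and_error": 0.0}
weights = {"planned_before_acting": 0.6, "trial_and_error": 0.4}
print(round(adaptive_score(p_mastery=0.48, indicators=indicators, weights=weights), 2))  # 0.54
```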
Unlike conventional AR systems that rely on static instruction sets or uniform task sequences, our system actively integrates user behavior analytics into the AR runtime. This real-time fusion allows the platform to respond not just to performance outcomes but also to cognitive strategies. By linking HPR-derived insights with skill-specific BKT updates, the system dynamically modifies feedback, difficulty levels, and even visual aids within the AR interface—something not previously implemented in existing educational AR platforms. The following components describe how this integration is operationalized within the adaptive AR environment:
  • Integrating the Outputs: The crucial integration point is where HPR’s interpretation modifies BKT’s estimates or triggers distinct feedback, without overwriting the BKT logic entirely. For example, if the HPR engine detects high reliance on random manipulations, it might reduce the “confidence” threshold for a correct sub-step, effectively making the BKT model more cautious about attributing mastery to successful outcomes. Alternatively, the system might place a user in a “strategy improvement” mode, offering targeted prompts before the next BKT update occurs.
  • Real-Time Feedback Loop: AR is immersive and typically demands that feedback be immediate or nearly so, in order to guide the user before they develop unproductive habits. Thus, a feedback manager monitors the integrated BKT-HPR output. It decides if immediate guidance is necessary (e.g., an on-screen hint: “Try visualizing the next step before rotating the device”) or if a more gradual intervention (e.g., adjusting the next task’s complexity) is sufficient. The key principle is that feedback should be contextually meaningful: addressing both the skill gap indicated by BKT and the strategy gap signaled by HPR.
  • Technical Considerations: (a) Processing real-time data streams from AR interactions can be computationally intensive. Optimizing for speed involves efficient event buffering, multi-threaded updates, and caching repeated patterns for quick reference. (b) Scalability: If the platform is to be used by multiple learners concurrently, the system must handle parallel BKT-HPR evaluations. Cloud-based architectures or distributed computing solutions often prove beneficial. (c) Usability: The user interface must convey feedback non-disruptively. Overloading the learner with constant pop-ups or metrics may hinder the immersive aspect of AR. Balancing the system’s adaptive interventions with a streamlined interface is a key design priority.
  • Evaluation and Iteration: Integration is not a one-time event. After initial deployment, developers typically conduct pilot studies to observe how learners interact with the system. Feedback from these studies informs adjustments to both the BKT parameters and the HPR triggers. If a particular strategy classification appears too sensitive or too lenient, developers can refine the rules. Similarly, if BKT data suggest that the guess parameter is overestimating mastery, the system can be recalibrated to match real learner performance.
By systematically intertwining BKT’s skill tracking with HPR’s strategy insights, the AR environment can respond not just to what the learner accomplishes, but also to the methods by which they accomplish it. This synergy promises a deeper form of adaptivity, guiding learners toward genuine spatial competence rather than superficial correctness.
The proposed system introduces a novel form of adaptivity directly within the AR interface. By combining gesture-based input tracking, real-time device orientation monitoring, and adaptive learning logic, the system transcends the role of AR as a passive visual layer. Instead, it becomes an intelligent instructional medium. The innovation lies in its ability to interpret fine-grained spatial interaction data—such as rapid viewpoint changes, object manipulation patterns, and response timing—and immediately act upon them to modify the learning experience. This architecture advances current AR learning systems, which typically remain static or only log performance data without immediate pedagogical response.

4.3. Data Capture: Recording Learner Actions and Interactions

Recording detailed information about every user action is essential for both BKT and HPR models. The AR setting naturally produces more extensive and varied data than typical e-learning platforms, since learners can rotate objects, adjust viewpoints, and complete partial successes within a single challenge.
Every relevant user action is logged as a distinct event. These include object manipulations (such as rotating one shape by a specific angle), major device viewpoint changes, task transitions, feedback requests, and errors. In tasks that involve assembling puzzle pieces, partial alignments or minor mistakes can be recognized so that not every step is reduced to a simple “correct” or “incorrect” label. BKT processes these successes or failures to update skill mastery probabilities, while HPR reviews patterns of interaction to identify strategies such as trial-and-error or consistent, methodical planning. Examples of such rule-based inferences used by the HPR module are listed in Table 1.
These HPR rules are defined by domain experts using established cognitive and instructional heuristics. The current system does not employ machine learning for rule generation, though its structure allows future integration of data-driven rule refinement.
Time stamps attached to each action allow the system to infer the speed or thoroughness of a user’s approach. Rapid movements may indicate impulsive behavior, whereas longer pauses can reflect careful thinking. BKT primarily requires a correct-or-incorrect outcome tied to a defined skill, but HPR uses the same log to detect broader strategies. These might include constant device rotation to gain multiple viewpoints or continuous random dragging motions that hint at uncertainty.
Because data volume can expand quickly if minute events are tracked, a common approach is to store summarized logs in real time for immediate BKT updates, while preserving detailed records for offline HPR analysis. Summaries might note how many total errors occurred or how long the user spent on each stage, whereas full logs capture rotation angles, partial alignments, and repeated attempts in chronological order.
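The following sketch illustrates the two-level logging scheme just described; the field names and class names are assumptions for illustration and do not reflect the system's actual log schema.

```python
# Minimal sketch of the two-level logging scheme described above: detailed
# per-action events kept for offline HPR analysis, plus a running summary
# used for immediate BKT updates. Field names are illustrative assumptions.

from dataclasses import dataclass, field
from typing import List


@dataclass
class InteractionEvent:
    timestamp_ms: int
    event_type: str        # e.g. "object_rotation", "viewpoint_change", "sub_step_error"
    skill: str             # skill the event is tagged to, e.g. "mental rotation"
    detail: dict           # full payload, e.g. rotation angle or alignment offset


@dataclass
class SessionSummary:
    errors: int = 0
    hints_requested: int = 0
    time_per_stage_s: List[float] = field(default_factory=list)

    def record(self, event: InteractionEvent) -> None:
        if event.event_type == "sub_step_error":
            self.errors += 1
        elif event.event_type == "hint_request":
            self.hints_requested += 1


# Example: one logged event feeds both the detailed log and the summary
log: List[InteractionEvent] = []
summary = SessionSummary()
ev = InteractionEvent(timestamp_ms=15230, event_type="sub_step_error",
                      skill="3D assembly", detail={"misalignment_deg": 12})
log.append(ev)          # retained for offline HPR analysis
summary.record(ev)      # summarized in real time for BKT updates
print(summary.errors)   # 1
```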
Privacy and ethics guidelines determine how much environmental information can be gathered, particularly if the device’s camera captures background scenes. Typically, only orientation data and non-identifying performance metrics are kept, ensuring compliance with relevant data regulations. Some processes, such as BKT updates for adaptation, require low-latency processing so that interventions can be triggered promptly. More complex methods, such as in-depth strategy analysis, may take place after the session has ended, reducing processing demands during active learning.
To support educators and researchers, a dashboard can display key metrics such as each learner’s mastery probabilities, common errors, and highlighted strategies. Instructors can quickly see who needs extra help or who is ready for more advanced tasks. Researchers can explore correlations between particular strategy choices and faster improvements, refining the system further.
In summary, the system’s data capture mechanisms enable a real-time adaptive loop for immediate interventions while also preserving rich interaction logs for thorough examination of learning patterns. This approach ensures that the AR platform not only promotes correct outcomes but also fosters strong, long-lasting spatial reasoning skills.

4.4. Example of Operation

To demonstrate the functional role of the adaptive logic, we present an extended case study of one student's initial interaction with the AR application. The case study shows the BKT and HPR modules operating simultaneously: assessing skill attainment, interpreting the student's cognitive activity, and providing interventions that strengthen both the cognitive and the metacognitive aspects of learning.
Scenario: A student named Chris, with no prior experience in the system, begins by assembling a 3D cube from virtual components using gestures and device orientation. Although Chris completes the task correctly, the method used reveals deeper aspects of their spatial reasoning.
Step 1: Capturing the Interaction. The AR Interaction Module systematically captures all relevant information. Throughout this session, Chris:
  • Rotates the object seven times before final placement;
  • Changes the viewing angle of the device five times in succession;
  • Completes the task in 28 s;
  • Issues no deliberate, planned commands, instead producing repetitive tapping and quick movements.
Step 2: Processing of BKT Module. The system tags the task with two spatial skills: “mental rotation” and “3D assembly”. BKT updates the learner’s knowledge state as follows:
  • Mental rotation: Probability increases from 0.32 to 0.48;
  • 3D assembly: Probability increases from 0.25 to 0.38.
The moderate improvement signals success but not enough evidence for mastery.
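For readers unfamiliar with the underlying update, the following sketch shows the standard BKT posterior computation after a correct response. The slip, guess, and transition parameters are assumptions chosen only so that the mastery estimate moves from roughly 0.32 to 0.48, mirroring the magnitude reported above; the study's actual fitted parameters are not given here.

```python
def bkt_update(p_mastery: float, correct: bool,
               p_slip: float, p_guess: float, p_transit: float) -> float:
    """One standard Bayesian Knowledge Tracing step for a single skill."""
    if correct:
        evidence = p_mastery * (1 - p_slip)
        posterior = evidence / (evidence + (1 - p_mastery) * p_guess)
    else:
        evidence = p_mastery * p_slip
        posterior = evidence / (evidence + (1 - p_mastery) * (1 - p_guess))
    # Allow for the chance of learning the skill at this practice opportunity.
    return posterior + (1 - posterior) * p_transit

# Hypothetical parameters (slip, guess, transit) chosen only for illustration.
p_mental_rotation = bkt_update(0.32, correct=True,
                               p_slip=0.30, p_guess=0.40, p_transit=0.05)
print(round(p_mental_rotation, 2))  # 0.48
```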
Step 3: HPR Module Reasoning. The HPR engine applies rule-based inference to the captured data. For example:
  • Rule 1: IF object rotations > 5, THEN flag potential trial-and-error strategy (confidence weight: 0.6);
  • Rule 2: IF device viewpoint changes > 3 within a short time span, THEN infer reliance on external spatial cues (confidence weight: 0.7);
  • Rule 3: IF task completion time < 30 s with no observable pause, THEN infer impulsive decision making (confidence weight: 0.8).
These rules classify Chris’s approach as “externalization bias with impulsive performance”, that is, a strategy that relies predominantly on physical manipulation rather than internal spatial visualization.
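A minimal sketch of how such confidence-weighted rules could be encoded is shown below; the thresholds mirror Rules 1 to 3 above, while the data structure, label names, and the pause field are illustrative assumptions rather than the system's actual implementation.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class Rule:
    label: str
    condition: Callable[[Dict], bool]
    confidence: float

# Thresholds mirror Rules 1-3 from the example above.
RULES: List[Rule] = [
    Rule("trial_and_error", lambda s: s["object_rotations"] > 5, 0.6),
    Rule("external_cue_reliance", lambda s: s["viewpoint_changes"] > 3, 0.7),
    Rule("impulsive_decision",
         lambda s: s["completion_time_s"] < 30 and s["longest_pause_s"] < 2, 0.8),
]

def infer_strategy(session: Dict) -> Dict[str, float]:
    """Return the labels of fired rules with their confidence weights."""
    return {r.label: r.confidence for r in RULES if r.condition(session)}

chris = {"object_rotations": 7, "viewpoint_changes": 5,
         "completion_time_s": 28, "longest_pause_s": 1.0}
print(infer_strategy(chris))
# {'trial_and_error': 0.6, 'external_cue_reliance': 0.7, 'impulsive_decision': 0.8}
```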
Step 4: Adaptive Response and Integration. The Reasoning and Integration Engine combines the resulting outputs:
  • From BKT: Moderate learning progress without mastery;
  • From HPR: A strategy showing minimal metacognitive regulation.
Based on this, the system decides to (a sketch of this decision logic follows the list):
  • Maintain the current difficulty level;
  • Remove some visual aids to promote internal visualization;
  • Provide a metacognitive prompt: “Can you imagine how the block looked before it was moved? Try to visualize the result before you touch it.”
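The sketch below shows one way the integration step could map the combined BKT and HPR outputs onto the adaptation decisions listed above; the thresholds and action names are assumptions for illustration.

```python
def decide_adaptation(mastery: dict, strategy_tags: dict) -> dict:
    """Map BKT mastery estimates and HPR strategy tags to adaptation actions."""
    avg_mastery = sum(mastery.values()) / len(mastery)
    actions = {
        "difficulty": "increase" if avg_mastery >= 0.7 else
                      "decrease" if avg_mastery < 0.2 else "maintain",
        "visual_aids": "reduce" if "external_cue_reliance" in strategy_tags else "keep",
        "prompt": None,
    }
    if "impulsive_decision" in strategy_tags or "trial_and_error" in strategy_tags:
        actions["prompt"] = ("Can you imagine how the block looked before it was moved? "
                             "Try to visualize the result before you touch it.")
    return actions

print(decide_adaptation({"mental_rotation": 0.48, "3d_assembly": 0.38},
                        {"trial_and_error": 0.6, "external_cue_reliance": 0.7,
                         "impulsive_decision": 0.8}))
# {'difficulty': 'maintain', 'visual_aids': 'reduce', 'prompt': 'Can you imagine ...'}
```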
Step 5: Behavioral Adjustment and Reassessment. In the following task, Chris exhibits:
  • Fewer object rotations (three instead of seven);
  • Fewer device movements (two viewpoint changes);
  • A nine-second reflective pause before the first action;
  • A longer completion time (34 s).
These changes lead to further inferences, including evidence of emerging strategic planning. Both BKT and HPR update their estimates accordingly, which in turn refines the feedback and gradually increases task complexity.

5. Evaluation

To assess the educational performance and technological efficacy of the proposed adaptive AR learning system combining BKT and HPR, an experimental study with 100 university students was conducted. The study addressed the following research questions: (a) How well does the system model and adapt to improvements in learners’ spatial skills and cognitive styles? (b) Does use of the dual-module adaptive system lead to improved spatial skills and overall learning performance compared with a standard non-adaptive AR system? (c) What are learners’ perceptions of the adaptivity, feedback quality, and strategy support offered by the integrated BKT-HPR augmented reality system?

5.1. Participant Demographics and Experimental Design

The study sample comprised 100 undergraduate students (mean age 20.8 years; 42% female, 58% male) enrolled in various STEM programs at the University of West Attica. Importantly, all participants had minimal or no prior exposure to AR environments, and none had used adaptive AR learning systems. This characteristic of the sample was important in ensuring that any learning gains recorded could be attributed primarily to the intervention itself rather than to prior familiarity with the medium.
The study adopted a quasi-experimental pre-test/post-test control group design, structured to evaluate the effectiveness of the proposed adaptive AR system. Initially, participants completed a pre-test designed to assess baseline spatial reasoning ability. The test included tasks targeting mental rotation, three-dimensional assembly, and symmetry recognition, which are skills that are well documented in the literature as fundamental to spatial cognition. Following the pre-test, each participant engaged in two structured learning sessions. During these sessions, students interacted with a series of spatially challenging AR tasks using either the adaptive system or a static version. Each participant completed the study over the course of two consecutive 45 min sessions. The environment was a controlled laboratory setting equipped with Android tablets preloaded with the AR application. Participants in the experimental group interacted with the adaptive version of the system, which dynamically adjusted task difficulty and provided metacognitive prompts based on real-time BKT-HPR inferences. Those in the control group used a static version of the AR application, which presented the same sequence of tasks without adaptation or feedback interventions. All participants completed a brief training session to become familiar with basic gestures and controls prior to the experiment. The spatial tasks presented across sessions included mental rotation challenges, 3D object assembly, and symmetry recognition problems, progressing in difficulty across levels. The goal was to provide sufficient exposure for the system to apply personalization features while allowing participants to develop and demonstrate measurable improvement.
Upon completion of the instructional sessions, all participants completed a post-test formulated to mirror the structure and complexity of the pre-test, allowing a direct comparison of learning gains. The study concluded with a user experience survey that collected information on the system’s perceived adaptivity, feedback quality, and engagement. Participants were assigned randomly to either an experimental group (n = 50), which worked with the adaptive version of the system incorporating both BKT and HPR modules, or a control group (n = 50), which worked with a static, non-adaptive AR system. Random assignment was implemented to maintain internal validity, ensuring that any differences in learning gains could be attributed to the adaptive aspects of the experimental condition.

5.2. Learning Gains and Knowledge Modeling

Both groups showed improvements; however, the experimental group showed considerably better progress (Table 2). BKT tracked skill acquisition over the sessions, continuously updating skill probabilities based on student performance. While both groups started with nearly identical scores, members of the experimental group achieved substantially higher post-test scores, showing that the adaptive aspects of the system delivered a significant educational advantage.
The adaptive system adjusted spatial task complexity and pacing in real time. As learners progressed, BKT adjusted task difficulty, either increasing or decreasing it based on each learner’s emerging expertise. This kept learners in their optimal zone of proximal development, facing appropriate levels of difficulty without becoming overwhelmed. By contrast, learners in the control condition received a pre-set sequence of tasks that did not take individual ability into account, which most likely reduced the efficiency and relevance of their engagement.
Table 2 displays this difference. The experimental group showed a mean improvement of 26.1 points, compared to only an increase of 10.4 points for the control group. In addition, the small standard deviation regarding the experimental group’s improvement (SD = 6.3) indicates that learning gains were consistent across participants and not distorted by outliers. This difference was statistically significant (t(98) = 7.63, p < 0.001), with a large effect size for the experimental group (Cohen’s d = 1.45) and a small-to-moderate effect for the control group (Cohen’s d = 0.58), highlighting not only statistical significance but also the substantial educational advantage offered by adaptive feedback. These results validate the use of BKT as a foundational mechanism for real-time adaptation. The performance contrast between the experimental group (BKT-enabled adaptivity) and the control group (non-adaptive, fixed sequence) highlights the effectiveness of Bayesian modeling in enhancing personalization and learning gains.
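For transparency about the statistical comparison summarized in Table 2, the sketch below shows how an independent-samples t-test and a pooled-SD Cohen's d can be computed; the simulated gain scores are hypothetical and will not reproduce the published statistics exactly.

```python
import numpy as np
from scipy import stats

def cohens_d(a: np.ndarray, b: np.ndarray) -> float:
    """Cohen's d for two independent groups using the pooled standard deviation."""
    pooled_sd = np.sqrt(((len(a) - 1) * a.var(ddof=1) + (len(b) - 1) * b.var(ddof=1))
                        / (len(a) + len(b) - 2))
    return (a.mean() - b.mean()) / pooled_sd

# Hypothetical gain scores for illustration only; the study's raw data are not public.
rng = np.random.default_rng(0)
exp_gains = rng.normal(26.1, 6.3, 50)
ctrl_gains = rng.normal(10.4, 7.2, 50)

t_stat, p_value = stats.ttest_ind(exp_gains, ctrl_gains)
print(f"t(98) = {t_stat:.2f}, p = {p_value:.4f}, d = {cohens_d(exp_gains, ctrl_gains):.2f}")
```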
Along with enhancing raw scores, BKT provided an ongoing estimate of each student’s likelihood of having acquired a skill, which informed task assignment. These adaptive assessments ensured that the system was responsive to students’ progression, allowing immediate intervention when confusion arose and consolidation as skills developed. This responsiveness allowed the system to eliminate redundant practice efficiently while promoting productive struggle, an important building block of cognitive growth.

5.3. BKT Skill Mastery Progression

Skill proficiency levels were evaluated using BKT probabilities over four consecutive tasks within each session. The experimental group exhibited faster convergence toward mastery, as shown in Table 3. These probabilities were computed in real time, enabling the system to adaptively adjust the difficulty and sequence of subsequent tasks. This adaptive process helped provide learners with suitably challenging tasks that were neither too easy nor too hard, while maintaining an optimal cognitive load for productive learning.
In Session 1, both groups showed similar mastery levels (0.32 for the experimental group versus 0.31 for the controls). In subsequent sessions, however, the adaptive system accelerated the experimental group’s progress. By Session 2, the experimental group’s average mastery had risen to 0.51, an increase of 0.19 from Session 1, compared with 0.42 for the control group. This early difference widened over the later sessions, reaching 0.75 for the experimental group by Session 4 versus 0.60 for the controls.
These differences highlight the importance of adaptive feedback and task individualization. In the experimental group, the feedback cycle with the BKT model allowed the system to respond immediately to the needs of the learner. As a learner showed signs of understanding, the system increased the challenge of the activities. Conversely, as errors persisted or mastery assessments were low, the system offered further support in the form of scaffolding or review activities. This adaptive process helped to maintain learner motivation and promote continued cognitive growth.
By comparison, the control group followed a pre-set sequence of tasks that did not adjust to variation in performance or strategic preferences. As a result, some students may have faced demands that were too high or too low for their current ability, reducing both learning effectiveness and motivation. The more linear mastery progression in this group highlights the limitations of non-personalized learning, even in interactive augmented reality environments.
A two-way ANOVA revealed a significant main effect of group (F(1, 234) = 29.97, p < 0.001), indicating that learners in the experimental group achieved significantly higher overall skill mastery than those in the control group. However, the interaction between group and skill type was not statistically significant (F(2, 234) = 0.27, p = 0.76), suggesting that the adaptive benefits applied consistently across spatial task types rather than being specific to one domain.
To further elaborate on the system’s effectiveness across different spatial skill types, Table 4 summarizes average score improvements in performance sub-tests corresponding to each targeted skill. This comparison complements the BKT-based mastery progression by providing an additional, performance-centered perspective that reinforces the observed benefits of adaptive personalization at the skill-specific level.
It should be noted that gains reflect average score increases in sub-tests mapped to each spatial skill. Normalized gain is calculated as (Post − Pre) / (Max − Pre), based on a maximum possible score of 100.
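As a quick illustration of the normalized gain formula (with hypothetical pre- and post-test sub-scores, not values from Table 4):

```python
def normalized_gain(pre: float, post: float, max_score: float = 100.0) -> float:
    """Fraction of the remaining headroom that the learner actually gained."""
    return (post - pre) / (max_score - pre)

# Hypothetical sub-test scores for illustration only.
print(round(normalized_gain(pre=50.0, post=80.0), 2))  # 0.6
```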

5.4. Strategic Behavior and HPR Interventions

The HPR module evaluated behaviors characterized by recurrent object rotations, frequent viewpoint changes, and hasty movements. These behavioral markers were assessed against a predefined set of expert rules designed to recognize suboptimal problem-solving strategies. Specifically:
  • Rule 1: More than five object rotations indicated a trial-and-error approach, flagged in 39 students;
  • Rule 2: Multiple rapid device rotations suggested externalization bias, where learners depended more on physical interaction than mental visualization, flagged in 27 students;
  • Rule 3: Initiating interaction in under 10 s was associated with impulsivity and lack of planning, triggered for 31 students.
Based on these rules, the system delivered targeted interventions to foster stronger internal cognitive regulation. These included tailored prompts (for example, “Consider visualization before action”), a reduction in interface scaffolding to prevent over-reliance, and a deliberate delay in displaying feedback to encourage reflection. As a result, the students showed visible changes in their behaviors:
  • Pre-action planning mean duration increased from 3.4 s to 8.9 s;
  • Average object manipulations per task dropped by 45%;
  • Thirty-four students were identified as having switched from impulsive, externally based methods to systematic, visualization-based techniques.
This improvement was confirmed not only by quantitative measurements but also by identifiable qualitative interaction patterns. One student, who had previously performed unnecessary rotations without planning, showed stronger pre-action hesitation after HPR feedback, resulting in an 18% improvement on a subsequent test. Improvements like these illustrate HPR’s ability to identify changes in learning dynamics that often go unnoticed by traditional performance measures.
Importantly, these interventions were non-intrusive and seamlessly integrated into the AR environment. Students were not interrupted with explicit corrective feedback but rather received context-sensitive prompts and adaptive interface adjustments. This approach aligns with theories of situated cognition and metacognitive development, fostering internal regulation through experience rather than direct instruction.
Combining rule-based reasoning with behavior logging provided a strong foundation for cognitive engagement to be monitored in real time. While BKT evaluated the learner’s prior knowledge, HPR inferred the approach of a learner to a task, providing a dual view that refined personalization. These findings show that integrating cognitive strategy tracking in adaptive systems not only enhances learning gains but also deepens cognitive practices—an ultimate aim in developing spatial reasoning in STEM areas.

5.5. User Satisfaction and Perception

This section addresses RQ3 by examining learners’ perceptions of the adaptive system’s effectiveness, strategic support, and user experience. The survey-based findings, summarized in Table 5, confirm the system’s influence. Participants in the experimental group reported markedly higher overall satisfaction with the adaptive aspects of the augmented reality environment than the control group, which used a non-adaptive, fixed version of the system. A large proportion of learners in the experimental group reported that the system helped them better understand their individual learning approaches and that the task complexity matched their skill levels.
Interestingly, 92% of experimental group members stated that the system helped them consider different strategies for problem solving, compared with just 61% of control group members. This finding is consistent with the system’s design goal of encouraging exploration of problem-solving strategies through HPR-based feedback. In addition, 91% of students reported that the feedback motivated them to reconsider their strategies, suggesting that strategic prompts and the accompanying adjustments were viewed as helpful and relevant rather than intrusive.
Enjoyment levels remained consistently high under the adaptive condition. Despite the metacognitive nudges and changes in scaffolding, 89% of experimental participants reported that they enjoyed the activity, slightly higher than the 85% reported by control group members. This finding highlights the value of designing adaptive support that blends in seamlessly, so that learners benefit from assistance without perceiving it as intrusive.
Marked differences in satisfaction suggest that learners recognized the personalization. In particular, 88% of experimental group members felt that the activities were tailored to their skill levels, compared to only 52% of control group members. These findings suggest that the adjustments made by the BKT module enhanced perceived relevance and appropriate difficulty, two major motivational factors that can strongly shape learners’ commitment and persistence.
Taken together, these findings indicate that the system was not only effective in improving academic performance but also in creating a reflective and productive learning process for subjects. The high levels of agreement measured over each of these perceptual scales show that adding cognitive strategy awareness to adaptive augmented environments has an added advantage.
The perception data in Table 5 were collected using binary agree/disagree statements. This response format was intentionally adopted to minimize cognitive load and ensure clarity, especially since participants were unfamiliar with adaptive AR systems. Moreover, the perception survey was designed as a complementary component to the main learning evaluation, and brevity was prioritized to maintain participant engagement. While this limited the granularity of responses, it provided a clear indication of general sentiment.

5.6. Discussion

The results of this study show significant improvements due to the application of BKT and HPR in an adaptive AR system for spatial learning. Implementation of this two-module system produced significant improvements in student performance, strategic activity, and metacognitive actions. Pre- and post-intervention analyses showed a statistically significant and educationally relevant improvement in experimental participants’ spatial reasoning scores compared to peers receiving non-adaptive treatment. Cohen’s d effect size (1.45) demonstrates substantial practical effect resulting from the adaptive system, reflecting not only improvements in task performance efficacy but also deeper cognitive changes. The strong effect size observed in the experimental group and the higher normalized gains across spatial tasks provide quantitative support for the pedagogical effectiveness of combining BKT and HPR in adaptive AR environments. These outcomes reflect not only statistical significance but also practical educational value, especially in fostering personalized improvement across varied spatial skills. Key to this improvement was the BKT module, which enabled near-real-time inference of learners’ knowledge with adaptive adjustment of task difficulty based on learning. This alleviated risks of under-challenge and over-challenge by continuously keeping learners in their optimal areas of learning across their entire learning journey. Further graphical and tabular assessments of learning progression showed that adaptive treatment subjects achieved high-probability levels of mastery faster than non-adaptive treatment subjects. These trends support claims that adaptive progression and feedback produced by the BKT system were essential to achieving accelerated gains in learning, especially with respect to measuring object rotation judgments in mental rotation and 3D spatial reasoning accuracy.
Along with skill acquisition, the addition of the HPR module brought a qualitative aspect of personalization not present in traditional BKT-based systems. By examining behavioral patterns in object rotations, action timing, and changes in perspective, the HPR module inferred cognitive strategies from learners and identified areas of possible inefficiency. Students who overused trial-and-error or acted with inadequate forethought were subtly encouraged to improve their learning strategies through targeted prompts, phased removal of scaffolds, or adjustments to the interface. Results revealed that in excess of one-third of students moved from impulsive, visually driven behaviors to deeper, internally motivated problem solving. Such changes in behavior were recorded by system logs and actions of users; students had longer intervals before activating actions, fewer attempts at manipulation, and carried through with more thoughtful actions. Such changes in behavior contribute support to theories of metacognition and self-regulated learning, such as Zimmerman’s model, which highlights planning, executive control, and reflecting on one’s engagement with the learning process. The dual-module design made these processes explicit and actionable in an embodied learning environment. By not only rewarding accurate responses, the system encouraged learners to reflect on the strategies by which they approached a problem, not just whether or not they arrived at a correct solution. This approach elevates AR from being a delivery medium to an adaptive and intelligent learning technology capable of both tracking and shaping learner behavior in real time.
The HPR module strategic interventions are especially noteworthy because they counter an important limitation of many adaptive systems: that of distinguishing between high performance that stems from underlying mastery and that which stems from counterproductive strategies or guessing. In this study, several students first succeeded at tasks with counterproductive behaviors that involved an over-reliance on object manipulation and minimal reasoning before acting. With intervening actions launched by the HPR system—from such suggestions as a “Try visualizing before acting” to changes in scaffolding that reduced over-reliance on prompts—students moved to improved strategies. One example dealt with a student who reduced the average number of rotations from nine to just over three and boosted post-test scores by 18% due to these small but critical changes made in the system. The system’s ability to recognize and respond to the way in which students approach tasks, rather than just what they produce, is an important step forward in pedagogy. It bridges effectively between cognitive modeling and instructional design, ensuring that both awareness of strategies and metacognitive reflection are brought to bear on learning. These findings suggest that future learning systems should prioritize not just tracking knowledge but tracking strategies to foster adaptive expertise—an essential aspect in complex fields like engineering and spatial visualization.
Those participating in the study seemed to appreciate and welcome the adaptive support offered by the system. Survey responses showed strong agreement with statements concerning metacognitive knowledge (“helped me understand how I solve problems”), perceived adaptiveness (“tasks matched my skill level”), and reflective contemplation (“feedback made me think about my strategy”). Positive reception of HPR-based feedback supports the hypothesis that such cognitive cues are effective and welcome in immersive environments. Unlike intrusive tutoring systems, AR-based feedback was blended with spatial task progression seamlessly, maintaining immersion while promoting strategic awareness. Further, the system’s transparency about its adaptive capabilities (enabling learners to understand the rationale behind task adjustments or provision of feedback) was likely to instill trust and motivation. From a design perspective, these observations suggest that feedback must be contextual, adaptive, and presented at optimal times, especially in experiential learning environments like AR. Significantly, the findings show that small changes to factors such as visual aid timing or hint frequency variation could have substantial effects on learner behavior under cognitive model guidance. These findings support the need for cognitive models of learners that go beyond correctness, integrating both behavioral and cognitive metrics to allow for a more holistic treatment of individualized education.
Collectively, these findings support the hypothesis that putting BKT and HPR together in an augmented reality environment yields an extremely effective, real-time adaptive learning process. BKT ensures that the learning process is always optimally challenging, with quantifiable and evidence-based improvement. During this time, HPR adds an element of responsiveness to instruction, enabling not just responding to mistakes but also to the multimodalities involved in learning. Combining these modules encourages adaptive interactions that are both cognitively and emotionally appropriate, enabling not only knowledge gains but also the development of strategic reasoning. Thus, the system overcomes some of the weaknesses of many modern adaptive systems that rely on error detection alone. This dual-module design proves that the effectiveness of personalization is achieved only if it accounts for learners’ knowledge base and cognitive processes. With educational systems increasingly embedding artificial intelligence and immersive media, these findings lay a compelling rationale for future-proof, student-centric environments that enable not only task accomplishment but also cognitive development, self-regulation, and strategic understanding.
The results reported herein reinforce and extend findings from the related literature reviewed in Section 2. Conventional spatial training methodologies, comprehensively researched by [30], revolve around systematic sequences of manipulation-enhancing tasks for raising spatial ability. While these methodologies show some merit, they often fail to offer individual-level feedback or adjustment mechanisms based on student performance. Our system remedies such deficiencies by utilizing BKT, which allows for difficulty adjustment based on real-time estimates of mastery. These findings are consistent with prior work demonstrating that BKT-based systems improve learner performance by optimizing instructional pacing and challenge level [16,36]. However, this study extends earlier models by incorporating HPR, which allows the system to respond not only to correctness but also to learners’ behavioral strategies, an aspect not sufficiently addressed in prior adaptive systems [19,20].
Additionally, the HPR alleviates a known shortcoming of many BKT-based systems: their inability to consider underlying reasoning about a student’s response. We demonstrate that HPR can accurately infer cognitive strategies from student responses and provide contemporaneous intervention if learners use suboptimal strategies, enriching BKT’s quantitative facets with qualitative information. In the context of spatial training, previous AR studies have shown improvements in task engagement and manipulation skills [47,49,50], but few have combined AR with adaptive scaffolding mechanisms. Our results suggest that integrating personalized feedback—both metacognitive (via HPR) and difficulty-based (via BKT)—yields stronger gains in spatial reasoning, confirming theoretical predictions from spatial learning literature [14,15,32].
These improvements are especially relevant in AR environments, where student interactions can be carefully recorded and used for individualized support in near-real time, an application not explored deeply in most advanced AR systems reported in the literature. By coupling adaptively adjusted difficulty with immediate strategic feedback, our system not only affirms but magnifies AR learning material’s educational potential, converting it from static content into a responsive adaptive system that aligns with contemporary theories of adaptive learning and situated cognition.

6. Conclusions

This study introduces and evaluates an adaptive augmented reality (AR) learning system that integrates Bayesian Knowledge Tracing (BKT) and Human Plausible Reasoning (HPR) to enhance spatial skills in STEM students. The system personalizes feedback based on learners’ knowledge and cognitive strategies. In a trial with 100 university students, it significantly improved spatial reasoning, supported metacognitive awareness, and promoted more strategic learning behaviors. BKT enabled real-time task adjustment, while HPR detected and countered ineffective strategies such as impulsiveness and trial-and-error. The system advances prior work by technically integrating probabilistic knowledge modeling with behavioral inference and empirically demonstrating its educational value in immersive learning. It addresses key limitations of earlier adaptive AR systems by providing personalized, context-aware guidance that supports both cognitive performance and learning strategy development.
Despite promising results, the study highlights several directions for future research. The short intervention duration limits insight into long-term strategy transfer, which future studies should examine across varied domains. The current rule set was manually defined; incorporating machine-learned inference could enhance adaptability and scalability. The system also did not account for emotional or motivational states—key factors in immersive learning—suggesting a role for multimodal data (e.g., eye tracking, biometrics) in future adaptations. Further research could explore the system’s use in collaborative AR environments and its effectiveness across cultural contexts. Given its modular architecture, the system is well positioned for extension into more dynamic, socially embedded, and personalized learning applications.

Author Contributions

Conceptualization, C.P., C.T. and A.K.; methodology, C.P., C.T. and A.K.; software, C.P., C.T. and A.K.; validation, C.P., C.T. and A.K.; formal analysis, C.P., C.T. and A.K.; investigation, C.P., C.T. and A.K.; resources, C.P., C.T. and A.K.; data curation, C.P., C.T. and A.K.; writing—original draft preparation, C.P., C.T. and A.K.; writing—review and editing, C.P., C.T. and A.K.; visualization, C.P., C.T. and A.K.; supervision, C.T. and A.K.; project administration, C.P., C.T., A.K. and C.S. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Ethical review and approval are not required for this study, as it exclusively involves the analysis of properly anonymized datasets obtained from past research studies through voluntary participation. This research does not pose a risk of harm to the subjects. All data are handled with the utmost confidentiality and in compliance with ethical standards.

Informed Consent Statement

Informed consent was obtained from all subjects at the time of original data collection.

Data Availability Statement

The data supporting the findings of this study are available upon request from the authors.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Carroll, J.B. Human Cognitive Abilities: A Survey of Factor-Analytic Studies; no. 1; Cambridge University Press: Cambridge, UK, 1993. [Google Scholar]
  2. Gardner, H. Frames of Mind: The Theory of Multiple Intelligences; Basic Books: New York, NY, USA, 1983. [Google Scholar]
  3. Thorndike, E.L. On the Organization of Intellect. Psychol. Rev. 1921, 28, 141–151. [Google Scholar] [CrossRef]
  4. Malanchini, M.; Rimfeld, K.; Shakeshaft, N.G.; McMillan, A.; Schofield, K.L.; Rodic, M.; Rossi, V.; Kovas, Y.; Dale, P.S.; Tucker-Drob, E.M.; et al. Evidence for a unitary structure of spatial cognition beyond general intelligence. NPJ Sci. Learn. 2020, 5, 9. [Google Scholar] [CrossRef]
  5. Wright, R.; Thompson, W.L.; Ganis, G.; Newcombe, N.S.; Kosslyn, S.M. Training generalized spatial skills. Psychon. Bull. Rev. 2008, 15, 763–771. [Google Scholar] [CrossRef]
  6. Yu, M.; Cui, J.; Wang, L.; Gao, X.; Cui, Z.; Zhou, X. Spatial processing rather than logical reasoning was found to be critical for mathematical problem-solving. Learn. Individ. Differ. 2022, 100, 102230. [Google Scholar] [CrossRef]
  7. Shelton, A.L.; Davis, E.E.; Cortesa, C.S.; Jones, J.D.; Hager, G.D.; Khudanpur, S.; Landau, B. Characterizing the Details of Spatial Construction: Cognitive Constraints and Variability. Cogn. Sci. 2022, 46, e13081. [Google Scholar] [CrossRef]
  8. Spence, I.; Feng, J. Video Games and Spatial Cognition. Rev. Gen. Psychol. 2010, 14, 92–104. [Google Scholar] [CrossRef]
  9. Papanastasiou, G.; Drigas, A.; Skianis, C.; Lytras, M.; Papanastasiou, E. Virtual and augmented reality effects on K-12, higher and tertiary education students’ twenty-first century skills. Virtual Real. 2018, 23, 425–436. [Google Scholar] [CrossRef]
  10. Martín-Gutiérrez, J.; Saorín, J.L.; Contero, M.; Alcañiz, M.; Pérez-López, D.C.; Ortega, M. Design and validation of an augmented book for spatial abilities development in engineering students. Comput. Graph. 2010, 34, 77–91. [Google Scholar] [CrossRef]
  11. Carrera, C.C.; Asensio, L.A.B. Landscape interpretation with augmented reality and maps to improve spatial orientation skill. J. Geogr. High. Educ. 2016, 41, 119–133. [Google Scholar] [CrossRef]
  12. Papakostas, C.; Troussas, C.; Krouska, A.; Sgouropoulou, C. Exploration of Augmented Reality in Spatial Abilities Training: A Systematic Literature Review for the Last Decade. Inform. Educ. 2021, 20, 107–130. [Google Scholar] [CrossRef]
  13. Papakostas, C.; Troussas, C.; Krouska, A.; Sgouropoulou, C. On the Development of a Personalized Augmented Reality Spatial Ability Training Mobile Application. In Novelties in Intelligent Digital Systems; Frasson, C., Ed.; Frontiers in Artificial Intelligence and Applications; IOS Press: Amsterdam, The Netherlands, 2021; Volume 338, pp. 75–83. [Google Scholar] [CrossRef]
  14. Papakostas, C.; Troussas, C.; Krouska, A.; Sgouropoulou, C. Modeling the Knowledge of Users in an Augmented Reality-Based Learning Environment Using Fuzzy Logic. In Novel & Intelligent Digital Systems, Proceedings of the 2nd International Conference (NiDS 2022), Athens, Greece, 29–30 September 2022; Krouska, A., Troussas, C., Caro, J., Eds.; Lecture Notes in Networks and Systems; Springer: Cham, Switzerland, 2023; Volume 556, p. 12. [Google Scholar] [CrossRef]
  15. Papakostas, C.; Troussas, C.; Krouska, A.; Mylonas, P.; Sgouropoulou, C. Modeling Educational Strategies in Augmented Reality Learning Using Fuzzy Weights. In Proceedings of the 2024 9th South-East Europe Design Automation, Computer Engineering, Computer Networks and Social Media Conference (SEEDA-CECNSM), Egaleo, Greece, 20–22 September 2024; pp. 121–126. [Google Scholar] [CrossRef]
  16. Kaser, T.; Klingler, S.; Schwing, A.G.; Gross, M. Dynamic Bayesian Networks for Student Modeling. IEEE Trans. Learn. Technol. 2017, 10, 450–462. [Google Scholar] [CrossRef]
  17. Lei, T.; Yan, Y.; Zhang, B. An Improved Bayesian Knowledge Tracking Model for Intelligent Teaching Quality Evaluation in Digital Media. IEEE Access 2024, 12, 125223–125234. [Google Scholar] [CrossRef]
  18. Sun, J.; Zou, R.; Liang, R.; Gao, L.; Liu, S.; Li, Q.; Zhang, K.; Jiang, L. Ensemble Knowledge Tracing: Modeling interactions in learning process. Expert Syst. Appl. 2022, 207, 117680. [Google Scholar] [CrossRef]
  19. Abedinzadeh, S.; Sadaoui, S. A trust-based service suggestion system using human plausible reasoning. Appl. Intell. 2014, 41, 55–75. [Google Scholar] [CrossRef]
  20. Mohammadhassanzadeh, H.; Van Woensel, W.; Abidi, S. Semantics-based plausible reasoning to extend the knowledge coverage of medical knowledge bases for improved clinical decision support. BioData Mining 2017, 10, 7. [Google Scholar] [CrossRef]
  21. Epler-Ruths, C.M.; McDonald, S.; Pallant, A.; Lee, H.-S. Focus on the notice: Evidence of spatial skills’ effect on middle school learning from a computer simulation. Cogn. Res. Princ. Implic. 2020, 5, 61. [Google Scholar] [CrossRef]
  22. Harris, D.; Logan, T.; Lowrie, T. Spatial visualization and measurement of area: A case study in spatialized mathematics instruction. J. Math. Behav. 2023, 70, 101038. [Google Scholar] [CrossRef]
  23. Tiwari, S.; Shah, B.; Muthiah, A. A Global Overview of SVA–Spatial-Visual Ability. Appl. Syst. Innov. 2024, 7, 48. [Google Scholar] [CrossRef]
  24. Poltrock, S.E.; Brown, P. Individual Differences in visual imagery and spatial ability. Intelligence 1984, 8, 93–138. [Google Scholar] [CrossRef]
  25. Kozhevnikov, M.; Hegarty, M.; Mayer, R.E. Revising the Visualizer-Verbalizer Dimension: Evidence for Two Types of Visualizers. Cogn. Instr. 2002, 20, 47–77. [Google Scholar] [CrossRef]
  26. Blazhenkova, O.; Kozhevnikov, M. Visual-object ability: A new dimension of non-verbal intelligence. Cognition 2010, 117, 276–301. [Google Scholar] [CrossRef]
  27. Uttal, D.H.; Meadow, N.G.; Tipton, E.; Hand, L.L.; Alden, A.R.; Warren, C.; Newcombe, N.S. The malleability of spatial skills: A meta-analysis of training studies. Psychol. Bull. 2013, 139, 352–402. [Google Scholar] [CrossRef] [PubMed]
  28. Uttal, D.H.; Miller, D.I.; Newcombe, N.S. Exploring and Enhancing Spatial Thinking Links to Achievement in Science, Technology, Engineering, and Mathematics? Curr. Dir. Psychol. Sci. 2013, 22, 367–373. [Google Scholar] [CrossRef]
  29. Wai, J.; Lubinski, D.; Benbow, C.P. Spatial Ability for STEM Domains: Aligning Over 50 Years of Cumulative Psychological Knowledge Solidifies Its Importance. J. Educ. Psychol. 2009, 101, 817–835. [Google Scholar] [CrossRef]
  30. Sorby, S.A. Assessment of a ‘New and Improved’ Course for the Development of 3-D Spatial Skills. Eng. Des. Graph. J. 2009, 69, 6–13. [Google Scholar]
  31. Sorby, S.A. Developing 3-D Spatial Visualization Skills. Eng. Des. Graph. J. 1999, 63, 21–32. [Google Scholar]
  32. Papakostas, C.; Troussas, C.; Sgouropoulou, C. Fuzzy Logic for Modeling the Knowledge of Users in PARSAT AR Software. In Special Topics in Artificial Intelligence and Augmented Reality. Cognitive Technologies; Springer: Cham, Switzerland, 2024; pp. 65–91. [Google Scholar] [CrossRef]
  33. Papakostas, C.; Troussas, C.; Krouska, A.; Sgouropoulou, C. Measuring User Experience, Usability and Interactivity of a Personalized Mobile Augmented Reality Training System. Sensors 2021, 21, 3888. [Google Scholar] [CrossRef]
  34. Baker, R.S.J.D.; Corbett, A.T.; Gowda, S.M. Generalizing automated detection of the robustness of student learning in an intelligent tutor for genetics. J. Educ. Psychol. 2013, 105, 946–956. [Google Scholar] [CrossRef]
  35. Ritter, S.; Anderson, J.R.; Koedinger, K.R.; Corbett, A. Cognitive tutor: Applied research in mathematics education. Psychon. Bull. Rev. 2007, 14, 249–255. [Google Scholar] [CrossRef]
  36. Zhang, K.; Yao, Y. A three learning states Bayesian knowledge tracing model. Knowl.-Based Syst. 2018, 148, 189–201. [Google Scholar] [CrossRef]
  37. Liu, F.; Hu, X.; Bu, C.; Yu, K. Fuzzy Bayesian Knowledge Tracing. IEEE Trans. Fuzzy Syst. 2021, 30, 2412–2425. [Google Scholar] [CrossRef]
  38. Pelánek, R. Bayesian knowledge tracing, logistic models, and beyond: An overview of learner modeling techniques. User Model. User-Adapt. Interact. 2017, 27, 313–350. [Google Scholar] [CrossRef]
  39. Slater, S.; Baker, R. Forecasting future student mastery. Distance Educ. 2019, 40, 380–394. [Google Scholar] [CrossRef]
  40. Zhang, J.; Xia, R.; Miao, Q.; Wang, Q. Explore Bayesian analysis in Cognitive-aware Key-Value Memory Networks for knowledge tracing in online learning. Expert Syst. Appl. 2024, 257, 124933. [Google Scholar] [CrossRef]
  41. Graesser, A.; Baggett, W.; Williams, K. Question-driven Explanatory Reasoning. Appl. Cogn. Psychol. 1996, 10, 17–31. [Google Scholar] [CrossRef]
  42. Anderman, L.H.; Anderman, E.M. Considering Contexts in Educational Psychology: Introduction to the Special Issue. Educ. Psychol. 2000, 35, 67–68. [Google Scholar] [CrossRef]
  43. Polya, G. Mathematics and Plausible Reasoning; Princeton University Press: Princeton, NJ, USA, 1954; Volume 1. [Google Scholar] [CrossRef]
  44. Collins, A.; Michalski, R. The Logic of Plausible Reasoning: A Core Theory. Cogn. Sci. 1989, 13, 1–49. [Google Scholar] [CrossRef]
  45. Azuma, R.T. A survey of augmented reality. Presence Virtual Augment. Real. 1997, 6, 355–385. [Google Scholar] [CrossRef]
  46. Pellegrino, J.W.; Hunt, E.B. Cognitive models for understanding and assessing spatial abilities. In Intelligence: Reconceptualization and Measurement; Psychology Press: London, UK, 1991; pp. 203–225. [Google Scholar]
  47. Ali, D.F.; Omar, M.; Mokhtar, M.; Suhairom, N.; Abdullah, A.H.; Halim, N.D.A. A review on augmented reality application in engineering drawing classrooms. Man India 2017, 97, 195–204. Available online: https://www.researchgate.net/publication/320878073 (accessed on 1 March 2025).
  48. Wu, H.-K.; Lee, S.W.-Y.; Chang, H.-Y.; Liang, J.-C. Current status, opportunities and challenges of augmented reality in education. Comput. Educ. 2013, 62, 41–49. [Google Scholar] [CrossRef]
  49. Figueiredo, M.; Cardoso, P.J.S.; Rodrigues, J.M.F.; Alves, R. Learning Technical Drawing with Augmented Reality and Holograms. Recent Adv. Educ. Technol. Methodol. 2014, 1–20. [Google Scholar]
  50. Chen, Y.-C.; Chi, H.-L.; Hung, W.-H.; Kang, S.-C. Use of tangible and augmented reality models in engineering graphics courses. J. Prof. Issues Eng. Educ. Pr. 2011, 137, 267–276. [Google Scholar] [CrossRef]
  51. Kaur, N.; Pathan, R.; Khwaja, U.; Murthy, S. GeoSolvAR: Augmented reality-based solution for visualizing 3D Solids. In Proceedings of the IEEE 18th International Conference on Advanced Learning Technologies, ICALT 2018, Mumbai, India, 9–13 July 2018; pp. 372–376. [Google Scholar] [CrossRef]
  52. Bell, J.; Hinds, T.; Walton, S.P.; Cugini, C.; Cheng, C.; Freer, D.; Cain, W.; Klautke, H. A study of augmented reality for the development of spatial reasoning ability. In Proceedings of the ASEE Annual Conference and Exposition, Conference Proceedings, Salt Lake City, UT, USA, 24–27 June 2018. [Google Scholar] [CrossRef]
  53. Veide, Z.; Strozheva, V.; Dobelis, M. Application of Augmented Reality for teaching Descriptive Geometry and Engineering Graphics Course to First-Year Students. In Proceedings of the Joint International Conference on Engineering Education & International Conference on Information Technology, Orlando, FL, USA, 25–27 September 2014; pp. 158–164. [Google Scholar]
  54. Tuker, C. Training Spatial Skills with Virtual Reality and Augmented Reality. In Encyclopedia of Computer Graphics and Games; Lee, N., Ed.; Springer International Publishing: Cham, Switzerland, 2018; pp. 1–9. [Google Scholar] [CrossRef]
  55. Papakostas, C.; Troussas, C.; Sgouropoulou, C. Artificial Intelligence-Enhanced PARSAT AR Software: Architecture and Implementation. In Special Topics in Artificial Intelligence and Augmented Reality. Cognitive Technologies; Springer: Cham, Switzerland, 2024; pp. 93–130. [Google Scholar] [CrossRef]
Figure 1. System architecture diagram.
Figure 2. Adaptive feedback flow using BKT and HPR.
Table 1. Sample HPR rules for strategy detection and intervention.
IF (Behavior) | THEN (Inference) | → (Intervention or Tag)
Frequent device rotation | Reliance on external cues | Prompt: “Try forming a mental image before rotating”
Multiple random attempts without pause | Trial-and-error strategy | Tag as “Low planning”; reduce task complexity
Long pauses before each move | Careful analysis | Reinforce with a brief confirmation message
Repeated use of same failed method | Rigid strategy | Suggest alternative method through hint
No interaction for extended time | Hesitation or confusion | Offer optional walkthrough or quick demo
Frequent request for hints | Low confidence | Trigger metacognitive prompt: “What would you try first?”
Quick correct answers with no hint use | Possible mastery | Confirm proficiency; suggest next level challenge
Table 2. Comparison of spatial reasoning performance between experimental and control groups.
Metric | Experimental Group (n = 50) | Control Group (n = 50)
Mean Pre-test Score | 58.4 | 57.9
Mean Post-test Score | 84.5 | 68.3
Mean Score Improvement | +26.1 | +10.4
Standard Deviation (Gain) | 6.3 | 7.2
Significance (t-test) | t(98) = 7.63
Effect Size | Cohen’s d = 1.45
p-value | p < 0.001
Table 3. BKT skill mastery progression across four sessions.
Session | Experimental Mastery (avg) | Control Mastery (avg)
1 | 0.32 | 0.31
2 | 0.51 | 0.42
3 | 0.65 | 0.53
4 | 0.75 | 0.60
Table 4. Comparison of skill-specific improvements between experimental and control groups.
Spatial Skill | Experimental Group Gain (Post − Pre) | Control Group Gain (Post − Pre) | Normalized Gain (Experimental)
Mental Rotation | +14.2 | +5.8 | 0.62
3D Object Assembly | +9.5 | +3.2 | 0.58
Symmetry Recognition | +7.4 | +2.1 | 0.51
Table 5. Student feedback on system adaptivity and strategy support.
Statement | Agree (%) Experimental | Agree (%) Control
Helped me understand how I solve problems | 92 | 61
Tasks matched my skill level | 88 | 52
Feedback made me reflect on my strategies | 91 | 47
Enjoyed the AR system | 89 | 85
