Article

Towards Dynamic Learner State: Orchestrating AI Agents and Workplace Performance via the Model Context Protocol

1
Educational Administration and Human Resource Development, Texas A&M University, College Station, TX 77843, USA
2
Educational Leadership & Workforce Development, Old Dominion University, Norfolk, VA 23508, USA
3
Curriculum and Instruction, Purdue University, West Lafayette, IN 47907, USA
4
Biomedical Engineering and Informatics, Indiana University, Indianapolis, IN 46202, USA
*
Author to whom correspondence should be addressed.
Educ. Sci. 2025, 15(8), 1004; https://doi.org/10.3390/educsci15081004
Submission received: 8 July 2025 / Revised: 1 August 2025 / Accepted: 5 August 2025 / Published: 6 August 2025

Abstract

Current learning and development approaches often struggle to capture dynamic individual capabilities, particularly the skills people acquire informally every day on the job. This creates a significant gap between what traditional models assume people know and their actual performance, leading to an incomplete and often outdated picture of workforce readiness that can hinder organizational adaptability in rapidly evolving environments. This paper proposes a novel dynamic learner-state ecosystem—an AI-driven solution designed to bridge this gap. Our approach leverages specialized AI agents, orchestrated via the Model Context Protocol (MCP), to continuously track and evolve an individual’s multidimensional state (e.g., mastery, confidence, context, and decay). Seamless integration of in-workflow performance data will transform daily work activities into granular, actionable data points through AI-powered dynamic xAPI generation into Learning Record Stores (LRSs). This system enables continuous, authentic performance-based assessment, precise skill gap identification, and highly personalized interventions. The significance of this ecosystem lies in its ability to provide a real-time understanding of each individual’s capabilities, enabling more accurate workforce planning and cultivating a workforce that is continuously learning and adapting. It ultimately helps transform learning from a disconnected, occasional event into an integrated and responsive part of everyday work.

1. Introduction

Modern work changes rapidly, driven in particular by new technologies such as Artificial Intelligence (AI) (Howard, 2019). In this fast-paced environment, organizations often struggle to maintain a clear and up-to-date picture of their workforce’s capabilities. For decades, workplace learning has been a central concern for organizations (Boud & Garrick, 2001). Despite increasing investment in training, training effectiveness and efficiency have been consistently reported as low (M. Yang et al., 2025). Compounding this challenge, most learning occurs outside formal training: studies estimate that over 70% of workplace learning is informal, arising through on-the-job experiences or social interactions (Cerasoli et al., 2018). These informal learning gains remain largely invisible to conventional learning management systems (LMS), which mainly track formal course completions. Additionally, even when skills are formally acquired, standard records are often static and outdated. Earning a training certificate or completing a course does not guarantee proficient on-the-job skill application. In domains like project management, for example, researchers found no direct relationship between holding a professional certification and subsequent job performance (Farashah et al., 2019). More generally, standard completion-based LMS records offer limited insight into evolving human competencies (Canaleta et al., 2024). Such static, one-time records fail to capture the depth of mastery achieved, the context in which skills were applied, or the natural decay of skills over time (Pratisto & Danoetirta, 2025). As a result, the difficulty of assessing true workforce capability beyond tallying training completions often leads to a significant competency–performance gap, where what traditional learning systems assume people know does not always align with what people can do on the job.
Personalized mastery-based instruction has long been known to dramatically improve performance (Bloom, 1984). However, such individualized approaches are challenging to scale broadly. Emerging technologies now promise to bridge these gaps by enabling a dynamic learner-state ecosystem. Mature interoperability standards like the Experience API (xAPI) provide a means to collect fine-grained, cross-platform data on a wide range of learning experiences in a consistent format (Ilić et al., 2023; Karoudis & Magoulas, 2018). This interface allows tracking not just formal learning outcomes but also on-the-job performance events, practice activities, and informal learning moments. In parallel, advances in AI and new orchestration frameworks such as the Model Context Protocol (MCP) enable specialized AI agents to continuously interpret these data streams and update each individual’s competency state in real time. By orchestrating multiple AI agents via MCP, it becomes feasible to maintain a multidimensional, evolving record of each learner’s state. This approach shifts the paradigm from static transcripts to a living model of competency development.
The concept of the learner state represents a fundamental shift from traditional methods of assessing an individual’s capabilities in both educational and professional contexts. In this paper, we define learner state as a dynamic, multidimensional representation of an individual’s holistic capabilities and developmental trajectory. This approach moves beyond simply tracking what a learner knows or has completed to understanding how well, where, when, and with what confidence they can apply their capabilities, and, importantly, how these elements transform over time. It directly addresses significant limitations of traditional learner profiling and static competency tracking and management: traditional learner profiling typically involves gathering a static snapshot of a person’s attributes, such as demographic data, pre-existing knowledge, and learning preferences (Salman et al., 2020; Snyman & Van den Berg, 2018). Even with additional proposed dimensions (Snyman & Van den Berg, 2018), such profiling cannot fully capture the continuous evolution of an individual’s capabilities.
The purpose of this paper is to propose a conceptual model of a dynamic learner-state ecosystem via the integration of MCP and specialized AI agents. This paradigm shift in organizational learning and development (L&D) aims to provide capability visibility that static profiles cannot, and ultimately lead to a more efficient, personalized, and effective learning and development process for individuals and organizations.

2. Dynamic Learner State

2.1. Competencies and CBL/CBT

The modern idea of competency originated in early-twentieth-century educational reforms that linked learning objectives to the behavioral demands of industrial production. In the United States, the competency-based movement gained momentum during the 1950s and 1960s, drawing primarily on applied psychology and teacher-education research (Tuxworth, 2005; White, 1959). There has been some confusion over the use of “competence” and “competency” in the literature, with many authors using them interchangeably. Considerable debate has occurred over their distinct meanings (Alainati et al., 2010), while some literature suggests the terms are synonymous or that there is no need to distinguish them (Brown, 1993; McBeath, 1990; McMullan et al., 2003; Nikolajevaite & Sabaityte, 2016; Oliver et al., 2008; Salman et al., 2020). Despite the debate, competency-based learning (CBL) or competency-based training (CBT) is a pedagogical approach that emphasizes demonstrable outcomes (Manoharan et al., 2023; Tuxworth, 2005), individualizes learning, promotes autonomy, and encourages continuous or lifelong learning (Gardner, 2017; Mbarushimana & Kuboja, 2016). In this paper, we use the term “competency” for consistency. Specifically, in the rest of this paper “competencies” refers to the externally defined statements in a competency framework (e.g., “collaboration,” “root-cause analysis”), whereas “competence/competent” in the context of CBL/CBT denotes a learner’s demonstrated mastery of those statements at an appropriate performance level. Competency is multifaceted, and the concept has evolved significantly over time. Generally, a competency is defined as a specific skill, trait, or attribute that is behavioral and can be learned or developed (Liles & Mustian, 2004).
Other definitions highlight competencies as personal characteristics that lead to high performance (Boyatzis, 1991), a collective characteristic of behavior related to work performance, or even an individual’s capacity (Zarifian, 1999; Lustri et al., 2007). Notably, competency is viewed as a journey, not an endpoint (Arokiasamy et al., 2017).
CBL focuses on outcomes rather than learning processes, which provides clarity and orientation to programs (Tuxworth, 2005). While it marked a significant advancement over time-based instruction, its traditional implementation often falls short of capturing the full dynamism and evolving nature of human capability, leading to a “checklist” mentality, where competencies are recorded as discrete and binary achievements (Eraut, 2005; Scott, 1982; Smith et al., 2019; Tuxworth, 2005). This often overlooks the nuanced journey of learning, which offers limited insight into the actual depth of learning, varying proficiency levels, the specific context of application, or the natural decay of skills over time (Henri et al., 2017; Smith et al., 2019; Tuxworth, 2005). While some models introduce levels of performance, these still often represent static snapshots rather than continuous evolution or decay (Guthrie, 2009; Smith et al., 2019). The focus has been on documenting “what can be done” rather than a more nuanced understanding (Liles & Mustian, 2004). Aligning evidence to competencies is highly contextual, and it requires “weighting” that reflects a competency’s complexity and how an individual applies it to their job (Smith et al., 2019). However, capturing and integrating this contextual performance data remains a complex task for these systems, often relying on predetermined structures rather than real-time, dynamic insights (Capaldo et al., 2006; Smith et al., 2019). Competency models/frameworks aim to provide clarity on the knowledge, skills, and abilities needed for successful performance, but this often results in a fixed definition rather than a dynamic one (Liles & Mustian, 2004).
In addition, a significant and often overlooked limitation of traditional competency tracking is its inability to effectively capture and value the invisible, informal, and experiential learning (Kusmin et al., 2017; Škrinjarić, 2022) that constitutes over 70% of optimal development (Dillon, 2022). For instance, informal competency development from social activities, such as contributions to corporate blogs and wikis, represents valuable tacit competencies that are often not captured by traditional human resource management systems (Loia et al., 2010). This misalignment creates a significant competency–performance chasm. The “situationist approach” to competency management highlights that an individual’s competence is strictly tied to the social context in which it is activated and developed over time, underscoring the importance of capturing application in practical settings (Capaldo et al., 2006).
The continuous nature of competence development, which extends beyond formal education and training, is fundamental in an environment where skill shelf life is rapidly decreasing due to technological advancements like AI (Kusmin et al., 2017; Psyché et al., 2023; Rasdi et al., 2024). This mismatch often creates a significant skills gap (Faruqe et al., 2021) and renders existing static and fragmented records of workforce capabilities insufficient for strategic talent management. The dynamic environment necessitates a profound shift beyond just focusing on formal education and training to embrace the continuous and often informal nature of learning that occurs both on-the-job and throughout an individual’s life (Lustri et al., 2007; Psyché et al., 2023; Snyman & Van den Berg, 2018). The challenge lies in moving beyond static and fragmented records and shallow competency models.

2.2. Multi-Dimensional Learner State

To overcome these challenges and empower human resource development (HRD) in a volatile environment, there is an urgent need to build a comprehensive and adaptive learner-state ecosystem. This system would provide a clear overview of an individual’s existing competencies mapped against aspired competencies, along with other dimensions, to support continuous learning that prepares one for a career beyond the job, ultimately enhancing organizational effectiveness and competitive advantage. The dimensions of a dynamic learner state include the following:
Personal attributes and characteristics. Personal attributes and characteristics have been identified as a critical component of the learner profile (Snyman & Van den Berg, 2018). In Baldwin and Ford’s (1988) transfer of training model, trainee characteristics, including ability, personality, and motivation, are one of the training inputs that impact transfer performance. Some personal attributes were also captured as important factors in Holton et al.’s (2000) Learning Transfer System Inventory. Personal attributes and characteristics, in this paper, refer to personal demographics (e.g., age, tenure in the organization, job role, educational background, previous degrees/certifications, and cultural background), abilities, personality characteristics (e.g., Big Five), motivation, personal interests and preferences, and prior experiences. This dimension overlaps with the characteristics in the concept of Knowledge, Skills, Abilities, and Other characteristics (KSAO), often viewed interchangeably with competencies (Oberländer et al., 2020). Capturing personal attributes and characteristics will enable the personalization of learning, consideration of context, and segmentation for more granular evaluation and analysis, which are critical for strategic talent planning and management.
Personal career goals. Unlike a job, a career refers to an individual’s “advancing up an arranged hierarchy through an organization or profession” (Arokiasamy et al., 2017, p. 404). This dimension therefore highlights the individual’s aspirations and objectives for their professional trajectory. Career advancement is a dominant factor for professional growth (Arokiasamy et al., 2017) and relates to self-management of working and learning experiences (Kuijpers & Scheerens, 2006). Personal career goals are crucial for guiding individual development plans (Draganidis & Mentzas, 2006; Liles & Mustian, 2004). When individuals are actively involved in identifying the competencies to be taught and in developing their individualized learning plans, they feel a strong sense of ownership, which motivates active participation and fosters lifelong learning (Liles & Mustian, 2004). Aligning competency development activities with individual short-term and long-term goals enhances motivation and commitment (Boyatzis, 2008) and increases employee retention when employees see a clear growth path within the organization. Capturing this dimension not only enables the system to recommend learning and experiences that are relevant and motivating for the individual’s future, tailoring development paths, but also empowers supervisors for more proactive talent management through an understanding of individual career goals.
Granular mastery levels of precise skills. This is the foundational dimension of the learner-state ecosystem, as it quantifies the degree to which an individual understands a concept or can perform a specific skill. This dimension is essential for effective HRD, as it allows a targeted and precise approach to enhancing individual and organizational performance. The granular mastery levels refer to the detailed breakdown of competencies and skills into smaller, measurable components, allowing for the assessment of proficiency at distinct stages of development. It moves beyond a binary scale to a granular score/level that reflects the depth and breadth of their capability. It is subject to continuous updates through various forms of assessment, performance observation, and analytics. Traditional competency models tend to be shallow with vaguely articulated competencies (Stevens, 2012). Precise skills, according to Riesterer and Shacklette (2025), can set reliable performance standards for needed competencies through contextual performance, client feedback, and benchmarked skills against top performers to pinpoint the skill gaps. CBT is incremental and granular, with each defined competency converted into measurable objectives (Emerson & Berge, 2018). There have been efforts to move away from the binary scale. In measuring literacy skills, the International Adult Literacy Survey (IALS; National Center for Education Statistics, n.d.) and the later Adult Literacy and Life (ALL) Skills survey (Murray et al., 2005) followed a continuum of proficiency rather than a binary scale.
Dreyfus and Dreyfus (1984) proposed a five-stage model of skill acquisition, from novice to expert. Liles and Mustian (2004) detailed the process of developing subcompetencies with three levels of proficiency for various job groups. Capaldo et al. (2006) reported a 9-level scale in three ranges to indicate distinct levels of competency proficiency. Manoharan et al. (2023) discussed “qualification levels” for laborers to progress from the unskilled stage to the master craftsperson stage on units of competencies. The SFIA competency framework proposed seven levels of responsibility for professional skills from level 1 “Follow” to level 7 “Set strategy, inspire, mobilize” (The SFIA Foundation, 2025). By understanding skills at a granular level, organizations can accurately identify specific deficits beyond broad areas. This precision supports highly focused interventions, such as microlearning, to improve efficiency in closing performance gaps. Traditional assessments often struggle to capture the nuances of competence. At the same time, the granular mastery levels with precise skills or subcompetencies will enable a comprehensive assessment in a dynamic environment amid digitalization (Škrinjarić, 2022).
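The move from a binary pass/fail scale to a proficiency continuum can be sketched as a mapping from a continuous mastery score onto Dreyfus-style stage labels. The thresholds below are illustrative assumptions for the sketch, not values drawn from any of the cited frameworks.

```python
# Illustrative thresholds mapping a continuous mastery score in [0, 1]
# onto Dreyfus-style stages; the boundary values are assumptions.
STAGES = [
    (0.2, "novice"),
    (0.4, "advanced beginner"),
    (0.6, "competent"),
    (0.8, "proficient"),
    (1.0, "expert"),
]


def stage_for(score: float) -> str:
    """Return the stage label whose upper bound first covers the score."""
    if not 0.0 <= score <= 1.0:
        raise ValueError("mastery score must be in [0, 1]")
    for upper, label in STAGES:
        if score <= upper:
            return label
    return STAGES[-1][1]
```

The continuous score is the primitive the system stores and updates; the named stage is only a human-readable projection of it, so interventions can target, say, a 0.55 rather than an undifferentiated “competent.”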
Contextualized performance. This dimension records precisely where and how a skill has been demonstrated or applied, encompassing elements such as the specific environment (e.g., classroom, simulated lab, production system), the tools utilized (e.g., Python IDE), the complexity of the task, and the presence of external support (e.g., peer/supervisor support, documentation). Competencies are situated and influenced by the organizational context that people are in (Capaldo et al., 2006; Smith et al., 2019; Tuxworth, 2005). Performing the acquired skills, sometimes referred to as transfer of training (Baldwin & Ford, 1988; Holton et al., 2000), is related to the work environment. This dimension speaks directly to the generalizability of a skill (Baldwin & Ford, 1988; Holton et al., 2000). A skill is genuinely mastered only when it can be applied effectively across various relevant contexts, rather than being confined to a specific situation (Mbarushimana & Kuboja, 2016; Palan, 2007; Salman et al., 2020; Tuxworth, 2005). According to Boyatzis’ (1991) effective performance model, success can be achieved when individuals’ competencies fit the requirements of the job and the organizational context. Competence is not a static state, but an action resulting from the combination of personal and environmental resources (Lustri et al., 2007), so understanding how skills are applied in real working environments with their variations is essential for evaluating performance (Liles & Mustian, 2004; Riesterer & Shacklette, 2025). What is considered competent can vary significantly across different organizations and contexts (Guthrie, 2009; Kusmin et al., 2017; Oberländer et al., 2020; Škrinjarić, 2022). Also, knowing the context of the application allows for precise identification of skill gaps and the recommendation of targeted interventions. 
Effective competency development involves encouraging individuals to take initiative in new or unexpected situations and continually learn from practical experiences (Lustri et al., 2007). By contrast, relying solely on subjective self-assessments or manager ratings offers only a rear-view-mirror perspective of capability (Riesterer & Shacklette, 2025). Understanding contextual performance helps determine if an individual’s skills are adequately applicable to a new role that requires specific contextual experience (Oberländer et al., 2020; Palan, 2007; Vazirani, 2010). Assessing contextual performance is also critical for tracking progression across different levels of mastery, as individuals move from “novice” to “expert” stages, requiring adaptable and versatile application of skills in dynamic environments (Henri et al., 2017; Mbarushimana & Kuboja, 2016; Tuxworth, 2005).
Recency/Decay/Frequency. According to the transfer of training literature, the transfer of acquired knowledge or skills decreases significantly as time passes (e.g., M. Yang et al., 2020). Apart from contextual factors, an individual’s proficiency might decay over time if not actively used (Smith et al., 2019). This dimension is critical for understanding skill proficiency, as it tracks how recently a skill has been demonstrated and incorporates a model for how its proficiency might decay. Such monitoring is increasingly important as the average lifespan of professional skills is estimated at just five years and continues to shorten as new skills emerge rapidly (LinkedIn Learning, 2021; Psyché et al., 2023; Škrinjarić, 2022). Competency is solidified through the application of knowledge in practical activities (Lustri et al., 2007). Frequency, on the other hand, quantifies how often a skill is used or demonstrated within a given timeframe, focusing on the pattern of usage. Frequent application suggests a deeply ingrained and readily accessible skill, making it less prone to decay and more likely to be performed fluidly. Tracking application frequency highlights which skills are actively used in day-to-day roles versus those that have been formally taught or certified but are seldom applied in practical settings. By tracking recency and decay, the system can ensure that critical skills, especially for high-stakes roles like those in defense or healthcare, remain at an optimal level even during periods of low active use (Smith et al., 2019). This dimension enables the prediction of when a skill might become obsolete (Chuang, 2021; Wingreen & Blanton, 2007), allowing a system component, such as a Diagnostic Agent, to recommend timely refreshers or practice before performance declines (Emerson & Berge, 2018).
This granular data provides strategic insights into overall human performance and helps link individual capabilities to organizational readiness, moving beyond traditional credential-based models that often lose detail on individual strengths and weaknesses. Additionally, understanding skill decay prevents wasted effort on re-learning (Psyché et al., 2023) by intervening just as decay becomes significant, thus optimizing training resources and maximizing the return on investment (ROI) in human capital (Liles & Mustian, 2004; Palan, 2007). This proactive approach helps employees cope with rapid technological innovation and ensures they remain competitive (Chuang, 2021; Vakola et al., 2007).
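One simple way to operationalize the recency/decay/frequency dimension is an exponential forgetting curve whose half-life lengthens with usage frequency. The functional form and the parameter values below are illustrative assumptions for the sketch, not a model proposed in this paper.

```python
def decayed_proficiency(last_score: float, days_since_use: float,
                        uses_per_month: float,
                        base_half_life_days: float = 90.0) -> float:
    """Exponentially decay a proficiency score; frequent use slows decay.

    Each monthly use stretches the half-life, so a skill exercised often
    retains its level far longer than one certified once and then shelved.
    All parameter values here are assumed for illustration.
    """
    half_life = base_half_life_days * (1.0 + uses_per_month)
    return last_score * 0.5 ** (days_since_use / half_life)
```

Under these assumptions, a skill last scored 0.9 but unused for 90 days falls to 0.45, while the same skill exercised nine times a month barely moves, which is exactly the contrast a Diagnostic Agent would use to time a refresher.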
Confidence indicator. This dimension captures individuals’ self-perception of their ability to apply a skill effectively and reflects the belief in their capabilities to execute actions required to achieve desired performances (Bandura, 2023; Gist et al., 1991; Karsten & Roth, 1998; Tyas et al., 2020; Wingreen & Blanton, 2007). Within the training-transfer literature, confidence is typically treated as a domain-specific form of self-efficacy. It concerns how well individuals expect to draw upon both innate aptitudes and acquired knowledge and skills in a given context (T. K. Chiu et al., 2024), an expectation that subsequently shapes their motivation, emotional reactions, cognition, and behavior (Arditi et al., 2013; Gottlieb et al., 2022). Holton et al. (2000) emphasized that understanding and fostering learning transfer requires examining a relatively complete system of influences, which includes trainee perceptions. Confidence, as the cognitive attitude of individuals, is significantly associated with their transfer performance (M. Yang et al., 2025). Understanding confidence is crucial for identifying critical discrepancies between perceived and actual ability. Confidence is influenced by multiple factors in the workplace, beyond individuals’ expertise. Interventions to increase individual confidence should be implemented accordingly, such as verbal persuasion, vicarious experience, or initial easy tasks, to ensure they benefit more from training (Karl et al., 1993). Confidence, as part of individuals’ cognitive attitude, is dynamic and changes over time in response to external factors and the situation (M. Yang & Watson, 2022), indicating that a static view of skill and confidence is insufficient (Gottlieb et al., 2022). Tracking and maintaining the Confidence-Competence Ratio (CCR) is critical, as deviations from an ideal ratio (e.g., overconfidence or underconfidence) can lead to problems. 
For instance, high mastery with low confidence may indicate imposter syndrome, where capable individuals hold back or second-guess themselves despite their competence, which can lead to timidity or insecurity in decision-making (Gottlieb et al., 2022). Conversely, low mastery with high confidence signals a need for diagnostic intervention. Confidence can directly interplay with the recency/decay/frequency dimension. Individuals with strong confidence are more likely to persevere in the face of difficulties (Arghode et al., 2021; Bandura, 2023; Combs & Luthans, 2007). This “staying power” in challenging pursuits (Bandura, 2023) means that skills used with higher confidence are more likely to be practiced consistently, thereby influencing the rate of decay. To measure an individual’s true confidence, various means should be considered, including self-report, implicit cues, and supervisor/peer feedback. Self-reported confidence from novices or even experienced professionals can be inaccurate or unreliable (Brennan, 2022; Clanton et al., 2014; Gottlieb et al., 2022). The implicit cues can also be considered, such as hesitation, speed of response in simulations, or other non-verbal indicators.
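The over/underconfidence patterns described above can be flagged with a simple ratio check over the confidence and mastery dimensions. The band boundaries below are illustrative assumptions, not calibrated values from the CCR literature.

```python
def confidence_flag(confidence: float, mastery: float,
                    low: float = 0.75, high: float = 1.25) -> str:
    """Classify the confidence-competence ratio (CCR).

    Both inputs are assumed to be on a common [0, 1] scale; the band
    boundaries `low` and `high` are illustrative assumptions.
    """
    if mastery <= 0:
        return "insufficient evidence"
    ratio = confidence / mastery
    if ratio < low:
        return "underconfident"   # possible imposter-syndrome pattern
    if ratio > high:
        return "overconfident"    # candidate for diagnostic intervention
    return "calibrated"
```

A check like this would run continuously as both dimensions update, so the system can surface a coaching prompt when a capable individual persistently underrates themselves, or a diagnostic review in the opposite case.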
Engagement. This dimension is relevant and important as it serves as a key mechanism linking inputs like training and competencies to desired impacts and results in individual and organizational performance. The concept of engagement in workplaces has often been examined from multiple perspectives, including personal engagement (Carter et al., 2018; Fletcher, 2016; Kahn, 1990; Kim, 2017), job or work engagement (Noercahyo et al., 2021; Schaufeli et al., 2002), and organizational engagement (Noercahyo et al., 2021; Saks et al., 2022). These terms are often used interchangeably. Personal engagement, as defined by Kahn (1990), is a multidimensional motivational state. It represents the simultaneous investment of physical, cognitive, and emotional energy in active work performance (Rich et al., 2010; Saks et al., 2022). This definition provides a more comprehensive explanation of job performance effects compared to narrower concepts. Work engagement, as a work-related state of mind, focuses more on the relationship between an individual employee and their work. As a motivational state, engagement represents an individual’s active allocation of personal resources towards work tasks and how intensely and persistently those resources are applied (Christian et al., 2011; Rich et al., 2010). It plays a critical role for both individuals and organizations, as it facilitates learning and competency development (Liles & Mustian, 2004; LinkedIn Learning, 2021; Mbarushimana & Kuboja, 2016), influences individuals’ organizational commitment (Basit, 2019), and directly enhances performance and outcomes at both the individual and organizational levels (Chaudhry et al., 2017; Corbeanu & Iliescu, 2023; Haruna & Marthandan, 2017; Kim, 2017; Roberts & Davenport, 2002; Saks et al., 2022; Tyas et al., 2020; Guo et al., 2017).
It also plays a mediating role between some antecedents (e.g., job resources, training, leadership) and various positive outcomes (e.g., job performance, organizational effectiveness) in the existing literature (Chaudhry et al., 2017; Chen, 2015; Christian et al., 2011; Fletcher, 2016; Kim, 2017; Memon et al., 2016; Rich et al., 2010; Sekhar et al., 2018). Multiple widely used scales measure engagement, such as the Utrecht Work Engagement Scale (UWES; Schaufeli et al., 2006) and the Job Engagement Scale (JES; Rich et al., 2010). Although often characterized as a relatively stable state of mind, engagement can fluctuate over time (Carter et al., 2018; Christian et al., 2011; Fletcher, 2016). Therefore, longitudinal measurement is important for understanding engagement.
Impacts. The dimension of impacts, or performance effectiveness, is often measured to justify training programs (Kirkpatrick & Kirkpatrick, 2016). It focuses on how the work is carried out, rather than merely what has been completed, which captures the essence of competencies (Alainati et al., 2010). This dimension directly links learning and development efforts to business value, demonstrating the Return on Investment (ROI). Organizations need competent people to achieve results efficiently and effectively (Palan, 2007) and seek to justify investments in training by showing its impact on productivity, quality, customer satisfaction, and overall economic performance. Measuring impacts can involve analyzing multiple factors such as the quality, efficiency, accuracy, and business impact of a skill’s application in on-the-job environments via quantifiable metrics (e.g., customer satisfaction scores, time-to-resolution, individual promotion, financial outcomes), external feedback, individual performance evolution, and individual Organizational Citizenship Behavior (OCB; Bateman & Organ, 1983).
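The dimensions above can be collected into a single per-learner record. The field names and types below are an illustrative sketch of how such a state might be represented, not a schema defined in this paper; the sample values echo the Jane Smith example discussed with Figure 1.

```python
from dataclasses import dataclass, field
from datetime import datetime


@dataclass
class SkillState:
    """State of one precise skill within a learner's overall record."""
    skill_id: str
    mastery: float               # granular level in [0, 1], not pass/fail
    confidence: float            # self-efficacy indicator in [0, 1]
    last_demonstrated: datetime  # recency input for a decay model
    uses_last_90_days: int       # frequency of authentic application
    contexts: list = field(default_factory=list)  # where it was applied


@dataclass
class LearnerState:
    """Dynamic, multidimensional learner state (illustrative sketch)."""
    learner_id: str
    attributes: dict             # demographics, role, preferences
    career_goals: list
    skills: dict = field(default_factory=dict)   # skill_id -> SkillState
    engagement: float = 0.0      # e.g., a normalized survey score
    impacts: dict = field(default_factory=dict)  # business-facing metrics
```

In the proposed ecosystem, AI agents would update a record like this continuously as xAPI statements arrive in the LRS, rather than rewriting it only at formal review points.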
The shift from a static and backward-looking view (Sitzmann & Weinhardt, 2018; Škrinjarić, 2022; Wingreen & Blanton, 2007) to a more dynamic multidimensional “state” view is necessary for accurately reflecting human performance and adapting to the complexities of modern work environments (Chuang, 2021). It emphasizes continuous evolution over discrete attainment (Guthrie, 2009) and enables predictive insights, targeted intervention, and real-time capability assessment (Smith et al., 2019), which allows organizations and individuals to holistically understand, continuously develop, predict, and assess capabilities in a dynamic, operational context. Figure 1 shows an example of some state dimensions in the learner state portal for Jane Smith, a Logistics Operations Specialist at DE Solutions.

3. Theoretical Foundations

A dynamic learner-state ecosystem draws its theoretical foundations from mastery learning (Bloom, 1968; Guskey, 2022; Slavin, 1987), dynamic skill theory (Fischer, 1980; Fischer & Bidell, 2006; Mascolo, 2020), activity theory (Y. Engeström, 2000), ecological systems theory (Bronfenbrenner, 2005), and self-regulated learning (Pintrich, 2000; Zimmerman, 1990).
Mastery learning theory underpins the concept of continuously tracking a learner’s mastery level. It posits that with sufficient time and appropriate instruction, nearly all learners can achieve a high level of proficiency (Bloom, 1968; Guskey, 2022). This approach emphasizes continuous assessment and tailored interventions to ensure learners progress along a continuum rather than simply passing or failing. Mastery learning is a theory-driven education approach with foundations rooted in constructivist, behavioral, and social learning theories (McGaghie et al., 2020). Its core principle is that instruction is complete when a learner meets or exceeds a minimum passing standard, rather than after a fixed time. This model is particularly important for high-stakes training, such as in health professions education (Cook et al., 2013; McGaghie et al., 2020).
Dynamic skill theory aligns with the idea of skill development as a continuous, non-linear, and context-dependent process that unfolds through various levels of complexity (Fischer, 1980; Fischer & Bidell, 2006). It views skills as dynamic constructs expressed differently across varying contexts and support levels rather than static traits (Mascolo, 2020). It supports tracking state dimensions including contextualized performance, the fluid nature of mastery levels, and recency/decay. As Mascolo (2020) explained, it offers a framework for understanding how an individual’s ability to control their thoughts, feelings, and actions develops, organized into 13 distinct levels across four broad tiers: reflexes, actions, representations, and abstractions.
Activity theory frames learning as a mediated activity within complex socio-technical systems and highlights how individuals interact with tools, rules, communities, and divisions of labor to achieve objectives (Bakhurst, 2009; Y. Engeström, 2000; R. Engeström, 2009; Jonassen & Rohrer-Murphy, 1999). This theory is crucial for understanding how workplace performance serves as an authentic learning activity. Activity theory provides a lens to capture rich contextual data (e.g., tools used, collaborators, objectives) that feeds into learner state dimensions like contextualized performance.
Proposed by Bronfenbrenner, ecological systems theory views human development as a complex interplay between an individual and various interacting environmental systems (Bronfenbrenner, 2005; Crawford, 2020; Darling, 2007; Härkönen, 2007). These systems include the microsystem (immediate settings), mesosystem (interactions between microsystems), exosystem (external settings indirectly affecting the individual), macrosystem (broader cultural and societal patterns), and chronosystem (changes over time) (Neal & Neal, 2013). Its relevance to the dynamic state lies in its ability to provide holistic contextualization of influences on a learner’s state beyond direct learning interventions. It helps conceptualize how different learner state dimensions are not isolated but dynamically interact to shape skill development. Adopting this systemic perspective is essential for predictive workforce development because it highlights how a change in one layer, such as a new organizational policy in the exosystem, can cascade down to influence an individual learner’s state in the microsystem, and how shifts at the learner level can likewise ripple upward to affect broader organizational dynamics.
Self-regulated learning (SRL) refers to learners’ active management of their own cognitive, metacognitive, motivational, and behavioral processes during learning. Key SRL mechanisms include setting goals and planning, monitoring one’s understanding and progress, employing strategies to learn, and reflecting on outcomes to adapt future efforts. Decades of research have shown that SRL is critical for effective learning, yet many learners struggle to accurately monitor and regulate these processes on their own (Pintrich, 2000; Pintrich & De Groot, 1990; Zimmerman, 1990; Zimmerman, 2002). Consequently, advanced learning technologies, such as intelligent tutoring systems (ITS), have been developed to both measure and foster SRL in learners. In a dynamic learner-state ecosystem, SRL theory is operationalized via an integrated network of AI-driven tools that track the learner’s evolving state and scaffold the learner’s regulatory behaviors in real time (T. Chiu, 2024; Ng et al., 2024). This ecosystem continually collects multimodal data on the learner’s cognitive, affective, and behavioral state (e.g., performance logs, eye-tracking) and uses it to drive tailored interventions (D. Chang et al., 2023; Jin et al., 2023). In essence, the learner’s internal state becomes the central context that AI agents interpret and respond to, enabling a feedback loop between the learner’s actions and supportive regulation strategies. A hallmark of the system is the presence of multiple AI agents with distinct roles, all coordinated to support the learner’s self-regulation. These agents function as external regulators of learning, essentially intelligent scaffolds that complement the learner’s regulatory efforts. Each agent analyzes particular aspects of the learner’s state data and provides targeted support or feedback aligned with SRL processes.

4. The Orchestration Layer: Model Context Protocol (MCP) and AI Agents

4.1. Model Context Protocol (MCP)

Introduced by Anthropic in late 2024, MCP is an open standard that defines how AI models and tools communicate about context. MCP provides a universal interface through which AI agents can access and exchange contextual information regardless of their internal frameworks or the data sources involved. Just as web API standards enabled different systems to interact seamlessly, MCP standardizes the “language” by which AI agents (such as the mentoring, motivation, and diagnostic agents) retrieve learner data, send updates, and coordinate their actions across the learner-state ecosystem. This protocol is model-agnostic and vendor-neutral, meaning an agent built on one AI platform can communicate with an agent or data source from another through the common MCP interface. MCP follows a client–server architecture to facilitate AI interoperability (see Figure 2).
In this design, MCP servers act as interface endpoints to various data sources or services, while MCP clients are typically AI agents or applications that consume those data/services. The unique contribution of MCP in our context is how it enables shared contextual understanding and synchronous coordination among the various AI agents managing the learner state. In a traditional setup, each agent might maintain its own view of the individual, leading to fragmented or lagging information exchange. MCP mitigates this by offering a common operating picture: all agents draw from and contribute to a unified context space. For instance, when the diagnostic agent updates an individual’s competency record, it can do so through MCP to a central context server. Immediately, the mentoring agent can retrieve this updated information via MCP, ensuring it does not re-teach that concept, and the motivation agent can also see the progress to deliver appropriate praise. This real-time synchronization means each agent’s decisions are based on the latest state of the learner, not stale data. Research in multi-tool agent systems underscores the importance of such coordination: complex workflows (like a chain of reasoning that spans multiple resources or agents) are only possible when context is fluidly shared (R. Yang et al., 2021).
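The shared-context coordination just described can be approximated in a few lines. The publish/subscribe store below is a simplified stand-in for an MCP context server, not the protocol itself; the key names are invented for illustration.

```python
from collections import defaultdict
from typing import Any, Callable

class ContextStore:
    """Minimal shared context space: agents write updates and subscribe to
    keys, approximating MCP's 'common operating picture' (illustrative only)."""
    def __init__(self):
        self._state: dict[str, Any] = {}
        self._subscribers: dict[str, list[Callable]] = defaultdict(list)

    def subscribe(self, key: str, callback: Callable[[Any], None]) -> None:
        self._subscribers[key].append(callback)

    def update(self, key: str, value: Any) -> None:
        self._state[key] = value
        for cb in self._subscribers[key]:   # push fresh context to every agent
            cb(value)

    def read(self, key: str) -> Any:
        return self._state.get(key)

store = ContextStore()
seen = []
# The mentoring agent reacts the moment the diagnostic agent posts an update,
# so no agent ever acts on stale data.
store.subscribe("jane.mastery.conflict_resolution", seen.append)
store.update("jane.mastery.conflict_resolution", 0.35)
assert store.read("jane.mastery.conflict_resolution") == 0.35 and seen == [0.35]
```

In a real MCP deployment, the transport, authentication, and resource naming would follow the protocol specification rather than an in-process dictionary.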
MCP is designed to integrate seamlessly with existing learning data standards like the Experience API (xAPI) and Learning Record Store (LRS), effectively functioning as an intelligent orchestration layer atop the learning data layer. The xAPI standard defines a format for logging learning experiences as statements (typically actor–verb–object with context), and LRSs are repositories that store these statements. In a modern learning ecosystem, an LRS might contain rich historical data about an individual, such as which content they accessed, scores on assessments, time spent on tasks, etc. MCP can bridge AI agents to this repository in two directions: writing new observations to the LRS and reading relevant records to inform decisions. For example, when an AI mentoring agent observes that an individual solved a problem, it can use MCP to send a natural-language or structured request to an MCP-xAPI server, which translates it into a correctly formatted xAPI statement to be stored in the LRS. Conversely, suppose an agent needs to know the individual’s past performance (e.g., “retrieve all statements about a team member’s engagement with recent compliance training modules”). In that case, it can query the LRS through the MCP server and receive a consolidated answer without each agent individually implementing xAPI parsing logic. A recently developed MCP server, LearnMCP-xAPI, demonstrates this synergy: AI agents issue high-level descriptions of learning activities or queries, and the MCP layer handles converting these to xAPI data operations securely on the LRS. In this way, MCP acts as a translator and broker between AI cognition and the learning data. It elevates the xAPI data from a static log to an interactive context: the LRS is not a passive warehouse but an active context provider to the AI agents via MCP.
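A minimal sketch of such a broker follows, with an in-memory list standing in for the LRS. The class and method names are assumptions for illustration, not the LearnMCP-xAPI API; a real broker would expose `record` and `query` as MCP tools over the protocol.

```python
class LRSBroker:
    """Sketch of an MCP-style server mediating between agents and an LRS.
    The LRS here is just an in-memory list (an assumption for this sketch)."""
    def __init__(self):
        self._statements: list[dict] = []

    def record(self, actor: str, verb: str, obj: str, **context) -> dict:
        """Translate a high-level activity description into a stored record."""
        stmt = {"actor": actor, "verb": verb, "object": obj, "context": context}
        self._statements.append(stmt)
        return stmt

    def query(self, actor: str, verb=None) -> list[dict]:
        """Answer agent queries centrally, so no agent parses xAPI itself."""
        return [s for s in self._statements
                if s["actor"] == actor and (verb is None or s["verb"] == verb)]

lrs = LRSBroker()
lrs.record("jane@example.com", "completed", "Compliance Module A", score=0.92)
# Any agent can now ask the broker for a consolidated answer.
assert len(lrs.query("jane@example.com", verb="completed")) == 1
```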
By positioning MCP as an intelligent orchestration layer, we leverage the best of both worlds. The xAPI/LRS layer ensures interoperability and standardization of data, while MCP ensures interoperability at the level of agent reasoning and action. Through MCP client–server connections, AI agents can directly query data sources for context and receive responses in real time. This architecture greatly enhances contextual awareness in the system. MCP does not replace standards like xAPI—rather, it augments them by adding a layer of intelligent, adaptive communication. It ensures that all AI agents managing individuals’ journeys have a shared and up-to-date context and can act in a coordinated way. As a result, the learning environment becomes more responsive and personalized. The moment data is logged, it is fed into the ongoing dialog among agents, enabling on-the-fly adjustments to support the individuals. In the context of a learner-state ecosystem, MCP’s introduction marks a shift from siloed tutoring systems to an interoperable, context-driven network of AI coaches, all sharing a common memory of the individual. This architecture delivers more holistic and scalable support for self-regulated learning, as each specialized agent can plug into the collective intelligence of the system through MCP and thereby contribute to a unified, contextually aware learning experience for individuals.

4.2. The Proposed AI Agents Suite

Specialized AI agents, designed to perform specific functions, serve as the operational core of the proposed system. Operating in a collaborative and interconnected manner, orchestrated by MCP, these agents collectively enable the real-time capture, interpretation, and utilization of learning data for adaptive and personalized learning and development experiences. The proposed ecosystem leverages a suite of ten core AI agents, with each playing a distinct yet complementary role in supporting the learner’s journey.
Competency mapping agent. This agent specializes in maintaining and evolving the organizational competency framework. It identifies emerging skill needs based on industry trends, strategic shifts, job role analyses, and even performance data from various organizational systems (e.g., project management, CRM). Most importantly, it focuses on identifying and detailing granular competencies, subcompetencies, and precise skills required for specific job roles and to meet emerging organizational demands. By continuously analyzing internal and external data sources (e.g., job descriptions, market reports, industry skill taxonomies), this agent ensures that the defined competencies within the learner-state ecosystem remain current and relevant. This agent helps Human Resource (HR) and Learning and Development (L&D) teams understand the evolving skill gaps at an aggregate level. It moves beyond static competency models to a dynamic, AI-driven approach.
xAPI generation agent. This agent is responsible for translating diverse learning interactions and observations into standardized xAPI statements. It acts as the data ingress point for the learner-state ecosystem, converting raw data from various sources (e.g., simulations, practical tasks, formal assessments, informal interactions) into machine-readable and contextualized records. For instance, when a learner successfully completes a module in a compliance training program, the xAPI generation agent captures this event and formats it as an xAPI statement, detailing the actor, verb (e.g., “completed”), object (e.g., “Compliance Module A”), and relevant context (e.g., time, score). Each statement is pushed to the Learning Record Store (LRS), ensuring that every learning activity contributes to the learner’s comprehensive, continuously updated state profile.
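The statement structure described here follows the xAPI specification’s actor–verb–object pattern. The helper below builds such a statement using the ADL “completed” verb IRI; the activity id and learner details are placeholders for illustration.

```python
import json
from datetime import datetime, timezone

def make_completion_statement(email: str, name: str,
                              activity_id: str, activity_name: str,
                              score_scaled: float) -> dict:
    """Build an xAPI completion statement (actor-verb-object with result)."""
    return {
        "actor": {"objectType": "Agent", "name": name, "mbox": f"mailto:{email}"},
        "verb": {"id": "http://adlnet.gov/expapi/verbs/completed",
                 "display": {"en-US": "completed"}},
        "object": {"objectType": "Activity", "id": activity_id,
                   "definition": {"name": {"en-US": activity_name}}},
        "result": {"score": {"scaled": score_scaled}, "completion": True},
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }

stmt = make_completion_statement("jane@example.com", "Jane Smith",
                                 "https://example.org/activities/compliance-a",
                                 "Compliance Module A", 0.92)
print(json.dumps(stmt, indent=2))
```

An agent would then POST this JSON to the LRS’s statements endpoint; the `scaled` score is the xAPI convention for a normalized 0–1 result.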
Diagnostic agent. The diagnostic agent analyzes the xAPI streams and the current learner state to identify knowledge gaps, skill deficiencies, misconceptions, and other areas requiring further development. It continuously evaluates a learner’s mastery and confidence levels across various competencies. By applying sophisticated analytical models, it can pinpoint specific misconceptions, highlight areas where performance deviates from expected benchmarks, or even predict potential future learning difficulties. For example, if a sales professional consistently struggles with a particular type of customer objection in simulated scenarios, the agent would flag this as a specific skill gap requiring targeted intervention.
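One common way such an agent might maintain mastery estimates is Bayesian Knowledge Tracing (BKT); the single-step update below is a sketch under that assumption, with illustrative slip/guess/learn parameters rather than calibrated values.

```python
def bkt_update(p_mastery: float, correct: bool,
               slip: float = 0.1, guess: float = 0.2, learn: float = 0.15) -> float:
    """One BKT step: revise the mastery posterior from an observed outcome,
    then apply the learning-transition probability (parameters illustrative)."""
    if correct:
        posterior = p_mastery * (1 - slip) / (
            p_mastery * (1 - slip) + (1 - p_mastery) * guess)
    else:
        posterior = p_mastery * slip / (
            p_mastery * slip + (1 - p_mastery) * (1 - slip))
    return posterior + (1 - posterior) * learn

# Repeated failures on the same objection type push the estimate down,
# which is what would flag a specific skill gap for intervention.
p = 0.5
for outcome in [False, False, False]:
    p = bkt_update(p, outcome)
print(round(p, 3))
```

A production diagnostic agent would fit slip/guess/learn per competency from xAPI evidence and combine BKT with the contextual signals described above.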
Instructional Systems Design (ISD) agent. This agent leverages diagnostic insights to dynamically recommend or generate personalized learning pathways, resources, and interventions. It acts as an intelligent curriculum designer, adapting the learning experience in real-time based on the learner’s evolving needs and context. The ISD agent can select appropriate content, suggest specific exercises, or even propose a sequence of activities to address identified deficiencies. For instance, based on a diagnostic report indicating a team leader’s weakness in conflict resolution, the ISD agent might recommend a micro-learning module on de-escalation techniques, a case study analysis, or a simulated role-playing exercise (see the example scenario below).
Mentoring agent. This agent serves as a personal mentor and provides direct, real-time instructional support and feedback to individuals. It can engage in conversational dialog, explain complex concepts, offer hints during problem-solving, or provide immediate corrective feedback. This agent is designed to mimic the personalized attention of a human tutor/mentor, adapting its instructional strategy to the learner’s specific challenges and preferences. In an organizational context, a mentoring agent could guide an employee through a new software interface, explaining features and offering troubleshooting tips as they navigate the system.
Motivation agent. Recognizing that sustained engagement is critical to effective learning, this agent applies evidence-based strategies to keep learners motivated. It offers personalized encouragement, intuitive progress-tracking visualizations, gamified challenges, and timely reminders. By continuously monitoring engagement indicators, the agent detects dips in motivation and triggers targeted interventions to restore focus and persistence. For example, suppose an employee is falling behind on mandatory professional development. In that case, the motivation agent might send a personalized nudge, highlighting the benefits of completion or celebrating recent achievements to engage them.
Analytics agent. The analytics agent focuses on providing insights into learning performance, trends, and the overall effectiveness of the ecosystem. It aggregates and analyzes data across individuals, teams, or the entire organization to generate reports, identify systemic issues, and inform strategic decisions regarding training programs and talent development. This agent provides valuable feedback to HR, managers, and even the other AI agents for system optimization. For instance, the analytics agent could reveal that a new leadership training program is yielding significantly higher improvements in team cohesion scores compared to previous programs.
Performance support agent. This agent provides just-in-time assistance and resources at the moment of need within the workflow. Unlike the mentoring agent, which focuses on skill acquisition, the performance support agent aims to enhance on-the-job effectiveness by providing quick access to information, tools, or expert guidance. For example, during a complex task, a performance support agent could offer a digital checklist, link to relevant policy documents, or even connect the employee with a human expert if the agent’s knowledge base is insufficient.
Career pathing agent. This agent guides individual employees in navigating their career progression within the organization by suggesting relevant development pathways and opportunities. The career pathing agent leverages the learner’s current state, career aspirations, and organizational job role requirements to recommend next steps. It can identify the specific competencies needed for desired roles. Then, through the MCP, it can query the ISD agent for recommended learning resources to bridge those gaps. It can also highlight internal mobility opportunities and connect learners with mentors or specific projects that align with their career goals. This agent enhances employee retention by demonstrating clear growth opportunities, empowers employees to take ownership of their careers, and helps fill critical roles internally by preparing a skilled talent pipeline.
Team dynamics agent. This agent focuses on the collective learning and performance of teams, identifying interpersonal skill gaps, communication breakdowns, and opportunities for synergistic development. It analyzes collaborative activity data (e.g., project contributions, communication patterns, peer feedback captured via xAPI) to assess team-level competencies such as collaboration, conflict resolution, collective problem-solving, and shared understanding. It can flag instances where team dynamics might hinder performance and inform the ISD agent to recommend team-based interventions, workshops, or even suggest specific team configurations for future projects to optimize learning and performance. This agent is crucial for developing high-performing teams and a culture of collaboration, which can directly contribute to organizational effectiveness by optimizing group output.
Now, consider the following scenario, in which a new project manager (the learner) is struggling with “conflict resolution” (a defined high-level competency). Below is the summarized role of each agent, highlighting how MCP acts as the central nervous system (see Figure 3). The xAPI generation agent captures data from a simulated team meeting, logs the project manager’s ineffective handling of a dispute, and stores the data in the LRS, updating the Learner State. The diagnostic agent analyzes this new data against the “conflict resolution” competency within the Learner State. It identifies a significant gap in the learner’s mastery and updates the Learner State accordingly via MCP. Concurrently, the competency mapping agent ensures that “conflict resolution” is broken down into its precise sub-skills (e.g., active listening, de-escalation techniques, negotiation tactics) within the organizational framework, providing granular targets for development. This information is accessible to all agents via MCP. The updated Learner State, via MCP, triggers the ISD agent and the career pathing agent. The ISD agent accesses the Learner State via MCP, recognizes the “conflict resolution” gap with granular detail from the competency mapping agent, and, based on pre-defined rules and available resources, recommends or designs a micro-learning module on “De-escalation Techniques” and a series of interactive case studies. This new learning plan is added to the Learner State via MCP. By querying the updated Learner State via MCP, the career pathing agent notices the project manager’s aspiration for a senior leadership role and highlights how mastering “conflict resolution” (and its sub-skills) is a critical prerequisite for that advancement, providing a strong incentive. Via MCP, the new learning plan in the updated Learner State triggers the mentoring and motivation agents.
The mentoring agent initiates a personalized chat with the project manager, offering immediate feedback on the simulated meeting and introducing the recommended learning module. The motivation agent, noticing the learner’s past tendency to disengage from self-paced modules, sends a supportive message highlighting the career benefits of completion and suggests a collaborative study group focused on conflict resolution scenarios. By observing the project manager’s interactions in subsequent authentic team activities, the team dynamics agent collects data on communication patterns and team cohesion. If improvements are noted in conflict resolution within the team, this data is fed back to the Learner State as evidence for evolving competency. If the learner is about to enter another on-the-job meeting with potential conflict, the performance support agent, after querying the Learner State via MCP, might proactively offer a quick checklist for navigating difficult conversations, tailored to the specific sub-skills identified by the competency mapping agent. The analytics agent continuously monitors the learner’s progress, engagement with the recommended resources, and subsequent performance in related activities (including team dynamics data), providing aggregated insights to HR and L&D specialists.

5. Integrating Workplace Performance

5.1. The Need for Integrating In-Workflow Performance

The value of capturing performance data directly from daily work activities stems from the recognition that much informal learning remains invisible (Huda et al., 2018; Kusmin et al., 2017; Leiß et al., 2022), which leaves a significant blind spot in an organization’s understanding of its workforce’s true capabilities, as noted above. By integrating in-workflow performance capture, we make this informal learning visible, acknowledging the situated nature of an individual’s competence (Capaldo et al., 2006). Capturing performance data directly from the workflow provides granular insights into how and where skills are applied, including the specific tools utilized, the challenges encountered, and the environmental context. This level of detail is critical for understanding true mastery and the generalizability or transferability of skills across various situations (Brightwell & Grant, 2013; Mbarushimana & Kuboja, 2016; Palan, 2007; Salman et al., 2020; Tuxworth, 2005). With such precise data, the system can pinpoint specific skill deficits for targeted interventions.
Traditionally, post-event evaluations, such as end-of-course tests, surveys, or annual reviews, often fall short in providing this precision and context as they only provide a limited static snapshot of a learner’s abilities. The lack of authentic workplace conditions and complexity leads to a disconnect between training completion and actual job performance (Tuxworth, 2005). The integration of in-workflow performance data fundamentally transforms assessment and development processes by making them more dynamic, authentic, and highly targeted. The system shifts from episodic assessments to a continuous evaluation model where performance indicators are directly derived from daily workflow activities (Prescott-Clements et al., 2008). Real-time feedback loops and targeted interventions then accelerate skill refinement to match the rapid tempo of modern workplaces. This approach moves towards a “Longitudinal Evaluation of Performance”, providing ongoing insights into a learner’s progress (Prescott-Clements et al., 2008). Assessments will be based on the actual application of skills in authentic work situations, rather than being hypothetical (Prescott-Clements et al., 2008; Swanson et al., 1995). The system captures not only “what” was achieved (the task outcome) but also “how” it was achieved, which provides a richer understanding of competence (Borman & Motowidlo, 1997; Motowidlo & Van Scotter, 1994). By capturing both task and contextual performance (Borman & Motowidlo, 1997; Motowidlo & Van Scotter, 1994) and recognizing that competence is context-dependent and expressed differently across situations (Brightwell & Grant, 2013; Ginsburg et al., 2010), the model moves beyond static, context-free assessments.
By analyzing in-workflow performance data against granular skill definitions, the diagnostic agent can pinpoint precise skill deficiencies, even down to subcompetency or specific skill levels. The competency mapping agent continuously updates the underlying competency framework based on observed workplace demands, ensuring that targeted assessments are highly relevant and accurate. The ISD agent and mentoring agent then leverage these precise insights to recommend highly specific learning activities or provide just-in-time support, directly addressing identified performance gaps. The integrated system provides robust, data-driven evidence for critical HR decisions such as promotions, role matching, and succession planning, which significantly reduces reliance on subjective evaluations (Ginsburg et al., 2010). This objective evidence can empower the career pathing agent to offer highly personalized and accurate guidance for long-term career development, aligning individual aspirations with organizational needs.

5.2. Infrastructure for In-Workflow Data

To effectively capture performance data in the workflow, a robust and integrated technical infrastructure is essential, which acts as the “sensors” of the ecosystem. Existing enterprise applications must be instrumented to capture and log granular user interactions automatically. Sources of this telemetry might include Customer Relationship Management (CRM) systems, Enterprise Resource Planning (ERP) software, project management platforms, design software, and coding environments. The data captured can range from time spent on specific tasks, the number of customer issues resolved, lines of code committed, or adherence to process steps. A key technology enabling this is the Electronic Performance Support System (EPSS), which embeds performance and learning assistance directly within the software interface (C. C. Chang, 2004; Huang & Klein, 2023; Karakaya-Ozyer & Yildiz, 2022; Kert & Kurt, 2012; Leiß et al., 2022; Sezer, 2021; Schaik et al., 2002). EPSS components, such as advisory systems, data/information bases, online help/reference, and productivity software, provide “just-in-time” knowledge and skills development at the moment of need, thereby generating rich data on how employees interact with information and tools to perform tasks (C. C. Chang, 2004; Huang & Klein, 2023). Mobile Performance Support Systems (MPSS) extend this capability to workers in non-fixed locations, capturing performance data on the go (Huang & Klein, 2023, 2025).
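As a hedged illustration of such instrumentation, the decorator below wraps an in-application action so that every invocation emits a telemetry event with its outcome and duration, roughly as an instrumented CRM or ticketing action might. The task name, event fields, and in-memory sink are invented for this sketch.

```python
import time
from functools import wraps

EVENT_LOG: list[dict] = []   # stand-in for a real telemetry sink (assumption)

def instrumented(task_name: str):
    """Wrap an application action so each call logs a granular usage event."""
    def decorator(fn):
        @wraps(fn)
        def wrapper(*args, **kwargs):
            start = time.perf_counter()
            result = fn(*args, **kwargs)
            EVENT_LOG.append({"task": task_name,
                              "duration_s": time.perf_counter() - start,
                              "outcome": result})
            return result
        return wrapper
    return decorator

@instrumented("resolve_customer_issue")
def resolve_issue(ticket_id: str) -> str:
    return "resolved"   # placeholder for the real application logic

resolve_issue("T-1001")
assert EVENT_LOG[0]["task"] == "resolve_customer_issue"
```

Events captured this way would then be handed to the xAPI generation agent for translation into standardized statements.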
For roles involving physical tasks or specialized equipment (e.g., manufacturing, healthcare), integrating Internet of Things (IoT) sensors with machinery, tools, or wearables becomes necessary. This integration enables the collection of operational data from production lines, usage logs from medical devices, diagnostic outputs from equipment, or even movement patterns and task durations from wearables, while adhering to appropriate privacy safeguards.
The xAPI is foundational for standardizing the diverse data captured from these operational systems. It extends beyond merely tracking formal learning events to capture a broad spectrum of performance events in the operational workflow (Smith et al., 2019). Custom xAPI profiles can be developed for specific job roles or critical incidents, defining what constitutes a meaningful performance event and how its data elements should be structured for consistent interpretation. The xAPI generation agent then automates the translation of raw, disparate system logs and sensor data into contextualized xAPI performance statements, acting as an “observer” and “translator” of in-workflow activities.
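A custom xAPI profile can be approximated as a mapping from raw event types to verb and activity-type IRIs, which the generation agent consults when translating system logs. The profile entries and event fields below are invented for illustration; only the ADL “completed” verb IRI is taken from the standard vocabulary.

```python
# Illustrative profile: which raw events are meaningful, and how to encode them.
PROFILE = {
    "ticket_closed": {"verb": "http://adlnet.gov/expapi/verbs/completed",
                      "activity_type": "https://example.org/types/support-ticket"},
    "commit_pushed": {"verb": "https://example.org/verbs/committed",
                      "activity_type": "https://example.org/types/code-change"},
}

def translate(raw_event: dict) -> dict:
    """xAPI-generation-agent step: map a raw log entry onto the profile,
    rejecting events the profile does not define as meaningful."""
    try:
        rule = PROFILE[raw_event["event"]]
    except KeyError:
        raise ValueError(f"event {raw_event['event']!r} not in xAPI profile")
    return {"actor": {"mbox": f"mailto:{raw_event['user']}"},
            "verb": {"id": rule["verb"]},
            "object": {"id": raw_event["object_id"],
                       "definition": {"type": rule["activity_type"]}}}

stmt = translate({"event": "ticket_closed", "user": "jane@example.com",
                  "object_id": "https://example.org/tickets/481"})
assert stmt["verb"]["id"].endswith("completed")
```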
Once captured, performance data must flow efficiently through the ecosystem to inform the learner state and enable actionable insights. High-throughput data streaming technologies, such as Apache Kafka, are utilized to handle the continuous, high-volume, and low-latency ingestion of xAPI performance data from numerous operational sources simultaneously (Smith et al., 2019). An initial processing layer performs immediate filtering, routing, and basic validation to ensure data quality before storage or further analysis. LRSs, even a network of federated LRSs for large organizations, serve as the authoritative, tamper-proof repositories for all xAPI statements (Smith et al., 2019).
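The initial filtering/validation layer might look like the generator pipeline below. In production this logic would sit on a Kafka consumer; here a plain iterator stands in for the stream, and the required-field check is a deliberately simplified assumption.

```python
def validate(statements):
    """Initial processing layer: drop malformed statements before they reach
    the LRS, yielding only well-formed ones (simplified validation rule)."""
    required = ("actor", "verb", "object")
    for stmt in statements:
        if all(k in stmt for k in required):
            yield stmt   # forward to storage / further analysis

incoming = [
    {"actor": "a", "verb": "completed", "object": "module-1"},
    {"actor": "b", "verb": "attempted"},   # malformed: no object, filtered out
]
clean = list(validate(incoming))
assert len(clean) == 1 and clean[0]["object"] == "module-1"
```

Because the validator is a generator, it composes naturally with routing and enrichment stages in the same streaming pipeline.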

5.3. Adoption Challenges

Although the architecture assumes seamless organizational uptake, decades of technology-acceptance research—and recent evidence on AI-driven workplace monitoring—show that deeply entrenched learning habits, privacy concerns, and cultural norms can slow or derail implementation (Davis, 1989; Rogers, 2003; Venkatesh & Davis, 2000). Surveys indicate that nearly half of employees worry about data security and algorithmic fairness when AI tools track their behavior, while 70% of large companies now deploy some form of algorithmic monitoring (McKinsey & Company, 2025). Transparent data-governance policies, worker participation in tool design, and staged change-management programs therefore become as critical as the technical stack itself. Future research should test how such socio-technical safeguards mediate uptake and learning transfer across diverse organizational cultures.

6. Roadmap to Building a Learner-State Ecosystem

Building a dynamic learner-state ecosystem requires implementing a technical architecture with multiple integrated components. The development of a learner-state ecosystem is a multi-phase endeavor, which might take years. Each phase builds upon the successes of the previous one, allowing for continuous validation and adaptation. The foci, objectives, activities, and needed tools/techniques are described in Table 1.
The first phase focuses on the fundamental shift from traditional isolated training to a holistic AI-powered approach. Organizations should build the essential infrastructure to capture all types of learning activities into a unified view, which will serve as the foundation for personalized employee development based on an understanding of the full spectrum of how individuals learn and perform. The vision should be communicated across departments to secure executive buy-in and foster cross-departmental collaboration (e.g., HR, IT, Operations) in embracing this strategic shift. From the technical perspective, this phase involves defining the high-level technical architecture for integrating diverse data streams. For instance, organizations should select and deploy an xAPI-compliant Learning Record Store (LRS) as their central data hub, and develop and deploy the foundational xAPI Generation Agent. They should design the core MCP layer and conduct a comprehensive audit of existing formal learning platforms, operational systems, and collaboration tools to identify data export capabilities. Following that, the core data model for the multi-dimensional learner state should be designed. During this process, organizations should establish clear policies for data ownership, access, privacy, and ethical use.
With the integrated data foundation in place, Phase 2 focuses on developing the initial set of AI agents to identify specific skill gaps and offer personalized learning recommendations. This phase is also where organizations begin to see initial ROI, gaining a clear, data-driven understanding of specific skill gaps. Technically, organizations can train the initial models of the AI agents (i.e., the competency mapping, diagnostic, ISD, mentoring, and analytics agents). They should begin building out the core functionality of the MCP layer to enable seamless communication and data exchange between the deployed AI agents and the LRS.
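To illustrate the orchestration pattern, the following simplified, in-process stand-in models how an MCP-style tool registry could let a diagnostic agent query the LRS through a shared interface. This is a sketch under stated assumptions, not the MCP implementation itself: the real protocol exposes tools over JSON-RPC, and the tool name `lrs.query` and the toy statement data are invented for illustration.

```python
from typing import Any, Callable

class MCPToolRegistry:
    """Simplified in-process stand-in for an MCP server's tool registry.

    Models only the register/dispatch pattern by which agents share
    data sources; a real MCP server would expose these tools over
    JSON-RPC to external model clients.
    """

    def __init__(self) -> None:
        self._tools: dict[str, Callable[..., Any]] = {}

    def register(self, name: str, fn: Callable[..., Any]) -> None:
        self._tools[name] = fn

    def call(self, name: str, **kwargs) -> Any:
        if name not in self._tools:
            raise KeyError(f"Unknown tool: {name}")
        return self._tools[name](**kwargs)

# A toy LRS: an in-memory list of xAPI-like statements.
lrs_store = [
    {"actor": "jane", "verb": "completed", "object": "crm-training"},
    {"actor": "jane", "verb": "failed", "object": "escalation-sim"},
]

registry = MCPToolRegistry()
registry.register("lrs.query",
                  lambda actor: [s for s in lrs_store if s["actor"] == actor])

# A diagnostic agent calls the shared tool rather than touching the LRS directly.
records = registry.call("lrs.query", actor="jane")
gaps = [s["object"] for s in records if s["verb"] == "failed"]
print(gaps)  # ['escalation-sim']
```

The design point is decoupling: each agent depends only on the registry's tool contract, so new agents added in later phases can reuse the same LRS access path without bespoke integrations.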
Phase 3 is critical, as organizations extend data capture into the daily workflow for a continuous, authentic understanding of employee performance and skill application, which significantly enriches the learner state. By the end of this phase, a multi-dimensional learner state will be comprehensively developed. This phase involves instrumenting key operational software (e.g., CRM, project management tools) and, where applicable, hardware (e.g., IoT sensors). The xAPI Generation Agent should be enhanced with advanced AI capabilities, for example, applying NLP to unstructured text and recognizing behavioral patterns. New agents (e.g., performance support, team dynamics, career pathing) can be developed. MCP capabilities should also be advanced to handle more complex, real-time queries and data flows.
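The kind of NLP enhancement described above can be sketched as follows. This toy extractor turns a free-text work note into candidate skill evidence using keyword rules; the `SKILL_PATTERNS` mapping and skill tags are hypothetical, and a production xAPI Generation Agent would rely on a trained NLP model rather than regular expressions.

```python
import re

# Hypothetical mapping from phrase patterns to skill tags; a production
# pipeline would use a trained NLP model rather than keyword rules.
SKILL_PATTERNS = {
    "conflict_resolution": re.compile(r"\b(de-?escalat\w*|calmed)\b", re.I),
    "sql": re.compile(r"\b(sql|query|join)\b", re.I),
}

def extract_skill_signals(note: str) -> list[str]:
    """Return skill tags whose patterns appear in a free-text work note."""
    return [skill for skill, pat in SKILL_PATTERNS.items() if pat.search(note)]

note = "De-escalated an angry customer, then wrote a SQL query to fix billing."
print(extract_skill_signals(note))  # ['conflict_resolution', 'sql']
```

Each extracted tag could then be emitted as an xAPI statement, converting otherwise invisible in-workflow behavior into evidence for the learner state.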
At its most mature stage, the ecosystem should function as an intelligent, predictive platform that continuously optimizes itself through reinforcement-learning algorithms. Supported by an aligned organizational culture, this capability shifts workforce development from reactive training to a genuinely proactive talent strategy. Building on existing infrastructure, organizations can optimize MCP and integrate sophisticated machine learning models for the Motivation Agent to personalize encouragement and engagement strategies. Models within the analytics and ISD agents should be advanced to forecast future skill demands, predict skill decay, and develop personalized learning paths. Importantly, the ongoing oversight and feedback from HR and L&D experts, along with the built-in mechanism for AI models to train and continuously improve themselves, will help optimize the ecosystem.
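As one hedged example of the predictive modeling described above, the sketch below estimates skill decay with an exponential forgetting curve. The functional form and the 90-day half-life are illustrative assumptions; in the ecosystem, the analytics agent would fit these parameters per skill and per learner from LRS evidence.

```python
def predicted_mastery(initial_mastery: float, days_since_use: float,
                      half_life_days: float = 90.0) -> float:
    """Estimate current mastery under an exponential forgetting curve.

    The half-life default is illustrative; in practice it would be
    fit per skill and per learner from observed LRS performance data.
    """
    decay = 0.5 ** (days_since_use / half_life_days)
    return initial_mastery * decay

# A skill last demonstrated 90 days ago retains half its mastery estimate.
print(round(predicted_mastery(0.8, 90), 2))  # 0.4
```

The ISD agent could use such an estimate to trigger a refresher intervention whenever predicted mastery for a role-critical skill drops below a set threshold.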

7. Conclusions

The landscape of modern work demands a radical re-imagining of how organizations cultivate and manage talent. This paper presented a dynamic learner-state ecosystem as a transformative solution to the significant competency–performance gap. The multi-dimensional learner state serves as a dynamic digital representation of an individual’s capabilities across eight critical dimensions. This holistic view moves beyond static credentials, providing a real-time understanding of what an individual truly knows and can do. The ecosystem’s power resides in its suite of specialized AI agents, which operate in a highly collaborative and interconnected fashion, with their interactions and data exchanges seamlessly orchestrated by the MCP. A further innovation of this ecosystem is its mechanism for integrating in-workflow performance data through the proposed infrastructure and existing enterprise systems. It has profound implications for both individuals and organizations. For learners, it provides personalized and contextualized development paths, clear career trajectories, and an evidence-based record of their true capabilities. For organizations, it ensures real-time capability visibility, which enables predictive workforce planning and optimizes L&D investments.

Author Contributions

Conceptualization, M.Y., N.L. and B.L.; writing—original draft preparation, M.Y., N.L., B.L. and Z.H.; writing—review and editing, M.Y., B.L. and Z.H.; visualization, M.Y.; supervision, M.Y. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

No new data were created or analyzed in this study.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
AI	Artificial Intelligence
MCP	Model Context Protocol
LRS	Learning Record Store
LMS	Learning Management System
xAPI	Experience API
L&D	Learning and Development
CBL	Competency-Based Learning
CBT	Competency-Based Training
HR	Human Resources
HRD	Human Resource Development
ISD	Instructional Systems Design
ROI	Return on Investment
OCB	Organizational Citizenship Behavior
SRL	Self-Regulated Learning
CRM	Customer Relationship Management
EPSS	Electronic Performance Support System

References

  1. Alainati, S., Alshawi, S., & Al-Karaghouli, W. (2010, April 12–13). The effect of education and training on competency. European and Mediterranean Conference on Information Systems, Abu Dhabi, United Arab Emirates. [Google Scholar]
  2. Anthropic. (2024, November 25). Introducing the model context protocol. Model Context Protocol. Available online: https://modelcontextprotocol.io/introduction (accessed on 1 May 2025).
  3. Arditi, D., Gluch, P., & Holmdahl, M. (2013). Managerial competencies of female and male managers in the Swedish construction industry. Construction Management and Economics, 31(9), 979–990. [Google Scholar] [CrossRef]
  4. Arghode, V., Heminger, S., & McLean, G. N. (2021). Career self-efficacy and education abroad: Implications for future global workforce. European Journal of Training and Development, 45(1), 1–13. [Google Scholar] [CrossRef]
  5. Arokiasamy, L., Mansouri, N., Balaraman, R. A., & Kassim, N. M. (2017). A literature review of competence development on academic career advancement: A human resource development perspective. Global Business and Management Research, 9(1s), 403. [Google Scholar]
  6. Bakhurst, D. (2009). Reflections on activity theory. Educational Review, 61(2), 197–210. [Google Scholar] [CrossRef]
  7. Baldwin, T. T., & Ford, J. K. (1988). Transfer of training: A review and directions for future research. Personnel Psychology, 41(1), 63–105. [Google Scholar] [CrossRef]
  8. Bandura, A. (2023). Cultivate self-efficacy for personal and organizational effectiveness. In Principles of organizational behavior: The handbook of evidence-based management (3rd ed., pp. 113–135). John Wiley & Sons. [Google Scholar]
  9. Basit, A. A. (2019). Examining how respectful engagement affects task performance and affective organizational commitment: The role of job engagement. Personnel Review, 48(3), 644–658. [Google Scholar] [CrossRef]
  10. Bateman, T. S., & Organ, D. W. (1983). Job satisfaction and the good soldier: The relationship between affect and employee “citizenship”. Academy of Management Journal, 26(4), 587–595. [Google Scholar] [CrossRef]
  11. Bloom, B. S. (1968). Learning for mastery. Instruction and curriculum (Regional Education Laboratory for the Carolinas and Virginia, Topical Papers and Reprints, No. 1). Evaluation Comment, 1(2), n2. [Google Scholar]
  12. Bloom, B. S. (1984). The 2 sigma problem: The search for methods of group instruction as effective as one-to-one tutoring. Educational Researcher, 13(6), 4–16. [Google Scholar] [CrossRef]
  13. Borman, W. C., & Motowidlo, S. J. (1997). Task performance and contextual performance: The meaning for personnel selection research. Human Performance, 10(2), 99–109. [Google Scholar] [CrossRef]
  14. Boud, D., & Garrick, J. (2001). Understandings of workplace learning. In D. Boud, & J. Garrick (Eds.), Understanding learning at work (pp. 1–11). Routledge. [Google Scholar]
  15. Boyatzis, R. E. (1991). The competent manager: A model for effective performance. John Wiley & Sons. [Google Scholar]
  16. Boyatzis, R. E. (2008). Competencies in the 21st century. Journal of Management Development, 27(1), 5–12. [Google Scholar] [CrossRef]
  17. Brennan, B. A. (2022). The impact of self-efficacy based prebriefing on nursing student clinical competency and self-efficacy in simulation: An experimental study. Nurse Education Today, 109, 105260. [Google Scholar] [CrossRef]
  18. Brightwell, A., & Grant, J. (2013). Competency-based training: Who benefits? Postgraduate Medical Journal, 89(1048), 107–110. [Google Scholar] [CrossRef]
  19. Bronfenbrenner, U. (2005). Ecological systems theory (1992). In U. Bronfenbrenner (Ed.), Making human beings human: Bioecological perspectives on human development (pp. 106–173). Sage Publications Ltd. [Google Scholar]
  20. Brown, R. B. (1993). Meta-competence: A recipe for reframing the competence debate. Personnel Review, 22(6), 25–36. [Google Scholar] [CrossRef]
  21. Canaleta, X., Alsina, M., De Torres, E., & Fonseca, D. (2024). Automated monitoring of human–Computer interaction for assessing teachers’ digital competence based on LMS data extraction. Sensors, 24, 3326. [Google Scholar] [CrossRef]
  22. Capaldo, G., Iandoli, L., & Zollo, G. (2006). A situationalist perspective to competency management. Human Resource Management, 45(3), 429–448. [Google Scholar] [CrossRef]
  23. Carter, W. R., Nesbit, P. L., Badham, R. J., Parker, S. K., & Sung, L. K. (2018). The effects of employee engagement and self-efficacy on job performance: A longitudinal field study. The International Journal of Human Resource Management, 29(17), 2483–2502. [Google Scholar] [CrossRef]
  24. Cerasoli, C. P., Alliger, G. M., Donsbach, J. S., & Mathieu, J. E. (2018). Antecedents and outcomes of informal learning behaviors: A meta-analysis. Journal of Business and Psychology, 33(2), 203–230. [Google Scholar] [CrossRef]
  25. Chang, C. C. (2004). The relationship between the performance and the perceived benefits of using an electronic performance support system (EPSS). Innovations in Education and Teaching International, 41(3), 343–364. [Google Scholar] [CrossRef]
  26. Chang, D., Lin, M., Hajian, S., & Wang, Q. (2023). Educational design principles of using AI chatbot that supports self-regulated learning in education: Goal setting, feedback, and personalization. Sustainability, 15, 12921. [Google Scholar] [CrossRef]
  27. Chaudhry, N. I., Jariko, M. A., Mushtaque, T., Mahesar, H. A., & Ghani, Z. (2017). Impact of working environment and training & development on organization performance through mediating role of employee engagement and job satisfaction. European Journal of Training and Development Studies, 4(2), 33–48. [Google Scholar]
  28. Chen, S. L. (2015). The relationship of leader psychological capital and follower psychological capital, job engagement and job performance: A multilevel mediating perspective. The International Journal of Human Resource Management, 26(18), 2349–2365. [Google Scholar] [CrossRef]
  29. Chiu, T. (2024). A classification tool to foster self-regulated learning with generative artificial intelligence by applying self-determination theory: A case of ChatGPT. Educational Technology Research and Development, 72(4), 2401–2416. [Google Scholar] [CrossRef]
  30. Chiu, T. K., Ahmad, Z., Ismailov, M., & Sanusi, I. T. (2024). What are artificial intelligence literacy and competency? A comprehensive framework to support them. Computers and Education Open, 6, 100171. [Google Scholar] [CrossRef]
  31. Christian, M. S., Garza, A. S., & Slaughter, J. E. (2011). Work engagement: A quantitative review and test of its relations with task and contextual performance. Personnel Psychology, 64(1), 89–136. [Google Scholar] [CrossRef]
  32. Chuang, S. (2021). An empirical study of displaceable job skills in the age of robots. European Journal of Training and Development, 45(6/7), 617–632. [Google Scholar] [CrossRef]
  33. Clanton, J., Gardner, A., Cheung, M., Mellert, L., Evancho-Chapman, M., & George, R. L. (2014). The relationship between confidence and competence in the development of surgical skills. Journal of Surgical Education, 71(3), 405–412. [Google Scholar] [CrossRef]
  34. Combs, G. M., & Luthans, F. (2007). Diversity training: Analysis of the impact of self-efficacy. Human Resource Development Quarterly, 18(1), 91–120. [Google Scholar] [CrossRef]
  35. Cook, D. A., Brydges, R., Zendejas, B., Hamstra, S. J., & Hatala, R. (2013). Mastery learning for health professionals using technology-enhanced simulation: A systematic review and meta-analysis. Academic Medicine, 88(8), 1178–1186. [Google Scholar] [CrossRef]
  36. Corbeanu, A., & Iliescu, D. (2023). The link between work engagement and job performance. Journal of Personnel Psychology, 22(3), 111–122. [Google Scholar] [CrossRef]
  37. Crawford, M. (2020). Ecological systems theory: Exploring the development of the theoretical framework as conceived by Bronfenbrenner. Journal of Public Health Issues and Practices, 4(2), 170. [Google Scholar] [CrossRef]
  38. Darling, N. (2007). Ecological systems theory: The person in the center of the circles. Research in Human Development, 4(3–4), 203–217. [Google Scholar] [CrossRef]
  39. Davis, F. D. (1989). Perceived usefulness, perceived ease of use, and user acceptance of information technology. MIS Quarterly, 13(3), 319–340. [Google Scholar] [CrossRef]
  40. Dillon, J. D. (2022). The modern learning ecosystem: A new L&D mindset for the ever-changing workplace. Association for Talent Development. [Google Scholar]
  41. Draganidis, F., & Mentzas, G. (2006). Competency based management: A review of systems and approaches. Information Management & Computer Security, 14(1), 51–64. [Google Scholar] [CrossRef]
  42. Dreyfus, H. L., & Dreyfus, S. E. (1984). Putting computers in their proper place: Analysis versus intuition in the classroom. Teachers College Record, 85(4), 578–601. [Google Scholar] [CrossRef]
  43. Emerson, L. C., & Berge, Z. L. (2018). Microlearning: Knowledge management applications and competency-based training in the workplace. Knowledge Management & E-Learning, 10(2), 125–132. [Google Scholar]
  44. Engeström, R. (2009). Who is acting in an activity system. In A. L. Sannino, A. Sannino, H. Daniels, & K. D. Gutiérrez (Eds.), Learning and expanding with activity theory (pp. 257–273). Cambridge University Press. [Google Scholar]
  45. Engeström, Y. (2000). Activity theory as a framework for analyzing and redesigning work. Ergonomics, 43(7), 960–974. [Google Scholar] [CrossRef] [PubMed]
  46. Eraut, M. (2005). Initial teacher training and the NVQ model. In J. Burke (Ed.), Competency based education and training (pp. 161–172). Routledge. [Google Scholar]
  47. Farashah, A. D., Thomas, J., & Blomquist, T. (2019). Exploring the value of project management certification in selection and recruiting. International Journal of Project Management, 37(1), 14–26. [Google Scholar] [CrossRef]
  48. Faruqe, F., Watkins, R., & Medsker, L. (2021). Competency model approach to AI literacy: Research-based path from initial framework to model. arXiv, arXiv:2108.05809. [Google Scholar] [CrossRef]
  49. Fischer, K. W. (1980). A theory of cognitive development: The control and construction of hierarchies of skills. Psychological Review, 87, 477–531. [Google Scholar] [CrossRef]
  50. Fischer, K. W., & Bidell, T. R. (2006). Dynamic development of action and thought. In R. M. Lerner, & W. Damon (Eds.), Handbook of child psychology: Theoretical models of human development (pp. 313–399). John Wiley & Sons Inc. [Google Scholar]
  51. Fletcher, L. (2016). Training perceptions, engagement, and performance: Comparing work engagement and personal role engagement. Human Resource Development International, 19(1), 4–26. [Google Scholar] [CrossRef]
  52. Gardner, A. (2017). The viability of online competency based education: An organizational analysis of the impending paradigm shift. The Journal of Competency-Based Education, 2(4), e01055. [Google Scholar] [CrossRef]
  53. Ginsburg, S., McIlroy, J., Oulanova, O., Eva, K., & Regehr, G. (2010). Toward authentic clinical evaluation: Pitfalls in the pursuit of competency. Academic Medicine, 85(5), 780–786. [Google Scholar] [CrossRef] [PubMed]
  54. Gist, M. E., Stevens, C. K., & Bavetta, A. G. (1991). Effects of self-efficacy and post-training intervention on the acquisition and maintenance of complex interpersonal skills. Personnel Psychology, 44(4), 837–861. [Google Scholar] [CrossRef]
  55. Gottlieb, M., Chan, T. M., Zaver, F., & Ellaway, R. (2022). Confidence-competence alignment and the role of self-confidence in medical education: A conceptual review. Medical Education, 56(1), 37–47. [Google Scholar] [CrossRef]
  56. Guo, Y., Du, H., Xie, B., & Mo, L. (2017). Work engagement and job performance: The moderating role of perceived organizational support. Anales de Psicología/Annals of Psychology, 33(3), 708–713. [Google Scholar]
  57. Guskey, T. R. (2022). Implementing mastery learning. Corwin Press. [Google Scholar]
  58. Guthrie, H. (2009). Competence and competency-based training: What the literature says. National Centre for Vocational Education Research Ltd. [Google Scholar]
  59. Haruna, A. Y., & Marthandan, G. (2017). Foundational competencies for enhancing work engagement in SMEs Malaysia. Journal of Workplace Learning, 29(3), 165–184. [Google Scholar] [CrossRef]
  60. Härkönen, U. (2007). The Bronfenbrenner ecological systems theory of human development. In Scientific articles of V international conference. Available online: https://www.academia.edu/67678654/The_Bronfenbrenner_ecological_systems_theory_of_human_development (accessed on 4 July 2025).
  61. Henri, M., Johnson, M. D., & Nepal, B. (2017). A review of competency-based learning: Tools, assessments, and recommendations. Journal of Engineering Education, 106(4), 607–638. [Google Scholar] [CrossRef]
  62. Holton, E. F., III, Bates, R. A., & Ruona, W. E. (2000). Development of a generalized learning transfer system inventory. Human Resource Development Quarterly, 11(4), 333–360. [Google Scholar] [CrossRef]
  63. Howard, J. (2019). Artificial intelligence: Implications for the future of work. American Journal of Industrial Medicine, 62(11), 917–926. [Google Scholar] [CrossRef] [PubMed]
  64. Huang, Y., & Klein, J. D. (2023). Mobile performance support systems: Characteristics, benefits, and conditions. TechTrends, 67(1), 150–159. [Google Scholar] [CrossRef]
  65. Huang, Y., & Klein, J. D. (2025). Benefits and challenges in adopting mobile performance support systems. TechTrends, 69(3), 684–698. [Google Scholar] [CrossRef]
  66. Huda, M., Maseleno, A., Teh, K. S. M., Don, A. G., Basiron, B., Jasmi, K. A., Mustari, M., Nasir, B. M., & Ahmad, R. (2018). Understanding Modern Learning Environment (MLE) in big data era. International Journal of Emerging Technologies in Learning (Online), 13(5), 71. [Google Scholar]
  67. Ilić, M., Mikić, V., Kopanja, L., & Vesin, B. (2023). Intelligent techniques in e-learning: A literature review. Artificial Intelligence Review, 56, 14907–14953. [Google Scholar] [CrossRef]
  68. Jin, S., Im, K., Yoo, M., Roll, I., & Seo, K. (2023). Supporting students’ self-regulated learning in online learning using artificial intelligence applications. International Journal of Educational Technology in Higher Education, 20, 1–21. [Google Scholar] [CrossRef]
  69. Jonassen, D. H., & Rohrer-Murphy, L. (1999). Activity theory as a framework for designing constructivist learning environments. Educational Technology Research and Development, 47(1), 61–79. [Google Scholar] [CrossRef]
  70. Kahn, W. A. (1990). Psychological conditions of personal engagement and disengagement at work. Academy of Management Journal, 33(4), 692–724. [Google Scholar] [CrossRef]
  71. Karakaya-Ozyer, K., & Yildiz, Z. (2022). Design and evaluation of an electronic performance support system for quantitative data analysis. Education and Information Technologies, 27(2), 2407–2434. [Google Scholar] [CrossRef]
  72. Karl, K. A., O’Leary-Kelly, A. M., & Martocchio, J. J. (1993). The impact of feedback and self-efficacy on performance in training. Journal of Organizational Behavior, 14(4), 379–394. [Google Scholar] [CrossRef]
  73. Karoudis, K., & Magoulas, G. D. (2018). User model interoperability in education: Sharing learner data using the Experience API and distributed ledger technology. In B. H. Khan, J. R. Corbeil, & M. E. Corbeil (Eds.), Responsible analytics and data mining in education (pp. 156–178). Routledge. [Google Scholar]
  74. Karsten, R., & Roth, R. M. (1998). Computer self-efficacy: A practical indicator of student computer competency in introductory IS courses. Informing Science, 1(3), 61–68. [Google Scholar] [CrossRef]
  75. Kert, S. B., & Kurt, A. A. (2012). The effect of electronic performance support systems on self-regulated learning skills. Interactive Learning Environments, 20(6), 485–500. [Google Scholar] [CrossRef]
  76. Kim, W. (2017). Examining mediation effects of work engagement among job resources, job performance, and turnover intention. Performance Improvement Quarterly, 29(4), 407–425. [Google Scholar] [CrossRef]
  77. Kirkpatrick, J. D., & Kirkpatrick, W. K. (2016). Kirkpatrick’s four levels of training evaluation. Association for Talent Development. [Google Scholar]
  78. Kuijpers, M. T., & Scheerens, J. (2006). Career competencies for the modern career. Journal of Career Development, 32(4), 303–319. [Google Scholar] [CrossRef]
  79. Kusmin, K. L., Ley, T., & Normak, P. (2017, October 11–12). Towards a data driven competency management platform for Industry 4.0. Workshop Papers of i-Know (Vol. 2025, p. 1), Graz, Austria. [Google Scholar]
  80. Leiß, T. V., Rausch, A., & Seifried, J. (2022). Problem-solving and tool use in office work: The potential of electronic performance support systems to promote employee performance and learning. Frontiers in Psychology, 13, 869428. [Google Scholar] [CrossRef]
  81. Liles, R. T., & Mustian, R. D. (2004). Core competencies: A systems approach for training and organizational development in extension. The Journal of Agricultural Education and Extension, 10(2), 77–82. [Google Scholar] [CrossRef]
  82. LinkedIn Learning. (2021). Workplace learning report. Available online: https://learning.linkedin.com/resources/workplace-learning-report-2021 (accessed on 4 July 2025).
  83. Loia, V., De Maio, C., Fenza, G., Orciuoli, F., & Senatore, S. (2010, July 18–23). An enhanced approach to improve enterprise competency management. IEEE International Conference on Fuzzy Systems (pp. 1–8), Barcelona, Spain. [Google Scholar]
  84. Lustri, D., Miura, I., & Takahashi, S. (2007). Knowledge management model: Practical application for competency development. The Learning Organization, 14(2), 186–202. [Google Scholar] [CrossRef]
  85. Manoharan, K., Dissanayake, P., Pathirana, C., Deegahawature, D., & Silva, R. (2023). A competency-based training guide model for labourers in construction. International Journal of Construction Management, 23(8), 1323–1333. [Google Scholar] [CrossRef]
  86. Mascolo, M. F. (2020). Dynamic skill theory: An integrative model of psychological development. In M. F. Mascolo, & T. Bidell (Eds.), Handbook of integrative psychological development (pp. 91–135). Routledge/Taylor & Francis. [Google Scholar]
  87. Mbarushimana, N., & Kuboja, J. M. (2016). A paradigm shift towards competence-based curriculum: The experience of Rwanda. Saudi Journal of Business and Management Studies, 1(1), 6–17. [Google Scholar]
  88. McBeath, G. (1990). Practical management development: Strategies for management resourcing and development in the 1990s. John Wiley & Sons. [Google Scholar]
  89. McGaghie, W. C., Adler, M., & Salzman, D. H. (2020). Instructional design and delivery for mastery learning. In W. C. McGaghie, J. H. Barsuk, & D. B. Wayne (Eds.), Comprehensive healthcare simulation: Mastery learning in health professions education (pp. 71–88). Springer. [Google Scholar]
  90. McKinsey & Company. (2025). AI in the workplace: Empowering people to unlock AI’s full potential at work. Available online: https://www.mckinsey.com/capabilities/mckinsey-digital/our-insights/superagency-in-the-workplace-empowering-people-to-unlock-ais-full-potential-at-work (accessed on 4 July 2025).
  91. McMullan, M., Endacott, R., Gray, M. A., Jasper, M., Miller, C. M., Scholes, J., & Webb, C. (2003). Portfolios and assessment of competence: A review of the literature. Journal of Advanced Nursing, 41(3), 283–294. [Google Scholar] [CrossRef]
  92. Memon, M. A., Salleh, R., & Baharom, M. N. R. (2016). The link between training satisfaction, work engagement and turnover intention. European Journal of Training and Development, 40(6), 407–429. [Google Scholar] [CrossRef]
  93. Motowidlo, S. J., & Van Scotter, J. R. (1994). Evidence that task performance should be distinguished from contextual performance. Journal of Applied Psychology, 79(4), 475. [Google Scholar] [CrossRef]
  94. Murray, T. S., Owen, E., & McGaw, B. (2005). Learning a living: First results of the adult literacy and life skills survey. Statistics Canada and the Organisation for Economic Co-operation and Development.
  95. National Center for Education Statistics. (n.d.). International adult literacy survey (IALS). U.S. Department of Education. Available online: https://nces.ed.gov/surveys/ials/index.asp (accessed on 4 July 2025).
  96. Neal, J. W., & Neal, Z. P. (2013). Nested or networked? Future directions for ecological systems theory. Social Development, 22(4), 722–737. [Google Scholar] [CrossRef]
  97. Ng, D., Tan, C., & Leung, J. (2024). Empowering student self-regulated learning and science education through ChatGPT: A pioneering pilot study. British Journal of Educational Technology, 55, 1328–1353. [Google Scholar] [CrossRef]
  98. Nikolajevaite, M., & Sabaityte, E. (2016). Relationship between employees’ competencies and job satisfaction: British and Lithuanian employees. Psychology Research, 6(11), 684–692. [Google Scholar] [CrossRef]
  99. Noercahyo, U. S., Maarif, M. S., & Sumertajaya, I. M. (2021). The role of employee engagement on job satisfaction and its effect on organizational performance. Jurnal Aplikasi Manajemen, 19(2), 296–309. [Google Scholar] [CrossRef]
  100. Oberländer, M., Beinicke, A., & Bipp, T. (2020). Digital competencies: A review of the literature and applications in the workplace. Computers & Education, 146, 103752. [Google Scholar] [CrossRef]
  101. Oliver, R., Kersten, H., Vinkka-Puhakka, H., Alpasan, G., Bearn, D., Cema, I., Delap, E., Dummer, P., Goulet, J. P., Gugushe, T., Jeniati, E., Jerolimov, V., Kotsanos, N., Krifka, S., Levy, G., Neway, M., Ogawa, T., Saag, M., Sidlauskas, A., … White, D. (2008). Curriculum structure: Principles and strategy. European Journal of Dental Education, 12, 74–84. [Google Scholar] [CrossRef] [PubMed]
  102. Palan, R. (2007). Competency management—A practitioner’s guide. Specialist Management Resources Sdn Bhd. [Google Scholar]
  103. Pintrich, P. R. (2000). The role of goal orientation in self-regulated learning. In M. Boekaerts, P. Pintrich, & M. Zeidner (Eds.), Handbook of self-regulation (pp. 451–502). Academic Press. [Google Scholar] [CrossRef]
  104. Pintrich, P. R., & De Groot, E. V. (1990). Motivational and self-regulated learning components of classroom academic performance. Journal of Educational Psychology, 82(1), 33–40. [Google Scholar] [CrossRef]
  105. Pratisto, E., & Danoetirta, D. (2025). Design and user analysis of a learning management system: Student competency-based learning. Internet of Things and Artificial Intelligence Journal, 5(1), 136–147. [Google Scholar] [CrossRef]
  106. Prescott-Clements, L., Van Der Vleuten, C. P., Schuwirth, L. W., Hurst, Y., & Rennie, J. S. (2008). Evidence for validity within workplace assessment: The Longitudinal Evaluation of Performance (LEP). Medical Education, 42(5), 488–495. [Google Scholar] [CrossRef]
  107. Psyché, V., Tremblay, D. G., Miladi, F., & Yagoubi, A. (2023). A competency framework for training of AI projects managers in the digital and AI Era. Open Journal of Social Sciences, 11(5), 537–560. [Google Scholar] [CrossRef]
  108. Rasdi, R. M., Idris, F. H., Krauss, S. E., & Omar, M. K. (2024). Exploring artificial intelligence competencies for the future workforce: A systematic literature review using the PRISMA protocol. In Y. G. Ng, D. D. Daruis, & N. W. Abdul Wahat (Eds.), Human factors and ergonomics toward an inclusive and sustainable future. HFEM 2023 (Vol. 46). Springer Series in Design and Innovation. Springer. [Google Scholar] [CrossRef]
  109. Rich, B. L., Lepine, J. A., & Crawford, E. R. (2010). Job engagement: Antecedents and effects on job performance. Academy of Management Journal, 53(3), 617–635. [Google Scholar] [CrossRef]
  110. Riesterer, T., & Shacklette, D. (2025, May 22). Precision skills intelligence: Moving beyond self-assessments in sales performance. Corporate Visions. Available online: https://corporatevisions.com/blog/precision-skills-intelligence/ (accessed on 4 July 2025).
  111. Roberts, D. R., & Davenport, T. O. (2002). Job engagement: Why it’s important and how to improve it. Employment Relations Today, 29(3), 21. [Google Scholar] [CrossRef]
  112. Rogers, E. M. (2003). Diffusion of innovations (5th ed.). Free Press. [Google Scholar]
  113. Saks, A. M., Gruman, J. A., & Zhang, Q. (2022). Organization engagement: A review and comparison to job engagement. Journal of Organizational Effectiveness: People and Performance, 9(1), 20–49. [Google Scholar] [CrossRef]
  114. Salman, M., Ganie, S. A., & Saleem, I. (2020). The concept of competence: A thematic review and discussion. European Journal of Training and Development, 44(6/7), 717–742. [Google Scholar] [CrossRef]
  115. Schaik, P. V., Pearson, R., & Barker, P. (2002). Designing electronic performance support systems to facilitate learning. Innovations in Education and Teaching International, 39(4), 289–306. [Google Scholar] [CrossRef]
  116. Schaufeli, W. B., Bakker, A. B., & Salanova, M. (2006). The measurement of work engagement with a short questionnaire: A cross-national study. Educational and Psychological Measurement, 66(4), 701–716.
  117. Schaufeli, W. B., Salanova, M., González-Romá, V., & Bakker, A. B. (2002). The measurement of engagement and burnout: A two sample confirmatory factor analytic approach. Journal of Happiness Studies, 3, 71–92.
  118. Scott, B. (1982). Competency based learning: A literature review. International Journal of Nursing Studies, 19(3), 119–124.
  119. Sekhar, C., Patwardhan, M., & Vyas, V. (2018). Linking work engagement to job performance through flexible human resource management. Advances in Developing Human Resources, 20(1), 72–87.
  120. Sezer, B. (2021). Developing and investigating an electronic performance support system (EPSS) for academic performance. Australasian Journal of Educational Technology, 37(6), 88–101.
  121. Sitzmann, T., & Weinhardt, J. M. (2018). Training engagement theory: A multilevel perspective on the effectiveness of work-related training. Journal of Management, 44(2), 732–756.
  122. Slavin, R. E. (1987). Mastery learning reconsidered. Review of Educational Research, 57(2), 175–213.
  123. Smith, B., Hernandez, M., & Gordon, J. (2019). Competency-based learning in 2018. Advanced Distributed Learning Initiative (ADL).
  124. Snyman, M., & Van den Berg, G. (2018). The significance of the learner profile in recognition of prior learning. Adult Education Quarterly, 68(1), 24–40.
  125. Stevens, G. W. (2012). A critical review of the science and practice of competency modeling. Human Resource Development Review, 12(1), 86–107.
  126. Swanson, D. B., Norman, G. R., & Linn, R. L. (1995). Performance-based assessment: Lessons from the health professions. Educational Researcher, 24(5), 5–11.
  127. Škrinjarić, B. (2022). Competence-based approaches in organizational and individual context. Humanities and Social Sciences Communications, 9(1), 28.
  128. The SFIA Foundation. (2025). SFIA: The global skills and competency framework for a digital world. Available online: https://sfia-online.org/en (accessed on 4 July 2025).
  129. Tuxworth, E. (2005). Competence based education and training: Background and origins. In J. Burke (Ed.), Competency based education and training (pp. 9–22). Routledge.
  130. Tyas, A. A. W. P., Tippe, S., & Sutanto, S. (2020). How employee competency and self efficacy affect employee work engagement in human resource development agency (BPSDM) ministry of law and human rights Republic of Indonesia. IJHCM (International Journal of Human Capital Management), 4(2), 125–140.
  131. Vakola, M., Eric Soderquist, K., & Prastacos, G. P. (2007). Competency management in support of organisational change. International Journal of Manpower, 28(3/4), 260–275.
  132. Vazirani, N. (2010). Review paper: Competencies and competency model–A brief overview of its development and application. SIES Journal of Management, 7(1), 121–131.
  133. Venkatesh, V., & Davis, F. D. (2000). A theoretical extension of the technology acceptance model: Four longitudinal field studies. Management Science, 46(2), 186–204.
  134. White, R. W. (1959). Motivation reconsidered: The concept of competence. Psychological Review, 66(5), 297.
  135. Wingreen, S. C., & Blanton, J. E. (2007). A social cognitive interpretation of person-organization fitting: The maintenance and development of professional technical competency. Human Resource Management, 46(4), 631–650.
  136. Yang, M., Lowell, V. L., Exter, M., Richardson, J., & Olenchak, F. R. (2025). Transfer of training and learner attitude: A mixed-methods study on learner experience in an authentic learning program. Human Resource Development International, 28(3), 346–370.
  137. Yang, M., Lowell, V. L., Talafha, A. M., & Harbor, J. (2020). Transfer of training, trainee attitudes and best practices in training design: A multiple-case study. TechTrends, 64(2), 280–301.
  138. Yang, M., & Watson, S. L. (2022). Attitudinal influences on transfer of training: A systematic literature review. Performance Improvement Quarterly, 34(4), 327–365.
  139. Yang, R., Liu, L., & Feng, G. (2021). An overview of recent advances in distributed coordination of multi-agent systems. Unmanned Systems, 10, 307–325.
  140. Zarifian, P. (1999). Objectif compétence. Pour une nouvelle logique (p. 229). Editions Liaisons.
  141. Zimmerman, B. J. (1990). Self-regulated learning and academic achievement: An overview. Educational Psychologist, 25(1), 3–17.
  142. Zimmerman, B. J. (2002). Becoming a self-regulated learner: An overview. Theory into Practice, 41(2), 64–70.
Figure 1. Example learner state portal.
Figure 2. General architecture of MCP (Anthropic, 2024).
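In the architecture shown in Figure 2, MCP clients and servers exchange JSON-RPC 2.0 messages; an orchestrating agent invokes a server-side tool with a `tools/call` request. The sketch below is illustrative only: the tool name `get_learner_state` and its arguments are hypothetical, not defined in the paper, and in practice available tools are discovered at runtime via the protocol's `tools/list` method.

```python
import json

# Minimal JSON-RPC 2.0 message an MCP client might send to invoke a server
# tool. The tool name and arguments below are hypothetical examples of how
# an agent could query an individual's multi-dimensional learner state.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "get_learner_state",  # hypothetical MCP server tool
        "arguments": {
            "employee_id": "E-1024",
            "dimensions": ["mastery", "confidence", "decay"],
        },
    },
}

print(json.dumps(request, indent=2))
```

The server replies with a result (or error) carrying the same `id`, which the orchestrating agent can then route into downstream decisions such as diagnostic or mentoring actions.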
Figure 3. Example scenario of AI agents and MCP integration.
Table 1. Main phases in developing the learner-state ecosystem.

Phase 1: Paradigm shift and data foundation
Objective: Initiate the organizational L&D paradigm shift and build a unified, secure data foundation capturing diverse learning and performance events from across the organization.
Key activities: Strategic alignment; data source identification; LRS establishment and xAPI generation agent; initial MCP layer design; multi-dimensional learner state design; data security and ethical use framework.
Example tools: PostgreSQL, Docker, Python, REST APIs, LRS, xAPI, MCP.

Phase 2: Infrastructure for initial agents and insights
Objective: Enable precision skill mapping and build and integrate the foundational MCP and AI infrastructure.
Key activities: Develop and train the initial set of AI agents (xAPI generation, competency mapping, diagnostic, ISD, mentoring, analytics); core MCP layer development and agent integration.
Example tools: TensorFlow, Hugging Face, Kubernetes, FastAPI, MCP.

Phase 3: Performance integration and holistic development
Objective: Integrate in-workflow performance, enable a continuous and authentic understanding of individual performance and skill application, and develop a comprehensive multi-dimensional learner state.
Key activities: Instrument key operational software and hardware; advanced xAPI generation; develop advanced MCP capabilities for complex data flows; additional AI agents (performance support, team dynamics, career pathing).
Example tools: Existing enterprise systems, IoT sensors, MongoDB, Redis, MCP, Apache Kafka.

Phase 4: Predictive intelligence and continuous evolution
Objective: Leverage advanced AI to forecast future talent needs, proactively manage employee development, and continuously refine the entire learning experience.
Key activities: Develop advanced AI models to personalize encouragement and engagement strategies (motivation agent); MCP optimization and scalability; implement advanced AI algorithms for the ISD agent; human oversight and refinement.
Example tools: Edge AI, microservices, enterprise systems and platforms, MCP.
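Phase 1's LRS establishment depends on emitting well-formed xAPI statements from workplace events. A minimal sketch in Python follows; the actor, verb display, and activity below are hypothetical examples (only the ADL `completed` verb IRI is a standard identifier), and in production the resulting statement would be POSTed to the LRS's `statements` resource with the `X-Experience-API-Version` header.

```python
import json
import uuid
from datetime import datetime, timezone

def build_xapi_statement(actor_email: str, actor_name: str,
                         verb_id: str, verb_display: str,
                         activity_id: str, activity_name: str) -> dict:
    """Assemble a minimal xAPI statement from a single workplace event."""
    return {
        "id": str(uuid.uuid4()),
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "actor": {
            "objectType": "Agent",
            "name": actor_name,
            "mbox": f"mailto:{actor_email}",
        },
        "verb": {
            "id": verb_id,
            "display": {"en-US": verb_display},
        },
        "object": {
            "objectType": "Activity",
            "id": activity_id,
            "definition": {"name": {"en-US": activity_name}},
        },
    }

# Hypothetical event: an employee completes an in-workflow task.
statement = build_xapi_statement(
    "jdoe@example.com", "J. Doe",
    "http://adlnet.gov/expapi/verbs/completed", "completed",
    "https://example.com/activities/incident-triage", "Incident triage task",
)
print(json.dumps(statement, indent=2))
```

An xAPI generation agent would produce such statements automatically from instrumented tools, turning daily work activities into the granular LRS records the learner-state agents consume.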
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

MDPI and ACS Style

Yang, M.; Lovett, N.; Li, B.; Hou, Z. Towards Dynamic Learner State: Orchestrating AI Agents and Workplace Performance via the Model Context Protocol. Educ. Sci. 2025, 15, 1004. https://doi.org/10.3390/educsci15081004

