Article

From Transformative Agency to AI Literacy: Profiling Slovenian Technical High School Students Through the Five Big Ideas Lens

Faculty of Education, University of Ljubljana, Kardeljeva ploscad 16, SI-1000 Ljubljana, Slovenia
*
Author to whom correspondence should be addressed.
Systems 2025, 13(7), 562; https://doi.org/10.3390/systems13070562
Submission received: 1 June 2025 / Revised: 1 July 2025 / Accepted: 7 July 2025 / Published: 9 July 2025

Abstract

The rapid spread of artificial intelligence (AI) in education means that students need to master both AI literacy and personal agency. This study situates a sample of 425 Slovenian secondary technical students within a three-tier framework that maps psychological empowerment onto AI literacy outcomes within a cultural–historical activity system. The agency competence assessments yielded four profiles of student agency, ranging from fully empowered to largely disempowered. Cluster membership explained significant additional variance in AI literacy scores, supporting the additive empowerment model in an AI-rich vocational education and training context. The predictive modeling revealed that self-efficacy, mastery-oriented motivations, and metacognitive self-regulation each made unique, though small, contributions to AI literacy, whereas an unexpectedly negative relationship was identified for internal locus of control and for behavioral self-regulation focused narrowly on routines, with no significant effect observed for grit-like perseverance. These findings underscore the importance of fostering reflective, mastery-based, and self-evaluative learning dispositions over inflexible or solely routine-driven strategies in the development of AI literacy. Addressing these nuanced determinants may also be vital in narrowing the AI literacy gaps observed between diverse disciplinary cohorts, as supported by recent multi-dimensional literacy frameworks and disciplinary pathway analyses. Embedding autonomy-supportive, mastery-oriented, student-centered projects and explicit metacognitive training into AI curricula could shift control inward and benefit students with low skills, helping to forge an agency-driven pathway to higher levels of AI literacy among high school students. The most striking and unexpected finding of this study is that students with a strong sense of competence, manifested as high self-efficacy, can achieve foundational AI literacy levels equivalent to those of students possessing broader, more holistic agentic profiles, suggesting that competence alone may be sufficient for acquiring essential AI knowledge. This challenges prevailing models that emphasize a multidimensional approach to agency and has significant implications for designing targeted interventions and curricula to rapidly build AI literacy in diverse learner populations.

1. Introduction

Artificial intelligence (AI) as a transformative force in education [1] has rapidly permeated both education and everyday life. No longer confined to the computer science lab, AI tools now assist students in tasks ranging from writing to problem-solving, and appear in technologies that learners use at home, in school, and in the workplace [2,3]. Recent research even suggests that we now learn, live, and work with AI as an integrated part of our daily routines [4]. In vocational education and training (VET), this integration is especially salient: students in technical and service-oriented tracks increasingly encounter AI-driven tools in their coursework and future professions [5]. The increasing entwinement of educational and practical uses of AI demonstrates that, as Darvishi et al. identified, artificial intelligence shapes not only how students learn but also how they live and prepare for the future world of work [4]. In this sense, AI acts as a pervasive influence across learning, working, and daily practices.

AI refers to the capacity of machines or computer programs to perform tasks that typically require human intelligence, such as problem-solving, decision-making, visual perception, speech recognition, and language understanding [6]. In the context of high school education, AI is generally introduced as systems or technologies capable of simulating aspects of human cognition, with applications ranging from adaptive learning environments and intelligent tutoring to ethical reflection on AI’s societal impacts [7,8]. Thus, AI should be understood less as a single technology and more as a family of data-driven, adaptive systems that sense, learn from, and act on information [9]. Recent curricular frameworks for secondary schools prioritize fostering AI literacy, enabling students not only to grasp basic concepts—such as how algorithms learn from data—but also to critically engage with issues like bias, fairness, and the transformative social and economic effects of AI technologies [7,8,10]. As a result, students are encouraged to understand AI as more than just a technological tool, recognizing its pervasive influence in modern life and the importance of responsible and ethical use, an approach increasingly documented in emerging educational research [7,11].

However, alongside the enthusiasm for AI’s potential in education, researchers have begun to identify significant problems and unknowns. AI is now widely used to automate and scaffold learning [3,4], yet the effects of AI assistance on students’ cognitive learning processes remain poorly understood. For example, while AI-driven platforms can scaffold student learning, it is unclear whether students truly learn from these supports or simply rely on them [4]. In other words, does AI usage foster deeper understanding and self-regulated learning, or might it stifle the development of those very skills? Such questions point to a gap in articulating how AI influences critical thinking, problem-solving, and metacognition in learners. Moreover, the ethical and social implications of ubiquitous AI create new responsibilities for education [12]. AI systems can introduce issues of bias, transparency, and privacy, so learners must develop an informed awareness of these dimensions [13]. Researchers argue that students need to learn not only technical AI skills but also how to use AI wisely, understand its ethical practice, reduce overreliance on AI, and manage data privacy risks [2,13].
As AI’s influence in society expands inexorably, the urgency of developing a literate user base—learners who can understand, evaluate, and responsibly use AI—has never been greater [2]. An AI-literate student is better equipped to navigate the questions of fairness, accountability, critical thinking, and social impact that accompany AI-rich environments [14,15]. Moreover, AI literacy can be seen as a component of mainstream literacy for a future in which AI use dominates everyday and working life [16]; through its key components—technology, impact, ethics, collaboration, and self-reflection [3]—it can reshape the cognitive learning process [15].
Despite growing emphasis on AI literacy education [1,2,9,16,17,18,19], we lack insight into how students’ own agency might contribute to developing that literacy, particularly in VET contexts. The existing literature has begun to define what AI literacy entails and why it is important [2], but relatively little is known about how learners become AI-literate in practice and who drives this development. Notably, student agency—the capacity of students to act purposefully and shape their learning [20]—is seldom examined in connection with AI literacy. In vocational and technical high schools, where curricula are often oriented toward practical skills and workplace readiness, students’ proactive engagement could be a key factor in how they learn about AI. Moreover, the interdisciplinary nature of AI allows AI education to cross disciplinary boundaries and adopt a global, practical, student-centered, and active approach co-designed by teachers, pedagogues, and AI experts, as argued by Casal-Otero et al. [9]. Yet, no comprehensive studies to date have clarified how agency (e.g., students’ initiative, self-efficacy, and ownership of learning) influences their understanding of AI concepts and tools; thus, an effective educational approach is needed to ensure an equitable, affective, and responsible learning experience [13]. In sum, we still know little about how student agency shapes the development of AI literacy in vocational education. Clarifying that relationship is essential for designing interventions that help learners not only use AI but also question, learn with, and ultimately thrive alongside it [2,12,21].
Against this backdrop, Slovenia’s two-year nationwide project Generativna umetna inteligenca v izobraževanju (generative AI in education)—funded by the Ministry of Education and the EU Recovery and Resilience Plan—has provided the country’s first systematic portrait of generative AI use from primary to tertiary schooling [22]. Research shows that barely one quarter of secondary students feel they know generative AI well and only about half actually use it for learning, while school-based training remains rare (just 13% have received any) [5]. Teachers and leaders recognize generative AI’s promise for personalized learning, yet underline persistent pedagogical and ethical knowledge gaps that must be closed before the technology can transform classrooms [5].
Building on these findings, our study moves beyond prevalence data to examine who Slovenia’s technical high school learners are. By profiling students through the mainstream Five Big Ideas in AI lens as a curriculum framework—described in detail in Section 1.1.2—we uncover distinct clusters of transformative agency, compare AI literacy attainment across the human service and industrial engineering tracks, and test how both profile membership and individual agency constructs predict AI literacy scores. Linking sociocultural notions of agency with contemporary AI literacy theory, these insights might illuminate where differentiated scaffolding and targeted professional development are most needed, thus advancing both scholarly understanding and the practical roll-out of AI within Slovenian secondary education and more broadly.
The next section—“Psychological Empowerment and AI Literacy in Education”—is motivated by our commitment to design a truly multidisciplinary, holistic AI literacy framework that unlocks AI’s full educational potential by nurturing affective learning (motivation, self-efficacy, and autonomy), behavioral learning (collaboration, commitment, task selection, and self-regulation), cognitive learning (knowing, understanding, applying, evaluating, creating, and metacognitive self-regulation), and ethical learning (awareness of fairness, bias, transparency, and social impact), as proposed by [1,23,24].

1.1. Psychological Empowerment and AI Literacy in Education

1.1.1. Linking Empowerment Constructs to AI Literacy Outcomes

Psychological empowerment—as defined by frameworks such as Spreitzer’s model—encompasses key personal factors that influence learning. Spreitzer [25] built on Thomas and Velthouse’s theory [26] to define empowerment via four cognitions: meaningfulness, competence (self-efficacy), choice (self-determination/autonomy), and impact [27]. In other words, learners feel empowered when they find meaning in the task, believe in their competence, have autonomy (internal locus of control), and see their actions making a difference [27]. Recent education research applies similar frameworks in computing/AI contexts. For example, Kong et al. [28] conceptualized programming empowerment with four dimensions—programming self-efficacy, creative self-efficacy, meaningfulness, and impact—to study student motivation in coding courses. Likewise, an AI empowerment model defines empowerment for AI learning tasks as when students (1) see the effect of using AI (impact), (2) feel confident using AI (self-efficacy), (3) believe they can use AI creatively (creative efficacy), and (4) find AI tasks purposeful (meaning) [27]. These constructs map closely to classic empowerment theory and suggest that boosting students’ agency and confidence with technology is key to AI literacy [4,29,30].
Self-efficacy (competence) is the belief in one’s capability to learn or perform tasks [31]. Strong self-efficacy is consistently associated with better learning engagement and achievement [31]. In computing education, for instance, self-efficacy has been found to be a key predictor of academic success [32]. Students with higher confidence in their AI or programming skills tackle challenges more persistently and achieve higher outcomes [32]. Building learners’ AI self-efficacy (their confidence in understanding and using AI) is therefore crucial for positive AI learning results [33]. Studies suggest that designing curricula to bolster students’ confidence—e.g., through scaffolded AI activities—can improve both self-efficacy and learning outcomes in AI courses [34]. This aligns with Spreitzer’s competence dimension: when students feel capable, they are intrinsically more motivated to learn [3,35].
Locus of control (autonomy) refers to whether a person attributes outcomes to internal efforts or external forces. An internal locus of control (e.g., the belief that “I can influence my success through effort”) strongly supports learning [36]. Research has well documented that students with an internal locus of control tend to show higher motivation, persistence, and academic achievement [37]. They take ownership of challenges, viewing difficulties as opportunities to grow, and persevere because they expect effort to pay off [37]. In contrast, an external locus (i.e., attributing outcomes to luck or to others’ actions) can breed helplessness and disengagement [30,37]. Thus, in AI literacy contexts, cultivating an internal locus (e.g., giving students control in AI projects) can empower them [4]. An internal locus aligns with empowerment’s choice/autonomy element—students who feel control over their AI learning path are more likely to stay motivated and overcome obstacles [36,38].
Perseverance of effort—sustained effort and interest over time—is critical for mastering difficult concepts in AI and computing [39,40]. AI literacy often involves challenging problem-solving (debugging models, learning complex concepts) that requires grit [36]. Research on grit (perseverance and passion for long-term goals) in computer science courses shows that while overall grit scores may only weakly predict grades, the perseverance-of-effort component does correlate with success [41]. In a programming course, for example, students’ perseverance in the face of difficulties had a positive (though modest) correlation with final course marks [41]. This suggests that students who persist despite setbacks tend to achieve more in computing [39]. Perseverance is closely tied to self-regulation and a growth mindset—traits that empowered learners exhibit. By fostering perseverance (e.g., by encouraging a learn-from-failure culture in AI projects), educators support better AI learning outcomes. As Sigurdson and Petersen stated, perseverance in the face of difficulty contributes to success in a programming course [41], underscoring its importance for any tech learning context. Moreover, resilient learners remain engaged and maintain sustained effort despite academic difficulties [39].
Future-oriented motivation—such as valuing future goals or seeing the future relevance of learning—can enhance students’ drive to learn emerging technologies such as AI. When learners understand how AI literacy connects to their future education, careers, and societal needs, it imbues the learning with meaning (echoing empowerment’s meaningfulness dimension) [42]. In fact, a key rationale for K-12 AI education is to prepare students for the AI-driven future. AI literacy is seen as essential for future workplaces and informed citizenship [43]. Students who recognize that mastering AI concepts will open future opportunities tend to be more engaged [16]. This future orientation often aligns with the utility value in expectancy-value theory: perceiving AI knowledge as useful for one’s goals increases effort and persistence [42]. In the service-robot context, a future orientation markedly heightens perceived utilitarian and hedonic benefits—namely, the anticipated enjoyment, experiential pleasure, and novelty derived from interacting with the robot—which in turn boosts perceived usefulness and ease of use, so a future-time perspective can promote AI acceptance when benefit appraisals are salient [42]. While direct studies on future orientation in AI education are scarce, related research in Science, Technology, Engineering and Mathematics (STEM) shows that connecting coursework to students’ future goals boosts motivation and perseverance. By situating the Five Big Ideas in AI within authentic societal challenges and aligning them with future professional trajectories, students gain clearer goals and enhanced agency, which in turn strengthens their capacity for self-regulated learning and intrinsic motivation [27,44,45,46].
Self-regulated learning (SRL) is the ability to plan, monitor, and adjust one’s own learning strategies [47]. It is a cornerstone of learner empowerment because it enables autonomy and effective effort management [48]. In technology and AI learning, which often involves open-ended exploration and problem-solving, strong self-regulation skills are associated with better performance [4]. Education research finds that SRL is a key predictor of academic success, on par with motivation and self-efficacy, especially in learning to program [32]. For example, computing students who actively set goals, track their understanding, and seek help when needed generally outperform those who do not self-regulate. Effective SRL leads to persistence (rather than frustration) when facing a tough bug in code or a complex AI concept [32]. Empowerment frameworks view this as part of self-determination, wherein empowered learners regulate their own learning process [49]. In practice, teaching students metacognitive strategies and giving them agency in AI projects (choosing topics, pacing, etc.) can enhance their self-regulation and, in turn, improve learning outcomes [32,49]. In short, students who know how to learn are better equipped to master AI content [48].
Learning (or mastery) goal orientation means striving to develop competence and understanding, rather than merely to obtain good grades or outperform others [50]. This mindset has proven benefits in technology education. Alt et al. [51] showed that mastery-approach learners rated digital concept mapping (a knowledge-representation technique) as especially effective for self-regulating their learning during a problem-based assignment [31]. Mastery-oriented learners tend to show greater perseverance, deeper interest, and adaptive help-seeking, all of which support achievement in challenging subjects [52]. For instance, studies indicate that students who approach programming tasks with a focus on learning (not just on avoiding mistakes or looking smart) engage more deeply and persist longer [19,40,42,53]. Mastery orientation is linked to a growth mindset and resilience: mistakes are seen as learning opportunities rather than failures. In a recent computing education study, mastery orientation has been related to perseverance, interest, and achievement [52]. This orientation aligns with empowerment by emphasizing personal growth (intrinsic motivation) and reducing fear of failure [54]. In the AI literacy context, encouraging a learning goal orientation—e.g., rewarding effort, exploration, and improvement over time—can lead students to delve into AI’s “Big Ideas” with curiosity and persistence. They are more likely to experiment with an AI model or troubleshoot an algorithm until they understand it, thereby achieving deeper learning. Conversely, a performance orientation may cause students to shy away from complex AI topics for fear of looking incapable [55]. Thus, a mastery focus supports the kind of sustained engagement needed to grasp AI’s core ideas [56]. Moreover, motivational and learning trajectories among students are best understood as dynamic and contextually influenced rather than static or homogenous [57]. Research emphasizes that goal orientations, such as mastery or performance goals, are shaped not only by individual predispositions but also by the broader social and educational environments in which students are situated, including perceived parental and teacher goals as well as classroom structures [57,58,59]. These contextual influences mean that students’ motivational patterns can shift over time, with subsequent effects on their use of learning strategies and academic performance [57]. Notably, fostering a mastery or learning goal orientation—particularly when accompanied by strong self-efficacy—consistently predicts the adoption of deeper processing strategies and more adaptive academic behaviors across diverse groups and educational contexts [57,60]. Thus, constructing learning environments and support systems that prioritize mastery goals and bolster students’ self-efficacy is critical to promoting meaningful and sustained engagement [56,59].

1.1.2. Links to the Five Big Ideas in AI Education

To ground AI literacy education, Touretzky et al. [61] proposed the Five Big Ideas in AI as a framework for K-12 learning (AI4K12). These Five Big Ideas—Perception, Representation and Reasoning, Learning, Natural Interaction, and Societal Impact—cover the fundamental concepts and issues every student should understand about AI [61].
The Five Big Ideas are as follows: (1) Perception—how computers perceive the world (e.g., via sensors or recognizing patterns in data); (2) Representation and Reasoning—how AI systems represent knowledge and make decisions or inferences; (3) Learning—how machines can learn from data and experience (the core of machine learning); (4) Natural Interaction—how AI interacts with humans through language, vision, robotics, etc., in natural ways; and (5) Societal Impact—the ethical, social, and societal implications of AI (fairness, privacy, the future of work, etc.) [62].
These Big Ideas have been widely adopted as an organizing structure for AI curricula [61]. They ensure that AI literacy encompasses not just technical skills but also conceptual understanding and ethical awareness. Both theoretical and empirical sources have validated the importance of the Five Big Ideas framework in education.
The AI4K12 initiative’s guidelines are explicitly organized around the Five Big Ideas [62]. Touretzky et al. [62] provide a detailed example of how to implement this framework: they outline grade-appropriate learning progressions for each Big Idea and show how students can progressively master AI concepts. For instance, their curriculum design for the “Learning” Big Idea (Big Idea #3) specifies what concepts (such as training models or recognizing patterns) students should grasp in elementary vs. middle vs. high school, building a deepening understanding of machine learning over time [62]. The guidelines also emphasize making connections across Big Ideas—e.g., a lesson on machine learning might tie in Perception (data input) and Societal Impact (ethical use of models) [62]. Such frameworks draw on learning sciences and align with broader standards (Computer Science, math, science standards) to ensure AI literacy is integrated in a developmentally appropriate way [62]. In short, Touretzky and colleagues’ work has envisioned AI for K-12 by defining these Big Ideas [62], and this vision is now shaping standards and curricula worldwide. Other scholars have similarly argued for multifaceted AI literacy frameworks that cover technical knowledge and skills, and social–ethical understanding [16], reinforcing the idea that a complete AI education must span core concepts like the Five Big Ideas.
Early implementations of AI curricula based on the Five Big Ideas have shown promising outcomes. Several studies report that, when students are taught AI concepts with an empowering approach based on the Five Big Ideas, their engagement and learning outcomes improve. For example, a recent study by Kong and Yang [27] evaluated an AI literacy program for secondary and university students that focused on the students’ conceptual understanding of machine learning (aligning with the Learning Big Idea) [27]. They found that the program significantly increased students’ AI empowerment—students became more confident in using AI, were more aware of AI’s impact, and found more meaning in AI activities [27]. Notably, the intervention even narrowed the gender gap in students’ sense of empowerment by boosting female students’ confidence in AI use [27]. This empirical evidence suggests that teaching AI through core concepts and real-world applications (rather than just coding exercises) can empower a broad range of learners. Other studies in K-12 have similarly integrated AI Big Ideas and reported positive effects on student interest, creative self-efficacy, and ethical awareness (e.g., students designing socially impactful AI projects) [33]. Moreover, research reviews note that AI-based learning tools can enhance student engagement and personalization when grounded in sound pedagogical frameworks [33], underscoring the need to tie AI activities to big-picture ideas and student agency rather than using AI as a gimmick.
Importantly, empowerment through its constructs (self-efficacy, internal locus, perseverance, etc.) supports learning each of the Five Big Ideas by enabling deeper engagement [16,32,63]. Students with high self-efficacy and perseverance are more willing to tackle complex topics such as how sensors or vision algorithms work [16,39,63,64]. They feel confident exploring AI perception tools (e.g., training a simple image classifier) and persist through troubleshooting if their model initially fails. An internal locus of control makes them more likely to iterate on improving the AI’s perceptual accuracy, rather than saying “AI is too hard” and giving up. Self-regulation enables students to systematically probe, evaluate, and iteratively refine an AI system’s perception—for example, by scrutinizing anomalous outputs, adjusting the training corpus, and re-running validation tests—rather than passively accepting its first results [3,39,64].
For Representation and Reasoning, a mastery learning goal orientation leads students to delve into reasoning algorithms or knowledge-representation schemes (digital concept mapping) with curiosity, rather than memorizing facts [50]. They will ask, “why does the AI make this decision?” and seek to understand the underlying logic [42]. If they have an empowered mindset (feeling their effort matters), they approach activities such as building a decision tree or an expert system as meaningful challenges [50]. They are more likely to engage in problem-solving or adaptive help-seeking [52]—e.g., using debugging tools or asking for hints—to grasp reasoning processes, which improves their mastery of this Big Idea.
The Big Idea of Learning especially benefits from perseverance and self-regulation [62]. Training machine learning models often requires tuning parameters and interpreting ambiguous results [42,55]. Students with grit will continue experimenting with their models (try another dataset, tweak settings) when outcomes are not ideal, thus learning more from the process [39]. High self-efficacy in programming, math, or technology education helps here: students who believe “I can figure this out” engage more deeply in model-training tasks [16,32]. A future orientation also plays a role: understanding that machine learning is a cornerstone of future technology can motivate students to put in the effort to really comprehend it. In the study by Touretzky et al. [62], the machine learning lessons designed around Big Idea No. 3 were informed by such best practices from learning sciences [62], helping students see machine learning not as a black box, but as a skill they can master step by step [42].
For Natural Interaction, topics such as natural language processing or human–robot interaction can be intimidating [65]. An internal locus of control and self-efficacy make a big difference in whether students actively engage [38,40]. If students feel empowered, they will approach tasks such as training a chatbot or programming a robot to respond to voice commands with enthusiasm and creativity [65]. They believe their actions have impact, so they might customize an AI assistant to solve a real problem of their own interest—aligning with personal meaning [40,57]. Studies on AI in education note that giving students autonomy in such projects (letting them choose a problem for their chatbot to solve, for example) increases their sense of ownership and innovative self-efficacy [27]. This leads to higher-quality learning outcomes (e.g., a working conversational agent) and greater confidence in tackling future AI interactions [3,38].
The Big Idea of Societal Impact requires reflection, ethical reasoning, and connecting AI to real-world issues [62]. A future-oriented, mastery mindset helps students engage thoughtfully with questions of AI’s impact on society [16]. Rather than dismissing ethics as abstract, empowered learners see it as meaningful to their lives and futures. For instance, students with a strong internal locus of control feel personally responsible for using AI for good; they might be more invested in discussions on bias or AI for social good, knowing their actions can make a difference [2,9,18,38]. Perseverance and self-regulation also matter: grappling with complex ethical case studies or policy questions can be challenging, but students who persist and self-direct their research (perhaps exploring an AI ethics issue for a project) end up with a much deeper understanding [39]. Research has highlighted that AI literacy should include attitudinal and moral dimensions alongside technical skills [66]. Psychological empowerment contributes here by fostering a critical mindset and a sense of responsibility. In fact, one AI literacy framework explicitly includes a sense of empowerment to use AI responsibly and a critical mindset for evaluating AI’s influence on society [66]. Thus, empowerment-oriented education produces students who not only know AI’s Big Ideas technically but also feel capable of ethically shaping AI’s impact on the world [67]. Table 1 presents a concise synthesis of empirical evidence mapped to the Five Big Ideas in AI framework, detailing typical learning activities, focal concepts, and associated learner outcomes.
With regard to the aforementioned, a growing body of peer-reviewed literature supports the idea that psychological empowerment constructs—self-efficacy, internal locus of control, perseverance, future orientation, self-regulation, and mastery goal orientation—are positively linked to AI literacy and technology learning outcomes. These factors echo components of Spreitzer’s empowerment model [25], indicating that when learners feel competent, autonomous, purposeful, and able to impact their work, they learn emerging technologies more effectively. Both theoretical frameworks and empirical studies in AI education reinforce this connection. The Five Big Ideas in AI [62] provide a comprehensive, concept-driven roadmap for AI literacy [61], and empowering students along these dimensions has been shown to improve their learning experiences. Curricula that integrate the Five Big Ideas with strategies to boost student empowerment (confidence building, autonomy support, relevance to future goals, etc.) have yielded greater student engagement, learning gains, and confidence with AI [27,33]. By validating the AI4K12 framework in classrooms and research settings [62], scholars have demonstrated that an emphasis on core AI concepts and learner empowerment can demystify AI for young learners and prepare them to be critical, capable participants in an AI-driven society. The convergence of empowerment psychology and AI literacy education is a promising avenue: it suggests that, to cultivate the next generation’s AI fluency, we must not only teach the Big Ideas of AI but also nurture the big attitudes and skills (resilience, agency, and curiosity) that enable lifelong learning in a rapidly evolving technological landscape [56].
In response to these needs, the present study investigates AI literacy and student agency among Slovenian vocational high school students. We focus on two distinct technical education tracks—the human services track and the industrial engineering track—to explore how students in different vocational domains develop AI literacy competencies. Of special interest is students’ transformative agency, referring to their ability to enact change in their learning environment and adapt to new challenges. By examining patterns of transformative agency through the lens of cultural–historical activity theory (CHAT) and their relationship to AI literacy, we aim to shed light on how empowering students can support AI-related competencies.
Firstly, we draw on third- and fourth-generation CHAT [68,69], building on ideas and instruments developed by the preceding generations of CHAT, to understand how students engage with AI learning in multi-voiced institutional settings. Third-generation CHAT highlights that learning occurs within multiple interacting activity systems (e.g., the student, the classroom, the school, and workplace communities) that share partially overlapping goals [68,70,71]. Building on this, the emerging fourth-generation CHAT extends the analysis to networks of activity systems, emphasizing multiple perspectives, voices, and boundary-crossing interactions in expansive learning environments [72,73]. This perspective allows us to consider the school as a dynamic activity system where students, teachers, tools (such as AI), and community influences all interplay [69,74]. Through the CHAT lens, student agency can be seen in how learners negotiate roles, rules, and tools (such as AI) within the broader activity system to transform their learning conditions [67,69,75,76]. When motives align (e.g., highly agentic learners and an AI-rich curriculum), AI tools become shared objects that drive expansive learning [67,73]; when motives clash, contradictions surface and signal where redesign is needed [29]. Gibson et al. [29] position CHAT at the macro level as a mediator as well as an initiator of cultural shifts. CHAT makes visible the sociocultural forces that shape (and are reshaped by) AI tools, clarifies how individual and team agency scale into cultural change, and offers a diagnostic language—contradictions, boundary objects, and expansive learning—for guiding both empirical research and the practical design of AI-enhanced educational systems [29]. Moreover, researchers can model multi-voiced activity systems to predict where AI will amplify learning and where it may generate tension or inequity [29].
Next, to operationalize student agency, we turn to psychological empowerment theory, which conceptualizes agency as a set of measurable self-perceptions. In particular, we focus on students’ self-efficacy, sense of autonomy, and perceived impact in their learning process. These constructs are adapted from empowerment research that identifies four dimensions of individual empowerment: meaning, competence (self-efficacy), self-determination (autonomy), and impact [77]. By surveying students’ confidence in handling AI tasks (self-efficacy), their felt ownership over learning with AI (autonomy), and their belief that they can effect change or achieve outcomes using AI (impact), we quantify transformative agency at the individual level. Psychological empowerment theory thus provides the micro-level lens for examining how a student’s mindset and attitudes enable them to take initiative with AI tools and content.
Finally, to define and assess AI literacy, we ground our approach in the Five Big Ideas in the AI curriculum framework [62]. This framework, originally developed for K-12 AI education, outlines five core concept areas that together provide a comprehensive view of AI: Perception, Representation and Reasoning, Learning, Natural Interaction, and Societal Impact [62]. In essence, the Five Big Ideas cover how AI systems sense the world (e.g., via sensors and data perception), how they represent knowledge and make decisions, how they learn from data, how humans interact naturally with AI (e.g., through speech or other interfaces), and how AI affects society and ethics. By using this curriculum-grounded model, we ensure that our measure of AI literacy reflects both technical understanding and awareness of AI’s broader implications. The Five Big Ideas framework provides clear benchmarks for what it means to be AI-literate—from grasping machine learning concepts to recognizing ethical issues—and thus serves as the domain-specific lens of our study [62].

1.2. Aim and Research Questions of the Current Study

The rapid diffusion of AI is recasting what it means to be technologically literate, yet research on AI literacy in VET remains scarce. CHAT offers a systems lens for analyzing learning, but most CHAT-based AI studies treat students’ agency as a contextual backdrop rather than a dynamic driver of outcomes [70,76,78]. Conversely, psychological empowerment research foregrounds individual agency but rarely links it to the concrete content students are expected to master. Bringing these strands together is timely for VET, where learners must not only use AI tools but also understand the ideas that govern them [4,15], including concerns about accuracy, cognitive disengagement, and ethical implications [13].
The present study uses a combined person- and variable-centered approach to characterize and explain how Slovenian secondary technical students’ transformative agency relates to their AI literacy. Thus, this study also aims to uncover distinct transformative agency profiles among Slovenian technical students, compare their AI literacy attainment across technical tracks, and determine—both at the profile and dimension levels—how agency predicts AI literacy outcomes when demographic (sex) and curricular (track, study year) factors are controlled, thereby illuminating learner variation through the CHAT lens. To guide this study, we propose the following research questions (RQs):
  • RQ 1: What cluster profiles of transformative agency can be identified among Slovenian technical high school students?
  • RQ 2: To what extent do overall AI literacy scores differ between students in the human service and industrial engineering tracks?
  • RQ 3: How does membership in a given transformative agency profile predict students’ overall AI literacy scores?
  • RQ 4: To what extent do the individual constructs of transformative agency predict students’ overall AI literacy scores, controlling for program track, study year, and sex?
To investigate these RQs, we adopt a robust theoretical framework that integrates three levels of analysis. This multilevel triple-theoretical integrated framework enables us to connect the sociocultural context of learning with individual psychological factors and the specific knowledge and skill domain of AI.
This study, therefore, maps empowerment-based student agency (micro level) onto AI literacy outcomes (meso level) inside a CHAT activity system (macro level), delivering a three-tier theoretical integration [29] (see Table 2). By weaving empowerment theory [25] into CHAT [69] and benchmarking outcomes against the AI4K12 Five Big Ideas [62], this study can reposition student transformative agency from a background trait to a causal mechanism that channels how VET learners appropriate AI tools and concepts—thus advancing three theoretical streams with a single empirical dataset.
By integrating these three frameworks, our study is uniquely positioned to explore how context, agency, and content knowledge interact. We aim to illuminate how Slovenian technical students exercise agency in learning about AI, and how that agency relates to their emerging literacy in AI. The findings can inform educators and policymakers on how to foster both AI literacy and empowered learning in the VET sector. A multilevel triple-theoretical framework that embraces a pathway from transformative student agency to AI literacy is shown in Figure 1.
A holistic pathway to AI literacy begins with student agency dispositions—self-efficacy (SE), perseverance of interest (PI), perseverance of effort (PE), mastery learning-goal orientation (MLGO), locus of control (LC), future orientation (FO), self-regulation (SR), and metacognitive self-regulation (MSR)—which supply learners with the confidence, persistence, and strategic oversight to engage challenging ICT-supported and AI-driven tasks. These dispositions activate the four empowerment cognitions of meaning, competence, self-determination, and impact, converting raw motivation into a felt capacity to shape outcomes. Empowered learners are then primed to tackle the Five Big Ideas in AI: Perception (how machines sense the world), Representation and Reasoning (how they model and infer), Learning (how they improve from data), Natural Interaction (how they communicate with humans), and Societal Impact (how they affect communities).

Two focal concepts illustrate this pathway. The first is a smart classroom energy optimizer: students build and deploy a machine-learning model that directly lowers their school’s energy bill and carbon footprint by predicting occupancy-driven heating, ventilation and air conditioning (HVAC) demand and adjusting thermostats and lighting in real time. Seeing meter readings fall and administrators adopt their recommendations provides learners with concrete evidence that their technical work changes organizational outcomes, fulfilling Spreitzer’s impact dimension [25]. The second focal concept draws on playlists, sports, or lunch data. This engages Spreitzer’s [25] meaning cognition by giving the modeling task immediate personal and social value, and it operationalizes Big Idea 3 (Learning) by supplying rich, labeled examples through which students build, test, and refine machine-learning models. The two frameworks converge: personally significant data (empowerment through meaning) becomes the substrate for understanding how computers learn (AI literacy through Big Idea 3). When a machine-learning lesson is built around students’ own playlists, sports statistics, or cafeteria-lunch logs, the purpose of the task is automatically aligned with what they already care about (music identity, team performance, daily wellbeing).

Mastery of these conceptual domains, in turn, enables performance across the six AI-literacy practices: understanding core concepts, use of AI systems, evaluation of their reliability, collaboration within interdisciplinary teams, ethical reflection and governance, and ultimately autonomy in designing or critiquing intelligent technologies. Thus, agency dispositions ignite empowerment cognitions, which power engagement with AI’s foundational ideas, culminating in the multifaceted competencies that define full AI literacy. AI literacy spans the cognitive, affective, behavioral, and ethical domains, and can be operationalized through six inter-related components: understanding, use, evaluation, collaboration, ethics, and autonomy, as suggested by Allen and Kendeou [1] and Ng et al. [24].
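To make the energy-optimizer example concrete, the following is a minimal, hypothetical sketch in Python (scikit-learn) of the kind of model such a classroom project could involve; the features, synthetic data, and variable names are illustrative assumptions on our part, not part of the study’s materials:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

# Hypothetical classroom records: hour of day, weekday, CO2 (ppm), headcount
rng = np.random.default_rng(0)
X = rng.uniform([0, 0, 400, 0], [24, 6, 1600, 32], size=(500, 4))
# Toy target: HVAC energy demand driven mainly by occupancy and CO2 level
hvac_kwh = 0.4 * X[:, 3] + 0.002 * X[:, 2] + rng.normal(0, 1.0, size=500)

X_tr, X_te, y_tr, y_te = train_test_split(X, hvac_kwh, random_state=0)
model = RandomForestRegressor(random_state=0).fit(X_tr, y_tr)
print("held-out R^2:", round(model.score(X_te, y_te), 2))
```

A held-out accuracy score of this kind gives students immediate, visible evidence that their model has learned from data (Big Idea 3) and that their work can affect a real outcome (the impact cognition).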

2. Materials and Methods

This study employed a quantitative, cross-sectional, non-experimental survey design situated in Slovenian secondary technical schools. All data were collected at a single time point, providing a snapshot of students’ agency and AI literacy levels. There was no random assignment or instructional intervention; naturally occurring groups (school track, sex, study year) were observed. Thus, the purpose was explanatory–predictive, aiming to describe patterns (cluster profiles) and explain/predict variation in AI literacy scores from agency constructs while controlling for background variables. The data sources were a standardized self-report scale for the student agency constructs, an objective, unidimensional test of AI literacy, and demographic items for the control variables. The analysis used a dual lens: (1) person-centered cluster profile analysis to identify agency profiles, and (2) variable-centered Partial Least Squares Structural Equation Modeling (PLS-SEM) to estimate regression-like paths among latent constructs.
Using a cross-sectional, explanatory survey design is appropriate for an initial, theory-building examination [79] of the linkage between student transformative agency and AI literacy in secondary technical education. Next, by linking person-centered profiles, variable-centered regressions, and contextual controls within one CHAT-aligned model, the study is poised to (a) decode learner heterogeneity in agentive stances, (b) explain why some technical students acquire stronger AI concepts, and (c) inform both theoretical refinements (CHAT, agency typology, AI4K12) and practical reforms (track-specific scaffolds, gender equity) in Slovenia’s rapidly modernizing secondary technical schools.

2.1. Participants and Setting

In total, 485 students were recruited for the study, but only 432 fully completed both the AI literacy test and the student agency survey. Prior to the main analysis, we screened for multivariate outliers on the survey indicators of student agency and AI literacy. Mahalanobis distances were computed for each case, using all manifest variables entered later in the PLS-SEM model. Cases that exceeded the cut-off value χ²(8) = 22.41 (p < 0.001), computed in SPSS v25, were removed (n = 7). The final analytic sample, therefore, comprised 425 students. Students came from multiple technical schools, and the sample was split across the industrial engineering and human service tracks. Students in the industrial engineering track came from two schools (n = 208), while students in the human service track came from one larger school (n = 217). All three schools were publicly financed by the government. Of the 425 students, 180 (42.35%) were female and 245 (57.65%) were male. Regarding the study year, 153 (36%) students were in the second year, 185 (43.52%) in the third year, and 87 (20.48%) in the fourth year. The students’ mean age was 17.45 years (SD = 1.12). All students were enrolled full time in technical VET programs (EQF 4, ISCED 3), which were school-based with 15% work-based learning, either at a workplace or a VET institution.
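For transparency, the outlier screening can be reproduced outside SPSS. The following is a minimal sketch in Python, assuming the manifest indicators are columns of a pandas DataFrame; the helper and column names are illustrative:

```python
import numpy as np
import pandas as pd

def mahalanobis_screen(df: pd.DataFrame, cutoff: float) -> pd.DataFrame:
    """Keep the cases whose squared Mahalanobis distance from the
    sample centroid is at or below the supplied chi-square cut-off."""
    X = df.to_numpy(dtype=float)
    diff = X - X.mean(axis=0)
    cov_inv = np.linalg.inv(np.cov(X, rowvar=False))
    d2 = np.einsum("ij,jk,ik->i", diff, cov_inv, diff)  # one distance per case
    return df.loc[d2 <= cutoff]

# The study's criterion was chi-square(8) = 22.41, p < 0.001:
# clean = mahalanobis_screen(survey_df[agency_indicator_columns], cutoff=22.41)
```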
The Slovenian VET system is multilevel and allows students to choose paths tailored to their interests and goals. Lower VET (2 years) is more practically oriented and concludes with a final exam. Secondary VET (3 years) includes the option of an apprenticeship with more practical training with employers. Secondary technical education (4 years) leads to a vocational baccalaureate and enables further education. Vocational–technical education (2 years) is intended for graduates of three-year secondary VET (SPI) programs and leads to a vocational baccalaureate. The programs include general education subjects and professional modules as well as practical work training, which comprises at least 24 weeks in three-year programs. Apprenticeships enable students to complete more than half of their education in a company. The aim of the system is to enable students to work independently in their chosen profession and to enter employment immediately or continue their education at a higher level. The chart of VET in Slovenia, including the context of our study, is shown in Appendix A (see Figure A1).
Education programs covered by our survey include information and communication technology (ICT) to varying degrees, depending on the specific vocational orientation. In the human service track program, the subject “Applied Informatics” is included in the open curriculum and comprises approximately 50 h. The cosmetology technician program does not have a separate ICT subject, but approximately 70 h of ICT content is integrated into the “Entrepreneurship” module. The gastronomy and tourism program also deals intensively with ICT within the “Business Communication and ICT” module, which comprises approximately 133 h of teaching. The preschool education program includes a dedicated subject entitled “Information and Communication Technology” with approximately 68 h of teaching, which is specifically geared towards practical skills for educational work [80].
In industrial engineering track programs, the greatest emphasis on ICT is in the computer technician program, which includes a separate subject called “Information Technology with Technical Communication” (approximately 140 h) in the first year and a module called “Use of ICT in Business” (approximately 130 h) in the second year. In technical programs such as mechanical engineering, ICT content is mainly integrated into professional modules such as CAD/CAM and computer-aided technologies, with a total of approximately 150 h of ICT content. This diverse integration of ICT enables students to acquire practical and professionally relevant digital competences for their future careers [80].

2.2. Data Collection

Before data collection, the research proposal was reviewed and approved by the Ethics Commission of the Faculty of Education of the University of Ljubljana (approval code: 7/2025). Then, both the AI literacy test and the student agency survey were published on the 1ka portal (https://www.1ka.si/d/sl, accessed on 15 May 2025). According to the procedure, an invitation to complete the test and survey was sent to the school headmaster, who agreed to administer the survey in his/her institution and informed the school’s teachers. Parents or guardians were also briefed on the study and, by providing informed consent to the research administrator, authorized their under-age children to take part. The consent form outlined every component of the study, while underscoring the guarantee of anonymity and the voluntary nature of participation. Only students who granted this informed consent had their data included in the analysis. They completed both the online questionnaire and the test during class time in the presence of one of the researchers, responding to each item according to their own experiences, perceptions, skills, and knowledge. The entire administration took an average of 35 to 45 min. A high response rate was also achieved due to the direct presence of one of the researchers and local teachers in the classroom.

2.3. Instruments

2.3.1. AI Literacy Test

Students’ AI literacy—defined as their capacity to live, learn, and work in an AI-driven digital world, encompassing the abilities to understand, apply, evaluate, and create with AI, and to address its ethical implications [81,82]—was measured using the AI literacy test developed by Hornberger et al. [83]. The test had already been validated in international studies in higher education settings [14] and was therefore deemed suitable for measuring high school students’ AI literacy; it was translated into Slovenian, with items refined for readability, clarity, and equivalence with the English version. The web-based instrument comprises 30 four-option multiple-choice items and one supervised-learning sorting task, mapped to 14 competencies from Long and Magerko’s AI literacy framework [23]. These competencies enable individuals to critically evaluate AI technologies, communicate and collaborate effectively with AI, and use AI as a tool online, at home, and in the workplace [23,84]. Previous validation work showed a unidimensional structure and high reliability (Cronbach’s α = 0.81 to 0.91) [14,83]. In the present study, respondents completed the full 31-item test, and the maximum raw total score was 31. Completion required approximately 20–25 min.

2.3.2. Student Agency Survey

Student agency was assessed with the Student Agency Questionnaire that we adapted from the American Institutes for Research (AIR) “Maximizing Student Agency” instrument [85]. The web-based survey consists of 32 items covering eight validated agency constructs (self-efficacy, perseverance of interest, perseverance of effort, locus of control, mastery learning goal orientation, metacognitive self-regulation, self-regulated learning, and future orientation) and basic demographics (sex, track, study year, age, and school). Each construct is introduced by the stem “To what extent do you agree or disagree as a result of your participation in technical track study program”, followed by three behavior/belief statements; a fourth item asks how growth in that construct supports college and career readiness. The items are averaged to yield construct scores (1 = strongly disagree to 6 = strongly agree); higher scores indicate stronger perceived agency. The finalized survey instrument is provided in the Supplementary Materials under the title “Secondary school students’ agency survey”. The instrument preserves the AIR’s content validity and internal-consistency evidence (Cronbach’s α ≥ 0.80 in the original study) and demonstrated acceptable reliability in the present sample (see Results, Section 3). The constructs of student agency were also validated according to Rupnik and Avsec [86], where all constructs were found to satisfy both convergent and discriminant validity. Completion required approximately 15–20 min and was voluntary and anonymous.

2.4. Validation Procedures

2.4.1. IRT Calibration of AI Literacy Items

To assess the validity of the AI literacy test, we applied the widely used item response theory (IRT) framework for assessing the relationship between individuals’ latent traits and their item responses [14]. To assess whether the test exhibited consistent properties across the two technical tracks (industrial engineering and human service), we used the one-, two-, and three-parameter logistic (1PL, 2PL, and 3PL) models: starting with the 1PL (Rasch) model as the strictest measurement ideal; moving to the 2PL model to test whether the AI knowledge items truly share a common slope; and adding to the 3PL model a content-based expectation of guessing on multiple-choice items. Item fit indices were compared to determine which calibration model best describes the data. Finally, the Benjamini and Hochberg [87] correction was applied in model testing to control the false discovery rate when conducting multiple hypothesis tests. This ensures more reliable identification of misfitting items while preserving statistical power, reducing the likelihood of spurious rejections due to random variation [14]. The progressive comparison ensures that the final scoring model balances parsimony with fidelity to how high school students actually respond to the AI literacy items. For modeling and item analysis, we used the jMetrik package v4.1.1 [88] as free and open software (https://itemanalysis.com/jmetrik-download/, accessed on 30 April 2025).
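The false-discovery-rate step is straightforward to reproduce outside jMetrik; the following is a minimal sketch in Python using statsmodels, with illustrative (not actual) item-fit p-values:

```python
import numpy as np
from statsmodels.stats.multitest import multipletests

# Illustrative per-item fit p-values from an IRT calibration (one per item)
item_fit_pvals = np.array([0.002, 0.41, 0.048, 0.73, 0.009, 0.26])

# Benjamini-Hochberg holds the false discovery rate at 5% across the items
reject, p_adj, _, _ = multipletests(item_fit_pvals, alpha=0.05, method="fdr_bh")
print("items flagged as misfitting:", np.flatnonzero(reject))
```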

2.4.2. Convergent and Discriminant Validity of Student Agency Constructs

SmartPLS 4.1 was used with the path-weighting scheme for the eight reflective constructs of student agency (four indicators each). Five thousand bias-corrected bootstrap samples provided standard errors and 95 % CIs. The sample adequacy was confirmed by the 10-times rule: the largest number of formative or structural links pointing to any latent variable was four, so the minimum recommended n was 40; our n = 425 comfortably exceeded this threshold.
To verify the convergent validity for each latent factor, we inspected (a) standardized loadings—items with λ < 0.50 were iteratively removed; (b) composite reliability (ρC), accepting values ≥ 0.70; and (c) average variance extracted (AVE), requiring AVE ≥ 0.50 to ensure the construct captured at least half of the indicator variance. A construct was deemed convergent-valid when all three criteria were met.
Discriminant validity was evaluated in three complementary ways (a computational sketch of the reliability and validity indices follows the list):
  • Fornell–Larcker criterion: each construct’s square root of AVE was larger than its highest latent correlation.
  • Heterotrait–monotrait ratio (HTMT): bootstrapped HTMT values ranged from 0.27 to 0.63, all below the conservative 0.85 cut-off [89], and none of the 95% CIs included 1.00 (see the computational sketch after this list).
  • Cross-loadings: every indicator loaded more strongly on its designated construct than on any other.
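For transparency, the HTMT ratio can be computed directly from the item correlation matrix; the following is a minimal sketch under the usual assumption of positively keyed indicators, with hypothetical indicator indices:

```python
import numpy as np

def htmt(R, items_i, items_j):
    """Heterotrait-monotrait ratio for two constructs.

    R: correlation matrix of all indicators (assumed positively keyed);
    items_i, items_j: index lists of each construct's indicators.
    """
    hetero = R[np.ix_(items_i, items_j)].mean()      # between-construct mean r

    def mono(items):
        sub = R[np.ix_(items, items)]
        upper = sub[np.triu_indices_from(sub, k=1)]  # within-construct r's
        return upper.mean()

    return hetero / np.sqrt(mono(items_i) * mono(items_j))

# Hypothetical usage: indicators 0-3 belong to one construct, 4-7 to another
# value = htmt(R, [0, 1, 2, 3], [4, 5, 6, 7])  # flag if value > 0.85
```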

2.5. Data Analysis Strategy

For the purpose of the study, we used two complementary lenses: (1) a person-centered lens (RQ1 and RQ3) to discover the profiles of the students in terms of transformative agency, and (2) a variable-centered lens (RQ2 and RQ4) to test how the measured variables relate after controlling for background heterogeneity.
In answering RQ1, a three-stage procedure was applied. Firstly, hierarchical clustering (Ward’s method) was used to identify the structure. Secondly, the structure from the first step was refined using the k-means cluster method, while cross-validation was performed using a two-step clustering method. To determine the cluster number, besides visual inspection via a dendrogram and analysis of changes in the agglomeration coefficients, the Bayesian Information Criterion (BIC), Akaike’s Information Criterion (AIC), and the Silhouette coefficient were used. After the number of clusters was determined, the clusters were labeled based on the means ± SDs on the eight constructs. Thirdly, drawing on the cluster analysis output, we assigned every student to a specific cluster and then used those memberships as the basis for a detailed comparison of the student agency profiles.
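The three-stage logic can be sketched with standard Python tooling (scipy/scikit-learn as stand-ins for the statistical software actually used; X is a placeholder for the n × 8 matrix of standardized construct scores, and the BIC/AIC enumeration of the two-step mixture stage is omitted for brevity):

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(0)
X = rng.normal(size=(425, 8))            # placeholder for real construct scores

# Stage 1: hierarchical clustering (Ward's method) to explore the structure
Z = linkage(X, method="ward")
hier_labels = fcluster(Z, t=4, criterion="maxclust")

# Stage 2: k-means refinement across the candidate range k = 2-10
for k in range(2, 11):
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X)
    sil = silhouette_score(X, km.labels_)  # higher = better separation
    print(k, round(sil, 3))

# Stage 3: fix k = 4 and assign every student to a profile
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X)
```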
The measures of skewness (1.81) and especially of kurtosis (5.15) indicated a departure from a Gaussian distribution [90] in the AI literacy total scores, which was confirmed with a Quantile–Quantile (Q-Q) plot in which the observed quantiles deviated from the expected ones. This departure from a straight line indicates non-normality, which necessitates the use of nonparametric statistics or variance-based (PLS-SEM) methods that do not rely on normal distribution assumptions [91]. To answer RQ2, we used the Mann–Whitney U test, and the rank-biserial effect size r was calculated and interpreted in the educational context as a small effect for r = 0.10, a medium effect for r = 0.24, and a large effect for r = 0.37 [92].
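A minimal sketch of this test, assuming two arrays of AI literacy scores x and y and deriving r from the normal approximation z (r = |z|/√N; tie correction is omitted for brevity), is given below:

```python
import numpy as np
from scipy.stats import mannwhitneyu

def mwu_effect_size(x, y):
    """Mann-Whitney U with an r effect size from the normal approximation.

    Benchmarks used in the text: r = 0.10 small, 0.24 medium, 0.37 large.
    """
    n1, n2 = len(x), len(y)
    u, p = mannwhitneyu(x, y, alternative="two-sided")
    mu = n1 * n2 / 2.0                                   # E[U] under H0
    sigma = np.sqrt(n1 * n2 * (n1 + n2 + 1) / 12.0)      # SD[U], no ties
    z = (u - mu) / sigma
    return u, p, abs(z) / np.sqrt(n1 + n2)
```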
The Kruskal–Wallis H test with Dunn–Bonferroni post hoc analysis was conducted to investigate and answer RQ3 regarding whether students with different agency profiles differed in their performance on an AI literacy test. The effect size ε2 describes the proportion of rank variance explained by the profile and can be interpreted as small when ε2 = 0.01, medium when ε2 = 0.06, and large when ε2 = 0.14 [93].
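The omnibus statistic and its effect size can be sketched analogously (groups being the AI literacy score arrays of the agency profiles; ε2 = H/(n − 1) reproduces the value reported later in Section 3.2.4):

```python
from scipy.stats import kruskal

def kw_epsilon_squared(*groups):
    """Kruskal-Wallis H and the epsilon-squared effect size.

    epsilon^2 = H / (n - 1); benchmarks: 0.01 small, 0.06 medium, 0.14 large.
    For H = 27.45 and n = 425 this gives ~0.065, as reported in the Results.
    """
    h, p = kruskal(*groups)
    n = sum(len(g) for g in groups)
    return h, p, h / (n - 1)
```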
To answer RQ4, a variance-based PLS-SEM was conducted with SmartPLS 4.1 software (https://www.smartpls.com/, accessed on 10 May 2025). The model specification included (1) reflective blocks of the student agency constructs (SE, PI, PE, MLGO, LC, FO, SR, MSR); (2) single-indicator latent constructs (AI literacy, track, sex, and study years 3 and 4, with year 2 as the reference); and (3) the structural paths. The evaluation and reporting included the following:
  • Measurement quality—loadings ≥ 0.70, composite reliability ρC ≥ 0.70, AVE ≥ 0.50, and HTMT < 0.85.
  • Collinearity—inner variance inflation factor (VIF) ≤ 3.3.
  • Structural coefficients—standardized β, 95 % bias-corrected and accelerated bootstrap confidence interval (BCa CI), p, and f2.
  • Model fit—R2, Q2, PLSpredict Root Mean Square Error (RMSE) vs. Linear Regression Model (LM) benchmark.
  • Interpretation, which reveals which student agency constructs retain significance after controls.

3. Results

This section is organized into two sequential parts to guide the reader from measurement quality to substantive findings. We begin by establishing the psychometric soundness of all study instruments. After describing the validation of the instruments, we present the main findings in the order of RQ1–RQ4.

3.1. Psychometric Properties of the Measures

3.1.1. AI Literacy Test

On average, the high school students in this study scored M = 9.77 (SD = 4.19) out of the 31 possible points (the median was 9). The maximum score was 30, while the minimum was 3 points. Students from the human service technical track, on average, scored lower (M = 8.88, SD = 2.97; median = 9) than their counterparts from the industrial engineering track (M = 10.70, SD = 5.01, median = 10). Cronbach’s α = 0.70 indicated moderate reliability [94].
The results on the AI literacy test are shown in Table 3. These were derived from the classical item-analysis procedures recommended by Moosbrugger and Kelava [95] and are expressed as an item difficulty index (the proportion of students who answered an item correctly), a difficulty index corrected for guessing (assuming a 25% guess rate for the four-option format), and a discrimination index (how well an item separates high-scoring from low-scoring test-takers).
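These three statistics can be computed directly from the binary response matrix; the sketch below uses the standard chance-correction formula (p − g)/(1 − g) and a rest-score point-biserial, so the exact conventions of the original analysis may differ slightly:

```python
import numpy as np

def item_analysis(scores, guess=0.25):
    """Classical item statistics for a 0/1 response matrix.

    scores: n_students x n_items binary numpy array. Returns the
    difficulty index p, the guessing-corrected difficulty (p - g)/(1 - g)
    for four-option items (g = 0.25), and a point-biserial discrimination
    computed against the rest-score to avoid part-whole inflation.
    """
    p = scores.mean(axis=0)                     # difficulty index
    p_corr = (p - guess) / (1.0 - guess)        # corrected for guessing
    total = scores.sum(axis=1)
    rpbis = np.empty(scores.shape[1])
    for j in range(scores.shape[1]):
        rest = total - scores[:, j]             # total score minus item j
        rpbis[j] = np.corrcoef(scores[:, j], rest)[0, 1]
    return p, p_corr, rpbis
```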
Since the test was designed for and used with higher education students and adults, it was expected that secondary school students would score lower, which is evident in the low difficulty index. Moreover, Slovenian secondary school students have no prior experience with AI in an organized way, nor does their curriculum include AI competences. Therefore, in this study, the students’ experience with AI comes from their private lives, as already suggested by Licardo et al. [5].
The difficulty index in this study ranged from 0.005 to 0.661, which reflects the actual state of AI knowledge among secondary school students. For a well-balanced test, a common recommendation is to have item difficulty in a mid-range (e.g., 30–80% correct) to maximize discrimination and reliability [95]. Several of the items were found to be too difficult (below ~30% correct), which might contribute less information to the test’s reliability [88] (see Table 3). This indicates that, on average, the items were quite challenging for the respondents, and many items had a low proportion of correct responses.
Notably, no item in this pool was extremely easy (none had > 70% correct), whereas about half (16 out of 31) could be classified as very difficult with < 30% of the respondents answering correctly [95]. The predominance of low difficulty indices across items aligns with the cohort’s lack of AI exposure, confirming the test’s sensitivity to detecting true novice-level AI literacy. These findings underscore the necessity of foundational AI education in formal curricula.
Across the 31 AI literacy items, the discrimination indices ranged from 0.149 (very low) to 0.353 (moderately high), with an average around 0.22. This average falls in the “fair” range, indicating moderate overall differentiating power. Importantly, no items had negative discrimination (all values of the point-biserial coefficient were positive), so there were no signs of seriously flawed items or mis-keyed answers causing high scorers to consistently choose wrong options [95]. However, a substantial number of items (12 items, ~39%) showed rpbis below 0.20, which is generally considered poor discrimination [95]. Only three items (~10%) achieved rpbis ≥ 0.30 (good discrimination), and none reached the “excellent” > 0.40 range. The presence of positive discrimination values across the majority of items suggests that students, despite lacking formal AI instruction, differ meaningfully in their intuitive or informal understanding of AI-related topics. The instrument thus appears sensitive not only to formal knowledge but also to broader cognitive and informational readiness in the domain of AI.
There is a slight inverse relationship between difficulty and discrimination in this item pool (correlation r ≈ −0.24), which is commonly observed; extremely easy or extremely hard items tend to have lower discrimination, whereas items of mid-range difficulty often discriminate best. Items with poor discrimination and extremely low difficulty indices likely exceed the students’ zone of proximal development. Future iterations of the test should consider cognitive load, contextual grounding, and linguistic accessibility, especially when assessing untrained populations.
Examining the item indices by content topic revealed some patterned differences that might be insightful for the test developers. The content analysis hints that certain domains (such as technical concepts and processes) were better captured by the test than others (such as ethics or general AI facts). The ethics/society items were relatively weak discriminators, which may lower the test’s internal consistency.
The absence of high-performing (easy) items in the dataset indicates a nearly universal lack of foundational AI knowledge among high school students. This supports the argument for implementing broad-based, entry-level AI education to prepare students for future technological citizenship.
In the context of untrained high school students, the item statistics should not be viewed as a flaw in the test design but rather as evidence of the current state of AI literacy. The test functioned as a valid discriminator even in the absence of formal training, and the results strongly justify the development of age-appropriate, scaffolded AI curricula.
In the second stage of the AI literacy test validation, we used the jMetrik software to implement and compare different IRT models.
Firstly, we verified the assumption of unidimensionality by performing Principal Component Analysis (PCA) of the standardized residuals in the Rasch model. The eigenvalues of the five contrasts (1.82, 1.53, 1.48, 1.46, and 1.37, respectively) were all less than 2, well below the critical value of 3. The dimensionality investigation therefore provided evidence supportive of a single dimension [96,97,98], and we can conclude that the assumption of unidimensionality is fulfilled. The scale quality statistics also yielded a reliability index of r = 0.71 for the Expected A Posteriori (EAP) estimates of a person’s ability, consistent with the test’s Cronbach’s α of 0.70.
Secondly, we tested fairness with regard to whether the test functioned similarly across the two tracks of students. The Differential Item Functioning (DIF) analysis was conducted in jMetrik, comparing human service and industrial engineering high school students. A statistically significant DIF (p < 0.05) was observed for items 12, 15, 22, 28, and 29, with Δb values of 0.57, 1.73, 0.50, 1.57, and 0.63, respectively, indicating a range of DIF from small to large [88]. The focal group (human service track) found these items moderately more difficult than the reference group did, even at equivalent levels of ability. This suggests potential content bias or differential familiarity, possibly due to contextual references in these items.
Finally, we ran the three PL models—1PL, 2PL, and 3PL—on the entire dataset after evaluating the assumptions of unidimensionality and test fairness. jMetrik assessed the IRT model–data fit using the family of generalized χ2-based statistics originally proposed by Orlando and Thissen [99]—most notably the S-χ2—together with classical infit/outfit indices. First, we ran the 1PL model with the 31 items of the AI literacy test using the marginal maximum likelihood (MMLE) estimation method. Only three items (11, 26, and 30) deviated from the Rasch model with a significant p-value (0.047, 0.000, and 0.039, respectively). This indicates that these three items did not fit the model and might fit another model better. Thus, we ran the 2PL calibration, where only one item (13) did not fit the model (p = 0.047). The discrimination parameter (a) for all items ranged from 0.38 to 1.96, which is an acceptable range [96]; a discrimination parameter close to 2 should be treated with caution since it may indicate problems in the dataset [88]. The maximum difficulty parameter (b) was 4.41, indicating no threat to validity [96,100]. Items 11, 26, and 30 improved in this model (p > 0.05). The fit of the 2PL model, at least at the item level, thus appeared better than that of the Rasch model. When running the 3PL model, we found acceptable MMLE item parameter estimates (the a parameter ranged from 0.75 to 2.03, the b parameter from 0.40 to 4.14, and the c parameter from 0.04 to 0.31) [100]. Considering the item fit statistics, some items (e.g., 2, 11, 13, 21, and 29) exhibited increased misfit under the 3PL model compared to the 2PL model, as indicated by significant S-χ2 statistics and inflated outfit mean squares. This deterioration in item fit suggested potential overfitting or a violation of model assumptions for those items. Therefore, the 2PL model provided the optimal compromise between statistical fit and parsimony, and it was retained for all subsequent analyses (ability estimation, test information, and score reporting).
Based on the adjusted p-values obtained through the Benjamini–Hochberg [87] false discovery rate procedure, no items showed a statistically significant misfit under the 2PL and 3PL models. For the 1PL model, only item 26 remained statistically significant after correction, with a p-value below the adjusted critical threshold (q = 0.0016). These results suggest that, after controlling for multiple comparisons, the 2PL and 3PL models demonstrate acceptable item-level fit across all items, while the 1PL model may still exhibit limited misfit for a specific item. As no substantial item misfit was observed after correction, all 31 items were retained for further analysis, and their scores were summed to compute the total AI literacy score.
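The step-up rule underlying these adjusted p-values can be reproduced as follows (a minimal sketch; statsmodels offers an equivalent off-the-shelf routine via multipletests(pvals, method='fdr_bh')):

```python
import numpy as np

def benjamini_hochberg(pvals, q=0.05):
    """Benjamini-Hochberg step-up procedure controlling the FDR at level q.

    Rejects the hypotheses with the k smallest p-values, where k is the
    largest rank i such that p_(i) <= (i / m) * q.
    """
    p = np.asarray(pvals)
    m = p.size
    order = np.argsort(p)
    below = p[order] <= (np.arange(1, m + 1) / m) * q
    reject = np.zeros(m, dtype=bool)
    idx = np.nonzero(below)[0]
    if idx.size:
        reject[order[: idx.max() + 1]] = True
    return reject
```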
Detailed tables of the MMLE item parameter estimates (see Table A1) and item fit statistics (see Table A2) for the entire dataset evaluated with respect to all three IRT models can be viewed in Appendix B.

3.1.2. Student Agency Survey

A 32-item instrument measuring student agency had already been validated in some recent studies [36,86], where strong evidence of internal consistency, as well as convergent and discriminant validity, was provided. Even when a scale has passed psychometric checks in earlier work, its measurement properties cannot be assumed to generalize unchanged to a new investigation. Cheung et al. [101] note that established instruments often do not perform equally well in different populations and may be altered through translation, cultural adaptation, or minor wording changes, and so researchers must re-establish reliability and both forms of construct validity before testing substantive hypotheses. In addition, sampling error means that the magnitude of loadings and inter-factor correlations always fluctuates across studies; therefore, failing to verify convergence and discrimination risks retaining poorly functioning items or overlooking construct overlap, which can inflate type I errors and bias structural estimates [101]. Rönkkö and Cho [102] further demonstrate that apparent discriminant validity breaches may arise from sample-specific multicollinearity or model misspecification, problems that can only be detected by re-testing latent correlations and confidence intervals in each dataset. Routine verification of reliability and convergent and discriminant validity is, thus, a safeguard that ensures the scores analyzed in any new study are internally coherent, empirically distinct and appropriate for drawing substantive conclusions in that specific research context.
The construct statistics obtained after running the PLS algorithm and bootstrapping in SmartPLS are shown in Table 4 to check convergent validity. The thresholds used for convergent validity follow Cheung et al. [101]: the outer loadings should be > 0.708 (at least > 0.60); composite reliability ρC, Cronbach’s alpha (α), and Dijkstra–Henseler rho (ρA) must fall between 0.70 and 0.95; and AVE should reach 0.50 or higher. As shown in Table 4, all constructs of student agency were considered to satisfy the convergent validity indicators, as argued by Cheung et al. [101].
Next, we also verified the discriminant validity of the survey constructs. As shown in Table 5, all eight constructs in the student agency model met the established criteria for discriminant validity. The Fornell–Larcker criterion was satisfied, with each construct’s square root of AVE exceeding its correlations with all other constructs [102]. Cross-loading analysis further confirmed that all indicators loaded highest on their intended constructs compared to others.
In addition, the HTMT ratios of inter-construct correlations ranged from 0.27 to 0.64, remaining well below the conservative threshold of 0.85 [89]. The bootstrapped confidence intervals for all HTMT estimates did not include 1.0, providing further evidence that the latent constructs are empirically distinct.

3.2. Main Study Results

3.2.1. Analysis of Student Agency Constructs

The quality of the student agency constructs is reported in three stages: descriptive statistics, distributional checks, and inter-construct correlations. Firstly, we calculated basic statistics such as the average score (M), standard deviation (SD), 95% confidence interval, and measures of skewness and kurtosis; the results are shown in Table 6.
Table 6 shows that, for the total sample, the agency scores were high for the SE and LC constructs, with mean values over 4.50. The lowest scores were detected for PE and FO, just above the 3.5 mid-point of the scale. The skewness and kurtosis statistics ranged between −2 and +2, which can be considered acceptable for the detection of a univariate normal distribution [90,103]. For a large sample size (>300), the assessment of the normality of the data depends on the histograms and the absolute values of skewness and kurtosis [104]. The graphs of all constructs are approximately bell-shaped and symmetric about the mean, so we can assume normally distributed data [91,105]. The scale distributions could, therefore, be generally considered normal, which supports the use of parametric statistics in the further analysis of these constructs.
Lastly, we assessed the relationships between the student agency constructs using Pearson correlation coefficients, and the correlations are shown in Table 7. Running a Pearson correlation matrix among the student agency constructs was necessary to demonstrate that the dimensions behave as a coherent yet non-redundant system—core evidence for the construct validity of our measurement and of the theoretical model underpinning the study.
As shown in Table 7, no redundancy was detected (all r < 0.80), and there were no negligible (r < 0.10) or negative correlations [106]. This finding informed the follow-up SEM analysis: all paths were plausible, and no constraints needed to be applied.

3.2.2. Cluster Profiles of Student Agency

In the first step, hierarchical cluster analysis was conducted based on the mean scores of the student agency constructs. Prior to the analysis, we set a candidate range of k = 2–10. This followed (a) the pedagogical rule of thumb k ≤ √n/2, which for n = 400 suggests an upper bound near 10 [107], and (b) evidence from simulation studies by Milligan and Cooper [108] showing that internal validity criteria rarely select solutions with more than about ten clusters when the underlying structure is well defined. Inspection of the dendrogram indicated a four-to-five-cluster solution, whereas analysis of changes in the agglomeration coefficients pointed to a four-cluster solution. Based on the mean scores on the agency constructs for each cluster, we discerned four agency profiles. In a second step, k-means cluster analysis was performed, aiming at validating the cluster solution uncovered during the hierarchical cluster analysis. Table 8 shows the four-cluster solution based on the two methods applied in clustering. The cluster memberships revealed by the k-means method showed comparable cluster sizes, and the ratio between the largest and smallest cluster was less than 2. The final solution yielded four clusters with the following sizes: 121, 117, 68, and 119 participants. The mean cluster size was 106.25 (SD = 25.65). To assess the degree of size imbalance, we computed the coefficient of variation (CV) of the cluster sizes, following the approach recommended by Eldridge et al. [109]. The resulting CV was 0.24, which was marginally above the commonly referenced threshold of 0.23 for acceptable cluster size variation. According to Hair et al. [103] and Eldridge et al. [109], cluster solutions with CV values below 0.23 are generally considered to have an acceptable size balance; our result, therefore, indicates a slight, but not excessive, imbalance. All clusters exceeded the minimum recommended size of 30 cases or 10% of the total sample, ensuring that each segment remains substantively meaningful [103]. These findings suggest that the cluster solution is reasonably balanced, with only a minor size disparity that does not compromise the interpretability or utility of the segmentation.
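For transparency, the reported CV follows directly from the cluster sizes given above:

```latex
\mathrm{CV} = \frac{SD_{\text{cluster size}}}{M_{\text{cluster size}}} = \frac{25.65}{106.25} \approx 0.24
```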
Additionally, a one-way Analysis of Variance (ANOVA; Bonferroni-corrected) indicated that the profile groups were distinct. The analysis also revealed that, in the four-cluster solution, the clusters differed significantly on all constructs of agency (p < 0.05), whereas this was not the case for solutions with five or more clusters.
In the last stage, we conducted a two-step cluster analysis to validate the cluster solutions. We verified the candidate cluster numbers against the BIC, AIC, and Silhouette score as the metrics most often used in such cases [110,111]. The optimal number of latent profile groups was evaluated using the BIC, while inspection of the Silhouette values revealed a two-cluster solution as the best one, as expected. A four-cluster solution was also acceptable according to the Silhouette values [112] (Table 9).
Initial enumeration based on the AIC suggested a 10-cluster solution. We rejected this recommendation for three reasons. First, the AIC applies only a 2p penalty (where p is the number of estimated parameters) and is well known to be liberal in mixture modeling [113], frequently favoring overly complex solutions as the sample size increases [114]. Second, with n = 425, the resulting clusters would contain as few as 30–40 cases, falling below the 5–10% substantial size threshold proposed by Hair et al. [103]. Finally, the AIC optimizes predictive accuracy rather than class separation; it does not consider assignment uncertainty or interpretability [112].
Consistent with best practice, we relied on the BIC and the average Silhouette, both of which converged on a more parsimonious four-cluster solution that displayed clear separation and an acceptable size balance, with a coefficient of variation (CV) = 0.24 [115].
K-means clustering produced a four-cluster solution (Figure 2). We labeled the groups as follows:
  • Low-Self-Belief Moderate (LSBM) (n = 121), characterized by average scores on most agency constructs but notably low self-efficacy and a more external locus of control. Below-average competence is evident, with the scores for meaning and impact hovering around the grand mean. LC is slightly external, contrasting with Cluster 2’s more internal LC.
  • Confident–Low Drive (CLD) (n = 117), showing high self-efficacy and internal control yet reduced self-determination, perseverance of effort, and interest. The finding regarding impact is mixed: LC suggests these students feel some personal control, but FO and PE are low, signaling limited forward drive.
  • Low Agency/Under-engaged (LA) (n = 68), with uniformly low levels across all constructs. Competence (SE) reaches its lowest point here. The scores for meaning and impact are subdued, yet SR and FO rise above the cluster’s baseline.
  • Highly Agentic (HA) (n = 119), exhibiting high scores on every agency dimension. Meaning and self-determination are especially strong. Impact is solid but shows slightly lower LC and FO than the other dimensions.
The mean scores for each cluster are reported in Table 9, while the standardized mean score (zM) is depicted in Figure 2.
Consistent with Spreitzer’s multidimensional model of empowerment [25], the four clusters differed systematically across the cognitions of meaning, competence, self-determination, and impact. The HA students (n = 119) scored above zM = 0.50 on every dimension, with particularly pronounced perseverance of interest (zM = 0.95) and metacognitive self-regulation (zM = 0.91). Conversely, the disempowered but forward-looking cluster LA (n = 68) showed global deficits (all zMs < −1.00), except for slightly higher self-regulation and future orientation, suggesting residual motivational resources.
Clusters 1 and 2 sit near the center of the spectrum; their LC scores diverge. Cluster 2’s internal LC supports their confidence but, without meaning and self-determination, fails to energize sustained striving—illustrating Spreitzer’s contention that competence alone cannot secure empowerment [25]. Cluster 1, by contrast, shows a more external LC, reinforcing their lower competence and leaving them “stuck in the middle”.

3.2.3. AI Literacy Among Human Service and Industrial Engineering Technical Track Students

Before comparing the two tracks, we provide basic descriptive statistics, as shown in Table 10. The skewness and kurtosis values indicated that the data deviated from normality, which was confirmed with Q-Q plots. Thus, we conducted Mann–Whitney nonparametric tests for independent samples. The median (Md) and interquartile range (IQR) are also shown in Table 10.
A Mann–Whitney U test indicated that AI literacy scores differed significantly between the industrial engineering track (Mdn = 10, IQR = 4.75) and the human service track (Mdn = 9, IQR = 3.50), U = 17864, z = −3.74, p < 0.001, effect size r = 0.19. This represents a small-to-medium effect in favor of the industrial engineering students.
Because the AI literacy score distribution was markedly non-normal, a Quade rank ANCOVA was conducted to compare AI literacy between students in the two tracks while controlling for study year and sex. The difference was statistically significant, F (1, 423) = 4.70, p = 0.031, in favor of students in the industrial engineering track.
In the second stage of exploring differences between the two technical tracks, we also investigated differences in the competencies defined by Long and Magerko [23]. A Mann–Whitney test revealed significant differences (p < 0.05) in the following seven competencies: (1) Recognizing AI, (2) Interdisciplinarity, (3) AI’s Strengths and Weaknesses, (4) Decision-Making, (5) Machine Learning Steps, (6) Data Literacy, and (7) Ethics. The effect sizes ranged from r = 0.10 to 0.15, which is categorized as small to medium. All seven competencies that favored the industrial engineering track share a strong analytic or procedural flavor. The industrial engineering curriculum in Slovenian upper-secondary technical schools devotes substantially more hours to mathematics, information technology, and project-based engineering subjects than the human service programs, which concentrate on pedagogy, psychology, and social care [116]. Ethics also emerged as higher in the industrial engineering cohort, which may look counter-intuitive, but recent Slovenian curriculum revisions embed ethics modules directly into engineering and computer science units, often framed around responsible innovation and EU AI Act compliance [117]. The human service track’s study programs typically treat ethics in a broader social science sense, which might not map onto the narrower questionnaire items.
For the remaining seven competencies—including Understanding Intelligence, Human Role in AI, Representations, Sensors, and Programmability—the scores did not differ significantly between the tracks (p > 0.05). These facets are either (1) conceptual and cross-disciplinary or (2) grounded in everyday technology use. Such knowledge is readily acquired through informal channels (social media, popular culture, smartphone use) that both cohorts share, diluting any curriculum-based advantage [5].

3.2.4. Student Agency Profiles in Relation to AI Literacy

A Kruskal–Wallis test showed a significant difference in AI literacy scores across the four agency profiles, H (3) = 27.45, p < 0.001, ε2 = 0.065 (medium effect). The median differences in AI literacy among the four student agency profiles are shown in Table 11.
A Dunn–Bonferroni pairwise multiple-comparison procedure was used to follow up the omnibus Kruskal–Wallis test and control the family-wise error rate across the six cluster contrasts, which yielded some interesting results: (1) AI literacy does not differ significantly (p > 0.05) between students who are globally disengaged (LA) and those who feel average meaning/impact but low competence (LSBM) (3 vs. 1). (2) Students who believe in their competence (SE) even without strong motivation (CLD) outperform the globally low-agency group (LA) on AI literacy (2 vs. 3). (3) The competence advantage of CLD over LSBM translates into higher AI literacy despite similar (low) self-determination (2 vs. 1). (4) The biggest gap: students high on every empowerment cognition (HA) show substantially higher AI literacy than the disengaged group (4 vs. 3). (5) Strong, multidimensional agency (HA) trumps the lukewarm profile (LSBM) in AI literacy (4 vs. 1). (6) AI literacy is statistically indistinguishable between HA and CLD, suggesting that competence (SE) may be the critical driver once a basic agency threshold is met (4 vs. 2). Moreover, the synergetic effects of LC and SE might compensate for motivational deficits and result in AI literacy as high as that of the HA profile.

3.2.5. Predictive Power of Transformative Agency Constructs on AI Literacy

Modeling the four agency cognitions via agency constructs as continuous predictors of AI literacy deepens both theory testing and practical guidance. First, it allows us to examine whether meaning, competence, self-determination, and impact, as articulated by Spreitzer [25], each contribute unique variance to students’ mastery of the Five Big Ideas in AI framework that underpins contemporary AI literacy curricula: Perception, Representation and Reasoning, Learning, Natural Interaction, and Societal Impact. By entering these agency constructs simultaneously, we could test whether the compensatory pattern hinted at in the profile analysis—high self-efficacy and an internal locus of control counter-balancing low intrinsic drive—persists once the outcome is anchored to these five foundational AI concepts. Such a variable-centered approach not only provides standardized coefficients (β) and incremental R2 that overcome the distributional limits of rank-based tests but also identifies which agency lever is most potent for fostering literacy across all five AI ideas. Practically, this pinpoints where educators should focus their interventions: if competence and locus of control emerge as the strongest predictors, resources can be channeled into mastery experiences and attribution retraining rather than blanket motivation programs. In short, moving from profiles to predictors links empowerment theory to the AI4K12 literacy framework and delivers an evidence-based roadmap for targeted, high-impact instructional design.
The SmartPLS structural model assessed the influence of multiple student agency factors on AI literacy (Figure 3). Eight latent psychological constructs (self-efficacy, perseverance of interest, perseverance of effort, mastery learning goal orientation, locus of control, future orientation, self-regulation, and metacognitive self-regulation) were entered as predictors, along with the control variables (study track, study year, and sex).
The model quality was assessed using the criteria indicated in the Materials and Methods section. Firstly, we assessed measurement quality by checking the loadings on each construct, composite reliability ρC, AVE, the HTMT ratio, and the latent correlations assessed with the Fornell–Larcker method.
Almost all indicator loadings exceeded the recommended threshold of 0.70, except one item each for SR and FO (0.59 and 0.66, respectively), supporting indicator reliability. The composite reliabilities were likewise satisfactory (ρC = 0.89–0.92, all above the 0.70 threshold), and the AVE values surpassed 0.50 for every construct (AVE = 0.68–0.74), indicating convergent validity. Discriminant validity was confirmed as (1) the HTMT ratios between all construct pairs were below 0.85 (max HTMT = 0.64) and (2) the Fornell–Larcker criterion was also satisfied for all constructs. For each construct, the square root of the AVE exceeded its highest latent-variable correlation |r|, indicating adequate discriminant validity. This finding converges with the HTMT results (all HTMT < 0.85). Table 12 summarizes these results.
Next, we checked for collinearity issues between the variables in the model. The inner VIF values for all predictor–criterion links ranged from 1.348 to 2.044 and did not exceed the 3.3 cut-off, suggesting that multicollinearity is not a concern [118].
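This check can be approximated on the latent variable scores with standard tooling; a minimal sketch with placeholder data (X standing in for the eight predictor score columns) is shown below:

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

rng = np.random.default_rng(1)
X = rng.normal(size=(425, 8))              # placeholder latent predictor scores

exog = sm.add_constant(X)                  # VIF requires an intercept column
vifs = [variance_inflation_factor(exog, i) for i in range(1, exog.shape[1])]
print([round(v, 2) for v in vifs])         # all values should stay <= 3.3
```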
The structural model explained R2 = 20.3% of the variance in AI literacy. Table 13 reports the standardized path coefficients (β), bias-corrected and accelerated 95% confidence intervals, p-values, and Cohen’s f2 effect size.
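The f2 values in Table 13 follow Cohen’s convention of comparing the model’s explained variance with and without the focal predictor:

```latex
f^2 = \frac{R^2_{\text{included}} - R^2_{\text{excluded}}}{1 - R^2_{\text{included}}}
```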
Predictive-relevance statistics were positive for the endogenous construct (Q2 = 0.11). PLSpredict indicated that the PLS-SEM model yielded a lower RMSE than an equivalent linear-model (LM) benchmark for eight indicators (average PLS-SEM_RMSE = 3.978 vs. LM_RMSE = 3.981), indicating marginally better out-of-sample predictive power.
Several psychological predictors showed significant relationships with AI literacy, although their effects were modest. Mastery learning goal orientation (MLGO) had a significant positive association with AI literacy (β = 0.219, 95% CI [0.096, 0.350], p = 0.001). This suggests that students who more strongly endorse mastery goals tend to have higher AI literacy. The practical effect, however, was small (Cohen’s f2 = 0.036). Metacognitive self-regulation (MSR) was also positively related to AI literacy (β = 0.222, 95% CI [0.123, 0.356], p < 0.001), with a small effect size (f2 = 0.030). Likewise, self-efficacy (SE) emerged as a significant positive predictor (β = 0.195, 95% CI [0.092, 0.331], p = 0.002), indicating that students with higher self-efficacy tended to demonstrate higher AI literacy. This effect was statistically reliable and small in magnitude (f2 = 0.027).
In contrast, two psychological factors exhibited significant negative relationships with AI literacy. Locus of control (LC) showed a slight negative association with AI literacy (β = −0.099, 95% CI [–0.220, –0.022], p = 0.043). Although statistically significant, this effect was very small (f2 = 0.008, below the 0.02 threshold, indicating a negligible practical impact). Similarly, self-regulation (SR) had a significant negative effect on AI literacy (β = −0.179, 95% CI [−0.351, −0.080], p = 0.014). This negative coefficient implies that, when controlling for other variables, higher self-regulation scores were associated with lower AI literacy in the sample. The strength of this inverse relationship was small (f2 = 0.026).
Finally, several psychological predictors did not show meaningful effects on AI literacy. Future orientation (FO) had no significant impact (β = −0.052, 95% CI [−0.204, 0.044], p = 0.393, f2 = 0.002). Likewise, perseverance of interest (PI) (β = −0.081, 95% CI [–0.226, 0.026], p = 0.194, f2 = 0.005) and perseverance of effort (PE) (β = 0.036, 95% CI [−0.082, 0.184], p = 0.598, f2 = 0.001) were not significant predictors of AI literacy. All three of these coefficients had 95% CIs that spanned zero and extremely small f2 values (near 0.0), indicating no statistically significant or practically meaningful influence of FO, PI, or PE on students’ AI literacy in this model.
Among the control variables, study track emerged as a notable predictor of AI literacy. The path coefficient for study track was large and positive (β = 0.607, 95% CI [0.411, 0.830], p < 0.001), indicating that students in the industrial engineering track scored substantially higher on AI literacy than those in the reference (human service) track. Despite this high standardized coefficient, the unique variance explained by the study track was limited; the effect size was small (f2 = 0.042). In other words, students’ academic track had a statistically significant impact on AI literacy, but its practical contribution—after accounting for other factors—was modest.
By contrast, sex and year of study did not significantly influence AI literacy in the model. The effect of sex was negative but non-significant (β = −0.138, 95% CI [−0.328, 0.069], p = 0.175, f2 = 0.002), suggesting no reliable difference in AI literacy between male and female students once other variables were controlled. Similarly, neither being a third-year student (β = 0.012, 95% CI [−0.200, 0.218], p = 0.911, f2 ≈ 0.000) nor a fourth-year student (β = 0.093, 95% CI [−0.105, 0.294], p = 0.366, f2 = 0.001) was associated with a significant change in AI literacy compared to the reference year (study year 2). These year-of-study effects were essentially zero, with confidence intervals encompassing zero and negligible f2 values. Thus, no significant differences in AI literacy were evident by gender or study year in this dataset.
Overall, the path model results indicate that a few key psychological attributes—especially mastery goal orientation, metacognitive self-regulation, and self-efficacy—are positively linked to higher AI literacy, albeit with small effect sizes. Self-regulation and locus of control showed small negative effects, while other traits (future orientation and perseverance of interest/effort) did not show measurable impacts. Among demographic controls, students’ academic track was a significant determinant of AI literacy (favoring one track over another), whereas sex and study year showed no significant influence. All significant predictors, according to Cohen’s conventions, had small practical effects, highlighting that even statistically significant factors accounted for only a modest portion of variance in AI literacy.

4. Discussion

Despite the amount of research on the role of student agency in professional work and development [30], empirical research on the role of student agency as a subject in CHAT within VET is scant. Moreover, this study is among the very first to apply a combined person- and variable-centered approach to examine a pathway through which student agency may enhance AI literacy, using a multilevel triple-theoretical framework in the context of VET.
Our study showed that AI-based technologies and tools in education, acting as mediators in the CHAT model, can impact the acquisition of AI literacy as an outcome in activity systems that are inherently multi-voiced, since the subjects (students) form different conceptualizations of the object (AI literacy). Moreover, the CHAT framework, with its different analytical tools for understanding complex activities in such systems, was found to be a scaffold that can drive both organizational and individual learning, which is aligned with [75]. Through a study designed with a multilevel triple-theoretical framework in which a person- and variable-centered approach was used, we found evidence of how and which constructs of student agency may impact specific empowerment cognitions and, at the same time, enhance the development of specific ideas within the Five Big Ideas in AI framework. Moreover, using the CHAT framework and the Five Big Ideas in AI lens, scaffolding students’ learning may further enhance AI literacy so that students show higher competency in use, understanding, and evaluation. These findings suggest that the AI-driven technologies and tools that are becoming more prevalent in VET not only scaffold and automate learning but also make learning meaningful and reflective for the effective acquisition and transfer of knowledge and skills.

4.1. Student Agency Profiles in VET Students

In response to RQ1, the findings of our cluster profile analysis suggest that, from a CHAT perspective, the four distinct student agency profiles with differing AI literacy scores can be interpreted as different configurations of interacting activity systems within the classroom. Each profile is driven by its own motive and pattern of engagement, which comes into contact (and sometimes conflict) with the curriculum’s motives, community, and tools [73]. For example, a highly agentic profile that values the deep exploration of AI aligns closely with the curriculum’s objective of mastering AI concepts, so the AI tools and tasks become shared objects bridging the student’s and the curriculum’s activity systems (functioning as boundary objects) [119]. In contrast, a less-empowered profile might focus only on completing required tasks for a grade, a narrow motive misaligned with the curriculum’s broader AI literacy goals; this misalignment constitutes a contradiction between the student’s and the institution’s activity systems [71], and can impede the student’s AI literacy development. The very existence of multiple profiles exemplifies CHAT’s principle of multi-voicedness, as the classroom activity system contains diverse points of view, interests, and approaches to using AI [69]. Examining how each profile navigates the shared AI tools and where systemic tensions emerge allows a CHAT-based analysis to explain the different AI literacy outcomes: boundary-crossing artifacts (such as an AI learning platform) may support expansive learning for some profiles while remaining underutilized or conflict-laden for others, depending on how each profile’s motives and interactions resolve (or exacerbate) the contradictions in the activity system. By viewing the student profiles and the curriculum as interacting activity systems, researchers can understand how differences in motive alignment, contradictions, and mediation through common tools lead to the observed variations in AI literacy across these empowered learner profiles [71].
On average, the students in this study experienced their self-efficacy as an empowerment cognition of agency more than the other three empowerment cognitions. The higher self-efficacy scores likely reflect that this measure tapped into students’ broad academic confidence rather than AI-specific skills, since the agency survey [36] assessed general beliefs in one’s capability to succeed academically—a trait these VET students have developed over time through schooling. In contrast, the other empowerment cognitions (meaning, self-determination, and impact) were inherently tied to the scaffolded and automated learning with AI. Because the AI activities were rather informal, short, and exploratory, and seemed to be uniform across the industrial and human service tracks, they may not have strongly fostered a personal sense of purpose, autonomy, or influence. In other words, while students felt capable of learning (high self-efficacy), they only moderately perceived the AI tasks as deeply meaningful or under their control, nor did they see themselves making a significant impact through these brief exercises. This pattern is consistent with prior research showing that confidence (competence) often emerges as the most robust dimension of student agency, whereas other empowerment aspects remain lower [86]. Thus, given that self-efficacy was measured generally (covering overall academic skills) [36] and was bolstered by students’ prior successes, it stayed higher on average. Meanwhile, experiencing meaning, self-determination, and impact would require more sustained, personally relevant engagement with AI—something not fully realized in a one-off exploratory task—resulting in those three dimensions clustering at similarly modest levels [4].
Previous studies maintain that student agency depends on self-efficacy resources [120], such as teacher–student interaction, learners’ mutual power relations, experiences of trust, and emotional and social engagement [30,67,73]. Our finding of a high level of competency in the total sample therefore suggests that, for the majority of the students, the high self-efficacy and internal locus of control facilitated by their courses offered experiences of agency that could favorably contribute to their learning outcomes (e.g., academic success, AI literacy) [1].
In line with Spreitzer’s psychological empowerment framework [25], the four clusters represent distinct combinations of meaning, competence, self-determination, and impact cognitions. Cluster HA exhibited uniformly high scores on all eight constructs, indicating a fully empowered profile. These students find strong personal meaning in their work (high PI, MLGO) and feel highly competent (high SE), while also exercising considerable autonomy in learning (high SR, MSR) and believing in their ability to influence and enrich desired outcomes (high LC, FO, PE) in learning environments that support active learning and invite students’ collaboration in solving problems and reflection [30], even in the face of obstacles and failures, which is consistent with [84,121]. The HA group encapsulates a subject position characterized by strong agency, meaning, and self-determination, aligning with what CHAT would recognize as expansive learning potential. Here, the subject’s motive is congruent with the system’s object, facilitating robust engagement and leading to higher AI literacy scores. Their strong sense of purpose and self-determined striving function as internal driving forces that harmonize with curricular and institutional rules—minimizing systemic contradictions and supporting individual and systemic transformation toward deeper competence and holistic development, as emphasized in holistic AI literacy frameworks [122,123,124].
In contrast, Cluster LA showed low scores across the board, indicating a broadly disempowered group lacking a sense of meaning in their studies (very low interest and mastery orientation) and feeling ineffective in directing their learning or future (low self-efficacy, regulation, and impact beliefs, akin to learned helplessness) [25]. These findings suggest that students in the LA group are especially vulnerable to falling behind: they are more likely than their peers to find the course material overwhelming, to feel under-prepared for the prerequisite knowledge, and to doubt their ability to succeed, which confirms the findings of [30]. In addition, learners assigned to the Low Agency profile perceived the self-efficacy resources available to them (support from teachers, peers, lack of mastery orientation) as weaker than that reported by students in the other profiles. It might be that LA students have lower outcomes (grades, AI literacy) since they repeatedly feel their agency is constrained. The transformative learning needed to recognize and develop their own capabilities may never fully emerge, as argued by [125], potentially weakening their sense of professional agency once they enter the workplace or the next level of study, which confirms the findings of [9,30]. The LA group represents a subject position where low agency, weak self-belief, and externally imposed motives create fundamental contradictions within the activity system. The LA students’ object (minimal effort, surface engagement) diverges sharply from the educational system’s object (the cultivation of transferable, deep AI literacy). The resultant primary contradiction manifests as motivational disengagement—a classic tension in CHAT between individual motives and shared objectives—producing the lowest AI literacy outcomes. This contradiction can prevent students from participating in the expansive learning cycles necessary to overcome systemic limitations [122,126].
Between these extremes, cluster LSBM was characterized by high meaning and self-determination but only moderate competence and impact. Students in Cluster LSBM appear passionate and mastery-driven (strong PI and MLGO) and actively self-regulate their learning (high SR, MSR), yet they are less convinced of their ability to succeed or control outcomes (moderate SE, LC, FO, and PE). Moreover, an external locus of control and a lack of collaboration and of behavioral and social engagement and interaction in learning activities may hinder the development of AI literacy, as argued by [1,4,18,84]. Conversely, students’ motivational orientations both mirror their judgments of learning and instruction and can moderate how they respond to different teaching practices [30].
Cluster CLD displayed the inverse pattern: they reported strong competence and impact (high self-efficacy, internal locus of control, future orientation, and persistence of effort) but low meaning and somewhat lower self-determination. Thus, Cluster CLD students are confident and outcome-focused—believing in their capabilities and goals—but lack intrinsic interest and mastery orientation in their coursework, and they engage in self-regulation only to a moderate extent. On the other hand, social support from peers and teachers, together with an internal locus of control, may affect students’ repositioning by encouraging them to face challenges and practice active agency [30], and facilitating their development of AI literacy [16], especially within the CHAT framework [67]. The CLD profile can be seen as an instance of a “confident but unmotivated” subject, illustrating a secondary contradiction in which high competence (an inner tool for mediating action) is undermined by a lack of intrinsic drive or meaning. In CHAT terms, the tool (internal self-efficacy) is strong, but the motive–object alignment is partial: students know they can succeed, but the absence of meaningfulness or relevance fails to energize sustained, expansive engagement. This partial alignment may yield surface-level AI literacy gains, yet, as shown in emerging AI literacy frameworks, it falls short of holistic development—the kind that allows transfer to new contexts, critical thinking, and ethical use [81,123,124].
The LSBM cluster, with moderate scores but weak internal locus of control and self-efficacy, embodies a liminal or ambivalent subject position. In CHAT, this profile may be hampered by “boundary contradictions”—where neither the community nor the division of labor fully empowers the subject, resulting in stagnant or “stuck” progress toward AI literacy. The system’s mediating artifacts (curriculum, teaching interventions) may not sufficiently address individual frustrations or boost the motivational drivers required to resolve these contradictions and promote a transition into more agentic roles [44,122,126].
Notably, Clusters LSBM and CLD represent two contrasting forms of partial empowerment: one emphasizes intrinsic meaning and autonomy while lacking full confidence, whereas the other boasts confidence and control but with little personal meaning in learning. According to empowerment theory, the absence of any single dimension (e.g., low competence in Cluster LSBM or low meaning in Cluster CLD) can deflate overall empowerment [25] even if the other dimensions are strong. Taken together, these profiles illustrate different expressions of psychological empowerment among Slovenian VET students—from fully empowered learners who experience high meaning, mastery, choice, and impact in their education, to those who have only pockets of empowerment, to those who feel disempowered on most fronts. This person-centered view underscores that empowerment is a multifaceted, graduated experience rather than a uniform trait [25], varying widely in how students perceive their role and agency in the learning process.
The distinct features of Vocational Education and Training (VET) in Slovenia deeply shape the agency profiles observed in our sample. The Slovenian VET system stands out for its highly structured, practice-oriented curriculum, which prioritizes work-based learning (WBL), direct labor market relevance, and competence-based assessment [127]. The research literature corroborates that competence-based assessment and mastery experiences are potent sources of self-efficacy but do not directly ensure the internalization of meaning or enduring motivation, especially absent strong autonomy support [128,129,130]. Students in CLD profiles may become skilled at meeting clear external standards and feel competent navigating defined workplace paths, yet experience learning less as a pursuit of personal meaning and more as a means to an end. The instrumental orientation of VET thus channels agency chiefly into self-efficacy and perceived impact, aligning with findings that VET learners often emphasize external outcomes. Accordingly, Cheng and Nguyen [131] suggest that VET students’ agency is performance- or outcome-oriented—emphasizing “doing well” and meeting extrinsic goals—while academic-track students express greater intrinsic interest and mastery-oriented motivation, aligning more closely with “Highly Agentic” profiles. Slovenian VET pathways, while permeable and open to upward movement, are also sometimes a “default” route for students with lower prior achievement or weaker alignment between personal interests and program content [127]. This can foster the Low Agency/Under-engaged (LA) or Low-Self-Belief Moderate (LSBM) profiles, where students’ agency is globally depleted, particularly in self-efficacy and meaning [132,133]. Vocational students facing a misfit between career aspirations and current training, or whose self-selection into VET was constrained by external factors, may disengage or develop an external locus of control, particularly when their exposure to positive mastery experiences is insufficient to overcome negative expectancy beliefs [132,133,134].
Nonetheless, the Slovenian VET system’s permeability and opportunities for progression (e.g., upward mobility through vocational matura) [127] can foster the Highly Agentic (HA) profile—if students’ personal goals and interests align with the occupational focus of their program. In such cases, the system’s “real-life learning” orientation and feedback-rich environment may catalyze both intrinsic and extrinsic sources of agency: students know “why” they are learning and “to what end,” and see direct channels to meaningful personal and societal contribution [135,136].
In general academic (gimnazija) tracks, the curriculum is abstract, less directly tied to immediate employment outcomes, and more focused on autonomous learning, exploration, and long-term academic development [127]. These features, together with less frequent concrete feedback, may nurture higher intrinsic motivation and meaning (more “HA” profiles), but also result in greater uncertainty about competence and agency due to the deferred realization of personal impact [134,136]. Thus, academic-track students are more likely to exhibit agency profiles marked by strong self-determination and meaning, but potentially less immediate self-efficacy or impact—effectively, the reverse of the typical VET CLD pattern. Findings from comparative research suggest that general education students develop agency around mastery and intrinsic interests but may lack the “confident” performance-oriented agency typical in VET [131,134]. Thus, if these clusters were measured in a general academic-track sample, one might expect (1) fewer CLD profiles, because the environment offers fewer guarantees of success via externally structured competence and more demands for internalized motivation and long-term goal-setting, so confidence may be distributed more variably and not as tightly linked to perceived competence [131,134]; (2) a relatively larger subset of HA-profile students, due to the emphasis on self-regulation, exploration, and personal meaning, with agency anchored in intrinsic motivation and long-term identity growth [134,136,137]; and (3) the potential for LSBM/LA profiles, because when students face an abstract curriculum lacking perceived relevance or mastery experiences, agency deficits may remain, particularly if peer comparison undermines self-belief [133,134].

4.2. AI Literacy in the Human Service and Industrial Engineering Students

In addressing RQ2, the analysis revealed that industrial engineering students achieved significantly higher overall AI literacy scores than their human service peers, albeit with a small effect size. In practical terms, the industrial engineering track’s median score (10) exceeded the human service track’s (9) by roughly one point, a modest but consistent advantage (U = 17864, z = −3.74, p < 0.001, r = 0.19). This gap remained significant even after controlling for study year and sex (Quade rank ANCOVA, F (1, 423) = 4.70, p = 0.031), suggesting that the program track itself is a key factor in AI literacy differences. Notably, the industrial engineering students outperformed on seven specific AI literacy competencies—Recognizing AI, Interdisciplinarity, AI Strengths and Weaknesses, Decision-Making, Machine Learning Steps, Data Literacy, and Ethics—all areas emphasized in contemporary AI literacy frameworks [16]. These competencies span conceptual understanding (e.g., knowing what AI is, what it can and cannot do, and how it works) and socio-ethical reasoning (e.g., recognizing the human’s role in AI systems and understanding ethical implications) [16]. The fact that industrial engineering students scored higher on precisely these domains suggests their education better cultivates the core knowledge and critical thinking aspects of AI literacy identified by Long and Magerko’s framework [23]. By contrast, no significant track differences emerged for the remaining seven competencies (including more basic conceptual items and everyday technology use), indicating that both groups share a similar baseline on rudimentary AI awareness—likely due to the ubiquitous exposure to AI in daily life—while the technical track confers an extra boost in deeper and applied AI literacy skills. In sum, the overall between-track disparity is statistically reliable but small (r ≈ 0.2); it manifests in targeted competency areas, highlighting that industrial engineering education provides students with a slight edge in understanding and reasoning about AI in ways that human service education does not.
These findings can be interpreted through differences in curriculum and pedagogy between the two tracks, as well as variations in student engagement. Industrial engineering programs inherently integrate more computing, data, and technology content, which likely exposes students to AI concepts and tools as part of their training. Indeed, prior work observes that many AI literacy initiatives have historically been aimed at computer science or technical students [138], which aligns with our observation that technical-track students exhibit higher proficiency in key AI competencies. By contrast, human service curricula may focus on social, educational, or caregiving topics with less emphasis on AI, yielding fewer experiential learning opportunities in that domain. The literature suggests that instructional design plays a crucial role in cultivating AI literacy: Wang et al. [138], for example, underscore the importance of systematically developing and measuring AI-related competencies in learners. It is plausible that the industrial engineering track’s coursework is more deliberately aligned with such competencies (e.g., through programming assignments, data analysis projects, or technology ethics modules), whereas the human service track’s coursework might not target these skills explicitly. Furthermore, differences in pedagogical approach and student motivation may contribute to the gap. Technical education often employs project-based and interdisciplinary learning, which has been shown to enhance students’ problem-solving abilities and ethical awareness in AI contexts. Kong et al. demonstrated that a project-based AI course significantly improved secondary students’ capacity to apply AI concepts to real-world problems and deepened their understanding of AI’s ethical boundaries. Such hands-on, situated learning experiences are more typical of engineering programs and can foster greater self-efficacy and engagement with AI compared to more traditional instruction [138]. In addition, industrial engineering students might enter the program with, or develop, stronger intrinsic motivation for technology and AI, which can amplify learning outcomes; research indicates that intrinsic motivation is positively linked to students’ AI learning and computational thinking skills [54]. By contrast, human service students may have less interest or fewer opportunities to delve into AI beyond superficial usage, potentially limiting their competency growth. Finally, the Slovenian context provides further insight: a recent national report on generative AI in education highlights both the challenges and pedagogical opportunities of integrating AI tools across educational settings [5]. This suggests that educational stakeholders are only beginning to grapple with AI’s role in curricula, and the uneven adoption may favor technically oriented programs that are quicker to embrace AI-related content. Overall, the observed track disparity underscores how discipline-specific educational practices and learning environments can shape students’ AI literacy. The industrial engineering track’s integration of interdisciplinary tech content, practical problem-solving, and likely more AI-focused instruction has translated into slightly higher competency levels, particularly in understanding how AI works and its societal implications.
In contrast, the human service track students, lacking comparable emphasis on AI, perform on par in terms of general awareness but fall just short on the more advanced conceptual and ethical facets of AI literacy. This interpretation is consistent with broader evidence that motivation, curriculum design, and active learning experiences are critical to developing AI literacy [54,138], helping to explain why even a modest educational shift between tracks can yield measurable differences in students’ AI literacy profiles.
The observed differences in AI literacy between human service and industrial engineering tracks further illustrate CHAT’s notion of “boundary objects”—curricular and disciplinary artifacts that are interpreted and enacted differently by each community. Industrial engineering students, operating within a system structured around analytic and procedural knowledge, encounter a curriculum and object that align more closely with the technical competencies underpinning operational AI literacy. In contrast, human service students’ activity system emphasizes broader social, ethical, and humanistic values, which may not be directly incentivized or recognized by standardized AI literacy assessments—creating systemic contradictions between valued competencies and what is measured [123,139]. Of special note is the result that some AI literacy competencies (ethics, interdisciplinarity) are interpreted and assessed differently across tracks, reflecting the way boundary objects mediate and sometimes distort the alignment of meaning between communities [123,140].

4.3. Student Agency Profiles in Relation to AI Literacy

Expansive learning, a core CHAT concept, is evident when contradictions become the engine for transformation: for example, when students with previously low engagement or confidence reframe their role within the activity system (possibly through targeted instructional design or assessment innovation) [3,128]. Such expansive cycles may be catalyzed through learning interventions that realign system objects (e.g., emphasizing relevance, integrating affective and ethical dimensions) or transform mediating artifacts (e.g., AI tools and inclusive curricula that bridge technical and humanistic competencies) [122,123]. This dynamic is echoed in recent AI literacy competency models, which argue for progression toward critical, ethical, and context-sensitive AI literacy and stress the importance of tailoring content and mediation strategies to students’ starting positions and the sociocultural context of their community [122,123,124].
Recent research underscores that attributes such as perceived convenience, information redundancy, and especially system familiarity significantly influence students’ willingness to engage in active learning with AI, often outweighing technical sophistication alone. For instance, Wang and Sun [141] found that university students prioritized convenience and familiarity over novelty or advanced features when using Intelligent Personal Assistants (IPAs) for learning. Their findings demonstrate that redundancy in AI-driven information delivery can actually hinder active learning, while students’ perceptions of system quality and previous positive experiences with similar tools enhance engagement and literacy outcomes [141]. Furthermore, these tool characteristics interact dynamically with students’ agency profiles: highly agentic learners may leverage familiar, user-friendly AI systems to maximize autonomous and reflective learning, whereas those less confident in their digital abilities may be deterred by systems perceived as complex or inconsistent [141]. Moreover, these effects are not universally mediated by AI literacy—Al-Abdullatif and Alsubaie [142] show that while AI literacy enhances students’ ability to discern the usefulness and enjoyment of tools like ChatGPT-4o, it is the interaction of these perceptions with the concrete design features of AI (e.g., interface intuitiveness, reliability of feedback) that most strongly shapes use intentions and learning outcomes. It is well established that different educational tracks emphasize distinct dimensions of AI tool functionality and usability. Human service students (e.g., in education or social work) often benefit most from AI tools that foreground intuitive interfaces, transparency, and ethical guidance, supporting reflective and value-driven practices. By contrast, industrial engineering students more frequently leverage technically sophisticated tools that prioritize functionality, precision, and customizability, aligning with a performance-focused epistemic orientation [143]. The interaction between these design features and student agency proves crucial. Al-Abdullatif and Alsubaie [142] identify that an individual’s AI literacy deeply influences how they perceive the value and potential of AI tools: students with higher AI literacy are better able to discern the utility and enjoyment associated with a given system, which in turn affects their intention to use it [2]. This effect may be magnified in engineering contexts, where high-agency students actively explore and manipulate advanced features, thereby nurturing deeper technical literacy. Conversely, high-agency students in human service disciplines are likely to gravitate toward, and make better use of, AI systems whose design emphasizes transparency, ethical cues, and user-friendliness—features that facilitate critical and sociotechnical AI literacy [81,143]. Furthermore, the perceived “agency” of the AI system itself can affect user engagement and trust. Vanneste and Puranam [144] propose that users’ trust and willingness to collaborate with AI are shaped by how much agency they attribute to the AI tool, and this attribution interacts with their own agency profile and disciplinary expectations. This dynamic is especially salient in settings where students must balance autonomy with reliance on AI, such as when human service students assess ethically sensitive recommendations or engineering students troubleshoot opaque algorithmic processes.
These findings suggest that AI tool properties—such as interface design, perceived agency, transparency, and feature set—must be attuned to both the disciplinary context and the agency profiles of learners in order to support optimal AI literacy development. While systematic, comparative research is needed, current evidence implies that a one-size-fits-all approach to AI tool design may hinder rather than help students in different tracks achieve meaningful gains in AI literacy [23,81,142].
The pairwise contrasts paint a nuanced picture of how specific agency cognitions—particularly competence self-belief—map onto AI literacy outcomes. Contrast 1 (LA vs. LSBM) shows that adding a sense of meaning/impact to an otherwise disengaged profile does not raise AI literacy when perceived competence remains low. Self-determination theory (SDT) argues that competence, autonomy, and relatedness are joint prerequisites for optimal learning, with competence often the most proximal driver of achievement [53,63]. Empirical work on digital technologies likewise found that attitudes or perceived value alone do not predict skill acquisition once competence belief is partialed out [35]. Yet, we located no AI-specific studies testing “meaning-without-competence” profiles, so the null difference observed here extends the SDT proposition into the AI literacy arena and signals a gap for future research.
Contrasts 2 and 3 confirm that competence self-efficacy functions as a standalone accelerator of AI literacy. Students classified as CLD or SE—high on competence but low on other agency facets—outperformed the globally low-agency group (LA) and the “lukewarm-meaning” group (LSBM). This pattern aligns with a large body of work showing digital or AI self-efficacy to be a direct predictor of technology-related knowledge and performance [16,63,64]. In motivational modeling of AI courses, Lin et al. [145] likewise reported that confidence was the strongest structural antecedent of both persistence and test scores, overshadowing relevance beliefs. Hence, our data reinforce existing evidence that a competence belief threshold is often sufficient to boost AI literacy, even when intrinsic motivation is tepid.
Contrast 4 (HA vs. LA) shows the expected, large advantage of students who score high on all empowerment cognitions. Multidimensional agency has repeatedly been linked to superior learning and technology uptake [19], so this finding is well anchored in the literature. The sizeable gap also confirms that AI literacy is sensitive to aggregated motivational capital, mirroring results from project-based AI literacy courses where gains were largest for students reporting simultaneous boosts in self-efficacy, value, and autonomy support [146].
Contrast 5 (HA vs. LSBM) further illustrates that lukewarm or fragmented agency profiles are insufficient when compared with a fully empowered profile, corroborating SDT experiments in which autonomy-supportive climates coupled with competence scaffolds yielded superior conceptual transfer [146]. Again, existing AI literacy studies echo this pattern, but mostly in small-scale interventions; our cluster-analytic approach extends those findings to a naturalistic sample.
The most intriguing result is Contrast 6 (HA vs. CLD): once students believe they are competent, additional layers of autonomy and meaning do not translate into higher AI literacy. While SDT posits additive or even synergistic effects of the three basic needs, evidence for plateau effects beyond a competence threshold is sparse. Vansteenkiste et al. [146] reported that competence could sometimes enable learning even under low autonomy, but that work focused on short-term tasks [64,146]. Our data suggest that when focusing solely on foundational knowledge acquisition (rather than deep innovation or critical engagement), a sense of competence is the principal engine of performance, even if intrinsic motivation or value-driven engagement is subdued. We found no AI-specific studies comparing high-competence/low-autonomy profiles with fully empowered profiles. This calls for a nuanced revision of theoretical frameworks governing AI literacy and technology education. Conventional models, including those found in recent AI literacy competency frameworks, emphasize a gradual, multifaceted progression where interest, meaning, critical thinking, and autonomy coalesce to promote sophisticated skills and lifelong learning behaviors [81,123]. However, our results introduce the compelling argument that for foundational levels of AI literacy—the threshold at which students can understand and utilize AI competently—self-efficacy alone can drive outcomes comparable to profiles with heightened motivation across all agency domains. Such an insight echoes resource-based or expectancy-value theories but goes further by empirically demonstrating the independence and sufficiency of competence at specific learning thresholds [15]. This not only challenges existing holistic approaches but also refines our understanding of how student profiles might be differentiated—and specifically how learners with low intrinsic motivation but high confidence can be effectively supported. As such, the work advances the emerging field of AI literacy by identifying ‘competence sufficiency’ as a key mechanism in foundational acquisition, inviting further research into where and how other motivational factors become essential as proficiency deepens [123,124].
Thus, the functional equivalence of CLD and HA represents a novel contribution that calls for experimental replication: it suggests that, for cognitively demanding but largely individual activities such as coding or data tasks, feeling capable may suffice to reach top-tier literacy, whereas autonomy and meaning may be more critical for longer-term engagement or transfer—questions future longitudinal AI literacy research should test.
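Procedurally, the six profile contrasts discussed above amount to pairwise group comparisons with a family-wise error correction. The sketch below illustrates one generic way to run such contrasts; the data frame, column names, and choice of the Holm correction are illustrative assumptions rather than a description of the study’s exact procedure.

```python
# Sketch of profile-wise pairwise contrasts with a Holm correction (hypothetical
# column names; the study's exact contrast procedure is described in its methods).
from itertools import combinations

import pandas as pd
from scipy.stats import mannwhitneyu
from statsmodels.stats.multitest import multipletests

def pairwise_profile_contrasts(df: pd.DataFrame) -> pd.DataFrame:
    """df holds one row per student: 'profile' (e.g., HA, CLD, LSBM, LA)
    and 'ai_literacy' (total test score)."""
    rows = []
    for g1, g2 in combinations(sorted(df["profile"].unique()), 2):
        x = df.loc[df["profile"] == g1, "ai_literacy"]
        y = df.loc[df["profile"] == g2, "ai_literacy"]
        u, p = mannwhitneyu(x, y, alternative="two-sided")
        rows.append({"contrast": f"{g1} vs {g2}", "U": u, "p_raw": p})
    out = pd.DataFrame(rows)
    # Holm step-down correction controls the family-wise error rate
    # across all pairwise contrasts.
    out["p_holm"] = multipletests(out["p_raw"], method="holm")[1]
    return out
```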

4.4. Predictive Power of Student Transformative Agency Constructs on AI Literacy

We sought to determine the influence of technical high school students’ transformative agency constructs (self-efficacy, perseverance of interest, perseverance of effort, mastery goal orientation, locus of control, future orientation, self-regulation, and metacognitive self-regulation) on their AI literacy performance while controlling for study track, year, and sex. Directly comparable evidence is scarce: because AI literacy is an interdisciplinary, dynamic, and still-emerging area of research in VET, the current literature offers limited support for specific predictions, and AI itself may evolve into different roles that shift priorities in education, ethics, and society [2,31]. Previous studies have, however, demonstrated the impact of AI-assisted scaffolding on the development of student agency [4] and shown that self-efficacy affects both engagement in learning [35] and AI outcomes, including AI literacy [63].
The path model revealed that only a subset of agency-related constructs had significant (albeit small) effects on overall AI literacy. In particular, self-efficacy (SE), mastery learning goal orientation (MLGO), and metacognitive self-regulation (MSR) showed positive associations with AI literacy, whereas self-regulation (SR) and locus of control (LC) were negatively associated. The remaining constructs—perseverance of interest (PI), perseverance of effort (PE) (often considered aspects of grit), and future orientation (FO)—did not exhibit significant effects. Notably, these results remained even after accounting for program track, year, and sex, suggesting that the influence of transformative agency on AI literacy holds across different student backgrounds.
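As a rough illustration of the model’s structure (not its actual estimation, which used path analysis), the predictors and covariates can be arranged as in the following regression sketch; all column names are hypothetical placeholders.

```python
# Rough regression analogue of the reported path model (a sketch only; the
# published coefficients come from path analysis, and column names here are
# hypothetical placeholders).
import pandas as pd
import statsmodels.formula.api as smf

def fit_agency_model(df: pd.DataFrame):
    # Agency constructs as predictors; track, year, and sex enter as
    # categorical covariates, mirroring the controls described in the text.
    formula = (
        "ai_literacy ~ SE + MLGO + MSR + SR + LC + PI + PE + FO"
        " + C(track) + C(year) + C(sex)"
    )
    return smf.ols(formula, data=df).fit()

# Example usage, assuming df contains the listed columns:
# print(fit_agency_model(df).summary())
```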
These findings can be interpreted in light of the broader concept of transformative agency, which encompasses a learner’s capacity to take charge of and transform their own learning trajectory [12,86]. The positive predictors (SE, MLGO, and MSR) are all hallmarks of an empowered, agentic learner. High self-efficacy—belief in one’s capability to learn and succeed—likely encourages students to engage more deeply with unfamiliar AI tools and concepts without fear of failure. This aligns with Bandura’s social cognitive theory, where self-efficacy motivates perseverance in challenging tasks [120,147]. In our context, a student confident in their ability to master new technology is more inclined to experiment with AI and persist through setbacks, leading to higher AI literacy [12]. Similarly, a strong mastery goal orientation (as opposed to a performance orientation) means the student focuses on learning and understanding AI for its own sake, rather than just for grades or comparison with peers. Such a learner is more willing to tackle complex AI topics, ask “how” and “why” questions, and learn from mistakes—behaviors that facilitate meaningful literacy in AI systems [12]. This interpretation resonates with achievement goal theory: a mastery orientation promotes deeper engagement and resilience when confronting novel challenges [86]. The contribution of metacognitive self-regulation further underscores the importance of reflective learning strategies. Students who plan, monitor, and adjust their learning approaches (for example, by checking their understanding of how an algorithm works, or evaluating the output of an AI critically) can more effectively acquire complex new knowledge and problem-solving ability [148]. Prior research in educational psychology has consistently found that such metacognitive strategies correlate with better learning outcomes in technical domains [12,16,18,149]. Our results extend this understanding specifically to AI literacy: students skilled in regulating their own cognition appear better equipped to navigate the abstract and fast-evolving content of AI. Each of these positive effects was modest in size, implying that, while transformative agency supports AI literacy development, it is one factor among many. Nonetheless, even small boosts in these traits could cumulatively make a meaningful difference in students’ readiness to engage with AI.
In contrast, two dimensions of agency had significant negative relationships with AI literacy in the model: locus of control (LC) and self-regulation (SR).
Locus of control refers to individuals’ beliefs regarding the source of control over events in their lives: an internal locus means attributing outcomes to one’s own actions, while an external locus attributes outcomes to factors beyond one’s control, such as luck, fate, or powerful others [150,151]. AI literacy, in turn, as defined by Hornberger et al. [83], incorporates not only basic technical understanding but also epistemic and ethical dimensions—such as understanding AI’s decision making, limitations, bias, and broader impacts on society.
Locus of control may shape students’ epistemological beliefs about knowledge and learning. Multiple studies indicate that individuals with a strong internal locus of control tend to believe that learning outcomes are determined by personal effort and cognitive ability, fostering more mature epistemological beliefs [152,153]. However, this can be double-edged with regard to AI literacy. Students with a high internal locus, particularly in high-achieving academic tracks, may display a “personal efficacy bias”—believing they can master anything through effort—potentially underestimating the complexities, uncertainties, and social–ethical ambiguities inherent to AI as discussed by Hornberger et al. [83]. AI literacy involves grappling with uncertainty and recognizing the probabilistic, opaque, and sometimes “uncontrollable” character of AI systems. Students who favor a strong sense of personal control may perceive the inherent indeterminacy of AI as threatening or counter to their worldview, potentially leading to resistance, dismissiveness, or reduced engagement with nuanced AI literacy content [9,12,154].
Moreover, in industrial engineering tracks, where the culture often promotes deterministic, tool-based mastery and problem-solving, students with a higher internal locus may lean toward “instrumentalist” learning—seeking clear rules and single solutions [155,156]. AI literacy, in contrast, requires openness to ambiguity and an understanding of AI’s limits, such as recognizing algorithmic bias or the impossibility of total control/predictability in AI outcomes [3]. This mismatch might foster discomfort or lower scores in assessments of AI literacy, which privilege these epistemic and ethical perspectives. Conversely, students with a more external locus—who accept that many outcomes are not solely under individual control—may be more receptive to messages about uncertainty, probabilistic thinking, and the socio-technical limitations embedded in AI [3], and thus perform better on AI literacy assessments as defined by Hornberger et al. [14].
Research also suggests that locus of control mediates relationships between stress, motivation, and broader academic/life satisfaction [157,158]. Students with a high internal locus may experience heightened stress when confronting systems or knowledge domains (like AI) that do not yield to individual mastery, potentially impeding their engagement with complex, open-ended AI issues—again leading to lower AI literacy as measured by contemporary frameworks.
Meta-analytic and path-analytic studies confirm that while an internal academic locus of control generally predicts higher traditional academic outcomes [151,159,160,161], its relationship to more modern literacies (such as AI literacy) may diverge due to the aforementioned epistemic, motivational, and cultural mediation pathways. Furthermore, recent research with domain-specific measures (e.g., e-health literacy) shows that locus of control can both directly and indirectly shape new forms of literacy through self-efficacy and beliefs about control [162]. Some academic achievement studies report a null relation between LC and grades when the curriculum is strongly teacher-directed or when items cluster around hopelessness rather than agency [163].
Our findings may be especially pronounced in industrial engineering tracks, where educational “success” has traditionally been tied to deterministic knowledge and output-driven competency, making students less receptive to the kind of epistemic humility and social critique required of AI literacy. In human service tracks, the curriculum may more frequently highlight context-dependence, ethical ambiguity, and collaborative reasoning, possibly making students’ locus of control less dominantly predictive or even neutral in relation to AI literacy.
The negative association for self-regulation (SR) is more surprising at first glance. Generally, self-regulated learning behaviors (such as managing time, setting goals, and keeping oneself on task) are positive attributes [9,24]. However, in our analysis, SR was examined alongside metacognitive self-regulation (MSR), and it appears that once higher-order reflective strategies (MSR) are accounted for, more routine or procedural self-regulation might not confer additional benefit, and could even relate to slightly lower AI literacy [48]. One possible interpretation is that overly rigid or superficial self-regulation strategies might impede learning in a complex domain such as AI [164]. Students who simply go through the motions—e.g., diligently following step-by-step instructions or sticking to familiar study habits—without adapting their strategies may not cope well with the open-ended, problem-solving nature of AI literacy tasks. In other words, a learner who is behaviorally self-disciplined but lacks flexibility might not explore beyond the minimum, whereas those with metacognitive awareness can recognize when their current approach is not working and try new strategies. This explanation finds some support in research on self-regulated learning: effective learning requires not just regulation but adaptive regulation. Learners who cannot deviate from a fixed plan may struggle with ill-structured problems [39,165]. Thus, our finding may reflect that quality of self-regulation matters more than quantity—simply being organized is not enough unless coupled with reflective adaptation [48]. It is also worth noting that statistical suppression effects could be at play: MSR and SR are related, so in the model, one picks up the positive adaptive aspect (MSR), leaving the residual variance of SR possibly capturing unproductive forms of regulation. In any case, the take-away is that not all facets of student agency uniformly facilitate AI literacy; some facets, when isolated from the others, even show a slight detrimental relationship, which invites careful consideration (we return to this in practical implications). These negative findings highlight that true transformative agency is not just any self-directed behavior, but the kind that is accompanied by reflection, adaptability, and an internal drive to learn [73]. AI literacy entails not only procedural know-how but the capacity to navigate AI’s inherent unpredictability through cognitive flexibility and adaptive expertise [81]. Cognitive flexibility involves shifting mental sets and updating strategies when faced with novel or inconsistent AI outputs, enabling learners to reconceptualize problems rather than apply rote procedures [166]. Adaptive expertise—the ability to innovate and transfer knowledge to unfamiliar situations—is likewise essential for interpreting AI suggestions, diagnosing model errors, and integrating ethical considerations into AI-driven decisions [167]. In contrast, lower-order self-regulation strategies that rely on fixed schedules or checklists can foster cognitive entrenchment, narrowing students’ strategic repertoire and anchoring them in ineffective routines when tasks exceed well-structured parameters [167]. Without metacognitive reflection to monitor task demands and trigger strategy revision, such procedural SR becomes maladaptive in the context of ill-defined AI tasks, ultimately impeding the iterative sense-making and strategic adaptation central to robust AI literacy [12,18,166].
Finally, it is noteworthy that perseverance of interest (PI), perseverance of effort (PE), and future orientation (FO) did not significantly predict AI literacy in our study. At face value, this suggests that simply being passionate or persistent about one’s goals (traits often dubbed grit), or thinking ahead to future goals, did not translate into better AI literacy outcomes once other factors were controlled [12]. This result runs somewhat contrary to popular narratives that grit and future-focused motivation are keys to success [30,39,73]. One reason might be that perseverance is too domain-general to directly affect a specific outcome such as AI literacy—what matters more is how students channel their effort. A student might generally consider themselves hard-working, but if they lack interest in AI or effective strategies, that trait alone will not ensure learning [39]. Moreover, a lack of teacher support for learning in the digital environment can negatively affect the acquisition of AI literacy [12]. Empirical research has similarly found that grit’s predictive power for academic performance is modest after accounting for other factors; often, perseverance overlaps with conscientiousness or effective study habits rather than offering a unique boost [39,168]. Our findings align with this: sustained effort and interest were not, on their own, enough to boost AI literacy. It may be that, in a highly novel field such as AI, directed effort (guided by self-efficacy and good strategies) counts more than sheer persistence [16,17,63]. Motivation, then, can be seen as the engine that keeps learners engaged in the high-effort, skill-building loops typical of computational thinking; for AI literacy, where learning hinges more on exposure, teacher scaffolding, and conceptual clarification, motivation still matters but seldom appears as the critical bottleneck in empirical models [54]. It might be that focusing on supportive learning environments and relevance cues tends to yield bigger gains in AI literacy than trying to amplify persistence alone [19].
In rapidly evolving and complex technical domains such as artificial intelligence (AI), the effectiveness of student effort hinges less on the sheer volume or persistence of that effort, and more critically on how effort is metacognitively directed through adaptive, reflective learning strategies. Whereas traditional models of educational persistence emphasize endurance and time spent, current scholarship in AI literacy demonstrates that learning gains are maximized when students engage in strategic self-regulation, continual monitoring of their understanding, and flexible adjustment of their approaches to novel content or challenges [126,169]. This is particularly pertinent given the dynamic landscape of AI, which demands learners not only accumulate technical knowledge but also cultivate competencies in self-management, critical evaluation, and ethical reasoning [123,170,171]. Empirical studies and competency frameworks consistently highlight metacognition as central to AI literacy—enabling learners to anticipate skill needs, recognize misunderstandings, and adapt strategies to emerging technologies rather than simply persisting in rote activities [23,126,170]. Moreover, advances in AI-powered personalized and adaptive learning systems reinforce this viewpoint, as their efficacy rests on supporting students’ metacognitive engagement with individualized feedback and self-directed exploration rather than on increasing workload or passive repetition [172,173,174]. Notably, meta-analytic evidence reveals that the effectiveness of AI in educational outcomes depends more on learning methods and contextual adaptations than on the absolute quantity of student effort, with structured, learner-centered approaches proving decisive [122,175,176]. Therefore, in AI education, fostering metacognitively guided, adaptive perseverance is not merely preferable but essential, ensuring learners are equipped to thrive amidst ongoing technological change and complexity [123,169].
The non-significance of future orientation could indicate that, while thinking about long-term goals is valuable, it did not have an immediate impact on students’ current literacy levels. A similar result was found by Dang et al. [55], where time perspectives did not affect AI adoption intention. One explanation is that many students, especially at the beginning of VET, might not yet connect learning about AI with their future goals, or such future-oriented motivation is too abstract to affect their day-to-day learning behaviors [55]. Research on future time perspective in education notes that the impact of future goals on current learning depends on students seeing a clear, personal connection and having the strategic knowledge to link present tasks to future rewards [177]. In the context of our study, students with high future orientation may value education generally but might not specifically prioritize AI literacy unless they have specific aspirations in that area [55]. It could be that future time perspective works only indirectly—via the reasons users generate—and that its path is further moderated by education [55].
As the findings suggest, the lack of effect for perseverance and future orientation does not mean these qualities are unimportant in general; rather, it suggests that, when it comes to developing AI literacy, motivational beliefs (such as self-efficacy and mastery goals) and metacognitive skills play a more direct role than generic persistence or abstract future goals [178]. Agency-driven mastery-oriented learning goals serve as the motivational engine for AI literacy, fostering intrinsic interest, effective strategy use, and the cognitive flexibility needed to navigate ill-structured AI challenges [179]. In contrast, an internal locus of control paired with rigid, procedural self-regulation—relying on fixed routines and willpower—can induce cognitive entrenchment and ego depletion, leading to adaptability deficits and performance deterioration in complex AI tasks [167,180]. This effect appears particularly salient in more deterministic, tool-oriented academic tracks where personal mastery is valorized and ambiguity is minimized [150,151,162]. This nuance contributes to our understanding of transformative agency: not all of its components drive all outcomes equally—some elements (confidence, goal focus, and reflective learning) are especially salient for empowering students in new technological literacies, while others may need to be activated in alignment with those key drivers.
Recent scholarship robustly affirms that AI literacy is a complex, systemic construct shaped by multiple layers of influence across individual, interactive, and sociocultural dimensions [14,24,123,124,181]. The small explanatory power of individual agency predictors, in the context of a model that still accounts for a meaningful portion of variance in AI literacy outcomes, is indicative of this distributed causality. For example, Yuan et al. [124] empirically validate a six-dimensional scale for AI literacy that includes not just efficacy and ethical consideration, but also cognitive and normative competencies related to the features and processing of AI and its algorithmic influence. Crucially, their framework recognizes these elements as interacting within both individual and sociocultural contexts, making it clear that isolating one factor—such as agency—captures only a limited slice of the phenomenon. Likewise, Chee et al. [123] synthesize 29 studies to show that required AI literacy competencies and their developmental pathways vary substantially across learner groups, educational stages, and disciplines, reinforcing the view that AI literacy is not a static trait, but emerges interdependently across educational, social, and technological settings [123].
Moreover, studies also highlight that robust improvements in AI literacy require multilevel interventions—addressing not just individual confidence or efficacy, but also providing authentic, scaffolded experiences, ethical frameworks, and equitable technology access [124,182,183]. Methodologically, this distributed explanatory structure is reflected in studies showing small individual effect sizes even among predictors identified as significant, while overall variance explained is moderately high or respectable. This is a classic signal of what CHAT and similar systemic frameworks describe as emergent outcomes—properties that cannot be reduced to the sum of discrete individual contributions, but instead arise from the dynamic interplay among system elements (e.g., learners, tools, mediating artifacts, communities of practice, and rules/structures) [126,182,184]. Kumar et al. [184] conceptualize AI literacy as a practice arising from the intra-action of bodies, contexts, technologies, histories, and positions—insisting that literacy is not an isolated cognitive attribute, but an emergent property of the dynamic human–technology–society assemblage. Similarly, Lee et al. [143] report that the effectiveness of AI literacy interventions is consistently mediated by structural factors such as curriculum adaptivity, access to hands-on experiences, and socioemotional supports—illustrating the limitations of models focusing on individual agency to account fully for learning outcomes.
In sum, the evidence decisively indicates that the observed statistical pattern—in which a sizable portion of variance is explained despite small individual effect sizes—underscores the systemic, contextually embedded, and multifactorial nature of AI literacy. This finding robustly substantiates the value of CHAT for both research and practice, as it accommodates the interdependencies, mediating tools, and structural enablers that collectively drive literacy development in the era of AI [123,124,126,143,182,184].

4.5. Practical Implications for Curriculum, Pedagogy, and Teacher Education

The findings carry important practical implications for designing curricula and pedagogy to foster AI literacy. If self-efficacy, mastery orientation, and metacognitive self-regulation are key enablers of AI literacy, educators and curriculum developers should actively cultivate these traits alongside teaching AI concepts. In essence, building students’ transformative agency may be just as critical as teaching technical facts about AI, especially since the ecological and environmental costs of data- and device-intensive forms of AI may affect society and the economy [185]. We can thus outline several implications and strategies for practice:
  • Cultivating Self-Efficacy in AI Learning: Instructors and instructional designers should incorporate practices that boost students’ confidence in working with AI by providing mastery experiences, offering social persuasion that encourages learners’ efforts, and exposing students to role models. Mastery experiences: Begin the term with hands-on, plug-and-play AI activities that prioritize success with minimal technical barriers. For instance, students could use tools like Teachable Machine or web-based platforms (such as Cognimates or Scratch AI) to create simple image recognition or text classification projects; this provides immediate, successful outcomes and establishes a foundation of “I can do this” feelings before more complex coding or theory is introduced [7,186,187]. Social persuasion: Use structured peer feedback during labs or in online forums where students routinely praise each other’s troubleshooting abilities or creativity (e.g., “Noticing the data imbalance was a great catch!”); instructors can reinforce this by sharing personalized video or written feedback that recognizes progress and persistence rather than just correctness [7,44]. Role models: Regularly feature short talks (live or recorded) from professionals—ideally from underrepresented backgrounds—who apply AI in different contexts (e.g., health, arts, social sciences), and have them address their initial fears, challenges, and when they first felt capable with AI, making the learning journey relatable and fostering a sense of belonging [7,188,189]. (A minimal code sketch of such a first-success activity appears after this list.)
  • Educational interventions should be strategically oriented towards rapidly building and affirming learners’ confidence with AI concepts and tools. This is especially salient in higher education contexts where time and curricular space for AI are limited, but the societal and workforce demand is urgent [123,190]. The finding that self-efficacy is the crucial catalyst for entry-level AI literacy opens new pathways for supporting students who may not initially self-identify as passionate coders or tech enthusiasts but display situational confidence through prior exposure or effective scaffolding. As echoed in global studies, varied learner groups benefit from tailored approaches, and ‘competence-first’ pathways can broaden access and reduce affective barriers for underrepresented populations [123,142]. As a warm-up, launch courses with activities like experimenting with generative AI tools (such as DALL·E for generating images from prompts or ChatGPT for creating a study companion), letting students see AI results rapidly without yet coding [186,191]. These activities lower barriers, create rapid success experiences, and provide a springboard for deeper learning. Structure the first sessions as “AI Bootcamps” where learners complete a sequence of scaffolded challenges—such as customizing pre-built AI templates or analyzing simple datasets. This competence-first approach can be especially liberating for students without a strong tech background [7,186,192].
  • While competence may suffice for foundational literacy, educators and policymakers can structure curricula with early, intensive self-efficacy-boosting components, followed by modules that cultivate deeper intrinsic motivation, critical thinking, and meaning-making as students progress toward advanced, agentic engagement with AI [24,123,124]. Use the first weeks for exploration with engaging, application-focused AI tools (e.g., AI-driven music composition, image manipulation, or virtual medical diagnosis platforms [186,193]), ensuring initial encounters are positive and confidence-boosting before heavier theoretical material is introduced [7,186,191]. Next, implement assessments that start with low-stakes, growth-focused tasks (e.g., “Share a model you built and one thing you learned”) and gradually incorporate higher-order tasks (like critiquing algorithmic bias or debugging complex code), helping all students feel early success [7,191,192]. Finally, leverage adaptive, AI-powered platforms where tasks and feedback are tailored according to each student’s confidence and progress, letting students choose between “practice” and “challenge” tracks based on their self-assessed readiness [187,194,195].
  • Encouraging a Mastery Goal Orientation: Given the positive role of mastery orientation, educational practice should shift the emphasis from performance outcomes to the learning process when it comes to AI. This means creating a classroom culture that values curiosity, improvement, and intellectual risk-taking over just attaining the right answer or scoring high on a test. To foster intrinsic motivation and enduring learning, build a classroom climate focused on experimentation, curiosity, and growth over competition: (1) Structure rubrics to reward “evidence of revision,” “quality of reflection,” and “problem-solving strategies,” not just the correctness of code or final AI model performance [187,192]. (2) Require logs where students detail their decision processes, false starts, and what they learned from errors (e.g., “Describe two mistakes made while tuning your model and how you addressed them”). Collaborative reflection can occur on shared documents or forums [15,187]. (3) Schedule class sessions where students publicly share failed attempts or flawed AI models and discuss what these taught them about AI, normalizing and destigmatizing failure as a learning tool [191,192].
  • Integrating Metacognitive Strategy Training: Our findings on MSR suggest that teaching students how to think about their own learning is especially beneficial in complex domains such as AI. Structured metacognitive training, in which students learn to evaluate and steer their own AI learning processes, can be organized as follows: (1) Make it routine for students to complete a post-mortem after each substantial project, detailing what worked, why the model may have failed, and the debugging strategies used. Provide prompts such as: “What variables most affected your results? How would you redesign your model based on outcomes?” [15,187]. (2) During live sessions, prompt students to narrate their reasoning as they develop or troubleshoot code. Optionally, have students record and replay these sessions for critical self-reflection or peer review [44,187]. (3) Pair students for reciprocal reviews—not to judge the final product but the reasoning process. Peers can highlight robust problem-solving or flag skipped assumptions, fostering awareness of diverse strategies [15,188,189].
  • Addressing Negative Factors—Fostering Adaptive Agency: The negative associations we found (SR, LC) highlight areas where educators should be cautious and proactive. By explicitly teaching that control in AI contexts means self-directed inquiry, ethical engagement, and adaptive learning—rather than rigid mastery—educators can harness the motivational power of internal locus of control while fostering the epistemic and ethical dispositions required for high AI literacy. This alignment, supported by mastery goals and social participation and reflected in both curriculum and climate, can decisively transform internal locus of control from a liability into a robust predictor of critical, responsible AI literacy in industrial engineering students [151,155,158,159,163,164].
  • Teacher Education and Systemic Supports: To implement the above changes, teacher education programs and professional development workshops should incorporate the principles of transformative agency [196]. Practical actions for embedding transformative agency in teacher education and professional development include co-creation of curriculum and learning experiences, critical reflection and generativity practices, action research and practitioner inquiry, professional learning communities with an agency focus, mentorship models emphasizing agency and inclusion, and embedding transformative agency in AI integration [7,184,185,186,197,198,199].
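Because the first bullet recommends low-barrier, first-success activities such as simple text classification projects, the following minimal sketch shows what such an exercise might look like in code; the example sentences, labels, and library choice (scikit-learn) are illustrative assumptions, not a prescribed classroom activity.

```python
# A minimal "first success" AI activity (illustrative sketch; data invented):
# students train a tiny text classifier and immediately see it work.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

sentences = [
    "The robot vacuum mapped my room",       # tech
    "New sensors detect obstacles early",    # tech
    "The team scored in the last minute",    # sports
    "Fans cheered after the final whistle",  # sports
]
labels = ["tech", "tech", "sports", "sports"]

# Bag-of-words features feeding a naive Bayes classifier.
model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(sentences, labels)

print(model.predict(["The goalkeeper saved a penalty"]))  # -> ['sports']
```

In a classroom, students would swap in their own example sentences and immediately test the classifier on new inputs, producing the kind of rapid feedback loop these bullets describe.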
In sum, professional development should treat AI literacy as not only a content area but also an opportunity to cultivate broader learner skills. When teachers buy into this dual goal, they become more than transmitters of knowledge; they become facilitators of student agency [30,200]. Teacher education and professional development programs must move beyond fidelity-oriented training toward structures and practices that promote sustained, reflective, and collaborative transformation of professional practice. Such approaches not only foster agency at the individual and collective levels but also support the ethical, inclusive, and impactful adoption of innovations like AI in education [7,201,202,203].

4.6. Limitations of the Study and Future Research

A few limitations were also identified in our study. First, all participants were drawn from Slovenian vocational upper-secondary schools; cultural, curricular, and technological conditions may differ in general academic tracks or in other countries. The findings, therefore, cannot be generalized beyond similar VET settings without caution. Second, agency profiles and AI literacy levels were captured at a single time point. We cannot determine causal ordering—whether stronger agency promotes literacy, or whether success with AI tasks boosts agency. Third, although track differences were modest, industrial engineering students had slightly more AI experience. Unequal exposure could inflate cluster differences attributed to agency alone. Fourth, sociocultural variables were omitted. Transformative agency dimensions explain only individual motivation and regulation. Factors such as prior computing experience, access to AI tools, or classroom pedagogy—reported to matter in other AI literacy work—were not modeled.
In general, this study’s findings should be interpreted within the bounds of its context: Slovene technical–vocational programs in industrial engineering and human services tracks. Transfer to general (grammar) schools, other vocational specializations, younger learners or learners with special educational needs may be constrained by differences in curricular focus, prior exposure to AI concepts, and the extent to which metacognitive reflection is already cultivated. Metacognitive strategies are hardest to embed in highly procedural, time-pressured contexts such as hands-on vocational trades, physical education and other psychomotor subjects, rote-focused basic literacy classes, and algorithmic foundational mathematics. Consequently, the profiles we identified—and the instruments we employed—require re-validation before they can be applied confidently to settings where academic orientation, pedagogical routines, or students’ developmental stages differ markedly from those examined here.
We propose several directions for future research: (1) Longitudinal and intervention studies. Track students across multiple semesters or embed agency-boosting interventions (e.g., mastery-goal framing, metacognitive training) to determine causal effects on AI literacy growth. Triangulate surveys with behavioral log data (e.g., time-stamped AI–tool interactions) and portfolio-based performance tasks. Include teacher ratings or peer assessments of agency. (2) Broader construct modeling. Extend the predictive model to include sociocultural and ethical dispositions, learning environment variables, and prior digital skill and access as moderators. (3) Equivalent-opportunity designs. Design quasi-experimental or randomized classroom interventions that provide equivalent AI learning opportunities, then observe profile shifts. (4) Cross-context replication. Replicate the multilevel triple-theoretical framework in other countries, general academic schools, and adult VET, using multilevel modeling to compare school-level influences. (5) External outcomes. Test whether profiles predict external outcomes (e.g., course grades, internship performance) to establish practical validity.
Pursuing these avenues will sharpen causal inference, broaden generalizability, and yield richer, multimodal evidence on how transformative agency supports AI literacy in diverse educational contexts.

5. Conclusions

In this study, cultural–historical activity theory, psychological empowerment theory, and the Five Big Ideas of AI education were combined to examine how vocational students’ agency influences AI literacy. Cluster analysis revealed four distinctive agency profiles—from fully empowered (high importance, competence, self-determination, and influence) to largely disempowered—that explained significant additional variance in AI proficiency beyond program track, year, and gender, confirming the additive empowerment model in an AI-enriched VET environment. Supplemental predictive analyses revealed that specific facets of empowerment, namely, self-efficacy, learning goal orientation, and metacognitive self-regulation, made small but clear positive contributions to AI literacy, while internal locus of control and narrow behavioral self-regulation were modest negative predictors, and grit-style persistence was not significant. Taken together, these findings extend empowerment and transformation research to the field of AI and suggest that confidence, an attitude of mastery, and adaptive self-regulation enable learners to transform, rather than merely consume, digital tools and can help close skills gaps between industrial technology and human services degree programs. In practice, AI curricula should include mastery-oriented, student-centered learning, explicit metacognitive strategy training, and autonomy-enhancing tools to shift control beliefs inward and support low-skill students.
Investigating learners’ agency through a multilevel triple-theoretical model, using different dimensions and both person- and variable-oriented approaches to analysis in VET, can contribute to understanding the factors that optimize learner agency in AI-driven learning tasks; this, in turn, can inform the development of educational practices that enhance both learner agency and AI literacy. Future longitudinal, cross-cultural, and multimodal studies are needed to verify causal pathways, assess generalizability, and optimize interventions that boost AI literacy.

Supplementary Materials

The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/systems13070562/s1, Secondary Student Agency and AI literacy.

Author Contributions

Conceptualization, S.A. and D.R.; methodology, S.A. and D.R.; validation, S.A. and D.R.; formal analysis, S.A. and D.R.; investigation, S.A. and D.R.; resources, S.A. and D.R.; data curation, S.A.; writing—original draft preparation, S.A. and D.R.; writing—review and editing, S.A. and D.R.; visualization, S.A. and D.R.; supervision, S.A.; project administration, D.R.; funding acquisition, S.A. All authors have read and agreed to the published version of the manuscript.

Funding

The authors acknowledge the financial support of the Slovenian Research Agency under the research core funding “Strategies for Education for Sustainable Development Applying Innovative Student-Centred Educational Approaches” (ID: P5-0451) and under the project “Developing the Twenty-First-Century Skills Needed for Sustainable Development and Quality Education in the Era of Rapid Technology-Enhanced Changes in the Economic, Social and Natural Environment” (Grant no. J5-4573), also funded by the Slovenian Research Agency. We also acknowledge the financial support of the Ministry of Education of the Republic of Slovenia and NextGenerationEU (ID—NRP: 3350-24-3502).

Institutional Review Board Statement

The study was conducted in accordance with the Declaration of Helsinki and was reviewed and approved by the Ethics Commission of the Faculty of Education of the University of Ljubljana (Approval code: 7/2025; approval date: 10 March 2025).

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

The data used in this study are available on request from the corresponding author. The data have been anonymized but are not publicly available because of privacy issues related to their qualitative nature.

Acknowledgments

The authors thank the participating students. Declaration of generative AI and AI-assisted technologies in the writing process: While preparing this work, the authors used DeepL to translate text and Instatext to improve the text’s spelling and grammar. After using these tools, the authors reviewed and edited the content as needed and take full responsibility for the content of the publication.

Conflicts of Interest

The authors declare no conflicts of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

Appendix A

Figure A1. The VET in the wider context of the Slovenian education system (colored). The purple dashed line marks the target group of VET students [127].

Appendix B

Table A1. MMLE item parameter estimates 1PL to 3PL for the entire data set, evaluated with respect to all three IRT models.
1PL2PL3PL
ItemBpar (SE)Apar (SE)Bpar (SE)Apar (SE)Bpar (SE)Cpar (SE)
ai13.00 (0.21)0.99 (0.18)3.21 (0.49)2.07 (0.46)2.86 (0.28)0.04 (0.01)
ai20.89 (0.11)0.69 (0.12)1.28 (0.25)1.21 (0.28)1.61 (0.23)0.16 (0.04)
ai30.89 (0.11)0.40 (0.11)2.13 (0.59)1.63 (0.45)2.26 (0.31)0.25 (0.03)
ai41.20 (0.12)0.64 (0.12)1.85 (0.35)1.66 (0.41)2.04 (0.24)0.18 (0.03)
ai5−0.63 (0.11)0.55 (0.12)−1.07 (0.27)0.76 (0.19)−0.06 (0.45)0.25 (0.10)
ai6−0.12 (0.10)0.39 (0.10)−0.24 (0.26)0.75 (0.35)1.26 (0.47)0.31 (0.10)
ai70.76 (0.11)0.54 (0.11)1.38 (0.33)1.31 (0.38)1.83 (0.27)0.23 (0.04)
ai81.80 (0.14)0.69 (0.13)2.60 (0.47)1.79 (0.48)2.41 (0.26)0.12 (0.02)
ai91.03 (0.11)0.49 (0.11)2.01 (0.47)1.60 (0.48)2.25 (0.30)0.22 (0.03)
ai102.26 (0.16)0.93 (0.16)2.55 (0.37)1.56 (0.36)2.48 (0.29)0.06 (0.02)
ai111.54 (0.13)0.48 (0.12)3.03 (0.73)2.03 (0.43)2.32 (0.24)0.16 (0.02)
ai120.95 (0.11)0.58 (0.11)1.61 (0.34)1.80 (0.46)1.88 (0.22)0.22 (0.03)
ai131.57 (0.13)0.64 (0.13)2.40 (0.45)1.30 (0.33)2.31 (0.32)0.12 (0.03)
ai141.11 (0.12)0.77 (0.13)1.45 (0.25)1.21 (0.29)1.79 (0.25)0.14 (0.04)
ai150.54 (0.11)0.54 (0.11)0.98 (0.26)1.11 (0.37)1.72 (0.30)0.24 (0.06)
ai161.32 (0.12)0.54 (0.12)2.37 (0.51)1.66 (0.49)2.33 (0.29)0.18 (0.03)
ai171.19 (0.12)0.49 (0.11)2.32 (0.54)1.67 (0.48)2.32 (0.29)0.20 (0.03)
ai185.73 (0.69)1.79 (0.49)4.41 (0.61)1.90 (0.51)4.14 (0.60)0.01 (0.00)
ai190.88 (0.11)0.41 (0.11)2.02 (0.55)2.09 (0.46)2.07 (0.22)0.26 (0.02)
ai200.79 (0.11)0.46 (0.11)1.66 (0.43)1.38 (0.46)2.15 (0.32)0.25 (0.04)
ai210.85 (0.11)0.53 (0.11)1.55 (0.36)1.72 (0.44)1.83 (0.22)0.23 (0.03)
ai22−0.31 (0.10)0.67 (0.12)−0.44 (0.16)0.82 (0.18)0.23 (0.34)0.20 (0.09)
ai231.08 (0.12)0.47 (0.11)2.19 (0.53)1.84 (0.46)2.15 (0.25)0.22 (0.03)
ai240.03 (0.10)0.47 (0.11)0.10 (0.21)0.87 (0.35)1.31 (0.39)0.29 (0.09)
ai251.14 (0.12)0.48 (0.11)2.29 (0.55)1.54 (0.49)2.41 (0.33)0.21 (0.03)
ai26−0.76 (0.11)2.16 (0.21)−0.55 (0.06)2.36 (0.25)−0.40 (0.09)0.12 (0.05)
ai271.02 (0.11)0.48 (0.11)2.05 (0.49)0.97 (0.37)2.44 (0.44)0.19 (0.05)
ai280.48 (0.11)0.49 (0.11)0.97 (0.29)0.83 (0.33)1.89 (0.41)0.23 (0.08)
ai29−0.04 (0.10)1.03 (0.14)−0.05 (0.10)1.36 (0.22)0.27 (0.15)0.13 (0.06)
ai301.23 (0.12)0.38 (0.11)3.06 (0.88)2.23 (0.42)2.29 (0.23)0.21 (0.02)
ai310.59 (0.11)0.73 (0.12)0.83 (0.19)1.21 (0.30)1.36 (0.22)0.19 (0.05)
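For readers interpreting Table A1, the Apar, Bpar, and Cpar columns correspond to the discrimination, difficulty, and pseudo-guessing parameters of the standard three-parameter logistic (3PL) model shown below; the 2PL fixes c_i = 0, and the 1PL additionally constrains a common discrimination. This is the textbook form of the model, given as a reader aid rather than a formula quoted from the article:

$$P(X_{ij}=1 \mid \theta_j) = c_i + \frac{1 - c_i}{1 + \exp\left[-a_i(\theta_j - b_i)\right]},$$

where θ_j is the latent AI literacy of student j and a_i (Apar), b_i (Bpar), and c_i (Cpar) are the item parameters estimated in Table A1.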
Table A2. Fit statistics 1PL to 3PL for the entire data set evaluated with respect to all three IRT models.

| Item | 1PL S-X² | 1PL df | 1PL p | 2PL S-X² | 2PL df | 2PL p | 3PL S-X² | 3PL df | 3PL p |
|---|---|---|---|---|---|---|---|---|---|
| ai1 | 13.738 | 20 | 0.844 | 15.579 | 18 | 0.622 | 12.425 | 18 | 0.825 |
| ai2 | 26.513 | 19 | 0.117 | 26.745 | 18 | 0.084 | 29.533 | 18 | 0.042 |
| ai3 | 27.435 | 19 | 0.095 | 22.966 | 19 | 0.239 | 23.904 | 18 | 0.158 |
| ai4 | 14.144 | 19 | 0.775 | 14.773 | 19 | 0.737 | 15.030 | 19 | 0.721 |
| ai5 | 10.087 | 20 | 0.967 | 12.597 | 19 | 0.859 | 12.176 | 17 | 0.789 |
| ai6 | 23.733 | 19 | 0.207 | 15.393 | 18 | 0.635 | 16.981 | 19 | 0.591 |
| ai7 | 31.577 | 21 | 0.065 | 27.242 | 18 | 0.075 | 25.953 | 18 | 0.101 |
| ai8 | 12.570 | 20 | 0.895 | 13.935 | 19 | 0.788 | 10.800 | 18 | 0.903 |
| ai9 | 12.381 | 21 | 0.929 | 13.350 | 19 | 0.820 | 12.123 | 19 | 0.880 |
| ai10 | 24.464 | 19 | 0.179 | 24.662 | 18 | 0.135 | 22.187 | 19 | 0.275 |
| ai11 | 31.654 | 20 | 0.047 | 26.880 | 20 | 0.139 | 33.743 | 19 | 0.020 |
| ai12 | 20.501 | 20 | 0.427 | 16.813 | 19 | 0.603 | 17.635 | 17 | 0.412 |
| ai13 | 23.602 | 19 | 0.212 | 31.960 | 20 | 0.044 | 32.575 | 19 | 0.027 |
| ai14 | 25.273 | 20 | 0.191 | 25.972 | 19 | 0.131 | 25.467 | 17 | 0.085 |
| ai15 | 15.515 | 21 | 0.796 | 14.855 | 19 | 0.732 | 14.589 | 16 | 0.555 |
| ai16 | 7.384 | 20 | 0.995 | 7.603 | 20 | 0.994 | 6.569 | 17 | 0.989 |
| ai17 | 23.329 | 19 | 0.223 | 22.024 | 20 | 0.339 | 23.934 | 19 | 0.199 |
| ai18 | 4.747 | 21 | 1.000 | 2.297 | 18 | 1.000 | 3.710 | 17 | 1.000 |
| ai19 | 21.931 | 20 | 0.344 | 17.823 | 18 | 0.467 | 15.766 | 18 | 0.609 |
| ai20 | 22.406 | 20 | 0.319 | 22.274 | 20 | 0.326 | 20.772 | 17 | 0.237 |
| ai21 | 19.335 | 19 | 0.436 | 19.660 | 19 | 0.415 | 32.756 | 19 | 0.026 |
| ai22 | 12.537 | 19 | 0.861 | 14.938 | 20 | 0.780 | 13.578 | 18 | 0.756 |
| ai23 | 22.144 | 20 | 0.333 | 17.816 | 18 | 0.468 | 20.662 | 18 | 0.297 |
| ai24 | 16.049 | 20 | 0.714 | 14.098 | 18 | 0.723 | 16.635 | 19 | 0.615 |
| ai25 | 17.800 | 19 | 0.536 | 16.053 | 19 | 0.654 | 16.548 | 18 | 0.554 |
| ai26 | 53.670 | 18 | 0.000 | 15.749 | 18 | 0.610 | 18.446 | 19 | 0.493 |
| ai27 | 18.973 | 20 | 0.524 | 15.850 | 19 | 0.667 | 16.687 | 18 | 0.545 |
| ai28 | 29.157 | 20 | 0.085 | 26.963 | 19 | 0.106 | 26.625 | 19 | 0.114 |
| ai29 | 22.130 | 20 | 0.334 | 23.788 | 18 | 0.162 | 28.899 | 18 | 0.050 |
| ai30 | 31.062 | 19 | 0.040 | 27.408 | 19 | 0.096 | 25.541 | 18 | 0.111 |
| ai31 | 11.843 | 20 | 0.921 | 12.248 | 20 | 0.907 | 9.702 | 18 | 0.941 |
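The S-X² values in Table A2 are likelihood-based item-fit statistics in the sense of Orlando and Thissen [99]. In their standard form (given here as a reader aid, not as the authors' exact computation), they compare observed and model-expected proportions correct within summed-score groups:

$$S\text{-}X^2_i = \sum_{k} N_k \,\frac{\left(O_{ik} - E_{ik}\right)^2}{E_{ik}\left(1 - E_{ik}\right)},$$

where N_k is the number of students with summed score k, and O_ik and E_ik are the observed and model-expected proportions answering item i correctly in that group; values that are large relative to the df column indicate item misfit.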

References

  1. Allen, L.K.; Kendeou, P. ED-AI Lit: An Interdisciplinary Framework for AI Literacy in Education. Policy Insights Behav. Brain Sci. 2023, 11, 3–10. [Google Scholar] [CrossRef]
  2. Yang, Y.; Zhang, Y.; Sun, D.; He, W.; Wei, Y. Navigating the landscape of AI literacy education: Insights from a decade of research (2014–2024). Humanit. Soc. Sci. Commun. 2025, 12, 374. [Google Scholar] [CrossRef]
  3. Chiu, T.K.F.; Sanusi, I.T. Define, foster, and assess student and teacher AI literacy and competency for all: Current status and future research direction. Comput. Educ. Open 2024, 7, 100182. [Google Scholar] [CrossRef]
  4. Darvishi, A.; Khosravi, H.; Sadiq, S.; Gašević, D.; Siemens, G. Impact of AI assistance on student agency. Comput. Educ. 2024, 210, 104967. [Google Scholar] [CrossRef]
  5. Licardo, M.; Kranjec, E.; Lipovec, A.; Dolenc, K.; Arcet, B.; Flogie, A.; Plavčak, D.; Ivanuš Grmek, M.; Bednjički Rošer, B.; Sraka Petek, B.; et al. Generativna Umetna Inteligenca v Izobraževanju: Analiza Stanja v Primarnem, Sekundarnem in Terciarnem Izobraževanju; University of Maribor, University Press: Maribor, Slovenia, 2025; Available online: https://press.um.si/index.php/ump/catalog/view/950/1409/5110 (accessed on 10 May 2025).
  6. Garg, P.K. Overview of Artificial Intelligence. In Artificial Intelligence; Sharma, L., Garg, P.K., Eds.; Chapman and Hall/CRC: New York, NY, USA, 2021; pp. 3–18. [Google Scholar]
  7. Chiu, T.K.F.; Meng, H.M.; Chai, C.; King, I.; Wong, S.; Yam, Y. Creation and Evaluation of a Pretertiary Artificial Intelligence (AI) Curriculum. IEEE Trans. Educ. 2021, 65, 30–39. [Google Scholar] [CrossRef]
  8. Ali, S.; Kumar, V.; Breazeal, C. AI Audit: A Card Game to Reflect on Everyday AI Systems. arXiv 2023, arXiv:2305.17910. [Google Scholar] [CrossRef]
  9. Casal-Otero, L.; Catala, A.; Fernández-Morante, C.; Taboada, M.; Cebreiro, B.; Barro, S. AI literacy in K-12: A systematic literature review. Int. J. STEM Educ. 2023, 10, 1–17. [Google Scholar] [CrossRef]
  10. Grover, S.; Broll, B.; Babb, D. Cybersecurity Education in the Age of AI: Integrating AI Learning into Cybersecurity High School Curricula. In Proceedings of the 54th ACM Technical Symposium on Computer Science Education, New York, NY, USA, 15–18 March 2023; Volume 1. [Google Scholar] [CrossRef]
  11. Borasi, R.; Miller, D.E.; Vaughan-Brogan, P.; DeAngelis, K.; Han, Y.J.; Mason, S. An AI Wishlist from School Leaders. Phi Delta Kappan 2024, 105, 48–51. [Google Scholar] [CrossRef]
  12. Wu, D.; Chen, M.; Chen, X.; Liu, X. Analyzing K-12 AI education: A large language model study of classroom instruction on learning theories, pedagogy, tools, and AI literacy. Comput. Educ. Artif. Intell. 2024, 7, 100295. [Google Scholar] [CrossRef]
  13. Vieriu, A.M.; Petrea, G. The Impact of Artificial Intelligence (AI) on Students’ Academic Development. Educ. Sci. 2025, 15, 343. [Google Scholar] [CrossRef]
  14. Hornberger, M.; Bewersdorff, A.; Schiff, D.S.; Nerdel, C. A multinational assessment of AI literacy among university students in Germany, the UK, and the US. Comput. Hum. Behav. 2025, 4, 100132. [Google Scholar] [CrossRef]
  15. Jia, X.-H.; Tu, J.-C. Towards a New Conceptual Model of AI-Enhanced Learning for College Students: The Roles of Artificial Intelligence Capabilities, General Self-Efficacy, Learning Motivation, and Critical Thinking Awareness. Systems 2024, 12, 74. [Google Scholar] [CrossRef]
  16. Stolpe, K.; Hallström, J. Artificial intelligence literacy for technology education. Comput. Educ. Open 2024, 6, 100159. [Google Scholar] [CrossRef]
  17. Sperling, K.; Stenberg, C.-J.; McGrath, C.; Åkerfeldt, A.; Heintz, F.; Stenliden, L. In search of artificial intelligence (AI) literacy in teacher education: A scoping review. Comput. Educ. Open 2024, 6, 100169. [Google Scholar] [CrossRef]
  18. Walter, Y. Embracing the future of Artificial Intelligence in the classroom: The relevance of AI literacy, prompt engineering, and critical thinking in modern education. Int. J. Educ. Technol. High. Educ. 2024, 21, 15. [Google Scholar] [CrossRef]
  19. Bewersdorff, A.; Hornberger, M.; Nerdel, C.; Schiff, D.S. AI advocates and cautious critics: How AI attitudes, AI interest, use of AI, and AI literacy build university students’ AI self-efficacy. Comput. Educ. Artif. Intell. 2025, 8, 100340. [Google Scholar] [CrossRef]
  20. Klemenčič, M. What is student agency? An ontological exploration in the context of research on student engagement. In Student Engagement in Europe: Society, Higher Education and Student Governance; Klemenčič, M., Bergan, S., Primožič, R., Eds.; Council of Europe Publishing: Strasbourg, France, 2015. [Google Scholar]
  21. Zhao, J.; Li, S.; Zhang, J. Understanding Teachers’ Adoption of AI Technologies: An Empirical Study from Chinese Middle Schools. Systems 2025, 13, 302. [Google Scholar] [CrossRef]
  22. GEN-UI. Available online: https://gen-ui.si/ (accessed on 25 April 2025).
  23. Long, D.; Magerko, B. What is AI literacy? Competencies and design considerations. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems, Honolulu, HI, USA, 25–30 April 2020; pp. 1–13. [Google Scholar] [CrossRef]
  24. Ng, D.T.K.; Wu, W.; Leung, J.K.L.; Chiu, T.K.F.; Chu, S.K.W. Design and validation of the AI literacy questionnaire: The affective, behavioural, cognitive and ethical approach. Br. J. Educ. Technol. 2024, 55, 1082–1104. [Google Scholar] [CrossRef]
  25. Spreitzer, G.M. Psychological empowerment in the workplace: Dimensions, measurement, and validation. Acad. Manag. J. 1995, 38, 1442–1465. [Google Scholar] [CrossRef]
  26. Thomas, K.W.; Velthouse, B.A. Cognitive elements of empowerment: An “interpretive” model of intrinsic task motivation. Acad. Manag. Rev. 1990, 15, 666–681. [Google Scholar] [CrossRef]
  27. Kong, S.C.; Zhu, J.; Yang, Y.N. Developing and Validating a Scale of Empowerment in Using Artificial Intelligence for Problem-Solving for Senior Secondary and University Students. Comput. Educ. Artif. Intell. 2025, 8, 100359. [Google Scholar] [CrossRef]
  28. Kong, S.C.; Chiu, M.M.; Lai, M. A Study of Primary School Students’ Interest, Collaboration Attitude, and Programming Empowerment in CT Education. Comput. Educ. 2018, 127, 178–189. [Google Scholar] [CrossRef]
  29. Gibson, D.; Kovanovic, V.; Ifenthaler, D.; Dexter, S.; Feng, S. Learning Theories for Artificial Intelligence Promoting Learning Processes. Br. J. Educ. Technol. 2023, 54, 1125–1146. [Google Scholar] [CrossRef]
  30. Jääskelä, P.; Poikkeus, A.-M.; Häkkinen, P.; Vasalampi, K.; Rasku-Puttonen, H.; Tolvanen, A. Students’ Agency Profiles in Relation to Student-Perceived Teaching Practices in University Courses. Int. J. Educ. Res. 2020, 103, 101604. [Google Scholar] [CrossRef]
  31. Diseth, Å. Self-Efficacy, Goal Orientations and Learning Strategies as Mediators between Preceding and Subsequent Academic Achievement. Learn. Individ. Differ. 2011, 21, 191–195. [Google Scholar] [CrossRef]
  32. Domino, M.R. Self-Regulated Learning Skills in Computer Science: The State of the Field. Ph.D. Thesis, Virginia Tech, Blacksburg, VA, USA, 21 August 2024. [Google Scholar]
  33. Salas-Pilco, Z.; Yang, Y.; Zhang, Z. AI-Empowered Self-Regulated Learning in Higher Education: A Qualitative Systematic Review. Sci. Learn. 2024, 9, 21. [Google Scholar] [CrossRef]
  34. Chang, S.-H.; Yao, K.-C.; Chen, Y.-T.; Chung, C.-Y.; Huang, W.-L.; Ho, W.-S. Integrating Motivation Theory into the AIED Curriculum for Technical Education: Examining the Impact on Learning Outcomes and the Moderating Role of Computer Self-Efficacy. Information 2025, 16, 50. [Google Scholar] [CrossRef]
  35. Getenet, S.; Cantle, R.; Redmond, P.; Albion, P. Students’ digital technology attitude, literacy and self-efficacy in online learning. Int. J. Educ. Tech. High. Educ. 2024, 21, 3. [Google Scholar] [CrossRef]
  36. Sullivan, K. Achievement via Individual Determination (AVID) and Development of Student Agency. Ph.D. Thesis, University of North Carolina at Chapel Hill Graduate School, Chapel Hill, NC, USA, 26 May 2022. [Google Scholar] [CrossRef]
  37. Teachers Institute. Available online: https://teachers.institute/learning-learner-development/locus-of-control-learner-autonomy-achievement/ (accessed on 18 April 2025).
  38. Cui, H.; Bi, X.; Chen, W.; Gao, T.; Qing, Z.; Shi, K.; Ma, Y. Gratitude and academic engagement: Exploring the mediating effects of internal locus of control and subjective well-being. Front. Psychol. 2023, 14, 1287702. [Google Scholar] [CrossRef]
  39. Li, J.; Li, Y. The Role of Grit on Students’ Academic Success in Experiential Learning Context. Front. Psychol. 2021, 12, 774149. [Google Scholar] [CrossRef]
  40. Yilmaz, R.; Yilmaz, F.G.K. The Effect of Generative Artificial Intelligence (AI)-Based Tool Use on Students’ Computational Thinking Skills, Programming Self-Efficacy and Motivation. Comput. Educ. Artif. Intell. 2023, 4, 100147. [Google Scholar] [CrossRef]
  41. Sigurdson, N.; Petersen, A. An exploration of grit in a CS1 context. In Proceedings of the Koli Calling ’18: 18th Koli Calling International Conference on Computing Education Research, New York, NY, USA, 22–25 November 2018; Association for Computing Machinery: New York, NY, USA, 2018; Volume 23, pp. 1–5. [Google Scholar] [CrossRef]
  42. Dang, S.; Quach, S.; Roberts, R.E. Explanation of Time Perspectives in Adopting AI Service Robots under Different Service Settings. J. Retail. Consum. Serv. 2025, 82, 104109. [Google Scholar] [CrossRef]
  43. Southworth, J.; Migliaccio, K.; Glover, J.; Glover, J.N.; Reed, D.; McCarty, C.; Brendemuhl, J.; Thomas, A. Developing a Model for AI across the Curriculum: Transforming the Higher Education Landscape via Innovation in AI Literacy. Comput. Educ. Artif. Intell. 2023, 4, 100127. [Google Scholar] [CrossRef]
  44. Chiu, T.K.F.; Moorhouse, B.L.; Chai, C.S.; Ismailov, M. Teacher Support and Student Motivation to Learn with Artificial Intelligence (AI) Based Chatbot. Interact. Learn. Environ. 2023, 32, 3240–3256. [Google Scholar] [CrossRef]
  45. Wei, L. Artificial Intelligence in Language Instruction: Impact on English Learning Achievement, L2 Motivation, and Self-Regulated Learning. Front. Psychol. 2023, 14, 1261955. [Google Scholar] [CrossRef]
  46. Chang, D.H.; Lin, M.P.-C.; Hajian, S.; Wang, Q.Q. Educational Design Principles of Using AI Chatbot That Supports Self-Regulated Learning in Education: Goal Setting, Feedback, and Personalization. Sustainability 2023, 15, 12921. [Google Scholar] [CrossRef]
  47. Zimmerman, B.J. Investigating Self-Regulation and Motivation: Historical Background, Methodological Developments, and Future Prospects. Am. Educ. Res. J. 2008, 45, 166–183. [Google Scholar] [CrossRef]
  48. Molenaar, I.; Mooij, S.D.; Azevedo, R.; Bannert, M.; Järvelä, S.; Gašević, D. Measuring Self-Regulated Learning and the Role of AI: Five Years of Research Using Multimodal Multichannel Data. Comput. Hum. Behav. 2023, 139, 107540. [Google Scholar] [CrossRef]
  49. Deci, E.L.; Ryan, R.M. Self-Determination Theory: A Macrotheory of Human Motivation, Development, and Health. Can. Psychol. 2008, 49, 182–185. [Google Scholar] [CrossRef]
  50. Guo, J.; An, F.; Lu, Y. Relationship between Perceived Support and Learning Approaches: The Mediating Role of Perceived Classroom Mastery Goal Structure and Computer Self-Efficacy. Curr. Psychol. 2024, 43, 18561–18575. [Google Scholar] [CrossRef]
  51. Alt, D.; Weinberger, A.; Heinrichs, K.; Naamati-Schneider, L. The Role of Goal Orientations and Learning Approaches in Explaining Digital Concept Mapping Utilization in Problem-Based Learning. Curr. Psychol. 2023, 42, 14175–14190. [Google Scholar] [CrossRef]
  52. Pesonen, H.; Leinonen, J.; Haaranen, L.; Hellas, A. Exploring the interplay of achievement goals, self-efficacy, prior experience and course achievement. In Proceedings of the UKICER 23: 2023 Conference on United Kingdom & Ireland Computing Education Research, New York, NY, USA, 7–8 September 2023; Association for Computing Machinery: New York, NY, USA, 2023. [Google Scholar] [CrossRef]
  53. Wang, C.K.J.; Liu, W.C.; Kee, Y.H.; Chian, L.K. Competence, Autonomy, and Relatedness in the Classroom: Understanding Students’ Motivational Processes Using the Self-Determination Theory. Heliyon 2019, 5, e01983. [Google Scholar] [CrossRef] [PubMed]
  54. Martín-Núñez, J.L.; Ar, A.; Fernández, R.P.; Abbas, A.; Radovanovic, A. Does Intrinsic Motivation Mediate Perceived Artificial Intelligence (AI) Learning and Computational Thinking of Students During the COVID-19 Pandemic? Comput. Educ. Artif. Intell. 2023, 4, 100128. [Google Scholar] [CrossRef]
  55. Dang, S.; Quach, S.; Roberts, R.E. How time fuels AI device adoption: A contextual model enriched by machine learning. Technol. Forecast. Soc. Change 2025, 212, 123975. [Google Scholar] [CrossRef]
  56. UNESCO. AI Competency Framework for Students; UNESCO: Paris, France, 2024. [Google Scholar] [CrossRef]
  57. Geitz, G.; Brinke, D.J.; Kirschner, P.A. Changing Learning Behaviour: Self-Efficacy and Goal Orientation in PBL Groups in Higher Education. Int. J. Educ. Res. 2016, 75, 146–158. [Google Scholar] [CrossRef]
  58. Rončević Zubković, B.; Kolić-Vehovec, S. Perceptions of Contextual Achievement Goals: Contribution to High-School Students’ Achievement Goal Orientation, Strategy Use and Academic Achievement. Stud. Psychol. 2014, 56, 137–153. [Google Scholar] [CrossRef]
  59. Won, S.; Anderman, E.M.; Zimmerman, R.S. Longitudinal Relations of Classroom Goal Structures to Students’ Motivation and Learning Outcomes in Health Education. J. Educ. Psychol. 2020, 112, 1003–1019. [Google Scholar] [CrossRef]
  60. Honicke, T.; Broadbent, J.; Fuller-Tyszkiewicz, M. Learner Self-Efficacy, Goal Orientation, and Academic Achievement: Exploring Mediating and Moderating Relationships. High. Educ. Res. Dev. 2020, 39, 689–703. [Google Scholar] [CrossRef]
  61. Touretzky, D.S.; McCune, C.G.; Martin, F.; Seehorn, D. Envisioning AI for K–12: What Should Every Child Know about AI? In Proceedings of the AAAI Conference on Artificial Intelligence, Honolulu, HI, USA, 27 January–1 February 2019; AAAI Press: Menlo Park, CA, USA, 2019. [Google Scholar]
  62. Touretzky, D.S.; Gardner-McCune, C.; Seehorn, D. Machine Learning and the Five Big Ideas in AI. Int. J. Artif. Intell. Educ. 2023, 33, 233–266. [Google Scholar] [CrossRef]
  63. Bećirović, S.; Polz, E.; Tinkel, I. Exploring Students’ AI Literacy and Its Effects on Their AI Output Quality, Self-Efficacy, and Academic Performance. Smart Learn. Environ. 2025, 12, 29. [Google Scholar] [CrossRef]
  64. Ulfert-Blank, A.-S.; Schmidt, I. Assessing Digital Self-Efficacy: Review and Scale Development. Comput. Educ. 2022, 191, 104626. [Google Scholar] [CrossRef]
  65. Fung, K.Y.; Fung, K.C.; Lui, T.L.R.; Sin, K.F.; Lee, L.H.; Qu, H.; Song, S. Exploring the Impact of Robot Interaction on Learning Engagement: A Comparative Study of Two Multi-Modal Robots. Smart Learn. Environ. 2025, 12, 12. [Google Scholar] [CrossRef]
  66. Biagini, G. Assessing the Assessments: Toward a Multidimensional Approach to AI Literacy. Med. Educ. 2023, 20, 91–101. [Google Scholar] [CrossRef]
  67. Kaup, C.; Brooks, E.A. Cultural–Historical Perspective on How Double Stimulation Triggers Expansive Learning: Implementing Computational Thinking in Mathematics. Des. Learn. 2022, 14, 151–164. [Google Scholar] [CrossRef]
  68. Engeström, Y. Learning by Expanding: An Activity-Theoretical Approach to Developmental Research, 2nd ed.; Cambridge University Press: Cambridge, UK, 2015. [Google Scholar]
  69. Engeström, Y.; Sannino, A. From Mediated Actions to Heterogeneous Coalitions: Four Generations of Activity-Theoretical Studies of Work and Learning. Mind Cult. Act. 2021, 28, 4–23. [Google Scholar] [CrossRef]
  70. Engeström, Y. Expansive Learning at Work: Toward an Activity Theoretical Reconceptualization. J. Educ. Work 2001, 14, 133–156. [Google Scholar] [CrossRef]
  71. Engeström, Y.; Sannino, A.; Virkkunen, J. On the Methodological Demands of Formative Interventions. Mind Cult. Act. 2014, 21, 118–128. [Google Scholar] [CrossRef]
  72. Batiibwe, M.S.K. Using Cultural–Historical Activity Theory to Understand How Emerging Technologies Can Mediate Teaching and Learning in a Mathematics Classroom: A Review of Literature. Res. Pract. Technol. Enhanc. Learn. 2019, 14, 12. [Google Scholar] [CrossRef]
  73. Sannino, A. Transformative Agency as Warping: How Collectives Accomplish Change Amidst Uncertainty. Pedagog. Cult. Soc. 2022, 30, 9–33. [Google Scholar] [CrossRef]
  74. Prihatmanto, A.S.; Sukoco, A.; Budiyon, A. Next Generation Smart System: 4-Layer Modern Organization and Activity Theory for a New Paradigm Perspective. Arch. Control. Sci. 2024, 34, 589–623. [Google Scholar] [CrossRef]
  75. Gormley, G.J.; Kajamaa, A.; Conn, R.L.; O’Hare, S. Making the Invisible Visible: A Place for Utilizing Activity Theory within In Situ Simulation to Drive Healthcare Organizational Development? Adv. Simul. 2020, 5, 29. [Google Scholar] [CrossRef]
  76. Lim, J.; Leinonen, T.; Lipponen, L. How can artificial intelligence be used in creative learning? Cultural-historical activity theory analysis in Finnish kindergarten. In Artificial Intelligence in Education. Posters and Late Breaking Results, Workshops and Tutorials, Industry and Innovation Tracks, Practitioners, Doctoral Consortium and Blue Sky, Proceedings of 25th International Conference, AIED 2024, Recife, Brazil, 8–12 July 2024; Olney, A.M., Chounta, I.-A., Liu, Z., Santos, O.C., Bittencourt, I.I., Eds.; Communications in Computer and Information Science; Springer: Cham, Switzerland, 2024; Volume 2151, pp. 149–156. [Google Scholar] [CrossRef]
  77. Xiang, X.; Wei, Y.; Lei, Y.; Li, W.; He, X. Impact of Psychological Empowerment on Job Satisfaction among Preschool Teachers: Mediating Role of Professional Identity. Humanit. Soc. Sci. Commun. 2024, 11, 1175. [Google Scholar] [CrossRef]
  78. Ganduri, L.; Collier Reed, B. A Cultural-Historical Activity Theory Approach to Studying the Development of Students’ Digital Agency in Higher Education. In Proceedings of the European Society for Engineering Education (SEFI), Dublin, Ireland, 11–14 September 2023; pp. 477–487. [Google Scholar] [CrossRef]
  79. Creswell, J.W.; Creswell, J.L. Research Design: Qualitative, Quantitative, and Mixed Methods Approaches; SAGE Publications, Inc.: Thousand Oaks, CA, USA, 2023. [Google Scholar]
  80. Center Republike Slovenije za Poklicno Izobraževanje (CPI). Izobraževalni Programi. Available online: https://cpi.si/poklicno-izobrazevanje/izobrazevalni-programi/ (accessed on 15 May 2025).
  81. Ng, D.; Leung, J.; Chu, S.K.; Qiao, M.S. AI literacy: Definition, teaching, evaluation and ethical issues. Proc. Assoc. Inf. Sci. Technol. 2021, 58, 504–509. [Google Scholar] [CrossRef]
  82. Ng, D.T.K.; Leung, J.K.L.; Chu, S.K.W.; Qiao, M.S. Conceptualizing AI Literacy: An Exploratory Review. Comput. Educ. Artif. Intell. 2021, 2, 100041. [Google Scholar] [CrossRef]
  83. Hornberger, M.; Bewersdorff, A.; Nerdel, C. What Do University Students Know about Artificial Intelligence? Development and Validation of an AI Literacy Test. Comput. Educ. Artif. Intell. 2023, 5, 100165. [Google Scholar] [CrossRef]
  84. Chen, Y.; Lin, L.; Zhu, X. Is Social Interaction a Catalyst for Digital Environments? Exploring Affordances of Synchronous Delivery in Online CLIL. Interact. Learn. Environ. 2024, 33, 2689–2702. [Google Scholar] [CrossRef]
  85. Zeiser, K.; Scholz, C.; Cirks, V. Maximizing Student Agency: Implementing and Measuring Student-Centered Learning Practices; American Institutes for Research: Washington, DC, USA, 2018. [Google Scholar]
  86. Rupnik, D.; Avsec, S. Student Agency as an Enabler in Cultivating Sustainable Competencies for People-Oriented Technical Professions. Educ. Sci. 2025, 15, 469. [Google Scholar] [CrossRef]
  87. Benjamini, Y.; Hochberg, Y. Controlling the False Discovery Rate: A Practical and Powerful Approach to Multiple Testing. J. R. Stat. Soc. Ser. B 1995, 57, 289–300. [Google Scholar]
  88. Meyer, J.P. Applied Measurement with jMetrik; Routledge: New York, NY, USA, 2014. [Google Scholar] [CrossRef]
  89. Henseler, J.; Ringle, C.M.; Sarstedt, M. A New Criterion for Assessing Discriminant Validity in Variance-Based Structural Equation Modeling. J. Acad. Mark. Sci. 2015, 43, 115–135. [Google Scholar] [CrossRef]
  90. George, D.; Mallery, P. IBM SPSS Statistics 29 Step by Step: A Simple Guide and Reference, 18th ed.; Routledge: New York, NY, USA, 2024. [Google Scholar] [CrossRef]
  91. Mishra, P.; Pandey, C.M.; Singh, U.; Gupta, A.; Sahu, C.; Keshri, A. Descriptive Statistics and Normality Tests for Statistical Data. Ann. Card. Anaesth. 2019, 22, 67–72. [Google Scholar] [CrossRef]
  92. LeBlanc, V.; Cox, M.A.A. Interpretation of the Point-Biserial Correlation Coefficient in the Context of a School Examination. Quant. Methods Psychol. 2017, 13, 46–56. [Google Scholar] [CrossRef]
  93. Cohen, J.; Cohen, P.; West, S.G.; Aiken, L.S. Applied Multiple Regression/Correlation Analysis for the Behavioral Sciences, 3rd ed.; Routledge: Mahwah, NJ, USA, 2003. [Google Scholar] [CrossRef]
  94. Pituch, K.A.; Stevens, J.P. Applied Multivariate Statistics for the Social Sciences: Analyses with SAS and IBM’s SPSS, 6th ed.; Routledge: New York, NY, USA, 2015. [Google Scholar] [CrossRef]
  95. Moosbrugger, H.; Kelava, A. Testtheorie und Fragebogenkonstruktion, 3rd ed.; Springer: Berlin, Germany, 2020. [Google Scholar] [CrossRef]
  96. Boone, W.J.; Staver, J.R. Advances in Rasch Analyses in the Human Sciences; Springer: Cham, Switzerland, 2020. [Google Scholar]
  97. Denovan, A.; Dagnall, N.; Drinkwater, K.G.; Escolà-Gascón, Á. The Illusory Health Beliefs Scale: Preliminary Validation Using Exploratory Factor and Rasch Analysis. Front. Psychol. 2024, 15, 1408734. [Google Scholar] [CrossRef]
  98. Crowe, M.; Maciver, D.; Rush, R.; Forsyth, K. Psychometric Evaluation of the ACHIEVE Assessment. Front. Pediatr. 2020, 8, 245. [Google Scholar] [CrossRef]
  99. Orlando, M.; Thissen, D. Likelihood-Based Item-Fit Indices for Dichotomous Item Response Theory Models. Appl. Psychol. Meas. 2000, 24, 50–64. [Google Scholar] [CrossRef]
  100. Linacre, J.M. A User’s Guide to Winsteps/Ministep: Rasch-Model Computer Programs. Program Manual 5.10.1; Winsteps.com: Chicago, IL, USA, 2025; Available online: https://www.winsteps.com/winman/copyright.htm (accessed on 25 May 2025).
  101. Cheung, G.W.; Cooper-Thomas, H.D.; Lau, R.S.; Wang, L.C. Reporting Reliability, Convergent and Discriminant Validity with Structural Equation Modeling: A Review and Best-Practice Recommendations. Asia Pac. J. Manag. 2024, 41, 745–783. [Google Scholar] [CrossRef]
  102. Rönkkö, M.; Cho, E. An Updated Guideline for Assessing Discriminant Validity. Organ. Res. Methods 2022, 25, 6–14. [Google Scholar] [CrossRef]
  103. Hair, J.F.; Babin, B.J.; Anderson, R.E.; Black, W.C. Multivariate Data Analysis, 8th ed.; Pearson Prentice: Upper Saddle River, NJ, USA, 2019. [Google Scholar]
  104. Kim, H.Y. Statistical Notes for Clinical Researchers: Assessing Normal Distribution (2) Using Skewness and Kurtosis. Restor. Dent. Endod. 2013, 38, 52–54. [Google Scholar] [CrossRef]
  105. Barton, B.; Peat, J. Medical Statistics: A Guide to SPSS, Data Analysis and Clinical Appraisal, 2nd ed.; Wiley Blackwell, BMJ Books: Sydney, Australia, 2014. [Google Scholar]
  106. Schober, P.; Boer, C.; Schwarte, L.A. Correlation Coefficients: Appropriate Use and Interpretation. Anesth. Analg. 2018, 126, 1763–1768. [Google Scholar] [CrossRef]
  107. Madhulatha, T.S. An Overview on Clustering Methods. IOSR J. Eng. 2012, 2, 719–725. [Google Scholar] [CrossRef]
  108. Milligan, G.W.; Cooper, M.C. An Examination of Procedures for Determining the Number of Clusters in a Data Set. Psychometrika 1985, 50, 159–179. [Google Scholar] [CrossRef]
  109. Eldridge, S.M.; Ashby, D.; Kerry, S. Sample Size for Cluster Randomized Trials: Effect of Coefficient of Variation of Cluster Size and Analysis Method. Int. J. Epidemiol. 2006, 35, 1292–1300. [Google Scholar] [CrossRef] [PubMed]
  110. Coraggio, L.; Coretto, P. Selecting the Number of Clusters, Clustering Models, and Algorithms: A Unifying Approach Based on the Quadratic Discriminant Score. J. Multivar. Anal. 2023, 196, 105181. [Google Scholar] [CrossRef]
  111. Roski, M.; Sebastian, R.; Ewerth, R.; Hoppe, A.; Nehring, A. Learning Analytics and the Universal Design for Learning (UDL): A Clustering Approach. Comput. Educ. 2024, 214, 105028. [Google Scholar] [CrossRef]
  112. Rossbroich, J.; Durieux, J.; Wilderjans, T.F. Model Selection Strategies for Determining the Optimal Number of Overlapping Clusters in Additive Overlapping Partitional Clustering. J. Classif. 2022, 39, 264–301. [Google Scholar] [CrossRef]
  113. Grimm, K.J.; Houpt, R.; Rodgers, D. Model Fit and Comparison in Finite Mixture Models: A Review and a Novel Approach. Front. Educ. 2021, 6, 613645. [Google Scholar] [CrossRef]
  114. Nylund, K.L.; Asparouhov, T.; Muthén, B.O. Deciding on the Number of Classes in Latent Class Analysis and Growth Mixture Modeling: A Monte Carlo Simulation Study. Struct. Equ. Model. 2007, 14, 535–569. [Google Scholar] [CrossRef]
  115. Hooker, J.N.; Marrett, R.; Wang, Q. Rigorizing the Use of the Coefficient of Variation to Diagnose Fracture Periodicity and Clustering. J. Struct. Geol. 2023, 168, 104830. [Google Scholar] [CrossRef]
  116. University of Ljubljana, Faculty of Education. Available online: https://www.pef.uni-lj.si/ (accessed on 20 May 2025).
  117. Digital First Network. Available online: https://digitalfirstnetwork.eu/ (accessed on 20 April 2025).
  118. Hair, J.F.; Sarstedt, M.; Ringle, C.M.; Gudergan, S.P. Advanced Issues in Partial Least Squares Structural Equation Modeling, 2nd ed.; Sage Publications: Thousand Oaks, CA, USA, 2024. [Google Scholar]
  119. Godhe, A.; Lindström, B. Creating Multimodal Texts in Language Education—Negotiations at the Boundary. Res. Pract. Technol. Enhanc. Learn. 2014, 9, 165–188. Available online: https://rptel.apsce.net/index.php/RPTEL/article/view/2014-09010 (accessed on 5 May 2025).
  120. Bandura, A. Self-Efficacy: The Exercise of Control; W. H. Freeman/Times Books/Henry Holt & Co.: New York, NY, USA, 1997. [Google Scholar]
  121. Goller, M. Human Agency at Work: An Active Approach Towards Expertise Development; Springer: Wiesbaden, Germany, 2017. [Google Scholar] [CrossRef]
  122. Chan, C. Holistic competencies and AI in education: A synergistic pathway. Australas. J. Educ. Technol. 2024, 40, 1–12. [Google Scholar] [CrossRef]
  123. Chee, H.; Ahn, S.; Lee, J. A competency framework for AI literacy: Variations by different learner groups and an implied learning pathway. Br. J. Educ. Technol. 2024, Early Access, 1–37. [Google Scholar] [CrossRef]
  124. Yuan, C.W.; Tsai, H.S.; Chen, Y.-T. Charting competence: A holistic scale for measuring proficiency in artificial intelligence literacy. J. Educ. Comput. Res. 2024, 62, 1455–1484. [Google Scholar] [CrossRef]
  125. Heikkilä, M.; Iiskala, T.; Mikkilä-Erdmann, M. Voices of Student Teachers’ Professional Agency at the Intersection of Theory and Practice. Learn. Cult. Soc. Interact. 2020, 25, 100405. [Google Scholar] [CrossRef]
  126. Yi, Y. Establishing the concept of AI literacy. Eur. J. Bioeth. 2021, 12, 131–144. [Google Scholar] [CrossRef]
  127. Cedefop; Institute of the Republic of Slovenia for Vocational Education and Training (CPI). Vocational education and training in Europe—Slovenia: System description. In Cedefop; ReferNet. Vocational Education and Training in Europe: VET in Europe Database—Detailed VET System Descriptions; Publications Office: Luxembourg, 2024; Available online: https://www.cedefop.europa.eu/en/tools/vet-in-europe/systems/slovenia-u3 (accessed on 24 May 2025).
  128. van Dinther, M.; Dochy, F.; Segers, M.; Braeken, J. Student perceptions of assessment and student self-efficacy in competence-based education. Educ. Stud. 2014, 40, 330–351. [Google Scholar] [CrossRef]
  129. Duchatelet, D.; Donche, V. Fostering self-efficacy and self-regulation in higher education: A matter of autonomy support or academic motivation? High. Educ. Res. Dev. 2019, 38, 733–747. [Google Scholar] [CrossRef]
  130. Brooks, C.; Young, S.L. Are choice-making opportunities needed in the classroom? Using self-determination theory to consider student motivation and learner empowerment. Int. J. Teach. Learn. High. Educ. 2011, 23, 48–59. Available online: http://files.eric.ed.gov/fulltext/EJ938578.pdf (accessed on 10 April 2025).
  131. Cheng, W.; Nguyen, P.N.T. Academic motivations and the risk of not in employment, education or training: University and vocational college undergraduates comparison. Educ. Train. 2024, 66, 91–105. [Google Scholar] [CrossRef]
  132. Chuang, Y.; Huang, T.; Lin, S.; Chen, B. The influence of motivation, self-efficacy, and fear of failure on the career adaptability of vocational school students: Moderated by meaning in life. Front. Psychol. 2022, 13, 958334. [Google Scholar] [CrossRef]
  133. Dæhlen, M. Completion in vocational and academic upper secondary school: The importance of school motivation, self-efficacy, and individual characteristics. Eur. J. Educ. 2017, 52, 336–347. [Google Scholar] [CrossRef]
  134. Jónsdóttir, H.H.; Blöndal, K.S. The choice of track matters: Academic self-concept and sense of purpose in vocational and academic tracks. Scand. J. Educ. Res. 2022, 67, 621–636. [Google Scholar] [CrossRef]
  135. Fieger, P.; Foley, A. Perceived personal benefits from study as determinants of student satisfaction in Australian vocational education and training. Educ. Train. 2024, 66, 1293–1310. [Google Scholar] [CrossRef]
  136. Stenalt, M.H.; Lassesen, B. Does student agency benefit student learning? A systematic review of higher education research. Assess. Eval. High. Educ. 2021, 47, 653–669. [Google Scholar] [CrossRef]
  137. Nieminen, J.; Tuohilampi, L. ‘Finally studying for myself’—Examining student agency in summative and formative self-assessment models. Assess. Eval. High. Educ. 2020, 45, 1031–1045. [Google Scholar] [CrossRef]
  138. Wang, T.; Cheng, E.C.K. An Investigation of Barriers to Hong Kong K-12 Schools Incorporating Artificial Intelligence in Education. Comput. Educ. Artif. Intell. 2021, 2, 100031. [Google Scholar] [CrossRef]
  139. Alamäki, A.; Nyberg, C.; Kimberley, A.; Salonen, A.O. Artificial intelligence literacy in sustainable development: A learning experiment in higher education. Front. Educ. 2024, 9, 1343406. [Google Scholar] [CrossRef]
  140. Gartner, S.; Krašna, M. Ethics of artificial intelligence in education. Rev. Elem. Educ. 2023, 16, 61–79. [Google Scholar] [CrossRef]
  141. Wang, S.; Sun, Z. Roles of artificial intelligence experience, information redundancy, and familiarity in shaping active learning: Insights from intelligent personal assistants. Educ. Inf. Technol. 2025, 30, 2525–2546. [Google Scholar] [CrossRef]
  142. Al-Abdullatif, A.; Alsubaie, M.A. ChatGPT in learning: Assessing students’ use intentions through the lens of perceived value and the influence of AI literacy. Behav. Sci. 2024, 14, 845. [Google Scholar] [CrossRef]
  143. Lee, I.A.; Ali, S.; Zhang, H.; DiPaola, D.; Breazeal, C. Developing middle school students’ AI literacy. In Proceedings of the SIGCSE 21: Proceedings of the 52nd ACM Technical Symposium on Computer Science Education, New York, NY, USA, 13–20 March 2021; pp. 1001–1007. [Google Scholar] [CrossRef]
  144. Vanneste, B.S.; Puranam, P. Artificial intelligence, trust, and perceptions of agency. Acad. Manage. Rev. 2024, 49, amr-2022. [Google Scholar] [CrossRef]
  145. Lin, P.-Y.; Chai, C.-S.; Jong, M.S.-Y.; Dai, Y.; Guo, Y.; Qin, J. Modeling the Structural Relationship among Primary Students’ Motivation to Learn Artificial Intelligence. Comput. Educ. Artif. Intell. 2021, 2, 100006. [Google Scholar] [CrossRef]
  146. Vansteenkiste, M.; Simons, J.; Lens, W.; Sheldon, K.M.; Deci, E.L. Motivating Learning, Performance, and Persistence: The Synergistic Effects of Intrinsic Goals and Autonomy Support. J. Pers. Soc. Psychol. 2004, 87, 246–260. [Google Scholar] [CrossRef] [PubMed]
  147. Bandura, A. Social Cognitive Theory: An Agentic Perspective. Annu. Rev. Psychol. 2001, 52, 1–26. [Google Scholar] [CrossRef] [PubMed]
  148. Kong, S.-C.; Cheung, M.-Y.W.; Tsang, O. Developing an Artificial Intelligence Literacy Framework: Evaluation of a Literacy Course for Senior Secondary Students Using a Project-Based Learning Approach. Comput. Educ. Artif. Intell. 2024, 6, 100214. [Google Scholar] [CrossRef]
  149. Frazier, L.D.; Schwartz, B.L.; Metcalfe, J. The MAPS Model of Self-Regulation: Integrating Metacognition, Agency, and Possible Selves. Metacognition Learn. 2021, 16, 297–318. [Google Scholar] [CrossRef]
  150. Santokhie, S.; Lipps, G.E. Development and Validation of the Tertiary Student Locus of Control Scale. SAGE Open 2020, 10, 2158244019899061. [Google Scholar] [CrossRef]
  151. Shepherd, S.; Owen, D.; Fitch, T.J.; Marshall, J.L. Locus of Control and Academic Achievement in High School Students. Psychol. Rep. 2006, 98, 318–322. [Google Scholar] [CrossRef] [PubMed]
  152. Terzi, A.R.; Çetin, G.; Eser, A. The Relationship between Undergraduate Students’ Locus of Control and Epistemological Beliefs. Educ. Res. 2012, 3, 30–39. Available online: https://www.interesjournals.org/articles/the-relationship-between-undergraduate-studentslocus-of-control-and-epistemological-beliefs.pdf (accessed on 17 May 2025).
  153. Celik, I.; Sarıcam, H. The Relationships between Positive Thinking Skills, Academic Locus of Control and Grit in Adolescents. Univ. J. Educ. Res. 2018, 6, 392–398. Available online: http://files.eric.ed.gov/fulltext/EJ1171309.pdf (accessed on 24 April 2025).
  154. Selwyn, N. On the Limits of Artificial Intelligence (AI) in Education. Nord. Tidsskr. Pedagog. Og Krit. 2024, 10, 3–14. [Google Scholar] [CrossRef]
  155. Anderson, A.; Hattie, J.; Hamilton, R. Locus of control, self-efficacy, and motivation in different schools: Is moderation the key to success? Educ. Psychol. 2005, 25, 517–535. [Google Scholar] [CrossRef]
  156. Buluş, M. Goal orientations, locus of control and academic achievement in prospective teachers: An individual differences perspective. Educ. Sci. Theory Pract. 2011, 11, 540–546. Available online: https://files.eric.ed.gov/fulltext/EJ927364.pdf (accessed on 3 May 2025).
  157. Karaman, M.A.; Nelson, K.M.; Vela, J.C. The mediation effects of achievement motivation and locus of control between academic stress and life satisfaction in undergraduate students. Br. J. Guid. Couns. 2017, 46, 375–384. [Google Scholar] [CrossRef]
  158. Yeşilyurt, E. Academic locus of control, tendencies towards academic dishonesty and test anxiety levels as the predictors of academic self-efficacy. Educ. Sci. Theory Pract. 2014, 14, 1945–1956. Available online: http://files.eric.ed.gov/fulltext/EJ1050425.pdf (accessed on 14 May 2025).
  159. Keith, T.; Pottebaum, S.M.; Eberhart, S.W. Effects of self-concept and locus of control on academic achievement: A large-sample path analysis. J. Psychoeduc. Assess. 1986, 4, 61–72. [Google Scholar] [CrossRef]
  160. Maqsud, M. Relationships of locus of control to self-esteem, academic achievement, and prediction of performance among Nigerian secondary school pupils. Br. J. Educ. Psychol. 1983, 53, 215–221. [Google Scholar] [CrossRef]
  161. Findley, M.J.; Cooper, H.M. Locus of Control and Academic Achievement: A Literature Review. J. Pers. Soc. Psychol. 1983, 44, 419–427. [Google Scholar] [CrossRef]
  162. Purcell, D.; Cavanaugh, G.; Thomas-Purcell, K.; Caballero, J.; Waldrop, D.; Ayala, V.; Davenport, R.; Ownby, R. e-Health literacy scale, patient attitudes, medication adherence, and internal locus of control. HLRP Health Lit. Res. Pract. 2023, 7, e80–e88. [Google Scholar] [CrossRef]
  163. Villa, E.A.; Sebastian, M.A. Achievement Motivation, Locus of Control and Study Habits as Predictors of Mathematics Achievement of New College Students. Int. Electron. J. Math. Educ. 2021, 16, em0661. [Google Scholar] [CrossRef]
  164. Ng, D.T.K.; Su, J.; Leung, J.; Chu, S. Artificial Intelligence (AI) Literacy Education in Secondary Schools: A Review. Interact. Learn. Environ. 2023, 32, 6204–6224. [Google Scholar] [CrossRef]
  165. Azevedo, R.; Cromley, J.G. Does Training on Self-Regulated Learning Facilitate Students’ Learning With Hypermedia? J. Educ. Psychol. 2004, 96, 523–535. [Google Scholar] [CrossRef]
  166. Pruessner, L.; Barnow, S.; Holt, D.V.; Joormann, J.; Schulze, K. A cognitive control framework for understanding emotion regulation flexibility. Emotion 2020, 20, 21–29. [Google Scholar] [CrossRef]
  167. Dane, E. Reconsidering the trade-off between expertise and flexibility: A cognitive entrenchment perspective. Acad. Manage. Rev. 2010, 35, 579–603. [Google Scholar] [CrossRef]
  168. Credé, M.; Tynan, M.C.; Harms, P.D. Much Ado about Grit: A Meta-Analytic Synthesis of the Grit Literature. J. Pers. Soc. Psychol. 2017, 113, 492–511. [Google Scholar] [CrossRef]
  169. Yang, Y.; Xia, N. Enhancing students’ metacognition via AI-driven educational support systems. Int. J. Emerg. Technol. Learn. 2023, 18, 133–148. [Google Scholar] [CrossRef]
  170. Carolus, A.; Koch, M.; Straka, S.; Latoschik, M.; Wienrich, C. MAILS—Meta AI Literacy Scale: Development and testing of an AI literacy questionnaire based on well-founded competency models and psychological change- and meta-competencies. arXiv 2023, arXiv:2302.09319. [Google Scholar] [CrossRef]
  171. Shiri, A. Artificial intelligence literacy: A proposed faceted taxonomy. Digit. Libr. Perspect. 2024, 40, 681–699. [Google Scholar] [CrossRef]
  172. Taşkın, M. Artificial intelligence in personalized education: Enhancing learning outcomes through adaptive technologies and data-driven insights. Hum.-Comput. Interact. 2025, 8, 173. [Google Scholar] [CrossRef]
  173. Hashim, S.; Omar, M.K.; Jalil, H.A.; Sharef, N.M. Trends on technologies and artificial intelligence in education for personalized learning: Systematic literature review. Int. J. Acad. Res. Prog. Educ. Dev. 2022, 11, 670–686. [Google Scholar] [CrossRef]
  174. Strielkowski, W.; Grebennikova, V.; Lisovskiy, A.; Rakhimova, G.; Vasileva, T. AI-driven adaptive learning for sustainable educational transformation. Sustain. Dev. 2024, 33, 1921–1947. [Google Scholar] [CrossRef]
  175. Zheng, L.; Niu, J.; Zhong, L.; Gyasi, J.F. The effectiveness of artificial intelligence on learning achievement and learning perception: A meta-analysis. Interact. Learn. Environ. 2021, 31, 5650–5664. [Google Scholar] [CrossRef]
  176. Xu, Z.; Zhao, Y.; Zhang, B.; Liew, J.; Kogut, A. A meta-analysis of the efficacy of self-regulated learning interventions on academic achievement in online and blended environments in K–12 and higher education. Behav. Inf. Technol. 2022, 42, 2911–2931. [Google Scholar] [CrossRef]
  177. Husman, J.; Lens, W. The Role of the Future in Student Motivation. Educ. Psychol. 1999, 34, 113–125. [Google Scholar] [CrossRef]
  178. Dweck, C.S.; Leggett, E.L. A Social-Cognitive Approach to Motivation and Personality. Psychol. Rev. 1988, 95, 256–273. [Google Scholar] [CrossRef]
  179. Ames, C.A.; Archer, J. Achievement goals in the classroom: Students’ learning strategies and motivation processes. J. Educ. Psychol. 1988, 80, 260–267. [Google Scholar] [CrossRef]
  180. Baumeister, R.; Vohs, K. Self-regulation, ego depletion, and motivation. Soc. Pers. Psychol. Compass 2007, 1, 115–128. [Google Scholar] [CrossRef]
  181. Hwang, H.S.; Zhu, L.; Cui, Q. Development and validation of a digital literacy scale in the artificial intelligence era for college students. KSII Trans. Internet Inf. Syst. 2023, 17, 2241–2258. [Google Scholar] [CrossRef]
  182. Zhao, L.; Wu, X.; Luo, H. Developing AI literacy for primary and middle school teachers in China: Based on a structural equation modeling analysis. Sustainability 2022, 14, 14549. [Google Scholar] [CrossRef]
  183. Relmasira, S.C.; Lai, Y.C.; Donaldson, J.P. Fostering AI Literacy in Elementary Science, Technology, Engineering, Art, and Mathematics (STEAM) Education in the Age of Generative AI. Sustainability 2023, 15, 13595. [Google Scholar] [CrossRef]
  184. Kumar, P.C.; Cotter, K.; Cabrera, L.Y. Taking responsibility for meaning and mattering: An agential realist approach to generative AI and literacy. Read. Res. Q. 2024, 59, 570–578. [Google Scholar] [CrossRef]
  185. Savec, V.F.; Jedrinović, S. The Role of AI Implementation in Higher Education in Achieving the Sustainable Development Goals: A Case Study from Slovenia. Sustainability 2025, 17, 183. [Google Scholar] [CrossRef]
  186. Xu, Z. AI in education: Enhancing learning experiences and student outcomes. Appl. Comput. Eng. 2024, 51, 104–111. [Google Scholar] [CrossRef]
  187. Druga, S.; Otero, N.; Ko, A.J. The landscape of teaching resources for AI education. In Proceedings of the ITiCSE 22: 27th ACM Conference on Innovation and Technology in Computer Science Education, New York, NY, USA, 8–13 July 2022; Volume 1, pp. 420–426. [Google Scholar] [CrossRef]
  188. D’Mello, S.K.; Biddy, Q.L.; Breideband, T.; Bush, J.B.; Chang, M.; Cortez, A.; Flanigan, J.; Foltz, P.W.; Gorman, J.C.; Hirshfield, L.M.; et al. From learning optimization to learner flourishing: Reimagining AI in education at the Institute for Student-AI Teaming (iSAT). AI Mag. 2024, 45, 61–68. [Google Scholar] [CrossRef]
  189. Long, D.; Teachey, A.; Magerko, B. Family learning talk in AI literacy learning activities. In Proceedings of the CHI 22: CHI Conference on Human Factors in Computing Systems, New York, NY, USA, 29 April–5 May 2022; Article 268. [Google Scholar] [CrossRef]
  190. Ma, S.; Chen, Z. The development and validation of the Artificial Intelligence Literacy Scale for Chinese college students (AILS-CCS). IEEE Access 2024, 12, 146419–146429. [Google Scholar] [CrossRef]
  191. Eager, B.; Brunton, R. Prompting higher education towards AI-augmented teaching and learning practice. J. Univ. Teach. Learn. Pract. 2023, 20, 1–19. [Google Scholar] [CrossRef]
  192. Schleiss, J.; Laupichler, M.C.; Raupach, T.; Stober, S. AI course design planning framework: Developing domain-specific AI education courses. Educ. Sci. 2023, 13, 954. [Google Scholar] [CrossRef]
  193. Sousa, M.; Mas, F.D.; Pesqueira, A.; Lemos, C.; Verde, J.M.; Cobianchi, L. The potential of AI in health higher education to increase the students’ learning outcomes. TEM J. 2021, 10, 1671–1679. [Google Scholar] [CrossRef]
  194. Dogan, M.E.; Dogan, T.G.; Bozkurt, A. The use of artificial intelligence (AI) in online learning and distance education processes: A systematic review of empirical studies. Appl. Sci. 2023, 13, 3612. [Google Scholar] [CrossRef]
  195. Funda, V.; Mbangeleli, N. Artificial intelligence (AI) as a tool to address academic challenges in South African higher education. Int. J. Learn. Teach. Educ. Res. 2024, 23, 99–114. [Google Scholar] [CrossRef]
  196. Ribič, L.; Devetak, I.; Potočnik, R. Pre-Service Primary School Teachers’ Understanding of Biogeochemical Cycles of Elements. Educ. Sci. 2025, 15, 110. [Google Scholar] [CrossRef]
  197. Li, L.; Ruppar, A. Conceptualizing teacher agency for inclusive education: A systematic and international review. Teach. Educ. Spec. Educ. 2020, 44, 42–59. [Google Scholar] [CrossRef]
  198. Yang, S.; Han, J. Perspectives of transformative learning and professional agency: A native Chinese language teacher’s story of teacher identity transformation in Australia. Front. Psychol. 2022, 13, 894025. [Google Scholar] [CrossRef]
  199. Shavard, G. Teacher agency in collaborative lesson planning: Stabilising or transforming professional practice? Teach. Teach. 2022, 28, 555–567. [Google Scholar] [CrossRef]
  200. Klemen, T.; Devetak, I. Introduction of Hydrosphere Environmental Problems in Lower Secondary School Chemistry Lessons. Educ. Sci. 2025, 15, 111. [Google Scholar] [CrossRef]
  201. Liu, K.; Ball, A. Critical reflection and generativity: Toward a framework of transformative teacher education for diverse learners. Rev. Res. Educ. 2019, 43, 105–168. [Google Scholar] [CrossRef]
  202. Brevik, L.M.; Gudmundsdottir, G.; Lund, A.; Strømme, T. Transformative agency in teacher education: Fostering professional digital competence. Teach. Teach. Educ. 2019, 86, 102877. [Google Scholar] [CrossRef]
  203. Cochran-Smith, M.; Craig, C.; Orland-Barak, L.; Cole, C.; Hill-Jackson, V. Agents, agency, and teacher education. J. Teach. Educ. 2022, 73, 445–448. [Google Scholar] [CrossRef]
Figure 1. Multilevel pathways from transformative student agency to AI literacy competencies.
Figure 2. Agency profiles among secondary technical school students according to constructs and empowerment cognitions (demarcated with dashed lines).
Figure 3. Path model showing the predictive relationships between six transformative agency constructs and overall AI literacy, controlling for sex, track, and study year (n = 425; R² = 0.203, Q² = 0.11).
Table 1. A summary of typical learning activities and focal concepts with key learner outcomes reported in the literature.

| Big Idea | Typical Learning Activities and Focal Concepts | Key Learner Outcomes Reported |
|---|---|---|
| 1. Perception (how computers sense the world) | Hands-on image-/sound-classification demos; sensor data investigations; pattern-recognition tasks tying perception to later ML work | Higher self-efficacy when students train simple classifiers; greater perseverance through troubleshooting; early understanding that “data quality → model quality” |
| 2. Representation and Reasoning (how AI stores and manipulates knowledge) | Building decision trees, rule-based “expert” systems, concept maps; explaining why a model made a choice | Growth in metacognition (“Why did the AI infer X?”); more adaptive help seeking and iterative debugging; strong link to mastery goal orientation → deeper conceptual transfer |
| 3. Learning (machine learning from data and experience) | Training and tuning ML models; experimenting with hyperparameters; comparing training vs. test accuracy | Significant rise in students’ AI empowerment (confidence, interest, agency); narrowing of the gender gap in AI self-efficacy; better grit/self-regulation predicts more successful model refinement |
| 4. Natural Interaction (AI–human interfaces) | Programming conversational agents or voice-controlled robots; evaluating user experience of AI assistants | Boost in creative self-efficacy and ownership when learners set their own project goals; enthusiastic exploration of multimodal interaction; autonomy-supportive tasks foster innovative problem solving |
| 5. Societal Impact (ethics, equity, future-of-work) | Case studies on algorithmic bias; AI-for-social-good design challenges; debating policy and privacy scenarios | Heightened sense of personal responsibility and internal locus of control; deeper ethical reasoning when projects connect to students’ lived experience; evidence that an empowerment mindset → sustained civic engagement with AI issues |
Table 2. Integrating the three complementary theoretical frameworks that operate at different analytical levels.

| Element | Role in the Activity System (CHAT) | Operative Framework | Operational Variables |
|---|---|---|---|
| Subject | Slovenian secondary technical students | Psychological empowerment theory | Four agency dimensions: meaning, competence, self-determination, impact |
| Mediating artifacts | AI tools, datasets, and classroom software | – | Contextual but not directly measured |
| Object/outcome | AI literacy to be mastered | AI4K12 Five Big Ideas | Perception, Representation and Reasoning, Learning, Natural Interaction, Societal Impact |
| Rules/community/division of labor | School track, sex, study year | – | Control variables |
Table 3. Descriptive item statistics for the AI literacy items.

| Competency [23] | Item | Item Label | Difficulty Index | Difficulty Index Corrected for Guessing | Discrimination Index |
|---|---|---|---|---|---|
| Recognizing AI | 1 | Typical applications | 0.061 | 0.000 | 0.281 |
| | 2 | Recognizing a chatbot | 0.306 | 0.075 | 0.255 |
| Interdisciplinarity | 3 | AI systems | 0.306 | 0.075 | 0.182 |
| | 4 | Interdisciplinary research fields | 0.249 | 0.000 | 0.248 |
| Understanding intelligence | 5 | Intelligence of AI 1 | 0.633 | 0.511 | 0.174 |
| | 6 | Intelligence of AI 2 | 0.522 | 0.363 | 0.157 |
| | 7 | Similarities of humans and AI | 0.332 | 0.109 | 0.209 |
| General vs. narrow | 8 | Weak and strong AI | 0.162 | 0.000 | 0.217 |
| | 9 | Capabilities of weak AI | 0.280 | 0.040 | 0.209 |
| AI’s strengths and weaknesses | 10 | Superiority of AI | 0.113 | 0.000 | 0.304 |
| | 11 | Superiority of humans | 0.198 | 0.000 | 0.204 |
| Representations | 12 | Knowledge representations 1 | 0.294 | 0.059 | 0.207 |
| | 13 | Knowledge representations 2 | 0.193 | 0.000 | 0.258 |
| Decision-making | 14 | Decision-making | 0.266 | 0.021 | 0.266 |
| | 15 | Optimization | 0.376 | 0.168 | 0.202 |
| | 16 | Supervised and unsupervised learning | 0.231 | 0.000 | 0.209 |
| Machine learning steps | 17 | Iterative process | 0.256 | 0.008 | 0.200 |
| | 18 | Steps in supervised learning | 0.005 | 0.000 | 0.309 |
| | 19 | Training and test data | 0.308 | 0.077 | 0.181 |
| Human role in AI | 20 | Human influence 1 | 0.325 | 0.100 | 0.192 |
| | 21 | Human influence 2 | 0.313 | 0.084 | 0.218 |
| Technical | 22 | Programmability | 0.565 | 0.420 | 0.190 |
| Data literacy | 23 | Visualization of data | 0.271 | 0.028 | 0.183 |
| Learning from data | 24 | Learning from data | 0.487 | 0.316 | 0.157 |
| | 25 | Learning from user data | 0.261 | 0.015 | 0.179 |
| Critical thinking | 26 | Representativeness of data | 0.661 | 0.548 | 0.353 |
| Ethics | 27 | Ethical principles | 0.282 | 0.043 | 0.149 |
| | 28 | Black box | 0.388 | 0.184 | 0.171 |
| | 29 | Societal challenges | 0.504 | 0.339 | 0.244 |
| | 30 | Risks of AI | 0.245 | 0.000 | 0.171 |
| | 31 | Legal challenges | 0.365 | 0.153 | 0.242 |

Bold p-value of 0.000 = performance below chance. A gray-shaded competency cell indicates a significant difference between the two tracks in favor of the industrial engineering track (p < 0.05).
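Although the article does not spell out the correction formula, the “corrected for guessing” column is consistent with the standard chance-level adjustment for four-option multiple-choice items, floored at zero (an inference from the reported values, flagged here as such):

$$p_{\text{corr}} = \max\!\left(0,\ \frac{p - c}{1 - c}\right), \qquad c = 0.25,$$

e.g., item 5: (0.633 − 0.25)/0.75 ≈ 0.511, and item 22: (0.565 − 0.25)/0.75 = 0.420, both matching the tabled values.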
Table 4. Indicators and summary statistics for verifying the convergent validity of the student agency constructs.

| Student Agency Constructs | Outer Loadings (λ) Range | Composite Reliability ρC | Cronbach’s α | ρA | AVE |
|---|---|---|---|---|---|
| Self-efficacy (SE) | 0.75–0.88 | 0.89 | 0.84 | 0.85 | 0.68 |
| Perseverance of interest (PI) | 0.69–0.95 | 0.90 | 0.85 | 0.88 | 0.69 |
| Perseverance of effort (PE) | 0.66–0.98 | 0.89 | 0.83 | 0.87 | 0.68 |
| Mastery learning goal orientation (MLGO) | 0.80–0.98 | 0.92 | 0.88 | 0.89 | 0.74 |
| Locus of control (LC) | 0.74–0.91 | 0.90 | 0.85 | 0.88 | 0.70 |
| Future orientation (FO) | 0.80–0.98 | 0.91 | 0.87 | 0.89 | 0.73 |
| Self-regulation (SR) | 0.78–0.97 | 0.91 | 0.88 | 0.89 | 0.73 |
| Metacognitive self-regulation (MSR) | 0.79–0.96 | 0.91 | 0.87 | 0.88 | 0.73 |
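For reference, the convergent-validity statistics in Table 4 follow the standard PLS-SEM definitions (textbook formulas, not reproduced from the article), with λ_i denoting the p standardized outer loadings of a construct:

$$\mathrm{AVE} = \frac{1}{p}\sum_{i=1}^{p}\lambda_i^2, \qquad \rho_C = \frac{\left(\sum_{i=1}^{p}\lambda_i\right)^2}{\left(\sum_{i=1}^{p}\lambda_i\right)^2 + \sum_{i=1}^{p}\left(1-\lambda_i^2\right)}.$$

All constructs clear the conventional AVE ≥ 0.50 and ρC ≥ 0.70 thresholds.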
Table 5. Fornell–Larcker criterion of discriminant validity using the square root of the AVE (on-diagonal) and correlations among student agency constructs (off-diagonal) complemented with HTMT values (in parentheses).

| Student Agency Constructs | SE | PI | PE | MLGO | LC | FO | SR | MSR |
|---|---|---|---|---|---|---|---|---|
| SE | 0.82 | | | | | | | |
| PI | 0.42 (0.49) | 0.83 | | | | | | |
| PE | 0.29 (0.34) | 0.55 (0.63) | 0.82 | | | | | |
| MLGO | 0.47 (0.55) | 0.41 (0.48) | 0.39 (0.44) | 0.86 | | | | |
| LC | 0.53 (0.63) | 0.31 (0.36) | 0.24 (0.27) | 0.38 (0.44) | 0.84 | | | |
| FO | 0.27 (0.31) | 0.25 (0.29) | 0.28 (0.32) | 0.32 (0.36) | 0.24 (0.27) | 0.85 | | |
| SR | 0.34 (0.39) | 0.50 (0.56) | 0.45 (0.39) | 0.39 (0.44) | 0.26 (0.30) | 0.38 (0.44) | 0.86 | |
| MSR | 0.46 (0.53) | 0.47 (0.54) | 0.46 (0.54) | 0.56 (0.64) | 0.39 (0.45) | 0.40 (0.46) | 0.49 (0.56) | 0.85 |
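The two checks combined in Table 5 can be stated compactly (standard criteria [89,102], not article-specific formulas): the Fornell–Larcker criterion requires each construct’s √AVE to exceed its correlations with all other constructs, and HTMT ratios should fall below the conservative 0.85 cutoff:

$$\sqrt{\mathrm{AVE}_i} > \max_{j \neq i}\left|r_{ij}\right|, \qquad \mathrm{HTMT}_{ij} < 0.85.$$

Both conditions hold here: the smallest on-diagonal value (0.82) exceeds the largest off-diagonal correlation (0.56), and the largest HTMT is 0.64.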
Table 6. Total sample means, standard deviations, confidence intervals, skewness, and kurtosis for the student agency constructs (n = 425).

| Student Agency Constructs | M | SD | 95% CI | Skewness | Kurtosis |
|---|---|---|---|---|---|
| SE | 4.51 | 0.98 | [4.40, 4.59] | −0.67 | 0.33 |
| PI | 3.83 | 0.97 | [3.83, 3.92] | −0.05 | −0.45 |
| PE | 3.71 | 1.04 | [3.61, 3.81] | 0.03 | −0.42 |
| MLGO | 4.24 | 1.08 | [4.13, 4.34] | −0.30 | −0.29 |
| LC | 4.55 | 0.99 | [4.45, 4.64] | −0.64 | 0.44 |
| FO | 3.73 | 1.16 | [3.62, 3.84] | −0.11 | −0.39 |
| SR | 3.86 | 1.10 | [3.75, 3.96] | −0.21 | −0.29 |
| MSR | 4.09 | 1.11 | [3.98, 4.19] | −0.23 | −0.42 |
Table 7. Pearson correlations for student agency constructs. All correlation coefficients (off-diagonal) are significant at the 0.01 level (2-tailed).

| Student Agency Constructs | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 |
|---|---|---|---|---|---|---|---|---|
| SE | 1.00 | | | | | | | |
| PI | 0.43 | 1.00 | | | | | | |
| PE | 0.29 | 0.53 | 1.00 | | | | | |
| MLGO | 0.48 | 0.42 | 0.39 | 1.00 | | | | |
| LC | 0.54 | 0.32 | 0.24 | 0.38 | 1.00 | | | |
| FO | 0.27 | 0.25 | 0.28 | 0.32 | 0.24 | 1.00 | | |
| SR | 0.34 | 0.49 | 0.45 | 0.39 | 0.28 | 0.38 | 1.00 | |
| MSR | 0.45 | 0.47 | 0.48 | 0.55 | 0.39 | 0.41 | 0.51 | 1.00 |
Table 8. Mean scores and standard deviations of the agency profiles on the cluster variables for the 4-cluster solution.

| Student Agency Constructs | HC 1 (n = 204) | HC 2 (n = 66) | HC 3 (n = 42) | HC 4 (n = 113) | k-Means 1 (n = 121) | k-Means 2 (n = 117) | k-Means 3 (n = 68) | k-Means 4 (n = 119) |
|---|---|---|---|---|---|---|---|---|
| SE | 4.45 (0.77) | 3.29 (0.96) | 4.55 (0.60) | 5.25 (0.66) | 4.20 (0.79) | 4.75 (0.58) | 3.28 (0.94) | 5.25 (0.65) |
| PI | 3.84 (0.72) | 2.78 (0.68) | 2.90 (0.61) | 4.77 (0.59) | 3.81 (0.75) | 3.47 (0.77) | 2.88 (0.67) | 4.75 (0.66) |
| PE | 3.81 (0.80) | 2.80 (0.84) | 2.55 (0.65) | 4.50 (0.87) | 3.85 (0.78) | 3.25 (0.85) | 2.78 (0.84) | 4.55 (0.87) |
| MLGO | 4.05 (0.79) | 2.98 (0.92) | 4.34 (0.85) | 5.27 (0.67) | 3.88 (0.71) | 4.33 (0.84) | 3.01 (0.92) | 5.22 (0.72) |
| LC | 4.41 (0.86) | 3.63 (1.08) | 4.96 (0.86) | 5.18 (0.68) | 4.14 (0.83) | 4.88 (0.71) | 3.55 (1.08) | 5.20 (0.63) |
| FO | 3.55 (1.05) | 2.91 (1.07) | 3.64 (0.76) | 4.59 (0.98) | 4.15 (0.73) | 3.06 (0.91) | 2.76 (0.94) | 4.52 (1.07) |
| SR | 3.88 (0.75) | 3.02 (0.99) | 2.46 (0.64) | 4.82 (0.86) | 4.02 (0.61) | 3.21 (0.97) | 2.91 (0.90) | 4.87 (0.77) |
| MSR | 4.02 (0.90) | 2.70 (0.76) | 4.08 (0.96) | 5.02 (0.73) | 3.99 (0.78) | 3.97 (0.92) | 2.68 (0.77) | 5.11 (0.69) |

HC = hierarchical clustering; values are M (SD).
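As an illustrative sketch only — the article does not publish its analysis code, so the Ward linkage, the standardization, and the placeholder data below are assumptions — the dual hierarchical/k-means profiling summarized in Table 8 can be outlined in Python as follows:

```python
# Minimal sketch (not the authors' code): 4-cluster agency profiles via
# hierarchical (assumed Ward) clustering and k-means. X stands in for the
# real 425 x 8 matrix of standardized agency-construct scores.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from sklearn.cluster import KMeans

rng = np.random.default_rng(42)
X = rng.normal(size=(425, 8))  # placeholder for the real construct scores

# Hierarchical clustering, tree cut at four clusters (labels 1-4)
hc_labels = fcluster(linkage(X, method="ward"), t=4, criterion="maxclust")

# k-means with four clusters (labels 0-3)
km_labels = KMeans(n_clusters=4, n_init=10, random_state=42).fit_predict(X)

# Per-cluster sizes and construct means, analogous to the columns of Table 8
for name, labels in [("hierarchical", hc_labels - 1), ("k-means", km_labels)]:
    sizes = np.bincount(labels, minlength=4)
    means = np.vstack([X[labels == k].mean(axis=0) for k in range(4)])
    print(name, sizes, means.round(2), sep="\n")
```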
Table 9. Silhouette score, AIC, and BIC depending on the number of clusters. The metric with the best result is shown in bold.

| Number of Clusters | Silhouette Score | AIC | BIC |
|---|---|---|---|
| 2 | **0.56** | 1928.42 | 2058.09 |
| 3 | 0.53 | 1798.74 | 1993.24 |
| 4 | 0.51 | 1732.80 | **1992.13** |
| 5 | 0.45 | 1685.36 | 2009.53 |
| 6 | 0.38 | 1648.88 | 2037.88 |
| 7 | 0.33 | 1627.60 | 2081.43 |
| 8 | 0.26 | 1614.16 | 2132.82 |
| 9 | 0.23 | 1604.00 | 2187.50 |
| 10 | 0.22 | **1596.09** | 2244.42 |
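A sketch of how the metrics in Table 9 could be computed, assuming `X` is the 425 × 8 matrix of standardized agency scores. Silhouette is taken from k-means labels; because the exact AIC/BIC formulation is not restated here, the information criteria below come from a Gaussian mixture fitted with the same number of components, which is one common stand-in.

```python
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score
from sklearn.mixture import GaussianMixture

def cluster_selection_metrics(X, k_range=range(2, 11), seed=42):
    """Silhouette score for k-means plus AIC/BIC from a Gaussian-mixture proxy,
    one row per candidate number of clusters."""
    rows = []
    for k in k_range:
        labels = KMeans(n_clusters=k, n_init=10, random_state=seed).fit_predict(X)
        gm = GaussianMixture(n_components=k, random_state=seed).fit(X)
        rows.append((k, silhouette_score(X, labels), gm.aic(X), gm.bic(X)))
    return rows
```

Read together, the three criteria in Table 9 disagree: silhouette peaks at k = 2, AIC keeps falling through k = 10, and BIC reaches its minimum at k = 4, so the four-cluster solution trades separation against parsimony.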
Table 10. Descriptive statistics on AI literacy amongst human service and industrial engineering track students.

| Track | n | M | SD | 95% CI | Skewness | Kurtosis | Md | IQR |
|---|---|---|---|---|---|---|---|---|
| Human service | 217 | 8.88 | 2.97 | [8.48, 9.27] | 1.16 | 2.91 | 9.00 | 3.50 |
| Industrial engineering | 208 | 10.70 | 5.01 | [10.01, 11.38] | 1.57 | 3.25 | 10.00 | 4.75 |
Table 11. Pairwise comparisons of AI literacy across student agency profiles (1–4) using Kruskal–Wallis (H) with Dunn–Bonferroni post hoc tests and effect size (ε²) (df = 3, n = 425).

| Kruskal–Wallis/Pairwise Contrast | 3 vs. 1 | 2 vs. 3 | 2 vs. 1 | 4 vs. 3 | 4 vs. 1 | 4 vs. 2 |
|---|---|---|---|---|---|---|
| H-statistic | 15.27 | 69.30 | 54.20 | 73.55 | 58.30 | 4.25 |
| Adjusted p-value | 1.000 | 0.001 | 0.004 | 0.000 | 0.001 | 1.000 |
| Effect size ε² * | 0.036 | 0.163 | 0.128 | 0.173 | 0.142 | 0.010 |

* ε² benchmarks: 0.01 = small, 0.06 = medium, 0.14 = large [93].
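Table 11 combines an omnibus Kruskal–Wallis test, Dunn–Bonferroni pairwise contrasts, and the ε² effect size, ε² = H(n + 1)/(n² − 1). A sketch follows, assuming `groups` is a list of per-profile AI literacy score arrays and that the third-party scikit_posthocs package is installed; the per-contrast ε² values in Table 11 would apply the same formula to each pair of profiles separately.

```python
from scipy import stats
import scikit_posthocs as sp  # third-party package providing Dunn's test

def profile_comparison(groups):
    """Omnibus Kruskal-Wallis H and p, epsilon-squared effect size, and a matrix
    of Dunn-Bonferroni adjusted pairwise p-values."""
    H, p = stats.kruskal(*groups)
    n = sum(len(g) for g in groups)
    eps2 = H * (n + 1) / (n ** 2 - 1)          # epsilon-squared
    dunn_p = sp.posthoc_dunn(list(groups), p_adjust='bonferroni')
    return H, p, eps2, dunn_p
```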
Table 12. Measurement model statistics.

| Variable | Loadings (Min.–Max.) | Cronbach's α | ρC | AVE | √AVE | Max. HTMT | Max. \|r\| |
|---|---|---|---|---|---|---|---|
| FO | [0.656–0.940] | 0.874 | 0.891 | 0.676 | 0.822 | 0.463 | 0.435 |
| LC | [0.710–0.986] | 0.854 | 0.901 | 0.697 | 0.835 | 0.450 | 0.535 |
| MLGO | [0.776–0.986] | 0.879 | 0.918 | 0.738 | 0.859 | 0.644 | 0.565 |
| MSR | [0.793–0.985] | 0.874 | 0.915 | 0.731 | 0.855 | 0.560 | 0.483 |
| PE | [0.704–0.982] | 0.835 | 0.891 | 0.675 | 0.822 | 0.636 | 0.340 |
| PI | [0.732–0.969] | 0.849 | 0.896 | 0.686 | 0.828 | 0.567 | 0.468 |
| SE | [0.774–0.865] | 0.844 | 0.895 | 0.681 | 0.825 | 0.630 | 0.340 |
| SR | [0.586–0.955] | 0.877 | 0.885 | 0.665 | 0.815 | 0.567 | 0.138 |
Table 13. Structural coefficients.

| Path | β | Bias | 2.5% | 97.5% | p-Value | Cohen f² * |
|---|---|---|---|---|---|---|
| FO → AI literacy | −0.052 | 0.010 | −0.204 | 0.044 | 0.393 | 0.002 |
| LC → AI literacy | −0.099 | 0.017 | −0.220 | −0.022 | 0.043 | 0.008 |
| MLGO → AI literacy | 0.219 | −0.007 | 0.096 | 0.350 | 0.001 | 0.036 |
| MSR → AI literacy | 0.222 | −0.024 | 0.123 | 0.356 | 0.000 | 0.030 |
| PE → AI literacy | 0.036 | −0.010 | −0.082 | 0.184 | 0.598 | 0.001 |
| PI → AI literacy | −0.081 | 0.016 | −0.226 | 0.026 | 0.194 | 0.005 |
| SE → AI literacy | 0.195 | −0.021 | 0.092 | 0.331 | 0.002 | 0.027 |
| Sex → AI literacy | −0.138 | −0.013 | −0.328 | 0.069 | 0.175 | 0.002 |
| SR → AI literacy | −0.179 | 0.028 | −0.351 | −0.080 | 0.014 | 0.026 |
| Study Year 3 → AI literacy | 0.012 | −0.001 | −0.200 | 0.218 | 0.911 | 0.000 |
| Study Year 4 → AI literacy | 0.093 | 0.002 | −0.105 | 0.294 | 0.366 | 0.001 |
| Track → AI literacy | 0.607 | −0.012 | 0.411 | 0.830 | 0.000 | 0.042 |

* Cohen f² can be interpreted as follows: <0.02 = negligible/no practical effect, ≥0.02 = small, ≥0.15 = medium, and ≥0.35 = large effect [118].
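The f² column reports how much each predictor uniquely adds to the explained variance: f² = (R²_full − R²_reduced)/(1 − R²_full), where the reduced model omits that predictor. A one-function sketch with hypothetical R² values (the study's R² values are not restated here):

```python
def cohen_f2(r2_full, r2_reduced):
    """Cohen's f^2 for one predictor: the drop in R^2 when the predictor is
    removed, scaled by the full model's unexplained variance."""
    return (r2_full - r2_reduced) / (1 - r2_full)

# Hypothetical: dropping a predictor lowers R^2 from 0.30 to 0.27
print(round(cohen_f2(0.30, 0.27), 3))  # 0.043 -> small by the benchmarks above
```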