1. Introduction
Artificial intelligence (AI) is becoming an integral component of contemporary education systems. In K–12 education, AI-enabled tools are increasingly used to support lesson planning, assessment design, feedback generation, administrative processes, and communication. In the United Arab Emirates (UAE), AI integration in schools is being shaped by formal education policy. According to the Emirates News Agency, a national AI curriculum has been approved for rollout across all government schools from kindergarten through Grade 12, beginning with the 2025–2026 academic year. As a result, investments have been directed toward digital infrastructure, connectivity, and access to advanced tools such as large language models (LLMs). However, the effectiveness of these investments depends not only on technological availability but also on the institutional conditions that support teachers’ capacity to integrate AI into everyday educational practice.
Research on AI in education has expanded substantially in recent years, yet much of the literature concentrates on technological capabilities, system design, or ethical considerations. Large-scale reviews indicate that educators and institutional conditions are often treated as secondary considerations, despite their central role in shaping how technologies are adopted and used (
Zawacki-Richter et al., 2019). This imbalance raises a fundamental concern: education systems may invest in AI tools without sufficient evidence on the human and organizational factors that enable meaningful and sustainable integration.
Institutional readiness for AI adoption extends beyond access to devices, platforms, or software. Studies on educational change consistently show that teachers’ beliefs, confidence, skills, and perceptions of support strongly influence whether new technologies are embraced or resisted (
Ertmer & Ottenbreit-Leftwich, 2010). Leadership encouragement, professional development, and organizational culture shape not only teachers’ willingness to experiment with AI, but also their ability to integrate it in ways that align with pedagogical goals. Without these conditions, technical access alone is unlikely to lead to substantive transformation (
Fullan, 2014).
Teacher readiness is also heterogeneous rather than uniform. Differences related to age, gender, and professional experience may influence how teachers perceive AI and its relevance to their work. While younger teachers are often assumed to be more digitally confident, empirical evidence suggests that readiness is mediated by contextual factors such as institutional support and role expectations rather than demographics alone (
Horváth et al., 2025). In K–12 settings, where teaching relies heavily on interpersonal interaction and professional judgment, understanding these variations is essential for evaluating readiness at the institutional level.
Beyond skills and support, AI integration raises important questions about its impact on teachers’ professional lives. AI is frequently promoted as a means of improving productivity and reducing workload, thereby enhancing work-life balance. However, research on teacher well-being suggests that technological innovations can both improve efficiency and impose new pressures, depending on how they are implemented and governed (
Day & Gu, 2014). Emerging evidence indicates that teachers’ perceptions of balance, autonomy, and control are closely linked to their engagement with AI-supported practices (
Nyongesa & Van Der Westhuizen, 2025). This suggests that productivity outcomes and work-life balance may function not only as consequences of AI integration, but also as conditions that shape adoption readiness.
At the same time, scholars have raised concerns about the potential mechanization of teaching through AI. While automation may free time for human interaction, it may also standardize practices and reshape professional roles in ways that affect the relational core of education (Selwyn, 2019). One might argue that AI in schools sits at a critical tension point between humanization and instrumentalization, making it essential to examine how teachers experience AI integration in practice rather than assuming inherently positive outcomes.
These issues are particularly salient in the context of the UAE, where ambitious national strategies emphasize innovation and digital transformation in education. Despite strong policy momentum and investment in AI infrastructure, there remains limited empirical evidence on whether institutional support and leadership encouragement contribute to AI adoption readiness beyond technical access, especially in K–12 education. Without such evidence, decision-makers risk equating readiness with connectivity and tools, while overlooking the human and organizational conditions that determine how AI is enacted in classrooms.
Against this backdrop, the present study develops a model of AI adoption readiness in K–12 education based on three interconnected dimensions: teacher skill transformation, perceived productivity outcomes (including work–life balance), and institutional readiness. Using quantitative survey data collected from 602 teachers across public and private schools in the UAE, the study employs reliability analysis, exploratory analysis, and multiple regression to examine the relative contribution of institutional support and leadership encouragement compared to technical access to AI tools.
Accordingly, this study examines the extent to which institutional support and leadership encouragement contribute to AI adoption readiness in K–12 education beyond access to AI tools and technical infrastructure.
By addressing this question, this study contributes to the literature by shifting the focus from technology availability to the human and institutional conditions that shape AI integration. It provides empirical evidence to inform AI investment decisions in education and advances understanding of how skills, productivity, and institutional readiness interact in shaping AI adoption in K–12 contexts.
2. Literature Review
The use of artificial intelligence (AI) in education is growing rapidly. It is already reshaping pedagogical approaches and requires teachers to develop new skills in designing, delivering, and evaluating instruction with AI tools. AI is not futuristic; it is already embedded in daily practices such as adaptive learning platforms, automated grading, and administrative decision-making (
Ragolane & Patel, 2024). AI has been adopted across educational contexts in advance of comprehensive regulatory or pedagogical frameworks. Scholars have shifted from describing AI as a single technology to framing it as a systemic force reshaping educational ecosystems (
Williamson, 2017).
Varghese et al. (2025) highlight that automated systems such as grading algorithms, AI tutors, and assessment models can significantly improve scalability and efficiency, particularly in large classrooms. However, the authors also caution that these tools, if unchecked, may erode educator agency and bypass critical human judgment in areas such as assessment and feedback.
The integration of AI in education is increasingly framed not only as a technological shift but also as a pedagogical transformation. This process can even occur at the individual teacher level without system-wide policy. AI in education began with intelligent systems ranging from adaptive tutoring and automated grading to assessment analytics and generative text models (
Zawacki-Richter et al., 2019). These applications now span K–12, higher education, vocational training, and administration.
To understand adoption, many scholars rely on established frameworks. The Technology Acceptance Model (TAM) emphasizes perceived usefulness and ease of use, but critics argue that it underestimates context and skills (
Xue et al., 2024). The Unified Theory of Acceptance and Use of Technology (UTAUT, UTAUT2) adds constructs such as performance expectancy, effort expectancy, social influence, and facilitating conditions (
Venkatesh et al., 2003). In education, these are often complemented by the Technological Pedagogical Content Knowledge (TPACK) framework, which stresses the intersection of technological, pedagogical, and disciplinary expertise (Mishra & Koehler, 2006).
Recent adaptations include AI-specific dimensions.
Tram (
2024) combined UTAUT and AI-TPACK to identify drivers of AI integration among language teachers.
Al-Adwan et al. (
2025) showed that TAM, TPACK, and UTAUT together offer a more comprehensive explanation for continuous use intentions in emerging economies. These hybrid models highlight that AI adoption cannot be reduced to technical acceptance alone; it must include pedagogy, ethics, and institutional readiness.
Guan et al. (
2025) illustrate this in practice: pre-service teachers’ willingness to adopt ChatGPT 4.0 depends as much on pedagogical trust and competence as on ease of use. This demonstrates the importance of contextualized adoption models in the era of generative AI.
Taken together, the literature underscores the importance of distinguishing clusters within teachers' skill sets. However, many studies still treat skills as a single variable, so there is a clear need to analyze how individual skills group into coherent clusters.
Ethical literacy is closely tied to digital competence. Imig and Flores (2025) argue that pre-service teacher training should include critical digital literacy and ethical reflection, given the risks of plagiarism, over-reliance on automated systems, and algorithmic bias. These concerns are echoed by
Marzuki (
2025), who highlights the need for higher education instructors to balance the benefits of generative AI with risks such as accuracy gaps and ethical misuse. Without this ethical layer, digital skills risk reinforcing dependency rather than supporting empowerment.
Applied analytical skills also connect with personalized learning.
Zhai (
2023) shows that AI tools transform not only teaching but also student writing, as teachers interpret AI outputs for formative feedback.
Väkevä and Partti (2025) demonstrate that generative AI in teacher training fosters creativity, critical thinking, and ethical responsibility, including in music composition. Their work suggests that data literacy and creative analysis operate synergistically.
Management and communication skills consistently appear as strong factors of successful AI integration. While digital competence provides a technical base, teachers’ ability to organize classrooms, communicate with stakeholders, and manage relationships is equally important for embedding AI into daily practice. This aligns with the broader literature emphasizing human and organizational aspects of digital transformation rather than technology alone (
Bauwens & Meyfroodt, 2021).
Classroom management, emotional intelligence, and collaboration with parents and colleagues are particularly relevant in AI-mediated environments.
Holmes et al. (2023) show that emotional intelligence supports AI literacy, as the ability to regulate emotions and maintain well-being helps teachers adapt to AI tools.
Institutional support also amplifies these skills.
Chen and Li (
2025) found that organizational backing strengthens digital literacy and fosters growth through structured communication and mentoring. This shows that management and communication are not only individual traits but collective capacities within professional communities.
Zheng et al. (
2016) emphasize adaptability as a psychological resource: teachers with strong communication skills reported higher digital engagement and smoother integration of smart technologies, supported by their information literacy.
Together, these studies confirm that AI integration is not determined by digital know-how alone. Management and communication skills provide the scaffolding through which digital competence is applied meaningfully. Effective adoption depends on hybrid skill sets blending technical literacy with interpersonal and organizational acumen.
Rather than treating the technology–organization–environment (TOE) framework, perception-based adoption models (e.g., TAM and UTAUT), and skills-oriented pedagogical perspectives (e.g., AI–TPACK) as parallel explanations, this study integrates them into a single analytical structure. The TOE framework provides the macro-level institutional lens through which technological readiness, organizational support, and environmental conditions are examined. Within this structure, perception-based models explain how teachers interpret and respond to these conditions, particularly in relation to perceived usefulness, effort, and professional impact. The AI–TPACK perspective further specifies how these perceptions translate into enacted competencies and skills within instructional practice. Together, these frameworks operate in a complementary manner, enabling the analysis of AI adoption in education as a multilevel phenomenon shaped by institutional context, human perception, and professional capability.
Research Gap and Link to Research Question and Hypotheses
The literature provides substantial evidence that a wide range of teacher competencies support AI integration in education. Prior studies identify digital literacy, data literacy, ethical awareness, classroom management, collaboration, lesson planning, assessment design, communication, adaptability, and engagement in professional development as relevant enablers of AI use (
El Mourad et al., 2025). Collectively, this body of work demonstrates that AI integration depends on competencies spanning technical, pedagogical, ethical, and interpersonal domains.
Despite this growing consensus, several critical gaps remain. First, existing studies predominantly examine these competencies in isolation, reporting their relevance individually rather than analysing how they may interact or combine to shape AI adoption in practice. Second, although established technology adoption frameworks—such as the Technology Acceptance Model (TAM), the Unified Theory of Acceptance and Use of Technology (UTAUT), Technological Pedagogical Content Knowledge (TPACK), and AI-TPACK—acknowledge skills as important contextual conditions, they rarely test specific skill sets as direct factors of AI adoption readiness. Skills are typically treated as background variables rather than as empirically examined drivers within quantitative models. Third, there is limited empirical evidence on whether these competencies operate as distinct independent factors or whether they naturally cluster into broader dimensions of readiness that may better explain AI integration outcomes.
In addition, the literature offers limited insight into how institutional conditions and leadership encouragement contribute to AI adoption readiness beyond technical access to AI tools and infrastructure. While access is often assumed to be a prerequisite for AI integration, empirical studies rarely compare the relative contribution of human, organizational, and technical factors within a single explanatory model. As a result, it remains unclear which conditions most strongly shape AI adoption readiness in school settings.
These gaps directly motivate the central research question of this study:
“To what extent do institutional support and leadership encouragement contribute to AI adoption readiness in K–12 education beyond access to AI tools and technical infrastructure?”
To address this question empirically, this study adopts a hypothesis-driven approach. Building on the skills identified in the literature and the need to examine their combined effects, it is first hypothesised that teacher competencies associated with AI integration form coherent clusters rather than operating as isolated attributes (H1). It is further hypothesised that these skill clusters are positively associated with AI adoption readiness (H2). In line with calls to move beyond purely technical explanations, this study additionally hypothesises that institutional support and leadership encouragement are more strongly associated with AI adoption readiness than access to AI tools alone (H3). Finally, given evidence of uneven readiness across teacher populations, it is hypothesised that AI adoption readiness differs according to teachers’ age, gender, and professional experience (H4).
By explicitly deriving the research question and hypotheses from the gaps identified in the literature, this study advances a structured, model-based approach to understanding AI adoption readiness in K–12 education. This alignment ensures conceptual coherence between prior research, empirical design, and analytical strategy.
3. Materials and Methods
Building on the literature reviewed, a structured quantitative survey was administered to a large and diverse sample of educators across the United Arab Emirates (UAE) to examine AI adoption readiness in K–12 education. The survey design was informed exclusively by prior empirical and theoretical studies on AI integration, teacher competencies, institutional readiness, and technology adoption frameworks. This approach ensured that the instrument operationalized constructs that are well-established in the literature, while allowing for empirical testing within the context of the UAE.
A cross-sectional survey design was employed, with data collected at a single point in time. This design is appropriate for examining relationships among institutional, skill-based, and productivity-related factors in a rapidly evolving technological environment, where AI tools and platforms develop faster than school cycles. Quantitative analysis was selected to enable the estimation of the relative contribution of multiple factors to AI adoption readiness and to support replicability across future studies using comparable datasets.
The target population consisted of K–12 educators across the UAE, including teachers, administrators, and education technology specialists. These groups are directly involved in the implementation and use of AI tools in schools and therefore play a central role in shaping how AI is integrated into instructional, administrative, and managerial practices. Examining their experiences provides insight into institutional readiness, professional adaptation, and the broader implications of AI adoption in school settings.
Participants represented a range of professional roles, levels of experience, and degrees of familiarity with AI technologies. The sample included educators from public and private schools, reflecting differences in governance structures, resourcing, and decision-making processes (
Cohen et al., 2017). Schools following Ministry of Education, British, American, and International Baccalaureate curricula were included to capture curricular and pedagogical diversity (
Creswell & Creswell, 2018). Representation from all seven emirates ensured coverage of varied socio-economic, cultural, and technological contexts (
Teddlie & Tashakkori, 2009).
3.1. Sampling Strategy
A stratified sampling strategy was applied to enhance representativeness across three dimensions: school type, geographical location, and professional role. Proportional representation of public and private schools was maintained to reflect national distributions (
Creswell, 2014). Geographical stratification ensured inclusion of schools from each emirate, while role-based stratification incorporated classroom teachers, administrators, and IT coordinators to provide a comprehensive view of AI adoption across functional responsibilities (
Bryman, 2016). The final sample comprised 602 educators, providing sufficient statistical power for factor analysis and regression modelling. Diagnostic checks indicated that the assumptions of linear regression were satisfactorily met. Residuals showed no systematic patterns, and variance inflation factors were within acceptable thresholds, suggesting no evidence of heteroscedasticity or problematic multicollinearity.
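The multicollinearity diagnostic mentioned above can be sketched as follows. This is an illustrative numpy implementation on synthetic predictors, not the study's data or code; it uses the fact that, for standardized predictors, variance inflation factors equal the diagonal of the inverse correlation matrix.

```python
import numpy as np

def vif(X):
    """Variance inflation factors for the columns of a predictor matrix X.

    For standardized predictors, VIF_j is the j-th diagonal element of
    the inverse of the predictor correlation matrix.
    """
    R = np.corrcoef(X, rowvar=False)  # predictor correlation matrix
    return np.diag(np.linalg.inv(R))

# Illustrative check with synthetic predictors (not the study's variables):
rng = np.random.default_rng(0)
x1 = rng.normal(size=500)
x2 = 0.8 * x1 + 0.6 * rng.normal(size=500)  # correlated with x1
x3 = rng.normal(size=500)                    # independent
X = np.column_stack([x1, x2, x3])
print(np.round(vif(X), 2))  # x1 and x2 inflated (roughly 2.8), x3 near 1
```

Values below the common threshold of 5 (or the stricter 2.5) would be read, as in the text, as showing no problematic multicollinearity.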
A stratified purposive sampling frame was used to capture teacher perspectives across four axes during the period from January 2025 to April 2025:
Emirate: Abu Dhabi, Dubai, Sharjah, and the Northern Emirates.
Governance: MOE public, ADEK private, Chartered Schools, KHDA private.
Curriculum: Emirati national, British, American, IB, Indian, other.
Teacher profile: gender, experience band, subject cluster.
3.2. Survey Instrument
Responses were collected on a five-point Likert scale and collapsed into composite indices. Examples include:
Perceived Impact on Job Functions.
Actual AI Use: how teachers used AI and for which skills.
Perceived Highly Required Skills for AI Integration (20 skills identified from the literature review).
AI Adoption Across Tasks.
Work–Life Balance: perception and actual experience.
Professional Development: provided and required.
Work Quality: extent of improvement due to AI.
Satisfaction: with AI implementation, skill enhancement, and time savings.
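As a rough illustration of how Likert items are typically collapsed into composite indices of this kind (the item groupings, labels, and values below are hypothetical, not the study's codebook):

```python
import numpy as np

# Hypothetical responses on a 1-5 Likert scale (rows = respondents);
# the item-to-construct mapping is illustrative only.
responses = np.array([
    [4, 5, 4, 2, 3],
    [3, 3, 4, 4, 4],
    [5, 4, 5, 1, 2],
])

# Columns 0-2 form one construct (e.g., "Actual AI Use"), columns 3-4
# another (e.g., "Work-Life Balance"); each composite index is the mean
# of its constituent items.
ai_use = responses[:, 0:3].mean(axis=1)
wlb = responses[:, 3:5].mean(axis=1)
print(ai_use)  # per-respondent means, e.g., 4.33, 3.33, 4.67
print(wlb)     # [2.5 4.  1.5]
```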
3.3. Data Analysis
Analysis began with descriptive statistics, which present frequencies, means, and standard deviations and summarize the demographic characteristics of the sample. Next, Exploratory Factor Analysis (EFA) was performed using Principal Component Analysis (PCA) with varimax rotation to identify interpretable dimensions among the skill items, with component loadings and internal consistency evaluated to support construct formation. Finally, a multiple regression model based on these factors was used to examine their associations with AI integration outcomes.
4. Results
The demographic analysis provides an overview of the participants’ background characteristics, offering context for interpreting the results of this study. As shown in
Table 1, the sample consisted of 602 respondents representing a diverse range of personal and professional attributes, including gender, age, education level, employment status, and type of educational institution.
Age Distribution: The mean age was 35.6 years. Age brackets: 20–29 (28.4%), 30–39 (41.2%), 40–49 (21.7%), 50+ (8.7%).
4.1. Validity and Reliability
Instrument reliability was tested using Cronbach’s alpha.
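Cronbach's alpha can be computed directly from the item-response matrix. The sketch below is a minimal numpy illustration on made-up data, not the study's instrument:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, k_items) array.

    alpha = k/(k-1) * (1 - sum(item variances) / variance(total score))
    """
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Illustrative data (not the study's): items that track each other
# closely yield a high alpha.
scale = np.array([
    [4, 4, 5],
    [2, 3, 2],
    [5, 5, 5],
    [3, 3, 4],
    [1, 2, 1],
])
print(round(cronbach_alpha(scale), 3))  # → 0.959
```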
Sampling adequacy for component analysis was verified through the Kaiser–Meyer–Olkin measure and Bartlett’s test of sphericity.
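Bartlett's test statistic, for instance, follows a standard closed form; the numpy sketch below contrasts correlated and independent synthetic items (illustrative only, not the study's data or p-value computation):

```python
import numpy as np

def bartlett_sphericity(X):
    """Bartlett's test statistic for H0: the correlation matrix is identity.

    chi2 = -((n - 1) - (2p + 5) / 6) * ln(det(R)),  df = p(p - 1) / 2
    """
    n, p = X.shape
    R = np.corrcoef(X, rowvar=False)
    chi2 = -((n - 1) - (2 * p + 5) / 6) * np.log(np.linalg.det(R))
    df = p * (p - 1) / 2
    return chi2, df

# Correlated items should produce a much larger statistic than
# independent ones, justifying factor extraction.
rng = np.random.default_rng(1)
base = rng.normal(size=(300, 1))
correlated = base + 0.3 * rng.normal(size=(300, 4))
independent = rng.normal(size=(300, 4))
chi2_corr, df = bartlett_sphericity(correlated)
chi2_ind, _ = bartlett_sphericity(independent)
print(df, chi2_corr > chi2_ind)  # → 6.0 True
```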
Validity was evaluated using multiple regression analysis.
Teaching Experience
The average teaching experience was 11.6 years (range 0–40). As presented in
Table 2, participants represented a wide distribution of experience levels, with the largest group (32.6%) having 0–4 years of teaching experience.
4.2. Instrument Reliability
As presented in Table 3, all scales demonstrated strong to excellent internal consistency (α = 0.853 to 0.949). Q11 (Teacher Skills) was particularly robust, confirming suitability for further analysis.
4.3. Skills Analysis
As shown in
Table 4, the Teacher Skills construct (Q11) demonstrated excellent internal consistency. Item-level reliability results (Table 5) further confirmed the strength of the scale, supporting its use in subsequent analyses.
4.4. Exploratory Factor Analysis of Teacher Skills
An exploratory factor analysis was conducted on the twenty teacher skill items to identify underlying competency domains related to AI integration. Principal Component Analysis (PCA) was used as the extraction method, followed by Varimax rotation with Kaiser normalization to enhance interpretability of the factor structure. Listwise deletion was applied, resulting in a final sample of 602 valid cases.
Factor retention was guided by eigenvalues greater than one and inspection of the scree plot (
Figure 1). Three components met the eigenvalue criterion and were retained for interpretation. Together, the three-factor solution explained 62.6% of the total variance. After rotation, Factor 1 accounted for 25.2% of the variance, Factor 2 for 20.7%, and Factor 3 for 16.7%.
Communalities for all items exceeded 0.48, indicating that a substantial proportion of variance in each skill item was explained by the extracted factors. To improve clarity, factor loadings below 0.40 were suppressed in the rotated solution.
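The extraction and rotation steps described above can be sketched with numpy alone. The example below is an illustrative implementation on synthetic two-factor data, not the study's analysis; the varimax routine is the standard SVD-based algorithm, and communalities are the row sums of squared loadings, which orthogonal rotation leaves unchanged.

```python
import numpy as np

def varimax(loadings, n_iter=100, tol=1e-8):
    """Orthogonal varimax rotation of a (p_items, k_factors) loading matrix."""
    p, k = loadings.shape
    rotation = np.eye(k)
    var = 0.0
    for _ in range(n_iter):
        rotated = loadings @ rotation
        target = rotated ** 3 - rotated @ np.diag((rotated ** 2).sum(axis=0)) / p
        u, s, vt = np.linalg.svd(loadings.T @ target)
        rotation = u @ vt
        if s.sum() < var * (1 + tol):  # converged
            break
        var = s.sum()
    return loadings @ rotation

def pca_loadings(X, n_components):
    """Component loadings (eigenvectors scaled by sqrt of eigenvalues) of the correlation matrix."""
    R = np.corrcoef(X, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(R)
    order = np.argsort(eigvals)[::-1][:n_components]
    return eigvecs[:, order] * np.sqrt(eigvals[order])

# Synthetic example (not the study's data): two latent factors, six items.
rng = np.random.default_rng(2)
f = rng.normal(size=(500, 2))
X = np.column_stack([f[:, 0] + 0.4 * rng.normal(size=500) for _ in range(3)] +
                    [f[:, 1] + 0.4 * rng.normal(size=500) for _ in range(3)])
L = pca_loadings(X, 2)
L_rot = varimax(L)
communalities = (L_rot ** 2).sum(axis=1)
print(np.round(communalities, 2))  # all high for these clean items
```

Loadings below 0.40 in `L_rot` would then be suppressed for readability, as in the reported solution.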
The rotated component matrix (
Table 6) revealed a clear and interpretable structure. As indicated, the skills fall into three different factors: The first factor represents teachers’ capacity to engage in data-driven, technical, and AI-enhanced tasks supported by digital and ethical literacy. Analytical and Digital Competencies includes high-loading items such as Digital Literacy (0.721), Data Analytics (0.795), Gamification (0.733), Predictive Use of Data (0.725), Ethics and Privacy (0.577), Individualized Learning (0.596), and Supporting Diverse Needs (0.657).
The second factor refers to Interpersonal and Adaptive Competencies and includes items related to Classroom Management (0.807), Parent Collaboration (0.644), Change Management (0.620), Mentoring (0.599), Emotional Intelligence (0.774), and Agile Learning (0.628). This factor captures social, relational, and adaptive interpersonal skills that enable teachers to manage change and support students in AI-mediated educational settings.
Finally, the third factor, Pedagogical and Curriculum Competencies, reflects pedagogical and curriculum-related skills reinterpreted within the context of AI integration. It includes high-loading items such as Curriculum Design (0.740), Assessment (0.787), Grading and Evaluation (0.667), Pedagogy (0.635), and Lesson Planning (0.763). Student Engagement and Critical Thinking were excluded due to their low factor loadings.
Overall, the findings indicate that the 20 skills cluster into three coherent competency domains: Analytical and Digital Competencies, Interpersonal and Adaptive Competencies, and Pedagogical and Curriculum Competencies.
Overall, the factor solution indicates that teacher skills related to AI integration cluster into distinct but complementary domains, supporting the use of empirically derived skill factors in subsequent regression analyses.
4.5. Professional Development Factors
PCA was similarly applied to professional development items to identify interpretable dimensions of institutional support, resulting in a two-component solution consistent with structural and capacity-building forms of professional development. Question 17 (Professional Development/Institutional Support) demonstrated very strong reliability but split into two interpretable factors:
Support and Confidence (institutional encouragement, leadership backing).
Training and Requirements (structured training, upskilling, resource availability).
These results confirm the psychometric strength of the measurement instrument.
4.6. Regression Analysis
Multiple linear regression analysis was conducted to examine the association between Actual AI Integration in Teaching (ActualAIinTeach) and a set of skill-related and institutional predictors derived from the exploratory analysis. Component-based composite scores were used as independent variables. As presented in
Table 7, the overall model explained approximately 50% of the variance in Actual AI Integration, indicating a substantial level of explanatory power. The regression coefficients are shown in
Table 8.
The regression results indicate that several component-based predictors were significantly associated with Actual AI Integration in teaching practice. Work–Life Balance (Actual) emerged as the strongest predictor, followed by Institutional Support, Perceived Management Skills, and Perceived Digital Skills. Perceived Education Skills showed a weaker and non-significant association with the outcome variable.
Perceived Education Skills were not statistically significant, suggesting that traditional teaching expertise does not directly translate into AI integration. Instead, personal well-being and institutional support, combined with management and digital skills, are the strongest drivers of AI adoption in practice. Given the cross-sectional design, reverse or reciprocal relationships cannot be ruled out; it is equally plausible that increased AI use contributes to improved perceptions of work–life balance.
Overall, the findings suggest that both human-centred conditions (such as work–life balance) and organizational enablers (such as institutional support and skill readiness) are meaningfully associated with teachers’ reported levels of AI integration. These results should be interpreted as explanatory associations rather than causal effects.
Work–Life Balance (Actual): the strongest predictor.
Institutional Professional Development.
Perceived Management Skills.
Perceived Digital Skills.
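A model of this form can be sketched in numpy as follows. The predictors, coefficients, and sample below are synthetic stand-ins that only mirror the variable names; they are not the paper's estimates:

```python
import numpy as np

# Synthetic composites standing in for the study's predictors
# (coefficients below are NOT the paper's estimates).
rng = np.random.default_rng(3)
n = 602
wlb     = rng.normal(size=n)  # Work-Life Balance (Actual)
support = rng.normal(size=n)  # Institutional Professional Development
mgmt    = rng.normal(size=n)  # Perceived Management Skills
digital = rng.normal(size=n)  # Perceived Digital Skills
y = (0.5 * wlb + 0.35 * support + 0.2 * mgmt + 0.15 * digital
     + rng.normal(scale=0.8, size=n))  # Actual AI Integration (simulated)

X = np.column_stack([np.ones(n), wlb, support, mgmt, digital])  # intercept first
B, *_ = np.linalg.lstsq(X, y, rcond=None)  # unstandardized coefficients

resid = y - X @ B
r2 = 1 - resid.var() / y.var()
print(np.round(B, 2), round(r2, 2))  # coefficient vector and model R^2
```

The significant coefficients in `B` would then be assembled into the final prediction equation, with standardized betas used to rank predictor importance.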
The final regression model was expressed as an equation using only the statistically significant predictors (p < 0.05) and their unstandardized coefficients (B).
5. Discussion
This study confirms that AI adoption in schools is more than a technical process; it is strongly shaped by personal and institutional conditions. Among the factors, work–life balance emerged as the strongest driver. This finding extends traditional adoption models. While TAM and UTAUT emphasize usefulness, ease of use, and social influence, the results here show that teachers adopt AI when it makes their professional lives more manageable. This human factor has not been central in earlier models, making it a novel contribution. Institutional professional development was the second strongest factor. However, it was not mandatory training that mattered most; rather, mentoring, trust, and supportive leadership were decisive. This aligns with the TOE framework and organizational change theory, which stress that adoption depends on structural support. It also confirms studies that highlight the role of leadership and professional communities in digital transformation (
Fullan, 2014;
Leithwood et al., 2020).
Perceived management skills and digital skills were also significant factors. They act as enablers: teachers with confidence in classroom management, communication, and digital competence were more likely to use AI. However, their effect sizes were smaller compared to those for work–life balance and institutional support. This suggests that adoption is not only about individual capability but also about how institutions create conditions that reduce stress and build confidence.
The non-significance of perceived education skills is also important. It shows that traditional teaching knowledge alone does not drive AI integration. This challenges the assumption that curriculum expertise automatically translates into readiness. Instead, it highlights the importance of adaptive and interpersonal skills, which connect AI use to the human side of education.
The factor analysis supports this point by grouping teacher skills into three clusters: Analytical/Digital, Interpersonal/Adaptive, and Pedagogical/Curriculum. Among these, interpersonal and adaptive skills form the real bridge between AI tools and human connection in classrooms. This emphasizes that digital adoption will succeed only if it remains human-centred.
In summary, this study contributes three key points:
Work–life balance is a critical factor of adoption.
Institutional support matters more than technical access.
Traditional education skills are not enough. Human-centred, adaptive, and interpersonal competencies are what enable meaningful AI integration.
Together, these insights reposition readiness from being purely technical to being cognitive, cultural, and organizational.
The findings also reveal demographic divides: younger teachers report greater confidence with digital tools, women score higher on curriculum and evaluation competencies, and teachers over 50 risk falling behind without targeted support.
Limitations must be acknowledged. The survey captures one moment in time. AI is evolving faster than policy cycles, and teachers may change their views as tools develop. Self-report measures may include bias, especially in a context where AI is celebrated as a national priority. Curricula were pooled, so system-level differences were not examined. Future work should go deeper through focus groups, leadership interviews, and student perspectives.
6. Contribution
This study makes a significant contribution to the growing body of research on AI integration in education by introducing a quantitative, explanatory model of institutional AI integration grounded in a large sample of K–12 teachers in the UAE. While previous studies have examined teacher attitudes, work–life balance, or isolated readiness factors, few have attempted to model the combined impact of psychological, leadership, and work–life balance variables on institutional readiness for AI at scale.
Most studies of AI readiness have been conducted in Western or higher education contexts. This paper offers a rare, data-driven exploration of readiness within the K–12 sector in the Middle East, using a comprehensive sample across curricula and emirates. It answers calls for more localized, large-scale studies that reflect the unique structural, cultural, and policy conditions of the region (Holmes et al., 2019; Zawacki-Richter et al., 2019).
While work–life balance and training are often emphasized, this study confirms that perceived usefulness and school encouragement are stronger factors than access to tools. This shifts the narrative from “equipment readiness” to “cognitive and cultural readiness,” and reinforces the need to recentre beliefs and leadership in digital transformation strategies (Fullan, 2014; Venkatesh & Davis, 2000).
The study also developed a composite index of institutional readiness, supported by high internal consistency (Cronbach’s α) and regression validity. This index can serve as a replicable instrument for future studies, cross-country comparisons, or longitudinal tracking of readiness development.
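The internal-consistency check behind such an index can be reproduced with a short Cronbach's alpha computation. The scores below are invented for illustration; they are not the study's data, and the item set is hypothetical.

```python
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """Cronbach's alpha for a (respondents x items) score matrix."""
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1)      # per-item variance
    total_var = scores.sum(axis=1).var(ddof=1)  # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical 5-point Likert responses from five teachers on four
# readiness items (rows = respondents, columns = items).
scores = np.array([
    [4, 5, 4, 5],
    [3, 3, 4, 3],
    [5, 5, 5, 4],
    [2, 3, 2, 2],
    [4, 4, 5, 5],
])
alpha = cronbach_alpha(scores)
print(f"Cronbach's alpha = {alpha:.2f}")  # values near 1 indicate a consistent scale
```

Because the four invented items move together across respondents, the resulting alpha is high, which is the pattern a usable composite index requires.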
Overall, this paper advances the conversation by treating AI adoption as a multidimensional phenomenon that combines leadership, perception, work–life balance, and professional development into a cohesive model. It contributes to the literature bridging technology acceptance, school improvement, and organizational change, and lays a foundation for more holistic models in future research.
From a practical standpoint, this study offers a clear conceptual framing, country-specific insight, and a robust methodology. It represents a contribution to the academic literature, policy design, and implementation toolkits, and provides both a lens and a tool for more strategic, data-informed approaches to AI adoption in K–12 systems. In particular, it:
Introduces a factorial model of AI integration in schools;
Demonstrates how human factors (work–life balance, skills, management support) can be quantified and assessed as adoption drivers;
Provides a formula-based tool for policymakers and school leaders to anticipate AI readiness and guide interventions;
Bridges research and practice by converting survey evidence into a decision support instrument for digital transformation in education.
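A decision-support instrument of this kind could be operationalized as a weighted linear score over the predictors the study identifies. The weights and predictor names below are illustrative placeholders, not the regression coefficients actually estimated in this study.

```python
# Illustrative readiness calculator. The weights are hypothetical and
# chosen only to reflect the study's ordering of predictor importance
# (work-life balance strongest); they are NOT the estimated coefficients.
ILLUSTRATIVE_WEIGHTS = {
    "work_life_balance": 0.35,
    "institutional_support": 0.30,
    "digital_skills": 0.20,
    "management_skills": 0.15,
}

def readiness_score(ratings: dict[str, float]) -> float:
    """Weighted mean of 1-5 predictor ratings, returned on the same 1-5 scale."""
    total_weight = sum(ILLUSTRATIVE_WEIGHTS.values())
    weighted = sum(ILLUSTRATIVE_WEIGHTS[name] * ratings[name]
                   for name in ILLUSTRATIVE_WEIGHTS)
    return weighted / total_weight

# Example: a school strong on support and skills but weak on work-life balance.
school = {
    "work_life_balance": 2.0,
    "institutional_support": 4.5,
    "digital_skills": 4.0,
    "management_skills": 3.5,
}
score = readiness_score(school)
print(f"Readiness score: {score:.2f} / 5")
```

In this sketch, the low work–life balance rating pulls the overall score down despite strong institutional support, mirroring the study's claim that well-being, not infrastructure, is the binding constraint.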
7. Limitations
This study is based on a cross-sectional survey design and therefore captures teachers’ perceptions at a single point in time within a rapidly evolving AI landscape. As AI tools, policies, and levels of experience continue to change, perceptions of readiness and integration may also shift over time. Longitudinal or panel-based designs would be valuable for examining these dynamics.
The findings rely on self-reported data, which may be subject to social desirability or optimism bias, particularly within a policy context that actively promotes AI adoption. In addition, although the sample includes teachers from multiple curriculum systems, responses were pooled for analysis, limiting the ability to detect curriculum- or system-specific differences. Finally, the quantitative design explains associations but does not capture the underlying reasons behind teachers’ responses; qualitative follow-up research could provide deeper explanatory insight.
8. Conclusions
This study presents large-scale, data-driven models of institutional AI adoption in K–12 schools in the UAE. Using responses from 602 teachers across diverse curricula, the findings show that AI integration is not just about infrastructure or access. It is shaped by human factors, institutional culture, and leadership support.
The results highlight work–life balance as the strongest predictor of AI adoption. Teachers are more likely to integrate AI when it reduces stress and makes their workload manageable. Institutional professional development also plays a major role, but its impact comes from support, mentoring, and confidence-building rather than mandatory training. Management and digital skills act as enablers, while traditional education skills alone do not predict adoption.
From a practical perspective, these findings send a clear message to school leaders and policymakers: AI adoption is a human process. Leadership must create supportive environments, policies should value teacher well-being, and professional development should focus on confidence and adaptability.
AI has already entered education. Not every school needs to integrate it at the same pace, but every school must be ready. AI integration is not about technology alone; it is about alignment of beliefs, leadership, support, and human capacity. This study provides both evidence and a formula that can guide smoother and more sustainable AI integration in schools.
Author Contributions
Conceptualization, T.E., L.H. and P.C.; methodology, K.C. and A.K.; software, T.E., K.C. and L.H.; validation, A.K. and P.C.; formal analysis, T.E. and K.C.; investigation, T.E., L.H., A.K. and P.C.; resources, P.C. and L.H.; data curation, T.E. and K.C.; writing—original draft preparation, T.E.; writing—review and editing, T.E. and L.H.; visualization, T.E., K.C. and A.K.; supervision, L.H. and P.C.; project administration, T.E., L.H. and P.C.; funding acquisition, P.C. All authors have read and agreed to the published version of the manuscript.
Funding
This research received no external funding. The APC was funded by the authors.
Institutional Review Board Statement
The study was conducted in accordance with the Declaration of Helsinki, and approved by the Research Ethics Committee of European University Cyprus (protocol code 231088, with approval granted on 24 April 2025).
Informed Consent Statement
Informed consent for participation was obtained from all subjects involved in the study. Participation in the survey constituted implied consent, as all participants were required to read the consent information and confirm voluntary participation before proceeding. No identifying information was collected.
Data Availability Statement
The raw data supporting the conclusions of this article will be made available by the authors on request.
Acknowledgments
The authors would like to thank the participating schools and teachers across the UAE for their cooperation in completing the survey. During the preparation of this manuscript, the authors used ChatGPT (OpenAI, GPT-5.1, 2025) to assist with text refinement, formatting, and organization. The authors have reviewed and edited all generated content and take full responsibility for the final version of the manuscript.
Conflicts of Interest
The authors declare no conflicts of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.
Abbreviations
The following abbreviations are used in this manuscript:
| AI | Artificial Intelligence |
| K–12 | Kindergarten to Grade 12 |
| UAE | United Arab Emirates |
| TAM | Technology Acceptance Model |
| UTAUT | Unified Theory of Acceptance and Use of Technology |
| TOE | Technology–Organization–Environment Framework |
| TPACK | Technological Pedagogical Content Knowledge |
| AI-TPACK | Artificial Intelligence–Technological Pedagogical Content Knowledge |
| PD | Professional Development |
| ICT | Information and Communication Technology |
| KMO | Kaiser–Meyer–Olkin Measure of Sampling Adequacy |
| EFA | Exploratory Factor Analysis |
| PCA | Principal Component Analysis |
| SPSS | Statistical Package for the Social Sciences |
| WLB | Work–Life Balance |
| MoE | Ministry of Education (UAE) |
| ADEK | Abu Dhabi Department of Education and Knowledge |
| KHDA | Knowledge and Human Development Authority (Dubai) |
| STEM | Science, Technology, Engineering and Mathematics |
| DEC | Digital Education Council |
| UNESCO | United Nations Educational, Scientific and Cultural Organization |
Appendix A. Survey Instrument
Response format: 5-point Likert scale (1 = Never to 5 = Always)
Table A1.
AI adoption items.
| Item (Mishra & Koehler, 2006; Redecker, 2017) | Likert Scale |
|---|---|
| I use AI tools for lesson planning. | 1–5 |
| I use AI for grading and student assessment. | 1–5 |
| I use AI to analyze student data. | 1–5 |
| I use AI for communication with students and/or parents. | 1–5 |
| I experiment with new AI tools to enhance teaching. | 1–5 |
| My school encourages the use of AI in teaching practices. | 1–5 |
| I receive regular updates or support regarding AI integration. | 1–5 |
Response format: 5-point Likert scale (1 = Strongly disagree to 5 = Strongly agree)
Table A2.
Perception of skills required for AI integration survey items.
| Item (Venkatesh & Davis, 2000; Venkatesh et al., 2003) | Likert Scale |
|---|---|
| Classroom management | 1–5 |
| Individualized learning approaches | 1–5 |
| Pedagogical competencies | 1–5 |
| Knowledge of the curriculum | 1–5 |
| Student assessment techniques | 1–5 |
| Teaching diverse student needs | 1–5 |
| Lesson planning and content structuring | 1–5 |
| Grading and evaluation methods | 1–5 |
| Student engagement strategies | 1–5 |
| Collaboration with parents | 1–5 |
| Critical thinking and problem-solving | 1–5 |
| Change management and adaptability | 1–5 |
| Coaching and mentoring educators | 1–5 |
| Emotional intelligence and student guidance | 1–5 |
| Agile learning management and curriculum flexibility | 1–5 |
| Digital literacy | 1–5 |
| Data interpretation and analytics | 1–5 |
| Gamification and engagement through digital tools | 1–5 |
| Ethical use of AI and data privacy in education | 1–5 |
| Predictive analytics for student learning outcomes | 1–5 |
Table A3.
Work–life balance items.
| Item (Hayman, 2005) | Likert Scale |
|---|---|
| Time available for personal life. | 1–5 |
| Ability to disconnect from work outside school hours. | 1–5 |
| Emotional well-being and stress management. | 1–5 |
| Flexibility in managing professional responsibilities. | 1–5 |
| Balance between teaching tasks and administrative duties. | 1–5 |
| Overall sense of control over my workload. | 1–5 |
Table A4.
Institutional support and professional development items.
| Item (Ertmer & Ottenbreit-Leftwich, 2010) | Likert Scale |
|---|---|
| I received formal training on AI integration before using AI tools in my teaching. | 1–5 |
| The training I received adequately prepared me to use AI in education. | 1–5 |
| I feel confident in my ability to integrate AI tools into my teaching. | 1–5 |
| My school provided sufficient induction training on AI-assisted teaching. | 1–5 |
| I have ongoing opportunities to receive AI-related professional development. | 1–5 |
| The AI training sessions I attended were relevant and practical. | 1–5 |
| I need additional training to effectively use AI in my teaching. | 1–5 |
| I need more hands-on workshops to improve my AI-related teaching skills. | 1–5 |
| My school provides continuous support for AI-related professional development. | 1–5 |
| I believe related training should be mandatory for all educators. | 1–5 |
References
- Al-Adwan, A. S., Li, N., Al-Adwan, A., Albelbisi, N. A., & Habibi, A. (2023). Extending the Technology Acceptance Model (TAM) to predict university students’ intentions to use metaverse-based learning platforms. Education and Information Technologies, 28, 15381–15413. [Google Scholar] [CrossRef]
- Bauwens, R., & Meyfroodt, K. (2021). Debate: Towards a more comprehensive understanding of ritualized bureaucracy in digitalized public organizations. Public Money & Management, 41(4), 281–282. [Google Scholar] [CrossRef]
- Bryman, A. (2016). Social research methods (5th ed.). Oxford University Press. [Google Scholar]
- Chen, J., & Li, F. (2025). School-based professional development and peer collaboration: Exploring pathways for the development of digital literacy among college english teachers in the AI era. Education and Social Work, 3(1), 36–44. [Google Scholar] [CrossRef]
- Cohen, L., Manion, L., & Morrison, K. (2017). Research methods in education (8th ed.). Routledge. [Google Scholar]
- Creswell, J. W. (2014). Educational research: Planning, conducting, and evaluating quantitative and qualitative research (4th ed.). Pearson Education. [Google Scholar]
- Creswell, J. W., & Creswell, J. D. (2018). Research design: Qualitative, quantitative, and mixed methods approaches (5th ed.). SAGE. [Google Scholar]
- Day, C., & Gu, Q. (2014). Resilient teachers, resilient schools: Building and sustaining quality in testing times. Routledge. [Google Scholar] [CrossRef]
- El Mourad, T., Hadjiphannis, L., Christofi, K., Chourides, P., & Kythreotis, A. (2025). An exploratory investigation of the artificial intelligence adoption on teachers’ job designs. In Proceedings of the 7th international conference on finance, economics, management and IT business (femib 2025) (pp. 167–174). SCITEPRESS. [Google Scholar] [CrossRef]
- Ertmer, P. A., & Ottenbreit-Leftwich, A. T. (2010). Teacher technology change: How knowledge, confidence, beliefs, and culture intersect. Journal of Research on Technology in Education, 42(3), 255–284. [Google Scholar] [CrossRef]
- Fullan, M. (2014). The principal: Three keys to maximizing impact. Jossey-Bass. [Google Scholar]
- Guan, L., Zhang, Y., & Gu, M. M. (2025). Pre-service teachers preparedness for AI-integrated education: An investigation from perceptions, capabilities, and teachers’ identity changes. Computers & Education: Artificial Intelligence, 8, 100341. [Google Scholar] [CrossRef]
- Hayman, J. (2005). Psychometric assessment of an instrument designed to measure work–life balance. Research and Practice in Human Resource Management, 13(1), 85–91. [Google Scholar]
- Holmes, W., Bialik, M., & Fadel, C. (2019). Artificial intelligence in education: Promises and implications for teaching and learning. Center for Curriculum Redesign. [Google Scholar]
- Holmes, W., Bialik, M., & Fadel, C. (2023). Artificial intelligence in education. In Artificial Intelligence in education: Promises and implications for teaching and learning (pp. 151–180). Globethics Publications. [Google Scholar] [CrossRef]
- Horváth, L., Pintér, T. M., Misley, H., & Dringó-Horváth, I. (2025). Validity evidence regarding the use of DigCompEdu as a self-reflection tool: The case of Hungarian teacher educators. Education and Information Technologies, 30, 1–34. [Google Scholar] [CrossRef]
- Imig, D., & Flores, M. A. (2025). International trends in contexts marked by teacher shortages: Implications for teacher education, professionalism and the teaching profession. Journal of Teacher Education, 51(5), 983–998. [Google Scholar] [CrossRef]
- Leithwood, K., Harris, A., & Hopkins, D. (2020). Seven strong claims about successful school leadership revisited. School Leadership & Management, 40(1), 5–22. [Google Scholar] [CrossRef]
- Marzuki, M. (2025). Generative AI in higher education: Opportunities and ethical risks. Higher Education Research & Development, 44(2), 312–327. [Google Scholar]
- Mishra, P., & Koehler, M. J. (2006). Technological pedagogical content knowledge: A framework for teacher knowledge. Teachers College Record, 108(6), 1017–1054. [Google Scholar] [CrossRef]
- Nyongesa, J. W., & Van Der Westhuizen, J. (2025). Systematic literature review on balancing work–life integration for educators in online learning models in Africa. EUREKA: Social and Humanities, 5, 92–106. [Google Scholar] [CrossRef]
- Ragolane, M., & Patel, S. (2024). Transforming Educ-AI-tion in South Africa: Can AI-Driven grading transform the future of higher education? Journal of Education and Teaching Methods, 3(1), 26–51. [Google Scholar] [CrossRef]
- Redecker, C. (2017). European framework for the digital competence of educators: DigCompEdu. Publications Office of the European Union. [Google Scholar] [CrossRef]
- Selwyn, N. (2019). Should robots replace teachers? AI and the future of education. Polity Press. ISBN 978-1509524569. [Google Scholar]
- Teddlie, C., & Tashakkori, A. (2009). Foundations of mixed methods research: Integrating quantitative and qualitative approaches in the social and behavioral sciences. SAGE Publications. [Google Scholar]
- Tram, N. H. M. (2024). Unveiling the drivers of AI integration among language teachers: Integrating UTAUT and AI-TPACK. Computers in the Schools, 42(2), 100–120. [Google Scholar] [CrossRef]
- Varghese, N. N., Jose, B., Bindhumol, T., Cleetus, A., & Nair, S. B. (2025). The power duo: Unleashing cognitive potential through human–AI synergy in STEM and non-STEM education. Frontiers in Education, 10, 1534582. [Google Scholar] [CrossRef]
- Väkevä, L., & Partti, H. (2025). Generative AI as a collaborator in music education: An action-network theoretical approach to fostering musical creativities. Action, Criticism, and Theory for Music Education, 24(3), 16–52. [Google Scholar] [CrossRef]
- Venkatesh, V., & Davis, F. D. (2000). A theoretical extension of the technology acceptance model: Four longitudinal field studies. Management Science, 46(2), 186–204. [Google Scholar] [CrossRef]
- Venkatesh, V., Morris, M. G., Davis, G. B., & Davis, F. D. (2003). User acceptance of information technology: Toward a unified view. MIS Quarterly, 27(3), 425–478. [Google Scholar] [CrossRef]
- Williamson, B. (2017). Big data in education: The digital future of learning, policy and practice. Learning, Media and Technology, 42(3), 281–295. [Google Scholar] [CrossRef]
- Xue, L., Rashid, A. M., & Ouyang, S. (2024). The unified theory of acceptance and use of technology (UTAUT) in higher education: A systematic review. SAGE Open, 14(1), 21582440241229570. [Google Scholar] [CrossRef]
- Zawacki-Richter, O., Marín, V. I., Bond, M., & Gouverneur, F. (2019). Systematic review of research on artificial intelligence applications in higher education—Where are the educators? International Journal of Educational Technology in Higher Education, 16, 39. [Google Scholar] [CrossRef]
- Zhai, X. (2023). ChatGPT for next generation science learning. SSRN Electronic Journal, 41, 281–282. [Google Scholar] [CrossRef]
- Zheng, B., Warschauer, M., Lin, C.-H., & Chang, C. (2016). Learning in one-to-one laptop environments: A meta-analysis and research synthesis. Review of Educational Research, 86(4), 1052–1084. [Google Scholar] [CrossRef]
| Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content. |