1. Introduction
Artificial intelligence (AI) has rapidly evolved from a specialized technological innovation into a transformative force within higher education. Its impact now reaches beyond curriculum planning and evaluation, influencing how universities envision and structure the teaching and learning process (Linderoth et al., 2025). Early debates emphasized personalization and differentiated instruction, highlighting AI’s capacity to tailor content and pacing to individual learners (Febrianti et al., 2025; Damyanov, 2024). While these contributions remain valuable, they no longer capture the breadth of AI’s roles or the complexity of its implications for contemporary higher education.
Faculty perceptions and practices play a pivotal role in shaping the trajectory of AI integration. In higher education, instructors act as both gatekeepers and catalysts of technological change; their adoption behaviors can accelerate or constrain institutional innovation (Billy & Anush, 2023). Understanding how faculty engage with AI is therefore critical to bridging the gap between strategic aspirations and everyday teaching practice, particularly in contexts where institutional readiness is uneven. Faculty attitudes and practices often determine whether AI remains a peripheral support tool or becomes meaningfully embedded in pedagogy (Mah & Groß, 2024).
Although AI is expanding quickly across the global higher education landscape, research and documented practices remain scarce in many underrepresented regions. In the Middle East, adoption of AI tools has been shaped by economic pressures, uneven infrastructure, and policy uncertainty (Al-Zahrani & Alasmari, 2025). Lebanon offers a particularly revealing case: universities continue to pursue innovation amid political and financial instability, with faculty adapting under constrained conditions (Akar, 2022). Examining how instructors perceive, adopt, and evaluate AI in this context provides insights that extend beyond Lebanon, with implications for other resource-constrained educational systems.
Despite growing interest in faculty use of AI, much of the existing literature remains descriptive, focusing on awareness or general attitudes rather than examining how familiarity, institutional conditions, and perceived barriers jointly shape adoption in practice. This study addresses this gap by empirically examining the association between faculty familiarity with AI and the frequency of instructional use, while also assessing the role of perceived benefits, barriers, and demographic characteristics. By doing so, the study contributes to a more nuanced understanding of the widely observed readiness–practice gap in higher education, where positive attitudes toward AI do not consistently translate into sustained pedagogical integration.
Against this backdrop, the present study examines faculty engagement with AI in two private Lebanese universities, Notre Dame University–Louaize (NDU) and the Holy Spirit University of Kaslik (USEK). It focuses on faculty familiarity with AI, patterns of classroom adoption, perceived benefits and barriers, and potential demographic differences in usage. The study also explores how familiarity relates to adoption frequency and considers faculty attitudes toward future AI integration. The study is guided by the following research questions:
- (1) How familiar are faculty instructors at the two participating Lebanese private universities with AI tools used for teaching and assessment?
- (2) Which AI tools and instructional domains are most commonly used by these instructors?
- (3) What benefits and barriers do faculty instructors perceive regarding the pedagogical use of AI for teaching and assessment in higher education?
- (4) What is the association between self-reported AI familiarity and the frequency of AI use in teaching and assessment among faculty instructors?
- (5) Do AI adoption patterns differ according to faculty instructors’ gender, age group, or academic qualification?
- (6) What are faculty instructors’ attitudes toward the future adoption of AI in Lebanese higher education?
By generating empirical evidence on familiarity, usage patterns, perceived pedagogical benefits and barriers, and demographic variation within a resource-constrained institutional context, this study offers a context-sensitive and theoretically informed contribution to the global literature on AI adoption in higher education. The findings provide actionable insights for institutions seeking to support targeted professional development, equitable access to AI tools, and the development of ethical governance frameworks, which are essential for moving from sporadic experimentation toward sustainable and pedagogically meaningful AI integration.
5. Discussion
The present study offers a focused portrait of instructors’ engagement with artificial intelligence across two Lebanese universities. Three findings stand out. First, familiarity with AI tools is strongly associated with adoption: instructors who report higher familiarity are substantially more likely to use AI frequently for instruction and differentiation. Second, use remains concentrated in general-purpose tools, particularly chatbots and translation, whereas pedagogy-specific systems such as intelligent tutoring and adaptive or personalized platforms see minimal uptake. Third, despite limited use in assessment, attitudes are broadly optimistic and faculty indicate willingness to recommend wider adoption, while citing structural barriers, chiefly training and access, as the primary constraints. These patterns are consistent with recent syntheses that describe a readiness–practice gap in higher education, where positive sentiment and perceived efficiency gains precede deep course-embedded uses, and early adoption clusters around assistants before specialized analytics or adaptive systems are mainstreamed (Bond et al., 2024; Crompton & Burke, 2023; Shata & Hartley, 2025).
When situated within the broader MENA literature, the present findings show strong convergence with regional adoption patterns, particularly the concentration of AI use in low-barrier, general-purpose applications and the central role of training, access, and institutional support in shaping adoption (Al-Zahrani & Alasmari, 2025). At the same time, the pronounced structural constraints reported by Lebanese faculty suggest a more fragile institutional environment than that documented in some neighboring contexts, reinforcing the importance of context-sensitive interpretations of AI integration.
These findings also converge with, and extend, established international evidence on AI adoption in higher education. Global reviews consistently show that early-stage AI integration is dominated by general-purpose tools that support efficiency and content preparation, while pedagogy-specific systems and assessment-oriented applications remain marginal (Bond et al., 2024; Crompton & Burke, 2023). The Lebanese case aligns with this global trajectory, reinforcing the interpretation of AI as an assistive rather than transformative pedagogical technology in its early adoption phase. However, our findings also suggest a sharper manifestation of the readiness–practice gap than that typically reported in better-resourced higher education systems. In contexts where institutional training, access to licensed tools, and governance frameworks are limited, moderate to high faculty familiarity alone appears insufficient to drive sustained pedagogical integration. This contrast highlights the importance of structural capacity as a moderating factor in international AI adoption patterns.
Beyond the substantive findings, this study underscores the value of perception-based evidence in examining emerging educational technologies. Faculty perceptions play a central role in shaping whether and how AI tools are adopted, pedagogically legitimized, and sustained within higher education institutions. During early or transitional phases of technology integration, when adoption is uneven and system-level usage data may be fragmented or unavailable, instructors’ beliefs, familiarity, and perceived usefulness often precede observable behavioral change. As such, perception-focused research provides critical insight into institutional readiness, perceived barriers, and the contextual conditions that influence adoption trajectories. This perspective is particularly salient in resource-constrained higher education systems, where structural limitations may shape practice long before widespread or measurable system-level implementation occurs.
The task-level distribution observed here (strongest traction in lesson preparation and differentiation, with more hesitant use for assessment) can be interpreted through established technology adoption frameworks. From the perspective of Ertmer’s first- and second-order barriers, low-barrier, high-utility tasks such as lesson planning and resource adaptation are more readily adopted because they require limited institutional change and align closely with instructors’ existing pedagogical practices. This pattern is consistent with prior analyses of early-stage AI and LLM adoption, which show that low-risk, efficiency-oriented workflows tend to dominate initial use (Liwanag et al., 2025).
Importantly, these patterns also illustrate how institutional structures actively shape faculty adoption behaviors rather than merely constraining them. The availability of licensed AI tools, the presence or absence of formal guidance, and the degree of institutional endorsement influence whether instructors perceive AI use as legitimate, low-risk, and worth sustained investment of time and effort. In contexts where governance frameworks, assessment policies, and professional support are limited, instructors may rationally restrict AI use to peripheral or preparatory tasks that do not require institutional approval or carry pedagogical or ethical risk. Conversely, clearer policies, access pathways, and support structures can normalize experimentation, reduce perceived risk, and encourage a shift from individual, ad hoc use toward more consistent and pedagogically embedded adoption.
Assessment-related applications, by contrast, implicate higher-stakes pedagogical, ethical, and governance concerns, intensifying both first-order barriers (e.g., policy clarity, institutional support) and second-order barriers (e.g., beliefs about validity, bias, and academic integrity). As a result, assessment-oriented uses are often perceived as “not applicable” in the absence of clear institutional guardrails, despite generally positive attitudes toward AI more broadly.
At the same time, the present study builds on earlier work in two important ways. First, while previous research has shown that faculty who feel confident using AI and believe it is useful are more likely to adopt it (Ramos Salazar & Peeples, 2025), our results go a step further by measuring how strongly familiarity is linked to actual use: instructors who report higher levels of familiarity with AI also report more frequent use. This association suggests that increasing faculty familiarity, such as through targeted professional development, may be an important leverage point for supporting broader AI adoption in practice.
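To illustrate how a familiarity–use association of this kind can be quantified from ordinal survey responses, the sketch below computes a Spearman rank correlation in plain Python. The data and function names (`ranks`, `spearman`) are invented for demonstration and are not the study’s actual analysis code or results.

```python
def ranks(values):
    """Assign average ranks to values, handling ties (as Spearman requires)."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        # Extend j over a run of tied values.
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average of ranks i+1 .. j+1
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(x, y):
    """Spearman's rho = Pearson correlation computed on the ranks."""
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Synthetic Likert-style responses (1-5): familiarity vs. frequency of use.
familiarity = [1, 2, 2, 3, 3, 3, 4, 4, 5, 5]
use_freq    = [1, 1, 2, 2, 3, 3, 3, 4, 4, 5]

rho = spearman(familiarity, use_freq)
print(f"Spearman rho = {rho:.2f}")  # → Spearman rho = 0.93
```

A rank-based coefficient is the natural choice here because both variables are ordinal Likert-type scales, where a monotonic (rather than strictly linear) association is the quantity of interest.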
Second, unlike some earlier studies that highlight demographic differences in technology adoption (Qiu et al., 2024), our findings show no significant differences in AI use by gender, age, or academic qualification. Interpreted through Ertmer’s framework, this pattern suggests that structural and institutional conditions (first-order barriers), such as access to tools, training opportunities, and governance frameworks, may exert a stronger influence on AI adoption than individual demographic characteristics. While second-order barriers related to beliefs and familiarity remain important, the absence of demographic effects underscores the central role of institutional context in enabling or constraining meaningful AI integration. This interpretation aligns with recent sector analyses emphasizing the role of institutional levers, such as clear policies, access to vetted tools, and dedicated time and support for course redesign, as the most powerful drivers of sustainable AI integration in higher education (OECD, 2023; Robert, 2024). Strengthening these structural supports may therefore accelerate adoption more effectively than targeting demographic subgroups.
The recommendations outlined in the following section are explicitly grounded in the barriers identified in this study. Insufficient training informs the emphasis on professional development anchored in concrete teaching workflows, while limited access to licensed AI tools motivates the recommendation for curated, institutionally approved tool portfolios. Similarly, the low uptake of AI in assessment, despite generally positive attitudes toward its potential, supports the proposal for small-scale, policy-supported assessment pilots designed to reduce perceived risk and clarify acceptable use. By directly linking observed barriers to corresponding recommendations, the study translates diagnostic insights into institutionally actionable strategies rather than broad or abstract calls for innovation.
Taken together, these findings have implications that extend beyond interpretation to inform policy, practice, and institutional strategy. At the policy level, the concentration of AI use in low-stakes tasks and the persistent hesitation around assessment underscore the need for clear governance frameworks addressing validity, transparency, and academic integrity. At the level of teaching practice, the strong association between familiarity and adoption highlights the importance of professional learning anchored in concrete instructional workflows. Strategically, the findings suggest that institutions should approach AI integration as a staged capacity-building process, prioritizing access, training, and tool curation before expecting deeper pedagogical transformation. The following section translates these implications into actionable recommendations aligned with the empirical findings.
Conceptually, this study contributes to the AI-in-education literature by reframing early AI adoption not simply as a function of faculty attitudes or familiarity, but as an interaction between task-level pedagogical risk and institutional capacity, offering an interpretive lens that is applicable beyond the Lebanese context.
5.1. Practical Implications
Our findings indicate that insufficient training and limited access to AI tools were the most commonly cited barriers to adoption, while faculty familiarity emerged as a strong predictor of adoption frequency. Instructors also identified time savings, improved engagement, and support for personalized learning as key perceived benefits. Based on these results, we propose the following practical implications.

Because lack of training was the most frequently reported barrier and familiarity strongly predicted adoption, we recommend a tiered professional learning sequence anchored in concrete teaching workflows, such as lesson preparation, differentiation, and formative feedback. Short-cycle evaluation (4–8 weeks) is particularly appropriate in the Lebanese context, as it allows institutions to build familiarity and demonstrate value without requiring extensive upfront investment. Because limited access to AI tools was the second most common barrier, institutions should prioritize the publication of a curated, privacy-assured list of approved AI tools, accompanied by clear usage guidance and, where feasible, single sign-on or LMS integration. This approach can reduce uncertainty and promote equitable access while remaining feasible in resource-constrained settings.

Given the limited use of AI in assessment, despite generally positive perceptions where it is employed, we recommend small-scale assessment redesign pilots in selected courses. These pilots should incorporate validity, transparency, and bias guardrails and allow institutions to explore responsible assessment use incrementally. Finally, the absence of statistically significant demographic differences suggests that adoption is shaped more by institutional conditions than by individual characteristics.
Lightweight institutional supports, such as concise AI teaching policies and short evidence briefs summarizing pilot outcomes, can help sustain adoption and translate early experimentation into longer-term practice. Short-term, feasible actions in the Lebanese context include targeted professional development, curated tool access, and small-scale assessment pilots. Broader initiatives, such as micro-grant schemes or expanded policy frameworks, should be viewed as longer-term aspirations requiring additional institutional resources.
5.2. Actionable Checklist
Short-term, feasible actions:
- Launch a 2-module professional development sprint focused on lesson planning and formative feedback, with clear competency targets and uptake indicators, reflecting the strong association observed between faculty familiarity and AI adoption.
- Publish a curated, privacy-assured list of approved AI tools with quick-start guides and, where feasible, LMS integration to address commonly reported access barriers.
- Pilot AI-supported assessment redesign in 2–3 courses per faculty using validity, bias, and transparency checklists and shared exemplars, in response to the limited but promising use of AI in assessment identified in this study.

Longer-term aspirations:
- Provide micro-grants or shared licenses to ensure equitable, discipline-balanced access to AI tools, particularly in resource-constrained institutional settings.
- Establish a small standing review group to evaluate emerging tools and issue brief impact summaries to inform policy refinement and support sustainable scaling.
5.3. Limitations
As with any cross-sectional, self-report study, certain limitations should be acknowledged. The design does not allow causal inference, and the focus on two private Lebanese universities limits statistical generalizability to the broader national system. Convenience sampling may also introduce some self-selection effects. In addition, the small number of respondents in the ‘extremely familiar’ group (n = 4) warrants careful interpretation of the familiarity–use gradient. Finally, the study did not include objective usage logs or student learning outcomes; future research could incorporate qualitative methods and system-level data to further examine the pedagogical impact of AI integration.
5.4. Future Directions
This study provides a foundation for understanding faculty engagement with AI in higher education, but it also points to several important avenues for future research. First, longitudinal studies could clarify how faculty familiarity, attitudes, and usage patterns evolve over time and whether professional learning leads to sustained adoption. Incorporating objective usage data would allow more precise measurement and validation of self-reported behaviors. Second, expanding the sample to public universities and other educational sectors would improve generalizability and support cross-institutional comparisons. Comparative research across national and international contexts could further illuminate how policy environments shape AI adoption trajectories. Finally, targeted studies are needed to explore AI integration in assessment and its effects on student learning outcomes. Experimental or quasi-experimental designs examining institutional guardrails, tool vetting, and professional development could help bridge the readiness–practice gap and inform evidence-based policy.