Article

From Perception to Practice: Artificial Intelligence as a Pathway to Enhancing Digital Literacy in Higher Education Teaching

1 School of Management Science, Chengdu University of Technology, Chengdu 610059, China
2 School of Economics and Management, Southwest Petroleum University, Chengdu 610500, China
* Author to whom correspondence should be addressed.
Systems 2025, 13(8), 664; https://doi.org/10.3390/systems13080664
Submission received: 23 June 2025 / Revised: 22 July 2025 / Accepted: 28 July 2025 / Published: 6 August 2025
(This article belongs to the Section Artificial Intelligence and Digital Systems Engineering)

Abstract

In the context of increasing Artificial Intelligence integration in higher education, understanding the factors influencing university teachers’ adoption of AI tools is critical for effective implementation. This study adopts a perception–intention–behavior framework to explore the roles of perceived usefulness, perceived ease of use, perceived trust, perceived substitution crisis, and perceived risk in shaping teachers’ behavioral intention and actual usage of AI tools. It also investigates the moderating effects of peer influence and organizational support on these relationships. Using a comprehensive survey instrument, data were collected from 487 university teachers across four major regions in China. The results reveal that perceived usefulness and perceived ease of use are strong predictors of behavioral intention, with perceived ease of use also significantly influencing perceived usefulness. Perceived trust serves as a key mediator, strengthening the links between perceived usefulness, perceived ease of use, and behavioral intention. While perceived substitution crisis negatively influenced perceived trust, it showed no significant direct effect on behavioral intention, suggesting a complex relationship between job displacement concerns and AI adoption. In contrast, perceived risk was found to negatively impact behavioral intention, though this effect was mitigated by perceived ease of use. Peer influence significantly moderated the relationship between perceived trust and behavioral intention, underscoring the role of social dynamics in AI adoption, while organizational support amplified the effect of perceived ease of use on behavioral intention. These findings inform practical strategies such as co-developing user-centered AI tools, enhancing institutional trust through transparent governance, leveraging peer support, providing structured training and technical assistance, and advancing policy-level initiatives to guide digital transformation in universities.

1. Introduction

Artificial intelligence (AI) has evolved from symbolic systems in the 1950s to advanced machine learning architectures, transforming education through breakthroughs in image recognition, natural language processing, and adaptive learning [1]. Enabled by increased computational power and data availability, AI has reshaped teaching methods and expanded learning beyond traditional classroom boundaries [2,3].
As AI reconfigures labor markets and economic structures [4], educational institutions face unprecedented opportunities and challenges. The convergence of digital technologies with physical learning environments necessitates adaptation of traditional pedagogical approaches to address emerging educational demands [5]. This transformation positions AI as a catalyst for educational innovation, enhancing learning experiences and preparing individuals for an increasingly digital future [6]. The 2021 Horizon Report’s recognition of AI as a key educational technology has intensified research interest in Artificial Intelligence in Education (AIED), while simultaneously highlighting implementation complexities.
AI demonstrates significant potential in educational advancement, supporting both structured academic programs and continuous learning initiatives [7]. Its applications span four key domains: analytical assessment, intelligent instruction, evaluation systems, and personalized learning platforms [8]. Current educational uses of AI fall into three main categories: (i) Conversational AI systems like ChatGPT that handle routine queries and free educators for higher-order tasks [9]; (ii) Intelligent tutoring systems offering real-time personalized feedback [10]; and (iii) Adaptive learning platforms such as Socratic and Habitica, which adjust content dynamically based on student performance [11].
AI holds promise for enhancing equity and teaching quality [12,13]. It facilitates personalized learning and relieves educators from repetitive tasks, allowing focus on complex pedagogy [14]. Yet, its integration raises concerns about authenticity, job security, and human connection [8]. Automation risks reducing the nuanced feedback essential for effective education. Moreover, ethical issues such as diminished teacher autonomy, student data surveillance, and opaque algorithmic decisions have drawn increasing attention [15], underscoring the need for human-centered, transparent AI design.
Further barriers include data privacy risks, algorithmic bias, and financial constraints—particularly in resource-limited institutions [16,17]. These complexities suggest that AI adoption is not merely a technical matter, but a sociotechnical process shaped by individual, organizational, and systemic dynamics. Global initiatives such as the UNESCO AI Competency Framework for Teachers emphasize the educator’s role in ensuring responsible adoption [18], making it essential to understand the psychological and contextual factors influencing their engagement.
To this end, Rogers’ Diffusion of Innovations (DoI) theory (2014) [19] provides a useful lens, outlining five key attributes—relative advantage, compatibility, complexity, trialability, and observability—that affect adoption. Prior research confirms DoI’s utility in explaining divergent faculty responses to educational technologies [20,21]. Integrating this framework helps reveal how perceived benefits and risks shape AI adoption decisions, particularly in constrained institutional settings [17].
This tension between technological advancement and educational integrity positions university faculty as central actors in the AI adoption process. Their perceptions determine whether AI functions as a collaborative pedagogical tool or a disruptive force [22]. Understanding these dynamics requires examining not only educators’ perceptions—such as trust, risk, and usefulness—but also how these translate into behavioral intention and actual usage. To capture this process, this study adopts a perception–intention–behavior framework to explore the psychological and organizational factors influencing AI adoption among university educators. Accordingly, this study seeks to address the following research questions:
(1) How do university teachers perceive the role and impact of AI integration in higher education?
(2) What individual, interpersonal, and organizational factors influence university teachers’ behavioral intention and actual use of AI technologies in teaching?
The remainder of this study is structured as follows. Section 2 presents the theoretical foundation and hypotheses development. Section 3 details the methodology, including data collection, sample characteristics, and analytical techniques. Section 4 presents the results and analysis. Section 5 offers conclusions and a future outlook, considering the implications of the findings for educational reform and policy.

2. Theoretical Underpinning and Hypotheses Development

2.1. Perceived Ease of Use, Perceived Usefulness, and Adoption Intentions

The Technology Acceptance Model (TAM), anchored in the psychological theories of reasoned action and planned behavior, provides a systematic framework for understanding technology adoption decisions [23]. At its core, TAM posits two primary determinants: perceived usefulness (PU) and perceived ease of use (PEOU). PU represents an individual’s assessment that a system will enhance their job performance, while PEOU reflects their belief that system usage will require minimal effort [24].
Empirical research consistently demonstrates PU’s direct positive influence on behavioral intention (BI). Users exhibit stronger adoption intentions when they recognize a technology’s potential to improve performance and efficiency [25]. This relationship maintains robustness across diverse contexts, with higher perceived usefulness consistently correlating with stronger adoption intentions [26]. PEOU similarly influences adoption behavior through multiple pathways. When users encounter user-friendly interfaces and manageable learning curves, particularly in AI systems, they develop favorable attitudes toward the technology, strengthening usage intentions [27]. Time-sensitive professional environments especially benefit from enhanced PEOU, as demonstrated by Venkatesh et al. (2003) [28]. Furthermore, PEOU positively impacts PU—technologies perceived as easy to use often appear more valuable, as accessibility enhances users’ ability to recognize and capitalize on system benefits [29]. Drawing from this theoretical and empirical foundation, we propose:
H1(a): Perceived usefulness positively influences behavioral intention to adopt.
H1(b): Perceived ease of use positively influences behavioral intention to adopt.
H1(c): Perceived ease of use positively influences perceived usefulness.

2.2. Perceived Trust, Perceived Risk, Perceived Substitution Crisis, and Adoption Intentions

The dynamics of technology adoption are significantly shaped by psychological and organizational factors, particularly perceived trust (PT), perceived risk (PR), and perceived substitution crisis (PSC). These constructs, grounded in established theoretical frameworks, play pivotal roles in understanding users’ acceptance or resistance to AI technologies.
PT encompasses users’ confidence in a technology’s reliability, security, and performance consistency [30]. This multidimensional construct integrates elements of institutional trust theory and technology acceptance frameworks, suggesting that trust formation involves both cognitive and affective components. In organizational contexts, trust becomes particularly salient as it mediates the relationship between system characteristics and user acceptance [31].

PR, conceptualized through the lens of prospect theory and behavioral economics, represents users’ assessment of potential negative outcomes associated with technology adoption [32]. In AI-enabled educational environments, these risks manifest across multiple dimensions: performance risk (system reliability), privacy risk (data security), and professional risk (impact on teaching effectiveness). The theory of planned behavior suggests that risk perceptions act as psychological barriers, systematically influencing behavioral intentions through both direct and indirect pathways.

PSC emerges as a distinct psychological construct reflecting users’ apprehension about technological displacement. Drawing from self-determination theory, PSC represents a threat to fundamental psychological needs—autonomy, competence, and relatedness [33]. This theoretical perspective suggests that when individuals perceive technology as threatening their professional identity or job security, they experience psychological resistance that manifests in reduced trust and heightened risk perception.

The interrelationships among these constructs create a complex web of influences on adoption behavior. Research indicates that PSC can trigger a cascade effect, simultaneously elevating risk perceptions and eroding trust, ultimately diminishing adoption intentions [34]. This dynamic is particularly evident in knowledge-intensive professions where professional identity is closely tied to expertise and autonomy. Based on this theoretical foundation and empirical evidence, we propose:
H2(a): Perceived trust demonstrates a significant positive effect on behavioral intention to adopt AI technology.
H2(b): Perceived risk exhibits a significant negative influence on behavioral intention.
H2(c): Perceived substitution crisis significantly amplifies perceived risk.
H2(d): Perceived substitution crisis significantly diminishes perceived trust.
H2(e): Perceived substitution crisis demonstrates a significant negative effect on behavioral intention.

2.3. Moderating Role of Peer Influence

The role of peer influence (PI) in technology adoption draws upon established social and organizational theories. Social influence theory posits that individuals’ attitudes and behaviors are significantly shaped by their social environment, particularly through processes of compliance, identification, and internalization [35]. In technology adoption contexts, PI operates as a powerful moderating force, amplifying or attenuating the effects of other adoption determinants [36].
The theoretical underpinning of peer influence extends beyond simple social pressure. Diffusion of innovations theory emphasizes the critical role of interpersonal networks in innovation adoption, particularly through observability and trialability mechanisms [37]. When peers successfully integrate new technologies, their experiences serve as valuable social proof, reducing uncertainty and enhancing trust through vicarious learning. In AI adoption specifically, peer influence operates through multiple channels. First, it can strengthen the relationship between perceived usefulness and adoption intention by providing concrete evidence of technology benefits through peer experiences. Second, peer influence can enhance the trust-adoption relationship by validating technology reliability through collective experience. Research demonstrates that positive peer experiences particularly impact early majority adopters, who rely heavily on social validation in their adoption decisions [28]. Based on these theoretical foundations, we propose:
H3(a): Peer influence positively moderates the relationship between perceived usefulness and behavioral intention.
H3(b): Peer influence positively moderates the relationship between perceived trust and behavioral intention.

2.4. Moderating Role of Organizational Support

Organizational support (OS) represents a critical contextual factor in technology adoption, encompassing both infrastructural and socioemotional dimensions. Drawing from organizational behavior theory and institutional support frameworks, OS extends beyond basic facilitating conditions to include strategic leadership, resource allocation, and cultural alignment [38]. The theoretical foundation of OS in technology adoption integrates elements from social cognitive theory and organizational learning theory. In educational settings, where AI integration presents unique challenges, OS manifests through multiple channels: technical infrastructure, professional development opportunities, and institutional policies that encourage innovation.
In the Chinese higher education context, organizational support is closely intertwined with national-level digital strategies such as the “Education Informatization 2.0 Action Plan” and the “Smart Education of China” framework, which mandate digital transformation and AI integration across institutions. This context emphasizes the role of formalized leadership initiatives, top-down capacity building, and administrative mechanisms in shaping faculty engagement with AI technologies.
Research demonstrates that facilitating conditions significantly influence technology adoption behavior [28]. However, OS’s role extends beyond physical resources to include psychological support mechanisms. This comprehensive support structure can moderate the relationship between perceived ease of use and adoption intention by reducing implementation barriers and enhancing user confidence. Similarly, OS can buffer the negative impact of perceived risks by providing institutional safeguards and clear guidelines for technology use. We therefore propose:
H4(a): Organizational support positively moderates the relationship between perceived ease of use and behavioral intention.
H4(b): Organizational support negatively moderates the relationship between perceived risk and behavioral intention.

2.5. Mediating Role of Perceived Risk

Perceived risk (PR) associated with educational technologies often encompasses concerns related to data privacy, professional displacement, and pedagogical effectiveness [39]. As AI tools become increasingly integrated into university teaching, these risk perceptions take on renewed relevance. Drawing from risk perception theory, PR functions as a critical mediating factor between system characteristics and adoption intentions in AI-related educational practices. Research demonstrates that high risk perceptions can attenuate the positive effects of perceived usefulness on adoption intentions, even when users recognize potential benefits [40]. Similarly, PR mediates the relationship between ease of use and adoption behavior by influencing users’ cognitive evaluation of technology benefits versus potential threats [41]. In educational contexts, PR’s mediating effects are particularly salient as AI integration introduces unique uncertainties regarding teaching quality and professional autonomy [42]. We propose:
H5(a): Perceived risk mediates the relationship between perceived usefulness and behavioral intention to adopt AI technology.
H5(b): Perceived risk mediates the relationship between perceived ease of use and behavioral intention to adopt AI technology.

2.6. Mediating Role of Perceived Trust

Trust is defined as “the willingness of a party to be vulnerable to the actions of another party based on the expectation that the other will perform a particular action important to the trustor, irrespective of the ability to monitor or control that other party” [43]. Perceived trust (PT) is a central construct in understanding technology adoption: it reflects users’ confidence in the reliability, fairness, and ethical use of technology, which can alleviate apprehensions and positively influence attitudes and behaviors toward adoption [44]. Empirical research supports the mediating role of trust in technology adoption. Wu & Chen (2005) [45] demonstrated that trust can bridge the gap between users’ perceptions and intentions, creating a pathway for positive adoption behaviors. By reducing uncertainty and reinforcing the perceived benefits and usability of technology, trust facilitates a more robust connection between perceptions and behavioral outcomes. Building users’ trust is critically important for technology adoption because trust leads to positive outcomes [46,47]. Based on these insights, the following hypotheses are proposed:
H6(a): Perceived trust mediates the relationship between perceived usefulness and behavioral intention to adopt AI technology.
H6(b): Perceived trust mediates the relationship between perceived ease of use and behavioral intention to adopt AI technology.

2.7. Behavioral Intention and Actual Usage

Behavioral intention (BI) represents a critical psychological mechanism bridging cognitive evaluations and actual technology usage [48]. Drawing from the theory of planned behavior and expectancy-value frameworks, BI serves as both a culmination of antecedent perceptions and a predictor of future behavior [49].
In educational AI adoption contexts, the intention-behavior relationship operates through multiple theoretical pathways. First, implementation intention theory suggests that strong behavioral intentions help users form specific plans for technology integration, enhancing the likelihood of actual implementation [50]. Second, self-regulation theory indicates that robust intentions strengthen users’ commitment to overcoming adoption barriers through persistent effort and adaptive strategies [51]. Recent empirical evidence supports these theoretical mechanisms. Wei et al. (2021) [52] demonstrated that educators with strong adoption intentions exhibit more consistent and sophisticated AI tool usage patterns. Pillai et al. (2023) [53] found that intention strength moderates the relationship between environmental constraints and actual usage behavior. Polyportis and Pahos (2024) [54] further revealed that implementation intentions mediate the relationship between general adoption intentions and specific usage behaviors. Based on this theoretical foundation and empirical evidence, we propose:
H7: Behavioral intention demonstrates a significant positive effect on actual usage behavior of AI technology.
Following an in-depth discussion on the model’s development and a detailed explanation of the mechanisms underlying the hypotheses, the resulting conceptual framework is presented in Figure 1.

3. Methodology

3.1. Survey Instrument Design

This study collected primary data through a nationwide online survey of university teachers in China. The questionnaire was distributed via academic mailing lists, university teaching networks, and social media platforms using a non-probability purposive sampling approach. The sample encompasses educators from the East, Central, West, and Northeast regions, ensuring geographical diversity. It includes teachers across professional titles, from teaching assistants to full professors, providing a broad range of professional perspectives. The survey comprises two sections: the first gathers demographic and professional data (age, gender, professional title, teaching experience, location). The second measures nine constructs: perceived usefulness (PU), perceived ease of use (PEOU), peer influence (PI), organizational support (OS), perceived trust (PT), perceived risk (PR), perceived substitution crisis (PSC), behavioral intention (BI), and actual usage (AU).
To evaluate these constructs, this study adopted a structured scale to measure latent variables that cannot be directly observed [55]. All constructs, except actual use (AU), were measured using items on a 5-point Likert scale, where 1 = “strongly disagree” and 5 = “strongly agree.” The AU construct was assessed using a frequency scale ranging from 1 = “never” to 5 = “always.” To ensure content validity, the questionnaire items were adapted from established scales validated by prior studies [56]. This approach ensures that the constructs accurately capture the intended dimensions of the study. Table 1 presents the mean, standard deviation, skewness, kurtosis, factor loadings, reliability coefficients (Cronbach’s alpha), and assessments of convergent and discriminant validity.
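For illustration, the sketch below shows how item-level statistics of the kind reported in Table 1 can be computed with pandas; the item codes and response values are hypothetical placeholders, not the study’s data.

```python
import pandas as pd

# Hypothetical responses (1-5 Likert) to three items of one construct;
# the column names are illustrative, not the study's actual item codes.
df = pd.DataFrame({
    "PU1": [4, 5, 3, 4, 2, 5, 4],
    "PU2": [4, 4, 3, 5, 2, 4, 4],
    "PU3": [5, 4, 2, 4, 3, 5, 4],
})

# Item-level descriptives of the kind reported in Table 1.
summary = pd.DataFrame({
    "mean": df.mean(),
    "std": df.std(ddof=1),
    "skewness": df.skew(),
    "kurtosis": df.kurt(),  # pandas reports excess kurtosis (normal = 0)
})
print(summary.round(3))
```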

3.2. Data Collection and Sample Characteristics

To ensure survey quality, the instrument underwent two validation phases: First, five expert researchers in survey methodology and educational research conducted a pre-test to evaluate questionnaire design. Second, a pilot test with 20 university teachers across China was performed. Based on feedback, the questionnaire was refined to enhance clarity and relevance. The final survey was distributed via Wenjuanxing (https://www.wjx.cn/login.aspx (accessed on 15 August 2024)), a prominent Chinese survey platform. Following recommended sample size ratios of 5:1 or 10:1 and given the study’s 45 indicators, a minimum sample of 450 participants was targeted to ensure statistical validity [57].
Data collection occurred between 1 September and 31 December 2024, yielding 522 responses. After screening for outliers, missing values, and incomplete answers, 487 valid responses remained, achieving a 93.3% effective response rate. Table 2 details respondents’ demographic information, while Figure 2 visualizes the cross-distribution of key demographic characteristics, demonstrating sample representativeness.

3.3. Structural Equation Model (SEM)

This study utilized Partial Least Squares Structural Equation Modeling (PLS-SEM) for data analysis. SEM was chosen for its capacity to simultaneously conduct exploratory factor analysis and multiple regression, enabling thorough examination of causal relationships between variables [58]. PLS-SEM was particularly suitable for three reasons: it excels in exploratory research by effectively identifying latent variables and testing theoretical relationships [59]; it handles non-normal data distributions robustly [60]; and it accommodates smaller sample sizes with fewer restrictions compared to covariance-based SEM, making it ideal for survey research [38].
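As a rough illustration of the structural (inner) model only, the following sketch approximates PLS construct scores with unit-weighted item composites on simulated data and estimates two of the hypothesized paths by OLS. This is a simplification for intuition, not the study’s method: SmartPLS runs the full iterative PLS-SEM algorithm, and all variable names and data below are assumptions.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 487  # matches the study's valid sample size

# Simulated item blocks; real input would be the survey responses.
peou_items = rng.normal(3.5, 0.8, (n, 3))
pu_items = 0.4 * peou_items.mean(axis=1, keepdims=True) + rng.normal(3.6, 0.7, (n, 3))
bi_items = 0.5 * pu_items.mean(axis=1, keepdims=True) + rng.normal(3.4, 0.7, (n, 3))

# Unit-weighted composites stand in for PLS-estimated construct scores.
z = lambda v: (v - v.mean()) / v.std(ddof=1)
peou, pu, bi = (z(block.mean(axis=1)) for block in (peou_items, pu_items, bi_items))

# Structural paths estimated by OLS on standardized scores.
m_pu = sm.OLS(pu, sm.add_constant(peou)).fit()                         # H1(c): PEOU -> PU
m_bi = sm.OLS(bi, sm.add_constant(np.column_stack([pu, peou]))).fit()  # H1(a), H1(b)
print(m_pu.params, m_bi.params)
```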

4. Results and Analysis

4.1. Common Method Bias

To ensure research validity, we implemented comprehensive measures to address common method bias (CMB). Procedurally, we enhanced survey design through careful question construction, extensive pre-testing for clarity, and strict anonymity protocols to elicit authentic responses [61]. Participant trust was further strengthened by transparent communication of research objectives and data confidentiality assurances. Statistically, we employed Harman’s single factor test, which revealed the maximum variance explained by any single factor was substantially below the 50% threshold, indicating robust protection against CMB [62]. This dual-approach strategy, combining methodological rigor with statistical validation, aligns with best practices in survey research methodology [63] and strengthens the credibility of our findings. Additionally, the low variance explained by individual factors suggests successful measurement design and data collection procedures, reinforcing the reliability of our analytical framework.
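A minimal sketch of the idea behind Harman’s test, using a principal-component variant on simulated placeholder data (the paper does not report which extraction method was used, so this is one common implementation, not necessarily the authors’):

```python
import numpy as np

rng = np.random.default_rng(1)
items = rng.normal(size=(487, 45))  # placeholder for the pooled survey items

# Principal-component variant of Harman's test: share of total variance
# carried by the first (unrotated) factor of the item correlation matrix.
corr = np.corrcoef(items, rowvar=False)
eigvals = np.linalg.eigvalsh(corr)[::-1]  # eigenvalues, descending
share = eigvals[0] / eigvals.sum()
print(f"first factor explains {share:.1%} of total variance")  # flag CMB if > 50%
```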

4.2. Measurement Model Evaluation

The Partial Least Squares Structural Equation Modeling (PLS-SEM) methodology encompasses two critical phases: measurement model assessment and structural model assessment. During the initial phase, researchers evaluate the measurement model to validate the reliability, convergent validity, and discriminant validity of the constructs. This evaluation examines both the internal consistency and the degree of concordance among indicators representing each construct. Table 1 presents a detailed interpretation of the model estimates.
The assessment of construct reliability utilized two widely used metrics in structural equation modeling: Cronbach’s α and composite reliability (CR). Cronbach’s α, pioneered by Lee Cronbach, quantifies a construct’s internal consistency through the analysis of inter-item correlations [64]. Internal consistency indicates the degree of interrelation among individual items within a construct, ensuring their collective representation of the underlying theoretical concept. Cronbach’s α increases as the correlations among a construct’s items strengthen, and a value of 0.70 or higher is generally considered acceptable [65]. As evidenced in Table 1, all constructs demonstrated Cronbach’s α values surpassing the 0.70 threshold, thereby confirming the data’s reliability and precision.
Complementarily, CR was computed to evaluate construct consistency by examining the outer loading values of indicators [66]. CR offers a more comprehensive reliability measure than Cronbach’s α, as it accounts for the differential contributions of individual indicators. A CR value exceeding the conventional threshold of 0.70 signifies robust internal consistency among a construct’s indicators. As illustrated in Table 1, all CR scores exceeded the recommended benchmark, further substantiating the measurement model’s reliability.
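For reference, the two reliability statistics follow their standard definitions, assuming k indicators with item variances σ²ᵢ, total-score variance σ²ₜ, and standardized outer loadings λᵢ:

$$\alpha=\frac{k}{k-1}\left(1-\frac{\sum_{i=1}^{k}\sigma_i^{2}}{\sigma_t^{2}}\right),\qquad \mathrm{CR}=\frac{\left(\sum_{i=1}^{k}\lambda_i\right)^{2}}{\left(\sum_{i=1}^{k}\lambda_i\right)^{2}+\sum_{i=1}^{k}\left(1-\lambda_i^{2}\right)}$$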
Furthermore, this study employed the Fornell-Larcker correlation matrix, based on the square root of the average variance extracted (AVE), together with the Heterotrait-Monotrait (HTMT) ratio to evaluate discriminant validity [67]. These methods collectively provide a robust assessment of whether constructs are sufficiently distinct from one another. Table 3 presents the Fornell-Larcker correlation matrix, in which the diagonal elements represent the square root of the AVE for each construct. In accordance with the Fornell-Larcker criterion, the square root of the AVE for each construct exceeds its correlations with any other construct. This result demonstrates strong discriminant validity, as each construct shares more variance with its own indicators than with other constructs in the model.
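The criterion is straightforward to check programmatically. The sketch below uses illustrative loadings and a hypothetical latent correlation, not the study’s estimates; AVE is computed as the mean squared standardized loading.

```python
import numpy as np

# Illustrative standardized loadings for two constructs (not the study's values).
loadings = {"PU": np.array([0.82, 0.79, 0.85]),
            "PT": np.array([0.80, 0.77, 0.84])}
ave = {c: float(np.mean(l ** 2)) for c, l in loadings.items()}  # AVE = mean squared loading

r_pu_pt = 0.61  # hypothetical latent correlation between PU and PT
passes = all(np.sqrt(ave[c]) > abs(r_pu_pt) for c in ave)  # Fornell-Larcker criterion
print({c: round(np.sqrt(v), 3) for c, v in ave.items()}, "criterion holds:", passes)
```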
Table 4 presents the Heterotrait-Monotrait (HTMT) ratios, calculated following the methodological guidelines established by Hair et al. (2023) [68]. The HTMT ratio serves as a sophisticated measure of discriminant validity by comparing the heterogeneity between constructs to their internal homogeneity, offering a more stringent test of discriminant validity that accounts for both measurement error and construct overlap. All HTMT values reported in Table 4 fell well below the conservative threshold of 0.85, with the highest observed ratio being 0.827, substantially below this critical value. These results provide compelling evidence of discriminant validity, demonstrating that the constructs are empirically distinct and capture unique theoretical concepts. This finding is especially significant given that the 0.85 threshold represents a conservative criterion in the methodological literature. Some scholars suggest that values up to 0.90 may be acceptable [69], making the results particularly robust.
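For completeness, the HTMT ratio for constructs i and j is defined as

$$\mathrm{HTMT}_{ij}=\frac{\bar{r}_{ij}}{\sqrt{\bar{r}_{ii}\,\bar{r}_{jj}}}$$

where r̄ᵢⱼ is the mean correlation between the items of construct i and those of construct j (heterotrait-heteromethod), and r̄ᵢᵢ, r̄ⱼⱼ are the mean correlations among items within each construct (monotrait-heteromethod).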

4.3. Structural Model Evaluation

This study applied PLS-SEM to analyze the relationships between the constructs in the structural model, utilizing SmartPLS 4 for hypothesis testing. A comprehensive sample of 487 valid responses was analyzed, and bootstrapping with 5000 subsamples was employed to ensure the robustness and statistical reliability of the findings. The bootstrapping procedure’s large number of subsamples helps minimize sampling error and provides stable parameter estimates, particularly crucial for examining complex path relationships in educational technology adoption models. Control variables were not included in the model, consistent with prior research in technology acceptance studies [70], as the focus was on examining the direct effects of key theoretical constructs. Table 5 summarizes the main effects, providing detailed insights into the factors that influence teachers’ BI and AU of AI tools in higher education.
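Conceptually, the bootstrap re-estimates each path on thousands of resampled datasets and derives standard errors from the spread of those estimates. The sketch below reproduces that idea for a single standardized path on simulated construct scores; it is not the SmartPLS implementation, and all data are placeholders.

```python
import numpy as np

def bootstrap_beta(x, y, n_boot=5000, seed=42):
    """Percentile bootstrap for a standardized simple-regression path."""
    rng = np.random.default_rng(seed)
    n = len(x)
    betas = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, n, n)                   # resample respondents with replacement
        betas[b] = np.corrcoef(x[idx], y[idx])[0, 1]  # standardized beta = r in simple regression
    est = np.corrcoef(x, y)[0, 1]
    se = betas.std(ddof=1)
    return est, est / se, np.percentile(betas, [2.5, 97.5])  # beta, t, 95% CI

# Simulated construct scores standing in for PEOU and BI.
rng = np.random.default_rng(0)
peou = rng.normal(size=487)
bi = 0.55 * peou + rng.normal(scale=0.8, size=487)
print(bootstrap_beta(peou, bi))
```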
PU exhibited a positive effect on BI that reached only marginal significance (β = 0.550, t = 1.687, p = 0.091), providing partial support for H1(a). This finding suggests that teachers are more inclined to adopt AI tools when they perceive them as useful for improving teaching effectiveness, enhancing student engagement, and optimizing course design. However, the marginal significance level implies that other factors, such as perceived risks, institutional context, or individual differences in technology readiness, may moderate this relationship, necessitating further investigation in future research. This finding aligns with core technology acceptance theories while highlighting the complexity of AI adoption in educational settings.
Similarly, PEOU exhibited a strong positive influence on BI (β = 0.551, t = 6.482, p < 0.001), validating H1(b). This robust relationship indicates that when AI tools are intuitive and user-friendly, teachers are more likely to integrate them into their teaching practices. The findings emphasize the critical importance of designing AI tools with simplicity and ease of use to encourage widespread adoption among educators in higher education settings, particularly given the diverse technological proficiency levels among faculty members.
PEOU also significantly influenced PU (β = 0.417, t = 4.893, p < 0.001), supporting H1(c). This finding reflects the intertwined nature of ease of use and perceived usefulness, where user-friendly tools not only lower barriers to adoption but also enhance teachers’ perceptions of their practical utility in educational contexts. This relationship underscores the importance of considering both usability and functionality in AI tool development, particularly in the complex context of higher education teaching.
Additionally, PT positively impacted BI (β = 0.482, t = 8.951, p < 0.001), affirming H2(a). The path coefficient and highly significant t-value highlight trust as a crucial determinant of AI adoption intentions. The findings suggest that trust may be particularly important for AI adoption compared to traditional educational technologies, given AI’s unique characteristics such as autonomous decision-making and data processing capabilities. Institutions must build trust by ensuring robust data protection measures and transparent policies to support successful AI adoption in education.
In contrast, PSC yielded mixed results. While PSC negatively influenced PT (β = −0.131, t = 5.031, p < 0.001), its direct effect on BI (β = −0.048, t = 1.296, p = 0.201) was not significant, leading to the rejection of H2(e). This pattern suggests that while concerns about AI replacing teachers may erode trust in AI systems, they do not directly deter adoption intentions in this educational context. The relatively small path coefficient from PSC to PT (β = −0.131) indicates that while job displacement concerns influence trust, other factors such as system reliability and data privacy may be more crucial in trust formation. These findings highlight the need to address teachers’ concerns about their professional role in an AI-enhanced educational environment, particularly through professional development programs and institutional policies that frame AI as a complementary tool rather than a replacement for human educators.

4.4. Moderation Analysis

This study examined the moderation effects of PI and OS on BI. Table 6 summarizes the results of the path analysis for moderation effects, revealing complex interactions between social, organizational, and individual factors in AI adoption decisions.
The moderation effect of PI on the relationship between PU and BI was not significant (β = 0.035, t = 0.621, p = 0.511), indicating that PU’s influence on BI is independent of peer interactions. This finding suggests that teachers’ evaluation of AI tools’ usefulness is primarily driven by intrinsic assessments and personal teaching experiences, rather than being substantially influenced by colleagues’ opinions. This independence from peer influence in usefulness assessment highlights the importance of demonstrating concrete benefits to individual teachers’ pedagogical practices. However, PI significantly moderated the relationship between PT and BI (β = 0.244, t = 2.172, p = 0.035), highlighting the crucial role of social dynamics in trust formation and adoption decisions. The positive moderation effect suggests that teachers are more likely to trust and subsequently adopt AI tools when colleagues demonstrate successful usage experiences, aligning with core tenets of Social Influence Theory. This social validation effect is particularly important in educational settings where peer learning and collaborative professional development are common.
The moderation effect of OS on the relationship between PEOU and BI was significant and substantial (β = 0.289, t = 5.211, p < 0.001), suggesting that institutional support mechanisms significantly enhance the impact of PEOU on adoption intentions. The strong positive moderation effect underscores the critical need for robust organizational support systems to reinforce teachers’ adoption intentions, particularly when dealing with sophisticated AI technologies. This finding implies that even user-friendly AI tools may face adoption barriers without adequate institutional backing. Conversely, OS did not significantly moderate the relationship between PR and BI (β = 0.011, t = 0.782, p = 0.619), indicating that general organizational support measures are insufficient to address specific risk-related concerns. Targeted measures, such as transparent data security protocols and system reliability guarantees, are essential to mitigate PR and encourage AI adoption.
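Methodologically, each moderation hypothesis corresponds to an interaction (product) term in the structural model. Below is a regression-style analogue of the PT × PI test on simulated construct scores, assuming statsmodels and purely illustrative coefficients; SmartPLS tests the same term within the PLS framework.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 487
df = pd.DataFrame({"PT": rng.normal(size=n), "PI": rng.normal(size=n)})
# Simulated outcome with a positive PT x PI interaction, echoing H3(b).
df["BI"] = 0.5 * df.PT + 0.2 * df.PI + 0.25 * df.PT * df.PI + rng.normal(scale=0.7, size=n)

# The moderation test is the product term of the standardized predictors;
# "PT * PI" expands to PT + PI + PT:PI in the formula interface.
model = smf.ols("BI ~ PT * PI", data=df).fit()
print(model.params["PT:PI"], model.pvalues["PT:PI"])
```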

4.5. Mediation Analysis

Table 7 presents the mediation effects of PR and PT on BI. The mediation analysis revealed complex relationships between perceptions, risk assessment, trust formation, and adoption intentions in the context of AI tool integration in higher education.
The mediation effect of PR on the relationship between PU and BI was insignificant (β = −0.015, t = 0.178, p = 0.859), leading to the rejection of H5(a). This notable finding indicates that perceptions of AI tools’ utility do not effectively mitigate fundamental risk concerns such as data privacy, algorithmic bias, or system reliability. The weak mediation effect suggests that PR may be more strongly influenced by external factors such as institutional policies, technical safeguards, or explicit security assurances, highlighting the critical need for targeted risk mitigation strategies beyond merely emphasizing AI’s practical benefits. This finding underscores the importance of addressing security and privacy concerns directly through robust institutional frameworks. Conversely, PR significantly mediated the relationship between PEOU and BI (β = −0.259, t = 3.931, p < 0.001), supporting H5(b). The substantial negative mediation effect indicates that user-friendly and intuitive AI tools effectively reduce risk perceptions, thereby enhancing adoption intentions. This robust finding emphasizes the crucial importance of designing accessible and transparent tools to alleviate operational concerns and foster confidence in AI usage. The significant mediation suggests that well-designed interfaces and intuitive interactions serve as powerful mechanisms for reducing perceived risks associated with AI adoption.
PT demonstrated a significant mediating effect between PU and BI (β = 0.322, t = 2.011, p = 0.045), supporting H6(a). This substantial positive mediation underscores the pivotal role of trust in translating perceived usefulness into concrete adoption intentions. The finding suggests that teachers who recognize AI tools’ potential for improving teaching outcomes are more likely to develop trust in these systems, which subsequently serves as a critical bridge facilitating adoption decisions. This highlights the importance of demonstrating clear pedagogical benefits while simultaneously building trust through transparent and reliable system performance. Similarly, the mediation effect of PT on the relationship between PEOU and BI was marginally significant (β = 0.149, t = 1.782, p = 0.075), providing partial support for H6(b). While this marginal significance suggests that ease of use contributes to trust formation, the relatively modest effect size indicates that its impact may vary based on multiple factors, including tool design sophistication, institutional support mechanisms, and peer endorsements. This nuanced finding suggests that while user-friendly interfaces can help build trust, other contextual factors play important roles in strengthening this relationship and ultimately influencing adoption intentions.
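The mediation estimates above correspond to indirect effects (the product of the a and b paths), typically tested with percentile-bootstrap confidence intervals. A self-contained sketch of that logic for the PEOU → PR → BI chain of H5(b), on simulated placeholder data:

```python
import numpy as np

def indirect_effect(x, m, y, n_boot=5000, seed=7):
    """Percentile-bootstrap CI for the indirect effect a*b (x -> m -> y)."""
    rng = np.random.default_rng(seed)
    n = len(x)
    ab = np.empty(n_boot)
    for i in range(n_boot):
        idx = rng.integers(0, n, n)
        xs, ms, ys = x[idx], m[idx], y[idx]
        a = np.polyfit(xs, ms, 1)[0]               # path a: x -> m
        X = np.column_stack([np.ones(n), ms, xs])  # path b: m -> y, controlling for x
        b = np.linalg.lstsq(X, ys, rcond=None)[0][1]
        ab[i] = a * b
    return float(ab.mean()), np.percentile(ab, [2.5, 97.5])

# Simulated construct scores for PEOU, PR, and BI.
rng = np.random.default_rng(0)
peou = rng.normal(size=487)
pr = -0.4 * peou + rng.normal(scale=0.9, size=487)
bi = 0.5 * peou - 0.3 * pr + rng.normal(scale=0.7, size=487)
print(indirect_effect(peou, pr, bi))  # a CI excluding zero indicates mediation
```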

4.6. Model Explanatory and Predictive Power

The empirical validation of the research model was conducted through a comprehensive assessment of both explanatory and predictive capabilities using multiple analytical metrics. The model’s explanatory power was primarily evaluated through the coefficient of determination (R2), which quantifies the proportion of variance explained in endogenous constructs. All R2 values substantially exceeded the conservative threshold of 0.10 established by Bagozzi and Yi (1988) [65], demonstrating robust explanatory capacity. Notably, the model explained 67.2% of the variance in BI to adopt AI tools (R2 = 0.672), representing a substantial level of explanatory power. The model also effectively explained the variance in PT (R2 = 0.538), PR (R2 = 0.412), and AU (R2 = 0.389), indicating strong predictive relationships among the constructs.
The model’s structural validity was further substantiated by the Standardized Root Mean Square Residual (SRMR) value of 0.043, which falls well below the stringent threshold of 0.080 recommended by Hair et al. (2020) [71]. This demonstrates excellent model fit and confirms the theoretical framework’s alignment with empirical observations. The effect size analysis (f2) revealed hierarchical impacts among the constructs, with PT exhibiting a large effect on BI (f2 = 0.35), while PR and PU demonstrated moderate effects (f2 = 0.21 and 0.18, respectively). These findings align with recent literature suggesting the paramount importance of trust-building mechanisms in technology adoption contexts [72].
The model’s predictive capability was rigorously assessed through cross-validated redundancy analysis (Q2) using the blindfolding procedure with an omission distance of 7. The analysis yielded consistently positive Q2 values across all endogenous constructs, surpassing the null threshold and confirming the model’s predictive relevance (Shahzad et al., 2024). The hierarchical pattern of Q2 values—BI (0.571), PT (0.386), PR (0.342), and AU (0.321)—reveals particularly strong predictive accuracy for behavioral intentions, suggesting the model’s enhanced capability in forecasting teachers’ propensity to adopt AI tools in higher education settings. The notably high Q2 value for BI (0.571) indicates that the model demonstrates superior predictive power for this crucial outcome variable, substantially exceeding typical predictive relevance thresholds in technology adoption research.
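For reference, the effect-size and predictive-relevance statistics reported here follow their standard definitions:

$$f^{2}=\frac{R^{2}_{\text{included}}-R^{2}_{\text{excluded}}}{1-R^{2}_{\text{included}}},\qquad Q^{2}=1-\frac{\sum_{D}\mathrm{SSE}_{D}}{\sum_{D}\mathrm{SSO}_{D}}$$

where R² (included/excluded) is obtained with and without the focal predictor, and SSE_D and SSO_D are the sums of squared prediction errors and squared observations for blindfolding omission round D. Conventional benchmarks read f² values of roughly 0.02, 0.15, and 0.35 as small, medium, and large effects, consistent with the interpretation above.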

4.7. Discussion

The findings of this study shed light on the complex interplay of cognitive, affective, and contextual factors influencing university teachers’ adoption of AI tools in higher education.
One contribution is the demonstrated mediating role of PT between cognitive appraisals (PU and PEOU) and behavioral intention (BI). This highlights that trust is not merely an external factor, but an affective conduit that transforms rational assessments into committed adoption decisions. In contrast, PR functioned more as a contextual inhibitor, particularly shaped by perceived ease of use—suggesting that operational complexity, rather than perceived utility, is the main trigger of risk perceptions. This nuance deepens our understanding of risk cognition in educational AI, a relatively underexplored domain in existing TAM- or UTAUT-based models.
Beyond individual psychology, this study foregrounds the significance of social and institutional moderators—specifically, PI and OS—in amplifying or suppressing adoption pathways. These findings emphasize that AI adoption is not merely a personal decision, but one shaped by interpersonal trust networks and institutional enablers, aligning with Teo (2011) [73], Zhao & Frank (2003) [74].
Additionally, the study introduces PSC as a novel, emotionally charged construct that interacts with trust but not directly with behavioral intention. This suggests that identity threat—a teacher’s fear of being replaced by AI—undermines affective receptivity without necessarily deterring pragmatic engagement. This insight contributes to the growing conversation around human-machine role negotiation in AI-enhanced education, indicating the importance of reframing AI not as a competitor, but as a collaborator.
These patterns resonate closely with Everett Rogers’ DoI theory. Constructs like PU and PEOU mirror Rogers’ notions of relative advantage and complexity, while PI and OS reflect the importance of observability, trialability, and system readiness in facilitating adoption. By integrating DoI into the interpretation of results, this study provides a dual-theoretical perspective that enriches our understanding of AI diffusion in complex organizational environments like higher education.

5. Conclusions and Implications

This study investigated the factors influencing university teachers’ adoption of AI tools in higher education, employing a PLS-SEM approach. The findings highlight the pivotal roles of PU, PEOU, PT, and PR in shaping teachers’ BI and AU of AI tools. The study concluded the following:
(i) PU and PEOU emerged as significant predictors of BI, indicating that teachers are more likely to adopt AI tools when they perceive them as beneficial and intuitive. PEOU also exhibited a strong influence on PU, highlighting the intertwined relationship between ease of use and perceived utility.
(ii) PT played a pivotal mediating role in translating PU and PEOU into stronger adoption intentions, while PR served as a barrier to adoption. Notably, while PR was significantly mitigated by PEOU, it was not influenced by PU, suggesting that risk perceptions are more closely tied to the perceived simplicity and operational feasibility of AI tools than to their utility. This highlights the importance of designing accessible and user-friendly tools to alleviate operational concerns and foster adoption.
(iii) The moderating roles of PI and OS further emphasize the importance of social and institutional contexts in facilitating AI adoption. PI significantly strengthened the relationship between PT and BI, suggesting that peer demonstrations and endorsements can enhance trust and encourage adoption. OS, in turn, amplified the effect of PEOU on BI, underscoring the necessity of robust institutional support systems, such as training programs and technical resources, to reinforce teachers’ intentions to integrate AI tools into their teaching practices. Additionally, PSC demonstrated a complex relationship with adoption: while it negatively influenced PT (β = −0.131, t = 5.031, p < 0.001), its direct effect on BI was not significant (β = −0.048, t = 1.296, p = 0.201). This nuanced finding suggests that while concerns about AI replacing teachers may erode trust in AI systems, they do not directly deter adoption intentions, highlighting the importance of addressing professional role concerns through institutional policies that emphasize AI as a complementary tool rather than a replacement for human educators.
(iv) The study’s explanatory and predictive power metrics, including high R2 and Q2 values, demonstrate the robustness of the research model in capturing the key dynamics of AI adoption in higher education. The R2 for BI was 0.672, indicating that 67.2% of its variance was explained by the model. Similarly, Q2 values for BI (0.571), PT (0.386), and AU (0.321) confirmed the model’s strong predictive relevance.
Based on the study’s findings, the following strategies are proposed to facilitate the adoption of AI tools among university teachers:
(i) AI tools should be tailored to the actual teaching needs of university educators and continuously refined based on teacher feedback during development. AI developers and universities should collaborate to design user-friendly tools that address practical classroom needs such as streamlining course design, enhancing student engagement, and improving assessment processes. Teachers’ ongoing feedback during the development phase ensures alignment with classroom requirements, increasing both PU and PEOU.
(ii) Higher education institutions should build trust by establishing transparent data protection policies, which will facilitate AI tool adoption. Trust is crucial for AI adoption, especially regarding concerns over data privacy and system reliability. Institutions and policymakers should implement robust data protection measures and transparent AI usage policies. Effectively communicating these efforts through regular updates, transparent audits, and faculty briefings will alleviate fears and foster confidence among educators, addressing the critical role of PT.
(iii) Universities should leverage social influence by encouraging educators to act as AI champions, sharing their experiences and strategies. Social influence plays a significant role in technology adoption. University administrations should encourage experienced educators to become AI champions, sharing their positive experiences and best practices with their peers. Peer-led workshops and collaborative forums will normalize AI use, increase trust, and reduce resistance from hesitant teachers.
(iv) Higher education institutions should establish a comprehensive professional development roadmap, including continuous technical support, to reinforce teachers’ intentions to adopt AI tools. Universities should develop comprehensive training programs focused on building digital literacy and AI competence, including discipline-specific workshops, hands-on sessions, and access to technical resources. Additionally, offering real-time technical support and establishing dedicated help desks will make AI tools more approachable, reducing barriers for educators unfamiliar with technology.
(v) Policymakers and university leaders should prioritize AI adoption within the broader digital transformation strategy and provide the necessary policy support and resources, setting clear goals for integrating AI into teaching and learning. Adequate funding for digital infrastructure and continuous faculty training must be ensured. Universities should integrate AI-related metrics into performance evaluations and funding allocations, incentivizing continuous progress. Collaboration with national strategies, such as China’s “Smart Education”, will ensure alignment and foster systemic innovation.
Despite its contributions, this study has certain limitations. First, the sample was limited to university teachers in one country, which may affect generalizability across regions or cultures and restricts the ability to conduct subgroup analyses based on demographic or institutional characteristics. Second, while disciplinary data were collected, field-specific adoption differences were not analyzed and warrant future attention. Third, perceived risk was examined mainly at the individual level, focusing on usability and professional concerns; broader ethical risks—such as AI’s potential to deepen educational inequality—were not addressed. Lastly, the cross-sectional design limits causal interpretation. Longitudinal studies are needed to track changes in trust, risk, and sustained adoption over time.

Author Contributions

Conceptualization, Z.Z. and Y.L.; methodology, Z.Z.; software, Z.Z.; validation, Z.Z., S.Y. and L.J.; formal analysis, Z.Z.; data curation, Z.Z., Y.L., S.Y. and L.J.; writing—original draft preparation, Z.Z.; writing—review and editing, Z.Z., Y.L., S.Y. and L.J.; funding acquisition, Z.Z. and Y.L. All authors have read and agreed to the published version of the manuscript.

Funding

This work was funded by the Chengdu University of Technology Higher Education Talent Training Quality and Teaching Reform Project [Grant number JG2430085; Grant recipient Zhili Zuo].

Data Availability Statement

Data will be made available on request.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Zhang, B.; Zhu, J.; Su, H. Toward the third generation artificial intelligence. Sci. China Inf. Sci. 2023, 66, 121101. [Google Scholar] [CrossRef]
  2. Ma, L.; Sun, B. Machine learning and AI in marketing–Connecting computing power to human insights. Int. J. Res. Mark. 2020, 37, 481–504. [Google Scholar] [CrossRef]
  3. Kaplan, A.; Haenlein, M. Siri, Siri, in my hand: Who’s the fairest in the land? On the interpretations, illustrations, and implications of artificial intelligence. Bus. Horiz. 2019, 62, 15–25. [Google Scholar] [CrossRef]
  4. Bessen, J. Automation and jobs: When technology boosts employment. Econ. Policy 2019, 34, 589–626. [Google Scholar] [CrossRef]
  5. Sok, S.; Heng, K. ChatGPT for education and research: A review of benefits and risks. Cambodian J. Educ. Res. 2023, 3, 110–121. [Google Scholar] [CrossRef]
  6. Taecharungroj, V. “What can ChatGPT do?” Analyzing early reactions to the innovative AI chatbot on Twitter. Big Data Cogn. Comput. 2023, 7, 35. [Google Scholar] [CrossRef]
  7. Nalbant, K.G. The importance of artificial intelligence in education: A short review. J. Rev. Sci. Eng. 2021, 2021, 1–15. [Google Scholar]
  8. Zawacki-Richter, O.; Marín, V.I.; Bond, M.; Gouverneur, F. Systematic review of research on artificial intelligence applications in higher education–where are the educators? Int. J. Educ. Technol. High. Educ. 2019, 16, 39. [Google Scholar] [CrossRef]
  9. Labadze, L.; Grigolia, M.; Machaidze, L. Role of AI chatbots in education: Systematic literature review. Int. J. Educ. Technol. High. Educ. 2023, 20, 56. [Google Scholar] [CrossRef]
  10. Ma, W.; Adesope, O.O.; Nesbit, J.C.; Liu, Q. Intelligent tutoring systems and learning outcomes: A meta-analysis. J. Educ. Psychol. 2014, 106, 901. [Google Scholar] [CrossRef]
  11. Clark, R.M.; Kaw, A.K.; Braga Gomes, R. Adaptive learning: Helpful to the flipped classroom in the online environment of COVID? Comput. Appl. Eng. Educ. 2022, 30, 517–531. [Google Scholar] [CrossRef]
  12. Bernard, J.; Chang, T.W.; Popescu, E.; Graf, S. Learning style Identifier: Improving the precision of learning style identification through computational intelligence algorithms. Expert Syst. Appl. 2017, 75, 94–108. [Google Scholar] [CrossRef]
  13. El-Bishouty, M.M.; Aldraiweesh, A.; Alturki, U.; Tortorella, R.; Yang, J.; Chang, T.-W.; Graf, S.; Kinshuk. Use of Felder and Silverman learning style model for online course design. Educ. Technol. Res. Dev. 2019, 67, 161–177. [Google Scholar] [CrossRef]
  14. Holmes, W. Artificial intelligence in education. In Encyclopedia of Education and Information Technologies; Springer International Publishing: Cham, Switzerland, 2020; pp. 88–103. [Google Scholar]
  15. Airaj, M. Ethical artificial intelligence for teaching-learning in higher education. Educ. Inf. Technol. 2024, 29, 17145–17167. [Google Scholar] [CrossRef]
  16. Nguyen, A.; Ngo, H.N.; Hong, Y.; Dang, B.; Nguyen, B.-P.T. Ethical principles for artificial intelligence in education. Educ. Inf. Technol. 2023, 28, 4221–4241. [Google Scholar] [CrossRef]
  17. Ouyang, F.; Zheng, L.; Jiao, P. Artificial intelligence in online higher education: A systematic review of empirical research from 2011 to 2020. Educ. Inf. Technol. 2022, 27, 7893–7925. [Google Scholar] [CrossRef]
  18. UNESCO. AI and Education: Guidance for Policy-Makers; United Nations Educational, Scientific and Cultural Organization: Paris, France, 2021; Available online: https://unesdoc.unesco.org/ark:/48223/pf0000391104 (accessed on 8 August 2024).
  19. Rogers, E.M.; Singhal, A.; Quinlan, M.M. Diffusion of innovations. In An Integrated Approach to Communication Theory and Research; Routledge: London, UK, 2014; pp. 432–448. [Google Scholar]
  20. Trelease, R.B. Diffusion of innovations: Smartphones and wireless anatomy learning resources. Anat. Sci. Educ. 2008, 1, 233–239. [Google Scholar] [CrossRef]
  21. Inan, F.A.; Lowther, D.L. Factors affecting technology integration in K-12 classrooms: A path model. Educ. Technol. Res. Dev. 2010, 58, 137–154. [Google Scholar] [CrossRef]
  22. Crompton, H.; Burke, D. Artificial intelligence in higher education: The state of the field. Int. J. Educ. Technol. High. Educ. 2023, 20, 22. [Google Scholar] [CrossRef]
  23. Marangunić, N.; Granić, A. Technology acceptance model: A literature review from 1986 to 2013. Univers. Access Inf. Soc. 2015, 14, 81–95. [Google Scholar] [CrossRef]
  24. Davis, F.D. A Technology Acceptance Model for Empirically Testing New End-User Information Systems: Theory and Results; Massachusetts Institute of Technology: Cambridge, MA, USA, 1985. [Google Scholar]
  25. Humida, T.; Al Mamun, M.H.; Keikhosrokiani, P. Predicting behavioral intention to use e-learning system: A case-study in Begum Rokeya University, Rangpur, Bangladesh. Educ. Inf. Technol. 2022, 27, 2241–2265. [Google Scholar] [CrossRef]
  26. Keikhosrokiani, P. The role of m-Commerce literacy on the attitude towards using e-Torch in Penang, Malaysia. In E-Business in the 21st Century: Essential Topics and Studies; World Scientific Publishing Company: Singapore, 2021; pp. 309–333. [Google Scholar]
  27. Davis, F.D. Perceived usefulness, perceived ease of use, and user acceptance of information technology. MIS Q. 1989, 13, 319–340. [Google Scholar] [CrossRef]
  28. Venkatesh, V.; Morris, M.G.; Davis, G.B.; Davis, F.D. User acceptance of information technology: Toward a unified view. MIS Q. 2003, 27, 425–478. [Google Scholar] [CrossRef]
  29. Hu, Y.H. Effects and acceptance of precision education in an AI-supported smart learning environment. Educ. Inf. Technol. 2022, 27, 2013–2037. [Google Scholar] [CrossRef]
  30. Choi, S.; Jang, Y.; Kim, H. Influence of pedagogical beliefs and perceived trust on teachers’ acceptance of educational artificial intelligence tools. Int. J. Hum.–Comput. Interact. 2023, 39, 910–922. [Google Scholar] [CrossRef]
  31. Capestro, M.; Rizzo, C.; Kliestik, T.; Peluso, A.M.; Pino, G. Enabling digital technologies adoption in industrial districts: The key role of trust and knowledge sharing. Technol. Forecast. Soc. Change 2024, 198, 123003. [Google Scholar] [CrossRef]
  32. Gupta, S.; Kamboj, S.; Bag, S. Role of risks in the development of responsible artificial intelligence in the digital healthcare domain. Inf. Syst. Front. 2023, 25, 2257–2274. [Google Scholar] [CrossRef]
  33. Meyer, J.P.; Gagne, M. Employee engagement from a self-determination theory perspective. Ind. Organ. Psychol. 2008, 1, 60–62. [Google Scholar] [CrossRef]
  34. Fan, W.; Liu, J.; Zhu, S.; Pardalos, P.M. Investigating the impacting factors for the healthcare professionals to adopt artificial intelligence-based medical diagnosis support system (AIMDSS). Ann. Oper. Res. 2020, 294, 567–592. [Google Scholar] [CrossRef]
  35. Kelman, H.C. Compliance, identification, and internalization three processes of attitude change. J. Confl. Resolut. 1958, 2, 51–60. [Google Scholar] [CrossRef]
  36. Granić, A. Educational technology adoption: A systematic review. Educ. Inf. Technol. 2022, 27, 9725–9744. [Google Scholar] [CrossRef] [PubMed]
  37. Valente, T.W. Network models and methods for studying the diffusion of innovations. Models Methods Soc. Netw. Anal. 2005, 28, 98–116. [Google Scholar]
  38. Chatterjee, S.; Rana, N.P.; Dwivedi, Y.K.; Baabdullah, A.M. Understanding AI adoption in manufacturing and production firms using an integrated TAM-TOE model. Technol. Forecast. Soc. Change 2021, 170, 120880. [Google Scholar] [CrossRef]
  39. Nepomuceno, M.V.; Laroche, M.; Richard, M.O. How to reduce perceived risk when buying online: The interactions between intangibility, product knowledge, brand familiarity, privacy and security concerns. J. Retail. Consum. Serv. 2014, 21, 619–629. [Google Scholar] [CrossRef]
  40. Gefen, D.; Karahanna, E.; Straub, D.W. Trust and TAM in online shopping: An integrated model. MIS Q. 2003, 27, 51–90. [Google Scholar] [CrossRef]
  41. Featherman, M.S.; Pavlou, P.A. Predicting e-services adoption: A perceived risk facets perspective. Int. J. Hum.-Comput. Stud. 2003, 59, 451–474. [Google Scholar] [CrossRef]
  42. Martins, C.; Oliveira, T.; Popovič, A. Understanding the Internet banking adoption: A unified theory of acceptance and use of technology and perceived risk application. Int. J. Inf. Manag. 2014, 34, 1–13. [Google Scholar] [CrossRef]
  43. Mayer, R.C.; Davis, J.H.; Schoorman, F.D. An integrative model of organizational trust. Acad. Manag. Rev. 1995, 20, 709–734. [Google Scholar] [CrossRef]
  44. Namahoot, K.S.; Jantasri, V. Integration of UTAUT model in Thailand cashless payment system adoption: The mediating role of perceived risk and trust. J. Sci. Technol. Policy Manag. 2023, 14, 634–658. [Google Scholar] [CrossRef]
  45. Wu, L.; Chen, J.L. An extension of trust and TAM model with TPB in the initial adoption of on-line tax: An empirical study. Int. J. Hum.-Comput. Stud. 2005, 62, 784–808. [Google Scholar] [CrossRef]
  46. Diallo, M.F.; Lambey-Checchin, C. Consumers’ perceptions of retail business ethics and loyalty to the retailer: The moderating role of social discount practices. J. Bus. Ethics 2017, 141, 435–449. [Google Scholar] [CrossRef]
  47. Nguyen, N.; Pervan, S. Retailer corporate social responsibility and consumer citizenship behavior: The mediating roles of perceived consumer effectiveness and consumer trust. J. Retail. Consum. Serv. 2020, 55, 102082. [Google Scholar] [CrossRef]
  48. Venkatesh, V.; Morris, M.G. Why don’t men ever stop to ask for directions? Gender, social influence, and their role in technology acceptance and usage behavior. MIS Q. 2000, 24, 115–139. [Google Scholar] [CrossRef]
  49. Fishbein, M.; Ajzen, I. Belief, Attitude, Intention, and Behavior: An Introduction to Theory and Research; Addison-Wesley: Reading, MA, USA, 1975. [Google Scholar]
  50. Gollwitzer, P.M.; Sheeran, P. Implementation intentions and goal achievement: A meta-analysis of effects and processes. Adv. Exp. Soc. Psychol. 2006, 38, 69–119. [Google Scholar]
  51. Bandura, A. Social cognitive theory: An agentic perspective. Annu. Rev. Psychol. 2001, 52, 1–26. [Google Scholar] [CrossRef] [PubMed]
  52. Wei, J.; Vinnikova, A.; Lu, L.; Xu, J. Understanding and predicting the adoption of fitness mobile apps: Evidence from China. Health Commun. 2021, 36, 950–961. [Google Scholar] [CrossRef] [PubMed]
  53. Pillai, R.; Sivathanu, B.; Metri, B.; Kaushik, N. Students’ adoption of AI-based teacher-bots (T-bots) for learning in higher education. Inf. Technol. People 2024, 37, 328–355. [Google Scholar] [CrossRef]
  54. Polyportis, A.; Pahos, N. Understanding students’ adoption of the ChatGPT chatbot in higher education: The role of anthropomorphism, trust, design novelty and institutional policy. Behav. Inf. Technol. 2024, 44, 315–336. [Google Scholar] [CrossRef]
  55. Zhang, X.; Abbas, J.; Shahzad, M.F.; Shankar, A.; Ercisli, S.; Dobhal, D.C. Association between social media use and students’ academic performance through family bonding and collective learning: The moderating role of mental well-being. Educ. Inf. Technol. 2024, 29, 14059–14089. [Google Scholar] [CrossRef]
  56. Shahzad, M.F.; Xu, S.; Zahid, H. Exploring the impact of generative AI-based technologies on learning performance through self-efficacy, fairness & ethics, creativity, and trust in higher education. Educ. Inf. Technol. 2024, 30, 3691–3716. [Google Scholar]
  57. Fornell, C.; Larcker, D.F. Evaluating structural equation models with unobservable variables and measurement error. J. Mark. Res. 1981, 18, 39–50. [Google Scholar] [CrossRef]
  58. Maheshwari, G. Entrepreneurial intentions of university students in Vietnam: Integrated model of social learning, human motivation, and TPB. Int. J. Manag. Educ. 2022, 20, 100714. [Google Scholar] [CrossRef]
  59. Aktar, A.; Pangil, F. The relationship between employee engagement, HRM practices and perceived organizational support: Evidence from banking employees. Int. J. Hum. Resour. Stud. 2017, 7, 1–22. [Google Scholar] [CrossRef]
  60. Hair, J.F.; Risher, J.J.; Sarstedt, M.; Ringle, C.M. When to use and how to report the results of PLS-SEM. Eur. Bus. Rev. 2019, 31, 2–24. [Google Scholar] [CrossRef]
  61. Olarewaju, A.D.; Gonzalez-Tamayo, L.A.; Maheshwari, G.; Ortiz-Riaga, M.C. Student entrepreneurial intentions in emerging economies: Institutional influences and individual motivations. J. Small Bus. Enterp. Dev. 2023, 30, 475–500. [Google Scholar] [CrossRef]
  62. Fuller, C.M.; Simmering, M.J.; Atinc, G.; Atinc, Y.; Babin, B.J. Common methods variance detection in business research. J. Bus. Res. 2016, 69, 3192–3198. [Google Scholar] [CrossRef]
  63. Kock, F.; Berbekova, A.; Assaf, A.G. Understanding and managing the threat of common method bias: Detection, prevention and control. Tour. Manag. 2021, 86, 104330. [Google Scholar] [CrossRef]
  64. Tavakol, M.; Dennick, R. Making sense of Cronbach’s alpha. Int. J. Med. Educ. 2011, 2, 53. [Google Scholar] [CrossRef]
  65. Bagozzi, R.P.; Edwards, J.R. A general approach for representing constructs in organizational research. Organ. Res. Methods 1998, 1, 45–87. [Google Scholar] [CrossRef]
  66. Henseler, J.; Hubona, G.; Ray, P.A. Using PLS path modeling in new technology research: Updated guidelines. Ind. Manag. Data Syst. 2016, 116, 2–20. [Google Scholar] [CrossRef]
  67. Cheung, G.W.; Cooper-Thomas, H.D.; Lau, R.S.; Wang, L.C. Reporting reliability, convergent and discriminant validity with structural equation modeling: A review and best-practice recommendations. Asia Pac. J. Manag. 2024, 41, 745–783. [Google Scholar] [CrossRef]
  68. Hair, J.F.; Sarstedt, M.; Ringle, C.M.; Gudergan, S.P. Advanced Issues in Partial Least Squares Structural Equation Modeling; Sage College Publishing: Thousand Oaks, CA, USA, 2023. [Google Scholar]
  69. Ab Hamid, M.R.; Sami, W.; Sidek, M.H.M. Discriminant validity assessment: Use of Fornell & Larcker criterion versus HTMT criterion. J. Phys. Conf. Ser. 2017, 890, 012163. [Google Scholar]
  70. Salloum, S.A.; Alhamad, A.Q.M.; Al-Emran, M.; Monem, A.A.; Shaalan, K. Exploring students’ acceptance of e-learning through the development of a comprehensive technology acceptance model. IEEE Access 2019, 7, 128445–128462. [Google Scholar] [CrossRef]
  71. Hair, J.F.; Howard, M.C.; Nitzl, C. Assessing measurement model quality in PLS-SEM using confirmatory composite analysis. J. Bus. Res. 2020, 109, 101–110. [Google Scholar] [CrossRef]
  72. Purwanto, A.; Sudargini, Y. Partial least squares structural squation modeling (PLS-SEM) analysis for social and management research: A literature review. J. Ind. Eng. Manag. Res. 2021, 2, 114–123. [Google Scholar]
  73. Teo, T. Factors influencing teachers’ intention to use technology: Model development and test. Comput. Educ. 2011, 57, 2432–2440. [Google Scholar] [CrossRef]
  74. Zhao, Y.; Frank, K.A. Factors affecting technology uses in schools: An ecological perspective. Am. Educ. Res. J. 2003, 40, 807–840. [Google Scholar] [CrossRef]
Figure 1. Conceptual framework.
Figure 2. Cross-distribution of academic titles with demographic characteristics.
Table 1. Demographic information.

| Demographics | Distribution | Percentage |
| Gender | Male | 47.43% |
| | Female | 52.36% |
| Age | Under 25 | 3.49% |
| | 25–34 | 37.78% |
| | 35–44 | 24.23% |
| | 45–55 | 18.07% |
| | Over 55 | 16.22% |
| Highest Degree | Bachelor | 18.48% |
| | Master | 27.52% |
| | Doctorate | 53.80% |
| Professional Title | Assistant professor | 14.17% |
| | Lecturer | 52.16% |
| | Associate professor | 25.05% |
| | Professor | 8.42% |
| Field of Study | Agricultural Sciences | 9.65% |
| | Engineering and Technology | 27.31% |
| | Humanities and Social Sciences | 27.52% |
| | Medical Sciences | 11.50% |
| | Natural Sciences | 19.71% |
| | Other | 4.11% |
| Teaching Experience | 1–5 years | 41.27% |
| | 6–10 years | 11.70% |
| | 11–20 years | 29.77% |
| | More than 20 years | 17.04% |
| Region | Eastern | 36.14% |
| | Central | 25.87% |
| | Western | 25.26% |
| | Northeast | 12.32% |
Table 2. Main constructs, measurement scale, and factors.

| Construct / Item | Mean | Variance | Cronbach’s Alpha | Composite Reliability | Skewness | Kurtosis | Factor Loading |
| Perceived Usefulness (PU) | 2.977 | 0.584 | 0.908 | 0.873 | | | |
| PU1: Using AI enhances the effectiveness of my classroom teaching. | | | | | −0.047 | −0.439 | 0.832 |
| PU2: AI improves the accuracy and efficiency of assessing student learning outcomes. | | | | | 0.064 | −0.658 | 0.847 |
| PU3: AI supports the optimization of course design and content presentation. | | | | | 0.005 | −0.462 | 0.846 |
| PU4: AI tools enhance student engagement and interactivity in learning. | | | | | −0.029 | −0.620 | 0.847 |
| Perceived Ease of Use (PEOU) | 2.930 | 0.651 | 0.921 | 0.860 | | | |
| PEOU1: Learning to use AI tools in higher education is straightforward. | | | | | 0.023 | −0.290 | 0.869 |
| PEOU2: Incorporating AI in teaching does not increase complexity. | | | | | −0.027 | −0.403 | 0.848 |
| PEOU3: Using AI tools makes teaching in the classroom more comfortable. | | | | | 0.050 | −0.504 | 0.859 |
| PEOU4: The user interface and functional design of AI tools meet teaching requirements effectively. | | | | | −0.013 | −0.173 | 0.876 |
| Peer Influence (PI) | 3.061 | 0.626 | 0.914 | 0.865 | | | |
| PI1: My colleagues frequently share their experiences and best practices in using AI for teaching. | | | | | −0.012 | −0.160 | 0.841 |
| PI2: Many teachers in my environment actively use AI technology in their courses. | | | | | 0.098 | −0.277 | 0.853 |
| PI3: My colleagues’ use of AI in teaching encourages me to adopt it as well. | | | | | −0.028 | −0.319 | 0.858 |
| PI4: Conversations with my colleagues increase my confidence in using AI in my teaching. | | | | | 0.061 | −0.137 | 0.856 |
| Organizational Support (OS) | 3.000 | 0.644 | 0.925 | 0.861 | | | |
| OS1: My institution provides sufficient training and guidance for using AI in teaching. | | | | | −0.040 | −0.417 | 0.878 |
| OS2: My institution offers the necessary resources and technical support for AI integration in teaching. | | | | | 0.002 | −0.474 | 0.857 |
| OS3: My institution actively encourages the use of advanced AI technologies in courses. | | | | | −0.024 | −0.350 | 0.873 |
| OS4: I can easily access technical assistance provided by my institution to support AI use in teaching. | | | | | −0.151 | −0.394 | 0.869 |
| Perceived Trust (PT) | 2.978 | 0.631 | 0.930 | 0.864 | | | |
| PT1: I trust that AI systems operate stably and reliably in classroom environments. | | | | | 0.009 | −0.505 | 0.885 |
| PT2: I trust that AI ensures the safety and protection of student data during use. | | | | | −0.073 | −0.540 | 0.883 |
| PT3: I am confident that AI can provide accurate support in classroom teaching. | | | | | −0.005 | −0.589 | 0.864 |
| PT4: I trust that AI will not introduce biases or misleading information in teaching. | | | | | 0.004 | −0.599 | 0.875 |
| Perceived Risk (PR) | 2.992 | 0.615 | 0.922 | 0.867 | | | |
| PR1: I am concerned that using AI could lead to student data privacy breaches. | | | | | 0.094 | −0.626 | 0.870 |
| PR2: I am worried that technical failures of AI could disrupt classroom teaching. | | | | | 0.119 | −0.528 | 0.864 |
| PR3: I believe that using AI may reduce the quality of direct interaction with students. | | | | | 0.042 | −0.625 | 0.872 |
| PR4: I am concerned that AI may introduce unexpected problems in teaching. | | | | | 0.003 | −0.500 | 0.851 |
| Perceived Substitution Crisis (PSC) | 2.958 | 0.639 | 0.926 | 0.862 | | | |
| PSC1: I am concerned that AI may eventually replace my teaching role. | | | | | −0.071 | −0.517 | 0.868 |
| PSC2: I believe the use of AI could reduce the demand for teachers. | | | | | 0.008 | −0.405 | 0.874 |
| PSC3: The rapid advancement of AI technology makes me anxious about my career prospects. | | | | | −0.010 | −0.527 | 0.867 |
| PSC4: I am concerned that AI could diminish the unique role of teachers in the classroom. | | | | | −0.070 | −0.432 | 0.871 |
| Behavioral Intention (BI) | 2.952 | 0.571 | 0.923 | 0.875 | | | |
| BI1: I am willing to adopt AI technology more extensively in future courses. | | | | | 0.218 | −0.303 | 0.861 |
| BI2: I plan to integrate more AI tools into my teaching in future semesters. | | | | | 0.213 | −0.362 | 0.848 |
| BI3: I aim to explore innovative ways to apply AI in the classroom. | | | | | 0.160 | −0.602 | 0.873 |
| BI4: I am interested in participating in training sessions and learning activities related to the application of AI in teaching. | | | | | 0.147 | −0.406 | 0.881 |
| Actual Usage (AU) | 3.034 | 0.644 | 0.925 | 0.861 | | | |
| AU1: I use AI tools to assist in course design. | | | | | 0.104 | −0.197 | 0.864 |
| AU2: I utilize AI tools to analyze student learning data during classes. | | | | | 0.124 | −0.399 | 0.846 |
| AU3: I use AI to provide personalized teaching suggestions and feedback. | | | | | 0.113 | −0.401 | 0.888 |
| AU4: I rely on AI tools to offer individual tutoring and assistance to students outside of class. | | | | | 0.189 | −0.545 | 0.877 |
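For readers less familiar with the reliability and validity indices reported in Table 2 and Table 3, the standard definitions are summarized below as a reference sketch (the usual textbook formulas, not a restatement of this study's exact computation). For a construct measured by k standardized items with loadings λ_i, item variances σ_i², and total-score variance σ_t²:

\alpha = \frac{k}{k-1}\left(1-\frac{\sum_{i=1}^{k}\sigma_i^{2}}{\sigma_t^{2}}\right), \qquad
\rho_c = \frac{\left(\sum_{i=1}^{k}\lambda_i\right)^{2}}{\left(\sum_{i=1}^{k}\lambda_i\right)^{2}+\sum_{i=1}^{k}\left(1-\lambda_i^{2}\right)}, \qquad
\mathrm{AVE} = \frac{1}{k}\sum_{i=1}^{k}\lambda_i^{2}.

All factor loadings in Table 2 exceed the conventional 0.70 cut-off, which is consistent with both reliability columns clearing the usual 0.70 benchmark.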
Table 3. The correlation matrix and the square root of the AVE.

| Construct | PU | PEOU | PI | OS | PT | PR | PSC | BI | AU |
| PU | 0.715 | | | | | | | | |
| PEOU | 0.654 | 0.749 | | | | | | | |
| PI | 0.516 | 0.654 | 0.734 | | | | | | |
| OS | 0.557 | 0.696 | 0.523 | 0.762 | | | | | |
| PT | 0.452 | 0.349 | 0.642 | 0.520 | 0.766 | | | | |
| PR | 0.631 | 0.456 | 0.560 | 0.573 | 0.492 | 0.739 | | | |
| PSC | 0.599 | 0.761 | 0.344 | 0.398 | 0.323 | 0.563 | 0.757 | | |
| BI | 0.478 | 0.440 | 0.571 | 0.470 | 0.701 | 0.425 | 0.463 | 0.753 | |
| AU | 0.303 | 0.608 | 0.653 | 0.665 | 0.686 | 0.337 | 0.479 | 0.679 | 0.763 |

Note: Diagonal entries are the square root of the AVE; off-diagonal entries are inter-construct correlations.
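The decision rule behind Table 3 is the Fornell–Larcker criterion: discriminant validity is supported when, for every construct i,

\sqrt{\mathrm{AVE}_i} \;>\; \max_{j \neq i}\left|r_{ij}\right|,

that is, each diagonal entry should exceed every correlation in its row and column. For PU, for example, the diagonal value 0.715 is compared against its largest correlation, 0.654 with PEOU.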
Table 4. Heterotrait–monotrait ratio.

| Construct | PU | PEOU | PI | OS | PT | PR | PSC | BI | AU |
| PU | | | | | | | | | |
| PEOU | 0.803 | | | | | | | | |
| PI | 0.587 | 0.617 | | | | | | | |
| OS | 0.777 | 0.544 | 0.691 | | | | | | |
| PT | 0.548 | 0.732 | 0.539 | 0.813 | | | | | |
| PR | 0.796 | 0.805 | 0.670 | 0.649 | 0.718 | | | | |
| PSC | 0.736 | 0.626 | 0.606 | 0.826 | 0.804 | 0.615 | | | |
| BI | 0.696 | 0.775 | 0.726 | 0.737 | 0.620 | 0.827 | 0.818 | | |
| AU | 0.544 | 0.601 | 0.771 | 0.806 | 0.593 | 0.714 | 0.629 | 0.763 | |
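As a complement to Table 4, the heterotrait–monotrait ratio for constructs i and j (with K_i and K_j indicators) is, in its standard form, the mean of the between-construct item correlations divided by the geometric mean of the within-construct item correlations:

\mathrm{HTMT}_{ij}=\frac{\dfrac{1}{K_iK_j}\sum_{g=1}^{K_i}\sum_{h=1}^{K_j} r_{ig,jh}}{\left(\dfrac{2}{K_i(K_i-1)}\sum_{g<h} r_{ig,ih}\;\cdot\;\dfrac{2}{K_j(K_j-1)}\sum_{g<h} r_{jg,jh}\right)^{1/2}}.

Values below 0.85 (conservative) or 0.90 (liberal) are commonly taken as evidence of discriminant validity.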
Table 5. Path analysis results for main effects.

| Hypothesis | 2.5% | 97.5% | β | t-Value | p-Value | Decision |
| H1(a): PU→BI | 0.443 | 0.657 | 0.550 | 1.687 | 0.091 | Accepted |
| H1(b): PEOU→BI | 0.481 | 0.621 | 0.551 | 6.482 | <0.001 | Accepted |
| H1(c): PEOU→PU | 0.301 | 0.489 | 0.417 | 4.893 | <0.001 | Accepted |
| H2(a): PT→BI | 0.421 | 0.543 | 0.482 | 8.951 | <0.001 | Accepted |
| H2(b): PR→BI | −0.552 | −0.438 | −0.375 | 5.685 | <0.001 | Accepted |
| H2(c): PSC→PR | −0.023 | 0.155 | 0.089 | 1.582 | 0.115 | Rejected |
| H2(d): PSC→PT | −0.489 | −0.311 | −0.131 | 5.031 | <0.001 | Accepted |
| H2(e): PSC→BI | −0.121 | 0.023 | −0.048 | 1.296 | 0.201 | Rejected |
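The 2.5% and 97.5% columns in Tables 5–7 are percentile bootstrap bounds: a path is supported when the 95% interval excludes zero. The sketch below illustrates that logic for a single standardized path; it is a minimal illustration with simulated data and hypothetical variable names, not the full PLS-SEM estimation used in the study.

```python
import numpy as np

rng = np.random.default_rng(42)

def standardized_beta(x, y):
    """Standardized simple-regression slope (equal to Pearson r in this bivariate case)."""
    x = (x - x.mean()) / x.std()
    y = (y - y.mean()) / y.std()
    return float(np.mean(x * y))

def percentile_ci(x, y, n_boot=5000, alpha=0.05):
    """Percentile bootstrap confidence interval for the standardized path coefficient."""
    n = len(x)
    boot = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, n, n)        # resample respondents with replacement
        boot[b] = standardized_beta(x[idx], y[idx])
    return np.percentile(boot, [100 * alpha / 2, 100 * (1 - alpha / 2)])

# Illustrative data sized like the study's sample (n = 487); names are hypothetical.
peou = rng.normal(size=487)                 # stand-in for PEOU scores
bi = 0.55 * peou + rng.normal(size=487)     # stand-in for BI scores

lo, hi = percentile_ci(peou, bi)
supported = not (lo <= 0 <= hi)             # supported when the CI excludes zero
print(f"95% CI: [{lo:.3f}, {hi:.3f}] -> supported: {supported}")
```

Under this rule, a row such as H2(c) in Table 5, whose interval [−0.023, 0.155] straddles zero, is rejected even though the point estimate is positive.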
Table 6. Path analysis results for moderation effects.

| Hypothesis | 2.5% | 97.5% | β | t-Value | p-Value | Decision |
| H3(a): PI × PU→BI | −0.156 | 0.108 | 0.035 | 0.621 | 0.511 | Rejected |
| H3(b): PI × PT→BI | 0.153 | 0.298 | 0.244 | 2.172 | 0.035 | Accepted |
| H4(a): OS × PEOU→BI | 0.177 | 0.301 | 0.289 | 5.211 | <0.001 | Accepted |
| H4(b): OS × PR→BI | −0.297 | 0.013 | 0.011 | 0.782 | 0.619 | Rejected |
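Each moderation hypothesis in Table 6 corresponds to the coefficient on a standardized product term. For H4(a), for example, the structural equation has the schematic form

\mathrm{BI} = \beta_1\,\mathrm{PEOU} + \beta_2\,\mathrm{OS} + \beta_3\,(\mathrm{PEOU}\times\mathrm{OS}) + \varepsilon,

so the supported estimate β₃ = 0.289 (p < 0.001) indicates that the PEOU→BI relationship strengthens as organizational support increases.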
Table 7. Path analysis results for mediation effects.

| Hypothesis | 2.5% | 97.5% | β | t-Value | p-Value | Decision |
| H5(a): PU→PR→BI | −0.103 | 0.279 | −0.015 | 0.178 | 0.859 | Rejected |
| H5(b): PEOU→PR→BI | 0.119 | 0.328 | −0.259 | 3.931 | <0.001 | Accepted |
| H5(c): PU→PT→BI | 0.215 | 0.425 | 0.322 | 2.011 | 0.045 | Accepted |
| H5(d): PEOU→PT→BI | 0.281 | 0.537 | 0.149 | 1.782 | 0.075 | Accepted |
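The mediation estimates in Table 7 are indirect effects, conventionally computed as the product of the two constituent paths and tested with the same bootstrap intervals. For the supported PU→PT→BI chain, schematically,

\beta_{\mathrm{PU}\to\mathrm{PT}\to\mathrm{BI}} = \beta_{\mathrm{PU}\to\mathrm{PT}} \times \beta_{\mathrm{PT}\to\mathrm{BI}},

with the indirect effect judged significant when its 95% percentile interval excludes zero.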
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
