Article

Sustainable Use Intention of Text-to-Image Generative AI in Higher Education: An S–O–R Model with Parallel Trust and Risk Pathways

1
Graduate School, Hanyang University, 222 10th Lane, Gangnam District, Seoul 04763, Republic of Korea
2
School of Sociology, Central China Normal University, Wuhan 430079, China
3
School of Information Management, Wuhan University, Wuhan 430072, China
*
Authors to whom correspondence should be addressed.
Sustainability 2026, 18(3), 1657; https://doi.org/10.3390/su18031657
Submission received: 12 January 2026 / Revised: 30 January 2026 / Accepted: 3 February 2026 / Published: 5 February 2026

Abstract

In light of the rapid adoption of text-to-image (T2I) tools in higher education, this study develops a stimulus–organism–response (S–O–R) model to explain sustainable and responsible intentions to use T2I generative AI tools. Focusing on both university students and faculty, the model conceptualizes perceptions of ease of use, information quality, and ethical awareness as external stimuli; technology- and ethics-related anxiety as internal emotional states; and algorithmic trust, perceived risk, and sustainable use intention as behavioral evaluations and responses. Within this framework, we integrate the Technology Acceptance Model (TAM), Technology Threat Avoidance Theory (TTAT), and the DeLone–McLean (D&M) model to propose a layered mechanism, with personal innovativeness serving as a moderator. Utilizing 807 valid survey responses, we employed structural equation modeling and fuzzy-set qualitative comparative analysis. The results reveal that (1) the overall chain is supported: perceived ease of use, information quality, and ethical awareness primarily influence sustainable use intention indirectly through anxiety, trust, and risk; (2) although higher usability and quality do not alleviate anxiety, they coexist within a complex pattern of trust amid anxiety; and (3) high levels of personal innovativeness diminish the linear effects of trust and risk on intention. Configurational evidence further indicates multiple pathways leading to high sustainable intention, whereas low intention is typically characterized by uniformly low perceptions, emotions, evaluations, and innovativeness. By framing sustainable adoption through a coupled trust–risk–anxiety lens, this study extends the understanding of generative AI use in education and offers actionable implications for promoting responsible and sustainable practices in universities.

1. Introduction

The rapid advancement of generative artificial intelligence is transforming higher education, particularly in image-intensive fields such as art and design. Text-to-image (T2I) systems significantly reduce the time required to move from ideation to visual realization and are now widely used for coursework and instructional demonstrations. However, their integration into classrooms has raised growing concerns regarding privacy breaches, copyright ownership, and the authenticity of generated images. Policy frameworks—including UNESCO guidelines and the EU Artificial Intelligence Act—stress that the educational use of AI must adhere to principles of fairness, transparency, and accountability. Together, these developments suggest that instrumental benefits are no longer sufficient to explain why students and instructors continue to rely on T2I tools amid increasing ethical and regulatory scrutiny.
In the existing literature on AI-enabled educational technologies, much of the research is grounded in the Technology Acceptance Model (TAM) or the Unified Theory of Acceptance and Use of Technology (UTAUT), primarily concentrating on the initial decision to adopt these technologies. For instance, Scherer and colleagues employed meta-analytic structural equation modeling to demonstrate that perceived ease of use (PEOU) fosters teachers’ adoption of digital technologies primarily by enhancing perceived usefulness (PU) [1]. Alturki and Aldraiweesh extended the Technology Acceptance Model (TAM) by incorporating information quality and task–technology fit to clarify the factors influencing graduate students’ adoption of Google Meet [2]. However, this line of research often remains confined to a binary perspective of “adopt versus not adopt” and rarely integrates algorithmic trust, perceived risk, and ethics-related concerns within a unified theoretical framework. Consequently, it offers limited explanatory power regarding how users navigate the trust–risk trade-off as they transition from initial adoption to sustained use.
In pedagogical applications of text-to-image (T2I) tools, Iranmanesh has highlighted concerns related to creativity and risks to authenticity in architectural design education [3], whereas Vartiainen focused on copyright issues and the construction of classroom rules and norms [4]. Other studies in vocational and higher art education similarly highlight a dual reality: T2I tools can significantly enhance the efficiency of concept exploration and iterative design; however, they also raise important concerns regarding data bias and the limited control over the generated outputs [2,5,6]. In parallel, research on AI ethics and anxiety suggests that stronger AI-related ethical awareness heightens students’ sensitivity to issues such as fairness, privacy, and plagiarism [7]. Ethics-related anxiety and perceived ethical risks have often been found to suppress university students’ intentions to use generative AI [8]. Teachers’ anxiety related to AI has also been identified as a significant barrier to its adoption and use [9,10]. However, some evidence suggests that moderate concern may actually promote more cautious and norm-compliant behavior [11,12]. Collectively, these strands of research illuminate specific facets of text-to-image (T2I) use, including performance improvements, ethical concerns, and emotional experiences. However, they have yet to develop a unified framework that explains how external perceptions influence sustainable usage intentions toward T2I tools, particularly through the lenses of anxiety, trust formation, and risk appraisal.
Given the multifaceted determinants of technology-related behavior, a single theoretical perspective is often insufficient. Accordingly, this study adopts a multi-theory integration approach by embedding the Technology Acceptance Model (TAM), Technology Threat Avoidance Theory (TTAT), and the DeLone–McLean Information Systems Success Model (D&M) within a unified Stimulus–Organism–Response (S–O–R) framework. Within this framework, TAM captures the cognitive entry point of “how easy the tool is to use,” thereby providing a foundational explanation for why users are inclined to engage with the system.
TTAT, in turn, focuses on individuals’ threat appraisal and coping appraisal processes when they perceive potential information technology threats [13,14]. This clarifies the reasons behind users’ concerns and their motivation to discontinue use. Furthermore, the D&M model emphasizes the critical role of information quality in influencing user satisfaction and continued use, thereby addressing the perceived reliability of the generated content [15,16]. Building on this integration, we propose a layered model encompassing “perception–emotion–evaluation–intention” (PEEI). In this model, perceived ease of use (PEOU), information quality (IQ), and ethical awareness (EA) are conceptualized as stimuli. Technology anxiety (TA) and ethics-related anxiety (EA2) are regarded as affective responses. Algorithmic trust (AT) and perceived risk (PR) are framed as evaluative outcomes, while sustainable use intention (SUI) represents the behavioral response. Additionally, personal innovativeness (PI) is introduced as a boundary condition. Guided by this framework, we articulate the following research questions:
RQ1: In text-to-image (T2I) supported instructional contexts, does the proposed perception–emotion–evaluation–intention pathway hold true?
RQ2: What distinct mediating roles do technology anxiety (TA) and ethics-related anxiety (EA2) play within this mechanism?
RQ3: How does personal innovativeness (PI) moderate the relationships among algorithmic trust (AT), perceived risk (PR), and sustainable use intention (SUI)?
RQ4: From a configurational perspective, which combinations of perceptual, affective, evaluative, and innovativeness-related conditions lead to higher versus lower levels of SUI?
To address these questions, we analyze survey data from university instructors and students using a complementary SEM–fsQCA approach. Structural equation modeling is employed to test net effects and identify mediation and moderation pathways. Meanwhile, fuzzy-set qualitative comparative analysis reveals multiple equifinal configurations that lead to high versus low SUI. This dual-analytic approach not only enhances theoretical understanding of the perception–emotion–evaluation–intention mechanism within generative AI contexts but also provides empirical guidance for universities aiming to develop governance and pedagogical strategies that promote the compliant, responsible, and sustainable use of T2I tools.
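To make the mediation logic of the SEM stage concrete, the sketch below estimates an indirect effect of the form X → M → Y (for instance, a perception such as PEOU acting on intention through an anxiety mediator) using the product-of-coefficients method with a percentile bootstrap, a standard companion to SEM path estimates. All data and path values here are simulated placeholders for illustration; they are not the study's estimates.

```python
import random
import statistics

# Illustrative sketch (simulated data, NOT the study's estimates): estimating
# an indirect effect X -> M -> Y, e.g. PEOU -> technology anxiety -> SUI,
# via the product-of-coefficients method with a percentile bootstrap.
random.seed(42)
n = 300
x = [random.gauss(0, 1) for _ in range(n)]
m = [-0.5 * xi + random.gauss(0, 1) for xi in x]                        # a-path ~ -0.5
y = [0.4 * mi + 0.2 * xi + random.gauss(0, 1) for xi, mi in zip(x, m)]  # b-path ~ 0.4

def cov(a, b):
    ma, mb = statistics.fmean(a), statistics.fmean(b)
    return sum((u - ma) * (v - mb) for u, v in zip(a, b)) / len(a)

def indirect_effect(x, m, y):
    a = cov(x, m) / cov(x, x)  # a-path: regress M on X
    # b-path: coefficient on M from y ~ m + x (solved 2x2 normal equations)
    b = (cov(m, y) * cov(x, x) - cov(x, y) * cov(x, m)) / (
        cov(m, m) * cov(x, x) - cov(x, m) ** 2)
    return a * b

point = indirect_effect(x, m, y)
boot = []
for _ in range(1000):  # resample cases with replacement
    s = [random.randrange(n) for _ in range(n)]
    boot.append(indirect_effect([x[i] for i in s],
                                [m[i] for i in s],
                                [y[i] for i in s]))
boot.sort()
lo, hi = boot[24], boot[974]  # 95% percentile confidence interval
print(f"indirect effect = {point:.3f}, 95% CI [{lo:.3f}, {hi:.3f}]")
```

An indirect effect is supported when the bootstrap interval excludes zero; with the simulated paths above (a ≈ -0.5, b ≈ 0.4) the interval falls clearly below zero.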

2. Literature Review, Integrated Theoretical Model, and Hypothesis Development

2.1. Current Applications and Challenges of Generative AI Images in Education

Text-to-image (T2I) generative AI tools are revolutionizing design education in higher education by significantly shortening creative cycles and improving exploratory efficiency [17,18]. At the same time, they introduce significant risks, including copyright disputes, data bias, and limited control over outputs [6,19]. The coexistence of these benefits and risks suggests that the traditional usefulness–ease-of-use framework is insufficient to explain how both students and instructors manage such trade-offs and sustain their intention to use over time.
Two notable limitations are evident in the current literature. First, many studies primarily utilize the Technology Acceptance Model (TAM) or the Unified Theory of Acceptance and Use of Technology (UTAUT) to elucidate initial adoption. While these frameworks emphasize approach-oriented motives, they often neglect the role of avoidance psychology in the context of risk [1,2]. Second, although some studies consider information quality or security-related risks, researchers often treat quality, affect, and trust as separate constructs. Consequently, there is limited empirical evidence supporting an integrated pathway that links quality perceptions to affective responses, trust, and risk appraisals, and existing models rarely capture the distinctive anxiety associated with image generation. To address these gaps, we adopt a dual-motive perspective spanning approach and avoidance and develop a complementary framework that integrates the Technology Acceptance Model (TAM), the DeLone–McLean (D&M) model, and the Technology Threat Avoidance Theory (TTAT). TAM, which focuses on ease of use, captures the cognitive entry threshold; the D&M model, emphasizing information quality, serves as a benchmark for trustworthiness; and TTAT, addressing technology anxiety, represents defensive coping mechanisms. Building upon a layered framework of “perception–emotion–evaluation–intention,” we define algorithmic trust (AT) and perceived risk (PR) as parallel mediators and introduce personal innovativeness (PI) as a moderating factor [20]. Utilizing a combined SEM–fsQCA approach, this study aims to elucidate the role of external perceptions in driving SUI through anxiety and evaluative mechanisms. Additionally, it seeks to identify the configurations of conditions that promote responsible use among students and instructors in higher education.

2.2. Embedded Theoretical Foundation

2.2.1. Technology Acceptance Model (TAM)

The Technology Acceptance Model (TAM), proposed by Davis, posits that perceived ease of use (PEOU) is a central determinant of users’ technology adoption and actual usage. In the realm of educational technology research, Scherer and colleagues further argue that PEOU serves as a crucial link between users’ attitudes and their behavioral intentions [1]. Across various contexts, including general Information and Communication Technology (ICT) and immersive Virtual/Augmented Reality (VR/AR) learning environments, substantial evidence supports the strong pathway from Perceived Ease of Use (PEOU) to intention, and subsequently to actual use [21,22]. However, in rapidly evolving and complex contexts such as T2I tools, the TAM alone may not adequately capture contextual influences and individual differences. Consequently, researchers often extend the model with complementary constructs to enhance its explanatory power [23].

2.2.2. Technology Threat Avoidance Theory (TTAT)

The Technology Threat Avoidance Theory (TTAT), introduced by Liang and Xue, elucidates the manner in which individuals react to IT-related threats through a dual appraisal process comprising threat appraisal and coping appraisal. When individuals perceive a threat as challenging to manage, they may transition from problem-focused coping strategies to more emotion-focused responses [24]. In the context of generative AI, prior research indicates that perceived threats can significantly reduce the intention to use these technologies. Conversely, effective avoidance or mitigation strategies can not only sustain but also enhance continued usage [14]. In this study, we operationalize threat appraisal as perceived risk (PR) and treat algorithmic trust (AT) as an indicator of perceived controllability. Additionally, we conceptualize technology anxiety and ethics-related anxiety (TA/EA2) as affective responses that emerge under conditions of high threat. We examine how these anxieties influence sustainable use intention (SUI) through the parallel pathways of AT and PR.

2.2.3. DeLone and McLean (D&M) Information Systems Success Model

The DeLone and McLean (D&M) model posits that information quality, system quality, and service quality are fundamental drivers of user satisfaction and the intention to use [25,26]. In educational contexts, information quality (IQ)—commonly defined by accuracy, relevance, and completeness—has consistently shown a strong correlation with user satisfaction and the likelihood of continued use [16,27,28]. In the high-salience T2I context, we consider IQ to be a crucial exogenous driver. On one hand, higher-quality outputs enhance perceptions of the usability of the results [29]. On the other hand, IQ constitutes a core input to trust and risk appraisals and—together with PEOU—jointly shapes downstream constructs [5,6].

2.2.4. Integrated Structure: Embedding Three Theories Within an S–O–R Framework

From a broad theoretical perspective, this study adopts the Stimulus–Organism–Response (S–O–R) framework (Figure 1). The S–O–R model posits that environmental stimuli (S) first influence individuals’ internal cognitive and affective states (O), which subsequently trigger behavioral responses (R). The S–O–R framework explains how external factors shape behavior through internal cognitive and emotional processes [30,31]. In this study, perceived ease of use (PEOU), information quality (IQ), and ethical awareness (EA) serve as external stimuli—signals that influence learners’ attention, cognitive effort, and initial appraisal of system reliability and legitimacy. Research has shown that usability and information quality are key drivers of learning engagement and downstream emotional responses [32,33,34]. The organism stage captures how learners process these cues through perceptions and emotions. Technology anxiety (TA) and ethics-related anxiety (EA2) reflect affective reactions to uncertainty, loss of control, and ethical ambiguity, while algorithmic trust (AT) and perceived risk (PR) represent higher-level evaluations of reliability and risk [35,36,37]. At the response level, sustainable use intention (SUI) refers not only to continued use but also to doing so responsibly and in compliance with norms, aligned with educational and ethical standards [4,37]. Importantly, the S–O–R framework in this study is conceptually aligned with the core principles of cognitive learning theory, particularly constructivism and its emphasis on active knowledge construction. Learning with generative AI is not a passive process; rather, students engage in iterative cycles of exploration, critical evaluation, and meaning-making as they generate, analyze, and refine visual outputs [38,39]. Within the organism layer, internal states encompass not only emotional responses but also cognitive engagement and reflective judgment. 
Thus, the S–O–R model is well-suited to explain how external cues (e.g., usability, information quality, ethical awareness) capture attention, trigger cognitive appraisal, and activate deeper processes of self-regulation and critical thinking [33,40]. By linking the S–O–R model with cognitive learning theory, the present framework explains how the interaction between external stimuli and internal processing promotes sustainable and responsible behaviors, supporting not only the intention to use but also the quality and normative aspects of AI-driven educational practices.
Critically, the S–O–R model also accounts for individual differences: personal innovativeness (PI) influences how users interpret stimuli and manage uncertainty, thereby modifying the effects of trust and risk on SUI [37,41].
This framework has been widely applied to technology use and user behavior in digital environments. In our model, perceived ease of use (PEOU), information quality (IQ), and ethical awareness (EA) collectively constitute the stimulus layer. PEOU, derived from the Technology Acceptance Model (TAM), captures perceived entry barriers related to operational convenience; IQ, based on the DeLone and McLean model (D&M), reflects perceptions of the accuracy and completeness of generated outputs; and EA, informed by the Technology Threat Avoidance Theory (TTAT), represents context-sensitive awareness of ethical issues such as privacy, copyright, and academic norms. Technology anxiety (TA) and ethics-related anxiety (EA2) comprise the organism layer, capturing anxiety responses to technological uncertainty and ethical risks, respectively. Algorithmic trust (AT) and perceived risk (PR) represent evaluative assessments of system controllability and potential negative consequences. At the response level, sustainable use intention (SUI) signifies the intention to continuously and normatively utilize AI image-generation tools. Conceptually, the Technology Acceptance Model (TAM) (blue region) delineates the primary perception–intention pathway, while the Technology Threat Avoidance Theory (TTAT) (magenta region) articulates a defensive channel involving threat, emotion, and evaluation. Additionally, the DeLone and McLean model (D&M) (yellow region) provides upstream indicators related to content credibility through information quality. Personal innovativeness (PI) is introduced as a moderating variable to capture individual differences in the strength with which evaluations translate into intention. This integrated framework establishes a clear theoretical division of labor and a coherent foundation for the subsequent joint analysis using structural equation modeling (SEM) and fuzzy-set qualitative comparative analysis (fsQCA).

2.3. Model and Hypotheses

In the Technology Acceptance Model (TAM), perceived ease of use (PEOU) is defined as the extent to which an individual anticipates that utilizing a system or tool will require minimal effort. This concept reflects a subjective assessment of the system’s usability, indicating that it can be employed with little difficulty or exertion [42,43]. In educational technology settings, PEOU is often demonstrated through tangible usability cues, such as a user-friendly interface, an intuitive learning curve, and clearly articulated operational steps [44]. Prior research in educational technology has consistently demonstrated that PEOU significantly influences users’ attitudes, which subsequently affect their behavioral intentions. This finding indicates that reducing perceived barriers to usage is a crucial factor in facilitating initial adoption and promoting sustained engagement [22,23]. Evidence regarding emotion-related inhibitory effects indicates a significant negative association between technology anxiety and PEOU; higher levels of anxiety tend to reduce users’ perceptions of a system’s usability [45,46]. Conversely, improving ease of use may, in practice, indirectly alleviate technology anxiety [21,47]. In extended TAM research that incorporates trust, scholars generally argue that systems that are easier to learn and whose errors are more predictable are more likely to foster algorithmic trust (AT) [48]. In our context, PEOU can be further specified as the extent to which prompts are learnable, parameters are controllable, and outputs are perceived as predictable [49]. Classroom-based action research indicates that a prompting and human–AI collaboration workflow can significantly reduce cognitive load and enhance perceived process controllability [50].
In TTAT-informed settings, it is generally assumed that when users perceive a generative tool as controllable and easy to use, their threat appraisals decrease while their acceptance intentions increase. However, emerging evidence suggests that improvements in ease of use alone may not be sufficient. When the quality of information and source transparency are inadequate, combined with a lack of evaluative evidence, perceived risk and anxiety can rebound, offsetting the otherwise positive effects of ease of use [6]. Accordingly, we propose the following hypotheses:
H1. 
Perceived ease of use (PEOU) positively influences algorithmic trust (AT) (PEOU → AT, +).
H2. 
Perceived ease of use (PEOU) negatively influences technology anxiety (TA) (PEOU → TA, −).
H3. 
Perceived ease of use (PEOU) positively influences ethics-related anxiety (EA2) (PEOU → EA2, +).
Information quality (IQ) is a fundamental construct within the DeLone–McLean (D&M) Information Systems Success Model. It generally refers to the completeness, accuracy, relevance, timeliness, and understandability of a system’s outputs, emphasizing the trustworthiness, sufficiency, and usability of the content produced for practical applications [15,51,52]. In this study, information quality (IQ) refers to learners’ overall perceptions of the outputs produced by T2I tools and the related disclosed information. Research on information services in higher education shows that information quality is a strong predictor of satisfaction and behavioral intention, exhibiting a robust positive correlation with both initial use and continued use [28,53,54]. In educational settings, the adoption and continued use of technologies are often facilitated by enhancing perceived trustworthiness and usefulness. For example, studies examining university students’ use of ChatGPT (OpenAI, San Francisco, CA, USA) show that information quality, credibility, and satisfaction significantly reduce perceived risk and increase the intention to use the technology. These findings also inform governance discussions related to academic integrity [55,56,57]. In T2I-enabled teaching, research in design and craft education generally indicates that stable algorithmic trust and sustained usage occur only when the generated outputs closely align with learners’ intentions and maintain consistent details [58]. Conversely, unstable quality and opaque provenance can amplify uncertainty, heighten technology anxiety, and impede classroom integration [18]. Synthesizing this evidence, we propose the following hypotheses:
H4. 
Information quality (IQ) positively influences algorithmic trust (AT) (IQ → AT, +).
H5. 
Information quality (IQ) negatively influences technology anxiety (TA) (IQ → TA, −).
H6. 
Information quality (IQ) positively influences ethics-related anxiety (EA2) (IQ → EA2, +).
H7. 
Information quality (IQ) negatively influences perceived risk (PR) (IQ → PR, −).
Ethical awareness (EA) refers to an individual’s ability to recognize and understand the ethical issues related to the use of artificial intelligence (AI). This includes an appreciation of and concern for principles such as fairness, transparency, privacy, accountability, and academic integrity [7,59,60]. In this study, EA measures learners’ sensitivity to ethical issues that arise when text-to-image (T2I) tools are used in educational settings, particularly concerning copyright, fairness, and privacy [61]. In higher education, academic integrity and ethical awareness are consistently recognized as fundamental components of responsible AI use. Previous research in university settings indicates that increased ethical awareness enhances students’ attention to issues such as copyright, privacy, and bias. This heightened awareness subsequently amplifies the salience of threat cues, increasing the likelihood of experiencing tension and concern related to technology use, commonly referred to as technology anxiety (TA) [62,63]. Most studies conceptualize Ethical Awareness (EA) as a cognitive process occurring during the initial “recognition” phase. As EA increases, individuals are more likely to evaluate ethical threats more rigorously during the subsequent “judgment” stage, which, in turn, elevates their perception of risk (PR) [64]. Accordingly, we propose the following hypotheses:
H8. 
Ethical awareness (EA) positively influences perceived risk (PR) (EA → PR, +).
H9. 
Ethical awareness (EA) positively influences ethics-related anxiety (EA2) (EA → EA2, +).
H10. 
Ethical awareness (EA) positively influences technology anxiety (TA) (EA → TA, +).
H11. 
Ethical awareness (EA) positively influences algorithmic trust (AT) (EA → AT, +).
Technology Threat Avoidance Theory (TTAT) posits that when individuals face potential information technology (IT) threats, they first undergo a process of threat appraisal, followed by coping appraisal. These evaluations collectively generate motivational tendencies that either encourage individuals to avoid the technology or continue using it [14,24,65]. In this study, technology anxiety (TA) is defined as the tension and apprehension students experience when using or anticipating the use of T2I tools, arising from uncertainties about the system and its generated outputs. This anxiety is not solely due to operational difficulties; rather, it reflects subjective concerns about the system’s reliability and the controllability of the process. These concerns can reduce perceived controllability, which in turn affects both algorithmic trust and perceptions of risk [47,48]. Consistent with the perspective articulated by Jinlei Li and colleagues, higher technology anxiety (TA) tends to undermine positive evaluations of a system’s capability and controllability. This, in turn, reduces algorithmic trust (AT) and increases perceived risk (PR) [48]. Similarly, the TTAT-based evidence presented by Chenhui Liu and colleagues indicates that increased perceptions of threat and uncertainty lead to heightened avoidance tendencies and intensified negative evaluations [14]. To extend TTAT, we introduce a new construct called ethics-related anxiety (EA2). This construct is defined as the moral unease and anxiety concerning the potential ethical implications of text-to-image (T2I) technology use. These implications include issues related to copyright attribution, training data provenance and privacy, bias, representational fairness, and academic integrity. EA2 effectively captures the emotional tension experienced during specific usage episodes [63,64].
Prior studies employing structural equation modeling (SEM) with university samples indicate that ethical awareness (EA) increases perceived ethical risk (PR). Furthermore, higher levels of PR subsequently elicit stronger ethics-related anxiety (EA2) and suppress behavioral intentions, consistent with Technology Threat Avoidance Theory (TTAT), which posits a sequence of threat appraisal leading to affective responses that subsequently influence behavioral tendencies [7,14,64]. In generative AI educational settings, Zhu and colleagues introduced the constructs of “AI ethics anxiety” (AIEA) and “perceived ethical risk” (PER), finding that both significantly inhibit behavioral intention and actual use. They further suggested that perceived risk may indirectly reduce use through ethics anxiety, providing path-level evidence supporting the sequence PR → AIEA → reduced use [64]. Empirical evidence from higher education populations similarly identifies AI-related anxiety as a significant barrier to readiness and adoption intention. This supports a proximal mechanism whereby risk-triggered anxiety leads to reduced adoption and continuance tendencies [66,67]. However, the relationship between affective factors and the intention to use sustainably remains debated, with empirical findings varying across different technologies and contexts. Therefore, it is necessary to re-examine these pathways specifically within T2I-enabled educational settings. Accordingly, we propose the following testable hypotheses:
H12. 
Technology anxiety (TA) negatively influences algorithmic trust (AT) (TA → AT, −).
H13. 
Technology anxiety (TA) positively influences perceived risk (PR) (TA → PR, +).
H14. 
Ethics-related anxiety (EA2) negatively influences the intention to use sustainably (SUI) (EA2 → SUI, −).
H15. 
Perceived risk (PR) positively influences ethics-related anxiety (EA2) (PR → EA2, +).
This study extends the Technology Acceptance Model (TAM) by incorporating algorithmic trust. Algorithmic trust refers to users’ positive expectations regarding a technological system’s competence and reliability under conditions of uncertainty. In this context, it reflects the degree to which students and instructors believe that the generative model can consistently produce usable outputs and that the platform offers responsible safeguards, thereby fostering a sense of security and encouraging continued use [68,69]. In immersive intelligent learning environments, factors such as ethics and transparency can enhance trust propensity, which in turn positively influences students’ intentions to adopt and use the technology [11]. Perceived risk is users’ subjective appraisal of potential negative consequences—such as privacy leakage, copyright and attribution disputes, content distortion, and bias—and constitutes a key pathway through which adoption and continued use are inhibited in educational contexts [70]. It focuses on assessing the likelihood of adverse outcomes. In generative AI teaching contexts, perceived risk (PR) specifically refers to students’ and instructors’ overall judgment of uncertainties and potential harms associated with text-to-image (T2I) tools across data, content, and accountability dimensions [6,71,72,73]. As previous TAM-extension research published in Frontiers in Psychology suggests, trust and perceived risk often function as parallel safety channels that jointly predict both adoption and continued use [48]. Accordingly, we propose:
H16. 
Algorithmic trust (AT) positively influences the intention for sustainable use (SUI) (AT → SUI, +).
H17. 
Perceived risk (PR) negatively influences the intention to use sustainably (SUI) (PR → SUI, −).
Personal innovativeness in IT (PIIT) refers to an individual’s consistent tendency to be among the first to adopt new information technologies. It is a key trait that explains who is more willing to take risks and experiment with novel tools [20,74,75]. In higher education AI adoption, Tang’s research demonstrates that personal innovativeness (PI) not only directly increases teachers’ intention to use AI but also significantly moderates the relationship between internal barriers and adoption intention. This suggests that highly innovative individuals can partially offset the negative effects of these obstacles [76]. In AI-service continuance settings, Salih and colleagues report that trust is a significant positive driver of sustainable use intention (SUI). Their findings also suggest that personal innovativeness (PI) itself increases continuance intention, indicating that individuals with high PI are more inclined to engage persistently and deeply with novel AI services [12]. Building on evidence that personal innovativeness (PI) can mitigate the detrimental effects of barriers and strengthen continuance intentions, we propose PI as a boundary condition that moderates the relationship between evaluation and intention. Specifically, PI is expected to positively moderate the algorithmic trust (AT) → sustainable use intention (SUI) link, meaning individuals with high PI are more willing to continue using the tool at a given level of trust. Conversely, PI is expected to negatively moderate the perceived risk (PR) → SUI link, indicating that individuals with high PI are less likely to disengage when facing the same level of perceived risk. This framework extends prior findings from education adoption and AI-service continuance studies [12,76]. As with TA and EA2, empirical findings on PI across technologies and contexts are not fully consistent, making it necessary to re-examine these moderation effects in T2I-enabled educational settings. Accordingly, we propose:
H18. 
Personal innovativeness (PI) positively moderates the effect of algorithmic trust (AT) on sustainable use intention (SUI), such that PI strengthens the relationship between AT and SUI.
H19. 
Personal innovativeness (PI) negatively moderates the effect of perceived risk (PR) on sustainable use intention (SUI), meaning that higher PI weakens the negative impact of PR on SUI.
Building on these hypotheses, we develop an integrated theoretical model that combines the Technology Acceptance Model (TAM), the Technology Threat Avoidance Theory (TTAT), and the DeLone–McLean (D&M) framework, thereby specifying a four-layer psychological mechanism encompassing perception, emotion, evaluation, and intention (Figure 2). Specifically, perceived ease of use (PEOU), information quality (IQ), and ethical awareness (EA) serve as external inputs that initially elicit affective responses—technology anxiety (TA) and ethics-related anxiety (EA2)—which subsequently influence algorithmic trust (AT) and perceived risk (PR) through two parallel evaluative pathways, ultimately shaping sustainable use intention (SUI). Here, TA captures general anxiety arising from technological uncertainty, whereas EA2 denotes a second-order anxiety triggered by ethical risks.
Beyond the main chain, the model retains a PR → EA2 feedback link consistent with TTAT, indicating that heightened perceived risk further amplifies ethics-related anxiety. This specification aims to capture the mechanism’s dynamic features rather than introduce a logical inconsistency within the four-layer structure. Additionally, personal innovativeness (PI) is modeled as a moderator to reflect individual differences in the strength of the evaluation → intention translation.

3. Methodology

This study employs a mixed empirical approach that integrates structural equation modeling (SEM) and fuzzy-set qualitative comparative analysis (fsQCA) to examine the “perception–emotion–evaluation–intention” mechanism within an educational context. The model consists of a perception layer (PEOU, IQ, EA), an emotion layer (TA, EA2), an evaluation layer (AT, PR), and an outcome layer (SUI), with personal innovativeness (PI) serving as a moderator. Utilizing cross-sectional data collected via 7-point Likert scales, the analytical strategy unfolds in two stages. First, SEM is applied to assess linear net effects among constructs, the proposed sequential mediation structure, and moderation effects. Second, fsQCA complements the limitations of a purely linear approach by identifying non-linear, equifinal configurations that result in high versus low SUI. By triangulating net effects (SEM) with configurational effects (fsQCA), this study aims to reveal key pathways and actionable governance levers that promote the sustainable use of T2I tools in higher education.

3.1. Measurement

The integrated theoretical model incorporates multiple constructs, each grounded in established literature and specifically adapted for the application of generative AI in educational settings. As summarized in Table 1, we present the theoretical definitions, measurement dimensions, representative sources, and key adaptation decisions for each construct in this study. This documentation aims to ensure measurement consistency and contextual validity while providing reusable indicators for subsequent SEM and fsQCA analyses.

3.2. Sample and Data Collection

Participants in this study included university students and instructors with prior experience using text-to-image (T2I) generative AI tools, such as Midjourney and Stable Diffusion. The research protocol was approved by the Ethics Committee of Wuhan University, and informed consent was obtained from all participants before data collection.
Given the uneven accessibility and adoption of text-to-image (T2I) tools across academic contexts, a combined convenience and snowball sampling strategy was employed. An online questionnaire was administered via Wenjuanxing, a widely used professional survey platform in China, and distributed through course-related groups, academic communities, and peer referrals. No monetary incentives were offered, and participation was entirely voluntary. A total of 956 responses were initially collected. To ensure data quality, responses were screened using predefined criteria, including the removal of cases with unrealistically short completion times, patterned or invariant responses, and underage participants. After data cleaning, 807 valid responses were retained for analysis. This sample size meets common recommendations for multivariate statistical techniques and provides sufficient statistical power to estimate the proposed integrated model.

3.3. Data Analysis

This study employed a hybrid analytical approach that integrates structural equation modeling (SEM) and fuzzy-set qualitative comparative analysis (fsQCA). First, we conducted SEM using Mplus 8.3, applying confirmatory factor analysis to assess the reliability and validity of the measurement scales. Next, we estimated the structural paths in the full model, examining the relationships from external perceptions (PEOU, IQ, EA) to affective responses (TA, EA2), evaluative appraisals (AT, PR), and ultimately to sustainable use intention (SUI). We focused particularly on the mediating roles of algorithmic trust and perceived risk, as well as the moderating effects of personal innovativeness (PI) on these key relationships.
Second, we utilized fsQCA 3.0 to perform fuzzy-set qualitative comparative analysis from a configurational, set-theoretic perspective. The condition variables—perceived ease of use (PEOU), information quality (IQ), ethical awareness (EA), technology anxiety (TA), ethics-related anxiety (EA2), algorithmic trust (AT), perceived risk (PR), and personal innovativeness (PI)—as well as the outcome variable (SUI) were calibrated into fuzzy sets. We examined multiple equifinal configurations that lead to high SUI. This approach complements the SEM results by transcending a singular linear logic. By triangulating the linear net effects identified through SEM with the configurational pathways revealed by fsQCA, we not only quantify key relationships but also illuminate complex causal patterns and mechanisms [80]. Collectively, these complementary strands provide robust, multifaceted evidence for a more comprehensive understanding of the “perception–emotion–evaluation–intention” process underlying the use of T2I tools by both students and instructors in higher education.

3.4. SEM Results

3.4.1. Reliability and Validity Assessment

Before conducting the structural path analysis, we first evaluated the reliability and validity of the measurement model using confirmatory factor analysis. The results of the measurement model are summarized in Table 2. Composite reliability (CR) was employed to assess the internal consistency of each latent construct, while average variance extracted (AVE) served as an indicator of convergent validity. Consistent with the criteria applied by Alturki and Aldraiweesh, all standardized factor loadings were significant and fell within an acceptable range (approximately 0.655–0.824, with most exceeding 0.70), indicating that the items were effective indicators of their respective constructs [81]. The CR values for all constructs ranged from 0.873 to 0.926, surpassing the recommended threshold of 0.70, thus demonstrating satisfactory internal consistency. The AVE values ranged from 0.502 to 0.634, all meeting or slightly exceeding the commonly accepted cutoff of 0.50 (with the AT construct showing the lowest AVE of 0.502), suggesting that each construct explained a substantial proportion of the variance in its indicators and thereby providing evidence of adequate convergent validity [82]. Discriminant validity was assessed using the Fornell–Larcker criterion by comparing the square root of each construct’s AVE with its correlations with the other constructs [83]. In all cases, the square root of the AVE was greater than the corresponding inter-construct correlations, indicating that each latent construct shared more variance with its own indicators than with any other construct and thereby supporting satisfactory discriminant validity.
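These quantities follow standard closed-form definitions and can be sketched from standardized loadings. The snippet below is illustrative only; the loading values and the inter-construct correlation are hypothetical placeholders, not the figures reported in Table 2:

```python
# Illustrative sketch (not the authors' code): composite reliability (CR),
# average variance extracted (AVE), and a Fornell-Larcker check computed
# from hypothetical standardized factor loadings.
import math

def composite_reliability(loadings):
    """CR = (sum of loadings)^2 / ((sum)^2 + sum of error variances)."""
    s = sum(loadings)
    errors = sum(1 - l ** 2 for l in loadings)  # standardized items
    return s ** 2 / (s ** 2 + errors)

def average_variance_extracted(loadings):
    """AVE = mean of the squared standardized loadings."""
    return sum(l ** 2 for l in loadings) / len(loadings)

# Hypothetical standardized loadings for one construct
at_loadings = [0.70, 0.72, 0.68, 0.73]

cr = composite_reliability(at_loadings)
ave = average_variance_extracted(at_loadings)
print(f"CR = {cr:.3f}, AVE = {ave:.3f}")

# Fornell-Larcker criterion: sqrt(AVE) must exceed the construct's
# correlations with every other construct (hypothetical r = 0.55 here).
assert math.sqrt(ave) > 0.55
```

With these hypothetical loadings, CR lands near the 0.70 benchmark region and AVE near the 0.50 cutoff, mirroring the thresholds used in the text.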

3.4.2. Structural Model Fit Assessment

To evaluate the overall fit of the structural model, we reported the chi-square to degrees-of-freedom ratio (χ2/df), the root mean square error of approximation (RMSEA), the standardized root mean square residual (SRMR), the comparative fit index (CFI), and the Tucker–Lewis index (TLI) [84]. As shown in Table 3, the χ2/df ratio was below the commonly accepted threshold of 3. Both RMSEA and SRMR were below 0.08, indicating low residuals and a good approximate fit, while CFI and TLI exceeded the conventional criterion of 0.90 [84]. Collectively, these indices suggest that the proposed structural model fits the data well, providing a solid foundation for subsequent testing and interpretation of the hypothesized paths.

3.4.3. Path Analysis and Hypothesis Testing

We tested the proposed hypotheses using structural equation modeling (SEM), with a significance level set at p < 0.05. Table 4 shows that among the 19 hypothesized paths, H3, H4, H6–H11, H13, and H15–H17 were statistically significant and aligned with the predicted directions. This evidence provides strong support for the proposed mechanism of perception–emotion–evaluation–intention. Additionally, several statistically significant paths emerged in directions contrary to our initial predictions; we will explore these unexpected patterns in Section 4.2.
In contrast, hypothesis H1 was not statistically supported. Furthermore, although hypotheses H2, H5, H12, H14, and the moderation hypotheses H18–H19 reached statistical significance, their estimated directions contradicted the original predictions. These unexpected findings suggest that technology anxiety, ethics-related anxiety, and personal innovativeness operate in a more nuanced way within T2I-enabled educational contexts than the baseline hypotheses anticipated. The structural paths depicted in Figure 3 confirm the overall logic of the proposed model while clarifying the causal relationships among the latent constructs, thereby providing empirical support for the integrated framework.

3.4.4. Moderation Analysis

To examine the moderating role of personal innovativeness (PI), we incorporated the interaction terms AT × PI and PR × PI into the structural model (see Table 4). The results indicate that the interaction effect of AT × PI on sustainable use intention (SUI) was significant (β = −5.852, p < 0.01), as was the interaction effect of PR × PI on SUI (β = 4.620, p < 0.05). Notably, both effects contradicted the hypothesized directions: PI weakened the positive relationship between algorithmic trust and sustainable use intention, while it attenuated the negative relationship between perceived risk and sustainable use intention.

3.4.5. Mediation Analysis

We tested the indirect effects using bootstrap resampling with 95% confidence intervals (CIs). An indirect effect was considered significant when its CI did not include zero [85,86]. As reported in Table 5, multiple serial mediation paths were supported. For instance, the paths PEOU → TA → AT → SUI and IQ → TA → AT → SUI both demonstrated positive indirect effects. Although these pathways increased TA, the positive associations between TA → AT and AT → SUI culminated in an overall positive indirect influence, indicating a pattern of “anxiety-driven, compliance-like acceptance” in T2I-enabled educational contexts.
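The percentile-bootstrap logic for a serial indirect effect can be sketched as follows. This is an illustrative simplification, not our Mplus specification: it uses synthetic data and simple bivariate regressions for the a, b, and c paths, whereas the full serial model partials out upstream predictors:

```python
# Illustrative sketch: percentile bootstrap CI for a serial indirect
# effect a*b*c (e.g., PEOU -> TA -> AT -> SUI). All data are synthetic;
# only the CI logic mirrors the procedure described in the text.
import random

random.seed(42)

def slope(x, y):
    """OLS slope of y on x (simple bivariate regression)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    den = sum((xi - mx) ** 2 for xi in x)
    return num / den

# Synthetic sample with positive a, b, c paths plus noise
n = 200
peou = [random.gauss(0, 1) for _ in range(n)]
ta = [0.4 * p + random.gauss(0, 1) for p in peou]
at = [0.5 * t + random.gauss(0, 1) for t in ta]
sui = [0.6 * a + random.gauss(0, 1) for a in at]

boot = []
idx = list(range(n))
for _ in range(1000):
    s = [random.choice(idx) for _ in range(n)]  # resample cases
    a = slope([peou[i] for i in s], [ta[i] for i in s])
    b = slope([ta[i] for i in s], [at[i] for i in s])
    c = slope([at[i] for i in s], [sui[i] for i in s])
    boot.append(a * b * c)

boot.sort()
lo_ci, hi_ci = boot[24], boot[974]  # 2.5th and 97.5th percentiles
print(f"95% CI for indirect effect: [{lo_ci:.3f}, {hi_ci:.3f}]")
# The indirect effect is deemed significant when the CI excludes zero.
```

The decision rule is the one stated above: an indirect effect is supported when the resulting 95% interval does not contain zero.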
Moreover, information quality exerted significant positive indirect effects on sustainable use intention (SUI) through two evaluative channels: the trust channel (AT → SUI) and the risk channel (PR → SUI). Ethical awareness exhibited a dual mechanism: it positively influenced SUI through the trust pathway (EA → AT → SUI and EA → TA → AT → SUI), while simultaneously generating a negative indirect effect through the risk pathway (EA → PR → SUI). Overall, technology anxiety, algorithmic trust, and perceived risk served as crucial links connecting external perceptions to sustainable use intention, thereby providing empirical support for the proposed perception–emotion–evaluation–intention mediation mechanism.
The SEM results reveal the primary mechanisms of perception, emotion, evaluation, and intention. However, the directions of several path coefficients do not fully align with the conventional expectations derived from the Technology Acceptance Model (TAM) and Technology Threat Avoidance Theory (TTAT). This discrepancy suggests the presence of more complex psychological processes and causal structures within the context of generative AI. Because SEM rests on assumptions of linearity and symmetric effects, it may not adequately capture equifinal pathways and boundary conditions that arise from various combinations of antecedent factors [87]. Therefore, we also employed fuzzy-set qualitative comparative analysis (fsQCA) to investigate the configurational antecedents of high versus non-high sustainable use intention (SUI) and to provide complementary interpretations and robustness checks for the SEM findings [88].

3.5. fsQCA Results

3.5.1. Calibration of Variables

In fuzzy-set qualitative comparative analysis (fsQCA), calibration is a critical step that converts raw scale scores into set-membership scores ranging from 0 to 1 [89,90]. We applied the direct calibration method, guided by both theoretical expectations and empirical distributions, using the 75th, 50th, and 25th percentiles of each variable as thresholds for full membership, the crossover point, and full non-membership, respectively. To reduce ambiguity for cases exactly at the crossover point (membership = 0.50), we slightly adjusted this value to 0.501. After calibration, all conditions (PEOU, IQ, EA, AT, PR, TA, EA2, and PI), as well as the outcome (SUI), were expressed as fuzzy-set scores, providing the basis for subsequent necessity and sufficiency analyses. The specific calibration thresholds are presented in Table 6.
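The direct calibration step can be sketched as below. This is an illustrative re-implementation, not fsQCA 3.0 itself; the three thresholds shown are hypothetical placeholders for the percentile anchors reported in Table 6:

```python
# Illustrative sketch of the direct calibration method: raw scores are
# mapped to fuzzy membership via a logistic function anchored at the
# full-membership (75th pct), crossover (50th pct), and full
# non-membership (25th pct) thresholds.
import math

def calibrate(x, full_in, crossover, full_out):
    """Direct calibration: the three anchors map to log-odds +3, 0, -3."""
    if x >= crossover:
        log_odds = 3.0 * (x - crossover) / (full_in - crossover)
    else:
        log_odds = 3.0 * (x - crossover) / (crossover - full_out)
    return 1.0 / (1.0 + math.exp(-log_odds))

# Hypothetical thresholds for a 7-point item
full_in, crossover, full_out = 6.0, 5.0, 4.0

print(calibrate(6.0, full_in, crossover, full_out))  # ~0.953 (full membership anchor)
print(calibrate(5.0, full_in, crossover, full_out))  # 0.50 (crossover anchor)
print(calibrate(4.0, full_in, crossover, full_out))  # ~0.047 (non-membership anchor)

# As in the paper, scores exactly at the 0.50 crossover can be nudged
# to 0.501 so those cases are not dropped from the truth-table analysis.
```

The nudge in the final comment corresponds to the 0.501 adjustment described above for cases sitting exactly at the crossover point.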

3.5.2. Single-Condition Necessity Analysis

Before conducting the configurational sufficiency analysis, we first examined whether any single condition was always present when the outcome occurred, thereby determining if it constituted a necessary condition. Accordingly, we performed necessity analyses for both high SUI and non-high SUI, testing each condition along with its negation and calculating consistency and coverage (see Table 7). Following established conventions, we used a consistency threshold of 0.90 or higher as the benchmark for necessity.
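The consistency and coverage metrics underlying this necessity test follow standard fuzzy-set definitions and can be sketched as below, using hypothetical membership scores rather than our calibrated data:

```python
# Illustrative sketch of the necessity metrics reported in Table 7:
# for condition X and outcome Y as fuzzy-membership vectors,
# consistency = sum(min(x, y)) / sum(y) and
# coverage    = sum(min(x, y)) / sum(x).

def necessity(condition, outcome):
    overlap = sum(min(x, y) for x, y in zip(condition, outcome))
    consistency = overlap / sum(outcome)
    coverage = overlap / sum(condition)
    return consistency, coverage

# Hypothetical fuzzy memberships for six cases
pi_cond = [0.9, 0.5, 0.7, 0.4, 0.6, 0.9]   # condition (e.g., PI)
sui_out = [0.8, 0.9, 0.6, 0.3, 0.7, 0.8]   # outcome (high SUI)

cons, cov = necessity(pi_cond, sui_out)
print(f"consistency = {cons:.3f}, coverage = {cov:.3f}")
# A condition is treated as necessary only when consistency >= 0.90;
# this hypothetical condition falls short of that benchmark.
```

In this made-up example the condition's consistency stays below 0.90, echoing the pattern reported for our data, where no single condition reached the necessity benchmark.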
As shown in Table 7, no condition or its negation reached the 0.90 consistency threshold for either high or non-high SUI. This finding suggests that no single factor can be considered a necessary condition in this study; rather, high SUI appears to result from combinations of conditions. Although the consistency values for PI, TA, and EA2 were relatively higher than those of other factors, they still did not meet the necessity criterion. Overall, these results support a configurational interpretation, indicating that learners’ sustainable use intention is shaped jointly by multiple co-occurring conditions and underscoring the need for sufficiency-based configurational analyses to identify distinct pathways.

3.5.3. Configurational Sufficiency Analysis (fsQCA)

In fuzzy set Qualitative Comparative Analysis (fsQCA), sufficiency analysis is used to identify combinations of conditions that are sufficient to produce a specific outcome by constructing and minimizing a truth table [91]. In this study, we focus on high sustainable use intention (SUI) as the outcome set. We generated a truth table in fsQCA and applied specific thresholds to ensure both solution robustness and interpretability: a frequency cutoff of 7, a consistency cutoff of 0.80, and a PRI consistency cutoff of 0.85. The configurations that met these criteria were then minimized to derive configurational pathways leading to both high SUI and non-high SUI, as detailed in Table 8 and Table 9.
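The raw-consistency and PRI screening applied to each truth-table row can be sketched as follows, again with hypothetical membership vectors rather than our calibrated data:

```python
# Illustrative sketch of the truth-table screening metrics: raw
# sufficiency consistency and PRI consistency for a configuration X
# with respect to outcome Y (fuzzy-membership vectors).

def sufficiency_consistency(x, y):
    """Raw consistency(X -> Y) = sum(min(x, y)) / sum(x)."""
    return sum(min(a, b) for a, b in zip(x, y)) / sum(x)

def pri_consistency(x, y):
    """PRI discounts simultaneous overlap with the negated outcome (1 - y)."""
    both = sum(min(a, b) for a, b in zip(x, y))
    triple = sum(min(a, b, 1 - b) for a, b in zip(x, y))
    return (both - triple) / (sum(x) - triple)

x = [0.8, 0.9, 0.7, 0.6, 0.2]  # configuration membership (hypothetical)
y = [0.9, 0.8, 0.8, 0.7, 0.3]  # outcome membership (hypothetical)

raw = sufficiency_consistency(x, y)
pri = pri_consistency(x, y)
print(f"raw consistency = {raw:.3f}, PRI = {pri:.3f}")
# A row enters Boolean minimization only if it covers enough cases
# (frequency cutoff 7 in this study) and passes raw >= 0.80 and
# PRI >= 0.85.
```

The final comment restates the three cutoffs applied in this study; a row failing any of them is excluded before minimization.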
As shown in Table 8, the analysis identified seven configurations (H1–H7) that are sufficient to produce a high intention of sustainable use (SUI). The overall solution demonstrates high consistency (0.948) and substantial coverage (0.574), indicating that these combinations account for a significant proportion of high-SUI cases within the sample. Structurally, information quality (IQ), ethical awareness (EA), and ethics-related anxiety (EA2) emerge as core or peripheral conditions in most configurations, interacting synergistically with perceived ease of use (PEOU), algorithmic trust (AT), technology anxiety (TA), perceived risk (PR), and personal innovativeness (PI). This observation highlights the concept of configurational equifinality, where multiple distinct pathways can lead to the same outcome.
In the symmetric analysis of non-high sustainable use intention (SUI) as the outcome, only one dominant configuration (NH1) emerged (Table 9). This solution exhibits very high consistency (0.972) and substantial coverage (0.529). Its defining feature is the core absence of perceived ease of use (PEOU), algorithmic trust (AT), and personal innovativeness (PI), accompanied by generally low levels of information quality (IQ), ethical awareness (EA), and ethics-related anxiety (EA2). Unlike the diverse pathways associated with high SUI, the antecedents of non-high SUI are more concentrated, indicating that high and non-high SUI are not simply mirror images but demonstrate clear causal asymmetry. Specifically, while multiple combinations can promote high SUI, insufficient sustainable use intention is primarily characterized by the simultaneous breakdown of the “ease of use–trust–innovativeness” chain. Finally, robustness checks—conducted by adjusting the consistency and PRI thresholds and increasing the frequency cutoff—produced stable configurational structures, thereby supporting the robustness and interpretability of the fsQCA findings [92].

4. Discussion

4.1. Addressing the Research Questions

To present the results more clearly and minimize confusion between SEM paths, fsQCA configurations, and hypotheses, this section is organized around the four research questions (RQ1–RQ4). Table 10 summarizes the main empirical findings for each question, while the following subsections elaborate on these patterns and offer psychological and technological interpretations grounded in prior research.
RQ1. Does the overall “perception–emotion–evaluation–intention” chain hold?
Building on the S–O–R framework, we modeled perceived ease of use (PEOU), information quality (IQ), and ethical awareness (EA) as external stimuli; technology anxiety (TA) and ethics-related anxiety (EA2) as affective responses; algorithmic trust (AT) and perceived risk (PR) as evaluative outcomes; and sustainable use intention (SUI) as the behavioral intention.
Results from SEM reveal significant effects of perception-related factors on both emotional and evaluative components. Specifically, PEOU, IQ, and EA heighten TA and EA2, with IQ and EA further boosting AT and influencing PR. However, PEOU’s relationship with AT was non-significant, which challenges previous TAM findings in which ease of use is expected to foster trust. In the context of generative AI, this result suggests that greater ease of use may not directly enhance trust but may instead lead to anxiety-driven engagement, in which users, despite the ease of use, feel a lack of control over the generative process [30]. AT and PR exert opposing effects on sustainable use intention (SUI): AT positively predicts SUI, whereas PR has a negative effect. Multiple serial indirect effects following the sequence perception → emotion → evaluation → intention are confirmed through bootstrapping, indicating that sustainable use intention is primarily shaped by combined affective and evaluative processes rather than direct perceptual effects.
Complementary evidence from fsQCA further indicates that no single condition is necessary for a high intention of sustainable use. Instead, high SUI emerges from multiple equifinal configurations involving different combinations of favorable perceptions, affective states, evaluative judgments, and personal innovativeness. Taken together, the results from SEM and fsQCA consistently support the existence of a perception–emotion–evaluation–intention mechanism, while highlighting its configurational and non-linear nature within the T2I educational context.
RQ2. What mediating roles do technology anxiety and ethics-related anxiety play?
Within the proposed S–O–R framework, technology anxiety (TA) and ethics-related anxiety (EA2) serve as pivotal affective links connecting external perceptions to the downstream trust–risk–intention process. SEM results reveal that perceived ease of use (PEOU), information quality (IQ), and ethical awareness (EA) significantly predict technology anxiety (TA). This suggests that when T2I tools are perceived as more user-friendly, capable, and ethically framed, students may experience increased anxiety regarding their technological competence and the potential consequences of use. TA, in turn, exerts a dual influence: it increases perceived risk (TA → PR) while, contrary to conventional expectations, it also positively predicts algorithmic trust (TA → AT), indicating a pattern of trust under anxiety. This finding contrasts with the expectations set by Technology Threat Avoidance Theory (TTAT), which typically posits that anxiety suppresses trust [40]. In generative AI environments, anxiety can activate coping mechanisms such as verification and risk assessment, which contribute to the development of trust [93]. Consistent with this, multiple serial indirect effects of the form perception → TA → (AT/PR) → sustainable use intention (SUI) are supported by bootstrap tests, positioning TA as a central emotional conduit through which perceptions shape sustainable use intention.
In contrast, EA2 influences sustainable use intention directly through an emotion-to-intention pathway. Its key antecedents include PEOU and PR; more user-friendly tools and higher perceived risks evoke greater ethical concerns, such as those related to academic integrity and copyright. The direct link from PR to EA2 provides substantial evidence for this escalation from perceived risk to ethics-related anxiety. Notably, EA2 does not mediate effects through trust; instead, it directly and positively predicts sustainable use intention (EA2 → SUI), thereby reversing the previously hypothesized negative direction. The fsQCA results further support these findings: in high-SUI configurations, EA2 co-occurs with high AT and manageable PR, indicating that moderate ethical anxiety can encourage continued engagement. This complements the SEM results, which suggest that anxiety-driven trust plays a central role in promoting responsible use in generative AI contexts. Rather than acting as a deterrent, EA2 may function as a form of responsibility activation, motivating sustained engagement under heightened self-regulation and normative awareness.
Overall, technology anxiety (TA) primarily mediates the pathway from perception to intention through the complete sequence perception → emotion → evaluation → intention, whereas ethics-related anxiety (EA2) contributes via a shorter, downstream pathway from emotion directly to intention. Together, TA and EA2 form a nuanced affective mechanism that illustrates how external perceptions shape sustainable use intention, offering an explanation that goes beyond the simplistic anxiety-as-barrier model.
RQ3. How does personal innovativeness moderate the evaluation–intention links?
In this study, PI is positioned as a moderating variable in the evaluation–intention stage, influencing the relationships among AT, PR, and SUI. Contrary to the hypothesized directions, SEM results reveal that as PI increases, the positive impact of AT on SUI weakens, while the negative effect of PR on SUI is significantly diminished. These findings suggest that highly innovative users rely less on traditional evaluative cues such as trust and risk, consistent with prior research on personality traits and technology adoption [37]. In other words, their continuance intentions are less dependent on linear assessments of trust and risk [94]; their SUI appears to be driven more by an intrinsic inclination to experiment with new technologies, maintain competitiveness, and explore emerging tools. In contrast, users with low PI rely more heavily on evaluative cues: higher trust leads to stronger intention, whereas higher risk more readily deters continued use.
The fsQCA results reinforce this interpretation at the configurational level. As illustrated in Table 8, PI emerges as a core present condition across all seven high-SUI configurations (H1–H7). In contrast, in the non-high SUI solution (NH1; Table 9), PI is absent as a core driver. This pattern indicates that high PI functions less as a mere amplifier and more as a buffer: it mitigates the marginal dependence of SUI on trust and risk, allowing innovative users to maintain engagement even when trust has not fully stabilized and risks remain unresolved. Substantively, PI encapsulates a key learner profile—those who are more inclined to persist amidst uncertainty in generative-AI contexts—and provides a theoretical foundation for designing differentiated, user-segmented interventions aimed at promoting responsible and sustainable use.
RQ4. From a configurational perspective, which condition combinations lead to high versus low SUI?
The fsQCA results indicate that high sustainable use intention (SUI) is not driven by a single key variable but emerges from multiple conjunctural combinations. Across the high-SUI solutions, conditions involving perceptions (perceived ease of use [PEOU], information quality [IQ], and ethical awareness [EA]), affective responses (technology anxiety [TA] and ethics-related anxiety [EA2]), evaluative appraisals (algorithmic trust [AT] and perceived risk [PR]), and individual traits (personal innovativeness [PI]) co-occur in various synergistic patterns. Notably, PI and EA2 frequently appear as present—and often core—components across most pathways, suggesting that students who combine innovation-oriented traits with ethical vigilance are more likely to maintain usage across diverse perceptual and evaluative states. High-SUI configurations also typically involve appreciable levels of technology- and ethics-related anxiety, a relatively higher degree of algorithmic trust, and risk perceptions that remain within a manageable range. This implies that sustainable use does not require zero anxiety or zero risk; rather, it is achieved through a calibrated balance of vigilance and trust.
In contrast, non-high sustainable use intention (SUI) is characterized by a single dominant configuration marked by a generally low profile across the system: low perceived ease of use, poor information quality, limited ethical awareness, minimal technology- and ethics-related anxiety, low algorithmic trust, and low personal innovativeness. Substantively, this configuration reflects a state of global under-activation, in which students neither perceive the tool as particularly useful or salient nor demonstrate sufficient trust, alertness, or willingness to experiment. The stark contrast between the multiple pathways leading to high SUI and the singular pathway associated with non-high SUI highlights causal asymmetry. Enhancing sustainable use intention likely requires cultivating multiple enabling conditions across perceptions, affect, appraisals, and learner traits, whereas low intention is more likely to arise when these conditions are persistently misaligned or collectively absent.

4.2. Unexpected Findings and Conflicting Paths

Our findings revealed unexpected results that challenge established assumptions in TAM and TTAT. Specifically, the results for H2 and H5 show that PEOU and IQ do not alleviate technology anxiety (TA); on the contrary, both increase it. This contradicts the TAM assumption that easier-to-use, higher-quality systems reduce user anxiety. This phenomenon, which we refer to as the “ease-of-use paradox,” can be explained through the S–O–R framework: ease of use and output quality can amplify anxiety when users face uncertainty about the system’s underlying logic. These results extend prior research on anxiety-driven trust [40] by showing that anxiety can enhance trust rather than diminish it. H12 further revealed that TA does not negatively impact trust as expected; instead, it fosters algorithmic trust (AT), suggesting that anxiety-driven engagement promotes trust development. Similarly, H14 demonstrated that ethics-related anxiety (EA2) does not inhibit sustainable use intention (SUI) but rather enhances it, suggesting that concerns over ethical behavior and privacy motivate users to continue using the tool in a more responsible and compliant manner, as previously indicated in studies on ethical anxiety [93]. These counterintuitive effects call for theoretical re-interpretation, especially in the context of generative AI, where the dynamics of anxiety and trust may differ from those in traditional technology adoption models.
The paradox of ease and control: ease of use as a trigger of anxiety. The positive effects of perceived ease of use (PEOU) and information quality (IQ) on technology anxiety (TA) (H2 and H5) suggest that, in text-to-image (T2I) contexts, usability and high-quality outputs may function less as stress relievers, as typically assumed in Technology Acceptance Model (TAM) research, and more as amplifiers of anxiety. With T2I tools, students can generate highly complex, stylized images from brief prompts. This low-input, high-output contrast can induce a sense of reduced agency. On one hand, strong system performance elevates expectations regarding output quality; on the other, the black-box generative logic of the model makes output quality and associated risks hard to anticipate. As these tools become easier to use and more capable, students may become increasingly aware of their limited understanding of the underlying mechanisms, which can be perceived as a loss of control, an anxiety stemming from the perceived technology-driven displacement of user agency.
Exceptionally high perceived IQ may intensify a form of cognitive dissonance: when outputs appear highly polished, learners may struggle to distinguish what was “by me” from what was “by the model.” This can lead to a high-fidelity trap, in which realism and refinement obscure authorship, effort, and accountability. In this context, ease of use is no longer merely a facilitator; it may increase engagement and reliance while simultaneously exposing users to uncertainty regarding provenance, controllability, and unintended consequences. Ease of use thus becomes a gateway that leads learners from low-entry interactions into a high-uncertainty, high-stakes domain. This mismatch, high capability but low controllability, provides a coherent explanation for the escalation of technology anxiety as T2I tools become more user-friendly and their outputs appear increasingly high in quality.
Vigilant Trust through Active Engagement: Anxiety as a Catalyst for Calibrated Trust. H12 suggests that technology anxiety (TA) positively predicts algorithmic trust (AT). Although this relationship may appear paradoxical, it offers valuable theoretical insights within the context of generative AI and may indicate an evolution in human–AI interaction during complex higher education tasks. Rather than resulting in abandonment or generalized distrust, anxiety can trigger defensive coping mechanisms. When students express concerns about potential model errors or unintended outcomes, they are more likely to shift from passive recipients to active verifiers, engaging in frequent checking, correcting, and iterative refinement of human–AI interactions. Through this process of repeated verification, learners not only manage risks but also accumulate performance evidence that validates the tool’s functional competence in practice. Consequently, the trust established in this context is unlikely to be blind reliance; instead, it resembles calibrated trust: a form of confidence developed under heightened risk awareness and sustained interaction. In this regard, anxiety may disrupt the complacency associated with routine use and serve as a catalyst for deeper engagement, ultimately reinforcing trust.
Sustainable Use Within Ethical Boundaries: Ethics-Related Anxiety as a Driver of Compliant Continuance. H14 demonstrates a positive association between ethics-related anxiety (EA2) and sustainable use intention (SUI), indicating that moderate ethical unease does not necessarily inhibit usage; rather, it can motivate compliant continuance. In higher education settings, once students recognize the importance of copyright attribution, authorship norms, academic integrity, and related accountability issues, they typically do not respond by discontinuing use altogether. Instead, they tend to adopt more cautious practices to mitigate ethical risks. For example, they may explicitly disclose AI assistance in assignments, retain intermediate outputs as evidence of the process, and incorporate substantial human editing and post-processing. Heightened sensitivity to ethical risks thus drives learners to seek a workable balance between continued use and adherence to rules, resulting in a bounded form of sustainable use. In other words, ethics-related anxiety functions both as a constraint and a protective mechanism: it curbs reckless misuse while supporting long-term use within normative and institutional frameworks.
These counterintuitive effects can be theoretically grounded in the foundational logic of the S–O–R model, TAM, and TTAT. The S–O–R model emphasizes that external stimuli—such as high usability and superior information quality—do not always lead to straightforward behavioral outcomes; instead, they may provoke complex internal cognitive and emotional responses, including increased anxiety or vigilance, particularly in uncertain AI environments [30,31]. According to TTAT, moderate anxiety can serve a defensive and adaptive function by prompting users to be more attentive, engage in verification, and develop calibrated trust through active engagement [93]. Likewise, ethical anxiety can trigger coping appraisal processes, promoting responsible and compliant behaviors, as demonstrated in the recent literature on generative AI and education [38]. These mechanisms demonstrate that user anxiety and ethical concerns, when properly channeled, do not undermine but rather support sustainable use intentions—aligning with recent expansions of TAM and S–O–R frameworks [37,95].
Taken together, these counterintuitive relationships do not simply invalidate the Technology Acceptance Model (TAM) or the Technology Threat Avoidance Theory (TTAT); rather, they delineate a generative AI user profile in which productive use is accompanied by anxiety. Sustainable use intention (SUI) does not emerge in an idealized vacuum free of anxiety. Instead, it is sustained amid dynamic tensions: high usability paired with limited controllability, trust coupled with vigilance, and use constrained by compliance. This pattern refines the applicability boundaries of TAM in the era of artificial intelligence and offers a complementary theoretical lens for understanding the mixed feelings that often accompany human–AI collaboration in educational practice.
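The layered mechanism described above is, statistically, a mediation chain: a stimulus affects the response only through intermediate organism states. As a minimal numerical sketch, the indirect effect along such a chain is the product of the standardized path coefficients; the coefficients below are hypothetical placeholders chosen for illustration, not the study's SEM estimates.

```python
# Hypothetical standardized path coefficients for one S-O-R chain:
# PEOU -> technology anxiety (TA) -> algorithmic trust (AT) -> SUI.
# Values are invented for illustration only.
beta_peou_ta = 0.30   # stimulus -> organism (positive, per the ease-of-use paradox)
beta_ta_at = 0.25     # organism -> evaluation (anxiety feeding calibrated trust)
beta_at_sui = 0.40    # evaluation -> response

# In SEM, the indirect effect of PEOU on SUI through this chain
# is the product of the constituent path coefficients.
indirect_effect = beta_peou_ta * beta_ta_at * beta_at_sui
print(round(indirect_effect, 3))  # prints 0.03
```

In practice such indirect effects are tested with bootstrapped confidence intervals rather than point products alone, but the product rule is what makes a "primarily indirect" influence quantifiable.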

Alternative Explanations

While our explanations are primarily grounded in the S–O–R framework, TAM, and TTAT, it is important to acknowledge other possible mechanisms. For instance, gradual user adaptation may reduce technology anxiety and recalibrate trust as students and instructors gain more experience with T2I tools [33,38]. Contextual factors, such as institutional policies, academic discipline norms, or recent changes in educational governance, may also affect how ethical anxiety is processed and how sustainable use intentions develop [39,95]. Peer influence and social learning could further shape compliance behavior and ethical risk assessment. Recognizing these alternative explanations highlights the need for future research using longitudinal, multi-method, and cross-contextual designs to fully capture the complexity of sustainable AI tool adoption in education.
The theoretical implications of our findings are clear and distinct. Our SEM results indicate that perceived ease of use and information quality—factors typically assumed to reduce user anxiety—actually increase technology anxiety in T2I contexts. This reveals an ease-of-use paradox and extends the Technology Acceptance Model (TAM) by demonstrating that usability and output quality can heighten, rather than alleviate, feelings of uncertainty and lack of control in AI-mediated creative tasks.
Moreover, the observed positive pathway from anxiety to algorithmic trust and sustainable use intention challenges the inhibitory perspective of Technology Threat Avoidance Theory (TTAT). In our sample, moderate anxiety prompts verification, risk appraisal, and norm-compliant behaviors—what we describe as vigilant engagement rather than disengagement. This finding supports an updated view in which anxiety acts as a motivator for responsible and active engagement, rather than merely as a barrier.
Finally, the fsQCA results demonstrate that multiple causal configurations—such as combinations of high trust with moderate anxiety or high usability with strong personal innovativeness—can lead to a high intention for sustainable use. This finding reveals the limitations of linear, single-path models and underscores the need for more nuanced, context-sensitive frameworks to better understand diverse user adaptation in generative AI environments.
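For readers less familiar with the configurational logic, the two standard fsQCA metrics behind such claims can be sketched directly: consistency measures how reliably a configuration is sufficient for the outcome, and coverage measures how much of the outcome the configuration accounts for. The membership scores below are hypothetical illustrations, not the study's calibrated data.

```python
# Sketch of fsQCA's standard sufficiency metrics (Ragin's formulas).
# All fuzzy-set membership scores below are hypothetical examples.

def consistency(x, y):
    """Consistency of 'X is sufficient for Y': sum(min(x_i, y_i)) / sum(x_i)."""
    return sum(min(a, b) for a, b in zip(x, y)) / sum(x)

def coverage(x, y):
    """Coverage of Y by X: sum(min(x_i, y_i)) / sum(y_i)."""
    return sum(min(a, b) for a, b in zip(x, y)) / sum(y)

# Hypothetical case memberships for a "high trust AND moderate anxiety"
# configuration; the fuzzy AND across conditions is the case-wise minimum.
trust = [0.9, 0.8, 0.7, 0.3, 0.6]
anxiety = [0.6, 0.7, 0.5, 0.2, 0.6]
sui = [0.9, 0.8, 0.4, 0.3, 0.7]  # outcome: sustainable use intention

config = [min(t, a) for t, a in zip(trust, anxiety)]
print(round(consistency(config, sui), 3))  # 0.962
print(round(coverage(config, sui), 3))     # 0.806
```

A configuration is typically retained as a pathway when its consistency clears a threshold (often around 0.80); multiple retained configurations with partial coverage are exactly what "no single pathway to high SUI" means.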
Taken together, these results extend classic technology acceptance and threat-avoidance models by incorporating nonlinear, context-dependent pathways and demonstrating how psychological mechanisms interact to support responsible and sustainable AI use in education.

4.3. Practical Implications

Curriculum design should directly address the observed “ease-of-use paradox”: our SEM results indicate that both higher perceived ease of use (PEOU) and information quality (IQ) are associated with increased technology anxiety (TA), rather than its reduction. Effective pedagogical interventions include integrating guided reflection, explicit troubleshooting modules, and iterative critique of AI-generated work. Instead of attempting to suppress anxiety, educators should help students recognize and transform moderate anxiety into attentive and responsible engagement. (Supported by H2 and H5 SEM findings)
Educational technology platforms and digital tool providers should develop systems that combine high usability with features promoting transparency and risk awareness. The fsQCA configurations indicate that sustainable use intention (SUI) is highest when strong algorithmic trust (AT) is balanced with the ability to perceive and manage risk (PR). Design elements such as process tracing, uncertainty visualization, edit logs, and transparent AI disclosures empower users to make informed, norm-compliant decisions, thereby mitigating hidden risks while fostering functional trust. (Supported by high-SUI fsQCA pathways)
Support mechanisms should specifically target individuals with lower personal innovativeness (PI), as the absence of PI consistently characterizes low-SUI configurations in our data. Institutional policies can emphasize onboarding tutorials, structured peer mentoring, scaffolded tasks, and formative feedback to gradually build confidence among low-PI learners. Targeted ethics training should go beyond abstract principles to emphasize practical guidelines for AI use, disclosure, and revision. (Supported by non-high-SUI fsQCA configurations)
The overall implementation environment should strive to balance fostering trust, increasing risk awareness, and maintaining moderate anxiety as a positive motivator. A supportive, transparent, and norm-based ecosystem will encourage rational, reflective, and sustainable adoption of T2I tools across diverse learner profiles. (Integrated SEM/fsQCA results)

4.4. Limitations and Future Directions

4.4.1. Limitations

This study is limited by its focus on Chinese university students and instructors, which may restrict the generalizability of the findings to other cultural and educational contexts. Institutional norms, technology governance, and academic integrity policies in China may influence user perceptions and behaviors differently; therefore, mechanisms such as anxiety-driven trust and ethics-related compliance should be considered context-sensitive. The analysis treats students and instructors as a single group, potentially obscuring differences in psychological responses and adaptation pathways across user roles or academic disciplines. Conducting multi-group analyses could clarify whether mechanisms like vigilant trust or compliant continuance vary by role or field. Additionally, the focus on high-creativity T2I tasks means the findings may not generalize to other generative AI applications (e.g., text or code generation) that have distinct cognitive demands and error tolerance profiles. As this study is cross-sectional, it does not capture the evolution of psychological mechanisms over time; thus, longitudinal and cross-cultural studies are necessary to assess the persistence and transferability of patterns such as the ease-of-use paradox or anxiety-driven trust. Finally, although measurement tools adapted from TAM and TTAT showed reliability in this context, further refinement will be required as generative AI becomes more embedded and user experiences diversify.

4.4.2. Future Directions

Future research should encompass multiple generative AI tasks, utilize multi-group and longitudinal study designs, and develop more nuanced measurement instruments. Such efforts will clarify how core psychological mechanisms function across different roles, tasks, and cultural contexts, thereby strengthening the evidence base for responsible and sustainable AI use in education.

5. Conclusions

As T2I generative AI tools become increasingly integrated into higher education, this study synthesizes the TAM, TTAT, and the DeLone and McLean Information Systems Success Model within an S–O–R framework. By combining SEM and fsQCA, we clarify how perception, emotion, evaluation, and intention interact to shape the SUI of AI image-generation tools among university students and instructors.
The results reveal that sustained use of generative AI is influenced not only by perceptions but also by affective and evaluative processes. Perceived ease of use, information quality, and ethical awareness collectively increase both technological and ethical anxiety. Additionally, information quality and ethical awareness enhance algorithmic trust and influence perceived risk. Notably, anxiety does not merely inhibit use; instead, it can foster vigilant trust through repeated verification and active engagement. Ethical anxiety is positively associated with responsible and continued use, indicating that moderate concern about ethical issues motivates compliance rather than avoidance.
The study also demonstrates that user-friendly tools may increase users’ anxiety, but this anxiety does not diminish trust. Instead, it fosters a form of calibrated trust, developed through continuous human–AI interaction and critical evaluation. In generative AI contexts, anxiety serves both as a source of tension and as a catalyst for responsible practice [96].
The configurational analysis reveals that there is no single pathway to high SUI. High intention arises from various combinations of perceptions, trust, manageable risk, moderate anxiety, and personal innovativeness. Conversely, low SUI reflects a lack of activation across these dimensions, emphasizing that sustainable use is facilitated not by the absence of anxiety but by the presence of perceived value, trust, and willingness to engage.
This study refines classic models of technology adoption by demonstrating that anxiety and trust can coexist, and that responsible, sustainable use arises from a dynamic balance of vigilance and engagement. The methodological integration of SEM and fsQCA provides a comprehensive framework for examining diverse user pathways. Practically, the findings suggest that promoting responsible adoption of T2I technology requires improving information quality and transparency, strengthening AI and ethics literacy, and encouraging reflective, vigilant engagement—rather than attempting to eliminate all anxiety and risk.

Author Contributions

Conceptualization, B.X. and Y.L.; methodology, B.X. and J.Z.; software, B.X.; validation, B.X.; formal analysis, B.X.; investigation, B.X., Y.H. and X.Z.; resources, J.Z. and Y.L.; data curation, Y.H. and X.Z.; writing—original draft preparation, B.X.; writing—review and editing, J.Z. and Y.L.; visualization, B.X.; supervision, Y.L.; project administration, Y.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

The study was conducted in accordance with the Declaration of Helsinki and was approved by the Ethics Committee of the School of Information Management, Wuhan University, China, on 3 October 2025. The committee did not issue a protocol number; a signed and stamped approval form is on file with the authors.

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

The survey data underlying this article contain information that could compromise the privacy of research participants and are therefore not publicly available. De-identified datasets and analysis code can be obtained from the first author (B.X.) or the corresponding author (Y.L.) upon reasonable request and subject to approval by the Ethics Committee of the School of Information Management, Wuhan University.

Acknowledgments

During the preparation of this manuscript, the authors used ChatGPT (OpenAI, San Francisco, CA, USA) to assist with language polishing and improving the clarity of expression. The authors have carefully reviewed and edited all AI-assisted text and take full responsibility for the content of this publication.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
AI      Artificial Intelligence
T2I     Text-to-Image
S–O–R   Stimulus–Organism–Response
TAM     Technology Acceptance Model
TTAT    Technology Threat Avoidance Theory
UTAUT   Unified Theory of Acceptance and Use of Technology
ICT     Information and Communication Technology
VR/AR   Virtual/Augmented Reality
PU      Perceived Usefulness
PEOU    Perceived Ease of Use
IQ      Information Quality
EA      Ethical Awareness
EA2     Ethics-Related Anxiety
TA      Technology Anxiety
AT      Algorithmic Trust
PR      Perceived Risk
SUI     Sustainable Use Intention
PI      Personal Innovativeness

References

  1. Scherer, R.; Siddiq, F.; Tondeur, J. The technology acceptance model (TAM): A meta-analytic structural equation modeling approach to explaining teachers’ adoption of digital technology in education. Comput. Educ. 2019, 128, 13–35.
  2. Alturki, U.; Aldraiweesh, A. Adoption of Google Meet by postgraduate students: The role of task technology fit and the TAM model. Sustainability 2022, 14, 15765.
  3. Iranmanesh, A.; Lotfabadi, P. Critical questions on the emergence of text-to-image artificial intelligence in architectural design pedagogy. AI Soc. 2025, 40, 3557–3571.
  4. Vartiainen, H.; Tedre, M.; Jormanainen, I. Co-creating digital art with generative AI in K-9 education: Socio-material insights. Int. J. Educ. Through Art 2023, 19, 405–423.
  5. Liao, C.-W.; Chen, H.-W.; Chen, B.-S.; Wang, I.-C.; Ho, W.-S.; Huang, W.-L. Exploring the Application of Text-to-Image Generation Technology in Art Education at Vocational Senior High Schools in Taiwan. Information 2025, 16, 341.
  6. Ansone, A.; Zālīte-Supe, Z.; Daniela, L. Generative Artificial Intelligence as a Catalyst for Change in Higher Education Art Study Programs. Computers 2025, 14, 154.
  7. Abuadas, M.; Albikawi, Z. AI ethical awareness and academic integrity in higher education: Development and validation of a new scale. Ethics Behav. 2025, 36, 1–18.
  8. Browning-Samoni, L. AI and Fashion: Student Perspectives on the Application and Ethical Use of Various Forms of Generative Artificial Intelligence (GAI) in a Fashion Context. Ph.D. Thesis, Iowa State University, Ames, IA, USA, 2024.
  9. Alarcón, C.M.M.; Lasekan, O.A.; Teresa, M.; Pena, G. AI concerns among educators: A thematic analysis. Int. Res. J. Med. Sci. 2025, 6, 1031–1045.
  10. Liu, N. Exploring the factors influencing the adoption of artificial intelligence technology by university teachers: The mediating role of confidence and AI readiness. BMC Psychol. 2025, 13, 311.
  11. Al-kfairy, M.; Alalawi, M.; Alrabaee, S.; Alfandi, O. Privacy, identity, and fairness: Unpacking ethical influences on metaverse adoption in university learning. Comput. Educ. Open 2025, 9, 100292.
  12. Salih, L.; Tarhini, A.; Acikgoz, F. AI-Enabled service continuance: Roles of trust and privacy risk. J. Comput. Inf. Syst. 2025, 1–16.
  13. Leirmo, J.L.; Mugurusi, G. Coping after a cyberattack and its effects on employee learning: An extension of the technology threat avoidance theory. Inf. Comput. Secur. 2025, 1–17.
  14. Liu, C.; Yang, L.; Dong, X.; Li, X. Factors Influencing Generative AI Usage Intention in China: Extending the Acceptance–Avoidance Framework with Perceived AI Literacy. Systems 2025, 13, 639.
  15. DeLone, W.H.; McLean, E.R. The DeLone and McLean model of information systems success: A ten-year update. J. Manag. Inf. Syst. 2003, 19, 9–30.
  16. Çelik, K.; Ayaz, A. Validation of the Delone and McLean information systems success model: A study on student information system. Educ. Inf. Technol. 2022, 27, 4709–4727.
  17. Ye, X.; Huang, T.; Song, Y.; Li, X.; Newman, G.; Wu, D.J.; Zeng, Y. Generating conceptual landscape design via text-to-image generative AI model. Environ. Plan. B Urban Anal. City Sci. 2025, 23998083251316064.
  18. Hwang, Y.; Wu, Y. Graphic Design Education in the Era of Text-to-Image Generation: Transitioning to Contents Creator. Int. J. Art Des. Educ. 2025, 44, 239–253.
  19. Condorelli, F.; Berti, F. Creativity and Awareness in Co-Creation of Art Using Artificial Intelligence-Based Systems in Heritage Education. Heritage 2025, 8, 157.
  20. Liu, Y.; Wang, Q.; Lei, J. Adopting generative AI in future classrooms: A study of preservice teachers’ intentions and influencing factors. Behav. Sci. 2025, 15, 1040.
  21. Al-Rahmi, W.M.; Alzahrani, A.I.; Yahaya, N.; Alalwan, N.; Kamin, Y.B. Digital communication: Information and communication technology (ICT) usage for education sustainability. Sustainability 2020, 12, 5052.
  22. Fussell, S.G.; Truong, D. Accepting virtual reality for dynamic learning: An extension of the technology acceptance model. Interact. Learn. Environ. 2023, 31, 5442–5459.
  23. de la Mora Velasco, E.; Miller, R.; Williams, F.; deNoyelles, A. Applying the TAM Framework to Inform Faculty Participation in Course Quality Reviews. Online Learn. 2025, 29, 130–157.
  24. Liang, H.; Xue, Y. Avoidance of information technology threats: A theoretical perspective. MIS Q. 2009, 33, 71–90.
  25. Hidayah, N.A.; Hasanati, N.u.; Putri, R.N.; Musa, K.F.; Nihayah, Z.; Muin, A. Analysis using the technology acceptance model (TAM) and DeLone & McLean information system (D&M IS) success model of AIS mobile user acceptance. In Proceedings of the 2020 8th International Conference on Cyber and IT Service Management (CITSM), Pangkal Pinang, Indonesia, 23–24 October 2020; pp. 1–4.
  26. Ariyanto, R.; Rohadi, E.; Lestari, V. The effect of information quality, system quality, service quality on intention to use and user satisfaction, and their effect on net benefits primary care application at primary health facilities in Malang. In IOP Conference Series: Materials Science and Engineering; IOP Publishing: Bristol, UK, 2020; p. 012084.
  27. Lukyanenko, R.; Parsons, J.; Wiersma, Y.F. The IQ of the crowd: Understanding and improving information quality in structured user-generated content. Inf. Syst. Res. 2014, 25, 669–689.
  28. Alzahrani, A.I.; Mahmud, I.; Ramayah, T.; Alfarraj, O.; Alalwan, N. Modelling digital library success using the DeLone and McLean information system success model. J. Librariansh. Inf. Sci. 2019, 51, 291–306.
  29. Jin, Y.; Tintarev, N.; Verbert, K. Effects of personal characteristics on music recommender systems with different levels of controllability. In Proceedings of the 12th ACM Conference on Recommender Systems, Vancouver, BC, Canada, 2–7 October 2018; pp. 13–21.
  30. Yaqub, M.Z.; Badghish, S.; Yaqub, R.M.S.; Ali, I.; Ali, N.S. Integrating and extending the SOR model, TAM and the UTAUT to assess M-commerce adoption during COVID times. J. Econ. Adm. Sci. 2024.
  31. Wan, Y.K.P.; Wu, D.; Zhang, Z.; Cheng, W.L.A. Understanding of non-immersive virtual reality technology in the context of museums via the lens of stimulus–organism–response (S–O–R) and aesthetics frameworks. Curr. Issues Tour. 2024, 1–19.
  32. Laumer, S.; Maier, C.; Weitzel, T. Information quality, user satisfaction, and the manifestation of workarounds: A qualitative and quantitative study of enterprise content management system users. Eur. J. Inf. Syst. 2017, 26, 333–360.
  33. Wut, T.-m.; Shun-mun Wong, H.; Ka-man Sum, C.; Ah-heung Chan, E. Does institution support matter? Blended learning approach in the higher education sector. Educ. Inf. Technol. 2024, 29, 15133–15145.
  34. Mim, K.B.; Jai, T.; Lee, S.H. The influence of sustainable positioning on eWOM and brand loyalty: Analysis of credible sources and transparency practices based on the SOR model. Sustainability 2022, 14, 12461.
  35. Soleimani, S.; Farrokhnia, M.; van Dijk, A.; Noroozi, O. Educators’ perceptions of generative AI: Investigating attitudes, barriers and learning needs in higher education. Innov. Educ. Teach. Int. 2025, 62, 1598–1613.
  36. García de Torres, E.; Ramos, G.; Yezers’ka, L.; Gonzales, M.; Higuera, L.; Herrera, C. The use and ethical implications of artificial intelligence, collaboration, and participation in local Ibero-American newsrooms. Front. Commun. 2025, 10, 1539844.
  37. Laumer, S.; Maier, C.; Eckhardt, A.; Weitzel, T. User personality and resistance to mandatory information systems in organizations: A theoretical model and empirical test of dispositional resistance to change. J. Inf. Technol. 2016, 31, 67–82.
  38. Swidan, A.; Lee, S.Y.; Romdhane, S.B. College Students’ Use and Perceptions of AI Tools in the UAE: Motivations, Ethical Concerns and Institutional Guidelines. Educ. Sci. 2025, 15, 461.
  39. Youn, S.-y.; Hwang, J.; Zhao, L.; Kim, J.-B. Privacy paradox in 3D body scanning technology: The effect of 3D virtual try-on experience in the relationship between privacy concerns and mobile app adoption intention. Humanit. Soc. Sci. Commun. 2023, 10, 1–13.
  40. Li, M.; Wan, Y.; Zhou, L.; Rao, H. How hedonic and utilitarian generative AI shape ethical decision-making: Neural insights. Behav. Inf. Technol. 2025, 1–16.
  41. Gibbons, C. Untangling the role of optimism, pessimism and coping influences on student mood, motivation and satisfaction. Innov. Educ. Teach. Int. 2024, 61, 1368–1383.
  42. Argabright, G.C. An Investigation of the Relationship Between Technology Acceptance and Technological Stress on Consumer Behavior. Ph.D. Thesis, University of Sarasota, Sarasota, FL, USA, 2002.
  43. Bambe, D.I. Technology Use, Technology Acceptance, and Degree of Employee Burnout. Ph.D. Thesis, Trident University International, Chandler, AZ, USA, 2019.
  44. Aljouaid, M. Determinants of Saudi Nursing Faculty’s Continuance Intention to Use Learning Management Systems Using the Technology Acceptance Model. Ph.D. Thesis, Barry University, Miami Shores, FL, USA, 2024.
  45. Abikari, M.; Öhman, P.; Yazdanfar, D. Negative emotions and consumer behavioural intention to adopt emerging e-banking technology. J. Financ. Serv. Mark. 2023, 28, 691–704.
  46. de Guinea, A.O.; Titah, R.; Léger, P.-M. Explicit and implicit antecedents of users’ behavioral beliefs in information systems: A neuropsychological investigation. J. Manag. Inf. Syst. 2014, 30, 179–210.
  47. Sánchez-Prieto, J.C.; Olmos-Migueláñez, S.; García-Peñalvo, F.J. MLearning and pre-service teachers: An assessment of the behavioral intention using an expanded TAM model. Comput. Hum. Behav. 2017, 72, 644–654.
  48. Li, J.; Jin, M.; Chen, X. Understanding continued use of smart learning platforms: Psychological wellbeing in an extended TAM-ISCM model. Front. Psychol. 2025, 16, 1521174.
  49. Ma, J.; Wang, P.; Li, B.; Wang, T.; Pang, X.S.; Wang, D. Exploring user adoption of ChatGPT: A technology acceptance model perspective. Int. J. Hum. Comput. Interact. 2025, 41, 1431–1445.
  50. Vartiainen, H.; Tedre, M. Using artificial intelligence in craft education: Crafting with text-to-image generative models. Digit. Creat. 2023, 34, 1–21.
  51. Božič, K.; Dimovski, V. The relationship between business intelligence and analytics use and organizational absorptive capacity: Applying the DeLone & McLean information systems success model. Econ. Bus. Rev. 2020, 22, 2.
  52. Khan, M.F.A. Evaluating Project Management Information System Applications and Success Factors in Construction Industries of Emerging Economies: A DeLone and McLean Success Model Approach: A Survey Based Study. Master’s Thesis, University of Vaasa, Vaasa, Finland, 2025.
  53. Hwang, Y.-S.; Choi, Y.K. Higher education service quality and student satisfaction, institutional image, and behavioral intention. Soc. Behav. Personal. Int. J. 2019, 47, 1–12.
  54. Rahimizhian, S.; Avci, T.; Eluwole, K.K. A conceptual model development of the impact of higher education service quality in guaranteeing edu-tourists’ satisfaction and behavioral intentions. J. Public Aff. 2020, 20, e2085.
  55. Balaskas, S.; Tsiantos, V.; Chatzifotiou, S.; Rigou, M. Determinants of ChatGPT Adoption Intention in Higher Education: Expanding on TAM with the Mediating Roles of Trust and Risk. Information 2025, 16, 82.
  56. Fu, C.-J.; Silalahi, A.D.K.; Shih, I.-T.; Phuong, D.T.T.; Eunike, I.J.; Jargalsaikhan, S. Assessing ChatGPT’s information quality through the lens of user information satisfaction and information quality theory in higher education: A theoretical framework. Hum. Behav. Emerg. Technol. 2024, 2024, 8114315.
  57. Qamar, M.T.; Yasmeen, J.; Malik, A.; VT, S.S. Chatting Heavily with ChatGPT: Investigating Usefulness, Privacy, Integrity, Ease, and Intention as Drivers of Technology Acceptance Among Business Communication Students. Bus. Prof. Commun. Q. 2025, 23294906251319016.
  58. Tedre, M.; Kahila, J.; Vartiainen, H. Exploration on how co-designing with AI facilitates critical evaluation of ethics of AI in craft education. In Society for Information Technology & Teacher Education International Conference; Association for the Advancement of Computing in Education (AACE): Waynesville, NC, USA, 2023; pp. 2289–2296.
  59. Nguyen, A.; Ngo, H.N.; Hong, Y.; Dang, B.; Nguyen, B.-P.T. Ethical principles for artificial intelligence in education. Educ. Inf. Technol. 2023, 28, 4221–4241.
  60. Nnorom, I.C. Ethical Considerations in Artificial Intelligence and Academic Integrity: Balancing Technology and Human Values. AI Ethics Acad. Integr. Future Qual. Assur. High. Educ. 2025, 15, 87–95.
  61. Kong, S.-C.; Cheung, W.M.-Y.; Tsang, O. Evaluating an artificial intelligence literacy programme for empowering and developing concepts, literacy and ethical awareness in senior secondary students. Educ. Inf. Technol. 2023, 28, 4703–4724.
  62. Đerić, E.; Frank, D.; Vuković, D. Exploring the ethical implications of using generative AI tools in higher education. Informatics 2025, 12, 36.
  63. Zhang, X.; Hu, X.; Sun, Y.; Li, L.; Deng, S.; Chen, X. Integrating AI Literacy with the TPB-TAM Framework to Explore Chinese University Students’ Adoption of Generative AI. Behav. Sci. 2025, 15, 1398.
  64. Zhu, W.; Huang, L.; Zhou, X.; Li, X.; Shi, G.; Ying, J.; Wang, C. Could AI ethical anxiety, perceived ethical risks and ethical awareness about AI influence university students’ use of generative AI products? An ethical perspective. Int. J. Hum. Comput. Interact. 2025, 41, 742–764.
  65. Carpenter, D.; Young, D.K.; Barrett, P.; McLeod, A.J. Refining technology threat avoidance theory. Commun. Assoc. Inf. Syst. 2019, 44, 22.
  66. Granström, M.; Oppi, P. Assessing Teachers’ Readiness and Perceived Usefulness of AI in Education: An Estonian Perspective. Front. Educ. 2025, 10, 1622240.
  67. Syed, A.Z.; Memon, Z.H.; Khan, K.; Hameed, I.; Nadeem, M. Examining the behavioral determinants of AI adoption in higher education: A focus on perceptional factors and demographic differences. Horiz. Int. J. Learn. Futures 2025, 33, 245–264.
  68. Rahman, M.A.; Alqahtani, L.; Albooq, A.; Ainousah, A. A survey on security and privacy of large multimodal deep learning models: Teaching and learning perspective. In Proceedings of the 2024 21st Learning and Technology Conference (L&T), Jeddah, Saudi Arabia, 15–16 January 2024; pp. 13–18.
  69. Chiu, M.-L. Exploring user awareness and perceived usefulness of generative AI in higher education: The moderating role of trust. Educ. Inf. Technol. 2025, 30, 1–35.
  70. Yu, T.; Tian, Y.; Chen, Y.; Huang, Y.; Pan, Y.; Jang, W. How Do Ethical Factors Affect User Trust and Adoption Intentions of AI-Generated Content Tools? Evidence from a Risk-Trust Perspective. Systems 2025, 13, 461.
  71. Li, Z. Generative AI in Higher Education Academic Assignments: Policy Implications from a Systematic Review of Student and Teacher Perceptions. Ph.D. Thesis, Massachusetts Institute of Technology, Cambridge, MA, USA, 2024.
  72. Mohammed, R.R. Generative AI in the Academy: Analysis of Stakeholders’ Experiences in US Higher Education Organizations. Ph.D. Thesis, Arizona State University, Tempe, AZ, USA, 2025.
  73. Al-Adwan, A.S.; Li, N.; Al-Adwan, A.; Abbasi, G.A.; Albelbisi, N.A.; Habibi, A. Extending the technology acceptance model (TAM) to predict university students’ intentions to use metaverse-based learning platforms. Educ. Inf. Technol. 2023, 28, 15381–15413.
  74. Abubakre, M.; Zhou, Y.; Zhou, Z. The impact of information technology culture and personal innovativeness in information technology on digital entrepreneurship success. Inf. Technol. People 2022, 35, 204–231.
  75. Wang, W.; Butler, J.E.; Hsieh, J.P.-A.; Hsu, S.-H. Innovate with complex information technologies: A theoretical model and empirical examination. J. Comput. Inf. Syst. 2008, 49, 27–36.
  76. Tang, S.M. Barriers to artificial intelligence adoption in teaching: The moderating role of personal innovativeness. Int. J. Educ. Manag. 2025, 1–12.
  77. Jeyaraj, A. DeLone & McLean models of information system success: Critical meta-review and research directions. Int. J. Inf. Manag. 2020, 54, 102139.
  78. Shrivastava, P. Understanding acceptance and resistance toward generative AI technologies: A multi-theoretical framework integrating functional, risk, and sociolegal factors. Front. Artif. Intell. 2025, 8, 1565927. [Google Scholar] [CrossRef] [PubMed]
  79. Gökçearslan, Ş.; Esiyok, E.; Kucukergin, K.G. Understanding the intention to use artificial intelligence chatbots in education: The role of individual innovativeness and AI trust among university students. J. Comput. Soc. Sci. 2025, 8, 71. [Google Scholar] [CrossRef]
  80. Chen, Y.; Qiu, W.; Xiao, M. Influence mechanisms of digital construction organizations’ capabilities on performance: Evidence from SEM and fsQCA. Eng. Constr. Archit. Manag. 2025; ahead of print. [Google Scholar]
  81. Alturki, U.; Aldraiweesh, A. An empirical investigation into students’ actual use of MOOCs in Saudi Arabia higher education. Sustainability 2023, 15, 6918. [Google Scholar] [CrossRef]
  82. Fornell, C.; Larcker, D.F. Evaluating structural equation models with unobservable variables and measurement error. J. Mark. Res. 1981, 18, 39–50. [Google Scholar] [CrossRef]
  83. Ab Hamid, M.R.; Sami, W.; Sidek, M.M. Discriminant validity assessment: Use of Fornell & Larcker criterion versus HTMT criterion. J. Phys. Conf. Ser. 2017, 890, 012163. [Google Scholar]
  84. Ramírez, A.; Burgos-Benavides, L.; Sinchi, H.; Quito-Calle, J.V.; Díez, F.H.; Rodríguez-Díaz, F.J. Adaptation and validation of psychological assessment questionnaires using confirmatory factor analysis: A tutorial for planning and reporting analysis. Preprints 2025, 2025021192. [Google Scholar]
  85. Cheung, S.F.; Cheung, S.-H. manymome: An R package for computing the indirect effects, conditional effects, and conditional indirect effects, standardized or unstandardized, and their bootstrap confidence intervals, in many (though not all) models. Behav. Res. Methods 2024, 56, 4862–4882. [Google Scholar] [CrossRef] [PubMed]
  86. MacKinnon, D.P.; Lockwood, C.M.; Williams, J. Confidence limits for the indirect effect: Distribution of the product and resampling methods. Multivar. Behav. Res. 2004, 39, 99–128. [Google Scholar] [CrossRef]
  87. Han, H.; Wong, A.K.F.; Kim, S.; Cheng, X.; Chi, X. Assessing asymmetric selection of responsible airlines based on CSR, normative, and emotional factors: A complexity approach with SEM, fsQCA, and NCA. Asia Pac. J. Tour. Res. 2025, 30, 320–337. [Google Scholar] [CrossRef]
  88. Lv, J.; Lv, J. The Configuration Effect of Antecedents of Effective Use in Mobile Health: A Fuzzy-Set QCA Approach. In Proceedings of the 2023 4th International Symposium on Artificial Intelligence for Medicine Science, Chengdu, China, 27–29 October 2023; pp. 1343–1346. [Google Scholar]
  89. Sun, J.; Wang, Y. Fuzzy-set qualitative comparative analysis (fsQCA) in second language acquisition: An applied example of writing engagement. Res. Methods Appl. Linguist. 2025, 4, 100193. [Google Scholar] [CrossRef]
  90. Rasoolimanesh, S.M.; Ringle, C.M.; Sarstedt, M.; Olya, H. The combined use of symmetric and asymmetric approaches: Partial least squares-structural equation modeling and fuzzy-set qualitative comparative analysis. Int. J. Contemp. Hosp. Manag. 2021, 33, 1571–1592. [Google Scholar] [CrossRef]
  91. Baumgartner, M. Qualitative comparative analysis and robust sufficiency. Qual. Quant. 2022, 56, 1939–1963. [Google Scholar] [CrossRef]
  92. Nguyen, T.T. A Comparative Assessment of Fuzzy Set-Theoretic Rule Validation: Analyzing fs/QCA Software Procedures and the Modified Consistency and Coverage Measures: Evidence from ESG-Stock Return Relations and Online Shopping Behavior Configurations. Master’s Thesis, LUT University, Lappeenranta, Finland, 2025. [Google Scholar]
  93. Norem, J.K. Defensive pessimism, anxiety, and the complexity of evaluating self-regulation. Soc. Personal. Psychol. Compass 2008, 2, 121–134. [Google Scholar] [CrossRef]
  94. Jahanmir, S.F.; Silva, G.M.; Gomes, P.J.; Gonçalves, H.M. Determinants of users’ continuance intention toward digital innovations: Are late adopters different? J. Bus. Res. 2020, 115, 225–233. [Google Scholar] [CrossRef]
  95. Zheng, H.; Qian, Y.; Wang, Z.; Wu, Y. Research on the Influence of E-Learning Quality on the Intention to Continue E-Learning: Evidence from SEM and fsQCA. Sustainability 2023, 15, 5557. [Google Scholar] [CrossRef]
  96. Caporusso, N. Generative artificial intelligence and the emergence of creative displacement anxiety. Res. Psychol. Behav. 2023, 3. [Google Scholar] [CrossRef]
Figure 1. Integrated theoretical framework embedding TAM, TTAT, and the DeLone–McLean (D&M) model, illustrating the functional roles of each construct.
Figure 2. Layered “Perception–Emotion–Evaluation–Intention” framework.
Figure 3. Structural equation model (SEM) path diagram.
Table 1. Theoretical foundations of the main constructs and adaptation notes.

| Construct | Theoretical Basis | Measurement Source | Adaptation Notes |
|---|---|---|---|
| Perceived Ease of Use | TAM | [22] | The original scale was used to assess learners' "ease-of-learning" experiences in VR learning systems. In this study, it was adapted to capture perceived ease of use when using T2I tools. |
| Information Quality | D&M | [77] | The original scale, grounded in the D&M model, measured the completeness and accuracy of information outputs. In this study, we replaced "system information" with learners' perceived quality of "generated images and related disclosures/annotations." |
| Ethical Awareness | AI ethics | [7] | The original scale assessed university students' ethical awareness regarding AI use and academic integrity. We retained its core meaning while contextualizing items specifically to the use of T2I tools in teaching and learning. |
| Technology Anxiety | TTAT | [47] | The original scale measured anxiety and tension in the use of information technologies. In this study, it was contextualized to capture uncertainty-driven anxiety in complex generative tasks—specifically, concerns that T2I tools may produce stochastic outputs, make errors, or be difficult to fully control. |
| Ethical Anxiety | TTAT | [64] | The original scale assessed ethics-related anxiety when using generative AI products. In this study, we restricted the context to educational use of T2I tools, focusing on concerns about copyright, content compliance, academic integrity, privacy, and fairness. |
| Algorithmic Trust | TAM | [12] | The original scale captured users' trust appraisals regarding the reliability and controllability of AI services. We adapted the wording to assess whether T2I tools are perceived as trustworthy and dependable in teaching and learning contexts. |
| Perceived Risk | TTAT | [14] | The original scale measured perceived risk regarding potential negative consequences of using AI systems. We tailored it to the educational T2I context by emphasizing risks such as misleading generated content, unclear copyright/attribution, and overall outcome uncertainty, which represent salient risk types for AI-generated imagery in education. |
| Sustainable Use Intention | TAM | [78] | The original scale assessed users' intentions to continue using AI services and their willingness to recommend them. We adapted it to measure intention to continue using—and to do so in a compliant, responsible manner—T2I tools in future teaching and learning activities. |
| Personal Innovativeness in IT | PIIT | [79] | The original scale measured individuals' proactive tendency to try new IT. We retained the original structure and simply contextualized it to T2I tools. |
Table 2. Reliability and validity assessment of the measurement model.

| Construct | Item | Factor Loading | S.E. (Standard Error) | Squared Multiple Correlation | CR | AVE |
|---|---|---|---|---|---|---|
| PEOU | PEOU1 | 0.789 | 0.030 | 0.623 | 0.920 | 0.616 |
| | PEOU2 | 0.775 | 0.031 | 0.601 | | |
| | PEOU3 | 0.786 | 0.030 | 0.618 | | |
| | PEOU4 | 0.790 | 0.030 | 0.625 | | |
| IQ | IQ1 | 0.774 | 0.031 | 0.601 | 0.895 | 0.551 |
| | IQ2 | 0.757 | 0.031 | 0.574 | | |
| | IQ3 | 0.718 | 0.031 | 0.516 | | |
| | IQ4 | 0.719 | 0.031 | 0.518 | | |
| EA | EA1 | 0.762 | 0.031 | 0.581 | 0.895 | 0.552 |
| | EA2 | 0.773 | 0.031 | 0.598 | | |
| | EA3 | 0.708 | 0.032 | 0.501 | | |
| | EA4 | 0.726 | 0.031 | 0.527 | | |
| AT | AT1 | 0.712 | 0.031 | 0.508 | 0.873 | 0.502 |
| | AT2 | 0.765 | 0.031 | 0.585 | | |
| | AT3 | 0.699 | 0.032 | 0.489 | | |
| | AT4 | 0.655 | 0.033 | 0.429 | | |
| PR | PR1 | 0.704 | 0.032 | 0.496 | 0.899 | 0.561 |
| | PR2 | 0.754 | 0.031 | 0.570 | | |
| | PR3 | 0.797 | 0.030 | 0.635 | | |
| | PR4 | 0.739 | 0.031 | 0.547 | | |
| TA | TA1 | 0.681 | 0.032 | 0.465 | 0.911 | 0.594 |
| | TA2 | 0.787 | 0.030 | 0.620 | | |
| | TA3 | 0.797 | 0.030 | 0.636 | | |
| | TA4 | 0.811 | 0.030 | 0.659 | | |
| EA2 | EA21 | 0.781 | 0.030 | 0.611 | 0.918 | 0.612 |
| | EA22 | 0.797 | 0.030 | 0.636 | | |
| | EA23 | 0.782 | 0.030 | 0.612 | | |
| | EA24 | 0.769 | 0.030 | 0.592 | | |
| SUI | SUI1 | 0.790 | 0.030 | 0.625 | 0.926 | 0.634 |
| | SUI2 | 0.824 | 0.029 | 0.680 | | |
| | SUI3 | 0.783 | 0.030 | 0.613 | | |
| | SUI4 | 0.788 | 0.030 | 0.622 | | |
| PI | PI1 | 0.789 | 0.030 | 0.623 | 0.921 | 0.619 |
| | PI2 | 0.787 | 0.030 | 0.620 | | |
| | PI3 | 0.800 | 0.030 | 0.640 | | |
| | PI4 | 0.771 | 0.030 | 0.595 | | |
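The convergent-validity statistics in Table 2 follow the standard Fornell–Larcker formulas [82]. A minimal sketch (not the authors' code) of those formulas, using the four PEOU loadings from Table 2 for illustration; note that CR computed from standardized loadings may differ somewhat from reported values if the paper derived CR from unstandardized estimates.

```python
# Composite reliability (CR) and average variance extracted (AVE)
# from standardized factor loadings, per Fornell & Larcker [82].

def composite_reliability(loadings):
    # CR = (sum of loadings)^2 / ((sum of loadings)^2 + sum of error variances)
    s = sum(loadings)
    error_var = sum(1 - l * l for l in loadings)
    return s * s / (s * s + error_var)

def average_variance_extracted(loadings):
    # AVE = mean of the squared standardized loadings
    return sum(l * l for l in loadings) / len(loadings)

peou_loadings = [0.789, 0.775, 0.786, 0.790]  # PEOU1-PEOU4 from Table 2
print(round(average_variance_extracted(peou_loadings), 3))  # 0.616, matching Table 2
print(round(composite_reliability(peou_loadings), 3))
```

An AVE above 0.50 and CR above 0.70 are the conventional thresholds the table's values are judged against.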
Table 3. Structural model fit indices.

| Evaluation Index | RMSEA | CFI | TLI | SRMR | χ²/df |
|---|---|---|---|---|---|
| Standard value | <0.08 | >0.9 | >0.9 | <0.08 | <3 |
| Actual value | 0.029 | 0.912 | 0.928 | 0.056 | 1.688 |
Table 4. Structural path estimates and moderation effects.

| Path | Estimate | S.E. | Est./S.E. | p-Value | Hypothesis Test Result |
|---|---|---|---|---|---|
| EA → TA | 0.337 | 0.028 | 11.875 | 0.000 | H10 supported |
| PEOU → TA | 0.111 | 0.021 | 5.414 | 0.000 | H2 significant, opposite direction |
| IQ → TA | 0.267 | 0.026 | 10.460 | 0.000 | H5 significant, opposite direction |
| EA → EA2 | 0.236 | 0.084 | 2.812 | 0.005 | H9 supported |
| PEOU → EA2 | 0.196 | 0.023 | 8.421 | 0.000 | H3 supported |
| IQ → EA2 | 0.137 | 0.030 | 4.625 | 0.000 | H6 supported |
| PR → EA2 | 0.250 | 0.118 | 2.118 | 0.034 | H15 supported |
| PEOU → AT | 0.009 | 0.019 | 0.496 | 0.620 | H1 not supported |
| IQ → AT | 0.501 | 0.028 | 17.608 | 0.000 | H4 supported |
| TA → AT | 0.112 | 0.037 | 3.049 | 0.002 | H12 significant, opposite direction |
| EA → AT | 0.433 | 0.030 | 14.447 | 0.000 | H11 supported |
| IQ → PR | −0.045 | 0.018 | −2.499 | 0.012 | H7 supported |
| EA → PR | 0.476 | 0.027 | 17.318 | 0.000 | H8 supported |
| TA → PR | 0.552 | 0.043 | 12.839 | 0.000 | H13 supported |
| AT → SUI | 0.210 | 0.034 | 6.181 | 0.000 | H16 supported |
| EA2 → SUI | 0.414 | 0.046 | 9.078 | 0.000 | H14 significant, opposite direction |
| PR → SUI | −0.174 | 0.043 | −4.071 | 0.000 | H17 supported |
| AT × PI → SUI | −5.852 | 2.067 | −2.831 | 0.005 | H18 significant, opposite direction |
| PR × PI → SUI | 4.620 | 1.828 | 2.527 | 0.011 | H19 significant, opposite direction |
Table 5. Bootstrapped mediation (indirect) effects results.

| Exogenous Variable | Mediator 1 | Mediator 2 | Mediator 3 | Outcome | Indirect Effect | 95% Bootstrap CI (LL) | 95% Bootstrap CI (UL) |
|---|---|---|---|---|---|---|---|
| PEOU | TA | AT | | SUI | 0.002 | 0.000 | 0.010 |
| IQ | TA | AT | | SUI | 0.006 | 0.000 | 0.020 |
| IQ | AT | | | SUI | 0.105 | 0.014 | 0.205 |
| IQ | PR | | | SUI | 0.008 | 0.001 | 0.051 |
| EA | AT | | | SUI | 0.092 | 0.024 | 0.178 |
| EA | TA | AT | | SUI | 0.009 | 0.001 | 0.021 |
| EA | PR | | | SUI | −0.083 | −0.226 | −0.003 |
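The intervals in Table 5 come from percentile-bootstrap resampling of the indirect effect [85,86]. The logic can be sketched as follows on simulated data; this is an illustrative simplification (the paper used the manymome R package, and the b-path regression here omits the exogenous covariate for brevity).

```python
import random
import statistics

def slope(xs, ys):
    # OLS slope of ys regressed on xs
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    num = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    den = sum((a - mx) ** 2 for a in xs)
    return num / den

def boot_indirect_ci(x, m, y, n_boot=2000, alpha=0.05, seed=7):
    # Percentile bootstrap CI for the indirect effect a*b of x -> m -> y.
    rng = random.Random(seed)
    n = len(x)
    effects = []
    for _ in range(n_boot):
        idx = [rng.randrange(n) for _ in range(n)]
        a = slope([x[i] for i in idx], [m[i] for i in idx])
        b = slope([m[i] for i in idx], [y[i] for i in idx])
        effects.append(a * b)
    effects.sort()
    return (effects[int(alpha / 2 * n_boot)],
            effects[int((1 - alpha / 2) * n_boot) - 1])

# Toy data with a genuine x -> m -> y chain (true indirect effect = 0.5 * 0.4):
gen = random.Random(0)
x = [gen.gauss(0, 1) for _ in range(200)]
m = [0.5 * v + gen.gauss(0, 1) for v in x]
y = [0.4 * v + gen.gauss(0, 1) for v in m]
lo, hi = boot_indirect_ci(x, m, y)
print(round(lo, 3), round(hi, 3))  # interval excludes zero for this chain
```

An effect is declared significant when, as for most rows of Table 5, the bootstrap interval excludes zero.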
Table 6. Calibration thresholds for each variable.

| Variable | Full Membership | Crossover Point | Full Non-Membership |
|---|---|---|---|
| PEOU | 5.750 | 5.000 | 4.000 |
| IQ | 5.750 | 5.000 | 3.750 |
| EA | 5.750 | 5.000 | 4.000 |
| AT | 5.750 | 5.000 | 4.000 |
| PR | 5.750 | 5.000 | 4.000 |
| TA | 5.750 | 5.000 | 4.000 |
| EA2 | 5.750 | 5.250 | 4.000 |
| PI | 6.000 | 5.250 | 4.000 |
| SUI | 5.750 | 5.250 | 4.000 |
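Three-anchor tables like Table 6 are conventionally fed into Ragin's direct calibration method, which maps raw scores to fuzzy memberships via a logistic transform of log-odds anchored at +3 (full membership), 0 (crossover), and −3 (full non-membership). A minimal sketch of that standard method, assuming it is the one applied here, using the PEOU anchors for illustration:

```python
import math

def calibrate(value, full, crossover, non_member):
    # Direct calibration: scale the deviation from the crossover into
    # log-odds (+3 at the full-membership anchor, -3 at the
    # non-membership anchor), then apply the logistic transform.
    if value >= crossover:
        log_odds = 3.0 * (value - crossover) / (full - crossover)
    else:
        log_odds = 3.0 * (value - crossover) / (crossover - non_member)
    return 1.0 / (1.0 + math.exp(-log_odds))

# PEOU anchors from Table 6: 5.750 / 5.000 / 4.000
for raw in (5.750, 5.000, 4.000):
    print(round(calibrate(raw, 5.750, 5.000, 4.000), 3))
# 0.953 at the full-membership anchor, 0.500 at the crossover,
# 0.047 at the full non-membership anchor
```

Scores between the anchors yield graded memberships between 0.047 and 0.953, which is what the consistency and coverage measures in the following tables operate on.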
Table 7. Single-condition necessity analysis for high SUI and non-high SUI.

| Condition | Consistency (High SUI) | Coverage (High SUI) | Consistency (Non-High SUI) | Coverage (Non-High SUI) |
|---|---|---|---|---|
| PEOU | 0.776 | 0.763 | 0.367 | 0.340 |
| ~PEOU | 0.329 | 0.356 | 0.744 | 0.758 |
| IQ | 0.751 | 0.764 | 0.363 | 0.348 |
| ~IQ | 0.360 | 0.375 | 0.754 | 0.740 |
| EA | 0.729 | 0.752 | 0.368 | 0.357 |
| ~EA | 0.377 | 0.387 | 0.745 | 0.722 |
| AT | 0.758 | 0.786 | 0.342 | 0.334 |
| ~AT | 0.357 | 0.365 | 0.781 | 0.752 |
| PR | 0.785 | 0.782 | 0.358 | 0.336 |
| ~PR | 0.334 | 0.356 | 0.768 | 0.771 |
| TA | 0.825 | 0.832 | 0.312 | 0.296 |
| ~TA | 0.301 | 0.317 | 0.823 | 0.816 |
| EA2 | 0.813 | 0.855 | 0.287 | 0.284 |
| ~EA2 | 0.319 | 0.322 | 0.854 | 0.811 |
| PI | 0.845 | 0.868 | 0.289 | 0.280 |
| ~PI | 0.299 | 0.308 | 0.864 | 0.840 |
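The necessity measures behind Table 7 are standard fuzzy-set statistics: for condition X and outcome Y, necessity consistency is Σ min(x, y) / Σ y and necessity coverage is Σ min(x, y) / Σ x. A short sketch with hypothetical memberships for four illustrative cases:

```python
def necessity(condition, outcome):
    # Necessity: how far the outcome is a subset of the condition.
    # consistency = sum(min(x, y)) / sum(y); coverage = sum(min(x, y)) / sum(x)
    overlap = sum(min(x, y) for x, y in zip(condition, outcome))
    return overlap / sum(outcome), overlap / sum(condition)

cond = [0.9, 0.7, 0.4, 0.2]  # hypothetical condition memberships
out = [0.8, 0.6, 0.5, 0.1]   # hypothetical outcome memberships
consistency, coverage = necessity(cond, out)
print(round(consistency, 3), round(coverage, 3))  # 0.95 0.864
```

Since no single condition in Table 7 exceeds the conventional 0.90 necessity threshold, none is necessary on its own, which motivates the configurational (sufficiency) analysis that follows; sufficiency uses the same overlap but swaps the denominators.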
Table 8. Configurational solutions leading to high sustainable use intention (SUI).

| Construct | H1 | H2 | H3 | H4 | H5 | H6 | H7 |
|---|---|---|---|---|---|---|---|
| PEOU | ⬤ | ⬤ | ⬤ | ⬤ | | ⬤ | |
| IQ | | | | | | | |
| EA | | | | | | | |
| AT | ⬤ | | ⬤ | ⬤ | ⬤ | ⬤ | |
| PR | | | | | | | |
| TA | | | | | | | |
| EA2 | ⬤ | ⬤ | ⬤ | ⬤ | ⬤ | ⬤ | ⬤ |
| PI | ⬤ | ⬤ | ⬤ | ⬤ | ⬤ | ⬤ | ⬤ |
| Raw coverage | 0.459 | 0.463 | 0.462 | 0.462 | 0.475 | 0.087 | 0.084 |
| Unique coverage | 0.011 | 0.014 | 0.011 | 0.013 | 0.025 | 0.022 | 0.015 |
| Consistency | 0.956 | 0.957 | 0.959 | 0.960 | 0.959 | 0.944 | 0.954 |
| Solution coverage | 0.574 | | | | | | |
| Solution consistency | 0.948 | | | | | | |

Note: ⬤ core presence; ● peripheral presence; ⊗ peripheral absence; blank = "don't care" (may be present or absent).
Table 9. Configurational solution for non-high sustainable use intention (SUI).

| Construct | NH1 |
|---|---|
| PEOU | ⨂ |
| IQ | |
| EA | |
| AT | ⨂ |
| PR | |
| TA | |
| EA2 | |
| PI | ⨂ |
| Raw coverage | 0.529 |
| Unique coverage | 0.529 |
| Consistency | 0.972 |
| Solution coverage | 0.529 |

Note: ⨂ indicates the absence of a core condition; ⊗ indicates the absence of a peripheral condition.
Table 10. Summary of empirical findings by research question.

| RQ | Main Findings | SEM Results | fsQCA Results |
|---|---|---|---|
| RQ1 | Perception-related factors (PEOU, IQ, EA) influence both emotion and evaluation components. | PEOU, IQ, and EA increase TA and EA2; PEOU's effect on AT is not significant. | Multiple configurations lead to high SUI, involving favorable perceptions and manageable anxiety. |
| RQ2 | TA mediates the perception → evaluation → intention chain; EA2 operates closer to the terminal stage, influencing SUI. | TA increases PR and AT; EA2 directly influences SUI. | EA2 contributes to high SUI with elevated AT and manageable PR. |
| RQ3 | High PI weakens the impact of AT on SUI and reduces the negative effect of PR on SUI. | PI moderates the impact of AT and PR. | PI appears in high-SUI configurations, mitigating the dependence on trust and risk. |
| RQ4 | High SUI emerges from multiple equifinal combinations of perceptions, affective responses, and personal innovativeness. | High SUI is linked to favorable perceptions. | Low SUI is linked to low perceptions, trust, anxiety, and PI. |

Share and Cite

Xia, B.; Lei, Y.; Hu, Y.; Zhu, X.; Zhang, J. Sustainable Use Intention of Text-to-Image Generative AI in Higher Education: An S–O–R Model with Parallel Trust and Risk Pathways. Sustainability 2026, 18, 1657. https://doi.org/10.3390/su18031657
