Future Internet
  • Article
  • Open Access

9 December 2025

Explainability, Safety Cues, and Trust in GenAI Advisors: A SEM–ANN Hybrid Study

1 Department of Physics, School of Sciences, Democritus University of Thrace, Kavala Campus, 65404 Kavala, Greece
2 Department of Business Administration, University of Patras, 26504 Patras, Greece
* Author to whom correspondence should be addressed.
This article belongs to the Special Issue Generative Artificial Intelligence: Systems, Technologies and Applications

Abstract

GenAI assistants are gradually being integrated into daily tasks and learning, but their uptake depends as much on perceptions of credibility and safety as on their capabilities per se. The present study proposes and tests a dual-pathway model in which two interface-level constructs, perceived transparency (PT) and perceived safety/guardrails (PSG), influence behavioral intention (BI) both directly and indirectly via two socio-cognitive mediators, trust in automation (TR) and psychological reactance (RE). Two evaluative lenses, perceived usefulness (PU) and perceived risk (PR), are modeled alongside them. Using survey data from 365 respondents and partial least squares structural equation modeling (PLS-SEM) with bootstrapping in SmartPLS 4, we found that PT is the strongest predictor of BI, followed by TR, with smaller contributions from PSG and PU and none from PR and RE. Mediation testing revealed significant partial mediation via TR for PT and PSG, an indirect-only (full) mediation via TR for PR, and nonsignificant reactance-driven paths throughout. To probe non-linearity and non-compensatory effects, a Stage 2 multilayer perceptron was estimated; it confirmed the SEM ranking and was complemented by variable-importance and sensitivity analyses. In practical terms, the findings support the primacy of explanatory clarity and of clearly communicated, consistently enforced rules, with usefulness subordinate to credibility once the latter is established. Integrating SEM and ANN improves both explanation and prediction, providing valuable insights for policy, managerial, and educational decision-makers about the implementation of GenAI.

1. Introduction

Generative AI (GenAI) agents have progressed from experimental proofs of concept to infrastructural actors mediating research, writing, coaching, and choice in education, finance, and government [1,2]. As these agents scale, adoption depends not merely on technical capability but also on how users appraise the system—whether it is understandable, bounded by reasonable safeguards, useful for the task at hand, and unlikely to expose the user to harm [2,3]. Emerging evidence from user-centered research on explainable AI (XAI) suggests that experienced transparency—understanding the reasoning, sources, and limits of a model—can counteract negative perceptions and enhance trust and adoption, but these effects are typically small and sensitive to design. Meanwhile, policy and design efforts within responsible AI emphasize the importance of open guardrails—discernible limits, refusals, and policy disclosures that contain risk and make acceptable use explicit [2,3]. Collectively, these strands suggest that reliable explanations and reliable safeguards must be addressed jointly in order to achieve trust in real-world settings.
At the level of individual decisions, contemporary acceptance studies reveal a benefit–hazard calculus: perceived usefulness (possible performance improvements) and perceived risk (possible harms such as error costs or loss of privacy) both contribute to intention to use. Across domains—health, education, customer service, and organizational settings—ease of use and performance expectancy precede intention and use, yet their effects are inconsistent when credibility and risk are more prominent (e.g., in high-stakes or private settings) [4,5]. Some studies find usefulness to be the most prominent driver of intention, while others conclude that its effect fades once risk and trust are made explicit, or that organizational and social issues dominate sheer performance beliefs when adoption is new [6,7,8]. These differences most probably index heterogeneity in context, measurement, and interface signaling and motivate models that account for utility as well as risk and trust within one structural explanation [4,6].
Trust consistently acts as the proximal driver of reliance, with converging evidence that established automation-trust measures generalize well to AI environments [1]. Conversely, user-focused tests warn that explanations are not enough; perceived explainability must reach sufficient salience and clarity to affect trust or perceived usefulness, and affective aspects of the explanatory process may mediate these outcomes. These findings support treating transparency as a user impression rather than a developer metric, and quantifying its effect alongside risk and trust separately rather than in aggregate [1,2]. A second, less frequently modeled mechanism is psychological reactance—the aversive response to perceived threats to freedom or agency. In automated or surveillant settings, reactance can be triggered by interface choices that signal control (e.g., preventive monitoring), by curation that narrows perceived choice (e.g., echo-chamber dynamics), or by social mimicry that feels inauthentic, producing downstream avoidance, less favorable attitudes, and disengagement [9,10]. Conversely, transparent expectations and supportive framings can de-threaten and restore agency. For GenAI assistants, this implies that the same designs intended to reassure or “humanize” can backfire if they inadvertently heighten freedom threat. Incorporating reactance alongside trust is therefore critical for explaining why transparency and guardrail cues sometimes underperform in practice [9,10,11].
Despite rapid development, the existing literature has four limitations that hinder the development of cumulative theory and design guidance. First, research conflates the constructs of safety and transparency: “transparency” is operationalized as a general disclosure or explanation cue, and guardrails (visible boundaries, refusals, and policy cues) are excluded or absorbed into the same construct [12]. This blurring conflates the distinct psychological mechanisms—diagnosticity/traceability vs. assurance/risk containment—through which interface cues affect adoption. Second, while trust is firmly established as an antecedent of intention, reactance is seldom modeled alongside trust in a joint path model; where present, it is more often treated as a trait or post hoc rationalization than as a design-sensitive mediator [12,13]. Third, results for usefulness are domain-contingent: usefulness sometimes dominates the prediction of intention, but in privacy-sensitive or high-consequence settings its effect attenuates when risk and credibility enter the picture. Previous research tends to assess usefulness and risk separately, to use heterogeneous operationalizations of risk (privacy vs. error cost vs. liability), or to omit the joint estimation of usefulness, risk, trust, and reactance necessary to adjudicate between rival accounts [14,15]. Fourth, most evidence rests on single-method, single-model inferences: experiments find small, design-sensitive explanation effects with limited external validity, and surveys apply linear SEM without probing the non-linearities, thresholds, or interactions that would specify under what conditions transparency and guardrails actually alter behavior; invariance and heterogeneity checks are reported only occasionally, and multilingual, ecologically grounded retrospective-use samples remain rare [12,16].
With these in mind, we identify four target user perceptions and two socio-cognitive mediators [17,18,19]. Perceived transparency (PT) is the sense that the reasoning, steps, and sources of an assistant can be traced; perceived safety/guardrails (PSG) is how clearly limits, refusals, and policy guardrails are established and enforced; perceived usefulness (PU) is the anticipated task value; and perceived risk (PR) is the expected downside of reliance [19,20]. We hypothesize that PT and PSG are design-perception levers offering two complementary channels: a risk–trust pathway (guardrails diminish perceived risk, which increases trust and intention) and a freedom–trust pathway (traceable reasoning and transparent rules diminish reactance, thereby increasing trust and intention). This specification explicitly responds to the tendency to conflate transparency and safety in measurement and to the lack of models treating reactance and trust as concurrent mediators between interface signals and adoption [17,21].
To empirically test the dual-pathway model, a structural model of GenAI adoption was formulated and estimated to determine the relationships among transparency perceptions, guardrails, usefulness, risk, trust, reactance, and behavioral intention using variance-based structural equation modeling. To supplement the structural model with non-linear predictive insights, a Stage 2 artificial neural network was applied to explore whether the importance rankings hold when flexible functional forms are permitted.
The rest of the paper is organized as follows. Section 2 provides the literature review on GenAI adoption, separating design-perception levers from evaluative lenses, with trust and reactance serving as mediators. Section 3 introduces the conceptual framework, i.e., the proposed model for the study. Section 4 describes the study’s methodological framework, from sampling to the procedures underlying the two stages of analysis, the second of which provides predictive diagnostics via ANN. Section 5 interprets the study’s findings, from structural estimates to predictive diagnostics, while Section 6 discusses the practical applications of the study for policymaking, management, and education. Section 7 provides the study’s conclusions, synthesizing its contributions, followed by its limitations and future research paths.

2. Literature Review

2.1. GenAI Adoption, XAI, and Trust in Automation: From Interface Signals to Evaluative Judgments

Transdisciplinary research is converging on the view that adoption of generative AI systems rests not merely on their utilitarian benefits but also on reliable cues that they operate transparently and safely [22,23,24]. Reviews of trustworthy and explainable AI frame the issue at an overarching level: Chamola et al. [22] juxtapose trustworthy-AI (“TAI”) elements—bias mitigation, reliability, assurance, and post hoc explanations—on the premise that transparent and auditable procedures must underlie societal trust in banking, healthcare, autonomous vehicles, and IoT. In parallel, Industry 4.0/5.0 XAI surveys [25] and user-oriented assessments [24] find that explanation quality and visualization options shape acceptance, but that empirical evaluation from the user’s point of view remains underdeveloped. Method-focused contributions deepen this human-factors lens: Shin [26] compares explainability with causability—users’ subjective sense of being able to understand an explanation—demonstrating their complementary contributions to trust and perceived performance, whereas Theis’s structured review shows what different stakeholders require explanations to provide (end-users, outcome-level clarity and bounds; developers, internal mechanisms). Public-sector studies [27] reinforce that providing explanations yields more trust and that the effect of transparency is situation-dependent—most visible where discretion is broad. Efforts to operationalize “transparency by design” nonetheless continue to meet resistance: Schor et al. [28] document a gap between designers’ outputs and IEEE 7001 guidelines, citing a standards-to-practice gap that leaves practitioners unsure how to deliver the transparency users actually care about. Together, these strands establish the distinction between perceived transparency (PT)—the user’s perception that reasoning, steps, and sources are comprehensible—and perceived safety/guardrails (PSG)—the user’s perception that boundaries, refusals, and policies are explicitly laid out and enforced. PT concerns traceability and comprehension; PSG concerns boundary-setting and risk containment. Both are interface-level controls that, in theory, minimize uncertainty and anchor trust.
In the adoption literature, the benefit/performance side is robust but not context-invariant. A synthesis of the education literature [29] identifies performance expectancy and trust as the primary predictors across studies, with other UTAUT/TAM variables exerting context-dependent influences. Empirical research tends to support this pattern—Wang et al. [30] identify perceived ease of use and perceived usefulness as factors shaping attitudes and intention toward generative AI teaching assistants, while trust, hedonic motivation, and adaptability feed into the core TAM beliefs. However, some studies complicate the straightforward “usefulness drives intention” story. In Korean firms, effort expectancy and social influence explain intention whereas performance expectancy and facilitating conditions do not, indicating that early organizational adoption may rest more on learnability and social support than on perceived performance advantage. Kim [31] places effort expectancy and social influence above usefulness using a Bayesian network–based SEM and adds anthropomorphism and animacy as weaker but still effective determinants. Closer to the domain of GenAI assistants, Balaskas et al. [32] show that perceived intelligence and perceived ease affect ChatGPT (v4) adoption, while usefulness has limited impact when trust and risk are framed as mediators; risk partially or fully mediates several antecedents. Across these findings, two themes recur: first, usability, social norms, and trust can override the benefit calculus when systems are new or unclear; second, trust is often the proximal influence on intention, with usefulness exerting a variable effect depending on task, stakes, and user skill.
Safety and risk perceptions provide the counterpoint to usefulness. Studies that model perceived risk—as a mediator, occasionally a moderator, occasionally an antecedent—consistently find that it dampens intention and erodes trust. Li et al. [23] apply UTAUT to students and formally model perceived risk as a moderator to capture how fear of privacy loss and harm conditions the influence of standard predictors on willingness to use. In risk-information settings, Hong [33] demonstrates that privacy concerns heighten AI anxiety and perceived uncertainty and diminish intent to obtain information through chatbots, whereas trust reduces uncertainty and enhances intent. These results are corroborated by XAI evidence that transparency about limitations and possible breakdowns can paradoxically increase acceptance by calibrating expectations [34], with public-sector work indicating that explicit, comprehensible justifications matter most when decisions are consequential [27]. From a design perspective, this supports treating PSG—clear refusals, scope boundaries, and policy cues—as a perception distinct from PT: transparency without perceived guardrails may fail to mitigate risk, and guardrails without an obvious purpose may create resistance or misaligned overdependence.
Interface-level studies of personal assistants further confirm the importance of such cues. Florentin et al. [35] extend TAM to a mobile intelligent personal assistant by incorporating conversational/task intelligence and visual simplicity/esthetics, finding general support for direct and indirect effects on acceptance and loyalty; configurational analysis (fsQCA) also identified multiple routes to high acceptance—some led by interface clarity, others by perceived intelligence—again suggesting that no single cue ensures adoption across contexts. Appearance and anthropomorphism also matter: Liu and Yang show that anthropomorphic signals, alongside efficiency and convenience, shape attitudes and subsequent purchasing/sharing intentions, with novelty seeking and price sensitivity as moderators. Across organizational [36] and educational contexts [29,30], the overall picture is one of design-perception levers (PT, PSG) combined with evaluative lenses (PU, PR) under domain-specific costs, norms, and constraints.
This research sets out to address two major gaps. First, while transparency and safety are regularly invoked in discussions of GenAI, they are operationalized and measured with remarkably little consistency in survey instruments; in particular, PT—which concerns traceability—is rarely differentiated from PSG—which concerns boundaries—even though current XAI research frames them within different paradigms altogether (understanding vs. controlling risk) [24]. Second, while GenAI adoption research has yielded mixed results for perceived usefulness (PU), perceived risk (PR) and trust are typically measured separately rather than jointly within an overarching risk-to-trust model [31,35]. This study addresses these gaps with two objectives: first, to treat PT and PSG explicitly as interface-level constructs that precede perceived risk (PR) and trust; and second, to model perceived usefulness (PU) simultaneously with perceived risk (PR) using structural equation modeling together with a Stage 2 artificial neural network analysis capable of identifying potential non-linearities or thresholds among these factors.

2.1.1. Design-Perception Levers: Perceived Transparency (PT) and Perceived Safety/Guardrails (PSG)

There is increasing research showing that uptake of AI assistants is not just a matter of instrumental value but also of interface cues by which people assess how systems work and how safely they are constrained [37,38,39]. Two complementary levers have been identified. The first—perceived transparency (PT)—involves users’ sense that an assistant’s reasoning, steps, training basis, or sources are understandable [40,41]. The second—perceived safety/guardrails (PSG)—captures whether boundaries, refusals, and policies are conveyed so that users perceive they are shielded from risky or objectionable outputs [42,43]. While usually collapsed under “transparency,” emerging evidence indicates that they work through partially different psychological mechanisms—PT mostly explains and diagnoses system activity, while PSG mostly contains danger and sets expectations.
From a transparency standpoint, user-centered XAI research consistently links explanatory quality to downstream trust and acceptance while emphasizing that effects are small and design-sensitive [38]. In a randomized experiment, Xu et al. [44] show that the transparency manipulation leads participants to perceive the social chatbot as less “creepy,” feel friendlier toward it, and better sense its social intelligence; notably, effects were modest but larger for less-experienced participants—evidence that PT can level the playing field for novice users. Experimental research by Hamm et al. [45] adds that including SHAP-style post hoc explanations raises perceived explainability only weakly, whereas perceived explainability strongly predicts trust and usefulness, with hedonic considerations also in play—a caution that transparency signals must be not only informative but also engaging. Systematic reviews attest to such nuances [1,22,29,41]. Rong et al. [38] report that human-subject testing of XAI remains limited and sparse by application, with social-psychological and cognitive theory poorly integrated, and Mathew et al. [46] enumerate technique-level advantages and limitations and contend that interpretability gains matter only to the degree that they enhance human comprehension and critique. Outside the assistant context, Montecchi et al. [47] validated a scale for brand transparency—differentiating observability, comprehensibility, and intentionality—which usefully reminds HCI scholars that transparency is multidimensional: users care not only about what is disclosed but also about why and to what end, a distinction that maps closely onto PT’s traceability and sincerity components in AI interfaces.
This is where perceived safety/guardrails (PSG) emerges as a separate construct instead of a subset of PT. PSG reflects whether an assistant makes its boundaries clear, what it will and will not do, when it will say no, and under what policies it is running. Although most acceptance research does not invoke PSG by name, convergent evidence implies that risk-containing disclosures affect perceived value and intention. In a university context, Moghavvemi et al. [43] find that privacy risk reduces the perceived value of ChatGPT despite usefulness and ease of use being high, suggesting that guardrail cues that address privacy or harm concerns can be determinative of ongoing use. In organizational contexts, Prasad et al. [40] illustrate that trust mediates the link between positive user perceptions (e.g., usefulness/ease) and outcomes like commitment and involvement—also showing the necessity for design signals that inform but also reassure. In contrast, Yakubu et al. [42] do not find perceived risk to be important in a UTAUT model of Nigerian students, yet performance expectancy, effort expectancy, and social influence are salient; this discrepancy can plausibly be explained by contextual (discipline, infrastructure, and social norms) and measurement differences, and serves as a caution against the assumption of invariant risk effects without careful operationalization of PSG. Together, the literature indicates that PT and PSG are neighboring but distinct: transparency is a matter of understanding (can I see how it works?), whereas guardrails are a matter of containment (will it keep me from harm and honor boundaries?). Both operate ultimately through trust, but through different proximal appraisals (understanding vs. safety).
Related gaps concern the conceptualization and model testing of transparency. While calls for “more transparency” abound, empirical studies rarely distinguish process transparency (PT) from perceived system guarantees (PSG) using validated survey constructs, and user-evaluation studies too often lack detail about which explanatory properties matter to which types of users [38]. By conceptualizing PT in terms of response traceability and sourceability, and PSG in terms of refusals and boundary or policy protections, this research permits analysis of how the two factors differ in their associations with trust and perceived risk (PR) (e.g., [44,45,48]). Because empirical demonstrations of PT or PSG effects tend to be small, ambiguous, or condition-laden, a hybrid strategy—structural equation modeling (SEM) to identify linear relationships among constructs, plus a Stage 2 artificial neural network (ANN) analysis to probe thresholds and non-linearities (e.g., “What level must PT exceed before it significantly affects trust?”, or “At what level of risk-mitigating value does PSG’s contribution materialize?”)—appears more efficient, and more informative, than either strategy alone for examining how these two design-perception levers reduce risk perceptions, increase trust, and ultimately raise intentions to adopt GenAI systems.

2.1.2. Evaluation Lenses: Perceived Usefulness (PU) and Perceived Risk (PR)

Across domains of application, past research confirms that acceptance of AI assistants is driven by a two-dimensional calculation: a benefit calculus indexed by perceived usefulness and a hazard calculus indexed by perceived risk [9,10,11,49]. In learning contexts, usefulness continues to be a strong predictor of intention and long-term use. De Roca et al. [50] find that students’ behavioral intention to use course-integrated chatbots is strongly linked with usefulness perceptions arising from 24/7 availability and task assistance, whereas Cabero-Almenara et al. [51] apply UTAUT2 to Costa Rican students and find performance expectancy to be the most significant driver of intention, and intention, in turn, to predict use. Parallel benefit-driven results are found in policing and service contexts: Tomas et al. [52] find perceived ease of use and usefulness to be significant contributors to citizens’ intention to use chatbots for eyewitness interviews, and Jo et al. [53], in the context of personal assistants, link continuance intention to utilitarian value driven by usefulness, novelty, and ease of use. Even when richer socio-psychological mechanisms are modeled, usefulness remains central: Salih et al.’s [54] post-adoption framework for general AI services links continuance with performance expectancy and hedonic and social factors, and Park et al.’s [55] research on mental health chatbots argues that parasocial interaction and ease of use facilitate usefulness, which in turn drives intention.
At the same time, the literature warns against assuming a one-size-fits-all or unconditional usefulness–intention relationship. For both offline and online shopping, Silva et al. [11] find that trust influences intention at least as much as usefulness does, both reducing risk and increasing flow; Marjerison et al. [56], studying Chinese consumers, demonstrate that usefulness mediates the influence of perceived authenticity and perceived risk on intention, suggesting that usefulness judgments are themselves relative to perceptions of credibility and hazard. Sectoral variations can reverse standard TAM effects: in banking, Hasan et al. [49] find that usefulness predicts intention poorly after controlling for enjoyment, ease of use, and trust, and that the ease-to-intention path is negatively moderated by risk—consistent with the hypothesis that when stakes are high, hazard appraisals reduce the payoff of usability. Design decisions can also suppress usefulness perceptions. Yuviler-Gavish et al. [9] illustrate how a “whatsappization” style of conversation and presence cues undermines ease of use and utility and worsens attitudes, a reminder that human social cues do not always transfer easily to non-human actors. Overall, these findings illustrate that usefulness is neither purely instrumental nor context-independent: it is co-determined with credibility, affect, and domain stakes.
Perceived risk, by contrast, is a cross-cutting inhibitor whose functioning differs by context and operationalization. At the university level, Moghavvemi et al. [43] show privacy risk inhibiting perceived value for ChatGPT—weakening usefulness-to-repeated-use channels—whereas Salih et al. [54] find that privacy risk moderates a number of post-adoption relationships. In hotel robotics, Seo et al. [57] place risk in a trust–satisfaction chain: trust diminishes perceptions of risk, risk decreases satisfaction and revisit intention, and usefulness has direct and trust-mediated effects. In finance, Hasan et al.’s [49] moderation finding implies that risk re-slopes rather than simply shifts underlying acceptance trajectories. By contrast, Yakubu et al.’s [42] UTAUT survey of Nigerian computer science students identifies non-significant risk effects once performance expectancy, effort expectancy, and social influence are controlled for—highlighting how the salience of risk may vary with measurement focus (privacy vs. outcome errors), local norms and infrastructure, or sampling population. A reasonable conclusion is that risk should be treated not as a monolithic concept but as a family of threat judgments (privacy, cost of error, and liability) whose relations with intention can differ.
Methodologically, the literature remains uneven in how it defines and examines PU and PR: most studies favor adoption over continuance (or vice versa) with little consideration of how the benefit–risk trade-off changes over time; risk measures typically bundle privacy, accuracy, and reputational damage; and only a minority conceptualize risk as a moderator of, or mediator to, trust. Samples are often convenience student cohorts or geographically limited panels, which constrain external validity and raise concerns about domain generalization. Experimental manipulations, when they exist, tend to be stylized (e.g., chat interface cues) and may fail to represent task stakes or policy environments in ways that meaningfully condition risk and usefulness judgments. These limitations account for inconsistent results—e.g., usefulness fading in banking but holding in classrooms—and suggest the merit of modeling PU and PR jointly with trust.
The current study builds on this agenda by treating PU and PR as simultaneous evaluative lenses that channel the effects of interface signals toward adoption. Conceptually, we differentiate perceived transparency and perceived safety/guardrails as upstream design perceptions that affect PU and PR in different ways: transparency may amplify diagnosticity and therefore usefulness, whereas observable guardrails may constrain perceived hazard. Empirically, we measure PU and PR alongside trust and reactance so that risk can influence intention both directly and indirectly via trust, and so that we can capture the possibility that freedom-threat perceptions suppress usefulness gains. To address the small or conditional effects reported in some XAI and adoption tests, we add a Stage 2 ANN to PLS-SEM to identify non-linearities (e.g., risk levels above which usefulness no longer translates into intention) and interactions (e.g., guardrails lowering risk most when task stakes are highest). In doing so, we hope to reconcile conflicting findings across education, retail, finance, and public-service contexts, identify where usefulness reliably drives intention, and explain where risk dominates the calculus of choice for generative AI assistants. Thus, the following hypotheses were formed:
H1. 
Perceived transparency (PT) will be directly associated with behavioral intention (BI).
H2. 
Perceived safety/guardrails (PSG) will be directly associated with BI.
H3. 
Perceived usefulness (PU) will be directly associated with BI.
H4. 
Perceived risk (PR) will be directly associated with BI.
H5a. 
Trust in automation (TR) will be directly associated with BI.
H5b. 
Psychological reactance (RE) will be directly associated with BI.

2.2. Mediating Mechanisms: Trust and Reactance in the GenAI Adoption Pathway

Across contexts, recent findings position trust as the proximal determinant of AI adoption and persistence, with psychological reactance—users’ negative reaction to perceived loss of autonomy—identified as a countervailing force that can inhibit or reverse otherwise favorable judgments [19,20,21]. In post-adoption contexts, Salih et al. [54] synthesize trust–commitment theory, UTAUT2, and sRAM to show that performance expectancy, hedonic motivation, and social influence enhance engagement and continuance but that privacy risk moderates some of these relationships, contending that trust must be maintained under perceived vulnerability rather than assumed to be static. Human–robot interaction evidence likewise confirms the central—and measurable—nature of trust: Pinto et al. [58] validate a Human–Robot Interaction Trust Scale with robust psychometrics, confirming that trust can be conceptualized as a higher-order construct (competence, reliability, and integrity) with meaningful group differences. In services and e-commerce, Singh identifies trust as the central motivator of acceptance alongside usefulness and enjoyment, while Marjerison et al. [56] demonstrate that usefulness itself is conditioned by risk and authenticity through trust, suggesting that trust functions both as a gateway to intention and as the lens through which users view performance gains.
Concurrently, research increasingly shows the circumstances under which systems provoke freedom threat and reactance, with consequent implications for attitudes and behavior. Perceived AI autonomy bolsters threat and reactance, but personality moderates this effect by strengthening perceived fit, as Oh et al. [17] show; the implication is that the very same trait (autonomy) will provoke resistance unless accompanied by signals that reassert user control. Under monitoring-heavy or coercive scenarios, reactance is strong: Guo [59] demonstrates that obligatory app usage increases freedom threat and reactance, lowering attitude and intent; in organizational monitoring, Wang et al. [30] find that “preventive” EPM (policing) strengthens reactance and weakens proactivity, while “developmental” EPM (supportive framing) reduces reactance—effects moderated by mindfulness. Algorithmic curation can have the same consequences: Hong et al. [19] demonstrate that echo-chamber characteristics (similarity and density) heighten freedom threat and fatigue, fueling reactance and generating discontinuance and avoidance behaviors. Beyond AI assistants, Xie et al. [60] demonstrate that external control (parental psychological control) heightens reactance, partially explaining problematic media use and strengthening the generality of the freedom-threat mechanism. Taken together, these results indicate the same robust pattern: as users infer constraint, opacity, or manipulation, reactance rises, trust falls, and adoption suffers.
Design and framing decisions condition both mediators. The locus of attributed agency—whether the recommender is human-, machine-, or proxy-centered—modifies competence and integrity beliefs and, through them, trusting intentions, as demonstrated by Liu; users who (appropriately) attribute more human agency are more confident and less reluctant to provide data. By contrast, anthropomorphized social cues are not necessarily empowering: Yuviler-Gavish et al. [9] demonstrate that the “whatsappization” of chatbot interfaces (dialog structure and presence indicators) decreases perceived ease and connectedness and worsens attitudes, consistent with the view that social mimicry can inadvertently convey pretense or coercion and induce reactance rather than rapport. In daily-life AI applications, Salih et al. [54] also show how privacy risk weakens continuance paths even where engagement drivers are present, again confirming that assurance signals matter as much as instrumental features in building long-term trust. Lastly, “dark side” sentiments such as creepiness rooted in privacy concern, technology anxiety, and uncertainty avoidance amplify distrust and disengagement, as Maduku et al. [61] discuss, again pointing to the need for interface signals that de-threaten while preserving agency.
Three further gaps exist in the study of mediating mechanisms. First, reactance is repeatedly acknowledged as relevant to GenAI use but is rarely incorporated alongside trust in a single structural model. Second, when models do account for reactance, it is typically treated as an individual-difference variable or examined post hoc rather than as a design-sensitive state amenable to mitigation via interface choices. Third, antecedent manipulations have been diverse and weakly grounded in real GenAI systems, making it difficult to identify cues that restore user control without trading off clarity or credibility. The current model meets these challenges by incorporating trust and reactance as mediating processes, measured through two specific, interface-proximal constructs—PT (traceability of reasoning, sources, and steps) and PSG (visible limits, refusals, and policy signals)—linked directly to trust processes, and by testing risk–trust and freedom–trust pathways, as well as models in which high usefulness depends on trust and low perceived constraint. A complementary PLS-SEM plus Stage 2 ANN strategy is used to explore whether these mechanisms exhibit non-linear thresholds, particularly for low-knowledge users or surveillance-prone settings, and to delineate when design perceptions effectively translate into GenAI adoption. From this synthesis, the formal mediation hypotheses of direct, mediated, and serial-mediated effects via trust and reactance are stated below, and the conceptual model figure delineates the exogenous perceptions (perceived transparency, perceived safety/guardrails, perceived usefulness, and perceived risk), the mediators (trust and reactance), and the outcome (behavioral intention).
H6a. 
Trust (TR) mediates the relationship between PT and BI.
H6b. 
Psychological reactance (RE) mediates the relationship between PT and BI.
H7a. 
Trust (TR) mediates the relationship between PSG and BI.
H7b. 
Psychological reactance (RE) mediates the relationship between PSG and BI.
H8a. 
Trust (TR) mediates the relationship between PU and BI.
H8b. 
Psychological reactance (RE) mediates the relationship between PU and BI.
H9a. 
Trust (TR) mediates the relationship between PR and BI.
H9b. 
Psychological reactance (RE) mediates the relationship between PR and BI.

3. Research Methodology

3.1. Conceptual Model and Rationale

This research presents a design-oriented explanation of GenAI adoption by differentiating between two interface perceptions—perceived transparency (PT) and perceived safety/guardrails (PSG)—and integrating them with perceived usefulness (PU) and perceived risk (PR) in an overarching socio-cognitive framework explaining behavioral intention (BI). The model integrates ideas from TAM/UTAUT, trust in automation, XAI, risk communication, and reactance psychology to fill three gaps: the pervasive blurring of transparency and safety, the under-representation of RE as the complement of TR in adoption pathways, and the disjointed treatment of usefulness and risk that hampers clear design guidance. By positioning PT and PSG as interface-level levers and estimating PU and PR jointly, the framework connects what systems signal—and how they signal their boundaries—to the appraisals that regulate reliance in real-world contexts (Figure 1).
Figure 1. Conceptual model.
Conceptually, PT is users’ perception that an assistant’s reasoning, steps, and sources can be traced; PSG is the perceived clarity of boundaries, refusals, and policy protections; PU is anticipated task value; PR is the anticipated downside of reliance; TR is willingness to depend on the assistant under ambiguity; and RE is the aversive state that follows perceived constraint or manipulation [12,16]. The model identifies two interacting routes: a risk–trust route, through which guardrail cues influence hazard appraisals that shape trust and, in turn, intention; and a freedom–trust route, through which traceable reasoning and explicit rules influence autonomy appraisals that shape reactance, which in turn constrains trust and intention. PU and PR also connect directly to intention, in line with acceptance theory and with domains where performance beliefs or risk salience are most pivotal; they also feed trust as a consolidating belief about the advisability of reliance [4,5]. Additional direct paths from PT and PSG are treated as robustness checks to maintain parsimony.
This specification is theoretically significant in that it separates comprehension-based effects (PT’s diagnosticity and sourceability) from assurance-based effects (PSG’s boundary-setting and error containment), and it explains why explanation cues sometimes shift attitudes only when salient, credible, and user-interpretable, while safety cues redirect adoption especially where task stakes or privacy salience are high. It also treats reactance as a design-sensitive mediator rather than a fixed personality trait and describes the interface conditions under which “humanizing” or limiting features can suppress cooperation even when technical performance is sufficient [19,20,21]. In practice, the model provides actionable knobs: invest in transparency features where autonomy threat is high; surface guardrails where hazard assessments predominate; and align performance messaging with assurances to avoid inadvertently signaling control or peril. Empirically, it is estimated using PLS-SEM on a survey-only, multilingual, retrospective sample of ecologically formed attitudes, supported by a Stage 2 ANN to examine plausible thresholds and interactions that linear models cannot capture. In doing so, it offers a balanced account of how interface cues shape judgments of usefulness and risk, how those judgments flow through trust and reactance, and when these flows translate into intention to use GenAI assistants.

3.2. Data Collection and Sampling

This research used a quantitative cross-sectional design and an online self-administered survey to explore the interrelationship between the respondents’ previous experience with GenAI advisors and perceived transparency, perceived risk, privacy concern, psychological reactance, trust, perceived usefulness, and intention to use [62,63,64]. Sampling and data collection methods conformed to the theoretical model of the study—technology-adoption and trust-in-automation pathways estimated using SEM supplemented with a Stage 2 ANN for non-linear predictive diagnostics—and to the psychometric requirements for valid analysis of latent variables.
Sampling utilized a stratified purposive quota method conducted via a quality-checked online research panel supplemented with purposeful recruitment, seeking to recruit adults with recent exposure to AI-supported online services (e.g., university websites, bank or helpdesk chatbots) [65,66,67]. Gender, age ranges (18–24, 25–34, 35–44, 45–54, 55+), and educational attainment (non-tertiary vs. tertiary) were used as quotas to secure sufficient subgroup balance for measurement invariance and multi-group SEM. Within strata, invitations were issued until quotas were filled. This approach prioritizes conceptual relevance—ensuring respondents can report meaningfully on GenAI experience—while maintaining external validity through controlled representation of demographics. The use of an online panel of EU residents with prior GenAI experience sets specific boundaries for generalization: individuals without prior exposure and non-EU residents are not included, and among those surveyed, GenAI knowledge varies greatly, with a large proportion reporting low knowledge, who may systematically differ from more knowledgeable users in their interface perceptions and evaluations.
Data were gathered via Google Forms. After screening for eligibility and electronic informed consent, participants first read a short, neutral definition of a GenAI assistant to ensure uniform understanding, then completed all items retrospectively about their own use of such systems during the past 12 months. The instrument combined previously validated Likert-type items (4–5 items per factor, five-point scales) for the focal latent variables—perceived transparency, safety perception, perceived risk, privacy concern, psychological reactance, trust, perceived usefulness, and behavioral intention—along with demographic items and short digital-literacy indicators. The survey was administered in Greek and English; translation–back-translation by bilingual specialists and cognitive pretesting (≈10 participants) were employed to ensure semantic equivalence and readability. Responses were voluntary and anonymous, and no experimental manipulation was used.
Inclusion criteria required adults (≥18 years) with very good Greek or English proficiency, residence in the EU, and self-reported use of AI-supported online services over the past year on any device (desktop, laptop, or mobile). These criteria ensured that participants could offer informed consent, understand the questionnaire, and report experience-based judgments regarding GenAI interaction. Exclusion criteria—pre-registered for internal validity—removed cases with no relevant prior exposure, more than one attention/consistency-check failure, implausibly brief completion times (less than one-third of the median page time), stereotypical “long-string” responding, nonsensical responses to an open-ended attention probe, or suspected duplicates flagged by panel/device/IP controls. Automated flags were complemented by hand coding of open-ended content.
Pretesting in a small convenience sample (≈30–40) was conducted to assess timing, wording, and the salience of the framing definition; small textual and layout modifications ensued. In the main wave, procedural controls against common-method bias involved psychological distance between predictors and outcomes (with a brief neutral filler section intervening), randomized item presentation, mixed stems/anchors, and assurance of anonymity. Measurement quality was established using internal consistency (Cronbach’s α and composite reliability), convergent validity (average variance extracted ≥0.50), discriminant validity (Fornell–Larcker criterion and HTMT < 0.85), and full-collinearity VIFs (<3.3). For the ANN stage, standardized latent scores of the exogenous constructs were used to train a multilayer perceptron with early stopping; predictive performance (MAE/RMSE) was benchmarked against linear baselines, and interpretability was supported with importance values and individual conditional expectation plots.
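To illustrate the Stage 2 ANN pipeline described above, the following minimal Python sketch (assuming scikit-learn and synthetic standardized latent scores in place of the study’s actual data) trains a multilayer perceptron with early stopping and benchmarks its MAE/RMSE against a linear baseline; all variable names and values are illustrative assumptions, not the study’s estimates.

import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_absolute_error, mean_squared_error

rng = np.random.default_rng(42)
# X: hypothetical standardized latent scores for PT, PSG, PU, PR, TR, RE; y: BI scores.
X = rng.normal(size=(365, 6))
y = 0.4 * X[:, 0] + 0.25 * X[:, 4] + rng.normal(scale=0.5, size=365)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=1)

# Multilayer perceptron with early stopping, as described in the text.
ann = MLPRegressor(hidden_layer_sizes=(8,), activation="relu",
                   early_stopping=True, validation_fraction=0.15,
                   max_iter=2000, random_state=1).fit(X_tr, y_tr)
ols = LinearRegression().fit(X_tr, y_tr)   # linear baseline for comparison

for name, model in [("ANN", ann), ("Linear", ols)]:
    pred = model.predict(X_te)
    rmse = mean_squared_error(y_te, pred) ** 0.5
    print(name, "MAE:", round(mean_absolute_error(y_te, pred), 3),
          "RMSE:", round(rmse, 3))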
An aggregate of approximately 420 completes was planned to accommodate anticipated 10–15% quality-based exclusions and yield an analytical sample of approximately 360–380 observations. This sample size was justified a priori: inverse square-root and gamma-exponential strategies provided ≥0.80 power to detect small-to-medium structural effects (β ≈ 0.15–0.20) in the most demanding parts of the model; the conservative “10-times rule” was met [68,69,70]; CB-SEM sensitivity analyses (robust ML) at N ≈ 350–400 permit detection of standardized loadings ≥0.55 and paths ≥0.15 with 0.80 power; and the SEM–ANN stage was supported by N ≥ 350 for stable 70/30 train–validation–test splits with 10-fold cross-validation and reliable non-linear importance diagnostics.
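As a worked illustration of the inverse square-root sample-size rule cited above, the short Python snippet below computes the minimum sample size for the weakest expected path coefficients (β ≈ 0.15–0.20); the constant 2.486 corresponds to 80% power at α = 0.05 in the Kock–Hashim-style approximation, and the path values are assumptions for illustration only.

import math

def n_min_inverse_sqrt(beta_min: float, constant: float = 2.486) -> int:
    """Minimum sample size so the weakest expected path remains detectable."""
    return math.ceil((constant / abs(beta_min)) ** 2)

for beta in (0.15, 0.20):
    print(f"beta = {beta:.2f} -> n_min = {n_min_inverse_sqrt(beta)}")
# beta = 0.15 -> n_min = 275; beta = 0.20 -> n_min = 155
# Both thresholds sit comfortably below the planned analytical sample of ~360-380.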
The study was cleared by the authors’ institutional ethics committee (insert committee name and ID). Voluntary participation was requested, and electronic informed consent was collected before data collection; respondents were free to withdraw at any time without consequence. No identifying information was retained other than panel identifiers necessary for compensation reconciliation. Processing was in line with GDPR principles of lawfulness, purpose limitation, and data minimization, and encrypted institutional drives with limited access were used to store files. Vulnerable adults and minors were not recruited, and the eligibility screen kept them from registering. The research was preregistered (hypotheses, design, exclusion criteria, and analysis plan), and—upon publication—de-identified data, code, prompts, and a data dictionary will be stored in an open repository.

3.3. Measurement Scales

All latent constructs were operationalized reflectively using multi-item Likert scales (see Appendix A, Table A1). Perceived transparency (PT) measured users’ perception of the traceability and diagnosticity of assistant responses with four items (PT1–PT4: “I could see how the assistant arrived at its answers”; “The assistant’s explanations were clear and detailed”; “I could verify sources or evidence behind its answers”; “It was easy to follow the steps the assistant took”), drawn from validated XAI/transparency tools [12,14]. Perceived safety/guardrails (PSG) measured the salience of boundaries, refusals, and policy guardrails with four items (PSG1–PSG4, e.g., “The assistant clearly delineated what it would and wouldn’t do”; “I had the sense that protection was built in to stop dangerous advice”), adapted from risk-communication and responsible-AI disclosure scales [71,72]. Perceived usefulness (PU) measured anticipated task benefits with four items (PU1–PU4, e.g., “Using such an assistant makes me more productive”), adopted from TAM/UTAUT performance-expectancy items [73,74]. Psychological reactance (RE) assessed perceived threat to freedom when being guided by humans and AI with four items (RE1–RE4, e.g., “I become irritated when an assistant attempts to guide me towards a specific action”), adapted from established psychological reactance scales for technology use [15,16]. The items were framed with reference to “such assistants” (e.g., irritation at being steered or constrained) rather than as abstract trait statements. It should be noted, however, that these items still reflect responsiveness to freedom threats, which can blend with design-induced responses and capture a state-like readiness to disregard guidance when engaging with human and artificial intelligence systems—a point addressed below when considering the null findings. Trust in automation (TR) assessed willingness to rely on the assistant under uncertainty with four items (TR1–TR4, e.g., “I believe such assistants are trustworthy”), based on human–automation/AI trust scales [13]. Behavioral intention (BI) assessed short-term adoption likelihood with four items (BI1–BI4, e.g., “I would use such assistants in the next 3 months”), derived from TAM/UTAUT intention measures. Item stems and phrasing were aligned to a survey-only, retrospective frame (“such assistants”), with neutral, non-leading wording and no double-barreled items [75].

3.4. Sample Profile

A total of 365 adults participated in the survey, 51.2% of them women (Table 1). The age distribution was 18–24 (31.5%), 25–34 (25.8%), 35–44 (23.6%), 45–54 (11.5%), and 55 or older (7.7%). Educational attainment comprised secondary or below (21.9%), post-secondary/vocational qualifications (34.8%), bachelor’s degrees (31.2%), and master’s degrees or higher (12.1%). Regarding the main uses of GenAI, responses covered search/everyday tasks (29.9%), coding/data/tech work (27.4%), education/learning/teaching (11.0%), and office/customer support tasks (6.3%); the remaining 25.5% reported other or all-of-the-above uses. Self-reported familiarity with available AI resources was low/very low for 43.3%, moderate for 21.4%, and high/very high for 36.4% of respondents. Regarding the provision of personal data, 44.1% reported sometimes providing their own (non-sensitive or possibly sensitive) data, 20.8% often included personal data, 21.9% rarely or never did so, and 9.6% preferred not to say. Usage frequency was daily or almost daily for 18.4%, several times per week for 15.6%, weekly for 21.1%, monthly for 25.2%, and less than monthly for 19.7%.
Table 1. Sample profile.

4. Data Analysis and Results

We applied structural equation modeling in SmartPLS 4 (v4.1.1.4). According to Nitzl et al. [76], variance-based SEM is suitable for business and social science applications. PLS-SEM was chosen because it maximizes the explained variance of endogenous constructs and focuses on predictive relevance [77]. Multi-group analysis explored unobserved heterogeneity by comparing structural paths across subpopulations, detecting contextual differences that standard regression would miss [78]. Estimation followed the guidelines of Wong [79] for computing path coefficients, standard errors, and reliability indices. For reflective measures, indicator loadings ≥0.70 were considered acceptable thresholds for establishing convergent validity. This workflow enabled rigorous testing of the structural relationships and robust measurement evaluation within and across respondent groups.

4.1. Common Method Bias (CMB)

The potential for common method bias was assessed following the procedures of Podsakoff et al. [80]. The Harman single-factor test checks whether a single latent factor dominates the covariance among items. An unrotated factor analysis showed the first factor explaining 25.452% of the variance, well below the conventional 50% threshold. Therefore, CMB is unlikely to pose a threat to the findings. Documenting low CMB strengthens construct validity and the credibility of the relationships between constructs by reducing concerns about systematic measurement bias [80,81].
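As an illustration of the Harman single-factor procedure described above, the following minimal Python sketch runs an unrotated extraction (PCA as a stand-in for unrotated factor analysis) on a synthetic item matrix and reports the variance explained by the first factor; the data are simulated and do not reproduce the study’s responses.

import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
items = rng.normal(size=(365, 28))          # hypothetical matrix of 28 Likert items

pca = PCA(n_components=1).fit(items)         # unrotated, single-component extraction
first_factor_share = pca.explained_variance_ratio_[0] * 100
print(f"First unrotated factor explains {first_factor_share:.2f}% of variance")
# In the study, this value was 25.452%, well below the 50% threshold.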

4.2. Measurement Model

The PLS-SEM workflow continued with the assessment of the reflective measurement model. Following Hair et al. [77], we evaluated composite reliability (CR), indicator reliability, convergent validity, and discriminant validity to establish psychometric adequacy before interpreting the structural paths. Indicator reliability was defined as the proportion of item variance explained by its latent construct and was examined via outer loadings, following Vinzi et al. [82]. Following Wong [79] and Chin [82,83], loadings ≥0.70 were treated as acceptable indicators of item quality. Acknowledging that social science measures often fall below this benchmark [83], item retention was not automatic. Deletion decisions were based on incremental gains in model quality: the rule of thumb was to retain indicators by default and remove them only when their exclusion produced a clear, substantive improvement in CR and AVE, thereby avoiding the premature discarding of potentially informative measures.
Items with loadings between 0.40 and 0.70 were removed only if their removal substantially improved the CR or AVE of the respective construct. Based on these criteria and guided by the decision rules of Gefen et al. [84], two indicators, RE4 and TR4, which showed loadings below 0.50, were dropped during measurement-model purification. As shown in Table 2, this parsimonious refinement enhanced the quality of the measurement model without substantially reducing construct representation, rendering it adequate for structural estimation and hypothesis testing.
Table 2. Factor loading reliability and convergent validity.
Reliability was examined using Cronbach’s alpha, rho_A, and composite reliability (CR). In line with Wasko et al. [85], the 0.70 threshold was reached for BI, PR, PSG, PT, PU, RE, and TR, and rho_A likewise indicated moderate to high reliability, consistent with previous studies [86,87]. Given that rho_A conceptually falls between alpha and CR, its values above 0.70 for most constructs support the reliability findings of Sarstedt et al. [88] as well as the consistency criteria of Henseler et al. [87].
Convergent validity was also supported, since the AVE surpassed 0.50 for most constructs. Following Fornell et al. [89], AVE scores slightly below 0.50 were still considered acceptable when combined with CR > 0.60, a requirement met in those cases. Discriminant validity was established using the Fornell–Larcker criterion, as the square root of the AVE for each construct exceeded its inter-construct correlations, and was further supported by HTMT ratios below the conservative threshold of 0.85 suggested by Henseler et al. [87]. Overall, these diagnostics indicate good construct validity and strong internal consistency across the measurement model. Specific indices (alpha, rho_A, CR, AVE, inter-construct correlations, and HTMT) are presented in Table 3 and Table 4.
Table 3. HTMT ratio.
Table 4. Fornell and Larcker criterion.
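As an illustration of how these discriminant-validity checks can be computed from raw item data, the sketch below implements the AVE and the HTMT ratio of Henseler et al. [87] for a pair of constructs; the column names are hypothetical and the SmartPLS output remains the authoritative source for the reported values.

```python
# Sketch of Fornell-Larcker (via sqrt AVE) and HTMT for two item blocks.
import numpy as np
import pandas as pd

def ave(loadings: np.ndarray) -> float:
    """Average variance extracted from standardized outer loadings."""
    return float(np.mean(np.asarray(loadings) ** 2))

def htmt(data: pd.DataFrame, items_a: list, items_b: list) -> float:
    """Mean heterotrait correlation over the geometric mean of monotrait correlations."""
    corr = data[items_a + items_b].corr().abs()
    hetero = corr.loc[items_a, items_b].values.mean()
    mono_a = corr.loc[items_a, items_a].values[np.triu_indices(len(items_a), 1)].mean()
    mono_b = corr.loc[items_b, items_b].values[np.triu_indices(len(items_b), 1)].mean()
    return hetero / np.sqrt(mono_a * mono_b)

# Example (hypothetical column names):
# np.sqrt(ave([0.81, 0.78, 0.84, 0.72]))                  # compare with correlations
# htmt(df, ["PT1", "PT2", "PT3", "PT4"], ["TR1", "TR2", "TR3"])  # expect < 0.85
```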

4.3. Structural Model

The structural model was evaluated via the coefficient of determination, predictive relevance, and path significance [78]. The model accounted for 47.0% of the variance in behavioral intention (R2 = 0.470), 25.7% in psychological reactance (R2 = 0.257), and 19.5% in trust (R2 = 0.195), indicating moderate explanatory power. Predictive relevance was established, as the cross-validated redundancy (Q2) values of 0.231, 0.174, and 0.393 for the endogenous constructs are consistent with moderate-to-strong out-of-sample prediction.
Hypotheses were tested using nonparametric bootstrapping following Hair et al. [91], yielding path estimates and standard errors. Indirect effects were assessed with a bias-corrected, one-tailed bootstrap procedure with 10,000 resamples, as described by Preacher et al. [92] and Streukens et al. [93]. Overall, these diagnostics substantiate the model’s structural adequacy and predictive capability. Full estimates are presented in Table 5.
Table 5. Hypotheses testing.
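To make the resampling logic behind Table 5 and Table 6 concrete, the following minimal sketch illustrates a percentile bootstrap of one direct and one indirect path on standardized composite scores. It is an illustration only: SmartPLS re-estimates the full PLS path model in each resample and reports bias-corrected, one-tailed intervals, whereas this sketch uses ordinary least squares and a two-sided percentile interval; the DataFrame `scores` and its column names are hypothetical.

```python
# Sketch of a percentile bootstrap for a direct path (c') and an indirect
# effect (a*b) on standardized latent composite scores.
import numpy as np
import pandas as pd

rng = np.random.default_rng(42)

def path_and_indirect(df: pd.DataFrame, x: str, m: str, y: str):
    """Return (direct c', indirect a*b) from two OLS regressions."""
    a = np.polyfit(df[x].to_numpy(), df[m].to_numpy(), 1)[0]   # x -> m
    X = np.column_stack([df[m], df[x], np.ones(len(df))])      # m + x -> y
    b, c_prime, _ = np.linalg.lstsq(X, df[y].to_numpy(), rcond=None)[0]
    return c_prime, a * b

def bootstrap(df, x, m, y, n_boot=10_000, alpha=0.05):
    draws = np.empty((n_boot, 2))
    for i in range(n_boot):
        idx = rng.integers(0, len(df), len(df))                # resample with replacement
        draws[i] = path_and_indirect(df.iloc[idx], x, m, y)
    lo, hi = 100 * alpha / 2, 100 * (1 - alpha / 2)
    return {"direct_CI": np.percentile(draws[:, 0], [lo, hi]),
            "indirect_CI": np.percentile(draws[:, 1], [lo, hi])}

# Example (hypothetical composite-score DataFrame):
# bootstrap(scores, x="PT", m="TR", y="BI")
```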
Path coefficients (β), bootstrap standard errors (SE), t values, and exact p-values were used to test direct effects on BI. As expected, PT exhibited the highest positive relation with BI: β = 0.386, SE = 0.050, t = 7.75, p < 0.001 (H1 supported). In turn, TR was positively related to BI: β = 0.264, SE = 0.047, t = 5.68, p < 0.001 (H5a supported). PSG showed a smaller association with BI that was still statistically significant: β = 0.133, SE = 0.053, t = 2.52, p = 0.006 (H2 supported). In addition, PU demonstrated a modest positive relation with BI: β = 0.093, SE = 0.050, t = 1.85, p = 0.032 (H3 supported). In contrast, PR did not have a significant association with BI: β = −0.017, SE = 0.048, t = 0.34, p = 0.366 (H4 not supported), and RE was not a significant predictor: β = 0.064, SE = 0.047, t = 1.37, p = 0.085 (H5b not supported). In general, these findings indicate that BI is most strongly associated with perceptions of transparency and trust, with additional, smaller contributions from perceived safety/guardrails and usefulness. Neither risk nor reactance explained unique variance in BI in the presence of the other predictors.

4.4. Mediation Analysis Results

We adopted bias-corrected bootstrapping with 10,000 resamples to test indirect effects, reporting standardized coefficients (β), bootstrap standard errors (SE), t values, and p values. Total effects were also investigated to contextualize mediation in terms of direct paths (Table 6).
Table 6. Mediation analysis.
PT had a significant total effect on BI (β = 0.473, SE = 0.048, t = 9.93, p < 0.001), larger than its direct effect (β = 0.386, SE = 0.050, t = 7.75, p < 0.001). The indirect effect PT → trust (TR) → BI was significant (β = 0.071, SE = 0.020, t = 3.56, p < 0.001), supporting H6a with partial mediation, whereas the PT → psychological reactance (RE) → BI path was not significant (β = 0.016, SE = 0.013, t = 1.21, p = 0.113), so H6b was not supported. PSG showed a significant total effect on BI (β = 0.210, SE = 0.050, t = 4.17, p < 0.001) and a smaller but significant direct effect (β = 0.133, SE = 0.053, t = 2.52, p = 0.006). The indirect effect PSG → TR → BI was significant (β = 0.058, SE = 0.019, t = 2.97, p = 0.002), supporting H7a, whereas the PSG → RE → BI path was not (β = 0.020, SE = 0.016, t = 1.29, p = 0.098), so H7b was not supported; together these results indicate partial mediation through TR. PU had a significant total effect on BI (β = 0.104, SE = 0.053, t = 1.96, p = 0.025) and a modest direct effect (β = 0.093, SE = 0.050, t = 1.85, p = 0.032). However, the PU → TR → BI indirect effect did not reach statistical significance at α = 0.05 (β = 0.015, SE = 0.015, t = 1.02, p = 0.055), so H8a was not supported under conventional criteria, and the PU → RE → BI path was also nonsignificant (β = −0.004, SE = 0.005, t = 0.75, p = 0.227), so H8b was not supported. Overall, the results for PU are most consistent with no mediation. PR had a nonsignificant direct effect on BI (β = −0.017, SE = 0.048, t = 0.34, p = 0.366) and a nonsignificant total effect (β = −0.027, SE = 0.051, t = 0.53, p = 0.299). The indirect path PR → TR → BI was significant (β = −0.014, SE = 0.018, t = 0.78, p = 0.018), supporting H9a, whereas PR → RE → BI was not (β = 0.004, SE = 0.006, t = 0.57, p = 0.286), so H9b was not supported. Because the direct effect is nonsignificant while the indirect effect via TR is significant, the findings point to indirect-only (full) mediation via TR for PR.
Collectively, mediation occurred consistently via trust (partial for PT and PSG; indirect-only for PR), with no support for reactance-based indirect effects. These patterns are consistent with a risk–trust channel whereby interface signals that improve transparency and guardrails influence intention mainly through trust rather than through reactance.
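The decision rule underlying these labels can be summarized in a few lines; the sketch below encodes the partial versus indirect-only typology of Nitzl et al. [76] used above, taking the bootstrap significance of the direct and indirect paths as inputs.

```python
# Sketch of the mediation-labelling rule applied to the bootstrap results.
def mediation_type(direct_sig: bool, indirect_sig: bool) -> str:
    if indirect_sig and direct_sig:
        return "partial (complementary) mediation"   # e.g., PT -> TR -> BI
    if indirect_sig and not direct_sig:
        return "indirect-only (full) mediation"      # e.g., PR -> TR -> BI
    if direct_sig and not indirect_sig:
        return "direct-only (no mediation)"          # e.g., PU paths
    return "no effect"

# Example: mediation_type(direct_sig=True, indirect_sig=True)
```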

4.5. Neural Network Augmentation (Stage 2 ANN)

Conventional linear techniques, such as multiple regression and covariance- or variance-based SEM, are well suited to theory testing but often underrepresent the complexity of adoption decisions, which can involve non-linear, non-compensatory, and interactive relationships among perceptions of transparency, safety, risk, and trust. Complementing our linear structural tests, we therefore added a Stage 2 artificial neural network, a technique noted for flexible functional forms without strict distributional assumptions and for strong predictive accuracy in technology-adoption contexts [94,95,96]. Because ANNs are not designed for hypothesis testing and offer limited causal interpretability (the so-called "black-box" caveat [97,98]), we used them strictly as a second-stage, predictive complement once SEM had already identified the significant structural paths, following the hybrid, two-stage logic recommended in prior work [97].
We implemented a feed-forward multilayer perceptron (MLP) in IBM SPSS Statistics v.29, taking the continuous latent score for behavioral intention as the target [94,95,96]. Predictors were the latent composites perceived transparency, perceived safety/guardrails, perceived usefulness, perceived risk, trust, and reactance, entered as standardized covariates. Cases were randomly partitioned into 70% training/30% testing, with early stopping on the testing loss guarding against overfitting. Consonant with common guidance that one hidden layer is sufficient to approximate continuous functions in adoption models [99,100], we used one hidden layer with sigmoid activation; the output layer used identity (linear) activation, appropriate for a continuous criterion. The number of hidden neurons was set to a small intermediate size (between the number of inputs and the output), following common rules of thumb and a brief trial-and-error process to balance prediction accuracy and overfitting risk [94,95,96]. Optimization employed SPSS's scaled conjugate gradient backpropagation. Model outputs included the network diagram, case-processing summary, and predicted-versus-observed and residual-versus-predicted plots. Independent-variable importance was computed and normalized to assess the relative contribution of PT, PSG, PU, PR, TR, and RE to BI. To ensure robustness, ten MLPs with different random initializations (Networks 1–10 in Table 7) were trained and evaluated on the hold-out set, and the network with the lowest testing RMSE was subsequently used for the importance analysis.
Table 7. Multilayer perceptron performance across 10 runs.
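Although Stage 2 was run in SPSS, the setup can be approximated outside it. The sketch below, which assumes a hypothetical DataFrame `scores` of standardized latent composites, mirrors the 70/30 split, the single sigmoid hidden layer with identity output, and the selection of the best of ten random initializations by testing RMSE; note that scikit-learn's MLPRegressor does not offer scaled conjugate gradient, so L-BFGS is used here instead.

```python
# Sketch of the Stage 2 setup: ten single-hidden-layer MLPs, best run kept.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_squared_error

PREDICTORS = ["PT", "PSG", "PU", "PR", "TR", "RE"]

def run_networks(scores: pd.DataFrame, n_runs: int = 10):
    X, y = scores[PREDICTORS], scores["BI"]
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.30, random_state=1)
    results = []
    for seed in range(n_runs):                       # Networks 1-10
        mlp = MLPRegressor(hidden_layer_sizes=(4,),  # small intermediate size
                           activation="logistic",    # sigmoid hidden layer
                           solver="lbfgs",           # SPSS uses scaled conjugate gradient
                           max_iter=2000, random_state=seed)
        mlp.fit(X_tr, y_tr)
        rmse = mean_squared_error(y_te, mlp.predict(X_te)) ** 0.5
        results.append((seed, rmse, mlp))
    return min(results, key=lambda r: r[1])          # best network by testing RMSE

# Example: best_seed, best_rmse, best_mlp = run_networks(scores)
```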
Consistent with other SEM-ANN studies in technology-adoption research, and to remain well within the available sample of N = 365 at this exploratory stage, we restricted ourselves to a single-hidden-layer MLP architecture. Such networks are expressive enough to model continuous outcomes in small- to mid-sized datasets like ours, while carrying a much lower risk of overparameterization [94,95,96] than deeper or swarm-intelligence-optimized architectures more commonly applied to engineering prediction problems in applied rather than foundational research [96,101]. Likewise, we did not benchmark against other machine learning models (such as random forests or support vector machines), because our principal interest lies in the comparison with the linear SEM model and the ANN is used only to explore potential non-linear patterns of importance, not to propose or evaluate general-purpose predictive tools; such benchmarks would substantially broaden the technical scope of this manuscript. This semi-parametric approach combines the explanatory strengths of SEM, used to evaluate the theoretical model, with the ANN's capacity to detect thresholds, curvatures, or interaction-like patterns that may relate non-linearly to the response and lie beyond the reach of linear models [96,101,102]. The SEM coefficients and inference, together with the ANN's out-of-sample predictive accuracy and variable importance, are reported below to corroborate the relevance of the BI determinants (Figure 2).
Figure 2. ANN diagram.
We then fitted ten feed-forward multilayer perceptron (MLP) networks on the 70% training split with BI as the dependent variable and PT, PSG, PU, PR, TR, and RE as standardized input covariates. For each run, a different random initialization was used, and the remaining 30% of the sample was held out for testing. Across the ten networks (labeled Network 1–Network 10 in Table 7), the mean testing RMSE was 0.535 (SD = 0.027), indicating stable prediction accuracy with low variability (coefficient of variation ≈ 5%). The network with the lowest testing error was Network 4 (testing RMSE = 0.500 for 110 test cases), followed by Network 5 (RMSE = 0.503) and Network 8 (RMSE = 0.518). The average training RMSE was 1.969 (SD = 0.058). Testing error was consistently lower than training error, and together with the small dispersion across runs, this pattern suggests no substantial overfitting under the chosen architecture (one hidden layer, sigmoid hidden activation, and identity output).
Given that BI is a standardized latent score, an RMSE of about 0.50 implies an average prediction error of roughly half a standard deviation: moderate precision at the individual level, adequate for ranking or selection purposes but not for individual-level forecasting. The ANN results are consistent with the SEM in underscoring the joint, independent predictive value of the same set of perceptual predictors of BI. Network 4, the best-performing model in Table 7, was selected for the subsequent sensitivity and importance analyses.
To validate the SEM results, we also evaluated variable importance on the out-of-sample data for the ten multilayer perceptron models estimated with the same settings (70/30 split; one hidden layer; sigmoid hidden activation; identity output) (Table 8). Importance scores were scaled within each network and averaged across networks, and a normalized score was computed with perceived transparency (PT) set to 100%. In all models, trust (TR) and perceived transparency (PT) were the most important predictors of behavioral intention (BI). The mean normalized importance across models was PT = 100%, TR = 88%, perceived safety/guardrails (PSG) = 55%, perceived usefulness (PU) = 30%, psychological reactance (RE) = 26%, and perceived risk (PR) = 8%. Importance values were stable across random initializations (TR = 0.75 to 0.98, PSG = 0.36 to 0.81, PU = 0.17 to 0.41). This pattern mirrors the structural model: PT and TR carry the largest predictive value, followed by PSG, while PU and RE make smaller but still meaningful contributions and PR adds little predictive value in the ANN stage, consistent with its nonsignificant direct effect and weak indirect effect in the SEM. The ANN diagnostics therefore show that, even allowing for non-linear and possibly non-compensatory relationships, users' perceptions of transparency and their trust in the system matter most; guardrails are a secondary lever, usefulness matters mostly once credibility has been signaled, reactance plays only a modest role, and risk contributes little beyond the other perceptions. This is consistent with the RMSE analysis, where the best test RMSE was around 0.50, and with the dual-pathway account in which interface cues that increase perceived transparency and build trust are the strongest predictors of GenAI adoption in this setting.
Table 8. Neural network sensitivity analysis.
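As an approximation of this importance analysis outside SPSS, permutation importance on the hold-out set can be normalized so that the strongest predictor equals 100%. The sketch below, reusing the hypothetical `best_mlp`, `X_te`, and `y_te` from the earlier sketch, illustrates the idea; SPSS computes its own sensitivity-based importance, so the two need not match exactly.

```python
# Sketch of a normalized importance profile comparable to Table 8.
from sklearn.inspection import permutation_importance

def normalized_importance(model, X_test, y_test, predictors, n_repeats=30, seed=0):
    result = permutation_importance(model, X_test, y_test,
                                    n_repeats=n_repeats, random_state=seed)
    raw = dict(zip(predictors, result.importances_mean))
    top = max(max(raw.values()), 1e-12)              # guard against a zero maximum
    return {k: round(100 * v / top, 1) for k, v in raw.items()}  # top predictor = 100%

# Example: normalized_importance(best_mlp, X_te, y_te, PREDICTORS)
# Expected ordering in this study: PT near 100, then TR, PSG, PU, RE, PR.
```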

5. Discussion

5.1. Direct Relationships

The structural model highlights the importance of interface-level perceptions for the adoption of GenAI. PT showed the strongest association with BI, exceeding the effect sizes of PSG and PU. This is in line with user-focused XAI studies, which have found the traceability of reasoning, steps, and sources to be crucial proximal cues for trust and use once they are salient and well understood by end-users (e.g., [7,45]). In the current study, the effect size of PT is consistent with the notion that explainability perceptions, rather than the mere availability of explanations, drive downstream cognitive evaluations, including trust, usefulness, and users' propensity to adopt the technology.
Trust in automation (TR) was also strongly and positively associated with BI, confirming the general finding that trust is the proximal correlate of reliance across AI, robotics, and service-robot contexts (e.g., [12,13]). The joint importance of PT and TR indicates a close entwinement between explanation salience and confidence: users appear far more willing to rely on the system when they can see how it works (PT) and believe it will perform competently and in good faith (TR).
A smaller but significant association with BI was also found for PSG. This is consistent with the view that visible safeguards, refusals, and policy cues shape perceptions of assurance and reduce feelings of vulnerability among users seeking to protect their privacy or avoid harm [43,57]. The result also suggests that guardrails influence adoption largely by signaling the bounds of acceptable system behavior and the system's commitment to respecting those bounds. Such assurance signaling is theoretically consistent with, though weaker than, the stronger signal sent by PT: adoption benefits both from assurance that the system will stay within its limits and from an explanation of how it operates in everyday use.
PU had only a modest positive effect on BI. This result occupies a middle position in a split literature: on one hand, usefulness plays a prominent role in particular domains; on the other, its effect shrinks once credibility and risk are represented in the model. The current study indicates that usefulness provides explanatory value independent of PT, PSG, and TR, implying that users evaluate the system on both its value proposition and its credibility attributes. The magnitude of the effect is also informative: when working with competent but sometimes fallible agents, credibility, legibility, and reputation may carry greater weight than performance expectations, particularly for tasks involving reputational stakes or error penalties [71,74].
In contrast, neither perceived risk (PR) nor reactance (RE) predicted BI once the other variables were controlled for. The nonsignificant direct effect of PR is consistent with a trust-mediated interpretation (risk damages trust, which in turn reduces intention) rather than risk acting as a deterrent on its own. It is also consistent with the notion that risk salience is highly context-dependent, varying with the focus of the measure, from error costs to liability and privacy concerns. Two interpretations of the RE result seem plausible. First, in the predominantly retrospective, real-world-use scenarios examined here, GenAI interfaces may rarely threaten autonomy strongly enough to register on state-level reactance measures, given that users can easily ignore or opt out of the guidance offered. Second, although the items refer to "such assistants", they may tap a broader tendency to feel annoyed at being directed or limited, which may lack the granularity to capture specific design-level threats to freedom posed by guardrail or nudge designs. The nonsignificance of RE suggests that, under the interface conditions studied, the perceived threat to freedom remained well below the level needed to induce reactance, and that this operationalization likely falls between state-level and design-level notions of the construct. Future research should tease these apart by combining state-reactance measures with interface-level manipulations to test whether more design-sensitive operationalizations of reactance mediate the effect of safety cues on intention.

5.2. Mediation Analysis

The mediation results support the risk–trust view of GenAI adoption and clarify the respective functions of the two interface levers. Addressing the second research question, the effects of both PT and PSG on BI were partially mediated by TR: the PT → TR → BI and PSG → TR → BI paths were significant while the direct paths from PT and PSG to BI remained significant. This supports the user-focused XAI view that perceived explainability raises trust mainly when it speaks to feelings of competence and integrity (e.g., [7,45]), as well as the trust-in-automation view that trust is the proximal, discretionary mediator of adoption. The implication is that interface levers must operate through the trust mechanism: traceability and refusals improve adoption when they shape users' judgments of the system's competence and integrity, not merely when they inform users about the system.
Second, perceived risk (PR) showed a significant indirect-only effect with a nonsignificant direct effect on BI, a "full mediation" pattern with a clear theoretical interpretation: generalized downside perceptions such as privacy risks or error costs influence BI mostly through trust, so the deterrent works not by discouraging use directly but by undermining trust. A design implication is to convert risk management into trust-building cues rather than attempting to reduce risk perceptions in isolation [11,49].
Third, the reactance-based paths were consistently nonsignificant. Given existing findings that social mimicry, coercive monitoring, or restricted choice can provoke freedom-threat responses, this suggests that the interfaces considered here did not trigger the reactance process strongly enough to propagate to intention [16,19,21]. Two interpretations are possible: either the PT/PSG implementations defused threats to agency, with clear rules perceived as informative rather than restrictive, or the reactance process is context-dependent, emerging mainly in designs that emphasize human-like qualities, more restrictive refusals, or surveillance framing. The theoretical implication is that reactance is a highly design-dependent mechanism in GenAI adoption, consequential in contexts such as explicitly preventive electronic performance monitoring but not otherwise [19,47,52].
For PU, a small total effect and a modest direct effect on BI were found, with no significant mediation via TR (PU → TR → BI: p = 0.055, marginal at best). This nuance helps reconcile the mixed support in the literature: instrumental value matters, but once the credibility paths are specified it may add only modest, or even trivial, indirect value to BI unless the task is high-stakes or the claim of competence is ambiguous.

5.3. SEM-ANN Synthesis

The semi-parametric SEM-ANN approach clarifies how interface signals are linked with adoption and how important each signal is once non-linear, possibly non-compensatory relationships are allowed. In the SEM, trust consistently carried the effect of design perceptions onto intention: perceived transparency (PT) and perceived safety/guardrails (PSG) showed partial mediation via trust, perceived risk (PR) showed an indirect-only, fully mediated relationship via trust, and no mediated relationships involved psychological reactance (RE). These results support a risk–trust channel, a traceability-to-trust channel, and an apparently inactive freedom-threat process under the current interface conditions.
From the predictive side, the ANN sustains these conclusions. The mean hold-out RMSE across the ten multilayer perceptrons (0.535, SD = 0.027) indicated stable, moderate prediction accuracy at the individual level of behavioral intention, with the best run achieving 0.500. The sensitivity analysis ranked PT first and trust (TR) second (normalized importance of 100% and 88%, respectively), followed by a mid-tier contribution from PSG (55%), PU at 30%, a smaller but still incremental contribution from RE (26%), and PR (8%) adding only trivial predictive value beyond the other perceptions. Taken together, the SEM mediation and ANN importance profiles converge on a trust-centered appropriation mechanism in which transparency and TR are decisive, usefulness follows, and generalized risk exerts its influence largely through TR rather than by adjusting intention directly.
In terms of theoretical implications, there is reason to distinguish PT from PSG. Despite their common aggregation under "transparency", the current findings reveal related but distinct paths: traceability and comprehension for PT, and assurance and boundary interpretation for PSG. Both operate primarily by calibrating trust, consistent with user-focused XAI accounts in which perceived explainability must reach a salience threshold before it shapes reliance, while assurance cues anchor credibility. Second, the indirect-only pattern for PR places trust at the immediately decisive point of the benefit–risk calculus: risk deters primarily by undermining trust. Third, the absence of reactance effects indicates boundary conditions on freedom-threat explanations; reactance appears design-dependent rather than generally applicable, becoming consequential under stronger autonomy-threat framings (e.g., coercive monitoring, heavy anthropomorphic mimicry) documented elsewhere.
Looking more closely at perceived risk (PR), structural equation modeling (SEM) and the artificial neural network (ANN) yield complementary rather than contradictory findings. In the SEM, PR shows an indirect-only effect on intention mediated by trust, which fits risk–trust–intention models in which high perceived risk suppresses adoption chiefly by lowering trust rather than by directly reducing intention. In the ANN, PR receives low normalized importance (8%) when PT, TR, PSG, PU, and RE are considered simultaneously; in a general-use GenAI scenario spanning diverse applications, PR therefore appears far less consequential for out-of-sample prediction than transparency and trust perceptions. Together, these findings suggest that in this sample PR acts as an antecedent of trust rather than as a direct driver of intention, probably because concerns were diffuse and moderate rather than tied to specific high-stakes applications; they cannot speak conclusively to segments or domains (for example, expert users or specialized applications) where perceived risk is likely to be more salient. Although the structural paths and mediation patterns are grounded in theory, the empirical estimates rely on cross-sectional, retrospective self-reports, so the reported effects should be read as directional associations rather than causal paths. Bidirectional influences among perceived transparency, safety, risk, trust, and prior GenAI use remain possible and can only be addressed through longitudinal or experimental studies.

6. Practical Implications

The results point to pervasive trust-building paths to GenAI adoption, with perceived transparency (PT) and perceived safety/guardrails (PSG) as the main drivers, supported by, rather than replaced by, perceived usefulness (PU). The role of perceived risk (PR) remains mediated by trust, and reactance appears consequential only under more autonomy-threatening design conditions. The following steps are recommended for stakeholders.
Transparency and outcome-focused assurance, rather than broad disclosure, should be the priority. Standards and best practices should differentiate between traceability (the ability to follow steps, inspect sources, and understand stated limits) and guardrails (scope, refusal reasons, and policy notices). Certification or audit processes can formally evaluate traceability and guardrails for their respective contributions to trust-building. Since trust carries the effect of risk, risk communication should be refined into specific, brief notices linked to mitigation strategies rather than broad warnings, which can reduce confidence without enhancing understanding. When GenAI is involved in public services, explanation on request, with accessible and understandable reasons, should be required, and assurance should be provided by default in the interest of equitable adoption.
Roadmap and design allocations can focus on legibility features (reasoned justifications, citations, links to checked facts, and "why this answer" areas) and assurance features (scope definitions, refusal messages, safety/privacy overviews, and data-management controls). These should be treated as UX basics, not optional help text or nice-to-have polish. The ANN sensitivity analysis indicates that the largest adoption gains will come from PT improvements, with trust close behind, PSG as a secondary lever, and PU messaging working best once credibility cues are already in place. Teams should instrument and A/B test trust diagnostics on task outcomes, perceived explainability, and perceived safeguard clarity, watching for autonomy-threatening cues (such as overly restrictive nudges) that may provoke reactance or resistance from users who feel their agency has been usurped.
When incorporating GenAI into the curriculum, skill development should be linked with literacies of transparency (evaluating reasoning and sources) and assurance (evaluating limitations and refusals). Classroom policies should mirror the PSG logic, specifying acceptable uses, error boundary conditions, and academic integrity within the interface paradigm itself, perhaps with templates defining scope bounds and refusal logic. Because PU contributes only after credibility has formed, it should be embedded in a credible workflow of "explain → verify → apply" before efficiency gains are pursued.

7. Conclusions, Limitations, and Future Directions

This work advances a dual-path model of GenAI adoption in which interface-level cues influence intention primarily via trust. In the SEM, perceived transparency (PT) was the strongest direct predictor of intention, followed by the strong direct effect of trust (TR). The effect of perceived safety/guardrails (PSG) was more modest yet significant, and perceived usefulness (PU) added modestly to the model. Neither perceived risk (PR) nor reactance (RE) explained unique variance beyond the other variables in the SEM. The mediation analyses revealed that PT and PSG operated partially via trust, while the effect of perceived risk operated fully and indirectly via trust, confirming the proposed risk-to-trust mechanism. None of the hypothesized mediational paths involving reactance was supported. The Stage 2 ANN diagnostics supported the structural model: PT, followed by TR, had clear predictive priority, PSG, PU, and RE offered modest incremental predictive value, and PR added little beyond the other variables.
Several limitations suggest promising lines of future work. First, the study relied on a stratified purposive quota sample recruited through an online panel of EU citizens with previous experience of GenAI systems. External validity is therefore limited to settings with comparable regulatory and cultural conditions, and the findings cannot speak to populations unexposed to GenAI. Respondents also differed in experience level, with many low- or moderate-experience users alongside more expert ones, which could dampen domain-specific patterns identifiable among expert or "power" users. For this article's purposes, however, the focus was on how well the core perceptual pathway (PT, PSG, PU, PR → TR/RE → BI) could be explained and predicted through a SEM–ANN hybrid, rather than on exhaustively probing subgroup differences among EU users of GenAI. Second, the study relied on a single wave of cross-sectional, self-report data, limiting causal interpretation. In particular, the measures capture intention at one point in time and do not distinguish users who maintained, escalated, or discontinued GenAI use after initial adoption. Future studies would benefit from experience-sampling or longitudinal designs with multiple waves to separate initial adoption from persistent patterns (for example, sustained use versus dropout). Combining such designs with field experiments on transparency cues would further clarify cause and effect. Field A/B tests independently manipulating transparency (traceability, source links, step-by-step rationales) and guardrails (scope boundaries, refusal rationales, policy prompts) could elucidate thresholds, diminishing marginal returns, and context-dependent trade-offs between them. Panel studies alongside this A/B work would trace how perceptions of PT, PSG, risk, trust, and reactance unfold over time and would allow treatment periods or A/B groups to be distinguished. Risk and agency also deserve attention as intervening variables treated on their own multifaceted terms rather than as oversimplified one-dimensional constructs, with attention to their specific facets, particularly on the risk side [11,49,75]. Because freedom-threat responses are design-sensitive, simultaneous manipulations of trait reactance and autonomy threats (for example, preventive versus developmental framing, choice architectures, override controls) can help disambiguate when the freedom–trust link is activated and how agency-restoring attributes shape it [32,75]. Future models should also include outcomes beyond intention, namely the quality of reliance: well-calibrated trust, second opinions, error correction, and human–AI cooperation effectiveness.
These outcomes are relevant to analysts concerned with the policy and behavioral impacts on safety, who may identify conditions under which greater system openness leads to over-reliance unless it is coupled with well-engineered safety bounds or just-in-time warnings with adjustable levels over time. The predictive pipeline can also be strengthened: alongside the single-hidden-layer MLP, gradient boosting machines (GBM), generalized additive models (GAM), or calibrated logistic/ordinal models could be compared using repeated k-fold cross-validation, temporal holdouts, or learning curves [35]. Post hoc methods (permutation importance, partial dependence, and accumulated local effects) or sparse additive models can help reconcile the sometimes-conflicting demands of accuracy and explainability, while SEM with interaction terms among the underlying constructs, or LMS/PLS product indicators, can probe the non-linear synergies that the ANN results imply.
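As a sketch of the benchmarking proposed here, repeated k-fold cross-validation comparing the single-hidden-layer MLP with, for example, a gradient-boosting baseline could look as follows; the model choices, hyperparameters, and column names are illustrative assumptions rather than prescriptions.

```python
# Sketch of repeated k-fold cross-validation comparing two candidate learners
# on the same latent composite scores (mean RMSE per model, lower is better).
from sklearn.model_selection import RepeatedKFold, cross_val_score
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.neural_network import MLPRegressor

def compare_models(X, y):
    cv = RepeatedKFold(n_splits=5, n_repeats=10, random_state=7)
    models = {
        "mlp": MLPRegressor(hidden_layer_sizes=(4,), activation="logistic",
                            solver="lbfgs", max_iter=2000, random_state=7),
        "gbm": GradientBoostingRegressor(random_state=7),
    }
    return {name: -cross_val_score(m, X, y, cv=cv,
                                   scoring="neg_root_mean_squared_error").mean()
            for name, m in models.items()}

# Example: compare_models(scores[PREDICTORS], scores["BI"])
```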
In conclusion, our findings outline the promise within these complex systems: if machines can reveal their process and stay within their bounds, people will meet them halfway. Trust is neither simply on nor off; it is a road walked together, guided by reason and lined with the gentle hand of human moderation. System designs that hold to these lines, showing users just enough to understand the process and stopping in time for users to feel secure, may turn GenAI from a clever tool into a trusted sidekick in the work we do each day. When these systems can speak more clearly and say "no" at the right moment, users may learn to aim better and build higher with confidence.

Author Contributions

Conceptualization, S.B. and I.S.; methodology, S.B., I.S. and G.A.; software, S.B.; validation, S.B.; formal analysis, S.B.; data curation, S.B.; writing—original draft preparation, S.B.; writing—review and editing, S.B. and G.A.; visualization, S.B.; supervision, S.B., I.S. and G.A. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

This study was conducted in accordance with the Declaration of Helsinki and approved by the Research Ethics and Deontology Committee (E.H.D.E.) of Democritus University of Thrace on 22 October 2025 (Protocol/Ref. No.: 16216/175).

Data Availability Statement

The data presented in this study are available on request from the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

Appendix A

Table A1. Measurement Scales.
Perceived Transparency (PT) (adapted from Scharowski et al. [12] and Yu et al. [14])
PT1. I could tell how the assistant arrived at its answers.
PT2. The assistant’s explanations were clear and specific.
PT3. I could verify sources or evidence behind its responses.
PT4. It was easy to follow the steps the assistant took.
Perceived Safety/Guardrails (PSG) (adapted from Tolsdorf et al. [71] and Wu et al. [72])
PSG1. The assistant made it clear what it would and would not do.
PSG2. I felt that safeguards were in place to prevent harmful advice.
PSG3. The assistant warned about limitations when appropriate.
PSG4. I was informed about policy or safety rules relevant to my use.
Perceived Usefulness (PU) (adapted from Ibrahim et al. [73] and Abdalla [74])
PU1. Using such an assistant improves my efficiency.
PU2. It enhances the quality of my work.
PU3. Overall, a GenAI assistant is useful for my tasks.
PU4. When available, it helps me get things done more effectively.
Perceived Risk (PR) (adapted from Ibrahim et al. [73] and Abdalla [74])
PR1. Using such assistants could lead to serious mistakes for me.
PR2. I worry about negative consequences if I rely on them.
PR3. I feel vulnerable to errors when using them.
PR4. For my needs, the downsides can outweigh the benefits.
Psychological Reactance (RE) (adapted from Jin [15] and Heatherly et al. [16])
RE1. I feel irritated when an assistant tries to steer me toward a particular action.
RE2. I feel pressured when it labels one option as the ‘best’.
RE3. I resist when the assistant restricts what I can ask or do.
RE4. I dislike being told what I should do by such systems. (deleted)
Trust in Automation (TR) (adapted from McGrath et al. [13])
TR1. I trust these assistants to act in my best interest.
TR2. I consider such assistants reliable.
TR3. I believe these assistants provide accurate information.
TR4. I feel confident relying on such assistants for tasks I care about. (deleted)
Behavioral Intention (BI) (adapted from Lai et al. [75])
BI1. I intend to use such assistants in the next 3 months.
BI2. I will frequently use them when available.
BI3. I would recommend using these assistants to others.

References

  1. Chen, J.; Xie, W.; Xie, Q.; Hu, A.; Qiao, Y.; Wan, R.; Liu, Y. A Systematic Review of User Attitudes Toward GenAI: Influencing Factors and Industry Perspectives. J. Intell. 2025, 13, 78. [Google Scholar] [CrossRef] [PubMed]
  2. Su, J.; Wang, Y.; Liu, H.; Zhang, Z.; Wang, Z.; Li, Z. Investigating the Factors Influencing Users’ Adoption of Artificial Intelligence Health Assistants Based on an Extended UTAUT Model. Sci. Rep. 2025, 15, 18215. [Google Scholar] [CrossRef] [PubMed]
  3. Park, D.H.; Jiang, Q.; Ko, E.; Son, S.C.; Kim, K.H. AI Transformation and AI Adoption Intention in B2B Environment. J. Glob. Sch. Mark. Sci. Bridg. Asia World 2025, 35, 439–457. [Google Scholar] [CrossRef]
  4. Ali, I.; Warraich, N.F.; Butt, K. Acceptance and Use of Artificial Intelligence and AI-Based Applications in Education: A Meta-Analysis and Future Direction. Inf. Dev. 2024, 41, 859–874. [Google Scholar] [CrossRef]
  5. Park, K.; Young Yoon, H. AI Algorithm Transparency, Pipelines for Trust Not Prisms: Mitigating General Negative Attitudes and Enhancing Trust toward AI. Humanit. Soc. Sci. Commun. 2025, 12, 1160. [Google Scholar] [CrossRef]
  6. Wang, Q.; Madaio, M.; Kane, S.; Kapania, S.; Terry, M.; Wilcox, L. Designing Responsible AI: Adaptations of UX Practice to Meet Responsible AI Challenges. In Proceedings of the Conference on Human Factors in Computing Systems-Proceedings, Association for Computing Machinery, New York, NY, USA, 19 April 2023. [Google Scholar]
  7. Hu, A.; Ou, M. From Passive to Active: How Does Algorithm Awareness Affect Users’ News Seeking Behavior on Digital Platforms. Telemat. Inform. 2025, 100, 102291. [Google Scholar] [CrossRef]
  8. Nikolic, S.; Wentworth, I.; Sheridan, L.; Moss, S.; Duursma, E.; Jones, R.A.; Ros, M.; Middleton, R. A Systematic Literature Review of Attitudes, Intentions and Behaviours of Teaching Academics Pertaining to AI and Generative AI (GenAI) in Higher Education: An Analysis of GenAI Adoption Using the UTAUT Framework. Australas. J. Educ. Technol. 2024, 40, 56–75. [Google Scholar]
  9. Yuviler-Gavish, N.; Halutz, R.; Neta, L. How Whatsappization of the Chatbot Affects Perceived Ease of Use, Perceived Usefulness, and Attitude toward Using in a Drive-Sharing Task. Comput. Human. Behav. Rep. 2024, 16, 100546. [Google Scholar] [CrossRef]
  10. Chen, C.; Gong, X.; Liu, Z.; Jiang, W.; Goh, S.Q.; Lam, K.Y. Trustworthy, Responsible, and Safe AI: A Comprehensive Architectural Framework for AI Safety with Challenges and Mitigations. arXiv 2025, arXiv:2408.12935. [Google Scholar]
  11. Silva, S.C.; De Cicco, R.; Vlačić, B.; Elmashhara, M.G. Using Chatbots in E-Retailing–How to Mitigate Perceived Risk and Enhance the Flow Experience. Int. J. Retail. Distrib. Manag. 2023, 51, 285–305. [Google Scholar] [CrossRef]
  12. Scharowski, N.; Perrig, S.A.C.; Svab, M.; Opwis, K.; Brühlmann, F. Exploring the Effects of Human-Centered AI Explanations on Trust and Reliance. Front. Comput. Sci. 2023, 5, 1151150. [Google Scholar] [CrossRef]
  13. McGrath, M.J.; Lack, O.; Tisch, J.; Duenser, A. Measuring Trust in Artificial Intelligence: Validation of an Established Scale and Its Short Form. Front. Artif. Intell. 2025, 8, 1582880. [Google Scholar] [CrossRef]
  14. Yu, L.; Li, Y. Artificial Intelligence Decision-Making Transparency and Employees’ Trust: The Parallel Multiple Mediating Effect of Effectiveness and Discomfort. Behav. Sci. 2022, 12, 127. [Google Scholar] [CrossRef]
  15. Jin, S.V. “To Comply or to React, That Is the Question:” The Roles of Humanness versus Eeriness of AI-Powered Virtual Influencers, Loneliness, and Threats to Human Identities in AI-Driven Digital Transformation. Comput. Human. Behav. Artif. Hum. 2023, 1, 100011. [Google Scholar] [CrossRef]
  16. Heatherly, M.; Baker, D.A.; Canfield, C. Don’t Touch That Dial: Psychological Reactance, Transparency, and User Acceptance of Smart Thermostat Setting Changes. PLoS ONE 2023, 18, e0289017. [Google Scholar] [CrossRef]
  17. Oh, J.; Nah, S.; Yang, Z.D. How Autonomy of Artificial Intelligence Technology and User Agency Influence AI Perceptions and Attitudes: Applying the Theory of Psychological Reactance. J. Broadcast. Electron. Media 2025, 69, 161–182. [Google Scholar] [CrossRef]
  18. Singh, C.; Dash, M.K.; Sahu, R.; Kumar, A. Investigating the Acceptance Intentions of Online Shopping Assistants in E-Commerce Interactions: Mediating Role of Trust and Effects of Consumer Demographics. Heliyon 2024, 10, e25031. [Google Scholar] [CrossRef]
  19. Hong, X.; Pan, L.; Xu, M.; Chen, Q. Escaping from the Echo Chamber: Understanding User Behavior from the Perspective of Psychological Reactance Theory. Inf. Technol. People 2025, 1–27. [Google Scholar] [CrossRef]
  20. Liu, W.; Wang, Y. Evaluating Trust in Recommender Systems: A User Study on the Impacts of Explanations, Agency Attribution, and Product Types. Int. J. Hum. Comput. Interact. 2025, 41, 1280–1292. [Google Scholar] [CrossRef]
  21. Wang, J.; Zheng, W.; Zhang, L.; Wu, Y.J. How Organizational Electronic Performance Monitoring Affects Employee Proactive Behaviors: The Psychological Reactance Perspective. Int. J. Hum. Comput. Interact. 2025, 41, 1902–1916. [Google Scholar] [CrossRef]
  22. Chamola, V.; Hassija, V.; Sulthana, A.R.; Ghosh, D.; Dhingra, D.; Sikdar, B. A Review of Trustworthy and Explainable Artificial Intelligence (XAI). IEEE Access 2023, 11, 78994–79015. [Google Scholar] [CrossRef]
  23. Li, S.; Zhang, H.; Du, Z. Factors Influencing College Students’ Willingness to Use Generative Artificial Intelligence Tools—Based on the UTAUT Model. In Proceedings of the 2025 11th International Conference on Education and Training Technologies, ICETT 2025, Macao, China, 23–25 May 2025; pp. 57–70. [Google Scholar]
  24. Naveed, S.; Stevens, G.; Robin-Kern, D. An Overview of the Empirical Evaluation of Explainable AI (XAI): A Comprehensive Guideline for User-Centered Evaluation in XAI. Appl. Sci. 2024, 14, 11288. [Google Scholar] [CrossRef]
  25. Nikiforidis, K.; Kyrtsoglou, A.; Vafeiadis, T.; Kotsiopoulos, T.; Nizamis, A.; Ioannidis, D.; Votis, K.; Tzovaras, D.; Sarigiannidis, P. Enhancing Transparency and Trust in AI-Powered Manufacturing: A Survey of Explainable AI (XAI) Applications in Smart Manufacturing in the Era of Industry 4.0/5.0. ICT Express 2025, 11, 135–148. [Google Scholar] [CrossRef]
  26. Shin, D. The Effects of Explainability and Causability on Perception, Trust, and Acceptance: Implications for Explainable AI. Int. J. Human. Comput. Stud. 2021, 146, 102551. [Google Scholar] [CrossRef]
  27. Fang, X.; Zhou, H.; Chen, S. A Replication of Explaining Why the Computer Says No: Algorithmic Transparency Affects the Perceived Trustworthiness of Automated Decision-Making. Public. Adm. 2025. [Google Scholar] [CrossRef]
  28. Schor, B.G.S.; Norval, C.; Charlesworth, E.; Singh, J. Mind The Gap: Designers and Standards on Algorithmic System Transparency for Users. In Proceedings of the Conference on Human Factors in Computing Systems-Proceedings, Association for Computing Machinery, New York, NY, USA, 11 May 2024. [Google Scholar]
  29. Granić, A. Emerging Drivers of Adoption of Generative AI Technology in Education: A Review. Appl. Sci. 2025, 15, 6968. [Google Scholar] [CrossRef]
  30. Wang, Y.; Yu, R. Exploring the Factors on the Acceptance of Generative Artificial Intelligence Teaching Assistants: The Perspective of Technology Acceptance Model. Int. J. Hum. Comput. Interact. 2025, 1–15. [Google Scholar] [CrossRef]
  31. Kim, C. Understanding Factors Influencing Generative AI Use Intention: A Bayesian Network-Based Probabilistic Structural Equation Model Approach. Electronics 2025, 14, 530. [Google Scholar] [CrossRef]
  32. Balaskas, S.; Tsiantos, V.; Chatzifotiou, S.; Rigou, M. Determinants of ChatGPT Adoption Intention in Higher Education: Expanding on TAM with the Mediating Roles of Trust and Risk. Information 2025, 16, 82. [Google Scholar] [CrossRef]
  33. Hong, S.J. What Drives AI-Based Risk Information-Seeking Intent? Insufficiency of Risk Information versus (Un)Certainty of AI Chatbots. Comput. Human. Behav. 2025, 162, 108460. [Google Scholar] [CrossRef]
  34. Theis, S.; Jentzsch, S.; Deligiannaki, F.; Berro, C.; Raulf, A.P.; Bruder, C. Requirements for Explainability and Acceptance of Artificial Intelligence in Collaborative Work. In Artificial Intelligence in HCI; Lecture Notes in Computer Science; Springer: Berlin/Heidelberg, Germany, 2023; Volume 14050. [Google Scholar]
  35. Florentin, G.; Mvondo, N.; Niu, B. Exploring User Acceptance of Portable Intelligent Personal Assistants: A Hybrid Approach Using PLS-SEM And FsQCA. arXiv 2024, arXiv:2408.17119. [Google Scholar] [CrossRef]
  36. Kim, Y.; Blazquez, V.; Oh, T. Determinants of Generative AI System Adoption and Usage Behavior in Korean Companies: Applying the UTAUT Model. Behav. Sci. 2024, 14, 1035. [Google Scholar] [CrossRef]
  37. Constantinides, M.; Bogucka, E.; Quercia, D.; Kallio, S.; Tahaei, M. RAI Guidelines: Method for Generating Responsible AI Guidelines Grounded in Regulations and Usable by (Non-)Technical Roles. Proc. ACM Hum. Comput. Interact. 2024, 8, 1–28. [Google Scholar] [CrossRef]
  38. Rong, Y.; Leemann, T.; Nguyen, T.T.; Fiedler, L.; Qian, P.; Unhelkar, V.; Seidel, T.; Kasneci, G.; Kasneci, E. Towards Human-Centered Explainable AI: A Survey of User Studies for Model Explanations. IEEE Trans. Pattern Anal. Mach. Intell. 2024, 46, 2104–2122. [Google Scholar] [CrossRef]
  39. Vorm, E.S.; Combs, D.J.Y. Integrating Transparency, Trust, and Acceptance: The Intelligent Systems Technology Acceptance Model (ISTAM). Int. J. Hum. Comput. Interact. 2022, 38, 1828–1845. [Google Scholar] [CrossRef]
  40. Prasad, K.D.V.; De, T. Generative AI as a Catalyst for HRM Practices: Mediating Effects of Trust. Humanit. Soc. Sci. Commun. 2024, 11, 1–16. [Google Scholar] [CrossRef]
  41. Ochmann, J.; Michels, L.; Tiefenbeck, V.; Maier, C.; Laumer, S. Perceived Algorithmic Fairness: An Empirical Study of Transparency and Anthropomorphism in Algorithmic Recruiting. Inf. Syst. J. 2024, 34, 384–414. [Google Scholar] [CrossRef]
  42. Yakubu, M.N.; David, N.; Abubakar, N.H. Students’ Behavioural Intention to Use Content Generative AI for Learning and Research: A UTAUT Theoretical Perspective. Educ. Inf. Technol. 2025, 30, 17969–17994. [Google Scholar] [CrossRef]
  43. Moghavvemi, S.; Jam, F.A. Unraveling the Influential Factors Driving Persistent Adoption of ChatGPT in Learning Environments. Educ. Inf. Technol. 2025, 30, 1–28. [Google Scholar] [CrossRef]
  44. Xu, Y.; Bradford, N.; Garg, R. Transparency Enhances Positive Perceptions of Social Artificial Intelligence. Hum. Behav. Emerg. Technol. 2023, 2023, 5550418. [Google Scholar] [CrossRef]
  45. Hamm, P.; Klesel, M.; Coberger, P.; Wittmann, H.F. Explanation Matters: An Experimental Study on Explainable AI. Electronic Markets 2023, 33, 17. [Google Scholar] [CrossRef]
  46. Mathew, D.E.; Ebem, D.U.; Ikegwu, A.C.; Ukeoma, P.E.; Dibiaezue, N.F. Recent Emerging Techniques in Explainable Artificial Intelligence to Enhance the Interpretable and Understanding of AI Models for Human. Neural Process Lett. 2025, 57, 16. [Google Scholar] [CrossRef]
  47. Montecchi, M.; Plangger, K.; West, D.; de Ruyter, K. Perceived Brand Transparency: A Conceptualization and Measurement Scale. Psychol. Mark. 2024, 41, 2274–2297. [Google Scholar] [CrossRef]
  48. Wang, Y.; Sauka, K.; Situmeang, F.B.I. Anthropomorphism and Transparency Interplay on Consumer Behaviour in Generative AI-Driven Marketing Communication. J. Consum. Mark. 2025, 42, 512–536. [Google Scholar] [CrossRef]
  49. Hasan, S.; Godhuli, E.R.; Rahman, M.S.; Mamun, M.A. Al The Adoption of Conversational Assistants in the Banking Industry: Is the Perceived Risk a Moderator? Heliyon 2023, 9, e20220. [Google Scholar] [CrossRef]
  50. Roca, M.D.L.; Chan, M.M.; Garcia-Cabot, A.; Garcia-Lopez, E.; Amado-Salvatierra, H. The Impact of a Chatbot Working as an Assistant in a Course for Supporting Student Learning and Engagement. Comput. Appl. Eng. Educ. 2024, 32, e22750. [Google Scholar] [CrossRef]
  51. Cabero-Almenara, J.; Palacios-Rodríguez, A.; Rojas Guzmán, H.d.l.Á.; Fernández-Scagliusi, V. Prediction of the Use of Generative Artificial Intelligence Through ChatGPT Among Costa Rican University Students: A PLS Model Based on UTAUT2. Appl. Sci. 2025, 15, 3363. [Google Scholar] [CrossRef]
  52. Tomas, F.; Immerzeel, J. Chatbots in Eyewitness Interviews: Perceived Usefulness and Ease of Use Drive Intent to Use Conversational Agent. J. Crim. Psychol. 2025. [Google Scholar] [CrossRef]
  53. Jo, H. Continuance Intention to Use Artificial Intelligence Personal Assistant: Type, Gender, and Use Experience. Heliyon 2022, 8, e10662. [Google Scholar] [CrossRef] [PubMed]
  54. Salih, L.; Tarhini, A.; Acikgoz, F. AI-Enabled Service Continuance: Roles of Trust and Privacy Risk. J. Comput. Inf. Syst. 2025, 1–16. [Google Scholar] [CrossRef]
  55. Park, D.Y.; Kim, H. Determinants of Intentions to Use Digital Mental Healthcare Content among University Students, Faculty, and Staff: Motivation, Perceived Usefulness, Perceived Ease of Use, and Parasocial Interaction with AI Chatbot. Sustainability 2023, 15, 872. [Google Scholar] [CrossRef]
  56. Marjerison, R.K.; Dong, H.; Kim, J.M.; Zheng, H.; Zhang, Y.; Kuan, G. Understanding User Acceptance of AI-Driven Chatbots in China’s E-Commerce: The Roles of Perceived Authenticity, Usefulness, and Risk. Systems 2025, 13, 71. [Google Scholar] [CrossRef]
  57. Seo, K.H.; Lee, J.H. The Emergence of Service Robots at Restaurants: Integrating Trust, Perceived Risk, and Satisfaction. Sustainability 2021, 13, 4431. [Google Scholar] [CrossRef]
  58. Pinto, A.; Sousa, S.; Simões, A.; Santos, J. A Trust Scale for Human-Robot Interaction: Translation, Adaptation, and Validation of a Human Computer Trust Scale. Hum. Behav. Emerg. Technol. 2022, 2022, 6437441. [Google Scholar] [CrossRef]
  59. Guo, J. Exploring College Students’ Resistance to Mandatory Use of Sports Apps: A Psychological Reactance Theory Perspective. Front. Psychol. 2024, 15, 1366164. [Google Scholar] [CrossRef]
  60. Xie, Z.; Han, J.; Liu, J.; Guan, W. Short-Form Video Addiction of Students with Hearing Impairments: The Roles of Demographics, Parental Psychological Control, and Psychological Reactance. J. Autism Dev. Disord. 2025, 1–12. [Google Scholar] [CrossRef]
  61. Maduku, D.K.; Rana, N.P.; Mpinganjira, M.; Thusi, P. Exploring the ‘Dark Side’ of AI-Powered Digital Assistants: A Moderated Mediation Model of Antecedents and Outcomes of Perceived Creepiness. J. Consum. Behav. 2025, 24, 1194–1221. [Google Scholar] [CrossRef]
  62. Spector, P.E. Do Not Cross Me: Optimizing the Use of Cross-Sectional Designs. J. Bus. Psychol. 2019, 34, 125–137. [Google Scholar] [CrossRef]
  63. Olsen, C.; St George, D.M.M. Cross-Sectional Study Design and Data Analysis. Coll. Entr. Exam. Board 2004, 26, 2006. [Google Scholar]
  64. Kesmodel, U.S. Cross-sectional Studies–What Are They Good For? Acta Obstet. Gynecol. Scand. 2018, 97, 388–393. [Google Scholar] [CrossRef]
  65. Brewer, K.R.W. Design-Based or Prediction-Based Inference? Stratified Random vs Stratified Balanced Sampling. Int. Stat. Rev. 1999, 67, 35–47. [Google Scholar] [CrossRef]
  66. Ding, C.S.; Haieh, C.T.; Wu, Q.; Pedram, M. Stratified Random Sampling for Power Estimation. In Proceedings of the International Conference on Computer Aided Design, San Jose, CA, USA, 10–14 November 1996; pp. 576–582. [Google Scholar]
  67. Lynn, P. The Advantage and Disadvantage of Implicitly Stratified Sampling. Methods Data Anal. A J. Quant. Methods Surv. Methodol. 2019, 13, 253–266. [Google Scholar] [CrossRef]
  68. Kock, N.; Hadaya, P. Minimum Sample Size Estimation in PLS-SEM: The Inverse Square Root and Gamma-exponential Methods. Inf. Syst. J. 2018, 28, 227–261. [Google Scholar] [CrossRef]
  69. Memon, M.A.; Ting, H.; Cheah, J.-H.; Thurasamy, R.; Chuah, F.; Cham, T.H. Sample Size for Survey Research: Review and Recommendations. J. Appl. Struct. Equ. Model. 2020, 4, i-xx. [Google Scholar] [CrossRef]
  70. Rahman, M.M. Sample Size Determination for Survey Research and Non-Probability Sampling Techniques: A Review and Set of Recommendations. J. Entrep. Bus. Econ. 2023, 11, 42–62. [Google Scholar]
  71. Tolsdorf, J.; Luo, A.F.; Kodwani, M.; Eum, J.; Mazurek, M.L.; Aviv, A.J. Safety Perceptions of Generative AI Conversational Agents: Uncovering Perceptual Differences in Trust, Risk, and Fairness; USENIX Association: Berkeley, CA, USA, 2025; ISBN 978-1-939133-51-9. [Google Scholar]
  72. Wu, P.F.; Summers, C.; Panesar, A.; Kaura, A.; Zhang, L. AI Hesitancy and Acceptability—Perceptions of AI Chatbots for Chronic Health Management and Long COVID Support: Survey Study. JMIR Hum. Factors 2024, 11, e51086. [Google Scholar] [CrossRef]
  73. Ibrahim, F.; Münscher, J.C.; Daseking, M.; Telle, N.T. The Technology Acceptance Model and Adopter Type Analysis in the Context of Artificial Intelligence. Front. Artif. Intell. 2024, 7, 1496518. [Google Scholar] [CrossRef] [PubMed]
  74. Abdalla, R.A.M. Examining Awareness, Social Influence, and Perceived Enjoyment in the TAM Framework as Determinants of ChatGPT. Personalization as a Moderator. J. Open Innov. Technol. Mark. Complex. 2024, 10, 100327. [Google Scholar] [CrossRef]
  75. Lai, C.Y.; Cheung, K.Y.; Chan, C.S.; Law, K.K. Integrating the Adapted UTAUT Model with Moral Obligation, Trust and Perceived Risk to Predict ChatGPT Adoption for Assessment Support: A Survey with Students. Comput. Educ. Artif. Intell. 2024, 6, 100246. [Google Scholar] [CrossRef]
  76. Nitzl, C.; Roldan, J.L.; Cepeda, G. Mediation Analysis in Partial Least Squares Path Modeling: Helping Researchers Discuss More Sophisticated Models. Ind. Manag. Data Syst. 2016, 116, 1849–1864. [Google Scholar] [CrossRef]
  77. Hair, J.; Alamer, A. Partial Least Squares Structural Equation Modeling (PLS-SEM) in Second Language and Education Research: Guidelines Using an Applied Example. Res. Methods Appl. Linguist. 2022, 1, 100027. [Google Scholar] [CrossRef]
  78. Hair, J.F.; Ringle, C.M.; Sarstedt, M. PLS-SEM: Indeed a Silver Bullet. J. Mark. Theory Pract. 2011, 19, 139–152. [Google Scholar] [CrossRef]
  79. Wong, K.K.-K. Partial Least Squares Structural Equation Modeling (PLS-SEM) Techniques Using SmartPLS. Mark. Bull. 2013, 24, 1–32. [Google Scholar]
  80. Podsakoff, P.M.; MacKenzie, S.B.; Podsakoff, N.P. Sources of Method Bias in Social Science Research and Recommendations on How to Control It. Annu. Rev. Psychol. 2012, 63, 539–569. [Google Scholar] [CrossRef] [PubMed]
  81. Kock, N. Common Method Bias in PLS-SEM: A Full Collinearity Assessment Approach. Int. J. e-Collab. (IJeC) 2015, 11, 1–10. [Google Scholar] [CrossRef]
  82. Vinzi, V.E.; Chin, W.W.; Henseler, J.; Wang, H. Handbook of Partial Least Squares; Springer: Berlin/Heidelberg, Germany, 2010; Volume 201. [Google Scholar]
  83. Chin, W.W. The Partial Least Squares Approach to Structural Equation Modeling. Mod. Methods Bus. Res. 1998, 295, 295–336. [Google Scholar]
  84. Gefen, D.; Straub, D. A Practical Guide to Factorial Validity Using PLS-Graph: Tutorial and Annotated Example. Commun. Assoc. Inf. Syst. 2005, 16, 5. [Google Scholar] [CrossRef]
  85. Wasko, M.M.; Faraj, S. Why Should I Share? Examining Social Capital and Knowledge Contribution in Electronic Networks of Practice. MIS Q. 2005, 29, 35–57. [Google Scholar] [CrossRef]
  86. Reichenheim, M.E.; Hökerberg, Y.H.M.; Moraes, C.L. Assessing Construct Structural Validity of Epidemiological Measurement Tools: A Seven-Step Roadmap. Cad. Saude Publica 2014, 30, 927–939. [Google Scholar] [CrossRef]
  87. Henseler, J.; Ringle, C.M.; Sarstedt, M. A New Criterion for Assessing Discriminant Validity in Variance-Based Structural Equation Modeling. J. Acad. Mark. Sci. 2015, 43, 115–135. [Google Scholar] [CrossRef]
  88. Sarstedt, M.; Henseler, J.; Ringle, C.M. Multigroup Analysis in Partial Least Squares (PLS) Path Modeling: Alternative Methods and Empirical Results. In Measurement and Research Methods in International Marketing; Emerald Group Publishing Limited: Leeds, UK, 2011; pp. 195–218. ISSN 1474-7979. [Google Scholar]
  89. Fornell, C.; Larcker, D.F. Evaluating Structural Equation Models with Unobservable Variables and Measurement Error. J. Mark. Res. 1981, 18, 39–50. [Google Scholar] [CrossRef]
  90. Sarstedt, M.; Ringle, C.M.; Hair, J.F. Partial Least Squares Structural Equation Modeling. In Handbook of Market Research; Springer: Berlin/Heidelberg, Germany, 2021; pp. 587–632. [Google Scholar]
  91. Hair, J.F., Jr.; Sarstedt, M.; Hopkins, L.; Kuppelwieser, V.G. Partial Least Squares Structural Equation Modeling (PLS-SEM): An Emerging Tool in Business Research. Eur. Bus. Rev. 2014, 26, 106–121. [Google Scholar] [CrossRef]
  92. Preacher, K.J.; Hayes, A.F. Assessing Mediation in Communication Research. In The Sage Sourcebook of Advanced Data Analysis Methods for Communication Research; SAGE: Thousand Oaks, CA, USA, 2008. [Google Scholar]
  93. Streukens, S.; Leroi-Werelds, S. Bootstrapping and PLS-SEM: A Step-by-Step Guide to Get More out of Your Bootstrap Results. Eur. Manag. J. 2016, 34, 618–632. [Google Scholar] [CrossRef]
  94. Albahri, A.S.; Alnoor, A.; Zaidan, A.A.; Albahri, O.S.; Hameed, H.; Zaidan, B.B.; Peh, S.S.; Zain, A.B.; Siraj, S.B.; Masnan, A.H.B.; et al. Hybrid Artificial Neural Network and Structural Equation Modelling Techniques: A Survey. Complex. Intell. Syst. 2022, 8, 1781–1801. [Google Scholar] [CrossRef]
  95. Soomro, R.B.; Memon, S.G.; Dahri, N.A.; Al-Rahmi, W.M.; Aldriwish, K.; Salameh, A.; Al-Adwan, A.S.; Saleem, A. The Adoption of Digital Technologies by Small and Medium-Sized Enterprises for Sustainability and Value Creation in Pakistan: The Application of a Two-Staged Hybrid SEM-ANN Approach. Sustainability 2024, 16, 7351. [Google Scholar] [CrossRef]
  96. Leong, L.Y.; Hew, J.J.; Lee, V.H.; Tan, G.W.H.; Ooi, K.B.; Rana, N.P. An SEM-ANN Analysis of the Impacts of Blockchain on Competitive Advantage. Ind. Manag. Data Syst. 2023, 123, 967–1004. [Google Scholar] [CrossRef]
  97. Chong, A.Y.L. A Two-Staged SEM-Neural Network Approach for Understanding and Predicting the Determinants of m-Commerce Adoption. Expert. Syst. Appl. 2013, 40, 1240–1247. [Google Scholar] [CrossRef]
  98. Chan, F.T.S.; Chong, A.Y.L. A SEM-Neural Network Approach for Understanding Determinants of Interorganizational System Standard Adoption and Performances. Decis. Support. Syst. 2012, 54, 621–630. [Google Scholar] [CrossRef]
  99. Mustafa, S.; Qiao, Y.; Yan, X.; Anwar, A.; Hao, T.; Rana, S. Digital Students’ Satisfaction with and Intention to Use Online Teaching Modes, Role of Big Five Personality Traits. Front. Psychol. 2022, 13, 956281. [Google Scholar] [CrossRef]
  100. Sternad Zabukovšek, S.; Kalinic, Z.; Bobek, S.; Tominc, P. SEM–ANN Based Research of Factors’ Impact on Extended Use of ERP Systems. Cent. Eur. J. Oper. Res. 2019, 27, 703–735. [Google Scholar] [CrossRef]
  101. Hidayat-Ur-Rehman, I.; Alzahrani, S.; Rehman, M.Z.; Akhter, F. Determining the Factors of M-Wallets Adoption. A Twofold SEM-ANN Approach. PLoS ONE 2022, 17, e0262954. [Google Scholar] [CrossRef] [PubMed]
  102. Elseufy, S.M.; Hussein, A.; Badawy, M. A Hybrid SEM-ANN Model for Predicting Overall Rework Impact on the Performance of Bridge Construction Projects. Structures 2022, 46, 713–724. [Google Scholar] [CrossRef]
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
