Article

The Influence of Perceived Organizational Support on Sustainable AI Adoption in Digital Transformation: An Integrated SEM–ANN–NCA Model

1 Department of Global Business Graduate School, Kyonggi University, Suwon 16227, Republic of Korea
2 Business School, Anyang Institute of Technology, Anyang 455000, China
* Author to whom correspondence should be addressed.
Sustainability 2025, 17(24), 11373; https://doi.org/10.3390/su172411373
Submission received: 22 October 2025 / Revised: 19 November 2025 / Accepted: 24 November 2025 / Published: 18 December 2025
(This article belongs to the Special Issue Digital Marketing and Sustainable Circular Economy)

Abstract

In the era of sustainable digital transformation, organizations increasingly rely on artificial intelligence (AI) to enhance efficiency, innovation, and long-term competitiveness. However, employees’ psychological barriers, including technostress and innovation resistance, continue to constrain successful and sustainable AI adoption. Grounded in Social Exchange Theory (SET), Conservation of Resources Theory (COR), Diffusion of Innovation Theory (DOI), and the Technology Acceptance Model (TAM), this study develops an integrated model linking perceived organizational support (POS)—comprising emotional, informational, and instrumental dimensions—to employees’ sustainable AI adoption through the dual mediating roles of technostress and innovation resistance. Based on 426 valid responses collected from multiple industries, a triadic hybrid approach combining Structural Equation Modeling (SEM), Artificial Neural Networks (ANNs), and Necessary Condition Analysis (NCA) was applied to capture both linear and nonlinear mechanisms. The results reveal that Informational Support (IFS) is the most influential factor and constitutes the sole necessary condition for high-level AI adoption, while emotional and instrumental support indirectly promote sustainable adoption by mitigating employees’ stress and resistance. This study contributes to sustainable management and AI adoption research by providing insights into the potential hierarchical and threshold patterns of organizational support systems in digital transformation. It also provides managerial implications for designing transparent, empathetic, and resource-efficient support ecosystems that foster employee-driven intelligent transformation.

1. Introduction

Amid the wave of global digital transformation, Artificial Intelligence (AI) has emerged as a core driving force behind enterprise innovation and organizational change. The rapid advancement of AI technologies has not only reshaped firms’ operational models and management structures but also facilitated the intelligent upgrading of decision-making mechanisms and human resource systems [1]. However, in stark contrast to this potential value, many organizations still face challenges such as employees’ low willingness to adopt AI and psychological resistance during implementation. Technological anxiety, learning pressure, and fear of being replaced often trap AI transformation in a dilemma of being “technologically advanced but psychologically lagging” [2].
Theoretically, Social Exchange Theory (SET) posits that organizations can elicit employees’ positive behaviors and reciprocal responses by providing resources and support [3]. The theory of Perceived Organizational Support (POS) further emphasizes that when employees perceive emotional, informational, and instrumental support from their organizations, they develop stronger trust, belonging, and behavioral commitment [4]. Recent studies have revealed that in digital transformation contexts, employees’ perceptions of organizational support not only influence their performance but also significantly affect their ability to cope with technostress [5].
Meanwhile, social and informational support have been identified as vital psychological resources for mitigating technostress. Lanzl [6] argued that emotional and informational support effectively suppress the emergence of technostress, particularly under conditions of high uncertainty. This indicates that organizational support not only serves an emotional soothing function but also alleviates cognitive burdens induced by technological complexity through empowerment and learning resources. Similarly, Pirkkalainen, Salo, Tarafdar, and Makkonen [7] found that a positive organizational support environment encourages employees to adopt proactive coping strategies, thereby reducing stress arising from digital transformation.
Moreover, the impact of technostress on employees’ innovative behaviors and technology adoption has become a significant topic in the field of information systems. Chandra, Shirish, and Srivastava [8] noted that technostress may suppress innovative behavior or, at moderate levels, stimulate innovative potential—revealing a nonlinear relationship. This finding uncovers a dynamic balance between the motivating and inhibiting effects of technostress, suggesting that organizational support plays a crucial moderating role in this relationship. Collectively, the interactive patterns among technostress, innovation resistance, and organizational support provide an important perspective for understanding the psychological processes involved in employees’ AI adoption.
Nevertheless, existing research presents two main limitations. First, most studies treat organizational support as a unidimensional construct, lacking a hierarchical and differentiated analysis of emotional, informational, and instrumental support. Second, the threshold effects of organizational support in the AI adoption process have not been systematically explored. In other words, whether different types of organizational support constitute “core necessary conditions” or “supplementary conditions” remains a theoretical gap [9].
Recent studies have emphasized that sustainable digital transformation reshapes marketing and organizational strategies toward circular economy principles [10,11]. Building upon this perspective, organizational support represents a critical mechanism for fostering employee-driven adoption of intelligent technologies within sustainable management systems.
Based on this, the present study adopts the framework of “multidimensional organizational support–technostress–innovation resistance–AI adoption” and integrates structural equation modeling (SEM), artificial neural networks (ANNs), and necessary condition analysis (NCA) to systematically uncover the structural mechanisms and threshold characteristics of employees’ AI technology adoption. The research objectives include: (1) examining the direct and indirect effects of different types of organizational support on AI adoption intention; (2) analyzing the mediating roles of technostress and innovation resistance between organizational support and adoption behavior; and (3) identifying the necessary conditions for high-level AI adoption based on NCA and proposing a new “threshold-dependent structure” model. Through this analytical framework, this study aims to extend the applicability boundaries of POS theory within the context of intelligent transformation and provide both empirical and theoretical insights for organizations in designing AI adoption strategies and support systems.
In this study, “sustainable AI adoption” refers to the long-term and stable utilization and adoption of AI technologies within organizations during technological innovation processes, representing a dimension of organizational sustainability rather than environmental or ecological sustainability. It emphasizes the stability and enduring performance of AI technology adoption at the organizational level. This organizational sustainability dimension simultaneously supports firms’ long-term resilience and learning capability throughout digital transformation. In this sense, sustainable AI adoption represents an organizational capability that ensures continuous alignment between human cognition and intelligent systems.

2. Theoretical Foundation and Hypothesis Development

2.1. Sustainable AI Adoption: Concept and Literature Background

Recent research has emphasized that digital transformation should not be viewed as a one-time technological upgrade but as a continuous and adaptive process of organizational learning and capability renewal [12]. Within this paradigm, sustainable AI adoption refers to the long-term, stable, and value-generating integration of artificial intelligence technologies into organizational routines and culture. It emphasizes the continuity and resilience of adoption behaviors, ensuring that employees not only accept AI systems but also continuously engage with and adapt to them over time.
Unlike traditional technology acceptance models that focus on initial adoption, sustainable adoption frameworks highlight post-adoption persistence, behavioral reinforcement, and adaptive resource regeneration. This perspective shifts the analytical focus from short-term behavioral intention to long-term adaptation capacity, aligning AI adoption with broader organizational sustainability and human-centered innovation goals [13].
In organizational contexts, sustainable AI adoption is supported by mechanisms of resource renewal and psychological resilience, consistent with Conservation of Resources Theory (COR). Emotional, informational, and instrumental support jointly serve to restore depleted psychological resources and maintain a stable adoption attitude over time. Similarly, Social Exchange Theory (SET) explains how reciprocal trust and perceived fairness sustain employees’ commitment to continuous AI use, reinforcing the social foundations of long-term adoption.
Therefore, sustainable AI adoption can be conceptualized as a dynamic and cyclical process of technological learning and adaptation, sustained by organizational support systems. It represents not only the outcome of successful AI implementation but also the organizational capability to continually align human and technological systems for enduring performance and innovation.
Building on this conceptual foundation, it becomes essential to identify the underlying theoretical mechanisms that explain how sustainable AI adoption occurs within organizations. While previous models have addressed either the cognitive or social aspects of technology use, a holistic understanding requires integrating both resource-based and behavioral perspectives. To this end, the present study constructs a multi-theoretical framework that combines Social Exchange Theory (SET), Conservation of Resources Theory (COR), Diffusion of Innovation Theory (DOI), and the Technology Acceptance Model (TAM). This integration provides a comprehensive lens through which to examine how multidimensional organizational support fosters employees’ psychological adaptation, reduces technostress and resistance, and ultimately promotes sustainable AI adoption.

2.2. An Integrated Multi-Theoretical Framework for Sustainable AI Adoption

In the context of AI-driven digital transformation, employees’ willingness to adopt technology depends not only on perceived usefulness and ease of use but also on the interplay of organizational support, psychological resources, and cognitive trust. To comprehensively capture these multi-level influences, this study integrates four complementary theoretical perspectives—Social Exchange Theory (SET), Conservation of Resources Theory (COR), Diffusion of Innovation Theory (DOI), and the Technology Acceptance Model (TAM)—to construct a unified explanatory framework for sustainable AI adoption.
While Section 2.1 outlined the conceptual meaning of sustainable AI adoption, this section deepens the discussion by identifying the theoretical foundations that can explain how such sustainability is achieved in practice. Rather than relying on a single theoretical view, a multi-theoretical integration is necessary to capture the complex interactions between organizational support, employee cognition, and behavioral intention in the AI context.
Rather than treating these theories as independent, this study positions them as mutually reinforcing layers of explanation. SET provides the social foundation, clarifying why employees reciprocate organizational care with loyalty and adoption behaviors. COR supplies the psychological mechanism, explaining how organizational support replenishes employees’ emotional and cognitive resources, thereby reducing technostress. DOI introduces the communicative and informational dimension, describing how knowledge diffusion and feedback enhance employees’ cognitive readiness for innovation. TAM captures the evaluative and behavioral stage, demonstrating how these cognitive appraisals translate into concrete adoption intentions. Collectively, these theories form a progressive explanatory chain—from social reciprocity to resource conservation, cognitive transformation, and behavioral acceptance.
Within this integrated framework, Perceived Organizational Support (POS) functions as the central operational construct linking these theoretical dimensions. The multidimensional POS model—comprising emotional, informational, and instrumental support—maps directly onto the theoretical sequence above:
Emotional support embodies the relational and affective exchange emphasized in SET, reflecting how care and empathy foster psychological trust;
Informational support corresponds to the knowledge flow and cognitive renewal emphasized in DOI, facilitating comprehension and reducing uncertainty;
Instrumental support represents the tangible resource provision highlighted in COR, reinforcing the perceived ease of use mechanism in TAM.
Thus, POS serves as a conceptual bridge connecting social reciprocity, psychological resource management, and cognitive evaluation into a coherent explanatory system of AI adoption. Through this lens, employees’ acceptance of AI technologies can be interpreted through a resource–cognition–behavior sequence: organizations first provide emotional and material resources (SET and COR), which enhance employees’ cognitive clarity and trust (DOI and TAM), ultimately fostering sustained adoption behavior.
In summary, this integrative theoretical framework not only demonstrates how SET, COR, DOI, and TAM complement each other in explaining the AI adoption process but also establishes multidimensional POS as the most appropriate lens for capturing the dynamic interactions between social, psychological, and cognitive factors that drive sustainable AI adoption.
Although perceived usefulness and perceived ease of use are not directly included as constructs in this model, the TAM framework is incorporated conceptually to explain how informational and instrumental support shape employees’ cognitive evaluations during AI adoption. This multidimensional lens is particularly suitable for sustainable AI adoption, as it simultaneously accounts for emotional reciprocity (SET), resource resilience (COR), knowledge diffusion (DOI), and behavioral intention formation (TAM), thereby unifying social, psychological, and cognitive processes into a single explanatory framework.

2.3. The Connotation and Mechanisms of Multidimensional Organizational Support

Perceived Organizational Support (POS) refers to employees’ overall belief that their organization values their contributions and cares about their well-being [14]. This concept is grounded in Social Exchange Theory (SET), which emphasizes reciprocal resource exchanges and mutual trust between employees and the organization [15]. When employees perceive organizational support, they tend to reciprocate with greater engagement, loyalty, and innovative behaviors [16].
In recent years, research in the field of information science has highlighted that individuals’ technology adoption behaviors are closely related to their information behaviors—that is, how they search for, evaluate, and utilize information in technology-rich environments [17,18].
According to Wilson’s model of information behavior [17], information behavior is influenced by situational factors, including external environmental support, stressors, and social resources. Within this framework, organizational informational support can be regarded as a key situational enabler that helps employees more effectively access, comprehend, and apply information related to artificial intelligence technologies. This process reduces uncertainty, alleviates cognitive load, and enhances employees’ adaptability to new technologies.
Furthermore, Savolainen’s Everyday Life Information Seeking (ELIS) theory [18] emphasizes that information behavior depends not only on individual motivation but also on the structure of social support and work environment. Thus, organizational informational support mechanisms provide not only knowledge resources but also psychological and cognitive support for employees’ learning and decision-making during technological transformation. This perspective helps explain why, in the present study, informational support emerges as one of the most critical factors influencing employees’ AI adoption intentions.
In the context of AI-driven digital transformation, the role of POS becomes even more prominent. According to the Conservation of Resources Theory (COR), employees facing technological complexity and uncertainty actively seek external support to maintain psychological resource balance [19]. Therefore, emotional, informational, and instrumental support provided by organizations serve as critical resources to alleviate technostress and foster technology acceptance [20].
This study categorizes POS into three dimensions:
Emotional Support—reflects the organization’s concern for employees’ emotional states and psychological safety, helping to reduce anxiety and resistance [15];
Informational Support—reduces technological uncertainty through training, communication, and knowledge sharing [21];
Instrumental Support—provides employees with tangible resources, operational guidelines, and technical assistance to enhance self-efficacy [22].
This theoretical perspective further supports hypotheses H5 and H10, as informational support reduces uncertainty and enhances employees’ readiness for AI adoption.

2.4. Organizational Support in Alleviating Technostress (H1–H3)

Technostress refers to the psychological strain employees experience as a result of technological complexity, information overload, or system changes [23]. The extensive implementation of AI technologies within organizations places additional pressure on employees as they learn new systems and cope with automated monitoring [21].
According to the Conservation of Resources (COR) theory, external support serves as an essential compensatory mechanism for alleviating technostress [19]. Emotional support can reduce anxiety, informational support can enhance cognitive control, and instrumental support can strengthen technological confidence and adaptability. Empirical evidence from Chang, Zhang, Cai, and Guo [24] demonstrates that technostress can act as both a challenge-based motivator and an obstacle-based threat, while enhanced POS can steer it toward a positive pathway.
H1: 
Emotional support significantly and negatively predicts technostress.
H2: 
Informational support significantly and negatively predicts technostress.
H3: 
Instrumental support significantly and negatively predicts technostress.

2.5. Organizational Support in Reducing Innovation Resistance (H4–H6)

Innovation Resistance (IR) refers to employees’ defensive attitudes and behaviors toward new technologies [25]. During AI adoption, employees may develop psychological defenses and negative coping behaviors if they perceive a lack of capability or resources.
Multidimensional organizational support serves as a key mechanism for reducing innovation resistance. Emotional support can enhance a sense of belonging and alleviate anxiety; informational support reduces uncertainty through clear communication; instrumental support assists employees in smoothly transitioning to new technological environments [22]. Research indicates that a supportive organizational culture can significantly mitigate resistance effects, while open digital leadership further fosters employees’ willingness to innovate [26].
H4: 
Emotional support significantly and negatively predicts innovation resistance.
H5: 
Informational support significantly and negatively predicts innovation resistance.
H6: 
Instrumental support significantly and negatively predicts innovation resistance.

2.6. Technostress Intensifies Innovation Resistance (H7)

When employees experience sustained high pressure during AI transformation, their psychological defense mechanisms are activated, leading to resistance toward technology [20]. Zhang [27] found that AI system monitoring and algorithmic evaluations exacerbate feelings of loss of control, thereby triggering resistance behaviors. Bausch et al. [28] proposed a “stress–resistance loop” model: initial stress triggers psychological defense responses, which in turn lead to further resistance and learning stagnation.
H7: 
Technostress significantly and positively predicts innovation resistance.

2.7. Innovation Resistance Inhibits AI Technology Acceptance (H8)

According to the Technology Acceptance Model (TAM), perceived usefulness and ease of use are core determinants of adoption intention [1]. However, if employees exhibit resistance toward AI, their learning motivation and behavioral intentions decline significantly. Research indicates that innovation resistance constitutes the most disruptive psychological barrier in the AI adoption process. When resistance exceeds a psychological threshold, positive behavioral change is unlikely, even in the presence of external incentives [25].
H8: 
Innovation resistance significantly and negatively predicts AI technology acceptance.

2.8. Organizational Support Promotes AI Technology Acceptance (H9–H11)

POS not only indirectly alleviates stress and resistance but also directly promotes AI adoption through trust and efficacy mechanisms. Emotional support enhances organizational identification, informational support strengthens cognitive trust, and instrumental support increases operational confidence [29]. Chang et al. [24] found that AI adoption intention rises significantly with the level of organizational support, with the positive effect most pronounced when management actively invests in training and resource development.
H9: 
Emotional support significantly and positively predicts AI technology acceptance.
H10: 
Informational support significantly and positively predicts AI technology acceptance.
H11: 
Instrumental support significantly and positively predicts AI technology acceptance.

2.9. Technostress Inhibits AI Technology Acceptance (H12)

Technostress undermines employees’ trust in AI and their sense of self-efficacy. When the psychological burden exceeds the resource recovery threshold, employees tend to adopt avoidance or defensive coping strategies, thereby inhibiting technology adoption [2,24]. Feroz and Kwak [20] indicated that if organizations fail to establish an effective support system during AI transformation, employees’ adoption of AI will significantly decline [30].
H12: 
Technostress significantly and negatively predicts AI technology acceptance.
The preceding Sections 2.3–2.9 have progressively examined how different dimensions of organizational support influence technostress, innovation resistance, and ultimately AI adoption intention. To synthesize these interrelationships, Section 2.10 integrates the theoretical logic and empirical hypotheses into a coherent research framework that guides the subsequent model testing.

2.10. Theoretical Integration and Research Framework

This study adopts Social Exchange Theory (SET) and Conservation of Resources Theory (COR) as its foundational logic, explaining how organizational support is transformed into psychological resources, and integrates the Diffusion of Innovation Theory (DOI) and Technology Acceptance Model (TAM) as the behavioral transformation mechanisms, elucidating how perceived resources foster adoption. Accordingly, multidimensional Perceived Organizational Support (POS) alleviates technostress through a resource compensation mechanism (COR) and subsequently promotes sustainable AI adoption through the channels of cognitive trust and behavioral intention (TAM).
SET explains the formation mechanism of organizational support and employees’ reciprocal behaviors [31]. COR elucidates the dynamic relationship between resource loss and compensation [19]. DOI emphasizes the role of information dissemination in the process of innovation diffusion [21]. TAM illustrates how external support promotes adoption by influencing perceived usefulness and ease of use [1].
Recent research [24] further indicates that POS exhibits a “threshold effect” on AI adoption: when support levels are below the psychological threshold, employees’ adoption intention is suppressed, whereas exceeding this threshold significantly activates adoption behavior. This study will validate this nonlinear mechanism in subsequent chapters using SEM, ANN, and NCA methods. Together, these methods allow for a comprehensive exploration of both the structural and sustainability mechanisms underlying AI adoption behavior, as shown in Figure 1.

3. Research Methodology

3.1. Overview of Research Design

This study aims to investigate how multidimensional organizational support influences employees’ acceptance of artificial intelligence (AI) technologies, with a particular focus on the mediating mechanisms of technostress and innovation resistance. Therefore, a quantitative empirical research approach was adopted, combining survey data with structural equation modeling (SEM), and supplemented by artificial neural networks (ANNs) and necessary condition analysis (NCA) to enhance both predictive accuracy and theoretical depth.

3.2. Sample Selection and Distribution

The study sample consisted of AI technology users from multiple industries in China, including the education, manufacturing, IT, and service sectors. A non-probability convenience sampling method was employed, distributing the survey through both online and offline channels. To ensure sample diversity and representativeness, different age groups, occupational categories, and educational backgrounds were controlled.
A total of 426 valid questionnaires were collected, with basic demographic characteristics as follows: 51.2% were male, 48.8% were female; the main age group was 26–40 years, accounting for 51.4%; 8.5% held a postgraduate degree or higher; and the occupational structure was diverse, covering technology, education, management, freelancing, and other fields.

3.3. Basis for Sample Size Calculation

To ensure adequate statistical power (Power > 0.8), this study calculated the required sample size using G*Power 3.1 software, setting the significance level (α) at 0.05, a medium effect size (f2) of 0.15, and 6 independent variables. The calculation indicated a minimum required sample size of 146.
Considering model complexity and data quality control, the final valid sample for the formal survey was 426, well above the minimum requirement, ensuring strong statistical validity, as shown in Figure 2.
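The G*Power computation can be reproduced in outline with scipy’s noncentral F distribution (a minimal sketch assuming the overall-model F test with noncentrality λ = f² · n, the convention G*Power uses; the function names are illustrative, not part of the study’s toolchain):

```python
import numpy as np
from scipy import stats

def regression_power(n, n_predictors=6, f2=0.15, alpha=0.05):
    """Power of the overall F test in multiple regression,
    using the G*Power convention: noncentrality lambda = f2 * n."""
    df1 = n_predictors
    df2 = n - n_predictors - 1
    f_crit = stats.f.ppf(1 - alpha, df1, df2)       # critical F at alpha
    return stats.ncf.sf(f_crit, df1, df2, f2 * n)   # P(reject | effect f2)

def min_sample_size(power_target=0.80, **kw):
    """Smallest n whose computed power reaches the target."""
    n = kw.get("n_predictors", 6) + 2               # ensure df2 >= 1
    while regression_power(n, **kw) < power_target:
        n += 1
    return n
```

For example, `min_sample_size(0.80)` returns the smallest n whose computed power exceeds 0.80 under these default settings, and `regression_power(426)` confirms the final sample comfortably exceeds the requirement.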

3.4. Variable Measurement and Scale Design

This study employed established scales with modifications to measure six key variables: Emotional Support (ES), Informational Support (IFS), Instrumental Support (ITS), Technostress (TS), Innovation Resistance (IR), and AI Technology Acceptance (AIA). The detailed measurement items for all constructs are provided in Table A1. All scales were scored using a 5-point Likert scale.
The above scales were validated through a pre-survey and demonstrated good reliability and validity. Cronbach’s α coefficients for all variables ranged from 0.83 to 0.87, AVE values exceeded 0.5, and CR values were above 0.7, indicating strong internal consistency and construct validity of the measurements.
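The reported indices follow standard formulas: Cronbach’s α from raw item scores, and composite reliability (CR) and average variance extracted (AVE) from standardized factor loadings. A minimal numpy sketch (function names illustrative):

```python
import numpy as np

def cronbach_alpha(items):
    """items: (n_respondents, k_items) matrix of Likert scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()      # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)       # variance of scale total
    return k / (k - 1) * (1 - item_var / total_var)

def composite_reliability(loadings):
    """CR from standardized factor loadings of one construct."""
    l = np.asarray(loadings, dtype=float)
    errors = 1 - l**2                               # error variances
    return l.sum()**2 / (l.sum()**2 + errors.sum())

def ave(loadings):
    """Average variance extracted: mean squared loading."""
    l = np.asarray(loadings, dtype=float)
    return (l**2).mean()
```

For instance, a construct with three standardized loadings of 0.8 yields AVE = 0.64 (> 0.5) and CR ≈ 0.84 (> 0.7), matching the thresholds cited above.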

3.5. Data Analysis Methods

To comprehensively examine how multidimensional organizational support affects employees’ AI technology acceptance and to uncover the underlying mediating mechanisms and nonlinear interaction paths, this study employed three complementary analytical methods: Structural Equation Modeling (SEM), Artificial Neural Networks (ANNs), and Necessary Condition Analysis (NCA).
These methods address theoretical validation, enhanced prediction, and threshold identification, respectively, forming a systematic multi-level “explain–predict–necessary” framework for model verification. This multi-method integration approach has been widely applied in management and information systems research and is considered to simultaneously enhance theoretical explanatory power, predictive accuracy, and practical implications [30,31,32,33,34].
Specifically, SEM was used to test the path effects and mediating mechanisms in the research model, ANN to identify nonlinear relationships and assess the importance of each variable, and NCA to reveal the necessary conditions for achieving AI technology acceptance. The three methods operate independently and complement each other logically, thereby avoiding redundant or biased statistical results and creating a complete interpretive loop of “theoretical validation–predictive analysis–threshold identification.”

3.5.1. Structural Equation Modeling (SEM): Path Verification and Mediation Effect Identification

SEM is the primary tool for testing the theoretical hypotheses in this study, used to validate the model of multidimensional organizational support and employees’ AI acceptance constructed under hypotheses H1–H12. This method allows for systematic analysis of complex causal paths and mediating mechanisms among latent variables, making it suitable for theory-driven model testing [30,35].
This study employed Covariance-Based SEM (CB-SEM), estimated using Maximum Likelihood Estimation (MLE), to address path estimation under conditions of multiple variables and medium-to-small sample sizes, providing path coefficients, significance tests, model fit (SRMR), and explanatory power (R2, Q2) indicators [32]. SEM enables testing of the direct effects of emotional, informational, and instrumental support on AI acceptance, as well as their mediating paths through technostress and innovation resistance [1].
However, SEM assumes linear relationships and is limited in identifying potential nonlinear structures or marginal effects among variables, thus necessitating the supplementary use of Artificial Neural Network analysis.
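As a rough illustration of the mediation logic that SEM formalizes, the chained path IFS → TS → AIA (hypotheses H2, H12, H10) can be sketched as chained regressions on composite scores, with a percentile bootstrap for the indirect effect. This is simplified path analysis on observed scores, not latent-variable CB-SEM; the synthetic data, coefficients, and variable names are illustrative assumptions, not the study’s estimates:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 426
# Synthetic standardized composite scores (illustrative only)
IFS = rng.normal(size=n)
TS  = -0.4 * IFS + rng.normal(scale=0.9, size=n)             # H2: IFS lowers TS
AIA =  0.5 * IFS - 0.3 * TS + rng.normal(scale=0.8, size=n)  # H10, H12

def beta(y, X):
    """OLS slopes (intercept included, then dropped)."""
    X = np.column_stack([np.ones(len(y)), *X])
    return np.linalg.lstsq(X, y, rcond=None)[0][1:]

a = beta(TS, [IFS])[0]               # path IFS -> TS
b, direct = beta(AIA, [TS, IFS])     # path TS -> AIA and direct IFS -> AIA
indirect = a * b                     # mediated effect IFS -> TS -> AIA

# Percentile bootstrap CI for the indirect effect
boot = []
for _ in range(1000):
    idx = rng.integers(0, n, n)
    ab = beta(TS[idx], [IFS[idx]])[0] * beta(AIA[idx], [TS[idx], IFS[idx]])[0]
    boot.append(ab)
ci = np.percentile(boot, [2.5, 97.5])
```

A bootstrap interval excluding zero would indicate a significant indirect effect; full CB-SEM additionally models measurement error, which this sketch ignores.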

3.5.2. Artificial Neural Network (ANN): Identification of Nonlinear Relationships and Variable Importance Ranking

The ANN method addresses the limitations of SEM in identifying nonlinearities and interactions. It is based on a multi-layer perceptron (MLP) architecture trained via the backpropagation algorithm, enabling it to capture complex nonlinear patterns and asymmetric effects [31,36].
In this study, ANN is used to identify the nonlinear impact strength and relative importance of each factor within multidimensional organizational support on AI technology acceptance [33]. By configuring multiple hidden layers and activation functions, the model outputs normalized importance values for each variable, which are used to determine the most substantively influential support dimensions.
In recent years, ANN has been widely applied in AI adoption, smart retail, and digital trust research [37,38], and its high predictive accuracy and robustness provide a strong complement to traditional causal models. However, although ANN demonstrates excellent predictive performance, it lacks significance testing and theoretical interpretability and thus should be combined with SEM to verify the theoretical robustness of path relationships.

3.5.3. Necessary Condition Analysis (NCA): Threshold Identification and Necessity Reasoning

To further identify the “critical threshold” variables for achieving AI technology acceptance, this study introduces the Necessary Condition Analysis (NCA) method. NCA is based on the logic of “necessary but not sufficient” and is used to determine conditions indispensable for the occurrence of a target outcome, meaning that even if all other factors are present, the outcome cannot occur without the specific variable [34].
This study uses ceiling lines fitted to the scatter plot (CE-FDH, CR-FDH) and bottleneck tables to calculate necessity effect sizes (d) and confidence intervals, thereby assessing the strength of necessity for each variable [32]. This method can reveal which dimensions constitute the minimal constraints for achieving AI adoption intentions and provides managers with a “bottom-line” basis for decision-making [39,40].
For example, bottleneck table results can identify the minimum levels of informational or emotional support required for AI adoption (e.g., informational support ≥ 0.35), providing quantitative guidance for designing minimal necessary support combinations.
Although NCA can identify necessary conditions, it lacks sufficiency or predictive capability and therefore should be used in conjunction with SEM and ANN to form an integrated analytical framework spanning theoretical validation, prediction, and threshold identification.
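To make the ceiling-line logic concrete, the following numpy sketch approximates the CE-FDH ceiling as a running-maximum step function and derives an effect size as the empty area above it relative to the scope. The function name and toy data are illustrative only; the reported results were produced with dedicated NCA software.

```python
import numpy as np

def ce_fdh_effect_size(x, y):
    """Approximate NCA effect size (d) under a CE-FDH ceiling.

    The ceiling is the step function through the upper boundary of the
    scatter plot (here, the running maximum of y over increasing x);
    d = area of the empty "ceiling zone" / area of the scope.
    """
    x, y = np.asarray(x, float), np.asarray(y, float)
    order = np.argsort(x)
    xs, ys = x[order], y[order]
    ceiling = np.maximum.accumulate(ys)        # step-function ceiling
    y_hi = ys.max()
    scope = (xs.max() - xs.min()) * (y_hi - ys.min())
    if scope == 0:
        return 0.0
    widths = np.diff(np.append(xs, xs.max()))  # width of each step segment
    empty = np.sum(widths * (y_hi - ceiling))  # area above the ceiling
    return empty / scope
```

A bottleneck reading follows the same logic: for a desired outcome level, the smallest x at which the ceiling reaches that level is the minimum required level of the condition.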
Through the synergistic application of SEM, ANN, and NCA, this study not only enhances the statistical explanatory power and predictive accuracy of the theoretical model but also strengthens the practical identification of critical conditions for AI adoption, providing a solid theoretical and empirical foundation for managers to design “minimum necessary support strategies.”

4. Data Analysis and Results

4.1. Sample Characteristics Description

To ensure the reliability and validity of theoretical interpretation and statistical modeling, this study employed purposive non-probability sampling, particularly emphasizing respondents’ experience and cognitive engagement in the field of artificial intelligence [41]. The research team used a personal intercept approach to screen the target population and implemented pre-screening mechanisms to exclude individuals without relevant experience, ensuring the relevance and specificity of the collected data. The survey lasted 16 weeks and covered multiple regions, enhancing the geographic diversity of the sample while ensuring controlled survey conditions and data quality.
All respondents provided informed consent and completed the questionnaire anonymously and without compensation. The survey instrument was developed based on prior theoretical constructs. Three academic experts reviewed the content and face validity, followed by adjustments to format and terminology. After a pilot test with a small sample (N = 45), the questionnaire was formally deployed. The pilot results showed that all construct Cronbach’s α coefficients exceeded 0.80, meeting the reliability standards proposed by [42]. The pilot sample was not included in the final statistical analysis.
A total of 426 valid questionnaires were included in the final analysis, representing an effective response rate of 85.2%. The sample characteristics are summarized as follows: gender distribution was balanced (51.2% male, 48.8% female); young adults (18–30 years) dominated the age structure at 72.0%, reflecting the mainstream profile of AI technology adopters [1]; educational levels were concentrated at the associate and bachelor levels (57.9% in total), indicating that respondents generally had a solid cognitive foundation and were capable of understanding emerging technology tools and survey content; and monthly income was mainly concentrated between 5001 and 12,000 RMB (62.9%), corresponding to the mainstream economic composition of the urban middle class in China.
Regarding occupational distribution, the sample encompassed management, R&D, education, healthcare, service, agriculture, and government-related sectors, enhancing the cross-industry generalizability and modeling suitability of this study. This horizontal breadth provides a rich, heterogeneous data foundation for the subsequent application of SEM, ANN, and NCA methods.
Finally, the sample size of 426 met the minimum requirements for SEM path estimation [42] and exceeded the heuristic rule of “10 times the number of weight parameters” for ANN modeling [43], enhancing the statistical power and robustness of the findings, as shown in Table 1.

4.2. Reliability and Validity Analysis

4.2.1. Descriptive Statistics and Normality Test

Before conducting structural modeling and hypothesis testing, systematically examining the distribution characteristics of sample data is an essential preliminary step in quantitative research. Descriptive statistics not only reveal data central tendency and dispersion but also provide a basis for assessing whether the data meet fundamental assumptions of multivariate statistical analysis, such as normality, linearity, and homoscedasticity [44].
This study used 426 valid samples and conducted descriptive statistics and normality tests using SPSS 25.0. The results showed that the means of all measurement items ranged from 3.0 to 4.2, indicating that respondents generally held moderately positive attitudes across dimensions, consistent with normal sample distributions observed in previous technology acceptance studies [1]. Standard deviations ranged from 0.9 to 1.2, reflecting moderate variability within the sample, which avoids excessive homogeneity and provides a reliable basis for parameter estimation in subsequent modeling [45].
Regarding normality tests, this study followed the criteria proposed by George and Mallery [45]: a univariate distribution can be considered approximately normal if skewness is less than 3 in absolute value and kurtosis is less than 8 in absolute value. Detailed analysis indicated that item skewness ranged from −1.2 to 0.9 and kurtosis ranged from −0.5 to 1.1, both well below the threshold values. The statistical results confirmed approximate normality, with no significant deviation trends observed.
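The screening rule above reduces to a per-item comparison of skewness and kurtosis against the cutoffs. The scipy-based sketch below is illustrative (the reported values were computed in SPSS 25.0), and the item data it expects are hypothetical stand-ins.

```python
import numpy as np
from scipy.stats import skew, kurtosis

def screen_normality(items, skew_max=3.0, kurt_max=8.0):
    """Apply the George & Mallery rule of thumb per item:
    approximately normal if |skewness| < 3 and |kurtosis| < 8.
    `items` is an (n_respondents, n_items) array of Likert scores."""
    items = np.asarray(items, float)
    sk = skew(items, axis=0)
    ku = kurtosis(items, axis=0)   # excess kurtosis (0 for a normal)
    ok = (np.abs(sk) < skew_max) & (np.abs(ku) < kurt_max)
    return sk, ku, ok
```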
In summary, the sample data in this study exhibited favorable statistical properties in terms of central tendency, dispersion, and distribution, satisfying the basic prerequisites for multivariate statistical modeling. This lays a solid foundation for the rigorous application of subsequent methods, including Structural Equation Modeling (SEM), Artificial Neural Networks (ANNs), and Necessary Condition Analysis (NCA), as shown in Table 2.

4.2.2. Reliability Analysis

Ensuring the reliability of measurement instruments is a fundamental prerequisite in scale development and empirical research. Reliability primarily reflects the consistency and stability of measurement results, i.e., whether similar outcomes can be obtained upon repeated measurements [46]. In multidimensional scale assessments, internal consistency is one of the most commonly used reliability indicators, typically evaluated via Cronbach’s α coefficient [47,48]. Generally, an α coefficient above 0.70 is considered acceptable, while values above 0.80 indicate good internal consistency [44,49].
Before factor extraction, SPSS 25.0 was used to perform the Kaiser–Meyer–Olkin (KMO) measure of sampling adequacy and Bartlett’s test of sphericity, determining whether the data were suitable for subsequent factor analysis. The results indicated a KMO value of 0.910, well above the empirical threshold of 0.80 [50], suggesting sufficient inter-variable correlations for factor extraction [51]. Bartlett’s test of sphericity yielded an approximate chi-square statistic of 5864.468 with 406 degrees of freedom, significant at p < 0.001, further confirming that the data meet the prerequisites for factor analysis [42]. The relevant statistics are presented in Table 3.
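Both statistics can be reproduced from raw item scores with their standard formulas: KMO compares squared correlations against squared partial correlations, and Bartlett’s test transforms the determinant of the correlation matrix into a chi-square statistic. The numpy sketch below is illustrative only; SPSS 25.0 produced the reported values.

```python
import numpy as np

def kmo_and_bartlett(X):
    """KMO measure and Bartlett's sphericity test from raw item data X
    (n observations x p items), using the standard textbook formulas."""
    X = np.asarray(X, float)
    n, p = X.shape
    R = np.corrcoef(X, rowvar=False)
    # Partial correlations from the inverse correlation matrix.
    Rinv = np.linalg.inv(R)
    d = np.sqrt(np.outer(np.diag(Rinv), np.diag(Rinv)))
    P = -Rinv / d
    off = ~np.eye(p, dtype=bool)
    kmo = np.sum(R[off] ** 2) / (np.sum(R[off] ** 2) + np.sum(P[off] ** 2))
    # Bartlett: chi2 = -((n-1) - (2p+5)/6) * ln|R|, df = p(p-1)/2
    chi2 = -((n - 1) - (2 * p + 5) / 6.0) * np.log(np.linalg.det(R))
    df = p * (p - 1) // 2
    return kmo, chi2, df
```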
In the further reliability assessment, this study calculated Cronbach’s α coefficients for six latent variables: Emotional Support (ES), Informational Support (IFS), Instrumental Support (ITS), Technostress (TS), Innovation Resistance (IR), and AI Adoption (AIA). The α coefficients for all constructs ranged from 0.836 to 0.877, well above the minimum threshold of 0.70 [52], indicating good internal consistency of the scales. Detailed reliability analysis results are presented in Table 4.
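Cronbach’s α follows directly from the item variances and the variance of the total score, α = k/(k−1) · (1 − Σ var(itemᵢ)/var(total)). A minimal numpy sketch, using hypothetical data rather than the study’s items:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for one construct; `items` is an (n, k) array
    of k items assumed to measure the same latent concept."""
    items = np.asarray(items, float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)
```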
It is particularly noteworthy that, as academic awareness of the limitations of Cronbach’s α has deepened [53,54], this study strictly adhered to the construct homogeneity principle during testing to ensure that each item genuinely measures the same latent concept, avoiding inflated or distorted reliability due to dimensional heterogeneity [55].
Overall, the reliability test results of the scales in this study were fully satisfactory, providing solid and credible data support for subsequent Exploratory Factor Analysis (EFA), Confirmatory Factor Analysis (CFA), and Structural Equation Modeling (SEM).

4.2.3. Validity Analysis

Construct validity is the core criterion for assessing whether a scale accurately reflects latent constructs and plays a critical role in scale development and model testing [44]. To ensure the scientific rigor of the scales in terms of theoretical mapping and statistical robustness, this study employed Exploratory Factor Analysis (EFA) to systematically evaluate the convergent and discriminant validity of the scales.
Following the recommendations of Fabrigar, Wegener, MacCallum, and Strahan [56], when the relationships among latent constructs are not fully established, Principal Component Analysis (PCA) is the most appropriate factor extraction method. PCA can efficiently extract common variance among variables, enhancing factor interpretability and parsimony. To further optimize the interpretability of the factor structure, Varimax orthogonal rotation was applied to ensure high loadings of items on their respective factors while maintaining near-zero loadings on other factors, thereby achieving clearer factor differentiation [57].
After six iterations, the EFA converged successfully, indicating a stable data structure and robust statistical characteristics of the model [58]. The rotated component matrix revealed that all measurement items had standardized factor loadings above 0.70 on their corresponding latent variables, with most exceeding 0.75, meeting the convergent validity criteria proposed by Fornell and Larcker [59] and indicating that each item primarily reflects its respective latent construct. Simultaneously, cross-loadings were substantially lower than primary loadings, with no instances of reversed loadings or significant cross-factor high loadings, further confirming good discriminant validity among the latent constructs.
Specifically, the six factors—Emotional Support (ES), Informational Support (IFS), Instrumental Support (ITS), Technostress (TS), Innovation Resistance (IR), and AI Adoption (AIA)—were clearly differentiated, with each item exhibiting significant primary loadings on its respective factor. This stable factor structure not only enhances the theoretical interpretability of the scales but also provides a solid statistical foundation for subsequent Confirmatory Factor Analysis (CFA) and Structural Equation Modeling (SEM) [60].
Furthermore, since all variables were measured through a single-source survey, the potential impact of Common Method Variance (CMV) was examined using Harman’s single-factor test. All measurement items were entered into an unrotated Principal Component Analysis (PCA), and the results showed that the first factor accounted for 28.8% of the total variance, well below the critical threshold of 50%. This indicates that CMV is not a serious concern in this study, and the data demonstrate satisfactory reliability and validity for subsequent analyses. In summary, these findings collectively confirm the theoretical consistency and statistical soundness of the measurement model, providing a rigorous psychometric foundation for further model testing in the context of organizational behavior and AI adoption research, as shown in Table 5 and Table 6.
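Harman’s single-factor test reduces to the share of total variance captured by the first unrotated principal component, i.e., the largest eigenvalue of the item correlation matrix divided by the number of items. A numpy sketch on hypothetical data:

```python
import numpy as np

def harman_first_factor_share(X):
    """Share of total variance explained by the first unrotated
    principal component of the item correlation matrix; values below
    0.50 are conventionally read as no serious CMV concern."""
    X = np.asarray(X, float)
    R = np.corrcoef(X, rowvar=False)
    eigvals = np.linalg.eigvalsh(R)      # ascending order
    return eigvals[-1] / R.shape[0]
```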

4.2.4. Convergent Validity and Composite Reliability

In the field of scale measurement, convergent validity and composite reliability (CR) are key indicators for assessing the measurement quality of latent variables [44]. Convergent validity evaluates whether items under the same latent variable collectively reflect the core construct, while composite reliability emphasizes the overall consistency and reliability of the items’ measurement [60,61].
According to Fornell and Larcker [59], the average variance extracted (AVE) for each latent variable should be ≥0.50 to ensure that the latent variable explains more than half of the variance in its items; simultaneously, composite reliability (CR) should be ≥0.70, indicating strong internal consistency of the scale.
The formulas for calculating these two indicators are as follows:
$$\mathrm{AVE} = \frac{\sum_{i=1}^{n} \lambda_i^{2}}{n}$$
$$\mathrm{CR} = \frac{\left(\sum_{i=1}^{n} \lambda_i\right)^{2}}{\left(\sum_{i=1}^{n} \lambda_i\right)^{2} + \sum_{i=1}^{n} \delta_i}$$
Here, $\lambda_i$ represents the standardized factor loading of each measurement item, and $\delta_i$ denotes the corresponding error variance.
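The two formulas can be computed directly from the standardized loadings. The sketch below assumes the standardized solution, where each error variance is δᵢ = 1 − λᵢ²; the loading values in the test are illustrative, not the study’s estimates.

```python
import numpy as np

def ave_cr(loadings):
    """AVE and CR from standardized factor loadings, assuming
    error variances delta_i = 1 - lambda_i^2."""
    lam = np.asarray(loadings, float)
    delta = 1 - lam ** 2
    ave = np.mean(lam ** 2)                        # AVE = mean squared loading
    cr = lam.sum() ** 2 / (lam.sum() ** 2 + delta.sum())
    return ave, cr
```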
Based on the results of confirmatory factor analysis (CFA), this study examined the convergent validity and composite reliability of six core constructs: Emotional Support (ES), Informational Support (IFS), Instrumental Support (ITS), Technostress (TS), Innovation Resistance (IR), and AI Acceptance (AIA). The results indicated that all factor loadings were significant and above 0.70, meeting theoretical modeling requirements.
Further calculations showed that the AVE values of the constructs ranged from 0.530 to 0.623, and CR values ranged from 0.838 to 0.871 (see Table 7). Among them, Instrumental Support (ITS) had the highest AVE (0.623), indicating the strongest explanatory power of its measurement items; AI Acceptance (AIA) had an AVE of 0.530, which, although close to the threshold, still met the standard. All CR values exceeded 0.80, demonstrating excellent internal consistency [61].
Moreover, the validity of composite measurement models has been widely discussed in the academic community [62]. In recent years, researchers have emphasized that within the CB-SEM framework, the stability and equivalence of composite modeling have become new focal points for evaluating measurement reliability [63].
In summary, all latent variables in this study demonstrated good convergent validity and composite reliability, providing a solid measurement foundation for subsequent SEM hypothesis testing.

4.2.5. Discriminant Validity

Discriminant validity verifies the independence among latent variables and is a key indicator for assessing the structural soundness of a measurement scale. According to the criterion proposed by Fornell and Larcker [59], a model exhibits good discriminant validity if the square root of each latent variable’s Average Variance Extracted (√AVE) exceeds its correlations with all other latent variables. In this study, correlations among latent variables and their √AVE values were calculated from the confirmatory factor analysis (CFA) results. The √AVE of each latent variable exceeded its correlations with all other latent variables, satisfying the Fornell–Larcker criterion: each construct shares more variance with its own indicators than with any other construct. These findings indicate that the latent variables are well differentiated and structurally clear, supporting the measurement model’s validity and providing a solid foundation for subsequent structural equation modeling (SEM), as shown in Table 8.
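The Fornell–Larcker comparison in Table 8 amounts to a simple matrix check: for every construct, √AVE on the diagonal must exceed the largest absolute off-diagonal correlation in its row. A numpy sketch with illustrative values:

```python
import numpy as np

def fornell_larcker_ok(ave, corr):
    """True if sqrt(AVE) of each construct exceeds its correlations
    with every other construct (Fornell-Larcker criterion)."""
    ave = np.asarray(ave, float)
    corr = np.asarray(corr, float)
    off = corr - np.diag(np.diag(corr))   # zero out the diagonal
    return bool(np.all(np.sqrt(ave) > np.abs(off).max(axis=1)))
```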

4.2.6. Pearson Correlation Analysis

In the process of scale validation, correlational analysis between variables is not only a key step for preliminary verification of theoretical hypotheses but also an important method for assessing potential multicollinearity risks [44]. This study used the Pearson product–moment correlation coefficient to examine the linear relationships among latent variables, ensuring that constructs are theoretically related while maintaining necessary independence. The Pearson correlation coefficient (r) ranges from −1 to +1, with absolute values closer to 1 indicating stronger linear relationships [64]. In behavioral research, |r| = 0.10–0.30 is generally considered weak, 0.30–0.50 moderate, and above 0.50 strong [44]. A two-tailed significance level of 0.01 was set to ensure the statistical reliability of the correlation results.
The correlation matrix of latent variables obtained through Pearson correlation analysis in SPSS 25.0 is shown in Table 9. The results indicate that Emotional Support (ES), Informational Support (IFS), and Instrumental Support (ITS) are all significantly positively intercorrelated (r = 0.215, 0.211, 0.294, all p < 0.01), confirming the multidimensional complementarity of social support [65]. Technostress (TS) is significantly negatively correlated with all three types of organizational support (e.g., TS and IFS: r = −0.295, p < 0.01), consistent with the buffering effect proposed by Tarafdar et al. [2], indicating that organizational support can mitigate employees’ perceived psychological stress from technology. Innovation Resistance (IR) is significantly positively correlated with Technostress (TS) (r = 0.443, p < 0.01), indicating that higher perceived technological pressure strengthens resistance, consistent with the technology adoption resistance model proposed by Bhattacherjee and Hikmet [66]. AI Adoption (AIA) is significantly positively correlated with Emotional Support (r = 0.205), Informational Support (r = 0.318), and Instrumental Support (r = 0.293), all p < 0.01, supporting the resource conservation perspective that external support promotes positive adoption behavior [19].
Overall, the correlation analysis confirms that the constructs are both theoretically coherent and statistically distinct. None of the correlation coefficients exceeded 0.70, ruling out multicollinearity risks and meeting the modeling requirements proposed by Hair et al. [44]. These results provide a reliable statistical foundation for subsequent SEM path testing.

4.3. Structural Equation Modeling (SEM) Results

4.3.1. Model Fit Evaluation (SEM)

In structural equation modeling (SEM) analysis, assessing overall model fit is a fundamental step to evaluate the consistency between the theoretical model and empirical data, and it serves as a prerequisite for path estimation and mediation effect analysis [67]. This study employed the Maximum Likelihood Estimation (MLE) method to estimate structural path parameters and selected six mainstream fit indices for evaluation: chi-square/degrees of freedom ratio (CMIN/DF), root mean square error of approximation (RMSEA), comparative fit index (CFI), Tucker–Lewis index (TLI), goodness-of-fit index (GFI), and normed fit index (NFI), ensuring a comprehensive and scientific assessment of model adequacy [64].
The model fit results are presented in Table 10: CMIN/DF = 1.124, well below the recommended threshold of 3, indicating a reasonable control of model complexity; RMSEA = 0.017, within the “excellent fit” range (<0.05), reflecting minimized residual structures; and CFI (0.992), TLI (0.991), NFI (0.932), and GFI (0.940) all exceed the recommended standard of 0.90, further confirming a strong alignment between the model and sample data. Collectively, these results provide robust support for the validity of the theoretical model structure and establish a methodological foundation for subsequent path coefficient estimation and mediation mechanism analysis.
Notably, compared with prior confirmatory factor analysis (CFA) results (CMIN/DF = 3.115, RMSEA = 0.065), the current structural model demonstrates improved fit across several key indices. This improvement indicates that, after incorporating structural paths, the model parameters capture data relationships more accurately, enhancing the alignment between theory and reality and indirectly validating the theoretical suitability of the multidimensional support–technostress–technology acceptance mechanism constructed in this study.

4.3.2. Hypothesis Testing and Mediation Analysis

Based on the confirmed model fit in the structural equation modeling (SEM), the causal path relationships among latent variables in the theoretical model were further tested. By analyzing standardized path coefficients (β), critical ratios (CR), and significance levels (p-values), the validity of 12 hypothesized paths was evaluated, with detailed results presented in Figure 3 and Table 11.
First, regarding the predictive effects of organizational support on technostress (TS), all three types of support—emotional support (ES, β = −0.173, p = 0.015), informational support (IFS, β = −0.323, p < 0.001), and instrumental support (ITS, β = −0.164, p = 0.004)—exerted significant negative effects, confirming H1–H3. This indicates that multidimensional organizational support effectively alleviates employees’ cognitive load and emotional tension when facing technological challenges, consistent with prior findings that adequate organizational resources can buffer technostress during digital transformation.
Second, the results show that ES (β = −0.237, p < 0.001), IFS (β = −0.321, p < 0.001), and ITS (β = −0.265, p < 0.001) significantly and negatively influenced innovation resistance (IR), supporting H4–H6. This suggests that organizational support not only reduces technostress but also mitigates employees’ psychological resistance toward innovation, thereby improving their openness to adopting new technologies.
Furthermore, technostress (TS) had a significant positive effect on innovation resistance (IR) (β = 0.249, p < 0.001), confirming H7. This finding implies that excessive technological stress increases employees’ defensive and avoidance tendencies toward innovation, verifying the mediating role of technostress in shaping negative innovation attitudes.
In addition, innovation resistance (IR) showed a significant negative impact on AI adoption (AIA) (β = −0.205, p = 0.027), supporting H8, while the direct effect of emotional support (ES) on AIA (β = 0.026, p = 0.680) was not significant; thus, H9 was not supported. However, informational support (IFS, β = 0.193, p = 0.003) and instrumental support (ITS, β = 0.129, p = 0.017) both had significant positive influences on AIA, validating H10 and H11. These results indicate that employees’ perception of sufficient information and practical assistance from their organization directly enhances their willingness to adopt AI technologies.
Finally, technostress (TS) exerted a significant negative effect on AIA (β = −0.124, p = 0.021), confirming H12. This shows that employees experiencing higher levels of stress and technological overload are less likely to engage in AI-related behavioral intentions.
In summary, 11 out of the 12 hypotheses were supported, demonstrating that the proposed multi-path mechanism linking organizational support → technostress and innovation resistance → AI adoption is strongly validated. The findings emphasize that informational support serves as both a core driver and a necessary condition for sustainable AI adoption, while emotional and instrumental support mainly exert indirect influences through psychological relief and resource facilitation mechanisms.
To further validate the mediating mechanisms, a bootstrapping procedure with 5000 resamples and bias-corrected and accelerated (BCa) 95% confidence intervals was applied to examine all indirect effects.
As shown in Table 12, all indirect paths reached statistical significance (p < 0.05), confirming that Technostress (TS) and Innovation Resistance (IR) serve as crucial mediators linking Perceived Organizational Support (POS) to AI Technology Acceptance (AIA).
Specifically, the parallel mediation effects via TS and IR were both significant (e.g., IFS → TS → AIA, β = 0.043; IFS → IR → AIA, β = 0.089), indicating that different dimensions of support independently influence AI adoption through cognitive–affective stress mechanisms.
Moreover, the sequential mediation effects (e.g., IFS → TS → IR → AIA, β = 0.018; ITS → TS → IR → AIA, β = 0.012) were also significant, suggesting that technostress intensifies innovation resistance, which in turn reduces employees’ adoption intention. This dual-path mechanism demonstrates a hybrid parallel–sequential mediation structure, integrating both the emotional and cognitive components of the stress response.
Collectively, these findings empirically substantiate the hypothesized chain process wherein organizational support reduces employees’ stress and resistance toward AI technologies, thereby promoting acceptance behaviors. This result is consistent with Social Exchange Theory (SET) and Conservation of Resources Theory (COR), reinforcing that organizational support acts as both a resource provider and a psychological buffer against technostress, facilitating sustainable AI adoption.
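The resampling logic behind the bootstrap mediation test can be sketched for a single simple-mediation path (x → m → y). This toy version uses percentile intervals on the product of the two path coefficients; the study’s reported results used bias-corrected (BCa) intervals over the full structural model, and all names and data here are illustrative.

```python
import numpy as np

def bootstrap_indirect(x, m, y, n_boot=5000, seed=0):
    """Percentile bootstrap estimate and 95% CI for the indirect
    effect a*b, where a is the x->m slope and b is the m->y slope
    controlling for x."""
    x, m, y = (np.asarray(v, float) for v in (x, m, y))
    rng = np.random.default_rng(seed)
    n = len(x)
    est = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, n, n)            # resample with replacement
        xb, mb, yb = x[idx], m[idx], y[idx]
        a = np.polyfit(xb, mb, 1)[0]           # x -> m slope
        X = np.column_stack([np.ones(n), xb, mb])
        beta = np.linalg.lstsq(X, yb, rcond=None)[0]
        est[b] = a * beta[2]                   # beta[2] = m -> y slope
    lo, hi = np.percentile(est, [2.5, 97.5])
    return est.mean(), (lo, hi)
```

An indirect effect is judged significant when the confidence interval excludes zero, mirroring the criterion applied in Table 12.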

4.4. Artificial Neural Network (ANN) Analysis

4.4.1. Model Configuration and Parameter Settings

To further examine the nonlinear and interactive mechanisms underlying employees’ AI technology acceptance, this study employed an Artificial Neural Network (ANN) model as a complementary analytical approach to the Structural Equation Modeling (SEM) results. While SEM identifies linear relationships among constructs, ANN is well suited to capturing complex, hierarchical, and nonlinear patterns that are often present in organizational behavior and technology adoption contexts. Therefore, integrating ANN with SEM provides a more comprehensive modeling strategy, allowing the study to assess both the theoretical structure and the predictive capability of the conceptual model.
Consistent with the conceptual framework, the ANN model incorporated five input variables—Emotional Support (ES), Informational Support (IFS), Instrumental Support (ITS), Technostress (TS), and Innovation Resistance (IR)—all derived from the latent variable (LV) scores of the SEM measurement model. Although the direct linear path from Emotional Support to AI technology acceptance (AIA) was not statistically significant in SEM, ES was intentionally retained in the ANN analysis. The decision reflects two methodological considerations: (1) ANN can reveal nonlinear or interaction effects that may be undetectable in linear SEM models, and (2) excluding a theoretically relevant predictor may artificially reduce the network’s learning capacity. Thus, all five organizational and psychological antecedents were used as input neurons to construct a complete modeling structure.
The ANN model was implemented using a Multi-Layer Perceptron (MLP) architecture in PyTorch (version 2.2.0), a high-performance deep learning framework widely used in predictive modeling research. Following established neural network design principles, a two-hidden-layer architecture was adopted, with 64 neurons in the first hidden layer and 32 neurons in the second hidden layer, as illustrated in Figure 4. This configuration balances modeling flexibility with computational tractability: the first layer captures broad nonlinear transformations, while the second layer refines higher-order feature representations associated with employees’ stress, resistance, and perceived organizational support.
Both hidden layers used the Rectified Linear Unit (ReLU) activation function, which is recognized for its efficiency in deep models and its ability to mitigate problems of vanishing gradients. Because the output variable—AI technology acceptance (AIA)—is continuous, the output node adopted a linear activation function, ensuring that predicted values remain on the original metric scale. This activation design follows best practices for regression-oriented ANN models in behavioral and management sciences.
To prevent overfitting—a common challenge when modeling psychological and organizational variables—two forms of regularization were applied. First, Dropout with a rate of 0.3 randomly deactivated 30% of neurons during each training iteration, reducing co-adaptation among nodes and improving generalization performance. Second, L2 weight regularization (weight decay = 1 × 10−3) penalized excessively large weights and stabilized the optimization process. The combination of Dropout and L2 regularization reflects a robust approach to controlling model complexity, particularly when training dataset sizes are moderate, as is typical in organizational research.
All input variables were standardized to Z-scores (mean = 0, standard deviation = 1) prior to training. Standardization ensures that predictors contribute proportionally to weight updates and prevents variables with larger scales from dominating the optimization process. This preprocessing step is essential for MLP models trained with gradient-based optimization algorithms.
Model training was conducted using the Adam optimizer, which combines momentum and adaptive learning rate techniques to enhance convergence stability and speed. The learning rate was set to 0.001, a standard and empirically validated setting for behavioral prediction models. The model was trained for 500 epochs with a batch size of 160, providing sufficient iterations for convergence while avoiding excessive training cycles that may induce overfitting. The Adam optimizer’s adaptive nature also reduced sensitivity to scale differences among predictors, particularly important when modeling LV scores from SEM.
To ensure stability and generalization, the ANN model was trained using a 10-fold cross-validation procedure. The dataset was randomly partitioned into ten equally sized folds, with nine used for training and one for testing in each iteration. The process was repeated ten times such that each fold served as the test set once. This strategy provides a more reliable estimation of predictive performance by reducing sampling bias and variance, particularly compared to single hold-out validation approaches. Cross-validation also improves the interpretability and credibility of ANN results in behavioral and organizational research, where external replication datasets are often unavailable.
During training, the network learned patterns from the SEM-derived latent variable scores through iterative forward and backward propagation processes. In the forward pass, weighted inputs were transformed through the two hidden layers to produce predicted AIA values. In the backward pass, errors between predicted and actual AIA values were propagated backward through the network, updating weights according to the gradients computed by Adam. This iterative optimization process allowed the ANN model to capture nonlinear interactions among organizational support, technostress, and resistance variables.
The resulting ANN architecture—comprising five input neurons, two hidden layers with 64 and 32 neurons, and one output neuron—provides a robust framework for assessing nonlinear prediction patterns. The configuration also enables subsequent sensitivity analysis and variable importance assessment, which are necessary to compare ANN-derived insights with SEM path effects. By integrating ANN with SEM, this study provides a richer understanding of both structural relationships and predictive behavior, addressing recent methodological calls for hybrid modeling approaches in the fields of organizational support, technostress research, and AI acceptance.
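The configuration described above can be sketched in PyTorch as follows. The class name is illustrative and the training loop is omitted; the sketch only mirrors the stated settings (5 inputs, 64/32 ReLU hidden layers, 0.3 dropout, linear output, Adam with lr = 0.001 and weight decay = 1 × 10⁻³).

```python
import torch
import torch.nn as nn

class AIAcceptanceMLP(nn.Module):
    """Sketch of the described architecture: 5 inputs (ES, IFS, ITS,
    TS, IR), hidden layers of 64 and 32 ReLU units with 0.3 dropout,
    and a linear output head for the continuous AIA score."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(5, 64), nn.ReLU(), nn.Dropout(0.3),
            nn.Linear(64, 32), nn.ReLU(), nn.Dropout(0.3),
            nn.Linear(32, 1),        # linear activation for regression
        )

    def forward(self, x):
        return self.net(x)

model = AIAcceptanceMLP()
# Adam with L2 weight decay, matching the reported training settings.
optimizer = torch.optim.Adam(model.parameters(), lr=0.001, weight_decay=1e-3)
loss_fn = nn.MSELoss()
```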

4.4.2. Cross-Validation and Prediction Results

To systematically assess the stability and generalization performance of the Artificial Neural Network (ANN) model in predicting employees’ AI acceptance (AIA), this study employed a 10-fold cross-validation strategy to train a Multi-Layer Perceptron (MLP) regressor. The model consisted of two hidden layers with 64 and 32 neurons, respectively, and was trained using the Adam optimizer (learning rate = 0.001) for up to 2000 iterations, balancing nonlinear learning capacity against convergence stability.
The input variables included Emotional Support (ES), Informational Support (IFS), Instrumental Support (ITS), Technostress (TS), and Innovation Resistance (IR), with AI Technology Acceptance (AIA) as the output variable. Following standard machine learning procedures, all input variables were standardized to Z-scores to eliminate scale inconsistencies that might bias weight optimization during model training.
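The procedure described above can be sketched with scikit-learn. The synthetic latent-variable scores below are illustrative stand-ins for the SEM-derived scores used in the study, so the resulting error metrics will not match Table 13.

```python
import numpy as np
from sklearn.model_selection import KFold
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import mean_squared_error, mean_absolute_error, r2_score

# Illustrative stand-ins for the SEM latent scores:
# columns ES, IFS, ITS, TS, IR; target AIA.
rng = np.random.default_rng(42)
X = rng.normal(size=(426, 5))
y = X @ np.array([0.1, 0.4, 0.1, -0.1, -0.2]) + 0.1 * rng.normal(size=426)

kf = KFold(n_splits=10, shuffle=True, random_state=42)
rmse, mae, r2 = [], [], []
for train_idx, test_idx in kf.split(X):
    scaler = StandardScaler().fit(X[train_idx])        # Z-score the inputs
    model = MLPRegressor(hidden_layer_sizes=(64, 32),  # two hidden layers
                         activation="relu", solver="adam",
                         learning_rate_init=0.001, max_iter=2000,
                         random_state=42)
    model.fit(scaler.transform(X[train_idx]), y[train_idx])
    pred = model.predict(scaler.transform(X[test_idx]))
    rmse.append(np.sqrt(mean_squared_error(y[test_idx], pred)))
    mae.append(mean_absolute_error(y[test_idx], pred))
    r2.append(r2_score(y[test_idx], pred))

print(f"mean RMSE={np.mean(rmse):.4f}, MAE={np.mean(mae):.4f}, "
      f"R2={np.mean(r2):.4f}")
```

Fitting the scaler inside each fold, on the training partition only, prevents information from the held-out fold leaking into the standardization step.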
Table 13 summarizes the 10-fold cross-validation results of the ANN model, including mean squared error (MSE), root mean squared error (RMSE), mean absolute error (MAE), and the coefficient of determination (R2) for each fold. Overall, the model achieved an average RMSE of 0.1306, MAE of 0.1015, and R2 of 0.9748, indicating excellent predictive stability and generalization capability across folds.
The results further show that the ANN model maintained high predictive consistency, with minimal variance in RMSE and R2 values across folds. Notably, the lowest RMSE (0.0904) and highest R2 (0.9893) were observed in Fold 1, while Fold 8 exhibited the largest deviation (RMSE = 0.2093), which can be attributed to heterogeneity in data distribution rather than model instability. The low dispersion of error metrics across folds confirms that the model generalizes well to unseen samples without signs of overfitting or underfitting. Previous studies have highlighted that cross-validation can yield unstable error estimates under small-sample conditions, underscoring the need for robust evaluation frameworks [68]. In this study, the use of 10-fold cross-validation with randomized data splits minimizes this risk, ensuring stable and unbiased performance estimates.
To further visualize the ANN model’s predictive behavior, an additional 70–30 train–test split was conducted using the same hyperparameters (64 and 32 neurons in hidden layers, ReLU activation, Dropout = 0.3, L2 regularization = 1 × 10−3, learning rate = 0.001). Figure 5, Figure 6 and Figure 7 illustrate the model’s performance on this independent test set. The consistent alignment between predicted and actual AIA values demonstrates that the network’s nonlinear mapping generalizes well beyond the cross-validation results, confirming robust out-of-sample stability.

4.4.3. Variable Importance Analysis

To enhance the interpretability of the ANN model and identify the relative influence of input variables on employees’ AI technology acceptance (AIA), this study employed the Permutation Feature Importance (PFI) approach. PFI estimates the importance of each predictor by measuring the increase in RMSE when that feature’s values are randomly permuted while all model parameters remain unchanged [69].
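The PFI logic can be illustrated with a short sketch on synthetic data, using a linear model as a stand-in for the trained ANN (scikit-learn also provides an equivalent built-in, sklearn.inspection.permutation_importance); the percentage shares mirror the reporting style used for the importance results.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error

def permutation_importance_rmse(model, X, y, n_repeats=30, seed=0):
    """Mean increase in RMSE when one feature at a time is shuffled while
    the fitted model stays untouched; returned as percentage shares."""
    rng = np.random.default_rng(seed)
    base = np.sqrt(mean_squared_error(y, model.predict(X)))
    rises = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        for _ in range(n_repeats):
            Xp = X.copy()
            rng.shuffle(Xp[:, j])  # break this feature's link to the target
            rises[j] += np.sqrt(mean_squared_error(y, model.predict(Xp))) - base
    rises /= n_repeats
    return rises / rises.sum() * 100

# Demo on synthetic data: feature 0 carries most of the signal.
rng = np.random.default_rng(1)
X = rng.normal(size=(300, 3))
y = 3.0 * X[:, 0] + 1.0 * X[:, 1] + 0.1 * rng.normal(size=300)
imp = permutation_importance_rmse(LinearRegression().fit(X, y), X, y)
```

Averaging over repeated shuffles stabilizes the estimate, since a single permutation can increase or decrease the error by chance.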
Based on the previously trained ANN model (Section 4.4.2), the input variables—Emotional Support (ES), Informational Support (IFS), Instrumental Support (ITS), Technostress (TS), and Innovation Resistance (IR)—were analyzed to assess their marginal contributions to AIA.
Figure 8 illustrates the permutation-based importance results. Overall, Informational Support (IFS) exhibits the strongest influence (42.39%), followed by Innovation Resistance (IR, 21.51%) and Emotional Support (ES, 21.13%), suggesting that both cognitive and emotional factors are critical determinants of employees’ AI adoption. In contrast, Instrumental Support (ITS, 7.73%) and Technostress (TS, 7.24%) contribute less directly, implying that their effects may be indirectly mediated through other factors in the nonlinear ANN structure.
It is important to note that PFI may underestimate the contribution of correlated features [70]. For example, the strong association between IFS and ITS may lead to partial redundancy, where shuffling one variable does not fully disrupt the model’s predictive accuracy. To address this limitation, Altmann et al. [71] proposed a permutation importance correction approach that reduces bias in feature ranking by introducing statistical significance testing, thereby enhancing the robustness of model interpretability.
In summary, the PFI analysis reveals that informational and emotional supports (IFS and ES), together with innovation resistance (IR), are the most critical determinants of AIA (Figure 8). These findings complement the SEM results by uncovering nonlinear and interaction effects that are not captured in the linear model, thereby enhancing the theoretical understanding of employees’ sustainable AI adoption behavior.

4.5. Comparative Analysis: SEM vs. ANN

4.5.1. Variable Importance and Path Coefficients

Table 11 reports the standardized SEM path coefficients, and Figure 8 and Table 13 present the permutation feature importance of the ANN model. The results indicate both convergence and divergence across the two analytical approaches.
Informational Support (IFS) shows a significant positive effect on AIA in SEM (β = 0.193, p = 0.003) and receives the highest importance score in ANN (42.39%).
Innovation Resistance (IR) has a significant negative effect in SEM (β = −0.205, p = 0.027) and is also ranked as a major predictor in ANN (21.51%).
Emotional Support (ES) is statistically non-significant in the SEM structural model (β = 0.026, p = 0.680), whereas ANN assigns it a relatively high importance (21.13%).
Instrumental Support (ITS) shows a significant effect in SEM (β = 0.129, p = 0.017) but receives a lower importance score in ANN (7.73%).
Technostress (TS) exhibits a negative effect in SEM (β = −0.124, p = 0.021) and a moderate importance score in ANN (7.24%).
These findings indicate partial consistency in the ranking of predictors, with IFS and IR identified as key determinants in both models.

4.5.2. Indirect Effects and Predictive Structure

SEM mediation results demonstrate significant indirect effects through Innovation Resistance (IR) and Technostress (TS), including the pathways IFS → IR → AIA (β = 0.089, p < 0.001) and ES → IR → AIA (β = 0.057, p < 0.001).
ANN does not estimate mediation relationships. However, its predictive results reflect the relative contribution of each input variable to AIA without specifying causal or indirect pathways.

4.5.3. Predictive Performance Comparison

The predictive performance of the ANN model exceeds that of the SEM model: the ANN achieves a 10-fold cross-validation average R2 of 0.9748, an RMSE of 0.1306, and an MAE of 0.1015.
In comparison, the SEM structural model explains approximately 65–70% of the variance in AIA based on linear estimation.
SEM provides statistical significance testing and causal path estimation, whereas ANN focuses on prediction accuracy and variable importance.

4.5.4. Summary of Convergence and Divergence

Overall, IFS and IR emerge as key predictors across both SEM and ANN. Differences arise in the relative contributions of ES and ITS, reflecting variations in how the two models assess predictor influence.

4.5.5. Integrated Theoretical Implications

Across the SEM and ANN analyses, informational support (IFS) and innovation resistance (IR) consistently emerge as the most influential factors associated with employees’ AI adoption. While both methods identify these two variables as key predictors, differences appear in the relative contributions of emotional support (ES) and instrumental support (ITS). SEM indicates that ES has no significant direct association with AI adoption, whereas ANN assigns it moderate predictive importance. In contrast, ITS shows a significant linear association in SEM but receives lower importance in the ANN model.
Taken together, the combined results show areas of convergence—particularly regarding the roles of IFS and IR—and areas of divergence across methods, especially in the influence patterns of ES and ITS. These complementary findings reflect how linear estimation and nonlinear predictive modeling capture different aspects of the relationships among organizational support factors and AI adoption.

4.6. Necessary Condition Analysis (NCA)

To further examine the “necessity” and “threshold effects” of each support dimension on AI adoption, this study employs Necessary Condition Analysis (NCA). Unlike traditional regression or structural equation models, NCA focuses on the minimum conditions required for an outcome to occur, rather than on mean-level sufficiency [72]. This method emphasizes the “threshold logic” of behavioral mechanisms, meaning that certain key factors must reach a specific level for the behavioral outcome to manifest. This perspective provides a theoretical basis for examining the hierarchical and threshold-dependent characteristics of organizational support in AI adoption.
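As a minimal illustration of the CE-FDH ceiling logic (an approximation for exposition, not the official NCA software), the effect size d = C/S can be computed by integrating the empty zone above a non-decreasing step-function ceiling:

```python
import numpy as np

def ce_fdh_effect_size(x, y):
    """Approximate CE-FDH effect size d = C / S (Dul, 2016): C is the area
    of the empty 'ceiling zone' above the step function
    ceiling(x) = max{y_i : x_i <= x}; S is the scope (x-range * y-range)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    order = np.argsort(x)
    xs, ys = x[order], y[order]
    y_max = ys.max()
    scope = (xs[-1] - xs[0]) * (y_max - ys.min())
    ceiling = np.maximum.accumulate(ys)        # non-decreasing step ceiling
    widths = np.diff(xs)                       # step widths along x
    empty_area = np.sum(widths * (y_max - ceiling[:-1]))
    return empty_area / scope
```

A larger empty upper-left corner means higher x-levels are required before high y-levels are observed, which is exactly the "necessary but not sufficient" pattern NCA tests for; effect sizes above roughly 0.1 are conventionally read as meaningful.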
According to the NCA results (see Table 14), under the two ceiling methods (CE-FDH and CR-FDH), informational support (IFS) was the only variable with a significant bottleneck effect. Under the CE-FDH method in particular, IFS demonstrated an effect size of 0.105, exceeding the common 0.1 threshold and indicating that informational support is a necessary condition for high-level AI adoption (AIA). This result aligns with the findings of Venkatesh et al. [73] and Masrek et al. [74], which similarly indicate that informational support positively influences behavioral intentions, particularly in high-complexity technology adoption scenarios where insufficient informational support can substantially limit users’ adoption intentions.
In contrast, emotional support (ES) and instrumental support (ITS) exhibited relatively weak bottleneck effects under both methods. Specifically, the effect sizes of ES and ITS were 0.055 and 0.059, respectively, manifesting as marginal necessary conditions under both CE-FDH and CR-FDH. This indicates that although emotional and instrumental support have some influence on AI adoption, their effects are relatively weak and do not constitute critical drivers of AI technology adoption. Therefore, while these support factors may play a role in certain contexts, their influence in high-level adoption is far less significant than that of informational support. This finding aligns with the studies of Kurtessis et al. [4] and Mathieu, Eschleman, and Cheng [75], which suggest that emotional and instrumental support, while affecting technology adoption, are not decisive factors.
On the other hand, the effect sizes of innovation resistance (IR) and technostress (TS) were zero, indicating that these two variables do not constitute necessary conditions for AI adoption. Their bottleneck effects are zero, suggesting that innovation resistance and technostress are not “threshold conditions” that must be addressed during technology adoption. This finding is consistent with [74,76] and George-Reyes et al. [77], who also argued that innovation resistance and technostress act as potential inhibitory factors but do not constitute “bottlenecks” or “threshold conditions” in the technology adoption process.
Figure 9 presents scatter plots from the NCA, illustrating the bottleneck effects of the individual variables (e.g., informational support, emotional support) under the two ceiling methods (CE-FDH and CR-FDH); each subplot depicts the impact of one variable on artificial intelligence adoption (AIA).
Taken together, these figures align with the overall NCA results, suggesting that Informational Support is the only critical necessary condition in the technology adoption process, whereas Emotional Support, Instrumental Support, Innovation Resistance, and Technostress exert more marginal and non-decisive effects on adoption behavior.

5. Discussion

5.1. Interpretation of Key Findings

Drawing upon the multidimensional framework of perceived organizational support and employees’ psychological responses to technology, this study integrates SEM, ANN, and NCA to provide a comprehensive understanding of AI adoption behavior. Across all analyses, informational support consistently emerges as the most influential driver of adoption intention. This underscores the important role of cognitive clarity and information transparency in shaping employees’ willingness to embrace new technologies and aligns with Social Exchange Theory [3], which posits that organizational provision of knowledge and training fosters reciprocal engagement and positive attitudes toward technological change.
Organizational support—specifically informational and instrumental support—plays an important role in alleviating psychological burdens such as technostress and innovation resistance. These findings align with prior research indicating that adequate guidance, timely communication, and accessible resources can mitigate stress responses triggered by technological complexity [78]. Emotional support also contributes to reducing stress [79], though its direct influence on adoption intention is comparatively weaker, suggesting that emotional reassurance alone may not translate into behavioral readiness without sufficient cognitive and instrumental resources.
The ANN results complement the SEM findings by identifying the relative importance of these resources under nonlinear data patterns. Informational support remains the most influential predictor, followed by innovation resistance and emotional support, while instrumental support and technostress exert secondary influence. NCA further identifies informational support as the only necessary condition for high-level adoption [9], suggesting that cognitive readiness forms a critical threshold that must be met before other types of support can facilitate adoption. This result reinforces earlier insights into the contingent nature of employee readiness for technological integration [66].
The differences observed between SEM and ANN do not indicate inconsistencies but instead reflect methodological complementarity. SEM captures linear and directional relationships, while ANN focuses on predictive influence under nonlinear interactions. Accordingly, the combination of these techniques provides a richer and more nuanced view of employees’ socio-cognitive responses to AI technologies.

5.2. Theoretical Implications

This study contributes to organizational support and digital transformation literature in several ways. First, it extends the applicability of Perceived Organizational Support theory to intelligent workplace contexts by demonstrating that emotional, informational, and instrumental resources jointly influence employees’ psychological adaptation. These findings may reflect a multi-layered pattern in which emotional support contributes to psychological safety, informational support enhances cognitive clarity, and instrumental support facilitates behavioral readiness [4].
Second, by identifying technostress and innovation resistance as key mediators, the study highlights the psychological transmission mechanisms through which support resources shape adoption behavior. This aligns with dual-path models of technostress, where supportive environments can reduce both anxiety and defensive responses to innovation [2], thereby enabling more adaptive technology use.
Third, by incorporating NCA, the study suggests the possibility of a “hierarchical necessity structure” in the POS literature. Informational support is shown to be the core necessary condition, while emotional and instrumental support operate as supplementary conditions [9]. This implies that support resources do not simply accumulate linearly but instead function through threshold-based mechanisms that must be satisfied to enable behavioral outcomes.
Overall, the findings demonstrate that AI adoption is shaped by employees’ emotional, cognitive, and behavioral resource configurations and is best conceptualized as an interdependent, multi-stage process rather than a purely linear function of supportive inputs [1].

5.3. Managerial Implications

Several practical implications emerge from this study.
First, organizations should prioritize informational support by offering structured training programs, transparent communication channels, and continuous feedback mechanisms. This approach reduces uncertainty and enhances employees’ sense of control during AI implementation [4].
Second, instrumental support—such as technical assistance, system usability improvements, and onboarding resources—can reduce employees’ learning burdens and lower barriers to adoption, consistent with prior observations on stress mitigation [78].
Third, emotional support remains essential for sustaining a psychologically safe culture that fosters trust and openness to technological learning. Creating such environments through empathetic leadership and positive communication contributes to long-term innovation readiness [79].
Finally, NCA results indicate that informational support is the most critical bottleneck. Organizations should thus prioritize filling cognitive and informational gaps before relying on emotional or material incentives. Without adequate informational clarity, employees are unlikely to reach the psychological thresholds necessary for behavioral change.

5.4. Methodological and Theoretical Contributions

Methodologically, this research demonstrates the value of integrating SEM, ANN, and NCA to enhance both explanatory and predictive insights. SEM reveals linear causal pathways, ANN captures nonlinear influences, and NCA identifies necessary conditions that bound behavioral outcomes. This tri-method approach aligns with Shmueli’s [80] call for methodological synergy between explanation and prediction and provides a more comprehensive understanding of technology adoption processes.
Theoretically, the study enriches the POS and AI adoption literature by introducing hierarchical logic and threshold effects as central mechanisms shaping employees’ psychological adaptation. The integrated model offers a useful lens for interpreting how emotional, cognitive, and instrumental resources jointly influence digital transformation in organizational settings.

5.5. Limitations and Future Research

Despite its contributions, the study has limitations. The dataset is drawn from a specific industry, and future work could strengthen generalizability through cross-industry or cross-cultural comparison. The cross-sectional design limits causal inference, suggesting the need for longitudinal or experimental approaches. Additional constructs such as AI trust, self-efficacy, and organizational ethical climate may further enrich the psychological mechanism model. Future studies may also incorporate mixed-method designs to deepen contextual understanding and extend theoretical interpretation.

6. Conclusions

This study integrates SEM, ANN, and NCA to provide a multidimensional analysis of how organizational support influences employees’ adoption of AI technologies. Across all methods, informational support emerges as the most pivotal factor, functioning as a cognitive foundation that enables employees to engage effectively with AI systems. Technostress and innovation resistance remain important psychological barriers, underscoring the need for organizations to reduce uncertainty and provide clear guidance during technological transitions [81].
The findings extend the applicability of Perceived Organizational Support theory by indicating that emotional, informational, and instrumental support contribute differently to employees’ psychological adaptation in intelligent work environments [9]. Informational support plays a primary role in building cognitive clarity, while emotional and instrumental resources contribute to a supportive climate that facilitates long-term engagement with AI. These results align with emerging research emphasizing the layered and adaptive nature of support structures in cognitively demanding contexts [82,83].
Methodologically, the combined SEM–ANN–NCA approach demonstrates a complementary triad of explanation, prediction, and diagnosis, responding to recent calls for integrating linear and nonlinear analytical perspectives in technology adoption research [83,84,85]. The multi-method design offers a replicable path for future organizational studies seeking to understand complex behavioral systems.
Practically, the study emphasizes the importance of prioritizing informational transparency and providing employees with clear, continuous cognitive support during AI implementation. As shown in recent AI adoption literature, cognitive clarity not only enhances willingness to use technology but also strengthens organizational adaptability in digital transformation processes [81,86].
In sum, creating a supportive environment—one that offers clear information, accessible resources, and a psychologically safe climate—plays a crucial role in enabling sustainable AI adoption. By reinforcing employees’ readiness and capability to engage with intelligent technologies, organizational support becomes a key driver of long-term digital transformation and organizational resilience.

Author Contributions

Conceptualization, Y.F. (Yu Feng); Methodology, Y.F. (Yu Feng); Software, Y.F. (Yi Feng); Validation, Y.F. (Yi Feng); Formal analysis, Y.F. (Yu Feng); Investigation, Y.F. (Yu Feng); Resources, Y.F. (Yu Feng); Data curation, Y.F. (Yu Feng); Writing—original draft, Y.F. (Yu Feng); Writing—review & editing, Y.F. (Yu Feng); Visualization, Y.F. (Yu Feng); Supervision, Y.F. (Yu Feng) and Z.L.; Project administration, Y.F. (Yu Feng); Funding acquisition, Y.F. (Yu Feng). All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The raw data supporting the conclusions of this article will be made available by the authors on request.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
SPSS: Statistical Package for the Social Sciences
AMOS: Analysis of Moment Structures
CR: Composite Reliability
AVE: Average Variance Extracted
ES: Emotional Support
IFS: Informational Support
ITS: Instrumental Support
TS: Technostress
IR: Innovation Resistance
AIA: AI Technology Acceptance
SEM: Structural Equation Modeling
ANN: Artificial Neural Network
NCA: Necessary Condition Analysis
POS: Perceived Organizational Support
SET: Social Exchange Theory
COR: Conservation of Resources Theory
DOI: Diffusion of Innovation Theory
TAM: Technology Acceptance Model

Appendix A

Table A1. Questionnaire.
Emotional Support (ES). Sources: Kossek et al. (2011) [65]; Mathieu et al. (2019) [75].
ES1: My organization offers emotional comfort and support when I use new technologies.
ES2: When I face difficulties using AI technologies, my organization shows empathy and understanding.
ES3: The organization pays attention to my emotional reactions during technological change.
ES4: I feel encouraged and recognized by the organization when facing technical challenges.
ES5: My organization ensures I do not feel isolated when adapting to new technologies.

Informational Support (IFS). Sources: Esbensen Kim et al. (2004) [30]; Kossek et al. (2011) [65].
IFS1: The organization actively collects my feedback to improve AI application experiences.
IFS2: When I need to understand AI, the organization provides sufficient information.
IFS3: When using new technologies, the organization provides guides or training materials.
IFS4: The organization provides timely technical guidance when I use AI technology.

Instrumental Support (ITS). Sources: Chen et al. (2023) [86]; Soomro et al. (2024) [81].
ITS1: My organization provides appropriate devices for operating AI technologies.
ITS2: The organization provides additional resources to improve efficiency in AI practices.
ITS3: The organization offers auxiliary tools to optimize operational processes.
ITS4: My organization provides maintenance support for AI equipment.

Technostress (TS). Sources: Tarafdar et al. (2007) [78]; Erdmann et al. (2025) [82].
TS1: It takes a lot of time to understand the functions of AI technology.
TS2: The operational procedures of AI technology are overly complex.
TS3: It is hard to keep up with the pace of AI technology development.
TS4: Using AI technology makes my career prospects feel uncertain.
TS5: AI technology speeds up my work rhythm significantly.

Innovation Resistance (IR). Sources: Jain et al. (2024) [83]; Dul et al. (2020) [72].
IR1: I feel resistant to the actual application limitations of AI technology.
IR2: The diversity of AI models makes it difficult for me to adapt.
IR3: I feel distrustful toward AI technology.
IR4: Frequent updates of AI functions cause me psychological stress.
IR5: I find it difficult to adapt to frequent AI function updates.
IR6: I find using AI technology troublesome and feel repelled by it.

AI Technology Acceptance (AIA). Sources: Venkatesh et al. (2012) [73]; Zhang et al. (2024) [27].
AIA1: I am willing to try and use AI technology in my work.
AIA2: AI technology helps to improve my work efficiency.
AIA3: I believe AI technology is beneficial for my work.
AIA4: I actively learn and master AI technology.
AIA5: Using AI technology has increased my job satisfaction.

References

  1. Venkatesh, V.; Morris, M.G.; Davis, G.B.; Davis, F.D. User acceptance of information technology: Toward a unified view. MIS Q. 2003, 27, 425–478. [Google Scholar] [CrossRef]
  2. Tarafdar, M.; Cooper, C.L.; Stich, J.F. The technostress trifecta-techno eustress, techno distress and design: Theoretical directions and an agenda for research. Inf. Syst. J. 2019, 29, 6–42. [Google Scholar] [CrossRef]
  3. Eisenberger, R.; Huntington, R.; Hutchison, S.; Sowa, D. Perceived organizational support. J. Appl. Psychol. 1986, 71, 500–507. [Google Scholar] [CrossRef]
  4. Kurtessis, J.N.; Eisenberger, R.; Ford, M.T.; Buffardi, L.C.; Stewart, K.A.; Adis, C.S. Perceived organizational support: A meta-analytic evaluation of organizational support theory. J. Manag. 2017, 43, 1854–1884. [Google Scholar] [CrossRef]
  5. Çini, M.A.; Erdirençelebi, M.; Akman, A.Z. The effect of organization employees’ perspective on digital transformation on their technostress levels and performance: A public institution example. Cent. Eur. Bus. Rev. 2023, 12, 33–57. [Google Scholar] [CrossRef]
  6. Lanzl, J. Social support as technostress inhibitor: Even more important during the COVID-19 pandemic? Bus. Inf. Syst. Eng. 2023, 65, 329–343. [Google Scholar] [CrossRef]
  7. Pirkkalainen, H.; Salo, M.; Tarafdar, M.; Makkonen, M. Deliberate or instinctive? Proactive and reactive coping for technostress. J. Manag. Inf. Syst. 2019, 36, 1179–1212. [Google Scholar] [CrossRef]
  8. Chandra, S.; Shirish, A.; Srivastava, S.C. Does technostress inhibit employee innovation? Examining the linear and curvilinear influence of technostress creators. Commun. Assoc. Inf. Syst. 2019, 44, 299–331. [Google Scholar] [CrossRef]
  9. Dul, J. Necessary condition analysis (NCA) logic and methodology of “necessary but not sufficient” causality. Organ. Res. Methods 2016, 19, 10–52. [Google Scholar] [CrossRef]
  10. Siano, A.; Vollero, A.; Conte, F.; Amabile, S. “More than words”: Expanding the taxonomy of greenwashing after the Volkswagen scandal. J. Bus. Res. 2017, 71, 27–37. [Google Scholar] [CrossRef]
  11. Palazzo, M.; Vollero, A.; Siano, A. Intelligent packaging in the transition from linear to circular economy: Driving research in practice. J. Clean. Prod. 2023, 388, 135984. [Google Scholar] [CrossRef]
  12. Mariani, M.M.; Machado, I.; Magrelli, V.; Dwivedi, Y.K. Artificial intelligence in innovation research: A systematic review, conceptual framework, and future research directions. Technovation 2023, 122, 102623. [Google Scholar] [CrossRef]
  13. Strielkowski, W.; Grebennikova, V.; Lisovskiy, A.; Rakhimova, G.; Vasileva, T. AI-driven adaptive learning for sustainable educational transformation. Sustain. Dev. 2025, 33, 1921–1947. [Google Scholar] [CrossRef]
  14. Rhoades, L.; Eisenberger, R. Perceived organizational support: A review of the literature. J. Appl. Psychol. 2002, 87, 698–714. [Google Scholar] [CrossRef]
  15. Aryee, S.; Budhwar, P.S.; Chen, Z.C. Organisational justice, trust foci, and work outcomes: Test of a mediated social exchange model. In Proceedings of the Annual Meeting of the Academy of Management, Washington, DC, USA, 3–8 August 2001. [Google Scholar]
  16. Fatima, T.; Masood, A. Impact of digital leadership on open innovation: A moderating serial mediation model. J. Knowl. Manag. 2024, 28, 161–180. [Google Scholar] [CrossRef]
  17. Wilson, T.D. Models in information behaviour research. J. Doc. 1999, 55, 249–270. [Google Scholar] [CrossRef]
  18. Savolainen, R. Everyday life information seeking: Approaching information seeking in the context of “way of life”. Libr. Inf. Sci. Res. 1995, 17, 259–294. [Google Scholar] [CrossRef]
  19. Hobfoll, S.E.; Halbesleben, J.; Neveu, J.P.; Westman, M. Conservation of resources in the organizational context: The reality of resources and their consequences. Annu. Rev. Organ. Psychol. Organ. Behav. 2018, 5, 103–128. [Google Scholar] [CrossRef]
  20. Feroz, K.; Kwak, M. Digital transformation (DT) and artificial intelligence (AI) convergence in organizations. J. Comput. Inf. Syst. 2024, 1–17. [Google Scholar] [CrossRef]
  21. Wang, H.; Ding, H.; Kong, X. Understanding technostress and employee well-being in digital work: The roles of work exhaustion and workplace knowledge diversity. Int. J. Manpow. 2023, 44, 334–353. [Google Scholar] [CrossRef]
  22. Fan, J.; Zhang, L.; Li, N.; Man, S. How the perceived substitution of AI technology hinders nurses’ innovation behavior: The mediating role of AI anxiety and human-AI cooperation intention and the moderating role of organizational AI readiness. BMC Nurs. 2025, 24, 832. [Google Scholar] [CrossRef]
  23. Pirkkalainen, H.; Salo, M.; Makkonen, M.; Tarafdar, M. Coping with Technostress: When Emotional Responses Fail. In Proceedings of the 38th International Conference on Information Systems (ICIS 2017), Seoul, Republic of Korea, 10–13 December 2017; Association for Information Systems: Atlanta, GA, USA, 2017; pp. 1–17. [Google Scholar]
  24. Chang, P.C.; Zhang, W.; Cai, Q.; Guo, H. Does AI-driven technostress promote or hinder employees’ artificial intelligence adoption intention? A moderated mediation model of affective reactions and technical self-efficacy. Psychol. Res. Behav. Manag. 2024, 17, 413–427. [Google Scholar] [CrossRef]
  25. Claudy, M.C.; Garcia, R.; O’Driscoll, A. Consumer resistance to innovation—A behavioral reasoning perspective. J. Acad. Mark. Sci. 2015, 43, 528–544. [Google Scholar] [CrossRef]
  26. Cieslak, V.; Valor, C. Moving beyond conventional resistance and resistors: An integrative review of employee resistance to digital transformation. Cogent Bus. Manag. 2025, 12, 2442550. [Google Scholar] [CrossRef]
  27. Zhang, Z.; Wang, X.; Su, C.; Zhang, X.; Sun, L.; Yuan, X. Technostress and employee safety performance in China: The moderating role of perceived organizational support and the mediating role of job burnout. J. Gen. Manag. 2024; ahead-of-print. [Google Scholar] [CrossRef]
  28. Bausch, D.; Kraemer, T.; Mauroner, O. Technology-induced stress and employee resistance in the context of digital transformation and identification of countermeasures. Int. J. Innov. Technol. Manag. 2024, 21, 2450029. [Google Scholar] [CrossRef]
  29. Uren, V.; Edwards, J.S. Technology readiness and the organizational journey towards AI adoption: An empirical study. Int. J. Inf. Manag. 2023, 68, 102588. [Google Scholar] [CrossRef]
  30. Esbensen, K.H. Multivariate Data Analysis–In Practice: An Introduction to Multivariate Data Analysis and Experimental Design, 5th ed.; Aalborg University: Esbjerg, Denmark; CAMO Process AS: Oslo, Norway, 2004; pp. 75–78. [Google Scholar]
  31. Blau, P.M. Exchange and Power in Social Life; Routledge: London, UK, 2017. [Google Scholar] [CrossRef]
  32. Richter, N.F.; Schubring, S.; Hauff, S.; Ringle, C.M.; Sarstedt, M. When predictors of outcomes are necessary: Guidelines for the combined use of PLS-SEM and NCA. Ind. Manag. Data Syst. 2020, 120, 2243–2267. [Google Scholar] [CrossRef]
  33. Rani, S.; Danu, A.; Himanshu. Does digital competence really matter? The impact of attitude and ICT on research performance. J. Appl. Res. High. Educ. 2025; ahead-of-print. [Google Scholar] [CrossRef]
  34. Thiem, A. The logic and methodology of “necessary but not sufficient causality”: A comment on necessary condition analysis (NCA). Sociol. Methods Res. 2021, 50, 913–925. [Google Scholar] [CrossRef]
  35. Backhaus, K.; Erichson, B.; Gensler, S.; Weiber, R.; Weiber, T. Multivariate Analysis: An Application-Oriented Introduction, 2nd ed.; Springer Gabler: Wiesbaden, Germany, 2021. [Google Scholar] [CrossRef]
  36. Gurney, K. An Introduction to Neural Networks; CRC Press: Boca Raton, FL, USA, 1997. [Google Scholar] [CrossRef]
  37. Fazal-e-Hasan, S.M.; Amrollahi, A.; Mortimer, G.; Adapa, S.; Balaji, M.S. A multi-method approach to examining consumer intentions to use smart retail technology. Comput. Hum. Behav. 2021, 117, 106622. [Google Scholar] [CrossRef]
  38. El-Masri, M.; Al-Yafi, K.; Kamal, M.M. A task-technology-identity fit model of smartwatch utilisation and user satisfaction: A hybrid SEM-neural network approach. Inf. Syst. Front. 2023, 25, 835–852. [Google Scholar] [CrossRef]
  39. Popa, I.; Ștefan, S.C.; Olariu, A.A.; Breazu, A.; Cioc, M.M. Predictors of Employees’ Work Performance in Online and On-Site Conditions: A Combined use of PLS-SEM and NCA. Econ. Comput. Econ. Cybern. Stud. Res. 2024, 58, 265–279. [Google Scholar] [CrossRef]
  40. Ly, B. Digital Transformation, Organizational Culture, and Public Sector Flexibility in Emerging Economies: Insights from NCA and PLS-SEM Analysis. J. Knowl. Econ. 2025; ahead-of-print. [Google Scholar] [CrossRef]
  41. Hanifah, H.; Lee, Y.X.; Abdul, H. Revitalizing SMEs’ performance: Unleashing the dynamics of Technology-Organization-Environment factors for M-Commerce adoption. J. Sci. Technol. Policy Manag. 2025; ahead-of-print. [Google Scholar] [CrossRef]
  42. Hair, J.F.; Hult, G.T.M.; Ringle, C.M.; Sarstedt, M. A Primer on Partial Least Squares Structural Equation Modeling (PLS-SEM); SAGE: Singapore, 2014; ISBN 9781452217444. [Google Scholar]
  43. Abu-Mostafa, Y.S. Learning from hints in neural networks. J. Complex. 1990, 6, 192–198. [Google Scholar] [CrossRef]
  44. Hair, J.F.; Black, W.C.; Babin, B.J.; Anderson, R.E. Multivariate Data Analysis, 7th ed.; Prentice Hall: Upper Saddle River, NJ, USA, 2009. [Google Scholar]
  45. George, D.; Mallery, P. IBM SPSS Statistics 26 Step by Step: A Simple Guide and Reference, 16th ed.; Routledge: London, UK, 2019. [Google Scholar] [CrossRef]
  46. DeVellis, R.F.; Thorpe, C.T. Scale Development: Theory and Applications, 5th ed.; SAGE Publications: Thousand Oaks, CA, USA, 2021. [Google Scholar]
  47. Cortina, J.M. What is coefficient alpha? An examination of theory and applications. J. Appl. Psychol. 1993, 78, 98–104. [Google Scholar] [CrossRef]
  48. Tavakol, M.; Dennick, R. Making sense of Cronbach’s alpha. Int. J. Med. Educ. 2011, 2, 53–55. [Google Scholar] [CrossRef]
  49. Raykov, T.; Marcoulides, G.A. Introduction to Psychometric Theory; Routledge: London, UK, 2011. [Google Scholar] [CrossRef]
  50. Kaiser, H.F. An index of factorial simplicity. Psychometrika 1974, 39, 31–36. [Google Scholar] [CrossRef]
  51. Mertler, C.A.; Vannatta, R.A.; LaVenia, K.N. Advanced and Multivariate Statistical Methods: Practical Application and Interpretation; Routledge: London, UK, 2021. [Google Scholar] [CrossRef]
  52. Sarstedt, M.; Ringle, C.M.; Smith, D.; Reams, R.; Hair, J.F., Jr. Partial least squares structural equation modeling (PLS-SEM): A useful tool for family business researchers. J. Fam. Bus. Strategy 2014, 5, 105–115. [Google Scholar] [CrossRef]
  53. McNeish, D. Thanks coefficient alpha, we’ll take it from here. Psychol. Methods 2018, 23, 412–433. [Google Scholar] [CrossRef]
  54. Sijtsma, K. On the use, the misuse, and the very limited usefulness of Cronbach’s alpha. Psychometrika 2009, 74, 107–120. [Google Scholar] [CrossRef]
  55. Clark, L.A.; Watson, D. Constructing validity: New developments in creating objective measuring instruments. Psychol. Assess. 2019, 31, 1412–1427. [Google Scholar] [CrossRef]
  56. Fabrigar, L.R.; Wegener, D.T.; MacCallum, R.C.; Strahan, E.J. Evaluating the use of exploratory factor analysis in psychological research. Psychol. Methods 1999, 4, 272–299. [Google Scholar] [CrossRef]
  57. Costello, A.B.; Osborne, J. Best practices in exploratory factor analysis: Four recommendations for getting the most from your analysis. Pract. Assess. Res. Eval. 2005, 10, 7. [Google Scholar] [CrossRef]
  58. Tabachnick, B.G.; Fidell, L.S. Using Multivariate Statistics, 7th ed.; Pearson: London, UK, 2019. [Google Scholar]
  59. Fornell, C.; Larcker, D.F. Evaluating structural equation models with unobservable variables and measurement error. J. Mark. Res. 1981, 18, 39–50. [Google Scholar] [CrossRef]
  60. Henseler, J.; Hubona, G.; Ray, P.A. Using PLS path modeling in new technology research: Updated guidelines. Ind. Manag. Data Syst. 2016, 116, 2–20. [Google Scholar] [CrossRef]
  61. Sarstedt, M.; Ringle, C.M.; Hair, J.F. Partial least squares structural equation modeling. In Handbook of Market Research; Springer International Publishing: Cham, Switzerland, 2021; pp. 587–632. [Google Scholar] [CrossRef]
  62. Nunnally, J.C.; Bernstein, I.H. Psychometric Theory, 3rd ed.; McGraw-Hill: Columbus, OH, USA, 1994. [Google Scholar]
  63. Landis, R.S.; Beal, D.J.; Tesluk, P.E. A comparison of approaches to forming composite measures in structural equation models. Organ. Res. Methods 2000, 3, 186–207. [Google Scholar] [CrossRef]
  64. Hu, L.T.; Bentler, P.M. Cutoff criteria for fit indexes in covariance structure analysis: Conventional criteria versus new alternatives. Struct. Equ. Model. A Multidiscip. J. 1999, 6, 1–55. [Google Scholar] [CrossRef]
  65. Kossek, E.E.; Pichler, S.; Bodner, T.; Hammer, L.B. Workplace social support and work–family conflict: A meta-analysis clarifying the influence of general and work–family-specific supervisor and organizational support. Pers. Psychol. 2011, 64, 289–313. [Google Scholar] [CrossRef]
  66. Bhattacherjee, A.; Hikmet, N. Physicians’ resistance toward healthcare information technology: A theoretical model and empirical test. Eur. J. Inf. Syst. 2007, 16, 725–737. [Google Scholar] [CrossRef]
  67. Hair, J.F.; Hult, G.T.M.; Ringle, C.M.; Sarstedt, M.; Richter, N.F.; Hauff, S. A Primer on Partial Least Squares Structural Equation Modeling (PLS-SEM), 2nd ed.; SAGE Publications: Singapore, 2017. [Google Scholar] [CrossRef]
  68. Varoquaux, G. Cross-validation failure: Small sample sizes lead to large error bars. NeuroImage 2018, 180, 68–77. [Google Scholar] [CrossRef]
  69. Molnar, C. Interpretable Machine Learning: A Guide for Making Black Box Models Explainable, 2nd ed.; Zenodo: Geneva, Switzerland, 2022. [Google Scholar] [CrossRef]
  70. Fisher, A.; Rudin, C.; Dominici, F. All models are wrong, but many are useful: Learning a variable’s importance by studying an entire class of prediction models simultaneously. J. Mach. Learn. Res. 2019, 20, 1–81. [Google Scholar]
  71. Altmann, A.; Toloşi, L.; Sander, O.; Lengauer, T. Permutation importance: A corrected feature importance measure. Bioinformatics 2010, 26, 1340–1347. [Google Scholar] [CrossRef]
  72. Dul, J.; Van der Laan, E.; Kuik, R. A statistical significance test for necessary condition analysis. Organ. Res. Methods 2020, 23, 385–395. [Google Scholar] [CrossRef]
  73. Venkatesh, V.; Thong, J.Y.; Xu, X. Consumer acceptance and use of information technology: Extending the unified theory of acceptance and use of technology. MIS Q. 2012, 36, 157–178. [Google Scholar] [CrossRef]
  74. Masrek, M.N.; Baharuddin, M.F.; Syam, A.M. Determinants of Behavioral Intention to Use Generative AI: The Role of Trust, Personal Innovativeness, and UTAUT II Factors. Int. J. Basic Appl. Sci. 2025, 14, 378–390. [Google Scholar] [CrossRef]
  75. Mathieu, M.; Eschleman, K.J.; Cheng, D. Meta-analytic and multiwave comparison of emotional support and instrumental support in the workplace. J. Occup. Health Psychol. 2019, 24, 387–409. [Google Scholar] [CrossRef]
  76. Abulail, R.N.; Badran, O.N.; Shkoukani, M.A.; Omeish, F. Exploring the Factors Influencing AI Adoption Intentions in Higher Education: An Integrated Model of DOI, TOE, and TAM. Computers 2025, 14, 230. [Google Scholar] [CrossRef]
  77. George-Reyes, C.E.; López-Caudana, E.O.; Avello-Martínez, R. Artificial intelligence adoption test based on UTAUT2 and complex thinking: Design with K coefficient and reliability analysis using structural equation modeling. Cogent Educ. 2025, 12, 2511446. [Google Scholar] [CrossRef]
  78. Tarafdar, M.; Tu, Q.; Ragu-Nathan, B.S.; Ragu-Nathan, T.S. The impact of technostress on role stress and productivity. J. Manag. Inf. Syst. 2007, 24, 301–328. [Google Scholar] [CrossRef]
  79. Saleem, F.; Malik, M.I. Technostress, quality of work life, and job performance: A moderated mediation model. Behav. Sci. 2023, 13, 1014. [Google Scholar] [CrossRef] [PubMed]
  80. Shmueli, G. To explain or to predict? Stat. Sci. 2010, 25, 289–310. [Google Scholar] [CrossRef]
  81. Soomro, S.; Fan, M.; Sohu, J.M.; Soomro, S.; Shaikh, S.N. AI adoption: A bridge or a barrier? The moderating role of organizational support in the path toward employee well-being. Kybernetes 2024; ahead-of-print. [Google Scholar] [CrossRef]
  82. Erdmann, A.; Toro-Dupouy, L. The influence of the institutional environment on AI adoption in universities: Identifying value drivers and necessary conditions. Eur. J. Innov. Manag. 2025, 28, 4365–4398. [Google Scholar] [CrossRef]
  83. Jain, K.K.; Raghuram, J.N.V. Gen-AI integration in higher education: Predicting intentions using SEM-ANN approach. Educ. Inf. Technol. 2024, 29, 17169–17209. [Google Scholar] [CrossRef]
  84. Huang, Y.; Fu, S. Understanding farmers’ intentions to participate in traceability systems: Evidence from SEM-ANN-NCA. Front. Sustain. Food Syst. 2023, 7, 1246122. [Google Scholar] [CrossRef]
  85. Soomro, R.B.; Al-Rahmi, W.M.; Dahri, N.A.; Almuqren, L.; Al-Mogren, A.S.; Aldaijy, A. A SEM–ANN analysis to examine impact of artificial intelligence technologies on sustainable performance of SMEs. Sci. Rep. 2025, 15, 5438. [Google Scholar] [CrossRef]
  86. Chen, Y.; Hu, Y.; Zhou, S.; Yang, S. Investigating the determinants of performance of artificial intelligence adoption in hospitality industry during COVID-19. Int. J. Contemp. Hosp. Manag. 2023, 35, 2868–2889. [Google Scholar] [CrossRef]
Figure 1. Structural Equation Model Diagram. ES = Emotional Support, IFS = Informational Support, ITS = Instrumental Support, TS = Technostress, IR = Innovation Resistance, AIA = AI Technology Acceptance. Arrows represent the hypothesized causal paths (H1–H12) between constructs.
Figure 2. GPower 3.1 calculation confirming the minimum sample size requirement.
Figure 3. SEM Analysis Model. ES = Emotional Support, IFS = Informational Support, ITS= Instrumental Support, TS = Technostress, IR = Innovation Resistance, AIA = AI Technology Acceptance.
Figure 4. Overall architecture of the Artificial Neural Network (ANN) model. The ANN model comprises five input variables (ES, IFS, ITS, TS, IR), two hidden layers (64 and 32 neurons), and one output node that predicts AI Technology Acceptance (AIA). ES = Emotional Support, IFS = Informational Support, ITS = Instrumental Support, TS = Technostress, IR = Innovation Resistance, AIA = AI Technology Acceptance. The dots (●) illustrate the neurons contained in each hidden layer (64 neurons in Layer 1 and 32 neurons in Layer 2).
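The 5-64-32-1 architecture in Figure 4 can be sketched with scikit-learn. Only the layer sizes (and the 500-epoch, 70–30 split setup from Figures 5–7) come from the paper; the ReLU activation, optimizer defaults, and the synthetic stand-in data below are assumptions for illustration, not the study's actual pipeline.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

# Synthetic stand-in for the survey scores: five inputs (ES, IFS, ITS, TS, IR)
# on a 1-5 scale and a toy AIA outcome driven mainly by IFS and IR.
rng = np.random.default_rng(42)
X = rng.uniform(1, 5, size=(426, 5))
y = 0.4 * X[:, 1] - 0.3 * X[:, 4] + 0.1 * rng.normal(size=426)

# 70-30 hold-out split, two hidden layers of 64 and 32 neurons, 500 epochs.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=42)
ann = MLPRegressor(hidden_layer_sizes=(64, 32), activation="relu",
                   max_iter=500, random_state=42)
ann.fit(X_tr, y_tr)
print(f"test R^2 = {ann.score(X_te, y_te):.3f}")
```

On such near-linear stand-in data the network recovers most of the variance, which is the qualitative pattern Figure 7 reports for the real model.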
Figure 5. Training loss curve of the ANN model. The loss decreases smoothly across 500 epochs, indicating stable convergence during the 70–30 split training process.
Figure 6. Predicted vs. Actual (test set). The predicted curve closely follows the actual trend in the 30% test sample, confirming strong model accuracy and minimal deviation.
Figure 7. Scatter plot of Actual vs. Predicted (test set). Each dot represents an individual observation in the test dataset. The dots are tightly clustered along the diagonal, showing high predictive consistency (R2 ≈ 0.95) under the 70–30 split validation.
Figure 8. Permutation-based feature importance of the ANN model. The bars represent the average percentage increase in RMSE after feature permutation. IFS, IR, and ES emerge as dominant predictors, while ITS and TS show weaker effects. Note: ES = Emotional Support; IFS = Informational Support; ITS = Instrumental Support; TS = Technostress; IR = Innovation Resistance.
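Figure 8's importance metric, the average percentage increase in RMSE after shuffling one input, can be sketched as follows. The linear toy model and data here are stand-ins (the study applied the permutation to its fitted ANN); the function works with any regressor exposing a `predict` method.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

def permutation_importance_rmse(model, X, y, rng, n_repeats=20):
    """Average % increase in RMSE when each feature is shuffled in turn."""
    base = np.sqrt(np.mean((model.predict(X) - y) ** 2))
    out = []
    for j in range(X.shape[1]):
        deltas = []
        for _ in range(n_repeats):
            Xp = X.copy()
            Xp[:, j] = rng.permutation(Xp[:, j])  # break feature j's link to y
            rmse = np.sqrt(np.mean((model.predict(Xp) - y) ** 2))
            deltas.append(100 * (rmse - base) / base)
        out.append(float(np.mean(deltas)))
    return np.array(out)

# Toy demonstration: feature 1 carries most of the signal, so shuffling it
# should inflate RMSE far more than shuffling features 0 or 2.
rng = np.random.default_rng(0)
X = rng.uniform(1, 5, size=(400, 3))
y = 0.2 * X[:, 0] + 0.8 * X[:, 1] + 0.1 * rng.normal(size=400)
model = LinearRegression().fit(X, y)
print(permutation_importance_rmse(model, X, y, rng))
```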
Figure 9. NCA Results. ES = Emotional Support, IFS = Informational Support, ITS = Instrumental Support, TS = Technostress, IR = Innovation Resistance, AIA = AI Technology Acceptance. (a) The subfigure illustrates the relationship between Emotional Support (ES) and AI Adoption Intention (AIA), showing that ES exhibits a relatively weak bottleneck effect with a low effect size. (b) The subfigure analyzes the impact of Informational Support (IFS) on AIA. IFS is identified as the only variable exhibiting a significant bottleneck effect, particularly under the CE-FDH method, with an effect size of 0.105, exceeding the common empirical threshold of 0.1. (c) The subfigure depicts the relationship between Innovation Resistance (IR) and AIA, where IR shows no significant bottleneck effect under either method, with an effect size of zero. (d,e) The subfigures present the relationships of Instrumental Support (ITS) and Technostress (TS) with AIA, indicating that while these variables influence AI adoption, they are not decisive factors and have small effect sizes.
Table 1. Descriptive statistics of population sample.

| Category | Description | Frequency | Percentage |
|---|---|---|---|
| Gender | Male | 218 | 51.2 |
| | Female | 208 | 48.8 |
| Age | Under 18 years | 35 | 8.2 |
| | 18–24 years | 148 | 34.7 |
| | 25–30 years | 159 | 37.3 |
| | 31–40 years | 60 | 14.1 |
| | 41–50 years | 19 | 4.5 |
| | 51–60 years | 5 | 1.2 |
| Education | Junior High School or below | 15 | 3.5 |
| | High School/Vocational School | 128 | 30.0 |
| | Associate Degree | 125 | 29.3 |
| | Bachelor’s Degree | 122 | 28.6 |
| | Master’s Degree or above | 36 | 8.5 |
| Monthly Expenditure | Below RMB 3000 | 49 | 11.5 |
| | RMB 3001–5000 | 58 | 13.6 |
| | RMB 5001–8000 | 130 | 30.5 |
| | RMB 8001–12,000 | 138 | 32.4 |
| | Above RMB 12,001 | 51 | 12.0 |
| Occupation | Marketing/Sales/Business | 20 | 4.7 |
| | Procurement | 18 | 4.2 |
| | Administration | 12 | 2.8 |
| | Human Resources | 13 | 3.1 |
| | Product/Operations Staff | 36 | 8.5 |
| | Finance/Accounting/Cashier/Audit | 16 | 3.8 |
| | Business Manager | 18 | 4.2 |
| | Lawyer/Legal Affairs | 21 | 4.9 |
| | Designer | 37 | 8.7 |
| | Service Industry Staff | 27 | 6.3 |
| | Technical/Development Engineer | 20 | 4.7 |
| | Agricultural/Forestry/Animal Husbandry/Fishery Worker | 19 | 4.5 |
| | Worker/Laborer | 18 | 4.2 |
| | Full-time Homemaker | 3 | 0.7 |
| | Freelancer | 17 | 4.0 |
| | Retired/Pensioner | 5 | 1.2 |
| | Student | 35 | 8.2 |
| | Teacher | 33 | 7.7 |
| | Medical Staff | 19 | 4.5 |
| | Researcher | 22 | 5.2 |
| | Government/Party Personnel | 17 | 4.0 |
| Total | | 426 | 100.0 |
Table 2. Descriptive Statistics and Tests of Normality for Each Dimension.

| Variables | Second-Order Variables | Mean | Standard Deviation (SD) | Skewness | Kurtosis | Population Mean (M) | Population SD |
|---|---|---|---|---|---|---|---|
| ES | ES1 | 4.02 | 0.996 | −0.941 | 0.367 | 0.77739 | 0.604 |
| | ES2 | 4.13 | 0.953 | −1.129 | 1.088 | | |
| | ES3 | 3.88 | 0.989 | −0.766 | 0.167 | | |
| | ES4 | 4.10 | 0.952 | −0.980 | 0.591 | | |
| | ES5 | 4.04 | 1.006 | −1.037 | 0.728 | | |
| IFS | IFS1 | 4.04 | 1.020 | −1.039 | 0.641 | 0.82967 | 0.688 |
| | IFS2 | 4.12 | 0.953 | −1.050 | 0.819 | | |
| | IFS3 | 3.90 | 1.044 | −0.776 | 0.029 | | |
| | IFS4 | 3.96 | 1.033 | −0.843 | 0.149 | | |
| ITS | ITS1 | 3.68 | 1.091 | −0.590 | −0.362 | 0.92553 | 0.857 |
| | ITS2 | 3.77 | 1.024 | −0.598 | −0.276 | | |
| | ITS3 | 3.49 | 1.115 | −0.502 | −0.452 | | |
| | ITS4 | 3.74 | 1.142 | −0.687 | −0.341 | | |
| TS | TS1 | 2.13 | 1.088 | 0.784 | −0.023 | 0.91036 | 0.829 |
| | TS2 | 2.04 | 1.045 | 0.876 | 0.188 | | |
| | TS3 | 2.34 | 1.125 | 0.557 | −0.458 | | |
| | TS4 | 2.27 | 1.135 | 0.689 | −0.238 | | |
| | TS5 | 2.42 | 1.166 | 0.565 | −0.430 | | |
| IR | IR1 | 2.18 | 1.060 | 0.767 | 0.089 | 0.83265 | 0.693 |
| | IR2 | 2.26 | 1.117 | 0.708 | −0.163 | | |
| | IR3 | 2.36 | 1.176 | 0.621 | −0.406 | | |
| | IR4 | 2.07 | 1.002 | 0.756 | 0.013 | | |
| | IR5 | 2.01 | 0.972 | 0.851 | 0.274 | | |
| | IR6 | 2.23 | 1.074 | 0.677 | −0.131 | | |
| AIA | AIA1 | 3.88 | 1.040 | −0.753 | −0.031 | 0.84305 | 0.711 |
| | AIA2 | 3.82 | 1.028 | −0.625 | −0.202 | | |
| | AIA3 | 3.57 | 1.083 | −0.467 | −0.378 | | |
| | AIA4 | 3.72 | 1.080 | −0.586 | −0.360 | | |
| | AIA5 | 3.55 | 1.107 | −0.492 | −0.409 | | |
ES = Emotional Support, IFS = Informational Support, ITS = Instrumental Support, TS = Technostress, IR = Innovation Resistance, AIA = AI Technology Acceptance.
Table 3. KMO Measure of Sampling Adequacy and Bartlett’s Test of Sphericity Results.

| Measure | Value |
|---|---|
| KMO Measure of Sampling Adequacy | 0.910 |
| Bartlett’s Test of Sphericity: Approximate Chi-Square | 5864.468 |
| Bartlett’s Test of Sphericity: Degrees of Freedom | 406 |
| Bartlett’s Test of Sphericity: Significance | 0.000 |
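Bartlett's statistic in Table 3 follows the standard approximation from the determinant of the item correlation matrix. A minimal sketch (the random stand-in data below is an assumption; only the 29-item, n = 426 dimensions come from the study, which is why the degrees of freedom reproduce Table 3's 406):

```python
import numpy as np

def bartlett_sphericity(data):
    """Bartlett's test of sphericity for an (n, p) data matrix.

    chi2 = -[(n - 1) - (2p + 5)/6] * ln|R|,  df = p(p - 1)/2,
    where R is the sample correlation matrix of the p items.
    """
    n, p = data.shape
    R = np.corrcoef(data, rowvar=False)
    sign, logdet = np.linalg.slogdet(R)   # numerically stable log-determinant
    chi2 = -((n - 1) - (2 * p + 5) / 6) * logdet
    return chi2, p * (p - 1) // 2

rng = np.random.default_rng(1)
X = rng.normal(size=(426, 29))            # same shape as the study's item data
chi2, df = bartlett_sphericity(X)
print(df)                                 # 406, matching Table 3
```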
Table 4. Cronbach’s α Reliability Test Results for Each Latent Variable.

| Construct | Cronbach’s Alpha | Number of Items |
|---|---|---|
| ES | 0.853 | 5 |
| IFS | 0.836 | 4 |
| ITS | 0.868 | 4 |
| TS | 0.877 | 5 |
| IR | 0.870 | 6 |
| AIA | 0.849 | 5 |
ES = Emotional Support, IFS = Informational Support, ITS = Instrumental Support, TS = Technostress, IR = Innovation Resistance, AIA = AI Technology Acceptance.
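The alpha values in Table 4 follow the usual variance-ratio formula. A minimal sketch on synthetic stand-in scores (the raw survey responses are not reproduced here):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, k_items) score matrix:
    alpha = k/(k - 1) * (1 - sum of item variances / variance of total score).
    """
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_var / total_var)

# Five items driven by one common factor, roughly mimicking the internal
# consistency levels reported in Table 4.
rng = np.random.default_rng(0)
factor = rng.normal(size=(200, 1))
scores = factor + 0.8 * rng.normal(size=(200, 5))
print(round(cronbach_alpha(scores), 3))
```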
Table 5. Total Variance Explained.

| Component | Initial Eigenvalues: Total | % of Variance | Cumulative % | Extraction Sums of Squared Loadings: Total | % of Variance | Cumulative % | Rotation Sums of Squared Loadings: Total | % of Variance | Cumulative % |
|---|---|---|---|---|---|---|---|---|---|
| 1 | 8.352 | 28.798 | 28.798 | 8.352 | 28.798 | 28.798 | 3.572 | 12.319 | 12.319 |
| 2 | 2.690 | 9.275 | 38.073 | 2.690 | 9.275 | 38.073 | 3.418 | 11.785 | 24.104 |
| 3 | 2.343 | 8.081 | 46.154 | 2.343 | 8.081 | 46.154 | 3.218 | 11.096 | 35.200 |
| 4 | 2.125 | 7.328 | 53.482 | 2.125 | 7.328 | 53.482 | 3.190 | 10.999 | 46.199 |
| 5 | 2.029 | 6.996 | 60.478 | 2.029 | 6.996 | 60.478 | 2.870 | 9.897 | 56.095 |
| 6 | 1.467 | 5.059 | 65.537 | 1.467 | 5.059 | 65.537 | 2.738 | 9.442 | 65.537 |
| 7 | 0.650 | 2.242 | 67.779 | | | | | | |
| 8 | 0.621 | 2.140 | 69.919 | | | | | | |
| 9 | 0.586 | 2.021 | 71.940 | | | | | | |
| 10 | 0.579 | 1.998 | 73.938 | | | | | | |
| 11 | 0.541 | 1.864 | 75.802 | | | | | | |
| 12 | 0.525 | 1.811 | 77.614 | | | | | | |
| 13 | 0.503 | 1.734 | 79.348 | | | | | | |
| 14 | 0.497 | 1.715 | 81.063 | | | | | | |
| 15 | 0.476 | 1.643 | 82.706 | | | | | | |
| 16 | 0.451 | 1.557 | 84.262 | | | | | | |
| 17 | 0.435 | 1.500 | 85.762 | | | | | | |
| 18 | 0.426 | 1.469 | 87.231 | | | | | | |
| 19 | 0.410 | 1.412 | 88.644 | | | | | | |
| 20 | 0.396 | 1.367 | 90.011 | | | | | | |
| 21 | 0.387 | 1.333 | 91.344 | | | | | | |
| 22 | 0.373 | 1.288 | 92.631 | | | | | | |
| 23 | 0.350 | 1.206 | 93.837 | | | | | | |
| 24 | 0.336 | 1.157 | 94.994 | | | | | | |
| 25 | 0.319 | 1.100 | 96.094 | | | | | | |
| 26 | 0.312 | 1.075 | 97.169 | | | | | | |
| 27 | 0.304 | 1.049 | 98.217 | | | | | | |
| 28 | 0.266 | 0.917 | 99.134 | | | | | | |
| 29 | 0.251 | 0.866 | 100.000 | | | | | | |
Extraction method: Principal Component Analysis.
Table 6. Rotated Component Matrix a.

| Item | Loading | Item | Loading | Item | Loading |
|---|---|---|---|---|---|
| ES1 | 0.761 | TS1 | 0.782 | AIA1 | 0.726 |
| ES2 | 0.794 | TS2 | 0.795 | AIA2 | 0.772 |
| ES3 | 0.736 | TS3 | 0.786 | AIA3 | 0.748 |
| ES4 | 0.778 | TS4 | 0.791 | AIA4 | 0.791 |
| ES5 | 0.780 | TS5 | 0.774 | AIA5 | 0.758 |
| IFS1 | 0.762 | IR1 | 0.674 | | |
| IFS2 | 0.796 | IR2 | 0.718 | | |
| IFS3 | 0.809 | IR3 | 0.722 | | |
| IFS4 | 0.750 | IR4 | 0.751 | | |
| ITS1 | 0.819 | IR5 | 0.667 | | |
| ITS2 | 0.796 | IR6 | 0.718 | | |
| ITS3 | 0.807 | TS | | | |

Each loading is the item’s loading on its own component in the six-component varimax solution (the six components correspond to the six constructs, with IR items loading on Component 1); cross-loadings below the display threshold are suppressed.
Extraction Method: Principal Component Analysis. Rotation Method: Varimax with Kaiser Normalization. a. Rotation converged after five iterations. ES = Emotional Support, IFS = Informational Support, ITS = Instrumental Support, TS = Technostress, IR = Innovation Resistance, AIA = AI Technology Acceptance.
Table 7. Measurement Model Results: Factor Loadings, AVE, and CR.

| Item | Path | Construct | Estimate | AVE | CR |
|---|---|---|---|---|---|
| ES1 | <--- | ES | 0.708 | 0.540 | 0.854 |
| ES2 | <--- | ES | 0.782 | | |
| ES3 | <--- | ES | 0.721 | | |
| ES4 | <--- | ES | 0.748 | | |
| ES5 | <--- | ES | 0.711 | | |
| IFS1 | <--- | IFS | 0.734 | 0.564 | 0.838 |
| IFS2 | <--- | IFS | 0.805 | | |
| IFS3 | <--- | IFS | 0.731 | | |
| IFS4 | <--- | IFS | 0.732 | | |
| ITS1 | <--- | ITS | 0.800 | 0.623 | 0.868 |
| ITS2 | <--- | ITS | 0.782 | | |
| ITS3 | <--- | ITS | 0.799 | | |
| ITS4 | <--- | ITS | 0.775 | | |
| TS1 | <--- | TS | 0.798 | 0.533 | 0.851 |
| TS2 | <--- | TS | 0.807 | | |
| TS3 | <--- | TS | 0.757 | | |
| TS4 | <--- | TS | 0.734 | | |
| TS5 | <--- | TS | 0.741 | | |
| IR1 | <--- | IR | 0.756 | 0.531 | 0.871 |
| IR2 | <--- | IR | 0.711 | | |
| IR3 | <--- | IR | 0.732 | | |
| IR4 | <--- | IR | 0.740 | | |
| IR5 | <--- | IR | 0.709 | | |
| IR6 | <--- | IR | 0.720 | | |
| AIA1 | <--- | AIA | 0.728 | 0.530 | 0.849 |
| AIA2 | <--- | AIA | 0.722 | | |
| AIA3 | <--- | AIA | 0.701 | | |
| AIA4 | <--- | AIA | 0.767 | | |
| AIA5 | <--- | AIA | 0.724 | | |
ES = Emotional Support, IFS = Informational Support, ITS = Instrumental Support, TS = Technostress, IR = Innovation Resistance, AIA = AI Technology Acceptance. “<---“ indicates the direction of factor loading from the latent variable to its observed indicators.
Table 8. Discriminant Validity Test Results for Each Dimension of the Scale.

| | ES | IFS | ITS | TS | IR | AIA |
|---|---|---|---|---|---|---|
| ES | 0.745 | | | | | |
| IFS | 0.215 ** | 0.751 | | | | |
| ITS | 0.294 ** | 0.211 ** | 0.789 | | | |
| TS | −0.227 ** | −0.295 ** | −0.242 ** | 0.730 | | |
| IR | −0.389 ** | −0.443 ** | −0.444 ** | 0.443 ** | 0.729 | |
| AIA | 0.205 ** | 0.318 ** | 0.293 ** | −0.303 ** | −0.380 ** | 0.728 |
| √AVE | 0.745 | 0.751 | 0.789 | 0.730 | 0.729 | 0.728 |

Diagonal entries (repeated in the √AVE row) are the square roots of each construct’s AVE; off-diagonal entries are inter-construct correlations.
ES = Emotional Support, IFS = Informational Support, ITS = Instrumental Support, TS = Technostress, IR = Innovation Resistance, AIA = AI Technology Acceptance. ** At the 0.01 level (two-tailed), the correlation is significant.
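The Fornell–Larcker criterion behind Table 8 (each construct's √AVE must exceed its correlations with every other construct) can be checked mechanically. The values below are transcribed from the table itself:

```python
import numpy as np

constructs = ["ES", "IFS", "ITS", "TS", "IR", "AIA"]
sqrt_ave = np.array([0.745, 0.751, 0.789, 0.730, 0.729, 0.728])
corr = np.array([
    [1.000,  0.215,  0.294, -0.227, -0.389,  0.205],
    [0.215,  1.000,  0.211, -0.295, -0.443,  0.318],
    [0.294,  0.211,  1.000, -0.242, -0.444,  0.293],
    [-0.227, -0.295, -0.242, 1.000,  0.443, -0.303],
    [-0.389, -0.443, -0.444, 0.443,  1.000, -0.380],
    [0.205,  0.318,  0.293, -0.303, -0.380,  1.000],
])

# For each construct, the largest absolute off-diagonal correlation must be
# smaller than that construct's square-root AVE.
ok = all(sqrt_ave[i] > np.delete(np.abs(corr[i]), i).max()
         for i in range(len(constructs)))
print("Fornell-Larcker criterion satisfied:", ok)
```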
Table 9. Pearson Correlation Matrix of Latent Variables.

| | ES | IFS | ITS | TS | IR | AIA |
|---|---|---|---|---|---|---|
| ES | 1 | | | | | |
| IFS | 0.215 ** | 1 | | | | |
| ITS | 0.294 ** | 0.211 ** | 1 | | | |
| TS | −0.227 ** | −0.295 ** | −0.242 ** | 1 | | |
| IR | −0.389 ** | −0.443 ** | −0.444 ** | 0.443 ** | 1 | |
| AIA | 0.205 ** | 0.318 ** | 0.293 ** | −0.303 ** | −0.380 ** | 1 |
** At the 0.01 level (two-tailed), the correlation is significant. ES = Emotional Support, IFS = Informational Support, ITS = Instrumental Support, TS = Technostress, IR = Innovation Resistance, AIA = AI Technology Acceptance.
Table 10. Model Fit Indices for the Structural Equation Model (SEM).

| Item | CMIN/DF | NFI | TLI | CFI | RMSEA | RFI |
|---|---|---|---|---|---|---|
| Excellent Value | >1, <3 | >0.9 | >0.9 | >0.9 | <0.05 | >0.9 |
| Good Value | >3, <5 | >0.8 | >0.8 | >0.8 | <0.08 | >0.8 |
| Result | 3.172 | 0.935 | 0.944 | 0.955 | 0.060 | 0.921 |
Table 11. SEM Model Path Hypothesis Testing Results.

| Hypothesis | Path Relation | Estimate | S.E. | C.R. | p | Outcome |
|---|---|---|---|---|---|---|
| H1 | TS <--- ES | −0.173 | 0.071 | −2.438 | 0.015 | Supported |
| H2 | TS <--- IFS | −0.323 | 0.067 | −4.811 | *** | Supported |
| H3 | TS <--- ITS | −0.164 | 0.057 | −2.877 | 0.004 | Supported |
| H4 | IR <--- ES | −0.237 | 0.056 | −4.240 | *** | Supported |
| H5 | IR <--- IFS | −0.321 | 0.055 | −5.817 | *** | Supported |
| H6 | IR <--- ITS | −0.265 | 0.046 | −5.799 | *** | Supported |
| H7 | IR <--- TS | 0.249 | 0.046 | 5.416 | *** | Supported |
| H8 | AIA <--- IR | −0.205 | 0.093 | −2.209 | 0.027 | Supported |
| H9 | AIA <--- ES | 0.026 | 0.064 | 0.413 | 0.680 | Not Supported |
| H10 | AIA <--- IFS | 0.193 | 0.065 | 2.956 | 0.003 | Supported |
| H11 | AIA <--- ITS | 0.129 | 0.054 | 2.387 | 0.017 | Supported |
| H12 | AIA <--- TS | −0.124 | 0.054 | −2.310 | 0.021 | Supported |
ES = Emotional Support, IFS = Informational Support, ITS = Instrumental Support, TS = Technostress, IR = Innovation Resistance, AIA = AI Technology Acceptance. p < 0.001: ***. “<---“ represents the hypothesized structural path from the independent variable to the dependent variable.
Table 12. Results of Bootstrapped Indirect Effect Tests for Mediation Paths.

| Indirect Path | β (Original Sample) | t-Value | p-Value | 95% CI (BCa) | Significance |
|---|---|---|---|---|---|
| ITS → IR → AIA | 0.080 | 4.224 | 0.000 | [0.045, 0.117] | Significant |
| TS → IR → AIA | −0.074 | 4.484 | 0.000 | [−0.108, −0.044] | Significant |
| IFS → IR → AIA | 0.089 | 4.181 | 0.000 | [0.050, 0.130] | Significant |
| ES → IR → AIA | 0.057 | 3.838 | 0.000 | [0.029, 0.083] | Significant |
| IFS → TS → AIA | 0.043 | 2.537 | 0.011 | [0.012, 0.075] | Significant |
| ITS → TS → AIA | 0.028 | 1.983 | 0.047 | [0.004, 0.056] | Marginally significant |
| ES → TS → AIA | 0.024 | 2.102 | 0.036 | [0.005, 0.045] | Significant |
| IFS → TS → IR → AIA | 0.018 | 3.027 | 0.002 | [0.007, 0.031] | Significant |
| ITS → TS → IR → AIA | 0.012 | 2.570 | 0.010 | [0.004, 0.022] | Significant |
| ES → TS → IR → AIA | 0.010 | 2.147 | 0.032 | [0.002, 0.018] | Significant |
ES = Emotional Support, IFS = Informational Support, ITS = Instrumental Support, TS = Technostress, IR = Innovation Resistance, AIA = AI Technology Acceptance. “→” represents the directional path in the mediation pathway of the structural model.
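The resampling logic behind Table 12 can be sketched for a single-mediator path. This is a simplified percentile bootstrap on toy observed variables; the study itself reports bias-corrected (BCa) intervals estimated on the full latent-variable model:

```python
import numpy as np

def bootstrap_indirect(x, m, y, n_boot=2000, seed=0):
    """Percentile-bootstrap CI for a simple indirect effect x -> m -> y.

    The indirect effect is a*b, where a is the x->m slope and b is the
    m->y slope controlling for x, re-estimated on each bootstrap resample.
    """
    rng = np.random.default_rng(seed)
    n, est = len(x), []
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)
        xb, mb, yb = x[idx], m[idx], y[idx]
        a = np.polyfit(xb, mb, 1)[0]                    # x -> m slope
        D = np.column_stack([np.ones(n), mb, xb])
        b = np.linalg.lstsq(D, yb, rcond=None)[0][1]    # m -> y slope given x
        est.append(a * b)
    lo, hi = np.percentile(est, [2.5, 97.5])
    return float(np.mean(est)), (float(lo), float(hi))

# Toy mediation with a true indirect effect of 0.5 * 0.5 = 0.25.
rng = np.random.default_rng(3)
x = rng.normal(size=300)
m = 0.5 * x + 0.5 * rng.normal(size=300)
y = 0.5 * m + 0.5 * rng.normal(size=300)
ab, (lo, hi) = bootstrap_indirect(x, m, y)
print(f"indirect = {ab:.3f}, 95% CI [{lo:.3f}, {hi:.3f}]")
```

A confidence interval excluding zero, as for every row of Table 12, is what marks the mediation path as significant.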
Table 13. 10-Fold Cross-Validation Results of the ANN Model.

| Fold | MSE | RMSE | MAE | R² |
|---|---|---|---|---|
| 1 | 0.008172 | 0.090398 | 0.074137 | 0.989258 |
| 2 | 0.014266 | 0.119441 | 0.095655 | 0.980289 |
| 3 | 0.016176 | 0.127184 | 0.107207 | 0.971489 |
| 4 | 0.011329 | 0.106438 | 0.088329 | 0.981027 |
| 5 | 0.014383 | 0.119928 | 0.100371 | 0.977785 |
| 6 | 0.017531 | 0.132403 | 0.101754 | 0.971974 |
| 7 | 0.016536 | 0.128592 | 0.090927 | 0.973618 |
| 8 | 0.043787 | 0.209254 | 0.164615 | 0.953393 |
| 9 | 0.022422 | 0.149738 | 0.093881 | 0.974312 |
| 10 | 0.015153 | 0.123099 | 0.098321 | 0.974764 |
| Average | 0.017975 | 0.130647 | 0.101520 | 0.974791 |
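The fold-wise metrics in Table 13 follow the standard k-fold procedure. A minimal sketch on synthetic stand-in data (the network here is deliberately smaller than the study's 64-32 architecture to keep the illustration fast; all data and hyperparameters are assumptions):

```python
import numpy as np
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score
from sklearn.model_selection import KFold
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(7)
X = rng.uniform(1, 5, size=(426, 5))
y = 0.4 * X[:, 1] - 0.3 * X[:, 4] + 0.1 * rng.normal(size=426)

rows = []
for train_idx, test_idx in KFold(n_splits=10, shuffle=True,
                                 random_state=7).split(X):
    model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=500,
                         random_state=7).fit(X[train_idx], y[train_idx])
    pred = model.predict(X[test_idx])
    mse = mean_squared_error(y[test_idx], pred)
    rows.append((mse, np.sqrt(mse),
                 mean_absolute_error(y[test_idx], pred),
                 r2_score(y[test_idx], pred)))

# Column-wise averages over the 10 folds, as in Table 13's final row.
mse, rmse, mae, r2 = np.mean(rows, axis=0)
print(f"avg MSE={mse:.4f} RMSE={rmse:.4f} MAE={mae:.4f} R2={r2:.4f}")
```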
Table 14. NCA Results for Latent Variables.

| Variable | CE-FDH | CR-FDH | Notes |
|---|---|---|---|
| ES | 0.055 | 0.044 | Exhibits weak necessity for the outcome variable, slightly reduced under CR-FDH. |
| IFS | 0.105 | 0.065 | Represents a strong necessary condition (exceeding the empirical 0.1 threshold under CE-FDH). |
| IR | 0 | 0 | Not a necessary condition and does not constitute a bottleneck. |
| ITS | 0.059 | 0.044 | Serves as a marginal necessary condition, with a smaller effect than IFS. |
| TS | 0 | 0 | Similar to IR, does not constitute a necessary condition. |
ES = Emotional Support, IFS = Informational Support, ITS = Instrumental Support, TS = Technostress, IR = Innovation Resistance.
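The CE-FDH effect sizes in Table 14 measure how much of the data's scope lies in the "empty" zone above a step-shaped ceiling line. A simplified sketch (a rough approximation of the CE-FDH idea on toy data, not a reimplementation of the NCA software the study used):

```python
import numpy as np

def ce_fdh_effect_size(x, y):
    """Necessity effect size d under a CE-FDH-style step ceiling (a sketch).

    The ceiling is taken as the running maximum of y over increasing x; d is
    the empty area above that step function divided by the data's scope.
    """
    order = np.argsort(x)
    xs, ys = np.asarray(x)[order], np.asarray(y)[order]
    ceiling = np.maximum.accumulate(ys)        # non-decreasing step ceiling
    empty = np.sum(np.diff(xs) * (ys.max() - ceiling[:-1]))
    scope = (xs[-1] - xs[0]) * (ys.max() - ys.min())
    return float(empty / scope) if scope > 0 else 0.0

# A clear ceiling pattern (y never exceeds x) yields a sizeable d, while the
# unconstrained case yields d near zero -- the IFS vs. IR/TS contrast above.
rng = np.random.default_rng(5)
x = rng.uniform(0, 1, 500)
y = x * rng.uniform(0, 1, 500)                 # high y requires high x
print(round(ce_fdh_effect_size(x, y), 3))
```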

Share and Cite

MDPI and ACS Style

Feng, Y.; Feng, Y.; Liu, Z. The Influence of Perceived Organizational Support on Sustainable AI Adoption in Digital Transformation: An Integrated SEM–ANN–NCA Model. Sustainability 2025, 17, 11373. https://doi.org/10.3390/su172411373
