Article

Investigating Decision-Support Chatbot Acceptance Among Professionals: An Application of the UTAUT Model in a Marketing and Sales Context

Institute for Machine Learning and Analytics, Offenburg University of Applied Sciences, 77652 Offenburg, Germany
* Author to whom correspondence should be addressed.
J. Theor. Appl. Electron. Commer. Res. 2026, 21(4), 113; https://doi.org/10.3390/jtaer21040113
Submission received: 23 February 2026 / Revised: 25 March 2026 / Accepted: 26 March 2026 / Published: 7 April 2026
(This article belongs to the Special Issue Emerging Technologies and Marketing Innovation)

Abstract

This study investigates the acceptance of an AI-powered decision-support chatbot among professionals in a marketing and sales context, addressing a gap in technology acceptance research by examining data-intensive decision environments that remain underexplored. Building on the Unified Theory of Acceptance and Use of Technology (UTAUT), the study proposes an extended model incorporating Behavioral Intention, Performance Expectancy, Effort Expectancy, Social Influence, Output Quality, Time Saving, Source Trustworthiness, Cognitive Load, and Chatbot Self-Efficacy. An experimental study was conducted with 106 professionals using a chatbot-enhanced business analytics platform to complete marketing KPI analysis tasks. Data were analyzed using Partial Least Squares Structural Equation Modeling (PLS-SEM). The results demonstrate that Behavioral Intention to use decision-support chatbots is significantly influenced by Performance Expectancy, Effort Expectancy, and Social Influence. Performance Expectancy is strongly driven by Output Quality, Time Saving, and Source Trustworthiness, while Effort Expectancy is significantly shaped by reduced Cognitive Load and higher Chatbot Self-Efficacy. The findings suggest that chatbot acceptance in professional decision-making depends not only on usability and performance beliefs but also on cognitive relief, trust in information sources, and efficiency gains, highlighting important implications for both theory and the design of AI-based decision-support systems.

1. Introduction

The integration of artificial intelligence (AI) into business operations has accelerated dramatically in recent years, with conversational agents increasingly deployed across various organizational functions [1,2,3].
Chatbots, as AI-powered conversational interfaces, have evolved beyond their initial implementation as customer service tools to become sophisticated decision-support systems capable of processing complex data and generating actionable insights to inform management decisions [4]. The potential for these technologies to transform professional decision processes is particularly salient in data-intensive domains such as marketing and sales, where timely access to performance metrics and their interpretation can significantly impact strategic outcomes [5,6].
Professional decision-making in contemporary organizations involves processing substantial volumes of information from diverse sources under significant time constraints [7,8]. The complexity of this process is amplified in marketing and sales contexts, where professionals must simultaneously evaluate multiple key performance indicators (KPIs) spanning customer engagement, lead generation, conversion rates, and sales efficiency to formulate effective strategies [9,10,11]. Traditional decision-support systems often struggle to provide integrated, real-time insights that balance comprehensiveness and accessibility [12].
AI-powered chatbots offer a transformative solution to this challenge by providing natural-language interfaces to complex data analytics, thereby democratizing access to business intelligence and facilitating more agile decision-making [13,14]. By simplifying access to analytical insights, such systems may improve decision quality, efficiency, and user acceptance in data-driven electronic commerce environments, where organizations increasingly rely on integrated analytics platforms to monitor performance, customer behavior, and marketing activities.
Despite the theoretical advantages of AI chatbots for professional decision support, their successful implementation hinges critically on user acceptance and sustained use, factors that cannot be presumed solely based on technological capabilities [15,16].
Previous research has demonstrated that numerous psychological, social, and organizational factors influence technology acceptance across different contexts. The Unified Theory of Acceptance and Use of Technology (UTAUT) has emerged as a robust framework for understanding these multifaceted determinants of technology adoption and sustained usage [17].
The present study addresses a significant research gap by applying the UTAUT model to examine the acceptance of AI chatbots explicitly designed for professional decision support in marketing and sales environments. While previous research has examined chatbot acceptance in various domains [18,19,20], the unique context of professional decision support, particularly in marketing and sales operations, remains underexplored. This context presents distinctive challenges and opportunities given the high-stakes nature of professional decisions [21], the complexity of marketing and sales KPIs [22], and the potential organizational resistance to AI-mediated decision processes [23,24].
The practical significance of this research is substantial, as organizations increasingly invest in AI technologies to enhance decision-making processes, often without a clear understanding of the factors that determine their acceptance and effective use by professional users [25,26]. By identifying the key determinants of chatbot acceptance in this specific context, this study provides actionable insights for technology developers, organizational change managers, and professionals contemplating the implementation of AI-powered decision-support systems. These insights can inform design specifications, implementation strategies, and organizational policies to maximize technology acceptance and return on investment.
From a theoretical perspective, this study contributes to the technology acceptance literature in several ways. Specifically, it extends the Unified Theory of Acceptance and Use of Technology (UTAUT) to the emerging context of AI-driven decision-support chatbots used in professional analytical environments. While prior research has primarily examined chatbot adoption in customer service, e-commerce, or social interaction settings, considerably less attention has been paid to chatbots that support complex managerial decision-making tasks.
Additionally, the study integrates cognitive and informational determinants into the UTAUT framework, including cognitive load, source trustworthiness, time saving, and output quality. These constructs capture key characteristics of human–AI interaction in data-intensive decision environments, where users must interpret AI-generated insights and incorporate them into analytical workflows.
Furthermore, by examining chatbot acceptance within a professional marketing and sales analytics context, the research highlights how perceptions of usability, information reliability, and efficiency gains jointly shape technology adoption in decision-support systems. In doing so, the study contributes to a more nuanced understanding of how conversational AI technologies influence technology acceptance in complex organizational settings.
The primary objectives of this research are threefold: (1) to identify the critical factors influencing professional acceptance of AI chatbots for decision support in marketing and sales contexts; (2) to assess the relative importance of these factors in determining Behavioral Intention; and (3) to develop an enhanced theoretical model that specifically addresses the unique aspects of decision-support chatbot acceptance.

2. Theoretical Framework

The theoretical framework underpinning this study synthesizes constructs from the Unified Theory of Acceptance and Use of Technology (UTAUT) to explain user intentions to adopt decision-support chatbots. Grounded in the foundational work of Venkatesh et al. [17] on the UTAUT, this integrative model encompasses both technological affordances and psychological evaluations that jointly shape adoption behavior.
The central proposition asserts that Behavioral Intention to use decision-support chatbots arises from three primary antecedents: Performance Expectancy, Effort Expectancy, and Social Influence. Together, these form a multilayered explanatory structure that captures the interplay between individual perceptions and organizational dynamics influencing technology acceptance [17].

2.1. Behavioral Intention

Behavioral Intention, as conceptualized within the Unified Theory of Acceptance and Use of Technology (UTAUT), is the central dependent variable in technology acceptance research. Venkatesh et al. [17] developed this construct by synthesizing theoretical foundations from multiple predecessor models, defining Behavioral Intention as the strength of one’s intention to perform a specified behavior.
In technology adoption contexts, Behavioral Intention reflects users’ conscious plans to engage with technological systems, serving as a psychological bridge between cognitive evaluations and actual usage behavior [17,27,28,29,30,31]. This conceptualization has proven robust across diverse technological domains, consistently demonstrating predictive validity for subsequent technology adoption behaviors [32,33,34,35].

2.2. Performance Expectancy

Performance Expectancy also builds upon Venkatesh et al.’s [17] Unified Theory of Acceptance and Use of Technology (UTAUT), which conceptualized it as an integration of related constructs from foundational technology adoption theories. Venkatesh et al. [17] formally defined Performance Expectancy as the degree to which an individual believes that using the system will help them attain gains in job performance.
In chatbot adoption studies, Performance Expectancy consistently emerges as a primary determinant of usage intentions [36]. Evidence from technology adoption research confirms the robust direct influence of Performance Expectancy on Behavioral Intention across diverse technological domains [37].
Within the context of professional decision-support chatbots, Performance Expectancy manifests as users’ belief that the technology improves decision-making accuracy, accelerates key performance indicator (KPI) analysis, and generates actionable managerial insights [38,39,40]. For professionals operating in marketing and sales environments, these perceptions translate into enhanced strategic outcomes through real-time customer engagement analytics and optimized sales efficiency [41].
H1. 
Performance Expectancy positively influences users’ Behavioral Intention toward using the decision-support chatbot.

2.3. Effort Expectancy

Effort Expectancy, also derived from the Unified Theory of Acceptance and Use of Technology (UTAUT), refers to the degree of ease associated with using the system, emphasizing the intuitiveness of the user interface and the minimization of cognitive demand during interaction [17].
Recent research on technology highlights the critical importance of this construct, demonstrating that users are likely to discontinue using tools that require excessive learning effort, even when those tools are functionally superior [42,43]. Empirical evidence from studies further indicates that the direct effect of Effort Expectancy on Behavioral Intention intensifies under conditions of cognitive resource constraints, which are typical in professional decision-making environments [44,45,46]. Findings from decision-support system research demonstrate that improvements in interface intuitiveness and usability significantly increase users’ intentions to adopt [47,48,49].
H2. 
Effort Expectancy positively influences users’ Behavioral Intention to use the decision-support chatbot.

2.4. Social Influence

Social Influence, a core construct within the Unified Theory of Acceptance and Use of Technology (UTAUT), is defined as the degree to which an individual perceives that important others believe they should use the system [17].
In organizational contexts, Social Influence reflects the extent to which peer opinions, supervisory expectations, and broader organizational culture collectively shape individual technology adoption behaviors [50]. Empirical studies consistently demonstrate that Social Influence exerts a significant direct effect on Behavioral Intention to use chatbots across various technological and professional contexts [18,20].
H3. 
Social Influence positively influences users’ Behavioral Intention to use the decision-support chatbot.

2.5. Output Quality

Output Quality encompasses the accuracy, relevance, completeness, and format of the information a chatbot presents [51,52,53]. High-quality outputs not only foster user trust but also enhance the system’s perceived added value. Within UTAUT-related frameworks and information systems (IS) success models, such as the DeLone & McLean model [54], Output Quality is widely recognized as a fundamental determinant of positive technology evaluations and user acceptance.
A growing body of research suggests that superior chatbot Output Quality enhances Performance Expectancy by facilitating more effective, error-free decision-making [55]. When chatbots deliver precise analytics, contextually relevant insights, and tailored recommendations, professionals can base their decisions on robust, timely, and easily interpretable information. Empirical evidence from customer service and management domains demonstrates that such high-caliber outputs lead users to anticipate greater improvements in work performance, thereby reinforcing the link between Output Quality and Performance Expectancy [36,56,57,58,59,60,61].
H4. 
Perceived Output Quality of the chatbot positively influences Performance Expectancy.

2.6. Time Saving

Time Saving is widely recognized as a critical benefit of adopting chatbot technologies, particularly in professional environments where decision-making efficiency is paramount [62]. Within the context of UTAUT and related models, Time Saving is conceptually linked to Performance Expectancy, as users are more likely to perceive a system as valuable when it demonstrably reduces the time required to complete essential tasks [63,64].
Chatbots can expedite the retrieval and synthesis of complex information, automate repetitive tasks, and provide instantaneous responses to user queries, thereby contributing to a more efficient workflow [65]. Empirical research supports this association, showing that users’ perceptions of chatbots’ time-saving benefits strongly predict their Performance Expectancy. Notably, these time-saving advantages have been empirically verified primarily in IT-intensive activities, where automation and rapid data access significantly reduce operational workload [66,67,68]. In service and management contexts, the deployment of AI-powered chatbots has significantly reduced response and handling times, thereby enhancing perceived work performance and overall productivity [69].
H5. 
Perceived Time Saving through chatbot use positively influences Performance Expectancy.

2.7. Source Trustworthiness

Source Trustworthiness, grounded in source credibility theory and social psychology research, refers to the degree to which users perceive an information source as reliable, honest, and unbiased [70,71]. Originally conceptualized in persuasion research by Hovland & Weiss [72], Source Trustworthiness encompasses key dimensions such as integrity, benevolence, and the predictability of information providers [72,73,74,75].
In the context of chatbots, Source Trustworthiness manifests through users’ confidence in the AI system’s data provenance, algorithmic transparency, and the perceived absence of malicious intent or systematic bias. When users believe that chatbot-generated information originates from credible, verifiable sources, their overall trust in the system and, consequently, its perceived usefulness tend to increase. Empirical validations of Source Trustworthiness in technology adoption research reveal robust associations with Performance Expectancy. Studies demonstrate that source credibility directly influences users’ performance expectations, as confidence in the reliability of chatbot information is often a prerequisite for recognizing its functional utility [18,76,77,78,79].
H6. 
Source Trustworthiness of the chatbot positively influences Performance Expectancy.

2.8. Cognitive Load

Cognitive Load, grounded in Sweller’s [80] Cognitive Load Theory, refers to the mental effort required by working memory during problem-solving or learning tasks. This framework differentiates among three types of Cognitive Load: intrinsic load, representing the inherent complexity of the task; extraneous load, resulting from suboptimal system or instructional design; and germane load, reflecting the cognitive resources devoted to schema construction and automation [81].
In the context of chatbot interactions, Cognitive Load encompasses the mental effort required to formulate queries, interpret AI-generated responses, and integrate chatbot-provided information into existing knowledge structures [82,83,84]. High cognitive demands may hinder user engagement and perceived usability, whereas systems that minimize cognitive strain tend to promote more seamless interaction.
Empirical research in educational and interactive learning contexts demonstrates that chatbot-mediated feedback can effectively reduce Cognitive Load, particularly when users engage with complex or conceptually demanding material. This reduction in cognitive effort has been shown to enhance perceived ease of use, a key component of Effort Expectancy in technology acceptance models [85,86,87,88].
H7. 
Reduced Cognitive Load through chatbot use positively influences Effort Expectancy.

2.9. Chatbot Self-Efficacy

Computer Self-Efficacy, initially conceptualized by Compeau & Higgins [89], refers to an individual’s self-assessment of their ability to use computer systems to perform specific tasks effectively.
Within the context of chatbot adoption, Chatbot Self-Efficacy reflects users’ confidence in their ability to interact effectively with conversational interfaces, navigate dynamic dialogue structures, and extract relevant or actionable information through natural language interactions [90,91,92].
Empirical studies across various technological domains consistently highlight Self-Efficacy as a significant determinant of Effort Expectancy, and demonstrate its strong association with perceived ease of use [93,94]. In particular, research on generative artificial intelligence has confirmed that individuals with higher levels of Self-Efficacy exhibit stronger intentions to adopt and use technology efficiently [95].
H8. 
Chatbot Self-Efficacy positively influences Effort Expectancy.

3. Methodology

3.1. Measurement Instrument

The research model (Figure 1) comprises nine latent constructs, each operationalized through multiple reflective measurement items. To ensure content validity, all items were adapted from well-established instruments or concepts in the extant literature. Whenever feasible, items were derived from prior studies on artificial intelligence and business analytics to maintain contextual relevance for this research.
During the instrument development phase, a pre-test was conducted with three experienced users of data analytics platforms. Their feedback was used to refine the wording of items, ensuring conceptual clarity and improving comprehensibility. All constructs were measured using a seven-point Likert scale ranging from “strongly disagree” (1) to “strongly agree” (7). Table 1 presents the constructs, their associated measurement items, and corresponding conceptual sources.

3.2. Data Collection

Data for this study were collected through an experimental research design conducted in collaboration with the Stuttgart Media University (HdM) and the company Valueworks GmbH. Participants were recruited via the professional and academic networks of both institutions. Upon registration, participants could select individual time slots for their participation in the study.
Although the recruitment relied on professional and academic networks, the study targeted individuals with experience in business analytics, marketing, or data-driven decision environments. This sampling strategy ensured that participants were familiar with analytical tasks and thus able to meaningfully evaluate the usefulness of the chatbot-supported system.
The experimental environment was based on a business analytics platform developed by Valueworks GmbH, a technology company headquartered in Karlsruhe, Germany [102]. The platform, designed to support marketing and sales decision-making, was integrated with an AI-powered decision-support chatbot.
At the beginning of the experiment, participants were shown a tutorial video introducing the platform’s core features. This instructional video explained the platform’s main navigation structure, key functions, and analytical capabilities, excluding the chatbot component. Following this introduction, participants were assigned three marketing-related analytical tasks that required them to interpret key performance indicators (KPIs) using the platform. This first phase was designed to establish a baseline understanding of how the system operates without AI assistance.
In the second phase, participants were granted access to the chatbot feature through an additional menu option within the platform. They were then asked to complete three marketing-related problem-solving tasks, this time using the chatbot as a decision-support tool. The chatbot enabled participants to ask questions in natural language, request clarifications, and receive progressively refined, detailed explanations and data-driven insights relevant to the tasks.
After the experimental phase, participants were asked to complete a structured questionnaire measuring the constructs outlined in Table 1 (e.g., Performance Expectancy, Effort Expectancy, Cognitive Load, Source Trustworthiness). In addition, demographic variables such as age, gender, academic background, and company size were collected to facilitate subsequent statistical analysis.

3.3. Data Analysis

To empirically evaluate the proposed research model and test the associated hypotheses, this study employed Structural Equation Modeling (SEM) using the Partial Least Squares (PLS) approach. SEM was selected for its ability to simultaneously estimate multiple interdependent relationships among latent constructs, including direct, mediating, and moderating effects. Compared with traditional linear regression techniques, SEM provides a more comprehensive analytical framework, allowing the concurrent examination of measurement and structural components within complex theoretical models. Given that PLS-SEM does not rely on distributional assumptions and does not yield conventional parametric significance tests for model parameters, non-parametric bootstrapping was employed to estimate standard errors and to compute t-values for hypothesis testing. Following best-practice recommendations, 10,000 bootstrap resamples were generated to obtain robust estimates of the significance levels for both direct and indirect effects within the structural model [103]. Following the methodological guidelines proposed by Hair et al. [104], a two-step analytical procedure was adopted. In the first stage, the measurement model was evaluated to examine the reliability and validity of the constructs. Subsequently, in the second stage, the structural model was analyzed to test the hypothesized relationships among the latent variables.

4. Results

4.1. Descriptive Statistics of the Sample

A total of 106 participants completed the experimental study and subsequent questionnaire. As shown in Table 2, 64.15% of the respondents identified as male, while 35.85% identified as female. The sample demonstrated a diverse age distribution: 14.15% of participants were aged 20–30, 50.94% were aged 31–40, 21.70% were aged 41–50, and 13.21% were aged 51+. The sample size is considered adequate for PLS-SEM analysis. Following the “10-times rule” and recent recommendations in PLS methodology, the minimum sample size should exceed ten times the maximum number of structural paths pointing at a latent construct [105,106]. In the proposed research model, the highest number of incoming paths is three (for Performance Expectancy and Behavioral Intention), suggesting a minimum sample size of 30 observations. Therefore, the final sample of 106 participants substantially exceeds this threshold and provides sufficient statistical power for model estimation.
In terms of educational attainment, most of the respondents held advanced degrees: 67.92% possessed a master’s degree, 25.47% a bachelor’s degree, 3.77% a high school diploma, 1.89% a doctoral degree, and 0.94% held a professorship. This composition indicates a well-educated sample with substantial academic backgrounds, aligning with the professional orientation of the study’s target population.
Regarding organizational context, 13.21% of participants were employed in small enterprises (1–10 employees), 25.47% in medium-sized organizations (11–100 employees), 33.96% in larger firms (101–1000 employees), and 27.36% in very large organizations (more than 1000 employees). This heterogeneity in company size provides a balanced representation across different business environments, which is particularly relevant for understanding the acceptance of AI-based decision-support systems across organizational scales.

4.2. Assessment of the Measurement Model

The measurement model was evaluated to ensure the reliability and validity of all latent constructs before testing the structural relationships. Following established guidelines, indicator reliability, internal consistency reliability, convergent validity, and discriminant validity were examined.
First, indicator reliability was assessed by inspecting the outer loadings of each item. All standardized loadings exceeded the recommended threshold value of 0.70, demonstrating that each indicator contributed substantially to its respective construct [107].
Second, internal consistency reliability was evaluated using Cronbach’s alpha and composite reliability (ρₐ and ρ_c). The Cronbach’s alpha values for all constructs were above 0.70, meeting the minimum reliability criterion suggested by Dijkstra & Henseler [108]. Similarly, the composite reliability coefficients (minimum ρₐ = 0.792; minimum ρ_c = 0.903) surpassed the recommended cutoff of 0.70, indicating satisfactory internal consistency across all constructs [107,108].
Third, convergent validity was assessed using average variance extracted (AVE). Consistent with Fornell & Larcker [109], an AVE value of 0.50 or higher indicates acceptable convergent validity. All constructs in this study exhibited AVE scores of 0.777 or greater, confirming that each latent construct explains a substantial portion of its indicators’ variance.
Table 3 presents the detailed results for outer loadings, Cronbach’s alpha, composite reliability (ρₐ and ρ_c), and AVE for each construct.
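For readers who wish to verify such statistics, the following Python sketch shows how Cronbach’s alpha, composite reliability (ρ_c), and AVE are computed from item scores and standardized outer loadings. The three-item loadings shown are hypothetical, not the study’s estimates.

```python
import numpy as np

def cronbach_alpha(items):
    """items: (n_respondents, k_items) array of Likert-scale scores."""
    k = items.shape[1]
    item_var_sum = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var_sum / total_var)

def composite_reliability(loadings):
    """rho_c = (sum lambda)^2 / ((sum lambda)^2 + sum(1 - lambda^2))."""
    lam = np.asarray(loadings, dtype=float)
    return lam.sum() ** 2 / (lam.sum() ** 2 + (1.0 - lam ** 2).sum())

def ave(loadings):
    """Average variance extracted: mean of squared standardized loadings."""
    lam = np.asarray(loadings, dtype=float)
    return (lam ** 2).mean()

# Hypothetical outer loadings for a three-item construct:
loadings = [0.88, 0.91, 0.85]
rho_c = composite_reliability(loadings)  # ~0.912, above the 0.70 cutoff
ave_value = ave(loadings)                # 0.775, above the 0.50 cutoff
```

Because all loadings exceed 0.70, both the composite reliability and the AVE clear their respective thresholds, mirroring the pattern reported in Table 3.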
To ensure that all constructs in the measurement model represent distinct conceptual entities, discriminant validity was assessed using three complementary approaches.
Initially, the cross-loadings of each indicator were examined. As shown in Table 4, the loading of each indicator on its corresponding construct was consistently higher than its loadings on other constructs, indicating satisfactory discriminant validity at the indicator level [110].
Next, the Fornell–Larcker criterion was applied to assess construct-level discriminant validity. According to this criterion, the square root of each construct’s average variance extracted (AVE) should exceed its correlations with all other constructs [109]. The results in Table 5 confirm that this condition was met for all constructs, demonstrating that each latent variable shares more variance with its indicators than with any other construct in the model.
Third, the heterotrait–monotrait (HTMT) ratio of correlations was assessed, following the approach proposed by Henseler et al. [111]. HTMT values below 0.90 indicate adequate discriminant validity. As shown in Table 6, all HTMT ratios were well below this conservative threshold, further supporting the constructs’ distinctiveness in the proposed model.
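The HTMT computation can be sketched as follows; the two simulated item blocks are illustrative stand-ins for the study’s construct measures, not its data.

```python
import numpy as np

def htmt(items_a, items_b):
    """HTMT ratio for two constructs, given (n, k) item-score matrices:
    mean between-construct item correlation divided by the geometric mean
    of the average within-construct item correlations."""
    ka = items_a.shape[1]
    kb = items_b.shape[1]
    R = np.corrcoef(np.hstack([items_a, items_b]), rowvar=False)
    hetero = R[:ka, ka:].mean()                               # between-construct
    mono_a = R[:ka, :ka][np.triu_indices(ka, k=1)].mean()     # within construct A
    mono_b = R[ka:, ka:][np.triu_indices(kb, k=1)].mean()     # within construct B
    return hetero / np.sqrt(mono_a * mono_b)

# Two distinct latent factors should yield an HTMT well below 0.90:
rng = np.random.default_rng(7)
n = 5000
f1, f2 = rng.normal(size=n), rng.normal(size=n)
items_a = f1[:, None] + 0.6 * rng.normal(size=(n, 3))
items_b = f2[:, None] + 0.6 * rng.normal(size=(n, 3))
htmt_distinct = htmt(items_a, items_b)  # near 0 for unrelated constructs
```

When both item blocks reflect the same underlying factor, the ratio approaches 1, which is why values below the 0.90 threshold are taken as evidence of discriminant validity.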

4.3. Assessment of the Structural Model

Following confirmation of the measurement model’s reliability and validity, the structural model was evaluated to examine the hypothesized relationships among the latent constructs and assess the model’s overall explanatory power. The assessment focused on three key indicators of model quality: the coefficient of determination (R2), effect size (ƒ2), and the significance of path coefficients [112].
The coefficient of determination (R2) represents the proportion of variance in the endogenous constructs that is explained by the model’s exogenous variables. In line with the guidelines proposed by Hair et al. [112], R2 values of 0.75, 0.50, and 0.25 correspond to substantial, moderate, and weak levels of predictive accuracy, respectively. As shown in Table 7, the R2 values for the endogenous constructs ranged from 0.572 to 0.743, indicating that the model has moderate to substantial explanatory power across the tested relationships.
Beyond assessing the explanatory power of the endogenous constructs through R2 values, the effect size (ƒ2) of each exogenous construct was examined to evaluate its individual contribution to the model. The effect size provides insight into the extent to which an exogenous latent variable contributes to the explained variance of an endogenous variable when included in the model, compared to when it is omitted [104].
Following Cohen’s [113] widely cited guidelines, ƒ2 values of 0.02, 0.15, and 0.35 represent small, medium, and large effects, respectively. The results summarized in Table 8 indicate that effect sizes range from 0.160 to 0.385, suggesting predominantly medium-to-large effects across the model’s relationships. Specifically, the largest effect was observed for Output Quality on Performance Expectancy (ƒ2 = 0.385), while the smallest was observed for Social Influence on Behavioral Intention (ƒ2 = 0.160).
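The ƒ2 statistic follows directly from the change in R2 when the focal predictor is omitted from the model. A minimal Python sketch of the formula, using illustrative R2 values rather than the study’s estimates:

```python
def f_squared(r2_included, r2_excluded):
    """Cohen's f-squared: change in explained variance when the focal
    predictor is added, relative to the unexplained variance."""
    return (r2_included - r2_excluded) / (1.0 - r2_included)

# Illustrative values only (not the study's estimates):
effect = f_squared(0.70, 0.60)  # 0.333..., a medium-to-large effect per Cohen
```

Dropping a predictor that barely changes R2 therefore yields an ƒ2 near zero, while a predictor whose omission removes a tenth of the explained variance, as above, produces a medium-to-large effect.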
Before interpreting the structural path coefficients, it is essential to verify that multicollinearity does not bias the estimated relationships between latent constructs. To this end, the Variance Inflation Factor (VIF) was examined for all model relationships. Following the recommendation of Hair et al. [104], VIF values below 5.0 indicate acceptable collinearity and confirm that the predictors are statistically independent.
As shown in Table 9, all VIF values range from 1.160 to 1.874, which are well below the critical limit, indicating no multicollinearity concerns within the structural model. This finding reinforces the robustness of the subsequent path coefficient estimates and supports the model’s overall validity.
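The VIF diagnostic can be sketched in a few lines of Python: each predictor is regressed on the remaining predictors, and its VIF is 1 / (1 − R2). The simulated data below are illustrative; the study’s construct scores are not reproduced.

```python
import numpy as np

def vif(X):
    """X: (n, p) matrix of predictor scores; returns an array of p VIFs."""
    n, p = X.shape
    values = []
    for j in range(p):
        y = X[:, j]
        # Regress predictor j on all other predictors (with intercept).
        Z = np.column_stack([np.ones(n), np.delete(X, j, axis=1)])
        beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
        resid = y - Z @ beta
        r2 = 1.0 - resid.var() / y.var()
        values.append(1.0 / (1.0 - r2))
    return np.array(values)

# Independent predictors give VIFs near 1; near-duplicates inflate them.
rng = np.random.default_rng(3)
X = rng.normal(size=(2000, 3))
vif_independent = vif(X)
```

Values close to 1 indicate nearly independent predictors, consistent with the 1.160–1.874 range in Table 9, which sits well below the 5.0 cutoff of Hair et al. [104].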
The next step involved assessing the hypothesized relationships among the latent constructs by estimating path coefficients. Path coefficients represent standardized regression weights that indicate both the strength and the direction of relationships within a structural model, ranging from −1 to +1. Coefficients approaching +1 suggest a strong positive relationship, whereas values closer to −1 reflect a strong negative association [104].
To determine the statistical significance of these relationships, a non-parametric bootstrapping procedure with 10,000 subsamples was employed. This approach enables robust estimation of standard errors and t-statistics without assuming normal data distribution [104]. The resulting path coefficients and their significance levels are presented in Figure 2, with detailed results summarized in Table 10.
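The bootstrapping logic can be illustrated for a single standardized path. This is a simplified stand-in for the full PLS-SEM resampling (which re-estimates the entire model per subsample); function names are hypothetical:

```python
import numpy as np

def standardized_slope(x: np.ndarray, y: np.ndarray) -> float:
    """Standardized regression weight for a single predictor (equals Pearson r)."""
    zx = (x - x.mean()) / x.std()
    zy = (y - y.mean()) / y.std()
    return float(zx @ zy) / len(x)

def bootstrap_t(x, y, n_boot=10_000, seed=42):
    """Bootstrap standard error and t-statistic for the slope, making no
    assumption of normally distributed data."""
    rng = np.random.default_rng(seed)
    est = standardized_slope(x, y)
    n = len(x)
    draws = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, n, size=n)   # resample cases with replacement
        draws[b] = standardized_slope(x[idx], y[idx])
    se = draws.std(ddof=1)
    return est, se, est / se
```

With 10,000 subsamples, as in the study, the resulting t-statistic is compared against the usual critical values (e.g., 1.96 for p < 0.05, two-tailed).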
The findings indicate that all hypothesized direct relationships between constructs were positive and statistically significant (p < 0.01), supporting the proposed structural paths. Specifically, Cognitive Load (β = 0.450, t = 4.239, p < 0.001) and Chatbot Self-Efficacy (β = 0.399, t = 4.335, p < 0.001) exhibited strong positive effects on Effort Expectancy. Similarly, Output Quality (β = 0.416, t = 3.629, p < 0.001), Source Trustworthiness (β = 0.293, t = 4.961, p < 0.001), and Time Saving (β = 0.317, t = 2.966, p < 0.01) significantly enhanced Performance Expectancy. Moreover, Performance Expectancy (β = 0.410, t = 3.972, p < 0.001), Effort Expectancy (β = 0.334, t = 3.388, p < 0.01), and Social Influence (β = 0.258, t = 3.571, p < 0.001) exerted significant positive effects on Behavioral Intention.
In addition to the direct relationships, several significant indirect effects were identified, confirming the mediating role of Effort Expectancy and Performance Expectancy within the model. As summarized in Table 11, all indirect paths were positive and statistically significant, indicating that the influence of cognitive, technological, and trust-related constructs on Behavioral Intention is partially mediated through effort and performance-related expectations.

5. Discussion

The objective of this study was to examine the determinants influencing the adoption of a decision-support chatbot integrated into an analytical business platform. Drawing on UTAUT literature, the research model proposes eight hypotheses to explain Performance Expectancy, Effort Expectancy, and users’ Behavioral Intention to adopt the chatbot-enhanced system. Overall, the empirical results provide strong support for the theoretical framework and yield several notable insights.

5.1. Interpretation of Key Findings

The analysis confirms the substantial role of Effort Expectancy in shaping Behavioral Intention (H2). Both Cognitive Load (H7) and Chatbot Self-Efficacy (H8) exhibit substantial and significant effects on Effort Expectancy, aligning with the broader literature suggesting that reduced cognitive demands and greater confidence in technology use translate into improved perceptions of ease of use. Winkler & Söllner [84] have shown that AI-based systems can support users by reducing cognitive load through adaptive assistance and intuitive interaction mechanisms, thereby facilitating more efficient task completion. Likewise, Lan et al. [95] show that AI self-efficacy positively influences perceived usefulness and ease of use, which in turn strengthens behavioral intention toward AI.
These findings underscore that well-designed conversational AI interfaces can meaningfully reduce cognitive strain compared to traditional navigation structures, an advantage particularly relevant for data-intensive tasks.
The results also provide strong support for the antecedents of Performance Expectancy, which serves as a central predictor of Behavioral Intention (H1). All three proposed antecedents, Output Quality (H4), Time Saving (H5), and Source Trustworthiness (H6), demonstrate significant positive effects on Performance Expectancy. Among these, Output Quality has the greatest influence, suggesting that users’ perceptions of accurate, relevant, and actionable outputs are the most decisive factor in assessing the utility of decision-support chatbots. This finding is consistent with prior research emphasizing the importance of information quality in the acceptance of AI-based systems. For example, Prakash & Das [114] show that perceived information quality is a key determinant of trusting beliefs toward AI-driven conversational systems, as users are more likely to rely on systems that provide accurate, reliable, and up-to-date information.
The significant effects of Time Saving and Source Trustworthiness further reinforce the idea that users value efficiency gains and reliable information sources when evaluating AI-based decision tools. These results resonate with prior research indicating that speed and trust are central mechanisms through which intelligent systems generate perceived performance benefits. Camilleri [18] shows that source trustworthiness significantly predicts performance expectancy, as users are more likely to believe that an AI-based system will improve their job performance when the provided information is considered reliable and dependable. Similarly, Nguyen & Mailik [58] demonstrate that intelligent tools that deliver accurate information in a timely manner and adapt to user needs are perceived as more useful in knowledge-intensive work environments.
Finally, Social Influence (H3) shows a significant, albeit smaller, effect on Behavioral Intention. While Social Influence is often more prominent in early-stage or mandatory-use settings, its impact in this professional context suggests that norms and peer expectations continue to shape the acceptance of AI tools in organizational environments. Prior research on the acceptance of chatbots in an enterprise context indicates that normative beliefs are driven by peer expectations rather than formal authority. Brachten et al. [20] show that peer influence has a stronger effect on an individual’s normative beliefs than the influence of superiors, suggesting that social dynamics within work groups can significantly affect technology acceptance. This finding is especially relevant for enterprise platforms, where adoption decisions often involve both individual perceptions and broader cultural or managerial signals.
While Social Influence significantly affects Behavioral Intention, its effect size is comparatively smaller than those of Performance Expectancy and Effort Expectancy. This suggests that, in professional analytical environments, users may rely more strongly on perceived system usefulness and ease of use than on normative pressure from colleagues or supervisors.

5.2. Theoretical Implications

The findings of this study offer several important theoretical implications that enrich and extend existing knowledge on the adoption of conversational AI in analytical business environments. By integrating constructs from UTAUT with Cognitive Load theory and source credibility frameworks, the study demonstrates that chatbot-assisted decision making is shaped by a broader set of perceptual and cognitive mechanisms than previously assumed. In particular, the results highlight the central role of Cognitive Load as a key antecedent of Effort Expectancy, thereby positioning cognitive relief as an essential explanatory mechanism in technology adoption research. While the UTAUT framework traditionally emphasizes functional beliefs, such as perceived usefulness and ease of use, the present findings underscore that reducing mental effort is equally pivotal for user acceptance, especially in data-intensive decision contexts. This insight offers a meaningful extension to classic adoption models by suggesting that conversational interfaces should not be viewed solely as usability enhancers but also as cognitive-support systems that restructure how users interact with complex analytical tasks. Prior research on AI-based conversational agents supports this perspective by showing that such systems can provide real-time guidance, reduce cognitive load, and help users process complex information more effectively. Yan et al. [115] demonstrate that conversational AI can support users by breaking down complex analytical content into manageable elements and providing adaptive assistance, thereby enabling users to extract actionable insights from data-intensive environments.
Moreover, the study advances theoretical understanding by demonstrating that perceptions of Source Trustworthiness and Time Saving significantly shape Performance Expectancy, elements that have received comparatively limited attention in prior Information Systems research. The effect of Source Trustworthiness illustrates that users evaluate chatbot recommendations based not only on their functional output but also on the perceived credibility and reliability of the underlying information sources. This finding enriches emerging discussions on algorithmic transparency, explainability, and trust in AI, positioning trustworthiness as a core antecedent to perceived performance benefits in AI-mediated decision-making. Prior research supports this interpretation by showing that source credibility directly influences users’ performance expectations, as confidence in the reliability of chatbot information is often a prerequisite for recognizing its functional utility [18]. The significant impact of Time Saving further suggests that efficiency enhancements constitute a primary value mechanism of conversational AI, especially in settings characterized by information overload and high temporal pressures. These insights offer new conceptual pathways for understanding how intelligent systems produce perceived value beyond mere accuracy or usability. Prior research on AI-based chatbot systems similarly highlights that speed and responsiveness are central contributors to perceived usefulness and service performance. Andrade & Tumelero [69] show that chatbot applications improve efficiency by enabling fast, uninterrupted, and highly accessible interactions, allowing users to obtain information within seconds instead of engaging in time-consuming manual processes. Such efficiency gains are particularly relevant in environments where users must process large amounts of information within limited time frames.
Recent research on AI-driven decision-support systems in business strategy by Martins [116] also emphasizes that the value of intelligent tools extends beyond automation, as AI implementation can improve operational efficiency, reduce cognitive bias in strategic decision-making, and increase real-time flexibility in dynamic and competitive market environments. Several studies on AI-based decision-support systems likewise confirm that intelligent assistants can significantly improve decision quality and efficiency by reducing information search time, supporting the interpretation of complex data, and enabling more interactive exploration of analytical results [117,118,119]. In addition, prior studies report that the adoption of AI-based decision-support tools can contribute to measurable economic benefits, including cost reductions, improved resource allocation, and increased profitability, as intelligent systems enable organizations to optimize processes and make faster, more informed strategic decisions [116,120,121].

5.3. Practical Implications

In addition to these theoretical contributions, the findings carry several practical implications that can guide organizations and system designers in the effective implementation of decision-support chatbots. As conversational AI technologies increasingly support data-driven decision-making, organizations must ensure that such systems are designed and introduced in ways that foster user acceptance and trust, for example by providing transparent system behavior, reliable data integration, and user-friendly interaction design during the implementation process.
Organizations should prioritize the reliability and quality of chatbot-generated outputs. The results show that Output Quality and Source Trustworthiness significantly influence users’ Performance Expectancy. This suggests that decision-support chatbots should be integrated with reliable data sources and validated analytical models. Providing transparent explanations of how insights are generated and clearly indicating underlying data sources can further strengthen users’ confidence in the system. In practice, this may include features such as source references or short explanations of the applied analytical method, allowing users to better evaluate the reliability of chatbot-generated recommendations.
The design of chatbot interfaces should focus on usability and intuitive interaction to reduce perceived effort and Cognitive Load. Conversational interfaces can simplify access to complex analytical information, but poorly structured responses or unclear query mechanisms may increase the cognitive effort required from users. Organizations should therefore ensure that chatbot interactions support natural-language queries while presenting analytical insights in a structured, easily interpretable format, for example by using guided prompts, predefined query templates, interactive visualizations, or step-by-step response structures that help users navigate complex analytical tasks.
Organizations should support the adoption of decision-support chatbots through targeted training and onboarding initiatives. The results indicate that Chatbot Self-Efficacy influences Effort Expectancy, suggesting that users who feel more confident in interacting with conversational systems perceive them as easier to use. Providing practical guidance on how to formulate effective queries and interpret chatbot-generated insights can therefore facilitate user adoption and reduce resistance toward AI-supported decision-making tools. Such support may include short training sessions, onboarding tutorials, example queries, or integrated help functions within the chatbot interface.
Finally, organizations should emphasize the efficiency benefits associated with chatbot-supported analytics systems. The results indicate that perceived Time Saving positively influences Performance Expectancy, highlighting the importance of demonstrating how conversational AI can accelerate routine analytical tasks such as KPI monitoring, report generation, and exploratory data analysis. Communicating these productivity gains can increase user-perceived value of the technology and encourage broader adoption within data-driven organizational environments.

6. Limitations and Future Research

Despite offering essential insights into the determinants of user acceptance of decision-support chatbots embedded within data analytics platforms, this study is not without limitations. These limitations also provide promising avenues for future research.
The research employed an experimental design with a single prototype developed by a single organization. Although this approach ensured experimental control and ecological validity, it may limit the generalizability of the findings to other chatbot architectures, interface designs, or industry contexts. Future studies should validate the proposed model across diverse technological implementations and organizational settings, including different types of AI-enhanced decision-support systems.
The data were collected within a controlled environment where participants completed predefined analytical tasks. While this design allowed for consistent comparison, it does not fully capture the complexity of real-world use, where decision processes may be more dynamic, iterative, and multi-stakeholder. Longitudinal field studies could examine how adoption intentions and actual use behavior evolve over time as users develop familiarity, trust, and reliance on the chatbot in authentic work contexts.
Although the sample size of 106 participants is sufficient for the applied PLS-SEM methodology, future studies could benefit from larger samples to further enhance the generalizability of the findings.
The study relied on self-reported measures to assess Behavioral Intention and perceptual constructs, such as Performance Expectancy, Effort Expectancy, and Trustworthiness. Self-reported data are subject to common-method bias and may not accurately reflect actual system usage. Future research could enrich the analysis by incorporating objective behavioral data, such as log-file analytics, chatbot interaction frequency, and task completion accuracy and speed.
The participant sample was primarily drawn from individuals connected to the university network and the industry partner. Although the sample spans a broad age range and a sufficient educational distribution, it may not fully represent professional populations that rely heavily on decision-support systems. Future studies should replicate the findings with decision-makers from various industries, hierarchical levels, and technological proficiency backgrounds.
While the original UTAUT model includes moderators such as age, gender, experience, and voluntariness of use, these variables were not incorporated into the present model to maintain parsimony and to focus on the structural relationships between the core constructs and the extended explanatory variables. Future research could examine whether these moderators influence the adoption of AI-based decision-support chatbots in professional contexts.
Finally, although the structural model accounts for substantial variance in key outcome variables, additional psychological, contextual, or organizational factors may further enhance explanatory power. Future research could explore variables such as algorithmic transparency, domain-specific expertise, perceived risk, privacy concerns, or organizational readiness for AI adoption. Investigating potential moderating effects, such as task complexity or user experience with AI, would also provide deeper insights into the boundary conditions of chatbot acceptance.

7. Conclusions

This study investigated the factors shaping the adoption of a chatbot-enhanced analytical platform by integrating UTAUT elements with cognitive, informational, time-saving, and trust-related determinants. By examining how these determinants interact with established technology acceptance constructs, the study provides a comprehensive understanding of the mechanisms that drive users’ Behavioral Intentions toward conversational AI in complex decision-support environments.
The empirical results reveal that Effort Expectancy is strongly influenced by Cognitive Load and Chatbot Self-Efficacy, underscoring the importance of cognitive facilitation in shaping perceptions of system usability. In parallel, Performance Expectancy emerges as a key mediator of chatbot adoption, driven by users’ perceptions of Output Quality, trustworthiness, and efficiency gains. Both constructs, Effort Expectancy and Performance Expectancy, ultimately play central roles in explaining Behavioral Intention, confirming their foundational relevance within the technology adoption literature. Furthermore, the study shows that Social Influence, although less dominant than performance- or effort-based evaluations, still exerts a meaningful effect, underscoring the role of organizational context in shaping individual user behavior.
Collectively, these findings advance the theoretical discourse by emphasizing that the acceptance of AI-driven conversational tools cannot be explained solely by traditional adoption constructs. Instead, cognitive processes and perceptions of information reliability are critical for understanding how users engage with intelligent systems embedded in data-intensive work environments. The results provide strong evidence that chatbots serve not merely as interface enhancements, but as cognitive and informational agents that fundamentally shape analytical workflows.
From a practical standpoint, the study underscores the need for organizations to prioritize Output Quality, ensure transparency in AI-driven recommendations, and design interfaces that reduce cognitive strain. By addressing these factors, organizations can foster greater trust, stronger perceptions of performance, and ultimately greater adoption. As AI continues to transform decision-support systems, understanding these mechanisms is essential for developing tools that are not only technologically advanced but also effectively aligned with human cognitive and organizational dynamics.
Finally, while the study offers important insights, it also opens avenues for future research. Longitudinal studies may examine how user perceptions evolve as users gain experience with chatbots. Additional research could explore domain-specific differences or incorporate experimental designs to isolate the causal effects of trust cues or cognitive support features. Such work will deepen our understanding of how conversational AI can be optimally integrated into analytical ecosystems and leveraged to enhance human decision-making.

Author Contributions

Conceptualization, S.K. and J.S.; methodology, S.K. and J.S.; software, S.K.; validation, S.K.; formal analysis, S.K.; investigation, S.K.; resources, S.K.; data curation, S.K. and J.S.; writing—original draft preparation, S.K.; writing—review and editing, S.K. and J.S.; visualization, S.K.; supervision, S.K. and J.S.; project administration, S.K. and J.S.; funding acquisition, S.K. and J.S. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by Invest BW, Ministry of Economic Affairs, Labour and Tourism Baden-Württemberg.

Institutional Review Board Statement

The study was conducted in accordance with the Declaration of Helsinki, and approved by the Ethics Committee of the Stuttgart Media University (protocol code DE 224 427 890, date of approval 1 July 2025).

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

The data supporting the findings of this study are available from the corresponding author upon reasonable request.

Acknowledgments

The authors would like to thank the Stuttgart Media University and the Institute for Applied Artificial Intelligence for their support of this study. We also gratefully acknowledge the Offenburg University of Applied Sciences and the Institute for Machine Learning and Analytics. In addition, we thank Valueworks GmbH for developing and providing the platform and chatbot used in this study. We also acknowledge support by the Open Access Publication Fund of the Offenburg University of Applied Sciences.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
AI	Artificial Intelligence
BI	Behavioral Intention
CL	Cognitive Load
CSE	Chatbot Self-Efficacy
EE	Effort Expectancy
HTMT	Heterotrait–Monotrait Ratio
IS	Information Systems
KPI	Key Performance Indicator
OQ	Output Quality
PE	Performance Expectancy
PLS-SEM	Partial Least Squares Structural Equation Modeling
SEM	Structural Equation Modeling
SI	Social Influence
ST	Source Trustworthiness
TAM	Technology Acceptance Model
TS	Time Saving
UTAUT	Unified Theory of Acceptance and Use of Technology

References

1. Bin Rashid, A.; Uddin, A.S.M.N.; Azrin, F.A.; Saad, K.S.K.; Hoque, M.E. 3D Bioprinting in the Era of 4th Industrial Revolution—Insights, Advanced Applications, and Future Prospects. Rapid Prototyp. J. 2023, 29, 1620–1639.
2. Sony, M.; Naik, S.S. Key Ingredients for Evaluating Industry 4.0 Readiness for Organizations: A Literature Review. Benchmarking Int. J. 2019, 27, 2213–2232.
3. von Wolff, R.M.; Hobert, S. Chatbots at Digital Workplaces—A Grounded-Theory Approach for Surveying Application Areas and Objectives. Pac. Asia J. Assoc. Inf. Syst. 2020, 12, 63–103.
4. Leonardi, P.M.; Treem, J.W. Knowledge Management Technology as a Stage for Strategic Self-Presentation: Implications for Knowledge Sharing in Organizations. Inf. Organ. 2012, 22, 37–59.
5. Duan, Y.; Edwards, J.S.; Dwivedi, Y.K. Artificial Intelligence for Decision Making in the Era of Big Data—Evolution, Challenges and Research Agenda. Int. J. Inf. Manag. 2019, 48, 63–71.
6. Lages, L.F.; Lancastre, A.; Lages, C. The B2B-RELPERF Scale and Scorecard: Bringing Relationship Marketing Theory into Business-to-Business Practice. Ind. Mark. Manag. 2008, 37, 686–697.
7. Lavalle, S.; Lesser, E.; Shockley, R.; Hopkins, M.; Kruschwitz, N. Big Data, Analytics and the Path From Insights to Value. MIT Sloan Manag. Rev. 2011, 52, 21–32.
8. Gupta, M.; George, J.F. Toward the Development of a Big Data Analytics Capability. Inf. Manag. 2016, 53, 1049–1064.
9. Jabbar, A.; Akhtar, P.; Dani, S. Real-Time Big Data Processing for Instantaneous Marketing Decisions: A Problematization Approach. Ind. Mark. Manag. 2020, 90, 558–569.
10. Huang, M.-H.; Rust, R.T. A Strategic Framework for Artificial Intelligence in Marketing. J. Acad. Mark. Sci. 2021, 49, 30–50.
11. Moisander, J.; Närvänen, E.; Valtonen, A. Interpretive Marketing Research. In Marketing Management; Routledge: London, UK, 2020; pp. 237–253.
12. Derksen, C.; Walter, F.M.; Akbar, A.B.; Parmar, A.V.E.; Saunders, T.S.; Round, T.; Rubin, G.; Scott, S.E. The Implementation Challenge of Computerised Clinical Decision Support Systems for the Detection of Disease in Primary Care: Systematic Review and Recommendations. Implement. Sci. 2025, 20, 33.
13. Castillo, D.; Canhoto, A.I.; Said, E. The Dark Side of AI-Powered Service Interactions: Exploring the Process of Co-Destruction from the Customer Perspective. Serv. Ind. J. 2021, 41, 900–925.
14. Kim, J.H.; Kim, M.; Kwak, D.W.; Lee, S. Home-Tutoring Services Assisted with Technology: Investigating the Role of Artificial Intelligence Using a Randomized Field Experiment. J. Mark. Res. 2022, 59, 79–96.
15. Følstad, A.; Nordheim, C.B.; Bjørkli, C.A. What Makes Users Trust a Chatbot for Customer Service? An Exploratory Interview Study. In Internet Science. INSCI 2018. Lecture Notes in Computer Science; Bodrunova, S., Ed.; Springer: Cham, Switzerland, 2018; pp. 194–208.
16. Rese, A.; Ganster, L.; Baier, D. Chatbots in Retailers’ Customer Communication: How to Measure Their Acceptance? J. Retail. Consum. Serv. 2020, 56, 102176.
17. Venkatesh, V.; Morris, M.G.; Davis, G.B.; Davis, F.D. User Acceptance of Information Technology: Toward A Unified View. MIS Q. 2003, 27, 425–478.
18. Camilleri, M.A. Factors Affecting Performance Expectancy and Intentions to Use ChatGPT: Using SmartPLS to Advance an Information Technology Acceptance Framework. Technol. Forecast. Soc. Change 2024, 201, 123247.
19. Gursoy, D.; Chi, O.H.; Lu, L.; Nunkoo, R. Consumers Acceptance of Artificially Intelligent (AI) Device Use in Service Delivery. Int. J. Inf. Manag. 2019, 49, 157–169.
20. Brachten, F.; Kissmer, T.; Stieglitz, S. The Acceptance of Chatbots in an Enterprise Context—A Survey Study. Int. J. Inf. Manag. 2021, 60, 102375.
21. Seth, T.; Muhuri, P.K. Hesitant and Uncertain Linguistics Based Executive Decision Making Using Risk and Regret Aversion: Methods, Implementation and Analysis. MethodsX 2024, 12, 102706.
22. Ghahremani-Nahr, J.; Nozari, H. A Survey for Investigating Key Performance Indicators in Digital Marketing. Int. J. Innov. Mark. Elem. 2021, 1, 1–6.
23. Parry, K.; Cohen, M.; Bhattacharya, S. Rise of the Machines: A Critical Consideration of Automated Leadership Decision Making in Organizations. Group Organ. Manag. 2016, 41, 571–594.
24. Newell, S.; Marabelli, M. Strategic Opportunities (and Challenges) of Algorithmic Decision-Making: A Call for Action on the Long-Term Societal Effects of ‘Datification’. J. Strateg. Inf. Syst. 2015, 24, 3–14.
25. Adadi, A.; Berrada, M. Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI). IEEE Access 2018, 6, 52138–52160.
26. Song, Y.; Qiu, X.; Liu, J. The Impact of Artificial Intelligence Adoption on Organizational Decision-Making: An Empirical Study Based on the Technology Acceptance Model in Business Management. Systems 2025, 13, 683.
27. Al-Saedi, K.; Al-Emran, M.; Ramayah, T.; Abusham, E. Developing a General Extended UTAUT Model for M-Payment Adoption. Technol. Soc. 2020, 62, 101293.
28. Raza, S.A.; Qazi, W.; Khan, K.A.; Salam, J. Social Isolation and Acceptance of the Learning Management System (LMS) in the Time of COVID-19 Pandemic: An Expansion of the UTAUT Model. J. Educ. Comput. Res. 2021, 59, 183–208.
29. Joshi, H. Integrating Trust and Satisfaction into the UTAUT Model to Predict Chatbot Adoption—A Comparison between Gen-Z and Millennials. Int. J. Inf. Manag. Data Insights 2025, 5, 100332.
30. Vărzaru, A.A. Assessing Artificial Intelligence Technology Acceptance in Managerial Accounting. Electronics 2022, 11, 2256.
31. Grassini, S.; Aasen, M.L.; Møgelvang, A. Understanding University Students’ Acceptance of ChatGPT: Insights from the UTAUT2 Model. Appl. Artif. Intell. 2024, 38, 2371168.
32. Richad, R.; Vivensius, V.; Sfenrianto, S.; Kaburuan, E.R. Analysis of Factors Influencing Millennial’s Technology Acceptance of Chatbot in the Banking Industry in Indonesia. Int. J. Civ. Eng. Technol. 2019, 10, 1270–1281.
33. Ronaghi, M.H.; Forouharfar, A. A Contextualized Study of the Usage of the Internet of Things (IoTs) in Smart Farming in a Typical Middle Eastern Country within the Context of Unified Theory of Acceptance and Use of Technology Model (UTAUT). Technol. Soc. 2020, 63, 101415.
34. Tian, W.; Ge, J.; Zhao, Y.; Zheng, X. AI Chatbots in Chinese Higher Education: Adoption, Perception, and Influence among Graduate Students—An Integrated Analysis Utilizing UTAUT and ECM Models. Front. Psychol. 2024, 15, 1268549.
35. Zeebaree, M.; Agoyi, M.; Aqel, M. Sustainable Adoption of E-Government from the UTAUT Perspective. Sustainability 2022, 14, 5370.
36. Al-Emran, M.; AlQudah, A.A.; Abbasi, G.A.; Al-Sharafi, M.A.; Iranmanesh, M. Determinants of Using AI-Based Chatbots for Knowledge Sharing: Evidence From PLS-SEM and Fuzzy Sets (FsQCA). IEEE Trans. Eng. Manag. 2024, 71, 4985–4999.
37. Ojiaku, O.C.; Ezenwafor, E.C.; Osarenkhoe, A. Integrating TTF and UTAUT Models to Illuminate Factors That Influence Consumers’ Intentions to Adopt Financial Technologies in an Emerging Country Context. Int. J. Technol. Mark. 2024, 18, 113–135.
38. Holden, R.J.; Karsh, B.-T. The Technology Acceptance Model: Its Past and Its Future in Health Care. J. Biomed. Inform. 2010, 43, 159–172.
39. Liu, L.; Miguel Cruz, A.; Rios Rincon, A.; Buttar, V.; Ranson, Q.; Goertzen, D. What Factors Determine Therapists’ Acceptance of New Technologies for Rehabilitation—A Study Using the Unified Theory of Acceptance and Use of Technology (UTAUT). Disabil. Rehabil. 2015, 37, 447–455.
40. Shachak, A.; Kuziemsky, C.; Petersen, C. Beyond TAM and UTAUT: Future Directions for HIT Implementation Research. J. Biomed. Inform. 2019, 100, 103315.
41. Mishra, S.; Ewing, M.T.; Cooper, H.B. Artificial Intelligence Focus and Firm Performance. J. Acad. Mark. Sci. 2022, 50, 1176–1197.
42. Sohaib, O.; Hussain, W.; Asif, M.; Ahmad, M.; Mazzara, M. A PLS-SEM Neural Network Approach for Understanding Cryptocurrency Adoption. IEEE Access 2020, 8, 13138–13150.
43. Blut, M.; Wang, C. Technology Readiness: A Meta-Analysis of Conceptualizations of the Construct and Its Impact on Technology Usage. J. Acad. Mark. Sci. 2020, 48, 649–669.
44. Saadé, R.; Bahli, B. The Impact of Cognitive Absorption on Perceived Usefulness and Perceived Ease of Use in On-Line Learning: An Extension of the Technology Acceptance Model. Inf. Manag. 2005, 42, 317–327.
45. Prastawa, H.; Ciptomulyono, U.; Laksono-Singgih, M.; Hartono, M. The Effect of Cognitive and Affective Aspects on Usability. Theor. Issues Ergon. Sci. 2019, 20, 507–531.
46. Che Jailani, A.S.; Omar, R.; Sharudin, S.A.; Ahmad Faudzi, M.; Che Cob, Z. Understanding Cognitive Load’s Effect on Mobile Application Usability: A Review. In Digital Innovation in Knowledge Management; Springer: Cham, Switzerland, 2025; pp. 334–345.
47. Gong, Y.; Kang, H. Usability and Clinical Decision Support. In Clinical Decision Support Systems. Health Informatics; Berner, E., Ed.; Springer: Cham, Switzerland, 2016; pp. 69–86.
48. Rödle, W.; Wimmer, S.; Zahn, J.; Prokosch, H.-U.; Hinkes, B.; Neubert, A.; Rascher, W.; Kraus, S.; Toddenroth, D.; Sedlmayr, B. User-Centered Development of an Online Platform for Drug Dosing Recommendations in Pediatrics. Appl. Clin. Inform. 2019, 10, 570–579.
49. Mucha, H.; Robert, S.; Breitschwerdt, R.; Fellmann, M. Usability of Clinical Decision Support Systems. Z. Arbeitswiss. 2023, 77, 92–101.
50. Martin, C. Barriers to the Open Government Data Agenda: Taking a Multi-Level Perspective. Policy Internet 2014, 6, 217–240.
51. Coperich, K.; Cudney, E.; Nembhard, H. Continuous Improvement Study of Chatbot Technologies Using a Human Factors Methodology. In Proceedings of the 2017 Industrial and Systems Engineering Conference, Pittsburgh, PA, USA, 20–23 May 2017; pp. 1–6.
  52. Saaty, T.L. How to Make a Decision: The Analytic Hierarchy Process. Eur. J. Oper. Res. 1990, 48, 9–26. [Google Scholar] [CrossRef]
  53. Johari, N.M.; Nohuddin, P.N. Quality Attributes for a Good Chatbot: A Literature Review. Int. J. Electr. Eng. Technol. (IJEET) 2021, 12, 109–119. [Google Scholar]
  54. Delone, W.H.; McLean, E.R. The DeLone and McLean Model of Information Systems Success: A Ten-Year Update. J. Manag. Inf. Syst. 2003, 19, 9–30. [Google Scholar] [CrossRef]
  55. Harlow, H.D. Developing a Knowledge Management Strategy for Data Analytics and Intellectual Capital. Meditari Account. Res. 2018, 26, 400–419. [Google Scholar] [CrossRef]
  56. Chowdhury, S.; Budhwar, P.; Dey, P.K.; Joel-Edgar, S.; Abadie, A. AI-Employee Collaboration and Business Performance: Integrating Knowledge-Based View, Socio-Technical Systems and Organisational Socialisation Framework. J. Bus. Res. 2022, 144, 31–49. [Google Scholar] [CrossRef]
  57. Malik, A.; De Silva, M.T.T.; Budhwar, P.; Srikanth, N.R. Elevating Talents’ Experience through Innovative Artificial Intelligence-Mediated Knowledge Sharing: Evidence from an IT-Multinational Enterprise. J. Int. Manag. 2021, 27, 100871. [Google Scholar] [CrossRef]
  58. Nguyen, T.-M.; Malik, A. Impact of Knowledge Sharing on Employees’ Service Quality: The Moderating Role of Artificial Intelligence. Int. Mark. Rev. 2022, 39, 482–508. [Google Scholar] [CrossRef]
  59. Shaikh, F.; Afshan, G.; Anwar, R.S.; Abbas, Z.; Chana, K.A. Analyzing the Impact of Artificial Intelligence on Employee Productivity: The Mediating Effect of Knowledge Sharing and Well-being. Asia Pac. J. Hum. Resour. 2023, 61, 794–820. [Google Scholar] [CrossRef]
  60. Panda, M.; Hossain, M.M.; Puri, R.; Ahmad, A. Artificial Intelligence in Action: Shaping the Future of Public Sector. Digit. Policy Regul. Gov. 2025, 27, 668–686. [Google Scholar] [CrossRef]
  61. Delgado, S.; Villamarin, A.; Insuasti, J. AI-Powered Chatbots in Organizations: A Systematic Literature Review. J. Inf. Syst. Eng. Manag. 2025, 10, 452–460. [Google Scholar] [CrossRef]
  62. Boloș, M.I.; Rusu, S.; Sabău-Popa, C.D.; Gherai, D.S.; Negrea, A.; Crișan, M.-I. AI Chatbots: Fast Tracking Sustainability Report Analysis for Enhanced Decision Making. Amfiteatru Econ. 2024, 26, 1241–1255. [Google Scholar] [CrossRef]
  63. Gupta, A.; Dogra, N. Tourist Adoption of Mapping Apps: A UTAUT2 Perspective of Smart Travellers. Tour. Hosp. Manag. 2017, 23, 145–161. [Google Scholar] [CrossRef]
  64. Rahman, M.M.; Sloan, T. User Adoption of Mobile Commerce in Bangladesh: Integrating Perceived Risk, Perceived Cost and Personal Awareness with TAM. Int. Technol. Manag. Rev. 2017, 6, 103–124. [Google Scholar] [CrossRef]
  65. Cordero, J.; Barba-Guaman, L.; Guamán, F. Use of Chatbots for Customer Service in MSMEs. Appl. Comput. Inform. 2026, 22, 185–197. [Google Scholar] [CrossRef]
  66. Chatterjee, S.; Liu, C.L.; Rowland, G.; Hogarth, T. The Impact of AI Tool on Engineering at ANZ Bank: An Empirical Study on GitHub Copilot Within Corporate Environment. arXiv 2024, arXiv:2402.05636. [Google Scholar] [CrossRef]
  67. Pandey, R.; Singh, P.; Wei, R.; Shankar, S. Transforming Software Development: Evaluating the Efficiency and Challenges of GitHub Copilot in Real-World Projects. arXiv 2024, arXiv:2406.17910. [Google Scholar] [CrossRef]
  68. Ng, K.K.; Fauzi, L.; Leow, L.; Ng, J. Harnessing the Potential of Gen-AI Coding Assistants in Public Sector Software Development. arXiv 2024, arXiv:2409.17434. [Google Scholar] [CrossRef]
  69. De Andrade, I.M.; Tumelero, C. Increasing Customer Service Efficiency through Artificial Intelligence Chatbot. Rev. Gestão 2022, 29, 238–251. [Google Scholar] [CrossRef]
  70. Petty, R.E.; Cacioppo, J.T. The Elaboration Likelihood Model of Persuasion. In Advances in Experimental Social Psychology; Berkowitz, L., Ed.; Academic Press: Cambridge, MA, USA, 1986; Volume 19, pp. 123–205. [Google Scholar]
  71. Li, H.; See-To, E.W.K. Source Credibility Plays the Central Route: An Elaboration Likelihood Model Exploration in Social Media Environment with Demographic Profile Analysis. J. Electron. Bus. Digit. Econ. 2024, 3, 36–60. [Google Scholar] [CrossRef]
  72. Hovland, C.I.; Weiss, W. The Influence of Source Credibility on Communication Effectiveness. Public Opin. Q. 1951, 15, 635–650. [Google Scholar] [CrossRef]
  73. Komiak, S.Y.X.; Benbasat, I. A Two-Process View of Trust and Distrust Building in Recommendation Agents: A Process-Tracing Study. J. Assoc. Inf. Syst. 2008, 9, 727–747. [Google Scholar] [CrossRef]
  74. Lankton, N.; McKnight, D.H.; Tripp, J. Technology, Humanness, and Trust: Rethinking Trust in Technology. J. Assoc. Inf. Syst. 2015, 16, 880–918. [Google Scholar] [CrossRef]
  75. Robert, L.P.; Denis, A.R.; Hung, Y.-T.C. Individual Swift Trust and Knowledge-Based Trust in Face-to-Face and Virtual Team Members. J. Manag. Inf. Syst. 2009, 26, 241–279. [Google Scholar] [CrossRef]
  76. Sussman, S.W.; Siegal, W.S. Informational Influence in Organizations: An Integrated Approach to Knowledge Adoption. Inf. Syst. Res. 2003, 14, 47–65. [Google Scholar] [CrossRef]
  77. Kang, J.-W.; Namkung, Y. The Information Quality and Source Credibility Matter in Customers’ Evaluation toward Food O2O Commerce. Int. J. Hosp. Manag. 2019, 78, 189–198. [Google Scholar] [CrossRef]
  78. Onofrei, G.; Filieri, R.; Kennedy, L. Social Media Interactions, Purchase Intention, and Behavioural Engagement: The Mediating Role of Source and Content Factors. J. Bus. Res. 2022, 142, 100–112. [Google Scholar] [CrossRef]
  79. Camilleri, M.A.; Filieri, R. Customer Satisfaction and Loyalty with Online Consumer Reviews: Factors Affecting Revisit Intentions. Int. J. Hosp. Manag. 2023, 114, 103575. [Google Scholar] [CrossRef]
  80. Sweller, J. Cognitive Load Theory, Learning Difficulty, and Instructional Design. Learn. Instr. 1994, 4, 295–312. [Google Scholar] [CrossRef]
  81. Klepsch, M.; Schmitz, F.; Seufert, T. Development and Validation of Two Instruments Measuring Intrinsic, Extraneous, and Germane Cognitive Load. Front. Psychol. 2017, 8, 1997. [Google Scholar] [CrossRef]
  82. Arbaugh, J.B. How Instructor Immediacy Behaviors Affect Student Satisfaction and Learning in Web-Based Courses. Bus. Commun. Q. 2001, 64, 42–54. [Google Scholar] [CrossRef]
  83. Gupta, S.; Bostrom, R.P.; Huber, M. End-User Training Methods: What We Know, Need to Know. ACM SIGMIS Database DATABASE Adv. Inf. Syst. 2010, 41, 9–39. [Google Scholar] [CrossRef]
  84. Winkler, R.; Soellner, M. Unleashing the Potential of Chatbots in Education: A State-Of-The-Art Analysis. Acad. Manag. Proc. 2018, 2018, 15903. [Google Scholar] [CrossRef]
  85. Lundqvist, K.O.; Pursey, G.; Williams, S. Design and Implementation of Conversational Agents for Harvesting Feedback in ELearning Systems. In Scaling up Learning for Sustained Impact. EC-TEL 2013. Lecture Notes in Computer Science; Hernández-Leo, D., Ley, T., Klamma, R., Harrer, A., Eds.; Springer: Berlin/Heidelberg, Germany, 2013; Volume 8095, pp. 617–618. [Google Scholar]
  86. Chaudhri, V.K.; Gunning, D.; Lane, H.C.; Roschelle, J. Intelligent Learning Technologies: Applications of Artificial Intelligence to Contemporary and Emerging Educational Challenges. AI Mag. 2013, 34, 10–12. [Google Scholar] [CrossRef]
  87. Gherheș, V.; Obrad, C. Technical and Humanities Students’ Perspectives on the Development and Sustainability of Artificial Intelligence (AI). Sustainability 2018, 10, 3066. [Google Scholar] [CrossRef]
  88. Sanusi, I.T.; Oyelere, S.S.; Vartiainen, H.; Suhonen, J.; Tukiainen, M. Developing Middle School Students’ Understanding of Machine Learning in an African School. Comput. Educ. Artif. Intell. 2023, 5, 100155. [Google Scholar] [CrossRef]
  89. Compeau, D.R.; Higgins, C.A. Computer Self-Efficacy: Development of a Measure and Initial Test. MIS Q. 1995, 19, 189–211. [Google Scholar] [CrossRef]
  90. Candello, H.; Pinhanez, C.; Figueiredo, F. Typefaces and the Perception of Humanness in Natural Language Chatbots. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems; ACM: New York, NY, USA, 2017; pp. 3476–3487. [Google Scholar]
  91. Chou, C.-M.; Shen, T.-C.; Shen, T.-C.; Shen, C.-H. Influencing Factors on Students’ Learning Effectiveness of AI-Based Technology Application: Mediation Variable of the Human-Computer Interaction Experience. Educ. Inf. Technol. 2022, 27, 8723–8750. [Google Scholar] [CrossRef]
  92. Li, J.; Zhou, Y.; Yao, J.; Liu, X. An Empirical Investigation of Trust in AI in a Chinese Petrochemical Enterprise Based on Institutional Theory. Sci. Rep. 2021, 11, 13564. [Google Scholar] [CrossRef]
  93. Usman, O.; Septianti, A.; Susita, D.; Marsofiyati, M. The Effect of Computer Self-Efficacy and Subjective Norm on the Perceived Usefulness, Perceived Ease of Use and Behavioural Intention to Use Technology. J. Southeast Asian Res. 2020, 2020, 753259. [Google Scholar] [CrossRef]
  94. Hasan, B. Examining the Effects of Computer Self-Efficacy and System Complexity on Technology Acceptance. Inf. Resour. Manag. J. 2007, 20, 76–88. [Google Scholar] [CrossRef]
  95. Lan, Y.; Liu, S.; Chen, H.; Xia, L. Configurational Effects of Personal Innovativeness, Self-Efficacy, and Perceived Risk on AI Adoption in Media Students. Sci. Rep. 2026, 16, 5681. [Google Scholar] [CrossRef]
  96. Venkatesh, V.; Bala, H. Technology Acceptance Model 3 and a Research Agenda on Interventions. Decis. Sci. 2008, 39, 273–315. [Google Scholar] [CrossRef]
  97. Leppink, J.; Paas, F.; Van der Vleuten, C.P.M.; Van Gog, T.; Van Merriënboer, J.J.G. Development of an Instrument for Measuring Different Types of Cognitive Load. Behav. Res. Methods 2013, 45, 1058–1072. [Google Scholar] [CrossRef]
  98. Hart, S.G.; Staveland, L.E. Development of NASA-TLX (Task Load Index): Results of Empirical and Theoretical Research. In Advances in Psychology; Hancock, P.A., Meshkati, N., Eds.; Elsevier: Amsterdam, The Netherlands, 1988; Volume 52, pp. 139–183. [Google Scholar]
  99. Venkatesh, V.; Thong, J.Y.L.; Xu, X. Consumer Acceptance and Use of Information Technology: Extending the Unified Theory of Acceptance and Use of Technology. MIS Q. 2012, 36, 157–178. [Google Scholar] [CrossRef]
  100. Sohn, K.; Kwon, O. Technology Acceptance Theories and Factors Influencing Artificial Intelligence-Based Intelligent Products. Telemat. Inform. 2020, 47, 101324. [Google Scholar] [CrossRef]
  101. Cheung, C.M.K.; Lee, M.K.O.; Rabjohn, N. The Impact of Electronic Word-of-mouth. Internet Res. 2008, 18, 229–247. [Google Scholar] [CrossRef]
  102. ValueWorks GmbH. ValueWorks—The Intelligent Operating System for Executives. Available online: https://valueworks.ai/ (accessed on 11 March 2026).
  103. He, P.; Zhang, J. Dual-Pathway Effects of Product and Technological Attributes on Consumer Engagement in Augmented Reality Advertising. J. Theor. Appl. Electron. Commer. Res. 2025, 20, 196. [Google Scholar] [CrossRef]
  104. Hair, J.F.; Risher, J.J.; Sarstedt, M.; Ringle, C.M. When to Use and How to Report the Results of PLS-SEM. Eur. Bus. Rev. 2019, 31, 2–24. [Google Scholar] [CrossRef]
  105. Barclay, D.W.; Higgins, C.A.; Thompson, R. The Partial Least Squares Approach to Causal Modeling: Personal Computer Adoption and Use as Illustration. Technol. Stud. 1995, 2, 285–309. [Google Scholar]
  106. Hair, J.F.; Ringle, C.M.; Sarstedt, M. PLS-SEM: Indeed a Silver Bullet. J. Mark. Theory Pract. 2011, 19, 139–152. [Google Scholar] [CrossRef]
  107. Barta, S.; Gurrea, R.; Flavián, C. Using Augmented Reality to Reduce Cognitive Dissonance and Increase Purchase Intention. Comput. Hum. Behav. 2023, 140, 107564. [Google Scholar] [CrossRef]
  108. Dijkstra, T.K.; Henseler, J. Consistent Partial Least Squares Path Modeling. MIS Q. 2015, 39, 297–316. [Google Scholar] [CrossRef]
  109. Fornell, C.; Larcker, D.F. Evaluating Structural Equation Models with Unobservable Variables and Measurement Error. J. Mark. Res. 1981, 18, 39–50. [Google Scholar] [CrossRef]
  110. Liébana-Cabanillas, F.; Molinillo, S.; Ruiz-Montañez, M. To Use or Not to Use, That Is the Question: Analysis of the Determining Factors for Using NFC Mobile Payment Systems in Public Transportation. Technol. Forecast. Soc. Change 2019, 139, 266–276. [Google Scholar] [CrossRef]
  111. Henseler, J.; Ringle, C.M.; Sarstedt, M. A New Criterion for Assessing Discriminant Validity in Variance-Based Structural Equation Modeling. J. Acad. Mark. Sci. 2015, 43, 115–135. [Google Scholar] [CrossRef]
  112. Hair, J.F., Jr.; Sarstedt, M.; Hopkins, L.; Kuppelwieser, V.G. Partial Least Squares Structural Equation Modeling (PLS-SEM). Eur. Bus. Rev. 2014, 26, 106–121. [Google Scholar] [CrossRef]
  113. Cohen, J. Statistical Power Analysis for the Behavioral Sciences, 2nd ed.; Routledge: New York, NY, USA, 2013. [Google Scholar]
  114. Prakash, A.V.; Das, S. (Why) Do We Trust AI?: A Case of AI-Based Health Chatbots. Australas. J. Inf. Syst. 2024, 28, 1–43. [Google Scholar] [CrossRef]
  115. Yan, L.; Martinez-Maldonado, R.; Jin, Y.; Echeverria, V.; Milesi, M.; Fan, J.; Zhao, L.; Alfredo, R.; Li, X.; Gašević, D. The Effects of Generative AI Agents and Scaffolding on Enhancing Students’ Comprehension of Visual Learning Analytics. Comput. Educ. 2025, 234, 105322. [Google Scholar] [CrossRef]
  116. Martins, M.R. Artificial Intelligence in Business Strategy: How AI Driven Analytics Is Reshaping Decision Making. Int. J. Humanit. Inf. Technol. 2025, 7, 2025. [Google Scholar]
  117. Egwuatu, O.V. Al-Driven Decision Support Systems for Business Strategy. World J. Adv. Res. Rev. 2025, 27, 1752–1769. [Google Scholar] [CrossRef]
  118. Mohamed, G. Comparative Analysis of AI-Driven Decision Support Systems and Traditional Spreadsheets: Evaluating Accuracy and Consistency in Business Intelligence. J. Sci. Technol. 2025, 30, 80–89. [Google Scholar] [CrossRef]
  119. Dachepalli, V. AI-Driven Decision Support Systems in ERP. Int. J. Comput. Sci. Data Eng. 2025, 2, 1–7. [Google Scholar] [CrossRef]
  120. Bokhonko, I.; Kubasiak, M.; Oleksa-Kaźmierczak, A. AI-Driven Decision Support Systems in Strategic Business Management: A Case-Based Analysis. Sci. Pap. Silesian Univ. Technol. Organ. Manag. Ser. 2025, 2025, 49–61. [Google Scholar] [CrossRef]
  121. Narne, S.; Adedoja, T.; Mohan, M.; Ayyalasomayajula, T. AI-Driven Decision Support Systems in Management: Enhancing Strategic Planning and Execution. Int. J. Recent Innov. Trends Comput. Commun. 2024, 12, 268–276. [Google Scholar]
Figure 1. Research Model.
Figure 2. Resulting Path Coefficients.
Table 1. Measurement Items and Conceptual Sources.
| Construct | Item | Item Content | Conceptual Source |
|---|---|---|---|
| Behavioral Intention (BI) | BI1 | Assuming I had access to the chatbot on the platform, I would plan to use it in the future to analyze marketing metrics. | [17,96] |
|  | BI2 | Assuming I had access to the chatbot on the platform, I would use it to analyze marketing metrics. |  |
|  | BI3 | Assuming I had access to the chatbot on the platform, I would predict that I would use it to analyze marketing metrics. |  |
| Cognitive Load (CL) | CL1 | I could use the chatbot on the platform without feeling mentally overwhelmed. | [81,97,98] |
|  | CL2 | Using the chatbot on the platform would not overly strain my concentration. |  |
|  | CL3 | I would feel mentally relieved when using the chatbot on the platform. |  |
| Chatbot Self-Efficacy (CSE) | CSE1 | I could use the chatbot on the platform correctly if there are instructions or help available. | [17,92,96] |
|  | CSE2 | I could use the chatbot on the platform correctly even if no one is available to assist me. |  |
|  | CSE3 | I could use the chatbot on the platform correctly if someone first shows me how it works. |  |
| Effort Expectancy (EE) | EE1 | Operating the chatbot on the platform would require little effort on my part. | [17,18,99] |
|  | EE2 | I would find it easy to learn how to use the chatbot on the platform. |  |
|  | EE3 | Interacting with the chatbot on the platform would be clear and understandable for me. |  |
|  | EE4 | I would find the chatbot on the platform easy to use. |  |
| Output Quality (OQ) | OQ1 | The results I would receive from the chatbot on the platform would be of high quality. | [18,70,76,96] |
|  | OQ2 | I would not have any issues with the quality of the output from the chatbot on the platform. |  |
|  | OQ3 | I would consider the results produced by the chatbot on the platform to be excellent. |  |
| Performance Expectancy (PE) | PE1 | The chatbot on the platform could increase my productivity in analyzing marketing metrics. | [17,18,99,100] |
|  | PE2 | I would find the chatbot on the platform useful for analyzing marketing metrics. |  |
|  | PE3 | Using the chatbot on the platform would help me analyze marketing metrics more efficiently. |  |
|  | PE4 | Using the chatbot on the platform would improve my performance in analyzing marketing metrics. |  |
| Social Influence (SI) | SI1 | People who influence my behavior would think that I should use the chatbot on the platform. | [17,18,99] |
|  | SI2 | People who are important to me would support my use of the chatbot on the platform. |  |
|  | SI3 | People whose opinions I value would prefer that I use the chatbot on the platform. |  |
| Source Trustworthiness (ST) | ST1 | I would trust the content provided by the chatbot on the platform. | [18,70,101] |
|  | ST2 | I would consider the information provided by the chatbot on the platform to be credible. |  |
| Time Saving (TS) | TS1 | Using the chatbot on the platform would reduce the time I spend on data-driven decisions. | [66,67,68] |
|  | TS2 | The chatbot on the platform would help me get the required information faster. |  |
|  | TS3 | By using the chatbot on the platform, I could analyze marketing metrics in less time. |  |
Table 2. Sample Characteristics.
| Characteristic | Option | Frequency | Percentage (%) |
|---|---|---|---|
| Gender | Male | 68 | 64.15 |
|  | Female | 38 | 35.85 |
| Age | 20–30 | 15 | 14.15 |
|  | 31–40 | 54 | 50.94 |
|  | 41–50 | 23 | 21.70 |
|  | >51 | 14 | 13.21 |
| Education | High School Diploma | 4 | 3.77 |
|  | Bachelor’s Degree | 27 | 25.47 |
|  | Master’s Degree | 72 | 67.92 |
|  | Doctorate | 2 | 1.89 |
|  | Professor | 1 | 0.94 |
| Company Size | 1–10 | 14 | 13.21 |
|  | 11–100 | 27 | 25.47 |
|  | 101–1000 | 36 | 33.96 |
|  | >1000 | 29 | 27.36 |
Table 3. Outer loadings, composite reliability, and convergent validity of the constructs.
| Construct | Item | Outer Loading | Cronbach’s Alpha | Composite Reliability (ρₐ) | Composite Reliability (ρ_c) | Average Variance Extracted (AVE) |
|---|---|---|---|---|---|---|
| Behavioral Intention (BI) | BI1 | 0.944 | 0.942 | 0.942 | 0.963 | 0.896 |
|  | BI2 | 0.954 |  |  |  |  |
|  | BI3 | 0.941 |  |  |  |  |
| Cognitive Load (CL) | CL1 | 0.915 | 0.884 | 0.885 | 0.928 | 0.812 |
|  | CL2 | 0.922 |  |  |  |  |
|  | CL3 | 0.866 |  |  |  |  |
| Chatbot Self-Efficacy (CSE) | CSE1 | 0.888 | 0.858 | 0.868 | 0.913 | 0.777 |
|  | CSE2 | 0.876 |  |  |  |  |
|  | CSE3 | 0.881 |  |  |  |  |
| Effort Expectancy (EE) | EE1 | 0.846 | 0.904 | 0.905 | 0.933 | 0.777 |
|  | EE2 | 0.880 |  |  |  |  |
|  | EE3 | 0.908 |  |  |  |  |
|  | EE4 | 0.891 |  |  |  |  |
| Output Quality (OQ) | OQ1 | 0.935 | 0.925 | 0.930 | 0.952 | 0.869 |
|  | OQ2 | 0.950 |  |  |  |  |
|  | OQ3 | 0.911 |  |  |  |  |
| Performance Expectancy (PE) | PE1 | 0.875 | 0.927 | 0.932 | 0.948 | 0.821 |
|  | PE2 | 0.923 |  |  |  |  |
|  | PE3 | 0.907 |  |  |  |  |
|  | PE4 | 0.918 |  |  |  |  |
| Social Influence (SI) | SI1 | 0.949 | 0.956 | 0.957 | 0.971 | 0.919 |
|  | SI2 | 0.971 |  |  |  |  |
|  | SI3 | 0.955 |  |  |  |  |
| Source Trustworthiness (ST) | ST1 | 0.898 | 0.787 | 0.792 | 0.903 | 0.824 |
|  | ST2 | 0.917 |  |  |  |  |
| Time Saving (TS) | TS1 | 0.933 | 0.917 | 0.919 | 0.948 | 0.858 |
|  | TS2 | 0.938 |  |  |  |  |
|  | TS3 | 0.906 |  |  |  |  |
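The reliability and convergent-validity figures in Table 3 can be checked against the conventional PLS-SEM cut-offs (Cronbach’s alpha and composite reliability above 0.70, AVE above 0.50, outer loadings above 0.708 [104]). The sketch below is illustrative only — values are transcribed from the table and variable names are our own:

```python
# Verify Table 3 against conventional PLS-SEM reliability/validity thresholds.
# Values transcribed from the table; structure and names are illustrative.
table3 = {
    # construct: (alpha, rho_a, rho_c, AVE, [outer loadings])
    "BI":  (0.942, 0.942, 0.963, 0.896, [0.944, 0.954, 0.941]),
    "CL":  (0.884, 0.885, 0.928, 0.812, [0.915, 0.922, 0.866]),
    "CSE": (0.858, 0.868, 0.913, 0.777, [0.888, 0.876, 0.881]),
    "EE":  (0.904, 0.905, 0.933, 0.777, [0.846, 0.880, 0.908, 0.891]),
    "OQ":  (0.925, 0.930, 0.952, 0.869, [0.935, 0.950, 0.911]),
    "PE":  (0.927, 0.932, 0.948, 0.821, [0.875, 0.923, 0.907, 0.918]),
    "SI":  (0.956, 0.957, 0.971, 0.919, [0.949, 0.971, 0.955]),
    "ST":  (0.787, 0.792, 0.903, 0.824, [0.898, 0.917]),
    "TS":  (0.917, 0.919, 0.948, 0.858, [0.933, 0.938, 0.906]),
}

def passes_thresholds(alpha, rho_a, rho_c, ave, loadings):
    # alpha, rho_a, rho_c > 0.70; AVE > 0.50; every loading > 0.708
    return (alpha > 0.70 and rho_a > 0.70 and rho_c > 0.70
            and ave > 0.50 and all(l > 0.708 for l in loadings))

all_ok = all(passes_thresholds(*vals) for vals in table3.values())
print(all_ok)  # True: every construct clears the conventional cut-offs
```

Even the weakest construct on internal consistency, Source Trustworthiness (alpha = 0.787 with only two items), remains comfortably above the 0.70 threshold.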
Table 4. Cross-loadings between constructs.
| Item | BI | CL | CSE | EE | OQ | PE | SI | ST | TS |
|---|---|---|---|---|---|---|---|---|---|
| BI1 | 0.944 | 0.479 | 0.536 | 0.605 | 0.638 | 0.690 | 0.480 | 0.605 | 0.742 |
| BI2 | 0.954 | 0.474 | 0.499 | 0.627 | 0.518 | 0.630 | 0.483 | 0.541 | 0.705 |
| BI3 | 0.941 | 0.512 | 0.563 | 0.692 | 0.586 | 0.653 | 0.465 | 0.547 | 0.729 |
| CL1 | 0.466 | 0.915 | 0.567 | 0.633 | 0.344 | 0.479 | 0.237 | 0.704 | 0.464 |
| CL2 | 0.452 | 0.922 | 0.517 | 0.579 | 0.352 | 0.430 | 0.295 | 0.639 | 0.432 |
| CL3 | 0.475 | 0.866 | 0.502 | 0.632 | 0.349 | 0.476 | 0.201 | 0.539 | 0.533 |
| CSE1 | 0.464 | 0.478 | 0.888 | 0.532 | 0.453 | 0.453 | 0.127 | 0.526 | 0.510 |
| CSE2 | 0.495 | 0.570 | 0.876 | 0.661 | 0.357 | 0.470 | 0.262 | 0.627 | 0.559 |
| CSE3 | 0.529 | 0.493 | 0.881 | 0.542 | 0.433 | 0.473 | 0.115 | 0.551 | 0.534 |
| EE1 | 0.587 | 0.513 | 0.621 | 0.846 | 0.570 | 0.570 | 0.328 | 0.612 | 0.604 |
| EE2 | 0.626 | 0.563 | 0.631 | 0.880 | 0.462 | 0.533 | 0.267 | 0.502 | 0.591 |
| EE3 | 0.585 | 0.653 | 0.577 | 0.908 | 0.473 | 0.531 | 0.279 | 0.539 | 0.611 |
| EE4 | 0.591 | 0.680 | 0.510 | 0.891 | 0.503 | 0.542 | 0.373 | 0.569 | 0.574 |
| OQ1 | 0.638 | 0.430 | 0.484 | 0.583 | 0.935 | 0.762 | 0.376 | 0.518 | 0.647 |
| OQ2 | 0.497 | 0.337 | 0.419 | 0.500 | 0.950 | 0.690 | 0.202 | 0.432 | 0.535 |
| OQ3 | 0.577 | 0.306 | 0.393 | 0.500 | 0.911 | 0.660 | 0.299 | 0.387 | 0.579 |
| PE1 | 0.537 | 0.365 | 0.377 | 0.446 | 0.650 | 0.875 | 0.173 | 0.559 | 0.523 |
| PE2 | 0.644 | 0.498 | 0.487 | 0.580 | 0.694 | 0.923 | 0.264 | 0.570 | 0.691 |
| PE3 | 0.623 | 0.516 | 0.531 | 0.606 | 0.668 | 0.907 | 0.283 | 0.623 | 0.728 |
| PE4 | 0.701 | 0.472 | 0.506 | 0.587 | 0.729 | 0.918 | 0.378 | 0.640 | 0.706 |
| SI1 | 0.468 | 0.299 | 0.162 | 0.351 | 0.256 | 0.281 | 0.949 | 0.334 | 0.346 |
| SI2 | 0.495 | 0.242 | 0.192 | 0.340 | 0.284 | 0.272 | 0.971 | 0.324 | 0.385 |
| SI3 | 0.481 | 0.237 | 0.216 | 0.325 | 0.369 | 0.334 | 0.955 | 0.337 | 0.443 |
| ST1 | 0.565 | 0.634 | 0.560 | 0.555 | 0.408 | 0.569 | 0.398 | 0.898 | 0.468 |
| ST2 | 0.521 | 0.631 | 0.617 | 0.586 | 0.464 | 0.630 | 0.239 | 0.917 | 0.496 |
| TS1 | 0.760 | 0.505 | 0.616 | 0.633 | 0.602 | 0.720 | 0.382 | 0.559 | 0.933 |
| TS2 | 0.701 | 0.439 | 0.598 | 0.572 | 0.592 | 0.664 | 0.362 | 0.457 | 0.938 |
| TS3 | 0.664 | 0.529 | 0.472 | 0.670 | 0.561 | 0.658 | 0.391 | 0.455 | 0.906 |
Table 5. Fornell–Larcker Criterion.
| Construct | BI | CL | CSE | EE | OQ | PE | SI | ST | TS |
|---|---|---|---|---|---|---|---|---|---|
| BI | 0.947 |  |  |  |  |  |  |  |  |
| CL | 0.516 | 0.901 |  |  |  |  |  |  |  |
| CSE | 0.563 | 0.588 | 0.881 |  |  |  |  |  |  |
| EE | 0.678 | 0.684 | 0.663 | 0.881 |  |  |  |  |  |
| OQ | 0.614 | 0.387 | 0.466 | 0.569 | 0.932 |  |  |  |  |
| PE | 0.695 | 0.514 | 0.528 | 0.617 | 0.758 | 0.906 |  |  |  |
| SI | 0.503 | 0.270 | 0.198 | 0.353 | 0.316 | 0.308 | 0.959 |  |  |
| ST | 0.596 | 0.697 | 0.649 | 0.629 | 0.482 | 0.662 | 0.346 | 0.908 |  |
| TS | 0.767 | 0.530 | 0.609 | 0.675 | 0.632 | 0.736 | 0.409 | 0.531 | 0.926 |

Note: diagonal entries are the square roots of the AVE; off-diagonal entries are inter-construct correlations.
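The Fornell–Larcker criterion [109] holds when each construct’s square root of the AVE (the diagonal of Table 5) exceeds its correlations with every other construct. A minimal check, with the lower triangle transcribed from the table (names are ours):

```python
# Fornell-Larcker check: each diagonal sqrt(AVE) must exceed every
# correlation in its row and column. Lower triangle from Table 5.
lower = [
    [0.947],
    [0.516, 0.901],
    [0.563, 0.588, 0.881],
    [0.678, 0.684, 0.663, 0.881],
    [0.614, 0.387, 0.466, 0.569, 0.932],
    [0.695, 0.514, 0.528, 0.617, 0.758, 0.906],
    [0.503, 0.270, 0.198, 0.353, 0.316, 0.308, 0.959],
    [0.596, 0.697, 0.649, 0.629, 0.482, 0.662, 0.346, 0.908],
    [0.767, 0.530, 0.609, 0.675, 0.632, 0.736, 0.409, 0.531, 0.926],
]

def fornell_larcker_ok(tri):
    n = len(tri)
    for i in range(n):
        diag = tri[i][i]
        # correlations involving construct i: its row (left of the
        # diagonal) plus its column (rows below it)
        corrs = tri[i][:i] + [tri[j][i] for j in range(i + 1, n)]
        if any(c >= diag for c in corrs):
            return False
    return True

print(fornell_larcker_ok(lower))  # True: discriminant validity holds
```

The largest correlation (TS–BI, 0.767) is still below the smallest diagonal value (CSE/EE, 0.881), so the criterion is met for all constructs.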
Table 6. HTMT Ratios.
| Construct | BI | CL | CSE | EE | OQ | PE | SI | ST | TS |
|---|---|---|---|---|---|---|---|---|---|
| BI |  |  |  |  |  |  |  |  |  |
| CL | 0.564 |  |  |  |  |  |  |  |  |
| CSE | 0.625 | 0.668 |  |  |  |  |  |  |  |
| EE | 0.734 | 0.763 | 0.745 |  |  |  |  |  |  |
| OQ | 0.655 | 0.424 | 0.525 | 0.621 |  |  |  |  |  |
| PE | 0.739 | 0.563 | 0.587 | 0.670 | 0.814 |  |  |  |  |
| SI | 0.530 | 0.296 | 0.210 | 0.381 | 0.333 | 0.322 |  |  |  |
| ST | 0.694 | 0.836 | 0.782 | 0.747 | 0.559 | 0.771 | 0.404 |  |  |
| TS | 0.823 | 0.587 | 0.681 | 0.742 | 0.683 | 0.791 | 0.436 | 0.622 |  |
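Discriminant validity via HTMT [111] is typically judged against a threshold of 0.90 (0.85 in stricter applications). A quick sketch over the off-diagonal entries of Table 6, transcribed row by row:

```python
# All 36 pairwise HTMT ratios from Table 6 (lower triangle, row by row).
htmt = [0.564,
        0.625, 0.668,
        0.734, 0.763, 0.745,
        0.655, 0.424, 0.525, 0.621,
        0.739, 0.563, 0.587, 0.670, 0.814,
        0.530, 0.296, 0.210, 0.381, 0.333, 0.322,
        0.694, 0.836, 0.782, 0.747, 0.559, 0.771, 0.404,
        0.823, 0.587, 0.681, 0.742, 0.683, 0.791, 0.436, 0.622]

print(max(htmt))          # 0.836: the largest ratio (ST-CL)
print(max(htmt) < 0.90)   # True: all ratios stay below 0.90
```

Note that two ratios (ST–CL at 0.836 and TS–BI at 0.823) would not clear the stricter 0.85 cut-off in the former case, so the 0.90 threshold is the operative one here.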
Table 7. Coefficient of Determination (R2) for Endogenous Constructs.
| Construct | R² | R² Adjusted |
|---|---|---|
| BI | 0.641 | 0.631 |
| EE | 0.572 | 0.564 |
| PE | 0.743 | 0.735 |
Table 8. Effect Size (ƒ2) for Structural Model Paths.
| Path | ƒ² |
|---|---|
| CL → EE | 0.309 |
| CSE → EE | 0.243 |
| EE → BI | 0.183 |
| OQ → PE | 0.385 |
| PE → BI | 0.286 |
| SI → BI | 0.160 |
| ST → PE | 0.227 |
| TS → PE | 0.209 |
Table 9. Inner Variance Inflation Factor (VIF) Values.
| Path | VIF Value |
|---|---|
| CL → EE | 1.528 |
| CSE → EE | 1.528 |
| EE → BI | 1.694 |
| OQ → PE | 1.751 |
| PE → BI | 1.638 |
| SI → BI | 1.160 |
| ST → PE | 1.466 |
| TS → PE | 1.874 |
Table 10. Direct Effects.
| Path | Original Sample | Sample Mean | Standard Deviation | T Statistics | p Values |
|---|---|---|---|---|---|
| CL → EE | 0.450 | 0.455 | 0.106 | 4.239 | 0.000 |
| CSE → EE | 0.399 | 0.396 | 0.092 | 4.335 | 0.000 |
| EE → BI | 0.334 | 0.329 | 0.099 | 3.388 | 0.001 |
| OQ → PE | 0.416 | 0.404 | 0.115 | 3.629 | 0.000 |
| PE → BI | 0.410 | 0.417 | 0.103 | 3.972 | 0.000 |
| SI → BI | 0.258 | 0.251 | 0.072 | 3.571 | 0.000 |
| ST → PE | 0.293 | 0.296 | 0.059 | 4.961 | 0.000 |
| TS → PE | 0.317 | 0.327 | 0.107 | 2.966 | 0.003 |
Table 11. Indirect Effects.
| Path | Original Sample | Sample Mean | Standard Deviation | T Statistics | p Values |
|---|---|---|---|---|---|
| CL → BI | 0.150 | 0.148 | 0.054 | 2.800 | 0.005 |
| CSE → BI | 0.133 | 0.132 | 0.054 | 2.451 | 0.014 |
| OQ → BI | 0.171 | 0.166 | 0.058 | 2.965 | 0.003 |
| ST → BI | 0.120 | 0.123 | 0.038 | 3.129 | 0.002 |
| TS → BI | 0.130 | 0.139 | 0.065 | 2.010 | 0.044 |
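Because the model routes CL and CSE to BI only through EE, and OQ, ST, and TS only through PE, each indirect effect in Table 11 should equal the product of the two corresponding direct paths in Table 10. The sketch below reproduces the point estimates to three decimals (coefficients transcribed from Table 10; dictionary names are ours):

```python
# Indirect effect = product of the two direct paths along the mediation
# chain (e.g. CL -> EE -> BI). Coefficients from Table 10.
direct = {
    ("CL", "EE"): 0.450, ("CSE", "EE"): 0.399,
    ("OQ", "PE"): 0.416, ("ST", "PE"): 0.293, ("TS", "PE"): 0.317,
    ("EE", "BI"): 0.334, ("PE", "BI"): 0.410,
}
mediator = {"CL": "EE", "CSE": "EE", "OQ": "PE", "ST": "PE", "TS": "PE"}
reported = {"CL": 0.150, "CSE": 0.133, "OQ": 0.171, "ST": 0.120, "TS": 0.130}

for src, med in mediator.items():
    product = direct[(src, med)] * direct[(med, "BI")]
    # e.g. CL: 0.450 * 0.334 = 0.150, matching Table 11
    print(f"{src} -> {med} -> BI: {product:.3f} (reported {reported[src]:.3f})")
```

All five products match the reported indirect effects after rounding, confirming the internal consistency of Tables 10 and 11.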

Share and Cite

MDPI and ACS Style

Kottmann, S.; Seitz, J. Investigating Decision-Support Chatbot Acceptance Among Professionals: An Application of the UTAUT Model in a Marketing and Sales Context. J. Theor. Appl. Electron. Commer. Res. 2026, 21, 113. https://doi.org/10.3390/jtaer21040113
