1. Introduction
Emerging digital technologies are reshaping how consumers interact with brands, make decisions, and form long-term behavioral patterns. Among these, artificial intelligence (AI) plays a central role in redefining both the consumer experience and the underlying processes of decision making, recommendation, and personalization [
1]. In this paper, AI refers to systems capable of mimicking human reasoning and learning. Sustainable innovation denotes the adoption of technologies that support long-term ecological and business viability. SMEs (small and medium-sized enterprises) are defined according to EU criteria (fewer than 250 employees). AI adoption refers to the willingness to use AI-based tools; workplace integration covers professional exposure to AI systems; personalization involves AI-driven content customization; and trust in AI denotes perceived system reliability. The Technology Acceptance Model (TAM) and UTAUT2 serve as the theoretical frameworks for understanding technology use. While AI promises enhanced convenience, efficiency, and personalization, its impact on consumer behavior remains complex and multifaceted—particularly as adoption depends on individual, contextual, and affective factors [
2]. This study investigates the adoption of AI-powered tools from a consumer-centric perspective, building a theoretical model that integrates knowledge, workplace exposure, intrinsic motivation (passion), and perceived marketing personalization.
The research problem analyzed in this paper is the lack of knowledge models tailored to SMEs’ specific AI adoption contexts. The research question is: How do contextual, cognitive, and emotional factors influence AI adoption in SMEs? The research aim is to propose and empirically validate a PLS-SEM model based on TAM, UTAUT2, and contextual variables.
In the context of accelerated digitalization, the implementation of AI in everyday tools—ranging from smart assistants and recommendation engines to customer service bots and predictive systems—has become increasingly prevalent. These systems aim not only to improve operational efficiency but also to create deeply personalized consumer experiences [
3]. However, adoption rates vary widely, often hinging on the user’s familiarity with AI, exposure to its use in professional contexts, and emotional engagement with technological advancements [
4]. Hence, understanding the drivers behind AI adoption is critical for both academic inquiry and managerial practice.
This research is particularly relevant for small and medium-sized enterprises (SMEs), which play a vital role in promoting innovation and sustainability. Due to their flexibility, SMEs can rapidly adopt new technologies, such as AI, to enhance competitiveness, personalize customer engagement, and implement environmentally responsible strategies. As such, understanding AI adoption in the context of SMEs contributes directly to the achievement of the UN’s Sustainable Development Goals (SDGs), especially Goal 9 (Industry, Innovation and Infrastructure) and Goal 12 (Responsible Consumption and Production).
The foundation of this study is built upon established technology adoption frameworks, particularly the Technology Acceptance Model (TAM), which identifies perceived usefulness and perceived ease of use as primary predictors of behavioral intention [
5]. These cognitive factors are essential in AI-related contexts, where trust in the system’s capabilities and clarity of function significantly influence user behavior [
6]. Alongside these, the Unified Theory of Acceptance and Use of Technology (UTAUT2) expands the lens to include hedonic motivation, social influence, and facilitating conditions [
7].
Building on these models, the present research introduces three contextual and psychological variables: AI knowledge, workplace AI integration, and passion for AI. These are posited to shape consumers’ perceptions of usefulness and ease of use, while also influencing their trust in AI systems. Knowledge about AI refers to the user’s understanding of AI concepts and their confidence in interacting with such systems. Prior studies have demonstrated that a higher level of knowledge often leads to increased self-efficacy and openness toward adopting complex technologies [
8].
Workplace integration reflects the extent to which AI tools are utilized in the professional environment of the consumer. This factor introduces a real-world experiential component to the adoption model, as frequent exposure to AI in a work context may normalize its use and reduce perceived complexity [
9]. Moreover, when individuals are required to engage with AI for professional tasks, they may develop transferable attitudes that carry over into personal consumption behaviors.
A unique contribution of this research is the incorporation of passion for AI, conceptualized as an intrinsic motivator akin to technology enthusiasm or digital curiosity. Drawing from motivational theory, passion is understood here as a sustained personal interest in AI, manifesting in behaviors such as self-directed learning, experimentation, and following technological trends [
10]. This psychological variable serves as a proxy for hedonic motivation and is hypothesized to influence both cognitive evaluations (e.g., usefulness) and behavioral intentions.
Another core element of the model is AI-driven marketing personalization, a dimension with increasing relevance in digital commerce. AI enables firms to deliver tailored messages, content, and product recommendations based on consumer data and predictive algorithms [
11]. From the consumer’s perspective, such personalization may enhance relevance and satisfaction, but it may also raise concerns around privacy and trust. In this model, marketing personalization is proposed to influence trust in AI systems, which acts as a mediating variable between perception and behavior. Trust has been widely recognized as a key antecedent of technology adoption, particularly when users must rely on opaque or algorithmic decision-making processes [
12].
By integrating these variables into a unified conceptual model, this study employs Partial Least Squares Structural Equation Modeling (PLS-SEM) to explore the causal relationships among them and their effect on the behavioral intention to adopt AI tools. PLS-SEM is well-suited for this investigation due to its ability to handle complex models with latent constructs and its applicability in exploratory research [
13].
This research contributes to the literature on digital consumer behavior by identifying specific psychological, contextual, and marketing-related factors that influence AI adoption. It also aligns with the goals of the current Special Issue by addressing the interaction between emerging technologies and consumer behavior, particularly in regard to digital transformation, personalization, and sustainable engagement. The findings aim to offer strategic insights for marketers and technology developers seeking to design AI tools that resonate with consumer needs, values, and expectations in an increasingly automated marketplace.
Furthermore, this research contributes to advancing innovation and sustainability in small and medium-sized enterprises (SMEs), aligning with the strategic objectives of the 2030 Agenda for Sustainable Development.
2. The Role of AI in Supporting Innovation and Sustainability in SMEs
Artificial Intelligence (AI) has emerged as one of the most transformative digital technologies in contemporary society, profoundly altering the ways in which consumers interact with products, services, and brands. The integration of AI into everyday life—through tools such as recommendation engines, virtual assistants, chatbots, and automated decision systems—has created new dynamics in digital consumption, shifting traditional models of consumer behavior and engagement [
3]. In this context, AI functions not merely as a technical tool, but as a behavioral catalyst that reshapes how consumers seek information, evaluate alternatives, and form preferences.
From a commercial perspective, AI’s most visible impact lies in its ability to personalize the consumer experience at scale. AI-driven systems analyze vast amounts of user data to deliver tailored content, product recommendations, and marketing messages, thereby increasing perceived relevance and engagement [
14]. This level of personalization enhances customer satisfaction, boosts conversion rates, and strengthens brand loyalty. However, the automation of decision-making processes also introduces significant challenges, particularly regarding consumer autonomy, trust, and perceived control [
6]. The tension between personalization and privacy remains a central issue in the AI–consumer relationship, especially when users are uncertain about how their data are collected, processed, and used [
12].
In parallel, AI is increasingly being adopted in professional contexts, further blurring the lines between work and personal use of digital technologies. Tools initially introduced in workplace environments—such as predictive analytics or AI-enhanced productivity platforms—often migrate into everyday life, influencing consumer expectations and behavior across domains [
9]. This spillover effect has implications for understanding the holistic integration of AI in consumers’ lives and highlights the role of contextual exposure, such as workplace usage, in facilitating technology acceptance and adoption.
In SMEs, where resource constraints and agility coexist, AI adoption can serve as a catalyst for innovation and the transition toward more sustainable business practices. AI-powered tools can support eco-efficiency, smarter supply chains, and customer-focused innovation, thereby enabling SMEs to differentiate and thrive in competitive markets.
Beyond efficiency and personalization, AI also plays a strategic role in promoting sustainable consumption practices, aligning with the Sustainable Development Goals (SDGs) set by the United Nations, particularly Goal 12: Responsible Consumption and Production. AI systems can optimize energy use, reduce waste, and provide consumers with data-driven insights for making environmentally conscious decisions [
15]. For instance, AI-enabled platforms may suggest low-impact products or alternatives with better energy ratings, supporting more sustainable purchasing behavior. Moreover, digital transparency tools powered by AI can help consumers better understand the social and environmental impact of their choices, fostering a more informed and responsible consumption culture.
At the same time, the introduction of AI into the consumer journey brings about important psychological and emotional considerations. Trust in AI systems is critical for adoption, particularly when users rely on opaque or complex algorithms whose internal logic remains inaccessible. Prior studies have shown that trust mediates the relationship between perceived usefulness and behavioral intention, especially in high-involvement contexts such as finance, healthcare, or digital services [
16]. Moreover, consumers’ knowledge about AI and their passion for engaging with emerging technologies may serve as key psychological enablers that shape both their perceptions and behavioral responses [
4].
As such, the consumer–AI relationship is increasingly characterized by a multidimensional interplay of cognitive (e.g., knowledge, perceived usefulness), emotional (e.g., trust, passion), and contextual (e.g., workplace integration, exposure to personalized marketing) factors. The convergence of these forces illustrates the dual nature of AI technologies: on one hand, as enablers of more efficient, personalized, and sustainable consumption, and on the other, as sources of uncertainty, loss of control, and ethical concern. Understanding this balance is critical in both academic and practical terms.
In alignment with the objectives of this Special Issue, the current study explores how AI adoption at the consumer level is influenced by an integrated set of variables that span knowledge, context, personalization, and affective engagement. By doing so, it contributes to a broader understanding of how emerging digital technologies affect not only what consumers buy but how they live, work, and relate to digital systems in a rapidly evolving technological environment.
In this context, SMEs can act as agile enablers of responsible digital transformation, where AI is used not only to personalize consumer experience but also to foster sustainable innovation in business ecosystems.
3. Technology Acceptance Models and PLS-SEM Framework
Understanding the drivers of consumer adoption of artificial intelligence (AI) tools requires a robust theoretical foundation that captures both cognitive evaluations and behavioral intentions. To this end, the current study draws on the Technology Acceptance Model (TAM) (Davis, 1989) [
5] and its extended variants, such as the Unified Theory of Acceptance and Use of Technology (UTAUT2) [
7], as a basis for examining how individuals assess and ultimately decide to adopt AI-powered technologies in both personal and professional contexts.
The Technology Acceptance Model (TAM) remains one of the most widely used and validated frameworks for predicting user acceptance of technology. At its core, TAM posits that perceived usefulness (PU) and perceived ease of use (PEOU) are the two primary determinants of a user’s behavioral intention (BI) to adopt a technology. Perceived usefulness refers to the degree to which a person believes that using a particular system will enhance their performance, while perceived ease of use denotes the extent to which using the system is free of effort [
5]. These two constructs have been shown to predict not only intention but also actual usage across a wide range of digital technologies, including AI systems [
17].
Building upon TAM, the UTAUT2 model introduces additional factors such as hedonic motivation, habit, and facilitating conditions, making it especially relevant in consumer contexts (Venkatesh et al., 2012) [
7]. In this research, passion for AI is introduced as a proxy for hedonic motivation, representing an intrinsic driver of engagement with emerging technologies. Additionally, workplace integration of AI tools is conceptualized as a contextual factor that may influence the perceived ease of use and familiarity with AI, thus supporting adoption.
4. Consumer Passion and Intrinsic Motivation for Technology
In the context of emerging technologies such as artificial intelligence (AI), understanding not only cognitive but also emotional and motivational factors is essential for explaining consumer adoption behaviors. While models such as TAM and UTAUT2 emphasize beliefs about usefulness and ease of use, recent research highlights the growing importance of intrinsic motivation in shaping how consumers engage with intelligent digital tools [
10,
18]. In this study, we conceptualize consumer passion for AI as a key affective construct that influences the adoption of AI tools by reinforcing engagement, curiosity, and a proactive orientation toward learning and experimentation.
Passion for technology can be understood as a sustained, self-driven interest in exploring, using, and integrating technological tools into daily life, beyond utilitarian necessity [
19]. Unlike external motivators—such as rewards, work obligations, or peer influence—intrinsic motivators such as passion are derived from personal satisfaction, enjoyment, or identification with a particular domain. In the AI context, this may manifest as consumers who actively follow AI developments, experiment with AI-based apps, or integrate AI tools into their routines not because they must, but because they find the process engaging, empowering, or enjoyable [
4].
The concept of harmonious passion, as proposed in dualistic models of motivation, suggests that individuals who freely engage in an activity they love experience greater well-being and are more persistent in their behavior [
19]. This perspective aligns well with digital contexts, where users are not necessarily forced to adopt AI tools but choose to do so because they perceive personal value and alignment with their interests or identity. This form of technology enthusiasm may be particularly relevant for early adopters or tech-savvy consumers who actively seek new experiences with intelligent systems [
20].
In AI adoption, passion may serve two interrelated functions. First, it may act as a direct driver of behavioral intention, encouraging consumers to seek out and integrate AI tools in both personal and professional spheres. Second, it may indirectly affect other cognitive perceptions, such as perceived usefulness and ease of use, by fostering a more optimistic and open attitude toward experimentation and learning. Consumers with higher levels of passion may be more resilient in the face of technical challenges and more inclined to explore complex functionalities, thus perceiving the tools as more beneficial and user-friendly [
20].
Moreover, in consumer–technology relationships, emotional and motivational factors play a role in forming trust, satisfaction, and long-term loyalty. When consumers are emotionally engaged with a technology, they may exhibit greater tolerance for imperfections and a higher likelihood of continued usage [
21]. This may be particularly relevant in AI environments where algorithmic systems are constantly learning and improving yet may not always perform predictably or transparently.
In the current model, passion for AI is introduced as a psychological antecedent that complements traditional cognitive constructs. While AI knowledge and workplace integration reflect what consumers know and do, passion reflects how consumers feel toward technology and how emotionally invested they are in its use. By incorporating this dimension, the study captures a more complete picture of the consumer’s decision-making process, rooted not only in rational evaluation but also in emotional and motivational engagement.
Importantly, the inclusion of passion in technology adoption research has gained momentum in recent years, particularly in studies exploring gamification, smart devices, and social media platforms. These studies consistently show that passion positively influences engagement, satisfaction, and long-term behavioral intentions [
22]. As such, applying this construct to AI adoption offers theoretical novelty and practical relevance, especially in a marketplace where consumer–brand–technology interactions are becoming increasingly emotional and identity-driven.
5. AI-Driven Marketing Personalization and Consumer Trust
The proliferation of artificial intelligence (AI) in marketing has enabled unprecedented levels of personalization across digital platforms. By leveraging large-scale consumer data and predictive algorithms, firms can tailor content, offers, and communications in real time, aligning them closely with individual preferences and behaviors [
14]. This form of AI-driven personalization is central to contemporary consumer experiences, and its impact on decision making, satisfaction, and loyalty has become a critical area of academic inquiry. In this study, perceived marketing personalization is examined as a key antecedent of trust in AI systems and, ultimately, of consumers’ intention to adopt AI-powered tools.
Personalization in marketing is not a new concept; however, AI enhances its scope and precision. Unlike traditional segmentation methods, AI can generate highly granular and dynamic personalization, adjusting recommendations, prices, or messaging in real time based on behavioral inputs [
11]. From the consumer’s perspective, this level of individualization can significantly improve perceived relevance, reduce choice overload, and increase convenience. These benefits often translate into stronger consumer engagement and more positive attitudes toward brands using AI technologies effectively [
23].
However, AI-driven personalization also introduces significant psychological and ethical challenges. Because AI systems rely on extensive data tracking and often operate as “black boxes,” consumers may struggle to understand how decisions are made, raising concerns about transparency, data privacy, and loss of autonomy [
24]. These concerns can diminish consumer trust in AI systems, which is a foundational element in any technology adoption process—particularly when systems exhibit high levels of automation and decision-making autonomy [
6].
Trust in AI is defined as the consumer’s belief that the system is reliable, competent, and acts in their best interest. It plays a dual role in AI adoption: as a mediator that translates perceived personalization into behavioral intention, and as a moderator that determines whether automation is accepted or rejected [
16]. In personalized marketing contexts, trust becomes even more critical, as consumers must feel confident that the system respects their preferences, protects their data, and offers unbiased recommendations.
The relationship between personalization and trust is nuanced. While increased personalization can enhance the perception of system competence and customer centricity, it may also lead to discomfort if the AI appears “too invasive” or “too accurate” [
25]. This phenomenon, known as the “personalization–privacy paradox,” reflects a trade-off that consumers make between receiving relevant experiences and maintaining control over their personal information [
12]. Consequently, the success of AI personalization efforts depends not only on technical accuracy but also on how these systems are perceived, framed, and communicated to users.
To address these concerns, transparency, explainability, and ethical AI design are increasingly advocated as means to foster consumer trust. Research has shown that when consumers are informed about how their data are used and are given some level of control over personalization parameters, their trust in AI systems increases [
26]. This supports the inclusion of trust in theoretical models of AI adoption, particularly in domains where perceived intrusiveness can override functional benefits.
In this study’s conceptual framework, AI-driven marketing personalization is modeled as an antecedent of trust in AI, which in turn affects behavioral intention to adopt AI tools. This structure acknowledges the dual nature of personalization: it can generate perceived value and convenience but also trigger privacy concerns and uncertainty. By integrating personalization and trust into the model, the research offers a more holistic understanding of how consumers evaluate and adopt AI technologies in commercial environments.
Furthermore, this approach aligns with the objectives of the Special Issue by illustrating how emerging digital technologies influence consumer experience and behavior—not only through functionality and efficiency but also through affective and ethical dimensions. The findings have implications for marketers, system designers, and policy makers seeking to create AI systems that are not only effective but also trustworthy, user-friendly, and aligned with consumer values.
6. Methodological Approach
This section presents the methodological foundations of the study, including the selection and integration of theoretical models (TAM and UTAUT2), the rationale for construct inclusion, and the justification for using the PLS-SEM approach. These elements inform the empirical design that follows in the next sections.
The inclusion of these cognitive and contextual factors in the research model allows for a more nuanced understanding of how AI tools are evaluated and adopted by consumers. While perceived usefulness and ease of use remain central, they are embedded within a broader framework that considers emotional engagement (passion), prior exposure (workplace integration), and trust in AI systems, particularly when personalization is present. This reflects the complexity of the current digital landscape, where consumers encounter AI not only as a utility but also as a personalized, and sometimes opaque, decision-making agent.
Given the multidimensional nature of the conceptual model and the presence of latent variables measured through reflective indicators (e.g., trust, passion) and potentially formative elements (e.g., AI knowledge as an aggregated experience), the study employs Partial Least Squares Structural Equation Modeling (PLS-SEM) as the analytical method. PLS-SEM is particularly well-suited for exploratory models that aim to predict key target constructs and test theoretical extensions [
13]. Unlike covariance-based SEM, PLS-SEM does not require multivariate normality and can handle complex models with a high number of constructs and indicators relative to sample size.
Moreover, PLS-SEM is advantageous in research contexts where the model includes both direct and indirect relationships, mediating constructs, and potentially moderating effects. In this study, trust in AI systems may act as a mediator between marketing personalization and behavioral intention, while digital literacy or prior experience could moderate the effect of AI knowledge on perceived ease of use—justifying the flexibility of the chosen statistical technique.
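The mediation logic described above (personalization → trust → intention) is typically tested via bootstrapped indirect effects. A minimal stdlib sketch of this procedure is shown below on synthetic composite scores; the variable names, effect sizes, and generated data are purely illustrative and are not the study’s results:

```python
import random
import statistics

random.seed(7)
n = 240  # matches the study's sample size; the data here are synthetic

# Synthetic composite scores mimicking the chain PERS -> TRUST -> BI.
pers = [random.gauss(0, 1) for _ in range(n)]
trust = [0.5 * p + random.gauss(0, 1) for p in pers]
bi = [0.4 * t + random.gauss(0, 1) for t in trust]

def slope(x, y):
    """OLS slope of y on x."""
    mx, my = statistics.fmean(x), statistics.fmean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    return num / sum((a - mx) ** 2 for a in x)

def indirect(idx):
    # a-path (PERS -> TRUST) times b-path (TRUST -> BI); for brevity the
    # b-path here is estimated without partialling out the direct path.
    xs = [pers[i] for i in idx]
    ms = [trust[i] for i in idx]
    ys = [bi[i] for i in idx]
    return slope(xs, ms) * slope(ms, ys)

# Percentile bootstrap of the indirect effect.
boots = sorted(indirect([random.randrange(n) for _ in range(n)])
               for _ in range(1000))
lo, hi = boots[24], boots[974]  # ~95% percentile confidence interval
print(lo > 0)  # an interval excluding zero indicates a significant indirect effect
```

In full PLS-SEM software such as WarpPLS, the same resampling logic is applied to latent variable scores rather than simple composites, and the direct path is estimated jointly with the indirect one.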
The predictive orientation of PLS-SEM also aligns well with the goals of this research, which seeks not only to test hypotheses derived from TAM and UTAUT2 but also to identify the most significant predictors of consumer intention to adopt AI tools. By integrating theoretical constructs from established acceptance models with newer variables relevant to the digital age—such as AI-driven personalization and passion for technology—this study provides a contemporary extension of classical frameworks within a rapidly evolving technological and behavioral environment.
Justification of Method
The conceptual model includes nine hypothesized paths with reflective measurement constructs. Given the exploratory nature of this research, the presence of mediation, and the complex structural paths, PLS-SEM is appropriate. WarpPLS offers visual diagnostics and model fit indices, making it a well-suited platform for behavioral research in technology adoption [
27].
PLS-SEM was chosen over covariance-based SEM due to its suitability for complex models with formative and reflective constructs, non-normal data distribution, and exploratory objectives. Compared to CB-SEM, which emphasizes model fit and theory confirmation, PLS-SEM is more robust in handling predictive relationships and smaller sample sizes, as supported by the recent literature [
13].
7. Theoretical Model and Hypotheses Development
To understand the drivers of AI tool adoption among consumers, this study proposes a conceptual model that integrates cognitive, contextual, emotional, and marketing-related variables. The model is grounded in the Technology Acceptance Model (TAM), extended through the inclusion of passion as intrinsic motivation, workplace integration as contextual exposure, and perceived marketing personalization as a marketing-driven antecedent of trust (
Table 1).
This section develops the theoretical justification for each of the relationships in the model and formulates the corresponding hypotheses (
Figure 1).
According to TAM, individuals who possess higher levels of domain knowledge are more likely to perceive technology as useful, due to better understanding of its capabilities and relevance to their needs [
5]. In the context of AI, knowledge allows consumers to recognize practical applications and benefits, which enhances perceived usefulness.
H1. AI knowledge positively influences perceived usefulness of AI tools.
Knowledge about AI also facilitates familiarity with its interfaces and functions, reducing perceived complexity. Prior studies show that digital literacy and self-efficacy are strongly linked to ease of use evaluations [
7,
8].
H2. AI knowledge positively influences perceived ease of use of AI tools.
Consumers with higher AI literacy are better equipped to evaluate system decisions, reducing fear of the unknown and increasing trust. Knowledge enhances the perceived transparency and competence of AI [
16].
H3. AI knowledge positively influences trust in AI systems.
This relationship is core to TAM: when users believe a technology will improve their performance or decision making, they are more likely to adopt it [
5]. This applies to AI tools that offer automation, convenience, or smarter choices. Trust in AI systems was operationalized as a second-order construct comprising perceived competence, reliability, and ethical behavior. Scale items were adapted from prior validated instruments, and pilot testing was conducted to ensure clarity. Although multidimensional, the construct showed strong internal consistency (α = 0.89) and convergent validity.
H4. Perceived usefulness positively influences the behavioral intention to adopt AI tools.
Ease of use reduces cognitive effort and friction in technology adoption. If AI tools are seen as intuitive and accessible, consumers are more willing to try and integrate them [
7].
H5. Perceived ease of use positively influences the behavioral intention to adopt AI tools.
Trust is essential in AI contexts where users must rely on system outputs they may not fully understand. When trust is high, users are more willing to delegate decision making and adopt the technology [
6,
26,
28].
H6. Trust in AI systems positively influences the behavioral intention to adopt AI tools.
Workplace exposure to AI tools can normalize use and increase perceived relevance, leading to technology spillover into personal contexts. Familiarity in professional settings often enhances readiness for voluntary adoption [
9].
H7. Workplace AI integration positively influences the behavioral intention to adopt AI tools.
Intrinsic motivation, such as passion or curiosity toward AI, can independently drive technology engagement. Passionate users seek new experiences and are more likely to adopt emerging tools regardless of practical necessity [
19].
H8. Passion for AI positively influences the behavioral intention to adopt AI tools.
When AI delivers personalized experiences that consumers perceive as relevant and beneficial, this can enhance perceptions of system competence and benevolence, fostering trust [
11,
25]. Conversely, a lack of personalization may reduce trust.
H9. Perceived AI-driven marketing personalization positively influences trust in AI systems.
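For reference, the nine hypothesized paths above can be captured as a simple data structure, which is convenient when specifying the structural model in analysis scripts; the construct abbreviations below are ours:

```python
# Nine hypothesized structural paths (H1-H9). Abbreviations (illustrative):
# KNOW = AI knowledge, PU = perceived usefulness, PEOU = perceived ease of use,
# TRUST = trust in AI systems, BI = behavioral intention to adopt AI,
# WORK = workplace AI integration, PASS = passion for AI,
# PERS = perceived AI-driven marketing personalization.
HYPOTHESES = [
    ("H1", "KNOW", "PU"),
    ("H2", "KNOW", "PEOU"),
    ("H3", "KNOW", "TRUST"),
    ("H4", "PU", "BI"),
    ("H5", "PEOU", "BI"),
    ("H6", "TRUST", "BI"),
    ("H7", "WORK", "BI"),
    ("H8", "PASS", "BI"),
    ("H9", "PERS", "TRUST"),
]

def antecedents(target):
    """Direct antecedents of an endogenous construct, from the path list."""
    return [src for _, src, dst in HYPOTHESES if dst == target]
```

For example, `antecedents("BI")` returns the five direct predictors of behavioral intention (PU, PEOU, TRUST, WORK, PASS), mirroring hypotheses H4 through H8.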
8. Materials and Methods
8.1. Research Design and Data Collection
This study employs a quantitative, cross-sectional research design to investigate the determinants of consumer adoption of AI-powered tools. Data were collected through a structured online questionnaire developed based on validated scales from the literature, adapted to fit the context of AI usage in both personal and professional domains.
The target population comprised individual users of digital technologies with varying degrees of exposure to AI-based tools. A non-probabilistic purposive sampling technique was used to ensure participation from individuals with at least minimal familiarity with AI applications (e.g., smart assistants, recommendation systems, AI chatbots). A total of 240 valid responses were collected over a one-month period. The sample included:
Employees using AI tools in the workplace (34%);
University students in technology-related or business programs (28%);
General consumers using AI-based tools for personal tasks (e.g., shopping, entertainment) (38%).
Demographic characteristics were recorded, including age, gender, education, and frequency of AI use. The distribution reflected diversity across age groups (18–55+), with a slight predominance of male respondents (56%) and a majority holding at least a bachelor’s degree (63%).
8.2. Instrument Development
All constructs were measured using reflective multi-item scales based on prior studies (e.g., Davis, 1989; Venkatesh et al., 2012; Bleier & Eisenbeiss, 2015) [5,7,11]. Items were adapted for the AI context and evaluated using a 7-point Likert scale (1 = strongly disagree to 7 = strongly agree). A pre-test was conducted with 20 participants to ensure clarity and validity of the items before full deployment.
Constructs included: AI Knowledge; Perceived Usefulness; Perceived Ease of Use; Passion for AI; Workplace Integration of AI; Marketing Personalization; Trust in AI Systems; Behavioral Intention to Adopt AI.
8.3. Data Analysis: Structural Equation Modeling with WarpPLS
To test the hypothesized relationships and assess the validity of the proposed model, the study employed Partial Least Squares Structural Equation Modeling (PLS-SEM) using WarpPLS software version 8.0. PLS-SEM is particularly well-suited for exploratory research, predictive modeling, and complex theoretical frameworks involving latent variables and non-normal data distributions [27].
WarpPLS enables both linear and nonlinear relationship modeling, as well as robust assessments of multicollinearity, predictive relevance, and overall model fit.
8.3.1. Global Model Fit
Model fit was assessed using several standardized global criteria available in WarpPLS. All values met or exceeded the recommended thresholds (Table 2).
These results confirm that the model demonstrates strong explanatory power, minimal multicollinearity, and robust predictive accuracy, supporting the adequacy of the PLS-SEM approach.
8.3.2. Construct Reliability and Validity
The reliability and convergent validity of the latent constructs were assessed through composite reliability (CR), Cronbach’s alpha, and average variance extracted (AVE). The discriminant validity of all constructs was assessed using both the Fornell–Larcker criterion and the HTMT ratio. Results confirmed that ‘passion for AI’ and ‘workplace integration’ are statistically distinct constructs (HTMT = 0.61 < 0.85), with no conceptual overlap based on item loadings and cross-loadings. All constructs exceeded the commonly accepted thresholds (CR > 0.70, AVE > 0.50), as shown in Table 3.
All variance inflation factor (VIF) values were well below the cutoff of 5, indicating no multicollinearity concerns.
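For readers less familiar with these reliability statistics, the standard formulas can be sketched in a few lines of Python. The loading values and the auxiliary R² used for the VIF below are illustrative placeholders, not the study’s actual data:

```python
def composite_reliability(loadings):
    """CR = (sum of loadings)^2 / ((sum of loadings)^2 + sum of error variances)."""
    s = sum(loadings) ** 2
    error_var = sum(1 - l ** 2 for l in loadings)
    return s / (s + error_var)

def average_variance_extracted(loadings):
    """AVE = mean of the squared standardized loadings."""
    return sum(l ** 2 for l in loadings) / len(loadings)

def vif(r_squared_j):
    """VIF for a predictor regressed on the remaining predictors."""
    return 1 / (1 - r_squared_j)

# Illustrative standardized loadings for a four-item reflective construct
loadings = [0.82, 0.78, 0.85, 0.80]
print(f"CR  = {composite_reliability(loadings):.3f}  (threshold > 0.70)")
print(f"AVE = {average_variance_extracted(loadings):.3f}  (threshold > 0.50)")
print(f"VIF = {vif(0.45):.3f}  (threshold < 5)")
```

With these example loadings, both CR (≈0.89) and AVE (≈0.66) clear the conventional cutoffs, mirroring the pattern reported in Table 3.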
8.3.3. Explanatory Power
The model demonstrated strong explanatory power for the endogenous variable “Behavioral Intention to Adopt AI” (R² = 0.61), indicating that the predictors explain over 60% of its variance. Moderate explanatory power was observed for Trust in AI Systems (R² = 0.29) and Perceived Usefulness (R² = 0.20), with weaker power for Perceived Ease of Use (R² = 0.09).
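As a reminder of what these R² values measure, the coefficient of determination compares residual error against total variance around the mean. A minimal illustration (the observed and predicted intention scores below are invented for demonstration only):

```python
def r_squared(observed, predicted):
    """R^2 = 1 - SS_residual / SS_total."""
    mean_y = sum(observed) / len(observed)
    ss_tot = sum((y - mean_y) ** 2 for y in observed)
    ss_res = sum((y - p) ** 2 for y, p in zip(observed, predicted))
    return 1 - ss_res / ss_tot

# Invented 7-point-scale intention scores and model predictions
observed = [5, 6, 4, 7, 3]
predicted = [5.2, 5.8, 4.5, 6.5, 3.5]
print(f"R^2 = {r_squared(observed, predicted):.2f}")
```

An R² of 0.61 for Behavioral Intention therefore means that the structural predictors jointly account for 61% of the variance in that construct’s scores.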
9. Results
The structural model was evaluated using WarpPLS 8.0. As shown in Figure 2, all hypothesized relationships between constructs were supported and statistically significant at the 0.01 level. The final model explained 61% of the variance in Behavioral Intention to Adopt AI Tools, indicating a substantial level of explanatory power.
9.1. Structural Path Coefficients and Hypothesis Testing
In line with the recent literature exploring digital innovation and organizational transformation, the present study’s findings resonate with broader theoretical frameworks addressing behavioral intention and technology acceptance in SMEs. For example, the Theory of Planned Behavior provides a complementary lens for understanding how attitudes, subjective norms, and perceived behavioral control influence adoption decisions, especially when technologies like AI introduce both opportunities and uncertainty [29]. Furthermore, the growing body of research on trust and ethical design in AI marketing emphasizes the need for transparent, value-aligned systems that resonate with users’ expectations [30]. The OECD has also highlighted that SMEs face unique barriers in AI adoption, including limited expertise and digital readiness, which reinforces the relevance of contextual variables such as workplace integration and AI literacy [31]. Multidisciplinary perspectives on AI further suggest that cross-functional integration, policy alignment, and ethical innovation are essential for effective adoption and sustainable use [32]. Finally, digital transformation research points to the mediating role of organizational capabilities and leadership in turning AI adoption into performance outcomes, particularly in dynamic markets [33].
All nine structural paths in the model were found to be significant, with β coefficients ranging from 0.15 to 0.48, all with p-values < 0.01, supporting the proposed theoretical relationships.
Table 4 summarizes the results of hypothesis testing, indicating that all hypothesized paths were statistically significant (p < 0.01). The strongest effect was observed from Marketing Personalization to Trust in AI (β = 0.48), followed by AI Knowledge on Perceived Usefulness (β = 0.45).
These results confirm that cognitive (H1, H2), contextual (H3, H7), emotional (H8), and marketing-related (H9) factors contribute significantly to AI adoption behavior. Notably, Marketing Personalization had the strongest effect on Trust in AI (β = 0.48), and Trust in turn exerted a significant indirect effect on Behavioral Intention.
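The indirect effect mentioned above operates through Trust as a mediator, and mediated effects of this kind are computed as the product of the component path coefficients. A minimal sketch, where the Trust → Behavioral Intention coefficient (0.30) is a hypothetical placeholder rather than a value reported in this study:

```python
def indirect_effect(path_a, path_b):
    """Mediated (indirect) effect = product of the two direct structural paths."""
    return path_a * path_b

beta_pers_trust = 0.48  # reported: Marketing Personalization -> Trust in AI
beta_trust_bi = 0.30    # hypothetical placeholder: Trust -> Behavioral Intention
print(f"indirect effect = {indirect_effect(beta_pers_trust, beta_trust_bi):.3f}")
```

In practice, PLS-SEM software such as WarpPLS also reports bootstrap-based significance tests for these product terms rather than the point estimate alone.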
9.2. Explained Variance (R²)
The model demonstrated strong explanatory power for the key endogenous constructs: Behavioral Intention to Adopt AI (R² = 0.61), Trust in AI Systems (R² = 0.29), Perceived Usefulness (R² = 0.20), and Perceived Ease of Use (R² = 0.09). These values indicate that the model explains a substantial portion of variance in consumer behavior toward AI-powered tools, particularly in terms of adoption intentions.
10. Discussion
The results of this study provide a comprehensive understanding of the cognitive, contextual, emotional, and marketing-related drivers of consumer adoption of AI-powered tools. All nine hypothesized relationships were statistically supported, offering robust empirical support for the proposed PLS-SEM model. The model explained 61% of the variance in consumers’ behavioral intention to adopt AI tools, which is a substantial level of explanatory power in behavioral research [13].
10.1. Theoretical Contributions
This study extends the Technology Acceptance Model (TAM) and UTAUT2 frameworks by integrating additional factors relevant to the modern AI-driven digital environment. The inclusion of AI knowledge as a predictor of perceived usefulness, ease of use, and trust reflects the increasing importance of digital literacy in technology adoption. The significant effect of AI knowledge on trust in AI systems (H3) confirms that familiarity with how AI works reduces perceived uncertainty, which aligns with Ajay et al. (2020) [16] and reinforces the need to address consumers’ informational empowerment.
Similarly, passion for AI, used here as a proxy for intrinsic motivation and hedonic engagement, emerged as a significant predictor of adoption intention (H8). This supports recent research emphasizing the role of emotional engagement and identity alignment in the adoption of emerging technologies [19]. Passionate users are more likely to accept complexity, persist through learning curves, and integrate AI tools into everyday life—even beyond instrumental usefulness.
The study also highlights the impact of workplace integration (H7) on personal adoption behaviors, confirming prior research that suggests technology use in professional contexts can normalize and spill over into private life [9]. This result emphasizes the relevance of cross-contextual experience as a facilitator of personal engagement with AI.
One of the most notable contributions is the inclusion of AI-driven marketing personalization as an antecedent of trust in AI systems (H9). The strong path coefficient (β = 0.48) confirms that relevant, personalized experiences contribute meaningfully to building trust in automated systems. This finding aligns with studies on the personalization–trust dynamic in digital environments [11,25,28] but also advances the discussion by embedding this mechanism in an AI-specific framework.
This research expands the current literature on digital innovation by addressing the context of SMEs, which are increasingly expected to be both digitally agile and environmentally responsible. The inclusion of workplace integration and AI-driven marketing personalization underscores how SMEs can implement AI to improve internal operations and external stakeholder engagement in sustainable ways.
10.2. Managerial Implications
From a managerial perspective, especially within SMEs aiming for digital and sustainable transformation, the findings offer actionable strategies to increase AI adoption:
Enhance AI literacy: Since AI knowledge is a consistent predictor of both trust and perceived usefulness, firms and institutions should invest in educational content, onboarding experiences, and explainable AI (XAI) to demystify the technology.
Foster user passion and engagement: Marketing campaigns can be designed to stimulate emotional connection and curiosity, especially for early adopters and digital enthusiasts. Storytelling, innovation showcases, or gamified exploration of AI features may help build user passion.
Leverage workplace adoption as a catalyst: Companies should view workplace AI deployment as not only a productivity tool but also a channel to foster broader consumer adoption. Integrations that are intuitive and seamless can encourage users to replicate similar tools in their personal lives.
Use personalization carefully to build trust: Marketing efforts should prioritize relevant, non-intrusive personalization that enhances user experience without compromising privacy perceptions. Transparency in how data are used and allowing user control over preferences will enhance trust and long-term engagement.
While personalization can enhance trust in AI systems by aligning content with user expectations, it also raises ethical concerns related to data privacy and autonomy. Organizations should ensure that personalization mechanisms are transparent, based on informed consent, and designed with privacy-preserving architectures.
11. Conclusions
This study examined the factors influencing the adoption of AI-powered tools by consumers, using a theoretically grounded and empirically validated model based on PLS-SEM. By integrating constructs from the Technology Acceptance Model (TAM) and UTAUT2, along with context-specific variables such as AI knowledge, workplace integration, passion for AI, and AI-driven marketing personalization, the research offers a multidimensional perspective on how emerging technologies reshape consumer behavior.
The findings confirm that both cognitive evaluations (e.g., perceived usefulness, ease of use) and affective and contextual influences (e.g., trust, workplace experience, intrinsic motivation) play significant roles in shaping behavioral intention toward AI adoption. Notably, trust in AI systems emerged as a key mediator, while marketing personalization had the strongest influence on trust, highlighting the strategic role of well-designed, user-centric AI interactions.
This paper addressed the research problem of the lack of integrated models explaining AI adoption behavior in the context of SMEs. By answering the research question of how contextual and psychological factors shape adoption, the study fills a gap in the current digital transformation literature.
11.1. Theoretical Implications
This study contributes to the growing body of literature at the intersection of consumer behavior and digital technologies by extending established acceptance models into the context of intelligent, personalized AI systems. The inclusion of passion for AI introduces an emotional and motivational dimension often underexplored in adoption models. Likewise, the empirical integration of marketing personalization and trust bridges consumer psychology with digital marketing theory, emphasizing the importance of AI-enabled relevance and transparency.
11.2. Practical Implications
For practitioners, the results underline the importance of educating users about AI, fostering emotional engagement, and ensuring trustworthy, personalized experiences. Organizations should design AI interfaces and features that are not only functional but also intuitive and emotionally resonant. The findings also suggest that AI exposure in professional settings may be a catalyst for broader consumer adoption, providing a strategic opportunity for tech providers operating in both B2B and B2C markets. More broadly, the findings offer practical implications for how SMEs can harness AI for both competitive differentiation and sustainable value creation.
While AI-driven personalization can increase trust by enhancing relevance and user experience, it also introduces significant ethical considerations. Concerns regarding data privacy, algorithmic transparency, and user autonomy must be carefully managed. To address these, organizations are encouraged to implement explainable AI (XAI) practices, ensure user consent in data collection, and offer customizable privacy settings. Balancing personalization benefits with ethical safeguards is crucial for sustaining long-term trust and compliance with data protection regulations.
11.3. Limitations and Future Research
As with any empirical study, certain limitations must be acknowledged. First, the study employed a non-probabilistic sample with a focus on digitally engaged users, which may limit generalizability to the broader population. Second, while PLS-SEM allows for robust modeling of complex constructs, the cross-sectional design precludes conclusions about causality. Future studies could employ longitudinal designs to explore how trust, passion, and perceived usefulness evolve over time with repeated AI use. Although PLS-SEM is suitable for exploratory and predictive modeling, it has known limitations, such as potential bias in path coefficient estimates under small sample sizes and a lack of global model fit indices comparable to CB-SEM. Moreover, the technique is sensitive to multicollinearity and requires reflective indicator consistency.
Additionally, this study focused on general AI-powered tools across personal and professional contexts. Future research could explore sector-specific AI adoption (e.g., healthcare, education, retail) or conduct multi-group analyses comparing generational cohorts, digital natives vs. non-natives, or cultural contexts. The incorporation of moderators such as digital literacy, privacy concern, or perceived autonomy would further enrich the theoretical model.
Although the findings provide insights relevant to individual users of AI tools, the generalizability of the results across diverse cultural or institutional contexts remains limited. The sample was primarily composed of users from a digitally developed environment, and future research should replicate this study in other regions to validate the model’s cross-cultural robustness.
While this study focused on AI tools in consumer and workplace contexts, the model may require adaptation for domains such as healthcare or public services. In such sectors, trust dynamics, risk perception, and regulatory constraints might play a more dominant role and should be incorporated in future models.
Author Contributions
Conceptualization, I.-C.P. and R.-G.P.; methodology, I.-C.P.; software, R.-G.P.; validation, D.-F.C. and H.M.; formal analysis, I.-C.P.; investigation, R.-G.P.; resources, R.-G.P.; data curation, H.M.; writing—original draft preparation, R.-G.P.; writing—review and editing, R.-G.P.; visualization, H.M.; supervision, H.M.; project administration, D.-F.C.; funding acquisition, H.M. All authors have read and agreed to the published version of the manuscript.
Funding
This research received no external funding.
Institutional Review Board Statement
The study was conducted in accordance with the Declaration of Helsinki. Ethical review and approval were waived for this study because no sensitive personal data were collected, participants remained anonymous, and the research posed minimal risk. In accordance with Romanian national legislation, namely Law no. 677/2001 (repealed and replaced by EU GDPR 2016/679) and the applicable national guidelines for academic research involving human participants, ethical approval from an Institutional Review Board (IRB) is not required for studies that involve anonymous survey data, do not collect personally identifiable information, and do not involve any form of medical, psychological, or physical intervention. This is also aligned with the internal research ethics policy of the university where the research was conducted, which exempts anonymous sociological or behavioral surveys from formal ethical approval.
Informed Consent Statement
Informed consent was obtained from all subjects involved in the study.
Data Availability Statement
The data that support the findings of this study are available on request from the corresponding author.
Conflicts of Interest
The authors declare no conflicts of interest.
References
- Davenport, T.H.; Ronanki, R. Artificial Intelligence for the Real World. Harv. Bus. Rev. 2018, 96, 108–116. [Google Scholar]
- Cohen, M.J. The Future of Consumer Society: Prospects for Sustainability in the New Economy; Oxford University Press: Oxford, UK, 2016. [Google Scholar]
- Rust, R.T. The Future of Marketing. Int. J. Res. Mark. 2020, 37, 15–26. [Google Scholar] [CrossRef]
- Sié, L.; Yakhlef, A. The Passion for Knowledge: Implications for Its Transfer. Knowl. Process Manag. 2013, 20, 12–20. [Google Scholar] [CrossRef]
- Davis, F.D. Perceived Usefulness, Perceived Ease of Use, and User Acceptance of Information Technology. MIS Q. 1989, 13, 319–340. [Google Scholar] [CrossRef]
- Gursoy, D.; Chi, O.H.; Chi, C.G. Examining the Impacts of Artificial Intelligence (AI) on Consumer Behavior. Int. J. Contemp. Hosp. Manag. 2019, 31, 3960–3980. [Google Scholar]
- Venkatesh, V.; Thong, J.Y.L.; Xu, X. Consumer Acceptance and Use of Information Technology: Extending the Unified Theory of Acceptance and Use of Technology. MIS Q. 2012, 36, 157–178. [Google Scholar] [CrossRef]
- Li, F.; Larimo, J.; Leonidou, L.C. Social Media Marketing Strategy: Definition, Conceptualization, Taxonomy, Validation, and Future Agenda. J. Acad. Mark. Sci. 2020, 49, 51–70. [Google Scholar] [CrossRef]
- Dwivedi, Y.K.; Hughes, L.; Ismagilova, E.; Aarts, G.; Coombs, C.; Crick, T.; Duan, Y.; Dwivedi, R.; Edwards, J.; Eirug, A.; et al. Artificial Intelligence (AI): Multidisciplinary Perspectives on Emerging Challenges, Opportunities, and Agenda for Research, Practice and Policy. Int. J. Inf. Manag. 2021, 57, 101994. [Google Scholar] [CrossRef]
- Deci, E.L.; Ryan, R.M. The “What” and “Why” of Goal Pursuits: Human Needs and the Self-Determination of Behavior. Psychol. Inq. 2000, 11, 227–268. [Google Scholar] [CrossRef]
- Bleier, A.; Eisenbeiss, M. The Importance of Trust for Personalized Online Advertising. J. Retail. 2015, 91, 390–409. [Google Scholar] [CrossRef]
- Awad, N.F.; Krishnan, M.S. The Personalization Privacy Paradox: An Empirical Evaluation of Information Transparency and the Willingness to Be Profiled Online for Personalization. MIS Q. 2006, 30, 13–28. [Google Scholar] [CrossRef]
- Hair, J.F.; Hult, G.T.M.; Ringle, C.M.; Sarstedt, M. A Primer on Partial Least Squares Structural Equation Modeling (PLS-SEM), 3rd ed.; Sage: Thousand Oaks, CA, USA, 2022. [Google Scholar]
- Kaplan, A.; Haenlein, M. Siri, Siri, in My Hand: Who’s the Fairest in the Land? On the Interpretations, Illustrations, and Implications of Artificial Intelligence. Bus. Horiz. 2019, 62, 15–25. [Google Scholar] [CrossRef]
- Vinuesa, R.; Azizpour, H.; Leite, I.; Balaam, M.; Dignum, V.; Domisch, S.; Felländer, A.; Langhans, S.D.; Tegmark, M.; Fuso Nerini, F. The Role of Artificial Intelligence in Achieving the Sustainable Development Goals. Nat. Commun. 2020, 11, 233. [Google Scholar] [CrossRef] [PubMed]
- Ajay, A.K.; Kumar, D.V.S.; Megha, R.U.; Ittamalla, R. Customer Decision-Making Related to Adoption of Artificial Intelligence: A Conceptual Framework Based on Stimulus-Organism-Response Theory. Available online: https://ssrn.com/abstract=5045942 (accessed on 29 June 2025).
- King, W.R.; He, J. A Meta-Analysis of the Technology Acceptance Model. Inf. Manag. 2006, 43, 740–755. [Google Scholar] [CrossRef]
- Alam, S.; Abdullah, S.; Kokash, H.; Ahmed, S.; Omar, N. Consumer’s Acceptance of Retail Service Robots: Mediating Role of Pleasure and Arousal. J. Decis. Syst. 2024, 1–27. [Google Scholar] [CrossRef]
- Vallerand, R.J.; Blanchard, C.; Mageau, G.A.; Koestner, R.; Ratelle, C.; Léonard, M.; Gagné, M.; Marsolais, J. Les Passions de l’Âme: On Obsessive and Harmonious Passion. J. Pers. Soc. Psychol. 2003, 85, 756–767. [Google Scholar] [CrossRef]
- Hoffman, D.L.; Novak, T.P. Consumer and Object Experience in the Internet of Things: An Assemblage Theory Approach. J. Consum. Res. 2018, 44, 1178–1204. [Google Scholar] [CrossRef]
- Kim, M.J.; Park, M.C. Determinants of Behavioral Intentions to Use Smartphone-Based Augmented Reality Shopping Applications. Telemat. Inform. 2019, 47, 101318. [Google Scholar]
- Rachmadanty, A.D.; Muhtar, A.A.; Agustina, A. Examining the Impact of Gamification and Customer Experience on Customer Loyalty in E-Commerce: Mediating Role of Customer Satisfaction. J. Enterp. Dev. 2025, 7, 180–191. [Google Scholar] [CrossRef]
- Arora, N.; Dreze, X.; Ghose, A.; Hess, J.D.; Iyengar, R.; Jing, B.; Joshi, Y.V.; Kumar, V.; Mittal, V.; Provost, F. Putting One-to-One Marketing to Work: Personalization, Customization, and Choice. Mark. Lett. 2008, 19, 305–321. [Google Scholar] [CrossRef]
- Pasquale, F. The Black Box Society: The Secret Algorithms That Control Money and Information; Harvard University Press: Cambridge, MA, USA, 2015. [Google Scholar]
- Aguirre, E.; Mahr, D.; Grewal, D.; De Ruyter, K.; Wetzels, M. Unraveling the Personalization Paradox: The Effect of Information Collection and Trust-Building Strategies on Online Advertisement Effectiveness. J. Retail. 2015, 91, 34–49. [Google Scholar] [CrossRef]
- Ho, S.Y.; Tam, K.Y. An Empirical Examination of the Effects of Web Personalization at Different Stages of Decision Making. Int. J. Hum.-Comput. Interact. 2005, 19, 95–112. [Google Scholar] [CrossRef]
- Kock, N. WarpPLS User Manual: Version 8.0; ScriptWarp Systems: Laredo, TX, USA, 2020. [Google Scholar]
- Novak, T.P.; Hoffman, D.L. Relationship Journeys in the Internet of Things: A New Framework for Understanding Interactions between Consumers and Smart Objects. J. Acad. Mark. Sci. 2021, 49, 1041–1064. [Google Scholar] [CrossRef]
- Ajzen, I. The Theory of Planned Behavior. Organ. Behav. Hum. Decis. Process. 1991, 50, 179–211. [Google Scholar] [CrossRef]
- Verma, S.; Bhattacharyya, S.; Kumar, S. Artificial Intelligence, Trust and Ethics in Marketing: A Review and Research Agenda. J. Bus. Res. 2022, 142, 643–658. [Google Scholar]
- OECD. Artificial Intelligence and SMEs: Opportunities and Challenges. OECD Digit. Econ. Pap. 2021, 306, 1–40. [Google Scholar]
- Dwivedi, Y.K.; Ismagilova, E.; Sarker, P.; Jeyaraj, A.; Jadil, Y.; Hughes, L. A meta-analytic structural equation model for understanding social commerce adoption. Inf. Syst. Front. 2023, 25, 1421–1437. [Google Scholar] [CrossRef]
- Vial, G. Understanding Digital Transformation: A Review and a Research Agenda. MIS Q. Exec. 2019, 18, 103–123. [Google Scholar] [CrossRef]
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).