Article

Correlational and Configurational Perspectives on the Determinants of Generative AI Adoption Among Spanish Zoomers and Millennials

by Antonio Pérez-Portabella 1,2, Mario Arias-Oliva 3, Graciela Padilla-Castillo 4 and Jorge de Andrés-Sánchez 5,*
1 Departament d’Estudis de Comunicació, Universitat Rovira i Virgili, 43002 Tarragona, Spain
2 Facultad de Ciencias de la Información, Universidad Complutense de Madrid, 28040 Madrid, Spain
3 Marketing Department, Faculty of Business & Economy, University Complutense of Madrid, Campus de Somosaguas, 28223 Madrid, Spain
4 Journalism and New Media Department, Faculty of Information Sciences, University Complutense of Madrid, Avenida Complutense 3, 28040 Madrid, Spain
5 Social and Business Research Laboratory, University Rovira i Virgili, Campus de Bellissens, 43204 Reus, Spain
* Author to whom correspondence should be addressed.
Societies 2025, 15(10), 285; https://doi.org/10.3390/soc15100285
Submission received: 27 August 2025 / Revised: 2 October 2025 / Accepted: 3 October 2025 / Published: 11 October 2025
(This article belongs to the Special Issue Technology and Social Change in the Digital Age)

Abstract

Generative Artificial Intelligence (GAI) has become a topic of increasing societal and academic relevance, with its rapid diffusion reshaping public debate, policymaking, and scholarly inquiry across diverse disciplines. Building on this context, the present study explores the factors influencing GAI adoption among Spanish digital natives (Millennials and Zoomers), using data from a large national survey of 1533 participants (average age = 33.51 years). The theoretical foundation of this research is the Theory of Planned Behavior (TPB). Accordingly, the study examines how perceived usefulness (USEFUL), innovativeness (INNOV), privacy concerns (PRI), knowledge (KNOWL), perceived social performance (SPER), and perceived need for regulation (NREG), along with gender (FEM) and generational identity (GENZ), influence the frequency of GAI use. A mixed-methods design combines ordered logistic regression to assess average effects and fuzzy set qualitative comparative analysis (fsQCA) to uncover multiple causal paths. The results show that USEFUL, INNOV, KNOWL, and GENZ positively influence GAI use, whereas NREG discourages it. PRI and SPER show no statistically significant effects. The fsQCA reveals 17 configurations leading to GAI use and eight to non-use, confirming an asymmetric pattern in which all variables, including PRI, SPER, and FEM, are relevant in specific combinations. These insights highlight the multifaceted nature of GAI adoption and suggest tailored educational, communication, and policy strategies to promote responsible and inclusive use.

1. Introduction

Artificial Intelligence (AI) is progressively permeating diverse domains of society, reshaping organizational practices and daily life, and becoming embedded in managerial processes and quality-of-life enhancements [1]. Among AI technologies, generative AI (GAI) plays a particularly prominent role because of its ability to create original content, such as text, images, and music [2]. Prominent examples of GAI applications, such as ChatGPT, DALL·E 2, and GitHub Copilot, exert a significant influence across domains, including education [3], marketing, software engineering, and healthcare [4].
Rapid deployment of GAI presents significant ethical and social challenges. Among the most prominent are the spread of algorithmic biases, generation of misinformation, opacity of decision-making processes, and potential loss of jobs that require intellectual effort [5]. Additionally, the use of language models that operate as “black boxes” has raised concerns regarding transparency and public trust [6].
On the other hand, although GAI has the potential to augment individual creativity and optimize efficiency in cognitively demanding and creative activities, it tends to homogenize innovation, affecting cultural and creative diversity [7]. Moreover, privacy concerns have also increased. Because GAI is trained on large datasets, it can unintentionally capture and reproduce sensitive information. This entails the risks of personal data exposure and non-compliance with regulations [8,9].
This study assesses the drivers and barriers to the use of GAI in Spain using a macro-survey of the Spanish Center for Sociological Research [10] on the use of GAI in Spanish society. The analysis focuses on the factors driving its adoption among younger generations, specifically those born after 1980, who are typically referred to as digital natives and have grown up and been educated in the age of information technologies [11]. Digital natives include Millennials (members of Generation Y) and Zoomers (members of Generation Z). Although generational boundaries are somewhat blurred and there is no absolute consensus among scholars, this study adopts the most widely accepted criterion: Millennials were born between 1980 and the mid-1990s, while Zoomers were born from 1995 onward [12].
Focusing on Millennials and Zoomers to analyze the use of GAI is justified by the marked attitudinal, technological, and motivational differences that set them apart from older generations such as Generation X and Baby Boomers regarding information technologies [13]. Zoomers have grown up in a fully digital environment, showing a greater willingness to adopt emerging technologies, such as GAI, both in educational and professional contexts. In contrast, older generations tend to exhibit higher levels of skepticism, ethical concerns, and resistance to technological change [14]. Furthermore, recent studies show that the key behavioral motivators differ significantly across generations: Zoomers place more value on intrinsic motivation and are more responsive to motivational cues, whereas members of Generation X tend to respond to social extrinsic motivators and Millennials to introjected regulation mechanisms [5].
Spain offers a particularly compelling case for analyzing GAI adoption among digital natives. While it shares general trends in technological uptake with other European countries, it also exhibits distinctive features, such as relatively lower investment in R&D and a strong influence of EU-level policies. Framing Spain in this manner enables cross-national comparisons and adds to the broader debate on how younger generations engage with GAI.
The theoretical framework used to analyze the influence of explanatory variables based on survey data [10] is represented graphically in Figure 1. These variables fall under the Theory of Planned Behavior (TPB) [15], which has previously been used to model the acceptance and use of AI [16,17]. We identified three categories of behavioral predictors: attitude, perceived behavioral control, and subjective norms.
Attitude is defined as the degree of favor or disfavor a person assigns to performing a specific behavior [15]. In technology adoption models such as the Technology Acceptance Model [18], the main antecedent of attitude is perceived usefulness. GAI enables the automation of knowledge-based tasks and supports complex processes, such as text generation, programming, and customer service, thereby increasing efficiency and productivity in various sectors [19]. Additionally, two intrinsic user dimensions are considered formative factors of attitude. First, user innovativeness is often significant in shaping attitudes toward AI use [20]. Second, concern about losing control over personal data privacy can also shape this attitude [6].
Perceived behavioral control has been conceptualized in the literature from various perspectives, including self-efficacy and perceived control over the outcomes of an action [21,22]. In this study, perceived behavioral control is addressed through the user’s level of knowledge and understanding of how GAI functions, as adequate technical comprehension is essential for its effective use [23].
Subjective norm refers to the perceived social pressure a person feels to perform a certain behavior [15]. In the context of GAI, this study considers such pressure to be shaped by both beliefs about its societal benefits and the perceived need to establish regulations to mitigate potential risks [2].
In addition to generational identity, gender is also relevant for understanding technology adoption, as prior research has documented differences in attitudes toward innovation, perceived behavioral control, and how social influence is perceived [24]. Accordingly, gender was included in the analysis as a control variable to provide a more nuanced view of GAI adoption among Spanish digital natives.
This study adopts a hybrid methodological strategy that integrates correlational and configurational approaches to examine the determinants of GAI use. From a traditional perspective, correlational analysis is applied to quantify the link between independent variables and the dependent variable to identify patterns of direct influence [25]. However, the approach is not limited to this linear analysis; it also incorporates principles from complexity theory. This theory recognizes that in social systems, multiple causal pathways can lead to the same outcome and that the presence of an effect does not necessarily imply its reverse under opposite conditions [25,26]. This approach is particularly suitable for studying phenomena such as technology adoption, where the same decision, accepting or rejecting an innovation, may be driven by radically different combinations of factors across individuals [27].
To address this complexity, this study employs fuzzy set Qualitative Comparative Analysis (fsQCA), a technique that explores causal configurations and visualizes how different conditions interact to explain a specific outcome [26]. Owing to its capacity to capture non-linear and equifinal relationships, fsQCA has gained popularity since the mid-2010s in fields such as marketing and consumer behavior research [24].
In line with the theoretical model presented in Figure 1, this study poses two main research questions (RQs).
  • The first (RQ1) aims to measure the average effect of the explanatory variables on willingness to use GAI, addressed using a variable-oriented approach through ordinal logistic regression.
  • The second (RQ2) explores how the different factors in the conceptual model form causal combinations (paths) that affect both the adoption of and reluctance toward generative AI, analyzed through the configurational approach of fsQCA.

2. Conceptual Ground

2.1. Development of the Correlational Hypotheses of the Model

2.1.1. Hypotheses About Attitudinal Variables

Generative AI encompasses a wide range of applications that are perceived to be valuable in organizational and societal contexts. It contributes to productivity gains and informed decision-making across domains such as healthcare, education, and industry [2]. In addition, generative AI fosters innovation and enables the resolution of complex problems, thereby exerting a positive influence on the broader processes of economic and social development [5]. Within educational settings, generative AI facilitates the personalization of learning trajectories and supports more inclusive pedagogical practices [3,28].
From a sustainability perspective, GAI can optimize processes, minimize waste, and expand access to essential services, such as healthcare and transportation [29]. It also provides indirect ecological benefits by replacing physical travel and manual tasks with digital solutions [30]. However, its implementation involves potentially significant environmental costs related to energy consumption, carbon emissions, and the use of critical materials for data infrastructure [31]. Beyond these environmental considerations, it is also important to acknowledge significant risks, particularly the tendency of generative AI systems to produce responses that are syntactically plausible yet factually inaccurate, a phenomenon often referred to as hallucination [32].
The perception of these benefits and costs influences the acceptance of GAI among employees [33], consumers [20], and users in academic environments [23,34,35,36]. Therefore, if we name perceived usefulness USEFUL, we propose the following:
Hypothesis 1 (H1).
USEFUL has a positive link with the use of GAI.
On the other hand, an individual’s willingness to experiment with new technologies, referred to as innovativeness (INNOV), is a determining factor in how technological innovations are perceived and adopted [37]. Thus, GAI is more readily accepted when it is perceived as an augmentative tool [38]. Moreover, it is reasonable to expect that individuals with higher levels of innovativeness, who tend to experience greater digital well-being, will also report higher well-being in their interactions and coexistence with AI [39].
Individuals characterized by high levels of INNOV generally perceive emerging technologies as more useful, accessible, and compatible with their existing practices, thereby increasing the likelihood of adoption [40]. When faced with the same technology, innovative individuals develop more favorable beliefs about its use [37]. This relationship has been confirmed in both the context of general artificial intelligence [20,41] and specifically in relation to GAI [42]. Therefore, we propose:
Hypothesis 2 (H2).
INNOV has a positive link with the use of GAI.
The use of Internet-based technologies presents significant challenges for personal data protection [43]. In the case of GAI, these risks are especially pronounced as users are exposed to vulnerabilities inherent to the online environment. These concerns are exacerbated by intrinsic AI features, such as massive data collection, limited oversight of data reuse, and algorithmic opacity, which intensify both social concern and regulatory demands [44].
GAI operates by processing large volumes of data, much of which are sensitive. This creates risks, such as unauthorized identification of individuals or automated decision-making, that may be unfair, biased, or difficult to explain [9]. Of particular concern is the potential for unauthorized third-party access, whether internal or external, which compromises the fundamental principles of privacy and autonomy and can result in severe consequences, such as fraud or identity theft [45].
Empirical evidence regarding the impact of privacy concerns (PRI) on the acceptance of GAI is mixed. While some studies suggest that a heightened perception of risk inhibits the use of these technologies [41,46], others, such as [47], find no statistically significant relationship, at least in educational contexts. In any case, we propose:
Hypothesis 3 (H3).
PRI is negatively associated with the use of GAI.

2.1.2. Hypothesis About the Variable Related to Behavioral Control

Prior experience or knowledge (KNOWL) of a technology enhances users’ self-efficacy, which directly influences perceived ease of use, a central determinant of acceptance [48]. This ease of use not only promotes the intention to use the technology but also increases its perceived usefulness [49].
In the case of AI, prior knowledge may foster perceptions of greater usefulness and lower risk [50]. This is supported by studies showing higher acceptance of AI among healthcare science students [51,52]. Furthermore, in educational contexts, self-confidence in using GAI tools has been shown to significantly increase acceptance [17,23,34]. Therefore, we state:
Hypothesis 4 (H4).
KNOWL is positively associated with the use of GAI.

2.1.3. Hypotheses About Subjective Norm Variables

Subjective norm, or social influence, is understood in the context of technological acceptance as the perceived social desirability of using an innovation [49]. This perception has been widely documented as a relevant factor in the adoption of both AI in general [20,53] and GAI in different domains, such as society [54], industry [33], and academia [55,56]. Within this dimension, we integrate two perceptions: that GAI generates social performance (SPER) and that its implications need regulation (NREG).
Generative AI is reshaping domains ranging from routine activities to advanced intellectual processes [2,5]. However, it also raises concerns about possible overreliance on automated responses, which could weaken critical thinking and promote uncritical or outsourced creativity, particularly in educational settings [4,7,57].
From an ethical and cultural perspective, GAI simultaneously represents opportunities and risks. It can democratize access to cognitive and linguistic resources but also replicate biases by being trained on data dominated by certain cultural frameworks, thus affecting the fair representation of values, rights, and ways of life [6]. Likewise, its use facilitates socially undesirable practices such as the creation and spread of fake news [57].
Economically, GAI could exacerbate inequality; while skilled workers may see wage gains, less-skilled workers are at greater risk of displacement [58]. Assessing sustainability cannot be limited to efficiency or carbon footprint, but must include social, ecological, and intergenerational justice dimensions [31]. In fact, the environmental impact of GAI has a nonlinear effect, such that when used appropriately, it can have positive environmental consequences [59], but beyond a certain threshold, it presents a negative global effect [31]. Therefore, we postulate the following:
Hypothesis 5 (H5).
SPER has a positive link with the use of GAI.
Conversely, the ethical, social, and legal risks associated with AI have generated a growing public perception of the need to regulate (NREG) these technologies, a concern that has been analyzed extensively in the literature. Concerns range from privacy and misinformation to safety and algorithmic bias [8,60]. This has generated distrust in some social sectors and slowed automation projects owing to the lack of technical transparency [44]. Another relevant aspect is the perception among various stakeholders that GAI may function as a substitutive tool [38].
As a result, social and political pressure has intensified to develop regulatory frameworks that ensure transparency, fairness, data protection, and human rights [61]. The academic community has also stressed the urgency of adopting measures to audit these models and reduce their structural risks [6]. Thus, we propose:
Hypothesis 6 (H6).
NREG is negatively associated with the use of GAI.

2.1.4. Influence of Sociodemographic Variables

The influence of demographic factors such as age, work experience (closely associated with age), and gender on technology adoption has been extensively documented in theoretical frameworks such as the Unified Theory of Acceptance and Use of Technology (UTAUT) and its extensions [62,63]. In this study, we included two factors: identifying as female (FEM) and belonging to Generation Z (GENZ).
Gender differences have been widely documented in the adoption of emerging technologies, although the evidence remains somewhat inconsistent. Research shows that women and men differ not only in their engagement with technology, but also in their self-perceptions, with women often considering themselves less capable. Gender role beliefs—namely, societal expectations that women are less interested in or proficient with technology—have been identified as a key explanatory factor for these differences. Such perceptions may foster negative experiences and reinforce feelings of uncertainty among women [24].
In the case of GAI, preliminary studies likewise suggest that women may exhibit greater sensitivity to ethical and privacy concerns, which can reduce their inclination to adopt such tools compared with men [64]. This pattern also extends to self-perceived competence: men are more likely to rate their AI skills positively and display greater trust in technological outputs, whereas women often demand more conceptual clarity, transparency regarding how the technology functions, and concrete examples before accepting its use [65]. Recent evidence further indicates that women tend to experience greater difficulty than men in distinguishing AI-generated from human-written texts [66], which may heighten skepticism and reinforce adoption gaps. Taken together, these insights provide a rationale for anticipating gender differences in GAI adoption. Given the novelty of this technology and the limited availability of systematic research, the following hypothesis should be regarded as exploratory and open to further empirical validation:
Hypothesis 7 (H7).
Females have a lower propensity to use GAI than males.
The adoption and perception of AI, especially GAI, may differ notably among generations. Zoomers, who have grown up in a highly connected digital environment, tend to display more favorable attitudes toward adopting emerging technologies. They value their potential to improve productivity, personalize learning, and simplify daily tasks and are generally more willing to experiment with these tools in educational and creative contexts [14]. In contrast, although older generations, including Millennials, recognize the potential benefits of GAI, they tend to adopt a more cautious stance. Their concerns focus on ethical risks, potential dehumanization of processes, and the need for regulatory frameworks to ensure responsible use [14,67]. Therefore, we propose:
Hypothesis 8 (H8).
Zoomers have a higher propensity to use GAI than Millennials.

2.2. Development of Configurational Laws on the Use of Generative AI (GAIU)

No single user profile exists in terms of technology. Rogers [68] identifies five categories based on willingness to adopt innovations: innovators, early adopters, early majority, late majority, and laggards. For instance, while innovators are motivated by experimentation, the early majority adopt technology after observing its effectiveness among other users. However, this is not the only typological approach. Birkland [69] proposed five profiles: enthusiasts, pragmatists, socializers, traditionalists, and gatekeepers. In the context of blockchain applications, Tankovic et al. [70] identified four types of digital natives: innovators, cautious, skeptics, and suspicious.
The classification of non-users has also expanded, recognizing that non-adoption is not always due to a lack of access, but can result from informed and situational decisions. Ethical, social, or functional factors, such as privacy concerns, lack of skills, cultural rejection, or perceived irrelevance, influence non-use [71]. Gauttier [72] distinguished four profiles: resisters, rejecters, expelled, and excluded. Additionally, there are substitute users (dependent on third parties) and convertibles (potential users) [71].
In this regard, both the use and non-use of GAI can stem from diverse pathways resulting from multiple combinations of conditions. For example, a pragmatic or early adopter profile might be associated with perceptions of usefulness and knowledge, while an innovator profile would be linked to a propensity for experimentation, even without clear utility. From this perspective, complexity theory and the fsQCA method allow for the identification of diverse configurations that traditional correlational analyses do not capture [26].
Moreover, attitudes toward technology tend to be asymmetric; acceptability does not always lead to adoption, but unacceptability can result in rejection [72]. In the field of GAI adoption, the reasons for its use are not symmetrical with those cited for its rejection [19]. In this context, fsQCA is particularly well-suited for analyzing non-symmetric causal relationships, although it can also address symmetric situations [73].
Unlike hypothetico-deductive approaches, fsQCA does not formulate traditional hypotheses but rather “soft laws,” that is, propositions based on combinations of causal conditions [74]. Instead of focusing on linear relationships, it identifies the causal patterns that produce a specific outcome.
Based on this approach, the following propositions are formulated regarding the paths leading to the use and non-use of GAI (Figure 2):
Proposition 1 (P1).
In the configurations that precede the use of GAI, the predominant conditions are the presence of USEFUL, INNOV, KNOWL, SPER, and GENZ, and the absence of PRI, NREG, and FEM.
Proposition 2 (P2).
In the configurations that precede the non-use of GAI, the predominant conditions are the absence of USEFUL, INNOV, KNOWL, SPER, and GENZ, and the presence of PRI, NREG, and FEM.
Proposition 3 (P3).
The configurations associated with the use and non-use of GAI are not symmetrical.

3. Material and Data Analysis

3.1. Sampling

The study was based on a large-scale national survey carried out in Spain, designed to be representative of the adult population aged 18 years and above. A total of 4004 valid interviews were collected using random selection of telephone numbers (17.4% landline, 82.6% mobile), with quotas by gender, age, and stratification across Spain’s 17 autonomous communities and two autonomous cities. Fieldwork was conducted between February 6 and February 15, 2025.
For the purposes of the present research, we focused exclusively on digital natives, defined here as individuals born from 1980 onward, that is, aged 18 to 45 at the time of the survey. Respondents aged 46 or older were excluded. In addition, participants who reported never having heard of artificial intelligence were removed because their responses would not be meaningful for the analysis of GAI adoption. After applying these criteria, the final analytical sample consisted of 1533 valid cases.
Using G*Power 3.1 [75], it was verified that the available sample size provided 80% statistical power for a significance level of 5% and an effect size of 0.01, in the context of a linear regression with eight explanatory variables. Furthermore, for the test of individual coefficient significance, effect sizes of 0.01 offered 99% power at the same significance level.
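The power figures reported above can be approximated with a short script, under the assumption that G*Power's noncentrality convention for multiple regression (λ = f² · N) applies. The sketch below is illustrative rather than the authors' script; the sample size, significance level, effect size, and number of predictors are taken from the text.

```python
# Hedged sketch of the power check, assuming lambda = f^2 * N for the F test
# on R^2 (or on a subset of coefficients) in a linear regression.
from scipy.stats import f as f_dist, ncf

def regression_power(n, n_predictors, f2, alpha=0.05, df_num=None):
    """Power of the F test for R^2 (df_num=None) or for a coefficient subset."""
    u = n_predictors if df_num is None else df_num   # numerator degrees of freedom
    v = n - n_predictors - 1                         # denominator degrees of freedom
    lam = f2 * n                                     # noncentrality parameter
    f_crit = f_dist.ppf(1.0 - alpha, u, v)           # critical value under H0
    return 1.0 - ncf.cdf(f_crit, u, v, lam)          # P(reject H0 | H1)

# Overall model with 8 predictors, and a single coefficient, both at f^2 = 0.01
print(regression_power(n=1533, n_predictors=8, f2=0.01))            # roughly 0.8
print(regression_power(n=1533, n_predictors=8, f2=0.01, df_num=1))  # close to 0.99
```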

3.2. Sociodemographic Profile

Table 1 shows the participants’ sociodemographic characteristics. Regarding gender, 42.60% of the participants were men, and 57.40% were women. In terms of age, 294 participants (19.18%) were up to 25 years old, 562 (36.66%) were between 26 and 35 years old, and 677 (44.16%) were between 36 and 45 years old. The mean age of the sample was 33.51 years old, (standard deviation, 7.62 years). The vast majority of respondents (88.91%, or 1363 individuals) reported only Spanish nationality, 5.28% held dual nationality, and 5.81% did not have Spanish nationality.
Regarding educational level, 98 individuals (6.39%) had no more than a primary education, 551 (35.94%) had completed secondary education, and 884 (57.66%) had attended university. In terms of household income, the majority were concentrated in intermediate ranges: 32.88% reported a monthly income between €3001 and €6000, followed by 32.16% with an income between €1801 and €3000. A total of 20.94% reported an income between €900 and €1800 per month, while 4.96% reported less than €900. Only 5.81% reported an income of >€6000 per month. Finally, 3.26% of the respondents did not answer this question.

3.3. Measurement of Variables

The questionnaire was designed and administered by the Spanish Center for Sociological Research (CIS), which follows standardized procedures for survey design, pre-testing, and validation in its national studies. Different measurement formats were employed depending on the nature of each construct: attitudinal variables, such as perceived usefulness or privacy concerns, were measured through Likert-type scales, while factual knowledge and demographic variables were captured with categorical or dichotomous items. The specific details of items and scales are provided in Table 2.
To ensure measurement quality, we assessed both internal consistency and convergent validity. Specifically, items were subjected to exploratory factor analysis, requiring loadings above 0.6 and an average variance extracted (AVE) of at least 0.5, supported by Bartlett’s test of sphericity. Reliability was confirmed as Cronbach’s alpha exceeded 0.6 in all cases [76], and composite reliability was above the 0.7 threshold [77].
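The quality criteria just listed can be computed with standard formulas. The following sketch, assuming a hypothetical item matrix for one construct, derives Bartlett's test of sphericity, the loadings of the first principal component, the AVE, Cronbach's alpha, and composite reliability; it is illustrative and not the authors' code.

```python
# Illustrative quality checks for one multi-item construct.
import numpy as np
from scipy.stats import chi2

def scale_quality(items: np.ndarray):
    """items: (n_respondents, n_items) matrix for a single construct."""
    n, k = items.shape
    R = np.corrcoef(items, rowvar=False)

    # Bartlett's test of sphericity: H0 = correlation matrix is the identity
    stat = -(n - 1 - (2 * k + 5) / 6) * np.log(np.linalg.det(R))
    p_value = chi2.sf(stat, k * (k - 1) / 2)

    # Loadings of the first principal component of the correlation matrix
    eigval, eigvec = np.linalg.eigh(R)                 # ascending eigenvalues
    loadings = np.sqrt(eigval[-1]) * np.abs(eigvec[:, -1])

    ave = np.mean(loadings ** 2)                       # average variance extracted
    alpha = (k / (k - 1)) * (1 - items.var(axis=0, ddof=1).sum()
                             / items.sum(axis=1).var(ddof=1))   # Cronbach's alpha
    cr = loadings.sum() ** 2 / (loadings.sum() ** 2 + (1 - loadings ** 2).sum())

    return {"bartlett_p": p_value, "loadings": loadings,
            "AVE": ave, "alpha": alpha, "CR": cr}
```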
As for the response variable, respondents indicated how frequently they used various GAIs (such as ChatGPT and Gemini) on a 6-point scale (ranging from 0 to 5). The attitudinal explanatory variables composed of multiple items (USEFUL and INNOV) differ in format: the former is measured on a 3-point scale, while the latter uses a 10-point scale. The remaining attitudinal variable, PRI, and the behavioral-control variable, KNOWL, were each measured using a single item on a 10-point Likert scale.
Subjective norm variables were also multi-item constructs. SPER consists of three items evaluated on a 3-point scale, while NREG comprises five items answered on a 5-point scale. Finally, Generation Z membership is determined based on age, and gender is expressed in binary form.

3.4. Data Analysis

3.4.1. Analysis of Research Question 1

The analysis corresponding to Research Question 1, which aims to test the hypotheses developed in Section 2.1 (from H1 to H8), involves the use of regression methods. As shown in Table 3, the outcome variable GAIU is measured on a 6-point scale that captures usage frequency, with values ranging from 0 (never) to 5 (daily). Given its ordinal nature, the use of ordinal logistic regression is justified for modeling.
As indicated in Table 3, the input variables composed of multiple items (USEFUL, INNOV, SPER, and NREG) were quantified using the standardized value of the first principal component extracted through varimax rotation. To justify this extraction, factor loadings above 0.6 were required for all items, and the average variance extracted and Bartlett’s test of sphericity were examined. Internal consistency and convergent reliability were also evaluated.
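As an illustration of this scoring step, the sketch below extracts a standardized first-principal-component score for a multi-item construct. The item matrix is hypothetical; with a single retained component, a varimax rotation leaves the solution unchanged, so plain PCA on the standardized items is used here.

```python
# Illustrative first-component score for a construct such as USEFUL or NREG.
import numpy as np

def first_component_score(items: np.ndarray) -> np.ndarray:
    """items: (n_respondents, n_items). Returns a standardized component score."""
    z = (items - items.mean(axis=0)) / items.std(axis=0, ddof=1)   # z-score the items
    _, _, vt = np.linalg.svd(z, full_matrices=False)               # right singular vectors
    score = z @ vt[0]                                              # projection on the 1st component
    return (score - score.mean()) / score.std(ddof=1)              # standardize the score
```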
Additionally, as shown in Table 3, for single-item variables measured on multi-point scales (PRI and KNOWL), standardized values were used. The gender variable (FEM) is dichotomous, taking a value of one if the respondent identifies as a woman. Age was operationalized through generational membership, specifically belonging to Generation Z. Given that generational boundaries are blurred [11], individuals aged 25 or younger were considered full members of Generation Z, and those older than 35 were classified as Generation Y. For individuals aged between 25 and 35, Generation Z membership was modeled linearly, declining from full membership at age 25 to a value of 0 at age 35.
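The sketch below summarizes the remaining predictor coding and the ordered-logit fit for RQ1, assuming a pandas DataFrame with the variable names used in the text. statsmodels' OrderedModel is used as a stand-in for the authors' software, and all names are illustrative rather than taken from their code.

```python
# Hedged sketch: Generation Z coding and a proportional-odds logit for GAIU.
import numpy as np
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

def genz_membership(age: float) -> float:
    """Generation Z membership: 1 for ages <= 25, 0 for ages >= 35, linear in between."""
    return float(np.clip((35.0 - age) / 10.0, 0.0, 1.0))

PREDICTORS = ["USEFUL", "INNOV", "PRI", "KNOWL", "SPER", "NREG", "FEM", "GENZ"]

def fit_ordered_logit(df: pd.DataFrame):
    """Fit GAIU (ordinal, 0-5) on the eight predictors with an ordered logit."""
    res = OrderedModel(df["GAIU"].astype(int), df[PREDICTORS], distr="logit").fit(
        method="bfgs", disp=False)

    odds_ratios = np.exp(res.params[PREDICTORS])        # exp(beta) for each predictor

    # McFadden's pseudo R^2 against a thresholds-only model, whose log-likelihood
    # equals sum(n_j * log(n_j / N)) over the outcome categories.
    counts = df["GAIU"].value_counts()
    ll_null = float((counts * np.log(counts / len(df))).sum())
    pseudo_r2 = 1.0 - res.llf / ll_null
    lr_stat = 2.0 * (res.llf - ll_null)                  # likelihood-ratio statistic
    return res, odds_ratios, pseudo_r2, lr_stat
```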

3.4.2. Analysis of Research Question 2

The analysis of RQ2 requires the use of fsQCA, which involves calibrating the variables included in the analysis using membership functions. Accordingly, we generically denote the membership function of variable X as m_X. The procedure for adjusting the membership functions is detailed in Table 3.
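As an illustration of this calibration step, the sketch below implements a Ragin-style direct calibration with three hypothetical anchors (full non-membership, crossover, full membership); the anchors actually applied to each variable are those reported in Table 3, so this should be read as a generic template rather than the study's exact procedure.

```python
# Generic direct calibration: raw values -> fuzzy membership in [0, 1] via
# log-odds scaled to approximately +/-3 at the outer anchors.
import numpy as np

def calibrate(x, non_member, crossover, full_member):
    """Map raw values to membership scores given three calibration anchors."""
    x = np.asarray(x, dtype=float)
    log_odds = np.where(
        x >= crossover,
        3.0 * (x - crossover) / (full_member - crossover),
        3.0 * (x - crossover) / (crossover - non_member),
    )
    return 1.0 / (1.0 + np.exp(-log_odds))

def negate(membership):
    """Negation used in the analysis: m_notX = 1 - m_X."""
    return 1.0 - np.asarray(membership, dtype=float)
```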
The answer to this research question was determined following the protocol outlined by [73] with the help of fsQCA 3.1 software [78]. This implies the following steps.
1. We analyzed the necessary-condition status of the presence and absence of each explanatory factor for the use and non-use of GAI. The presence of variable X is measured by its membership function m_X; its absence, denoted ¬X, is measured by m_¬X = 1 − m_X. Thus, while the membership function of GAIU is denoted m_GAIU, non-use of GAI (¬GAIU) has a membership degree of m_¬GAIU = 1 − m_GAIU.
2. We performed an analysis of sufficient conditions. For this assessment, it is necessary to construct recipes (also referred to in the literature as prime implicates or configurations) that make up the intermediate solution (IS) and the parsimonious solution (PS) for both GAIU and ¬GAIU. These recipes are interpreted as antecedents, pathways, or profiles linked to adherence or reluctance to GAI. The prime implicates of the IS are obtained using assumptions about the presence or absence of the exogenous variables in GAIU and ¬GAIU, based on the hypotheses developed in Section 2.1.
3. We present the sets of prime implicates for GAIU and ¬GAIU (i.e., their intermediate solutions) and interpret them. We distinguish between core conditions, which appear simultaneously in the IS and PS, and peripheral conditions, which appear only in the IS recipes. The former function as strong causes and the latter as weaker causes [25].
4. The measures of consistency (CON) and coverage (COV) allow the assessment of the explanatory power of the IS and of each individual prime implicate. Consistency quantifies the significance of a prime implicate or overall solution, with desirable values above 0.8. Coverage indicates empirical relevance and can be interpreted as a measure of effect size [26]; a computational sketch of both measures is given below.
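The following sketch expresses the consistency and coverage measures used in steps 1 and 4 over fuzzy membership vectors. The formulas follow the standard fsQCA definitions; the function names are illustrative.

```python
# Consistency and coverage from fuzzy membership scores. x is membership in a
# condition (or in a configuration, i.e., the minimum across its conditions);
# y is membership in the outcome (GAIU or its negation).
import numpy as np

def consistency(x, y):
    """Sufficiency consistency of X for Y: sum(min(X, Y)) / sum(X)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    return np.minimum(x, y).sum() / x.sum()

def coverage(x, y):
    """Coverage of Y by X: sum(min(X, Y)) / sum(Y)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    return np.minimum(x, y).sum() / y.sum()

def necessity_consistency(x, y):
    """Necessity consistency (step 1): sum(min(X, Y)) / sum(Y)."""
    return coverage(x, y)
```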

4. Results

4.1. Descriptive Statistics and Response to Research Question 1

The descriptive statistics of the items included in the study are presented in Table 4. The analysis revealed that ChatGPT emerged as the most frequently used generative AI application among the respondents. The average of the aggregated variable measuring the frequency of use of all GAIs (GAIU) was 2.66, which, according to the scale in Table 2, corresponds to usage between “several times a month” and “several times a week.”
Items associated with perceived usefulness indicate a predominantly negative assessment of GAI’s impact on the labor market (mean of 1.76, below the neutral value of 2). In contrast, more positive than negative impacts are perceived in the areas of the environment, medicine and health, and the economy. Regarding the INNOV and KNOWL dimensions, mean values fell below the neutral point of 5.5, and in the case of INNOV, no item even reached a value of 5.
Concerns about Internet privacy are notably high, with a mean of 3.75 on a scale with a maximum of 4. However, the average evaluation of the items that capture perceived positive externalities for society (SPER) does not exceed a value of two out of three in any case. In contrast, the items related to NREG showed values above 4, indicating majority support for regulating the development and use of AI.
Regarding the psychometric quality of the scales, the item loadings for USEFUL, INNOV, SPER, and NREG all exceed the 0.6 threshold, and the average variance extracted by the first component surpassed 50% for INNOV, SPER, and NREG. In the case of USEFUL, the explained variance is slightly lower at 47.70% but remains very close to the desired threshold. For all scales, Bartlett’s test rejects the hypothesis that the correlation matrix is the identity matrix, supporting the suitability of the exploratory factor analysis. Accordingly, the scales demonstrate adequate convergent reliability. Likewise, the internal consistency of the scales can be considered acceptable, as Cronbach’s alpha is above 0.6 in all cases [76], and composite reliability exceeds 0.7 [77].
Table 5 presents the results of ordinal logistic regression. McFadden’s pseudo R2 was 0.13, indicating an acceptable model fit [79]. Additionally, the model was statistically significant, with a likelihood ratio of 695.34 (p < 0.0001).
To evaluate the hypotheses proposed in Section 2.1, odds ratios (ORs) were analyzed in relation to the reference value of 1. For USEFUL, the OR was 1.461 (p < 0.0001), and for INNOV, the OR was 1.364 (p < 0.0001), supporting the acceptance of H1 and H2. Regarding PRI, the OR was 0.981 (p = 0.6965); although the direction of the relationship between PRI and GAIU was as postulated, it was not statistically significant. For KNOWL, the OR was 2.547 (p < 0.0001), thus supporting H4.
Regarding the subjective norm variables, only NREG was statistically significant (OR = 0.840, p = 0.0012). Among the sociodemographic variables, although the OR for FEM suggests that men may have a higher propensity to use GAI, this result was not significant. In contrast, Zoomers showed a significantly greater inclination to use GAI than did Millennials (OR = 1.739, p < 0.0001).

4.2. Analytical Outcomes for Research Question 2

Table 6 presents the results of necessity analysis. It can be observed that no variable reached the threshold to be considered a necessary condition for the occurrence of GAIU, as none of the consistency values exceed the 0.9 threshold.
As expected, the consistency of USEFUL, INNOV, KNOWL, SPER, and GENZ in explaining GAIU was higher than their respective negations. Likewise, the consistency of ¬USEFUL, ¬INNOV, ¬KNOWL, ¬SPER, and ¬GENZ in explaining non-use was higher than the consistency of their presence. Both findings are consistent with the hypothesized positive relationship between USEFUL, INNOV, KNOWL, SPER, and GENZ, and the use of GAI.
Similarly, the consistency of ¬PRI, ¬NREG, and ¬FEM in explaining GAIU was greater than that of their affirmative versions. Conversely, PRI, NREG, and FEM showed higher consistency than their respective negations when analyzing the non-use of GAI. These results are consistent with the hypothesized negative relationship between PRI, NREG, and FEM and GAI use.
Figure 3 shows that the consistency (CON) of the solution for GAI use was 0.867 and its coverage (COV) was 0.505. A total of 17 configurations were identified, with every variable appearing as a condition in at least eight prime implicates.
The roles of USEFUL, INNOV, KNOWL, GENZ, NREG, and FEM as conditions are consistent with Proposition P1. For variables expected to have a positive relationship with GAI use (USEFUL, INNOV, KNOWL, and GENZ), their presence is required in most configurations in which they appear as conditions. Specifically, USEFUL and KNOWL appear exclusively in their affirmative forms in the configurations in which they are present (11 and 13 times, respectively). In the case of INNOV and GENZ, their presence is also considerably more frequent than their absence: INNOV appears as a present condition in eight configurations and is absent in only one, while GENZ appears 10 times as present and twice as absent.
The inclusion of NREG and FEM as conditions in the GAIU configurations is also consistent with P1. FEM appears negated in all configurations in which it is included (13 times, 12 of them as a core condition), indicating that these paths require the absence of this hypothesized inhibiting factor. NREG appears as a negated core condition in six configurations and as a negated peripheral condition in one, while its affirmative presence appears only once, as a core condition.
In contrast, the roles of PRI and perceived social performance (SPER) in the GAIU configurations contradict Proposition P1 and the hypothesized direction of their relationship with GAI use, meaning that P1 is only partially supported. PRI appears as a condition in 11 configurations, but in nine of them its presence is required and in only two is its absence relevant. SPER appears in nine prime implicates, and in the majority (five) its absence, rather than its presence, constitutes a core condition.
Figure 4 presents the intermediate solution for the non-use of GAI (¬GAIU). Its consistency is CON = 0.857 and its coverage COV = 0.439. Eight configurations were identified, with every variable functioning as a condition in at least four prime implicates. Furthermore, there are no peripheral conditions; all are core conditions.
The role of all variables aligns with Proposition 2 (P2), which is therefore fully supported. For the variables expected to have a positive relationship with GAI use (USEFUL, INNOV, KNOWL, SPER, and GENZ), their absence consistently appears in all paths leading to ¬GAIU in which they participate. This occurred four times for ¬USEFUL, six times for ¬INNOV, eight times for ¬KNOWL, five times for ¬SPER, and six times for ¬GENZ (i.e., belonging to Generation Y).
As for the variables expected to have a negative relationship with GAI use (PRI, NREG, and FEM), their presence in the ¬GAIU configurations is more common than their absence. In the case of FEM (being female), whenever this variable appears as a condition, its presence is always required (six times). Regarding PRI and NREG, in most cases they also appear as affirmative conditions: PRI appears in six prime implicates with required presence and in two with required absence; NREG is part of five prime implicates, four requiring its presence and only one requiring its absence.
Figure 3. Intermediate solution of use of GAI.
Figure 4. Intermediate solution of non-use of GAI.
The comparison between Figure 3 and Figure 4 allows us to accept Proposition 3 (P3), because the configurations associated with use (GAIU) and non-use (¬GAIU) are clearly asymmetric. A total of 17 configurations were identified for GAIU and 8 for ¬GAIU. Moreover, virtually no path toward GAIU has a corresponding counterpart in ¬GAIU, and vice versa. For example, the first configuration for GAIU (KNOWL•¬NREG•¬FEM•GENZ) has no opposite equivalent in ¬GAIU (¬KNOWL•NREG•FEM•¬GENZ).
However, while some variables act symmetrically as conditions in both solutions (GAIU and ¬GAIU), others do not. For example, the variable KNOWL (knowledge) displays fairly symmetrical behavior: it is affirmed in the GAIU configurations in which it appears and negated in those for the non-use of GAI. In contrast, the variable PRI does not exhibit such symmetry, as it tends to appear affirmed in paths to both GAIU and ¬GAIU.

5. Discussion

5.1. General Considerations

In the context of rapid technological expansion, understanding the drivers and barriers to the use of generative artificial intelligence (GAI) is crucial, not only for designing effective interventions but also for theorizing emerging patterns of technology adoption. This study provides relevant empirical evidence on the factors explaining the adoption of GAI among young adults in Spain—specifically Millennials and Zoomers—integrating correlational and configurational approaches within the framework of the Theory of Planned Behavior (TPB).
This study seeks to explain the use of GAI (GAIU) based on the average influence of variables such as the perceived usefulness of GAI (USEFUL), respondent’s innovative orientation (INNOV), privacy concerns (PRI), level of knowledge about GAI (KNOWL), perceived social performance (SPER), and perception of the need for regulation (NREG). Sociodemographic variables such as gender (FEM, coded as female) and belonging to Generation Z (GENZ) were also included.
The first research question (RQ1) aimed to quantify the average influence of these explanatory variables on GAIU, for which ordered logistic regression was applied. The results show that the variables USEFUL, INNOV, KNOWL, and GENZ have a positive and statistically significant relationship with GAIU, while NREG shows a significant negative relationship, in line with the theoretical expectations.
The positive relationship between perceived usefulness (USEFUL) and the acceptance and use of AI—particularly GAI—has been widely documented in the reviewed literature [20,23,33,34,35,36]. The positive relationship between INNOV and GAIU also finds support in the literature, both in relation to AI broadly [20,41] and GAI specifically [42]. The lack of statistical significance of the influence of PRI on GAIU is consistent with previous findings [47], which also found no statistically significant relationship in the educational context.
The control variable KNOWL appears to be the most relevant variable in explaining GAI use, both in the healthcare sector [51,52] and educational contexts [17,23,34].
The fact that the perception of the need for GAI regulation negatively influences its use suggests that how society and the environment perceive GAI affects its adoption. The relevance of social influence on AI use has been reported for both general AI [20,53] and GAI specifically [33,54,55,56]. Additionally, the greater predisposition of GENZ to GAI aligns with the reviewed literature [14,67].
The fsQCA results identified a plurality of causal paths leading to both the use and non-use of GAI. Seventeen configurations explain GAI use and eight explain non-use, highlighting the asymmetry between causal logics. This approach shows that there is no single path to technology acceptance but rather a diverse combination of conditions that act together to facilitate or hinder the use of GAI. These findings can be explained by the existence of different user profiles [68,69,70] and nonuser profiles [71,72]. This complexity is amplified by the fact that these theoretical profiles are ideal types that rarely appear in pure form in practice but rather as diverse combinations.
Through fsQCA, we gain additional insights beyond the regression model into how the explanatory variables contribute to both the use and non-use of GAI. The positive influence postulated for USEFUL, INNOV, KNOWL, and GENZ and the negative influence of NREG on GAIU are confirmed by their roles as conditions in the GAIU and ¬GAIU configurations. USEFUL, INNOV, KNOWL, and GENZ are typically required to be present in use configurations and absent in non-use configurations. Conversely, NREG tends to be absent in GAIU configurations and present in ¬GAIU configurations.
We also observe that the statistical insignificance of PRI can be attributed to its presence being affirmed in both GAIU and ¬GAIU configurations in which it is a condition. Its role in ¬GAIU configurations aligns with the negative correlation postulated in H3. However, its presence in GAIU configurations suggests a positive relationship, effectively neutralizing the overall effect. In various GAIU configurations, there are user profiles in which knowledge of GAI and concern for online privacy, which may be mitigated by digital literacy, go hand in hand. Examples include GAIU = USEFUL•PRI•KNOWL•¬FEM•GENZ and GAIU = PRI•KNOWL•¬FEM•GENZ. This finding helps reconcile the position of authors who argue that privacy concerns inhibit AI use [41,46] with that of others who find no significant relationship [47].
The statistical insignificance of SPER in GAIU is due to it being mostly negated as a condition in both use and non-use configurations. While its negation in most ¬GAIU configurations aligns with hypothesis H5 (positive correlation), its similar role in GAIU configurations contradicts this hypothesis.
For Spanish digital natives, the results carry specific generational implications beyond statistical confirmation. The positive effects of perceived usefulness and innovativeness show that this cohort does not view GAI merely as a neutral technological tool but rather as a means of enhancing creativity, learning, and employability. In a labor market marked by high youth unemployment and precarious job conditions, these attributes are especially salient, as they align with young Spaniards’ aspirations to gain a competitive edge and improve career prospects. Knowledge also highlights the central role of digital literacy in shaping adoption, suggesting that access to training and skill development may be decisive in fostering responsible and productive use.
At the same time, the discouraging effect of regulatory concerns reflects a critical awareness of institutional and governance issues surrounding GAI in Spain. This suggests that young users are not unconditionally enthusiastic but remain attentive to broader societal debates about regulation, accountability, and ethical risks. The limited influence of privacy and social performance can be interpreted through their extensive immersion in digital ecosystems, where privacy trade-offs are often normalized and peer influence exerts less pressure than pragmatic assessments of utility. Taken together, these findings portray Spanish digital natives as both pragmatic and critical adopters, motivated by the opportunities GAI provides while remaining aware of its potential risks. This dual stance opens new avenues for research on how cultural, economic, and policy contexts shape generational engagement with emerging technologies and invites comparative analyses across countries facing similar but not identical conditions.
Beyond these average effects, it is important to stress that Spanish digital natives are far from homogeneous in their adoption or rejection of GAI. While regression analysis offers insights into the average contribution of each explanatory factor, the configurational approach reveals that multiple distinct patterns can lead to the same outcome, whether adoption or non-adoption. This shows that there is no single linear path toward embracing or rejecting GAI but rather diverse combinations of drivers and barriers. Such heterogeneity underscores the value of adopting both correlational and configurational perspectives, as it enables researchers to capture the complexity of generational behavior and the coexistence of different logics of adoption within the same cohort.

5.2. Implications for Theory and Practice

The first theoretical implication is that a TPB-based analytical framework can provide deep insights by combining correlational (in this case, ordered logistic regression) and configurational tools (fsQCA). While the former quantifies the average influence of each variable on GAIU, the latter reveals how these variables combine in the sample to explain both use and non-use.
These findings have direct implications for policymakers, educators, technology developers, and social actors. These implications extend beyond the promotion of GAI adoption, emphasizing the need for responsible and ethical use.
First, the evidence suggests that increasing knowledge about GAI is one of the most effective strategies to foster meaningful engagement. This implies integrating specific AI and GAI training into educational curricula, especially at the university and vocational levels, and developing digital literacy programs that target young adults. Importantly, such literacy should not only focus on technical skills, but also cultivate ethical awareness and critical thinking, equipping students to recognize biased or inaccurate outputs and to use generative AI responsibly. For instance, universities could integrate GAI ethics into media literacy courses, while vocational training centers might organize workshops in which students critically compare AI-generated outputs to human work.
Second, innovation orientation has emerged as a key driver. Strengthening the link between GAI and innovation could foster favorable predispositions toward adoption, but this should be accompanied by transparency regarding limitations and risks to avoid unrealistic expectations.
Third, although the privacy risk was not statistically significant, its presence in some fsQCA inhibitory configurations suggests that it should not be ignored. Transparency in data use and system logic is crucial for building trust. Technology developers should implement explainability and data-control mechanisms to reduce resistance, particularly among ethically sensitive groups. Companies can implement transparency dashboards that show users the data sources and decision rules behind the generated outputs, directly addressing concerns about explainability.
An additional salient finding is that perceptions of the necessity of regulation exert an inhibitory effect on adoption. This suggests that some perceive the absence of regulation as a risk that can hinder adoption. Policymakers should therefore prioritize clear and trustworthy governance frameworks that reduce uncertainty while safeguarding against misuse, bias, and a lack of accountability. Communicating existing or developing regulatory frameworks is key to reinforcing public trust. As a practical example, governments may run public campaigns to clarify how existing EU initiatives, such as the AI Act, regulate generative systems, thereby reducing uncertainty and enhancing citizens’ trust.
Finally, the positive effect of being a Zoomer on GAI adoption suggests that this group should be considered a key catalyst for technology diffusion. Adoption strategies could benefit from a generationally segmented approach, where GENZ members act as “digital contagion agents” in educational, professional, or social environments. Simultaneously, it is essential to design targeted interventions for Millennials, who, though also digital natives, may be more skeptical or hold stronger ethical concerns. Social actors can contribute by fostering public debate that balances opportunities with risks, reinforcing that adoption is not an end in itself, but a process aligned with ethical standards and societal values.
Taken together, these findings underscore the necessity of a multi-stakeholder strategy integrating educational initiatives, transparent communication, robust regulation, and generationally differentiated approaches to foster inclusive, ethical, and socially sustainable adoption of GAI.

6. Conclusions

6.1. Main Findings

This study identifies the key factors influencing the adoption of GAI among young adults in Spain. Both the quantitative and configurational approaches highlighted the significant roles of USEFUL, INNOV, KNOWL, NREG, and GENZ in GAI use.
Correlational models show that attitudinal and cognitive variables carry more explanatory weight than classic sociodemographic factors such as gender or privacy concerns. The configurational analysis (fsQCA), for its part, highlights the existence of multiple causal paths toward GAI use or rejection, underscoring the complex and asymmetric nature of technology adoption. The inclusion of social variables such as SPER and NREG represents a relevant theoretical innovation by extending the classic subjective norm framework into ethical–social dimensions.

6.2. Study Limitations

Despite its contributions, this study has several limitations. First, it uses a non-probabilistic sample composed of Zoomers and Millennials, which, while allowing for deeper insight into how explanatory factors influence GAI use, limits the generalizability of results to older generations. Second, the cross-sectional nature of the design prevents strong causal inferences and limits temporal conclusions about adoption. Additionally, the use of self-reported measures (e.g., AI knowledge) may introduce social desirability bias or recall errors.
The operationalization of some variables (e.g., SPER or KNOWL) warrants further refinement in future research, for instance, through the development of more robust scales or triangulation with qualitative data. Finally, while the combined use of logistic regression and fsQCA captures complementary patterns, other approaches such as structural equation modeling or machine learning could further enrich the analysis.

6.3. Future Research Directions

Based on these findings, future studies could broaden the sample to include other age groups, professional profiles, or cultural contexts. Longitudinal studies would also be valuable in assessing how attitudes and behaviors toward GAI evolve over time, especially as these technologies become more integrated into daily life.
Another promising direction involves further exploring the ethical and social implications of GAI by incorporating more qualitative or mixed-methods approaches to examine the tensions between innovation, regulation, and social values. Studying the roles of mediators and moderators, such as digital literacy, trust in technology, and institutional context, would also be relevant in understanding the relationship between predictive variables and effective GAI use.
Finally, replicating this study in other countries would help in contrasting the findings and enrich the intercultural understanding of GAI adoption across diverse populations.

Author Contributions

Conceptualization, A.P.-P. and M.A.-O.; methodology, A.P.-P. and G.P.-C.; software, J.d.A.-S.; validation, M.A.-O.; formal analysis, A.P.-P.; investigation, A.P.-P. and M.A.-O.; resources, G.P.-C.; data curation, J.d.A.-S.; writing—original draft preparation, A.P.-P. and J.d.A.-S.; writing—review and editing, G.P.-C.; visualization, J.d.A.-S.; supervision, G.P.-C.; project administration, M.A.-O. and G.P.-C.; funding acquisition, J.d.A.-S. and M.A.-O. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by Telefonica and the Telefonica Chair on Smart Cities of the Universitat Rovira i Virgili and Universitat de Barcelona (Project Number: 42. DB.00.18.00).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data presented in this study are openly available at https://www.cis.es/es/detalle-ficha-estudio?origen=estudio&codEstudio=3495 (accessed 2 July 2025).

Acknowledgments

During the preparation of this manuscript, the authors used ChatGPT-4 in some instances for text translation from Spanish to English and in others for English editing (grammar, syntax, and vocabulary). The authors reviewed and revised all AI-generated content and take full responsibility for the final version of the manuscript.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
AI: Artificial intelligence
AVE: Average variance extracted
FEM: Being female
fsQCA: Fuzzy-set qualitative comparative analysis
GAI: Generative artificial intelligence
GAIU: Use of generative artificial intelligence
GENZ: Belonging to Generation Z
INNOV: Innovativeness
KNOWL: Knowledge
NREG: Need for regulation
PRI: Privacy concerns
SPER: Social performance
TPB: Theory of planned behavior
USEFUL: Usefulness

Figure 1. Conceptual ground.
Figure 2. Framework used to model configurational assessment.
Table 1. Profile of the sample used in this paper (N = 1533).

Factor | Number | Percentage
Sex
  Men | 653 | 42.60%
  Women | 880 | 57.40%
Age
  Up to 25 years | 294 | 19.18%
  From 26 to 35 years | 562 | 36.66%
  From 36 to 45 years | 677 | 44.16%
Nationality
  Spanish | 1363 | 88.91%
  Spanish and others | 81 | 5.28%
  Other | 89 | 5.81%
Academic degree
  Primary or less | 98 | 6.39%
  Secondary | 551 | 35.94%
  University | 884 | 57.66%
Monthly household income
  Less than €900 | 76 | 4.96%
  From €900 to €1800 | 321 | 20.94%
  From €1801 to €3000 | 493 | 32.16%
  From €3001 to €6000 | 504 | 32.88%
  From €6001 | 89 | 5.81%
  Not answered | 50 | 3.26%
Table 2. Measurement of the variables used in this study.

Output variable: frequency of using GAI (GAIU)
  GAIU1 = ChatGPT
  GAIU2 = Gemini
  GAIU3 = Microsoft Copilot
  GAIU4 = Perplexity
  GAIU5 = Other
  Response scale: Never = 0; Once = 1; Multiple times a year = 2; Multiple times a month = 3; Multiple times a week = 4; Daily = 5.

Input variables

Usefulness (USEFUL): GAI is useful for
  USEFUL1 = Labour market
  USEFUL2 = Environment
  USEFUL3 = Healthcare
  USEFUL4 = Economy
  Response scale: from disagreement = 1 to agreement = 3.

Innovativeness (INNOV): What is your degree of well-being in the following circumstances?
  INNOV1 = Undergoing surgery performed by a robot
  INNOV2 = Traveling in an autonomous automobile
  INNOV3 = Using a chatbot for customer service
  Response scale: from complete disagreement = 1 to complete agreement = 10.

Privacy risk (PRI): Is privacy on the internet important to you?
  Response scale: from not at all = 1 to a lot = 4.

Knowledge (KNOWL): Assess your knowledge of and familiarity with artificial intelligence.
  Response scale: from complete unawareness = 1 to complete awareness = 10.

Social performance (SPER): AI may promote
  SPER1 = Human analytical and reflective capacity
  SPER2 = The protection of people's rights
  SPER3 = Culture, values, and ways of life
  SPER4 = Humanity as a whole
  Response scale: harmful = 1; neutral = 2; beneficial = 3.

Need for regulation (NREG): I believe that
  NREG1 = The design, programming, and training of artificial intelligence systems should be subject to regulatory oversight.
  NREG2 = Companies and organizations must be required to disclose whenever artificial intelligence is used in place of human involvement.
  NREG3 = The application and deployment of artificial intelligence ought to be regulated.
  NREG4 = Artificial intelligence poses risks to the protection of intellectual property rights.
  NREG5 = The establishment of stronger ethical guidelines and legal safeguards for artificial intelligence is among the most critical challenges currently confronting humanity.
  Response scale: from full disagreement = 1 to full agreement = 5 (neutral value = 3).

Sex (FEM): Male = 0 and Female = 1.
Generation Z (GENZ): Determined based on age.
Table 3. Operationalization of variables to address RO1 and RO2.

Variables | Ordinal logit regression (RO1) | fsQCA (RO2)
Output variable: frequency of using GAI (GAIU) | The standardized value of GAIU = max{GAIU1, GAIU2, ..., GAIU5}, with categories GAIU ∈ {0, 1, 2, 3, 4, 5} | m_GAIU = GAIU/5
Input variables
Usefulness (USEFUL) | The standardized first principal component of its items | m_USEFUL equals 1 for values of USEFUL at or above the 90th percentile, 0 for values below the 10th percentile, and is linearly graded between the 10th and 90th percentiles
Innovativeness (INNOV) | The standardized first principal component of its items | m_INNOV follows the same 10th/90th-percentile calibration
Privacy risk (PRI) | The standardized value of the item | m_PRI follows the same 10th/90th-percentile calibration
Knowledge (KNOWL) | The standardized value of the item | m_KNOWL follows the same 10th/90th-percentile calibration
Social performance (SPER) | The standardized value of the first principal component of its items | m_SPER follows the same 10th/90th-percentile calibration
Need for regulation (NREG) | The standardized value of the first principal component of its items | m_NREG follows the same 10th/90th-percentile calibration
Sex (FEM) | Male = 0 and Female = 1 | m_FEM = FEM
Generation Z (GENZ) | Continuous variable in the [0, 1] range: being 25 years old or younger indicates full membership in Generation Z (value = 1), being older than 35 indicates full membership in Generation Y (value = 0), and membership is linearly graded for individuals between 25 and 35 years old | m_GENZ = GENZ
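To make the calibration rule in Table 3 concrete, the sketch below implements the 10th/90th-percentile linear membership function in Python. It is only an illustration of the scheme described above, not the authors' actual code; the function name and the simulated knowledge scores are assumptions introduced for the example.

```python
import numpy as np

def calibrate_linear(x, lower_pct=10, upper_pct=90):
    """Linear fuzzy-membership calibration between two percentiles.

    Values at or below the lower percentile receive membership 0, values at
    or above the upper percentile receive membership 1, and values in between
    are graded linearly, mirroring the rule described in Table 3.
    """
    x = np.asarray(x, dtype=float)
    lo, hi = np.percentile(x, [lower_pct, upper_pct])
    return np.clip((x - lo) / (hi - lo), 0.0, 1.0)

# Outcome membership: GAIU takes the ordinal categories 0..5, so it is
# rescaled directly, as specified in Table 3.
gaiu = np.array([0, 1, 2, 3, 4, 5])
m_gaiu = gaiu / 5

# Illustrative calibration of a 1-10 knowledge score (simulated data).
knowl = np.random.default_rng(0).integers(1, 11, 200)
m_knowl = calibrate_linear(knowl)
```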
Table 4. Summary statistics of study items.

Variables | Mean | SD | FL | CA | CR | AVE
Output variable (GAIU)
GAIU1 = ChatGPT | 2.40 | 1.85 | | | |
GAIU2 = Gemini | 0.57 | 1.23 | | | |
GAIU3 = Microsoft Copilot | 0.73 | 1.44 | | | |
GAIU4 = Perplexity | 0.16 | 0.662 | | | |
GAIU5 = Other | 0.93 | 1.55 | | | |
GAIU | 2.66 | 1.82 | | | |
Input variables
Usefulness (USEFUL) | | | | 0.631 | 0.781 | 47.70%
USEFUL1 | 1.76 | 0.832 | 0.713 | | |
USEFUL2 | 2.19 | 0.857 | 0.631 | | |
USEFUL3 | 2.59 | 0.703 | 0.655 | | |
USEFUL4 | 2.11 | 0.838 | 0.757 | | |
Innovativeness (INNOV) | | | | 0.612 | 0.791 | 56.90%
INNOV1 | 4.17 | 2.96 | 0.767 | | |
INNOV2 | 4.45 | 2.79 | 0.844 | | |
INNOV3 | 4.98 | 2.95 | 0.639 | | |
Privacy risk (PRI) | 3.75 | 0.511 | 1 | 1 | 1 | 100%
Knowledge (KNOWL) | 5.18 | 2.12 | 1 | 1 | 1 | 100%
Social performance (SPER) | | | | 0.668 | 0.803 | 51.40%
SPER1 | 1.84 | 0.889 | 0.614 | | |
SPER2 | 1.67 | 0.756 | 0.733 | | |
SPER3 | 1.62 | 0.754 | 0.741 | | |
SPER4 | 1.83 | 0.837 | 0.771 | | |
Need for regulation (NREG) | | | | 0.805 | 0.869 | 57.50%
NREG1 | 4.30 | 1.02 | 0.835 | | |
NREG2 | 4.46 | 0.902 | 0.681 | | |
NREG3 | 4.42 | 0.963 | 0.851 | | |
NREG4 | 3.90 | 1.19 | 0.642 | | |
NREG5 | 4.19 | 1.06 | 0.760 | | |
Note: (a) FL = factor loading on the first principal component, CA = Cronbach's alpha, CR = composite reliability, and AVE = average variance extracted; (b) for USEFUL, INNOV, PRI, SPER, and NREG, the hypothesis that the correlation matrix is an identity matrix was rejected by the Bartlett test with p < 0.0001.
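For readers who wish to reproduce the reliability indicators reported in Table 4, the following sketch computes Cronbach's alpha, composite reliability, and AVE from an item matrix, taking loadings from the first principal component of the correlation matrix as the FL column does. It is a minimal illustration under the usual formulas, not the authors' code, and the function name is hypothetical.

```python
import numpy as np

def reliability_summary(items):
    """Cronbach's alpha, composite reliability, and AVE for a block of items.

    `items` is an (n_respondents, n_items) array. Loadings are taken from the
    first principal component of the correlation matrix; CR and AVE follow
    their standard formulas based on those loadings.
    """
    items = np.asarray(items, dtype=float)
    k = items.shape[1]

    # Cronbach's alpha: k/(k-1) * (1 - sum of item variances / total variance)
    item_var = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    alpha = (k / (k - 1)) * (1 - item_var.sum() / total_var)

    # Loadings of the first principal component of the correlation matrix
    corr = np.corrcoef(items, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(corr)          # ascending eigenvalues
    loadings = np.abs(eigvecs[:, -1] * np.sqrt(eigvals[-1]))

    # Composite reliability and average variance extracted
    cr = loadings.sum() ** 2 / (loadings.sum() ** 2 + (1 - loadings ** 2).sum())
    ave = (loadings ** 2).mean()
    return alpha, cr, ave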
Table 5. Results of ordered logistic regression.

Factors | Coefficient | SD | OR | z-Statistic | p-Value | Acceptance
USEFUL | 0.379 | 0.060 | 1.461 | 6.286 | <0.0001 | H1 = Accepted
INNOV | 0.311 | 0.058 | 1.364 | 5.400 | <0.0001 | H2 = Accepted
PRI | −0.019 | 0.048 | 0.981 | −0.390 | 0.6965 | H3 = Rejected
KNOWL | 0.935 | 0.056 | 2.547 | 16.560 | <0.0001 | H4 = Accepted
SPER | 0.090 | 0.058 | 1.094 | 1.546 | 0.1221 | H5 = Rejected
NREG | −0.175 | 0.054 | 0.840 | −3.230 | 0.0012 | H6 = Accepted
FEM | −0.009 | 0.100 | 0.991 | −0.092 | 0.9265 | H7 = Rejected
GENZ | 0.553 | 0.117 | 1.739 | 4.714 | <0.0001 | H8 = Accepted
Note: (a) SD stands for standard deviation; OR = odds ratio. (b) McFadden's pseudo-R2 = 0.13099. The log-likelihood ratio statistic was 695.34 (p < 0.0001).
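An ordered logistic regression of this kind can be estimated with standard statistical software. The sketch below shows one possible way to do so in Python with statsmodels' OrderedModel; the synthetic data frame, variable names, and coefficient values used to generate it are assumptions introduced purely for illustration, not the study's data or code.

```python
import numpy as np
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

rng = np.random.default_rng(42)
n = 500
predictors = ["USEFUL", "INNOV", "PRI", "KNOWL", "SPER", "NREG", "FEM", "GENZ"]

# Synthetic stand-in for the prepared predictor scores (the study itself uses CIS data).
df = pd.DataFrame(rng.standard_normal((n, len(predictors))), columns=predictors)
latent = df @ np.array([0.4, 0.3, 0.0, 0.9, 0.1, -0.2, 0.0, 0.5]) + rng.logistic(size=n)
df["GAIU"] = pd.cut(latent, bins=6, labels=False)   # six ordered categories, 0..5

# Ordered logit with a logistic link; the estimated thresholds replace the intercept.
model = OrderedModel(df["GAIU"], df[predictors], distr="logit")
res = model.fit(method="bfgs", disp=False)

print(res.summary())
# The first eight parameters are the slope coefficients; exponentiating them
# gives odds ratios comparable to the OR column of Table 5.
print(np.exp(res.params[: len(predictors)]))
```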
Table 6. Results of necessity analysis.

Condition | Use of GAI: CONS | Use of GAI: COV | Non-use of GAI: CONS | Non-use of GAI: COV
USEFUL | 0.73 | 0.67 | 0.49 | 0.51
INNOV | 0.74 | 0.68 | 0.48 | 0.50
PRI | 0.52 | 0.76 | 0.48 | 0.81
KNOWL | 0.76 | 0.77 | 0.46 | 0.53
SPER | 0.72 | 0.62 | 0.49 | 0.48
NREG | 0.55 | 0.66 | 0.59 | 0.79
FEM | 0.46 | 0.37 | 0.54 | 0.50
GENZ | 0.68 | 0.44 | 0.44 | 0.32
¬USEFUL | 0.56 | 0.54 | 0.66 | 0.72
¬INNOV | 0.54 | 0.52 | 0.66 | 0.72
¬PRI | 0.58 | 0.24 | 0.42 | 0.19
¬KNOWL | 0.52 | 0.46 | 0.73 | 0.73
¬SPER | 0.55 | 0.56 | 0.63 | 0.72
¬NREG | 0.73 | 0.51 | 0.51 | 0.40
¬FEM | 0.59 | 0.63 | 0.41 | 0.50
¬GENZ | 0.52 | 0.64 | 0.54 | 0.77
Note: CONS stands for consistency and COV for coverage.
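The CONS and COV columns of Table 6 follow the standard fsQCA necessity formulas: consistency is the sum of the minimum of condition and outcome memberships divided by the sum of outcome memberships, and coverage divides the same overlap by the sum of condition memberships. As a brief illustration of those formulas (not the authors' implementation; the function name and variables are illustrative), the snippet below computes both indices from fuzzy membership scores, with negated conditions obtained as 1 minus the membership.

```python
import numpy as np

def necessity(condition, outcome):
    """Necessity consistency and coverage for fuzzy memberships in [0, 1].

    consistency = sum(min(x, y)) / sum(y)
    coverage    = sum(min(x, y)) / sum(x)
    """
    x = np.asarray(condition, dtype=float)
    y = np.asarray(outcome, dtype=float)
    overlap = np.minimum(x, y).sum()
    return overlap / y.sum(), overlap / x.sum()

# Example with illustrative membership vectors; the negated condition
# (e.g., ¬NREG) is simply 1 - membership.
m_nreg = np.array([0.2, 0.8, 0.5, 0.9])
m_gaiu = np.array([0.6, 0.4, 0.7, 0.3])
cons, cov = necessity(1 - m_nreg, m_gaiu)
```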
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
